Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Innovation the Amazon Way

GUEST POST from Greg Satell

In 2014, Stephenie Landry was finishing up her one-year stint as Technical Advisor to Jeff Wilke, who oversees Amazon’s worldwide consumer business. The Technical Advisor role is a mentorship program that allows high-potential executives to shadow a senior leader and learn first-hand. Her next assignment would define her career.

At most companies, an up-and-comer like Stephenie might be given a division to run or work on a big acquisition deal. Amazon, however, is a different kind of place. Landry wrote a memo outlining plans for a new service she’d been thinking about, Prime Now, which today offers one-hour delivery to customers in over 50 cities across 9 countries.

It’s no secret that Amazon is one of the world’s most innovative companies. Starting out as a niche service selling books online, it’s now not only a dominant retailer, but has pioneered new categories such as cloud computing and smart speakers. The key to its success is not any one process, but how it integrates a customer obsession deep within its culture and practice.

Starting With The Customer And Working Back

At the heart of how Amazon innovates is its six-page memo, which is required at the start of every new initiative. What makes it effective isn’t so much the structure of the document itself, but how it is used to embed a fanatical focus on the customer from day one. It’s something impressed upon Amazon employees early in their careers.

So the first step in developing Prime Now was to write a press release. Landry’s document was not only a description of the service, but also of how hypothetical customers would react to it. How did the service affect them? What surprised them about it? What concerns did they want addressed? The exercise forced her to internalize how Amazon customers would think and feel about Prime Now from the very start.

Next she wrote a series of FAQs anticipating the concerns of both customers and various stakeholders within the firm, like the CFO, operations people and the leadership of the Prime program. So Landry had to imagine what questions each would have, how any issues would be resolved and then explain things in clear, concise language.

All of this happens before the first meeting is held, a single line of code is written or an early prototype is built, because the company strongly believes that until you internalize the customer’s perspective, nothing else really matters. That’s key to how the company operates.

A Deeply Embedded Writing Culture

It’s no accident that the first step to develop a new product at Amazon is a memo rather than, say, a PowerPoint deck or a kickoff meeting. As Fareed Zakaria once put it, “Thinking and writing are inextricably intertwined. When I begin to write, I realize that my ‘thoughts’ are usually a jumble of half-baked, incoherent impulses strung together with gaping logical holes between them”.

So the company focuses on building writing skills early in an executive’s career. “Writing is a key part of our culture,” Landry told me. “I started writing press releases for smaller features and projects. One of my first was actually about packaging for diamond rings. Over years of practice and coaching, I got better at it.” Being able to write a good memo is also a key factor in advancement at Amazon. If you want to rise, you need to write and write well.

She also stressed to me the importance of brevity. “Keeping things concise and to the point forces you to think things through in a way that you wouldn’t otherwise. You can’t hide behind complexity, you actually have to work through it,” Landry said. Or, as another Amazon leader put it, “Perfection is achieved when there is nothing left to remove.”

Moreover, writing a memo isn’t a solo effort, but a collaborative process. Typically, executives spend a week or more sharing the document with colleagues, getting feedback, honing and tweaking it until every conceivable facet is deeply thought through.

Reinventing The Office Meeting

Another unique facet of Amazon’s culture is how meetings are run. In recent years, a common complaint throughout the corporate world is how the number of meetings has become so oppressive that it’s hard to get any work done. Research from MIT shows that executives spend an average of nearly 23 hours a week in meetings, up from less than 10 hours in 1960.

At Amazon, however, the six-page memo cuts down on the number of meetings that are called. If you have to spend a week writing a memo, you don’t just start sending out invites whenever the fancy strikes you. Similarly, the company’s practice of limiting attendance to roughly the number of people that can share two pizzas also promotes restraint.

Each meeting starts out with a 30-60 minute reading period in which everybody digests the memo. From there, all attendees are asked to share gut reactions — senior leaders typically speak last — and then delve into what might be missing, ask probing questions and drill down into any potential issues that may arise.

Subsequent meetings follow the same pattern to review the financials, hone the concept and review mockups as the team further refines ideas and assumptions. “It’s usually not one big piece of feedback that you get,” Landry stressed. “It is really all about the smaller questions, they help you get to a level of detail that really brings the idea to life.”

All of this may seem terribly cumbersome to fast-moving executives accustomed to zinging in and out of meetings all day, but you often need to go slow to move fast. In the case of Prime Now, the service took just 111 days to go from an idea on a piece of paper to a product launch in one zip code in Manhattan, and expanded quickly from there.

Co-evolving Culture And Practice

Every company innovates differently. Apple has a fanatical focus on design. IBM’s commitment to deep scientific research has enabled it to stay on the cutting edge and compete long after most of its competitors have fallen by the wayside. Google integrates a number of innovation strategies into a seamless whole.

What works for one company would likely not work for another, a fact that Amazon CEO Jeff Bezos highlighted in a recent letter to shareholders. “We never claim that our approach is the right one – just that it’s ours – and over the last two decades, we’ve collected a large group of like-minded people. Folks who find our approach energizing and meaningful,” he wrote.

The truth is that there is no one “true” path to innovation because innovation, at its core, is about solving problems and every enterprise chooses different problems to solve. While IBM might be happy to have its scientists work for decades on some arcane technology and Google gladly allows its employees to pursue pet projects, those things probably wouldn’t fly at Amazon.

However, the one thing that all great innovators have in common is that culture and practice are deeply intertwined. That’s what makes them so hard to copy. Anybody can write a six-page memo or start meetings with a reading period. It’s not those specific practices, but the commitment to the values they reflect, that has driven Amazon’s incredible success.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


Four Principles of Successful Digital Transformation

GUEST POST from Greg Satell

When Steve Jobs and Apple launched the Macintosh with great fanfare in 1984, it was to be only one step in a long journey that began with Douglas Engelbart’s Mother of All Demos and the development of the Alto at Xerox PARC more than a decade before. The Macintosh was, in many ways, the culmination of everything that came before.

Yet it was far from the end of the road. In fact, it wouldn’t be until the late 90s, after the rise of the Internet, that computers began to have a measurable effect on economic productivity. Until then, personal computers were mainly expensive devices used to automate secretarial work and for kids to play video games.

The truth is that innovation is never a single event, but a process of discovery, engineering and transformation. Yet what few realize is that it is the last part, transformation, that is often the hardest and the longest. In fact, it usually takes about 30 years to go from an initial discovery to a major impact on the world. Here’s what you can do to move things along.

1. Identify A Keystone Change

About a decade before the Macintosh, Xerox invented the Alto, which had many of the features that the Macintosh later became famous for, such as a graphical user interface, a mouse and a bitmapped screen. Yet while the Macintosh became legendary, the Alto never really got off the ground and is now remembered, if at all, as little more than a footnote.

The difference in outcomes had much less to do with technology than it had to do with vision. While Xerox had grand plans to create the “office of the future,” Steve Jobs and Apple merely wanted to create a cool gadget for middle class kids and enthusiasts. Sure, they were only using it to write term papers and play video games, but they were still buying.

In my book, Cascades, I call this a “keystone change,” based on something my friend Talia Milgrom-Elcott told me about ecosystems. Apparently, every ecosystem has one or two keystone species that it needs to thrive. Innovation works the same way: you first need to identify a keystone change before a transformation can begin.

One common mistake is to immediately seek out the largest addressable market for a new product or service. That’s a good idea for an established technology or product category, but when you have something that’s truly new and different, it’s much better to find a hair-on-fire use case: a problem that someone needs solved so badly that they are willing to put up with early glitches and other shortcomings.

2. Indoctrinate Values, Beliefs And Skills

A technology is more than just a collection of transistors and code, or even a set of procedures; it needs specific values and skills to make it successful. For example, to shift your business to the cloud, you need to give up control of your infrastructure, which requires a completely new mindset. That’s why so many digital transformations fail. You can’t create a technology shift without a mind shift as well.

For example, when the Institute for Healthcare Improvement began its quest to save 100,000 lives through evidence-based quality practices, it spent significant time preparing the ground beforehand, so that people understood the ethos of the movement. It also created “change kits” and made sure the new procedures were easy to implement to maximize adoption.

In a similar vein, Facebook requires that all new engineers, regardless of experience or expertise, go through its engineering bootcamp. “Beyond the typical training program, at our Bootcamp new engineers see first-hand, and are able to infer, our unique system of values,” Eddie Ruvinsky, an Engineering Director at the company, told me.

“We don’t do this so much through training manuals and PowerPoint decks,” he continued, “but through allowing them to solve real problems working with real people who are going to be their colleagues. We’re not trying to shovel our existing culture at them, but preparing them to shape our culture for the future.”

Before you can change actions, you must first transform values, beliefs and skills.

3. Break Through Higher Thresholds Of Resistance

Growing up in Iowa in the 1930s, Everett Rogers noticed something strange in his father’s behavior. Although his father loved electrical gadgets, he was hesitant to adopt hybrid seed corn, even though it had higher yields. In fact, his father only made the switch after he saw his neighbor’s hybrid crop thrive during a drought in 1936.

This became the basis for Rogers’ now-familiar diffusion of innovations theory, in which an idea first gets popular with a group of early adopters and then only later spreads to other people. Later, Geoffrey Moore explained that most innovations fail because they never cross the chasm from the early adopters to the mainstream.

Both theories have become popular, but are often misunderstood. Early adopters are not a specific personality type, but people with a low threshold of resistance to a particular idea or technology. Remember that Rogers’s father was an early adopter of electrical gadgets, but was more reticent with seed corn.

As network theory pioneer Duncan Watts explained to me, an idea propagates through “easily influenced people influencing other easily influenced people.” So it’s important to start a transformation with people who are already enthusiastic, work out the inevitable kinks and then move on to people slightly more reticent, once you’ve proved success in that earlier group.
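To make that mechanism concrete, here is a minimal Python sketch of a threshold cascade on a random network, in the spirit of Watts-style threshold models. Every parameter in it (network size, average degree, seed fraction) is an illustrative assumption, not a figure from the research cited above:

```python
import random

# A minimal sketch of a threshold cascade: each person adopts an idea
# once enough of their contacts have. All numbers are illustrative.

def simulate_adoption(n=100, avg_degree=6, seed_fraction=0.05):
    rng = random.Random(42)

    # Build a simple random network of colleagues.
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample(range(n), avg_degree):
            if i != j:
                neighbors[i].add(j)
                neighbors[j].add(i)

    # Each person's threshold of resistance: the fraction of their
    # contacts who must adopt before they will.
    threshold = {i: rng.random() for i in range(n)}

    # Seed the cascade with the most enthusiastic (lowest-threshold) people.
    by_resistance = sorted(range(n), key=threshold.get)
    adopters = set(by_resistance[: int(n * seed_fraction)])

    # Easily influenced people influencing other easily influenced people.
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in adopters or not neighbors[i]:
                continue
            if len(neighbors[i] & adopters) / len(neighbors[i]) >= threshold[i]:
                adopters.add(i)
                changed = True
    return len(adopters) / n

print(f"Final adoption: {simulate_adoption():.0%} of the network")
```

Seeding the lowest-threshold people first is the whole point: each new adopter raises the adopted fraction among his or her contacts, which is what eventually tips the more reticent majority.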

4. Focus On The Network, Not The Nodes

Perhaps the biggest mistake that organizations commit when trying to implement a new technology is to try to push everything from above, either through carrots, like financial incentives, or sticks, like disciplinary action for noncompliance. That may give senior management the satisfaction of “taking action,” but can often backfire.

People are much more willing to adopt something new if they feel like it’s their idea. The Institute for Healthcare Improvement, for example, designated selected institutions to act as “nodes” to help spread its movement. These weren’t watchdogs, but peers that were early adopters who could help their colleagues adopt the new procedures effectively.

In a similar vein, IBM has already taken significant steps to drive adoption of quantum computing, a technology that won’t be commercially available for years. First, it created the Q Experience, an early version of its technology available through the cloud for anyone to use. It has also set up its Q Network of early-adopter companies who are working with IBM to develop practical applications for quantum computing.

To date, tens of thousands have already run hundreds of thousands of experiments on Q Experience and about a dozen companies have joined the Q Network. So while there is still significant discovery and engineering to be done, the transformation is already well underway. It always pays to start early.

The truth is that transformation is always about the network, not the nodes. That’s why you need to identify a keystone change, indoctrinate the values and skills that will help you break through higher thresholds of resistance and continuously connect with a diverse set of stakeholders to drive change forward.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


DNA May Be the Next Frontier of Computing and Data Storage

GUEST POST from Greg Satell

Data, as many have noted, has become the new oil, meaning that we no longer regard the information we store as merely a cost of doing business, but a valuable asset and a potential source of competitive advantage. It has become the fuel that powers advanced technologies such as machine learning.

A problem that’s emerging, however, is that our ability to produce data is outstripping our ability to store it. In fact, an article in the journal Nature predicts that by 2040, data storage will consume 10–100 times the expected supply of microchip-grade silicon, using current technology. Clearly, we need a data storage breakthrough.

One potential solution is DNA, which is a million times more information dense than today’s flash drives. It also is more stable, more secure and uses minimal energy. The problem is that it is currently prohibitively expensive. However, a startup that has emerged out of MIT, called CATALOG, may have found the breakthrough we’re looking for: low-cost DNA storage.

The Makings Of A Scientist-Entrepreneur

Growing up in his native Korea, Hyunjun Park never planned on a career in business, much less the technology business, but expected to become a biologist. He graduated with honors from Seoul National University and then went on to earn a PhD from the University of Wisconsin. Later he joined Tim Lu’s lab at MIT, which specializes in synthetic biology.

In an earlier time, he would have followed an established career path, from PhD to post-doc to assistant professor to tenure. These days, however, there is a growing trend for graduate students to get an entrepreneurial education in parallel with the traditional scientific curriculum. Park, for example, participated in both the Wisconsin Entrepreneurial Bootcamp and Start MIT.

He also met a kindred spirit in Nate Roquet, a PhD candidate who, about to finish his thesis, had started thinking about what to do next. Inspired by a talk given by the Chief Science Officer at a seed fund, IndieBio, the two began to talk in earnest about starting a company together based on their work in synthetic biology.

As they batted around ideas, the subject of DNA storage came up. By this time, the advantages of the technology were well known, but it was not considered practical, costing hundreds of thousands of dollars to store just a few hundred megabytes of data. However, the two did some back-of-the-envelope calculations and became convinced they could do it far more cheaply.

Moving From Idea To Product

The basic concept of DNA storage is simple. Essentially, you just encode the ones and zeros of digital code into the A’s, T’s, G’s and C’s of genetic code. However, stringing those genetic molecules together is tedious and expensive. The idea that Park and Roquet came up with was to use enzymes to alter strands of DNA, rather than building them up piece by piece.
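As an illustration, here is a minimal sketch of the textbook encoding, which maps every two bits to one of the four bases. It shows the basic concept only; it is not CATALOG’s enzymatic method, which modifies and combines existing strands rather than writing bases one by one:

```python
# A minimal sketch of the textbook two-bits-per-base encoding. This is
# the conceptual baseline, not CATALOG's enzymatic approach.

ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def to_dna(data: bytes) -> str:
    # Turn each byte into 8 bits, then each bit pair into a base.
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    # Reverse the mapping: bases back to bit pairs, bit pairs to bytes.
    bits = "".join(DECODE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = to_dna(b"Hi")        # -> "CAGACGGC"
assert from_dna(strand) == b"Hi"
print(strand)
```

The mapping itself is trivial; the expense comes from the chemistry of actually synthesizing the physical strand base by base, which is exactly what Park and Roquet set out to avoid.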

Contrary to popular opinion, most traditional venture capital firms, such as those that populate Sand Hill Road in Silicon Valley, don’t invest in ideas. They invest in products. IndieBio, however, isn’t your typical investor. They give only a small amount of seed capital, but offer other services, such as wet labs, entrepreneurial training and scientific mentorship. Park and Roquet reached out to them and found some interest.

“We invest in problems, not necessarily solutions,” Arvind Gupta, Founder at IndieBio, told me. “Here the problem is massive. How do you keep the world’s knowledge safe? We know DNA can last thousands of years and can be replicated very inexpensively. That’s a really big deal and Hyunjun and Nate’s approach was incredibly exciting.”

Once the pair entered IndieBio’s four-month program, they found both promise and disappointment. Their approach could dramatically reduce the cost of storing information in DNA, but not nearly quickly enough to build a commercially viable product. They would need to pivot if they were going to turn their idea into an actual business.

Scaling To Market

One flaw in CATALOG’s approach was that the process was too complex to scale. Yet they found that by starting with just a few different DNA strands and attaching them together, much like a printing press pre-arranges words in a book, they could come up with something that was not only scalable, but commercially viable from a cost perspective.
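To see why the printing-press approach scales, here is a toy back-of-the-envelope calculation. The library size and molecule length are invented for illustration; the article doesn’t disclose CATALOG’s actual parameters:

```python
import math

# A toy illustration (my own numbers, not CATALOG's published scheme) of
# why combining premade strands scales better than base-by-base writing.

library_size = 100   # distinct premade strands, synthesized once (assumed)
positions = 20       # premade strands joined per molecule (assumed)

# Each position can hold any strand from the library, so one molecule
# can distinguish library_size ** positions different messages.
bits_per_molecule = positions * math.log2(library_size)
print(f"{bits_per_molecule:.0f} bits per molecule")  # ~133 bits

# The library is synthesized once up front; storing more data then only
# requires cheap joining steps, not new custom synthesis for every bit.
```

The point isn’t the exact numbers but the shape of the cost curve: the expensive synthesis is paid once and amortized across everything you ever store.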

The second problem was more thorny. Working with enzymes is incredibly labor intensive and, being biologists, Park and Roquet didn’t have the mechanical engineering expertise to make their process feasible. Fortunately, an advisor, Darren Link, connected the pair to Cambridge Consultants, an innovation consultancy that could help them.

“We started looking at the problem and it seemed that, on paper at least, we could make it work,” Richard Hammond, Technology Director and Head of Synthetic Biology at Cambridge Consultants, told me. “Now we’re about halfway through making the first prototype and we believe we can make it work and scale it significantly. We’re increasingly confident that we can solve the core technical challenges.”

In 2018 CATALOG introduced the world to Shannon, its prototype DNA writer. In 2022 CATALOG announced its DNA computation work at the HPC User Forum. But CATALOG isn’t without competition in the space. For example, Western Digital’s LTO-9, from 2022, can store 18 TB per cartridge. CATALOG, for its part, is partnering with Seagate “on several initiatives to advance scalable and automated DNA-based storage and computation platforms, including making DNA-based platforms up to 1000 times smaller.” That should make the process competitive for archival storage, such as medical and legal records, as well as storing film databases at movie studios.

“I think the fact that we’re inventing a completely new medium for data storage is really exciting,” Park told me. “I don’t think that we know yet what the true potential is because the biggest use cases probably don’t exist yet. What I do know is that our demand for data storage will soon outstrip our supply and we are thrilled about the possibility of solving that problem.”

Going Beyond Digital

A generation ago, the task of improving data storage would have been seen as solely a computer science problem. Yet today, the digital era is ending and we’re going to have to look further and wider for solutions to the problems we face. With the vast improvement in genomics, which is far outpacing Moore’s law these days, we can expect biology to increasingly play a role.

“Traditionally, information technology has been strictly the realm of electrical engineers, physicists and coders,” Gupta of IndieBio told me. “What we’re increasingly finding is that biology, which has been honed for millions of years by evolution, can often point the way to solutions that are more robust and potentially, much cheaper and more efficient.”

Yet this phenomenon goes far beyond biology. We’re also seeing similar accelerations in other fields, such as materials science and space-related technologies. We’re also seeing a new breed of investors, like IndieBio, that focus specifically on scientist entrepreneurs. “I consider myself a product of the growing ecosystem for scientific entrepreneurs at universities and in the investor community,” Park told me.

Make no mistake. We are entering a new era of innovation and the traditional Silicon Valley approach will not get us where we need to go. Instead, we need to forge greater collaboration between the scientific community, the investor community and government agencies to solve problems that are increasingly complex and interdisciplinary.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


How Tribalism Can Kill Innovation

GUEST POST from Greg Satell

While history tends to single out individuals, the truth is that when you look behind the story of any heroic leader, what you find is a network of loyal supporters, active collaborators and outside facilitators that are behind any great achievement. Nobody accomplishes anything significant alone.

That’s probably why it’s become fashionable for pundits to encourage us to “find our tribe,” a network of like-minded people who share our ambitions. Don’t listen to them. The truth is that great things are achieved not by taking comfort from your tribe, but by going beyond it and reaching out to those who aren’t of like mind.

The problem with focusing too much on your tribe is that those people tend to think the same way you do. They frequent the same places, watch the same TED talks and read the same blogs. That may be great for giving you some comfort and confidence, but it also acts as an echo chamber that will reinforce flawed assumptions and lead you down a false path.

The Problem With Closed Networks

In 2005, a team of researchers decided to study why some Broadway plays become hits and others flop. They looked at all the usual factors, such as production budget, marketing budget and the track record of the director, but found that the most important factor was the informal network of relationships among the cast and crew.

If no one had ever worked together before, both financial and creative results tended to be poor. However, if the networks among the cast and crew became too dense—for all intents and purposes, becoming a tribe—performance also suffered. It was the teams that had elements of both, strong ties and new blood, that had the greatest success.

The same effect has been found elsewhere. In studies of star engineers at Bell Labs, the German automotive industry and currency traders, it has been shown that tightly clustered groups, combined with long-range “weak ties” that allow information to flow freely among disparate clusters of activity, consistently outperform closed networks of like-minded people.

Just as we need to invest in building strong, trustful relationships, we also need to go beyond our comfort zone and seek out new connections. It’s far too easy to hide in a tribe.

The Discomfort of Diversity

While studies show that closed networks lead to worse performance, it has long been established that diversity improves performance. Researchers at the University of Michigan found that diverse groups can solve problems better than a more homogeneous team of greater objective ability. Another study that simulated markets showed that ethnic diversity deflated asset bubbles.

While the studies noted above merely simulate diversity in a controlled setting, there is also evidence from the real world that diversity produces better outcomes. A McKinsey report that covered 366 public companies in a variety of countries and industries found that those which were more ethnically and gender diverse performed significantly better than others.

Yet diversity also has a downside. In Political Tribes, Yale Professor Amy Chua notes that we are hardwired to be suspicious of others. For example, in a study where young children were randomly assigned to red or blue groups, they preferred pictures of other kids wearing t-shirts that reflected their own group. A study of adults had similar findings.

So you can see the attraction of tribes. We feel uncomfortable with people who we perceive as different. Surrounding ourselves with people who see things the way we do, on the other hand, makes us feel confident and powerful.

Mixing With The Heathens

Growing up in Iowa in the 1930s, Everett Rogers noticed something strange in his father’s behavior. Although his father loved electrical gadgets, he was hesitant to adopt hybrid seed corn, even though it had higher yields. In fact, his father only made the switch after he saw his neighbor’s hybrid crop thrive during a drought in 1936.

This became the inspiration for Rogers’ now-familiar diffusion of innovations theory, in which an idea first gets popular with a group of early adopters and then only later spreads to other people. Geoffrey Moore later pointed out that most innovations fail because they never cross the chasm from the early adopters to the mainstream.

A study done by researchers at Kellogg and Stanford explains why. They put together groups of college students to solve a murder mystery. The groups made up of students from the same sorority or fraternity felt more confident and successful, even though they performed worse on the task than integrated groups that experienced more conflict, uncertainty and doubt.

That’s the problem with staying in your tribe. Sure, it feels great to have your ideas supported and reinforced by people you like and respect, but they are doing so because they already believe the same things that you do. To actually achieve something worthwhile, however, you have to go beyond preaching to the choir and start mixing with the heathens.

Do You Want To Make A Point Or Do You Want To Make A Difference?

In my book, Cascades, I cover a wide range of movements. Some, like the civil rights movement and the campaign to save 100,000 lives, succeeded brilliantly. Others, like Occupy and the technology companies along Boston’s Route 128, failed miserably. One thing I found is that many movements that ultimately succeeded had initially foundered because they failed to go beyond their tribe.

Here’s what Srdja Popović, who helped lead the Otpor movement that overthrew the brutal regime of Slobodan Milošević in 2000, told me about the initial student protests in 1992.

These were very ‘Occupy’ type of protests where we occupied the five biggest universities and lived there in our little islands of common sense with intellectuals and rock bands while the rest of the country was more or less supportive of Milošević’s idea. And this is where we began to understand that staying in your little blurb of common sense was not going to save the country.

In a similar vein, Nelson Mandela started out as an angry nationalist, but eventually learned that to get results, he would have to actively collaborate with others that didn’t quite see things the same way he did. In Poland, Solidarity’s first actions were disastrous, because they only involved workers. It was only through a later alliance between workers, intellectuals and the church that the movement ultimately succeeded.

Today, both America and the world have become increasingly tribal and it’s easy to retreat into what Srdja calls “your little blurb of common sense.” You can state your beliefs, make your point and see the heads nod around you. You can live in comfort, knowing that any voices of dissent will be quickly shouted down, as you self-righteously feel they should be.

However, at some point, you will have to decide if you want to make a point or whether you want to make a difference. To achieve anything worthwhile, you have to go beyond your tribe.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


Preparing the Next Generation for a Post-Digital Age

GUEST POST from Greg Satell

An education is supposed to prepare you for the future. Traditionally, that meant learning certain facts and skills, like when Columbus discovered America or how to do long division. Today, curricula have shifted to focus on a more global and digital world, like cultural history, basic computer skills and writing code.

Yet the challenges our kids will face will be much different from the ones we faced growing up, and many of the things a typical student learns in school today will no longer be relevant by the time he or she graduates college. In fact, a study at the University of Oxford found that 47% of today’s jobs are at risk of being eliminated over the next 20 years.

In 10 or 20 years, much of what we “know” about the world will no longer be true. The computers of the future will not be digital. Software code itself is disappearing, or at least becoming far less relevant. Many of what are considered good jobs today will be either automated or devalued. We need to rethink how we prepare our kids for the world to come.

Understanding Systems

The subjects we learned in school were mostly static. 2+2 always equaled 4 and Columbus always discovered America in 1492. Interpretations may have differed from place to place and evolved over time, but we were taught that the world was based on certain facts and we were evaluated on the basis of knowing them.

Yet as the complexity theorist Sam Arbesman has pointed out, facts have a half-life and, as the accumulation of knowledge accelerates, those half-lives are shrinking. For example, when we learned computer programming in school, it was usually in BASIC, a now mostly defunct language. Today, Python is the most popular language, but likely won’t be a decade from now.

Computers themselves will be very different as well, based less on the digital code of ones and zeros and more on quantum laws and the human brain. We will likely store less information on silicon and more in DNA. There’s no way to teach kids how these things will work because nobody, not even experts, is quite sure yet.

So kids today need to learn less about how things are today and more about the systems future technologies will be based on, such as quantum mechanics, genetics and the logic of code. One thing economists have consistently found is that it is routine jobs that are most likely to be automated. The best way to prepare for the future is to develop the ability to learn and adapt.

Applying Empathy And Design Skills

While machines are taking over many high level tasks, such as medical analysis and legal research, there are some things they will never do. For example, a computer will never strike out in a Little League game, have its heart broken or see its child born. So it is very unlikely, if not impossible, that a machine will be able to relate to a human like other humans can.

That absence of empathy makes it hard for machines to design products and processes that will maximize enjoyment and utility for humans. So design skills are likely to be in high demand for decades to come as basic production and analytical processes are increasingly automated.

We’ve already seen this process take place with regard to the Internet. In the early days, it was a very technical field. You had to be a highly skilled engineer to make a website work. Today, however, building a website is something any fairly intelligent high school student can do and much of the value has shifted to front-end tasks, like designing the user experience.

With the rise of artificial intelligence and virtual reality, our experiences with technology will become far more immersive and that will increase the need for good design. For example, conversational analysts (yes, that’s a real job) are working with designers to create conversational intelligence for voice interfaces and, clearly, virtual reality will be much more design intensive than video ever was.

The Ability To Communicate Complex Ideas

Much of the recent emphasis in education has been around STEM subjects (science, technology, engineering and math) and proficiency in those areas is certainly important for today’s students to understand the world around them. However, many STEM graduates are finding it difficult to find good jobs.

On the other hand, the ability to communicate ideas effectively is becoming a highly prized skill. Consider Amazon, one of the most innovative and technically proficient organizations on the planet. A key factor in its success is its writing culture. The company is so fanatical about the ability to communicate that developing good writing skills is essential to building a successful career there.

Think about Amazon’s business and it becomes clear why. Sure, it employs highly adept engineers, but to create a truly superior product those people need to collaborate closely with designers, marketers, business development executives and others. To coordinate all that activity and keep everybody focused on delivering a specific experience to the customer, communication needs to be clear and coherent.

So while learning technical subjects like math and science is always a good idea, studying things like literature, history and philosophy is just as important.

Collaborating And Working In Teams

Traditionally, school work has been based on individual accomplishment. You were supposed to study at home, come in prepared and take your test without help. If you looked at your friend’s paper, it was called cheating and you got in a lot of trouble for it. We were taught to be accountable for achievements on our own merits.

Yet consider how the nature of work has changed, even in highly technical fields. In 1920, most scientific papers were written by sole authors, but by 1950 that had changed and co-authorship became the norm. Today, the average paper has four times as many authors as it did then and the work being done is far more interdisciplinary and done at greater distances than in the past.

Make no mistake. The high value work today is being done in teams and that will only increase as more jobs become automated. The jobs of the future will not depend as much on knowing facts or crunching numbers, but will involve humans collaborating with other humans to design work for machines. Collaboration will increasingly be a competitive advantage.

That’s why we need to pay attention not just to how our kids work and achieve academically, but how they play, resolve conflicts and make others feel supported and empowered. The truth is that value has shifted from cognitive skills to social skills. As kids will increasingly be able to learn complex subjects through technology, the most important class may well be recess.

Perhaps most of all, we need to be honest with ourselves and make peace with the fact that our kids’ educational experience will not — and should not — mirror our own. The world they will need to face will be far more complex and more difficult to navigate than anything we could imagine back in the days when Fast Times at Ridgemont High was still popular.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Change the World With a Keystone Change

GUEST POST from Greg Satell

On December 31st, 1929, the Indian National Congress, the foremost nationalist group on the subcontinent, issued a Declaration of Purna Swaraj, or complete independence from British rule. It also announced a campaign of civil disobedience, but no one had any idea what form it should take. That task fell to Mohandas Gandhi.

The Mahatma returned to his ashram to contemplate next steps. After his efforts to organize against the Rowlatt Act a decade earlier ended in disaster, he struggled to find a way forward. As he told a friend at the time, “I am furiously thinking day and night and I do not see a way out of the darkness.”

Finally, he decided he would march for salt, which impressed almost no one. It seemed to be an incredibly inconsequential issue, especially considering what was at stake. Yet what few realized at the time was that he had identified a keystone change that would break the logjam and the British hold on power. Today the Salt March is known as Gandhi’s greatest triumph.

A Tangible And Achievable Goal

One of Gandhi’s biggest challenges was to connect the lofty goals and high-minded rhetoric of the elites who led the Indian National Congress with the concerns of everyday Indians. These destitute masses didn’t much care whether they were ruled by British elites or Indian elites and, to them, abstract concepts like “freedom” and “independence” meant little.

Salt, on the other hand, was tangible for everyone, especially the poorest Indians, and the British salt laws provided a clear and actionable target. All you had to do to defy them was boil seawater to produce salt. What at first seemed trivial became a powerful call for mass action.

In my book, Cascades, I found that every successful movement for change, whether it was a corporate turnaround, a social initiative or a political uprising, began with a keystone change like Gandhi’s salt protests. To achieve a grand vision, you always have to start somewhere and the best place to begin is with a clear and achievable goal.

In some cases, as with voting rights in the women’s movement in the 19th century and, more recently, marriage equality for the LGBT movement, identifying a keystone change took decades. In other cases, such as improving worker safety in Paul O’Neil’s turnaround of Alcoa or a campaign to save 100,000 lives in Don Berwick’s quest to improve quality in medical care, the keystone change was part of the initial plan.

Involving Multiple Stakeholders

The concept of Indian independence raised a number of thorny issues, many of which have not been resolved to this day. Tensions between majority Hindus and minority Muslims created suspicions about how power would be structured after British rule. Similarly, coordinating action between caste Hindus and “untouchables” was riddled with difficulty. Christians and Sikhs had their own concerns.

Yet anger about the Salt Laws helped bring all of these disparate groups together. It was clear from the outset that everyone would benefit from a repeal. Also, because participating was easy—again, it was as simple as boiling sea water—little coordination was needed. Most of all, being involved in a collective effort helped to ease tensions somewhat.

Wyeth Pharmaceuticals took a similar approach to its quest to reduce costs by 25% through implementing lean manufacturing methods at its factories. Much like Gandhi, the executives understood that transforming the behaviors of 20,000 employees across 16 large facilities, most of whom were skeptical of the change, was no simple task.

So they started with one process — factory changeovers — and cut in half the time it took to switch from producing one product to another. “That changed assumptions of what was possible,” an advisor who worked on the project told me. “It allowed us to implement metrics, improve collaboration and trained the supervisor to reimagine her perceived role from being a taskmaster that pushed people to work harder to a coach that enables improved performance.”

Breaking Through Higher Thresholds Of Resistance

By now most people are familiar with the diffusion of innovations theory developed by Everett Rogers. A new idea first gains traction among a small group of innovators and early adopters, then later spreads to the mainstream. Some have suggested that early adopters act as “influentials” or “opinion leaders” that spur an idea forward, but that is largely a myth.

What is much closer to the truth is that we all have different thresholds of resistance to a new idea and these thresholds are highly contextual. For example, as a Philadelphia native, I will enthusiastically try out a new cheesesteak place, but have kept the same hairstyle for 30 years. My wife, on the other hand, is much more adventurous with hairstyles than she is with cheesesteaks.

Yet we are all influenced by those around us. So if our friends and neighbors start raving about a cheesesteak, she might give it a try and may even tell people about it. Or, as network theory pioneer Duncan Watts explained to me, an idea propagates through “easily influenced people influencing other easily influenced people.”

That’s how transformative ideas gain momentum and it’s easy to see how a keystone change can help move the process along. By starting out with a tangible goal, such as protesting the salt tax or reducing changeover time at a single factory, you can focus your efforts on people who have lower thresholds of resistance and they, in turn, can help the idea spread to others who are more reticent.

Paving The Way For Future Change

Perhaps most importantly, a keystone change paves the way for larger changes later on. Gandhi’s Salt March showed that the British Raj could be defied. Voting rights for women and, later, blacks, allowed them to leverage their newfound power at the polls. Reducing changeover time showed how similar results could be achieved in other facets of manufacturing. The 100,000 lives campaign helped spur a quality movement in healthcare.

None of these things happened all at once, but achieving a keystone change showed what was possible, attracted early adopters to the cause and helped give them a basis for convincing others that even more could be achieved. As one of Gandhi’s followers remarked, before the Salt March, the British “were all sahibs and we were obeying. No more after that.”

Another benefit of a keystone change is that it is much less likely to provoke a backlash than a wider, sweeping vision. One of the reasons the Salt March was successful is that the British didn’t actually gain that much revenue from the tax on salt, so they were slow to react to it. The 100,000 lives campaign involved only six relatively easy-to-implement procedures, rather than pushing hospitals to pursue wholesale change all at once.

So while it’s important to dream big and have lofty goals, the first step is always a keystone change. That’s how you first build a sense of shared purpose and provide a platform from which a movement for change can spread. Before the Salt March, Gandhi was considered by many to be a Hindu nationalist. It was only afterward that he truly became an inspiration to all Indian people and many others around the world.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pexels


Why Quiet Geniuses Excel at Breakthroughs

GUEST POST from Greg Satell

When you think of breakthrough innovation, someone like Steve Jobs, Jeff Bezos or Elon Musk often comes to mind. Charismatic and often temperamental, people like these seem to have a knack for creating the next big thing and building great businesses on top of it. They change the world in ways that few can.

Yet what often goes unnoticed is that great entrepreneurs build their empires on the discoveries of others. Steve Jobs didn’t invent the computer or the mobile phone any more than Jeff Bezos discovered e-commerce or Elon Musk dreamed up electric cars. Those things were created by scientists and engineers who came long before.

In researching my book, Mapping Innovation, I got to know many who truly helped create the future and I found them to be different than most people, but not in a way that you’d expect. While all were smart and hardworking, the most common trait among them was their quiet generosity and that can teach us a lot about how innovation really works.

How Jim Allison Figured it All Out

At least in appearance, Jim Allison is a far cry from how you would normally picture a genius. Often disheveled, with a scruffy beard, he kind of mumbles out a slow Texas drawl that belies his amazingly quick mind. Unassuming almost to a fault, when I asked him about his accomplishments he just said, “well, I always did like figuring things out.”

When Jim was finishing up graduate school, scientists had just discovered T-cells and he told me that he was fascinated by how these things could zip around your body and kill things for you, but not actually hurt you. The thing was, nobody had the faintest idea how it all worked. So Jim decided to become an immunologist and devote his life to figuring it all out.

Over the next few decades, he and his colleagues at other labs did indeed do much to figure it out. They found one receptor, called B-7, which acts like an ignition switch that initiates the immune response, another, CD-28, that acts like a gas pedal and revs things up into high gear and a third, called CTLA-4, that puts on the brakes so things don’t spin out of control.

Jim played a part in all of this, but his big breakthrough came from the work of another scientist in his lab, which made him suspect that the problem with cancer wasn’t that our immune system can’t fight it, but that it puts the brakes on too soon. He thought that if he could devise a way to pull those brakes off, we could cure cancer in a new and different way.

As it turned out, Jim was right. Today, cancer immunotherapy has become a major field unto itself and, in October 2018, he won the Nobel Prize for his discovery of it. Yet the truth is that it wasn’t one major breakthrough, but a decades-long process of slowly putting the pieces together that made it all possible.

How Gary Starkweather Went From Blowup To Breakthrough

Gary Starkweather is every bit as quiet and unassuming as Jim Allison. Yet when I talked to him a few years ago, I could still hear the anger in his voice as he told me about an incident that happened almost 50 years before. In the late 60s, Gary had an idea to invent a new kind of printer, but his boss at Xerox was thwarting his efforts.

At the time, Gary was one of the few experts in the emerging field of laser optics, so there weren’t many others who could understand his work, much less how it could be applied to the still obscure field of computers. His boss was, in fact, so hostile to Gary’s project that he threatened to fire anyone who worked with him on it.

Furious, the normally mild mannered Gary went over his boss’s head. He walked into the Senior Vice President’s office and threatened, “Do you want me to do this for you or for someone else?” For the stuffy, hierarchical culture of Xerox, it was outrageous behavior, but as luck would have it, the stunt paid off. News of Gary’s work made it across the country to the fledgling computer lab that Xerox had recently established in California, the Palo Alto Research Center (PARC).

Gary thrived in the freewheeling, collaborative culture at PARC. The researchers there had developed a graphical technology called bitmapping, but had no way to print the images out until he showed up. His development of the laser printer was not only a breakthrough in its own right, but with the decline of Xerox’s copier business, it actually saved the company.

The Wild Ideas Of Charlie Bennett

Charlie Bennett is one of those unusual minds that amazes everyone he meets. He told me that when he was growing up in the quiet Westchester village of Croton-on-Hudson he was a “geek before geeks were cool.” While the other kids were playing sports and trading baseball cards, what really inspired Charlie was Watson and Crick’s discovery of the structure of DNA.

So he went to college and majored in biochemistry and then went on to Harvard to do his graduate work, where he served as James Watson’s teaching assistant. Yet it was an elective course he took on the theory of computation that would change his fate. That’s where he first encountered the concept of a Turing Machine and he was amazed how similar it was to DNA.

So Charlie never became a geneticist, but went to work for IBM as a research scientist. It proved to be just the kind of place where a mind like his could run free, discussing wild ideas like quantum cryptography with colleagues around the globe. It was one of those discussions, with Gilles Brassard, that led to his major breakthrough.

What the two discussed was the wildest idea yet. They proposed to transfer information by entangling photons, something that Einstein had derisively called “spooky action at a distance” and was adamant couldn’t happen. Yet the two put a team together and, in 1993, published the landmark paper showing that quantum teleportation was possible.

That, in turn, led Charlie just a few months later to write down his four laws of quantum information, which formed the basis for IBM’s quantum computing program. Today, in his eighties, Charlie is semi-retired, but still goes into the labs at IBM research to quietly discuss wild ideas with the younger scientists, such as the quantum internet that’s continuing to emerge now.

For Innovation, Generosity Is A Competitive Advantage

My conversations with Jim, Gary, Charlie and many others made an impression on me. They were all giants in their fields (although Jim hadn’t won his Nobel yet) and I was a bit intimidated talking to them. Yet I found them to be some of the kindest, most generous people I ever met. Often, they seemed as interested in me as I was in them.

In fact, the behavior was so consistent that I figured it couldn’t be an accident. So I researched the matter further and found a number of studies that helped explain it. One, at Bell Labs, found that star engineers had a knack for “knowing who knows.” Another, at the design firm IDEO, found that great innovators essentially act as “knowledge brokers.”

A third study helps explain why knowledge brokering is so important. Analyzing 17.9 million papers, the researchers found that the most highly cited work tended to be mostly rooted within a traditional field, with just a smidgen of insight taken from some unconventional place. Breakthrough creativity occurs at the nexus of conventionality and novelty.

So as it turns out, generosity is often a competitive advantage for innovators. By actively sharing their ideas, they build up larger networks of people willing to share with them. That makes it that much more likely that they will come across that random piece of information and insight that will help them crack a really tough problem.

So if you want to find a truly great innovator, don’t look for the ones who make the biggest headlines or are most inspiring on stage. Look for those who spend their time a bit off to the side, sharing ideas, supporting others and quietly pursuing a path that few others are even aware of.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Value Doesn’t Disappear, It Shifts From One Place to Another

GUEST POST from Greg Satell

A few years ago, I published an article about no-code software platforms, which was very well received. Before long, however, I began to get angry — and sometimes downright nasty — comments from software engineers who were horrified by the notion that you can produce software without actually understanding the code behind it.

Of course, no-code platforms don’t obviate the need for software engineers, but rather automate basic tasks so that amateurs can design applications by themselves. These platforms are, necessarily, limited but can increase productivity dramatically and help line managers customize technology to fit the task at hand.

Similarly, when FORTRAN, the first real computer language, was invented, many who wrote machine code objected, much like the software engineers did to my article. Yet FORTRAN didn’t destroy computer programming; it democratized and expanded it. The truth is that value never disappears. It just shifts to another place and that’s what we need to learn to focus on.

Why Robots Aren’t Taking Our Jobs

Ever since the financial crisis we’ve been hearing about robots taking our jobs. Yet just the opposite seems to be happening. In fact, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive. So what’s going on?

The fact is that automation doesn’t actually replace jobs; it replaces tasks. To understand how this works, think about the last time you walked into a highly automated Apple store, which actually employs more people than a typical retail location of the same size. Those employees aren’t there to ring up your purchase any faster, but to do all the things a machine can’t, like answer your questions and solve your problems.

A few years ago I came across an even more stark example when I asked Vijay Mehta, Chief Innovation Officer for Consumer Information Services at Experian, about the effect that shifting to the cloud had on his firm’s business. The first-order effect was simple: the company needed far fewer technicians to manage its infrastructure and those people could easily be laid off.

Yet they weren’t. Instead, Experian shifted much of that talent and expertise to focus on creating new services for its customers. One of these, a cloud-enabled “data on demand” platform called Ascend, has since become one of the $4 billion company’s most profitable products.

Now think of what would have happened if Experian had merely seen cloud technology as an opportunity to cut costs. Sure, it would have fattened its profit margins temporarily, but as its competitors moved to the cloud that advantage would soon have eroded and, without new products, its business would have declined.

The Outsourcing Dilemma

Another source of disruption in the job market has been outsourcing. While no one seemed to notice when large multinational corporations were outsourcing blue-collar jobs to low-cost countries, now so-called “gig economy” sites like Upwork and Fiverr are doing the same thing for white-collar professionals like graphic designers and web developers.

So you would expect to see a high degree of unemployment in those job categories, right? Actually, no. The Bureau of Labor Statistics expects demand for graphic designers to increase 4% by 2026 and for web developers to increase 15%. The site Mashable recently named web development one of 8 skills you need to get hired in today’s economy.

It’s not hard to see why. While it is true that a skilled professional in a low-cost country can do small projects of the same caliber as those in high-cost countries, those tasks do not constitute a whole job. For large, important projects, professionals must collaborate closely to solve complex problems. That’s hard to do through text messages on a website.

So while it’s true that many tasks are being outsourced, the number of jobs has actually increased. Just like with automation, outsourcing doesn’t make value disappear, but shifts it somewhere else.

The Social Impact

None of this is to say that the effects of technology and globalization haven’t been real. While it’s fine to speak analytically about value shifting here and there, if a task you spent years learning to do well becomes devalued, you take it hard. Economists have also found evidence that disruptions in the job market have contributed to political polarization.

The most obvious thing to do is retrain workers who have been displaced, but it turns out that’s not so simple. In Janesville, a book chronicling a small town’s struggle to recover from the closing of a GM plant, author Amy Goldstein found that the workers who sought retraining actually did worse than those who didn’t.

When someone loses their job, they don’t need training; they need another job. Removing yourself from the job market to take training courses carries serious costs: work relationships begin to decay, and there is no guarantee that the new skills you learn will be in any more demand than the old ones you already had.

In fact, Peter Cappelli at the Wharton School argues that the entire notion of a skills gap in America is largely a myth. One reason there is such a mismatch between the rhetoric about skills and the data is that the most effective training often comes on the job, from an employer. It is augmenting skills, not replacing them, that creates value.

At the same time, increased complexity in the economy is making collaboration more important, so often the most important skills workers need to learn are soft skills, like writing, listening and being a better team player.

You Can’t Compete With A Robot By Acting Like One

The future is always hard to predict. While it was easy to see that Amazon posed a real problem for large chain bookstores like Barnes & Noble and Borders, it was much less obvious that small independent bookstores would thrive. In much the same way, few saw that, ten years after the launch of the Kindle, paper books would surge amid a decline in e-books.

The one overriding trend of the past 50 years or so is that the future is always more human. In his recent book, Back to Human, Dan Schawbel finds that the antidote for our overly automated age is deeper personal relationships. Things like trust, empathy and caring can’t be automated or outsourced.

There are some things a machine will never do. It will never strike out in a Little League game, have its heart broken or see its child born. That makes it hard — impossible, really — for a machine ever to work with humans as effectively as a real person would. The work of humans is increasingly to work with other humans to design work for machines.

That’s why perhaps the biggest shift in value is from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers (we now use computers for those things) and more to do with deep collaboration, teamwork and emotional intelligence.

So while even the most technically inept line manager can now easily produce an application that once would have required a highly skilled software engineer, designing the next generation of technology will require engineers and line managers to work ever more closely together.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Trust as a Competitive Advantage

GUEST POST from Greg Satell

One of the most rewarding things about writing my book Mapping Innovation was talking to the innovators themselves. All of them were prominent (one recently won the Nobel Prize), but I found them to be among the kindest and most generous people you can imagine, nothing like the difficult and mercurial stereotype.

At first, this may seem counterintuitive, because any significant innovation takes ambition, drive and persistence. Yet a study of the design firm IDEO sheds some light. It found that great innovators are essentially knowledge brokers who place themselves at the center of information networks. To do that, you need to build trust.

A report from Accenture Strategy analyzing over 7,000 firms found this effect to be even more widespread than I had thought. When evaluating competitive agility, it found trust “disproportionately impacts revenue and EBITDA.” The truth is that to compete effectively you need to build deep bonds of trust throughout a complex ecosystem of stakeholders.

From Value Chain To Value Ecosystem

In Michael Porter’s landmark book, Competitive Advantage, the Harvard professor argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods. The goal was to create a sustainable competitive advantage your rivals couldn’t hope to match.

Many of the great enterprises of the 20th century were built along those lines. Firms like General Motors under Alfred Sloan and IBM under Thomas J. Watson (and later, his son Thomas Watson Jr.), as well as others, so thoroughly dominated the value chains of their respective industries that they were able to maintain leading positions for decades.

Clearly, much has changed since Porter wrote his book nearly 40 years ago. Today, we live in a networked world and competitive advantage is no longer the sum of all efficiencies, but the sum of all connections. Strategy, therefore, must be focused on widening and deepening links to resources outside the firm.

So you can see why trust has taken on greater importance. Today, firms like General Motors and IBM need to manage a complex ecosystem of partners, suppliers, investors and customer relationships, and these all depend on trust. If one link is broken anywhere in the ecosystem, the others will weaken too and business will suffer.

The Cost Of A Trust Event

The study was not originally designed to measure the effect of trust specifically, but overall competitive agility. It looked at revenue growth and profitability over time and then incorporated metrics measuring sustainability and trust to get a fuller picture of a firm’s ability to compete.

The Accenture Strategy analysis is wide-ranging, incorporating over 4 million data points. It also included Arabesque’s S-Ray data from over 50,000 sources, which rates companies on their sustainability practices with a quantitative score, as well as a proprietary measurement of trust across customers, employees, investors, suppliers, analysts and the media.

Yet when the analysts began to examine the data, they found that the trust metrics disproportionately affected the overall score. For example, a consumer-focused company that had a sustainability-oriented publicity event backfire lost an estimated $400 million in future revenues. Another company that was named in a money laundering scandal lost $1 billion.

All too often, acting expediently is seen as being pragmatic, because cutting corners can save you money up front. Yet what the report makes clear is that companies today need to start taking trust more seriously. In today’s voraciously competitive environment, taking a major hit of any kind can hamstring operations for years and sometimes permanently.

Where Trust Hits The Hardest

When issues of trust come up, we immediately think about consumers. With social media increasing the velocity of information, even a seemingly minor incident can go viral, causing widespread outrage. That kind of thing can send customers flocking to competitors.

Yet as I dug into the report’s data more deeply, I found that the effect varied widely by industry. For example, in manufacturing, media and insurance, the cost of a trust incident was fairly low, but in industries such as banking, retail and industrial services, the impact could be five to ten times higher.

What seems to make the difference is that industries that are most sensitive to a trust event have more complex ecosystems. For example, a retail operation needs to maintain strong relationships with hundreds and sometimes thousands of suppliers. Banking, on the other hand, is highly sensitive to the cost of capital. A drop in trust can send costs surging.

Further, in industries like high tech and industrial services, companies need to stay on the cutting edge to compete. That requires highly collaborative partnerships with other companies to share knowledge and expertise. Once trust is lost, it’s devilishly hard to earn back and competitors gain an edge.

Building Resiliency

The trust problem is amazingly widespread. Accenture found that 54% of the firms in the study experienced some kind of trust event, and these can come from anywhere: a careless employee, a data breach, a defective product. Yet Jessica Long, one of the Accenture Strategy managing directors who led the study, told me that a company can improve its resiliency significantly.

“It’s not so much a matter of preventing a trust event,” she says. “The world is a messy place and things happen. The real difference is how you respond and the resiliency you’ve built up through forging strong foundations in the crucial components of competitive agility: growth, profitability, sustainability and trust.”

Think about Steve Jobs and Apple, which encountered a number of trust events during his tenure. Because Jobs so clearly demonstrated his commitment to “insanely great” products, customers, employees and partners were more forgiving than they would have been with another company. Or take a more recent example: the scandal that erupted when two men were arrested at a Starbucks store. Because Howard Schultz had built a reputation for fairness, and because he acted decisively, the impact was far less than it could have been.

Perhaps most crucial is to build a culture of empathy. One of the things that most surprised me about the innovators I researched for my book is that many seemed almost as interested in me and my project as I was in them. I could see how others would want to work with them and share information and insights. It was that kind of access that led them to solve problems no one else could.

What the Accenture report shows is that the same thing is true for profit seeking companies. The best strategy to build trust is to actually be trustworthy. Think about how your actions affect customers, employees, partners and other stakeholders and treat their success as you would your own.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Don’t Blame Technology When Innovation Goes Wrong

GUEST POST from Greg Satell

When I speak at conferences, I’ve noticed that people are increasingly asking me about the unintended consequences of technological advance. As our technology becomes almost unimaginably powerful, there is growing apprehension and fear that we will be unable to control what we create.

This, of course, isn’t anything new. When trains first appeared, many worried that human bodies would melt at the high speeds. In ancient Greece, Plato argued that the invention of writing would destroy conversation. None of these things ever came to pass, of course, but clearly technology has changed the world for good and bad.

The truth is that we can’t fully control technology any more than we can fully control nature or each other. The emergence of significant new technologies unleashes forces we can’t hope to understand at the outset and struggle to deal with long after. Yet the most significant issues are likely to be social in nature, and those are the ones we desperately need to focus on.

The Frankenstein Archetype

It’s no accident that Mary Shelley’s novel Frankenstein was published at roughly the same time as the Luddite movement was in full swing. As cottage industries were replaced by smoke-belching factories, the sense that man’s creations could turn against him was palpable, and the gruesome tale, considered by many to be the first true work of science fiction, touched a nerve.

In many ways, trepidation about technology can be healthy. Concern about industrialization led to social policies that helped mitigate its worst effects. In much the same way, scientists concerned about the threat of nuclear Armageddon did much to help establish policies that would prevent it.

Yet the initial fears almost always prove to be unfounded. While the Luddites burned mills and smashed machines to prevent their economic disenfranchisement, the industrial age led to a rise in the living standards of working people. In a similar vein, more advanced weaponry has coincided with a reduction in violent deaths throughout history.

On the other hand, the most challenging aspects of technological advance are often things that we do not expect. While industrialization led to rising incomes, it also led to climate change, something neither the fears of the Luddites nor the creative brilliance of Shelley could have ever conceived of.

The New Frankensteins

Today, the technologies we create will shape the world as never before. Artificially intelligent systems are automating not only physical, but cognitive labor. Gene editing techniques, such as CRISPR, are enabling us to re-engineer life itself. Digital and social media have reshaped human discourse.

So it’s not surprising that there are newfound fears about where it’s all going. A study at Oxford found that 47% of US jobs are at risk of being automated over the next 20 years. The speed and ease of gene editing raise the possibility of biohackers wreaking havoc, and the rise of social media has coincided with a disturbing surge of authoritarianism around the globe.

Yet I suspect these fears are mostly misplaced. Instead of massive unemployment, we find ourselves in a labor shortage. While it is true that biohacking is a real possibility, our increased ability to cure disease will most probably far exceed the threat. The increased velocity of information also allows good ideas to travel faster and farther.

On the other hand, these technologies will undoubtedly unleash new challenges that we are only beginning to understand. Artificial intelligence raises disturbing questions about what it means to be human, just as the power of genomics will force us to grapple with questions about the nature of the individual and social media forces us to define the meaning of truth.

Revealing And Building

Clearly, Shelley and the Luddites were very different. While Shelley was an aristocratic intellectual, the Luddites were working-class weavers. Yet both saw the rise of technology as the end of a way of life and, in that way, both were right. Technology, if nothing else, forces us to adapt, often in ways we don’t expect.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He describes technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil are also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

Yet in another essay, Building Dwelling Thinking, he explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. The revealing power of technology forces us to rethink old truths and re-imagine new societal norms. That, more than anything else, is where the challenges lie.

Learning To Ask The Hard Questions

We are now nearing the end of the digital age and entering a new era of innovation which will likely be more impactful than anything we’ve seen since the rise of electricity and internal combustion a century ago. This, in turn, will initiate a new cycle of revealing and building that will be as challenging as anything humanity has ever faced.

So while it is unlikely that we will ever face a robot uprising, artificial intelligence does pose a number of troubling questions. Should safety systems in a car prioritize the life of a passenger or a pedestrian? Who is accountable for the decisions an automated system makes? We worry about who is teaching our children, but scarcely stop to think about who is training our algorithms.

These are all questions that need answers within the next decade. Beyond that, we will have further quandaries to unravel: What is the nature of work, and how do we value it? How should we deal with the rising inequality that automation creates? Who should benefit from technological breakthroughs?

The unintended consequences of technology have less to do with the relationship between us and our inventions than with our relationships with each other. Every technological shift brings about a societal shift that reshapes values and norms. Clearly, we are not helpless, but we are responsible. These are very difficult questions, and we need to start asking them. Only then can we begin the cycle of revealing truths and building a better future.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay
