Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

We Are Killing Innovation in America

GUEST POST from Greg Satell

Throughout America’s history, technological innovation has been key to security and prosperity. Whether it was through entrepreneurs like Thomas Edison, Henry Ford and Thomas Watson, or government programs like the Manhattan Project, the Apollo Program and the Human Genome Project, the United States has been on the cutting edge.

Today, as we enter a new era of innovation, America remains at the forefront of scientific discoveries in advanced areas such as artificial intelligence, synthetic biology, new computing architectures and materials science. Continued investment in science, both public and private, provides the “seed corn” for continued dominance in the 21st century.

Still, scientific advancement is not enough. We need entrepreneurs to start companies and mid-level technicians and engineers to implement technologies. The truth is that America’s human capital is being hollowed out and that’s becoming a serious problem that we need to address. Once we lose our competitive edge, we might never get it back.

1. Food Insecurity

A while back I was speaking to a group of community college administrators and I asked them what their biggest challenge was. I was shocked when every single one of them told me that it was food insecurity. Apparently, it is the number one reason that kids drop out. Only about 20% of students at community colleges earn a degree.

I was even more surprised that there are similar trends at four-year institutions. In fact, a study found that about half of all college students struggle with food insecurity. This number becomes even harder to stomach when you consider that there is also an unprecedented construction boom on college campuses.

So colleges are spending billions to build fancy dorms and rec centers while half of their students don’t have enough to eat. Is it any wonder that they are dropping out? In Weapons of Math Destruction, Cathy O’Neil points out that much of university spending is driven by college rankings like those published by US News & World Report. Maybe a “food insecurity index” should be included?

Any way you look at it, we are undermining a significant portion of our most ambitious young people because we can’t provide them with enough to eat. How can we expect to win the future when kids are dropping out of school to get a meal?

2. Tuition And Student Loans

One of the most important factors that led to American technological and economic dominance has been our commitment to higher education. The Morrill Acts in the 19th century created land grant universities that trained students in agriculture and engineering in every state. Later, the G.I. Bill helped an entire generation go to college and became the basis for a new era of prosperity.

This commitment to education made America the most educated country in the world. More recently, however, we’ve fallen to fifth among OECD countries for post-secondary education. This hasn’t been because fewer Americans are going to college; in fact, more people go to college today than in 2000. It’s just that the rest of the world is moving faster than we are.

A big factor in our decline has been tuition, which has risen from an average of $15,160 in 1988 to $34,740 in 2018. Not surprisingly, student debt is exploding. It has nearly tripled in the last decade. In fact, student debt has become so onerous that it now takes about 20 years to pay off four years of college and even more to pursue a graduate degree.
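
To get a rough sense of where that 20-year figure comes from, here is a minimal amortization sketch in Python. The loan balance, interest rate and monthly payment are illustrative assumptions, not figures from the article; plug in different numbers to see how the payoff horizon moves.

```python
# Minimal sketch: how long a hypothetical student loan takes to pay off.
# The balance, rate and payment below are illustrative assumptions only.

def months_to_payoff(balance, annual_rate, monthly_payment):
    """Count the months until a loan is repaid with fixed monthly payments."""
    monthly_rate = annual_rate / 12
    months = 0
    while balance > 0:
        interest = balance * monthly_rate
        if monthly_payment <= interest:
            raise ValueError("payment never covers the accruing interest")
        balance = balance + interest - monthly_payment
        months += 1
    return months

# Hypothetical example: $80,000 of debt at 6% APR, paying $575 a month.
months = months_to_payoff(80_000, 0.06, 575)
print(f"Paid off in {months} months (about {months / 12:.1f} years)")
```

With those assumed numbers the payoff lands at roughly 20 years, which is the ballpark described above.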

So the bright young people we don’t starve, we condemn to decades of what is essentially indentured servitude. That’s no way to run an entrepreneurial economy. In fact, a study done by the Federal Reserve Bank of Philadelphia found that student debt has a measurable negative impact on new business creation.

3. A Broken Healthcare System

There has long been a political debate about whether health care is a right or not and there are certainly moral issues that deserve attention. When I travel internationally, it is not uncommon for people to comment on how barbaric they find our healthcare system, where the uninsured die from treatable diseases and many go bankrupt due to medical costs.

Leaving the moral concerns aside though, our healthcare system represents a huge economic burden. Consider that in the US healthcare expenditures account for roughly 18% of GDP. Most countries in the OECD spend roughly half that. To add insult to injury, healthcare outcomes in the US are generally worse than the OECD average. In fact, the CDC reports that life expectancy is actually declining in America.

Think about trying to run a business that not only produces an inferior product, but also gives up 9 points of margin due to higher costs. Clearly that’s untenable. A study in the Journal of Health Economics also found that, much like student debt, concerns about health insurance inhibit entrepreneurship.

It’s important to note that each of these is a uniquely American problem. No other developed country has the same issues with healthcare or student debt. While food insecurity is an issue in some developed countries, it is far more severe in the US. All of this represents a significant competitive disadvantage.

There’s Plenty Of People At The Bottom

Far too often, we see innovation as strictly a matter of startup companies and R&D labs. So we invest in science and entrepreneurship programs to fuel technology. Yet while those things are surely important, they don’t drive advancement by themselves. We need normal, everyday people to make the most out of their potential.

As I explained in Mapping Innovation, developing breakthrough technologies is a process of discovery, engineering and transformation. The transformational part is often overlooked, because it relies not on a single entrepreneur or company, but on an ecosystem to support it. That takes networks of firms working together, each forming a piece of the overall puzzle.

Most of these companies are not household names. They supply components, implement solutions, create complementary goods and so on. Many are small businesses. We need not only geniuses to create the future, but also technicians, consultants and service providers.

In 1959 the physicist Richard Feynman gave a famous talk titled There’s Plenty of Room at the Bottom to alert the scientific community to the possibilities of nanotechnology. I think the same can be said of innovation in America today. Our most valuable resource is our human capital. If we can’t feed, educate and nurture that talent, our future will not be bright.

There’s plenty of people at the bottom with almost limitless potential to increase our national capacity for prosperity, security and well-being. Yet instead of empowering them, we are undermining them and, in doing so, assuring our own decline.

— Article courtesy of the Digital Tonto blog and an earlier version appeared on Inc.com
— Image credit: Pixabay

A Quantum Computing Primer

GUEST POST from Greg Satell

Every once in a while, a technology comes along with so much potential that people can’t seem to stop talking about it. That’s fun and exciting, but it can also be confusing. Not all of the people who opine really know what they’re talking about and, as the cacophony of voices increases to a loud roar, it’s hard to know what to believe.

We’re beginning to hit that point with quantum computing. Listen to some and you imagine that you’ll be strolling down to your local Apple store to pick one up any day now. Others will tell you that these diabolical machines will kill encryption and bring global commerce to a screeching halt. None of this is true.

What is true though is that quantum computing is not only almost unimaginably powerful, it is also completely different than anything we’ve ever seen before. You won’t use a quantum computer to write emails or to play videos, but the technology will significantly impact our lives over the next decade or two. Here’s a basic guide to what you really need to know.

Computing In 3 Dimensions

Quantum computing, as any expert will tell you, uses quantum effects such as superposition and entanglement to compute, unlike digital computers that use strings of ones and zeros. Yet quantum effects are so confusing that the great physicist Richard Feynman once remarked that nobody, even world class experts like him, really understands them.

So instead of quantum effects, think of quantum computing as a machine that works in three dimensions rather than two dimensions like digital computers. The benefits of this should be obvious, because you can fit a lot more stuff into three dimensions than you can into two, so a quantum computer can handle vastly more complexity than the ones we’re used to.

Another added benefit is that we live in three dimensions, so quantum computers can simulate the systems we deal with every day, like those in materials and biological organisms. Digital computers can do this to some extent, but some information always gets lost translating the data from a three-dimensional world to a two-dimensional one, which leads to problems.

I want to stress that this isn’t exactly an accurate description of how quantum computers really work, but it’s close enough for you to get the gist of why they are so different and, potentially, so useful.
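
For a slightly more literal handle on the difference: describing the state of n qubits classically takes 2^n complex amplitudes, so the bookkeeping grows exponentially with every qubit added. The short Python sketch below simply tallies that growth; the qubit counts and the 16-bytes-per-amplitude figure (one double-precision complex number) are arbitrary choices for illustration.

```python
# How much classical memory it would take just to write down an n-qubit state.
# 16 bytes assumes one double-precision complex amplitude (complex128).

BYTES_PER_AMPLITUDE = 16

for n_qubits in (30, 40, 50):
    amplitudes = 2 ** n_qubits
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits} qubits -> {amplitudes:,} amplitudes "
          f"(~{gigabytes:,.0f} GB to store classically)")
```

By around 50 qubits the bookkeeping alone outstrips any ordinary classical machine, which is the intuition behind the claim that quantum computers can handle vastly more complexity.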

Coherence And Error Correction

Everybody makes mistakes and the same goes for machines. When you think of all the billions of calculations a computer makes, you can see how even an infinitesimally small error rate can cause a lot of problems. That’s why computers have error correction mechanisms built into their code to catch mistakes and correct them.

With quantum computers the problem is much tougher because they work with subatomic particles and these systems are incredibly difficult to keep stable. That’s why quantum chips need to be kept within a fraction of a degree of absolute zero. At even a sliver above that, the system “decoheres” and we won’t be able to make sense out of anything.

It also leads to another problem. Because quantum computers are so prone to error, we need a whole lot of quantum bits (or qubits) for each qubit that performs a logical function. In fact, with today’s technology, we need more than a thousand physical qubits (the kind that are in a machine) for each qubit that can reliably perform a logical function.

This is why fears of quantum computing killing encryption and destroying the financial system are mostly unfounded. The most advanced quantum computers today only have about 50 qubits, not nearly enough to crack anything. We will probably have machines that strong in a decade or so, but by that time quantum-safe encryption should be fairly common.
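
A quick back-of-the-envelope calculation shows why those fears are premature: multiply the logical qubits a serious workload might need by the roughly 1,000-to-1 physical-to-logical overhead described above, then compare the result with today’s machines of about 50 qubits. The logical-qubit counts in this sketch are hypothetical placeholders, not published estimates.

```python
# Back-of-the-envelope sketch of the error-correction overhead described above.
# The logical-qubit requirements are hypothetical placeholders, not real estimates.

PHYSICAL_PER_LOGICAL = 1_000  # "more than a thousand physical qubits" per logical one
AVAILABLE_TODAY = 50          # roughly the most advanced machines mentioned above

hypothetical_workloads = {
    "useful chemistry simulation": 100,    # logical qubits (assumed)
    "large-scale optimization": 1_000,     # logical qubits (assumed)
    "attack on modern encryption": 4_000,  # logical qubits (assumed)
}

for task, logical_qubits in hypothetical_workloads.items():
    physical_needed = logical_qubits * PHYSICAL_PER_LOGICAL
    gap = physical_needed / AVAILABLE_TODAY
    print(f"{task}: ~{physical_needed:,} physical qubits "
          f"({gap:,.0f}x today's hardware)")
```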

Building Practical Applications

Because quantum computers are so different, it’s hard to make them efficient for the tasks that we use traditional computers for because they effectively have to translate two-dimensional digital problems into their three-dimensional quantum world. The error correction issues only compound the problem.

There are some problems, however, that they’re ideally suited to. One is to simulate quantum systems, like molecules and biological systems, which can be tremendously valuable for people like chemists, materials scientists and medical researchers. Another promising area is large optimization problems for use in the financial industry and helping manage complex logistics.

Yet the people who understand those problems know little about quantum computing. In most cases, they’ve never seen a quantum computer before and have trouble making sense out of the data they generate. So they will have to spend some years working with quantum scientists to figure it out and then some more years explaining what they’ve learned to engineers who can build products and services.

We tend to think of innovation as if it is a single event. The reality is that it’s a long process of discovery, engineering and transformation. We are already well into the engineering phase of quantum computing—we have reasonably powerful machines that work—but the transformation phase has just begun.

The End Of The Digital Revolution And A New Era Of Innovation

One of the reasons that quantum computing has been generating so much excitement is that Moore’s Law is ending. The digital revolution was driven by our ability to cram more transistors onto a silicon wafer, so once we are not able to do that anymore, a key avenue of advancement will no longer be viable.

So many assume that quantum computing will simply take over where digital computing left off. It will not. As noted above, quantum computers are fundamentally different than the ones we are used to. They use different logic, require different computing languages and algorithmic approaches and are suited to different tasks.

That means the major impacts from quantum computers won’t hit for a decade or more. That’s not at all unusual. For example, although Apple came out with the Macintosh in 1984, it wasn’t until the late 90s that there was a measurable bump in productivity. It takes time for an ecosystem to evolve around a technology and drive a significant impact.

What’s most important to understand, however, is that the quantum era will open up new worlds of possibility, enabling us to manage almost unthinkable complexity and reshape the physical world. We are, in many ways, just getting started.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay

Sometimes You Should Collaborate Instead of Compete

GUEST POST from Greg Satell

Boeing and Airbus are arch-rivals, competing vigorously over decades for supremacy in the global aviation market, much like DowDuPont and BASF do in chemicals. Yet all of these companies, along with many others, collaborate at places like the Composites Institute (IACMI). They do this not out of any altruism, of course, but self-interest.

It is at places like the Composites Institute that profit-driven companies can explore the future with top-notch scientists from places like Oak Ridge National Laboratory, Michigan State University and Purdue, as well as dozens of smaller companies active in the space. To not participate would be to risk being cut out of important developments.

This type of activity is not entirely new. In the 80s, semiconductor firms, along with the Department of Defense, created SEMATECH to regain competitiveness against foreign competition, while still fighting it out in the marketplace. The truth is that sometimes you need to collaborate and sometimes you have to compete. Here’s how to know the difference.

The Value Chain and Competitive Advantage

In Michael Porter’s landmark book, Competitive Advantage, the Harvard professor argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods. The goal was to create a sustainable competitive advantage your rivals couldn’t hope to match.

Porter’s ideas dominated thinking in corporate strategy for decades, yet they had a fatal flaw that wasn’t always obvious. Thinking in terms of value chains is viable when technology is relatively static, but when the marketplace is rapidly evolving it can get you locked out of important ecosystems and greatly diminish your ability to compete.

To understand why, consider open-source software. When Linux first rose to prominence, Microsoft CEO Steve Ballmer called it a cancer. Yet more recently, its current CEO announced that the company loves Linux. That didn’t happen out of any sort of newfound benevolence, but because it recognized that it couldn’t continue to shut itself out and compete.

To thrive in an ecosystem driven world, you must constantly widen and deepen connections. Instead of always looking to maximize bargaining power, you need to look for opportunities to co-create with customers and suppliers, to integrate your products and services with potential substitutes and to form partnerships with new market entrants.

A New Era Of Innovation

The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. Scientists reveal new phenomena through exploration and experiment and then later engineers figure out how to channel these phenomena to some specific use. For example, the advancements in theoretical physics revealed in the 1920s and 30s were channeled into transistors and microchips later on.

Eventually, the new technology and its implications are understood well enough to support broad adoption and a transformational period ensues. The need for revealing lessens greatly and value shifts towards building rapidly for use. We have seen much of this in the last 30 years as the digital revolution has shifted its emphasis toward skills like rapid prototyping and iteration.

Yet every technology eventually hits theoretical limits and that’s where we are now with respect to digital technology. The fact is that atoms are only so small and the speed of light is only so fast. So that limits how many transistors can fit on a silicon wafer and how fast we can compute by zipping electrons through them. Make no mistake, the future will not be digital.

So we need to embark on a new cycle of revealing and building in areas like quantum computing, synthetic biology and materials science. These things cannot be rapidly prototyped because we simply don’t understand them well enough yet. We need to explore them to reveal and, eventually, to begin building in earnest once again.

Emerging Platforms For Collaboration

Now we can understand why Boeing and Airbus are happy to join organizations like the Composites Institute. Both need to explore and neither can go it alone. They need partners, like research universities, government labs and other firms, to help them uncover new things. As open source enthusiasts are fond of saying, “given enough eyeballs, all bugs are shallow.”

Yet the Composites Institute is just one node in the network of Manufacturing Institutes set up under the Obama Administration to support this type of collaboration. In areas ranging from advanced fabrics and biofabrication to additive manufacturing and wide-bandgap semiconductors, firms large and small are working with scientists to uncover new principles.

To understand how different this is from earlier eras, consider the case of IBM. When it developed the PC, it did so largely in secret with a skunk works the company set up in Florida. With quantum computing, however, it has built up an expansive network of collaborators, including labs, customers and startups.

They don’t do this out of any newfound altruism, but because it significantly speeds up the exploration process. As George Crabtree, Director of JCESR, a consortium of national labs, research universities and private firms developing advanced battery technology, put it to me: “Usually discovery propagates at the speed of publication, but here, we can operate within the time frame of the next coffee break.”

Innovation Is Never A Single Event

All too often, we view innovation as the work of a single genius who, in a moment of sudden epiphany, conjures up an idea that changes the world. In reality, things never work like that. Innovation is never a single event, but a process of discovery, engineering and transformation, which usually takes about 30 years to create a significant impact.

It’s important to note, however, that this in no way means it takes 30 years to develop an innovative product. Far from it, in fact. What it means is that the next big thing is usually already about 29 years old! The truth is that the next big thing always starts out looking like nothing at all. That’s why it is crucial to invest in exploration to reveal it.

As we have seen, exploration is best done in numbers. Businesses today, such as those in the semiconductor industry, that rely on the principles of quantum mechanics revealed in the 1920s and 30s were in no way harmed by the fact that those discoveries were published openly and taught in universities. In fact, they greatly benefitted from it.

Yet the products built on those principles are highly proprietary and the secrets behind the design of those products are closely guarded. That’s the key to navigating collaboration and competition. You collaborate to reveal, but compete to develop and build. To build a great enterprise, you need to learn to do both zealously.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Unsplash

How A Networked Culture Drives Experian’s Innovation

GUEST POST from Greg Satell

In Who Says Elephants Can’t Dance, the bestselling memoir of his historic turnaround at IBM, Lou Gerstner wrote, “I came to see, in my time at IBM, culture isn’t just one aspect of the game—it is the game. In the end, an organization is nothing more than the collective capacity of its people to create value.”

There has been endless discussion about whether change should be driven from the top-down or the bottom-up, but that is, for the most part, a red herring. True transformation tends to move side-to-side, driven through horizontal connections among peers. The best way to create change in an organization is to empower it.

That’s why the data giant Experian invested years networking its organization and found that it paid off when it mattered most. While traditional hierarchies waste valuable time and effort pushing orders down the chain of command, networked organizations can adapt to changing market conditions with far more agility. Transformation begins with a networked culture.

An Innovation Culture Is A Collaborative Culture

One of the most common questions I get asked by senior managers is “How can we find more innovative people?” I know the type they have in mind. Someone energetic and dynamic, full of ideas and able to present them powerfully. It seems like everybody these days is looking for an early version of Steve Jobs. Yet the truth is that an innovative culture is a collaborative culture.

When Justin Hastings arrived at Experian North America as Chief Human Resources Officer, he saw it as his job to support and empower the culture. “Essentially, we run a talent business,” he told me. “My job is to not only supply, maintain and retain that talent, but to make sure those people are motivated and see real meaning and value in their work.”

“Culture is front and center, just incredibly important to us,” he continued. “It’s the enabler of business performance. Our company is all about driving the innovation that drives value for customers. A big part of what makes that possible is that we work hard to make everybody here feel included, that the company’s success is their success.”

However, Hastings warns that building a culture takes a lot more than just some pleasant platitudes in an employee handbook, nice speeches by the CEO and company events. “You can’t just build a culture from the top down. To be authentic, you have to build your culture organically, through informal networks,” he explains.

The Strength of Weak Ties and Boundary Spanners

In the early 1970s, sociologist Mark Granovetter began researching how professional, technical and managerial workers found jobs in the Boston area. He was somewhat surprised to find that they often found work through someone they knew, though not a close contact like a friend or family member, but rather someone more removed, like a friend of a friend or a distant cousin. He called this principle the ‘Strength of Weak Ties’.

Further analysis shows why it works. Those who are closest to us know pretty much the same things we do, because they frequent similar places and do similar things. So if we want to gain access to new information, we need to broaden our scope and connect with people further out on the social spectrum.

Hastings noticed this principle at work in Experian’s volunteer efforts. For example, many employees participate in its “Le Tour de Experian” bike rides to benefit charity. They do it to do some good and have some fun, but Hastings saw that the bike riders were also building strong bonds across organizational boundaries and these bonds were resulting in professional collaborations that created value for Experian and its customers.

Network scientists call people like the collaborating bike riders boundary spanners, because although they form strong bonds with each other, they essentially play the role of “weak ties” in Granovetter’s research. They perform a crucial function by linking disparate parts of the organization and helping knowledge and information to circulate.

Hastings figured that he could accelerate the formation of boundary spanners throughout Experian by giving employees the opportunity to organize around things they care about. Experian clubs, like the biking group, are focused on interests, while Employee Resource Groups focus on identity, like Latino heritage, gay pride or military service.
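
Network scientists usually spot boundary spanners with centrality measures. The toy sketch below, using the networkx library with invented names and teams, shows how one person who bridges two tight-knit groups stands out on betweenness centrality. It is purely an illustration of the concept, not a description of how Experian maps its own informal networks.

```python
import networkx as nx

# Toy organization: two tight-knit teams plus one person who connects them.
# All names and ties are invented purely to illustrate the idea.
G = nx.Graph()
team_a = ["ana", "ben", "carla", "dev"]
team_b = ["erin", "felix", "gita", "hugo"]

# Everyone inside a team works closely with everyone else on that team.
G.add_edges_from((u, v) for i, u in enumerate(team_a) for v in team_a[i + 1:])
G.add_edges_from((u, v) for i, u in enumerate(team_b) for v in team_b[i + 1:])

# "ines" got to know one person from each team on the company bike rides.
G.add_edges_from([("ines", "dev"), ("ines", "erin")])

# Betweenness centrality: how often a node sits on shortest paths between others.
scores = nx.betweenness_centrality(G)
for person, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{person}: {score:.2f}")
```

The bridge tops the ranking despite having the fewest connections in the graph, which is exactly the boundary-spanner role described above.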

Using Networks To Empower Transformation

When Barry Libenson first arrived at Experian as Global CIO in 2015, he devoted the first few months to getting a sense of what its customers wanted. It quickly became clear that what they coveted most was real-time access to data. If he could provide that by shifting Experian’s technology infrastructure to the cloud, it could be an enormous opportunity.

Yet it could also be an enormous problem. “There was a lot of concern that we were going to disrupt our own business and that we would lose control of our data,” Libenson told me. “For years, Experian’s business model had been based on a traditional architecture. There were also security concerns.” To make matters worse, research by McKinsey indicates that roughly 75% of transformational initiatives fail.

So instead of trying to force change through, Libenson sought to empower it. Much like Hastings did with clubs and Employee Resource Groups, he identified people within the organization who were already enthusiastic about the shift to cloud technology and made sure they were trained to implement it. Those early apostles could then help convert others.

Libenson also saw how Experian’s networked culture helped smooth the way. “Digital transformation is somewhat of a misnomer. You’re not really transforming the technology, but more importantly, the people who use it. Having a networked culture means that you can spread enthusiasm about transformation—as well as the expertise to implement it—much faster and with far less resistance than you could otherwise,” he says.

The Journey Continues

As I explain in Cascades, all too often an initial success gives way to inertia, backsliding and eventually, failure. The truth is, it’s not enough to just drive change, you also need to learn how to survive victory. You do that by focusing on culture and values, rather than on any one particular objective, so that you are constantly preparing for the next challenge.

These days, Experian is highly focused on leveraging artificial intelligence which, much like the shift to the cloud, is both a great opportunity and a potential problem. AI has the potential to vastly improve things like credit scores, but the algorithms can’t be a “black box.” To be effective, they must be auditable and explainable.

Experian’s Datalabs unit is hard at work creating more transparent AI algorithms, and making progress, but the technology will only be valuable if Datalabs scientists can work effectively with professionals from other divisions of the company. So Eric Haller, who leads Datalabs, set up a series of seminars to connect with the rest of the company.

“To implement this technology requires a certain amount of sophistication that is relatively rare,” he told me. “So not only were we putting information out there, through the connections we made we were also able to identify expertise throughout our company we were not aware of. Those new relationships have already opened up new possibilities for collaboration.”

What’s interesting and salient about how the networked culture was built at Experian is how it all seems so mundane. Many firms have clubs, employee groups and volunteer efforts. Seminars aren’t particularly unusual, either. Yet it’s not any one program or platform, but how those initiatives are optimized to widen and deepen informal bonds across the organization, that makes the difference.

The truth is that, today, competitiveness is no longer determined by the sum of all efficiencies within a business, but the sum of all connections.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Unsplash

What Pundits Always Get Wrong About the Future

GUEST POST from Greg Satell

Peter Thiel likes to point out that we wanted flying cars, but got 140 characters instead. He’s only partly right. For decades, futuristic visions showed everyday families zipping around in flying cars and it’s true that even today we’re still stuck on the ground. Yet that’s not because we’re unable to build one. In fact, the first was invented in 1934.

The problem is not so much with engineering, but economics, safety and convenience. We could build a flying car if we wanted to, but to make one that can compete with regular cars is another matter entirely. Besides, in many ways, 140 characters are better than a flying car. Cars only let us travel around town; the Internet helps us span the globe.

That has created far more value than a flying car ever could. We often fail to predict the future accurately because we don’t account for our capacity to surprise ourselves, to see new possibilities and take new directions. We interact with each other, collaborate and change our priorities. The future that we predict is never as exciting as the one we eventually create.

1. The Future Will Not Look Like The Past

We tend to predict the future by extrapolating from the present. So if we invent a car and then an airplane, it only seems natural that we can combine the two. If a family has a car, then having one that flies can seem like a logical next step. We don’t look at a car and dream up, say, a computer. So in 1934, we dreamed of flying cars, but not computers.

It’s not just optimists that fall prey to this fundamental error, but pessimists too. In Homo Deus, author and historian Yuval Noah Harari points to several studies that show that human jobs are being replaced by machines. He then paints a dystopian picture. “Humans might become militarily and economically useless,” he writes. Yeesh!

Yet the picture is not as dark as it may seem. Consider the retail apocalypse. Over the past few years, we’ve seen an unprecedented number of retail store closings. Those jobs are gone and they’re not coming back. You can imagine thousands of retail employees sitting at home, wondering how to pay their bills, just as Harari predicts.

Yet economist Michael Mandel argues that the data tell a very different story. First, he shows that the jobs gained from e-commerce far outstrip those lost from traditional retail. Second, he points out that the total e-commerce sector, including lower-wage fulfillment centers, has an average wage of $21.13 per hour, which is 27 percent higher than the $16.65 that the average worker in traditional retail earns.

So not only are more people working, they are taking home more money too. Not only is the retail apocalypse not a tragedy, it’s somewhat of a blessing.

2. The Next Big Thing Always Starts Out Looking Like Nothing At All

Every technology eventually hits theoretical limits. Buy a computer today and you’ll find that the technical specifications are much like they were five years ago. When a new generation of iPhones comes out these days, reviewers tout the camera rather than the processor speed. The truth is that Moore’s law is effectively over.

That seems tragic, because our ability to exponentially increase the number of transistors that we can squeeze onto a silicon wafer has driven technological advancement over the past few decades. Every 18 months or so, a new generation of chips has come out and opened up new possibilities that entrepreneurs have turned into exciting new businesses.

What will we do now?

Yet there’s no real need to worry. There is no 11th commandment that says, “Thou shalt compute with ones and zeros” and the end of Moore’s law will give way to newer, more powerful technologies, like quantum and neuromorphic computing. These are still in their nascent stage and may not have an impact for at least five to ten years, but will likely power the future for decades to come.

The truth is that the next big thing always starts out looking like nothing at all. Einstein never thought that his work would have a practical impact during his lifetime. When Alexander Fleming first discovered penicillin, nobody noticed. In much the same way, the future is not digital. So what? It will be even better!

3. It’s Ecosystems, Not Inventions, That Drive The Future

When the first automobiles came to market, they were called “horseless carriages” because that’s what everyone knew and was familiar with. So it seemed logical that people would use them much like they used horses, to take the occasional trip into town and to work in the fields. Yet it didn’t turn out that way, because driving a car is nothing like riding a horse.

So first people started taking “Sunday drives” to relax and see family and friends, something that would be too tiring to do regularly on a horse. Gas stations and paved roads changed how products were distributed and factories moved from cities in the north, close to customers, to small towns in the south, where land and labor were cheaper.

As the ability to travel increased, people started moving out of cities and into suburbs. When consumers could easily load a week’s worth of groceries into their cars, corner stores gave way to supermarkets and, eventually, shopping malls. The automobile changed a lot more than simply how we got from place to place. It changed our way of life in ways that were impossible to predict.

Look at other significant technologies, such as electricity and computers, and you find a similar story. It’s ecosystems, rather than inventions, that drive the future.

4. We Can Only Validate Patterns Going Forward

G. H. Hardy once wrote that, “a mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.” Futurists often work the same way, identifying patterns in the past and present, then extrapolating them into the future. Yet there is a substantive difference between patterns that we consider to be preordained and those that are to be discovered.

Think about Steve Jobs and Appl for a minute and you will probably recognize the pattern and assume I misspelled the name of his iconic company by forgetting to include the “e” at the end. But I could just as easily have been about to describe an “Applet” he designed for the iPhone or some connection between Jobs and Appleton, WI, a small town outside Green Bay.

The point is that we can only validate patterns going forward, never backward. That, in essence, is what Steve Blank means when he says that business plans rarely survive first contact with customers and why his ideas about lean startups are changing the world. We need to be careful about the patterns we think we see. Some are meaningful. Others are not.

The problem with patterns is that the future is something we create, not some preordained plan that we are beholden to. The things we create often become inflection points and change our course. That may frustrate the futurists, but it’s what makes life exciting for the rest of us.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay

The Eureka Moment Fallacy

GUEST POST from Greg Satell

In 1928, Alexander Fleming arrived at his lab to find that a mysterious mold had contaminated his Petri dishes and was eradicating the bacteria colonies he was trying to grow. Intrigued, he decided to study the mold. That’s how Fleming came to be known as the discoverer of penicillin.

Fleming’s story is one that is told and retold because it reinforces so much about what we love about innovation. A brilliant mind meets a pivotal moment of epiphany and — Eureka! — the world is forever changed. Unfortunately, that’s not really how things work. It wasn’t true in Fleming’s case and it won’t work for you.

The truth is that innovation is never a single event, but a process of discovery, engineering and transformation, which is why penicillin didn’t become commercially available until 1945 (and the drug was actually a different strain of the mold than Fleming had discovered). We need to stop searching for Eureka moments and get busy with the real work of innovating.

Learning To Recognize And Define Problems

Before Fleming, there was Ignaz Semmelweis and to understand Fleming’s story it helps to understand that of his predecessor. Much like Fleming, Semmelweis was a bright young man of science who had a moment of epiphany. In Semmelweis’s case, he was one of the first to realize that infections could spread from doctor to patient.

That simple insight led him to institute a strict regime of hand washing at Vienna General Hospital. Almost immediately, the incidence of deadly childbed fever dropped precipitously. Yet his ideas were not accepted at the time and Semmelweis didn’t do himself any favors by refusing to format his data properly or to work collaboratively to build support for his ideas. Instead, he angrily railed against the medical establishment he saw as undermining his work.

Semmelweis would die in an insane asylum, ironically from an infection he contracted under care, and never got to see the germ theory of disease emerge from the work of people like Louis Pasteur and Robert Koch. That’s what led to the study of bacteriology, sepsis and Alexander Fleming growing those cultures that were contaminated by the mysterious mold.

When Fleming walked into his lab on that morning in 1928, he was bringing a wealth of experiences to the problem. During World War I, he had witnessed many soldiers die from sepsis and how applying antiseptic agents to the wound often made the problem worse. Later, he found that nasal secretions inhibited bacterial growth.

So when the chance discovery of penicillin happened, it was far from a single moment, but rather a “happy accident” that he had spent years preparing for.

Combining Domains

Today, we remember Fleming’s discovery of penicillin as a historic breakthrough, but it wasn’t considered to be so at the time. In fact, when it was first published in the British Journal of Experimental Pathology, nobody really noticed. The truth is that what Fleming discovered couldn’t have cured anybody. It was just a mold secretion that killed bacteria in a Petri dish.

Perhaps even more importantly, Fleming was ill-equipped to transform penicillin into something useful. He was a pathologist who largely worked alone. To transform his discovery into an actual cure, he would need chemists and other scientists, as well as experts in fermentation, manufacturing, logistics and many other things. To go from milliliters in the lab to metric tons in the real world is no trivial thing.

So Fleming’s paper lay buried in a scientific journal for ten years before it was rediscovered by a team led by Howard Florey and Ernst Chain at the University of Oxford. Chain, a world-class biochemist, was able to stabilize the penicillin compound and another member of the team, Norman Heatley, developed a fermentation process to produce it in greater quantities.

Because Florey and Chain led a larger team in a bigger lab, they also had the staff and equipment to perform experiments on mice, which showed that penicillin was effective in treating infections. However, when they tried to cure a human, they found that they were not able to produce enough of the drug. They simply didn’t have the capacity.

Driving A Transformation

By the time Florey and Chain had established the potential of penicillin it was already 1941 and England was at war, which made it difficult to find funding to scale up their work. Luckily, Florey had done a research fellowship in the United States and was able to secure a grant to travel to America and continue the development of penicillin with US-based labs.

That collaboration produced two more important breakthroughs. First, they were able to identify a more powerful strain of the penicillin mold. Second, they developed a fermentation process utilizing corn steep liquor as a medium. Corn steep liquor was common in the American Midwest, but virtually unheard of back in England.

Still, they needed to figure out a way to scale up production and that was far beyond the abilities of research scientists. However, the Office of Scientific Research and Development (OSRD), a government agency in charge of wartime research, understood the potential of penicillin for the war effort and initiated an aggressive program, involving two dozen pharmaceutical companies, to overcome the challenges.

Working feverishly, they were able to produce enough penicillin to deploy the drug for D-Day in 1944 and saved untold thousands of lives. After the war was over, in 1945, penicillin was made commercially available, which touched off a “golden age” of antibiotic research and new drugs were discovered almost every year between 1950 and 1970.

Innovation Is Never A Single Event

The story of Fleming’s Eureka! moment is romantic and inspiring, but also incredibly misleading. It wasn’t one person and one moment that changed the world, but the work of many over decades that made an impact. As I explain in my book, Cascades, it is small groups, loosely connected, but united by a shared purpose that drive transformational change.

In fact, the development of penicillin involved not one, but a series of epiphanies. First, Fleming discovered penicillin. Then, Florey and Chain rediscovered Fleming’s work. Chain stabilized the compound, Heatley developed the fermentation process, other scientists identified the more powerful strain and corn steep liquor as a fermentation medium. Surely, there were many other breakthroughs involving production, logistics and treatment that are lost to history.

This is not the exception, but the rule. The truth is that the next big thing always starts out looking like nothing at all. For example, Jim Allison, who recently won the Nobel Prize for his development of cancer immunotherapy, had his idea rejected by pharmaceutical companies, much like the medical establishment dismissed Semmelweis back in the 1850s.

Yet Allison kept at it. He continued to pound the pavement, connect and collaborate with others and that’s why today he is hailed as a pioneer and a hero. That’s why we need to focus less on inventions and more on ecosystems. It’s never a single moment of Eureka! that truly changes the world, but many of them.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

How to Fix Corporate Transformation Failure

GUEST POST from Greg Satell

We live in an age in which change has become the only constant. So it’s not surprising that change management models have become popular. Executives are urged to develop a plan to communicate the need for change, create a sense of urgency and then drive the process through to completion.

Unfortunately, the vast majority of these efforts fail and it’s not hard to see why. Anybody who’s ever been married or had kids knows first-hand how difficult it can be to convince even a single person of something. Any effort to persuade hundreds, if not thousands, of people through some kind of mass effort is setting the bar pretty high.

However, as I explain in Cascades, what you can do is help them convince each other by changing the dynamic so that people enthusiastic about change can influence other (slightly less) enthusiastic people. The truth is that small groups, loosely connected, but united by a shared purpose drive transformational change. So that’s where you need to start.

The Power Of Local Majorities

In the 1950s, the prominent psychologist Solomon Asch undertook a pathbreaking series of conformity studies. The design of the study was simple, but ingenious. He merely showed people pairs of cards, asking them to match the length of a single line on one card with one of three on an adjacent card. The answer was meant to be obvious.

However, as the experimenter went around the room, one person after another gave the same wrong answer. When it reached the final person in the group (in truth, the only real subject; the rest were confederates), the vast majority of the time that person conformed to the majority opinion, even if it was obviously wrong!

Majorities don’t just rule, they also influence, especially local majorities. The effect is even more powerful when the issue at hand is more ambiguous than the length of a line on a card. More recent research suggests that the effect applies not only to people we know well, but that we are also influenced even by second and third-degree relationships.

So perhaps the best way to convince somebody of something is to surround them with people who hold a different opinion. To extend the marriage analogy a bit, I might have a hard time convincing my wife or daughter, say, that my jokes are funny and not at all corny, but if they are surrounded by people who think I’m hilarious, they’ll be more likely to think so too.

Changing Dynamics

The problem with creating change throughout an organization is that any sufficiently large group of people will hold a variety of opinions about virtually any matter and these opinions tend to be widely dispersed. So the first step in creating large-scale change is to start thinking about where to target your efforts and there are two tools that can help you do that.

The first, called the Spectrum of Allies, helps you identify which people are active or passive supporters of the change you want to bring about, which are neutral and which actively or passively oppose it. Once you are able to identify these groups, you can start mobilizing the most enthusiastic supporters to start influencing the other groups to shift their opinions. You probably won’t ever convince the active opposition, but you can isolate and neutralize them.

The second tool, called the Pillars of Support, identifies stakeholder groups that can help bring change about. In a typical corporation, these might be business unit leaders, customer groups, industry associations, regulators and so on. These stakeholders are crucial for supporting the status quo, so if you want to drive change effectively, you will need to pull them in.

What is crucial is that every tactic mobilizes a specific constituency in the Spectrum of Allies to influence a specific stakeholder group in the Pillars of Support. For example, in 1984, anti-apartheid activists spray-painted “WHITES ONLY” and “BLACKS” above pairs of Barclays ATMs in British university towns to draw attention to the bank’s investments in South Africa.

This, of course, had little to no effect on public opinion in South Africa, but it meant a lot to the English university students that the bank wanted to attract. Its share of student accounts quickly plummeted from 27% to 15% and two years later Barclays pulled all of its investments out of the country, which greatly damaged the Apartheid regime.
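
One way to keep that discipline is to record every tactic as an explicit pairing of a constituency from the Spectrum of Allies with a stakeholder group from the Pillars of Support. The sketch below is hypothetical; the group labels and tactics are invented for the example rather than taken from Cascades or the Barclays campaign.

```python
# Hypothetical sketch: every tactic pairs one constituency (Spectrum of Allies)
# with one stakeholder group (Pillars of Support). All entries are invented.

SPECTRUM = ["active allies", "passive allies", "neutrals",
            "passive opposition", "active opposition"]
PILLARS = ["business unit leaders", "customer groups",
           "industry associations", "regulators"]

tactics = [
    {"tactic": "internal demo roadshow",
     "mobilizes": "active allies", "targets": "business unit leaders"},
    {"tactic": "customer co-design pilot",
     "mobilizes": "passive allies", "targets": "customer groups"},
    {"tactic": "white paper at a trade conference",
     "mobilizes": "active allies", "targets": "industry associations"},
]

# Sanity check: flag any tactic that is not anchored to both dimensions.
for t in tactics:
    assert t["mobilizes"] in SPECTRUM and t["targets"] in PILLARS, t["tactic"]
    print(f'{t["tactic"]}: {t["mobilizes"]} -> {t["targets"]}')
```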

Identifying A Keystone Change

Every change effort begins with a grievance: sales are down, customers are unhappy or perhaps a new technology threatens to disrupt a business model. Change starts when leaders are able to articulate a clear and affirmative “vision for tomorrow” that is empowering and points toward a better future.

However, the vision can rarely be achieved all at once. That’s why successful change efforts define a keystone change, which identifies a tangible goal, involves multiple stakeholders and paves the way for future change. A successful keystone change can supercharge your efforts to shift the Spectrum of Allies and pull in Pillars of Support.

For example, when Experian’s CIO, Barry Libenson, set out to shift his company to the cloud, he knew it would be an enormous undertaking. As one of the largest credit bureaus in the world, there were serious concerns that shifting its computing infrastructure would create vulnerabilities in its cybersecurity and its business model.

So rather than embarking on a multi-year death march to implement cloud technology throughout the company, he started by building internal APIs to build momentum. The move involved many of the same stakeholders he would need for the larger project, but involved far less risk and was able to show clear benefits that paved the way for future change.

In Cascades, I detail a number of cases, from major turnarounds at companies like IBM and Alcoa, to movements to gain independence in India and to secure LGBT rights in America. In each case, a keystone change played a major role in bringing change about.

Surviving Victory

As Saul Alinsky pointed out decades ago, every revolution inspires a counterrevolution. So many change efforts that show initial success ultimately fail because of backlash from key stakeholders. That’s why it is crucial to plan how you will survive victory by rooting your change effort in values, skills and capabilities, rather than in specific objectives or tactics.

For example, Blockbuster Video’s initial response to Netflix in 2004 was extremely successful and, by 2007, it was winning new subscribers faster than the upstart. Yet because it rooted its plan solely in terms of strategy and tactics, the changes were only skin deep. After the CEO left because of a compensation dispute, the strategy was quickly reversed. Blockbuster went bankrupt a few years later.

Compare that to the success at Experian. In both cases, large, successful enterprises needed to move against a disruptive threat. In both cases, legacy infrastructure and business models needed to be replaced. At Experian, however, the move was not rooted in a strategy imposed from above, but through empowering the organization with new skills and capabilities.

That made all the difference, because rather than having to convince the rank and file of the wisdom of moving to the cloud, Libenson was able to empower those already enthusiastic about the initiative. They then became advocates, brought others along and, before long, the enthusiasts outnumbered the skeptics.

The truth is you can’t overpower, bribe or coerce people to embrace change. By focusing on changing the dynamics upon which a transformation can take place, you can empower those within your organization to drive change themselves. The role of a leader is no longer to plan and direct action, but to inspire and empower belief.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

AI and the Productivity Paradox

GUEST POST from Greg Satell

In the 1970s and 80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely though, productivity growth had decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.
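
The arithmetic behind that flat result is worth spelling out. In the toy calculation below, with all numbers invented, the widget company’s output never changes while its measured inputs grow with the IT spend, so conventional productivity actually ticks down even though customers are better served.

```python
# Invented numbers illustrating the measurement problem described above.
WIDGETS_PER_YEAR = 1_000_000  # output is identical before and after the IT investment

inputs_before = {"labor_hours": 200_000, "capital": 5_000_000}
inputs_after = {"labor_hours": 200_000, "capital": 5_500_000}  # +$500k of IT systems

def productivity(output, inputs, wage=25):
    """Crude ratio of output to the dollar value of combined inputs."""
    return output / (inputs["labor_hours"] * wage + inputs["capital"])

before = productivity(WIDGETS_PER_YEAR, inputs_before)
after = productivity(WIDGETS_PER_YEAR, inputs_after)
print(f"Measured productivity change: {100 * (after / before - 1):.1f}%")
```

Faster shipping and happier customers never show up in that ratio, which is why the aggregate statistics looked so paradoxical.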

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back-office systems or, later, put Macintoshes or PCs with some basic software on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry- and process-specific software, and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped, “You can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed, aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. Running machine learning algorithms on digital hardware is often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Avoid These Four Myths While Networking Your Organization

Avoid These Four Myths While Networking Your Organization

GUEST POST from Greg Satell

In an age of disruption, everyone has to adapt eventually. However, the typical organization is ill-suited to change direction. Managers spend years—and sometimes decades—working to optimize their operations to deliver specific outcomes and that can make an organization rigid in the face of a change in the basis of competition.

So it shouldn’t be surprising that the idea of networked organizations has come into vogue. While hierarchies tend to be rigid, networks are highly adaptable and almost infinitely scalable. Unfortunately, popular organizational schemes such as matrixed management and Holacracy have had mixed results, at best.

The truth is that networks have little to do with an organization chart and much more to do with how informal connections form in your organization, especially among lower-level employees. In fact, coming up with a complex scheme is likely to do little more than cause a lot of needless confusion. Here are the myths you need to avoid.

Myth #1: You Need To Restructure Your Organization

In the early 20th century, the great sociologist Max Weber noted that the sweeping industrialization taking place would lead to a change in how organizations operated. As cottage industries were replaced by large enterprises, leadership would have to become less reliant on tradition and charismatic figures and more organized and rational.

He also foresaw that jobs would need to be broken down into small, specific tasks and be governed by a system of hierarchy, authority and responsibility. This would require a more formal mode of organization—a bureaucracy—in which roles and responsibilities were clearly defined. Later, executives such as Alfred Sloan at General Motors perfected the model.

Most enterprises are still set up this way because it remains the most efficient way to organize tasks. It aligns authority with accountability and optimizes information flow. Everybody knows where they stand and what they are responsible for. Organizational restructurings are painful and time-consuming because they disrupt and undermine the normal workflow.

In fact, reorganizations can backfire if they cut informal ties that don’t show up on the organization chart. So a better path is to facilitate informal ties so that people can coordinate work that falls in between organizational boundaries. In his book One Mission, McChrystal Group President Chris Fussell calls this a “hybrid organization.”

Myth #2: You Have To Break Down Silos

In 2005, researchers at Northwestern University took on the age-old question: “What makes a hit on Broadway?” They looked at all the usual factors you would expect to influence success, such as the production budget, the marketing budget and the track record of the director. What they found, however, was surprising.

As it turns out, the most important factor was how the informal networks of the cast and crew were structured. If nobody had ever worked together before, results were poor, but if too many people had previously worked together, results also suffered. It was the middle range, where there was both familiarity and disruption, that produced the best results.

Notice how the study doesn’t mention anything about the formal organization of the cast and crew. Broadway productions tend to have very basic structures, with a director leading the creative team, a producer managing the business side and others heading up things like music, choreography and so on. That makes it easy for a cast and crew to set up, because everyone knows their place.

The truth is that silos exist because they are centers of capability. Actors work with actors. Set designers work with set designers and so on. So instead of trying to break down silos, you need to start thinking about how to connect them. In the case of the Broadway plays, that was done through previous working relationships, but there are other ways to achieve the same goal.
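The study itself doesn’t prescribe a formula, but one quick way to get a feel for the idea is to score how much shared history a team has. The sketch below, with entirely hypothetical names and shows, counts the share of member pairs with at least one prior collaboration; the research suggests the sweet spot sits somewhere in the middle, not at either extreme:

```python
# Rough sketch (hypothetical data): a crude "familiarity" score for a team,
# measured as the share of member pairs who have worked together before.
from itertools import combinations

# Each person's set of past productions (entirely made up).
past_shows = {
    "director":      {"Show A", "Show B"},
    "producer":      {"Show B", "Show C"},
    "choreographer": {"Show D"},
    "composer":      {"Show B", "Show E"},
}

pairs = list(combinations(past_shows, 2))
repeat_pairs = sum(1 for a, b in pairs if past_shows[a] & past_shows[b])

print(f"{repeat_pairs} of {len(pairs)} pairs have collaborated before "
      f"({repeat_pairs / len(pairs):.0%} familiarity)")
# A score near 0% means an all-stranger team; near 100% means an insular one.
# The Northwestern finding points toward a middle range: enough repeat
# relationships for trust, enough newcomers for fresh ideas.
```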

Myth #3: You Need To Identify Influentials, Hubs And Bridges

In his breakaway bestseller The Tipping Point, Malcolm Gladwell wrote, “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts,” which he called “The Law of the Few.” Before long, it seemed like everybody from marketers to organizational theorists was looking to identify a mysterious group of people called “influentials.”

Yet as I explain in Cascades, decades of empirical evidence shows that influentials are a myth. While it is true that some people are more influential than others, their influence is highly contextual and not significant enough to justify the trouble of identifying them. Also, a study that analyzed the emails of 60,000 people found that information does not need to rely on hubs or bridges to spread.

With that said, there are a number of ways to network your organization by optimizing organizational platforms for connection. For example, Facebook’s Engineering Bootcamp found that “bootcampers tend to form bonds with their classmates who joined near the same time and those bonds persist even after each has joined different teams.”

One of my favorite examples of how even small tweaks can improve connectivity is a project done at a bank’s call center. When it was found that a third of the variation in productivity could be attributed to informal communication outside of meetings, the bank arranged for groups to go on coffee breaks together, increasing productivity by as much as 20% while improving employee satisfaction at the same time.

Myth #4: Networks Don’t Need Leadership

Perhaps the most damaging myth about networks is that they don’t need strong leadership. Many observers have postulated that because technology allows people to connect with greater efficiency, leaders are no longer critical to organizing work. The reality is that nothing could be further from the truth.

The fact is that it is small groups, loosely connected but united by a shared purpose, that drive change. While individuals can form loosely connected small groups, they can rarely form a shared purpose by themselves. So the function of leadership these days is less to plan and direct action than it is to empower and inspire belief.

So perhaps the biggest shift is not one of tactics, but of mindset. In traditional hierarchies, information flows up through the organization and orders flow down. That helps leaders maintain control, but it also makes the organization slow to adapt and vulnerable to disruption.

Leaders need to learn how to facilitate information flow through horizontal connections so people lower down in the organization can act on it without waiting for approval. That’s where shared purpose comes in. Without a common purpose and shared values, pushing decision making down will only result in chaos. It’s much easier to get people to do what you want if they already want what you want.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Malcolm Gladwell Trap

The Malcolm Gladwell Trap

GUEST POST from Greg Satell

A few years ago I bought a book that I was really excited about. It’s one of those books that created a lot of buzz and it was highly recommended by someone I respect. The author’s pedigree included Harvard, Stanford, McKinsey and a career as a successful entrepreneur and CEO.

Yet about halfway in, I noticed that he was choosing facts to fit his story and ignoring critical truths that would indicate otherwise, much like Malcolm Gladwell often does in his books. Once I noticed a few of these glaring oversights, I found I couldn’t fully trust anything the author wrote, and I set the book aside.

Stories are important and facts matter. When we begin to believe false stories, we begin to make decisions based on them. When those decisions go awry, we’re likely to blame other factors, such as ourselves, those around us or the circumstances, rather than the false story. That’s how many businesses fail. They make decisions based on the wrong stories.

Don’t Believe Everything You Think

Go to just about any innovation conference and you will find some pundit on stage telling a story about a famous failure, usually Blockbuster, Kodak or Xerox. In each case, the reason given for the failure is colossal incompetence by senior management: Blockbuster didn’t recognize the Netflix threat. Kodak invented, but then failed to market, a digital camera. Xerox PARC developed technology, but not products.

In each case, the main assertion is demonstrably untrue. Blockbuster did develop and successfully execute a digital strategy, but its CEO left the company due to a dispute and the strategy was reversed. Kodak’s EasyShare line of digital cameras was a top seller, but couldn’t replace the massive profits the company made developing film. The development of the laser printer at Xerox PARC actually saved the company.

None of this is very hard to uncover. Still, the author fell for two of these bogus myths (Kodak and Xerox), even after obviously doing significant research for the book. Most probably, he just saw something that fit his narrative and never bothered to question whether it was true, because he was too busy validating what he already believed.

This type of behavior is so common that there is a name for it: confirmation bias. We naturally seek out information that confirms our existing beliefs. It takes significant effort to challenge our own assumptions, so we rarely do. To overcome that is hard enough. Yet that’s only part of the problem.

Majorities Don’t Just Rule, They Also Influence

In the 1950s, Solomon Asch undertook a pathbreaking series of conformity studies. What he found was that in small groups, people will conform to a majority opinion. The idea that people have a tendency toward conformity is nothing new, but that they would give obviously wrong answers to simple and unambiguous questions was indeed shocking.

Now think about how hard it is for a more complex idea to take hold across a broad spectrum of people, each with their own biases and opinions. The truth is that majorities don’t just rule, they also influence. More recent research suggests that the effect applies not only to people we know well, but that we are also influenced even by second- and third-degree relationships.

We tend to accept the beliefs of people around us as normal. So if everybody believes that the leaders of Blockbuster, Kodak and Xerox were simply dullards who were oblivious to what was going on around them, then we are very likely to accept that as the truth. Combine this group effect with confirmation bias and it becomes very hard to see things differently.

That’s why it’s important to step back and ask hard questions. Why did these companies fail? Did foolish and lazy people somehow rise to the top of successful organizations, or did smart people make bad decisions? Was there something else to the story? Given the same set of facts, would we act any differently?

The Inevitable Paradigm Shift

The use of the term “paradigm shift” has become so common that most people are unaware that it started out having a very specific meaning. The idea of a paradigm shift was first established by Thomas Kuhn in his book The Structure of Scientific Revolutions, to describe how scientific breakthroughs come to the fore.

It starts with an established model, the kind we learn in school or during initial training for a career. Models become established because they are effective and the more proficient we become at applying a good model, the better we perform. The leaders in any given field owe much of their success to these models.

Yet no model is perfect and eventually anomalies show up. Initially, these are regarded as “special cases” and are worked around. However, as the special cases proliferate, the model becomes increasingly untenable and a crisis ensues. At this point, a fundamental change in assumptions has to take place if things are to move forward.

The problem is that most people who are established in the field believe in the traditional model, because that’s what most people around them believe. So they seek out facts to confirm these beliefs. Few are willing to challenge what “everybody knows” and those who do are often put at great professional and reputational risk.

Why We Fail To Adapt

Now we can begin to see why not only businesses, but whole industries get disrupted. We tend to defend, rather than question, our existing beliefs and those around us often reinforce them. To make matters worse, by this time the idea has become so well established that we will often incur switching costs if we abandon it. That’s why we fail to adapt.

Yet not everybody shares our experiences. Others, who have not grown up with the conventional wisdom, often do not share the same assumptions. They also don’t have an existing peer group that will enforce those assumptions. So for them, the flaws are much easier to see, as are the opportunities to do things another way.

Of course, none of this has to happen. As I describe in Mapping Innovation, some companies, such as IBM and Procter & Gamble, have survived for over a century because they are always actively looking for new problems to solve, which forces them to look for new ideas and insights. It compels them to question what they think they know.

Getting stories right is hard work. You have to force yourself. However, we all have an obligation to get it right. For me, that means relentlessly checking every fact with experts, even for things that I know most people won’t notice. Inevitably, I get things wrong—sometimes terribly wrong—and need to be corrected. That’s always humbling.

I do it because I know stories are powerful. They take on a life of their own. Getting them right takes effort. As my friend Whitney Johnson points out, the best way to avoid disruption is to first disrupt yourself.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.