Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Where People Go Wrong with Minimum Viable Products

GUEST POST from Greg Satell

Ever since Eric Ries published his bestselling book, The Lean Startup, the idea of a minimum viable product (MVP) has captured the imagination of entrepreneurs and product developers everywhere. The idea of testing products faster and cheaper has an intuitive logic that simply can’t be denied.

Yet what is often missed is that a minimum viable product isn’t merely a stripped down version of a prototype. It is a method to test assumptions and that’s something very different. A single product often has multiple MVPs, because any product development effort is based on multiple assumptions.

Developing an MVP isn’t just about moving faster and cheaper, but also about minimizing risk. In order to test assumptions, you first need to identify them, and that’s a soul-searching process. You have to take a hard look at what you believe, why you believe it and how those ideas can be evaluated. Essentially, MVPs work because they force you to do the hard thinking early.

Every Idea Has Assumptions Built In

In 1999, Nick Swinmurn had an idea for a business. He intended to create a website to sell shoes much like Amazon did for books. This was at the height of the dotcom mania, when sites were popping up to sell everything from fashion to pet food to groceries, so the idea itself wasn’t all that original or unusual.

What Swinmurn did next, however, was. Rather than just assuming that people would be willing to buy shoes online or conducting expensive marketing research, he built a very basic site, went to a shoe store and took pictures of shoes, which he placed on the site. When he got an order, he bought the shoes retail and shipped them out. He lost money on every sale.

That’s a terrible way to run a business, but a great — and incredibly cheap — way to test a business idea. Once he knew that people were willing to buy shoes online, he began to build all the elements of a fully functioning business. Ten years later, the company he created, Zappos, was acquired by Amazon for $1.2 billion.

Notice how he didn’t just assume that his business idea was viable. He tested it and validated it. He also learned other things, such as what styles were most popular. Later, Zappos expanded to include handbags, eyewear, clothing, watches, and kids’ merchandise.

The Cautionary Tale Of Google Glass

Now compare how Swinmurn launched his business with Google’s Glass debacle. Instead of starting with an MVP, it announced a full-fledged prototype complete with a snazzy video. Through augmented reality projected onto the lenses, users could seamlessly navigate an urban landscape, send and receive messages and take photos and videos. It generated a lot of excitement and seemed like a revolutionary new way to interact with technology.

Yet criticism quickly erupted. Many were horrified that hordes of wandering techno-hipsters could be surreptitiously recording us. Others had safety concerns about everything from people being distracted while driving to the devices being vulnerable to hacking. Soon there was a brewing revolt against “Google Glassholes.”

Situations like the Google Glass launch are startlingly common. In fact, the vast majority of new product launches fail because there’s no real way to know whether you have the right product-market fit until customers actually get a chance to interact with the product. Unfortunately, most product development efforts start by seeking out the largest addressable market. That’s almost always a mistake.

If you are truly creating something new and different, you want to build for the few and not the many. That’s the mistake that Google made with its Glass prototype.

Identifying A Hair-On-Fire Use Case

The alternative to trying to address the largest addressable market is to identify a hair-on-fire use case. The idea is to find a potential customer that needs to solve a problem so badly that they almost literally have their hair on fire. These customers will be more willing to co-create with you and more likely to put up with the inevitable bugs and glitches that always come up.

For example, Tesla didn’t start out by trying to build an electric car for the masses. Instead, it created a $100,000 status symbol for Silicon Valley millionaires. Because these customers could afford multiple cars, range wasn’t as much of a concern. The high price tag also made a larger battery more feasible. The original Tesla Roadster had a range of 244 miles.

The Silicon Valley set were customers with their hair on fire. They wanted to be seen as stylish and eco-friendly, so were willing to put up with the inevitable limitations of electric cars. They didn’t have to depend on them for their commute or to pick the kids up at soccer practice. As long as the car was cool enough, they would buy it.

Interestingly, Google Glass made a comeback as an industrial product and had a nice run from 2019 to 2023 before it went away for good. For hipsters, an augmented reality product is far from a necessity, but a business that needs to improve productivity can be a true “hair-on-fire” use case. As the product improves and gains traction, it’s entirely possible that it eventually makes its way back to the consumer market in some form.

Using An MVP To Pursue A Grand Challenge

One of the criticisms of minimum viable products is that they are only suited for simple products and tweaks, rather than truly ambitious projects. Nothing could be further from the truth. The reality is that the higher your ambitions, the more important it is for you to start with a minimum viable product.

IBM is one company that has a long history of pursuing grand challenges, such as the Deep Blue project, which defeated world champion Garry Kasparov at chess, and the Blue Gene project, which created a new class of “massively parallel” supercomputers. More recently came the Jeopardy grand challenge, which led to the development of its current Watson business, and the Debater project.

Notice that none of these were fully featured products. Rather, they were attempts to, as IBM’s Chief Innovation Officer, Bernie Meyerson, put it to me, invent something that “even experts in the field regard as an epiphany and changes assumptions about what’s possible.” That would be hard to do if you were trying to create a full-featured product for a demanding customer.

That’s the advantage of creating an MVP. It essentially acts as a research lab where you can safely test hypotheses and eliminate sources of uncertainty. Once you’ve done that, you can get started trying to build a real business.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Why Communication Skills Trump Coding for Our Kids’ Future

GUEST POST from Greg Satell

Many say that coding is the new literacy. Kids are encouraged to learn programming in school and take coding courses online. In that famous scene in The Graduate, Dustin Hoffman’s character was encouraged by a family friend to go into plastics. If it were shot today, it would probably have been computer code.

This isn’t actually that new. I remember first being taught how to code in middle school in the early 80s in BASIC (a mostly defunct language now). Yet even today, coding is far from an essential skill. In fact, with the rise of no-code platforms, there is a strong argument to be made that code is becoming less important.

Don’t get me wrong, there’s still plenty of coding to be done on the back end and programming is certainly a perfectly reasonable thing to learn. However, there’s no reason people need to learn it to have a successful, productive career. On the other hand, writing, as well as other communication skills, will only become more important in the decades to come.

The Future Is Not Digital

During the past few decades, digital technology has become largely synonymous with innovation. Every 18 months or so, a new generation of processors has come out that was faster, more powerful and cheaper than its predecessors. Entrepreneurs would leverage these new capabilities to create exciting new products and disrupt entire industries.

Yet now that’s all coming to an end. Every technology eventually hits theoretical limits, and that’s where we are now with regard to digital processors. We have maybe one or two generations of advancement left. With some clever workarounds, we may be able to stretch the technology for a decade or so, but it’s highly unlikely that it’ll last any longer than that.

That’s not so horrible. There’s no 11th Commandment that says, “Thou shalt compute in ones and zeroes,” and there are nascent architectures that are potentially far more powerful than digital computers, such as quantum and neuromorphic computing. Neither of these, however, is a digital technology. They operate on fundamentally different logic and will use different code.

So instead of learning to code, maybe our kids would be better served by learning about quantum mechanics or neurology. Those would seem to be far more relevant to their future.

The Shift From Bits To Atoms

Digital technology is largely virtual. Transistors on silicon wafers compute ones and zeroes so that images can flash across our screens. That can be very useful, because we can simulate things on a screen much more cheaply than in the physical world, but it’s also limited. We can’t eat, wear or live in a virtual world.

The important technologies of the next generation, however, will be based on atoms rather than bits. Advances in genomics have led to the new field of synthetic biology, and a revolution in materials science is transforming our ability to develop advanced materials for manufacturing, clean energy and space exploration. So maybe instead of learning how to code, kids should be studying genetics and chemistry.

As we develop new technologies, we will also need to design experiences so that we can use them more effectively. For example, we need linguists and conversational analysts to design better voice interfaces. Kids who study those things may be able to build great careers.

The rapid pace of technological advancement over the next generation will surely put stress on society. Digital technology has helped produce massive income inequality and a rise in extremism. We will need sociologists and political scientists to help us figure out how to cope with these new, much more powerful technologies.

Collaboration Is The New Competitive Advantage

When my generation was in school, we were preparing for a future that seemed pretty clear cut. We assumed we would become doctors, lawyers, executives and engineers and spend our entire lives working in our chosen fields. It didn’t turn out that way. These days a business model is unlikely to last a decade, much less a lifetime.

Kids today need to prepare to become lifelong learners because the pace of change will not slow down. In fact, it is likely to accelerate beyond anything we can imagine today. The one thing we can predict about the future is that collaboration will be critical for success. People like geneticists and quantum scientists will need to work closely with chemists, designers, sociologists and specialists in fields that haven’t even been invented yet.

These are, in fact, longstanding trends. The journal Nature recently noted that the average scientific paper today has four times as many authors as one did in 1950 and the work they are doing is far more interdisciplinary and done at greater distances than in the past. We can only expect these trends to become more prominent in the future.

In order to collaborate effectively, you need to communicate effectively and that’s where writing comes in. Being able to express thoughts and ideas clearly and cogently is absolutely essential to collaboration and innovation.

Writing Well Is Thinking Well

Probably the most overlooked aspect of writing is that it does more than communicate thoughts; it helps form them. As Fareed Zakaria has put it: “Thinking and writing are inextricably intertwined. When I begin to write, I realize that my ‘thoughts’ are usually a jumble of half-baked, incoherent impulses strung together with gaping logical holes between them.”

“Whether you’re a novelist, a businessman, a marketing consultant or a historian,” he continues, “writing forces you to make choices and it brings clarity and order to your ideas.” Zakaria also points to Jeff Bezos’ emphasis on memo writing as an example of how clarity of expression leads to innovation.

In fact, Amazon considers writing so essential to its ability to innovate that it has become a key part of its culture. It’s hard to make much of a career at Amazon if you cannot write well, because to create products and services that are technically sound, easy to use and efficiently executed, a diverse group of highly skilled people need to tightly coordinate their efforts.

Today, as the digital revolution comes to an end and we enter a new era of innovation, it’s easy to get overwhelmed by the rapid advancement of breakthrough technologies. However, the key to success in our uncertain future will be humans collaborating with other humans to design work for machines. That starts with writing effectively.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Robots Aren’t Really Going to Take Over

GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that doesn’t seem bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs, it replaces tasks and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t with automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He describes technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine that anyone working in agriculture a hundred years ago would have been horrified to learn that those jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They might have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Why Most Corporate Innovation Programs Fail

(And How To Make Them Succeed)

GUEST POST from Greg Satell

Today, everybody needs to innovate. So it shouldn’t be surprising that corporate innovation programs have become wildly popular. There is an inherent tradeoff between innovation and the type of optimization that operational executives excel at. Creating a separate unit to address innovation just makes intuitive sense.

Yet corporate innovation programs often fail and it’s not hard to see why. Unlike other business functions, like marketing or finance, in a healthy organization everybody takes pride in their ability to innovate. Setting up a separate innovation unit can often seem like an affront to those who work hard to innovate in operational units.

Make no mistake, a corporate innovation program is no panacea. It doesn’t replace the need to innovate every day. Yet a well designed program can augment those efforts, take the business in new directions and create real value. The key to a successful innovation program is to develop a clear mission, built on shared purpose, that can solve important problems.

A Good Innovation Program Extends, It Doesn’t Replace

It’s no secret that Alphabet is one of the most powerful companies in the world. Nevertheless, it has a vulnerability that is often overlooked. Much like Xerox and Kodak decades ago, it’s highly dependent on a single revenue stream. In 2018, 86% of its revenues came from advertising, mostly from its Google search business.

It is with this in mind that the company created its X division. Because the unit was set up to pursue opportunities outside of its core search business, it didn’t encounter significant resistance. In fact, the X division is widely seen as an extension of what made Alphabet so successful in the first place.

Another important aspect is that the X division provides a platform to incubate internal projects. For example, Google Brain started out as a “20% time project.” As it progressed and needed more resources, it was moved to the X division, where it was scaled up further. Eventually, it returned to the mothership and today is an integral part of the core business.

Notice how the vision of the X division was never to replace innovation efforts in the core business, but to extend them. That’s been a big part of its success and has led to exciting new businesses like Waymo autonomous vehicles and the Verily healthcare division.

Focus On Commonality, Not Difference

All too often, innovation programs thrive on difference. They are designed to put together a band of mavericks and disruptors who think differently than the rest of the organization. That may be great for instilling a strong esprit de corps among those involved with the innovation program, but it’s likely to alienate others.

As I explain in Cascades, any change effort must be built on shared purpose and shared values. That’s how you build trust and form the basis for effective collaboration between the innovation program and the rest of the organization. Without those bonds of trust, any innovation effort is bound to fail.

You can see how that works in Alphabet’s X division. It is not seen as fundamentally different from the core Google business, but rather as channeling the company’s strengths in new directions. The business opportunities it pursues may be different, but the core values are the same.

The key question to ask is why you need a corporate innovation program in the first place. If the answer is that you don’t feel your organization is innovative enough, then you need to address that problem first. A well designed innovation program can’t be a band-aid for larger issues within the core business.

Executive Sponsorship Isn’t Enough

Clearly, no corporate innovation program can be successful without strong executive sponsorship. Commitment has to come from the top. Yet just as clearly, executive sponsorship isn’t enough. Unless you can build support among key stakeholders inside and outside the organization, support from the top is bound to erode.

For example, when Eric Haller started Datalabs at Experian, he designed it to be focused on customers, rather than ideas developed internally. “We regularly sit down with our clients and try and figure out what’s causing them agita,” he told me, “because we know that solving problems is what opens up enormous business opportunities for us.”

Because the Datalabs unit works directly with customers to solve problems that are important to them, it has strong support from a key stakeholder group. Another important aspect at Datalabs is that once a project gets beyond the prototype stage, it goes to one of the operational units within the company to be scaled up into a real business. Over the past five years, businesses originated at Datalabs have added over $100 million in new revenues.

Perhaps most importantly, Haller is acutely aware of how innovation programs can cause resentment, so he works hard to reduce tensions by building collaborations around the organization. Datalabs is not where “innovation happens” at Experian. Rather, it serves to augment and expand capabilities that were already there.

Don’t Look For Ideas, Identify Meaningful Problems

Perhaps most importantly, an innovation program should not be seen as a place to generate ideas. The truth is that ideas can come from anywhere. So designating one particular program in which ideas are supposed to happen will not only alienate the rest of the organization, it is also likely to overlook important ideas generated elsewhere.

The truth is that innovation isn’t about ideas. It’s about solving problems. In researching my book, Mapping Innovation, I came across dozens of stories from every conceivable industry and field, and each one started with someone who came across a problem they wanted to solve. Sometimes it happened by chance, but in most cases I found that great innovators were actively looking for problems that interested them.

If you look at successful innovation programs like Alphabet’s X division and Experian’s Datalabs, the fundamental activity is exploration. X division explores domains outside of search, while Datalabs explores problems that its customers need solved. Once you identify a meaningful problem, the ideas will come.

That’s the real potential of innovation programs. They provide a space to explore areas that don’t fit with the current business, but may play an important role in its future. A good innovation program doesn’t replace capabilities in the core organization, but leverages them to create new opportunities.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Four Major Shifts Driving the 21st Century

GUEST POST from Greg Satell

In 1900, most people lived much like their ancestors had for millennia. They lived and worked on farms, using animal power and hand tools to augment their own abilities. They inhabited small communities and rarely, if ever, traveled far from home. They engaged in small scale violence and lived short, hard lives.

That would all change over the next century as we learned to harness the power of internal combustion, electricity and atoms. These advancements allowed us to automate physical labor on a large scale, engage in mass production, travel globally and wage violence that could level entire cities.

Today, at the beginning of a new century, we are seeing similar shifts that are far more powerful and are moving far more quickly. Disruption is no longer seen as merely an event, but a way of life and the fissures are there for all to see. Our future will depend on our determination to solve problems faster than our proclivity to continually create them.

1. Technology Shifts

At the turn of the 20th century, electricity and internal combustion were over a decade old, but hadn’t made much of an impact yet. That would change in the 1920s, as roads got built and new appliances that harnessed the power of electricity were invented. As ecosystems formed around new technologies, productivity growth soared and quality of life increased markedly.

There would be two more major technology shifts over the course of the century. The Green Revolution and the golden age of antibiotics in the 50s and 60s saved an untold number of lives. The digital revolution in the 90s created a new era of communication and media that still reverberates today.

These technological shifts worked for both good and ill in that they revealed the best and worst parts of human nature. Increased mobility helped to bring about violence on a massive scale during two world wars. The digital revolution made war seem almost antiseptic, enabling precision strikes to kill people half a world away at the press of a button.

Today, we are on the brink of a new set of technological shifts that will be more powerful and more pervasive than any we have seen before. The digital revolution is ending, yet new technologies, such as novel computing architectures, artificial intelligence, as well as rapid advancements in genomics and materials science promise to reshape the world as we know it.

2. Resource Shifts

As new technologies reshaped the 20th century, they also reshaped our use of resources. Some of these shifts were subtle, such as how the invention of synthetic indigo dye in Germany affected farmers in India. Yet the biggest resource shift, of course, was the increase in the demand for oil.

The most obvious impact from the rise of oil was how it affected the Middle East. Previously nomadic societies were suddenly awash in money. Within just a single generation, countries like Saudi Arabia, Iraq and Iran became global centers of power. The Arab Oil Embargo of the 1970s nearly brought western societies to their knees and prolonged the existence of the Soviet Union.

So I was more than a little surprised last year, at a conference in Bahrain, to find that nearly every official talked openly about the need to “get off oil.” With the rise of renewable energy, depending on a single commodity is no longer a viable way to run a society. Today, solar power is soaring in the Middle East.

Still, resource availability remains a powerful force. As the demand for electric vehicles increases, the supply of lithium could become a serious issue. Already China is threatening to leverage its dominance in rare earth elements in the trade war with the United States. Climate change and population growth are also making water a scarce resource in many places.

3. Migrational Shifts

One of the most notable shifts in the 20th century was how the improvement in mobility enabled people to “vote with their feet.” Those who faced persecution or impoverishment could, if they dared, sail off to some other place where the prospects were better. These migrational shifts also helped shape the 20th century and will likely do the same in the 21st.

Perhaps the most notable migration in the 20th century was from Europe to the United States. Before World War I, immigrants from Southern and Eastern Europe flooded American shores and the backlash led to the Immigration Act of 1924. Later, the rise of fascism led to another exodus from Europe that included many of its greatest scientists.

It was largely through the efforts of immigrant scientists that the United States was able to develop technologies like the atomic bomb and radar during World War II. Less obvious, though, are the contributions of second and third generation citizens, who make up a large proportion of the economic and political elite in the US.

Today, the most noteworthy shift is the migration of largely Muslim people from war-torn countries into Europe. Much like America in the 1920s, the strains of taking in so many people so quickly have led to a backlash, with nationalist parties making significant gains in many countries.

4. Demographic Shifts

While the first three shifts played strong roles throughout the 20th century, demographic shifts, in many ways, shaped the second half of the century. The post war generation of Baby Boomers repeatedly challenged traditional values and led the charge in political movements such as the struggle for civil rights in the US, the Prague Spring in Czechoslovakia and the March 1968 protests in Poland.

The main drivers of the Baby Boomers’ influence have been the generation’s size and economic prosperity. In America alone, 76 million people were born between 1946 and 1964, and they came of age in the prosperous years of the 1960s. These factors gave them unprecedented political and economic clout that continues to this day.

Yet now, Millennials, who are more diverse and focused on issues such as the environment and tolerance, are beginning to outnumber Baby Boomers. Much like in the 1960s, their increasing influence is driving trends in politics, the economy and the workplace, and their values often put them in conflict with the Baby Boomers.

However, unlike the Baby Boomers, Millennials are coming of age in an era where prosperity seems to be waning. With Baby Boomers retiring and putting further strains on the economy, especially with regard to healthcare costs, tensions are on the rise.

Building On Progress

As Mark Twain is reputed to have said, “History doesn’t repeat itself, but it does rhyme.” While shifts in technology, resources, migration and demographics were spread throughout the 20th century, today we’re experiencing shifts in all four areas at once. Given that the 20th century was rife with massive wars and genocide, that is somewhat worrying.

Many of the disturbing trends around the world, such as the rise of authoritarian and populist movements, global terrorism and cyber warfare, can be attributed to the four shifts. Yet the 20th century was also a time of great progress. Wars became less frequent, life expectancy doubled and poverty fell while quality of life improved dramatically.

So today, while we face seemingly insurmountable challenges, we should also remember that many of the shifts that cause tensions also give us the power to solve our problems. Advances in genomics and materials science can address climate change and rising healthcare costs. A rising, multicultural generation can unlock creativity and innovation. Migration can move workers to places where they are sorely needed.

The truth is that every disruptive era is not only fraught with danger, but also opportunity. Every generation faces unique challenges and must find the will to solve them. My hope is that we will do the same. The alternative is unthinkable.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Four Transformation Secrets Business Leaders Can Learn from Social and Political Movements

GUEST POST from Greg Satell

In 2004, I was managing a major news organization during the Orange Revolution in Ukraine. One of the things I noticed was that thousands of people, who would normally be doing thousands of different things, would stop what they were doing and start doing the same things all at once, in nearly complete unison, with no clear authority guiding them.

What struck me was how difficult it was for me to coordinate action among the people in my company. I thought if I could harness the forces I saw at work in the Orange Revolution, it could be a powerful model for business transformation. That’s what started me out on the 15-year journey that led to my book, Cascades.

What I found was that many of the principles of successful movements can be applied to business transformation. Also, because social and political movements are so well documented—there are often thousands of contemporary accounts from every conceivable perspective—we can gain insights that traditional case studies miss. Here are four principles you can apply.

1. Failure Doesn’t Have To Be Fatal

One of the things that amazed me while researching revolutionary movements was how consistently failure played a part in their journey. Mahatma Gandhi’s early efforts to bring independence to India led to the massacre at Amritsar in 1919. Our own efforts in Ukraine in 2004 ultimately led to Viktor Yanukovych’s rise to power in 2010.

In the corporate context, it is often a crisis that leads to transformational change. In the early 90s, IBM was nearly bankrupt and many thought the company should be broken up. That’s what led to the Gerstner revolution that put the company back on track, and a similar crisis at Alcoa presaged record profits under Paul O’Neill.

In fact, Lou Gerstner would later say that failure and transformation are inextricably linked. “Transformation of an enterprise begins with a sense of crisis or urgency,” he told a group of Harvard Business School students. “No institution will go through fundamental change unless it believes it is in deep trouble and needs to do something different to survive.”

What’s important about early failures is what you learn from them. In every successful transformation I researched, what turned the tide was when the insights gained from early failures were applied to create a keystone change that set out a clear and concrete initiative, involved multiple stakeholders and paved the way for a greater transformation down the road.

2. Don’t Bet Your Transformation On Persuasion

Any truly transformational change is going to encounter significant resistance. Those who fear change and support the status quo can be expected to work to undermine your efforts. That’s fairly obvious in social and political movements like the civil rights movement or the struggle against Apartheid, but it often gets overlooked in the corporate context.

All too often change management efforts seek to convince opponents through persuasion. That’s unlikely to succeed. Betting your transformation on the idea that, given the right set of arguments or snappy slogans, those who oppose the change that you seek will immediately see the light is unrealistic. What you can do, however, is set out to influence stakeholders who can wield influence.

For example, in the 1980s, anti-Apartheid activists led a campaign against Barclays Bank in British university towns. That probably did little to persuade white nationalists in South Africa, but it severely affected Barclays’ business, and the bank pulled its investments from South Africa. That and similar efforts made Apartheid economically untenable and helped lead to its downfall.

In a corporate transformation, there are many institutions, such as business units, customer groups, industry associations, and others that can wield significant influence. By looking at stakeholder groups more broadly, you can win important allies that can help you drive transformation forward.

3. Be Explicit About Your Values

Today, we regard Nelson Mandela as an almost saintly figure, but it wasn’t always that way. Throughout his career as an activist, he was accused of being a communist, an anarchist and worse. When confronted with these accusations, however, he always pointed out that no one had to guess what he believed in, because it was written down in the Freedom Charter in 1955.

Being explicit about values helped to signal to external stakeholders, such as international institutions, that the anti-Apartheid activists shared common values. In fact, although the Freedom Charter was a truly revolutionary document, its call for things like equal rights and equal protection would be considered utterly unremarkable in most countries.

After Apartheid fell and Mandela rose to power, the values spelled out in the Freedom Charter became important constraints. If, for example, a core value is that all national groups should be treated equally, then Mandela’s government clearly couldn’t oppress whites. His reconciliation efforts are a big part of the reason he is so revered today.

Irving Wladawsky-Berger, one of Gerstner’s key lieutenants, told me how values played a similar role during IBM’s turnaround years. “The Gerstner revolution wasn’t about technology or strategy, it was about transforming our values and our culture to be in greater harmony with the market… Because the transformation was about values first and technology second, we were able to continue to embrace those values as the technology and marketplace continued to evolve.”

4. Every Revolution Inspires A Counter-Revolution

After the Orange Revolution ended in 2005, we felt triumphant. We overcame enormous odds and had won. Little did we know that there would be much darker days ahead. In 2010, Viktor Yanukovych, the man we took to the streets to keep out of power, was elected president in an election that international observers judged to be free and fair.

While surprising, this is hardly uncommon. Similar events took place during the Arab Spring. The government of Hosni Mubarak was overthrown only to be replaced by that of Abdel Fattah el-Sisi, who is possibly even more oppressive. Columbia Business School professor Rita Gunther McGrath points out that in today’s business environment, competitive advantage tends to be transient.

The truth is that every revolution inspires a counter-revolution. That’s why the early days of victory are often the most fragile. That’s when you tend to take your foot off the gas and relax, while at the same time those who oppose the change you worked to build are just beginning to plan to redouble their efforts.

That’s why you need a plan to survive victory from the start rooted in shared values. In the final analysis, driving change is less about a series of objectives than it is about forming a common cause. That’s just as true in a corporate transformation as it is in a social or political revolution.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Digital Revolution Has Been A Giant Disappointment

GUEST POST from Greg Satell

One of the most often repeated episodes in the history of technology is when Steve Jobs was recruiting John Sculley from his lofty position as CEO at Pepsi to come to Apple. “Do you want to sell sugar water for the rest of your life,” Jobs asked, “or do you want to come with me and change the world?”

It’s a strange conceit of digital denizens that their businesses are something nobler than other industries. While it is true that technology can do some wonderful things, if the aim of Silicon Valley entrepreneurs was truly to change the world, why wouldn’t they apply their formidable talents to something like curing cancer or feeding the hungry?

The reality, as economist Robert Gordon explains in The Rise and Fall of American Growth, is that the measurable impact has been relatively meager. According to the IMF, except for a relatively short burst in growth between 1996 and 2004, productivity has been depressed since the 1970s. We need to rethink how technology impacts our world.

The Old Productivity Paradox

In the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year. Strangely though, productivity growth had decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, businesses weren’t necessarily investing to improve productivity, but to survive in a more demanding marketplace.

Yet by the late 1990s, increased computing power combined with the Internet to create a new productivity boom. Many economists hailed the digital age as a “new economy” of increasing returns, in which the old rules no longer applied and a small initial advantage would lead to market dominance. The mystery of the productivity paradox, it seemed, had been solved. We just needed to wait for the technology to hit critical mass.

The New Productivity Paradox

By 2004, the law of increasing returns was there for everyone to see. Google already dominated search, Amazon ruled e-commerce, Apple would go on to dominate mobile computing and Facebook would rule social media. Yet as the dominance of the tech giants grew, productivity would once again fall to depressed levels.

Yet today, more than a decade later, we’re in the midst of a second productivity paradox, just as mysterious as the first one. New technologies like mobile computing and artificial intelligence are there for everyone to see, but they have done little, if anything, to boost productivity.

At the same time, the power of digital technology is diminishing. Moore’s Law, the decades-old paradigm of continuous doubling in the power of computer processing, is slowing down and soon will end completely. Without advancement in the underlying technology, it is hard to see how digital technology will ever power another productivity boom.

Considering the optimistic predictions of digital entrepreneurs like Steve Jobs, this is incredibly disappointing. Compare the meager eight years of elevated productivity that digital technology produced with the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that digital technology simply doesn’t measure up.

The Baumol Effect, The Clothesline Paradox and Other Headwinds

Much like the first productivity paradox, it’s hard to determine exactly why the technological advancement over the last 15 years has amounted to so little. Most likely, it is not one factor in particular, but the confluence of a number of them. Increasing productivity growth in an advanced economy is no simple thing.

One possibility for the lack of progress is the Baumol effect, the principle that some sectors of the economy are resistant to productivity growth. For example, despite the incredible efficiency that Jeff Bezos has produced at Amazon, his barber still only cuts one head of hair at a time. In a similar way, sectors like healthcare and education, which require a large amount of labor inputs that resist automation, will act as a drag on productivity growth.

Another factor is the Clothesline paradox, which gets its name from the fact that when you dry your clothes in a machine, it figures into GDP data, but when you hang them on a clothesline, no measurable output is produced. In much the same way, when you use a smartphone to take pictures or to give you directions, there is considerable benefit that doesn’t result in any financial transactions. In fact, because you use less gas and don’t develop film, GDP decreases somewhat.

Additionally, the economist Robert Gordon, mentioned above, notes six headwinds to economic growth, including aging populations, limits to increasing education, income inequality, outsourcing, environmental costs due to climate change and rising household and government debt. It’s hard to see how digital technology will make a dent in any of these problems.

Technology is Never Enough to Change the World

Perhaps the biggest reason that the digital revolution has been such a big disappointment is because we expected the technology to largely do the work for us. While there is no doubt that computers are powerful tools, we still need to put them to good use and we have clearly missed opportunities in that regard.

Think about what life was like in 1900, when the typical American family didn’t have access to running water, electricity or gas-powered machines such as tractors or automobiles. Even something simple like cooking a meal took hours of backbreaking labor. Yet investments in infrastructure and education combined with technology to produce prosperity.

Today, however, there is no comparable effort to invest in education and healthcare for those who cannot afford it, to limit the effects of climate change, to reduce debt or to do anything of significance to mitigate the headwinds we face. We are awash in nifty gadgets, but in many ways we are no better off than we were 30 years ago.

None of this was inevitable, but rather the result of choices that we have made. We can, if we really want to, make different choices in the days and years ahead. What I hope we have learned from our digital disappointments is that technology itself is never enough. We are truly the masters of our fate, for better or worse.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Shared Values Key to Achieving the Most Radical Visions

GUEST POST from Greg Satell

With the political season heating up, an increasingly frequent topic of discussion is how radical candidates should be. Some say that the optimal strategy is to be mainstream and court the middle. Others argue that it is better to be more extreme and rile up the passions of your most active supporters.

Yet as I explain in Cascades, that’s a false choice. The truth is that once seemingly radical positions, such as voting rights for women, civil rights for disenfranchised racial groups and same-sex marriage, are now considered mainstream. To win those battles, however, activists needed to appeal to shared values.

What’s key isn’t any particular policy, but whether you can appeal to common values and mobilize supporters to influence institutions that will determine whether you can bring change about. You don’t do that through enforcing ideological purity or demonizing your opposition, but by putting forward an affirmative vision for a better future.

Change Starts With Passionate Grievance

As a young man, Nelson Mandela was angry. “I was sympathetic to the ultra-revolutionary stream of African nationalism,” he would later write. “I was angry at the white man, not at racism. While I was not prepared to hurl the white man into the sea, I would have been perfectly happy if he climbed aboard his steamships and left the continent of his own volition.”

After the National Party won elections in 1948 on a white supremacist platform, things got worse for native blacks, Indians and coloureds (mixed race). Mixed marriages were outlawed and it was mandated that races would live in segregated areas. This policy of Apartheid would only become more extreme over the next half century.

Mandela and his comrades stepped up their efforts as well. Rather than merely protesting, the African National Congress (ANC) adopted a program of direct action, including boycotts, stay-at-homes, strikes and other tactics designed to undermine the Apartheid regime. Whatever hopes for working within the system that had remained were now gone for good.

Yet while Mandela’s actions intensified, his views tempered somewhat. Originally skeptical of building links with other racial groups, he began to see the value of collaboration. That’s what set the stage for the first blow against Apartheid: the Freedom Charter.

Searching Out Common Values

In June 1955, the Congress of the People, a gathering that included blacks, Coloureds, Indians and liberal whites, convened to draft and adopt the Freedom Charter, much like the Continental Congress gathered to produce the Declaration of Independence in America. The idea was to come up with a common and inclusive vision.

However, the Freedom Charter was anything but moderate. It was a “revolutionary document precisely because the changes it envisioned could not be achieved without radically altering the economic and political structure of South Africa… In South Africa, to merely achieve fairness, one had to destroy apartheid itself, for it was the very embodiment of injustice.”

Yet despite its radical aims, the Freedom Charter spoke to common values, such as equal rights and equal protection under the law—not just among the signatories, but for anyone living in a free society. It didn’t seem so at the time—and the struggle would go on for decades—but the Freedom Charter ended up being the first major blow to Apartheid.

In later years, when Mandela was accused of being a communist, an anarchist and worse, he would point out that nobody had to guess what he believed, because it had been written down in the Freedom Charter in 1955. Of course, it would have been conceived differently if it had been an ANC-only document—and some within the ANC bitterly protested—but it was the common ground that document created that brought about the end of Apartheid.

Influencing Institutions

All too often, those who seek to bring about change, whether that change be in an organization, an industry, a community or throughout society as a whole, seek only to mobilize support among interest groups. That’s necessary, but far from sufficient. The truth is that only institutions can bring about real change.

In South Africa, Mandela and his comrades suffered under an all-powerful regime. Yet what they understood was that the government relied on many institutions outside the country for its survival. That was a significant vulnerability that could be exploited by mobilizing interest groups to influence key institutions.

One key campaign was waged against Barclays Bank in British university towns. In 1984, for example, anti-Apartheid activists spray-painted “WHITES ONLY” and “BLACKS” above pairs of Barclays ATMs to draw attention to the bank’s investments in South Africa.

This, of course, had little to no effect on public opinion in South Africa, but it meant a lot to the English university students that the bank wanted to attract. Barclays’ share of student accounts quickly plummeted from 27% to 15%, and two years later Barclays pulled out all of its investments from the country.

It was a major blow that helped lead to other corporate divestments, sanctions from western governments and, eventually, the downfall of the regime. Apartheid had simply become economically untenable.

Surviving Victory

Mandela’s ascension to the Presidency of South Africa in 1994 was a historic triumph, but if it had stopped there the victory would have been limited. As we have seen more recently in places ranging from Ukraine to Egypt, even great, hard-fought victories can quickly be reversed. Every revolution inspires a counter-revolution.

To achieve lasting change, you need to plan to survive victory and you do that by reaffirming your commitment to common values. In the case of South Africa, that meant adhering to the principles of the Freedom Charter, which called for equal rights for all citizens, even for the white oppressors. That’s why today Mandela is remembered as a hero and not some tin-pot dictator.

In researching Cascades, I found that these principles held true not only in political and social contexts, but also in the corporate world. Radical change was achieved in firms such as IBM, Alcoa and Experian, as well as in fields like healthcare and education. In many cases, the degree of change surpassed anything anyone thought possible.

The truth is that success doesn’t depend on how radical or how moderate the vision, but how well you can appeal to shared values. Or, as Mandela himself put it, “to be free is not merely to cast off one’s chains, but to live in a way that respects and enhances the freedom of others.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Three Cognitive Biases That Can Kill Innovation

Three Cognitive Biases That Can Kill Innovation

GUEST POST from Greg Satell

Probably the biggest myth about innovation is that it’s about ideas. It’s not. It’s about solving problems. The truth is that nobody cares what ideas you have; they care about the problems you can solve for them. So don’t worry about coming up with a brilliant idea. If you find a meaningful problem, the ideas will come.

The problem with ideas is that so many of them are bad. Remember New Coke? It seemed like a great idea at first. The new formula tested well among consumers and even had some initial success in the market. Yet what the marketers missed was that many consumers had an emotional attachment to the old formula, and the change created a huge backlash.

Our minds tend to play tricks on us. We think we’ve done our homework and that we base our ideas on solid insights, but often that’s not the case. We see what we want to see and then protect our ideas by ignoring or explaining away facts that don’t fit the pattern. In particular, we need to learn to identify and avoid these three cognitive biases that kill innovation.

1. Availability Bias

It’s easy to see where the marketers at Coke went wrong. They had done extensive market testing and the results came back wildly positive. People consistently preferred the new Coke formula over the old one. The emotional ties that people had to the old formula, however, were harder to see.

Psychologists call these types of errors availability bias. We tend to base our judgments on the information that is most easily available, such as market testing, and neglect other factors, such as emotional bonds. Often the most important factors are the ones that you don’t see and therefore don’t figure into your decision making.

The way to limit availability bias is to push yourself to get uncomfortable facts in front of you. In his book Farsighted, Steven Johnson notes two techniques that can help. The first, called a pre-mortem, asks you to imagine that the project has failed and to figure out why. The second, called red teaming, sets up an independent team to find holes in the idea.

Amazon’s innovation process is specifically designed to overcome availability bias. Project managers are required to write a six-page memo at the start of every project, which includes a mock press release along with anticipated reactions, both positive and negative. Through a series of meetings, other stakeholders do their best to poke holes in the idea. None of this guarantees success, but Amazon’s track record is exceptionally good.

2. Confirmation Bias

Availability bias isn’t the only way we come to believe things that aren’t true. The machinery in our brains is naturally geared towards making quick judgments. We tend to lock onto the first information we see (called priming) and that affects how we see subsequent data (framing). Sometimes, we just get bad information from a seemingly trustworthy, but unreliable source.

In any case, once we come to believe something, we will tend to look for information that confirms it and discount contrary evidence. We will also interpret new information differently according to our preexisting beliefs. When presented with a relatively ambiguous set of facts, we are likely to see them as supporting our position.

This dynamic plays out in groups as well. We tend to want to form an easy consensus with those around us. Dissent and conflict are uncomfortable. In one study that asked participants to solve a murder mystery, the more diverse teams came up with better answers, but reported doubt and discomfort. The more homogeneous teams performed worse, but were more confident.

Imagine yourself sitting in a New Coke planning meeting. How much courage would it have taken to challenge the consensus view? How much confidence would you have in your dissent? What repercussions would you be willing to risk? We’d all like to think that we’d speak up, but would we?

3. The Semmelweis Effect

In 1847, a young doctor named Ignaz Semmelweis had a major breakthrough. Working in a maternity ward, he discovered that a regime of hand washing could dramatically lower the incidence of childbed fever. Unfortunately, instead of being lauded for his accomplishment, he was castigated and considered a quack. The germ theory of disease didn’t take hold until decades later.

The phenomenon is now known as the Semmelweis effect, the tendency for professionals in a particular field to reject new knowledge that contradicts established beliefs. The Semmelweis effect is, essentially, confirmation bias on a massive scale. It is simply very hard for people to discard ideas that they feel have served them well.

However, look deeper into the Semmelweis story and you will find a second effect that is just as damaging. When the young doctor found that his discovery met some initial resistance, he railed against the establishment instead of collecting more evidence and formatting and communicating his data more clearly. He thought it just should have been obvious.

Compare that to the story of Jim Allison, who discovered cancer immunotherapy. At first, pharmaceutical companies refused to invest in Jim’s idea. Yet unlike Semmelweis, he kept working to gather more data and convince others that his idea could work. Unlike Semmelweis, who ended up dying in an insane asylum, Jim won the Nobel Prize.

We all have a tendency to reject those who reject our ideas. Truly great innovators like Jim Allison, however, just look at that as another problem to solve.

Don’t Believe Everything You Think

When I’m in the late stages of writing a book, I always start sending out sections to be fact checked by experts and others who have first-person knowledge of events. In some cases, these are people I have interviewed extensively, but in others sending out the fact checks is my first contact with them.

I’m always amazed how generous people are with their time, willing in some cases to go through material thoroughly just to help me get the story straight. Nevertheless, whenever something comes back wrong, I always feel defensive. I know I shouldn’t, but I do. When told that I’m wrong, I just have the urge to push back.

But I don’t. I fight that urge because I know how dangerous it is to believe everything you think, which is why I go to so much effort to send out the fact checks in the first place. That’s why, instead of publishing work that’s riddled with errors and misinterpretations, my books have held up even after being read thousands of times. I’d rather feel embarrassed at my desk than in the real world.

The truth is that our most fervently held beliefs are often wrong. That’s why we need to make the effort to overcome the flawed machinery in our minds. Whether that is through a formal process like pre-mortems and red teams, or simply seeking out a fresh pair of eyes, we need to avoid believing everything we think.

That’s much easier said than done, but if you want to innovate consistently, that’s what it takes.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Ideas Have Limited Value

Ideas Have Limited Value

GUEST POST from Greg Satell

There is a line of thinking that says that the world is built on ideas. It was an idea that launched the American Revolution and created a nation. It was an idea that led Albert Einstein to pursue relativity, Jonas Salk to invent a vaccine and Steve Jobs to create the iPhone and build the most valuable company in the world.

It is because of the power of ideas that we hold them so dear. We want to protect those we believe are valuable and sometimes become jealous when others think them up first. There’s nothing so rapturous as the moment of epiphany in which an idea forms in our mind and begins to take shape.

Clearly, ideas are important, but not as important as many believe. America is what it is today, for better or worse, not just because of the principles of its founding, but because of the actions that came after it. We revere people like Einstein, Salk and Jobs not because of their ideas, but because of what they did with them. The truth is that while possibilities are infinite, the value of an idea on its own is limited.

The Winklevoss Affair

The muddled story of Facebook’s origin is now well known. Mark Zuckerberg met with the Winklevoss twins and another Harvard classmate to discuss building a social network together. Zuckerberg agreed, but then sandbagged his partners while he built and launched a competing site. He would later pay out a multimillion-dollar settlement for his misdeeds.

Zuckerberg and the Winklevoss twins were paired in the news together again recently when Facebook announced that it’s developing a new cryptocurrency called Libra. As it happens, the Winklevoss twins have been high profile investors in Bitcoin for a while now. The irony was too delicious for many in the media to ignore. First he stole their idea for Facebook and now he’s doing the same with cryptocurrencies!

Of course this is ridiculous. Social networks like Friendster and Myspace existed before Facebook and many others came after. Most failed. In much the same way, many people today have ideas about starting cryptocurrency businesses. Most of them will fail too. The value of an initial idea is highly questionable.

Different people have similar ideas all the time. In fact, a landmark study published in 1922 identified 148 major inventions or discoveries that at least two different people, working independently, arrived at around the same time. So the fact that both the Winklevoss twins and Zuckerberg wanted to launch a social network was meaningless.

The truth is that Zuckerberg didn’t have to pay the Winklevoss twins because he stole their idea, but because he used their trust to actively undermine their business to benefit his. His crime wasn’t creation, but destruction.

The Semmelweis Myth

In 1847, a young doctor named Ignaz Semmelweis had a major breakthrough. Working in a maternity ward, he discovered that a regime of hand washing could dramatically lower the incidence of childbed fever. Unfortunately, the medical establishment rejected his idea and the germ theory of disease didn’t take hold until decades later.

The phenomenon is now known as the Semmelweis effect, the tendency for people to reject new knowledge that contradicts established beliefs. We tend to think that a great idea will be immediately obvious to everyone, but the opposite usually happens. Ideas that have the power to change the world always arrive out of context for the simple reason that the world hasn’t changed yet.

However, the Semmelweis effect is misleading. As Sherwin Nuland explains in The Doctor’s Plague, there’s more to the story than resistance to a new idea. Semmelweis didn’t see the value in communicating his work effectively, formatting his publications clearly or even collecting data in a manner that would gain his ideas greater acceptance.

Here again, we see the limits of ideas. Like a newborn infant, they can’t survive alone. They need to be nurtured to grow. They need to make friends, interact with other ideas and mature. The tragedy of Semmelweis is not that the medical establishment did not immediately accept his idea, but that he failed to steward it in such a way that it could spread and make an impact.

Why Blockbuster Video Really Failed

One of the most popular business myths today is that of Blockbuster Video. As the story is usually told, the industry giant failed to recognize the disruptive threat that Netflix represented. The truth is that the company’s leadership not only recognized the problem, but developed a smart strategy and executed it well.

The failure, in fact, had less to do with strategy and tactics than it did with managing stakeholder networks. Blockbuster moved quickly to launch an online business, cut late fees and innovated its business model. However, resistance from franchisees, who were concerned that the changes would kill their business, and from investors and analysts, who balked at the cost of the initiatives, sent the stock price reeling.

From there things spiraled downward. The low stock price attracted the corporate raider Carl Icahn, who got control of the board. His overbearing style led to a compensation dispute with Blockbuster’s CEO, John Antioco. Frustrated, Antioco negotiated his exit and left the company in July of 2007.

His successor, Jim Keyes, was determined to reverse Antioco’s strategy. He cut investment in the subscription model, reinstated late fees and shifted focus back to the retail stores in a failed attempt to “leapfrog” the online subscription model. Three years later, in 2010, Blockbuster filed for bankruptcy.

The Fundamental Fallacy Of Ideas

One of the things that amazed me while I was researching my book Cascades was how often movements behind powerful ideas failed. The ones that succeeded weren’t those with different ideas or those of higher quality, but those that were able to align small groups, loosely connected, but united by a shared purpose.

The stories of the Winklevoss twins, Ignaz Semmelweis and Blockbuster Video are all different versions of the same fundamental fallacy, that ideas, if they are powerful enough, can stand on their own. Clearly, that’s not the case. Ideas need to be adopted and then combined with other ideas to make an impact on the world.

The truth is that ideas need ecosystems to support them and that doesn’t happen overnight. To make an idea viable in the real world it needs to continually connect outward, gaining adherents and widening its original context. That takes more than an initial epiphany. It takes the will to make the idea subservient to its purpose.

What we have to learn to accept is that what makes an idea powerful is its ability to solve problems. The ideas embedded in the American Constitution were not new at the time of the country’s founding, but they gained power through their application in the real world. In much the same way, we revere Einstein’s relativity, Salk’s vaccine and Jobs’ iPhone because of their impact on the world.

As G.H. Hardy once put it, “For any serious purpose, intelligence is a very minor gift.” The same can be said about ideas. They do not and cannot stand alone, but need the actions of people to bring them to life.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels
