Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

The End of the Digital Revolution

Here’s What You Need to Know

GUEST POST from Greg Satell

The history of digital technology has largely been one of denial followed by disruption. First came the concept of the productivity paradox, which noted the limited economic impact of digital technology. When e-commerce appeared, many doubted that it could ever compete with physical retail. Similar doubts were voiced about digital media.

Today, it’s hard to find anyone who doesn’t believe in the power of digital technology. Whole industries have been disrupted. New applications driven by cloud computing, artificial intelligence and blockchain promise even greater advancement to come. Every business needs to race to adopt them in order to compete for the future.

Ironically, amid all this transformation the digital revolution itself is ending. Over the next decade, new computing architectures will move to the fore and advancements in areas like synthetic biology and materials science will reshape entire fields, such as healthcare, energy and manufacturing. Simply waiting to adapt won’t be enough. The time to prepare is now.

1. Drive Digital Transformation

As I explained in Mapping Innovation, innovation is never a single event, but a process of discovery, engineering and transformation. Clearly, with respect to digital technology, we are deep into the transformation phase. So the first part of any post-digital strategy is to accelerate digital transformation efforts in order to improve your competitive position.

One company that’s done this very well is Walmart. As an old-line incumbent in the physical retail industry, it appeared to be ripe for disruption as Amazon reshaped how customers purchased basic items. Why drive out to a Walmart store for a package of toothpaste when you can just click a few buttons on your phone?

Yet rather than ceding the market to Amazon, Walmart has invested heavily in digital technology and has achieved considerable success. It wasn’t any one particular tactic or strategy that made the difference, but rather the acknowledgment that every single process needed to be reinvented for the digital age. For example, the company is using virtual reality to revolutionize how it does in-store training.

Perhaps most of all, leaders need to understand that digital transformation is human transformation. There is no shortage of capable vendors that can implement technology for you. What’s key, however, is to shift your culture, processes and business model to leverage digital capabilities.

2. Explore Post-Digital Technologies

While digital transformation is accelerating, advancement in the underlying technology is slowing down. Moore’s law, the consistent doubling of computer chip performance over the last 50 years, is nearing its theoretical limits. It has already slowed down considerably and will soon stop altogether. Yet there are non-digital technologies under development that will be far more powerful than anything we’ve ever seen before.

Consider Intel, which sees its future in what it calls heterogeneous computing, combining traditional digital chips with non-digital architectures such as quantum and neuromorphic. A couple of years ago it announced its Pohoiki Beach neuromorphic system, which processes information up to 1,000 times faster and 10,000 times more efficiently than traditional chips for certain tasks.

IBM has created a network to develop quantum computing technology, which includes research labs, startups and companies that seek to be early adopters of the technology. Like neuromorphic computing, quantum systems have the potential to be thousands, if not millions, of times more powerful than today’s technology.

The problem with these post-digital architectures is that no one really knows how they are going to work. They operate on a very different logic than traditional computers and will require new programming languages and algorithmic strategies. It’s important to start exploring these technologies now or you could find yourself years behind the curve.

3. Focus on Atoms, Not Bits

The digital revolution created a virtual world. My generation was the first to grow up with video games and our parents worried that we were becoming detached from reality. Then computers entered offices and Dan Bricklin created VisiCalc, the first spreadsheet program. Eventually smartphones and social media appeared and we began spending almost as much time in the virtual world as we did in the physical one.

Essentially, what we created was a simulation economy. We could experiment with business models in our computers, find flaws and fix them before they became real. Computer-aided design (CAD) software allowed us to design products in bits before we got down to the hard work of shaping atoms. Because it’s much cheaper to fail in the virtual world than the physical one, this made our economy much more efficient.

Yet the next great transformation will be from bits to atoms. Digital technology is creating revolutions in things like genomics and materials science. Artificial intelligence and cloud computing are reshaping fields like manufacturing and agriculture. Quantum and neuromorphic computing will accelerate these trends.

Much like those new computing architectures, the shift from bits to atoms will create challenges. Applying the simulation economy to the world of atoms will require new skills and we will need people with those skills to move from offices in urban areas to factory floors and fields. They will also need to learn to collaborate effectively with people in those industries.

4. Transformation is Always a Journey, Never a Destination

The 20th century was punctuated by two waves of disruption. The first, driven by electricity and internal combustion, transformed almost every facet of daily life and kicked off a 50-year boom in productivity. The second, driven by the microbe, the atom and the bit, transformed fields such as agriculture, healthcare and management.

Each of these technologies followed the pattern of discovery, engineering and transformation. The discovery phase takes place mostly out of sight, with researchers working quietly in anonymous labs. The engineering phase is riddled with errors, as firms struggle to shape abstract concepts into real products. A nascent technology is easy to ignore, because its impact hasn’t been felt yet.

The truth is that disruption doesn’t begin with inventions, but when an ecosystem emerges to support them. That’s when the transformation phase begins and takes us by surprise, because transformation never plays out like we think it will. The future will always be, to a certain extent, unpredictable for the simple reason that it hasn’t happened yet.

Today, we’re on the brink of a new era of innovation that will be driven by new computing architectures, genomics, materials science and artificial intelligence. That’s why we need to design our organizations for transformation by shifting from vertical hierarchies to horizontal networks.

Most of all, we need to shift our mindsets from seeing transformation as a set of discrete objectives to seeing it as a continuous journey of discovery. Digital technology has only been one phase of that journey. The most exciting things are still yet to come.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Is China Our New Sputnik Moment?

GUEST POST from Greg Satell

When the Soviets launched Sputnik, the first space satellite, into orbit in 1957, it was a wake-up call for America. Over the next year, President Eisenhower would sign the National Defense Education Act to spur science education, increase funding for research and establish NASA and DARPA to drive innovation.

A few years ago, a report by the Council on Foreign Relations (CFR) argued that we are at a similar point today, but with China. While we have been steadily decreasing federal investment in R&D over the past few decades, our Asian rival has been ramping up and now threatens our leadership in key technologies such as AI, genomics and quantum information technology.

Clearly, we need to increase our commitment to science and innovation and that means increasing financial investment. However, what the report makes clear is that money alone won’t solve the problem. We are, in several important ways, actually undermining our ability to innovate, now and in the future. We need to renew our culture of innovation in America.

Educating And Attracting Talent

The foundation of an innovation economy is education, especially in STEM subjects. Historically, America has been the world’s best educated workforce, but more recently we’ve fallen to fifth among OECD countries for post-secondary education. That’s alarming and something we will certainly need to reverse if we are to compete effectively.

Our educational descent can be attributed to three major causes. First, the rest of the world has become more educated, so the competition has become stiffer. Second is financing: tuition has nearly tripled in the last decade and student debt has become so onerous that it now takes about 20 years to pay off four years of college. Third, we need to work harder to attract talented people to the United States.

The CFR report recommends developing a “21st century National Defense Education Act” to create scholarships in STEM areas and making it easier for foreign students to get Green Cards when they graduate from our universities. It also points out that we need to work harder to attract foreign talent, especially in high impact areas like AI, genomics and quantum computing.

Unfortunately, we seem to be going the other way. The number of international students coming to American universities is declining. Policies like the Muslim ban and concerns about gun violence are deterring scientific talent from coming here. The denial rate for H-1B visas has increased from 4% in 2016 to 18% in the first quarter of 2019.

Throughout our history, it has been our openness to new people and new ideas that has made America exceptional. It’s a legitimate question whether that’s still true.

Building Technology Ecosystems

In the 1980s, the US semiconductor industry was on the ropes. Due to increased competition from low-cost Japanese manufacturers, American market share in the DRAM market fell from 70% to 20%. The situation not only had a significant economic impact, there were also important national security implications.

The federal government responded with two initiatives, the Semiconductor Research Corporation and SEMATECH, both of which were nonprofit consortiums that involved government, academia and industry. By the 1990s, American semiconductor manufacturers were thriving again.

Today, we have similar challenges with rare earth elements, battery technology and many manufacturing areas. The Obama administration responded by building consortiums similar to those established for semiconductors: the Critical Materials Institute for rare earth elements, JCESR for advanced batteries and the 14 separate Manufacturing Institutes.

Yet here again, we seem to be backsliding. The current administration has sought to slash funding for the Manufacturing Extension Partnership that supports small and medium-sized producers. An addendum to the CFR report also points out that the administration has pushed for a 30% cut in funding for the national labs, which support much of the advanced science critical to driving American technology forward.

Supporting International Trade and Alliances

Another historical strength of the US economy has been our open approach to trade. The CFR report points out that our role as a “central node in a global network of research and development,” gave us numerous advantages, such as access to foreign talent at R&D centers overseas, investment into US industry and cooperative responses to global challenges.

However, the report warns that “the Trump administration’s indiscriminate use of tariffs against China, as well as partners and allies, will harm U.S. innovative capabilities.” It also faults the Trump administration for pulling out of the Trans-Pacific Partnership trade agreement, which would have bolstered our relationship with Asian partners and increased our leverage over China.

The tariffs undermine American industry in two ways. First, because many of the tariffs are on intermediate goods which US firms use to make products for export, we’re undermining our own competitive position, especially in manufacturing. Second, because trade partners such as Canada and the EU have retaliated against our tariffs, our position is weakened further.

Clearly, we compete in an ecosystem-driven world in which power does not come from the top, but emanates from the center. Traditionally, America has positioned itself at the center of ecosystems by constantly connecting out. Now that process seems to have reversed itself and we are extremely vulnerable to others, such as China, filling the void.

We Need to Stop Killing Innovation in America

The CFR report, whose task force included such luminaries as Admiral William McRaven, former Google CEO Eric Schmidt and economist Laura Tyson, should set alarm bells ringing. Although the report was focused on national security issues, it pertains to general competitiveness just as well and the picture it paints is fairly bleak.

After World War II, America stood almost alone in the world in terms of production capacity. Through smart policy, we were able to transform that initial advantage into long-term technological superiority. Today, however, we have stiff competition in areas ranging from AI to synthetic biology to quantum systems.

At the same time, we seem to be doing everything we can to kill innovation in America. Instead of working to educate and attract the world’s best talent, we’re making it harder for Americans to attain higher education and for top foreign talent to come and work here. Instead of ramping up our science and technology programs, presidential budgets regularly recommend cutting them. Instead of pulling our allies closer, we are pushing them away.

To be clear, America is still at the forefront of science and technology, vying for leadership in every conceivable area. However, as global competition heats up and we need to be redoubling our efforts, we seem to be doing just the opposite. The truth is that our prosperity is not a birthright to which we are entitled, but a legacy that must be lived up to.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Questions Are More Powerful Than We Think

GUEST POST from Greg Satell

When I was 27, I moved to Warsaw, Poland to work in the nascent media industry that was developing there. I had experience working in media in New York, so I was excited to share what I’d learned and was confident that my knowledge and expertise would be well received.

It wasn’t. Whenever I began to explain how a media business was supposed to work, people would ask me, “why?” That forced me to think about it and, when I did, I began to realize that many of the principles I had taken for granted were merely conventions. Things didn’t need to work that way and could be done differently.

That’s when I first learned the power of a question. As Warren Berger explains in A More Beautiful Question, while answers tend to close a discussion, questions help us open new doors and can lead to genuine breakthroughs. Yet not all questions are equal. Asking good questions is a skill that takes practice and effort to learn to do well. Here’s where to start.

Why?

When we are young, we ask lots of “why?” questions. Why is the sky blue? Why can’t we fly like birds? Why do I have to go to bed at a certain time? It is through asking why that we learn basic things about the world. Yet as we get older, we tend to think we know things and stop questioning fundamental assumptions.

That’s where I was when I first arrived in Poland. I had gone through extensive training and knew things. I was proud of the knowledge that I had gained and didn’t question whether those things were necessarily true. My new Polish colleagues, on the other hand, were emerging from 50 years of communism and so were unencumbered with that illusion of knowledge.

In researching my book, Mapping Innovation, I spoke to dozens of world class innovators, and I was amazed how often breakthroughs started with a “Why?” question. For example, Jim Allison, a prominent immunologist who had lost family members to cancer, asked himself why our immune system doesn’t attack tumors.

“Why?” questions can be frustrating, because there are rarely easy answers to them, and they almost always lead to more questions. There’s even a technique called the 5 Whys that is designed to uncover root problems. Nevertheless, if you want to get beyond fundamental assumptions, you need to start with asking “why?”

What If?

While asking “why?” can help alert us to new opportunities, asking “What if?” can lead us into new directions and open new doors. Einstein was famous for these types of thought experiments. Asking “What if I rode on a beam of light?” led to his theory of special relativity and asking “What if I were riding in an elevator accelerating through space?” led to general relativity.

Often, we can use “What if?” questions to propose answers to our “Why?” questions. For example, after Jim Allison asked himself why our immune system doesn’t attack tumors, he followed it up by asking, “what if our immune system actually does attack tumors, but shuts off too soon?”

That took him in a completely new direction. He began to experiment with regulating the immune response and achieved amazing results. Eventually, he would win the Nobel Prize for his role in establishing the new field of cancer immunotherapy. It all started because he was able to imagine new possibilities with a “What if?” question.

Another way we can use “What if?” questions is to remove or add constraints. For example, we can ask ourselves, “What if we didn’t have to worry about costs?” or “What if we could only charge our customers half of what we’re charging now?” Asking “What if?” questions can often alert us to possibilities that we weren’t aware of.

How?

While asking “Why?” and “What if?” questions can open up new opportunities, eventually we need to answer the “How?” question. “How?” questions can be especially difficult because answering them often involves knowledge, resources and capabilities that we do not possess. That’s what makes “How?” questions fundamentally more collaborative.

For example, as a research executive at Eli Lilly, Alph Bingham became interested in why some chemistry problems never got solved. One observation he made was that when he was in graduate school, if there were 20 people in a class, they would often come up with 20 different approaches to a problem, but in industry scientists generally worked alone.

Long an admirer of Linux, he was fascinated with the way thousands of volunteers were able to create and advance complex software that could compete with the best proprietary products. So he began to think “What if we could do something like Linux, but with a bounty?” He thought that if he got more people working on the “How?” question, he might be able to solve more problems.

The fruit of his efforts, called InnoCentive, went live in June 2001 with 21 problems, many of which the company had been working on for years. Although the bounties were small in the context of the pharmaceutical industry — $20,000 to $25,000 — by the end of the year a third of them were solved. It was an astounding success.

It soon became clear that more challenges on the site would attract more solvers, so they started recruiting other companies to the platform. When results improved, they even began inviting competitors to post challenges as well. Today, InnoCentive has over 100,000 solvers who work on hundreds of problems so tough that even the smartest companies can’t crack them.

Building A Culture Of Inquiry

When I first arrived in Poland, I was prepared to give all the answers, because that’s what I was trained for. The media business in New York had been around for a long time and everything was supposedly worked out. Follow the model, I was told, and you’ll be successful. That’s why the questions my new colleagues posed took me by surprise.

Yet once I started asking questions myself, I began to see opportunities everywhere. As I travelled and worked in different countries, I found that everywhere I went, people ran nearly identical businesses in completely different ways and most were convinced that their way was the “right” way. Most saw little utility in questioning how things were done.

That’s why most people can’t innovate. In fact, while researching Mapping Innovation, I found that the best innovators were not the ones who were the smartest or even the ones who worked the hardest, but those who continually looked for new problems to solve. They were always asking new questions; that’s how they found new things.

The truth is that to drive innovation, we need to build a culture of inquiry. We need to ask “why” things are done the way they are done, “what if” we took a different path and “how” things can be done differently. If you don’t explore, you won’t discover and if you don’t discover, you won’t invent. Once you stop inventing, you will be disrupted.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Humans, Not Technology, Drive Business Success

GUEST POST from Greg Satell

Silicon Valley is often known as a cut-throat, technocratic place where the efficiency of algorithms defines success. Competition is ferocious and the pace of disruption and change can be dizzying. It’s not the type of environment where soft skills are valued particularly highly, or even at all.

So, it’s somewhat ironic that Bill Campbell became a Silicon Valley legend by giving hugs and professing love to those he worked with. As coach to executives ranging from Steve Jobs to the entire Google executive team, Campbell preached and practiced a very personal style of business.

Yet while I was reading Trillion Dollar Coach in which former Google executives explain Campbell’s leadership principles, it became clear why he had such an impact. Even in Silicon Valley, technology will only take you so far. The success of a business ultimately depends on the success of the people in it. To compete over the long haul, that’s where you need to focus.

The Efficiency Paradox

In 1911, Frederick Winslow Taylor published The Principles of Scientific Management, based on his experience as a manager in a steel factory. It took aim at traditional management methods and suggested a more disciplined approach. Rather than have workers pursue tasks in their own manner, he sought to find “the one best way” and train accordingly.

Taylor wrote, “It is only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with management alone.”

Before long, Taylor’s ideas became gospel, spawning offshoots such as scientific marketing, financial engineering and the Six Sigma movement. It was no longer enough to simply work hard; you had to measure, analyze and optimize everything. Over the years these ideas have become so central to business thinking that they are rarely questioned.

Yet management guru Henry Mintzberg has pointed out how a “by-the-numbers” depersonalized approach can often backfire. “Managing without soul has become an epidemic in society. Many managers these days seem to specialize in killing cultures, at the expense of human engagement.”

The evidence would seem to back him up. One study found that of 58 large companies that have announced Six Sigma programs, 91 percent trailed the S&P 500 in stock performance. That, in essence, is the efficiency paradox. When you manage only what you can measure, you end up ignoring key factors to success.

How Generosity Drives Innovation

While researching my book, Mapping Innovation, I interviewed dozens of top innovators. Some were world class scientists and engineers. Others were high level executives at large corporations. Still others were highly successful entrepreneurs. Overall, it was a pretty intimidating group.

So, I was surprised to find that, with few exceptions, they were some of the kindest and most generous people I have ever met. The behavior was so consistent that I felt that it couldn’t be an accident. So I began to research the matter further and found that when it comes to innovation, generosity really is a competitive advantage.

For example, one study of star engineers at Bell Labs found that the best performers were not the ones with the best academic credentials, but those with the best professional networks. A similar study of the design firm IDEO found that great innovators essentially act as brokers able to access a diverse array of useful sources.

A third study helps explain why knowledge brokering is so important. Analyzing 17.9 million papers, the researchers found that the most highly cited work tended to be largely rooted within a traditional field, but with just a smidgen of insight taken from some unconventional place. Breakthrough creativity occurs at the nexus of conventionality and novelty.

The truth is that the more you share with others, the more they’ll be willing to share with you, and that makes it much more likely you’ll come across that random piece of information or insight that will allow you to crack a really tough problem.

People As Profit Centers

For many, the idea that innovation is a human centered activity is intuitively obvious. So it makes sense that the high-tech companies that Bill Campbell was involved in would work hard to create environments to attract the best and the brightest people. However, most businesses have much lower margins and have to keep a close eye on the bottom line.

Yet here too there is significant evidence that a human-focused approach to management can yield better results. In The Good Jobs Strategy MIT’s Zeynep Ton found that investing more in well-trained employees can actually lower costs and drive sales. A dedicated and skilled workforce results in less turnover, better customer service and greater efficiency.

For example, when the recession hit in 2008, Mercadona, Spain’s leading discount retailer, needed to cut costs. But rather than cutting wages or reducing staff, it asked its employees to contribute ideas. The result was that it managed to reduce prices by 10% and increased its market share from 15% in 2008 to 20% in 2012.

Its competitors maintained the traditional mindset. They cut wages and employee hours, which saved them some money, but customers found poorly maintained stores with few people to help them, which damaged their brands long-term. The cost savings Mercadona’s employees identified, on the other hand, in many cases improved service and productivity, and these gains persisted long after the crisis was over.

Management Beyond Metrics

The truth is that it’s easy to talk about putting people first, but much harder to do it in practice. Research suggests that once a group goes much beyond 200 people, social relationships break down, so once a business gets beyond that point, it becomes natural to depersonalize management and focus on metrics.

Yet the best managers understand that it’s the people that drive the numbers. As legendary IBM CEO Lou Gerstner once put it, “Culture isn’t just one aspect of the game… It is the game. What does the culture reward and punish – individual achievement or team play, risk taking or consensus building?”

In other words, culture is about values. The innovators I interviewed for my book valued solving problems, so were enthusiastic about sharing their knowledge and expertise with others, who happily reciprocated. Mercadona valued its people, so when it asked them to find ways to save money during the financial crisis, they did so enthusiastically.

That’s why today, three years after his death, Bill Campbell remains a revered figure in Silicon Valley, because he valued people so highly and helped them learn to value each other. Management is not an algorithm. It is, in the final analysis, an intensely human activity and to do it well, you need to put people first.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Building a True Revolution

GUEST POST from Greg Satell

“Revolution” is a term that gets thrown around a lot. There was an Industrial Revolution powered by steam and then another one powered by oil and electricity. The Green Revolution transformed the way we fed ourselves. Many political revolutions have overthrown powerful regimes and the digital revolution changed the way we work with information.

My friend Srdja Popović, who helped lead the Bulldozer Revolution that overthrew Slobodan Milošević in Serbia, told me that the goal of a revolution should be to become mainstream, to be mundane and ordinary. If you are successful it should be difficult to explain what was won because the previous order seems so unbelievable.

The problem with most would-be revolutionaries is that they seek exactly the opposite. All too often, they seek attention, excitement and crowds of admiring fans. Yet all that noise is likely to create enemies just as fast as it makes friends. True revolutions aren’t won in the streets or on the airwaves, but through smart strategies that transform basic beliefs.

A Shift in Paradigms

The idea of a paradigm shift was first established by Thomas Kuhn in his book The Structure of Scientific Revolutions, which explained how scientific breakthroughs come to the fore. It starts with an established model, the kind we learn in school or during initial training for a career. Eventually, those models are shown to be untenable, and a period of instability ensues until a new paradigm can be created and adopted.

While Kuhn developed his theory to describe advancements in science, it has long been clear that it applies more broadly. For example, in my experiences in post-communist countries, the comfort of the broken, but relatively stable, system seemed to many to be preferable to the instability of change.

In the corporate world, models are not only mindsets, but are embedded in systems, processes and practices, which makes them especially pervasive. To bring change about, you need to disrupt basic operations and that comes with costs. Customers, partners and suppliers depend on the stability of how an organization does business.

So, the first step to bringing change about is to create a new vision that can credibly replace the existing model without causing so much chaos that the perceived costs outweigh the benefits. As I explain in my book, Cascades, successful revolutionaries are more than just warriors; they are also educators who are able to mobilize others through the power of their vision.

Mobilizing Small Groups, Loosely Connected

We tend to think of revolutions as mass actions, such as protestors storming the streets or excited customers lining up outside an Apple store, yet they don’t start out that way. Revolutions begin with small groups, loosely connected, but united by a shared purpose.

For example, groups like the Cambridge Apostles and the Bloomsbury Group helped launch intellectual revolutions in early 20th century Cambridge. The Homebrew Computer Club helped bring about the digital revolution. Groups like Otpor, Kmara and Pora formed the grassroots of the Color Revolutions in the early 2000s.

What made these groups effective was their ability to connect and bring others in. For example, the Homebrew Computer Club would convene informal gatherings at a bar after the more formal meetings of the club. In the Serbian revolution that overthrew Slobodan Milošević, Otpor used humor and street pranks to attract people to its cause.

Revolutions are driven by networks, and power in networks emanates from the center. You move to the center by connecting out. That’s how you mobilize and gain influence. What you do with that power and influence, however, will determine if your revolution will succeed.

Influencing Institutional Change

Mobilization can be a powerful force but does not in itself create a revolution. To bring change about, you need to be able to influence institutions that have the power to drive change. For example, Martin Luther King Jr. didn’t write a single piece of legislation or decide a single court case but was able to influence the legislative and legal systems through his activism.

In his efforts to reform the Pentagon, Colonel John Boyd went outside the chain of command to brief congressional staffers and a small circle of journalists. As he gained support from Congress and the media, he was able to put pressure on the Generals and create a reform movement within the US military.

Now compare that to the Occupy Movement, which mobilized activists in 951 cities across 82 countries. However, they wanted to have nothing to do with institutions and actually refused opportunities to influence them. In fact, when Congressman John Lewis, himself a civil rights leader, showed up at a rally, they turned him away. Is it any wonder they never achieved any tangible change?

Make no mistake. If you truly want to bring change about, you have to mobilize somebody to influence something. Merely sending people out in the streets with signs won’t amount to much.

Preparing for the Counterrevolution

In his 2004 State of the Union Address, President Bush delivered a full-throated condemnation of same-sex marriage. Incensed, San Francisco Mayor Gavin Newsom decided to unilaterally begin performing weddings for gay and lesbian couples at City Hall, in what was termed the Winter of Love. 4,027 couples were married before their nuptials were annulled by the California Supreme Court a month later.

The backlash was fierce and put Proposition 8, an amendment to the California Constitution prohibiting gay marriage, on the ballot. It passed with a narrow majority of 52% of the electorate and was so harsh that it not only galvanized LGBT activists, but also began to sway public opinion.

The tide began to change when LGBT activists began to appeal to values they shared with the general public, such as the right to live in committed relationships and raise happy, healthy families. In a Newsweek op-ed, Ted Olson, a conservative Republican lawyer who had previously served as President Bush’s Solicitor General, argued that legalizing same-sex marriage wasn’t strictly a gay issue, but would be “a recognition of basic American principles.”

Today, same-sex marriage has become, to paraphrase my friend Srdja, mundane. It has become a part of everyday life that is widely accepted as the normal course of things. That’s when you know a revolution is complete: not when the fervor of zealots drives people out into the streets, but when those in the mainstream begin to accept it as the normal course of business.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Artificial Intelligence is Forcing Us to Answer Some Very Human Questions

GUEST POST from Greg Satell

Chris Dixon, who invested early in companies ranging from Warby Parker to Kickstarter, once wrote that the next big thing always starts out looking like a toy. That’s certainly true of artificial intelligence, which started out playing games like chess and Go, and competing against humans on the game show Jeopardy!

Yet today, AI has become so pervasive we often don’t even recognize it anymore. Besides enabling us to speak to our phones and get answers back, intelligent algorithms are often working in the background, providing things like predictive maintenance for machinery and automating basic software tasks.

As the technology becomes more powerful, it’s also forcing us to ask some uncomfortable questions that were once more in the realm of science fiction or late-night dorm room discussions. When machines start doing things traditionally considered to be uniquely human, we need to reevaluate what it means to be human and what it is to be a machine.

What Is Original and Creative?

There is an old literary concept called the Infinite Monkey Theorem. The basic idea is that if you had an infinite number of monkeys pecking away at an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare or Tolstoy or any other literary masterpiece.

Today, our technology is powerful enough to simulate infinite monkeys and produce something that looks a whole lot like original work. Music scholar and composer David Cope has been able to create algorithms that produce original works of music which are so good that even experts can’t tell the difference. Companies like Narrative Science are able to produce coherent documents from raw data this way.

So there’s an interesting philosophical discussion to be had about what qualifies as true creation and what’s merely curation. If an algorithm produces War and Peace randomly, does it retain the same meaning? Or is the intent of the author a crucial component of what creativity is about? Reasonable people can disagree.

However, as AI technology becomes more common and pervasive, some very practical issues are arising. For example, Amazon’s Audible unit has created a new captions feature for audio books. Publishers sued, saying it’s a violation of copyright, but Amazon claims that because the captions are created with artificial intelligence, it is essentially a new work.

When machines can create, does that qualify as original, creative intent? Under what circumstances can a work be considered new and original? We are going to have to decide.

Bias And Transparency

We generally accept that humans have biases. In fact, Wikipedia lists over 100 documented biases that affect our judgments. Marketers and salespeople try to exploit these biases to influence our decisions. At the same time, professional training is supposed to mitigate them. To make good decisions, we need to conquer our tendency for bias.

Yet however much we strive to minimize bias, we cannot eliminate it, which is why transparency is so crucial for any system to work. When a CEO is hired to run a corporation, for example, he or she can’t just make decisions willy-nilly, but is held accountable to a board of directors who represent shareholders. Records are kept and audited to ensure transparency.

Machines also have biases which are just as pervasive and difficult to root out. Amazon had to scrap an AI system that analyzed resumes because it was biased against female candidates. Google’s algorithm designed to detect hate speech was found to be racially biased. If two of the most sophisticated firms on the planet are unable to eliminate bias, what hope is there for the rest of us?

So, we need to start asking the same questions of machine-based decisions as we do of human ones. What information was used to make a decision? On what basis was a judgment made? How much oversight should be required and by whom? We all worry about who and what are influencing our children; we need to ask the same questions about our algorithms.

The Problem of Moral Agency

For centuries, philosophers have debated the issue of what constitutes a moral agent, meaning to what extent someone is able to make and be held responsible for moral judgments. For example, we generally do not consider those who are insane to be moral agents. Minors under the age of eighteen are also not fully held responsible for their actions.

Yet sometimes the issue of moral agency isn’t so clear. Consider a moral dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.

On a more basic level, we need to ask who is responsible for a decision an algorithm makes, especially since AI systems are increasingly capable of making judgments humans can’t understand. Who is culpable for an algorithmically driven decision gone bad? By what standard should they be evaluated?

Working Towards Human-Machine Coevolution

Before the industrial revolution, most people earned their living through physical labor. Much like today, tradesmen saw mechanization as a threat — and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher paid cognitive work.

Today, we’re going through a similar shift, but now machines are taking over cognitive tasks. Just as the industrial revolution devalued certain skills and increased the value of others, the age of thinking machines is catalyzing a shift from cognitive skills to social skills. The future will be driven by humans collaborating with other humans to design work for machines that creates value for other humans.

Technology is, as Marshall McLuhan pointed out long ago, an extension of man. We are constantly coevolving with our creations. Value never really disappears; it just shifts to another place. So, when we use technology to automate a particular task, humans must find a way to create value elsewhere, which creates an opportunity to create new technologies.

This is how humans and machines coevolve. The dilemma that confronts us now is that when machines replace tasks that were once thought of as innately human, we must redefine ourselves and that raises thorny questions about our relationship to the moral universe. When men become gods, the only thing that remains to conquer is ourselves.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


The Coming Innovation Slowdown

GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radios, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate out from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative—and possibly even more so—than those that happened in the 1920s and 30s. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover, you won’t invent. And if you don’t invent, you will be disrupted eventually; it’s just a matter of time. It’s always better to prepare than to adapt and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Why Business Strategies Should Not Be Scientific

GUEST POST from Greg Satell

When the physicist Richard Feynman took the podium to give the commencement speech at Caltech in 1974, he told the strange story of cargo cults. In certain islands in the South Pacific, he explained, tribal societies had seen troops build airfields during World War II and were impressed with the valuable cargo that arrived at the bases.

After the troops left, the island societies built their own airfields, complete with mock radios, aircraft and mimicked military drills, in the hopes of attracting cargo themselves. It seems more than a little silly and, of course, no cargo ever came. Yet these tribal societies persisted in their strange behaviors.

Feynman’s point was that we can’t merely mimic behaviors and expect to get results. Yet even today, nearly a half century later, many executives and business strategists have failed to learn that simple lesson by attempting to inject “science” into strategy. The truth is that while strategy can be informed by science, it can never be, and shouldn’t be, truly scientific.

Why Business Case Studies Are Flawed

In 2004, I was leading a major news organization during the Orange Revolution in Ukraine. What struck me at the time was how thousands of people, who would ordinarily be doing thousands of different things, would stop what they were doing and start doing the same thing, all at once, in nearly perfect unison, with little or no formal coordination.

That’s what started the journey that ultimately resulted in my book, Cascades. I wanted to harness those same forces to create change in a business context, much like the protesters in Ukraine achieved in a political context and countless others, such as the LGBT activists, did in social contexts. In my research I noticed how different studies of political and social movements were from business case studies.

With historical political and social movements, such as the civil rights movement in the United States or the anti-Apartheid struggle in South Africa, there was abundant scholarship, often based on hundreds, if not thousands, of contemporary accounts. Business case studies, on the other hand, were largely done by a small team performing a handful of interviews.

When I interviewed people involved in the business cases, I found that they shared some important features with political and social movements that weren’t reported in the case studies. What struck me was that these features were noticed at the time, and in some cases discussed, but weren’t regarded as significant.

To be clear, I’m not arguing that my research was more “scientific,” but I was able to bring a new perspective. Business cases are, necessarily, usually focused on successful efforts, researched after the fact and written from a management perspective. We rarely get much insight into failed efforts or see perspectives from ordinary customers, line workers, competitors and so on.

The Halo Effect

Good case studies are written by experienced professionals who are trained to analyze a business situation from a multitude of perspectives. However, their ability to do that successfully is greatly limited by the fact that they already know the outcome. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture with happy people who worked long hours but loved every minute of it. When the market tanked, however, all of the sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, given a successful example, analysts looked at particular qualities in a positive light. However, when things began to go the other way, those same qualities were perceived as negative.

So when an organization is doing well, we see them as “idealistic” and “values driven,” but when things go sour, those same traits are seen as “arrogant” and “impractical.” Given the same set of facts, we can, and often do, come to very different conclusions when our perception of the outcomes changes.

The Problem with Surveys

Besides case studies, another common technique for analyzing business trends and performance is the executive survey. Typically, a research company or consulting firm sends out questionnaires to a few hundred executives and then analyzes the results. Much as Feynman described, surveys give these studies an air of scientific rigor.

This appearance of scientific rigor is largely a mirage. Yes, there are numbers, graphs and pie charts, much as you would see in a scientific paper, but there are usually important elements missing, such as a clearly formulated hypothesis, a control group and a peer review process.

Another problematic aspect is that these types of studies emphasize what a typical executive thinks about a particular business issue or trend. So what they really examine is the current zeitgeist, which may or may not reflect current market reality. A great business strategy does not merely reflect what typical executives know, but exploits what they do not.

Perhaps most importantly, these types of surveys are generally not marketed as simple opinion surveys, but as sources of profound insight designed to help leaders get an edge over their competitors. The numbers, graphs and pie charts are specifically designed to look “scientific” in order to make them appear to be statements of empirical fact.

Your Strategy Is Always Wrong, You Have to Make It Right

We’d like strategy to be scientific, because few leaders like to admit that they are merely betting on an idea. Nobody wants to go to their investors and say, “I have a hunch about something and I’d like to risk significant resources to find out if I’m right.” Yet that’s exactly what successful businesses do all the time.

If strategy were truly scientific, then you would expect management to get better over time, much as, say, cancer treatment or technology performance does. However, just the opposite seems to be the case. The average tenure of companies on the S&P 500 has been shrinking for decades, and CEOs are fired more often.

The truth is that strategy can never be scientific, because the business context is always evolving. Even if you have the right strategy today, it may not be the right strategy for tomorrow. Changes in technology, consumer behavior and the actions of your competitors make that a near certainty.

So instead of assuming that your strategy is right, a much better course is to assume that it is wrong in at least some aspects. Techniques like pre-mortems and red teams can help you to expose flaws in a strategy and make adjustments to overcome them. The more you assume you are wrong, the better your chances are of being right.

Or, as Feynman himself put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Globalization and Technology Have Failed Us

Globalization and Technology Have Failed Us

GUEST POST from Greg Satell

In November 1989, there were two watershed events that would change the course of world history. The fall of the Berlin Wall would end the Cold War and open up markets across the world. That very same month, Tim Berners-Lee would create the World Wide Web and usher in a new technological era of networked computing.

It was a time of great optimism. Books like Francis Fukuyama’s The End of History predicted a capitalist, democratic utopia, while pundits gushed over the seemingly never-ending parade of “killer apps,” from email and e-commerce to social media and the mobile web. The onward march of history seemed unstoppable.

Today, 30 years on, it’s time to take stock, and the picture is somewhat bleak. Instead of a global technological utopia, there are a number of worrying signs, ranging from income inequality to the rise of popular authoritarianism. The fact is that technology and globalization have failed us. It’s time to address some very real problems.

Where’s the Productivity?

Think back, if you’re old enough, to before this all started. Life before 1989 was certainly less modern; we didn’t have mobile phones or the Internet, but for the most part it was fairly similar to today. We rode in cars and airplanes, watched TV and movies, and enjoyed the benefits of home appliances and air conditioners.

Now try to imagine what life was like in 1900, before electricity and internal combustion gained wide adoption. Even doing a simple task like cooking a meal or cleaning the house took hours of backbreaking labor to haul wood and water. While going back to living in the 1980s would involve some inconvenience, we would struggle to survive before 1920.

The productivity numbers bear out this simple observation. The widespread adoption of electricity and internal combustion led to a 50-year boom in productivity between 1920 and 1970. The digital revolution, on the other hand, created only an 8-year blip between 1996 and 2004. Even today, with artificial intelligence on the rise, productivity remains depressed.

At this point, we have to conclude that despite all the happy talk and grand promises of “changing the world,” the digital revolution has been a huge disappointment. While Silicon Valley has minted billionaires at record rates, digital technology has not made most of us measurably better off economically.

Winners Taking All

The increase of globalization and the rise of digital commerce was supposed to be a democratizing force, increasing competition and breaking the institutional monopoly on power. Yet just the opposite seems to have happened, with a relatively small global elite grabbing more money and more power.

Consider market consolidation. An analysis published in the Harvard Business Review showed that from airlines to hospitals to beer, market share is increasingly concentrated in just a handful of firms. A more expansive study of 900 industries conducted by The Economist found that two-thirds have become more dominated by larger players.

Perhaps not surprisingly, we see the same trends in households as we do with businesses. The OECD reports that income inequality is at its highest level in over 50 years. Even in emerging markets, where millions have been lifted out of poverty, most of the benefits have gone to a small few.

The consequences of growing inequality are concrete and stark. Social mobility has been declining in America for decades, transforming the “land of opportunity” into what is increasingly a caste system. Anxiety and depression are rising to epidemic levels. Life expectancy for the white working class is actually declining, mostly because of “deaths of despair” from drugs, alcohol and suicide. The overall picture is dim and seemingly getting worse.

The Failure Of Freedom

Probably the biggest source of optimism in the 1990s was the end of the Cold War. Capitalism was triumphant and many of the corrupt, authoritarian societies of the former Soviet Union began embracing democracy and markets. Expansion of NATO and the EU brought new hope to more than a hundred million people. China began to truly embrace markets as well.

I moved to Eastern Europe in the late 1990s and was able to observe this amazing transformation for myself. Living in Poland, it seemed as if the entire country was advancing through a lens of time-lapse photography. Old, gray concrete buildings gave way to modern offices and apartment buildings. A prosperous middle class began to emerge.

Yet here as well things now seem to be going the other way. Anti-democratic regimes are winning elections across Europe, while rising resentment against immigrant populations takes hold throughout the Western world. In America, we are increasingly mired in a growing constitutional crisis.

What is perhaps most surprising about the retreat of democracy is that it is happening not in the midst of some sort of global depression, but during a period of relative prosperity and low unemployment. Nevertheless, positive economic data cannot mask the basic truth that a significant portion of the population feels that the system doesn’t work for them.

It’s Time To Start Taking Responsibility For A Messy World

Looking back, it’s hard to see how an era that began with such promise turned out so badly. Yes, we’ve got cooler gadgets and streaming video. There have also been impressive gains in the developing world. Yet in so-called advanced economies, we seem to be worse off. It didn’t have to turn out this way. Our current predicament is the result of choices that we made.

Put simply, we have the problems we have today because they are the problems we have chosen not to solve. While the achievements of technology and globalization are real, they have also left far too many behind. We focused on simple metrics like GDP and shareholder value, but unfortunately the world is not so elegant. It’s a messy place and doesn’t yield so easily to reductionist measures and strategies.

There has, however, been some progress. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, issued a statement in 2019 that discarded the old notion that the sole purpose of a business is to provide value to shareholders. There are also a number of efforts underway to come up with broader measures of well-being to replace GDP.

Yet we still need to learn an important lesson: technology alone will not save us. To solve complex challenges like inequality, climate change and the rise of authoritarianism, we need to take a complex, network-based approach. We need to build ecosystems of talent, technology and information. That won’t happen by itself; we have to make better choices.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Reality Behind Netflix’s Amazing Success

The Reality Behind Netflix's Amazing Success

GUEST POST from Greg Satell

Today, it’s hard to think of Netflix as anything but an incredible success. Its business has grown at breakneck speed and now streams to 190 countries, yet it has also been consistently profitable, earning over $12 billion last year. With hit series like Orange is the New Black and Stranger Things, it broke the record for Emmy Nominations in 2018.

Most of all, the company has consistently disrupted the media business through its ability to relentlessly innovate. Its online subscription model upended the movie rental business and drove industry giant Blockbuster into bankruptcy. Later, it pioneered streaming video and introduced binge watching to the world.

Ordinarily, a big success like Netflix would offer valuable lessons for the rest of us. Unfortunately, its story has long been shrouded in myth and misinformation. That’s why Netflix Co-Founder Marc Randolph’s book, That Will Never Work, is so valuable. It not only sets the story straight, it offers valuable insight into how to create a successful business.

The Founding Myth

Anthropologists have long been fascinated by origin myths. The Greek gods battled and defeated the Titans to establish Olympus. Remus and Romulus were suckled by a she-wolf and then established Rome. Adam and Eve were seduced by a serpent, ate the forbidden fruit and were banished from the Garden of Eden.

The reason every culture invents origin myths is that they help make sense of a confusing world and reinforce the existing order. Before science, people were ill-equipped to explain things like disease and natural disasters. So stories, even if they were apocryphal, gave people comfort that there was rhyme and reason to things.

So it shouldn’t be surprising that an unlikely success such as Netflix has its own origin myth. As legend has it, Co-Founder Reed Hastings misplaced a movie he rented and was charged a $40 late fee. Incensed, he set out to start a movie business that had no late fees. That simple insight led to a disruptive business model that upended the entire industry.

The truth is that late fees had nothing to do with the founding of Netflix. What really happened is that Reed Hastings and Marc Randolph, soon to be unemployed after the sale of their company, Pure Atria, were looking to ride the new e-commerce wave and become the “Amazon of” something. Netflix didn’t arise out of a moment of epiphany, but a process of elimination.

The Subscription Model Was an Afterthought

Netflix really got its start through a morning commute. As Pure Atria was winding down, Randolph and Hastings would drive together from Santa Cruz on Highway 17 over the mountains into Silicon Valley. It was a long drive, which gave them lots of time to toss around e-commerce ideas that ranged from customized baseball bats to personalized shampoo.

The reason they eventually settled on movies was the introduction of DVDs. In 1997, there were very few titles available, so stores didn’t stock them. They were also small and light, which made them easy to ship. Best of all, the movie studios recognized that they had made a mistake by pricing movies on videotape too high and planned to offer DVDs at a price consumers would actually pay.

In the beginning, Netflix earned most of its money selling movies, not renting them. However, before long they realized that it was only a matter of time before Amazon and Walmart began selling DVDs as well. Once that happened, it was unlikely that Netflix would be able to compete, and they would have to find a way to make the rental model work.

The subscription model began as an experiment. No one seemed to want to rent movies by mail, so they were desperate to find a different model and kept trying things until they hit on something that worked. It wasn’t part of a master plan, but the result of trial and error. “If you would have asked me on launch day to describe what Netflix would eventually look like,” Randolph wrote, “I would have never come up with a monthly subscription service.”

The Canada Principle

As Netflix began to grow, it was constantly looking for ways to expand its business. One idea that continually came up was expanding to Canada. It’s just over the border, is largely English speaking, has a business-friendly regulatory environment and shares many cultural traits with the US. It just seemed like an obvious way to increase sales.

Yet they didn’t do it for two reasons. First, while Canada is very similar to the US, it is still another country, with its own currency, laws and other complicating factors. Also, while English is commonly spoken in most parts of Canada, in some regions French predominates. So, what looked simple at first had the potential to become maddeningly complex.

The second and more important reason was that it would have diluted their focus. Nobody has unlimited resources. You only have a certain number of people who can do a certain number of things. For every Canadian problem they had to solve, that was one problem that they weren’t solving in the much larger US business.

That became what Randolph called the “Canada Principle,” or the idea that you need to maximize your focus by limiting the number of opportunities that you pursue. It’s why they dropped DVD sales to focus on renting movies and then dropped a la carte rental to focus on the subscription business. That singularity of focus played a big part in Netflix’s success.

Nobody Knows Anything

Randolph’s mantra throughout the book is that “nobody knows anything.” He borrowed the phrase from the writer William Goldman’s memoir Adventures in the Screen Trade. What Goldman meant was that nobody truly knows how a movie will do until it’s out. Some movies with the biggest budgets and greatest stars flop, while some of the unlikeliest indie films are hits.

For Randolph though, it’s more of a guiding business philosophy. “For every good idea,” he says, “there are a thousand bad ideas it is indistinguishable from.” The only real way to tell the difference is to go out and try them, see what works, discard the failures and build on the successes. You have to, in other words, dare to be crap.

Over the years, I’ve had the chance to get to know hundreds of great innovators and they all tell a different version of the same story. While they often became known for one big idea, they had tried thousands of others before they arrived at the one that worked. It was perseverance and a singularity of focus, not a sudden epiphany, that made the difference.

That’s why the myth of the $40 late fee, while seductive, can be so misleading. What made Netflix successful wasn’t just one big idea. In fact, just about every assumption they made when they started the company was wrong. Rather, it was what they learned along the way that made the difference. That’s the truth of how Netflix became a media powerhouse.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.