
About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Building a True Revolution

GUEST POST from Greg Satell

“Revolution” is a term that gets thrown around a lot. There was an Industrial Revolution powered by steam and then another one powered by oil and electricity. The Green Revolution transformed the way we fed ourselves. Many political revolutions have overthrown powerful regimes and the digital revolution changed the way we work with information.

My friend Srdja Popović, who helped lead the Bulldozer Revolution that overthrew Slobodan Milošević in Serbia, told me that the goal of a revolution should be to become mainstream, to be mundane and ordinary. If you are successful it should be difficult to explain what was won because the previous order seems so unbelievable.

The problem with most would-be revolutionaries is that they seek exactly the opposite. All too often, they seek attention, excitement and crowds of admiring fans. Yet all that noise is likely to create enemies just as fast as it makes friends. True revolutions aren’t won in the streets or on the airwaves, but through smart strategies that transform basic beliefs.

A Shift in Paradigms

The idea of a paradigm shift was first established by Thomas Kuhn in his book The Structure of Scientific Revolutions, which explained how scientific breakthroughs come to the fore. It starts with an established model, the kind we learn in school or during initial training for a career. Eventually, those models are shown to be untenable, and a period of instability ensues until a new paradigm can be created and adopted.

While Kuhn developed his theory to describe advancements in science, it has long been clear that it applies more broadly. For example, in my experience in post-communist countries, the comfort of the broken, but relatively stable, system seemed to many to be preferable to the instability of change.

In the corporate world, models are not only mindsets, but are embedded in systems, processes and practices, which makes them especially pervasive. To bring change about, you need to disrupt basic operations and that comes with costs. Customers, partners and suppliers depend on the stability of how an organization does business.

So, the first step in driving change is to create a new vision that can credibly replace the existing model without causing so much chaos that the perceived costs outweigh the benefits. As I explain in my book, Cascades, successful revolutionaries are more than just warriors; they are also educators who are able to mobilize others through the power of their vision.

Mobilizing Small Groups, Loosely Connected

We tend to think of revolutions as mass actions, such as protestors storming the streets or excited customers lining up outside an Apple store, yet they don’t start out that way. Revolutions begin with small groups, loosely connected, but united by a shared purpose.

For example, groups like the Cambridge Apostles and the Bloomsbury Group helped launch intellectual revolutions in early 20th-century England. The Homebrew Computer Club helped bring about the digital revolution. Groups like Otpor, Kmara and Pora formed the grassroots of the Color Revolutions in the early 2000s.

What made these groups effective was their ability to connect and bring others in. For example, the Homebrew Computer Club would convene informal gatherings at a bar after the club's more formal meetings. In the Serbian revolution that overthrew Slobodan Milošević, Otpor used humor and street pranks to attract people to their cause.

Revolutions are driven by networks and power in networks emanates from the center. You move to the center by connecting out. That’s how you mobilize and gain influence. What you do with that power and influence, however, will determine if your revolution will succeed.

Influencing Institutional Change

Mobilization can be a powerful force but does not in itself create a revolution. To bring change about, you need to be able to influence institutions that have the power to drive change. For example, Martin Luther King Jr. didn’t write a single piece of legislation or decide a single court case but was able to influence the legislative and legal systems through his activism.

In his efforts to reform the Pentagon, Colonel John Boyd went outside the chain of command to brief congressional staffers and a small circle of journalists. As he gained support from Congress and the media, he was able to put pressure on the Generals and create a reform movement within the US military.

Now compare that to the Occupy Movement, which mobilized activists in 951 cities across 82 countries. However, its members wanted nothing to do with institutions and actually refused opportunities to influence them. In fact, when Congressman John Lewis, himself a civil rights leader, showed up at a rally, they turned him away. Is it any wonder they never achieved any tangible change?

Make no mistake. If you truly want to bring change about, you have to mobilize somebody to influence something. Merely sending people out in the streets with signs won’t amount to much.

Preparing for the Counterrevolution

In his 2004 State of the Union Address, President Bush delivered a full-throated condemnation of same-sex marriage. Incensed, San Francisco Mayor Gavin Newsom decided to unilaterally begin performing weddings for gay and lesbian couples at City Hall, in what was termed the Winter of Love. 4,027 couples were married before their nuptials were annulled by the California Supreme Court a month later.

The backlash was fierce and put Proposition 8, an amendment to the California Constitution prohibiting gay marriage, on the ballot. It passed with a narrow majority of 52% of the electorate and was so harsh that it not only galvanized LGBT activists, but also began to sway public opinion.

The tide began to change when LGBT activists began to appeal to values they shared with the general public, such as the right to live in committed relationships and raise happy, healthy families. In a Newsweek op-ed, Ted Olson, a conservative Republican lawyer who had previously served as President Bush’s Solicitor General, argued that legalizing same-sex marriage wasn’t strictly a gay issue, but would be “a recognition of basic American principles.”

Today, same-sex marriage has become, to paraphrase my friend Srdja, mundane. It has become a part of everyday life that is widely accepted as the normal course of things. That’s when you know a revolution is complete. Not when the fervor of zealots drives people out into the streets, but when those in the mainstream begin to accept it as the normal course of business.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Artificial Intelligence is Forcing Us to Answer Some Very Human Questions

GUEST POST from Greg Satell

Chris Dixon, who invested early in companies ranging from Warby Parker to Kickstarter, once wrote that the next big thing always starts out looking like a toy. That’s certainly true of artificial intelligence, which started out playing games like chess and Go, and competing against humans on the game show Jeopardy!

Yet today, AI has become so pervasive we often don’t even recognize it anymore. Besides enabling us to speak to our phones and get answers back, intelligent algorithms are often working in the background, providing things like predictive maintenance for machinery and automating basic software tasks.

As the technology becomes more powerful, it’s also forcing us to ask some uncomfortable questions that were once more in the realm of science fiction or late-night dorm room discussions. When machines start doing things traditionally considered to be uniquely human, we need to reevaluate what it means to be human and what it means to be a machine.

What Is Original and Creative?

There is an old literary concept called the Infinite Monkey Theorem. The basic idea is that if you had an infinite number of monkeys pecking away at an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare or Tolstoy or any other literary masterpiece.
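The arithmetic behind the theorem is easy to sketch. A minimal Python simulation (the function names here are illustrative, not from any source) shows why "in time" means an impossibly long time for anything beyond a few letters:

```python
import random
import string

def expected_attempts(target: str, alphabet: str = string.ascii_lowercase) -> int:
    """Expected number of random strings of len(target) needed before
    one matches the target exactly: |alphabet| ** len(target)."""
    return len(alphabet) ** len(target)

def monkey_types(target: str, tries: int, seed: int = 42) -> bool:
    """Simulate `tries` random typing attempts; True if the target ever appears."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase
    return any(
        "".join(rng.choice(alphabet) for _ in range(len(target))) == target
        for _ in range(tries)
    )

# Even a six-letter word takes ~26**6 attempts on average; a full play
# is astronomically beyond any real-world computation.
print(expected_attempts("to"))      # 676
print(expected_attempts("hamlet"))  # 308915776
```

The exponential blow-up is the whole story: each added character multiplies the expected effort by the alphabet size, which is why random typing produces short words easily and masterpieces never.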

Today, our technology is powerful enough to simulate infinite monkeys and produce something that looks a whole lot like original work. Music scholar and composer David Cope has been able to create algorithms that produce original works of music which are so good that even experts can’t tell the difference. Companies like Narrative Science are able to produce coherent documents from raw data this way.

So there’s an interesting philosophical discussion to be had about what qualifies as true creation and what’s merely curation. If an algorithm produces War and Peace randomly, does it retain the same meaning? Or is the intent of the author a crucial component of what creativity is about? Reasonable people can disagree.

However, as AI technology becomes more common and pervasive, some very practical issues are arising. For example, Amazon’s Audible unit has created a new captions feature for audio books. Publishers sued, saying it’s a violation of copyright, but Amazon claims that because the captions are created with artificial intelligence, it is essentially a new work.

When a machine creates, does that qualify as original, creative intent? Under what circumstances can a work be considered new and original? We are going to have to decide.

Bias And Transparency

We generally accept that humans have biases. In fact, Wikipedia lists over 100 documented biases that affect our judgments. Marketers and salespeople try to exploit these biases to influence our decisions. At the same time, professional training is supposed to mitigate them. To make good decisions, we need to conquer our tendency for bias.

Yet however much we strive to minimize bias, we cannot eliminate it, which is why transparency is so crucial for any system to work. When a CEO is hired to run a corporation, for example, he or she can’t just make decisions willy-nilly, but is held accountable to a board of directors who represent shareholders. Records are kept and audited to ensure transparency.

Machines also have biases which are just as pervasive and difficult to root out. Amazon had to scrap an AI system that analyzed resumes because it was biased against female candidates. Google’s algorithm designed to detect hate speech was found to be racially biased. If two of the most sophisticated firms on the planet are unable to eliminate bias, what hope is there for the rest of us?

So, we need to start asking the same questions of machine-based decisions as we do of human ones. What information was used to make a decision? On what basis was a judgment made? How much oversight should be required and by whom? We all worry about who and what are influencing our children; we need to ask the same questions about our algorithms.

The Problem of Moral Agency

For centuries, philosophers have debated the issue of what constitutes a moral agent, meaning to what extent someone is able to make and be held responsible for moral judgments. For example, we generally do not consider those who are insane to be moral agents. Minors under the age of eighteen are also not fully held responsible for their actions.

Yet sometimes the issue of moral agency isn’t so clear. Consider a moral dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
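To make "explicitly encoded" concrete, here is a deliberately toy sketch of what such a rule might look like in code. Everything in it is hypothetical: the names, and especially the crude utilitarian fewest-people-at-risk rule, are chosen only to show that someone has to write the rule down:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """People put at risk if the vehicle takes a given action."""
    passengers_at_risk: int
    pedestrians_at_risk: int

def choose_action(swerve: Outcome, stay: Outcome) -> str:
    """A toy utilitarian rule: pick whichever action puts fewer people
    at risk overall. A real system would need far richer criteria, and
    someone, somewhere, must decide what those criteria are."""
    def total(o: Outcome) -> int:
        return o.passengers_at_risk + o.pedestrians_at_risk
    return "swerve" if total(swerve) < total(stay) else "stay"

# Staying on course endangers five pedestrians; swerving endangers one bystander:
print(choose_action(Outcome(1, 1), Outcome(1, 5)))  # swerve
```

The point is not the rule itself but that the moral judgment, once left to a human in the moment, now has to be written as an explicit line of code long before the moment arrives.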

On a more basic level, we need to ask who is responsible for a decision an algorithm makes, especially since AI systems are increasingly capable of making judgments humans can’t understand. Who is culpable for an algorithmically driven decision gone bad? By what standard should they be evaluated?

Working Towards Human-Machine Coevolution

Before the industrial revolution, most people earned their living through physical labor. Much like today, tradesmen saw mechanization as a threat — and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher paid cognitive work.

Today, we’re going through a similar shift, but now machines are taking over cognitive tasks. Just as the industrial revolution devalued certain skills and increased the value of others, the age of thinking machines is catalyzing a shift from cognitive skills to social skills. The future will be driven by humans collaborating with other humans to design work for machines that creates value for other humans.

Technology is, as Marshall McLuhan pointed out long ago, an extension of man. We are constantly coevolving with our creations. Value never really disappears, it just shifts to another place. So, when we use technology to automate a particular task, humans must find a way to create value elsewhere, which creates an opportunity to create new technologies.

This is how humans and machines coevolve. The dilemma that confronts us now is that when machines replace tasks that were once thought of as innately human, we must redefine ourselves and that raises thorny questions about our relationship to the moral universe. When men become gods, the only thing that remains to conquer is ourselves.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


The Coming Innovation Slowdown

GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radios, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate out from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative—and possibly even more so—than those that happened in the 1920s and 30s. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover you won’t invent. And if you don’t invent, you will be disrupted eventually, it’s just a matter of time. It’s always better to prepare than to adapt and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Why Business Strategies Should Not Be Scientific

GUEST POST from Greg Satell

When the physicist Richard Feynman took the podium to give the commencement speech at Caltech in 1974, he told the strange story of cargo cults. On certain islands in the South Pacific, he explained, tribal societies had seen troops build airfields during World War II and were impressed with the valuable cargo that arrived at the bases.

After the troops left, the island societies built their own airfields, complete with mock radios, aircraft and mimicked military drills in the hopes of attracting cargo themselves. It seems more than a little silly, and of course, no cargo ever came. Yet these tribal societies persisted in their strange behaviors.

Feynman’s point was that we can’t merely mimic behaviors and expect to get results. Yet even today, nearly a half century later, many executives and business strategists have failed to learn that simple lesson by attempting to inject “science” into strategy. The truth is that while strategy can be informed by science, it can never be, and shouldn’t be, truly scientific.

Why Business Case Studies Are Flawed

In 2004, I was leading a major news organization during the Orange Revolution in Ukraine. What struck me at the time was how thousands of people, who would ordinarily be doing thousands of different things, would stop what they were doing and start doing the same thing, all at once, in nearly perfect unison, with little or no formal coordination.

That’s what started the journey that ultimately resulted in my book, Cascades. I wanted to harness those same forces to create change in a business context, much like the protesters in Ukraine achieved in a political context and countless others, such as LGBT activists, did in social contexts. In my research I noticed how different studies of political and social movements were from business case studies.

With historical political and social movements, such as the civil rights movement in the United States or the anti-Apartheid struggle in South Africa, there was abundant scholarship, often based on hundreds, if not thousands, of contemporary accounts. Business case studies, on the other hand, were largely done by a small team performing a handful of interviews.

When I interviewed people involved in the business cases, I found that they shared some important features with political and social movements that weren’t reported in the case studies. What struck me was that these features were noticed at the time, and in some cases discussed, but weren’t regarded as significant.

To be clear, I’m not arguing that my research was more “scientific,” but I was able to bring a new perspective. Business cases are, necessarily, usually focused on successful efforts, researched after the fact and written from a management perspective. We rarely get much insight into failed efforts or see perspectives from ordinary customers, line workers, competitors and so on.

The Halo Effect

Good case studies are written by experienced professionals who are trained to analyze a business situation from a multitude of perspectives. However, their ability to do that successfully is greatly limited by the fact that they already know the outcome. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture with happy people who worked long hours but loved every minute of it. When the market tanked, however, all of a sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, given a successful example, analysts viewed particular qualities in a positive light. When things began to go the other way, however, those same qualities were perceived as negative.

So when an organization is doing well, we see them as “idealistic” and “values driven,” but when things go sour, those same traits are seen as “arrogant” and “impractical.” Given the same set of facts, we can, and often do, come to very different conclusions when our perception of the outcomes changes.

The Problem with Surveys

Besides case studies, another common technique for analyzing business trends and performance is the executive survey. Typically, a research company or consulting firm sends out questionnaires to a few hundred executives and then analyzes the results. Much as Feynman described, surveys give these studies an air of scientific rigor.

This appearance of scientific rigor is largely a mirage. Yes, there are numbers, graphs and pie charts, much as you would see in a scientific paper, but important elements are usually missing, such as a clearly formulated hypothesis, a control group and a peer review process.

Another problematic aspect is that these types of studies emphasize what a typical executive thinks about a particular business issue or trend. So what they really examine is the current zeitgeist, which may or may not reflect current market reality. A great business strategy does not merely reflect what typical executives know, but exploits what they do not.

Perhaps most importantly, these types of surveys are generally not marketed as simple opinion surveys, but as sources of profound insight designed to help leaders get an edge over their competitors. The numbers, graphs and pie charts are specifically designed to look “scientific” in order to make them appear to be statements of empirical fact.

Your Strategy Is Always Wrong, You Have to Make It Right

We’d like strategy to be scientific, because few leaders like to admit that they are merely betting on an idea. Nobody wants to go to their investors and say, “I have a hunch about something and I’d like to risk significant resources to find out if I’m right.” Yet that’s exactly what successful businesses do all the time.

If strategy were truly scientific, then you would expect management to get better over time, much as, say, cancer treatment or technology performance does. However, just the opposite seems to be the case. The average tenure of companies on the S&P 500 has been shrinking for decades and CEOs get fired more often.

The truth is that strategy can never be scientific, because the business context is always evolving. Even if you have the right strategy today, it may not be the right strategy for tomorrow. Changes in technology, consumer behavior and the actions of your competitors make that a near certainty.

So instead of assuming that your strategy is right, a much better course is to assume that it is wrong in at least some aspects. Techniques like pre-mortems and red teams can help you to expose flaws in a strategy and make adjustments to overcome them. The more you assume you are wrong, the better your chances are of being right.

Or, as Feynman himself put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Globalization and Technology Have Failed Us

GUEST POST from Greg Satell

In November 1989, there were two watershed events that would change the course of world history. The fall of the Berlin Wall would end the Cold War and open up markets across the world. That very same month, Tim Berners-Lee would create the World Wide Web and usher in a new technological era of networked computing.

It was a time of great optimism. Books like Francis Fukuyama’s The End of History predicted a capitalist, democratic utopia, while pundits gushed over the seemingly never-ending parade of “killer apps,” from email and e-commerce to social media and the mobile web. The onward march of history seemed unstoppable.

Today, 30 years on, it’s time to take stock and the picture is somewhat bleak. Instead of a global technological utopia, there are a number of worrying signs ranging from income inequality to the rise of popular authoritarianism. The fact is that technology and globalization have failed us. It’s time to address some very real problems.

Where’s the Productivity?

Think back, if you’re old enough, to before this all started. Life prior to 1989 was certainly less modern. We didn’t have mobile phones or the Internet, but for the most part life was fairly similar to today. We rode in cars and airplanes, watched TV and movies, and enjoyed the benefits of home appliances and air conditioners.

Now try to imagine what life was like in 1900, before electricity and internal combustion gained wide adoption. Even doing a simple task like cooking a meal or cleaning the house took hours of backbreaking labor to haul wood and water. While going back to living in the 1980s would involve some inconvenience, we would struggle to survive before 1920.

The productivity numbers bear out this simple observation. The widespread adoption of electricity and internal combustion led to a 50-year boom in productivity between 1920 and 1970. The digital revolution, on the other hand, created only an 8-year blip between 1996 and 2004. Even today, with artificial intelligence on the rise, productivity remains depressed.

At this point, we have to conclude that despite all the happy talk and grand promises of “changing the world,” the digital revolution has been a huge disappointment. While Silicon Valley has minted billionaires at record rates, digital technology has not made most of us measurably better off economically.

Winners Taking All

The increase of globalization and the rise of digital commerce was supposed to be a democratizing force, increasing competition and breaking the institutional monopoly on power. Yet just the opposite seems to have happened, with a relatively small global elite grabbing more money and more power.

Consider market consolidation. An analysis published in the Harvard Business Review showed that from airlines to hospitals to beer, market share is increasingly concentrated in just a handful of firms. A more expansive study of 900 industries conducted by The Economist found that two-thirds have become more dominated by larger players.

Perhaps not surprisingly, we see the same trends in households as we do with businesses. The OECD reports that income inequality is at its highest level in over 50 years. Even in emerging markets, where millions have been lifted out of poverty, most of the benefits have gone to a small few.

The consequences of growing inequality are concrete and stark. Social mobility has been declining in America for decades, transforming the “land of opportunity” into what is increasingly a caste system. Anxiety and depression are rising to epidemic levels. Life expectancy for the white working class is actually declining, mostly due to “deaths of despair” due to drugs, alcohol and suicide. The overall picture is dim and seemingly getting worse.

The Failure Of Freedom

Probably the biggest source of optimism in the 1990s was the end of the Cold War. Capitalism was triumphant and many of the corrupt, authoritarian societies of the former Soviet Union began embracing democracy and markets. Expansion of NATO and the EU brought new hope to more than a hundred million people. China began to truly embrace markets as well.

I moved to Eastern Europe in the late 1990s and was able to observe this amazing transformation for myself. Living in Poland, it seemed like the entire country was advancing as if in time-lapse photography. Old, gray concrete buildings gave way to modern offices and apartment buildings. A prosperous middle class began to emerge.

Yet here as well things now seem to be going the other way. Anti-democratic regimes are winning elections across Europe while rising resentment against immigrant populations takes hold throughout the western world. In America, we are increasingly mired in a growing constitutional crisis.

What is perhaps most surprising about the retreat of democracy is that it is happening not in the midst of some sort of global depression, but during a period of relative prosperity and low unemployment. Nevertheless, positive economic data cannot mask the basic truth that a significant portion of the population feels that the system doesn’t work for them.

It’s Time To Start Taking Responsibility For A Messy World

Looking back, it’s hard to see how an era that began with such promise turned out so badly. Yes, we’ve got cooler gadgets and streaming video. There have also been impressive gains in the developing world. Yet in so-called advanced economies, we seem to be worse off. It didn’t have to turn out this way. Our current predicament is the result of choices that we made.

Put simply, we have the problems we have today because they are the problems we have chosen not to solve. While the achievements of technology and globalization are real, they have also left far too many behind. We focused on simple metrics like GDP and shareholder value, but unfortunately the world is not so elegant. It’s a messy place and doesn’t yield so easily to reductionist measures and strategies.

There has, however, been some progress. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, in 2019 issued a statement that discarded the old notion that the sole purpose of a business is to provide value to shareholders. There are also a number of efforts underway to come up with broader measures of well being to replace GDP.

Yet we still need to learn an important lesson: technology alone will not save us. To solve complex challenges like inequality, climate change and the rise of authoritarianism we need to take a complex, network-based approach. We need to build ecosystems of talent, technology and information. That won’t happen by itself; we have to make better choices.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Reality Behind Netflix’s Amazing Success


GUEST POST from Greg Satell

Today, it’s hard to think of Netflix as anything but an incredible success. Its business has grown at breakneck speed and now streams to 190 countries, yet it has also been consistently profitable, bringing in over $12 billion in revenue last year. With hit series like Orange is the New Black and Stranger Things, it broke the record for Emmy Nominations in 2018.

Most of all, the company has consistently disrupted the media business through its ability to relentlessly innovate. Its online subscription model upended the movie rental business and drove industry giant Blockbuster into bankruptcy. Later, it pioneered streaming video and introduced binge watching to the world.

Ordinarily, a big success like Netflix would offer valuable lessons for the rest of us. Unfortunately, its story has long been shrouded in myth and misinformation. That’s why Netflix Co-Founder Marc Randolph’s book, That Will Never Work, is so valuable. It not only sets the story straight, it offers valuable insight into how to create a successful business.

The Founding Myth

Anthropologists have long been fascinated by origin myths. The Greek gods battled and defeated the Titans to establish Olympus. Remus and Romulus were suckled by a she-wolf and then established Rome. Adam and Eve were seduced by a serpent, ate the forbidden fruit and were banished from the Garden of Eden.

The reason every culture invents origin myths is that they help make sense of a confusing world and reinforce the existing order. Before science, people were ill-equipped to explain things like disease and natural disasters. So, stories, even if they were apocryphal, gave people comfort that there was a rhyme and reason to things.

So it shouldn’t be surprising that an unlikely success such as Netflix has its own origin myth. As legend has it, Co-Founder Reed Hastings misplaced a movie he rented and was charged a $40 late fee. Incensed, he set out to start a movie business that had no late fees. That simple insight led to a disruptive business model that upended the entire industry.

The truth is that late fees had nothing to do with the founding of Netflix. What really happened is that Reed Hastings and Marc Randolph, soon to be unemployed after the sale of their company, Pure Atria, were looking to ride the new e-commerce wave and become the “Amazon of” something. Netflix didn’t arise out of a moment of epiphany, but a process of elimination.

The Subscription Model Was an Afterthought

Netflix really got its start through a morning commute. As Pure Atria was winding down, Randolph and Hastings would drive together from Santa Cruz on Highway 17 over the mountain into Silicon Valley. It was a long drive, which gave them lots of time to toss around e-commerce ideas that ranged from customized baseball bats to personalized shampoo.

The reason they eventually settled on movies was the introduction of DVDs. In 1997, there were very few titles available, so stores didn’t stock them. They were also small and light and easy to ship. Best of all, the movie studios recognized that they had made a mistake pricing movies on videotape too high and planned to offer DVDs at a price consumers would actually pay.

In the beginning, Netflix earned most of its money selling movies, not renting them. However, before long they realized that it was only a matter of time before Amazon and Walmart began selling DVDs as well. Once that happened, it was unlikely that Netflix would be able to compete, and they would have to find a way to make the rental model work.

The subscription model began as an experiment. No one seemed to want to rent movies by mail, so they were desperate to find a different model and kept trying things until they hit on something that worked. It wasn’t part of a master plan, but the result of trial and error. “If you would have asked me on launch day to describe what Netflix would eventually look like,” Randolph wrote, “I would have never come up with a monthly subscription service.”

The Canada Principle

As Netflix began to take off, it was constantly looking for ways to grow its business. One idea that continually came up was expanding to Canada. It’s just over the border, is largely English-speaking, has a business-friendly regulatory environment and shares many cultural traits with the US. It just seemed like an obvious way to increase sales.

Yet they didn’t do it for two reasons. First, while Canada is very similar to the US, it is still another country, with its own currency, laws and other complicating factors. Also, while English is commonly spoken in most parts of Canada, in some regions French predominates. So, what looked simple at first had the potential to become maddeningly complex.

The second and more important reason was that it would have diluted their focus. Nobody has unlimited resources. You only have a certain number of people who can do a certain number of things. For every Canadian problem they had to solve, that was one problem that they weren’t solving in the much larger US business.

That became what Randolph called the “Canada Principle,” or the idea that you need to maximize your focus by limiting the number of opportunities that you pursue. It’s why they dropped DVD sales to focus on renting movies and then dropped a la carte rental to focus on the subscription business. That singularity of focus played a big part in Netflix’s success.

Nobody Knows Anything

Randolph’s mantra throughout the book is that “nobody knows anything.” He borrowed the phrase from the writer William Goldman’s memoir Adventures in the Screen Trade. What Goldman meant was that nobody truly knows how a movie will do until it’s out. Some movies with the biggest budgets and greatest stars flop, while some of the unlikeliest indie films are hits.

For Randolph though, it’s more of a guiding business philosophy. “For every good idea,” he says, “there are a thousand bad ideas it is indistinguishable from.” The only real way to tell the difference is to go out and try them, see what works, discard the failures and build on the successes. You have to, in other words, dare to be crap.

Over the years, I’ve had the chance to get to know hundreds of great innovators and they all tell a different version of the same story. While they often became known for one big idea, they had tried thousands of others before they arrived at the one that worked. It was perseverance and a singularity of focus, not a sudden epiphany, that made the difference.

That’s why the myth of the $40 late fee, while seductive, can be so misleading. What made Netflix successful wasn’t just one big idea. In fact, just about every assumption they made when they started the company was wrong. Rather, it was what they learned along the way that made the difference. That’s the truth of how Netflix became a media powerhouse.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Our Fear of China is Overblown


GUEST POST from Greg Satell

The rise of China over the last 40 years has been one of history’s great economic miracles. According to the World Bank, since it began opening up its economy in 1979, China’s GDP has grown from a paltry $178 billion to a massive $13.6 trillion. At the same time, research by McKinsey shows that its middle class is expanding rapidly.
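Those two endpoints imply a remarkable compound growth rate. As a back-of-the-envelope sketch (the 39-year span and nominal-dollar endpoints are my assumptions about the World Bank figures quoted above, not an official series):

```python
# Implied compound annual growth rate (CAGR) from the figures above.
# Assumed endpoints: $178 billion in 1979 and $13.6 trillion in 2018,
# i.e. a 39-year span in nominal US dollars.
start, end, years = 178e9, 13.6e12, 39
cagr = (end / start) ** (1 / years) - 1
print(f"implied average annual growth: {cagr:.1%}")
```

That works out to roughly 11 to 12 percent a year, sustained for four decades, which is what makes the word “miracle” fit.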

What’s more, it seems like the Asian giant is just getting started. China has become increasingly dominant in scientific research and has embarked on two major initiatives: Made in China 2025, which aims to make it the leading power in 10 emerging industries, and a massive Belt and Road infrastructure initiative that seeks to shore up its power throughout Asia.

Many predict that China will dominate the 21st century in much the same way that America dominated the 20th. Yet I’m not so sure. First, American dominance was due to an unusual confluence of forces unlikely to be repeated. Second, China has weaknesses—and we have strengths—that aren’t immediately obvious. We need to be clear headed about China’s rise.

The Making of an American Century

America wasn’t always a technological superpower. In fact, at the turn of the 20th century, much like China at the beginning of this century, the United States was largely a backwater. Still mostly an agrarian nation, the US lacked the industrial base and intellectual heft of Europe. Bright young students would often need to go overseas for advanced degrees. With no central bank, financial panics were common.

Yet all that changed quickly. Industrialists like Thomas Edison and Henry Ford put the United States at the forefront of the two most important technologies of the time, electricity and internal combustion. Great fortunes produced by a rising economy endowed great educational institutions. In 1913 the Federal Reserve Act was passed, finally bringing financial stability to a growing nation. By the 1920s, much like China today, America had emerged as a major world power.

Immigration also played a role. Throughout the early 1900s immigrants coming to America provided enormous entrepreneurial energy as well as cheap labor. With the rise of fascism in the 1930s, our openness to new people and new ideas attracted many of the world’s greatest scientists to our shores and created a massive brain drain in Europe.

At the end of World War II, the United States was the only major power left with its industrial base still intact. We seized the moment wisely, using the Marshall Plan to rebuild our allies and creating scientific institutions, such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) that fueled our technological and economic dominance for the rest of the century.

There are many parallels between the 1920s and the historical moment of today, but there are also many important differences. It was a number of forces, including our geography, two massive world wars, our openness as a culture and a number of wise policy choices that led to America’s dominance. Some of these factors can be replicated, but others cannot.

MITI and the Rise of Japan

Long before China loomed as a supposed threat to American prosperity and dominance, Japan was considered to be a chief economic rival. Throughout the 1970s and 80s, Japanese firms came to lead in many key industries, such as automobiles, electronics and semiconductors. The United States, by comparison, seemed feckless and unable to compete.

Key to Japan’s rise was a long-term industrial policy. The Ministry of International Trade and Industry (MITI) directed investment and funded research that fueled an economic miracle. Compared to America’s haphazard policies, Japan’s deliberate and thoughtful strategy seemed like a decidedly more rational and wiser model.

Yet before long things began to unravel. While Japan continued to perform well in many of the industries and technologies that MITI focused on, it completely missed out on new technologies, such as minicomputers and workstations in the 1980s and personal computers in the 1990s. As MITI continued to support failing industries, growth slowed and debt piled up, leading to a lost decade of economic malaise.

At the same time, innovative government policy in the US also helped turn the tide. For example, in 1987 a non-profit consortium made up of government labs, research universities and private sector companies, called SEMATECH, was created to regain competitiveness in the semiconductor industry. America soon retook the lead, which continues even today.

China 2025 and the Belt and Road Initiative

While the parallels with America in the 1920s underline China’s potential, Japan’s experience in the 1970s and 80s highlights its peril. Much like Japan, it is centralizing decision-making around a relatively small number of bureaucrats and focusing on a relatively small number of industries and technologies.

Much like Japan back then, China seems wise and rational. Certainly, the technologies it is targeting, such as artificial intelligence, electric cars and robotics would be on anybody’s list of critical technologies for the future. The problem is that the future always surprises us. What seems clear and obvious today may look ridiculous and naive a decade from now.

To understand the problem, consider quantum computing, which China is investing heavily in. However, the technology is far from monolithic. In fact, there are a wide variety of approaches being championed by different firms, such as IBM, Microsoft, Google, Intel and others. Clearly, some of these firms are going to be right and some will be wrong.

The American firms that get it wrong will fail, but others will surely succeed. In China, however, the ones that get it wrong will likely be government bureaucrats who will have the power to prop up state supported firms indefinitely. Debt will pile up and competitiveness will decrease, much like it did in Japan in the 1990s.

This is, of course, speculation. However, there are indications that it is already happening. A recent bike sharing bubble has ignited concerns that similar over-investment is happening in artificial intelligence. Many investors have also become concerned that China’s slowing economy will be unable to support its massive debt load.

The Path Forward

The rise of China presents a generational challenge. Clearly, we cannot ignore a rising power, yet we shouldn’t overreact either. While many have tried to cast China as a bad actor, engaging in intellectual theft, currency manipulation and other unfair trade policies, others point out that it is wisely investing for the long-term while the US manages by the quarter.

Interestingly, as Fareed Zakaria recently pointed out, the same accusations made about China’s unfair trade policies today were leveled at Japan 40 years ago. In retrospect, however, our fears about Japan seem almost quaint. Not only were we not crushed by Japan’s rise, we are clearly better for it, incorporating Japanese ideas like lean manufacturing and combining them with our own innovations.

I suspect, or at least I hope, that we will benefit from China’s rise much as we did from Japan’s. We will learn from its innovations and be inspired to develop more of our own. If a Chinese scientist invents a cure for cancer, American lives will be saved. If an American scientist invents a better solar panel, fewer Chinese will be choking on smog.

Perhaps most of all, we need to remember that what made the 20th Century the American Century was our ability to rise to the challenges that history presented. Whether it was rebuilding Europe in the 40s and 50s, or Sputnik in the 50s and 60s or Japan in the 70s and 80s, competition always brought out the best in us. Then, as now, our destiny was our own to determine.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Why Humans Fail to Plan for the Future


GUEST POST from Greg Satell

I was recently reading Michio Kaku’s wonderful book, The Future of Humanity, about colonizing space and was amazed at how detailed some of the plans are. Plans for a Mars colony, for example, are already fairly advanced. In other cases, scientists are actively thinking about technologies that won’t be viable for a century or more.

Yet while we seem to be so good at planning for life in outer space, we are much less capable of thinking responsibly about the future here on earth, especially in the United States. Our federal government deficit recently rose to 4.6% of GDP, which is obviously unsustainable in an economy that’s growing at a meager 2.3%.
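The arithmetic behind that claim is worth sketching: if the deficit stays at 4.6% of GDP while the economy grows at only 2.3%, the debt-to-GDP ratio keeps climbing for decades. A minimal simulation, assuming those rates hold constant (the ~78% starting debt ratio is an illustrative figure, not from the text):

```python
# Back-of-the-envelope debt dynamics: a sketch, not a forecast.
# Assumes the deficit stays fixed at 4.6% of GDP and growth at
# 2.3% per year; the 78% starting ratio is an assumed value.
def debt_ratio_path(d0=0.78, deficit=0.046, growth=0.023, years=30):
    d = d0
    path = [d]
    for _ in range(years):
        d = (d + deficit) / (1 + growth)  # add new borrowing, then rescale by larger GDP
        path.append(d)
    return path

path = debt_ratio_path()
print(f"debt/GDP after 30 years: {path[-1]:.0%}")
# The ratio only stabilizes where growth offsets new borrowing:
# d* = deficit / growth = 0.046 / 0.023, i.e. 200% of GDP.
```

Under these assumptions the ratio rises every single year toward 200% of GDP, which is the sense in which the current path is unsustainable.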

That’s just one data point, but everywhere you look we seem to be unable to plan for the future. Consumer debt in the US recently hit levels exceeding those before the crash in 2008. Our infrastructure is falling apart. Air quality is getting worse. The list goes on. We need to start thinking more seriously about the future, but we don’t seem to be able to. Why is that?

It’s Biology, Stupid

The simplest and most obvious explanation for why we fail to plan for the future is basic human biology. We have pleasure centers in our brains that release a neurotransmitter called dopamine, which gives us a feeling of well-being. So, it shouldn’t be surprising that we seek to maximize our dopamine fix in the present and neglect the future.

Yuval Noah Harari made this argument in his book Homo Deus, in which he argued that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, Harari argues, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Yet this somehow doesn’t feel persuasive. Adults in even primitive societies are expected to overcome basic urges. Citizens of Ancient Rome were taxed to pay for roads that led to distant lands and took decades to build. Medieval communities built churches that stood for centuries. Why would we somehow lose our ability to think long-term in just the past generation or so?

The Profit Motive

Another explanation of why we neglect the future is the profit motive. Pressed by demanding shareholders to deliver quarterly profits, corporate executives focus on showing short-term profits instead of investing for the future. The result is increased returns to fund managers, but a hollowing out of corporate competitiveness.

A recent article in Harvard Business Review would appear to bear this out. When a team of researchers looked into the health of the innovation ecosystem in the US, they found that corporate America has largely checked out. They also observed that storied corporate research labs, such as Bell Labs and Xerox PARC, have diminished over time.

Yet take a closer look and the argument doesn’t hold up. In fact, the data from the National Science Foundation shows that corporate research has increased from roughly 40% of total investment in the 1950s and 60s to more than 60% today. At the same time, while some firms have closed research facilities, others, such as Microsoft, IBM and Google have either opened new ones or greatly expanded previous efforts. Overall R&D spending has risen over time.

Take a look at how Google innovates and you’ll be able to see the source of some of the dissonance. Fifty years ago, the only real option for corporate investment in research was a corporate lab. Today, however, there are many other avenues, including partnerships with academic researchers, internal venture capital operations, incubators, accelerators and more.

The Free Rider Problem

A third reason we may fail to invest in the future is the free rider problem. In this view, the problem is not that we don’t plan for the future, but that we don’t want to spend money on others who are undeserving. For example, why should we pay higher taxes to educate kids from outside our communities? Or pay for infrastructure projects that are wasteful and corrupt?

This type of welfare queen argument can be quite powerful. Although actual welfare fraud has been shown to be incredibly rare, there are many who believe that the public sector is inherently wasteful and money would be more productively invested elsewhere. This belief doesn’t only apply to low-income people, but also to “elites” such as scientists.

Essentially, this is a form of kinship selection. We are more willing to invest in the future of people who we see as similar to ourselves, because that is a form of self-survival. However, when we find ourselves asked to invest in the future of those we see as different from ourselves, whether that difference is of race, social class or even profession, we balk.

Yet here again, a closer look and the facts don’t quite fit with the narrative. Charitable giving, for example, has risen almost every year since 1977. So, it’s strange that we’re increasingly generous in giving to those who are in need, but stingy when it comes to things like infrastructure and education.

A New Age of Superstition

What’s especially strange about our inability to plan for the future is that it’s relatively new. In fact, after World War II, we invested heavily in the future. We created new avenues for scientific investment at agencies like the National Science Foundation and the National Institutes of Health, rebuilt Europe with the Marshall Plan and educated an entire generation with the GI Bill.

It wasn’t until the 1980s that our willingness to plan for and invest in the future began to wane, mostly due to two ideas that warped decision making. The first, called the Laffer Curve, argued that by lowering taxes we can increase revenue and that tax cuts, essentially, pay for themselves. The second, shareholder value, argued that whatever was best for shareholders is also best for society.

Both ideas have been partially or thoroughly debunked. Over the past 40 years, lower tax rates have consistently led to lower revenues and higher deficits. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, recently denounced the concept of shareholder value. Yet strangely, many still use both to support anti-future decisions.

We seem to be living in a new era of superstition, where mere belief is enough to inspire action. So projects that easily capture the imagination, such as colonizing Mars, are able to garner fairly widespread support, while basic investments in infrastructure, debt reduction and the environment are neglected.

The problem, in other words, seems to be mostly in the realm of a collective narrative. We are more than capable of enduring privation today to benefit tomorrow, just as businesses routinely take less profits today to invest in tomorrow. We are even capable of giving altruistically to others in need. All we need is a story to believe in.

There is, however, the possibility that it is not the future we really have a problem with, but each other: our lack of a common story arises from a lack of shared values, which leads to major differences in how we view the same facts. In any case, the future suffers.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Hard Facts Are a Hard Thing


GUEST POST from Greg Satell

In 1977, Ken Olsen, the founder and CEO of Digital Equipment Corporation, reportedly said, “There is no reason for any individual to have a computer in his home.” It was an amazingly foolish thing to say and, ever since, observers have pointed to Olsen’s comment to show how supposed experts can be wildly wrong.

The problem is that Olsen was misquoted. In fact, his company was actually in the business of selling personal computers and he had one in his own home. This happens more often than you would think. Other famous quotes, such as IBM CEO Thomas Watson predicting that there would be a global market for only five computers, are similarly false.

There is great fun in bashing experts, which is why so many inaccurate quotes get repeated so often. If the experts are always getting it wrong, then we are liberated from the constraints of expertise and the burden of evidence. That’s the hard thing about hard facts. They can be so elusive that it’s easy to doubt their existence. Yet they do exist, and they matter.

The Search for Absolute Truth

In the early 20th century, science and technology emerged as a rising force in western society. The new wonders of electricity, automobiles and telecommunication were quickly shaping how people lived, worked and thought. Empirical verification, rather than theoretical musing, became the standard by which ideas were measured.

It was against this backdrop that Moritz Schlick formed the Vienna Circle, which became the center of the logical positivist movement and aimed to bring a more scientific approach to human thought. Throughout the 1920s and ’30s, the movement spread and became a symbol of the new technological age.

At the core of logical positivism was Ludwig Wittgenstein’s theory of atomic facts, the idea that the world could be reduced to a set of statements that could be verified as being true or false—no opinions or speculation allowed. Those statements, in turn, would be governed by a set of logical algorithms which would determine the validity of any argument.

It was, to the great thinkers of the day, both a grand vision and an exciting challenge. If all facts could be absolutely verified, then we could confirm ideas with absolute certainty. Unfortunately, the effort would fail so miserably that Wittgenstein himself would eventually disown it. Instead of building a world of verifiable objective reality, we would be plunged into uncertainty.

The Fall of Logic and the Rise of Uncertainty

Ironically, while the logical positivist movement was gaining steam, two seemingly obscure developments threatened to undermine it. The first was a hole at the center of logic called Russell’s Paradox, which suggested that some statements could be both true and false. The second was quantum mechanics, a strange new science in which even physical objects could defy measurement.
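The paradox itself can be stated in one line. Consider the set of all sets that do not contain themselves:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

Asking whether $R$ contains itself yields a contradiction either way, which is why a statement built from perfectly well-formed parts could be both true and false.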

Yet the battle for absolute facts would not go down without a fight. David Hilbert, the most revered mathematician of the time, created a program to resolve Russell’s Paradox. Albert Einstein, for his part, argued passionately against the probabilistic quantum universe, declaring that “God does not play dice with the universe.”

Alas, it was all for naught. Kurt Gödel would prove that every logical system is flawed with contradictions. Alan Turing would show that all numbers are not computable. The Einstein-Bohr debates would be resolved in Bohr’s favor, destroying Einstein’s vision of an objective physical reality and leaving us with an uncertain universe.

These developments weren’t all bad. In fact, they were what made modern computing possible. However, they left us with an uncomfortable uncertainty. Facts could no longer be absolutely verifiable, but would stand until they could be falsified. We could, after thorough testing, become highly confident in our facts, but never completely sure.

Science, Truth and Falsifiability

In Richard Feynman’s 1974 commencement speech at Caltech, he recounted going to a new-age resort where people were learning reflexology. A man was sitting in a hot tub rubbing a woman’s big toe and asking the instructor, “Is this the pituitary?” Unable to contain himself, the great physicist blurted out, “You’re a hell of a long way from the pituitary, man.”

His point was that it’s relatively easy to make something appear “scientific” by, for example, having people wear white coats or present charts and tables, but that doesn’t make it real science. True science is testable and falsifiable. You can’t merely state what you believe to be true, but must give others a means to test it and prove you wrong.

This is important because it’s very easy for things to look like the truth, but actually be false. That’s why we need to be careful, especially when we believe something to be true. The burden is even greater when it is something that “everybody knows.” That’s when we need to redouble our efforts, dig in and make sure we verify our facts.

“We’ve learned from experience that the truth will out,” Feynman said. “The first principle is that you must not fool yourself—and you are the easiest person to fool.” Truth doesn’t reveal itself so easily, but it’s out there and we can find it if we are willing to make the effort.

The Lie of a Post-Truth World

Writing a non-fiction book can be a grueling process. You not only need to gather hundreds of pages of facts and mold them into a coherent story that interests the reader, but also to verify that those facts are true. For both of my books, Mapping Innovation and Cascades, I spent countless hours consulting sources and sending out fact checks.

Still, I lived in fear knowing that whatever I put on the page would permanently be there for anyone to discredit. In fact, I would later find two minor inaccuracies in my first book (ironically, both had been checked with primary sources). These were not, to be sure, material errors, but they wounded me. I’m sure, in time, others will be uncovered as well.

Yet I don’t believe that those errors diminish the validity of the greater project. In fact, I think that those imperfections serve to underline the larger truth that the search for knowledge is always a journey, elusive and just out of reach. We can struggle for a lifetime to grasp even a small part of it, but to shake free even a few seemingly insignificant nuggets can be a gift.

Yet all too often people value belief more than facts. That’s why they repeat things that aren’t factual: they believe those things point to some deeper truth that defies the facts in evidence. Yet that is not truth. It is just a way of fooling yourself and, if you’re persuasive, fooling others as well. Still, as Feynman pointed out long ago, “We’ve learned from experience that the truth will out.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Software Isn’t Going to Eat the World


GUEST POST from Greg Satell

In 2011, technology pioneer Marc Andreessen declared that software is eating the world. “With lower start-up costs and a vastly expanded market for online services,” he wrote, “the result is a global economy that for the first time will be fully digitally wired — the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.”

Yet as Derek Thompson recently pointed out in The Atlantic, the euphoria of Andreessen and his Silicon Valley brethren seems to have been misplaced. Former unicorns like Uber, Lyft, and Peloton have seen their value crash, while WeWork saw its IPO self-destruct. Hardly “the dream of every cyber-visionary.”

The truth is that we still live in a world of atoms, not bits, and most of the value is created by making things we live in, wear, eat and ride in. For all of the tech world’s astounding success, it still makes up only a small fraction of the overall economy. So a software-centric view, while it has served Silicon Valley well in the past, may be its Achilles heel in the future.

The Silicon Valley Myth

The Silicon Valley way of doing business got its start in 1968, when an investor named Arthur Rock backed executives from Fairchild Semiconductor to start a new company, which would become known as Intel. Unlike back east, where businesses depended on stodgy banks for finance, on the west coast venture capitalists, many of whom were former engineers themselves, would decide which technology companies got funded.

Over the years, a virtuous cycle ensued. Successful tech companies created fabulously wealthy entrepreneurs and executives, who would in turn invest in new ventures. Things shifted into hyperdrive when the company Andreessen founded, Netscape, more than doubled its value on its first day of trading, kicking off the dotcom boom.

While the dotcom bubble would crash in 2000, it wasn’t all based on pixie dust. As the economist W. Brian Arthur explained in Harvard Business Review, while traditional industrial companies were subject to diminishing returns, software companies with negligible marginal costs could achieve increasing returns powered by network effects.

Yet even as real value was being created and fabulous new technology businesses prospered, an underlying myth began to take hold. Rather than treating software business as a special case, many came to believe that the Silicon Valley model could be applied to any business. In other words, that software would eat the world.

The Productivity Paradox (Redux)

One reason that so many outside of Silicon Valley were skeptical of the technology boom for a long time was a longstanding productivity paradox. Although throughout the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year, productivity growth had diminished during the same period.

In the late 90s, however, this trend reversed itself and productivity began to soar. It seemed that Andreessen and his fellow “cyber-visionaries” were redeemed. No longer considered outcasts, they became the darlings of corporate America. It appeared that a new day was dawning and the Silicon Valley ethos took hold.

While the dotcom crash deflated the bubble in 2000, the Silicon Valley machine was soon rolling again. Web 2.0 unleashed the social web, smartphones initiated the mobile era and then IBM Watson’s defeat of human champions on the game show Jeopardy! heralded a new age of artificial intelligence.

Yet still, we find ourselves in a new productivity paradox. By 2005, productivity growth had disappeared once again and has remained diminished ever since. To paraphrase economist Robert Solow, we see software everywhere except in the productivity statistics.

The Platform Fallacy

Today, pundits are touting a new rosy scenario. They point out that Uber, the world’s largest taxi company, owns no vehicles. Airbnb, the largest accommodation provider, owns no real estate. Facebook, the most popular media owner, creates no content and so on. The implicit assumption is that it is better to build software that makes matches than to invest in assets.

Yet platform-based businesses have three inherent weaknesses that aren’t always immediately obvious. First, they lack barriers to entry, which makes it difficult to create a sustainable competitive advantage. Second, they tend to create “winner-take-all” markets, so for every fabulous success like Facebook, there are thousands of failures. Finally, rabid competition leads to high costs.

The most important thing to understand about platforms is that they give us access to ecosystems of talent, technology and information, and it is in those ecosystems that the greatest potential for value creation lies. That’s why, to become profitable, platform businesses eventually need to invest in real assets.

Consider Amazon: Almost two-thirds of Amazon’s profits come from its cloud computing unit, AWS, which provides computing infrastructure for other organizations. More recently, it bought Whole Foods and began opening Amazon Go retail stores. The more you look, the less Amazon resembles a platform and the more it resembles a traditional pipeline business.

Reimagining Innovation for a World of Atoms

The truth is that the digital revolution, for all of the excitement and nifty gadgets it has produced, has been somewhat of a disappointment. Since personal computers first became available in the 1970s, we’ve had less than ten years of elevated productivity growth. Compare that to the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that digital technology falls short.

In a sense though, the lack of impact shouldn’t be that surprising. Even at this late stage, information and communication technologies make up only about 6% of GDP in advanced economies. Clearly, that’s not enough to swallow the world. As we have seen, it’s barely enough to make a dent.

Yet still, there is great potential in the other 94% of the economy and there may be brighter days ahead in using computing technology to drive advancement in the physical world. Exciting new fields, such as synthetic biology and materials science may very well revolutionize industries like manufacturing, healthcare, energy and agriculture.

So, we are now likely embarking on a new era of innovation that will be very different from the digital age. Rather than being focused on one technology, concentrated in one geographical area and dominated by a handful of industry giants, it will be widely dispersed and made up of a diverse group of interlocking ecosystems of talent, technology and information.

Make no mistake. The future will not be digital. Instead, we will need to learn how to integrate a diverse set of technologies to reimagine atoms in the physical world.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
