Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Rethinking Agility for the Post-Digital Age

GUEST POST from Greg Satell

For the past 50 years, innovation has largely been driven by our ability to cram more transistors onto a silicon wafer. That’s what’s allowed us to double the power of our technology every two years or so and led to the continuous flow of new products and services streaming out of innovative organizations.

Perhaps not surprisingly, over the past few decades agility has become a defining competitive attribute. Because the fundamentals of digital technology have been so well understood, much of the value has shifted to applications and things like design and user experience. Yet that will change in the years ahead.

Over the next few decades we will struggle to adapt to a post-digital age and we will need to rethink old notions about agility. To win in this new era of innovation we will have to do far more than just move fast and break things. Rather, we will have to manage four profound shifts in the basis of competition that will challenge some of our most deeply held notions.

Shift 1: From Transistor-Based Computers to New Computing Architectures

In 1965, Intel’s Gordon Moore published a paper that predicted what came to be known as Moore’s Law: the continuous doubling of the number of transistors that can fit on an integrated circuit. With a constant stream of chips that were not only more powerful but also cheaper, successful firms could rapidly prototype and iterate to speed new applications to market.

Yet now Moore’s Law is ending. Despite the amazing ingenuity of engineers, the simple reality is that every technology eventually hits theoretical limits. The undeniable fact is that atoms are only so small and the speed of light is only so fast and that limits what we can do with transistors. To advance further, we will simply have to find a different way to compute things.

The two most promising candidates are quantum computing and neuromorphic chips, both of which are vastly different from digital computing: they utilize different logic and require different computer languages and algorithmic approaches than classical computers. The transition to these architectures won’t be seamless.

We will also use these architectures in much different ways. Quantum computers will be able to handle almost incomprehensible complexity, generating computing spaces larger than the number of atoms in the known universe. Neuromorphic chips are potentially millions of times more efficient than conventional chips and are much more effective with continuous streams of data, so may be well suited for edge computing and tasks like machine vision.

Shift 2: From Bits to Atoms

The 20th century saw two major waves of innovation. The first, dominated by electricity and internal combustion, revolutionized how we could manipulate the physical world. The second, driven by quantum physics, microbial science and computing, transformed how we could work with the microscopic and the virtual.

The past few decades have been dominated by the digital revolution and it seems like things have been moving very fast, but looks can be deceiving. If you walked into an average 1950s-era household, you would see much that you would recognize, including home appliances, a TV and an automobile. On the other hand, if you had to live in a 1900s-era home, with no running water or electricity, you would struggle to survive.

The next era will combine aspects of both waves, essentially using bits to drive atoms. We’re building vast databases of genes and materials, cataloging highly specific aspects of the physical world. We are also using powerful machine learning algorithms to analyze these vast troves of data and derive insights. The revolution underway is so profound that it’s reshaping the scientific method.

In the years to come, new computing architectures are likely to accelerate this process. Simulating chemistry is one of the first applications being explored for quantum computers, which will help us build larger and more detailed databases. Neuromorphic technology will allow us to shift from the cloud to the edge, enabling factories to get much smarter.

The way we interface with the physical world is changing as well. New techniques such as CRISPR help us edit genes at will. There is also an emerging revolution in materials science that will transform areas like energy and manufacturing. These trends are still somewhat nascent, but they have truly transformative potential.

Shift 3: From Rapid Iteration to Exploration

Over the past 30 years, we’ve had the luxury of working with technologies we understand extremely well. Every generation of microchips opened vast new possibilities, but worked exactly the same way as the last generation, creating minimal switching costs. The main challenge was to design applications.

So it shouldn’t be surprising that rapid iteration emerged as a key strategy. When you understand the fundamental technology that underlies a product or service, you can move quickly, trying out nearly endless permutations until you arrive at an optimized solution. That’s often far more effective than a planned, deliberate approach.

Over the next decade or two, however, the challenge will be to advance technology that we don’t understand well at all. As noted above, quantum and neuromorphic computing are still in their nascent stages. Improvements in genomics and materials science are redefining the boundaries of those fields. There are also ethical issues involved with artificial intelligence and genomics that will require us to tread carefully.

So in the future, we will need to put greater emphasis on exploration to understand these new technologies and how they relate to our businesses. Instead of looking to disrupt markets, we will need to pursue grand challenges to solve fundamental problems. Most of all, it’s imperative to start early. By the time many of these technologies hit their stride, it will be too late to catch up.

Shift 4: From Hyper-Competition to Mass Collaboration

The competitive environment we’ve become used to has been relatively simple. For each particular industry, there have been distinct ecosystems based on established fields of expertise. Competing firms raced to transform fairly undifferentiated inputs into highly differentiated products and services. You needed to move fast to get an edge.

This new era, on the other hand, will be one of mass collaboration in which government partners with academia and industry to explore new technologies in the pre-competitive phase. For example, the Joint Center for Energy Storage Research combines the work of five national labs, a dozen or so academic institutions and hundreds of companies to develop advanced batteries. Covid has redefined how scientists collaborate across institutional barriers.

Or consider the Manufacturing Institutes set up under the Obama administration. Focusing on everything from advanced fabrics to biopharmaceuticals, these allow companies to collaborate with government labs and top academics to develop the next generation of technologies. They also operate dozens of testing facilities to help bring new products to market faster.

I’ve visited some of these facilities and have had the opportunity to talk with executives from participating companies. What struck me was how palpable the excitement about the possibilities of this new era was. For them, agility didn’t mean learning to run faster down a chosen course, but widening and deepening connections throughout a technological ecosystem.

Over the past few decades, we have largely been moving faster and faster down a predetermined path. Over the next few decades, however, we’ll increasingly need to explore multiple domains at once and combine them into something that produces value. We’ll need to learn how to go slower to deliver much larger impacts.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Innovation is About Conversations Not Knowledge

GUEST POST from Greg Satell

One of the most often repeated stories about innovation is that of Alexander Fleming who, returning from his summer holiday in 1928, found that his bacterial cultures were contaminated by a strange mold. Yet instead of throwing away his work, he decided to study the mold instead and discovered penicillin.

What’s often left out is that it wasn’t Fleming who developed penicillin into a miracle drug. In fact, it wasn’t until a decade later that a team led by Howard Florey and Ernst Chain rediscovered Fleming’s work and, collaborating with several labs in the United States, ushered in the new era of antibiotics.

For some reason, we tend to assume that great innovators are lone geniuses. However, in researching my book, Mapping Innovation, I found just the opposite to be true. Innovation is, in fact, a highly social activity and great innovators cultivate long standing relationships with trusted thought partners. This was always true, but Covid has pushed it to new heights.

The “Martians” Of Fasori

Like a lot of children, Eugene Wigner lacked confidence in math class. In Eugene’s case, however, the problem wasn’t any lack of mathematical ability, but that his classmate and friend at the Fasori Gimnázium was John von Neumann, possibly the single greatest mathematician of the 20th century. Outmatched, Eugene chose to focus on physics, for which he would win the Nobel Prize in 1963.

The two were part of a group of Hungarian scientists that came to be known as the Martians, including such luminaries as Edward Teller, John Kemeny, George Polya and Paul Erdős, just to name a few. The group would help to revolutionize mathematics, physics and computer science for half a century.

In 1939, one of the “Martians,” Leo Szilard, became increasingly concerned about the explosive power of nuclear energy, which was poorly understood at the time. He went to confer with his friend Wigner and the two considered the matter important enough to sound the alarm. They drafted a letter, signed by Albert Einstein, that was ultimately delivered to President Roosevelt and led to the development of the Manhattan Project.

Each of the Martians was a genius in his own right, but combined they formed an important network of support that helped them all thrive and led to breakthroughs such as the first modern computer and the BASIC computer language. The world today is unquestionably better for it.

The Olympia Academy

In 1901, Albert Einstein was a recent graduate of the mathematics and physics teaching diploma program at the Zürich polytechnic school. Unable to find a job, he put an ad in the newspaper to attract students he could tutor and earn some money. A Romanian philosophy student named Maurice Solovine answered the ad.

As it turned out, Einstein didn’t think Solovine needed lessons in physics, but the two hit it off and Einstein invited his new friend to come and visit any time he wanted. Soon, a mathematician named Conrad Habicht joined the group and the three friends began to refer to their meetings as The Olympia Academy.

The meetings eventually began to take on a regular rhythm. They would read books from intellectual giants such as Ernst Mach, Henri Poincaré and David Hume, then discuss them late into the night and sometimes into the early morning hours. The debates were often vigorous, but always cordial.

Einstein would later credit these meetings with helping him come up with the ideas that led to the miracle year papers that would shift the foundations of modern physics. Einstein would, of course, become one of the world’s most famous people, but he never forgot his two friends from the Olympia Academy. The three stayed in touch throughout their lives, exchanging ideas and debating finer points.

The Bloomsbury Group

Historically, most intellectual clubs were exclusively male. That was certainly true of the Hungarian “Martians” and the Olympia Academy, as with others such as the Vienna Circle, but the Bloomsbury Group of early 20th century Cambridge was an unusual exception.

Although it was itself somewhat of an offshoot of the wholly male society of Apostles, the Bloomsbury Group included accomplished women such as Vanessa Bell and Virginia Woolf. It would come to be highly influential in areas as diverse as art, economics, literature and politics.

It began in 1905, when Thoby Stephen started hosting “Thursday Evenings” for Cambridge intellectuals visiting London and his sister Vanessa followed up with the “Friday Club.” The loose gatherings became an informal network that included literary types like E. M. Forster and Lytton Strachey, as well as such luminaries as the economist John Maynard Keynes and the philosopher Bertrand Russell.

Although the group came to be seen as snobbish and out of touch, the accomplishments of its members cannot be denied. Nor can the fact that even as they grew in fame and had increasing demands on their time, they continued to see deep value in the dialogue they had with each other.

Collaboration Is A Competitive Advantage

What’s most interesting about groups such as the “Martians,” the Olympia Academy and the Bloomsbury group is not just that they exist, but how devoted their members were to them and how integral they saw the dialogue they produced to their own successes. For many of the same reasons, highly innovative firms often design workspaces to promote collaboration.

That’s no accident. Decades of research, including a study of “star” engineers at Bell Labs as well as one of designers at IDEO, found that the best innovators are not necessarily the smartest or even the hardest working, but those who actively build up a strong network of collaborators. Another study of 17.9 million scientific papers found that the most highly cited work tends to come from a group of specialists in a specific field collaborating with an outsider.

Today’s technology creates new opportunities to collaborate and the impact is beginning to be felt. As Jennifer Doudna explained in The Economist, the Covid crisis is ushering in a “new era of science” in which collaboration is accelerating across national, institutional and disciplinary boundaries in ways that are unprecedented.

What’s becoming clear is that collaboration is increasingly becoming a competitive advantage and it’s not just what you know, but who you talk to, that will determine whether you succeed or fail. The better networks we build, the more likely it will be that we stumble across that random bit of information or insight that can help us solve an important problem.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Four Key Attributes of Transformational Leaders

GUEST POST from Greg Satell

Change isn’t what it used to be. Where earlier generations had leaders like Gandhi, King and Mandela, today’s change efforts seem rudderless. Movements like #Occupy, Black Lives Matter and #MeToo hold marches replete with strident calls for change, but they never seem to get anywhere. Lots of sound and fury, signifying nothing.

Many believe that if only these movements had more charismatic leaders or more inspiring oratory they would be able to gain more traction. Others say that society has become too corrupt and our politics too coarse to make change happen. They want to blow the system up, not work within it.

The truth is that leadership has little to do with fancy speeches or clever slogans. The notion that today’s calls for change face greater opposition than the British Raj, Jim Crow or Apartheid is simply laughable. In researching my book, Cascades, however, I found that, despite important differences, transformational leaders had these four things in common.

1. They Work To Make A Difference, Not Just Make A Point

When the #Occupy Wall Street movement broke out in 2011, it inspired millions with its rallying call, “We are the 99%.” Yet soon it became clear that all was not well. As New York Times columnist Joe Nocera noted, the group “had plenty of grievances, aimed mainly at the ‘oppressive’ power of corporations,” but “never got beyond their own slogans.” It soon fizzled out, a massive waste of time.

Making lots of noise and achieving little seems to be a common theme among the failed revolutions of today. All too often they believe that the righteousness of their cause, along with some clever memes on social media, will win the day. It won’t. Real change requires real work. You have to want to make a difference, not just make a point.

It’s not just young activists who make this mistake. Corporate bigwigs often fall into the same trap. They seek to “disrupt” without any real affirmative plan for change. In Lights Out, Wall Street Journal reporters Thomas Gryta and Ted Mann chronicle how General Electric CEO Jeffrey Immelt tried mightily to gin up the stock price and project an innovative image, but did little to create actual value.

For transformative leaders, making a difference is the real point. Thurgood Marshall, to take just one example, spent decades working in relative obscurity, not to mention facing significant danger, before he triumphed in Brown v. Board of Education. If we are to achieve anything of significance, we need to think less about disruption and more about tackling grand challenges.

2. They Lead With Values

Today, we regard Nelson Mandela as an almost saintly figure, but it wasn’t always that way. In fact, throughout his career as an activist, he was accused of being a communist, an anarchist and worse. When confronted with these accusations, however, he always pointed out that no one had to guess what he believed in, because it was written down in the Freedom Charter in 1955.

Being explicit about values helped to signal to external stakeholders, such as international institutions, that the anti-Apartheid activists shared common values with them. In fact, although the Freedom Charter was a truly revolutionary document, its call for things like equal rights and equal protection would be considered unremarkable in most societies.

After Apartheid fell and Mandela rose to power, the values spelled out in the Freedom Charter became important constraints. To uphold the stated principle that “all should be equal under the law,” his government couldn’t oppress whites. His reconciliation efforts are a big part of the reason he is so revered today.

Values are just as powerful in a corporate context for many of the same reasons. In Lou Gerstner’s IBM turnaround in the 1990s, for example, he not only put forth serving customers as an important value, he also made it clear that he was willing to forego revenue on every sale to make good on it. His willingness to incur costs showed his values were more than lip service.

Make no mistake. Every significant change comes with costs and being explicit about values makes it clear what costs you are willing to incur. Far too many would-be change leaders fail to be explicit about their values because they don’t want to be constrained in any way. It’s much easier to spout slogans like “Disrupt” or “Innovate or Die” than to think seriously about what change will cost you and others.

3. They Shape Networks

The March on Washington was a defining moment for the civil rights movement and for America. So it shouldn’t be surprising that those seeking change today, such as Black Lives Matter and the modern women’s movement, try to emulate that earlier success with marches of their own. These efforts consistently fail to achieve anything real and, in fact, often do significant damage when they spin out of control.

The truth is that the civil rights movement didn’t become powerful because it held the March on Washington. In fact, it was very much the opposite. The March on Washington was held because the civil rights movement had already become powerful. It wasn’t an opening shot, but part of the endgame, the culmination of decades of painstaking work by not just Martin Luther King Jr., but a broad array of leaders.

General Stanley McChrystal took a similar approach in revamping the US Special Forces in Iraq to fight Al Qaeda. Realizing that a conventional approach would not be effective against an unconventional opponent, he declared that “it takes a network to defeat a network” and shifted his focus “from moving pieces on the board to shaping the ecosystem.”

The truth is that it is networks of small groups, loosely connected but united by a shared purpose, that drive transformational change. Effective leaders know that their role is to empower others by helping to connect people in order to achieve that purpose.

4. They Learn From Their Mistakes

One of the most surprising things I found in my research is how consistently early efforts failed. The first march on Washington, the Woman Suffrage Procession of 1913, quickly spiraled out of control. Gandhi’s first efforts to bring civil disobedience to India ended so horribly he would later call it his Himalayan miscalculation. Steve Jobs, quite famously, was fired from Apple.

What made the difference wasn’t the mistakes they made, but how they learned from them. Alice Paul developed more innovative strategies, such as the Silent Sentinel protests, which were less vulnerable to disruption. Suffrage was won in 1919. Gandhi replaced widespread protests with the Salt March. Steve Jobs became more focused and built the world’s most valuable company.

Unfortunately, many of today’s activists don’t seem to have the same strategic flexibility. Once the #Occupy protesters went home, they never seemed to come up with an alternative approach. The riots at Ferguson were replaced, six years later, by the George Floyd riots. The modern women’s movement continues to march, with little to show for it.

None of this is to say that these causes are unworthy or that they are doomed to failure. What it does mean is that, if they are to succeed, they need to understand how revolutions fail and do something different. In an age of disruption, the only viable strategy is to adapt.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


A Brave Post-Coronavirus New World

GUEST POST from Greg Satell

In 1973, in the wake of the Arab defeat in the Yom Kippur War with Israel, OPEC instituted an oil embargo on America and its allies. The immediate effects of the crisis were a surge in gas prices and a recession in the West. The ripple effects, however, were far more complex and played out over decades.

The rise in oil prices brought much needed hard currency to the Soviet Union, prolonging its existence and setting the stage for its later demise. The American auto industry, with its passion for big, gas-guzzling cars, lost ground to emergent Japanese competitors. The new consciousness of conservation led to the establishment of the Department of Energy.

Today the Covid-19 crisis has given a shock to the system and we’re at a similar inflection point. The most immediate effects have been economic recession and the rapid adoption of digital tools, such as video conferencing. Over the next decade or so, however, the short-term impacts will combine with other more longstanding trends to reshape technology and society.

Pervasive Transformation

We tend to think about innovation as if it were a single event, but the truth is that it’s a process of discovery, engineering and transformation, which takes decades to run its course. For example, Alan Turing discovered the principles of a universal computer in 1936, but it wasn’t until the 1950s and 60s that digital computers became commercially available.

Even then, digital technology didn’t really begin to become truly transformational until the mid-90s. By this time, it was well enough understood to make the leap from highly integrated systems to modular ecosystems, making the technology cheaper, more functional and more reliable. The number of applications exploded and the market grew quickly.

Still, as the Covid-19 crisis has made clear, we’ve really just been scratching the surface. Although digital technology certainly accelerated the pace of work, it did fairly little to fundamentally change the nature of it. People still commuted to work in an office, where they would attend meetings in person, losing hours of productive time each and every day.

Over the next decade, we will see pervasive transformation. As Mark Zuckerberg has pointed out, once people can work remotely, they can work from anywhere, which will change the nature of cities. Instead of “offsite” meetings, we may very well have “onsite” meetings, where people travel from their home cities to headquarters to do more active collaboration.

These trends will combine with nascent technologies like artificial intelligence and blockchain to revolutionize business processes and supply chains. Organizations that cannot adopt key technologies will very likely find themselves unable to compete.

The Rise of Heterogeneous Computing

The digital age did not begin with personal computers in the 70s and 80s, but started back in the 1950s with the shift from electromechanical calculating machines to transistor based mainframes. However, because so few people used computers back then—they were largely relegated to obscure back office tasks and complex scientific calculations—the transformation took place largely out of public view.

A similar process is taking place today with new architectures such as quantum and neuromorphic computing. While these technologies are not yet commercially viable, they are advancing quickly and will eventually become thousands, if not millions, of times more effective than digital systems.

However, what’s most important to understand is that they are fundamentally different from digital computers and from each other. Quantum computers will create incredibly large computing spaces that will handle unimaginable complexity. Neuromorphic systems, based on the human brain, will be massively powerful, vastly more efficient and more responsive.

Over the next decade we’ll be shifting to a heterogeneous computing environment, where we use different architectures for different tasks. Most likely, we’ll still use digital technology as an interface to access systems, but increasingly performance will be driven by more advanced architectures.

A Shift From Bits to Atoms

The digital revolution created a virtual world. My generation was the first to grow up with video games and our parents worried that we were becoming detached from reality. Then computers entered offices and Dan Bricklin created VisiCalc, the first spreadsheet program. Eventually smartphones and social media appeared and we began spending almost as much time in the virtual world as we did in the physical one.

Essentially, what we created was a simulation economy. We could experiment with business models in our computers, find flaws and fix them before they became real. Computer-aided design (CAD) software allowed us to quickly and cheaply design products in bits before we got down to the hard, slow work of shaping atoms. Because it’s much cheaper to fail in the virtual world than the physical one, this made our economy more efficient.

Today we’re doing similar things at the molecular level. For example, digital technology was combined with synthetic biology to quickly sequence the Covid-19 virus. These same technologies then allowed scientists to design vaccines in days and to bring them to market in less than a year.

A parallel revolution is taking place in materials science, while at the same time digital technology is beginning to revolutionize traditional industries such as manufacturing and agriculture. The expanded capabilities of heterogeneous computing will accelerate these trends over the next few decades.

What’s important to understand is that we spend vastly more money on atoms than bits. Even at this advanced stage, information technologies only make up about 6% of GDP in advanced economies. Clearly, there is a lot more opportunity in the other 94%, so the potential of the post-digital world is likely to far outstrip anything we’ve seen in our lifetimes.

Collaboration is the New Competitive Advantage

Whenever I think back to when we got that first computer back in the 1980s, I marvel at how different the world was then. We didn’t have email or mobile phones, so unless someone was at home or in the office, they were largely unreachable. Without GPS, we had to either remember where things were or ask for directions.

These technologies have clearly changed our lives dramatically, but they were also fairly simple. Email, mobile and GPS were largely standalone technologies. There were, of course, technical challenges, but these were relatively narrow. The “killer apps” of the post-digital era will require a much higher degree of collaboration over a much more diverse set of skills.

To understand how different this new era of innovation will be, consider how IBM developed the PC. Essentially, it sent some talented engineers to Boca Raton for a year and, in that time, they developed a marketable product. For quantum computing, however, IBM is building a vast network, including national labs, research universities, startups and industrial partners.

The same will be true of the post-Covid world. It’s no accident that Zoom has become the killer app of the pandemic. The truth is that the challenges we will face over the next decade will be far too complex for any one organization to tackle alone. That’s why collaboration is becoming the new competitive advantage. Power will reside not at the top of hierarchies, but at the center of networks and ecosystems.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Innovation Requires Constraints

GUEST POST from Greg Satell

Some years ago, I wrote an article in Harvard Business Review about stock buybacks, which were being pilloried at the time. Many people thought that companies were spending too much money to gin up their stock price when they could be investing those funds into innovation, making better products and creating new markets.

Yet I pointed out that things weren’t as they seemed. As Clayton Christensen had shown around the same time, there was a superabundance of capital (in response to the financial crisis, central banks had been flooding markets with money) and corporations had more money than they could profitably invest.

I also suspected, although the evidence was scant at the time, that the extra money was going to Silicon Valley startups, which seemed to me to be less potentially problematic, especially when the public sector was being woefully underfunded at the same time. Today, we can see the results and they aren’t pretty. Without constructive constraints, even good ideas go bad.

The Chimera of Mass Adoption

Shai Agassi had a good idea. His key insight was that electric cars couldn’t survive without an ecosystem of charging stations. Therefore, he reasoned, to spur mass adoption you needed to develop the cars and the charging stations in tandem. Once you relieved the problem of “range anxiety,” so the theory went, ordinary consumers would buy in.

An entrepreneur at heart, Agassi started a company, Better Place, to make his vision a reality and, with the support of a wide array of celebrities and politicians, raised nearly a billion dollars of venture capital. It seemed like a sure winner. After all, with that much money and star power, what could go wrong?

As it turns out, everything could go wrong. From the design of the cars, to the charging stations to the batteries themselves, every detail was fraught with problems. But with so much money, Agassi could continue to press forward, sell his vision and win over partners. Instead of resolving issues, they multiplied. In a few short years, the company was bankrupt.

The truth is that, outside of software, going after mass adoption from the start is usually a bad idea. Rather than trying to please everybody at once, you are much better off focusing on a hair-on-fire use case—a small segment of customers that has a problem they need solved so badly that they almost literally have their hair on fire—and building up from there.

Incidentally, that is exactly what Elon Musk did with Tesla. He didn’t try to build for the mass market, but for Silicon Valley millionaires who wanted a cool, eco-friendly car and wouldn’t need to rely on it for everyday use. That foothold allowed the company to learn from its inevitable mistakes, improve the product and its manufacturing process and, eventually, to prevail in the marketplace against much bigger, but more traditional, competitors.

Buying Into The Silicon Valley Myth

While Agassi’s idea had a certain logic to it, Adam Neumann’s is much harder to figure out. Essentially, he sold investors on the idea that renting coworking space to businesses, which was not at all a new or innovative idea, could somehow be married with some Silicon Valley pixie dust. The result was WeWork, a $47 billion debacle.

While WeWork is, in many ways, an exceptional case, in others it is surprisingly mundane. For more than a decade, investors—and the business community at large—have bought into the Silicon Valley myth that its model of venture-funded entrepreneurship is widely applicable outside of software and consumer gadgets. It is not.

The truth is that Silicon Valley’s way of doing business was a specific solution that applied to a limited set of industries where low or near-zero marginal costs and the potential for network effects made increasing returns to investment not only possible, but a legitimate business planning objective.

Unfortunately, when you try to apply those same business principles to an industry where those conditions do not exist, you essentially get a Ponzi scheme. As long as investors continue to pour money in, the business can continue to win market share by undercutting competitors on price. Eventually though, as in the case of WeWork, the bottom falls out.

The Cult of Talent

Better Place and WeWork, as well as other notable “unicorn debacles” such as Uber and Theranos, are cautionary tales. Venture capitalists, believing in their own brilliance as well as their ability to spot it in others, shoveled money into founders with questionable ideas and, as soon became apparent, even worse morals.

But what if you could have the best of both worlds? What if you could take all of that Silicon Valley venture money and, instead of throwing it all at some young hotshot, invest it in some grizzled veterans with real track records? Instead of betting on a long shot, you could essentially put your money on a proven performer.

That, essentially, was the idea behind Quibi, a short form video company founded by Jeffrey Katzenberg, who revived Disney’s animation studio and then went on to even greater success as Co-Founder of Dreamworks, and Meg Whitman, who led eBay from a small startup of 30 people to become a global powerhouse employing thousands and earning billions.

Yet these two old hands, with all of their experience and know-how, somehow managed to do even worse than the more obviously incompetent Agassi and Neumann. Despite raising more than $2 billion, within seven months of launching, Quibi acknowledged defeat, shutting down operations and vowing to return whatever money was left over to investors.

A Recurring Pattern of Fundamental Fallacy

It’s not hard to see an underlying pattern in all three of these massive failures. Venture investors, whose model is based on the principle that one outsized success can easily make up for any number of failed ventures, have come to believe that betting big can increase the chance of hitting that unlikely triumph.

What they don’t seem to have considered is that too much money can make a good idea go bad. Clearly, electric cars can succeed in the marketplace. Coworking spaces have been a viable business model for decades. There’s no question that Katzenberg and Whitman are talented executives. Yet, with the massive support of investors, they all failed massively.

Yet researchers have known for decades that creativity needs constraints. When you have a limited budget, you simply don’t have the luxury of ignoring problems. You have to face up to them and solve them or you won’t survive. When you have virtually unlimited resources, however, you can leave the hard stuff till another day. Eventually, it all comes crashing down.

Unfortunately, as Charles Duhigg explains in a piece in The New Yorker, Silicon Valley investors who are seen as insufficiently “founder friendly” now find themselves shut out of the best deals. Further research has begun to show that these tendencies, souped up by an overabundance of capital, have begun to crowd out good investments.

Or, put another way, Silicon Valley is building a doomsday machine and we desperately need to get off.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Why Good Ideas Fail

(And How to Help Yours Succeed)

GUEST POST from Greg Satell

In 1891, Dr. William Coley had an unusual idea. Inspired by an obscure case, in which a man who had contracted a severe infection was cured of cancer, the young doctor purposely infected a tumor on his patient’s neck with a heavy dose of bacteria. Miraculously, the tumor vanished and the patient remained cancer free even five years later.

You would think that such an accomplishment would be hailed as a breakthrough, but much like Ignaz Semmelweis a half century before, Coley’s work was met with skepticism. In fact, it would take over 100 years, until the first drug was approved in 2011, for immunotherapy to become widely accepted by the medical community.

This is far more common than you would think. We tend to think that if we get an idea right, that others will recognize its worth. That’s hardly ever true. In fact, if your idea is truly new and different, you can expect to encounter stiff resistance. Success or failure depends less on the actual value of an idea than how you overcome resistance and scale to impact.

The Tyranny of Paradigms

The use of the term paradigm shift has become so common that we scarcely stop to think where it came from. When Thomas Kuhn first introduced the concept in his classic The Structure of Scientific Revolutions, he described not merely an event but a process, one that has pervaded the history of science.

It starts with an established model, the kind we learn in school or during initial training for a career. Models become established because, at least on some level, they work. So the more proficient we become at applying a good model, the more favorably others view our performance. It’s what allows us to rise through the ranks and become successful.

Yet all models are, in some way, incomplete. Newton’s dynamics, to take just one famous example, work perfectly well for objects we encounter in everyday life and survived more than three centuries with little modification. It was only when scientists started looking closely at objects that were very small and very large that a need for Einstein’s theories arose.

That’s why new paradigms almost always face significant resistance and need to be championed by outsiders or newcomers. Or, as the physicist Max Planck put it, “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

An Idea In Its Infancy

Pixar founder Ed Catmull once wrote that “early on, all of our movies suck.” The trick, he pointed out, is to go beyond the initial germ of an idea and put in the hard work it takes to get something to go “from suck to not-suck.” He called early ideas “ugly babies,” because they start out “awkward and unformed, vulnerable and incomplete.”

There’s something romantic about the early stages of an idea, but it’s important to remember that, much like Catmull’s ugly babies, your idea is never going to be as weak and vulnerable as in those early days, before you get a chance to work out the inevitable kinks. You need to be careful not to overexpose it or it may die an early death.

So it’s important to overcome the urge to start with a bang and, at least in the beginning, focus on a relatively small circle who can help your ugly baby grow. These should be people you know and trust, or at least have indicated some enthusiasm for the concept. They should also be people who will be willing to point out early flaws.

For example, in his efforts to reform the Pentagon, Colonel John Boyd began every initiative by briefing a group of collaborators called the “Acolytes,” who would help hone and sharpen the ideas. He then moved on to congressional staffers, elected officials and the media. By the time the top brass were aware of what he was doing, he had too much support to ignore.

Learning What You Don’t Know

While your idea is still an “ugly baby,” there’s much that you don’t know and the evidence is rarely clear. In the case of Dr. Coley and immunotherapy, injecting cancer patients with toxins to make them sick was not only counter-intuitive, it often didn’t work. It seemed to help in a few rare cases, but not in most others and Coley couldn’t explain why.

As it turned out, the story was far stranger than anyone could have imagined. Coley and his supporters assumed that injecting toxins jump-started the immune system, but that wasn’t the case. In reality, our immune system is perfectly capable of identifying and attacking cancer cells. In fact, it seems that it kills off potentially cancerous cells all of the time.

Unfortunately, some cancers develop counterattacks. They evolve molecules that bind to specific receptors in our immune system and turn off the immune response. That was the reason why immunotherapy efforts kept failing, until Jim Allison made his breakthrough discovery in 1995.

What Allison figured out, more than a century after Coley’s experiment, was that we can engineer molecules that “block the blockers” and cure previously incurable cancers. Even then, it wasn’t an easy path. By the time he came around, many had tried and failed to develop an immune approach to cancer. It would take three years to find a firm willing to fund his work.

The drug based on Allison’s work, called Yervoy, received FDA approval in 2011, 16 years after his initial discovery. Finally, the floodgates opened and the work of countless immunotherapy researchers over more than a century began to bear fruit. Today, there are thousands of active scientists working in the field.

Ideas Need Ecosystems

Today, both William Coley and Jim Allison are celebrated scientists. However, while Coley’s work was never widely accepted during his lifetime, Jim Allison won the Nobel Prize for his work. Both had essentially the same idea, that the immune response could be used to fight cancer and save lives.

The truth is that ideas need ecosystems to scale to impact. Coley worked tirelessly to promote the potential of immune approaches to cancer and, after his death in 1936, left a small but dedicated cadre of disciples determined to continue his work. His daughter, Helen, set up the Cancer Research Institute to fund further discovery.

By the time Jim Allison came around, there were powerful molecular analysis techniques that allowed him and his colleagues to identify specific molecules within the immune system and understand their function. There were also networks of venture capital that funded firms like the one that supported Allison’s work.

Power in an ecosystem lies not at the top but emanates from the center and you move to the center by connecting out. That’s what separates great innovators from people who merely have ideas. They understand that no idea can stand on its own. Much like Catmull’s ugly babies, they need to be nourished and protected if they are to grow and make a real impact on the world.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


The Power of the Humility Principle

GUEST POST from Greg Satell

In 1929, just before the stock market crash, Louis Bamberger and his sister, Caroline Bamberger Fuld, sold their department store in Newark to R.H. Macy and Company for $25 million ($343 million in 2015 dollars). Grateful to the people of Newark for their support, they planned to endow a medical college in that city.

Things didn’t turn out that way. They were convinced by Abraham Flexner to create the Institute for Advanced Study instead, and to build it in Princeton. It would soon be the home of Albert Einstein and would become a beacon for scientists fleeing Europe, who would prove critical to winning the war and making America a technological superpower.

What always struck me about the story is that the Bambergers achieved their greatest impact not through greater knowledge or accomplishment, but humility. They could have stuck to their initial plan, but because they were willing to see its flaws and support another’s dream, they were able to change the world. We rarely understand the full impact of our actions.

Meritocracy and Humiliation

In 1940, James Conant, the President of Harvard, gave a talk at the University of California that was soon republished in The Atlantic magazine. Entitled, “Education for a Classless Society,” it championed the idea of social mobility based on merit, rather than privilege being handed down through inheritance.

Today, Conant’s idea has become inseparably intertwined with the American dream and repeated with almost metronomic regularity by politicians seeking office, parents raising children and educators trying to motivate students. We’re told, “You can be anything you want to” and “You can make it if you try.”

Yet as Michael Sandel points out in The Tyranny of Merit, this sorting system has had an insidious effect on our culture. Those who are deemed worthy get all the benefits that society has to offer. Those who are not are not only left behind, but are seen as “takers” rather than “makers” and therefore undeserving of even basic things like access to health care and child care.

The unlucky have come to be seen as culpable and those more fortunate consider themselves beholden to no one. Many in America, especially the two thirds of the country who do not have a college degree, are not only poor, but humiliated, creating opportunities for populist politicians. Elites, for their part, wonder what’s the matter with Kansas.

Citizens United, The Rise of Regulation and the Decline of Competitive Markets

In 2009, a conservative organization called Citizens United brought a suit against the Federal Elections Commission which argued that limits on corporate political donations violated the free speech clause of the First Amendment. Its success at the Supreme Court led to the rise of Super PACs and nearly unlimited political spending.

At first, things went according to plan. Studies have found that the ruling did indeed help Republicans, especially in their effort to win statehouses in 2010 and take control of redistricting. However, the decision also opened the door to massive funding of liberal causes and Democrats handily outraised Republicans in the 2020 election.

Yet perhaps the most perverse effect of the Citizens United decision has been how it has fed the rise of lobbying expenditures and regulation. When you allow business to invest unlimited amounts of money to influence government, it shouldn’t be surprising that a significant portion of that money is used to restrict competition.

It’s hard to escape the irony. An organization that bills itself as dedicated to supporting free enterprise and “restoring our government to citizens’ control” has not only led to a weakening of free markets but is also deeply unpopular. Pretty much the opposite of what was intended.

Income Inequality and Healthcare Costs

Research from the Pew Foundation finds that inequality is not only at record levels in the United States, but significantly higher than other developed nations. That should be cause for alarm in itself, but there is also growing evidence that there may be a reflexive relationship between income inequality and healthcare costs.

First, let’s start with the obvious. Income inequality has been shown to adversely affect mental and physical health. Part of the reason is that people at the low end of the income spectrum suffer from adverse social comparisons, which lead to depression and anxiety. However, evidence also suggests that even higher income people suffer from fear of losing their position, which has larger implications in a more unequal society.

There’s significant evidence that causality runs in the opposite direction. Because most Americans have insurance plans with high deductibles, we’re often getting hit with big out-of-pocket bills. Researchers have found that these expenses are having a measurable impact on income inequality.

Put simply, we’re becoming so worried about money that it’s affecting our physical and mental health, and the costs associated with that deterioration in our health are making us poorer, creating a vicious cycle that’s bankrupting our mind, body and spirit.

We Need to Think Less Like Engineers and More Like Gardeners

James Conant was a scientist and an educator, not an economist or a politician. Nevertheless, his ideas have deeply contributed to America’s political zeitgeist. In much the same way, the activists at Citizens United probably didn’t imagine that achieving their goals would undermine their aims. Few medical specialists are aware of the economic impacts of health policy.

We usually take action to solve specific, narrow problems within a domain in which we have acquired some expertise. Often, we train for years to develop that expertise and years more to gain the experience needed to plan and implement an effective solution. During all that time, we rarely stop to consider the impact of our work outside our chosen field.

In a sense, we’ve been trained to think like engineers. We identify problems to be solved, reduce those problems to a limited set of variables, develop metrics to evaluate those variables and develop a solution that is optimized for those metrics. Unfortunately, the solutions we create often create even more problems.

That’s the essence of the humility principle. We rarely fully understand the consequences of the actions we take. We live in a world not of linear cause and effect, but of complex ecosystems in which even our best laid plans touch off a complex web of ripple effects.

It’s time for us to take a more biological view in which we think less like engineers and more like gardeners that grow and nurture ecosystems. Instead of assuming we can design perfect solutions, we need to take a more Bayesian approach and make our systems less imperfect over time, fertilizing and pruning as we go.

A good place to start is to, like the Bambergers, think less of ourselves and open up to the mysteries of a universe we do not understand, to people who possess knowledge we do not and to the potential of the future as a collaborative project.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Silicon Valley Has Become a Doomsday Machine

GUEST POST from Greg Satell

I was working on Wall Street in 1995 when the Netscape IPO hit like a bombshell. It was the first big Internet stock and, although originally priced at $14 per share, it opened at double that amount and quickly zoomed to $75. By the end of the day, it had settled back at $58.25 and, just like that, a tiny company with no profits was worth $2.9 billion.

It seemed crazy, but economists soon explained that certain conditions, such as negligible marginal costs and network effects, would lead to “winner take all markets” and increasing returns to investment. Venture capitalists who bet on this logic would, in many cases, become rich beyond their wildest dreams.

Yet as Charles Duhigg explained in The New Yorker, things have gone awry. Investors who preach prudence are deemed to be not “founder friendly” and cut out of deals. Evidence suggests that the billions wantonly plowed into massive failures like WeWork and Quibi are crowding out productive investments. Silicon Valley is becoming a ticking time bomb.

The Rise Of Silicon Valley

In Regional Advantage, author AnnaLee Saxenian explained how the rise of the computer can be traced to the buildup of military research after World War II. At first, most of the entrepreneurial activity centered around Boston, but the scientific and engineering talent attracted to labs based in Northern California soon began starting their own companies.

Back east, big banks were the financial gatekeepers. In the Bay Area, however, small venture capitalists, many of whom were ex-engineers themselves, invested in entrepreneurs. Stanford Provost Frederick Terman, as well as existing companies, such as Hewlett Packard, also devoted resources to broaden and strengthen the entrepreneurial ecosystem.

Saxenian would later point out to me that this was largely the result of an unusual confluence of forces. Because there was a relative dearth of industry in Northern California, tech entrepreneurs tended to stick together. In a similar vein, Stanford had few large corporate partners to collaborate with, so sought out entrepreneurs. The different mixture produced a different brew and Silicon Valley developed a unique culture and approach to business.

The early success of the model led to a process that was somewhat self-perpetuating. Engineers became entrepreneurs and got rich. They, in turn, became investors in new enterprises, which attracted more engineers to the region, many of whom became entrepreneurs. By the 1980s, Silicon Valley had surpassed Route 128 outside Boston to become the center of the technology universe.

The Productivity Paradox and the Dotcom Bust

As Silicon Valley became ascendant and information technology gained traction, economists began to notice something strange. Although businesses were increasing investment in computers at a healthy clip, there seemed to be negligible economic impact. As Robert Solow put it, “You can see the computer age everywhere but in the productivity statistics.” This came to be known as the productivity paradox.

Things began to change around the time of the Netscape IPO. Productivity growth, which had been depressed since the early 1970s, began to surge and the idea of “increasing returns” began to take hold. Companies such as Webvan and Pets.com, with no viable business plan or path to profitability, attracted hundreds of millions of dollars from investors.

By 2000, the market hit its peak and the bubble burst. While some of the fledgling Internet companies, such as Cisco and Amazon, did turn out well, thousands of others went down in flames. Other more conventional businesses, such as Enron, WorldCom and Arthur Andersen, got caught up in the hoopla, became mired in scandal and went bankrupt.

When it was all over there was plenty of handwringing, a small number of prosecutions, some reminiscing about the Dutch tulip mania of 1637 and then everybody went on with their business. The Federal Reserve Bank pumped money into the economy, the Bush Administration pushed big tax cuts and within a few years things were humming again.

Web 2.0, the Great Recession and the Rise Of the Unicorns

Out of the ashes of the dotcom bubble arose Web 2.0, which saw the emergence of new social platforms like Facebook, LinkedIn and YouTube that leveraged their own users to create content and grew exponentially. The launch of the iPhone in 2007 ushered in a new mobile era and, just like that, techno-enthusiasts were once again back in vogue. Marc Andreessen, who co-founded Netscape, would declare that software was eating the world.

Yet trouble was lurking under the surface. Productivity growth disappeared in 2005 just as mysteriously as it appeared in 1996. All the money being pumped into the economy by the Fed and the Bush tax cuts had to go somewhere and found a home in a booming housing market. Mortgage bankers, Wall Street traders, credit raters and regulators all looked the other way while the bubble expanded and then, somewhat predictably, imploded.

But this time, there were no zany West Coast startup entrepreneurs to blame. It was, in fact, the establishment that had run us off the cliff. The worthless assets at the center didn’t involve esoteric new business models, but the brick and mortar of our homes and workplaces. The techno-enthusiasts could whistle past the graveyard, pitying the poor suckers who got caught up in a seemingly anachronistic fascination with things made with atoms.

Repeating a now-familiar pattern, the Fed pumped money into the economy to fuel the recovery. Establishment industries, such as the auto companies in Detroit, were discredited, and a superabundance of capital needed a place to go. Silicon Valley looked attractive.

The era of the unicorns, startup companies worth more than a billion dollars, had begun.

Charting A New Path Forward

In his inaugural address, Ronald Reagan declared that, “Government is not the solution to our problem, government is the problem.” In his view, bureaucrats were the enemy and private enterprise the hero, so he sought to dismantle federal regulations. This led to the Savings and Loan crisis that exploded, conveniently or inconveniently, during the first Bush administration.

So small town bankers became the enemy while hotshot Wall Street traders and, after the Netscape IPO, Internet entrepreneurs and venture capitalists became heroes. Wall Street would lose its luster after the global financial meltdown, leaving Silicon Valley’s venture-backed entrepreneurship as the only model left with any genuine allure.

That brings us to today, when “big tech” is increasingly under scrutiny. At this point, the government, the media, big business, small business, Silicon Valley, venture capitalists and entrepreneurs have all been somewhat discredited. There is no real enemy left besides ourselves and there are no heroes coming to save us. Until we learn to embrace our own culpability we will never be able to truly move forward.

Fortunately, there is a solution. Consider the recent Covid crisis, in which unprecedented collaboration between governments, large pharmaceutical companies, innovative startups and academic scientists developed a life-saving vaccine in record time. Similar, albeit fledgling, efforts have been going on for years.

Put simply, we have seen the next big thing and it is each other. By discarding childish old notions about economic heroes and villains we can learn to collaborate across historical, organizational and institutional boundaries to solve problems and create new value. It is in our collective ability to solve problems that we will create our triumph or our peril.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


How to Free Ourselves of Conspiracy Theories

GUEST POST from Greg Satell

If you think about it, postal carriers should be a little bit creepy. If someone told you that an agent of the federal government would come to your house every day with access to information about places you shop, businesses you transact with and people you know well enough to trade holiday cards with, it might cause you some alarm.

Yet we don’t find postal carriers creepy. In fact, despite vigorous efforts to malign the Postal Service, we trust it far more than most institutions. The truth is that we don’t conjure up conspiracy theories to explain the everyday and mundane, but some far off yonder which we cannot clearly designate, yet find threatening nonetheless.

The function conspiracy theories play is to explain things that we don’t understand and that feel beyond our control. So it shouldn’t be surprising that the age of Covid has spawned a myriad of crazy, dangerous notions. What we need to come to terms with is that the real problem plaguing society is a basic lack of trust, and that is where the battle for truth must be fought.

The Visceral Abstract

One of the frustrating things about modern life is that we experience so little of it directly. As Leonard Read pointed out in his 1958 essay, I, Pencil, the manufacture of even the simplest modern object is beyond the reach of a single person. Today, people depend on technologies to get through their day, but have only the barest notion of how they function.

The truth is that we live in a world of the visceral abstract, where strange theories govern our everyday lives. People may not care much, or even believe in, Einstein’s theory of special relativity, but if GPS satellites aren’t calibrated to take it into account, the delivery man won’t be able to bring their dinner. In much the same way, the Coronavirus will mutate, and the most infectious variant will dominate, no matter what you think of Darwin’s theory.
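To put a rough number on the GPS example, here is a back-of-the-envelope sketch in Python (my own illustration with approximate orbital figures, not something from the article). Special relativity alone makes the satellites’ clocks run slow by several microseconds a day and, because GPS works by timing light-speed signals, that drift would compound into kilometers of positioning error within a single day. General relativity adds an even larger correction in the opposite direction.

```python
# Rough, illustrative numbers only -- a sketch of why GPS must be corrected
# for special relativity, not an engineering calculation.
c = 2.998e8              # speed of light, m/s
v = 3.87e3               # approximate GPS orbital speed, m/s
seconds_per_day = 86_400

# Fractional clock slowdown from special relativity: ~ v^2 / (2 c^2)
fractional_drift = v**2 / (2 * c**2)                        # ~8e-11

# Accumulated clock error per day, in microseconds
clock_error_us = fractional_drift * seconds_per_day * 1e6   # ~7 microseconds

# Positions are computed from signal travel times, so a clock error of dt
# corresponds to a ranging error of roughly c * dt
ranging_error_km = c * clock_error_us * 1e-6 / 1000         # ~2 km per day

print(f"Clock drift from special relativity: ~{clock_error_us:.0f} microseconds/day")
print(f"Resulting ranging error: ~{ranging_error_km:.1f} km/day, and growing")
```

Left uncorrected, errors on that scale would make satellite navigation useless within days. The theory is abstract, but its consequences are entirely visceral.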

As Francis Fukuyama explains in his recent book, Identity, the pace of change and disruption in modern society demands that we make choices about who we are. Faced with so much we don’t understand there is no small amount of appeal to rejecting the unknown in favor of simpler explanations in the form of conspiracy theories.

Populists often say that they want to “take our country back,” but what they really mean is that they want to take our existence back. They want to banish the fabulous yonder for something closer and more tangible. They offer safe harbor and, for people who feel stranded on the rocks, with the sea crashing over them, the attraction can be undeniable.

Conforming To Local Majorities

We all have a certain capacity to believe in an idea or to partake in an action. We may be highly skeptical or wildly enthusiastic, depending on our innate preferences and previous experiences, but history shows that individuals—and, in fact, entire societies—are vulnerable to suggestion.

We are, for example, highly affected by what those around us think. In fact, a series of famous experiments first performed in the 1950s, and confirmed many times since then, showed that we will conform to the opinions of those around us even if they are obviously wrong. More recent research has found that the effect extends to three degrees of social distance.

The effect is then multiplied by our tendency to be tribal, even when the source of division is arbitrary. For example, in a study where young children were randomly assigned to a red or a blue group, they preferred pictures of other kids whose t-shirts matched their own group’s color. In another study, adults randomly assigned to “leopards” and “tigers” showed hostility toward outgroup members in fMRI scans, regardless of race.

So it isn’t surprising that people will be more willing to believe, say, a conspiracy theory floated by a high school friend than information from a government agency or recognized news source. If the majority of people around you believe something, you’re likely to believe it too, because that’s what’s close and tangible.

During the pandemic, when everybody is stuck inside, the effect of local majorities, especially in isolated online communities, is significantly more powerful than usual. These communities may in fact be geographically distant, but in mental and social space they make up a large part of our immediate environment.

The Psychology Of Delusion

Once we are exposed to an idea and influenced by those around us to be sympathetic to it, two cognitive biases begin to kick in. The first, called availability bias, is our tendency to overweight information that is most available to us. For example, reading or hearing about traffic fatalities on the news will do little to affect our driving habits, but when we pass a bad accident on the road, we’ll naturally slow down and become more cautious.

It’s amazing how powerful availability bias can be. Researchers have found that it even affects how investors react to analysts’ reports, how corporations invest in research and how jurors evaluate witness testimony. Other studies find that availability bias affects medical judgments. Even in matters of great import, we tend not to look very far for information.

Again, it’s easy to see how the pandemic combined with the Internet can make us more susceptible. Stuck at home, we spend more time engaging with communities online, where we tend to be surrounded by likeminded people. Their opinion will seem more real to us than those of “experts” from outside our community, whether that community is virtual or not.

This effect is then combined with confirmation bias, our tendency to seek out information that supports our prior beliefs and reject contrary evidence. Those who fall prey to conspiracy theories often report spending a lot of time searching the Internet and watching YouTube videos, which confirm and extend their discussions with “fellow travelers.”

Rebuilding Trust

Once we become aware of where conspiracy theories come from, it becomes easier to understand why we tend to be far more suspicious of, say, public officials or medical experts than our postal carriers. We tend to trust those we see as being part of our communities and are suspicious of those we see as outsiders.

Unfortunately, the stresses on our society will only intensify over the next decade as we undergo major shifts in technology, resources, migration and demography. These changes will inevitably hit some segments of society harder than others and, it’s safe to assume, those left behind will likely feel that society has forsaken them.

We need to learn how to rebuild trust, even with our enemies and the best—perhaps the only way—to do that is by focusing on shared values. We might, for example, disagree on exactly how our criminal justice system should function, but we can all agree that everyone has the right to live in a safe community. We may not agree on the specifics of a “Green New Deal,” but can all see the importance of investing in our rural communities and small towns.

Most of all, we need to rebuild a sense of connection. Fortunately, network science tells us that it takes relatively few connections to drastically reduce social distance. Trust is personal, not political. It can’t be legislated or mandated but arises out of shared experience that contributes to the collective well-being. Like our mail carriers, our institutions must be seen to be competently serving us and having our best interests at heart.
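That claim from network science is easy to see in a toy model. The sketch below (assuming Python with the networkx library; the code and parameters are my own illustration, not part of the article) reproduces the classic Watts-Strogatz “small world” result: in a community of 1,000 people who each know only their ten nearest neighbors, the average separation between any two people is roughly 50 steps, but rewiring even a small fraction of those relationships to random strangers collapses it to single digits.

```python
# A toy illustration of the small-world effect: a few random long-range
# links drastically reduce the average "degrees of separation".
import networkx as nx

n, k = 1_000, 10   # 1,000 people, each initially linked to their 10 nearest neighbors

for p in (0.0, 0.01, 0.1):   # fraction of links rewired to random strangers
    # connected_watts_strogatz_graph retries until the network is connected,
    # so the average shortest path length is well defined
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    avg = nx.average_shortest_path_length(g)
    print(f"fraction rewired = {p}: average separation ~ {avg:.1f} steps")

# Typical output: about 50 steps with no rewiring, around 10 with 1% of
# links rewired, and around 5 with 10% rewired.
```

The point is not the exact numbers but the shape of the effect: a handful of bridging relationships does most of the work of shrinking social distance.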

In the final analysis, our problem is not one of information, but that of basic good will. The antidote is not stronger arguments, but more capable public service.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Four Paradigm Shifts Defining Our Next Decade

GUEST POST from Greg Satell

The statistician George Box pointed out that “all models are wrong, but some are useful.” He meant that we create models as simplified representations of reality. They are merely tools and should never be mistaken for reality itself. Unfortunately, that’s much easier to say than it is to practice.

All too often, models take on the illusion of reality. We are trained, first at school and then on the job, to use models to make decisions. Most of the time the models are close enough to reality that we don’t really notice the discrepancy. Other times we notice that the model is off, but we dismiss it as an unusual case or anomaly.

Yet the real world is always changing. So, models tend to get more wrong—and hence less useful—over time. Eventually, the once-useful models become misleading and we undergo a paradigm shift. Today, as we experience a period of enormous change, we need to unlearn old models and replace them with new ones. They too will be wrong, but hopefully useful.

1. From Value Chains to Ecosystems

The dominant view of strategy in the 20th century was based on Michael Porter’s ideas about competitive advantage. In essence, he argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods.

Yet markets today are much faster, more interconnected and more complex than they were when Porter formulated his ideas about competitive advantage. In a fast-moving information economy, firms increasingly depend on ecosystems to compete. That drastically changes the game.

Ecosystems are nonlinear and complex. Power emanates from the center of a network rather than from the top of a value chain. You move to the center by connecting out. In a network-driven world you need to continually widen and deepen links to other stakeholders within the ecosystem. That’s how you gain access to resources like talent, technology and information.

Consider the mobility revolution that is disrupting the auto industry. In an earlier age, the auto giants would have sought to use their market clout to dominate nascent players in an attempt to preserve their position. Now however, they are creating partnerships with tech companies, startups and others in order to innovate more effectively in the space.

Even more impressive has been the global effort to fight the Covid crisis, in which unprecedented collaboration between governments, large pharmaceutical companies, innovative startups and academic scientists developed a life-saving vaccine in record time. Similar, albeit fledgling, efforts have been going on for years.

2. From Maximizing Bargaining Power to Building Resilience and Trust

Porter’s ideas dominated thinking in corporate strategy for decades, yet they had a fatal flaw that wasn’t always obvious. Thinking in terms of value chains is viable when technology is relatively static, but when the marketplace is rapidly evolving it can get you locked out of important ecosystems and greatly diminish your ability to compete.

A report from Accenture Strategy analyzing over 7000 firms found that trust itself is increasingly becoming a competitive advantage. When evaluating competitive agility, it found trust “disproportionately impacts revenue and EBITDA.” The truth is that to compete effectively you need to build deep bonds of trust throughout a complex ecosystem of stakeholders.

If you are always looking to maximize your bargaining power, you are likely to cut yourself off from important information and capabilities that you will need to effectively compete. That’s one reason that the Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, issued a statement that discarded the old notion that the purpose of a business is solely to create shareholder value in favor of a broader stakeholder approach.

It is through forging bonds of trust that a business can build resiliency. If a company is seen as trustworthy, then it can draw on the goodwill of customers, employees, partners and communities to help it overcome a disruptive event. If, on the other hand, it is seen as greedy and predatory, everything becomes much harder. We need to learn how to rebuild trust.

3. From Vertical Agility to Horizontal Agility

For the past 50 years, innovation has largely been driven by our ability to cram more transistors onto a silicon wafer. That’s what’s allowed us to double the power of our technology every 18 months or so and led to the continuous flow of new products and services streaming out of innovative organizations.

Perhaps not surprisingly, over the past few decades agility has become a defining competitive attribute. Because the fundamentals of digital technology have been so well understood, much of the value shifted away from fundamental technologies toward applications and things like design and user experience. Yet that will change in the years ahead.

Over the past few decades, agility has largely meant moving faster and faster down a predetermined path. Over the next few decades, however, agility will take on a new meaning: the ability to explore multiple domains at once and combine them into something that produces value. We’ll need to learn how to go slower to deliver much larger impacts.

Over the next few decades we will struggle to adapt to a post-digital age and we will need to rethink old notions about agility. To win in this new era of innovation we will have to do far more than just move fast and break things.

4. From Bits to Atoms

In The Rise and Fall of American Growth, economist Robert Gordon argues that the rapid productivity growth the US experienced from 1920 to 1970 is largely a thing of the past. While there may be short spurts of growth, like there was in the late 1990s, we’re not likely to see a sustained period of progress anytime soon.

Among the reasons he gives is that, while earlier innovations such as electricity and the internal combustion engine had broad implications, the impact of digital technology has been amazingly narrow. The evidence bears this out. We see, to paraphrase Robert Solow, digital technology just about everywhere except in the productivity statistics.

Still, there are indications that the future will look very different from the past. Digital technology is beginning to power new areas in the physical world, such as synthetic biology and materials science, that are already having a profound impact on such high-potential fields as medical research, renewable energy and manufacturing.

It is all too easy to get caught up in old paradigms. When progress is powered by chip performance and the increased capabilities of computer software, we tend to judge the future by those same standards. What we often miss is that paradigms shift and the challenges—and opportunities—of the future are likely to be vastly different.

In an age of disruption, the only viable strategy is to adapt.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
