Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Why Business Strategies Should Not Be Scientific

GUEST POST from Greg Satell

When the physicist Richard Feynman took the podium to give the commencement speech at Caltech in 1974, he told the strange story of cargo cults. On certain islands in the South Pacific, he explained, tribal societies had seen troops build airfields during World War II and were impressed with the valuable cargo that arrived at the bases.

After the troops left, the island societies built their own airfields, complete with mock radios, aircraft and mimicked military drills in the hopes of attracting cargo themselves. It seems more than a little silly, and of course, no cargo ever came. Yet these tribal societies persisted in their strange behaviors.

Feynman’s point was that we can’t merely mimic behaviors and expect to get results. Yet even today, nearly a half century later, many executives and business strategists have failed to learn that simple lesson by attempting to inject “science” into strategy. The truth is that while strategy can be informed by science, it can never be, and shouldn’t be, truly scientific.

Why Business Case Studies Are Flawed

In 2004, I was leading a major news organization during the Orange Revolution in Ukraine. What struck me at the time was how thousands of people, who would ordinarily be doing thousands of different things, would stop what they were doing and start doing the same thing, all at once, in nearly perfect unison, with little or no formal coordination.

That’s what started the journey that ultimately resulted in my book, Cascades. I wanted to harness those same forces to create change in a business context, much as the protesters in Ukraine did in a political context and countless others, such as LGBT activists, did in social contexts. In my research I noticed how different studies of political and social movements were from business case studies.

With historical political and social movements, such as the civil rights movement in the United States or the anti-apartheid struggle in South Africa, there was abundant scholarship, often based on hundreds, if not thousands, of contemporary accounts. Business case studies, on the other hand, were largely done by a small team performing a handful of interviews.

When I interviewed people involved in the business cases, I found that they shared some important features with political and social movements that weren’t reported in the case studies. What struck me was that these features were noticed at the time, and in some cases discussed, but weren’t regarded as significant.

To be clear, I’m not arguing that my research was more “scientific,” but I was able to bring a new perspective. Business cases are, almost by necessity, focused on successful efforts, researched after the fact and written from a management perspective. We rarely get much insight into failed efforts or see perspectives from ordinary customers, line workers, competitors and so on.

The Halo Effect

Good case studies are written by experienced professionals who are trained to analyze a business situation from a multitude of perspectives. However, their ability to do that successfully is greatly limited by the fact that they already know the outcome. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture with happy people who worked long hours but loved every minute of it. When the market tanked, however, all of a sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, given a successful example, analysts looked at particular qualities in a positive light. However, when things began to go the other way, those same qualities were perceived as negative.

So when an organization is doing well, we see them as “idealistic” and “values driven,” but when things go sour, those same traits are seen as “arrogant” and “impractical.” Given the same set of facts, we can, and often do, come to very different conclusions when our perception of the outcomes changes.

The Problem with Surveys

Besides case studies, another common technique for analyzing business trends and performance is the executive survey. Typically, a research company or consulting firm sends out questionnaires to a few hundred executives and then analyzes the results. Much as Feynman described, surveys give these studies an air of scientific rigor.

This appearance of scientific rigor is largely a mirage. Yes, there are numbers, graphs and pie charts, much as you would see in a scientific paper, but important elements are usually missing, such as a clearly formulated hypothesis, a control group and a peer review process.

Another problematic aspect is that these types of studies emphasize what a typical executive thinks about a particular business issue or trend. So what they really examine is the current zeitgeist, which may or may not reflect current market reality. A great business strategy does not merely reflect what typical executives know, but exploits what they do not.

Perhaps most importantly, these types of surveys are generally not marketed as simple opinion surveys, but as sources of profound insight designed to help leaders get an edge over their competitors. The numbers, graphs and pie charts are specifically designed to look “scientific” in order to make them appear to be statements of empirical fact.

Your Strategy Is Always Wrong, You Have to Make It Right

We’d like strategy to be scientific, because few leaders like to admit that they are merely betting on an idea. Nobody wants to go to their investors and say, “I have a hunch about something and I’d like to risk significant resources to find out if I’m right.” Yet that’s exactly what successful businesses do all the time.

If strategy were truly scientific, then you would expect management to get better over time, much as, say, cancer treatment or technology performance does. However, just the opposite seems to be the case. The average tenure of companies on the S&P 500 has been shrinking for decades, and CEOs get fired more often.

The truth is that strategy can never be scientific, because the business context is always evolving. Even if you have the right strategy today, it may not be the right strategy for tomorrow. Changes in technology, consumer behavior and the actions of your competitors make that a near certainty.

So instead of assuming that your strategy is right, a much better course is to assume that it is wrong in at least some aspects. Techniques like pre-mortems and red teams can help you to expose flaws in a strategy and make adjustments to overcome them. The more you assume you are wrong, the better your chances are of being right.

Or, as Feynman himself put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Globalization and Technology Have Failed Us

GUEST POST from Greg Satell

In November 1989, there were two watershed events that would change the course of world history. The fall of the Berlin Wall would end the Cold War and open up markets across the world. That very same month, Tim Berners-Lee would create the World Wide Web and usher in a new technological era of networked computing.

It was a time of great optimism. Books like Francis Fukuyama’s The End of History predicted a capitalist, democratic utopia, while pundits gushed over the seemingly never-ending parade of “killer apps,” from email and e-commerce to social media and the mobile web. The onward march of history seemed unstoppable.

Today, 30 years on, it’s time to take stock and the picture is somewhat bleak. Instead of a global technological utopia, there are a number of worrying signs ranging from income inequality to the rise of popular authoritarianism. The fact is that technology and globalization have failed us. It’s time to address some very real problems.

Where’s the Productivity?

Think back, if you’re old enough, to before this all started. Life before 1989 was certainly less modern; we didn’t have mobile phones or the Internet, but for the most part it was fairly similar to today. We rode in cars and airplanes, watched TV and movies, and enjoyed the benefits of home appliances and air conditioners.

Now try to imagine what life was like in 1900, before electricity and internal combustion gained wide adoption. Even doing a simple task like cooking a meal or cleaning the house took hours of backbreaking labor to haul wood and water. While going back to living in the 1980s would involve some inconvenience, we would struggle to survive before 1920.

The productivity numbers bear out this simple observation. The widespread adoption of electricity and internal combustion led to a 50-year boom in productivity between 1920 and 1970. The digital revolution, on the other hand, created only an 8-year blip between 1996 and 2004. Even today, with artificial intelligence on the rise, productivity remains depressed.

At this point, we have to conclude that despite all the happy talk and grand promises of “changing the world,” the digital revolution has been a huge disappointment. While Silicon Valley has minted billionaires at record rates, digital technology has not made most of us measurably better off economically.

Winners Taking All

The increase of globalization and the rise of digital commerce was supposed to be a democratizing force, increasing competition and breaking the institutional monopoly on power. Yet just the opposite seems to have happened, with a relatively small global elite grabbing more money and more power.

Consider market consolidation. An analysis published in the Harvard Business Review showed that from airlines to hospitals to beer, market share is increasingly concentrated in just a handful of firms. A more expansive study of 900 industries conducted by The Economist found that two-thirds have become more dominated by larger players.

Perhaps not surprisingly, we see the same trends in households as we do with businesses. The OECD reports that income inequality is at its highest level in over 50 years. Even in emerging markets, where millions have been lifted out of poverty, most of the benefits have gone to a small few.

The consequences of growing inequality are concrete and stark. Social mobility has been declining in America for decades, transforming the “land of opportunity” into what is increasingly a caste system. Anxiety and depression are rising to epidemic levels. Life expectancy for the white working class is actually declining, mostly because of “deaths of despair” from drugs, alcohol and suicide. The overall picture is dim and seemingly getting worse.

The Failure of Freedom

Probably the biggest source of optimism in the 1990s was the end of the Cold War. Capitalism was triumphant and many of the corrupt, authoritarian societies of the former Soviet Union began embracing democracy and markets. Expansion of NATO and the EU brought new hope to more than a hundred million people. China began to truly embrace markets as well.

I moved to Eastern Europe in the late 1990s and was able to observe this amazing transformation for myself. Living in Poland, it seemed as if the entire country were advancing in time-lapse. Old, gray concrete buildings gave way to modern offices and apartment buildings. A prosperous middle class began to emerge.

Yet here as well things now seem to be going the other way. Anti-democratic regimes are winning elections across Europe while rising resentment against immigrant populations takes hold throughout the Western world. In America, we are increasingly mired in a growing constitutional crisis.

What is perhaps most surprising about the retreat of democracy is that it is happening not in the midst of some sort of global depression, but during a period of relative prosperity and low unemployment. Nevertheless, positive economic data cannot mask the basic truth that a significant portion of the population feels that the system doesn’t work for them.

It’s Time to Start Taking Responsibility for a Messy World

Looking back, it’s hard to see how an era that began with such promise turned out so badly. Yes, we’ve got cooler gadgets and streaming video. There have also been impressive gains in the developing world. Yet in so-called advanced economies, we seem to be worse off. It didn’t have to turn out this way. Our current predicament is the result of choices that we made.

Put simply, we have the problems we have today because they are the problems we have chosen not to solve. While the achievements of technology and globalization are real, they have also left far too many behind. We focused on simple metrics like GDP and shareholder value, but unfortunately the world is not so elegant. It’s a messy place and doesn’t yield so easily to reductionist measures and strategies.

There has, however, been some progress. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, issued a statement in 2019 that discarded the old notion that the sole purpose of a business is to provide value to shareholders. There are also a number of efforts underway to come up with broader measures of well-being to replace GDP.

Yet we still need to learn an important lesson: technology alone will not save us. To solve complex challenges like inequality, climate change and the rise of authoritarianism, we need to take a complex, network-based approach. We need to build ecosystems of talent, technology and information. That won’t happen by itself; we have to make better choices.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


The Reality Behind Netflix’s Amazing Success

GUEST POST from Greg Satell

Today, it’s hard to think of Netflix as anything but an incredible success. Its business has grown at breakneck speed and now streams to 190 countries, yet it has also been consistently profitable, earning over $12 billion last year. With hit series like Orange is the New Black and Stranger Things, it broke the record for Emmy Nominations in 2018.

Most of all, the company has consistently disrupted the media business through its ability to relentlessly innovate. Its online subscription model upended the movie rental business and drove industry giant Blockbuster into bankruptcy. Later, it pioneered streaming video and introduced binge watching to the world.

Ordinarily, a big success like Netflix would offer valuable lessons for the rest of us. Unfortunately, its story has long been shrouded in myth and misinformation. That’s why Netflix Co-Founder Marc Randolph’s book, That Will Never Work, is so valuable. It not only sets the story straight, it offers valuable insight into how to create a successful business.

The Founding Myth

Anthropologists have long been fascinated by origin myths. The Greek gods battled and defeated the Titans to establish Olympus. Remus and Romulus were suckled by a she-wolf and then established Rome. Adam and Eve were seduced by a serpent, ate the forbidden fruit and were banished from the Garden of Eden.

The reason every culture invents origin myths is that they help make sense of a confusing world and reinforce the existing order. Before science, people were ill-equipped to explain things like disease and natural disasters. So, stories, even if they were apocryphal, gave people comfort that there was a rhyme and reason to things.

So it shouldn’t be surprising that an unlikely success such as Netflix has its own origin myth. As legend has it, Co-Founder Reed Hastings misplaced a movie he rented and was charged a $40 late fee. Incensed, he set out to start a movie business that had no late fees. That simple insight led to a disruptive business model that upended the entire industry.

The truth is that late fees had nothing to do with the founding of Netflix. What really happened is that Reed Hastings and Marc Randolph, soon to be unemployed after the sale of their company, Pure Atria, were looking to ride the new e-commerce wave and become the “Amazon of” something. Netflix didn’t arise out of a moment of epiphany, but a process of elimination.

The Subscription Model Was an Afterthought

Netflix really got its start through a morning commute. As Pure Atria was winding down, Randolph and Hastings would drive together from Santa Cruz on Highway 17 over the mountain into Silicon Valley. It was a long drive, which gave them lots of time to toss around e-commerce ideas that ranged from customized baseball bats to personalized shampoo.

The reason they eventually settled on movies was the introduction of DVDs. In 1997, there were very few titles available, so stores didn’t stock them. The discs were also small and light and easy to ship. Best of all, the movie studios recognized that they had made a mistake by pricing movies on videotape too high and planned to offer DVDs at a price consumers would actually pay.

In the beginning, Netflix earned most of its money selling movies, not renting them. However, before long they realized that it was only a matter of time before Amazon and Walmart began selling DVDs as well. Once that happened, it was unlikely that Netflix would be able to compete, and they would have to find a way to make the rental model work.

The subscription model began as an experiment. No one seemed to want to rent movies by mail, so they were desperate to find a different model and kept trying things until they hit on something that worked. It wasn’t part of a master plan, but the result of trial and error. “If you would have asked me on launch day to describe what Netflix would eventually look like,” Randolph wrote, “I would have never come up with a monthly subscription service.”

The Canada Principle

As Netflix grew, it was constantly looking for new ways to build its business. One idea that continually came up was expanding to Canada. It’s just over the border, is largely English speaking, has a business-friendly regulatory environment and shares many cultural traits with the US. It just seemed like an obvious way to increase sales.

Yet they didn’t do it for two reasons. First, while Canada is very similar to the US, it is still another country, with its own currency, laws and other complicating factors. Also, while English is commonly spoken in most parts of Canada, in some regions French predominates. So, what looked simple at first had the potential to become maddeningly complex.

The second and more important reason was that it would have diluted their focus. Nobody has unlimited resources. You only have a certain number of people who can do a certain number of things. For every Canadian problem they had to solve, that was one problem that they weren’t solving in the much larger US business.

That became what Randolph called the “Canada Principle,” or the idea that you need to maximize your focus by limiting the number of opportunities that you pursue. It’s why they dropped DVD sales to focus on renting movies and then dropped a la carte rental to focus on the subscription business. That singularity of focus played a big part in Netflix’s success.

Nobody Knows Anything

Randolph’s mantra throughout the book is that “nobody knows anything.” He borrowed the phrase from the writer William Goldman’s memoir Adventures in the Screen Trade. What Goldman meant was that nobody truly knows how a movie will do until it’s out. Some movies with the biggest budgets and greatest stars flop, while some of the unlikeliest indy films are hits.

For Randolph though, it’s more of a guiding business philosophy. “For every good idea,” he says, “there are a thousand bad ideas it is indistinguishable from.” The only real way to tell the difference is to go out and try them, see what works, discard the failures and build on the successes. You have to, in other words, dare to be crap.

Over the years, I’ve had the chance to get to know hundreds of great innovators and they all tell a different version of the same story. While they often became known for one big idea, they had tried thousands of others before they arrived at the one that worked. It was perseverance and a singularity of focus, not a sudden epiphany, that made the difference.

That’s why the myth of the $40 late fee, while seductive, can be so misleading. What made Netflix successful wasn’t just one big idea. In fact, just about every assumption they made when they started the company was wrong. Rather, it was what they learned along the way that made the difference. That’s the truth of how Netflix became a media powerhouse.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Our Fear of China is Overblown

GUEST POST from Greg Satell

The rise of China over the last 40 years has been one of history’s great economic miracles. According to the World Bank, since it began opening up its economy in 1979, China’s GDP has grown from a paltry $178 billion to a massive $13.6 trillion. At the same time, research by McKinsey shows that its middle class is expanding rapidly.

What’s more, it seems like the Asian giant is just getting started. China has become increasingly dominant in scientific research and has embarked on two major initiatives: Made in China 2025, which aims to make it the leading power in 10 emerging industries, and a massive Belt and Road infrastructure initiative that seeks to shore up its power throughout Asia.

Many predict that China will dominate the 21st century in much the same way that America dominated the 20th. Yet I’m not so sure. First, American dominance was due to an unusual confluence of forces unlikely to be repeated. Second, China has weaknesses—and we have strengths—that aren’t immediately obvious. We need to be clear-headed about China’s rise.

The Making of an American Century

America wasn’t always a technological superpower. In fact, at the turn of the 20th century, much like China at the beginning of this century, the United States was largely a backwater. Still mostly an agrarian nation, the US lacked the industrial base and intellectual heft of Europe. Bright young students would often need to go overseas for advanced degrees. With no central bank, financial panics were common.

Yet all that changed quickly. Industrialists like Thomas Edison and Henry Ford put the United States at the forefront of the two most important technologies of the time, electricity and internal combustion. Great fortunes produced by a rising economy endowed great educational institutions. In 1913 the Federal Reserve Act was passed, finally bringing financial stability to a growing nation. By the 1920s, much like China today, America had emerged as a major world power.

Immigration also played a role. Throughout the early 1900s immigrants coming to America provided enormous entrepreneurial energy as well as cheap labor. With the rise of fascism in the 1930s, our openness to new people and new ideas attracted many of the world’s greatest scientists to our shores and created a massive brain drain in Europe.

At the end of World War II, the United States was the only major power left with its industrial base still intact. We seized the moment wisely, using the Marshall Plan to rebuild our allies and creating scientific institutions, such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) that fueled our technological and economic dominance for the rest of the century.

There are many parallels between the 1920s and the historical moment of today, but there are also many important differences. It was a number of forces, including our geography, two massive world wars, our openness as a culture and a number of wise policy choices that led to America’s dominance. Some of these factors can be replicated, but others cannot.

MITI and the Rise of Japan

Long before China loomed as a supposed threat to American prosperity and dominance, Japan was considered to be a chief economic rival. Throughout the 1970s and 80s, Japanese firms came to lead in many key industries, such as automobiles, electronics and semiconductors. The United States, by comparison, seemed feckless and unable to compete.

Key to Japan’s rise was a long-term industrial policy. The Ministry of International Trade and Industry (MITI) directed investment and funded research that fueled an economic miracle. Compared to America’s haphazard policies, Japan’s deliberate and thoughtful strategy seemed like a decidedly more rational and wiser model.

Yet before long things began to unravel. While Japan continued to perform well in many of the industries and technologies that MITI focused on, it completely missed out on new technologies, such as minicomputers and workstations in the 1980s and personal computers in the 1990s. As MITI continued to support failing industries, growth slowed and debt piled up, leading to a lost decade of economic malaise.

At the same time, innovative government policy in the US also helped turn the tide. For example, in 1987 a non-profit consortium made up of government labs, research universities and private sector companies, called SEMATECH, was created to regain competitiveness in the semiconductor industry. America soon retook the lead, which continues even today.

China 2025 and the Belt and Road Initiative

While the parallels with America in the 1920s underline China’s potential, Japan’s experience in the 1970s and 80s highlights its peril. Much like Japan, it is centralizing decision-making around a relatively small number of bureaucrats and focusing on a relatively small number of industries and technologies.

Much like Japan back then, China seems wise and rational. Certainly, the technologies it is targeting, such as artificial intelligence, electric cars and robotics would be on anybody’s list of critical technologies for the future. The problem is that the future always surprises us. What seems clear and obvious today may look ridiculous and naive a decade from now.

To understand the problem, consider quantum computing, which China is investing heavily in. However, the technology is far from monolithic. In fact, there are a wide variety of approaches being championed by different firms, such as IBM, Microsoft, Google, Intel and others. Clearly, some of these firms are going to be right and some will be wrong.

The American firms that get it wrong will fail, but others will surely succeed. In China, however, the ones that get it wrong will likely be government bureaucrats who will have the power to prop up state supported firms indefinitely. Debt will pile up and competitiveness will decrease, much like it did in Japan in the 1990s.

This is, of course, speculation. However, there are indications that it is already happening. A recent bike sharing bubble has ignited concerns that similar over-investment is happening in artificial intelligence. Many investors have also become concerned that China’s slowing economy will be unable to support its massive debt load.

The Path Forward

The rise of China presents a generational challenge. Clearly, we cannot ignore a rising power, yet we shouldn’t overreact either. While many have tried to cast China as a bad actor, engaging in intellectual theft, currency manipulation and other unfair trade policies, others point out that it is wisely investing for the long-term while the US manages by the quarter.

Interestingly, as Fareed Zakaria recently pointed out, the same accusations made about China’s unfair trade policies today were leveled at Japan 40 years ago. In retrospect, however, our fears about Japan seem almost quaint. Not only were we not crushed by Japan’s rise, we are clearly better for it, incorporating Japanese ideas like lean manufacturing and combining them with our own innovations.

I suspect, or at least I hope, that we will benefit from China’s rise much as we did from Japan’s. We will learn from its innovations and be inspired to develop more of our own. If a Chinese scientist invents a cure for cancer, American lives will be saved. If an American scientist invents a better solar panel, fewer Chinese will be choking on smog.

Perhaps most of all, we need to remember that what made the 20th Century the American Century was our ability to rise to the challenges that history presented. Whether it was rebuilding Europe in the 40s and 50s, or Sputnik in the 50s and 60s or Japan in the 70s and 80s, competition always brought out the best in us. Then, as now, our destiny was our own to determine.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Why Humans Fail to Plan for the Future

GUEST POST from Greg Satell

I was recently reading Michio Kaku’s wonderful book, The Future of Humanity, about colonizing space and was amazed at how detailed some of the plans are. Plans for a Mars colony, for example, are already fairly advanced. In other cases, scientists are actively thinking about technologies that won’t be viable for a century or more.

Yet while we seem to be so good at planning for life in outer space, we are much less capable of thinking responsibly about the future here on earth, especially in the United States. Our federal government deficit recently rose to 4.6% of GDP, which is obviously unsustainable in an economy that’s growing at a meager 2.3%.

That’s just one data point, but everywhere you look we seem to be unable to plan for the future. Consumer debt in the US recently hit levels exceeding those before the crash in 2008. Our infrastructure is falling apart. Air quality is getting worse. The list goes on. We need to start thinking more seriously about the future, but we don’t seem to be able to. Why is that?

It’s Biology, Stupid

The simplest and most obvious explanation for why we fail to plan for the future is basic human biology. We have pleasure centers in our brains that release a neurotransmitter called dopamine, which gives us a feeling of well-being. So, it shouldn’t be surprising that we seek to maximize our dopamine fix in the present and neglect the future.

Yuval Noah Harari made this argument in his book Homo Deus, in which he argued that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, Harari argues, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Yet this somehow doesn’t feel persuasive. Adults in even primitive societies are expected to overcome basic urges. Citizens of Ancient Rome were taxed to pay for roads that led to distant lands and took decades to build. Medieval communities built churches that stood for centuries. Why would we somehow lose our ability to think long-term in just the past generation or so?

The Profit Motive

Another explanation of why we neglect the future is the profit motive. Pressed by demanding shareholders to deliver quarterly profits, corporate executives focus on showing short-term profits instead of investing for the future. The result is increased returns to fund managers, but a hollowing out of corporate competitiveness.

A recent article in Harvard Business Review would appear to bear this out. When a team of researchers looked into the health of the innovation ecosystem in the US, they found that corporate America has largely checked out. They also observed that storied corporate research labs, such as Bell Labs and Xerox PARC, have diminished over time.

Yet take a closer look and the argument doesn’t hold up. In fact, the data from the National Science Foundation shows that corporate research has increased from roughly 40% of total investment in the 1950s and 60s to more than 60% today. At the same time, while some firms have closed research facilities, others, such as Microsoft, IBM and Google have either opened new ones or greatly expanded previous efforts. Overall R&D spending has risen over time.

Take a look at how Google innovates and you’ll be able to see the source of some of the dissonance. Fifty years ago, the only real option for corporate investment in research was a corporate lab. Today, however, there are many other avenues, including partnerships with academic researchers, internal venture capital operations, incubators, accelerators and more.

The Free Rider Problem

A third reason we may fail to invest in the future is the free rider problem. In this view, the problem is not that we don’t plan for the future, but that we don’t want to spend money on others we see as undeserving. For example, why should we pay higher taxes to educate kids from outside our communities? Or to fund infrastructure projects that are wasteful and corrupt?

This type of welfare queen argument can be quite powerful. Although actual welfare fraud has been shown to be incredibly rare, there are many who believe that the public sector is inherently wasteful and money would be more productively invested elsewhere. This belief doesn’t only apply to low-income people, but also to “elites” such as scientists.

Essentially, this is a form of kin selection. We are more willing to invest in the future of people who we see as similar to ourselves, because that is a form of self-survival. However, when we find ourselves asked to invest in the future of those we see as different from ourselves, whether that difference is of race, social class or even profession, we balk.

Yet here again, a closer look and the facts don’t quite fit with the narrative. Charitable giving, for example, has risen almost every year since 1977. So, it’s strange that we’re increasingly generous in giving to those who are in need, but stingy when it comes to things like infrastructure and education.

A New Age of Superstition

What’s especially strange about our inability to plan for the future is that it’s relatively new. In fact, after World War II, we invested heavily in the future. We created new avenues for scientific investment at agencies like the National Science Foundation and the National Institutes of Health, rebuilt Europe with the Marshall Plan and educated an entire generation with the GI Bill.

It wasn’t until the 1980s that our willingness to plan for and invest in the future began to wane, mostly due to two ideas that warped decision making. The first, called the Laffer Curve, argued that by lowering taxes we can increase revenue and that tax cuts, essentially, pay for themselves. The second, shareholder value, argued that whatever was best for shareholders is also best for society.

Both ideas have been partially or thoroughly debunked. Over the past 40 years, lower tax rates have consistently led to lower revenues and higher deficits. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, recently denounced the concept of shareholder value. Yet strangely, many still use both to support anti-future decisions.

We seem to be living in a new era of superstition, where mere belief is enough to inspire action. So projects which easily capture the imagination, such as colonizing Mars, are able to garner fairly widespread support, while investments in basic things like infrastructure, debt reduction or the environment are neglected.

The problem, in other words, seems to be mostly in the realm of a collective narrative. We are more than capable of enduring privation today to benefit tomorrow, just as businesses routinely take lower profits today to invest in tomorrow. We are even capable of giving altruistically to others in need. All we need is a story to believe in.

There is, however, the possibility that it is not the future we really have a problem with, but each other, and that our lack of a common story arises from a lack of shared values, which leads to major differences in how we view the same facts. In any case, the future suffers.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Hard Facts Are a Hard Thing

GUEST POST from Greg Satell

In 1977, Ken Olsen, the founder and CEO of Digital Equipment Corporation, reportedly said, “There is no reason for any individual to have a computer in his home.” It was an amazingly foolish thing to say and, ever since, observers have pointed to Olsen’s comment to show how supposed experts can be wildly wrong.

The problem is that Olsen was misquoted. In fact, his company was actually in the business of selling personal computers and he had one in his own home. This happens more often than you would think. Other famous quotes, such as IBM CEO Thomas Watson predicting that there would be a global market for only five computers, are similarly false.

There is great fun in bashing experts, which is why so many inaccurate quotes get repeated so often. If the experts are always getting it wrong, then we are liberated from the constraints of expertise and the burden of evidence. That’s the hard thing about hard facts. They can be so elusive that it’s easy to doubt their existence. Yet they do exist and they matter.

The Search for Absolute Truth

In the early 20th century, science and technology emerged as a rising force in western society. The new wonders of electricity, automobiles and telecommunication were quickly shaping how people lived, worked and thought. Empirical verification, rather than theoretical musing, became the standard by which ideas were measured.

It was against this backdrop that Moritz Schlick formed the Vienna Circle, which became the center of the logical positivist movement and aimed to bring a more scientific approach to human thought. Throughout the 1920s and 1930s, the movement spread and became a symbol of the new technological age.

At the core of logical positivism was Ludwig Wittgenstein’s theory of atomic facts, the idea that the world could be reduced to a set of statements that could be verified as being true or false—no opinions or speculation allowed. Those statements, in turn, would be governed by a set of logical algorithms which would determine the validity of any argument.

It was, to the great thinkers of the day, both a grand vision and an exciting challenge. If all facts could be absolutely verified, then we could confirm ideas with absolute certainty. Unfortunately, the effort would fail so miserably that Wittgenstein himself would eventually disown it. Instead of building a world of verifiable objective reality, we would be plunged into uncertainty.

The Fall of Logic and the Rise of Uncertainty

Ironically, while the logical positivist movement was gaining steam, two seemingly obscure developments threatened to undermine it. The first was a hole at the center of logic called Russell’s Paradox, which suggested that some statements could be both true and false. The second was quantum mechanics, a strange new science in which even physical objects could defy measurement.

Yet the battle for absolute facts would not go down without a fight. David Hilbert, the most revered mathematician of the time, created a program to resolve Russell’s Paradox. Albert Einstein, for his part, argued passionately against the probabilistic quantum universe, declaring that “God does not play dice with the universe.”

Alas, it was all for naught. Kurt Gödel would prove that every logical system is flawed with contradictions. Alan Turing would show that all numbers are not computable. The Einstein-Bohr debates would be resolved in Bohr’s favor, destroying Einstein’s vision of an objective physical reality and leaving us with an uncertain universe.

These developments weren’t all bad. In fact, they were what made modern computing possible. However, they left us with an uncomfortable uncertainty. Facts could no longer be absolutely verifiable, but would stand until they could be falsified. We could, after thorough testing, become highly confident in our facts, but never completely sure.

Science, Truth and Falsifiability

In his 1974 commencement speech at Caltech, Richard Feynman recounted going to a new-age resort where people were learning reflexology. A man was sitting in a hot tub rubbing a woman’s big toe and asking the instructor, “Is this the pituitary?” Unable to contain himself, the great physicist blurted out, “You’re a hell of a long way from the pituitary, man.”

His point was that it’s relatively easy to make something appear “scientific” by, for example, having people wear white coats or present charts and tables, but that doesn’t make it real science. True science is testable and falsifiable. You can’t merely state what you believe to be true, but must give others a means to test it and prove you wrong.

This is important because it’s very easy for things to look like the truth, but actually be false. That’s why we need to be careful, especially when we believe something to be true. The burden is even greater when it is something that “everybody knows.” That’s when we need to redouble our efforts, dig in and make sure we verify our facts.

“We’ve learned from experience that the truth will out,” Feynman said. “The first principle is that you must not fool yourself—and you are the easiest person to fool.” Truth doesn’t reveal itself so easily, but it’s out there and we can find it if we are willing to make the effort.

The Lie of a Post-Truth World

Writing a non-fiction book can be a grueling process. You not only need to gather hundreds of pages of facts and mold them into a coherent story that interests the reader, but also to verify that those facts are true. For both of my books, Mapping Innovation and Cascades, I spent countless hours consulting sources and sending out fact checks.

Still, I lived in fear knowing that whatever I put on the page would permanently be there for anyone to discredit. In fact, I would later find two minor inaccuracies in my first book (ironically, both had been checked with primary sources). These were not, to be sure, material errors, but they wounded me. I’m sure, in time, others will be uncovered as well.

Yet I don’t believe that those errors diminish the validity of the greater project. In fact, I think that those imperfections serve to underline the larger truth that the search for knowledge is always a journey, elusive and just out of reach. We can struggle for a lifetime to grasp even a small part of it, but to shake free even a few seemingly insignificant nuggets can be a gift.

Yet all too often people value belief more than facts. That’s why they repeat things that aren’t factual: they believe those things point to some deeper truth that defies the facts in evidence. Yet that is not truth. It is just a way of fooling yourself and, if you’re persuasive, fooling others as well. Still, as Feynman pointed out long ago, “We’ve learned from experience that the truth will out.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Software Isn’t Going to Eat the World

GUEST POST from Greg Satell

In 2011, technology pioneer Marc Andreessen declared that software is eating the world. “With lower start-up costs and a vastly expanded market for online services,” he wrote, “the result is a global economy that for the first time will be fully digitally wired — the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.”

Yet as Derek Thompson recently pointed out in The Atlantic, the euphoria of Andreessen and his Silicon Valley brethren seems to have been misplaced. Former unicorns like Uber, Lyft, and Peloton have seen their value crash, while WeWork saw its IPO self-destruct. Hardly “the dream of every cyber-visionary.”

The truth is that we still live in a world of atoms, not bits, and most of the value is created by making things we live in, wear, eat and ride in. For all of the tech world’s astounding success, it still makes up only a small fraction of the overall economy. So a software-centric view, while it has served Silicon Valley well in the past, may be its Achilles heel in the future.

The Silicon Valley Myth

The Silicon Valley way of doing business got its start in 1968, when an investor named Arthur Rock backed executives from Fairchild Semiconductor to start a new company, which would become known as Intel. Unlike back east, where businesses depended on stodgy banks for finance, on the west coast venture capitalists, many of whom were former engineers themselves, would decide which technology companies got funded.

Over the years, a virtuous cycle ensued. Successful tech companies created fabulously wealthy entrepreneurs and executives, who would in turn invest in new ventures. Things shifted into hyperdrive when the company Andreessen founded, Netscape, quadrupled its value on its first day of trading, kicking off the dotcom boom.

While the dotcom bubble would crash in 2000, it wasn’t all based on pixie dust. As the economist W. Brian Arthur explained in Harvard Business Review, while traditional industrial companies were subject to diminishing returns, software companies with negligible marginal costs could achieve increasing returns powered by network effects.

Yet even as real value was being created and fabulous new technology businesses prospered, an underlying myth began to take hold. Rather than treating the software business as a special case, many came to believe that the Silicon Valley model could be applied to any business. In other words, that software would eat the world.

The Productivity Paradox (Redux)

One reason that so many outside of Silicon Valley were skeptical of the technology boom for a long time was a longstanding productivity paradox. Although throughout the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year, productivity growth had diminished during the same period.

In the late 90s, however, this trend reversed itself and productivity began to soar. It seemed that Andreessen and his fellow “cyber-visionaries” were redeemed. No longer considered outcasts, they became the darlings of corporate America. It appeared that a new day was dawning and the Silicon Valley ethos took hold.

While the dotcom crash deflated the bubble in 2000, the Silicon Valley machine was soon rolling again. Web 2.0 unleashed the social web, smartphones initiated the mobile era and then IBM’s Watson’s defeat of human champions on the game show Jeopardy! heralded a new age of artificial intelligence.

Yet still, we find ourselves in a new productivity paradox. By 2005, productivity growth had disappeared once again and has remained diminished ever since. To paraphrase economist Robert Solow, we see software everywhere except in the productivity statistics.

The Platform Fallacy

Today, pundits are touting a new rosy scenario. They point out that Uber, the world’s largest taxi company, owns no vehicles. Airbnb, the largest accommodation provider, owns no real estate. Facebook, the most popular media owner, creates no content and so on. The implicit assumption is that it is better to build software that makes matches than to invest in assets.

Yet platform-based businesses have three inherent weaknesses that aren’t always immediately obvious. First, they lack barriers to entry, which makes it difficult to create a sustainable competitive advantage. Second, they tend to create “winner-take-all” markets, so for every fabulous success like Facebook there are thousands of failures. Finally, rabid competition leads to high costs.

The most important thing to understand about platforms is that they give us access to ecosystems of talent, technology and information and it is in those ecosystems where the greatest potential for value creation lies. That’s why, to become profitable, platform businesses eventually need to invest in real assets.

Consider Amazon: almost two-thirds of its profits come from its cloud computing unit, AWS, which provides computing infrastructure for other organizations. More recently, it bought Whole Foods and began opening Amazon Go retail stores. The more you look, the less Amazon looks like a platform and the more it looks like a traditional pipeline business.

Reimagining Innovation for a World of Atoms

The truth is that the digital revolution, for all of the excitement and nifty gadgets it has produced, has been somewhat of a disappointment. Since personal computers first became available in the 1970s, we’ve had less than ten years of elevated productivity growth. Compare that to the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that digital technology falls short.

In a sense though, the lack of impact shouldn’t be that surprising. Even at this late stage, information and communication technologies make up only about 6% of GDP in advanced economies. Clearly, that’s not enough to swallow the world. As we have seen, it’s barely enough to make a dent.

Yet still, there is great potential in the other 94% of the economy and there may be brighter days ahead in using computing technology to drive advancement in the physical world. Exciting new fields, such as synthetic biology and materials science may very well revolutionize industries like manufacturing, healthcare, energy and agriculture.

So, we are now likely embarking on a new era of innovation that will be very different from the digital age. Rather than being focused on one technology, concentrated in one geographical area and dominated by a handful of industry giants, it will be widely dispersed and made up of a diverse group of interlocking ecosystems of talent, technology and information.

Make no mistake. The future will not be digital. Instead, we will need to learn how to integrate a diverse set of technologies to reimagine atoms in the physical world.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Technology Pushing Us into a New Ethical Universe

GUEST POST from Greg Satell

We take it for granted that we’re supposed to act ethically and, usually, that seems pretty simple. Don’t lie, cheat or steal, don’t hurt anybody on purpose and act with good intentions. In some professions, like law or medicine, the issues are somewhat more complex, and practitioners are trained to make good decisions.

Yet ethics in the more classical sense isn’t so much about doing what you know is right, but thinking seriously about what the right thing is. Unlike the classic “ten commandments” type of morality, there are many situations that arise in which determining the right action to take is far from obvious.

Today, as our technology becomes vastly more powerful and complex, ethical issues are increasingly rising to the fore. Over the next decade we will have to build some consensus on issues like what accountability a machine should have and to what extent we should alter the nature of life. The answers are far from clear-cut, but we desperately need to find them.

The Responsibility of Agency

For decades intellectuals have pondered an ethical dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.

That’s just the start. It’s become increasingly clear that data bias can vastly distort decisions about everything from whether we are admitted to a school, get a job or even go to jail. Still, we’ve yet to achieve any real clarity about who should be held accountable for decisions an algorithm makes.

As we move forward, we need to give serious thought to the responsibility of agency. Who’s responsible for the decisions a machine makes? What should guide those decisions? What recourse should those affected by a machine’s decision have? These are no longer theoretical debates, but practical problems that need to be solved.

Evaluating Tradeoffs

“Now I am become Death, the destroyer of worlds,” said J. Robert Oppenheimer, quoting the Bhagavad Gita, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico. It was clear that we had crossed a Rubicon. There was no turning back and Oppenheimer, as the leader of the project, felt an enormous sense of responsibility.

Yet the specter of nuclear Armageddon was only part of the story. In the decades that followed, nuclear medicine saved thousands, if not millions of lives. Mildly radioactive isotopes, which allow us to track molecules as they travel through a biological system, have also been a boon for medical research.

The truth is that every significant advancement has the potential for both harm and good. Consider CRISPR, the gene editing technology that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies.

In the case of nuclear technology many scientists, including Oppenheimer, became activists. They actively engaged with the wider public, including politicians, intellectuals and the media to raise awareness about the very real dangers of nuclear technology and work towards practical solutions.

Today, we need similar engagement between people who create technology and the public square to explore the implications of technologies like AI and CRISPR, but it has scarcely begun. That’s a real problem.

Building a Consensus Based on Transparency

It’s easy to paint pictures of technology going haywire. However, when you take a closer look, the problem isn’t so much with technological advancement, but ourselves. For example, the recent scandals involving Facebook were not about issues inherent to social media websites, but had more to do with an appalling breach of trust and lack of transparency. The company has paid dearly for it and those costs will most likely continue to pile up.

It doesn’t have to be that way. Consider the case of Paul Berg, a pioneer in the creation of recombinant DNA, for which he won the Nobel Prize. Unlike Zuckerberg, he recognized the gravity of the Pandora’s box he had opened and convened the Asilomar Conference to discuss the dangers, which resulted in the Berg Letter that called for a moratorium on the riskiest experiments until the implications were better understood.

In her book, A Crack in Creation, Jennifer Doudna, who made the pivotal discovery for CRISPR gene editing, points out that a key aspect of the Asilomar conference was that it included not only scientists, but also lawyers, government officials and media. It was the dialogue between a diverse set of stakeholders, and the sense of transparency it produced, that helped the field advance.

The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. We can’t control what we reveal through exploration and discovery, but we can—and should—be wise about what we build. If you just “move fast and break things,” don’t be surprised if you break something important.

Meeting New Standards

In Homo Deus, Yuval Noah Harari writes that the best reason to learn history is “not in order to predict, but to free yourself of the past and imagine alternative destinies.” As we have already seen, when we rush into technologies like nuclear power, we create problems like Chernobyl and Fukushima and reduce technology’s potential.

The issues we will have to grapple with over the next few decades will be far more complex and consequential than anything we have faced before. Nuclear technology, while horrifying in its potential for destruction, requires a tremendous amount of scientific expertise to produce. Even today, it remains confined to governments and large institutions.

New technologies, such as artificial intelligence and gene editing, are far more accessible. Anybody with a modicum of expertise can go online and download powerful algorithms for free. High school kids can order CRISPR kits for a few hundred dollars and modify genes. We need to employ far better judgment than organizations like Facebook and Google have shown in the recent past.

Some seem to grasp this. Most of the major tech companies have joined with the ACLU, UNICEF and other stakeholders to form the Partnership on AI, a forum for developing sensible standards for artificial intelligence. Salesforce recently hired a Chief Ethical and Humane Use Officer. Jennifer Doudna has begun a similar process for CRISPR at the Innovative Genomics Institute.

These are important developments, but they are little more than first steps. We need a more public dialogue about the technologies we are building to reach some kind of consensus about what the risks are and what we as a society are willing to accept. If we don’t, the consequences, financial and otherwise, may be catastrophic.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Creating Change That Lasts

GUEST POST from Greg Satell

When Lou Gerstner took over at IBM in 1993, the century-old tech giant was in dire straits. Overtaken by nimbler upstarts, like Microsoft in software, Compaq in hardware and Intel in microprocessors, it was hemorrhaging money. Many believed that it needed to be broken up into smaller, more focused units in order to compete.

Yet Gerstner saw it differently and kept the company intact, which led to one of the most dramatic turnarounds in corporate history. Today, more than a quarter century later, while many of its former rivals have long since disappeared, IBM is still profitable and on the cutting edge of many of the most exciting technologies.

That success was no accident. In researching my book, Cascades, I studied not only business transformations, but many social and political movements as well. What I found is that while most change efforts fail, the relatively few that succeed follow a pattern that is amazingly consistent. If you want to create change that lasts, here’s what you need to do.

Build Trust Through Shared Values

When Mahatma Gandhi returned to India, he began to implement a strategy of civil disobedience similar to the one he had used so successfully in his campaigns in South Africa. He would later call this his Himalayan miscalculation. “Before a people could be fit for offering civil disobedience,” he later wrote, “they should thoroughly understand its deeper implications.”

One of the key tenets of transformation is that you can’t change fundamental behaviors without changing fundamental beliefs. So Gerstner, like Gandhi, first set out to change the culture within his organization. He saw that IBM had lost sight of its values. For example, the company had always valued competitiveness, but by the time he arrived much of that competitive energy was directed at fighting internal battles rather than at winning in the marketplace.

“We needed to integrate as a team inside the company so that we could integrate for the customers on their premises,” Gerstner would later say. “It flew in the face of what everybody did in their careers before I arrived there. It meant that we would share technical plans, we would move toward common technical standards and plans, we would not have individual transfer pricing between every product so that everybody could get their little piece of the customers’ money.”

He pushed these values constantly, through personal conversations, company emails, in the press and at company meetings. As Irving Wladawsky-Berger, one of Gerstner’s chief lieutenants, told me, “Lou refocused us all on customers and listening to what they wanted and he did it by example. We started listening to customers more because he listened to customers.”

Create a Clear Vision for the Future

At his very first press conference, Gerstner declared that “the last thing IBM needs right now is a vision.” So it was ironic that he developed one for the company just months into his tenure. What he noticed was that the culture within IBM had degraded to such an extent that it was hard to align its business units around a coherent strategy.

Every change effort begins with a list of grievances. Sales are down, your industry is being disrupted or technology is passing you by. But until you are able to articulate a clear vision for how you want things to look in the future, any change is bound to be fleeting. For Gerstner at IBM, that vision was to put customers, rather than technology, at the center.

He started with a single keystone change, shifting IBM’s focus from its own “proprietary stack of technologies” to its customers’ “stack of business processes.” That focus on the customer was much more clear and tangible than simply “changing the culture.” It also would require multiple stakeholders to work together and pave the way for future change.

In my research, I found that every successful transformation, whether it was a political movement, a social movement or a business transformation, was able to identify a keystone change that paved the way for a larger vision. So if you want to bring about lasting transformation, that’s a great place to start.

Identify Support — And Opposition

Once Gerstner decided to focus his transformation strategy on IBM’s customers, he found that they were terrified at the prospect of the company failing or being broken up. They depended on IBM’s products to manage mission critical processes. They also needed a partner who could help them transition legacy technology to the Internet.

He also found that he could create new allies to support his mission. For example, IBM had a history of competing with application developers, but wasn’t making much money in the application business. So he started treating the application developers as true partners and gained their support.

Yet every significant change effort is bound to attract opposition as well. There will always be a certain faction so tied to the old ways of doing things that they will do whatever they can to undermine the transformation, and IBM was no different. Some executives, for example, enjoyed the infighting and turf battles that had become the norm. Gerstner adopted a zero-tolerance policy and even fired some senior executives who didn’t get with the program.

Compare that to Blockbuster Video. As I’ve noted before, the company actually devised a viable strategy to meet the Netflix threat but was unable to align internal stakeholders around that strategy.

Treat Transformation as a Journey, Not A Destination

Probably the most impressive thing about IBM’s turnaround in the 90s is how it has endured. Gerstner left the firm in 2002, and it has had its share of ups and downs since then, but it still rakes in billions in profit every year and continues to innovate in cutting-edge areas such as blockchain and quantum computing.

“The Gerstner revolution wasn’t about technology or strategy, it was about transforming our values and our culture to be in greater harmony with the market,” Wladawsky-Berger told me. “Because the transformation was about values first and technology second, we were able to continue to embrace those values as the technology and marketplace continued to evolve.”

That’s what separates those who succeed from those who fail. You can’t bet your future on a particular strategy, program or tactic, because the future will always surprise us. It is how you align people behind a strategy, through forging shared values and building trust, that will determine whether change endures.

Perhaps most of all, you need to understand that transformation is always a journey, never a destination. Success is never a straight line. There will be ups and downs. But if you keep fighting for a better tomorrow, you will not only be able to bring about the change you seek, but the next ones after that as well.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

3 Things Politicians Can Do to Create Innovation

GUEST POST from Greg Satell

In the 1960s, the federal government accounted for more than 60% of all research funding, yet by 2016 that had fallen to just over 20%. Over the same period, businesses’ share of R&D investment more than doubled, from about 30% to almost 70%. Government’s role in US innovation, it seems, has greatly diminished.

Yet new research suggests that the opposite is actually true. Analyzing all patents since 1926, researchers found that the number of patents that relied on government support has risen from 12% in the 1980s to almost 30% today. Interestingly, the same research found that startups benefitted the most from government research.

As we struggle to improve productivity from historical lows, we need the public sector to play a part. The truth is that the government has a unique role to play in driving innovation, and funding research is only part of it. In addition to funding labs and scientists, it can help bring new ideas to market, act as a convening force and offer crucial expertise to private businesses.

1. Treat Knowledge As A Public Good

By 1941, it had become clear that the war raging in Europe would soon envelop the US. With this in mind, Vannevar Bush went to President Roosevelt with a visionary idea — to mobilize the nation’s growing scientific prowess for the war effort. Roosevelt agreed and signed an executive order that would create the Office of Scientific Research and Development (OSRD).

With little time to build labs, the OSRD focused on awarding grants to private organizations such as universities. It was, by all accounts, an enormous success and led to important breakthroughs such as the atomic bomb, the proximity fuze and radar. As the war was winding down, Roosevelt asked Bush to write a report on how to continue OSRD’s success in peacetime.

That report, titled Science, The Endless Frontier, was delivered to President Truman and would set the stage for America’s lasting technological dominance. It set forth a new vision in which scientific advancement would be treated as a public good, financed by the government, but made available for private industry. As Bush explained:

Basic research leads to new knowledge. It provides scientific capital. It creates the fund from which the practical applications of knowledge must be drawn. New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science.

The influence of Bush’s idea cannot be overstated. It led to the creation of new government agencies, such as the National Science Foundation (NSF), the National Institutes of Health (NIH) and, later, the Defense Advanced Research Projects Agency (DARPA). These helped to create a scientific infrastructure that has no equal anywhere in the world.

2. Help to Overcome the Valley of Death

Government has a unique role to play in basic research. Because fundamental discoveries are, almost by definition, widely applicable, they are much more valuable if they are published openly. At the same time, because private firms have relatively narrow interests, they are less able to fully leverage basic discoveries.

However, many assume that because basic research is a primary role for public investment, it is government’s only relevant function. Clearly, that’s not the case. Another important role government has to play is helping to bridge the gap between the discovery of a new technology and its commercialization, which is so fraught with peril that it’s often called the “Valley of Death.”

The oldest and best-known of these initiatives is the SBIR/STTR program, which is designed to help startups commercialize cutting-edge research. Grants are awarded in two phases. In the first, a proof-of-concept phase, grants are capped at $150,000. If that’s successful, up to $1 million more can be awarded. Some SBIR/STTR companies, such as Qualcomm, iRobot and Symantec, have become industry leaders.

Other, more focused programs have also been established. ARPA-E focuses exclusively on advanced energy technologies. Lab-Embedded Entrepreneurship Programs (LEEP) give entrepreneurs access to the facilities and expertise of the National Labs in addition to a small grant. The Manufacturing Extension Partnership (MEP) helps smaller companies build the skills they need to be globally competitive.

3. Act As a Convening Force

A third role government can play is that of a convening force. For example, in 1987 a non-profit consortium of government labs, research universities and private-sector companies, called SEMATECH, was created to help the US regain competitiveness in the semiconductor industry. America soon regained its lead, which continues even today.

The reason that SEMATECH was so successful was that it combined the scientific expertise of the country’s top labs with the private sector’s experience in solving real world problems. It also sent a strong signal that the federal government saw the technology as important, which encouraged private companies to step up their investment as well.

Today, a number of new initiatives have been launched that follow a similar model. The most wide-ranging is the Manufacturing USA Institutes, which are helping drive advancement in everything from robotics and photonics to biofabrication and composite materials. Others, such as JCESR and the Critical Materials Institute, are more narrowly focused.

Much like its role in supporting basic science and helping new technologies get through the “Valley of Death,” acting as a convening force is something that, for the most part, only the federal government can do.

Make No Mistake: This Is Our New Sputnik Moment

In the 20th century, three key technologies (electricity, internal combustion and computing) drove economic advancement, and the United States led in each one. That is why it is often called the “American Century.” No country, perhaps since the Roman Empire, has ever so thoroughly dominated the known world.

Yet the 21st century will be different. The most important technologies will be things like synthetic biology, materials science and artificial intelligence. These are largely nascent and it’s still not clear who, if anybody, will emerge as a clear leader. It is very possible that we will compete economically and technologically with China, much like we used to compete politically and militarily with the Soviet Union.

Yet back in the Cold War, it was obvious that the public sector had an important role to play. When Kennedy vowed to go to the moon, nobody argued that the effort should be privatized. It was clear that such an enormous undertaking needed government leadership at the highest levels. We pulled together and we won.

Today, by all indications, we are at a new Sputnik moment in which our global scientific and technological leadership is being seriously challenged. We can respond with imagination, creating novel ways to, as Bush put it, “turn the wheels of private and public enterprise,” or we can let the moment pass us by and let the next generation face the consequences.

One thing is clear. We will be remembered for what we chose to do.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.