Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

We Must Stop Worshiping Algorithms


GUEST POST from Greg Satell

In 1954 the economist Paul Samuelson received a postcard from his friend Jimmie Savage asking, “ever hear of this guy?” The “guy” in question was Louis Bachelier, an obscure mathematician who wrote a dissertation in 1900 that anticipated Einstein’s famous paper on Brownian motion published five years later.

The operative phrase in Bachelier’s paper, “the mathematical expectation of the speculator is zero,” was as powerful as it was unassuming. It implied that markets could be tamed using statistical techniques developed more than a century earlier and would set us down the path that led to the 2008 financial crisis.
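Bachelier’s claim is easy to see in a toy simulation (a sketch, not his actual model): if each price tick is an equal-odds step up or down, the speculator’s average gain converges to zero no matter how long the walk runs.

```python
import random

def simulate_walks(n_walks=10_000, n_steps=250, seed=42):
    """Average terminal value of symmetric random walks.

    Each step is +1 or -1 with equal probability, a toy version of a
    market in which the speculator's expected gain is zero.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        total += sum(rng.choice((-1, 1)) for _ in range(n_steps))
    return total / n_walks

print(simulate_walks())  # hovers near zero, never systematically positive
```

Individual walks wander far from zero; it is only the expectation that vanishes, which is exactly what made the result so unassuming.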

For decades we’ve been trying to come up with algorithms to help us engineer our way out of uncertainty and they always fail for the same reason: the world is a messy place. Trusting our destiny to mathematical formulas does not eliminate human error, it merely gives preference to judgements encoded in systems beforehand over choices made by people in real time.

The False Promise Of Financial Engineering

By the 1960s a revolution in mathematical finance, based on Bachelier’s paper and promoted by Samuelson, began to gain momentum. A constellation of new discoveries such as efficient portfolios, the capital asset pricing model (CAPM) and, later, the Black-Scholes model for options pricing created a standard model for thinking about economics and finance.

As things gathered steam, Samuelson’s colleague at MIT, Paul Cootner, compiled the most promising papers in a 500-page tome, The Random Character of Stock Market Prices, which became an instant classic. The book would become a basic reference for the new industries of financial engineering and risk management that were just beginning to emerge at the time.

However, early signs of trouble were being ignored. Included in Cootner’s book was a paper by Benoit Mandelbrot that warned that there was something seriously wrong afoot. He showed, with very clear reasoning and analysis, that actual market data displayed far more volatility than was being predicted. In essence, he was pointing out that Samuelson and his friends were vastly underestimating risk in the financial system.
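Mandelbrot’s observation can be illustrated numerically. The sketch below (the mixture model is purely illustrative, not Mandelbrot’s own) counts how often “four-sigma” moves occur under a pure Gaussian model versus a crude fat-tailed alternative:

```python
import random

def tail_counts(n=100_000, threshold=4.0, seed=7):
    """Count moves beyond `threshold` standard deviations.

    Compares a pure Gaussian model with a fat-tailed mixture:
    99% calm days (sigma = 1) and 1% turbulent days (sigma = 5).
    """
    rng = random.Random(seed)
    gaussian = sum(abs(rng.gauss(0, 1)) > threshold for _ in range(n))
    mixture = 0
    for _ in range(n):
        sigma = 5.0 if rng.random() < 0.01 else 1.0
        mixture += abs(rng.gauss(0, sigma)) > threshold
    return gaussian, mixture

g, m = tail_counts()
print(g, m)  # the mixture produces vastly more extreme moves
```

A Gaussian model predicts only a handful of four-sigma days in 100,000 observations; even a mildly fat-tailed process produces hundreds, which is the sense in which the standard model vastly underestimated risk.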

Leading up to the Great Recession, other warning signs would emerge, such as the collapse of LTCM hedge fund in 1998 and of Enron three years later, but the idea that mathematical formulas could engineer risk out of the system endured. The dreams turned to nightmares in 2008, when the entire house of cards collapsed into the worst financial crisis since the 1930s.

The Road To Shareholder Value

By 1970, Samuelson’s revolution in economics was well underway, but companies were still run much as they were for decades. Professional managers ran companies according to their best judgment about what was best for their shareholders, customers, employees and the communities that they operated in, which left room for variance in performance.

That began to change when Milton Friedman published an op-ed in The New York Times, which argued that managers had only one responsibility: to maximize shareholder value. Much like Bachelier’s paper, Friedman’s assertion implied that a simple rule of thumb with only one variable to optimize, rather than personal judgment, should govern.

This was great news for people managing businesses, who no longer had to face the same complex tradeoffs when making decisions. All they had to worry about was whether the stock price went up. Rather than having to choose between investing in factories and equipment to produce more product, or R&D to invent new things, they could simply buy back more stock.

The results are now in and they are abysmal. Productivity growth has been depressed since the 1970s. While corporate profits have grown as a percentage of GDP, household incomes have decoupled from economic growth and stagnated. Markets are less free and less competitive. Even social mobility in the US, the ability for ordinary people to achieve the American dream, has been significantly diminished.

The Chimera Of “Consumer Welfare”

The Gilded Age in America that took place at the end of the 19th century was a period of rapid industrialization and the amassing of great wealth. As railroads began to stretch across the continent, the fortunes of the Rockefellers, Vanderbilts, Carnegies and Morgans were built. The power of these men began to rival governments.

It was also an era of great financial instability. The Panic of 1873 and the Panic of 1893 devastated a populace already at the mercy of the often avaricious tycoons who dominated the marketplace. The Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 were designed to re-balance the scales and bring competition back to the market.

For the most part they were successful. The breakup of AT&T in the 1980s paved the way for immense innovation in telecommunications. Antitrust action against IBM paved the way for the era of the PC, and regulatory action against Microsoft helped promote competition on the Internet. American markets were the most competitive in the world.

Still, competition is an imprecise term. Robert Bork and other conservative legal thinkers wanted a simpler, more precise standard, based on consumer welfare. In their view, for regulators to bring action against a company, they had to show that the firm’s actions raised the prices of goods or services.

Here again, human judgment was replaced with an algorithmic approach that led to worse outcomes. Over 75% of industries have seen a rise in industry concentration levels since the late 1990s, which has helped to bring about a decline in business dynamism and record income inequality.

The Chimera Of Objectivity

Humans can be irrational and maddening. Decades of research have shown that, when given the exact same set of facts, even experts will make very different assessments. Some people will be more strict, others more lenient. Some of us are naturally optimistic, others are cynics. A family squabble in the morning can affect the choices we make all day.

So it’s not unreasonable to want to improve quality and reduce variance in our decision making by taking a more algorithmic approach: clear sets of instructions that hold sway no matter who applies them. These rules promise to make things more reliable, reduce uncertainty and, hopefully, improve effectiveness.

Yet as Yassmin Abdel-Magied and I explained in Harvard Business Review, algorithms don’t eliminate human biases, they merely encode them. Humans design the algorithms, collect the data that form the basis for decisions and interpret the results. The notion that algorithms are purely objective is a chimera.

The problem with algorithms is that they encourage us to check out, to fool ourselves into thinking we’ve taken human error out of the system and stop paying attention. They allow us to escape accountability, at least for a while, as we pass the buck to systems that spit out answers which affect real people.

Over the past twenty or thirty years, we’ve allowed this experiment to play out, and the results have been tragic. It’s time we try something else.

— Article courtesy of the Digital Tonto blog
— Image credit: Google Gemini (NanoBanana)

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Strategic Self-Righteousness is Not a Thing


GUEST POST from Greg Satell

Not long ago I was participating in a discussion on the social audio app, Clubhouse, and I said something a lady didn’t like that triggered her emotions. “Obviously, you need to be educated,” she said before subjecting me to a prolonged harangue riddled with inaccuracies, logical gaps and non-sequiturs.

Yet putting the merits of her argument aside, her more serious error was trying to overpower, rather than attract, in order to further her argument. If anything, she undermined her cause. Nobody likes a bully. Perhaps even more importantly, silencing opposing views restricts your informational environment and situational awareness.

This is why Gandhi so strictly adhered to the principle of ahimsa, which proscribes not only physical violence, but also violence in word and even thought. Everyone has their own sense of identity and dignity. Violating that will not bring you closer to success, but will almost certainly set you on a path to failure. Self-righteousness isn’t a strategy, but the lack of one.

Forming An Identity With Differentiated Values

Humans, by nature, seek out ideas to believe in. Ideas give us purpose and a sense of mission. That’s why every religion begins with an origin story, because it is our ideas that differentiate us from others and give us a sense of worth. What does it mean to be a Christian, Jew, or Muslim, a socialist or a capitalist, if we’re not differentiated by our beliefs?

So it shouldn’t be surprising that when people want to express their ideas, they tend to start with how their beliefs are different, because it is the dogmatic aspects of the concepts that drive their passion. Perhaps even more importantly, it is their conspicuous devotion that signals their inclusion with a particular tribe of shared identity.

Humans naturally form tribes in this way. In a study of adults who were randomly assigned to “leopards” and “tigers,” fMRI scans showed hostility toward out-group members. Similar results were found in a study involving five-year-old children and even in infants. Evolutionary psychologists attribute this tendency to kin selection, which explains how groups favor those who share their attributes in the hopes that those attributes will be propagated.

So when we’re passionate about an idea, we not only want to share it and “educate” others, we will also tend to see any threats to its survival as an affront to our identity. We begin to view ourselves as protectors and bond with others who share our purpose. We need to be aware of this pattern, because we’re all susceptible to it and that’s where the trouble starts.

Echo Chambers And The Emergence Of A Private Language

Spend time in an unfamiliar tribe and you’ll immediately notice that they share a private language. Minnesota Vikings fans shout “Skol!” Military people talk about distance in terms of “klicks,” and might debate the relative importance of HUMINT vs. SIGINT. Step into a marketing meeting and you’ll be subjected to a barrage of acronyms.

The philosopher Ludwig Wittgenstein explained how these types of private languages can be problematic. He made the analogy of a beetle in a box. If everybody had something in a box that they called a beetle, but no one could examine each other’s box, there would be no way of knowing whether everybody was actually talking about the same thing or not.

What Wittgenstein pointed out was that in this situation, the term “beetle” would lose relevance and meaning. It would simply refer to something that everybody had in their box, whatever that was. Everybody could just nod their heads not knowing whether they were talking about an insect, a German automobile or a British rock band.

Clearly, the way we tend to self-sort ourselves into homophilic, homogeneous groups will shape how we perceive what we see and hear, but it will also affect how we access information. Recently, a team of researchers at MIT looked into how we share information—and misinformation—with those around us. What they found was troubling.

When we’re surrounded by people who think like us, we share information more freely because we don’t expect to be questioned. We’re also less likely to check our facts, because we know that those we are sharing the item with will be less likely to inspect it themselves. So when we’re in a filter bubble, we not only share more, we’re also more likely to share things that aren’t true. Greater polarization leads to greater misinformation.

The Growing Backlash

One of the many things I’ve learned from my friend Srdja Popović is that the phase after an initial victory is often the most dangerous. Every revolution inspires its own counter-revolution. That is the physics of change. While you’re celebrating your triumph, the forces arrayed against you are redoubling their efforts to undermine what you’re trying to achieve.

Yet nestled safely within your tribe, speaking a private language in an echo chamber, you are unlikely to see the gathering storm. If most of the people around you think like you do, change seems inevitable. You tell each other stories about how history is on your side and the confluence of forces is in your favor.

Consider the case of diversity training. After the killing of George Floyd by a police officer led to massive global protests in over 2,000 towns and 60 countries, corporations around the world began to ramp up their diversity efforts, hiring “Chief Diversity Officers” and investing in training. For many, it was the dawn of a growing consciousness and a brighter, more equitable future.

It hasn’t seemed to turn out that way, though. Increased diversity training has not led to better outcomes and, in fact, there is increasing evidence of backlash. In particular, researchers note that much of the training makes people feel targeted. Telling people that they owe their positions to something other than hard work and skill offends their dignity and can actually trigger exactly the behaviors that diversity programs are trying to change.

These misgivings are rarely voiced out loud, however, which is why change advocates rarely notice the growing chorus waiting for an opportunity to send the pendulum swinging in the other direction.

Learning To Survive Victory

In The Righteous Mind, social psychologist Jonathan Haidt makes the point that many of our opinions are a product of our inclusion in a particular team. Because our judgments are so closely intertwined with our identity, contrary views can feel like an attack. So we feel the urge to lash out and silence opposition. That almost guarantees a failure to survive victory.

I first noticed this in the aftermath of Ukraine’s Orange Revolution in 2004. Having overcome a falsified election, we were so triumphant that we failed to see the gathering storm. Because we felt that the forces of history were on our side, we dismissed signs that the corrupt and thuggish Viktor Yanukovich was staging a comeback and paid a terrible price.

I see the same pattern in our work helping organizations with transformational initiatives. Change leaders feel so passionately about their idea that they want to push it through, silence dissent, launch it with a big communication campaign and create strong incentives to get on board. They’re sure that once everybody understands the idea, they’ll love it too.

The truth is that to bring about lasting change, you need to learn to love your haters. They’re the ones who can alert you to early flaws, which gives you the opportunity to fix them before they can do serious damage. They can also help you identify shared values that will let you communicate more effectively and design dilemmas that will send people your way.

But in order to do that, you need to focus your energy on winning converts, rather than punishing heretics. It’s more important to make a difference than it is to make a point.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Four Signs of an Industry Disruption


GUEST POST from Greg Satell

In his book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman explained that there are two modes of thinking that we use to make decisions, which he calls “System 1” and “System 2.” The first is more instinctual and automatic, the second more rational and deliberative. We need to use both to make good decisions.

Businesses also have two systems, which can sometimes conflict. One is immediate and operational. It seeks to optimize processes, gain market share and maximize profitability. The second builds capacity for the long term, by investing in employees, building trustful partnerships and creating new markets to compete for the future.

Obviously, these are not mutually exclusive. Just as we can step back and think rationally about instinctual urges, we can invest for both the short and the long term. Yet given that every business eventually matures and needs to renew itself, many end up taking the wrong path. Here are four signs that your industry might be in the process of being disrupted.

1. Maturing Technology

Fifteen years ago hardly anyone had a smartphone. Social media was in its infancy. Artificial intelligence was still science fiction. Yet today all of those things are somewhat mature technologies that have become an integral part of everyday life. Anywhere you go you see people using them as a matter of habit.

It’s become conventional wisdom to look at these developments and say that technology is accelerating. It certainly seems that way. Nevertheless, look a little closer and it becomes clear that’s not really true. Buy a computer or smartphone today and its capabilities are not that different from those that came out five years ago.

The truth is that every major technology has a similar life cycle called an S-curve. It emerges weak, buggy and flawed. Adoption is slow. In time, it hits its stride and enters a period of rapid growth until maturity and an inevitable slowdown. That’s what’s happening now with digital technology and we can expect many areas to slow down in the years to come.
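The S-curve described above is commonly modeled as a logistic function. A minimal sketch (the midpoint, rate and ceiling values are made up for illustration):

```python
import math

def logistic_adoption(t, midpoint=10.0, rate=0.6, ceiling=100.0):
    """Adoption level on an S-curve: slow start, rapid middle, flat end."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Adoption over a 20-year span: weak early, explosive in the middle,
# then an inevitable slowdown as the ceiling approaches.
for year in (0, 5, 10, 15, 20):
    print(year, round(logistic_adoption(year), 1))
```

Growth looks explosive near the midpoint and nearly flat at both ends, which is why a maturing technology can still feel fast-moving even as its curve flattens.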

The 1920s and 30s were a time of explosive growth in the automobile industry and electronic appliances. The 1950s and 60s were a golden age for antibiotics, with a number of life-saving new drugs being discovered every year. The 1970s were considered the heyday for airlines, and the past few decades have been focused on digital technology.

Yet every technology matures and every S-curve flattens, which is exactly what we’re seeing with digital technology today. Moore’s Law, the consistent doubling of transistors we can cram on a silicon wafer, is ending, and the digital era will end with it. Once opportunities to innovate narrow, firms look to other avenues to increase profits.

2. Consolidation

One of the key tools in any strategist’s toolbox is Michael Porter’s five forces analysis. The basic idea is that to compete effectively, you need to focus not just on the key competitors in your industry, but also customers, suppliers, new market entrants and substitutes. To build competitive advantage, you need to increase your bargaining power against all five.

Yet when an industry is in decline, the forces external to the industry get the upper hand. With new market entrants and substitutes becoming more attractive, customers and suppliers are in a position to negotiate better deals, margins get squeezed and profits come under pressure.

That’s why a lot of consolidation in an industry is usually a bad sign. It means that firms within the industry don’t see enough opportunities to improve their business by serving their customers more effectively, through innovating their products or their business models. To maintain margins, they need to combine with each other to control supply.

I think it’s clear that Silicon Valley is going through some version of this today. With Moore’s Law ending, the opportunities to innovate are narrowing and acquisitions are accelerating. The last breakthrough product, arguably, was the iPhone, launched in 2007. Startups don’t try to upend incumbents anymore; they sell to them.

3. Rent Seeking & Regulatory Capture

The goal of every business is to defy markets. Any firm at the mercy of supply and demand will find itself unable to make an economic profit—that is, profit over and above its cost of capital. In other words, unless a firm can beat Adam Smith’s invisible hand, investors would essentially be better off putting their money in the bank.

That leaves entrepreneurs and managers with two viable strategies. The first is innovation. Firms can create new and better products that produce new value. The second, rent seeking, is associated with activities like lobbying and regulatory capture, which seeks to earn a profit without creating added value. In fact, rent seeking often makes industries less competitive.

There is abundant evidence that over the last 20 years, American firms have shifted from an innovation mindset to one that focuses more on rent seeking. First and foremost has been the marked increase in lobbying expenditures, which have more than doubled since 1998, especially in the tech industry. Firms invest money for a reason. They expect a return.

It seems like they are getting their money’s worth. Corporate tax rates in the US have steadily decreased and are now among the lowest in the developed world. Occupational licensing, often the result of lobbying by trade associations, has increased five-fold since the 1950s. These restrictions have coincided with a decrease in the establishment of new firms.

If your industry is more focused on protecting existing markets than creating new ones, that is one sign that it is vulnerable to disruption.

4. The Inevitable Scandals

In the 1920s the Teapot Dome scandal rocked Washington. The Secretary of the Interior, Albert Bacon Fall, was found to have corruptly leased Navy petroleum reserves to private companies. In response, Congress gained the right to subpoena any US citizen’s tax records and increased regulation of campaign finance.

In the century since, we have had continuous cycles of largesse and reform. The savings and loan crisis in the 1980s led to FIRREA to increase oversight. Accounting scandals, like those involving Enron and WorldCom, led to Sarbanes-Oxley. The Financial Crisis led to Dodd-Frank.

More recently, tens of billions of dollars were plowed into WeWork before it was exposed as little more than a Ponzi scheme. The Theranos fraud went on for more than a decade before its board realized that its product was an elaborate ruse. FTX was valued at $32 billion but turned out to be worthless. Yet there has been no reform.

As Bain pointed out a decade ago, the extreme measures taken after the Great Recession led to a superabundance of capital, which paved the way for the highest profit margins in half a century. Now it seems that the era of easy money and easy regulation is ending, making it a near certainty that more frauds will be exposed.

We need to learn the telltale signs that an industry is being disrupted. Once technology begins to mature, we can expect consolidation, rent-seeking and regulatory capture to follow. After that, it’s just a matter of how much time—and how big the bubble gets—before everything bursts.

— Article courtesy of the Digital Tonto blog
— Image credit: Gemini


Stealing From the Garden of Eden


GUEST POST from Greg Satell

The story of the Garden of Eden is one of the oldest in recorded history, belonging not only to the world’s three major Abrahamic faiths of Judaism, Christianity and Islam, but also having roots in Greek and Sumerian mythology. It’s the ultimate origin archetype: We were once pure, innocent and good, but then were corrupted in some way and cast out.

As Timothy Snyder points out in his excellent course on The Making of Modern Ukraine, this template of innocence, corruption and expulsion often leads us to a bad place, because it implies that anything we do to remove that corrupting influence would be good and just. When you’re fighting a holy war, the ends justify the means.

The Eden myth is a favorite of demagogues, hucksters and con artists because it is so powerful. We’re constantly inundated with scapegoats — the government, big business, tech giants, the “billionaire” class, immigrants, “woke” society — to blame for our fall from grace. We need to learn to recognize the telltale signs that someone is trying to manipulate us.

The Assertion Of Victimhood

In 1987, a rather drab and dull Yugoslavian apparatchik named Slobodan Milošević was visiting Kosovo Field, the site of the Serbs’ humiliating defeat at the hands of the Ottoman Empire in 1389. While meeting with local leaders, he heard a commotion outside and found police beating back a huge crowd of Serbs and Montenegrins.

“No one should dare to beat you again!” Milošević is reported to have said and, in that moment, that drab apparatchik was transformed into a political juggernaut who left death and destruction in his path. For the first time since World War II, a genocide was perpetrated in Europe and the term ethnic cleansing entered the lexicon.

In Snyder’s book, Bloodlands, which chronicled the twin horrors of Hitler and Stalin, he points out that if we are to understand how humans can do such atrocious things to other humans, we first need to understand that they saw themselves as the true victims. When people believe that their survival is at stake, there is very little they won’t assent to.

The assertion of victimhood doesn’t need to involve life and death. Consider the recent Twitter Files “scandal,” in which the social media giant’s new owner leaked internal discussions about content moderation. The journalists who were given access asserted that those discussions amounted to an FBI-Big Tech conspiracy to censor important information. They paint sinister pictures of dark forces working to undermine our access to information.

When you read the actual discussions, however, what you see is a nuanced discussion about how to balance a number of competing values. How do we balance national security and public safety with liberty and free speech? At what point does speech become inciteful and problematic? Where should lines be drawn?

The Dehumanization Of An Out-group

Demagogues, hucksters and con men abhor nuance because victimhood requires absolutes. The victim must be completely innocent and the perpetrator must be purely evil for the Eden myth sleight of hand to work. There are no innocent mistakes; only cruelty and greed will serve to build the narrative.

Two years after Milošević’s political transformation at Kosovo Field, he returned there to commemorate the 600th anniversary of the Battle of Kosovo, where he claimed that “the Serbs have never in the whole of their history conquered and exploited others.” Having established that predicate, the stage was set for the war in Bosnia and the atrocities that came with it.

Once you establish complete innocence, the next step is to dehumanize the out-group. The media aren’t professionals who make mistakes, they are “scum who spread lies.” Tech giants aren’t flawed organizations, but ones who deliberately harm the public. Public servants like Anthony Fauci and philanthropists like Bill Gates are purported to engage in nefarious conspiracies that undermine the public well-being.

The truth is, of course, that nothing is monolithic. People have multiple motivations, some noble, others less so. Government agencies tend to attract mission-driven public servants, but can also be prone to overreach and abuse of power. Entrepreneurs like Elon Musk can have both benevolent aspirations to serve mankind and problematic character flaws.

It is no accident that the states in the US with the fewest immigrants tend to have the most anti-immigrant sentiment. The world is a messy place, which is why real-world experience undermines the Manichean worldview that demagogues, hucksters and con artists need to prepare the ground for what comes next.

The Vow For Retribution

It is now a matter of historical record what became of Milošević. After the horrors of the genocides his government perpetrated, his regime was brought down in the Bulldozer Revolution, the first of a string of Color Revolutions that spread across Eastern Europe. He was then sent to The Hague to stand trial, where he would die in his prison cell.

Milošević made a common mistake (and one Vladimir Putin is repeating today). Successful demagogues, hucksters and con artists know to never make good on their vows for retribution. In order to serve its purpose, the return to Eden must remain aspirational, a fabulous yonder that will never be truly attained. Once you actually try to get there, it will be exposed as a mirage.

Yet politicians who vow to bring down evil corporations can depend on a steady stream of campaign contributions. In much the same way, entrepreneurs who rail against government bureaucrats can be enthusiastically invited to speak to the media and at investor conferences.

It is a ploy that has remained effective from antiquity to the present day because it strikes at our primordial tendencies toward tribalism and justice. The pattern recurs with such metronomic regularity precisely because we are so vulnerable to it.

Being Aware Is Half The Battle

In my friend Bob Burg’s wonderful book, Adversaries into Allies, he makes the distinction between persuasion and manipulation. Bob says that persuasion involves helping someone to make a decision by explaining the benefits of a particular course of action, while manipulation takes advantage of negative emotions, such as anger, fear and greed.

So it shouldn’t be surprising that those who want to manipulate us tell origin stories in which we were once innocent and good until a corrupting force diminished us. It is that narrative that allows them to assert victimhood, dehumanize an out-group and promise, if given the means, that they will deliver retribution and a return to our rightful place.

These are the tell-tale signs that reveal demagogues, hucksters and con artists. It doesn’t matter if they are seeking backing for a new technology, belief in a new business model or public office, there will always be an “us” and a “them,” and there can never be a “we together,” because “they” are trying to deceive us, take what is rightfully ours and rob us of our dignity.

Yet once we begin to recognize those signs, we can use those emotional pangs as markers that alert us to the need to scrutinize claims more closely, seek out a greater diversity of perspectives and examine alternative narratives. We can’t just believe everything we think. It is the people telling us things we want to be true who are best able to deceive us.

Those who pursue evil and greed always claim that they are on the side of everything righteous and pure. That’s what we need to watch out for most.

— Article courtesy of the Digital Tonto blog
— Image credit: Gemini


The Future of Humanity in an Artificially Intelligent World


GUEST POST from Greg Satell

The Argentine writer Jorge Luis Borges had a fascination with a concept known as the infinite monkey theorem. The idea is that if you had an infinite number of monkeys pecking away at an infinite number of typewriters, they would randomly produce the collected works of Tolstoy and every other masterwork ever written (or that could be written).

The theorem, which has been around for at least a century, is troubling because it calls into question what it means to be human. If we can be inspired by something that could so easily be randomly generated, then what does it mean to be meaningful? Is meaning just an illusion we construct to make ourselves happy?
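The word “infinite” is doing all the work in the theorem. Even a very short phrase is astronomically unlikely for a random typist, as a back-of-the-envelope calculation shows (the 27-key keyboard and uniformly random key presses are simplifying assumptions, not part of the original thought experiment):

```python
# Chance that a random typist produces a short phrase in a single attempt,
# assuming a 27-key keyboard (26 letters plus space) and uniform random presses.
phrase = "to be or not to be"
keys = 27
p = (1 / keys) ** len(phrase)  # probability of one perfect attempt
attempts_needed = 1 / p        # expected number of attempts before a success
print(f"{p:.2e} per attempt, ~{attempts_needed:.2e} attempts expected")
```

For this 18-character phrase, the expected number of attempts is on the order of 10²⁵; complete novels are unreachable in any physically meaningful amount of time.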

In recent years, the rise of artificial intelligence has transformed this theoretical dilemma into an intensely practical issue. In a world in which machines are taking over work long thought of as intensely human, what is the role of human labor? How do we create value that is distinct from what machines can do faster and cheaper? The answers will shape our future.

Machines Replacing Humans

The first industrial robot, called Unimate, was installed on an assembly line at General Motors in 1961. Since then, robots have become highly integrated into our economy. They do dangerous jobs, like bomb disposal, as well as more prosaic ones, like running warehouses and legal discovery.

And lest you think that automation only applies to low-skill, mechanical jobs, robots are also invading the creative realm. One book written by a machine was even accepted as a submission for the prestigious Hoshi Shinichi Literary Award in Japan. There are now robots that advise physicians, movie producers and music executives. Some even write songs.

Yet these things have been, for the most part, going on in the background. Very few of us see robots at work in factories, much less evaluating scripts, reading mammograms or doing research for lawyers. However, the introduction of ChatGPT brings the power of AI to anyone with an Internet connection.

If you haven’t tried it yourself yet, here’s an example: I asked it to explain the infinite monkey theorem to me and this is what I got:

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. The theorem is often used as a humorous illustration of the basics of probability theory. However, the theorem is actually a misnomer, since the chances of a monkey randomly typing a meaningful sequence of characters is incredibly small, and the concept of an “infinite” amount of time is physically impossible.

And just for fun, I asked it to produce a one paragraph promotional biography of Greg Satell:

Greg Satell is a thought leader in innovation and business strategy, with over 20 years of experience in the industry. He is a popular speaker and author, known for his bestselling book “Mapping Innovation: A Playbook for Navigating a Disruptive Age.” Greg has also held executive roles at leading companies and has a track record of driving innovation and growth through effective business strategies.

Each of these responses appeared in seconds and, while they are not what I would have chosen to write myself, they are more than passable. With the technology advancing at this pace, how can we compete with the machines?

The Automation Paradox

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agricultural workers lost their jobs to automation. Yet those out-of-work farmers weren’t impoverished. In fact, the 20th century was an era of unprecedented prosperity.

Consider this: Although the workforce in the US has more than doubled since 1950, labor participation rates remain close to all-time highs. Still, a recent report by the US Chamber of Commerce found that we have a massive labor shortage. In the highly-automated manufacturing sector, it estimated that even if every unemployed person with experience were employed, it would only fill half of the vacant jobs.

In fact, when you look at highly automated fields, they tend to be the ones that have major labor shortages. You see touchscreens everywhere you go, but 70% of openings in the retail sector go unfilled. Autopilot has been around for decades, but we face a massive global pilot shortage that’s getting worse every year.

Once a task becomes automated, it also becomes largely commoditized and value is then created in an area that wasn’t quite obvious when people were busy doing more basic things. Go to an Apple store and you’ll notice two things: lots of automation and a sea of employees in blue shirts there to help, troubleshoot and explain things to you. Value doesn’t disappear, it just shifts to a different place.

One striking example of this is the humble community bookstore. With the domination of Amazon, you might think that small independent bookstores would be doomed, but instead they’re thriving. While it’s true that they can’t match Amazon’s convenience, selection or prices, people are flocking to small local shops for other reasons, such as deep expertise in particular subject matter and the chance to meet people with similar interests.

The Irrational Mind

To understand where value is shifting now, the work of neuroscientist Antonio Damasio can shed some light. He studied patients who, despite having perfectly normal cognitive ability, had lost the ability to feel emotion. Many would assume that, without emotions to distract them, these people would be great at making perfectly rational decisions.

But they weren’t. In fact, they couldn’t make any decisions at all. They could list the factors at play and explain their significance, but they couldn’t feel one way or another about them. In effect, without emotion they couldn’t form any intention. One decision was just like any other, leading to an outcome that they cared nothing about.

The social psychologist Jonathan Haidt built on Damasio’s work to form his theory of social intuitionism. What Haidt found in his research is that we don’t make moral judgments through conscious reasoning, but rather through unconscious intuition. Essentially, we automatically feel a certain way about something and then come up with reasons that we should feel that way.

Once you realize that, it becomes clear why Apple needs so many blue shirts at its stores and why independent bookstores are thriving. An artificial intelligence can access all the information in the world, curate that information and present it to us in an understandable way, but it can’t understand why we should care about it.

In fact, we humans often disguise our true intent, even from ourselves. A student might say he wants a new computer to do schoolwork, but may really want a stronger graphics engine to play video games. In much the same way, a person may want to buy a book about a certain subject, but also covet a community that shares the same interest.

The Library of Babel And The Intention Economy

In his story The Library of Babel, Borges describes a library which contains books with all potential word combinations in all possible languages. Such a place would encompass all possible knowledge, but would also be completely useless, because the vast majority of books would be gibberish consisting of random strings of symbols.

In essence, deriving meaning would be an exercise in curation, which machines could do if they perfectly understood our intentions. However, human motives are almost hopelessly complex. So much so, in fact, that even we ourselves often have difficulty understanding why we want one thing and not another.

There are some things that a computer will never do. Machines will never strike out at a Little League game, have their hearts broken in a summer romance or see their children born. The inability to share human experiences makes it difficult, if not impossible, for computers to relate to human emotions and infer how those feelings shape preferences in a given context.

That’s why the rise of artificial intelligence is driving a shift from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers—we now use computers for those things—than with humans serving other humans. That requires deep collaboration, teamwork and emotional intelligence.

To derive meaning in an artificially intelligent world we need to look to each other and how we can better understand our intentions. The future of technology is always more human.


— Article courtesy of the Digital Tonto blog
— Image credit: Gemini


Moving From Disruption To Resilience

GUEST POST from Greg Satell

In the 1990s, a newly minted professor at Harvard Business School named Clayton Christensen began studying why good companies fail. What he found was surprising. They weren’t failing because they lost their way, but rather because they were following time-honored principles, such as listening to their customers, investing in R&D and improving their products.

As he researched further he realized that, under certain circumstances, a market becomes over-served, the basis of competition changes and firms become vulnerable to a new type of competitor. In his 1997 book, The Innovator’s Dilemma, he coined the term disruptive technology.

It was an idea whose time had come. The book became a major bestseller and Christensen the world’s top business guru. Yet many began to see disruption not as a special case but as a mantra, an end in itself rather than a means to an end. Today, we’ve disrupted ourselves into oblivion and we desperately need to make a shift. It’s time to move toward resilience.

The Disruption Gospel

We like to think of ourselves as living in a fast-moving age, but that’s probably more hype than anything else. Before 1920 most households in America lacked electricity and running water. Even the most basic household tasks, like washing or cooking a meal, took hours of backbreaking labor to haul water and cut firewood. Cars were rare and few people traveled more than 10 miles from home.

That would change in the next few decades as household appliances and motorized transportation transformed American life. The development of penicillin in the 1940s would bring about a “Golden Age” of antibiotics and revolutionize medicine. The 1950s brought a Green Revolution that would help expand overseas markets for American goods.

By the 1970s, innovation began to slow. After half a century of accelerated productivity growth, it would enter a long slump. The rise of Japan and stagflation contributed to an atmosphere of malaise. After years of dominance, the American model seemed to have its best days behind it. For the first time in the post-war era, the future was uncertain.

That began to change in the 1980s. A new president, Ronald Reagan, talked of a “shining city on a hill” and declared that “Government is not the solution to our problem, government is the problem.” A new “Washington Consensus” took hold that preached fiscal discipline, free trade, privatization and deregulation.

At the same time a management religion took hold, with Jack Welch as its patron saint. No longer would CEOs weigh the interests of investors against those of customers, communities, employees and other stakeholders; everything would be optimized for shareholder value. General Electric, and then broader industry, would embark on a program of layoffs, offshoring and financial engineering in order to trim the fat and streamline their organizations.

The End Of History?

There were early signs that we were on the wrong path. Despite the layoffs that hollowed out America’s industrial base and impoverished many of its communities, productivity growth, which had been depressed since the 1970s, didn’t even budge. Poorly thought out deregulation in the banking industry led to a savings and loan crisis and a recession.

At this point, questions should have been raised, but two events in November 1989 would reinforce the prevailing wisdom. First, the fall of the Berlin Wall would end the Cold War and discredit socialism. Then Tim Berners-Lee would create the World Wide Web and usher in a new technological era of networked computing.

With markets opening across the world, American-trained economists at the IMF and the World Bank traveled the globe preaching the market discipline prescribed by the Washington Consensus, often imposing policies that would never be accepted in developed markets back home. Fueled by digital technology, productivity growth in the US finally began to pick up in 1996, creating budget surpluses for the first time in decades.

Finally, it appeared that we had hit upon a model that worked. We would no longer leave ourselves to the mercy of bureaucrats at government agencies or executives at large organizations who had gotten fat and sloppy. The combination of market and technological forces would point the way for us.

The calls for deregulation increased, even if it meant increased disruption. Most notably, the Glass-Steagall Act, which was designed to limit risk in the financial system, was repealed in 1999. Times were good and we had unbridled capitalism and innovation to thank for it. The Washington Consensus had been proven out, or so it seemed.

The Silicon Valley Doomsday Machine

By the year 2000, the first signs of trouble began to appear. The money rushing into Silicon Valley created a bubble which burst and took several notable corporations with it. Massive frauds were uncovered at firms like Enron and WorldCom, which also brought down their auditor, Arthur Andersen. Calls for reform led to the Sarbanes-Oxley Act, which raised standards for corporate governance.

Yet the Bush Administration concluded that the problem was too little disruption, not too much, and continued to push for less regulation. By 2005, the increase in productivity growth that began in 1996 dissipated as suddenly as it had appeared. Much like in the late 80s, the lack of oversight led to a banking crisis, except this time it wasn’t just regional savings and loans that got caught up, but the major money-center institutions that were left exposed.

That’s what led to the Great Recession. To stave off disaster, central banks embarked on an extremely stimulative strategy called quantitative easing. This created a superabundance of capital which, with few places to go, ended up sloshing around in Silicon Valley helping to create a new age of “unicorns,” with over 1000 startups valued at more than $1 billion.

Today, we’re seeing the same kind of scandals we saw in the early 2000s, except the companies being exposed aren’t established firms like Enron, WorldCom and Arthur Andersen, but would-be disrupters like WeWork, Theranos and FTX. Unlike those earlier failures, there has been no reckoning. If anything, tech billionaires like Marc Andreessen and Elon Musk seem emboldened.

At the same time, there is growing evidence that hyped-up excesses are crowding out otherwise viable businesses in the real economy. When WeWork “disrupted” other workspaces it wasn’t because of any innovation, technological or otherwise, but rather because huge amounts of venture capital allowed it to undercut competitors. Silicon Valley is beginning to look less like an industry paragon and more like a doomsday machine.

Realigning Prosperity With Security

It’s been roughly 25 years since Clayton Christensen inaugurated the disruptive era and what he initially intended to describe as a special case has been implemented as a general rule. Disruption is increasingly self-referential, used as both premise and conclusion, while the status quo is assumed to be inadequate as an a priori principle.

The results, by just about any metric imaginable, have been tragic. Despite all the hype about innovation, productivity growth remains depressed. Two decades of lax antitrust enforcement have undermined competitive markets in the US. We’ve gone through the worst economic crisis since the 1930s and the worst pandemic since the 1910s.

At the same time, social mobility is declining, while anxiety and depression are rising to epidemic levels. Wages have stagnated, while the cost of healthcare and education has soared. Income inequality is at its highest level in 50 years. The average American is worse off, in almost every way, than before the cult of disruption took hold.

It doesn’t have to be this way. We can change course and invest in resilience. There have been positive moves. The infrastructure legislation and the CHIPS legislation both represent huge investments in our future, while the poorly named Inflation Reduction Act represents the largest investment in climate ever. Businesses have begun reevaluating their supply chains.

Yet the most important shift, that of mindset, has yet to come. Not everything needs to be optimized. Not every cost needs to be cut. We cannot embark on changes just for change’s sake. We need to pursue fewer initiatives that achieve greater impact and, when we feel the urge to disrupt, we need to ask, disruption in the service of what?

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


You Must Accept That People Are Irrational

GUEST POST from Greg Satell

For decades, economists have been obsessed with the idea of “enlightened self-interest,” building elaborate models based on the assumption that people make rational choices. Business and political leaders have used these models to shape competitive strategies, compensation, tax policies and social services among other things.

It’s clear that the real world is far more complex than that. Consider the prisoner’s dilemma, a famous thought experiment in which individuals acting in their self-interest make everyone worse off. In a wide array of real world and experimental contexts, people will cooperate for the greater good rather than pursue pure self-interest.

We are wired to cooperate as well as to compete. Identity and dignity guide our actions even more than the prospect of loss or gain. While business schools have trained generations of managers to assume that they can optimize results by designing incentives, the truth is that leaders who can forge a sense of shared identity and purpose have the advantage.

Overcoming The Prisoner’s Dilemma

John von Neumann was a frustrated poker player. Despite having one of the best mathematical minds in history, able to calculate the odds better than anyone on earth, he couldn’t tell whether other players were bluffing. It was his failure at poker that led him to create game theory, which models the strategies of other players.

As the field developed, it was expanded to include cooperative games in which players could choose to collaborate and even form coalitions with each other. That led researchers at RAND to create the prisoner’s dilemma, in which two suspects are being interrogated separately and each offered a reduced sentence to confess.

Prisoner's Dilemma

Here’s how it works: If both prisoners cooperate with each other and neither confesses, they each get one year in prison on a lesser charge. If one confesses, he gets off scot-free, while his partner gets 5 years. If they both rat each other out, then they get three years each—collectively the worst outcome of all.

Notice that, from a rational viewpoint, the best strategy is to defect. No matter what one prisoner does, the other is better off ratting him out. Yet if both pursue self-interest, both are made worse off. It’s a frustrating problem. Game theorists call this a Nash equilibrium—an outcome in which nobody can improve their position by a unilateral move. In theory, you’re basically stuck.
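The dominance argument can be checked mechanically. Here is a minimal sketch in Python, encoding the sentences described above (lower is better):

```python
from itertools import product

# Sentences in years for (my_move, partner_move); lower is better.
# "C" = stay silent (cooperate with your partner), "D" = confess (defect).
YEARS = {
    ("C", "C"): (1, 1),
    ("C", "D"): (5, 0),
    ("D", "C"): (0, 5),
    ("D", "D"): (3, 3),
}

def best_reply(partner_move):
    # The move that minimizes my own sentence, given the partner's move.
    return min("CD", key=lambda my: YEARS[(my, partner_move)][0])

# Defecting is the best reply to either move, so (D, D) is the equilibrium,
# even though both staying silent gives the lowest combined sentence
# (2 years total for mutual silence versus 6 for mutual confession).
totals = {moves: sum(YEARS[moves]) for moves in product("CD", repeat=2)}
print(best_reply("C"), best_reply("D"))  # D D
```

Defection wins against either choice the partner makes, which is exactly why self-interest traps both players in the collectively worst cell.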

Yet in a wide variety of real-world contexts, ranging from the survival strategies of guppies to military alliances, cooperation is credibly maintained. In fact, there are a number of strategies that have proved successful in overcoming the prisoner’s dilemma. One, called tit-for-tat, relies on credible punishments for defections. Even more effective, however, is building a culture of shared purpose and trust.
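Tit-for-tat is easy to see in a repeated version of the same game. The sketch below is an illustration with two fixed strategies, not a full tournament; payoffs are the prison sentences from the example above:

```python
# Iterated prisoner's dilemma: payoffs are years in prison (lower is better).
YEARS = {("C", "C"): (1, 1), ("C", "D"): (5, 0),
         ("D", "C"): (0, 5), ("D", "D"): (3, 3)}

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    years_a = years_b = 0
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        ya, yb = YEARS[(move_a, move_b)]
        years_a += ya
        years_b += yb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return years_a, years_b

# Two tit-for-tat players settle into cooperation: one year each per round.
print(play(tit_for_tat, tit_for_tat))    # (10, 10)
# Against a pure defector, tit-for-tat is exploited only in the first round.
print(play(tit_for_tat, always_defect))  # (32, 27)
```

The credible threat of retaliation is what makes cooperation stable: a defector gains once, then spends every remaining round in the mutually bad (3, 3) cell.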

Kin Selection And Identity

Evolutionary psychology is a field very similar to game theory. It employs mathematical models to explain what types of behaviors provide the best evolutionary outcomes. At first, this may seem like the utilitarian approach that economists have long-employed, but when you combine genetics with natural selection, you get some surprising answers.

Consider the concept of kin selection. From a purely selfish point of view, there is no reason for a mother to sacrifice herself for her child. However, from an evolutionary point of view, it makes perfect sense for parents to put their kids first. Groups who favor children are more likely to grow and outperform groups who don’t.

This is what Richard Dawkins meant when he called genes selfish. If we look at things from our genes’ point of view, it makes perfect sense for them to want us to sacrifice ourselves for children, who are more likely to be able to propagate our genes than we are. The effect would logically also apply to others, such as cousins, that likely carry our genes.
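This logic is often summarized by Hamilton’s rule, a standard result in evolutionary biology (not named in the passage above): an altruistic act is favored by selection when r × b > c, where r is genetic relatedness, b the benefit to recipients and c the cost to the altruist. A toy sketch:

```python
# Hamilton's rule: an altruistic act is favored by selection when r * b > c,
# where r = relatedness, b = benefit to recipients, c = cost to the altruist
# (here b and c are measured in lives, with c = 1, the altruist's own life).
def favored(r, b, c=1.0):
    return r * b > c

# J.B.S. Haldane's famous quip: "I would lay down my life for two brothers
# or eight cousins." Siblings share r = 1/2 and first cousins r = 1/8, so
# exactly two brothers or eight cousins is only break-even, not a gain:
print(favored(0.5, 2), favored(0.5, 3))      # False True
print(favored(0.125, 8), favored(0.125, 9))  # False True
```

The cousin case shows why the effect weakens with genetic distance: the lower the relatedness, the larger the benefit must be before sacrifice pays off for the genes.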

Researchers have also applied the concept of kin selection to other forms of identity that don’t involve genes, but ideas (also known as memes) in examples such as patriotism. When it comes to people or ideas we see as an important part of our identity, we tend to take a much more expansive view of our interests than traditional economic models would predict.

Cultures of Dignity

It’s not just identity that figures into our decisions, but dignity as well. Consider the ultimatum game. One player is given a dollar and needs to propose how to split it with another player. If the offer is accepted, both players get the agreed upon shares. If it is not accepted, neither player gets anything.

If people acted purely rationally, offers as low as a penny would be routinely accepted. After all, a penny is better than nothing. Yet decades of experiments across different cultures show that most people do not accept a penny. In fact, offers of less than 30 cents are routinely rejected as unfair because they offend people’s dignity and sense of self.
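The effect of that fairness threshold on a proposer’s “rational” strategy can be sketched with a toy model; the 100-cent pot and the 30-cent threshold are illustrative assumptions loosely matching the rejection pattern described above:

```python
# Proposer's payoff when splitting a 100-cent pot, given responders who
# reject any offer below a fairness threshold (all amounts in cents).
def proposer_payoff(offer, threshold, pot=100):
    return pot - offer if offer >= threshold else 0

# A purely "rational" responder accepts any positive amount (threshold = 1),
# so the proposer's optimal offer is a single penny...
best_vs_rational = max(range(101), key=lambda o: proposer_payoff(o, threshold=1))
# ...but against responders with a 30-cent fairness threshold, the optimal
# offer jumps to exactly the threshold.
best_vs_human = max(range(101), key=lambda o: proposer_payoff(o, threshold=30))
print(best_vs_rational, best_vs_human)  # 1 30
```

Once responders care about dignity, the “rational” strategy changes: lowball offers become worthless because they are simply refused.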

Results from the ultimatum game are not uniform but vary across cultures, and more recent research suggests why. One study using a similar public goods game found that cooperative—as well as punitive—behavior is contagious, spreading through three degrees of interaction, even between people who have had no direct contact.

Whether we know it or not, we are constantly building ecosystems of norms that reward and punish behavior according to expectations. If we see the culture we are operating in as trusting and generous, we are much more likely to act collaboratively. However, if we see our environment as cutthroat and greedy, we’ll tend to model that behavior in the same way.

Forging Shared Identity And Shared Purpose

In an earlier age, organizations were far more hierarchical. Power rested at the top. Information flowed up, orders went down, work got done and people got paid. Incentives seemed to work. You could pay more and get more. Yet in today’s marketplace, that’s no longer tenable because the work we need done is increasingly non-routine.

That means we need people to do more than merely carry out tasks; they need to put all of their passion and creativity into their work to perform at a high level. They need to collaborate effectively in teams and take pride in the impact their efforts produce. To achieve that at an organizational level, leaders need to shift their mindsets.

As David Burkus explained in his TED Talk, humans are prosocial. They are vastly more likely to perform when they understand and identify with who their work benefits than when they are given financial incentives or fed some grandiose vision. Evolutionary psychologists have long established that altruism is deeply embedded in our sense of tribe.

The simple truth is that we can no longer coerce people to do what we want with Rube Goldberg-like structures of carrots and sticks, but must inspire people to want what we want. Humans are not purely rational beings, responding to stimuli as if they were vending machines that spit out desired behaviors when the right buttons are pushed, but are motivated by identity and dignity more than anything else.

Leadership is not an algorithm, but a practice of creating meaning through relationships of trust in the context of a shared purpose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Learning Business and Life Lessons from Monkeys

GUEST POST from Greg Satell

Franz Kafka was especially skeptical about parables. “Many complain that the words of the wise are always merely parables and of no use in daily life,” he wrote. “When the sage says: ‘Go over,’ he does not mean that we should cross to some actual place… he means some fabulous yonder…that he cannot designate more precisely, and therefore cannot help us here in the very least.”

Business pundits, on the other hand, tend to favor parables, probably because telling simple stories allows for the opportunity to seem both folksy and wise at the same time. When Warren Buffett says “Only when the tide goes out do you discover who’s been swimming naked,” it doesn’t sound so much like an admonishment.

Over the years I’ve noticed that some of the best business parables involve monkeys. I’m not sure why that is, but I think it has something to do with taking intelligence out of the equation. We’re often prone to imagining ourselves as the clever hero of our own story and we neglect simple truths. That may be why monkey parables have so much to teach us.

1. Build The #MonkeyFirst

When I work with executives, they often have a breakthrough idea they are excited about. They begin to tell me what a great opportunity it is and how they are perfectly positioned to capitalize on it. However, when I begin to dig a little deeper it appears that there is some major barrier to making it happen. When I try to ask about it, they just shut down.

One reason that this happens is that there is a fundamental tension between innovation and operations. Operational executives tend to focus on identifying clear benchmarks to track progress. That’s fine for a typical project, but when you are trying to do something truly new and different, you have to directly confront the unknown.

At Google X, the tech giant’s “moonshot factory,” the mantra is #MonkeyFirst. The idea is that if you want to get a monkey to recite Shakespeare on a pedestal, you start by training the monkey, not building the pedestal, because training the monkey is the hard part. Anyone can build a pedestal.

The problem is that most people start with the pedestal, because it’s what they know and by building it, they can show early progress against a timeline. Unfortunately, building a pedestal gets you nowhere. Unless you can actually train the monkey, working on the pedestal is wasted effort.

The moral: Make sure you address the crux of the problem and don’t waste time with peripheral issues.

2. Don’t Get Taken In By Coin Flipping Monkeys

We live in a world that worships accomplishment. Sports stars who have never worked in an office are paid large fees to speak to corporate audiences. Billionaires who have never walked a beat speak out on how to fight crime (even as they invest in gun manufacturers). Others like to espouse views on education, although they have never taught a class.

Many say that you can’t argue with success, but consider this thought experiment: Put a million monkeys in a double-or-nothing coin-flipping contest, each starting with a 25-cent stake. The winners of each round double their money and the losers drop out. After twenty rounds, only a couple of monkeys remain, each having parlayed a quarter into $262,144. The vast majority of the other monkeys walk away with little or nothing.

How much would you pay the winning monkeys to speak at your corporate event? Would you invite them to advise your company? Sit on your board? Would you be interested in their views about how to raise your children, invest your savings or make career choices? Would you try to replicate their coin-flipping success? (Maybe it’s all in the wrist).
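The arithmetic behind the contest can be sketched with a quick simulation, here modeled as double-or-nothing from a 25-cent starting stake, which is one way to arrive at the $262,144 figure:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
survivors = 1_000_000
stake = 0.25     # double-or-nothing, starting from a quarter
for _ in range(20):
    # Each surviving monkey calls a coin flip; roughly half call it right.
    survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)
    stake *= 2   # winners absorb the losers' stakes
print(survivors, stake)  # survivors varies with the seed; stake is 262144.0
```

A million entrants guarantees a handful of twenty-round winners by pure chance: 0.25 × 2²⁰ = $262,144, with no skill anywhere in sight.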

The truth is that chance and luck play a much bigger part in success than we like to admit. Einstein, for example, became the most famous scientist of the 20th century not just because of his discoveries but also due to an unlikely coincidence. True accomplishment is difficult to evaluate, so we look for signals of success to guide our judgments.

The moral: Next time you judge someone, either by their success or lack thereof, ask yourself whether you are judging actual accomplishment or telltale signs of successful coin flipping. It’s harder to tell the difference than you’d think.

3. The Infinite Monkey Theorem

There is an old thought experiment called the Infinite Monkey Theorem, which is eerily disturbing. The basic idea is that if there were an infinite amount of monkeys pecking away on an infinite amount of keyboards they would, in time, produce the complete works of Shakespeare, Tolstoy and every other literary masterpiece.

It’s a perplexing thought because we humans pride ourselves on our ability to recognize and evaluate patterns. The idea that something we value so highly could be randomly generated is extremely unsettling. Yet there is an entire branch of mathematics, called Ramsey Theory, devoted to the study of how order emerges from random sets of data.

While the infinite monkey theorem is, of course, theoretical, technology is forcing us to confront the very real dilemmas it presents. For example, music scholar and composer David Cope has been able to create algorithms that produce original works of music so good that even experts can’t tell they are computer generated. So what is the value of human input?

The moral: Much like the coin flipping contest, the infinite monkey theorem makes us confront what we value and why. What is the difference between things human produced and identical works that are computer generated? Are Tolstoy’s words what give his stories meaning? Or is it the intent of the author and the fact that a human was trying to say something important?

Imagining Monkeys All Around Us

G. H. Hardy, widely considered a genius, wrote that “For any serious purpose, intelligence is a very minor gift.” What he meant was that even in purely intellectual pursuits, such as his field of number theory, there are things that are far more important. It was, undoubtedly, intellectual humility that led Hardy to Ramanujan, perhaps his greatest discovery of all.

Imagining ourselves to be heroes of our own story can rob us of the humility we need to succeed and prosper. Mistaking ourselves for geniuses can often get us into trouble. People who think they’re playing it smart tend to make silly mistakes, both because they expect to see things that others don’t and because they fail to look for and recognize trouble signs.

Parables about monkeys can be useful because nobody expects them to be geniuses, which demands that we ask ourselves hard questions. Are we doing the important work, or the easiest tasks to show progress on? If monkeys flipping coins can simulate professional success, what do we really celebrate? If monkeys tapping randomly on typewriters can create masterworks, what is the value of human agency?

The truth is that humans are prone to be foolish. We are unable, outside a few limited areas of expertise, to make basic distinctions in matters of importance. So we look for signals of prosperity, intelligence, shared purpose and other things we value to make judgments about what information we should trust. Imagining monkeys around us helps us to be more careful.

Sometimes the biggest obstacle between where we are now and the fabulous yonder we seek is just the few feet in front of us.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Identity is Crucial to Change

GUEST POST from Greg Satell

In an age of disruption, the only viable strategy is to adapt. Today, we are undergoing major shifts in technology, resources, migration and demography that will demand that we make changes in how we think and what we do. The last time we saw this much change afoot was during the 1920s and that didn’t end well. The stakes are high.

In a recent speech, the EU’s High Representative for Foreign Affairs and Security Policy Josep Borrell highlighted the need for Europe to change and adapt to shifts in the geopolitical climate. He also pointed out that change involves far more than interests and incentives, carrots and sticks, but even more importantly, identity.

“Remember this sentence,” he said. “’It is the identity, stupid.’ It is no longer the economy, it is the identity.” What he meant was that human beings build attachments to things they identify with and, when those are threatened, they are apt to behave in a visceral, reactive and violent way. That’s why change and identity are always inextricably intertwined.

“We can’t define the change we want to pursue until we define who we want to be.” — Greg Satell

The Making Of A Dominant Model

Traditional models come to us with such great authority that we seldom realize that they too were once revolutionary. We are so often told how Einstein is revered for showing that Newton’s mechanics were flawed that it is easy to forget that Newton himself was a radical insurgent, who rewrote the laws of nature and ushered in a new era.

Still, once a model becomes established, few question it. We go to school, train for a career and hone our craft. We make great efforts to learn basic principles and gain credentials when we show that we have grasped them. As we strive to become masters of our craft we find that as our proficiency increases, so does our success and status.

The models we use become more than mere tools to get things done; they become intrinsic to our identity. Back in the nineteenth century, the miasma theory, the notion that bad air caused disease, was predominant in medicine. Doctors not only relied on it to do their job, they took great pride in their mastery of it. They would discuss its nuances and implications with colleagues, signaling their membership in a tribe as they did.

In the 1840s, when a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, many in the medical establishment were scandalized. First, the suggestion that they, as men of prominence, could spread something as dirty as disease was insulting. Even more damaging, however, was the suggestion that their professional identity was, at least in part, based on a mistake.

Things didn’t turn out well for Semmelweis. He railed against the establishment, but to no avail. He would eventually die in an insane asylum, ironically of an infection he contracted under care, and the questions he raised about the prevailing miasma paradigm went unanswered.

A Gathering Storm Of Accumulating Evidence

We all know that for every rule, there are exceptions and anomalies that can’t be explained. As the statistician George Box put it, “all models are wrong, but some are useful.” The miasma theory, while it seems absurd today, was useful in its own way. Long before we had technology to study bacteria, smells could alert us to their presence in unsanitary conditions.

But Semmelweis’s hand-washing regime threatened doctors’ view of themselves and their role. Doctors were men of prominence, who saw disease emanating from the smells of the lower classes. This was more than a theory. It was an attachment to a particular view of the world and their place in it, which is one reason why Semmelweis experienced such backlash.

Yet he raised important questions and, at least in some circles, doubts about the miasma theory continued to grow. In 1854, about a decade after Semmelweis instituted hand washing, a cholera epidemic broke out in London and a miasma theory skeptic named John Snow was able to trace the source of the infection to a single water pump.

Yet once again, the establishment could not accept evidence that contradicted its prevailing theory. William Farr, a prominent medical statistician, questioned Snow’s findings. Besides, Snow couldn’t explain how the water pump was making people sick, only that it seemed to be the source of some pathogen. Farr, not Snow, won the day.

Later it would turn out that a septic pit had been dug too close to the pump and the water had been contaminated with fecal matter. But for the moment, while doubts began to grow about the miasma theory, it remained the dominant model and countless people would die every year because of it.

Breaking Through To A New Paradigm

In the early 1860s, as the Civil War was raging in the US, Louis Pasteur was researching wine-making in France. While studying the fermentation process, he discovered that microorganisms spoiled beverages such as beer and milk. He proposed that they be heated to temperatures between 60 and 100 degrees Celsius to avoid spoilage, a process that came to be called pasteurization.

Pasteur guessed that similar microorganisms made people sick, which, in turn, led to the work of Robert Koch and Joseph Lister. Together they would establish the germ theory of disease. This work led not only to better sanitary practices, but eventually to the work of Alexander Fleming, Howard Florey and Ernst Chain and the development of antibiotics.

To break free of the miasma theory, doctors needed to change the way they saw themselves. The miasma theory had been around since Hippocrates. To forge a new path, they could no longer be guardians of ancient wisdom; they had to become evidence-based scientists, and that would require that everything about the field be transformed.

None of this occurred in a vacuum. In the late 19th century, a number of long-held truths, from Euclid’s Geometry to Aristotle’s logic, were being discarded, which would pave the way for strange new theories, such as Einstein’s relativity and Turing’s machine. To abandon these old ideas, which were considered gospel for thousands of years, was no doubt difficult. Yet it was what we needed to do to create the modern world.

Moving From Disruption to Resilience

Today, we stand on the precipice of a new paradigm. We’ve suffered through a global financial crisis, a pandemic and the most deadly conflict in Europe since World War II. The shifts in technology, resources, migration and demography are already underway. The strains and dangers of these shifts are already evident, yet the benefits are still to come.

To successfully navigate the decade ahead, we must make decisions not just about what we want, but who we want to be. Nowhere is this playing out more than in Ukraine right now, where the war being waged is almost solely about identity. Russians want to deny Ukrainian identity and to defy what they see as the US-led world order. Europeans need to take sides. So do the Chinese. Everyone needs to decide who they are and where they stand.

This is not only true in international affairs, but in every facet of society. Different eras make different demands. The generation that came of age after World War II needed to rebuild and they did so magnificently. Yet as things grew, inefficiencies mounted and the Boomer Generation became optimizers. The generations that came after worshiped disruption and renewal. These are, of course, gross generalizations, but the basic narrative holds true.

What should be clear is that where we go from here will depend on who we want to be. My hope is that we become protectors who seek to make the shift from disruption to resilience. We can no longer simply worship market and technological forces and leave our fates up to them as if they were gods. We need to make choices and the ones we make will be greatly influenced by how we see ourselves and our role.

As Josep Borrell so eloquently put it: “It is the identity, stupid. It is no longer the economy, it is the identity.”

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


What We See Influences How We’ll Act

GUEST POST from Greg Satell

“Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist,” John Maynard Keynes, himself a long dead economist, once wrote. We are, much more than we’d like to admit, creatures of our own age, taking our cues from our environment.

That’s why we need to be on the lookout for our own biases. The truth, as we see it, is often more of a personalized manifestation of the zeitgeist than it is the product of any real insight or reflection. As Richard Feynman put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that.”

We can’t believe everything we think. We often seize upon the most easily available information, rather than the most reliable sources. We then seek out information that confirms those beliefs and reject evidence that contradicts existing paradigms. That’s what leads to bad decisions. If what we see determines how we act, we need to look carefully.

The Rise And Fall Of Social Darwinism

In the 1860s, in response to Darwin’s ideas, Herbert Spencer and others began promoting the theory of Social Darwinism. The basic idea was that “survival of the fittest” meant that society should reflect a Hobbesian state of nature, in which most can expect a life that is “nasty, brutish and short,” while an exalted few enjoy the benefits of their superiority.

This was, of course, a gross misunderstanding of Darwin’s work. First, Darwin never used the term “survival of the fittest,” which was actually coined by Spencer himself. Second, Darwin never meant to suggest that certain innate qualities make one individual better than others, but that as the environment changes, certain traits tend to be propagated which, over time, can lead to a new species.

Still, if you see the world as a contest for individual survival, you will act accordingly. You will favor a laissez-faire approach to society, punishing the poor and unfortunate and rewarding the rich and powerful. In some cases, such as Nazi Germany and in the late Ottoman empire, Social Darwinism was used as a justification for genocide.

While some strains of Social Darwinism still exist, for the most part it has been discredited, partly because of excesses such as racism, eugenics and social inequality, but also because more rigorous approaches, such as evolutionary psychology, show that altruism and collaboration can themselves be adaptive traits.

The Making Of The Modern Organization

When Alfred Sloan created the modern corporation at General Motors in the early 20th century, what he really did was create a new type of organization. It had centralized management and far-flung divisions, and was exponentially more efficient at moving men and materiel than anything that had come before.

He called it “federal decentralization.” Management would create operating principles, set goals and develop overall strategy, while day-to-day decisions were performed by people lower down in the structure. While there was some autonomy, it was more like an orchestra than a jazz band, with the CEO as conductor.

Here again, what people saw determined how they acted. Many believed that a basic set of management principles, if conceived and applied correctly, could be adapted to any kind of business, a view which culminated in the “Nifty Fifty” conglomerates of the ’60s and ’70s. It was, in some sense, an idea akin to Social Darwinism, implying that there are certain innate traits that make an organization more competitive.

Yet business environments change and, while larger organizations may be able to drive efficiencies, they often find it hard to adapt to changing conditions. When the economy hit hard times in the 1970s, the “Nifty Fifty” stocks vastly underperformed the market. By the time the ’80s rolled around, conglomerates had fallen out of fashion.

Industries and Value Chains

In 1985, a relatively unknown professor at Harvard Business School named Michael Porter published a book called Competitive Advantage, which explained that by optimizing every facet of the value chain, a firm could consistently outperform its competitors. The book was an immediate success and made Porter a management superstar.

Key to Porter’s view was that firms compete in industries that are shaped by five forces: competitors, customers, suppliers, substitutes, and new market entrants. So he advised leaders to build and leverage bargaining power in each of those directions to create a sustainable competitive advantage for the long term.

If you see your business environment as being neatly organized in specific industries, everybody is a potential rival. Even your allies need to be viewed with suspicion. So, for example, when a new open source operating system called Linux appeared, Microsoft CEO Steve Ballmer considered it to be a threat and immediately attacked, calling it a cancer.

Yet even as Ballmer went on the attack, the business environment was changing. As the internet made the world more connected, technology companies found that leveraging that connectivity through open source communities was a winning strategy. Microsoft’s current CEO, Satya Nadella, says that the company loves Linux. Ultimately, it recognized that it couldn’t continue to shut itself out and compete effectively.

Looking To The Future

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1922. The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had and the legendary financier lorded over his domain of corporate barons.

That would quickly change over the next decade, when those technologies would gain traction, form ecosystems and drive a 50-year boom. The great “trusts” that he built would get broken up, and by 1930 virtually all of them would be dropped as components of the Dow Jones Industrial Average. Every facet of life would be completely transformed.

We’re at a similar point today, on the brink of enormous transformation. The recent string of calamities, including a financial meltdown, a pandemic and the deadliest war in Europe in 80 years, demand that we take a new path. Powerful shifts in technology, demographics, resources and migration, suggest that even more disruption may be in our future.

The course we take from here will be determined by how we see the world we live in. Do we see our fellow citizens as a burden or an asset? Are new technologies a blessing or a threat? Is the world full of opportunities to be embraced or dangers we need to protect ourselves from? These are questions we need to think seriously about.

How we answer them will determine what comes next.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash
