Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Stealing From the Garden of Eden

GUEST POST from Greg Satell

The story of the Garden of Eden is one of the oldest in recorded history, belonging not only to the world’s three major Abrahamic faiths of Judaism, Christianity and Islam, but also having roots in Greek and Sumerian mythology. It’s the ultimate origin archetype: We were once pure, innocent and good, but then were corrupted in some way and cast out.

As Timothy Snyder points out in his excellent course on The Making of Modern Ukraine, this template of innocence, corruption and expulsion often leads us to a bad place, because it implies that anything we do to remove that corrupting influence would be good and just. When you’re fighting a holy war, the ends justify the means.

The Eden myth is a favorite of demagogues, hucksters and con artists because it is so powerful. We’re constantly inundated with scapegoats — the government, big business, tech giants, the “billionaire” class, immigrants, “woke” society — to blame for our fall from grace. We need to learn to recognize the telltale signs that someone is trying to manipulate us.

The Assertion Of Victimhood

In 1987, a rather drab and dull Yugoslavian apparatchik named Slobodan Milošević was visiting Kosovo Field, the site of the Serbs’ humiliating defeat at the hands of the Ottoman Empire in 1389. While meeting with local leaders, he heard a commotion outside and found police beating back a huge crowd of Serbs and Montenegrins.

“No one should dare to beat you again!” Milošević is reported to have said and, in that moment, that drab apparatchik was transformed into a political juggernaut who left death and destruction in his path. For the first time since World War II, a genocide was perpetrated in Europe and the term ethnic cleansing entered the lexicon.

In Snyder’s book, Bloodlands, which chronicled the twin horrors of Hitler and Stalin, he points out that if we are to understand how humans can do such atrocious things to other humans, we first need to understand that they saw themselves as the true victims. When people believe that their survival is at stake, there is very little they won’t assent to.

The assertion of victimhood doesn’t need to involve life and death. Consider the recent Twitter Files “scandal,” in which the social media giant’s new owner leaked internal discussions about content moderation. The journalists who were given access asserted that those discussions amounted to an FBI-Big Tech conspiracy to censor important information. They paint sinister pictures of dark forces working to undermine our access to information.

When you read the actual discussions, however, what you see is a nuanced discussion about how to balance a number of competing values. How do we balance national security and public safety with liberty and free speech? At what point does speech become inciteful and problematic? Where should lines be drawn?

The Dehumanization Of An Out-group

Demagogues, hucksters and con men abhor nuance because victimhood requires absolutes. The victim must be completely innocent and the perpetrator must be purely evil for the Eden myth sleight of hand to work. There are no innocent mistakes; only cruelty and greed will serve to build the narrative.

Two years after Milošević’s political transformation at Kosovo Field, he returned there to commemorate the 600th anniversary of the Battle of Kosovo, where he claimed that “the Serbs have never in the whole of their history conquered and exploited others.” Having established that predicate, the stage was set for the war in Bosnia and the atrocities that came with it.

Once you establish complete innocence, the next step is to dehumanize the out-group. The media aren’t professionals who make mistakes, they are “scum who spread lies.” Tech giants aren’t flawed organizations, but ones who deliberately harm the public. Public servants like Anthony Fauci and philanthropists like Bill Gates are purported to engage in nefarious conspiracies that undermine the public well-being.

The truth is, of course, that nothing is monolithic. People have multiple motivations, some noble, others less so. Government agencies tend to attract mission-driven public servants, but can also be prone to overreach and abuse of power. Entrepreneurs like Elon Musk can have both benevolent aspirations to serve mankind and problematic character flaws.

It is no accident that the states in the US with the fewest immigrants tend to have the most anti-immigrant sentiment. The world is a messy place, which is why real-world experience undermines the Manichean worldview that demagogues, hucksters and con artists need to prepare the ground for what comes next.

The Vow For Retribution

It is now a matter of historical record what came of Milošević. After the horrors of the genocides his government perpetrated, his regime was brought down in the Bulldozer Revolution, the first of a string of Color Revolutions that spread across Eastern Europe. He was then sent to The Hague to stand trial, where he would die in his prison cell.

Milošević made a common mistake (and one Vladimir Putin is repeating today). Successful demagogues, hucksters and con artists know to never make good on their vows for retribution. In order to serve its purpose, the return to Eden must remain aspirational, a fabulous yonder that will never be truly attained. Once you actually try to get there, it will be exposed as a mirage.

Yet politicians who vow to bring down evil corporations can depend on a steady stream of campaign contributions. In much the same way, entrepreneurs who rail against government bureaucrats can be enthusiastically invited to speak to the media and at investor conferences.

It is a ploy that has remained effective from antiquity to the present day because it strikes at our primordial tendencies toward tribalism and justice, which is why we can expect it to continue. It’s a pattern that recurs with such metronomic regularity precisely because we are so vulnerable to it.

Being Aware Is Half The Battle

In his wonderful book Adversaries into Allies, my friend Bob Burg makes the distinction between persuasion and manipulation. Bob says that persuasion involves helping someone to make a decision by explaining the benefits of a particular course of action, while manipulation takes advantage of negative emotions, such as anger, fear and greed.

So it shouldn’t be surprising that those who want to manipulate us tell origin stories in which we were once innocent and good until a corrupting force diminished us. It is that narrative that allows them to assert victimhood, dehumanize an out-group and promise, if given the means, that they will deliver retribution and a return to our rightful place.

These are the tell-tale signs that reveal demagogues, hucksters and con artists. It doesn’t matter if they are seeking backing for a new technology, belief in a new business model or public office; there will always be an “us” and a “them” and there can never be a “we together,” because “they” are trying to deceive us, take what is rightfully ours and rob us of our dignity.

Yet once we begin to recognize those signs, we can use those emotional pangs as markers that alert us to the need to scrutinize claims more closely, seek out a greater diversity of perspectives and examine alternative narratives. We can’t just believe everything we think. It is the people who are telling us things that we want to be true that are best able to deceive us.

Those who pursue evil and greed always claim that they are on the side of everything righteous and pure. That’s what we need to watch out for most.

— Article courtesy of the Digital Tonto blog
— Image credit: Gemini


The Future of Humanity in an Artificially Intelligent World

GUEST POST from Greg Satell

The Argentinian writer Jorge Luis Borges had a fascination with a concept known as the infinite monkey theorem. The idea is that if you had an infinite number of monkeys pecking away at an infinite number of typewriters, they would randomly create the collected works of Tolstoy and every other masterwork ever written (or that could be written).

The theorem, which has been around for at least a century, is troubling because it calls into question what it means to be human. If we can be inspired by something that could so easily be randomly generated, then what does it mean to be meaningful? Is meaning just an illusion we construct to make ourselves happy?

In recent years, the rise of artificial intelligence has transformed this theoretical dilemma into an intensely practical issue. In a world in which machines are taking over work long thought of as intensely human, what is the role of human labor? How do we create value that is distinct from what machines can do faster and cheaper? The answers will shape our future.

Machines Replacing Humans

The first industrial robot, called Unimate, was installed on an assembly line at General Motors in 1961. Since then, robots have become highly integrated into our economy. They do dangerous jobs, like bomb disposal, as well as more prosaic ones, like running warehouses and legal discovery.

And lest you think that automation only applies to low-skill, mechanical jobs, robots are also invading the creative realm. One book written by a machine was even accepted as a submission for the prestigious Hoshi Shinichi Literary Award in Japan. There are now robots that advise physicians, movie producers and music executives. Some even write songs.

Yet these things have been, for the most part, going on in the background. Very few of us see robots at work in factories, much less evaluating scripts, reading mammograms or doing research for lawyers. However, the introduction of ChatGPT brings the power of AI to anyone with an Internet connection.

If you haven’t tried it yourself yet, here’s an example: I asked it to explain the infinite monkey theorem to me and this is what I got:

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. The theorem is often used as a humorous illustration of the basics of probability theory. However, the theorem is actually a misnomer, since the chances of a monkey randomly typing a meaningful sequence of characters is incredibly small, and the concept of an “infinite” amount of time is physically impossible.

And just for fun, I asked it to produce a one paragraph promotional biography of Greg Satell:

Greg Satell is a thought leader in innovation and business strategy, with over 20 years of experience in the industry. He is a popular speaker and author, known for his bestselling book “Mapping Innovation: A Playbook for Navigating a Disruptive Age.” Greg has also held executive roles at leading companies and has a track record of driving innovation and growth through effective business strategies.

Each of these responses appeared in seconds and, while they are not what I would have chosen to write myself, they are more than passable. With the technology advancing at this pace, how can we compete with the machines?

The Automation Paradox

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agricultural workers lost their jobs due to automation. Yet those out-of-work farmers weren’t impoverished. In fact, the 20th century was an era of unprecedented prosperity.

Consider this: Although the workforce in the US has more than doubled since 1950, labor participation rates remain close to all-time highs. Still, a recent report by the US Chamber of Commerce found that we have a massive labor shortage. In the highly-automated manufacturing sector, it estimated that even if every unemployed person with experience were employed, it would only fill half of the vacant jobs.

In fact, when you look at highly automated fields, they tend to be the ones that have major labor shortages. You see touchscreens everywhere you go, but 70% of openings in the retail sector go unfilled. Autopilot has been around for decades, but we face a massive global pilot shortage that’s getting worse every year.

Once a task becomes automated, it also becomes largely commoditized and value is then created in an area that wasn’t quite obvious when people were busy doing more basic things. Go to an Apple store and you’ll notice two things: lots of automation and a sea of employees in blue shirts there to help, troubleshoot and explain things to you. Value doesn’t disappear, it just shifts to a different place.

One striking example of this is the humble community bookstore. With the domination of Amazon, you might think that small independent bookstores would be doomed, but instead they’re thriving. While it’s true that they can’t match Amazon’s convenience, selection or prices, people are flocking to small local shops for other reasons, such as deep expertise in particular subject matter and the chance to meet people with similar interests.

The Irrational Mind

To understand where value is shifting now, the work of neuroscientist Antonio Damasio can shed some light. He studied patients who, despite having perfectly normal cognitive ability, had lost the ability to feel emotion. Many would assume that, without emotions to distract them, these people would be great at making perfectly rational decisions.

But they weren’t. In fact, they couldn’t make any decisions at all. They could list the factors at play and explain their significance, but they couldn’t feel one way or another about them. In effect, without emotion they couldn’t form any intention. One decision was just like any other, leading to an outcome that they cared nothing about.

The social psychologist Jonathan Haidt built on Damasio’s work to form his theory of social intuitionism. What Haidt found in his research is that we don’t make moral judgments through conscious reasoning, but rather through unconscious intuition. Essentially, we automatically feel a certain way about something and then come up with reasons that we should feel that way.

Once you realize that, it becomes clear why Apple needs so many blue shirts at its stores and why independent bookstores are thriving. An artificial intelligence can access all the information in the world, curate that information and present it to us in an understandable way, but it can’t understand why we should care about it.

In fact, we often disguise our true intent, even from ourselves. A student might say he wants a new computer to do schoolwork, but may really want a stronger graphics engine to play video games. In much the same way, a person may want to buy a book about a certain subject, but also truly covet a community that shares that interest.

The Library of Babel And The Intention Economy

In his story The Library of Babel, Borges describes a library which contains books with all potential word combinations in all possible languages. Such a place would encompass all possible knowledge, but would also be completely useless, because the vast majority of books would be gibberish consisting of random strings of symbols.

In essence, deriving meaning would be an exercise in curation, which machines could do if they perfectly understood our intentions. However, human motives are almost hopelessly complex. So much so, in fact, that even we ourselves often have difficulty understanding why we want one thing and not another.

There are some things that a computer will never do. Machines will never strike out at a Little League game, have their hearts broken in a summer romance or see their children born. The inability to share human experiences makes it difficult, if not impossible, for computers to relate to human emotions and infer how those feelings shape preferences in a given context.

That’s why the rise of artificial intelligence is driving a shift from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers—we now use computers for those things—than they do with humans serving other humans. That requires deep collaboration, teamwork and emotional intelligence.

To derive meaning in an artificially intelligent world we need to look to each other and how we can better understand our intentions. The future of technology is always more human.


— Article courtesy of the Digital Tonto blog
— Image credit: Gemini


Moving From Disruption To Resilience

GUEST POST from Greg Satell

In the 1990s, a newly minted professor at Harvard Business School named Clayton Christensen began studying why good companies fail. What he found was surprising. They weren’t failing because they lost their way, but rather because they were following time-honored principles, such as listening to their customers, investing in R&D and improving their products.

As he researched further he realized that, under certain circumstances, a market becomes over-served, the basis of competition changes and firms become vulnerable to a new type of competitor. In his 1997 book, The Innovator’s Dilemma, he coined the term disruptive technology.

It was an idea whose time had come. The book became a major bestseller and Christensen the world’s top business guru. Yet many began to see disruption not merely as a special case but as a mantra: an end in itself rather than a means to an end. Today, we’ve disrupted ourselves into oblivion and we desperately need to make a shift. It’s time to move toward resilience.

The Disruption Gospel

We like to think of ourselves as living in a fast-moving age, but that’s probably more hype than anything else. Before 1920 most households in America lacked electricity and running water. Even the most basic household tasks, like washing or cooking a meal, took hours of backbreaking labor to haul water and cut firewood. Cars were rare and few people traveled more than 10 miles from home.

That would change in the next few decades as household appliances and motorized transportation transformed American life. The development of penicillin in the 1940s would bring about a “Golden Age” of antibiotics and revolutionize medicine. The 1950s brought a Green Revolution that would help expand overseas markets for American goods.

By the 1970s, innovation began to slow. After half a century of accelerated productivity growth, it would enter a long slump. The rise of Japan and stagflation contributed to an atmosphere of malaise. After years of dominance, the American model seemed to have its best days behind it. For the first time in the post-war era, the future was uncertain.

That began to change in the 1980s. A new president, Ronald Reagan, talked of a “shining city on a hill” and declared that “Government is not the solution to our problem, government is the problem.” A new “Washington Consensus” took hold that preached fiscal discipline, free trade, privatization and deregulation.

At the same time a management religion took hold, with Jack Welch as its patron saint. No longer would CEOs weigh the interests of investors against those of customers, communities, employees and other stakeholders; everything would be optimized for shareholder value. General Electric, and then broader industry, would embark on a program of layoffs, offshoring and financial engineering in order to trim the fat and streamline their organizations.

The End Of History?

There were early signs that we were on the wrong path. Despite the layoffs that hollowed out America’s industrial base and impoverished many of its communities, productivity growth, which had been depressed since the 1970s, didn’t even budge. Poorly thought out deregulation in the banking industry led to a savings and loan crisis and a recession.

At this point, questions should have been raised, but two events in November 1989 would reinforce the prevailing wisdom. First, the fall of the Berlin Wall would end the Cold War and discredit socialism. Then Tim Berners-Lee would create the World Wide Web and usher in a new technological era of networked computing.

With markets opening across the world, American-trained economists at the IMF and the World Bank traveled the globe preaching the market discipline prescribed by the Washington Consensus, often imposing policies that would never be accepted in developed markets back home. Fueled by digital technology, productivity growth in the US finally began to pick up in 1996, creating budget surpluses for the first time in decades.

Finally, it appeared that we had hit upon a model that worked. We would no longer leave ourselves to the mercy of bureaucrats at government agencies or executives at large organizations who had gotten fat and sloppy. The combination of market and technological forces would point the way for us.

The calls for deregulation increased, even if it meant increased disruption. Most notably, the Glass-Steagall Act, which was designed to limit risk in the financial system, was repealed in 1999. Times were good and we had unbridled capitalism and innovation to thank for it. The Washington Consensus had been proven out, or so it seemed.

The Silicon Valley Doomsday Machine

By the year 2000, the first signs of trouble began to appear. The money rushing into Silicon Valley created a bubble which burst and took several notable corporations with it. Massive frauds were uncovered at firms like Enron and WorldCom, which also brought down their auditor, Arthur Andersen. Calls for reform led to the Sarbanes-Oxley Act, which increased standards for corporate governance.

Yet the Bush Administration concluded that the problem was too little disruption, not too much, and continued to push for less regulation. By 2005, the increase in productivity growth that began in 1996 dissipated as suddenly as it had appeared. Much like in the late 80s, the lack of oversight led to a banking crisis, except this time it wasn’t just regional savings and loans that got caught up, but the major financial institutions at the center of the system that were left exposed.

That’s what led to the Great Recession. To stave off disaster, central banks embarked on an extremely stimulative strategy called quantitative easing. This created a superabundance of capital which, with few places to go, ended up sloshing around in Silicon Valley helping to create a new age of “unicorns,” with over 1000 startups valued at more than $1 billion.

Today, we’re seeing the same kind of scandals we saw in the early 2000s, except the companies being exposed aren’t established firms like Enron, WorldCom and Arthur Andersen, but would-be disrupters like WeWork, Theranos and FTX. Unlike those earlier failures, there has been no reckoning. If anything, tech billionaires like Marc Andreessen and Elon Musk seem emboldened.

At the same time, there is growing evidence that hyped-up excesses are crowding out otherwise viable businesses in the real economy. When WeWork “disrupted” other workspaces it wasn’t because of any innovation, technological or otherwise, but rather because huge amounts of venture capital allowed it to undercut competitors. Silicon Valley is beginning to look less like an industry paragon and more like a doomsday machine.

Realigning Prosperity With Security

It’s been roughly 25 years since Clayton Christensen inaugurated the disruptive era and what he initially intended to describe as a special case has been implemented as a general rule. Disruption is increasingly self-referential, used as both premise and conclusion, while the status quo is assumed to be inadequate as an a priori principle.

The results, by just about any metric imaginable, have been tragic. Despite all the hype about innovation, productivity growth remains depressed. Two decades of lax antitrust enforcement have undermined competitive markets in the US. We’ve gone through the worst economic crisis since the 1930s and the worst pandemic since the 1910s.

At the same time, social mobility is declining, while anxiety and depression are rising to epidemic levels. Wages have stagnated, while the cost of healthcare and education has soared. Income inequality is at its highest level in 50 years. The average American is worse off, in almost every way, than before the cult of disruption took hold.

It doesn’t have to be this way. We can change course and invest in resilience. There have been positive moves. The infrastructure legislation and the CHIPS legislation both represent huge investments in our future, while the poorly named Inflation Reduction Act represents the largest investment in climate ever. Businesses have begun reevaluating their supply chains.

Yet the most important shift, that of mindset, has yet to come. Not everything needs to be optimized. Not every cost needs to be cut. We cannot embark on changes just for change’s sake. We need to pursue fewer initiatives that achieve greater impact and, when we feel the urge to disrupt, we need to ask, disruption in the service of what?

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


You Must Accept That People Are Irrational

GUEST POST from Greg Satell

For decades, economists have been obsessed with the idea of “enlightened self-interest,” building elaborate models based on the assumption that people make rational choices. Business and political leaders have used these models to shape competitive strategies, compensation, tax policies and social services among other things.

It’s clear that the real world is far more complex than that. Consider the prisoner’s dilemma, a famous thought experiment in which individuals acting in their self-interest make everyone worse off. In a wide array of real world and experimental contexts, people will cooperate for the greater good rather than pursue pure self-interest.

We are wired to cooperate as well as to compete. Identity and dignity will guide our actions even more than the prospect for loss or gain. While business schools have trained generations of managers to assume that they can optimize results by designing incentives, the truth is that leaders that can forge a sense of shared identity and purpose have the advantage.

Overcoming The Prisoner’s Dilemma

John von Neumann was a frustrated poker player. Despite having one of the best mathematical minds in history, one that could probably calculate the odds better than anyone on earth, he couldn’t tell whether other players were bluffing or not. It was his failure at poker that led him to create game theory, which models how players choose strategies in response to one another.

As the field developed, it was expanded to include cooperative games in which players could choose to collaborate and even form coalitions with each other. That led researchers at RAND to create the prisoner’s dilemma, in which two suspects are being interrogated separately and each offered a reduced sentence to confess.


Here’s how it works: If both prisoners cooperate with each other and neither confesses, they each get one year in prison on a lesser charge. If one confesses, he gets off scot-free, while his partner gets 5 years. If they both rat each other out, then they get three years each—collectively the worst outcome of all.

Notice how, from a rational viewpoint, the best strategy is to defect. No matter what one guy does, the other one is better off ratting him out. If both pursue self-interest, they are made worse off. It’s a frustrating problem. Game theorists call it a Nash equilibrium—one in which nobody can improve their position by a unilateral move. In theory, you’re basically stuck.

Yet in a wide variety of real-world contexts, ranging from the survival strategies of guppies to military alliances, cooperation is credibly maintained. In fact, there are a number of strategies that have proved successful in overcoming the prisoner’s dilemma. One, called tit-for-tat, relies on credible punishments for defections. Even more effective, however, is building a culture of shared purpose and trust.
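
To make this concrete, here is a minimal sketch of the iterated game described above (my own illustration, not from the original post), using the prison terms from the text as payoffs, where lower is better. It shows how tit-for-tat sustains cooperation with a like-minded partner while quickly punishing a player who always defects.

```python
# A minimal sketch of the iterated prisoner's dilemma, using the payoffs from
# the text expressed as years in prison (lower is better). Strategy names are
# standard in the game theory literature, not taken from the post itself.

YEARS = {  # (my move, their move) -> years I serve
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"): 5,
    ("defect", "cooperate"): 0,
    ("defect", "defect"): 3,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror whatever the other player did last round.
    return "cooperate" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        total_a += YEARS[(move_a, move_b)]
        total_b += YEARS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total_a, total_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))      # (10, 10): cooperation holds
    print(play(tit_for_tat, always_defect))    # (32, 27): exploited once, then punishes
    print(play(always_defect, always_defect))  # (30, 30): stuck at the Nash equilibrium
```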

Kin Selection And Identity

Evolutionary psychology is a field very similar to game theory. It employs mathematical models to explain what types of behaviors provide the best evolutionary outcomes. At first, this may seem like the utilitarian approach that economists have long-employed, but when you combine genetics with natural selection, you get some surprising answers.

Consider the concept of kin selection. From a purely selfish point of view, there is no reason for a mother to sacrifice herself for her child. However, from an evolutionary point of view, it makes perfect sense for parents to put their kids first. Groups who favor children are more likely to grow and outperform groups who don’t.

This is what Richard Dawkins meant when he called genes selfish. If we look at things from our genes’ point of view, it makes perfect sense for them to want us to sacrifice ourselves for children, who are more likely to be able to propagate our genes than we are. The effect would logically also apply to others, such as cousins, that likely carry our genes.
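
The standard way to put numbers on this intuition is Hamilton’s rule, which the post doesn’t cite but which captures the same logic: an altruistic act toward a relative is favored by selection when the benefit to the recipient, discounted by relatedness, exceeds the cost to the actor.

```latex
% Hamilton's rule, a standard result from evolutionary biology,
% included here only to illustrate the logic described above.
r B > C
```

Here r is the coefficient of relatedness (about 1/2 for a child or full sibling, 1/8 for a first cousin), B is the reproductive benefit to the recipient and C is the reproductive cost to the altruist, which is why sacrifice on behalf of close kin can “pay” in evolutionary terms.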

Researchers have also applied the concept of kin selection to other forms of identity that don’t involve genes, but ideas (also known as memes) in examples such as patriotism. When it comes to people or ideas we see as an important part of our identity, we tend to take a much more expansive view of our interests than traditional economic models would predict.

Cultures of Dignity

It’s not just identity that figures into our decisions, but dignity as well. Consider the ultimatum game. One player is given a dollar and needs to propose how to split it with another player. If the offer is accepted, both players get the agreed upon shares. If it is not accepted, neither player gets anything.

If people acted purely rationally, offers as low as a penny would be routinely accepted. After all, a penny is better than nothing. Yet decades of experiments across different cultures show that most people do not accept a penny. In fact, offers of less than 30 cents are routinely rejected as unfair because they offend people’s dignity and sense of self.
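
A tiny sketch (again mine, not the author’s) makes the gap between the two models explicit: a purely rational responder accepts any positive offer, while a responder with a dignity threshold, set here at the roughly 30 cents the experiments suggest, turns down low offers even at a personal cost.

```python
# Contrasting the "rational" prediction for the ultimatum game with the
# behavior described in the text. The 30-cent threshold is taken from the
# rough figure cited above; it is illustrative, not a precise parameter.

def rational_responder(offer_cents: int) -> bool:
    # A penny is better than nothing, so accept any positive offer.
    return offer_cents > 0

def dignity_responder(offer_cents: int, threshold_cents: int = 30) -> bool:
    # Reject offers that feel insulting, even though rejection costs money.
    return offer_cents >= threshold_cents

for offer in (1, 10, 30, 50):
    print(f"offer of {offer} cents: rational accepts={rational_responder(offer)}, "
          f"dignity model accepts={dignity_responder(offer)}")
```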

Results from the ultimatum game are not uniform, but vary across cultures, and more recent research suggests why. In a study in which a similar public goods game was played, it was found that cooperative—as well as punitive—behavior is contagious, spreading through three degrees of interactions, even between people who haven’t had any direct contact.

Whether we know it or not, we are constantly building ecosystems of norms that reward and punish behavior according to expectations. If we see the culture we are operating in as trusting and generous, we are much more likely to act collaboratively. However, if we see our environment as cutthroat and greedy, we’ll tend to model that behavior in the same way.

Forging Shared Identity And Shared Purpose

In an earlier age, organizations were far more hierarchical. Power rested at the top. Information flowed up, orders went down, work got done and people got paid. Incentives seemed to work. You could pay more and get more. Yet in today’s marketplace, that’s no longer tenable because the work we need done is increasingly non-routine.

That means we need people to do more than merely carry out tasks; they need to put all of their passion and creativity into their work to perform at a high level. They need to collaborate effectively in teams and take pride in the impact their efforts produce. To achieve that at an organizational level, leaders need to shift their mindsets.

As David Burkus explained in his TED Talk, humans are prosocial. They are vastly more likely to perform when they understand and identify with who their work benefits than when they are given financial incentives or fed some grandiose vision. Evolutionary psychologists have long established that altruism is deeply embedded in our sense of tribe.

The simple truth is that we can no longer coerce people to do what we want with Rube Goldberg-like structures of carrots and sticks, but must inspire people to want what we want. Humans are not purely rational beings, responding to stimuli as if they were vending machines that spit out desired behaviors when the right buttons are pushed, but are motivated by identity and dignity more than anything else.

Leadership is not an algorithm, but a practice of creating meaning through relationships of trust in the context of a shared purpose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Learning Business and Life Lessons from Monkeys

GUEST POST from Greg Satell

Franz Kafka was especially skeptical about parables. “Many complain that the words of the wise are always merely parables and of no use in daily life,” he wrote. “When the sage says: ‘Go over,’ he does not mean that we should cross to some actual place… he means some fabulous yonder… that he cannot designate more precisely, and therefore cannot help us here in the very least.”

Business pundits, on the other hand, tend to favor parables, probably because telling simple stories allows for the opportunity to seem both folksy and wise at the same time. When Warren Buffett says “Only when the tide goes out do you discover who’s been swimming naked,” it doesn’t sound so much like an admonishment as a bit of folksy wisdom.

Over the years I’ve noticed that some of the best business parables involve monkeys. I’m not sure why that is, but I think it has something to do with taking intelligence out of the equation. We’re often prone to imagining ourselves as the clever hero of our own story and we neglect simple truths. That may be why monkey parables have so much to teach us.

1. Build The #MonkeyFirst

When I work with executives, they often have a breakthrough idea they are excited about. They begin to tell me what a great opportunity it is and how they are perfectly positioned to capitalize on it. However, when I begin to dig a little deeper it appears that there is some major barrier to making it happen. When I try to ask about it, they just shut down.

One reason that this happens is that there is a fundamental tension between innovation and operations. Operational executives tend to focus on identifying clear benchmarks to track progress. That’s fine for a typical project, but when you are trying to do something truly new and different, you have to directly confront the unknown.

At Google X, the tech giant’s “moonshot factory,” the mantra is #MonkeyFirst. The idea is that if you want to get a monkey to recite Shakespeare on a pedestal, you start by training the monkey, not building the pedestal, because training the monkey is the hard part. Anyone can build a pedestal.

The problem is that most people start with the pedestal, because it’s what they know and by building it, they can show early progress against a timeline. Unfortunately, building a pedestal gets you nowhere. Unless you can actually train the monkey, working on the pedestal is wasted effort.

The moral: Make sure you address the crux of the problem and don’t waste time with peripheral issues.

2. Don’t Get Taken In By Coin Flipping Monkeys

We live in a world that worships accomplishment. Sports stars who have never worked in an office are paid large fees to speak to corporate audiences. Billionaires who have never walked a beat speak out on how to fight crime (even as they invest in gun manufacturers). Others like to espouse views on education, although they have never taught a class.

Many say that you can’t argue with success, but consider this thought experiment: Put a million monkeys in a coin flipping contest. The winners in each round win a dollar and the losers drop out. After twenty rounds, there will only be two monkeys left, each winning $262,144. The vast majority of the other monkeys leave with merely pocket change.

How much would you pay the winning monkeys to speak at your corporate event? Would you invite them to advise your company? Sit on your board? Would you be interested in their views about how to raise your children, invest your savings or make career choices? Would you try to replicate their coin-flipping success? (Maybe it’s all in the wrist).

The truth is that chance and luck play a much bigger part in success than we like to admit. Einstein, for example, became the most famous scientist of the 20th century not just because of his discoveries but also due to an unlikely coincidence. True accomplishment is difficult to evaluate, so we look for signals of success to guide our judgments.

The moral: Next time you judge someone, either by their success or lack thereof, ask yourself whether you are judging actual accomplishment or telltale signs of successful coin flipping. It’s harder to tell the difference than you’d think.

3. The Infinite Monkey Theorem

There is an old thought experiment called the Infinite Monkey Theorem, which is eerily disturbing. The basic idea is that if there were an infinite number of monkeys pecking away on an infinite number of keyboards they would, in time, produce the complete works of Shakespeare, Tolstoy and every other literary masterpiece.

It’s a perplexing thought because we humans pride ourselves on our ability to recognize and evaluate patterns. The idea that something we value so highly could be randomly generated is extremely unsettling. Yet there is an entire branch of mathematics, called Ramsey Theory, devoted to the study of how order emerges from random sets of data.
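
For a rough sense of scale, here is a back-of-the-envelope calculation (mine, not the author’s) that assumes a 27-key typewriter of 26 letters plus a space bar: the odds of a monkey randomly typing even an 18-character phrase on a single attempt are about one in 27^18, or roughly one in 6 × 10^25.

```python
# Back-of-the-envelope odds of randomly typing a short phrase, assuming a
# 27-key typewriter (26 letters plus a space bar). This illustrates why the
# theorem only "works" with infinite time; it is not a claim from the post.
from math import log10

phrase = "to be or not to be"   # 18 characters
alphabet = 27
attempts_needed = alphabet ** len(phrase)

print(f"one chance in {attempts_needed:.3e}")                  # ~1 in 5.8e25
print(f"roughly 10^{log10(attempts_needed):.1f} attempts on average")
```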

While the infinite monkey theorem is, of course, theoretical, technology is forcing us to confront the very real dilemmas it presents. For example, music scholar and composer David Cope has been able to create algorithms that produce original works of music that are so good even experts can’t tell they are computer generated. So what is the value of human input?

The moral: Much like the coin flipping contest, the infinite monkey theorem makes us confront what we value and why. What is the difference between things human produced and identical works that are computer generated? Are Tolstoy’s words what give his stories meaning? Or is it the intent of the author and the fact that a human was trying to say something important?

Imagining Monkeys All Around Us

G. H. Hardy, widely considered a genius, wrote that “For any serious purpose, intelligence is a very minor gift.” What he meant was that even in purely intellectual pursuits, such as his field of number theory, there are things that are far more important. It was, undoubtedly, intellectual humility that led Hardy to Ramanujan, perhaps his greatest discovery of all.

Imagining ourselves to be heroes of our own story can rob us of the humility we need to succeed and prosper. Mistaking ourselves for geniuses can often get us into trouble. People who think they’re playing it smart tend to make silly mistakes, both because they expect to see things that others don’t and because they fail to look for and recognize trouble signs.

Parables about monkeys can be useful because nobody expects them to be geniuses, which demands that we ask ourselves hard questions. Are we doing the important work, or the easiest tasks to show progress on? If monkeys flipping coins can simulate professional success, what do we really celebrate? If monkeys tapping randomly on typewriters can create masterworks, what is the value of human agency?

The truth is that humans are prone to be foolish. We are unable, outside a few limited areas of expertise, to make basic distinctions in matters of importance. So we look for signals of prosperity, intelligence, shared purpose and other things we value to make judgments about what information we should trust. Imagining monkeys around us helps us to be more careful.

Sometimes the biggest obstacle between where we are now and the fabulous yonder we seek is just the few feet in front of us.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr


Identity is Crucial to Change

GUEST POST from Greg Satell

In an age of disruption, the only viable strategy is to adapt. Today, we are undergoing major shifts in technology, resources, migration and demography that will demand that we make changes in how we think and what we do. The last time we saw this much change afoot was during the 1920s and that didn’t end well. The stakes are high.

In a recent speech, the EU’s High Representative for Foreign Affairs and Security Policy Josep Borrell highlighted the need for Europe to change and adapt to shifts in the geopolitical climate. He also pointed out that change involves far more than interests and incentives, carrots and sticks, but even more importantly, identity.

“Remember this sentence,” he said. “’It is the identity, stupid.’ It is no longer the economy, it is the identity.” What he meant was that human beings build attachments to things they identify with and, when those are threatened, they are apt to behave in a visceral, reactive and violent way. That’s why change and identity are always inextricably intertwined.

“We can’t define the change we want to pursue until we define who we want to be.” — Greg Satell

The Making Of A Dominant Model

Traditional models come to us with such great authority that we seldom realize that they too once were revolutionary. We are so often told how Einstein is revered for showing that Newton’s mechanics were flawed that it is easy to forget that Newton himself was a radical insurgent, who rewrote the laws of nature and ushered in a new era.

Still, once a model becomes established, few question it. We go to school, train for a career and hone our craft. We make great efforts to learn basic principles and gain credentials when we show that we have grasped them. As we strive to become masters of our craft we find that as our proficiency increases, so does our success and status.

The models we use become more than mere tools to get things done, but intrinsic to our identity. Back in the nineteenth century, the miasma theory, the notion that bad air caused disease, was predominant in medicine. Doctors not only relied on it to do their job, they took great pride in their mastery of it. They would discuss its nuances and implications with colleagues, signaling their membership in a tribe as they did.

In the 1840s, when a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, many in the medical establishment were scandalized. First, the suggestion that they, as men of prominence, could spread something as dirty as disease was insulting. Even more damaging, however, was the suggestion that their professional identity was, at least in part, based on a mistake.

Things didn’t turn out well for Semmelweis. He railed against the establishment, but to no avail. He would eventually die in an insane asylum, ironically of an infection he contracted under care, and the questions he raised about the prevailing miasma paradigm went unanswered.

A Gathering Storm Of Accumulating Evidence

We all know that for every rule, there are exceptions and anomalies that can’t be explained. As the statistician George Box put it, “all models are wrong, but some are useful.” The miasma theory, while it seems absurd today, was useful in its own way. Long before we had technology to study bacteria, smells could alert us to their presence in unsanitary conditions.

But Semmelweis’s hand-washing regime threatened doctors’ view of themselves and their role. Doctors were men of prominence, who saw disease emanating from the smells of the lower classes. This was more than a theory. It was an attachment to a particular view of the world and their place in it, which is one reason why Semmelweis experienced such backlash.

Yet he raised important questions and, at least in some circles, doubts about the miasma theory continued to grow. In 1854, about a decade after Semmelweis instituted hand washing, a cholera epidemic broke out in London and a miasma theory skeptic named John Snow was able to trace the source of the infection to a single water pump.

Yet once again, the establishment could not accept evidence that contradicted its prevailing theory. William Farr, a prominent medical statistician, questioned Snow’s findings. Besides, Snow couldn’t explain how the water pump was making people sick, only that it seemed to be the source of some pathogen. Farr, not Snow, won the day.

Later it would turn out that a septic pit had been dug too close to the pump and the water had been contaminated with fecal matter. But for the moment, while doubts began to grow about the miasma theory, it remained the dominant model and countless people would die every year because of it.

Breaking Through To A New Paradigm

In the early 1860s, as the Civil War was raging in the US, Louis Pasteur was researching wine-making in France. While studying the fermentation process, he discovered that microorganisms spoiled beverages such as beer and milk. He proposed that they be heated to temperatures between 60 and 100 degrees Celsius to avoid spoiling, a process that came to be called pasteurization.

Pasteur guessed that similar microorganisms made people sick, which, in turn, led to the work of Robert Koch and Joseph Lister. Together they would establish the germ theory of disease. This work then led not only to better sanitary practices, but eventually to the work of Alexander Fleming, Howard Florey and Ernst Chain and the development of antibiotics.

To break free of the miasma theory, doctors needed to change the way they saw themselves. The miasma theory had been around since Hippocrates. To forge a new path, they could no longer be the guardians of ancient wisdom, but evidence-based scientists, and that would require that everything about the field be transformed.

None of this occurred in a vacuum. In the late 19th century, a number of long-held truths, from Euclid’s Geometry to Aristotle’s logic, were being discarded, which would pave the way for strange new theories, such as Einstein’s relativity and Turing’s machine. To abandon these old ideas, which were considered gospel for thousands of years, was no doubt difficult. Yet it was what we needed to do to create the modern world.

Moving From Disruption to Resilience

Today, we stand on the precipice of a new paradigm. We’ve suffered through a global financial crisis, a pandemic and the most deadly conflict in Europe since World War II. The shifts in technology, resources, migration and demography are already underway. The strains and dangers of these shifts are already evident, yet the benefits are still to come.

To successfully navigate the decade ahead, we must make decisions not just about what we want, but who we want to be. Nowhere is this playing out more than in Ukraine right now, where the war being waged is almost solely about identity. Russians want to deny Ukrainian identity and to defy what they see as the US-led world order. Europeans need to take sides. So do the Chinese. Everyone needs to decide who they are and where they stand.

This is not only true in international affairs, but in every facet of society. Different eras make different demands. The generation that came of age after World War II needed to rebuild and they did so magnificently. Yet as things grew, inefficiencies mounted and the Boomer Generation became optimizers. The generations that came after worshiped disruption and renewal. These are, of course, gross generalizations, but the basic narrative holds true.

What should be clear is that where we go from here will depend on who we want to be. My hope is that we become protectors who seek to make the shift from disruption to resilience. We can no longer simply worship market and technological forces and leave our fates up to them as if they were gods. We need to make choices and the ones we make will be greatly influenced by how we see ourselves and our role.

As Josep Borrell so eloquently put it: It is the identity, stupid. It is no longer the economy, it is the identity.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


What We See Influences How We'll Act

GUEST POST from Greg Satell

“Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist,” John Maynard Keynes, himself a long dead economist, once wrote. We are, much more than we’d like to admit, creatures of our own age, taking our cues from our environment.

That’s why we need to be on the lookout for our own biases. The truth, as we see it, is often more of a personalized manifestation of the zeitgeist than it is the product of any real insight or reflection. As Richard Feynman put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that.”

We can’t believe everything we think. We often seize upon the most easily available information, rather than the most reliable sources. We then seek out information that confirms those beliefs and reject evidence that contradicts existing paradigms. That’s what leads to bad decisions. If what we see determines how we act, we need to look carefully.

The Rise And Fall Of Social Darwinism

In the 1860s, in response to Darwin’s ideas, Herbert Spencer and others began promoting the theory of Social Darwinism. The basic idea was that “survival of the fittest” meant that society should reflect a Hobbesian state of nature, in which most can expect a life that is “nasty, brutish and short,” while an exalted few enjoy the benefits of their superiority.

This was, of course, a gross misunderstanding of Darwin’s work. First, the term “survival of the fittest” wasn’t Darwin’s at all; it was actually coined by Spencer himself. Secondly, Darwin never meant to suggest that there are certain innate qualities that make one individual better than others, but that as the environment changes, certain traits tend to be propagated, which, over time, can lead to a new species.

Still, if you see the world as a contest for individual survival, you will act accordingly. You will favor a laissez-faire approach to society, punishing the poor and unfortunate and rewarding the rich and powerful. In some cases, such as Nazi Germany and in the late Ottoman empire, Social Darwinism was used as a justification for genocide.

While some strains of Social Darwinism still exist, for the most part it has been discredited, partly because of excesses such as racism, eugenics and social inequality, but also because more rigorous approaches, such as evolutionary psychology, show that altruism and collaboration can themselves be adaptive traits.

The Making Of The Modern Organization

When Alfred Sloan created the modern corporation at General Motors in the early 20th century, what he really did was create a new type of organization. It had centralized management, far flung divisions and was exponentially more efficient at moving around men and material than anything that had come before.

He called it “federal decentralization.” Management would create operating principles, set goals and develop overall strategy, while day-to-day decisions were performed by people lower down in the structure. While there was some autonomy, it was more like an orchestra than a jazz band, with the CEO as conductor.

Here again, what people saw determined how they acted. Many believed that a basic set of management principles, if conceived and applied correctly, could be adapted to any kind of business, which culminated in the “Nifty Fifty” conglomerates of the 60’s and 70’s. It was, in some sense, an idea akin to Social Darwinism, implying that there are certain innate traits that make an organization more competitive.

Yet business environments change and, while larger organizations may be able to drive efficiencies, they often find it hard to adapt to changing conditions. When the economy hit hard times in the 1970s, the “Nifty Fifty” stocks vastly under-performed the market. By the time the 80s rolled around, conglomerates had fallen out of fashion.

Industries and Value Chains

In 1985, a relatively unknown professor at Harvard Business School named Michael Porter published a book called Competitive Advantage, which explained that by optimizing every facet of the value chain, a firm could consistently outperform its competitors. The book was an immediate success and made Porter a management superstar.

Key to Porter’s view was that firms compete in industries that are shaped by five forces: competitors, customers, suppliers, substitutes, and new market entrants. So he advised leaders to build and leverage bargaining power in each of those directions to create a sustainable competitive advantage for the long term.

If you see your business environment as being neatly organized in specific industries, everybody is a potential rival. Even your allies need to be viewed with suspicion. So, for example, when a new open source operating system called Linux appeared, Microsoft CEO Steve Ballmer considered it to be a threat and immediately attacked, calling it a cancer.

Yet even as Ballmer went on the attack, the business environment was changing. As the internet made the world more connected, technology companies found that leveraging that connectivity through open source communities was a winning strategy. Microsoft’s current CEO, Satya Nadella, says that the company loves Linux. Ultimately, it recognized that it couldn’t continue to shut itself out and compete effectively.

Looking To The Future

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1922. The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had and the legendary financier lorded over his domain of corporate barons.

That would quickly change over the next decade when those technologies would gain traction, form ecosystems and drive a 50-year boom. The great “trusts” that he built would get broken up and by 1930 virtually all of them would be dropped as components of the Dow Jones Industrial Average. Every facet of life would be completely transformed.

We’re at a similar point today, on the brink of enormous transformation. The recent string of calamities, including a financial meltdown, a pandemic and the deadliest war in Europe in 80 years, demand that we take a new path. Powerful shifts in technology, demographics, resources and migration, suggest that even more disruption may be in our future.

The course we take from here will be determined by how we see the world we live in. Do we see our fellow citizens as a burden or an asset? Are new technologies a blessing or a threat? Is the world full of opportunities to be embraced or dangers we need to protect ourselves from? These are questions we need to think seriously about.

How we answer them will determine what comes next.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Why Big Ideas Often Fail to Survive Victory

Why Big Ideas Often Fail To Survive Victory

GUEST POST from Greg Satell

I still vividly remember a whiskey drinking session I had with a good friend in my flat in Kyiv in early 2005, shortly after the Orange Revolution had concluded. We were discussing what would come next and, knowing that I had lived in Poland during its years of reform, he was interested in my opinion about the future. I told him NATO and EU accession was the way to go.

My friend, a prominent journalist, disagreed. He thought that Ukraine should pursue a “Finnish model,” in which it would pursue good relations with both Russia and the west, favoring neither. As he saw it, the Ukrainian people, who had just been through months of political turmoil, should pursue a “third way” and leave the drama behind.

As it turned out, we were both wrong. The promise of change would soon turn to nightmare, ending with an evil, brutal regime and a second Ukrainian revolution a decade later. I would later find that this pattern is so common that there is even a name for it: the failure to survive victory. To break the cycle, you first need to learn to anticipate it and then to prepare for it.

The Thrill Of A New Direction And An Initial Success

In the weeks after the Orange Revolution I happened to be in Warsaw and saw a huge banner celebrating democracy movements in Eastern Europe, with Poland’s Solidarity movement as the first and Ukraine’s Orange Revolution as the last in the series. Everyone thought that Ukraine would follow its neighbor into peace and prosperity.

We were triumphant and it seemed like the forces of history were on our side. That’s one reason why we failed to see the forces that were gathering. Despite our enthusiasm, those who opposed our cause didn’t just melt away and go home. In fact they redoubled their efforts to undermine what we had achieved. We never really saw it coming.

I see the same thing in my work with organizational transformations. Once people get a taste of that initial success—they win executive sponsorship for their initiative, get a budget approved or even achieve some tangible progress on the ground—they think it will all get easier. It never does. In fact, it usually gets harder.

Make no mistake. Opposition doesn’t erupt in spite of an early success, but because of it. A change initiative only becomes a threat to the status quo when it begins to gain traction. That’s when the knives come out and, much like my friend and I after the Orange Revolution, most people working to bring about change are oblivious to it.

If you are working for a change that you believe in passionately, chances are you’re missing a brewing storm. Almost everyone does the first time around (and many never learn to recognize it).

Propagating Echo Chambers

One of the reasons we failed to see trouble brewing back then was that, as best we could tell, everyone around us saw things the same way we did. Whatever dissenting voices we did come across seemed like an aberration to us. Sure, some people were still stuck in the old ways, we thought, but with history on our side how could we fail?

Something similar happened in the wake of the George Floyd protests. The city council in Minneapolis, where the incident took place, voted to defund the police. Taking its cue, corporate America brought in armies of consultants to set out the new rules of the workplace. In one survey, 85% of CHROs said that they were expanding diversity and inclusion efforts. With such an outpouring of news coverage and emotion, who would dare to question them?

The truth is that majorities don’t just rule, they also influence us in a number of ways. First, decades of studies show that we tend to conform to the views around us and that the effect extends out to three degrees of relationships. Not only the people we know, but the friends of their friends—most of whom we don’t even know—affect how we think.

It isn’t just what we hear but also what we say that matters. Research from MIT suggests that when we are around people we expect to agree with us, we’re less likely to check our facts and more likely to share information that isn’t true. That, in turn, impacts our informational environment, helping to create an echo chamber that reinforces our sense of certainty.

The Inevitable Backlash

Almost as soon as the new Ukrainian government took power in 2005, the opposition went on the offensive. While the new President, Viktor Yushchenko, was seen positively, they attacked the people around him. His Prime Minister, Yulia Tymoshenko, was portrayed as a calculating and devious woman. When Yushchenko’s son got into trouble, questions were raised about corruption in his father’s administration.

A similar pattern took hold in the wake of the George Floyd protests. Calls for racial justice were portrayed as anti-police and law enforcement budgets across the country increased as “We Support Our Police” signs went up on suburban lawns. Critical Race Theory, an obscure legal concept rarely discussed outside of universities, became a political punching bag. Today, as layoffs increase, corporate diversity efforts are sure to take a hit.

These patterns are not exceptions. They are the rule. As Saul Alinsky pointed out, every revolution inspires a counter-revolution. That is the physics of change: every action provokes a reaction. Every success impacts your environment, and some of those changes will not be favorable to your cause. They will expose vulnerabilities that can be exploited by those who oppose your idea.

Yet Alinsky didn’t just identify the problem, he also pointed to a solution. “Once we accept and learn to anticipate the inevitable counter-revolution, we may then alter the historical pattern of revolution and counter-revolution from the traditional slow advance of two steps forward and one step backward to minimizing the latter,” he writes.

In other words, the key to surviving victory is to prepare for the backlash that is sure to come and build a strategy to overcome it.

Building A Shared Future Rooted In Shared Values

In the two decades I have been researching transformation and change, the failure to survive victory is probably the most consistent aspect of it. In fact, it is so common you can almost set your watch by it. Amazingly, no matter how many times change advocates experience it, they rarely see it coming. Many, in fact, seem to take pride in how many battles they have lost, seeing it as some kind of badge of honor.

The uncomfortable truth is that success doesn’t necessarily beget more success. Often it breeds failure. People mistake a moment for a movement and think that their time has finally come. Believing change to be inevitable, they get cocky and overconfident and miss the networks of unseen connections forming in opposition. They make sure to press a point, but fail to make a difference.

Lasting change always needs to be built on common ground. That’s what we failed to see all those years ago, when I began my journey. You can never base your revolution on any particular person, technology or policy. It needs to be rooted in shared values and if we truly care about change, we need to hold ourselves accountable to be effective messengers.

We can’t just preach to the choir. Sometimes we need to venture out of the church and mix with the heathens. We can be clear about where we stand and still listen to those who see things differently. That doesn’t mean we compromise. In fact, we should never compromise the values we believe in. What we can do, however, is identify common ground upon which to build a shared future.

These principles hold true whether the change you seek is in your organization, your industry, your community or throughout society as a whole. If you fail to learn and apply them, don’t be surprised when you fail to survive victory.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Sometimes Ancient Wisdom Needs to be Left Behind

Sometimes Ancient Wisdom Needs to be Left Behind

GUEST POST from Greg Satell

I recently visited Panama and learned the incredible story of how the indigenous Emberá people there helped teach jungle survival skills to Apollo mission astronauts. It is a fascinating combination, and contrast, of ancient wisdom and modern technology, equipping the first men to go to the moon with insights from both realms.

Humans tend to have a natural reverence for old wisdom that is probably woven into our DNA. It stands to reason that people more willing to stick with the tried and true might have a survival advantage over those who were more reckless. Ideas that stand the test of time are, by definition, the ones that worked well enough to be passed on.

Paradoxically, to move forward we need to abandon old ideas. It was only by discarding ancient wisdom that we were able to create the modern world. In much the same way, to move forward now we’ll need to debunk ideas that qualify as expertise today. As in most things, our past can serve as a guide. Here are three old ideas we managed to transcend.

1. Euclid’s Geometry

The basic geometry we learn in grade school, also known as Euclidean geometry, is rooted in axioms observed from the physical world, such as the principle that two parallel lines never intersect. For thousands of years mathematicians built proofs based on those axioms to create new knowledge, such as how to calculate the height of an object. Without these insights, our ability to shape the physical world would be negligible.
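To take one simple example of the kind of knowledge those proofs made possible (a sketch using similar triangles, not a passage from Euclid): if a meter stick casts a shadow of length $s$ at the same moment a tower casts a shadow of length $S$, the two triangles are similar, so

$$\frac{H}{S} = \frac{1}{s} \quad\Longrightarrow\quad H = \frac{S}{s}\ \text{meters}.$$

Reasoning like that, built step by step from a handful of axioms, is how surveyors and builders measured things they could never climb.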

In the 19th century, however, men like Gauss, Lobachevsky, Bolyai and Riemann started to build new forms of non-Euclidean geometry based on curved spaces. These were, of course, completely theoretical and of no use in daily life. The universe, as we experience it, doesn’t curve in any appreciable way, which is why police ask us to walk a straight line if they think we’ve been drinking.

But when Einstein started to think about how gravity functioned, he began to suspect that the universe did, in fact, curve over large distances. To make his theory of general relativity work he had to discard the old geometrical thinking and embrace new mathematical concepts. Without those critical tools, he would have been hopelessly stuck.

Much like the astronauts in the Apollo program, we now live in a strange mix of old and new. To travel to Panama, for example, I personally moved through linear space and the old Euclidean axioms worked perfectly well. To navigate, however, I had to use GPS, which has to account for curved spacetime, using Einstein’s equations, to correctly calculate distances between the GPS satellites and points on Earth.
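A rough, back-of-the-envelope sketch shows why (the figures are approximate, assuming a typical GPS orbit of about 20,000 km altitude and 3.9 km/s): weaker gravity at altitude makes a satellite clock run fast by roughly 45 microseconds per day, while its orbital speed makes it run slow by roughly 7 microseconds per day, for a net drift of about

$$\Delta\tau \approx \left(\frac{\Delta\Phi}{c^{2}} - \frac{v^{2}}{2c^{2}}\right) \times 86{,}400\ \text{s} \approx +38\ \mu\text{s per day}.$$

Left uncorrected, that tiny drift would compound into positioning errors on the order of ten kilometers per day.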

2. Aristotle’s Logic

In terms of longevity and impact, only Aristotle’s logic rivals Euclid’s geometry. At the core of Aristotle’s system is the syllogism, which is made up of propositions that consist of two terms (a subject and a predicate). If the propositions in the syllogism are true, then the conclusion has to be true: all men are mortal, Socrates is a man, therefore Socrates is mortal. This basic notion that conclusions follow from premises imbues logical statements with mathematical rigor.

Yet much like with geometry, scholars began to suspect that there might be something amiss. At first, they noticed minor flaws having to do with a strange paradox in set theory that arises with sets that are members of themselves. For example, if a barber shaves everyone in town who doesn’t shave himself, who shaves the barber?
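The barber story is a popular dressing-up of what logicians call Russell’s paradox. In modern notation: let $R = \{\,x \mid x \notin x\,\}$, the set of all sets that are not members of themselves. Then

$$R \in R \iff R \notin R,$$

a contradiction whichever way you answer.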

At first, these seemed like strange anomalies, minor exceptions to rules that could be easily explained away. Still, the more scholars tried to close the gaps, the more problems appeared, leading to a foundational crisis. It would only be resolved when a young logician named Kurt Gödel published his incompleteness theorems, which proved that logic, at least as we knew it, was hopelessly broken.

In a strange twist, another young mathematician, Alan Turing, built on Gödel’s work to conceive an imaginary machine that would make digital computers possible. In other words, for Silicon Valley engineers to write the code that creates logical worlds online, they need to use machines built on the premise that perfectly logical systems are inherently unworkable.
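To get a feel for what Turing imagined, here is a minimal sketch of such a machine in Python (purely illustrative; the rule table and function names are my own invention, not Turing’s notation): a read/write head steps along a tape, consulting a small table of rules, and that simple mechanism is, in principle, enough to compute anything a modern computer can.

# A minimal Turing machine sketch (illustrative only; names and rules are hypothetical).
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    # rules maps (state, symbol) -> (new_symbol, move, new_state); move is +1 or -1.
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells))

# A rule table that inverts a binary string, then halts at the first blank.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_turing_machine("10110", rules))  # prints 01001_ (the input, inverted)

Swap in a different rule table and you get a different program, which is exactly Turing’s point: one simple machine, endlessly reprogrammable.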

Of course, as I write this, I am straddling both universes, trying to build logical sentences on those very same machines.

3. The Miasma Theory of Disease

Before the germ theory of disease took hold in medicine, the miasma theory, the notion that bad air caused disease, was predominant. Again, from a practical perspective this made perfect sense. Harmful pathogens tend to thrive in environments with decaying organic matter that gives off bad smells. So avoiding those areas would promote better health.

Once again, this basic paradigm would begin to break down with a series of incidents. First, a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, which suggested that something besides air carried disease. Later, John Snow was able to trace the source of a cholera epidemic to a single water pump.

Perhaps not surprisingly, these findings were initially explained away. Semmelweis failed to present his data properly and was a less than effective advocate for his work. John Snow’s work was statistical, based on correlation rather than causation. A prominent statistician, William Farr, who supported the miasma theory, argued for an alternative explanation.

Still, as doubts grew, more scientists looked for answers. The work of Robert Koch, Joseph Lister and Louis Pasteur led to the germ theory. Later, Alexander Fleming, Howard Florey and Ernst Chain would pioneer the development of antibiotics in the 1940s. That would open the floodgates and money poured into research, creating modern medicine.

Today, we have gone far beyond the germ theory of disease, and even lay people understand that illness has myriad causes, including bacteria, viruses and other pathogens, as well as genetic defects and the strange misfolded proteins known as prions.

To Create The Future, We Need To Break Free Of The Past

If you were a person of sophistication and education in the 19th century, your world view was based on certain axiomatic truths: that parallel lines never cross, that logical propositions are either true or false and that “bad airs” make people sick. For the most part, these ideas would have served you well for the challenges you faced in daily life.

Even more importantly, your understanding of these concepts would signal your inclusion and acceptance into a particular tribe, which would confer prestige and status. If you were an architect or engineer, you needed to understand Euclid’s geometric axioms. Aristotle’s rules of logic were essential to every educated profession. Medical doctors were expected to master the nuances of the miasma theory.

To stray from established orthodoxies carries great risk, even now. It is no accident that those who were able to bring about new paradigms, such as Einstein, Turing and John Snow, came from outside the establishment. More recently, people like Benoit Mandelbrot, Jim Allison and Katalin Karikó had to overcome fierce resistance to bring new ways of thinking to finance, cancer immunotherapy and mRNA vaccines respectively.

Today, it’s becoming increasingly clear we need to break with the past. In just over a decade, we’ve been through a crippling financial crisis, a global pandemic, deadly terrorist attacks, and the biggest conflict in Europe since World War II. We need to confront climate change and a growing mental health crisis. Yet it is also clear that we can’t just raze the global order to the ground and start all over again.

So what do we leave in the past and what do we bring with us into the future? Which new lessons do we need to learn and which old ones do we need to unlearn? Perhaps most importantly, what do we need to create anew and what can we rediscover in the ancient?

Throughout history, we have learned that the answer lies not in merely speculating about ideas, but in finding real solutions to problems we face.

— Article courtesy of the Digital Tonto blog
— Image credit: 1 of 950+ FREE quote slides from http://misterinnovation.com


Strategy Lacking Purpose Will Always Fail

Strategy Lacking Purpose Will Always Fail

GUEST POST from Greg Satell

In 1989, just before the fall of the Berlin Wall, Francis Fukuyama published an essay in the journal The National Interest titled “The End of History?”, which led to a bestselling book. Many took his argument to mean that, with the defeat of communism, US-style liberal democracy had emerged as the only viable way of organizing a society.

He was misunderstood. His actual argument was far more nuanced and insightful. After explaining the arguments of philosophers like Hegel and Kojève, Fukuyama pointed out that even if we had reached an endpoint in the debate about ideologies, there would still be conflict because of people’s need to express their identity.

We usually think of strategy as a rational, analytic activity, with teams of MBAs poring over spreadsheets or generals standing before maps. Yet if we fail to take human agency and dignity into account, we’re missing the boat. Strategy without purpose is doomed to fail, however clever the calculations. Leaders need to take note of that basic reality.

Taking Stock Of The Halo Effect

Business case studies are written by experienced professionals who are trained to analyze past situations from multiple perspectives. However, their ability to do that objectively is greatly limited by the fact that they already know the outcome of the situation they are studying. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture, with people who worked long hours but loved every minute of it. When the market tanked, however, all of a sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, seeing Cisco’s success, analysts looked at particular qualities in a positive light. However, when things began to go the other way, those same qualities were perceived as negative.

When an organization is doing well, we may find its people to be “idealistic” and “values driven,” but when things go sour, those same traits come to be seen as “impractical” and “arrogant.” Given the same set of facts, we can—and often do—come to very different conclusions when our perception of the outcomes changes.

In most cases, analysts don’t have a stake in the outcome. From their point of view, they probably see themselves as objectively analyzing facts and following them to their most logical conclusions. Yet when the purpose of an analysis shifts from telling a success story to telling a cautionary tale, their perception of events tends to change markedly.

Reassessing The Value Chain

For decades, the dominant view of business strategy was based on Michael Porter’s ideas about competitive advantage. In essence, he argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods.

Yet as AnnaLee Saxenian explained in Regional Advantage, around the same time that Porter’s ideas were ascending among CEOs in the establishment industries on the east coast, a very different way of doing business was gaining steam in Silicon Valley. The firms there saw themselves not as isolated fiefdoms, but as part of a larger ecosystem.

The two models are built on very different assumptions. The Porter model sees the world as made up of transactions: optimize your strategy to create efficiencies, derive the maximum value out of every transaction and you will build a sustainable competitive advantage. The Silicon Valley model, by contrast, saw the world as made up of connections, and firms there optimized their strategies to widen and deepen linkages.

Microsoft is one great example of this shift. When Linux first rose to prominence, Microsoft CEO Steve Ballmer called it a cancer. Yet more recently, its current CEO, Satya Nadella, announced that the company loves Linux. That didn’t happen out of any sort of newfound benevolence, but because the company recognized that it couldn’t continue to shut itself out and still be able to compete.

When you see the world as the “sum of all efficiencies,” the optimal strategy is to dominate. However, if you see the world as made up of the “sum of all connections,” the optimal strategy is to attract. You need to be careful to be seen as purposeful rather than predatory.

The Naïveté Of The “Realists”

Since at least the time of Richelieu, foreign policy theorists have been enthralled by the concept of Realpolitik, the notion that world affairs are governed by interests, not ideological, moral or ethical considerations. Much as with Porter’s “competitive advantage,” strategy is treated as a series of transactions rather than relationships.

Rational calculation of interests is one of those ideas that seems pragmatic on the surface, but is actually hopelessly academic and unworkable in the real world. How do you identify the “interests” you are supposed to be basing your decisions on if not by considering what you value? And how do you assess your values without taking into account your beliefs, morals and ethics?

To understand how such “realism” goes awry, consider the prominent political scientist John Mearsheimer. In March, he gave an interview to The New Yorker in which he argued that, by failing to recognize Russia’s role and interests as a great power, the US had erred greatly in its support of Ukraine.

Yet it is clear now that the Russians were the ones who erred. First, they failed to recognize that the world would see their purpose as immoral. Second, they failed to recognize how their aggression would empower Ukraine’s sense of nationhood. Third, they did not see how Europe would come to regard economic ties with Russia as being against its own interests.

Nothing you can derive from military or economic statistics will give you insight into human agency. Excel sheets may not be motivated by purpose, but people are.

Strategy Is Not A Game Of Chess

Antonio Damasio, a neuroscientist who researches decision making, became intrigued when one of his patients, a highly intelligent and professionally successful man named “Elliot,” suffered from a brain lesion that impaired his ability to experience emotion. It soon became clear that Elliot was unable to make decisions.

Elliot’s prefrontal cortex, which governs the executive function, was fully intact. His memory and ability to understand events were normal as well. He was, essentially, a completely rational being with normal cognitive function, but no emotions. The problem was that although Elliot could understand all the factors that would go into making a decision, he could not weigh them. Without emotions, all options were essentially the same.

In the real world, strategy is not a game of chess, in which we move inert pieces around a board. While we can make rational assessments about various courses of action, ultimately people have to care about the outcome. For a strategy to be meaningful, it needs to speak to people’s values, hopes, dreams and ambitions.

A leader’s role cannot be merely to plan and direct action, but must be to inspire and empower belief in a common endeavor. That’s what widens and deepens the meaningful connections that can enable genuine transformation.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
