Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much-Needed Skepticism to How We Treat AI Output?

Have We Made AI Interfaces Too Human?

GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course a part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also of course a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us, me included, who have interacted with AI at any length have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: And this humanizing of the user interface brings multiple benefits. It is of course a part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of the output. Furthermore, by creating an illusion of human-like intelligence, it implies emotions, compassion and critical thinking behind the output, even if they are not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few of the ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but we still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, and with the empathy, emotional intelligence and compassion that come with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or to add context in the way a human can.

The Uncanny Valley: That brings me to the Uncanny Valley. This describes the unease we feel when technology approaches, but doesn’t quite achieve, convincing human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously wrong, but often still ‘felt’ wrong. Or early drum machines were so metronomically perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and so had to be modified to include randomness that was below the threshold of conscious awareness, but made them ‘feel’ real.
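As a purely illustrative aside, that drum-machine ‘humanization’ trick is easy to sketch in a few lines of Python. This is a hypothetical toy (the function name and jitter values are my own assumptions, not taken from any actual product); it simply adds randomness small enough to be felt rather than consciously noticed:

```python
import random

def humanize(hits, timing_jitter_ms=8.0, velocity_jitter=0.05, seed=None):
    """Add sub-perceptual randomness to a perfectly quantized drum pattern.

    hits: list of (time_ms, velocity) pairs on a rigid grid.
    The jitter magnitudes are illustrative guesses at values small enough
    to be 'felt' rather than consciously heard.
    """
    rng = random.Random(seed)
    humanized = []
    for time_ms, velocity in hits:
        t = time_ms + rng.gauss(0, timing_jitter_ms)  # nudge each hit slightly off the grid
        v = min(1.0, max(0.0, velocity + rng.gauss(0, velocity_jitter)))  # vary loudness a touch
        humanized.append((t, v))
    return humanized

# A rigid four-on-the-floor kick pattern at 120 BPM (500 ms per beat)
pattern = [(i * 500.0, 0.9) for i in range(8)]
print(humanize(pattern, seed=42)[:3])
```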

This difference between conscious and unconscious evaluation creates cognitive dissonance that can result in content feeling odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’. For example, over-photoshopping in a print ad, or poor CGI. But be careful what you wish for. AI appears to have marched through the ‘uncanny valley’ to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This raises a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, or exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that used to play a key role in translating a search into a finished output can now be skipped. Synthesizing and processing output from an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough to even realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose they can cut and paste straight into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that also brings up another side effect of low engagement with output – confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our own beliefs or theories.

Question everything, and you do nothing! The obvious counter to this is surely ‘but isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and it is still capable of making really stupid mistakes. Okay, so are humans, but AI is still far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is that we let it become an oracle. We not only treat its output as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparent human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger, where instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has eroded our basic writing ability, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water in data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash


Back to Basics for Leaders and Managers

Back to Basics for Leaders and Managers

GUEST POST from Robyn Bolton

Imagine that you are the CEO working with your CHRO on a succession plan.  Both the CFO and COO are natural candidates, and both are, on paper, equally qualified and effective.

The CFO distinguishes herself by consistently working with colleagues to find creative solutions to business issues, even if it isn’t the optimal solution financially, and inspiring them with her vision of the future. She attracts top talent and builds strong relationships with investors who trust her strategic judgment. However, she sometimes struggles with day-to-day details and can be inconsistent in her communication with direct reports.

The COO inspires deep loyalty from his team through consistent execution and reliability. People turn down better offers to stay because they trust his systematic approach, flawless delivery, and deep commitment to developing people. However, his vision rarely extends beyond “do things better,” rigidly adhering to established processes and shutting down difficult conversations with peers when change is needed.

Who do you choose?

The COO feels like the safer bet, especially in uncertain times, given his track record of proven execution, loyal teams, and predictable results. The CFO, meanwhile, feels riskier because she’s brilliant but inconsistent, visionary but scattered.

It’s not an easy question to answer.

Most people default to “It depends.”

It doesn’t depend.

It doesn’t “depend,” because being CEO is a leadership role and only the CFO demonstrates leadership behaviors. The COO, on the other hand, is a fantastic manager, exactly the kind of person you want and need in the COO role. But he’s not the leader a company needs, no matter how stable or uncertain the environment.

Yet we all struggle with this choice because we’ve made “leadership” and “management” synonyms. Companies no longer have “senior management teams,” they have “senior/executive leadership teams.”  People moving from individual contributor roles into overseeing teams are trained in “people leadership,” not “team management” (even though the curriculum is still largely the same).

But leadership and management are two fundamentally different things.

Leader OR Manager?

There are lots of definitions of both leaders and managers, so let’s go back to the “original” distinction as defined by Warren Bennis in his 1989 classic On Becoming a Leader.

Leaders:
  • Do the right things
  • Challenge the status quo
  • Innovate
  • Develop
  • Focus on people
  • Rely on trust
  • Have a long-range perspective
  • Ask what and why
  • Have an eye on the horizon

Managers:
  • Do things right
  • Accept the status quo
  • Administer
  • Maintain
  • Focus on systems and structures
  • Rely on control
  • Have a short-range view
  • Ask how and when
  • Have an eye on the bottom line

In a nutshell: leaders inspire people to create change and pursue a vision while managers control systems to maintain operations and deliver results.

Leaders AND Managers!

Although the roles of leader and manager are different, that doesn’t mean a person can only ever fill one or the other. I’ve worked with dozens of people who are phenomenal managers AND leaders, and they are as inspiring as they are effective.

But not everyone can play both roles and it can be painful, even toxic, when we ask managers to take on leadership roles and vice versa. This is the problem with labeling everything outside of individual contributor roles as “leadership.”

When we designate something as a “people leadership” role and someone does an outstanding job of managing his team, we believe he’s a leader and promote him to a true leadership role (which rarely ends well).  Conversely, when we see someone displaying leadership qualities and promote her into “people leadership,” we may be shocked and disappointed when she struggles to manage as effortlessly as she inspires.

The Bottom Line

Leadership and Management aren’t the same thing, but they are both essential to an organization’s success. The key is putting the right people in the right roles and celebrating their unique capabilities and contributions.

Image credit: Unsplash


Sometimes Ancient Wisdom Needs to be Left Behind

Sometimes Ancient Wisdom Needs to be Left Behind

GUEST POST from Greg Satell

I recently visited Panama and learned the incredible story of how the indigenous Emberá people there helped to teach jungle survival skills to Apollo mission astronauts. It is a fascinating combination and contrast of ancient wisdom and modern technology, equipping the first men to go to the moon with insights from both realms.

Humans tend to have a natural reverence for old wisdom that is probably woven into our DNA. It stands to reason that people more willing to stick with the tried and true might have a survival advantage over those who were more reckless. Ideas that stand the test of time are, by definition, the ones that worked well enough to be passed on.

Paradoxically, to move forward we need to abandon old ideas. It was only by discarding ancient wisdoms that we were able to create the modern world. In much the same way, to move forward now we’ll need to debunk ideas that qualify as expertise today. As in most things, our past can help serve as a guide. Here are three old ideas we managed to transcend.

1. Euclid’s Geometry

The basic geometry we learn in grade school, also known as Euclidean geometry, is rooted in axioms observed from the physical world, such as the principle that two parallel lines never intersect. For thousands of years mathematicians built proofs based on those axioms to create new knowledge, such as how to calculate the height of an object. Without these insights, our ability to shape the physical world would be negligible.

In the 19th century, however, men like Gauss, Lobachevsky, Bolyai and Riemann started to build new forms of non-Euclidean geometry based on curved spaces. These were, of course, completely theoretical and of no use in daily life. The universe, as we experience it, doesn’t curve in any appreciable way, which is why police ask us to walk a straight line if they think we’ve been drinking.

But when Einstein started to think about how gravity functioned, he began to suspect that the universe did, in fact, curve over large distances. To make his theory of general relativity work he had to discard the old geometrical thinking and embrace new mathematical concepts. Without those critical tools, he would have been hopelessly stuck.

Much like the astronauts in the Apollo program, we now live in a strange mix of old and new. To travel to Panama, for example, I personally moved through linear space, and the old Euclidean axioms worked perfectly well. However, to navigate, I had to use GPS, which must take curved spacetime into account, using Einstein’s equations to correctly calculate distances between the GPS satellites and points on Earth.
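For the curious, the size of that correction can be estimated with a short back-of-the-envelope calculation. The sketch below uses standard textbook constants and an approximate GPS orbital radius (my own assumptions, not figures from this article); it arrives at the commonly cited result that a GPS satellite clock runs roughly 38 microseconds per day fast relative to a ground clock, an error that would otherwise accumulate into kilometers of positioning error every day:

```python
# Rough estimate of the relativistic clock correction GPS must apply.
# Constants and orbit values are approximate, standard-textbook figures.
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth radius, m
r_orbit = 2.657e7    # GPS orbital radius (~20,200 km altitude), m
v_orbit = (GM / r_orbit) ** 0.5   # orbital speed, roughly 3.9 km/s

seconds_per_day = 86400

# General relativity: weaker gravity in orbit makes the satellite clock run faster
gravitational = GM / c**2 * (1 / R_earth - 1 / r_orbit)
# Special relativity: orbital speed makes the satellite clock run slower
velocity = v_orbit**2 / (2 * c**2)

net_us_per_day = (gravitational - velocity) * seconds_per_day * 1e6
print(f"Gravitational: +{gravitational * seconds_per_day * 1e6:.1f} microseconds/day")
print(f"Velocity:      -{velocity * seconds_per_day * 1e6:.1f} microseconds/day")
print(f"Net drift:     +{net_us_per_day:.1f} microseconds/day")  # roughly +38 us/day
```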

2. Aristotle’s Logic

In terms of longevity and impact, only Aristotle’s logic rivals Euclid’s geometry. At the core of Aristotle’s system is the syllogism, which is made up of propositions that consist of two terms (a subject and a predicate). If the propositions in the syllogism are true, then the argument has to be true. This basic notion that conclusions follow premises imbues logical statements with a mathematical rigor.

Yet much like with geometry, scholars began to suspect that there might be something amiss. At first, they noticed minor flaws that had to do with a strange paradox in set theory, which arose with sets that are members of themselves. For example, if the barber shaves everyone in town who doesn’t shave themselves, then who shaves the barber?

At first, these seemed like strange anomalies, minor exceptions to rules that could be easily explained away. Still, the more scholars tried to close the gaps, the more problems appeared, leading to a foundational crisis. It would only be resolved when a young logician named Kurt Gödel published his theorems that proved logic, at least as we knew it, is hopelessly broken.

In a strange twist, another young mathematician, Alan Turing, built on Gödel’s work to create an imaginary machine that would make digital computers possible. In other words, for Silicon Valley engineers to write code that creates logical worlds online, they need to use machines built on the premise that perfectly logical systems are inherently unworkable.

Of course, as I write this, I am straddling both universes, trying to build logical sentences on those very same machines.

3. The Miasma Theory of Disease

Before the germ theory of disease took hold in medicine, the miasma theory, the notion that bad air caused disease, was predominant. Again, from a practical perspective this made perfect sense. Harmful pathogens tend to thrive in environments with decaying organic matter that gives off bad smells. So avoiding those areas would promote better health.

Once again, this basic paradigm would begin to break down with a series of incidents. First, a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, which suggested that something besides air carried disease. Later John Snow was able to trace the source of a cholera epidemic to a single water pump.

Perhaps not surprisingly, these findings were initially explained away. Semmelweis failed to format his data properly and was a less than effective advocate for his work. John Snow’s work was statistical, based on correlation rather than causality. A prominent statistician, William Farr, who supported the miasma theory, argued for an alternative explanation.

Still, as doubts grew, more scientists looked for answers. The work of Robert Koch, Joseph Lister and Louis Pasteur led to the germ theory. Later, Alexander Fleming, Howard Florey and Ernst Chain would pioneer the development of antibiotics in the 1940s. That would open the floodgates and money poured into research, creating modern medicine.

Today, we have gone far beyond the germ theory of disease and even lay people understand that disease has myriad causes, including bacteria, viruses and other pathogens, as well as genetic diseases and those caused by strange misfolded proteins known as prions.

To Create The Future, We Need To Break Free Of The Past

If you were a person of sophistication and education in the 19th century, your world view was based on certain axiomatic truths, such as parallel lines never cross, logical propositions are either true or false and “bad airs” made people sick. For the most part, these ideas would have served you well for the challenges you faced in daily life.

Even more importantly, your understanding of these concepts would signal your inclusion and acceptance into a particular tribe, which would confer prestige and status. If you were an architect or engineer, you needed to understand Euclid’s geometric axioms. Aristotle’s rules of logic were essential to every educated profession. Medical doctors were expected to master the nuances of the miasma theory.

To stray from established orthodoxies carries great risk, even now. It is no accident that those who were able to bring about new paradigms, such as Einstein, Turing and John Snow, came from outside the establishment. More recently, people like Benoit Mandelbrot, Jim Allison and Katalin Karikó had to overcome fierce resistance to bring new ways of thinking to finance, cancer immunotherapy and mRNA vaccines respectively.

Today, it’s becoming increasingly clear we need to break with the past. In just over a decade, we’ve been through a crippling financial crisis, a global pandemic, deadly terrorist attacks, and the biggest conflict in Europe since World War II. We need to confront climate change and a growing mental health crisis. Yet it is also clear that we can’t just raze the global order to the ground and start all over again.

So what do we leave in the past and what do we bring with us into the future? Which new lessons do we need to learn and which old ones do we need to unlearn? Perhaps most importantly, what do we need to create anew and what can we rediscover in the ancient?

Throughout history, we have learned that the answer lies not in merely speculating about ideas, but in finding real solutions to problems we face.

— Article courtesy of the Digital Tonto blog
— Image credit: 1 of 950+ FREE quote slides from http://misterinnovation.com







Metaphysics Philosophy

Metaphysics Philosophy

GUEST POST from Geoffrey A. Moore

Philosophy is arguably the most universal of all subjects. And yet, it is one of the least pursued in the liberal arts curriculum. The reason for this, I will claim, is that the entire field was kidnapped by some misguided academics around a century ago, and since then no one has paid the ransom to free it. That’s not OK, and with that in mind, here is a series of four blogs that taken together constitute an Emancipation Proclamation.

There are four branches of philosophy, and in order of importance they are:

  1. metaphysics,
  2. ethics,
  3. epistemology, and
  4. logic.

This post will address the first of these four, with subsequent posts addressing the remaining three.

Metaphysics is best understood in terms of Merriam-Webster’s definition: “the philosophical study of the ultimate causes and underlying nature of things.” In everyday language, it answers the most fundamental kinds of philosophical questions:

  • What’s happening?
  • What is going on?
  • Where and how do we fit in?
  • In other words, what kind of a hand have we been dealt?

Metaphysics, however, is not normally conceived in everyday terms. Here is what the Oxford English Dictionary (OED) has to say about it in its lead definition:

That branch of speculative inquiry which treats of the first principles of things, including such concepts as being, substance, essence, time, space, cause, identity, etc.; theoretical philosophy as the ultimate science of Being and Knowing.

The problem is that concepts like substance and essence are not only intimidatingly abstract, they have no meaning in modern cosmology. That is, they are artifacts of an earlier era when things like the atomic nature of matter and the electromagnetic nature of form were simply not understood. Today, they are just verbiage.

But wait, things get worse. Here is the OED in its third sense of the word:

[Used by some followers of positivist, linguistic, or logical philosophy] Concepts of an abstract or speculative nature which are not verifiable by logical or linguistic methods.

The Oxford Companion to the Mind sheds further light on this:

The pejorative sense of ‘obscure’ and ‘over-speculative’ is recent, especially following attempts by A.J. Ayer and others to show that metaphysics is strictly nonsense.

Now, it’s not hard to understand what Ayer and others were trying to get at, but do we really want to say that the philosophical study of the ultimate causes and underlying nature of things is strictly nonsense? Instead, let’s just say that there is a bunch of unsubstantiated nonsense that calls itself metaphysics but that isn’t really metaphysics at all. We can park that stuff with magic crystals and angels on the head of a pin and get back to what real metaphysics needs to address—what exactly is the universe, what is life, what is consciousness, and how do they all work together?

The best platform for so doing, in my view, is the work done in recent decades on complexity and emergence, and that is what organizes the first two-thirds of The Infinite Staircase. Metaphysics, it turns out, needs to be understood in terms of strata, and then within those strata, levels or stair steps. The three strata that make the most sense of things are as follows:

  1. Material reality as described by the sciences of physics, chemistry, and biology, or what I called the metaphysics of entropy. This explains all emergence up to the entrance of consciousness.
  2. Psychological and social reality, as explained by the social sciences, or what I called the metaphysics of Darwinism, which builds the transition from a world of mindless matter up to one of matter-less mind, covering the intermediating emergence of desire, consciousness, values, and culture.
  3. Symbolic reality, as explained by the humanities, or what I called the metaphysics of memes, which begins with the introduction of language that in turn enables the emergence of humanity’s two most powerful problem-solving tools, narrative and analytics, culminating in the emergence of theory, ideally a theory of everything, which is, after all, what metaphysics promised to be in the first place.

The key point here is that every step in this metaphysical journey is grounded in verifiable scholarship ranging over multiple centuries and involving every department in a liberal arts faculty—except, ironically, the philosophy department which is holed up somewhere on campus, held hostage by forces to be discussed in later blogs.

That’s what I think. What do you think?

Image Credit: Unsplash







The Most Challenging Obstacles to Achieving Artificial General Intelligence

The Unclimbed Peaks

The Most Challenging Obstacles to Achieving Artificial General Intelligence

GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  • 1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  • 2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  • 3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
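To give a flavor of the general idea (this is my own toy sketch of cue-phrase matching, not Causaly’s actual model or pipeline), even a crude pattern matcher can begin to separate causal language from mere co-mention:

```python
import re

# Toy cue phrases that signal a causal (or merely associative) relationship.
# Real systems use trained models over full sentences, not keyword lists like this.
CAUSAL_CUES = ["induces", "results in", "causes", "inhibits", "leads to"]
ASSOCIATIVE_CUES = ["is associated with", "correlates with", "is linked to"]

def classify_relation(sentence):
    """Return ('causal' | 'associative' | None, matched cue) for one sentence."""
    lowered = sentence.lower()
    for cue in CAUSAL_CUES:
        if re.search(r"\b" + re.escape(cue) + r"\b", lowered):
            return "causal", cue
    for cue in ASSOCIATIVE_CUES:
        if cue in lowered:
            return "associative", cue
    return None, None

sentences = [
    "Drug X is associated with improved outcomes in Disease Y.",
    "Drug X induces apoptosis in Disease Y cell lines.",
]
for s in sentences:
    print(classify_relation(s), "->", s)
```

Real systems learn these patterns statistically and resolve the entities involved, but the underlying shift is the same: from counting co-occurrences to extracting directional, cause-and-effect claims.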

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
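At the risk of oversimplifying, the flow of that feedback loop can be sketched as follows. This is a hypothetical toy illustration with made-up function names, not OpenAI’s actual training code; in real RLHF the reward model is a trained network and the policy update is a gradient step (e.g., PPO), but the signal path is the same: human preferences shape a reward, and the reward shapes the model’s outputs.

```python
# Toy illustration of the RLHF signal path: humans rank candidate responses,
# a reward model scores responses the way humans ranked them, and the policy
# (the LLM) is steered toward high-reward responses. All names are hypothetical.

def human_rank(prompt, responses):
    """Stand-in for the labeling step: pretend a human ranked the responses.

    Here the 'human' simply prefers responses flagged as safe; in reality,
    labelers compare full responses for helpfulness, honesty and harmlessness.
    """
    return sorted(responses, key=lambda r: r["safe"], reverse=True)

def reward_model(response, ranked_examples):
    """Stand-in reward model: score a response by how highly humans ranked it."""
    rank = ranked_examples.index(response)
    return float(len(ranked_examples) - rank)

def rlhf_step(prompt, candidates):
    ranked = human_rank(prompt, candidates)
    # In real RLHF this would be a gradient update to the model's weights;
    # here we just select the highest-reward candidate to show the signal flow.
    return max(candidates, key=lambda c: reward_model(c, ranked))

candidates = [
    {"text": "Sure, a toaster bath saves time.", "safe": False},
    {"text": "No. Never use electrical appliances near water.", "safe": True},
]
print(rlhf_step("Is it safe to bathe with a toaster?", candidates)["text"])
```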

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E







Growth is Not the Answer

Growth is Not the Answer

GUEST POST from Mike Shipulski

Most companies have growth objectives – make more, sell more and generate more profits. Increase profit margin, sell into new markets and twist our products into new revenue. Good news for the stock price, good news for annual raises and plenty of money to buy the things that will help us grow next year. But it’s not good for the people that do the work.

To increase sales the same sales folks will have to drive more, call more and do more demos. Ten percent more work for three percent more compensation. Who really benefits here? The worker who delivers ten percent more or the company that pays them only three percent more? Pretty clear to me it’s all about the company and not about the people.

To increase the number of units made implies that there can be no increase in the number of people required to make them. To increase throughput without increasing headcount, the production floor will have less time for lunch, less time for improving their skills and less time to go to the bathroom. Sure, they can do Lean projects to eliminate waste, as long as they don’t miss their daily quota. And sure, they can help with Six Sigma projects to reduce variation, as long as they don’t miss TAKT time. Who benefits more – the people or the company?

Increased profit margin (or profit percentage) is the worst offender. There are only two ways to improve the metric – sell it for more or make it for less. And even better than that is to sell it for more AND make it for less. No one can escape this metric. The sales team must meet with more customers; the marketing team must work doubly hard to define and communicate the value proposition; the engineering staff must reduce the time to launch the product and make it perform better than their best work; and everyone else must do more with less or face the chopping block.

In truth, corporate growth is a fundamental driver behind global warming, reduced life expectancy in the US and the ridiculous increase in the cost of healthcare. Growth requires more products, and more products require more material mined, pumped or clear-cut from the planet. Growth puts immense pressure on the people doing the work and increases their stress level. And when they can’t deliver, a deep sense of helplessness and inadequacy can drive some to take their own lives. And healthcare costs increase because the companies within (and insuring) the system need to make more profit. Who benefits here? The people in our community? The people doing the work? The planet? Or the companies?

What if we decided that companies could not grow? What if instead companies paid dividends to the people who do the work, based on the profit the company makes? With constant output, wouldn’t everyone benefit year-on-year?

What if we decided output couldn’t grow? What if instead, as productivity increased, companies required people to work fewer hours? What if everyone could make the same number of products in seven hours and went home an hour early, working seven and getting paid for eight? Would everyone be better off? Wouldn’t the planet be better off?

What if we decided the objective of companies was to employ more people and give them a sense of purpose and give meaning to their lives? What if we used the profit created by productivity improvements to employ more people? Wouldn’t our communities benefit when more people have good jobs? Wouldn’t people be happier because they can make a contribution to their community? Wouldn’t there be less stress and fewer suicides when parents have enough money to feed their kids and buy them clothes? Wouldn’t everyone benefit? Wouldn’t the planet benefit?

Year-on-year growth is a fallacy. Year-on-year growth stresses the planet and the people doing the work. Year-on-year growth is good for no one except the companies demanding year-on-year growth.

The planet’s resources are finite; people’s ability to do work is finite; and the stress level people can tolerate is finite. Why not recognize these realities?

And why not figure out how to structure companies in a way that benefits the owners of the company, the people doing the work, the community where the work is done and the planet?

Image credit: Dall-E







The Tricky Business of Tariffs

The Tricky Business of Tariffs

GUEST POST from Shep Hyken

Tariffs are creating havoc and panic for both customers and businesses. Depending on what you read, the cost-of-living increase for the average consumer can be thousands of dollars a year. And it’s the same for business, but often at a much higher cost. Anything a business purchases to run its day-to-day operations is potentially exposed to higher prices due to tariffs. Whatever businesses buy—supplies, inventory, equipment and more—when it costs them more, that cost is passed on to their customers.

This isn’t the first time there has been “tariff panic.” As recently as 2018, there were tariffs. I wrote a Forbes article about an e-bike company that was forced to raise its prices due to a 25% import tariff. The company was open about the reasons for the price increase and embraced the problem rather than becoming a victim of it. Here are some ways to manage the impact of tariffs:

  • Be Transparent: Everyone may know about the tariffs, but explaining how they are impacting costs will help justify the price increase. In other words, don’t hide the fact that tariffs are impacting your costs.
  • Partner with Vendors: Ask vendors to work with you on a solution to lower costs that won’t hit their bottom lines. If you buy from a vendor every month, maybe it’s less expensive to buy the same amount but ship quarterly instead of monthly. Work with them to find creative ways to reduce costs. This can benefit everyone.
  • Improve Efficiency to Offset Costs: If you’ve thought about a way to improve a process or efficiency but haven’t acted on it, now may be the perfect time to do so. Sometimes being forced to do something can work in your favor. And be sure to share what you’re changing to help reduce costs. Customers may appreciate you even more.
  • Add Value Instead of Just Raising Prices: When price increases are unavoidable, find a way to justify the higher cost. It could include anything—enhanced customer service, a loyalty rewards program, a special promotion and more. Customers may accept paying more if they feel they are getting more value in return.

What NOT to do:

  • Don’t Take Advantage of Customer Panic: As I write this article, people are going to car dealerships to buy cars before the prices increase and finding that the dealers are selling above the retail sticker price because of the demand. Do you think a customer will forget they were “gouged” by a company taking advantage of them during tough times? (That’s a rhetorical question, but just in case you don’t know the answer … They won’t!)
  • Don’t Say, “It’s Not my Fault”: Even when price increases are beyond your control, don’t be defensive. This can give the impression of a lack of confidence and lack of control that can erode the trust you have with your customers.
  • Don’t Say, “It’s the Same Everywhere You Go”: If the customer understands tariffs, they already know this. Stating you have no choice isn’t going to make the customer feel good. Go back to the list of what you can do and find a way to avoid this and the “it’s not my fault” response.

Customers want to hear what you’re doing to help them. They also like to be educated. Knowledge can give the customer a sense of control. Demonstrating genuine concern for the situation and sharing what you’re doing to minimize the impact of tariff-related price increases builds trust that will pay dividends long after the current economic challenges have passed.

Image Credits: Unsplash, Shep Hyken

This article originally appeared on Forbes.com







The Crisis Innovation Trap

Why Proactive Innovation Wins

LAST UPDATED: September 3, 2025 at 12:00PM
The Crisis Innovation Trap

by Braden Kelley and Art Inteligencia

In the narrative of business, we often romanticize the idea of “crisis innovation.” The sudden, high-stakes moment when a company, backed against a wall, unleashes a burst of creativity to survive. The pandemic, for instance, forced countless businesses to pivot their models overnight. While this showcases incredible human resilience, it also reveals a dangerous and costly trap: the belief that innovation is something you turn on only when there’s an emergency. As a human-centered change and innovation thought leader, I’ve seen firsthand that relying on crisis as a catalyst is a recipe for short-term fixes and long-term decline. True, sustainable innovation is not a reaction; it’s a proactive, continuous discipline.

The problem with waiting for a crisis is that by the time it hits, you’re operating from a position of weakness. You’re making decisions under immense pressure, with limited resources, and with a narrow focus on survival. This reactive approach rarely leads to truly transformative breakthroughs. Instead, it produces incremental changes and tactical adaptations—often at a steep price in terms of burnout, strategic coherence, and missed opportunities. The most successful organizations don’t innovate to escape a crisis; they innovate continuously to prevent one from ever happening.

The Cost of Crisis-Driven Innovation

Relying on crisis as your innovation driver comes with significant hidden costs:

  • Reactive vs. Strategic: Crisis innovation is inherently reactive. You’re fixing a symptom, not addressing the root cause. This prevents you from engaging in the deep, strategic thinking necessary for true market disruption.
  • Loss of Foresight: When you’re in a crisis, all attention is on the immediate threat. This short-term focus blinds you to emerging trends, shifting customer needs, and new market opportunities that could have been identified and acted upon proactively.
  • Burnout and Exhaustion: Innovation requires creative energy. Forcing your teams into a constant state of emergency to innovate leads to rapid burnout, high turnover, and a culture of fear, not creativity.
  • Suboptimal Outcomes: The solutions developed in a crisis are often rushed, inadequately tested, and sub-optimized. They are designed to solve an immediate problem, not to create a lasting competitive advantage.

“Crisis innovation is a sprint for survival. Proactive innovation is a marathon for market leadership. You can’t win a marathon by only practicing sprints when the gun goes off.”

Building a Culture of Proactive, Human-Centered Innovation

The alternative to the crisis innovation trap is to embed innovation into your organization’s DNA. This means creating a culture where curiosity, experimentation, and a deep understanding of human needs are constant, not sporadic. It’s about empowering your people to solve problems and create value every single day.

  1. Embrace Psychological Safety: Create an environment where employees feel safe to share half-formed ideas, question assumptions, and even fail. This is the single most important ingredient for continuous innovation.
  2. Allocate Dedicated Resources: Don’t expect innovation to happen in people’s spare time. Set aside dedicated time, budget, and talent for exploratory projects and initiatives that don’t have an immediate ROI.
  3. Focus on Human-Centered Design: Continuously engage with your customers and employees to understand their frustrations and aspirations. True innovation comes from solving real human problems, not just from internal brainstorming.
  4. Reward Curiosity, Not Just Results: Celebrate learning, even from failures. Recognize teams for their efforts in exploring new ideas and for the insights they gain, not just for the products they successfully launch.

Case Study 1: Blockbuster vs. Netflix – The Foresight Gap

The Challenge:

In the late 1990s, Blockbuster was the undisputed king of home video rentals. It had a massive physical footprint, brand recognition, and a highly profitable business model based on late fees. The crisis of digital disruption and streaming was not a sudden event; it was a slow-moving signal on the horizon.

The Reactive Approach (Blockbuster):

Blockbuster’s management was aware of the shift to digital, but they largely viewed it as a distant threat. They were so profitable from their existing model that they had no incentive to proactively innovate. When Netflix began gaining traction with its subscription-based, DVD-by-mail service, Blockbuster’s response was a reactive, half-hearted attempt to mimic it. They launched an online service but failed to integrate it with their core business, and their culture remained focused on the physical store model. They only truly panicked and began a desperate, large-scale innovation effort when it was already too late and the market had irreversibly shifted to streaming.

The Result:

Blockbuster’s crisis-driven innovation was a spectacular failure. By the time they were forced to act, they lacked the necessary strategic coherence, internal alignment, and cultural agility to compete. They didn’t innovate to get ahead; they innovated to survive, and they failed. They went from market leader to bankruptcy, a powerful lesson in the dangers of waiting for a crisis to force your hand.


Case Study 2: Lego’s Near-Death and Subsequent Reinvention

The Challenge:

In the early 2000s, Lego was on the brink of bankruptcy. The brand, once a global icon, had become a sprawling, unfocused company that was losing relevance with children increasingly drawn to video games and digital entertainment. The company’s crisis was not a sudden external shock, but a slow, painful internal decline caused by a lack of proactive innovation and a departure from its core values. They had innovated, but in a scattered, unfocused way that diluted the brand.

The Proactive Turnaround (Lego):

Lego’s new leadership realized that a reactive, last-ditch effort wouldn’t save them. They saw the crisis as a wake-up call to fundamentally reinvent how they innovate. Their strategy was not just to survive but to thrive by returning to a proactive, human-centered approach. They went back to their core product, the simple plastic brick, and focused on deeply understanding what their customers—both children and adult fans—wanted. They launched several initiatives:

  • Re-focus on the Core: They trimmed down their product lines and doubled down on what made Lego special—creativity and building.
  • Embracing the Community: They proactively engaged with their most passionate fans, the “AFOLs” (Adult Fans of Lego), and co-created new products like the highly successful Lego Architecture and Ideas series. This wasn’t a reaction to a trend; it was a strategic partnership.
  • Thoughtful Digital Integration: Instead of panicking and launching a thousand digital products, they carefully integrated their physical and digital worlds with games like Lego Star Wars and movies like The Lego Movie. These weren’t rushed reactions; they were part of a long-term, strategic vision.

The Result:

Lego’s transformation from a company on the brink to a global powerhouse is a powerful example of the superiority of proactive innovation. By not just reacting to their crisis but using it as a catalyst to build a continuous, human-centered innovation engine, they not only survived but flourished. They turned a painful crisis into a foundation for a new era of growth, proving that the best time to innovate is always, not just when you have no other choice.


Eight I's of Infinite Innovation

The Eight I’s of Infinite Innovation

Braden Kelley’s Eight I’s of Infinite Innovation provides a comprehensive framework for organizations seeking to embed continuous innovation into their DNA. The model starts with Ideation, the spark of new concepts, which must be followed by Inspiration—connecting those ideas to a compelling, human-centered vision. This vision is refined through Investigation, a process of deeply understanding customer needs and market dynamics, leading to the Iteration of prototypes and solutions based on real-world feedback. The framework then moves from development to delivery with Implementation, the critical step of bringing a viable product to market. This is not the end, however; it’s a feedback loop that requires Invention of new business models, a constant process of Improvement based on outcomes, and finally, the cultivation of an Innovation culture where the cycle can repeat infinitely. Each ‘I’ builds upon the last, creating a holistic and sustainable engine for growth.

Conclusion: The Time to Innovate is Now

The notion of “crisis innovation” is seductive because it offers a heroic narrative. But behind every such story is a cautionary tale of a company that let a problem fester for far too long. The most enduring, profitable, and relevant organizations don’t wait for a burning platform to jump; they are constantly building new platforms. They have embedded a culture of continuous, proactive innovation driven by a deep understanding of human needs. They innovate when times are good so they are prepared when times are tough.

The time to innovate is not when your stock price plummets or your competitor launches a new product. The time to innovate is now, and always. By making innovation a fundamental part of your business, you ensure your organization’s longevity and its ability to not just survive the future, but to shape it.

Image credit: Pixabay

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.







McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI

McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI

GUEST POST from Robyn Bolton

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But, for people working on pilots,  it might as well be the Pit of Despair because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and warrant allocating resources to) in late 2022/early 2023.  That means that we’re expecting ROI within 2 years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes.  Cloud computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be considered in the context of their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than on which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach of deploying “vertical” function-specific tools doesn’t fare much better.  In fact, less than 10% make it past the pilot stage, despite having higher potential for economic impact.

Better results require reinvention, not optimization: McKinsey found that call centers giving agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails saw only a 5-10% reduction in call time.  Centers using AI tools to automate tasks without agent initiation reduced call time by 20-40%.

Centers reinventing processes around AI agents? 60-90% reduction in call time, with 80% automatically resolved.
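To make those ranges concrete, here is a minimal, purely illustrative sketch in Python that applies the reductions quoted above to a hypothetical call center. The eight-minute baseline handle time and the 10,000-call monthly volume are assumptions chosen for illustration, not figures from the McKinsey research.

```python
# Illustrative only: apply the call-time reduction ranges quoted above to a
# hypothetical call center. The 8-minute baseline handle time and 10,000
# monthly calls are assumptions, not figures from the McKinsey research.
BASELINE_MINUTES = 8.0
MONTHLY_CALLS = 10_000

approaches = {
    "Passive AI tools (agent-initiated lookup, summaries, drafts)": (0.05, 0.10),
    "AI automation without agent initiation": (0.20, 0.40),
    "Processes reinvented around AI agents": (0.60, 0.90),
}

for name, (low, high) in approaches.items():
    # Agent-hours saved per month at the low and high end of each range
    saved_low = BASELINE_MINUTES * low * MONTHLY_CALLS / 60
    saved_high = BASELINE_MINUTES * high * MONTHLY_CALLS / 60
    print(f"{name}: {saved_low:,.0f}-{saved_high:,.0f} agent-hours saved per month")
```

Even under these assumed volumes, the gap between bolting tools onto an existing process and rebuilding the process around them is roughly an order of magnitude, which is the point about reinvention versus optimization.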

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

Image credit: Microsoft Copilot


Strategy Lacking Purpose Will Always Fail

Strategy Lacking Purpose Will Always Fail

GUEST POST from Greg Satell

In 1989, just before the fall of the Berlin Wall, Francis Fukuyama published an essay in the journal The National Interest titled The End of History?, which led to a bestselling book. Many took his argument to mean that, with the defeat of communism, US-style liberal democracy had emerged as the only viable way of organizing a society.

He was misunderstood. His actual argument was far more nuanced and insightful. After explaining the arguments of philosophers like Hegel and Kojeve, Fukuyama pointed out that even if we had reached an endpoint in the debate about ideologies, there would still be conflict because of people’s need to express their identity.

We usually think of strategy as a rational, analytic activity, with teams of MBAs poring over spreadsheets or generals standing before maps. Yet if we fail to take into account human agency and dignity, we’re missing the boat. Strategy without purpose is doomed to fail, however clever the calculations. Leaders need to take note of that basic reality.

Taking Stock Of The Halo Effect

Business case studies are written by experienced professionals who are trained to analyze past situations from multiple perspectives. However, their ability to do that successfully is greatly limited by the fact that they already know the outcome of the situation they are studying. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture with people who worked long hours but loved every minute of it. When the market tanked, however, all of a sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, seeing Cisco’s success, analysts looked at particular qualities in a positive light. However, when things began to go the other way, those same qualities were perceived as negative.

When an organization is doing well, we may find its people to be “idealistic” and “values driven,” but when things go sour, those same traits come to be seen as “impractical” and “arrogant.” Given the same set of facts, we can—and often do—come to very different conclusions when our perception of the outcomes changes.

In most cases, analysts don’t have a stake in the outcome. From their point of view, they probably see themselves as objectively analyzing facts and following them to their most logical outcomes. Yet when the purpose of an analysis changes from telling a success story to recounting a cautionary tale, their perception of events tends to change markedly.

Reassessing The Value Chain

For decades, the dominant view of business strategy was based on Michael Porter’s ideas about competitive advantage. In essence, he argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods.

Yet as AnnaLee Saxenian explained in Regional Advantage, around the same time that Porter’s ideas were ascending among CEOs in the establishment industries on the east coast, a very different way of doing business was gaining steam in Silicon Valley. The firms there saw themselves not as isolated fiefdoms, but as part of a larger ecosystem.

The two models are built on very different assumptions. The Porter model sees the world as made up of transactions. Optimize your strategy to create efficiencies, derive the maximum value out of every transaction and you will build a sustainable competitive advantage. The Silicon Valley model, however, saw the world as made up of connections, and its firms optimized their strategies to widen and deepen linkages.

Microsoft is one great example of this shift. When Linux first rose to prominence, Microsoft CEO Steve Ballmer called it a cancer. Yet more recently, its current CEO announced that the company loves Linux. That didn’t happen out of any sort of newfound benevolence, but because it recognized that it couldn’t continue to shut itself out and still be able to compete.

When you see the world as the “sum of all efficiencies,” the optimal strategy is to dominate. However, if you see the world as made up of the “sum of all connections,” the optimal strategy is to attract. You need to be careful to be seen as purposeful rather than predatory.

The Naïveté Of The “Realists”

Since at least the times of Richelieu, foreign policy theorists have been enthralled by the concept of Realpolitik, the notion that world affairs are governed by interests, not ideological, moral or ethical considerations. Much like with Porter’s “competitive advantage,” strategy is treated as a series of transactions rather than relationships.

Rational calculation of interests is one of those ideas that seems pragmatic on the surface, but is actually hopelessly academic and unworkable in the real world. How do you identify the “interests” you are supposed to be basing your decisions on if not by considering what you value? And how do you assess your values without taking into account your beliefs, morals and ethics?

To understand how such “realism” goes awry, consider the prominent political scientist John Mearsheimer. In March, he gave an interview to The New Yorker in which he argued that, by failing to recognize Russia’s role and interests as a great power, the US had erred greatly in its support of Ukraine.

Yet it is clear now that the Russians were the ones who erred. First, they failed to recognize that the world would see their purpose as immoral. Second, they failed to recognize how their aggression would empower Ukraine’s sense of nationhood. Third, they did not see how Europe would come to regard economic ties with Russia to be against their interests.

Nothing you can derive from military or economic statistics will give you insight into human agency. Excel sheets may not be motivated by purpose, but people are.

Strategy Is Not A Game Of Chess

Antonio Damasio, a neuroscientist who researches decision making, became intrigued when one of his patients, a highly intelligent and professionally successful man named “Elliot,” suffered from a brain lesion that impaired his ability to experience emotion. It soon became clear that Elliot was unable to make decisions.

Elliot’s prefrontal cortex, which governs the executive function, was fully intact. His memory and ability to understand events were normal as well. He was, essentially, a completely rational being with normal cognitive function, but no emotions. The problem was that although Elliot could understand all the factors that would go into making a decision, he could not weigh them. Without emotions, all options were essentially the same.

In the real world, strategy is not a game of chess, in which we move inert pieces around a board. While we can make rational assessments about various courses of action, ultimately people have to care about the outcome. For a strategy to be meaningful, it needs to speak to people’s values, hopes, dreams and ambitions.

A leader’s role cannot be merely to plan and direct action, but must be to inspire and empower belief in a common endeavor. That’s what widens and deepens the meaningful connections that can enable genuine transformation.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
