Sometimes Ancient Wisdom Needs to be Left Behind

GUEST POST from Greg Satell

I recently visited Panama and learned the incredible story of how the indigenous Emberá people there helped to teach jungle survival skills to Apollo mission astronauts. It is a fascinating combination and contrast of ancient wisdom and modern technology, equipping the first men to go to the moon with insights from both realms.

Humans tend to have a natural reverence for old wisdom that is probably woven into our DNA. It stands to reason that people more willing to stick with the tried and true might have a survival advantage over those who were more reckless. Ideas that stand the test of time are, by definition, the ones that worked well enough to be passed on.

Paradoxically, to move forward we need to abandon old ideas. It was only by discarding ancient wisdom that we were able to create the modern world. In much the same way, to move forward now we’ll need to debunk ideas that qualify as expertise today. As in most things, our past can serve as a guide. Here are three old ideas we managed to transcend.

1. Euclid’s Geometry

The basic geometry we learn in grade school, also known as Euclidean geometry, is rooted in axioms observed from the physical world, such as the principle that two parallel lines never intersect. For thousands of years mathematicians built proofs based on those axioms to create new knowledge, such as how to calculate the height of an object. Without these insights, our ability to shape the physical world would be negligible.

In the 19th century, however, men like Gauss, Lobachevsky, Bolyai and Riemann started to build new forms of non-Euclidean geometry based on curved spaces. These were, of course, completely theoretical and of no use in daily life. The universe, as we experience it, doesn’t curve in any appreciable way, which is why police ask us to walk a straight line if they think we’ve been drinking.

But when Einstein started to think about how gravity functioned, he began to suspect that the universe did, in fact, curve over large distances. To make his theory of general relativity work he had to discard the old geometrical thinking and embrace new mathematical concepts. Without those critical tools, he would have been hopelessly stuck.

Much like the astronauts in the Apollo program, we now live in a strange mix of old and new. To travel to Panama, for example, I personally moved through linear space and the old Euclidean axioms worked perfectly well. However, to navigate, I had to use GPS, which must take curved spacetime into account, using Einstein’s equations to correctly calculate distances between the GPS satellites and points on Earth.
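To get a feel for why that correction matters, here is a rough back-of-the-envelope calculation using standard textbook figures (nothing specific to the GPS system's actual engineering): the weaker gravity at orbital altitude makes the satellite clocks run fast by roughly 45 microseconds per day, while their orbital speed slows them by roughly 7 microseconds per day.

\[
\Delta t_{\text{net}} \approx +45\,\mu\text{s/day}\ (\text{general relativity}) - 7\,\mu\text{s/day}\ (\text{special relativity}) \approx +38\,\mu\text{s/day}
\]
\[
\text{uncorrected position error} \approx c \cdot \Delta t_{\text{net}} \approx (3\times 10^{8}\,\text{m/s}) \times (38\times 10^{-6}\,\text{s}) \approx 11\ \text{km per day}
\]

Left uncorrected, a drift of that size would make satellite navigation useless within a day, which is why Einstein's once "purely theoretical" geometry is baked into every turn-by-turn direction.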

2. Aristotle’s Logic

In terms of longevity and impact, only Aristotle’s logic rivals Euclid’s geometry. At the core of Aristotle’s system is the syllogism, which is made up of propositions that consist of two terms (a subject and a predicate). If the premises of the syllogism are true, then the conclusion has to be true. This basic notion that conclusions follow premises imbues logical statements with a mathematical rigor.

Yet much like with geometry, scholars began to suspect that there might be something amiss. At first, they noticed minor flaws that had to do with a strange paradox in set theory which arose with sets that are members of themselves. For example, if the barber shaves everyone in town who doesn’t shave himself, then who shaves the barber?

At first, these seemed like strange anomalies, minor exceptions to rules that could be easily explained away. Still, the more scholars tried to close the gaps, the more problems appeared, leading to a foundational crisis. It would only be resolved when a young logician named Kurt Gödel published his theorems that proved logic, at least as we knew it, is hopelessly broken.

In a strange twist, another young mathematician, Alan Turing, built on Gödel’s work to create an imaginary machine that would make digital computers possible. In other words, for Silicon Valley engineers to write code that creates logical worlds online, they need to use machines built on the premise that perfectly logical systems are inherently unworkable.

Of course, as I write this, I am straddling both universes, trying to build logical sentences on those very same machines.

3. The Miasma Theory of Disease

Before the germ theory of disease took hold in medicine, the miasma theory, the notion that bad air caused disease, was predominant. Again, from a practical perspective this made perfect sense. Harmful pathogens tend to thrive in environments with decaying organic matter that gives off bad smells. So avoiding those areas would promote better health.

Once again, this basic paradigm would begin to break down with a series of incidents. First, a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, which suggested that something besides air carried disease. Later John Snow was able to trace the source of a cholera epidemic to a single water pump.

Perhaps not surprisingly, these were initially explained away. Semmelweis failed to present his data properly and was a less than effective advocate for his work. John Snow’s work was statistical, based on correlation rather than causality. A prominent statistician, William Farr, who supported the miasma theory, argued for an alternative explanation.

Still, as doubts grew, more scientists looked for answers. The work of Robert Koch, Joseph Lister and Louis Pasteur led to the germ theory. Later, Alexander Fleming, Howard Florey and Ernst Chain would pioneer the development of antibiotics in the 1940s. That opened the floodgates: money poured into research, creating modern medicine.

Today, we have gone far beyond the germ theory of disease and even lay people understand that disease has myriad causes, including bacteria, viruses and other pathogens, as well as genetic diseases and those caused by strange misfolded proteins known as prions.

To Create The Future, We Need To Break Free Of The Past

If you were a person of sophistication and education in the 19th century, your world view was based on certain axiomatic truths, such as that parallel lines never cross, that logical propositions are either true or false, and that “bad airs” make people sick. For the most part, these ideas would have served you well for the challenges you faced in daily life.

Even more importantly, your understanding of these concepts would signal your inclusion and acceptance into a particular tribe, which would confer prestige and status. If you were an architect or engineer, you needed to understand Euclid’s geometric axioms. Aristotle’s rules of logic were essential to every educated profession. Medical doctors were expected to master the nuances of the miasma theory.

To stray from established orthodoxies carries great risk, even now. It is no accident that those who were able to bring about new paradigms, such as Einstein, Turing and John Snow, came from outside the establishment. More recently, people like Benoit Mandelbrot, Jim Allison and Katalin Karikó had to overcome fierce resistance to bring new ways of thinking to finance, cancer immunotherapy and mRNA vaccines respectively.

Today, it’s becoming increasingly clear we need to break with the past. In just over a decade, we’ve been through a crippling financial crisis, a global pandemic, deadly terrorist attacks, and the biggest conflict in Europe since World War II. We need to confront climate change and a growing mental health crisis. Yet it is also clear that we can’t just raze the global order to the ground and start all over again.

So what do we leave in the past and what do we bring with us into the future? Which new lessons do we need to learn and which old ones do we need to unlearn? Perhaps most importantly, what do we need to create anew and what can we rediscover in the ancient?

Throughout history, we have learned that the answer lies not in merely speculating about ideas, but in finding real solutions to problems we face.

— Article courtesy of the Digital Tonto blog
— Image credit: 1 of 950+ FREE quote slides from http://misterinnovation.com

Metaphysics Philosophy

GUEST POST from Geoffrey A. Moore

Philosophy is arguably the most universal of all subjects. And yet, it is one of the least pursued in the liberal arts curriculum. The reason for this, I will claim, is that the entire field was kidnapped by some misguided academics around a century ago, and since then no one has paid the ransom to free it. That’s not OK, and with that in mind, here is a series of four blogs that, taken together, constitute an Emancipation Proclamation.

There are four branches of philosophy, and in order of importance they are:

  1. metaphysics,
  2. ethics,
  3. epistemology, and
  4. logic.

This post will address the first of these four, with subsequent posts addressing the remaining three.

Metaphysics is best understood in terms of Merriam-Webster’s definition: “the philosophical study of the ultimate causes and underlying nature of things.” In everyday language, it answers the most fundamental kinds of philosophical questions:

  • What’s happening?
  • What is going on?
  • Where and how do we fit in?
  • In other words, what kind of a hand have we been dealt?

Metaphysics, however, is not normally conceived in everyday terms. Here is what the Oxford English Dictionary (OED) has to say about it in its lead definition:

That branch of speculative inquiry which treats of the first principles of things, including such concepts as being, substance, essence, time, space, cause, identity, etc.; theoretical philosophy as the ultimate science of Being and Knowing.

The problem is that concepts like substance and essence are not only intimidatingly abstract, they have no meaning in modern cosmology. That is, they are artifacts of an earlier era when things like the atomic nature of matter and the electromagnetic nature of form were simply not understood. Today, they are just verbiage.

But wait, things get worse. Here is the OED in its third sense of the word:

[Used by some followers of positivist, linguistic, or logical philosophy] Concepts of an abstract or speculative nature which are not verifiable by logical or linguistic methods.

The Oxford Companion to the Mind sheds further light on this:

The pejorative sense of ‘obscure’ and ‘over-speculative’ is recent, especially following attempts by A.J. Ayer and others to show that metaphysics is strictly nonsense.

Now, it’s not hard to understand what Ayer and others were trying to get at, but do we really want to say that the philosophical study of the ultimate causes and underlying nature of things is strictly nonsense? Instead, let’s just say that there is a bunch of unsubstantiated nonsense that calls itself metaphysics but that isn’t really metaphysics at all. We can park that stuff with magic crystals and angels on the head of a pin and get back to what real metaphysics needs to address—what exactly is the universe, what is life, what is consciousness, and how do they all work together?

The best platform for so doing, in my view, is the work done in recent decades on complexity and emergence, and that is what organizes the first two-thirds of The Infinite Staircase. Metaphysics, it turns out, needs to be understood in terms of strata, and then within those strata, levels or stair steps. The three strata that make the most sense of things are as follows:

  1. Material reality as described by the sciences of physics, chemistry, and biology, or what I called the metaphysics of entropy. This explains all emergence up to the entrance of consciousness.
  2. Psychological and social reality, as explained by the social sciences, or what I called the metaphysics of Darwinism, which builds the transition from a world of mindless matter up to one of matter-less mind, covering the intermediating emergence of desire, consciousness, values, and culture.
  3. Symbolic reality, as explained by the humanities, or what I called the metaphysics of memes, which begins with the introduction of language that in turn enables the emergence of humanity’s two most powerful problem-solving tools, narrative and analytics, culminating in the emergence of theory, ideally a theory of everything, which is, after all, what metaphysics promised to be in the first place.

The key point here is that every step in this metaphysical journey is grounded in verifiable scholarship ranging over multiple centuries and involving every department in a liberal arts faculty—except, ironically, the philosophy department which is holed up somewhere on campus, held hostage by forces to be discussed in later blogs.

That’s what I think. What do you think?

Image Credit: Unsplash

The Unclimbed Peaks

The Most Challenging Obstacles to Achieving Artificial General Intelligence

GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  • 1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  • 2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  • 3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
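To make the distinction concrete, here is a deliberately simplified sketch in Python of the general idea of cue-phrase matching. This is not Causaly's actual pipeline, and the cue-phrase lists and example sentences are illustrative assumptions, but it shows how causal claims can be separated from merely associative ones:

```python
# Toy illustration of cue-phrase matching for causal vs. associative claims.
# The phrase lists and sentences below are made up for illustration only.
CAUSAL_CUES = ["induces", "inhibits", "results in", "causes"]
ASSOCIATIVE_CUES = ["is associated with", "is correlated with", "co-occurs with"]

def classify_claim(sentence: str) -> str:
    """Label a sentence as 'causal', 'associative', or 'unknown'."""
    lowered = sentence.lower()
    if any(cue in lowered for cue in CAUSAL_CUES):
        return "causal"
    if any(cue in lowered for cue in ASSOCIATIVE_CUES):
        return "associative"
    return "unknown"

corpus = [
    "Drug X is associated with improved outcomes in Disease Y.",
    "Drug X inhibits the enzyme implicated in Disease Y.",
]

for sentence in corpus:
    print(f"{classify_claim(sentence):12s} | {sentence}")
```

A real system relies on far more sophisticated language understanding than keyword lists, but the payoff is the same: the researcher sees which statements actually assert a mechanism rather than a mere co-occurrence.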

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
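For readers who want to see the shape of that feedback loop, here is a toy sketch. It is emphatically not OpenAI's implementation (real RLHF trains a reward model on human preference data and then optimizes the language model against it); the canned responses, reward function, and update rule below are made-up stand-ins that only illustrate how repeated human reward nudges a policy toward common-sense answers:

```python
import random

# Toy policy: a preference weight for each canned response to one prompt.
responses = {
    "Never use a toaster near a bathtub; water and mains electricity are lethal.": 0.0,
    "Sure, a toaster works fine in the bath.": 0.0,
}

def sample_response(weights):
    """Pick a response with probability increasing in its weight."""
    items = list(weights.items())
    scores = [2.0 ** w for _, w in items]  # higher weight -> more likely
    return random.choices([r for r, _ in items], weights=scores, k=1)[0]

def human_feedback(response):
    """Stand-in for a human labeler: reward safe advice, penalize harmful advice."""
    return 1.0 if response.startswith("Never") else -1.0

LEARNING_RATE = 0.5
for _ in range(20):
    choice = sample_response(responses)
    reward = human_feedback(choice)
    responses[choice] += LEARNING_RATE * reward  # reinforce or penalize

print(responses)  # the safe response ends up with a much higher weight
```

The point of the toy is the loop, not the math: human judgment supplies the reward signal that raw text statistics cannot.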

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E

Growth is Not the Answer

GUEST POST from Mike Shipulski

Most companies have growth objectives – make more, sell more and generate more profits. Increase profit margin, sell into new markets and twist our products into new revenue. Good news for the stock price, good news for annual raises and plenty of money to buy the things that will help us grow next year. But it’s not good for the people that do the work.

To increase sales the same sales folks will have to drive more, call more and do more demos. Ten percent more work for three percent more compensation. Who really benefits here? The worker who delivers ten percent more or the company that pays them only three percent more? Pretty clear to me it’s all about the company and not about the people.

To increase the number of units made implies that there can be no increase in the number of people required to make them. To increase throughput without increasing headcount, the production floor will have less time for lunch, less time for improving their skills and less time to go to the bathroom. Sure, they can do Lean projects to eliminate waste, as long as they don’t miss their daily quota. And sure, they can help with Six Sigma projects to reduce variation, as long as they don’t miss TAKT time. Who benefits more – the people or the company?

Increased profit margin (or profit percentage) is the worst offender. There are only two ways to improve the metric – sell it for more or make it for less. And even better than that is to sell it for more AND make it for less. No one can escape this metric. The sales team must meet with more customers; the marketing team must work doubly hard to define and communicate the value proposition; the engineering staff must reduce the time to launch the product and make it perform better than their best work; and everyone else must do more with less or face the chopping block.

In truth, corporate growth is the fundamental driver behind global warming, reduced life expectancy in the US and the ridiculous increase in the cost of healthcare. Growth requires more products and more products require more material mined, pumped or clear-cut from the planet. Growth puts immense pressure on the people doing the work and increases their stress level. And when they can’t deliver, their deep sense of helplessness and inadequacy causes them to kill themselves. And healthcare costs increase because the companies within (and insuring) the system need to make more profit. Who benefits here? The people in our community? The people doing the work? The planet? Or the companies?

What if we decided that companies could not grow? What if instead companies paid dividends to the people who do the work based on the profit the company makes? With constant output, wouldn’t everyone benefit year-on-year?

What if we decided output couldn’t grow? What if instead, as productivity increased, companies required people to work fewer hours? What if everyone could make the same number of products in seven hours and went home an hour early, working seven and getting paid for eight? Would everyone be better off? Wouldn’t the planet be better off?

What if we decided the objective of companies was to employ more people and give them a sense of purpose and give meaning to their lives? What if we used the profit created by productivity improvements to employ more people? Wouldn’t our communities benefit when more people have good jobs? Wouldn’t people be happier because they can make a contribution to their community? Wouldn’t there be less stress and fewer suicides when parents have enough money to feed their kids and buy them clothes? Wouldn’t everyone benefit? Wouldn’t the planet benefit?

Year-on-year growth is a fallacy. Year-on-year growth stresses the planet and the people doing the work. Year-on-year growth is good for no one except the companies demanding year-on-year growth.

The planet’s resources are finite; people’s ability to do work is finite; and the stress level people can tolerate is finite. Why not recognize these realities?

And why not figure out how to structure companies in a way that benefits the owners of the company, the people doing the work, the community where the work is done and the planet?

Image credit: Dall-E

The Tricky Business of Tariffs

GUEST POST from Shep Hyken

Tariffs are creating havoc and panic for both customers and businesses. Depending on what you read, the cost-of-living increase for the average consumer can be thousands of dollars a year. And it’s the same for business, but often at a much higher cost. Anything a business purchases to run its day-to-day operations is potentially exposed to higher prices due to tariffs. Whatever businesses buy—supplies, inventory, equipment and more—when it costs them more, that cost is passed on to their customers.

This isn’t the first time there has been “tariff panic.” As recently as 2018, new tariffs created similar challenges. I wrote a Forbes article about an e-bike company that was forced to raise its prices due to a 25% import tariff. The company was open about the reasons for the price increase and embraced the problem rather than becoming a victim of it. Here are some ways to manage the impact of tariffs:

  • Be Transparent: Everyone may know about the tariffs, but explaining how they are impacting costs will help justify the price increase. In other words, don’t hide the fact that tariffs are impacting your costs.
  • Partner with Vendors: Ask vendors to work with you on a solution to lower costs that won’t hit their bottom lines. If you buy from a vendor every month, maybe it’s less expensive to buy the same amount but ship quarterly instead of monthly. Work with them to find creative ways to reduce costs. This can benefit everyone.
  • Improve Efficiency to Offset Costs: If you’ve thought about a way to improve a process or efficiency but haven’t acted on it, now may be the perfect time to do so. Sometimes being forced to do something can work in your favor. And be sure to share what you’re changing to help reduce costs. Customers may appreciate you even more.
  • Add Value Instead of Just Raising Prices: When price increases are unavoidable, find a way to justify the higher cost. It could include anything—enhanced customer service, a loyalty rewards program, a special promotion and more. Customers may accept paying more if they feel they are getting more value in return.

What NOT to do:

  • Don’t Take Advantage of Customer Panic: As I write this article, people are going to car dealerships to buy cars before the prices increase and finding that the dealers are selling above the retail sticker price because of the demand. Do you think a customer will forget they were “gouged” by a company taking advantage of them during tough times? (That’s a rhetorical question, but just in case you don’t know the answer … They won’t!)
  • Don’t Say, “It’s Not my Fault”: Even when price increases are beyond your control, don’t be defensive. This can give the impression of a lack of confidence and lack of control that can erode the trust you have with your customers.
  • Don’t Say, “It’s the Same Everywhere You Go”: If the customer understands tariffs, they already know this. Stating you have no choice isn’t going to make the customer feel good. Go back to the list of what you can do and find a way to avoid this and the “it’s not my fault” response.

Customers want to hear what you’re doing to help them. They also like to be educated. Knowledge can give the customer a sense of control. Demonstrating genuine concern for the situation and sharing what you’re doing to minimize the impact of tariff-related price increases builds trust that will pay dividends long after the current economic challenges have passed.

Image Credits: Unsplash, Shep Hyken

This article originally appeared on Forbes.com

Why Proactive Innovation Wins

The Crisis Innovation Trap

by Braden Kelley and Art Inteligencia

In the narrative of business, we often romanticize the idea of “crisis innovation.” The sudden, high-stakes moment when a company, backed against a wall, unleashes a burst of creativity to survive. The pandemic, for instance, forced countless businesses to pivot their models overnight. While this showcases incredible human resilience, it also reveals a dangerous and costly trap: the belief that innovation is something you turn on only when there’s an emergency. As a human-centered change and innovation thought leader, I’ve seen firsthand that relying on crisis as a catalyst is a recipe for short-term fixes and long-term decline. True, sustainable innovation is not a reaction; it’s a proactive, continuous discipline.

The problem with waiting for a crisis is that by the time it hits, you’re operating from a position of weakness. You’re making decisions under immense pressure, with limited resources, and with a narrow focus on survival. This reactive approach rarely leads to truly transformative breakthroughs. Instead, it produces incremental changes and tactical adaptations—often at a steep price in terms of burnout, strategic coherence, and missed opportunities. The most successful organizations don’t innovate to escape a crisis; they innovate continuously to prevent one from ever happening.

The Cost of Crisis-Driven Innovation

Relying on crisis as your innovation driver comes with significant hidden costs:

  • Reactive vs. Strategic: Crisis innovation is inherently reactive. You’re fixing a symptom, not addressing the root cause. This prevents you from engaging in the deep, strategic thinking necessary for true market disruption.
  • Loss of Foresight: When you’re in a crisis, all attention is on the immediate threat. This short-term focus blinds you to emerging trends, shifting customer needs, and new market opportunities that could have been identified and acted upon proactively.
  • Burnout and Exhaustion: Innovation requires creative energy. Forcing your teams into a constant state of emergency to innovate leads to rapid burnout, high turnover, and a culture of fear, not creativity.
  • Suboptimal Outcomes: The solutions developed in a crisis are often rushed, inadequately tested, and sub-optimized. They are designed to solve an immediate problem, not to create a lasting competitive advantage.

“Crisis innovation is a sprint for survival. Proactive innovation is a marathon for market leadership. You can’t win a marathon by only practicing sprints when the gun goes off.”

Building a Culture of Proactive, Human-Centered Innovation

The alternative to the crisis innovation trap is to embed innovation into your organization’s DNA. This means creating a culture where curiosity, experimentation, and a deep understanding of human needs are constant, not sporadic. It’s about empowering your people to solve problems and create value every single day.

  1. Embrace Psychological Safety: Create an environment where employees feel safe to share half-formed ideas, question assumptions, and even fail. This is the single most important ingredient for continuous innovation.
  2. Allocate Dedicated Resources: Don’t expect innovation to happen in people’s spare time. Set aside dedicated time, budget, and talent for exploratory projects and initiatives that don’t have an immediate ROI.
  3. Focus on Human-Centered Design: Continuously engage with your customers and employees to understand their frustrations and aspirations. True innovation comes from solving real human problems, not just from internal brainstorming.
  4. Reward Curiosity, Not Just Results: Celebrate learning, even from failures. Recognize teams for their efforts in exploring new ideas and for the insights they gain, not just for the products they successfully launch.

Case Study 1: Blockbuster vs. Netflix – The Foresight Gap

The Challenge:

In the late 1990s, Blockbuster was the undisputed king of home video rentals. It had a massive physical footprint, brand recognition, and a highly profitable business model based on late fees. The crisis of digital disruption and streaming was not a sudden event; it was a slow-moving signal on the horizon.

The Reactive Approach (Blockbuster):

Blockbuster’s management was aware of the shift to digital, but they largely viewed it as a distant threat. They were so profitable from their existing model that they had no incentive to proactively innovate. When Netflix began gaining traction with its subscription-based, DVD-by-mail service, Blockbuster’s response was a reactive, half-hearted attempt to mimic it. They launched an online service but failed to integrate it with their core business, and their culture remained focused on the physical store model. They only truly panicked and began a desperate, large-scale innovation effort when it was already too late and the market had irreversibly shifted to streaming.

The Result:

Blockbuster’s crisis-driven innovation was a spectacular failure. By the time they were forced to act, they lacked the necessary strategic coherence, internal alignment, and cultural agility to compete. They didn’t innovate to get ahead; they innovated to survive, and they failed. They went from market leader to bankruptcy, a powerful lesson in the dangers of waiting for a crisis to force your hand.


Case Study 2: Lego’s Near-Death and Subsequent Reinvention

The Challenge:

In the early 2000s, Lego was on the brink of bankruptcy. The brand, once a global icon, had become a sprawling, unfocused company that was losing relevance with children increasingly drawn to video games and digital entertainment. The company’s crisis was not a sudden external shock, but a slow, painful internal decline caused by a lack of proactive innovation and a departure from its core values. They had innovated, but in a scattered, unfocused way that diluted the brand.

The Proactive Turnaround (Lego):

Lego’s new leadership realized that a reactive, last-ditch effort wouldn’t save them. They saw the crisis as a wake-up call to fundamentally reinvent how they innovate. Their strategy was not just to survive but to thrive by returning to a proactive, human-centered approach. They went back to their core product, the simple plastic brick, and focused on deeply understanding what their customers—both children and adult fans—wanted. They launched several initiatives:

  • Re-focus on the Core: They trimmed down their product lines and doubled down on what made Lego special—creativity and building.
  • Embracing the Community: They proactively engaged with their most passionate fans, the “AFOLs” (Adult Fans of Lego), and co-created new products like the highly successful Lego Architecture and Ideas series. This wasn’t a reaction to a trend; it was a strategic partnership.
  • Thoughtful Digital Integration: Instead of panicking and launching a thousand digital products, they carefully integrated their physical and digital worlds with games like Lego Star Wars and movies like The Lego Movie. These weren’t rushed reactions; they were part of a long-term, strategic vision.

The Result:

Lego’s transformation from a company on the brink to a global powerhouse is a powerful example of the superiority of proactive innovation. By not just reacting to their crisis but using it as a catalyst to build a continuous, human-centered innovation engine, they not only survived but flourished. They turned a painful crisis into a foundation for a new era of growth, proving that the best time to innovate is always, not just when you have no other choice.


The Eight I’s of Infinite Innovation

Braden Kelley’s Eight I’s of Infinite Innovation provides a comprehensive framework for organizations seeking to embed continuous innovation into their DNA. The model starts with Ideation, the spark of new concepts, which must be followed by Inspiration—connecting those ideas to a compelling, human-centered vision. This vision is refined through Investigation, a process of deeply understanding customer needs and market dynamics, leading to the Iteration of prototypes and solutions based on real-world feedback. The framework then moves from development to delivery with Implementation, the critical step of bringing a viable product to market. This is not the end, however; it’s a feedback loop that requires Invention of new business models, a constant process of Improvement based on outcomes, and finally, the cultivation of an Innovation culture where the cycle can repeat infinitely. Each ‘I’ builds upon the last, creating a holistic and sustainable engine for growth.

Conclusion: The Time to Innovate is Now

The notion of “crisis innovation” is seductive because it offers a heroic narrative. But behind every such story is a cautionary tale of a company that let a problem fester for far too long. The most enduring, profitable, and relevant organizations don’t wait for a burning platform to jump; they are constantly building new platforms. They have embedded a culture of continuous, proactive innovation driven by a deep understanding of human needs. They innovate when times are good so they are prepared when times are tough.

The time to innovate is not when your stock price plummets or your competitor launches a new product. The time to innovate is now, and always. By making innovation a fundamental part of your business, you ensure your organization’s longevity and its ability to not just survive the future, but to shape it.

Image credit: Pixabay

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.

McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI

GUEST POST from Robyn Bolton

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But, for people working on pilots,  it might as well be the Pit of Despair because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and warrant allocating resources to) in late 2022/early 2023.  That means that we’re expecting ROI within 2 years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes.  Cloud Computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be considered in the context of their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach of deploying “vertical” function-specific tools doesn’t fare much better.  In fact, less than 10% make it past the pilot stage, despite having higher potential for economic impact.

Better results require reinvention, not optimization:  McKinsey found that call centers giving agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails saw only a 5-10% reduction in call time.  Centers using AI tools to automate tasks without agent initiation reduced call time by 20-40%.

Centers reinventing processes around AI agents? 60-90% reduction in call time, with 80% automatically resolved.

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

Image credit: Microsoft CoPilot

Strategy Lacking Purpose Will Always Fail

GUEST POST from Greg Satell

In 1989, just before the fall of the Berlin Wall, Francis Fukuyama published an essay in the journal The National Interest titled The End of History, which led to a bestselling book. Many took his argument to mean that, with the defeat of communism, US-style liberal democracy had emerged as the only viable way of organizing a society.

He was misunderstood. His actual argument was far more nuanced and insightful. After explaining the arguments of philosophers like Hegel and Kojeve, Fukuyama pointed out that even if we had reached an endpoint in the debate about ideologies, there would still be conflict because of people’s need to express their identity.

We usually think of strategy as a rational, analytic activity, with teams of MBAs poring over spreadsheets or generals standing before maps. Yet if we fail to take into account human agency and dignity, we’re missing the boat. Strategy without purpose is doomed to fail, however clever the calculations. Leaders need to take note of that basic reality.

Taking Stock Of The Halo Effect

Business case studies are written by experienced professionals who are trained to analyze past situations from multiple perspectives. However, their ability to do that successfully is greatly limited by the fact that they already know the outcome of the situation they are studying. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture with people that worked long hours but loved every minute of it. When the market tanked, however, all of a sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, seeing Cisco’s success, analysts looked at particular qualities in a positive light. However, when things began to go the other way, those same qualities were perceived as negative.

When an organization is doing well, we may find its people to be “idealistic” and “values driven,” but when things go sour, those same traits come to be seen as “impractical” and “arrogant.” Given the same set of facts, we can—and often do—come to very different conclusions when our perception of the outcomes changes.

In most cases, analysts don’t have a stake in the outcome. From their point of view, they probably see themselves as objectively analyzing facts and following them to their most logical outcomes. Yet when the purpose for writing an analysis changes from telling a success story to lamenting a cautionary tale, their perception of events tends to change markedly.

Reassessing The Value Chain

For decades, the dominant view of business strategy was based on Michael Porter’s ideas about competitive advantage. In essence, he argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods.

Yet as AnnaLee Saxenian explained in Regional Advantage, around the same time that Porter’s ideas were ascending among CEOs in the establishment industries on the east coast, a very different way of doing business was gaining steam in Silicon Valley. The firms there saw themselves not as isolated fiefdoms, but as part of a larger ecosystem.

The two models are built on very different assumptions. The Porter model sees the world as made up of transactions. Optimize your strategy to create efficiencies, derive the maximum value out of every transaction and you will build a sustainable competitive advantage. The Silicon Valley model, however, saw the world as made up of connections and optimized their strategies to widen and deepen linkages.

Microsoft is one great example of this shift. When Linux first rose to prominence, Microsoft CEO Steve Ballmer called it a cancer. Yet more recently, its current CEO announced that the company loves Linux. That didn’t happen out of any sort of newfound benevolence, but because it recognized that it couldn’t continue to shut itself out and still be able to compete.

When you see the world as the “sum of all efficiencies,” the optimal strategy is to dominate. However, if you see the world as made up of the “sum of all connections,” the optimal strategy is to attract. You need to be careful to be seen as purposeful rather than predatory.

The Naïveté Of The “Realists”

Since at least the times of Richelieu, foreign policy theorists have been enthralled by the concept of Realpolitik, the notion that world affairs are governed by interests, not ideological, moral or ethical considerations. Much like with Porter’s “competitive advantage,” strategy is treated as a series of transactions rather than relationships.

Rational calculation of interests is one of those ideas that seems pragmatic on the surface, but is actually hopelessly academic and unworkable in the real world. How do you identify the “interests” you are supposed to be basing your decisions on if not by considering what you value? And how do you assess your values without taking into account your beliefs, morals and ethics?

To understand how such “realism” goes awry, consider the prominent political scientist John Mearsheimer. In March, he gave an interview to The New Yorker in which he argued that, by failing to recognize Russia’s role and interests as a great power, the US had erred greatly in its support of Ukraine.

Yet it is clear now that the Russians were the ones who erred. First, they failed to recognize that the world would see their purpose as immoral. Second, they failed to recognize how their aggression would empower Ukraine’s sense of nationhood. Third, they did not see how Europe would come to regard economic ties with Russia to be against their interests.

Nothing you can derive from military or economic statistics will give you insight into human agency. Excel sheets may not be motivated by purpose, but people are.

Strategy Is Not A Game Of Chess

Antonio Damasio, a neuroscientist who researches decision making, became intrigued when one of his patients, a highly intelligent and professionally successful man named “Elliot,” suffered from a brain lesion that impaired his ability to experience emotion. It soon became clear that Elliot was unable to make decisions.

Elliot’s prefrontal cortex, which governs the executive function, was fully intact. His memory and ability to understand events were normal as well. He was, essentially, a completely rational being with normal cognitive function, but no emotions. The problem was that although Elliot could understand all the factors that would go into making a decision, he could not weigh them. Without emotions, all options were essentially the same.

In the real world, strategy is not a game of chess, in which we move inert pieces around a board. While we can make rational assessments about various courses of action, ultimately people have to care about the outcome. For a strategy to be meaningful, it needs to speak to people’s values, hopes, dreams and ambitions.

A leader’s role cannot be merely to plan and direct action, but must be to inspire and empower belief in a common endeavor. That’s what widens and deepens the meaningful connections that can enable genuine transformation.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Some Insights from Cracker Barrel

Is All Publicity Good Publicity?

GUEST POST from Pete Foley

The Cracker Barrel rebrand has certainly created a lot of media and social media attention.  Everything happened so fast that I have had to rewrite this introduction twice in as many days. It was originally written when the new logo was in place; that logo has subsequently been withdrawn and replaced with the original one.

It’s probably been an expensive, somewhat embarrassing and sleepless week for the Cracker Barrel management team. But also one that generated a great deal of ‘free’ publicity for them. You could argue that despite the cost of a major rebranding and de-branding, this episode was priceless from a marketing penetration perspective. There is no way they could have spent enough to generate the level of media and social media coverage they have achieved, if not necessarily enjoyed.

But of course, it raises the perennial question ‘is all publicity good publicity?’  With brands, I’d argue not always.  For certain, both good and bad publicity adds to ‘brand fluency’ and mental availability. But whether that is positively or negatively valenced, or triggers implicit or explicit approach or avoid responses, is less straightforward. A case in point is of course Budweiser, who generated a lot of free media, but are still trying to drag themselves out of the Bud Light controversy.

Listening to the Customer: But when the dust settles, I suspect that Cracker Barrel will come out of this quite well. They enjoyed massive media and social media exposure, elevating the ‘mindshare’ of their brand. And to their credit, they’ve also, albeit a little reluctantly, listened to their customers. The quick change back to their legacy branding must have been painful, but from a customer perspective, it screams ‘I hear you, and I value you’.

The Political Minefield: But there is some lingering complexity. Somehow the logo change became associated with politics. That is not exactly unusual these days, and when it happens, it inevitably triggers passion, polarization and outrage. I find it a quite depressing commentary on the current state of society that a restaurant logo can trigger ‘outrage’. But like it or not, as change agents, these emotions, polarization and dubious political framing are a reality we all have to deal with. In this case, I personally suspect that any politically driven market effects will be short-lived. To my eye, any political position was unintentional, generated by social media rather than the company, and the connection between logo design and political affiliation is at best tenuous, and lacks the depth of meaning typically required for persistent outrage. The mobs should move on.

The Man on the Moon: But it does illustrate a broader problem for innovation derived from our current polarized society. If a logo simplification can somehow take on political overtones, pretty much any change or innovation can. Change nearly always comes with supporters and detractors, reflecting the somewhat contradictory nature of human behavior and cognition – we are change agents who also operate largely from habits. Our response to innovation is therefore inherently polarized, both as individuals and as a society, with elements of both behavioral inertia and change affinity. But with society deeply polarized and divided, it is perhaps inevitable that we will see connections between two different polarizations, whether they are logical or causal or not. We humans are pattern creators, evolved to see connections where they may or may not exist. This ability to see patterns using partial data protected us, and helped us see predators, food or even potential mates using limited information. Spotting a predator from a few glimpses through the trees obviously has huge advantages over waiting until it ambushes us. So we see animals in clouds, patterns in the stars, faces on the moon, and on some occasions, political intent where none probably exists.

My original intent with this article was to look at the design change for the logo from a fundamental visual science perspective. From that perspective, I thought it was quite flawed. But as the story quickly evolved, I couldn’t ignore the societal, social media and political elements. Context really does matter. But if we step back from that, there are still some really interesting technical design insights we can glean.

1.  Simplicity is deceptively complex. The current trend towards reducing complexity and even color in a brand’s visual language superficially makes sense.  After all, the reduced amount of information and complexity should be easier for our brains to visually process.  And low cognitive processing costs come with all sorts of benefits. But unfortunately it’s not quite that simple.  With familiar objects, our brain doesn’t construct images from scratch, but instead takes the less intuitive, but more cognitively efficient route of unconsciously matching what we see to our existing memory.  This allows us to recognize familiar objects with a minimum of cognitive effort, and without needing to process all of the visual details they contain.  Our memory, as opposed to our vision, fills in much of the detail.  But this process means that dramatic simplification of a well-established visual language or brand, if not done very carefully, can inhibit that matching process.  So counterintuitively, if we remove the wrong visual cues, it can make a simplified visual language or brand more difficult to process than its original, and thus harder to find, at least for established customers.  Put another way, the way our visual system operates, it automatically and very quickly (faster than we can consciously think) reduces images down to their visual essence. If we try to do that ourselves, we need to very clearly understand what the key visual elements are, and make sure we keep the right ones. Cracker Barrel has lost some basic shapes, and removed several visual elements completely, meaning it has likely not done a great job in that respect.

2.  Managing the Distinctive-Simple Trade-Off. Our brains have evolved to be very efficient, so as noted above, we only do the 'heavy lifting' of encoding complex designs into memory once. We then use the shortcut of matching what we see to what we already know, and so can recognize relatively complex but familiar objects with relatively little effort. This matching process means a familiar visual scene like the old Cracker Barrel logo is quickly processed as a 'whole', rather than as a complex, detailed image. But unfortunately, the devil is in the details, and a dramatic simplification like Cracker Barrel's can unintentionally remove many of the cues or signals that allowed us to unconsciously recognize it with minimal cognitive effort.

And the process of minimizing visual complexity can also remove much of what made the brand both familiar and distinctive. It's the relatively low-resolution elements of the design that make it distinctive. To get a feel for this, try squinting at the old and new brand. With the old design, squinting loses the details of the barrel and the old man, but their rough shapes, the overall shape of the logo, and their relative positions remain. That gives a rough approximation of what our visual system feeds into our brain when looking for a match with our memory. Do the same with the new logo, and little consistency or distinctivity survives. This means the new logo is unintentionally making it harder for customers to either find it (in memory or elsewhere) or recognize it.
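For anyone who wants to approximate the squint test digitally, here is a minimal sketch. It simply blurs and shrinks an image so that only its coarse shapes and relative positions survive, which is a rough stand-in for the low-resolution signal our visual system works with. The file name and parameter values are hypothetical, and the snippet assumes the Pillow imaging library is installed.

```python
# A rough digital stand-in for the "squint test": blur and shrink a logo
# so only its coarse shapes, masses and relative positions survive.
# File name and parameter values are illustrative only.
from PIL import Image, ImageFilter

def squint(path: str, blur_radius: int = 8, thumb_size: tuple = (64, 64)) -> Image.Image:
    """Return a low-resolution approximation of how a design reads at a glance."""
    img = Image.open(path).convert("L")                     # drop color to emphasize shape
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
    blurred.thumbnail(thumb_size)                           # discard fine detail
    return blurred

if __name__ == "__main__":
    squint("logo.png").save("logo_squint.png")              # compare old vs. new side by side
```

Running this on the old and new designs side by side gives a crude but useful view of which one still reads as itself once the detail is gone.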

As a side effect, oversimplification also risks looking 'generic' and falling into the noise created by a growing sea of increasingly simplified logos. Now, to be fair, historical context matters. If information is not encoded into memory, the matching process fails, and a visual memory needs to be built from scratch. So for a brand new to the market, Cracker Barrel's new visual language might lack distinctivity, but it would certainly carry ease-of-processing benefits for new customers, whereas the legacy design would likely be too complex and would quite likely be broadly deselected. But because the old design already owns 'mindspace' with existing customers, the dramatic change and the removal of basic visual cues asks repeat customers to 'think' at a more conscious level, and so potentially challenges long-established habits. That is a major risk for any established brand.

3.  Distinctivity Matters. All visual branding represents a trade-off. We need signal-to-noise characteristics that stand out from the crowd, or we are unlikely to be noticed. But we also need to look like we belong to a category, or we risk being deselected. It's a balancing act. Look too much like the category archetype, and we lack distinctivity, fade into the background noise, and appear generic. But look too different, and we stand out in a potentially bad way, asking potential customers to put in too much work to understand us, which will often lead them to quickly deselect us. It's a trade-off where controlled complexity can curate distinctive cues to stand out, while also incorporating enough category-prototype cues to make the design feel right. Combine this with sufficient simplicity to ease processing fluency, and we likely have a winning design, especially for new customers. But it's a delicate balancing act between competing variables.

4.  People don't like change. As mentioned earlier, we have a complex relationship with change. We like some, but not too much. Change asks our brains to work harder, so it needs to provide value, and I'm skeptical that, in this case, it added commensurate value for the customer. Change also breaks habits, so any major rebrand comes with risk for a well-established brand. But it's a balancing act, and we should not remain locked into aging designs forever. As the context we operate in changes, we need to 'move with the times' and remain consistent in our relationship with that context, at least as much as we remain consistent with our history.

And of course, there is also a trade-off between a visual language that resonates with existing customers and one designed to attract new ones, as ultimately virtually every brand needs both trial and repeat. But for established brands, evolutionary change is usually the way to achieve reach and trial without alienating existing customers. Coke is the master of this. Look at how its brand has evolved over time, staying contemporary without creating the kind of 'cognitive jolts' the Cracker Barrel rebrand has created. If you look at an old Coke advertisement, you intuitively know both that it's old and that it's Coke.

Brands and Politics. I generally advise brands to stay out of politics. With a few exceptions, entering this minefield risks alienating 50% of our customers, and any subsequent 'course corrections' risk alienating those that are left. For the vast majority of companies, the cost-benefit equation simply doesn't work!

But in this case, we are seeing consumers interpret change through a political lens even when that was not the intent. And just because the politics isn't there doesn't mean it doesn't matter, as Cracker Barrel has discovered. So I'm changing my advice from 'don't be political' to 'try to anticipate whether your initiative could be misunderstood as political'. It's a subtle but important difference.

And as a build, marketers often try to incorporate secondary messages into their communication. But in today's charged political climate, I think we need to be careful about being too 'clever' in this respect. Consumers' sensitivity to socio-political cues is very high at present, as the Cracker Barrel example shows. If they can see political content where none was intended, they are quite likely to spot any secondary or 'implicit' messaging. So, for example, an advertisement that features a lot of flags and patriotic displays, or one that predominantly features members of the LGBTQ community, both run the risk of being perceived as 'making a political statement', whether that is intended or not. There is absolutely nothing wrong with either patriotism or the LGBTQ community, and to be fair, as society becomes increasingly polarized, it's increasingly hard to create content that doesn't somehow offend someone, at least without becoming so 'vanilla' that the content is largely pointless and doesn't cut through the noise. But from a business perspective, in today's socially and politically fractured world, any perceived political bias or message in either direction comes with business risks. Proceed with caution.

And keep in mind we've evolved to respond more intensely to negatives than positives – caution kept our ancestors alive. If we half-see a coiled object in the grass that could be a garden hose or a snake, our instinct is to back off. If we mistake a garden hose for a snake, the cost is small. But if we mistake a venomous snake for a garden hose, the cost could be high.

As I implied earlier, when consumers look at our content through a specific and increasingly intense partisan lens, it's really difficult for us not to be perceived as either 'for' or 'against' them. And keep in mind, the cost of undoing even an unintended political statement is inevitably higher than the cost of making it. So it's at the very least worth trying to avoid being dragged into a political space whenever possible, especially on the negative side. So be careful out there, and embrace some devil's advocate thinking. Even if we are not trying to make a point, implicitly or explicitly, we need to step back and look at how those who see the world from a deeply polarized position could interpret us. The 'no such thing as bad publicity' concept sits on very thin ice at this moment in time, when social media often seeks to punish more than communicate.

Image credits: Wikimedia Commons



The Next Great Leap

How Neuromorphic Computing Will Unlock Human-Centered Innovation

GUEST POST from Art Inteligencia

I’ve long advocated that the most transformative innovation is not just about technology, but about our ability to apply it in a way that creates a more human-centered future. We’re on the cusp of just such a shift with neuromorphic computing.

So, what exactly is it? At its core, neuromorphic computing is a radical departure from the architecture that has defined modern computing since its inception: the von Neumann architecture. This traditional model separates the processor (the CPU) from the memory (RAM), forcing data to constantly shuttle back and forth between the two. This “von Neumann bottleneck” creates a massive energy and time inefficiency, especially for tasks that require real-time, parallel processing of vast amounts of data—like what our brains do effortlessly.

Neuromorphic computing, as the name suggests, is directly inspired by the human brain. Instead of a single, powerful processor, it uses a network of interconnected digital neurons and synapses. These components mimic their biological counterparts, allowing for processing and memory to be deeply integrated. Information isn’t moved sequentially; it’s processed in a massively parallel, event-driven manner.

Think of it like this: A traditional computer chip is like a meticulous librarian who has to walk to the main stacks for every single piece of information, one by one. A neuromorphic chip is more like a vast, decentralized community where every person is both a reader and a keeper of information, and they can all share and process knowledge simultaneously. This fundamental change in architecture allows neuromorphic systems to be exceptionally efficient at tasks like pattern recognition, sensor fusion, and real-time decision-making, consuming orders of magnitude less power than traditional systems.
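To make the event-driven idea a little more concrete, below is a minimal, illustrative sketch of a leaky integrate-and-fire neuron, the basic building block that most spiking systems are modeled on. This is not Loihi or TrueNorth code; all names and parameter values are invented for illustration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the neuron stays silent until
# incoming spikes push its membrane potential past a threshold, at which point
# it emits a spike and resets. Computation only happens when events arrive,
# which is the source of neuromorphic efficiency.
# All parameter values are illustrative, not taken from any real chip.

def simulate_lif(input_spikes, threshold=1.0, leak=0.9, weight=0.4):
    """Return the time steps at which the neuron fires, given a binary input spike train."""
    potential = 0.0
    output_spikes = []
    for t, spike in enumerate(input_spikes):
        potential *= leak              # membrane potential decays each step
        potential += weight * spike    # an incoming spike adds charge
        if potential >= threshold:     # threshold crossed: emit a spike
            output_spikes.append(t)
            potential = 0.0            # reset after firing
    return output_spikes

if __name__ == "__main__":
    # A burst of input spikes makes the neuron fire; sparse input does not.
    print(simulate_lif([1, 1, 1, 0, 0, 1, 0, 0, 0, 1]))
```

The key point of the sketch is that nothing meaningful happens unless a spike arrives, which is why networks of such neurons can sit at near-zero power until the world gives them something to react to.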

It’s this leap in efficiency and adaptability that makes it so critical for human-centered innovation. It enables intelligent devices to operate for years on a small battery, allows autonomous systems to react instantly to their environment, and opens the door to new forms of human-machine interaction.


Case Study 1: Accelerating Autonomous Systems with Intel’s Loihi 2

In the world of autonomous vehicles and robotics, real-time decision-making is a matter of safety and efficiency. Traditional systems struggle with sensor fusion, the complex task of integrating data from various sensors like cameras, lidar, and radar to create a cohesive understanding of the environment. This process is energy-intensive and often suffers from latency.

The Intel Loihi 2 neuromorphic chip represents a significant leap forward. Researchers have demonstrated that by using spiking neural networks, Loihi 2 can handle sensor fusion with remarkable speed and energy efficiency. In a study focused on datasets for autonomous systems, the chip was shown to be over 100 times more energy-efficient than a conventional CPU and nearly 30 times more efficient than a GPU. This dramatic reduction in power consumption and increase in speed allows for quicker course corrections and improved collision avoidance, moving us closer to a future where robots and vehicles don’t just react to their surroundings, but intelligently adapt.


Case Study 2: Revolutionizing Medical Diagnostics with IBM’s TrueNorth

The field of medical imaging is a prime candidate for neuromorphic disruption. Diagnosing conditions from complex scans like MRIs requires the swift and accurate segmentation of anatomical structures. This is a task that demands high computational power and is often handled by GPUs in a clinical setting.

A pioneering case study on the IBM TrueNorth neurosynaptic system demonstrated its ability to perform spinal image segmentation with exceptional efficiency. A deep learning network implemented on the TrueNorth chip was able to delineate spinal vertebrae and disks more than 20 times faster than a GPU-accelerated network, all while consuming less than 0.1W of power. This breakthrough proves that neuromorphic hardware can perform complex medical image analysis with the speed needed for real-time surgical or diagnostic environments, paving the way for more accessible and instant diagnoses.


The Vanguard of Innovation: A Glimpse at the Leaders

The innovation in neuromorphic computing is being driven by a powerful confluence of established tech giants and nimble startups. Intel and IBM, as highlighted in the case studies, continue to lead with their research platforms, Loihi and TrueNorth, respectively. Their work provides the foundational hardware for the entire ecosystem.

However, the field is also teeming with promising newcomers. Companies like BrainChip are pioneering ultra-low-power AI for edge applications, enabling sensors to operate for years on a single charge. SynSense is at the forefront of event-based vision, creating cameras that only process changes in a scene, dramatically reducing data and power requirements. Prophesee is another leader in this space, with partnerships with major companies like Sony and Bosch for their event-based machine vision sensors. The Dutch startup Innatera is focused on ultra-low-power processors for advanced cognitive applications, while MemComputing is taking a unique physics-based approach to solve complex optimization problems. This dynamic landscape ensures a constant flow of new ideas and applications, pushing the boundaries of what’s possible.


In the end, neuromorphic computing is not just about building better computers; it’s about building a better future. By learning from the ultimate example of efficiency—the human brain—we are creating a new generation of technology that will not only perform more efficiently but will empower us to solve some of our most complex human challenges, from healthcare to transportation, in ways we’ve only just begun to imagine.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Gemini
