
The Irish Spirit

Lessons in Resilience and Radical Creativity

LAST UPDATED: March 17, 2026 at 3:17 AM


by Braden Kelley and Art Inteligencia


Beyond the Luck of the Irish: A Strategic Foundation

St. Patrick’s Day often arrives draped in the superficial — green beer, plastic shamrocks, and the persistent myth of “the luck of the Irish.” But for those of us navigating the complex waters of human-centered change and innovation, there is a much deeper well to draw from than mere fortune.

In the world of digital transformation, “luck” is rarely a random lightning strike. Instead, it is the byproduct of a culture that is perpetually prepared for opportunity — a fundamental tenet of any robust innovation strategy. Ireland’s history serves as a definitive masterclass in stoking the innovation bonfire. It is a narrative defined by the ability to pivot in the face of existential adversity, using communal resilience as a primary engine for growth.

The Modern Creative Landscape

Today, Ireland occupies a unique global position. It sits at the intersection of ancient, soulful arts and the cutting-edge rigors of the modern tech sector. This isn’t a coincidence; it’s the result of a national identity that values intellectual agility. Whether it is a rural community re-imagining its local economy or a Dublin-based tech giant scaling a new framework, the underlying pulse remains the same: a blend of high-tech capability and high-touch humanity.

The Thesis: A Survival Mechanism

The core takeaway for change leaders is this: Irish creativity is not just about aesthetic output or poetic flair. It is a survival mechanism. It is rooted in three distinct pillars that every modern organization needs to thrive:

  • Resilience: The emotional and structural capacity to endure “The Great Contraction” and emerge with a new value proposition.
  • Narrative: The use of storytelling to bridge the gap between technical change and human adoption.
  • Connection: Prioritizing the “Human-Centered” element of innovation to ensure that technology serves autonomy rather than eroding it.

By examining these cultural traits, we can move beyond the holiday tropes and uncover practical lessons for building organizational agility and fostering a culture where radical creativity is the standard, not the exception.

The Power of the “Sennachie”: Narrative as a Strategic Framework

In the ancient Irish tradition, the Sennachie (pronounced shan-a-key) was much more than a simple storyteller. They were the custodians of history, the keepers of genealogy, and the navigators of local law. In modern organizational terms, the Sennachie was the ultimate Chief Experience Officer — ensuring that every member of the community understood their place within the collective narrative.

When we look at digital transformation or complex human-centered change, the technical hurdles are rarely what cause a project to fail. It is the narrative vacuum. Without a compelling story, employees fill that silence with anxiety, resistance, and skepticism. The Irish tradition teaches us that the story is not an “add-on” to the strategy; the story is the strategy.

Narrative as an Alignment Tool

A well-crafted narrative serves as a North Star for distributed innovation teams. It provides the “Why” that bridges the gap between a high-level vision and daily execution. In Ireland, stories were used to maintain identity through centuries of upheaval. In business, we use narrative to:

  • Socialize Innovation: Moving an idea from a slide deck to the “water cooler” conversation requires a narrative that resonates on a human level.
  • Build Empathy: By focusing on the “Characters” (our customers and employees) rather than just the “Features,” we ensure the solution actually solves a human pain point.
  • Overcome Organizational Resistance: A story that honors the past while pointing toward a necessary future reduces the “immune system” response of the corporate culture.

Application: The “Great Story” Framework

To apply this Irish wisdom to your next project, stop writing technical requirements and start drafting the “Great Story” of the change. This involves moving beyond content and focusing on context. Who are the heroes of this transformation? What is the “villain” (e.g., inefficiency, poor customer experience, or technical debt)? And most importantly, what does the “happily ever after” look like for the individual contributor?

By adopting the mindset of the Sennachie, leaders can move away from “managing” change and toward stoking the imagination of their teams. When people can see themselves in the story, they don’t just participate in the change — they own it.

Constraint-Based Innovation: Creating from Scarcity

One of the most profound lessons we can learn from the Irish experience is the art of innovation under pressure. For centuries, Ireland was defined by geographical isolation and limited natural resources. Yet, rather than stifling progress, these boundaries acted as a crucible for radical resourcefulness. In the world of FutureHacking™, we recognize that unlimited budgets often lead to bloated, unfocused projects, while tight constraints force a team to identify the most elegant, high-impact solutions.

Ireland’s modern transformation into a global “Silicon Isle” wasn’t fueled by an abundance of coal or iron, but by the strategic cultivation of its only infinite resource: intellectual and imaginative capital. This shift from an agrarian society to a digital leader is a prime example of how an “island mentality” — the recognition of finite boundaries — can drive a culture to seek out-sized returns through pure ingenuity.

The “Scarcity Mindset” vs. “Abundance Thinking”

In organizational change, we often hear “we don’t have the budget” or “we don’t have the headcount” as excuses for stagnation. The Irish model suggests a flip in perspective. Scarcity isn’t a wall; it’s a design constraint. When we look at innovation through this lens, we begin to:

  • Prioritize the Essential: Without the luxury of waste, every move must contribute directly to the Customer Experience (CX).
  • Leverage Hidden Assets: Like the Irish turning humble ingredients into world-renowned exports, organizations must look at their existing data, talent, and “dark” assets to create new value.
  • Encourage Radical Collaboration: When resources are low, the only way to scale is through partnership and shared ecosystems.

Application: Innovation as a Survival Skill

To apply this to your own innovation bonfire, start by viewing your current constraints as the parameters of a creative challenge. If you had 50% less time or 80% less budget, what is the one thing that must still work? That “one thing” is your core value proposition.

By embracing the Irish spirit of “making do” and then “making better,” leaders can foster a culture that doesn’t fear limitations but uses them as a springboard for organizational agility. True innovation isn’t about having the most; it’s about doing the most with what you have.

The “Meitheal” Mentality: Radical Collaboration and Ecosystem Thinking

In the heart of Irish rural tradition lies the concept of the Meitheal (pronounced meh-hel). It describes a group of neighbors coming together to help one another with the harvest or other labor-intensive tasks. There was no formal contract, only the understood social capital of mutual support. If one farmer’s crop was at risk, the community became the safety net.

In modern digital transformation, we often suffer from “Silo Syndrome” — where departments guard their resources and data as if they were private fiefdoms. The Meitheal mentality offers a powerful antidote. It shifts the focus from “Hero Innovation” (the lone genius) to “Community Innovation,” where the collective intelligence of the organization is harvested for the benefit of the Customer Experience (CX).

Breaking the Silos: From Hierarchy to Community

To build a truly agile organization, we must move beyond rigid reporting lines and toward fluid, purpose-driven clusters. When we apply the Meitheal spirit to a Modern Experience Management Office (XMO), we see:

  • Shared Burden, Shared Success: When a project hits a bottleneck, resources from other “neighboring” departments flow toward the problem without the need for bureaucratic escalation.
  • Cross-Functional Agility: The ability to assemble “Tiger Teams” that possess diverse skill sets — designers, developers, and strategists — all focused on a single harvest: the project’s completion.
  • Mutual Accountability: In a Meitheal, you help today because you might need help tomorrow. This creates a culture of psychological safety and long-term trust.

Application: Harvesting the Collective Intelligence

How do you “socialize” the Meitheal in a corporate environment? Start by identifying the “shared harvests” in your organization. These are the goals that no single department can achieve alone — such as improving the End-to-End User Journey.

By fostering a culture where helping a colleague is seen as a strategic contribution rather than a distraction from one’s “real job,” leaders can stoke the innovation bonfire across the entire enterprise. Radical collaboration isn’t just a buzzword; it’s the ancient Irish secret to doing more together than we ever could apart.

Comfortable with the “Craic”: The Role of Play in High-Stakes Innovation

In Irish culture, “The Craic” (pronounced crack) is often misunderstood by outsiders as mere small talk or revelry. In reality, it is a sophisticated form of social intelligence. It encompasses news, gossip, entertainment, and, most importantly, sharp-witted conversation. For an innovation leader, the “Craic” represents the ultimate expression of psychological safety — an environment where ideas can be batted around, deconstructed, and reimagined without the fear of corporate reprisal.

When we look at the Experience Level Measures (XLMs) of high-performing teams, one of the leading indicators of success is the frequency of informal, playful interaction. If your team is too afraid to joke, they are likely too afraid to take the risks necessary for a “FutureHacking™” breakthrough.

Wit as a Navigation Tool for Complexity

The Irish use wit not just for humor, but as a way to navigate Moral Uncertainty and complex social dynamics. In a business context, a culture that embraces the “Craic” benefits from:

  • Reduced Friction: Humor is a lubricant for change. It allows teams to acknowledge the absurdity of a difficult situation while still moving toward a solution.
  • Rapid Prototyping of Ideas: In a playful environment, “What if?” becomes a natural part of the conversation rather than a formal exercise.
  • Resilience Against Burnout: The ability to find joy in the process — especially during a grueling digital transformation — is what keeps the “innovation bonfire” burning long after the initial excitement has faded.

Application: Creating a “Low-Anxiety” Innovation Zone

To apply this, leaders must model vulnerability and playfulness. This doesn’t mean forced fun or “mandatory happy hours.” It means creating a culture where quick thinking and diverse perspectives are celebrated. It’s about building a space where the “High-Anxiety” personas in your organization feel safe enough to contribute their “Digital Skeptic” viewpoints without being shut down.

When your team is comfortable with the “Craic,” they aren’t just working; they are engaging in a communal creative act. Innovation is serious business, but it shouldn’t be somber. By injecting a bit of the Irish spirit into your workflows, you transform a workplace into an Innovation Ecosystem where the best ideas can finally breathe.

Conclusion: Stoking Your Own Creative Bonfire

As we’ve explored, the “Luck of the Irish” is a misnomer for what is actually a disciplined, culturally ingrained approach to resilience and radical creativity. From the narrative mastery of the Sennachie to the communal strength of the Meitheal, the lessons from Ireland provide a robust blueprint for any leader navigating the complexities of human-centered innovation.

In the world of digital transformation, we often get blinded by the “shiny objects” — the latest AI tools or software platforms. But the Irish spirit reminds us that innovation is 10% technology and 90% people. The “Pot of Gold” at the end of the change management rainbow isn’t a finished product; it is a sustainable, agile culture that is capable of reinventing itself time and again.

The Call to Action: Adopt a “FutureHacking™” Mindset

To bring these lessons into your own organization, don’t just celebrate the holiday — integrate its principles:

  • Tell the Story: Stop issuing mandates and start building a narrative where your employees are the protagonists.
  • Embrace the “Craic”: Lower the anxiety in your innovation zones to allow for the kind of playful friction that sparks truly original ideas.
  • Focus on the Human Experience: Use Experience Level Measures (XLMs) to ensure your “innovations” are actually improving the lives of your customers and staff.

Creativity is a renewable resource, but it requires a hearth. By fostering an environment that values storytelling, collaboration, and resourcefulness, you aren’t just managing a project; you are stoking an innovation bonfire that will light the way through even the most uncertain economic shifts.

This St. Patrick’s Day, let’s look beyond the shamrocks and recognize that our greatest creative assets are already sitting right in front of us: our people, our stories, and our shared commitment to making tomorrow better than today.

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Gemini to clean up the article and add citations.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.


Four Things I Have Learned About Ideas

GUEST POST from Greg Satell

I’ve always been inspired by ideas. Some, like Aristotle’s logic, shape the world for millennia. Others, like Einstein’s relativity, completely change our conceptions of what is possible. Still others, like mRNA vaccines, seem to emerge at just the right time. Ideas are what have marked humanity’s progress from living in caves to civilizations.

Yet bad ideas can destroy just as completely as good ideas can create. Fascism led Europe to effectively wipe itself out in little more than a decade. Communism relegated hundreds of millions of people to poverty and struggle. Corporate debacles like Enron, WeWork, and Theranos have shown us that the wrong idea can cost billions.

We need to handle ideas with care, being open enough to new ones so that we don’t miss out on opportunities, but skeptical enough that we don’t get taken in by ones that do harm. What I’ve learned researching innovation and change is that creating, parsing and evaluating ideas is a skill that must be practiced and honed over time. Here are four things to keep in mind.

1. Ideas Can Come From Anywhere

Albert Einstein was an outcast in the world of physics when he unleashed four papers on the world that would change the field forever. When Jim Allison discovered cancer immunotherapy, it took him three years to find anyone who would invest in it. Katalin Karikó was told to abandon her research into mRNA vaccines or be demoted.

In The Structure of Scientific Revolutions, science historian Thomas Kuhn explained why breakthroughs so often happen this way. As the world changes and evolves, flaws in existing models become more evident, eventually becoming untenable. That’s what sets the stage for a paradigm shift. “Failure of existing rules is the prelude to a search for new ones,” he wrote.

Yet new paradigms almost always need to be championed by outsiders or newcomers rather than acknowledged experts. As the physicist Max Planck put it, “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

In Mapping Innovation, I showed how data and real-world experience bear this out. On the innovation platform Innocentive (now Wazoku Crowd), problems tend not to be solved within the domain in which they arose, but by a practitioner in an adjacent field. In fact, a study analyzing 17.9 million papers found the most highly cited work tended to come from highly specialized experts partnering with an outsider.

2. Ideas Need To Develop Over Time

In 1891, Dr. William Coley had an unusual idea. Inspired by an obscure case, in which a man who had contracted a severe infection was cured of cancer, the young doctor purposely infected a tumor on his patient’s neck with a heavy dose of bacteria. Miraculously, the tumor vanished and the patient remained cancer free even five years later.

It was a breakthrough, of sorts, but for more than 100 years Coley’s work was viewed with skepticism and, in truth, there were serious problems with it. Coley couldn’t explain the underlying mechanism by which an infection could cure cancer, and he couldn’t replicate his results with any consistency. When radiation therapy began showing success, most people forgot about Coley and his work.

Yet a small cadre of supporters kept the faith alive. His daughter, Helen Coley Nauts, would establish the Cancer Research Institute in 1953 to support immune-based approaches to cancer treatment. Over the next four decades, glimmers of hope would appear from time to time, but no one could make Dr. Coley’s idea work.

Then, in 1995 there was a breakthrough. Following a hunch, Jim Allison figured that maybe the problem wasn’t that our bodies couldn’t identify and fight cancer cells, but that something was switching the immune response off. If we could switch it back on, we would have a completely new tool to fight cancer. Allison would win the Nobel Prize for his work on the development of the first cancer immunotherapy drug in 2018.

Dr. Coley had the right idea from the start, but it wasn’t enough. It would take over a century to develop better understanding of cancer, genomics, as well as tools like recombinant DNA to make it work. Literally thousands of researchers worked around the globe for decades to make good on an initial insight.

3. Ideas Need Ecosystems

When Jim Allison was finishing up graduate school in the early 1970s, scientists had just discovered T cells, and he was fascinated. He would later tell me how amazed he was that all these things could be flying around our bodies killing things and somehow not hurt us. He decided to focus his career on figuring out how it all worked.

Over the next decade, Jim and his colleagues started piecing together a larger picture of how the immune system worked through a vast array of signals and receptors that regulate our immune response, triggering it to increase activity and shutting it down once the threat has passed. A colleague had noticed that one of these molecules inhibited tumor growth.

Dr. Coley and Jim Allison occupied different worlds. To Coley, the immune system was like an on/off switch: triggering the immune system should lead directly to an immune response to fight cancer. Yet Allison was part of a much larger ecosystem that led to a different understanding, one that allowed him to target a specific receptor in the regulation system. That opened the floodgates, and now cancer immunotherapy is a major field of its own.

The simple fact is that ideas need ecosystems. Look at any major technology and it’s not the initial invention that creates the impact, but the secondary and tertiary technologies. Electricity needed appliances to change the world. The internal combustion engine needed vehicles. Computers needed software and the Internet.

We can’t just look at nodes, but must consider networks. It’s through those connections that we create the combinations that can help us solve important problems.

4. You Need To Let The Muse Know You’re Serious

One of the toughest things about ideas is that they can only be validated forward, never backward. You never know if you have the right idea until it’s been tested in the real world and, even then, there could be some confounding factor you may be missing. As Kevin Ashton put it, “Creation is a long journey, where most turns are wrong and most ends are dead.”

That’s tough work. You can’t just expect lightning to strike. Truly creative people know you have to work at it every day. Sometimes it goes easier and sometimes it’s a bit tougher. There are constant disappointments and true epiphanies are rare. But if you keep with it you’ll find that most days you can come up with something, even if it’s something small.

Somebody told me once that you have to let the muse know that you’re serious. Producing ideas leads to more ideas, which allows you to start creating connections between them. The more you produce, the better the chances are that some of those connections will be novel and lead to something important. That’s how you produce an idea that matters.

But even then the work isn’t over, because the world your idea enters into keeps evolving and changing. That’s why you need to share it and encourage others to build on it so that it can grow and reach its true potential. Ideas must combine and recombine so that they can memetically evolve. For our ideas to succeed, we need to serve them well.

As Daniel Dennett put it, “A scholar is just a library’s way of making another library.”

— Article courtesy of the Digital Tonto blog
— Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Resilient Innovation

Why the Future Belongs to Organizations That Think in Three Dimensions

LAST UPDATED: March 11, 2026 at 5:28 PM

by Braden Kelley and Art Inteligencia


I. The Spark: A Venn Diagram That Captures a Powerful Truth

The inspiration for this article came from a simple but powerful visual shared in a recent post by Hugo Gonçalves. The image illustrated the relationship between Future Thinking, Design Thinking, and Systems Thinking using a Venn diagram that placed Resilient Innovation at the center.

At first glance, the framework seems obvious. Each discipline is already well established in the world of innovation:

  • Future Thinking helps organizations anticipate multiple possible futures.
  • Design Thinking focuses on solving problems through a human-centered approach.
  • Systems Thinking encourages examining systems holistically in order to understand complexity.

But what makes the diagram compelling is not the individual circles. It is the insight revealed at their intersections. When these disciplines operate together rather than in isolation, they unlock capabilities that would otherwise be out of reach for organizations.

At the intersection of Future Thinking and Design Thinking, organizations begin designing solutions for future scenarios rather than simply reacting to present conditions.

Where Design Thinking meets Systems Thinking, innovation becomes both human-centered and system-aware, producing solutions that account for real-world complexity and ripple effects.

And where Future Thinking intersects with Systems Thinking, organizations gain the ability to prepare systems for long-term sustainability and growing complexity.

Resilient Innovation

When all three perspectives come together, something more powerful emerges: the ability to create innovations that are not only desirable and viable today, but resilient enough to thrive across multiple possible futures.

In a world defined by accelerating change, uncertainty, and interconnected systems, resilient innovation may be the most important capability organizations can build. And as this simple diagram suggests, it thrives at the intersection of three powerful ways of thinking.

II. The Problem with One-Dimensional Innovation

Most organizations pursue innovation through a single dominant lens. Some lean heavily on design thinking workshops and rapid prototyping. Others invest in strategic foresight to anticipate future disruptions. Still others focus on systems analysis to understand complexity and organizational dynamics.

Each of these approaches provides valuable insight. But when used in isolation, each also has significant limitations.

Design thinking, for example, excels at uncovering human needs and translating them into compelling solutions. Yet even the most desirable idea can fail if it ignores the broader systems in which it must operate: regulatory structures, supply chains, cultural norms, or organizational incentives.

Future thinking helps organizations explore uncertainty and imagine multiple possible futures. Scenario planning and horizon scanning can broaden strategic awareness and reduce surprises. But foresight alone rarely produces solutions people are ready to adopt.

Systems thinking provides the ability to map complexity, understand feedback loops, and identify leverage points within interconnected environments. Yet deep system insight does not automatically translate into solutions that resonate with human users.

When organizations rely on only one of these approaches, innovation often stalls. Ideas may be creative but impractical, visionary but disconnected from human behavior, or analytically sound but difficult to implement.

The challenge is not that these disciplines are flawed. The challenge is that they are incomplete on their own.

Innovation today takes place in environments that are simultaneously human, complex, and uncertain. Addressing only one dimension of that reality inevitably leads to blind spots.

Resilient innovation requires something more: the integration of multiple ways of thinking that together enable organizations to anticipate change, understand complexity, and design solutions people actually adopt.

III. Future Thinking: Anticipating Multiple Possible Futures

One of the most dangerous assumptions organizations can make is that the future will look much like the present. History repeatedly shows that markets, technologies, and societal expectations can shift faster than even experienced leaders anticipate.

This is where Future Thinking becomes essential, and the FutureHacking™ methodology helps everyone become their own futurist.

Future thinking is not about predicting a single outcome. Instead, it focuses on exploring a range of plausible futures so that organizations can prepare for uncertainty rather than react to it after the fact.

Practitioners of future thinking use tools such as horizon scanning, trend analysis, and scenario planning to identify emerging signals of change and imagine how those signals might combine to shape different future environments.

By examining multiple possible futures, organizations expand their strategic imagination. They begin to see opportunities and risks that would otherwise remain invisible when planning is based solely on past performance or current market conditions.

Future thinking helps leaders ask better questions:

  • What shifts on the horizon could reshape our industry?
  • What emerging technologies or behaviors could upend our assumptions?
  • How might our customers’ needs evolve over the next decade?

When organizations embed future thinking into their innovation efforts, they gain the ability to design strategies and solutions that remain relevant even as conditions change.

Foresight alone, however, does not create innovation. Imagining the future is only the beginning. Organizations must also translate those visions into solutions that people value and systems can sustain.

That is why future thinking becomes far more powerful when combined with other perspectives, particularly the human-centered creativity of design thinking and the holistic understanding that systems thinking provides.

IV. Design Thinking: Solving Problems with a Human-Centered Approach

If future thinking broadens our view of what could happen, design thinking helps ensure that the solutions we create actually matter to the people they are intended for.

Design thinking rests on a deceptively simple premise: innovation succeeds when it starts with a deep understanding of human needs, behaviors, and motivations. Rather than starting with technology or internal capabilities, design thinking begins with empathy.

Practitioners use methods such as observation, interviews, customer journey mapping, and rapid prototyping to uncover insights into how people experience products, services, and systems in the real world.

Through this process, organizations move past assumptions and begin designing solutions that reflect genuine human needs. Ideas are explored through iterative experimentation, allowing teams to learn quickly what works, what doesn’t, and why.

This approach offers several powerful advantages:

  • It surfaces unmet or unarticulated customer needs.
  • It encourages experimentation and rapid learning.
  • It increases the likelihood that new solutions will be adopted by the people they were designed for.

Design thinking reminds organizations that innovation is not simply about creating something new. It is about creating something people choose to adopt.

Yet even the most human-centered solution can fail if it ignores the broader systems in which it must operate. A beautifully designed product can struggle against regulatory constraints, supply chain limitations, or cultural resistance within organizations.

That is why design thinking alone is not enough. To create innovations that truly endure, organizations must also understand the complex systems surrounding those solutions.

V. Systems Thinking: Seeing the Whole System

While design thinking centers on people and futures thinking explores uncertainty, systems thinking helps organizations understand the complex environments in which innovation must operate.

Modern organizations do not exist in isolation. They function within interconnected systems made up of customers, partners, suppliers, regulators, technologies, cultures, and internal structures. Changes in one part of the system often create ripple effects across many others.

Systems thinking encourages leaders and innovators to step back and examine these relationships holistically rather than focusing only on individual components.

Practitioners use tools such as system maps, causal loop diagrams, and stakeholder ecosystem mapping to identify the patterns, dependencies, and feedback loops that influence outcomes over time.

This perspective provides several critical advantages:

  • It reveals hidden interdependencies within complex environments.
  • It helps identify leverage points where small changes can create outsized impact.
  • It reduces the likelihood of unintended consequences when introducing new solutions.

Many innovations fail not because the idea was flawed, but because the surrounding system was never designed to support it. Incentives may be misaligned. Processes may resist change. The infrastructure to scale the solution may not exist.

Systems thinking helps innovators recognize these structural realities early, allowing them to design solutions that fit within the systems they operate in, or that reshape those systems intentionally.

However, systems thinking alone can also fall short. Deep analysis of complexity does not automatically produce solutions that resonate with people or anticipate future shifts.

That is why resilient innovation emerges not from any single perspective, but from the intersection of futures thinking, design thinking, and systems thinking working together.

Resilient Innovation Infographic

VI. Futures Thinking + Design Thinking: Designing Solutions for Future Scenarios

When futures thinking and design thinking come together, innovation shifts from solving today's problems to designing solutions that remain meaningful in tomorrow's world.

Futures thinking expands the time horizon. It helps organizations explore emerging technologies, evolving social expectations, and potential disruptions that could reshape the environment in which products and services operate.

Design thinking brings the human perspective. It ensures that ideas developed in response to these future possibilities remain grounded in real human needs, motivations, and behaviors.

Together, these disciplines allow organizations to design solutions not just for the present moment, but for multiple possible futures.

Instead of asking only "What do customers need today?", teams begin asking deeper questions:

  • How might customer expectations evolve over the next five to ten years?
  • What new behaviors might emerge as technologies mature?
  • How might shifting social norms reshape what people value?

Several practices emerge from this intersection:

  • Creating future personas that represent how users might behave in different scenarios.
  • Building scenario-based prototypes that test how solutions perform under different future conditions.
  • Using speculative design to explore bold possibilities before they become reality.

This combination helps organizations avoid a common innovation trap: designing solutions perfectly optimized for a present that is already beginning to disappear.

By integrating foresight with human-centered design, organizations create innovations that are better prepared to evolve as the future unfolds.

VII. Design Thinking + Systems Thinking

Human-centered innovation is most powerful when it accounts for the broader system. Integrating empathy with an awareness of complexity ensures that solutions are not only desirable, but also viable and scalable within real-world systems.

Many well-intentioned innovations fail because they neglect system dynamics, leading to unintended consequences that can undermine adoption, efficiency, or long-term impact.

Example practices

  • Journey mapping + system mapping: Understanding the user experience alongside the broader system in which it operates.
  • Stakeholder ecosystem analysis: Identifying all the actors, relationships, and dependencies that influence outcomes.
  • Designing for policy, culture, and infrastructure simultaneously: Ensuring solutions are compatible with the real environment, not just ideal scenarios.

Benefit: Solutions that scale effectively and endure within complex systems, reducing risk and maximizing long-term impact.

VIII. Futures Thinking + Systems Thinking

Combining anticipation with structural understanding allows organizations to prepare their systems for long-term sustainability and complexity. This intersection ensures that strategies and innovations are not merely reactive, but resilient to change and disruption.

Many organizations fail because they plan for the future without considering system-wide dynamics, leaving them vulnerable when change inevitably arrives.

Example practices

  • Resilience mapping: Identifying system vulnerabilities and strengths to anticipate risks and opportunities.
  • Adaptive strategy design: Developing strategies that can flex and evolve as conditions change.
  • Long-term capability building: Investing in the skills, processes, and structures that sustain innovation over time.

Benefit: Organizations prepare for volatility, becoming able to respond to complex challenges without being derailed by disruption.

IX. The Center of the Venn Diagram: Resilient Innovation

True resilience in innovation happens at the intersection of all three disciplines: Futures Thinking, Design Thinking, and Systems Thinking. Organizations operating here anticipate multiple possible futures, design solutions that humans actually want, and understand the systems within which those solutions must survive.

This holistic approach moves beyond isolated innovation efforts, ensuring that solutions are desirable, viable, and adaptable in a complex world.

Capabilities at the center

  • Adaptive innovation portfolios: Maintaining a diverse set of initiatives that can pivot as conditions change.
  • Experimentation across future scenarios: Testing solutions against multiple possible futures to validate their robustness.
  • Human-centered systems transformation: Redesigning processes, structures, and policies to align with real human needs within systemic constraints.

Benefit: Organizations achieve resilient innovation that can thrive amid uncertainty, disruption, and complexity, rather than merely survive them.

Quote on resilience perspectives in innovation

X. What Leaders Must Do to Build This Capability

Building resilient innovation requires leaders to shift both their mindset and their practices. It is no longer enough to treat innovation as a siloed department or a standalone initiative. Leaders must actively create the conditions that allow foresight, design, and systems thinking to work together.

Practical leadership shifts

  • Stop treating innovation as a department: Embed innovation across all teams and functions, not just one unit.
  • Build foresight, design, and systems capabilities together: Develop the cross-disciplinary skills that enable three-dimensional thinking.
  • Foster cross-disciplinary collaboration: Promote communication and shared problem-solving across different areas of expertise.
  • Measure resilience, not just efficiency: Track long-term adaptability, system impact, and future readiness, not just short-term results.
  • Design organizations that can evolve continuously: Create structures and processes that enable constant learning, adaptation, and iteration.

By adopting these leadership practices, organizations can ensure that their innovation efforts are not only creative, but also resilient and scalable within complex systems.

XI. A Simple Test for Your Organization

To assess whether your organization is truly building resilient innovation capabilities, ask yourself three critical questions:

  1. Are we designing only for today's customers, or for tomorrow's realities?
     This question tests whether your innovation anticipates future needs and scenarios.
  2. Do our solutions work only in pilot environments, or within real systems?
     This assesses whether innovations are scalable and resilient within the complex systems in which they must operate.
  3. Are we solving human problems, or just optimizing processes?
     This ensures that your solutions are genuinely human-centered, not merely operationally efficient.

If the answer to any of these questions is "no," the missing capability likely lies at one of the intersections of Futures Thinking, Design Thinking, and Systems Thinking. Closing these gaps is essential to achieving resilient innovation.

XII. Final Reflection: Innovation Is No Longer Linear

The world has become too complex for single-method innovation. The organizations that thrive in the future will be the ones that operate at the intersection of:

  • Anticipation: Preparing for multiple possible futures.
  • Human understanding: Designing solutions people actually want and adopt.
  • System awareness: Ensuring solutions can survive and scale within real-world systems.

Resilient innovation does not come from seeing the future clearly. It comes from being prepared for many possible futures, and from designing systems and solutions that can adapt when those futures arrive. The organizations that master this approach are the ones that will endure, evolve, and thrive.

Frequently Asked Questions: Resilient Innovation

1. What is resilient innovation?

Resilient innovation is an organization's ability to anticipate multiple possible futures, design solutions that humans actually want, and ensure those solutions survive and scale within complex systems. It emerges at the intersection of Futures Thinking, Design Thinking, and Systems Thinking.

2. Why do organizations struggle with one-dimensional innovation?

Many organizations rely on a single approach, such as design thinking, systems thinking, or futures thinking, without integrating the others. This can produce solutions that are desirable but not viable, or insightful but not actionable, resulting in innovation that fails to scale or adapt.

3. How can leaders build resilient innovation capabilities?

Leaders can foster resilient innovation by embedding cross-disciplinary collaboration, developing foresight, design, and systems capabilities together, measuring resilience (not just efficiency), and designing organizations that can continuously learn, adapt, and evolve.

p.s. Kristy Lundström raised the question of whether "regenerative" would be a better adjective than "resilient," and I responded that it depends on where you draw the boundaries of the word resilient. I tend to think of it as an active rather than a passive word, meaning that the way I see the word incorporates elements of regeneration and of making things happen. Keep innovating!

Image Credits: ChatGPT, Google Gemini

Content Authenticity Statement: The subject area, the key elements to focus on, etc., were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add quotes.

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Making Ring-fenced Funding Work

Toughest Challenge Series: Episode 2


GUEST POST from Geoffrey A. Moore


Inspired by the HP Incubations Team

Here’s the challenge. Everyone gets that you need to ring-fence funding for incubating Horizon 3 initiatives. At the corporate level, with the CEO’s direct sponsorship, this can be managed as a separate operating unit with its own budget. The challenge is when the incubation is nested. That means it is being funded out of the operating budget of a Performance Zone business unit, not from some special set-aside allocation.

Nested incubation represents the majority of internally funded Horizon 3 investments. (M&A is a different vehicle, funded out of capex not opex, and is not subject to the challenges we will discuss here). The reason there is a strong preference for nested incubations is that, if successful, they are of immediate interest to the business unit’s current customer base as well as its partner ecosystem. That is, while there can be high technical risk, there is little to no market risk. That said, it is still early days, the technology is not proven, product-market fit still needs to be determined, so it is in no position to generate ROI in the current fiscal year.

The challenge comes to the fore in a tough year where the corporation has to cut back on its operating expenses. Everybody is expected to take a haircut, tighten their belts, suck it up, and carry on. The problem is, when it comes to managing incubations, this simply does not work. Incubation is all about getting and maintaining momentum. If at any point you take your foot off the accelerator, you will lose momentum, and you will never get it back. Instead, you will salvage what you can from the R&D and write the whole thing off to bad timing. But let’s be clear: this is not management, this is mismanagement.

So, what’s the fix? It starts with the business unit surfacing its incubation opportunity during the annual budgeting process. It proposes to set aside a portion of its next year’s budget dedicated to funding the incubation, with funding released on a VC-model based on milestone attainment. This is documented and agreed to at the Executive Leadership Team level. If bad times hit, the choice is never to take a haircut; it is either to carry on or cancel things altogether, and it is made in dialog with the ELT since either way it could have a material impact on the enterprise’s market valuation.

Once the nested incubation has been agreed to, then the business unit leader is responsible for ensuring its funding stays ring-fenced. In particular, this means that resources assigned to the incubation effort cannot be “borrowed” by the current product lines to temporarily address an urgent need. Again, this is all about maintaining momentum.
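The mechanism described above can be sketched in code. This is purely an illustrative model (the `Tranche` and `RingFencedBudget` names, milestones, and amounts are invented, not from any real system), but it captures the two rules: funding unlocks only on milestone attainment, VC-style, and ring-fenced spend can never be trimmed, only carried on or cancelled at the ELT level.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tranche:
    milestone: str        # e.g. "working prototype", "first pilot customer"
    amount: float         # budget released when the milestone is attained
    attained: bool = False

@dataclass
class RingFencedBudget:
    """VC-model release: funding unlocks on milestone attainment only."""
    tranches: List[Tranche] = field(default_factory=list)

    def attain(self, milestone: str) -> None:
        for t in self.tranches:
            if t.milestone == milestone:
                t.attained = True

    def released(self) -> float:
        # Only attained milestones unlock their tranche of funding.
        return sum(t.amount for t in self.tranches if t.attained)

    def haircut(self, percent: float) -> None:
        # The article's rule: the only choices are carry on or cancel.
        raise PermissionError("Ring-fenced spend cannot be trimmed; "
                              "escalate to the ELT to carry on or cancel.")

budget = RingFencedBudget([Tranche("working prototype", 2.0),
                           Tranche("first pilot customer", 3.0)])
budget.attain("working prototype")
print(budget.released())  # 2.0 -- only the attained tranche is unlocked
```

The point of the `haircut` method refusing outright is that the guardrail lives in the mechanism itself, not in anyone's goodwill during a tough budget year.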

To ensure this works as planned, here is a tip from a long-time friend and colleague who is the CFO at a major enterprise:

All ring-fenced items are documented and agreed upon at the ELT level. The way it works is the finance team who work with the budget holder is the guardian of all ring-fenced spend. When changes need to be made, they can’t touch ring-fenced spend. Of course, you have to limit the number of ring-fenced items to give freedom of execution to the leaders, but it’s an effective mechanism.

That’s what he thinks. And that’s what I think too. What do you think?

Image Credit: Google Gemini


Innovation Should Always Serve the People


GUEST POST from Greg Satell

The global activist Srdja Popović once told me that the goal of a revolution should be to become mainstream, to be mundane and ordinary. If you are successful it should be difficult to explain what was won because the previous order seems so unbelievable. That’s what true transformation looks like.

Yet many leaders approach innovation and change as if they were swashbuckling heroes in their own action movie. Companies like Theranos, WeWork and Uber squandered billions of dollars on business models that never made any sense. People post their latest ChatGPT prompts on social media while Elon Musk trolls Twitter.

These days, innovation has too often become solipsistic and self-referential, pursued for the glory of the innovators themselves rather than for the benefit of everyone else. There is also increasing evidence that the venture-funded entrepreneurship model is crowding out more productive investments. We need to move away from hype and focus on impact.

The Eureka Moment Myth

In 1928, Alexander Fleming, a brilliant but sometimes careless scientist, arrived at his lab after a summer holiday to find that a mysterious mold had contaminated his Petri dishes and was eradicating the bacteria colonies he was trying to grow. Intrigued, he decided to study the mold. That’s how Fleming came to be known as the discoverer of penicillin.

Fleming’s story is one that is told and retold because it reinforces so much about what we love about innovation. A brilliant mind meets a pivotal moment of epiphany and—Eureka!— the world is forever changed. Unfortunately, that’s not really how things work. It wasn’t true in Fleming’s case and it won’t work for you.

The truth is that when Fleming published his results in 1929, few took notice. It wasn’t until 1939, a decade later, that Howard Florey and Ernst Chain came across Fleming’s long forgotten paper, understood its significance and undertook the hard work to transform it into a viable treatment that could actually help people.

Yet even then, to make a significant impact on the world, penicillin had to be produced in massive quantities, something that was far out of the reach of two research chemists. Florey reached out to the Rockefeller Foundation for help and moved to the US to work with American labs. In 1943 the U.S.’s War Production Board enlisted 21 companies to produce supplies for the war effort, saving countless lives and ushering in the new age of antibiotics.

The truth is that innovation is never a single event and is rarely achieved by a single person or organization. Rather, it is a process of discovery, engineering and transformation that typically takes decades to complete.

The Rise Of So-So Innovations

It’s been clear for some time now that we’ve been in the midst of a second productivity paradox. The first one, which lasted from the early 1970s to the mid 1990s, saw diminished productivity gains amid increased investment in information technology and prompted economist Robert Solow to note, “You can see the computer age everywhere but in the productivity statistics.”

In 1996, with the rise of the Internet, productivity growth began to boom again but then disappeared just as abruptly in 2004 and hasn’t returned since. Despite the hype surrounding things such as Web 2.0, the mobile Internet and, most recently, artificial intelligence, productivity growth continues to slump.

Part of the answer may have to do with what economists Daron Acemoglu and Pascual Restrepo refer to as so-so technologies, such as automated customer service, which produce meager productivity gains but displace workers nonetheless. In effect, they give the appearance of progress but don’t really improve our lives.

Consider an airport bar where ordering has been automated through the use of touchscreens. It’s hard to see how, given the high rent, food preparation and other costs, this technology would have a dramatic effect on productivity akin to, say, replacing a horse with a tractor in an agricultural economy. In fact, given that the technology hasn’t been widely deployed outside airports, the major effect seems to be inconveniencing patrons.

Acemoglu and Restrepo argue that a large-scale version of this phenomenon has been occurring since the late 80s. Digital technologies, to a large extent, have displaced labor, but have not had the same offsetting productivity impact as earlier technologies, so the overall effect is to decrease wages rather than to raise living standards.

What Innovation Really Looks Like

Katalin Karikó published her first paper on mRNA-based therapy way back in 1990. Unfortunately, she wasn't able to win grants to fund her work and, by 1995, things came to a head. She was told that she could either direct her energies in a different way or be demoted. Karikó chose to stick with it and, if the Covid pandemic had never hit, her name might very well be lost to history.

This type of thing is not unusual. Jim Allison, who won the Nobel Prize for his work on cancer immunotherapy, had a very similar experience when he had his breakthrough, despite having already become a prominent leader in the field. “It was depressing,” he told me. “I knew this discovery could make a difference, but nobody wanted to invest in it.”

The truth is that the next big thing always starts out looking like nothing at all. Things that really change the world always arrive out of context for the simple reason that the world hasn’t changed yet. Kevin Ashton, who himself first came up with the idea for RFID chips, wrote in his book, How to Fly A Horse, “Creation is a long journey, where most turns are wrong and most ends are dead.”

Because digital technology has become so pervasive, offering a substantial architecture that lends itself to tweaking, we’ve lost the plot. Innovation isn’t about Silicon Valley billionaires peacocking around on social media, but solving important problems. We need to shift our focus from disrupting industries to tackling grand challenges.

Building Collaborative Networks to Tackle Grand Challenges

While researching my book Mapping Innovation, I had the opportunity to interview dozens of great innovators, from world-class scientists to super-successful entrepreneurs and top executives at some of the world’s largest corporations. I was surprised to find that, in almost every case, they were some of the most thoughtful, generous people I’d ever met.

The truth is that, for innovation, generosity is often a competitive advantage. By actively sharing their ideas, innovators build up larger networks of people willing to share with them. That makes it that much more likely that they will come across that random piece of information and insight that will help them crack a really tough problem.

The digital revolution has been, if anything, a huge disappointment and Silicon Valley’s tendency to be solipsistic and self-referential probably has a lot to do with that. The simple fact is that the developers banging away at their laptops can achieve little on their own. To tackle our most significant challenges, such as curing cancer, climate change and global hunger, they need to work effectively with specialists with different skills and perspectives.

What we need today is to build collaborative networks to solve grand challenges. The recent CHIPS Bill is a good start. It not only significantly increases our investment in basic research and development, but also allocates billions of dollars of investments into building regional ecosystems and advanced manufacturing.

Yet the most important thing we need to change is our mindset. We need to focus less on disruption and more on creation and, to create for the world we need to focus on what it means to live in it. We can no longer measure progress in terms of how many billionaires a technology creates. We need to focus on making a meaningful impact on people’s lives.

— Article courtesy of the Digital Tonto blog
— Image credit: Google Gemini


Is There Such a Thing as a Collective Growth Mindset?


GUEST POST from Stefan Lindegaard

We often talk about growth mindset as an individual trait but what if mindsets could be shared? What if a team could collectively believe in its ability to learn, adapt, and grow?

I believe it’s possible. In fact, teams with a collective growth mindset often:

  • Learn faster and adapt better to change
  • Handle mistakes and uncertainty with psychological safety
  • Build stronger alignment and collaboration
  • Unlock higher creativity and innovation

Research increasingly supports this. Studies show that shared growth beliefs within teams are linked to higher creativity and performance. It’s less about one person’s mindset and more about how the team thinks, acts, and learns together.

That’s why I created this framework on The Collective Growth Mindset – a team-based approach built on five interconnected areas: Mindset, Shape/Pulse, Communicate, Learn and Network. It’s work in progress but please share your thoughts.

But here’s the real challenge: A collective growth mindset doesn’t just “happen.” It requires leadership, shared practices, and deliberate effort.

So, a few questions for reflection:

  • Does your team have a collective mindset — or just individual ones? If you have a collective mindset, how would you describe this?
  • What helps or hinders your team’s ability to learn and adapt together?
  • How intentional are you about building this as part of your culture?

Let’s learn together!

Image Credit: Stefan Lindegaard


Moral Uncertainty Engines

Designing Systems That Know They Might Be Wrong

LAST UPDATED: March 6, 2026 at 5:07 PM


GUEST POST from Art Inteligencia


I. Introduction: The Next Frontier in Responsible Innovation

As artificial intelligence and algorithmic systems take on increasingly consequential roles in our organizations and societies, a new challenge is emerging. The most dangerous systems are not necessarily the ones that make mistakes. The most dangerous systems are the ones that operate with complete confidence that they are right.

Innovation has always involved uncertainty. But when technology begins influencing decisions about hiring, healthcare, financial access, mobility, and public policy, uncertainty is no longer just a business risk—it becomes a moral one.

This is where a new concept begins to take shape: Moral Uncertainty Engines.

A Moral Uncertainty Engine is a decision architecture designed to recognize that ethical clarity is often elusive. Instead of embedding a single moral framework into a system, these engines evaluate decisions through multiple ethical lenses, quantify disagreements between them, and surface those tensions for human oversight.

In other words, they are systems designed not just to make decisions, but to acknowledge when the ethical landscape is ambiguous.

This represents a profound shift in how we design intelligent systems. For decades, the goal of technology was optimization—finding the single best answer. But the reality of human values is messier. What maximizes efficiency may conflict with fairness. What benefits the majority may harm the vulnerable. What is legal may not always be ethical.

Moral Uncertainty Engines do not attempt to eliminate these tensions. Instead, they illuminate them.

In doing so, they create the possibility for organizations to move beyond simplistic “ethical AI” checklists toward something far more powerful: systems that actively help leaders navigate complex moral tradeoffs.

Because the future of responsible innovation will not belong to the organizations that claim to have solved ethics. It will belong to the ones humble enough to admit they haven’t—and wise enough to design systems that help them think through it anyway.

II. What Is a Moral Uncertainty Engine?

Before we can explore the potential of Moral Uncertainty Engines, we need a clear understanding of what they are and why they matter. At their core, Moral Uncertainty Engines are decision-support systems designed to recognize that ethical certainty is often an illusion.

Traditional algorithms are built to optimize for a defined objective—maximize profit, minimize cost, increase efficiency, or predict outcomes with the highest statistical accuracy. But real-world decisions rarely involve just one objective. They involve competing values, conflicting priorities, and ethical tradeoffs that cannot always be resolved with a single formula.

A Moral Uncertainty Engine is a system designed to evaluate decisions through multiple ethical frameworks simultaneously and to acknowledge when those frameworks disagree.

Instead of embedding a single moral rule set into a system, these engines assess potential actions across different ethical perspectives and quantify the level of uncertainty or conflict between them. The result is not necessarily a single definitive answer, but a clearer picture of the ethical terrain surrounding a decision.

In practice, a Moral Uncertainty Engine typically performs several key functions:

  • Multi-framework evaluation – analyzing decisions through several ethical lenses rather than relying on a single rule set.
  • Ethical tradeoff analysis – identifying where different value systems produce conflicting recommendations.
  • Uncertainty scoring – measuring how confident the system can be in a morally acceptable course of action.
  • Transparency and explanation – making visible the reasoning behind recommendations.
  • Human escalation triggers – flagging decisions where ethical disagreement is high and human judgment is required.

To understand how this works, consider the most common ethical frameworks used in moral reasoning. A Moral Uncertainty Engine might evaluate a decision using several of these simultaneously:

  • Utilitarianism – Which option produces the greatest overall good?
  • Rights-based ethics – Does the decision violate fundamental rights?
  • Justice and fairness – Are harms and benefits distributed equitably?
  • Care ethics – How does the decision affect the most vulnerable stakeholders?

When these frameworks align, the system can move forward with confidence. But when they conflict—as they often do—the engine highlights the disagreement and surfaces the ethical tension instead of burying it.

This is the key insight behind Moral Uncertainty Engines: ethical complexity should not be hidden inside algorithms. It should be surfaced, measured, and navigated deliberately.
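To make the multi-framework evaluation, uncertainty scoring, and escalation triggers above concrete, here is a minimal sketch of how such an engine could be wired together. Everything in it is an illustrative assumption rather than a reference implementation: the four framework scoring functions are placeholders, acceptability is reduced to a 0.0–1.0 score, disagreement is measured as the standard deviation across lenses, and the 0.25 escalation threshold is arbitrary.

```python
from dataclasses import dataclass
from statistics import pstdev
from typing import Callable, Dict

# Each ethical framework scores an option from 0.0 (impermissible)
# to 1.0 (clearly acceptable). Real scoring functions would encode
# genuine domain and ethical analysis; these are stand-ins.
Framework = Callable[[dict], float]

@dataclass
class Verdict:
    scores: Dict[str, float]  # per-framework acceptability scores
    disagreement: float       # spread between frameworks (population std dev)
    escalate: bool            # True when human judgment is required

class MoralUncertaintyEngine:
    def __init__(self, frameworks: Dict[str, Framework], threshold: float = 0.25):
        self.frameworks = frameworks
        self.threshold = threshold  # disagreement level that triggers escalation

    def evaluate(self, option: dict) -> Verdict:
        # Multi-framework evaluation: score the option through every lens.
        scores = {name: fn(option) for name, fn in self.frameworks.items()}
        # Uncertainty scoring: zero when all lenses agree, large when they conflict.
        disagreement = pstdev(scores.values())
        # Human escalation trigger: surface the tension instead of burying it.
        return Verdict(scores, disagreement, disagreement > self.threshold)

# Toy example: triaging a scarce treatment slot (invented inputs).
engine = MoralUncertaintyEngine({
    "utilitarian": lambda o: o["total_benefit"],           # greatest overall good
    "rights":      lambda o: 1.0 - o["rights_risk"],       # avoid rights violations
    "fairness":    lambda o: o["equity_of_distribution"],  # equitable distribution
    "care":        lambda o: o["protects_vulnerable"],     # impact on the vulnerable
})

verdict = engine.evaluate({
    "total_benefit": 0.9,
    "rights_risk": 0.6,              # high risk, so the rights lens scores low
    "equity_of_distribution": 0.4,
    "protects_vulnerable": 0.1,
})
print(verdict.escalate)  # True: the lenses conflict, so a human must decide
```

Note what the engine does not do: it never collapses the four scores into one "ethics number" to act on. When the lenses agree, it proceeds; when they diverge past the threshold, its output is the disagreement itself, handed to a human.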

In many ways, these systems represent the next step in the evolution of responsible innovation. Rather than pretending that technology can eliminate moral ambiguity, they acknowledge that ambiguity is part of the landscape—and they help leaders make better decisions within it.

III. Why Moral Uncertainty Matters Now

The concept of Moral Uncertainty Engines might sound theoretical at first, but the forces making them necessary are already here. As organizations deploy increasingly autonomous technologies and algorithmic decision systems, they are encountering ethical dilemmas at a scale and speed that traditional governance structures were never designed to handle.

In the past, ethical decisions were typically made by humans, often slowly and with room for debate. Today, many of those same decisions are being influenced—or outright determined—by automated systems operating in milliseconds.

That shift creates a fundamental challenge: machines are excellent at optimizing defined objectives, but they struggle when the objectives themselves are morally contested.

AI Systems Are Increasingly Making Moral Decisions

Consider how many domains already rely on algorithmic decision-making:

  • Autonomous vehicles determining how to react in unavoidable accident scenarios
  • Healthcare systems prioritizing patients for scarce treatments
  • Hiring algorithms screening job candidates
  • Financial models determining who receives loans or credit
  • Content moderation systems deciding what speech is allowed online

Each of these systems contains embedded value judgments—whether explicitly designed or not. The problem is that most organizations treat these judgments as technical questions rather than ethical ones.

There Is No Universal Ethical Consensus

Humans themselves rarely agree on the “correct” moral answer in complex situations. Different cultures, organizations, and individuals prioritize different values. Some emphasize maximizing overall benefit, while others prioritize protecting individual rights or safeguarding vulnerable populations.

When technology is designed around a single ethical assumption, it risks imposing that value system invisibly and at scale.

Moral Uncertainty Engines acknowledge this reality by recognizing that ethical frameworks often produce conflicting recommendations. Instead of pretending consensus exists, they surface the disagreement so that organizations can navigate it deliberately.

The Risk of Moral Overconfidence

Perhaps the greatest danger in modern algorithmic systems is not error—it is overconfidence. Many AI systems produce outputs that appear authoritative, even when the underlying ethical reasoning is incomplete, biased, or based on questionable assumptions.

This can create what might be called moral automation bias, where humans defer to algorithmic recommendations simply because they appear objective or mathematically grounded.

Moral Uncertainty Engines introduce a critical counterbalance: they explicitly communicate when a decision is ethically ambiguous, contested, or uncertain.

The Innovation Opportunity

Organizations that learn how to operationalize moral uncertainty will gain an important advantage. They will be better equipped to:

  • Build trust with customers and stakeholders
  • Navigate regulatory scrutiny
  • Avoid reputational crises driven by opaque algorithms
  • Make more resilient long-term decisions

In other words, acknowledging ethical uncertainty is not a weakness. It is a capability—one that responsible innovators will increasingly need as technology becomes more powerful and more deeply embedded in human lives.

IV. How Moral Uncertainty Engines Work

To understand the potential of Moral Uncertainty Engines, it helps to look at how such a system might actually function in practice. While the concept is still emerging, the underlying architecture draws from fields like decision science, AI safety, machine ethics, and risk management.

At a high level, a Moral Uncertainty Engine acts as a layered decision-support system. Rather than producing a single optimized answer, it evaluates potential actions through multiple ethical perspectives and identifies where those perspectives align—or conflict.

A simplified architecture typically includes four key layers.

Layer 1: Situation Awareness

Every ethical decision begins with context. The system first gathers relevant information about the situation, including:

  • The stakeholders involved
  • The potential consequences of different actions
  • Legal or regulatory constraints
  • The scale and reversibility of potential harm

This layer ensures that the system understands the environment in which a decision is being made before attempting to evaluate its ethical implications.

Layer 2: Ethical Framework Evaluation

Next, the system analyzes the possible courses of action through multiple ethical frameworks. Each framework evaluates the decision according to its own principles and priorities.

For example:

  • Utilitarian perspective: Which option produces the greatest overall benefit?
  • Rights-based perspective: Does any option violate fundamental rights?
  • Justice perspective: Are harms and benefits distributed fairly?
  • Care perspective: How are vulnerable stakeholders affected?

Each framework generates its own assessment of the available choices.

Layer 3: Moral Aggregation

Once the frameworks have evaluated the options, the system compares their recommendations. In some cases, the frameworks may converge on a similar outcome. In others, they may strongly disagree.

Several approaches can be used to combine these evaluations, including weighted voting models, scenario simulations, or expected moral value calculations. The goal is not necessarily to produce a single definitive answer, but to understand the balance of ethical considerations across the frameworks.
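One of the aggregation approaches mentioned above, an expected moral value calculation, can be sketched simply. In this hedged example, each framework's score for an option is weighted by a "credence" representing how much confidence the organization places in that framework; the weights and scores below are arbitrary assumptions chosen to show frameworks in conflict versus in agreement.

```python
# Hypothetical "expected moral value" aggregation: a credence-weighted
# average of per-framework scores. Credences and scores are assumptions.

def expected_moral_value(framework_scores, credences):
    """Weighted average of one option's per-framework scores."""
    total = sum(credences.values())
    return sum(credences[f] * s for f, s in framework_scores.items()) / total

credences = {"utilitarian": 0.4, "rights": 0.4, "care": 0.2}

option_a = {"utilitarian": 0.8, "rights": -1.0, "care": 0.1}  # frameworks conflict
option_b = {"utilitarian": 0.4, "rights": 0.9,  "care": 0.6}  # frameworks roughly agree

print(expected_moral_value(option_a, credences))
print(expected_moral_value(option_b, credences))
```

Note that the single aggregated number hides how it was produced: option A's modest score masks a sharp rights-versus-utility conflict. That is exactly why the aggregation layer should be paired with the uncertainty layer that follows.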

Layer 4: Uncertainty and Escalation

The final layer measures how much disagreement exists between the ethical perspectives. If the frameworks align strongly, the system may proceed with a recommendation. If they diverge significantly, the system can flag the decision as ethically uncertain.

At this point, several actions may occur:

  • The system provides an explanation of the ethical tradeoffs
  • A confidence or uncertainty score is generated
  • The decision is escalated to human oversight

This is the core value of a Moral Uncertainty Engine. Instead of hiding ethical tension behind an optimized output, it reveals the complexity of the decision and invites human judgment where it matters most.

In many ways, these systems function less like automated decision-makers and more like ethical copilots—tools that help organizations think more clearly about the moral consequences of their choices.

V. Case Study: Autonomous Vehicles and the Trolley Problem

Few examples illustrate the challenge of moral uncertainty more clearly than autonomous vehicles. When self-driving systems operate on public roads, they must continuously make decisions that involve safety tradeoffs. Most of the time these choices are routine—slow down, change lanes, maintain distance. But in rare circumstances, a vehicle may face an unavoidable accident scenario where harm cannot be completely prevented.

These moments resemble the classic ethical thought experiment known as the “trolley problem,” where a decision must be made between two outcomes, each involving some form of harm. While philosophers have debated such scenarios for decades, autonomous vehicle developers must translate those debates into operational decisions inside real-world systems.

The difficulty is that different ethical frameworks often produce different answers. A strictly utilitarian approach might prioritize minimizing total casualties. A rights-based perspective might argue that intentionally choosing to harm one person to save others violates fundamental moral principles. A fairness perspective might question whether certain groups are systematically placed at greater risk.

Many early attempts to address these questions focused on encoding a single rule or priority structure into the vehicle’s decision logic. But this approach assumes that there is one universally acceptable ethical answer—an assumption that rarely holds across cultures, legal systems, or public opinion.

A Moral Uncertainty Engine offers a different approach. Instead of hard-coding a single moral rule, the system evaluates potential actions across multiple ethical frameworks and identifies where they agree and where they conflict.

For example, the system might:

  • Analyze the scenario from a utilitarian perspective focused on minimizing total harm
  • Evaluate whether any potential action violates protected rights
  • Assess whether the risks are being distributed fairly among stakeholders

If these frameworks converge on the same outcome, the system can act with greater confidence. If they diverge significantly, the vehicle may default to a predefined safety posture—such as minimizing speed and impact energy—rather than making an ethically aggressive tradeoff.
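The converge-or-default behavior described above can be sketched as a simple planner. Everything here is invented for illustration: the maneuver names, the per-framework scores, and the agreement threshold are assumptions, not drawn from any real autonomous-vehicle stack.

```python
# Hypothetical sketch: commit to a framework-preferred maneuver only when
# ethical frameworks converge on it; otherwise fall back to a predefined
# minimal-risk posture. All names, scores, and thresholds are assumptions.

MINIMAL_RISK_MANEUVER = "brake_and_minimize_impact_energy"

def choose_maneuver(candidates, agreement_threshold=0.8):
    """candidates: {maneuver: [per-framework scores in 0..1]}."""
    best, best_score = None, -1.0
    for maneuver, scores in candidates.items():
        if max(scores) - min(scores) > (1 - agreement_threshold):
            continue  # frameworks disagree too sharply about this maneuver
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best, best_score = maneuver, mean
    return best if best is not None else MINIMAL_RISK_MANEUVER

# Frameworks disagree sharply about both aggressive options,
# so the planner declines to choose between them:
contested = {"swerve_left": [0.9, 0.1, 0.5], "swerve_right": [0.8, 0.2, 0.4]}
print(choose_maneuver(contested))  # falls back to the minimal-risk posture
```

By contrast, a maneuver such as slowing down, scored similarly by every framework, would clear the agreement filter and be selected on its merits.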

More importantly, the decision framework itself becomes transparent and auditable. Engineers, regulators, and the public can examine how ethical considerations were evaluated rather than treating the system as a black box.

The lesson from autonomous vehicles extends far beyond transportation. As technology becomes increasingly embedded in complex human environments, organizations will need systems that can recognize ethical tension instead of pretending it doesn’t exist.

Moral Uncertainty Engines provide a path toward that future—one where intelligent systems are designed not only to act, but to reflect the moral complexity of the world they operate within.

VI. Case Study: AI Medical Triage and the Ethics of Scarcity

Healthcare provides one of the most powerful real-world examples of why moral uncertainty matters. Medical systems regularly face situations where resources are limited and difficult prioritization decisions must be made. During public health crises, such as pandemics, these tradeoffs can become especially stark.

Hospitals may need to decide how to allocate ventilators, ICU beds, specialized treatments, or transplant organs when demand exceeds supply. Historically, these decisions have been guided by medical ethics boards, physician judgment, and carefully developed triage protocols. Increasingly, however, algorithmic systems are being introduced to help manage these decisions at scale.

Many triage algorithms are designed to optimize measurable outcomes such as survival probability or expected life-years saved. While these metrics may appear objective, they can create serious ethical tensions when translated into real-world policy.

For example, prioritizing expected life-years may unintentionally disadvantage older patients. Models that rely heavily on historical health data may penalize individuals from underserved communities who have historically received less access to preventative care. Systems designed purely around statistical survival probabilities may overlook broader ethical considerations about fairness, dignity, or social vulnerability.

This is precisely the kind of scenario where a Moral Uncertainty Engine could provide meaningful support.

Instead of optimizing for a single metric, the system evaluates triage decisions through several ethical perspectives simultaneously. A utilitarian framework may prioritize maximizing the number of lives saved. A justice-based framework may emphasize equitable access across demographic groups. A care-based framework may highlight the needs of the most vulnerable patients.

When these perspectives align, the system can offer a strong recommendation. But when they conflict—as they often do in healthcare—the engine surfaces that conflict rather than hiding it behind a numerical score.

The result is not an automated moral verdict. Instead, clinicians and ethics boards receive a clearer picture of the ethical tradeoffs embedded in each decision. The system may present alternative allocation scenarios, highlight potential bias risks, or flag cases that require human deliberation.
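One way such a decision companion might flag cases for deliberation is by comparing how differently the frameworks rank the same patients. The sketch below is purely illustrative: the patient fields, the two ranking lenses, and the rank-gap threshold are assumptions invented for the example, not a clinical protocol.

```python
# Hypothetical sketch: rank triage candidates under two ethical lenses
# and flag patients the lenses rank very differently for ethics-board
# review. Fields, lenses, and the gap threshold are assumptions.

def rank(patients, key):
    """Map patient id -> rank position (0 = highest priority) under a lens."""
    ordered = sorted(patients, key=key, reverse=True)
    return {p["id"]: i for i, p in enumerate(ordered)}

def flag_for_review(patients, max_rank_gap=1):
    utilitarian = rank(patients, key=lambda p: p["expected_life_years"])
    care        = rank(patients, key=lambda p: p["vulnerability"])
    return [pid for pid in utilitarian
            if abs(utilitarian[pid] - care[pid]) > max_rank_gap]

patients = [
    {"id": "p1", "expected_life_years": 40, "vulnerability": 0.2},
    {"id": "p2", "expected_life_years": 10, "vulnerability": 0.9},
    {"id": "p3", "expected_life_years": 25, "vulnerability": 0.5},
]

print(flag_for_review(patients))
```

Here the life-years lens puts p1 first while the care lens puts p2 first, so both are flagged for human deliberation; p3 ranks the same under both lenses and needs no escalation. The output is a reading list for the ethics board, not an allocation decision.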

In this way, the technology functions less as a replacement for human judgment and more as a decision companion. It expands the visibility of ethical consequences while preserving the role of human responsibility.

Healthcare leaders already recognize that medical decisions involve more than statistics. Moral Uncertainty Engines simply help bring that ethical complexity into the design of the systems that increasingly shape those decisions.

VII. Leading Companies and Startups Exploring Moral Uncertainty

Moral Uncertainty Engines are still an emerging concept, but the foundational components of this category are already being developed across the technology ecosystem. Large technology firms, AI safety organizations, governance platforms, and startups focused on responsible AI are all contributing pieces of what could eventually become full ethical decision infrastructures.

While few organizations are explicitly using the term “Moral Uncertainty Engine,” many are working on the critical building blocks: AI alignment systems, ethical reasoning frameworks, transparency tools, and governance platforms designed to ensure responsible decision-making.

Large Technology Companies

Several major technology companies are investing heavily in AI alignment and responsible innovation. Their research programs are exploring ways to ensure that increasingly autonomous systems operate within acceptable ethical boundaries.

  • OpenAI – Research into alignment methods such as reinforcement learning from human feedback and systems designed to incorporate human values into AI behavior.
  • Google DeepMind – Work on AI safety, scalable oversight, and constitutional approaches to guiding model behavior.
  • Microsoft – Development of responsible AI frameworks, governance tools, and organizational guidelines for ethical AI deployment.

These companies are helping to define the infrastructure that future ethical decision systems will rely upon.

Emerging Startups

A growing number of startups are focusing specifically on governance, auditing, and ethical oversight for AI systems. These organizations are building platforms that help companies monitor algorithmic behavior, detect bias, and ensure compliance with evolving regulatory standards.

  • Credo AI – Provides governance platforms designed to help organizations operationalize responsible AI practices.
  • Holistic AI – Offers tools for auditing AI systems, identifying bias, and evaluating risk across machine learning models.
  • CIRIS – Focuses on runtime governance layers designed to help organizations manage the behavior of AI agents in production environments.

These companies are not yet full Moral Uncertainty Engines, but they are building the monitoring and governance layers that such systems will likely require.

Academic and Research Institutions

Some of the most important advances in machine ethics and moral decision systems are emerging from research institutions exploring how ethical reasoning can be integrated into AI architectures.

  • Stanford Human-Centered AI
  • MIT Media Lab
  • Oxford’s AI safety and governance research community

Researchers in these communities are experimenting with methods for translating ethical theory into operational systems capable of evaluating tradeoffs, measuring moral uncertainty, and providing transparent reasoning.

Taken together, these organizations represent the early ecosystem surrounding what could become one of the most important innovation categories of the next decade: technologies designed not just to make decisions, but to help society navigate the moral complexity that accompanies them.

VIII. The Innovation Opportunities

If Moral Uncertainty Engines sound like a niche academic concept today, history suggests that may not remain the case for long. Many of the most important innovation categories begin as abstract ideas before evolving into entire industries. Cloud computing, cybersecurity, and digital trust platforms all followed similar paths.

As AI systems become more deeply embedded in critical decisions, the ability to surface ethical tradeoffs and navigate moral uncertainty will become an increasingly valuable capability. This opens the door to several new innovation opportunities for entrepreneurs, technology companies, and forward-looking organizations.

Ethical Infrastructure Platforms

One opportunity lies in the creation of ethical infrastructure platforms—systems designed to plug into existing AI models and decision engines to provide moral evaluation layers. These platforms could function much like security software or monitoring tools, continuously assessing algorithmic behavior and flagging ethical risks.

Capabilities in this category might include:

  • Multi-framework ethical scoring for algorithmic decisions
  • Real-time bias detection and mitigation
  • Transparency dashboards for regulators and stakeholders
  • Ethical risk monitoring across large AI deployments

In effect, these platforms would provide the ethical equivalent of observability tools used in modern software systems.

Organizational Decision Copilots

Another opportunity lies in decision-support tools designed specifically for human leaders. Instead of automating decisions, these systems would act as ethical copilots—helping executives, policymakers, and product teams evaluate complex tradeoffs before implementing new technologies or policies.

Such tools might help organizations:

  • Simulate the ethical consequences of product features
  • Evaluate policy choices across competing value systems
  • Identify stakeholder groups most likely to be affected by a decision
  • Stress-test innovations against potential ethical controversies

In this model, the goal is not to replace human judgment, but to strengthen it with better visibility into ethical complexity.

Ethical Digital Twins

A particularly intriguing possibility is the development of ethical digital twins—simulation environments where organizations can test how different decisions might impact stakeholders across multiple ethical frameworks before deploying them in the real world.

Just as engineers use digital twins to simulate the performance of physical systems, leaders could use ethical simulation environments to anticipate unintended consequences, reputational risks, or fairness concerns before they emerge.

The Birth of a New Category

If these opportunities mature, Moral Uncertainty Engines could become the foundation for a new category of enterprise technology focused on ethical intelligence. Organizations would no longer rely solely on legal compliance or reactive crisis management to address ethical challenges. Instead, they would have systems designed to help them navigate those challenges proactively.

In a world where innovation increasingly shapes society at scale, the ability to operationalize ethical awareness may become just as important as the ability to write code or analyze data.

IX. The Risks and Criticisms of Moral Uncertainty Engines

Like any emerging technology category, Moral Uncertainty Engines bring both promise and potential pitfalls. While these systems could help organizations navigate complex ethical terrain more thoughtfully, they also raise legitimate concerns about how moral reasoning is translated into software and who ultimately holds responsibility for the outcomes.

If organizations are not careful, the very tools designed to improve ethical decision-making could inadvertently create new forms of risk.

The Danger of Moral Outsourcing

One of the most common criticisms is the risk of moral outsourcing. When organizations rely too heavily on algorithmic systems to evaluate ethical decisions, leaders may begin to treat those systems as final authorities rather than decision-support tools.

This can create a dangerous dynamic where responsibility quietly shifts from humans to algorithms. Instead of asking whether a decision is morally defensible, leaders may simply ask whether the system approved it.

Moral Uncertainty Engines should never replace human judgment. Their purpose is to illuminate ethical tradeoffs—not to absolve decision-makers of responsibility.

The Illusion of Objectivity

Another concern is the possibility that ethical scoring systems may create a false sense of precision. Numbers, dashboards, and scores can make complex moral questions appear more objective than they actually are.

But ethical frameworks themselves contain assumptions and value judgments. The choice of which frameworks to include, how they are weighted, and how outcomes are interpreted can all influence the system’s conclusions.

Without transparency, these embedded assumptions may go unnoticed by the people relying on the system.

Cultural and Societal Bias

Ethics is deeply shaped by culture, history, and social context. A system designed around one set of moral priorities may not reflect the values of another community or region.

If Moral Uncertainty Engines are built primarily by a narrow set of organizations or cultural perspectives, they could unintentionally export those values into systems used around the world.

Designing these systems responsibly will require diverse input from ethicists, policymakers, technologists, and communities affected by the decisions being modeled.

The Complexity Challenge

Finally, there is a practical challenge: ethical reasoning is incredibly complex. Translating philosophical frameworks into computational systems is difficult, and oversimplification is always a risk.

Not every moral dilemma can be captured in a model, and not every ethical conflict can be resolved through structured analysis.

Recognizing these limitations is essential. The goal of Moral Uncertainty Engines should not be to mechanize morality, but to provide better tools for navigating difficult decisions.

If designed thoughtfully, these systems can serve as valuable companions to human judgment. But if treated as definitive authorities, they risk becoming yet another example of technology that promises clarity while quietly obscuring the deeper questions that matter most.

X. The Leadership Imperative

The rise of Moral Uncertainty Engines underscores a critical lesson for leaders: technology alone cannot solve ethical complexity. Organizations that rely on automated systems to make moral decisions without human oversight risk both moral and reputational failure.

Leaders must approach these tools as companions rather than replacements—systems designed to illuminate ethical tradeoffs, measure uncertainty, and support thoughtful deliberation.

Key Principles for Responsible Leadership

  • Accountability: Leaders retain ultimate responsibility for decisions, even when supported by Moral Uncertainty Engines.
  • Transparency: Ensure that the reasoning behind system recommendations is visible, understandable, and auditable by humans.
  • Human Oversight: Use automated insights as decision-support, not as authoritative directives. Escalate ethically ambiguous scenarios to human judgment.
  • Ethical Culture: Encourage organizational practices that prioritize ethical reflection alongside operational efficiency and innovation.
  • Diversity of Perspectives: Incorporate insights from ethicists, technologists, and stakeholders representing different communities and cultural contexts.

Moral Uncertainty Engines are powerful because they make ethical ambiguity visible. But the value of that visibility depends entirely on the people interpreting it. Leaders who are willing to engage with these systems thoughtfully—questioning assumptions, evaluating tradeoffs, and embracing uncertainty—will turn ethical complexity into a strategic advantage.

In short, the technology alone does not create ethical outcomes. It is the combination of human judgment, responsible leadership, and machine-supported insight that allows organizations to navigate moral uncertainty successfully.

XI. Conclusion: Designing Systems That Know Their Limits

Moral Uncertainty Engines represent a profound shift in how we think about technology and ethics. They are not designed to replace human judgment, nor to provide definitive moral answers. Instead, they offer a framework for surfacing ethical tradeoffs, quantifying uncertainty, and supporting deliberate decision-making in complex contexts.

The systems of the future will need to balance intelligence with humility. They must optimize for outcomes while acknowledging the moral ambiguity inherent in most consequential decisions. By doing so, they create space for leaders, teams, and organizations to reflect, deliberate, and choose responsibly.

Across industries—from autonomous vehicles to healthcare triage, from hiring algorithms to public policy—ethical complexity is unavoidable. Moral Uncertainty Engines give organizations the tools to confront that complexity openly rather than hiding it behind optimization metrics or opaque algorithms.

In practice, these engines act as ethical copilots. They illuminate areas of tension, highlight disagreements between frameworks, and provide decision-makers with richer, more nuanced insights. The true measure of their success is not perfect moral accuracy, but the degree to which they enable human leaders to make informed, accountable, and ethically aware decisions.

Ultimately, the organizations that thrive in an increasingly automated and interconnected world will be those that design systems capable of acknowledging their limits—and that pair those systems with leaders willing to navigate uncertainty thoughtfully. In this way, Moral Uncertainty Engines may become one of the most important tools for fostering responsible innovation in the 21st century.

Frequently Asked Questions

1. What is a Moral Uncertainty Engine?

A Moral Uncertainty Engine is a decision-support system designed to evaluate choices through multiple ethical frameworks, quantify areas of disagreement, and provide transparent guidance or escalation when ethical uncertainty is high. Its purpose is to help organizations navigate complex moral tradeoffs rather than replace human judgment.

2. Why are Moral Uncertainty Engines important today?

As AI and algorithmic systems increasingly make decisions that affect people’s lives, the ability to surface and manage ethical uncertainty becomes critical. These engines reduce risks of overconfidence, bias, and hidden ethical assumptions, enabling organizations to make more responsible, accountable, and trusted decisions.

3. Which industries or applications can benefit from Moral Uncertainty Engines?

Any sector where complex decisions with moral implications are made can benefit, including healthcare triage, autonomous vehicles, hiring and HR systems, financial services, content moderation, and public policy. Essentially, any domain where decisions have significant ethical consequences can leverage these systems to guide thoughtful human oversight.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Do You Have an Empty Tank?

GUEST POST from Mike Shipulski

Sometimes your energy level runs low. That's not a bad thing; it's just how things go. Just like a car's gas tank runs low, our own tanks, both physical and emotional, also need filling. Again, not a bad thing. That's what tanks are for: they hold the fuel.

We’re pretty good at remembering that a car’s tank is finite. At the start of the morning commute, the car’s fuel gauge gives a clear reading of the fuel level and we do the calculation to determine whether we can make it or need to stop for fuel. And we do the same thing in the evening: look at the gauge, determine whether we need fuel and act accordingly. We rarely run the car out of fuel, because the car continuously monitors and displays the fuel level and we know there are consequences if we run out.

We’re not so good at remembering that our personal tanks are finite. At the start of the day, there are no objective fuel gauges to display our internal fuel levels. The only calculation we make is this: if we can make it out of bed, we have enough fuel for the day. We need to do better than that.

Our bodies do have fuel gauges of sorts. When our fuel is low we can be irritable, have poor concentration, and be easily distracted. Though these gauges are challenging to see and difficult to interpret, they can be used effectively if we slow down and tune into our bodies. The most troubling part has nothing to do with our internal fuel gauges. Most troubling is that we fail to respect their low-fuel warnings even when we do recognize them. It’s as if we don’t acknowledge that our tanks are finite.

We don’t think our cars are flawed because their fuel tanks run low as we drive. Yet we see the finite nature of our internal fuel tanks as a sign of weakness. Why is that? Rationally, we know all fuel tanks are finite and that their fuel level drops with activity. But in the moment, when our tanks are low, we think something is wrong with us, we think we’re not whole, we think less of ourselves.

When your tank is low, don’t curse, don’t blame, don’t feel sorry and don’t judge. It’s okay. That’s what tanks do.

A simple rule for all empty tanks – put fuel in them.

Image credit: Pixabay


Resilient Innovation

Why the Future Belongs to Organizations That Think in Three Dimensions

LAST UPDATED: March 11, 2026 at 6:56 PM

by Braden Kelley and Art Inteligencia


I. The Spark: A Venn Diagram That Captures a Powerful Truth

Inspiration for this article came from a simple but powerful visual shared in a recent post by Hugo Gonçalves. The image illustrated the relationship between Future Thinking, Design Thinking, and Systems Thinking using a Venn diagram that placed Resilient Innovation at the center.

At first glance the framework seems obvious. Each discipline is already well established in the innovation world:

  • Future Thinking helps organizations anticipate multiple possible futures.
  • Design Thinking focuses on solving problems through a human-centered approach.
  • Systems Thinking encourages examining systems holistically to understand complexity.

But what makes the diagram compelling is not the individual circles. It is the insight revealed at their intersections. When these disciplines operate together rather than in isolation, they unlock capabilities that are difficult for organizations to achieve otherwise.

At the intersection of Future Thinking and Design Thinking, organizations begin designing solutions for future scenarios rather than merely reacting to present conditions.

Where Design Thinking meets Systems Thinking, innovation becomes both human-centered and system-aware, producing solutions that account for real-world complexity and ripple effects.

And where Future Thinking intersects with Systems Thinking, organizations gain the ability to prepare systems for long-term sustainability and increasing complexity.

Resilient Innovation

When all three perspectives come together, something more powerful emerges: the ability to create innovations that are not only desirable and viable today, but resilient enough to thrive across multiple possible futures.

In a world defined by accelerating change, uncertainty, and interconnected systems, resilient innovation may be the most important capability organizations can develop. And as this simple diagram suggests, it thrives at the intersection of three powerful ways of thinking.

II. The Problem with One-Dimensional Innovation

Most organizations pursue innovation through a single dominant lens. Some lean heavily into design thinking workshops and rapid prototyping. Others invest in strategic foresight to anticipate future disruption. Still others focus on systems analysis to understand complexity and organizational dynamics.

Each of these approaches provides valuable insight. But when used in isolation, each also has significant limitations.

Design thinking, for example, excels at uncovering human needs and translating them into compelling solutions. Yet even the most desirable idea can fail if it ignores the larger systems it must operate within — regulatory structures, supply chains, cultural norms, or organizational incentives.

Future thinking helps organizations explore uncertainty and imagine multiple possible futures. Scenario planning and horizon scanning can expand strategic awareness and reduce surprise. But foresight alone rarely produces solutions that people are ready to adopt.

Systems thinking provides the ability to map complexity, understand feedback loops, and identify leverage points within interconnected environments. However, deep system insight does not automatically translate into solutions that resonate with human users.

When organizations rely on only one of these approaches, innovation often stalls. Ideas may be creative but impractical, visionary but disconnected from human behavior, or analytically sound but difficult to implement.

The challenge is not that these disciplines are flawed. The challenge is that they are incomplete on their own.

Innovation today takes place in environments that are simultaneously human, complex, and uncertain. Addressing only one dimension of that reality inevitably leads to blind spots.

Resilient innovation requires something more: the integration of multiple ways of thinking that together allow organizations to anticipate change, understand complexity, and design solutions people will actually embrace.

III. Future Thinking: Anticipating Multiple Possible Futures

One of the most dangerous assumptions organizations can make is that the future will look largely like the present. History repeatedly shows that markets, technologies, and societal expectations can shift faster than even experienced leaders anticipate.

This is where Future Thinking becomes essential; the FutureHacking™ methodology helps everyone become their own futurist.

Future thinking is not about predicting a single outcome. Instead, it focuses on exploring a range of plausible futures so organizations can prepare for uncertainty rather than react to it after the fact.

Practitioners of future thinking use tools such as horizon scanning, trend analysis, and scenario planning to identify emerging signals of change and imagine how those signals might combine to shape different future environments.

By examining multiple possible futures, organizations expand their strategic imagination. They begin to see opportunities and risks that would otherwise remain invisible when planning is based solely on past performance or current market conditions.

Future thinking helps leaders ask better questions:

  • What changes on the horizon could reshape our industry?
  • Which emerging technologies or behaviors might disrupt our assumptions?
  • How might our customers’ needs evolve over the next decade?

When organizations incorporate future thinking into their innovation efforts, they gain the ability to design strategies and solutions that remain relevant even as conditions change.

However, foresight alone does not create innovation. Imagining the future is only the beginning. Organizations must also translate those insights into solutions that people value and systems can support.

That is why future thinking becomes far more powerful when combined with other perspectives — particularly the human-centered creativity of design thinking and the holistic understanding provided by systems thinking.

IV. Design Thinking: Solving Problems with a Human-Centered Approach

If future thinking expands our view of what might happen, design thinking helps ensure that the solutions we create actually matter to the people they are intended to serve.

Design thinking is grounded in a deceptively simple premise: innovation succeeds when it begins with a deep understanding of human needs, behaviors, and motivations. Rather than starting with technology or internal capabilities, design thinking begins with empathy.

Practitioners use methods such as observation, interviews, journey mapping, and rapid prototyping to uncover insights about how people experience products, services, and systems in the real world.

Through this process, organizations move beyond assumptions and begin designing solutions that reflect genuine human needs. Ideas are then explored through iterative experimentation, allowing teams to quickly learn what works, what doesn’t, and why.

This approach offers several powerful advantages:

  • It surfaces unmet or unarticulated customer needs.
  • It encourages experimentation and rapid learning.
  • It increases the likelihood that new solutions will be embraced by the people they are designed for.

Design thinking reminds organizations that innovation is not simply about creating something new. It is about creating something people will choose to adopt.

However, even the most human-centered solution can fail if it ignores the broader systems in which it must operate. A beautifully designed product may struggle against regulatory constraints, supply chain limitations, or cultural resistance within organizations.

This is why design thinking alone is not enough. To create innovations that truly endure, organizations must also understand the complex systems surrounding those solutions.

V. Systems Thinking: Seeing the Whole System

While design thinking focuses on people and future thinking explores uncertainty, systems thinking helps organizations understand the complex environments in which innovation must operate.

Modern organizations do not exist in isolation. They function within interconnected systems made up of customers, partners, suppliers, regulators, technologies, cultures, and internal structures. Changes in one part of the system often create ripple effects across many others.

Systems thinking encourages leaders and innovators to step back and examine these relationships holistically rather than focusing only on individual components.

Practitioners use tools such as system maps, causal loop diagrams, and stakeholder ecosystem mapping to identify patterns, dependencies, and feedback loops that influence outcomes over time.

This perspective provides several critical advantages:

  • It reveals hidden interdependencies within complex environments.
  • It helps identify leverage points where small changes can create large impact.
  • It reduces the likelihood of unintended consequences when introducing new solutions.

Many innovations fail not because the idea was flawed, but because the surrounding system was never designed to support it. Incentives may be misaligned. Processes may resist change. Infrastructure may not exist to scale the solution.

Systems thinking helps innovators recognize these structural realities early, allowing them to design solutions that fit within — or intentionally reshape — the systems they operate in.

Yet systems thinking alone can also fall short. Deep analysis of complexity does not automatically produce solutions that resonate with people or anticipate future shifts.

This is why resilient innovation emerges not from any one perspective, but from the intersection of future thinking, design thinking, and systems thinking working together.

VI. Future Thinking + Design Thinking: Designing Solutions for Future Scenarios

When future thinking and design thinking come together, innovation shifts from solving today’s problems to designing solutions that remain meaningful in tomorrow’s world.

Future thinking expands the time horizon. It helps organizations explore emerging technologies, evolving social expectations, and potential disruptions that could reshape the environment in which products and services operate.

Design thinking brings the human perspective. It ensures that ideas developed in response to these future possibilities remain grounded in real human needs, motivations, and behaviors.

Together, these disciplines allow organizations to design solutions not just for the present moment, but for multiple possible futures.

Rather than asking only “What do customers need today?” teams begin asking deeper questions:

  • How might customer expectations evolve in the next five to ten years?
  • What new behaviors could emerge as technologies mature?
  • How might shifting social norms reshape what people value?

Several practices emerge from this intersection:

  • Creating future personas that represent how users might behave in different scenarios.
  • Building scenario-based prototypes that test how solutions perform under different future conditions.
  • Using speculative design to explore bold possibilities before they become reality.

This combination helps organizations avoid a common innovation trap: designing solutions perfectly optimized for a present that is already beginning to disappear.

By integrating foresight with human-centered design, organizations create innovations that are better prepared to evolve as the future unfolds.

VII. Design Thinking + Systems Thinking: Human-Centered, System-Aware Innovation

Human-centered innovation is most powerful when it takes the wider system into account. Integrating empathy with complexity awareness ensures that solutions are not only desirable but also viable and scalable within real-world systems.

Many well-intentioned innovations fail because they neglect system dynamics, leading to unintended consequences that can undermine adoption, efficiency, or long-term impact.

Example Practices

  • Journey Mapping + System Mapping: Understand the user experience alongside the broader system in which it operates.
  • Stakeholder Ecosystem Analysis: Identify all the players, relationships, and dependencies that influence outcomes.
  • Designing for Policy, Culture, and Infrastructure Simultaneously: Ensure solutions are compatible with the real-world environment, not just ideal scenarios.

Benefit: Solutions that scale effectively and endure within complex systems, reducing risk and maximizing long-term impact.

VIII. Future Thinking + Systems Thinking: Preparing Systems for Long-Term Sustainability

Combining anticipation with structural understanding enables organizations to prepare systems for long-term sustainability and complexity. This intersection ensures that strategies and innovations are not just reactive but resilient to change and disruption.

Many organizations fail because they plan for the future without considering system-wide dynamics, leaving them vulnerable when change inevitably occurs.

Example Practices

  • Resilience Mapping: Identify system vulnerabilities and strengths to anticipate risks and opportunities.
  • Adaptive Strategy Design: Develop strategies that can flex and evolve as conditions change.
  • Long-Term Capability Building: Invest in skills, processes, and structures that sustain innovation over time.

Benefit: Organizations become prepared for volatility, able to respond to complex challenges without being derailed by disruption.

IX. The Center of the Venn Diagram: Resilient Innovation

True innovation resilience happens at the intersection of all three disciplines: Future Thinking, Design Thinking, and Systems Thinking. Organizations that operate here anticipate multiple possible futures, design solutions humans actually want, and understand the systems those solutions must survive inside.

This holistic approach moves beyond isolated innovation efforts, ensuring solutions are desirable, viable, and adaptable in a complex world.

Capabilities at the Center

  • Adaptive Innovation Portfolios: Maintain a diverse set of initiatives that can pivot as conditions change.
  • Experimentation Across Future Scenarios: Test solutions against multiple possible futures to validate robustness.
  • Human-Centered System Transformation: Redesign processes, structures, and policies to align with real human needs within systemic constraints.

Benefit: Organizations achieve resilient innovation that can thrive amidst uncertainty, disruption, and complexity, rather than merely surviving them.

X. What Leaders Must Do to Build This Capability

Building resilient innovation requires leaders to shift their mindset and practices. It’s no longer enough to treat innovation as a siloed department or isolated initiative. Leaders must actively create the conditions that allow foresight, design, and systems thinking to work together.

Practical Leadership Shifts

  • Stop Treating Innovation as a Department: Embed innovation across teams and functions, not just in a single unit.
  • Build Foresight, Design, and Systems Capabilities Together: Develop cross-disciplinary skills that enable three-dimensional thinking.
  • Encourage Cross-Disciplinary Collaboration: Foster communication and shared problem-solving across different expertise areas.
  • Measure Resilience, Not Just Efficiency: Track long-term adaptability, system impact, and future-readiness, not only short-term outputs.
  • Design Organizations That Can Evolve Continuously: Create structures and processes that allow constant learning, adaptation, and iteration.

By adopting these leadership practices, organizations can ensure that their innovation efforts are not only creative but also resilient and scalable within complex systems.

XI. A Simple Test for Your Organization

To evaluate whether your organization is truly building resilient innovation capabilities, ask three critical questions:

  1. Are we designing only for today’s customers, or tomorrow’s realities?
    This question tests whether your innovation anticipates future needs and scenarios.
  2. Do our solutions work only in pilot environments, or within real systems?
    This evaluates whether innovations are scalable and resilient within the complex systems they must operate in.
  3. Are we solving human problems, or just optimizing processes?
    This ensures that your solutions are genuinely human-centered, not just operationally efficient.

If your honest answer to any of these leans toward the first half of the question, the missing capability likely lies at one of the intersections of Future Thinking, Design Thinking, and Systems Thinking. Addressing these gaps is critical for achieving resilient innovation.
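The three questions above can be run as a simple checklist. This is a hypothetical sketch of ours, not part of the original test: each answer is True when the organization leans toward the healthier second half of the question, and the gap labels are our paraphrase of which intersection is missing.

```python
# Hypothetical checklist version of the three-question self-test.
# Each entry pairs a (paraphrased) question with the capability gap
# implied by a weak answer.

QUESTIONS = [
    ("Are we designing for tomorrow's realities, not only today's customers?",
     "Future Thinking + Design Thinking"),
    ("Do our solutions work within real systems, not only pilot environments?",
     "Design Thinking + Systems Thinking"),
    ("Are we solving human problems, not just optimizing processes?",
     "Human-centered grounding across all three lenses"),
]

def missing_capabilities(answers: list) -> list:
    """Return the capability gaps for every question answered False."""
    return [gap for (_, gap), ok in zip(QUESTIONS, answers) if not ok]

# Example: strong on foresight-driven design, weaker on the other two.
print(missing_capabilities([True, False, False]))
```

An empty result suggests all three intersections are active; anything else names where to invest next.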

XII. Final Thought: Innovation Is No Longer Linear

The world has become too complex for single-method innovation. Organizations that thrive in the future will be those that operate at the intersection of:

  • Anticipation: Preparing for multiple possible futures.
  • Human Understanding: Designing solutions people actually want and will adopt.
  • System Awareness: Ensuring solutions can survive and scale within real-world systems.

Resilient innovation does not come from seeing the future clearly. It comes from being prepared for many possible futures and designing systems and solutions that can adapt when they arrive. Organizations that master this approach are the ones that will endure, evolve, and thrive.

FAQ: Resilient Innovation

1. What is resilient innovation?

Resilient innovation is the ability of an organization to anticipate multiple possible futures, design solutions humans actually want, and ensure those solutions survive and scale within complex systems. It emerges at the intersection of Future Thinking, Design Thinking, and Systems Thinking.

2. Why do organizations struggle with one-dimensional innovation?

Many organizations rely on a single approach — such as design thinking, systems thinking, or future thinking — without integrating the others. This can lead to solutions that are desirable but not viable, or insightful but not actionable, resulting in innovation that fails to scale or adapt.

3. How can leaders build resilient innovation capabilities?

Leaders can foster resilient innovation by embedding cross-disciplinary collaboration, developing foresight, design, and systems capabilities together, measuring resilience (not just efficiency), and designing organizations that can continuously learn, adapt, and evolve.

p.s. Kristy Lundström posed the question of whether regenerative would be a better adjective than resilient, and I responded that it depends on where you draw the boundaries on the word resilient. I tend to think of it as an active word instead of a passive one, meaning the way that I look at the word incorporates elements of regeneration and making *#&! happen. Keep innovating!

Image credits: ChatGPT, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of February 2026

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are February’s ten most popular innovation posts:

  1. Three Myths That Kill Change and Transformation — by Greg Satell
  2. Why a Customer Experience Audit is Non-Negotiable in 2026 — by Braden Kelley
  3. Innovation Lessons from the 50 Most Admired Companies of 2026 — by Braden Kelley
  4. Is Your Customer Experience a Lie? — by Braden Kelley
  5. Important or Urgent? — by Stefan Lindegaard
  6. The Greatest Inventor You’ve Never Heard of — by John Bessant
  7. 5 Simple Keys to Becoming a Powerful Communicator — by Greg Satell
  8. Do You Have What It Takes to be a Visionary? — Exclusive Interview with Mark C. Winters
  9. Temporal Agency – How Innovators Stop Time from Bullying Them — by Art Inteligencia
  10. Causal AI – Moving Beyond Prediction to Purpose — by Art Inteligencia

BONUS – Here are five more strong articles published in January that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter, or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last five years: