Moral Uncertainty Engines

Designing Systems That Know They Might Be Wrong

GUEST POST from Art Inteligencia


I. Introduction: The Next Frontier in Responsible Innovation

As artificial intelligence and algorithmic systems take on increasingly consequential roles in our organizations and societies, a new challenge is emerging. The most dangerous systems are not necessarily the ones that make mistakes. The most dangerous systems are the ones that operate with complete confidence that they are right.

Innovation has always involved uncertainty. But when technology begins influencing decisions about hiring, healthcare, financial access, mobility, and public policy, uncertainty is no longer just a business risk—it becomes a moral one.

This is where a new concept begins to take shape: Moral Uncertainty Engines.

A Moral Uncertainty Engine is a decision architecture designed to recognize that ethical clarity is often elusive. Instead of embedding a single moral framework into a system, these engines evaluate decisions through multiple ethical lenses, quantify disagreements between them, and surface those tensions for human oversight.

In other words, they are systems designed not just to make decisions, but to acknowledge when the ethical landscape is ambiguous.

This represents a profound shift in how we design intelligent systems. For decades, the goal of technology was optimization—finding the single best answer. But the reality of human values is messier. What maximizes efficiency may conflict with fairness. What benefits the majority may harm the vulnerable. What is legal may not always be ethical.

Moral Uncertainty Engines do not attempt to eliminate these tensions. Instead, they illuminate them.

In doing so, they create the possibility for organizations to move beyond simplistic “ethical AI” checklists toward something far more powerful: systems that actively help leaders navigate complex moral tradeoffs.

Because the future of responsible innovation will not belong to the organizations that claim to have solved ethics. It will belong to the ones humble enough to admit they haven’t—and wise enough to design systems that help them think through it anyway.

II. What Is a Moral Uncertainty Engine?

Before we can explore the potential of Moral Uncertainty Engines, we need a clear understanding of what they are and why they matter. At their core, Moral Uncertainty Engines are decision-support systems designed to recognize that ethical certainty is often an illusion.

Traditional algorithms are built to optimize for a defined objective—maximize profit, minimize cost, increase efficiency, or predict outcomes with the highest statistical accuracy. But real-world decisions rarely involve just one objective. They involve competing values, conflicting priorities, and ethical tradeoffs that cannot always be resolved with a single formula.

A Moral Uncertainty Engine is a system designed to evaluate decisions through multiple ethical frameworks simultaneously and to acknowledge when those frameworks disagree.

Instead of embedding a single moral rule set into a system, these engines assess potential actions across different ethical perspectives and quantify the level of uncertainty or conflict between them. The result is not necessarily a single definitive answer, but a clearer picture of the ethical terrain surrounding a decision.

In practice, a Moral Uncertainty Engine typically performs several key functions:

  • Multi-framework evaluation – analyzing decisions through several ethical lenses rather than relying on a single rule set.
  • Ethical tradeoff analysis – identifying where different value systems produce conflicting recommendations.
  • Uncertainty scoring – measuring how confident the system can be that a given course of action is morally acceptable.
  • Transparency and explanation – making visible the reasoning behind recommendations.
  • Human escalation triggers – flagging decisions where ethical disagreement is high and human judgment is required.

To understand how this works, consider the most common ethical frameworks used in moral reasoning. A Moral Uncertainty Engine might evaluate a decision using several of these simultaneously:

  • Utilitarianism – Which option produces the greatest overall good?
  • Rights-based ethics – Does the decision violate fundamental rights?
  • Justice and fairness – Are harms and benefits distributed equitably?
  • Care ethics – How does the decision affect the most vulnerable stakeholders?

When these frameworks align, the system can move forward with confidence. But when they conflict—as they often do—the engine highlights the disagreement and surfaces the ethical tension instead of burying it.
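
To make this concrete, here is a minimal sketch of what multi-framework evaluation and uncertainty scoring might look like in code. Everything in it is hypothetical: the framework names, the [-1, 1] scoring scale, and the spread-based disagreement measure are illustrative assumptions, not a reference implementation.

```python
# Illustrative sketch only: in a real system each framework's score would
# come from its own evaluation model, not hand-coded numbers.

FRAMEWORKS = ("utilitarian", "rights", "justice", "care")

def assess(option_scores: dict[str, float]) -> tuple[float, float]:
    """Return (mean endorsement, disagreement) for one candidate action.

    Scores range over [-1, 1]: +1 means strongly endorsed by that
    framework, -1 strongly opposed. Disagreement is the spread between
    the most and least favorable framework judgments.
    """
    scores = [option_scores[f] for f in FRAMEWORKS]
    return sum(scores) / len(scores), max(scores) - min(scores)

# The frameworks conflict on this option, so the disagreement is large.
mean, spread = assess({"utilitarian": 0.8, "rights": -0.6,
                       "justice": 0.1, "care": -0.2})
print(f"mean={mean:+.2f}, disagreement={spread:.2f}")
```

When the spread is small, the lenses agree and the system can proceed; when it is large, the engine has surfaced a tension worth a human’s attention.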

This is the key insight behind Moral Uncertainty Engines: ethical complexity should not be hidden inside algorithms. It should be surfaced, measured, and navigated deliberately.

In many ways, these systems represent the next step in the evolution of responsible innovation. Rather than pretending that technology can eliminate moral ambiguity, they acknowledge that ambiguity is part of the landscape—and they help leaders make better decisions within it.

III. Why Moral Uncertainty Matters Now

The concept of Moral Uncertainty Engines might sound theoretical at first, but the forces making them necessary are already here. As organizations deploy increasingly autonomous technologies and algorithmic decision systems, they are encountering ethical dilemmas at a scale and speed that traditional governance structures were never designed to handle.

In the past, ethical decisions were typically made by humans, often slowly and with room for debate. Today, many of those same decisions are being influenced—or outright determined—by automated systems operating in milliseconds.

That shift creates a fundamental challenge: machines are excellent at optimizing defined objectives, but they struggle when the objectives themselves are morally contested.

AI Systems Are Increasingly Making Moral Decisions

Consider how many domains already rely on algorithmic decision-making:

  • Autonomous vehicles determining how to react in unavoidable accident scenarios
  • Healthcare systems prioritizing patients for scarce treatments
  • Hiring algorithms screening job candidates
  • Financial models determining who receives loans or credit
  • Content moderation systems deciding what speech is allowed online

Each of these systems contains embedded value judgments—whether explicitly designed or not. The problem is that most organizations treat these judgments as technical questions rather than ethical ones.

There Is No Universal Ethical Consensus

Humans themselves rarely agree on the “correct” moral answer in complex situations. Different cultures, organizations, and individuals prioritize different values. Some emphasize maximizing overall benefit, while others prioritize protecting individual rights or safeguarding vulnerable populations.

When technology is designed around a single ethical assumption, it risks imposing that value system invisibly and at scale.

Moral Uncertainty Engines acknowledge this reality by recognizing that ethical frameworks often produce conflicting recommendations. Instead of pretending consensus exists, they surface the disagreement so that organizations can navigate it deliberately.

The Risk of Moral Overconfidence

Perhaps the greatest danger in modern algorithmic systems is not error—it is overconfidence. Many AI systems produce outputs that appear authoritative, even when the underlying ethical reasoning is incomplete, biased, or based on questionable assumptions.

This can create what might be called moral automation bias, where humans defer to algorithmic recommendations simply because they appear objective or mathematically grounded.

Moral Uncertainty Engines introduce a critical counterbalance: they explicitly communicate when a decision is ethically ambiguous, contested, or uncertain.

The Innovation Opportunity

Organizations that learn how to operationalize moral uncertainty will gain an important advantage. They will be better equipped to:

  • Build trust with customers and stakeholders
  • Navigate regulatory scrutiny
  • Avoid reputational crises driven by opaque algorithms
  • Make more resilient long-term decisions

In other words, acknowledging ethical uncertainty is not a weakness. It is a capability—one that responsible innovators will increasingly need as technology becomes more powerful and more deeply embedded in human lives.

IV. How Moral Uncertainty Engines Work

To understand the potential of Moral Uncertainty Engines, it helps to look at how such a system might actually function in practice. While the concept is still emerging, the underlying architecture draws from fields like decision science, AI safety, machine ethics, and risk management.

At a high level, a Moral Uncertainty Engine acts as a layered decision-support system. Rather than producing a single optimized answer, it evaluates potential actions through multiple ethical perspectives and identifies where those perspectives align—or conflict.

A simplified architecture typically includes four key layers.

Layer 1: Situation Awareness

Every ethical decision begins with context. The system first gathers relevant information about the situation, including:

  • The stakeholders involved
  • The potential consequences of different actions
  • Legal or regulatory constraints
  • The scale and reversibility of potential harm

This layer ensures that the system understands the environment in which a decision is being made before attempting to evaluate its ethical implications.
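
As a rough sketch, the output of this layer can be thought of as a structured context record that the later layers consume. The field names below are invented for illustration; a real system would populate such a record from sensors, case data, and legal or policy databases.

```python
from dataclasses import dataclass, field

# Hypothetical context record produced by the situation-awareness layer.
@dataclass
class DecisionContext:
    stakeholders: list[str]              # who is affected by the decision
    consequences: dict[str, float]       # candidate action -> estimated net impact
    legal_constraints: list[str] = field(default_factory=list)
    harm_is_reversible: bool = True      # can the worst outcome be undone?
    harm_scale: float = 0.0              # rough magnitude of worst-case harm
```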

Layer 2: Ethical Framework Evaluation

Next, the system analyzes the possible courses of action through multiple ethical frameworks. Each framework evaluates the decision according to its own principles and priorities.

For example:

  • Utilitarian perspective: Which option produces the greatest overall benefit?
  • Rights-based perspective: Does any option violate fundamental rights?
  • Justice perspective: Are harms and benefits distributed fairly?
  • Care perspective: How are vulnerable stakeholders affected?

Each framework generates its own assessment of the available choices.

Layer 3: Moral Aggregation

Once the frameworks have evaluated the options, the system compares their recommendations. In some cases, the frameworks may converge on a similar outcome. In others, they may strongly disagree.

Several approaches can be used to combine these evaluations, including weighted voting models, scenario simulations, or expected moral value calculations. The goal is not necessarily to produce a single definitive answer, but to understand the balance of ethical considerations across the frameworks.
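
The expected-moral-value idea, discussed in the moral uncertainty literature as maximizing expected choiceworthiness, can be sketched in a few lines: weight each framework’s assessment by how much credence the organization places in that framework. The credences and scores below are invented purely for illustration.

```python
# Hypothetical credences: how much weight the organization places in each
# ethical framework. These are value judgments, not measured quantities.
credences = {"utilitarian": 0.4, "rights": 0.3, "justice": 0.2, "care": 0.1}

def expected_moral_value(option_scores: dict[str, float]) -> float:
    """Weight each framework's score by our credence in that framework."""
    return sum(credences[f] * score for f, score in option_scores.items())

options = {
    "act":   {"utilitarian": 0.9, "rights": -0.5, "justice": 0.2, "care": 0.0},
    "defer": {"utilitarian": 0.1, "rights":  0.6, "justice": 0.4, "care": 0.5},
}
best = max(options, key=lambda name: expected_moral_value(options[name]))
print(best)  # "defer": rights, justice, and care outweigh the raw benefit
```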

Layer 4: Uncertainty and Escalation

The final layer measures how much disagreement exists between the ethical perspectives. If the frameworks align strongly, the system may proceed with a recommendation. If they diverge significantly, the system can flag the decision as ethically uncertain.

At this point, several actions may occur:

  • The system provides an explanation of the ethical tradeoffs
  • A confidence or uncertainty score is generated
  • The decision is escalated to human oversight

This is the core value of a Moral Uncertainty Engine. Instead of hiding ethical tension behind an optimized output, it reveals the complexity of the decision and invites human judgment where it matters most.
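
A minimal sketch of that routing step might reuse the mean-and-disagreement pair from the earlier example. The thresholds here are organizational policy choices, not technical constants, and the action names are invented.

```python
# Hypothetical routing policy for the uncertainty-and-escalation layer.
DISAGREEMENT_THRESHOLD = 1.0  # set by governance processes, per domain

def route(mean: float, disagreement: float) -> str:
    if disagreement > DISAGREEMENT_THRESHOLD:
        return "escalate_to_human"        # frameworks conflict: surface it
    if mean < 0.0:
        return "reject_with_explanation"  # frameworks agree the action is bad
    return "proceed_with_explanation"     # frameworks broadly endorse it
```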

In many ways, these systems function less like automated decision-makers and more like ethical copilots—tools that help organizations think more clearly about the moral consequences of their choices.

V. Case Study: Autonomous Vehicles and the Trolley Problem

Few examples illustrate the challenge of moral uncertainty more clearly than autonomous vehicles. When self-driving systems operate on public roads, they must continuously make decisions that involve safety tradeoffs. Most of the time these choices are routine—slow down, change lanes, maintain distance. But in rare circumstances, a vehicle may face an unavoidable accident scenario where harm cannot be completely prevented.

These moments resemble the classic ethical thought experiment known as the “trolley problem,” where a decision must be made between two outcomes, each involving some form of harm. While philosophers have debated such scenarios for decades, autonomous vehicle developers must translate those debates into operational decisions inside real-world systems.

The difficulty is that different ethical frameworks often produce different answers. A strictly utilitarian approach might prioritize minimizing total casualties. A rights-based perspective might argue that intentionally choosing to harm one person to save others violates fundamental moral principles. A fairness perspective might question whether certain groups are systematically placed at greater risk.

Many early attempts to address these questions focused on encoding a single rule or priority structure into the vehicle’s decision logic. But this approach assumes that there is one universally acceptable ethical answer—an assumption that rarely holds across cultures, legal systems, or public opinion.

A Moral Uncertainty Engine offers a different approach. Instead of hard-coding a single moral rule, the system evaluates potential actions across multiple ethical frameworks and identifies where they agree and where they conflict.

For example, the system might:

  • Analyze the scenario from a utilitarian perspective focused on minimizing total harm
  • Evaluate whether any potential action violates protected rights
  • Assess whether the risks are being distributed fairly among stakeholders

If these frameworks converge on the same outcome, the system can act with greater confidence. If they diverge significantly, the vehicle may default to a predefined safety posture—such as minimizing speed and impact energy—rather than making an ethically aggressive tradeoff.
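
Using the hypothetical scoring helper sketched earlier, that fallback logic might look like the following. The maneuver names and threshold are illustrative assumptions, not anything drawn from a real vehicle stack.

```python
DISAGREEMENT_THRESHOLD = 1.0  # a policy choice, ideally set with regulators

def choose_maneuver(candidates: dict[str, dict[str, float]]) -> str:
    """candidates maps each maneuver name to its per-framework scores."""
    # Rank maneuvers by mean endorsement across the ethical frameworks.
    best = max(candidates, key=lambda m: assess(candidates[m])[0])
    _, disagreement = assess(candidates[best])
    if disagreement > DISAGREEMENT_THRESHOLD:
        # Frameworks conflict: fall back to the predefined safety posture
        # rather than making an ethically aggressive tradeoff.
        return "minimize_speed_and_impact_energy"
    return best
```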

More importantly, the decision framework itself becomes transparent and auditable. Engineers, regulators, and the public can examine how ethical considerations were evaluated rather than treating the system as a black box.

The lesson from autonomous vehicles extends far beyond transportation. As technology becomes increasingly embedded in complex human environments, organizations will need systems that can recognize ethical tension instead of pretending it doesn’t exist.

Moral Uncertainty Engines provide a path toward that future—one where intelligent systems are designed not only to act, but to reflect the moral complexity of the world they operate within.

VI. Case Study: AI Medical Triage and the Ethics of Scarcity

Healthcare provides one of the most powerful real-world examples of why moral uncertainty matters. Medical systems regularly face situations where resources are limited and difficult prioritization decisions must be made. During public health crises, such as pandemics, these tradeoffs can become especially stark.

Hospitals may need to decide how to allocate ventilators, ICU beds, specialized treatments, or transplant organs when demand exceeds supply. Historically, these decisions have been guided by medical ethics boards, physician judgment, and carefully developed triage protocols. Increasingly, however, algorithmic systems are being introduced to help manage these decisions at scale.

Many triage algorithms are designed to optimize measurable outcomes such as survival probability or expected life-years saved. While these metrics may appear objective, they can create serious ethical tensions when translated into real-world policy.

For example, prioritizing expected life-years may unintentionally disadvantage older patients. Models that rely heavily on historical health data may penalize individuals from underserved communities who have historically received less access to preventative care. Systems designed purely around statistical survival probabilities may overlook broader ethical considerations about fairness, dignity, or social vulnerability.

This is precisely the kind of scenario where a Moral Uncertainty Engine could provide meaningful support.

Instead of optimizing for a single metric, the system evaluates triage decisions through several ethical perspectives simultaneously. A utilitarian framework may prioritize maximizing the number of lives saved. A justice-based framework may emphasize equitable access across demographic groups. A care-based framework may highlight the needs of the most vulnerable patients.

When these perspectives align, the system can offer a strong recommendation. But when they conflict—as they often do in healthcare—the engine surfaces that conflict rather than hiding it behind a numerical score.

The result is not an automated moral verdict. Instead, clinicians and ethics boards receive a clearer picture of the ethical tradeoffs embedded in each decision. The system may present alternative allocation scenarios, highlight potential bias risks, or flag cases that require human deliberation.
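
Again reusing the hypothetical assess() helper from earlier, a sketch of that surfacing step might annotate each case for review rather than return a verdict. Patient identifiers and scores are invented; a real system would carry far richer clinical context.

```python
def triage_report(cases: dict[str, dict[str, float]]) -> list[str]:
    """Summarize each case and flag those where ethical lenses conflict."""
    report = []
    for patient_id, scores in cases.items():
        mean, disagreement = assess(scores)
        flag = "  [frameworks conflict: ethics-board review]" if disagreement > 1.0 else ""
        report.append(f"{patient_id}: priority={mean:+.2f}{flag}")
    return report
```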

In this way, the technology functions less as a replacement for human judgment and more as a decision companion. It expands the visibility of ethical consequences while preserving the role of human responsibility.

Healthcare leaders already recognize that medical decisions involve more than statistics. Moral Uncertainty Engines simply help bring that ethical complexity into the design of the systems that increasingly shape those decisions.

VII. Leading Companies and Startups Exploring Moral Uncertainty

Moral Uncertainty Engines are still an emerging concept, but the foundational components of this category are already being developed across the technology ecosystem. Large technology firms, AI safety organizations, governance platforms, and startups focused on responsible AI are all contributing pieces of what could eventually become full ethical decision infrastructures.

While few organizations are explicitly using the term “Moral Uncertainty Engine,” many are working on the critical building blocks: AI alignment systems, ethical reasoning frameworks, transparency tools, and governance platforms designed to ensure responsible decision-making.

Large Technology Companies

Several major technology companies are investing heavily in AI alignment and responsible innovation. Their research programs are exploring ways to ensure that increasingly autonomous systems operate within acceptable ethical boundaries.

  • OpenAI – Research into alignment methods such as reinforcement learning from human feedback and systems designed to incorporate human values into AI behavior.
  • Google DeepMind – Work on AI safety, scalable oversight, and rule-based approaches to guiding model behavior.
  • Microsoft – Development of responsible AI frameworks, governance tools, and organizational guidelines for ethical AI deployment.

These companies are helping to define the infrastructure that future ethical decision systems will rely upon.

Emerging Startups

A growing number of startups are focusing specifically on governance, auditing, and ethical oversight for AI systems. These organizations are building platforms that help companies monitor algorithmic behavior, detect bias, and ensure compliance with evolving regulatory standards.

  • Credo AI – Provides governance platforms designed to help organizations operationalize responsible AI practices.
  • Holistic AI – Offers tools for auditing AI systems, identifying bias, and evaluating risk across machine learning models.
  • CIRIS – Focuses on runtime governance layers designed to help organizations manage the behavior of AI agents in production environments.

These companies are not yet full Moral Uncertainty Engines, but they are building the monitoring and governance layers that such systems will likely require.

Academic and Research Institutions

Some of the most important advances in machine ethics and moral decision systems are emerging from research institutions exploring how ethical reasoning can be integrated into AI architectures.

  • Stanford Human-Centered AI
  • MIT Media Lab
  • Oxford’s AI safety and governance research community

Researchers in these communities are experimenting with methods for translating ethical theory into operational systems capable of evaluating tradeoffs, measuring moral uncertainty, and providing transparent reasoning.

Taken together, these organizations represent the early ecosystem surrounding what could become one of the most important innovation categories of the next decade: technologies designed not just to make decisions, but to help society navigate the moral complexity that accompanies them.

VIII. The Innovation Opportunities

If Moral Uncertainty Engines sound like a niche academic concept today, history suggests that may not remain the case for long. Many of the most important innovation categories begin as abstract ideas before evolving into entire industries. Cloud computing, cybersecurity, and digital trust platforms all followed similar paths.

As AI systems become more deeply embedded in critical decisions, the ability to surface ethical tradeoffs and navigate moral uncertainty will become an increasingly valuable capability. This opens the door to several new innovation opportunities for entrepreneurs, technology companies, and forward-looking organizations.

Ethical Infrastructure Platforms

One opportunity lies in the creation of ethical infrastructure platforms—systems designed to plug into existing AI models and decision engines to provide moral evaluation layers. These platforms could function much like security software or monitoring tools, continuously assessing algorithmic behavior and flagging ethical risks.

Capabilities in this category might include:

  • Multi-framework ethical scoring for algorithmic decisions
  • Real-time bias detection and mitigation
  • Transparency dashboards for regulators and stakeholders
  • Ethical risk monitoring across large AI deployments

In effect, these platforms would provide the ethical equivalent of observability tools used in modern software systems.

Organizational Decision Copilots

Another opportunity lies in decision-support tools designed specifically for human leaders. Instead of automating decisions, these systems would act as ethical copilots—helping executives, policymakers, and product teams evaluate complex tradeoffs before implementing new technologies or policies.

Such tools might help organizations:

  • Simulate the ethical consequences of product features
  • Evaluate policy choices across competing value systems
  • Identify stakeholder groups most likely to be affected by a decision
  • Stress-test innovations against potential ethical controversies

In this model, the goal is not to replace human judgment, but to strengthen it with better visibility into ethical complexity.

Ethical Digital Twins

A particularly intriguing possibility is the development of ethical digital twins—simulation environments where organizations can test how different decisions might impact stakeholders across multiple ethical frameworks before deploying them in the real world.

Just as engineers use digital twins to simulate the performance of physical systems, leaders could use ethical simulation environments to anticipate unintended consequences, reputational risks, or fairness concerns before they emerge.

The Birth of a New Category

If these opportunities mature, Moral Uncertainty Engines could become the foundation for a new category of enterprise technology focused on ethical intelligence. Organizations would no longer rely solely on legal compliance or reactive crisis management to address ethical challenges. Instead, they would have systems designed to help them navigate those challenges proactively.

In a world where innovation increasingly shapes society at scale, the ability to operationalize ethical awareness may become just as important as the ability to write code or analyze data.

IX. The Risks and Criticisms of Moral Uncertainty Engines

Like any emerging technology category, Moral Uncertainty Engines bring both promise and potential pitfalls. While these systems could help organizations navigate complex ethical terrain more thoughtfully, they also raise legitimate concerns about how moral reasoning is translated into software and who ultimately holds responsibility for the outcomes.

If organizations are not careful, the very tools designed to improve ethical decision-making could inadvertently create new forms of risk.

The Danger of Moral Outsourcing

One of the most common criticisms is the risk of moral outsourcing. When organizations rely too heavily on algorithmic systems to evaluate ethical decisions, leaders may begin to treat those systems as final authorities rather than decision-support tools.

This can create a dangerous dynamic where responsibility quietly shifts from humans to algorithms. Instead of asking whether a decision is morally defensible, leaders may simply ask whether the system approved it.

Moral Uncertainty Engines should never replace human judgment. Their purpose is to illuminate ethical tradeoffs—not to absolve decision-makers of responsibility.

The Illusion of Objectivity

Another concern is the possibility that ethical scoring systems may create a false sense of precision. Numbers, dashboards, and scores can make complex moral questions appear more objective than they actually are.

But ethical frameworks themselves contain assumptions and value judgments. The choice of which frameworks to include, how they are weighted, and how outcomes are interpreted can all influence the system’s conclusions.

Without transparency, these embedded assumptions may go unnoticed by the people relying on the system.

Cultural and Societal Bias

Ethics is deeply shaped by culture, history, and social context. A system designed around one set of moral priorities may not reflect the values of another community or region.

If Moral Uncertainty Engines are built primarily by a narrow set of organizations or cultural perspectives, they could unintentionally export those values into systems used around the world.

Designing these systems responsibly will require diverse input from ethicists, policymakers, technologists, and communities affected by the decisions being modeled.

The Complexity Challenge

Finally, there is a practical challenge: ethical reasoning is incredibly complex. Translating philosophical frameworks into computational systems is difficult, and oversimplification is always a risk.

Not every moral dilemma can be captured in a model, and not every ethical conflict can be resolved through structured analysis.

Recognizing these limitations is essential. The goal of Moral Uncertainty Engines should not be to mechanize morality, but to provide better tools for navigating difficult decisions.

If designed thoughtfully, these systems can serve as valuable companions to human judgment. But if treated as definitive authorities, they risk becoming yet another example of technology that promises clarity while quietly obscuring the deeper questions that matter most.

X. The Leadership Imperative

The rise of Moral Uncertainty Engines underscores a critical lesson for leaders: technology alone cannot solve ethical complexity. Organizations that rely on automated systems to make moral decisions without human oversight risk both moral and reputational failure.

Leaders must approach these tools as companions rather than replacements—systems designed to illuminate ethical tradeoffs, measure uncertainty, and support thoughtful deliberation.

Key Principles for Responsible Leadership

  • Accountability: Leaders retain ultimate responsibility for decisions, even when supported by Moral Uncertainty Engines.
  • Transparency: Ensure that the reasoning behind system recommendations is visible, understandable, and auditable by humans.
  • Human Oversight: Use automated insights as decision-support, not as authoritative directives. Escalate ethically ambiguous scenarios to human judgment.
  • Ethical Culture: Encourage organizational practices that prioritize ethical reflection alongside operational efficiency and innovation.
  • Diversity of Perspectives: Incorporate insights from ethicists, technologists, and stakeholders representing different communities and cultural contexts.

Moral Uncertainty Engines are powerful because they make ethical ambiguity visible. But the value of that visibility depends entirely on the people interpreting it. Leaders who are willing to engage with these systems thoughtfully—questioning assumptions, evaluating tradeoffs, and embracing uncertainty—will turn ethical complexity into a strategic advantage.

In short, the technology alone does not create ethical outcomes. It is the combination of human judgment, responsible leadership, and machine-supported insight that allows organizations to navigate moral uncertainty successfully.

XI. Conclusion: Designing Systems That Know Their Limits

Moral Uncertainty Engines represent a profound shift in how we think about technology and ethics. They are not designed to replace human judgment, nor to provide definitive moral answers. Instead, they offer a framework for surfacing ethical tradeoffs, quantifying uncertainty, and supporting deliberate decision-making in complex contexts.

The systems of the future will need to balance intelligence with humility. They must optimize for outcomes while acknowledging the moral ambiguity inherent in most consequential decisions. By doing so, they create space for leaders, teams, and organizations to reflect, deliberate, and choose responsibly.

Across industries—from autonomous vehicles to healthcare triage, from hiring algorithms to public policy—ethical complexity is unavoidable. Moral Uncertainty Engines give organizations the tools to confront that complexity openly rather than hiding it behind optimization metrics or opaque algorithms.

In practice, these engines act as ethical copilots. They illuminate areas of tension, highlight disagreements between frameworks, and provide decision-makers with richer, more nuanced insights. The true measure of their success is not perfect moral accuracy, but the degree to which they enable human leaders to make informed, accountable, and ethically aware decisions.

Ultimately, the organizations that thrive in an increasingly automated and interconnected world will be those that design systems capable of acknowledging their limits—and that pair those systems with leaders willing to navigate uncertainty thoughtfully. In this way, Moral Uncertainty Engines may become one of the most important tools for fostering responsible innovation in the 21st century.

Frequently Asked Questions

1. What is a Moral Uncertainty Engine?

A Moral Uncertainty Engine is a decision-support system designed to evaluate choices through multiple ethical frameworks, quantify areas of disagreement, and provide transparent guidance or escalation when ethical uncertainty is high. Its purpose is to help organizations navigate complex moral tradeoffs rather than replace human judgment.

2. Why are Moral Uncertainty Engines important today?

As AI and algorithmic systems increasingly make decisions that affect people’s lives, the ability to surface and manage ethical uncertainty becomes critical. These engines reduce risks of overconfidence, bias, and hidden ethical assumptions, enabling organizations to make more responsible, accountable, and trusted decisions.

3. Which industries or applications can benefit from Moral Uncertainty Engines?

Any sector where complex decisions with moral implications are made can benefit, including healthcare triage, autonomous vehicles, hiring and HR systems, financial services, content moderation, and public policy. Essentially, any domain where decisions have significant ethical consequences can leverage these systems to guide thoughtful human oversight.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Do You Have an Empty Tank?

GUEST POST from Mike Shipulski

Sometimes your energy level runs low. That’s not a bad thing; it’s just how things go. Just as a car’s gas tank runs low, our own tanks, both physical and emotional, also need filling. Again, not a bad thing. That’s what tanks are for – they hold the fuel.

We’re pretty good at remembering that a car’s tank is finite. At the start of the morning commute, the car’s fuel gauge gives a clear reading of the fuel level, and we do the calculation to determine whether we can make it or need to stop for fuel. And we do the same thing in the evening: look at the gauge, determine if we need fuel, and act accordingly. We rarely run the car out of fuel because the car continuously monitors and displays the fuel level, and we know there are consequences if we run out.

We’re not so good at remembering that our personal tanks are finite. At the start of the day, there is no objective fuel gauge to display our internal fuel levels. The only calculation we make is a crude one: if we can make it out of bed, we have enough fuel for the day. We need to do better than that.

Our bodies do have fuel gauges of sorts. When our fuel is low we can be irritable, our concentration can suffer, and we can be easily distracted. Though these gauges are hard to see and difficult to interpret, they can be used effectively if we slow down and pay attention to our bodies. The most troubling part, though, has nothing to do with the gauges themselves. Most troubling is that we fail to respect their low-fuel warnings even when we do recognize them. It’s like we don’t acknowledge that our tanks are finite.

We don’t think our cars are flawed because their fuel tanks run low as we drive. Yet we see the finite nature of our internal fuel tanks as a sign of weakness. Why is that? Rationally, we know all fuel tanks are finite and that their levels drop with activity. But in the moment, when our tanks are low, we think something is wrong with us, we think we’re not whole, we think less of ourselves.

When your tank is low, don’t curse, don’t blame, don’t feel sorry and don’t judge. It’s okay. That’s what tanks do.

A simple rule for all empty tanks – put fuel in them.

The Reality Rule for Business

GUEST POST from Shep Hyken

Most of us learned the Golden Rule at a young age: “Do unto others as you would have them do unto you.” This is a perfect rule for business, specifically customer service and customer experience (CX). It translates into treating customers the way you want to be treated. It makes sense … or does it?

My colleague Dr. Tony Alessandra came up with a version of the Golden Rule he calls the Platinum Rule: “Do unto others as they would like done unto them.” Changing two words, you to them, acknowledges that not everyone wants to be treated in the same way you might like to be treated. And in a broader sense, not everyone wants to be treated the same way.

However, certain customers will remain difficult no matter how well you treat them. If you don’t recognize this, it can break both employee satisfaction and customer satisfaction. That means it can also break a business.

The Expectation Trap

Recently, I read Give Hospitality by Taylor Scott, which tells the story of an employee who left her job because of a toxic workplace culture and found the perfect job where people, both employees and customers, were treated with respect and dignity. In her second week of training, she read a quote displayed on the company’s training room wall:

“Nothing in the Golden Rule says others will treat us as we have treated them. It only says we must treat others the way we would want to be treated.” – Rosa Parks

This quote from the legendary civil rights activist highlights a basic truth about customer service: exceptional treatment of customers doesn’t guarantee the customer will respond the same way. Yet many front-line employees and managers fall into the expectation trap and become frustrated when customers remain difficult despite receiving outstanding service.

The Danger of Misplaced Expectations

When employees expect customers to change their behavior to mirror their own, several dangers arise:

  • Employee Burnout: Front-line staff become disillusioned when their exceptional effort to take care of their customers isn’t appreciated or met with a more positive response. This is one of the top reasons it’s hard to keep good customer service reps. They say, “I can’t take it anymore,” and quit.
  • Inconsistent Customer Service: Frustrated employees may begin to take on the attitudes of their difficult customers, creating an inconsistent and bad experience for other customers.
  • Customers Leave: Difficult customers can become your most loyal customers when their problems are resolved with patience, kindness and professionalism, even if they don’t show it in their reactions. But if employees give up on them, those customers walk away. To keep them, employees must be persistent and follow a new rule. (Read on!)

The Reality Rule

Up until now, we have had the Golden Rule and the Platinum Rule. Now we have the Reality Rule:

Treat customers well, even if they don’t treat you well.

This isn’t about unacceptable abuse from a customer. Customers who cross the line with verbal abuse and threats fall under the category of Customers Who Aren’t Worth Doing Business With. Customers are allowed to be angry and agitated. They may be upset about the company or a product, and sometimes their behavior is driven by factors beyond your control.

The Reality Rule has three components:

  1. Control Your Response: While you can’t control the customer’s behavior, you have complete control over your attitude, effort and professionalism. Don’t let your angry customer’s behavior cause you to derail.
  2. Be Consistent: You know what it takes to deliver a great experience. Stay true to the core value of taking care of customers and, as just mentioned and worth mentioning again, don’t let your angry customer’s behavior cause you to go off track.
  3. Turn Foes into Friends: This is more of a goal than a rule, but it’s a goal you must bring to every tense interaction. My annual customer service and CX research finds that 81% of customers said they would consider returning to a company if it actively sought to make amends for a bad customer experience. When you handle a complaint properly, the customer will have higher confidence in you and your company than if the problem had never happened at all.

Final Words

When your team embraces the Reality Rule, magic happens. Difficult customers often transform into loyal advocates. Employee satisfaction increases when they understand their role and what they have control over. And your organization builds a reputation for taking care of customers, even when there are problems or complaints.

Remember, you’re not treating customers well because you expect them to change their behavior, although it’s nice when it happens — and sometimes it does. You’re doing it because it’s the right thing to do, knowing that in the long run it pays dividends to properly manage problems and complaints. The Reality Rule creates the kind of experience that gets customers to say, “I’ll be back!”

This article was originally published on Forbes.com.

Resilient Innovation

Why the Future Belongs to Organizations That Think in Three Dimensions

by Braden Kelley and Art Inteligencia


I. The Spark: A Venn Diagram That Captures a Powerful Truth

Inspiration for this article came from a simple but powerful visual shared in a recent post by Hugo Gonçalves. The image illustrated the relationship between Future Thinking, Design Thinking, and Systems Thinking using a Venn diagram that placed Resilient Innovation at the center.

At first glance the framework seems obvious. Each discipline is already well established in the innovation world:

  • Future Thinking helps organizations anticipate multiple possible futures.
  • Design Thinking focuses on solving problems through a human-centered approach.
  • Systems Thinking encourages examining systems holistically to understand complexity.

But what makes the diagram compelling is not the individual circles. It is the insight revealed at their intersections. When these disciplines operate together rather than in isolation, they unlock capabilities that are difficult for organizations to achieve otherwise.

At the intersection of Future Thinking and Design Thinking, organizations begin designing solutions for future scenarios rather than merely reacting to present conditions.

Where Design Thinking meets Systems Thinking, innovation becomes both human-centered and system-aware, producing solutions that account for real-world complexity and ripple effects.

And where Future Thinking intersects with Systems Thinking, organizations gain the ability to prepare systems for long-term sustainability and increasing complexity.

Resilient Innovation

When all three perspectives come together, something more powerful emerges: the ability to create innovations that are not only desirable and viable today, but resilient enough to thrive across multiple possible futures.

In a world defined by accelerating change, uncertainty, and interconnected systems, resilient innovation may be the most important capability organizations can develop. And as this simple diagram suggests, it thrives at the intersection of three powerful ways of thinking.

II. The Problem with One-Dimensional Innovation

Most organizations pursue innovation through a single dominant lens. Some lean heavily into design thinking workshops and rapid prototyping. Others invest in strategic foresight to anticipate future disruption. Still others focus on systems analysis to understand complexity and organizational dynamics.

Each of these approaches provides valuable insight. But when used in isolation, each also has significant limitations.

Design thinking, for example, excels at uncovering human needs and translating them into compelling solutions. Yet even the most desirable idea can fail if it ignores the larger systems it must operate within — regulatory structures, supply chains, cultural norms, or organizational incentives.

Future thinking helps organizations explore uncertainty and imagine multiple possible futures. Scenario planning and horizon scanning can expand strategic awareness and reduce surprise. But foresight alone rarely produces solutions that people are ready to adopt.

Systems thinking provides the ability to map complexity, understand feedback loops, and identify leverage points within interconnected environments. However, deep system insight does not automatically translate into solutions that resonate with human users.

When organizations rely on only one of these approaches, innovation often stalls. Ideas may be creative but impractical, visionary but disconnected from human behavior, or analytically sound but difficult to implement.

The challenge is not that these disciplines are flawed. The challenge is that they are incomplete on their own.

Innovation today takes place in environments that are simultaneously human, complex, and uncertain. Addressing only one dimension of that reality inevitably leads to blind spots.

Resilient innovation requires something more: the integration of multiple ways of thinking that together allow organizations to anticipate change, understand complexity, and design solutions people will actually embrace.

III. Future Thinking: Anticipating Multiple Possible Futures

One of the most dangerous assumptions organizations can make is that the future will look largely like the present. History repeatedly shows that markets, technologies, and societal expectations can shift faster than even experienced leaders anticipate.

This is where Future Thinking becomes essential, and the FutureHacking™ methodology helps everyone be their own futurist.

Future thinking is not about predicting a single outcome. Instead, it focuses on exploring a range of plausible futures so organizations can prepare for uncertainty rather than react to it after the fact.

Practitioners of future thinking use tools such as horizon scanning, trend analysis, and scenario planning to identify emerging signals of change and imagine how those signals might combine to shape different future environments.

By examining multiple possible futures, organizations expand their strategic imagination. They begin to see opportunities and risks that would otherwise remain invisible when planning is based solely on past performance or current market conditions.

Future thinking helps leaders ask better questions:

  • What changes on the horizon could reshape our industry?
  • Which emerging technologies or behaviors might disrupt our assumptions?
  • How might our customers’ needs evolve over the next decade?

When organizations incorporate future thinking into their innovation efforts, they gain the ability to design strategies and solutions that remain relevant even as conditions change.

However, foresight alone does not create innovation. Imagining the future is only the beginning. Organizations must also translate those insights into solutions that people value and systems can support.

That is why future thinking becomes far more powerful when combined with other perspectives — particularly the human-centered creativity of design thinking and the holistic understanding provided by systems thinking.

IV. Design Thinking: Solving Problems with a Human-Centered Approach

If future thinking expands our view of what might happen, design thinking helps ensure that the solutions we create actually matter to the people they are intended to serve.

Design thinking is grounded in a deceptively simple premise: innovation succeeds when it begins with a deep understanding of human needs, behaviors, and motivations. Rather than starting with technology or internal capabilities, design thinking begins with empathy.

Practitioners use methods such as observation, interviews, journey mapping, and rapid prototyping to uncover insights about how people experience products, services, and systems in the real world.

Through this process, organizations move beyond assumptions and begin designing solutions that reflect genuine human needs. Ideas are then explored through iterative experimentation, allowing teams to quickly learn what works, what doesn’t, and why.

This approach offers several powerful advantages:

  • It surfaces unmet or unarticulated customer needs.
  • It encourages experimentation and rapid learning.
  • It increases the likelihood that new solutions will be embraced by the people they are designed for.

Design thinking reminds organizations that innovation is not simply about creating something new. It is about creating something people will choose to adopt.

However, even the most human-centered solution can fail if it ignores the broader systems in which it must operate. A beautifully designed product may struggle against regulatory constraints, supply chain limitations, or cultural resistance within organizations.

This is why design thinking alone is not enough. To create innovations that truly endure, organizations must also understand the complex systems surrounding those solutions.

V. Systems Thinking: Seeing the Whole System

While design thinking focuses on people and future thinking explores uncertainty, systems thinking helps organizations understand the complex environments in which innovation must operate.

Modern organizations do not exist in isolation. They function within interconnected systems made up of customers, partners, suppliers, regulators, technologies, cultures, and internal structures. Changes in one part of the system often create ripple effects across many others.

Systems thinking encourages leaders and innovators to step back and examine these relationships holistically rather than focusing only on individual components.

Practitioners use tools such as system maps, causal loop diagrams, and stakeholder ecosystem mapping to identify patterns, dependencies, and feedback loops that influence outcomes over time.

This perspective provides several critical advantages:

  • It reveals hidden interdependencies within complex environments.
  • It helps identify leverage points where small changes can create large impact.
  • It reduces the likelihood of unintended consequences when introducing new solutions.

Many innovations fail not because the idea was flawed, but because the surrounding system was never designed to support it. Incentives may be misaligned. Processes may resist change. Infrastructure may not exist to scale the solution.

Systems thinking helps innovators recognize these structural realities early, allowing them to design solutions that fit within — or intentionally reshape — the systems they operate within.

Yet systems thinking alone can also fall short. Deep analysis of complexity does not automatically produce solutions that resonate with people or anticipate future shifts.

This is why resilient innovation emerges not from any one perspective, but from the intersection of future thinking, design thinking, and systems thinking working together.

VI. Future Thinking + Design Thinking: Designing Solutions for Future Scenarios

When future thinking and design thinking come together, innovation shifts from solving today’s problems to designing solutions that remain meaningful in tomorrow’s world.

Future thinking expands the time horizon. It helps organizations explore emerging technologies, evolving social expectations, and potential disruptions that could reshape the environment in which products and services operate.

Design thinking brings the human perspective. It ensures that ideas developed in response to these future possibilities remain grounded in real human needs, motivations, and behaviors.

Together, these disciplines allow organizations to design solutions not just for the present moment, but for multiple possible futures.

Rather than asking only “What do customers need today?” teams begin asking deeper questions:

  • How might customer expectations evolve in the next five to ten years?
  • What new behaviors could emerge as technologies mature?
  • How might shifting social norms reshape what people value?

Several practices emerge from this intersection:

  • Creating future personas that represent how users might behave in different scenarios.
  • Building scenario-based prototypes that test how solutions perform under different future conditions.
  • Using speculative design to explore bold possibilities before they become reality.

This combination helps organizations avoid a common innovation trap: designing solutions perfectly optimized for a present that is already beginning to disappear.

By integrating foresight with human-centered design, organizations create innovations that are better prepared to evolve as the future unfolds.

VII. Design Thinking + Systems Thinking

Human-centered innovation is most powerful when it takes the wider system into account. Integrating empathy with complexity awareness ensures that solutions are not only desirable but also viable and scalable within real-world systems.

Many well-intentioned innovations fail because they neglect system dynamics—leading to unintended consequences that can undermine adoption, efficiency, or long-term impact.

Example Practices

  • Journey Mapping + System Mapping: Understand the user experience alongside the broader system in which it operates.
  • Stakeholder Ecosystem Analysis: Identify all the players, relationships, and dependencies that influence outcomes.
  • Designing for Policy, Culture, and Infrastructure Simultaneously: Ensure solutions are compatible with the real-world environment, not just ideal scenarios.

Benefit: Solutions that scale effectively and endure within complex systems, reducing risk and maximizing long-term impact.

VIII. Future Thinking + Systems Thinking

Combining anticipation with structural understanding enables organizations to prepare systems for long-term sustainability and complexity. This intersection ensures that strategies and innovations are not just reactive but resilient to change and disruption.

Many organizations fail because they plan for the future without considering system-wide dynamics, leaving them vulnerable when change inevitably occurs.

Example Practices

  • Resilience Mapping: Identify system vulnerabilities and strengths to anticipate risks and opportunities.
  • Adaptive Strategy Design: Develop strategies that can flex and evolve as conditions change.
  • Long-Term Capability Building: Invest in skills, processes, and structures that sustain innovation over time.

Benefit: Organizations become prepared for volatility, able to respond to complex challenges without being derailed by disruption.

IX. The Center of the Venn Diagram: Resilient Innovation

True innovation resilience happens at the intersection of all three disciplines: Future Thinking, Design Thinking, and Systems Thinking. Organizations that operate here anticipate multiple possible futures, design solutions humans actually want, and understand the systems those solutions must survive inside.

This holistic approach moves beyond isolated innovation efforts, ensuring solutions are desirable, viable, and adaptable in a complex world.

Capabilities at the Center

  • Adaptive Innovation Portfolios: Maintain a diverse set of initiatives that can pivot as conditions change.
  • Experimentation Across Future Scenarios: Test solutions against multiple possible futures to validate robustness.
  • Human-Centered System Transformation: Redesign processes, structures, and policies to align with real human needs within systemic constraints.

Benefit: Organizations achieve resilient innovation that can thrive amidst uncertainty, disruption, and complexity, rather than merely surviving it.

X. What Leaders Must Do to Build This Capability

Building resilient innovation requires leaders to shift their mindset and practices. It’s no longer enough to treat innovation as a siloed department or isolated initiative. Leaders must actively create the conditions that allow foresight, design, and systems thinking to work together.

Practical Leadership Shifts

  • Stop Treating Innovation as a Department: Embed innovation across teams and functions, not just in a single unit.
  • Build Foresight, Design, and Systems Capabilities Together: Develop cross-disciplinary skills that enable three-dimensional thinking.
  • Encourage Cross-Disciplinary Collaboration: Foster communication and shared problem-solving across different expertise areas.
  • Measure Resilience, Not Just Efficiency: Track long-term adaptability, system impact, and future-readiness, not only short-term outputs.
  • Design Organizations That Can Evolve Continuously: Create structures and processes that allow constant learning, adaptation, and iteration.

By adopting these leadership practices, organizations can ensure that their innovation efforts are not only creative but also resilient and scalable within complex systems.

XI. A Simple Test for Your Organization

To evaluate whether your organization is truly building resilient innovation capabilities, ask three critical questions:

  1. Are we designing only for today’s customers, or tomorrow’s realities?
    This question tests whether your innovation anticipates future needs and scenarios.
  2. Do our solutions work only in pilot environments, or within real systems?
    This evaluates whether innovations are scalable and resilient within the complex systems they must operate in.
  3. Are we solving human problems, or just optimizing processes?
    This ensures that your solutions are genuinely human-centered, not just operationally efficient.

If any of these questions exposes a gap, the missing capability likely lies at one of the intersections of Future Thinking, Design Thinking, and Systems Thinking. Addressing these gaps is critical for achieving resilient innovation.

XII. Final Thought: Innovation Is No Longer Linear

The world has become too complex for single-method innovation. Organizations that thrive in the future will be those that operate at the intersection of:

  • Anticipation: Preparing for multiple possible futures.
  • Human Understanding: Designing solutions people actually want and will adopt.
  • System Awareness: Ensuring solutions can survive and scale within real-world systems.

Resilient innovation does not come from seeing the future clearly. It comes from being prepared for many possible futures and designing systems and solutions that can adapt when they arrive. Organizations that master this approach are the ones that will endure, evolve, and thrive.

FAQ: Resilient Innovation

1. What is resilient innovation?

Resilient innovation is the ability of an organization to anticipate multiple possible futures, design solutions humans actually want, and ensure those solutions survive and scale within complex systems. It emerges at the intersection of Future Thinking, Design Thinking, and Systems Thinking.

2. Why do organizations struggle with one-dimensional innovation?

Many organizations rely on a single approach—such as design thinking, systems thinking, or future thinking—without integrating the others. This can lead to solutions that are desirable but not viable, or insightful but not actionable, resulting in innovation that fails to scale or adapt.

3. How can leaders build resilient innovation capabilities?

Leaders can foster resilient innovation by embedding cross-disciplinary collaboration, developing foresight, design, and systems capabilities together, measuring resilience (not just efficiency), and designing organizations that can continuously learn, adapt, and evolve.

p.s. Kristy Lundström posed the question of whether regenerative would be a better adjective than resilient, and I responded that it depends on where you draw the boundaries on the word resilient. I tend to think of it as an active word instead of a passive one, meaning the way that I look at the word incorporates elements of regeneration and making *#&! happen. Keep innovating!

Image credits: ChatGPT, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.


VC-Backed Firms in Regulated Industries

The Times They Are A-Changin’


GUEST POST from Geoffrey A. Moore

This week I have had conversations with executive teams of VC-backed firms working in three different regulated industries: Healthcare, Telco, and Financial Services. All of them reported that their sales pipelines were around 3X what they were a year ago. We didn’t dig into why, although I expect that it means the incumbent providers are under increasing pressure to modernize their operating models and streamline their infrastructure models to meet customer demand and pricing pressure.

The reason we did not get to discuss why this is happening is that each of the teams was more focused on how — how do we adapt our playbook to this new development? You might not think an upsurge in demand would be a problem, but all three of these firms are at least an order of magnitude sub-scale to properly address the demands of their target customers. How do you ride such a wave of demand without wiping out? How do you scale and not break your company?

Understanding the Dynamics of the Situation

The easiest way to see what is going on here is to examine it through the lens of the Hierarchy of Powers. Here’s how it plays out:

  • Category Power. The category is shifting from resisting the next wave to embracing it, albeit reluctantly, because the status quo is deteriorating, and it is clear something has to change. This leads to the upsurge in RFPs and RFIs that each company is now seeing. Budget is being created whereas before it had to be scrounged. This is great news for each enterprise, but it has its challenges.
  • Company Power. Compared to the Tier 1 prospects each of these companies is targeting, their own company power is tiny indeed. All of them lack the global reach and depth of personnel their customers require. Nonetheless, these are their most valuable prospects, so they must find a way to engage. That’s the core of the challenge.
  • Market Power. Each company has already focused on a single vertical—that is how they got as far as they have. Now they are going to have to focus even more rigorously in order to control their exposure to too much demand coming at them too fast and too soon. To secure market power, to become the go-to vendor for their category of offer for this vertical, they must prioritize the right subset of prospects and do whatever it takes to get them over the line.
  • Offer Power. This is where each company shines. It is why they are each attracting the attention of companies that a year ago were not returning their calls. Their products, however, are highly complex, and the implementations even more so, so they cannot support runaway growth. Moreover, the regulated industries they serve impose rigorous, one might even say onerous, demands, creating a whole series of hoops to jump through before they can get to the other side. How do you “catch the wave” when the sign on the beach says “proceed with caution”?
  • Execution Power. At the end of the day, this is the crux of the challenge. How can a subscale company with a world-class offer meet the demands of a regulated industry dominated by behemoth enterprises? How should it adapt its playbook?

Adapting the Playbook

Given this change in dynamics, here are the kinds of adaptations that are called for:

  • Control your destiny by narrowing your focus. The key for all three enterprises is to win a handful of Tier 1 accounts that the rest of the industry looks to for best practices. Winning these accounts will establish them as the go-to choice for the industry as a whole. This objective trumps all others, and every organization inside the company needs to reprioritize its workload accordingly.
  • Hold fast to your priorities. This is an internal transformation that requires strict discipline to execute. In the past, it was OK to step off the path to address an impromptu request because the demand for everyone’s time was less insistent. Now it is not. Use weekly commits as a way to make workloads visible, and intervene whenever they are drifting off course.
  • Stay very focused on your top-tier target accounts. Every one of them is a priority, even when they may not be giving you all the reception you want. Conversely, all other prospects are a distraction even when they are inviting you in.
  • Continue to serve your existing customer base. These are not the Tier 1 players we are targeting, but they are references that can help win those accounts. In addition, they are the early adopters who put their faith in you. You must do right by them.
  • Align with a big friend. Your target customers need you to bring many more resources to the table than you have inside your company. The good news is that these same customers work with global service providers who specialize in helping them on-board next-generation offers. You need to secure strong support from at least one of these, and you probably cannot easily support more than one, so pick one you think you can trust, and go all in with them on your go-to-market planning.
  • Let the big friend help you clear your regulatory hurdles. Time is your scarcest resource, and unfortunately, regulated industries are not good at moving swiftly. It’s a mismatch in operating models. VC-backed companies take risks to save time; regulated industries take time to reduce risk. This is not something you are well positioned to deal with. Global services firms, on the other hand, already have relationships with the regulatory authorities you must interface with, not to mention the bandwidth to work through the mandated processes. Do whatever you can to get their help in expediting whatever needs to be done.
  • Create the solution playbook that you and your GSI friend will co-deliver. Do not let the GSI take over the implementation. You know a lot more about what it takes to make your solution work than they do. But you can make sure that the work is profitable for them by giving them the playbook and letting them bill for their time. You don’t need the services revenue anywhere near as much as you need the Tier 1 account win.
  • Defer inbound requests that take you off strategy. You don’t have to say no. You just have to say, not yet. Given the amount of stress that any Tier 1 engagement will put on your firm, taking even one off-script account risks being the straw that breaks the camel’s back.
  • Defer inbound interest around an acquisition. You are at an inflection point in value creation that is potentially extraordinary, the very outcome you and your investors have been preparing for. This is not the time to let go of the reins, particularly if they are going to get handed to an established enterprise whose culture is likely to clash with yours. Moreover, you cannot afford the distraction of all the due diligence that M&A discussions necessarily entail. M&A cannot solve your Tier 1 problem. You have to do that yourself.

Now, to be clear, there are exceptions that could overrule any one of the prescriptions above, so each team needs to review them in light of its own history and circumstances. The key point is that when the market is shifting from a state of scarcity to one of abundance, there is a short time window to catch that wave. The large competitors cannot move fast enough to do this themselves — that is why they are interested in making an acquisition. You are agile enough to do so, but you are painfully subscale — hence the need for the somewhat drastic prescriptions above. Navigating this part of the journey is tricky, but if you stay focused on winning (and keeping!) a handful of Tier 1 accounts, you are making the best bet.

That’s what I think. What do you think?

Image Credit: Google Gemini


You Need a Customer Experience Risk & Revenue Leakage Diagnostic

Why You’re Losing More Than You Think — and Don’t Even Know It

LAST UPDATED: March 11, 2026 at 6:27 PM (SPANISH LANGUAGE VERSION)

by Braden Kelley and Art Inteligencia


I. The Invisible Cost of Friction

Most organizations measure revenue. Some measure profit. A growing number measure customer satisfaction. But very few measure revenue at risk — and almost none systematically measure experience-driven revenue leakage.

The hard truth is this: what customers experience today determines what finance reports tomorrow. Friction in the customer journey rarely shows up immediately on a balance sheet. Instead, it accumulates quietly — in hesitation, in doubt, in abandoned transactions, in unresolved issues, and in eroding trust.

Every confusing onboarding flow. Every policy that makes sense internally but frustrates externally. Every moment where a customer has to work harder than they expected. These are not minor inconveniences. They are micro-withdrawals from future growth.

When friction compounds, it becomes invisible leakage:

  • Customers buy less than they intended.
  • Customers delay decisions.
  • Customers quietly explore alternatives.
  • Customers leave without complaint.

Because traditional dashboards focus on lagging indicators, leaders often miss the early warning signs. By the time churn rises or margins compress, the experience damage has already been done.

Customer experience is not a “soft” discipline. It is a leading indicator of financial performance. If you are not measuring friction financially, you are tolerating it culturally.

The first step toward sustainable growth is acknowledging a simple but uncomfortable reality: what you cannot see is already costing you.

II. What Is a Customer Experience Risk & Revenue Leakage Diagnostic?

A Customer Experience Risk & Revenue Leakage Diagnostic is a structured, cross-functional assessment designed to uncover where your organization is unintentionally creating friction, eroding trust, and putting future revenue at risk.

It is not a satisfaction survey. It is not a brand perception study. And it is not a one-time journey mapping workshop.

It is a strategic instrument that connects customer experience directly to financial performance.

At its core, the diagnostic is designed to:

  1. Identify friction across the end-to-end customer journey
    From awareness and onboarding to service and renewal, it reveals where customers hesitate, struggle, or disengage.
  2. Quantify the financial impact of experience breakdowns
    It translates moments of frustration into measurable revenue exposure, cost-to-serve distortion, and lifetime value erosion.
  3. Prioritize improvements based on risk and recovery potential
    It enables leadership to focus on interventions that reduce risk, restore trust, and unlock trapped growth.

Unlike traditional CX metrics that tell you what happened, this diagnostic helps you understand why it happened — and what it is costing you.

By integrating operational data, customer feedback, employee insight, and financial modeling, the organization gains a clear view of:

  • Where revenue is quietly leaking
  • Where trust is weakening
  • Where internal complexity is surfacing as external pain
  • Where competitors are gaining advantage through simplicity

In short, a Customer Experience Risk & Revenue Leakage Diagnostic reframes customer experience from a qualitative aspiration into a measurable performance and risk management discipline.

III. Why Traditional Metrics Fail

Most organizations believe they are measuring customer experience effectively. They track Net Promoter Score (NPS), Customer Satisfaction (CSAT), conversion rates, churn rates, and average handle time. These metrics are familiar. They are benchmarked. They are reported to leadership regularly.

The problem is not that these metrics are wrong. The problem is that they are incomplete — and mostly lagging indicators.

They tell you what happened. They rarely tell you why it happened. And almost never do they tell you what it is costing you before it shows up in revenue.

The Three Core Limitations

  1. They Measure Sentiment, Not Exposure
    A customer can report being “satisfied” while still experiencing friction that reduces purchase frequency, basket size, or long-term loyalty.
  2. They Are Aggregated and Diluted
    Journey-level breakdowns are often hidden inside company-wide averages. A single high-friction touchpoint can erode trust even if the overall score appears stable (illustrated in the sketch after this list).
  3. They Are Backward-Looking
    By the time churn rises or referrals fall, the experience damage has already compounded. Leadership is reacting to symptoms, not preventing causes.
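As a concrete illustration of the second limitation, here is a minimal Python sketch, with invented numbers, showing how a healthy-looking company-wide average can hide a single high-friction touchpoint:

```python
# Illustrative sketch, all numbers invented: a company-wide average
# can look healthy while one touchpoint quietly fails.

touchpoint_scores = {
    "awareness":  [9, 8, 9, 8, 9],
    "onboarding": [8, 9, 8, 9, 8],
    "billing":    [3, 4, 2, 3, 4],   # the quiet failure point
    "support":    [8, 8, 9, 8, 9],
}

all_scores = [s for scores in touchpoint_scores.values() for s in scores]
print(f"Company-wide average: {sum(all_scores) / len(all_scores):.1f}")  # looks acceptable

for stage, scores in touchpoint_scores.items():
    avg = sum(scores) / len(scores)
    flag = "  <-- high-friction touchpoint" if avg < 6 else ""
    print(f"{stage:12s} {avg:.1f}{flag}")
```

Reported at the company level, this journey looks stable; disaggregated by touchpoint, the billing stage is clearly eroding trust.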

Most importantly, traditional metrics rarely connect experience breakdowns directly to financial risk. Without that connection, friction becomes normalized.

Measurement shapes behavior. If you do not measure friction in financial terms, you unintentionally signal that it is tolerable.

A Customer Experience Risk & Revenue Leakage Diagnostic shifts the focus from “How are we scoring?” to a far more strategic question:

“Where are we unintentionally putting future revenue at risk?”

That reframing changes the conversation — from reporting outcomes to preventing loss and unlocking growth.

IV. The Four Hidden Sources of Revenue Leakage

Revenue rarely disappears in dramatic fashion. It erodes quietly — through friction, misalignment, and unexamined assumptions. Most organizations don’t have a revenue problem. They have a leakage problem.

A Customer Experience Risk & Revenue Leakage Diagnostic exposes four primary sources of hidden loss.

1. Friction Leakage

Friction leakage occurs when customers encounter unnecessary effort, confusion, or delay throughout their journey.

  • Abandoned carts and incomplete applications
  • Complicated onboarding experiences
  • Repetitive support interactions
  • Opaque pricing or renewal processes

Every moment of confusion acts as a micro-tax on growth. Individually small. Collectively significant.

2. Trust Leakage

Trust leakage is more subtle — and more dangerous. It happens when promises and delivery drift apart.

  • Inconsistent messaging across channels
  • Unmet service commitments
  • Poor recovery after failure
  • Policy decisions that prioritize internal efficiency over customer fairness

Trust is the invisible infrastructure of sustainable growth. When it weakens, customers may not complain — they simply reduce engagement.

3. Capability Leakage

Capability leakage originates inside the organization but manifests externally. It occurs when employees lack the tools, authority, or alignment needed to deliver a seamless experience.

  • Siloed data systems
  • Disconnected technology platforms
  • Incentives that reward internal metrics over customer outcomes
  • Front-line employees unable to resolve issues without escalation

Internal complexity always becomes external friction.

4. Strategic Blind Spots

Strategic leakage occurs when leadership decisions unintentionally trade long-term growth for short-term optimization.

  • Cost-cutting that degrades customer value
  • Underinvestment in journey orchestration
  • Failure to listen to front-line and edge-of-organization insights
  • Overconfidence in lagging indicators

The edges of the organization are where the future first becomes visible. If leadership is not looking there, risk compounds silently.

When these four forms of leakage intersect, the financial impact multiplies. The diagnostic does not just identify them — it quantifies them, transforming abstract experience concerns into measurable business priorities.

V. The Business Case: Why This Diagnostic Is Now Essential

The question is no longer whether customer experience matters. The question is whether you can afford to leave it undiagnosed.

Market dynamics have shifted. Expectations have accelerated. Transparency has increased. Acquisition costs continue to rise. In this environment, unmanaged experience risk is a strategic liability.

1. Customer Expectations Are Compounding

Customers do not compare you only to direct competitors. They compare you to the best experience they have had anywhere. Friction tolerance declines every year.

What felt “acceptable” five years ago now feels outdated. What feels slightly inconvenient today becomes unacceptable tomorrow.

2. Digital Transparency Amplifies Experience Gaps

One broken interaction can scale rapidly through reviews, social platforms, and peer networks.

Experience inconsistency is no longer contained. Reputation moves at the speed of visibility.

3. Growth Is More Expensive Than Retention

Customer acquisition costs continue to climb across industries. When revenue leaks through preventable friction, organizations are forced to spend more just to stand still.

Protecting and expanding lifetime value is now a financial imperative — not a marketing aspiration.

4. Innovation Without Experience Discipline Fails

Organizations invest heavily in new products, services, and technologies. But innovation layered on top of broken journeys simply magnifies dysfunction.

Scale amplifies whatever system you have — good or bad. If the experience foundation is fragile, growth initiatives will expose the cracks.

5. Risk Management Must Extend Beyond Compliance

Most enterprises have mature financial and operational risk frameworks. Few have equivalent rigor applied to customer experience risk.

A Customer Experience Risk & Revenue Leakage Diagnostic closes that gap, elevating experience from a functional concern to a board-level performance and risk management priority.

In today’s environment, diagnosing experience risk is not optional. It is foundational to sustainable, human-centered growth.


VI. What a High-Impact Diagnostic Actually Measures

If you are going to treat customer experience as a growth and risk discipline, you must measure it with the same rigor you apply to financial performance. A high-impact Customer Experience Risk & Revenue Leakage Diagnostic goes far beyond sentiment scores.

It evaluates exposure, root causes, and financial implications — across the entire customer lifecycle.

A. Journey-Level Risk Exposure

The diagnostic identifies where customers hesitate, struggle, or disengage across key stages of the journey.

  • Drop-off and abandonment patterns
  • Cycle time delays
  • Escalation and repeat contact rates
  • Inconsistent cross-channel transitions

Rather than looking at averages, it isolates specific high-risk touchpoints where friction compounds and revenue becomes vulnerable.
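As a minimal sketch of what isolating high-risk touchpoints can look like in practice, consider this Python fragment; the funnel stages and counts are invented for illustration:

```python
# Illustrative funnel analysis, all counts invented: locate the
# journey transitions where customers disproportionately drop off.

funnel = [
    ("visit",          10_000),
    ("sign-up",         5_400),
    ("onboarding",      5_000),
    ("first purchase",  1_400),
    ("renewal",         1_200),
]

for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    drop = 1 - next_n / n
    flag = "  <-- high-risk transition" if drop > 0.5 else ""
    print(f"{stage:14s} -> {next_stage:14s} drop-off {drop:6.1%}{flag}")
```

Rather than one blended conversion rate, this view surfaces the specific transition (here, the invented onboarding-to-purchase cliff) where friction is concentrated.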

B. Emotional Friction Points

Not all risk is operational. Some of the most expensive leakage begins at the emotional level.

  • Moments of uncertainty or confusion
  • Moments of perceived unfairness
  • Moments where trust is tested
  • Moments where customers feel unheard

Emotional friction reduces confidence — and reduced confidence lowers commitment, expansion, and advocacy.

C. Operational Root Causes

High-impact diagnostics do not stop at symptoms. They trace friction back to systemic drivers.

  • Policy-driven constraints
  • Technology integration gaps
  • Siloed data and decision rights
  • Misaligned incentives and performance metrics

Internal complexity inevitably surfaces as external customer pain. Sustainable solutions require structural insight.

D. Financial Impact Modeling

The most critical component is quantification. Friction must be translated into financial terms.

  • Revenue at risk by journey stage
  • Lifetime value erosion
  • Cost-to-serve inflation
  • Margin compression driven by service recovery

When experience breakdowns are expressed in dollars, prioritization becomes clearer and alignment accelerates.
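As a hedged, back-of-the-envelope illustration of this kind of quantification, the Python sketch below attributes only the excess drop-off (observed minus an assumed baseline) to friction and prices it in dollars. Every figure, including the baseline rates and the customer value, is invented:

```python
# Illustrative sketch, all figures invented: translate friction-driven
# excess drop-off into revenue at risk, stage by stage.

AVG_CUSTOMER_VALUE = 1_200  # assumed annual value per customer

stages = [
    # (stage, customers entering, observed drop-off, assumed baseline drop-off)
    ("onboarding", 5_000, 0.18, 0.08),
    ("activation", 4_100, 0.25, 0.12),
    ("renewal",    3_075, 0.15, 0.10),
]

total_at_risk = 0
for stage, entering, observed, baseline in stages:
    excess = max(observed - baseline, 0)          # friction-attributable share
    lost_customers = entering * excess
    at_risk = lost_customers * AVG_CUSTOMER_VALUE
    total_at_risk += at_risk
    print(f"{stage:11s} revenue at risk ~ ${at_risk:,.0f}")

print(f"total       revenue at risk ~ ${total_at_risk:,.0f}")
```

The baseline rates stand in for whatever benchmark the organization trusts; the modeling choice that matters is charging only the excess drop-off, not all churn, to experience friction.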

A high-impact diagnostic makes the invisible visible — not just emotionally, but economically.

VII. From Insight to Action: Turning Risk into Recovery

A diagnostic without activation is theater.

Insight alone does not recover revenue. Awareness alone does not restore trust. If the findings from a Customer Experience Risk & Revenue Leakage Diagnostic do not change behavior, structure, and investment decisions, then the organization has simply produced a more sophisticated report.

The goal is not understanding. The goal is recovery.

1. Capture Immediate Revenue Through Quick Wins

Every diagnostic surfaces friction points that can be resolved quickly:

  • Simplifying confusing onboarding steps
  • Clarifying pricing language
  • Reducing redundant approval gates
  • Fixing high-volume support failure points

These are not cosmetic improvements. They are revenue recovery mechanisms. When friction decreases, conversion improves. When clarity increases, hesitation declines. Early wins build organizational momentum and prove that experience discipline drives financial results.

2. Eliminate Structural Sources of Systemic Friction

Some leakage is not tactical. It is architectural.

Siloed systems. Misaligned incentives. Policy-driven complexity. Governance bottlenecks.

These require cross-functional intervention. This is where leadership courage matters. Because structural friction is usually owned by no one — and tolerated by everyone.

True recovery demands redesigning how the organization works, not just how the customer journey looks.

3. Invest in Capability to Prevent Recurrence

Experience breakdowns often trace back to capability gaps:

  • Frontline employees without decision authority
  • Teams without access to unified customer data
  • Leaders without visibility into journey-level risk metrics

If the organization cannot detect friction early, it will continue to leak revenue quietly. Capability investment turns reactive firefighting into proactive orchestration.

4. Institutionalize Experience Accountability

Lasting change requires governance.

That means:

  • Assigning executive ownership for journey health
  • Embedding experience risk metrics into performance dashboards
  • Aligning incentives with friction reduction and trust preservation

Measurement shapes behavior. When experience risk is measured financially, it stops being a “soft” concern and becomes a board-level priority.

The Shift

When organizations move from insight to action, the narrative changes.

We are not improving customer satisfaction.
We are recovering growth.
We are protecting margin.
We are strengthening trust.

A Customer Experience Risk & Revenue Leakage Diagnostic is not the finish line. It is the ignition point. What matters is what the organization does next — how quickly it acts, how boldly it redesigns, and how deeply it commits to human-centered accountability.

Because friction compounds.

But so does disciplined recovery.


VIII. The Cultural Impact

Conducting a Customer Experience Risk & Revenue Leakage Diagnostic is not just about numbers and dashboards. It is a catalyst for cultural transformation.

When an organization quantifies experience risk, it sends a clear signal: customer outcomes are inseparable from business performance.

Key Cultural Shifts

  • Finance Pays Attention: Revenue leakage is now measurable and visible, making it a board-level concern rather than an abstract notion.
  • Operations Engage: Front-line teams see how their actions directly influence financial outcomes, motivating proactive problem-solving.
  • Leadership Prioritizes: Strategic planning incorporates experience risk as a key dimension alongside cost, efficiency, and growth targets.
  • Employees Gain Clarity: Everyone understands how day-to-day decisions impact customer trust, loyalty, and revenue.

The conversation shifts from:

“How satisfied are our customers?”

To a more strategic and actionable question:

“How much growth are we leaving on the table?”

This cultural shift embeds accountability for experience across all levels of the organization. It moves customer experience from a departmental initiative to an enterprise-wide performance discipline.

Ultimately, organizations that embrace this mindset are more agile, more resilient, and more capable of sustaining profitable growth.

IX. The Leadership Imperative

Human-centered change begins with leaders who are willing to see reality clearly. A Customer Experience Risk & Revenue Leakage Diagnostic provides the lens to identify hidden friction, quantify its impact, and prioritize action.

Leadership cannot afford to rely on assumptions, anecdotal feedback, or lagging metrics. The future of growth is determined by how well the organization prevents leakage before it appears on the balance sheet.

Core Principles for Leaders

  • See Reality Clearly: Recognize that friction and trust erosion are real, measurable threats to revenue and loyalty.
  • Measure What Truly Matters: Go beyond NPS, CSAT, and churn metrics. Quantify revenue at risk and the financial impact of experience breakdowns.
  • Act Proactively: Use diagnostic insights to guide immediate interventions, structural improvements, and capability development.
  • Embed Accountability: Make experience risk a shared responsibility across functions, not a siloed initiative.

A diagnostic without leadership activation is just a report. True impact comes when insights are operationalized, turning risk into recovery and friction into opportunity.

Ultimately, leaders who embrace this approach shift the organizational conversation from:

“Are we delivering good experiences?”

To a more strategic and urgent question:

“Where are we unintentionally putting future revenue at risk, and how do we fix it?”

This is the leadership imperative: see, measure, act, and embed a culture where customer experience drives sustainable growth.

X. Closing Thought

Innovation does not fail because ideas are weak. It fails because the experience system cannot support them. A brilliant product, service, or solution cannot thrive if friction, trust gaps, or operational constraints block its path to the customer.

If you want sustainable growth, three imperatives are clear:

  1. Stop guessing: Uncover hidden friction and revenue leakage before it escalates.
  2. Stop relying on lagging indicators: Traditional metrics alone will not reveal the silent risks undermining growth.
  3. Diagnose, quantify, and act: Translate insights into immediate interventions, structural fixes, and capability investments.

Because what you cannot see will eventually show up — in churn, in margin compression, and in lost relevance. Waiting until it appears on financial statements is too late.

A Customer Experience Risk & Revenue Leakage Diagnostic gives organizations the clarity, rigor, and foresight needed to protect revenue, strengthen trust, and enable innovation to scale successfully.

In the end, the diagnostic is not just a tool. It is a strategic mindset: measure what matters, see reality, and act decisively. Those who embrace it will not just survive disruption — they will thrive in it.


Reserve your Customer Experience Risk & Revenue Leakage Diagnostic with Braden Kelley today


FAQ: Customer Experience Risk & Revenue Leakage Diagnostic

1. What exactly is a Customer Experience Risk & Revenue Leakage Diagnostic?

It is a structured assessment that identifies friction points across the customer journey, measures the financial impact of experience breakdowns, and prioritizes actions to reduce risk and recover lost revenue. Unlike traditional surveys, it connects customer experience directly to measurable business outcomes.

2. How does this diagnostic differ from traditional CX metrics like NPS or CSAT?

Traditional metrics are lagging indicators that report what has already happened. A diagnostic goes deeper by uncovering hidden sources of friction and trust erosion, quantifying revenue at risk, and linking operational and emotional touchpoints to tangible financial consequences. It transforms CX from a qualitative measure into a strategic risk and growth tool.

3. Who in the organization benefits from this diagnostic?

Everyone from leadership to front-line employees benefits. Leaders gain visibility into financial risk and opportunity, operations teams understand where to focus improvements, and employees see how daily actions impact customer trust and revenue. It aligns the entire organization around measurable experience outcomes.


Reserve your Customer Experience Risk & Revenue Leakage Diagnostic with Braden Kelley today


Image credits: ChatGPT, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.


Top 10 Human-Centered Change & Innovation Articles of February 2026

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are February’s ten most popular innovation posts:

  1. Three Myths That Kill Change and Transformation — by Greg Satell
  2. Why a Customer Experience Audit is Non-Negotiable in 2026 — by Braden Kelley
  3. Innovation Lessons from the 50 Most Admired Companies of 2026 — by Braden Kelley
  4. Is Your Customer Experience a Lie? — by Braden Kelley
  5. Important or Urgent? — by Stefan Lindegaard
  6. The Greatest Inventor You’ve Never Heard of — by John Bessant
  7. 5 Simple Keys to Becoming a Powerful Communicator — by Greg Satell
  8. Do You Have What It Takes to be a Visionary? — Exclusive Interview with Mark C. Winters
  9. Temporal Agency – How Innovators Stop Time from Bullying Them — by Art Inteligencia
  10. Causal AI – Moving Beyond Prediction to Purpose — by Art Inteligencia

BONUS – Here are five more strong articles published in January that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last five years:


Why Change Doesn’t Have to Start at the Top


GUEST POST from Greg Satell

In 2004 I found myself running a major news organization during the Orange Revolution in Ukraine. It was one of those moments when the universe opens up, reveals a bit of itself and you realize the world doesn’t work the way you thought it did. What struck me at the time was that nobody with any conventional form of power had any ability to shape events at all.

One of the myths that is constantly repeated is that change needs to start at the top. Clearly that is not true. It wasn’t true of the Color Revolutions that spread across Eastern Europe. Nor was it true of social movements like the fight for LGBT rights. Despite what you may have heard, it doesn’t hold true for organizations either.

What is true is that if you are going to bring about genuine change you need to influence institutions and that means you need, at some point, to involve senior leaders, but it rarely starts with them. The myth that change has to start at the top is a copout — a reason to do nothing when you can do something. Make no mistake. Change can come from anywhere.

Weaving Webs of Influence

Movements, as the name implies, are kinetic. They start somewhere and they end up somewhere else. That’s one reason why so many successful change efforts become misunderstood. People look back at an event like the 1963 March on Washington and think that’s what made the civil rights movement successful. Nothing could be further from the truth. That wasn’t what built the movement; it was part of the end game.

Consider that the first “March on Washington,” the Woman Suffrage Procession of 1913, was a disaster. None of the others since 1963 did much either. The civil rights march came after nearly a decade of boycotts, sit-ins, Freedom Rides and other tactics that built the movement before it finally found its moment. Still, it’s the moment that people remember.

In much the same way, whenever we see a successful transformation we look to the actions of leaders. We see a CEO who gave a speech, a marketer who came up with a big product idea or an engineer who took a project in a new direction. These events are real, but they rarely, if ever, appear out of nowhere. They are products of webs of influence.

When we look more closely, we inevitably find that the CEO was inspired to give the pivotal speech from a conversation he had with his daughter. The marketer got the initial idea for the campaign from a junior team member. Or the engineer changed the direction of the project after a fateful encounter he had in the cafeteria.

Our decisions are the product of complex systems. Anything can start anywhere. Don’t let anyone tell you differently.

Going to Where the Energy Is

Transformations, in retrospect, often seem inevitable, even obvious. Yet they don’t start out that way. The truth is that it is small groups, loosely connected, but united by a common purpose that drives transformation. So the first thing you want to do is identify your apostles — people who are already excited about the possibilities for change.

For example, in his efforts to reform the Pentagon, Colonel John Boyd began every initiative by briefing a group of collaborators he called the “Acolytes,” who would help hone and sharpen the ideas. He then moved on to congressional staffers, elected officials and the media. By the time general officers were aware of what he was doing, he had too much support to ignore.

In a similar vein, a massive effort to implement lean manufacturing methods at Wyeth Pharmaceuticals began with one team at one factory, but grew to encompass 17,000 employees across 25 sites worldwide and cut manufacturing costs by 25%. The campaign that overthrew Serbian dictator Slobodan Milošević started with just 5 kids in a coffee shop.

One advantage to starting small is that you can identify your apostles informally, even through casual conversations. In skills-based transformations, change leaders often start with workshops and see who seems enthusiastic or comes up after the session. Your apostles don’t need to have senior positions or special skills, they just have to be passionate.

There’s something about human nature that, when we’re passionate about an idea, makes us want to go convince the skeptics. Don’t do that. Start with people who want your idea to succeed. If you feel the urge to convince or persuade, that’s a sign that you either have the wrong idea or the wrong people.

“You have to go where the energy is,” John Gadsby, who built a movement for process improvement inside Procter & Gamble that has grown to encompass 60,000 employees, told me. “We’ll choose energy and excitement and enthusiasm over the right position, or the person at the right leadership level, or the person whose job it is supposed to be to do that.”

Mobilizing People To Influence Institutions

In the early 1990s, writer and activist Jeffrey Ballinger published a series of investigations about Nike’s use of sweatshops in Asia. People were shocked by the horrible conditions that workers — many of them children — were subjected to. In most cases, the owners lived outside the countries where the factories were located and had little contact with their employees.

At first, Nike’s CEO, Phil Knight, was defiant. “I often reacted with self-righteousness, petulance, anger. On some level I knew my reaction was toxic, counterproductive, but I couldn’t stop myself,” he would later write in his memoir, Shoe Dog. He pointed out that his company didn’t own the factories, that he’d worked with the owners to improve conditions and that the stories, as gruesome as they were, were exceptions.

The simple truth is that change rarely, if ever, starts at the top because it is people with power who create the status quo. They are attached to what they’ve built and take pride in their accomplishments, just like the rest of us. That’s why, to bring about genuine change — change that lasts — you need to mobilize people to influence institutions (or those, like Knight, who wield institutional power).

Eventually, that’s what happened at Nike. The protests took their toll. “We had to admit,” Knight remembered, “we could do better.” Going beyond its own operations, the company helped establish the Fair Labor Association and published a comprehensive report on the factories in its supply chain. Today, the company’s track record may not be perfect, but it’s become more a part of the solution than a part of the problem.

Change Is Never Top-Down Or Bottom-Up

At a pivotal moment during the height of the civil rights movement, Robert Kennedy, Attorney General of the United States and brother to the President, would turn to the activist John Lewis and say, “John, the people, the young people of the SNCC, have educated me. You have changed me. Now I understand.”

Lewis, just a young kid in his twenties at the time, was himself the product of webs of influence. He was shaped by mentors like Jim Lawson and Kelly Miller Smith, as well as by peers such as Diane Nash, Bernard Lafayette and James Bevel. They, in turn, influenced others to get out, protest and shape the minds of people like Robert Kennedy.

As I explain in Cascades, transformation isn’t top-down or bottom-up, but happens from side-to-side. You can find the entire spectrum — from active support to active resistance — at every level. The answer doesn’t lie in any specific strategy or initiative, but in how people are able to internalize the need for change and transfer ideas through social bonds.

Change never happens all at once and can’t simply be willed into existence. The best way to bring it about is to empower those who already believe in change to bring in those around them. That’s what’s key to successful transformations. A leader’s role is not to plan and direct action, but to inspire and empower belief.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Has AI Killed Design Thinking?

Or Just Removed Its Excuses?

LAST UPDATED: March 2, 2026 at 5:13 PM


by Braden Kelley and Art Inteligencia


I. The Question Everyone Is Whispering

Something fundamental has changed in how products are created.

Artificial intelligence can now generate working software in minutes. Designers can move from an idea to a functional prototype without waiting for engineering. Engineers can generate interface concepts, user flows, and even early product ideas with a few well-crafted prompts.

The traditional product development cycle — design, then build, then test — is collapsing into something faster, messier, and far more fluid.

In the past, the biggest constraint in innovation was the cost and time required to build something. Today, AI dramatically reduces that barrier. Entire features, experiments, and even applications can be created almost instantly.

Which raises an uncomfortable question that many product leaders, designers, and engineers are quietly asking:

If we can ship almost immediately, do we still need design thinking?

At first glance, the answer might seem obvious. Design thinking was created to help teams understand people, define the right problems, and avoid building the wrong solutions. Those goals have not disappeared.

But when the cost of building approaches zero, the role of design inevitably changes. The traditional pacing of discovery, ideation, prototyping, and testing begins to compress. The boundaries between designer and engineer begin to blur.

And as those boundaries dissolve, the question is no longer simply whether design thinking still matters.

The deeper question is whether the discipline itself must evolve to survive in a world where almost anyone can turn an idea into working software.

II. Design Thinking Was Built for a World of Scarcity

To understand how artificial intelligence is reshaping product creation, it helps to remember the environment in which design thinking originally emerged.

Design thinking did not appear because organizations suddenly discovered empathy or creativity. It emerged because building things was expensive, slow, and risky. Every product decision carried significant cost, and mistakes could take months or years to correct.

In that world, organizations needed a structured way to reduce uncertainty before committing engineering resources. Design thinking provided that structure.

Its now-famous stages helped teams move deliberately from understanding people to building solutions:

  • Empathize — deeply understand the people you are designing for.
  • Define — frame the real problem worth solving.
  • Ideate — generate a wide range of possible solutions.
  • Prototype — create rough representations of potential ideas.
  • Test — validate whether those ideas actually work for people.

The goal was simple: avoid spending months building something no one actually needed.

Design thinking slowed teams down in the right places so they could move faster later. It created space for exploration before the heavy machinery of engineering was set in motion.

But this entire framework assumed one critical constraint:

Building was the most expensive part of innovation.

Prototypes were often static mockups. Experiments required engineering time. Even small product changes could take weeks or months to ship.

In other words, design thinking was optimized for a world where the biggest risk was building the wrong thing.

Today, AI is rapidly changing that assumption. When working software can be generated in minutes rather than months, the bottleneck shifts — and the role of design must evolve with it.

III. AI Has Flipped the Innovation Constraint

For most of the history of digital product development, the limiting factor in innovation was the ability to build. Even the best ideas had to wait in line for scarce engineering resources, long development cycles, and complex release processes.

Artificial intelligence is rapidly dismantling that constraint.

Today, AI tools can generate functional code, working interfaces, and interactive prototypes in minutes. What once required a team of specialists and weeks of effort can often be produced by a single individual in an afternoon.

Designers can now:

  • Create interactive prototypes that behave like real products
  • Generate front-end code directly from design concepts
  • Rapidly explore multiple product directions

Engineers can now:

  • Generate user interfaces and layouts
  • Experiment with product concepts before committing to full builds
  • Quickly iterate on product experiences

The barrier between idea and implementation is shrinking dramatically.

As a result, the core constraint in innovation is no longer the ability to build something. The new constraint is the ability to decide what should actually be built.

When creation becomes cheap, judgment becomes the scarce resource.

Organizations can now generate more ideas, features, and experiments than they have the capacity to evaluate thoughtfully. The risk is no longer simply building the wrong thing slowly.

The risk is building thousands of things quickly without enough clarity about which ones actually matter.

This shift fundamentally changes the role of design. Instead of primarily helping teams avoid costly mistakes in development, design increasingly becomes the discipline that helps organizations navigate overwhelming possibility.

IV. The Blurring of Roles: Designers Reach Forward, Engineers Reach Back

One of the most profound effects of AI in product development is the erosion of traditional professional boundaries.

For decades, the technology industry operated with relatively clear separations of responsibility. Designers focused on user needs, interaction models, and visual systems. Engineers translated those designs into working software. Product managers coordinated priorities and timelines between the two.

That structure was largely a reflection of technical limitations. Designing and building required specialized tools, knowledge, and workflows that made cross-disciplinary work difficult.

AI is rapidly dissolving those barriers.

Designers can now reach forward into the domain that once belonged exclusively to engineering. With AI-assisted tools, they can generate working interfaces, produce front-end code, and simulate complex user interactions without waiting for implementation.

At the same time, engineers can reach backward into design. AI systems can help them generate layouts, propose interface structures, and explore experience flows that once required specialized design expertise.

The result is a new kind of creative overlap:

  • Designers who can prototype in code
  • Engineers who can explore experience design
  • Product creators who move fluidly between disciplines

The traditional model of work moving through a linear chain — research to design to engineering — begins to give way to a far more integrated creative process.

The future product creator is not defined by a job title, but by the ability to move fluidly between understanding problems and building solutions.

This does not mean design expertise or engineering skill become less important. If anything, the opposite is true. As tools make it easier for everyone to participate in creation, the depth of real craft becomes more visible and more valuable.

But it does mean the rigid boundaries between “designer” and “builder” are beginning to dissolve, creating a new generation of hybrid creators who can move seamlessly between imagining, designing, and shipping experiences.

V. The Death of the Handoff

For decades, most product development operated like a relay race. Work moved from one team to the next through a series of formal handoffs.

Researchers gathered insights and passed them to designers. Designers created wireframes and mockups that were handed to engineering. Engineers translated those designs into working software and eventually passed the finished product to testing and operations.

Each transition introduced delays, misinterpretations, and loss of context. The original understanding of the problem often became diluted as it traveled through the system.

Artificial intelligence is accelerating the collapse of this model.

When individuals can move rapidly from idea to prototype to functional product, the need for rigid handoffs begins to disappear. A single person can now:

  • Explore a user problem
  • Design a potential solution
  • Generate working code
  • Launch an experiment

Instead of waiting for work to pass from one discipline to another, creators can stay connected to the entire lifecycle of an idea.

The distance between insight and implementation is shrinking.

This shift has profound implications for how innovation happens inside organizations. Instead of large teams coordinating complex handoffs, smaller groups — or even individuals — can rapidly test ideas and learn from real-world feedback.

Product development begins to look less like an industrial assembly line and more like a creative studio, where ideas are explored, built, and refined continuously.

The most effective teams in this environment will not simply move faster. They will maintain ownership of ideas from the moment a problem is discovered all the way through to the moment a solution is experienced by real people.

VI. What AI Actually Kills

Artificial intelligence is not killing design thinking.

What it is killing are many of the habits that organizations adopted in the name of design thinking but that were never truly about understanding people or solving meaningful problems.

For years, some teams have mistaken the appearance of innovation for the practice of it. Workshops replaced experiments. Sticky notes replaced decisions. Slide decks replaced prototypes.

When building was slow and expensive, these behaviors were often tolerated because teams needed time to align before committing resources. But in a world where working solutions can be generated almost instantly, those habits quickly become friction.

AI removes the excuses that allowed these patterns to persist.

Process Theater

Innovation workshops that generate energy but not outcomes become difficult to justify when teams can build and test ideas immediately.

Endless Ideation

Brainstorming sessions that produce dozens of ideas without committing to experiments lose their value when ideas can be rapidly turned into prototypes and evaluated in the real world.

Documentation Instead of Exploration

Detailed reports, long strategy decks, and static artifacts once helped communicate ideas across teams. But when AI allows concepts to be expressed through working experiences, documentation becomes less important than experimentation.

Safe Innovation

Perhaps most importantly, AI challenges organizations that use process as a shield against risk. When it becomes easy to test bold ideas quickly and cheaply, avoiding experimentation becomes a choice rather than a necessity.

AI doesn’t eliminate design thinking. It eliminates the distance between thinking and doing.

The organizations that thrive in this environment will not be the ones with the most polished innovation processes. They will be the ones that are most willing to replace discussion with discovery and ideas with experiments.


VII. The New Role of Design: Decision Velocity

When the cost of building drops dramatically, the nature of competitive advantage changes.

In the past, organizations succeeded by efficiently transforming ideas into products. Engineering capacity, technical expertise, and operational discipline were often the primary constraints.

But when AI can generate working software, prototypes, and experiments almost instantly, the challenge is no longer how quickly something can be built.

The challenge becomes how quickly and wisely teams can decide what is actually worth building.

In an AI-driven world, innovation speed is no longer about development velocity — it is about decision velocity.

This is where the role of design evolves.

Design shifts from primarily producing artifacts — wireframes, mockups, and prototypes — to guiding the choices that shape meaningful innovation.

Designers increasingly become the people who help teams:

  • Frame the right problems to solve
  • Clarify human needs and motivations
  • Prioritize which ideas deserve experimentation
  • Interpret signals from real-world user behavior

In other words, design becomes less about shaping the interface of a product and more about shaping the direction of learning.

When organizations can generate thousands of potential solutions, the real value lies in identifying the small number that actually create meaningful value for people.

Designers, at their best, help organizations navigate that complexity. They connect technology to human context, helping teams avoid the trap of building faster without thinking better.

In the AI era, design is not slowing innovation down. It is helping organizations move quickly without losing their sense of where they should be going.

VIII. From Design Thinking to Design Doing

As artificial intelligence compresses the distance between idea and implementation, the nature of design practice begins to change. The emphasis shifts away from structured stages and toward continuous experimentation.

Traditional design thinking frameworks helped teams organize their thinking before committing to build. But in an AI-enabled environment, building itself becomes part of the thinking process.

Instead of long cycles of analysis followed by development, teams can now explore ideas directly through working prototypes and rapid experiments.

The most effective teams no longer separate thinking from building. They think by building.

This shift marks a move from design thinking to what might be called design doing.

In this model, learning happens through fast cycles of creation, feedback, and refinement. Ideas are not debated endlessly in workshops or captured in lengthy documents. They are explored through tangible experiences that can be observed, tested, and improved.

The practical differences begin to look like this:

Traditional Model → AI-Enabled Model

  • Workshops and brainstorming sessions → Rapid experiments and live prototypes
  • Personas and research summaries → Behavioral data and real-world signals
  • Concept mockups → Functional prototypes
  • Long planning cycles → Continuous learning loops

None of this diminishes the importance of understanding people. If anything, the need for deep human insight becomes even more important as the pace of experimentation accelerates.

What changes is how that understanding is expressed. Instead of existing primarily as documents or presentations, insight becomes embedded directly into the experiences teams create and test.

In an AI-native organization, design is no longer a phase that happens before development begins. It becomes an ongoing activity woven directly into the act of building and learning.

IX. Human Trust Becomes the New Design Material

As artificial intelligence accelerates the speed of building, the most important design challenges begin to shift away from usability and toward something deeper: trust.

When products can be created, modified, and deployed almost instantly, the risk is not simply poor interface design. The risk is creating experiences that feel disconnected from human values, human context, and human expectations.

AI makes it easier than ever to generate functionality. But it does not automatically ensure that what is generated is responsible, understandable, or aligned with the needs of the people who will use it.

In an AI-driven world, the most important design material is no longer pixels or screens — it is human trust.

This raises a new set of responsibilities for designers, engineers, and product leaders alike.

Teams must think carefully about questions such as:

  • Do people understand what the system is doing?
  • Are decisions being made transparently?
  • Does the experience respect human autonomy?
  • Does the technology reinforce or erode confidence?

As AI systems become more powerful, the danger is not just that they might fail. The danger is that they might succeed in ways that quietly undermine the relationship between organizations and the people they serve.

Design therefore becomes a critical safeguard. It ensures that rapid technological capability does not outpace thoughtful consideration of human consequences.

In this sense, the role of design expands beyond shaping products. It becomes the discipline that ensures technology remains grounded in human meaning, responsibility, and trust.

X. The Future: Designers Who Ship, Engineers Who Empathize

As AI blurs the traditional boundaries between design and engineering, the most valuable creators in the future will be those who can move fluidly between imagining, designing, and building.

Designers will need to ship working products, not just static prototypes. Engineers will need to empathize deeply with users, understanding problems and shaping experiences that align with human needs.

The new hybrid product creator embodies both curiosity and capability, bridging the gap between thinking and doing. They are able to:

  • Rapidly translate insights into working solutions
  • Experiment and learn from real-world user behavior
  • Balance technical feasibility with human desirability
  • Maintain alignment between strategy, design, and execution

In this new landscape, design thinking does not disappear — it evolves. AI removes many of the barriers that previously prevented designers and engineers from collaborating fully and iterating quickly.

The organizations that succeed will be those where everyone has the ability to both understand humans and act on that understanding at the speed of AI.

The future belongs to hybrid creators who can navigate ambiguity, make fast decisions, and embed human trust into every experiment. In such a world, innovation is no longer the domain of specialists — it is the responsibility of anyone capable of connecting insight with action.

XI. The Real Question Leaders Should Be Asking

The debate is often framed as a dramatic question: “Has AI killed design thinking?” But this framing misses the deeper challenge facing organizations today.

The real question is not whether design thinking survives — it is whether organizations are prepared to operate in a world where anyone can turn ideas into working products almost instantly.

In this AI-accelerated environment, success depends less on the speed of coding or the elegance of design frameworks. It depends on human judgment, understanding, and alignment.

Leaders must ask themselves:

  • Do our teams know what problems are truly worth solving?
  • Can we prioritize experiments that create real human value?
  • Are we embedding human trust and ethical consideration into everything we build?
  • Are our designers and engineers equipped to operate across traditional boundaries?

In this new era, the organizations that thrive will not be the ones with the fastest developers or the slickest design processes.

They will be the organizations that can rapidly identify meaningful opportunities, make thoughtful decisions, and maintain human-centered principles while moving at the speed of AI.

Innovation will no longer belong to the people who can code. It will belong to the people who understand humans well enough to know what should be built in the first place.

The role of leadership is no longer just managing workflows — it is shaping the environment in which hybrid creators can think, act, and build responsibly at unprecedented speed.

New Tools for the New Design Reality

To help you find problems worth solving and to design and execute experiments, I created a couple of visual and collaborative tools to help you thrive in this new reality. Download them both from my store and enjoy!

  1. Problem Finding Canvas — Only $4.99 for a limited time
  2. Experiment Canvas — FREE

FAQ: AI and the Evolution of Design Thinking

1. Has AI made design thinking obsolete?
No. AI has not killed design thinking, but it has changed the context in which it operates. Traditional design thinking frameworks assumed that building was slow and expensive. With AI accelerating the creation of prototypes and software, design thinking evolves from a staged process into a continuous cycle of experimentation and decision-making.

2. How are the roles of designers and engineers changing with AI?
AI blurs the traditional boundaries between designers and engineers. Designers can now generate working code and functional prototypes, while engineers can explore user experience and interface design. The future favors hybrid creators who can both understand human needs and rapidly implement solutions.

3. What becomes the main focus of design in an AI-driven product environment?
The primary focus shifts from producing artifacts to guiding decision-making and protecting human trust. Design becomes the discipline that helps teams prioritize meaningful experiments, interpret real-world feedback, and ensure that rapid technological development remains aligned with human values and needs.


Image credits: ChatGPT

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.

Celebrate Your Small Team Wins

GUEST POST from David Burkus

Progress is a powerful human motivator. But unfortunately, many teams mark progress only when projects are complete or big milestones are crossed. They don’t often celebrate small wins that build up to those big completions.

But research suggests that regularly celebrating small wins is a more potent way to keep teams engaged and motivated than waiting for big milestones. In a landmark study from Teresa Amabile, participants were most energized and motivated not in the aftermath of a big celebration, but when they had little breakthroughs, when they found small wins to celebrate.

In this article, we’ll outline four keys to celebrate small wins on teams more powerfully, so that small wins can have a BIG effect on your team’s motivation.

1. Celebrate Daily

The first key to celebrating small wins on teams is to celebrate daily. It’s important to have a ritual on your team where wins are celebrated on a regular basis, preferably daily. Celebrating daily has two big effects on teams. The first is that the ritual becomes embedded in the culture, to the point that the day feels incomplete without a celebration moment. The second is that it reinforces the message that a win is a win no matter how small, which gradually encourages the team to look beyond big milestones and appreciate smaller victories.

There are a few good ways to celebrate daily. You could end each day with one team member sharing their win, rotating to a new person every day. Or, if you have the time, you could share one win per person every day. You could also make it a game by trying to find three wins each day and seeing how long into the day it takes to get there. If you’re on site, hang a whiteboard where everyone can see it. If you’re remote or hybrid, make it a dedicated channel in Slack, Teams, or whatever communication tool you use. Regardless, celebrate daily to reinforce the idea that there is something worth celebrating every single day.

2. Celebrate Progress

The second key to celebrating small wins on teams is to celebrate progress. As noted above, progress is a powerful human motivator. Many teams measure progress only by external markers like milestones or project completions. Those markers can still be highly motivating, and they offer an easy way to connect small wins to progress. Even a very small victory, once it’s listed, can be framed in terms of how it brings the team closer to a significant milestone or to project completion.

But savvy leaders connect small wins to internal progress as well. Many individual victories listed during daily small-win sessions will be more indicative of that person’s improved skills or career progress. So, make the effort to remind the person celebrating that the win would not have happened without the growth you’ve noticed in a specific area over time, and better still, point to the future growth that win suggests. Between external and internal markers of progress, it should be simple to connect every victorious moment to your team’s momentum.

3. Celebrate Contributions

The third key to celebrating small wins on teams is to celebrate contributions. Work is teamwork. Most victories are a team effort, even small wins. A teammate’s contribution may have been volunteering to help on a specific project, or simply handing off their work in a timely fashion so the next person could build upon it. Some people do have small wins in isolation, but more likely someone else’s effort contributed in some way to that person’s success. So, when one teammate is stating their win, make sure they’re also expressing gratitude to the teammates who helped them.

Ideally, teammates learn over time to use small-win celebrations as a gratitude exercise as well. But as a leader, you may need to model the way during your own shares and ask specific questions that draw out the contribution when others share. Over time, that should turn celebrating contributions into a regular habit on the team. And the team will internalize their interdependence on each other and celebrate their collaborations as well.

4. Celebrate Impact

The fourth key to celebrating small wins on teams is to celebrate impact: the impact that a win will have not on the team itself, but on the people that team serves. Progress is a potent motivator, but it’s even more potent when combined with a sense of purpose. And the clearest, most powerful way to help employees feel purpose in their work is to connect their work to an act of service; the more specific the connection, the better. Leaders ought to provide a concise answer to the question “Who is served by the work that we do?” The “who” could be customers or end users, stakeholders, or even other teams inside the organization who are enabled by the work your team does.

So, when teams celebrate small wins, help them connect each win to how it serves those beneficiaries. Hopefully, they notice the connection on their own, but if not, you may need to ask specific questions that draw that connection out. Ending each celebration session with a connection to impact and purpose reminds people that their work matters, and hence that their wins matter as well.

In the end, that’s what most individuals and teams need to be motivated by their work. They need to know their work matters. And a daily ritual of celebrating small wins (and the contributions, progress, and impact of those wins) becomes a daily reminder of what matters. And that should motivate everyone on the team to do their best work ever.

Image credit: Pexels

Originally published at https://davidburkus.com on March 6, 2023.
