
The Agentic Paradox

Why Giving AI More Autonomy Requires Us to Give Humans More Agency

LAST UPDATED: April 10, 2026 at 7:11 PM


by Braden Kelley and Art Inteligencia


The Rise of the Machine “Doer”

For the past few years, we have lived in the era of Generative AI — a world of sophisticated chatbots and creative assistants that respond to our prompts. But as we move deeper into 2026, the landscape has shifted. We are now entering the age of Agentic AI. These are not just tools that talk; they are autonomous systems capable of executing complex workflows, making real-time decisions, and acting on our behalf across digital ecosystems.

On the surface, this promises the ultimate efficiency. We imagine a future where the “busy work” vanishes, leaving us free to innovate. However, a troubling Agentic Paradox has emerged: as we grant machines more autonomy to act, many humans are finding themselves with less agency. Instead of feeling liberated, workers often feel like they are merely “babysitting” algorithms or reacting to a relentless stream of machine-generated outputs.

This disconnect creates a high-stakes leadership challenge. If we focus solely on the autonomy of the machine, we risk creating an “algorithmic anxiety” that stifles the very human creativity we need to thrive. To succeed in this new era, leaders must realize that the more powerful our AI agents become, the more we must intentionally “upgrade” the agency, authority, and strategic focus of our people.

The Thesis: The goal of innovation in 2026 is not to build the most autonomous machine, but to build a human-centered ecosystem where AI agents manage the tasks and empowered humans manage the intent.

The Hidden Cost: The Cognitive Load Crisis

The promise of Agentic AI was a reduction in workload, but for many organizations, the reality has been a shift in the type of work rather than a reduction of it. This has birthed the Cognitive Load Crisis. While an autonomous agent can process data and execute tasks 24/7, it lacks the contextual wisdom to understand the nuances of organizational culture or ethical gray areas. This leaves the human “orchestrator” in a state of perpetual high-alert.

Instead of performing deep, meaningful work, leaders and employees are becoming trapped in the Supervision Trap. They are forced to manage a relentless firehose of machine-generated notifications, approvals, and “check-ins.” This creates a fragmented mental state where the human mind is constantly context-switching between different agent streams, leading to a unique form of 2026 burnout — digital exhaustion without the satisfaction of tactile achievement.

Furthermore, as AI agents take over more of the “doing,” we see an erosion of Deep Work. When every minute is spent verifying the output of an algorithm, the quiet space required for radical innovation and strategic foresight vanishes. We are effectively trading our long-term creative capacity for short-term operational speed.

  • Notification Fatigue: The mental tax of being the constant “emergency brake” for autonomous systems.
  • Loss of Intuition: The danger of becoming so reliant on agentic data that we lose our “gut feel” for the market.
  • The Feedback Loop: A system where humans spend more time managing machines than mentoring people.

To break this cycle, we must stop treating AI agents as simple productivity tools and start treating them as entities that require a new architecture of human attention. If we don’t manage the cognitive load, our most talented people will eventually shut down, leaving the “Magic Makers” of our organization feeling like mere cogs in a machine-led wheel.

Agentic Paradox Spectrum Infographic

Redefining Roles: From “The Conscript” to “The Architect”

As the landscape of work shifts, so too must our understanding of how individuals contribute to the innovation ecosystem. In my work on the Nine Innovation Roles, I’ve often highlighted how different archetypes fuel organizational growth. In this agentic age, we are seeing a dramatic migration of these roles. If we are not intentional, our best people will default into the role of The Conscript — those who are merely drafted into service to support the AI’s agenda, performing the monotonous tasks of verification and data cleanup.

The goal of a human-centered transformation is to automate the role of the “Conscript” and elevate the human into the role of The Architect or The Magic Maker. When the AI handles the heavy lifting of execution, the human is finally free to focus on Intent. This is where true agency resides. Agency is not the ability to do more; it is the power to decide what is worth doing and why it matters to the human beings we serve.

However, there is a dangerous “Agency Gap” emerging. If an organization implements AI agents without redefining human job descriptions, employees lose their sense of ownership. When the machine becomes the primary creator, the human “spark” is extinguished. We must ensure that AI serves as the support staff for human intuition, not the other way around.

The Migration of Value

The AI Agent Role → The Human Agency Role

  • The Conscript (AI agent): handles repetitive execution and data synthesis → The Architect (human): designs the systems and ethical frameworks for the AI.
  • The Facilitator (AI agent): coordinates schedules and manages basic workflows → The Revolutionary (human): identifies the “radical” shifts the AI isn’t programmed to see.
  • The Specialist (AI agent): performs deep-dive technical analysis at scale → The Magic Maker (human): applies empathy and storytelling to turn data into a movement.

By clearly delineating these roles, leaders can close the Agency Gap. We must empower our teams to move away from “monitoring” and toward “orchestrating.” This transition is the difference between a workforce that feels obsolete and one that feels essential.

Agentic Workforce Migration Infographic

FutureHacking™ the Cognitive Workflow

To navigate the complexities of 2026, organizations cannot rely on reactive strategies. We must use FutureHacking™ — a collective foresight methodology — to map out how the relationship between human intelligence and agentic automation will evolve. This isn’t just about predicting technology; it’s about engineering the “Human-Agent Interface” so that it scales without crushing the human spirit.

The core of this approach involves identifying the Innovation Bonfire within your team. In this metaphor, the AI agents are the fuel — abundant, powerful, and capable of sustaining a massive output. However, the humans must remain the spark. Without the human spark of intent and empathy, the fuel is just a cold pile of logs. FutureHacking™ allows teams to visualize where the “fuel” might be smothering the “spark” and adjust the workflow before burnout sets in.

By engaging in collective foresight, teams can proactively decide which cognitive territories are “Human-Core.” These are the areas where we intentionally limit AI autonomy to preserve our creative agency and cultural identity. It’s about choosing where we want the machine to lead and where we require a human to hold the compass.

  • Mapping the Friction: Identifying which agent-led tasks are creating the most mental “drag” for the team.
  • Defining Non-Negotiables: Establishing which parts of the customer and employee experience must remain 100% human-centric.
  • Intent Modeling: Shifting the focus from “What can the agent do?” to “What outcome are we trying to hack for the future?”

When we FutureHack our workflows, we move from being passive recipients of technological change to being the active architects of our organizational destiny. We ensure that as the machine gets smarter, our collective human intelligence becomes more focused, not more fragmented.

Framework: The “Agency First” Operating Model

Building a resilient organization in the age of Agentic AI requires more than just new software; it requires a new operating philosophy. We must move away from a model of Machine Management and toward a model of Intent Orchestration. This framework provides three critical steps to ensure that human agency remains the primary driver of your business value.

1. Cognitive Offloading, Not Task Dumping

The goal of automation should be to reduce the mental noise for the employee, not just to move a task from a human to a machine. If a human still has to track, verify, and worry about every step the agent takes, the cognitive load hasn’t decreased — it has merely changed shape.
The Strategy: Design “set and forget” guardrails that allow agents to operate within a defined ethical and operational “sandbox,” only alerting the human when a decision falls outside of those parameters.
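To make the guardrail idea concrete, here is a minimal sketch in Python. The action names, spend threshold, and escalation hook are all hypothetical illustrations for this article, not references to any specific agent platform:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Guardrails:
    """The operational 'sandbox' the agent may act in without asking."""
    max_spend: float = 500.0
    allowed_actions: frozenset = frozenset(
        {"send_email", "update_record", "schedule_meeting"}
    )

@dataclass(frozen=True)
class AgentAction:
    name: str
    cost: float = 0.0

def dispatch(action: AgentAction, rails: Guardrails, escalate) -> str:
    """Auto-approve actions inside the sandbox; hand everything else to a human."""
    if action.name in rails.allowed_actions and action.cost <= rails.max_spend:
        return "auto-approved"   # the human never hears about this one
    return escalate(action)      # out of bounds: interrupt the human
```

The design point is the inversion of attention: the human hears nothing about the thousands of in-bounds actions and is interrupted only for the exceptions.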

2. The “Human-in-the-Loop” Upgrade

We must shift the role of the worker from Monitor to Mentor. In the old model, the human checks the machine’s homework for errors. In the “Agency First” model, the human coaches the agent on why certain decisions are better than others, treating the AI as an apprentice. This reinforces the human’s position as the source of wisdom and authority, preventing the “Conscript” mentality.

3. Intent-Based Leadership

Management must evolve to focus on the Intent rather than the Activity. In a world where agents can generate infinite activity, “busyness” is no longer a proxy for value. Leaders must empower their teams to spend their time defining the “Commander’s Intent” — the high-level objectives and human-centered outcomes that the AI agents must then figure out how to achieve.

Intent Based Leadership Blueprint Infographic

The Agency Audit: Ask your team this week: “Does this new AI agent give you more time to think strategically, or does it just give you more machine-generated work to manage?” The answer will tell you if you are facing an Agentic Paradox.

Conclusion: Leading the Human-Centered Revolution

The true test of leadership in 2026 is not how quickly you can deploy autonomous agents, but how effectively you can protect and amplify the human spirit within your organization. As we navigate the Agentic Paradox, we must remember that technology is a force multiplier, but it requires a human “integer” to multiply. Without a clear sense of agency, even the most advanced AI becomes a source of friction rather than a source of freedom.

By addressing the Cognitive Load Crisis and intentionally moving our teams out of “Conscript” roles and into “Architectural” ones, we do more than just improve efficiency — we future-proof our culture. We ensure that our organizations remain places of meaning, creativity, and purpose.

The “Year of Truth” demands that we be honest about the mental tax of automation. It calls on us to use FutureHacking™ not just to map out our tech stacks, but to map out our human potential. The companies that win the next decade won’t be those with the smartest agents; they will be the ones that used those agents to give their people the time and agency to be truly, radically human.

“Innovation is a team sport where the machines play the support roles so the humans can score the points.”

Are you ready to hack your agentic future?

Frequently Asked Questions

What is the primary difference between Generative AI and Agentic AI?

Generative AI focuses on creating content (text, images, code) based on human prompts. Agentic AI goes a step further by having the autonomy to execute multi-step workflows, make decisions, and interact with other systems to complete a goal without constant human intervention.
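The distinction can be sketched in a few lines of illustrative Python. The `model` and `tools` objects here are hypothetical stand-ins for this article, not a real API:

```python
def generative_step(prompt, model):
    """Generative AI: one human prompt in, one artifact out."""
    return model(prompt)

def agentic_run(goal, model, tools, max_steps=10):
    """Agentic AI: the system itself plans, acts through tools, and
    observes the results in a loop until it decides the goal is met."""
    history = []
    for _ in range(max_steps):
        decision = model(goal, history)    # the agent picks the next tool, or "done"
        if decision == "done":
            break
        history.append(tools[decision]())  # act, then feed the observation back in
    return history
```

Generative AI is the single function call; agentic AI is the loop that keeps deciding and acting until it judges the goal complete.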

How can leaders identify if their team is suffering from the Agentic Paradox?

Look for signs of the “Supervision Trap,” where employees spend more time managing and verifying machine outputs than performing strategic work. If your team feels busier but reports a decline in creative output or “Deep Work,” they are likely experiencing the paradox.

What role does FutureHacking™ play in managing AI integration?

FutureHacking™ is a collective foresight methodology used to visualize the long-term impact of AI on organizational roles. It helps teams proactively define “Human-Core” territories, ensuring that as AI scales, it supports rather than smothers human agency and innovation.

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article, add images and create infographics.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of March 2026

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are March’s ten most popular innovation posts:

  1. Resilient Innovation — by Braden Kelley
  2. Has AI Killed Design Thinking? — by Braden Kelley
  3. Mapping Customer Experience Risk to the P&L — by Braden Kelley
  4. Moral Uncertainty Engines — by Art Inteligencia
  5. Necesita un Diagnóstico de Riesgo de Experiencia del Cliente y Fuga de Ingresos — por Braden Kelley
  6. Layoffs, AI, and the Future of Innovation — by Braden Kelley
  7. Organizational Digital Exhaust Analysis — by Art Inteligencia
  8. You Need a Customer Experience Risk & Revenue Leakage Diagnostic — by Braden Kelley
  9. Stereotypes – Are They Useful and Should We Use Them? — by Pete Foley
  10. Is There Such a Thing as a Collective Growth Mindset? — by Stefan Lindegaard


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.



The Four Psychological Disruptions of AI at Work

LAST UPDATED: April 3, 2026 at 4:20 PM


by Braden Kelley and Art Inteligencia


Most AI-and-work frameworks are built around economics – job categories, task automation rates, re-skilling costs. This one is built around something different: the interior experience of the person sitting at the desk. The four disruptions mapped in this infographic were identified not through labor market data, but through a human-centered lens – the same lens used in design thinking and change management to surface the needs, fears, and identity stakes that people rarely articulate out loud but always feel.

The framework draws on three converging sources: organizational psychology research on professional identity and role transition; change management practice, particularly the observed patterns of how workers respond when their expertise is devalued or displaced; and direct observation of how individuals are actually experiencing AI adoption in their workplaces right now – not in surveys, but in the unguarded conversations that happen before and after workshops, in the margins of keynotes, in the questions people ask when they think no one important is listening.


Why these four disruptions

1. Competence Displacement

The skill that defined you no longer distinguishes you.

Professional identity is heavily anchored in the belief that what I know how to do has value. When AI can replicate a signature competency – even imperfectly – it attacks that anchor directly. The disruption isn’t primarily about job loss. It’s about the sudden, disorienting feeling that years of deliberate practice have been, in some meaningful sense, made ordinary.

This disruption appears earliest and most acutely in knowledge workers whose expertise was previously considered difficult to acquire – writers, analysts, coders, researchers, strategists.

2. Purpose Erosion

The meaning embedded in the craft begins to hollow out.

Work is not only instrumental – it is ritual. The process of doing difficult things carefully, over time, is itself a source of meaning. When automation removes the friction, it can also remove the satisfaction. This is subtler than competence displacement and slower to surface, but ultimately more corrosive. People find themselves producing more output and feeling less connected to it.

This disruption is particularly acute for people who chose their profession not just for income but for intrinsic love of the work – and who built their identity around that love.

3. Belonging Disruption

The social fabric of work shifts when AI enters the team.

Work teams are social ecosystems built on complementary expertise, shared struggle, and mutual reliance. AI changes those dynamics in ways that are easy to overlook. When an AI tool makes one team member dramatically more productive, or when collaborative tasks are partially automated, the invisible social contracts of the team – who depends on whom, who contributes what – are quietly renegotiated. Belonging depends on feeling needed. When that changes, isolation can follow.

This disruption tends to surface not as explicit conflict but as a gradual withdrawal – people collaborating less, sharing less, protecting their remaining territory.

4. Status Anxiety

The professional hierarchy is being redrawn by AI fluency.

Workplace status has always been tied to expertise scarcity – the person who knew things others didn’t held power. AI is redistributing that scarcity rapidly. Early and confident AI adopters gain speed, output, and visibility. Those who resist, or who are slower to adapt, find themselves losing ground in ways that feel both unfair and disorienting. The new status question – are you someone who uses AI, or someone AI is used on? – is already being asked in organizations, even when no one says it explicitly.

This disruption is uniquely uncomfortable because it combines external threat (status loss) with internal shame (the fear of being seen as behind).


How to read the framework

These four disruptions are not sequential stages – they are simultaneous and overlapping. A single professional can be experiencing all four at once, with different intensities depending on their role, their organization, and how rapidly AI is being adopted around them. The infographic presents them as discrete panels for clarity, but the lived experience is messier and more entangled.

They are also not uniformly negative. Each disruption contains within it the seed of a corresponding renewal: competence displacement can become an invitation to lead with judgment rather than task execution; purpose erosion can prompt a deeper reckoning with what the work is ultimately for; belonging disruption can surface the human connection that was always the real foundation of team cohesion; status anxiety can motivate the kind of deliberate identity authoring that makes professionals more resilient over the long term.

The framework is designed to give leaders and individuals a common language for conversations that are currently happening in fragments — in one-to-ones, in exit interviews, in the silence after a difficult all-hands. Named things can be worked with. Unnamed things can only be endured.

This framework is a practitioner’s model, not a peer-reviewed clinical instrument. It is designed for use in workshops, coaching conversations, and organizational change programs as a starting point for honest dialogue — not as a diagnostic or classification system. It will evolve as our collective understanding of AI’s human impact deepens.

Framework developed by Braden Kelley as part of the article series Psychological Impact of AI on Work Identity · © 2026

Image credits: Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Claude AI to clean up the article and add citations.


Humans and AI BOTH Hallucinate


GUEST POST from Shep Hyken

One of the reasons customers are concerned about or even scared of artificial intelligence (AI) is that it has been known to provide incorrect answers. The result is frustration and concern over whether to believe any AI-fueled technology. In my annual customer service and customer experience research, I asked more than 1,000 U.S. consumers if they ever received wrong or incorrect information from an AI self-service technology. Fifty-one percent said yes.

No, AI is not perfect. Even though the technology continues to improve, it still makes mistakes. And my response to those who claim they won’t trust AI because of those mistakes is to ask, “Has a live customer support agent ever given you bad information?”

That question gets a surprised look, and then a smile, and then an acknowledgement, something like, “You’re right. I never thought about that.”

When AI gives bad information, I refer to that as Artificial Incompetence. It’s just as frustrating when we experience bad information from a live agent, which I call HI, or Human Incompetence. And I don’t doubt – I actually know – that neither the AI nor the human is trying to give you bad information.

I once called a customer support number to get help with what seemed like a straightforward question. I didn’t like the answer I received. It just didn’t make sense. Rather than argue, I thanked the agent, hung up, and dialed the same customer support number. A different agent answered, and I asked the same question. This time, I liked the answer. Two humans from the same company answering the same question, but with two completely different answers. And we worry about AI being inconsistent!

AI Hallucination Cartoon Shep Hyken

AI and Humans Make Mistakes

The reality is that both AI and humans make mistakes, and both will continue to do so. The difference is our expectations. We don’t expect humans to be perfect, so when they are not, we may be disappointed, maybe even angry. We may or may not forgive them, but usually, we just chalk it up to being … human. But it’s different when interacting with AI. We expect it to be reliable, and when it makes a mistake, we often assume the entire system is flawed.

Perhaps we should treat both with the same reasonable expectations and the same healthy skepticism we apply to weather forecasters, who use sophisticated technology and have years of training yet still can’t seem to get tomorrow’s forecast right half the time. Well, it seems like half the time! That doesn’t mean we won’t be checking the forecast before we plan our outdoor activities. AI, too, is sophisticated technology that can make life easier.

Image credits: Gemini, Shep Hyken


Layoffs, AI, and the Future of Innovation

Efficiency Breakthrough or Creative Bankruptcy?

LAST UPDATED: March 21, 2026 at 10:24 PM


by Braden Kelley and Art Inteligencia


Framing the Debate: Signals or Symptoms?

A new wave of layoffs across technology companies has reignited a familiar but increasingly urgent question: what exactly are we witnessing? On the surface, the explanation seems straightforward — companies are tightening costs, responding to macroeconomic pressures, and recalibrating after years of aggressive hiring. But beneath that surface lies a deeper and more consequential debate about the future of innovation, the role of engineers, and the impact of artificial intelligence on knowledge work itself.

Two competing narratives have quickly emerged. The first frames these layoffs as a rational and even necessary evolution. In this view, advances in AI-powered development tools — ranging from large language models to code-generation systems — have fundamentally altered the productivity equation. Engineers equipped with tools like Claude or OpenAI Code can now accomplish in hours what once took days. The implication is clear: if output can be maintained or even increased with fewer people, then reducing headcount is not a sign of weakness but a signal of maturation. Companies are becoming leaner, more efficient, and ultimately more profitable.

The second narrative is far less optimistic. It suggests that layoffs are not a leading indicator of a smarter, AI-augmented future, but a trailing indicator of something more troubling — an innovation slowdown. According to this perspective, many technology companies have already harvested the most accessible opportunities within their existing platforms. What remains is incremental improvement rather than transformative change. In such an environment, cutting engineering talent becomes less about efficiency gains and more about a lack of compelling new problems to solve. The cupboard, in other words, may not be empty — but it may be significantly less full than it once was.

What makes this moment particularly complex is that both narratives can be true at the same time. AI is undeniably increasing productivity in certain domains, compressing development cycles and enabling smaller teams to deliver meaningful results. At the same time, innovation has never been solely a function of efficiency. Breakthroughs emerge from exploration, from cross-functional collisions, and from a willingness to invest in uncertain futures. Layoffs, especially when executed at scale, can disrupt the very conditions that make those breakthroughs possible.

This tension forces us to confront a more nuanced question: are these layoffs a signal of transformation or a symptom of stagnation? Are organizations courageously embracing a new model of AI-augmented work, or are they retreating into cost-cutting as a substitute for bold thinking? The answer matters, because it shapes not only how we interpret today’s decisions, but how we design organizations for tomorrow.

For leaders, the stakes extend beyond quarterly earnings. The choices being made now will determine whether AI becomes a catalyst for a new era of human-centered innovation or a tool that accelerates efficiency at the expense of imagination. For engineers, the implications are equally profound. Their roles are being redefined in real time — not just in terms of what they produce, but in how they create value within increasingly AI-mediated systems.

Ultimately, this is not just a debate about layoffs. It is a debate about what organizations choose to optimize for: productivity or possibility, efficiency or exploration, output or insight. And in that choice lies the future trajectory of innovation itself.

The Case for “Smarter, Leaner, More Profitable”

For many technology leaders, the recent wave of layoffs is not a retreat — it is a re-calibration. The argument is grounded in a simple but powerful premise: the economics of software development have fundamentally changed. With the rapid advancement of AI-assisted coding tools, the amount of output a single engineer can produce has increased dramatically. What once required large, specialized teams can now be accomplished by smaller, more versatile groups augmented by intelligent systems.

Tools such as Claude and OpenAI Code are not merely incremental improvements in developer productivity; they represent a shift in how work gets done. Routine coding tasks, boilerplate generation, debugging assistance, and even architectural suggestions can now be offloaded to AI. This allows engineers to spend less time writing repetitive code and more time focusing on higher-value activities such as system design, problem framing, and integration across complex environments.

In this emerging model, the role of the engineer evolves from builder to orchestrator. Instead of manually crafting every line of code, engineers guide, refine, and validate the outputs of AI systems. The result is a compression of development cycles — features are built faster, iterations occur more rapidly, and time-to-market shrinks. From a business perspective, this translates into a compelling opportunity: maintain or even increase output while reducing labor costs.

This logic is not without precedent. Across industries, waves of automation have consistently redefined the relationship between labor and productivity. In manufacturing, the introduction of robotics did not eliminate production; it scaled it. In many cases, it also improved quality and consistency. Proponents of the current shift argue that AI represents a similar inflection point for knowledge work. The companies that adapt fastest will be those that learn to pair human creativity with machine efficiency.

From a financial standpoint, the incentives are clear. Reducing headcount while sustaining output improves margins, a priority that has become increasingly important in an environment where growth-at-all-costs is no longer rewarded. Investors are placing greater emphasis on profitability and operational discipline, and companies are responding accordingly. Leaner teams are not just a byproduct of technological change — they are a strategic choice aligned with evolving market expectations.

There is also a strategic argument that goes beyond cost savings. By automating lower-value tasks, organizations can theoretically redeploy human talent toward more innovative efforts. Engineers freed from routine work can focus on solving harder problems, exploring new product ideas, and experimenting with emerging technologies. In this view, AI does not replace innovation capacity; it expands it by removing friction from the development process.

Smaller teams can also mean faster decision-making. With fewer layers of coordination required, organizations can become more agile, responding quickly to changing market conditions and customer needs. This agility is often cited as a competitive advantage, particularly in fast-moving technology sectors where speed can determine success or failure.

Ultimately, the “smarter, leaner” argument rests on a belief that efficiency and innovation are not mutually exclusive. Instead, they are mutually reinforcing. By leveraging AI to increase productivity, companies can create the financial and operational headroom needed to invest in the next wave of innovation. Layoffs, in this context, are not an admission of weakness — they are a signal that the underlying system of value creation is being rewritten.

The Case for “Innovation Is Running Dry”

While the efficiency narrative is compelling, an equally important — and more unsettling — interpretation of recent layoffs is gaining traction: that they reflect not technological progress, but an innovation slowdown. In this view, companies are not simply becoming leaner because they can do more with less, but because they have fewer truly novel problems worth investing in. The layoffs, therefore, are less a signal of transformation and more a symptom of diminishing opportunity.

Over the past decade, many technology companies have scaled around a set of highly successful platforms and business models. These platforms have been optimized, expanded, and monetized with remarkable effectiveness. But maturity brings constraints. As systems stabilize and markets saturate, the number of greenfield opportunities naturally declines. What remains is often incremental improvement — refinements, extensions, and efficiencies — rather than the kind of breakthrough innovation that requires large, exploratory engineering teams.

In this context, layoffs can be interpreted as a rational response to a shrinking frontier. If there are fewer bold bets to pursue, there is less need for the capacity required to pursue them. The risk, however, is that this becomes a self-reinforcing cycle. As organizations reduce investment in exploration, they further limit their ability to discover the next wave of opportunity. Over time, efficiency begins to crowd out possibility.

Compounding this dynamic is an increasing reliance on metrics that prioritize productivity over potential. Organizations are becoming exceptionally good at measuring what is already known — velocity, output, utilization — but far less adept at valuing what has yet to be discovered. When success is defined primarily by efficiency gains, it becomes harder to justify the uncertainty and longer time horizons associated with breakthrough innovation.

The rise of AI tools adds another layer of complexity. While these tools can accelerate development, they do not inherently generate new insight. They are trained on existing patterns, which means they are exceptionally effective at extending the present but less equipped to invent the future. This creates the risk of an “illusion of progress,” where output increases but originality does not. More code is produced, but not necessarily more meaningful innovation.

There are also significant cultural consequences to consider. Layoffs, particularly when they affect engineering and product teams, can erode trust and psychological safety within an organization. When employees perceive that their roles are precarious, they are less likely to take risks, challenge assumptions, or pursue unconventional ideas. Yet these behaviors are precisely what fuel innovation. In attempting to optimize for efficiency, companies may inadvertently suppress the very creativity they depend on for long-term growth.

Another often overlooked impact is the loss of institutional knowledge. Experienced engineers carry not just technical expertise, but contextual understanding of systems, decisions, and past experiments. When they leave, they take with them insights that are difficult to codify or replace. This loss can slow future innovation efforts, even as short-term efficiency metrics appear to improve.

Ultimately, the concern is not that companies are becoming more efficient — it is that they may be becoming too narrowly focused on efficiency at the expense of exploration. Innovation requires slack, curiosity, and a willingness to invest in uncertain outcomes. When organizations begin to treat these elements as expendable, they risk signaling something far more significant than cost discipline: a diminishing appetite for invention itself.

Paths to AI-Driven Engineering Outcomes

The Human-Centered Tension: Productivity vs. Possibility

Beneath the surface of the efficiency versus stagnation debate lies a deeper, more human tension — one that cannot be resolved by technology alone. At its core, innovation has never been just about output. It has always been about the quality of thinking, the diversity of perspectives, and the collisions between ideas that spark something new. When organizations focus too narrowly on productivity, they risk overlooking the very conditions that make possibility achievable.

Innovation does not emerge from isolated efficiency; it emerges from interaction. It is the byproduct of cross-functional curiosity — engineers engaging with designers, product managers challenging assumptions, customers re-framing problems, and leaders creating space for exploration. These interactions are often messy, inefficient, and difficult to measure. But they are also where breakthroughs live. When layoffs reduce not just headcount but diversity of thought and opportunities for collaboration, the innovation system itself becomes less dynamic.

The rise of AI-augmented work introduces a new layer to this tension. As engineers increasingly rely on AI tools to generate code, suggest solutions, and optimize workflows, their role begins to shift. They move from hands-on builders to orchestrators of machine-assisted output. While this shift can increase speed and efficiency, it also raises an important question: what happens to deep craft? The tacit knowledge developed through wrestling with complexity — the kind that often leads to unexpected insights — may be diminished if too much of the process is abstracted away.

There is also a cognitive risk. AI systems are designed to identify and replicate patterns based on existing data. This makes them powerful tools for scaling what is already known, but less effective at challenging foundational assumptions. If organizations become overly dependent on these systems, they may unintentionally standardize thinking. The range of possible solutions narrows, not because people lack creativity, but because the tools they use guide them toward familiar patterns.

Trust plays a critical role in navigating this tension. In environments where employees feel secure, valued, and empowered, they are more likely to experiment, take risks, and pursue unconventional ideas. Layoffs, particularly when they are frequent or poorly communicated, can erode that trust. The result is a more cautious workforce — one that prioritizes safety over exploration. In such environments, productivity may remain high, but the willingness to pursue breakthrough innovation often declines.

Curiosity is the other essential ingredient. It is the force that drives individuals to ask better questions, challenge the status quo, and seek out new possibilities. Yet curiosity requires space — time to think, room to explore, and permission to deviate from immediate objectives. When organizations optimize relentlessly for efficiency, that space tends to disappear. Every moment is accounted for, every effort measured, and every outcome expected to justify itself in the short term.

This creates a paradox. The same tools and strategies that enable organizations to move faster can also constrain their ability to think differently. Speed without reflection can lead to acceleration in the wrong direction. Efficiency without exploration can result in incremental progress that ultimately limits long-term growth.

For leaders, the challenge is not to choose between productivity and possibility, but to intentionally design for both. This means recognizing that innovation systems require balance — between execution and exploration, between structure and flexibility, and between human judgment and machine assistance. It requires protecting the conditions that enable creativity even as new technologies reshape how work gets done.

Ultimately, the question is not whether AI will make organizations more efficient — it already is. The question is whether leaders will use that efficiency to create more space for human ingenuity, or whether they will allow it to crowd out the very behaviors that make innovation possible in the first place.

The Future of Innovation in the Age of AI: Augmentation or Abdication?

As organizations navigate layoffs, AI adoption, and shifting expectations around productivity, the future of innovation is not predetermined — it is being actively shaped by the choices leaders make today. The central question is no longer whether artificial intelligence will transform how work gets done, but how that transformation will be directed. Will AI serve as an amplifier of human ingenuity, or will it become a mechanism for narrowing ambition in the pursuit of efficiency?

Three distinct paths are beginning to emerge. The first is an augmentation-led renaissance, where organizations successfully combine human creativity with machine capability. In this scenario, AI handles the repetitive and computationally intensive aspects of work, freeing humans to focus on problem framing, experimentation, and breakthrough thinking. Innovation accelerates not because there are fewer people, but because those people are empowered to operate at a higher level of abstraction and impact.

The second path is the efficiency trap. Here, organizations become so focused on optimizing output and reducing cost that they gradually lose their capacity for exploration. AI is used primarily to streamline existing processes rather than to unlock new possibilities. Over time, these organizations become highly efficient at executing yesterday’s ideas, but increasingly disconnected from tomorrow’s opportunities. What appears to be strength in the short term reveals itself as fragility in the long term.

The third path is a bifurcation of the competitive landscape. Some organizations will lean into augmentation, investing in both AI capabilities and the human systems required to harness them effectively. Others will prioritize efficiency, focusing on cost control and incremental gains. The result is a widening gap between companies that consistently generate new value and those that primarily replicate and optimize existing models. In such an environment, innovation becomes a defining differentiator rather than a baseline expectation.

What separates the leaders from the laggards will not be access to AI alone — those tools are increasingly commoditized — but how organizations integrate them into their innovation systems. Leading organizations will invest not just in AI infrastructure, but in what might be called curiosity infrastructure: the cultural, structural, and leadership practices that encourage questioning, exploration, and cross-functional collaboration. They will recognize that technology can accelerate execution, but only humans can redefine the problems worth solving.

This shift will require a redefinition of roles. Engineers, for example, will need to move beyond execution and into areas such as systems thinking, ethical judgment, and interdisciplinary collaboration. Their value will be measured not just by what they build, but by how they frame problems, challenge assumptions, and integrate diverse inputs into coherent solutions. Similarly, leaders will need to become stewards of both performance and possibility, ensuring that the drive for efficiency does not crowd out the pursuit of innovation.

Organizations that thrive will also be those that intentionally protect space for exploration. This does not mean abandoning discipline or ignoring financial realities. It means recognizing that innovation requires a portfolio approach — balancing investments in core optimization with bets on uncertain, high-potential opportunities. AI can make this balance more achievable by reducing the cost of experimentation, but only if leaders choose to reinvest those gains into discovery rather than solely into margin expansion.
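The portfolio balance described above can be sketched in code. This is a minimal illustration using a hypothetical 70/20/10 split across core, adjacent, and transformational work; the ratios, bucket names, and budget figure are invented for the example, not figures from this article.

```python
# A hypothetical 70/20/10 innovation portfolio split, balancing core
# optimization against uncertain, high-potential bets. All numbers
# here are illustrative assumptions.

def allocate_portfolio(budget, core=0.70, adjacent=0.20, transformational=0.10):
    """Split an innovation budget across three investment horizons."""
    if abs(core + adjacent + transformational - 1.0) > 1e-9:
        raise ValueError("allocation ratios must sum to 1.0")
    return {
        "core_optimization": budget * core,
        "adjacent_opportunities": budget * adjacent,
        "transformational_bets": budget * transformational,
    }

allocation = allocate_portfolio(10_000_000)
for bucket, amount in allocation.items():
    print(f"{bucket}: ${amount:,.0f}")
```

The point of making the split explicit is that efficiency gains from AI can be reinvested deliberately: when automation frees up budget, the ratios can be tilted toward exploration rather than defaulting entirely to margin expansion.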

Ultimately, the future of innovation in the age of AI will be defined by whether organizations treat these tools as a substitute for human thinking or as a catalyst for it. The real risk is not that AI replaces engineers — it is that organizations stop asking the kinds of questions that require engineers to think deeply, creatively, and collaboratively in the first place.

Augmentation or abdication is not a technological choice. It is a leadership choice. And in making it, organizations will determine whether this moment becomes a turning point toward a more innovative future — or a gradual slide into highly efficient irrelevance.

Frequently Asked Questions

1. Why are technology companies laying off engineers despite using AI tools?

Layoffs may result from a combination of efficiency gains and slowing innovation opportunities. AI coding tools such as Claude Code and OpenAI's Codex allow smaller teams to maintain or increase output, reducing the need for some roles. At the same time, some companies face fewer breakthrough projects to pursue, which can also drive workforce reductions.

2. Does AI replace human engineers or just augment their work?

AI primarily augments engineers by automating repetitive coding, debugging, and optimization tasks. This allows engineers to focus on higher-value activities such as system design, problem framing, and creative innovation. While some roles shift, AI is intended as an amplifier of human ingenuity rather than a replacement.

3. How can companies maintain innovation in the age of AI?

Companies can preserve innovation by investing in curiosity infrastructure, protecting time and space for experimentation, fostering cross-functional collaboration, and reinvesting efficiency gains into exploratory, high-potential projects. Balancing productivity with opportunity ensures that humans and AI together drive breakthroughs.


Image credits: ChatGPT

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Has AI Killed Design Thinking?

Or Just Removed Its Excuses?

LAST UPDATED: March 2, 2026 at 5:13 PM

Has AI Killed Design Thinking?

by Braden Kelley and Art Inteligencia


I. The Question Everyone Is Whispering

Something fundamental has changed in how products are created.

Artificial intelligence can now generate working software in minutes. Designers can move from an idea to a functional prototype without waiting for engineering. Engineers can generate interface concepts, user flows, and even early product ideas with a few well-crafted prompts.

The traditional product development cycle — design, then build, then test — is collapsing into something faster, messier, and far more fluid.

In the past, the biggest constraint in innovation was the cost and time required to build something. Today, AI dramatically reduces that barrier. Entire features, experiments, and even applications can be created almost instantly.

Which raises an uncomfortable question that many product leaders, designers, and engineers are quietly asking:

If we can ship almost immediately, do we still need design thinking?

At first glance, the answer might seem obvious. Design thinking was created to help teams understand people, define the right problems, and avoid building the wrong solutions. Those goals have not disappeared.

But when the cost of building approaches zero, the role of design inevitably changes. The traditional pacing of discovery, ideation, prototyping, and testing begins to compress. The boundaries between designer and engineer begin to blur.

And as those boundaries dissolve, the question is no longer simply whether design thinking still matters.

The deeper question is whether the discipline itself must evolve to survive in a world where almost anyone can turn an idea into working software.

II. Design Thinking Was Built for a World of Scarcity

To understand how artificial intelligence is reshaping product creation, it helps to remember the environment in which design thinking originally emerged.

Design thinking did not appear because organizations suddenly discovered empathy or creativity. It emerged because building things was expensive, slow, and risky. Every product decision carried significant cost, and mistakes could take months or years to correct.

In that world, organizations needed a structured way to reduce uncertainty before committing engineering resources. Design thinking provided that structure.

Its now-famous stages helped teams move deliberately from understanding people to building solutions:

  • Empathize — deeply understand the people you are designing for.
  • Define — frame the real problem worth solving.
  • Ideate — generate a wide range of possible solutions.
  • Prototype — create rough representations of potential ideas.
  • Test — validate whether those ideas actually work for people.

The goal was simple: avoid spending months building something no one actually needed.

Design thinking slowed teams down in the right places so they could move faster later. It created space for exploration before the heavy machinery of engineering was set in motion.

But this entire framework assumed one critical constraint:

Building was the most expensive part of innovation.

Prototypes were often static mockups. Experiments required engineering time. Even small product changes could take weeks or months to ship.

In other words, design thinking was optimized for a world where the biggest risk was building the wrong thing.

Today, AI is rapidly changing that assumption. When working software can be generated in minutes rather than months, the bottleneck shifts — and the role of design must evolve with it.

III. AI Has Flipped the Innovation Constraint

For most of the history of digital product development, the limiting factor in innovation was the ability to build. Even the best ideas had to wait in line for scarce engineering resources, long development cycles, and complex release processes.

Artificial intelligence is rapidly dismantling that constraint.

Today, AI tools can generate functional code, working interfaces, and interactive prototypes in minutes. What once required a team of specialists and weeks of effort can often be produced by a single individual in an afternoon.

Designers can now:

  • Create interactive prototypes that behave like real products
  • Generate front-end code directly from design concepts
  • Rapidly explore multiple product directions

Engineers can now:

  • Generate user interfaces and layouts
  • Experiment with product concepts before committing to full builds
  • Quickly iterate on product experiences

The barrier between idea and implementation is shrinking dramatically.

As a result, the core constraint in innovation is no longer the ability to build something. The new constraint is the ability to decide what should actually be built.

When creation becomes cheap, judgment becomes the scarce resource.

Organizations can now generate more ideas, features, and experiments than they have the capacity to evaluate thoughtfully. The risk is no longer simply building the wrong thing slowly.

The risk is building thousands of things quickly without enough clarity about which ones actually matter.

This shift fundamentally changes the role of design. Instead of primarily helping teams avoid costly mistakes in development, design increasingly becomes the discipline that helps organizations navigate overwhelming possibility.
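The triage problem this creates, more candidate ideas than capacity to evaluate them, can be made concrete with a simple scoring sketch. The ideas, scores, and ICE-style formula below are hypothetical assumptions, offered only as one illustration of applying judgment systematically when creation is cheap.

```python
# A minimal sketch of ranking candidate ideas by an ICE-style score
# (impact, confidence, effort) when there are more ideas than capacity
# to evaluate. All ideas and numbers are hypothetical.

ideas = [
    {"name": "Auto-summarize tickets", "impact": 8, "confidence": 0.6, "effort": 3},
    {"name": "Dark mode", "impact": 3, "confidence": 0.9, "effort": 2},
    {"name": "New onboarding flow", "impact": 9, "confidence": 0.4, "effort": 8},
]

def expected_value(idea):
    # Impact discounted by confidence, per unit of effort.
    return idea["impact"] * idea["confidence"] / idea["effort"]

ranked = sorted(ideas, key=expected_value, reverse=True)
for idea in ranked:
    print(f"{idea['name']}: {expected_value(idea):.2f}")
```

A scoring model like this does not replace human judgment; it simply forces the assumptions behind each bet into the open, which is exactly the work that remains scarce when building is nearly free.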

IV. The Blurring of Roles: Designers Reach Forward, Engineers Reach Back

One of the most profound effects of AI in product development is the erosion of traditional professional boundaries.

For decades, the technology industry operated with relatively clear separations of responsibility. Designers focused on user needs, interaction models, and visual systems. Engineers translated those designs into working software. Product managers coordinated priorities and timelines between the two.

That structure was largely a reflection of technical limitations. Designing and building required specialized tools, knowledge, and workflows that made cross-disciplinary work difficult.

AI is rapidly dissolving those barriers.

Designers can now reach forward into the domain that once belonged exclusively to engineering. With AI-assisted tools, they can generate working interfaces, produce front-end code, and simulate complex user interactions without waiting for implementation.

At the same time, engineers can reach backward into design. AI systems can help them generate layouts, propose interface structures, and explore experience flows that once required specialized design expertise.

The result is a new kind of creative overlap:

  • Designers who can prototype in code
  • Engineers who can explore experience design
  • Product creators who move fluidly between disciplines

The traditional model of work moving through a linear chain — research to design to engineering — begins to give way to a far more integrated creative process.

The future product creator is not defined by a job title, but by the ability to move fluidly between understanding problems and building solutions.

This does not mean design expertise or engineering skill become less important. If anything, the opposite is true. As tools make it easier for everyone to participate in creation, the depth of real craft becomes more visible and more valuable.

But it does mean the rigid boundaries between “designer” and “builder” are beginning to dissolve, creating a new generation of hybrid creators who can move seamlessly between imagining, designing, and shipping experiences.

V. The Death of the Handoff

For decades, most product development operated like a relay race. Work moved from one team to the next through a series of formal handoffs.

Researchers gathered insights and passed them to designers. Designers created wireframes and mockups that were handed to engineering. Engineers translated those designs into working software and eventually passed the finished product to testing and operations.

Each transition introduced delays, misinterpretations, and loss of context. The original understanding of the problem often became diluted as it traveled through the system.

Artificial intelligence is accelerating the collapse of this model.

When individuals can move rapidly from idea to prototype to functional product, the need for rigid handoffs begins to disappear. A single person can now:

  • Explore a user problem
  • Design a potential solution
  • Generate working code
  • Launch an experiment

Instead of waiting for work to pass from one discipline to another, creators can stay connected to the entire lifecycle of an idea.

The distance between insight and implementation is shrinking.

This shift has profound implications for how innovation happens inside organizations. Instead of large teams coordinating complex handoffs, smaller groups — or even individuals — can rapidly test ideas and learn from real-world feedback.

Product development begins to look less like an industrial assembly line and more like a creative studio, where ideas are explored, built, and refined continuously.

The most effective teams in this environment will not simply move faster. They will maintain ownership of ideas from the moment a problem is discovered all the way through to the moment a solution is experienced by real people.

VI. What AI Actually Kills

Artificial intelligence is not killing design thinking.

What it is killing are many of the habits that organizations adopted in the name of design thinking but that were never truly about understanding people or solving meaningful problems.

For years, some teams have mistaken the appearance of innovation for the practice of it. Workshops replaced experiments. Sticky notes replaced decisions. Slide decks replaced prototypes.

When building was slow and expensive, these behaviors were often tolerated because teams needed time to align before committing resources. But in a world where working solutions can be generated almost instantly, those habits quickly become friction.

AI removes the excuses that allowed these patterns to persist.

Process Theater

Innovation workshops that generate energy but not outcomes become difficult to justify when teams can build and test ideas immediately.

Endless Ideation

Brainstorming sessions that produce dozens of ideas without committing to experiments lose their value when ideas can be rapidly turned into prototypes and evaluated in the real world.

Documentation Instead of Exploration

Detailed reports, long strategy decks, and static artifacts once helped communicate ideas across teams. But when AI allows concepts to be expressed through working experiences, documentation becomes less important than experimentation.

Safe Innovation

Perhaps most importantly, AI challenges organizations that use process as a shield against risk. When it becomes easy to test bold ideas quickly and cheaply, avoiding experimentation becomes a choice rather than a necessity.

AI doesn’t eliminate design thinking. It eliminates the distance between thinking and doing.

The organizations that thrive in this environment will not be the ones with the most polished innovation processes. They will be the ones that are most willing to replace discussion with discovery and ideas with experiments.
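Replacing discussion with discovery ultimately means evaluating experiments against real data. As a hedged illustration of how cheap that evaluation can be, here is a minimal two-proportion z-test using only the Python standard library; the conversion counts are invented for the example.

```python
# A minimal two-proportion z-test for comparing conversion rates between
# a control (A) and a variant (B). The counts below are hypothetical.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=1000, conv_b=160, n_b=1000)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

With evaluation this lightweight, the bottleneck is not the statistics but the willingness to run the experiment at all.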

[Infographic: Has AI Killed Design Thinking?]

VII. The New Role of Design: Decision Velocity

When the cost of building drops dramatically, the nature of competitive advantage changes.

In the past, organizations succeeded by efficiently transforming ideas into products. Engineering capacity, technical expertise, and operational discipline were often the primary constraints.

But when AI can generate working software, prototypes, and experiments almost instantly, the challenge is no longer how quickly something can be built.

The challenge becomes how quickly and wisely teams can decide what is actually worth building.

In an AI-driven world, innovation speed is no longer about development velocity — it is about decision velocity.

This is where the role of design evolves.

Design shifts from primarily producing artifacts — wireframes, mockups, and prototypes — to guiding the choices that shape meaningful innovation.

Designers increasingly become the people who help teams:

  • Frame the right problems to solve
  • Clarify human needs and motivations
  • Prioritize which ideas deserve experimentation
  • Interpret signals from real-world user behavior

In other words, design becomes less about shaping the interface of a product and more about shaping the direction of learning.

When organizations can generate thousands of potential solutions, the real value lies in identifying the small number that actually create meaningful value for people.

Designers, at their best, help organizations navigate that complexity. They connect technology to human context, helping teams avoid the trap of building faster without thinking better.

In the AI era, design is not slowing innovation down. It is helping organizations move quickly without losing their sense of where they should be going.

VIII. From Design Thinking to Design Doing

As artificial intelligence compresses the distance between idea and implementation, the nature of design practice begins to change. The emphasis shifts away from structured stages and toward continuous experimentation.

Traditional design thinking frameworks helped teams organize their thinking before committing to build. But in an AI-enabled environment, building itself becomes part of the thinking process.

Instead of long cycles of analysis followed by development, teams can now explore ideas directly through working prototypes and rapid experiments.

The most effective teams no longer separate thinking from building. They think by building.

This shift marks a move from design thinking to what might be called design doing.

In this model, learning happens through fast cycles of creation, feedback, and refinement. Ideas are not debated endlessly in workshops or captured in lengthy documents. They are explored through tangible experiences that can be observed, tested, and improved.

The practical differences begin to look like this:

Traditional Model → AI-Enabled Model

  • Workshops and brainstorming sessions → Rapid experiments and live prototypes
  • Personas and research summaries → Behavioral data and real-world signals
  • Concept mockups → Functional prototypes
  • Long planning cycles → Continuous learning loops

None of this diminishes the importance of understanding people. If anything, the need for deep human insight becomes even more important as the pace of experimentation accelerates.

What changes is how that understanding is expressed. Instead of existing primarily as documents or presentations, insight becomes embedded directly into the experiences teams create and test.

In an AI-native organization, design is no longer a phase that happens before development begins. It becomes an ongoing activity woven directly into the act of building and learning.

IX. Human Trust Becomes the New Design Material

As artificial intelligence accelerates the speed of building, the most important design challenges begin to shift away from usability and toward something deeper: trust.

When products can be created, modified, and deployed almost instantly, the risk is not simply poor interface design. The risk is creating experiences that feel disconnected from human values, human context, and human expectations.

AI makes it easier than ever to generate functionality. But it does not automatically ensure that what is generated is responsible, understandable, or aligned with the needs of the people who will use it.

In an AI-driven world, the most important design material is no longer pixels or screens — it is human trust.

This raises a new set of responsibilities for designers, engineers, and product leaders alike.

Teams must think carefully about questions such as:

  • Do people understand what the system is doing?
  • Are decisions being made transparently?
  • Does the experience respect human autonomy?
  • Does the technology reinforce or erode confidence?

As AI systems become more powerful, the danger is not just that they might fail. The danger is that they might succeed in ways that quietly undermine the relationship between organizations and the people they serve.

Design therefore becomes a critical safeguard. It ensures that rapid technological capability does not outpace thoughtful consideration of human consequences.

In this sense, the role of design expands beyond shaping products. It becomes the discipline that ensures technology remains grounded in human meaning, responsibility, and trust.

X. The Future: Designers Who Ship, Engineers Who Empathize

As AI blurs the traditional boundaries between design and engineering, the most valuable creators in the future will be those who can move fluidly between imagining, designing, and building.

Designers will need to ship working products, not just static prototypes. Engineers will need to empathize deeply with users, understanding problems and shaping experiences that align with human needs.

The new hybrid product creator embodies both curiosity and capability, bridging the gap between thinking and doing. They are able to:

  • Rapidly translate insights into working solutions
  • Experiment and learn from real-world user behavior
  • Balance technical feasibility with human desirability
  • Maintain alignment between strategy, design, and execution

In this new landscape, design thinking does not disappear — it evolves. AI removes many of the barriers that previously prevented designers and engineers from collaborating fully and iterating quickly.

The organizations that succeed will be those where everyone has the ability to both understand humans and act on that understanding at the speed of AI.

The future belongs to hybrid creators who can navigate ambiguity, make fast decisions, and embed human trust into every experiment. In such a world, innovation is no longer the domain of specialists — it is the responsibility of anyone capable of connecting insight with action.

XI. The Real Question Leaders Should Be Asking

The debate is often framed as a dramatic question: “Has AI killed design thinking?” But this framing misses the deeper challenge facing organizations today.

The real question is not whether design thinking survives — it is whether organizations are prepared to operate in a world where anyone can turn ideas into working products almost instantly.

In this AI-accelerated environment, success depends less on the speed of coding or the elegance of design frameworks. It depends on human judgment, understanding, and alignment.

Leaders must ask themselves:

  • Do our teams know what problems are truly worth solving?
  • Can we prioritize experiments that create real human value?
  • Are we embedding human trust and ethical consideration into everything we build?
  • Are our designers and engineers equipped to operate across traditional boundaries?

In this new era, the organizations that thrive will not be the ones with the fastest developers or the slickest design processes.

They will be the organizations that can rapidly identify meaningful opportunities, make thoughtful decisions, and maintain human-centered principles while moving at the speed of AI.

Innovation will no longer belong to the people who can code. It will belong to the people who understand humans well enough to know what should be built in the first place.

The role of leadership is no longer just managing workflows — it is shaping the environment in which hybrid creators can think, act, and build responsibly at unprecedented speed.

New Tools for the New Design Reality

To help you find problems worth solving and to design and execute experiments, I created a couple of visual and collaborative tools to help you thrive in this new reality. Download them both from my store and enjoy!

  1. Problem Finding Canvas — Only $4.99 for a limited time
  2. Experiment Canvas — FREE

FAQ: AI and the Evolution of Design Thinking

1. Has AI made design thinking obsolete?
No. AI has not killed design thinking, but it has changed the context in which it operates. Traditional design thinking frameworks assumed that building was slow and expensive. With AI accelerating the creation of prototypes and software, design thinking evolves from a staged process into a continuous cycle of experimentation and decision-making.
2. How are the roles of designers and engineers changing with AI?
AI blurs the traditional boundaries between designers and engineers. Designers can now generate working code and functional prototypes, while engineers can explore user experience and interface design. The future favors hybrid creators who can both understand human needs and rapidly implement solutions.
3. What becomes the main focus of design in an AI-driven product environment?
The primary focus shifts from producing artifacts to guiding decision-making and protecting human trust. Design becomes the discipline that helps teams prioritize meaningful experiments, interpret real-world feedback, and ensure that rapid technological development remains aligned with human values and needs.


Image credits: ChatGPT

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of January 2026

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are January’s ten most popular innovation posts:

  1. Top 40 Innovation Authors of 2025 — Curated by Braden Kelley
  2. Trust is a Gold Mine for Organizations, but it Takes a Bit of Courage — by Oscar Amundsen
  3. Outcome-Driven Innovation in the Age of Agentic AI — by Braden Kelley
  4. Building Your Dream Organization — by Braden Kelley
  5. Why Photonic Processors are the Nervous System of the Future — by Art Inteligencia
  6. Reimagining Personalization — by Geoffrey Moore
  7. We Must Hold AI Accountable — by Greg Satell
  8. The Keys to Changing Someone’s Mind — by Greg Satell
  9. Concentrated Wealth, Consolidated Markets, and the Collapse of Innovation — by Art Inteligencia
  10. It’s Impossible to Innovate When … — by Mike Shipulski

BONUS – Here are five more strong articles published in December that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last five years:


Causal AI

Moving Beyond Prediction to Purpose

LAST UPDATED: February 13, 2026 at 5:13 PM


GUEST POST from Art Inteligencia

For the last decade, the business world has been obsessed with predictive models. We have spent billions trying to answer the question, “What will happen next?” While these tools have helped us optimize supply chains, they often fail when the world changes. Why? Because prediction is based on correlation, and correlation is not causation. To truly innovate using Human-Centered Innovation™, we must move toward Causal AI.

Causal AI is the next frontier of FutureHacking™. Instead of merely identifying patterns, it seeks to understand the why. It maps the underlying “wiring” of a system to determine how changing one variable will influence another. This shift is vital because innovation isn’t about following a trend; it’s about making a deliberate intervention to create a better future.

“Data can tell you that two things are happening at once, but only Causal AI can tell you which one is the lever and which one is the result. Innovation is the art of pulling the right lever.”
— Braden Kelley

The End of the “Black Box” Strategy

One of the greatest barriers to institutional trust is the “Black Box” nature of traditional machine learning. Causal AI, by its very nature, is explainable. It provides a transparent map of cause and effect, allowing human leaders to maintain autonomy and act as the “gardener” tending to the seeds of technology.

Case Study 1: Personalized Medicine and Healthcare

A leading pharmaceutical institution recently moved beyond predictive patient modeling. By using Causal AI to simulate “What if” scenarios, they identified specific causal drivers for individual patients. This allowed for targeted interventions that actually changed outcomes rather than just predicting a decline. This is the difference between watching a storm and seeding the clouds.

Case Study 2: Retail Pricing and Elasticity

A global retail giant utilized Causal AI to solve why deep discounts led to long-term dips in brand loyalty. Causal models revealed that the discounts were causing a shift in quality perception in specific demographics. By understanding this link, the company pivoted to a human-centered value strategy that maintained price integrity while increasing engagement.

Leading the Causal Frontier

The landscape of Causal AI is rapidly maturing in 2026. causaLens remains a primary pioneer with its Causal AI operating system designed for enterprise decision intelligence. Microsoft Research continues to lead the open-source movement with its DoWhy and EconML libraries, which are now essential tools for data scientists globally. Meanwhile, startups like Geminos Software are revolutionizing industrial intelligence by blending causal reasoning with knowledge graphs to address the high failure rate of traditional models. Causaly is specifically transforming the life sciences sector by mapping over 500 million causal relationships in biomedical data to accelerate drug discovery.

“Causal AI doesn’t just predict the future — it teaches us how to change it.”
— Braden Kelley

From Correlation to Causation

Predictive models operate on correlations. They answer: “Given the patterns in historical data, what will likely happen next?” Causal models ask a deeper question: “If we change this variable, how will the outcome change?” This fundamental difference elevates causal AI from forecasting to strategic influence.

Causal AI leverages counterfactual reasoning — the ability to simulate alternative realities. It makes systems more explainable, more robust to context shifts, and better aligned with human intent.
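To make the distinction concrete, here is a minimal Python sketch of the classic confounding problem. All of the data and numbers are invented for illustration: a hidden factor z drives both the treatment t and the outcome y, so the naive correlational estimate overstates the effect of t, while a simple backdoor adjustment (comparing within each stratum of z, then averaging) recovers the true causal effect. This is a toy illustration of the idea, not the method of any particular Causal AI product.

```python
# Illustrative sketch: correlation vs. causation under confounding.
# A confounder z influences both treatment t and outcome y.
# True structural equation: y = 2*t + 3*z, so the causal effect of t is 2.
from collections import defaultdict

# Synthetic population where treated units disproportionately have z = 1.
population = []
for z, t, count in [(0, 0, 80), (0, 1, 20), (1, 0, 20), (1, 1, 80)]:
    population += [{"z": z, "t": t, "y": 2 * t + 3 * z}] * count

def mean(values):
    return sum(values) / len(values)

# Naive "predictive" estimate: difference in observed means, ignoring z.
naive = (mean([p["y"] for p in population if p["t"] == 1])
         - mean([p["y"] for p in population if p["t"] == 0]))

# Causal estimate via backdoor adjustment: compute the effect within each
# stratum of z, then weight the strata by how common they are.
strata = defaultdict(list)
for p in population:
    strata[p["z"]].append(p)

adjusted = 0.0
for z, group in strata.items():
    effect_in_stratum = (mean([p["y"] for p in group if p["t"] == 1])
                         - mean([p["y"] for p in group if p["t"] == 0]))
    adjusted += effect_in_stratum * len(group) / len(population)

print(f"naive (correlational) estimate: {naive:.2f}")     # 3.80, biased upward
print(f"adjusted (causal) estimate:     {adjusted:.2f}")  # 2.00, the true effect
```

The naive estimate answers "what did treated units look like?" while the adjusted estimate answers "what would happen if we applied the treatment?" — exactly the difference between forecasting and strategic influence described above.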

Case Study 3: Healthcare — Reducing Hospital Readmissions

A large health system used predictive analytics to identify patients at high risk of readmission. While accurate, the system did not reveal which interventions would reduce that risk. Nurses and clinicians were left with uncertainty about how to act.

By implementing causal AI techniques, the health system could simulate different combinations of follow-up calls, personalized care plans, and care coordination efforts. The causal model showed which interventions would most reduce readmission likelihood. The organization then prioritized those interventions, achieving a measurable reduction in readmissions and better patient outcomes.

This example illustrates how causal AI moves health leaders from reactive alerts to proactive, evidence-based intervention planning.

Case Study 4: Public Policy — Effective Job Training Programs

A metropolitan region sought to improve employment outcomes through various workforce programs. Traditional analytics identified which neighborhoods had high unemployment, but offered little guidance on which programs would yield the best impact.

Causal AI empowered policymakers to model the effects of expanding job training, childcare support, transportation subsidies, and employer incentives. Rather than piloting each program with limited insight, the city prioritized interventions with the highest projected causal effect. Ultimately, unemployment declined more rapidly than in prior years.

This case demonstrates how causal reasoning can inform public decision-making, directing limited resources toward policies that truly move the needle.

Human-Centered Innovation and Causal AI

Causal AI complements human-centered innovation by prioritizing actionable insight over surface-level pattern recognition. It aligns analytics with stakeholder needs: transparency, explainability, and purpose-driven outcomes.

By embracing causal reasoning, leaders design systems that illuminate why problems occur and how to address them. Instead of deploying technology that automates decisions, causal AI enables decision-makers to retain judgment while accessing deeper insight. This synergy reinforces human agency and enhances trust in AI-driven processes.

Challenges and Ethical Guardrails

Despite its potential, causal AI has challenges. It requires domain expertise to define meaningful variables and valid causal structures. Data quality and context matter. Ethical considerations demand clarity about assumptions, transparency in limitations, and safeguards against misuse.

Causal AI is not a shortcut to certainty. It is a discipline grounded in rigorous reasoning. When applied thoughtfully, it empowers organizations to act with purpose rather than default to correlation-based intuition.

Conclusion: Lead with Causality

In a world of noise, Causal AI provides the signal. It respects human autonomy by providing the evidence needed for a human to make the final call. As you look to your next change management initiative, ask yourself: Are you just predicting the weather, or are you learning how to build a better shelter?

Strategic FAQ

How does Causal AI differ from traditional Machine Learning?

Traditional Machine Learning identifies correlations and patterns in historical data to predict future occurrences. Causal AI identifies the functional relationships between variables, allowing users to understand the impact of specific interventions.

Why is Causal AI better for human-centered innovation?

It provides explainability. Because it maps cause and effect, human leaders can see the logic behind a recommendation, ensuring technology remains a tool for human ingenuity.

Can Causal AI help with bureaucratic corrosion?

Yes. By exposing the “why” behind organizational outcomes, it helps leaders identify which processes (the wiring) are actually producing value and which ones are simply creating friction.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credits: Google Gemini


Why We Love to Hate Chatbots


GUEST POST from Shep Hyken

More and more, brands are starting to get the chatbot “thing” right. AI is improving, and customers are realizing that a chatbot can be a great first stop for getting quick answers or resolving questions. After all, if you have a question, don’t you want it answered now?

In a recent interview, I was asked, “What do you love about chatbots?” That was easy. Then came the follow-up question, “What do you hate about chatbots?” Also easy. The truth is, chatbots can deliver amazing experiences. They can also cause just as much frustration as a very long phone hold. With that in mind, here are five reasons to love (and hate) chatbots:

Why We Love Chatbots

  1. 24/7 Availability: Chatbots are always on. They don’t sleep. Customers can get help at any time, even during holidays.
  2. Fast Response: Instant answers to simple questions, such as hours of operation, order status and basic troubleshooting, can be provided with efficiency and minimal friction.
  3. Customer Service at Scale: Once you set up a chatbot, it can handle many customers at once. Customers won’t have to wait, and human agents can focus on more complicated issues and problems.
  4. Multiple Language Capabilities: The latest chatbots are capable of speaking and typing in many different languages. Whether you need global support or just want to cater to different cultures in a local area, a chatbot has you covered.
  5. Consistent Answers: When programmed properly, a chatbot delivers the same answers every time.


Why We Hate Chatbots

  1. AI Can’t Do Everything, but Some Companies Think It Can: This is what frustrates customers the most. Some companies believe AI and chatbots can do it all. They can’t, and the result is frustrated customers who will eventually move on to the competition.
  2. A Lack of Empathy: AI can do a lot, but it can’t express true emotions. For some customers, care, empathy and understanding are more important than efficiency.
  3. Scripted Responses Feel Robotic: Chatbots often follow strict guidelines. That’s actually a good thing, unless the answers provided feel overly scripted and generic.
  4. Hard to Get to a Human: One of the biggest complaints about chatbots is, “I just want to talk to a person.” Smart companies make it easy for customers to leave AI and connect to a human.
  5. There’s No Emotional Connection to a Chatbot: You’ll most likely never hear a customer say, “I love my chatbot.” A chatbot won’t win your heart. In customer service, sometimes how you make someone feel is more important than what you say.

Chatbots are powerful tools, but they are not a replacement for human connection. The best companies use AI to enhance support, not replace it. When chatbots handle the routine issues and agents handle the more complex and human moments, that’s when customer experience goes from efficient to … amazing.

Image credits: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Win Your Way to an AI Job

Anduril’s AI Grand Prix: Racing for the Future of Work

LAST UPDATED: January 28, 2026 at 2:27 PM


GUEST POST from Art Inteligencia

The traditional job interview is an antiquated artifact, a relic of a bygone industrial era. It often measures conformity, articulateness, and cultural fit more than actual capability or innovative potential. As we navigate the complexities of AI, automation, and rapid technological shifts, organizations are beginning to realize that to find truly exceptional talent, they need to look beyond resumes and carefully crafted answers. This is where companies like Anduril are not just iterating but innovating the very hiring process itself.

Anduril, a defense technology company known for its focus on AI-driven systems, recently announced its AI Grand Prix — a drone racing contest where the ultimate prize isn’t just glory, but a job offer. This isn’t merely a marketing gimmick; it’s a profound statement about their belief in demonstrated skill over credentialism, and a powerful strategy for identifying talent that can truly push the boundaries of autonomous systems. It epitomizes the shift from abstract evaluation to purposeful, real-world application, emphasizing hands-on capability over theoretical knowledge.

“The future of hiring isn’t about asking people what they can do; it’s about giving them a challenge and watching them show you.”

— Braden Kelley

Why Challenge-Based Hiring is the New Frontier

This approach addresses several critical pain points in traditional hiring:

  • Uncovering Latent Talent: Many brilliant minds don’t fit the mold of elite university degrees or polished corporate careers. Challenge-based hiring can surface individuals with raw, untapped potential who might otherwise be overlooked.
  • Assessing Practical Skills: In fields like AI, robotics, and advanced engineering, theoretical knowledge is insufficient. The ability to problem-solve under pressure, adapt to dynamic environments, and debug complex systems is paramount.
  • Cultural Alignment Through Action: Observing how candidates collaborate, manage stress, and iterate on solutions in a competitive yet supportive environment reveals more about their true cultural fit than any behavioral interview.
  • Building a Diverse Pipeline: By opening up contests to a wider audience, companies can bypass traditional biases inherent in resume screening, leading to a more diverse and innovative workforce.

Beyond Anduril: Other Pioneers of Performance-Based Hiring

Anduril isn’t alone in recognizing the power of real-world challenges to identify top talent. Several other forward-thinking organizations have adopted similar, albeit varied, approaches:

Google’s Code Jam and Hash Code

For years, Google has leveraged competitive programming contests like Code Jam and Hash Code to scout for software engineering talent globally. These contests present participants with complex algorithmic problems that test their coding speed, efficiency, and problem-solving abilities. While not always directly leading to a job offer for every participant, top performers are often fast-tracked through the interview process. This allows Google to identify engineers who can perform under pressure and think creatively, rather than just those who can ace a whiteboard interview. It’s a prime example of turning abstract coding prowess into a tangible demonstration of value.

Kaggle Competitions for Data Scientists

Kaggle, now a Google subsidiary, revolutionized how data scientists prove their worth. Through its platform, companies post real-world data science problems—from predicting housing prices to identifying medical conditions from images—and offer prize money, and often connections to jobs, to the teams that develop the best models. This creates a meritocracy where the quality of one’s predictive model speaks louder than any resume. Many leading data scientists have launched their careers or been recruited directly from their performance in Kaggle competitions. It transforms theoretical data knowledge into demonstrable insights that directly impact business outcomes.

The Human Element in the Machine Age

What makes these initiatives truly human-centered? It’s the recognition that while AI and automation are transforming tasks, the human capacity for ingenuity, adaptation, and critical thinking remains irreplaceable. These contests aren’t about finding people who can simply operate machines; they’re about finding individuals who can teach the machines, design the next generation of algorithms, and solve problems that don’t yet exist. They foster an environment of continuous learning and application, perfectly aligning with the “purposeful learning” philosophy.

The Anduril AI Grand Prix, much like Google’s and Kaggle’s initiatives, de-risks the hiring process by creating a performance crucible. It’s a pragmatic, meritocratic, and ultimately more effective way to build the teams that will define the next era of technological advancement. As leaders, our challenge is to move beyond conventional wisdom and embrace these innovative models, ensuring we’re not just ready for the future of work, but actively shaping it.



Frequently Asked Questions

What is challenge-based hiring?

Challenge-based hiring is a recruitment strategy where candidates demonstrate their skills and problem-solving abilities by completing a real-world task, project, or competition, rather than relying solely on resumes and interviews.

What are the benefits of this approach for companies?

Companies can uncover hidden talent, assess practical skills, observe cultural fit in action, and build a more diverse talent pipeline by focusing on demonstrable performance.

How does this approach benefit candidates?

Candidates get a fair chance to showcase their true abilities regardless of traditional credentials, gain valuable experience, and often get direct access to influential companies and potential job offers based purely on merit.

To learn more about transforming your organization’s talent acquisition strategy, reach out to explore how human-centered innovation can reshape your hiring practices.

Image credits: Wikimedia Commons, Google Gemini
