One of the reasons customers are concerned about, or even scared of, artificial intelligence (AI) is that it has been known to provide incorrect answers. The result is frustration and doubt about whether to trust any AI-fueled technology. In my annual customer service and customer experience research, I asked more than 1,000 U.S. consumers if they had ever received incorrect information from an AI self-service technology. Fifty-one percent said yes.
No, AI is not perfect. Even though the technology continues to improve, it still makes mistakes. And my response to those who claim they won’t trust AI because of those mistakes is to ask, “Has a live customer support agent ever given you bad information?”
That question gets a surprised look, and then a smile, and then an acknowledgement, something like, “You’re right. I never thought about that.”
When AI gives bad information, I refer to that as Artificial Incompetence. It’s just as frustrating when we experience bad information from a live agent, which I call HI, or Human Incompetence. And I don’t just suspect, I know, that neither the AI nor the human is trying to give you bad information.
I once called a customer support number to get help with what seemed like a straightforward question. I didn’t like the answer I received. It just didn’t make sense. Rather than argue, I thanked the agent, hung up, and dialed the same customer support number. A different agent answered, and I asked the same question. This time, I liked the answer. Two humans from the same company answering the same question, but with two completely different answers. And we worry about AI being inconsistent!
AI and Humans Make Mistakes
The reality is that both AI and humans make mistakes, and both will continue to do so. The difference is our expectations. We don’t expect humans to be perfect, so when they are not, we may be disappointed, maybe even angry. We may or may not forgive them, but usually, we just chalk it up to being … human. But it’s different when interacting with AI. We expect it to be reliable, and when it makes a mistake, we often assume the entire system is flawed.
Perhaps we should treat both with the same reasonable expectations and the same healthy skepticism we apply to weather forecasters, who use sophisticated technology and have years of training yet still can’t seem to get tomorrow’s forecast right half the time. Well, it seems like half the time! That doesn’t mean we won’t be checking the forecast before we plan our outdoor activities. AI, too, is sophisticated technology that can make life easier.
Image credits: Gemini, Shep Hyken
A new wave of layoffs across technology companies has reignited a familiar but increasingly urgent question: what exactly are we witnessing? On the surface, the explanation seems straightforward — companies are tightening costs, responding to macroeconomic pressures, and recalibrating after years of aggressive hiring. But beneath that surface lies a deeper and more consequential debate about the future of innovation, the role of engineers, and the impact of artificial intelligence on knowledge work itself.
Two competing narratives have quickly emerged. The first frames these layoffs as a rational and even necessary evolution. In this view, advances in AI-powered development tools — ranging from large language models to code-generation systems — have fundamentally altered the productivity equation. Engineers equipped with tools like Claude or OpenAI Codex can now accomplish in hours what once took days. The implication is clear: if output can be maintained or even increased with fewer people, then reducing headcount is not a sign of weakness but a signal of maturation. Companies are becoming leaner, more efficient, and ultimately more profitable.
The second narrative is far less optimistic. It suggests that layoffs are not a leading indicator of a smarter, AI-augmented future, but a trailing indicator of something more troubling — an innovation slowdown. According to this perspective, many technology companies have already harvested the most accessible opportunities within their existing platforms. What remains is incremental improvement rather than transformative change. In such an environment, cutting engineering talent becomes less about efficiency gains and more about a lack of compelling new problems to solve. The cupboard, in other words, may not be empty — but it may be significantly less full than it once was.
What makes this moment particularly complex is that both narratives can be true at the same time. AI is undeniably increasing productivity in certain domains, compressing development cycles and enabling smaller teams to deliver meaningful results. At the same time, innovation has never been solely a function of efficiency. Breakthroughs emerge from exploration, from cross-functional collisions, and from a willingness to invest in uncertain futures. Layoffs, especially when executed at scale, can disrupt the very conditions that make those breakthroughs possible.
This tension forces us to confront a more nuanced question: are these layoffs a signal of transformation or a symptom of stagnation? Are organizations courageously embracing a new model of AI-augmented work, or are they retreating into cost-cutting as a substitute for bold thinking? The answer matters, because it shapes not only how we interpret today’s decisions, but how we design organizations for tomorrow.
For leaders, the stakes extend beyond quarterly earnings. The choices being made now will determine whether AI becomes a catalyst for a new era of human-centered innovation or a tool that accelerates efficiency at the expense of imagination. For engineers, the implications are equally profound. Their roles are being redefined in real time — not just in terms of what they produce, but in how they create value within increasingly AI-mediated systems.
Ultimately, this is not just a debate about layoffs. It is a debate about what organizations choose to optimize for: productivity or possibility, efficiency or exploration, output or insight. And in that choice lies the future trajectory of innovation itself.
The Case for “Smarter, Leaner, More Profitable”
For many technology leaders, the recent wave of layoffs is not a retreat — it is a re-calibration. The argument is grounded in a simple but powerful premise: the economics of software development have fundamentally changed. With the rapid advancement of AI-assisted coding tools, the amount of output a single engineer can produce has increased dramatically. What once required large, specialized teams can now be accomplished by smaller, more versatile groups augmented by intelligent systems.
Tools such as Claude and OpenAI Codex are not merely incremental improvements in developer productivity; they represent a shift in how work gets done. Routine coding tasks, boilerplate generation, debugging assistance, and even architectural suggestions can now be offloaded to AI. This allows engineers to spend less time writing repetitive code and more time focusing on higher-value activities such as system design, problem framing, and integration across complex environments.
In this emerging model, the role of the engineer evolves from builder to orchestrator. Instead of manually crafting every line of code, engineers guide, refine, and validate the outputs of AI systems. The result is a compression of development cycles — features are built faster, iterations occur more rapidly, and time-to-market shrinks. From a business perspective, this translates into a compelling opportunity: maintain or even increase output while reducing labor costs.
This logic is not without precedent. Across industries, waves of automation have consistently redefined the relationship between labor and productivity. In manufacturing, the introduction of robotics did not eliminate production; it scaled it. In many cases, it also improved quality and consistency. Proponents of the current shift argue that AI represents a similar inflection point for knowledge work. The companies that adapt fastest will be those that learn to pair human creativity with machine efficiency.
From a financial standpoint, the incentives are clear. Reducing headcount while sustaining output improves margins, a priority that has become increasingly important in an environment where growth-at-all-costs is no longer rewarded. Investors are placing greater emphasis on profitability and operational discipline, and companies are responding accordingly. Leaner teams are not just a byproduct of technological change — they are a strategic choice aligned with evolving market expectations.
There is also a strategic argument that goes beyond cost savings. By automating lower-value tasks, organizations can theoretically redeploy human talent toward more innovative efforts. Engineers freed from routine work can focus on solving harder problems, exploring new product ideas, and experimenting with emerging technologies. In this view, AI does not replace innovation capacity; it expands it by removing friction from the development process.
Smaller teams can also mean faster decision-making. With fewer layers of coordination required, organizations can become more agile, responding quickly to changing market conditions and customer needs. This agility is often cited as a competitive advantage, particularly in fast-moving technology sectors where speed can determine success or failure.
Ultimately, the “smarter, leaner” argument rests on a belief that efficiency and innovation are not mutually exclusive. Instead, they are mutually reinforcing. By leveraging AI to increase productivity, companies can create the financial and operational headroom needed to invest in the next wave of innovation. Layoffs, in this context, are not an admission of weakness — they are a signal that the underlying system of value creation is being rewritten.
The Case for “Innovation Is Running Dry”
While the efficiency narrative is compelling, an equally important — and more unsettling — interpretation of recent layoffs is gaining traction: that they reflect not technological progress, but an innovation slowdown. In this view, companies are not simply becoming leaner because they can do more with less, but because they have fewer truly novel problems worth investing in. The layoffs, therefore, are less a signal of transformation and more a symptom of diminishing opportunity.
Over the past decade, many technology companies have scaled around a set of highly successful platforms and business models. These platforms have been optimized, expanded, and monetized with remarkable effectiveness. But maturity brings constraints. As systems stabilize and markets saturate, the number of greenfield opportunities naturally declines. What remains is often incremental improvement — refinements, extensions, and efficiencies — rather than the kind of breakthrough innovation that requires large, exploratory engineering teams.
In this context, layoffs can be interpreted as a rational response to a shrinking frontier. If there are fewer bold bets to pursue, there is less need for the capacity required to pursue them. The risk, however, is that this becomes a self-reinforcing cycle. As organizations reduce investment in exploration, they further limit their ability to discover the next wave of opportunity. Over time, efficiency begins to crowd out possibility.
Compounding this dynamic is an increasing reliance on metrics that prioritize productivity over potential. Organizations are becoming exceptionally good at measuring what is already known — velocity, output, utilization — but far less adept at valuing what has yet to be discovered. When success is defined primarily by efficiency gains, it becomes harder to justify the uncertainty and longer time horizons associated with breakthrough innovation.
The rise of AI tools adds another layer of complexity. While these tools can accelerate development, they do not inherently generate new insight. They are trained on existing patterns, which means they are exceptionally effective at extending the present but less equipped to invent the future. This creates the risk of an “illusion of progress,” where output increases but originality does not. More code is produced, but not necessarily more meaningful innovation.
There are also significant cultural consequences to consider. Layoffs, particularly when they affect engineering and product teams, can erode trust and psychological safety within an organization. When employees perceive that their roles are precarious, they are less likely to take risks, challenge assumptions, or pursue unconventional ideas. Yet these behaviors are precisely what fuel innovation. In attempting to optimize for efficiency, companies may inadvertently suppress the very creativity they depend on for long-term growth.
Another often overlooked impact is the loss of institutional knowledge. Experienced engineers carry not just technical expertise, but contextual understanding of systems, decisions, and past experiments. When they leave, they take with them insights that are difficult to codify or replace. This loss can slow future innovation efforts, even as short-term efficiency metrics appear to improve.
Ultimately, the concern is not that companies are becoming more efficient — it is that they may be becoming too narrowly focused on efficiency at the expense of exploration. Innovation requires slack, curiosity, and a willingness to invest in uncertain outcomes. When organizations begin to treat these elements as expendable, they risk signaling something far more significant than cost discipline: a diminishing appetite for invention itself.
The Human-Centered Tension: Productivity vs. Possibility
Beneath the surface of the efficiency versus stagnation debate lies a deeper, more human tension — one that cannot be resolved by technology alone. At its core, innovation has never been just about output. It has always been about the quality of thinking, the diversity of perspectives, and the collisions between ideas that spark something new. When organizations focus too narrowly on productivity, they risk overlooking the very conditions that make possibility achievable.
Innovation does not emerge from isolated efficiency; it emerges from interaction. It is the byproduct of cross-functional curiosity — engineers engaging with designers, product managers challenging assumptions, customers re-framing problems, and leaders creating space for exploration. These interactions are often messy, inefficient, and difficult to measure. But they are also where breakthroughs live. When layoffs reduce not just headcount but diversity of thought and opportunities for collaboration, the innovation system itself becomes less dynamic.
The rise of AI-augmented work introduces a new layer to this tension. As engineers increasingly rely on AI tools to generate code, suggest solutions, and optimize workflows, their role begins to shift. They move from hands-on builders to orchestrators of machine-assisted output. While this shift can increase speed and efficiency, it also raises an important question: what happens to deep craft? The tacit knowledge developed through wrestling with complexity — the kind that often leads to unexpected insights — may be diminished if too much of the process is abstracted away.
There is also a cognitive risk. AI systems are designed to identify and replicate patterns based on existing data. This makes them powerful tools for scaling what is already known, but less effective at challenging foundational assumptions. If organizations become overly dependent on these systems, they may unintentionally standardize thinking. The range of possible solutions narrows, not because people lack creativity, but because the tools they use guide them toward familiar patterns.
Trust plays a critical role in navigating this tension. In environments where employees feel secure, valued, and empowered, they are more likely to experiment, take risks, and pursue unconventional ideas. Layoffs, particularly when they are frequent or poorly communicated, can erode that trust. The result is a more cautious workforce — one that prioritizes safety over exploration. In such environments, productivity may remain high, but the willingness to pursue breakthrough innovation often declines.
Curiosity is the other essential ingredient. It is the force that drives individuals to ask better questions, challenge the status quo, and seek out new possibilities. Yet curiosity requires space — time to think, room to explore, and permission to deviate from immediate objectives. When organizations optimize relentlessly for efficiency, that space tends to disappear. Every moment is accounted for, every effort measured, and every outcome expected to justify itself in the short term.
This creates a paradox. The same tools and strategies that enable organizations to move faster can also constrain their ability to think differently. Speed without reflection can lead to acceleration in the wrong direction. Efficiency without exploration can result in incremental progress that ultimately limits long-term growth.
For leaders, the challenge is not to choose between productivity and possibility, but to intentionally design for both. This means recognizing that innovation systems require balance — between execution and exploration, between structure and flexibility, and between human judgment and machine assistance. It requires protecting the conditions that enable creativity even as new technologies reshape how work gets done.
Ultimately, the question is not whether AI will make organizations more efficient — it already is. The question is whether leaders will use that efficiency to create more space for human ingenuity, or whether they will allow it to crowd out the very behaviors that make innovation possible in the first place.
The Future of Innovation in the Age of AI: Augmentation or Abdication?
As organizations navigate layoffs, AI adoption, and shifting expectations around productivity, the future of innovation is not predetermined — it is being actively shaped by the choices leaders make today. The central question is no longer whether artificial intelligence will transform how work gets done, but how that transformation will be directed. Will AI serve as an amplifier of human ingenuity, or will it become a mechanism for narrowing ambition in the pursuit of efficiency?
Three distinct paths are beginning to emerge. The first is an augmentation-led renaissance, where organizations successfully combine human creativity with machine capability. In this scenario, AI handles the repetitive and computationally intensive aspects of work, freeing humans to focus on problem framing, experimentation, and breakthrough thinking. Innovation accelerates not because there are fewer people, but because those people are empowered to operate at a higher level of abstraction and impact.
The second path is the efficiency trap. Here, organizations become so focused on optimizing output and reducing cost that they gradually lose their capacity for exploration. AI is used primarily to streamline existing processes rather than to unlock new possibilities. Over time, these organizations become highly efficient at executing yesterday’s ideas, but increasingly disconnected from tomorrow’s opportunities. What appears to be strength in the short term reveals itself as fragility in the long term.
The third path is a bifurcation of the competitive landscape. Some organizations will lean into augmentation, investing in both AI capabilities and the human systems required to harness them effectively. Others will prioritize efficiency, focusing on cost control and incremental gains. The result is a widening gap between companies that consistently generate new value and those that primarily replicate and optimize existing models. In such an environment, innovation becomes a defining differentiator rather than a baseline expectation.
What separates the leaders from the laggards will not be access to AI alone — those tools are increasingly commoditized — but how organizations integrate them into their innovation systems. Leading organizations will invest not just in AI infrastructure, but in what might be called curiosity infrastructure: the cultural, structural, and leadership practices that encourage questioning, exploration, and cross-functional collaboration. They will recognize that technology can accelerate execution, but only humans can redefine the problems worth solving.
This shift will require a redefinition of roles. Engineers, for example, will need to move beyond execution and into areas such as systems thinking, ethical judgment, and interdisciplinary collaboration. Their value will be measured not just by what they build, but by how they frame problems, challenge assumptions, and integrate diverse inputs into coherent solutions. Similarly, leaders will need to become stewards of both performance and possibility, ensuring that the drive for efficiency does not crowd out the pursuit of innovation.
Organizations that thrive will also be those that intentionally protect space for exploration. This does not mean abandoning discipline or ignoring financial realities. It means recognizing that innovation requires a portfolio approach — balancing investments in core optimization with bets on uncertain, high-potential opportunities. AI can make this balance more achievable by reducing the cost of experimentation, but only if leaders choose to reinvest those gains into discovery rather than solely into margin expansion.
Ultimately, the future of innovation in the age of AI will be defined by whether organizations treat these tools as a substitute for human thinking or as a catalyst for it. The real risk is not that AI replaces engineers — it is that organizations stop asking the kinds of questions that require engineers to think deeply, creatively, and collaboratively in the first place.
Augmentation or abdication is not a technological choice. It is a leadership choice. And in making it, organizations will determine whether this moment becomes a turning point toward a more innovative future — or a gradual slide into highly efficient irrelevance.
Frequently Asked Questions
1. Why are technology companies laying off engineers despite using AI tools?
Layoffs may result from a combination of efficiency gains and slowing innovation opportunities. AI tools like Claude and OpenAI Codex allow smaller teams to maintain or increase output, reducing the need for some roles. At the same time, some companies face fewer breakthrough projects to pursue, which can also drive workforce reductions.
2. Does AI replace human engineers or just augment their work?
AI primarily augments engineers by automating repetitive coding, debugging, and optimization tasks. This allows engineers to focus on higher-value activities such as system design, problem framing, and creative innovation. While some roles shift, AI is intended as an amplifier of human ingenuity rather than a replacement.
3. How can companies maintain innovation in the age of AI?
Companies can preserve innovation by investing in curiosity infrastructure, protecting time and space for experimentation, fostering cross-functional collaboration, and reinvesting efficiency gains into exploratory, high-potential projects. Balancing productivity with opportunity ensures that humans and AI together drive breakthroughs.
Image credits: ChatGPT
Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.
Something fundamental has changed in how products are created.
Artificial intelligence can now generate working software in minutes. Designers can move from an idea to a functional prototype without waiting for engineering. Engineers can generate interface concepts, user flows, and even early product ideas with a few well-crafted prompts.
The traditional product development cycle — design, then build, then test — is collapsing into something faster, messier, and far more fluid.
In the past, the biggest constraint in innovation was the cost and time required to build something. Today, AI dramatically reduces that barrier. Entire features, experiments, and even applications can be created almost instantly.
Which raises an uncomfortable question that many product leaders, designers, and engineers are quietly asking:
If we can ship almost immediately, do we still need design thinking?
At first glance, the answer might seem obvious. Design thinking was created to help teams understand people, define the right problems, and avoid building the wrong solutions. Those goals have not disappeared.
But when the cost of building approaches zero, the role of design inevitably changes. The traditional pacing of discovery, ideation, prototyping, and testing begins to compress. The boundaries between designer and engineer begin to blur.
And as those boundaries dissolve, the question is no longer simply whether design thinking still matters.
The deeper question is whether the discipline itself must evolve to survive in a world where almost anyone can turn an idea into working software.
II. Design Thinking Was Built for a World of Scarcity
To understand how artificial intelligence is reshaping product creation, it helps to remember the environment in which design thinking originally emerged.
Design thinking did not appear because organizations suddenly discovered empathy or creativity. It emerged because building things was expensive, slow, and risky. Every product decision carried significant cost, and mistakes could take months or years to correct.
In that world, organizations needed a structured way to reduce uncertainty before committing engineering resources. Design thinking provided that structure.
Its now-famous stages helped teams move deliberately from understanding people to building solutions:
Empathize — deeply understand the people you are designing for.
Define — frame the real problem worth solving.
Ideate — generate a wide range of possible solutions.
Prototype — create rough representations of potential ideas.
Test — validate whether those ideas actually work for people.
The goal was simple: avoid spending months building something no one actually needed.
Design thinking slowed teams down in the right places so they could move faster later. It created space for exploration before the heavy machinery of engineering was set in motion.
But this entire framework assumed one critical constraint:
Building was the most expensive part of innovation.
Prototypes were often static mockups. Experiments required engineering time. Even small product changes could take weeks or months to ship.
In other words, design thinking was optimized for a world where the biggest risk was building the wrong thing.
Today, AI is rapidly changing that assumption. When working software can be generated in minutes rather than months, the bottleneck shifts — and the role of design must evolve with it.
III. AI Has Flipped the Innovation Constraint
For most of the history of digital product development, the limiting factor in innovation was the ability to build. Even the best ideas had to wait in line for scarce engineering resources, long development cycles, and complex release processes.
Artificial intelligence is rapidly dismantling that constraint.
Today, AI tools can generate functional code, working interfaces, and interactive prototypes in minutes. What once required a team of specialists and weeks of effort can often be produced by a single individual in an afternoon.
Designers can now:
Create interactive prototypes that behave like real products
Generate front-end code directly from design concepts
Rapidly explore multiple product directions
Engineers can now:
Generate user interfaces and layouts
Experiment with product concepts before committing to full builds
Quickly iterate on product experiences
The barrier between idea and implementation is shrinking dramatically.
As a result, the core constraint in innovation is no longer the ability to build something. The new constraint is the ability to decide what should actually be built.
When creation becomes cheap, judgment becomes the scarce resource.
Organizations can now generate more ideas, features, and experiments than they have the capacity to evaluate thoughtfully. The risk is no longer simply building the wrong thing slowly.
The risk is building thousands of things quickly without enough clarity about which ones actually matter.
This shift fundamentally changes the role of design. Instead of primarily helping teams avoid costly mistakes in development, design increasingly becomes the discipline that helps organizations navigate overwhelming possibility.
IV. The Blurring of Roles: Designers Reach Forward, Engineers Reach Back
One of the most profound effects of AI in product development is the erosion of traditional professional boundaries.
For decades, the technology industry operated with relatively clear separations of responsibility. Designers focused on user needs, interaction models, and visual systems. Engineers translated those designs into working software. Product managers coordinated priorities and timelines between the two.
That structure was largely a reflection of technical limitations. Designing and building required specialized tools, knowledge, and workflows that made cross-disciplinary work difficult.
AI is rapidly dissolving those barriers.
Designers can now reach forward into the domain that once belonged exclusively to engineering. With AI-assisted tools, they can generate working interfaces, produce front-end code, and simulate complex user interactions without waiting for implementation.
At the same time, engineers can reach backward into design. AI systems can help them generate layouts, propose interface structures, and explore experience flows that once required specialized design expertise.
The result is a new kind of creative overlap:
Designers who can prototype in code
Engineers who can explore experience design
Product creators who move fluidly between disciplines
The traditional model of work moving through a linear chain — research to design to engineering — begins to give way to a far more integrated creative process.
The future product creator is not defined by a job title, but by the ability to move fluidly between understanding problems and building solutions.
This does not mean design expertise or engineering skill become less important. If anything, the opposite is true. As tools make it easier for everyone to participate in creation, the depth of real craft becomes more visible and more valuable.
But it does mean the rigid boundaries between “designer” and “builder” are beginning to dissolve, creating a new generation of hybrid creators who can move seamlessly between imagining, designing, and shipping experiences.
V. The Death of the Handoff
For decades, most product development operated like a relay race. Work moved from one team to the next through a series of formal handoffs.
Researchers gathered insights and passed them to designers. Designers created wireframes and mockups that were handed to engineering. Engineers translated those designs into working software and eventually passed the finished product to testing and operations.
Each transition introduced delays, misinterpretations, and loss of context. The original understanding of the problem often became diluted as it traveled through the system.
Artificial intelligence is accelerating the collapse of this model.
When individuals can move rapidly from idea to prototype to functional product, the need for rigid handoffs begins to disappear. A single person can now:
Explore a user problem
Design a potential solution
Generate working code
Launch an experiment
Instead of waiting for work to pass from one discipline to another, creators can stay connected to the entire lifecycle of an idea.
The distance between insight and implementation is shrinking.
This shift has profound implications for how innovation happens inside organizations. Instead of large teams coordinating complex handoffs, smaller groups — or even individuals — can rapidly test ideas and learn from real-world feedback.
Product development begins to look less like an industrial assembly line and more like a creative studio, where ideas are explored, built, and refined continuously.
The most effective teams in this environment will not simply move faster. They will maintain ownership of ideas from the moment a problem is discovered all the way through to the moment a solution is experienced by real people.
VI. What AI Actually Kills
Artificial intelligence is not killing design thinking.
What it is killing are many of the habits that organizations adopted in the name of design thinking but that were never truly about understanding people or solving meaningful problems.
For years, some teams have mistaken the appearance of innovation for the practice of it. Workshops replaced experiments. Sticky notes replaced decisions. Slide decks replaced prototypes.
When building was slow and expensive, these behaviors were often tolerated because teams needed time to align before committing resources. But in a world where working solutions can be generated almost instantly, those habits quickly become friction.
AI removes the excuses that allowed these patterns to persist.
Process Theater
Innovation workshops that generate energy but not outcomes become difficult to justify when teams can build and test ideas immediately.
Endless Ideation
Brainstorming sessions that produce dozens of ideas without committing to experiments lose their value when ideas can be rapidly turned into prototypes and evaluated in the real world.
Documentation Instead of Exploration
Detailed reports, long strategy decks, and static artifacts once helped communicate ideas across teams. But when AI allows concepts to be expressed through working experiences, documentation becomes less important than experimentation.
Safe Innovation
Perhaps most importantly, AI challenges organizations that use process as a shield against risk. When it becomes easy to test bold ideas quickly and cheaply, avoiding experimentation becomes a choice rather than a necessity.
AI doesn’t eliminate design thinking. It eliminates the distance between thinking and doing.
The organizations that thrive in this environment will not be the ones with the most polished innovation processes. They will be the ones that are most willing to replace discussion with discovery and ideas with experiments.
VII. The New Role of Design: Decision Velocity
When the cost of building drops dramatically, the nature of competitive advantage changes.
In the past, organizations succeeded by efficiently transforming ideas into products. Engineering capacity, technical expertise, and operational discipline were often the primary constraints.
But when AI can generate working software, prototypes, and experiments almost instantly, the challenge is no longer how quickly something can be built.
The challenge becomes how quickly and wisely teams can decide what is actually worth building.
In an AI-driven world, innovation speed is no longer about development velocity — it is about decision velocity.
This is where the role of design evolves.
Design shifts from primarily producing artifacts — wireframes, mockups, and prototypes — to guiding the choices that shape meaningful innovation.
Designers increasingly become the people who help teams:
Frame the right problems to solve
Clarify human needs and motivations
Prioritize which ideas deserve experimentation
Interpret signals from real-world user behavior
In other words, design becomes less about shaping the interface of a product and more about shaping the direction of learning.
When organizations can generate thousands of potential solutions, the real value lies in identifying the small number that actually create meaningful value for people.
Designers, at their best, help organizations navigate that complexity. They connect technology to human context, helping teams avoid the trap of building faster without thinking better.
In the AI era, design is not slowing innovation down. It is helping organizations move quickly without losing their sense of where they should be going.
VIII. From Design Thinking to Design Doing
As artificial intelligence compresses the distance between idea and implementation, the nature of design practice begins to change. The emphasis shifts away from structured stages and toward continuous experimentation.
Traditional design thinking frameworks helped teams organize their thinking before committing to build. But in an AI-enabled environment, building itself becomes part of the thinking process.
Instead of long cycles of analysis followed by development, teams can now explore ideas directly through working prototypes and rapid experiments.
The most effective teams no longer separate thinking from building. They think by building.
This shift marks a move from design thinking to what might be called design doing.
In this model, learning happens through fast cycles of creation, feedback, and refinement. Ideas are not debated endlessly in workshops or captured in lengthy documents. They are explored through tangible experiences that can be observed, tested, and improved.
The practical differences begin to look like this:
Traditional Model → AI-Enabled Model
Workshops and brainstorming sessions → Rapid experiments and live prototypes
Personas and research summaries → Behavioral data and real-world signals
Concept mockups → Functional prototypes
Long planning cycles → Continuous learning loops
None of this diminishes the importance of understanding people. If anything, the need for deep human insight becomes even more important as the pace of experimentation accelerates.
What changes is how that understanding is expressed. Instead of existing primarily as documents or presentations, insight becomes embedded directly into the experiences teams create and test.
In an AI-native organization, design is no longer a phase that happens before development begins. It becomes an ongoing activity woven directly into the act of building and learning.
IX. Human Trust Becomes the New Design Material
As artificial intelligence accelerates the speed of building, the most important design challenges begin to shift away from usability and toward something deeper: trust.
When products can be created, modified, and deployed almost instantly, the risk is not simply poor interface design. The risk is creating experiences that feel disconnected from human values, human context, and human expectations.
AI makes it easier than ever to generate functionality. But it does not automatically ensure that what is generated is responsible, understandable, or aligned with the needs of the people who will use it.
In an AI-driven world, the most important design material is no longer pixels or screens — it is human trust.
This raises a new set of responsibilities for designers, engineers, and product leaders alike.
Teams must think carefully about questions such as:
Do people understand what the system is doing?
Are decisions being made transparently?
Does the experience respect human autonomy?
Does the technology reinforce or erode confidence?
As AI systems become more powerful, the danger is not just that they might fail. The danger is that they might succeed in ways that quietly undermine the relationship between organizations and the people they serve.
Design therefore becomes a critical safeguard. It ensures that rapid technological capability does not outpace thoughtful consideration of human consequences.
In this sense, the role of design expands beyond shaping products. It becomes the discipline that ensures technology remains grounded in human meaning, responsibility, and trust.
X. The Future: Designers Who Ship, Engineers Who Empathize
As AI blurs the traditional boundaries between design and engineering, the most valuable creators in the future will be those who can move fluidly between imagining, designing, and building.
Designers will need to ship working products, not just static prototypes. Engineers will need to empathize deeply with users, understanding problems and shaping experiences that align with human needs.
The new hybrid product creator embodies both curiosity and capability, bridging the gap between thinking and doing. They are able to:
Rapidly translate insights into working solutions
Experiment and learn from real-world user behavior
Balance technical feasibility with human desirability
Maintain alignment between strategy, design, and execution
In this new landscape, design thinking does not disappear — it evolves. AI removes many of the barriers that previously prevented designers and engineers from collaborating fully and iterating quickly.
The organizations that succeed will be those where everyone has the ability to both understand humans and act on that understanding at the speed of AI.
The future belongs to hybrid creators who can navigate ambiguity, make fast decisions, and embed human trust into every experiment. In such a world, innovation is no longer the domain of specialists — it is the responsibility of anyone capable of connecting insight with action.
XI. The Real Question Leaders Should Be Asking
The debate is often framed as a dramatic question: “Has AI killed design thinking?” But this framing misses the deeper challenge facing organizations today.
The real question is not whether design thinking survives — it is whether organizations are prepared to operate in a world where anyone can turn ideas into working products almost instantly.
In this AI-accelerated environment, success depends less on the speed of coding or the elegance of design frameworks. It depends on human judgment, understanding, and alignment.
Leaders must ask themselves:
Do our teams know what problems are truly worth solving?
Can we prioritize experiments that create real human value?
Are we embedding human trust and ethical consideration into everything we build?
Are our designers and engineers equipped to operate across traditional boundaries?
In this new era, the organizations that thrive will not be the ones with the fastest developers or the slickest design processes.
They will be the organizations that can rapidly identify meaningful opportunities, make thoughtful decisions, and maintain human-centered principles while moving at the speed of AI.
Innovation will no longer belong to the people who can code. It will belong to the people who understand humans well enough to know what should be built in the first place.
The role of leadership is no longer just managing workflows — it is shaping the environment in which hybrid creators can think, act, and build responsibly at unprecedented speed.
New Tools for the New Design Reality
To help you find problems worth solving and to design and execute experiments, I created a couple of visual and collaborative tools to help you thrive in this new reality. Download them both from my store and enjoy!
Frequently Asked Questions

1. Has AI killed design thinking?

No. AI has not killed design thinking, but it has changed the context in which it operates. Traditional design thinking frameworks assumed that building was slow and expensive. With AI accelerating the creation of prototypes and software, design thinking evolves from a staged process into a continuous cycle of experimentation and decision-making.
2. How are the roles of designers and engineers changing with AI?
AI blurs the traditional boundaries between designers and engineers. Designers can now generate working code and functional prototypes, while engineers can explore user experience and interface design. The future favors hybrid creators who can both understand human needs and rapidly implement solutions.
3. What becomes the main focus of design in an AI-driven product environment?
The primary focus shifts from producing artifacts to guiding decision-making and protecting human trust. Design becomes the discipline that helps teams prioritize meaningful experiments, interpret real-world feedback, and ensure that rapid technological development remains aligned with human values and needs.
Image credits: ChatGPT
Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.
At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?
But enough delay, here are January’s ten most popular innovation posts:
If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!
Have something to contribute?
Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.
P.S. Here are our Top 40 Innovation Bloggers lists from the last five years:
For the last decade, the business world has been obsessed with predictive models. We have spent billions trying to answer the question, “What will happen next?” While these tools have helped us optimize supply chains, they often fail when the world changes. Why? Because prediction is based on correlation, and correlation is not causation. To truly innovate using Human-Centered Innovation™, we must move toward Causal AI.
Causal AI is the next frontier of FutureHacking™. Instead of merely identifying patterns, it seeks to understand the why. It maps the underlying “wiring” of a system to determine how changing one variable will influence another. This shift is vital because innovation isn’t about following a trend; it’s about making a deliberate intervention to create a better future.
“Data can tell you that two things are happening at once, but only Causal AI can tell you which one is the lever and which one is the result. Innovation is the art of pulling the right lever.” — Braden Kelley
The End of the “Black Box” Strategy
One of the greatest barriers to institutional trust is the “Black Box” nature of traditional machine learning. Causal AI, by its very nature, is explainable. It provides a transparent map of cause and effect, allowing human leaders to maintain autonomy and act as the “gardener” tending to the seeds of technology.
Case Study 1: Personalized Medicine and Healthcare
A leading pharmaceutical institution recently moved beyond predictive patient modeling. By using Causal AI to simulate “What if” scenarios, they identified specific causal drivers for individual patients. This allowed for targeted interventions that actually changed outcomes rather than just predicting a decline. This is the difference between watching a storm and seeding the clouds.
Case Study 2: Retail Pricing and Elasticity
A global retail giant utilized Causal AI to solve why deep discounts led to long-term dips in brand loyalty. Causal models revealed that the discounts were causing a shift in quality perception in specific demographics. By understanding this link, the company pivoted to a human-centered value strategy that maintained price integrity while increasing engagement.
Leading the Causal Frontier
The landscape of Causal AI is rapidly maturing in 2026. causaLens remains a primary pioneer with its Causal AI operating system designed for enterprise decision intelligence. Microsoft Research continues to lead the open-source movement with its DoWhy and EconML libraries, which are now essential tools for data scientists globally. Meanwhile, startups like Geminos Software are revolutionizing industrial intelligence by blending causal reasoning with knowledge graphs to address the high failure rate of traditional models. Causaly is specifically transforming the life sciences sector by mapping over 500 million causal relationships in biomedical data to accelerate drug discovery.
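For readers who want a concrete feel for what these libraries do, here is a minimal sketch of the DoWhy workflow (model, identify, estimate, refute) on synthetic data. The variable names and numbers are invented for illustration; they are not drawn from any of the case studies in this article.

```python
# A minimal DoWhy sketch on synthetic data: model -> identify -> estimate -> refute.
# Variable names (discount, loyalty, affluence) are hypothetical.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(42)
n = 5_000
affluence = rng.normal(size=n)                                    # common cause
discount = 0.8 * affluence + rng.normal(size=n)                   # treatment
loyalty = 0.5 * discount - 1.2 * affluence + rng.normal(size=n)   # outcome
df = pd.DataFrame({"affluence": affluence, "discount": discount, "loyalty": loyalty})

model = CausalModel(data=df, treatment="discount", outcome="loyalty",
                    common_causes=["affluence"])

estimand = model.identify_effect()          # find a valid (backdoor) adjustment set
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(f"estimated effect of discount on loyalty: {estimate.value:.2f}")   # ~ +0.50

# Robustness check: replacing the treatment with a placebo should show ~zero effect.
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="placebo_treatment_refuter")
print(refutation)
```

The point is less the specific estimator than the discipline the workflow enforces: every estimate is tied to an explicit causal assumption that can be inspected and stress-tested.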
“Causal AI doesn’t just predict the future — it teaches us how to change it.” — Braden Kelley
From Correlation to Causation
Predictive models operate on correlations. They answer: “Given the patterns in historical data, what will likely happen next?” Causal models ask a deeper question: “If we change this variable, how will the outcome change?” This fundamental difference elevates causal AI from forecasting to strategic influence.
Causal AI leverages counterfactual reasoning — the ability to simulate alternative realities. It makes systems more explainable, robust to context shifts, and aligned with human intentions for impact.
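Because the distinction can feel abstract, a tiny self-contained simulation helps. In the sketch below (NumPy only, with invented numbers), a hidden confounder makes the correlational estimate of a treatment's effect come out negative even though the true interventional effect is positive. That sign flip is precisely the gap between what the data shows and what happens if we intervene.

```python
# Correlation vs. causation under confounding (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                        # hidden confounder
t = 2.0 * z + rng.normal(size=n)              # "treatment" variable, driven by z
y = 1.0 * t - 3.0 * z + rng.normal(size=n)    # outcome; true effect of t is +1.0

# Correlational view: regress y on t alone and read off the slope.
naive = np.linalg.lstsq(np.column_stack([t, np.ones(n)]), y, rcond=None)[0][0]

# Causal view: adjust for the confounder z (a backdoor adjustment).
adjusted = np.linalg.lstsq(np.column_stack([t, z, np.ones(n)]), y, rcond=None)[0][0]

print(f"correlational estimate: {naive:+.2f}   (misleadingly negative, about -0.20)")
print(f"adjusted estimate:      {adjusted:+.2f}   (close to the true +1.00)")
```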
Case Study 3: Healthcare — Reducing Hospital Readmissions
A large health system used predictive analytics to identify patients at high risk of readmission. While accurate, the system did not reveal which interventions would reduce that risk. Nurses and clinicians were left with uncertainty about how to act.
By implementing causal AI techniques, the health system could simulate different combinations of follow-up calls, personalized care plans, and care coordination efforts. The causal model showed which interventions would most reduce readmission likelihood. The organization then prioritized those interventions, achieving a measurable reduction in readmissions and better patient outcomes.
This example illustrates how causal AI moves health leaders from reactive alerts to proactive, evidence-based intervention planning.
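The article does not describe the health system's actual models, but the core move (estimate the adjusted effect of each candidate intervention, then prioritize the largest reduction) can be sketched on synthetic data. Everything below, from column names to coefficients, is hypothetical.

```python
# Hypothetical sketch: rank candidate interventions by their backdoor-adjusted
# effect on readmission risk. Data, names, and coefficients are invented.
import numpy as np
import pandas as pd

def adjusted_effect(df, treatment, outcome, confounders):
    """Linear, confounder-adjusted estimate of `treatment`'s effect on `outcome`."""
    X = np.column_stack([df[treatment], df[confounders], np.ones(len(df))])
    coef, *_ = np.linalg.lstsq(X, df[outcome].to_numpy(), rcond=None)
    return coef[0]

rng = np.random.default_rng(1)
n = 20_000
severity = rng.normal(size=n)   # confounder: sicker patients get more follow-up
df = pd.DataFrame({
    "severity": severity,
    "follow_up_call": (rng.random(n) < 0.3 + 0.2 * (severity > 0)).astype(float),
    "care_plan": (rng.random(n) < 0.5).astype(float),
})
# Synthetic outcome: both interventions reduce risk; care plans reduce it more.
df["readmission_risk"] = (0.4 + 0.3 * severity
                          - 0.05 * df["follow_up_call"]
                          - 0.12 * df["care_plan"]
                          + 0.1 * rng.normal(size=n))

ranking = sorted(
    ((iv, adjusted_effect(df, iv, "readmission_risk", ["severity"]))
     for iv in ["follow_up_call", "care_plan"]),
    key=lambda pair: pair[1],   # most negative first = biggest risk reduction
)
for iv, eff in ranking:
    print(f"{iv}: estimated change in readmission risk = {eff:+.3f}")
```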
Case Study 4: Public Policy — Effective Job Training Programs
A metropolitan region sought to improve employment outcomes through various workforce programs. Traditional analytics identified which neighborhoods had high unemployment, but offered little guidance on which programs would yield the best impact.
Causal AI empowered policymakers to model the effects of expanding job training, childcare support, transportation subsidies, and employer incentives. Rather than piloting each program with limited insight, the city prioritized interventions with the highest projected causal effect. Ultimately, unemployment declined more rapidly than in prior years.
This case demonstrates how causal reasoning can inform public decision-making, directing limited resources toward policies that truly move the needle.
Human-Centered Innovation and Causal AI
Causal AI complements human-centered innovation by prioritizing actionable insight over surface-level pattern recognition. It aligns analytics with stakeholder needs: transparency, explainability, and purpose-driven outcomes.
By embracing causal reasoning, leaders design systems that illuminate why problems occur and how to address them. Instead of deploying technology that automates decisions, causal AI enables decision-makers to retain judgment while accessing deeper insight. This synergy reinforces human agency and enhances trust in AI-driven processes.
Challenges and Ethical Guardrails
Despite its potential, causal AI has challenges. It requires domain expertise to define meaningful variables and valid causal structures. Data quality and context matter. Ethical considerations demand clarity about assumptions, transparency in limitations, and safeguards against misuse.
Causal AI is not a shortcut to certainty. It is a discipline grounded in rigorous reasoning. When applied thoughtfully, it empowers organizations to act with purpose rather than default to correlation-based intuition.
Conclusion: Lead with Causality
In a world of noise, Causal AI provides the signal. It respects human autonomy by providing the evidence needed for a human to make the final call. As you look to your next change management initiative, ask yourself: Are you just predicting the weather, or are you learning how to build a better shelter?
Strategic FAQ
How does Causal AI differ from traditional Machine Learning?
Traditional Machine Learning identifies correlations and patterns in historical data to predict future occurrences. Causal AI identifies the functional relationships between variables, allowing users to understand the impact of specific interventions.
Why is Causal AI better for human-centered innovation?
It provides explainability. Because it maps cause and effect, human leaders can see the logic behind a recommendation, ensuring technology remains a tool for human ingenuity.
Can Causal AI help with bureaucratic corrosion?
Yes. By exposing the “why” behind organizational outcomes, it helps leaders identify which processes (the wiring) are actually producing value and which ones are simply creating friction.
Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.
Image credits: Google Gemini
More and more, brands are starting to get the chatbot “thing” right. AI is improving, and customers are realizing that a chatbot can be a great first stop for getting quick answers or resolving questions. After all, if you have a question, don’t you want it answered now?
In a recent interview, I was asked, “What do you love about chatbots?” That was easy. Then came the follow-up question, “What do you hate about chatbots?” Also easy. The truth is, chatbots can deliver amazing experiences. They can also cause just as much frustration as a very long phone hold. With that in mind, here are five reasons to love (and hate) chatbots:
Why We Love Chatbots
24/7 Availability: Chatbots are always on. They don’t sleep. Customers can get help at any time, even during holidays.
Fast Response: Instant answers to simple questions, such as hours of operation, order status and basic troubleshooting, can be provided with efficiency and minimal friction.
Customer Service at Scale: Once you set up a chatbot, it can handle many customers at once. Customers won’t have to wait, and human agents can focus on more complicated issues and problems.
Multiple Language Capabilities: The latest chatbots are capable of speaking and typing in many different languages. Whether you need global support or just want to cater to different cultures in a local area, a chatbot has you covered.
Consistent Answers: When programmed properly, a chatbot delivers the same answers every time.
Why We Hate Chatbots
AI Can’t Do Everything, but Some Companies Think It Can: This is what frustrates customers the most. Some companies believe AI and chatbots can do it all. They can’t, and the result is frustrated customers who will eventually move on to the competition.
A Lack of Empathy: AI can do a lot, but it can’t express true emotions. For some customers, care, empathy and understanding are more important than efficiency.
Scripted Responses Feel Robotic: Chatbots often follow strict guidelines. That’s actually a good thing, unless the answers provided feel overly scripted and generic.
Hard to Get to a Human: One of the biggest complaints about chatbots is, “I just want to talk to a person.” Smart companies make it easy for customers to leave AI and connect to a human.
There’s No Emotional Connection to a Chatbot: You’ll most likely never hear a customer say, “I love my chatbot.” A chatbot won’t win your heart. In customer service, sometimes how you make someone feel is more important than what you say.
Chatbots are powerful tools, but they are not a replacement for human connection. The best companies use AI to enhance support, not replace it. When chatbots handle the routine issues and agents handle the more complex and human moments, that’s when customer experience goes from efficient to … amazing.
Image credits: Unsplash
Anduril’s AI Grand Prix: Racing for the Future of Work
GUEST POST from Art Inteligencia
The traditional job interview is an antiquated artifact, a relic of a bygone industrial era. It often measures conformity, articulateness, and cultural fit more than actual capability or innovative potential. As we navigate the complexities of AI, automation, and rapid technological shifts, organizations are beginning to realize that to find truly exceptional talent, they need to look beyond resumes and carefully crafted answers. This is where companies like Anduril are not just iterating but innovating the very hiring process itself.
Anduril, a defense technology company known for its focus on AI-driven systems, recently announced its AI Grand Prix — a drone racing contest where the ultimate prize isn’t just glory, but a job offer. This isn’t merely a marketing gimmick; it’s a profound statement about their belief in demonstrated skill over credentialism, and a powerful strategy for identifying talent that can truly push the boundaries of autonomous systems. It epitomizes the shift from abstract evaluation to purposeful, real-world application, emphasizing hands-on capability over theoretical knowledge.
“The future of hiring isn’t about asking people what they can do; it’s about giving them a challenge and watching them show you.”
Why Challenge-Based Hiring is the New Frontier
This approach addresses several critical pain points in traditional hiring:
Uncovering Latent Talent: Many brilliant minds don’t fit the mold of elite university degrees or polished corporate careers. Challenge-based hiring can surface individuals with raw, untapped potential who might otherwise be overlooked.
Assessing Practical Skills: In fields like AI, robotics, and advanced engineering, theoretical knowledge is insufficient. The ability to problem-solve under pressure, adapt to dynamic environments, and debug complex systems is paramount.
Cultural Alignment Through Action: Observing how candidates collaborate, manage stress, and iterate on solutions in a competitive yet supportive environment reveals more about their true cultural fit than any behavioral interview.
Building a Diverse Pipeline: By opening up contests to a wider audience, companies can bypass traditional biases inherent in resume screening, leading to a more diverse and innovative workforce.
Beyond Anduril: Other Pioneers of Performance-Based Hiring
Anduril isn’t alone in recognizing the power of real-world challenges to identify top talent. Several other forward-thinking organizations have adopted similar, albeit varied, approaches:
Google’s Code Jam and Hash Code
For years, Google has leveraged competitive programming contests like Code Jam and Hash Code to scout for software engineering talent globally. These contests present participants with complex algorithmic problems that test their coding speed, efficiency, and problem-solving abilities. While not every participant walks away with a job offer, top performers are often fast-tracked through the interview process. This allows Google to identify engineers who can perform under pressure and think creatively, rather than just those who can ace a whiteboard interview. It’s a prime example of turning abstract coding prowess into a tangible demonstration of value.
Kaggle Competitions for Data Scientists
Kaggle, now a Google subsidiary, revolutionized how data scientists prove their worth. Through its platform, companies post real-world data science problems—from predicting housing prices to identifying medical conditions from images—and offer prize money, and often connections to jobs, to the teams that develop the best models. This creates a meritocracy where the quality of one’s predictive model speaks louder than any resume. Many leading data scientists have launched their careers or been recruited directly from their performance in Kaggle competitions. It transforms theoretical data knowledge into demonstrable insights that directly impact business outcomes.
The Human Element in the Machine Age
What makes these initiatives truly human-centered? It’s the recognition that while AI and automation are transforming tasks, the human capacity for ingenuity, adaptation, and critical thinking remains irreplaceable. These contests aren’t about finding people who can simply operate machines; they’re about finding individuals who can teach the machines, design the next generation of algorithms, and solve problems that don’t yet exist. They foster an environment of continuous learning and application, perfectly aligning with the “purposeful learning” philosophy.
The Anduril AI Grand Prix, much like Google’s and Kaggle’s initiatives, de-risks the hiring process by creating a performance crucible. It’s a pragmatic, meritocratic, and ultimately more effective way to build the teams that will define the next era of technological advancement. As leaders, our challenge is to move beyond conventional wisdom and embrace these innovative models, ensuring we’re not just ready for the future of work, but actively shaping it.
Frequently Asked Questions
What is challenge-based hiring?
Challenge-based hiring is a recruitment strategy where candidates demonstrate their skills and problem-solving abilities by completing a real-world task, project, or competition, rather than relying solely on resumes and interviews.
What are the benefits of this approach for companies?
Companies can uncover hidden talent, assess practical skills, observe cultural fit in action, and build a more diverse talent pipeline by focusing on demonstrable performance.
How does this approach benefit candidates?
Candidates get a fair chance to showcase their true abilities regardless of traditional credentials, gain valuable experience, and often get direct access to influential companies and potential job offers based purely on merit.
Image credits: Wikimedia Commons, Google Gemini
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.
About ten years ago, IBM invited me to talk with some key members on the Watson team, when the triumph of creating a machine that could beat the best human players at the game show Jeopardy! was still fresh. I wrote in Forbes at the time that we were entering a new era of cognitive collaboration between humans, computers and other humans.
One thing that struck me was how similar the moment seemed to how aviation legend Chuck Yeager described the advent of fly-by-wire four decades earlier, in which pilots would no longer operate aircraft directly, but interface with a computer that flew the plane. Many of the macho “flyboys” weren’t able to trust the machines and couldn’t adapt.
Now, with the launch of ChatGPT, Bill Gates has announced that the age of AI has begun and, much like those old flyboys, we’re all going to struggle to adapt. Our success will not only rely on our ability to learn new skills and work in new ways, but the extent to which we are able to trust our machine collaborators. To reach its potential, AI will need to become accountable.
Recognizing Data Bias
With humans, we work diligently to construct safe and constructive learning environments. We design curriculums, carefully selecting materials, instructors and students to try to get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.
Machines also have a learning environment called a “corpus.” If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.
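To make that concrete, here is a minimal sketch of learning from a labeled corpus, using scikit-learn with synthetic feature vectors standing in for cat and dog photos. The two-class setup and random data are illustrative assumptions; production vision systems train deep networks on millions of real images.

```python
# A minimal sketch of supervised learning from a labeled "corpus."
# Synthetic 64-dimensional feature vectors stand in for cat and dog
# images so the example runs anywhere scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake "images": two slightly different distributions (0 = cat, 1 = dog).
cats = rng.normal(loc=0.0, scale=1.0, size=(500, 64))
dogs = rng.normal(loc=0.5, scale=1.0, size=(500, 64))
X = np.vstack([cats, dogs])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The model learns only from the corpus it is shown; whatever
# patterns (or biases) the corpus contains, the model absorbs.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen examples: {clf.score(X_test, y_test):.2f}")
```

The point to notice is that the model has no knowledge beyond its corpus, which is exactly why the composition of that corpus matters so much.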
However, the process can go horribly awry. A famous case is Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform in 2016. In under a day, Tay went from being friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.
Bias in the learning corpus is far more common than we often realize. Do an image search for the word “professional haircut” and you will get almost exclusively pictures of white men. Do the same for “unprofessional haircut” and you will see much more racial and gender diversity.
It’s not hard to figure out why this happens. Editors writing articles about haircuts portray white men in one way and other genders and races in another. When we query machines, we inevitably find our own biases baked in.
Accounting For Algorithmic Bias
A second major source of bias results from how decision-making models are designed. Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.
Yet it’s not hard to imagine how it could happen. If a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, would fail to register, or might even unfairly penalize the teacher. Good human managers recognize outliers; algorithms generally aren’t designed that way.
In other cases, models are constructed according to whatever data is easiest to acquire, or a model is overfit to a specific set of cases and then applied too broadly. In 2013, Google Flu Trends predicted nearly twice as many flu cases as there actually were. What appears to have happened is that increased media coverage about Google Flu Trends led to more searches by people who weren’t sick. The algorithm was never designed to take itself into account.
The simple fact is that an algorithm must be designed in one way or another. Every possible contingency cannot be pursued. Choices have to be made and bias will inevitably creep in. Mistakes happen. The key is not to eliminate error, but to make our systems accountable through explainability, auditability and transparency.
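As a toy illustration of what explainability and auditability can look like in practice, here is a minimal sketch: an inherently inspectable model (a shallow decision tree) whose full decision logic can be printed, plus a per-decision audit record. The dataset and the version label are illustrative assumptions, not a production accountability framework.

```python
# A minimal sketch of explainability and auditability: a shallow
# decision tree whose complete logic can be printed for review, plus
# a per-decision audit record that makes outcomes traceable later.
import json
from datetime import datetime, timezone

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# Explainability: the entire decision logic is human-readable.
print(export_text(model, feature_names=list(data.feature_names)))

# Auditability: log every prediction with its inputs and model
# version so any individual outcome can be traced and challenged.
sample = data.data[0]
audit_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model_version": "iris-tree-v1",  # hypothetical version label
    "inputs": sample.tolist(),
    "decision": int(model.predict([sample])[0]),
}
print(json.dumps(audit_record))
```

Had the D.C. school district been able to produce records like these, the teacher it fired could at least have seen, and contested, the basis of the decision.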
To Build An Era Of Cognitive Collaboration We First Need To Build Trust
In 2020, Ofqual, the authority that administers A-Level college entrance exams in the UK, found itself mired in scandal. Unable to hold live exams because of Covid-19, it designed and employed an algorithm that based scores partly on the historical performance of the schools students attended, with the unintended consequence that already disadvantaged students found themselves further penalized by artificially deflated scores.
The outcry was immediate, but in a sense the Ofqual case is a happy story. Because the agency was transparent about how the algorithm was constructed, the source of the bias was quickly revealed, corrective action was taken in a timely manner, and much of the damage was likely mitigated. As Linus’s Law advises, “given enough eyeballs, all bugs are shallow.”
The age of artificial intelligence requires us to collaborate with machines, leveraging their capabilities to better serve other humans. To make that collaboration successful, however, it needs to take place in an atmosphere of trust. Machines, just like humans, need to be held accountable; their decisions and insights can’t be a “black box.” We need to be able to understand where their judgments come from and how their decisions are made.
Senator Schumer worked on legislation to promote more transparency in 2024, but that is only a start and the new administration has pushed the pause button on AI regulation. The real change has to come from within ourselves and how we see our relationships with the machines we create. Marshall McLuhan wrote that media are extensions of man and the same can be said for technology. Our machines inherit our human weaknesses and frailties. We need to make allowances for that.
— Article courtesy of the Digital Tonto blog
— Image credit: Flickr
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
We are currently living in the artificial future of 2026, a world where the distinction between human-authored and AI-generated content has become practically impossible to detect. In this era of agentic AI and high-fidelity synthetic media, we have moved past the initial awe of creation and into a far more complex phase: the Trust Imperative. As my friend Braden Kelley has frequently shared in his keynotes, innovation is change with impact, but if the impact is an erosion of truth, we are not innovating — we are disintegrating.
The flood of AI-generated content has created a massive Corporate Antibody response within our social and economic systems. To survive, organizations must adopt Generative Watermarking and Provenance technologies. These aren’t just technical safeguards; they are the new infrastructure of reality. We are shifting from a culture of blind faith in what we see to a culture of verifiable origin.
“Transparency is the only antidote to the erosion of trust; we must build systems that don’t just generate, but testify. If an idea is a useful seed of invention, its origin must be its pedigree.” — Braden Kelley
Why Provenance is the Key to Human-Centered Innovation™
Human-Centered Innovation™ requires psychological safety. In 2026, psychological safety is under threat by “hallucinated” news, deepfake corporate communiques, and the potential for industrial-scale intellectual property theft. When people cannot trust the data in their dashboards or the video of their CEO, the organizational “nervous system” begins to shut down. This is the Efficiency Trap in its most dangerous form: we’ve optimized for speed of content production, but lost the efficiency of shared truth.
Provenance tech — specifically the C2PA (Coalition for Content Provenance and Authenticity) standards — allows us to attach a permanent, tamper-evident digital “ledger” to every piece of media. This tells us who created it, what AI tools were used to modify it, and when it was last verified. It restores the human to the center of the story by providing the context necessary for informed agency.
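The real C2PA specification relies on X.509 certificate chains and standardized assertion schemas, but the core idea of a tamper-evident manifest can be sketched in a few lines. What follows is a conceptual toy, not the actual C2PA SDK; the HMAC key and the field names are assumptions made purely for illustration.

```python
# Conceptual sketch of a tamper-evident provenance manifest, loosely
# inspired by C2PA. An HMAC over the media hash plus metadata stands
# in for a real cryptographic signature, to show how any tampering
# becomes a detectable "broken seal."
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical key


def sign_manifest(media: bytes, creator: str, ai_tools: list) -> dict:
    manifest = {
        "creator": creator,
        "ai_tools_used": ai_tools,
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest


def verify(media: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    # Both the signature and the media hash must check out.
    return (hmac.compare_digest(signature, expected)
            and claimed["media_sha256"] == hashlib.sha256(media).hexdigest())


photo = b"...raw image bytes..."
m = sign_manifest(photo, creator="News Agency Camera 42", ai_tools=[])
print(verify(photo, m))                # True: seal intact
print(verify(photo + b"edited", m))    # False: broken seal
```

The camera-to-cloud system in the first case study below works on the same principle, with signing performed in hardware at the moment of capture.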
Case Study 1: Protecting the Frontline of Journalism
The Challenge: In early 2025, a global news agency faced a crisis when a series of high-fidelity deepfake videos depicting a political coup began circulating in a volatile region. Traditional fact-checking was too slow to stop the viral spread, leading to actual civil unrest.
The Innovation: The agency implemented a camera-to-cloud provenance system. Every image captured by their journalists was cryptographically signed at the moment of capture. Using a public verification tool, viewers could instantly see the “chain of custody” for every frame.
The Impact: By 2026, the agency saw a 50% increase in subscriber trust scores. More importantly, they effectively “immunized” their audience against deepfakes by making the absence of a provenance badge a clear signal of potential misinformation. They turned the Trust Imperative into a competitive advantage.
Case Study 2: Securing Enterprise IP in the Age of Co-Pilots
The Challenge: A Fortune 500 manufacturing firm found that its proprietary design schematics were being leaked through “Shadow AI” — employees using unauthorized generative tools to optimize parts. The company couldn’t tell which designs were protected “useful seeds of invention” and which were tainted by external AI data sets.
The Innovation: They deployed an internal Generative Watermarking system. Every output from authorized corporate AI agents was embedded with an invisible, robust watermark. This watermark tracked the specific human prompter, the model version, and the internal data sources used.
The Impact: The company successfully reclaimed its IP posture. By making the origin of every design verifiable, they reduced legal risk and empowered their engineers to use AI safely, fostering a culture of Human-AI Teaming rather than fear-based restriction.
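The embedding scheme in this case study is proprietary and, in this speculative telling, hypothetical. But the basic mechanics of an invisible watermark can be shown with the classic least-significant-bit technique. Production watermarks from vendors like Digimarc or Steg.AI use far more robust, often deep-learning-based embeddings that survive compression and cropping; LSB embedding does not, and is offered only to make the concept concrete. The payload fields are invented for illustration.

```python
# Toy invisible watermark: hide a payload in the least significant
# bits of image pixels. Fragile by design; for illustration only.
import numpy as np


def embed(image: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy
    assert bits.size <= flat.size, "payload too large for this image"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # set LSBs
    return flat.reshape(image.shape)


def extract(image: np.ndarray, n_bytes: int) -> bytes:
    bits = image.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()


rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Hypothetical payload: who prompted, which model, which data sources.
payload = b"prompter=jdoe;model=v3.2;sources=internal"
marked = embed(img, payload)
print(extract(marked, len(payload)))
```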
Leading Companies and Startups to Watch
As we navigate 2026, the landscape of provenance is being defined by a few key players. Adobe remains a titan in this space with their Content Authenticity Initiative, which has successfully pushed the C2PA standard into the mainstream. Digimarc has emerged as a leader in “stealth” watermarking that survives compression and cropping. In the startup ecosystem, Steg.AI is doing revolutionary work with deep-learning-based watermarks that are invisible to the eye but indestructible to algorithms. Truepic is the one to watch for “controlled capture,” ensuring the veracity of photos from the moment the shutter clicks. Lastly, Microsoft and Google have integrated these “digital nutrition labels” across their enterprise suites, making provenance a default setting rather than an optional add-on.
Conclusion: The Architecture of Truth
To lead innovation in 2026, you must be more than a creator; you must be a verifier. We cannot allow the “useful seeds of invention” to be choked out by the weeds of synthetic deception. By embracing generative watermarking and provenance, we aren’t just protecting data; we are protecting the human connection that makes change with impact possible.
If you are looking for an innovation speaker to help your organization solve the Trust Imperative and navigate Human-Centered Innovation™, I suggest you look no further than Braden Kelley. The future belongs to those who can prove they are part of it.
Frequently Asked Questions
What is the difference between watermarking and provenance?
Watermarking is a technique to embed information (visible or invisible) directly into content to identify its source. Provenance is the broader history or “chain of custody” of a piece of media, often recorded in metadata or a ledger, showing every change made from creation to consumption.
Can AI-generated watermarks be removed?
While no system is 100% foolproof, modern watermarking from companies like Steg.AI or Digimarc is designed to be highly “robust,” meaning it survives editing, screenshots, and even re-recording. Provenance standards like C2PA use cryptography to ensure that if the data is tampered with, the “broken seal” is immediately apparent.
Why does Braden Kelley call trust a “competitive advantage”?
In a market flooded with low-quality or deceptive content, “Trust” becomes a premium. Organizations that can prove their content is authentic and their AI is transparent will attract higher-quality talent and more loyal customers, effectively bypassing the friction of skepticism that slows down their competitors.
Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.
Image credits: Google Gemini
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.
I’m often asked, “What should AI be used for?” While there is much that AI can do to support businesses in general, it’s obvious that I’m being asked how it relates to customer service and customer experience (CX). The true meaning of the question is more about what tasks AI can do to support a customer, thereby potentially eliminating the need for a live agent who deals directly with customers.
First, as the title of this article implies, just because AI can do something, it doesn’t mean it should. Yes, AI can handle many customer support issues, but even if every customer were willing to accept that AI can deliver good support, there are some sensitive and complicated issues for which customers would prefer to talk to a human.
Additionally, consider that, based on my annual customer experience research, 68% of customers (that’s almost seven out of 10) prefer the phone as their primary means of communication with a company or brand. However, another finding in the report is worth mentioning: 34% of customers stopped doing business with a company because self-service options were not provided. Some customers insist on the self-service option, but at the same time, they want to be transferred to a live agent when appropriate.
AI works well for simple issues, such as password resets, tracking orders, appointment scheduling and answering basic or frequently asked questions. Humans are better suited for handling complaints and issues that need empathy, complex problem-solving situations that require judgment calls and communicating bad news.
An AI-fueled chatbot can answer many questions, but when a medical patient contacts the doctor’s office about test results related to a serious issue, they will likely want to speak with a nurse or doctor, not a chatbot.
Consider These Questions Before Implementing AI For Customer Interactions
AI for addressing simple customer issues has become affordable for even the smallest businesses, and an increasing number of customers are willing to use AI-powered customer support for the right reasons. Before implementing AI for customer interactions, consider these questions (a simple routing sketch follows the list):
Is the customer’s question routine or fact-based?
Does it require empathy, emotion, understanding and/or judgment (emotional intelligence)?
Could the wrong answer cause a problem or frustrate the customer?
As you think about the reasons customers call, which ones would they feel comfortable having AI handle?
Do you have an easy, seamless way for the customer to be transferred to a human when needed?
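For teams building this in practice, here is a minimal sketch of the triage logic the checklist implies. The intent labels, sentiment threshold, and routing rules are illustrative assumptions; a real contact center would classify intents with an NLU model and tune escalation rules to its own call drivers.

```python
# Minimal triage sketch: route routine, fact-based requests to the
# bot; route anything emotional, high-stakes, or ambiguous to a human.
from dataclasses import dataclass

# Intents AI handles well: routine and fact-based (hypothetical labels).
BOT_SAFE_INTENTS = {"password_reset", "order_status", "store_hours",
                    "appointment_scheduling", "faq"}


@dataclass
class Request:
    intent: str        # e.g. output of an NLU classifier (assumed)
    sentiment: float   # -1.0 (angry) .. 1.0 (happy)
    high_stakes: bool  # could a wrong answer cause real harm?


def route(req: Request) -> str:
    if req.high_stakes or req.sentiment < -0.3:
        return "human"  # needs empathy or judgment
    if req.intent in BOT_SAFE_INTENTS:
        return "bot"
    return "human"      # when in doubt, escalate seamlessly


print(route(Request("order_status", 0.2, False)))     # bot
print(route(Request("billing_dispute", -0.6, True)))  # human
```

Note that the default branch escalates to a human, which is exactly the seamless handoff the last question in the list asks about.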
The point is that no matter how capable the technology is, it may not be best suited to deliver what the customer wants. Live agents can “read the customer” and know how to effectively communicate and empathize with them. AI can’t do that … yet. The key isn’t choosing between AI and humans; it’s knowing when to use each one.
Image credits: Google Gemini, Shep Hyken
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.