
The Consumption Collapse – When the Feedback Loop Bites Back

Why the Great American Contraction is leading to a crisis of demand and a re-imagining of the American Social Contract.

LAST UPDATED: April 17, 2026 at 3:58 PM


GUEST POST from Art Inteligencia


The Ghost in the Shopping Mall

In our previous exploration, “The Great American Contraction,” we identified a fundamental shift in the American story. For the first time in our history, the foundational assumption of “more” — more people, more labor, and more expansion — has been inverted. We discussed how the exponential rise of AI and robotics is dismantling the traditional value chain of human labor, moving us from a nation of “doers” to a necessary, albeit smaller, elite class of “architects.”

However, as we move closer to the two-year horizon of the next United States Presidential election, a more insidious shadow is beginning to fall across the landscape. It is no longer just a crisis of employment; it has evolved into a crisis of consumption. This is the “Feedback Loop of Irrelevance.”

The logic is as cold as the algorithms driving it: As increasing numbers of knowledge workers and service providers are displaced by autonomous agents, their disposable income evaporates. When people lose their financial footing, they spend less. When they spend less, the revenue of the very companies that automated them begins to shrink. To protect their margins in a declining market, these companies are forced to cut back even further — often doubling down on automation to reduce costs — which in turn removes more consumers from the marketplace.

We are witnessing the birth of a deflationary death spiral where corporate efficiency threatens to cannibalize the very markets it was designed to serve. Over the next 24 months, this cycle will redefine the American psyche and set the stage for an election year unlike any we have ever seen.

It is time to look beyond the immediate shock of job loss and examine the structural integrity of our economic operating system. If the “Old Equation” of labor-for-income is a sinking ship, we must decide what happens to the passengers before we reach the horizon of 2028.

The Vicious Cycle of Automated Austerity

The transition from a growth-based economy to a Great Contraction is not a linear event; it is a recursive loop. As AI adoption accelerates, we are witnessing a phenomenon I call “Automated Austerity.” This is the process where short-term corporate gains from labor reduction lead directly to long-term market erosion. The cycle progresses through four distinct, overlapping phases:

Phase 1: The First Wave Displacement

We are currently seeing the replacement of both low-skilled physical labor and high-skilled knowledge work by autonomous systems. This isn’t just about factory floors; it’s about the “Architect” roles we once thought were safe. As companies replace $150k-a-year analysts with $15-a-month compute tokens, the immediate impact is a massive surge in corporate profit margins.

Phase 2: The Wallet Effect

The friction begins here. Displaced workers initially rely on savings or severance, but as those dry up, the “gig economy” safety net is nowhere to be found — because AI is already performing the freelance writing, coding, and administrative tasks that used to provide a bridge. Disposable income doesn’t just dip; for a significant percentage of the population, it vanishes. This causes a sharp contraction in discretionary spending.

Phase 3: The Revenue Mirage

This is the trap. Companies that automated to save money suddenly find their top-line revenue shrinking because their customers (the former workers) can no longer afford their products. The efficiency gains are real, but the market they were built to serve is evaporating. We are entering a period where companies may be 100% efficient at producing goods that 0% of the displaced population can buy.

Phase 4: The Secondary Contraction

Faced with shrinking revenues, boards of directors demand even deeper cost-cutting to protect investor dividends. This leads to a second, more desperate wave of layoffs, further reducing the tax base and consumer spending power. This feedback loop creates a Deflationary Death Spiral that traditional monetary policy is ill-equipped to handle.
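The four phases above can be sketched as a toy difference model. Everything here is an illustrative assumption — the function name, the 15% starting automation rate, and the sensitivity of further cuts to lost revenue are invented for the sketch, not drawn from the article or any real forecast:

```python
# Toy model of the "Automated Austerity" feedback loop.
# All parameters are illustrative assumptions, not forecasts: each round,
# firms automate a share of remaining jobs, displaced workers stop
# spending, and falling revenue triggers deeper cuts the next round.

def simulate_spiral(rounds=4, workers=100.0, spend_per_worker=1.0,
                    automation_rate=0.15, revenue_sensitivity=0.5):
    """Return (workers, revenue) after each round of cost-cutting."""
    history = []
    revenue = workers * spend_per_worker
    for _ in range(rounds):
        # Phase 1: displacement - a share of remaining jobs is automated.
        workers *= (1 - automation_rate)
        # Phase 2: the wallet effect - consumer spending tracks employment.
        new_revenue = workers * spend_per_worker
        # Phases 3-4: lost revenue drives deeper cuts in the next round.
        automation_rate += revenue_sensitivity * (revenue - new_revenue) / revenue
        revenue = new_revenue
        history.append((round(workers, 1), round(revenue, 1)))
    return history

print(simulate_spiral())
```

Under these assumed parameters, each round of lost revenue raises the next round’s automation rate, so the decline accelerates rather than leveling off — which is exactly the “recursive loop” the four phases describe.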

“When you automate the consumer out of a job, you eventually automate the business out of a customer.” — Braden Kelley

Over the next two years, this cycle will move from the periphery of Silicon Valley to the heart of every American household, forcing a radical re-evaluation of how we distribute the abundance that AI creates.

Vicious Cycle of Automated Austerity

The Two-Year Horizon: 2026–2028

As we navigate the next twenty-four months, the gap between traditional economic indicators and the lived reality of American citizens will become a canyon. We are entering a period of Economic Bifurcation, where the distance between those who own the “compute” and those who formerly provided the “labor” creates a new social stratification.

The Rise of the ‘Hollow’ Recovery

Expect to hear the term “efficiency-led growth” frequently in the coming months. Wall Street may remain buoyant as AI-integrated corporations report record-breaking margins per employee. However, this is a hollow success. While the stock market reflects corporate optimization, alternative economic health measures, like the Genuine Progress Indicator (GPI), will likely show a steep decline. We are becoming a nation that is technically “wealthier” while the average citizen’s ability to participate in that wealth is structurally dismantled.

The Shift from ‘Doer’ to ‘Architect’ Burnout

The “Great American Contraction” is not just about those losing roles; it is about the immense pressure on those who remain. The survivors — the Architect Class — are tasked with managing sprawling AI ecosystems. This creates a new kind of cognitive load. By 2027, I predict we will see a peak in “Technological Burnout,” where the speed of AI-driven change outpaces the human capacity to design for it. This is where Human-Centered Innovation becomes a survival skill rather than a corporate luxury.

The Mindset of Survivalist Innovation

As the feedback loop of shrinking revenue intensifies, we will see American citizens taking radical actions to decouple from a failing labor market. This includes:

  • Hyper-Localization: A resurgence in local bartering and community-based resource sharing as a hedge against the volatility of the automated economy.
  • The ‘Off-Grid’ Digital Economy: Individuals utilizing open-source AI models to create value outside of the traditional corporate gatekeepers, leading to a “shadow economy” of peer-to-peer services.
  • Consumption Sabotage: A psychological shift where citizens, feeling irrelevant to the economy, consciously reduce their consumption to the bare essentials, further accelerating the contraction.

This period will be defined by a search for meaning in a post-labor world. The American citizen of 2027 is no longer asking “How do I get ahead?” but rather “How do I remain relevant in a world that no longer requires my effort to function?”

The Survivalist Innovation Framework

Beyond GDP: New Vitals for a Contracting Economy

As the “Old Equation” fails, the metrics we use to measure national success are becoming dangerously obsolete. In a world where AI can drive productivity while simultaneously hollowing out the consumer class, GDP is no longer a compass; it is a rearview mirror. To navigate the next two years, we must shift our focus to alternative economic health measures that prioritize human vitality over transactional velocity.

1. The Genuine Progress Indicator (GPI)

Unlike GDP, which counts the “cost of cleaning up a disaster” as a positive, the GPI factors in income inequality and the social costs of underemployment. As we move toward 2028, we must demand a GPI-centered view of the economy. If AI-driven efficiency creates wealth but destroys the social capital of our communities, the GPI will show we are regressing, providing a much-needed reality check to “hollow” stock market gains.

2. The U-7 ‘Utility’ Rate

Standard unemployment figures (U-3) are increasingly irrelevant. We need a U-7 ‘Utility’ Rate to track those who are “technologically displaced” — individuals whose roles have been absorbed by algorithms or whose wages have been suppressed to the point of working poverty. This metric will highlight the Architect Gap: the growing number of people who have the capacity for high-value human contribution but lack access to the compute resources required to compete.

3. The Social Progress Index (SPI)

The goal of an automated economy should be to improve the human condition. The SPI measures outcomes that actually matter: Access to advanced education, personal freedom, and environmental quality. By 2027, the SPI will be the most honest indicator of whether the Great Contraction is a managed transition to a better life or a chaotic collapse of the middle class.

4. Value of Organizational Learning Technologies (VOLT)

We must begin measuring the “Agility Score” of our nation. VOLT measures how effectively we are using AI to solve complex problems rather than just replacing workers. A high VOLT score paired with a low SPI suggests we are building a “learning machine” that has forgotten its purpose: to serve the humans who created it.

“A high-GDP nation with a crashing Social Progress Index (SPI) is merely a failed state in a gold tuxedo.”

The political battleground of the next two years will be defined by a new set of metrics along these lines, if not these exact ones. The 2028 election will not just be a choice between candidates, but a choice between maintaining the illusion of growth or designing a system of sovereignty for the American citizen.

The Localized Pivot

The Sovereign Tech-Stack & The Localized Pivot

As the “Feedback Loop of Irrelevance” continues to shrink traditional income, we are witnessing a radical grassroots response: The Localized Pivot. When the macro-economy fails to provide value to the individual, the individual stops providing value to the macro-economy and turns inward to their community.

The Rise of the ‘Personal AI’ Infrastructure

By 2027, the barrier to entry for sophisticated production will vanish. We will see a surge in “Sovereign Tech-Stacks” — individuals and small collectives using localized, open-source AI models to run micro-manufactories, automated vertical farms, and peer-to-peer service networks. This is Innovation as a Survival Tactic. These citizens are essentially “unplugging” from the hollowed-out corporate ecosystem and creating a shadow economy that traditional GDP cannot track.

From Global Chains to Hyper-Local Resilience

The contraction of consumer spending will lead to the death of the “long supply chain” for many goods. In its place, we will see the rise of Regional Circular Economies. AI will be used not to maximize global profit, but to optimize local resource sharing. Imagine community AI agents that manage local energy grids or coordinate the bartering of skills — human-centered design at its most fundamental level.

The ‘Architect’ of the Commons

In this phase, the “Architect” role I’ve discussed previously becomes a civic one. These are the individuals who design the systems that keep their communities thriving while the national revenue shrinks. They are the ones building the Human-Centered Guardrails that ensure technology serves the neighborhood, not the shareholder. This shift represents a move from Global Consumerism to Local Sovereignty.

“When the national economic engine stops fueling the household, the household must build its own engine, or it dies.” — Braden Kelley

This localized movement will be the wild card of 2028. It creates a class of “Un-Architected” citizens who are no longer dependent on the federal government or major corporations, creating a profound tension for any political candidate trying to promise a return to the ‘Old Equation’.

The Road to 2028: The Politics of Human Relevance

As we approach the next Presidential election, the political discourse will undergo a seismic shift. The traditional “Left vs. Right” battle lines over tax rates and social issues will be superseded by a more existential debate: The Individual vs. The Algorithm. The 2028 election will likely be the first in history centered entirely on the consequences of a post-labor economy.

The ‘Humanity First’ Tax and Sovereign Solvency

The most contentious issue will be how to fund a shrinking state as the labor-based tax system collapses. We will see the rise of the “Compute Tax” — a proposal to tax AI tokens and robotic output rather than human hours. This isn’t just about revenue; it’s about sovereign solvency. When companies reinvest profits into compute rather than wages, the “Economic OS” crashes. Expect candidates to run on a platform of Universal Basic Everything (UBE) — providing the results of automation (healthcare, housing, and energy) directly to the people as the tax base from labor vanishes.

The Compute Tax

The Death of Traditional Immigration Debates

As I noted in our initial look at the Contraction, the old argument about immigrants “taking jobs” or “filling gaps” is dead. In 2028, the focus will shift to “Strategic Talent Acquisition.” The debate will center on how to attract the world’s few remaining irreplaceable “Architect” minds while managing a domestic population that is increasingly surplus to the needs of capital. This will create a strange political alliance between protectionists and humanists, both seeking to shield human value from digital devaluation.

Mindset and Likely Actions of the Citizenry

By the time voters head to the polls, the American mindset will have shifted from aspiration to preservation. We are likely to see:

  • The Rise of ‘Neo-Luddite’ Activism: Not a rejection of technology, but a demand for “Human-Centered Guardrails” that prevent AI from cannibalizing the last remaining sectors of human connection.
  • The Search for Non-Monetary Meaning: A surge in candidates who focus on “Quality of Life” metrics rather than fiscal growth, appealing to a class of people who no longer derive their identity from their “job.”
  • Algorithmic Populism: Politicians using AI to personalize fear and hope at scale, creating a feedback loop where the technology used to displace the worker is also used to win their vote.

The central question of the 2028 election will be simple but devastating: “What is a country for, if not to support the thriving of its people — even when those people are no longer ‘productive’ in a traditional sense?” The winner will be the one who can design a new social contract for a smaller, more resilient, and truly innovative nation.

Conclusion: Designing a Thrivable Contraction

The Great American Contraction is no longer a theoretical “what-if” for futurists to debate; it is an active restructuring of our reality. As the feedback loop of automated austerity begins to bite, we are discovering that a country built on the relentless pursuit of “more” is fundamentally ill-equipped to handle the arrival of “enough.”

The next two years will be a period of intense friction as our legacy systems — our tax codes, our education models, and our social safety nets — grind against the frictionless efficiency of the AI era. We will see traditional economic metrics fail to capture the quiet struggle of the consumer, and we will watch as the 2028 election turns into a referendum on the value of a human being in a post-labor world.

But contraction does not have to mean collapse. If we shift our focus from transactional velocity to human vitality, we have the opportunity to design a new version of the American Dream. This new dream isn’t about the quantity of jobs we can protect from the machines, but the quality of the lives we can build with the abundance those machines create. It is about moving from a nation of “doers” who are exhausted by the grind to a nation of “architects” who are inspired by the possible.

“The goal of innovation was never to replace the human; it was to release the human. We are finally being forced to decide what we want to be released to do.” — Braden Kelley

The road to 2028 will be defined by whether we choose to cling to the wreckage of the growth-based model or whether we have the courage to embrace a smaller, smarter, and more human-centered future. The contraction is inevitable, but the outcome is ours to design.

STAY TUNED: On Tuesday my friend Braden Kelley (with a little help from me) is publishing an article featuring one hypothesis for what an AI SOFT LANDING might look like.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Agentic Paradox

Why Giving AI More Autonomy Requires Us to Give Humans More Agency

LAST UPDATED: April 10, 2026 at 7:11 PM


by Braden Kelley and Art Inteligencia


The Rise of the Machine “Doer”

For the past few years, we have lived in the era of Generative AI — a world of sophisticated chatbots and creative assistants that respond to our prompts. But as we move deeper into 2026, the landscape has shifted. We are now entering the age of Agentic AI. These are not just tools that talk; they are autonomous systems capable of executing complex workflows, making real-time decisions, and acting on our behalf across digital ecosystems.

On the surface, this promises the ultimate efficiency. We imagine a future where the “busy work” vanishes, leaving us free to innovate. However, a troubling Agentic Paradox has emerged: as we grant machines more autonomy to act, many humans are finding themselves with less agency. Instead of feeling liberated, workers often feel like they are merely “babysitting” algorithms or reacting to a relentless stream of machine-generated outputs.

This disconnect creates a high-stakes leadership challenge. If we focus solely on the autonomy of the machine, we risk creating an “algorithmic anxiety” that stifles the very human creativity we need to thrive. To succeed in this new era, leaders must realize that the more powerful our AI agents become, the more we must intentionally “upgrade” the agency, authority, and strategic focus of our people.

The Thesis: The goal of innovation in 2026 is not to build the most autonomous machine, but to build a human-centered ecosystem where AI agents manage the tasks and empowered humans manage the intent.

The Hidden Cost: The Cognitive Load Crisis

The promise of Agentic AI was a reduction in workload, but for many organizations, the reality has been a shift in the type of work rather than a reduction of it. This has birthed the Cognitive Load Crisis. While an autonomous agent can process data and execute tasks 24/7, it lacks the contextual wisdom to understand the nuances of organizational culture or ethical gray areas. This leaves the human “orchestrator” in a state of perpetual high-alert.

Instead of performing deep, meaningful work, leaders and employees are becoming trapped in the Supervision Trap. They are forced to manage a relentless firehose of machine-generated notifications, approvals, and “check-ins.” This creates a fragmented mental state where the human mind is constantly context-switching between different agent streams, leading to a unique form of 2026 burnout — digital exhaustion without the satisfaction of tactile achievement.

Furthermore, as AI agents take over more of the “doing,” we see an erosion of Deep Work. When every minute is spent verifying the output of an algorithm, the quiet space required for radical innovation and strategic foresight vanishes. We are effectively trading our long-term creative capacity for short-term operational speed.

  • Notification Fatigue: The mental tax of being the constant “emergency brake” for autonomous systems.
  • Loss of Intuition: The danger of becoming so reliant on agentic data that we lose our “gut feel” for the market.
  • The Feedback Loop: A system where humans spend more time managing machines than mentoring people.

To break this cycle, we must stop treating AI agents as simple productivity tools and start treating them as entities that require a new architecture of human attention. If we don’t manage the cognitive load, our most talented people will eventually shut down, leaving the “Magic Makers” of our organization feeling like mere cogs in a machine-led wheel.

Agentic Paradox Spectrum Infographic

Redefining Roles: From “The Conscript” to “The Architect”

As the landscape of work shifts, so too must our understanding of how individuals contribute to the innovation ecosystem. In my work on the Nine Innovation Roles, I’ve often highlighted how different archetypes fuel organizational growth. In this agentic age, we are seeing a dramatic migration of these roles. If we are not intentional, our best people will default into the role of The Conscript — those who are merely drafted into service to support the AI’s agenda, performing the monotonous tasks of verification and data cleanup.

The goal of a human-centered transformation is to automate the role of the “Conscript” and elevate the human into the role of The Architect or The Magic Maker. When the AI handles the heavy lifting of execution, the human is finally free to focus on Intent. This is where true agency resides. Agency is not the ability to do more; it is the power to decide what is worth doing and why it matters to the human beings we serve.

However, there is a dangerous “Agency Gap” emerging. If an organization implements AI agents without redefining human job descriptions, employees lose their sense of ownership. When the machine becomes the primary creator, the human “spark” is extinguished. We must ensure that AI serves as the support staff for human intuition, not the other way around.

The Migration of Value

The AI Agent Role → The Human Agency Role

  • The Conscript (AI): handling repetitive execution and data synthesis → The Architect (Human): designing the systems and ethical frameworks for the AI.
  • The Facilitator (AI): coordinating schedules and managing basic workflows → The Revolutionary (Human): identifying the “radical” shifts the AI isn’t programmed to see.
  • The Specialist (AI): performing deep-dive technical analysis at scale → The Magic Maker (Human): applying empathy and storytelling to turn data into a movement.

By clearly delineating these roles, leaders can close the Agency Gap. We must empower our teams to move away from “monitoring” and toward “orchestrating.” This transition is the difference between a workforce that feels obsolete and one that feels essential.

Agentic Workforce Migration Infographic

FutureHacking™ the Cognitive Workflow

To navigate the complexities of 2026, organizations cannot rely on reactive strategies. We must use FutureHacking™ — a collective foresight methodology — to map out how the relationship between human intelligence and agentic automation will evolve. This isn’t just about predicting technology; it’s about engineering the “Human-Agent Interface” so that it scales without crushing the human spirit.

The core of this approach involves identifying the Innovation Bonfire within your team. In this metaphor, the AI agents are the fuel — abundant, powerful, and capable of sustaining a massive output. However, the humans must remain the spark. Without the human spark of intent and empathy, the fuel is just a cold pile of logs. FutureHacking™ allows teams to visualize where the “fuel” might be smothering the “spark” and adjust the workflow before burnout sets in.

By engaging in collective foresight, teams can proactively decide which cognitive territories are “Human-Core.” These are the areas where we intentionally limit AI autonomy to preserve our creative agency and cultural identity. It’s about choosing where we want the machine to lead and where we require a human to hold the compass.

  • Mapping the Friction: Identifying which agent-led tasks are creating the most mental “drag” for the team.
  • Defining Non-Negotiables: Establishing which parts of the customer and employee experience must remain 100% human-centric.
  • Intent Modeling: Shifting the focus from “What can the agent do?” to “What outcome are we trying to hack for the future?”

When we FutureHack our workflows, we move from being passive recipients of technological change to being the active architects of our organizational destiny. We ensure that as the machine gets smarter, our collective human intelligence becomes more focused, not more fragmented.

Framework: The “Agency First” Operating Model

Building a resilient organization in the age of Agentic AI requires more than just new software; it requires a new operating philosophy. We must move away from a model of Machine Management and toward a model of Intent Orchestration. This framework provides three critical steps to ensure that human agency remains the primary driver of your business value.

1. Cognitive Offloading, Not Task Dumping

The goal of automation should be to reduce the mental noise for the employee, not just to move a task from a human to a machine. If a human still has to track, verify, and worry about every step the agent takes, the cognitive load hasn’t decreased — it has merely changed shape.
The Strategy: Design “set and forget” guardrails that allow agents to operate within a defined ethical and operational “sandbox,” only alerting the human when a decision falls outside of those parameters.
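One way to make the “set and forget” idea concrete is a sandbox object that answers a single question per proposed action: inside the limits, or not. The class, action names, and spending limits below are hypothetical illustrations, not a real agent framework:

```python
# Illustrative sketch of a "set and forget" guardrail sandbox.
# The agent acts autonomously while inside the sandbox and escalates to
# a human only when an action falls outside its defined parameters.

from dataclasses import dataclass

@dataclass
class Sandbox:
    max_spend: float       # budget the agent may commit per action (assumed)
    allowed_actions: set   # operations the agent may take unsupervised

    def permits(self, action: str, cost: float) -> bool:
        return action in self.allowed_actions and cost <= self.max_spend

def run_agent_step(sandbox, action, cost, execute, escalate):
    """Execute autonomously if inside the sandbox; otherwise escalate."""
    if sandbox.permits(action, cost):
        return execute(action)
    return escalate(action, cost)  # the human only sees the exceptions

sandbox = Sandbox(max_spend=500.0,
                  allowed_actions={"send_report", "reorder_stock"})
log = []
run_agent_step(sandbox, "send_report", 0.0,
               execute=lambda a: log.append(("auto", a)),
               escalate=lambda a, c: log.append(("human", a)))
run_agent_step(sandbox, "issue_refund", 900.0,
               execute=lambda a: log.append(("auto", a)),
               escalate=lambda a, c: log.append(("human", a)))
print(log)  # the report goes out automatically; the refund is escalated
```

The design point is that the human’s attention is spent only on the exceptions, which is what actually reduces cognitive load rather than merely relocating it.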

2. The “Human-in-the-Loop” Upgrade

We must shift the role of the worker from Monitor to Mentor. In the old model, the human checks the machine’s homework for errors. In the “Agency First” model, the human coaches the agent on why certain decisions are better than others, treating the AI as an apprentice. This reinforces the human’s position as the source of wisdom and authority, preventing the “Conscript” mentality.

3. Intent-Based Leadership

Management must evolve to focus on the Intent rather than the Activity. In a world where agents can generate infinite activity, “busyness” is no longer a proxy for value. Leaders must empower their teams to spend their time defining the “Commander’s Intent” — the high-level objectives and human-centered outcomes that the AI agents must then figure out how to achieve.

Intent Based Leadership Blueprint Infographic

The Agency Audit: Ask your team this week: “Does this new AI agent give you more time to think strategically, or does it just give you more machine-generated work to manage?” The answer will tell you if you are facing an Agentic Paradox.

Conclusion: Leading the Human-Centered Revolution

The true test of leadership in 2026 is not how quickly you can deploy autonomous agents, but how effectively you can protect and amplify the human spirit within your organization. As we navigate the Agentic Paradox, we must remember that technology is a force multiplier, but it requires a human “integer” to multiply. Without a clear sense of agency, even the most advanced AI becomes a source of friction rather than a source of freedom.

By addressing the Cognitive Load Crisis and intentionally moving our teams out of “Conscript” roles and into “Architectural” ones, we do more than just improve efficiency — we future-proof our culture. We ensure that our organizations remain places of meaning, creativity, and purpose.

The “Year of Truth” demands that we be honest about the mental tax of automation. It calls on us to use FutureHacking™ not just to map out our tech stacks, but to map out our human potential. The companies that win the next decade won’t be those with the smartest agents; they will be the ones that used those agents to give their people the time and agency to be truly, radically human.

“Innovation is a team sport where the machines play the support roles so the humans can score the points.”

Are you ready to hack your agentic future?

Frequently Asked Questions

What is the primary difference between Generative AI and Agentic AI?

Generative AI focuses on creating content (text, images, code) based on human prompts. Agentic AI goes a step further by having the autonomy to execute multi-step workflows, make decisions, and interact with other systems to complete a goal without constant human intervention.

How can leaders identify if their team is suffering from the Agentic Paradox?

Look for signs of the “Supervision Trap,” where employees spend more time managing and verifying machine outputs than performing strategic work. If your team feels busier but reports a decline in creative output or “Deep Work,” they are likely experiencing the paradox.

What role does FutureHacking™ play in managing AI integration?

FutureHacking™ is a collective foresight methodology used to visualize the long-term impact of AI on organizational roles. It helps teams proactively define “Human-Core” territories, ensuring that as AI scales, it supports rather than smothers human agency and innovation.

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article, add images and create infographics.


Artificial Intelligence Powered Teamwork


GUEST POST from David Burkus

Over the past year, leaders trying to leverage AI-powered teamwork have been asking the same questions: “What should I be doing with ChatGPT?” “How should we be rolling this out to our team?” “What does this mean for the future of work?”

They’re important questions, but they all kind of miss the mark. Because they treat AI like it’s just another IT rollout. Like that time your company moved from email to Slack. Or when everyone was forced to learn a new payroll system. But AI isn’t just another piece of software.

AI isn’t a tool. AI is a teammate.

And until we start treating it that way, we’re going to keep missing the real opportunity.

Why “Tool Thinking” Falls Short

Most people respond to AI in one of three ways. They see it as a threat. They see it as a tool. Or they see it as a teammate.

If you see AI as a threat, you’re going to hesitate. And hesitation is the enemy of progress. You’ll wait. You’ll hold back. But AI isn’t slowing down. And the people who do embrace it — whether they’re colleagues in your department or competitors across the industry — are only going to get better, faster, and more efficient. That puts your performance at risk by comparison: next to those using AI, you will simply be slower.

If you see AI as a tool, you’re on slightly better footing. You’ll look for ways to automate the repetitive stuff. Email summaries. Meeting notes. Draft responses. All helpful. All productive. But you’re still missing the big value. You’re simplifying, not improving. You’re staying in neutral.

But if you treat AI as a teammate, that’s where transformation starts.

That’s when AI becomes a collaborator. A partner in decision-making. A quiet force that helps your team think more clearly, solve problems faster, and deliver better outcomes.

That’s when you start to unlock the full potential of AI-powered teamwork. That’s when it truly makes you smarter.

Step One: From Slower to Simpler

The first mindset shift is from threat to tool. From slower to simpler. Think about the annoying parts of your job. The copy-paste chores. The tedious admin. The stuff you’re way too smart to be wasting time on. AI can take that off your plate today.

Summarize the endless email chain. Done. Draft that status report. Done. Transcribe your meeting and highlight key action items. Double done.

Not sure where to start? Try this: open whatever AI platform you prefer — ChatGPT, Claude, Gemini, Grok, doesn’t matter — and type:

“Here’s what I do in my job every day. Ask me questions to understand it better, then show me how you could help.”

It will ask follow-ups. It will start mapping your workflows. It will suggest ways to make your day easier, your output faster, and your mind a little clearer.

Congratulations! You’ve moved from slower to simpler.
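If you ever want to script that kickoff exchange rather than type it, it maps onto the role/content message format that most chat-style AI APIs share. A minimal, vendor-neutral sketch (the function name and system prompt are illustrative, not any specific product’s API):

```python
def kickoff_messages(job_description: str) -> list[dict]:
    """Build the opening exchange that asks the AI to interview you
    about your job before suggesting ways it could help."""
    return [
        {
            "role": "system",
            "content": (
                "You are a helpful work assistant. Before proposing anything, "
                "ask clarifying questions until you understand the user's job."
            ),
        },
        {
            "role": "user",
            "content": (
                "Here's what I do in my job every day. Ask me questions to "
                "understand it better, then show me how you could help.\n\n"
                + job_description
            ),
        },
    ]

# Example: hand this list to whatever chat client you use.
messages = kickoff_messages(
    "I manage a team of five analysts and spend most of my day in status meetings."
)
```

The point isn’t the code; it’s that the same “interview me first” framing works whether you type it into a chat window or wire it into a workflow.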

Step Two: From Simpler to Smarter

Once you’re using AI to simplify tasks, it’s time to use it to sharpen your thinking. Because smarter teams don’t just offload work. They upgrade their decision-making. They collaborate with AI, not just delegate to it.

How? Try turning AI into a devil’s advocate. Feed it your current strategy or plan, then ask:

“Tell me why this could fail.”

You’re not asking it to make decisions. You’re using it to challenge assumptions. To highlight blind spots. To play the role of critic — without the ego. AI provides friction without awkwardness. No one gets defensive when a bot questions your logic.

Want to go deeper? Try these prompts:

  • “What are we overlooking?”
  • “What assumptions might not be true?”
  • “Give me three stronger alternatives to this approach.”

Want to make the feedback even more useful? Ask the AI to role-play:

  • “Think like a strategic consultant.”
  • “Respond like a customer.”
  • “What would a competitor say?”

This is how AI-powered teamwork gets smarter, not just simpler. You’re not just getting a second opinion. You’re getting sharper thinking, without the politics.

Step Three: Make It a Team Habit

And here’s where the real breakthrough happens: when AI becomes a shared part of your team’s workflow — not just your personal productivity hack.

Use it in meetings to take notes. To draft action items. To highlight decisions made.

But also, use it before meetings. Drop your agenda into the chatbot and ask what you’re missing. Run your strategy plan through it and ask for feedback before your next off-site.

This only works if the whole team adopts it. And that’s where leaders come in.

Leaders need to be intentional. Because while AI can streamline collaboration, it can also introduce risks. If team members outsource their attention to a bot, they may stop listening. If everything’s recorded, people may speak up less. The quiet voices might go even quieter.

That’s why leadership still matters. Psychological safety? Still your job. Empathy? Still your job. Motivation and morale? Still your job.

AI can’t do that for you. But what it can do is give you more time to focus on it. Because when the bots handle the mechanics, you can focus on the human side of leadership — the part that never gets automated.

The Future of AI-Powered Teamwork

So, where’s your team right now? Are you stuck in “slower,” resisting change? Are you in “simpler,” just automating inbox chores? Or are you starting to work “smarter,” using AI to enhance how your team thinks and collaborates?

Wherever you are, there’s room to grow. Don’t just ask what AI can do. Ask how your team can do better work with it. Try a prompt. Test an idea. Challenge a plan. Start treating AI like a teammate, not a tool. Because the future of AI-powered teamwork isn’t about tech. It’s about trust. It’s about how you use new capabilities to build better teams, make better decisions, and do work that actually matters.

And that’s something worth getting smarter about.

Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Augmented Mind

Beyond Recall: The Strategic Evolution of Human Digital Memory

LAST UPDATED: April 10, 2026 at 3:39 PM

The Augmented Mind

GUEST POST from Art Inteligencia


The Dawn of the Extended Mind

For decades, we have treated our digital devices as external filing cabinets — places where we “put” information to be retrieved later. However, as the volume of data we consume shifts from a manageable stream to an overwhelming deluge, the traditional boundaries of the human mind are being tested. We are now entering a profound transition from Information Management to Cognitive Partnership.

The “Cognitive Crisis” is no longer a future threat; it is our current reality. Traditional search functions and folder-based storage hierarchies are failing the modern knowledge worker because they rely on perfect recall of where a file was placed or exact matching of keywords. When our biological hardware reaches its limit, our productivity and creativity suffer.

Digital Memory Augmentation represents a fundamental shift. It moves us beyond simple backups and toward active, AI-driven cognitive extensions. This isn’t about replacing human thought with an algorithm; it is a human-centered design opportunity to create a digital scaffold for our intellect. By augmenting our memory, we free the brain from the mundane task of storage, allowing it to return to its highest and best use: imagination, synthesis, and meaningful connection.

The Three Pillars of Augmented Memory

To move beyond simple storage and into true augmentation, we must look at how digital systems interface with our lived experience. This evolution is built upon three foundational pillars that transform raw data into a functional extension of our intellect.

1. Seamless Capture

The greatest friction in traditional memory management is the act of “saving.” When we have to pause our flow to take a note, bookmark a page, or file a document, we break our cognitive momentum. Seamless Capture shifts the burden from the user to the environment. Through “digital exhaust” — the ambient collection of our meetings, readings, and interactions — augmentation systems ensure that the “sparks” of insight are never lost simply because we were too busy to write them down.

2. Contextual Resonance

A memory is useless if it exists in a vacuum. Traditional systems rely on folders or tags, which require us to remember how we categorized information in the past. Contextual Resonance uses semantic analysis to understand the “why” and “how” behind a piece of information. By linking a data point to a specific project, a person, or even an emotional state, the system mimics the associative nature of the human brain, making retrieval feel like a natural thought rather than a database query.

3. Proactive Synthesis

The ultimate goal of augmentation is to move from reactive searching to proactive assistance. Proactive Synthesis is the stage where the system acts as a true partner. Instead of waiting for a prompt, the “Second Brain” identifies patterns across years of data and surfaces relevant insights at the moment they are most useful. It creates “digital serendipity,” connecting a conversation you had this morning with a research paper you read three years ago, fueling innovation through automated cross-pollination.
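To make Contextual Resonance concrete: the difference from keyword search is that retrieval ranks stored memories by overall similarity to the query rather than demanding an exact match. Real systems use learned semantic embeddings; in this toy sketch a bag-of-words vector and cosine similarity stand in, just to show why recall can feel associative instead of database-like:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Crude stand-in for a semantic embedding: word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recall(query: str, notes: list[str]) -> str:
    """Return the stored note most similar to the query."""
    q = vectorize(query)
    return max(notes, key=lambda n: cosine(q, vectorize(n)))

notes = [
    "meeting notes budget review with finance team",
    "research paper on battery degradation in legacy aircraft",
    "restaurant recommendation from sarah thai place downtown",
]
best = recall("what did the paper say about battery wear", notes)
```

Note that the query never had to match a folder name or an exact keyword; partial conceptual overlap is enough, which is the property the three pillars depend on.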

Reimagining the Innovation Lifecycle

Innovation is rarely the result of a single “Eureka!” moment; it is a cumulative process of gathering sparks, connecting dots, and refining concepts over time. By integrating digital memory augmentation, we transform the innovation lifecycle from a fragile, hit-or-miss endeavor into a robust, high-velocity engine for growth.

1. The End of “Lost Ideas”

How many breakthrough concepts have been lost to the ether simply because they occurred in the shower, during a commute, or in the middle of a casual conversation? Memory augmentation ensures that the “sparks” — the messy, early-stage thoughts and sketches — are captured in real-time. By removing the friction of documentation, we preserve the raw materials of innovation before they can be overwritten by the next urgent task.

2. Cross-Pollination at Scale

The most powerful innovations often come from combining ideas from two completely unrelated fields. However, our biological memory is prone to “siloing” information by department or project. A digital memory layer can scan across decades of organizational history and disparate personal interests to find hidden links. It allows an engineer to see how a solution from a 2015 project might solve a 2026 problem, facilitating a level of cross-pollination that was previously impossible for a single human mind to manage.

3. Accelerating Mastery

In a world of hyper-specialization, the “time-to-expertise” is a major bottleneck for innovation. Memory augmentation acts as a cognitive scaffold, allowing individuals to rapidly navigate complex institutional knowledge and technical documentation. By having a “Second Brain” that remembers the technical nuances and past failures of a specific domain, innovators can stand on the shoulders of their own past experiences (and those of their predecessors) much faster, shifting their energy from learning the foundation to building the future.

Designing for Trust and Human Agency

As we integrate digital memory more deeply into our lives, the design challenge shifts from technical feasibility to ethical responsibility. If we are to treat a digital system as an extension of our own mind, that system must be designed with an uncompromising focus on the user’s autonomy, privacy, and long-term cognitive health.

1. The Privacy Imperative

For digital memory augmentation to be successful, the “Second Brain” must be a private sanctuary. Users will only record their raw thoughts, private conversations, and vulnerable moments if they have absolute certainty that their data is not being used for advertising or surveillance. Designing for trust means prioritizing on-device processing and end-to-end encryption — ensuring that the user remains the sole owner and curator of their digital history.

2. Combatting Cognitive Atrophy

A significant concern with augmentation is the risk of “cognitive laziness.” Just as GPS has weakened our innate sense of navigation, there is a risk that total recall tools could weaken our ability to focus or synthesize information independently. Human-centered design must focus on augmentation, not replacement. The goal is to build tools that act as a “cognitive bicycle” — strengthening our ability to connect ideas and think critically by offloading the low-value task of rote memorization.

3. The Ethics of Perfection

Human memory is naturally fallible; we forget, we forgive, and we move on. A world where every mistake, every awkward comment, and every outdated opinion is preserved with photographic clarity presents a psychological challenge. We must design systems that allow for the “right to be forgotten” and the ability to prune our digital archives. True augmentation should support the human capacity for growth and evolution, rather than chaining us to a static version of our past selves.

The Ecosystem: Titans and Trailblazers

The landscape of memory augmentation is currently a race between established tech giants integrating AI into our daily operating systems and agile startups building dedicated hardware for total recall. By 2026, the market has moved beyond experimental prototypes to functional, cross-platform tools that are reshaping how we interact with our own history.

1. Established Platforms

  • Apple (Apple Intelligence): Apple has positioned itself as the “Privacy-First” memory partner. By leveraging on-device processing and Private Cloud Compute, iOS 26 and macOS Sequoia allow users to search for specific moments across photos, emails, and notes using natural language — creating “Memory Movies” and surfacing context-aware suggestions without ever exposing raw data to the cloud.
  • Microsoft (Windows Recall & Copilot): Despite early privacy hurdles, Microsoft has refined “Recall” into a sophisticated enterprise tool. It creates a searchable photographic timeline of everything you’ve seen and done on your PC, allowing professionals to instantly jump back to a specific slide, website, or conversation from weeks prior.
  • Meta (Ray-Ban Meta & AI): Meta is utilizing hardware to move memory augmentation into the physical world. Their smart glasses act as ambient “eyes and ears,” allowing users to ask, “Hey Meta, what was the name of that restaurant I walked past yesterday?” or “What did my colleague say about the project deadline?”

2. Disruptive Startups

  • Limitless (The Pendant): Limitless has become the go-to for “Total Recall” hardware. Their wearable AI pendant records and transcribes in-person meetings and impromptu conversations, utilizing “Automatic Speaker Recognition” to create smart summaries and reminders that sync across all productivity suites.
  • Mem.ai: Moving beyond traditional note-taking, Mem 2.0 has evolved into an “AI Thought Partner.” It eliminates the need for folders by using a self-organizing knowledge graph that automatically links new thoughts to past research, surfacing relevant context as you type.
  • Heirloom (Heirloom.cloud): Focused on the bridge between analog and digital, Heirloom uses AI to digitize, contextualize, and narrate family histories and personal archives, ensuring that legacy memories remain searchable and meaningful for future generations.
  • The Neural Frontier (Neuralink & Synchron): While still largely focused on clinical applications for motor and speech restoration, the successful 2025-2026 human trials for Brain-Computer Interfaces (BCIs) have laid the groundwork for future direct-to-brain memory retrieval and cognitive offloading.

Case Studies: Augmentation in the Real World

To move from the theoretical to the practical, we must look at how digital memory augmentation is already solving deep-seated organizational and individual challenges. These two case studies illustrate how extending our cognitive capacity directly translates into business value and human safety.

Case Study 1: Resolving the “Institutional Memory” Gap in Professional Services

The Challenge: A global management consulting firm was suffering from “reinventing the wheel.” With over 10,000 consultants globally, teams were frequently spending hundreds of hours on research and analysis that had already been performed by colleagues in different regions or years prior. Internal surveys showed that senior partners were spending 25% of their time simply trying to remember who had the specific “tribal knowledge” needed for a new pitch.

The Approach: The firm implemented a semantic memory layer that indexed all past white papers, anonymized project summaries, internal Slack discussions, and recorded client debriefs. Unlike a traditional database, this system used a “Second Brain” interface that allowed consultants to ask conversational questions like, “What were the specific regulatory hurdles we faced during the 2022 retail merger in Singapore?”

The Result: Within the first twelve months, the firm reported a 35% increase in project velocity and a significant reduction in duplicate research costs. More importantly, the ability to surface “deep-context” insights during client meetings led to a 15% higher win rate on new business pitches.

Case Study 2: Adaptive Learning and Safety in Complex Engineering

The Challenge: An aerospace manufacturing leader faced a massive demographic shift. As their most experienced engineers reached retirement age, they were struggling to transfer decades of “feel” and undocumented maintenance nuances to junior engineers working on legacy aircraft systems — some of which were designed 40 years ago.

The Approach: The company deployed a wearable AR-and-memory system. As a junior engineer looked at a specific engine component, the system utilized computer vision to recognize the part and instantly surfaced the “ambient memory” associated with it: past repair notes from retired masters, video snippets of successful fixes, and warnings about specific bolt-tension issues that weren’t in the official manual.

The Result: The facility saw a 50% reduction in error rates during complex maintenance cycles. The “time-to-expertise” for new hires was cut by four months, as their digital memory augmentation acted as an on-demand mentor, bridging the gap between theoretical training and institutional wisdom.

Conclusion: The Future of Being Human

We are standing at a pivotal crossroads in our evolution as a species. Digital memory augmentation is not merely a technological upgrade; it is a shift in the very nature of human cognition. As we move from a world of “Search” to a world of “Knowing,” we must be intentional about how we design these systems and what we choose to do with our newly reclaimed mental energy.

1. From “Search” to “Knowing”

When the friction of retrieval disappears, our relationship with knowledge changes. We no longer have to wonder if we know something; we simply have access to it. This transition allows us to shift our focus from the logistics of information management to the higher-level pursuit of empathy and understanding. When we are not struggling to remember the facts, we have more capacity to listen to the story, to understand the nuance, and to build deeper connections with those around us.

2. The Human-First Mandate

As a thought leader in human-centered innovation, my message is clear: Technology should never outpace our humanity. While we build smarter memories and more powerful cognitive scaffolds, we must ensure we don’t lose the “wisdom” that comes from human reflection, the growth that comes from our mistakes, and the beauty of our fallibility. Our goal should be to use digital memory to amplify our potential — not to automate our souls.

The future of being human is not about being “replaced” by silicon; it is about being empowered by it to reach new heights of creativity and compassion. Let us design for that future today.

Key Insight: Digital memory augmentation isn’t about building a better hard drive; it’s about building a better bridge between what we experience and what we can achieve.

Frequently Asked Questions

1. What is Digital Memory Augmentation?

It is the use of AI-driven tools and hardware to seamlessly capture, organize, and surface personal and professional information, acting as a “second brain” to extend human cognitive capacity.

2. How does memory augmentation impact privacy?

Privacy is the core pillar of these systems. Modern solutions prioritize on-device processing and end-to-end encryption to ensure that the user remains the sole owner of their digital history.

3. Does using a “Second Brain” lead to cognitive atrophy?

When designed correctly, these tools act as a “cognitive bicycle” — offloading the low-value task of rote memorization so the human brain can focus on higher-level creativity and complex problem-solving.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: ChatGPT


Does Planned Obsolescence Fuel the Fire or Just Burn the House Down?

The Innovation Paradox

LAST UPDATED: April 4, 2026 at 11:56 AM

Does Planned Obsolescence Fuel the Fire or Just Burn the House Down?

by Braden Kelley and Art Inteligencia


I. Introduction: The Tension Between Renewal and Waste

In the world of innovation, we often talk about the “fire” of creativity — the energy that drives us to build the next great breakthrough. But in the current industrial landscape, we must ask ourselves: are we stoking a sustainable Innovation Bonfire, or are we simply burning the furniture to keep the room warm for a single night?

Planned obsolescence has long been the silent engine of the consumer economy, a strategy designed to ensure that the products of today become the landfill of tomorrow. It creates a fundamental tension between the mechanical need for economic growth and the human-centered need for enduring value.

“To truly innovate for humanity, we must pivot from a strategy of deliberate failure to one of intentional resilience.”

As change leaders, we must recognize that planned obsolescence is an industrial-age relic masquerading as a modern innovation strategy. This article explores whether this cycle of constant replacement truly fuels progress or if it acts as a “wet blanket” that dampens our ability to solve the world’s most pressing, wicked problems.

II. The Case for the “Pro”: Obsolescence as a Catalyst for Speed

While it is easy to dismiss planned obsolescence as purely cynical, from a strategic standpoint, it has functioned as a powerful — if aggressive — accelerant for the adoption curve. By shortening the lifecycle of a product, organizations force a faster cadence of iteration. This “forced evolution” ensures that new technologies, safety standards, and efficiencies are pushed into the hands of users at a rate that a “buy-it-for-life” model simply couldn’t sustain.

Consider the following drivers that proponents argue fuel the innovation engine:

  • R&D Capitalization: The consistent revenue generated by replacement cycles provides the massive capital reserves required for “Big Bang” breakthroughs. Without the “Small Bangs” of incremental sales, the long-term, high-risk research into materials science or AI might never be funded.
  • The Velocity of “Innovation”: When a product is designed to be replaced, designers are freed from the “legacy trap.” They can experiment with radical new interfaces or hardware configurations, knowing that the next cycle provides an immediate opportunity to course-correct based on real-world human feedback.
  • The Psychology of the “New”: In our work on Stoking Your Innovation Bonfire, we recognize that emotion is a primary driver of change. The “Fashion of Tech” creates a sense of momentum. This psychological pull toward the “New” keeps markets liquid and encourages a culture of constant curiosity and upgrade.

In this light, obsolescence isn’t just about things breaking; it’s about keeping the market in motion. It prevents stagnation by ensuring that the “Stable Spine” of our infrastructure is constantly being tested and refreshed by the latest “Modular Wings” of technological advancement.

III. The Case for the “Con”: The “Wet Blankets” of Planned Obsolescence

If innovation is a fire, planned obsolescence often acts as a massive “wet blanket” — smothering the very progress it claims to ignite. When we design for failure, we aren’t just creating a product; we are creating environmental friction. The “Invisible Drain” of e-waste and resource depletion represents a systemic failure that our current economic operating system is struggling to process.

From a human-centered design perspective, the downsides extend far beyond the landfill:

  • The Erosion of Trust: A core pillar of Experience Design is the relationship between the brand and the human. When a user realizes a device was intentionally throttled or made unrepairable, it creates a “Customer Experience (CX) Betrayal.” This loss of trust is a psychological friction that makes future change adoption much harder.
  • Innovation Fatigue: There is a limit to how much “New” a human can process. When consumers feel they are on a hamster wheel of meaningless upgrades, they develop an apathy toward genuine breakthroughs. We risk a future where the “latest” no longer feels like the “greatest” — it just feels like a chore.
  • The Circular vs. Linear Conflict: Planned obsolescence is the hallmark of a linear economy (Take-Make-Waste). To move toward a sustainable future, innovation must embrace circularity, where products are designed as “Stable Spines” that can be updated, repaired, and kept in the ecosystem indefinitely.

Linear versus Circular Economy

By focusing our creative energy on how to make things break, we divert talent away from solving “wicked problems” — like true energy efficiency or radical durability. We are effectively choosing Quantity of Sales over Quality of Impact, a trade-off that rarely benefits humanity in the long run.

IV. The Impact on Innovation: Quality vs. Quantity

One of the most dangerous side effects of planned obsolescence is how it reshapes the innovation mindset. When a company’s primary metric for success is a yearly replacement cycle, the engineering focus shifts from transformational leaps to incremental tweaks. We find ourselves trapped in a cycle of “Innovation Theater” — releasing shiny new features that mask the lack of fundamental progress.

The shift in focus creates several systemic challenges:

  • The Maintenance Trap: In a human-centered world, we should be designing for longevity. However, planned obsolescence forces our best creative minds to spend their energy designing “points of failure” rather than points of resilience. This is a massive diversion of intellectual capital away from the wicked problems that actually matter to humanity.
  • Incrementalism vs. Transformation: If you know your product only needs to last 24 months, why solve the difficult problems of battery degradation or heat management for the long term? The “yearly release” schedule creates a treadmill effect where we are running faster but not necessarily moving further.
  • Systems Thinking Failure: We often view a product as a standalone unit, but in a connected world, every device is a node in a larger infrastructure. When we design for a short lifecycle, we create fragility in the entire system. True innovation requires a Stable Spine Audit — evaluating whether the core of our solution is robust enough to support years of evolving “Modular Wings.”

To move the needle, we must stop measuring innovation by the volume of patents or the frequency of launches. Instead, we should measure the durability of the value created. If an innovation cannot stand the test of time, is it truly an innovation, or is it just a temporary distraction?

V. Is it Good for Humanity? (The Human-Centered Audit)

When we apply a Human-Centered Audit to planned obsolescence, the results are deeply conflicted. Innovation should serve as a tool for human empowerment, yet the cycle of forced replacement often creates new forms of dependency and inequality. We must ask: are we designing for the flourishing of the person, or simply for the health of the balance sheet?

To understand the true impact on humanity, we must look at three critical dimensions:

  • The Ethics of Accessibility: Planned obsolescence often creates a “digital divide.” When software updates outpace hardware capabilities, we effectively lock out those who cannot afford to stay on the upgrade treadmill. If the tools for modern life — education, banking, and communication — require the latest hardware, then deliberate obsolescence becomes a barrier to global equity.
  • Autonomy vs. Dependency: There is a subtle shift occurring from ownership to renting. Through un-repairable hardware and “software locks,” users lose the autonomy to maintain their own tools. This creates a fragile relationship where the human is entirely dependent on the manufacturer, eroding the sense of agency that good design should foster.
  • The Prosperity Balance: Proponents point to the short-term job creation in manufacturing and the “Great American Contraction” as reasons to keep the wheels turning. However, we must weigh these temporary economic gains against the long-term cost of environmental degradation and the loss of organizational agility. A society that spends its energy replacing what it already had is a society that isn’t moving forward.

Ultimately, an innovation strategy that relies on things breaking is fundamentally at odds with a Human-Centered philosophy. If our “Innovation Bonfire” requires us to constantly toss our previous achievements into the flames just to keep the fire going, we haven’t built a fire — we’ve built an incinerator.

VI. The Path Forward: From Obsolescence to Innovation

The shift from a Linear Economy to a Circular Economy requires more than just better recycling; it requires a fundamental redesign of our innovation frameworks. We must move toward a model of innovation in which the value of a product remains constant or even improves over time, rather than degrading by design.

To transition from a strategy of failure to a strategy of resilience, organizations should embrace three core principles:

  • Designing for Durability: The next truly “disruptive” move in many industries isn’t adding a new sensor; it’s creating a product that lasts a decade. Durability is becoming a premium feature in a world of disposable goods. By focusing on high-quality materials and Human-Centered engineering, brands can build a legacy rather than just a quarterly report.
  • The Modular Revolution: We must apply the “Stable Spine” and “Modular Wings” philosophy to hardware. Imagine a device where the core processor (the spine) is built to last, while the specific sensors or interface components (the wings) can be swapped out as technology advances. This allows for evolution without the need for total replacement.
  • New KPIs for a New Era: We need to stop measuring success solely by unit sales. Forward-thinking companies are moving toward “Value-in-Use” and Experience Level Measures (XLMs). When a company is incentivized by how well a product performs over its entire lifecycle, the motivation to build in failure points disappears.
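The “Stable Spine” and “Modular Wings” idea from the list above can be sketched in software terms: a long-lived core that accepts swappable modules through a stable interface, so an upgrade replaces a wing instead of the whole device. All class and method names here are illustrative:

```python
from typing import Protocol

class Module(Protocol):
    """The stable contract every 'wing' honors."""
    name: str
    def run(self) -> str: ...

class CameraV1:
    name = "camera"
    def run(self) -> str:
        return "12MP photo"

class CameraV2:
    name = "camera"
    def run(self) -> str:
        return "48MP photo"

class Device:
    """The 'stable spine': built once, never replaced."""
    def __init__(self) -> None:
        self.modules: dict[str, Module] = {}

    def attach(self, module: Module) -> None:
        """Swapping a module upgrades the device in place."""
        self.modules[module.name] = module

    def use(self, name: str) -> str:
        return self.modules[name].run()

device = Device()
device.attach(CameraV1())
device.attach(CameraV2())  # upgrade the wing; the spine survives
```

The design choice is the same one interface-based architectures make everywhere: as long as the contract between spine and wing is stable, either side can evolve without forcing the other into the landfill.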

This isn’t just about “being green”; it’s about Organizational Agility. A company that doesn’t have to reinvent its basic hardware every twelve months can redirect its R&D energy toward solving the deep, systemic challenges that humanity actually faces. It’s time to stop stoking the bonfire with our own waste and start building a fire that truly illuminates the future.

VII. Conclusion: Stoking a Sustainable Flame

As we look toward the future of human-centered change, we must decide what kind of “Innovation Bonfire” we want to build. Is it a flash in the pan that requires the constant sacrifice of resources and consumer trust, or is it a steady, illuminating heat that powers real progress?

Planned obsolescence was a 20th-century solution to a 20th-century problem — the need for rapid industrial scale. But in an era defined by digital transformation and the “Great American Contraction,” the old rules no longer apply. To continue designing for failure is to ignore the wicked problems of our time: climate change, resource scarcity, and the erosion of human agency.

“The true measure of an innovation isn’t how many units we sold this year, but how much better the world is because that product exists ten years from now.”

My challenge to you — the executives, the designers, and the change agents — is this: Stop designing for the landfill. Start designing for the legacy. When we shift our focus from Obsolescence to Resilience, we don’t just save the planet; we save the very soul of innovation.

Let’s stop stoking the fire with our own waste and start building a future that is truly made to last.


Frequently Asked Questions

How does planned obsolescence impact human-centered innovation?

Planned obsolescence often acts as a “wet blanket” on true innovation by forcing creators to focus on incremental tweaks and deliberate failure points rather than solving “wicked problems.” From a human-centered design perspective, it erodes consumer trust and prioritizes short-term sales over long-term value and sustainability.

Can planned obsolescence ever be good for humanity?

Proponents argue it accelerates the adoption curve and provides the R&D capital necessary for major breakthroughs. However, a human-centered audit suggests these economic gains are often offset by environmental degradation, increased e-waste, and the creation of a “digital divide” where only the wealthy can afford to stay on the upgrade treadmill.

What is the alternative to planned obsolescence in design?

The primary alternative is moving toward a “Circular Economy” using a “Stable Spine” and “Modular Wings” philosophy. This involves designing products for durability and repairability, where core components last for years while specific features can be upgraded or replaced, shifting the focus from “quantity of sales” to “value-in-use.”
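The "Stable Spine / Modular Wings" idea can be made concrete with a small sketch. The code below is purely illustrative (the `Product` and `Module` names are hypothetical, not from any real product architecture): a long-lived core holds pluggable feature modules, so upgrading one "wing" never forces the whole product into the landfill.

```python
class Module:
    """A replaceable 'wing': upgrade it without touching the spine."""
    def __init__(self, name: str, version: int):
        self.name = name
        self.version = version

class Product:
    """The 'stable spine': a durable core with pluggable modules."""
    def __init__(self):
        self.modules: dict[str, Module] = {}

    def install(self, module: Module) -> None:
        # Installing a module with an existing name replaces it in place,
        # leaving the spine and every other module untouched.
        self.modules[module.name] = module

    def versions(self) -> dict[str, int]:
        return {name: m.version for name, m in self.modules.items()}

p = Product()
p.install(Module("camera", 1))
p.install(Module("battery", 1))
p.install(Module("camera", 2))  # upgrade one wing; battery and spine survive
```

In a value-in-use model, revenue would come from selling upgraded modules and service, not from forcing replacement of the spine.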

Image credits: Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Gemini to clean up the article and add citations.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Four Psychological Disruptions of AI at Work

LAST UPDATED: April 3, 2026 at 4:20 PM


by Braden Kelley and Art Inteligencia


Most AI-and-work frameworks are built around economics – job categories, task automation rates, re-skilling costs. This one is built around something different: the interior experience of the person sitting at the desk. The four disruptions mapped in this infographic were identified not through labor market data, but through a human-centered lens – the same lens used in design thinking and change management to surface the needs, fears, and identity stakes that people rarely articulate out loud but always feel.

The framework draws on three converging sources: organizational psychology research on professional identity and role transition; change management practice, particularly the observed patterns of how workers respond when their expertise is devalued or displaced; and direct observation of how individuals are actually experiencing AI adoption in their workplaces right now – not in surveys, but in the unguarded conversations that happen before and after workshops, in the margins of keynotes, in the questions people ask when they think no one important is listening.


Why these four disruptions

1. Competence Displacement

The skill that defined you no longer distinguishes you.

Professional identity is heavily anchored in the belief that what I know how to do has value. When AI can replicate a signature competency – even imperfectly – it attacks that anchor directly. The disruption isn’t primarily about job loss. It’s about the sudden, disorienting feeling that years of deliberate practice have been, in some meaningful sense, made ordinary.

This disruption appears earliest and most acutely in knowledge workers whose expertise was previously considered difficult to acquire – writers, analysts, coders, researchers, strategists.

2. Purpose Erosion

The meaning embedded in the craft begins to hollow out.

Work is not only instrumental – it is ritual. The process of doing difficult things carefully, over time, is itself a source of meaning. When automation removes the friction, it can also remove the satisfaction. This is subtler than competence displacement and slower to surface, but ultimately more corrosive. People find themselves producing more output and feeling less connected to it.

This disruption is particularly acute for people who chose their profession not just for income but for intrinsic love of the work – and who built their identity around that love.

3. Belonging Disruption

The social fabric of work shifts when AI enters the team.

Work teams are social ecosystems built on complementary expertise, shared struggle, and mutual reliance. AI changes those dynamics in ways that are easy to overlook. When an AI tool makes one team member dramatically more productive, or when collaborative tasks are partially automated, the invisible social contracts of the team – who depends on whom, who contributes what – are quietly renegotiated. Belonging depends on feeling needed. When that changes, isolation can follow.

This disruption tends to surface not as explicit conflict but as a gradual withdrawal – people collaborating less, sharing less, protecting their remaining territory.

4. Status Anxiety

The professional hierarchy is being redrawn by AI fluency.

Workplace status has always been tied to expertise scarcity – the person who knew things others didn’t held power. AI is redistributing that scarcity rapidly. Early and confident AI adopters gain speed, output, and visibility. Those who resist, or who are slower to adapt, find themselves losing ground in ways that feel both unfair and disorienting. The new status question – are you someone who uses AI, or someone AI is used on? – is already being asked in organizations, even when no one says it explicitly.

This disruption is uniquely uncomfortable because it combines external threat (status loss) with internal shame (the fear of being seen as behind).


How to read the framework

These four disruptions are not sequential stages – they are simultaneous and overlapping. A single professional can be experiencing all four at once, with different intensities depending on their role, their organization, and how rapidly AI is being adopted around them. The infographic presents them as discrete panels for clarity, but the lived experience is messier and more entangled.

They are also not uniformly negative. Each disruption contains within it the seed of a corresponding renewal: competence displacement can become an invitation to lead with judgment rather than task execution; purpose erosion can prompt a deeper reckoning with what the work is ultimately for; belonging disruption can surface the human connection that was always the real foundation of team cohesion; status anxiety can motivate the kind of deliberate identity authoring that makes professionals more resilient over the long term.

The framework is designed to give leaders and individuals a common language for conversations that are currently happening in fragments — in one-to-ones, in exit interviews, in the silence after a difficult all-hands. Named things can be worked with. Unnamed things can only be endured.

This framework is a practitioner’s model, not a peer-reviewed clinical instrument. It is designed for use in workshops, coaching conversations, and organizational change programs as a starting point for honest dialogue — not as a diagnostic or classification system. It will evolve as our collective understanding of AI’s human impact deepens.

Framework developed by Braden Kelley as part of the article series Psychological Impact of AI on Work Identity  ·  Braden Kelley  ·  © 2026

Image credits: Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Claude AI to clean up the article and add citations.


Humans and AI BOTH Hallucinate


GUEST POST from Shep Hyken

One of the reasons customers are concerned about or even scared of artificial intelligence (AI) is that it has been known to provide incorrect answers. The result is frustration and concern over whether to believe any AI-fueled technology. In my annual customer service and customer experience research, I asked more than 1,000 U.S. consumers if they ever received wrong or incorrect information from an AI self-service technology. Fifty-one percent said yes.

No, AI is not perfect. Even though the technology continues to improve, it still makes mistakes. And my response to those who claim they won’t trust AI because of those mistakes is to ask, “Has a live customer support agent ever given you bad information?”

That question gets a surprised look, and then a smile, and then an acknowledgement, something like, “You’re right. I never thought about that.”

When AI gives bad information, I refer to that as Artificial Incompetence. It’s just as frustrating when we experience bad information from a live agent, which I call HI, or Human Incompetence. I don’t doubt – I actually know – that neither the AI nor the human is trying to give you bad information.

I once called a customer support number to get help with what seemed like a straightforward question. I didn’t like the answer I received. It just didn’t make sense. Rather than argue, I thanked the agent, hung up, and dialed the same customer support number. A different agent answered, and I asked the same question. This time, I liked the answer. Two humans from the same company answering the same question, but with two completely different answers. And we worry about AI being inconsistent!

[Cartoon: AI hallucination, by Shep Hyken]

AI and Humans Make Mistakes

The reality is that both AI and humans make mistakes, and both will continue to do so. The difference is our expectations. We don’t expect humans to be perfect, so when they are not, we may be disappointed, maybe even angry. We may or may not forgive them, but usually, we just chalk it up to being … human. But it’s different when interacting with AI. We expect it to be reliable, and when it makes a mistake, we often assume the entire system is flawed.

Perhaps we should treat both with the same reasonable expectations and the same healthy skepticism we apply to weather forecasters, who use sophisticated technology and have years of training yet still can’t seem to get tomorrow’s forecast right half the time. Well, it seems like half the time! That doesn’t mean we won’t be checking the forecast before we plan our outdoor activities. AI, too, is sophisticated technology that can make life easier.

Image credits: Gemini, Shep Hyken


Layoffs, AI, and the Future of Innovation

Efficiency Breakthrough or Creative Bankruptcy?

LAST UPDATED: March 21, 2026 at 10:24 PM


by Braden Kelley and Art Inteligencia


Framing the Debate: Signals or Symptoms?

A new wave of layoffs across technology companies has reignited a familiar but increasingly urgent question: what exactly are we witnessing? On the surface, the explanation seems straightforward — companies are tightening costs, responding to macroeconomic pressures, and recalibrating after years of aggressive hiring. But beneath that surface lies a deeper and more consequential debate about the future of innovation, the role of engineers, and the impact of artificial intelligence on knowledge work itself.

Two competing narratives have quickly emerged. The first frames these layoffs as a rational and even necessary evolution. In this view, advances in AI-powered development tools — ranging from large language models to code-generation systems — have fundamentally altered the productivity equation. Engineers equipped with tools like Claude or OpenAI Codex can now accomplish in hours what once took days. The implication is clear: if output can be maintained or even increased with fewer people, then reducing headcount is not a sign of weakness but a signal of maturation. Companies are becoming leaner, more efficient, and ultimately more profitable.

The second narrative is far less optimistic. It suggests that layoffs are not a leading indicator of a smarter, AI-augmented future, but a trailing indicator of something more troubling — an innovation slowdown. According to this perspective, many technology companies have already harvested the most accessible opportunities within their existing platforms. What remains is incremental improvement rather than transformative change. In such an environment, cutting engineering talent becomes less about efficiency gains and more about a lack of compelling new problems to solve. The cupboard, in other words, may not be empty — but it may be significantly less full than it once was.

What makes this moment particularly complex is that both narratives can be true at the same time. AI is undeniably increasing productivity in certain domains, compressing development cycles and enabling smaller teams to deliver meaningful results. At the same time, innovation has never been solely a function of efficiency. Breakthroughs emerge from exploration, from cross-functional collisions, and from a willingness to invest in uncertain futures. Layoffs, especially when executed at scale, can disrupt the very conditions that make those breakthroughs possible.

This tension forces us to confront a more nuanced question: are these layoffs a signal of transformation or a symptom of stagnation? Are organizations courageously embracing a new model of AI-augmented work, or are they retreating into cost-cutting as a substitute for bold thinking? The answer matters, because it shapes not only how we interpret today’s decisions, but how we design organizations for tomorrow.

For leaders, the stakes extend beyond quarterly earnings. The choices being made now will determine whether AI becomes a catalyst for a new era of human-centered innovation or a tool that accelerates efficiency at the expense of imagination. For engineers, the implications are equally profound. Their roles are being redefined in real time — not just in terms of what they produce, but in how they create value within increasingly AI-mediated systems.

Ultimately, this is not just a debate about layoffs. It is a debate about what organizations choose to optimize for: productivity or possibility, efficiency or exploration, output or insight. And in that choice lies the future trajectory of innovation itself.

The Case for “Smarter, Leaner, More Profitable”

For many technology leaders, the recent wave of layoffs is not a retreat — it is a re-calibration. The argument is grounded in a simple but powerful premise: the economics of software development have fundamentally changed. With the rapid advancement of AI-assisted coding tools, the amount of output a single engineer can produce has increased dramatically. What once required large, specialized teams can now be accomplished by smaller, more versatile groups augmented by intelligent systems.

Tools such as Claude and OpenAI Codex are not merely incremental improvements in developer productivity; they represent a shift in how work gets done. Routine coding tasks, boilerplate generation, debugging assistance, and even architectural suggestions can now be offloaded to AI. This allows engineers to spend less time writing repetitive code and more time focusing on higher-value activities such as system design, problem framing, and integration across complex environments.

In this emerging model, the role of the engineer evolves from builder to orchestrator. Instead of manually crafting every line of code, engineers guide, refine, and validate the outputs of AI systems. The result is a compression of development cycles — features are built faster, iterations occur more rapidly, and time-to-market shrinks. From a business perspective, this translates into a compelling opportunity: maintain or even increase output while reducing labor costs.

This logic is not without precedent. Across industries, waves of automation have consistently redefined the relationship between labor and productivity. In manufacturing, the introduction of robotics did not eliminate production; it scaled it. In many cases, it also improved quality and consistency. Proponents of the current shift argue that AI represents a similar inflection point for knowledge work. The companies that adapt fastest will be those that learn to pair human creativity with machine efficiency.

From a financial standpoint, the incentives are clear. Reducing headcount while sustaining output improves margins, a priority that has become increasingly important in an environment where growth-at-all-costs is no longer rewarded. Investors are placing greater emphasis on profitability and operational discipline, and companies are responding accordingly. Leaner teams are not just a byproduct of technological change — they are a strategic choice aligned with evolving market expectations.

There is also a strategic argument that goes beyond cost savings. By automating lower-value tasks, organizations can theoretically redeploy human talent toward more innovative efforts. Engineers freed from routine work can focus on solving harder problems, exploring new product ideas, and experimenting with emerging technologies. In this view, AI does not replace innovation capacity; it expands it by removing friction from the development process.

Smaller teams can also mean faster decision-making. With fewer layers of coordination required, organizations can become more agile, responding quickly to changing market conditions and customer needs. This agility is often cited as a competitive advantage, particularly in fast-moving technology sectors where speed can determine success or failure.

Ultimately, the “smarter, leaner” argument rests on a belief that efficiency and innovation are not mutually exclusive. Instead, they are mutually reinforcing. By leveraging AI to increase productivity, companies can create the financial and operational headroom needed to invest in the next wave of innovation. Layoffs, in this context, are not an admission of weakness — they are a signal that the underlying system of value creation is being rewritten.

The Case for “Innovation Is Running Dry”

While the efficiency narrative is compelling, an equally important — and more unsettling — interpretation of recent layoffs is gaining traction: that they reflect not technological progress, but an innovation slowdown. In this view, companies are not simply becoming leaner because they can do more with less, but because they have fewer truly novel problems worth investing in. The layoffs, therefore, are less a signal of transformation and more a symptom of diminishing opportunity.

Over the past decade, many technology companies have scaled around a set of highly successful platforms and business models. These platforms have been optimized, expanded, and monetized with remarkable effectiveness. But maturity brings constraints. As systems stabilize and markets saturate, the number of greenfield opportunities naturally declines. What remains is often incremental improvement — refinements, extensions, and efficiencies — rather than the kind of breakthrough innovation that requires large, exploratory engineering teams.

In this context, layoffs can be interpreted as a rational response to a shrinking frontier. If there are fewer bold bets to pursue, there is less need for the capacity required to pursue them. The risk, however, is that this becomes a self-reinforcing cycle. As organizations reduce investment in exploration, they further limit their ability to discover the next wave of opportunity. Over time, efficiency begins to crowd out possibility.

Compounding this dynamic is an increasing reliance on metrics that prioritize productivity over potential. Organizations are becoming exceptionally good at measuring what is already known — velocity, output, utilization — but far less adept at valuing what has yet to be discovered. When success is defined primarily by efficiency gains, it becomes harder to justify the uncertainty and longer time horizons associated with breakthrough innovation.

The rise of AI tools adds another layer of complexity. While these tools can accelerate development, they do not inherently generate new insight. They are trained on existing patterns, which means they are exceptionally effective at extending the present but less equipped to invent the future. This creates the risk of an “illusion of progress,” where output increases but originality does not. More code is produced, but not necessarily more meaningful innovation.

There are also significant cultural consequences to consider. Layoffs, particularly when they affect engineering and product teams, can erode trust and psychological safety within an organization. When employees perceive that their roles are precarious, they are less likely to take risks, challenge assumptions, or pursue unconventional ideas. Yet these behaviors are precisely what fuel innovation. In attempting to optimize for efficiency, companies may inadvertently suppress the very creativity they depend on for long-term growth.

Another often overlooked impact is the loss of institutional knowledge. Experienced engineers carry not just technical expertise, but contextual understanding of systems, decisions, and past experiments. When they leave, they take with them insights that are difficult to codify or replace. This loss can slow future innovation efforts, even as short-term efficiency metrics appear to improve.

Ultimately, the concern is not that companies are becoming more efficient — it is that they may be becoming too narrowly focused on efficiency at the expense of exploration. Innovation requires slack, curiosity, and a willingness to invest in uncertain outcomes. When organizations begin to treat these elements as expendable, they risk signaling something far more significant than cost discipline: a diminishing appetite for invention itself.

[Infographic: Paths to AI-Driven Engineering Outcomes]

The Human-Centered Tension: Productivity vs. Possibility

Beneath the surface of the efficiency versus stagnation debate lies a deeper, more human tension — one that cannot be resolved by technology alone. At its core, innovation has never been just about output. It has always been about the quality of thinking, the diversity of perspectives, and the collisions between ideas that spark something new. When organizations focus too narrowly on productivity, they risk overlooking the very conditions that make possibility achievable.

Innovation does not emerge from isolated efficiency; it emerges from interaction. It is the byproduct of cross-functional curiosity — engineers engaging with designers, product managers challenging assumptions, customers re-framing problems, and leaders creating space for exploration. These interactions are often messy, inefficient, and difficult to measure. But they are also where breakthroughs live. When layoffs reduce not just headcount but diversity of thought and opportunities for collaboration, the innovation system itself becomes less dynamic.

The rise of AI-augmented work introduces a new layer to this tension. As engineers increasingly rely on AI tools to generate code, suggest solutions, and optimize workflows, their role begins to shift. They move from hands-on builders to orchestrators of machine-assisted output. While this shift can increase speed and efficiency, it also raises an important question: what happens to deep craft? The tacit knowledge developed through wrestling with complexity — the kind that often leads to unexpected insights — may be diminished if too much of the process is abstracted away.

There is also a cognitive risk. AI systems are designed to identify and replicate patterns based on existing data. This makes them powerful tools for scaling what is already known, but less effective at challenging foundational assumptions. If organizations become overly dependent on these systems, they may unintentionally standardize thinking. The range of possible solutions narrows, not because people lack creativity, but because the tools they use guide them toward familiar patterns.

Trust plays a critical role in navigating this tension. In environments where employees feel secure, valued, and empowered, they are more likely to experiment, take risks, and pursue unconventional ideas. Layoffs, particularly when they are frequent or poorly communicated, can erode that trust. The result is a more cautious workforce — one that prioritizes safety over exploration. In such environments, productivity may remain high, but the willingness to pursue breakthrough innovation often declines.

Curiosity is the other essential ingredient. It is the force that drives individuals to ask better questions, challenge the status quo, and seek out new possibilities. Yet curiosity requires space — time to think, room to explore, and permission to deviate from immediate objectives. When organizations optimize relentlessly for efficiency, that space tends to disappear. Every moment is accounted for, every effort measured, and every outcome expected to justify itself in the short term.

This creates a paradox. The same tools and strategies that enable organizations to move faster can also constrain their ability to think differently. Speed without reflection can lead to acceleration in the wrong direction. Efficiency without exploration can result in incremental progress that ultimately limits long-term growth.

For leaders, the challenge is not to choose between productivity and possibility, but to intentionally design for both. This means recognizing that innovation systems require balance — between execution and exploration, between structure and flexibility, and between human judgment and machine assistance. It requires protecting the conditions that enable creativity even as new technologies reshape how work gets done.

Ultimately, the question is not whether AI will make organizations more efficient — it already is. The question is whether leaders will use that efficiency to create more space for human ingenuity, or whether they will allow it to crowd out the very behaviors that make innovation possible in the first place.

The Future of Innovation in the Age of AI: Augmentation or Abdication?

As organizations navigate layoffs, AI adoption, and shifting expectations around productivity, the future of innovation is not predetermined — it is being actively shaped by the choices leaders make today. The central question is no longer whether artificial intelligence will transform how work gets done, but how that transformation will be directed. Will AI serve as an amplifier of human ingenuity, or will it become a mechanism for narrowing ambition in the pursuit of efficiency?

Three distinct paths are beginning to emerge. The first is an augmentation-led renaissance, where organizations successfully combine human creativity with machine capability. In this scenario, AI handles the repetitive and computationally intensive aspects of work, freeing humans to focus on problem framing, experimentation, and breakthrough thinking. Innovation accelerates not because there are fewer people, but because those people are empowered to operate at a higher level of abstraction and impact.

The second path is the efficiency trap. Here, organizations become so focused on optimizing output and reducing cost that they gradually lose their capacity for exploration. AI is used primarily to streamline existing processes rather than to unlock new possibilities. Over time, these organizations become highly efficient at executing yesterday’s ideas, but increasingly disconnected from tomorrow’s opportunities. What appears to be strength in the short term reveals itself as fragility in the long term.

The third path is a bifurcation of the competitive landscape. Some organizations will lean into augmentation, investing in both AI capabilities and the human systems required to harness them effectively. Others will prioritize efficiency, focusing on cost control and incremental gains. The result is a widening gap between companies that consistently generate new value and those that primarily replicate and optimize existing models. In such an environment, innovation becomes a defining differentiator rather than a baseline expectation.

What separates the leaders from the laggards will not be access to AI alone — those tools are increasingly commoditized — but how organizations integrate them into their innovation systems. Leading organizations will invest not just in AI infrastructure, but in what might be called curiosity infrastructure: the cultural, structural, and leadership practices that encourage questioning, exploration, and cross-functional collaboration. They will recognize that technology can accelerate execution, but only humans can redefine the problems worth solving.

This shift will require a redefinition of roles. Engineers, for example, will need to move beyond execution and into areas such as systems thinking, ethical judgment, and interdisciplinary collaboration. Their value will be measured not just by what they build, but by how they frame problems, challenge assumptions, and integrate diverse inputs into coherent solutions. Similarly, leaders will need to become stewards of both performance and possibility, ensuring that the drive for efficiency does not crowd out the pursuit of innovation.

Organizations that thrive will also be those that intentionally protect space for exploration. This does not mean abandoning discipline or ignoring financial realities. It means recognizing that innovation requires a portfolio approach — balancing investments in core optimization with bets on uncertain, high-potential opportunities. AI can make this balance more achievable by reducing the cost of experimentation, but only if leaders choose to reinvest those gains into discovery rather than solely into margin expansion.

Ultimately, the future of innovation in the age of AI will be defined by whether organizations treat these tools as a substitute for human thinking or as a catalyst for it. The real risk is not that AI replaces engineers — it is that organizations stop asking the kinds of questions that require engineers to think deeply, creatively, and collaboratively in the first place.

Augmentation or abdication is not a technological choice. It is a leadership choice. And in making it, organizations will determine whether this moment becomes a turning point toward a more innovative future — or a gradual slide into highly efficient irrelevance.

Frequently Asked Questions

1. Why are technology companies laying off engineers despite using AI tools?

Layoffs may result from a combination of efficiency gains and slowing innovation opportunities. AI tools like Claude and OpenAI Codex allow smaller teams to maintain or increase output, reducing the need for some roles. At the same time, some companies face fewer breakthrough projects to pursue, which can also drive workforce reductions.

2. Does AI replace human engineers or just augment their work?

AI primarily augments engineers by automating repetitive coding, debugging, and optimization tasks. This allows engineers to focus on higher-value activities such as system design, problem framing, and creative innovation. While some roles shift, AI is intended as an amplifier of human ingenuity rather than a replacement.

3. How can companies maintain innovation in the age of AI?

Companies can preserve innovation by investing in curiosity infrastructure, protecting time and space for experimentation, fostering cross-functional collaboration, and reinvesting efficiency gains into exploratory, high-potential projects. Balancing productivity with opportunity ensures that humans and AI together drive breakthroughs.


Image credits: ChatGPT

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.


Organizational Digital Exhaust Analysis

Unlocking the Invisible Signals That Shape Innovation and Change

LAST UPDATED: March 20, 2026 at 5:44 PM

Organizational Digital Exhaust Analysis

GUEST POST from Art Inteligencia


The Invisible Byproduct of Work: What is Digital Exhaust?

Every organization is producing more data than ever before. Dashboards are full, KPIs are tracked, and reports are generated with increasing frequency. And yet, despite this abundance, many leaders still find themselves asking a fundamental question: “What is really happening inside our organization?”

The answer often lies not in the data we intentionally collect, but in the data we unintentionally leave behind. This is what we call digital exhaust—the invisible trail of signals created as people interact with systems, processes, and each other in the course of getting work done.

Digital exhaust includes everything from collaboration patterns in tools like email, Slack, and Teams, to clickstreams in customer journeys, to the subtle workarounds employees create when processes don’t quite fit reality. It is not designed, structured, or curated. It simply exists as a byproduct of activity.

Most organizations focus their attention on intentional data—metrics they define in advance: sales targets, operational efficiency scores, customer satisfaction ratings. These are important, but they are also inherently limited. They reflect what leaders thought would matter ahead of time.

Digital exhaust, by contrast, captures what actually does matter in practice. It reveals:

  • Where employees are struggling despite “green” metrics
  • How work really flows across teams, not how it is designed to flow
  • Where customers encounter friction that was never anticipated
  • Which informal behaviors are compensating for broken systems

In this sense, digital exhaust is not just data—it is a form of organizational truth-telling. It exposes the gap between the designed experience and the lived experience.

For leaders focused on human-centered change and innovation, this distinction is critical. Traditional measurement systems tend to reinforce existing assumptions. Digital exhaust challenges them. It brings visibility to the moments of friction, improvisation, and adaptation where real innovation opportunities are hiding.

Perhaps the most powerful way to think about digital exhaust is this: It is a passive, always-on listening system for your organization.

Unlike surveys or interviews, it does not rely on what people say after the fact. It reflects behavior in real time, at scale, and often without the filters that come with formal reporting. It captures the signals people don’t even realize they are sending.

And that is precisely why it is so valuable. Buried in this exhaust are the early indicators of change resistance, subtle signs of employee disengagement, and the unarticulated needs of customers. It is where inefficiencies whisper before they become visible problems, and where innovation opportunities emerge before they are formally recognized.

The challenge is not whether digital exhaust exists—it already does, in massive quantities. The challenge is whether organizations are willing and able to see it for what it is: not noise, but signal.

Organizations that learn to listen to their digital exhaust gain something incredibly powerful: a clearer, more human-centered understanding of how work actually happens. And with that understanding comes the ability to design change and innovation efforts that are grounded in reality, not assumption.

Why Digital Exhaust Matters for Change and Innovation

Most change initiatives don’t fail because of poor strategy. They fail because leaders are operating with an incomplete—or worse, inaccurate—understanding of how their organization actually functions. This is where digital exhaust becomes a game changer.

At its core, digital exhaust provides a continuous, behavior-based view of the organization in motion. It captures the difference between how work is designed and how it is actually performed. And in that gap lies the truth about why change efforts stall and where innovation opportunities emerge.

Traditional change management relies heavily on lagging indicators—survey results, adoption metrics, and post-implementation reviews. By the time these signals appear, the organization has already absorbed the impact, for better or worse. Digital exhaust, on the other hand, offers something far more valuable: early visibility into emerging patterns of behavior.

This early visibility allows leaders to detect and respond to critical dynamics in real time, including:

  • Change Resistance: Not through what people say, but through what they do—avoiding new tools, reverting to old processes, or creating parallel workarounds.
  • Process Friction: Identifying bottlenecks, repeated handoffs, or excessive rework that signal misaligned or poorly designed workflows.
  • Cultural Misalignment: Revealing disconnects between stated values and actual behavior patterns.
  • Hidden Work: Surfacing informal, often invisible effort employees expend to compensate for gaps in systems or processes.

For innovation leaders, this is where things get especially interesting. Digital exhaust doesn’t just highlight problems—it illuminates possibilities. Every workaround is a signal of unmet need. Every friction point is a potential innovation opportunity. Every unexpected behavior pattern is a clue about how people are adapting to constraints in ways the organization did not anticipate.

In other words, innovation lives in the gaps between designed experience and lived experience.

When organizations ignore digital exhaust, they effectively blind themselves to these gaps. They continue to invest in solutions based on assumptions, often optimizing for a version of reality that no longer exists. This is how well-intentioned initiatives end up driving “hallucinatory innovation”—building elegant solutions to problems that don’t actually matter.

Conversely, organizations that leverage digital exhaust gain the ability to:

  • Continuously validate whether change is working as intended
  • Identify emerging needs before they are formally articulated
  • Adapt strategies dynamically based on real-world behavior
  • Reduce the gap between leadership perception and employee/customer reality

This shifts the role of leadership from one of prediction to one of perception and response. Instead of trying to anticipate every outcome, leaders can sense what is happening and adjust accordingly.

The implications are profound. Change becomes less about large, episodic transformations and more about continuous alignment. Innovation becomes less about isolated breakthroughs and more about systematically uncovering and addressing real human needs.

Ultimately, digital exhaust matters because it reconnects organizations with reality. It grounds strategy in behavior, not intention. And in a world where the pace of change continues to accelerate, that grounding may be the most important competitive advantage of all.

From Data to Meaning: The Practice of Digital Exhaust Analysis

If digital exhaust is the raw signal of how work actually happens, then digital exhaust analysis is the discipline of turning that signal into meaning. This is where many organizations struggle—not because they lack data, but because they lack a systematic way to interpret it in a human-centered way.

The first step is recognizing the breadth of digital exhaust across the enterprise. Every interaction, transaction, and workflow leaves behind traces of behavior. Individually, these signals may seem insignificant. Collectively, they form a dynamic, continuously updating picture of how the organization actually operates.

Common sources of digital exhaust include:

  • Collaboration Tools: Email, messaging platforms, and meeting systems that reveal communication flows, decision bottlenecks, and collaboration overload.
  • Customer Interactions: Support tickets, chat logs, call transcripts, and clickstream data that expose friction, confusion, and unmet expectations.
  • Operational Systems: CRM, ERP, and workflow platforms that capture how processes actually unfold, including delays, rework loops, and exception handling.
  • Content and Knowledge Systems: Document creation, editing patterns, and knowledge-sharing behaviors that reflect how information is accessed, reused, or lost.
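Before signals from these different systems can be analyzed together, they typically need to be normalized into a common shape. The following is a purely illustrative sketch of such a model; the `ExhaustEvent` schema, its field names, and the `from_ticket_log` mapper are hypothetical and do not reference any specific product:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical normalized schema: each exhaust source (chat, ticketing,
# workflow logs) is mapped into one common event shape so signals can be
# analyzed together rather than in per-tool silos.
@dataclass(frozen=True)
class ExhaustEvent:
    timestamp: datetime
    actor: str      # pseudonymized user or system id, never a raw name
    source: str     # e.g. "chat", "ticketing", "erp"
    action: str     # e.g. "message_sent", "ticket_reopened"
    case_id: str    # ties events to one process instance or journey

def from_ticket_log(row: dict) -> ExhaustEvent:
    """Map one raw ticketing-system record into the common schema."""
    return ExhaustEvent(
        timestamp=datetime.fromisoformat(row["created_at"]),
        actor=row["agent_hash"],
        source="ticketing",
        action=row["event_type"],
        case_id=row["ticket_id"],
    )

event = from_ticket_log({
    "created_at": "2026-03-02T14:05:00",
    "agent_hash": "a1f3",
    "event_type": "ticket_reopened",
    "ticket_id": "T-1001",
})
print(event.source, event.action)  # ticketing ticket_reopened
```

One mapper per source system keeps the messy, tool-specific parsing at the edges, while everything downstream works against a single event shape.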

But volume alone does not create insight. The real shift comes from applying analytical approaches that focus on behavior rather than static metrics. Instead of asking “What happened?”, digital exhaust analysis asks “How and why did it happen this way?”

Effective analysis typically combines multiple techniques:

  • Behavioral Pattern Recognition: Identifying recurring actions, deviations, and anomalies that signal friction, adaptation, or emerging habits.
  • Process Mining and Journey Reconstruction: Rebuilding actual workflows and customer journeys based on real activity, not designed processes.
  • Language and Sentiment Analysis: Examining tone, word choice, and context in communications to uncover emotion, confusion, or resistance.
  • Network and Interaction Analysis: Mapping how people and teams connect to reveal informal influence structures and collaboration patterns.
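The process mining idea above can be made concrete with a toy sketch: group a raw event log into per-case traces, then count the workflow variants that actually occur. The event log, activity names, and the rework heuristic here are invented for illustration, not drawn from any real system:

```python
from collections import Counter

# Toy event log: (case_id, activity) pairs in chronological order.
# In practice these would come from CRM, ERP, or workflow system logs.
event_log = [
    ("C1", "submit"), ("C1", "review"), ("C1", "approve"),
    ("C2", "submit"), ("C2", "review"), ("C2", "revise"),
    ("C2", "review"), ("C2", "approve"),
    ("C3", "submit"), ("C3", "review"), ("C3", "approve"),
]

# Group events into per-case traces, preserving order.
traces: dict[str, list[str]] = {}
for case_id, activity in event_log:
    traces.setdefault(case_id, []).append(activity)

# The variant frequency table shows how work actually flows: the designed
# path may be submit -> review -> approve, but the log reveals how often
# rework loops occur in practice.
variants = Counter(tuple(t) for t in traces.values())
for variant, count in variants.most_common():
    rework = len(variant) != len(set(variant))  # repeated step = loop
    print(count, " -> ".join(variant), "(rework)" if rework else "")
```

Even this tiny log surfaces the gap between the designed process and the lived one: two cases follow the happy path, while one loops through an extra revise-and-review cycle.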

A critical principle in this work is triangulation. No single data source tells the full story. Only by combining multiple signals can organizations distinguish between noise and meaningful patterns.

Equally important is the shift from retrospective reporting to continuous sensing. Traditional analytics looks backward, summarizing what has already occurred. Digital exhaust analysis, when done well, enables organizations to monitor patterns as they emerge and evolve—creating the opportunity to respond in near real time.

This does not mean automating decisions blindly. On the contrary, the goal is to augment human judgment. The role of digital exhaust analysis is to surface signals that prompt better questions, deeper inquiry, and more informed action.

Ultimately, the practice is not about mastering tools—it is about building a new organizational capability: the ability to see clearly, move beyond assumptions, understand behavior in context, and translate that understanding into smarter, more human-centered decisions about change and innovation.

Human-Centered Interpretation: Avoiding the Measurement Trap

One of the most dangerous assumptions organizations make is that data is objective. It isn’t. Data is shaped by what we choose to measure, how we collect it, and the context in which we interpret it. Digital exhaust may feel more “real” because it is behavior-based, but it is still incomplete without thoughtful, human-centered interpretation.

This is where many digital exhaust initiatives go off track. Leaders see a new stream of rich behavioral data and immediately move to optimize against it—reducing time, increasing throughput, or eliminating variance. In doing so, they risk falling into the very trap they were trying to escape: mistaking signals for truth and metrics for meaning.

The reality is that every data point carries ambiguity. A spike in after-hours activity could indicate high engagement—or it could signal burnout. A reduction in collaboration might reflect improved efficiency—or growing silos. Without context, interpretation becomes guesswork dressed up as insight.

This is why digital exhaust analysis must be grounded in a human-centered mindset. The goal is not to monitor people more closely, but to understand their experiences more deeply.

There is also an important ethical dimension to consider. The same data that can illuminate friction and unlock innovation can also feel invasive if misused. Employees who believe they are being surveilled will adapt their behavior—not to improve outcomes, but to protect themselves. When that happens, the integrity of the data itself begins to erode.

Organizations must therefore be intentional about how they approach digital exhaust:

  • Transparency: Be clear about what is being analyzed, why it matters, and how it will (and will not) be used.
  • Purpose: Focus on improving systems and experiences, not evaluating or policing individuals.
  • Context: Combine behavioral data with qualitative insights—interviews, observation, and direct feedback—to understand the “why” behind the patterns.
  • Humility: Treat insights as hypotheses to explore, not conclusions to enforce.

At its best, digital exhaust analysis becomes a tool for empathy at scale. It helps leaders see where people are struggling, where systems are failing, and where expectations are misaligned—not in theory, but in lived experience.

This requires a fundamental shift in mindset: from control to curiosity. Instead of asking, “How do we make people comply with the process?” leaders begin asking, “Why does the process not work for people?” That shift is where real transformation begins.

Because the ultimate goal is not to create perfectly optimized systems. It is to design organizations that work with humans, not against them. And that means recognizing that behind every data point is a person making choices, adapting to constraints, and trying to get their work done.

Digital exhaust can show you what is happening. But only a human-centered approach can help you understand why—and what to do about it in a way that builds trust rather than erodes it.

Use Cases That Actually Move the Needle

Digital exhaust analysis only becomes valuable when it drives better decisions and meaningful outcomes. While the concept can feel abstract, its impact becomes very concrete when applied to real organizational challenges. The key is to focus on use cases where behavior-based insight can close the gap between intention and reality.

The following are some of the highest-impact applications of digital exhaust analysis across change, experience, and innovation:

Change Management: Seeing Adoption as It Happens

Traditional change management relies on training completion rates, survey feedback, and delayed adoption metrics. These signals often arrive too late to correct course effectively.

Digital exhaust provides a real-time view of how people are actually engaging with new tools, processes, or ways of working. Leaders can identify:

  • Where employees are reverting to legacy systems or behaviors
  • Which teams are adopting quickly—and why
  • Where informal workarounds are emerging

This enables faster intervention, targeted support, and ultimately a higher likelihood of sustained change.
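As a hedged illustration of the adoption-monitoring idea, one could compare legacy versus new-tool activity per team after a rollout and flag likely reversion. The team names, usage counts, and the 50% threshold below are entirely hypothetical:

```python
# Hypothetical weekly action counts per team in the old vs. new tool,
# taken from system logs after a migration. A team whose legacy share
# stays high after rollout is a candidate for targeted support,
# not a compliance conversation.
usage = {
    "team-a": {"legacy": 5,  "new": 95},
    "team-b": {"legacy": 60, "new": 40},
    "team-c": {"legacy": 20, "new": 80},
}

REVERSION_THRESHOLD = 0.5  # arbitrary illustrative cutoff

def legacy_share(counts: dict) -> float:
    """Fraction of a team's activity still happening in the old tool."""
    return counts["legacy"] / (counts["legacy"] + counts["new"])

flagged = [team for team, counts in usage.items()
           if legacy_share(counts) >= REVERSION_THRESHOLD]
print(flagged)  # ['team-b']
```

The output is a prompt for inquiry rather than a verdict: the interesting question is why team-b is reverting, which only conversation and context can answer.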

Employee Experience: Detecting Friction and Burnout Early

Employee experience is often measured through periodic surveys, which provide valuable but infrequent snapshots. Digital exhaust fills in the gaps between those moments.

By analyzing collaboration patterns, workload signals, and communication behaviors, organizations can detect:

  • Meeting overload and fragmentation of focus time
  • After-hours work patterns that may indicate burnout risk
  • Breakdowns in cross-functional collaboration

Instead of reacting to disengagement after it occurs, leaders can proactively redesign work environments to better support how people actually operate.
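One simple, illustrative after-hours signal is the share of activity falling outside a nominal workday. The timestamps and the 8-to-6 window below are assumptions made for the sketch, not recommended defaults:

```python
from datetime import datetime

# Hypothetical message timestamps for one team over a few days.
timestamps = [
    datetime(2026, 3, 2, 9, 30), datetime(2026, 3, 2, 14, 0),
    datetime(2026, 3, 2, 22, 15), datetime(2026, 3, 3, 8, 45),
    datetime(2026, 3, 3, 23, 40), datetime(2026, 3, 4, 11, 5),
]

def after_hours_share(events, start_hour=8, end_hour=18):
    """Fraction of activity outside the nominal workday."""
    late = [t for t in events if not (start_hour <= t.hour < end_hour)]
    return len(late) / len(events)

share = after_hours_share(timestamps)
# A rising share is a prompt for inquiry, not a verdict: the same number
# can mean flexibility, time-zone spread, or genuine overload.
print(f"after-hours share: {share:.0%}")
```

Consistent with the ethical guardrails discussed later, a signal like this should be computed at the team level and paired with qualitative context, never used to single out individuals.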

Customer Experience: Uncovering Hidden Friction

Customer journeys are carefully designed, but rarely experienced exactly as intended. Digital exhaust reveals where those designs break down in practice.

Through analysis of clickstreams, support interactions, and behavioral flows, organizations can identify:

  • Points where customers hesitate, abandon, or seek help
  • Inconsistencies across channels and touchpoints
  • Unmet needs that are not captured in structured feedback

These insights enable more precise, evidence-based improvements to the customer journey—reducing friction and increasing satisfaction in ways that traditional metrics alone cannot achieve.

Innovation Discovery: Finding Opportunity in Workarounds

One of the most overlooked sources of innovation is the set of informal solutions people create to get their work done. These workarounds are not failures—they are signals.

Digital exhaust analysis helps surface:

  • Repeated deviations from standard processes
  • Shadow systems and tools adopted outside official channels
  • Emerging behaviors that indicate shifting needs or expectations

Each of these represents an opportunity to design better solutions that align with how people naturally work, rather than forcing them into rigid structures.

Operational Excellence: Moving Beyond Efficiency to Effectiveness

Many operational improvement efforts focus narrowly on efficiency—reducing time, cost, or variability. Digital exhaust enables a broader view that includes effectiveness and experience.

By reconstructing actual workflows, organizations can identify:

  • Hidden loops of rework and redundancy
  • Misaligned handoffs between teams or systems
  • Disconnects between formal processes and real execution

This allows for redesign efforts that not only streamline operations but also make them more intuitive and resilient.

Across all of these use cases, the common thread is speed of learning. Digital exhaust shortens the feedback loop between action and insight. It allows organizations to move from periodic evaluation to continuous adaptation.

And in an environment where change is constant, that ability—to learn faster than the pace of disruption—is what ultimately separates organizations that struggle from those that thrive.

Digital Exhaust Flow

The Technology Ecosystem Powering Digital Exhaust Analysis

While digital exhaust is created naturally through everyday work, unlocking its value requires a rapidly evolving ecosystem of technologies. No single platform owns this space. Instead, it is an emerging convergence of analytics, artificial intelligence, process mining, and digital twin capabilities—each contributing a piece of the broader puzzle.

Understanding this ecosystem is critical, not because organizations need to adopt every tool, but because it reveals where the market is heading: toward a future of organizational observability—the ability to continuously sense, interpret, and respond to how work actually happens.

Enterprise Platforms: Scaling Insight Across Complex Systems

Large enterprise technology providers are embedding digital exhaust analysis into broader platforms that integrate data across operations, customers, and assets. These solutions often combine IoT, analytics, and simulation to create end-to-end visibility.

  • Siemens: Leveraging digital twin technology to simulate and optimize complex systems, capturing exhaust signals from both physical and digital environments.
  • General Electric: Applying industrial data analytics to monitor performance, predict issues, and improve operational outcomes.
  • Dassault Systèmes: Enabling virtual modeling of organizations and ecosystems to better understand how processes and interactions unfold.
  • PTC: Integrating IoT and augmented reality to connect frontline activity with enterprise systems, generating rich behavioral data streams.

These platforms are particularly powerful in environments where physical and digital systems intersect, but their broader impact is the normalization of continuous data capture and analysis at scale.

Advanced Analytics and Simulation Engines

A second layer of the ecosystem focuses on making sense of complexity. These tools excel at modeling, simulation, and high-dimensional analysis—turning raw exhaust into predictive and prescriptive insight.

  • ANSYS: Known for engineering simulation, increasingly applied to model system behavior and test scenarios before changes are implemented.
  • Altair: Combining data analytics, AI, and high-performance computing to uncover patterns and optimize outcomes across complex environments.

These capabilities allow organizations to move beyond hindsight and into foresight—understanding not just what is happening, but what is likely to happen next under different conditions.

Process Mining and Behavioral Analytics Innovators

One of the fastest-growing segments in this space is process mining and behavioral analytics. These solutions reconstruct workflows and interactions from event logs, revealing how processes actually execute across systems and teams.

They provide:

  • End-to-end visibility into real process flows
  • Identification of bottlenecks, deviations, and rework
  • Data-driven opportunities for automation and redesign

By grounding analysis in actual behavior, these tools bring a level of objectivity and clarity that traditional process mapping rarely achieves.

Emerging Startups: Democratizing Insight

Alongside established players, a new generation of startups is pushing the boundaries of what digital exhaust analysis can do. These companies are often more focused, more agile, and more explicitly human-centered in their approach.

They are exploring innovations such as:

  • AI-driven pattern detection and anomaly identification
  • Natural language processing applied to communication data
  • Lightweight tools that make insight accessible beyond data science teams
  • Privacy-first architectures that balance insight with trust

Their collective impact is to lower the barrier to entry—making it possible for more organizations to experiment with and benefit from digital exhaust analysis without massive upfront investment.

The Convergence Toward Organizational Observability

What is most important is not any individual tool, but the direction of travel. These technologies are converging toward a shared goal: creating organizations that can continuously observe themselves.

In software engineering, observability transformed how systems are managed—shifting from reactive troubleshooting to proactive monitoring and adaptation. A similar transformation is now underway at the organizational level.

The implication is clear. In the near future, leading organizations will not rely on periodic reports to understand performance. They will operate with a living, breathing view of how work unfolds—powered by digital exhaust and the technologies that bring it to life.

The question is no longer whether these capabilities will exist, but how quickly organizations will learn to use them in a way that is both effective and human-centered.

Building the Capability: From Experiment to Enterprise Muscle

Recognizing the value of digital exhaust is one thing. Building the organizational capability to use it consistently and effectively is another. Many organizations start with enthusiasm, launch a pilot, and then stall—unable to scale insight beyond isolated use cases.

The difference between experimentation and impact lies in treating digital exhaust analysis not as a tool, but as a core organizational muscle—one that must be intentionally developed, embedded, and sustained over time.

Start Small, But Start Where It Matters

The most successful organizations resist the urge to boil the ocean. Instead, they begin with a focused, high-value problem—typically a journey or process where friction is both visible and consequential.

This might include:

  • A struggling change initiative with uneven adoption
  • A critical customer journey with known pain points
  • An internal process plagued by delays or rework

By instrumenting relevant systems and analyzing the resulting digital exhaust, teams can generate early wins that demonstrate both value and feasibility.

Build Cross-Functional Alignment Early

Digital exhaust does not belong to a single function. It spans IT, HR, customer experience, operations, and innovation. As a result, siloed approaches quickly run into limitations.

Leading organizations bring together cross-functional teams that combine:

  • Technical expertise (data engineering, analytics, AI)
  • Domain knowledge (HR, CX, operations)
  • Human-centered design and research capabilities

This combination ensures that insights are not only technically sound, but also contextually meaningful and actionable.

Establish Clear Governance and Ethical Guardrails

As digital exhaust analysis scales, questions of trust, privacy, and appropriate use become unavoidable. Without clear guardrails, even well-intentioned efforts can create resistance or unintended consequences.

Effective governance includes:

  • Transparency: Communicating openly about what data is being used and for what purpose
  • Boundaries: Defining what will not be measured or inferred, particularly at the individual level
  • Accountability: Ensuring that insights are used to improve systems, not penalize people

Trust is not a byproduct of capability—it is a prerequisite for it.

Shift the Mindset: From Reporting to Sensing and Adapting

Perhaps the most important transformation is cultural. Traditional organizations are built around reporting—periodic snapshots of performance against predefined metrics.

Digital exhaust enables something fundamentally different: continuous sensing. But to realize this value, leaders must embrace a new operating model—one that prioritizes learning and adaptation over control and prediction.

This means:

  • Acting on directional insight rather than waiting for perfect data
  • Testing and iterating in shorter cycles
  • Empowering teams to respond to what they observe in real time

Over time, this shift transforms digital exhaust analysis from a specialized capability into an embedded way of working.

Scale What Works, Systematically

Once early use cases demonstrate value, the focus should shift to scaling—not by replicating tools, but by codifying practices. This includes:

  • Standardizing data pipelines and integration patterns
  • Creating reusable analytical models and frameworks
  • Embedding insights into existing decision-making processes

The goal is to make digital exhaust analysis repeatable, reliable, and accessible across the organization.

Ultimately, organizations that succeed in this space do not treat digital exhaust as a one-time initiative. They build it into the fabric of how they operate—continuously listening, learning, and adapting.

And in doing so, they move closer to something every organization aspires to, but few achieve: the ability to evolve as quickly as the world around them.

The Future: From Digital Exhaust to Adaptive Organizations

The journey from collecting digital exhaust to building a fully adaptive organization is both a technological and cultural evolution. It requires more than tools or analytics—it demands a mindset shift where organizations listen continuously, respond intelligently, and innovate in alignment with real human behavior.

Organizations that master digital exhaust will develop capabilities similar to observability in software systems: they will sense emerging issues, anticipate bottlenecks, and detect opportunities before they become urgent. This real-time awareness allows leadership to act proactively rather than reactively.

Key hallmarks of adaptive organizations powered by digital exhaust include:

  • Continuous Sensing: Systems and processes generate ongoing behavioral data, providing a real-time view of organizational health and performance.
  • Rapid Feedback Loops: Insights flow quickly to decision-makers, enabling faster course corrections and iterative improvements.
  • Behavior-Informed Innovation: Emerging patterns reveal unmet needs, workarounds, and latent opportunities, fueling human-centered innovation.
  • Trust-Centered Design: Analysis is conducted ethically and transparently, preserving employee and customer confidence.

The implications are profound. Change initiatives no longer rely solely on annual plans or post-implementation reviews. Innovation is no longer limited to isolated labs or ideation workshops. Instead, the organization becomes a living, learning system, continuously adapting based on how people actually work, collaborate, and engage.

Looking forward, the integration of AI and automation with digital exhaust analysis promises even more sophisticated capabilities. Intelligent agents may highlight emerging friction points, suggest targeted interventions, or simulate the potential outcomes of proposed changes before they are executed.

Yet, technology alone is not enough. Adaptive organizations are built on a foundation of human-centered insight, trust, and curiosity. Leaders must listen carefully, interpret thoughtfully, and act with empathy—turning the passive signals of digital exhaust into meaningful transformation.

The ultimate promise of this approach is clear: organizations that learn to sense and respond effectively will not just survive change—they will thrive in it. By transforming digital exhaust from noise into signal, they unlock the ability to innovate continuously, adapt dynamically, and create lasting value for employees, customers, and stakeholders alike.

In a world of accelerating complexity, the question is no longer whether digital exhaust matters. The question is whether your organization is ready to listen—and evolve.

Frequently Asked Questions (FAQ)

What is digital exhaust in an organization?

Digital exhaust is the unintentional trail of data created by employees, customers, and systems as they interact with processes and tools. It includes patterns of behavior, communication flows, process deviations, and other signals that reveal how work actually happens, beyond formal metrics.

How can digital exhaust analysis improve innovation and change initiatives?

Digital exhaust analysis provides real-time insights into actual behavior and process execution. By identifying friction points, informal workarounds, and adoption gaps, organizations can adapt more quickly, design human-centered solutions, and uncover opportunities for innovation that traditional metrics may miss.

What are the ethical considerations when analyzing digital exhaust?

Ethical considerations include ensuring transparency, protecting individual privacy, and using insights to improve systems rather than monitor or penalize people. Organizations should combine quantitative data with qualitative context, communicate clearly about data usage, and maintain trust to preserve the integrity of the analysis.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: ChatGPT

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Are Humans Just a Fleshy Generative AI Machine?


GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But do you know why that is so?

The answer is, because that is exactly what we humans do as well.

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis would be. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?

Image Credit: Google Gemini
