
The End of AI Data Centers

Why Decentralized Compute is the Only Resilient Future

LAST UPDATED: May 11, 2026 at 11:24 AM


by Braden Kelley and Art Inteligencia


I. Introduction: The Fragility of the AI “Crown Jewels”

The race to dominate artificial intelligence has triggered a global construction boom unlike anything the technology industry has ever seen. Governments and corporations are pouring hundreds of billions of dollars into massive AI data centers packed with advanced GPUs, specialized networking hardware, and enough electrical infrastructure to power small cities. These facilities are rapidly becoming the economic and strategic “crown jewels” of the twenty-first century.

But in the rush to scale AI capability, we may be building exactly the wrong architecture for the world that is emerging around us.

The current model of AI infrastructure is overwhelmingly centralized. Instead of distributing compute across millions of smaller nodes, we are concentrating unprecedented amounts of economic, military, and technological capability into a relatively small number of gigantic facilities. Each hyperscale AI campus represents not only a massive financial investment, but also a critical dependency for national competitiveness, intelligence operations, logistics, cybersecurity, and military decision-making.

In effect, the AI industry has unintentionally created the ultimate single point of failure.

As AI becomes increasingly essential to economic productivity and national defense, these centralized facilities naturally evolve from commercial assets into strategic targets. Their importance guarantees that adversaries will study them, map them, probe them, and eventually develop methods to disrupt or destroy them. The more valuable these AI fortresses become, the more irresistible they become as targets during geopolitical conflict.

This reality formed the basis of a previous argument that the AI data centers of 2030 may ultimately require sovereign-level protection — potentially functioning more like hardened military installations than traditional commercial real estate. Once AI infrastructure becomes critical to national security, protecting it may no longer be optional.

But militarizing data centers only treats the symptom, not the disease.

Building bigger walls around centralized AI infrastructure may delay catastrophe, but it does not eliminate the underlying strategic vulnerability. A fortress is still a fortress. It still has a location. It still has supply lines. It still has power dependencies. And most importantly, it still presents adversaries with a concentrated target whose destruction could create disproportionate economic and military disruption.

Modern warfare is increasingly demonstrating that concentration itself is becoming obsolete.

The emerging lesson from contemporary conflict is that large, static, centralized assets are becoming dangerously vulnerable in an era of cheap autonomous systems, distributed attacks, cyber-physical warfare, and AI-enabled targeting. Resilience no longer comes from concentrating strength behind thicker walls. Resilience comes from distribution, redundancy, mobility, and the elimination of obvious centers of gravity.

The future of AI infrastructure may therefore require a fundamental architectural shift — away from the “Fortress” model and toward something far more decentralized and resilient.

Instead of concentrating compute into a handful of hyperscale compounds, the smarter long-term strategy may be to distribute AI capability across millions of interconnected nodes embedded throughout society itself. Homes, businesses, vehicles, factories, and local energy systems could collectively form a resilient national AI fabric that is vastly harder to disrupt because it has no singular brain to destroy.

In other words, the ultimate defense against the vulnerabilities of centralized AI infrastructure may not be better fortifications at all.

It may be the elimination of the fortress entirely.

II. Lessons from the Front: Operation Spiderweb and the Death of “Large & Static”

For decades, military doctrine revolved around concentration of force. Nations projected power by building larger air bases, larger aircraft carriers, larger command centers, and larger logistical hubs. Strategic advantage often came from assembling overwhelming capability in centralized locations that could be defended through scale, distance, and hardened infrastructure.

But modern warfare is beginning to expose a dangerous flaw in that logic.

Ukraine’s Operation Spiderweb offered a glimpse into the future of asymmetric conflict — and a warning for anyone investing heavily in centralized AI infrastructure. In the operation, relatively inexpensive drones launched from concealed shipping containers reportedly destroyed or severely damaged billions of dollars of Russian military hardware. The attack demonstrated how low-cost autonomous systems can bypass traditional defensive assumptions and threaten even heavily protected strategic assets.

The significance of the operation was not merely tactical. It was architectural.

A modern military aircraft may cost tens or even hundreds of millions of dollars to build, maintain, and defend. Yet those investments can now be threatened by autonomous systems costing a tiny fraction of the target’s value. This is the new asymmetry of modern conflict: increasingly cheap offensive capabilities versus increasingly expensive centralized assets.

The implications extend far beyond the battlefield.

Hyperscale AI data centers are emerging as the civilian equivalent of concentrated military infrastructure. A single AI campus may contain billions of dollars' worth of GPUs, networking equipment, transformers, cooling systems, and backup power infrastructure concentrated within a relatively small geographic footprint. These facilities consume enormous amounts of electricity, require extensive water access, and depend on stable transportation and communication links.

In strategic terms, they are ideal targets.

Even if protected by advanced cybersecurity systems, physical security barriers, and military-grade defenses, the economics of attack versus defense are increasingly unfavorable. A nation may spend tens of billions hardening an AI fortress, while adversaries invest comparatively little developing autonomous drones, cyber-physical sabotage systems, electromagnetic disruption tools, or attacks against supporting infrastructure such as substations and fiber routes.

The uncomfortable reality is that static concentration itself is becoming the vulnerability.

This same lesson is already reshaping military thinking. Around the world, defense planners are reconsidering centralized command structures, massive forward operating bases, and tightly clustered logistics hubs. The future military is likely to become more distributed, more mobile, and more redundant — relying on decentralized command systems, autonomous coordination, modular logistics, and dispersed operational assets that can continue functioning even when individual nodes are destroyed.

AI infrastructure must evolve the same way.

If artificial intelligence becomes the backbone of economic productivity, national security, industrial automation, cybersecurity, healthcare, transportation, and military operations, then centralized AI compute becomes too strategically important to remain concentrated in a handful of giant facilities. The more essential AI becomes, the more dangerous centralization becomes.

The lesson of Operation Spiderweb is not simply that drones are dangerous.

The deeper lesson is that resilient systems survive by distributing critical capability across wide networks rather than concentrating it into singular targets. A decentralized system may lose individual nodes without catastrophic failure. A centralized system risks collapse if its core infrastructure is compromised.

In the emerging era of autonomous conflict, resilience increasingly belongs to the distributed.

III. The Social & Political Bottleneck: The Rise of the “NIMBY” Data Center

Even if centralized AI mega-campuses could somehow be fully protected from military and cyber threats, they still face another growing obstacle that may ultimately prove just as limiting: public opposition.

Across the United States and around the world, communities are increasingly resisting the construction of massive data centers in their neighborhoods. What was once viewed as relatively harmless digital infrastructure is now being recognized as an enormous industrial footprint with significant demands on land, water, electricity, and local infrastructure.

Residents are beginning to ask uncomfortable questions.

Why should local communities absorb rising utility costs, strained water supplies, constant construction traffic, backup generator noise, and visual blight so that a handful of technology companies can consolidate AI power? Why should neighborhoods sacrifice scarce electrical capacity for facilities that may create relatively few permanent local jobs compared to their physical scale and resource consumption?

As AI adoption accelerates, these tensions are likely to intensify rather than diminish.

The scale of future AI infrastructure requirements is staggering. Advanced AI models require immense amounts of compute power, and every new generation of models appears to demand exponentially more energy and hardware than the last. Entire regions are already experiencing concerns about grid strain, water availability, permitting delays, and environmental impact as hyperscale facilities compete for resources with local populations.

This creates a growing sovereignty conflict between national strategic priorities and local community interests.

From the perspective of national governments, AI infrastructure increasingly resembles critical infrastructure on par with ports, railroads, telecommunications networks, or energy systems. Nations that fail to secure sufficient AI compute capacity may find themselves economically disadvantaged, technologically dependent, or strategically vulnerable.

But from the perspective of local residents, a giant AI campus often appears as an unwanted industrial intrusion that consumes disproportionate resources while providing limited direct community benefit.

The collision between these perspectives could become one of the defining infrastructure battles of the next decade.

Governments may attempt to override local opposition through federal permitting reforms, strategic infrastructure designations, or national security arguments. Technology companies may offer tax incentives, local investments, or infrastructure improvements to secure approval. Yet none of these approaches fundamentally solve the underlying tension created by concentrating massive amounts of AI compute into highly visible facilities.

The more AI infrastructure grows in scale, the harder it becomes to hide its impact.

This is why decentralization may represent not only a strategic advantage, but also a political one. Anticipated opposition to terrestrial AI data centers is part of why Elon Musk and others are advocating for space-based data centers. But even on Earth, decentralization can address both the fragility of centralized infrastructure and the growing political and social opposition to it.

Instead of forcing communities to accept gigantic industrial AI campuses, future infrastructure could become embedded into the fabric of everyday life itself. Rather than concentrating compute into enormous fortified compounds, AI processing power could be distributed across homes, apartment buildings, offices, vehicles, factories, and local energy systems.

In this model, AI infrastructure becomes largely invisible.

The electrical grid itself offers an instructive analogy. Most people rarely think about the countless distributed components that collectively generate and manage electrical power. The system works precisely because it is distributed, redundant, and woven into the broader physical environment rather than concentrated into a few singular facilities.

Decentralized AI compute could evolve in much the same way.

Instead of building isolated industrial parks dedicated exclusively to AI, society could gradually transform millions of existing structures into intelligent compute nodes. Homes equipped with solar panels, battery storage, smart electrical systems, and AI acceleration hardware could collectively form a national compute fabric that scales organically alongside everyday infrastructure upgrades.

The strategic benefit is resilience.

The political benefit is acceptance.

Infrastructure people barely notice is often infrastructure they are far more willing to live with.


IV. The New Architecture: Residential AI Nodes (The Nvidia-Pulte-Span Model)

The transition from centralized AI fortresses to distributed AI infrastructure may sound futuristic, but early versions of this architecture are already beginning to emerge.

One of the clearest signals came from the 2026 partnership between PulteGroup, Nvidia, and Span — an alliance that hinted at a radically different vision for the future of AI compute. Instead of treating homes solely as passive consumers of electricity and internet services, the partnership pointed toward a future where residential properties themselves become intelligent infrastructure nodes participating in a larger distributed compute network.

At the center of this shift is the growing convergence of three technologies that historically operated independently: AI acceleration hardware, residential energy systems, and intelligent electrical management.

Nvidia provides the AI compute layer through increasingly compact and energy-efficient GPU systems optimized for local inference and edge processing. Span contributes the intelligent electrical infrastructure capable of dynamically managing household energy loads, battery systems, solar generation, and grid interaction. PulteGroup represents the large-scale residential deployment mechanism capable of embedding these systems into new homes at scale.

Together, these technologies begin to transform the modern home into something entirely new: a residential AI node.

This concept fundamentally changes the role homes play within both the energy grid and the digital economy. Traditionally, homes consume electricity, bandwidth, and cloud services while contributing relatively little back into the broader infrastructure ecosystem. But with intelligent power management, local battery storage, rooftop solar generation, and dedicated AI hardware, homes can evolve into active participants in a distributed national compute fabric.

In practical terms, this means millions of homes could collectively provide enormous amounts of distributed AI inference capacity without requiring the construction of massive standalone data centers.

The timing of this shift is important because AI workloads themselves are evolving.

Training frontier AI models will likely continue requiring large-scale centralized infrastructure for the foreseeable future. But inference — the process of actually running AI models to serve applications, automate tasks, power agents, process data, and support real-time decision-making — is increasingly capable of operating on smaller, distributed hardware systems.

That distinction changes everything.

Instead of routing every AI request through hyperscale facilities, future AI ecosystems may distribute inference workloads dynamically across millions of geographically dispersed residential nodes. AI processing could occur closer to the end user, reducing latency, improving resilience, lowering bandwidth costs, and minimizing pressure on centralized infrastructure.
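As a thought experiment, the kind of dynamic routing described above could look something like the following sketch, where a request is steered toward a nearby node with spare capacity. The node names, fields, and scoring formula are all illustrative assumptions, not a description of any real system.

```python
# Illustrative sketch only: steer an inference request toward a nearby node
# with spare capacity. Node names, fields, and the scoring formula are
# hypothetical assumptions, not a real protocol.

class Node:
    def __init__(self, node_id, latency_ms, free_capacity, online=True):
        self.node_id = node_id
        self.latency_ms = latency_ms        # network distance to the requester
        self.free_capacity = free_capacity  # 0.0 (saturated) .. 1.0 (fully idle)
        self.online = online

def route_request(nodes):
    """Pick the online node with the best latency/spare-capacity trade-off."""
    candidates = [n for n in nodes if n.online and n.free_capacity > 0.1]
    if not candidates:
        return None  # fall back to a centralized endpoint
    # Lower latency is better; spare capacity discounts the effective cost.
    return min(candidates, key=lambda n: n.latency_ms * (1.0 - 0.5 * n.free_capacity))

nodes = [
    Node("home-17", latency_ms=8, free_capacity=0.6),
    Node("office-3", latency_ms=25, free_capacity=0.9),
    Node("home-42", latency_ms=5, free_capacity=0.05),   # nearly saturated
    Node("home-99", latency_ms=4, free_capacity=0.8, online=False),
]
print(route_request(nodes).node_id)  # picks "home-17": close, with headroom
```

The essential design point is the fallback: when no distributed node qualifies, the request can still be served centrally, so the fabric degrades gracefully rather than failing hard.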

The energy implications are equally significant.

One of the biggest criticisms of hyperscale AI infrastructure is its extraordinary power consumption. Massive data centers require huge dedicated energy resources that often strain local grids and trigger political resistance. Distributed residential AI nodes offer a different model by leveraging energy systems that are already being deployed into homes for broader electrification efforts.

Homes equipped with solar panels and battery packs effectively become micro-energy systems capable of storing and managing local power generation. Smart electrical panels can determine when energy demand is low, when renewable generation is abundant, or when excess electricity would otherwise go unused. During those periods, AI inference workloads could be activated opportunistically across distributed residential infrastructure.

In effect, AI compute becomes partially synchronized with the natural rhythms of the electrical grid.

Instead of building ever-larger centralized facilities that demand constant peak power availability, distributed AI infrastructure could absorb excess off-peak generation, stabilize demand curves, and make more efficient use of existing electrical capacity.
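To make the opportunistic scheduling described above concrete, here is a minimal sketch of the kind of decision a home node might make before accepting deferrable inference work. The thresholds, field names, and price figures are illustrative assumptions only.

```python
# Hypothetical sketch: decide whether a home node should accept deferrable
# AI inference work right now, based on local energy conditions. All
# thresholds and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EnergyState:
    solar_kw: float          # current rooftop solar output
    battery_soc: float       # battery state of charge, 0.0 .. 1.0
    grid_price: float        # $/kWh at this moment
    household_load_kw: float # what the home itself is drawing

def should_run_inference(state, price_ceiling=0.12):
    """Accept deferrable AI work only when energy is effectively cheap."""
    surplus = state.solar_kw - state.household_load_kw
    if surplus > 0.5:                     # excess solar would otherwise go unused
        return True
    if state.battery_soc > 0.8:           # battery nearly full; spare stored energy
        return True
    if state.grid_price < price_ceiling:  # off-peak grid power is cheap
        return True
    return False

sunny_midday = EnergyState(solar_kw=5.0, battery_soc=0.6, grid_price=0.18, household_load_kw=1.2)
evening_peak = EnergyState(solar_kw=0.0, battery_soc=0.4, grid_price=0.32, household_load_kw=3.5)

print(should_run_inference(sunny_midday))  # True: solar surplus absorbs the work
print(should_run_inference(evening_peak))  # False: defer until off-peak
```

In practice such a policy would sit inside the smart electrical panel, re-evaluated continuously, which is what synchronizes compute demand with the grid's natural rhythms.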

The homeowner incentives could also be compelling.

Just as homeowners today can sell excess solar generation back to the grid, future residential AI systems could potentially generate compute revenue by contributing idle processing power to distributed inference networks. Reduced utility costs, subsidized hardware, lower internet expenses, and participation payments could transform homes from passive infrastructure liabilities into productive digital assets.

This creates a powerful alignment between national strategic interests and individual economic incentives.

Governments gain a far more resilient and geographically distributed AI infrastructure. Technology companies gain scalable edge compute capacity without constructing as many hyperscale facilities. Electrical grids gain flexible demand management capabilities. And homeowners gain direct economic participation in the AI economy itself.

Most importantly, the resulting system becomes dramatically harder to disrupt.

A centralized AI fortress presents adversaries with a concentrated target. A distributed residential AI fabric diffuses compute capability across millions of ordinary structures woven throughout society. What once existed inside a handful of highly visible compounds instead becomes embedded everywhere and nowhere at the same time.

In the emerging era of strategic AI competition, that distinction may prove decisive.

V. Strategic Advantages of the Distributed AI Grid

If centralized AI infrastructure represents a high-value target with concentrated risk, then decentralized AI infrastructure represents the opposite: a system designed around dispersion, redundancy, and continual adaptability. The advantages of this shift are not incremental — they are structural.

The most immediate benefit is what might be called kinetic resilience. In a centralized model, a single facility may represent a critical node whose disruption could degrade national AI capability in a meaningful way. In a distributed model, however, compute is spread across thousands or millions of independent nodes. No single strike, outage, or localized failure can meaningfully degrade the system as a whole. The network simply reroutes, reallocates, and continues operating.
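The arithmetic behind this asymmetry is simple. Assuming, purely for illustration, that total compute is divided evenly across sites:

```python
# Illustrative arithmetic: fraction of national AI capacity lost when a given
# number of sites is destroyed, assuming capacity is split evenly across
# sites. The site counts are hypothetical.

def loss_from_strikes(num_sites, strikes):
    """Fraction of total capacity lost when `strikes` sites are disabled."""
    return min(strikes, num_sites) / num_sites

# Four hyperscale campuses: one successful strike erases a quarter of capacity.
print(loss_from_strikes(4, 1))           # 0.25
# 100,000 residential nodes: one strike is statistical noise.
print(loss_from_strikes(100_000, 1))     # 1e-05
# Even 1,000 simultaneous strikes remove only 1% of distributed capacity.
print(loss_from_strikes(100_000, 1000))  # 0.01
```

The defender's position improves by orders of magnitude simply by changing the denominator, before any hardening is purchased at all.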

This changes the strategic calculus entirely. Instead of defending a small number of high-value assets at extraordinary cost, resilience is achieved through ubiquity. The system becomes less like a fortress and more like a living ecosystem — continuously adapting to localized disruptions without systemic collapse.

A second advantage is power efficiency and grid stability. Hyperscale data centers often require dedicated energy infrastructure, new transmission lines, and significant upgrades to local grids. They tend to behave like industrial-scale energy sinks, demanding predictable and sustained power delivery at massive scale.

A distributed AI grid behaves differently. By embedding compute capability into residential and commercial environments already connected to the electrical system, AI workloads can be dynamically aligned with existing energy flows rather than forcing entirely new ones.

In practical terms, this enables several efficiencies:

  • Utilization of residential solar generation that would otherwise be unused or exported inefficiently
  • Charging and discharging of home battery systems in coordination with AI workload demand
  • Shifting inference tasks to off-peak hours when grid demand is lower and electricity is cheaper
  • Reducing the need for large new transmission infrastructure dedicated solely to AI growth

Instead of AI competing with other sectors for scarce centralized power capacity, it becomes a flexible participant in a broader distributed energy ecosystem.

A third advantage is latency reduction and proximity to the user. As AI becomes more embedded in daily life — powering assistants, autonomous systems, real-time translation, predictive services, and physical automation — the distance between compute and user begins to matter more.

Distributed inference at the edge of the network enables faster response times, reduced dependency on long-haul network routing, and greater robustness during partial connectivity disruptions. In many cases, AI systems embedded in homes, vehicles, and local infrastructure can respond instantaneously without requiring round trips to distant centralized servers.

Taken together, these advantages suggest that decentralization is not simply a defensive posture against geopolitical risk — it is also an optimization of efficiency, responsiveness, and system-wide adaptability.

Perhaps most importantly, the distributed model reduces systemic fragility at exactly the moment AI systems are becoming more deeply integrated into critical societal functions. The more intelligence we embed into infrastructure, the more dangerous it becomes to concentrate that intelligence into a small number of failure-prone locations.

In this sense, decentralization is not a retreat from progress. It is an evolution toward resilience.

VI. Conclusion: From Fortresses to Fabrics

The trajectory of AI infrastructure is often described as a race toward scale: larger models, larger clusters, larger data centers, and larger investments concentrated into fewer and fewer locations. On the surface, this appears to be the natural endpoint of technological progress — efficiency achieved through consolidation.

But that framing assumes a world where concentration remains an advantage. Increasingly, the opposite may be true.

As AI becomes more deeply embedded in national economies, critical infrastructure, and defense systems, the risks associated with centralization grow in parallel with its capabilities. What once looked like an optimization problem begins to resemble a resilience problem. And resilience, in complex systems, rarely comes from concentration.

The “AI Fortress” model — massive, highly capable, strategically critical data centers protected by layers of physical and digital security — may represent an important transitional phase. It enables rapid scaling of capability at a moment when demand is exploding and architectures are still stabilizing. But it is unlikely to represent the final stable equilibrium.

Over time, the logic of vulnerability, energy distribution, political friction, and technological enablement all converge on a different structure: one that is distributed by default, not by exception.

In that future, AI compute is no longer something that exists “somewhere.” It is something that exists everywhere — embedded into homes, vehicles, factories, grids, and local systems, continuously interacting with the physical world rather than being isolated from it.

This is the shift from fortresses to fabrics.

A fortress is defined by its boundaries: inside is protected, outside is excluded, and value is concentrated at the center. A fabric, by contrast, derives its strength from interconnection. It is resilient not because it is hardened in one place, but because it is woven across many places. Damage to one thread does not collapse the structure; it is absorbed, rerouted, and contained.

A distributed AI fabric would behave in the same way. Compute capacity would be ubiquitous but not centralized, powerful but not singularly fragile, intelligent but not dependent on any single point of control or failure.

In this model, the question is no longer how to protect the brain of the system by enclosing it within ever more secure walls. Instead, the question becomes how to ensure there is no single brain to target in the first place.

That shift has profound strategic implications.

It reframes AI infrastructure from something that must be defended at a few critical locations into something that must be designed as a resilient, adaptive system distributed across society itself. It also aligns national security objectives with individual participation, energy efficiency with compute demand, and technological advancement with infrastructural sustainability.

In an era shaped by asymmetric threats, autonomous systems, and rapidly evolving geopolitical risk, the most robust systems will not be those that concentrate power most effectively, but those that distribute it most intelligently.

The future of AI infrastructure may therefore not be a monument.

It may be a mesh.

And in that shift from fortresses to fabrics lies the real foundation of long-term resilience in the age of artificial intelligence.

FAQ: Decentralized AI Compute and Infrastructure Resilience


Why are centralized AI data centers considered vulnerable?
Centralized AI data centers concentrate massive compute, energy, and strategic value into a small number of physical locations. This creates single points of failure that can be targeted by physical attacks, cyber operations, or infrastructure disruptions, potentially causing disproportionate economic and national security impact.

What is meant by a “distributed AI fabric”?
A distributed AI fabric refers to an architecture where AI compute is spread across millions of interconnected nodes such as homes, businesses, and edge devices. Instead of relying on a few large data centers, intelligence is embedded throughout the network, improving resilience, reducing latency, and eliminating critical single points of failure.

How could residential AI nodes support the power grid and economy?
Residential AI nodes can leverage solar power, home battery systems, and off-peak electricity to run AI inference workloads locally. This helps balance grid demand, utilize excess renewable energy, reduce strain on centralized infrastructure, and potentially allow homeowners to participate economically in distributed compute networks.

EDITOR’S NOTE: For more background, read the earlier article, “Why the AI Data Centers of 2030 Will Be Sovereign Fortresses.”

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT and Google Gemini to clean up the article, add images and create infographics.

Image credits: Google Gemini, SPAN (via mortgagepoint.com)


The AI New Deal

Another AI Soft Landing Scenario Exploration — Government as the Employer of First Resort

LAST UPDATED: May 2, 2026 at 5:33 PM


by Braden Kelley and Art Inteligencia


The Structural Gap: Why Process Automation Requires a Civic Pivot

As we navigate the accelerating displacement of cognitive and administrative labor, the conversation around the “AI soft landing” has reached a critical juncture. In my previous explorations, I’ve examined how our future might mirror the extreme wealth gaps of Victorian England and how we might witness a Human Premium Renaissance, where uniquely human traits become our most valuable currency.

However, a significant structural link is missing. While AI is exceptionally efficient at automating process, it is incapable of automating presence. This creates a dangerous void: as middle-class administrative roles evaporate, we risk losing the economic liquidity and social cohesion that sustain our communities.

The prevailing solution often discussed is Universal Basic Income (UBI). But as I have argued, UBI is a fiscal mirage — a passive mechanism that fails to account for the human need for agency and the staggering mathematical reality of devalued tax bases. We don’t need a handout; we need a Civic Dividend. We must move from a scarcity mindset focused on protecting obsolete jobs to an abundance mindset that funds the essential work we have historically neglected. This is the foundation of the AI New Deal: positioning the government as the Employer of First Resort.

The Fiscal and Psychological Mirage of UBI

Universal Basic Income (UBI) is often presented as the “silver bullet” for the AI age, but a closer look at the mechanics reveals it to be a flawed tool for a human-centered transition. From a design perspective, UBI solves for survival but fails to solve for contribution.

First, we must confront the Math Problem. Funding a meaningful UBI requires a robust and consistent tax base. However, as AI drives down the cost of labor toward zero, the income tax pool — the traditional engine of government revenue — shrinks alongside it. Relying on passive redistribution in a devalued labor market is a race to the bottom that risks a permanent “subsistence trap” for the majority of the population.

Second, there is the Agency Problem. Innovation thrives on human agency — the ability to act, create, and impact one’s environment. UBI provides a safety net but offers no platform for growth. By decoupling income from contribution, we risk creating a “useless class” not because humans lack value, but because we have failed to design systems that utilize their unique “Human Premium.”

Finally, we must consider the Inflation Trap. Without a mechanism to ensure the circulation of capital through local, human-to-human services, stagnant UBI payments are easily consumed by the rising costs of private-sector essentials. To achieve a soft landing, we need a dynamic model that prioritizes the Velocity of Money over the mere distribution of funds.

The Core Concept: The Civic Dividend

To bridge the gap between AI-driven efficiency and human necessity, we must introduce the Civic Dividend. This is not a social safety net designed for the desperate; it is a strategic economic platform designed for a high-functioning society. At its heart is a fundamental shift in the social contract: the Government as the Employer of First Resort.

In this model, the government doesn’t just step in when the private market fails; it proactively identifies and funds the “work that matters” — the essential maintenance of our physical, social, and cultural existence. These are the roles that require empathy, physical dexterity, and contextual judgment — capabilities that remain firmly in the human domain.

The Civic Dividend operates on the principle that human labor is a public asset. By offering potential employment in public works, care networks, and community resilience projects, the state ensures that most citizens have the opportunity to contribute. This creates a “Social Floor” of activity and income that is immune to algorithmic displacement.

Crucially, this work is not “make-work” intended to keep hands busy. It is the vital labor required to repair our crumbling infrastructure, support our aging population, and revitalize our neighborhoods. Unlike a handout, these wages are earned, providing the dignity of contribution while fueling the Velocity of Money. As these wages are spent at local bakeries, barbershops, and bookstores, they sustain a secondary human-to-human service economy that AI simply cannot replicate.

The Three Pillars of the AI New Deal

The success of the AI New Deal rests on a strategic focus on the “Un-automatable.” We must direct our collective energy toward three specific domains where human presence, judgment, and physical interaction are not just preferred, but essential for a thriving society.

Pillar 1: Physical and Digital Infrastructure

We are currently witnessing a “Tragedy of the Commons” in our physical world. Our bridges, transit systems, and power grids require more than just algorithmic optimization; they require physical intervention. The AI New Deal would mobilize a modern workforce to focus on Community Resilience — retrofitting cities for climate adaptation, urban “rewilding” to restore local ecosystems, and maintaining the physical nodes that allow our digital world to function. This work creates a tangible, high-quality public environment that serves as a shared wealth for all citizens.

Pillar 2: The Social and Care Fabric

As we automate cognitive tasks, the “Human Premium” in care becomes our most valuable asset. We are facing a global loneliness epidemic and an aging demographic that requires empathy, companionship, and nuanced psychological support. By professionalizing and scaling roles in elder care, mental health mentorship, and early childhood development, we transform these from marginalized sectors into the prestigious cornerstones of our new economy. These are roles where the goal is not “efficiency” (doing more with less time), but “effectiveness” (the quality of the human connection).

Pillar 3: Community Vitality and Cultural Resilience

In an era of AI-generated noise, local culture and verified information are at risk of erosion. The AI New Deal funds the “Civic Architects” — the local journalists, community theater directors, and public artists who document and celebrate the unique identity of a place. This pillar ensures that while our tools become more global and algorithmic, our lived experiences remain local, vibrant, and distinctly human. We aren’t just building roads; we are building the social connective tissue that prevents the isolation often triggered by rapid technological shifts.

Economic Mechanics: The Velocity of Human Connection

The fiscal engine of the AI New Deal is built on a fundamental economic principle: the Velocity of Money. In a hyper-automated private sector, capital tends to pool at the top, concentrating in the hands of those who own the compute and the algorithms. Without a mechanism to pull that capital back into the hands of the many, the local economy — the shops, services, and neighborhood hubs — withers.

The Civic Dividend solves this by creating a continuous loop of circulation. When the government pays a living wage to a community health worker or a local infrastructure specialist, that income doesn’t sit idle. It is immediately recycled into the Human-to-Human (H2H) service economy. This worker buys bread from a local baker, gets a haircut from a neighborhood barber, and visits a local gym. These secondary businesses thrive precisely because their customers have earned, discretionary income to spend.

To fund this transition, we must look toward Automation Royalties or “Compute Taxes.” Rather than taxing labor — which AI is making artificially cheap — we shift the tax burden to the high-margin output of automated systems. This creates a sustainable cycle: the efficiency of AI funds the resilience of the human community.

Furthermore, the AI New Deal acts as a natural Inflation Buffer. By investing in public housing maintenance, efficient public transit, and community-led food resilience, we lower the “floor” of the cost of living. This ensures that the wages provided by the Civic Dividend maintain high purchasing power, shielding the population from the volatility of a purely algorithmic private market.

Addressing the Critics: Efficiency vs. Resilience

Critics often argue that government-led employment is inherently “inefficient” compared to the lean, optimized nature of the private sector. From the perspective of human-centered innovation, this critique misses the mark because it uses the wrong metric for success. In an AI-dominated age, social resilience is a far more valuable outcome than marginal efficiency.

The private sector’s drive for efficiency is exactly what is displacing workers. If we allow that same logic to dictate our social response, we end up with a society that is “optimized” into instability. The AI New Deal isn’t about competing with AI on speed or cost; it is about providing the stability that the private market, by its very nature, cannot offer. We are designing for systemic health, not just quarterly throughput.

Another common concern is the fear of “make-work” or a lack of individual choice. However, the AI New Deal is designed as a platform, not a cage. By providing a guaranteed social floor of meaningful work, we actually increase career mobility. When a citizen’s basic survival and dignity are secured through the Civic Dividend, they are more — not less — likely to take risks, launch their own H2H small businesses, or pursue creative endeavors in the Human Premium Renaissance.

Finally, we must recognize that this is a choice of design. We can choose to view displaced workers as a “surplus” to be managed, or we can view them as a massive, untapped reserve of human talent ready to be deployed toward the public good. The “inefficiency” of paying a human to do what an algorithm could do is only an inefficiency if you ignore the catastrophic social cost of a disengaged, impoverished populace.

Conclusion: Designing a New Social Contract

We stand at a unique design crossroads in human history. The rapid advancement of artificial intelligence has presented us with a fundamental choice: do we design a future of automated irrelevance, where a vast majority of the population subsists on a dwindling digital handout, or do we design a future of civic abundance?

The AI New Deal is more than an economic policy; it is a reaffirmation of the value of human contribution. It recognizes that while technology can manage our systems, only humans can care for our communities, preserve our culture, and maintain our physical world. By moving toward a model of the Government as the Employer of First Resort, we ensure that the wealth generated by the AI revolution is directly reinvested into the human experience.

This “soft landing” requires us to be bold. We must stop asking how we will survive without the jobs of the past and start asking what kind of world we could build if we finally had the resources and the hands to do it. The Civic Dividend offers a path where technology does the “tasks” so that humans can finally do the “work” of being human—creating a society that is not just more efficient, but more resilient, more connected, and more purposeful.

The tools are in our hands, and the need is all around us. Now, we simply need the courage to sign a new contract with ourselves and build the future we actually want to live in.


Braden Kelley is a leading futurist and trusted voice in human-centered innovation and change. Stay tuned for next week’s installment in this series on the AI Soft Landing.

Frequently Asked Questions

How is the AI New Deal different from Universal Basic Income (UBI)?

While UBI provides a passive payment regardless of activity, the AI New Deal is a “Civic Dividend” based on active contribution. It positions the government as the Employer of First Resort, paying living wages for essential public work — such as infrastructure maintenance and care services — rather than providing a handout that lacks a connection to social agency or the local service economy.

How can the government afford to become the ‘Employer of First Resort’?

The funding shifts from taxing human labor to taxing the high-margin output of automated systems, often referred to as “Automation Royalties” or “Compute Taxes.” By capturing the wealth generated by AI-driven efficiency, the state can reinvest that capital into the Human-to-Human (H2H) economy, ensuring currency continues to circulate through physical communities.

Does this mean the government is creating ‘make-work’ just to keep people busy?

No. The AI New Deal focuses on the “Un-automatable” — high-value needs that are currently neglected, such as climate resilience, elder care, and mental health support. These are not arbitrary tasks; they are the essential services required for a functional, healthy society that AI cannot perform because they require human empathy, physical presence, and contextual judgment.

EDITOR’S NOTE: This is a visualization of but one possible future. I will be publishing other possible futures as they crystallize in my mind (or as you suggest them for me to explore).

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article, add images and create infographics.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Winning with Artificial Intelligence in 90 Days

Exclusive Interview with Charlene Li

The rapid evolution of artificial intelligence (AI) has shifted the technology from a futuristic curiosity to the primary engine of modern organizational growth. In an era defined by data-driven decision-making, the ability to effectively harness machine learning and predictive analytics is no longer just a competitive advantage; it is a fundamental requirement for long-term viability. However, the path to integration is rarely linear. Many organizations find themselves caught between the urgent need for transformation and the daunting reality of legacy infrastructure, talent shortages, and the cultural shifts required to move beyond small-scale pilots toward true enterprise-wide intelligence.

While the potential for increased efficiency and innovation is clear, the execution remains a significant hurdle.

The organizations that thrive in this new landscape are those that treat AI as a core strategic pillar rather than a plug-and-play software update. This requires a rethink of how human talent and machine intelligence coexist, ensuring that the technology enhances human capability rather than simply automating existing inefficiencies. Overcoming these challenges involves not just technical prowess, but a disciplined approach to change management and a clear vision for how intelligence will redefine the value the organization provides to its customers.

Today we will dive deep into what it takes to quickly achieve success with artificial intelligence with our special guest.

Creating a 90-Day Blueprint to Win with Artificial Intelligence

I recently had the opportunity to interview Charlene Li, a New York Times bestselling author, keynote speaker, and AI transformation strategist. Her latest book, Winning with AI: The 90-Day Blueprint for Success, co-authored with Dr. Katia Walsh, gives senior leaders a practical framework for moving from AI experimentation to measurable business value. Her prior books include The Disruption Mindset, Open Leadership, and Groundswell. Fast Company named her one of the most creative people in business, and she has worked with global organizations including 14 of the Dow Jones Industrial 30 companies. She is the founder of Altimeter Group (acquired by Prophet) and currently leads Quantum Networks Group.

Below is the text of my interview with Charlene and a preview of the kinds of insights you’ll find in Winning with AI: The 90-Day Blueprint for Success presented in a Q&A format:

1. What confusion is being created by speaking of “AI” as one thing when there are different kinds of AI, and how does this hold back AI adoption?

When people say “AI,” they’re usually thinking ChatGPT. But ChatGPT is generative AI — and that’s just one of three types of AI showing up in business today. There’s also predictive AI, which has been quietly running in your CRM, your fraud detection, and your streaming recommendations for years. And there’s agentic AI, which takes autonomous action toward a goal rather than waiting for a prompt.

The Oracle (predictive), the Creator (generative), and the Agent (agentic) — that’s how Katia and I describe them in Winning with AI. They do fundamentally different things, and they require fundamentally different things from you.

The conflation matters because it leads to bad decisions. Leaders see a generative AI demo, get excited, and ask their teams to “do something with AI” — when the actual business problem might be better solved with predictive AI (and probably already could’ve been three years ago). Or they hear “agentic AI” and assume their organization is ready to deploy autonomous agents when they haven’t even gotten generative AI into their workforce yet.

The winners aren’t choosing among types — they’re using all three strategically, in combination. A customer care transformation might use predictive AI to route inquiries, generative AI to draft responses, and agentic AI to handle routine cases autonomously. Once you can see the three distinctly, the question stops being “what can I do with AI?” and starts being “what can AI do for me?” That’s the question that actually unlocks value.

2. What are some of the key characteristics of AI inertia and some of the best ways to break free?

We call it pilot purgatory — and almost every organization we work with is stuck there. The signs are easy to spot: dozens of disconnected pilots, lots of conference attendance, lots of slide decks, no measurable financial impact. An MIT study found 95% of AI initiatives fail to scale. That’s not a technology failure. It’s a failure of leadership and culture.

The classic characteristics:

    • Use cases as a strategy. Many use cases equals procrastination. A long list of pilots is how organizations look busy without committing to anything.
    • Diffused accountability. When the CIO, CFO, and CMO all “share” responsibility for AI, no one owns the outcome.
    • Waiting for the foundation to be perfect. Clean data, the right platform, the perfect org structure — these become reasons to delay rather than constraints to solve through.
    • Confusing motion with progress. Running pilots feels like progress. It isn’t, unless those pilots are tied to your most important business problems.

To break free: pick your biggest strategic problems, figure out how AI solves them, invest heavily in those solutions, and move with urgency. Appoint one AI value owner who lives, breathes, and dreams AI outcomes. Kill pilots that aren’t on a path to scale. And replace “fail fast” with “learn fast” — nobody actually rewards failure, and the language of failure lets people walk away from things that should be pushed through.

Speed is the new moat. The companies that win aren’t the ones with the best technology. They’re the ones that adapt faster than their competitors.

3. There are still a lot of people out there not using AI (or not realizing that they are). What are some of the best ways for people to get started with AI?

Most people are already using AI — every spam filter, every Google Maps route, every recommendation on a streaming service is AI. So the real question is: how do you get started with the kind of AI that’s reshaping work right now, which is generative AI?

My advice is genuinely simple. Pick one of the major tools — Claude, ChatGPT, Gemini, Copilot — and start using it for one real task you do every week. Not a toy task. A real one. Drafting an email. Prepping for a meeting. Summarizing a long document. Brainstorming an approach to a problem you’re stuck on.

Two practical tips that make a big difference:

Write better prompts. A good prompt has a role (“Act as a marketing strategist”), instructions (what you want done), context (the background the AI needs), and an output format (memo, table, slide outline). Then refine through dialogue. Most people give AI two sentences and judge it on the result. Give it two paragraphs and you’ll be amazed.
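The four-part structure Charlene describes can be made concrete. Here is a minimal sketch (my own illustration, not from the book) that assembles a role, instructions, context, and output format into one prompt; the example values are hypothetical:

```python
def build_prompt(role: str, instructions: str, context: str, output_format: str) -> str:
    """Assemble the four parts of a well-structured prompt into one message."""
    return "\n\n".join([
        f"Act as {role}.",                          # role
        instructions,                               # what you want done
        f"Context: {context}",                      # background the AI needs
        f"Format the output as {output_format}.",   # output format
    ])

prompt = build_prompt(
    role="a marketing strategist",
    instructions="Draft a launch email for our new analytics feature.",
    context="Audience: existing customers on the free tier; tone: friendly, concise.",
    output_format="a short memo with a subject line and three bullet points",
)
print(prompt)
```

Adding Charlene’s flipped-interaction line — “Ask me any clarifying questions you may have.” — to the instructions turns the same template into a conversation starter.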

Try the flipped interaction. Instead of asking AI for an answer, ask it to ask you questions until it has enough context to give a good answer. For example, at the end of a prompt, add this sentence: “Ask me any clarifying questions you may have.” It turns your prompt into a conversation.

I think of AI fluency as learning to eat with chopsticks: at first you’re concentrating on every motion, and eventually it’s just how you eat. You won’t get there by reading about it. You get there by using it. Every day. On real work.

4. Does AI safety really matter? It seems like all of the major AI players are just focused on speed and getting to AGI before China, am I wrong?

You’re not wrong about what the AI players are doing. But you’re probably not playing that game – more on that below. First, I’d push back on the framing that safety and speed are opposites.

Think of Formula 1. The drivers who win championships have absolute confidence in their brakes, their crash structures, their fire suppression systems. That’s why they can push so hard on speed. Safety is what makes speed possible. The companies moving fastest on AI adoption aren’t the ones cutting corners on responsibility — they’re the ones with the highest ethical standards, because trust eliminates friction. When your team knows where the guardrails are, when your customers trust your intentions, when your board has confidence in your approach, you can move at the speed AI demands.

The 2024 Edelman Trust Barometer found that 43% of people would reject AI in products and services if they don’t believe the innovation has been thoroughly scrutinized. That’s not a PR problem — it’s a revenue and competitive position problem.

On the AGI race specifically, the geopolitical framing oversimplifies what’s actually a much more textured conversation about how AI is deployed within companies, governments, and communities. Most leaders I work with aren’t worrying about AGI — they’re worrying about whether their AI customer service tool is treating customers fairly, whether their AI-driven hiring screen is introducing bias, and whether their data is being used in ways customers didn’t consent to. Those are the safety questions that matter for the next five years, regardless of what the frontier players are doing.

5. Where is the government being too hands off with AI and its impacts, and what conversations should governments and societies be having about AI and its impacts that they’re not?

I’ll be careful here because I’m not a policy person — I work with the leaders implementing AI inside organizations. But from that vantage point, a few things stand out.

The conversation we aren’t having enough is about workforce transition. Not “will AI take jobs” — we’ve been arguing about that abstractly for three years. The real question is what happens to the millions of people whose roles will substantially change in the next five years, and who’s responsible for helping them adapt. Right now, that’s mostly being left to individual employers, and the gap between what enlightened employers are doing and what the median employer is doing is enormous. That gap will become a societal problem long before regulators catch up.

The second underdiscussed conversation is about education. We’re training a generation of students with curricula designed for a pre-AI world. By the time we figure out what AI fluency looks like in K–12, the kids who needed it most will be in the workforce.

Third — and this is where I’d actually like to see governments lean in more — is data. Most AI regulation focuses on the models. The leverage is in the data: who owns it, how it can be used, what consent looks like in a world where data collected for one purpose can be repurposed for AI training that wasn’t imagined when it was collected.

That said, regulations always lag technology. Anchoring your responsible and ethical AI policy in your organization’s values rather than waiting for rules is the right move, regardless of what governments do.

6. What are the key pillars that form the basis of a strong AI foundation for those who seek to take full advantage of AI in their organization?

In Winning with AI, Katia and I lay out four building blocks. They develop together, not sequentially.

Mindset — the cultural ability to move at AI’s speed. Speed, focus, customer-centricity, experimentation, and learning from setbacks rather than treating them as evidence that the technology doesn’t work. Without the right mindset, you can have the best tools in the world, and they’ll sit unused.

Skillset — AI fluency across the workforce, not just in IT. Everyone needs to understand what AI can and can’t do, how to use it responsibly, and how to apply it to their actual work.

Toolset — the technical foundation. We tell leaders to build with LEGO, not cathedrals. Modular, interchangeable components you can swap as the technology evolves, sitting on top of data that’s good enough to start with.

Decision-set — the governance and decision-making structures that let you move fast without breaking things. Who decides what, how quickly, with what oversight.

The mistake organizations make is treating these as a sequence — first we’ll fix the data, then we’ll train people, then we’ll deploy. That sequence will take you a decade. The right approach is to build the blocks while delivering value, using each AI application to strengthen multiple blocks at once.

And one piece that wraps all four: leadership. Without active, visible commitment from the top, the four building blocks don’t compound. With it, they accelerate.

7. Of all the outcomes that the different types of AI can achieve, which activities create the most value for organizations?

We frame the value AI creates in three areas: engagement, efficiencies, and reinvention.

Engagement is about deepening relationships with customers and employees through personalization, prediction, and proactive service. Anticipating what someone needs before they articulate it.

Efficiencies are about doing what you already do, faster and cheaper. This is where most organizations start — and where most get stuck. Efficiency gains are real, but they’re easy for competitors to replicate, which means they don’t create lasting advantage.

Reinvention is the most transformational and the most uncomfortable. It’s not asking “how can we do what we do faster?” — it’s asking “what becomes possible now that the old constraints are gone?” New business models. New revenue streams. New markets that were never economical before.

The trap is thinking efficiency is AI’s value. We call it the efficiency trap. Companies that limit themselves to efficiency are using a strategic weapon as a cost-cutting tool. The real competitive advantage comes from engagement and reinvention.

A great example: Coursera. Translation used to cost about $10,000 per course, which made global expansion economically impossible at the scale of their 5,000+ course catalog. Generative AI eliminated that constraint overnight. CEO Jeff Maggioncalda saw it immediately and launched Project Genesis by the end of 2022. That’s reinvention — AI removing a constraint that defined the business model.

If I had to pick one activity that creates the most value, it would be: using AI to remove a constraint that has shaped your industry’s economics for so long that nobody questions it anymore.

8. There was a lot of talk for a while about becoming an AI-first organization. Is this something that companies should be trying to do?

No. Be AI-ready instead.

“AI-first” is a technology company’s framing. It puts the technology in the driver’s seat, which sounds visionary but in practice produces dozens of disconnected pilots with no strategic impact. You end up chasing AI because it’s shiny rather than because it solves a real problem.

“AI-ready” is a business leader’s framing. It puts strategy in the driver’s seat. You’re building the culture, the skills, the decision systems, and the technical foundation that let AI create real value against the strategic priorities you already have.

Said simply: AI-first is a technology mindset. AI-ready is a business mindset.

You don’t actually need an AI strategy. You need a business strategy that uses AI. Anyone selling you on an AI strategy is selling you the wrong thing.

9. What should people be doing as individuals to maintain their value to their organizations and to grow their careers?

Three things, in order.

One: develop genuine AI fluency. Not “I’ve used ChatGPT a few times” fluency. Real fluency — the kind where AI is woven into how you think, prepare, decide, and communicate. The people and organizations who get to AI fluency in 2026 will pull dramatically ahead of those who don’t, and the gap will be very hard to close once it opens.

Two: deepen what’s uniquely human. AI can amplify cognition at speeds and scales no individual can match. What it can’t do is exercise empathy, self-reflection, intuition, judgment, and wisdom. These five traits — the foundation of what Katia and I call “superhumans” in the book — become more valuable, not less, as AI handles more of the cognitive work. The leaders who pair AI’s reach with these distinctly human capacities are the ones creating the most value.

Three: build a lifelong learning practice. The shelf life of any specific skill is shrinking. The skill that doesn’t depreciate is the ability to learn — quickly, repeatedly, with intellectual humility. Normalize not knowing. Embed reflection into how you work. Treat curiosity as a professional asset, not a side hobby.

If you do those three things, you’ll be more valuable in the future than you are today, regardless of what happens to your specific role.

10. What have organizations gotten wrong about rolling out AI and what can the early adopters do to recover from botched initial rollouts?

The biggest things organizations get wrong:

  • Treating AI as a technology project. It’s a business initiative for value creation that happens to use technology. When IT owns it, it stays small.
  • Use cases instead of strategy. A laundry list of pilots is procrastination dressed up as progress.
  • Diffused accountability. Without a single AI value owner, the work fragments.
  • Skipping the people work. Throwing tools at employees without addressing the fear underneath. Until fear is replaced by trust, no amount of training will change behavior.

If you’ve already botched the rollout, here’s the recovery path:

Stop and audit. What’s actually scaling, what’s not, what’s draining resources without producing value? Be honest. Sunset the dead ends.

Appoint one accountable AI leader. If no single person is accountable for AI value creation across the enterprise, fix that this quarter. Not part-time, not committee-led — one person whose performance is measured on the value that AI creates.

Pick one strategically meaningful problem and go after it. Not the easiest problem. The one whose solution would matter most to the business.

Learn from Ally Bank. When generative AI emerged, Ally’s CIO Sathish Muthukrishnan deliberately chose the most resistant audience — customer service agents — and a low-stakes problem: summarizing customer calls. The result was so valuable that the agents who’d been most skeptical became the loudest advocates: “Don’t take this away from me.” Targeting the skeptics with a real win is one of the most powerful change strategies we’ve seen.

A botched rollout isn’t a death sentence. It’s actually a useful clearing of the underbrush — assuming you learn from it.

11. Several studies have come out recently about the negative effects of AI on human cognition. Any tips for how to best use AI without degrading your brain?

This is a real concern and worth taking seriously. The risk isn’t AI itself — it’s lazy AI use. Using AI to skip thinking rather than to enhance it.

A few habits I’ve found useful:

Think first, then prompt. Before going to AI for an answer, write down what you think. Coursera’s Jeff Maggioncalda calls this cognitive bootstrapping — write your perspective on a decision, then ask AI to challenge it: “What are the strengths and weaknesses of this view? What are my blind spots? What would you recommend I improve?” AI sharpens your thinking instead of replacing it.

Treat AI outputs as drafts, not deliverables. Read critically. Push back. Ask why. Verify facts. The moment you stop questioning AI’s outputs is the moment your thinking starts to atrophy.

Protect deep work. Schedule time for thinking that doesn’t involve AI at all. Reading, writing, reflecting, walking — the unstructured time where your brain consolidates what it knows. AI can compress research, but it can’t compress wisdom. That still has to come from lived experience, integrated over time.

Notice the difference between using AI to accelerate something you understand and using AI to substitute for understanding. Acceleration is healthy. Substitution erodes you.

The promise of AI isn’t to do our thinking for us. It’s to help us think better. The discipline is staying on the right side of that line.

12. Any question you wish I had asked but didn’t?

Yes — I’d love a question about the human possibility on the other side of this.

Most AI conversation is about risk, displacement, and disruption. Those are real. But the conversation Katia and I get most excited about is what becomes possible when AI handles the cognitive work that has been depleting people for decades — the synthesis, the routing, the routine analysis — and frees up human capacity for what only humans can do.

We call those people “superhumans” — not because they’re enhanced by technology in some sci-fi sense, but because they finally have the room to be more deeply human. To exercise empathy, self-reflection, intuition, judgment, and wisdom at a level that’s been crowded out by cognitive overload.

The first companies to deliberately develop an organization filled with superhumans won’t just have a competitive advantage. They’ll be creating an entirely new form of value — one we haven’t fully named yet. That’s the future I want leaders thinking about. Not “how do I survive AI?” but “what becomes possible for my people on the other side of this?”

Dream it. Then build it.

Conclusion

Thank you for the great conversation Charlene!

I hope everyone has enjoyed this peek into the mind of one of the women behind the insightful new title Winning with AI: The 90-Day Blueprint for Success!

Image credits: Charlene Li, Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Designing Work for Humans and AI Agents to Do Together

LAST UPDATED: April 29, 2026 at 6:28 PM

Designing Work for Humans and AI Agents to Do Together

by Braden Kelley and Art Inteligencia


The Work Design Gap

We are not struggling to build artificial intelligence. We are struggling to design work for it.

Across industries, organizations are layering AI onto workflows that were never meant for collaboration. The result is predictable: inefficiency, mistrust, and unrealized value.

The real divide is not human versus AI. It is between work that is intentionally designed for collaboration and work that is not.

Why Traditional Tools Fail Us

Most of our management tools were built for a different era.

  • Process maps assume predictability
  • Org charts assume static roles
  • RACI models assume clear ownership

But human–AI collaboration is dynamic, contextual, and continuously learning. These tools help us optimize yesterday’s work, not design tomorrow’s.

What we need is a new visual language for collaboration.

Introducing the Human–AI Collaboration Canvas

The infographic below is not just a diagram. It is a thinking tool.

Its purpose is to make invisible interactions visible, clarify roles without over-constraining them, and embed judgment, trust, and learning into how work gets done.

This is a shift from process design to system design for collaboration.

Designing Work for Humans and AI Infographic

The Three-Lane Model: A More Honest Representation of Work

The canvas is built around three interconnected lanes:

The Human Lane

Where judgment, empathy, ethics, and accountability live. Humans frame the problem, not just solve it.

The AI Agent Lane

Where scale, speed, pattern recognition, and automation operate. AI expands what is possible.

The “Together” Lane

This is where value is actually created. Co-creation, co-decision, and co-learning happen here.

If you are not explicitly designing the middle lane, you are leaving value on the table.

The Work Journey: Sense → Decide → Act → Learn

Instead of rigid workflows, the canvas maps work as an adaptive cycle:

  • Sense: Understand context and gather signals
  • Decide: Blend human reasoning with AI recommendations
  • Act: Execute with scale and oversight
  • Learn: Reflect, adapt, and improve

Learning is not the end of the process. It feeds everything.
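As a thought experiment, the Sense → Decide → Act → Learn cycle can be sketched in code. Everything below (the function names, the handoff between AI recommendation and human override) is a hypothetical illustration of the cycle's shape, not an implementation of the canvas itself:

```python
# A minimal, hypothetical sketch of the Sense -> Decide -> Act -> Learn cycle.
# All names here are illustrative; the canvas prescribes no code.

def sense(signals):
    """Gather context: in practice an AI agent aggregates raw signals."""
    return {"summary": sorted(signals), "count": len(signals)}

def decide(context, ai_recommendation, human_override=None):
    """Blend human reasoning with an AI recommendation.
    A human override always wins; otherwise accept the AI suggestion."""
    return human_override if human_override is not None else ai_recommendation

def act(decision):
    """Execute with oversight; here, simply record what was done."""
    return {"executed": decision}

def learn(outcome, log):
    """Feed the outcome back: learning feeds every other step of the cycle."""
    log.append(outcome)
    return log

log = []
context = sense(["customer email", "sales dip"])
decision = decide(context, ai_recommendation="draft a reply")
outcome = act(decision)
learn(outcome, log)
print(log)  # [{'executed': 'draft a reply'}]
```

Note that `learn` does not terminate the loop; the log it builds is exactly what the next `sense` call would draw on, which is the point the canvas makes in prose.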

Collaboration Nodes: Where the Magic (or Failure) Happens

At key points in the journey are collaboration nodes—the moments where humans and AI interact.

Each node forces three critical questions:

  • Who leads?
  • What is the role of the other?
  • What is at stake?

Most AI failures are not technical failures. They are interaction design failures.
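One way to make the three questions unavoidable is to treat a collaboration node as a data structure whose fields must be filled in before the workflow ships. The structure below is a hypothetical sketch (the class name, fields, and example values are all invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical sketch: a collaboration node whose three critical
# questions are explicit fields rather than implicit assumptions.

@dataclass
class CollaborationNode:
    name: str
    leader: str           # Who leads? "human" or "ai"
    supporting_role: str  # What is the role of the other?
    stakes: str           # What is at stake?

    def validate(self):
        # An undesigned node is an interaction design failure in waiting.
        assert self.leader in ("human", "ai"), "every node needs a designated lead"
        assert self.supporting_role, "the other party's role must be stated"
        assert self.stakes, "unstated stakes hide risk"
        return True

node = CollaborationNode(
    name="loan approval",
    leader="human",
    supporting_role="AI surfaces risk factors and precedent cases",
    stakes="regulatory compliance and customer livelihood",
)
print(node.validate())  # True
```

The design choice worth noticing: `validate` fails loudly on a blank field, which is a small, concrete way of refusing to let an interaction go undesigned.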

Making Judgment Visible

One of the biggest risks in AI adoption is invisible decision-making.

The canvas highlights:

  • Where human judgment is required
  • Where AI recommendations are sufficient
  • Where escalation is necessary

Automation without explicit judgment design is just risk at scale.
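Explicit judgment design can be as simple as a visible routing rule that states when an AI recommendation suffices, when a human must decide, and when to escalate. The sketch below is purely illustrative; the decision types and the confidence threshold are assumptions, and any real deployment would tune them to its own risk profile:

```python
# Hypothetical sketch of explicit judgment design: a routing rule that
# makes the human/AI/escalation boundary visible. The decision types and
# the 0.9 threshold are illustrative assumptions, not recommendations.

HIGH_IMPACT = {"credit denial", "medical triage", "termination"}

def route(decision_type, ai_confidence):
    if decision_type in HIGH_IMPACT:
        return "human judgment required"
    if ai_confidence >= 0.9:
        return "AI recommendation sufficient"
    return "escalate for review"

print(route("email routing", 0.95))  # AI recommendation sufficient
print(route("credit denial", 0.99))  # human judgment required
print(route("email routing", 0.6))   # escalate for review
```

The value is less in the code than in the artifact: a rule like this can be reviewed, audited, and argued about, which is precisely what invisible decision-making prevents.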

Designing for Trust, Not Just Performance

Capability alone is not enough. Systems must be trusted to be used effectively.

This requires:

  • Transparency
  • Explainability
  • Auditability

The real question is not “Can the AI do this?” but “Will humans trust and use this appropriately?”

Learning Loops: The System That Gets Smarter

The canvas includes two reinforcing learning loops:

  • AI Learning Loop: Data → Model → Output → Feedback → Improvement
  • Human Learning Loop: Experience → Reflection → Insight → Better decisions

The real competitive advantage is not AI itself. It is how quickly your combined system learns.

Risk, Ethics, and Failure by Design

No system is perfect. The best systems are designed with failure in mind.

The canvas highlights:

  • Bias and fairness
  • Privacy and security
  • Safety and compliance

It also asks essential questions:

  • What happens if the AI is wrong?
  • What happens if the human is wrong?
  • How do we recover?

Resilience comes from designing for breakdowns, not ignoring them.

Human-AI Agent Work Collaboration Canvas

How to Use This Canvas

This is a practical tool, not a theoretical one.

  • Use it in workshops to map collaboration
  • Audit existing workflows
  • Design new human–AI systems from scratch

A simple place to start:

  1. Map one critical workflow
  2. Identify collaboration nodes
  3. Redesign the “together” lane first

Designing for a More Human Future

AI does not reduce the need for humans. It raises the bar for how we design work.

The goal is not efficiency alone. The goal is better decisions, better experiences, and better outcomes.

The organizations that win will not be the ones with the most AI. They will be the ones who best design how humans and AI work together.

EDITOR’S NOTE: You should read this article too to learn more about atomizing work for man and machine to do together.

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT and Google Gemini to clean up the article, add images and create infographics.

Image credits: Google Gemini, ChatGPT


Why the AI Data Centers of 2030 Will Be Sovereign Fortresses

The Great Decoupling

LAST UPDATED: April 27, 2026 at 6:17 PM

Why the AI Data Centers of 2030 Will Be Sovereign Fortresses

GUEST POST from Art Inteligencia


The End of the “Cloud” Illusion

For over a decade, we have been captivated by the metaphor of the “Cloud” — a term that suggests something ethereal, weightless, and omnipresent. But as we navigate the complexities of 2026, the veneer is being stripped away. We are realizing that the intelligence driving our civilization is not floating in the sky; it is anchored in massive, high-heat industrial complexes that represent the most concentrated physical assets in human history.

The Convergence of Geopolitical Risk

The shift from digital convenience to National Survival is being driven by a perfect storm. The insatiable energy hunger of agentic AI models has collided with a period of intense global instability. We can no longer view data centers as mere real estate or IT infrastructure. They have become the “high ground” of the modern era. If these cognitive nodes are compromised, the ripple effect doesn’t just crash an app — it destabilizes the national experience.

The Thesis: The Rise of the Fortress Data Center

To ensure true national resilience, we must move beyond the “open campus” model of Silicon Valley. We are theorizing a future where AI data centers must evolve into self-contained, military-grade sovereign zones. These facilities will likely be:

  • Locally Powered: Utilizing dedicated nuclear SMRs to decouple from the fragile civilian grid.
  • Physically Fortified: Protected with the same kinetic rigor as a strategic missile silo.
  • Logically Isolated: Air-gapped to ensure that the nation’s “Digital Brain” remains untainted by external interference.

The Energy Sovereignty Mandate

The era of the data center as a passive consumer of the public utility is coming to an end. As AI models scale, their appetite for electricity has transitioned from a manageable operational expense to a systemic threat to civilian infrastructure. To maintain social license and operational continuity, the “Fortress Data Center” must become an island of power.

The Fragility of the Public Handshake

For years, tech giants have relied on “handshake deals” with regional utilities, often receiving preferential access to the grid. However, the sheer scale of 2026’s compute requirements has pushed these grids to a breaking point. When a single training run consumes enough energy to power a mid-sized city, the risk of “energy poverty” for the average citizen becomes a human-centered design crisis. Sovereignty requires that we stop competing with the public for the same electrons.

The Nuclear Option: Microgrids and SMRs

The transition toward Small Modular Reactors (SMRs) is no longer a “futurologist’s dream” — it is a mechanical necessity. By embedding nuclear or advanced geothermal power directly into the facility’s footprint, we create an isolated power source that is:

  • Resilient: Immune to regional grid failures, cyber-attacks on public utilities, or physical sabotage of long-distance transmission lines.
  • Scalable: Power generation that grows in lockstep with compute capacity, without requiring decade-long public infrastructure projects.
  • Sustainable: Providing the high-density, carbon-free baseload power required for 24/7 AI operations.

The Design Principle: We must decouple the “National Brain” (the AI) from the “National Body” (the civilian grid) to ensure that the pursuit of innovation never compromises the basic human need for heat, light, and stability.

Signal 2: The Data Center as a Kinetic Target

In the early 2020s, we viewed data center security through the lens of firewalls and encryption. But as we move through 2026, the paradigm has shifted. If a nation’s economy, defense, and essential services are orchestrated by a specific set of GPU clusters, those clusters become the highest-value kinetic targets in any conflict. We must stop designing them like warehouses and start designing them like aircraft carriers.

AI Data Center Drone Defense

Transitioning to the “Military Base” Model

The “Fortress Data Center” logic dictates that physical security must match the strategic importance of the data held within. This evolution requires a fundamental shift in architecture and protocol:

  • Physical Hardening: Implementing reinforced, blast-resistant shells and subterranean compute floors to protect against aerial or domestic threats.
  • Exclusion Zones: Establishing significant geographic perimeters and “no-fly” zones, effectively transitioning these sites into sovereign military installations.
  • On-Site Readiness: Constant tactical presence to defend against unconventional warfare, ensuring the “Digital Front Line” is never left vulnerable to physical breach.

Sovereign Silos and Logical Air-Gaps

Beyond physical walls, we must address Logical Sovereignty. A national AI asset cannot be fully secure if it is perpetually tethered to the public internet. The next generation of security involves “Air-Gapping”—the practice of physically isolating a computer network from unsecured networks.

By creating Sovereign Silos, we prevent the “poisoning” of national intelligence models from external actors and ensure that in the event of a global network collapse, the nation’s internal cognitive capacity remains operational.

The Futurology Perspective: We are moving from the era of “Open Innovation” to the era of “Fortified Intelligence.” The goal is not to hinder progress, but to ensure that our progress cannot be used as a weapon against us.

Designing the Experience of Security

As we fortify the physical and digital walls of our AI infrastructure, we face a profound Experience Design challenge. How do we prevent these “Fortress Data Centers” from becoming symbols of state opacity or fear? In 2026, the success of a national security strategy depends as much on Trust Architecture as it does on concrete and steel.

The Transparency Paradox

We are entering a Transparency Paradox: the more critical an AI system becomes to national security, the more secret its inner workings must be to prevent exploitation. Using Human-Centered Design principles, we must design interfaces and communication loops that provide the public with “Proof of Integrity” without revealing “Methods of Operation.”

  • Auditability: Creating independent, high-clearance civilian oversight boards to ensure the “Fortress” remains aligned with democratic values.
  • Public ROI: Clearly demonstrating how the security of these sites directly enables the stability of civilian services — from healthcare logistics to disaster response.

Trust Literacy and the Citizen Experience

We must build Trust Literacy within the population. If citizens perceive these centers only as “military black boxes,” we risk a breakdown in social cohesion. The experience of the “Fortress” must be framed as a Digital Utility — much like a water treatment plant or a power station — that is guarded not to exclude the public, but to guarantee their safety and continuity of life.

Distributed Nodes: The Anti-Fragile Strategy

From a Systems Thinking perspective, a single, massive “Fortress” is a single point of failure. The superior experience of security lies in a distributed network of regional hubs.

  • Hyper-Localization: Placing smaller, fortified nodes near the communities they serve to reduce latency and improve regional resilience.
  • Redundancy by Design: Ensuring that if one node is taken offline or isolated, the national “Neural Network” can reroute and adapt instantly, mimicking biological resilience.

Thought Leader Insight: Security isn’t just the absence of threat; it is the presence of confidence. We don’t just design the bunker; we design the relationship between the bunker and the people it serves.

The Strategic Implications: A New Innovation Roadmap

The shift toward fortified, sovereign AI infrastructure isn’t just a defensive maneuver; it is a fundamental pivot in how we approach the Innovation Lifecycle. In the past, we optimized for “Speed to Market.” In the landscape of 2026, the new north star is “Speed to Resilience.” This requires a total realignment of our strategic roadmaps.

For Leaders: From Efficiency to Robustness

Business and technology leaders must move beyond the “Just-in-Time” compute model. The era of relying on offshore, third-party clusters for mission-critical intelligence is closing. Strategic roadmapping now requires:

  • Infrastructure Integration: Treating compute and energy as a single, inseparable architectural stack.
  • Risk Re-evaluation: Factoring “Geopolitical Latency” into every project — the risk that a global event could sever access to centralized public clouds.

For Policy Makers: Funding the Digital Front Line

The “Fortress Data Center” cannot be built on corporate balance sheets alone. This is a public-private imperative. We are seeing the emergence of new funding mechanisms, such as:

  • National AI Sovereignty Acts: Legislative frameworks that provide subsidies for companies building “Sovereign-Ready” infrastructure.
  • Regulatory Sandboxes: Fast-tracking the deployment of Small Modular Reactors (SMRs) specifically for data center use, bypassing the decades-long red tape of traditional nuclear projects.

For Humanity: Ensuring the “Dividends of Security”

As a Human-Centered Innovation leader, my greatest concern is that these walls will lock innovation away from the people. Our roadmap must include “Avenues of Access.” While the hardware is fortified and the power source is isolated, the outputs — the medical breakthroughs, the climate models, and the educational tools — must remain a public good.

Strategic Takeaway: We aren’t just building walls; we are building a foundation. Innovation thrives when the underlying system is stable. By securing the “where” and “how” of AI, we liberate the “what” and “why” for everyone.

Conclusion: Choosing Our Preferable Future

The transition of AI data centers into sovereign, nuclear-powered fortresses is not an inevitability to be feared, but a strategic design choice to be mastered. As we look ahead from 2026, we must acknowledge that the “Wild West” era of digital infrastructure is over. We are entering the era of Structural Integrity.

The Choice: Proactive Design vs. Reactive Crisis

We have a window of opportunity to choose our path. We can wait for a catastrophic system failure — a grid collapse or a kinetic strike on a vulnerable node — to force our hand, or we can proactively apply FutureHacking™ principles to build resilience into the very foundations of our digital age.

The Goal: A Fortified but Flourishing Society

The ultimate goal of the “Fortress Data Center” is not isolationism; it is Insulation. By insulating our most critical cognitive assets from the volatility of global energy markets and geopolitical conflict, we create the stability required for the next great leap in human experience.

  • Security provides the safety to experiment.
  • Sovereignty provides the freedom to operate.
  • Isolated Power provides the continuity to grow.

True innovation isn’t just about what the AI can do; it’s about building a world where the AI’s “home” is as secure as the values it is meant to protect. Let’s design an infrastructure that doesn’t just survive the future, but defines it.

Final Thought: In the race for AI supremacy, the winner won’t just have the best algorithms; they will have the most resilient “ground truth.” The fortress isn’t a retreat — it’s a launchpad.

Frequently Asked Questions

1. Why can’t we just use the existing electrical grid for AI data centers?

The current grid is built for predictable civilian and industrial use. AI training requires massive, concentrated loads that can destabilize local power for residents. By using isolated sources like SMRs, we protect the public’s energy security while ensuring the AI never faces a “brownout.”

2. Does making data centers military bases mean civilian AI development will stop?

Not at all. Think of it like the GPS system: it is maintained and secured by the military for national resilience, yet it provides the foundation for thousands of civilian innovations. The “fortress” protects the hardware, not the creativity.

3. What makes a data center a “sovereign” asset?

Sovereignty in this context means independence. A sovereign data center isn’t reliant on international supply chains for power or vulnerable public networks for its logic. It is a self-sustaining node that can continue to function even if the global internet or local grid is compromised.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Gemini


The Human-Premium Renaissance

Another AI Soft Landing Scenario Exploration

LAST UPDATED: April 24, 2026 at 6:52 PM

The Human-Premium Renaissance

by Braden Kelley and Art Inteligencia


I. Beyond the “Empty Desk”

The prevailing narrative surrounding embodied AI and robotics is often one of inevitable displacement. As automation reaches a scale where it can replicate human labor at a fraction of the cost, the fear of an “empty desk” economy—one where human participation is optional—has become a central anxiety of the 2020s.

Defining the “Soft Landing”

A soft landing represents a societal transition that sidesteps the extremes of total economic collapse or violent revolution. It is the search for a new equilibrium where human value is not just preserved, but reimagined within a landscape of infinite machine productivity.

The Core Thesis: Value in the Biological

While many forecast a return to a “Victorian” class structure defined by service and servitude, this scenario proposes a more viable, long-term alternative. The Human-Premium Renaissance suggests that:

  • Commoditized Perfection: As AI makes perfect execution free, the market value of “flawless” drops to zero.
  • The Premium of Imperfection: Economic value will migrate to the “biological origin”—the hand-carved, the human-thought, and the uniquely flawed.
  • Narrative over Utility: We are moving toward an era where we no longer pay for what a product does, but for the human story behind its creation.

In this scenario, human labor isn’t a cost to be minimized; it is the unique identifier that prevents a product from becoming a valueless commodity.

II. The Framework: Utility Floor vs. Premium Ceiling

The viability of this soft landing rests on a bifurcation of the economy into two distinct layers. This structure allows for mass survival through automation while preserving a high-value labor market for human endeavor.

The Utility Floor: The World of “Perfect Commodities”

In this layer, AI and embodied robotics handle the fundamental requirements of modern life. Logistics, basic food production, energy management, and routine diagnostics are optimized to a point where the marginal cost of production approaches zero.

  • Standardization: Everything produced at the floor is “perfect” but uniform.
  • Abundance: Scarcity is eliminated for basic needs, preventing the societal collapse often predicted in mass-unemployment scenarios.
  • Devaluation: Because these goods are generated without human effort, they lack the “prestige” required to command a premium price.

The Premium Ceiling: The Human Narrative

Above the utility floor sits the “Premium Ceiling.” This is a market tier where consumers—who now have their basic needs met by the floor—spend their discretionary wealth on items and services that possess a biological provenance.

  • Authenticity as the New Scarcity: In a world of infinite digital and robotic replicas, the one thing that cannot be mass-produced is the unique perspective and history of a specific human being.
  • The Human-Centric Premium: We see the rise of “Slow Innovation,” where the value is found in the time, struggle, and intent behind the creation rather than the speed of its delivery.

The Strategic Shift: From Utility to Origin

This transition represents a fundamental shift in how we define economic value. We move away from asking “What can this do for me?” (Utility) and toward asking “Who made this, and what is their story?” (Origin).

While the Utility Floor keeps society running, the Premium Ceiling gives society a reason to keep trading, creating, and connecting.

III. Economic Viability: Why This Model Works

The skeptic’s immediate response to a “human-premium” model is usually grounded in the cold logic of the bottom line: If a machine can do it cheaper, why would anyone pay for a human? The answer lies in the shifting definition of value in a post-scarcity utility environment.

The Scarcity of Authenticity

In an era of infinite AI-generated content and robotic manufacturing, “perfection” is no longer a differentiator—it is a baseline requirement. When every digital image is flawlessly composed and every physical object is mathematically precise, human attention, history, and original thought become the only truly non-fungible resources.

  • Effort Heuristic: Humans are psychologically predisposed to value objects and services more highly when they perceive a high degree of effort or “struggle” behind them.
  • Biological Connection: We are social animals who seek the “ghost in the machine.” We don’t just want a solution; we want to know another consciousness intended for us to have it.

The Veblen Good Effect

As basic needs are met by the Utility Floor, discretionary spending migrates toward status symbols. In this scenario, human labor becomes a Veblen Good—a luxury item where demand increases as the price (and the perceived exclusivity of the human touch) rises.

“The hand-carved chair with its slight, organic imperfections becomes a status symbol of the elite, while the flawless, 3D-printed alternative becomes the hallmark of the masses.”

Democratization of Expertise and the “Company of One”

Unlike previous industrial shifts that required massive capital for factories, AI is a capital of the mind. This technology allows individual artisans and “augmented experts” to compete with monolithic corporations.

  • Skill Augmentation: AI doesn’t just replace the expert; it allows the “middle-skill” human to perform at an elite level, spreading the ability to generate high-value, personalized work across a much larger population.
  • Niche Viability: Lowering the cost of production allows for the “Long Tail” of human services to thrive. Small-scale, highly specialized human businesses become economically sustainable because their overhead is managed by AI.

By moving the human worker from a “cost to be minimized” to a “feature to be highlighted,” companies can maintain high margins and justify the continued circulation of capital back into human hands.

Preventing the Consolidation - Breaking the Monopoly on Production

IV. Preventing Wealth Consolidation: Breaking the Monopoly on Production

One of the greatest risks of an AI-driven economy is the “Winner-Take-All” effect, where the owners of the most powerful algorithms capture the entirety of global productivity. However, the Human-Premium Renaissance offers structural defenses against this consolidation by shifting the power of production from centralized capital to distributed intelligence.

The “Company of One” Era

In previous industrial revolutions, scale was a prerequisite for success. You needed a factory to compete with a factory. Today, AI acts as a force multiplier for the individual. When the cost of sophisticated research, design, and logistics drops to near zero, the competitive advantage of a massive corporation—its ability to manage complexity—evaporates.

  • Democratized Innovation: Individual creators can now orchestrate global supply chains and reach global audiences with the same efficiency as a Fortune 500 company.
  • Agility over Scale: Smaller, human-led entities can pivot and personalize their offerings faster than a shareholder-beholden giant, allowing wealth to remain with the creator.

The Circular Human Economy

As global logistics become a commodity (the Utility Floor), we anticipate a resurgence in localized, high-trust commerce. AI-assisted cooperatives and local “Experience Stewards” can replace centralized “Gig Economy” platforms.

  • Localism: Trust is a human currency that does not scale well in an algorithm. By focusing on community-specific needs, human workers can create “walled gardens” of value that shareholders cannot easily penetrate.
  • Profit Retention: When the “platform” is a decentralized protocol rather than a Silicon Valley intermediary, more of the transaction value stays in the pockets of the local human service provider.

Narrative Ownership and Provenance

To prevent AI from simply harvesting and replicating human creativity for the benefit of shareholders, this scenario relies on Digital Provenance.

  • Certification of Origin: Using watermarking and blockchain-based verification, human-made products carry a “digital signature.” This allows creators to maintain the equity of their original work.
  • The Authenticity Tax: If a company uses AI to mimic a specific human’s style or narrative, the legal and social frameworks of the Renaissance model demand a “royalty of origin,” ensuring capital flows back to the human inspiration.

Wealth consolidation occurs when production is centralized. The Renaissance scenario is inherently decentralizing, as it prizes the one thing that cannot be mass-produced: the individual human perspective.

V. Comparing the “Soft Landings”: Victorian vs. Renaissance

To understand the trajectory of our economic future, we must distinguish between two types of “soft landings.” While both scenarios avoid immediate catastrophe, they offer fundamentally different versions of human dignity and wealth distribution.

Feature | Victorian England Scenario | Human-Premium Renaissance
Core Driver | Inequality of Wealth and Power | Inequality of Authenticity and Scarcity
The Human Role | Tasks: performing labor AI won’t do (low-cost servitude) | Meaning: performing labor AI can’t do (high-value narrative)
Economic Logic | Humans as “Cheap Alternatives” to expensive robots | Humans as “Luxury Exceptions” to cheap, mass-produced AI
Social Structure | Centralized and Rigidly Hierarchical | Decentralized and Networked Communities
Primary Value | Obedience and Time | Trust and Shared Experience
Role of AI | The “Master’s Tool” for efficiency | The “Artisan’s Apprentice” for augmentation

The Crucial Distinction

In the Victorian Scenario, the “servant class” is trapped by a lack of access to capital and a surplus of desperate labor. Success is measured by how well one can serve the elite.

In the Renaissance Scenario, the “artisan class” is empowered by AI to bypass traditional gatekeepers. Success is measured by how well one can connect with other humans through unique, un-automatable narratives. One is a world of servitude; the other is a world of stewardship.

While the Victorian model is a race to the bottom in cost, the Renaissance model is a race to the top in meaning.

Innovation Challenge - From Optimization to Orchestration

VI. The Innovation Challenge: From Optimization to Orchestration

For decades, the core driver of innovation has been Efficiency—doing things faster, cheaper, and with less friction. In the Human-Premium Renaissance, this paradigm reaches its logical conclusion: AI handles all optimization. When efficiency is “solved,” the new frontier of innovation becomes the Human Experience.

The Innovation of “Friction”

In a world of instant gratification provided by the Utility Floor, value is created by intentionally “slowing down” the experience. This is the art of Meaningful Friction.

  • Intentionality over Velocity: Future innovation won’t focus on how to get a product to a customer in ten minutes, but on how to make the ten minutes they spend with your brand the most memorable part of their day.
  • Biological Synchronization: Designing systems that align with human circadian rhythms, emotional cycles, and social needs rather than purely digital throughput.

The New Leadership Role: The Narrative Orchestrator

The role of the leader must shift. We are moving away from the “Optimization Officer” model toward the Narrative Orchestrator.

  • Curation as Strategy: Leaders will spend less time managing processes (AI will do this) and more time curating the talent, stories, and human connections that define the brand’s “Premium” status.
  • Stewardship of Trust: Because trust is a non-automatable resource, the primary job of leadership is to protect and grow the “Trust Equity” between the human staff and the customer base.

Redefining Innovation Maturity

In this scenario, a “mature” organization is not one with the most advanced tech stack, but one that has successfully integrated AI to the point of Invisibility.

Innovation maturity will be measured by an organization’s ability to use AI to automate the “Work” so it can empower its people to perform the “Art.”

This shift forces a total rethink of R&D. We are no longer just solving technical problems; we are solving for human belonging, status, and meaning in a post-labor world.

VII. Conclusion: Choosing Our Trajectory

The transition to an economy defined by embodied AI and mass automation does not have a predetermined destination. While the technical capabilities of generative systems and robotics are advancing at an exponential rate, the social and economic architecture we build around them remains a matter of human agency.

A Choice of Valuations

The “Victorian” and “Renaissance” scenarios represent two distinct paths for the future of work. One path values human time as a commodity—a low-cost alternative to a machine. The other values human time as a canvas—the unique source of narrative and meaning that an algorithm cannot replicate.

The Final Frontier of Competitive Advantage

As we move deeper into the 2030s, the most successful organizations will not be those that achieved the highest level of automation, but those that used that automation to solve the “Utility Floor” problem so they could focus entirely on the “Premium Ceiling.”

The ultimate goal of AI should not be to replace the worker, but to replace the “work”—the repetitive, the mundane, and the soul-crushing—thereby freeing the human to perform the “art” that only they can provide.

The soft landing is within reach, but it requires us to stop asking how we can compete with machines and start asking how we can better complement each other. The future isn’t defined by the artificial; it is defined by what becomes possible when the artificial is so ubiquitous that the human finally becomes the premium.

Frequently Asked Questions: The Human-Premium Renaissance

1. What is the difference between the “Utility Floor” and the “Premium Ceiling”?

The Utility Floor refers to the baseline economy where AI and robotics produce essential goods (food, logistics, basic software) at near-zero marginal cost, making them affordable commodities. The Premium Ceiling is the high-value market tier where consumers pay a significant markup for products and services with a “biological provenance”—meaning they are created, curated, or delivered by humans.

2. How does this scenario prevent massive wealth consolidation?

Unlike previous industrial shifts that required massive capital, AI acts as a “capital of the mind.” This allows for the rise of the Company of One, where individuals use AI to handle complex operations, allowing them to compete with large corporations. Furthermore, because “authenticity” cannot be mass-produced by a central algorithm, the value remains distributed among individual human creators and local communities.

3. Why is “human imperfection” considered an economic asset?

In a world where AI can generate “perfect” results instantly, perfection becomes a devalued commodity. Human “errors” or “uniqueness” serve as proof of biological origin—a signal of authenticity that AI cannot authentically replicate. This creates an Effort Heuristic, where consumers psychologically value the struggle and intent of a human creator over the sterile precision of a machine.

EDITOR’S NOTE: This is a visualization of but one possible future. I will be publishing other possible futures as they crystallize in my mind (or as you suggest them for me to explore).

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article, add images and create infographics.

Subscribe to Human-Centered Change & Innovation Weekly — Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

AI State of the Union

Image Generation Edition

LAST UPDATED: April 26, 2026 at 11:39 AM


by Braden Kelley


The evolution of AI over the past eighty years (83, actually) has been fascinating to watch (admittedly, I haven’t been alive long enough to have seen all of it), but the evolution over the past 3 1/2 years, following an extended AI winter, has been nothing short of amazing. To anchor us and set context for what’s next, here is ChatGPT’s evolution over the current AI spring:

The Evolution of GPT Models

A quick reference for the major milestones in generative AI development:

GPT-3 (June 2020): The first massive 175-billion-parameter model.
ChatGPT (November 2022): Brought generative AI to the general public via a chat interface.
GPT-4 (March 2023): Introduced advanced reasoning and multimodal (image) support.
GPT-5 (August 2025): A “network of models” approach for complex problem-solving.
GPT-5.5 (April 2026): The current state-of-the-art model for nuanced reasoning.

Earlier this week OpenAI released a new image model, and people were wondering why, after killing off their video model Sora to focus their limited resources, they would introduce a new, potentially resource-hungry image model that will burn even more of their compute.

My uninformed user perspective is that perhaps OpenAI’s leaders saw what it could do and they just couldn’t justify depriving the public of it given their stated mission to “ensure artificial general intelligence (AGI) benefits all of humanity.”

Creativity and Innovation and Change Quote

I’ve created more than 1,200 quote posters over the past few years for people to use in their meetings, presentations, keynotes and workshops (download them for FREE at http://misterinnovation.com). Initially I used freely available images from sites like Pixabay, Unsplash, Pexels and Wikimedia Commons (like the one above), because the image generation capabilities of the AI models were so bad.

Anticipatory Leader Quote

Then, about eight months ago, when Google launched Nano Banana, AI image generation became good enough at capturing the essence of a quote that I could use an AI-generated image instead of a photo (see the example above) before layering the quote on top of it in a translucent layer.

Cognitive Resilience Quote

But then in March 2026 I started using Gemini’s Nano Banana 2 to create hand-drawn-style images for the quote posters (like the one above) because of its ability to handle the inclusion of text in an image MUCH BETTER. You can see in this image that not only was it able to include the quote, but it also added some supplementary text (on its own) AND an image of me, without me asking it to!

I have used this hand-drawn style for many of the quote posters I’ve created over the past couple of months, running a daily bake-off between Gemini, ChatGPT and Grok (which loses 99% of the time). In March 2026 Gemini was winning most of the bake-offs, until around April, when things became roughly 50-50 between Gemini and ChatGPT.

BUT, with the release of OpenAI’s new image model earlier this week, ChatGPT has been winning every day, because it has been creating images like this one from a single, simple text prompt containing only the quote, the author and the requested style:

Remote-First Intentional Design Quote

Now remember, all I gave ChatGPT was the quote and the author, and I asked it to capture the essence of the quote in a hand-drawn style. IT decided to add all of these other informational, educational and inspirational elements, and my jaw literally dropped.

If I were an OpenAI executive and saw this result from my prompt, I too would have argued for the release of this image model, given OpenAI’s mission. This ability is superhuman. I, as a human, would have stopped at finding an image that reinforces or enhances the meaning of the quote.

This image model turned the quote into a multi-dimensional learning tool that transmits far more insight and information in a single document than the already powerful single sentence did.

The quote is still an important distillation that is far easier to remember and thus to drive behavior change from, but the rest of the content that the OpenAI image model created of its own volition adds value for those who want to quickly double-click on the essence and learn more.

So, this is where we are with AI image generation now, this is the kind of power these tools now have. The only question is:

What are you going to do with them next?

Image credits: Google Gemini and http://misterinnovation.com (download all 1,200+ FREE)


Why an AI Soft Landing Might Look Like Victorian England

LAST UPDATED: April 18, 2026 at 3:29 PM


by Braden Kelley and Art Inteligencia


The Mirage of the Post-Scarcity Utopia

For decades, the prevailing narrative surrounding artificial intelligence has been one of a post-scarcity “Star Trek” future. The logic was simple: as machines took over the labor, the dividends of automation would be harvested by the state and redistributed via Universal Basic Income (UBI), freeing humanity to pursue art, philosophy, and leisure.

The AI Promise vs. The Fiscal Reality

However, this utopian vision ignores the gravity of The Great American Contraction. As we approach 2026 and beyond, the friction between exponential technological growth and a $37 trillion+ national debt (with a $2 trillion annual budget deficit) creates a structural barrier to redistribution. When the tax base of human labor erodes, the math for a livable UBI simply fails to compute.
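The arithmetic behind that claim is easy to sketch. As a back-of-envelope illustration (the $1,000-per-month stipend and the 258 million US adults are my own rough assumptions, not figures from official statistics):

```python
# Back-of-envelope check of the "UBI fails to compute" claim.
# All figures below are illustrative assumptions, not official statistics.
ADULT_POPULATION = 258_000_000       # rough count of US adults (assumption)
MONTHLY_UBI = 1_000                  # a modest, arguably sub-livable stipend
ANNUAL_DEFICIT = 2_000_000_000_000   # the ~$2 trillion annual deficit cited above

# Total annual program cost, before any administration overhead
annual_ubi_cost = ADULT_POPULATION * MONTHLY_UBI * 12

print(f"Annual UBI cost: ${annual_ubi_cost / 1e12:.1f} trillion")
print(f"Multiple of the current deficit: {annual_ubi_cost / ANNUAL_DEFICIT:.1f}x")
```

Even this arguably sub-livable stipend would cost roughly $3.1 trillion a year, about one and a half times the entire annual deficit, before the eroding payroll and income tax base is even accounted for.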

The Victorian Hypothesis

If a livable UBI is rendered impossible by the math, the politics, and plain corporate and human greed, we must look toward an alternative “soft landing.” This hypothesis suggests a vertical restructuring of society. As AI drives the cost of production and the demand for goods into a deflationary spiral, the purchasing power of the remaining “employed elite” will skyrocket.

The result isn’t a horizontal distribution of wealth, but a return to a Neo-Victorian social hierarchy. In this reality, the new digital gentry will use their outsized wealth to employ a massive “servant class” to maintain stately homes and personal lives, creating a world where status is defined by the human labor one can afford to command.

Neo-Victorian Hypothesis Infographic

The Great American Contraction: Why UBI is a Non-Starter

The conversation around the transition to an AI-driven economy often treats Universal Basic Income as an inevitability — a safety net that will naturally catch those displaced by the silicon wave. However, this assumes a level of fiscal elasticity that no longer exists. We are entering The Great American Contraction, a period where the traditional levers of government spending are restricted by the sheer weight of historical obligation and systemic greed.

The Debt Ceiling of Compassion

With a national debt exceeding $37 trillion, a $2 trillion budget deficit and rising interest rates, the federal government’s “room to maneuver” has effectively vanished. A livable UBI requires a massive, consistent tax base. As AI begins to hollow out the middle class, the very tax revenue needed to fund such a program disappears. To fund UBI under these conditions would require a level of sovereign borrowing that the global markets simply will not support, leading to a reality where the government cannot afford to be the savior of the displaced.

The Greed Variable

Even if the math were more favorable, the human element remains a constant. Corporate interests, focused on margin preservation and shareholder value, are unlikely to support the aggressive taxation required to fund a social floor. In the race to the bottom of production costs, the primary goal of the “winners” in the AI revolution will be wealth concentration, not social equity. The political willpower to force a massive transfer of wealth from AI-profiting corporations to the idle masses is a historical outlier that we should not count on repeating.

The Velocity of Displacement

Finally, the speed of the AI transition is its most disruptive feature. Legislative bodies move in years, while AI cycles move in weeks. By the time a political consensus for UBI could be formed, the economic floor will have already fallen out. This lag time creates a vacuum that will be filled not by government checks, but by a desperate search for subsistence, setting the stage for the return of the domestic labor economy.

The Deflationary Paradox: Collapse of Demand and Cost

In a traditional economy, unemployment leads to recession, which usually leads to stagflation or managed recovery. However, the AI-driven “soft landing” introduces a unique mechanical failure: the Deflationary Paradox. As AI and advanced robotics permeate every sector, the labor cost of producing goods and services begins to approach zero, but the pool of consumers capable of buying those goods simultaneously evaporates.

The Production Floor Drops

We are witnessing the end of the labor theory of value. When an AI can design, a robot can manufacture, and an automated fleet can deliver a product without a single human touchpoint, the marginal cost of production hits the floor. In a desperate bid to capture the dwindling “active” capital in the market, companies will engage in a race to the bottom, causing the prices of physical and digital goods to deflate at a rate unseen in modern history.

The Demand Vacuum

While cheap goods sound like a boon, they are a symptom of a deeper rot: the Demand Vacuum. As the middle class is hollowed out, the velocity of money slows to a crawl. The economy shifts from a mass-consumption model to a precision-consumption model. Most businesses will fail not because they can’t produce, but because there are no longer enough customers with a paycheck to buy, even at rock-bottom prices.

The Purchasing Power of the “Remaining”

This is where the Victorian shift begins. For the small percentage of Americans who retain their income — the innovators, the orchestrators, and the entrepreneurs — this deflationary environment is a golden age. Their dollars, fixed in value while the cost of everything else drops, suddenly possess exponential purchasing power. When a gallon of milk or a digital service costs mere pennies in relative terms, the “wealthy” find themselves with a massive surplus of capital that cannot be spent on “things” alone. This surplus will naturally be redirected toward the one thing that remains scarce and high-status: the dedicated service of another human being.
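To make that mechanic concrete, here is a toy sketch of a fixed income against a steadily deflating consumption basket (the $200,000 income, $50,000 basket cost, and 20% annual deflation rate are all invented illustrative assumptions):

```python
# Toy model of the "exponential purchasing power" effect: a fixed income
# measured against goods whose prices deflate at a constant rate each year.
# All numbers are made-up illustrative assumptions.
income = 200_000        # fixed annual income of a member of the "employed elite"
basket_price = 50_000   # starting annual cost of a consumption basket
deflation_rate = 0.20   # assumed annual price decline

for year in range(6):
    # Price of the basket after `year` years of compounding deflation
    price = basket_price * (1 - deflation_rate) ** year
    print(f"Year {year}: basket costs ${price:,.0f}, "
          f"income buys {income / price:.1f} baskets")
```

Within five years the same fixed paycheck buys roughly three times as many baskets as it did at the start; under this scenario that growing surplus flows toward the one thing that stays scarce, dedicated human service.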

The New “Stately Home” Economy

As the Deflationary Paradox takes hold, we will see a fundamental shift in the definition of luxury. In the pre-AI era, luxury was defined by the acquisition of high-tech gadgets or rare goods. In the Neo-Victorian era, where machines produce goods for nearly nothing, “luxury” will pivot back toward the human-centered experience. Status will no longer be measured by what you own, but by whose time you command.

From Software to Service

For the “In-Group” — those entrepreneurs and specialized leaders still generating significant revenue — capital will lose its utility in the digital marketplace. When software is free and manufactured goods are commoditized, wealth seeks the only remaining friction: human presence. We will see a massive migration of capital away from Silicon Valley “platforms” and toward the local domestic economy. The wealthy will stop buying more “things” and start buying “lives” — the total dedicated attention of house managers, chefs, valets, and tutors.

The Modern Manor

This economic shift will be physically manifested in the return of the Stately Home. These won’t just be houses; they will be complex ecosystems of employment. Large estates will once again become the primary employer for local communities. As traditional corporate offices vanish, the residence becomes the center of both social and economic power. These modern manors will require extensive human staffs to cook, clean, maintain grounds, and provide security — services that, while technically possible via robotics, will be performed by humans as a deliberate signal of the owner’s immense “effectively wealthy” status.

The Return of the Domestic Professional

Perhaps the most jarring aspect of this transition will be the class of worker entering domestic service. We are not talking about a traditional blue-collar service shift, but the “Victorianization” of the former middle class. Displaced white-collar professionals — accountants, teachers, and middle managers — will find that their highest-paying opportunity is no longer in a cubicle, but in managing the complex domestic affairs, private education, and logistics of the new digital aristocracy. It is a “soft landing” in name only; while they may live in proximity to grandeur, their survival is entirely tethered to the whims of their employer.

Socio-Economic Stratification: The Two-Tiered Reality

The inevitable result of the “Victorian Soft Landing” is the formalization of a rigid, two-tiered social structure. Unlike the 20th century, which was defined by a fluid and expanding middle class, the post-contraction era will be characterized by extreme polarization. The economic “missing middle” creates a vacuum that forces every citizen into one of two distinct realities: the Digital Gentry or the Dependent Class.

The Corporate and Government Gentry

A small percentage of Americans — likely less than 10% — will remain tethered to the engines of primary wealth creation. This “In-Group” consists of high-level AI orchestrators, strategic entrepreneurs, and essential government officials who maintain the infrastructure of the state. Because their income is derived from high-margin automated systems while their cost of living has plummeted due to deflation, they possess a level of functional wealth that rivals the landed gentry of the 19th century. To this group, the “Great Contraction” is not a crisis, but a refinement of their dominance.

The Dependent Class

For those outside the digital fortress, the reality is stark. Without a national UBI to provide a floor, the majority of the population becomes the “Dependent Class.” Their economic utility is no longer found in the marketplace of ideas or manufacturing, but in the marketplace of personal service. In this neo-Victorian landscape, you either work for the companies that own the AI, work for the government that protects it, or you work directly for the individuals who do.

The Choice: Service or Scarcity

This stratification reintroduces a primal power dynamic into the American workforce. When the cost of basic survival (food and shelter) is low due to deflation, but the opportunity for independent income is zero, the wealthy gain total leverage. The “soft landing” is, in truth, a forced labor transition. Those who are not “useful” to the gentry — either as specialized labor or domestic support — face the grim reality of the Victorian workhouse era: they must find a patron to serve, or they will starve in a world of plenty.

Experience Design in the Neo-Victorian Era


From the perspective of experience design and futurology, the shift toward a Victorian-style social structure will fundamentally alter the aesthetic of status. In a world where AI can generate perfect, flawless goods and digital experiences at zero marginal cost, “perfection” becomes a commodity. Status, therefore, will be redesigned around human friction and intentional inefficiency.

The Aesthetic of Inequality

We will see a move away from the sleek, minimalist “Apple-esque” design of the early 21st century toward a more ornate, human-heavy luxury. Experience design for the elite will emphasize things that AI cannot authentically replicate: the slight imperfection of a hand-cooked meal, the presence of a uniformed gatekeeper, and the physical maintenance of vast, non-automated gardens. Architecture will pivot back to “human-centric” layouts—designing spaces not for efficiency, but to accommodate the movement and housing of a live-in staff.

Designing for Disconnect

The most challenging aspect of this new era will be the Experience of the Invisible. Designers will be tasked with creating systems that allow the Digital Gentry to interact with their environment without acknowledging the vast economic disparity surrounding them. This involves “Social UX” — designing layers of intermediation where the “Dependent Class” provides the comfort, but the “Gentry” only interacts with the result. It is a return to the “back-stairs” architecture of the 19th century, modernized for a digital age.

The UX of Survival

For the majority, the “User Experience” of daily life will be one of Hyper-Personal Patronage. Navigation of the economy will no longer be about interfaces or platforms, but about the “UX of Relationships.” Survival will depend on the ability to design one’s persona to be indispensable to a wealthy patron. In this reality, human-centered design takes on a darker, more literal meaning: the human becomes the product, the service, and the infrastructure all at once.

Conclusion: Preparing for the Retro-Future

The “Soft Landing” we are currently engineering is not the one we were promised. As the Great American Contraction forces a collision between astronomical debt and the deflationary power of AI, the middle-class dream of a subsidized leisure class is evaporating. In its place, we are seeing the blueprints of a Retro-Future — a world that looks forward technologically but moves backward socially.

A Call for Human-Centered Transition

If we continue to view innovation solely through the lens of efficiency and margin preservation, the Victorian outcome is not just possible — it is inevitable. We must realize that without a radical redesign of how we value human contribution beyond mere “market productivity,” we are simply building a more efficient feudalism. True Experience Design must now focus on the social fabric, or we risk creating a world where the only “innovation” left is finding new ways for the many to serve the few.

Final Thought: The Soft Landing Paradox

We must be careful what we wish for when we ask for a “seamless” transition. A landing that is “soft” for the Digital Gentry is one where the friction of poverty and the noise of the displaced have been successfully silenced by the return of the servant class. History doesn’t repeat, but it does rhyme — and right now, the future sounds remarkably like 1837. The question is no longer if AI will change our world, but whether we have the courage to design a future that doesn’t require us to retreat into our past.

Frequently Asked Questions

Why would prices deflate if the economy is struggling?

In this scenario, AI and robotics drive the marginal cost of production toward zero. Simultaneously, massive job displacement creates a “demand vacuum.” To capture what little liquid currency remains, companies must drop prices drastically, leading to a reality where goods are incredibly cheap but income is even scarcer.

How does this differ from the 20th-century middle class?

The 20th century was defined by a “horizontal” distribution where many people owned moderate assets. The Neo-Victorian model is “vertical.” The middle class disappears, replaced by a tiny, hyper-wealthy elite (Digital Gentry) and a large class of people who provide them with personalized human services (the Servant Class).

Isn’t UBI a more logical solution to AI displacement?

While logical in theory, the “Great American Contraction” hypothesis suggests that high national debt and corporate prioritization of margins make a livable UBI politically and fiscally impossible. Without a state-funded floor, the market defaults to the oldest form of social safety: personal patronage and domestic service.

EDITOR’S NOTE: This is a visualization of but one possible future. I will be publishing other possible futures as they crystallize in my mind (or as you suggest them for me to explore).

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article, add images and create infographics.


Liberated to Care – How AI Can Restore Humanity in Healthcare


GUEST POST from Kellee M. Franklin, PhD.

Heapy has long been a quiet force in the evolution of healthcare design – not with grand pronouncements, but with deep, thoughtful work that reshapes how we experience care. For decades, they have approached hospitals and clinics not as static buildings, but as living ecosystems – places where healing does not happen despite the surroundings, but because the space was designed to make it possible.

Their work goes beyond sustainability in the traditional sense – energy efficiency, material choices, LEED certifications – though they lead there, too. What sets Heapy apart is their commitment to human sustainability: designing spaces that support not just the planet, but the people within them. Clinicians. Patients. Families. The entire care team.

They understand that a healing environment is not just about clean lines and natural light – though those things matter. It is about creating places that reduce stress, prevent burnout, and foster connection. Spaces that are flexible enough to adapt to a pandemic, yet intimate enough to embrace the ailing or comfort a grieving family.

And they do this not in isolation, but in partnership – with providers, communities, vendors, and innovators who recognize that the future of healthcare is not only about smart technologies, but about deep human intention. It is not just what we build, but why – and for whom.

It was in that spirit that, last week, I had the honor of serving as the keynote speaker at Heapy’s Symposium on Sustainability in Healthcare, hosted in the beautiful “Queen City” of Cincinnati, Ohio – a gathering of dreamers and designers from across industries, all united by a shared belief: that the future of care must be human-centered.

It was in that room, surrounded by industry pioneers who see beyond efficiency and into empathy, that the vision for a different kind of healthcare took shape – not as a distant ideal, but as a gentle uprising already underway.

We have spent decades optimizing a system that was not built to heal. It was not built for people at all. It is a machine – and both patients and caregivers are just trying to survive it.

We have chased speed, throughput, and cost-cutting – as if care were an assembly line. But in the rush to do more, faster, we have lost something irreplaceable: the human connection that lies at the heart of healing.

Clinicians drown in documentation; their eyes fixed on screens instead of faces. Patients feel like data points, shuffled through impersonal workflows. And hospital administrators, well-meaning as they are, focus on numbers that measure activity, not meaning.

But what if we stopped trying to make the machine run faster – and started asking: How might we build something entirely different? Not a smarter system, but a human one?

Not a system that grinds, but one that breathes. Not one that manages, but cares.

That is the future we are stepping into – not as a distant dream, but as a calm, determined shift, unfolding from the electricians who wire our buildings to the executives who shape our boardrooms. Not a future where technology replaces humanity, but one where it finally sees us – amplifies us – and reminds us why we are here.

And this future – the heart of healing – rests on four pillars, championed by forward-thinking organizations like The American College of Healthcare Executives (ACHE): liberating clinicians, designing for resilience, committing to learning, and personalizing care.

Automation in Healthcare

Liberating Clinicians: Letting Humans Be Humans

Imagine a clinic where the doctor looks at you – not at a screen. Where nurses spend their shifts at the bedside, not buried in charts. Where the administrative load does not fall on the shoulders of those already stretched thin – like patients juggling multiple portals, passwords, and fragmented records.

That is not fantasy. It is the promise of AI as an ally, not an agitator.

We are already seeing systems where AI quietly handles prior authorizations, drafts clinical notes, and surfaces critical data – not to replace clinicians, but to free them. Early adopters report not just time savings, but better patient outcomes. But the real win? Time. Time to listen. Time to notice. Time to care.

Because healing is not transactional. It is relational. It lives in the pause, the eye contact, the hand on the shoulder. And when we automate the mechanical, we make space for the meaningful. The metric should not be how many patients we see – but how deeply we see them.

Designing for Resilience: Spaces that Adapt, Not Just Endure

Now picture the places where care happens.

Too often, they feel like relics – rigid, impersonal, built for a world that no longer exists. The next generation of healing environments must be different. They must be resilient, not just in structure, but in spirit.

We need hospitals that can withstand storms – literal and metaphorical. That can scale during surges, pivot during pandemics, and adapt to the rapid pace of change. Modular walls. Flexible rooms. Infrastructure that evolves.

But resilience is not just about durability – it is about humanity.

It is peaceful zones for staff to decompress. Natural light in every patient room. Wayfinding that feels intuitive, not clinical. It is designing for emotional endurance as much as physical strength.

Because burnout is not just caused by workload – it is shaped by environment. A space that feels cold, chaotic, or dehumanizing wears people down. One that feels calm, connected, and cared for – even in a crisis – helps them endure.

So let us stop building facilities and start creating healing ecosystems. Places that support not just survival, but the fullness of life – where healing and wholeness go hand-in-hand.

Committing to Lifelong Learning: Growing…Together

Even the smartest tools and strongest walls will not matter if we do not equip people with the knowledge, skills, and supportive environment they need to grow.

That is why ongoing education is not just a nice-to-have – it is non-negotiable. But not the kind of training that feels like a box to check. We need learning that is alive, adaptive, and human-centered.

Leaders, clinicians, and designers need to understand not just how to work with AI – but why it matters to their work. It is not about compliance – it is about curiosity. Not just operating it, but partnering with it. We need safe spaces to experiment, explore, grow – and yes, even fail. No innovation happens without change – and no meaningful change happens without real learning.

Micro-learning modules. Peer mentorship. Protected time for reflection. These are not luxuries – they are lifelines of learning and innovation.

And when leaders model learning – when they say, “I don’t know, let’s figure it out together” – they signal that growth matters more than perfection.

Because the future of care is not about mastering technology – it is about forming partnerships. With each other. With patients. With tools that extend our capacity, not replace our judgment.

Transforming Care

Personalizing Care: Seeing the Person, Not the Problem

Finally, imagine care that knows you.

Not in a surveillance way – not data hoarded, but wisdom shared. AI that can tailor treatment plans, adjust room settings, and anticipate needs – always with consent, transparency, and control.

This is not about efficiency. It is about dignity.

It is remembering the patient’s name. Honoring their preferences. Adapting to their story. Adjusting to their situation. The most powerful curative is still human attention – and AI can help us focus it.

We are already seeing systems where AI personalizes everything from medication timing to discharge planning – not to automate empathy, but to boost it.

Because when care feels seen and heard, the healing penetrates deeper.

Five Actions for Leaders: From Vision to Practice

So, what can leaders do – right now – to turn this vision into reality?

  1. Redesign Workflows Around Human Dignity: Stop measuring success by speed. Reengineer processes to reduce burnout and restore time for true connection. Use AI to handle the mechanical – documentation, scheduling, billing – and let it also surface critical insights, flag at-risk patients, and streamline workflows so clinicians can focus on what they do best: medicine. Measure moments of care, not mouse clicks – and allow AI to illuminate what truly matters: patient healing and well-being.
  2. Co-Create with Frontline Teams: No more top-down rollouts. Invite nurses, doctors, and support staff into the design of every new tool, space, workflow, and policy – and use AI to elevate their voices, not override them. Imagine AI that analyzes frontline feedback in real time, surfaces hidden pain points, and co-generates solutions alongside those who know the work best. Ask: Does this help you provide better care? Their lived experience, supported by intelligent insight, guides what gets built – because the best solutions do not emerge from behind closed boardroom doors, but from the open, collaborative hands and hearts within the community of care.
  3. Build Spaces that Breathe: Invest in modular, adaptable infrastructure – but go further. Design for emotional resilience: tranquil zones, natural light, intuitive layouts, and AI-enhanced environments that respond to human needs in real time. Imagine rooms that adjust lighting and temperature based on patient stress levels, or corridors that guide staff to moments of respite between high-pressure tasks. A healing space is not just durable – it is humane, alive with invisible intelligence that supports the whole person: mind, body, heart, and spirit.
  4. Champion Learning as an Act of Care: Make continuous education protected time, not an afterthought. Offer micro-learning, peer mentorship, and collaborative spaces – and harness AI as a dynamic learning partner. Imagine intelligent systems that surface personalized insights, adapt to individualized learning styles, and guide clinicians through real-time decision support that doubles as on-the-job training. When leaders model curiosity and embrace AI not just as a tool, but as a catalyst for growth and innovation, they create cultures where learning is ongoing and invigorating.
  5. Personalize Without Surveillance: Use data to deepen trust, not erode it. Implement AI that personalizes care – predicting needs, tailoring environments, and adapting support – but always with consent, transparency, and patient control. Let personalization mean dignity: remembering a name, honoring a preference, adapting to a story, adjusting to a changing situation, and above all, putting people, not patterns, at the center.

A Future That Feels Human, Beautifully Imperfect

This is not about replacing the system. It is about reimagining it.

From one that manages people to one that sees them.

From one that measures output to one that values presence.

From one that optimizes speed to one that honors slowness – personal focus, deep listening, and the unhurried moments of connection that no algorithm can replicate.

The tools are here. The insights are clear. The question is no longer can we – but will we?

Will we choose efficiency – or humanity?

Will we build systems that merely function – or ones that truly heal?

The answer lies not in technology, but in where we choose to place our attention – and our intention.

As a Triple Negative Breast Cancer survivor, I have felt firsthand how cold and mechanical care can be – and how profoundly a space can either deepen that pain or help heal it. I have also seen how systems can exhaust the very people meant to deliver care. But I hold onto a belief: healing begins when we return to our humanity. From designers and clinicians to administrators and patients, each of us plays a vital role in co-creating a whole-health environment where care is not just delivered, but genuinely experienced.

And perhaps the most revolutionary act in healthcare today might just be this: to care, deeply, as beautifully imperfect humans – and to let everything else serve a universal truth – one rooted in compassion, true connection, and shared humanity.

Image credits: Kellee M. Franklin

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Consumption Collapse – When the Feedback Loop Bites Back

Why the Great American Contraction is leading to a crisis of demand and a re-imagining of the American Social Contract.

LAST UPDATED: April 17, 2026 at 3:58 PM


GUEST POST from Art Inteligencia


The Ghost in the Shopping Mall

In our previous exploration, “The Great American Contraction,” we identified a fundamental shift in the American story. For the first time in our history, the foundational assumption of “more” — more people, more labor, and more expansion — has been inverted. We discussed how the exponential rise of AI and robotics is dismantling the traditional value chain of human labor, moving us from a nation of “doers” to a necessary, albeit smaller, elite class of “architects.”

However, as we move closer to the two-year horizon of the next United States Presidential election, a more insidious shadow is beginning to fall across the landscape. It is no longer just a crisis of employment; it has evolved into a crisis of consumption. This is the “Feedback Loop of Irrelevance.”

The logic is as cold as the algorithms driving it: As increasing numbers of knowledge workers and service providers are displaced by autonomous agents, their disposable income evaporates. When people lose their financial footing, they spend less. When they spend less, the revenue of the very companies that automated them begins to shrink. To protect their margins in a declining market, these companies are forced to cut back even further — often doubling down on automation to reduce costs — which in turn removes more consumers from the marketplace.

We are witnessing the birth of a deflationary death spiral where corporate efficiency threatens to cannibalize the very markets it was designed to serve. Over the next 24 months, this cycle will redefine the American psyche and set the stage for an election year unlike any we have ever seen.

It is time to look beyond the immediate shock of job loss and examine the structural integrity of our economic operating system. If the “Old Equation” of labor-for-income is a sinking ship, we must decide what happens to the passengers before we reach the horizon of 2028.

The Vicious Cycle of Automated Austerity

The transition from a growth-based economy to a Great Contraction is not a linear event; it is a recursive loop. As AI adoption accelerates, we are witnessing a phenomenon I call “Automated Austerity.” This is the process where short-term corporate gains from labor reduction lead directly to long-term market erosion. The cycle progresses through four distinct, overlapping phases:

Phase 1: The First Wave Displacement

We are currently seeing the replacement of both low-skilled physical labor and high-skilled knowledge work by autonomous systems. This isn’t just about factory floors; it’s about the “Architect” roles we once thought were safe. As companies replace $150k-a-year analysts with $15-a-month compute tokens, the immediate impact is a massive surge in corporate profit margins.

Phase 2: The Wallet Effect

The friction begins here. Displaced workers initially rely on savings or severance, but as those dry up, the “gig economy” safety net is nowhere to be found — because AI is already performing the freelance writing, coding, and administrative tasks that used to provide a bridge. Disposable income doesn’t just dip; for a significant percentage of the population, it vanishes. This causes a sharp contraction in discretionary spending.

Phase 3: The Revenue Mirage

This is the trap. Companies that automated to save money suddenly find their top-line revenue shrinking because their customers (the former workers) can no longer afford their products. The efficiency gains are real, but the market those companies were sized for is an illusion. We are entering a period where companies may be 100% efficient at producing goods that 0% of the displaced population can buy.

Phase 4: The Secondary Contraction

Faced with shrinking revenues, boards of directors demand even deeper cost-cutting to protect investor dividends. This leads to a second, more desperate wave of layoffs, further reducing the tax base and consumer spending power. This feedback loop creates a Deflationary Death Spiral that traditional monetary policy is ill-equipped to handle.

“When you automate the consumer out of a job, you eventually automate the business out of a customer.” — Braden Kelley

Over the next two years, this cycle will move from the periphery of Silicon Valley to the heart of every American household, forcing a radical re-evaluation of how we distribute the abundance that AI creates.
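The recursive character of the four phases above can be made concrete with a toy simulation. This is a deliberately simplified sketch — the parameter names and values (`automation_rate`, `layoff_sensitivity`, and so on) are invented for illustration, not a calibrated economic model — but it shows how each round of cost-cutting feeds the next round of demand loss:

```python
# Illustrative toy model of the "Automated Austerity" feedback loop.
# All parameters are invented for demonstration purposes; this is not
# a calibrated economic model.

def simulate_austerity_loop(
    workers: int = 1_000_000,        # employed consumers at the start
    wage: float = 60_000.0,          # average annual income per worker
    spend_rate: float = 0.9,         # share of income spent back into the market
    automation_rate: float = 0.10,   # share of roles automated each cycle
    layoff_sensitivity: float = 0.5, # extra layoffs per unit of revenue shortfall
    cycles: int = 5,
):
    """Run a few cycles of: automate -> income lost -> spending falls ->
    revenue shrinks -> deeper cuts. Returns aggregate revenue per cycle."""
    revenues = []
    prior_revenue = None
    for _ in range(cycles):
        # Phase 1: first wave displacement — automation removes a slice of workers.
        workers = int(workers * (1 - automation_rate))
        # Phase 2: the wallet effect — less aggregate income means less spending.
        revenue = workers * wage * spend_rate
        # Phase 3: the revenue mirage — top-line revenue falls below last cycle.
        if prior_revenue is not None and revenue < prior_revenue:
            shortfall = (prior_revenue - revenue) / prior_revenue
            # Phase 4: secondary contraction — cost-cutting layoffs deepen the loop.
            workers = int(workers * (1 - layoff_sensitivity * shortfall))
        prior_revenue = revenue
        revenues.append(revenue)
    return revenues

if __name__ == "__main__":
    for i, r in enumerate(simulate_austerity_loop(), start=1):
        print(f"Cycle {i}: aggregate consumer spending = ${r:,.0f}")
```

Under any positive `automation_rate`, aggregate spending declines monotonically — and the Phase 4 term makes each decline steeper than automation alone would produce, which is the "spiral" in the deflationary death spiral.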

Vicious Cycle of Automated Austerity

The Two-Year Horizon: 2026–2028

As we navigate the next twenty-four months, the gap between traditional economic indicators and the lived reality of American citizens will become a canyon. We are entering a period of Economic Bifurcation, where the distance between those who own the “compute” and those who formerly provided the “labor” creates a new social stratification.

The Rise of the ‘Hollow’ Recovery

Expect to hear the term “efficiency-led growth” frequently in the coming months. Wall Street may remain buoyant as AI-integrated corporations report record-breaking margins per employee. However, this is a hollow success. While the stock market reflects corporate optimization, our Alternative Economic Health Measures — like the Genuine Progress Indicator (GPI) — will likely show a steep decline. We are becoming a nation that is technically “wealthier” while the average citizen’s ability to participate in that wealth is structurally dismantled.

The Shift from ‘Doer’ to ‘Architect’ Burnout

The “Great American Contraction” is not just about those losing roles; it is about the immense pressure on those who remain. The survivors — the Architect Class — are tasked with managing sprawling AI ecosystems. This creates a new kind of cognitive load. By 2027, I predict we will see a peak in “Technological Burnout,” where the speed of AI-driven change outpaces the human capacity to design for it. This is where Human-Centered Innovation becomes a survival skill rather than a corporate luxury.

The Mindset of Survivalist Innovation

As the feedback loop of shrinking revenue intensifies, we will see American citizens taking radical actions to decouple from a failing labor market. This includes:

  • Hyper-Localization: A resurgence in local bartering and community-based resource sharing as a hedge against the volatility of the automated economy.
  • The ‘Off-Grid’ Digital Economy: Individuals utilizing open-source AI models to create value outside of the traditional corporate gatekeepers, leading to a “shadow economy” of peer-to-peer services.
  • Consumption Sabotage: A psychological shift where citizens, feeling irrelevant to the economy, consciously reduce their consumption to the bare essentials, further accelerating the contraction.

This period will be defined by a search for meaning in a post-labor world. The American citizen of 2027 is no longer asking “How do I get ahead?” but rather “How do I remain relevant in a world that no longer requires my effort to function?”

The Survivalist Innovation Framework

Beyond GDP: New Vitals for a Contracting Economy

As the “Old Equation” fails, the metrics we use to measure national success are becoming dangerously obsolete. In a world where AI can drive productivity while simultaneously hollowing out the consumer class, GDP is no longer a compass; it is a rearview mirror. To navigate the next two years, we must shift our focus to alternative economic health measures that prioritize human vitality over transactional velocity.

1. The Genuine Progress Indicator (GPI)

Unlike GDP, which counts the “cost of cleaning up a disaster” as a positive, the GPI factors in income inequality and the social costs of underemployment. As we move toward 2028, we must demand a GPI-centered view of the economy. If AI-driven efficiency creates wealth but destroys the social capital of our communities, the GPI will show we are regressing, providing a much-needed reality check to “hollow” stock market gains.
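The GDP-versus-GPI distinction above is ultimately an accounting choice, and a back-of-the-envelope sketch makes it tangible. The component names and numbers below are illustrative assumptions only — the actual GPI methodology involves dozens of components — but they show how two economies with identical GDP can diverge sharply once defensive spending and social costs are subtracted rather than added:

```python
# Back-of-the-envelope sketch of the GPI idea: GDP counts defensive
# spending (e.g., disaster cleanup) as positive output, while a
# GPI-style measure subtracts it along with social costs.
# Components and weights here are illustrative, not the official GPI.

def toy_gpi(gdp: float,
            defensive_costs: float,       # cleanup/remediation counted + in GDP
            inequality_penalty: float,    # welfare loss from skewed distribution
            underemployment_cost: float) -> float:
    """Subtract the costs GDP ignores to get a GPI-style figure."""
    return gdp - defensive_costs - inequality_penalty - underemployment_cost

# Two economies with the same GDP of 1000 (arbitrary units):
broad_based = toy_gpi(gdp=1000.0, defensive_costs=50.0,
                      inequality_penalty=30.0, underemployment_cost=20.0)
hollow      = toy_gpi(gdp=1000.0, defensive_costs=150.0,
                      inequality_penalty=200.0, underemployment_cost=150.0)

print(broad_based)  # 900.0
print(hollow)       # 500.0
```

Same GDP, very different GPI — which is exactly the “reality check” on hollow stock market gains the article describes.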

2. The U-7 ‘Utility’ Rate

Standard unemployment figures (U-3) are increasingly irrelevant. We need a U-7 ‘Utility’ Rate to track those who are “technologically displaced” — individuals whose roles have been absorbed by algorithms or whose wages have been suppressed to the point of working poverty. This metric will highlight the Architect Gap: the growing number of people who have the capacity for high-value human contribution but lack access to the compute resources required to compete.

3. The Social Progress Index (SPI)

The goal of an automated economy should be to improve the human condition. The SPI measures outcomes that actually matter: Access to advanced education, personal freedom, and environmental quality. By 2027, the SPI will be the most honest indicator of whether the Great Contraction is a managed transition to a better life or a chaotic collapse of the middle class.

4. Value of Organizational Learning Technologies (VOLT)

We must begin measuring the “Agility Score” of our nation. VOLT measures how effectively we are using AI to solve complex problems rather than just replacing workers. A high VOLT score paired with a low SPI suggests we are building a “learning machine” that has forgotten its purpose: to serve the humans who created it.

“A high-GDP nation with a crashing Social Progress Index (SPI) is merely a failed state in a gold tuxedo.”

The political battleground of the next two years will be defined by a new set of metrics similar to these (but likely different). The 2028 election will not just be a choice between candidates, but a choice between maintaining the illusion of growth or designing a system of sovereignty for the American citizen.

The Localized Pivot

The Sovereign Tech-Stack & The Localized Pivot

As the “Feedback Loop of Irrelevance” continues to shrink traditional income, we are witnessing a radical grassroots response: The Localized Pivot. When the macro-economy fails to provide value to the individual, the individual stops providing value to the macro-economy and turns inward to their community.

The Rise of the ‘Personal AI’ Infrastructure

By 2027, the barrier to entry for sophisticated production will vanish. We will see a surge in “Sovereign Tech-Stacks” — individuals and small collectives using localized, open-source AI models to run micro-manufactories, automated vertical farms, and peer-to-peer service networks. This is Innovation as a Survival Tactic. These citizens are essentially “unplugging” from the hollowed-out corporate ecosystem and creating a shadow economy that traditional GDP cannot track.

From Global Chains to Hyper-Local Resilience

The contraction of consumer spending will lead to the death of the “long supply chain” for many goods. In its place, we will see the rise of Regional Circular Economies. AI will be used not to maximize global profit, but to optimize local resource sharing. Imagine community AI agents that manage local energy grids or coordinate the bartering of skills — human-centered design at its most fundamental level.

The ‘Architect’ of the Commons

In this phase, the “Architect” role I’ve discussed previously becomes a civic one. These are the individuals who design the systems that keep their communities thriving while the national revenue shrinks. They are the ones building the Human-Centered Guardrails that ensure technology serves the neighborhood, not the shareholder. This shift represents a move from Global Consumerism to Local Sovereignty.

“When the national economic engine stops fueling the household, the household must build its own engine, or it dies.” — Braden Kelley

This localized movement will be the wild card of 2028. It creates a class of “Un-Architected” citizens who are no longer dependent on the federal government or major corporations, creating a profound tension for any political candidate trying to promise a return to the ‘Old Equation’.

The Road to 2028: The Politics of Human Relevance

As we approach the next Presidential election, the political discourse will undergo a seismic shift. The traditional “Left vs. Right” battle lines over tax rates and social issues will be superseded by a more existential debate: The Individual vs. The Algorithm. The 2028 election will likely be the first in history centered entirely on the consequences of a post-labor economy.

The ‘Humanity First’ Tax and Sovereign Solvency

The most contentious issue will be how to fund a shrinking state as the labor-based tax system collapses. We will see the rise of the “Compute Tax” — a proposal to tax AI tokens and robotic output rather than human hours. This isn’t just about revenue; it’s about sovereign solvency. When companies reinvest profits into compute rather than wages, the “Economic OS” crashes. Expect candidates to run on a platform of Universal Basic Everything (UBE) — providing the results of automation (healthcare, housing, and energy) directly to the people as the tax base from labor vanishes.

The Compute Tax

The Death of Traditional Immigration Debates

As I noted in our initial look at the Contraction, the old argument about immigrants “taking jobs” or “filling gaps” is dead. In 2028, the focus will shift to “Strategic Talent Acquisition.” The debate will center on how to attract the world’s few remaining irreplaceable “Architect” minds while managing a domestic population that is increasingly surplus to the needs of capital. This will create a strange political alliance between protectionists and humanists, both seeking to shield human value from digital devaluation.

Mindset and Likely Actions of the Citizenry

By the time voters head to the polls, the American mindset will have shifted from aspiration to preservation. We are likely to see:

  • The Rise of ‘Neo-Luddite’ Activism: Not a rejection of technology, but a demand for “Human-Centered Guardrails” that prevent AI from cannibalizing the last remaining sectors of human connection.
  • The Search for Non-Monetary Meaning: A surge in candidates who focus on “Quality of Life” metrics rather than fiscal growth, appealing to a class of people who no longer derive their identity from their “job.”
  • Algorithmic Populism: Politicians using AI to personalize fear and hope at scale, creating a feedback loop where the technology used to displace the worker is also used to win their vote.

The central question of the 2028 election will be simple but devastating: “What is a country for, if not to support the thriving of its people — even when those people are no longer ‘productive’ in a traditional sense?” The winner will be the one who can design a new social contract for a smaller, more resilient, and truly innovative nation.

Conclusion: Designing a Thrivable Contraction

The Great American Contraction is no longer a theoretical “what-if” for futurists to debate; it is an active restructuring of our reality. As the feedback loop of automated austerity begins to bite, we are discovering that a country built on the relentless pursuit of “more” is fundamentally ill-equipped to handle the arrival of “enough.”

The next two years will be a period of intense friction as our legacy systems — our tax codes, our education models, and our social safety nets — grind against the frictionless efficiency of the AI era. We will see traditional economic metrics fail to capture the quiet struggle of the consumer, and we will watch as the 2028 election turns into a referendum on the value of a human being in a post-labor world.

But contraction does not have to mean collapse. If we shift our focus from transactional velocity to human vitality, we have the opportunity to design a new version of the American Dream. This new dream isn’t about the quantity of jobs we can protect from the machines, but the quality of the lives we can build with the abundance those machines create. It is about moving from a nation of “doers” who are exhausted by the grind to a nation of “architects” who are inspired by the possible.

“The goal of innovation was never to replace the human; it was to release the human. We are finally being forced to decide what we want to be released to do.” — Braden Kelley

The road to 2028 will be defined by whether we choose to cling to the wreckage of the growth-based model or whether we have the courage to embrace a smaller, smarter, and more human-centered future. The contraction is inevitable, but the outcome is ours to design.

STAY TUNED: On Tuesday my friend Braden Kelley (with a little help from me) is publishing an article featuring one hypothesis for what an AI SOFT LANDING might look like.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.