We Must Think Less Like Engineers and More Like Gardeners

GUEST POST from Greg Satell

In February 1919, the famous philosopher Bertrand Russell received a card from his former student Ludwig Wittgenstein, who was at that time in an Italian prison camp. “I’ve written a book which will be published as soon as I get home,” he would say in subsequent correspondence. “I think I’ve solved our problems finally.”

The “problems” he spoke of had to do with a foundational crisis in mathematics and logic that defied the efforts of the world’s greatest minds. The book, Tractatus Logico-Philosophicus, was an attempt to engineer a perfectly logical language from first principles. It would become enormously influential, leading to the Vienna Circle and the logical positivist movement of the 1920s.

Yet Wittgenstein would later disown the idea and it was, in the end, found to be unworkable. There are limits to what we can engineer. The world is a messy place. Rules inevitably have exceptions, which is why every rigid system eventually breaks down. That’s why we need to think less like engineers making machines and more like gardeners who grow and nurture ecosystems.

The Death of the Secular Gods

The problems Russell and Wittgenstein were working on were part of a larger paradigm shift. By the late 19th century, many intellectuals had begun to question ideas passed down from the ancient Greeks, such as Aristotle’s Logic, Euclid’s geometry and the miasma theory in medicine, overturning two thousand years of conventional wisdom.

It’s hard to overstate the seismic shift that this represented. Aristotle’s use of the syllogism, in which conclusions necessarily followed from premises, Euclid’s postulate that parallel lines never intersect, and Hippocrates’ theory that bad air caused disease were considered the basic foundations upon which Western thought was predicated.

Yet as human knowledge advanced, people began to see flaws in these precepts. Strange paradoxes called Aristotle’s logic into question. Mathematicians like Gauss, Lobachevsky, Bolyai and Riemann began to imagine curved spaces in which parallel lines did, in fact, intersect, and scientists such as Robert Koch, Joseph Lister and Louis Pasteur established the germ theory of disease.

These would be, practically speaking, incredibly positive developments. The rise of non-Euclidean geometry made Einstein’s general theory of relativity possible and the germ theory of disease paved the way for antibiotics and much longer lifespans. Yet they created an unwarranted optimism about what the human mind could achieve.

A New Religion

In the early 20th century, science and technology emerged as a rising force in western society. The new wonders of electricity, automobiles and telecommunication were quickly shaping how people lived, worked and thought. Physicists like Einstein and Bohr became celebrities. It seemed that there was nothing that scientific precision couldn’t achieve.

It was against this backdrop that Moritz Schlick formed the Vienna Circle, which became the center of the logical positivist movement throughout the 1920s and ’30s. At its core was Wittgenstein’s theory of atomic facts, the idea that the world could be reduced to a set of statements that could be verified as being true or false—no opinions or speculation allowed. Those statements, in turn, would be governed by a set of logical algorithms which would determine the validity of any argument.

Yet even as this logical movement was growing, the foundational crisis in logic continued. David Hilbert, the greatest mathematician of the era, proposed a program to resolve the crisis that rested on three pillars. First, mathematics needed to be shown to be complete, in that it worked for all statements. Second, mathematics needed to be shown to be consistent, with no contradictions or paradoxes allowed. Finally, all statements needed to be computable, meaning they yielded a clear answer.

Then things took a surprising turn. A young logician named Kurt Gödel would prove that any consistent logical system powerful enough to describe arithmetic must contain true statements it cannot prove. Alan Turing would show that not all numbers are computable. The Einstein-Bohr debates would be resolved in Bohr’s favor, destroying Einstein’s vision of an objective physical reality and leaving us with an uncertain universe.

The Rise Of Faux Scientists

The verdict was in. Facts could never be absolutely verifiable, but would stand until they could be falsified. We could, after thorough testing, increase our confidence, but never be completely sure. Ironically, the demise of logic led directly to the era of digital computing and a new, technological age. Just as we learned that systems would always be fallible, the machines we built became unimaginably powerful.

At the same time, human agency was increasingly called into question. It was, after all, subjective judgments that led to the Great Depression of the 1930s and the enormous wars that followed it. As the Baby Boomers came of age in the 1960s, it seemed like everything was up for debate. All of the fuzziness and uncertainty of relying on human judgment increasingly seemed impractical.

Much like Wittgenstein and the Vienna Circle, a number of thinkers sought to engineer systems that would harness natural forces to create better outcomes. The Austrian School of economics eschewed government regulation in favor of consumer preferences. Neorealism in foreign relations argued that competition and conflict could govern the international order.

Yet unlike the original logical positivists, these ideas wouldn’t stay confined to academia, but would seep into the affairs of everyday people. The consumer welfare standard insisted that market price signals, not government bureaucrats, would decide if a transaction should be permitted, while the principle of shareholder value demanded that the stock market, not managers, should govern business decisions.

The results are clear. Lax antitrust enforcement has allowed concentration to increase in the vast majority of American industries, strangling competition. As a result, our economy has become markedly less productive, less competitive and less dynamic, and purchasing power for most people has stagnated. By just about every metric, we’re worse off.

We Need To Manage Ecosystems, Not Machines

We like to think of ourselves as rational actors, weighing each piece of evidence before making a decision. Yet our brains don’t work like that. We build up our perspectives through synapses in our brains and through our social networks, which form complex webs of influence. Once we adopt a point of view, we rarely revise it in light of new evidence.

Engineers believe in laws that can be understood and put to specific use, so they build machines to perform specific tasks. Gardeners believe in complexity and emergence. They don’t design their garden as much as tend to it, nurture it and support its surrounding ecosystem. They don’t expect the same results every time, but understand they will need to adjust their approach as they go.

We need to think less like engineers and more like gardeners. For most important purposes, we manage ecosystems, not machines. We need to think more in terms of networks that grow and less in terms of nodes whose behavior we can predict and control. Our success or failure depends less on individual entities than the connections between them.

In a world driven by networks and ecosystems, we can no longer treat strategy as if it were a game of chess, planning out each move with near perfect precision and foresight. The task of leadership is to make decisions with full knowledge that many will be wrong and that you will need to make them right.

There’s no system to do that for us, no impersonal forces that will point the way. In the end, we have to put trust in ourselves. There isn’t anyone else.

— Article courtesy of the Digital Tonto blog
— Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Winning with Artificial Intelligence in 90 Days

Exclusive Interview with Charlene Li

The rapid evolution of artificial intelligence (AI) has shifted the technology from a futuristic curiosity to the primary engine of modern organizational growth. In an era defined by data-driven decision-making, the ability to effectively harness machine learning and predictive analytics is no longer just a competitive advantage; it is a fundamental requirement for long-term viability. However, the path to integration is rarely linear. Many organizations find themselves caught between the urgent need for transformation and the daunting reality of legacy infrastructure, talent shortages, and the cultural shifts required to move beyond small-scale pilots toward true enterprise-wide intelligence.

While the potential for increased efficiency and innovation is clear, the execution remains a significant hurdle.

The organizations that thrive in this new landscape are those that treat AI as a core strategic pillar rather than a plug-and-play software update. This requires a rethink of how human talent and machine intelligence coexist, ensuring that the technology enhances human capability rather than simply automating existing inefficiencies. Overcoming these challenges involves not just technical prowess, but a disciplined approach to change management and a clear vision for how intelligence will redefine the value the organization provides to its customers.

Today we will dive deep into what it takes to quickly achieve success with artificial intelligence with our special guest.

Creating a 90-Day Blueprint to Win with Artificial Intelligence

I recently had the opportunity to interview Charlene Li, a New York Times bestselling author, keynote speaker, and AI transformation strategist. Her latest book, Winning with AI: The 90-Day Blueprint for Success, co-authored with Dr. Katia Walsh, gives senior leaders a practical framework for moving from AI experimentation to measurable business value. Her prior books include The Disruption Mindset, Open Leadership, and Groundswell. Fast Company named her one of the most creative people in business, and she has worked with global organizations including 14 of the Dow Jones Industrial 30 companies. She is the founder of Altimeter Group (acquired by Prophet) and currently leads Quantum Networks Group.

Below is the text of my interview with Charlene and a preview of the kinds of insights you’ll find in Winning with AI: The 90-Day Blueprint for Success presented in a Q&A format:

1. What confusion is being created by speaking of “AI” as one thing when there are different kinds of AI, and how does this hold back AI adoption?

When people say “AI,” they’re usually thinking ChatGPT. But ChatGPT is generative AI — and that’s just one of three types of AI showing up in business today. There’s also predictive AI, which has been quietly running in your CRM, your fraud detection, and your streaming recommendations for years. And there’s agentic AI, which takes autonomous action toward a goal rather than waiting for a prompt.

The Oracle (predictive), the Creator (generative), and the Agent (agentic) — that’s how Katia and I describe them in Winning with AI. They do fundamentally different things, and they require fundamentally different things from you.

The conflation matters because it leads to bad decisions. Leaders see a generative AI demo, get excited, and ask their teams to “do something with AI” — when the actual business problem might be better solved with predictive AI (and probably already could’ve been three years ago). Or they hear “agentic AI” and assume their organization is ready to deploy autonomous agents when they haven’t even gotten generative AI into their workforce yet.

The winners aren’t choosing among types — they’re using all three strategically, in combination. A customer care transformation might use predictive AI to route inquiries, generative AI to draft responses, and agentic AI to handle routine cases autonomously. Once you can see the three distinctly, the question stops being “what can I do with AI?” and starts being “what can AI do for me?” That’s the question that actually unlocks value.
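The customer care example above can be sketched as a simple dispatcher. This is an illustrative sketch only, not code from the book: the function names, the "routine" topic label, and the confidence threshold are all hypothetical stand-ins for real predictive, generative, and agentic components.

```python
# Illustrative sketch of combining the three AI types in a customer-care
# pipeline. predict_topic, draft_response, and resolve_autonomously are
# hypothetical stand-ins for predictive, generative, and agentic AI.

def handle_inquiry(inquiry, predict_topic, draft_response, resolve_autonomously):
    # Predictive AI routes the inquiry: classify its topic with a confidence score.
    topic, confidence = predict_topic(inquiry)

    # Agentic AI handles routine, high-confidence cases end to end.
    if topic == "routine" and confidence >= 0.9:
        return ("resolved", resolve_autonomously(inquiry))

    # Generative AI drafts a response for a human agent to review.
    return ("draft", draft_response(inquiry, topic))
```

The point of the sketch is the division of labor, not the threshold: each AI type does the one thing it is suited for, and the pieces compose.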

2. What are some of the key characteristics of AI inertia and some of the best ways to break free?

We call it pilot purgatory — and almost every organization we work with is stuck there. The signs are easy to spot: dozens of disconnected pilots, lots of conference attendance, lots of slide decks, no measurable financial impact. An MIT study found 95% of AI initiatives fail to scale. That’s not a technology failure. It’s a failure of leadership and culture.

The classic characteristics:

    • Use cases as a strategy. Many use cases equals procrastination. A long list of pilots is how organizations look busy without committing to anything.
    • Diffused accountability. When the CIO, CFO, and CMO all “share” responsibility for AI, no one owns the outcome.
    • Waiting for the foundation to be perfect. Clean data, the right platform, the perfect org structure — these become reasons to delay rather than constraints to solve through.
    • Confusing motion with progress. Running pilots feels like progress. It isn’t, unless those pilots are tied to your most important business problems.

To break free: pick your biggest strategic problems, figure out how AI solves them, invest heavily in those solutions, and move with urgency. Appoint one AI value owner who lives, breathes, and dreams AI outcomes. Kill pilots that aren’t on a path to scale. And replace “fail fast” with “learn fast” — nobody actually rewards failure, and the language of failure lets people walk away from things that should be pushed through.

Speed is the new moat. The companies that win aren’t the ones with the best technology. They’re the ones that adapt faster than their competitors.

3. There are still a lot of people out there not using AI (or not realizing that they are). What are some of the best ways for people to get started with AI?

Most people are already using AI — every spam filter, every Google Maps route, every recommendation on a streaming service is AI. So the real question is: how do you get started with the kind of AI that’s reshaping work right now, which is generative AI?

My advice is genuinely simple. Pick one of the major tools — Claude, ChatGPT, Gemini, Copilot — and start using it for one real task you do every week. Not a toy task. A real one. Drafting an email. Prepping for a meeting. Summarizing a long document. Brainstorming an approach to a problem you’re stuck on.

Two practical tips that make a big difference:

Write better prompts. A good prompt has a role (“Act as a marketing strategist”), instructions (what you want done), context (the background the AI needs), and an output format (memo, table, slide outline). Then refine through dialogue. Most people give AI two sentences and judge it on the result. Give it two paragraphs and you’ll be amazed.

Try the flipped interaction. Instead of asking AI for an answer, ask it to ask you questions until it has enough context to give a good answer. For example, at the end of a prompt, add this sentence: “Ask me any clarifying questions you may have.” It turns your prompt into a conversation.
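Both tips can be captured in a small template. This is a minimal sketch of my own, not code from the book; the helper name and example values are illustrative.

```python
# Minimal sketch (illustrative, not from the book) of the four-part prompt
# structure: role, instructions, context, and output format, with an
# optional "flipped interaction" suffix that turns the prompt into a dialogue.

def build_prompt(role, instructions, context, output_format, flipped=False):
    parts = [
        f"Act as {role}.",
        f"Task: {instructions}",
        f"Context: {context}",
        f"Output format: {output_format}",
    ]
    if flipped:
        parts.append("Ask me any clarifying questions you may have.")
    return "\n".join(parts)

prompt = build_prompt(
    role="a marketing strategist",
    instructions="draft a launch email for our new product",
    context="B2B software aimed at finance teams; launch is next month",
    output_format="a short memo with a subject line and three bullet points",
    flipped=True,
)
```

Filling in all four parts is what turns a two-sentence prompt into the two-paragraph prompt Li recommends.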

I think of AI fluency as learning to eat with chopsticks: at first you’re concentrating on every motion, and eventually it’s just how you eat. You won’t get there by reading about it. You get there by using it. Every day. On real work.

4. Does AI safety really matter? It seems like all of the major AI players are just focused on speed and getting to AGI before China, am I wrong?

You’re not wrong about what the AI players are doing. But you’re probably not playing that game – more on that below. First, I’d push back on the framing that safety and speed are opposites.

Think of Formula 1. The drivers who win championships have absolute confidence in their brakes, their crash structures, their fire suppression systems. That’s why they can push so hard on speed. Safety is what makes speed possible. The companies moving fastest on AI adoption aren’t the ones cutting corners on responsibility — they’re the ones with the highest ethical standards, because trust eliminates friction. When your team knows where the guardrails are, when your customers trust your intentions, when your board has confidence in your approach, you can move at the speed AI demands.

The 2024 Edelman Trust Barometer found that 43% of people would reject AI in products and services if they don’t believe the innovation has been thoroughly scrutinized. That’s not a PR problem — it’s a revenue and competitive position problem.

On the AGI race specifically, the geopolitical framing oversimplifies what’s actually a much more textured conversation about how AI is deployed within companies, governments, and communities. Most leaders I work with aren’t worrying about AGI — they’re worrying about whether their AI customer service tool is treating customers fairly, whether their AI-driven hiring screen is introducing bias, and whether their data is being used in ways customers didn’t consent to. Those are the safety questions that matter for the next five years, regardless of what the frontier players are doing.

5. Where is the government being too hands off with AI and its impacts, and what conversations should governments and societies be having about AI and its impacts that they’re not?

I’ll be careful here because I’m not a policy person — I work with the leaders implementing AI inside organizations. But from that vantage point, a few things stand out.

The conversation we aren’t having enough is about workforce transition. Not “will AI take jobs” — we’ve been arguing about that abstractly for three years. The real question is what happens to the millions of people whose roles will substantially change in the next five years, and who’s responsible for helping them adapt. Right now, that’s mostly being left to individual employers, and the gap between what enlightened employers are doing and what the median employer is doing is enormous. That gap will become a societal problem long before regulators catch up.

The second underdiscussed conversation is about education. We’re training a generation of students with curricula designed for a pre-AI world. By the time we figure out what AI fluency looks like in K–12, the kids who needed it most will be in the workforce.

Third — and this is where I’d actually like to see governments lean in more — is data. Most AI regulation focuses on the models. The leverage is in the data: who owns it, how it can be used, what consent looks like in a world where data collected for one purpose can be repurposed for AI training that wasn’t imagined when it was collected.

That said, regulations always lag technology. Anchoring your responsible and ethical AI policy in your organization’s values rather than waiting for rules is the right move, regardless of what governments do.

6. What are the key pillars that form the basis of a strong AI foundation for those who seek to take full advantage of AI in their organization?

In Winning with AI, Katia and I lay out four building blocks. They develop together, not sequentially.

Mindset — the cultural ability to move at AI’s speed. Speed, focus, customer-centricity, experimentation, and learning from setbacks rather than treating them as evidence that the technology doesn’t work. Without the right mindset, you can have the best tools in the world, and they’ll sit unused.

Skillset — AI fluency across the workforce, not just in IT. Everyone needs to understand what AI can and can’t do, how to use it responsibly, and how to apply it to their actual work.

Toolset — the technical foundation. We tell leaders to build with LEGO, not cathedrals. Modular, interchangeable components you can swap as the technology evolves, sitting on top of data that’s good enough to start with.

Decision-set — the governance and decision-making structures that let you move fast without breaking things. Who decides what, how quickly, with what oversight.

The mistake organizations make is treating these as a sequence — first we’ll fix the data, then we’ll train people, then we’ll deploy. That sequence will take you a decade. The right approach is to build the blocks while delivering value, using each AI application to strengthen multiple blocks at once.

And one piece that wraps all four: leadership. Without active, visible commitment from the top, the four building blocks don’t compound. With it, they accelerate.

7. Of all the outcomes that the different types of AI can achieve, which activities create the most value for organizations?

We frame the value AI creates in three areas: engagement, efficiencies, and reinvention.

Engagement is about deepening relationships with customers and employees through personalization, prediction, and proactive service. Anticipating what someone needs before they articulate it.

Efficiencies are about doing what you already do, faster and cheaper. This is where most organizations start — and where most get stuck. Efficiency gains are real, but they’re easy for competitors to replicate, which means they don’t create lasting advantage.

Reinvention is the most transformational and the most uncomfortable. It’s not asking “how can we do what we do faster?” — it’s asking “what becomes possible now that the old constraints are gone?” New business models. New revenue streams. New markets that were never economical before.

The trap is thinking efficiency is AI’s value. We call it the efficiency trap. Companies that limit themselves to efficiency are using a strategic weapon as a cost-cutting tool. The real competitive advantage comes from engagement and reinvention.

A great example: Coursera. Translation used to cost about $10,000 per course, which made global expansion economically impossible at the scale of their 5,000+ course catalog. Generative AI eliminated that constraint overnight. CEO Jeff Maggioncalda saw it immediately and launched Project Genesis by the end of 2022. That’s reinvention — AI removing a constraint that defined the business model.

If I had to pick one activity that creates the most value, it would be: using AI to remove a constraint that has shaped your industry’s economics for so long that nobody questions it anymore.

8. There was a lot of talk for a while about becoming an AI-first organization. Is this something that companies should be trying to do?

No. Be AI-ready instead.

“AI-first” is a technology company’s framing. It puts the technology in the driver’s seat, which sounds visionary but in practice produces dozens of disconnected pilots with no strategic impact. You end up chasing AI because it’s shiny rather than because it solves a real problem.

“AI-ready” is a business leader’s framing. It puts strategy in the driver’s seat. You’re building the culture, the skills, the decision systems, and the technical foundation that let AI create real value against the strategic priorities you already have.

Said simply: AI-first is a technology mindset. AI-ready is a business mindset.

You don’t actually need an AI strategy. You need a business strategy that uses AI. Anyone selling you on an AI strategy is selling you the wrong thing.

9. What should people be doing as individuals to maintain their value to their organizations and to grow their careers?

Three things, in order.

One: develop genuine AI fluency. Not “I’ve used ChatGPT a few times” fluency. Real fluency — the kind where AI is woven into how you think, prepare, decide, and communicate. The people and organizations who get to AI fluency in 2026 will pull dramatically ahead of those who don’t, and the gap will be very hard to close once it opens.

Two: deepen what’s uniquely human. AI can amplify cognition at speeds and scales no individual can match. What it can’t do is exercise empathy, self-reflection, intuition, judgment, and wisdom. These five traits — the foundation of what Katia and I call “superhumans” in the book — become more valuable, not less, as AI handles more of the cognitive work. The leaders who pair AI’s reach with these distinctly human capacities are the ones creating the most value.

Three: build a lifelong learning practice. The shelf life of any specific skill is shrinking. The skill that doesn’t depreciate is the ability to learn — quickly, repeatedly, with intellectual humility. Normalize not knowing. Embed reflection into how you work. Treat curiosity as a professional asset, not a side hobby.

If you do those three things, you’ll be more valuable in the future than you are today, regardless of what happens to your specific role.

10. What have organizations gotten wrong about rolling out AI and what can the early adopters do to recover from botched initial rollouts?

The biggest things organizations get wrong:

  • Treating AI as a technology project. It’s a business initiative for value creation that happens to use technology. When IT owns it, it stays small.
  • Use cases instead of strategy. A laundry list of pilots is procrastination dressed up as progress.
  • Diffused accountability. Without a single AI value owner, the work fragments.
  • Skipping the people work. Throwing tools at employees without addressing the fear underneath. Until fear is replaced by trust, no amount of training will change behavior.

If you’ve already botched the rollout, here’s the recovery path:

Stop and audit. What’s actually scaling, what’s not, what’s draining resources without producing value? Be honest. Sunset the dead ends.

Appoint one accountable AI leader. If no single person is accountable for AI value creation across the enterprise, fix that this quarter. Not part-time, not committee-led — one person whose performance is measured on the value that AI creates.

Pick one strategically meaningful problem and go after it. Not the easiest problem. The one whose solution would matter most to the business.

Learn from Ally Bank. When generative AI emerged, Ally’s CIO Sathish Muthukrishnan deliberately chose the most resistant audience — customer service agents — and a low-stakes problem: summarizing customer calls. The result was so valuable that the agents who’d been most skeptical became the loudest advocates: “Don’t take this away from me.” Targeting the skeptics with a real win is one of the most powerful change strategies we’ve seen.

A botched rollout isn’t a death sentence. It’s actually a useful clearing of the underbrush — assuming you learn from it.

11. Several studies have come out recently about the negative effects of AI on human cognition. Any tips for how to best use AI without degrading your brain?

This is a real concern and worth taking seriously. The risk isn’t AI itself — it’s lazy AI use. Using AI to skip thinking rather than to enhance it.

A few habits I’ve found useful:

Think first, then prompt. Before going to AI for an answer, write down what you think. Coursera’s Jeff Maggioncalda calls this cognitive bootstrapping — write your perspective on a decision, then ask AI to challenge it: “What are the strengths and weaknesses of this view? What are my blind spots? What would you recommend I improve?” AI sharpens your thinking instead of replacing it.
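The bootstrapping habit reduces to a reusable template. A hypothetical helper of my own (not Maggioncalda's or the book's code) makes the pattern concrete:

```python
# Hypothetical helper for the "think first, then prompt" habit: capture your
# own view first, then ask the AI to challenge it rather than answer for you.

def bootstrap_prompt(decision, my_view):
    return (
        f"Decision I am working on: {decision}\n"
        f"My current view: {my_view}\n"
        "What are the strengths and weaknesses of this view? "
        "What are my blind spots? What would you recommend I improve?"
    )
```

The discipline lives in the second argument: if you can't fill in `my_view`, you haven't done the thinking yet.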

Treat AI outputs as drafts, not deliverables. Read critically. Push back. Ask why. Verify facts. The moment you stop questioning AI’s outputs is the moment your thinking starts to atrophy.

Protect deep work. Schedule time for thinking that doesn’t involve AI at all. Reading, writing, reflecting, walking — the unstructured time where your brain consolidates what it knows. AI can compress research, but it can’t compress wisdom. That still has to come from lived experience, integrated over time.

Notice the difference between using AI to accelerate something you understand and using AI to substitute for understanding. Acceleration is healthy. Substitution erodes you.

The promise of AI isn’t to do our thinking for us. It’s to help us think better. The discipline is staying on the right side of that line.

12. Any question you wish I had asked but didn’t?

Yes — I’d love a question about the human possibility on the other side of this.

Most AI conversation is about risk, displacement, and disruption. Those are real. But the conversation Katia and I get most excited about is what becomes possible when AI handles the cognitive work that has been depleting people for decades — the synthesis, the routing, the routine analysis — and frees up human capacity for what only humans can do.

We call those people “superhumans” — not because they’re enhanced by technology in some sci-fi sense, but because they finally have the room to be more deeply human. To exercise empathy, self-reflection, intuition, judgment, and wisdom at a level that’s been crowded out by cognitive overload.

The first companies to deliberately develop an organization filled with superhumans won’t just have a competitive advantage. They’ll be creating an entirely new form of value — one we haven’t fully named yet. That’s the future I want leaders thinking about. Not “how do I survive AI?” but “what becomes possible for my people on the other side of this?”

Dream it. Then build it.

Conclusion

Thank you for the great conversation, Charlene!

I hope everyone has enjoyed this peek into the mind of one of the women behind the insightful new title Winning with AI: The 90-Day Blueprint for Success!

Image credits: Charlene Li, Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Why Zero UI Will Redefine Experience Design

The Invisible Interface

LAST UPDATED: May 2, 2026 at 9:13 AM

GUEST POST from Art Inteligencia


I. Introduction: The End of the Glass Slab

The Screen Fatigue Phenomenon: We have reached a point of peak saturation with traditional displays. Our lives are currently mediated by glowing rectangles, leading to a fragmented human experience where the tool often overshadows the task.

Defining Zero UI: This is not the absence of an interface, but the disappearance of the user interface as we know it. It represents a move away from rigid, button-heavy menus toward more organic inputs like voice, haptics, computer vision, and ambient intelligence.

The Core Thesis: Technology is at its most powerful when it is invisible. By removing the friction between human intent and technological execution, we allow people to return their focus to the experience itself, rather than the device required to facilitate it.

II. The Sensory Stack: How Zero UI Works

Voice & Natural Language: We are witnessing a transition from the “Command-Line Interface” era of voice (where specific keywords were required) to fluid, contextual conversations. The goal is a system that understands nuance, sarcasm, and intent, mirroring human-to-human interaction.

Biometrics & Gesture Control: In a Zero UI world, the body becomes the input device. Through computer vision and skeletal tracking, technology can interpret a wave of a hand or a shift in gaze, allowing for spatial computing that feels like an extension of natural movement.

Proactive vs. Reactive Design: Traditional UI waits for a user to click; Zero UI anticipates. By leveraging machine learning and sensor data, systems can predict needs—adjusting the lighting when you enter a room or preparing a summary of a meeting before you even ask for it.

Haptics & Sensory Feedback: Communication doesn’t always need to be audible or visual. Subtle vibrations (haptics) or environmental changes (thermal or olfactory cues) can provide “glanceable” information without demanding the user’s full cognitive attention.

III. From UX to HX (Human Experience)

Designing for Context: In the era of Zero UI, the focus shifts from “clicks” to “intent.” Experience design no longer lives within the boundaries of a screen; it must account for a user’s physical location, environmental noise levels, and even social setting. We aren’t just designing a path to a button; we are designing a response to a human moment.

Reducing Cognitive Load: The “Invisible Assistant” model moves us away from app management and toward outcome management. By utilizing ambient intelligence, technology handles the “how” so humans can focus on the “why.” This creates a “Calm UI” effect, where digital interactions support our life goals without demanding constant visual attention.

The Ethics of Invisibility: As interfaces disappear, the “Black Box” problem grows. Designers must prioritize radical transparency—ensuring users understand when and how they are being sensed. Trust becomes the primary currency; without clear consent and “off-switches” for predictive features, invisible interfaces risk becoming intrusive rather than helpful.

From Screens to Systems: We are moving toward “Sentient Interfaces” that detect hesitation or frustration through behavioral cues. Transitioning to HX (Human Experience) means building ecosystems that are emotionally aware, neuro-inclusive, and capable of failing gracefully when the AI misinterprets human intent.

IV. Leading Innovators: The Architects of Invisibility

The transition to Zero UI is being led by a diverse ecosystem of startups and legacy tech giants. As of 2026, the following organizations are moving beyond the screen to define the future of human-centered interaction:

  • Neuralink – Brain-Computer Interface (BCI): Entering high-volume production in 2026, Neuralink is moving BCI from clinical trials to the ultimate seamless interface: thought-based control.
  • Ultraleap – Mid-air Haptics & Tracking: By projecting ultrasound waves onto the skin, they provide tactile feedback in mid-air, crucial for non-visual “touch” in automotive and XR environments.
  • SoundHound AI – Agentic Voice Commerce: Their latest “Amelia 7” platform allows users to manage complex real-world transactions – like dinner reservations and parking – entirely through natural conversation.
  • Memories.ai – Contextual Wearables (LUCI): Following the pivot of early wearables like the Humane Ai Pin, Memories.ai is building the “Android of AI wearables,” providing a system-level reference for ambient intelligence.
  • Synchron – Endovascular BCI: A key competitor to Neuralink, Synchron focuses on minimally invasive brain interfaces that allow users to control digital devices via the blood vessels, emphasizing safety and accessibility.

Strategic Implementation: For brands, the challenge is no longer just “building an app.” It is about integrating into these emerging ecosystems. Whether it is through voice agents or haptic-enabled environments, the goal for designers is to ensure their brand’s presence is felt and heard, even when it cannot be seen.

V. The Futurologist’s Perspective: What’s Next?

The Transition to “Liquid Services”: In 2026, we are moving away from the “static app” model. Instead, we are entering the era of liquid services—capabilities that flow seamlessly across devices. Your interaction might start as a voice command in the kitchen, continue as a haptic pulse on your wrist while walking, and conclude as a spatial projection in your vehicle. The interface is no longer a destination; it is a persistent, supportive presence.

Hyper-Personalization and Ambient Intelligence: One-size-fits-all design is dead. Leveraging what I call “Fortified Intelligence,” future systems will adapt in real-time to the individual’s neurodiversity, physical abilities, and current emotional state. Environments will become “sentient,” adjusting lighting, acoustics, and information density based on the user’s “Digital Persona” without a single manual adjustment.

The Challenge for Designers: Behavioral Architecture: The role of the designer is shifting from visual storytelling to behavioral and sensory architecture. We are no longer just drawing screens; we are defining the “rules of engagement” between humans and machines. This requires a Whole-Brain approach—part scientist to manage the data and part artist to inspire human connection. Success in this new landscape is measured by “Speed to Resilience” rather than just speed to market.

Reclaiming the Human Moment: Paradoxically, the more advanced our technology becomes, the more we value “human friction.” As Zero UI automates the logistical “drudge work” of life, experience design for the future will emphasize the things AI cannot replicate: intentional inefficiency, the warmth of human presence, and the physical tangibility of the world around us. We are designing technology to get it out of the way, so we can finally be human again.

VI. Conclusion: Reclaiming the Human Moment

Beyond Efficiency: As I often say, true innovation isn’t just about making things faster or cheaper—it’s about making things more human. Zero UI is the final step in removing the technical debt of the 21st century. By dissolving the “glass slab” that separates us from our tasks, we aren’t just improving efficiency; we are restoring presence. When the technology disappears, we are finally free to focus on the work that matters and the people who inspire us.

A Call for Design Integrity: As we look toward the 2030s, the “Wild West” era of digital interfaces is closing. We are entering an era of Structural Integrity in experience design. Designers and innovation leaders must move beyond “Process Theater”—workshops that generate ideas without outcomes—and start building the resilient, invisible infrastructure that supports a flourishing society. We must have the courage to design a future that doesn’t require us to retreat into the friction of the past.

Final Thought: The most disruptive interface is the one that doesn’t exist because it works so well you’ve forgotten it’s there. The goal of the Invisible Interface is not to automate the human out of the loop, but to close the loop on friction, leaving only the experience behind. Let’s design an infrastructure that doesn’t just survive the future, but defines it.

Are you ready to move from UX to HX?

If you’re looking to get to the future first, increase your speed of innovation, or create a culture of continuous transformation, connect with Braden Kelley for a keynote or a FutureHacking™ workshop to teach you to be your own futurist.

Frequently Asked Questions

What is the difference between Zero UI and traditional UI?

Traditional UI (User Interface) relies on visual elements like screens, buttons, and menus to facilitate interaction. Zero UI moves away from these “glass slabs,” instead utilizing natural human behaviors—such as voice, gestures, haptics, and ambient intelligence—to interact with technology without a physical screen as the primary mediator.

How does Zero UI improve the Human Experience (HX)?

By reducing cognitive load and removing the friction of navigating complex menus, Zero UI allows technology to become a proactive assistant rather than a reactive tool. This shift toward “Human Experience” prioritizes context and intent, allowing users to stay present in their physical environment while still benefiting from digital capabilities.

Is Zero UI secure and private?

As interfaces become invisible, transparency becomes the most critical design element. Leading innovators are focusing on “Privacy by Design,” ensuring that ambient sensing and voice processing are handled with clear consent and robust encryption, often processing data locally (on-edge) rather than in the cloud to maintain user trust.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

A Tiny Bit of Uninterrupted Work Goes a Long Way

A Tiny Bit of Uninterrupted Work Goes a Long Way

GUEST POST from Mike Shipulski

If your day doesn’t start with a list of things you want to get done, there’s little chance you’ll get them done. What if you spent thirty minutes to define what you want to get done and then spent an hour getting them done? In ninety minutes you’ll have made a significant dent in the most important work. It doesn’t sound like a big deal, but it’s bigger than big. Question: How often do you work for thirty minutes without interruptions?

Switching costs are high, but we don’t behave that way. Once interrupted, what if it takes ten minutes to get back into the groove? What if it takes fifteen minutes? What if you’re interrupted every ten or fifteen minutes? Question: What if the minimum time block to do real thinking is thirty minutes of uninterrupted time?

Let’s assume that in an average week you carve out sixty minutes of uninterrupted time each day to do meaningful work. Doing as I propose – spending thirty minutes planning and sixty minutes doing something meaningful every day – increases your meaningful work by 50%. Not bad. And if in your average week you currently spend thirty contiguous minutes each day doing deep work, the proposed ninety-minute arrangement increases your meaningful work by 200%. A big deal. And if you only work for thirty minutes three out of five days, the ninety-minute arrangement increases your meaningful work by 400%. A night and day difference.
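Those percentages are straightforward ratios of weekly uninterrupted minutes; a quick sketch reproduces the article’s numbers:

```python
# The article's deep-work arithmetic: percent increase in weekly
# uninterrupted minutes when moving to 90 minutes per day, five days a week.

def pct_increase(current_min_per_week, proposed_min_per_week=5 * 90):
    return 100 * (proposed_min_per_week - current_min_per_week) / current_min_per_week

print(pct_increase(5 * 60))  # 60 uninterrupted min/day today -> 50.0
print(pct_increase(5 * 30))  # 30 min/day today -> 200.0
print(pct_increase(3 * 30))  # 30 min on only 3 of 5 days -> 400.0
```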

Question: How many times per week do you spend thirty minutes of uninterrupted time working on the most important things? How would things change if every day you spent thirty minutes planning and sixty minutes doing the most important work?

Great idea, but with today’s business culture there’s no way to block out ninety minutes of uninterrupted time. To that I say, before going to work, plan for thirty minutes at home. And set up a sixty-minute recurring meeting with yourself first thing every morning and do sixty minutes of uninterrupted work. And if you can’t sit at your desk without being interrupted, hold the sixty-minute meeting with yourself in a location where you won’t be interrupted. And, to make up for the thirty minutes you spent planning at home, leave thirty minutes early.

No way. Can’t do it. Won’t work.

It will work. Here’s why. Over the course of a month, you’ll have done at least 50% more real work than everyone else. And, because your work time is uninterrupted, the quality of your work will be better than everyone else’s. And, because you spend time planning, you will work on the most important things. More deep work, higher quality working conditions, and regular planning. You can’t beat that, even if it’s only sixty to ninety minutes per day.

The math works because in our normal working mode, we don’t spend much time working in an uninterrupted way. Do the math for yourself. Sum the number of minutes per week you spend working at least thirty minutes at a time. And whatever the number, figure out a way to increase the minutes by 50%. A small number of minutes will make a big difference.

Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Customer Confidence Score™ (CCS)

The Customer Confidence Score™ (CCS)

GUEST POST from Shep Hyken

Recently, I wrote about a customer trust survey. The feedback was amazing, which compelled me to take this a step further. After more writing and additional research, I recognized the need for more attention to a metric that measures a customer’s trust, which will directly correlate with customer satisfaction levels, loyalty, and any metric that measures what keeps customers or drives them away.

Merriam-Webster defines trust as “an assured reliance on the character, ability, strength, or truth of someone or something” and as “one in which confidence is placed.”

One can’t ignore that the word confidence is part of the definition! They are very closely linked. We might ask something similar to, “Which came first, the chicken or the egg?” The question would be, “Which comes first, confidence or trust?”

Or, put another way: Does more trust lead to higher confidence, or does a higher level of confidence lead to more trust?

Or does it really matter? If you have both, you win. My take is that trust leads to confidence. Customers show confidence in your company through repeat business and referrals. That’s how they express their trust.

And that is why I’m officially announcing to you, our subscribers, readers, and viewers, a name to describe the trust questions I recently covered. I call it the Customer Confidence Score™ (CCS), another question to add to the survey questions you use to measure customer satisfaction (CSAT) and Net Promoter Score (NPS). Here’s an anchor question from my recent article on trust surveys:

On a scale of 1-10, how much do you trust that we will always do what’s right for you as our customer?

If your customer doesn’t give you a perfect 10 on this question, there are trust issues. Customers either fully trust you, or they don’t. And obviously, the lower the score, the less likely you’ll see them return. But a score alone is just a number. The real insight comes when you ask your customers why they gave you that score. The answer is your opportunity to resolve trust issues and improve the likelihood they will return.
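As a purely illustrative sketch, here is one way a team might tabulate responses to the anchor question and flag the follow-ups; the helper and field names are my own, not part of the CCS definition:

```python
# Hypothetical tabulation of answers to the 1-10 anchor trust question.
# The helper and field names are illustrative, not an official CCS formula.

def ccs_summary(scores):
    """Summarize trust scores; anything below a perfect 10 needs a 'why' follow-up."""
    follow_ups = [s for s in scores if s < 10]
    return {
        "average": sum(scores) / len(scores),
        "pct_full_trust": 100 * scores.count(10) / len(scores),
        "follow_ups_needed": len(follow_ups),
    }

print(ccs_summary([10, 9, 10, 7, 10]))
```

The point of the summary is the last field: every non-10 answer is an open conversation about why trust is incomplete.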

The Customer Confidence Score™ is the result of surveying for trust, but it’s more than just another metric. It doesn’t replace CSAT or NPS. It completes them by measuring the foundation they are built on: trust. Without trust, a high CSAT or NPS score may be temporary at best. Measure CCS consistently, act on the insights, and you’ll build the kind of confidence and loyalty that get customers to say, “I’ll be back!”

Image Credit: Pexels


Designing Work for Humans and AI Agents to Do Together

LAST UPDATED: April 29, 2026 at 6:28 PM

Designing Work for Humans and AI Agents to Do Together

by Braden Kelley and Art Inteligencia


The Work Design Gap

We are not struggling to build artificial intelligence. We are struggling to design work for it.

Across industries, organizations are layering AI onto workflows that were never meant for collaboration. The result is predictable: inefficiency, mistrust, and unrealized value.

The real divide is not human versus AI. It is between work that is intentionally designed for collaboration and work that is not.

Why Traditional Tools Fail Us

Most of our management tools were built for a different era.

  • Process maps assume predictability
  • Org charts assume static roles
  • RACI models assume clear ownership

But human and AI collaboration is dynamic, contextual, and continuously learning. These tools help us optimize yesterday’s work, not design tomorrow’s.

What we need is a new visual language for collaboration.

Introducing the Human–AI Collaboration Canvas

The infographic below is not just a diagram. It is a thinking tool.

Its purpose is to make invisible interactions visible, clarify roles without over-constraining them, and embed judgment, trust, and learning into how work gets done.

This is a shift from process design to system design for collaboration.

Designing Work for Humans and AI Infographic

The Three-Lane Model: A More Honest Representation of Work

The canvas is built around three interconnected lanes:

The Human Lane

Where judgment, empathy, ethics, and accountability live. Humans frame the problem, not just solve it.

The AI Agent Lane

Where scale, speed, pattern recognition, and automation operate. AI expands what is possible.

The “Together” Lane

This is where value is actually created. Co-creation, co-decision, and co-learning happen here.

If you are not explicitly designing the middle lane, you are leaving value on the table.

The Work Journey: Sense → Decide → Act → Learn

Instead of rigid workflows, the canvas maps work as an adaptive cycle:

  • Sense: Understand context and gather signals
  • Decide: Blend human reasoning with AI recommendations
  • Act: Execute with scale and oversight
  • Learn: Reflect, adapt, and improve

Learning is not the end of the process. It feeds everything.
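The adaptive cycle above can be sketched as a loop in which the Learn step feeds every subsequent pass. This is a minimal sketch; all stage logic here is a hypothetical placeholder, not part of the canvas itself.

```python
# Sketch of the Sense -> Decide -> Act -> Learn cycle, with the Learn step
# feeding every subsequent pass. All stage logic is a hypothetical placeholder.

def run_cycle(context, iterations=2):
    memory = {"lessons": []}
    for _ in range(iterations):
        signals = {"context": context, "memory": memory}                       # Sense
        decide = "automate" if not signals["memory"]["lessons"] else "review"  # Decide
        outcome = "acted:" + decide                                            # Act
        memory["lessons"].append(outcome)                                      # Learn
    return memory

print(run_cycle({"task": "triage"}))
```

Note that the second pass decides differently than the first precisely because the Learn step changed the shared memory: learning is an input, not an endpoint.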

Collaboration Nodes: Where the Magic (or Failure) Happens

At key points in the journey are collaboration nodes—the moments where humans and AI interact.

Each node forces three critical questions:

  • Who leads?
  • What is the role of the other?
  • What is at stake?

Most AI failures are not technical failures. They are interaction design failures.

Making Judgment Visible

One of the biggest risks in AI adoption is invisible decision-making.

The canvas highlights:

  • Where human judgment is required
  • Where AI recommendations are sufficient
  • Where escalation is necessary

Automation without explicit judgment design is just risk at scale.

Designing for Trust, Not Just Performance

Capability alone is not enough. Systems must be trusted to be used effectively.

This requires:

  • Transparency
  • Explainability
  • Auditability

The real question is not “Can the AI do this?” but “Will humans trust and use this appropriately?”

Learning Loops: The System That Gets Smarter

The canvas includes two reinforcing learning loops:

  • AI Learning Loop: Data → Model → Output → Feedback → Improvement
  • Human Learning Loop: Experience → Reflection → Insight → Better decisions

The real competitive advantage is not AI itself. It is how quickly your combined system learns.

Risk, Ethics, and Failure by Design

No system is perfect. The best systems are designed with failure in mind.

The canvas highlights:

  • Bias and fairness
  • Privacy and security
  • Safety and compliance

It also asks essential questions:

  • What happens if the AI is wrong?
  • What happens if the human is wrong?
  • How do we recover?

Resilience comes from designing for breakdowns, not ignoring them.

Human-AI Agent Work Collaboration Canvas

How to Use This Canvas

This is a practical tool, not a theoretical one.

  • Use it in workshops to map collaboration
  • Audit existing workflows
  • Design new human–AI systems from scratch

A simple place to start:

  1. Map one critical workflow
  2. Identify collaboration nodes
  3. Redesign the “together” lane first

Designing for a More Human Future

AI does not reduce the need for humans. It raises the bar for how we design work.

The goal is not efficiency alone. The goal is better decisions, better experiences, and better outcomes.

The organizations that win will not be the ones with the most AI. They will be the ones who best design how humans and AI work together.

EDITOR’S NOTE: You should read this article too to learn more about atomizing work for man and machine to do together.

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT and Google Gemini to clean up the article, add images and create infographics.

Image credits: Google Gemini, ChatGPT


Go Beyond SLAs and Measure Human Success with the New XLM Matrix (free download)

LAST UPDATED: April 29, 2026 at 12:03 PM

Go Beyond SLAs and Measure Human Success with the XLM Matrix

by Braden Kelley


The Crisis of the “Efficient but Empty” Experience

In our current landscape of rapid digital transformation, we have achieved unprecedented levels of speed and automation. Organizations have mastered the “how” of delivery, yet many find themselves facing a growing paradox: processes are becoming more efficient while human satisfaction is simultaneously declining. We are successfully building faster systems that often leave the user feeling more like a cog in a machine than a valued participant.

The root of this issue lies in our reliance on traditional Service Level Agreements (SLAs). For decades, SLAs have served as the gold standard for operational success, measuring technical markers like system uptime, response times, and throughput. While these metrics are essential for maintaining infrastructure, they are fundamentally “cold” metrics. They can tell you that a system is functioning, but they cannot tell you if the person using that system is thriving, frustrated, or merely exhausted by the interaction.

To innovate effectively in a human-centered future, we must look beyond technical availability and begin measuring the actual quality of the human encounter. We need a shift in perspective—moving from monitoring system performance to measuring human success. This evolution requires a new framework: Experience Level Measures (XLMs). By focusing on how an innovation impacts the user’s cognitive load, sense of agency, and emotional resonance, we can move past “efficient but empty” outputs and toward solutions that deliver genuine value.

Introducing the XLM Matrix

To bridge the gap between technical output and human success, we developed the XLM (Experience Level Measure) Matrix. This visual framework is designed to help innovation teams move beyond abstract empathy and toward concrete, measurable experience improvements. By visualizing the relationship between friction, measurement, and action, teams can align their efforts with the outcomes that actually move the needle for their users.

The matrix is structured as a series of concentric rings, requiring teams to work from the “inside out” to ensure every innovation is rooted in a real-world human need:

  • The Inner Circle (The Friction Point): This is the starting line. Here, teams identify the specific “ugh” moment—the point in the journey where the user currently feels confused, slowed down, or disempowered.
  • The Middle Ring (The XLM): This layer transforms qualitative frustration into a quantitative metric. It asks: “How do we measure the absence of that friction?” An XLM isn’t about system uptime; it’s about the user’s success rate in reaching their goal without cognitive fatigue.
  • The Outer Ring (The Innovation Lever): Once the friction is identified and the metric is set, the outer ring focuses on the solution. It identifies the specific change in the product, service, or workflow that will directly influence the XLM and eliminate the friction point.

By using this “Target Logic,” teams ensure that they aren’t just innovating for the sake of novelty, but are strategically pulling levers that have a measurable impact on the human experience.
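The inside-out Target Logic can be captured as a simple record, which makes it easy to audit that every lever traces back to a friction point. The example values below are illustrative, not prescribed by the framework.

```python
# The matrix's inside-out "Target Logic" as a simple record. The example
# values are illustrative, not prescribed by the framework.

from dataclasses import dataclass

@dataclass
class ExperienceMeasure:
    friction_point: str    # inner circle: the "ugh" moment
    xlm: str               # middle ring: how we measure its absence
    innovation_lever: str  # outer ring: the change that moves the metric

checkout = ExperienceMeasure(
    friction_point="User overwhelmed by the number of checkout form fields",
    xlm="Checkout completion rate without support intervention",
    innovation_lever="Auto-fill plus a save-for-later option",
)
print(checkout.xlm)
```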

The XLM (Experience Level Measure) Matrix

The Four Pillars of Human-Centered Innovation

To provide a comprehensive view of the user experience, the XLM Matrix is divided into four critical quadrants. Each quadrant represents a fundamental pillar of how humans interact with technology and services. By examining an innovation through these four lenses, teams can uncover hidden friction points and prioritize improvements that resonate most deeply with their audience.

1. Cognitive Load

“Does this make the user’s life simpler or more complex?”

In an age of information abundance, mental energy is a finite resource. This pillar focuses on the mental effort required to complete a task. Innovation here is about reducing noise, simplifying navigation, and ensuring that the “cost of thinking” is kept to an absolute minimum.

2. Time-to-Value

“How quickly does the user reach their ‘Aha!’ moment?”

Success is often determined by the distance between a user’s first interaction and their first realization of value. This quadrant measures the speed of relevance. Effective innovation in this space removes barriers to entry and streamlines the path to a meaningful outcome.

3. Agency

“Does the user feel in control, or like a cog in the process?”

As systems become more autonomous, maintaining human agency is vital. This pillar explores whether a tool empowers the user or forces them into a rigid, predetermined path. High-agency innovations provide the user with the autonomy to make meaningful choices and direct the outcome.

4. Emotional Resonance

“Does the interaction build trust or cause frustration?”

Every interaction leaves an emotional footprint. This quadrant assesses the “vibe” of the experience. It looks beyond function to ask if the solution feels reliable, empathetic, and aligned with the user’s values, transforming a transactional moment into a relational one.

How to Use the Matrix with Your Team

The XLM Matrix is most effective when used as a collaborative workshop tool. By gathering cross-functional perspectives—from product and design to engineering and customer success—you can ensure a 360-degree view of the human experience. Follow these three steps to run your first experience audit:

Step 1: The Empathy Audit

Focus on the Inner Circle. Select one of the four quadrants and ask the team to identify the most persistent “ugh” moment currently facing the user. Be specific. Instead of saying “the checkout process is slow,” identify the exact friction point, such as “the user feels overwhelmed by the number of form fields.”

Step 2: Defining the Metric

Move to the Middle Ring. Once the friction point is clear, brainstorm how you would measure its absence. This is your Experience Level Measure (XLM). If the friction is cognitive overload from form fields, your XLM might be “reduction in time spent on the checkout page” or “a 20% increase in completion rate without support intervention.”

Step 3: Pulling the Innovation Lever

Reach the Outer Ring. Now, identify the specific technical or design change that will move that metric. This is your “Innovation Lever.” It could be an AI-driven auto-fill feature, a progress bar to improve the sense of agency, or a “save for later” option to reduce immediate emotional pressure.

Repeat this process for each quadrant to build a robust, human-centered innovation roadmap that prioritizes meaningful outcomes over simple feature checklists.
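A target like a 20% increase in completion rate is a relative uplift; a small sketch with made-up before/after counts shows the arithmetic:

```python
# Worked example of a 20% relative-uplift target for an XLM,
# using made-up before/after counts.

def completion_rate(completed, started):
    return 100 * completed / started

before = completion_rate(540, 1000)        # 54.0% completion before the change
after = completion_rate(648, 1000)         # 64.8% completion after the change
uplift = 100 * (after - before) / before   # relative uplift in percent
print(round(uplift, 1))
```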

Conclusion: Creating a Human-Centered Future

The transition from measuring system performance to measuring human success is not just a technical shift; it is a cultural one. As we move deeper into an era of agentic AI and rapid digital acceleration, the organizations that thrive will be those that prioritize the human experience as their primary north star. Innovation is no longer defined solely by what we can build, but by how effectively we enable people to feel, act, and succeed.

The XLM Matrix provides a structured, repeatable path to this future. By moving from the friction of the “ugh” moment to the strategic clarity of the innovation lever, your team can ensure that every project delivers meaningful, human-centered value. It is time to stop guessing how our users feel and start building for their success.

Start Your Experience Transformation Today

Ready to move beyond SLAs? Download the high-resolution, 11″x17″ (works as A3 too) printable version of The XLM Matrix and begin identifying the measures that truly matter for your innovation team. You can also use it virtually by uploading it and locking it down as a background in Miro, Mural, LucidSpark, FigJam or the FREE Microsoft Whiteboard or Google Jamboard.


Download the Free XLM Matrix Canvas

Frequently Asked Questions

What is the difference between an SLA and an XLM?

A Service Level Agreement (SLA) measures technical system performance, such as uptime or response speed. An Experience Level Measure (XLM) focuses on human outcomes, measuring how effectively an innovation reduces cognitive load, increases user agency, or builds emotional resonance.

How does the XLM Matrix help innovation teams?

The XLM Matrix provides a visual framework to move from identifying user friction (“ugh” moments) to defining specific metrics and identifying the technical or design “levers” required to improve the human experience.

Can the XLM Matrix be used for internal digital transformation?

Yes. The matrix is highly effective for internal projects. By measuring the cognitive load and time-to-value for employees using new internal tools, organizations can ensure their digital transformation efforts actually increase productivity rather than just adding complexity.

Image credits: Braden Kelley, Google Gemini


The Trapped Value Playbook

Creating and Closing Multi-million Dollar Deals

Trapped Value Playbook

GUEST POST from Geoffrey A. Moore


Dear Readers,

I want to forewarn you that this article is quite long. For those of you who prefer delving into it at your leisure, I’ve arranged for a downloadable version. Happy reading, and I look forward to your insights and discussions in the comments section.

The Concept

Most ROI comes from productivity improvements, and most productivity improvements come from releasing trapped value. The reason is simple. All systems trap value all the time; the only question is, where is it getting trapped today? That is, systems are implemented to help make people more productive than they were, and they do so with varying degrees of success. But to whatever degree that success has been achieved, that simply resets the bar. The old bottlenecks have been addressed, but that just surfaces the new bottlenecks. There is no such thing as a system with no bottlenecks (see Second Law of Thermodynamics 😉), so there is always the opportunity to release trapped value.

Let me give some examples:

  • On a macro scale, much of the trapped value that IT released in the 1980s and 1990s was in the supply chain. The technology that broke through the bottlenecks of communication and coordination included ERP systems for global commerce, the internet for global communications, and client-server infrastructure for standardized universal enablement.
  • In the 2000s attention shifted from the supply chain to the delivery chain with a focus on consumer markets, and especially those that dealt in services and digital goods. Here traditional media, broadcast advertising, and retail distribution, as powerful as they all were, represented massive waste as well as lost opportunity because they could not close the loop with the prospect nor serve them in the moment they were ready to transact. Smart mobile devices, cloud computing, machine learning, predictive analytics, real-time transaction processing, and home delivery were able to close this loop and thereby transform whole swaths of the consumer economy.
  • In the current era, at the macro level, the trapped value of highest priority has shifted back to enterprise markets, in particular those that require professional engagement to deliver products, sales, services, and customer success. Here generative AI and data amalgamation look to be game-changing resources, the former enabling untrained users to interact directly with the most sophisticated IT systems available, the latter feeding those systems with an ever-broadening stream of real-time data and transaction history. The trapped value to be released is tied to the current lack of user empowerment in the moment of engagement. That is, while predictive AI has for some time been able to come up with the right answers, most professionals are unable to access that help in real-time; and while ML and AI could be fed some of the data they crave, much more was trapped in data silos and thus not available in any timely manner. As a consequence, although we have had business intelligence for some time, we have largely been unable to translate it into operational intelligence in a scaled way.

There is one final point to make at the macro level before we transition to major account selling. How does releasing trapped value translate into customer return on investment, and how does that in turn help vendors set a good price? Here’s the deal. If you help your customer release a dollar of trapped value, they are happy to give you a dime. If you ask for fifteen cents, they hesitate; if you ask for twenty cents, they begin to think you’re gouging. So, let’s use ten percent to set our sights, if for no other reason than it makes the math easier. The equation is simple. You want a million-dollar deal? Find a way to release ten million dollars of trapped value. You want a ten-million-dollar deal? Find a way to release a hundred million dollars’ worth. You want a hundred-million-dollar deal? Find a way to release one billion dollars in trapped value. Yes, these are very large numbers, but the larger the target enterprise, the more plausible they become, so this playbook is directed toward the Global 2000 and the public sector, two places where billions of dollars of trapped value are commonplace.
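The ten-percent rule above is simple enough to capture in a few lines. Here is a minimal sketch in Python; the 0.10 capture ratio comes from the text, but the function names and everything else are illustrative, not from the source:

```python
def target_deal_size(trapped_value, capture_ratio=0.10):
    """Price a deal at roughly ten percent of the trapped value it releases."""
    return trapped_value * capture_ratio

def required_trapped_value(deal_size, capture_ratio=0.10):
    """Invert the rule: how much trapped value must you find for a given deal?"""
    return deal_size / capture_ratio

# A $1M deal requires ~$10M of released trapped value; a $100M deal, ~$1B.
print(required_trapped_value(1_000_000))    # 10000000.0
print(required_trapped_value(100_000_000))  # 1000000000.0
```

The inversion is the operative move in the playbook: you start from the deal size you want and work backward to the trapped value you must locate.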

Creating the multi-million-dollar deal

So much for the macro level. Multi-million-dollar deals don’t happen there. They happen at the level of specific accounts, in specific industries, in specific geographies, at specific points in time. The question we need to answer is, how does trapped value show up locally?

It turns out this is a tough question to answer. After all, it is not as if your prospects haven’t been trying to improve their productivity already. Nonetheless, simply by asking the question from an outsider’s perspective, and by being intellectually curious as to where the real answers might lie, account teams can bring unique value-add to their target customers. Specifically, they can help construct a trapped value map.

A trapped value map is analogous to what oil companies create when their exploration & production divisions are prospecting for petroleum reservoirs. It’s very expensive to come up empty in that business, and so they invest considerably in seismic studies before they commit. By contrast, how many sales interactions have you witnessed where the team, to stick with the oil industry analogy, begins by presenting their drilling history, then demos their oil rigs, and then, because they always want to be closing, asks the prospect when they can get started drilling? They call it “solution selling,” but they don’t even know what the problem is.

Co-creating a trapped-value map

The goal is to co-create this map with your target customer. They are stuck, so they need you to help them get unstuck. But you need them too, not only because they have the domain knowledge as to where the bodies are buried, but also because it is their buy-in that will drive the deal. Both of you need to bring imagination, intellectual curiosity, and attention to detail to this effort because it won’t be easy. Wherever the trapped value is, it is not obvious, or it would have already been detected and dealt with.

One way to start the journey is to begin by just asking people. You want to engage with a cross-section of managers, work teams, and executives. In each case, the dialog is informal, the questions you pose are open-ended. Start with “What is working well?” Be sure to capture their answers because this is the stuff you will likely want to protect. Then move on to, “What is holding you back?” Sometimes they know and can tell you, sometimes they know but are reluctant to tell you, and sometimes you just have to hold up a mirror so they can see it for themselves. Regardless, you need to spend time walking in their shoes, observing what they do, inspecting the way they are using their systems, and just as importantly, how their systems are using them. You need to bring a beginner’s mind and design thinking to develop a fresh perspective that could support taking novel actions. Specifically, you are looking for the intersection of their trapped value with your disruptive innovation, the one that will release the trapped value, the place where you will drill for oil.

To give you a closer look at the work involved, here is an outline for a typical trapped value discovery workshop:

Kickoff

  • Explain the concept of releasing trapped value as the foundation for ROI.
  • Use the example of Amazon Prime as compared with brick-and-mortar retail, or the example of Amazon Web Services as compared with enterprise data centers.
  • Share personal experiences of trapped value—e.g. stuff that gets in the way of you doing your best work or getting things done expeditiously.

Brainstorm trapped-value bottlenecks in your enterprise’s operating model from multiple points of view, including those of:

  • A customer
  • A customer-facing employee
  • An internal-facing employee
  • A partner
  • An investor

Identify bottlenecks in your overall industry’s operating model, examining things like:

  • Resource-consuming regulatory regimes
  • Fragmented installed bases
  • Locked-in customers
  • Process steps that add more cost than value
  • Dropped connections due to latency delay
  • “Brittle” communication mechanisms that cause outages
  • Absence of telemetry and lack of available data
  • Prioritization disconnects leading to poor implementations

Prioritize bottlenecks in terms of potential ROI from removing them:

  • Target the “big rocks”
  • Don’t “major in minors”
  • Don’t try to solve these problems yet
  • Do try to quantify them and put them in rank order
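The quantify-and-rank step above can be sketched as a simple sort by estimated trapped value. This is only an illustration; the bottleneck names and dollar figures below are invented:

```python
# Each bottleneck carries an estimated annual trapped value (illustrative figures).
bottlenecks = [
    ("Manual process steps that add more cost than value", 2_000_000),
    ("Data silos blocking real-time operational intelligence", 40_000_000),
    ("Fragmented installed base", 15_000_000),
    ("Dropped connections due to latency delay", 500_000),
]

# Target the "big rocks" first; don't major in minors.
ranked = sorted(bottlenecks, key=lambda b: b[1], reverse=True)
for name, value in ranked:
    print(f"${value:>12,}  {name}")
```

The point of the exercise is the rank order, not the precision of the estimates: a rough quantification is enough to separate the big rocks from the minors.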

Double-click on the top priority items:

  • Employ a “Five Whys?” approach to begin to get at root causes.
  • Identify “interventions” that could materially improve things.
  • Discuss past attempts that may not have succeeded.
  • Discuss the potential impact a disruptive technology could have.
  • Discuss customer examples or war stories that reflect successes.

Summarize and outline next steps.

Sometimes you may find that the trapped value is glaringly obvious, but that might just mean you don’t really understand the trap. In other words, if the right answer is staring everyone in the face, but no one is doing anything about it, then it is likely that, for some reason, there is no permission to pursue it. It may be political, it may be cultural, but intransigent resistance to change is at least part of the problem. Now, do you still want your multi-million-dollar deal? Well then, you will not only have to break the bottleneck at the operational level, you will have to solve the change management problem as well.

That said, keep in mind that your goal at this point is not to solve the problem. Rather, it is to understand it deeply. You are doing diagnosis, not prescription. Eventually, you will convert to prescription, but know that when you do, you will also be capping the size of the deal. That is, one of the barriers to closing a multi-million-dollar deal is closing a million-dollar deal instead. Everything has to close eventually, and sometimes the right thing to do is to take the million-dollar deal (or the hundred-thousand-dollar deal, or even the ten-thousand-dollar deal) today, and kick the multi-million-dollar can down the road. But don’t kid yourself. You don’t get a lot of bites at the apple, and the probability is that once you have set your price envelope, it will not be expanded any time soon.

The trapped value map, by contrast, represents an open-ended narrative, one that can be taken on in chapters, with more to come. At present, we don’t know what the answers will be. Nobody does. We are just assessing whether the problem is material enough to spend the time, talent, and management attention necessary to come up with a feasible solution. Facilitating this assessment is a gift that the account team can bring to the prospect. When conducted with integrity and skill, it positions your company as a trusted advisor, regardless of whether this particular effort bears fruit. That’s because you and the customer have been sitting on the same side of the table, working together to co-create something that uniquely describes their challenges in a way that makes them more actionable.

Transitioning to the Proposal: Co-creating a V2MOM

A great way to transition from the trapped value map to a full-on proposal is to use the V2MOM framework as a template for getting everyone on the same page. Working one-on-one with your customer sponsor, or in an ideation workshop with a small customer team, address the following:

  • Vision. What is the outcome we are seeking to bring about? Where is the trapped value today? What will things look like once the trapped value has been released? Why is this a big deal?
  • Values. What values get realized if we accomplish our vision? One of these should highlight the financial ROI, but the others can be more qualitative. Will this effort improve our ability to deliver on our mission? Will it help us fulfill one of our brand promises? Will it free our workforce to be more effective? Will it help us recruit and retain the talent we need?
  • Methods. What are all the things we have to get done in order to secure the outcome promised by our vision? The goal here is to describe the whole product, which includes not only whatever products and services are funded by the proposal but also any other deliverables from partners or from the customer team itself that will be required to achieve the desired outcome.
  • Obstacles. For each method in the whole product, what are the challenges we anticipate having to overcome? What is our current thinking about how we will do so?
  • Measures. What are the measures that will confirm we are realizing the outcome promised in our vision? What are the intermediate milestones that will ensure we are progressing toward that goal in a timely fashion?

It is hard to overestimate the positive impact of doing this work with the customer prior to developing a proposal. Not only does it get everyone on the same side of the table, all pulling together, but the level of confidence that the vision can be achieved goes way up, as does the sense of inclusion resulting from simply being heard.

Converting the V2MOM into a formal proposal

Creating major proposals is something account teams do for a living, so we don’t need to address all that here. What is needed, however, is a playbook that constructs that proposal from the outside in rather than from the inside out.

Bad proposals are all about you. They are inside-out presentations and documents that explain what a great company you are, how wonderful your products are, how many references and endorsements you have, why you are so superior to the competition, and why all those bad things they say about you aren’t true. Just remember one thing — nobody cares!

Great proposals, on the other hand, are all about the customer:

  • They start with grounding everyone in the problem to be solved or the opportunity to be captured. They do so in an authentic way that is neither slanted nor self-serving but genuinely positions the customer to make good, if challenging, choices.
  • They “size the prize.” The co-creation team gives its best assessment of the trapped-value costs it seeks to eliminate as well as the unrealized gains it seeks to achieve. Taken together these constitute the targeted ROI and set the 10X mark for positioning a fair price for the solution.
  • They map the solution to the problem, not the other way around. Each plank in the proposal has a clear reason to be, all based on releasing trapped value.
  • They address the whole product, focusing on the sold products and services, but also including both the roles of partners and allies and their responsibilities to the customers themselves, thereby giving the customer a complete picture of what it will take to succeed.
  • They position the proposed solution relative to reference competitors who represent the best alternatives to what is being proposed. These alternatives are honored for what they are. At the same time, the proposal makes clear why they fall short and why what is being proposed is preferable instead.

Building a Stairway to Heaven

Multi-million-dollar deals have grandiose objectives that capture the minds and hearts of visionaries, raise skeptical hackles with pragmatists, and scare the pants off conservatives. Getting them funded normally requires building a coalition of the willing across all three constituencies. The framework for so doing is called a stairway to heaven.

Here’s the framework:

[Figure: Capitalizing on Disruption]

The point of the framework is that all four steps will play a part in capturing the total ROI from the proposal. Conservative personas will be most interested in the bottom stair; pragmatists under duress, the second one up; pragmatists with options, the third; and visionaries, the topmost. To build the kind of coalition of the willing necessary to fund a multi-million-dollar deal, you meet one-on-one with as many key stakeholders as you can, directing their attention to the stair that is of most interest to them, and showing how the plan will meet their needs, when and where that stair is expected to be addressed, and what measures will verify and validate that this has been achieved.

Conclusion

Freud is famous for saying, “Sometimes a cigar is just a cigar.” The same is true of frameworks. By themselves they achieve nothing. People do all the work. But people can often work at cross purposes, not only with each other but with their intended objectives as well. Good frameworks can help them align to be more effective, and with that thought in mind, let me wish you and your team great success.

That’s what I think. What do you think?

Image Credit: Pexels, Geoffrey Moore

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Is Your Innovation Fire Fading?

LAST UPDATED: April 28, 2026 at 3:46 PM


by Braden Kelley

A common misconception in business is that innovation fails simply because of a shortage of good ideas. In reality, the “fire” is more often extinguished by the structural context in which those ideas are born.

Organizations often focus their energy on brainstorming sessions and ideation workshops, assuming that more ideas will lead to more success. However, volume and diversity are merely preconditions; they cannot overcome a rigid organizational environment.

The Reality: Strategic and Cultural Fire Extinguishers

Innovation is frequently hindered by structural barriers, poor information flow, and misaligned psychology. Without the right enabling conditions, even the most brilliant concepts will stall.

Key Themes for Transformation

  • Strategy vs. Experimentation: Innovation without strategy is merely experimentation, while strategy without innovation results in nothing more than incremental improvement.
  • Human-Centered Insight: Sustainable innovations are almost always rooted in deep, human-centered insights regarding customer needs and frustrations.
  • Structural Alignment: True innovation capability requires organizational structures and digital infrastructure that support rapid experimentation and collaboration across teams.

The Ten Dimensions of Innovation Health

To build a sustainable innovation capability, an organization must evaluate its performance across ten core diagnostic areas. These dimensions help identify whether your innovation “fire” has a strong foundation or is being restricted by hidden barriers.

  1. Vision: A compelling, shared starting point that inspires people to challenge the status quo.
  2. Strategy: Integrating innovation efforts into the broader strategic framework to avoid random experimentation.
  3. Goals: Using specific, measurable targets and leading indicators to focus creative energy.
  4. Insights: Generating deep, human-centered data about customer frustrations and unmet desires.
  5. Idea Generation: Creating conditions for a high volume and wide diversity of ideas across the organization.
  6. Idea Evaluation: Ensuring fair, rigorous, and innovation-friendly processes that guard against incremental bias.
  7. Idea Development: Providing dedicated pathways, resources, and rapid prototyping to turn concepts into reality.
  8. Organizational Psychology: Addressing the mindsets, autonomy, and fear of failure that dictate innovation behavior.
  9. Information and Structure: Optimizing organizational structures and information flows to remove “innovation drag.”
  10. Sustainability: Building innovation as a lasting, self-reinforcing capability rather than a one-time initiative.

Download Your FREE Innovation Health Checks

The Innovation Health Checks are designed to move beyond subjective feelings and toward evidence-based diagnostics. To get the most value from these tools, leadership teams should follow a disciplined approach to the audit process.

Evidence Over Aspiration

When rating your organization, it is critical to be honest and specific. You must base your scores on evidence and observable behavior rather than your intentions or what you believe should be happening. Scoring statements honestly ensures that you are diagnosing the actual state of your innovation “fire.”

Continuous Improvement and Maturity

Innovation health is not a one-time measurement. By repeating these health checks every 6–12 months, you can track your progress over time and identify new barriers that may emerge as your organization’s innovation capability matures.

From Diagnosis to Roadmap

While the Innovation Health Checks provide the diagnostic tools to identify where your fire is fading, they are designed to work in tandem with deeper strategic frameworks. These checks reveal the “what” and the “where,” serving as the essential starting point for any leader committed to building a sustainable culture of innovation and purpose.

Take the Next Step

Ready to clear the barriers identified in your scores?

Stoking Your Innovation Bonfire provides the comprehensive roadmap and deep-dive strategies required to transform these insights into a lasting competitive advantage.


Free download

10 Innovation Health Checks

Audit your leadership team’s innovation capacity with the full PDF toolkit drawn from Braden Kelley’s framework.



Get the book

Stoking Your Innovation Bonfire

Move your organization from incremental improvement to transformational growth with Braden Kelley’s complete roadmap.


Image credits: ChatGPT


Why the AI Data Centers of 2030 Will Be Sovereign Fortresses

The Great Decoupling

LAST UPDATED: April 27, 2026 at 6:17 PM


GUEST POST from Art Inteligencia


The End of the “Cloud” Illusion

For over a decade, we have been captivated by the metaphor of the “Cloud” — a term that suggests something ethereal, weightless, and omnipresent. But as we navigate the complexities of 2026, the veneer is peeling away. We are realizing that the intelligence driving our civilization is not floating in the sky; it is anchored in massive, high-heat industrial complexes that represent the most concentrated physical assets in human history.

The Convergence of Geopolitical Risk

The shift from digital convenience to National Survival is being driven by a perfect storm. The insatiable energy hunger of agentic AI models has collided with a period of intense global instability. We can no longer view data centers as mere real estate or IT infrastructure. They have become the “high ground” of the modern era. If these cognitive nodes are compromised, the ripple effect doesn’t just crash an app — it destabilizes the national experience.

The Thesis: The Rise of the Fortress Data Center

To ensure true national resilience, we must move beyond the “open campus” model of Silicon Valley. We are theorizing a future where AI data centers must evolve into self-contained, military-grade sovereign zones. These facilities will likely be:

  • Locally Powered: Utilizing dedicated nuclear SMRs to decouple from the fragile civilian grid.
  • Physically Fortified: Protected with the same kinetic rigor as a strategic missile silo.
  • Logically Isolated: Air-gapped to ensure that the nation’s “Digital Brain” remains untainted by external interference.

The Energy Sovereignty Mandate

The era of the data center as a passive consumer of the public utility is coming to an end. As AI models scale, their appetite for electricity has transitioned from a manageable operational expense to a systemic threat to civilian infrastructure. To maintain social license and operational continuity, the “Fortress Data Center” must become an island of power.

The Fragility of the Public Handshake

For years, tech giants have relied on “handshake deals” with regional utilities, often receiving preferential access to the grid. However, the sheer scale of 2026’s compute requirements has pushed these grids to a breaking point. When a single training run consumes enough energy to power a mid-sized city, the risk of “energy poverty” for the average citizen becomes a human-centered design crisis. Sovereignty requires that we stop competing with the public for the same electrons.

The Nuclear Option: Microgrids and SMRs

The transition toward Small Modular Reactors (SMRs) is no longer a “futurologist’s dream” — it is a mechanical necessity. By embedding nuclear or advanced geothermal power directly into the facility’s footprint, we create an isolated power source that is:

  • Resilient: Immune to regional grid failures, cyber-attacks on public utilities, or physical sabotage of long-distance transmission lines.
  • Scalable: Power generation that grows in lockstep with compute capacity, without requiring decade-long public infrastructure projects.
  • Sustainable: Providing the high-density, carbon-free baseload power required for 24/7 AI operations.

The Design Principle: We must decouple the “National Brain” (the AI) from the “National Body” (the civilian grid) to ensure that the pursuit of innovation never compromises the basic human need for heat, light, and stability.

The Data Center as a Kinetic Target

In the early 2020s, we viewed data center security through the lens of firewalls and encryption. But as we move through 2026, the paradigm has shifted. If a nation’s economy, defense, and essential services are orchestrated by a specific set of GPU clusters, those clusters become the highest-value kinetic targets in any conflict. We must stop designing them like warehouses and start designing them like aircraft carriers.

[Image: AI Data Center Drone Defense]

Transitioning to the “Military Base” Model

The “Fortress Data Center” logic dictates that physical security must match the strategic importance of the data held within. This evolution requires a fundamental shift in architecture and protocol:

  • Physical Hardening: Implementing reinforced, blast-resistant shells and subterranean compute floors to protect against aerial or domestic threats.
  • Exclusion Zones: Establishing significant geographic perimeters and “no-fly” zones, effectively transitioning these sites into sovereign military installations.
  • On-Site Readiness: Constant tactical presence to defend against unconventional warfare, ensuring the “Digital Front Line” is never left vulnerable to physical breach.

Sovereign Silos and Logical Air-Gaps

Beyond physical walls, we must address Logical Sovereignty. A national AI asset cannot be fully secure if it is perpetually tethered to the public internet. The next generation of security involves “Air-Gapping”—the practice of physically isolating a computer network from unsecured networks.

By creating Sovereign Silos, we prevent the “poisoning” of national intelligence models from external actors and ensure that in the event of a global network collapse, the nation’s internal cognitive capacity remains operational.

The Futurology Perspective: We are moving from the era of “Open Innovation” to the era of “Fortified Intelligence.” The goal is not to hinder progress, but to ensure that our progress cannot be used as a weapon against us.

Designing the Experience of Security

As we fortify the physical and digital walls of our AI infrastructure, we face a profound Experience Design challenge. How do we prevent these “Fortress Data Centers” from becoming symbols of state opacity or fear? In 2026, the success of a national security strategy depends as much on Trust Architecture as it does on concrete and steel.

The Transparency Paradox

We are entering a Transparency Paradox: the more critical an AI system becomes to national security, the more secret its inner workings must be to prevent exploitation. Using Human-Centered Design principles, we must design interfaces and communication loops that provide the public with “Proof of Integrity” without revealing “Methods of Operation.”

  • Auditability: Creating independent, high-clearance civilian oversight boards to ensure the “Fortress” remains aligned with democratic values.
  • Public ROI: Clearly demonstrating how the security of these sites directly enables the stability of civilian services — from healthcare logistics to disaster response.

Trust Literacy and the Citizen Experience

We must build Trust Literacy within the population. If citizens perceive these centers only as “military black boxes,” we risk a breakdown in social cohesion. The experience of the “Fortress” must be framed as a Digital Utility — much like a water treatment plant or a power station — that is guarded not to exclude the public, but to guarantee their safety and continuity of life.

Distributed Nodes: The Anti-Fragile Strategy

From a Systems Thinking perspective, a single, massive “Fortress” is a single point of failure. The superior experience of security lies in a distributed network of regional hubs.

  • Hyper-Localization: Placing smaller, fortified nodes near the communities they serve to reduce latency and improve regional resilience.
  • Redundancy by Design: Ensuring that if one node is taken offline or isolated, the national “Neural Network” can reroute and adapt instantly, mimicking biological resilience.

Thought Leader Insight: Security isn’t just the absence of threat; it is the presence of confidence. We don’t just design the bunker; we design the relationship between the bunker and the people it serves.

The Strategic Implications: A New Innovation Roadmap

The shift toward fortified, sovereign AI infrastructure isn’t just a defensive maneuver; it is a fundamental pivot in how we approach the Innovation Lifecycle. In the past, we optimized for “Speed to Market.” In the landscape of 2026, the new north star is “Speed to Resilience.” This requires a total realignment of our strategic roadmaps.

For Leaders: From Efficiency to Robustness

Business and technology leaders must move beyond the “Just-in-Time” compute model. The era of relying on offshore, third-party clusters for mission-critical intelligence is closing. Strategic roadmapping now requires:

  • Infrastructure Integration: Treating compute and energy as a single, inseparable architectural stack.
  • Risk Re-evaluation: Factoring “Geopolitical Latency” into every project — the risk that a global event could sever access to centralized public clouds.

For Policy Makers: Funding the Digital Front Line

The “Fortress Data Center” cannot be built on corporate balance sheets alone. This is a public-private imperative. We are seeing the emergence of new funding mechanisms, such as:

  • National AI Sovereignty Acts: Legislative frameworks that provide subsidies for companies building “Sovereign-Ready” infrastructure.
  • Regulatory Sandboxes: Fast-tracking the deployment of Small Modular Reactors (SMRs) specifically for data center use, bypassing the decades-long red tape of traditional nuclear projects.

For Humanity: Ensuring the “Dividends of Security”

As a Human-Centered Innovation leader, my greatest concern is that these walls will lock innovation away from the people. Our roadmap must include “Avenues of Access.” While the hardware is fortified and the power source is isolated, the outputs — the medical breakthroughs, the climate models, and the educational tools — must remain a public good.

Strategic Takeaway: We aren’t just building walls; we are building a foundation. Innovation thrives when the underlying system is stable. By securing the “where” and “how” of AI, we liberate the “what” and “why” for everyone.

Conclusion: Choosing Our Preferable Future

The transition of AI data centers into sovereign, nuclear-powered fortresses is not an inevitability to be feared, but a strategic design choice to be mastered. As we look ahead from 2026, we must acknowledge that the “Wild West” era of digital infrastructure is over. We are entering the era of Structural Integrity.

The Choice: Proactive Design vs. Reactive Crisis

We have a window of opportunity to choose our path. We can wait for a catastrophic system failure — a grid collapse or a kinetic strike on a vulnerable node — to force our hand, or we can proactively apply FutureHacking™ principles to build resilience into the very foundations of our digital age.

The Goal: A Fortified but Flourishing Society

The ultimate goal of the “Fortress Data Center” is not isolationism; it is Insulation. By insulating our most critical cognitive assets from the volatility of global energy markets and geopolitical conflict, we create the stability required for the next great leap in human experience.

  • Security provides the safety to experiment.
  • Sovereignty provides the freedom to operate.
  • Isolated Power provides the continuity to grow.

True innovation isn’t just about what the AI can do; it’s about building a world where the AI’s “home” is as secure as the values it is meant to protect. Let’s design an infrastructure that doesn’t just survive the future, but defines it.

Final Thought: In the race for AI supremacy, the winner won’t just have the best algorithms; they will have the most resilient “ground truth.” The fortress isn’t a retreat — it’s a launchpad.

Frequently Asked Questions

1. Why can’t we just use the existing electrical grid for AI data centers?

The current grid is built for predictable civilian and industrial use. AI training requires massive, concentrated loads that can destabilize local power for residents. By using isolated sources like SMRs, we protect the public’s energy security while ensuring the AI never faces a “brownout.”

2. Does making data centers military bases mean civilian AI development will stop?

Not at all. Think of it like the GPS system: it is maintained and secured by the military for national resilience, yet it provides the foundation for thousands of civilian innovations. The “fortress” protects the hardware, not the creativity.

3. What makes a data center a “sovereign” asset?

Sovereignty in this context means independence. A sovereign data center isn’t reliant on international supply chains for power or vulnerable public networks for its logic. It is a self-sustaining node that can continue to function even if the global internet or local grid is compromised.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Gemini
