Investments That Make Your Company More Productive, Efficient and Customer-Friendly

GUEST POST from Shep Hyken

What company doesn’t want to be more productive, efficient, and customer-friendly? (That’s a rhetorical question.) Isn’t this what every leader wants? Yet recent survey findings from Call Centre Helper reveal an inconsistency in how organizations pursue these goals. This inconsistency can make or break their customer experience strategy. And if they fail their customers, their company may fail as well.

The Call Centre Helper numbers tell the story. When asked where organizations can get maximum value for money while improving customer experience, a staggering 40% pointed to self-service solutions. Personalization came in a distant second at 18%, followed by productivity tools (12%). And despite years of improvement in AI, chatbots only received 8% of the vote.

In my own 2025 customer service and CX research of over 1,000 U.S. consumers, 68% still prefer the phone as their first choice for customer support, followed by online chat with a live agent at 55%. This creates what I call the customer experience investment paradox, where companies are pushing their investment into self-service tools while customers continue to value human-to-human interactions with live support agents.

The Self-Service Revolution is Real

In spite of the preference for phone support, the digital self-service revolution is real and becoming more important to companies. While the phone is still king, my research found that 34% of customers stopped doing business with a company because self-service options weren’t offered. That’s a third of your potential customers. Even if they prefer the phone, they want the option of doing it themselves.

There are some digital rockstar brands like Amazon and Uber, which have trained customers to expect instant and easy experiences. When customers can order groceries, stream entertainment, or hail a ride with a few taps of their mobile screen, they naturally expect similar experiences with every brand they encounter.

This makes the case for self-service, in spite of the customer’s desire to make a phone call. When the right solution is provided, the benefits to the company are big in the form of reduced operational costs and an improved customer experience. When executed well, self-service allows customers to resolve simple issues instantly, freeing up human agents to take on complex problems that require expertise and empathy.

That said, I regularly caution my clients that going “all in” on self-service without considering the larger customer journey could be a mistake. The key words are “when executed well.” Poorly implemented self-service creates frustrated customers who eventually demand human assistance anyway, often at a higher cost to resolve, not to mention the bad will caused by the frustration.

Personalization: A Competitive Differentiator

The 18% investment in improving personalization shows that companies understand the importance of creating personalized experiences. My research reveals that 79% of consumers consider a personalized experience to be important.

Consumers are still being bombarded with generic messages from the companies they do business with that often leave them asking, “Why is this company sending this to me?” The result is that customers disengage and often move on. As personalization technology improves dramatically, analytics on a customer’s buying habits, purchase frequency, past products purchased, and more can be incorporated into messaging and customer support experiences that have customers saying, “This company knows me.”

Smart companies use customer data to do more than personalize marketing messages and improve customer support. The data allows companies to anticipate needs, make recommendations for other products and services, and improve the overall customer experience.

The Relevance of the Human Connection

Despite a focus on digital investment, the human-to-human connection cannot be ignored. The fact that 68% of customers still prefer the phone confirms that self-service and chatbots may not be enough. Customers still want to talk to a live human being, especially about complex problems or major complaints.

And age makes a difference, or does it? While my research finds that 82% of Baby Boomers prefer the phone, you can’t ignore that 52% of Gen Zs prefer it as well. At the same time, I predict that the 68% of customers preferring the phone will go down for at least two reasons. First, the technology is getting better, and customers are becoming more confident with self-service solutions, including improved chatbots. Second, Gen Zs and younger Millennials, who are more comfortable with technology, are becoming financially secure and will be a major force in the economy.

Productivity and Efficiency

The 12% investment into productivity is about efficiency and optimizing the workforce. Some companies believe that being more efficient means replacing the workforce with technology. That’s a dangerous move for the reasons already shared in this article. Rather than saving money by eliminating employees, companies can make employees more productive. Imagine technology that saves employees 20% of their workday by eliminating menial tasks or answering basic questions that AI and chatbots can handle. Employees can then use that time to focus on more important issues and tasks.

Many companies view chatbots as an investment in productivity. However, according to the Call Centre Helper findings, this powerful tool received only 8% of the vote. My take is that companies have been let down by AI-fueled chatbots that make mistakes and hallucinate. That’s yesterday’s chatbot technology. Today’s chatbots are far better than they were just a year ago. And if you’re worried about chatbots giving bad information to customers, don’t forget that customers have had the same experience with human support.

Final Words

The most successful companies I work with aren’t choosing between digital efficiency and human connection. They’re creating integrated experiences that deliver both. They use self-service for simple, routine interactions while ensuring a seamless hand-off to a human when needed. They leverage personalization to anticipate customers’ needs and build relationships. They invest in tools that enhance rather than replace human connection, achieving what every leader wants: a business that’s more productive, efficient, and loved by its customers.

This article was originally published on Forbes.com.

Image Credit: Gemini

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

How to Design a Horrible, Terrible, No Good, Very Bad User Experience

GUEST POST from Geoffrey A. Moore


Some of you may know that early in my career I taught English at the college level. The freshman writing requirement was always a challenge as textbook publishers struggled valiantly to find some reading material that would actually help students write better. One of their best efforts was an essay titled “How to Write an F Paper.” It turns out we learn better from failure than from success—who knew?

With that thought in mind, and taking liberties with the title of one of my favorite children’s books, I want to review an actual user experience delivered to me by the manufacturer of a luxury automobile. The vehicle itself performs admirably, so kudos to the product engineers. It is the customer experience team that needs to be taken to the woodshed.

Here’s how the experience starts. I get in my car, start it, and back out of my garage, benefiting as always from the rear camera system. The system stays on when I shift into drive until I get onto the road and have gone perhaps fifty yards. At that point, the multimedia display presents the following:

An update is ready for installation on your multimedia system. The following conditions must be agreed to before installation.
(READ NOW) (LATER)

Well, I am driving the car, so I don’t think READ NOW is a very good option. I hit LATER, the screen returns to normal, and I get on with my day. To tell the truth, I forget about the whole experience until the next day when, after backing out of my garage and getting onto the road, I get a replay of the same message. Astoundingly, I am driving my car again, so again I push LATER.

Now, as my spouse will testify, sometimes I am a slow learner, so it is not until the better part of a week has passed that I realize the only time I am going to get this message is the first time I start the car in the morning and have driven around fifty yards. At this point, I decide to pull over and push READ. Here is what I got in reply:

Software update for your infotainment system — In order to read the terms and conditions, please park the vehicle safely, switch off the ignition and apply the parking brake.

Well, as it turns out, the reason I got in my car and drove that first fifty yards is that I actually have someplace I need to get to on time, so the idea of switching off the ignition does not appeal. I go back, push the LATER button (feeling a bit like Neo in the Matrix at this point), sub-vocalize a few choice words for the vendor, and carry on with my day.

I won’t testify as to how many days after I had the same introductory message appear and pushed LATER because you guessed it, I actually had somewhere to go and wanted to arrive there on time. But, one day I had the opportunity to be parking somewhere for a good while, so that day I did not push either button until I got to the lot. (“You can fool some of the people all the time, and all of the people some of the time, but you cannot fool all the people all the time.”) Once parked, I did switch off my ignition and applied the parking brake, and was rewarded with the following messages.

Software update for your infotainment system

Notes
The installation process requires several minutes and cannot be canceled or closed. Individual functions and buttons in the vehicle are not available for use during the installation or their use is limited. The multimedia display does not support display messages.

In the unlikely event of a technical error during installation, functional restrictions of the multimedia system and the above-mentioned functions may persist and make it necessary to consult a workshop.

This is what happens when you let the legal team review the customer communications text. Fresh from their latest efforts with the Safe Harbor statement from the prior quarter’s earnings call, they are fiercely protecting their enterprise from any and every liability risk. Heartwarming as these words were, they actually felt they were not protection enough because they were followed by:

Warnings

During installation of this update, the multimedia system is not available. In particular, this includes systems such as the navigation system, phone, reversing camera, 360 camera, Active Parking Assist, Remote Parking Assist, PARKTRONIC, and the switch for DYNAMIC SELECT.

There is an increased risk of accident.

Installing the update while operating the vehicle may distract you from the traffic situation.

There is an increased risk of accident.

Carry out the installation

And yes, that last line is a call to action, clearly meant to benefit from the wave of inspiration created by the earlier sentences. My only surprise was that it did not append the phrase “at your own risk.”

Now, to be fair, I did carry out the installation, and it took about seven minutes or so, and it was fine. So again, the product engineers know what they are doing. But where in the name of all that is holy is the customer experience engineering? Who in their right mind would ever want their customers—and remember this is a luxury vehicle with some pretty high-end customers—to go through such an experience? And most importantly, what are the takeaways that will keep us from going down the same path?

Here are three that come to mind:

  1. Design the experience. Work backward from the end in mind, making sure each element is contributing to the desired outcome.
  2. Test the experience. Make this a real-world test, not a lab test. Recruit vehicle owners to participate. Capture their feedback.
  3. Eliminate friction. All hygiene processes entail some amount of friction. In such situations, your job is not to delight your customers here but rather to avoid annoying them. Do so by respecting their time.

In this case, what if the car company had sent me an email first? That could have included all their liability stuff. It also could coach me on when and how to best install the update. Once I replied I had read the stuff, then they could have sent a much simpler message over the multimedia system, or maybe just triggered the download on my behalf when my car was safely in my garage. The point is, there was clearly a better way, and just as clearly, nobody at the car company cared enough to advocate for it.

That’s what I think. What do you think?

Image Credit: Pexels, Geoffrey Moore


The AI New Deal

Another AI Soft Landing Scenario Exploration — Government as the Employer of First Resort

LAST UPDATED: May 2, 2026 at 5:33 PM

by Braden Kelley and Art Inteligencia


The Structural Gap: Why Process Automation Requires a Civic Pivot

As we navigate the accelerating displacement of cognitive and administrative labor, the conversation around the “AI soft landing” has reached a critical juncture. In my previous explorations, I’ve examined how our future might mirror the extreme wealth gaps of Victorian England and how we might witness a Human Premium Renaissance, where uniquely human traits become our most valuable currency.

However, a significant structural link is missing. While AI is exceptionally efficient at automating process, it is incapable of automating presence. This creates a dangerous void: as middle-class administrative roles evaporate, we risk losing the economic liquidity and social cohesion that sustain our communities.

The prevailing solution often discussed is Universal Basic Income (UBI). But as I have argued, UBI is a fiscal mirage — a passive mechanism that fails to account for the human need for agency and the staggering mathematical reality of devalued tax bases. We don’t need a handout; we need a Civic Dividend. We must move from a scarcity mindset focused on protecting obsolete jobs to an abundance mindset that funds the essential work we have historically neglected. This is the foundation of the AI New Deal: positioning the government as the Employer of First Resort.

The Fiscal and Psychological Mirage of UBI

Universal Basic Income (UBI) is often presented as the “silver bullet” for the AI age, but a closer look at the mechanics reveals it to be a flawed tool for a human-centered transition. From a design perspective, UBI solves for survival but fails to solve for contribution.

First, we must confront the Math Problem. Funding a meaningful UBI requires a robust and consistent tax base. However, as AI drives down the cost of labor toward zero, the income tax pool — the traditional engine of government revenue — shrinks alongside it. Relying on passive redistribution in a devalued labor market is a race to the bottom that risks a permanent “subsistence trap” for the majority of the population.
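The “Math Problem” above can be made concrete with a back-of-the-envelope sketch. All figures below are hypothetical, chosen only to illustrate the direction of the effect, not drawn from the article:

```python
# Toy arithmetic for the "Math Problem": if UBI is funded from income tax,
# a shrinking wage base forces the required tax rate up sharply.
# All figures are hypothetical and purely illustrative.

def required_tax_rate(population: int, stipend: float, wage_base: float) -> float:
    """Flat income-tax rate needed to fund a universal stipend for everyone."""
    return (population * stipend) / wage_base

# One million adults, a $12,000/year stipend:
rate_before = required_tax_rate(1_000_000, 12_000, 60e9)  # $60B wage base
rate_after = required_tax_rate(1_000_000, 12_000, 30e9)   # wage base halved by automation

print(f"Required rate before automation: {rate_before:.0%}")
print(f"Required rate after automation:  {rate_after:.0%}")
```

Halving the taxable wage base doubles the rate needed just to stand still, which is the “race to the bottom” the passage describes.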

Second, there is the Agency Problem. Innovation thrives on human agency — the ability to act, create, and impact one’s environment. UBI provides a safety net but offers no platform for growth. By decoupling income from contribution, we risk creating a “useless class” not because humans lack value, but because we have failed to design systems that utilize their unique “Human Premium.”

Finally, we must consider the Inflation Trap. Without a mechanism to ensure the circulation of capital through local, human-to-human services, stagnant UBI payments are easily consumed by the rising costs of private-sector essentials. To achieve a soft landing, we need a dynamic model that prioritizes the Velocity of Money over the mere distribution of funds.

The Core Concept: The Civic Dividend

To bridge the gap between AI-driven efficiency and human necessity, we must introduce the Civic Dividend. This is not a social safety net designed for the desperate; it is a strategic economic platform designed for a high-functioning society. At its heart is a fundamental shift in the social contract: the Government as the Employer of First Resort.

In this model, the government doesn’t just step in when the private market fails; it proactively identifies and funds the “work that matters” — the essential maintenance of our physical, social, and cultural existence. These are the roles that require empathy, physical dexterity, and contextual judgment — capabilities that remain firmly in the human domain.

The Civic Dividend operates on the principle that human labor is a public asset. By offering potential employment in public works, care networks, and community resilience projects, the state ensures that most citizens have the opportunity to contribute. This creates a “Social Floor” of activity and income that is immune to algorithmic displacement.

Crucially, this work is not “make-work” intended to keep hands busy. It is the vital labor required to repair our crumbling infrastructure, support our aging population, and revitalize our neighborhoods. Unlike a handout, these wages are earned, providing the dignity of contribution while fueling the Velocity of Money. As these wages are spent at local bakeries, barbershops, and bookstores, they sustain a secondary human-to-human service economy that AI simply cannot replicate.

The Three Pillars of the AI New Deal

The success of the AI New Deal rests on a strategic focus on the “Un-automatable.” We must direct our collective energy toward three specific domains where human presence, judgment, and physical interaction are not just preferred, but essential for a thriving society.

Pillar 1: Physical and Digital Infrastructure

We are currently witnessing a “Tragedy of the Commons” in our physical world. Our bridges, transit systems, and power grids require more than just algorithmic optimization; they require physical intervention. The AI New Deal would mobilize a modern workforce to focus on Community Resilience — retrofitting cities for climate adaptation, urban “rewilding” to restore local ecosystems, and maintaining the physical nodes that allow our digital world to function. This work creates a tangible, high-quality public environment that serves as a shared wealth for all citizens.

Pillar 2: The Social and Care Fabric

As we automate cognitive tasks, the “Human Premium” in care becomes our most valuable asset. We are facing a global loneliness epidemic and an aging demographic that requires empathy, companionship, and nuanced psychological support. By professionalizing and scaling roles in elder care, mental health mentorship, and early childhood development, we transform these from marginalized sectors into the prestigious cornerstones of our new economy. These are roles where the goal is not “efficiency” (doing more with less time), but “effectiveness” (the quality of the human connection).

Pillar 3: Community Vitality and Cultural Resilience

In an era of AI-generated noise, local culture and verified information are at risk of erosion. The AI New Deal funds the “Civic Architects” — the local journalists, community theater directors, and public artists who document and celebrate the unique identity of a place. This pillar ensures that while our tools become more global and algorithmic, our lived experiences remain local, vibrant, and distinctly human. We aren’t just building roads; we are building the social connective tissue that prevents the isolation often triggered by rapid technological shifts.

Economic Mechanics: The Velocity of Human Connection

The fiscal engine of the AI New Deal is built on a fundamental economic principle: the Velocity of Money. In a hyper-automated private sector, capital tends to pool at the top, concentrating in the hands of those who own the compute and the algorithms. Without a mechanism to pull that capital back into the hands of the many, the local economy — the shops, services, and neighborhood hubs — withers.

The Civic Dividend solves this by creating a continuous loop of circulation. When the government pays a living wage to a community health worker or a local infrastructure specialist, that income doesn’t sit idle. It is immediately recycled into the Human-to-Human (H2H) service economy. This worker buys bread from a local baker, gets a haircut from a neighborhood barber, and visits a local gym. These secondary businesses thrive precisely because their customers have earned, discretionary income to spend.

To fund this transition, we must look toward Automation Royalties or “Compute Taxes.” Rather than taxing labor — which AI is making artificially cheap — we shift the tax burden to the high-margin output of automated systems. This creates a sustainable cycle: the efficiency of AI funds the resilience of the human community.
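The circulation loop described above can be sketched as a minimal multiplier calculation. The 60% local re-spend rate and the dollar amounts are assumptions for illustration only, not figures from the article:

```python
# Toy model of the Velocity of Money argument: a civic wage is spent locally,
# each business re-spends a fraction of it locally, and so on. Total local
# activity exceeds the original outlay (a simple geometric multiplier).
# All figures are hypothetical.

def local_activity(civic_wage: float, local_spend_rate: float, rounds: int) -> float:
    """Total spending generated as a wage circulates through local businesses."""
    total, flowing = 0.0, civic_wage
    for _ in range(rounds):
        total += flowing             # this round's spending enters the local economy
        flowing *= local_spend_rate  # the portion re-spent locally next round
    return total

# A $1,000 civic wage with 60% re-spent locally each round, traced over 10 rounds:
print(f"Total local activity: ${local_activity(1000, 0.6, 10):,.2f}")
```

In this sketch a single wage generates roughly 2.5x its face value in local transactions, while the same dollar pooled at the top and never re-spent locally generates exactly 1x.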

Furthermore, the AI New Deal acts as a natural Inflation Buffer. By investing in public housing maintenance, efficient public transit, and community-led food resilience, we lower the “floor” of the cost of living. This ensures that the wages provided by the Civic Dividend maintain high purchasing power, shielding the population from the volatility of a purely algorithmic private market.

Addressing the Critics: Efficiency vs. Resilience

Critics often argue that government-led employment is inherently “inefficient” compared to the lean, optimized nature of the private sector. From the perspective of human-centered innovation, this critique misses the mark because it uses the wrong metric for success. In an AI-dominated age, social resilience is a far more valuable outcome than marginal efficiency.

The private sector’s drive for efficiency is exactly what is displacing workers. If we allow that same logic to dictate our social response, we end up with a society that is “optimized” into instability. The AI New Deal isn’t about competing with AI on speed or cost; it is about providing the stability that the private market, by its very nature, cannot offer. We are designing for systemic health, not just quarterly throughput.

Another common concern is the fear of “make-work” or a lack of individual choice. However, the AI New Deal is designed as a platform, not a cage. By providing a guaranteed social floor of meaningful work, we actually increase career mobility. When a citizen’s basic survival and dignity are secured through the Civic Dividend, they are more — not less — likely to take risks, launch their own H2H small businesses, or pursue creative endeavors in the Human Premium Renaissance.

Finally, we must recognize that this is a choice of design. We can choose to view displaced workers as a “surplus” to be managed, or we can view them as a massive, untapped reserve of human talent ready to be deployed toward the public good. The “inefficiency” of paying a human to do what an algorithm could do is only an inefficiency if you ignore the catastrophic social cost of a disengaged, impoverished populace.

Conclusion: Designing a New Social Contract

We stand at a unique design crossroads in human history. The rapid advancement of artificial intelligence has presented us with a fundamental choice: do we design a future of automated irrelevance, where a vast majority of the population subsists on a dwindling digital handout, or do we design a future of civic abundance?

The AI New Deal is more than an economic policy; it is a reaffirmation of the value of human contribution. It recognizes that while technology can manage our systems, only humans can care for our communities, preserve our culture, and maintain our physical world. By moving toward a model of the Government as the Employer of First Resort, we ensure that the wealth generated by the AI revolution is directly reinvested into the human experience.

This “soft landing” requires us to be bold. We must stop asking how we will survive without the jobs of the past and start asking what kind of world we could build if we finally had the resources and the hands to do it. The Civic Dividend offers a path where technology does the “tasks” so that humans can finally do the “work” of being human—creating a society that is not just more efficient, but more resilient, more connected, and more purposeful.

The tools are in our hands, and the need is all around us. Now, we simply need the courage to sign a new contract with ourselves and build the future we actually want to live in.


Braden Kelley is a leading futurist and trusted voice in human-centered innovation and change. Stay tuned for next week’s installment in this series on the AI Soft Landing.

Frequently Asked Questions

How is the AI New Deal different from Universal Basic Income (UBI)?

While UBI provides a passive payment regardless of activity, the AI New Deal is a “Civic Dividend” based on active contribution. It positions the government as the Employer of First Resort, paying living wages for essential public work — such as infrastructure maintenance and care services — rather than providing a handout that lacks a connection to social agency or the local service economy.

How can the government afford to become the ‘Employer of First Resort’?

The funding shifts from taxing human labor to taxing the high-margin output of automated systems, often referred to as “Automation Royalties” or “Compute Taxes.” By capturing the wealth generated by AI-driven efficiency, the state can reinvest that capital into the Human-to-Human (H2H) economy, ensuring currency continues to circulate through physical communities.

Does this mean the government is creating ‘make-work’ just to keep people busy?

No. The AI New Deal focuses on the “Un-automatable” — high-value needs that are currently neglected, such as climate resilience, elder care, and mental health support. These are not arbitrary tasks; they are the essential services required for a functional, healthy society that AI cannot perform because they require human empathy, physical presence, and contextual judgment.

EDITOR’S NOTE: This is a visualization of but one possible future. I will be publishing other possible futures as they crystallize in my mind (or as you suggest them for me to explore).

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article, add images and create infographics.


We Must Think Less Like Engineers and More Like Gardeners

GUEST POST from Greg Satell

In February 1919, the famous philosopher Bertrand Russell received a card from his former student, Ludwig Wittgenstein, who was at that time in an Italian prison camp. “I’ve written a book which will be published as soon as I get home,” he would say in subsequent correspondence. “I think I’ve solved our problems finally.”

The “problems” he spoke of had to do with a foundational crisis in mathematics and logic that defied the efforts of the world’s greatest minds. The book, Tractatus Logico-Philosophicus, was an attempt to engineer a perfectly logical language from first principles. It would become enormously influential, leading to the Vienna Circle and the logical positivist movement of the 1920s.

Yet Wittgenstein would later disown the idea, and it was, in the end, found to be unworkable. There are limits to what we can engineer. The world is a messy place. Rules inevitably have exceptions, which is why every system will always crash. That’s why we need to think less like engineers making machines and more like gardeners who grow and nurture ecosystems.

The Death of the Secular Gods

The problems Russell and Wittgenstein were working on were part of a larger paradigm shift. By the late 19th century, many intellectuals had begun to question ideas passed down from the ancient Greeks, such as Aristotle’s Logic, Euclid’s geometry and the miasma theory in medicine, overturning two thousand years of conventional wisdom.

It’s hard to overstate the seismic shift this represented. Aristotle’s use of the syllogism, in which conclusions necessarily followed premises; Euclid’s postulate that parallel lines never intersect; and Hippocrates’ theory that bad air causes disease were considered the basic foundations upon which Western thought was predicated.

Yet as human knowledge advanced, people began to see flaws in these precepts. Strange paradoxes called Aristotle’s logic into question. Mathematicians like Gauss, Lobachevsky, Bolyai and Riemann began to imagine curved spaces in which parallel lines did, in fact, intersect and scientists such as Robert Koch, Joseph Lister and Louis Pasteur established the germ theory of disease.

These would be, practically speaking, incredibly positive developments. The rise of non-Euclidean geometry made Einstein’s general theory of relativity possible and the germ theory of disease paved the way for antibiotics and much longer lifespans. Yet they created an unwarranted optimism about what the human mind could achieve.

A New Religion

In the early 20th century, science and technology emerged as a rising force in western society. The new wonders of electricity, automobiles and telecommunication were quickly shaping how people lived, worked and thought. Physicists like Einstein and Bohr became celebrities. It seemed that there was nothing that scientific precision couldn’t achieve.

It was against this backdrop that Moritz Schlick formed the Vienna Circle, which became the center of the logical positivist movement throughout the ’20s and ’30s. At its core was Wittgenstein’s theory of atomic facts, the idea that the world could be reduced to a set of statements that could be verified as being true or false—no opinions or speculation allowed. Those statements, in turn, would be governed by a set of logical algorithms which would determine the validity of any argument.

Yet even as this logical movement was growing, the foundational crisis in logic continued. David Hilbert, the greatest mathematician of the era, proposed a program to resolve the crisis that rested on three pillars. First, mathematics needed to be shown to be complete, in that it worked for all statements. Second, mathematics needed to be shown to be consistent, with no contradictions or paradoxes allowed. Finally, all statements needed to be computable, meaning they yielded a clear answer.

Then things took a surprising turn. A young logician named Kurt Gödel would prove that no consistent logical system can ever be complete. Alan Turing would show that not all numbers are computable. The Einstein-Bohr debates would be resolved in Bohr’s favor, destroying Einstein’s vision of an objective physical reality and leaving us with an uncertain universe.

The Rise Of Faux Scientists

The verdict was in. Facts could never be absolutely verifiable, but would stand until they could be falsified. We could, after thorough testing, increase our confidence, but never be completely sure. Ironically, the demise of logical certainty led directly to the era of digital computing and a new, technological age. Just as we learned that systems would always be fallible, the machines we built became unimaginably powerful.

At the same time, human agency was increasingly called into question. It was, after all, subjective judgements that led to the Great Depression of the 1930s and the enormous wars that followed it. As the Baby Boomers came of age in the 1960s, it seemed like everything was up for debate. All of the fuzziness and uncertainty of relying on human judgment increasingly seemed impractical.

Much like Wittgenstein and the Vienna Circle, a number of thinkers sought to engineer systems that would harness natural forces to create better outcomes. The Austrian School of economics eschewed government regulation in favor of consumer preferences. Neorealism in foreign relations argued that competition and conflict could govern the international order.

Yet unlike the original logical positivists, these ideas wouldn’t stay confined to academia, but would seep into the affairs of everyday people. The consumer welfare standard insisted that market price signals, not government bureaucrats, would decide if a transaction should be permitted, while the principle of shareholder value demanded that the stock market, not managers, should govern business decisions.

The results are clear. Too little antitrust regulation has increased concentration in the vast majority of American industries and strangled competition. Our economy has become markedly less productive, less competitive and less dynamic, and purchasing power for most people has stagnated. By just about every metric, we’re worse off.

We Need To Manage Ecosystems, Not Machines

We like to think of ourselves as rational actors, weighing each piece of evidence before making a decision. Yet our brains don’t work like that. We build up our perspectives through the synapses in our brains and through our social networks, which form complex webs of influence. Once we adopt a point of view, we rarely revise it in light of new evidence.

Engineers believe in laws that can be understood and put to specific use, so they build machines to perform specific tasks. Gardeners believe in complexity and emergence. They don’t design their garden as much as tend to it, nurture it and support its surrounding ecosystem. They don’t expect the same results every time, but understand they will need to adjust their approach as they go.

We need to think less like engineers and more like gardeners. For most important purposes, we manage ecosystems, not machines. We need to think more in terms of networks that grow and less in terms of nodes whose behavior we can predict and control. Our success or failure depends less on individual entities than the connections between them.

In a world driven by networks and ecosystems, we can no longer treat strategy as if it were a game of chess, planning out each move with near perfect precision and foresight. The task of leadership is to make decisions with full knowledge that many will be wrong and that you will need to make them right.

There’s no system to do that for us, no impersonal forces that will point the way. In the end, we have to put trust in ourselves. There isn’t anyone else.

— Article courtesy of the Digital Tonto blog
— Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Winning with Artificial Intelligence in 90 Days

Exclusive Interview with Charlene Li

The rapid evolution of artificial intelligence (AI) has shifted the technology from a futuristic curiosity to the primary engine of modern organizational growth. In an era defined by data-driven decision-making, the ability to effectively harness machine learning and predictive analytics is no longer just a competitive advantage; it is a fundamental requirement for long-term viability. However, the path to integration is rarely linear. Many organizations find themselves caught between the urgent need for transformation and the daunting reality of legacy infrastructure, talent shortages, and the cultural shifts required to move beyond small-scale pilots toward true enterprise-wide intelligence.

While the potential for increased efficiency and innovation is clear, the execution remains a significant hurdle.

The organizations that thrive in this new landscape are those that treat AI as a core strategic pillar rather than a plug-and-play software update. This requires a rethink of how human talent and machine intelligence coexist, ensuring that the technology enhances human capability rather than simply automating existing inefficiencies. Overcoming these challenges involves not just technical prowess, but a disciplined approach to change management and a clear vision for how intelligence will redefine the value the organization provides to its customers.

Today we will dive deep into what it takes to quickly achieve success with artificial intelligence with our special guest.

Creating a 90-Day Blueprint to Win with Artificial Intelligence

I recently had the opportunity to interview Charlene Li, a New York Times bestselling author, keynote speaker, and AI transformation strategist. Her latest book, Winning with AI: The 90-Day Blueprint for Success, co-authored with Dr. Katia Walsh, gives senior leaders a practical framework for moving from AI experimentation to measurable business value. Her prior books include The Disruption Mindset, Open Leadership, and Groundswell. Fast Company named her one of the most creative people in business, and she has worked with global organizations including 14 of the Dow Jones Industrial 30 companies. She is the founder of Altimeter Group (acquired by Prophet) and currently leads Quantum Networks Group.

Below is the text of my interview with Charlene and a preview of the kinds of insights you’ll find in Winning with AI: The 90-Day Blueprint for Success presented in a Q&A format:

1. What confusion is being created by speaking of “AI” as one thing when there are different kinds of AI, and how does this hold back AI adoption?

When people say “AI,” they’re usually thinking ChatGPT. But ChatGPT is generative AI — and that’s just one of three types of AI showing up in business today. There’s also predictive AI, which has been quietly running in your CRM, your fraud detection, and your streaming recommendations for years. And there’s agentic AI, which takes autonomous action toward a goal rather than waiting for a prompt.

The Oracle (predictive), the Creator (generative), and the Agent (agentic) — that’s how Katia and I describe them in Winning with AI. They do fundamentally different things, and they require fundamentally different things from you.

The conflation matters because it leads to bad decisions. Leaders see a generative AI demo, get excited, and ask their teams to “do something with AI” — when the actual business problem might be better solved with predictive AI (and probably already could’ve been three years ago). Or they hear “agentic AI” and assume their organization is ready to deploy autonomous agents when they haven’t even gotten generative AI into their workforce yet.

The winners aren’t choosing among types — they’re using all three strategically, in combination. A customer care transformation might use predictive AI to route inquiries, generative AI to draft responses, and agentic AI to handle routine cases autonomously. Once you can see the three distinctly, the question stops being “what can I do with AI?” and starts being “what can AI do for me?” That’s the question that actually unlocks value.
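As a rough illustration of that combination (the function names and routing rules below are hypothetical stand-ins, not a real implementation), the three types can be chained in one customer-care pipeline:

```python
def predictive_route(inquiry: str) -> str:
    """Stand-in for predictive AI: classify the inquiry into a queue."""
    return "billing" if "invoice" in inquiry.lower() else "general"

def generative_draft(inquiry: str, queue: str) -> str:
    """Stand-in for generative AI: draft a reply for a human agent."""
    return f"[{queue}] Suggested reply to: {inquiry}"

def agentic_resolve(inquiry: str, queue: str) -> bool:
    """Stand-in for agentic AI: autonomously close routine cases."""
    return queue == "general" and len(inquiry) < 80

def handle(inquiry: str) -> str:
    # Predictive AI routes, agentic AI handles routine cases,
    # and generative AI drafts everything else for a human.
    queue = predictive_route(inquiry)
    if agentic_resolve(inquiry, queue):
        return "resolved autonomously"
    return generative_draft(inquiry, queue)

print(handle("Where is my invoice?"))
```

The point is not the toy logic but the shape: each stage is a different kind of AI doing the job it is suited for, rather than one model doing everything.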

2. What are some of the key characteristics of AI inertia and some of the best ways to break free?

We call it pilot purgatory — and almost every organization we work with is stuck there. The signs are easy to spot: dozens of disconnected pilots, lots of conference attendance, lots of slide decks, no measurable financial impact. An MIT study found 95% of AI initiatives fail to scale. That’s not a technology failure. It’s a failure of leadership and culture.

The classic characteristics:

    • Use cases as a strategy. Many use cases equals procrastination. A long list of pilots is how organizations look busy without committing to anything.
    • Diffused accountability. When the CIO, CFO, and CMO all “share” responsibility for AI, no one owns the outcome.
    • Waiting for the foundation to be perfect. Clean data, the right platform, the perfect org structure — these become reasons to delay rather than constraints to solve through.
    • Confusing motion with progress. Running pilots feels like progress. It isn’t, unless those pilots are tied to your most important business problems.

To break free: pick your biggest strategic problems, figure out how AI solves them, invest heavily in those solutions, and move with urgency. Appoint one AI value owner who lives, breathes, and dreams AI outcomes. Kill pilots that aren’t on a path to scale. And replace “fail fast” with “learn fast” — nobody actually rewards failure, and the language of failure lets people walk away from things that should be pushed through.

Speed is the new moat. The companies that win aren’t the ones with the best technology. They’re the ones that adapt faster than their competitors.

3. There are still a lot of people out there not using AI (or not realizing that they are). What are some of the best ways for people to get started with AI?

Most people are already using AI — every spam filter, every Google Maps route, every recommendation on a streaming service is AI. So the real question is: how do you get started with the kind of AI that’s reshaping work right now, which is generative AI?

My advice is genuinely simple. Pick one of the major tools — Claude, ChatGPT, Gemini, Copilot — and start using it for one real task you do every week. Not a toy task. A real one. Drafting an email. Prepping for a meeting. Summarizing a long document. Brainstorming an approach to a problem you’re stuck on.

Two practical tips that make a big difference:

Write better prompts. A good prompt has a role (“Act as a marketing strategist”), instructions (what you want done), context (the background the AI needs), and an output format (memo, table, slide outline). Then refine through dialogue. Most people give AI two sentences and judge it on the result. Give it two paragraphs and you’ll be amazed.

Try the flipped interaction. Instead of asking AI for an answer, ask it to ask you questions until it has enough context to give a good answer. For example, at the end of a prompt, add this sentence: “Ask me any clarifying questions you may have.” It turns your prompt into a conversation.
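Those two tips can be combined into a simple template. A minimal sketch in Python (the function name and template wording are illustrative, not from the book; it only assembles text and calls no AI service):

```python
def build_prompt(role: str, instructions: str, context: str,
                 output_format: str, flipped: bool = False) -> str:
    """Assemble a structured prompt from the four parts described above:
    a role, instructions, context, and an output format."""
    parts = [
        f"Act as {role}.",
        f"Task: {instructions}",
        f"Context: {context}",
        f"Format the output as {output_format}.",
    ]
    if flipped:
        # The "flipped interaction": invite the AI to interview you first.
        parts.append("Ask me any clarifying questions you may have.")
    return "\n".join(parts)

prompt = build_prompt(
    role="a marketing strategist",
    instructions="draft a launch email for our new analytics feature",
    context="Audience: existing customers on the free tier; goal is upgrades",
    output_format="a short email with a subject line",
    flipped=True,
)
print(prompt)
```

Pasting the resulting text into any of the major tools turns a two-sentence request into the two-paragraph, dialogue-starting prompt Charlene recommends.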

I think of AI fluency as learning to eat with chopsticks: at first you’re concentrating on every motion, and eventually it’s just how you eat. You won’t get there by reading about it. You get there by using it. Every day. On real work.

4. Does AI safety really matter? It seems like all of the major AI players are just focused on speed and getting to AGI before China, am I wrong?

You’re not wrong about what the AI players are doing. But you’re probably not playing that game – more on that below. First, I’d push back on the framing that safety and speed are opposites.

Think of Formula 1. The drivers who win championships have absolute confidence in their brakes, their crash structures, their fire suppression systems. That’s why they can push so hard on speed. Safety is what makes speed possible. The companies moving fastest on AI adoption aren’t the ones cutting corners on responsibility — they’re the ones with the highest ethical standards, because trust eliminates friction. When your team knows where the guardrails are, when your customers trust your intentions, when your board has confidence in your approach, you can move at the speed AI demands.

The 2024 Edelman Trust Barometer found that 43% of people would reject AI in products and services if they don’t believe the innovation has been thoroughly scrutinized. That’s not a PR problem — it’s a revenue and competitive position problem.

On the AGI race specifically, the geopolitical framing oversimplifies what’s actually a much more textured conversation about how AI is deployed within companies, governments, and communities. Most leaders I work with aren’t worrying about AGI — they’re worrying about whether their AI customer service tool is treating customers fairly, whether their AI-driven hiring screen is introducing bias, and whether their data is being used in ways customers didn’t consent to. Those are the safety questions that matter for the next five years, regardless of what the frontier players are doing.

5. Where is the government being too hands off with AI and its impacts, and what conversations should governments and societies be having about AI and its impacts that they’re not?

I’ll be careful here because I’m not a policy person — I work with the leaders implementing AI inside organizations. But from that vantage point, a few things stand out.

The conversation we aren’t having enough is about workforce transition. Not “will AI take jobs” — we’ve been arguing about that abstractly for three years. The real question is what happens to the millions of people whose roles will substantially change in the next five years, and who’s responsible for helping them adapt. Right now, that’s mostly being left to individual employers, and the gap between what enlightened employers are doing and what the median employer is doing is enormous. That gap will become a societal problem long before regulators catch up.

The second underdiscussed conversation is about education. We’re training a generation of students with curricula designed for a pre-AI world. By the time we figure out what AI fluency looks like in K–12, the kids who needed it most will be in the workforce.

Third — and this is where I’d actually like to see governments lean in more — is data. Most AI regulation focuses on the models. The leverage is in the data: who owns it, how it can be used, what consent looks like in a world where data collected for one purpose can be repurposed for AI training that wasn’t imagined when it was collected.

That said, regulations always lag technology. Anchoring your responsible and ethical AI policy in your organization’s values rather than waiting for rules is the right move, regardless of what governments do.

6. What are the key pillars that form the basis of a strong AI foundation for those who seek to take full advantage of AI in their organization?

In Winning with AI, Katia and I lay out four building blocks. They develop together, not sequentially.

Mindset — the cultural ability to move at AI’s speed. Speed, focus, customer-centricity, experimentation, and learning from setbacks rather than treating them as evidence that the technology doesn’t work. Without the right mindset, you can have the best tools in the world, and they’ll sit unused.

Skillset — AI fluency across the workforce, not just in IT. Everyone needs to understand what AI can and can’t do, how to use it responsibly, and how to apply it to their actual work.

Toolset — the technical foundation. We tell leaders to build with LEGO, not cathedrals. Modular, interchangeable components you can swap as the technology evolves, sitting on top of data that’s good enough to start with.
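The “LEGO, not cathedrals” idea can be pictured in code. A minimal, hypothetical sketch (the names TextModel, EchoModel, and SummarizerService are illustrative, not from the book): business capabilities depend on a narrow interface, so the underlying model can be swapped as the technology evolves.

```python
from typing import Protocol

class TextModel(Protocol):
    """Any text-generation backend the organization might plug in."""
    def generate(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend for local testing; swap in a real provider later."""
    def generate(self, prompt: str) -> str:
        return f"[draft] {prompt}"

class SummarizerService:
    """A business capability that depends only on the TextModel interface,
    not on any particular vendor, so components stay interchangeable."""
    def __init__(self, model: TextModel):
        self.model = model

    def summarize(self, document: str) -> str:
        return self.model.generate(f"Summarize: {document}")

service = SummarizerService(EchoModel())
print(service.summarize("Q3 customer-call transcripts"))
```

Replacing EchoModel with a different backend requires no change to SummarizerService, which is the modularity the building-block metaphor is after.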

Decision-set — the governance and decision-making structures that let you move fast without breaking things. Who decides what, how quickly, with what oversight.

The mistake organizations make is treating these as a sequence — first we’ll fix the data, then we’ll train people, then we’ll deploy. That sequence will take you a decade. The right approach is to build the blocks while delivering value, using each AI application to strengthen multiple blocks at once.

And one piece that wraps all four: leadership. Without active, visible commitment from the top, the four building blocks don’t compound. With it, they accelerate.

7. Of all the outcomes that the different types of AI can achieve, which activities create the most value for organizations?

We frame the value AI creates in three areas: engagement, efficiencies, and reinvention.

Engagement is about deepening relationships with customers and employees through personalization, prediction, and proactive service. Anticipating what someone needs before they articulate it.

Efficiencies are about doing what you already do, faster and cheaper. This is where most organizations start — and where most get stuck. Efficiency gains are real, but they’re easy for competitors to replicate, which means they don’t create lasting advantage.

Reinvention is the most transformational and the most uncomfortable. It’s not asking “how can we do what we do faster?” — it’s asking “what becomes possible now that the old constraints are gone?” New business models. New revenue streams. New markets that were never economical before.

The trap is thinking efficiency is AI’s value. We call it the efficiency trap. Companies that limit themselves to efficiency are using a strategic weapon as a cost-cutting tool. The real competitive advantage comes from engagement and reinvention.

A great example: Coursera. Translation used to cost about $10,000 per course, which made global expansion economically impossible at the scale of their 5,000+ course catalog. Generative AI eliminated that constraint overnight. CEO Jeff Maggioncalda saw it immediately and launched Project Genesis by the end of 2022. That’s reinvention — AI removing a constraint that defined the business model.

If I had to pick one activity that creates the most value, it would be: using AI to remove a constraint that has shaped your industry’s economics for so long that nobody questions it anymore.

8. There was a lot of talk for a while about becoming an AI-first organization. Is this something that companies should be trying to do?

No. Be AI-ready instead.

“AI-first” is a technology company’s framing. It puts the technology in the driver’s seat, which sounds visionary but in practice produces dozens of disconnected pilots with no strategic impact. You end up chasing AI because it’s shiny rather than because it solves a real problem.

“AI-ready” is a business leader’s framing. It puts strategy in the driver’s seat. You’re building the culture, the skills, the decision systems, and the technical foundation that let AI create real value against the strategic priorities you already have.

Said simply: AI-first is a technology mindset. AI-ready is a business mindset.

You don’t actually need an AI strategy. You need a business strategy that uses AI. Anyone selling you on an AI strategy is selling you the wrong thing.

9. What should people be doing as individuals to maintain their value to their organizations and to grow their careers?

Three things, in order.

One: develop genuine AI fluency. Not “I’ve used ChatGPT a few times” fluency. Real fluency — the kind where AI is woven into how you think, prepare, decide, and communicate. The people and organizations who get to AI fluency in 2026 will pull dramatically ahead of those who don’t, and the gap will be very hard to close once it opens.

Two: deepen what’s uniquely human. AI can amplify cognition at speeds and scales no individual can match. What it can’t do is exercise empathy, self-reflection, intuition, judgment, and wisdom. These five traits — the foundation of what Katia and I call “superhumans” in the book — become more valuable, not less, as AI handles more of the cognitive work. The leaders who pair AI’s reach with these distinctly human capacities are the ones creating the most value.

Three: build a lifelong learning practice. The shelf life of any specific skill is shrinking. The skill that doesn’t depreciate is the ability to learn — quickly, repeatedly, with intellectual humility. Normalize not knowing. Embed reflection into how you work. Treat curiosity as a professional asset, not a side hobby.

If you do those three things, you’ll be more valuable in the future than you are today, regardless of what happens to your specific role.

10. What have organizations gotten wrong about rolling out AI and what can the early adopters do to recover from botched initial rollouts?

The biggest things organizations get wrong:

  • Treating AI as a technology project. It’s a business initiative for value creation that happens to use technology. When IT owns it, it stays small.
  • Use cases instead of strategy. A laundry list of pilots is procrastination dressed up as progress.
  • Diffused accountability. Without a single AI value owner, the work fragments.
  • Skipping the people work. Throwing tools at employees without addressing the fear underneath. Until fear is replaced by trust, no amount of training will change behavior.

If you’ve already botched the rollout, here’s the recovery path:

Stop and audit. What’s actually scaling, what’s not, what’s draining resources without producing value? Be honest. Sunset the dead ends.

Appoint one accountable AI leader. If no single person is accountable for AI value creation across the enterprise, fix that this quarter. Not part-time, not committee-led — one person whose performance is measured on the value that AI creates.

Pick one strategically meaningful problem and go after it. Not the easiest problem. The one whose solution would matter most to the business.

Learn from Ally Bank. When generative AI emerged, Ally’s CIO Sathish Muthukrishnan deliberately chose the most resistant audience — customer service agents — and a low-stakes problem: summarizing customer calls. The result was so valuable that the agents who’d been most skeptical became the loudest advocates: “Don’t take this away from me.” Targeting the skeptics with a real win is one of the most powerful change strategies we’ve seen.

A botched rollout isn’t a death sentence. It’s actually a useful clearing of the underbrush — assuming you learn from it.

11. Several studies have come out recently about the negative effects of AI on human cognition. Any tips for how to best use AI without degrading your brain?

This is a real concern and worth taking seriously. The risk isn’t AI itself — it’s lazy AI use. Using AI to skip thinking rather than to enhance it.

A few habits I’ve found useful:

Think first, then prompt. Before going to AI for an answer, write down what you think. Coursera’s Jeff Maggioncalda calls this cognitive bootstrapping — write your perspective on a decision, then ask AI to challenge it: “What are the strengths and weaknesses of this view? What are my blind spots? What would you recommend I improve?” AI sharpens your thinking instead of replacing it.

Treat AI outputs as drafts, not deliverables. Read critically. Push back. Ask why. Verify facts. The moment you stop questioning AI’s outputs is the moment your thinking starts to atrophy.

Protect deep work. Schedule time for thinking that doesn’t involve AI at all. Reading, writing, reflecting, walking — the unstructured time where your brain consolidates what it knows. AI can compress research, but it can’t compress wisdom. That still has to come from lived experience, integrated over time.

Notice the difference between using AI to accelerate something you understand and using AI to substitute for understanding. Acceleration is healthy. Substitution erodes you.

The promise of AI isn’t to do our thinking for us. It’s to help us think better. The discipline is staying on the right side of that line.

12. Any question you wish I had asked but didn’t?

Yes — I’d love a question about the human possibility on the other side of this.

Most AI conversation is about risk, displacement, and disruption. Those are real. But the conversation Katia and I get most excited about is what becomes possible when AI handles the cognitive work that has been depleting people for decades — the synthesis, the routing, the routine analysis — and frees up human capacity for what only humans can do.

We call those people “superhumans” — not because they’re enhanced by technology in some sci-fi sense, but because they finally have the room to be more deeply human. To exercise empathy, self-reflection, intuition, judgment, and wisdom at a level that’s been crowded out by cognitive overload.

The first companies to deliberately develop an organization filled with superhumans won’t just have a competitive advantage. They’ll be creating an entirely new form of value — one we haven’t fully named yet. That’s the future I want leaders thinking about. Not “how do I survive AI?” but “what becomes possible for my people on the other side of this?”

Dream it. Then build it.

Conclusion

Thank you for the great conversation, Charlene!

I hope everyone has enjoyed this peek into the mind of one of the women behind the insightful new title Winning with AI: The 90-Day Blueprint for Success!

Image credits: Charlene Li, Pexels


Why Zero UI Will Redefine Experience Design

The Invisible Interface

LAST UPDATED: May 2, 2026 at 9:13 AM


GUEST POST from Art Inteligencia


I. Introduction: The End of the Glass Slab

The Screen Fatigue Phenomenon: We have reached a point of peak saturation with traditional displays. Our lives are currently mediated by glowing rectangles, leading to a fragmented human experience where the tool often overshadows the task.

Defining Zero UI: This is not the absence of an interface, but the disappearance of the user interface as we know it. It represents a move away from rigid, button-heavy menus toward more organic inputs like voice, haptics, computer vision, and ambient intelligence.

The Core Thesis: Technology is at its most powerful when it is invisible. By removing the friction between human intent and technological execution, we allow people to return their focus to the experience itself, rather than the device required to facilitate it.

II. The Sensory Stack: How Zero UI Works

Voice & Natural Language: We are witnessing a transition from the “Command-Line Interface” era of voice (where specific keywords were required) to fluid, contextual conversations. The goal is a system that understands nuance, sarcasm, and intent, mirroring human-to-human interaction.

Biometrics & Gesture Control: In a Zero UI world, the body becomes the input device. Through computer vision and skeletal tracking, technology can interpret a wave of a hand or a shift in gaze, allowing for spatial computing that feels like an extension of natural movement.

Proactive vs. Reactive Design: Traditional UI waits for a user to click; Zero UI anticipates. By leveraging machine learning and sensor data, systems can predict needs—adjusting the lighting when you enter a room or preparing a summary of a meeting before you even ask for it.

Haptics & Sensory Feedback: Communication doesn’t always need to be audible or visual. Subtle vibrations (haptics) or environmental changes (thermal or olfactory cues) can provide “glanceable” information without demanding the user’s full cognitive attention.

III. From UX to HX (Human Experience)

Designing for Context: In the era of Zero UI, the focus shifts from “clicks” to “intent.” Experience design no longer lives within the boundaries of a screen; it must account for a user’s physical location, environmental noise levels, and even social setting. We aren’t just designing a path to a button; we are designing a response to a human moment.

Reducing Cognitive Load: The “Invisible Assistant” model moves us away from app management and toward outcome management. By utilizing ambient intelligence, technology handles the “how” so humans can focus on the “why.” This creates a “Calm UI” effect, where digital interactions support our life goals without demanding constant visual attention.

The Ethics of Invisibility: As interfaces disappear, the “Black Box” problem grows. Designers must prioritize radical transparency—ensuring users understand when and how they are being sensed. Trust becomes the primary currency; without clear consent and “off-switches” for predictive features, invisible interfaces risk becoming intrusive rather than helpful.

From Screens to Systems: We are moving toward “Sentient Interfaces” that detect hesitation or frustration through behavioral cues. Transitioning to HX (Human Experience) means building ecosystems that are emotionally aware, neuro-inclusive, and capable of failing gracefully when the AI misinterprets human intent.

IV. Leading Innovators: The Architects of Invisibility

The transition to Zero UI is being led by a diverse ecosystem of startups and legacy tech giants. As of 2026, the following organizations are moving beyond the screen to define the future of human-centered interaction:

    • Neuralink (Brain-Computer Interface, BCI): Entering high-volume production in 2026, Neuralink is moving BCI from clinical trials to the ultimate seamless interface: thought-based control.
    • Ultraleap (Mid-air Haptics & Tracking): By projecting ultrasound waves onto the skin, Ultraleap provides tactile feedback in mid-air, crucial for non-visual “touch” in automotive and XR environments.
    • SoundHound AI (Agentic Voice Commerce): Its latest “Amelia 7” platform allows users to manage complex real-world transactions — like dinner reservations and parking — entirely through natural conversation.
    • Memories.ai (Contextual Wearables, LUCI): Following the pivot of early wearables like the Humane Ai Pin, Memories.ai is building the “Android of AI wearables,” providing a system-level reference for ambient intelligence.
    • Synchron (Endovascular BCI): A key competitor to Neuralink, Synchron focuses on minimally invasive brain interfaces that allow users to control digital devices via the blood vessels, emphasizing safety and accessibility.

Strategic Implementation: For brands, the challenge is no longer just “building an app.” It is about integrating into these emerging ecosystems. Whether it is through voice agents or haptic-enabled environments, the goal for designers is to ensure their brand’s presence is felt and heard, even when it cannot be seen.

V. The Futurologist’s Perspective: What’s Next?

The Transition to “Liquid Services”: In 2026, we are moving away from the “static app” model. Instead, we are entering the era of liquid services—capabilities that flow seamlessly across devices. Your interaction might start as a voice command in the kitchen, continue as a haptic pulse on your wrist while walking, and conclude as a spatial projection in your vehicle. The interface is no longer a destination; it is a persistent, supportive presence.

Hyper-Personalization and Ambient Intelligence: One-size-fits-all design is dead. Leveraging what I call “Fortified Intelligence,” future systems will adapt in real-time to the individual’s neurodiversity, physical abilities, and current emotional state. Environments will become “sentient,” adjusting lighting, acoustics, and information density based on the user’s “Digital Persona” without a single manual adjustment.

The Challenge for Designers: Behavioral Architecture: The role of the designer is shifting from visual storytelling to behavioral and sensory architecture. We are no longer just drawing screens; we are defining the “rules of engagement” between humans and machines. This requires a Whole-Brain approach—part scientist to manage the data and part artist to inspire human connection. Success in this new landscape is measured by “Speed to Resilience” rather than just speed to market.

Reclaiming the Human Moment: Paradoxically, the more advanced our technology becomes, the more we value “human friction.” As Zero UI automates the logistical “drudge work” of life, experience design for the future will emphasize the things AI cannot replicate: intentional inefficiency, the warmth of human presence, and the physical tangibility of the world around us. We are designing technology to get it out of the way, so we can finally be human again.

VI. Conclusion: Reclaiming the Human Moment

Beyond Efficiency: As I often say, true innovation isn’t just about making things faster or cheaper—it’s about making things more human. Zero UI is the final step in removing the technical debt of the 21st century. By dissolving the “glass slab” that separates us from our tasks, we aren’t just improving efficiency; we are restoring presence. When the technology disappears, we are finally free to focus on the work that matters and the people who inspire us.

A Call for Design Integrity: As we look toward the 2030s, the “Wild West” era of digital interfaces is closing. We are entering an era of Structural Integrity in experience design. Designers and innovation leaders must move beyond “Process Theater”—workshops that generate ideas without outcomes—and start building the resilient, invisible infrastructure that supports a flourishing society. We must have the courage to design a future that doesn’t require us to retreat into the friction of the past.

Final Thought: The most disruptive interface is the one that doesn’t exist because it works so well you’ve forgotten it’s there. The goal of the Invisible Interface is not to automate the human out of the loop, but to close the loop on friction, leaving only the experience behind. Let’s design an infrastructure that doesn’t just survive the future, but defines it.

Are you ready to move from UX to HX?

If you’re looking to get to the future first, increase your speed of innovation, or create a culture of continuous transformation, connect with Braden Kelley for a keynote or a FutureHacking™ workshop to teach you to be your own futurist.

Frequently Asked Questions

What is the difference between Zero UI and traditional UI?

Traditional UI (User Interface) relies on visual elements like screens, buttons, and menus to facilitate interaction. Zero UI moves away from these “glass slabs,” instead utilizing natural human behaviors—such as voice, gestures, haptics, and ambient intelligence—to interact with technology without a physical screen as the primary mediator.

How does Zero UI improve the Human Experience (HX)?

By reducing cognitive load and removing the friction of navigating complex menus, Zero UI allows technology to become a proactive assistant rather than a reactive tool. This shift toward “Human Experience” prioritizes context and intent, allowing users to stay present in their physical environment while still benefiting from digital capabilities.

Is Zero UI secure and private?

As interfaces become invisible, transparency becomes the most critical design element. Leading innovators are focusing on “Privacy by Design,” ensuring that ambient sensing and voice processing are handled with clear consent and robust encryption, often processing data locally (on-edge) rather than in the cloud to maintain user trust.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credits: Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

A Tiny Bit of Uninterrupted Work Goes a Long Way

GUEST POST from Mike Shipulski

If your day doesn’t start with a list of things you want to get done, there’s little chance you’ll get them done. What if you spent thirty minutes defining the things you want to get done and then spent an hour getting them done? In ninety minutes you’ll have made a significant dent in the most important work. It doesn’t sound like a big deal, but it’s bigger than big. Question: How often do you work for thirty minutes without interruptions?

Switching costs are high, but we don’t behave that way. Once interrupted, what if it takes ten minutes to get back into the groove? What if it takes fifteen minutes? What if you’re interrupted every ten or fifteen minutes? Question: What if the minimum time block to do real thinking is thirty minutes of uninterrupted time?

Let’s assume that in your average week you carve out sixty minutes of uninterrupted time each day to do meaningful work. Doing as I propose (spending thirty minutes planning and sixty minutes doing something meaningful every day) increases your meaningful work by 50%. Not bad. And if in your average week you currently spend thirty contiguous minutes each day doing deep work, the proposed ninety-minute arrangement increases your meaningful work by 200%. A big deal. And if you only work uninterrupted for thirty minutes on three out of five days, the ninety-minute arrangement increases your meaningful work by 400%. A night-and-day difference.

Question: How many times per week do you spend thirty minutes of uninterrupted time working on the most important things? How would things change if every day you spent thirty minutes planning and sixty minutes doing the most important work?

Great idea, but with today’s business culture there’s no way to block out ninety minutes of uninterrupted time. To that I say, before going to work, plan for thirty minutes at home. And set up a sixty-minute recurring meeting with yourself first thing every morning and do sixty minutes of uninterrupted work. And if you can’t sit at your desk without being interrupted, hold the sixty-minute meeting with yourself in a location where you won’t be interrupted. And, to make up for the thirty minutes you spent planning at home, leave thirty minutes early.

No way. Can’t do it. Won’t work.

It will work. Here’s why. Over the course of a month, you’ll have done at least 50% more real work than everyone else. And, because your work time is uninterrupted, the quality of your work will be better than everyone else’s. And, because you spend time planning, you will work on the most important things. More deep work, higher quality working conditions, and regular planning. You can’t beat that, even if it’s only sixty to ninety minutes per day.

The math works because in our normal working mode, we don’t spend much time working in an uninterrupted way. Do the math for yourself. Sum the number of minutes per week you spend working at least thirty minutes at a time. And whatever the number, figure out a way to increase the minutes by 50%. A small number of minutes will make a big difference.
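The percentages above are easy to verify. Here is a minimal sketch of that arithmetic, using the three baselines from the examples (the 90-minutes-per-day, five-day-week proposal is taken directly from the text; everything else is just illustrative bookkeeping):

```python
def weekly_increase(baseline_minutes_per_week, proposed_minutes_per_week=90 * 5):
    """Percentage increase in uninterrupted work per five-day week."""
    gain = proposed_minutes_per_week - baseline_minutes_per_week
    return 100 * gain / baseline_minutes_per_week

# The three baselines described in the article:
print(weekly_increase(60 * 5))  # 60 uninterrupted minutes every day  -> 50.0
print(weekly_increase(30 * 5))  # 30 uninterrupted minutes every day  -> 200.0
print(weekly_increase(30 * 3))  # 30 minutes on only 3 of 5 days      -> 400.0
```

The same function lets you plug in your own weekly total and see what the ninety-minute arrangement would change for you.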

Image credit: Pexels


The Customer Confidence Score™ (CCS)

GUEST POST from Shep Hyken

Recently, I wrote about a customer trust survey. The feedback was amazing, which compelled me to take this a step further. After more writing and additional research, I recognized the need for more attention to a metric that measures a customer’s trust, which will directly correlate with customer satisfaction levels, loyalty, and any metric that measures what keeps customers or drives them away.

Merriam-Webster defines trust as “assured reliance on the character, ability, strength, or truth of someone or something” and as “one in which confidence is placed.”

One can’t ignore that the word confidence is part of the definition! They are very closely linked. We might ask something similar to, “Which came first, the chicken or the egg?” The question would be, “Which comes first, confidence or trust?”

Or, put another way: Does more trust lead to higher confidence, or does a higher level of confidence lead to more trust?

Or does it really matter? If you have both, you win. My take is that trust leads to confidence. Customers show confidence in your company through repeat business and referrals. That’s how they express their trust.

And that is why I’m officially announcing to you, our subscribers, readers, and viewers, a name to describe the trust questions I recently covered. I call it the Customer Confidence Score™ (CCS), another question to add to the survey questions you use to measure customer satisfaction (CSAT) and Net Promoter Score (NPS). Here’s an anchor question from my recent article on trust surveys:

On a scale of 1-10, how much do you trust that we will always do what’s right for you as our customer?

If your customer doesn’t give you a perfect 10 on this question, there are trust issues. Customers either fully trust you, or they don’t. And obviously, the lower the score, the less likely you’ll see them return. But a score alone is just a number. The real insight comes when you ask your customers why they gave you that score. The answer is your opportunity to resolve trust issues and improve the likelihood they will return.
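As a hedged illustration of how responses to this anchor question might be tabulated, here is a minimal sketch. The data shape and the follow-up rule (flag every score below a perfect 10, per the paragraph above) are my assumptions, not a prescribed implementation:

```python
def summarize_ccs(responses):
    """Summarize trust-question responses on the 1-10 scale.

    responses: list of (score, why) tuples. Per the article, any score
    below a perfect 10 signals a trust issue worth following up on.
    """
    scores = [score for score, _ in responses]
    average = sum(scores) / len(scores)
    follow_ups = [(score, why) for score, why in responses if score < 10]
    return average, follow_ups

# Hypothetical survey results:
responses = [(10, "always honest"), (7, "slow refunds"), (9, "mostly reliable")]
avg, follow_ups = summarize_ccs(responses)
print(round(avg, 2))   # average trust score
print(len(follow_ups)) # customers whose "why" deserves attention
```

The point of the `follow_ups` list is the article’s insight: the score alone is just a number, and the attached “why” is where the opportunity lives.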

The Customer Confidence Score™ is the result of surveying for trust, but it’s more than just another metric. It doesn’t replace CSAT or NPS. It completes them by measuring the foundation they are built on: trust. Without trust, a high CSAT or NPS score may be temporary at best. Measure CCS consistently, act on the insights, and you’ll build the kind of confidence and loyalty that get customers to say, “I’ll be back!”

Image Credit: Pexels


Designing Work for Humans and AI Agents to Do Together

LAST UPDATED: April 29, 2026 at 6:28 PM

by Braden Kelley and Art Inteligencia


The Work Design Gap

We are not struggling to build artificial intelligence. We are struggling to design work for it.

Across industries, organizations are layering AI onto workflows that were never meant for collaboration. The result is predictable: inefficiency, mistrust, and unrealized value.

The real divide is not human versus AI. It is between work that is intentionally designed for collaboration and work that is not.

Why Traditional Tools Fail Us

Most of our management tools were built for a different era.

  • Process maps assume predictability
  • Org charts assume static roles
  • RACI models assume clear ownership

But human and AI collaboration is dynamic, contextual, and continuously learning. These tools help us optimize yesterday’s work, not design tomorrow’s.

What we need is a new visual language for collaboration.

Introducing the Human–AI Collaboration Canvas

The infographic below is not just a diagram. It is a thinking tool.

Its purpose is to make invisible interactions visible, clarify roles without over-constraining them, and embed judgment, trust, and learning into how work gets done.

This is a shift from process design to system design for collaboration.

Designing Work for Humans and AI Infographic

The Three-Lane Model: A More Honest Representation of Work

The canvas is built around three interconnected lanes:

The Human Lane

Where judgment, empathy, ethics, and accountability live. Humans frame the problem, not just solve it.

The AI Agent Lane

Where scale, speed, pattern recognition, and automation operate. AI expands what is possible.

The “Together” Lane

This is where value is actually created. Co-creation, co-decision, and co-learning happen here.

If you are not explicitly designing the middle lane, you are leaving value on the table.

The Work Journey: Sense → Decide → Act → Learn

Instead of rigid workflows, the canvas maps work as an adaptive cycle:

  • Sense: Understand context and gather signals
  • Decide: Blend human reasoning with AI recommendations
  • Act: Execute with scale and oversight
  • Learn: Reflect, adapt, and improve

Learning is not the end of the process. It feeds everything.

Collaboration Nodes: Where the Magic (or Failure) Happens

At key points in the journey are collaboration nodes—the moments where humans and AI interact.

Each node forces three critical questions:

  • Who leads?
  • What is the role of the other?
  • What is at stake?

Most AI failures are not technical failures. They are interaction design failures.

Making Judgment Visible

One of the biggest risks in AI adoption is invisible decision-making.

The canvas highlights:

  • Where human judgment is required
  • Where AI recommendations are sufficient
  • Where escalation is necessary

Automation without explicit judgment design is just risk at scale.
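One way to make that judgment design explicit is to encode the three routing outcomes above as a rule rather than leaving them implicit in the workflow. This is a sketch under stated assumptions: the confidence threshold, the stakes labels, and the function name are all illustrative placeholders a team would define for itself, not part of the canvas:

```python
def route_decision(ai_confidence, stakes):
    """Decide who acts on an AI recommendation.

    ai_confidence: the model's self-reported confidence in [0, 1].
    stakes: "low" or "high", set by the workflow designer.
    The 0.9 threshold is a placeholder a team would tune.
    """
    if stakes == "high":
        return "human-decides"  # human judgment is required
    if ai_confidence >= 0.9:
        return "ai-acts"        # AI recommendation is sufficient
    return "escalate"           # escalation to a human is necessary

print(route_decision(0.95, "low"))   # ai-acts
print(route_decision(0.95, "high"))  # human-decides
print(route_decision(0.60, "low"))   # escalate
```

Even a rule this simple forces the conversation the canvas asks for: where is human judgment required, where is the AI sufficient, and where do we escalate.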

Designing for Trust, Not Just Performance

Capability alone is not enough. Systems must be trusted to be used effectively.

This requires:

  • Transparency
  • Explainability
  • Auditability

The real question is not “Can the AI do this?” but “Will humans trust and use this appropriately?”

Learning Loops: The System That Gets Smarter

The canvas includes two reinforcing learning loops:

  • AI Learning Loop: Data → Model → Output → Feedback → Improvement
  • Human Learning Loop: Experience → Reflection → Insight → Better decisions

The real competitive advantage is not AI itself. It is how quickly your combined system learns.

Risk, Ethics, and Failure by Design

No system is perfect. The best systems are designed with failure in mind.

The canvas highlights:

  • Bias and fairness
  • Privacy and security
  • Safety and compliance

It also asks essential questions:

  • What happens if the AI is wrong?
  • What happens if the human is wrong?
  • How do we recover?

Resilience comes from designing for breakdowns, not ignoring them.

Human-AI Agent Work Collaboration Canvas

How to Use This Canvas

This is a practical tool, not a theoretical one.

  • Use it in workshops to map collaboration
  • Audit existing workflows
  • Design new human–AI systems from scratch

A simple place to start:

  1. Map one critical workflow
  2. Identify collaboration nodes
  3. Redesign the “together” lane first

Designing for a More Human Future

AI does not reduce the need for humans. It raises the bar for how we design work.

The goal is not efficiency alone. The goal is better decisions, better experiences, and better outcomes.

The organizations that win will not be the ones with the most AI. They will be the ones who best design how humans and AI work together.

EDITOR’S NOTE: You should read this article too to learn more about atomizing work for man and machine to do together.

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT and Google Gemini to clean up the article, add images and create infographics.

Image credits: Google Gemini, ChatGPT


LAST UPDATED: April 29, 2026 at 12:03 PM

Go Beyond SLAs and Measure Human Success with the XLM Matrix

by Braden Kelley


The Crisis of the “Efficient but Empty” Experience

In our current landscape of rapid digital transformation, we have achieved unprecedented levels of speed and automation. Organizations have mastered the “how” of delivery, yet many find themselves facing a growing paradox: processes are becoming more efficient while human satisfaction is simultaneously declining. We are successfully building faster systems that often leave the user feeling more like a cog in a machine than a valued participant.

The root of this issue lies in our reliance on traditional Service Level Agreements (SLAs). For decades, SLAs have served as the gold standard for operational success, measuring technical markers like system uptime, response times, and throughput. While these metrics are essential for maintaining infrastructure, they are fundamentally “cold” metrics. They can tell you that a system is functioning, but they cannot tell you if the person using that system is thriving, frustrated, or merely exhausted by the interaction.

To innovate effectively in a human-centered future, we must look beyond technical availability and begin measuring the actual quality of the human encounter. We need a shift in perspective—moving from monitoring system performance to measuring human success. This evolution requires a new framework: Experience Level Measures (XLMs). By focusing on how an innovation impacts the user’s cognitive load, sense of agency, and emotional resonance, we can move past “efficient but empty” outputs and toward solutions that deliver genuine value.

Introducing the XLM Matrix

To bridge the gap between technical output and human success, we developed the XLM (Experience Level Measure) Matrix. This visual framework is designed to help innovation teams move beyond abstract empathy and toward concrete, measurable experience improvements. By visualizing the relationship between friction, measurement, and action, teams can align their efforts with the outcomes that actually move the needle for their users.

The matrix is structured as a series of concentric rings, requiring teams to work from the “inside out” to ensure every innovation is rooted in a real-world human need:

  • The Inner Circle (The Friction Point): This is the starting line. Here, teams identify the specific “ugh” moment—the point in the journey where the user currently feels confused, slowed down, or disempowered.
  • The Middle Ring (The XLM): This layer transforms qualitative frustration into a quantitative metric. It asks: “How do we measure the absence of that friction?” An XLM isn’t about system uptime; it’s about the user’s success rate in reaching their goal without cognitive fatigue.
  • The Outer Ring (The Innovation Lever): Once the friction is identified and the metric is set, the outer ring focuses on the solution. It identifies the specific change in the product, service, or workflow that will directly influence the XLM and eliminate the friction point.

By using this “Target Logic,” teams ensure that they aren’t just innovating for the sake of novelty, but are strategically pulling levers that have a measurable impact on the human experience.

The XLM (Experience Level Measure) Matrix

The Four Pillars of Human-Centered Innovation

To provide a comprehensive view of the user experience, the XLM Matrix is divided into four critical quadrants. Each quadrant represents a fundamental pillar of how humans interact with technology and services. By examining an innovation through these four lenses, teams can uncover hidden friction points and prioritize improvements that resonate most deeply with their audience.

1. Cognitive Load

“Does this make the user’s life simpler or more complex?”

In an age of information abundance, mental energy is a finite resource. This pillar focuses on the mental effort required to complete a task. Innovation here is about reducing noise, simplifying navigation, and ensuring that the “cost of thinking” is kept to an absolute minimum.

2. Time-to-Value

“How quickly does the user reach their ‘Aha!’ moment?”

Success is often determined by the distance between a user’s first interaction and their first realization of value. This quadrant measures the speed of relevance. Effective innovation in this space removes barriers to entry and streamlines the path to a meaningful outcome.

3. Agency

“Does the user feel in control, or like a cog in the process?”

As systems become more autonomous, maintaining human agency is vital. This pillar explores whether a tool empowers the user or forces them into a rigid, predetermined path. High-agency innovations provide the user with the autonomy to make meaningful choices and direct the outcome.

4. Emotional Resonance

“Does the interaction build trust or cause frustration?”

Every interaction leaves an emotional footprint. This quadrant assesses the “vibe” of the experience. It looks beyond function to ask if the solution feels reliable, empathetic, and aligned with the user’s values, transforming a transactional moment into a relational one.

How to Use the Matrix with Your Team

The XLM Matrix is most effective when used as a collaborative workshop tool. By gathering cross-functional perspectives—from product and design to engineering and customer success—you can ensure a 360-degree view of the human experience. Follow these three steps to run your first experience audit:

Step 1: The Empathy Audit

Focus on the Inner Circle. Select one of the four quadrants and ask the team to identify the most persistent “ugh” moment currently facing the user. Be specific. Instead of saying “the checkout process is slow,” identify the exact friction point, such as “the user feels overwhelmed by the number of form fields.”

Step 2: Defining the Metric

Move to the Middle Ring. Once the friction point is clear, brainstorm how you would measure its absence. This is your Experience Level Measure (XLM). If the friction is cognitive overload from form fields, your XLM might be “reduction in time spent on the checkout page” or “a 20% increase in completion rate without support intervention.”

Step 3: Pulling the Innovation Lever

Reach the Outer Ring. Now, identify the specific technical or design change that will move that metric. This is your “Innovation Lever.” It could be an AI-driven auto-fill feature, a progress bar to improve the sense of agency, or a “save for later” option to reduce immediate emotional pressure.

Repeat this process for each quadrant to build a robust, human-centered innovation roadmap that prioritizes meaningful outcomes over simple feature checklists.
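To ground Step 2, here is a hedged sketch of computing the example XLM mentioned above, completion rate without support intervention, from session records. The log schema (`completed`, `contacted_support`) is a hypothetical assumption for illustration, not a required format:

```python
def xlm_completion_rate(sessions):
    """Share of sessions completed without a support intervention.

    sessions: list of dicts with boolean "completed" and
    "contacted_support" fields (hypothetical log schema).
    """
    unaided = [s for s in sessions
               if s["completed"] and not s["contacted_support"]]
    return len(unaided) / len(sessions)

# Hypothetical checkout sessions:
sessions = [
    {"completed": True,  "contacted_support": False},
    {"completed": True,  "contacted_support": True},
    {"completed": False, "contacted_support": False},
    {"completed": True,  "contacted_support": False},
]
print(xlm_completion_rate(sessions))  # 0.5
```

Tracking this number before and after pulling an Innovation Lever (such as an auto-fill feature) is what turns the “ugh” moment into a measurable experience improvement.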

Conclusion: Creating a Human-Centered Future

The transition from measuring system performance to measuring human success is not just a technical shift; it is a cultural one. As we move deeper into an era of agentic AI and rapid digital acceleration, the organizations that thrive will be those that prioritize the human experience as their primary north star. Innovation is no longer defined solely by what we can build, but by how effectively we enable people to feel, act, and succeed.

The XLM Matrix provides a structured, repeatable path to this future. By moving from the friction of the “ugh” moment to the strategic clarity of the innovation lever, your team can ensure that every project delivers meaningful, human-centered value. It is time to stop guessing how our users feel and start building for their success.

Start Your Experience Transformation Today

Ready to move beyond SLAs? Download the high-resolution, 11″x17″ (works as A3 too) printable version of The XLM Matrix and begin identifying the measures that truly matter for your innovation team. You can also use it virtually by uploading it and locking it down as a background in Miro, Mural, LucidSpark, Figjam or the FREE Microsoft Whiteboard or Google Jamboard.


Download the Free XLM Matrix Canvas

Frequently Asked Questions

What is the difference between an SLA and an XLM?

A Service Level Agreement (SLA) measures technical system performance, such as uptime or response speed. An Experience Level Measure (XLM) focuses on human outcomes, measuring how effectively an innovation reduces cognitive load, increases user agency, or builds emotional resonance.

How does the XLM Matrix help innovation teams?

The XLM Matrix provides a visual framework to move from identifying user friction (“ugh” moments) to defining specific metrics and identifying the technical or design “levers” required to improve the human experience.

Can the XLM Matrix be used for internal digital transformation?

Yes. The matrix is highly effective for internal projects. By measuring the cognitive load and time-to-value for employees using new internal tools, organizations can ensure their digital transformation efforts actually increase productivity rather than just adding complexity.

Image credits: Braden Kelley, Google Gemini
