Tag Archives: AI

Top 10 Human-Centered Change & Innovation Articles of December 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are December’s ten most popular innovation posts:

  1. Is OpenAI About to Go Bankrupt? — by Chateau G Pato
  2. The Rise of Human-AI Teaming Platforms — by Art Inteligencia
  3. 11 Reasons Why Teams Struggle to Collaborate — by Stefan Lindegaard
  4. How Knowledge Emerges — by Geoffrey Moore
  5. Getting the Most Out of Quiet Employees in Meetings — by David Burkus
  6. The Wood-Fired Automobile — by Art Inteligencia
  7. Was Your AI Strategy Developed by the Underpants Gnomes? — by Robyn Bolton
  8. Will our opinion still really be our own in an AI Future? — by Pete Foley
  9. Three Reasons Change Efforts Fail — by Greg Satell
  10. Do You Have the Courage to Speak Up Against Conformity? — by Mike Shipulski

BONUS – Here are five more strong articles published in November that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Build a Common Language of Innovation on your team

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:


Can AI Replace the CEO?

A Day in the Life of the Algorithmic Executive

LAST UPDATED: December 28, 2025 at 1:56 PM


GUEST POST from Art Inteligencia

We are entering an era where the corporate antibody – that natural organizational resistance to disruptive change – is meeting its most formidable challenger yet: the AI CEO. For years, we have discussed the automation of the factory floor and the back office. But what happens when the “useful seeds of invention” are planted in the corner office?

The suggestion that an algorithm could lead a company often triggers an immediate emotional response. Critics argue that leadership requires soul, while proponents point to the staggering inefficiencies, biases, and ego-driven errors that plague human executives. As an advocate for Innovation = Change with Impact, I believe we must look beyond the novelty and analyze the strategic logic of algorithmic leadership.

“Leadership is not merely a collection of decisions; it is the orchestration of human energy toward a shared purpose. An AI can optimize the notes, but it cannot yet compose the symphony or inspire the orchestra to play with passion.”

Braden Kelley

The Efficiency Play: Data Without Drama

The argument for an AI CEO rests on the pursuit of Truly Actionable Data. Humans are limited by cognitive load, sleep requirements, and emotional variance. An AI executive, by contrast, operates in Future Present mode — constantly processing global market shifts, supply chain micro-fluctuations, and internal sentiment analysis in real-time. It doesn’t have a “bad day,” and it doesn’t make decisions based on who it had lunch with.

Case Study 1: NetDragon Websoft and the “Tang Yu” Experiment

The Experiment: A Virtual CEO in a Gaming Giant

In 2022, NetDragon Websoft, a major Chinese gaming and mobile app company, appointed an AI-powered humanoid robot named Tang Yu as the Rotating CEO of its subsidiary. This wasn’t just a marketing stunt; it was a structural integration into the management flow.

The Results

Tang Yu was tasked with streamlining workflows, improving the quality of work tasks, and enhancing the speed of execution. Over the following year, the company reported that Tang Yu helped the subsidiary outperform the broader Hong Kong stock market. Tang Yu served as a real-time data hub, and its signature was required for document approvals and risk assessments. The experiment suggested that in data-rich environments where speed of iteration is the primary competitive advantage, an algorithmic leader can significantly reduce operational friction.

Case Study 2: Dictador’s “Mika” and Brand Stewardship

The Challenge: The Face of Innovation

Dictador, a luxury rum producer, took the concept a step further by appointing Mika, a sophisticated female humanoid robot, as their CEO. Unlike Tang Yu, who worked mostly within internal systems, Mika serves as a public-facing brand steward and high-level decision-maker for their DAO (Decentralized Autonomous Organization) projects.

The Insight

Mika’s role highlights a different facet of leadership: Strategic Pattern Recognition. Mika analyzes consumer behavior and market trends to select artists for bottle designs and lead complex blockchain-based initiatives. While Mika lacks human empathy, the company uses her to demonstrate unbiased precision. However, it also exposes the human-AI gap: while Mika can optimize a product launch, she cannot yet navigate the nuanced political and emotional complexities of a global pandemic or a social crisis with the same grace as a seasoned human leader.

Leading Companies and Startups to Watch

The space is rapidly maturing beyond experimental robot figures. Quantive (with StrategyAI) is building the “operating system” for the modern CEO, connecting KPIs to real-work execution. Microsoft is positioning its Copilot ecosystem to act as a “Chief of Staff” to every executive, effectively automating the data-gathering and synthesis parts of the role. Watch startups like Tessl and Vapi, which are focusing on “Agentic AI” — systems that don’t just recommend decisions but have the autonomy to execute them across disparate platforms.

The Verdict: The Hybrid Future

Will AI replace the CEO? My answer is: not the great ones. AI will certainly replace the transactional CEO — the executive whose primary function is to crunch numbers, approve budgets, and monitor performance. These tasks are ripe for automation because they represent 19th-century management techniques.

However, the transformational CEO — the one who builds culture, navigates ethical gray areas, and creates a sense of belonging — will find that AI is their greatest ally. We must move from fearing replacement to mastering Human-AI Teaming. The CEOs of 2030 will be those who use AI to handle the complexity of the business so they can focus on the humanity of the organization.

Frequently Asked Questions

Can an AI legally serve as a CEO?

Currently, most corporate law jurisdictions require a natural person to serve as a director or officer for liability and accountability reasons. AI “CEOs” like Tang Yu or Mika often operate under the legal umbrella of a human board or chairman who retains ultimate responsibility.

What are the biggest risks of an AI CEO?

The primary risks include Algorithmic Bias (reinforcing historical prejudices found in the data), Lack of Crisis Adaptability (AI struggles with “Black Swan” events that have no historical precedent), and the Loss of Employee Trust if leadership feels cold and disconnected.

How should current CEOs prepare for AI leadership?

Leaders must focus on “Up-skilling for Empathy.” They should delegate data-heavy reporting to AI systems and re-invest that time into Culture Architecture and Change Management. The goal is to become an expert at Orchestrating Intelligence — both human and synthetic.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credits: Google Gemini


Will our opinion still really be our own in an AI Future?


GUEST POST from Pete Foley

Intuitively we all mostly believe our opinions are our own.  After all, they come from that mysterious thing we call consciousness that resides somewhere inside of us. 

But we also know that other people’s opinions are influenced by all sorts of external forces. So unless we as individuals are uniquely immune to influence, it begs the question: how much of what we think, and what we do, is really uniquely us? And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people’s political beliefs: https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs. Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general. And that’s quite sobering on a couple of fronts. Firstly, for many people today, political beliefs are deeply tied to their value systems and deep sense of self, so this manipulation is potentially profound. Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new. We are all overwhelmed by political marketing during election season. We accept that media has manipulated public opinion for decades, and that social media has amplified it over the last couple of decades. Similarly, we’ve all grown up immersed in marketing and advertising designed to influence our decisions, opinions and actions. Meanwhile, the rise in prominence of the behavioral sciences in recent decades has provided more structure and efficiency to behavioral influence, literally turning an art into a science. Framing, priming, pre-suasion, nudging and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a highly diverse field, and it can be deployed as an art or a science. Whether it’s influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition, we are largely unaware of implicit influence on ourselves, although we can often see it in others. And even in hindsight, it’s very difficult to introspect on implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event.

So what does this mean? As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have. Futurism is far from an exact science. But there are a few things we can make pretty decent guesses about.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good. Mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), physical health, wellness, job satisfaction, social engagement and, important for many of us, adoption of beneficial technology and innovation can all benefit from this. And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, by, for example, managing depression or via preventative behavioral health interventions. Will this be authentic? It’s probably a little Huxley-dystopian, but will we care? It’s one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power? As humans, we don’t have a great record of doing the right thing when wealth and power come into the equation. And AI-empowered social, conceptual and behavioral manipulation has the potential to concentrate meaningful power even more than today’s tech-driven society does. Will it be used exclusively for good, or will some seek to leverage it for personal benefit at the expense of the broader community? Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do? Realistically, as individuals we can self-police, but we obviously also face limits in self-awareness of implicit manipulations. That said, we can to some degree still audit ourselves. We’ve probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs. Sometimes we recognize this quickly; other times we may be a little slower. But simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities. We are better at seeing others being manipulated than ourselves. We can use that as a mirror, and/or call it out to others when we see it. And many of us will find ourselves somewhere in the deployment chain, especially as AI is still in its early stages. For those of us to whom this applies, we have the opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago. It’s so long ago I have to paraphrase, but the essence of the conversation was to never manipulate people into doing something that is not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn’t want, it might have offered initial success, but it would likely have come back to bite me in the long term. They certainly wouldn’t become repeat customers, and a mixture of buyer’s remorse, loss aversion and revenge could turn them into active opponents. This potential for critical thinking in hindsight exists for virtually every situation where outcomes damage the individual.

The bottom line is that even today, we already have to continually ask ourselves whether what we see is real, and whether our beliefs are truly our own or have been manipulated. Media and social media memes already play the manipulation game. AI may already be better at it, and if not, it’s only a matter of time before it is. If you think we are politically polarized now, hang onto your hat! But awareness is key. We all need to stay aware, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini


Is OpenAI About to Go Bankrupt?

LAST UPDATED: December 4, 2025 at 4:48 PM


GUEST POST from Chateau G Pato

The innovation landscape is shifting, and the tremors are strongest in the artificial intelligence (AI) sector. For a moment, OpenAI felt like an impenetrable fortress, the company that cracked the code and opened the floodgates of generative AI to the world. But now, as a thought leader focused on Human-Centered Innovation, I see the classic signs of disruption: a growing competitive field, a relentless cash burn, and a core product advantage that is rapidly eroding. The question of whether OpenAI is on the brink of bankruptcy isn’t just about sensational headlines — it’s about the fundamental sustainability of a business model built on unprecedented scale and staggering cost.

The “Code Red” announcement from OpenAI, ostensibly about maintaining product quality, was a subtle but profound concession. It was an acknowledgment that the days of unchallenged superiority are over. This came as competitors like Google’s Gemini and Anthropic’s Claude are not just keeping pace, but in many key performance metrics, they are reportedly surpassing OpenAI’s flagship models. Performance parity, or even outperformance, is a killer in the technology adoption curve. When the superior tool is also dramatically cheaper, the choice for enterprises and developers — the folks who pay the real money — becomes obvious.

The Inevitable Crunch: Performance and Price

The competitive pressure is coming from two key vectors: performance and cost-efficiency. While the public often focuses on benchmark scores like MMLU or coding abilities — where models like Gemini and Claude are now trading blows or pulling ahead — the real differentiator for business users is price. New models, including the China-based DeepSeek, are entering the market with reported capabilities approaching the frontier models but at a fraction of the development and inference cost. DeepSeek’s reportedly low development cost highlights that the efficiency of model creation is also improving outside of OpenAI’s immediate sphere.

Crucially, the open-source movement, championed by models like Meta’s Llama family, introduces a zero-cost baseline that fundamentally caps the premium OpenAI can charge. Llama, and the rapidly improving ecosystem around it, means that a good-enough, customizable, and completely free model is always an option for businesses. This open-source competition bypasses the high-cost API revenue model entirely, forcing closed-source providers to offer a quantum leap in utility to justify the expenditure. This dynamic accelerates the commoditization of foundational model technology, turning OpenAI’s once-unique selling proposition into a mere feature.

OpenAI’s models, for all their power, have been famously expensive to run — a cost that gets passed on through their API. The rise of sophisticated, cheaper alternatives — many of which employ highly efficient architectures like Mixture-of-Experts (MoE) — means the competitive edge of sheer scale is being neutralized by engineering breakthroughs in efficiency. If the next step in AI on its way to artificial general intelligence (AGI) is a choice between a 10% performance increase and a 10x cost reduction for 90% of the performance, the market will inevitably choose the latter. This is a structural pricing challenge that erodes one of OpenAI’s core revenue streams: API usage.
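
To make the MoE efficiency point concrete, here is a minimal back-of-envelope sketch in Python. The parameter counts, expert counts, shared fraction, and the rough 2-FLOPs-per-parameter rule of thumb are all illustrative assumptions, not figures for any specific model; the point is only that MoE inference cost scales with the activated slice of the model rather than its full size.

    # Illustrative comparison of dense vs. Mixture-of-Experts inference cost.
    # All numbers are hypothetical, chosen only to show the scaling logic.

    def dense_flops_per_token(params: float) -> float:
        # A dense model touches (roughly) all of its parameters for every token.
        return 2 * params  # ~2 FLOPs per parameter per token (rule of thumb)

    def moe_flops_per_token(total_params: float, n_experts: int, top_k: int,
                            shared_fraction: float = 0.2) -> float:
        # An MoE model routes each token to top_k of n_experts, so only the
        # shared layers plus the selected experts are actually exercised.
        shared = total_params * shared_fraction
        expert_slice = total_params * (1 - shared_fraction) * (top_k / n_experts)
        return 2 * (shared + expert_slice)

    dense = dense_flops_per_token(600e9)              # hypothetical 600B dense model
    moe = moe_flops_per_token(600e9, n_experts=64, top_k=2)
    print(f"dense: {dense:.2e} FLOPs/token")
    print(f"moe:   {moe:.2e} FLOPs/token ({moe / dense:.0%} of dense)")

With these made-up numbers, the MoE configuration does less than a quarter of the dense model’s work per token, which is the engineering intuition behind trading a small slice of performance for a large cost reduction.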

The Financial Chasm: Burn Rate vs. Reserves

The financial situation is where the “bankruptcy” narrative gains traction. Developing and running frontier AI models is perhaps the most capital-intensive venture in corporate history. Reports — which are often conflicting and subject to interpretation — paint a picture of a company with an astronomical cash burn rate. Estimates for annual operational and development expenses are in the billions of dollars, resulting in a net loss measured in the billions.

This reality must be contrasted with the position of their main rivals. While OpenAI is heavily reliant on Microsoft’s monumental investment — a complex deal involving cash and Azure cloud compute credits — Microsoft’s exposure is structured as a strategic infrastructure play. The real financial behemoth is Alphabet (Google), which can afford to aggressively subsidize its Gemini division almost indefinitely. Alphabet’s near-monopoly on global search engine advertising generates profits in the tens of billions of dollars every quarter. This virtually limitless reservoir of cash allows Google to cross-subsidize Gemini’s massive research, development, and inference costs, effectively enabling them to engage in a high-stakes price war that smaller, loss-making entities like OpenAI cannot truly win on a level playing field. Alphabet’s strategy is to capture market share first, using the profit engine of search to buy time and scale, a luxury OpenAI simply does not have without a continuous cash injection from a partner.

The question is not whether OpenAI has money now, but whether their revenue growth can finally eclipse their accelerating costs before their massive reserve is depleted. Their long-term financial projections, which foresee profitability and revenues in the hundreds of billions by the end of the decade, require not just growth, but a sustained, near-monopolistic capture of the new AI-driven knowledge economy. That becomes increasingly difficult when competitors are faster, cheaper, and arguably better, and have access to deeper, more sustainable profit engines for cross-subsidization.

The Future Outlook: Change or Consequence

OpenAI’s future is not doomed, but the company must initiate a rapid, human-centered transformation. The current trajectory — relying on unprecedented capital expenditure to maintain a shrinking lead in model performance — is structurally unsustainable in the face of faster, cheaper, and increasingly open-source models like Meta’s Llama. The next frontier isn’t just AGI; it’s AGI at scale, delivered efficiently and affordably.

OpenAI must pivot from a model of monolithic, expensive black-box development to one that prioritizes efficiency, modularity, and a true ecosystem approach. This means a rapid shift to MoE architectures, aggressive cost-cutting in inference, and a clear, compelling value proposition beyond just “we were first.” Human-Centered Innovation principles dictate that a company must listen to the market — and the market is shouting for price, performance, and flexibility. If OpenAI fails to execute this transformation and remains an expensive, marginal performer, its incredible cash reserves will serve only as a countdown timer to a necessary and painful restructuring.

Frequently Asked Questions (FAQ)

  • Is OpenAI currently profitable?
    OpenAI is currently operating at a significant net loss. Its annual cash burn rate, driven by high R&D and inference costs, reportedly exceeds its annual revenue, meaning it relies heavily on its massive cash reserves and the strategic investment from Microsoft to sustain operations.
  • How are Gemini and Claude competing against OpenAI on cost and performance?
    Competitors like Google’s Gemini and Anthropic’s Claude are achieving performance parity or superiority on key benchmarks. Furthermore, they are often cheaper to use (lower inference cost) due to more efficient architectures (like MoE) and the ability of their parent companies and backers (Alphabet for Gemini; Google and Amazon for Anthropic) to cross-subsidize their AI divisions with enormous profits from other revenue streams, such as search engine advertising.
  • What was the purpose of OpenAI’s “Code Red” announcement?
    The “Code Red” was an internal or public acknowledgment by OpenAI that its models were facing performance and reliability degradation in the face of intense, high-quality competition from rivals. It signaled a necessary, urgent, company-wide focus on addressing these issues to restore and maintain a technological lead.

UPDATE: Just found on X that HSBC has said that OpenAI is going to rack up nearly half a trillion dollars in operating losses through 2030, per the Financial Times (FT). Here is the chart of their $100 billion in projected losses in 2029. With the success of Gemini, Claude, DeepSeek, Llama and competitors yet to emerge, the revenue piece may be overstated:

[Chart: OpenAI estimated 2029 financials]

Image credits: Google Gemini, Financial Times

Top 10 Human-Centered Change & Innovation Articles of November 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are November’s ten most popular innovation posts:

  1. Eight Types of Innovation Executives — by Stefan Lindegaard
  2. Is There a Real Difference Between Leaders and Managers? — by David Burkus
  3. 1,000+ Free Innovation, Change and Design Quotes Slides — by Braden Kelley
  4. The AI Agent Paradox — by Art Inteligencia
  5. 74% of Companies Will Die in 10 Years Without Business Transformation — by Robyn Bolton
  6. The Unpredictability of Innovation is Predictable — by Mike Shipulski
  7. How to Make Your Employees Thirsty — by Braden Kelley
  8. Are We Suffering from AI Confirmation Bias? — by Geoffrey A. Moore
  9. How to Survive the Next Decade — by Robyn Bolton
  10. It’s the Customer Baby — by Braden Kelley

BONUS – Here are five more strong articles published in October that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Build a Common Language of Innovation on your team

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:

The Reasons Customers May Refuse to Speak with AI


GUEST POST from Shep Hyken

If you want to anger your customers, make them do something they don’t want to do.

Up to 66% of U.S. customers say that when it comes to getting help, resolving an issue or making a complaint, they only want to speak to a live person. That’s according to the 2025 State of Customer Service and Customer Experience (CX) annual study. If you don’t provide the option to speak to a live person, you are at risk of losing many customers.

But not all customers feel that way. We asked another sample of more than 1,000 customers about using AI and self-service tools to get customer support, and 34% said they stopped doing business with a company or brand because self-service options were not provided.

These findings reveal the contrasting needs and expectations customers have when communicating with the companies they do business with. While the majority prefer human-to-human interaction, a substantial number (about one-third) not only prefer self-service options — AI-fueled solutions, robust frequently asked question pages on a website, video tutorials and more — but demand it or they will actually leave to find a competitor that can provide what they want.

This creates a big challenge for CX decision-makers that directly impacts customer retention and revenue.

Why Some Customers Resist AI

Our research finds that age makes a difference. For example, Baby Boomers show the strongest preference for human interaction, with 82% preferring the phone over digital solutions. Only half (52%) of Gen-Z feels the same way about the phone. Here’s why:

  1. Lack of Trust: Almost half of customers (49%) say they are scared of technologies like AI and ChatGPT.
  2. Privacy Concerns: Seventy percent of customers are concerned about data privacy and security when interacting with AI.
  3. Success — Or Lack of Success: While I think it’s positive that 50% of customers surveyed have successfully resolved a customer service issue using AI without the need for a live agent, that also means that 50% have not.

Customers aren’t necessarily anti-technology. They’re anti-ineffective technology. When AI fails to understand requests and lacks empathy in sensitive situations, the negative experience can make certain customers want to only communicate with a human. Even half of Gen-Z (48%) says they are frustrated with AI technology (versus 17% of Baby Boomers).

Why Some Customers Embrace AI

The 34% of customers who prefer self-service options, to the point of saying they are willing to stop doing business with a company if self-service isn’t available, present a dilemma for CX leaders. This can paralyze the decision process for what solutions to buy and implement. Understanding some of the reasons certain customers embrace AI is important:

  1. Speed, Convenience and Efficiency: The ability to get immediate support without having to call a company, wait on hold, be authenticated, etc., is enough to get customers using AI. If you had the choice between getting an answer immediately or having to wait 15 minutes, which would you prefer? (That’s a rhetorical question.)
  2. 24/7 Availability: Immediate support is important, but having immediate access to support outside of normal business hours is even better.
  3. A Belief in the Future: There is optimism about the future of AI, as 63% of customers expect AI technologies to become the primary mode of customer service in the future — a significant increase from just 21% in 2021. That optimism has customers trying and outright adopting the use of AI.

CX leaders must recognize the generational differences — and any other impactful differences — as they make decisions. For companies that sell to customers across generations, this becomes increasingly important, especially as Gen-Z and Millennials gain purchasing power. Turning your back on a generation’s technology expectations puts you at risk of losing a large percentage of customers.

What’s a CX Leader To Do?

Some companies have experimented with forcing customers to use only AI and self-service solutions. This is risky, and for the most part, the experiments have failed. Yet, as AI improves — and it’s doing so at a very rapid pace — it’s okay to push customers to use self-service. Just support it with a seamless transfer to a human if needed. An AI-first approach works as long as there’s a backup.
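
As a rough illustration of that pattern, here is a minimal Python sketch of an AI-first flow with a guaranteed human backup. The confidence threshold, the toy bot, and the handoff function are hypothetical placeholders, not any vendor’s actual API.

    # Minimal sketch of "AI-first, human backup": let the bot try, but
    # guarantee a seamless transfer. The bot and handoff are toy stand-ins.

    CONFIDENCE_FLOOR = 0.75  # below this, the bot should not guess

    def ai_answer(message: str) -> tuple[str, float]:
        # Placeholder bot: a real system would call its LLM/NLU service here.
        if "hours" in message.lower():
            return "We're open 9 to 5, Monday through Friday.", 0.95
        return "I'm not sure I understood that.", 0.30

    def transfer_to_human(message: str, draft: str) -> str:
        # Placeholder handoff: a real system would enqueue to a live-agent
        # queue, passing context so the customer never has to repeat themselves.
        return "Connecting you with a person now; your conversation comes with you."

    def handle(message: str) -> str:
        answer, confidence = ai_answer(message)
        asked_for_human = any(k in message.lower() for k in ("agent", "human", "person"))
        if asked_for_human or confidence < CONFIDENCE_FLOOR:
            return transfer_to_human(message, draft=answer)
        return answer

    print(handle("What are your hours?"))  # confident bot answer
    print(handle("My bill looks wrong"))   # low confidence, goes to a human

The design choice that matters is the escape hatch: the customer can always say the word and reach a person, and low-confidence answers are never forced on them.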

Forcing customers to use a 100% solution, be it AI or human, puts your company at risk of losing customers. Today’s strategy should be a balanced choice between new and traditional customer support. It should be about giving customers the experience they want and expect — one that makes them say, “I’ll be back!”

Image credit: Pixabay

This article originally appeared on Forbes.com

Re-engineering Trust and Retention in the AI Contact Center

The Empathy Engine

LAST UPDATED: November 9, 2025 at 1:36 PM

by Braden Kelley

The contact center remains the single most critical point of human truth for a brand. It is where marketing promises meet operational reality. The challenge today, as highlighted by leaders like Bruce Gilbert of Young Energy at Customer Contact Week (CCW) in Nashville recently, is profound: Customers expect friction-less experiences with empathetic responses. The solution is not merely throwing technology at the problem; it’s about strategically weaving automation into the existing human fabric to create an Empathy Engine.

The strategic error most organizations make is starting with the technology’s capability rather than the human need. The conversation must start with empathy, not the technology — focusing first on the customer and agent pain points. AI is not a replacement for human connection; it is an amplification tool designed to remove friction, build trust, and elevate the human agent’s role to that of a high-value relationship manager.

The Trust Imperative: The Cautious Adoption Framework

The first goal when introducing AI into the customer journey is simple: Building trust. The consumer public, after years of frustrating Interactive Voice Response (IVR) systems and rigid chatbots, remains deeply skeptical of automation. A grand, “all-in” AI deployment is often met with immediate resistance, which can manifest as call abandonment or increased churn.

To overcome this, innovation must adhere to a principle of cautious, human-centered rollout — a Cautious Adoption Framework. Starting small, with simple things, helps to build this trust. Implement AI where the risk of failure is low and the utility is high — such as automating password resets, updating billing addresses, or providing initial diagnostics. These are the repetitive, low-value tasks that bore agents and frustrate customers. By successfully automating these simple, transactional elements, you build confidence in the system, preparing both customers and agents for more complex, AI-assisted interactions down the line. This approach honors the customer’s pace of change.
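
A minimal sketch of what that cautious rollout can look like as routing logic follows; the intent names and keyword classifier are illustrative placeholders for a real NLU model.

    # Sketch of the Cautious Adoption Framework as routing logic: automate
    # only low-risk, high-utility intents; everything else goes to a human.

    AUTOMATABLE = {"password_reset", "update_billing_address", "order_status"}

    def classify_intent(message: str) -> str:
        # Placeholder classifier; a real deployment would use an NLU model.
        text = message.lower()
        if "password" in text:
            return "password_reset"
        if "address" in text:
            return "update_billing_address"
        if "order status" in text or "where is my order" in text:
            return "order_status"
        return "other"

    def route(message: str) -> str:
        intent = classify_intent(message)
        if intent in AUTOMATABLE:
            return f"bot:{intent}"  # safe, repetitive task: automate it
        return "human_queue"        # ambiguous or emotional: human agent

    print(route("I forgot my password"))           # bot:password_reset
    print(route("I want to dispute this charge"))  # human_queue

Expanding the AUTOMATABLE set only after each intent proves itself is the programmatic version of honoring the customer’s pace of change.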

The Agent Retention Strategy: Alleviating Cognitive Load

The operational cost of the contact center is inextricably linked to agent retention. Finding and keeping high-quality agents remains a persistent challenge, primarily because the job is often highly stressful and repetitive. AI provides a powerful retention tool by directly addressing the root cause: cognitive load.

Reducing the cognitive load and stress level on agents is a non-negotiable step for long-term operational health. AI co-pilots must be designed to act as true partners, not simply data overlays. They should instantly surface relevant knowledge base articles, summarize the customer’s entire history before the agent picks up the call, or even handle real-time data entry. This frees the human agent to focus entirely on the empathetic response — active listening, problem-solving, and de-escalation. By transforming the agent’s role from a low-paid data processor into a high-value relationship manager, we elevate the profession, directly improving agent retention and turning contact center employment into an aspirational career path.
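
As a toy illustration of the co-pilot idea, here is a sketch of a “pre-call briefing.” The in-memory CRM data, the keyword article search, and the truncating summarizer are stand-ins for real systems and a real LLM.

    # Sketch of an agent co-pilot "pre-call briefing": surface history and
    # relevant knowledge-base articles before the agent ever says hello.

    INTERACTIONS = {  # toy stand-in for a CRM
        "cust-42": ["2025-10-01 billing dispute opened",
                    "2025-10-03 courtesy credit issued",
                    "2025-11-09 called about slow internet"],
    }
    KB_ARTICLES = ["Troubleshooting slow internet",
                   "Reading your bill",
                   "Router setup basics"]

    def summarize(history: list[str]) -> str:
        # Placeholder: a real co-pilot would use an LLM for one tight paragraph.
        return "Recent: " + "; ".join(history[-2:])

    def build_briefing(customer_id: str, reason: str) -> dict:
        history = INTERACTIONS.get(customer_id, [])
        words = reason.lower().split()
        relevant = [a for a in KB_ARTICLES if any(w in a.lower() for w in words)]
        return {"summary": summarize(history), "suggested_articles": relevant[:3]}

    print(build_briefing("cust-42", "slow internet"))

Everything the agent would otherwise hunt for mid-call arrives before the call, which is exactly where the cognitive-load reduction comes from.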

The Systemic Challenge: Orchestrating the AI Ecosystem

A major limiting factor in today’s contact center is the presence of fragmented AI deployments. Many organizations deploy AI in isolated pockets — a siloed chatbot here, a transcription service there. The future demands that we move far beyond siloed AI. The goal is complete AI orchestration across the enterprise, requiring us to get the AIs to talk to each other.

A friction-less customer experience requires intelligence continuity: a Voice AI must seamlessly hand off its collected context to a Predictive AI (which assesses the call risk), which then informs the Generative AI (that drafts the agent’s suggested response). This is the necessary chain of intelligence that supports friction-less service. Furthermore, complexity demands a blended AI approach, recognizing that the solution may involve more than one method (generative vs. directed).

For high-compliance tasks, a directed approach ensures precision: for instance, a flow can insert “read as is” instructions for regulatory disclosures, ensuring legal text is delivered exactly as designed. For complex, personalized problem-solving, a generative approach is vital. The best systems understand the regulatory and emotional context, knowing when to switch modes instantly and without customer intervention.
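
One possible sketch of that context handoff and mode switching follows; the dataclass fields, the risk heuristic, and the disclosure text are invented for illustration.

    # Sketch of a blended AI chain: context flows Voice AI -> Predictive AI ->
    # response step, which switches between directed and generative modes.

    from dataclasses import dataclass, field

    @dataclass
    class CallContext:
        transcript: list[str] = field(default_factory=list)
        risk_score: float = 0.0
        requires_disclosure: bool = False

    def voice_ai(ctx: CallContext, utterance: str) -> CallContext:
        ctx.transcript.append(utterance)  # capture context; never re-ask
        ctx.requires_disclosure = "loan" in utterance.lower()
        return ctx

    def predictive_ai(ctx: CallContext) -> CallContext:
        # Toy risk heuristic; a real system would score the call with a model.
        ctx.risk_score = 0.9 if ctx.requires_disclosure else 0.2
        return ctx

    def respond(ctx: CallContext) -> str:
        if ctx.requires_disclosure:
            # Directed mode: regulatory text is delivered exactly as designed.
            return "READ AS IS: Rates and terms are subject to credit approval."
        # Generative mode: a real system would draft a personalized reply here.
        return f"Drafted reply based on: {ctx.transcript[-1]}"

    ctx = predictive_ai(voice_ai(CallContext(), "I have a question about my loan"))
    print(respond(ctx))

The single CallContext object is the intelligence continuity: each AI enriches it rather than starting over.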

The Strategic Pivot: Investing in Predictive Empathy

The ultimate strategic advantage lies not in reacting to calls, but in preventing them. This requires a deeper investment in data science, moving from descriptive reporting on what happened to predictive analytics to understand why our customers are calling in before they dial the number.

This approach, which I call Predictive Empathy, uses machine learning to identify customers whose usage patterns, payment history, or recent service interactions suggest a high probability of confusion or frustration (e.g., first-time promotions expiring, unusual service interruptions). The organization then proactively initiates a personalized, AI-assisted outreach to address the problem or explain the confusion before the customer reaches the point of anxiety and makes the call. This shifts the interaction from reactive conflict to proactive support, immediately lowering call volume and transforming brand perception.
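
As a toy sketch of such a scoring-and-outreach loop, consider the following; the features, weights, and threshold are invented for illustration, and a real deployment would train a model on its own interaction data.

    # Sketch of Predictive Empathy: score customers for likely confusion or
    # frustration and reach out before they call. Weights are illustrative.

    def frustration_score(customer: dict) -> float:
        score = 0.0
        if customer.get("promo_expiring_days", 99) <= 7:
            score += 0.4  # first-time promotion about to expire
        if customer.get("service_interruptions_30d", 0) > 0:
            score += 0.3  # recent unusual service interruptions
        if customer.get("failed_payments_90d", 0) > 0:
            score += 0.3  # payment trouble often precedes a stressful call
        return min(score, 1.0)

    def proactive_outreach(customers: list[dict], threshold: float = 0.6) -> list[str]:
        # Returns the IDs to contact before they reach the point of calling in.
        return [c["id"] for c in customers if frustration_score(c) >= threshold]

    print(proactive_outreach([
        {"id": "A1", "promo_expiring_days": 3, "service_interruptions_30d": 1},
        {"id": "B2", "failed_payments_90d": 0},
    ]))  # -> ['A1']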

The Organizational Checkpoint: Post-Deployment Evolution

Once you’ve successfully implemented AI to address pain points, the work is not finished. A crucial strategic question must be addressed: What happens after AI deployment? What’s your plan?

As AI absorbs simple transactions, the nature of the calls that reach the human agent becomes disproportionately more complex, emotional, and high-value. This creates a skills gap in the remaining human workforce. The organization must plan for and fund the Up-skilling Initiative necessary to handle these elevated interactions, focusing on conflict resolution, complex sales, and deep relationship management. The entire organizational structure — training programs, compensation models, and career paths — must evolve to support this higher-skilled human workforce. By raising the value of the human role, the contact center transitions from a cost center into a profit-generating Relationship Hub.

Conclusion: Architecting the Human Layer

The goal of innovation in the contact center is not the elimination of the human, but the elevation of the human. By using AI to build trust, reduce cognitive load, enable predictive empathy, and connect disparate systems, we free the human agent to deliver on the fundamental customer expectation: a friction-less experience coupled with an empathetic response. This is how we re-engineer the contact center from a cost center into a powerful engine for talent retention and customer loyalty.

“AI handles the transaction. The human handles the trust. Design your systems to protect both.” — Braden Kelley

Your first step into the Empathy Engine: Map the single most stressful task for your top-performing agent and commit to automating 80% of its cognitive load using a simple AI co-pilot within the next 90 days.

What is that task for your organization?

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, insights captured from the Customer Contact Week session, panelists to mention, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.

Are We Suffering from AI Confirmation Bias?


GUEST POST from Geoffrey A. Moore

When social media first appeared on the scene, many of us had high hopes it could play a positive role in community development and civic affairs, as indeed it has. What we did not anticipate was the long-term impact of the digital advertising model that supported it. That model is based on click-throughs, and one of the most effective ways to increase them was to present content that reinforces the recipient’s existing views.

Statisticians call the attraction to one’s existing point of view confirmation bias, and we all have it. As individuals, we believe we are in control of this, but it is obvious that at the level of populations, we are not. Confirmation bias, fed first by social media, and then by traditional media once it converted to digital, has driven political and social polarization throughout the world. It has been further inflamed by conspiracy theories, malicious communications, fake news, and the like. And now we are faced with the advent of yet another amplifier—artificial intelligence. A significant portion of the fears about how AI could impact human welfare stems from how easily it can be put to malicious use through disinformation campaigns.

The impact of all this on our political life is chilling. Polarized media amplifies the impact of extremism and dampens the impact of moderation. This has most obviously been seen in primary elections, but it has now carried over into general elections to the point where highly unqualified individuals who have no interest in public service hold some of the most important roles in state and federal government. The resulting dysfunction is deeply disturbing, but it is not clear if and where a balance can be found.

Part of the problem is that confirmation bias is an essential part of healthy socialization. It reflects the impact that narratives have on our personal and community identities. What we might see as arrant folly another person sees as a necessary leap of faith. Our founding fathers were committed to protecting our nation from any authority imposing its narratives on unwilling recipients, hence our Constitutional commitment to both freedom of religion and freedom of speech.

In effect, this makes it virtually impossible to legislate our way out of this dilemma. Instead, we must embrace it as a Darwinian challenge, one that calls for us as individuals to adapt our strategies for living to a dangerous new circumstance. Here I think we can take a lesson from our recent pandemic experience. Faced with the threat of a highly contagious, ever-mutating Covid virus, most of the developed economies embraced rapid vaccination as their core response. China, however, did not. It embraced regulation instead. What they and we learned is that you cannot solve problems of contagion through regulation.

We can apply this learning to dealing with the universe of viral memes that have infected our digital infrastructure and driven social discord. Instead of regulation, we need to think of vaccination. The vaccine that protects people from fake news and its many variants is called critical thinking, and the healthcare provider that dispenses it is called public education.

We have spent the past several decades focusing on the STEM wing of our educational system, but at the risk of exercising my own confirmation bias, the immunity protection we need now comes from the liberal arts. Specifically, it emerges from supervised classroom discussions in which students are presented with a wide variety of challenging texts and experiences accompanied by a facilitated dialog that instructs them in the practices of listening, questioning, proposing, debating, and ultimately affirming or denying the validity of the argument under consideration. These discussions are not about promoting or endorsing any particular point of view. Rather, they teach one how to engage with any point of view in a respectful, powerful way. This is the intellectual discipline that underlies responsible citizenship. We have it in our labs. We just need to get it distributed more broadly.

That’s what I think. What do you think?

Image Credit: Pixabay

The AI Agent Paradox

How E-commerce Must Proactively Manage Experiences Created Without Their Consent

LAST UPDATED: November 7, 2025 at 4:31 PM


GUEST POST from Art Inteligencia

A fundamental shift is underway in the world of e-commerce, moving control of the customer journey out of the hands of the brand and into the hands of the AI Agent. The recent lawsuit by Amazon against Perplexity regarding unauthorized access to user accounts by its agentic browser is not an isolated legal skirmish; it is a red flag moment for every company that sells online. The core challenge is this: AI agents are building and controlling the shopping experience — the selection, the price comparison, the checkout path — often without the e-commerce site’s knowledge or consent.

This is the AI Agent Paradox: The most powerful tool for customer convenience (the agent) simultaneously poses the greatest threat to brand control, data integrity, and monetization models. The era of passively optimizing a webpage is over. The future belongs to brands that actively manage their relationship with the autonomous, agentic layer that sits between them and their human customers.

The Three Existential Threats of the Autonomous Agent

Unmanaged AI agents, operating as digital squatters on your site, create immediate systemic problems:

  1. Data Integrity and Scraping Overload: Agents typically use resource-intensive web scraping techniques that overload servers and pollute internal analytics. The shopping experience they create is invisible to the brand’s A/B testing and personalization engines.
  2. Brand Bypass and Commoditization: Agents prioritize utility over loyalty. If a customer asks for “best price on noise-cancelling headphones,” the agent may bypass your brand story, unique value propositions, and even your preferred checkout flow, reducing your products to mere SKUs and price points. This is the Brand Bypass threat.
  3. Security and Liability: Unauthorized access, especially to user accounts (as demonstrated by the Amazon-Perplexity case), creates massive security vulnerabilities and legal liability for the e-commerce platform, which is ultimately responsible for protecting user data.

The How-To: Moving from Resistance to Proactive Partnership

Instead of relying solely on defensive legal action (which is slow and expensive), e-commerce brands must embrace a proactive, human-centered API strategy. The goal is to provide a superior, authorized experience for the AI agents, turning them from adversaries into accelerated sales channels — and honoring the trust the human customer places in their proxy.

Step 1: Build the Agent-Optimized API Layer

Treat the AI agent as a legitimate, high-volume customer with unique needs (structured data, speed). Design a specific, clean Agent API separate from your public-facing web UI. This API should allow agents to retrieve product information, pricing, inventory status, and execute checkout with minimal friction and maximum data hygiene. This immediately prevents the resource-intensive web scraping that plagues servers.
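
As one sketch of what such a layer could look like, here is a minimal example using FastAPI (one possible framework among many); the path, fields, and in-memory catalog are illustrative only.

    # Minimal sketch of an agent-optimized API layer using FastAPI.
    # The point: clean, versioned, structured data instead of scraped HTML.

    from fastapi import FastAPI, HTTPException

    app = FastAPI(title="Agent API (illustrative)")

    CATALOG = {  # toy stand-in for a real product service
        "SKU-123": {"name": "Noise-Cancelling Headphones", "price": 199.00,
                    "currency": "USD", "in_stock": True},
    }

    @app.get("/agent/v1/products/{sku}")
    def get_product(sku: str) -> dict:
        # Machine-readable product data: no HTML to scrape, no analytics
        # pollution, and far less server load than rendering full pages.
        product = CATALOG.get(sku)
        if product is None:
            raise HTTPException(status_code=404, detail="unknown SKU")
        return {"sku": sku, **product}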

Step 2: Define and Enforce the Rules of Engagement

Your Terms of Service (TOS) must clearly articulate the acceptable use of your data by autonomous agents. Furthermore, the Agent API must enforce these rules programmatically. You can reward compliant agents (faster access, richer data) and throttle or block non-compliant agents (those attempting unauthorized access or violating rate limits). This is where you insert your brand’s non-negotiables, such as attribution requirements or user privacy protocols, thereby regaining control.
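
A minimal sketch of programmatic enforcement follows, using a simple fixed-window rate limiter with illustrative tiers; a production system would more likely enforce this at an API gateway backed by a shared store such as Redis.

    # Sketch of "rules of engagement" enforcement: identify agents by key
    # and tier, reward compliant agents with throughput, throttle the rest.

    import time
    from collections import defaultdict

    RATE_LIMITS = {"partner": 100, "registered": 10, "unknown": 1}  # req/sec
    _windows: dict[str, tuple[int, int]] = defaultdict(lambda: (0, 0))

    def allow_request(agent_key: str, tier: str = "unknown") -> bool:
        # Fixed-window limiter: compliant (higher-tier) agents get faster
        # access; unknown or misbehaving agents are throttled, not served.
        window = int(time.time())
        last_window, count = _windows[agent_key]
        if window != last_window:
            last_window, count = window, 0
        if count >= RATE_LIMITS.get(tier, 1):
            return False
        _windows[agent_key] = (last_window, count + 1)
        return True

    print(allow_request("agent-abc"))  # True: first request this second
    print(allow_request("agent-abc"))  # False: "unknown" tier allows 1/sec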

Step 3: Offer Value-Added Agent Services and Data

This is the shift from defense to offense. Give agents a reason to partner with you and prefer your site. Offer exclusive agent-only endpoints that provide aggregated, structured data your competitors don’t, such as sustainable sourcing information, local inventory availability, or complex configurator data. This creates a competitive advantage where the agent actually prefers to send traffic to your optimized channel because it provides a superior outcome for the human user.
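
Continuing the illustrative FastAPI sketch from Step 1, an agent-only endpoint might bundle data that scraping could never reliably assemble; every field name and value here is hypothetical.

    # Sketch of a value-added, agent-only endpoint (extends the Step 1 app;
    # `app`, `CATALOG`, and `HTTPException` are defined in that sketch).

    @app.get("/agent/v1/products/{sku}/enriched")
    def get_enriched_product(sku: str, zip_code: str) -> dict:
        product = CATALOG.get(sku)
        if product is None:
            raise HTTPException(status_code=404, detail="unknown SKU")
        return {
            "sku": sku,
            **product,
            "sustainability": {"recycled_content_pct": 42,
                               "certifications": ["FSC"]},
            "local_inventory": {"zip": zip_code, "nearest_store_units": 7},
            "attribution_required": True,  # one of the brand's non-negotiables
        }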

Case Study 1: The Furniture Retailer and the AI Interior Designer

Challenge: Complex, Multivariable E-commerce Decisions

A high-end furniture and décor retailer struggled with low conversion rates because buying furniture requires complex decisions (size, material, delivery time). Customers were leaving the site to use third-party AI interior design tools.

Proactive Partnership:

The retailer created a “Design Agent API.” This API didn’t just provide price and SKU; it offered rich, structured data on 3D model compatibility, real-time customization options, and material sustainability scores. They partnered with a leading AI interior design platform, providing the agent direct, authorized access to this structured data. The AI agent, in turn, could generate highly accurate virtual room mock-ups using the retailer’s products. This integration streamlined the complex path to purchase, turning the agent from a competitor into the retailer’s most effective pre-visualization sales tool.

Case Study 2: The Specialty Grocer and the AI Recipe Planner

Challenge: Fragmented Customer Journey from Inspiration to Purchase

An online specialty grocer, focused on rare and organic ingredients, saw their customers using third-party AI recipe planners and shopping list creators, which often failed to locate the grocer’s unique SKUs or sent traffic to competitors.

Proactive Partnership:

The grocer developed a “Recipe Fulfillment Endpoint.” They partnered with two popular AI recipe apps. When a user generated a recipe, the AI agent, using the grocer’s endpoint, could instantly check ingredient availability, price, and even offer substitute suggestions from the grocer’s unique inventory. The agent generated a “One-Click, Fully-Customized Cart” for the grocer. The grocer ensured the agent received a small attribution fee (a form of commission), turning the agent into a reliable, high-converting affiliate sales channel. This formalized partnership eliminated the friction between inspiration and purchase, driving massive, high-margin sales.

The Human-Centered Imperative

Ultimately, this is a human-centered change challenge. The human customer trusts their AI agent to act on their behalf. By providing a clean, transparent, and optimized path for the agent, the e-commerce brand is honoring that trust. The focus shifts from control over the interface to control over the data and the rules of interaction. This strategy not only improves server performance and data integrity but also secures the brand’s place in the customer’s preferred, agent-mediated future.

“The AI agent is your customer’s proxy. If you treat the proxy poorly, you treat the customer poorly. The future of e-commerce is not about fighting the agents; it’s about collaborating with them to deliver superior value.” — Braden Kelley

The time to move beyond the reactive defense and into proactive partnership is now. The e-commerce leaders of tomorrow will be the ones who design the best infrastructure for the machines that shop for humans. Your essential first step: Form a dedicated internal team to prototype your Agent API, defining the minimum viable, structured data you can share to incentivize collaboration over scraping.

Image credit: Google Gemini

Why Going AI Only is Dumb

I’m Sorry Dave, But I Can’t Do That

LAST UPDATED: November 3, 2025 at 4:50 PM


by Braden Kelley

Last month I had the opportunity to attend Customer Contact Week (CCW) in Nashville, Tennessee. Following up on my article The Voicebots Are Coming, I’d like to dig into the idea that companies like Klarna have explored: eliminating all humans from contact centers. After all, what could possibly go wrong?

When I first heard that Klarna was going to eliminate humans from their contact centers and go all in on artificial intelligence, I thought to myself that they would likely live to regret it. Don’t get me wrong: artificial intelligence (AI) voicebots and chatbots can be incredibly useful, and that proves out in the real world. According to conference speakers, almost half of Fanatics’ calls are handled on the phone without ever reaching an agent. A lot of people are experimenting with AI, but AI is no longer experimental. What Klarna learned is that when you use AI to reduce your number of human agents, then if the AI goes down you no longer have the ability to just call in off-duty agents to serve your customers.

But on the flip side, we know that having AI customer service agents as part of your agent mix can have very positive impacts on the business. Small businesses like Brothers That Just Do Gutters have found that using AI agents increased their scheduling of estimate visits compared with humans alone. National Debt Relief automated their customer insufficient funds (CIF) calls and added an escalation path (AI first, then agent) that delivered a 20% revenue lift over their best agents. They found that when a human agent gets a NO, there isn’t much of an escalation path left. And the delicate reality is that some people feel self-conscious calling a human to talk about debt problems, and there may be other sensitive issues where callers would actually feel more comfortable talking to a voicebot than a human. In addition, Fanatics is finding that AI agents are resolving some issues FASTER than human agents. Taken together, these examples show that a hybrid approach (humans plus AI) often yields better results than humans only or AI only, so design your approach consciously.

Now let’s look at some important statistics from Customer Management Practice research:

  • 2/3 of people prefer calling in and talking by phone, but that preference is concentrated among those 55 and older; it declines with each ten-year step down in age, reaching 30% for 18-24 year olds
  • 3/4 of executives say more people want self-service now than three years ago
  • 3/4 of people want to spend less time getting support – so they can get back to the fun stuff, or back to business

Taken together, these statistics help make the case for increasing the use of AI agents in the contact center. If you happen to be looking to use AI agents in servicing your customers (or even if you already are), then it is important to think about how you can use them to remove friction from the system and to strategically allocate your humans to the things that only humans can do. And if you need to win support from someone to go big with AI voicebots, then pick an important use case instead of one that nobody cares about or, even better, pick something that you couldn’t have done before (for example, a ride-sharing company had AI voicebots make 5 million calls to have drivers validate their tax information).

Finally, as I was listening to some of these sessions it reminded me of a time when I was tasked with finding a new approach to staffing peak season for one of the Blue Cross/Blue Shield companies in the United States. At that time AI voicebots weren’t a thing, so I looked at how we could partner with a vendor to keep a small number of their staff on hand throughout the year, and then rely on those seasoned vendor staff to hire and train the seasonal workforce, instead of taking our best employees off the phones to train temps.

Even now, all contact centers will still need a certain level of human staffing. But AI voicebots, AI simulation training for agents, and other new AI-powered tools represent a great opportunity to create a better solution for peak staffing in a whole host of industries with very cyclical contact demand that is hard to staff for. One example of this from Customer Contact Week was a story about how Fanatics must 5x its number of agents during high seasons, and in practice this often results in their worst agents (temps hired only for the season) serving some of their best customers (their highest-value clients).

Conclusion

AI voicebots can be a great help during demand peaks, and other AI-powered tools (QA, simulations, coaching, etc.) can help accelerate and optimize the on-boarding of both full-time and seasonal agents. But don’t pare back your human agent pool too far!

What has been your experience with balancing human and AI agents?

Image credits: Google Gemini
