Tag Archives: Artificial Intelligence

Will our opinion still really be our own in an AI Future?

GUEST POST from Pete Foley

Intuitively we all mostly believe our opinions are our own.  After all, they come from that mysterious thing we call consciousness that resides somewhere inside of us. 

But we also know that other people’s opinions are shaped by all sorts of external influences. So unless we as individuals are uniquely immune to influence, it begs the question: ‘how much of what we think, and what we do, is really uniquely us?’ And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people’s political beliefs (https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs). Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general. And that’s quite sobering on a couple of fronts. Firstly, for many people today, political beliefs are deeply tied to their value systems and sense of self, so this manipulation is potentially profound. Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new. We are all overwhelmed by political marketing during election season. We accept that media has manipulated public opinion for decades, and that social media has amplified this in recent years. Similarly, we’ve all grown up immersed in marketing and advertising designed to influence our decisions, opinions and actions. Meanwhile, the rise in prominence of the behavioral sciences in recent decades has brought more structure and efficiency to behavioral influence, literally turning an art into a science. Framing, priming, pre-suasion, nudging and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a diverse field, and it can be deployed as an art or a science. Whether it’s influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition, we are largely unaware of implicit influence on ourselves, even though we can often see it in others. And even in hindsight, it’s very difficult to introspect on implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event.

So what does this mean? As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have. Futurism is far from an exact science. But there are a couple of things we can make pretty decent guesses about.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good. Whether it’s mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), health, wellness, job satisfaction, social engagement or, importantly for many of us, the adoption of beneficial technology and innovation, so many areas can benefit from it. And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, by, for example, managing depression, or via preventative behavioral health interventions. Will this be authentic? It’s probably a little Huxley-dystopian, but will we care? It’s one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power? As humans, we don’t have a great record of doing the right thing when wealth and power come into the equation. And AI-empowered social, conceptual and behavioral manipulation has the potential to concentrate meaningful power even more than today’s tech-driven society already does. Will it be used exclusively for good, or will some seek to leverage it for their personal benefit at the expense of the broader community? Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do? Realistically, as individuals we can self-police, but we obviously also face limits in self-awareness of implicit manipulations. That said, we can to some degree still audit ourselves. We’ve probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs. Sometimes we recognize this quickly; other times we may be a little slower. But simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities. We are better at seeing others being manipulated than ourselves. We can use that as a mirror, and/or call it out to others when we see it. And many of us will find ourselves somewhere in the deployment chain, especially as AI is still in its early stages. For those of us to whom this applies, we have the opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago. It’s so long ago that I have to paraphrase, but the essence of the conversation was: never manipulate people into doing something that is not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn’t want, it might have offered initial success, but it would likely have come back to bite me in the long term. They certainly wouldn’t become repeat customers, and a mixture of buyer’s remorse, loss aversion and revenge could turn them into active opponents. This potential for critical thinking in hindsight exists in virtually every situation where outcomes damage the individual.

The bottom line is that even today, we already have to continually ask ourselves whether what we see is real, and whether our beliefs are truly our own or have been manipulated. Media and social media memes already play the manipulation game. AI may already be better at it, and if not, it’s only a matter of time before it is. If you think we are politically polarized now, hang onto your hat! But awareness is key. We all need to stay aware, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini


Was Your AI Strategy Developed by the Underpants Gnomes?


GUEST POST from Robyn Bolton

“It just popped up one day. Who knows how long they worked on it or how many millions were spent. They told us to think of it as ChatGPT but trained on everything our company has ever done so we can ask it anything and get an answer immediately.”

The words my client was using to describe her company’s new AI Chatbot made it sound like a miracle. Her tone said something else completely.

“It sounds helpful,”  I offered.  “Have you tried it?”

 “I’m not training my replacement! And I’m not going to train my R&D, Supply Chain, Customer Insights, or Finance colleagues’ replacements either. And I’m not alone. I don’t think anyone’s using it because the company just announced they’re tracking usage and, if we don’t use it daily, that will be reflected in our performance reviews.”

 All I could do was sigh. The Underpants Gnomes have struck again.

Who are the Underpants Gnomes?

The Underpants Gnomes are the stars of a 1998 South Park episode described by media critic Paul Cantor as “the most fully developed defense of capitalism ever produced.”

Claiming to be business experts, the Underpants Gnomes sneak into South Park residents’ homes every night and steal their underpants. When confronted by the boys in their underground lair, the Gnomes explain their business plan:

  1. Collect underpants
  2. ?
  3. Profit

It was meant as satire.

Some took it as an abbreviated MBA.

How to Spot the Underpants AI Gnomes

As the AI hype grows, fueling executive FOMO (Fear of Missing Out), the Underpants Gnomes, cleverly disguised as experts, entrepreneurs and consultants, saw their opportunity.

  1. Sell AI
  2. ?
  3. Profit

While they’ve pivoted their business focus, they haven’t improved their operations, so the Underpants AI Gnomes are still easy to spot:

  1. Investment without Intention: Is your company investing in AI because it’s “essential to future-proofing the business”? That sounds good, but if your company can’t explain the future it’s proofing itself against and how AI builds a moat or a life preserver in that future, it’s a sign that the Gnomes are in the building.
  2. Switches, not Solutions: If your company thinks that AI adoption is as “easy as turning on Copilot” or “installing a custom GPT chatbot,” the Gnomes are gaining traction. AI is a tool, and you need to teach people how to use tools, build processes to support the change, and demonstrate the benefit.
  3. Activity without Achievement: When MIT published research indicating that 95% of corporate Gen AI pilots were failing, it was a sign of just how deeply the Gnomes have infiltrated companies. Experiments are essential at the start of any new venture but only useful if they generate replicable and scalable learning.

How to defend against the AI Gnomes

Odds are the Gnomes are already in your company. But fear not, you can still turn “Phase 2: ?” into something that actually leads to “Phase 3: Profit.”

  1. Start with the end in mind: Be specific about the outcome you are trying to achieve. The answer should be agnostic of AI and tied to business goals.
  2. Design with people at the center: Achieving your desired outcomes requires rethinking and redesigning existing processes. Strategic creativity like that requires combining people, processes, and technology to achieve the outcome and embed the change.
  3. Develop with discipline: Just because you can (run a pilot, sign up for a free trial), doesn’t mean you should. Small-scale experiments require the same degree of discipline as multi-million-dollar digital transformations. So, if you can’t articulate what you need to learn and how it contributes to the bigger goal, move on.

AI, in all its forms, is here to stay. But the same doesn’t have to be true for the AI Gnomes.

Have you spotted the Gnomes in your company?

Image credit: AI Underpants Gnomes (just kidding, Google Gemini made the image)


Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation

LAST UPDATED: December 8, 2025 at 4:56 PM


GUEST POST from Art Inteligencia

For the last decade, Artificial Intelligence (AI) has lived primarily on our screens and in the cloud — a brain without a body. While large language models (LLMs) and predictive algorithms have revolutionized data analysis, they have done little to change the physical experience of work, commerce, and daily life. This is the innovation chasm we must now bridge.

The next great technological leap is Embodied Artificial Intelligence (EAI): the convergence of advanced robotics (the body) and complex, generalized AI (the brain). EAI systems are designed not just to process information, but to operate autonomously and intelligently within our physical world. This is a profound shift for Human-Centered Innovation, because EAI promises to eliminate the drudgery, danger, and limitations of physical labor, allowing humans to focus exclusively on tasks that require judgment, creativity, and empathy.

The strategic deployment of EAI requires a shift in mindset: organizations must view these agents not as mechanical replacements, but as co-creators that augment and elevate the human experience. The most successful businesses will be those that unlearn the idea of human vs. machine and embrace the model of Human-Embodied AI Symbiosis.

The EAI Opportunity: Three Human-Centered Shifts

EAI accelerates change by enabling three crucial shifts in how we organize work and society:

1. The Shift from Automation to Augmentation

Traditional automation replaces repetitive tasks. EAI offers intelligent augmentation. Because EAI agents learn and adapt in real-time within dynamic environments (like a factory floor or a hospital), they can handle unforeseen situations that script-based robots cannot. This means the human partner moves from supervising a simple process to managing the exceptions and optimizations of a sophisticated one. The human job becomes about maximizing the intelligence of the system, not the efficiency of the body.

2. The Shift from Efficiency to Dignity

Many essential human jobs are physically demanding, dangerous, or profoundly repetitive. EAI offers a path to remove humans from these undignified roles — the loading and unloading of heavy boxes, inspection of hazardous infrastructure, or the constant repetition of simple assembly tasks. This frees human capital for high-value interaction, fostering a new organizational focus on the dignity of work. Organizations committed to Human-Centered Innovation must prioritize the use of EAI to eliminate physical risk and strain.

3. The Shift from Digital Transformation to Physical Transformation

For decades, digital transformation has been the focus. EAI catalyzes the necessary physical transformation. It closes the loop between software and reality. An inventory algorithm that predicts demand can now direct a bipedal robot to immediately retrieve and prepare the required product from a highly chaotic warehouse shelf. This real-time, physical execution based on abstract computation is the true meaning of operational innovation.

Case Study 1: Transforming Infrastructure Inspection

Challenge: High Risk and Cost in Critical Infrastructure Maintenance

A global energy corporation (“PowerLine”) faced immense risk and cost in maintaining high-voltage power lines, oil pipelines, and sub-sea infrastructure. These tasks required sending human crews into dangerous, often remote, or confined spaces for time-consuming, repetitive visual inspections.

EAI Intervention: Autonomous Sensory Agents

PowerLine deployed a fleet of autonomous, multi-limbed EAI agents equipped with advanced sensing and thermal imaging capabilities. These robots were trained not just on pre-programmed routes, but on the accumulated, historical data of human inspectors, learning to spot subtle signs of material stress and structural failure — a skill previously reserved for highly experienced humans.

  • The EAI agents performed 95% of routine inspections, capturing data with superior consistency.
  • Human experts unlearned routine patrol tasks and focused exclusively on interpreting the EAI data flags and designing complex repair strategies.

The Outcome:

The use of EAI led to a 70% reduction in inspection time and, critically, a near-zero rate of human exposure to high-risk environments. This strategic pivot proved that EAI’s greatest value is not economic replacement, but human safety and strategic focus. The EAI provided a foundational layer of reliable, granular data, enabling human judgment to be applied only where it mattered most.

Case Study 2: Elderly Care and Companionship

Challenge: Overstretched Human Caregivers and Isolation

A national assisted living provider (“ElderCare”) struggled with caregiver burnout and increasing costs, while many residents suffered from emotional isolation due to limited staff availability. The challenge was profoundly human-centered: how to provide dignity and aid without limitless human resources.

EAI Intervention: The Adaptive Care Companion

ElderCare piloted the use of adaptive, humanoid EAI companions in low-acuity environments. These agents were programmed to handle simple, repetitive physical tasks (retrieving dropped items, fetching water, reminding patients about medication) and, critically, were trained on empathetic conversation models.

  • The EAI agents managed 60% of non-essential, fetch-and-carry tasks, freeing up human nurses for complex medical care and deep, personalized interaction.
  • The EAI’s conversation logs provided caregivers with Small Data insights into the emotional state and preferences of the residents, allowing the human staff to maximize the quality of their face-to-face time.

The Outcome:

The pilot resulted in a 30% reduction in nurse burnout and, most importantly, a measurable increase in resident satisfaction and self-reported emotional well-being. The EAI was deployed not to replace the human touch, but to protect and maximize its quality by taking on the physical burden of routine care. The innovation successfully focused human empathy where it had the greatest impact.

The EAI Ecosystem: Companies to Watch

The race to commercialize EAI is accelerating, driven by the realization that AI needs a body to unlock its full economic potential. Organizations should be keenly aware of the leaders in this ecosystem. Companies like Boston Dynamics, known for advanced mobility and dexterity, are pioneering the physical platforms. Startups such as Sanctuary AI and Figure AI are focused on creating general-purpose humanoid robots capable of performing diverse tasks in unstructured environments, integrating advanced large language and vision models into physical forms. Simultaneously, major players like Tesla with its Optimus project and research divisions within Google DeepMind are laying the foundational AI models necessary for EAI agents to learn and adapt autonomously. The most promising developments are happening at the intersection of sophisticated hardware (the actuators and sensors) and generalized, real-time control software (the brain).

Conclusion: A New Operating Model

Embodied AI is not just another technology trend; it is the catalyst for a radical change in the operating model of human civilization. Leaders must stop viewing EAI deployment as a simple capital expenditure and start treating it as a Human-Centered Innovation project. Your strategy should be defined by the question: How can EAI liberate my best people to do their best, most human work? Embrace the complexity, manage the change, and utilize the EAI revolution to drive unprecedented levels of dignity, safety, and innovation.

“The future of work is not AI replacing humans; it is EAI eliminating the tasks that prevent humans from being fully human.”

Frequently Asked Questions About Embodied Artificial Intelligence

1. How does Embodied AI differ from traditional industrial robotics?

Traditional industrial robots are fixed, single-purpose machines programmed to perform highly repetitive tasks in controlled environments. Embodied AI agents are mobile, often bipedal or multi-limbed, and are powered by generalized AI models, allowing them to learn, adapt, and perform complex, varied tasks in unstructured, human environments.

2. What is the Human-Centered opportunity of EAI?

The opportunity is the elimination of the “3 Ds” of labor: Dangerous, Dull, and Dirty. By transferring these physical burdens to EAI agents, organizations can reallocate human workers to roles requiring social intelligence, complex problem-solving, emotional judgment, and creative innovation, thereby increasing the dignity and strategic value of the human workforce.

3. What does “Human-Embodied AI Symbiosis” mean?

Symbiosis refers to the collaborative operating model where EAI agents manage the physical execution and data collection of routine, complex tasks, while human professionals provide oversight, set strategic goals, manage exceptions, and interpret the resulting data. The systems work together to achieve an outcome that neither could achieve efficiently alone.

Your first step toward embracing Embodied AI: Identify the single most physically demanding or dangerous task in your organization that is currently performed by a human. Begin a Human-Centered Design project to fully map the procedural and emotional friction points of that task, then use those insights to define the minimum viable product (MVP) requirements for an EAI agent that can eliminate that task entirely.

UPDATE – Here is an infographic of the key points of this article that you can download:

Embodied Artificial Intelligence Infographic

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: 1 of 1,000+ quote slides for your meetings & presentations at http://misterinnovation.com

Is OpenAI About to Go Bankrupt?

LAST UPDATED: December 4, 2025 at 4:48 PM


GUEST POST from Chateau G Pato

The innovation landscape is shifting, and the tremors are strongest in the artificial intelligence (AI) sector. For a moment, OpenAI felt like an impenetrable fortress, the company that cracked the code and opened the floodgates of generative AI to the world. But now, as a thought leader focused on Human-Centered Innovation, I see the classic signs of disruption: a growing competitive field, a relentless cash burn, and a core product advantage that is rapidly eroding. The question of whether OpenAI is on the brink of bankruptcy isn’t just about sensational headlines — it’s about the fundamental sustainability of a business model built on unprecedented scale and staggering cost.

The “Code Red” announcement from OpenAI, ostensibly about maintaining product quality, was a subtle but profound concession. It was an acknowledgment that the days of unchallenged superiority are over. This came as competitors like Google’s Gemini and Anthropic’s Claude are not just keeping pace but, on many key performance metrics, reportedly surpassing OpenAI’s flagship models. Performance parity, or even outperformance, is a killer in the technology adoption curve. When the superior tool is also dramatically cheaper, the choice for enterprises and developers — the folks who pay the real money — becomes obvious.

The Inevitable Crunch: Performance and Price

The competitive pressure is coming from two key vectors: performance and cost-efficiency. While the public often focuses on benchmark scores like MMLU or coding abilities — where models like Gemini and Claude are now trading blows or pulling ahead — the real differentiator for business users is price. New models, including the China-based DeepSeek, are entering the market with reported capabilities approaching the frontier models but at a fraction of the development and inference cost. DeepSeek’s reportedly low development cost highlights that the efficiency of model creation is also improving outside of OpenAI’s immediate sphere.

Crucially, the open-source movement, championed by models like Meta’s Llama family, introduces a zero-cost baseline that fundamentally caps the premium OpenAI can charge. Llama, and the rapidly improving ecosystem around it, means that a good-enough, customizable, and completely free model is always an option for businesses. This open-source competition bypasses the high-cost API revenue model entirely, forcing closed-source providers to offer a quantum leap in utility to justify the expenditure. This dynamic accelerates the commoditization of foundational model technology, turning OpenAI’s once-unique selling proposition into a mere feature.

OpenAI’s models, for all their power, have been famously expensive to run — a cost that gets passed on through their API. The rise of sophisticated, cheaper alternatives — many of which employ highly efficient architectures like Mixture-of-Experts (MoE) — means the competitive edge of sheer scale is being neutralized by engineering breakthroughs in efficiency. If the next step in AI on its way to artificial general intelligence (AGI) is a choice between a 10% performance increase and a 10x cost reduction for 90% of the performance, the market will inevitably choose the latter. This is a structural pricing challenge that erodes one of OpenAI’s core revenue streams: API usage.
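
For readers wondering why MoE architectures cut inference cost so sharply, here is a minimal illustrative sketch in plain Python with NumPy. It is a toy model, not any vendor’s actual implementation: a router activates only the top-k of N expert networks for each input, so inference compute scales with k rather than with the total parameter count.

```python
import numpy as np

def moe_layer(x, experts, router_w, k=2):
    """Toy Mixture-of-Experts layer: route input vector `x` to the
    top-k of len(experts) expert networks. Only k experts actually
    run per input, so inference FLOPs scale with k, not with the
    total parameter count -- the efficiency lever described above."""
    logits = router_w @ x                     # one router score per expert
    top_k = np.argsort(logits)[-k:]           # indices of the k best experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()                  # softmax over selected experts
    # Weighted sum of only the selected experts' outputs.
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

# Toy usage: 8 experts, but each input pays the compute cost of only 2.
rng = np.random.default_rng(0)
d = 16
experts = [(lambda W: (lambda x: np.tanh(W @ x)))(rng.normal(size=(d, d)))
           for _ in range(8)]
router_w = rng.normal(size=(8, d))
y = moe_layer(rng.normal(size=d), experts, router_w, k=2)
print(y.shape)  # (16,)
```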

The Financial Chasm: Burn Rate vs. Reserves

The financial situation is where the “bankruptcy” narrative gains traction. Developing and running frontier AI models is perhaps the most capital-intensive venture in corporate history. Reports — which are often conflicting and subject to interpretation — paint a picture of a company with an astronomical cash burn rate. Estimates for annual operational and development expenses are in the billions of dollars, resulting in a net loss measured in the billions.

This reality must be contrasted with the position of their main rivals. While OpenAI is heavily reliant on Microsoft’s monumental investment — a complex deal involving cash and Azure cloud compute credits — Microsoft’s exposure is structured as a strategic infrastructure play. The real financial behemoth is Alphabet (Google), which can afford to aggressively subsidize its Gemini division almost indefinitely. Alphabet’s near-monopoly on global search engine advertising generates profits in the tens of billions of dollars every quarter. This virtually limitless reservoir of cash allows Google to cross-subsidize Gemini’s massive research, development, and inference costs, effectively enabling them to engage in a high-stakes price war that smaller, loss-making entities like OpenAI cannot truly win on a level playing field. Alphabet’s strategy is to capture market share first, using the profit engine of search to buy time and scale, a luxury OpenAI simply does not have without a continuous cash injection from a partner.

The question is not whether OpenAI has money now, but whether their revenue growth can finally eclipse their accelerating costs before their massive reserve is depleted. Their long-term financial projections, which foresee profitability and revenues in the hundreds of billions by the end of the decade, require not just growth, but a sustained, near-monopolistic capture of the new AI-driven knowledge economy. That becomes increasingly difficult when competitors are faster, cheaper, and arguably better, and have access to deeper, more sustainable profit engines for cross-subsidization.

The Future Outlook: Change or Consequence

OpenAI’s future is not doomed, but the company must initiate a rapid, human-centered transformation. The current trajectory — relying on unprecedented capital expenditure to maintain a shrinking lead in model performance — is structurally unsustainable in the face of faster, cheaper, and increasingly open-source models like Meta’s Llama. The next frontier isn’t just AGI; it’s AGI at scale, delivered efficiently and affordably.

OpenAI must pivot from a model of monolithic, expensive black-box development to one that prioritizes efficiency, modularity, and a true ecosystem approach. This means a rapid shift to MoE architectures, aggressive cost-cutting in inference, and a clear, compelling value proposition beyond just “we were first.” Human-Centered Innovation principles dictate that a company must listen to the market — and the market is shouting for price, performance, and flexibility. If OpenAI fails to execute this transformation and remains an expensive, marginal performer, its incredible cash reserves will serve only as a countdown timer to a necessary and painful restructuring.

Frequently Asked Questions (FAQ)

  • Is OpenAI currently profitable?
    OpenAI is currently operating at a significant net loss. Its annual cash burn rate, driven by high R&D and inference costs, reportedly exceeds its annual revenue, meaning it relies heavily on its massive cash reserves and the strategic investment from Microsoft to sustain operations.
  • How are Gemini and Claude competing against OpenAI on cost and performance?
    Competitors like Google’s Gemini and Anthropic’s Claude are achieving performance parity or superiority on key benchmarks. Furthermore, they are often cheaper to use (lower inference cost) due to more efficient architectures (like MoE) and the ability of their backers (Alphabet in Gemini’s case; Google and Amazon as major investors in Anthropic’s) to cross-subsidize AI development with enormous profits from other revenue streams, such as search engine advertising.
  • What was the purpose of OpenAI’s “Code Red” announcement?
    The “Code Red” was an internal or public acknowledgment by OpenAI that its models were facing performance and reliability degradation in the face of intense, high-quality competition from rivals. It signaled a necessary, urgent, company-wide focus on addressing these issues to restore and maintain a technological lead.

UPDATE: Just found on X that HSBC has said that OpenAI is facing nearly half a trillion dollars in operating losses through 2030, per the Financial Times (FT). Here is the chart of their $100 billion in projected losses in 2029. With the success of Gemini, Claude, DeepSeek, Llama and competitors yet to emerge, the revenue piece may be overstated:

OpenAI estimated 2029 financials

Image credits: Google Gemini, Financial Times

Top 10 Human-Centered Change & Innovation Articles of November 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are November’s ten most popular innovation posts:

  1. Eight Types of Innovation Executives — by Stefan Lindegaard
  2. Is There a Real Difference Between Leaders and Managers? — by David Burkus
  3. 1,000+ Free Innovation, Change and Design Quotes Slides — by Braden Kelley
  4. The AI Agent Paradox — by Art Inteligencia
  5. 74% of Companies Will Die in 10 Years Without Business Transformation — by Robyn Bolton
  6. The Unpredictability of Innovation is Predictable — by Mike Shipulski
  7. How to Make Your Employees Thirsty — by Braden Kelley
  8. Are We Suffering from AI Confirmation Bias? — by Geoffrey A. Moore
  9. How to Survive the Next Decade — by Robyn Bolton
  10. It’s the Customer Baby — by Braden Kelley

BONUS – Here are five more strong articles published in October that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:

The Reasons Customers May Refuse to Speak with AI


GUEST POST from Shep Hyken

If you want to anger your customers, make them do something they don’t want to do.

Up to 66% of U.S. customers say that when it comes to getting help, resolving an issue or making a complaint, they only want to speak to a live person. That’s according to the 2025 State of Customer Service and Customer Experience (CX) annual study. If you don’t provide the option to speak to a live person, you are at risk of losing many customers.

But not all customers feel that way. We asked another sample of more than 1,000 customers about using AI and self-service tools to get customer support, and 34% said they stopped doing business with a company or brand because self-service options were not provided.

These findings reveal the contrasting needs and expectations customers have when communicating with the companies they do business with. While the majority prefer human-to-human interaction, a substantial number (about one-third) not only prefer self-service options — AI-fueled solutions, robust frequently asked question pages on a website, video tutorials and more — but demand it or they will actually leave to find a competitor that can provide what they want.

This creates a big challenge for CX decision-makers that directly impacts customer retention and revenue.

Why Some Customers Resist AI

Our research finds that age makes a difference. For example, Baby Boomers show the strongest preference for human interaction, with 82% preferring the phone over digital solutions. Only half (52%) of Gen-Z feels the same way about the phone. Here’s why:

  1. Lack of Trust: Almost half of customers (49%) say they are scared of technologies like AI and ChatGPT.
  2. Privacy Concerns: Seventy percent of customers are concerned about data privacy and security when interacting with AI.
  3. Success — Or Lack of Success: While I think it’s positive that 50% of customers surveyed have successfully resolved a customer service issue using AI without the need for a live agent, that also means that 50% have not.

Customers aren’t necessarily anti-technology. They’re anti-ineffective technology. When AI fails to understand requests and lacks empathy in sensitive situations, the negative experience can make certain customers want to only communicate with a human. Even half of Gen-Z (48%) says they are frustrated with AI technology (versus 17% of Baby Boomers).

Why Some Customers Embrace AI

The 34% of customers who prefer self-service options to the point of saying they are willing to stop doing business with a company if self-service isn’t available present a dilemma for CX leaders. This can paralyze the decision process for what solutions to buy and implement. Understanding some of the reasons certain customers embrace AI is important:

  1. Speed, Convenience and Efficiency: The ability to get immediate support without having to call a company, wait on hold, be authenticated, etc., is enough to get customers using AI. If you had the choice between getting an answer immediately or having to wait 15 minutes, which would you prefer? (That’s a rhetorical question.)
  2. 24/7 Availability: Immediate support is important, but having immediate access to support outside of normal business hours is even better.
  3. A Belief in the Future: There is optimism about the future of AI, as 63% of customers expect AI technologies to become the primary mode of customer service in the future — a significant increase from just 21% in 2021. That optimism has customers trying and outright adopting the use of AI.

CX leaders must recognize the generational differences — and any other impactful differences — as they make decisions. For companies that sell to customers across generations, this becomes increasingly important, especially as Gen-Z and Millennials gain purchasing power. Turning your back on a generation’s technology expectations puts you at risk of losing a large percentage of customers.

What’s a CX Leader To Do?

Some companies have experimented with forcing customers to use only AI and self-service solutions. This is risky, and for the most part, the experiments have failed. Yet, as AI improves — and it’s doing so at a very rapid pace — it’s okay to push customers to use self-service. Just support it with a seamless transfer to a human if needed. An AI-first approach works as long as there’s a backup.

Forcing customers to use a 100% solution, be it AI or human, puts your company at risk of losing customers. Today’s strategy should be a balanced choice between new and traditional customer support. It should be about giving customers the experience they want and expect — one that makes them say, “I’ll be back!”

Image credit: Pixabay

This article originally appeared on Forbes.com

Don’t Adopt Artificial Incompetence


GUEST POST from Shep Hyken

I’ve been reviewing my customer experience research, specifically the section on the future of customer service and AI (Artificial Intelligence). A few findings prove that customers are frustrated and lack confidence in how companies are using AI:

  • In general, 57% of customers are frustrated by AI-fueled self-service options.
  • 49% of customers say technologies like AI and ChatGPT scare them.
  • 51% of customers have received wrong or incorrect information from an AI self-service bot.

As negative as these findings sound, there are plenty of findings that point to AI getting better and more customers feeling comfortable using AI solutions. The technology continues to improve quickly. While it’s only been five months since we surveyed more than 1,000 U.S. consumers, I bet a new survey would show continued improvement and comfort level regarding AI. But for this short article, let’s focus on the problem that needs to be resolved.

Upon reviewing the numbers, I realized that there’s another kind of AI: Artificial Incompetence. That’s my new label for companies that improperly use AI and cause customers to be frustrated, scared and/or receive bad information. After thinking I was clever and invented this term, I was disheartened to discover, after a Google search, that the term already exists; however, it’s not widely used.

So, AI – as in Artificial Incompetence – is a problem you don’t want to have. To avoid it, start by recognizing that AI isn’t perfect. Be sure to have a human backup that’s fast and easy to reach when the customer feels frustrated, angry, or scared.

And now, as the title of this article implies, there’s more. After sharing the new concept of AI with my team, we brainstormed and had fun coming up with two more phrases based on some of the ideas I covered in my past articles and videos:

Feedback Constipation: When you take in lots of feedback but don’t act on it, it’s like eating too much and not being able to “go.” (I know … a little graphic … but it makes the point.) This came from my article Turning Around Declining Customer Satisfaction, which teaches that collecting feedback isn’t valuable unless you use it.

Jargon Jeopardy: Most people – but not everyone – know what CX means. If you are using it with a customer, and they don’t know what it means, how do you think they feel? I was once talking to a customer service rep who kept using abbreviations. I could only guess what they meant. So I asked him to stop with the E-I-E-I-O’s (referencing the lyrics of the song about Old MacDonald’s farm). This was the main theme of my article titled Other Experiences Exist Beyond Customer Experience (EX, WX, DX, UX and more).

So, this was a fun way of poking fun at companies that may think they are doing CX right (and doing it well) when the customer’s perception is the opposite. Don’t use AI that frustrates customers and projects an image of incompetence. Don’t collect feedback unless you plan to use it. Otherwise, it’s a waste of everyone’s time and effort. Finally, don’t confuse customers – and even employees – with jargon and acronyms that make them feel like they are forced to relearn the alphabet.

Image Credits: 1 of 950+ FREE quote slides available at http://misterinnovation.com

This article originally appeared on Forbes.com

Are We Suffering from AI Confirmation Bias?


GUEST POST from Geoffrey A. Moore

When social media first appeared on the scene, many of us had high hopes it could play a positive role in community development and civic affairs, as indeed it has. What we did not anticipate was the long-term impact of the digital advertising model that supported it. That model is based on click-throughs, and one of the most effective ways to increase them was to present content that reinforces the recipient’s existing views.

Statisticians call the attraction to one’s existing point of view confirmation bias, and we all have it. As individuals, we believe we are in control of this, but it is obvious that at the level of populations, we are not. Confirmation bias, fed first by social media, and then by traditional media once it is converted to digital, has driven political and social polarization throughout the world. It has been further inflamed by conspiracy theories, malicious communications, fake news, and the like. And now we are faced with the advent of yet another amplifier—artificial intelligence. A significant portion of the fears about how AI could impact human welfare stem from how easily it can be put to malicious use through disinformation campaigns.

The impact of all this on our political life is chilling. Polarized media amplifies the impact of extremism and dampens the impact of moderation. This has most obviously been seen in primary elections, but it has now carried over into general elections to the point where highly unqualified individuals who have no interest in public service hold some of the most important roles in state and federal government. The resulting dysfunction is deeply disturbing, but it is not clear if and where a balance can be found.

Part of the problem is that confirmation bias is an essential part of healthy socialization. It reflects the impact that narratives have on our personal and community identities. What we might see as arrant folly another person sees as a necessary leap of faith. Our founding fathers were committed to protecting our nation from any authority imposing its narratives on unwilling recipients, hence our Constitutional commitment to both freedom of religion and freedom of speech.

In effect, this makes it virtually impossible to legislate our way out of this dilemma. Instead, we must embrace it as a Darwinian challenge, one that calls for us as individuals to adapt our strategies for living to a dangerous new circumstance. Here I think we can take a lesson from our recent pandemic experience. Faced with the threat of a highly contagious, ever-mutating Covid virus, most of the developed economies embraced rapid vaccination as their core response. China, however, did not. It embraced regulation instead. What they and we learned is that you cannot solve problems of contagion through regulation.

We can apply this learning to dealing with the universe of viral memes that have infected our digital infrastructure and driven social discord. Instead of regulation, we need to think of vaccination. The vaccine that protects people from fake news and its many variants is called critical thinking, and the healthcare provider that dispenses it is called public education.

We have spent the past several decades focusing on the STEM wing of our educational system, but at the risk of exercising my own confirmation bias, the immunity protection we need now comes from the liberal arts. Specifically, it emerges from supervised classroom discussions in which students are presented with a wide variety of challenging texts and experiences accompanied by a facilitated dialog that instructs them in the practices of listening, questioning, proposing, debating, and ultimately affirming or denying the validity of the argument under consideration. These discussions are not about promoting or endorsing any particular point of view. Rather, they teach one how to engage with any point of view in a respectful, powerful way. This is the intellectual discipline that underlies responsible citizenship. We have it in our labs. We just need to get it distributed more broadly.

That’s what I think. What do you think?

Image Credit: Pixabay

The AI Agent Paradox

How E-commerce Must Proactively Manage Experiences Created Without Their Consent

LAST UPDATED: November 7, 2025 at 4:31 PM


GUEST POST from Art Inteligencia

A fundamental shift is underway in the world of e-commerce, moving control of the customer journey out of the hands of the brand and into the hands of the AI Agent. The recent lawsuit by Amazon against Perplexity regarding unauthorized access to user accounts by its agentic browser is not an isolated legal skirmish; it is a red flag moment for every company that sells online. The core challenge is this: AI agents are building and controlling the shopping experience — the selection, the price comparison, the checkout path — often without the e-commerce site’s knowledge or consent.

This is the AI Agent Paradox: The most powerful tool for customer convenience (the agent) simultaneously poses the greatest threat to brand control, data integrity, and monetization models. The era of passively optimizing a webpage is over. The future belongs to brands that actively manage their relationship with the autonomous, agentic layer that sits between them and their human customers.

The Three Existential Threats of the Autonomous Agent

Unmanaged AI agents, operating as digital squatters on your site, create immediate systemic problems for e-commerce businesses:

  1. Data Integrity and Scraping Overload: Agents typically use resource-intensive web scraping techniques that overload servers and pollute internal analytics. The shopping experience they create is invisible to the brand’s A/B testing and personalization engines.
  2. Brand Bypass and Commoditization: Agents prioritize utility over loyalty. If a customer asks for “best price on noise-cancelling headphones,” the agent may bypass your brand story, unique value propositions, and even your preferred checkout flow, reducing your products to mere SKUs and price points. This is the Brand Bypass threat.
  3. Security and Liability: Unauthorized access, especially to user accounts (as demonstrated by the Amazon-Perplexity case), creates massive security vulnerabilities and legal liability for the e-commerce platform, which is ultimately responsible for protecting user data.

The How-To: Moving from Resistance to Proactive Partnership

Instead of relying solely on defensive legal action (which is slow and expensive), e-commerce brands must embrace a proactive, human-centered API strategy. The goal is to provide a superior, authorized experience for the AI agents, turning them from adversaries into accelerated sales channels — and honoring the trust the human customer places in their proxy.

Step 1: Build the Agent-Optimized API Layer

Treat the AI agent as a legitimate, high-volume customer with unique needs (structured data, speed). Design a specific, clean Agent API separate from your public-facing web UI. This API should allow agents to retrieve product information, pricing, inventory status, and execute checkout with minimal friction and maximum data hygiene. This immediately prevents the resource-intensive web scraping that plagues servers.
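
To make Step 1 concrete, here is a minimal sketch of what such an Agent API endpoint could look like, written in Python with Flask. The route, header, agent registry, and field names are hypothetical illustrations, not a reference to any real platform’s API.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical registry of agents that have accepted the Terms of Service.
REGISTERED_AGENTS = {"agent-key-123": {"name": "ExampleShopBot", "tier": "partner"}}

# Stand-in for the real catalog/inventory system.
PRODUCTS = {
    "SKU-001": {"name": "Noise-Cancelling Headphones", "price_usd": 199.00,
                "in_stock": 42, "ships_in_days": 2},
}

@app.get("/agent/v1/products/<sku>")
def product_for_agent(sku):
    """Serve clean, structured product data to authorized agents:
    nothing to scrape, no pollution of the brand's web analytics."""
    agent = REGISTERED_AGENTS.get(request.headers.get("X-Agent-Key", ""))
    if agent is None:
        abort(401)  # unregistered agents must use the public site
    product = PRODUCTS.get(sku)
    if product is None:
        abort(404)
    return jsonify({"sku": sku, **product,
                    "attribution_required": True})  # a brand non-negotiable

if __name__ == "__main__":
    app.run(port=8080)
```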

Step 2: Define and Enforce the Rules of Engagement

Your Terms of Service (TOS) must clearly articulate the acceptable use of your data by autonomous agents. Furthermore, the Agent API must enforce these rules programmatically. You can reward compliant agents (faster access, richer data) and throttle or block non-compliant agents (those attempting unauthorized access or violating rate limits). This is where you insert your brand’s non-negotiables, such as attribution requirements or user privacy protocols, thereby regaining control.
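
Programmatic enforcement can be simple. Below is an illustrative sliding-window rate limiter keyed to an agent’s compliance tier, continuing the hypothetical Python sketch above; the tier names and per-minute budgets are invented for illustration.

```python
import time
from collections import defaultdict

# Hypothetical per-tier budgets (requests per minute): reward compliant
# partner agents with more throughput, throttle or block the rest.
TIER_LIMITS = {"partner": 600, "registered": 60, "flagged": 0}

_request_log = defaultdict(list)  # agent key -> recent request timestamps

def allow_request(agent_key, tier, now=None):
    """Sliding-window limiter: permit the call only while the agent
    is under its tier's per-minute budget."""
    now = now if now is not None else time.time()
    recent = [t for t in _request_log[agent_key] if now - t < 60.0]
    if len(recent) >= TIER_LIMITS.get(tier, 0):
        _request_log[agent_key] = recent
        return False  # throttled: over budget, or a blocked tier
    recent.append(now)
    _request_log[agent_key] = recent
    return True

# Inside the endpoint above, call allow_request(...) and respond with
# HTTP 429 (plus a Retry-After header) when it returns False.
```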

Step 3: Offer Value-Added Agent Services and Data

This is the shift from defense to offense. Give agents a reason to partner with you and prefer your site. Offer exclusive agent-only endpoints that provide aggregated, structured data your competitors don’t, such as sustainable sourcing information, local inventory availability, or complex configurator data. This creates a competitive advantage where the agent actually prefers to send traffic to your optimized channel because it provides a superior outcome for the human user.
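
To show what that value-added data might look like on the wire, here is a hypothetical enriched payload an agent-only endpoint could return, with structured fields (sustainability scores, local inventory, configurator options, attribution terms) that a scraper could never reliably extract from a public product page. All field names are invented for illustration, and they foreshadow the case studies below.

```python
# Hypothetical agent-only payload: structured fields that give an agent
# a concrete reason to prefer this authorized channel over scraping.
ENRICHED_RESPONSE = {
    "sku": "SKU-001",
    "price_usd": 199.00,
    "sustainability": {"score": 87, "certifications": ["FSC", "Fair Trade"]},
    "local_inventory": [{"store": "Downtown", "qty": 5, "pickup_today": True}],
    "configurator": {"colors": ["black", "sand"], "warranty_years": [1, 3]},
    "attribution": {"required": True, "affiliate_fee_pct": 1.5},
}
```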

Case Study 1: The Furniture Retailer and the AI Interior Designer

Challenge: Complex, Multivariable E-commerce Decisions

A high-end furniture and décor retailer struggled with low conversion rates because buying furniture requires complex decisions (size, material, delivery time). Customers were leaving the site to use third-party AI interior design tools.

Proactive Partnership:

The retailer created a “Design Agent API.” This API didn’t just provide price and SKU; it offered rich, structured data on 3D model compatibility, real-time customization options, and material sustainability scores. They partnered with a leading AI interior design platform, providing the agent direct, authorized access to this structured data. The AI agent, in turn, could generate highly accurate virtual room mock-ups using the retailer’s products. This integration streamlined the complex path to purchase, turning the agent from a competitor into the retailer’s most effective pre-visualization sales tool.

Case Study 2: The Specialty Grocer and the AI Recipe Planner

Challenge: Fragmented Customer Journey from Inspiration to Purchase

An online specialty grocer, focused on rare and organic ingredients, saw their customers using third-party AI recipe planners and shopping list creators, which often failed to locate the grocer’s unique SKUs or sent traffic to competitors.

Proactive Partnership:

The grocer developed a “Recipe Fulfillment Endpoint.” They partnered with two popular AI recipe apps. When a user generated a recipe, the AI agent, using the grocer’s endpoint, could instantly check ingredient availability, price, and even offer substitute suggestions from the grocer’s unique inventory. The agent generated a “One-Click, Fully-Customized Cart” for the grocer. The grocer ensured the agent received a small attribution fee (a form of commission), turning the agent into a reliable, high-converting affiliate sales channel. This formalized partnership eliminated the friction between inspiration and purchase, driving massive, high-margin sales.

The Human-Centered Imperative

Ultimately, this is a human-centered change challenge. The human customer trusts their AI agent to act on their behalf. By providing a clean, transparent, and optimized path for the agent, the e-commerce brand is honoring that trust. The focus shifts from control over the interface to control over the data and the rules of interaction. This strategy not only improves server performance and data integrity but also secures the brand’s place in the customer’s preferred, agent-mediated future.

“The AI agent is your customer’s proxy. If you treat the proxy poorly, you treat the customer poorly. The future of e-commerce is not about fighting the agents; it’s about collaborating with them to deliver superior value.” — Braden Kelley

The time to move beyond the reactive defense and into proactive partnership is now. The e-commerce leaders of tomorrow will be the ones who design the best infrastructure for the machines that shop for humans. Your essential first step: Form a dedicated internal team to prototype your Agent API, defining the minimum viable, structured data you can share to incentivize collaboration over scraping.

Image credit: Google Gemini

Why Going AI Only is Dumb

I’m Sorry Dave, But I Can’t Do That

LAST UPDATED: November 3, 2025 at 4:50PM


by Braden Kelley

Last month I had the opportunity to attend Customer Contact Week (CCW) in Nashville, Tennessee, and following up on my article The Voicebots Are Coming, I’d like to dig into an idea that companies like Klarna have explored: eliminating all humans from contact centers. After all, what could possibly go wrong?

When I first heard that Klarna was going to eliminate humans from their contact centers and go all in on artificial intelligence, I thought to myself that they would likely live to regret it. Don’t get me wrong, artificial intelligence (AI) voicebots and chatbots can be incredibly useful, and that proves out in the real world: according to conference speakers, almost half of Fanatics’ calls are handled on the phone without ever reaching an agent. A lot of people are experimenting with AI, but AI is no longer experimental. What Klarna learned is that when you use AI to reduce your number of human agents, then if the AI is down you no longer have the ability to just call in off-duty agents to serve your customers.

But, on the flip side, we know that having AI customer service agents as part of your agent mix can have very positive impacts on the business. Small businesses like Brothers That Just Do Gutters have found that using AI agents increased their scheduling of estimate visits compared with humans alone. National Debt Relief automated their customer insufficient funds (CIF) calls and added an escalation path (AI, then agent) that delivered a 20% revenue lift over their best agents. They found that when an agent gets a NO, there isn’t much of an escalation path left. And, the delicate reality is that some people feel self-conscious calling a human to talk about debt problems, and there may be other sensitive issues where callers would actually feel more comfortable talking to a voicebot than a human. In addition, Fanatics is finding that AI agents are resolving some issues FASTER than human agents. Taken together, these examples show that a hybrid approach (humans plus AI) often yields better results than humans only or AI only, so design your approach consciously, as the sketch below illustrates.
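
A minimal sketch of that hybrid routing rule, in Python, with intents and thresholds invented purely for illustration: AI first, an explicit escalation path to a human, and a hard fallback whenever the AI is unavailable (the failure mode Klarna ran into).

```python
def route_contact(intent, ai_confidence, ai_available):
    """Illustrative AI-first routing with a human escalation path."""
    HUMAN_ONLY = {"escalated_complaint", "bereavement"}  # invented examples
    if not ai_available or intent in HUMAN_ONLY:
        return "human_agent"        # never strand callers when the AI is down
    if ai_confidence >= 0.75:       # invented confidence threshold
        return "ai_voicebot"
    return "ai_then_human"          # AI tries first, warm-transfers on a NO

print(route_contact("insufficient_funds", 0.9, True))   # -> ai_voicebot
print(route_contact("insufficient_funds", 0.9, False))  # -> human_agent
```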

Now let’s look at some important statistics from Customer Management Practice research:

  • 2/3 of people prefer calling in and talking by phone, but that preference skews 55+ and declines with each ten-year age bracket below, down to 30% for 18-24 year-olds
  • 3/4 of executives say more people want self-service now than three years ago
  • 3/4 of people want to spend less time getting support – so they can get back to the fun stuff, or back to business

Taken together, these statistics help make the case for increasing the use of AI agents in the contact center. If you happen to be looking to use AI agents in servicing your customers (or even if you already are), then it is important to think about how you can use them to remove friction from the system and to strategically allocate your humans to things that only humans can do. And if you need to win support from someone to go big with AI voicebots, then pick an important use case instead of one that nobody cares about, or, even better, pick something that you couldn’t have done before (example: a ride-sharing company had AI voicebots make 5 million calls to have drivers validate their tax information).

Finally, as I was listening to some of these sessions, it reminded me of a time when I was tasked with finding a new approach to staffing peak season for one of the Blue Cross/Blue Shield companies in the United States. At that time AI voicebots weren’t a thing, so I looked at how we could partner with a vendor to keep a small number of seasoned vendor staff on hand throughout the year, and then rely on those seasoned staff to hire and train seasonal workers, instead of taking our best employees off the phone to train temps.

Even now, all contact centers will still need a certain level of human staffing. But AI voicebots, AI simulation training for agents, and other new AI-powered tools represent a great opportunity to create a better solution for peak staffing in a whole host of industries with highly cyclical contact demand that is hard to staff for. One example from Customer Contact Week was a story about how Fanatics must 5x their number of agents during high seasons, and in practice this often results in their worst agents (temps hired only for the season) serving some of their best customers (high-dollar-value clients).

Conclusion

AI voicebots can be a great help during demand peaks, and other AI-powered tools (QA, simulations, coaching, etc.) can help accelerate and optimize your onboarding of both full-time and seasonal agents. But don’t pare back your human agent pool too far!

What has been your experience with balancing human and AI agents?

Image credits: Google Gemini
