Tag Archives: Artificial Intelligence

Solving the AI Trust Imperative with Provenance

The Digital Fingerprint

LAST UPDATED: January 5, 2026 at 3:33 PM


GUEST POST from Art Inteligencia

We are currently living in the artificial future of 2026, a world where human-authored and AI-generated content have become practically indistinguishable to the naked eye. In this era of agentic AI and high-fidelity synthetic media, we have moved past the initial awe of creation and into a far more complex phase: the Trust Imperative. As my friend Braden Kelley has frequently shared in his keynotes, innovation is change with impact, but if the impact is an erosion of truth, we are not innovating — we are disintegrating.

The flood of AI-generated content has created a massive Corporate Antibody response within our social and economic systems. To survive, organizations must adopt Generative Watermarking and Provenance technologies. These aren’t just technical safeguards; they are the new infrastructure of reality. We are shifting from a culture of blind faith in what we see to a culture of verifiable origin.

“Transparency is the only antidote to the erosion of trust; we must build systems that don’t just generate, but testify. If an idea is a useful seed of invention, its origin must be its pedigree.” — Braden Kelley

Why Provenance is the Key to Human-Centered Innovation™

Human-Centered Innovation™ requires psychological safety. In 2026, psychological safety is under threat by “hallucinated” news, deepfake corporate communiques, and the potential for industrial-scale intellectual property theft. When people cannot trust the data in their dashboards or the video of their CEO, the organizational “nervous system” begins to shut down. This is the Efficiency Trap in its most dangerous form: we’ve optimized for speed of content production, but lost the efficiency of shared truth.

Provenance tech — specifically the C2PA (Coalition for Content Provenance and Authenticity) standards — allows us to attach a permanent, tamper-evident digital “ledger” to every piece of media. This tells us who created it, what AI tools were used to modify it, and when it was last verified. It restores the human to the center of the story by providing the context necessary for informed agency.
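To make this concrete, here is a minimal Python sketch of such a tamper-evident manifest. It is illustrative only and not the actual C2PA format: real C2PA manifests are embedded in the media file itself as JUMBF boxes and are signed with X.509 certificate chains, while this toy version simply binds a claim about the creator and AI tools to the exact bytes of the media with an Ed25519 signature (via the widely used cryptography package). All field names and example values are hypothetical.

```python
# Minimal provenance-manifest sketch in the spirit of C2PA (illustrative only:
# real C2PA manifests are embedded in the media file and signed with X.509
# certificate chains, not bare Ed25519 keys).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media: bytes, creator: str, ai_tools: list[str],
                  key: Ed25519PrivateKey) -> dict:
    """Bind creator/tool assertions to the exact bytes of the media."""
    claim = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,          # who created it
        "ai_tools_used": ai_tools,   # what AI tools modified it (hypothetical field)
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_manifest(media: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Any change to the media or to the claim breaks the 'seal'."""
    claim = manifest["claim"]
    if claim["content_sha256"] != hashlib.sha256(media).hexdigest():
        return False  # the media was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the claim itself was tampered with


# Sign at the moment of creation, verify at the moment of consumption.
key = Ed25519PrivateKey.generate()
manifest = make_manifest(b"...media bytes...", "Jane Reporter", ["model-x v2"], key)
assert verify_manifest(b"...media bytes...", manifest, key.public_key())
assert not verify_manifest(b"...edited bytes...", manifest, key.public_key())
```

This is exactly the “broken seal” property discussed later in the FAQ: the verifier cannot tell you what the truth is, but it can tell you, cryptographically, whether the media still matches what its creator signed.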

Case Study 1: Protecting the Frontline of Journalism

The Challenge: In early 2025, a global news agency faced a crisis when a series of high-fidelity deepfake videos depicting a political coup began circulating in a volatile region. Traditional fact-checking was too slow to stop the viral spread, leading to actual civil unrest.

The Innovation: The agency implemented a camera-to-cloud provenance system. Every image captured by their journalists was cryptographically signed at the moment of capture. Using a public verification tool, viewers could instantly see the “chain of custody” for every frame.

The Impact: By 2026, the agency saw a 50% increase in subscriber trust scores. More importantly, they effectively “immunized” their audience against deepfakes by making the absence of a provenance badge a clear signal of potential misinformation. They turned the Trust Imperative into a competitive advantage.

Case Study 2: Securing Enterprise IP in the Age of Co-Pilots

The Challenge: A Fortune 500 manufacturing firm found that its proprietary design schematics were being leaked through “Shadow AI” — employees using unauthorized generative tools to optimize parts. The company couldn’t tell which designs were protected “useful seeds of invention” and which were tainted by external AI data sets.

The Innovation: They deployed an internal Generative Watermarking system. Every output from authorized corporate AI agents was embedded with an invisible, robust watermark. This watermark tracked the specific human prompter, the model version, and the internal data sources used.

The Impact: The company successfully reclaimed its IP posture. By making the origin of every design verifiable, they reduced legal risk and empowered their engineers to use AI safely, fostering a culture of Human-AI Teaming rather than fear-based restriction.
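The “invisible, robust watermark” described above can be demystified with a toy example. The sketch below hides a JSON payload (prompter, model version, data sources) in the least-significant bits of an image’s red channel using Pillow. This naive LSB scheme is for intuition only: it does not survive compression, cropping, or re-encoding, which is precisely the robustness that commercial systems from vendors like Digimarc or Steg.AI are engineered to provide. Every payload field here is a hypothetical illustration, not the firm’s actual schema.

```python
# Toy invisible watermark: hide a UTF-8 JSON payload in the least-significant
# bit of each pixel's red channel. Intuition only -- NOT robust to compression,
# cropping, or re-encoding, unlike production watermarking systems.
import json
from PIL import Image


def embed(img: Image.Image, payload: dict) -> Image.Image:
    data = json.dumps(payload).encode() + b"\x00"   # NUL byte terminates payload
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    out = img.convert("RGB")
    px = out.load()
    w, h = out.size
    if len(bits) > w * h:
        raise ValueError("payload too large for this image")
    for n, bit in enumerate(bits):
        x, y = n % w, n // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)           # overwrite the red LSB
    return out


def extract(img: Image.Image) -> dict:
    px = img.convert("RGB").load()
    w, h = img.size
    data, byte, nbits = bytearray(), 0, 0
    for n in range(w * h):
        byte |= (px[n % w, n // w][0] & 1) << nbits
        nbits += 1
        if nbits == 8:
            if byte == 0:                           # hit the terminator
                break
            data.append(byte)
            byte, nbits = 0, 0
    return json.loads(data.decode())                # fails on unmarked images


# Hypothetical provenance payload, as described in the case study.
mark = {"prompter": "j.doe", "model": "corp-gen v3.1", "sources": ["internal-cad"]}
stamped = embed(Image.new("RGB", (128, 128)), mark)
assert extract(stamped) == mark
```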

Leading Companies and Startups to Watch

As we navigate 2026, the landscape of provenance is being defined by a few key players. Adobe remains a titan in this space with their Content Authenticity Initiative, which has successfully pushed the C2PA standard into the mainstream. Digimarc has emerged as a leader in “stealth” watermarking that survives compression and cropping. In the startup ecosystem, Steg.AI is doing revolutionary work with deep-learning-based watermarks that are invisible to the eye but highly resistant to algorithmic removal. Truepic is the one to watch for “controlled capture,” ensuring the veracity of photos from the moment the shutter clicks. Lastly, Microsoft and Google have integrated these “digital nutrition labels” across their enterprise suites, making provenance a default setting rather than an optional add-on.

Conclusion: The Architecture of Truth

To lead innovation in 2026, you must be more than a creator; you must be a verifier. We cannot allow the “useful seeds of invention” to be choked out by the weeds of synthetic deception. By embracing generative watermarking and provenance, we aren’t just protecting data; we are protecting the human connection that makes change with impact possible.

If you are looking for an innovation speaker to help your organization solve the Trust Imperative and navigate Human-Centered Innovation™, I suggest you look no further than Braden Kelley. The future belongs to those who can prove they are part of it.

Frequently Asked Questions

What is the difference between watermarking and provenance?

Watermarking is a technique to embed information (visible or invisible) directly into content to identify its source. Provenance is the broader history or “chain of custody” of a piece of media, often recorded in metadata or a ledger, showing every change made from creation to consumption.

Can AI-generated watermarks be removed?

While no system is 100% foolproof, modern watermarking from companies like Steg.AI or Digimarc is designed to be highly “robust,” meaning it survives editing, screenshots, and even re-recording. Provenance standards like C2PA use cryptography to ensure that if the data is tampered with, the “broken seal” is immediately apparent.

Why does Braden Kelley call trust a “competitive advantage”?

In a market flooded with low-quality or deceptive content, “Trust” becomes a premium. Organizations that can prove their content is authentic and their AI is transparent will attract higher-quality talent and more loyal customers, effectively bypassing the friction of skepticism that slows down their competitors.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini


Just Because You Can Use AI Doesn’t Mean You Should


GUEST POST from Shep Hyken

I’m often asked, “What should AI be used for?” While there is much that AI can do to support businesses in general, it’s obvious that I’m being asked how it relates to customer service and customer experience (CX). The true meaning of the question is more about what tasks AI can do to support a customer, thereby potentially eliminating the need for a live agent who deals directly with customers.

First, as the title of this article implies, just because AI can do something, it doesn’t mean it should. Yes, AI can handle many customer support issues, but even if every customer were willing to accept that AI can deliver good support, there are some sensitive and complicated issues for which customers would prefer to talk to a human.

AI Shep Hyken Cartoon

Additionally, consider that, based on my annual customer experience research, 68% of customers (that’s almost seven out of 10) prefer the phone as their primary means of communication with a company or brand. However, another finding in the report is worth mentioning: 34% of customers stopped doing business with a company because self-service options were not provided. Some customers insist on the self-service option, but at the same time, they want to be transferred to a live agent when appropriate.

AI works well for simple issues, such as password resets, tracking orders, appointment scheduling and answering basic or frequently asked questions. Humans are better suited for handling complaints and issues that need empathy, complex problem-solving situations that require judgment calls and communicating bad news.

An AI-fueled chatbot can answer many questions, but when a medical patient contacts the doctor’s office about test results related to a serious issue, they will likely want to speak with a nurse or doctor, not a chatbot.

Consider These Questions Before Implementing AI For Customer Interactions

AI for addressing simple customer issues has become affordable for even the smallest businesses, and an increasing number of customers are willing to use AI-powered customer support for the right reasons. Consider these questions before implementing AI for customer interactions:

  1. Is the customer’s question routine or fact-based?
  2. Does it require empathy, emotion, understanding and/or judgment (emotional intelligence)?
  3. Could the wrong answer cause a problem or frustrate the customer?
  4. As you think about the reasons customers call, which ones would they feel comfortable having AI handle?
  5. Do you have an easy, seamless way for the customer to be transferred to a human when needed?

The point is that no matter how capable the technology is, it may not be best suited to deliver what the customer wants. Live agents can “read the customer” and know how to effectively communicate and empathize with them. AI can’t do that … yet. The key isn’t choosing between AI and humans. It’s knowing when to use each one.
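To make the five questions actionable, here is a minimal Python sketch of the resulting “AI or human?” triage. The intent categories and the sentiment threshold are hypothetical illustrations, not figures from the research cited above.

```python
# Toy "AI or human?" router implied by the five questions above.
# Intent categories and the sentiment threshold are hypothetical.
ROUTINE = {"password_reset", "order_tracking", "scheduling", "faq"}
SENSITIVE = {"complaint", "bad_news", "medical_result", "billing_dispute"}


def route(intent: str, sentiment: float) -> str:
    """Return 'ai' or 'human'. sentiment runs from -1.0 (upset) to 1.0 (happy)."""
    if intent in SENSITIVE or sentiment < -0.5:
        return "human"   # empathy, judgment, or bad news: escalate immediately
    if intent in ROUTINE:
        return "ai"      # routine and fact-based: safe to automate
    return "human"       # when unsure, default to a person (question 5: make
                         # the transfer easy and seamless)


assert route("order_tracking", 0.2) == "ai"
assert route("complaint", 0.0) == "human"
assert route("faq", -0.8) == "human"   # an angry customer gets a human, even for FAQs
```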

Image credits: Google Gemini, Shep Hyken


Top 100 Innovation and Transformation Articles of 2025


2021 marked the re-birth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being re-branded as Innovation Excellence and being ultimately sold to DisruptorLeague.com.

Thanks to an outpouring of support I’ve ignited the fuse of this new multiple author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate we’ve pulled together the Top 100 Innovation and Transformation Articles of 2025 from our archive of over 3,200 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Authors of 2025 and as the volume of this blog has grown we have brought back our monthly article ranking to complement this annual one.

But enough delay, here are the 100 most popular innovation and transformation posts of 2025.

Did your favorite make the cut?

1. A Toolbox for High-Performance Teams – Building, Leading and Scaling – by Stefan Lindegaard

2. Top 10 American Innovations of All Time – by Art Inteligencia

3. The Education Business Model Canvas – by Arlen Meyers, M.D.

4. What is Human-Centered Change? – by Braden Kelley

5. How Netflix Built a Culture of Innovation – by Art Inteligencia

6. McKinsey is Wrong That 80% Companies Fail to Generate AI ROI – by Robyn Bolton

7. The Great American Contraction – by Art Inteligencia

8. A Case Study on High Performance Teams – New Zealand’s All Blacks – by Stefan Lindegaard

9. Act Like an Owner – Revisited! – by Shep Hyken

10. Should a Bad Grade in Organic Chemistry be a Doctor Killer? – by Arlen Meyers, M.D.

11. Charting Change – by Braden Kelley

12. Human-Centered Change – by Braden Kelley

13. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

14. SpaceX is a Masterclass in Innovation Simplification – by Pete Foley

15. Top 5 Future Studies Programs – by Art Inteligencia

16. Marriott’s Approach to Customer Service – by Shep Hyken

17. The Role of Stakeholder Analysis in Change Management – by Art Inteligencia

18. The Triple Bottom Line Framework – by Dainora Jociute

19. The Nordic Way of Leadership in Business – by Stefan Lindegaard

20. Nine Innovation Roles – by Braden Kelley

21. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

22. Designing an Innovation Lab: A Step-by-Step Guide – by Art Inteligencia

23. FutureHacking™ – by Braden Kelley

24. The 6 Building Blocks of Great Teams – by David Burkus

25. Overcoming Resistance to Change – Embracing Innovation at Every Level – by Chateau G Pato

26. Human-Centered Change – Free Downloads – by Braden Kelley

27. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

28. Quote Posters – Curated by Braden Kelley

29. Stoking Your Innovation Bonfire – by Braden Kelley

30. Innovation or Not – Kawasaki Corleo – by Art Inteligencia


31. Top Six Trends for Innovation Management in 2025 – by Jesse Nieminen

32. Fear is a Leading Indicator of Personal Growth – by Mike Shipulski

33. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

34. The Most Challenging Obstacles to Achieving Artificial General Intelligence – by Art Inteligencia

35. The Ultimate Guide to the Phase-Gate Process – by Dainora Jociute

36. Case Studies in Human-Centered Design – by Art Inteligencia

37. Transforming Leadership to Reshape the Future of Innovation – Exclusive Interview with Brian Solis

38. Leadership Best Quacktices from Oregon’s Dan Lanning – by Braden Kelley

39. This AI Creativity Trap is Gutting Your Growth – by Robyn Bolton

40. A 90% Project Failure Rate Means You’re Doing it Wrong – by Mike Shipulski

41. Reversible versus Irreversible Decisions – by Farnham Street

42. Next Generation Leadership Traits and Characteristics – by Stefan Lindegaard

43. Top 40 Innovation Bloggers of 2024 – Curated by Braden Kelley

44. Benchmarking Innovation Performance – by Noel Sobelman

45. Three Executive Decisions for Strategic Foresight Success or Failure – by Robyn Bolton

46. Back to Basics for Leaders and Managers – by Robyn Bolton

47. You Already Have Too Many Ideas – by Mike Shipulski

48. Imagination versus Knowledge – Is imagination really more important? – by Janet Sernack

49. Building a Better Change Communication Plan – by Braden Kelley

50. 10 Free Human-Centered Change™ Tools – by Braden Kelley


51. Why Business Transformations Fail – by Robyn Bolton

52. Overcoming the Fear of Innovation Failure – by Stefan Lindegaard

53. What is the difference between signals and trends? – by Art Inteligencia

54. Unintended Consequences. The Hidden Risk of Fast-Paced Innovation – by Pete Foley

55. Giving Your Team a Sense of Shared Purpose – by David Burkus

56. The Top 10 Irish Innovators Who Shaped the World – by Art Inteligencia

57. The Role of Emotional Intelligence in Effective Change Leadership – by Art Inteligencia

58. Is OpenAI About to Go Bankrupt? – by Art Inteligencia

59. Sprint Toward the Innovation Action – by Mike Shipulski

60. Innovation Management ISO 56000 Series Explained – by Diana Porumboiu

61. How to Make Navigating Ambiguity a Super Power – by Robyn Bolton

62. 3 Secret Saboteurs of Strategic Foresight – by Robyn Bolton

63. Four Major Shifts Driving the 21st Century – by Greg Satell

64. Problems vs. Solutions vs. Complaints – by Mike Shipulski

65. The Power of Position Innovation – by John Bessant

66. Three Ways Strategic Idleness Accelerates Innovation and Growth – by Robyn Bolton

67. Case Studies of Companies Leading in Inclusive Design – by Chateau G Pato

68. Recognizing and Celebrating Small Wins in the Change Process – by Chateau G Pato

69. Parallels Between the 1920’s and Today Are Frightening – by Greg Satell

70. The Art of Adaptability: How to Respond to Changing Market Conditions – by Art Inteligencia

71. Do you have a fixed or growth mindset? – by Stefan Lindegaard

72. Making People Matter in AI Era – by Janet Sernack

73. The Role of Prototyping in Human-Centered Design – by Art Inteligencia

74. Turning Bold Ideas into Tangible Results – by Robyn Bolton

75. Yes the Comfort Zone Can Be Your Best Friend – by Stefan Lindegaard

76. Increasing Organizational Agility – by Braden Kelley

77. Innovation is Dead. Now What? – by Robyn Bolton

78. Four Reasons Change Resistance Exists – by Greg Satell

79. Eight I’s of Infinite Innovation – Revisited – by Braden Kelley

80. Difference Between Possible, Potential and Preferred Futures – by Art Inteligencia


81. Resistance to Innovation – What if electric cars came first? – by Dennis Stauffer

82. Science Says You Shouldn’t Waste Too Much Time Trying to Convince People – by Greg Satell

83. Why Context Engineering is the Next Frontier in AI – by Braden Kelley and Art Inteligencia

84. How to Write a Failure Resume – by Arlen Meyers, M.D.

85. The Five Keys to Successful Change – by Braden Kelley

86. Four Forms of Team Motivation – by David Burkus

87. Why Revolutions Fail – by Greg Satell

88. Top 40 Innovation Bloggers of 2023 – Curated by Braden Kelley

89. The Entrepreneurial Mindset – by Arlen Meyers, M.D.

90. Six Reasons Norway is a Leader in High-Performance Teamwork – by Stefan Lindegaard

90. Top 100 Innovation and Transformation Articles of 2024 – Curated by Braden Kelley

91. The Worst British Customer Experiences of 2024 – by Braden Kelley

92. Human-Centered Change & Innovation White Papers – by Braden Kelley

93. Encouraging a Growth Mindset During Times of Organizational Change – by Chateau G Pato

94. Inside the Mind of Jeff Bezos – by Braden Kelley

95. Learning from the Failure of Quibi – by Greg Satell

96. Dare to Think Differently – by Janet Sernack

97. The End of the Digital Revolution – by Greg Satell

98. Your Guidebook to Leading Human-Centered Change – by Braden Kelley

99. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

100. Trust as a Competitive Advantage – by Greg Satell

Curious which article just missed the cut? Well, here it is just for fun:

101. Building Cross-Functional Collaboration for Breakthrough Innovations – by Chateau G Pato

These are the Top 100 innovation and transformation articles of 2025 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2025.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all the innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.


Outcome-Driven Innovation in the Age of Agentic AI

The North Star Shift

LAST UPDATED: January 5, 2026 at 5:29 PM


by Braden Kelley

In a world of accelerating change, the rhetoric around Artificial Intelligence often centers on its incredible capacity for optimization. We hear about AI designing new materials, orchestrating complex logistics, and even writing entire software applications. This year, the technology has truly matured into agentic AI, capable of pursuing and achieving defined objectives with unprecedented autonomy. But as a specialist in Human-Centered Innovation™ (which pairs well with Outcome-Driven Innovation), I pose two crucial questions: Who is defining these outcomes, and what impact do they truly have on the human experience?

The real innovation of 2026 lies not just in AI’s ability to optimize against defined outcomes, but in the fact that we, as leaders, finally have the imperative — and the tools — to master Outcome-Driven Innovation and Outcome-Driven Change. If innovation is change with impact, then our impact is only as profound as the outcomes we choose to pursue. Without thoughtful, human-centered specifications, AI simply becomes the most efficient way to achieve the wrong goals, leading us directly into the Efficiency Trap. This is where organizations must overcome the Corporate Antibody response that resists fundamental shifts in how we measure success.

Revisiting and Applying Outcome-Driven Change in the Age of Agentic AI

As we integrate agentic AI into our organizations, the principles of Outcome-Driven Change (ODC) I first introduced in 2018 are more vital than ever. The core of the ODC framework rests on the alignment of three critical domains: Cognitive (Thinking), Affective (Feeling), and Conative (Doing). Today, AI agents are increasingly assuming the “conative” role, executing tasks and optimizing workflows at superhuman speeds. However, as I have always maintained, true success only arrives when what is being done is in harmony with what the people in the organization and customer base think and feel.

Outcome-Driven Change Framework

If an AI agent’s autonomous actions are misaligned with human psychological readiness or emotional context, it will trigger a Corporate Antibody response that kills innovation. To practice genuine Human-Centered Change™, we must ensure that AI agents are directed to pursue outcomes that are not just numerically efficient, but humanly resonant. When an AI’s “doing” matches the collective thinking and feeling of the workforce, we move beyond the Efficiency Trap and create lasting change with impact.

“In the age of agentic AI, the true scarcity is not computational power; it is the human wisdom to define the right ‘North Star’ outcomes. An AI optimizing for the wrong goal is a digital express train headed in the wrong direction – efficient, but ultimately destructive.” — Braden Kelley

From Feature-Building to Outcome-Harvesting

For decades, many organizations have been stuck in a cycle of “feature-building.” Product teams were rewarded for shipping more features, marketing for launching more campaigns, and R&D for creating more patents. The focus was on output, not ultimate impact. Outcome-Driven Innovation shifts this paradigm. It forces us to ask: What human or business value are we trying to create? What measurable change in behavior or well-being are we seeking?

Agentic AI, when properly directed, becomes an unparalleled accelerant for this shift. Instead of building a new feature and hoping it works, we can now tell an AI agent, “Achieve Outcome X for Persona Y, within Constraints Z,” and it will explore millions of pathways to get there. This frees human teams from the tactical churn and allows them to focus on the truly strategic work: deeply understanding customer needs, identifying ethical guardrails, and defining aspirational outcomes that genuinely drive Human-Centered Innovation™.
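As a sketch of what “Achieve Outcome X for Persona Y, within Constraints Z” could look like as a machine-checkable specification, consider the Python sketch below. Every name in it is a hypothetical illustration of the pattern, not any vendor’s actual agent API; the example values encode the circular-economy outcome from Case Study 1 below.

```python
# Hypothetical outcome specification an agentic system could optimize against.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class OutcomeSpec:
    outcome: str                           # the human-defined "North Star"
    persona: str                           # who the outcome must serve
    metric: Callable[[dict], float]        # measures progress from a state
    target: float                          # value at which the outcome is met
    guardrails: list[Callable[[dict], bool]] = field(default_factory=list)


def acceptable(spec: OutcomeSpec, projected: dict) -> bool:
    """An agent may only act when the projected state passes every guardrail."""
    return all(check(projected) for check in spec.guardrails)


def achieved(spec: OutcomeSpec, state: dict) -> bool:
    return spec.metric(state) >= spec.target


# Example: the circular-economy outcome from Case Study 1, encoded as a spec.
spec = OutcomeSpec(
    outcome="50% reduction in virgin material usage by 2028",
    persona="sustainability-conscious electronics buyer",
    metric=lambda s: s["virgin_material_reduction_pct"],
    target=50.0,
    guardrails=[
        lambda s: s["gross_margin_pct"] >= 30.0,   # maintain profitability
        lambda s: s["defect_rate_pct"] <= 0.5,     # maintain product quality
    ],
)
assert acceptable(spec, {"gross_margin_pct": 32.0, "defect_rate_pct": 0.3,
                         "virgin_material_reduction_pct": 35.0})
```

The design point is that the outcome, the guardrails, and the persona are authored by humans; the agent only searches the space of actions that keeps every guardrail satisfied.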

Case Study 1: Sustainable Manufacturing and the “Circular Economy” Outcome

The Challenge: A major electronics manufacturer in early 2025 aimed to reduce its carbon footprint but struggled with the complexity of optimizing its global supply chain, product design, and end-of-life recycling simultaneously. Traditional methods led to incremental, siloed improvements.

The Outcome-Driven Approach: They defined a bold outcome: “Achieve a 50% reduction in virgin material usage across all product lines by 2028, while maintaining profitability and product quality.” They then deployed an agentic AI system to explore new material combinations, reverse logistics networks, and redesign possibilities. This AI was explicitly optimized to achieve the circular economy outcome.

The Impact: The AI identified design changes that led to a 35% reduction in material waste within 18 months, far exceeding human predictions. It also found pathways to integrate recycled content into new products without compromising durability. The organization moved from a reactive “greenwashing” approach to proactive, systemic innovation driven by a clear, human-centric environmental outcome.

Case Study 2: Personalized Education and “Mastery Outcomes”

The Challenge: A national education system faced stagnating literacy rates, despite massive investments in new curricula. The focus was on “covering material” rather than ensuring true student understanding and application.

The Outcome-Driven Approach: They shifted their objective to “Ensure 90% of students achieve demonstrable mastery of core literacy skills by age 10.” An AI tutoring system was developed, designed to optimize for individual student mastery outcomes, rather than just quiz scores. The AI dynamically adapted learning paths, identified specific knowledge gaps, and even generated custom exercises based on each child’s learning style.

The Impact: Within two years, participating schools saw a 25% improvement in mastery rates. The AI became a powerful co-pilot for teachers, freeing them from repetitive grading and allowing them to focus on high-touch mentorship. This demonstrated how AI, directed by human-defined learning outcomes, can empower both educators and students, moving beyond the Efficiency Trap of standardized testing.

Leading Companies and Startups to Watch

As 2026 solidifies Outcome-Driven Innovation, several entities are paving the way. Amplitude and Pendo are evolving their product analytics to connect feature usage directly to customer outcomes. In the AI space, Anthropic‘s work on “Constitutional AI” is fascinating, as it seeks to embed human-defined ethical outcomes directly into the AI’s decision-making. Glean and Perplexity AI are creating agentic knowledge systems that help organizations define and track complex outcomes across their internal data. Startups like Metaculus are even democratizing the prediction of outcomes, allowing collective intelligence to forecast the impact of potential innovations, providing invaluable insights for human decision-makers. These players are all contributing to the core goal: helping humans define the right problems for AI to solve.

Conclusion: The Human Art of Defining the Future

The year 2026 is a pivotal moment. Agentic AI gives us unprecedented power to optimize, but with great power comes great responsibility — the responsibility to define truly meaningful outcomes. This is not a technical challenge; it is a human one. It requires deep empathy, strategic foresight, and the courage to challenge old metrics. It demands leaders who understand that the most impactful Human-Centered Innovation™ starts with a clear, ethically grounded North Star.

If you’re an innovation leader trying to navigate this future, remember: the future is not about what AI can do, but about what outcomes we, as humans, choose to pursue with it. Let’s make sure those outcomes serve humanity first.

Frequently Asked Questions

What is “Outcome-Driven Innovation”?

Outcome-Driven Innovation (ODI) is a strategic approach that focuses on defining and achieving specific, measurable human or business outcomes, rather than simply creating new features or products. AI then optimizes for these defined outcomes.

How does agentic AI change the role of human leaders in ODI?

Agentic AI frees human leaders from tactical execution and micro-management, allowing them to focus on the higher-level strategic work of identifying critical problems, understanding human needs, and defining the ethical, impactful outcomes for AI to pursue.

What is the “Efficiency Trap” in the context of AI and outcomes?

The Efficiency Trap occurs when AI is used to optimize for speed or cost without first ensuring that the underlying outcome is meaningful and human-centered. This can lead to highly efficient processes that achieve undesirable or even harmful results, ultimately undermining trust and innovation.

Image credits: Braden Kelley, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.


Top 10 Human-Centered Change & Innovation Articles of December 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are December’s ten most popular innovation posts:

  1. Is OpenAI About to Go Bankrupt? — by Chateau G Pato
  2. The Rise of Human-AI Teaming Platforms — by Art Inteligencia
  3. 11 Reasons Why Teams Struggle to Collaborate — by Stefan Lindegaard
  4. How Knowledge Emerges — by Geoffrey Moore
  5. Getting the Most Out of Quiet Employees in Meetings — by David Burkus
  6. The Wood-Fired Automobile — by Art Inteligencia
  7. Was Your AI Strategy Developed by the Underpants Gnomes? — by Robyn Bolton
  8. Will our opinion still really be our own in an AI Future? — by Pete Foley
  9. Three Reasons Change Efforts Fail — by Greg Satell
  10. Do You Have the Courage to Speak Up Against Conformity? — by Mike Shipulski


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.



What Are We Going to Do Now with GenAI?


GUEST POST from Geoffrey A. Moore

In 2023 we simply could not stop talking about Generative AI. But in 2024 the question for each enterprise — and this includes yours as well — became (and remains today): What are we going to do about it? Tough questions call for tough frameworks, so let’s run this one through the Hierarchy of Powers to see if it can shine some light on what might be your company’s best bet.

Category Power

GenAI can have an impact anywhere in the Category Maturity Life Cycle, but the way it does so differs depending on where your category is, as follows:

  • Early Market. GenAI will almost certainly be a differentiating ingredient that is enabling a disruptive innovation, and you need to be on the bleeding edge. Think ChatGPT.
  • Crossing the chasm. Nailing your target use case is your sole priority, so you would use GenAI if, and only if, it helped you do so, and avoid getting distracted by its other bells and whistles. Think Khan Academy at the school district level.
  • Inside the tornado. Grabbing as much market share as you can is now the game to play, and GenAI-enabled features can help you do so provided they are fully integrated (no “some assembly required”). You cannot afford to slow your adoption down just at the time it needs to be at full speed. Think Microsoft CoPilot.
  • Growth Main Street (category still growing double digits). Market share boundaries are settling in, so the goal now is to grow your patch as fast as you can, solidifying your position and taking as much share as you can from the also-rans. Adding GenAI to the core product can provide a real boost as long as the disruption is minimal. Think Salesforce CRM.
  • Mature Main Street (category stabilized, single-digit growth). You are now marketing primarily to your installed base, secondarily seeking to pick up new logos as they come into play. GenAI can give you a midlife kicker provided you can use it to generate meaningful productivity gains. Think Adobe Photoshop.
  • Late Main Street (category declining, negative growth). The category has never been more profitable, so you are looking to extend its life in as low-cost a way as you can. GenAI can introduce innovative applications that otherwise would never occur to your end users. Think HP home printing.

Company Power

There are two dimensions of company power to consider when analyzing the ROI from a GenAI investment, as follows:

  • Market Share Status. Are you the market share leader, a challenger, or simply a participant? As a challenger, you can use GenAI to disrupt the market pecking order provided you differentiate in a way that is challenging for the leader to copy. On the other hand, as a leader, you can use GenAI to neutralize the innovations coming from challengers provided you can get it to market fast enough to keep the ecosystem in your camp. As a participant, you would add GenAI only if it were your single point of differentiation (as a low-share participant, your R&D budget cannot fund more than one).
  • Default Operating Model. Is your core business better served by the complex systems operating model (typical for B2B companies with hundreds to thousands of large enterprises for customers) or the volume operations operating model (typical for B2C companies with hundreds of thousands to millions of consumers)? The complex systems model has sufficient margins to invest in professional services across the entire ownership life cycle, from design consulting to installation to expansion. You are going to need deep in-house expertise to win big in this game. By contrast, GenAI deployed via the volume operations model has to work out-of-the-box. Consumers have neither the courage nor the patience to work through any disconnects.

Market Power

Whereas category share leaders benefit most from going broad, market segment leaders win big by going deep. The key tactic is to overdo it on the use cases that mean the most to your target customers, taking your offer beyond anything reasonable for a category leader to copy. GenAI can certainly be a part of this approach, as the two slides below illustrate:

Market Segmentation for Complex Systems

In the complex systems operating model, GenAI should accentuate the differentiation of your whole product, the complete solution to whatever problem you are targeting. That might mean, for example, taking your Large Language Model to a level of specificity that would normally not be warranted. This sets you apart from the incumbent vendor who has nothing like what you offer as well as from other technology vendors who have not embraced your target segment’s specific concerns. Think CrowdStrike’s Charlotte AI for cybersecurity analysis.

Market Segmentation for Volume Operations

In the volume operations operating model, GenAI should accentuate the differentiation of your brand promise by overdelivering on the relevant value discipline. Once again, it is critical not to get distracted by shiny objects—you want to differentiate in one quadrant only, although you can use GenAI in the other three for neutralization purposes. For Performance, think knowledge discovery. For Productivity, think writing letters. For Economy, think tutoring. For Convenience, think gift suggestions.

Offer Power

Everybody wants to “be innovative,” but it is worth stepping back a moment to ask, how do we get a Return on Innovation? Compared to its financial cousin, this kind of ROI is more of a leading indicator and thus of more strategic value. Basically, it comes in three forms:

  1. Differentiation. This creates customer preference, the goal being not just to be different but to create a clear separation from the competition, one that they cannot easily emulate. Think OpenAI.
  2. Neutralization. This closes the gap between you and a competitor who is taking market share away from you, the goal being to get to “good enough, fast enough,” thereby allowing your installed base to stay loyal. Think Google Bard.
  3. Optimization. This reduces the cost while maintaining performance, the goal being to expand the total available market. Think Edge GenAI on PCs and Macs.

For most of us, GenAI will be an added ingredient rather than a core product, which makes the ROI question even more important. The easiest way to waste innovation dollars is to spend them on differentiation that does not go far enough, neutralization that does not go fast enough, or optimization that does not go deep enough. So, the key lesson here is, pick one and only one as your ROI goal, and then go all in to get a positive return.

Execution Power

How best to incorporate GenAI into your existing enterprise depends on which zone of operations you are looking to enhance, as illustrated by the zone management framework below:

Zone Management Framework

If you are unsure exactly what to do, assign the effort to the Incubation Zone and put that team on the clock to come up with a good answer as fast as possible. If you can incorporate it directly into your core business’s offerings at relatively low risk, by all means, do so as it is the current hot ticket, and assign it to the Performance Zone. If there is not a good fit, consider using it internally instead to improve your own productivity, assigning it to the Productivity Zone. Finally, although it is awfully early days for this, if you are convinced it is an absolutely essential ingredient in a big bet you feel compelled to make, then assign it to the Transformation Zone and go all in. Again, the overall point is to manage your investment in GenAI out of one zone and only one zone, as the success metrics for each zone are incompatible with those of the other three.

One final point. Embracing anything as novel as GenAI has to feel risky. I submit, however, that in 2025 not building upon meaningful GenAI action taken in 2024 is even more so.

That’s what I think. What do you think?

Image Credit: Pexels


Can AI Replace the CEO?

A Day in the Life of the Algorithmic Executive

LAST UPDATED: December 28, 2025 at 1:56 PM


GUEST POST from Art Inteligencia

We are entering an era where the corporate antibody – that natural organizational resistance to disruptive change – is meeting its most formidable challenger yet: the AI CEO. For years, we have discussed the automation of the factory floor and the back office. But what happens when the “useful seeds of invention” are planted in the corner office?

The suggestion that an algorithm could lead a company often triggers an immediate emotional response. Critics argue that leadership requires soul, while proponents point to the staggering inefficiencies, biases, and ego-driven errors that plague human executives. As an advocate for Innovation = Change with Impact, I believe we must look beyond the novelty and analyze the strategic logic of algorithmic leadership.

“Leadership is not merely a collection of decisions; it is the orchestration of human energy toward a shared purpose. An AI can optimize the notes, but it cannot yet compose the symphony or inspire the orchestra to play with passion.”

Braden Kelley

The Efficiency Play: Data Without Drama

The argument for an AI CEO rests on the pursuit of Truly Actionable Data. Humans are limited by cognitive load, sleep requirements, and emotional variance. An AI executive, by contrast, operates in Future Present mode — constantly processing global market shifts, supply chain micro-fluctuations, and internal sentiment analysis in real-time. It doesn’t have a “bad day,” and it doesn’t make decisions based on who it had lunch with.

Case Study 1: NetDragon Websoft and the “Tang Yu” Experiment

The Experiment: A Virtual CEO in a Gaming Giant

In 2022, NetDragon Websoft, a major Chinese gaming and mobile app company, appointed an AI-powered humanoid robot named Tang Yu as the Rotating CEO of its subsidiary. This wasn’t just a marketing stunt; it was a structural integration into the management flow.

The Results

Tang Yu was tasked with streamlining workflows, improving the quality of work tasks, and enhancing the speed of execution. Over the following year, the company reported that Tang Yu helped the subsidiary outperform the broader Hong Kong stock market. Tang Yu also served as a real-time data hub, and the AI’s signature was required for document approvals and risk assessments. It proved that in data-rich environments where speed of iteration is the primary competitive advantage, an algorithmic leader can significantly reduce operational friction.

Case Study 2: Dictador’s “Mika” and Brand Stewardship

The Challenge: The Face of Innovation

Dictador, a luxury rum producer, took the concept a step further by appointing Mika, a sophisticated female humanoid robot, as their CEO. Unlike Tang Yu, who worked mostly within internal systems, Mika serves as a public-facing brand steward and high-level decision-maker for their DAO (Decentralized Autonomous Organization) projects.

The Insight

Mika’s role highlights a different facet of leadership: Strategic Pattern Recognition. Mika analyzes consumer behavior and market trends to select artists for bottle designs and lead complex blockchain-based initiatives. While Mika lacks human empathy, the company uses her to demonstrate unbiased precision. However, it also exposes the human-AI gap: while Mika can optimize a product launch, she cannot yet navigate the nuanced political and emotional complexities of a global pandemic or a social crisis with the same grace as a seasoned human leader.

Leading Companies and Startups to Watch

The space is rapidly maturing beyond experimental robot figures. Quantive (with StrategyAI) is building the “operating system” for the modern CEO, connecting KPIs to real-work execution. Microsoft is positioning its Copilot ecosystem to act as a “Chief of Staff” to every executive, effectively automating the data-gathering and synthesis parts of the role. Watch startups like Tessl and Vapi, which are focusing on “Agentic AI” — systems that don’t just recommend decisions but have the autonomy to execute them across disparate platforms.

The Verdict: The Hybrid Future

Will AI replace the CEO? My answer is: not the great ones. AI will certainly replace the transactional CEO — the executive whose primary function is to crunch numbers, approve budgets, and monitor performance. These tasks are ripe for automation because they represent 19th-century management techniques.

However, the transformational CEO — the one who builds culture, navigates ethical gray areas, and creates a sense of belonging — will find that AI is their greatest ally. We must move from fearing replacement to mastering Human-AI Teaming. The CEOs of 2030 will be those who use AI to handle the complexity of the business so they can focus on the humanity of the organization.

Frequently Asked Questions

Can an AI legally serve as a CEO?

Currently, most corporate law jurisdictions require a natural person to serve as a director or officer for liability and accountability reasons. AI “CEOs” like Tang Yu or Mika often operate under the legal umbrella of a human board or chairman who retains ultimate responsibility.

What are the biggest risks of an AI CEO?

The primary risks include Algorithmic Bias (reinforcing historical prejudices found in the data), Lack of Crisis Adaptability (AI struggles with “Black Swan” events that have no historical precedent), and the Loss of Employee Trust if leadership feels cold and disconnected.

How should current CEOs prepare for AI leadership?

Leaders must focus on “Up-skilling for Empathy.” They should delegate data-heavy reporting to AI systems and re-invest that time into Culture Architecture and Change Management. The goal is to become an expert at Orchestrating Intelligence — both human and synthetic.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini


AI Stands for Accidental Innovation

LAST UPDATED: December 29, 2025 at 12:49 PM


GUEST POST from Art Inteligencia

In the world of corporate strategy, we love to manufacture myths of inevitable visionary genius. We look at the behemoths of today and assume their current dominance was etched in stone a decade ago by a leader who could see through the fog of time. But as someone who has spent a career studying Human-Centered Innovation and the mechanics of innovation, I can tell you that the reality is often much messier. And this is no different when it comes to artificial intelligence (AI), so much so that it could be said that AI stands for Accidental Innovation.

Take, for instance, the meteoric rise of Nvidia. Today, they are the undisputed architects of the intelligence age, a company whose hardware powers the Large Language Models (LLMs) reshaping our world. Yet, if we pull back the curtain, we find a story of survival, near-acquisitions, and a heavy dose of serendipity. Nvidia didn’t build their current empire because they predicted the exact nuances of the generative AI explosion; they built it because they were lucky enough to have developed technology for a completely different purpose that happened to be the perfect fuel for the AI fire.

“True innovation is rarely a straight line drawn by a visionary; it is more often a resilient platform that survives its original intent long enough to meet a future it didn’t expect.”

Braden Kelley

The Parallel Universe: The Meta/Oculus Near-Miss

It is difficult to imagine now, but there was a point in the Future Present where Nvidia was seen as a vulnerable hardware player. In the mid-2010s, as the Virtual Reality (VR) hype began to peak, Nvidia’s focus was heavily tethered to the gaming market. Internal histories and industry whispers suggest that the Oculus division of Meta (then Facebook) explored the idea of acquiring or deeply merging with Nvidia’s core graphics capabilities to secure their own hardware vertical.

At the time, Nvidia’s valuation was a fraction of what it is today. Had that acquisition occurred, the “Corporate Antibodies” of a social media giant would likely have stifled the very modularity that makes Nvidia great today. Instead of becoming the generic compute engine for the world, Nvidia might have been optimized—and narrowed—into a specialized silicon shop for VR headsets. It was a sliding doors moment for the entire tech industry. By not being acquired, Nvidia maintained the autonomy to follow the scent of demand wherever it led next.

Case Study 1: The Meta/Oculus Intersection

Before the “Magnificent Seven” era, Nvidia was struggling to find its next big act beyond PC gaming. When Meta acquired Oculus, there was a desperate need for low-latency, high-performance GPUs to make VR viable. The relationship between the two companies was so symbiotic that some analysts argued a vertical integration was the only logical step. Had Mark Zuckerberg moved more aggressively to bring Nvidia under the Meta umbrella, the GPU might have become a proprietary tool for the Metaverse. Because this deal failed to materialize, Nvidia remained an open ecosystem, allowing researchers at Google and OpenAI to eventually use that same hardware for a little thing called a Transformer model.

The Crypto Catalyst: A Fortuitous Detour

The second major “accident” in Nvidia’s journey was the Cryptocurrency boom. For years, Nvidia’s stock and production cycles were whipped around by the price of Ethereum. To the outside world, this looked like a distraction—a volatile market that Nvidia was chasing to satisfy shareholders. However, the crypto miners demanded exactly what AI would later require: massive, parallel processing power and specialized chips (ASICs and high-end GPUs) that could perform simple calculations millions of times per second.

Nvidia leaned into this demand, refining their CUDA platform and their manufacturing scale. They weren’t building for LLMs yet; they were building for miners. But in doing so, they solved the scalability problem of parallel computing. When the “AI Winter” ended and the industry realized that Deep Learning was the path forward, Nvidia didn’t have to invent a new chip. They just had to rebrand the one they had already perfected for the blockchain. Preparation met opportunity, but the opportunity wasn’t the one they had initially invited to the dance.

Case Study 2: From Hashes to Tokens

In 2021, Nvidia’s primary concern was “Lite Hash Rate” (LHR) cards to deter crypto miners so gamers could finally buy GPUs. This era of forced scaling pushed Nvidia to master the art of data-center-grade reliability. When ChatGPT arrived, the transition was seamless. The “Accidental Innovation” here was that the massively parallel arithmetic required to verify blocks on a chain exercises the same hardware strengths as the vector mathematics required to predict the next word in a sentence. Nvidia had built the world’s best token-prediction machine while thinking they were building the world’s best ledger-validation machine.

Leading Companies and Startups to Watch

While Nvidia currently sits on the throne of Accidental Innovation, the next wave of change-makers is already emerging by attempting to turn that accident into a deliberate architecture. Cerebras Systems is building “wafer-scale” engines that dwarf traditional GPUs, aiming to eliminate the networking bottlenecks that Nvidia’s “accidental” legacy still carries. Groq (not to be confused with Grok, the AI model) is focusing on LPUs (Language Processing Units) that prioritize the inference speed necessary for real-time human interaction. In the software layer, Modular is working to decouple the AI software stack from specific hardware, potentially neutralizing Nvidia’s CUDA moat. Finally, keep an eye on CoreWeave, which has pivoted from crypto mining to become a specialized “AI cloud,” proving that Nvidia’s accidental path is a blueprint others can follow by design.

The Human-Centered Conclusion

We must stop teaching innovation as a series of deliberate masterstrokes. When we do that, we discourage leaders from experimenting. If you believe you must see the entire future before you act, you will stay paralyzed. Nvidia’s success is a testament to Agile Resilience. They built a powerful, flexible tool, stayed independent during a crucial acquisition window, and were humble enough to let the market show them what their technology was actually good for.

As we move into this next phase of the Future Present, the lesson is clear: don’t just build for the world you see today. Build for the accidents of tomorrow. Because in the end, the most impactful innovations are rarely the ones we planned; they are the ones we were ready for.

Frequently Asked Questions

Why is Nvidia’s success considered “accidental”?

While Nvidia’s leadership was visionary in parallel computing, their current dominance in AI stems from the fact that hardware they optimized for gaming and cryptocurrency mining turned out to be the exact architecture needed for Large Language Models (LLMs), a use case that wasn’t the primary driver of their R&D for most of their history.

Did Meta almost buy Nvidia?

Historical industry analysis suggests that during the early growth of Oculus, there were significant internal discussions within Meta (Facebook) about vertically integrating hardware. While a formal acquisition of the entire Nvidia corporation was never finalized, the close proximity and the potential for such a deal represent a “what if” moment that would have fundamentally changed the AI landscape.

What is the “CUDA moat”?

CUDA is Nvidia’s proprietary software platform that allows developers to use GPUs for general-purpose processing. Because Nvidia spent years refining this for various industries (including crypto), it has become the industry standard. Most AI developers write code specifically for CUDA, making it very difficult for them to switch to competing chips from AMD or Intel.

Image credits: Google Gemini


Will our opinion still really be our own in an AI Future?


GUEST POST from Pete Foley

Intuitively we all mostly believe our opinions are our own.  After all, they come from that mysterious thing we call consciousness that resides somewhere inside of us. 

But we also know that other people’s opinions are shaped by all sorts of external influences. So unless we as individuals are uniquely immune to influence, it begs the question: how much of what we think, and what we do, is really uniquely us? And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people’s political beliefs: https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs. Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general. And that’s quite sobering on a couple of fronts. Firstly, for many people today, political beliefs are deeply tied to their value systems and deep sense of self, so this manipulation is potentially profound. Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new. We are all overwhelmed by political marketing during election season. We accept that the media has manipulated public opinion for generations, and that social media has amplified this over the last two decades. Similarly, we’ve all grown up immersed in marketing and advertising designed to influence our decisions, opinions and actions. Meanwhile, the rise in prominence of the behavioral sciences in recent decades has provided more structure and efficiency to behavioral influence, literally turning an art into a science. Framing, priming, pre-suasion, nudging and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a highly diverse field, and it can be deployed as an art or a science. Whether it’s influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition, we are largely unaware of implicit influence on ourselves, although we can often see it in others. And even in hindsight, it’s very difficult to introspect on implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event.

So what does this mean? As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have. Futurism is far from an exact science. But there are a few things we can make pretty decent guesses about.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good. Mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), physical health, wellness, job satisfaction, social engagement, and, important for many of us, adoption of beneficial technology and innovation can all benefit from it, along with so many other areas. And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, for example by managing depression or via preventative behavioral health interventions. Will this be authentic? It’s probably a little Huxley-dystopian, but will we care? It’s one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power? As humans, we don’t have a great record of doing the right thing when wealth and power come into the equation. And AI, and AI-empowered social, conceptual, and behavioral manipulation, has the potential to concentrate meaningful power even more than today’s tech-driven society does. Will this be used exclusively for good, or will some seek to leverage it for personal benefit at the expense of the broader community? Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do? Realistically, as individuals we can self-police, but we obviously also face limits in self-awareness of implicit manipulations. That said, we can to some degree still audit ourselves. We’ve probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs. Sometimes we recognize this quickly; other times we may be a little slower. But simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities. We are better at seeing others being manipulated than ourselves. We can use that as a mirror, and/or call it out to others when we see it. And many of us will find ourselves somewhere in the deployment chain, especially while AI is still in its early stages. For those of us to whom this applies, we have the opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago. It’s so long ago that I have to paraphrase, but the essence of the conversation was to never manipulate people into doing something that is not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn’t want, it might have brought initial success, but it would likely have come back to bite me in the long term. They certainly wouldn’t become repeat customers, and a mixture of buyer’s remorse, loss aversion, and revenge could turn them into active opponents. This potential for critical thinking in hindsight exists in virtually every situation where outcomes damage the individual.

The bottom line is that, even today, we already have to continually ask ourselves whether what we see is real, and whether our beliefs are truly our own or have been manipulated. Media and social media memes already play the manipulation game. AI may already be better at it, and if not, it’s only a matter of time before it is. If you think we are politically polarized now, hang onto your hat! But awareness is key. We all need to stay alert, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini


Was Your AI Strategy Developed by the Underpants Gnomes?

GUEST POST from Robyn Bolton

“It just popped up one day. Who knows how long they worked on it or how many millions were spent. They told us to think of it as ChatGPT but trained on everything our company has ever done, so we can ask it anything and get an answer immediately.”

The words my client was using to describe her company’s new AI Chatbot made it sound like a miracle. Her tone said something else completely.

“It sounds helpful,” I offered. “Have you tried it?”

“I’m not training my replacement! And I’m not going to train my R&D, Supply Chain, Customer Insights, or Finance colleagues’ replacements either. And I’m not alone. I don’t think anyone’s using it, because the company just announced they’re tracking usage and, if we don’t use it daily, that will be reflected in our performance reviews.”

All I could do was sigh. The Underpants Gnomes have struck again.

Who are the Underpants Gnomes?

The Underpants Gnomes are the stars of a 1998 South Park episode that media critic Paul Cantor described as “the most fully developed defense of capitalism ever produced.”

Claiming to be business experts, the Underpants Gnomes sneak into South Park residents’ homes every night and steal their underpants. When confronted by the boys in their underground lair, the Gnomes explain their business plan:

  1. Collect underpants
  2. ?
  3. Profit

It was meant as satire.

Some took it as an abbreviated MBA.

How to Spot the Underpants AI Gnomes

As the AI hype grew, fueling executive FOMO (Fear of Missing Out), the Underpants Gnomes, cleverly disguised as experts, entrepreneurs, and consultants, saw their opportunity.

  1. Sell AI
  2. ?
  3. Profit

While they’ve pivoted their business focus, they haven’t improved their operations, so the Underpants AI Gnomes are still easy to spot:

  1. Investment without Intention: Is your company investing in AI because it’s “essential to future-proofing the business?” That sounds good, but if your company can’t explain the future it’s proofing itself against, and how AI builds a moat or a life preserver in that future, it’s a sign that the Gnomes are in the building.
  2. Switches, not Solutions: If your company thinks that AI adoption is as “easy as turning on Copilot” or “installing a custom GPT chatbot,” the Gnomes are gaining traction. AI is a tool, and you need to teach people how to use tools, build processes to support the change, and demonstrate the benefit.
  3. Activity without Achievement: When MIT published research indicating that 95% of corporate Gen AI pilots were failing, it was a sign of just how deeply the Gnomes had infiltrated companies. Experiments are essential at the start of any new venture, but they are only useful if they generate replicable and scalable learning.

How to defend against the AI Gnomes

Odds are the gnomes are already in your company. But fear not, you can still turn “Phase 2:?” into something that actually leads to “Phase 3: Profit.”

  1. Start with the end in mind: Be specific about the outcome you are trying to achieve. The answer should be agnostic of AI and tied to business goals.
  2. Design with people at the center: Achieving your desired outcomes requires rethinking and redesigning existing processes. Strategic creativity like that requires combining people, processes, and technology to achieve the outcome and embed the change.
  3. Develop with discipline: Just because you can (run a pilot, sign up for a free trial) doesn’t mean you should. Small-scale experiments require the same degree of discipline as multi-million-dollar digital transformations. So, if you can’t articulate what you need to learn and how it contributes to the bigger goal, move on.

AI, in all its forms, is here to stay. But the same doesn’t have to be true for the AI Gnomes.

Have you spotted the Gnomes in your company?

Image credit: AI Underpants Gnomes (just kidding, Google Gemini made the image)
