Category Archives: Technology

Just Because You Can Use AI Doesn’t Mean You Should

GUEST POST from Shep Hyken

I’m often asked, “What should AI be used for?” While there is much that AI can do to support businesses in general, it’s obvious that I’m being asked how it relates to customer service and customer experience (CX). The true meaning of the question is more about what tasks AI can do to support a customer, thereby potentially eliminating the need for a live agent who deals directly with customers.

First, as the title of this article implies, just because AI can do something doesn’t mean it should. Yes, AI can handle many customer support issues, but even if every customer were willing to accept that AI can deliver good support, there are some sensitive and complicated issues for which customers would prefer to talk to a human.

Additionally, consider that, based on my annual customer experience research, 68% of customers (that’s almost seven out of 10) prefer the phone as their primary means of communication with a company or brand. However, another finding in the report is worth mentioning: 34% of customers stopped doing business with a company because self-service options were not provided. Some customers insist on the self-service option, but at the same time, they want to be transferred to a live agent when appropriate.

AI works well for simple issues, such as password resets, tracking orders, appointment scheduling and answering basic or frequently asked questions. Humans are better suited for handling complaints and issues that need empathy, complex problem-solving situations that require judgment calls and communicating bad news.

An AI-fueled chatbot can answer many questions, but when a medical patient contacts the doctor’s office about test results related to a serious issue, they will likely want to speak with a nurse or doctor, not a chatbot.

Consider These Questions Before Implementing AI For Customer Interactions

AI for addressing simple customer issues has become affordable for even the smallest businesses, and an increasing number of customers are willing to use AI-powered customer support for the right reasons. Consider these questions before implementing AI for customer interactions:

  1. Is the customer’s question routine or fact-based?
  2. Does it require empathy, emotion, understanding and/or judgment (emotional intelligence)?
  3. Could the wrong answer cause a problem or frustrate the customer?
  4. As you think about the reasons customers call, which ones would they feel comfortable having AI handle?
  5. Do you have an easy, seamless way for the customer to be transferred to a human when needed?

The point is that no matter how capable the technology is, it isn’t always best suited to deliver what the customer wants. Live agents can “read the customer” and know how to effectively communicate and empathize with them. AI can’t do that … yet. The key isn’t choosing between AI and humans. It’s knowing when to use each one.

Image credits: Google Gemini, Shep Hyken

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Top 100 Innovation and Transformation Articles of 2025

2021 marked the re-birth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being rebranded as Innovation Excellence and ultimately sold to DisruptorLeague.com.

Thanks to an outpouring of support, I’ve ignited the fuse of this new multiple-author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate, we’ve pulled together the Top 100 Innovation and Transformation Articles of 2025 from our archive of over 3,200 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Authors of 2025, and as the volume of this blog has grown, we have brought back our monthly article ranking to complement this annual one.

But enough delay: here are the 100 most popular innovation and transformation posts of 2025.

Did your favorite make the cut?

1. A Toolbox for High-Performance Teams – Building, Leading and Scaling – by Stefan Lindegaard

2. Top 10 American Innovations of All Time – by Art Inteligencia

3. The Education Business Model Canvas – by Arlen Meyers, M.D.

4. What is Human-Centered Change? – by Braden Kelley

5. How Netflix Built a Culture of Innovation – by Art Inteligencia

6. McKinsey is Wrong That 80% Companies Fail to Generate AI ROI – by Robyn Bolton

7. The Great American Contraction – by Art Inteligencia

8. A Case Study on High Performance Teams – New Zealand’s All Blacks – by Stefan Lindegaard

9. Act Like an Owner – Revisited! – by Shep Hyken

10. Should a Bad Grade in Organic Chemistry be a Doctor Killer? – by Arlen Meyers, M.D.

11. Charting Change – by Braden Kelley

12. Human-Centered Change – by Braden Kelley

13. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

14. SpaceX is a Masterclass in Innovation Simplification – by Pete Foley

15. Top 5 Future Studies Programs – by Art Inteligencia

16. Marriott’s Approach to Customer Service – by Shep Hyken

17. The Role of Stakeholder Analysis in Change Management – by Art Inteligencia

18. The Triple Bottom Line Framework – by Dainora Jociute

19. The Nordic Way of Leadership in Business – by Stefan Lindegaard

20. Nine Innovation Roles – by Braden Kelley

21. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

22. Designing an Innovation Lab: A Step-by-Step Guide – by Art Inteligencia

23. FutureHacking™ – by Braden Kelley

24. The 6 Building Blocks of Great Teams – by David Burkus

25. Overcoming Resistance to Change – Embracing Innovation at Every Level – by Chateau G Pato

26. Human-Centered Change – Free Downloads – by Braden Kelley

27. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

28. Quote Posters – Curated by Braden Kelley

29. Stoking Your Innovation Bonfire – by Braden Kelley

30. Innovation or Not – Kawasaki Corleo – by Art Inteligencia


31. Top Six Trends for Innovation Management in 2025 – by Jesse Nieminen

32. Fear is a Leading Indicator of Personal Growth – by Mike Shipulski

33. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

34. The Most Challenging Obstacles to Achieving Artificial General Intelligence – by Art Inteligencia

35. The Ultimate Guide to the Phase-Gate Process – by Dainora Jociute

36. Case Studies in Human-Centered Design – by Art Inteligencia

37. Transforming Leadership to Reshape the Future of Innovation – Exclusive Interview with Brian Solis

38. Leadership Best Quacktices from Oregon’s Dan Lanning – by Braden Kelley

39. This AI Creativity Trap is Gutting Your Growth – by Robyn Bolton

40. A 90% Project Failure Rate Means You’re Doing it Wrong – by Mike Shipulski

41. Reversible versus Irreversible Decisions – by Farnam Street

42. Next Generation Leadership Traits and Characteristics – by Stefan Lindegaard

43. Top 40 Innovation Bloggers of 2024 – Curated by Braden Kelley

44. Benchmarking Innovation Performance – by Noel Sobelman

45. Three Executive Decisions for Strategic Foresight Success or Failure – by Robyn Bolton

46. Back to Basics for Leaders and Managers – by Robyn Bolton

47. You Already Have Too Many Ideas – by Mike Shipulski

48. Imagination versus Knowledge – Is imagination really more important? – by Janet Sernack

49. Building a Better Change Communication Plan – by Braden Kelley

50. 10 Free Human-Centered Change™ Tools – by Braden Kelley


51. Why Business Transformations Fail – by Robyn Bolton

52. Overcoming the Fear of Innovation Failure – by Stefan Lindegaard

53. What is the difference between signals and trends? – by Art Inteligencia

54. Unintended Consequences. The Hidden Risk of Fast-Paced Innovation – by Pete Foley

55. Giving Your Team a Sense of Shared Purpose – by David Burkus

56. The Top 10 Irish Innovators Who Shaped the World – by Art Inteligencia

57. The Role of Emotional Intelligence in Effective Change Leadership – by Art Inteligencia

58. Is OpenAI About to Go Bankrupt? – by Art Inteligencia

59. Sprint Toward the Innovation Action – by Mike Shipulski

60. Innovation Management ISO 56000 Series Explained – by Diana Porumboiu

61. How to Make Navigating Ambiguity a Super Power – by Robyn Bolton

62. 3 Secret Saboteurs of Strategic Foresight – by Robyn Bolton

63. Four Major Shifts Driving the 21st Century – by Greg Satell

64. Problems vs. Solutions vs. Complaints – by Mike Shipulski

65. The Power of Position Innovation – by John Bessant

66. Three Ways Strategic Idleness Accelerates Innovation and Growth – by Robyn Bolton

67. Case Studies of Companies Leading in Inclusive Design – by Chateau G Pato

68. Recognizing and Celebrating Small Wins in the Change Process – by Chateau G Pato

69. Parallels Between the 1920’s and Today Are Frightening – by Greg Satell

70. The Art of Adaptability: How to Respond to Changing Market Conditions – by Art Inteligencia

71. Do you have a fixed or growth mindset? – by Stefan Lindegaard

72. Making People Matter in AI Era – by Janet Sernack

73. The Role of Prototyping in Human-Centered Design – by Art Inteligencia

74. Turning Bold Ideas into Tangible Results – by Robyn Bolton

75. Yes the Comfort Zone Can Be Your Best Friend – by Stefan Lindegaard

76. Increasing Organizational Agility – by Braden Kelley

77. Innovation is Dead. Now What? – by Robyn Bolton

78. Four Reasons Change Resistance Exists – by Greg Satell

79. Eight I’s of Infinite Innovation – Revisited – by Braden Kelley

80. Difference Between Possible, Potential and Preferred Futures – by Art Inteligencia


81. Resistance to Innovation – What if electric cars came first? – by Dennis Stauffer

82. Science Says You Shouldn’t Waste Too Much Time Trying to Convince People – by Greg Satell

83. Why Context Engineering is the Next Frontier in AI – by Braden Kelley and Art Inteligencia

84. How to Write a Failure Resume – by Arlen Meyers, M.D.

85. The Five Keys to Successful Change – by Braden Kelley

86. Four Forms of Team Motivation – by David Burkus

87. Why Revolutions Fail – by Greg Satell

88. Top 40 Innovation Bloggers of 2023 – Curated by Braden Kelley

89. The Entrepreneurial Mindset – by Arlen Meyers, M.D.

90. Six Reasons Norway is a Leader in High-Performance Teamwork – by Stefan Lindegaard

90. Top 100 Innovation and Transformation Articles of 2024 – Curated by Braden Kelley

91. The Worst British Customer Experiences of 2024 – by Braden Kelley

92. Human-Centered Change & Innovation White Papers – by Braden Kelley

93. Encouraging a Growth Mindset During Times of Organizational Change – by Chateau G Pato

94. Inside the Mind of Jeff Bezos – by Braden Kelley

95. Learning from the Failure of Quibi – by Greg Satell

96. Dare to Think Differently – by Janet Sernack

97. The End of the Digital Revolution – by Greg Satell

98. Your Guidebook to Leading Human-Centered Change – by Braden Kelley

99. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

100. Trust as a Competitive Advantage – by Greg Satell

Curious which article just missed the cut? Well, here it is just for fun:

101. Building Cross-Functional Collaboration for Breakthrough Innovations – by Chateau G Pato

These are the Top 100 innovation and transformation articles of 2025 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2025.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all the innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Outcome-Driven Innovation in the Age of Agentic AI

The North Star Shift

LAST UPDATED: January 5, 2026 at 5:29PM

by Braden Kelley

In a world of accelerating change, the rhetoric around Artificial Intelligence often centers on its incredible capacity for optimization. We hear about AI designing new materials, orchestrating complex logistics, and even writing entire software applications. This year, the technology has truly matured into agentic AI, capable of pursuing and achieving defined objectives with unprecedented autonomy. But as a specialist in Human-Centered Innovation™ (which pairs well with Outcome-Driven Innovation), I pose two crucial questions: Who is defining these outcomes, and what impact do they truly have on the human experience?

The real innovation of 2026 will show not just that AI can optimize against defined outcomes, but that we, as leaders, finally have the imperative — and the tools — to master Outcome-Driven Innovation and Outcome-Driven Change. If innovation is change with impact, then our impact is only as profound as the outcomes we choose to pursue. Without thoughtful, human-centered specifications, AI simply becomes the most efficient way to achieve the wrong goals, leading us directly into the Efficiency Trap. This is where organizations must overcome the Corporate Antibody response that resists fundamental shifts in how we measure success.

Revisiting and Applying Outcome-Driven Change in the Age of Agentic AI

As we integrate agentic AI into our organizations, the principles of Outcome-Driven Change (ODC) I first introduced in 2018 are more vital than ever. The core of the ODC framework rests on the alignment of three critical domains: Cognitive (Thinking), Affective (Feeling), and Conative (Doing). Today, AI agents are increasingly assuming the “conative” role, executing tasks and optimizing workflows at superhuman speeds. However, as I have always maintained, true success only arrives when what is being done is in harmony with what the people in the organization and customer base think and feel.

Outcome-Driven Change Framework

If an AI agent’s autonomous actions are misaligned with human psychological readiness or emotional context, it will trigger a Corporate Antibody response that kills innovation. To practice genuine Human-Centered Change™, we must ensure that AI agents are directed to pursue outcomes that are not just numerically efficient, but humanly resonant. When an AI’s “doing” matches the collective thinking and feeling of the workforce, we move beyond the Efficiency Trap and create lasting change with impact.

“In the age of agentic AI, the true scarcity is not computational power; it is the human wisdom to define the right ‘North Star’ outcomes. An AI optimizing for the wrong goal is a digital express train headed in the wrong direction – efficient, but ultimately destructive.” — Braden Kelley

From Feature-Building to Outcome-Harvesting

For decades, many organizations have been stuck in a cycle of “feature-building.” Product teams were rewarded for shipping more features, marketing for launching more campaigns, and R&D for creating more patents. The focus was on output, not ultimate impact. Outcome-Driven Innovation shifts this paradigm. It forces us to ask: What human or business value are we trying to create? What measurable change in behavior or well-being are we seeking?

Agentic AI, when properly directed, becomes an unparalleled accelerant for this shift. Instead of building a new feature and hoping it works, we can now tell an AI agent, “Achieve Outcome X for Persona Y, within Constraints Z,” and it will explore millions of pathways to get there. This frees human teams from the tactical churn and allows them to focus on the truly strategic work: deeply understanding customer needs, identifying ethical guardrails, and defining aspirational outcomes that genuinely drive Human-Centered Innovation™.
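To make that directive concrete, here is a minimal sketch in Python of what an outcome specification handed to an agentic system might look like. This is purely illustrative: the OutcomeSpec structure, its field names, and the to_directive helper are hypothetical assumptions made for this article, not the API of any particular agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeSpec:
    """A human-defined 'North Star' outcome handed to an agentic AI system.

    Hypothetical structure for illustration only -- not a real framework API.
    """
    outcome: str      # the measurable change we want in the world
    persona: str      # who the outcome is for
    metric: str       # how success will be measured
    target: float     # the level that counts as success
    deadline: str     # when the outcome must be achieved
    constraints: list = field(default_factory=list)  # hard boundaries (profit, quality, ethics)
    guardrails: list = field(default_factory=list)   # conditions that require human review

    def to_directive(self) -> str:
        """Render the spec as a plain-language directive for an agent."""
        return (
            f"Achieve '{self.outcome}' for {self.persona}, measured by {self.metric} "
            f"reaching {self.target} by {self.deadline}, within constraints: "
            f"{'; '.join(self.constraints)}. Escalate to a human if: "
            f"{'; '.join(self.guardrails)}."
        )

# Illustrative example, loosely based on the circular-economy outcome in Case Study 1 below.
spec = OutcomeSpec(
    outcome="50% reduction in virgin material usage across all product lines",
    persona="sustainability and product engineering teams",
    metric="percentage reduction in virgin material usage",
    target=50.0,
    deadline="2028",
    constraints=["maintain profitability", "maintain product quality"],
    guardrails=["a proposed design change compromises durability",
                "projected profitability falls below plan"],
)
print(spec.to_directive())
```

The value of writing the outcome down this explicitly is that the constraints and escalation guardrails are defined by humans before the agent ever begins exploring solution pathways.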

Case Study 1: Sustainable Manufacturing and the “Circular Economy” Outcome

The Challenge: A major electronics manufacturer in early 2025 aimed to reduce its carbon footprint but struggled with the complexity of optimizing its global supply chain, product design, and end-of-life recycling simultaneously. Traditional methods led to incremental, siloed improvements.

The Outcome-Driven Approach: They defined a bold outcome: “Achieve a 50% reduction in virgin material usage across all product lines by 2028, while maintaining profitability and product quality.” They then deployed an agentic AI system to explore new material combinations, reverse logistics networks, and redesign possibilities. This AI was explicitly optimized to achieve the circular economy outcome.

The Impact: The AI identified design changes that led to a 35% reduction in material waste within 18 months, far exceeding human predictions. It also found pathways to integrate recycled content into new products without compromising durability. The organization moved from a reactive “greenwashing” approach to proactive, systemic innovation driven by a clear, human-centric environmental outcome.

Case Study 2: Personalized Education and “Mastery Outcomes”

The Challenge: A national education system faced stagnating literacy rates, despite massive investments in new curricula. The focus was on “covering material” rather than ensuring true student understanding and application.

The Outcome-Driven Approach: They shifted their objective to “Ensure 90% of students achieve demonstrable mastery of core literacy skills by age 10.” An AI tutoring system was developed, designed to optimize for individual student mastery outcomes, rather than just quiz scores. The AI dynamically adapted learning paths, identified specific knowledge gaps, and even generated custom exercises based on each child’s learning style.

The Impact: Within two years, participating schools saw a 25% improvement in mastery rates. The AI became a powerful co-pilot for teachers, freeing them from repetitive grading and allowing them to focus on high-touch mentorship. This demonstrated how AI, directed by human-defined learning outcomes, can empower both educators and students, moving beyond the Efficiency Trap of standardized testing.

Leading Companies and Startups to Watch

As 2026 solidifies Outcome-Driven Innovation, several entities are paving the way. Amplitude and Pendo are evolving their product analytics to connect feature usage directly to customer outcomes. In the AI space, Anthropic’s work on “Constitutional AI” is fascinating, as it seeks to embed human-defined ethical outcomes directly into the AI’s decision-making. Glean and Perplexity AI are creating agentic knowledge systems that help organizations define and track complex outcomes across their internal data. Startups like Metaculus are even democratizing the prediction of outcomes, allowing collective intelligence to forecast the impact of potential innovations, providing invaluable insights for human decision-makers. These players are all contributing to the core goal: helping humans define the right problems for AI to solve.

Conclusion: The Human Art of Defining the Future

The year 2026 is a pivotal moment. Agentic AI gives us unprecedented power to optimize, but with great power comes great responsibility — the responsibility to define truly meaningful outcomes. This is not a technical challenge; it is a human one. It requires deep empathy, strategic foresight, and the courage to challenge old metrics. It demands leaders who understand that the most impactful Human-Centered Innovation™ starts with a clear, ethically grounded North Star.

If you’re an innovation leader trying to navigate this future, remember: the future is not about what AI can do, but about what outcomes we, as humans, choose to pursue with it. Let’s make sure those outcomes serve humanity first.

Frequently Asked Questions

What is “Outcome-Driven Innovation”?

Outcome-Driven Innovation (ODI) is a strategic approach that focuses on defining and achieving specific, measurable human or business outcomes, rather than simply creating new features or products. AI then optimizes for these defined outcomes.

How does agentic AI change the role of human leaders in ODI?

Agentic AI frees human leaders from tactical execution and micro-management, allowing them to focus on the higher-level strategic work of identifying critical problems, understanding human needs, and defining the ethical, impactful outcomes for AI to pursue.

What is the “Efficiency Trap” in the context of AI and outcomes?

The Efficiency Trap occurs when AI is used to optimize for speed or cost without first ensuring that the underlying outcome is meaningful and human-centered. This can lead to highly efficient processes that achieve undesirable or even harmful results, ultimately undermining trust and innovation.

Image credits: Braden Kelley, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Why Photonic Processors are the Nervous System of the Future

Illumination as Innovation

LAST UPDATED: January 2, 2026 at 4:59 PM

GUEST POST from Art Inteligencia

In the landscape of 2026, we have reached a critical juncture in what I call the Future Present (which you can also think of as the close-in future). Our collective appetite for intelligence — specifically the generative, agentic, and predictive kind — has outpaced the physical capabilities of our silicon ancestors. For decades, we have relied on electrons to do our bidding, pushing them through increasingly narrow copper gates. But electrons have a weight, a heat, and a resistance that are now leading us directly into the Efficiency Trap. If we want to move from change to change with impact, we must change the medium of the message itself.

Enter Photonic Processing. This is not merely an incremental speed boost; it is a fundamental shift from the movement of matter to the movement of light. By using photons instead of electrons to perform calculations, we are moving toward a world of near-zero latency and drastically reduced energy consumption. As a specialist in Human-Centered Innovation™, I see this not just as a hardware upgrade, but as a breakthrough for human potential. When computing becomes as fast as thought and as sustainable as sunlight, the barriers between human intent and innovative execution finally begin to dissolve.

“Innovation is not just about moving faster; it is about illuminating the paths that were previously hidden by the friction of our limitations. Photonic computing is the lighthouse that allows us to navigate the vast oceans of data without burning the world to power the voyage.” — Braden Kelley

The End of Electronic Friction

The core problem with traditional electronic processors is heat. When you move electrons through silicon, they collide, generating thermal energy. This is why data centers now consume a staggering percentage of the world’s electricity. Photons, however, do not have a charge and essentially do not interact with each other in the same way. They can pass through one another, move at the speed of light, and carry data across vast “optical highways” without the parasitic energy loss that plagues copper wiring.

For the modern organization, this means computational abundance. We can finally train the massive models required for true Human-AI Teaming without the ethical burden of a massive carbon footprint. We can move from “batch processing” our insights to “living insights” that evolve at the speed of human conversation.

Case Study 1: Transforming Real-Time Healthcare Diagnostics

The Challenge: A global genomic research institute in early 2025 was struggling with the “analysis lag.” To provide personalized cancer treatment plans, they needed to sequence and analyze terabytes of data in minutes. Using traditional GPU clusters, the process took days and cost thousands of dollars in energy alone.

The Photonic Solution: By integrating a hybrid photonic-electronic accelerator, the institute was able to perform complex matrix multiplications — the backbone of genomic analysis — using light. The impact? Analysis time dropped from 48 hours to 12 minutes. More importantly, the system consumed 90% less power. This allowed doctors to provide life-saving prescriptions while the patient was still in the clinic, transforming a diagnostic process into a human-centered healing experience.

Case Study 2: Autonomous Urban Flow in Smart Cities

The Challenge: A metropolitan pilot program for autonomous traffic management found that traditional electronic sensors were too slow to handle “edge cases” in dense fog and heavy rain. The latency of sending data to the cloud and back created a safety gap that the corporate antibody of public skepticism used to shut down the project.

The Photonic Solution: The city deployed “Optical Edge” processors at major intersections. These photonic chips processed visual data at the speed of light, identifying potential collisions before a human eye or an electronic sensor could even register the movement. The impact? A 60% reduction in traffic incidents and a 20% increase in average transit speed. By removing the latency, they restored public trust — the ultimate currency of Human-Centered Innovation™.

Leading Companies and Startups to Watch

The race to light-speed computing is no longer a laboratory experiment. Lightmatter is currently leading the pack with its Envise and Passage platforms, which provide a bridge between traditional silicon and the photonic future. Celestial AI is making waves with their “Photonic Fabric,” a technology designed to solve the massive data-bottleneck in AI clusters. We must also watch Ayar Labs, whose optical I/O chiplets are being integrated by giants like Intel to replace copper connections with light. Finally, Luminous Computing is quietly building a “supercomputer on a chip” that promises to bring the power of a data center to a desktop-sized device, truly democratizing the useful seeds of invention.

Designing for the Speed of Light

As we integrate these photonic systems, we must be careful not to fall into the Efficiency Trap. Just because we can process data a thousand times faster doesn’t mean we should automate away the human element. The goal of photonic innovation should be to free us from “grunt work” — the heavy lifting of data processing — so we can focus on “soul work” — the empathy, ethics, and creative leaps that no processor, no matter how fast, can replicate.

If you are an innovation speaker or a leader guiding your team through this transition, remember that technology is a tool, but trust is the architect. We use light to see more clearly, not to move so fast that we lose sight of our purpose. The photonic age is here; let us use it to build a future that is as bright as the medium it is built upon.

Frequently Asked Questions

What is a Photonic Processor?

A photonic processor is a type of computer chip that uses light (photons) instead of electricity (electrons) to perform calculations and transmit data. This allows for significantly higher speeds, lower latency, and dramatically reduced energy consumption compared to traditional silicon chips.

Why does photonic computing matter for AI?

AI models rely on massive “matrix multiplications.” Photonic chips can perform these specific mathematical operations using light interference patterns at the speed of light, making them ideally suited for the next generation of Large Language Models and autonomous systems.
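To see what that workload actually looks like, here is a minimal NumPy sketch (with illustrative layer sizes chosen only for this example) of the dense matrix multiplications that dominate a transformer-style feed-forward layer; this is the class of operation photonic accelerators aim to carry out with light rather than electrons.

```python
import numpy as np

# Illustrative sizes only; real LLM layers are far larger.
batch, d_model, d_ff = 32, 1024, 4096

x  = np.random.randn(batch, d_model)   # token activations
W1 = np.random.randn(d_model, d_ff)    # feed-forward weights (up-projection)
W2 = np.random.randn(d_ff, d_model)    # feed-forward weights (down-projection)

# The heart of the layer is two dense matrix multiplications.
hidden = np.maximum(x @ W1, 0.0)       # matmul followed by a ReLU nonlinearity
out    = hidden @ W2                   # second matmul

# Rough count of multiply-accumulate operations in this single layer.
macs = batch * d_model * d_ff + batch * d_ff * d_model
print(f"output shape: {out.shape}, MACs per forward pass: {macs:,}")
```

Even at these toy sizes, a single layer performs hundreds of millions of multiply-accumulate operations per forward pass, which is why moving that arithmetic off resistive copper matters so much for energy and latency.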

Is photonic computing environmentally friendly?

Yes. Because photons do not generate heat through resistance like electrons do, photonic processors require far less cooling and electricity. This makes them a key technology for sustainable innovation and reducing the carbon footprint of global data centers.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

What Are We Going to Do Now with GenAI?

GUEST POST from Geoffrey A. Moore

In 2023 we simply could not stop talking about Generative AI. But in 2024 the question for each enterprise — and this includes yours as well — became, and continues to be today: What are we going to do about it? Tough questions call for tough frameworks, so let’s run this one through the Hierarchy of Powers to see if it can shine some light on what might be your company’s best bet.

Category Power

Gen AI can have an impact anywhere in the Category Maturity Life Cycle, but the way it does so differs depending on where your category is, as follows:

  • Early Market. GenAI will almost certainly be a differentiating ingredient that is enabling a disruptive innovation, and you need to be on the bleeding edge. Think ChatGPT.
  • Crossing the chasm. Nailing your target use case is your sole priority, so you would use GenAI if, and only if, it helped you do so, and avoid getting distracted by its other bells and whistles. Think Khan Academy at the school district level.
  • Inside the tornado. Grabbing as much market share as you can is now the game to play, and GenAI-enabled features can help you do so provided they are fully integrated (no “some assembly required”). You cannot afford to slow your adoption down just at the time it needs to be at full speed. Think Microsoft CoPilot.
  • Growth Main Street (category still growing double digits). Market share boundaries are settling in, so the goal now is to grow your patch as fast as you can, solidifying your position and taking as much share as you can from the also-rans. Adding GenAI to the core product can provide a real boost as long as the disruption is minimal. Think Salesforce CRM.
  • Mature Main Street (category stabilized, single-digit growth). You are now marketing primarily to your installed base, secondarily seeking to pick up new logos as they come into play. GenAI can give you a midlife kicker provided you can use it to generate meaningful productivity gains. Think Adobe Photoshop.
  • Late Main Street (category declining, negative growth). The category has never been more profitable, so you are looking to extend its life in as low-cost a way as you can. GenAI can introduce innovative applications that otherwise would never occur to your end users. Think HP home printing.

Company Power

There are two dimensions of company power to consider when analyzing the ROI from a GenAI investment, as follows:

  • Market Share Status. Are you the market share leader, a challenger, or simply a participant? As a challenger, you can use GenAI to disrupt the market pecking order provided you differentiate in a way that is challenging for the leader to copy. On the other hand, as a leader, you can use GenAI to neutralize the innovations coming from challengers provided you can get it to market fast enough to keep the ecosystem in your camp. As a participant, you would add GenAI only if it was your single point of differentiation (as a low-share participant, your R&D budget cannot fund more than one).
  • Default Operating Model. Is your core business better served by the complex systems operating model (typical for B2B companies with hundreds to thousands of large enterprises for customers) or the volume operations operating model (typical for B2C companies with hundreds of thousands to millions of consumers)? The complex systems model has sufficient margins to invest in professional services across the entire ownership life cycle, from design consulting to installation to expansion. You are going to need deep in-house expertise to win big in this game. By contrast, GenAI deployed via the volume operations model has to work out-of-the-box. Consumers have neither the courage nor the patience to work through any disconnects.

Market Power

Whereas category share leaders benefit most from going broad, market segment leaders win big by going deep. The key tactic is to overdo it on the use cases that mean the most to your target customers, taking your offer beyond anything reasonable for a category leader to copy. GenAI can certainly be a part of this approach, as the two slides below illustrate:

Market Segmentation for Complex Systems

In the complex systems operating model, GenAI should accentuate the differentiation of your whole product, the complete solution to whatever problem you are targeting. That might mean, for example, taking your Large Language Model to a level of specificity that would normally not be warranted. This sets you apart from the incumbent vendor who has nothing like what you offer as well as from other technology vendors who have not embraced your target segment’s specific concerns. Think CrowdStrike’s Charlotte AI for cybersecurity analysis.

Market Segmentation for Volume Operations

In the volume operations operating model, GenAI should accentuate the differentiation of your brand promise by overdelivering on the relevant value discipline. Once again, it is critical not to get distracted by shiny objects—you want to differentiate in one quadrant only, although you can use GenAI in the other three for neutralization purposes. For Performance, think knowledge discovery. For Productivity, think writing letters. For Economy, think tutoring. For Convenience, think gift suggestions.

Offer Power

Everybody wants to “be innovative,” but it is worth stepping back a moment to ask, how do we get a Return on Innovation? Compared to its financial cousin, this kind of ROI is more of a leading indicator and thus of more strategic value. Basically, it comes in three forms:

  1. Differentiation. This creates customer preference, the goal being not just to be different but to create a clear separation from the competition, one that they cannot easily emulate. Think OpenAI.
  2. Neutralization. This closes the gap between you and a competitor who is taking market share away from you, the goal being to get to “good enough, fast enough,” thereby allowing your installed base to stay loyal. Think Google Bard.
  3. Optimization. This reduces the cost while maintaining performance, the goal being to expand the total available market. Think Edge GenAI on PCs and Macs.

For most of us, GenAI will be an added ingredient rather than a core product, which makes the ROI question even more important. The easiest way to waste innovation dollars is to spend them on differentiation that does not go far enough, neutralization that does not go fast enough, or optimization that does not go deep enough. So, the key lesson here is, pick one and only one as your ROI goal, and then go all in to get a positive return.

Execution Power

How best to incorporate GenAI into your existing enterprise depends on which zone of operations you are looking to enhance, as illustrated by the zone management framework below:

Zone Management Framework

If you are unsure exactly what to do, assign the effort to the Incubation Zone and put that team on the clock to come up with a good answer as fast as possible. If you can incorporate it directly into your core business’s offerings at relatively low risk, by all means, do so as it is the current hot ticket, and assign it to the Performance Zone. If there is not a good fit, consider using it internally instead to improve your own productivity, assigning it to the Productivity Zone. Finally, although it is awfully early days for this, if you are convinced it is an absolutely essential ingredient in a big bet you feel compelled to make, then assign it to the Transformation Zone and go all in. Again, the overall point is to manage your investment in GenAI out of one zone and only one zone, as the success metrics for each zone are incompatible with those of the other three.

One final point. Embracing anything as novel as GenAI has to feel risky. I submit, however, that in 2025 not building upon meaningful GenAI action taken in 2024 is even more so.

That’s what I think. What do you think?

Image Credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Can AI Replace the CEO?

A Day in the Life of the Algorithmic Executive

LAST UPDATED: December 28, 2025 at 1:56 PM

GUEST POST from Art Inteligencia

We are entering an era where the corporate antibody – that natural organizational resistance to disruptive change – is meeting its most formidable challenger yet: the AI CEO. For years, we have discussed the automation of the factory floor and the back office. But what happens when the “useful seeds of invention” are planted in the corner office?

The suggestion that an algorithm could lead a company often triggers an immediate emotional response. Critics argue that leadership requires soul, while proponents point to the staggering inefficiencies, biases, and ego-driven errors that plague human executives. As an advocate for Innovation = Change with Impact, I believe we must look beyond the novelty and analyze the strategic logic of algorithmic leadership.

“Leadership is not merely a collection of decisions; it is the orchestration of human energy toward a shared purpose. An AI can optimize the notes, but it cannot yet compose the symphony or inspire the orchestra to play with passion.”

Braden Kelley

The Efficiency Play: Data Without Drama

The argument for an AI CEO rests on the pursuit of Truly Actionable Data. Humans are limited by cognitive load, sleep requirements, and emotional variance. An AI executive, by contrast, operates in Future Present mode — constantly processing global market shifts, supply chain micro-fluctuations, and internal sentiment analysis in real-time. It doesn’t have a “bad day,” and it doesn’t make decisions based on who it had lunch with.

Case Study 1: NetDragon Websoft and the “Tang Yu” Experiment

The Experiment: A Virtual CEO in a Gaming Giant

In 2022, NetDragon Websoft, a major Chinese gaming and mobile app company, appointed an AI-powered humanoid robot named Tang Yu as the Rotating CEO of its subsidiary. This wasn’t just a marketing stunt; it was a structural integration into the management flow.

The Results

Tang Yu was tasked with streamlining workflows, improving the quality of work tasks, and enhancing the speed of execution. Over the following year, the company reported that Tang Yu helped the subsidiary outperform the broader Hong Kong stock market. Tang Yu also served as a real-time data hub, and its signature was required for document approvals and risk assessments. It proved that in data-rich environments where speed of iteration is the primary competitive advantage, an algorithmic leader can significantly reduce operational friction.

Case Study 2: Dictador’s “Mika” and Brand Stewardship

The Challenge: The Face of Innovation

Dictador, a luxury rum producer, took the concept a step further by appointing Mika, a sophisticated female humanoid robot, as their CEO. Unlike Tang Yu, who worked mostly within internal systems, Mika serves as a public-facing brand steward and high-level decision-maker for their DAO (Decentralized Autonomous Organization) projects.

The Insight

Mika’s role highlights a different facet of leadership: Strategic Pattern Recognition. Mika analyzes consumer behavior and market trends to select artists for bottle designs and lead complex blockchain-based initiatives. While Mika lacks human empathy, the company uses her to demonstrate unbiased precision. However, it also exposes the human-AI gap: while Mika can optimize a product launch, she cannot yet navigate the nuanced political and emotional complexities of a global pandemic or a social crisis with the same grace as a seasoned human leader.

Leading Companies and Startups to Watch

The space is rapidly maturing beyond experimental robot figures. Quantive (with StrategyAI) is building the “operating system” for the modern CEO, connecting KPIs to real-work execution. Microsoft is positioning its Copilot ecosystem to act as a “Chief of Staff” to every executive, effectively automating the data-gathering and synthesis parts of the role. Watch startups like Tessl and Vapi, which are focusing on “Agentic AI” — systems that don’t just recommend decisions but have the autonomy to execute them across disparate platforms.

The Verdict: The Hybrid Future

Will AI replace the CEO? My answer is: not the great ones. AI will certainly replace the transactional CEO — the executive whose primary function is to crunch numbers, approve budgets, and monitor performance. These tasks are ripe for automation because they represent 19th-century management techniques.

However, the transformational CEO — the one who builds culture, navigates ethical gray areas, and creates a sense of belonging — will find that AI is their greatest ally. We must move from fearing replacement to mastering Human-AI Teaming. The CEOs of 2030 will be those who use AI to handle the complexity of the business so they can focus on the humanity of the organization.

Frequently Asked Questions

Can an AI legally serve as a CEO?

Currently, most corporate law jurisdictions require a natural person to serve as a director or officer for liability and accountability reasons. AI “CEOs” like Tang Yu or Mika often operate under the legal umbrella of a human board or chairman who retains ultimate responsibility.

What are the biggest risks of an AI CEO?

The primary risks include Algorithmic Bias (reinforcing historical prejudices found in the data), Lack of Crisis Adaptability (AI struggles with “Black Swan” events that have no historical precedent), and the Loss of Employee Trust if leadership feels cold and disconnected.

How should current CEOs prepare for AI leadership?

Leaders must focus on “Up-skilling for Empathy.” They should delegate data-heavy reporting to AI systems and re-invest that time into Culture Architecture and Change Management. The goal is to become an expert at Orchestrating Intelligence — both human and synthetic.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

AI Stands for Accidental Innovation

LAST UPDATED: December 29, 2025 at 12:49 PM

GUEST POST from Art Inteligencia

In the world of corporate strategy, we love to manufacture myths of inevitable visionary genius. We look at the behemoths of today and assume their current dominance was etched in stone a decade ago by a leader who could see through the fog of time. But as someone who has spent a career studying Human-Centered Innovation and the mechanics of innovation, I can tell you that the reality is often much messier. And this is no different when it comes to artificial intelligence (AI), so much so that it could be said that AI stands for Accidental Innovation.

Take, for instance, the meteoric rise of Nvidia. Today, they are the undisputed architects of the intelligence age, a company whose hardware powers the Large Language Models (LLMs) reshaping our world. Yet, if we pull back the curtain, we find a story of survival, near-acquisitions, and a heavy dose of serendipity. Nvidia didn’t build their current empire because they predicted the exact nuances of the generative AI explosion; they built it because they were lucky enough to have developed technology for a completely different purpose that happened to be the perfect fuel for the AI fire.

“True innovation is rarely a straight line drawn by a visionary; it is more often a resilient platform that survives its original intent long enough to meet a future it didn’t expect.”

Braden Kelley

The Parallel Universe: The Meta/Oculus Near-Miss

It is difficult to imagine now, but there was a point in the Future Present where Nvidia was seen as a vulnerable hardware player. In the mid-2010s, as the Virtual Reality (VR) hype began to peak, Nvidia’s focus was heavily tethered to the gaming market. Internal histories and industry whispers suggest that the Oculus division of Meta (then Facebook) explored the idea of acquiring or deeply merging with Nvidia’s core graphics capabilities to secure their own hardware vertical.

At the time, Nvidia’s valuation was a fraction of what it is today. Had that acquisition occurred, the “Corporate Antibodies” of a social media giant would likely have stifled the very modularity that makes Nvidia great today. Instead of becoming the generic compute engine for the world, Nvidia might have been optimized—and narrowed—into a specialized silicon shop for VR headsets. It was a sliding doors moment for the entire tech industry. By not being acquired, Nvidia maintained the autonomy to follow the scent of demand wherever it led next.

Case Study 1: The Meta/Oculus Intersection

Before the “Magnificent Seven” era, Nvidia was struggling to find its next big act beyond PC gaming. When Meta acquired Oculus, there was a desperate need for low-latency, high-performance GPUs to make VR viable. The relationship between the two companies was so symbiotic that some analysts argued a vertical integration was the only logical step. Had Mark Zuckerberg moved more aggressively to bring Nvidia under the Meta umbrella, the GPU might have become a proprietary tool for the Metaverse. Because this deal failed to materialize, Nvidia remained an open ecosystem, allowing researchers at Google and OpenAI to eventually use that same hardware for a little thing called a Transformer model.

The Crypto Catalyst: A Fortuitous Detour

The second major “accident” in Nvidia’s journey was the Cryptocurrency boom. For years, Nvidia’s stock and production cycles were whipped around by the price of Ethereum. To the outside world, this looked like a distraction—a volatile market that Nvidia was chasing to satisfy shareholders. However, the crypto miners demanded exactly what AI would later require: massive, parallel processing power and specialized chips (ASICs and high-end GPUs) that could perform simple calculations millions of times per second.

Nvidia leaned into this demand, refining their CUDA platform and their manufacturing scale. They weren’t building for LLMs yet; they were building for miners. But in doing so, they solved the scalability problem of parallel computing. When the “AI Winter” ended and the industry realized that Deep Learning was the path forward, Nvidia didn’t have to invent a new chip. They just had to rebrand the one they had already perfected for the blockchain. Preparation met opportunity, but the opportunity wasn’t the one they had initially invited to the dance.

Case Study 2: From Hashes to Tokens

In 2021, Nvidia’s primary concern was shipping “Lite Hash Rate” (LHR) cards to deter crypto miners so gamers could finally buy GPUs. This era of forced scaling pushed Nvidia to master the art of data-center-grade reliability. When ChatGPT arrived, the transition was seamless. The “Accidental Innovation” here was that the massively parallel arithmetic used to mine blocks on a chain runs on the same GPU architecture as the vector mathematics required to predict the next word in a sentence. Nvidia had built the world’s best token-prediction machine while thinking they were building the world’s best ledger-validation machine.

Leading Companies and Startups to Watch

While Nvidia currently sits on the throne of Accidental Innovation, the next wave of change-makers is already emerging by attempting to turn that accident into a deliberate architecture. Cerebras Systems is building “wafer-scale” engines that dwarf traditional GPUs, aiming to eliminate the networking bottlenecks that Nvidia’s “accidental” legacy still carries. Groq (not to be confused with xAI’s Grok model) is focusing on LPUs (Language Processing Units) that prioritize the inference speed necessary for real-time human interaction. In the software layer, Modular is working to decouple the AI software stack from specific hardware, potentially neutralizing Nvidia’s CUDA moat. Finally, keep an eye on CoreWeave, which has pivoted from crypto mining to become a specialized “AI cloud,” proving that Nvidia’s accidental path is a blueprint others can follow by design.

The Human-Centered Conclusion

We must stop teaching innovation as a series of deliberate masterstrokes. When we do that, we discourage leaders from experimenting. If you believe you must see the entire future before you act, you will stay paralyzed. Nvidia’s success is a testament to Agile Resilience. They built a powerful, flexible tool, stayed independent during a crucial acquisition window, and were humble enough to let the market show them what their technology was actually good for.

As we move into this next phase of the Future Present, the lesson is clear: don’t just build for the world you see today. Build for the accidents of tomorrow. Because in the end, the most impactful innovations are rarely the ones we planned; they are the ones we were ready for.

Frequently Asked Questions

Why is Nvidia’s success considered “accidental”?

While Nvidia’s leadership was visionary in parallel computing, their current dominance in AI stems from the fact that hardware they optimized for gaming and cryptocurrency mining turned out to be the exact architecture needed for Large Language Models (LLMs), a use case that wasn’t the primary driver of their R&D for most of their history.

Did Meta almost buy Nvidia?

Historical industry analysis suggests that during the early growth of Oculus, there were significant internal discussions within Meta (Facebook) about vertically integrating hardware. While a formal acquisition of the entire Nvidia corporation was never finalized, the close proximity and the potential for such a deal represent a “what if” moment that would have fundamentally changed the AI landscape.

What is the “CUDA moat”?

CUDA is Nvidia’s proprietary software platform that allows developers to use GPUs for general-purpose processing. Because Nvidia spent years refining this for various industries (including crypto), it has become the industry standard. Most AI developers write code specifically for CUDA, making it very difficult for them to switch to competing chips from AMD or Intel.
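As a small illustration of that lock-in, here is a hedged Python sketch using PyTorch (it assumes the torch package is installed and falls back to the CPU if no Nvidia GPU is present). Ordinary framework code like this routes its heavy math through CUDA kernels whenever a compatible GPU is available, which is why moving to a different vendor's chips usually means reworking pieces of the software stack underneath.

```python
import torch

# Mainstream deep-learning code typically targets CUDA as the default accelerator.
# The dependence on this backend (and the kernels behind it) is the "moat".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"running on: {device}")

# Two large matrices and a matrix multiplication, dispatched to whatever backend
# the tensors live on; on an Nvidia GPU this runs through CUDA kernels.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)
c = a @ b

print(c.shape, c.device)
```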

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Technology of Tomorrow Requires Ecosystems Today

GUEST POST from Greg Satell

There are a number of stories about what led Hans Lipperhey to submit a patent for the telescope in 1608. Some say that he saw two children playing with lenses in his shop who discovered that when they held one lens in front of another they could see a weather vane across the street. Others say it was an apprentice who noticed the telescopic effect.

Yet the more interesting question is how such an important discovery could have such prosaic origins. Why was it only then, and not before, that somebody noticed that looking through two lenses would magnify objects? How could it have been that the discovery was made in a humble workshop and not by some great personage?

The truth is that history tends to converge and cascade around certain places and times, such as Cambridge before World War I, Vienna in the 1920s or, more recently, in Silicon Valley. In each case, we find that there were ecosystems that led to the inventions that changed the world. If we are going to build a more innovative economy, that’s where we need to focus.

How The Printing Press Led To A New Era Of Science

The mystery surrounding the invention of the telescope in the early 1600s begins to make more sense when you consider that the printing press was invented a little over a century before. By the mid-1500s, books had been transformed from priceless artifacts rarely seen outside monasteries into something common enough for people to keep in their homes.

As literacy flourished, the need for spectacles grew exponentially and lens making became a much more common trade. With so many lenses around, it was only a matter of time before someone figured out that combining two lenses would create a compound effect and result in magnification (the microscope was invented around the same time).

From there, things began to move quickly. In 1609, Galileo Galilei first used the telescope to explore the heavens and changed our conception of the universe. He was able to see stars that were invisible to the naked eye, mountains and valleys on the moon and noticed that, similar to the moon, Venus had phases suggesting that it revolved around the sun.

A half century later, Antonie van Leeuwenhoek built himself a microscope and discovered an entirely new world made up of cells and fibers far too small for the human eye to detect. For the first time we became aware of bacteria and protozoa, creating the new field of microbiology. The world began to move away from ancient superstition and into one of observation and deduction.

It’s hard to see how any of this could have been foreseen when Gutenberg printed his first bible. Galileo and van Leeuwenhoek were products of their age as much as they were creators of the future.

How The Light Bulb Helped To Reshape Life, Work And Diets

In 1882, just three years after he had almost literally shocked the world with his revolutionary lighting system, Thomas Edison opened his Pearl Street Station, the first commercial electrical distribution plant in the United States. By 1884 it was already servicing over 500 homes. Yet for the next few decades, electric light remained mostly a curiosity.

As the economist Paul David explains in The Dynamo and the Computer, electricity didn’t have a measurable impact on the economy until the early 1920s — 40 years after Edison’s plant. The problem wasn’t with electricity itself (Edison quickly expanded his distribution network, as did his rival George Westinghouse), but with a lack of complementary technologies.

To truly impact productivity, factories had to be redesigned to function not around a single steam turbine, but with smaller electric motors powering each machine. That created the opportunity to reimagine work itself, which led to the study of management. Greater productivity raised living standards and created a new consumer culture.

Much like with the printing press, the ecosystem created by electric light led to secondary and tertiary inventions. Radios changed the way people received information and were entertained. Refrigeration meant not only that food could be kept fresh, but sent over large distances, reshaping agriculture and greatly improving diets.

The Automobile And The Category Killer

The internal combustion engine was developed in the late 1870s and early 1880s. Two of its primary inventors, Gottlieb Daimler and Karl Benz, began developing cars in the mid-1880s. Henry Ford came two decades later. By pioneering the assembly line, he transformed the car from an expensive curiosity into a true “product for the masses,” and it was this transformation that led to its major impact.

When just a few people have a car, it is merely a mode of transportation. But when everyone has a car, it becomes a force that reshapes society. People move from crowded cities into bedroom communities in the suburbs. Social relationships change, especially for farmers who previously lived their entire lives within a single day’s horse ride, an area of perhaps 10 or 12 square miles. Lives opened up. Worlds broadened.

New infrastructure, like roads and gas stations, was built. Improved logistics began to reshape supply chains, and factories moved from cities in the north, close to customers, to small towns in the south, where labor and land were cheaper. That improved the economics of manufacturing, raised incomes and enriched lives.

With the means to easily carry a week’s worth of groceries, corner stores were replaced by supermarkets. Eventually suburbs formed and shopping malls sprang up. In the US, Little League baseball became popular. When mobility was combined with the productivity effects of electricity, almost every facet of life, including where we lived, worked and shopped, was reshaped.

Embarking On A New Era Of Innovation

These days, it seems that every time you turn around you see some breakthrough technology that will change our lives. We see media reports about computing breakthroughs, miracle cures, new sources of energy and more. Unfortunately, very few will ever see the outside of a lab and even fewer will prove commercially viable enough to impact our lives.

Don’t get me wrong. Many of these are real discoveries produced by serious scientists and reported by reputable sources. The problem is with how science works. At any given time there are a myriad of exciting possibilities, but very few pan out and even the ones that do usually take decades to make an impact.

Digital technology is a great example of how this happens. As AnnaLee Saxenian explained in Regional Advantage, back in the 1970s and 80s, when Boston was the center of the technology universe, Silicon Valley invested in an ecosystem, which included not just corporations, but scientific labs, universities and community colleges. New England rejected that approach. The results speak for themselves.

If you want to understand the technology of tomorrow, don’t try to imagine an idea no one has ever thought of, but look at the problems people are working on today. You’ll find a vast network working on quantum computing, a significant synthetic biology economy, a large-scale effort in materials science and billions of dollars invested into energy storage startups.

That’s why, if we are to win the future, we need to invest in ecosystems. It’s the nodes that grab attention, but the networks that make things happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Rise of Human-AI Teaming Platforms

Designing Partnership, Not Replacement

LAST UPDATED: December 26, 2025 at 4:44 PM

Human-AI Teaming Platforms

GUEST POST from Art Inteligencia

In the rush to adopt artificial intelligence, too many organizations are making a fundamental error. They view AI through the lens of 19th-century industrial automation: a tool to replace expensive human labor with cheaper, faster machines. This perspective is not only shortsighted; it is a recipe for failed digital transformation.

As a human-centered change leader, I argue that the true potential of this era lies not in artificial intelligence alone, but in Augmented Intelligence derived from sophisticated collaboration. We are moving past simple chatbots and isolated algorithms toward comprehensive Human-AI Teaming Platforms. These are environments designed not to remove the human from the loop, but to create a symbiotic workflow where humans and synthetic agents operate as cohesive units, leveraging their respective strengths concurrently.

“Organizations don’t fail because AI is too difficult to adopt. They fail because they never designed how humans and AI would think together and work together.”

Braden Kelley

The Cognitive Collaborative Shift

A Human-AI Teaming Platform differs significantly from standard enterprise software. Traditional tools wait for human input. A teaming platform is proactive; it observes context, anticipates needs, and offers suggestions seamlessly within the flow of work.

The challenge for leadership here is less technological and more cultural. How do we foster psychological safety when a team member is an algorithm? How do we redefine accountability when decisions are co-authored by human judgment and machine probability? Success requires a deliberate shift from managing subordinate tools to orchestrating collaborative partners.

“The ultimate goal of Human-AI teaming isn’t just to build faster organizations, but to build smarter, more adaptable ones. It is about creating a symbiotic relationship where the computational velocity of AI amplifies – rather than replaces – the creative, empathetic, and contextual genius of humans.”

Braden Kelley

When designed correctly, these platforms handle the high-volume cognitive load—data pattern recognition, probabilistic forecasting, and information retrieval—freeing human brains for high-value tasks like ethical reasoning, strategic negotiation, and complex emotional intelligence.

Case Studies in Symbiosis

To understand the practical application of these platforms, we must look at sectors where the cost of error is high and data volumes are overwhelming.

Case Study 1: Mastercard and the Decision Management Platform

In the high-stakes world of global finance, fraud detection is a constant battle against increasingly sophisticated bad actors. Mastercard has moved beyond simple automated flags to a genuine human-AI teaming approach with their Decision Intelligence platform.

The Challenge: False positives in fraud detection insult legitimate customers and stop commerce, while false negatives cost billions. No human team can review every transaction in real-time, and rigid rules-based AI often misses nuanced fraud patterns.

The Teaming Solution: Mastercard employs sophisticated AI that analyzes billions of activities in real-time. However, rather than just issuing a binary block/allow decision, the AI acts as an investigative partner to human analysts. It presents a “reasoned” risk score, highlighting why a transaction looks suspicious based on subtle behavioral shifts that a human would miss. The human analyst then applies contextual knowledge—current geopolitical events, specific merchant relationships, or nuanced customer history—to make the final judgment call. The AI learns from this human intervention, constantly refining its future collaborative suggestions.
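
To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of how such a review flow could be wired together. The class names, thresholds and callbacks are illustrative assumptions on my part, not Mastercard’s actual Decision Intelligence API: the model handles the clear-cut cases on its own, routes the ambiguous middle to a human analyst along with its reasons, and logs the analyst’s decision so the model can learn from it.

```python
# A minimal, hypothetical sketch of a human-in-the-loop review flow.
# The class names, thresholds and callbacks below are illustrative assumptions,
# not Mastercard's Decision Intelligence API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    txn_id: str
    amount: float
    merchant: str

@dataclass
class RiskAssessment:
    score: float        # 0.0 (almost certainly legitimate) to 1.0 (almost certainly fraud)
    reasons: list[str]  # human-readable explanation the analyst can inspect

def route_transaction(
    txn: Transaction,
    assess: Callable[[Transaction], RiskAssessment],                # the AI model
    analyst_review: Callable[[Transaction, RiskAssessment], bool],  # the human analyst
    feedback_log: list,
    auto_block: float = 0.95,
    auto_allow: float = 0.20,
) -> str:
    """Let the model handle clear-cut cases; send the ambiguous middle to a human."""
    assessment = assess(txn)
    if assessment.score >= auto_block:
        return "blocked"
    if assessment.score <= auto_allow:
        return "allowed"
    # Ambiguous case: the AI explains its score, the human decides, and the
    # decision is logged so the model can later be retrained on analyst judgments.
    approved = analyst_review(txn, assessment)
    feedback_log.append((txn.txn_id, assessment.score, approved))
    return "allowed" if approved else "blocked"
```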

Case Study 2: Autodesk and Generative Design in Engineering

The field of engineering and manufacturing is transitioning from computer-aided design (CAD) to human-AI co-creation, pioneered by companies like Autodesk.

The Challenge: When designing complex components, like an aerospace bracket that must minimize weight while maintaining structural integrity, an engineer is limited by their experience and the time available to iterate on concepts.

The Teaming Solution: Using Autodesk’s generative design platforms, the human engineer doesn’t draw the part. Instead, they define the constraints: materials, weight limits, load-bearing requirements, and manufacturing methods. The AI then acts as a tireless creative partner, generating hundreds or thousands of candidate design solutions that meet those criteria, many utilizing organic shapes no human would instinctively draw. The human engineer then reviews these options, selecting the optimal design based on aesthetics, manufacturability, and cost-effectiveness. The human sets the goal; the AI explores the solution space; the human selects and refines the outcome.
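
The same “human sets the goal, AI explores, human selects” loop can be sketched in a few lines of Python. This is a toy illustration under assumptions of my own (the constraint fields, the random candidate generator and the shortlist step are invented for clarity), not Autodesk’s generative design API: the engineer encodes the constraints, a generator proposes many candidates, infeasible ones are filtered out, and a human reviews the shortlist.

```python
# A toy, hypothetical sketch of the constrain-generate-select loop.
# The constraint fields, random candidate generator and shortlist step are
# invented for illustration; this is not Autodesk's generative design API.
import random
from dataclasses import dataclass

@dataclass
class DesignConstraints:
    max_weight_kg: float   # weight budget set by the engineer
    min_load_n: float      # minimum load the part must bear, in newtons
    material: str

@dataclass
class CandidateDesign:
    weight_kg: float
    load_capacity_n: float
    est_cost: float

def generate_candidates(c: DesignConstraints, n: int = 1000) -> list[CandidateDesign]:
    """Stand-in for the AI: explore the solution space, keep only feasible designs."""
    raw = [
        CandidateDesign(
            weight_kg=random.uniform(0.1, c.max_weight_kg * 1.5),
            load_capacity_n=random.uniform(c.min_load_n * 0.5, c.min_load_n * 3.0),
            est_cost=random.uniform(50.0, 500.0),
        )
        for _ in range(n)
    ]
    return [d for d in raw
            if d.weight_kg <= c.max_weight_kg and d.load_capacity_n >= c.min_load_n]

# The human sets the goal; the AI explores; the human reviews a small shortlist.
constraints = DesignConstraints(max_weight_kg=2.0, min_load_n=10_000.0, material="Ti-6Al-4V")
shortlist = sorted(generate_candidates(constraints), key=lambda d: d.weight_kg)[:10]
```

In a real platform the random generator would be replaced by topology optimization and simulation, but the shape of the collaboration, constraints in, candidates out, human judgment last, stays the same.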

Leading Platforms and Startups to Watch

The market for these platforms is rapidly bifurcating into massive ecosystem players and niche, workflow-specific innovators.

Among the giants, Microsoft is aggressively positioning its Copilot ecosystem across nearly every knowledge worker touchpoint, turning M365 into the default teaming platform for the enterprise. Salesforce is similarly embedding generative AI deep into its CRM, attempting to turn sales and service records into proactive coaching systems.

However, keep an eye on innovators focused on the mechanics of collaboration. Companies like Atlassian are evolving their suite (Jira, Confluence) to use AI not just to summarize text, but to connect disparate project threads and identify team bottlenecks proactively. In the startup space, look for platforms that are trying to solve the “managerial” layer of AI, helping human leaders coordinate mixed teams of synthetic and biological agents, ensuring alignment and mitigating bias in real-time.

Conclusion: The Leadership Imperative

Implementing Human-AI Teaming Platforms is a change management challenge of the highest order. If introduced poorly, these tools will be viewed as surveillance engines or competitors, leading to resistance and sabotage.

Leaders must communicate a clear vision: AI is brought in to handle the drudgery so humans can focus on the artistry of their professions. The organizations that win in the next decade will not be those with the best AI; they will be the ones with the best relationship between their people and their AI.

Frequently Asked Questions regarding Human-AI Teaming

What is the primary difference between traditional automation and Human-AI teaming?

Traditional automation seeks to replace human tasks entirely to cut costs and increase speed, often removing the human from the loop. Human-AI teaming focuses on augmentation, keeping humans in the loop for complex judgment and creative tasks while leveraging AI for data processing and pattern recognition in a collaborative workflow.

What are the biggest cultural barriers to adopting Human-AI teaming platforms?

The significant barriers include a lack of trust in AI outputs, fear of job displacement among the workforce, and the difficulty of redefining roles and accountability when decisions are co-authored by humans and algorithms.

How do Human-AI teaming platforms improve decision-making?

These platforms improve decision-making by combining the AI’s ability to process vast datasets without fatigue or cognitive bias with the human ability to apply ethical considerations, emotional intelligence, and nuanced contextual understanding to the final choice.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Will our opinion still really be our own in an AI Future?

Will our opinion still really be our own in an AI Future?

GUEST POST from Pete Foley

Intuitively, we all mostly believe our opinions are our own. After all, they come from that mysterious thing we call consciousness that resides somewhere inside us.

But we also know that other people’s opinions are shaped by all sorts of external influences. So unless we as individuals are uniquely immune to influence, it raises the question: how much of what we think, and what we do, is really uniquely us? And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people’s political beliefs: https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs. Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general. And that’s quite sobering on a couple of fronts. Firstly, for many people today, political beliefs are deeply tied to their value system and sense of self, so this manipulation is potentially profound. Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new. We are all overwhelmed by political marketing during election season. We accept that media has manipulated public opinion for decades, and that social media has amplified this in more recent years. Similarly, we’ve all grown up immersed in marketing and advertising designed to influence our decisions, opinions and actions. Meanwhile, the rise in prominence of the behavioral sciences in recent decades has provided more structure and efficiency to behavioral influence, literally turning an art into a science. Framing, priming, pre-suasion, nudging and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a highly diverse field, and it can be deployed as an art or a science. Whether it’s influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition, we are largely unaware of implicit influence on ourselves, although we can often see it in others. And even in hindsight, it’s very difficult to introspect on implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event.

So what does this mean? As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have. Futurism is far from an exact science. But there are a few things we can make pretty decent guesses about.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good. Mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), physical health, wellness, job satisfaction, social engagement and, important for many of us, the adoption of beneficial technology and innovation can all benefit from it. And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, for example by managing depression or via preventative behavioral health interventions. Will this be authentic? It’s probably a little Huxley-dystopian, but will we care? It’s one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power? As humans, we don’t have a great record of doing the right thing when wealth and power come into the equation. And AI, along with AI-empowered social, conceptual and behavioral manipulation, has the potential to concentrate meaningful power even more than today’s tech-driven society already does. Will this be used exclusively for good, or will some seek to leverage it for their personal benefit at the expense of the broader community? Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do? Realistically, as individuals we can self-police, but we obviously also face limits in self-awareness of implicit manipulations. That said, we can to some degree still audit ourselves. We’ve probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs. Sometimes we recognize this quickly; other times we may be a little slower. But simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities. We are better at seeing others being manipulated than ourselves. We can use that as a mirror, and/or call it out to others when we see it. And many of us will find ourselves somewhere in the deployment chain, especially as AI is still in its early stages. Those of us to whom this applies have the opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago. It’s so long ago I have to paraphrase, but the essence of the conversation was to never manipulate people into doing something that is not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn’t want, it might have offered initial success, but it would likely have come back to bite me in the long term. They certainly wouldn’t become repeat customers, and a mixture of buyer’s remorse, loss aversion and revenge could turn them into active opponents. This potential for critical thinking in hindsight exists for virtually every situation where outcomes damage the individual.

The bottom line is that, even today, we already have to continually ask ourselves whether what we see is real and whether our beliefs are truly our own, or whether they have been manipulated. Media and social media memes already play the manipulation game. AI may already be better at it, and if not, it’s only a matter of time before it is. If you think we are politically polarized now, hang onto your hat! But awareness is key. We all need to stay alert, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.