Tag Archives: Artificial Intelligence

Win Your Way to an AI Job

Anduril’s AI Grand Prix: Racing for the Future of Work

LAST UPDATED: January 28, 2026 at 2:27 PM


GUEST POST from Art Inteligencia

The traditional job interview is an antiquated artifact, a relic of a bygone industrial era. It often measures conformity, articulateness, and cultural fit more than actual capability or innovative potential. As we navigate the complexities of AI, automation, and rapid technological shifts, organizations are beginning to realize that to find truly exceptional talent, they need to look beyond resumes and carefully crafted answers. This is where companies like Anduril are not just iterating but innovating the very hiring process itself.

Anduril, a defense technology company known for its focus on AI-driven systems, recently announced its AI Grand Prix — a drone racing contest where the ultimate prize isn’t just glory, but a job offer. This isn’t merely a marketing gimmick; it’s a profound statement about their belief in demonstrated skill over credentialism, and a powerful strategy for identifying talent that can truly push the boundaries of autonomous systems. It epitomizes the shift from abstract evaluation to purposeful, real-world application, emphasizing hands-on capability over theoretical knowledge.

“The future of hiring isn’t about asking people what they can do; it’s about giving them a challenge and watching them show you.”

— Braden Kelley

Why Challenge-Based Hiring is the New Frontier

This approach addresses several critical pain points in traditional hiring:

  • Uncovering Latent Talent: Many brilliant minds don’t fit the mold of elite university degrees or polished corporate careers. Challenge-based hiring can surface individuals with raw, untapped potential who might otherwise be overlooked.
  • Assessing Practical Skills: In fields like AI, robotics, and advanced engineering, theoretical knowledge is insufficient. The ability to problem-solve under pressure, adapt to dynamic environments, and debug complex systems is paramount.
  • Cultural Alignment Through Action: Observing how candidates collaborate, manage stress, and iterate on solutions in a competitive yet supportive environment reveals more about their true cultural fit than any behavioral interview.
  • Building a Diverse Pipeline: By opening up contests to a wider audience, companies can bypass traditional biases inherent in resume screening, leading to a more diverse and innovative workforce.

Beyond Anduril: Other Pioneers of Performance-Based Hiring

Anduril isn’t alone in recognizing the power of real-world challenges to identify top talent. Several other forward-thinking organizations have adopted similar, albeit varied, approaches:

Google’s Code Jam and Hash Code

For years, Google has leveraged competitive programming contests like Code Jam and Hash Code to scout for software engineering talent globally. These contests present participants with complex algorithmic problems that test their coding speed, efficiency, and problem-solving abilities. While not always directly leading to a job offer for every participant, top performers are often fast-tracked through the interview process. This allows Google to identify engineers who can perform under pressure and think creatively, rather than just those who can ace a whiteboard interview. It’s a prime example of turning abstract coding prowess into a tangible demonstration of value.

Kaggle Competitions for Data Scientists

Kaggle, now a Google subsidiary, revolutionized how data scientists prove their worth. Through its platform, companies post real-world data science problems—from predicting housing prices to identifying medical conditions from images—and offer prize money, and often job connections, to the teams that develop the best models. This creates a meritocracy where the quality of one’s predictive model speaks louder than any resume. Many leading data scientists have launched their careers or been recruited directly from their performance in Kaggle competitions. It transforms theoretical data knowledge into demonstrable insights that directly impact business outcomes.

The Human Element in the Machine Age

What makes these initiatives truly human-centered? It’s the recognition that while AI and automation are transforming tasks, the human capacity for ingenuity, adaptation, and critical thinking remains irreplaceable. These contests aren’t about finding people who can simply operate machines; they’re about finding individuals who can teach the machines, design the next generation of algorithms, and solve problems that don’t yet exist. They foster an environment of continuous learning and application, perfectly aligning with the “purposeful learning” philosophy.

The Anduril AI Grand Prix, much like Google’s and Kaggle’s initiatives, de-risks the hiring process by creating a performance crucible. It’s a pragmatic, meritocratic, and ultimately more effective way to build the teams that will define the next era of technological advancement. As leaders, our challenge is to move beyond conventional wisdom and embrace these innovative models, ensuring we’re not just ready for the future of work, but actively shaping it.

Anduril Fury


Frequently Asked Questions

What is challenge-based hiring?

Challenge-based hiring is a recruitment strategy where candidates demonstrate their skills and problem-solving abilities by completing a real-world task, project, or competition, rather than relying solely on resumes and interviews.

What are the benefits of this approach for companies?

Companies can uncover hidden talent, assess practical skills, observe cultural fit in action, and build a more diverse talent pipeline by focusing on demonstrable performance.

How does this approach benefit candidates?

Candidates get a fair chance to showcase their true abilities regardless of traditional credentials, gain valuable experience, and often get direct access to influential companies and potential job offers based purely on merit.

To learn more about transforming your organization’s talent acquisition strategy, reach out to explore how human-centered innovation can reshape your hiring practices.

Image credits: Wikimedia Commons, Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

We Must Hold AI Accountable


GUEST POST from Greg Satell

About ten years ago, IBM invited me to talk with some key members on the Watson team, when the triumph of creating a machine that could beat the best human players at the game show Jeopardy! was still fresh. I wrote in Forbes at the time that we were entering a new era of cognitive collaboration between humans, computers and other humans.

One thing that struck me was how similar the moment seemed to aviation legend Chuck Yeager’s description of the advent of fly-by-wire, four decades earlier, in which pilots would no longer operate aircraft directly, but interface with a computer that flew the plane. Many of the macho “flyboys” weren’t able to trust the machines and couldn’t adapt.

Now, with the launch of ChatGPT, Bill Gates has announced that the age of AI has begun and, much like those old flyboys, we’re all going to struggle to adapt. Our success will not only rely on our ability to learn new skills and work in new ways, but the extent to which we are able to trust our machine collaborators. To reach its potential, AI will need to become accountable.

Recognizing Data Bias

With humans, we work diligently to construct safe and constructive learning environments. We design curriculums, carefully selecting materials, instructors and students to try and get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.

Machines also have a learning environment called a “corpus.” If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.

However, the process can go horribly awry. A famous case is Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform in 2016. In under a day, Tay went from being friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Bias in the learning corpus is far more common than we often realize. Do an image search for the word “professional haircut” and you will get almost exclusively pictures of white men. Do the same for “unprofessional haircut” and you will see much more racial and gender diversity.

It’s not hard to figure out why this happens. Editors writing articles about haircuts portray white men in one way and other genders and races in another. When we query machines, we inevitably find our own biases baked in.
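The way a biased corpus gets “baked in” can be made concrete with a deliberately tiny toy model. This is an invented illustration, not any real search or classification system: a trivial word-counting classifier trained on skewed labels faithfully reproduces that skew at query time. All of the data below is fabricated for the sketch.

```python
from collections import Counter

# Toy "classifier": learn word/label co-occurrence counts from a biased
# corpus, then answer queries by majority vote. The bias in the labels
# (invented here) flows straight through to the predictions.
corpus = [
    ("short side part", "professional"),
    ("short crew cut", "professional"),
    ("short fade", "professional"),
    ("long braids", "unprofessional"),    # bias introduced by whoever
    ("natural curls", "unprofessional"),  # curated and labeled the corpus
]

def train(corpus):
    """Count how often each word co-occurs with each label."""
    counts = {}
    for text, label in corpus:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(counts, text):
    """Return the label most associated with the words in the query."""
    votes = Counter()
    for word in text.split():
        votes.update(counts.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

model = train(corpus)
print(predict(model, "short haircut"))  # "professional"
print(predict(model, "braids"))         # "unprofessional"
```

The model never “decides” to be biased; it simply mirrors whatever associations its training environment contains, which is exactly the failure mode described above.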

Accounting For Algorithmic Bias

A second major source of bias results from how decision-making models are designed. Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Yet it’s not hard to imagine how it could happen. If a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, would fail to register, or would even unfairly count against her. Good human managers recognize outliers; algorithms generally aren’t designed to.

In other cases, models are constructed according to whatever data is easiest to acquire, or a model is overfit to a specific set of cases and then applied too broadly. In 2013, Google Flu Trends predicted almost double the number of flu cases that actually occurred. What appears to have happened is that increased media coverage of Google Flu Trends led to more searches by people who weren’t sick. The algorithm was never designed to take itself into account.

The simple fact is that an algorithm must be designed in one way or another. Every possible contingency cannot be pursued. Choices have to be made and bias will inevitably creep in. Mistakes happen. The key is not to eliminate error, but to make our systems accountable through explainability, auditability and transparency.

To Build An Era Of Cognitive Collaboration We First Need To Build Trust

In 2020, Ofqual, the authority that administers A-Level exams in the UK, found itself mired in scandal. Unable to hold live exams because of Covid-19, it designed and deployed an algorithm that based scores partly on the historical performance of the schools students attended, with the unintended consequence that already disadvantaged students found themselves further penalized by artificially deflated scores.

The outcry was immediate, but in a sense the Ofqual case is a happy story. Because the agency was transparent about how the algorithm was constructed, the source of the bias was quickly revealed, corrective action was taken in a timely manner, and much of the damage was likely mitigated. As Linus’s Law advises, “given enough eyeballs, all bugs are shallow.”

The age of artificial intelligence requires us to collaborate with machines, leveraging their capabilities to better serve other humans. To make that collaboration successful, however, it needs to take place in an atmosphere of trust. Machines, just like humans, need to be held accountable; their decisions and insights can’t be a “black box.” We need to be able to understand where their judgments come from and how their decisions are being made.

Senator Schumer worked on legislation to promote more transparency in 2024, but that is only a start and the new administration has pushed the pause button on AI regulation. The real change has to come from within ourselves and how we see our relationships with the machines we create. Marshall McLuhan wrote that media are extensions of man and the same can be said for technology. Our machines inherit our human weaknesses and frailties. We need to make allowances for that.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr


Humans Don’t Have to Perform Every Task


GUEST POST from Shep Hyken

There seems to be a lot of controversy and questions surrounding artificial intelligence (AI) being used to support customers. The customer experience can be enhanced with AI, but it can also derail and cause customers to head to the competition.

Last week, I wrote an article titled Just Because You Can Use AI, Doesn’t Mean You Should. The gist of the article was that while AI has impressive capabilities, there are situations in which human-to-human interaction is still preferred, even necessary, especially for complex, sensitive or emotionally charged customer issues.

However, there is a flip side. Sometimes AI is the smart thing to use, and eliminating human-to-human interaction actually creates a better customer experience. The point is that just because a human could handle a task doesn’t mean they should. 

Before we go further, keep in mind that even if AI should handle an issue, my customer service and customer experience (CX) research finds almost seven out of 10 customers (68%) prefer the phone. So, there are some customers who, regardless of how good AI is, will only talk to a live human being.

Here’s a reality: When a customer simply wants to check their account balance, reset a password, track a package or any other routine, simple task or request, they don’t need to talk to someone. What they really want, even if they don’t realize it, is fast, accurate information and a convenient experience.

The key is recognizing when customers value efficiency over engagement. Even with 68% of customers preferring the phone, they also want convenience and speed. And sometimes, the most convenient experience is one that eliminates unnecessary human interaction.

Smart companies are learning to use both strategically. They are finding a balance. They’re using AI for routine, transactional interactions while making live agents available for situations requiring judgment, creativity or empathy.

The goal isn’t to replace humans with AI. It’s to use each where they excel most. That sometimes means letting technology do what it can do best, even if a human could technically do the job. The customer experience improves when you match the right resource to the customer’s specific need.

That’s why I advocate pushing the digital, AI-infused experience for the right reasons but always – and I emphasize the word always – giving the customer an easy way to connect to a human and continue the conversation.

In the end, most customers don’t care whether their problem is solved by a human or AI. They just want it solved well.

Image credits: Google Gemini, Shep Hyken


Solving the AI Trust Imperative with Provenance

The Digital Fingerprint

LAST UPDATED: January 5, 2026 at 3:33 PM


GUEST POST from Art Inteligencia

We are currently living in the artificial future of 2026, a world where the distinction between human-authored and AI-generated content has become practically invisible to the naked eye. In this era of agentic AI and high-fidelity synthetic media, we have moved past the initial awe of creation and into a far more complex phase: the Trust Imperative. As my friend Braden Kelley has frequently shared in his keynotes, innovation is change with impact, but if the impact is an erosion of truth, we are not innovating — we are disintegrating.

The flood of AI-generated content has created a massive Corporate Antibody response within our social and economic systems. To survive, organizations must adopt Generative Watermarking and Provenance technologies. These aren’t just technical safeguards; they are the new infrastructure of reality. We are shifting from a culture of blind faith in what we see to a culture of verifiable origin.

“Transparency is the only antidote to the erosion of trust; we must build systems that don’t just generate, but testify. If an idea is a useful seed of invention, its origin must be its pedigree.” — Braden Kelley

Why Provenance is the Key to Human-Centered Innovation™

Human-Centered Innovation™ requires psychological safety. In 2026, psychological safety is under threat by “hallucinated” news, deepfake corporate communiques, and the potential for industrial-scale intellectual property theft. When people cannot trust the data in their dashboards or the video of their CEO, the organizational “nervous system” begins to shut down. This is the Efficiency Trap in its most dangerous form: we’ve optimized for speed of content production, but lost the efficiency of shared truth.

Provenance tech — specifically the C2PA (Coalition for Content Provenance and Authenticity) standards — allows us to attach a permanent, tamper-evident digital “ledger” to every piece of media. This tells us who created it, what AI tools were used to modify it, and when it was last verified. It restores the human to the center of the story by providing the context necessary for informed agency.
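The “tamper-evident ledger” idea can be made concrete with a toy hash chain. This is a sketch only: real C2PA manifests use JUMBF containers and X.509 certificate signatures, whereas this illustration substitutes an HMAC with a shared demo key, and every field name and actor below is invented.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch; a real system would use
# per-publisher certificates, not a shared secret.
SECRET = b"demo-signing-key"

def sign_entry(prev_sig: str, entry: dict) -> str:
    """Chain each entry to the previous signature so edits are evident."""
    payload = prev_sig.encode() + json.dumps(entry, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def append(ledger: list, entry: dict) -> None:
    """Add an entry whose signature covers all history before it."""
    prev_sig = ledger[-1]["sig"] if ledger else ""
    ledger.append({"entry": entry, "sig": sign_entry(prev_sig, entry)})

def verify(ledger: list) -> bool:
    """Recompute every signature; any tampering breaks the chain."""
    prev_sig = ""
    for item in ledger:
        if item["sig"] != sign_entry(prev_sig, item["entry"]):
            return False
        prev_sig = item["sig"]
    return True

ledger = []
append(ledger, {"actor": "photographer", "action": "capture"})
append(ledger, {"actor": "editor", "action": "crop", "tool": "ai-assist"})
assert verify(ledger)

ledger[0]["entry"]["actor"] = "impostor"  # rewrite history...
assert not verify(ledger)                 # ...and the broken seal shows
```

Because each signature covers everything before it, changing any recorded step invalidates the rest of the chain, which is the property that makes a provenance record trustworthy rather than merely descriptive.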

Case Study 1: Protecting the Frontline of Journalism

The Challenge: In early 2025, a global news agency faced a crisis when a series of high-fidelity deepfake videos depicting a political coup began circulating in a volatile region. Traditional fact-checking was too slow to stop the viral spread, leading to actual civil unrest.

The Innovation: The agency implemented a camera-to-cloud provenance system. Every image captured by their journalists was cryptographically signed at the moment of capture. Using a public verification tool, viewers could instantly see the “chain of custody” for every frame.

The Impact: By 2026, the agency saw a 50% increase in subscriber trust scores. More importantly, they effectively “immunized” their audience against deepfakes by making the absence of a provenance badge a clear signal of potential misinformation. They turned the Trust Imperative into a competitive advantage.

Case Study 2: Securing Enterprise IP in the Age of Co-Pilots

The Challenge: A Fortune 500 manufacturing firm found that its proprietary design schematics were being leaked through “Shadow AI” — employees using unauthorized generative tools to optimize parts. The company couldn’t tell which designs were protected “useful seeds of invention” and which were tainted by external AI data sets.

The Innovation: They deployed an internal Generative Watermarking system. Every output from authorized corporate AI agents was embedded with an invisible, robust watermark. This watermark tracked the specific human prompter, the model version, and the internal data sources used.

The Impact: The company successfully reclaimed its IP posture. By making the origin of every design verifiable, they reduced legal risk and empowered their engineers to use AI safely, fostering a culture of Human-AI Teaming rather than fear-based restriction.
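The embed-and-extract idea behind such watermarks can be sketched with naive least-significant-bit (LSB) encoding. Production systems like Digimarc’s or Steg.AI’s use learned, compression-robust encodings instead; LSB is fragile and is shown here only to make the concept concrete. The pixel data and metadata tag below are invented.

```python
# Toy LSB watermark: hide a short metadata string in the low bits of
# "pixel" bytes. Overwriting only the lowest bit changes each pixel
# value by at most 1, which is visually imperceptible.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Write message bits (MSB first) into the low bits of pixels."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # keep top 7 bits, set lowest
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes from the low bits of pixels."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b : b + 8]))
        for b in range(0, len(bits), 8)
    )

image = bytearray(range(256))  # stand-in for raw pixel data
meta = b"user:jdoe;model:v4"   # hypothetical prompter/model tag
tagged = embed(image, meta)
assert extract(tagged, len(meta)) == meta
```

A real deployment would also need the robustness discussed above: this naive scheme does not survive compression or cropping, which is precisely the problem the companies in the next section are solving.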

Leading Companies and Startups to Watch

As we navigate 2026, the landscape of provenance is being defined by a few key players. Adobe remains a titan in this space with their Content Authenticity Initiative, which has successfully pushed the C2PA standard into the mainstream. Digimarc has emerged as a leader in “stealth” watermarking that survives compression and cropping. In the startup ecosystem, Steg.AI is doing revolutionary work with deep-learning-based watermarks that are invisible to the eye but indestructible to algorithms. Truepic is the one to watch for “controlled capture,” ensuring the veracity of photos from the moment the shutter clicks. Lastly, Microsoft and Google have integrated these “digital nutrition labels” across their enterprise suites, making provenance a default setting rather than an optional add-on.

Conclusion: The Architecture of Truth

To lead innovation in 2026, you must be more than a creator; you must be a verifier. We cannot allow the “useful seeds of invention” to be choked out by the weeds of synthetic deception. By embracing generative watermarking and provenance, we aren’t just protecting data; we are protecting the human connection that makes change with impact possible.

If you are looking for an innovation speaker to help your organization solve the Trust Imperative and navigate Human-Centered Innovation™, I suggest you look no further than Braden Kelley. The future belongs to those who can prove they are part of it.

Frequently Asked Questions

What is the difference between watermarking and provenance?

Watermarking is a technique to embed information (visible or invisible) directly into content to identify its source. Provenance is the broader history or “chain of custody” of a piece of media, often recorded in metadata or a ledger, showing every change made from creation to consumption.

Can AI-generated watermarks be removed?

While no system is 100% foolproof, modern watermarking from companies like Steg.AI or Digimarc is designed to be highly “robust,” meaning it survives editing, screenshots, and even re-recording. Provenance standards like C2PA use cryptography to ensure that if the data is tampered with, the “broken seal” is immediately apparent.

Why does Braden Kelley call trust a “competitive advantage”?

In a market flooded with low-quality or deceptive content, “Trust” becomes a premium. Organizations that can prove their content is authentic and their AI is transparent will attract higher-quality talent and more loyal customers, effectively bypassing the friction of skepticism that slows down their competitors.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini


Just Because You Can Use AI Doesn’t Mean You Should


GUEST POST from Shep Hyken

I’m often asked, “What should AI be used for?” While there is much that AI can do to support businesses in general, it’s obvious that I’m being asked how it relates to customer service and customer experience (CX). The true meaning of the question is more about what tasks AI can do to support a customer, thereby potentially eliminating the need for a live agent who deals directly with customers.

First, as the title of this article implies, just because AI can do something, it doesn’t mean it should. Yes, AI can handle many customer support issues, but even if every customer were willing to accept that AI can deliver good support, there are some sensitive and complicated issues for which customers would prefer to talk to a human.

AI Shep Hyken Cartoon

Additionally, consider that, based on my annual customer experience research, 68% of customers (that’s almost seven out of 10) prefer the phone as their primary means of communication with a company or brand. However, another finding in the report is worth mentioning: 34% of customers stopped doing business with a company because self-service options were not provided. Some customers insist on the self-service option, but at the same time, they want to be transferred to a live agent when appropriate.

AI works well for simple issues, such as password resets, tracking orders, appointment scheduling and answering basic or frequently asked questions. Humans are better suited for handling complaints and issues that need empathy, complex problem-solving situations that require judgment calls and communicating bad news.

An AI-fueled chatbot can answer many questions, but when a medical patient contacts the doctor’s office about test results related to a serious issue, they will likely want to speak with a nurse or doctor, not a chatbot.

Consider These Questions Before Implementing AI For Customer Interactions

AI for addressing simple customer issues has become affordable for even the smallest businesses, and an increasing number of customers are willing to use AI-powered customer support for the right reasons. Consider these questions before implementing AI for customer interactions:

  1. Is the customer’s question routine or fact-based?
  2. Does it require empathy, emotion, understanding and/or judgment (emotional intelligence)?
  3. Could the wrong answer cause a problem or frustrate the customer?
  4. As you think about the reasons customers call, which ones would they feel comfortable having AI handle?
  5. Do you have an easy, seamless way for the customer to be transferred to a human when needed?
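As a sketch only, the checklist above can be reduced to a simple routing rule. The topic categories and flags below are hypothetical examples, not a recommended production design; the point is that the escalation path to a human is built in, never bolted on.

```python
# Hypothetical triage rule distilled from the five questions: send an
# inquiry to AI only when it is routine, low-risk, and emotion-free.
# Everything else, and anything uncertain, goes to a person.

ROUTINE = {"password reset", "order tracking", "account balance", "faq"}

def route(topic: str, needs_empathy: bool, high_stakes: bool) -> str:
    """Return 'ai' or 'human' for an incoming customer inquiry."""
    if topic in ROUTINE and not needs_empathy and not high_stakes:
        return "ai"      # fast, accurate, convenient
    return "human"       # judgment, empathy, or risk involved

assert route("password reset", False, False) == "ai"
assert route("billing dispute", True, False) == "human"
assert route("faq", False, True) == "human"  # a wrong answer is costly
```

Note that the rule is deliberately conservative: a routine topic still escalates to a human when emotion or stakes are involved, matching questions 2 and 3 above.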

The point is, regardless of how capable the technology is, it doesn’t mean it is best suited to deliver what the customer wants. Live agents can “read the customer” and know how to effectively communicate and empathize with them. AI can’t do that … yet. The key isn’t choosing between AI and humans. It’s knowing when to use each one.

Image credits: Google Gemini, Shep Hyken


Top 100 Innovation and Transformation Articles of 2025


2021 marked the rebirth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being rebranded as Innovation Excellence and ultimately sold to DisruptorLeague.com.

Thanks to an outpouring of support, I’ve ignited the fuse of this new multiple-author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate, we’ve pulled together the Top 100 Innovation and Transformation Articles of 2025 from our archive of over 3,200 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Authors of 2025, and as the volume of this blog has grown, we have brought back our monthly article ranking to complement this annual one.

But enough delay, here are the 100 most popular innovation and transformation posts of 2025.

Did your favorite make the cut?

1. A Toolbox for High-Performance Teams – Building, Leading and Scaling – by Stefan Lindegaard

2. Top 10 American Innovations of All Time – by Art Inteligencia

3. The Education Business Model Canvas – by Arlen Meyers, M.D.

4. What is Human-Centered Change? – by Braden Kelley

5. How Netflix Built a Culture of Innovation – by Art Inteligencia

6. McKinsey is Wrong That 80% Companies Fail to Generate AI ROI – by Robyn Bolton

7. The Great American Contraction – by Art Inteligencia

8. A Case Study on High Performance Teams – New Zealand’s All Blacks – by Stefan Lindegaard

9. Act Like an Owner – Revisited! – by Shep Hyken

10. Should a Bad Grade in Organic Chemistry be a Doctor Killer? – by Arlen Meyers, M.D.

11. Charting Change – by Braden Kelley

12. Human-Centered Change – by Braden Kelley

13. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

14. SpaceX is a Masterclass in Innovation Simplification – by Pete Foley

15. Top 5 Future Studies Programs – by Art Inteligencia

16. Marriott’s Approach to Customer Service – by Shep Hyken

17. The Role of Stakeholder Analysis in Change Management – by Art Inteligencia

18. The Triple Bottom Line Framework – by Dainora Jociute

19. The Nordic Way of Leadership in Business – by Stefan Lindegaard

20. Nine Innovation Roles – by Braden Kelley

21. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

22. Designing an Innovation Lab: A Step-by-Step Guide – by Art Inteligencia

23. FutureHacking™ – by Braden Kelley

24. The 6 Building Blocks of Great Teams – by David Burkus

25. Overcoming Resistance to Change – Embracing Innovation at Every Level – by Chateau G Pato

26. Human-Centered Change – Free Downloads – by Braden Kelley

27. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

28. Quote Posters – Curated by Braden Kelley

29. Stoking Your Innovation Bonfire – by Braden Kelley

30. Innovation or Not – Kawasaki Corleo – by Art Inteligencia




31. Top Six Trends for Innovation Management in 2025 – by Jesse Nieminen

32. Fear is a Leading Indicator of Personal Growth – by Mike Shipulski

33. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

34. The Most Challenging Obstacles to Achieving Artificial General Intelligence – by Art Inteligencia

35. The Ultimate Guide to the Phase-Gate Process – by Dainora Jociute

36. Case Studies in Human-Centered Design – by Art Inteligencia

37. Transforming Leadership to Reshape the Future of Innovation – Exclusive Interview with Brian Solis

38. Leadership Best Quacktices from Oregon’s Dan Lanning – by Braden Kelley

39. This AI Creativity Trap is Gutting Your Growth – by Robyn Bolton

40. A 90% Project Failure Rate Means You’re Doing it Wrong – by Mike Shipulski

41. Reversible versus Irreversible Decisions – by Farnham Street

42. Next Generation Leadership Traits and Characteristics – by Stefan Lindegaard

43. Top 40 Innovation Bloggers of 2024 – Curated by Braden Kelley

44. Benchmarking Innovation Performance – by Noel Sobelman

45. Three Executive Decisions for Strategic Foresight Success or Failure – by Robyn Bolton

46. Back to Basics for Leaders and Managers – by Robyn Bolton

47. You Already Have Too Many Ideas – by Mike Shipulski

48. Imagination versus Knowledge – Is imagination really more important? – by Janet Sernack

49. Building a Better Change Communication Plan – by Braden Kelley

50. 10 Free Human-Centered Change™ Tools – by Braden Kelley




51. Why Business Transformations Fail – by Robyn Bolton

52. Overcoming the Fear of Innovation Failure – by Stefan Lindegaard

53. What is the difference between signals and trends? – by Art Inteligencia

54. Unintended Consequences. The Hidden Risk of Fast-Paced Innovation – by Pete Foley

55. Giving Your Team a Sense of Shared Purpose – by David Burkus

56. The Top 10 Irish Innovators Who Shaped the World – by Art Inteligencia

57. The Role of Emotional Intelligence in Effective Change Leadership – by Art Inteligencia

58. Is OpenAI About to Go Bankrupt? – by Art Inteligencia

59. Sprint Toward the Innovation Action – by Mike Shipulski

60. Innovation Management ISO 56000 Series Explained – by Diana Porumboiu

61. How to Make Navigating Ambiguity a Super Power – by Robyn Bolton

62. 3 Secret Saboteurs of Strategic Foresight – by Robyn Bolton

63. Four Major Shifts Driving the 21st Century – by Greg Satell

64. Problems vs. Solutions vs. Complaints – by Mike Shipulski

65. The Power of Position Innovation – by John Bessant

66. Three Ways Strategic Idleness Accelerates Innovation and Growth – by Robyn Bolton

67. Case Studies of Companies Leading in Inclusive Design – by Chateau G Pato

68. Recognizing and Celebrating Small Wins in the Change Process – by Chateau G Pato

69. Parallels Between the 1920’s and Today Are Frightening – by Greg Satell

70. The Art of Adaptability: How to Respond to Changing Market Conditions – by Art Inteligencia

71. Do you have a fixed or growth mindset? – by Stefan Lindegaard

72. Making People Matter in AI Era – by Janet Sernack

73. The Role of Prototyping in Human-Centered Design – by Art Inteligencia

74. Turning Bold Ideas into Tangible Results – by Robyn Bolton

75. Yes the Comfort Zone Can Be Your Best Friend – by Stefan Lindegaard

76. Increasing Organizational Agility – by Braden Kelley

77. Innovation is Dead. Now What? – by Robyn Bolton

78. Four Reasons Change Resistance Exists – by Greg Satell

79. Eight I’s of Infinite Innovation – Revisited – by Braden Kelley

80. Difference Between Possible, Potential and Preferred Futures – by Art Inteligencia




81. Resistance to Innovation – What if electric cars came first? – by Dennis Stauffer

82. Science Says You Shouldn’t Waste Too Much Time Trying to Convince People – by Greg Satell

83. Why Context Engineering is the Next Frontier in AI – by Braden Kelley and Art Inteligencia

84. How to Write a Failure Resume – by Arlen Meyers, M.D.

85. The Five Keys to Successful Change – by Braden Kelley

86. Four Forms of Team Motivation – by David Burkus

87. Why Revolutions Fail – by Greg Satell

88. Top 40 Innovation Bloggers of 2023 – Curated by Braden Kelley

89. The Entrepreneurial Mindset – by Arlen Meyers, M.D.

90. Six Reasons Norway is a Leader in High-Performance Teamwork – by Stefan Lindegaard

90. Top 100 Innovation and Transformation Articles of 2024 – Curated by Braden Kelley

91. The Worst British Customer Experiences of 2024 – by Braden Kelley

92. Human-Centered Change & Innovation White Papers – by Braden Kelley

93. Encouraging a Growth Mindset During Times of Organizational Change – by Chateau G Pato

94. Inside the Mind of Jeff Bezos – by Braden Kelley

95. Learning from the Failure of Quibi – by Greg Satell

96. Dare to Think Differently – by Janet Sernack

97. The End of the Digital Revolution – by Greg Satell

98. Your Guidebook to Leading Human-Centered Change – by Braden Kelley

99. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

100. Trust as a Competitive Advantage – by Greg Satell

Curious which article just missed the cut? Well, here it is just for fun:

101. Building Cross-Functional Collaboration for Breakthrough Innovations – by Chateau G Pato

These are the Top 100 innovation and transformation articles of 2025 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2025.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all the innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Outcome-Driven Innovation in the Age of Agentic AI

The North Star Shift

LAST UPDATED: January 5, 2026 at 5:29 PM

Outcome-Driven Innovation in the Age of Agentic AI

by Braden Kelley

In a world of accelerating change, the rhetoric around Artificial Intelligence often centers on its incredible capacity for optimization. We hear about AI designing new materials, orchestrating complex logistics, and even writing entire software applications. This year, the technology has truly matured into agentic AI, capable of pursuing and achieving defined objectives with unprecedented autonomy. But as a specialist in Human-Centered Innovation™ (which pairs well with Outcome-Driven Innovation), I pose two crucial questions: Who is defining these outcomes, and what impact do they truly have on the human experience?

The real innovation story of 2026 will be not just that AI can optimize against defined outcomes, but that we, as leaders, finally have the imperative — and the tools — to master Outcome-Driven Innovation and Outcome-Driven Change. If innovation is change with impact, then our impact is only as profound as the outcomes we choose to pursue. Without thoughtful, human-centered specifications, AI simply becomes the most efficient way to achieve the wrong goals, leading us directly into the Efficiency Trap. This is where organizations must overcome the Corporate Antibody response that resists fundamental shifts in how we measure success.

Revisiting and Applying Outcome-Driven Change in the Age of Agentic AI

As we integrate agentic AI into our organizations, the principles of Outcome-Driven Change (ODC) I first introduced in 2018 are more vital than ever. The core of the ODC framework rests on the alignment of three critical domains: Cognitive (Thinking), Affective (Feeling), and Conative (Doing). Today, AI agents are increasingly assuming the “conative” role, executing tasks and optimizing workflows at superhuman speeds. However, as I have always maintained, true success only arrives when what is being done is in harmony with what the people in the organization and customer base think and feel.

Outcome-Driven Change Framework

If an AI agent’s autonomous actions are misaligned with human psychological readiness or emotional context, it will trigger a Corporate Antibody response that kills innovation. To practice genuine Human-Centered Change™, we must ensure that AI agents are directed to pursue outcomes that are not just numerically efficient, but humanly resonant. When an AI’s “doing” matches the collective thinking and feeling of the workforce, we move beyond the Efficiency Trap and create lasting change with impact.

“In the age of agentic AI, the true scarcity is not computational power; it is the human wisdom to define the right ‘North Star’ outcomes. An AI optimizing for the wrong goal is a digital express train headed in the wrong direction – efficient, but ultimately destructive.” — Braden Kelley

From Feature-Building to Outcome-Harvesting

For decades, many organizations have been stuck in a cycle of “feature-building.” Product teams were rewarded for shipping more features, marketing for launching more campaigns, and R&D for creating more patents. The focus was on output, not ultimate impact. Outcome-Driven Innovation shifts this paradigm. It forces us to ask: What human or business value are we trying to create? What measurable change in behavior or well-being are we seeking?

Agentic AI, when properly directed, becomes an unparalleled accelerant for this shift. Instead of building a new feature and hoping it works, we can now tell an AI agent, “Achieve Outcome X for Persona Y, within Constraints Z,” and it will explore millions of pathways to get there. This frees human teams from the tactical churn and allows them to focus on the truly strategic work: deeply understanding customer needs, identifying ethical guardrails, and defining aspirational outcomes that genuinely drive Human-Centered Innovation™.
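To make the “Achieve Outcome X for Persona Y, within Constraints Z” pattern concrete, here is a minimal sketch of what a human-defined outcome specification and guardrail check might look like. All names here (`OutcomeSpec`, `within_constraints`, the constraint keys) are hypothetical illustrations, not the API of any real agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeSpec:
    """A human-defined 'North Star' outcome handed to an AI agent.

    Field names are illustrative only, not from any real agent platform.
    """
    outcome: str      # the measurable change we want (Outcome X)
    persona: str      # who the outcome serves (Persona Y)
    constraints: dict = field(default_factory=dict)  # guardrails (Constraints Z)

def within_constraints(proposal: dict, spec: OutcomeSpec) -> bool:
    """Reject any agent proposal that exceeds a declared guardrail limit."""
    for key, limit in spec.constraints.items():
        if proposal.get(key, 0) > limit:
            return False
    return True

# Example: the sustainability outcome with two business guardrails
spec = OutcomeSpec(
    outcome="reduce virgin material usage 50% by 2028",
    persona="sustainability lead",
    constraints={"cost_increase_pct": 5, "quality_defect_rate": 0.01},
)

ok = within_constraints({"cost_increase_pct": 3, "quality_defect_rate": 0.005}, spec)
bad = within_constraints({"cost_increase_pct": 12}, spec)
print(ok, bad)  # True False
```

The point of the sketch is the division of labor: humans author the `OutcomeSpec`, while the agent is free to explore any pathway that passes the guardrail check.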

Case Study 1: Sustainable Manufacturing and the “Circular Economy” Outcome

The Challenge: A major electronics manufacturer in early 2025 aimed to reduce its carbon footprint but struggled with the complexity of optimizing its global supply chain, product design, and end-of-life recycling simultaneously. Traditional methods led to incremental, siloed improvements.

The Outcome-Driven Approach: They defined a bold outcome: “Achieve a 50% reduction in virgin material usage across all product lines by 2028, while maintaining profitability and product quality.” They then deployed an agentic AI system to explore new material combinations, reverse logistics networks, and redesign possibilities. This AI was explicitly optimized to achieve the circular economy outcome.

The Impact: The AI identified design changes that led to a 35% reduction in material waste within 18 months, far exceeding human predictions. It also found pathways to integrate recycled content into new products without compromising durability. The organization moved from a reactive “greenwashing” approach to proactive, systemic innovation driven by a clear, human-centric environmental outcome.

Case Study 2: Personalized Education and “Mastery Outcomes”

The Challenge: A national education system faced stagnating literacy rates, despite massive investments in new curricula. The focus was on “covering material” rather than ensuring true student understanding and application.

The Outcome-Driven Approach: They shifted their objective to “Ensure 90% of students achieve demonstrable mastery of core literacy skills by age 10.” An AI tutoring system was developed, designed to optimize for individual student mastery outcomes, rather than just quiz scores. The AI dynamically adapted learning paths, identified specific knowledge gaps, and even generated custom exercises based on each child’s learning style.

The Impact: Within two years, participating schools saw a 25% improvement in mastery rates. The AI became a powerful co-pilot for teachers, freeing them from repetitive grading and allowing them to focus on high-touch mentorship. This demonstrated how AI, directed by human-defined learning outcomes, can empower both educators and students, moving beyond the Efficiency Trap of standardized testing.

Leading Companies and Startups to Watch

As 2026 solidifies Outcome-Driven Innovation, several entities are paving the way. Amplitude and Pendo are evolving their product analytics to connect feature usage directly to customer outcomes. In the AI space, Anthropic’s work on “Constitutional AI” is fascinating, as it seeks to embed human-defined ethical outcomes directly into the AI’s decision-making. Glean and Perplexity AI are creating agentic knowledge systems that help organizations define and track complex outcomes across their internal data. Startups like Metaculus are even democratizing the prediction of outcomes, allowing collective intelligence to forecast the impact of potential innovations, providing invaluable insights for human decision-makers. These players are all contributing to the core goal: helping humans define the right problems for AI to solve.

Conclusion: The Human Art of Defining the Future

The year 2026 is a pivotal moment. Agentic AI gives us unprecedented power to optimize, but with great power comes great responsibility — the responsibility to define truly meaningful outcomes. This is not a technical challenge; it is a human one. It requires deep empathy, strategic foresight, and the courage to challenge old metrics. It demands leaders who understand that the most impactful Human-Centered Innovation™ starts with a clear, ethically grounded North Star.

If you’re an innovation leader trying to navigate this future, remember: the future is not about what AI can do, but about what outcomes we, as humans, choose to pursue with it. Let’s make sure those outcomes serve humanity first.

Frequently Asked Questions

What is “Outcome-Driven Innovation”?

Outcome-Driven Innovation (ODI) is a strategic approach that focuses on defining and achieving specific, measurable human or business outcomes, rather than simply creating new features or products. AI then optimizes for these defined outcomes.

How does agentic AI change the role of human leaders in ODI?

Agentic AI frees human leaders from tactical execution and micro-management, allowing them to focus on the higher-level strategic work of identifying critical problems, understanding human needs, and defining the ethical, impactful outcomes for AI to pursue.

What is the “Efficiency Trap” in the context of AI and outcomes?

The Efficiency Trap occurs when AI is used to optimize for speed or cost without first ensuring that the underlying outcome is meaningful and human-centered. This can lead to highly efficient processes that achieve undesirable or even harmful results, ultimately undermining trust and innovation.

Image credits: Braden Kelley, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of December 2025

Top 10 Human-Centered Change & Innovation Articles of December 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are December’s ten most popular innovation posts:

  1. Is OpenAI About to Go Bankrupt? — by Chateau G Pato
  2. The Rise of Human-AI Teaming Platforms — by Art Inteligencia
  3. 11 Reasons Why Teams Struggle to Collaborate — by Stefan Lindegaard
  4. How Knowledge Emerges — by Geoffrey Moore
  5. Getting the Most Out of Quiet Employees in Meetings — by David Burkus
  6. The Wood-Fired Automobile — by Art Inteligencia
  7. Was Your AI Strategy Developed by the Underpants Gnomes? — by Robyn Bolton
  8. Will our opinion still really be our own in an AI Future? — by Pete Foley
  9. Three Reasons Change Efforts Fail — by Greg Satell
  10. Do You Have the Courage to Speak Up Against Conformity? — by Mike Shipulski


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

What Are We Going to Do Now with GenAI?

What Are We Going to Do Now With GenAI?

GUEST POST from Geoffrey A. Moore

In 2023 we simply could not stop talking about Generative AI. But in 2024 the question for each enterprise — and this includes yours as well — became, and remains today: What are we going to do about it? Tough questions call for tough frameworks, so let’s run this one through the Hierarchy of Powers to see if it can shine some light on what might be your company’s best bet.

Category Power

Gen AI can have an impact anywhere in the Category Maturity Life Cycle, but the way it does so differs depending on where your category is, as follows:

  • Early Market. GenAI will almost certainly be a differentiating ingredient that is enabling a disruptive innovation, and you need to be on the bleeding edge. Think ChatGPT.
  • Crossing the chasm. Nailing your target use case is your sole priority, so you would use GenAI if, and only if, it helped you do so, and avoid getting distracted by its other bells and whistles. Think Khan Academy at the school district level.
  • Inside the tornado. Grabbing as much market share as you can is now the game to play, and GenAI-enabled features can help you do so provided they are fully integrated (no “some assembly required”). You cannot afford to slow your adoption down just at the time it needs to be at full speed. Think Microsoft CoPilot.
  • Growth Main Street (category still growing double digits). Market share boundaries are settling in, so the goal now is to grow your patch as fast as you can, solidifying your position and taking as much share as you can from the also-rans. Adding GenAI to the core product can provide a real boost as long as the disruption is minimal. Think Salesforce CRM.
  • Mature Main Street (category stabilized, single-digit growth). You are now marketing primarily to your installed base, secondarily seeking to pick up new logos as they come into play. GenAI can give you a midlife kicker provided you can use it to generate meaningful productivity gains. Think Adobe Photoshop.
  • Late Main Street (category declining, negative growth). The category has never been more profitable, so you are looking to extend its life in as low-cost a way as you can. GenAI can introduce innovative applications that otherwise would never occur to your end users. Think HP home printing.

Company Power

There are two dimensions of company power to consider when analyzing the ROI from a GenAI investment, as follows:

  • Market Share Status. Are you the market share leader, a challenger, or simply a participant? As a challenger, you can use GenAI to disrupt the market pecking order provided you differentiate in a way that is challenging for the leader to copy. On the other hand, as a leader, you can use GenAI to neutralize the innovations coming from challengers provided you can get it to market fast enough to keep the ecosystem in your camp. As a participant, you would add GenAI only if it were your single point of differentiation (as a low-share participant, your R&D budget cannot fund more than one).
  • Default Operating Model. Is your core business better served by the complex systems operating model (typical for B2B companies with hundreds to thousands of large enterprises for customers) or the volume operations operating model (typical for B2C companies with hundreds of thousands to millions of consumers)? The complex systems model has sufficient margins to invest in professional services across the entire ownership life cycle, from design consulting to installation to expansion. You are going to need deep in-house expertise to win big in this game. By contrast, GenAI deployed via the volume operations model has to work out-of-the-box. Consumers have neither the courage nor the patience to work through any disconnects.

Market Power

Whereas category share leaders benefit most from going broad, market segment leaders win big by going deep. The key tactic is to overdo it on the use cases that mean the most to your target customers, taking your offer beyond anything reasonable for a category leader to copy. GenAI can certainly be a part of this approach, as the two slides below illustrate:

Market Segmentation for Complex Systems

In the complex systems operating model, GenAI should accentuate the differentiation of your whole product, the complete solution to whatever problem you are targeting. That might mean, for example, taking your Large Language Model to a level of specificity that would normally not be warranted. This sets you apart from the incumbent vendor who has nothing like what you offer as well as from other technology vendors who have not embraced your target segment’s specific concerns. Think CrowdStrike’s Charlotte AI for cybersecurity analysis.

Market Segmentation for Volume Operations

In the volume operations operating model, GenAI should accentuate the differentiation of your brand promise by overdelivering on the relevant value discipline. Once again, it is critical not to get distracted by shiny objects—you want to differentiate in one quadrant only, although you can use GenAI in the other three for neutralization purposes. For Performance, think knowledge discovery. For Productivity, think writing letters. For Economy, think tutoring. For Convenience, think gift suggestions.

Offer Power

Everybody wants to “be innovative,” but it is worth stepping back a moment to ask, how do we get a Return on Innovation? Compared to its financial cousin, this kind of ROI is more of a leading indicator and thus of more strategic value. Basically, it comes in three forms:

  1. Differentiation. This creates customer preference, the goal being not just to be different but to create a clear separation from the competition, one that they cannot easily emulate. Think OpenAI.
  2. Neutralization. This closes the gap between you and a competitor who is taking market share away from you, the goal being to get to “good enough, fast enough,” thereby allowing your installed base to stay loyal. Think Google Bard.
  3. Optimization. This reduces the cost while maintaining performance, the goal being to expand the total available market. Think Edge GenAI on PCs and Macs.

For most of us, GenAI will be an added ingredient rather than a core product, which makes the ROI question even more important. The easiest way to waste innovation dollars is to spend them on differentiation that does not go far enough, neutralization that does not go fast enough, or optimization that does not go deep enough. So, the key lesson here is, pick one and only one as your ROI goal, and then go all in to get a positive return.

Execution Power

How best to incorporate GenAI into your existing enterprise depends on which zone of operations you are looking to enhance, as illustrated by the zone management framework below:

Zone Management Framework

If you are unsure exactly what to do, assign the effort to the Incubation Zone and put that team on the clock to come up with a good answer as fast as possible. If you can incorporate GenAI directly into your core business’s offerings at relatively low risk, by all means do so, as it is the current hot ticket, and assign it to the Performance Zone. If there is not a good fit, consider using it internally instead to improve your own productivity, assigning it to the Productivity Zone. Finally, although it is awfully early days for this, if you are convinced it is an absolutely essential ingredient in a big bet you feel compelled to make, then assign it to the Transformation Zone and go all in. Again, the overall point is to manage your investment in GenAI out of one zone and one zone only, as the success metrics for each zone are incompatible with those of the other three.

One final point. Embracing anything as novel as GenAI has to feel risky. I submit, however, that in 2025 not building upon meaningful GenAI action taken in 2024 is even more so.

That’s what I think. What do you think?

Image Credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Can AI Replace the CEO?

A Day in the Life of the Algorithmic Executive

LAST UPDATED: December 28, 2025 at 1:56 PM

Can AI Replace the CEO?

GUEST POST from Art Inteligencia

We are entering an era where the corporate antibody – that natural organizational resistance to disruptive change – is meeting its most formidable challenger yet: the AI CEO. For years, we have discussed the automation of the factory floor and the back office. But what happens when the “useful seeds of invention” are planted in the corner office?

The suggestion that an algorithm could lead a company often triggers an immediate emotional response. Critics argue that leadership requires soul, while proponents point to the staggering inefficiencies, biases, and ego-driven errors that plague human executives. As an advocate for Innovation = Change with Impact, I believe we must look beyond the novelty and analyze the strategic logic of algorithmic leadership.

“Leadership is not merely a collection of decisions; it is the orchestration of human energy toward a shared purpose. An AI can optimize the notes, but it cannot yet compose the symphony or inspire the orchestra to play with passion.”

Braden Kelley

The Efficiency Play: Data Without Drama

The argument for an AI CEO rests on the pursuit of Truly Actionable Data. Humans are limited by cognitive load, sleep requirements, and emotional variance. An AI executive, by contrast, operates in Future Present mode — constantly processing global market shifts, supply chain micro-fluctuations, and internal sentiment analysis in real-time. It doesn’t have a “bad day,” and it doesn’t make decisions based on who it had lunch with.

Case Study 1: NetDragon Websoft and the “Tang Yu” Experiment

The Experiment: A Virtual CEO in a Gaming Giant

In 2022, NetDragon Websoft, a major Chinese gaming and mobile app company, appointed an AI-powered humanoid robot named Tang Yu as the Rotating CEO of its subsidiary. This wasn’t just a marketing stunt; it was a structural integration into the management flow.

The Results

Tang Yu was tasked with streamlining workflows, improving the quality of work tasks, and enhancing the speed of execution. Over the following year, the company reported that Tang Yu helped the subsidiary outperform the broader Hong Kong stock market. Serving as a real-time data hub, Tang Yu was also required to sign off on document approvals and risk assessments. The experiment suggested that in data-rich environments where speed of iteration is the primary competitive advantage, an algorithmic leader can significantly reduce operational friction.

Case Study 2: Dictador’s “Mika” and Brand Stewardship

The Challenge: The Face of Innovation

Dictador, a luxury rum producer, took the concept a step further by appointing Mika, a sophisticated female humanoid robot, as their CEO. Unlike Tang Yu, who worked mostly within internal systems, Mika serves as a public-facing brand steward and high-level decision-maker for their DAO (Decentralized Autonomous Organization) projects.

The Insight

Mika’s role highlights a different facet of leadership: Strategic Pattern Recognition. Mika analyzes consumer behavior and market trends to select artists for bottle designs and lead complex blockchain-based initiatives. While Mika lacks human empathy, the company uses her to demonstrate unbiased precision. However, it also exposes the human-AI gap: while Mika can optimize a product launch, she cannot yet navigate the nuanced political and emotional complexities of a global pandemic or a social crisis with the same grace as a seasoned human leader.

Leading Companies and Startups to Watch

The space is rapidly maturing beyond experimental robot figures. Quantive (with StrategyAI) is building the “operating system” for the modern CEO, connecting KPIs to real-work execution. Microsoft is positioning its Copilot ecosystem to act as a “Chief of Staff” to every executive, effectively automating the data-gathering and synthesis parts of the role. Watch startups like Tessl and Vapi, which are focusing on “Agentic AI” — systems that don’t just recommend decisions but have the autonomy to execute them across disparate platforms.

The Verdict: The Hybrid Future

Will AI replace the CEO? My answer is: not the great ones. AI will certainly replace the transactional CEO — the executive whose primary function is to crunch numbers, approve budgets, and monitor performance. These tasks are ripe for automation because they represent 19th-century management techniques.

However, the transformational CEO — the one who builds culture, navigates ethical gray areas, and creates a sense of belonging — will find that AI is their greatest ally. We must move from fearing replacement to mastering Human-AI Teaming. The CEOs of 2030 will be those who use AI to handle the complexity of the business so they can focus on the humanity of the organization.

Frequently Asked Questions

Can an AI legally serve as a CEO?

Currently, most corporate law jurisdictions require a natural person to serve as a director or officer for liability and accountability reasons. AI “CEOs” like Tang Yu or Mika often operate under the legal umbrella of a human board or chairman who retains ultimate responsibility.

What are the biggest risks of an AI CEO?

The primary risks include Algorithmic Bias (reinforcing historical prejudices found in the data), Lack of Crisis Adaptability (AI struggles with “Black Swan” events that have no historical precedent), and the Loss of Employee Trust if leadership feels cold and disconnected.

How should current CEOs prepare for AI leadership?

Leaders must focus on “Up-skilling for Empathy.” They should delegate data-heavy reporting to AI systems and re-invest that time into Culture Architecture and Change Management. The goal is to become an expert at Orchestrating Intelligence — both human and synthetic.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.