Category Archives: Technology

We Must Hold AI Accountable

GUEST POST from Greg Satell

About ten years ago, IBM invited me to talk with some key members of the Watson team, when the triumph of creating a machine that could beat the best human players at the game show Jeopardy! was still fresh. I wrote in Forbes at the time that we were entering a new era of cognitive collaboration between humans, computers and other humans.

One thing that struck me was how similar the moment seemed to aviation legend Chuck Yeager’s description of the advent of fly-by-wire four decades earlier, in which pilots would no longer operate the aircraft directly, but interface with a computer that flew the plane. Many of the macho “flyboys” weren’t able to trust the machines and couldn’t adapt.

Now, with the launch of ChatGPT, Bill Gates has announced that the age of AI has begun and, much like those old flyboys, we’re all going to struggle to adapt. Our success will rely not only on our ability to learn new skills and work in new ways, but also on the extent to which we are able to trust our machine collaborators. To reach its potential, AI will need to become accountable.

Recognizing Data Bias

With humans, we work diligently to construct safe and constructive learning environments. We design curriculums, carefully selecting materials, instructors and students to try to get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.

Machines also have a learning environment called a “corpus.” If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.
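To make the idea concrete, here is a deliberately toy sketch (not a description of any real system): a nearest-centroid classifier that "learns" only from the labeled corpus it is shown. The feature names and numbers are hypothetical; the point is that a skewed corpus produces a skewed model.

```python
# Toy illustration: the corpus IS the learning environment.
# Features are hypothetical (e.g. ear pointiness, snout length).

def train_centroids(corpus):
    """Average the feature vectors for each label in the corpus."""
    sums, counts = {}, {}
    for features, label in corpus:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is closest to the new example."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Skewed or unrepresentative examples here would yield a skewed model.
corpus = [([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
          ([0.2, 0.9], "dog"), ([0.3, 0.8], "dog")]
model = train_centroids(corpus)
print(classify(model, [0.85, 0.25]))  # prints "cat"
```

Nothing in the code knows what a "cat" is; it only reflects the corpus it was given, which is exactly why the composition of that corpus matters so much.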

However, the process can go horribly awry. A famous case is Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform in 2016. In under a day, Tay went from friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Bias in the learning corpus is far more common than we often realize. Do an image search for the word “professional haircut” and you will get almost exclusively pictures of white men. Do the same for “unprofessional haircut” and you will see much more racial and gender diversity.

It’s not hard to figure out why this happens. Editors writing articles about haircuts portray white men in one way and other genders and races in another. When we query machines, we inevitably find our own biases baked in.

Accounting For Algorithmic Bias

A second major source of bias results from how decision-making models are designed. Consider the case of Sarah Wysocki, a fifth-grade teacher who — despite being lauded by parents, students, and administrators alike — was fired by the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Yet it’s not hard to imagine how it could happen. If a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, fail to register and may even count against her. Good human managers recognize outliers; algorithms generally aren’t designed that way.

In other cases, models are constructed according to what data is easiest to acquire, or the model is overfit to a specific set of cases and then applied too broadly. In 2013, Google Flu Trends predicted almost twice as many flu cases as there actually were. What appears to have happened is that increased media coverage about Google Flu Trends led to more searches by people who weren’t sick. The algorithm was never designed to take itself into account.

The simple fact is that an algorithm must be designed one way or another. Not every possible contingency can be pursued. Choices have to be made, and bias will inevitably creep in. Mistakes happen. The key is not to eliminate error, but to make our systems accountable through explainability, auditability and transparency.
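One way to make explainability concrete is to build it in by construction. The sketch below is purely hypothetical (the factors and weights are illustrative, not drawn from any real evaluation system): a linear score that reports each factor's contribution, so a human can audit why a given decision came out the way it did.

```python
# Hypothetical "explainable by construction" scoring model.
# Factors and weights are illustrative only.

WEIGHTS = {"test_scores": 0.5, "peer_review": 0.3, "parent_feedback": 0.2}

def explain_score(factors):
    """Return the overall score plus a per-factor audit trail."""
    contributions = {k: WEIGHTS[k] * v for k, v in factors.items()}
    total = sum(contributions.values())
    return total, contributions

score, why = explain_score(
    {"test_scores": 0.4, "peer_review": 0.9, "parent_feedback": 0.95})
print(round(score, 2))  # prints 0.66
for factor, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {c:+.2f}")  # auditable breakdown, largest first
```

A system like the one that fired Sarah Wysocki becomes contestable the moment each decision ships with a breakdown like this, rather than a single opaque number.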

To Build An Era Of Cognitive Collaboration We First Need To Build Trust

In 2020, Ofqual, the authority that administers A-Level college entrance exams in the UK, found itself mired in scandal. Unable to hold live exams because of Covid-19, it designed and deployed an algorithm that based scores partly on the historical performance of the schools students attended, with the unintended consequence that already-disadvantaged students found themselves further penalized by artificially deflated scores.

The outcry was immediate, but in a sense the Ofqual case is a happy story. Because the agency was transparent about how the algorithm was constructed, the source of the bias was quickly revealed, corrective action was taken in a timely manner, and much of the damage was likely mitigated. As Linus’s Law advises, “given enough eyeballs, all bugs are shallow.”

The age of artificial intelligence requires us to collaborate with machines, leveraging their capabilities to better serve other humans. To make that collaboration successful, however, it needs to take place in an atmosphere of trust. Machines, just like humans, need to be held accountable; their decisions and insights can’t be a “black box.” We need to be able to understand where their judgments come from and how their decisions are made.

Senator Schumer worked on legislation to promote more transparency in 2024, but that is only a start and the new administration has pushed the pause button on AI regulation. The real change has to come from within ourselves and how we see our relationships with the machines we create. Marshall McLuhan wrote that media are extensions of man and the same can be said for technology. Our machines inherit our human weaknesses and frailties. We need to make allowances for that.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Mesh – Collaborative Sensing and the Future of Organizational Intelligence

LAST UPDATED: January 15, 2026 at 5:31 PM

GUEST POST from Art Inteligencia

For decades, organizations have operated like giant, slow-moving mammals with centralized nervous systems. Information traveled from the extremities (the employees and customers) up to the brain (management), where decisions were made and sent back down as commands. But in our hyper-connected, volatile world, this centralized model is failing. To thrive, we must evolve. We must move toward Collaborative Sensing — what I call The Mesh.

The Mesh is a paradigm shift where every person, every device, and every interaction becomes a sensor. It is a decentralized network of intelligence that allows an organization to sense, respond, and adapt in real-time. Instead of waiting for a quarterly report to tell you that a project is failing or a customer trend is shifting, The Mesh tells you the moment the first signal appears. This is human-centered innovation at its most agile.

“The smartest organizations of the future will not be those with the most powerful central computers, but those with the most sensitive and collaborative human-digital mesh. Intelligence is no longer something you possess; it is something you participate in.” — Braden Kelley

From Centralized Silos to Distributed Awareness

In a traditional hierarchy, silos prevent information from flowing horizontally. In a Mesh environment, data is shared peer-to-peer. Collaborative sensing leverages the wisdom of the crowd and the precision of the Internet of Things (IoT) to create a high-resolution picture of reality. This isn’t just about “big data”; it is about thick data — the qualitative, human context that explains the numbers.

When humans and machines collaborate in a sensing mesh, we achieve what I call Anticipatory Leadership. We stop reacting to the past and start shaping the future as it emerges. This requires a culture of radical transparency and psychological safety, where sharing a “negative” signal is seen as a contribution to the collective health of the mesh.

Leading the Charge: Companies and Startups in the Mesh

The landscape of collaborative sensing is being defined by a mix of established giants and disruptive startups. IBM and Cisco are laying the enterprise-grade foundation with their edge computing and industrial IoT frameworks, while Siemens is integrating collaborative sensing into the very fabric of smart cities and factories. On the startup front, companies like Helium are revolutionizing how decentralized wireless networks are built by incentivizing individuals to host “nodes.” Meanwhile, Nodle is creating a citizen-powered mesh network using Bluetooth on smartphones, and StreetLight Data is utilizing the mesh of mobile signals to transform urban planning. These players are proving that the most valuable data is distributed, not centralized.

Case Study 1: Transforming Safety in Industrial Environments

The Challenge

A global mining operation struggled with high rates of “near-miss” accidents. Traditional safety protocols relied on manual reporting after an incident occurred. By the time management reviewed the data, the conditions that caused the risk had often changed, making preventative action difficult.

The Mesh Solution

The company implemented a collaborative sensing mesh. Workers were equipped with wearable sensors that tracked environmental hazards (gas levels, heat) and physiological stress. Simultaneously, heavy machinery was outfitted with proximity sensors. These nodes communicated locally — machine to machine and machine to human.

The Human-Centered Result

The “sensing” happened at the edge. If a worker’s stress levels spiked while a vehicle was approaching an unsafe zone, the mesh triggered an immediate haptic alert to the worker and slowed the vehicle automatically. Over six months, near-misses dropped by 40%. The organization didn’t just get “safer”; it became a learning organization that used real-time data to redesign workflows around human limitations and strengths.
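The edge rule described in this case study can be sketched in a few lines. Everything here is hypothetical (thresholds, field names, and action names are illustrative, not from any real deployment); the point is that the decision is made locally, node to node, with no round trip to a central server.

```python
# Illustrative edge-node rule for a sensing mesh.
# Thresholds and action names are hypothetical.

STRESS_LIMIT = 0.8      # normalized physiological stress reading
SAFE_DISTANCE_M = 10.0  # minimum vehicle-to-worker distance, meters

def edge_decision(worker_stress, vehicle_distance_m):
    """Return the local actions each node should take, decided at the edge."""
    actions = []
    if worker_stress > STRESS_LIMIT and vehicle_distance_m < SAFE_DISTANCE_M:
        actions.append("haptic_alert_worker")
        actions.append("slow_vehicle")
    elif vehicle_distance_m < SAFE_DISTANCE_M:
        actions.append("haptic_alert_worker")
    return actions

print(edge_decision(0.9, 6.0))   # stressed worker, vehicle close by
print(edge_decision(0.2, 50.0))  # normal conditions, no action
```

The design choice worth noting is that the rule combines two independent sensor streams (physiology plus proximity) that a centralized, after-the-fact report would never have correlated in time to act.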

Case Study 2: Urban Resilience and Citizen Sensing

The Challenge

A coastal city prone to flash flooding relied on a few expensive, centralized weather stations. These stations often missed hyper-local rain events that flooded specific neighborhoods, leaving emergency services flat-footed.

The Mesh Solution

The city launched a Citizen Sensing initiative. They distributed low-cost, connected rain gauges to residents and integrated data from connected cars’ windshield wiper activity. This created a high-density sensing mesh across the entire geography.

The Human-Centered Result

Instead of one data point for the whole city, planners had thousands. When a localized cell hit a specific district, the mesh automatically updated digital signage to reroute traffic and alerted residents in that specific block minutes before the water rose. This moved the city from crisis management to collaborative resilience, empowering citizens to be active participants in their own safety.
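A minimal sketch of that aggregation logic, under stated assumptions: the district names, gauge readings, and alert threshold below are invented for illustration. It shows why a dense mesh catches what a single citywide average hides.

```python
# Hypothetical citizen-sensing aggregation: many cheap gauges per district.
from statistics import median

ALERT_MM_PER_HOUR = 25.0  # illustrative flash-flood threshold

def district_alerts(readings):
    """readings: {district: [mm/h from each local gauge]} -> districts to alert."""
    return [d for d, values in readings.items()
            if median(values) > ALERT_MM_PER_HOUR]

readings = {
    "harbor":   [42.0, 38.5, 40.2],   # localized cell over one district
    "hillside": [2.1, 1.8, 2.5],
    "downtown": [3.0, 2.2, 2.8],
}
print(district_alerts(readings))  # only "harbor" trips the alert
# A single citywide average would have hidden the event entirely:
citywide = sum(sum(v) for v in readings.values()) / 9
print(round(citywide, 1))  # well below the alert threshold
```

Using the median per district also makes the mesh robust to a single faulty or mischievous gauge, which matters when the sensors are owned by citizens rather than the city.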

Building Your Organizational Mesh

If you are looking to help your team navigate this transition, start by asking: Where is our organization currently numb? Where are the blind spots where information exists but isn’t being sensed or shared?

To build a successful Mesh, you must prioritize:

  • Interoperability: Ensuring different sensors and humans can “speak” to each other across platforms.
  • Privacy by Design: Ensuring the mesh protects individual identity while sharing collective insight.
  • Incentivization: Why should people participate? The mesh must provide value back to those who provide the data.

The Mesh is not just a technological infrastructure; it is a human-centered mindset. It is the realization that we are all nodes in a larger system of intelligence. When we sense together, we succeed together.

Frequently Asked Questions on Collaborative Sensing

Q: What is Collaborative Sensing or ‘The Mesh’?

A: Collaborative Sensing is a decentralized approach to intelligence where humans and IoT devices work in a networked “mesh” to share real-time data. Unlike top-down systems, it relies on distributed nodes to sense, process, and act on information locally and collectively.

Q: How does Collaborative Sensing benefit human-centered innovation?

A: It moves the focus from “big data” to “human context.” By sensing environmental and social signals in real-time, organizations can respond to human needs with greater empathy and precision, reducing friction in everything from city planning to workplace safety.

Q: What is the primary challenge in implementing a Mesh network?

A: The primary challenge is trust and data governance. For a mesh to work effectively, participants must be confident that their data is secure, anonymous where necessary, and used for collective benefit rather than invasive surveillance.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Humans Don’t Have to Perform Every Task

GUEST POST from Shep Hyken

There seems to be a lot of controversy and questions surrounding artificial intelligence (AI) being used to support customers. The customer experience can be enhanced with AI, but it can also derail and cause customers to head to the competition.

Last week, I wrote an article titled Just Because You Can Use AI, Doesn’t Mean You Should. The gist of the article was that while AI has impressive capabilities, there are situations in which human-to-human interaction is still preferred, even necessary, especially for complex, sensitive or emotionally charged customer issues.

However, there is a flip side. Sometimes AI is the smart thing to use, and eliminating human-to-human interaction actually creates a better customer experience. The point is that just because a human could handle a task doesn’t mean they should. 

Before we go further, keep in mind that even if AI should handle an issue, my customer service and customer experience (CX) research finds almost seven out of 10 customers (68%) prefer the phone. So, there are some customers who, regardless of how good AI is, will only talk to a live human being.

Here’s a reality: When a customer simply wants to check their account balance, reset a password, track a package or any other routine, simple task or request, they don’t need to talk to someone. What they really want, even if they don’t realize it, is fast, accurate information and a convenient experience.

The key is recognizing when customers value efficiency over engagement. Even with 68% of customers preferring the phone, they also want convenience and speed. And sometimes, the most convenient experience is one that eliminates unnecessary human interaction.

Smart companies are learning to use both strategically. They are finding a balance. They’re using AI for routine, transactional interactions while making live agents available for situations requiring judgment, creativity or empathy.

The goal isn’t to replace humans with AI. It’s to use each where they excel most. That sometimes means letting technology do what it does best, even if a human could technically do the job. The customer experience improves when you match the right resource to the customer’s specific need.

That’s why I advocate pushing the digital, AI-infused experience for the right reasons but always – and I emphasize the word always – giving the customer an easy way to connect to a human and continue the conversation.

In the end, most customers don’t care whether their problem is solved by a human or AI. They just want it solved well.

Image credits: Google Gemini, Shep Hyken

A New Era of Economic Warfare Arrives

Is Your Company Prepared?

LAST UPDATED: January 9, 2026 at 3:55PM

GUEST POST from Art Inteligencia

Economic warfare rarely announces itself. It embeds quietly into systems designed for trust, openness, and speed. By the time damage becomes visible, advantage has already shifted.

This new era of conflict is not defined by tanks or tariffs alone, but by the strategic exploitation of interdependence — where innovation ecosystems, supply chains, data flows, and cultural platforms become contested terrain.

The most effective economic attacks do not destroy systems outright. They drain them slowly enough to avoid response.

Weaponizing Openness

For decades, the United States has benefited from a research and innovation model grounded in openness, collaboration, and academic freedom. Those same qualities, however, have been repeatedly exploited.

Publicly documented prosecutions, investigations, and corporate disclosures describe coordinated efforts to extract intellectual property from American universities, national laboratories, and private companies through undisclosed affiliations, parallel research pipelines, and cyber-enabled theft.

This is not opportunistic theft. It is strategic harvesting.

When innovation can be copied faster than it can be created, openness becomes a liability instead of a strength.

Cyber Persistence as Economic Strategy

Cyber operations today prioritize persistence over spectacle. Continuous access to sensitive systems allows competitors to shortcut development cycles, underprice rivals, and anticipate strategic moves.

The goal is not disruption — it is advantage.

Skydio and Supply Chain Chokepoints

The experience of American drone manufacturer Skydio illustrates how economic pressure can be applied without direct confrontation.

After achieving leadership through autonomy and software-driven innovation rather than low-cost manufacturing, Skydio encountered pressure through access constraints tied to upstream supply chains.

This was a calculated attack on a successful American business. It serves as a stark reminder: if you depend on a potential adversary for your components, your success is only permitted as long as it doesn’t challenge their dominance. We must decouple our innovation from external control, or we will remain permanently vulnerable.

When supply chains are weaponized, markets no longer reward the best ideas — only the most protected ones.

Agricultural and Biological Vulnerabilities

Incidents involving the unauthorized movement of biological materials related to agriculture and bioscience highlight a critical blind spot. Food systems are economic infrastructure.

Crop blight, livestock disease, and agricultural disruption do not need to be dramatic to be devastating. They only need to be targeted, deniable, and difficult to attribute.

Pandemics and Systemic Shock

The origins of COVID-19 remain contested, with investigations examining both natural spillover and laboratory-associated scenarios. From an economic warfare perspective, attribution matters less than exposure.

The pandemic revealed how research opacity, delayed disclosure, and global interdependence can cascade into economic devastation on a scale rivaling major wars.

Resilience must be designed for uncertainty, not certainty.

The Attention Economy as Strategic Terrain and Algorithmic Narcotic

Platforms such as TikTok represent a new form of economic influence: large-scale behavioral shaping.

Regulatory and academic concerns focus on data governance, algorithmic amplification, and the psychological impact on youth attention, agency, and civic engagement.

TikTok is not just a social media app; it is a cognitive weapon. In China, the algorithm pushes “Douyin” users toward educational content, engineering, and national achievement. In America, the algorithm pushes our youth toward mindless consumption, social fragmentation, and addictive cycles that weaken the mental resilience of the next generation. This is an intentional weakening of our human capital. Through its control of the narrative and the attention of 170 million Americans, the platform has made American children part of a massive experiment in psychological warfare, designed to ensure that the next generation of Americans is too distracted to lead and too divided to innovate.

Whether intentional or emergent, influence over attention increasingly translates into long-term economic leverage.

The Human Cost of Invisible Conflict

Economic warfare succeeds because its consequences unfold slowly: hollowed industries, lost startups, diminished trust, and weakened social cohesion.

True resilience is not built by reacting to attacks, but by redesigning systems so exploitation becomes expensive and contribution becomes the easiest path forward.

Conclusion

This is not a call for isolation or paranoia. It is a call for strategic maturity.

Openness without safeguards is not virtue — it is exposure. Innovation without resilience is not leadership — it is extraction.

The era of complacency must end. We must treat economic security as national security. This means securing our universities, diversifying our supply chains, and demanding transparency in our digital and biological interactions. We have the power to stoke our own innovation bonfire, but only if we are willing to protect it from those who wish to extinguish it.

The next era of competition will reward nations and companies that design systems where trust is earned, reciprocity is enforced, and long-term value creation is protected.

Frequently Asked Questions

What is economic warfare?

Economic warfare refers to the use of non-military tools — such as intellectual property extraction, cyber operations, supply chain control, and influence platforms — to weaken a rival’s economic position and long-term competitiveness.

Is China the only country using these tactics?

No. Many nations engage in forms of economic competition that blur into coercion. The concern highlighted here is about scale, coordination, and the systematic exploitation of open systems.

How should the United States respond?

By strengthening resilience rather than retreating from openness — protecting critical research, diversifying supply chains, aligning innovation policy with national strategy, and designing systems that reward contribution over extraction.

How should your company protect itself?

Companies should identify their critical knowledge assets, limit unnecessary exposure, diversify suppliers, strengthen cybersecurity, enforce disclosure and governance standards, and design partnerships that balance collaboration with protection. Resilience should be treated as a strategic capability, not a compliance exercise.

Image credits: Google Gemini

Rearchitecting the Landscape of Knowledge Work

GUEST POST from Geoffrey A. Moore

One thing the pandemic made clear to everyone involved with the knowledge-work profession is that daily commuting was a ludicrously excessive tax on their time. The amount of work they were able to get done remotely clearly exceeded what they were getting done previously, and the reduction in stress was both welcome and productive. So, let’s be clear, there is no “going back to the office.” What is possible, on the other hand, is going forward to the office, and that is what we are going to discuss in this blog post.

The point is, we need to rethink the landscape of knowledge work—what work is best done where, and why. Let’s start with remote. Routine task work of the sort that a professional is expected to complete on their own is ideally suited to remote working. It requires no supervision to speak of and little engagement with others except at assigned checkpoints. Those checkpoints can be managed easily through video conferencing combined with collaboration-enabling software like Slack or Teams. Productivity commitments are monitored in terms of the quality and quantity of received work. This is game-changing for everyone involved, and we would be crazy to forsake these gains simply to comply with a return-to-the-office mandate.

That said, there are many good reasons still to want a return. Before we dig into them, however, let’s spend a moment on the bad reasons first. First among them is what we might call “boomer executive control needs”—a carry-over from the days of hierarchical management structures that to this day still run most of our bureaucracies. Implicit in this model is the notion that everyone needs supervision all the time. Let me just say that if that is the case in your knowledge-work organization, you are in big trouble, and mandating everyone to come back to the office is not going to fix it. The fix needed is workforce engagement, and that requires personal intervention, not systemic enforcement. Yes, you want to do this in person, and yes, the office is typically the right place to do so, but no, you don’t need everyone to be there all the time to do it.

This same caveat applies to other reasons why enterprises are mandating a return. Knowledge work benefits from social interactions with colleagues. You get to float ideas, hear about new developments, learn from observing others, and the like. It is all good, and you do need to be collocated to do it—just not every day. What is required instead is a new cadence. People need an established routine to know when they are expected to show up, one they can plan around far in advance. In short, we need the discipline of office attendance; we just want it to be more respectful of our remote work. In that light, a good place to start is a 60/40 split—your call as to which is which. But for the days that are in office, attendance is expected, not optional. To do anything else is to disrespect your colleagues and to put your personal convenience above the best interests of the enterprise that is funding you.

So much for coping with some of the bad reasons. Now let’s look into five good ones.

  1. Customer-facing challenges. This includes sales, account management, and customer success (but not customer support or tech support). The point is, whenever things are up for grabs on the customer side, it takes a team to wrestle them down to earth, and the members of that team need to be in close communication to detect the signals, strategize the responses, and leverage each other’s relationships and expertise. You don’t get to say when this happens, so you have to show up every day ready to play (meaning 80/20 is probably a more effective in-office/out-of-office ratio).
  2. Onboarding, team building, and M&A integration. Things can also be up for grabs inside your own organization, particularly when you are adding new people, building a new team (or turning around an old one), or integrating an acquisition. In these kinds of fluid situations, there is a ton of non-verbal communication, both to detect and to project, and there is simply no substitute for collocation. By contrast, career development, mentoring, and performance reviews are best conducted one-on-one, and here modern video conferencing with its high-definition visuals and zero-latency audio can actually induce a more focused conversation.
  3. Mission-critical systems operations. This is just common sense—if the wheels start to come off, you do not want to lose time assembling the team. Cybersecurity attacks would be one good example. On the other hand, with proper IT infrastructure, routine system monitoring, and maintenance as well as standard end-user support can readily leverage remote expertise.
  4. In-house incubations. It is possible to do a remote-only start-up if you have most of the team in place from the beginning, leveraging time in collocation at a prior company, especially if the talent you need is super-scarce and geographically dispersed.

    But for public enterprises leveraging the Incubation Zone, as well as lines of business conducting nested incubation inside their own organizations, a cadence surrounding collocation is critical. The reason is that incubations call for agile decision-making, coordinated course corrections, fast failures, and even faster responses to them. You don’t have to be together every day—there is still plenty of individual knowledge work to be done, but you do need to keep in close formation, and that requires frequent unscripted connections.

  5. Cross-functional programs and projects. These are simply impossible to do on a remote basis. There are too many new relationships that must be established, too many informal negotiations to get resources assigned, too many group sessions to get people aligned, and too much lobbying to get the additional support you need. This is especially true when the team is led by a middle manager who has no direct authority over the team members, only their managers’ commitment and their own good will.

So, what’s the best in-office/remote ratio for your organization?

You might try doing a high-level inventory of all the work you do, calling out for each workload which mode of working is preferable, and totaling it up to get a first cut. You can be sure that whatever you come up with will be wrong, but that’s OK because your next step will be to socialize it. Once you get enough fingerprints on it, you will go live with it, only to confirm it is still wrong, but now with a coalition of the willing to make it right, if only to make themselves look better.
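That first-cut inventory can be as simple as a spreadsheet, or the toy tally below. The workloads, hours, and mode assignments are entirely hypothetical; the sketch just shows the mechanics of "calling out for each workload which mode is preferable, and totaling it up."

```python
# Hypothetical first-cut inventory: (workload, weekly hours, preferred mode).
workloads = [
    ("routine individual task work",  18, "remote"),
    ("customer-facing escalations",    6, "office"),
    ("cross-functional program work",  8, "office"),
    ("one-on-one mentoring",           3, "remote"),
    ("team building / onboarding",     5, "office"),
]

def office_ratio(items):
    """Fraction of total weekly hours best done in the office."""
    total = sum(hours for _, hours, _ in items)
    office = sum(hours for _, hours, mode in items if mode == "office")
    return office / total

print(f"{office_ratio(workloads):.0%} in office")  # the draft to socialize
```

As the paragraph above notes, whatever number falls out will be wrong on the first pass; its value is as an artifact to put fingerprints on, not as an answer.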

Ain’t management fun?

That’s what I think. What do you think?

Image Credit: Google Gemini

Just Because You Can Use AI Doesn’t Mean You Should

GUEST POST from Shep Hyken

I’m often asked, “What should AI be used for?” While there is much that AI can do to support businesses in general, it’s obvious that I’m being asked how it relates to customer service and customer experience (CX). The true meaning of the question is more about what tasks AI can do to support a customer, thereby potentially eliminating the need for a live agent who deals directly with customers.

First, as the title of this article implies, just because AI can do something, it doesn’t mean it should. Yes, AI can handle many customer support issues, but even if every customer were willing to accept that AI can deliver good support, there are some sensitive and complicated issues for which customers would prefer to talk to a human.

Additionally, consider that, based on my annual customer experience research, 68% of customers (that’s almost seven out of 10) prefer the phone as their primary means of communication with a company or brand. However, another finding in the report is worth mentioning: 34% of customers stopped doing business with a company because self-service options were not provided. Some customers insist on the self-service option, but at the same time, they want to be transferred to a live agent when appropriate.

AI works well for simple issues, such as password resets, tracking orders, appointment scheduling and answering basic or frequently asked questions. Humans are better suited for handling complaints and issues that need empathy, complex problem-solving situations that require judgment calls and communicating bad news.

An AI-fueled chatbot can answer many questions, but when a medical patient contacts the doctor’s office about test results related to a serious issue, they will likely want to speak with a nurse or doctor, not a chatbot.

Consider These Questions Before Implementing AI For Customer Interactions

AI for addressing simple customer issues has become affordable for even the smallest businesses, and an increasing number of customers are willing to use AI-powered customer support for the right reasons. Consider these questions before implementing AI for customer interactions:

  1. Is the customer’s question routine or fact-based?
  2. Does it require empathy, emotion, understanding and/or judgment (emotional intelligence)?
  3. Could the wrong answer cause a problem or frustrate the customer?
  4. As you think about the reasons customers call, which ones would they feel comfortable having AI handle?
  5. Do you have an easy, seamless way for the customer to be transferred to a human when needed?
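As a thought experiment, the five questions above can be folded into a simple routing rule. The sketch below is hypothetical and illustrative only, not a real product or Hyken’s own implementation; question 5, the seamless handoff, shows up as the default fallback to a person:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """One inbound customer contact, scored against questions 1-4."""
    is_routine: bool           # Q1: routine or fact-based?
    needs_empathy: bool        # Q2: empathy, emotion, or judgment required?
    high_stakes: bool          # Q3: could a wrong answer cause real harm?
    customer_accepts_ai: bool  # Q4: would customers accept AI for this reason?

def route(contact: Contact) -> str:
    """Return 'ai' or 'human' for this contact.
    Q5 (the seamless transfer) is the default: when in doubt, a person."""
    if contact.needs_empathy or contact.high_stakes:
        return "human"
    if contact.is_routine and contact.customer_accepts_ai:
        return "ai"
    return "human"

# A password reset is routine and low-stakes; medical test results are not.
print(route(Contact(True, False, False, True)))   # -> ai
print(route(Contact(False, True, True, False)))   # -> human
```

The point of the sketch is the ordering: the disqualifying questions (empathy, stakes) are checked before the enabling ones, which is exactly the “just because you can doesn’t mean you should” logic.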

The point is, regardless of how capable the technology is, it doesn’t mean it is best suited to deliver what the customer wants. Live agents can “read the customer” and know how to effectively communicate and empathize with them. AI can’t do that … yet. The key isn’t choosing between AI and humans. It’s knowing when to use each one.

Image credits: Google Gemini, Shep Hyken

Top 100 Innovation and Transformation Articles of 2025

2021 marked the re-birth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being re-branded as Innovation Excellence and ultimately sold to DisruptorLeague.com.

Thanks to an outpouring of support, I’ve ignited the fuse of this new multi-author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate we’ve pulled together the Top 100 Innovation and Transformation Articles of 2025 from our archive of over 3,200 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Authors of 2025, and as the volume of this blog has grown, we have brought back our monthly article ranking to complement this annual one.

But enough delay, here are the 100 most popular innovation and transformation posts of 2025.

Did your favorite make the cut?

1. A Toolbox for High-Performance Teams – Building, Leading and Scaling – by Stefan Lindegaard

2. Top 10 American Innovations of All Time – by Art Inteligencia

3. The Education Business Model Canvas – by Arlen Meyers, M.D.

4. What is Human-Centered Change? – by Braden Kelley

5. How Netflix Built a Culture of Innovation – by Art Inteligencia

6. McKinsey is Wrong That 80% Companies Fail to Generate AI ROI – by Robyn Bolton

7. The Great American Contraction – by Art Inteligencia

8. A Case Study on High Performance Teams – New Zealand’s All Blacks – by Stefan Lindegaard

9. Act Like an Owner – Revisited! – by Shep Hyken

10. Should a Bad Grade in Organic Chemistry be a Doctor Killer? – by Arlen Meyers, M.D.

11. Charting Change – by Braden Kelley

12. Human-Centered Change – by Braden Kelley

13. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

14. SpaceX is a Masterclass in Innovation Simplification – by Pete Foley

15. Top 5 Future Studies Programs – by Art Inteligencia

16. Marriott’s Approach to Customer Service – by Shep Hyken

17. The Role of Stakeholder Analysis in Change Management – by Art Inteligencia

18. The Triple Bottom Line Framework – by Dainora Jociute

19. The Nordic Way of Leadership in Business – by Stefan Lindegaard

20. Nine Innovation Roles – by Braden Kelley

21. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

22. Designing an Innovation Lab: A Step-by-Step Guide – by Art Inteligencia

23. FutureHacking™ – by Braden Kelley

24. The 6 Building Blocks of Great Teams – by David Burkus

25. Overcoming Resistance to Change – Embracing Innovation at Every Level – by Chateau G Pato

26. Human-Centered Change – Free Downloads – by Braden Kelley

27. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

28. Quote Posters – Curated by Braden Kelley

29. Stoking Your Innovation Bonfire – by Braden Kelley

30. Innovation or Not – Kawasaki Corleo – by Art Inteligencia


31. Top Six Trends for Innovation Management in 2025 – by Jesse Nieminen

32. Fear is a Leading Indicator of Personal Growth – by Mike Shipulski

33. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

34. The Most Challenging Obstacles to Achieving Artificial General Intelligence – by Art Inteligencia

35. The Ultimate Guide to the Phase-Gate Process – by Dainora Jociute

36. Case Studies in Human-Centered Design – by Art Inteligencia

37. Transforming Leadership to Reshape the Future of Innovation – Exclusive Interview with Brian Solis

38. Leadership Best Quacktices from Oregon’s Dan Lanning – by Braden Kelley

39. This AI Creativity Trap is Gutting Your Growth – by Robyn Bolton

40. A 90% Project Failure Rate Means You’re Doing it Wrong – by Mike Shipulski

41. Reversible versus Irreversible Decisions – by Farnham Street

42. Next Generation Leadership Traits and Characteristics – by Stefan Lindegaard

43. Top 40 Innovation Bloggers of 2024 – Curated by Braden Kelley

44. Benchmarking Innovation Performance – by Noel Sobelman

45. Three Executive Decisions for Strategic Foresight Success or Failure – by Robyn Bolton

46. Back to Basics for Leaders and Managers – by Robyn Bolton

47. You Already Have Too Many Ideas – by Mike Shipulski

48. Imagination versus Knowledge – Is imagination really more important? – by Janet Sernack

49. Building a Better Change Communication Plan – by Braden Kelley

50. 10 Free Human-Centered Change™ Tools – by Braden Kelley


51. Why Business Transformations Fail – by Robyn Bolton

52. Overcoming the Fear of Innovation Failure – by Stefan Lindegaard

53. What is the difference between signals and trends? – by Art Inteligencia

54. Unintended Consequences. The Hidden Risk of Fast-Paced Innovation – by Pete Foley

55. Giving Your Team a Sense of Shared Purpose – by David Burkus

56. The Top 10 Irish Innovators Who Shaped the World – by Art Inteligencia

57. The Role of Emotional Intelligence in Effective Change Leadership – by Art Inteligencia

58. Is OpenAI About to Go Bankrupt? – by Art Inteligencia

59. Sprint Toward the Innovation Action – by Mike Shipulski

60. Innovation Management ISO 56000 Series Explained – by Diana Porumboiu

61. How to Make Navigating Ambiguity a Super Power – by Robyn Bolton

62. 3 Secret Saboteurs of Strategic Foresight – by Robyn Bolton

63. Four Major Shifts Driving the 21st Century – by Greg Satell

64. Problems vs. Solutions vs. Complaints – by Mike Shipulski

65. The Power of Position Innovation – by John Bessant

66. Three Ways Strategic Idleness Accelerates Innovation and Growth – by Robyn Bolton

67. Case Studies of Companies Leading in Inclusive Design – by Chateau G Pato

68. Recognizing and Celebrating Small Wins in the Change Process – by Chateau G Pato

69. Parallels Between the 1920’s and Today Are Frightening – by Greg Satell

70. The Art of Adaptability: How to Respond to Changing Market Conditions – by Art Inteligencia

71. Do you have a fixed or growth mindset? – by Stefan Lindegaard

72. Making People Matter in AI Era – by Janet Sernack

73. The Role of Prototyping in Human-Centered Design – by Art Inteligencia

74. Turning Bold Ideas into Tangible Results – by Robyn Bolton

75. Yes the Comfort Zone Can Be Your Best Friend – by Stefan Lindegaard

76. Increasing Organizational Agility – by Braden Kelley

77. Innovation is Dead. Now What? – by Robyn Bolton

78. Four Reasons Change Resistance Exists – by Greg Satell

79. Eight I’s of Infinite Innovation – Revisited – by Braden Kelley

80. Difference Between Possible, Potential and Preferred Futures – by Art Inteligencia


81. Resistance to Innovation – What if electric cars came first? – by Dennis Stauffer

82. Science Says You Shouldn’t Waste Too Much Time Trying to Convince People – by Greg Satell

83. Why Context Engineering is the Next Frontier in AI – by Braden Kelley and Art Inteligencia

84. How to Write a Failure Resume – by Arlen Meyers, M.D.

85. The Five Keys to Successful Change – by Braden Kelley

86. Four Forms of Team Motivation – by David Burkus

87. Why Revolutions Fail – by Greg Satell

88. Top 40 Innovation Bloggers of 2023 – Curated by Braden Kelley

89. The Entrepreneurial Mindset – by Arlen Meyers, M.D.

90. Six Reasons Norway is a Leader in High-Performance Teamwork – by Stefan Lindegaard

91. Top 100 Innovation and Transformation Articles of 2024 – Curated by Braden Kelley

92. The Worst British Customer Experiences of 2024 – by Braden Kelley

93. Human-Centered Change & Innovation White Papers – by Braden Kelley

94. Encouraging a Growth Mindset During Times of Organizational Change – by Chateau G Pato

95. Inside the Mind of Jeff Bezos – by Braden Kelley

96. Learning from the Failure of Quibi – by Greg Satell

97. Dare to Think Differently – by Janet Sernack

98. The End of the Digital Revolution – by Greg Satell

99. Your Guidebook to Leading Human-Centered Change – by Braden Kelley

100. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

101. Trust as a Competitive Advantage – by Greg Satell

Curious which article just missed the cut? Well, here it is just for fun:

102. Building Cross-Functional Collaboration for Breakthrough Innovations – by Chateau G Pato

These are the Top 100 innovation and transformation articles of 2025 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2025.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all the innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.

Outcome-Driven Innovation in the Age of Agentic AI

The North Star Shift

LAST UPDATED: January 5, 2026 at 5:29PM

by Braden Kelley

In a world of accelerating change, the rhetoric around Artificial Intelligence often centers on its incredible capacity for optimization. We hear about AI designing new materials, orchestrating complex logistics, and even writing entire software applications. This year, the technology has truly matured into agentic AI, capable of pursuing and achieving defined objectives with unprecedented autonomy. But as a specialist in Human-Centered Innovation™ (which pairs well with Outcome-Driven Innovation), I pose two crucial questions: Who is defining these outcomes, and what impact do they truly have on the human experience?

The real innovation of 2026 will show not just that AI can optimize against defined outcomes, but that we, as leaders, finally have the imperative — and the tools — to master Outcome-Driven Innovation and Outcome-Driven Change. If innovation is change with impact, then our impact is only as profound as the outcomes we choose to pursue. Without thoughtful, human-centered specifications, AI simply becomes the most efficient way to achieve the wrong goals, leading us directly into the Efficiency Trap. This is where organizations must overcome the Corporate Antibody response that resists fundamental shifts in how we measure success.

Revisiting and Applying Outcome-Driven Change in the Age of Agentic AI

As we integrate agentic AI into our organizations, the principles of Outcome-Driven Change (ODC) I first introduced in 2018 are more vital than ever. The core of the ODC framework rests on the alignment of three critical domains: Cognitive (Thinking), Affective (Feeling), and Conative (Doing). Today, AI agents are increasingly assuming the “conative” role, executing tasks and optimizing workflows at superhuman speeds. However, as I have always maintained, true success only arrives when what is being done is in harmony with what the people in the organization and customer base think and feel.

Outcome-Driven Change Framework

If an AI agent’s autonomous actions are misaligned with human psychological readiness or emotional context, it will trigger a Corporate Antibody response that kills innovation. To practice genuine Human-Centered Change™, we must ensure that AI agents are directed to pursue outcomes that are not just numerically efficient, but humanly resonant. When an AI’s “doing” matches the collective thinking and feeling of the workforce, we move beyond the Efficiency Trap and create lasting change with impact.

“In the age of agentic AI, the true scarcity is not computational power; it is the human wisdom to define the right ‘North Star’ outcomes. An AI optimizing for the wrong goal is a digital express train headed in the wrong direction – efficient, but ultimately destructive.” — Braden Kelley

From Feature-Building to Outcome-Harvesting

For decades, many organizations have been stuck in a cycle of “feature-building.” Product teams were rewarded for shipping more features, marketing for launching more campaigns, and R&D for creating more patents. The focus was on output, not ultimate impact. Outcome-Driven Innovation shifts this paradigm. It forces us to ask: What human or business value are we trying to create? What measurable change in behavior or well-being are we seeking?

Agentic AI, when properly directed, becomes an unparalleled accelerant for this shift. Instead of building a new feature and hoping it works, we can now tell an AI agent, “Achieve Outcome X for Persona Y, within Constraints Z,” and it will explore millions of pathways to get there. This frees human teams from the tactical churn and allows them to focus on the truly strategic work: deeply understanding customer needs, identifying ethical guardrails, and defining aspirational outcomes that genuinely drive Human-Centered Innovation™.
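One way to picture the “Achieve Outcome X for Persona Y, within Constraints Z” directive is as a machine-readable contract handed to the agent before it explores anything. This is a speculative sketch, not an existing framework; the class, field names, and thresholds are all illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeSpec:
    """Hypothetical 'Outcome X / Persona Y / Constraints Z' contract."""
    outcome: str      # the measurable change sought, not a feature list
    persona: str      # who the change must serve
    metric: str       # how progress toward the outcome is measured
    constraints: dict = field(default_factory=dict)  # ceilings the agent may not breach

    def within_constraints(self, observed: dict) -> bool:
        """True when every observed value stays at or below its ceiling.
        (Unreported values default to 0, i.e. pass.)"""
        return all(observed.get(name, 0) <= ceiling
                   for name, ceiling in self.constraints.items())

# An outcome in the spirit of the manufacturing example, expressed as a spec:
spec = OutcomeSpec(
    outcome="50% reduction in virgin material usage by 2028",
    persona="sustainability and product-quality leads",
    metric="virgin_material_pct",
    constraints={"unit_cost_increase_pct": 0.0, "defect_rate_pct": 1.5},
)
print(spec.within_constraints({"unit_cost_increase_pct": 0.0, "defect_rate_pct": 1.2}))  # -> True
```

The design point is that humans author the `outcome`, `persona`, and `constraints` fields; the agent only ever optimizes inside that envelope.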

Case Study 1: Sustainable Manufacturing and the “Circular Economy” Outcome

The Challenge: A major electronics manufacturer in early 2025 aimed to reduce its carbon footprint but struggled with the complexity of optimizing its global supply chain, product design, and end-of-life recycling simultaneously. Traditional methods led to incremental, siloed improvements.

The Outcome-Driven Approach: They defined a bold outcome: “Achieve a 50% reduction in virgin material usage across all product lines by 2028, while maintaining profitability and product quality.” They then deployed an agentic AI system to explore new material combinations, reverse logistics networks, and redesign possibilities. This AI was explicitly optimized to achieve the circular economy outcome.

The Impact: The AI identified design changes that led to a 35% reduction in material waste within 18 months, far exceeding human predictions. It also found pathways to integrate recycled content into new products without compromising durability. The organization moved from a reactive “greenwashing” approach to proactive, systemic innovation driven by a clear, human-centric environmental outcome.

Case Study 2: Personalized Education and “Mastery Outcomes”

The Challenge: A national education system faced stagnating literacy rates, despite massive investments in new curricula. The focus was on “covering material” rather than ensuring true student understanding and application.

The Outcome-Driven Approach: They shifted their objective to “Ensure 90% of students achieve demonstrable mastery of core literacy skills by age 10.” An AI tutoring system was developed, designed to optimize for individual student mastery outcomes, rather than just quiz scores. The AI dynamically adapted learning paths, identified specific knowledge gaps, and even generated custom exercises based on each child’s learning style.

The Impact: Within two years, participating schools saw a 25% improvement in mastery rates. The AI became a powerful co-pilot for teachers, freeing them from repetitive grading and allowing them to focus on high-touch mentorship. This demonstrated how AI, directed by human-defined learning outcomes, can empower both educators and students, moving beyond the Efficiency Trap of standardized testing.

Leading Companies and Startups to Watch

As 2026 solidifies Outcome-Driven Innovation, several entities are paving the way. Amplitude and Pendo are evolving their product analytics to connect feature usage directly to customer outcomes. In the AI space, Anthropic’s work on “Constitutional AI” is fascinating, as it seeks to embed human-defined ethical outcomes directly into the AI’s decision-making. Glean and Perplexity AI are creating agentic knowledge systems that help organizations define and track complex outcomes across their internal data. Startups like Metaculus are even democratizing the prediction of outcomes, allowing collective intelligence to forecast the impact of potential innovations, providing invaluable insights for human decision-makers. These players are all contributing to the core goal: helping humans define the right problems for AI to solve.

Conclusion: The Human Art of Defining the Future

The year 2026 is a pivotal moment. Agentic AI gives us unprecedented power to optimize, but with great power comes great responsibility — the responsibility to define truly meaningful outcomes. This is not a technical challenge; it is a human one. It requires deep empathy, strategic foresight, and the courage to challenge old metrics. It demands leaders who understand that the most impactful Human-Centered Innovation™ starts with a clear, ethically grounded North Star.

If you’re an innovation leader trying to navigate this future, remember: the future is not about what AI can do, but about what outcomes we, as humans, choose to pursue with it. Let’s make sure those outcomes serve humanity first.

Frequently Asked Questions

What is “Outcome-Driven Innovation”?

Outcome-Driven Innovation (ODI) is a strategic approach that focuses on defining and achieving specific, measurable human or business outcomes, rather than simply creating new features or products. AI then optimizes for these defined outcomes.

How does agentic AI change the role of human leaders in ODI?

Agentic AI frees human leaders from tactical execution and micro-management, allowing them to focus on the higher-level strategic work of identifying critical problems, understanding human needs, and defining the ethical, impactful outcomes for AI to pursue.

What is the “Efficiency Trap” in the context of AI and outcomes?

The Efficiency Trap occurs when AI is used to optimize for speed or cost without first ensuring that the underlying outcome is meaningful and human-centered. This can lead to highly efficient processes that achieve undesirable or even harmful results, ultimately undermining trust and innovation.

Image credits: Braden Kelley, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.

Why Photonic Processors are the Nervous System of the Future

Illumination as Innovation

LAST UPDATED: January 2, 2026 at 4:59 PM

GUEST POST from Art Inteligencia

In the landscape of 2026, we have reached a critical juncture in what I call the Future Present (which you can also think of as the close-in future). Our collective appetite for intelligence — specifically the generative, agentic, and predictive kind — has outpaced the physical capabilities of our silicon ancestors. For decades, we have relied on electrons to do our bidding, pushing them through increasingly narrow copper gates. But electrons have mass, generate heat, and meet resistance, and that friction is now leading us directly into the Efficiency Trap. If we want to move from change to change with impact, we must change the medium of the message itself.

Enter Photonic Processing. This is not merely an incremental speed boost; it is a fundamental shift from the movement of matter to the movement of light. By using photons instead of electrons to perform calculations, we are moving toward a world of near-zero latency and drastically reduced energy consumption. As a specialist in Human-Centered Innovation™, I see this not just as a hardware upgrade, but as a breakthrough for human potential. When computing becomes as fast as thought and as sustainable as sunlight, the barriers between human intent and innovative execution finally begin to dissolve.

“Innovation is not just about moving faster; it is about illuminating the paths that were previously hidden by the friction of our limitations. Photonic computing is the lighthouse that allows us to navigate the vast oceans of data without burning the world to power the voyage.” — Braden Kelley

The End of the Electronic Friction

The core problem with traditional electronic processors is heat. When you move electrons through silicon, they collide, generating thermal energy. This is why data centers now consume a staggering percentage of the world’s electricity. Photons, however, do not have a charge and essentially do not interact with each other in the same way. They can pass through one another, move at the speed of light, and carry data across vast “optical highways” without the parasitic energy loss that plagues copper wiring.

For the modern organization, this means computational abundance. We can finally train the massive models required for true Human-AI Teaming without the ethical burden of a massive carbon footprint. We can move from “batch processing” our insights to “living insights” that evolve at the speed of human conversation.

Case Study 1: Transforming Real-Time Healthcare Diagnostics

The Challenge: A global genomic research institute in early 2025 was struggling with the “analysis lag.” To provide personalized cancer treatment plans, they needed to sequence and analyze terabytes of data in minutes. Using traditional GPU clusters, the process took days and cost thousands of dollars in energy alone.

The Photonic Solution: By integrating a hybrid photonic-electronic accelerator, the institute was able to perform complex matrix multiplications — the backbone of genomic analysis — using light. The impact? Analysis time dropped from 48 hours to 12 minutes. More importantly, the system consumed 90% less power. This allowed doctors to provide life-saving prescriptions while the patient was still in the clinic, transforming a diagnostic process into a human-centered healing experience.
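The “complex matrix multiplications” here are nothing exotic: a matrix product is just a grid of multiply-accumulate operations, which is the arithmetic photonic hardware aims to perform in a single optical pass. A rough sketch, with illustrative sizes rather than the institute’s actual workload:

```python
def matmul(a, b):
    """Plain multiply-accumulate matrix product: the operation a photonic
    accelerator performs with light interference instead of transistors."""
    inner, cols = len(b), len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for row in a]

def matmul_flops(m, n, k):
    # An (m x n) @ (n x k) product costs about one multiply and one add
    # per inner step: roughly 2 * m * n * k floating-point operations.
    return 2 * m * n * k

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]

# Why electronic chips drown in this work: a single large model layer
# (illustrative sizes) is already billions of multiply-accumulates.
print(f"{matmul_flops(1024, 4096, 16384) / 1e9:.1f} GFLOPs")  # -> 137.4 GFLOPs
```

Every one of those multiply-accumulates dissipates heat on an electronic chip; performed optically, the same arithmetic incurs almost no resistive loss, which is the source of the power savings described above.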

Case Study 2: Autonomous Urban Flow in Smart Cities

The Challenge: A metropolitan pilot program for autonomous traffic management found that traditional electronic sensors were too slow to handle “edge cases” in dense fog and heavy rain. The latency of sending data to the cloud and back created a safety gap that the corporate antibody of public skepticism used to shut down the project.

The Photonic Solution: The city deployed “Optical Edge” processors at major intersections. These photonic chips processed visual data at the speed of light, identifying potential collisions before a human eye or an electronic sensor could even register the movement. The impact? A 60% reduction in traffic incidents and a 20% increase in average transit speed. By removing the latency, they restored public trust — the ultimate currency of Human-Centered Innovation™.

Leading Companies and Startups to Watch

The race to light-speed computing is no longer a laboratory experiment. Lightmatter is currently leading the pack with its Envise and Passage platforms, which provide a bridge between traditional silicon and the photonic future. Celestial AI is making waves with their “Photonic Fabric,” a technology designed to solve the massive data-bottleneck in AI clusters. We must also watch Ayar Labs, whose optical I/O chiplets are being integrated by giants like Intel to replace copper connections with light. Finally, Luminous Computing is quietly building a “supercomputer on a chip” that promises to bring the power of a data center to a desktop-sized device, truly democratizing the useful seeds of invention.

Designing for the Speed of Light

As we integrate these photonic systems, we must be careful not to fall into the Efficiency Trap. Just because we can process data a thousand times faster doesn’t mean we should automate away the human element. The goal of photonic innovation should be to free us from “grunt work” — the heavy lifting of data processing — so we can focus on “soul work” — the empathy, ethics, and creative leaps that no processor, no matter how fast, can replicate.

If you are an innovation speaker or a leader guiding your team through this transition, remember that technology is a tool, but trust is the architect. We use light to see more clearly, not to move so fast that we lose sight of our purpose. The photonic age is here; let us use it to build a future that is as bright as the medium it is built upon.

Frequently Asked Questions

What is a Photonic Processor?

A photonic processor is a type of computer chip that uses light (photons) instead of electricity (electrons) to perform calculations and transmit data. This allows for significantly higher speeds, lower latency, and dramatically reduced energy consumption compared to traditional silicon chips.

Why does photonic computing matter for AI?

AI models rely on massive “matrix multiplications.” Photonic chips can perform these specific mathematical operations using light interference patterns at the speed of light, making them ideally suited for the next generation of Large Language Models and autonomous systems.

Is photonic computing environmentally friendly?

Yes. Because photons do not generate heat through resistance like electrons do, photonic processors require far less cooling and electricity. This makes them a key technology for sustainable innovation and reducing the carbon footprint of global data centers.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

What Are We Going to Do Now with GenAI?

GUEST POST from Geoffrey A. Moore

In 2023 we simply could not stop talking about Generative AI. But in 2024 the question for each enterprise (and this includes yours as well) became, and continues to be today: What are we going to do about it? Tough questions call for tough frameworks, so let’s run this one through the Hierarchy of Powers to see if it can shine some light on what might be your company’s best bet.

Category Power

Gen AI can have an impact anywhere in the Category Maturity Life Cycle, but the way it does so differs depending on where your category is, as follows:

  • Early Market. GenAI will almost certainly be a differentiating ingredient that is enabling a disruptive innovation, and you need to be on the bleeding edge. Think ChatGPT.
  • Crossing the Chasm. Nailing your target use case is your sole priority, so you would use GenAI if, and only if, it helped you do so, and avoid getting distracted by its other bells and whistles. Think Khan Academy at the school district level.
  • Inside the Tornado. Grabbing as much market share as you can is now the game to play, and GenAI-enabled features can help you do so provided they are fully integrated (no “some assembly required”). You cannot afford to slow your adoption down just at the time it needs to be at full speed. Think Microsoft Copilot.
  • Growth Main Street (category still growing double digits). Market share boundaries are settling in, so the goal now is to grow your patch as fast as you can, solidifying your position and taking as much share as you can from the also-rans. Adding GenAI to the core product can provide a real boost as long as the disruption is minimal. Think Salesforce CRM.
  • Mature Main Street (category stabilized, single-digit growth). You are now marketing primarily to your installed base, secondarily seeking to pick up new logos as they come into play. GenAI can give you a midlife kicker provided you can use it to generate meaningful productivity gains. Think Adobe Photoshop.
  • Late Main Street (category declining, negative growth). The category has never been more profitable, so you are looking to extend its life in as low-cost a way as you can. GenAI can introduce innovative applications that otherwise would never occur to your end users. Think HP home printing.

Company Power

There are two dimensions of company power to consider when analyzing the ROI from a GenAI investment, as follows:

  • Market Share Status. Are you the market share leader, a challenger, or simply a participant? As a challenger, you can use GenAI to disrupt the market pecking order provided you differentiate in a way that is challenging for the leader to copy. On the other hand, as a leader, you can use GenAI to neutralize the innovations coming from challengers provided you can get it to market fast enough to keep the ecosystem in your camp. As a participant, you would add GenAI only if it were your single point of differentiation (as a low-share participant, your R&D budget cannot fund more than one).
  • Default Operating Model. Is your core business better served by the complex systems operating model (typical for B2B companies with hundreds to thousands of large enterprises for customers) or the volume operations operating model (typical for B2C companies with hundreds of thousands to millions of consumers)? The complex systems model has sufficient margins to invest in professional services across the entire ownership life cycle, from design consulting to installation to expansion. You are going to need deep in-house expertise to win big in this game. By contrast, GenAI deployed via the volume operations model has to work out-of-the-box. Consumers have neither the courage nor the patience to work through any disconnects.

Market Power

Whereas category share leaders benefit most from going broad, market segment leaders win big by going deep. The key tactic is to overdo it on the use cases that mean the most to your target customers, taking your offer beyond anything reasonable for a category leader to copy. GenAI can certainly be a part of this approach, as the two slides below illustrate:

Market Segmentation for Complex Systems

In the complex systems operating model, GenAI should accentuate the differentiation of your whole product, the complete solution to whatever problem you are targeting. That might mean, for example, taking your Large Language Model to a level of specificity that would normally not be warranted. This sets you apart from the incumbent vendor, who has nothing like what you offer, as well as from other technology vendors, who have not embraced your target segment's specific concerns. Think CrowdStrike's Charlotte AI for cybersecurity analysis.

Market Segmentation for Volume Operations

In the volume operations operating model, GenAI should accentuate the differentiation of your brand promise by overdelivering on the relevant value discipline. Once again, it is critical not to get distracted by shiny objects—you want to differentiate in one quadrant only, although you can use GenAI in the other three for neutralization purposes. For Performance, think knowledge discovery. For Productivity, think writing letters. For Economy, think tutoring. For Convenience, think gift suggestions.

Offer Power

Everybody wants to “be innovative,” but it is worth stepping back a moment to ask, how do we get a Return on Innovation? Compared to its financial cousin, this kind of ROI is more of a leading indicator and thus of more strategic value. Basically, it comes in three forms:

  1. Differentiation. This creates customer preference, the goal being not just to be different but to create a clear separation from the competition, one that they cannot easily emulate. Think OpenAI.
  2. Neutralization. This closes the gap between you and a competitor who is taking market share away from you, the goal being to get to “good enough, fast enough,” thereby allowing your installed base to stay loyal. Think Google Bard.
  3. Optimization. This reduces the cost while maintaining performance, the goal being to expand the total available market. Think Edge GenAI on PCs and Macs.

For most of us, GenAI will be an added ingredient rather than a core product, which makes the ROI question even more important. The easiest way to waste innovation dollars is to spend them on differentiation that does not go far enough, neutralization that does not go fast enough, or optimization that does not go deep enough. So, the key lesson here is, pick one and only one as your ROI goal, and then go all in to get a positive return.

Execution Power

How best to incorporate GenAI into your existing enterprise depends on which zone of operations you are looking to enhance, as illustrated by the zone management framework below:

Zone Management Framework

If you are unsure exactly what to do, assign the effort to the Incubation Zone and put the team on the clock to come up with a good answer as fast as possible. If you can incorporate GenAI directly into your core business's offerings at relatively low risk, by all means do so, as it is the current hot ticket, and assign it to the Performance Zone. If there is not a good fit there, consider using it internally instead to improve your own productivity, assigning it to the Productivity Zone. Finally, although these are awfully early days for this, if you are convinced GenAI is an absolutely essential ingredient in a big bet you feel compelled to make, then assign it to the Transformation Zone and go all in. Again, the overall point is to manage your investment in GenAI out of one zone and only one zone, as the success metrics for each zone are incompatible with those of the other three.

One final point. Embracing anything as novel as GenAI has to feel risky. I submit, however, that in 2025 not building upon meaningful GenAI action taken in 2024 is even more so.

That’s what I think. What do you think?

Image Credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.