Category Archives: Innovation

Nominations Open – Top 40 Innovation Authors of 2025

Human-Centered Change and Innovation loves making innovation insights accessible for the greater good, because we truly believe that the better our organizations get at delivering value to their stakeholders, the less natural and human resources will be wasted.

As a result, we are eternally grateful to all of you out there who take the time to create and share great innovation articles, presentations, white papers, and videos with Braden Kelley and the Human-Centered Change and Innovation team. As a small thank you to those of you who follow along, we like to make a list of the Top 40 Innovation Authors available each year!

Our lists from the ten previous years have been tremendously popular, including:

Top 40 Innovation Bloggers of 2015
Top 40 Innovation Bloggers of 2016
Top 40 Innovation Bloggers of 2017
Top 40 Innovation Bloggers of 2018
Top 40 Innovation Bloggers of 2019
Top 40 Innovation Bloggers of 2020
Top 40 Innovation Bloggers of 2021
Top 40 Innovation Bloggers of 2022
Top 40 Innovation Bloggers of 2023
Top 40 Innovation Bloggers of 2024

Do you have someone you like to read who writes about innovation, or about one of its important adjacencies – trends, consumer psychology, change, leadership, strategy, behavioral economics, collaboration, or design thinking?

Human-Centered Change and Innovation is now looking for the Top 40 Innovation Authors of 2025.

The deadline for submitting nominations is December 24, 2025 at midnight GMT.

You can submit a nomination in either of two ways:

  1. Send us the name of the author and the URL of their blog by @reply on Twitter to @innovate
  2. Send us the name of the author, the URL of their blog, and your e-mail address using our contact form

(Note: HUGE bonus points for being a contributing author)

So, think about who you like to read and let us know by midnight GMT on December 24, 2025.

We will then compile a voting list of all the nominations, and publish it on December 25, 2025.

Voting will then be open from December 25, 2025 – January 1, 2026 via comments and Twitter @replies to @innovate.

The ranking will be done by me with influence from votes and nominations. The quality and quantity of contributions by an author to this web site will be a contributing factor.

Contact me with writing samples if you’d like to publish your articles on our platform!

The official Top 40 Innovation Authors of 2025 will then be announced on here in early January 2026.

We’re curious to see who you think is worth reading!

SPECIAL BONUS: From now until December 31, 2025 you can get either the hardcover or softcover of my latest best-selling book Charting Change (free shipping worldwide) for only £/$/€ 23.99 (~36% OFF).

Support this blog by getting your copy of Charting Change

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Will our opinion still really be our own in an AI Future?

Will our opinion still really be our own in an AI Future?

GUEST POST from Pete Foley

Intuitively, we all mostly believe our opinions are our own. After all, they come from that mysterious thing we call consciousness that resides somewhere inside of us.

But we also know that other people's opinions are shaped by all sorts of external influences. So unless we as individuals are uniquely immune to influence, it raises the question: how much of what we think, and what we do, is really uniquely us? And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people's political beliefs: https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs. Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general. And that's quite sobering on a couple of fronts. Firstly, for many people today, political beliefs are deeply tied to their value system and deep sense of self, so this manipulation is potentially profound. Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new. We are all overwhelmed by political marketing during election season. We accept that media has manipulated public opinion for decades, and that social media has amplified this in recent years. Similarly, we've all grown up immersed in marketing and advertising designed to influence our decisions, opinions, and actions. Meanwhile, the rise in prominence of the behavioral sciences in recent decades has brought more structure and efficiency to behavioral influence, literally turning an art into a science. Framing, priming, pre-suasion, nudging, and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a highly diverse field, and it can be deployed as an art or a science. Whether it's influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition we are largely unaware of implicit influence on ourselves, although we can often see it in others.   And even in hindsight, it’s very difficult to introspect implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event. 

So what does this mean?  As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have.  Futurism is far from an exact science.  But there are a couple of things we can make pretty decent guesses around.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good. Mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), health, wellness, job satisfaction, social engagement, adoption of beneficial technology and innovation, and so many other areas can benefit from it. And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, for example by managing depression or via preventative behavioral health interventions. Will this be authentic? It's probably a little Huxley-dystopian, but will we care? It's one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power? As humans, we don't have a great record of doing the right thing when wealth and power come into the equation. And AI, and AI-empowered social, conceptual, and behavioral manipulation, have the potential to concentrate meaningful power even more than today's tech-driven society does. Will this be used exclusively for good, or will some seek to leverage it for personal benefit at the expense of the broader community? Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do? Realistically, as individuals we can self-police, but we obviously also face limits in our self-awareness of implicit manipulation. That said, we can to some degree still audit ourselves. We've probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs. Sometimes we recognize this quickly; other times we may be a little slower. But simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities. We are better at seeing others being manipulated than ourselves. We can use that as a mirror, and/or call it out to others when we see it. And many of us will find ourselves somewhere in the deployment chain, especially while AI is still in its early stages. Those of us to whom this applies have the opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago. It's so long ago I have to paraphrase, but the essence of the conversation was to never manipulate people into doing something that is not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn't want, it might have offered initial success, but it would likely have come back to bite me in the long term. They certainly wouldn't become repeat customers, and a mixture of buyer's remorse, loss aversion, and revenge could turn them into active opponents. This potential for critical thinking in hindsight exists in virtually every situation where outcomes damage the individual.

The bottom line is that even today, we already have to continually ask ourselves whether what we see is real, and whether our beliefs are truly our own or have been manipulated. Media and social media memes already play the manipulation game. AI may already be better at it, and if not, it's only a matter of time before it is. If you think we are politically polarized now, hang onto your hat! But awareness is key. We all need to stay aware, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini


Why Are We Forcing People Back into Cubicles?

Why Are We Forcing People Back into Cubicles?

GUEST POST from Mike Shipulski

Whether it's placing machine tools on the factory floor or designing workspaces for the people who work at the company, the number one guiding metric is resources per square foot. If you're placing machine tools, this metric causes the machines to be stacked closely together: the space between them is minimized, access to the machines is minimized, and the aisles are the smallest they can be. The result: the number of machines per square foot is maximized.

And though there has been talk of workplaces that promote effective interactions and creativity, the primary metric is still people per square foot. Don’t believe me? I have one word for you – cubicles. Cubicles are the design solution of choice when you want to pack the most people into the smallest area.

Here’s a test. At your next team meeting, ask people to raise their hand if they hate working in a cubicle. I rest my case.

With cubicles, it's the worst of both worlds. There is none of the benefit of an office and none of the benefit of a collaborative environment. They are half of neither.

What is one of Dilbert's favorite topics? Cubicles.

If no one likes them, why do we still have them? If you want quiet, cubicles are the wrong answer. If you want effective collaboration, cubicles are the wrong answer. If everyone hates them, why do we still have them?

When people need to do deep work, they stay home so they can have peace and quiet. When people want to concentrate, they avoid cubicles at all costs. When you need to focus, you need quiet. And the best way to get quiet is with four walls and a door. Some would call that an office, but offices are passé. In some cases, they are outlawed. Either way, they are the best way to get some quiet time. And, as a side benefit, they also block interruptions.

The best way for people to interact is face-to-face. And in order to interact that way, they've got to be in the same place at the same time. Sure, spontaneous interactions are good, but it's far better to facilitate interactions with a fixed schedule. As with a bus schedule, people know where to be and when. In that way, many people can come together efficiently and effectively, and the number of interactions increases dramatically. So why not set up planned interactions at ten in the morning and two in the afternoon?

I propose a new metric for facilities design – number of good ideas per square foot. Good ideas require deep thought, so quiet is important. And good ideas require respectful interaction with others, so interactions are important.

I’m not exactly sure what a facility must look like to maximize the number of good ideas per square foot, but I do know it has no cubicles.

Image credit: Unsplash







Do What 91% of Executives Will Not

Winning in Times of Uncertainty

Do What 91% of Executives Will Not

GUEST POST from Robyn Bolton

In times of great uncertainty, we seek safety. But what does “safety” look like?

What We Say: Safety = Data

We tend to believe that we are rational beings and, as a result, we rely on data to make decisions.

Great! We've got lots of data from lots of uncertain periods. HBR examined 4,700 public companies during three global recessions (1980, 1990, and 2000). They found that the companies that emerged "outperforming rivals in their industry by at least 10% in terms of sales and profits growth" had one thing in common: they aggressively cut costs to improve operational efficiency while ruthlessly investing in marketing, R&D, and new assets to better serve customers, giving them the highest probability of emerging as market leaders post-recession.

This research was backed up in 2020 by a McKinsey study that found that "Organizations that maintained their innovation focus through the 2009 financial crisis, for example, emerged stronger, outperforming the market average by more than 30 percent and continuing to deliver accelerated growth over the subsequent three to five years."

What We Do: Safety = Hoarding

The reality is that we are human beings and, as a result, we make decisions based on how we feel and then use data to justify those decisions.

How else do you explain that, despite the data, only 9% of companies took the balanced approach recommended in the HBR study and, ten years later, only 25% of the companies studied by McKinsey said that "capturing new growth" was a top priority coming out of the COVID-19 pandemic?

Uncertainty is scary, so, as individuals and as organizations, we scramble to secure scarce resources, cut anything that feels extraneous, and shift our focus to survival.

What now? AND, not OR

What was true in 2010 is still true today and new research from Bain offers practical advice for how leaders can follow both their hearts and their heads.

Implement systems to protect you from yourself. Bain studied Fast Company's 50 Most Innovative Companies and found that 79% use two different operating models for innovation to combat executives' natural risk aversion. The first, for sustaining innovation, uses traditional stage-gate models, seeks input from experts and existing customers, and is evaluated on ROI-driven metrics.

The second, for breakthrough innovations, is designed to embrace and manage uncertainty by learning from new customers and emerging trends, working with speed and agility, engaging non-traditional collaborators, and evaluating projects based on their long-term potential and strategic option value.

Don't outspend. Out-allocate. Supporting the two-system approach, nearly half of the companies studied spend less on R&D than their peers overall, and they spend it differently: 39% of their R&D budgets go to sustaining innovations and 61% to expanding into new categories or business models.

Use AI to accelerate, not create. Companies integrating AI into innovation processes have seen design-to-launch timelines shrink by 20% or more. The key word there is “integrate,” not outsource. They use AI for data and trend analysis, rapid prototyping, and automating repetitive tasks. But they still rely on humans for original thinking, intuition-based decisions, and genuine customer empathy.

Prioritize humans above all else. Even though all the information in the world is at our fingertips, humans remain unknowable, unpredictable, and wonderfully weird. That's why successful companies use AI to enhance, not replace, direct engagement with customers. They use synthetic personas as a rehearsal space for brainstorming, designing research, and concept testing. But they also know there is no replacement (yet) for human-to-human interaction, especially when creating new offerings and business models.

In times of great uncertainty, we seek safety. But safety doesn't guarantee certainty. Nothing does. So the safest thing we can do is learn from the past, prepare (not plan) for the future, make the best decisions possible based on what we know and feel today, and stay open to changing them tomorrow.

Image credit: Pexels


The Wood-Fired Automobile

WWII’s Forgotten Lesson in Human-Centered Resourcefulness

LAST UPDATED: December 14, 2025 at 5:59 PM

The Wood-Fired Automobile

GUEST POST from Art Inteligencia

Innovation is often romanticized as the pursuit of the new — sleek electric vehicles, AI algorithms, and orbital tourism. Yet, the most profound innovation often arises not from unlimited possibility, but from absolute scarcity. The Second World War offers a stark, compelling lesson in this principle: the widespread adoption of the wood-fired automobile, or the gasogene vehicle.

In the 1940s, as global conflict choked off oil supplies, nations across Europe and Asia were suddenly forced to find an alternative to gasoline to keep their civilian and military transport running. The solution was the gas generator (or gasifier), a bulky metal unit often mounted on the rear or side of a vehicle. This unit burned wood, charcoal, or peat, not for heat or steam, but for gas. The process — pyrolysis — converted solid fuel into a combustible mixture of carbon monoxide, hydrogen, and nitrogen known as “producer gas” or “wood gas,” which was then filtered and fed directly into the vehicle’s conventional internal combustion engine. This adaptation was a pure act of Human-Centered Innovation: it preserved mobility and economic function using readily available, local resources, ensuring the continuity of life amidst crisis.

The Scarcity Catalyst: Unlearning the Oil Dependency

Before the war, cars ran on gasoline. When the oil dried up, the world faced a moment of absolute unlearning. Governments and industries could have simply let transportation collapse, but the necessity of maintaining essential services (mail, food distribution, medical transport) forced them to pivot to what they had: wood and ingenuity. This highlights a core innovation insight: the constraints we face today — whether supply chain failures or climate change mandates — are often the greatest catalysts for creative action.

Gasogene cars were slow, cumbersome, and required constant maintenance, yet their sheer existence was a triumph of adaptation. They provided roughly half the power of a petrol engine, requiring drivers to constantly downshift on hills and demanding a long, smoky warm-up period. But they worked. The innovation was not in the vehicle itself, which remained largely the same, but in the fuel delivery system and the corresponding behavioral shift required by the drivers and mechanics.

Case Study 1: Sweden’s Total Mobilization of Wood Gas

Challenge: Maintaining Neutrality and National Mobility Under Blockade

During WWII, neutral Sweden faced a complete cutoff of its oil imports. Without liquid fuel, the nation risked economic paralysis, potentially undermining its neutrality and ability to supply its citizens. The need was immediate and total: convert all essential vehicles.

Innovation Intervention: Standardization and Centralization

Instead of relying on fragmented, local solutions, the Swedish government centralized the gasifier conversion effort. They established the Gasogenkommittén (Gas Generator Committee) to standardize the design, production, and certification of gasifiers (known as gengas). Manufacturers such as Volvo and Scania were tasked not with building new cars, but with mass-producing the conversion kits.

  • By 1945, approximately 73,000 vehicles — nearly 90% of all Swedish vehicles, from buses and trucks to farm tractors and private cars — had been converted to run on wood gas.
  • The government created standardized wood pellet specifications and set up thousands of public wood-gas fueling stations, turning the challenge into a systematic, national enterprise.

The Innovation Impact:

Sweden demonstrated that human resourcefulness can completely circumvent a critical resource constraint at a national scale. The conversion was not an incremental fix; it was a wholesale, government-backed pivot that secured national resilience and mobility using entirely domestic resources. The key was standardized conversion — a centralized effort to manage distributed complexity.


Case Study 2: German Logistics and the Bio-Diesel Experiment

Challenge: Fueling a Far-Flung Military and Civilian Infrastructure

Germany faced a dual challenge: supplying a massive, highly mechanized military campaign while keeping the domestic civilian economy functional. While military transport relied heavily on synthetic fuel created through the Fischer-Tropsch process, the civilian sector and local military transport units required mass-market alternatives.

Innovation Intervention: Blended Fuels and Infrastructure Adaptation

Beyond wood gas, German innovation focused on blended fuels. A crucial adaptation was the widespread use of methanol, ethanol, and various bio-diesels (esters derived from vegetable oils) to stretch dwindling petroleum reserves. While wood gasifiers were used on stationary engines and some trucks, the government mandated that local transport fill up with methanol-gasoline blends. This forced a massive, distributed shift in fuel pump calibration and engine tuning across occupied Europe.

  • The adaptation required hundreds of thousands of local mechanics, from France to Poland, to quickly unlearn traditional engine maintenance and become experts in the delicate tuning required for lower-energy blended fuels.
  • This placed the burden of innovation not on a central R&D lab, but on the front-line workforce — a pure example of Human-Centered Innovation at the operational level.

The Innovation Impact:

This case highlights how resource constraints force innovation across the entire value chain. Germany’s transport system survived its oil blockade not just through wood gasifiers, but through a constant, low-grade innovation treadmill of fuel substitution, blending, and local adaptation that enabled maximum optionality under duress. The lesson is that resilience comes from flexibility and decentralization.

Conclusion: The Gasogene Mindset for the Modern Era

The wood-fired car is not a relic of the past; it is a powerful metaphor for the challenges we face today. We are currently facing the scarcity of time, carbon space, and public trust. We are entirely reliant on systems that, while efficient in normal times, are dangerously fragile under stress. The shift to sustainability, the move away from centralized energy grids, and the adoption of closed-loop systems all require the Gasogene Mindset — the ability to pivot rapidly to local, available resources and fundamentally rethink the consumption model.

Modern innovators must ask: If our critical resource suddenly disappeared, what would we use instead? The answer should drive our R&D spending today. The history of the gasogene vehicle proves that sufficiency is the mother of ingenuity, and the greatest innovations often solve the problem of survival first. We must learn to innovate under constraint, not just in comfort.

“The wood-fired car teaches us that every constraint is a hidden resource, if you are creative enough to extract it.” — Braden Kelley

Frequently Asked Questions About Wood Gas Vehicles

1. How does a wood gas vehicle actually work?

The vehicle uses a gasifier that burns wood or charcoal in a low-oxygen environment (a process called pyrolysis). This creates a gas mixture (producer gas) which is then cooled, filtered, and fed directly into the vehicle’s standard internal combustion engine to power it, replacing gasoline.

2. How did the performance of a wood gas vehicle compare to gasoline?

Gasogene cars provided significantly reduced performance, typically delivering only 50-60% of the power of the original gasoline engine. They were slower, had lower top speeds, required frequent refueling with wood, and needed a 15-30 minute warm-up period to start producing usable gas.

3. Why aren’t these systems used today, given their sustainability?

The system is still used in specific industrial and remote applications (power generation), but not widely in transportation because of the convenience and energy density of liquid fuels. Wood gasifiers are large, heavy, require constant manual fueling and maintenance (clinker removal), and produce a low-energy gas that limits speed and range, making them commercially unviable against modern infrastructure.

Your first step toward a Gasogene Mindset: Identify one key external resource your business or team relies on (e.g., a software license, a single supplier, or a non-renewable material). Now, design a three-step innovation plan for a world where that resource suddenly disappears. That plan is your resilience strategy.


Image credit: Google Gemini







How Knowledge Emerges

Understanding Epistemology

How Knowledge Emerges - Understanding Epistemology

GUEST POST from Geoffrey A. Moore

Epistemology is that branch of philosophy that addresses the theory of knowledge. But what do philosophers mean by knowledge? Traditionally, it is defined as justified true belief, and it is established by applying logic and reason to whatever set of claims is under discussion. That is the path we are going to follow here as well. But to get the full picture, we need to look at both knowledge and knowing through the lens of emergence.

In The Infinite Staircase, we offered a global model of emergence that seeks to span all of reality, organizing itself around eleven stairs, as follows:

[Figure: The Infinite Staircase model of emergence, Geoffrey A. Moore]

Justified true belief is a product of reason employing the top four stairs of language, narrative, analytics, and theory to test claims to truth. It is the cumulative impact of all these stairs building one atop the next that allows knowledge to ultimately emerge in its fullest sense. That is the path we are about to trace. Before so doing, however, we should acknowledge that there are seven stairs below language, all of which are “pre-linguistic,” that also seep into the way we know things. A complete epistemology would therefore go all the way down to the bottom stair, with particular attention to culture (what we learn from others) and values (what we learn from mammalian nurture and governance). Nonetheless, we are going to focus on just the top four because that is where the bulk of the action is.

Beginning with the stair of language, its major contribution to justified true belief is its ability to communicate facts. All facts are expressed through declarative sentences. Each sentence makes a claim. What makes a claim a fact is that we are willing to accept its assertion without further verification or validation. For the ultimate skeptic who is never willing to do this, there are no facts. For the rest of us, who are continually making real-life decisions in real-time, facts are necessary, and we accept or reject claims of fact based on the information we have at hand, including the reliability of the source and the probability of the claim given current circumstances.

That said, facts by themselves don’t mean much. What gives them meaning are narratives. Narrative is the cornerstone of all knowledge, the medium by which we communicate beliefs. The book of Genesis represents one such belief-supporting narrative, The Origin of Species another, the Big Bang a third. Each of these narratives not only explains how things have come to be as they are, at the same time they foreshadow how they can be expected to turn out in the future. Whether it is the hand of God, the workings of natural selection, or the ceaseless operation of the Second Law of Thermodynamics, narratives spotlight the governing forces in whatever situation they describe. That in turn lets us identify actions we can take to turn our situation to best advantage. Narratives, in other words, are essential equipment for any kind of decision-making. The question, however, is are they credible?

This is where analytics comes in. The role of analytics is to justify belief in the claims embedded in the narrative. In The Infinite Staircase, I summarize Stephen Toulmin's model for conducting such an analysis. It is organized around the following elements:

  1. What are the claims being made? Are they clear, precise, and unambiguous?
  2. What evidence is there that these claims might be true? What are the facts of the case as best we can determine them?
  3. What warrants us to believe that this evidence supports these claims? Are there clear lines of reasoning that take us from the facts to the claims and back?
  4. Do the warrants themselves require additional backing to be credible? Is there evidence to support their claims?
  5. What counter-arguments could potentially invalidate our claims, and do we have a credible rebuttal to refute them?
  6. Where do we draw the line between our claims and these alternatives?
  7.  Based on all the previous steps, is there some qualification we can apply to our claim to secure its overall justification more firmly? What is our final statement of our core claim?

By applying this model to our beliefs, we can transform them into justified beliefs. But that still leaves one question: are they true?

To address the question of truth, we have to draw upon the resources of the highest stair in our model, the one labeled theory. There are multiple theories of truth, but three stand out in particular:

  1. The correspondence theory, which says that claims are true when they are consistent with how things actually turn out to be, leading to a verifiable view of the world.
  2. The coherence theory, which says that claims are true when they are consistent with all the other claims you believe, leading to a coherent view of the world.
  3. The pragmatic theory, which says that claims are true when you act on them and your actions are consistent with your intentions, leading to an effective view of the world.

Rather than think of these theories as competing with one another, consider them as three dimensions of one and the same thing, namely knowledge that helps further one’s strategy for living. In that context, knowledge does indeed consist of justified true beliefs. It emerges from language contributing facts, interacting with narratives contributing beliefs, tested by analytics contributing justification, and confirmed by theory contributing truth. In this context, it is neither complicated nor mysterious.

That’s what I think. What do you think?

Image Credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Was Your AI Strategy Developed by the Underpants Gnomes?


GUEST POST from Robyn Bolton

“It just popped up one day. Who knows how long they worked on it or how many millions were spent. They told us to think of it as ChatGPT but trained on everything our company has ever done, so we can ask it anything and get an answer immediately.”

The words my client was using to describe her company’s new AI Chatbot made it sound like a miracle. Her tone said something else completely.

“It sounds helpful,” I offered. “Have you tried it?”

“I’m not training my replacement! And I’m not going to train my R&D, Supply Chain, Customer Insights, or Finance colleagues’ replacements either. And I’m not alone. I don’t think anyone’s using it because the company just announced they’re tracking usage and, if we don’t use it daily, that will be reflected in our performance reviews.”

All I could do was sigh. The Underpants Gnomes have struck again.

Who are the Underpants Gnomes?

The Underpants Gnomes are the stars of a 1998 South Park episode described by media critic Paul Cantor as “the most fully developed defense of capitalism ever produced.”

Claiming to be business experts, the Underpants Gnomes sneak into South Park residents’ homes every night and steal their underpants. When the boys confront them in their underground lair, the Gnomes explain their business plan:

  1. Collect underpants
  2. ?
  3. Profit

It was meant as satire.

Some took it as an abbreviated MBA.

How to Spot the Underpants AI Gnomes

As the AI hype grew, fueling executive FOMO (Fear of Missing Out), the Underpants Gnomes, cleverly disguised as experts, entrepreneurs, and consultants, saw their opportunity.

  1. Sell AI
  2. ?
  3. Profit

While they’ve pivoted their business focus, they haven’t improved their operations, so the Underpants AI Gnomes are still easy to spot:

  1. Investment without Intention: Is your company investing in AI because it’s “essential to future-proofing the business?” That sounds good, but if your company can’t explain the future it’s proofing itself against and how AI builds a moat or a life preserver in that future, it’s a sign that the Gnomes are in the building.
  2. Switches, not Solutions: If your company thinks that AI adoption is as “easy as turning on Copilot” or “installing a custom GPT chatbot,” the Gnomes are gaining traction. AI is a tool, and you need to teach people how to use tools, build processes to support the change, and demonstrate the benefit.
  3. Activity without Achievement: When MIT published research indicating that 95% of corporate Gen AI pilots were failing, it was a sign of just how deeply the Gnomes have infiltrated companies. Experiments are essential at the start of any new venture, but they are only useful if they generate replicable and scalable learning.

How to Defend Against the AI Gnomes

Odds are the Gnomes are already in your company. But fear not: you can still turn “Phase 2: ?” into something that actually leads to “Phase 3: Profit.”

  1. Start with the end in mind: Be specific about the outcome you are trying to achieve. The answer should be agnostic of AI and tied to business goals.
  2. Design with people at the center: Achieving your desired outcomes requires rethinking and redesigning existing processes. Strategic creativity like that requires combining people, processes, and technology to achieve the outcome and embed the change.
  3. Develop with discipline: Just because you can (run a pilot, sign up for a free trial), doesn’t mean you should. Small-scale experiments require the same degree of discipline as multi-million-dollar digital transformations. So, if you can’t articulate what you need to learn and how it contributes to the bigger goal, move on.

AI, in all its forms, is here to stay. But the same doesn’t have to be true for the AI Gnomes.

Have you spotted the Gnomes in your company?

Image credit: AI Underpants Gnomes (just kidding, Google Gemini made the image)


Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation

LAST UPDATED: December 8, 2025 at 4:56 PM


GUEST POST from Art Inteligencia

For the last decade, Artificial Intelligence (AI) has lived primarily on our screens and in the cloud — a brain without a body. While large language models (LLMs) and predictive algorithms have revolutionized data analysis, they have done little to change the physical experience of work, commerce, and daily life. This is the innovation chasm we must now bridge.

The next great technological leap is Embodied Artificial Intelligence (EAI): the convergence of advanced robotics (the body) and complex, generalized AI (the brain). EAI systems are designed not just to process information, but to operate autonomously and intelligently within our physical world. This is a profound shift for Human-Centered Innovation, because EAI promises to eliminate the drudgery, danger, and limitations of physical labor, allowing humans to focus exclusively on tasks that require judgment, creativity, and empathy.

The strategic deployment of EAI requires a shift in mindset: organizations must view these agents not as mechanical replacements, but as co-creators that augment and elevate the human experience. The most successful businesses will be those that unlearn the idea of human vs. machine and embrace the model of Human-Embodied AI Symbiosis.

The EAI Opportunity: Three Human-Centered Shifts

EAI accelerates change by enabling three crucial shifts in how we organize work and society:

1. The Shift from Automation to Augmentation

Traditional automation replaces repetitive tasks. EAI offers intelligent augmentation. Because EAI agents learn and adapt in real-time within dynamic environments (like a factory floor or a hospital), they can handle unforeseen situations that script-based robots cannot. This means the human partner moves from supervising a simple process to managing the exceptions and optimizations of a sophisticated one. The human job becomes about maximizing the intelligence of the system, not the efficiency of the body.

2. The Shift from Efficiency to Dignity

Many essential human jobs are physically demanding, dangerous, or profoundly repetitive. EAI offers a path to remove humans from these undignified roles — the loading and unloading of heavy boxes, inspection of hazardous infrastructure, or the constant repetition of simple assembly tasks. This frees human capital for high-value interaction, fostering a new organizational focus on the dignity of work. Organizations committed to Human-Centered Innovation must prioritize the use of EAI to eliminate physical risk and strain.

3. The Shift from Digital Transformation to Physical Transformation

For decades, digital transformation has been the focus. EAI catalyzes the necessary physical transformation. It closes the loop between software and reality. An inventory algorithm that predicts demand can now direct a bipedal robot to immediately retrieve and prepare the required product from a highly chaotic warehouse shelf. This real-time, physical execution based on abstract computation is the true meaning of operational innovation.

Case Study 1: Transforming Infrastructure Inspection

Challenge: High Risk and Cost in Critical Infrastructure Maintenance

A global energy corporation (“PowerLine”) faced immense risk and cost in maintaining high-voltage power lines, oil pipelines, and sub-sea infrastructure. These tasks required sending human crews into dangerous, often remote, or confined spaces for time-consuming, repetitive visual inspections.

EAI Intervention: Autonomous Sensory Agents

PowerLine deployed a fleet of autonomous, multi-limbed EAI agents equipped with advanced sensing and thermal imaging capabilities. These robots were trained not just on pre-programmed routes, but on the accumulated, historical data of human inspectors, learning to spot subtle signs of material stress and structural failure — a skill previously reserved for highly experienced humans.

  • The EAI agents performed 95% of routine inspections, capturing data with superior consistency.
  • Human experts unlearned routine patrol tasks and focused exclusively on interpreting the EAI data flags and designing complex repair strategies.

The Outcome:

The use of EAI led to a 70% reduction in inspection time and, critically, a near-zero rate of human exposure to high-risk environments. This strategic pivot proved that EAI’s greatest value is not economic replacement, but human safety and strategic focus. The EAI provided a foundational layer of reliable, granular data, enabling human judgment to be applied only where it mattered most.

Case Study 2: Elderly Care and Companionship

Challenge: Overstretched Human Caregivers and Isolation

A national assisted living provider (“ElderCare”) struggled with caregiver burnout and increasing costs, while many residents suffered from emotional isolation due to limited staff availability. The challenge was profoundly human-centered: how to provide dignity and aid without limitless human resources.

EAI Intervention: The Adaptive Care Companion

ElderCare piloted the use of adaptive, humanoid EAI companions in low-acuity environments. These agents were programmed to handle simple, repetitive physical tasks (retrieving dropped items, fetching water, reminding patients about medication) and, critically, were trained on empathetic conversation models.

  • The EAI agents managed 60% of non-essential, fetch-and-carry tasks, freeing up human nurses for complex medical care and deep, personalized interaction.
  • The EAI’s conversation logs provided caregivers with Small Data insights into the emotional state and preferences of the residents, allowing the human staff to maximize the quality of their face-to-face time.

The Outcome:

The pilot resulted in a 30% reduction in nurse burnout and, most importantly, a measurable increase in resident satisfaction and self-reported emotional well-being. The EAI was deployed not to replace the human touch, but to protect and maximize its quality by taking on the physical burden of routine care. The innovation successfully focused human empathy where it had the greatest impact.

The EAI Ecosystem: Companies to Watch

The race to commercialize EAI is accelerating, driven by the realization that AI needs a body to unlock its full economic potential. Organizations should be keenly aware of the leaders in this ecosystem. Companies like Boston Dynamics, known for advanced mobility and dexterity, are pioneering the physical platforms. Startups such as Sanctuary AI and Figure AI are focused on creating general-purpose humanoid robots capable of performing diverse tasks in unstructured environments, integrating advanced large language and vision models into physical forms. Simultaneously, major players like Tesla with its Optimus project and research divisions within Google DeepMind are laying the foundational AI models necessary for EAI agents to learn and adapt autonomously. The most promising developments are happening at the intersection of sophisticated hardware (the actuators and sensors) and generalized, real-time control software (the brain).

Conclusion: A New Operating Model

Embodied AI is not just another technology trend; it is the catalyst for a radical change in the operating model of human civilization. Leaders must stop viewing EAI deployment as a simple capital expenditure and start treating it as a Human-Centered Innovation project. Your strategy should be defined by the question: How can EAI liberate my best people to do their best, most human work? Embrace the complexity, manage the change, and utilize the EAI revolution to drive unprecedented levels of dignity, safety, and innovation.

“The future of work is not AI replacing humans; it is EAI eliminating the tasks that prevent humans from being fully human.”

Frequently Asked Questions About Embodied Artificial Intelligence

1. How does Embodied AI differ from traditional industrial robotics?

Traditional industrial robots are fixed, single-purpose machines programmed to perform highly repetitive tasks in controlled environments. Embodied AI agents are mobile, often bipedal or multi-limbed, and are powered by generalized AI models, allowing them to learn, adapt, and perform complex, varied tasks in unstructured, human environments.

2. What is the Human-Centered opportunity of EAI?

The opportunity is the elimination of the “3 Ds” of labor: Dangerous, Dull, and Dirty. By transferring these physical burdens to EAI agents, organizations can reallocate human workers to roles requiring social intelligence, complex problem-solving, emotional judgment, and creative innovation, thereby increasing the dignity and strategic value of the human workforce.

3. What does “Human-Embodied AI Symbiosis” mean?

Symbiosis refers to the collaborative operating model where EAI agents manage the physical execution and data collection of routine, complex tasks, while human professionals provide oversight, set strategic goals, manage exceptions, and interpret the resulting data. The systems work together to achieve an outcome that neither could achieve efficiently alone.

Your first step toward embracing Embodied AI: Identify the single most physically demanding or dangerous task in your organization that is currently performed by a human. Begin a Human-Centered Design project to fully map the procedural and emotional friction points of that task, then use those insights to define the minimum viable product (MVP) requirements for an EAI agent that can eliminate that task entirely.

UPDATE – Here is an infographic of the key points of this article that you can download:

Embodied Artificial Intelligence Infographic

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: 1 of 1,000+ quote slides for your meetings & presentations at http://misterinnovation.com







Important Questions for Innovation


GUEST POST from Mike Shipulski

Here are some important questions for innovation.

What’s the Distinctive Value Proposition? The new offering must help the customer make progress. How does the customer benefit? How is their life made easier? How does this compare to the existing offerings? Summarize the difference on one page. If the innovation doesn’t help the customer make progress, it’s not an innovation.

Is it too big or too small? If the project could deliver sales growth that would dwarf the existing sales numbers for the company, the endeavor is likely too big. The company mindset and philosophy would have to be destroyed. Are you sure you’re up to the challenge? If the project could deliver only a small increase in sales, it’s likely not worth the time and expense. Think return on investment. There’s no right answer, but it’s important to ask the question and set the limits for too big and too small. If it could grow to 10% of today’s sales numbers, that’s probably about right.

Why us? There’s got to be a reason why you’re the right company to do this new work. List the company’s strengths that make the work possible. If you have several strengths that give you an advantage, that’s great. And if one of your weaknesses gives you an advantage, that works too. Step on the accelerator. If none of your strengths give you an advantage, choose another project.

How do we increase our learning rate? First thing, define Learning Objectives (LOs). And once defined, create a plan to achieve them quickly. Here’s a hint. Define what it takes to satisfy the LOs. Here’s another hint. Don’t build a physical prototype. Instead, create a website that describes the potential offering and its value proposition and ask people if they want to buy it. Collect the data and refine the offering based on your learning. Or, create a one-page sales tool and show it to ten potential customers. Define your learning and use the learning to decide what to do next.

Then what? If the first phase of the work is successful, there must be a then what. There must be an approved plan (funding, resources) for the second phase before the first phase starts. And the same thing goes for the follow-on phases. The easiest way to improve innovation effectiveness is to avoid starting phase one of projects when their phase two is unfunded. The fastest innovation project is the wrong one that never starts.

How do we start? Define how much money you want to spend. Formalize your business objectives. Choose projects that could meet your business objectives. Free up your best people. Learn as quickly as you can.

Image credit: Unsplash







Is OpenAI About to Go Bankrupt?

LAST UPDATED: December 4, 2025 at 4:48 PM


GUEST POST from Chateau G Pato

The innovation landscape is shifting, and the tremors are strongest in the artificial intelligence (AI) sector. For a moment, OpenAI felt like an impenetrable fortress, the company that cracked the code and opened the floodgates of generative AI to the world. But now, as a thought leader focused on Human-Centered Innovation, I see the classic signs of disruption: a growing competitive field, a relentless cash burn, and a core product advantage that is rapidly eroding. The question of whether OpenAI is on the brink of bankruptcy isn’t just about sensational headlines — it’s about the fundamental sustainability of a business model built on unprecedented scale and staggering cost.

The “Code Red” announcement from OpenAI, ostensibly about maintaining product quality, was a subtle but profound concession. It was an acknowledgment that the days of unchallenged superiority are over. This came as competitors like Google’s Gemini and Anthropic’s Claude are not just keeping pace, but in many key performance metrics, they are reportedly surpassing OpenAI’s flagship models. Performance parity, or even outperformance, is a killer in the technology adoption curve. When the superior tool is also dramatically cheaper, the choice for enterprises and developers — the folks who pay the real money — becomes obvious.

The Inevitable Crunch: Performance and Price

The competitive pressure is coming from two key vectors: performance and cost-efficiency. While the public often focuses on benchmark scores like MMLU or coding abilities — where models like Gemini and Claude are now trading blows or pulling ahead — the real differentiator for business users is price. New models, including the China-based DeepSeek, are entering the market with reported capabilities approaching the frontier models but at a fraction of the development and inference cost. DeepSeek’s reportedly low development cost highlights that the efficiency of model creation is also improving outside of OpenAI’s immediate sphere.

Crucially, the open-source movement, championed by models like Meta’s Llama family, introduces a zero-cost baseline that fundamentally caps the premium OpenAI can charge. Llama, and the rapidly improving ecosystem around it, means that a good-enough, customizable, and completely free model is always an option for businesses. This open-source competition bypasses the high-cost API revenue model entirely, forcing closed-source providers to offer a quantum leap in utility to justify the expenditure. This dynamic accelerates the commoditization of foundational model technology, turning OpenAI’s once-unique selling proposition into a mere feature.

OpenAI’s models, for all their power, have been famously expensive to run — a cost that gets passed on through their API. The rise of sophisticated, cheaper alternatives — many of which employ highly efficient architectures like Mixture-of-Experts (MoE) — means the competitive edge of sheer scale is being neutralized by engineering breakthroughs in efficiency. If the next step in AI on its way to artificial general intelligence (AGI) is a choice between a 10% performance increase and a 10x cost reduction for 90% of the performance, the market will inevitably choose the latter. This is a structural pricing challenge that erodes one of OpenAI’s core revenue streams: API usage.

The Financial Chasm: Burn Rate vs. Reserves

The financial situation is where the “bankruptcy” narrative gains traction. Developing and running frontier AI models is perhaps the most capital-intensive venture in corporate history. Reports — which are often conflicting and subject to interpretation — paint a picture of a company with an astronomical cash burn rate. Estimates for annual operational and development expenses are in the billions of dollars, resulting in a net loss measured in the billions.

This reality must be contrasted with the position of their main rivals. While OpenAI is heavily reliant on Microsoft’s monumental investment — a complex deal involving cash and Azure cloud compute credits — Microsoft’s exposure is structured as a strategic infrastructure play. The real financial behemoth is Alphabet (Google), which can afford to aggressively subsidize its Gemini division almost indefinitely. Alphabet’s near-monopoly on global search engine advertising generates profits in the tens of billions of dollars every quarter. This virtually limitless reservoir of cash allows Google to cross-subsidize Gemini’s massive research, development, and inference costs, effectively enabling them to engage in a high-stakes price war that smaller, loss-making entities like OpenAI cannot truly win on a level playing field. Alphabet’s strategy is to capture market share first, using the profit engine of search to buy time and scale, a luxury OpenAI simply does not have without a continuous cash injection from a partner.

The question is not whether OpenAI has money now, but whether their revenue growth can finally eclipse their accelerating costs before their massive reserve is depleted. Their long-term financial projections, which foresee profitability and revenues in the hundreds of billions by the end of the decade, require not just growth, but a sustained, near-monopolistic capture of the new AI-driven knowledge economy. That becomes increasingly difficult when competitors are faster, cheaper, and arguably better, and have access to deeper, more sustainable profit engines for cross-subsidization.
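The “countdown timer” arithmetic behind this can be sketched with a toy runway model. Every figure and growth rate below is purely illustrative, not OpenAI’s actual financials:

```python
# Toy cash-runway model: years until reserves run out, given
# compounding revenue and cost growth. All inputs are hypothetical.

def runway_years(reserves, annual_costs, annual_revenue,
                 revenue_growth, cost_growth, max_years=15):
    """Return the year reserves are depleted, or None if revenue
    overtakes costs (or the horizon is reached) before cash runs out."""
    cash = reserves
    for year in range(1, max_years + 1):
        net = annual_revenue - annual_costs
        if net >= 0:
            return None  # turned profitable; runway no longer finite
        cash += net
        if cash <= 0:
            return year
        annual_revenue *= 1 + revenue_growth
        annual_costs *= 1 + cost_growth
    return None

# Hypothetical scenario: $20B reserves, $14B costs, $4B revenue,
# revenue growing 60%/yr, costs growing 40%/yr.
print(runway_years(20e9, 14e9, 4e9, 0.60, 0.40))  # → 2
```

The point of the sketch is the structural race it makes visible: if costs compound anywhere near as fast as revenue, the only variables that matter are the size of the reserve and the growth gap.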

The Future Outlook: Change or Consequence

OpenAI’s future is not doomed, but the company must initiate a rapid, human-centered transformation. The current trajectory — relying on unprecedented capital expenditure to maintain a shrinking lead in model performance — is structurally unsustainable in the face of faster, cheaper, and increasingly open-source models like Meta’s Llama. The next frontier isn’t just AGI; it’s AGI at scale, delivered efficiently and affordably.

OpenAI must pivot from a model of monolithic, expensive black-box development to one that prioritizes efficiency, modularity, and a true ecosystem approach. This means a rapid shift to MoE architectures, aggressive cost-cutting in inference, and a clear, compelling value proposition beyond just “we were first.” Human-Centered Innovation principles dictate that a company must listen to the market — and the market is shouting for price, performance, and flexibility. If OpenAI fails to execute this transformation and remains an expensive, marginal performer, its incredible cash reserves will serve only as a countdown timer to a necessary and painful restructuring.

Frequently Asked Questions (FAQ)

  • Is OpenAI currently profitable?
    OpenAI is currently operating at a significant net loss. Its annual cash burn rate, driven by high R&D and inference costs, reportedly exceeds its annual revenue, meaning it relies heavily on its massive cash reserves and the strategic investment from Microsoft to sustain operations.
  • How are Gemini and Claude competing against OpenAI on cost and performance?
    Competitors like Google’s Gemini and Anthropic’s Claude are achieving performance parity or superiority on key benchmarks. Furthermore, they are often cheaper to use (lower inference cost) due to more efficient architectures (like MoE) and the ability of deep-pocketed backers such as Alphabet to cross-subsidize their AI divisions with enormous profits from other revenue streams, such as search engine advertising.
  • What was the purpose of OpenAI’s “Code Red” announcement?
    The “Code Red” was an internal or public acknowledgment by OpenAI that its models were facing performance and reliability degradation in the face of intense, high-quality competition from rivals. It signaled a necessary, urgent, company-wide focus on addressing these issues to restore and maintain a technological lead.

UPDATE: Just found on X that HSBC has said that OpenAI is going to have nearly half a trillion dollars in operating losses through 2030, per the Financial Times (FT). Here is the chart of their $100 billion in projected losses in 2029. With the success of Gemini, Claude, DeepSeek, Llama, and competitors yet to emerge, the revenue piece may be overstated:

OpenAI estimated 2029 financials

Image credits: Google Gemini, Financial Times
