Category Archives: Technology

Can AI Replace the CEO?

A Day in the Life of the Algorithmic Executive

LAST UPDATED: December 28, 2025 at 1:56 PM

GUEST POST from Art Inteligencia

We are entering an era where the corporate antibody – that natural organizational resistance to disruptive change – is meeting its most formidable challenger yet: the AI CEO. For years, we have discussed the automation of the factory floor and the back office. But what happens when the “useful seeds of invention” are planted in the corner office?

The suggestion that an algorithm could lead a company often triggers an immediate emotional response. Critics argue that leadership requires soul, while proponents point to the staggering inefficiencies, biases, and ego-driven errors that plague human executives. As an advocate for Innovation = Change with Impact, I believe we must look beyond the novelty and analyze the strategic logic of algorithmic leadership.

“Leadership is not merely a collection of decisions; it is the orchestration of human energy toward a shared purpose. An AI can optimize the notes, but it cannot yet compose the symphony or inspire the orchestra to play with passion.”

Braden Kelley

The Efficiency Play: Data Without Drama

The argument for an AI CEO rests on the pursuit of Truly Actionable Data. Humans are limited by cognitive load, sleep requirements, and emotional variance. An AI executive, by contrast, operates in Future Present mode — constantly processing global market shifts, supply chain micro-fluctuations, and internal sentiment analysis in real-time. It doesn’t have a “bad day,” and it doesn’t make decisions based on who it had lunch with.

Case Study 1: NetDragon Websoft and the “Tang Yu” Experiment

The Experiment: A Virtual CEO in a Gaming Giant

In 2022, NetDragon Websoft, a major Chinese gaming and mobile app company, appointed an AI-powered virtual humanoid named Tang Yu as the Rotating CEO of its subsidiary. This wasn’t just a marketing stunt; it was a structural integration into the management flow.

The Results

Tang Yu was tasked with streamlining workflows, improving the quality of work tasks, and enhancing the speed of execution. Over the following year, the company reported that Tang Yu helped the subsidiary outperform the broader Hong Kong stock market. Tang Yu also served as a real-time data hub, and its signature was required for document approvals and risk assessments. The experiment showed that in data-rich environments where speed of iteration is the primary competitive advantage, an algorithmic leader can significantly reduce operational friction.

Case Study 2: Dictador’s “Mika” and Brand Stewardship

The Challenge: The Face of Innovation

Dictador, a luxury rum producer, took the concept a step further by appointing Mika, a sophisticated female humanoid robot, as their CEO. Unlike Tang Yu, who worked mostly within internal systems, Mika serves as a public-facing brand steward and high-level decision-maker for their DAO (Decentralized Autonomous Organization) projects.

The Insight

Mika’s role highlights a different facet of leadership: Strategic Pattern Recognition. Mika analyzes consumer behavior and market trends to select artists for bottle designs and lead complex blockchain-based initiatives. While Mika lacks human empathy, the company uses her to demonstrate unbiased precision. However, it also exposes the human-AI gap: while Mika can optimize a product launch, she cannot yet navigate the nuanced political and emotional complexities of a global pandemic or a social crisis with the same grace as a seasoned human leader.

Leading Companies and Startups to Watch

The space is rapidly maturing beyond experimental robot figures. Quantive (with StrategyAI) is building the “operating system” for the modern CEO, connecting KPIs to real-work execution. Microsoft is positioning its Copilot ecosystem to act as a “Chief of Staff” to every executive, effectively automating the data-gathering and synthesis parts of the role. Watch startups like Tessl and Vapi, which are focusing on “Agentic AI” — systems that don’t just recommend decisions but have the autonomy to execute them across disparate platforms.

The Verdict: The Hybrid Future

Will AI replace the CEO? My answer is: not the great ones. AI will certainly replace the transactional CEO — the executive whose primary function is to crunch numbers, approve budgets, and monitor performance. These tasks are ripe for automation because they represent 19th-century management techniques.

However, the transformational CEO — the one who builds culture, navigates ethical gray areas, and creates a sense of belonging — will find that AI is their greatest ally. We must move from fearing replacement to mastering Human-AI Teaming. The CEOs of 2030 will be those who use AI to handle the complexity of the business so they can focus on the humanity of the organization.

Frequently Asked Questions

Can an AI legally serve as a CEO?

Currently, most corporate law jurisdictions require a natural person to serve as a director or officer for liability and accountability reasons. AI “CEOs” like Tang Yu or Mika often operate under the legal umbrella of a human board or chairman who retains ultimate responsibility.

What are the biggest risks of an AI CEO?

The primary risks include Algorithmic Bias (reinforcing historical prejudices found in the data), Lack of Crisis Adaptability (AI struggles with “Black Swan” events that have no historical precedent), and the Loss of Employee Trust if leadership feels cold and disconnected.

How should current CEOs prepare for AI leadership?

Leaders must focus on “Up-skilling for Empathy.” They should delegate data-heavy reporting to AI systems and re-invest that time into Culture Architecture and Change Management. The goal is to become an expert at Orchestrating Intelligence — both human and synthetic.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini


AI Stands for Accidental Innovation

LAST UPDATED: December 29, 2025 at 12:49 PM

GUEST POST from Art Inteligencia

In the world of corporate strategy, we love to manufacture myths of inevitable visionary genius. We look at the behemoths of today and assume their current dominance was etched in stone a decade ago by a leader who could see through the fog of time. But as someone who has spent a career studying Human-Centered Innovation and the mechanics of innovation, I can tell you that the reality is often much messier. And this is no different when it comes to artificial intelligence (AI), so much so that it could be said that AI stands for Accidental Innovation.

Take, for instance, the meteoric rise of Nvidia. Today, they are the undisputed architects of the intelligence age, a company whose hardware powers the Large Language Models (LLMs) reshaping our world. Yet, if we pull back the curtain, we find a story of survival, near-acquisitions, and a heavy dose of serendipity. Nvidia didn’t build their current empire because they predicted the exact nuances of the generative AI explosion; they built it because they were lucky enough to have developed technology for a completely different purpose that happened to be the perfect fuel for the AI fire.

“True innovation is rarely a straight line drawn by a visionary; it is more often a resilient platform that survives its original intent long enough to meet a future it didn’t expect.”

Braden Kelley

The Parallel Universe: The Meta/Oculus Near-Miss

It is difficult to imagine now, but there was a point in the Future Present where Nvidia was seen as a vulnerable hardware player. In the mid-2010s, as the Virtual Reality (VR) hype began to peak, Nvidia’s focus was heavily tethered to the gaming market. Internal histories and industry whispers suggest that the Oculus division of Meta (then Facebook) explored the idea of acquiring or deeply merging with Nvidia’s core graphics capabilities to secure their own hardware vertical.

At the time, Nvidia’s valuation was a fraction of what it is today. Had that acquisition occurred, the “Corporate Antibodies” of a social media giant would likely have stifled the very modularity that makes Nvidia great today. Instead of becoming the generic compute engine for the world, Nvidia might have been optimized—and narrowed—into a specialized silicon shop for VR headsets. It was a sliding doors moment for the entire tech industry. By not being acquired, Nvidia maintained the autonomy to follow the scent of demand wherever it led next.

Case Study 1: The Meta/Oculus Intersection

Before the “Magnificent Seven” era, Nvidia was struggling to find its next big act beyond PC gaming. When Meta acquired Oculus, there was a desperate need for low-latency, high-performance GPUs to make VR viable. The relationship between the two companies was so symbiotic that some analysts argued a vertical integration was the only logical step. Had Mark Zuckerberg moved more aggressively to bring Nvidia under the Meta umbrella, the GPU might have become a proprietary tool for the Metaverse. Because this deal failed to materialize, Nvidia remained an open ecosystem, allowing researchers at Google and OpenAI to eventually use that same hardware for a little thing called a Transformer model.

The Crypto Catalyst: A Fortuitous Detour

The second major “accident” in Nvidia’s journey was the Cryptocurrency boom. For years, Nvidia’s stock and production cycles were whipped around by the price of Ethereum. To the outside world, this looked like a distraction—a volatile market that Nvidia was chasing to satisfy shareholders. However, the crypto miners demanded exactly what AI would later require: massive, parallel processing power and specialized chips (ASICs and high-end GPUs) that could perform simple calculations millions of times per second.

Nvidia leaned into this demand, refining their CUDA platform and their manufacturing scale. They weren’t building for LLMs yet; they were building for miners. But in doing so, they solved the scalability problem of parallel computing. When the “AI Winter” ended and the industry realized that Deep Learning was the path forward, Nvidia didn’t have to invent a new chip. They just had to rebrand the one they had already perfected for the blockchain. Preparation met opportunity, but the opportunity wasn’t the one they had initially invited to the dance.

Case Study 2: From Hashes to Tokens

In 2021, Nvidia’s primary concern was shipping “Lite Hash Rate” (LHR) cards to deter crypto miners so gamers could finally buy GPUs. That era of forced scaling pushed Nvidia to master the art of data-center-grade reliability. When ChatGPT arrived, the transition was seamless. The “Accidental Innovation” here was that the massively parallel arithmetic required to verify blocks on a chain exercises the same hardware strengths (thousands of cores grinding through simple operations in lockstep) as the matrix mathematics required to predict the next word in a sentence. Nvidia had built the world’s best token-prediction machine while thinking they were building the world’s best ledger-validation machine.

Leading Companies and Startups to Watch

While Nvidia currently sits on the throne of Accidental Innovation, the next wave of change-makers is already emerging by attempting to turn that accident into a deliberate architecture. Cerebras Systems is building “wafer-scale” engines that dwarf traditional GPUs, aiming to eliminate the networking bottlenecks that Nvidia’s “accidental” legacy still carries. Groq (not to be confused with the AI model) is focusing on LPU (Language Processing Units) that prioritize the inference speed necessary for real-time human interaction. In the software layer, Modular is working to decouple the AI software stack from specific hardware, potentially neutralizing Nvidia’s CUDA moat. Finally, keep an eye on CoreWeave, which has pivoted from crypto mining to become a specialized “AI cloud,” proving that Nvidia’s accidental path is a blueprint others can follow by design.

The Human-Centered Conclusion

We must stop teaching innovation as a series of deliberate masterstrokes. When we do that, we discourage leaders from experimenting. If you believe you must see the entire future before you act, you will stay paralyzed. Nvidia’s success is a testament to Agile Resilience. They built a powerful, flexible tool, stayed independent during a crucial acquisition window, and were humble enough to let the market show them what their technology was actually good for.

As we move into this next phase of the Future Present, the lesson is clear: don’t just build for the world you see today. Build for the accidents of tomorrow. Because in the end, the most impactful innovations are rarely the ones we planned; they are the ones we were ready for.

Frequently Asked Questions

Why is Nvidia’s success considered “accidental”?

While Nvidia’s leadership was visionary in parallel computing, their current dominance in AI stems from the fact that hardware they optimized for gaming and cryptocurrency mining turned out to be the exact architecture needed for Large Language Models (LLMs), a use case that wasn’t the primary driver of their R&D for most of their history.

Did Meta almost buy Nvidia?

Historical industry analysis suggests that during the early growth of Oculus, there were significant internal discussions within Meta (Facebook) about vertically integrating hardware. While a formal acquisition of the entire Nvidia corporation was never finalized, the close proximity and the potential for such a deal represent a “what if” moment that would have fundamentally changed the AI landscape.

What is the “CUDA moat”?

CUDA is Nvidia’s proprietary software platform that allows developers to use GPUs for general-purpose processing. Because Nvidia spent years refining this for various industries (including crypto), it has become the industry standard. Most AI developers write code specifically for CUDA, making it very difficult for them to switch to competing chips from AMD or Intel.

Image credits: Google Gemini


The Technology of Tomorrow Requires Ecosystems Today

GUEST POST from Greg Satell

There are a number of stories about what led Hans Lipperhey to submit a patent for the telescope in 1608. Some say that he saw two children in his shop playing with lenses, who discovered that when they put one lens in front of the other they could see a weather vane across the street. Others say it was an apprentice who noticed the telescopic effect.

Yet the more interesting question is how such an important discovery could have such prosaic origins. Why was it only then that somebody noticed that looking through two lenses would magnify objects, and not before? How could the discovery have been made in a humble workshop and not by some great personage?

The truth is that history tends to converge and cascade around certain places and times, such as Cambridge before World War I, Vienna in the 1920s or, more recently, in Silicon Valley. In each case, we find that there were ecosystems that led to the inventions that changed the world. If we are going to build a more innovative economy, that’s where we need to focus.

How The Printing Press Led To A New Era Of Science

The mystery surrounding the invention of the telescope in the early 1600s begins to make more sense when you consider that the printing press was invented a little over a century before. By the mid-1500s books had been transformed from priceless artifacts rarely seen outside monasteries into something common enough for people to keep in their homes.

As literacy flourished, the need for spectacles grew exponentially and lens making became a much more common trade. With so many lenses around, it was only a matter of time before someone figured out that combining two lenses would create a compound effect and result in magnification (the microscope was invented around the same time).

From there, things began to move quickly. In 1609, Galileo Galilei first used the telescope to explore the heavens and changed our conception of the universe. He was able to see stars that were invisible to the naked eye, mountains and valleys on the moon and noticed that, similar to the moon, Venus had phases suggesting that it revolved around the sun.

A half century later, Antonie van Leeuwenhoek built himself a microscope and discovered an entirely new world made up of cells and fibers far too small for the human eye to detect. For the first time we became aware of bacteria and protozoa, creating the new field of microbiology. The world began to move away from ancient superstition and into one of observation and deduction.

It’s hard to see how any of this could have been foreseen when Gutenberg printed his first bible. Galileo and van Leeuwenhoek were products of their age as much as they were creators of the future.

How The Light Bulb Helped To Reshape Life, Work And Diets

In 1882, just three years after he had almost literally shocked the world with his revolutionary lighting system, Thomas Edison opened his Pearl Street Station, the first commercial electrical distribution plant in the United States. By 1884 it was already servicing over 500 homes. Yet for the next few decades, electric light remained mostly a curiosity.

As the economist Paul David explains in The Dynamo and the Computer, electricity didn’t have a measurable impact on the economy until the early 1920s, roughly 40 years after Edison’s plant opened. The problem wasn’t with electricity itself (Edison quickly expanded his distribution network, as did his rival George Westinghouse) but with a lack of complementary technologies.

To truly impact productivity, factories had to be redesigned to function not around a single steam turbine, but with smaller electric motors powering each machine. That created the opportunity to reimagine work itself, which led to the study of management. Greater productivity raised living standards and gave rise to a new consumer culture.

Much like with the printing press, the ecosystem created by electric light led to secondary and tertiary inventions. Radios changed the way people received information and were entertained. Refrigeration meant not only that food could be kept fresh, but that it could be sent over large distances, reshaping agriculture and greatly improving diets.

The Automobile And The Category Killer

The internal combustion engine was developed in the late 1870s and early 1880s. Two of its primary inventors, Gottlieb Daimler and Karl Benz, began developing cars in the mid-1880s. Henry Ford came two decades later. By pioneering the assembly line, he transformed cars from an expensive curiosity into a true “product for the masses,” and it was this transformation that led to the automobile’s major impact.

When just a few people have a car, it is merely a mode of transportation. But when everyone has a car, it becomes a force that reshapes society. People move from crowded cities into bedroom communities in the suburbs. Social relationships change, especially for farmers who previously lived their entire lives within a single day’s horse ride of 10 or 12 square miles. Lives open up. Worlds broaden.

New infrastructure, like roads and gas stations, was built. Improved logistics began to reshape supply chains, and factories moved from cities in the north (close to customers) to small towns in the south, where labor and land were cheaper. That improved the economics of manufacturing, raised incomes and enriched lives.

With the means to easily carry a week’s worth of groceries, corner stores were replaced by supermarkets. Eventually suburbs formed and shopping malls sprang up. In the US, Little League baseball became popular. Mobility, combined with the productivity effects of electricity, reshaped almost every facet of life: where we lived, worked and shopped.

Embarking On A New Era Of Innovation

These days, it seems that every time you turn around you see some breakthrough technology that will change our lives. We see media reports about computing breakthroughs, miracle cures, new sources of energy and more. Unfortunately, very few will ever see the outside of a lab and even fewer will prove commercially viable enough to impact our lives.

Don’t get me wrong. Many of these are real discoveries produced by serious scientists and reported by reputable sources. The problem is with how science works. At any given time there are a myriad of exciting possibilities, but very few pan out and even the ones that do usually take decades to make an impact.

Digital technology is a great example of how this happens. As AnnaLee Saxenian explained in Regional Advantage, back in the 1970s and 80s, when Boston was the center of the technology universe, Silicon Valley invested in an ecosystem, which included not just corporations, but scientific labs, universities and community colleges. New England rejected that approach. The results speak for themselves.

If you want to understand the technology of tomorrow, don’t try to imagine an idea no one has ever thought of, but look at the problems people are working on today. You’ll find a vast network working on quantum computing, a significant synthetic biology economy, a large-scale effort in materials science and billions of dollars invested into energy storage startups.

That’s why, if we are to win the future, we need to invest in ecosystems. It’s the nodes that grab attention, but the networks that make things happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


The Rise of Human-AI Teaming Platforms

Designing Partnership, Not Replacement

LAST UPDATED: December 26, 2025 at 4:44 PM

GUEST POST from Art Inteligencia

In the rush to adopt artificial intelligence, too many organizations are making a fundamental error. They view AI through the lens of 19th-century industrial automation: a tool to replace expensive human labor with cheaper, faster machines. This perspective is not only shortsighted; it is a recipe for failed digital transformation.

As a human-centered change leader, I argue that the true potential of this era lies not in artificial intelligence alone, but in Augmented Intelligence derived from sophisticated collaboration. We are moving past simple chatbots and isolated algorithms toward comprehensive Human-AI Teaming Platforms. These are environments designed not to remove the human from the loop, but to create a symbiotic workflow where humans and synthetic agents operate as cohesive units, leveraging their respective strengths concurrently.

“Organizations don’t fail because AI is too difficult to adopt. They fail because they never designed how humans and AI would think together and work together.”

Braden Kelley

The Cognitive Collaborative Shift

A Human-AI Teaming Platform differs significantly from standard enterprise software. Traditional tools wait for human input. A teaming platform is proactive; it observes context, anticipates needs, and offers suggestions seamlessly within the flow of work.

The challenge for leadership here is less technological and more cultural. How do we foster psychological safety when a team member is an algorithm? How do we redefine accountability when decisions are co-authored by human judgment and machine probability? Success requires a deliberate shift from managing subordinate tools to orchestrating collaborative partners.

“The ultimate goal of Human-AI teaming isn’t just to build faster organizations, but to build smarter, more adaptable ones. It is about creating a symbiotic relationship where the computational velocity of AI amplifies – rather than replaces – the creative, empathetic, and contextual genius of humans.”

Braden Kelley

When designed correctly, these platforms handle the high-volume cognitive load—data pattern recognition, probabilistic forecasting, and information retrieval—freeing human brains for high-value tasks like ethical reasoning, strategic negotiation, and complex emotional intelligence.

Case Studies in Symbiosis

To understand the practical application of these platforms, we must look at sectors where the cost of error is high and data volumes are overwhelming.

Case Study 1: Mastercard and the Decision Management Platform

In the high-stakes world of global finance, fraud detection is a constant battle against increasingly sophisticated bad actors. Mastercard has moved beyond simple automated flags to a genuine human-AI teaming approach with their Decision Intelligence platform.

The Challenge: False positives in fraud detection insult legitimate customers and stop commerce, while false negatives cost billions. No human team can review every transaction in real-time, and rigid rules-based AI often misses nuanced fraud patterns.

The Teaming Solution: Mastercard employs sophisticated AI that analyzes billions of activities in real-time. However, rather than just issuing a binary block/allow decision, the AI acts as an investigative partner to human analysts. It presents a “reasoned” risk score, highlighting why a transaction looks suspicious based on subtle behavioral shifts that a human would miss. The human analyst then applies contextual knowledge—current geopolitical events, specific merchant relationships, or nuanced customer history—to make the final judgment call. The AI learns from this human intervention, constantly refining its future collaborative suggestions.
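
To make that division of labor concrete, here is a minimal Python sketch of a human-in-the-loop triage flow of the kind described above. It is illustrative only: the function names, thresholds, and feedback hook are hypothetical and do not describe Mastercard’s actual Decision Intelligence interfaces.

from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: float        # model's estimated probability that the transaction is fraudulent
    reasons: list[str]  # human-readable factors behind the score ("unusual geography", etc.)

def triage(transaction, model, analyst_review, auto_block=0.98, auto_allow=0.05):
    """Route a transaction: clear-cut cases are automated, ambiguous ones go to a human."""
    assessment = model.assess(transaction)            # hypothetical model interface
    if assessment.score >= auto_block:
        return "block"                                # overwhelming evidence: act immediately
    if assessment.score <= auto_allow:
        return "allow"                                # clearly benign: stay out of the way
    # Ambiguous middle band: the AI explains itself, the human decides with context.
    decision = analyst_review(transaction, assessment)
    model.record_feedback(transaction, assessment, decision)  # hypothetical: learn from the analyst
    return decision

# Example wiring with trivial stand-ins:
class StubModel:
    def assess(self, tx):
        return RiskAssessment(score=0.42, reasons=["unusual merchant category"])
    def record_feedback(self, tx, assessment, decision):
        pass

print(triage({"amount": 250}, StubModel(), lambda tx, a: "allow"))  # -> "allow"

The design choice worth noticing is the middle band: the algorithm is not asked to be right about everything, only to be honest about its uncertainty and to route genuinely ambiguous cases to human judgment.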

Case Study 2: Autodesk and Generative Design in Engineering

The field of engineering and manufacturing is transitioning from computer-aided design (CAD) to human-AI co-creation, pioneered by companies like Autodesk.

The Challenge: When designing complex components—like an aerospace bracket to reduce weight while maintaining structural integrity—an engineer is limited by their experience and the time available to iterate on concepts.

The Teaming Solution: Using Autodesk’s generative design platforms, the human engineer doesn’t draw the part. Instead, they define the constraints: materials, weight limits, load-bearing requirements, and manufacturing methods. The AI then acts as a tireless creative partner, generating hundreds or thousands of candidate design solutions that meet those criteria—many utilizing organic shapes no human would instinctively draw. The human engineer then reviews these options, selecting the optimal design based on aesthetics, manufacturability, and cost-effectiveness. The human sets the goal; the AI explores the solution space; the human selects and refines the outcome.
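
A minimal Python sketch of that division of labor appears below. The field names, ranges, and scoring are hypothetical stand-ins, not Autodesk’s API, and real generative design solvers use physics-based topology optimization rather than the random sampling shown here.

import random

# The engineer defines the constraints; the AI explores; the engineer chooses.
constraints = {"max_mass_kg": 1.2, "min_load_n": 15000, "material": "Ti-6Al-4V"}

def explore_design_space(n=1000):
    """Stand-in for the generative solver: propose many candidate geometries."""
    return [{"id": i,
             "mass_kg": round(random.uniform(0.4, 1.5), 3),
             "load_capacity_n": round(random.uniform(10000, 40000))}
            for i in range(n)]

def meets_constraints(c):
    return (c["mass_kg"] <= constraints["max_mass_kg"]
            and c["load_capacity_n"] >= constraints["min_load_n"])

# The AI narrows thousands of options to a feasible shortlist...
shortlist = sorted((c for c in explore_design_space() if meets_constraints(c)),
                   key=lambda c: c["mass_kg"])[:10]

# ...and the human engineer reviews that shortlist for aesthetics,
# manufacturability, and cost before committing to a final design.
for candidate in shortlist:
    print(candidate)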

Leading Platforms and Startups to Watch

The market for these platforms is rapidly bifurcating into massive ecosystem players and niche, workflow-specific innovators.

Among the giants, Microsoft is aggressively positioning its Copilot ecosystem across nearly every knowledge worker touchpoint, turning M365 into the default teaming platform for the enterprise. Salesforce is similarly embedding generative AI deep into its CRM, attempting to turn sales and service records into proactive coaching systems.

However, keep an eye on innovators focused on the mechanics of collaboration. Companies like Atlassian are evolving their suite (Jira, Confluence) to use AI not just to summarize text, but to connect disparate project threads and identify team bottlenecks proactively. In the startup space, look for platforms that are trying to solve the “managerial” layer of AI, helping human leaders coordinate mixed teams of synthetic and biological agents, ensuring alignment and mitigating bias in real-time.

Conclusion: The Leadership Imperative

Implementing Human-AI Teaming Platforms is a change management challenge of the highest order. If introduced poorly, these tools will be viewed as surveillance engines or competitors, leading to resistance and sabotage.

Leaders must communicate a clear vision: AI is brought in to handle the drudgery so humans can focus on the artistry of their professions. The organizations that win in the next decade will not be those with the best AI; they will be the ones with the best relationship between their people and their AI.

Frequently Asked Questions regarding Human-AI Teaming

What is the primary difference between traditional automation and Human-AI teaming?

Traditional automation seeks to replace human tasks entirely to cut costs and increase speed, often removing the human from the loop. Human-AI teaming focuses on augmentation, keeping humans in the loop for complex judgment and creative tasks while leveraging AI for data processing and pattern recognition in a collaborative workflow.

What are the biggest cultural barriers to adopting Human-AI teaming platforms?

The significant barriers include a lack of trust in AI outputs, fear of job displacement among the workforce, and the difficulty of redefining roles and accountability when decisions are co-authored by humans and algorithms.

How do Human-AI teaming platforms improve decision-making?

These platforms improve decision-making by combining the AI’s ability to process vast datasets without fatigue or cognitive bias with the human ability to apply ethical considerations, emotional intelligence, and nuanced contextual understanding to the final choice.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini


Will our opinion still really be our own in an AI Future?

GUEST POST from Pete Foley

Intuitively we all mostly believe our opinions are our own.  After all, they come from that mysterious thing we call consciousness that resides somewhere inside of us. 

But we also know that other people’s opinions are shaped by all sorts of external influences. So unless we as individuals are uniquely immune to influence, it raises the question: ‘how much of what we think, and what we do, is really uniquely us?’ And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people’s political beliefs (https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs). Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general. And that’s quite sobering on a couple of fronts. Firstly, for many people today political beliefs are deeply tied to their value system and sense of self, so this manipulation is potentially profound. Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new. We are all overwhelmed by political marketing during election season. We accept that media has manipulated public opinion for decades, and that social media has amplified this in recent years. Similarly, we’ve all grown up immersed in marketing and advertising designed to influence our decisions, opinions and actions. Meanwhile the rise in prominence of the behavioral sciences in recent decades has provided more structure and efficiency to behavioral influence, literally turning an art into a science. Framing, priming, pre-suasion, nudging and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a highly diverse field that can be deployed as an art or a science. Whether it’s influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition we are largely unaware of implicit influence on ourselves, although we can often see it in others.   And even in hindsight, it’s very difficult to introspect implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event. 

So what does this mean? As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have. Futurism is far from an exact science. But there are a few things about which we can make pretty decent guesses.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good. Mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), health, wellness, job satisfaction, social engagement and, important for many of us, the adoption of beneficial technology and innovation can all benefit from this. And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, by, for example, managing depression, or via preventative behavioral health interventions. Will this be authentic? It’s probably a little Huxley dystopian, but will we care? It’s one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power? As humans, we don’t have a great record of doing the right thing when wealth and power come into the equation. And AI-empowered social, conceptual and behavioral manipulation has the potential to concentrate meaningful power even more than today’s tech-driven society does. Will this be used exclusively for good, or will some seek to leverage it for personal benefit at the expense of the broader community? Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do? Realistically, as individuals we can self-police, but we obviously also face limits in self-awareness of implicit manipulations. That said, we can to some degree still audit ourselves. We’ve probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs. Sometimes we recognize this quickly; other times we may be a little slower. But just simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities. We are better at seeing others being manipulated than ourselves. We can use that as a mirror, and/or call it out to others when we see it. And many of us will find ourselves somewhere in the deployment chain, especially as AI is still in its early stages. For those of us to whom this applies, we have the opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago. It’s so long ago I have to paraphrase, but the essence of the conversation was to never manipulate people into doing something that was not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn’t want, it might have offered initial success, but it would likely have come back to bite me in the long term. They certainly wouldn’t become repeat customers, and a mixture of buyer’s remorse, loss aversion and revenge could turn them into active opponents. This potential for critical thinking in hindsight exists for virtually every situation where outcomes damage the individual.

The bottom line is that even today we already have to continually ask ourselves whether what we see is real and whether our beliefs are truly our own, or whether they have been manipulated. Media and social media memes already play the manipulation game. AI may already be better at it, and if not, it’s only a matter of time before it is. If you think we are politically polarized now, hang onto your hat! But awareness is key. We all need to stay aware, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini


Do You Have Green Nitrogen Fixation?

Innovating a Sustainable Future

LAST UPDATED: December 20, 2025 at 9:01 AM

GUEST POST from Art Inteligencia

Agriculture feeds the world, but its reliance on synthetic nitrogen fertilizers has come at a steep environmental cost. As we confront climate change, waterway degradation, and soil depletion, the innovation challenge of this generation is clear: how to produce nitrogen sustainably. Green nitrogen fixation is not just a technological milestone — it is a systems-level transformation that integrates chemistry, biology, energy, and human-centered design.

The legacy approach — Haber-Bosch — enabled the Green Revolution, yet it locks agricultural productivity into fossil fuel dependency. Today’s innovators are asking a harder question: can we fix nitrogen with minimal emissions, localize production, and make the process accessible and equitable? The answer shapes the future of food, climate, and economy.

The Innovation Imperative

To feed nearly 10 billion people by 2050 without exceeding climate targets, we must decouple nitrogen fertilizer production from carbon-intensive energy systems. Green nitrogen fixation aims to achieve this by harnessing renewable electricity or biological mechanisms that operate at ambient conditions. This means re-imagining production from the ground up.

The implications are vast: lower carbon footprints, reduced nutrient runoff, resilient rural economies, and new pathways for localized fertilizer systems that empower rather than burden farmers.

[Image: Nitrogen Cycle Comparison]

Case Study One: Electrochemical Nitrogen Reduction Breakthroughs

Electrochemical nitrogen reduction uses renewable electricity to convert atmospheric nitrogen into ammonia or other reactive forms. Unlike Haber-Bosch, which requires high heat and pressures, electrochemical approaches can operate at room temperature using novel catalyst materials.
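
For readers who want the chemistry spelled out, the contrast can be captured in two simplified overall reactions; the electrochemical half-reaction shown below glosses over the competing hydrogen evolution reaction that makes real-world selectivity and yield so hard to achieve.

Haber-Bosch (iron catalyst, roughly 400-500 °C and 150-300 atmospheres, hydrogen typically made from natural gas): N₂ + 3H₂ ⇌ 2NH₃

Electrochemical nitrogen reduction (ambient temperature and pressure, electrons supplied by renewable electricity, protons from water): N₂ + 6H⁺ + 6e⁻ → 2NH₃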

One research consortium recently demonstrated that a proprietary catalyst structure significantly increased ammonia yield while maintaining stability over long cycles. Although not yet industrially scalable, this work points to a future where modular electrochemical reactors could be deployed near farms, powered by distributed solar and wind.

What makes this case compelling is not just the chemistry, but the design choice to focus on distributed systems — bringing fertilizer production closer to end users and far from centralized, fossil-fueled plants.

Case Study Two: Engineering Nitrogen Fixation into Staple Crops

Until recently, biological nitrogen fixation in agriculture was largely limited to the symbiotic relationship between legumes and nitrogen-fixing root bacteria. But gene editing and synthetic biology are enabling scientists to embed nitrogenase pathways into non-legume crops like wheat and maize.

Early field trials with engineered rice have shown significant nitrogenase activity, reducing the need for external fertilizer inputs. While challenges remain — such as metabolic integration, field variability, and regulatory pathways — this represents one of the most disruptive possibilities in agricultural innovation.

This approach turns plants themselves into self-fertilizing systems, reducing emissions, costs, and dependence on industrial supply chains.

Leading Companies and Startups to Watch

Several organizations are pushing the frontier of green nitrogen fixation. Clean-tech firms are developing electrochemical ammonia reactors powered by renewables, while biotech startups are engineering novel nitrogenase systems for crops. Strategic partnerships between agritech platforms, renewable energy providers, and academic labs are forming to scale pilot technologies. Some ventures focus on localized solutions for smallholder farmers, others target utility-scale production with integrated carbon accounting. This ecosystem of innovation reflects the diversity of needs — global and local — and underscores the urgency and possibility of sustainable nitrogen solutions.

In the rapidly evolving landscape of green nitrogen fixation, several pioneering companies are dismantling the carbon-intensive legacy of the Haber-Bosch process.

Pivot Bio leads the biological charge, having successfully deployed engineered microbes across millions of acres to deliver nitrogen directly to crop roots, effectively turning the plants themselves into “mini-fertilizer plants.”

On the electrochemical front, Swedish startup NitroCapt is gaining massive traction with its “SUNIFIX” technology—winner of the 2025 Food Planet Prize—which mimics the natural fixation of nitrogen by lightning using only air, water, and renewable energy.

Nitricity is another key disruptor, recently pivoting toward a breakthrough process that combines renewable energy with organic waste, such as almond shells, to create localized “Ash Tea” fertilizers.

Meanwhile, industry giants like Yara International and CF Industries are scaling up “Green Ammonia” projects through massive electrolyzer integrations, signaling a shift where the world’s largest chemical providers are finally betting on a fossil-free future for global food security.

Barriers to Adoption and Scale

For all the promise, green nitrogen fixation faces real barriers. Electrochemical methods must meet industrial throughput, cost, and durability benchmarks. Biological systems need rigorous field validation across diverse climates and soil types. Regulatory frameworks for engineered crops vary by country, affecting adoption timelines.

Moreover, incumbent incentives in agriculture — often skewed toward cheap synthetic fertilizer — can slow willingness to transition. Overcoming these barriers requires policy alignment, investment in workforce training, and multi-stakeholder collaboration.

Human-Centered Implementation Design

Technical innovation alone is not sufficient. Solutions must be accessible to farmers of all scales, compatible with existing practices when possible, and supported by financing that lowers upfront barriers. This means designing technologies with users in mind, investing in training networks, and co-creating pathways with farming communities.

A truly human-centered green nitrogen future is one where benefits are shared — environmentally, economically, and socially.

Conclusion

Green nitrogen fixation is more than an innovation challenge; it is a socio-technical transformation that intersects climate, food security, and economic resilience. While progress is nascent, breakthroughs in electrochemical processes and biological engineering are paving the way. If we align policy, investment, and design thinking with scientific ingenuity, we can achieve a nitrogen economy that nourishes people and the planet simultaneously.

Frequently Asked Questions

What makes nitrogen fixation “green”?

It refers to producing usable nitrogen compounds with minimal greenhouse gas emissions using renewable energy or biological methods that avoid fossil fuel dependence.

Can green nitrogen fixation replace Haber-Bosch?

It has the potential, but widespread replacement will require scalability, economic competitiveness, and supportive policy environments.

How soon might these technologies reach farmers?

Some approaches are in pilot stages now; commercial-scale deployment could occur within the next decade with sustained investment and collaboration.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini







The Wood-Fired Automobile

WWII’s Forgotten Lesson in Human-Centered Resourcefulness

LAST UPDATED: December 14, 2025 at 5:59 PM

GUEST POST from Art Inteligencia

Innovation is often romanticized as the pursuit of the new — sleek electric vehicles, AI algorithms, and orbital tourism. Yet, the most profound innovation often arises not from unlimited possibility, but from absolute scarcity. The Second World War offers a stark, compelling lesson in this principle: the widespread adoption of the wood-fired automobile, or the gasogene vehicle.

In the 1940s, as global conflict choked off oil supplies, nations across Europe and Asia were suddenly forced to find an alternative to gasoline to keep their civilian and military transport running. The solution was the gas generator (or gasifier), a bulky metal unit often mounted on the rear or side of a vehicle. This unit burned wood, charcoal, or peat, not for heat or steam, but for gas. The process — pyrolysis — converted solid fuel into a combustible mixture of carbon monoxide, hydrogen, and nitrogen known as “producer gas” or “wood gas,” which was then filtered and fed directly into the vehicle’s conventional internal combustion engine. This adaptation was a pure act of Human-Centered Innovation: it preserved mobility and economic function using readily available, local resources, ensuring the continuity of life amidst crisis.
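
For the technically curious, the core chemistry can be sketched in three simplified steps (real gasifiers also produce tars, methane, and carbon dioxide that must be cooled and filtered out before the gas reaches the engine):

Partial combustion of the fuel generates heat: C + O₂ → CO₂

That heat drives the reduction reactions that create the combustible gas: CO₂ + C → 2CO and C + H₂O → CO + H₂

The carbon monoxide and hydrogen then burn in the engine’s cylinders, while the large fraction of nitrogen carried along from the intake air simply dilutes the mixture, which is one major reason for the power loss described below.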

The Scarcity Catalyst: Unlearning the Oil Dependency

Before the war, cars ran on gasoline. When the oil dried up, the world faced a moment of absolute unlearning. Governments and industries could have simply let transportation collapse, but the necessity of maintaining essential services (mail, food distribution, medical transport) forced them to pivot to what they had: wood and ingenuity. This highlights a core innovation insight: the constraints we face today — whether supply chain failures or climate change mandates — are often the greatest catalysts for creative action.

Gasogene cars were slow, cumbersome, and required constant maintenance, yet their sheer existence was a triumph of adaptation. They provided roughly half the power of a petrol engine, requiring drivers to constantly downshift on hills and demanding a long, smoky warm-up period. But they worked. The innovation was not in the vehicle itself, which remained largely the same, but in the fuel delivery system and the corresponding behavioral shift required by the drivers and mechanics.

Case Study 1: Sweden’s Total Mobilization of Wood Gas

Challenge: Maintaining Neutrality and National Mobility Under Blockade

During WWII, neutral Sweden faced a complete cutoff of its oil imports. Without liquid fuel, the nation risked economic paralysis, potentially undermining its neutrality and ability to supply its citizens. The need was immediate and total: convert all essential vehicles.

Innovation Intervention: Standardization and Centralization

Instead of relying on fragmented, local solutions, the Swedish government centralized the gasifier conversion effort. They established the Gasogenkommittén (Gas Generator Committee) to standardize the design, production, and certification of gasifiers (the wood gas itself was known as gengas). Manufacturers such as Volvo and Scania were tasked not with building new cars, but with mass-producing the conversion kits.

  • By 1945, approximately 73,000 vehicles — nearly 90% of all Swedish vehicles, from buses and trucks to farm tractors and private cars — had been converted to run on wood gas.
  • The government created standardized wood pellet specifications and set up thousands of public wood-gas fueling stations, turning the challenge into a systematic, national enterprise.

The Innovation Impact:

Sweden demonstrated that human resourcefulness can completely circumvent a critical resource constraint at a national scale. The conversion was not an incremental fix; it was a wholesale, government-backed pivot that secured national resilience and mobility using entirely domestic resources. The key was standardized conversion — a centralized effort to manage distributed complexity.

[Image: Fischer-Tropsch Process]

Case Study 2: German Logistics and the Bio-Diesel Experiment

Challenge: Fueling a Far-Flung Military and Civilian Infrastructure

Germany faced a dual challenge: supplying a massive, highly mechanized military campaign while keeping the domestic civilian economy functional. While military transport relied heavily on synthetic fuel created through the Fischer-Tropsch process, the civilian sector and local military transport units required mass-market alternatives.

Innovation Intervention: Blended Fuels and Infrastructure Adaptation

Beyond wood gas, German innovation focused on blended fuels. A crucial adaptation was the widespread use of methanol, ethanol, and various bio-diesels (esters derived from vegetable oils) to stretch dwindling petroleum reserves. While wood gasifiers were used on stationary engines and some trucks, the government mandated that local transport fill up with methanol-gasoline blends. This forced a massive, distributed shift in fuel pump calibration and engine tuning across occupied Europe.

  • The adaptation required hundreds of thousands of local mechanics, from France to Poland, to quickly unlearn traditional engine maintenance and become experts in the delicate tuning required for lower-energy blended fuels.
  • This placed the burden of innovation not on a central R&D lab, but on the front-line workforce — a pure example of Human-Centered Innovation at the operational level.

The Innovation Impact:

This case highlights how resource constraints force innovation across the entire value chain. Germany’s transport system survived its oil blockade not just through wood gasifiers, but through a constant, low-grade innovation treadmill of fuel substitution, blending, and local adaptation that enabled maximum optionality under duress. The lesson is that resilience comes from flexibility and decentralization.

Conclusion: The Gasogene Mindset for the Modern Era

The wood-fired car is not a relic of the past; it is a powerful metaphor for the challenges we face today. We are currently facing the scarcity of time, carbon space, and public trust. We are entirely reliant on systems that, while efficient in normal times, are dangerously fragile under stress. The shift to sustainability, the move away from centralized energy grids, and the adoption of closed-loop systems all require the Gasogene Mindset — the ability to pivot rapidly to local, available resources and fundamentally rethink the consumption model.

Modern innovators must ask: If our critical resource suddenly disappeared, what would we use instead? The answer should drive our R&D spending today. The history of the gasogene vehicle proves that scarcity is the mother of ingenuity, and the greatest innovations often solve the problem of survival first. We must learn to innovate under constraint, not just in comfort.

“The wood-fired car teaches us that every constraint is a hidden resource, if you are creative enough to extract it.” — Braden Kelley

Frequently Asked Questions About Wood Gas Vehicles

1. How does a wood gas vehicle actually work?

The vehicle uses a gasifier that burns wood or charcoal in a low-oxygen environment (a process called pyrolysis). This creates a gas mixture (producer gas) which is then cooled, filtered, and fed directly into the vehicle’s standard internal combustion engine to power it, replacing gasoline.

2. How did the performance of a wood gas vehicle compare to gasoline?

Gasogene cars provided significantly reduced performance, typically delivering only 50-60% of the power of the original gasoline engine. They were slower, had lower top speeds, required frequent refueling with wood, and needed a 15-30 minute warm-up period to start producing usable gas.

3. Why aren’t these systems used today, given their sustainability?

The system is still used in specific industrial and remote applications (power generation), but not widely in transportation because of the convenience and energy density of liquid fuels. Wood gasifiers are large, heavy, require constant manual fueling and maintenance (clinker removal), and produce a low-energy gas that limits speed and range, making them commercially unviable against modern infrastructure.

Your first step toward a Gasogene Mindset: Identify one key external resource your business or team relies on (e.g., a software license, a single supplier, or a non-renewable material). Now, design a three-step innovation plan for a world where that resource suddenly disappears. That plan is your resilience strategy.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini







Bio-Computing & DNA Data Storage

The Human-Centered Future of Information

LAST UPDATED: December 12, 2025 at 5:47 PM

GUEST POST from Art Inteligencia

We are drowning in data. The digital universe is doubling roughly every two years, and our current infrastructure — reliant on vast, air-conditioned server farms — is neither environmentally nor economically sustainable. This is where the most profound innovation of the 21st century steps in: DNA Data Storage. Rather than using the binary zeroes and ones of silicon, we leverage the four-base code of life — Adenine (A), Cytosine (C), Guanine (G), and Thymine (T) — to encode information. This transition is not merely an improvement; it is a fundamental shift that aligns our technology with the principles of Human-Centered Innovation by prioritizing sustainability, longevity, and density.

The scale of this innovation is staggering. DNA is the most efficient information storage system known. Theoretically, all the world’s data could be stored in a volume smaller than a cubic meter. This level of density, combined with the extreme longevity of DNA (which can last for thousands of years when properly preserved), solves the two biggest crises facing modern data: decay and footprint. We must unlearn the limitation of physical space and embrace biology as the ultimate hard drive. Bio-computing, the application of molecular reactions to perform complex calculations, is the natural, faster counterpart to this massive storage potential.
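
To make the encoding idea tangible, here is a toy Python sketch that maps bits onto bases two at a time. It is purely illustrative: production systems add error-correcting codes, avoid long runs of the same base, and work within the constraints of synthesis and sequencing chemistry, none of which appears here.

# Toy illustration: two bits per base (00->A, 01->C, 10->G, 11->T).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")          # 2 bytes = 16 bits = 8 bases
print(strand)                   # CAGACGGC
assert decode(strand) == b"Hi"  # round-trips back to the original bytes

At two bits per base, the extraordinary densities quoted below follow directly from how little a single nucleotide weighs; practical systems achieve somewhat less because of the redundancy and indexing they add for reliability.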

The Three Pillars of the Bio-Data Revolution

The convergence of biology and information technology is built on three revolutionary pillars:

1. Unprecedented Data Density

A single gram of DNA can theoretically store over 215 petabytes (215 million gigabytes) of data. Conventional hard-drive and tape arrays would need entire data-center floors to house that much information; DNA provides an exponential reduction in physical footprint. This isn’t just about saving space; it’s about decentralizing data storage and dramatically reducing the need for enormous, vulnerable, power-hungry data centers. This density makes truly long-term archival practical for the first time.
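To see where a figure like 215 petabytes per gram comes from, here is a minimal back-of-envelope sketch in Python. It assumes roughly 2 bits per nucleotide and an average nucleotide mass of about 330 daltons (standard approximations, not measured values); the gap between the theoretical ceiling it prints and the 215 petabyte figure reflects the addressing and error-correction overhead of practical systems.

# Back-of-envelope estimate of DNA's theoretical storage density.
# Assumptions (approximate, for illustration only): 2 bits per nucleotide,
# average nucleotide mass ~330 g/mol.

AVOGADRO = 6.022e23                 # molecules per mole
NUCLEOTIDE_MASS_G_PER_MOL = 330.0   # rough average mass of one nucleotide
BITS_PER_NUCLEOTIDE = 2             # A/C/G/T encodes 2 bits per base

nucleotides_per_gram = AVOGADRO / NUCLEOTIDE_MASS_G_PER_MOL
bytes_per_gram = nucleotides_per_gram * BITS_PER_NUCLEOTIDE / 8
exabytes_per_gram = bytes_per_gram / 1e18

print(f"~{exabytes_per_gram:.0f} exabytes per gram (theoretical ceiling)")
# Prints roughly 450 exabytes per gram; demonstrated systems report far less
# (hundreds of petabytes per gram) once error correction and addressing are added.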

2. Extreme Data Longevity

Conventional media, such as hard drives and magnetic tape, are ephemeral. They require constant maintenance, migration, and power to prevent data loss, with a shelf life measured in years to a few decades. DNA, in contrast, has proven its stability over millennia. By encapsulating synthetic DNA in glass or mineral environments, the stored data becomes essentially immortal, eliminating the costly and energy-intensive practice of data migration every few years. This shifts the focus from managing hardware to managing the biological encapsulation process.

3. Low Energy Footprint

Traditional data centers consume vast amounts of electricity, both for operation and, critically, for cooling. The cost and carbon footprint of this consumption are rapidly becoming untenable. DNA data storage requires energy primarily during the initial encoding (synthesis) and subsequent decoding (sequencing) stages. Once stored, the data is inert, requiring zero power for preservation. This radical reduction in operational energy makes DNA storage an essential strategy for any organization serious about sustainable innovation and ESG goals.

Leading the Charge: Companies and Startups

This nascent but rapidly accelerating industry is attracting major players and specialized startups. Large technology companies like Microsoft and IBM are deeply invested, often in partnership with specialized biotech firms, to validate the technology and define the industrial standard for synthesis and sequencing. Microsoft, in collaboration with the University of Washington, was among the first to successfully encode and retrieve large files, including the entire text of the Universal Declaration of Human Rights. Meanwhile, startups are focusing on making the process more efficient and commercially viable. Twist Bioscience has become a leader in DNA synthesis, providing the tools necessary to write the data. Other emerging companies like Catalog are working on miniaturizing and automating the DNA storage process, moving the technology from a lab curiosity to a scalable, automated service. These players are establishing the critical infrastructure for the bio-data ecosystem.

Case Study 1: Archiving Global Scientific Data

Challenge: Preserving the Integrity of Long-Term Climate and Astronomical Records

A major research institution (“GeoSphere”) faced the challenge of preserving petabytes of climate, seismic, and astronomical data. This data needs to be kept for over 100 years, but the constant migration required by magnetic tape and hard drives introduced a high risk of data degradation, corruption, and enormous archival cost.

Bio-Data Intervention: DNA Encapsulation

GeoSphere partnered with a biotech firm to conduct a pilot program, encoding its most critical reference datasets into synthetic DNA. The data was converted into A, T, C, G sequences and chemically synthesized. The resulting DNA molecules were then encapsulated in silica beads for long-term storage.

  • The physical volume required to store the petabytes of data was reduced from a warehouse full of tapes to a container the size of a shoebox.
  • The data was found to be chemically stable with a projected longevity of over 1,000 years without any power or maintenance.

The Innovation Impact:

The shift to DNA storage solved GeoSphere’s long-term sustainability and data integrity crisis. It demonstrated that DNA is the perfect medium for “cold” archival data — vast amounts of information that must be kept secure but are infrequently accessed. This validated the role of DNA as a non-electronic, permanent archival solution.

Case Study 2: Bio-Computing for Drug Discovery

Challenge: Accelerating Complex Molecular Simulations in Pharmaceutical R&D

A pharmaceutical company (“BioPharmX”) was struggling with the computational complexity of molecular docking — simulating how millions of potential drug compounds interact with a target protein. Traditional silicon supercomputers required enormous time and electricity to run these optimization problems.

Bio-Data Intervention: Molecular Computing

BioPharmX explored bio-computing (or molecular computing) using DNA strands and enzymes. By setting up the potential drug compounds as sequences of DNA and allowing them to react with a synthesized protein target (also modeled in DNA), the calculation was performed not by electrons, but by molecular collision and selection.

  • Each possible interaction became a physical, parallel chemical reaction taking place simultaneously in the solution.
  • This approach echoed Leonard Adleman’s landmark 1994 DNA computation of a small Hamiltonian-path instance, a close relative of the Traveling Salesman Problem (a key metaphor for optimization): because every candidate solution exists as its own strand, enormous combinatorial spaces can be searched at once rather than sequentially, as the simulation sketch below illustrates.
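A minimal classical simulation of that generate-and-filter idea follows, in Python. The five-site instance and its distances are invented purely for illustration and do not reflect BioPharmX’s actual chemistry; the point is that a test tube evaluates all of these candidates as simultaneous chemical reactions, whereas the classical version must loop through them one at a time.

# Toy, classical simulation of the "generate every candidate, then filter and
# select" idea behind molecular computing (after Adleman's 1994 experiment).
# All distances and the problem size are invented for illustration.

import itertools
import random

random.seed(42)
n = 5  # a tiny Traveling-Salesman-style instance
dist = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]

def tour_length(tour):
    """Total length of a closed tour that visits every site exactly once."""
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# In a test tube, each candidate tour would exist as its own DNA strand and all
# would be "evaluated" at once; classically we enumerate them sequentially.
candidates = [(0,) + p for p in itertools.permutations(range(1, n))]
best = min(candidates, key=tour_length)

print("best tour:", best, "length:", tour_length(best))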

The Innovation Impact:

Bio-computing proved to be a highly efficient, parallel processing method for solving specific, combinatorial problems related to drug design. This allowed BioPharmX to filter billions of potential compounds down to the most viable candidates in a fraction of the time, dramatically accelerating their R&D pipeline and showcasing the power of biological systems as processors.

Conclusion: The Convergence of Life and Logic

The adoption of DNA data storage and the development of bio-computing mark a pivotal moment in the history of information technology. Together, they are a true embodiment of Human-Centered Innovation, pushing us toward a future where our most precious data is stored sustainably, securely, and with a life span that mirrors humanity’s own. For organizations, the question is not whether to adopt bio-data solutions, but when and how to begin building the competencies necessary to leverage this biological infrastructure. The future of innovation is deeply intertwined with the science of life itself. The next great hard drive is already inside you.

“If your data has to last forever, it must be stored in the medium that was designed to do just that.”

Frequently Asked Questions About Bio-Computing and DNA Data Storage

1. How is data “written” onto DNA?

Data is written onto DNA using DNA synthesis machines, which chemically assemble the custom sequence of the four nucleotide bases (A, T, C, G) according to a computer algorithm that converts binary code (0s and 1s) into the base-four code of DNA.
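A minimal illustration of that conversion, in Python (a naive two-bits-per-base mapping for intuition only; production pipelines add addressing, redundancy, and error-correcting codes, and avoid troublesome sequences such as long runs of the same base):

# Naive sketch of "writing" binary data as DNA: every 2 bits map to one base.

BIT_PAIR_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def encode_to_dna(data: bytes) -> str:
    """Convert raw bytes into a string of A/C/G/T characters."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIT_PAIR_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(encode_to_dna(b"Hi"))  # prints "CAGACGGC" ('H' = 01001000, 'i' = 01101001)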

2. How is the data “read” from DNA?

Data is read from DNA using standard DNA sequencing technologies. This process determines the exact sequence of the A, T, C, and G bases, and a reverse computer algorithm then converts this base-four sequence back into the original binary code for digital use.
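And the corresponding read path, as an equally naive sketch (real pipelines sequence many physical copies of each strand and reconcile errors before this final conversion step):

# Naive sketch of "reading" DNA back into bytes: reverse the 2-bit mapping.

BASE_TO_BIT_PAIR = {"A": "00", "C": "01", "G": "10", "T": "11"}

def decode_from_dna(sequence: str) -> bytes:
    """Convert a string of A/C/G/T characters back into raw bytes."""
    bits = "".join(BASE_TO_BIT_PAIR[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(decode_from_dna("CAGACGGC"))  # prints b'Hi'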

3. What is the current main barrier to widespread commercial adoption?

The primary barrier is the cost and speed of the writing (synthesis) process. While storage density and longevity are superior, the current expense and time required to synthesize vast amounts of custom DNA make it currently viable only for “cold” archival data that is accessed very rarely, rather than for “hot” data used daily.

Your first step into bio-data thinking: Identify one dataset in your organization — perhaps legacy R&D archives or long-term regulatory compliance records — that has to be stored for 50 years or more. Calculate the total cost of power, space, and periodic data migration for that dataset over that time frame. This exercise will powerfully illustrate the human-centered, sustainable value proposition of DNA data storage.
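If it helps, here is one rough way to run that calculation, sketched in Python. Every cost figure below is a placeholder assumption, not a quote or benchmark; substitute your own numbers.

# Toy 50-year archival cost comparison. All figures are illustrative
# placeholders; replace them with your organization's own numbers.

YEARS = 50
dataset_pb = 2                          # assumed archive size in petabytes

# Conventional archive: power and floor space every year, plus a media
# migration roughly every 7 years.
power_space_per_pb_per_year = 15_000    # assumed USD
migration_cost_per_pb = 40_000          # assumed USD per migration event
migrations = YEARS // 7

conventional_total = (
    dataset_pb * power_space_per_pb_per_year * YEARS
    + dataset_pb * migration_cost_per_pb * migrations
)

# DNA archive: a large one-time synthesis (write) cost, near-zero upkeep.
dna_write_cost_per_pb = 500_000         # assumed USD, expected to fall
dna_total = dataset_pb * dna_write_cost_per_pb

print(f"Conventional archive over {YEARS} years: ~${conventional_total:,.0f}")
print(f"DNA archive over {YEARS} years: ~${dna_total:,.0f}")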

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation

LAST UPDATED: December 8, 2025 at 4:56 PM

Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation

GUEST POST from Art Inteligencia

For the last decade, Artificial Intelligence (AI) has lived primarily on our screens and in the cloud — a brain without a body. While large language models (LLMs) and predictive algorithms have revolutionized data analysis, they have done little to change the physical experience of work, commerce, and daily life. This is the innovation chasm we must now bridge.

The next great technological leap is Embodied Artificial Intelligence (EAI): the convergence of advanced robotics (the body) and complex, generalized AI (the brain). EAI systems are designed not just to process information, but to operate autonomously and intelligently within our physical world. This is a profound shift for Human-Centered Innovation, because EAI promises to eliminate the drudgery, danger, and limitations of physical labor, allowing humans to focus exclusively on tasks that require judgment, creativity, and empathy.

The strategic deployment of EAI requires a shift in mindset: organizations must view these agents not as mechanical replacements, but as co-creators that augment and elevate the human experience. The most successful businesses will be those that unlearn the idea of human vs. machine and embrace the model of Human-Embodied AI Symbiosis.

The EAI Opportunity: Three Human-Centered Shifts

EAI accelerates change by enabling three crucial shifts in how we organize work and society:

1. The Shift from Automation to Augmentation

Traditional automation replaces repetitive tasks. EAI offers intelligent augmentation. Because EAI agents learn and adapt in real-time within dynamic environments (like a factory floor or a hospital), they can handle unforeseen situations that script-based robots cannot. This means the human partner moves from supervising a simple process to managing the exceptions and optimizations of a sophisticated one. The human job becomes about maximizing the intelligence of the system, not the efficiency of the body.

2. The Shift from Efficiency to Dignity

Many essential human jobs are physically demanding, dangerous, or profoundly repetitive. EAI offers a path to remove humans from these undignified roles — the loading and unloading of heavy boxes, inspection of hazardous infrastructure, or the constant repetition of simple assembly tasks. This frees human capital for high-value interaction, fostering a new organizational focus on the dignity of work. Organizations committed to Human-Centered Innovation must prioritize the use of EAI to eliminate physical risk and strain.

3. The Shift from Digital Transformation to Physical Transformation

For decades, digital transformation has been the focus. EAI catalyzes the necessary physical transformation. It closes the loop between software and reality. An inventory algorithm that predicts demand can now direct a bipedal robot to immediately retrieve and prepare the required product from a highly chaotic warehouse shelf. This real-time, physical execution based on abstract computation is the true meaning of operational innovation.
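As a thought experiment, that closed loop might look something like the Python sketch below. The forecast model, shelf map, and task structure are all invented for illustration; no specific vendor’s robot interface is implied.

# Hypothetical sketch of abstract computation driving physical execution:
# a demand forecast becomes a pick task handed to a robot. All names,
# numbers, and interfaces here are invented for illustration.

from dataclasses import dataclass

@dataclass
class PickTask:
    sku: str
    quantity: int
    shelf: str

def forecast_demand(sku):
    """Stand-in for a real demand model; returns predicted units needed."""
    return {"SKU-123": 12, "SKU-456": 3}.get(sku, 0)

def plan_pick_tasks(inventory_staged, shelf_map):
    """Create pick tasks wherever the forecast exceeds what is already staged."""
    tasks = []
    for sku, shelf in shelf_map.items():
        shortfall = forecast_demand(sku) - inventory_staged.get(sku, 0)
        if shortfall > 0:
            tasks.append(PickTask(sku=sku, quantity=shortfall, shelf=shelf))
    return tasks

for task in plan_pick_tasks({"SKU-123": 5}, {"SKU-123": "A-17", "SKU-456": "B-02"}):
    # In an EAI deployment this is where the task would be handed to the
    # robot's control stack; here we simply print the dispatch.
    print(f"dispatch robot to shelf {task.shelf}: pick {task.quantity} x {task.sku}")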

Case Study 1: Transforming Infrastructure Inspection

Challenge: High Risk and Cost in Critical Infrastructure Maintenance

A global energy corporation (“PowerLine”) faced immense risk and cost in maintaining high-voltage power lines, oil pipelines, and sub-sea infrastructure. These tasks required sending human crews into dangerous, often remote, or confined spaces for time-consuming, repetitive visual inspections.

EAI Intervention: Autonomous Sensory Agents

PowerLine deployed a fleet of autonomous, multi-limbed EAI agents equipped with advanced sensing and thermal imaging capabilities. These robots were trained not just on pre-programmed routes, but on the accumulated, historical data of human inspectors, learning to spot subtle signs of material stress and structural failure — a skill previously reserved for highly experienced humans.

  • The EAI agents performed 95% of routine inspections, capturing data with superior consistency.
  • Human experts unlearned routine patrol tasks and focused exclusively on interpreting the EAI data flags and designing complex repair strategies.

The Outcome:

The use of EAI led to a 70% reduction in inspection time and, critically, a near-zero rate of human exposure to high-risk environments. This strategic pivot proved that EAI’s greatest value is not economic replacement, but human safety and strategic focus. The EAI provided a foundational layer of reliable, granular data, enabling human judgment to be applied only where it mattered most.

Case Study 2: Elderly Care and Companionship

Challenge: Overstretched Human Caregivers and Isolation

A national assisted living provider (“ElderCare”) struggled with caregiver burnout and increasing costs, while many residents suffered from emotional isolation due to limited staff availability. The challenge was profoundly human-centered: how to provide dignity and aid without limitless human resources.

EAI Intervention: The Adaptive Care Companion

ElderCare piloted the use of adaptive, humanoid EAI companions in low-acuity environments. These agents were programmed to handle simple, repetitive physical tasks (retrieving dropped items, fetching water, reminding patients about medication) and, critically, were trained on empathetic conversation models.

  • The EAI agents managed 60% of non-essential, fetch-and-carry tasks, freeing up human nurses for complex medical care and deep, personalized interaction.
  • The EAI’s conversation logs provided caregivers with Small Data insights into the emotional state and preferences of the residents, allowing the human staff to maximize the quality of their face-to-face time.

The Outcome:

The pilot resulted in a 30% reduction in nurse burnout and, most importantly, a measurable increase in resident satisfaction and self-reported emotional well-being. The EAI was deployed not to replace the human touch, but to protect and maximize its quality by taking on the physical burden of routine care. The innovation successfully focused human empathy where it had the greatest impact.

The EAI Ecosystem: Companies to Watch

The race to commercialize EAI is accelerating, driven by the realization that AI needs a body to unlock its full economic potential. Organizations should be keenly aware of the leaders in this ecosystem. Companies like Boston Dynamics, known for advanced mobility and dexterity, are pioneering the physical platforms. Startups such as Sanctuary AI and Figure AI are focused on creating general-purpose humanoid robots capable of performing diverse tasks in unstructured environments, integrating advanced large language and vision models into physical forms. Simultaneously, major players like Tesla with its Optimus project and research divisions within Google DeepMind are laying the foundational AI models necessary for EAI agents to learn and adapt autonomously. The most promising developments are happening at the intersection of sophisticated hardware (the actuators and sensors) and generalized, real-time control software (the brain).

Conclusion: A New Operating Model

Embodied AI is not just another technology trend; it is the catalyst for a radical change in the operating model of human civilization. Leaders must stop viewing EAI deployment as a simple capital expenditure and start treating it as a Human-Centered Innovation project. Your strategy should be defined by the question: How can EAI liberate my best people to do their best, most human work? Embrace the complexity, manage the change, and utilize the EAI revolution to drive unprecedented levels of dignity, safety, and innovation.

“The future of work is not AI replacing humans; it is EAI eliminating the tasks that prevent humans from being fully human.”

Frequently Asked Questions About Embodied Artificial Intelligence

1. How does Embodied AI differ from traditional industrial robotics?

Traditional industrial robots are fixed, single-purpose machines programmed to perform highly repetitive tasks in controlled environments. Embodied AI agents are mobile, often bipedal or multi-limbed, and are powered by generalized AI models, allowing them to learn, adapt, and perform complex, varied tasks in unstructured, human environments.

2. What is the Human-Centered opportunity of EAI?

The opportunity is the elimination of the “3 Ds” of labor: Dangerous, Dull, and Dirty. By transferring these physical burdens to EAI agents, organizations can reallocate human workers to roles requiring social intelligence, complex problem-solving, emotional judgment, and creative innovation, thereby increasing the dignity and strategic value of the human workforce.

3. What does “Human-Embodied AI Symbiosis” mean?

Symbiosis refers to the collaborative operating model where EAI agents manage the physical execution and data collection of routine, complex tasks, while human professionals provide oversight, set strategic goals, manage exceptions, and interpret the resulting data. The systems work together to achieve an outcome that neither could achieve efficiently alone.

Your first step toward embracing Embodied AI: Identify the single most physically demanding or dangerous task in your organization that is currently performed by a human. Begin a Human-Centered Design project to fully map the procedural and emotional friction points of that task, then use those insights to define the minimum viable product (MVP) requirements for an EAI agent that can eliminate that task entirely.

UPDATE – Here is an infographic of the key points of this article that you can download:

Embodied Artificial Intelligence Infographic

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: 1 of 1,000+ quote slides for your meetings & presentations at http://misterinnovation.com

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Is OpenAI About to Go Bankrupt?

LAST UPDATED: December 4, 2025 at 4:48 PM

Is OpenAI About to Go Bankrupt?

GUEST POST from Chateau G Pato

The innovation landscape is shifting, and the tremors are strongest in the artificial intelligence (AI) sector. For a moment, OpenAI felt like an impenetrable fortress, the company that cracked the code and opened the floodgates of generative AI to the world. But now, as a thought leader focused on Human-Centered Innovation, I see the classic signs of disruption: a growing competitive field, a relentless cash burn, and a core product advantage that is rapidly eroding. The question of whether OpenAI is on the brink of bankruptcy isn’t just about sensational headlines — it’s about the fundamental sustainability of a business model built on unprecedented scale and staggering cost.

The “Code Red” announcement from OpenAI, ostensibly about maintaining product quality, was a subtle but profound concession. It was an acknowledgment that the days of unchallenged superiority are over. This came as competitors like Google’s Gemini and Anthropic’s Claude are not just keeping pace, but in many key performance metrics, they are reportedly surpassing OpenAI’s flagship models. Performance parity, or even outperformance, is a killer in the technology adoption curve. When the superior tool is also dramatically cheaper, the choice for enterprises and developers — the folks who pay the real money — becomes obvious.

The Inevitable Crunch: Performance and Price

The competitive pressure is coming from two key vectors: performance and cost-efficiency. While the public often focuses on benchmark scores like MMLU or coding abilities — where models like Gemini and Claude are now trading blows or pulling ahead — the real differentiator for business users is price. New models, including the China-based DeepSeek, are entering the market with reported capabilities approaching the frontier models but at a fraction of the development and inference cost. DeepSeek’s reportedly low development cost highlights that the efficiency of model creation is also improving outside of OpenAI’s immediate sphere.

Crucially, the open-source movement, championed by models like Meta’s Llama family, introduces a zero-cost baseline that fundamentally caps the premium OpenAI can charge. Llama, and the rapidly improving ecosystem around it, means that a good-enough, customizable, and completely free model is always an option for businesses. This open-source competition bypasses the high-cost API revenue model entirely, forcing closed-source providers to offer a quantum leap in utility to justify the expenditure. This dynamic accelerates the commoditization of foundational model technology, turning OpenAI’s once-unique selling proposition into a mere feature.

OpenAI’s models, for all their power, have been famously expensive to run — a cost that gets passed on through their API. The rise of sophisticated, cheaper alternatives — many of which employ highly efficient architectures like Mixture-of-Experts (MoE) — means the competitive edge of sheer scale is being neutralized by engineering breakthroughs in efficiency. If the next step on the road to artificial general intelligence (AGI) comes down to a choice between a 10% performance increase and a 10x cost reduction at 90% of the performance, the market will inevitably choose the latter. This is a structural pricing challenge that erodes one of OpenAI’s core revenue streams: API usage.
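The efficiency logic behind MoE is easy to sketch: a router scores many expert subnetworks but activates only a few per token, so compute per token stays roughly flat even as total parameters grow. The toy Python below illustrates the routing idea only; it is not any specific model’s architecture, and the numbers are made up.

# Toy illustration of Mixture-of-Experts (MoE) routing: score all experts,
# run only the top-k, so per-token compute stays small even when the total
# number of experts (and parameters) is large. Numbers are made up.

import random

NUM_EXPERTS = 8
TOP_K = 2
random.seed(0)

# Each "expert" is a simple scaling function standing in for a feed-forward block.
experts = [lambda x, w=w: [w * v for v in x] for w in range(1, NUM_EXPERTS + 1)]

def route(token_vec):
    """Score every expert, but execute only the top-k of them."""
    scores = [random.random() for _ in range(NUM_EXPERTS)]  # stand-in for router logits
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    total_weight = sum(scores[i] for i in top)
    out = [0.0] * len(token_vec)
    for i in top:
        expert_out = experts[i](token_vec)
        for d in range(len(token_vec)):
            out[d] += (scores[i] / total_weight) * expert_out[d]
    return out, top

output, used = route([0.5, -1.0, 2.0])
print(f"experts used: {used} of {NUM_EXPERTS}; output: {output}")
# Only 2 of 8 experts ran for this token: that sparsity is the inference-cost saving.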

The Financial Chasm: Burn Rate vs. Reserves

The financial situation is where the “bankruptcy” narrative gains traction. Developing and running frontier AI models is perhaps the most capital-intensive venture in corporate history. Reports — which are often conflicting and subject to interpretation — paint a picture of a company with an astronomical cash burn rate. Estimates for annual operational and development expenses are in the billions of dollars, resulting in a net loss measured in the billions.

This reality must be contrasted with the position of their main rivals. While OpenAI is heavily reliant on Microsoft’s monumental investment — a complex deal involving cash and Azure cloud compute credits — Microsoft’s exposure is structured as a strategic infrastructure play. The real financial behemoth is Alphabet (Google), which can afford to aggressively subsidize its Gemini division almost indefinitely. Alphabet’s near-monopoly on global search engine advertising generates profits in the tens of billions of dollars every quarter. This virtually limitless reservoir of cash allows Google to cross-subsidize Gemini’s massive research, development, and inference costs, effectively enabling them to engage in a high-stakes price war that smaller, loss-making entities like OpenAI cannot truly win on a level playing field. Alphabet’s strategy is to capture market share first, using the profit engine of search to buy time and scale, a luxury OpenAI simply does not have without a continuous cash injection from a partner.

The question is not whether OpenAI has money now, but whether their revenue growth can finally eclipse their accelerating costs before their massive reserve is depleted. Their long-term financial projections, which foresee profitability and revenues in the hundreds of billions by the end of the decade, require not just growth, but a sustained, near-monopolistic capture of the new AI-driven knowledge economy. That becomes increasingly difficult when competitors are faster, cheaper, and arguably better, and have access to deeper, more sustainable profit engines for cross-subsidization.

The Future Outlook: Change or Consequence

OpenAI’s future is not doomed, but the company must initiate a rapid, human-centered transformation. The current trajectory — relying on unprecedented capital expenditure to maintain a shrinking lead in model performance — is structurally unsustainable in the face of faster, cheaper, and increasingly open-source models like Meta’s Llama. The next frontier isn’t just AGI; it’s AGI at scale, delivered efficiently and affordably.

OpenAI must pivot from a model of monolithic, expensive black-box development to one that prioritizes efficiency, modularity, and a true ecosystem approach. This means a rapid shift to MoE architectures, aggressive cost-cutting in inference, and a clear, compelling value proposition beyond just “we were first.” Human-Centered Innovation principles dictate that a company must listen to the market — and the market is shouting for price, performance, and flexibility. If OpenAI fails to execute this transformation and remains an expensive, marginal performer, its incredible cash reserves will serve only as a countdown timer to a necessary and painful restructuring.

Frequently Asked Questions (FAQ)

  • Is OpenAI currently profitable?
    OpenAI is currently operating at a significant net loss. Its annual cash burn rate, driven by high R&D and inference costs, reportedly exceeds its annual revenue, meaning it relies heavily on its massive cash reserves and the strategic investment from Microsoft to sustain operations.
  • How are Gemini and Claude competing against OpenAI on cost and performance?
    Competitors like Google’s Gemini and Anthropic’s Claude are achieving performance parity or superiority on key benchmarks. They are also often cheaper to use (lower inference cost), thanks to more efficient architectures (like MoE) and the deep pockets behind them: Alphabet can cross-subsidize Gemini with enormous profits from search advertising, while Anthropic is backed by multi-billion-dollar investments from Amazon and Google.
  • What was the purpose of OpenAI’s “Code Red” announcement?
    The “Code Red” was an internal or public acknowledgment by OpenAI that its models were facing performance and reliability degradation in the face of intense, high-quality competition from rivals. It signaled a necessary, urgent, company-wide focus on addressing these issues to restore and maintain a technological lead.

UPDATE: Just found on X that HSBC, per the Financial Times (FT), projects OpenAI will accumulate nearly half a trillion dollars in operating losses through 2030. Here is the chart, which shows roughly $100 billion in projected losses in 2029 alone. With the success of Gemini, Claude, DeepSeek, Llama, and competitors yet to emerge, the revenue side of those projections may well be overstated:

OpenAI estimated 2029 financials

Image credits: Google Gemini, Financial Times

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.