Category Archives: Technology

Will our opinion still really be our own in an AI Future?

GUEST POST from Pete Foley

Intuitively we all mostly believe our opinions are our own.  After all, they come from that mysterious thing we call consciousness that resides somewhere inside of us. 

But we also know that other people’s opinions are shaped by all sorts of external influences. So unless we as individuals are uniquely immune to influence, it raises the question: how much of what we think, and what we do, is really uniquely us?  And perhaps even more importantly, as our understanding of behavioral modification techniques evolves, and the power of the tools at our disposal grows, how much mental autonomy will any of us truly have in the future?

AI Manipulation of Political Opinion: A recent study from the Oxford Internet Institute (OII) and the UK AI Security Institute (AISI) showed how conversational AI can meaningfully influence people’s political beliefs. https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs. Leveraging AI in this way potentially opens the door to a step-change in behavioral and opinion manipulation in general.  And that’s quite sobering on a couple of fronts.  Firstly, for many of us political beliefs are deeply tied to our value system and sense of self, so this manipulation is potentially profound.  Secondly, if AI can do this today, how much more will it be able to do in the future?

A Long History of Manipulation: Of course, manipulation of opinion or behavior is not new.  We are all overwhelmed by political marketing during election season.  We accept that media has shaped public opinion for generations, and that social media has amplified this over the last few decades. Similarly, we’ve all grown up immersed in marketing and advertising designed to influence our decisions, opinions and actions.  Meanwhile the rise in prominence of the behavioral sciences in recent decades has brought more structure and efficiency to behavioral influence, literally turning an art into a science.  Framing, priming, pre-suasion, nudging and a host of other techniques can have a profound impact on what we believe and what we actually do. And not only do we accept it, but many, if not most, of the people reading this will have used one or more of these channels or techniques.

An Art and a Science: Behavioral manipulation is a highly diverse field, and can be deployed as an art or a science.  Whether it’s influencers, content creators, politicians, lawyers, marketers, advertisers, movie directors, magicians, artists, comedians, even physicians or financial advisors, our lives are full of people who influence us, often using implicit cues that operate below our awareness.

And it’s the largely implicit nature of these processes that explains why we tend to intuitively think this is something that happens to other people. By definition we are largely unaware of implicit influence on ourselves, although we can often see it in others.  And even in hindsight, it’s very difficult to introspect on implicit manipulation of our own actions and opinions, because there is often no obvious conscious causal event.

So what does this mean?  As with a lot of discussion around how an AI future, or any future for that matter, will unfold, informed speculation is pretty much all we have.  Futurism is far from an exact science.  But there are a few things we can make pretty decent guesses about.

1.  The ability to manipulate how people think creates power and wealth.

2.  Some will use this for good, some not, but given the nature of humanity, it’s unlikely that it will be used exclusively for either.

3.  AI is going to amplify our ability to manipulate how people think.  

The Good News: Benevolent behavioral and opinion manipulation has the power to do enormous good.  Mental health and happiness (an increasingly challenging area as we as a species face unprecedented technology-driven disruption), health, wellness, job satisfaction, social engagement, and the adoption of beneficial technology and innovation can all benefit from it, along with many other areas.  And given the power of the brain, there is even potential for conceptual manipulation to replace significant numbers of pharmaceuticals, for example by managing depression or through preventative behavioral health interventions.  Will this be authentic? It’s probably a little Huxley-dystopian, but will we care?  It’s one of the many ethical conundrums AI will pose for us.

The Bad News: Did I mention wealth and power?  As humans, we don’t have a great record of doing the right thing when wealth and power come into the equation.  And AI-empowered social, conceptual and behavioral manipulation has the potential to concentrate power even more than today’s tech-driven society already does.  Will this be used exclusively for good, or will some seek to leverage it for their personal benefit at the expense of the broader community?  Answers on a postcard (or AI-generated DM if you prefer).

What can and should we do?  Realistically, as individuals we can self-police, but we obviously also face limits in our awareness of implicit manipulation.  That said, we can to some degree still audit ourselves.  We’ve probably all felt ourselves at some point being riled up by a well-constructed meme designed to amplify our beliefs.  Sometimes we recognize this quickly, other times we may be a little slower. But simple awareness of the potential to be manipulated, and of the symptoms of manipulation, such as intense or disproportionate emotional responses, can help us mitigate and even correct some of the worst effects.

Collectively, there are more opportunities.  We are better at seeing others being manipulated than ourselves.  We can use that as a mirror, and/or call it out to others when we see it.  And many of us will find ourselves somewhere in the deployment chain, especially as AI is still in its early stages.  For those of us to whom this applies, there is an opportunity to collectively nudge this emerging technology in the right direction. I still recall a conversation with Dan Ariely when I first started exploring behavioral science, perhaps 15-20 years ago.  It’s so long ago I have to paraphrase, but the essence of the conversation was to never manipulate people into doing something that was not in their best interest.

There is a pretty obvious and compelling moral framework behind this. But there is also an element of enlightened self-interest. As a marketer working for a consumer goods company at the time, even if I could have nudged somebody into buying something they really didn’t want, it might have offered initial success, but it would likely have come back to bite me in the long term.  They certainly wouldn’t become repeat customers, and a mixture of buyer’s remorse, loss aversion and revenge could turn them into active opponents.  This potential for critical thinking in hindsight exists in virtually every situation where outcomes damage the individual.

The bottom line is that even today, we already have to continually ask ourselves whether what we see is real, and whether our beliefs are truly our own or have been manipulated. Media and social media memes already play the manipulation game.  AI may already be better at it, and if not, it’s only a matter of time before it is. If you think we are politically polarized now, hang onto your hat!  But awareness is key.  We all need to stay alert, be conscious of manipulation in ourselves and others, and counter it when we see it occurring for the wrong reasons.

Image credits: Google Gemini

Do You Have Green Nitrogen Fixation?

Innovating a Sustainable Future

LAST UPDATED: December 20, 2025 at 9:01 AM

Do You Have Green Nitrogen Fixation?

GUEST POST from Art Inteligencia

Agriculture feeds the world, but its reliance on synthetic nitrogen fertilizers has come at a steep environmental cost. As we confront climate change, waterway degradation, and soil depletion, the innovation challenge of this generation is clear: how to produce nitrogen sustainably. Green nitrogen fixation is not just a technological milestone — it is a systems-level transformation that integrates chemistry, biology, energy, and human-centered design.

The legacy approach — Haber-Bosch — enabled the Green Revolution, yet it locks agricultural productivity into fossil fuel dependency. Today’s innovators are asking a harder question: can we fix nitrogen with minimal emissions, localize production, and make the process accessible and equitable? The answer shapes the future of food, climate, and economy.

The Innovation Imperative

To feed nearly 10 billion people by 2050 without exceeding climate targets, we must decouple nitrogen fertilizer production from carbon-intensive energy systems. Green nitrogen fixation aims to achieve this by harnessing renewable electricity or biological mechanisms that operate at ambient conditions. This means re-imagining production from the ground up.

The implications are vast: lower carbon footprints, reduced nutrient runoff, resilient rural economies, and new pathways for localized fertilizer systems that empower rather than burden farmers.

Nitrogen Cycle Comparison

Case Study One: Electrochemical Nitrogen Reduction Breakthroughs

Electrochemical nitrogen reduction uses renewable electricity to convert atmospheric nitrogen into ammonia or other reactive forms. Unlike Haber-Bosch, which requires high temperatures and pressures, electrochemical approaches can operate at room temperature using novel catalyst materials.
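
For reference, both routes target the same end product; a simplified view of the overall chemistry (catalysts, reaction conditions, and side reactions omitted) looks like this:

\[ \text{Haber-Bosch:} \quad \mathrm{N_2 + 3H_2 \rightarrow 2NH_3} \]
\[ \text{Electrochemical reduction:} \quad \mathrm{N_2 + 6H^+ + 6e^- \rightarrow 2NH_3} \]

The difference lies in where the hydrogen and the energy come from: fossil-derived hydrogen and high-temperature, high-pressure reactors in the first case, versus protons from water and renewably generated electrons in the second.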

One research consortium recently demonstrated that a proprietary catalyst structure significantly increased ammonia yield while maintaining stability over long cycles. Although not yet industrially scalable, this work points to a future where modular electrochemical reactors could be deployed near farms, powered by distributed solar and wind.

What makes this case compelling is not just the chemistry, but the design choice to focus on distributed systems — bringing fertilizer production closer to end users and far from centralized, fossil-fueled plants.

Case Study Two: Engineering Nitrogen Fixation into Staple Crops

Until recently, biological nitrogen fixation was limited to symbiotic relationships between legumes and root bacteria. But gene editing and synthetic biology are enabling scientists to embed nitrogenase pathways into non-legume crops like wheat and maize.
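
For context, the commonly cited overall stoichiometry of the nitrogenase reaction these efforts aim to transplant is, in simplified form:

\[ \mathrm{N_2 + 8H^+ + 8e^- + 16\,ATP \rightarrow 2NH_3 + H_2 + 16\,ADP + 16\,P_i} \]

The heavy ATP cost hints at why metabolic integration is one of the challenges noted below: the host crop must supply that energy without sacrificing yield.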

Early field trials with engineered rice have shown significant nitrogenase activity, reducing the need for external fertilizer inputs. While challenges remain — such as metabolic integration, field variability, and regulatory pathways — this represents one of the most disruptive possibilities in agricultural innovation.

This approach turns plants themselves into self-fertilizing systems, reducing emissions, costs, and dependence on industrial supply chains.

Leading Companies and Startups to Watch

Several organizations are pushing the frontier of green nitrogen fixation. Clean-tech firms are developing electrochemical ammonia reactors powered by renewables, while biotech startups are engineering novel nitrogenase systems for crops. Strategic partnerships between agritech platforms, renewable energy providers, and academic labs are forming to scale pilot technologies. Some ventures focus on localized solutions for smallholder farmers, others target utility-scale production with integrated carbon accounting. This ecosystem of innovation reflects the diversity of needs — global and local — and underscores the urgency and possibility of sustainable nitrogen solutions.

In the rapidly evolving landscape of green nitrogen fixation, several pioneering companies are dismantling the carbon-intensive legacy of the Haber-Bosch process.

Pivot Bio leads the biological charge, having successfully deployed engineered microbes across millions of acres to deliver nitrogen directly to crop roots, effectively turning the plants themselves into “mini-fertilizer plants.”

On the electrochemical front, Swedish startup NitroCapt is gaining massive traction with its “SUNIFIX” technology—winner of the 2025 Food Planet Prize—which mimics the natural fixation of nitrogen by lightning using only air, water, and renewable energy.

Nitricity is another key disruptor, recently pivoting toward a breakthrough process that combines renewable energy with organic waste, such as almond shells, to create localized “Ash Tea” fertilizers.

Meanwhile, industry giants like Yara International and CF Industries are scaling up “Green Ammonia” projects through massive electrolyzer integrations, signaling a shift where the world’s largest chemical providers are finally betting on a fossil-free future for global food security.

Barriers to Adoption and Scale

For all the promise, green nitrogen fixation faces real barriers. Electrochemical methods must meet industrial throughput, cost, and durability benchmarks. Biological systems need rigorous field validation across diverse climates and soil types. Regulatory frameworks for engineered crops vary by country, affecting adoption timelines.

Moreover, incumbent incentives in agriculture — often skewed toward cheap synthetic fertilizer — can slow willingness to transition. Overcoming these barriers requires policy alignment, investment in workforce training, and multi-stakeholder collaboration.

Human-Centered Implementation Design

Technical innovation alone is not sufficient. Solutions must be accessible to farmers of all scales, compatible with existing practices when possible, and supported by financing that lowers upfront barriers. This means designing technologies with users in mind, investing in training networks, and co-creating pathways with farming communities.

A truly human-centered green nitrogen future is one where benefits are shared — environmentally, economically, and socially.

Conclusion

Green nitrogen fixation is more than an innovation challenge; it is a socio-technical transformation that intersects climate, food security, and economic resilience. While progress is nascent, breakthroughs in electrochemical processes and biological engineering are paving the way. If we align policy, investment, and design thinking with scientific ingenuity, we can achieve a nitrogen economy that nourishes people and the planet simultaneously.

Frequently Asked Questions

What makes nitrogen fixation “green”?

It refers to producing usable nitrogen compounds with minimal greenhouse gas emissions using renewable energy or biological methods that avoid fossil fuel dependence.

Can green nitrogen fixation replace Haber-Bosch?

It has the potential, but widespread replacement will require scalability, economic competitiveness, and supportive policy environments.

How soon might these technologies reach farmers?

Some approaches are in pilot stages now; commercial-scale deployment could occur within the next decade with sustained investment and collaboration.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

The Wood-Fired Automobile

WWII’s Forgotten Lesson in Human-Centered Resourcefulness

LAST UPDATED: December 14, 2025 at 5:59 PM

The Wood-Fired Automobile

GUEST POST from Art Inteligencia

Innovation is often romanticized as the pursuit of the new — sleek electric vehicles, AI algorithms, and orbital tourism. Yet, the most profound innovation often arises not from unlimited possibility, but from absolute scarcity. The Second World War offers a stark, compelling lesson in this principle: the widespread adoption of the wood-fired automobile, or the gasogene vehicle.

In the 1940s, as global conflict choked off oil supplies, nations across Europe and Asia were suddenly forced to find an alternative to gasoline to keep their civilian and military transport running. The solution was the gas generator (or gasifier), a bulky metal unit often mounted on the rear or side of a vehicle. This unit burned wood, charcoal, or peat, not for heat or steam, but for gas. The process — pyrolysis — converted solid fuel into a combustible mixture of carbon monoxide, hydrogen, and nitrogen known as “producer gas” or “wood gas,” which was then filtered and fed directly into the vehicle’s conventional internal combustion engine. This adaptation was a pure act of Human-Centered Innovation: it preserved mobility and economic function using readily available, local resources, ensuring the continuity of life amidst crisis.
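
In simplified terms, the chemistry inside a gasifier can be summarized by three well-known reactions (a rough sketch; real units staged these zones, and proportions varied with fuel and design):

\[ \mathrm{C + O_2 \rightarrow CO_2} \qquad \mathrm{C + CO_2 \rightarrow 2\,CO} \qquad \mathrm{C + H_2O \rightarrow CO + H_2} \]

The carbon monoxide and hydrogen carry the usable chemical energy; the nitrogen that enters with the air simply dilutes the mixture, which is one reason producer gas delivers so much less power than gasoline.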

The Scarcity Catalyst: Unlearning the Oil Dependency

Before the war, cars ran on gasoline. When the oil dried up, the world faced a moment of absolute unlearning. Governments and industries could have simply let transportation collapse, but the necessity of maintaining essential services (mail, food distribution, medical transport) forced them to pivot to what they had: wood and ingenuity. This highlights a core innovation insight: the constraints we face today — whether supply chain failures or climate change mandates — are often the greatest catalysts for creative action.

Gasogene cars were slow, cumbersome, and required constant maintenance, yet their sheer existence was a triumph of adaptation. They provided roughly half the power of a petrol engine, requiring drivers to constantly downshift on hills and demanding a long, smoky warm-up period. But they worked. The innovation was not in the vehicle itself, which remained largely the same, but in the fuel delivery system and the corresponding behavioral shift required by the drivers and mechanics.

Case Study 1: Sweden’s Total Mobilization of Wood Gas

Challenge: Maintaining Neutrality and National Mobility Under Blockade

During WWII, neutral Sweden faced a complete cutoff of its oil imports. Without liquid fuel, the nation risked economic paralysis, potentially undermining its neutrality and ability to supply its citizens. The need was immediate and total: convert all essential vehicles.

Innovation Intervention: Standardization and Centralization

Instead of relying on fragmented, local solutions, the Swedish government centralized the gasifier conversion effort. They established the Gasogenkommittén (Gas Generator Committee) to standardize the design, production, and certification of gasifiers (known as gengas). Manufacturers such as Volvo and Scania were tasked not with building new cars, but with mass-producing the conversion kits.

  • By 1945, approximately 73,000 vehicles — nearly 90% of all Swedish vehicles, from buses and trucks to farm tractors and private cars — had been converted to run on wood gas.
  • The government created standardized wood pellet specifications and set up thousands of public wood-gas fueling stations, turning the challenge into a systematic, national enterprise.

The Innovation Impact:

Sweden demonstrated that human resourcefulness can completely circumvent a critical resource constraint at a national scale. The conversion was not an incremental fix; it was a wholesale, government-backed pivot that secured national resilience and mobility using entirely domestic resources. The key was standardized conversion — a centralized effort to manage distributed complexity.

Fischer-Tropsch Process

Case Study 2: German Logistics and the Bio-Diesel Experiment

Challenge: Fueling a Far-Flung Military and Civilian Infrastructure

Germany faced a dual challenge: supplying a massive, highly mechanized military campaign while keeping the domestic civilian economy functional. While military transport relied heavily on synthetic fuel created through the Fischer-Tropsch process, the civilian sector and local military transport units required mass-market alternatives.

Innovation Intervention: Blended Fuels and Infrastructure Adaptation

Beyond wood gas, German innovation focused on blended fuels. A crucial adaptation was the widespread use of methanol, ethanol, and various bio-diesels (esters derived from vegetable oils) to stretch dwindling petroleum reserves. While wood gasifiers were used on stationary engines and some trucks, the government mandated that local transport fill up with methanol-gasoline blends. This forced a massive, distributed shift in fuel pump calibration and engine tuning across occupied Europe.

  • The adaptation required hundreds of thousands of local mechanics, from France to Poland, to quickly unlearn traditional engine maintenance and become experts in the delicate tuning required for lower-energy blended fuels.
  • This placed the burden of innovation not on a central R&D lab, but on the front-line workforce — a pure example of Human-Centered Innovation at the operational level.

The Innovation Impact:

This case highlights how resource constraints force innovation across the entire value chain. Germany’s transport system survived its oil blockade not just through wood gasifiers, but through a constant, low-grade innovation treadmill of fuel substitution, blending, and local adaptation that enabled maximum optionality under duress. The lesson is that resilience comes from flexibility and decentralization.

Conclusion: The Gasogene Mindset for the Modern Era

The wood-fired car is not a relic of the past; it is a powerful metaphor for the challenges we face today. We are currently facing the scarcity of time, carbon space, and public trust. We are entirely reliant on systems that, while efficient in normal times, are dangerously fragile under stress. The shift to sustainability, the move away from centralized energy grids, and the adoption of closed-loop systems all require the Gasogene Mindset — the ability to pivot rapidly to local, available resources and fundamentally rethink the consumption model.

Modern innovators must ask: If our critical resource suddenly disappeared, what would we use instead? The answer should drive our R&D spending today. The history of the gasogene vehicle proves that scarcity is the mother of ingenuity, and the greatest innovations often solve the problem of survival first. We must learn to innovate under constraint, not just in comfort.

“The wood-fired car teaches us that every constraint is a hidden resource, if you are creative enough to extract it.” — Braden Kelley

Frequently Asked Questions About Wood Gas Vehicles

1. How does a wood gas vehicle actually work?

The vehicle uses a gasifier that burns wood or charcoal in a low-oxygen environment (a process called pyrolysis). This creates a gas mixture (producer gas) which is then cooled, filtered, and fed directly into the vehicle’s standard internal combustion engine to power it, replacing gasoline.

2. How did the performance of a wood gas vehicle compare to gasoline?

Gasogene cars provided significantly reduced performance, typically delivering only 50-60% of the power of the original gasoline engine. They were slower, had lower top speeds, required frequent refueling with wood, and needed a 15-30 minute warm-up period to start producing usable gas.

3. Why aren’t these systems used today, given their sustainability?

The system is still used in specific industrial and remote applications (power generation), but not widely in transportation because of the convenience and energy density of liquid fuels. Wood gasifiers are large, heavy, require constant manual fueling and maintenance (clinker removal), and produce a low-energy gas that limits speed and range, making them commercially unviable against modern infrastructure.

Your first step toward a Gasogene Mindset: Identify one key external resource your business or team relies on (e.g., a software license, a single supplier, or a non-renewable material). Now, design a three-step innovation plan for a world where that resource suddenly disappears. That plan is your resilience strategy.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini

Bio-Computing & DNA Data Storage

The Human-Centered Future of Information

LAST UPDATED: December 12, 2025 at 5:47 PM

Bio-Computing & DNA Data Storage

GUEST POST from Art Inteligencia

We are drowning in data. The digital universe is doubling roughly every two years, and our current infrastructure — reliant on vast, air-conditioned server farms — is neither environmentally nor economically sustainable. This is where the most profound innovation of the 21st century steps in: DNA Data Storage. Rather than using the binary zeroes and ones of silicon, we leverage the four-base code of life — Adenine (A), Cytosine (C), Guanine (G), and Thymine (T) — to encode information. This transition is not merely an improvement; it is a fundamental shift that aligns our technology with the principles of Human-Centered Innovation by prioritizing sustainability, longevity, and density.

The scale of this innovation is staggering. DNA is the most efficient information storage system known. Theoretically, all the world’s data could be stored in a volume smaller than a cubic meter. This level of density, combined with the extreme longevity of DNA (which can last for thousands of years when properly preserved), solves the two biggest crises facing modern data: decay and footprint. We must unlearn the limitation of physical space and embrace biology as the ultimate hard drive. Bio-computing, the application of molecular reactions to perform complex calculations, is the natural, faster counterpart to this massive storage potential.

The Three Pillars of the Bio-Data Revolution

The convergence of biology and information technology is built on three revolutionary pillars:

1. Unprecedented Data Density

A single gram of DNA can theoretically store over 215 petabytes (215 million gigabytes) of data. Compared with conventional hard-drive storage, which would require acres of physical space to house that much information, DNA offers a reduction in physical footprint of many orders of magnitude. This isn’t just about saving space; it’s about decentralizing data storage and dramatically reducing the need for enormous, vulnerable, power-hungry data centers. This density makes truly long-term archival practical for the first time.

2. Extreme Data Longevity

Conventional storage media, such as hard drives and magnetic tape, are ephemeral. They require constant maintenance, migration, and power to prevent data loss, with a shelf life often measured in decades. DNA, in contrast, has proven its stability over millennia. By encapsulating synthetic DNA in glass or mineral environments, the stored data becomes essentially immortal, eliminating the costly and energy-intensive practice of data migration every few years. This shifts the focus from managing hardware to managing the biological encapsulation process.

3. Low Energy Footprint

Traditional data centers consume vast amounts of electricity, both for operation and, critically, for cooling. The cost and carbon footprint of this consumption are rapidly becoming untenable. DNA data storage requires energy primarily during the initial encoding (synthesis) and subsequent decoding (sequencing) stages. Once stored, the data is inert, requiring zero power for preservation. This radical reduction in operational energy makes DNA storage an essential strategy for any organization serious about sustainable innovation and ESG goals.

Leading the Charge: Companies and Startups

This nascent but rapidly accelerating industry is attracting major players and specialized startups. Large technology companies like Microsoft and IBM are deeply invested, often in partnership with specialized biotech firms, to validate the technology and define the industrial standard for synthesis and sequencing. Microsoft, in collaboration with the University of Washington, was among the first to successfully encode and retrieve large files, including the entire text of the Universal Declaration of Human Rights. Meanwhile, startups are focusing on making the process more efficient and commercially viable. Twist Bioscience has become a leader in DNA synthesis, providing the tools necessary to write the data. Other emerging companies like Catalog are working on miniaturizing and automating the DNA storage process, moving the technology from a lab curiosity to a scalable, automated service. These players are establishing the critical infrastructure for the bio-data ecosystem.

Case Study 1: Archiving Global Scientific Data

Challenge: Preserving the Integrity of Long-Term Climate and Astronomical Records

A major research institution (“GeoSphere”) faced the challenge of preserving petabytes of climate, seismic, and astronomical data. This data needs to be kept for over 100 years, but the constant migration required by magnetic tape and hard drives introduced a high risk of data degradation, corruption, and enormous archival cost.

Bio-Data Intervention: DNA Encapsulation

GeoSphere partnered with a biotech firm to conduct a pilot program, encoding its most critical reference datasets into synthetic DNA. The data was converted into A, T, C, G sequences and chemically synthesized. The resulting DNA molecules were then encapsulated in silica beads for long-term storage.

  • The physical volume required to store the petabytes of data was reduced from a warehouse full of tapes to a container the size of a shoebox.
  • The data was found to be chemically stable with a projected longevity of over 1,000 years without any power or maintenance.

The Innovation Impact:

The shift to DNA storage solved GeoSphere’s long-term sustainability and data integrity crisis. It demonstrated that DNA is the perfect medium for “cold” archival data — vast amounts of information that must be kept secure but are infrequently accessed. This validated the role of DNA as a non-electronic, permanent archival solution.

Case Study 2: Bio-Computing for Drug Discovery

Challenge: Accelerating Complex Molecular Simulations in Pharmaceutical R&D

A pharmaceutical company (“BioPharmX”) was struggling with the computational complexity of molecular docking — simulating how millions of potential drug compounds interact with a target protein. Traditional silicon supercomputers required enormous time and electricity to run these optimization problems.

Bio-Data Intervention: Molecular Computing

BioPharmX explored bio-computing (or molecular computing) using DNA strands and enzymes. By setting up the potential drug compounds as sequences of DNA and allowing them to react with a synthesized protein target (also modeled in DNA), the calculation was performed not by electrons, but by molecular collision and selection.

  • Each possible interaction became a physical, parallel chemical reaction taking place simultaneously in the solution.
  • This approach solved a complex Traveling Salesman Problem instance (a classic stand-in for combinatorial optimization; see the sketch below) faster than the company’s traditional electronic systems because of the massive parallelism inherent in molecular interactions.
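
To see why that parallelism matters, it helps to look at how quickly the search space of such problems grows. A minimal sketch, assuming a symmetric instance in which a tour and its reverse count as the same route:

```python
# Illustrative only: the number of distinct round-trip tours in a symmetric
# Traveling Salesman Problem grows factorially with the number of cities.
import math

def distinct_tours(n_cities: int) -> int:
    """(n-1)!/2 distinct tours for a symmetric TSP with n cities."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(f"{n} cities -> {distinct_tours(n):,} possible tours")
```

A silicon processor must evaluate these candidates largely in sequence, whereas a test tube of DNA strands can, in principle, represent and filter enormous numbers of candidates simultaneously as parallel chemical reactions.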

The Innovation Impact:

Bio-computing proved to be a highly efficient, parallel processing method for solving specific, combinatorial problems related to drug design. This allowed BioPharmX to filter billions of potential compounds down to the most viable candidates in a fraction of the time, dramatically accelerating their R&D pipeline and showcasing the power of biological systems as processors.

Conclusion: The Convergence of Life and Logic

The adoption of DNA data storage and the development of bio-computing mark a pivotal moment in the history of information technology. It is a true embodiment of Human-Centered Innovation, pushing us toward a future where our most precious data is stored sustainably, securely, and with a life span that mirrors humanity’s own. For organizations, the question is not if to adopt bio-data solutions, but when and how to begin building the competencies necessary to leverage this biological infrastructure. The future of innovation is deeply intertwined with the science of life itself. The next great hard drive is already inside you.

“If your data has to last forever, it must be stored in the medium that was designed to do just that.”

Frequently Asked Questions About Bio-Computing and DNA Data Storage

1. How is data “written” onto DNA?

Data is written onto DNA using DNA synthesis machines, which chemically assemble the custom sequence of the four nucleotide bases (A, T, C, G) according to a computer algorithm that converts binary code (0s and 1s) into the base-four code of DNA.
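
As a purely illustrative sketch of that conversion (real codecs add error correction and avoid problematic sequences such as long runs of the same base), a naive two-bits-per-base mapping looks like this:

```python
# Naive illustration of binary <-> DNA encoding at 2 bits per nucleotide.
# Production DNA-storage codecs are far more sophisticated than this.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Convert binary data into a DNA base sequence."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    """Convert a sequenced DNA read back into the original binary data."""
    bits = "".join(BITS_FOR_BASE[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"Hello, DNA"
strand = encode(message)
assert decode(strand) == message
print(strand)  # begins CAGACGCC...
```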

2. How is the data “read” from DNA?

Data is read from DNA using standard DNA sequencing technologies. This process determines the exact sequence of the A, T, C, and G bases, and a reverse computer algorithm then converts this base-four sequence back into the original binary code for digital use.

3. What is the current main barrier to widespread commercial adoption?

The primary barrier is the cost and speed of the writing (synthesis) process. While storage density and longevity are superior, the current expense and time required to synthesize vast amounts of custom DNA make it currently viable only for “cold” archival data that is accessed very rarely, rather than for “hot” data used daily.

Your first step into bio-data thinking: Identify one dataset in your organization — perhaps legacy R&D archives or long-term regulatory compliance records — that has to be stored for 50 years or more. Calculate the total cost of power, space, and periodic data migration for that dataset over that time frame. This exercise will powerfully illustrate the human-centered, sustainable value proposition of DNA data storage.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini

Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation

LAST UPDATED: December 8, 2025 at 4:56 PM

Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation

GUEST POST from Art Inteligencia

For the last decade, Artificial Intelligence (AI) has lived primarily on our screens and in the cloud — a brain without a body. While large language models (LLMs) and predictive algorithms have revolutionized data analysis, they have done little to change the physical experience of work, commerce, and daily life. This is the innovation chasm we must now bridge.

The next great technological leap is Embodied Artificial Intelligence (EAI): the convergence of advanced robotics (the body) and complex, generalized AI (the brain). EAI systems are designed not just to process information, but to operate autonomously and intelligently within our physical world. This is a profound shift for Human-Centered Innovation, because EAI promises to eliminate the drudgery, danger, and limitations of physical labor, allowing humans to focus exclusively on tasks that require judgment, creativity, and empathy.

The strategic deployment of EAI requires a shift in mindset: organizations must view these agents not as mechanical replacements, but as co-creators that augment and elevate the human experience. The most successful businesses will be those that unlearn the idea of human vs. machine and embrace the model of Human-Embodied AI Symbiosis.

The EAI Opportunity: Three Human-Centered Shifts

EAI accelerates change by enabling three crucial shifts in how we organize work and society:

1. The Shift from Automation to Augmentation

Traditional automation replaces repetitive tasks. EAI offers intelligent augmentation. Because EAI agents learn and adapt in real-time within dynamic environments (like a factory floor or a hospital), they can handle unforeseen situations that script-based robots cannot. This means the human partner moves from supervising a simple process to managing the exceptions and optimizations of a sophisticated one. The human job becomes about maximizing the intelligence of the system, not the efficiency of the body.

2. The Shift from Efficiency to Dignity

Many essential human jobs are physically demanding, dangerous, or profoundly repetitive. EAI offers a path to remove humans from these undignified roles — the loading and unloading of heavy boxes, inspection of hazardous infrastructure, or the constant repetition of simple assembly tasks. This frees human capital for high-value interaction, fostering a new organizational focus on the dignity of work. Organizations committed to Human-Centered Innovation must prioritize the use of EAI to eliminate physical risk and strain.

3. The Shift from Digital Transformation to Physical Transformation

For decades, digital transformation has been the focus. EAI catalyzes the necessary physical transformation. It closes the loop between software and reality. An inventory algorithm that predicts demand can now direct a bipedal robot to immediately retrieve and prepare the required product from a highly chaotic warehouse shelf. This real-time, physical execution based on abstract computation is the true meaning of operational innovation.
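
As a purely hypothetical sketch (the names and numbers below are illustrative, not any real warehouse or robot API), closing that loop can be as simple as a demand forecast feeding a queue of physical tasks that an embodied agent then executes:

```python
# Hypothetical sketch: a demand forecast driving a physical task queue.
from dataclasses import dataclass
from queue import Queue

@dataclass
class PickTask:
    sku: str
    quantity: int

def forecast_demand(history: dict[str, list[int]]) -> dict[str, int]:
    """Naive forecast: tomorrow's demand is the average of recent daily sales."""
    return {sku: round(sum(days) / len(days)) for sku, days in history.items()}

def dispatch(forecast: dict[str, int], on_hand: dict[str, int], tasks: "Queue[PickTask]") -> None:
    """Queue a retrieval task for any SKU predicted to run short."""
    for sku, needed in forecast.items():
        shortfall = needed - on_hand.get(sku, 0)
        if shortfall > 0:
            tasks.put(PickTask(sku=sku, quantity=shortfall))

task_queue: "Queue[PickTask]" = Queue()
dispatch(forecast_demand({"widget": [4, 6, 5]}), {"widget": 2}, task_queue)
while not task_queue.empty():
    print(task_queue.get())  # in an EAI system, a robot works these tasks off
```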

Case Study 1: Transforming Infrastructure Inspection

Challenge: High Risk and Cost in Critical Infrastructure Maintenance

A global energy corporation (“PowerLine”) faced immense risk and cost in maintaining high-voltage power lines, oil pipelines, and sub-sea infrastructure. These tasks required sending human crews into dangerous, often remote, or confined spaces for time-consuming, repetitive visual inspections.

EAI Intervention: Autonomous Sensory Agents

PowerLine deployed a fleet of autonomous, multi-limbed EAI agents equipped with advanced sensing and thermal imaging capabilities. These robots were trained not just on pre-programmed routes, but on the accumulated, historical data of human inspectors, learning to spot subtle signs of material stress and structural failure — a skill previously reserved for highly experienced humans.

  • The EAI agents performed 95% of routine inspections, capturing data with superior consistency.
  • Human experts unlearned routine patrol tasks and focused exclusively on interpreting the EAI data flags and designing complex repair strategies.

The Outcome:

The use of EAI led to a 70% reduction in inspection time and, critically, a near-zero rate of human exposure to high-risk environments. This strategic pivot proved that EAI’s greatest value is not economic replacement, but human safety and strategic focus. The EAI provided a foundational layer of reliable, granular data, enabling human judgment to be applied only where it mattered most.

Case Study 2: Elderly Care and Companionship

Challenge: Overstretched Human Caregivers and Isolation

A national assisted living provider (“ElderCare”) struggled with caregiver burnout and increasing costs, while many residents suffered from emotional isolation due to limited staff availability. The challenge was profoundly human-centered: how to provide dignity and aid without limitless human resources.

EAI Intervention: The Adaptive Care Companion

ElderCare piloted the use of adaptive, humanoid EAI companions in low-acuity environments. These agents were programmed to handle simple, repetitive physical tasks (retrieving dropped items, fetching water, reminding patients about medication) and, critically, were trained on empathetic conversation models.

  • The EAI agents managed 60% of non-essential, fetch-and-carry tasks, freeing up human nurses for complex medical care and deep, personalized interaction.
  • The EAI’s conversation logs provided caregivers with Small Data insights into the emotional state and preferences of the residents, allowing the human staff to maximize the quality of their face-to-face time.

The Outcome:

The pilot resulted in a 30% reduction in nurse burnout and, most importantly, a measurable increase in resident satisfaction and self-reported emotional well-being. The EAI was deployed not to replace the human touch, but to protect and maximize its quality by taking on the physical burden of routine care. The innovation successfully focused human empathy where it had the greatest impact.

The EAI Ecosystem: Companies to Watch

The race to commercialize EAI is accelerating, driven by the realization that AI needs a body to unlock its full economic potential. Organizations should be keenly aware of the leaders in this ecosystem. Companies like Boston Dynamics, known for advanced mobility and dexterity, are pioneering the physical platforms. Startups such as Sanctuary AI and Figure AI are focused on creating general-purpose humanoid robots capable of performing diverse tasks in unstructured environments, integrating advanced large language and vision models into physical forms. Simultaneously, major players like Tesla with its Optimus project and research divisions within Google DeepMind are laying the foundational AI models necessary for EAI agents to learn and adapt autonomously. The most promising developments are happening at the intersection of sophisticated hardware (the actuators and sensors) and generalized, real-time control software (the brain).

Conclusion: A New Operating Model

Embodied AI is not just another technology trend; it is the catalyst for a radical change in the operating model of human civilization. Leaders must stop viewing EAI deployment as a simple capital expenditure and start treating it as a Human-Centered Innovation project. Your strategy should be defined by the question: How can EAI liberate my best people to do their best, most human work? Embrace the complexity, manage the change, and utilize the EAI revolution to drive unprecedented levels of dignity, safety, and innovation.

“The future of work is not AI replacing humans; it is EAI eliminating the tasks that prevent humans from being fully human.”

Frequently Asked Questions About Embodied Artificial Intelligence

1. How does Embodied AI differ from traditional industrial robotics?

Traditional industrial robots are fixed, single-purpose machines programmed to perform highly repetitive tasks in controlled environments. Embodied AI agents are mobile, often bipedal or multi-limbed, and are powered by generalized AI models, allowing them to learn, adapt, and perform complex, varied tasks in unstructured, human environments.

2. What is the Human-Centered opportunity of EAI?

The opportunity is the elimination of the “3 Ds” of labor: Dangerous, Dull, and Dirty. By transferring these physical burdens to EAI agents, organizations can reallocate human workers to roles requiring social intelligence, complex problem-solving, emotional judgment, and creative innovation, thereby increasing the dignity and strategic value of the human workforce.

3. What does “Human-Embodied AI Symbiosis” mean?

Symbiosis refers to the collaborative operating model where EAI agents manage the physical execution and data collection of routine, complex tasks, while human professionals provide oversight, set strategic goals, manage exceptions, and interpret the resulting data. The systems work together to achieve an outcome that neither could achieve efficiently alone.

Your first step toward embracing Embodied AI: Identify the single most physically demanding or dangerous task in your organization that is currently performed by a human. Begin a Human-Centered Design project to fully map the procedural and emotional friction points of that task, then use those insights to define the minimum viable product (MVP) requirements for an EAI agent that can eliminate that task entirely.

UPDATE – Here is an infographic of the key points of this article that you can download:

Embodied Artificial Intelligence Infographic

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: 1 of 1,000+ quote slides for your meetings & presentations at http://misterinnovation.com

Is OpenAI About to Go Bankrupt?

LAST UPDATED: December 4, 2025 at 4:48 PM

Is OpenAI About to Go Bankrupt?

GUEST POST from Chateau G Pato

The innovation landscape is shifting, and the tremors are strongest in the artificial intelligence (AI) sector. For a moment, OpenAI felt like an impenetrable fortress, the company that cracked the code and opened the floodgates of generative AI to the world. But now, as a thought leader focused on Human-Centered Innovation, I see the classic signs of disruption: a growing competitive field, a relentless cash burn, and a core product advantage that is rapidly eroding. The question of whether OpenAI is on the brink of bankruptcy isn’t just about sensational headlines — it’s about the fundamental sustainability of a business model built on unprecedented scale and staggering cost.

The “Code Red” announcement from OpenAI, ostensibly about maintaining product quality, was a subtle but profound concession. It was an acknowledgment that the days of unchallenged superiority are over. This came as competitors like Google’s Gemini and Anthropic’s Claude are not just keeping pace, but in many key performance metrics, they are reportedly surpassing OpenAI’s flagship models. Performance parity, or even outperformance, is a killer in the technology adoption curve. When the superior tool is also dramatically cheaper, the choice for enterprises and developers — the folks who pay the real money — becomes obvious.

The Inevitable Crunch: Performance and Price

The competitive pressure is coming from two key vectors: performance and cost-efficiency. While the public often focuses on benchmark scores like MMLU or coding abilities — where models like Gemini and Claude are now trading blows or pulling ahead — the real differentiator for business users is price. New models, including the China-based DeepSeek, are entering the market with reported capabilities approaching the frontier models but at a fraction of the development and inference cost. DeepSeek’s reportedly low development cost highlights that the efficiency of model creation is also improving outside of OpenAI’s immediate sphere.

Crucially, the open-source movement, championed by models like Meta’s Llama family, introduces a zero-cost baseline that fundamentally caps the premium OpenAI can charge. Llama, and the rapidly improving ecosystem around it, means that a good-enough, customizable, and completely free model is always an option for businesses. This open-source competition bypasses the high-cost API revenue model entirely, forcing closed-source providers to offer a quantum leap in utility to justify the expenditure. This dynamic accelerates the commoditization of foundational model technology, turning OpenAI’s once-unique selling proposition into a mere feature.

OpenAI’s models, for all their power, have been famously expensive to run — a cost that gets passed on through their API. The rise of sophisticated, cheaper alternatives — many of which employ highly efficient architectures like Mixture-of-Experts (MoE) — means the competitive edge of sheer scale is being neutralized by engineering breakthroughs in efficiency. If the next step in AI on its way to artificial general intelligence (AGI) is a choice between a 10% performance increase and a 10x cost reduction for 90% of the performance, the market will inevitably choose the latter. This is a structural pricing challenge that erodes one of OpenAI’s core revenue streams: API usage.
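
The efficiency argument behind MoE is easy to see in a toy example. The sketch below is an illustrative, NumPy-only routing layer, not any particular vendor’s architecture: each input activates only its top-k experts, so inference cost scales with the experts actually used rather than with the total parameter count.

```python
# Toy Mixture-of-Experts routing: only top_k of n_experts run per token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # tiny "expert" layers
router = rng.normal(size=(d_model, n_experts))                             # router weights (random here)

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]      # indices of the selected experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts only
    # Only top_k expert matrices are evaluated; the rest stay idle for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.normal(size=d_model)))
```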

The Financial Chasm: Burn Rate vs. Reserves

The financial situation is where the “bankruptcy” narrative gains traction. Developing and running frontier AI models is perhaps the most capital-intensive venture in corporate history. Reports — which are often conflicting and subject to interpretation — paint a picture of a company with an astronomical cash burn rate. Estimates for annual operational and development expenses are in the billions of dollars, resulting in a net loss measured in the billions.

This reality must be contrasted with the position of their main rivals. While OpenAI is heavily reliant on Microsoft’s monumental investment — a complex deal involving cash and Azure cloud compute credits — Microsoft’s exposure is structured as a strategic infrastructure play. The real financial behemoth is Alphabet (Google), which can afford to aggressively subsidize its Gemini division almost indefinitely. Alphabet’s near-monopoly on global search engine advertising generates profits in the tens of billions of dollars every quarter. This virtually limitless reservoir of cash allows Google to cross-subsidize Gemini’s massive research, development, and inference costs, effectively enabling them to engage in a high-stakes price war that smaller, loss-making entities like OpenAI cannot truly win on a level playing field. Alphabet’s strategy is to capture market share first, using the profit engine of search to buy time and scale, a luxury OpenAI simply does not have without a continuous cash injection from a partner.

The question is not whether OpenAI has money now, but whether their revenue growth can finally eclipse their accelerating costs before their massive reserve is depleted. Their long-term financial projections, which foresee profitability and revenues in the hundreds of billions by the end of the decade, require not just growth, but a sustained, near-monopolistic capture of the new AI-driven knowledge economy. That becomes increasingly difficult when competitors are faster, cheaper, and arguably better, and have access to deeper, more sustainable profit engines for cross-subsidization.

The Future Outlook: Change or Consequence

OpenAI’s future is not doomed, but the company must initiate a rapid, human-centered transformation. The current trajectory — relying on unprecedented capital expenditure to maintain a shrinking lead in model performance — is structurally unsustainable in the face of faster, cheaper, and increasingly open-source models like Meta’s Llama. The next frontier isn’t just AGI; it’s AGI at scale, delivered efficiently and affordably.

OpenAI must pivot from a model of monolithic, expensive black-box development to one that prioritizes efficiency, modularity, and a true ecosystem approach. This means a rapid shift to MoE architectures, aggressive cost-cutting in inference, and a clear, compelling value proposition beyond just “we were first.” Human-Centered Innovation principles dictate that a company must listen to the market — and the market is shouting for price, performance, and flexibility. If OpenAI fails to execute this transformation and remains an expensive, marginal performer, its incredible cash reserves will serve only as a countdown timer to a necessary and painful restructuring.

Frequently Asked Questions (FAQ)

  • Is OpenAI currently profitable?
    OpenAI is currently operating at a significant net loss. Its annual cash burn rate, driven by high R&D and inference costs, reportedly exceeds its annual revenue, meaning it relies heavily on its massive cash reserves and the strategic investment from Microsoft to sustain operations.
  • How are Gemini and Claude competing against OpenAI on cost and performance?
    Competitors like Google’s Gemini and Anthropic’s Claude are achieving performance parity or superiority on key benchmarks. Furthermore, they are often cheaper to use (lower inference cost) due to more efficient architectures (like MoE) and the ability of deep-pocketed backers, such as Alphabet in Gemini’s case, to cross-subsidize their AI divisions with enormous profits from other revenue streams, such as search engine advertising.
  • What was the purpose of OpenAI’s “Code Red” announcement?
    The “Code Red” was an internal or public acknowledgment by OpenAI that its models were facing performance and reliability degradation in the face of intense, high-quality competition from rivals. It signaled a necessary, urgent, company-wide focus on addressing these issues to restore and maintain a technological lead.

UPDATE: I just found on X that, per the Financial Times (FT), HSBC projects OpenAI will rack up nearly half a trillion dollars in operating losses through 2030. Here is the chart of their roughly $100 billion in projected losses in 2029. With the success of Gemini, Claude, DeepSeek, Llama and competitors yet to emerge, the revenue piece may be overstated:

OpenAI estimated 2029 financials

Image credits: Google Gemini, Financial Times

The Tax Trap and Why Our Economic OS is Crashing

LAST UPDATED: December 3, 2025 at 6:23 PM

The Tax Trap and Why Our Economic OS is Crashing

GUEST POST from Art Inteligencia

We are currently operating an analog economy in a digital world. As an innovation strategist, I often talk about Braden Kelley’s “FutureHacking” — the art of getting to the future first. But sometimes, the future arrives before we have even unpacked our bags. The recent discourse around The Great American Contraction has illuminated a structural fault line in our society that we can no longer ignore. It is what I call the Tax Trap.

This isn’t just an economic glitch; it is a design failure of our entire social contract. We have built a civilization where human survival is tethered to labor, and government solvency is tethered to taxing that labor. As we sprint toward a post-labor economy fueled by Artificial Intelligence and robotics, we are effectively sawing off the branch we are sitting on.

The Mechanics of the Trap

To understand the Tax Trap, we must look at the “User Interface” of our government’s revenue stream. Historically, the user was the worker. You worked, you got paid, you paid taxes. The government then used those taxes to build roads, schools, and safety nets. It was a closed loop.

The introduction of AI as a peer-level laborer breaks this loop in two distinct places, creating a pincer movement that threatens to crush fiscal stability.

1. The Revenue Collapse (The Input Failure)

Robots do not pay payroll taxes. They do not contribute to Social Security or Medicare. When a logistics company replaces 500 warehouse workers with an autonomous swarm, the government loses the income tax from 500 people. But it goes deeper.

In the race for AI dominance, companies are incentivized to pour billions into “compute” — data centers, GPUs, and energy infrastructure. Under current accounting rules, these massive investments can often be written off as expenses or depreciated, driving down reportable profit. So, not only does the government lose the payroll tax, but it also sees a dip in corporate tax revenue because on paper, these hyper-efficient companies are “spending” all their money on growth.

2. The Welfare Spike (The Output Overload)

Here is the other side of the trap. Those 500 displaced warehouse workers do not vanish. They still have biological needs. They need food, healthcare, and housing. Without wages, they turn to the public safety net.

This creates a terrifying feedback loop: Revenue plummets exactly when demand for services explodes.
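
To make the pincer concrete, here is a deliberately oversimplified, back-of-envelope sketch. Every number is hypothetical, and real tax treatment of capital expenditure, depreciation schedules, and benefit costs varies widely:

```python
# Hypothetical illustration of the "Tax Trap" pincer; all figures are invented.
workers_replaced = 500
avg_wage = 50_000
labor_tax_rate = 0.30            # assumed combined payroll + income tax rate
lost_labor_tax = workers_replaced * avg_wage * labor_tax_rate

automation_capex = 40_000_000    # assumed spend on robots, compute, energy
corporate_tax_rate = 0.21
# If that capex is expensed or rapidly depreciated, it shields an equal amount
# of otherwise-taxable profit (a simplification of real accounting rules).
reduced_corporate_tax = automation_capex * corporate_tax_rate

support_per_worker = 20_000      # assumed annual safety-net cost per displaced worker
new_welfare_outlays = workers_replaced * support_per_worker

print(f"Lost labor-based tax revenue:   ${lost_labor_tax:,.0f} per year")
print(f"Reduced corporate tax (year 1): ${reduced_corporate_tax:,.0f}")
print(f"New safety-net outlays:         ${new_welfare_outlays:,.0f} per year")
```

Revenue falls on both inputs at exactly the moment outlays rise, which is the feedback loop described above.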

The Innovation Paradox: The more efficient our companies become at generating value through automation, the less capable our government becomes at capturing that value to sustain the society that permits those companies to exist.

A Human-Centered Design Flaw

As a champion of Human-Centered Change, I view this not as a political problem, but as an architectural one. We are trying to run 21st-century software (AI-driven abundance) on 20th-century hardware (labor-based taxation).

The “Great American Contraction” suggests that smart nations will reduce their populations to avoid this unrest. While logically sound from a cold, mathematical perspective, it is a defensive strategy. It is a retreat. As innovators, we should not be looking to shrink to fit a broken model; we should be looking to redesign the model to fit our new reality.

The current system penalizes the human element. If you hire a human, you pay payroll tax, health insurance, and deal with HR complexity. If you hire a robot, you get a capital depreciation tax break. We have literally incentivized the elimination of human relevance.

Charting the Change: The Pivot to Value

How do we hack this future? We must decouple human dignity from labor, and government revenue from wages. We need a new “operating system” for public finance.

We must shift from taxing effort (labor) to taxing flow (value). This might look like:

  • The Robot Tax 2.0: Not a penalty on innovation, but a “sovereign license fee” for operating autonomous labor units that utilize public infrastructure (digital or physical).
  • Data Dividends: Recognizing that AI is trained on the collective knowledge of humanity. If an AI uses public data to generate profit, a fraction of that value belongs to the public trust.
  • The VAT Revolution: Moving toward taxing consumption and revenue rather than profit. If a company generates billions in revenue with zero employees, the tax code must capture a slice of that transaction volume, regardless of their operational costs.
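
To illustrate the last option above, here is a minimal sketch comparing how much a profit-based tax and a revenue-based levy would each capture from a hypothetical zero-employee company that writes off most of its income as compute spending. The rates and figures are assumptions for illustration only:

```python
# Toy comparison of profit-based vs. revenue-based taxation for a hypothetical
# zero-employee, automation-heavy company. All figures and rates are assumptions.

def profit_tax(revenue: float, capex_written_off: float, rate: float = 0.21) -> float:
    """Tax on reported profit; heavy compute write-offs shrink the taxable base."""
    taxable_profit = max(revenue - capex_written_off, 0)
    return taxable_profit * rate

def revenue_tax(revenue: float, rate: float = 0.03) -> float:
    """VAT-style levy on transaction volume, independent of reported profit."""
    return revenue * rate

revenue = 5_000_000_000            # assumed $5B in sales
capex_written_off = 4_800_000_000  # assumed near-total reinvestment in compute

print(f"Profit-based capture:  ${profit_tax(revenue, capex_written_off):,.0f}")
print(f"Revenue-based capture: ${revenue_tax(revenue):,.0f}")
# Profit-based capture:  $42,000,000
# Revenue-based capture: $150,000,000
```

The point is not the specific rates but the structural difference: the revenue-based levy keeps capturing value even when reported profit is engineered toward zero.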

The Empathy Engine

The Tax Trap is only fatal if we lack imagination. “The Great American Contraction” warns of scarcity, but automation promises abundance. The bridge between the two is distribution.

If we fail to redesign this system, we face a future of gated communities guarded by drones, surrounded by a sea of irrelevant, under-supported humans. That is a failure of innovation. True innovation isn’t just about faster chips or smarter code; it’s about designing systems that elevate the human condition.

We have the tools to build a world where the robot pays the tax, and the human reaps the creative dividend. We just need the courage to rewrite the source code of our economy.


The Great American Contraction Infographic

Image credits: Google Gemini







The Evolution of Trapped Value in Cloud Computing

The Evolution of Trapped Value in Cloud Computing

GUEST POST from Geoffrey A. Moore

Releasing trapped value drives the adoption of disruptive technology and subsequent category development. The trapped part inspires the technical innovation while the value part funds the business. As targeted trapped value gets released, the remaining value is held in place by a secondary set of traps, calling for a second generation of innovation, and a second round of businesses. This pattern continues until all the energy in the system is exhausted, and the economic priority shifts from growth to maintenance.

Take cloud computing for example. Amazon and Salesforce were early disrupters. The trapped value in retail was consumer access anytime anywhere. The trapped value in SaaS CRM was a corporate IT model that prioritized forecasting and reporting applications for upper management over tools for improving sales productivity in the trenches. As their models grew in success, however, they outgrew the data center operating model upon which they were based, and that was creating problems for both companies.

Help came from an unexpected quarter. Consumer computing, led by Google and Facebook, tackled the trapped value in the data center model by inventing the data-center-as-a-computer operation. The trapped value was in computers and network equipment that was optimized for scaling up to get more power. The new model relentlessly focused on commoditizing both, with stripped-down compute blocks and software-enabled switching—much to the consternation of the established hardware vendors who had no easy place to retreat to.

Their situation was further exacerbated by the rise of hyperscaler compute vendors who offered to take on the entire enterprise footprint. But as they did, the value trap moved again, and this time it was the hyperscaler pricing model that was holding things back, particularly when switching costs were high. That has given rise to a hybrid architecture which at present is muddling its way through to a moderating norm. Here companies like Equinix and Digital Realty are helping enterprises combine approaches to find their optimal balance.

As this norm takes over more and more of the playing field, we may approach an asymptote of releasable trapped value at the computing layer. If so, that just means it will migrate elsewhere—in this case, up the stack. We are already seeing this in at least three areas of hypergrowth today:

  1. Cybersecurity, where the trapped value is in patching together component subsystems to address ongoing exposure to catastrophic risk.
  2. Content generation, where the trapped value is in time to market, as well as unfulfilled demand, for fresh digital media, both in consumer markets and in the enterprise.
  3. Co-piloting, where the trapped value is in low-yielding engagement with high-value digital services due to topic complexity and the lack of sophistication on the part of the end user.

All three of these opportunities will push further innovation in cloud computing, but the higher margins will now migrate to the next generation.

The net of all this is a fundamental investment thesis that applies equally well to venture investing, enterprise spending, and personal wealth management. As the Watergate pair of Woodward and Bernstein taught us many decades ago, Follow the money! In this case, the money is in the trapped value. So before you invest in any context, first identify the trapped value that, when released, will create the ROI you are looking for; then monitor the early stages to determine whether it is indeed getting released and, if so, whether a fair share of the returns is coming back to you.

That’s what I think. What do you think?

Image Credit: Pixabay







Why 4D Printing is the Next Frontier of Human-Centered Change

The Adaptive Product

LAST UPDATED: November 29, 2025 at 9:23 AM

Why 4D Printing is the Next Frontier of Human-Centered Change

GUEST POST from Art Inteligencia

For centuries, the pinnacle of manufacturing innovation has been the creation of a static, rigid, and perfect form. Additive Manufacturing, or 3D printing, perfected this, giving us complexity without molds. But a seismic shift is underway, introducing the fourth dimension: time. 4D Printing is the technology that builds products designed to change their shape, composition, or functionality autonomously in response to environmental cues.

The innovation isn’t merely in the print, but in the programmable matter. These are objects with embedded behavioral code, turning raw materials into self-assembling, self-repairing, or self-adapting systems. For the Human-Centered Change leader, this is profoundly disruptive, moving design thinking from What the object is, to How the object behaves across its entire lifespan and in shifting circumstances.

The core difference is simple: 3D printing creates a fixed object. 4D printing creates a dynamic system.

The Mechanics of Transformation: Smart Materials

4D printing leverages existing 3D printing technologies (like Stereolithography or Fused Deposition Modeling) but uses Smart Materials instead of traditional static plastics. These materials have properties programmed into their geometry that cause them to react to external stimuli. The key material categories include:

  • Shape Memory Polymers (SMPs): These materials can be printed into one shape (Shape A), deformed into a temporary shape (Shape B), and then recover Shape A when exposed to a specific trigger, usually heat (thermo-responsive).
  • Hydrogels: These polymers swell or shrink significantly when exposed to moisture or water (hygromorphic), allowing for large-scale, water-driven shape changes.
  • Biomaterials and Composites: Complex structures combining stiff and responsive materials to create controlled folding, bending, or twisting motions.

This allows for the creation of Active Origami—intricate, flat-packed structures that self-assemble into complex 3D forms when deployed or activated.
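
For readers who think in code, here is a toy model of the thermo-responsive behavior described above. The 37 °C trigger and the narrow linear transition band are illustrative assumptions, not the kinetics of any real shape-memory polymer:

```python
# Toy model of a thermo-responsive shape-memory polymer (SMP), as used in 4D printing.
# The activation temperature and recovery behavior are illustrative assumptions only.

def smp_recovery_fraction(temperature_c: float,
                          activation_temp_c: float = 37.0,  # assumed trigger (e.g., body temperature)
                          transition_width_c: float = 2.0   # assumed width of the transition band
                          ) -> float:
    """Return the fraction of the programmed shape (Shape A) recovered at a given temperature.

    Below the activation band the part holds its temporary shape (Shape B);
    within the band it transitions; above it, recovery is essentially complete.
    """
    if temperature_c <= activation_temp_c - transition_width_c:
        return 0.0
    if temperature_c >= activation_temp_c + transition_width_c:
        return 1.0
    # Linear ramp through the transition band (a simplification of real SMP kinetics).
    return (temperature_c - (activation_temp_c - transition_width_c)) / (2 * transition_width_c)

for t in (20, 35, 36, 37, 38, 40):
    print(f"{t} °C -> {smp_recovery_fraction(t):.0%} of Shape A recovered")
```

The "program" lives in the material and its geometry; the code above only mimics the stimulus-response curve that the printed part carries within itself.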

Case Study 1: The Self-Adapting Medical Stent

Challenge: Implanting Devices in Dynamic Human Biology

Traditional medical stents (small tubes used to open blocked arteries) are fixed in size and delivered via invasive surgery or catheter-based deployment. Once implanted, they cannot adapt to a patient’s growth or unexpected biological changes, sometimes requiring further intervention.

4D Printing Intervention: The Time-Lapse Stent

Researchers have pioneered the use of 4D printing to create stents made of bio-absorbable, shape-memory polymers. These devices are printed in a compact, temporarily fixed state, allowing for minimally invasive insertion. Upon reaching the target location inside the body, the polymer reacts to the patient’s body temperature (the Thermal Stimulus).

  • The heat triggers the material to return to its pre-programmed, expanded shape, safely opening the artery.
  • The material is designed to gradually and safely dissolve over months or years once its structural support is no longer needed, eliminating the need for a second surgical removal.

The Human-Centered Lesson:

This removes the human risk and cost associated with two major steps: the complexity of surgical deployment (by making the stent initially small and flexible) and the future necessity of removal (by designing it to disappear). The product adapts to the patient, rather than the patient having to surgically manage the product.

Case Study 2: The Adaptive Building Facade

Challenge: Passive Infrastructure in Dynamic Climates

Buildings are static, but the environment is not. Traditional building systems require complex, motor-driven hardware and electrical sensors to adapt to sun, heat, and rain, leading to high energy costs and mechanical failure.

4D Printing Intervention: Hygromorphic Shading Systems

Inspired by how pinecones open and close based on humidity, researchers are 4D-printing building facade elements (shades, shutters) using bio-based, hygromorphic composites (materials that react to moisture). These large-scale prints are installed without any wires or motors.

  • When the air is dry and hot (high sun exposure), the material remains rigid, allowing light in.
  • When humidity increases (signaling impending rain or high moisture), the material absorbs the water vapor and is designed to automatically bend and curl, creating a self-shading or self-closing surface.
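
A similarly hedged toy model captures the facade behavior: closure expressed as a simple function of relative humidity, with thresholds that are purely illustrative assumptions rather than measured material properties:

```python
# Toy model of a hygromorphic (moisture-driven) shading element, inspired by the
# pinecone mechanism described above. Thresholds are illustrative assumptions.

def shade_closure_fraction(relative_humidity: float,
                           open_below: float = 40.0,   # assumed RH (%) below which the shade stays open
                           closed_above: float = 80.0  # assumed RH (%) above which it is fully curled shut
                           ) -> float:
    """Return how far the facade element has curled shut (0 = fully open, 1 = fully closed)."""
    if relative_humidity <= open_below:
        return 0.0
    if relative_humidity >= closed_above:
        return 1.0
    return (relative_humidity - open_below) / (closed_above - open_below)

for rh in (20, 45, 60, 75, 90):
    print(f"RH {rh}% -> shade {shade_closure_fraction(rh):.0%} closed")
```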

The Human-Centered Lesson:

This shifts the paradigm of sustainability from complex digital control systems to material intelligence. It reduces energy consumption and maintenance costs by eliminating mechanical components. The infrastructure responds autonomously and elegantly to the environment, making the building a more resilient and sustainable partner for the human occupants.

The Companies and Startups Driving the Change

The field is highly collaborative, bridging material science and industrial design. Leading organizations are often found in partnership with academic pioneers like MIT’s Self-Assembly Lab. Major additive manufacturing companies like Stratasys and Autodesk have made significant investments, often focusing on the software and material compatibility required for programmable matter. Other key players include HP Development Company and the innovative work coming from specialized bioprinting firms like Organovo, which explores responsive tissues. Research teams at institutions like the Georgia Institute of Technology continue to push the boundaries of multi-material 4D printing systems, making the production of complex, shape-changing structures faster and more efficient. The next generation of breakthroughs will emerge from the seamless integration of these material, design, and software leaders.

“4D printing is the ultimate realization of design freedom. We are no longer limited to designing for the moment of creation, but for the entire unfolding life of the product.”

The implications of 4D printing are vast, spanning aerospace (self-deploying antennae), consumer goods (adaptive footwear), and complex piping systems (self-regulating valves). For change leaders, the mandate is clear: start viewing your products and infrastructure not as static assets, but as programmable actors in a continuous, changing environment.

Frequently Asked Questions About 4D Printing

1. What is the “fourth dimension” in 4D Printing?

The fourth dimension is time. 4D printing refers to 3D-printed objects that are created using smart, programmable materials that change their shape, color, or function over time in response to specific external stimuli like heat, light, or water/humidity.

2. How is 4D Printing different from 3D Printing?

3D printing creates a final, static object. 4D printing uses the same additive manufacturing process but employs smart materials (like Shape Memory Polymers) that are programmed to autonomously transform into a second, pre-designed shape or state when a specific environmental condition is met, adding the element of time-based transformation.

3. What are the main applications for 4D Printing?

Applications are strongest where adaptation or deployment complexity is key. This includes biomedical devices (self-deploying stents), aerospace (self-assembling structures), soft robotics (flexible, adaptable grippers), and self-regulating infrastructure (facades that adjust to weather).

Your first step toward adopting 4D innovation: Identify one maintenance-heavy, mechanical component in your operation that is currently failing due to environmental change (e.g., a simple valve or a passive weather seal). Challenge your design team to rethink it as an autonomous, 4D-printed shape-memory structure that requires no external power source.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini







The Reasons Customers May Refuse to Speak with AI

The Reasons Customers May Refuse to Speak with AI

GUEST POST from Shep Hyken

If you want to anger your customers, make them do something they don’t want to do.

Up to 66% of U.S. customers say that when it comes to getting help, resolving an issue or making a complaint, they only want to speak to a live person. That’s according to the 2025 State of Customer Service and Customer Experience (CX) annual study. If you don’t provide the option to speak to a live person, you are at risk of losing many customers.

But not all customers feel that way. We asked another sample of more than 1,000 customers about using AI and self-service tools to get customer support, and 34% said they stopped doing business with a company or brand because self-service options were not provided.

These findings reveal the contrasting needs and expectations customers have when communicating with the companies they do business with. While the majority prefer human-to-human interaction, a substantial number (about one-third) not only prefer self-service options — AI-fueled solutions, robust frequently asked question pages on a website, video tutorials and more — but demand it or they will actually leave to find a competitor that can provide what they want.

This creates a big challenge for CX decision-makers that directly impacts customer retention and revenue.

Why Some Customers Resist AI

Our research finds that age makes a difference. For example, Baby Boomers show the strongest preference for human interaction, with 82% preferring the phone over digital solutions. Only half (52%) of Gen-Z feels the same way about the phone. Here’s why:

  1. Lack of Trust: Almost half (49%) of customers say they are scared of technologies like AI and ChatGPT.
  2. Privacy Concerns: Seventy percent of customers are concerned about data privacy and security when interacting with AI.
  3. Success — Or Lack of Success: While I think it’s positive that 50% of customers surveyed have successfully resolved a customer service issue using AI without the need for a live agent, that also means that 50% have not.

Customers aren’t necessarily anti-technology. They’re anti-ineffective technology. When AI fails to understand requests and lacks empathy in sensitive situations, the negative experience can make certain customers want to only communicate with a human. Even half of Gen-Z (48%) says they are frustrated with AI technology (versus 17% of Baby Boomers).

Why Some Customers Embrace AI

The 34% of customers who prefer self-service options, to the point of saying they are willing to stop doing business with a company if self-service isn't available, present a dilemma for CX leaders. This can paralyze the decision process for what solutions to buy and implement. Understanding some of the reasons certain customers embrace AI is important:

  1. Speed, Convenience and Efficiency: The ability to get immediate support without having to call a company, wait on hold, be authenticated, etc., is enough to get customers using AI. If you had the choice between getting an answer immediately or having to wait 15 minutes, which would you prefer? (That’s a rhetorical question.)
  2. 24/7 Availability: Immediate support is important, but having immediate access to support outside of normal business hours is even better.
  3. A Belief in the Future: There is optimism about the future of AI, as 63% of customers expect AI technologies to become the primary mode of customer service in the future — a significant increase from just 21% in 2021. That optimism has customers trying and outright adopting the use of AI.

CX leaders must recognize the generational differences — and any other impactful differences — as they make decisions. For companies that sell to customers across generations, this becomes increasingly important, especially as Gen-Z and Millennials gain purchasing power. Turning your back on a generation’s technology expectations puts you at risk of losing a large percentage of customers.

What’s a CX Leader To Do?

Some companies have experimented with forcing customers to use only AI and self-service solutions. This is risky, and for the most part, the experiments have failed. Yet, as AI improves — and it’s doing so at a very rapid pace — it’s okay to push customers to use self-service. Just support it with a seamless transfer to a human if needed. An AI-first approach works as long as there’s a backup.
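
Here is a minimal sketch of what "AI-first with a human backup" can look like in routing logic. The agent call, confidence score and threshold are hypothetical placeholders rather than any vendor's actual API; the point is that an explicit human preference or a low-confidence AI answer should trigger a seamless hand-off, never a dead end:

```python
# Minimal sketch of an "AI-first with human backup" support flow. The agent call,
# confidence score and threshold are hypothetical, not a real vendor's API.

from dataclasses import dataclass

@dataclass
class AIResult:
    answer: str
    confidence: float   # 0.0 - 1.0, the agent's self-reported confidence
    resolved: bool      # did the AI believe it fully resolved the issue?

def ai_agent_answer(question: str) -> AIResult:
    """Placeholder for a real AI support agent call (your chatbot / LLM service)."""
    return AIResult(answer="(draft reply)", confidence=0.62, resolved=False)

def handle_ticket(question: str, prefers_human: bool, confidence_floor: float = 0.75) -> str:
    """Route a customer question: honor an explicit human preference, otherwise try AI first."""
    if prefers_human:
        return "route_to_human"              # never force AI on a customer who asked for a person
    result = ai_agent_answer(question)
    if result.resolved and result.confidence >= confidence_floor:
        return f"ai_resolved: {result.answer}"
    return "escalate_to_human_with_context"  # seamless transfer, carrying the transcript along

print(handle_ticket("Where is my order?", prefers_human=False))
```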

Forcing customers to use a 100% solution, be it AI or human, puts your company at risk of losing customers. Today’s strategy should be a balanced choice between new and traditional customer support. It should be about giving customers the experience they want and expect — one that makes them say, “I’ll be back!”

Image credit: Pixabay

This article originally appeared on Forbes.com
