Do You Have Green Nitrogen Fixation?

Innovating a Sustainable Future

LAST UPDATED: December 20, 2025 at 9:01 AM

GUEST POST from Art Inteligencia

Agriculture feeds the world, but its reliance on synthetic nitrogen fertilizers has come at a steep environmental cost. As we confront climate change, waterway degradation, and soil depletion, the innovation challenge of this generation is clear: how to produce nitrogen sustainably. Green nitrogen fixation is not just a technological milestone — it is a systems-level transformation that integrates chemistry, biology, energy, and human-centered design.

The legacy approach — Haber-Bosch — enabled the Green Revolution, yet it locks agricultural productivity into fossil fuel dependency. Today’s innovators are asking a harder question: can we fix nitrogen with minimal emissions, localize production, and make the process accessible and equitable? The answer shapes the future of food, climate, and economy.

The Innovation Imperative

To feed nearly 10 billion people by 2050 without exceeding climate targets, we must decouple nitrogen fertilizer production from carbon-intensive energy systems. Green nitrogen fixation aims to achieve this by harnessing renewable electricity or biological mechanisms that operate at ambient conditions. This means re-imagining production from the ground up.

The implications are vast: lower carbon footprints, reduced nutrient runoff, resilient rural economies, and new pathways for localized fertilizer systems that empower rather than burden farmers.

Nitrogen Cycle Comparison

Case Study One: Electrochemical Nitrogen Reduction Breakthroughs

Electrochemical nitrogen reduction uses renewable electricity to convert atmospheric nitrogen into ammonia or other reactive forms. Unlike Haber-Bosch, which requires high heat and pressures, electrochemical approaches can operate at room temperature using novel catalyst materials.
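In overall stoichiometry the two routes yield the same molecule; the difference is where the hydrogen and the energy come from. As a simplified sketch (standard textbook chemistry, not drawn from any specific study mentioned here):

```latex
% Haber-Bosch: hydrogen typically from fossil feedstocks,
% run at roughly 400-500 C and 150-300 bar
\mathrm{N_2 + 3\,H_2 \;\longrightarrow\; 2\,NH_3}

% Electrochemical nitrogen reduction: protons and electrons supplied
% directly by (renewable) electricity at ambient temperature and pressure
\mathrm{N_2 + 6\,H^+ + 6\,e^- \;\longrightarrow\; 2\,NH_3}
```

Because the electrochemical route sources its reducing power from electrons rather than fossil-derived hydrogen, its carbon footprint tracks that of the electricity supply.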

One research consortium recently demonstrated that a proprietary catalyst structure significantly increased ammonia yield while maintaining stability over long cycles. Although not yet industrially scalable, this work points to a future where modular electrochemical reactors could be deployed near farms, powered by distributed solar and wind.

What makes this case compelling is not just the chemistry, but the design choice to focus on distributed systems — bringing fertilizer production closer to end users and far from centralized, fossil-fueled plants.

Case Study Two: Engineering Nitrogen Fixation into Staple Crops

Until recently, biological nitrogen fixation was limited to symbiotic relationships between legumes and root bacteria. But gene editing and synthetic biology are enabling scientists to embed nitrogenase pathways into non-legume crops like wheat and maize.

Early field trials with engineered rice have shown significant nitrogenase activity, reducing the need for external fertilizer inputs. While challenges remain — such as metabolic integration, field variability, and regulatory pathways — this represents one of the most disruptive possibilities in agricultural innovation.

This approach turns plants themselves into self-fertilizing systems, reducing emissions, costs, and dependence on industrial supply chains.

Leading Companies and Startups to Watch

Several organizations are pushing the frontier of green nitrogen fixation. Clean-tech firms are developing electrochemical ammonia reactors powered by renewables, while biotech startups are engineering novel nitrogenase systems for crops. Strategic partnerships between agritech platforms, renewable energy providers, and academic labs are forming to scale pilot technologies. Some ventures focus on localized solutions for smallholder farmers, others target utility-scale production with integrated carbon accounting. This ecosystem of innovation reflects the diversity of needs — global and local — and underscores the urgency and possibility of sustainable nitrogen solutions.

In the rapidly evolving landscape of green nitrogen fixation, several pioneering companies are dismantling the carbon-intensive legacy of the Haber-Bosch process.

Pivot Bio leads the biological charge, having successfully deployed engineered microbes across millions of acres to deliver nitrogen directly to crop roots, effectively turning the plants themselves into “mini-fertilizer plants.”

On the electrochemical front, Swedish startup NitroCapt is gaining massive traction with its “SUNIFIX” technology—winner of the 2025 Food Planet Prize—which mimics the natural fixation of nitrogen by lightning using only air, water, and renewable energy.

Nitricity is another key disruptor, recently pivoting toward a breakthrough process that combines renewable energy with organic waste, such as almond shells, to create localized “Ash Tea” fertilizers.

Meanwhile, industry giants like Yara International and CF Industries are scaling up “Green Ammonia” projects through massive electrolyzer integrations, signaling a shift where the world’s largest chemical providers are finally betting on a fossil-free future for global food security.

Barriers to Adoption and Scale

For all the promise, green nitrogen fixation faces real barriers. Electrochemical methods must meet industrial throughput, cost, and durability benchmarks. Biological systems need rigorous field validation across diverse climates and soil types. Regulatory frameworks for engineered crops vary by country, affecting adoption timelines.

Moreover, incumbent incentives in agriculture — often skewed toward cheap synthetic fertilizer — can slow willingness to transition. Overcoming these barriers requires policy alignment, investment in workforce training, and multi-stakeholder collaboration.

Human-Centered Implementation Design

Technical innovation alone is not sufficient. Solutions must be accessible to farmers of all scales, compatible with existing practices when possible, and supported by financing that lowers upfront barriers. This means designing technologies with users in mind, investing in training networks, and co-creating pathways with farming communities.

A truly human-centered green nitrogen future is one where benefits are shared — environmentally, economically, and socially.

Conclusion

Green nitrogen fixation is more than an innovation challenge; it is a socio-technical transformation that intersects climate, food security, and economic resilience. While progress is nascent, breakthroughs in electrochemical processes and biological engineering are paving the way. If we align policy, investment, and design thinking with scientific ingenuity, we can achieve a nitrogen economy that nourishes people and the planet simultaneously.

Frequently Asked Questions

What makes nitrogen fixation “green”?

It refers to producing usable nitrogen compounds with minimal greenhouse gas emissions using renewable energy or biological methods that avoid fossil fuel dependence.

Can green nitrogen fixation replace Haber-Bosch?

It has the potential, but widespread replacement will require scalability, economic competitiveness, and supportive policy environments.

How soon might these technologies reach farmers?

Some approaches are in pilot stages now; commercial-scale deployment could occur within the next decade with sustained investment and collaboration.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Why Are We Forcing People Back into Cubicles?

GUEST POST from Mike Shipulski

Whether it’s placing machine tools on the factory floor or designing workspaces for the people who work at the company, the number one guiding metric is resources per square foot. If you’re placing machine tools, this metric causes the machines to be stacked closely together, where the space between them is minimized, access to the machines is minimized, and the aisles are the smallest they can be. The result – the number of machines per square foot is maximized.

And though there has been talk of workplaces that promote effective interactions and creativity, the primary metric is still people per square foot. Don’t believe me? I have one word for you – cubicles. Cubicles are the design solution of choice when you want to pack the most people into the smallest area.

Here’s a test. At your next team meeting, ask people to raise their hand if they hate working in a cubicle. I rest my case.

With cubicles, it’s the worst of both worlds. There is none of the benefit of an office and none of the benefit of a collaborative environment. They are half of neither.

What is one of Dilbert’s favorite topics? Cubicles.

If no one likes them, why do we still have them? If you want quiet, cubicles are the wrong answer. If you want effective collaboration, cubicles are the wrong answer. If everyone hates them, why do we still have them?

When people need to do deep work, they stay home so they can have peace and quiet. When people want to concentrate, they avoid cubicles at all costs. When you need to focus, you need quiet. And the best way to get quiet is with four walls and a door. Some would call that an office, but offices are passé. And in some cases, they are outlawed. Either way, they are the best way to get some quiet time. And, as a side benefit, they also block interruptions.

The best way for people to interact is face-to-face. And in order to interact that way, they’ve got to be in the same place at the same time. Sure, spontaneous interactions are good, but it’s far better to facilitate interactions with a fixed schedule. As with a bus stop schedule, people know where to be and when. In that way, many people can come together efficiently and effectively, and the number of interactions increases dramatically. So why not set up planned interactions at ten in the morning and two in the afternoon?

I propose a new metric for facilities design – number of good ideas per square foot. Good ideas require deep thought, so quiet is important. And good ideas require respectful interaction with others, so interactions are important.

I’m not exactly sure what a facility must look like to maximize the number of good ideas per square foot, but I do know it has no cubicles.

Image credit: Unsplash


Customer Experience Failures Are a Gift

GUEST POST from Shep Hyken

When things go wrong for your customer, that’s when you have the best opportunity to prove how good you really are. Anyone can look good when everything is running smoothly, but your true customer service “chops” show up during a service failure.

I recently went to a doctor’s office for an appointment. I arrived early to check in. The nurse at the desk was – no exaggeration – horrified that she had to tell me there was a glitch in the scheduling software, and my appointment had to be rescheduled. While some people might have taken a “that’s too bad … it happens” attitude, she couldn’t have been more apologetic, showing tremendous empathy, and she immediately went to work to find another time for me to return to see the doc.

I was at a restaurant and ordered a sandwich without mayonnaise. (I hate mayonnaise!) Of course, the sandwich came out slathered with mayo. The server spotted the mistake while setting the plate down in front of me. Before it even hit the table, she put it back on her tray. She served the rest of the food to everyone else at the table, and like the nurse who had to reschedule my appointment, she apologized and showed empathy. She immediately went to the kitchen to fix the problem. Several minutes later, I had a perfect sandwich.

Shep Hyken CX Failure cartoon

After both of these experiences, I received email messages asking me to complete a short survey. I gave each of these people and businesses a perfect, five-star rating. It wasn’t that they were flawless. In both cases, mistakes were made. But they each made a flawless recovery. In both situations, they didn’t offer a refund or anything for free. They just fixed the problem – but they did it with style. And when someone cares as much as these ladies did, how could I stay mad at them?

One important point: For this approach to work, problems have to be rare, not frequent, occurrences. No matter how nice employees are or how well they handle issues and complaints, if problems happen regularly, customers won’t trust the company. Excellence in recovery can only overcome occasional failures, not “systematic” ones.

I don’t need to rehash my Five Steps to Handling a Moment of Misery (Complaint), but it’s important to point out that both people handled the problems well. Rescheduling an appointment seems like a bigger issue than remaking a sandwich, but that’s not the point. The point is they both fixed the problem, and the attitude they took while doing so became even more important than the fix.

Both of these stories illustrate how, when you really care, you can win back your customer. A mistake isn’t the end of your relationship with a customer. Handled the right way, it’s an opportunity to build trust and loyalty by showing how good you really are when things don’t go according to plan.

Image credits: Pixabay, Shep Hyken


Bringing Energy Back to Work

GUEST POST from Geoffrey A. Moore

There are all kinds of survey data these days indicating that morale in the workplace is lower than it used to be and, more importantly, than it ought to be. This has got managers scurrying about trying to find ways to make their employees happier. One word of advice on this: Stop!

It is not your job to make the people on your team happy. That is their job. Your job is to make their work important. Now, as a bonus, there is a strong correlation between meaningful work and worker happiness, so there is a two-birds-for-one-stone principle operating here. It’s just that you have to keep your eye on the lead bird. Employee happiness is a trailing indicator. Customer success is the leading one.

Your team’s customers can be internal or external — it just depends on your performance contract, the one that sets out the outcomes your organization has been funded to deliver. To be meaningful, in one way or another, those outcomes must contribute materially to the overall success of your enterprise’s mission. Your job is to highlight that path, to help your team members see it as a North Star to guide the focus and prioritization of their work. That is what gives their work meaning. Their performance metrics should align directly with the outcomes you have contracted to deliver – else why are they doing the work?

Performance management in this context is simply redirecting their energy to align as closely as possible to the deliverables of your organization’s performance contract. The talent you recruit and develop should have the kind of disposition and gifts that motivate them to want to do this kind of work. If there is a mismatch, help them find some other kind of work that is a better fit for them, and backfill their absence with someone who is a better fit for you. Performance management is not about weeding out—it is about re-potting.

Finally, if we bring this mindset to our current challenges with institutionalizing remote/hybrid operating models, too often this is being framed as an issue of improving employee happiness. Again, not your job. Instead, the focus should be on how best to meet the needs of the customers you have elected to serve. That is, instead of designing enterprise-out, with our heads down in our personal and team calendars, we need to design customer-in, with our heads up looking at where the trapped value is in their world, aligning our energies to release that trapped value, and organizing our operating model to maximize our impact in so doing. If we are not in service to our customers, what use are we?

That’s what I think. What do you think?

Image Credit: Pexels


Do What 91% of Executives Will Not

Winning in Times of Uncertainty

GUEST POST from Robyn Bolton

In times of great uncertainty, we seek safety. But what does “safety” look like?

What We Say: Safety = Data

We tend to believe that we are rational beings and, as a result, we rely on data to make decisions.

Great! We’ve got lots of data from lots of uncertain periods. HBR examined 4,700 public companies during three global recessions (1980, 1990, and 2000). They found that the companies that emerged “outperforming rivals in their industry by at least 10% in terms of sales and profits growth” had one thing in common: they aggressively cut costs to improve operational efficiency while ruthlessly investing in marketing, R&D, and new assets to better serve customers. Companies that took this balanced approach had the highest probability of emerging as market leaders post-recession.

This research was backed up in 2020 in a McKinsey study that found that “Organizations that maintained their innovation focus through the 2009 financial crisis, for example, emerged stronger, outperforming the market average by more than 30 percent and continuing to deliver accelerated growth over the subsequent three to five years.”

What We Do: Safety = Hoarding

The reality is that we are human beings and, as a result, make decisions based on how we feel and then use data to justify those decisions.

How else do you explain that, despite the data, only 9% of companies took the balanced approach recommended in the HBR study and, ten years later, only 25% of the companies studied by McKinsey stated that “capturing new growth” was a top priority coming out of the COVID-19 pandemic?

Uncertainty is scary, so, as individuals and as organizations, we scramble to secure scarce resources, cut anything that feels extraneous, and shift our focus to survival.

What now? AND, not OR

What was true in 2010 is still true today, and new research from Bain offers practical advice for how leaders can follow both their hearts and their heads.

Implement systems to protect you from yourself. Bain studied Fast Company’s 50 Most Innovative Companies and found that 79% use two different operating models for innovation to combat executives’ natural risk aversion. The first, for sustaining innovation, uses traditional stage-gate models, seeks input from experts and existing customers, and is evaluated on ROI-driven metrics.

The second, for breakthrough innovations, is designed to embrace and manage uncertainty by learning from new customers and emerging trends, working with speed and agility, engaging non-traditional collaborators, and evaluating projects based on their long-term potential and strategic option value.

Don’t outspend. Out-allocate. Supporting the two-system approach, nearly half of the companies studied spend less on R&D than their peers overall and spend it differently, allocating 39% of their R&D budgets to sustaining innovations and 61% to expanding into new categories or business models.

Use AI to accelerate, not create. Companies integrating AI into innovation processes have seen design-to-launch timelines shrink by 20% or more. The key word there is “integrate,” not outsource. They use AI for data and trend analysis, rapid prototyping, and automating repetitive tasks. But they still rely on humans for original thinking, intuition-based decisions, and genuine customer empathy.

Prioritize humans above all else. Even though all the information in the world is at our fingertips, humans remain unknowable, unpredictable, and wonderfully weird. That’s why successful companies use AI to enhance, not replace, direct engagement with customers. They use synthetic personas as a rehearsal space for brainstorming, designing research, and concept testing. But they also know there is no replacement (yet) for human-to-human interaction, especially when creating new offerings and business models.

In times of great uncertainty, we seek safety. But safety doesn’t guarantee certainty. Nothing does. So the safest thing we can do is learn from the past, prepare (not plan) for the future, make the best decisions possible based on what we know and feel today, and stay open to changing them tomorrow.

Image credit: Pexels


Focus on Shaping Networks Not Opinions

GUEST POST from Greg Satell

Anybody who has ever been married or had kids knows how difficult it can be to convince even a single person. To persuade dozens or hundreds — much less thousands or millions — to change their mind about something important seems like a pipe dream. Yet that doesn’t stop people from spending significant time and energy to do just that.

In fact, there is a massive industry dedicated to shaping opinions. Professionals research attitudes, identify “value propositions,” craft messages and leverage “influencers” in the hopes that they can get people to change their minds. Yet despite the billions of dollars invested each year, evidence of consistent success remains elusive.

The truth is that the best indicator of what people do and think is what the people around them do and think. Instead of trying to shape opinions, we need to shape networks. That’s why we need to focus our efforts on working to craft cultures rather than wordsmithing slogans. To do that, we need to understand the subtle ways we influence each other.

The Influencer Myth

Malcolm Gladwell’s blockbuster book, The Tipping Point, popularized his “Law of the Few,” which he stated as: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” This reenergized earlier ideas about opinion leaders, the supposedly secret people who somehow have outsize influence on others.

Perhaps not surprisingly, the communications industry quickly jumped to promote the idea of secret “influentials” living among us. Clearly, if you’re looking to shape opinions, being able to identify such people would be incredibly valuable and, it goes without saying, firms who could claim an expertise in leveraging those powers could earn outsized fees.

Yet the evidence that these people actually exist is incredibly thin. Even the original opinion leader research found that influence was highly contextual. A more recent study of e-mails found that highly connected people weren’t necessary to produce a viral cascade. Another, based on Twitter, found that they aren’t even sufficient. So-called “influentials” are only slightly more likely to produce viral chains.

Duncan Watts, co-author of both studies and a pioneer in the science of networks, told me, “The Influentials hypothesis is a theory that can be made to fit the facts once they are known, but it has little predictive power. It is at best a convenient fiction; at worst a misleading model. The real world is much more complicated.”

The Framingham Heart Study

While there is little evidence to suggest that there are special people secretly influencing our attitudes and decisions, there is abundant evidence that completely normal people exert influence all the time. We may ask our nephew about what app to download, or a co-worker about where to go for dinner. We all have people in our lives that we go to for advice about particular things.

Decades of scientific research suggests that the best indicator of what we think and do is what the people around us think and do. A famous series of studies performed in the 1950s—replicated countless times since then—found that when confronted with an overwhelming majority opinion, people will conform to the majority even if it is obviously wrong.

More recent research indicates that the effect applies not only to people we know well, but extends even to second- and third-degree relationships. So not only our friends, but the friends of their friends as well—many of whom we may have never met—influence us. This effect applies not only to our opinions, but also to things like smoking and obesity and to behaviors related to cooperation and trust.

The evidence is, in fact, overwhelming. Working to shape opinions is bound to be a fruitless exercise unless we are able to shape the networks in which ideas, attitudes and behaviors form. Fortunately, there are some fairly straightforward ways to do that.

Starting With A Majority

When we’re passionate about an idea, we want it to spread. We want to tell everyone, especially, for psychological reasons which are not quite clear to me, the opposition. There is some strange quirk embedded in human nature that makes us want to try to convince those who are most hostile to the proposition. We want to convince skeptics.

As should be clear by now, that’s a very bad idea. An idea in its early stages is, almost by definition, not fully formed. It hasn’t been tested and doesn’t have a track record. You also lack experience in countering objections. Taking an idea in its infancy into hostile territory almost guarantees failure.

The simple alternative is to start with a majority, even if that majority is only three people in a room of five. You can always expand a majority out, but once you’re in the minority you’re going to get immediate pushback. Go out and find people who are as enthusiastic as you are, who are willing to support your idea, to strengthen it and help troubleshoot.

That’s how you can begin to gain traction and build a sense of shared purpose and mission. As you begin to work out the kinks, you can embark on a keystone project, show some success, build a track record and accumulate social proof. As you gain momentum, you will find that there is no need to chase skeptics. They will start coming to you.

Small Groups, Loosely Connected, But United By A Shared Purpose

The biggest misconception about change is that once people understand it, they will embrace it and so the best way to drive change forward is to explain the need for change in a convincing and persuasive way. Change, in this view, is essentially a communication exercise and the right combination of words and images is all that is required.

Even assuming that it is practical to convince people that way, by the same logic they can just as easily have their minds changed right back by counter-arguments. So even successfully shaping opinions is, at best, a temporary solution. Clearly, if we are going to bring about sustainable change, we need to shape not just opinions, but networks as well.

In my book Cascades, I explained how small groups, loosely connected but united by a shared purpose drive transformational change. It happens gradually, almost imperceptibly, at first. Connections accumulate under the surface, barely noticed, as small groups slowly begin to link together and congeal into a network. Eventually things hit a tipping point.

The good news is that decades of research suggest that tipping point is much smaller than most people think. Everett Rogers’ “S-curve” research estimated it at 10%-20% of a system. Erica Chenoweth’s research calculated the tipping point to be at 3.5% of a society. Damon Centola at the University of Pennsylvania suggests the tipping point to be at 25% of an organization.

I would take each of these numbers with a grain of salt. The salient point here is that nowhere does the evidence suggest we need anything close to 51% support for change to take hold. Our job as leaders is to cultivate networks, help them connect and inspire them with a sense of shared values and shared purpose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay







The Wood-Fired Automobile

WWII’s Forgotten Lesson in Human-Centered Resourcefulness

LAST UPDATED: December 14, 2025 at 5:59 PM

GUEST POST from Art Inteligencia

Innovation is often romanticized as the pursuit of the new — sleek electric vehicles, AI algorithms, and orbital tourism. Yet, the most profound innovation often arises not from unlimited possibility, but from absolute scarcity. The Second World War offers a stark, compelling lesson in this principle: the widespread adoption of the wood-fired automobile, or the gasogene vehicle.

In the 1940s, as global conflict choked off oil supplies, nations across Europe and Asia were suddenly forced to find an alternative to gasoline to keep their civilian and military transport running. The solution was the gas generator (or gasifier), a bulky metal unit often mounted on the rear or side of a vehicle. This unit burned wood, charcoal, or peat, not for heat or steam, but for gas. The process — pyrolysis — converted solid fuel into a combustible mixture of carbon monoxide, hydrogen, and nitrogen known as “producer gas” or “wood gas,” which was then filtered and fed directly into the vehicle’s conventional internal combustion engine. This adaptation was a pure act of Human-Centered Innovation: it preserved mobility and economic function using readily available, local resources, ensuring the continuity of life amidst crisis.
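The gasifier chemistry itself is well established. In simplified form (standard gasification stoichiometry, not specific to any one wartime design), the key reactions are:

```latex
% Partial combustion of the fuel supplies heat and carbon dioxide:
\mathrm{C + O_2 \;\longrightarrow\; CO_2}

% Boudouard reaction: CO2 is reduced over hot charcoal to carbon monoxide
\mathrm{CO_2 + C \;\longrightarrow\; 2\,CO}

% Water-gas reaction: moisture in the wood yields more CO plus hydrogen
\mathrm{C + H_2O \;\longrightarrow\; CO + H_2}
```

The resulting producer gas is heavily diluted with nitrogen drawn in with the intake air, which is one reason it delivers far less energy per intake stroke than gasoline vapor.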

The Scarcity Catalyst: Unlearning the Oil Dependency

Before the war, cars ran on gasoline. When the oil dried up, the world faced a moment of absolute unlearning. Governments and industries could have simply let transportation collapse, but the necessity of maintaining essential services (mail, food distribution, medical transport) forced them to pivot to what they had: wood and ingenuity. This highlights a core innovation insight: the constraints we face today — whether supply chain failures or climate change mandates — are often the greatest catalysts for creative action.

Gasogene cars were slow, cumbersome, and required constant maintenance, yet their sheer existence was a triumph of adaptation. They provided roughly half the power of a petrol engine, requiring drivers to constantly downshift on hills and demanding a long, smoky warm-up period. But they worked. The innovation was not in the vehicle itself, which remained largely the same, but in the fuel delivery system and the corresponding behavioral shift required by the drivers and mechanics.

Case Study 1: Sweden’s Total Mobilization of Wood Gas

Challenge: Maintaining Neutrality and National Mobility Under Blockade

During WWII, neutral Sweden faced a complete cutoff of its oil imports. Without liquid fuel, the nation risked economic paralysis, potentially undermining its neutrality and ability to supply its citizens. The need was immediate and total: convert all essential vehicles.

Innovation Intervention: Standardization and Centralization

Instead of relying on fragmented, local solutions, the Swedish government centralized the gasifier conversion effort. They established the Gasogenkommittén (Gas Generator Committee) to standardize the design, production, and certification of gasifiers (known as gengas). Manufacturers such as Volvo and Scania were tasked not with building new cars, but with mass-producing the conversion kits.

  • By 1945, approximately 73,000 vehicles — nearly 90% of all Swedish vehicles, from buses and trucks to farm tractors and private cars — had been converted to run on wood gas.
  • The government created standardized wood pellet specifications and set up thousands of public wood-gas fueling stations, turning the challenge into a systematic, national enterprise.

The Innovation Impact:

Sweden demonstrated that human resourcefulness can completely circumvent a critical resource constraint at a national scale. The conversion was not an incremental fix; it was a wholesale, government-backed pivot that secured national resilience and mobility using entirely domestic resources. The key was standardized conversion — a centralized effort to manage distributed complexity.

Fischer-Tropsch Process

Case Study 2: German Logistics and the Bio-Diesel Experiment

Challenge: Fueling a Far-Flung Military and Civilian Infrastructure

Germany faced a dual challenge: supplying a massive, highly mechanized military campaign while keeping the domestic civilian economy functional. While military transport relied heavily on synthetic fuel created through the Fischer-Tropsch process, the civilian sector and local military transport units required mass-market alternatives.

Innovation Intervention: Blended Fuels and Infrastructure Adaptation

Beyond wood gas, German innovation focused on blended fuels. A crucial adaptation was the widespread use of methanol, ethanol, and various bio-diesels (esters derived from vegetable oils) to stretch dwindling petroleum reserves. While wood gasifiers were used on stationary engines and some trucks, the government mandated that local transport fill up with methanol-gasoline blends. This forced a massive, distributed shift in fuel pump calibration and engine tuning across occupied Europe.

  • The adaptation required hundreds of thousands of local mechanics, from France to Poland, to quickly unlearn traditional engine maintenance and become experts in the delicate tuning required for lower-energy blended fuels.
  • This placed the burden of innovation not on a central R&D lab, but on the front-line workforce — a pure example of Human-Centered Innovation at the operational level.

The Innovation Impact:

This case highlights how resource constraints force innovation across the entire value chain. Germany’s transport system survived its oil blockade not just through wood gasifiers, but through a constant, low-grade innovation treadmill of fuel substitution, blending, and local adaptation that enabled maximum optionality under duress. The lesson is that resilience comes from flexibility and decentralization.

Conclusion: The Gasogene Mindset for the Modern Era

The wood-fired car is not a relic of the past; it is a powerful metaphor for the challenges we face today. We are currently facing the scarcity of time, carbon space, and public trust. We are entirely reliant on systems that, while efficient in normal times, are dangerously fragile under stress. The shift to sustainability, the move away from centralized energy grids, and the adoption of closed-loop systems all require the Gasogene Mindset — the ability to pivot rapidly to local, available resources and fundamentally rethink the consumption model.

Modern innovators must ask: If our critical resource suddenly disappeared, what would we use instead? The answer should drive our R&D spending today. The history of the gasogene vehicle proves that scarcity is the mother of ingenuity, and the greatest innovations often solve the problem of survival first. We must learn to innovate under constraint, not just in comfort.

“The wood-fired car teaches us that every constraint is a hidden resource, if you are creative enough to extract it.” — Braden Kelley

Frequently Asked Questions About Wood Gas Vehicles

1. How does a wood gas vehicle actually work?

The vehicle uses a gasifier that burns wood or charcoal in a low-oxygen environment, a process called gasification, which combines partial combustion with pyrolysis of the fuel. This creates a gas mixture (producer gas) which is then cooled, filtered, and fed directly into the vehicle’s standard internal combustion engine to power it, replacing gasoline.

2. How did the performance of a wood gas vehicle compare to gasoline?

Gasogene cars provided significantly reduced performance, typically delivering only 50-60% of the power of the original gasoline engine. They were slower, had lower top speeds, required frequent refueling with wood, and needed a 15-30 minute warm-up period to start producing usable gas.

3. Why aren’t these systems used today, given their sustainability?

The system is still used in specific industrial and remote applications (power generation), but not widely in transportation because of the convenience and energy density of liquid fuels. Wood gasifiers are large, heavy, require constant manual fueling and maintenance (clinker removal), and produce a low-energy gas that limits speed and range, making them commercially unviable against modern infrastructure.

Your first step toward a Gasogene Mindset: Identify one key external resource your business or team relies on (e.g., a software license, a single supplier, or a non-renewable material). Now, design a three-step innovation plan for a world where that resource suddenly disappears. That plan is your resilience strategy.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Should Owners or Employees Come First?

Should Owners or Employees Come First?

GUEST POST from Stefan Lindegaard

“It’s crucial that we succeed in securing a competitive return for our owners and meet the expectations of consumers and society. But the foundation for all of this is creating a workplace and a culture that attracts the best talent.”

– Niels Duedahl, CEO at Danish Crown

Yes, it’s always a balance.

But it’s telling how Niels Duedahl sees people and culture as the true foundation.

I couldn’t agree more.

If we don’t get the workplace right, nothing else will follow.

What about you – how do you see it?

Image Credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Bio-Computing & DNA Data Storage

The Human-Centered Future of Information

LAST UPDATED: December 12, 2025 at 5:47 PM

Bio-Computing & DNA Data Storage

GUEST POST from Art Inteligencia

We are drowning in data. The digital universe is doubling roughly every two years, and our current infrastructure — reliant on vast, air-conditioned server farms — is neither environmentally nor economically sustainable. This is where the most profound innovation of the 21st century steps in: DNA Data Storage. Rather than using the binary zeroes and ones of silicon, we leverage the four-base code of life — Adenine (A), Cytosine (C), Guanine (G), and Thymine (T) — to encode information. This transition is not merely an improvement; it is a fundamental shift that aligns our technology with the principles of Human-Centered Innovation by prioritizing sustainability, longevity, and density.

The scale of this innovation is staggering. DNA is the most efficient information storage system known. Theoretically, all the world’s data could be stored in a volume smaller than a cubic meter. This level of density, combined with the extreme longevity of DNA (which can last for thousands of years when properly preserved), solves the two biggest crises facing modern data: decay and footprint. We must unlearn the limitation of physical space and embrace biology as the ultimate hard drive. Bio-computing, the application of molecular reactions to perform complex calculations, is the natural, faster counterpart to this massive storage potential.

The Three Pillars of the Bio-Data Revolution

The convergence of biology and information technology is built on three revolutionary pillars:

1. Unprecedented Data Density

A single gram of DNA can theoretically store over 215 petabytes (215 million gigabytes) of data. Storing that much information on conventional hard drives would require acres of data-center space, so DNA offers an exponential reduction in physical footprint. This isn’t just about saving space; it’s about decentralizing data storage and dramatically reducing the need for enormous, vulnerable, power-hungry data centers. This density makes truly long-term archival practical for the first time.
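As a back-of-envelope check on that density claim, DNA's raw information capacity can be estimated from molar mass alone. The figures below are rough assumptions (about 327 g/mol per single-stranded nucleotide, 2 bits per base); practical estimates such as 215 petabytes per gram are far lower than this theoretical ceiling because real encoding schemes add error-correction redundancy and avoid sequences that are hard to synthesize.

```python
AVOGADRO = 6.022e23      # molecules per mole
NT_MOLAR_MASS = 327.0    # approx. g/mol for one ssDNA nucleotide (assumption)
BITS_PER_NT = 2          # 4 bases, so each base carries 2 bits

nucleotides_per_gram = AVOGADRO / NT_MOLAR_MASS
raw_bits = nucleotides_per_gram * BITS_PER_NT
raw_exabytes = raw_bits / 8 / 1e18

print(f"~{nucleotides_per_gram:.2e} nucleotides per gram")
print(f"~{raw_exabytes:.0f} EB of raw capacity per gram")
```

The raw ceiling works out to hundreds of exabytes per gram, which is why even the heavily discounted practical figure of 215 petabytes per gram remains orders of magnitude beyond magnetic media.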

2. Extreme Data Longevity

Silicon-based media, such as hard drives and magnetic tape, are ephemeral. They require constant maintenance, migration, and power to prevent data loss, with a shelf life often measured in decades. DNA, in contrast, has proven its stability over millennia. By encapsulating synthetic DNA in glass or mineral environments, the stored data becomes essentially immortal, eliminating the costly and energy-intensive practice of data migration every few years. This shifts the focus from managing hardware to managing the biological encapsulation process.

3. Low Energy Footprint

Traditional data centers consume vast amounts of electricity, both for operation and, critically, for cooling. The cost and carbon footprint of this consumption are rapidly becoming untenable. DNA data storage requires energy primarily during the initial encoding (synthesis) and subsequent decoding (sequencing) stages. Once stored, the data is inert, requiring zero power for preservation. This radical reduction in operational energy makes DNA storage an essential strategy for any organization serious about sustainable innovation and ESG goals.

Leading the Charge: Companies and Startups

This nascent but rapidly accelerating industry is attracting major players and specialized startups. Large technology companies like Microsoft and IBM are deeply invested, often in partnership with specialized biotech firms, to validate the technology and define the industrial standard for synthesis and sequencing. Microsoft, in collaboration with the University of Washington, was among the first to successfully encode and retrieve large files, including the entire text of the Universal Declaration of Human Rights. Meanwhile, startups are focusing on making the process more efficient and commercially viable. Twist Bioscience has become a leader in DNA synthesis, providing the tools necessary to write the data. Other emerging companies like Catalog are working on miniaturizing and automating the DNA storage process, moving the technology from a lab curiosity to a scalable, automated service. These players are establishing the critical infrastructure for the bio-data ecosystem.

Case Study 1: Archiving Global Scientific Data

Challenge: Preserving the Integrity of Long-Term Climate and Astronomical Records

A major research institution (“GeoSphere”) faced the challenge of preserving petabytes of climate, seismic, and astronomical data. This data needs to be kept for over 100 years, but the constant migration required by magnetic tape and hard drives introduced a high risk of data degradation, corruption, and enormous archival cost.

Bio-Data Intervention: DNA Encapsulation

GeoSphere partnered with a biotech firm to conduct a pilot program, encoding its most critical reference datasets into synthetic DNA. The data was converted into A, T, C, G sequences and chemically synthesized. The resulting DNA molecules were then encapsulated in silica beads for long-term storage.

  • The physical volume required to store the petabytes of data was reduced from a warehouse full of tapes to a container the size of a shoebox.
  • The data was found to be chemically stable with a projected longevity of over 1,000 years without any power or maintenance.

The Innovation Impact:

The shift to DNA storage solved GeoSphere’s long-term sustainability and data integrity crisis. It demonstrated that DNA is the perfect medium for “cold” archival data — vast amounts of information that must be kept secure but are infrequently accessed. This validated the role of DNA as a non-electronic, permanent archival solution.

Case Study 2: Bio-Computing for Drug Discovery

Challenge: Accelerating Complex Molecular Simulations in Pharmaceutical R&D

A pharmaceutical company (“BioPharmX”) was struggling with the computational complexity of molecular docking — simulating how millions of potential drug compounds interact with a target protein. Traditional silicon supercomputers required enormous time and electricity to run these optimization problems.

Bio-Data Intervention: Molecular Computing

BioPharmX explored bio-computing (or molecular computing) using DNA strands and enzymes. By setting up the potential drug compounds as sequences of DNA and allowing them to react with a synthesized protein target (also modeled in DNA), the calculation was performed not by electrons, but by molecular collision and selection.

  • Each possible interaction became a physical, parallel chemical reaction taking place simultaneously in the solution.
  • This approach mirrors Leonard Adleman’s landmark 1994 DNA-computing experiment, which solved a small Hamiltonian path problem (a close relative of the Traveling Salesman Problem); the massive parallelism of molecular interactions lets every candidate solution be tested simultaneously rather than one at a time.
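To make the parallelism concrete, here is a sequential analogue of that molecular search, written as a minimal sketch. The graph, node names, and edge set are purely illustrative. In a DNA computer, every candidate path below would exist as a physical strand formed by hybridization and be tested at once; silicon must enumerate them one by one.

```python
from itertools import permutations

# Toy directed graph: edges a molecular computer would encode as DNA "linkers".
edges = {("s", "a"), ("a", "b"), ("b", "c"), ("c", "t"), ("a", "c"), ("b", "t")}
nodes = ["s", "a", "b", "c", "t"]

def hamiltonian_paths(start, end):
    """Enumerate start-to-end paths visiting every node exactly once."""
    inner = [n for n in nodes if n not in (start, end)]
    for mid in permutations(inner):
        path = (start, *mid, end)
        # Keep the path only if every consecutive hop is a real edge.
        if all((u, v) in edges for u, v in zip(path, path[1:])):
            yield path

print(list(hamiltonian_paths("s", "t")))  # the surviving candidate path(s)
```

The filtering step (discarding paths with a missing edge) corresponds to the wet-lab separation steps Adleman used; the crucial difference is that chemistry performs the generate-and-test over all permutations in parallel.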

The Innovation Impact:

Bio-computing proved to be a highly efficient, parallel processing method for solving specific, combinatorial problems related to drug design. This allowed BioPharmX to filter billions of potential compounds down to the most viable candidates in a fraction of the time, dramatically accelerating their R&D pipeline and showcasing the power of biological systems as processors.

Conclusion: The Convergence of Life and Logic

The adoption of DNA data storage and the development of bio-computing mark a pivotal moment in the history of information technology. It is a true embodiment of Human-Centered Innovation, pushing us toward a future where our most precious data is stored sustainably, securely, and with a life span that mirrors humanity’s own. For organizations, the question is not whether to adopt bio-data solutions, but when and how to begin building the competencies necessary to leverage this biological infrastructure. The future of innovation is deeply intertwined with the science of life itself. The next great hard drive is already inside you.

“If your data has to last forever, it must be stored in the medium that was designed to do just that.”

Frequently Asked Questions About Bio-Computing and DNA Data Storage

1. How is data “written” onto DNA?

Data is written onto DNA using DNA synthesis machines, which chemically assemble the custom sequence of the four nucleotide bases (A, T, C, G) according to a computer algorithm that converts binary code (0s and 1s) into the base-four code of DNA.

2. How is the data “read” from DNA?

Data is read from DNA using standard DNA sequencing technologies. This process determines the exact sequence of the A, T, C, and G bases, and a reverse computer algorithm then converts this base-four sequence back into the original binary code for digital use.
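The binary-to-base-four mapping described in these two answers can be sketched in a few lines. This is a minimal illustration of the coding step only, with an assumed two-bits-per-base mapping; real systems layer error-correcting codes on top and avoid homopolymer runs (e.g., GGGG) that complicate synthesis and sequencing.

```python
# Map each 2-bit pair to one nucleotide, and build the reverse table for reading.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def to_dna(data: bytes) -> str:
    """'Write': convert bytes to a base-four A/C/G/T sequence."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(seq: str) -> bytes:
    """'Read': convert a sequenced A/C/G/T string back to the original bytes."""
    bits = "".join(DECODE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = to_dna(b"DNA")
print(strand)                      # → CACACATGCAAC
assert from_dna(strand) == b"DNA"  # lossless round trip
```

Synthesis corresponds to `to_dna` (the expensive step today) and sequencing to `from_dna`; the round-trip assertion is the digital equivalent of verifying a retrieved archive.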

3. What is the current main barrier to widespread commercial adoption?

The primary barrier is the cost and speed of the writing (synthesis) process. While storage density and longevity are superior, the expense and time required to synthesize vast amounts of custom DNA currently make it viable only for “cold” archival data that is accessed very rarely, rather than for “hot” data used daily.

Your first step into bio-data thinking: Identify one dataset in your organization — perhaps legacy R&D archives or long-term regulatory compliance records — that has to be stored for 50 years or more. Calculate the total cost of power, space, and periodic data migration for that dataset over that time frame. This exercise will powerfully illustrate the human-centered, sustainable value proposition of DNA data storage.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Do You Have the Courage to Speak Up Against Conformity?

Do You Have the Courage to Speak Up Against Conformity?

GUEST POST from Mike Shipulski

If you see things differently than others, congratulations. You’re thinking for yourself.

If you find yourself pressured into thinking like everyone else, that’s a sign your opinion threatens them. It’s too powerful to be dismissed out-of-hand, and that’s why they want to shut you up.

If the status quo is angered by your theory, you’re likely onto something. Stick to your guns.

If your boss doesn’t want to hear your contrarian opinion, that’s because it cannot be easily dismissed. That’s reason enough to say it.

If you disagree in a meeting and your sentiment is actively dismissed, dismiss the dismisser. And say it again.

If you’re an active member of the project and you are not invited to the meeting, take it as a compliment. Your opinion is too powerful to defend against. The only way for the group-think to survive is to keep you away from it. Well done.

If your opinion is actively and repeatedly ignored, it’s too powerful to be acknowledged. Send a note to someone higher up in the organization. And if that doesn’t work, send it up a level higher still. Don’t back down.

If you look into the future and see a train wreck, set up a meeting with the conductor and tell them what you see.

When you see things differently, others will try to silence you and tell you you’re wrong. Don’t believe them. The world needs people like you who see things as they are and have the courage to speak the truth as they see it.

Thank you for your courage.

Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.