Category Archives: Innovation

Four Pillars of Innovation

People, Learning, Judgment and Trust


GUEST POST from Mike Shipulski

Innovation is a hot topic. Everyone wants to do it. And everyone wants a simple process that works step-wise – first this, then that, then success.

But Innovation isn’t like that. I think it’s more effective to think of innovation as a result: something that emerges from a group of people who are trying to make a difference. In that way, Innovation is a people process. And like all processes that depend on people, the Innovation process is fluid, dynamic, complex, and context-specific.

Innovation isn’t sequential, it’s not linear, and it cannot be scripted. There is no best way to do it, no best tool, no best training, and no best outcome. There is no way to predict where the process will take you. The only predictable thing is that you’re better off doing it than not.

The key to Innovation is good judgment. And the key to good judgment is bad judgment. You’ve got to get things wrong before you know how to get them right. In the end, innovation comes down to maximizing the learning rate. And the teams with the highest learning rates are the teams that try the most things and use good judgment to decide what to try.

I used to take offense at the idea that trying the most things is the most effective way. But now, I believe it is. That is not to say it’s best to try everything. It’s best to try the most things that are coherent with the situation as it is, the market conditions as they are, the competitive landscape as we know it, and the facts as we know them.

And there are ways to try things that are more effective than others. Think small, focused experiments driven by a formal learning objective and supported by repeatable measurement systems and formalized decision criteria. The best teams define and implement the tightest, smallest experiment to learn what needs to be learned. With no excess resources and no wasted time, the team runs a tight experiment, measures the feedback, and takes immediate action based on the experimental results.

In short, the team that runs the most effective experiments learns the most, and the team that learns the most wins.

It all comes down to choosing what to learn. Or, another way to look at it is choosing the right problems to solve. If you solve new problems, you’ll learn new things. And if you have the sightedness to choose the right problems, you learn the right new things.

Sightedness is a difficult thing to define and a more difficult thing to hone and improve. If you were charged with creating a new business in a new commercial space and the survival of the company depended on the success of the project, who would you want to choose the things to try? That person has sightedness.

Innovation is about people, learning, judgment and trust.

And innovation is more about why than how and more about who than what.

HALLOWEEN BONUS: Save 30% on the eBook, hardcover or softcover of Braden Kelley’s latest book Charting Change (now in its second edition) — FREE SHIPPING WORLDWIDE — using code HAL30 until midnight October 31, 2025

Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Reduce Innovation Risk with this Nobel Prize Winning Formula


GUEST POST from Robyn Bolton

As a kid, you’re taught that when you’re lost, stay put and wait for rescue. Most executives are following that advice right now—sitting tight amid uncertainty, hoping someone saves them from having to make hard choices and take innovation risk.

This year’s Nobel Prize winners in Economics have bad news: there is no rescue coming. Joel Mokyr, Philippe Aghion, and Peter Howitt demonstrated that disruption happens whether you participate or not. Freezing innovation investments doesn’t reduce innovation risk. It guarantees competitors destroy you while you stand still.

They also have good news: innovation follows predictable patterns based on competitive dynamics, offering a framework for making smarter investment decisions.

How We Turned Stagnation into a System for Growth

For 99.9% of human history, economic growth was essentially zero. There were occasional bursts of innovation, like the printing press, windmills, and mechanical clocks, but growth always stopped.

200 years ago, that changed. Mokyr identified that the Industrial Revolution created systems connecting two types of knowledge: Propositional knowledge (understanding why things work) and Prescriptive knowledge (practical instructions for how to execute).

Before the Industrial Revolution, these existed separately. Philosophers theorized. Artisans tinkered. Neither could build on the other’s work. But the Enlightenment created feedback loops between theory and practice, allowing countries like Britain to thrive because they had people who could translate theory into commercial products.

Innovation became a system, not an accident.

Why We Need Creative Destruction

Every year in the US, 10% of companies go out of business and nearly as many are created. This phenomenon of creative destruction, where companies and jobs constantly disappear and are replaced, was identified in 1942. Fifty years later, Aghion and Howitt built a mathematical model proving it’s required for growth.

Their research also lays bare some hard truths:

  1. Creative destruction is constant and unavoidable. Cutting your innovation budget does not pause the game. It forfeits your position. Competitors are investing in R&D right now and their innovations will disrupt yours whether you participate or not.
  2. Competitive position predicts innovation investments. Neck-and-neck competitors invest heavily in innovation because it’s their only path to the top. Market leaders cut back and coast while laggards don’t have the funds to catch up. Both under-invest and lose.
  3. Innovation creates winners and losers. Creative destruction leads to job destruction as work shifts from old products and skills to new ones. You can’t innovate and protect every job, but you can (and should) help the people affected.

Ultimately, creative destruction drives sustained growth. It is painful and scary, but without it, economies and society stagnate. Ignore it at your peril. Work with it and prosper.

From Prize-winning to Revenue-generating

Even though you’re not collecting the one million Euro prize, these insights can still boost your bottom line if you:

  • Connect your Why teams with your How teams. Too often, Why teams like Strategy, Innovation, and R&D chuck the ball over the wall to the How teams in Operations, Sales, Supply Chain, and front-line operations. Instead, connect them early and often to ensure the feedback loop that drives growth.
  • Check your R&D and innovation investments. Are your R&D and innovation investments consistent with your strategic priorities or your competitive position? What are your investments communicating to your competitors? It’s likely that “conserving cash” is actually coasting and ceding share.
  • Invest in your people and be honest with them. Your employees aren’t dumb. They know that new technologies are going to change and eliminate jobs. Pretending that won’t happen destroys trust and creates resistance that kills innovation. Tell employees the truth early, then support them generously through transitions.

What’s Your Choice?

Playing it safe guarantees the historical default: stagnation. The 2025 Nobel Prize winners proved sustained growth requires building innovation systems and embracing creative destruction.

The only question is whether you will participate or stagnate.


Image credit: Wikimedia Commons


Bridging the Gap Between Strategic Ambition and Innovation Delivery

Why Long-Range Planning and Product Development Rarely Align — And What Companies Can Do About It


GUEST POST from Noel Sobelman

Across industries, executive teams craft long-range plans (LRPs) with confident projections for revenue growth, market expansion, and innovation impact. But when it comes time to deliver, product development pipelines often tell a different story. This misalignment, between the top-down assumptions embedded in strategic plans and the bottom-up reality of new product development (NPD), is one of the most persistent and under-addressed risks in corporate planning.

The consequences are serious: growth targets are missed, credibility erodes, and shareholder confidence wanes. And yet, many organizations continue to treat this disconnect as inevitable, rather than solvable.

The Illusion of Alignment

On paper, LRPs typically assign a portion of future revenue to innovation — new products, new markets, new business models. This makes sense. In competitive, fast-moving sectors, sustaining growth depends on a constant stream of successful launches.

But few companies take the next step: validating whether their actual innovation pipeline supports those ambitions. The top-down LRP rarely connects meaningfully with the bottom-up details of project timelines, product margins, development risks, or resource constraints.

Leadership may assume, for instance, that new product contributions will ramp up in years three through five of the plan. Yet the NPD pipeline might only be populated with early-phase projects, with no clear line of sight to commercialization in that time frame. Or worse, it might be filled with low-upside sustaining efforts that do little to drive long-term growth.

This isn’t just a data problem — it’s an accountability problem.

A Blind Spot in Strategic Execution

Unlike sales or operations, which are frequently forced to reconcile their contributions to the LRP through tangible metrics and quarterly reviews, product development is often allowed to operate in a parallel universe. Project business cases get approved on a rolling basis, disconnected from aggregate targets. Teams work diligently, but no one steps back to ask: Do the numbers add up?

In many organizations, this analysis is simply never done. When questioned about how the pipeline contributes to the LRP, the answers range from vague optimism (“We’ll figure it out”) to manual workarounds (“We added 5% to last year’s numbers to cover new product upside”).

Such informal planning approaches might have been acceptable in a slower, less competitive world. But in today’s environment, where innovation cycles are compressed, capital is scrutinized, and every function is expected to deliver ROI, they fall short.

Interestingly, other parts of the business, particularly operations, already have a model for how to approach this. Manufacturing teams routinely perform network strategy exercises to determine whether they have the physical capacity to meet future demand. They map projected sales to factory utilization, labor capacity, CapEx, and throughput. If there’s a gap, they create an actionable plan.

Yet in most organizations, this rigor stops at the walls of the plant. There is no equivalent exercise on the R&D side to ask: Do we have the innovation pipeline, product plans, and resources required to meet our revenue commitments? Working with our clients, we’ve seen how powerful it is when this same network strategy logic is applied to product development. The exercise shifts the conversation from hope to confidence, from general intent to measurable plans.

The Case for a Unified Growth Strategy

The path forward requires a more integrated, data-driven approach: a growth strategy that spans both the strategic and executional layers of the business.

At the core is a disciplined feedback loop: reconciling the LRP’s innovation-driven revenue expectations with the actual new product roadmap, resource plan, and market assumptions. This means:

  • Bottom-up modeling of product-level forecasts (volumes, ASPs, margins, launch dates) that aggregate to a portfolio view of expected revenue. Our benchmarks show that without this discipline, overstatements of new product contributions can widen to 20–40% or more in the outer years of the LRP. Modeling helps identify these gaps early, enabling timely course corrections.
  • Scenario analysis that tests different mixes of existing and in-development products to identify gaps and prioritize high-leverage opportunities.
  • Risk adjustment grounded in performance benchmarks and realistic probabilities of technical and commercial success, not wishful thinking. Companies that formalize these assumptions often uncover significant overstatements in expected revenue from early-stage projects.
  • Cross-functional transparency between R&D, finance, operations, and commercial teams to ensure the entire organization is planning from a shared reality.
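The bottom-up, risk-adjusted aggregation described in the bullets above can be sketched in a few lines. This is a minimal illustrative model, not a client tool; every product, probability, and dollar figure below is hypothetical.

```python
# Hypothetical bottom-up portfolio model: aggregate product-level
# forecasts into a risk-adjusted view of expected new-product revenue,
# then compare it against the LRP's top-down assumption.
from dataclasses import dataclass

@dataclass
class ProductForecast:
    name: str
    volume: int          # projected year-N unit volume
    asp: float           # average selling price
    p_technical: float   # probability of technical success
    p_commercial: float  # probability of commercial success

    def risk_adjusted_revenue(self) -> float:
        # Expected revenue discounted by the chance the project succeeds
        return self.volume * self.asp * self.p_technical * self.p_commercial

pipeline = [
    ProductForecast("Product A", 100_000, 45.0, 0.9, 0.7),
    ProductForecast("Product B", 40_000, 180.0, 0.5, 0.5),
    ProductForecast("Product C", 250_000, 12.0, 0.8, 0.6),
]

expected = sum(p.risk_adjusted_revenue() for p in pipeline)
lrp_target = 7_500_000  # hypothetical LRP new-product revenue assumption

print(f"Risk-adjusted pipeline revenue: ${expected:,.0f}")
print(f"Gap vs. LRP target: ${lrp_target - expected:,.0f}")
```

Even this toy version makes the point: unadjusted forecasts sum to far more than the risk-adjusted total, and the gap against the LRP target becomes an explicit number leadership can act on.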

Working with our clients, we’ve helped build models that mirror this approach, combining innovation pipeline data, financial assumptions, and market insights into a unified view of expected contribution to growth. The result? Greater visibility into how future revenue will be earned and higher confidence in investment decisions. For some organizations, this alignment has helped redirect 10–15% of R&D spend toward higher-value opportunities without increasing total investment.

In nearly every case, the analysis reveals significant gaps between what leadership believes the innovation engine will deliver and what’s realistically in flight. But once exposed, those gaps become manageable. They become actionable.

This isn’t about punishing innovation teams for uncertainty. It’s about giving them, and the organization, an honest view of what’s likely to be delivered and where targeted adjustments are needed.

Building the Capability (Not Just the Model)

Organizations that do this well don’t just build a single model — they build the capability. They embed portfolio management processes that continually evaluate whether innovation plans are aligned with strategic goals. They invest in tools and talent that can translate project business cases into forward-looking financial impact. And critically, they elevate the conversation from “project selection” to “portfolio impact.”

This approach can also shift the internal conversation away from politics and gut feel, and toward clarity and confidence. CFOs, for example, are increasingly demanding to know what they’re getting for the annual increases in R&D spend. A connected, data-rich view of how new products drive future cash flows goes a long way in strengthening that case. We’ve seen how quickly these conversations mature when companies adopt a planning discipline that brings product development onto the same strategic playing field as operations and sales.

The Strategic Imperative

Ultimately, reconciling innovation with the LRP isn’t a nice-to-have. It’s a fiduciary responsibility. Companies make commitments to their boards and investors based on the assumption that R&D investment will deliver a meaningful share of future growth. When that assumption is built on loosely connected plans and unvalidated forecasts, the entire strategy is at risk.

Bridging that gap can unlock substantial value. In our experience, we see organizations with tightly aligned portfolio and strategy processes outperform their peers by as much as 40% in terms of new product ROI and time-to-market.

The good news? The gap is measurable. The tools, models, and methods to close it exist. What’s often missing is the mandate.

Organizations that seize this opportunity will be better equipped to make confident trade-offs, accelerate high-potential initiatives, and pivot early when plans drift off course. They’ll be able to tell a coherent story, not just about where they want to go, but how they plan to get there.

And that story, told with numbers and backed by action, is what distinguishes companies that plan for growth from those that actually deliver it.

If you’re interested in exploring how to better align your product development plans with long-range strategic goals or want to assess the credibility of your innovation pipeline, we’d be happy to share what we’ve learned from working with companies in similar situations.


Image credits: Pexels







Picking Innovation Projects in Four Questions or Less


GUEST POST from Mike Shipulski

It’s a challenge to prioritize and choose innovation projects. There are open questions on the technology, the product/service, the customer, the price and sales volume. Other than that, things are pretty well defined.

But with all that, you’ve still got to choose. Here are four questions that may help in your selection process:

1. Is it big enough?

The project will be long, expensive and difficult. And if the potential increase in sales is not big enough, the project is not worth starting. Think (Price – Cost) x Volume. Define a minimum viable increase in sales and bound it in time. For example, the minimum incremental sales figure is twenty-five million dollars after five years in the market. If the project does not have the potential to meet that criterion, don’t do the project. The difficult question – How do you estimate the incremental sales five years after launch? The difficult answer – Use your best judgment to estimate sales based on market size and review your assumptions and predictions with seasoned people you trust.
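The (Price – Cost) x Volume screen is simple enough to put in a few lines of code. A minimal sketch, with entirely hypothetical prices, costs, and volumes:

```python
# "Is it big enough?" screen: (Price - Cost) x Volume against a
# minimum viable threshold. All numbers are hypothetical.

def incremental_sales(price: float, unit_cost: float, volume: int) -> float:
    """Incremental contribution from the new offering: (Price - Cost) x Volume."""
    return (price - unit_cost) * volume

MINIMUM_VIABLE = 25_000_000  # e.g., $25M of incremental sales after five years

# Hypothetical year-five estimate for a candidate project
estimate = incremental_sales(price=120.0, unit_cost=70.0, volume=600_000)

print(f"Year-five estimate: ${estimate:,.0f}")
print("Big enough" if estimate >= MINIMUM_VIABLE else "Don't start the project")
```

The hard part, as the text says, is not the arithmetic but the inputs: the price, cost, and volume estimates are judgment calls to be sanity-checked with seasoned people you trust.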

2. Why you?

High growth markets/applications are attractive to everyone, including the big players and the well-funded start-ups. How does your company have an advantage over these tough competitors? What about your company sets you apart? Why will customers buy from you? If you don’t have good answers, don’t start the project. Instead, hold the work hostage and take the time to come up with good answers. If you come up with good answers, try to answer the next questions. If you don’t, choose another project.

3. How is it different?

If the new technology can’t distinguish itself over existing alternatives, you don’t have a project worth starting. So, how is your new offering (the one you’re thinking about creating) better than the ones that can be purchased today? What’s the new value to the customer? Or, in the lingo of the day, what is the Distinctive Value Proposition (DVP)? If there’s no DVP, there’s no project. If you’re not sure of the DVP, figure that out before investing in the project. If you have a DVP but aren’t sure it’s good enough, figure out how to test the DVP before bringing the DVP to life.

4. Is it possible?

Usually, this is where everyone starts. But I’ve listed it last, and it seems backward. Would you rather spend a year making it work only to learn no one wants it, or would you rather spend a month learning the market wants it and then a year making it work? If you make it work and no one wants it, you’ve wasted a year. If, before you make it work, you learn no one wants it, you’ve spent a month learning the right thing and you haven’t spent a year working on the wrong thing. It feels unnatural to define the market need before making it work, but doing so keeps resources from being spent on the wrong projects.

Conclusion

There is no foolproof way to choose the best innovation projects, but these four questions go a long way. Create a one-page template with four sections to ask the questions and capture the answers. The sections without answers define the next work. Define the learning objectives and the learning activities and do the learning. Fill in the missing answers and you’re ready to compare one project to another.

Sort the projects large-to-small by Is it big enough? Then, rank the top three by Why you? and How is it different? Then, for the highest ranked project, do the work to answer Is it possible?

If it’s possible, commercialize. If it’s not, re-sort the remaining projects by Is it big enough? Why you? and How is it different? and learn if the next one is possible.
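The sort-rank-test-recurse loop above is mechanical enough to sketch in code. The projects, scores, and field names below are hypothetical, and the `possible` flag stands in for the feasibility work the article says you must actually go do.

```python
# Hypothetical sketch of the four-question selection loop: sort by size,
# rank the top candidates on "Why you?" + "How is it different?", then
# test feasibility of the leader and recurse if it fails.
projects = [
    {"name": "P1", "size": 40_000_000, "why_you": 3, "different": 2, "possible": False},
    {"name": "P2", "size": 30_000_000, "why_you": 5, "different": 4, "possible": True},
    {"name": "P3", "size": 55_000_000, "why_you": 2, "different": 3, "possible": False},
]

def pick(candidates):
    # Sort large-to-small by "Is it big enough?"
    remaining = sorted(candidates, key=lambda p: p["size"], reverse=True)
    while remaining:
        # Rank the top three by "Why you?" and "How is it different?"
        top = sorted(remaining[:3],
                     key=lambda p: p["why_you"] + p["different"],
                     reverse=True)
        leader = top[0]
        if leader["possible"]:       # "Is it possible?" -> commercialize
            return leader["name"]
        remaining.remove(leader)     # re-sort the rest and try again
    return None

print(pick(projects))
```

In practice the `possible` answer comes from running the experiment, not from a stored flag; the sketch only captures the ordering logic of the one-page template.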


Image credit: Pexels







Are You Getting Your Fair Share of $860 Billion?


GUEST POST from Shep Hyken

According to Qualtrics, there is an estimated $860 billion worth of revenue and cost savings available for companies that figure out how to create an improved Customer Experience (CX) using AI to better understand and serve their customers. (That includes $420 billion for B2B and $440 billion for B2C.) Qualtrics recently released these figures in a report/eBook titled Unlock the Potential through AI-Enabled CX.

I had a chance to interview Isabelle Zdatny, head of thought leadership at Qualtrics Experience Management Institute, for Amazing Business Radio. She shared insights from the report, including ways in which AI is reshaping how organizations measure, understand and improve their relationships with customers. These ideas are what will help you get more customers, keep existing customers and improve your processes, giving you a share of the $860 billion that is up for grabs. Here are some of the top takeaways from our interview.

AI-Enabled CX Represents a Financial Opportunity

The way AI is used in customer experience is much more than just a way to deflect customers’ questions and complaints to an AI-fueled chatbot or other self-service solution. Qualtrics’ report findings show that the value comes through increased employee productivity, process improvement and revenue growth. Zdatny notes a gap between leadership’s recognition of AI’s potential and their readiness to lead and make a change. Early adopters will likely capture “compounding advantages,” as every customer interaction makes their systems smarter and their advantage more difficult for competitors to overcome. My response to this is that if you aren’t on board with AI for the many opportunities it creates, you’re not only going to be playing catch-up with your competitors, but also chasing the market share you’re losing.

Customers Want Convenience

While overall CX quality is improving, thanks to innovation, today’s customers have less tolerance for friction and mistakes. A single bad experience can cause customers to defect. My customer experience research says an average customer will give you two chances. Zdatny says, “Customers are less tolerant of friction these days. … Deliver one bad experience, and that sends the relationship down a bad path more quickly than it used to.”

AI Takes Us Beyond Surveys

Customer satisfaction surveys can frustrate customers. AI collects the data from interactions between customers and the company and analyzes it using natural language processing and sentiment analysis. It can predict churn and tension. It analyzes customer behavior, and while it doesn’t look at a specific customer (although it can), it is able to spot trends in problems, opportunities and more. The company that uses this information the right way can reap huge financial rewards by creating a better customer experience.

Agentic AI

Agentic AI takes customer interactions to a new level. As a customer interacts with AI-fueled self-service support, the system can do more than give customers information and analyze the interaction. It can also take appropriate action. This is a huge opportunity to make it easier on the workforce as AI processes action items that employees might otherwise handle manually. Think about the dollars saved (part of the $860 billion) by having AI support part of the process so people don’t have to.

Customer Loyalty is at Risk

To wrap this up, Zdatny and I talked about the concept of customer loyalty and how vulnerable companies are to losing their most loyal customers. According to Zdatny, a key reason is the number of options available to consumers. (While there may be fewer options in the B2B world, the concern should still be the same.) Switching brands is easy, and customers are more finicky than ever. Our CX research finds that typical customers give you a second chance before they switch. A loyal customer will give you a third chance — but to put it in baseball terms, “Three strikes and you’re out!” Manage the experience right the first time, and keep in mind that whatever interaction you’re having at that moment is the reason customers will come back—or not—to buy whatever you sell.

Image Credits: Pexels

This article was originally published on Forbes.com







How Tangible AI Artifacts Accelerate Learning and Alignment

Seeing the Invisible

By Douglas Ferguson, Founder & CEO of Voltage Control
Originally inspired by “A Lantern in the Fog” on Voltage Control, where teams learn to elevate their ways of working through facilitation mastery and AI-enabled collaboration.

Innovation isn’t just about generating ideas — it’s about testing assumptions before they quietly derail your progress. The faster a team can get something tangible in front of real eyes and minds, the faster they can learn what works, what doesn’t, and why.

Yet many teams stay stuck in abstraction for too long. They debate concepts before they draft them, reason about hypotheses before they visualize them, and lose energy to endless interpretation loops. That’s where AI, when applied strategically, becomes a powerful ally in human-centered innovation — not as a shortcut, but as a clarifier.

How Tangible AI Artifacts Accelerate Learning and Alignment

At Voltage Control, we’ve been experimenting with a practice we call AI Teaming — bringing AI into the collaborative process as a visible, participatory teammate. Using new features in Miro, like AI Flows and Sidekicks, we’re able to layer prompts in sequence so that teams move from research to prototypes in minutes. We call this approach Instant Prototyping — because the prototype isn’t the end goal. It’s the beginning of the real conversation.


Tangibility Fuels Alignment

In human-centered design, the first artifact is often the first alignment. When a team sees a draft — even one that’s flawed — it changes how they think and talk. Suddenly, discussions move from “what if” to “what now.” That’s the tangible magic: the moment ambiguity becomes visible enough to react to.

AI can now accelerate that moment. With one-click flows in Miro, facilitators can generate structured artifacts — such as user flows, screen requirements, or product briefs — based on real research inputs. The output isn’t meant to be perfect; it’s meant to be provocative. A flawed draft surfaces hidden assumptions faster than another round of theorizing ever could.

Each iteration reveals new learning: the missing user story, the poorly defined need, the contradiction in the strategy. These insights aren’t AI’s achievement — they’re the team’s. The AI simply provides a lantern, lighting up the fog so humans can decide where to go next.


Layering Prompts for Better Hypothesis Testing

One of the most powerful aspects of Miro’s new AI Flows is the ability to layer prompts in connected sequences. Instead of a single one-off query, you create a chain of generative steps that build on each other. For example:

  1. Synthesize research into user insights.
  2. Translate insights into “How Might We” statements.
  3. Generate user flows based on selected opportunities.
  4. Draft prototype screens or feature lists.

Each layer of the flow uses the prior outputs as inputs — so when you adjust one, the rest evolves. Change a research insight or tweak your “How Might We” framing, and within seconds, your entire prototype ecosystem updates. It’s an elegant way to make hypothesis testing iterative, dynamic, and evidence-driven.
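The chaining behavior described above can be illustrated with a generic pipeline sketch. This is not Miro’s actual API; the step functions below are hypothetical stand-ins for calls to an AI model, kept as plain string transforms so the composition itself is visible.

```python
# Generic sketch of a layered prompt flow: each step consumes the prior
# step's output, so changing an upstream input re-derives everything
# downstream. Step functions are hypothetical stand-ins for model calls.
from typing import Callable

Step = Callable[[str], str]

def synthesize_insights(research: str) -> str:
    return f"insights({research})"

def frame_how_might_we(insights: str) -> str:
    return f"HMW({insights})"

def generate_user_flows(hmw: str) -> str:
    return f"flows({hmw})"

def draft_prototype(flows: str) -> str:
    return f"prototype({flows})"

def run_flow(research: str, steps: list[Step]) -> str:
    artifact = research
    for step in steps:          # each layer builds on the last
        artifact = step(artifact)
    return artifact

FLOW = [synthesize_insights, frame_how_might_we,
        generate_user_flows, draft_prototype]

# Adjust the research input and the whole chain re-runs.
print(run_flow("facilitator interviews", FLOW))
```

The point of the structure is the re-run: tweak the research input or any intermediate framing, call `run_flow` again, and every downstream artifact updates in one pass.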

Seeing the Invisible

In traditional innovation cycles, these transitions can take weeks of hand-offs. With AI flows, they happen in minutes — creating immediate feedback loops that invite teams to think in public and react in real time.

(You can see this process in action in the video embedded below — where we walk through how small prompt adjustments yield dramatically different outputs.)


The Human Element: Facilitating Sensemaking

The irony of AI-assisted innovation is that the faster machines generate, the more valuable human facilitation becomes. Instant prototypes don’t replace discussion — they accelerate it. They make reflection, critique, and sensemaking more productive because there’s something concrete to reference.

Facilitators play a critical role here. Their job is to:

  • Name the decision up front: “By the end of this session, we’ll have a directionally correct concept we’re ready to test.”
  • Guide feedback: Ask, “What’s useful? What’s missing? What will we try next?”
  • Anchor evidence: Trace changes to specific research insights so teams stay grounded.
  • Enable iteration: Encourage re-running the flow after prompt updates to test the effect of new assumptions.

Through this rhythm of generation, reflection, and adjustment, AI becomes a conversation catalyst — not a black box. And the process stays deeply human-centered because it focuses on learning through doing.


Case in Point: Building “Breakout Buddy”

We recently used this exact approach to prototype a new tool called Breakout Buddy — a Zoom app designed to make virtual breakout rooms easier for facilitators. The problem was well-known in our community: facilitators love the connection of small-group moments but dread the logistics. No drag-and-drop, no dynamic reassignment, no simple timers.

Using our Instant Prototyping flow, we gathered real facilitator pain points, synthesized insights, and created an initial app concept in under two hours. The first draft had errors — it misunderstood terms like “preformatted” and missed saving room configurations — but that’s precisely what made it valuable. Those gaps surfaced the assumptions we hadn’t yet defined.

After two quick iterations, we had a working prototype detailed enough for a designer to polish. Within days, we had a testable artifact, a story grounded in user evidence, and a clear set of next steps. The magic wasn’t in the speed — it was in how visible our thinking became.


Designing for Evidence, Not Perfection

If innovation is about learning, then prototypes are your hypotheses made tangible. AI just helps you create more of them — faster — so you can test, compare, and evolve. But the real discipline lies in how you use them.

  • Don’t rush past the drafts. Study what’s wrong and why.
  • Don’t hide your versions. Keep early artifacts visible to trace the evolution.
  • Don’t over-polish. Each iteration should teach, not impress.

When teams treat AI outputs as living evidence rather than final answers, they stay in the human-centered loop — grounded in empathy, focused on context, and oriented toward shared understanding.


A Lantern in the Fog

At Voltage Control, we see AI not as a replacement for creative process, but as a lantern in the fog — illuminating just enough of the path for teams to take their next confident step. Whether you’re redesigning a product, reimagining a service, or exploring cultural transformation, the goal isn’t to hand creativity over to AI. It’s to use AI to make your learning visible faster.

Because once the team can see it, they can improve it. And that’s where innovation truly begins.


🎥 Watch the Demo: How layered AI prompts accelerate hypothesis testing in Miro

Join the waitlist to get your hands on the Instant Prototyping template

Image Credit: Douglas Ferguson, Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Why Best Practices Fail

Five Questions with Ellen DiResta

Why Best Practices Fail

GUEST POST from Robyn Bolton

For decades, we’ve faithfully followed innovation’s best practices. The brainstorming workshops, the customer interviews, and the validated frameworks that make innovation feel systematic and professional. Design thinking sessions, check. Lean startup methodology, check. It’s deeply satisfying, like solving a puzzle where all the pieces fit perfectly.

Problem is, we’re solving the wrong puzzle.

As Ellen Di Resta points out in this conversation, all the frameworks we worship, from brainstorming through business model mapping, are business-building tools, not idea creation tools.

Read on to learn why our failure to act on the fundamental distinction between value creation and value capture causes too many disciplined, process-following teams to create beautiful prototypes for products nobody wants.


Robyn: What’s the one piece of conventional wisdom about innovation that organizations need to unlearn?

Ellen: That the innovation best practices everyone’s obsessed with work for the early stages of innovation.

The early part of the innovation process is all about creating value for the customer.  What are their needs?  Why are their Jobs to be Done unsatisfied?  But very quickly we shift to coming up with an idea, prototyping it, and creating a business plan.  We shift to creating value for the business, before we assess whether or not we’ve successfully created value for the customer.

Think about all those innovation best practices. We’ve got business model canvas. That’s about how you create value for the business. Right? We’ve got the incubators, accelerators, lean, lean startup. It’s about creating the startup, which is a business, right? These tools are about creating value for the business, not the customer.

R: You know that Jobs to be Done is a hill I will die on, so I am firmly in the camp that if it doesn’t create value for the customer, it can’t create value for the business.  So why do people rush through the process of creating ideas that create customer value?

E: We don’t really teach people how to develop ideas because our culture only values what’s tangible.  But an idea is not a tangible thing so it’s hard for people to get their minds around it.  What does it mean to work on it? What does it mean to develop it? We need to learn what motivates people’s decision-making.

Prototypes and solutions are much easier to sell to people because you have something tangible that you can show to them, explain, and answer questions about.  Then they either say yes or no, and you immediately know if you succeeded or failed.

R: Sounds like it all comes down to how quickly and accurately I can measure outcomes?

E: Exactly. But here’s the rub: they don’t even know they’re rushing because traditional innovation tools give them a sense of progress, even if the progress is wrong.

We’ve all been to a brainstorm session, right? Somebody calls the brainstorm session. Everybody goes. They say any idea is good. Nothing is bad. Come up with wild, crazy ideas. They plaster the walls with 300 ideas, and then everybody leaves, and they feel good and happy and creative, and the poor person who called the brainstorm is stuck.

Now what do they do? They look at these 300 ideas, and they sort them based on things they can measure, like how long it’ll take or how much money it’ll cost. What happens? They end up choosing the things we already know how to do! So why have the brainstorm?

R: This creates a real tension: leadership wants progress they can track, but the early work is inherently unmeasurable. How do you navigate that organizational reality?

E: Those tangible metrics are all about reliability. They make sure you’re doing things right and that you’re doing it the same way every time. And that’s appropriate when you know what you’re doing, know you’re creating value for the customer, and are now working to create value for the business, usually at scale.

But the other side of it? That’s where you’re creating new value and trying to figure things out. You need validity metrics. Are we doing the right things? How will we know that we’re doing the right things?

R: What’s the most important insight leaders need to understand about early-stage innovation?

E: The one thing that the leader must do is run cover. Their job is to protect the team who’s doing the actual idea development work because that work is fuzzy and doesn’t look like it’s getting anywhere until Ta-Da, it’s done!

They need to strategically communicate and make sure that leadership hears what they need to hear, so that they know everything is in control. Running cover is the best way to describe it. And if you don’t have that person, it’s really hard to do the idea development work.

But to do all of that, the leader also must really care about that problem and about understanding the customer.


We must create value for the customer before we can create value for the business. Ellen’s insight that most innovation best practices focus on the latter is devastating.  It’s also essential for all the leaders and teams who need results from their innovation investments.

Before your next innovation project touches a single framework, ask yourself Ellen’s fundamental question: “Are we at a stage where we’re creating value for the customer, or the business?” If you can’t answer that clearly, put down the canvas and start having deeper conversations with the people whose problems you think you’re solving.

To learn more about Ellen’s work, check out Pearl Partners.

To dive deeper into Ellen’s thought leadership, visit her Substack – Idea Builders Guild.

To break the cycle of using the wrong idea tools, sign up for her free one-hour workshop.

Image credit: 1 of 950+ FREE quote slides available at http://misterinnovation.com


Innovation or Not – Chemical-Free Farming with Autonomous Robots

Greenfield Robotics and the Human-Centered Reboot of Agriculture

LAST UPDATED: October 20, 2025 at 9:35PM

GUEST POST from Art Inteligencia

The operating system of modern agriculture is failing. We’ve optimized for yield at the cost of health—human health, soil health, and planetary health. The relentless pursuit of chemical solutions has led to an inevitable biological counter-strike: herbicide-resistant superweeds and a spiraling input cost crisis. We’ve hit the wall of chemical dependency, and the system is demanding a reboot.

This is where the story of Greenfield Robotics — a quiet, powerful disruption born out of a personal tragedy and a regenerative ethos—begins to rewrite the agricultural playbook. Founded by third-generation farmer Clint Brauer, their mission isn’t just to sell a better tool; it’s to eliminate chemicals from our food supply entirely. This is the essence of true, human-centered innovation: identifying a catastrophic systemic failure and providing an elegantly simple, autonomous solution.

The Geometry of Disruption: From Spray to Scalpel

For decades, weed control has been a brute-force exercise. Farmers apply massive spray rigs, blanketing fields with chemicals to kill the unwanted. This approach is inefficient, environmentally harmful, and, critically, losing the biological war.

Greenfield Robotics flips this model from a chemical mass application to a mechanical, autonomous precision action. Their fleet of small, AI-powered robots—the “Weedbots” or BOTONY fleet—are less like tractors and more like sophisticated surgical instruments. They are autonomous, modular, and relentless.

Imagine a swarm of yellow, battery-powered devices, roughly two feet wide, moving through vast crop rows 18 hours a day, day or night. This isn’t mere automation; it’s coordinated, intelligent fleet management. Using proprietary AI-powered machine vision, the bots navigate with centimeter accuracy, identifying the crop from the weed. Their primary weapon is not a toxic spray, but a spinning blade that mechanically scalps the ground, severing the weed right at the root, ensuring chemical-free eradication.

This seemingly simple mechanical action represents a quantum leap in agricultural efficiency. By replacing chemical inputs with a service-based autonomous fleet, Greenfield solves three concurrent crises:

  • Biological Resistance: Superweeds cannot develop resistance to being physically cut down.
  • Environmental Impact: Zero herbicide use means zero chemical runoff, protecting water systems and beneficial insects.
  • Operational Efficiency: The fleet runs continuously and autonomously (up to 1.6 meters per second), drastically increasing the speed of action during critical growth windows and reducing the reliance on increasingly scarce farm labor.

The initial success is staggering. Working across broadacre crops like soybeans, cotton, and sweet corn, farmers are reporting higher yields and lower costs, comparable to or even better than traditional chemical methods. The economic pitch is the first step, but the deeper change is the regenerative opportunity it unlocks.

The Human-Centered Harvest: Regenerative Agriculture at Scale

As an innovation leader, I look for technologies that don’t just optimize a process, but fundamentally elevate the human condition around that process. Greenfield Robotics is a powerful example of this.

The human-centered core of this innovation is twofold: the farmer and the consumer.

For the farmer, this technology is an act of empowerment. It removes the existential dread of mounting input costs and the stress of battling resistant weeds with diminishing returns. More poignantly, it addresses the long-term health concerns associated with chemical exposure—a mission deeply personal to Brauer, whose father’s Parkinson’s diagnosis fueled the company’s genesis. This is a profound shift: A technology designed to protect the very people who feed the world.

Furthermore, the modular chassis of the Weedbot is the foundation for an entirely new Agri-Ecosystem Platform. The robot is not limited to cutting weeds. It can be equipped to:

  • Plant cover crops in-season.
  • Apply targeted nutrients, like sea kelp, with surgical precision.
  • Act as a mobile sensor platform, collecting data on crop nutrient deficiencies to guide farmer decision-making.

This capability transforms the farmer’s role from a chemical applicator to a regenerative data strategist. The focus shifts from fighting nature to working with it, utilizing practices that build soil health—reduced tillage, increased biodiversity, and water retention. The human element moves up the value chain, focused on strategic field management powered by real-time autonomous data, while the robot handles the tireless, repeatable, physical labor.

For the consumer, the benefit is clear: chemical-free food at scale. The investment from supply chain giants like Chipotle, through their Cultivate Next venture fund, is a validation of this consumer-driven imperative. They understand that meeting the demand for cleaner, healthier food requires a fundamental, scalable change in production methods. Greenfield provides the industrialized backbone for regenerative, herbicide-free farming—moving this practice from niche to normalized.

Beyond the Bot: A Mindset for Tomorrow’s Food System

The challenge for Greenfield Robotics, and any truly disruptive innovator, is not the technology itself, but the organizational and cultural change required for mass adoption. We are talking about replacing a half-century-old paradigm of chemical dependency with an autonomous, mechanical model. This requires more than just selling a machine; it requires cultivating a Mindset Shift in the farming community.

The company’s initial “Robotics as a Service” model was a brilliant, human-centered strategy for adoption. By deploying, operating, and maintaining the fleets themselves for a per-acre fee, they lowered the financial and technical risk for farmers. This reduced-friction introduction proves that the best innovation is often wrapped in the most accessible business model. As the technology matures, transitioning toward a purchase/lease model shows the market confidence and maturity necessary for exponential growth.

Greenfield Robotics is more than a promising startup; it is a signal. It tells us that the future of food is autonomous, chemical-free, and profoundly human-centered. The next chapter of agriculture will be written not with larger, more powerful tractors and sprayers, but with smaller, smarter, and more numerous robots that quietly tend the soil, remove the toxins, and enable the regenerative practices necessary for a sustainable, profitable future.

This autonomous awakening is our chance to heal the rift between technology and nature, and in doing so, secure a healthier, cleaner food supply for the next generation. The future of farming is not just about growing food; it’s about growing change.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Greenfield Robotics







The Nuclear Fusion Accelerator

How AI is Commercializing Limitless Power


GUEST POST from Art Inteligencia

For decades, nuclear fusion — the process that powers the sun and promises clean, virtually limitless energy from basic elements like hydrogen — has been the “holy grail” of power generation. The famous joke has always been that fusion is “30 years away.” However, as a human-centered change and innovation thought leader, I can tell you that we are no longer waiting for a scientific miracle; we are waiting for an engineering and commercial breakthrough. And the key catalyst accelerating us across the finish line isn’t a new coil design or a stronger laser. It is Artificial Intelligence.

The journey to commercial fusion involves taming plasma — a superheated, unstable state of matter hotter than the sun’s core — for sustained periods. This process is characterized by extraordinary complexity, high costs, and a constant, data-intensive search for optimal control parameters. AI is fundamentally changing the innovation equation by replacing the slow, iterative process of trial-and-error experimentation with rapid, predictive optimization. Fusion experiments generate petabytes of diagnostic data; AI serves as the missing cognitive layer, enabling physicists and engineers to solve problems in days that once took months or even years of physical testing. AI isn’t just a tool; it is the accelerator that is finally making fusion a question of when, not if, and critically, at a commercially viable price point.

AI’s Core Impact: From Simulation to Scalability

AI accelerates commercialization by directly addressing fusion’s three biggest engineering hurdles, all of which directly affect capital expenditure and time-to-market:

  • 1. Real-Time Plasma Control & Digital Twins: Fusion plasma is highly turbulent and prone to disruptive instabilities. Reinforcement Learning (RL) models and Digital Twins — virtual, real-time replicas of the reactor — learn optimal control strategies. This allows fusion machines to maintain plasma confinement and temperature far more stably, which is essential for continuous, reliable power production.
  • 2. Accelerating Materials Discovery: The extreme environment within a fusion reactor destroys conventional materials. AI, particularly Machine Learning (ML), is used to screen vast material databases and even design novel, radiation-resistant alloys faster than traditional metallurgy, shrinking the time-to-discovery from years to weeks. This cuts R&D costs and delays significantly.
  • 3. Design and Manufacturing Optimization: Designing the physical components is immensely complex. AI uses surrogate models — fast-running, ML-trained replicas of expensive high-fidelity physics codes — to quickly test thousands of design iterations. Furthermore, AI is being used to optimize manufacturing processes like the winding of complex high-temperature superconducting magnets, ensuring precision and reducing production costs.

“AI is the quantum leap in speed, turning the decades-long process of fusion R&D into a multi-year sprint towards commercial viability.” — Dr. Michl Binderbauer, CEO of TAE Technologies


Case Study 1: The Predict-First Approach to Plasma Turbulence

The Challenge:

A major barrier to net-positive energy is plasma turbulence, the chaotic, swirling structures inside the reactor that cause heat to leak out, dramatically reducing efficiency. Traditionally, understanding this turbulence required running extremely time-intensive, high-fidelity computer codes for weeks on supercomputers to simulate one set of conditions.

The AI Solution:

Researchers at institutions like MIT and others have successfully utilized machine learning to build surrogate models. These models are trained on the output of the complex, weeks-long simulations. Once trained, the surrogate can predict the performance and turbulence levels of a given plasma configuration in milliseconds. This “predict-first” approach allows engineers to explore thousands of potential operating scenarios and refine the reactor’s control parameters efficiently, a process that would have been physically impossible just a few years ago.
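The surrogate-model pattern can be illustrated with a deliberately tiny sketch. Everything here is invented for illustration: the “simulation” is a stand-in formula (the real codes take weeks on supercomputers), the surrogate is a cubic polynomial rather than a neural network or Gaussian process, and the “confinement score” is made up.

```python
import numpy as np

# Stand-in for a high-fidelity physics code: in reality, each evaluation
# takes weeks on a supercomputer. The formula is invented for illustration.
def expensive_simulation(field_strength):
    return -(field_strength - 5.0) ** 2 + 0.05 * field_strength

# Step 1: run the expensive code at a handful of design points.
train_x = np.linspace(2.0, 8.0, 15)
train_y = np.array([expensive_simulation(x) for x in train_x])

# Step 2: fit a cheap surrogate (here, a cubic polynomial) to those runs.
surrogate = np.polynomial.Polynomial.fit(train_x, train_y, deg=3)

# Step 3: "predict first" -- sweep thousands of candidate configurations
# in milliseconds and flag the most promising one for expensive follow-up.
candidates = np.linspace(2.0, 8.0, 10_000)
best = candidates[np.argmax(surrogate(candidates))]
print(f"Most promising field strength: {best:.3f}")
```

The pattern, not the toy numbers, is the point: a few expensive evaluations train a fast approximation, and the approximation then makes exhaustive design sweeps affordable.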

The Commercial Impact:

This application of AI dramatically reduces the design cycle time. By rapidly optimizing plasma behavior through simulation, engineers can confirm promising configurations before they ever build a new physical machine, translating directly into lower capital costs, reduced reliance on expensive physical prototypes, and a faster path to commercial-scale deployment.


Case Study 2: Real-Time Stabilization in Commercial Reactor Prototypes

The Challenge:

Modern magnetic confinement fusion devices require precise, continuous adjustment of complex magnetic fields to hold the volatile plasma in place. Slight shifts can lead to a plasma disruption — a sudden, catastrophic event that can damage reactor walls and halt operations. Traditional feedback loops are often too slow and rely on simple, linear control rules.

The AI Solution:

Private companies and large public projects (like ITER) are deploying Reinforcement Learning controllers. These AI systems are given a reward function (e.g., maintaining maximum plasma temperature and density) and train themselves across millions of virtual experiments to operate the magnetic ‘knobs’ (actuators) in the most optimal, non-intuitive way. The result is an AI controller that can detect an instability milliseconds before a human or conventional system can, and execute complex corrective maneuvers in real-time to mitigate or avoid disruptions entirely.
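A real reinforcement-learning controller is far beyond a blog snippet, but the reward-driven selection principle can be sketched with a toy. Here the “plasma” is a single drifting number, the entire “policy space” is eleven candidate proportional gains, and the reward favors staying near a target; an actual RL agent trains a neural policy over millions of simulated shots, but the logic of scoring behavior against a reward function is the same. All names and numbers are invented.

```python
import random

def run_episode(gain, steps=200, seed=0):
    """Toy stand-in for a plasma shot: a 1-D 'position' drifts randomly
    (instability) while the controller applies a corrective nudge.
    Reward accumulates for staying near the target position (0.0)."""
    rng = random.Random(seed)  # fixed seed: every policy sees the same drift
    position, reward = 0.0, 0.0
    for _ in range(steps):
        position += rng.uniform(-0.1, 0.1)       # random drift
        position -= gain * position              # corrective actuation
        reward += 1.0 - min(abs(position), 1.0)  # reward: proximity to target
    return reward

# Reward-driven policy search over eleven candidate gains (0.0 to 1.0);
# an RL agent searches a vastly richer policy space the same way.
best_gain = max((g / 10 for g in range(11)), key=run_episode)
print(f"Selected gain: {best_gain}")
```

The design choice to mirror: control quality is defined only through the reward function, so improving the controller never requires hand-written rules for each instability.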

The Commercial Impact:

This shift from reactive to proactive control is critical for commercial viability. A commercial fusion plant needs to operate continuously and reliably to make its levelized cost of electricity competitive. By using AI to prevent costly equipment damage and extend plasma burn duration, the technology becomes more reliable, safer, and ultimately more financially attractive as a baseload power source.


The New Fusion Landscape: Companies to Watch

The private sector, recognizing the accelerating potential of AI, is now dominating the race, backed by billions in private capital. Companies like Commonwealth Fusion Systems (CFS), a spin-out from MIT, are leveraging AI-optimized high-temperature superconducting magnets to shrink the tokamak design to a commercially viable size. Helion Energy, which famously signed the first power purchase agreement with Microsoft, uses machine learning to control their pulsed Magneto-Inertial Fusion systems with unprecedented precision to achieve high plasma temperatures. TAE Technologies applies advanced computing to its field-reversed configuration approach, optimizing its non-radioactive fuel cycle. Other startups like Zap Energy and Tokamak Energy are also deeply integrating AI into their core control and design strategies. The partnership between these agile startups and large compute providers (like AWS and Google) highlights that fusion is now an information problem as much as a physics one.

The Human-Centered Future of Energy

AI is not just optimizing the physics; it is optimizing the human innovation cycle. By automating the data-heavy, iterative work, AI frees up the world’s best physicists and engineers to focus on the truly novel, high-risk breakthroughs that only human intuition can provide. When fusion is commercialized — a time frame that has shrunk from decades to perhaps the next five to ten years — it will not just be a clean energy source; it will be a human-centered energy source. It promises energy independence, grid resiliency, and the ability to meet the soaring demands of a globally connected, AI-driven digital economy without contributing to climate change. The fusion story is rapidly becoming the ultimate story of human innovation, powered by intelligence, both artificial and natural.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini







The Ongoing Innovation War Between Hackers and Cybersecurity Firms

Last Updated: October 15, 2025 at 8:36PM PDT


GUEST POST from Art Inteligencia

In the world of change and innovation, we often celebrate disruptive breakthroughs — the new product, the elegant service, the streamlined process. But there is a parallel, constant, and far more existential conflict that drives more immediate innovation than any market force: the Innovation War between cyber defenders and adversaries. This conflict isn’t just a cat-and-mouse game; it is a Vicious Cycle of Creative Destruction where every defensive breakthrough creates a target for a new offensive tactic, and every successful hack mandates a fundamental reinvention of the defense at firms like F5 and CrowdStrike. As a human-centered change leader, I find this battleground crucial because its friction dictates the speed of digital progress and, more importantly, the erosion or restoration of citizen and customer trust.

We’ve moved past the era of simple financial hacks. Today’s sophisticated adversaries — nation-states, organized crime syndicates, and activist groups — target the supply chain of trust itself. Their strategies are now turbocharged by Generative AI, allowing for the automated creation of zero-day exploits and hyper-realistic phishing campaigns, fundamentally accelerating the attack lifecycle. This forces cybersecurity firms to innovate in response, focusing on achieving Active Cyber Resilience — the ability to not only withstand attacks but to learn, adapt, and operate continuously even while under fire. The human cost of failure — loss of privacy, psychological distress from disruption, and decreased public faith in institutions — is the real metric of this war.

The Three Phases of Cyber Innovation

The defensive innovation cycle, driven by adversary pressure, can be broken down into three phases:

  • 1. The Breach as Discovery (The Hack): An adversary finds a zero-day vulnerability or exploits a systemic weakness. The hack itself is the ultimate proof-of-concept, revealing a blind spot that internal R&D teams failed to predict. This painful discovery is the genesis of new innovation.
  • 2. The Race to Resilience (The Fix): Cybersecurity firms immediately dedicate immense resources — often leveraging AI and automation for rapid detection and response — to patch the vulnerability, not just technically, but systematically. This results in the rapid development of new threat intelligence, monitoring tools, and architectural changes.
  • 3. The Shift in Paradigm (The Reinvention): Over time, repeated attacks exploiting similar vectors force a foundational change in design philosophy. The innovation becomes less about the patch and more about a new, more secure default state. We transition from building walls to implementing Zero Trust principles, treating every user and connection as potentially hostile.
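In code terms, the Zero Trust shift is from “allow anything inside the perimeter” to “verify identity, device posture, and explicit authorization on every request.” A minimal sketch of that decision logic follows; the users, resources, and fields are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str             # asserted identity
    token_valid: bool     # did identity verification succeed right now?
    device_trusted: bool  # does the device meet posture requirements?
    resource: str         # what is being accessed

# Hypothetical least-privilege policy: explicit (user, resource) grants.
POLICY = {("alice", "billing-db"), ("bob", "pa-system")}

def zero_trust_check(req: Request) -> bool:
    """Every request is evaluated on its own merits; note that 'inside
    the corporate network' never appears as a condition."""
    return (
        req.token_valid                         # re-verify identity each time
        and req.device_trusted                  # re-verify device posture
        and (req.user, req.resource) in POLICY  # explicit authorization only
    )

print(zero_trust_check(Request("alice", True, True, "billing-db")))  # True
print(zero_trust_check(Request("alice", True, True, "pa-system")))   # False: no grant
```

The point is architectural: micro-segmentation and Zero Trust replace implicit network-location trust with explicit, per-request checks like these.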

“In cybersecurity, your adversaries are your involuntary R&D partners. They expose your weakness, forcing you to innovate beyond your comfort zone and into your next generation of defense.” — Frank Hersey


Case Study 1: F5 Networks and the Supply Chain of Trust

The Attack:

F5 Networks, whose BIG-IP products are central to application delivery and security for governments and major corporations globally, was breached by a suspected nation-state actor. The attackers reportedly stole proprietary BIG-IP source code and details on undisclosed security vulnerabilities that F5 was internally tracking.

The Innovation Mandate:

This was an attack on the supply chain of security itself. The theft provides adversaries with a blueprint for crafting highly tailored, future exploits that target F5’s massive client base. The innovation challenge for F5 and the entire industry shifts from simply patching products to fundamentally rethinking their Software Development Lifecycle (SDLC). This demands a massive leap in threat intelligence integration, secure coding practices, and isolating development environments from corporate networks to prevent future compromise of the IP that protects the world.

The Broader Impact:

The F5 breach compels every organization to adopt an unprecedented level of vendor risk management. It drives innovation in how infrastructure is secured, shifting the paradigm from trusting the vendor’s product to verifying the vendor’s integrity and securing the entire delivery pipeline.


Case Study 2: Airport Public Address (PA) System Hacks

The Attack:

Hackers gained unauthorized access to the Public Address (PA) systems and Flight Information Display Screens (FIDS) at various airports (e.g., in Canada and the US). They used these systems to broadcast political and disruptive messages, causing passenger confusion, flight delays, and the immediate deployment of emergency protocols.

The Innovation Mandate:

These attacks were not financially motivated, but aimed at disruption and psychological impact — exploiting the human fear factor. The vulnerability often lay in a seemingly innocuous area: a cloud-based, third-party software provider for the PA system. The innovation mandate here is a change in architectural design philosophy. Security teams must discard the concept of “low-value” systems. They must implement micro-segmentation to isolate all operational technology (OT) and critical public-facing systems from the corporate network. Furthermore, it forces an innovation in physical-digital security convergence, requiring security protocols to manage and authenticate the content being pushed to public-facing devices, treating text-to-speech APIs with the same scrutiny as a financial transaction. The priority shifts to minimizing public disruption and maximizing operational continuity.

The Broader Impact:

The PA system hack highlights the critical need for digital humility. Every connected device, from the smart thermostat to the public announcement system, is an attack vector. The innovation is moving security from the data center floor to the terminal wall, reinforcing that the human-centered goal is continuity and maintaining public trust.


Conclusion: The Innovation Imperative

The war between hackers and cybersecurity firms is relentless, but it is ultimately a net positive for innovation, albeit a brutally expensive and high-stakes one. Each successful attack provides the industry with a blueprint for a more resilient, better-designed future.

For organizational leaders, the imperative is clear: stop viewing cybersecurity as a cost center and start treating it as the foundational innovation platform. Your investment in security dictates your speed and trust in the market. Adopt the mindset of Continuous Improvement and Adaptation. Leaders must mandate a Zero Trust roadmap and treat security talent as mission-critical R&D personnel. The speed and quality of your future products will depend not just on your R&D teams, but on how quickly your security teams can learn from the enemy’s last move. In the digital economy, cyber resilience is the ultimate competitive differentiator.

Image credit: Unsplash
