
Benchmarking Innovation Performance

Closing the Gap Between Aspiration and Execution

GUEST POST from Noel Sobelman

In today’s volatile, technology-driven world, where the pace of change continues to accelerate, most executive teams agree on one thing: innovation, whether incremental, adjacent, or transformative, is critical. What’s far less clear is how to measure whether their innovation efforts are working or how to systematically improve them. That’s where benchmarking comes in.

Benchmarking isn’t just a diagnostic tool; it’s a strategic accelerator. It provides clarity where there’s ambiguity, reveals blind spots that internal reviews often miss, and equips leadership teams with hard data to make smarter, faster, and more confident decisions about innovation investments and process improvements.

This article explores benchmarking as a strategic capability for quantifying the gap between current innovation performance and best-in-class execution. It also outlines how companies can use benchmarking to unlock more reliable, scalable, and profitable innovation outcomes.

From Insight to Action: Why Benchmark Innovation?

Innovation is inherently risky, with outcomes that are hard to predict, but the processes that support it don’t have to be. Effective innovation systems are structured, repeatable, measurable, and continuously improving. Benchmarking enables companies to see those systems clearly and objectively. It replaces assumptions with insight and turns performance anecdotes into verifiable data.

Used strategically, benchmarking helps executive teams build a compelling case for change grounded in facts rather than opinions. It offers a concrete way to quantify gaps between current and desired performance, helping to expose where process inefficiencies or capability gaps are holding the organization back. Benchmarking also supports leadership in identifying maturity levels across critical innovation capabilities, from governance and investment decision-making to resource management and project execution.

Importantly, it links development capabilities directly to measurable business outcomes. That means innovation isn’t just about creativity or culture; it’s about performance that can be tracked, improved, and scaled. By grounding decisions in comparative data, benchmarking makes it easier to align managers around year-over-year improvement targets that are both ambitious and realistic.

Defining Performance: What Benchmarking Measures

For benchmarking to drive real improvement, it must look at the right dimensions of performance. At Accel, we use a multi-dimensional benchmarking model that examines four distinct categories of innovation performance: innovation effectiveness, project performance, process application, and portfolio management.

Innovation effectiveness reflects senior leadership’s ability to guide success across the full innovation spectrum, from product line extensions to transformative new ventures. This includes new product vitality (the percentage of revenue generated by recent launches), return on R&D investment, and the proportion of spend lost to delayed or ineffective decision-making (in other words, wasted development spending). When measuring leadership effectiveness in creating new sources of growth beyond the core business, we include leading indicators like evidence-based portfolio metrics, progress metrics, and scaling metrics such as user engagement, retention rate, and referral rate.

Innovation project performance reflects how well teams execute against their objectives. It includes metrics such as time-to-market, time-to-profitability, and schedule predictability, alongside actual-to-planned measures of product cost, profitability, and quality. These indicators help determine whether teams are executing effectively while meeting the business and customer needs they set out to address. New venture project performance measures include validated assumptions and cumulative evidence strength across solution desirability, business viability, and technical feasibility dimensions.
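To make two of these measures concrete, here is a minimal Python sketch with invented product and schedule figures; the field names, revenue values, and three-year vitality window are illustrative assumptions, not part of Accel’s benchmark model.

```python
from datetime import date

# Hypothetical product records: launch date and trailing-12-month revenue ($M).
products = [
    {"name": "A", "launched": date(2023, 4, 1), "revenue": 12.0},
    {"name": "B", "launched": date(2019, 6, 1), "revenue": 30.0},
    {"name": "C", "launched": date(2024, 1, 15), "revenue": 8.0},
]

def new_product_vitality(products, as_of=date(2025, 1, 1), window_years=3):
    """Share of total revenue coming from products launched within the window."""
    total = sum(p["revenue"] for p in products)
    recent = sum(p["revenue"] for p in products
                 if (as_of - p["launched"]).days <= window_years * 365)
    return recent / total if total else 0.0

def schedule_predictability(planned_days, actual_days):
    """Planned-to-actual cycle-time ratio; 1.0 means the team hit its plan."""
    return planned_days / actual_days if actual_days else 0.0

print(f"New product vitality: {new_product_vitality(products):.0%}")        # 40%
print(f"Schedule predictability: {schedule_predictability(180, 230):.2f}")  # 0.78
```

In practice these ratios are tracked per product line or business unit and trended over time, but the underlying arithmetic is no more complicated than this.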

Innovation process application focuses on how consistently and effectively innovation methodologies are applied. Here, we assess actual versus estimated project cycle times across development phases as well as the accuracy of development cost forecasts. We also examine the frequency of project re-scoping, exception reviews, team turnover, and the reuse of design or code elements, all of which serve as indicators of process health. For transformative innovation processes, we also assess learning velocity, experimentation rigor, evidence-based decision-making, metered funding practices, core business leverage, and engagement with external ecosystems.

Finally, innovation portfolio management metrics reveal how well an organization aligns its innovation resources with its strategy. We evaluate factors such as strategic alignment, investment allocation, resource utilization, and portfolio value realization. When these are off-target, companies often see a mismatch between growth ambition and investment mix, poor development throughput, or low return on their innovation spend.

Figure 1. Innovation Performance Benchmark Metrics

Together, these four categories offer a comprehensive view of performance and its connection to business outcomes and, more importantly, a roadmap for targeted, results-driven improvement.

How It Works: Accel’s Benchmarking Approach

The benchmarking process begins by establishing a clear, accurate picture of the company’s current state. This involves gathering available performance data, then evaluating it for consistency and comparability across sources. We reconcile discrepancies and normalize contextual factors like company size, product line complexity, regulatory classification, innovation type, and development methodology.
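As a simple illustration of that normalization step, the sketch below adjusts raw cycle times for one contextual factor (regulatory classification) before comparison; the project records and adjustment factors are invented for illustration and are not actual benchmark values.

```python
# Hypothetical project records: raw development cycle time (months) plus one
# contextual attribute that makes raw comparisons misleading.
projects = [
    {"name": "P1", "cycle_months": 14, "reg_class": "Class II"},
    {"name": "P2", "cycle_months": 9,  "reg_class": "Class I"},
    {"name": "P3", "cycle_months": 22, "reg_class": "Class III"},
]

# Assumed adjustment factors: how much longer a typical project in each class
# runs relative to a Class I baseline (illustrative values only).
reg_factor = {"Class I": 1.0, "Class II": 1.3, "Class III": 1.8}

def normalized_cycle(project):
    """Cycle time restated on a common, Class I-equivalent basis."""
    return project["cycle_months"] / reg_factor[project["reg_class"]]

for p in projects:
    print(f"{p['name']}: raw {p['cycle_months']} mo, "
          f"normalized {normalized_cycle(p):.1f} mo")
```

The same idea extends to the other contextual factors mentioned above, such as company size, product line complexity, innovation type, and development methodology.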

AI accelerates this process by enabling faster data harmonization, natural language processing to analyze qualitative inputs (such as project postmortems or customer feedback), and machine learning algorithms that detect hidden drivers of performance variance across projects, teams, or business units.

Once we’ve built this baseline, we assess capability maturity across several critical dimensions. These include innovation process structure, governance and decision-making frameworks, execution models (such as gated, Agile, or transformative approaches), and portfolio management practices. We also analyze resource management, discovery and ideation, new venture incubation efforts, alignment with business strategy, culture, and organizational mechanisms such as incentives and reward systems.

From there, we compare the organization’s practices and outcomes against peer companies, industry leaders, and Accel’s leading practice reference model. The output isn’t just a list of issues; it’s a prioritized set of capability gaps linked directly to performance impact. We then work with executive teams to develop action plans and change roadmaps, aligning leadership around where to invest, where to restructure, and where to accelerate change.

Figure 2. Benchmarking Approach

What Benchmarking Reveals: A Snapshot from the Field

We’ve seen across multiple clients and industries how benchmarking can uncover hidden obstacles to innovation performance. Consider the example of one of our clients, a MedTech manufacturer that decided to benchmark its capabilities after struggling with missed launch dates and underwhelming innovation returns. Its leadership team believed that product complexity and regulatory challenges were the root cause. But when we dug into the data, a different picture emerged.

The company was not consistently tracking core new product development performance metrics, making it difficult to identify root issues or assess improvement opportunities. Sample project data revealed that early-phase development cycles, specifically Concept and Planning Phases, were taking two to three times longer than industry benchmarks. Moreover, the company was investing heavily in detailed design before evaluating technical feasibility or validating customer requirements, which led to protracted development timelines, late-stage surprises, compliance-driven rework, and chronic cost overruns.

Our assessment also uncovered a lack of system-level architecture discipline and siloed project planning without proper integration to balance customer needs against technical, market window, schedule, and resource considerations. In short, while the organization believed it had a process problem, benchmarking revealed a deeper issue: a maturity gap in early-phase project planning, risk management, and system design.

By framing these insights within industry benchmarks and leading practices, the company was able to galvanize leadership support for a targeted transformation. The result was a realigned innovation and portfolio management process focused on early project de-risking, customer need validation, and robust front-end planning, leading to faster cycle times, fewer late-stage surprises, and improved innovation throughput.

Why It Matters: The Strategic Case for Benchmarking

Benchmarking delivers more than operational insights; it unlocks real business value. Companies that benchmark and act on the findings tend to outperform peers in key areas. For instance, best-in-class organizations generate over 45 percent of their revenue from new products. Their time-to-market is over 40 percent faster, and their R&D resources are more efficiently allocated toward high-impact initiatives like platform innovation and next-generation solutions.

In contrast, companies that don’t benchmark often lack visibility into why projects fail, where delays originate, or how resources are being utilized. This results in lower returns on innovation investment, lower project success rates, and internal misalignment on where and how to improve. We’ve seen cases where products missed their mark not because the core idea was flawed, but because teams moved too quickly into development without validating customer needs or failed to adapt to shifting customer expectations. The result: products that launched late, didn’t resonate with customers, or had to be reworked at a significant cost.

When benchmarking is integrated into an ongoing performance management system, it serves as a feedback loop, continuously guiding decision-making and capability development. That’s why it’s not just a one-time diagnostic, but a strategic discipline that supports innovation as a competitive advantage. AI technologies enhance this feedback loop by transforming benchmarking into a dynamic, continuous process, automatically updating benchmarks as internal and external data sources evolve, and alerting teams to emerging gaps or opportunities in real time.

Conclusion: A Tool for Strategic Transformation

In a world where innovation separates leaders from followers, benchmarking is more than a diagnostic; it’s a tool for strategic transformation. By providing hard data on where you stand and where to focus, it turns vague aspirations into actionable priorities and ensures that innovation efforts are aligned with measurable business outcomes.

But benchmarking only delivers value when it’s integrated into the broader innovation system, driving continuous improvement and sharper execution over time. That’s where its real power lies, as an ongoing discipline that builds organizational maturity and long-term advantage.

For executive teams looking to sharpen their innovation capability, a few critical questions should guide the next steps:

  • Do we have an objective understanding of how our innovation performance stacks up against peers?
  • Are our development processes delivering the speed, quality, predictability, and customer impact we need?
  • Can we clearly measure how innovation contributes to growth and profitability?
  • Most importantly, are we investing in the right capabilities to win in the future?

You can’t improve what you don’t measure, and you can’t lead if you don’t know where you stand.

Image credits: Accel Management Group, Noel Sobelman, Pexels

Why So Many Smart People Are Foolish

GUEST POST from Greg Satell

When I lived in Moscow, my gym was just a five-minute walk from my flat. So rather than use a locker, I would just run over in my shorts and a jacket no matter what the weather was. The locals thought I was crazy. Elderly Russians would sometimes scream at me to go home and get dressed properly.

I had always heard that Russians were impervious to the effects of weather, but the truth is that they get cold just like the rest of us. We tend to mythologize the unknown. Our brains work in strange ways, soaking up patterns from what we see. Often, however, those experiences are unreliable, such as the Hollywood images that helped shape my views about Russians and their imperviousness.

The problem is that myths often feel more real than facts. We have a tendency to seize on the information that is most accessible, not the most accurate, and then interpret new evidence based on that prior perception. We need to accept that we can’t avoid our own cognitive biases. The unavoidable truth is that we’re easiest to fool when we think we’re being clever.

Inventing Myths

When Jessica Pressler first published her story about Anna Sorokin in New York Magazine, it could scarcely be believed. A Russian emigrant, with no assets to speak of, somehow managed to convince the cream of New York society that she was, in fact, a wealthy German heiress and swindled them out of hundreds of thousands of dollars.

Her crimes pale in comparison to those of Elizabeth Holmes of Theranos, who made fools of the elites on the opposite coast. Attracting a powerful board that included Henry Kissinger (but no one with expertise in life sciences), the 20-something entrepreneur convinced investors that she had invented a revolutionary blood testing technology and was able to raise $700 million.

In both cases, there was no shortage of opportunities to unmask the fraud. Anna Sorokin left unpaid bills all over town. Despite Holmes’s claims, she wasn’t able to produce a single peer-reviewed study showing that her technology worked, even after 10 years in business. There was no shortage of whistle-blowers from inside and outside the company.

Still, many bought the ruses and would interpret facts to support them. Sorokin’s unpaid bills were seen as proof of her wealth. After all, who but the fabulously rich could be so nonchalant with money? In Holmes’s case, her eccentricities were taken as evidence that she truly was a genius, in the mold of Steve Jobs or Mark Zuckerberg.

The Halo Effect

People like Sorokin and Holmes intentionally prey on our weaknesses. Whenever anybody tried to uncover the facts, they threw up elaborate defenses, making counter-accusations against anyone who dared to question them. Often, they used relationships with powerful people to protect themselves. At Theranos, there was very strict corporate security and an army of lawyers.

Still, it doesn’t have to be so diabolical. As Phil Rosenzweig explains in The Halo Effect, when a company is doing well, we tend to see every aspect of the organization in a positive light. We assume a profitable company has wise leadership, motivated employees and a sound strategy. At the same time, we see the traits of poorly performing firms in a negative light.

But what if it’s the same company? Rosenzweig points out that, when Cisco was at its peak before the dot-com bust, it was said to have an “extreme customer focus.” But a year later, when things turned south, Cisco was criticized for “a cavalier attitude toward potential customers” and “irksome” sales policies. Did its culture really change so much in a year?

Business pundits, in ways very similar to swindlers, prey on how our minds work. When they say that companies that employ risky strategies outperform those that don’t, they are leveraging survivorship bias: firms that took big risks and failed are never counted in the analysis. When consulting companies survey industry executives, they are relying more on social proof than on expert opinion.

The Principle Of Reflexivity

In the early 1970s, a young MBA student named Michael Milken noticed that debt considered below investment grade could provide higher risk-adjusted returns than other investments. He decided to create a market for these so-called junk bonds and, by the 1980s, was making a ton of money.

Then everybody else piled on and the value of the bonds increased so much that they became a bad investment. Nevertheless, investors continued to rush in. Inevitably, the bubble popped and the market crashed as the crowds rushed for the exit. Many who were considered “smart money” lost billions.

That’s what George Soros calls reflexivity. Expectations aren’t formed in a vacuum, but in the context of others’ expectations. If many believe that the stock market will go up, we’re more likely to believe it too. That makes the stock market actually go up, which only adds fuel to the fire. Nobody wants to get left out of a good thing.

Very few ever seem to learn this lesson and that’s why people like Anna Sorokin and Elizabeth Holmes are able to play us for suckers. We are wired to conform and the effect extends widely throughout our social networks. The best indication of what we believe is not any discernible fact pattern, but what those around us happen to believe.

Don’t Believe Everything You Think

One of the things that I’ve learned over the years is that it’s best to assume people are smart, hardworking and well-intentioned. Of course, that’s not always true, but we don’t learn much from dismissing people as stupid, lazy and crooked. And if we don’t learn from others’ mistakes, then how can we avoid the same failures?

Often, smart people get taken in because they’re smart. They have a track record of seeing things others don’t, making good bets, and winning big. People give them deference, come to them for advice, and laugh at their jokes. For them, a lack of discernible evidence isn’t always a warning sign. It can be an opportunity.

We all need to check ourselves so that we don’t believe everything that we think. There are formal processes that can help, such as pre-mortems and red teams, but most of all we need to own up to the flaws in our own brains. We have a tendency to see patterns that aren’t really there and to double down on bad ideas once we’ve committed to them.

As Richard Feynman famously put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” Smart people get taken in so easily because they forget that basic principle. They mythologize themselves and become the heroes of their own stories. That’s why there will always be more stories like “Inventing Anna” and Theranos.

Suckers are born every minute and, invariably, they think they’re playing it smart.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash
