
How to Pursue a Grand Innovation Challenge

GUEST POST from Greg Satell

All too often, innovation is confused with agility. We’re told to “adapt or die” and encouraged to “move fast and break things.” But the most important innovations take time. Einstein spent ten years on special relativity and then another ten on general relativity. To solve tough, fundamental problems, we have to be able to commit for the long haul.

As John F. Kennedy put it in his moonshot speech, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills.” Every organization should pursue grand challenges for the same reason.

Make no mistake: innovation needs exploration. If you don’t explore, you won’t discover. If you don’t discover, you won’t invent, and if you don’t invent, you will be disrupted. It’s just a matter of time. Unfortunately, exploration can’t be optimized or iterated. That’s why grand challenges don’t favor the quick and agile, but the patient and the determined.

1. Don’t Bet The Company

Most grand challenges aren’t like the original moonshot, which was, in large part, the result of the space race with the Soviets that began with the Sputnik launch in 1957. That was a no-holds-barred effort that consumed the efforts of the nation, because it was widely seen as a fundamental national security issue that represented a clear and present danger.

For most organizations, those types of “bet-the-company” efforts are to be avoided. You don’t want to bet your company if you can avoid it, for the simple reason that if you lose you are unlikely to survive. Most successful grand challenges don’t involve a material investment. They are designed to be sustainable.

“Grand challenges are not about the amount of money you throw at the problem,” Bernard Meyerson, IBM’s Chief Innovation Officer, told me. “To run a successful grand challenge program, failure should not be a material risk to the company, but success will have a monumental impact. That’s what makes grand challenges an asymmetric opportunity.”

Take, for example, Google’s X division. While the company doesn’t release its budget, it appeared to cost the company about $3.5 billion in 2018, a small fraction of its $23 billion in annual profits at the time. At the same time, just one project, Waymo, was valued at as much as $70 billion in 2018. In a similar vein, the $3.8 billion invested in the Human Genome Project had generated nearly $800 billion of economic activity as of 2011.

So the first rule of grand challenges is not to bet the company. They are, in fact, what you do to avoid having to bet the company later on.

2. Identify A Fundamental Problem

Every innovation starts out with a specific problem to be solved. The iPod, for example, was Steve Jobs’s way of solving the problem of having “a thousand songs in my pocket.” More generally, technology companies strive to deliver better performance and user experience, drug companies aim to cure disease and retail companies look for better ways to drive transactions. Typically, firms evaluate investment based on metrics rooted in past assumptions.

Grand challenges are different because they are focused on solving fundamental problems that will change assumptions about what’s possible. For example, IBM’s Jeopardy Grand Challenge had no clear business application, but transformed artificial intelligence from an obscure field to a major business. Later, Google’s AlphaGo made a similar accomplishment with self-learning. Both have led to business opportunities that were not clear at the time.

Grand challenges are not just for technology companies either. MD Anderson Cancer Center has set up a series of Moonshots, each of which is designed to have far-reaching effects. 100Kin10, an education nonprofit, has identified a set of grand challenges it has tasked its network with solving.

Talia Milgrom-Elcott, Executive Director of 100Kin10, told me she uses the 5 Whys as a technique to identify grand challenges. Start with a common problem, keep asking why it keeps occurring and you will eventually get to the root problem. By focusing your efforts on solving that, you can make a fundamental, wide-ranging impact.

3. Commit To A Long Term Effort

Grand challenges aren’t like normal problems. They don’t conform to timelines and can’t effectively be quantified. You can’t justify a grand challenge on the basis of return on investment, because fundamental problems are too pervasive and ingrained to surrender themselves to any conventional form of analysis.

Consider The Cancer Genome Atlas, which eventually sequenced and published over 10,000 tumor genomes. When Jean Claude Zenklusen first came up with the idea in 2005, it was highly controversial, because although it wasn’t particularly expensive, it would still take resources away from more conventional research.

Today, however, the project is considered to be a runaway success, which has transformed the field, greatly expanding knowledge and substantially lowering costs to perform genetic research. It has also influenced efforts in other fields, such as the Materials Genome Initiative. None of this would have been possible without commitment to a long-term effort.

And that’s what makes grand challenges so different. They are not business as usual and not immediately relevant to present concerns. They are explorations that expand conventional boundaries, so they cannot be understood within them.

An Insurance Policy Against A Future You Can’t Yet See

Typically, we analyze a business by extrapolating current trends and making adjustments for things that we think will be different. So, for example, if we expect the market to pick up, we may invest in more capacity to profit from greater demand. On the other hand, if we expect a softer market, we’d probably start trimming costs to preserve margins.

The problem with this type of analysis is that the future tends to surprise us. Technology changes, customer preferences shift and competitors make unexpected moves. Nobody, no matter how diligent or smart, gets every call right. That’s why every business model fails sooner or later; it’s just a matter of time.

It’s also what makes pursuing grand challenges so important. They are basically an insurance policy against a future we can’t yet see. By investing sustainably in solving fundamental problems, we can create new businesses to replace the ones that will inevitably falter. Google doesn’t invest in self-driving cars to improve its search business; it invests because it knows that the profits from search won’t last forever.

The problem is that there is a fundamental tradeoff between innovation and optimization, so few organizations have the discipline to invest in exploration today for an uncertain payoff tomorrow. That’s why so few businesses last.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


A New Age Of Innovation and Our Next Steps

GUEST POST from Greg Satell

In Mapping Innovation, I wrote that innovation is never a single event, but a process of discovery, engineering and transformation, and that those three things hardly ever happen at the same time or in the same place. Clearly, the Covid-19 pandemic marked an inflection point that brought important shifts in each of those phases.

Digital technology showed itself to be transformative, as we descended into quarantine and found an entire world of video conferencing and other technologies that we scarcely knew existed. At the same time it was revealed that the engineering of synthetic biology—and mRNA technology in particular—was more advanced than we had thought.

This is just the beginning. I titled the last chapter of my book “A New Era of Innovation,” because it had become clear that we had begun to cross a new Rubicon, in which digital technology becomes so ordinary and mundane that it’s hard to remember what life was like without it, while new possibilities alter existence to such an extent we will scarcely believe it.

Post-Digital Architectures

For the past 50 years, the computer industry—and information technology in general—has been driven by Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years. Yet now Moore’s Law is ending, and that means we will have to revisit some very basic assumptions about how technology works.

To be clear, the end of Moore’s Law does not mean the end of advancement. There are a number of ways we can speed up computing. We can, for instance, use technologies such as ASICs and FPGAs to optimize chips for specialized tasks. Still, those approaches come with tradeoffs; Moore’s Law essentially gave us innovation for free.

Another way out of the Moore’s Law conundrum is to shift to completely new architectures, such as quantum, neuromorphic and, possibly, biological computers. Yet here again, the transition will not be seamless or without tradeoffs. Instead of technology based on transistors, we will have multiple architectures based on entirely different logical principles.

So it seems that we will soon be entering a new era of heterogeneous computing, in which we use digital technology to access different technologies suited to different tasks. Each of these technologies will require very different programming languages and algorithmic approaches and, most likely, different teams of specialists to work on them.

What that means is that those who run the IT operations in the future, whether that person is a vaunted CTO or a lowly IT manager, will be unlikely to understand more than a small part of the system. They will have to rely heavily on the expertise of others to an extent that isn’t required today.

Bits Driving Atoms

While the digital revolution does appear to be slowing down, computers have taken on a new role in helping to empower technologies in other fields, such as synthetic biology, materials science and manufacturing 4.0. These, unlike so many digital technologies, are rooted in the physical world and may have the potential to be far more impactful.

Consider the revolutionary mRNA technology, which not only empowered us to develop a Covid vaccine in record time and save the planet from a deadly pandemic, but also makes it possible to design new vaccines in a matter of hours. There is no way we could achieve this without powerful computers driving the process.

There is similar potential in materials discovery. Suffice it to say, every product we use, whether it is a car, a house, a solar panel or whatever, depends on the properties of materials to perform its function. Some need to be strong and light, while others need special electrical properties. Powerful computers and machine learning algorithms can vastly improve our ability to discover better materials (not to mention overcome supply chain disruptions).

Make no mistake, this new era of innovation will be one of atoms, not bits. The challenge we face now is to develop computer scientists who can work effectively with biologists, chemists, factory managers and experts of all kinds to truly create a new future.

Creation And Destruction

The term creative destruction has become so ingrained in our culture we scarcely stop to think where it came from. It was popularized by economist Joseph Schumpeter to resolve what many saw as an essential “contradiction” of capitalism. Essentially, some thought that if capitalists did their jobs well, there would be an increasing surplus, which would then be appropriated to accumulate power and rig the system further in capitalists’ favor.

Schumpeter pointed out that this wasn’t necessarily true, because of technological innovation. Railroads, for example, completely changed the contours of competition in the American Midwest. Certainly, there had been unfair competition in many cities and towns, but once the railroad came to town, competition flourished (and if it didn’t come, the town died).

For most of history since the beginning of the Industrial Revolution, this has been a happy story. Technological innovation displaced businesses and workers, but resulted in increased productivity which led to more prosperity and entirely new industries. This cycle of creation and destruction has, for the most part, been a virtuous one.

That is, until fairly recently. Digital technology, despite the hype, hasn’t produced the type of productivity gains that earlier technologies, such as electricity and internal combustion, did, but it has displaced labor at a faster rate. Put simply, the productivity gains from digital technology are too meager to finance enough new industries with better jobs, which has created income inequality rather than greater prosperity.

We Need To Move From Disrupting Markets To Tackling Grand Challenges

There’s no doubt that digital technology has been highly disruptive. In industry after industry, from retail to media to travel and hospitality, nimble digital upstarts have set established industries on their head, completely changing the basis upon which firms compete. Many incumbents haven’t survived. Many others are greatly diminished.

Still, in many ways, the digital revolution has been a huge disappointment. Besides the meager productivity gains, we’ve seen a global rise in authoritarian populism, stagnant wages, reduced productivity growth and weaker competitive markets, not to mention an anxiety epidemic, increased obesity and, at least in the US, decreased life expectancy.

We can—and must—do better. We can learn from the mistakes we made during the digital revolution and shift our mindset from disrupting markets to tackling grand challenges. This new era of innovation will give us the ability to shape the world around us like never before, at a molecular level, and achieve incredible things.

Yet we can’t just leave our destiny to the whims of market and technological forces. We must actually choose the outcomes we prefer and build strategies to achieve them. The possibilities that we will unlock from new computing architectures, synthetic biology, advanced materials science, artificial intelligence and other things will give us that power.

What we do with it is up to us.

Image credit: Pixabay
