GUEST POST from Pete Foley
Most innovations go through a similar cycle, often represented as an s-curve.
We start with something potentially game changing. It’s inevitably a rough-cut diamond: un-optimized and not fully understood. We then optimize it, usually starting with a fairly steep learning curve as we address the ‘low hanging fruit’, before evolving into a fine-tuning stage. Eventually we squeeze so much efficiency from it that the incremental cost of further improvement outweighs the benefit. We then either commoditize it, or jump to another s-curve.
This is certainly not a new model, and there are multiple variations on the theme. But as the pace of innovation accelerates, something fundamentally new is happening with this s-curve pattern. S-curves are getting closer together. Increasingly we are jumping to new s-curves before we’ve fully optimized the previous one. This means that we are innovating quickly, but also that we are often taking more ‘leaps into the dark’ than ever before.
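The s-curve pattern above, and the effect of closely spaced curves, can be sketched with a simple logistic model. This is a minimal illustrative sketch, not an empirical model: the function names, parameters, and the choice of "best available curve" for overall capability are all my assumptions.

```python
import math

def s_curve(t, midpoint, rate=1.0, ceiling=1.0):
    """Logistic s-curve: slow start, a steep 'learning' phase around
    the midpoint, then a plateau of diminishing returns at the ceiling.
    Parameters are illustrative, not empirical."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t, curves):
    """Overall capability as the best of several overlapping s-curves,
    modelling a jump to a new curve before the old one is fully optimized.
    Each curve is a (midpoint, rate, ceiling) tuple."""
    return max(s_curve(t, m, r, c) for (m, r, c) in curves)

# Two closely spaced curves: the second (ceiling 2.0) overtakes the
# first (ceiling 1.0) while the first is still maturing.
curves = [(5.0, 1.0, 1.0), (9.0, 1.0, 2.0)]
```

The key feature of the sketch is that the handover to the second curve happens while the first is still on its steep learning slope, which is exactly the "leap into the dark" pattern described above.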
This has some unintended consequences of its own:
1. Cumulative Unanticipated Consequences. No matter how much we try to anticipate how a new technology will fare in the real world, there are always surprises. Many surprises emerge soon after we hit the market, and create fires that have to be put out quite quickly (and literally in the case of some battery technologies). But other unanticipated effects can be slower burn (pun intended). The most pertinent example of this is of course greenhouse gasses from industrialization, and their impact on our climate, which took us years to recognize. But there are many more examples, including the rise of antibiotic resistance, plastic pollution, hidden carcinogens, the rising cost of healthcare and the mental health issues associated with social media. Just as the killer application for a new innovation is often missed at its inception, its killer flaws can be too. And if the causal relationship between these issues and the innovation is indirect, they can accumulate across multiple s-curves before we notice them. By the time we do, the technology is often so entrenched that extracting ourselves from it is a huge challenge.
2. Poorly Understood Complex Network Effects. The impact of new innovation is very hard to predict when it is introduced into a complex, multivariable system. A butterfly flapping its wings can cascade and amplify through a system, and when the butterfly is transformative technology, the effect can be profound. We usually have line of sight to first-generation causal effects: for example, we know that electric cars draw on the existing electric grid, as do solar energy farms. But in today’s complex, interconnected world, it’s difficult to predict second-, third- or fourth-generation network effects, and likely not cost-effective or efficient for an innovator to try to do so. For example, the supply-demand interdependency of solar and electric cars is a second-generation network effect that we are aware of, but that is already challenging to fully predict. More causally distant effects can be even harder: how do we fund the road network without a gas tax, how will gas and electric cost and supply interact as we transition, and what impact will that have on global energy costs and socio-political stability? Then add in the complexities of supplying the new raw materials needed to support new battery technologies. These are pretty challenging to model, and of course they are only the challenges we are at least aware of. The unanticipated consequences of such a major change are, by definition, unanticipated!
3. Fragile Foundations. In many cases, one s-curve forms the foundation of the next. So if we have not optimized the previous s-curve sufficiently, flaws can carry over into the next, often in the form of ‘givens’. For example, an electric car is a classic s-curve jump from the internal combustion engine. But for reasons that include design efficiency, compatibility with existing infrastructure, and perhaps most importantly, consumer cognitive comfort, much of the supporting design and technology carries over from previous designs. We have redesigned the engine, but have only evolved wheels, brakes, etc., and have kept legacies such as 4+ seats. But automotive is, in many ways, one of our more stable foundations: we have had a lot of time to stabilize past s-curves before jumping to new ones. Newer technologies such as AI, social media and quantum computing have enjoyed far less time to stabilize foundational s-curves before we dance across to embrace closely spaced new ones. That will likely increase the chances of unintended consequences. And we are already seeing the canary in the coal mine, with unexpected mental health and social instability increasingly associated with social media.
What’s the Answer? We cannot, and should not, stop innovating. We face too many fundamental issues with climate, food security and socio-political stability that need solutions, and need them quickly.
But the conundrum we face is that many, if not all, of these issues are rooted in past, well-intentioned innovation and the unintended consequences that derive from it. So a lot of our innovation effort is focused on solving issues created by previous rounds of innovation. Nobody expected or intended the industrial revolution to impact our climate, but much of our current innovation capability is now rightly focused on managing the fallout it has created (again, pun intended). Our challenge is to continue to innovate, but also to break the cycle of today’s innovation being increasingly focused on fixing yesterday’s!
Today new waves of innovation associated with ‘sustainable’ technology, genetic manipulation, AI and quantum computing are already crashing onto our shores. These interdependent innovations will likely dwarf the industrial revolution in scale and complexity, and have the potential for massive impact, both good and bad. And they are occurring at a pace that gives us little time to deal with anticipated consequences, let alone unanticipated ones.
We’ll Find a Way? One answer is to just let it happen, and fix things as we go. Innovation has always been a bumpy road, and humanity has a long history of muddling through. The agricultural revolution ultimately allowed the human population to expand exponentially, but only after concentrating people into larger social groups in which disease ravaged many societies. We largely ‘solved’ that by dying in large numbers and creating herd immunity: a solution, but not an optimum one. When London was in danger of being buried in horse poop, the internal combustion engine saved us, but that in turn ultimately contributed to climate change. According to projections from the Club of Rome in the 1970s, economic growth should have ground to a halt long ago, mired in starvation and population contraction. Instead, advances in farming technology have allowed us to keep growing. But that increase in population contributes substantially to our issues with climate today. ‘We’ll find a way’ is an approach that works until it doesn’t. And even when it works, it is usually not painless, and often simply defers rather than solves issues.
Anticipation? Another option is to get better at both anticipating issues and triaging the unexpected. Maybe AI will give us the processing power to do this, provided of course that it doesn’t become our biggest issue in and of itself.
Slow Down and Be More Selective? In a previous article I asked, ‘just because we can do it, does it mean we should?’ That was through a primarily moral lens. But I think unintended consequences make this an even bigger question for broader innovation strategy. The more we innovate, the more consequences we likely create. And the faster we innovate, the more vulnerable we are to fragility: slowing down creates resilience, while speed reduces it. So one option is to be more choiceful about which innovations we pursue, and to look more critically at the benefit-risk balance. For example, how badly do we need some of the new medications and vaccines being rushed to market? Is all of our gene manipulation research needed? Do we really need a new phone every two years? For sure, in some cases the benefits are clear, but in others, is profit driving us more than it should?
In a similar vein, and to be provocative, are we also moving too quickly with renewable energy? It is certainly something we need. But are we, for example, pinning too much on a single, almost first-generation form of large-scale solar technology? We are still at the steep part of the learning curve, so are quite likely missing unintended consequences. Would a more staged transition over a decade or so add resilience, allow us to optimize the technology based on real-world experience, and help us ferret out unanticipated issues? Should we be creating a more balanced portfolio, leaning more on established technology such as nuclear? Sometimes moving a bit more slowly ultimately gets you there faster, and a long-term issue like climate is a prime candidate for balancing speed, optimization and resilience to ultimately create a more efficient, robust and better understood network.
The speed of AI development is another obvious question, but I suspect a more difficult one to evaluate. In this case, Pandora’s box is already open, and calls to slow AI research would likely mean that responsible players stop while research continues elsewhere, either underground or in less responsible nations. A North Korean AI that is superior to everyone else’s is an example where the risk of not moving likely outweighs the risk of unintended consequences.
Regulation? Regulation is a good way of forcing more thoughtful evaluation of benefit versus risk. But it only works if regulators (government) understand the technology, or at least its benefits versus risks, better than its developers do. This can work reasonably well in pharma, where we have a long track record. But it is much more challenging in newer areas of technology; AI is a prime example where this is almost certainly not the case. And as the complexity of all innovation increases, regulation will become less effective, and increasingly likely to create unintended consequences of its own.
I realize that this may all sound a bit alarmist, and certainly any call to slow down the renewable energy transition or pharma development is going to be unpopular. But history has shown that slowing down creates resilience, while speeding up creates instability and waves of growth and collapse. And an arms race in which much of our innovative capability is focused on fixing issues created by previous innovations is one we always risk losing. So, as unanticipated consequences are by definition really difficult to anticipate, is this a point in time where we in the innovation community need to have a discussion about slowing down and being more selective? Where should we innovate, and where not? When should we move fast, and when might we be better served by some productive procrastination? Do we need better risk-assessment processes? It’s always easier to do this kind of analysis in hindsight, but do we really have that luxury?
Image credit: Pixabay