GUEST POST from Pete Foley
In this blog, I return to and expand on a paradox that has concerned me for some time: are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a two-edged sword. It brings huge benefits, but also commensurate risks.
Ostensibly, change is good. Because of technology, most of us today live more comfortable lives, enjoying better health, greater longevity, and more leisure and abundance than our ancestors.
Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human Stone Age lasted about 300,000 years before ending around 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted thirty centuries. It was certainly not without innovations, but by modern standards things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison: literally from horse-drawn carts delivering milk when she was a child in poor SE London, to today’s world of self-driving cars and exploration of our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum has experienced.
The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change-averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than around new experiences; it simply takes more mental effort to manage new stuff than familiar stuff. As a result, we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.
Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and from far greater distances, than our ancestors ever dreamed of. The internet gives us unprecedented access to both information and connectivity, but it is also a source of misinformation and manipulation.
The Abundance Dichotomy: Innovation increases abundance, but it’s arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequality in how the ‘wealth’ it creates is distributed. Behavioral science has consistently shown that humans make far more relative than absolute judgments. Being better off than our ancestors doesn’t actually do much for us. Instead, we are far more interested in being better off than our peers, neighbors, or the people we compare ourselves to on Instagram. And therein lies yet another challenge: social media means we now compare ourselves to far more people than past generations did, so the standards we judge ourselves against are higher than ever before.
Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them once they have already become a significant problem. Climate change is the poster child for this, as a huge unanticipated consequence of the industrial revolution; the same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.
The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few if any of us really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with less and less comprehensive understanding of the direct, let alone the unintended, consequences of these complex changes.
The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base almost inevitably leads to multiple further innovations. The connectivity and information access of the internet alone are driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone Age innovation was slow not because our ancestors lacked intelligence; to the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’, we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. The trouble with chain reactions is that they can be very hard to control.
Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation. How does human culture exist for 50,000 years or more, and then ‘suddenly’ two people, Darwin and Wallace, come up with the theory of evolution independently and simultaneously? The same question applies to calculus (Newton and Leibniz), or to the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It’s not coincidence; it simply reflects that once all the pieces of a puzzle are in place, somebody, and more likely multiple people, will inevitably make the connections and see the next step in the innovation chain.
But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. Unfortunately, the associated side effects and unanticipated consequences also build up, and my concern is that they could potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with the side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. Looking forward, and taking AI as just one example, we can already somewhat anticipate some worrying possibilities. But what about the surprises, analogous to climate change, that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness, yet despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.
Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police: North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology requires far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to create initially, opens numerous doors for misuse that require far fewer resources.
The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Strassmann and Hahn in the late 1930s made the bomb inevitable, and set the stage for a race between the Allies and Nazi Germany to turn theory into practice. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or choose to legislate against them and, in so doing, cede that terrible power to the Nazis. Not an enviable choice.
Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing the work underground or falling behind and deferring to others. The recent open letter from Meta to the EU (https://euneedsai.com/), chastising it for the potential economic impacts of its AI regulations, may have dripped with self-interest, but that didn’t make it wrong. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations and less conservative countries will still pursue the upside, and risk the downside. The cat is very much out of the bag.
Muddling Through: The good news is that when faced with potentially perilous change in the past, we’ve muddled through. Hopefully we will do so again. We’ve avoided a nuclear holocaust, at least for now. Social media has destabilized our social order, but hasn’t destroyed it, yet. We’ve been through a pandemic, and come out of it, not unscathed, but still functioning. We are making progress in dealing with climate change, and have made enormous strides in managing pollution.
Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge, mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will the side effects, nefarious uses, mistakes and unintended consequences that derive from it. The key factors that have helped us in the past are time and resources, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.
What Can, or Should, We Do? I certainly don’t have simple answers. We’re all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically understand, let alone own, all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We’ll never anticipate everything, and we’ll still get blindsided. And putting resources against ‘what if’ scenarios is always a hard sell. But maybe we need to go into sales mode.
Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource to predicting and managing its own risks?
With many emerging technologies, we are now where we were with climate change in the 1900s: we are implementing massive, unpredictable change, and by definition have no idea what its unanticipated consequences will be. I personally think we’ll deal with climate change. It’s difficult to slow a leviathan that has been building for over a hundred years, but we’ve taken the important first steps in acknowledging the problem, and are beginning to implement corrective action.
But big issues require big solutions. Long-term, I personally believe the most important thing for humanity is to escape the gravity well. Given the scale of our ability to create global change, interplanetary colonization is not a luxury, but an essential. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage that, and avoid nuclear war or synthetic pandemics for long enough to achieve it. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.
Image credits: Microsoft Copilot
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.