GUEST POST from Pete Foley
An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It's a clever innovation, but it also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something I believe we always need to keep front and center in innovation: Just Because We Can, Doesn't Mean That We Should!
Pandora’s Box
It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.
Scenario Planning
In most fields we routinely try to anticipate technical challenges, running all sorts of stress, stability, and consumer tests in an effort to catch potential problems. We often still miss things, especially when it's difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.
Another Innovator's Dilemma
Octopus farming doesn't, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus, and ownership. Love is blind, and innovators who've invested themselves in a project are inevitably biased, and often struggle to view the downsides of their invention objectively.
And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don't see ChatGPT as Armageddon, it is certainly evidence that Pandora's Box is open, and none of us really knows how it will evolve, for better or worse.
What Are Our Solutions?
So what can we do to try to avoid doing more harm than good? Do we need an innovator's equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand; in reality we often don't know how an innovation will behave in the real world, and often don't fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we'd also eliminate most progress. That said, I do believe that how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing difficulty of predicting outcomes as the pace of change accelerates.
Can We Ever Go Back?
Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.
So parallel to the question of 'should we do it' is 'can we afford not to?' Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project, it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.
Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora's Box is already cracked open. Of course AI is one, but there are many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain, and even space innovation. All of these have the potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad actors are arguably more likely to pursue ethically dubious avenues of research.
Behavioral Science
And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I'm close to, it's also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But Behavioral Science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge via the internet and social media.

There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It's been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture, and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It's one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races with significant socio-political implications.

There is no doubt we can use behavioral science for good, whether it's helping people eat better, save more for retirement, drive more carefully, or many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it's all too easy for well-intentioned and passionate people to go too far, in the worst case flirting with propaganda, and thus potentially enabling damaging or even dangerous policy.
What Can or Should We Do?
We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and the potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of 'do no harm' Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try to anticipate the ethical downsides of technology?
Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.
But other innovations brought issues we might have seen coming with a bit more scenario planning. Airbags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the 'Cobra Effect'. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set the now-worthless snakes free, and the result was more cobras than at the original starting point. It's amusing now, but it also illustrates the often significant gap between foresight and hindsight.
I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?
I’d love to hear the opinions of the innovation community!
Image credit: Pixabay