
Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but it also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try and anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to anticipate potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus with recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. That said, I do believe that how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But behavioral science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media.

There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications.

There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully or many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try and anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobra snakes, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set the now-worthless snakes free. The result was more cobras than at the original starting point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world-changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should, we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay


The Morality of Machines

Ethical AI in an Age of Rapid Development

GUEST POST from Chateau G Pato

In the breathless race to develop and deploy artificial intelligence, we are often mesmerized by what machines can do, without pausing to critically examine what they should do. As a human-centered change and innovation thought leader, I believe the greatest challenge of our time is not technological, but ethical. The tools we are building are not neutral; they are reflections of our own data, biases, and values. The true mark of a responsible innovator in this era will be the ability to embed morality into the very code of our creations, ensuring that AI serves humanity rather than compromises it.

The speed of AI development is staggering. From generative models that create art and text to algorithms that inform hiring decisions and medical diagnoses, AI is rapidly becoming an invisible part of our daily lives. But with this power comes immense responsibility. The decisions an AI makes, based on the data it is trained on and the objectives it is given, have real-world consequences for individuals and society. A biased algorithm can perpetuate and amplify discrimination. An opaque one can erode trust. A poorly designed one can lead to catastrophic errors. We are at a crossroads, and our choices today will determine the ethical landscape of tomorrow.

Building ethical AI is not a checkbox; it is a continuous, human-centered practice. It demands that we move beyond a purely technical mindset and integrate a robust framework for ethical inquiry into every stage of the development process. This means:

  • Bias Auditing: Proactively identifying and mitigating biases in training data to ensure that AI systems are fair and equitable for all users.
  • Transparency and Explainability: Designing AI systems that can explain their reasoning and decisions in a way that is understandable to humans, fostering trust and accountability.
  • Human Oversight: Ensuring that there is always a human in the loop, especially for high-stakes decisions, to override AI judgments and provide essential context and empathy.
  • Privacy by Design: Building privacy protections into AI systems from the ground up, minimizing data collection and ensuring sensitive information is handled with the utmost care.
  • Societal Impact Assessment: Consistently evaluating the potential second- and third-order effects of an AI system on individuals, communities, and society as a whole.

Case Study 1: The Bias of AI in Hiring

The Challenge: Automating the Recruitment Process

A major technology company, in an effort to streamline its hiring process, developed an AI-powered tool to screen resumes and identify top candidates. The goal was to increase efficiency and remove human bias from the initial selection process. The AI was trained on a decade’s worth of past hiring data, which included a history of successful hires.

The Ethical Failure:

The company soon discovered a critical flaw: the AI was exhibiting a clear gender bias, systematically penalizing resumes that included the word “women’s” or listed attendance at women’s colleges. The algorithm, having been trained on historical data where a majority of successful applicants were male, had learned to associate male-dominated resumes with success. It was not a conscious bias, but a learned one, and it was perpetuating and amplifying the very bias the company was trying to eliminate. The AI was a mirror, reflecting the historical inequities of the company’s past hiring practices. Without human-centered ethical oversight, the technology was making the problem worse.

The Results:

The company had to scrap the project. The case became a cautionary tale, highlighting the critical importance of bias auditing and the fact that AI is only as good as the data it is trained on. It showed that simply automating a process does not make it fair. Instead, it can embed and scale existing inequities at an unprecedented rate. The experience led the company to implement a rigorous ethical review board for all future AI projects, with a specific focus on diversity and inclusion.

Key Insight: AI trained on historical data can perpetuate and scale existing human biases, making proactive bias auditing a non-negotiable step in the development process.
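To make “proactive bias auditing” a little more concrete, here is a minimal sketch, in Python, of one common first check: comparing selection rates across candidate groups. The group labels and outcomes below are invented for illustration, and a real audit would go well beyond a single ratio, but even this simple comparison would surface the kind of disparity described in the case study.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (candidate_group, advanced_to_interview).
# These records are invented for illustration only.
screening_outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Share of candidates advanced by the screening tool, per group."""
    advanced = defaultdict(int)
    total = defaultdict(int)
    for group, was_advanced in outcomes:
        total[group] += 1
        advanced[group] += int(was_advanced)
    return {group: advanced[group] / total[group] for group in total}

rates = selection_rates(screening_outcomes)
best_rate = max(rates.values())

# Compare each group's selection rate to the most-favored group.
# A ratio below 0.8 (the "four-fifths rule") is a common, if crude, red flag.
for group, rate in sorted(rates.items()):
    ratio = rate / best_rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio vs. best {ratio:.2f} [{flag}]")
```

Run against a screener’s real decisions rather than made-up numbers, a check like this can flag a problem long before a tool reaches production, which is exactly when it is cheapest to fix.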

Case Study 2: Autonomous Vehicles and the Trolley Problem

The Challenge: Making Life-and-Death Decisions

The development of autonomous vehicles (AVs) presents one of the most complex ethical challenges of our time. While AI can significantly reduce human-caused accidents, there are inevitable scenarios where an AV will have to make a split-second decision in a no-win situation. This is a real-world application of the “Trolley Problem”: should the car swerve to save its passenger, or should it prioritize the lives of pedestrians?

The Ethical Dilemma:

This is a problem with no easy answer, and it forces us to confront our own values and biases. The AI must be programmed with a moral framework, but whose? A utilitarian framework would prioritize the greatest good for the greatest number, while a deontological framework might prioritize the preservation of the passenger’s life. The choices a programmer makes have profound ethical and legal implications. Furthermore, the public’s trust in AVs hinges on its understanding of how they will behave in these extreme circumstances. An AI that operates as an ethical black box will never gain full public acceptance.
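To make the abstraction concrete, here is a deliberately toy sketch in Python showing how the choice of moral framework, rather than the sensor data, determines the decision. The option names and harm scores are invented; no real autonomous-vehicle stack decides this way.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible maneuver, with invented expected-harm scores (0 = none, 1 = severe)."""
    name: str
    passenger_harm: float
    pedestrian_harm: float

# Toy scenario: neither option is safe, and the two frameworks disagree.
options = [
    Option("swerve", passenger_harm=0.7, pedestrian_harm=0.05),
    Option("brake_straight", passenger_harm=0.2, pedestrian_harm=0.6),
]

def utilitarian_choice(opts):
    """Minimize total expected harm, weighting everyone equally."""
    return min(opts, key=lambda o: o.passenger_harm + o.pedestrian_harm)

def passenger_priority_choice(opts):
    """A rule-based stance that protects the occupant first, then minimizes other harm."""
    return min(opts, key=lambda o: (o.passenger_harm, o.pedestrian_harm))

print("Utilitarian framework chooses:        ", utilitarian_choice(options).name)
print("Passenger-priority framework chooses: ", passenger_priority_choice(options).name)
```

The point is not the numbers, which are made up, but that someone has to choose and document the objective function, and that choice is an ethical and legal decision, not a purely technical one.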

The Results:

The challenge has led to a global conversation about ethical AI. Car manufacturers, tech companies, and governments are now collaborating to create ethical guidelines and regulatory frameworks. Projects like MIT’s Moral Machine have collected millions of human responses to hypothetical scenarios, providing invaluable data on our collective moral intuitions. While a definitive solution remains elusive, the process has forced the industry to move beyond just building a functional machine and to address the foundational ethical questions of safety, responsibility, and human trust. It has made it clear that for AI to be successful in our society, it must be developed with a clear and transparent moral compass.

Key Insight: When AI is tasked with making life-and-death decisions, its ethical framework must be transparent and aligned with human values, requiring a collaborative effort from technologists, ethicists, and policymakers.

The Path Forward: Building a Moral Compass for AI

The morality of machines is not an abstract philosophical debate; it is a practical challenge that innovators must confront today. The case studies above are powerful reminders that building ethical AI is not an optional add-on but a fundamental requirement for creating technology that is both safe and beneficial. The future of AI is not just about what we can build, but about what we choose to build. It’s about having the courage to slow down, ask the hard questions, and embed our best human values—fairness, empathy, and responsibility—into the very core of our creations. It is the only way to ensure that the tools we design serve to elevate humanity, rather than to diminish it.


Image credit: Gemini
