Technology Pushing Us into a New Ethical Universe

GUEST POST from Greg Satell

We take it for granted that we’re supposed to act ethically and, usually, that seems pretty simple. Don’t lie, cheat or steal, don’t hurt anybody on purpose and act with good intentions. In some professions, like law or medicine, the issues are somewhat more complex, and practitioners are trained to make good decisions.

Yet ethics in the more classical sense isn’t so much about doing what you know is right as about thinking seriously about what the right thing is. Unlike the classic “ten commandments” type of morality, there are many situations in which determining the right action to take is far from obvious.

Today, as our technology becomes vastly more powerful and complex, ethical issues are increasingly coming to the fore. Over the next decade we will have to build some consensus on issues like how much accountability a machine should have and to what extent we should alter the nature of life. The answers are far from clear-cut, but we desperately need to find them.

The Responsibility of Agency

For decades intellectuals have pondered an ethical dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
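
To make that concrete, here is a minimal, purely illustrative sketch in Python of what it means for such a tradeoff to be “explicitly encoded.” The priority rule, names and numbers are invented for illustration; no manufacturer discloses logic this simple, and real systems reason over probabilities rather than neat counts.

```python
from dataclasses import dataclass

# Purely hypothetical illustration: a hard-coded priority policy.
# The point is only that *some* ordering of outcomes must ultimately
# live in the software, chosen by someone, somewhere.

@dataclass
class Outcome:
    label: str                # e.g. "brake_straight", "swerve"
    pedestrians_at_risk: int
    passengers_at_risk: int

def choose_maneuver(options: list[Outcome]) -> Outcome:
    """Pick the option that minimizes total people at risk,
    breaking ties in favor of pedestrians (an assumed policy)."""
    return min(
        options,
        key=lambda o: (
            o.pedestrians_at_risk + o.passengers_at_risk,  # fewest people at risk first
            o.pedestrians_at_risk,                         # then fewest pedestrians at risk
        ),
    )

options = [
    Outcome("brake_straight", pedestrians_at_risk=1, passengers_at_risk=0),
    Outcome("swerve", pedestrians_at_risk=0, passengers_at_risk=1),
]
print(choose_maneuver(options).label)  # tie on total risk, broken toward pedestrians: "swerve"
```

Whether that tie-breaking rule is the right one is precisely the kind of question that can no longer stay in the seminar room once it ships in software.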

That’s just the start. It’s become increasingly clear that data bias can vastly distort decisions about everything from whether we are admitted to a school or get a job to whether we go to jail. Still, we’ve yet to achieve any real clarity about who should be held accountable for the decisions an algorithm makes.

As we move forward, we need to give serious thought to the responsibility of agency. Who’s responsible for the decisions a machine makes? What should guide those decisions? What recourse should those affected by a machine’s decision have? These are no longer theoretical debates, but practical problems that need to be solved.

Evaluating Tradeoffs

“Now I am become Death, the destroyer of worlds,” said J. Robert Oppenheimer, quoting the Bhagavad Gita, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico. It was clear that we had crossed a Rubicon. There was no turning back, and Oppenheimer, as the leader of the project, felt an enormous sense of responsibility.

Yet the specter of nuclear Armageddon was only part of the story. In the decades that followed, nuclear medicine saved thousands, if not millions, of lives. Mildly radioactive isotopes, which allow us to track molecules as they travel through a biological system, have also been a boon for medical research.

The truth is that every significant advancement has the potential for both harm and good. Consider CRISPR, the gene editing technology that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies.

In the case of nuclear technology many scientists, including Oppenheimer, became activists. They actively engaged with the wider public, including politicians, intellectuals and the media to raise awareness about the very real dangers of nuclear technology and work towards practical solutions.

Today, we need similar engagement between people who create technology and the public square to explore the implications of technologies like AI and CRISPR, but it has scarcely begun. That’s a real problem.

Building A Consensus Based on Transparency

It’s easy to paint pictures of technology going haywire. However, when you take a closer look, the problem isn’t so much with technological advancement, but ourselves. For example, the recent scandals involving Facebook were not about issues inherent to social media websites, but had more to do with an appalling breach of trust and lack of transparency. The company has paid dearly for it and those costs will most likely continue to pile up.

It doesn’t have to be that way. Consider the case of Paul Berg, a pioneer in the creation of recombinant DNA, for which he won the Nobel Prize. Unlike Zuckerberg, he recognized the gravity of the Pandora’s box he had opened and convened the Asilomar Conference to discuss the dangers, which resulted in the Berg Letter that called for a moratorium on the riskiest experiments until the implications were better understood.

In her book, A Crack in Creation, Jennifer Doudna, who made the pivotal discovery for CRISPR gene editing, points out that a key aspect of the Asilomar conference was that it included not only scientists, but also lawyers, government officials and media. It was the dialogue between a diverse set of stakeholders, and the sense of transparency it produced, that helped the field advance.

The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. We can’t control what we reveal through exploration and discovery, but we can—and should—be wise about what we build. If you just “move fast and break things,” don’t be surprised if you break something important.

Meeting New Standards

In Homo Deus, Yuval Noah Harari writes that the best reason to learn history is “not in order to predict, but to free yourself of the past and imagine alternative destinies.” As we have already seen, when we rush into technologies like nuclear power, we create problems like Chernobyl and Fukushima and reduce technology’s potential.

The issues we will have to grapple with over the next few decades will be far more complex and consequential than anything we have faced before. Nuclear technology, while horrifying in its potential for destruction, requires a tremendous amount of scientific expertise to produce. Even today, it remains confined to governments and large institutions.

New technologies, such as artificial intelligence and gene editing, are far more accessible. Anybody with a modicum of expertise can go online and download powerful algorithms for free. High school kids can order CRISPR kits for a few hundred dollars and modify genes. We need to employ far better judgment than organizations like Facebook and Google have shown in the recent past.

Some seem to grasp this. Most of the major tech companies have joined with the ACLU, UNICEF and other stakeholders to form the Partnership on AI, a forum for developing sensible standards for artificial intelligence. Salesforce recently hired a Chief Ethical and Humane Use Officer. Jennifer Doudna has begun a similar process for CRISPR at the Innovative Genomics Institute.

These are important developments, but they are little more than first steps. We need a more public dialogue about the technologies we are building to achieve some kind of consensus about what the risks are and what we as a society are willing to accept. If not, the consequences, financial and otherwise, may be catastrophic.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Four Lessons Learned from the Digital Revolution

GUEST POST from Greg Satell

When Steve Jobs was trying to lure John Sculley from Pepsi to Apple in 1982, he asked him, “Do you want to sell sugar water for the rest of your life, or do you want to come with me and change the world?” The ploy worked and Sculley became the first major CEO of a conventional company to join a hot Silicon Valley startup.

It seems so quaint today, in the midst of a global pandemic, that a young entrepreneur selling what was essentially a glorified word processor thought he was changing the world. The truth is that the digital revolution, despite all the hype, has been something of a disappointment. Certainly it failed to usher in the “new economy” that many expected.

Yet what is also becoming clear is that the shortcomings have less to do with the technology itself (in fact, the Covid-19 crisis has shown just how amazingly useful digital technology can be) than with ourselves. We expected technology and markets to do all the work for us. Today, as we embark on a new era of innovation, we need to reflect on what we have learned.

1. We Live In a World of Atoms, Not Bits

In 1996, as the dotcom boom was heating up, the economist W. Brian Arthur published an article in Harvard Business Review that signaled a massive shift in how we view the economy. While traditional markets are made up of firms that face diminishing returns, Arthur explained that information-based businesses can enjoy increasing returns.

More specifically, Arthur spelled out that if a business had high up-front costs, network effects and the ability to lock in customers, it could enjoy increasing returns. That, in turn, would mean that information-based businesses would compete in winner-take-all markets, that management would need to become less hierarchical and that investing heavily to win market share early could become a winning strategy.
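
A toy numerical sketch, with entirely made-up figures, shows why those conditions matter: a software business spreads a large fixed cost over every additional user at near-zero marginal cost, so unit costs keep falling, while a traditional producer eventually pays more for each additional unit.

```python
# Toy illustration of Arthur's contrast; all numbers are hypothetical.

def software_unit_cost(users: int, fixed_cost=50_000_000, marginal_cost=0.10) -> float:
    # High up-front cost, near-zero marginal cost: unit cost falls as users grow.
    return fixed_cost / users + marginal_cost

def factory_unit_cost(units: int, fixed_cost=1_000_000, base_marginal=50.0) -> float:
    # Modest fixed cost, but marginal cost rises as capacity is strained.
    return fixed_cost / units + base_marginal * (1 + units / 100_000)

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} units: software {software_unit_cost(n):10.2f}   "
          f"factory {factory_unit_cost(n):10.2f}")
```

Past a certain scale the software curve keeps improving while the factory curve turns against itself, which is exactly the dynamic that makes winner-take-all markets and land-grab investing look rational.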

Arthur’s article was, in many ways, prescient, and before long investors were committing enormous amounts of money to companies without real businesses in the hopes that just a few of these bets would hit it big. In 2011, Marc Andreessen predicted that software would eat the world.

He was wrong. The recent debacle at WeWork, as well as massive devaluations at firms like Uber, Lyft and Peloton, shows that there is a limit to increasing returns, for the simple reason that we live in a world of atoms, not bits. Even today, information and communication technologies make up only 6% of GDP in OECD countries. Obviously, most of our fate rests with the other 94%.

The Covid-19 crisis bears this out. Sure, being able to binge watch on Netflix and attend meetings on Zoom is enormously helpful, but to solve the crisis we need a vaccine. To do that, digital technology isn’t enough. We need to combine it with synthetic biology to make a real world impact.

2. Businesses Do Not Self Regulate

The case Steve Jobs made to John Sculley was predicated on the assumption that digital technology was fundamentally different from the sugar-water sellers of the world. The Silicon Valley ethos (or conceit, as the case may be) was that while traditional businesses were motivated purely by greed, technology businesses answered to a higher calling.

This was no accident. As Arthur pointed out in his 1996 article, while atom-based businesses thrived on predictability and control, knowledge-based businesses facing winner-take-all markets are constantly in search of the “next big thing.” So teams that could operate like mission-oriented “commando units” on a holy quest would have a competitive advantage.

Companies like Google, which vowed not to “be evil,” could attract exactly the type of technology “commandos” that Arthur described. They would, as Mark Zuckerberg put it, “move fast and break things,” but they would also be more likely to hit on that unpredictable piece of code that would lead to massively increasing returns.

Unfortunately, as we have seen, businesses do not self-regulate. Knowledge-based businesses like Google and Facebook have proven to be every bit as greedy as their atom-based brethren. Privacy legislation, such as GDPR, is a good first step, but we will need far more than that, especially as we move into post-digital technologies that are far more powerful.

Still, we’re not powerless. Consider the work of Stop Hate For Profit, a broad coalition that includes the Anti-Defamation League and the NAACP, which has led to an advertiser boycott of Facebook. We can demand that corporations behave the way we want them to, not merely do whatever the market will bear.

3. As Our Technology Becomes More Powerful, Ethics Matter More Than Ever

Over the past several years some of the sense of wonder and possibility surrounding digital technology gave way to no small amount of fear and loathing. Scandals like the one involving Facebook and Cambridge Analytica not only alerted us to how our privacy is being violated, but also to how our democracy has been put at risk.

Yet privacy breaches are just the beginning of our problems. Consider artificial intelligence, which exposes us to a number of ethical challenges, ranging from inherent bias to life-and-death dilemmas such as the trolley problem. It is imperative that we learn to create algorithms that are auditable, explainable and transparent.
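
What “auditable” can mean in practice is easy to sketch: every automated decision carries a record of its inputs and the per-feature contributions that produced it, so a human can later reconstruct why it was made. The scoring model, weights and field names below are invented for illustration only, not a reference implementation.

```python
import json
from datetime import datetime, timezone

# Hypothetical weights for a deliberately transparent linear score.
WEIGHTS = {"years_experience": 0.6, "skills_match": 1.2, "referral": 0.4}
THRESHOLD = 2.0

def decide(applicant_id: str, features: dict) -> dict:
    """Score an application and emit an audit record explaining the result."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = sum(contributions.values())
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "inputs": features,
        "contributions": contributions,  # why the score is what it is
        "score": round(score, 3),
        "decision": "advance" if score >= THRESHOLD else "review_by_human",
    }
    print(json.dumps(record))            # in practice, append to a durable audit store
    return record

decide("A-1001", {"years_experience": 3, "skills_match": 0.8, "referral": 1})
```

The real debate is about far richer models than this, but the principle scales: if a decision cannot be explained and audited after the fact, it probably should not be automated in the first place.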

Or consider CRISPR, the gene editing technology, available for just a few hundred dollars, that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies. Worried about some hacker cooking up a harmful computer virus? What about a terrorist cooking up a real one?

That’s just the start. As quantum and neuromorphic computing become commercially available, most likely within a decade or so, our technology will become exponentially more powerful and the risks will increase accordingly. Clearly, we can no longer just “move fast and break things,” or we’re bound to break something important.

4. We Need a New Way to Evaluate Success

By some measures, we’ve been doing fairly well over the past ten years. GDP has hovered around the historical growth rate of 2.3%. Job growth has been consistent and solid. The stock market has been strong, reflecting robust corporate profits. It has, in fact, been the longest US economic expansion on record.

Yet those figures were masking some very troubling signs, even before the pandemic. Life expectancy in the US has been declining, largely due to drug overdoses, alcohol abuse and suicides. Consumer debt hit record highs in 2019 and bankruptcy rates were already rising. Food insecurity has been an epidemic on college campuses for years.

So, while top-line economic figures painted a rosy picture, there was rising evidence that something troubling was afoot. The Business Roundtable partly acknowledged this fact with its statement discarding the notion that creating shareholder value is the sole purpose of a business. There are also a number of initiatives designed to replace GDP with broader measures.

The truth is that our well-being can’t be reduced to a few tidy metrics, and we need more meaning in our lives than more likes on social media. Probably the most important thing the digital revolution has to teach us is that technology should serve people and not the other way around. If we really want to change the world for the better, that’s what we need to keep in mind.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Addressing Ethical Concerns: Ensuring AI-powered Workplace Productivity Benefits All

GUEST POST from Art Inteligencia

In today’s fast-paced world, artificial intelligence (AI) has become an integral part of workplace productivity. From streamlining processes to enhancing decision-making, AI technologies have the potential to revolutionize the way we work. However, with great power comes great responsibility, and it is essential to address the ethical concerns that come with the widespread adoption of AI in the workplace.

One of the primary ethical concerns surrounding AI in the workplace is the potential for bias in decision-making. AI algorithms are only as good as the data they are trained on, and if this data is biased, the AI system will perpetuate that bias. This can lead to discriminatory outcomes for employees, such as biased hiring decisions or performance evaluations. To combat this, organizations must ensure that their AI systems are trained on diverse and unbiased datasets.
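
As a sketch of what checking for that kind of bias can look like, the Python below compares selection rates across groups and flags large gaps, in the spirit of the “four-fifths” heuristic often used as a first-pass screen for disparate impact. The outcome data and the 0.8 threshold are hypothetical placeholders, not a compliance standard.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (group, was_selected).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
for group, selected in outcomes:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"selection-rate ratio: {ratio:.2f}")   # 0.33, well below 0.8 -> investigate
```

A check like this is only a starting point; it says nothing about why the gap exists, but it turns a vague worry about bias into a number someone is accountable for explaining.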

Case Study 1: Amazon’s Hiring Algorithm

One notable example of bias in AI can be seen in Amazon’s hiring algorithm. The company developed an AI system to automate the screening of job applicants, with the goal of streamlining the hiring process. However, the system started to discriminate against female candidates, as it was trained on historical hiring data that favored male candidates. Amazon eventually scrapped the system, highlighting the importance of ethical considerations when implementing AI in the workplace.

Another ethical concern with AI in the workplace is the potential for job displacement. As AI technologies become more advanced, there is a fear that they will replace human workers, leading to job losses and economic instability. To address this concern, organizations must focus on reskilling and upskilling their workforce to prepare them for the changes brought about by AI.

Case Study 2: McDonald’s AI-powered Drive-thru

McDonald’s recently introduced AI-powered drive-thru technology in select locations, which uses AI algorithms to predict customer orders based on factors such as time of day, weather, and previous ordering patterns. While this technology has led to improved efficiency and customer satisfaction, there have been concerns about the impact on the workforce. To address this, McDonald’s has implemented training programs to help employees adapt to the new technology and take on more customer-facing roles.

Conclusion

The ethical concerns surrounding AI in the workplace must be addressed to ensure that the benefits of AI-powered productivity are distributed equitably. By focusing on diversity and inclusion in AI training data, as well as investing in reskilling and upskilling programs for employees, organizations can mitigate the potential negative impacts of AI on the workforce. By taking a proactive approach to ethics in AI, organizations can create a workplace that benefits all employees, customers, and stakeholders.

Image credit: Pixabay

The Ethics of Futurology: Exploring Its Impact on Society

GUEST POST from Art Inteligencia

The term “futurology” has become increasingly associated with the exploration of potential social, economic and technological developments of the future. It is a field of study that requires a great deal of ethical consideration, due to its potential to shape the lives of individuals and entire societies. In this article, we will explore the ethical implications of futurology and its impact on society.

The most obvious ethical concern of futurology is that it can be used to shape the future in ways that may not be beneficial to society as a whole. For example, futurists have long been concerned with the potential impacts of automation and artificial intelligence on the workforce. Such technology could lead to massive job losses, which would have a devastating effect on the economy and lead to a rise in inequality. As a result, it is important to consider the implications of such technologies before they are implemented.

Furthermore, futurology can be used to create a vision of the future that may be unattainable or unrealistic. Such visions can shape public opinion and, if taken too far, can lead to disillusionment and disappointment. It is therefore important to consider the implications of any predictions made and to ensure that they are based on real-world data and evidence.

In addition to the potential ethical concerns, futurology can also have positive impacts on society. By predicting potential social, economic, and technological trends, futurists can help governments and businesses prepare for the future. This can help to create more informed and efficient decision-making, leading to better outcomes for all.

Despite the potential benefits of futurology, it is important to consider the ethical implications of its use. It is essential that any predictions made are based on evidence and do not lead to unrealistic expectations or disillusionment. It is also important to consider the potential impacts of any new technologies and to ensure that they are implemented responsibly. By doing so, futurology can help to shape a better future for all.

Bottom line: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: Pixabay

The Ethical Implications of Genetic Engineering and Biotechnology Advancements

GUEST POST from Art Inteligencia

Genetic engineering and biotechnology advancements have revolutionized various domains, including medicine, agriculture, and environmental conservation. These innovative breakthroughs have the potential to benefit humanity significantly. However, as technology advances, it raises ethical concerns regarding the responsible and sustainable use of these techniques. This thought leadership article explores the intricate ethical considerations associated with genetic engineering and biotechnology through two compelling case studies.

Case Study 1: CRISPR-Cas9 and Human Germline Editing

The development and widespread use of CRISPR-Cas9 gene-editing technology have opened up possibilities for targeted modifications to the genetic material of organisms, including humans. The prospect of efficiently and precisely editing human genomes brings forth a myriad of ethical concerns.

One of the most prominent concerns is the application of CRISPR-Cas9 in germline editing, altering the heritable genetic code of future generations. While this technology holds immense potential for treating genetic diseases and eradicating hereditary anomalies, it also raises questions of long-term consequences, consent, and potential unknown harm to individuals or gene pools.

For instance, the controversial case of Chinese scientist Dr. He Jiankui, who claimed in 2018 to have genetically modified twin girls to confer resistance to HIV, ignited a global uproar. This unauthorized experiment lacked consensus within the scientific community, bypassed ethical boundaries and violated regulations. It highlighted the need for strict ethical guidelines and international consensus to govern the use of germline editing, ensuring transparency, safety and accountable research.

Case Study 2: Genetic Modification in Agricultural Crops

Biotechnology advancements have played a significant role in improving crop yields, enhancing nutritional value, and increasing resistance to pests and diseases. However, the application of genetically modified (GM) crops also raises ethical questions related to food security, environmental impact, and consumer rights.

An illustrative case study is the widespread cultivation of Bt cotton, genetically modified to produce the Bacillus thuringiensis (Bt) toxin. This toxin offers natural resistance against bollworms, drastically reducing the need for chemical pesticides. While Bt cotton has provided tremendous benefits to farmers in terms of increased yields and reduced environmental pollution, it has also led to concerns related to adverse effects on non-target organisms, resistance development in target pests, and monopolistic control of seed markets.

The ethical implications of these concerns revolve around striking a balance between sustainable agricultural practices, long-term environmental impacts, farmers’ livelihoods, and the rights of consumers to make informed choices about the food they consume.

Conclusion

Genetic engineering and biotechnology advancements have immense transformative potential, but they also bear significant ethical implications. The case studies of CRISPR-Cas9 germline editing and genetic modification in agriculture demonstrate the multifaceted nature of these ethical considerations.

To address the ethical challenges posed by these advancements, proactive measures must be taken, including the establishment of robust ethical frameworks, international guidelines, and meaningful stakeholder engagement. Such measures can help ensure transparency, accountability, equitable access to benefits, and a responsible approach to genetic engineering and biotechnology.

By navigating the ethical implications of genetic engineering and biotechnology with a thoughtful and balanced perspective, we can harness these innovations for the betterment of humanity while safeguarding the well-being of individuals, societies, and the environment.

Image credit: Unsplash
