
Why Humans Fail to Plan for the Future

GUEST POST from Greg Satell

I was recently reading Michio Kaku’s wonderful book, The Future of Humanity, about colonizing space and was amazed at how detailed some of the plans are. Plans for a Mars colony, for example, are already fairly advanced. In other cases, scientists are actively thinking about technologies that won’t be viable for a century or more.

Yet while we seem to be so good at planning for life in outer space, we are much less capable of thinking responsibly about the future here on earth, especially in the United States. Our federal government deficit recently rose to 4.6% of GDP, which is obviously unsustainable in an economy that’s growing at a meager 2.3%.

That’s just one data point, but everywhere you look, we seem unable to plan for the future. Consumer debt in the US recently exceeded the levels reached before the 2008 crash. Our infrastructure is falling apart. Air quality is getting worse. The list goes on. We need to start thinking more seriously about the future, but we don’t seem to be able to. Why is that?

It’s Biology, Stupid

The simplest and most obvious explanation for why we fail to plan for the future is basic human biology. We have pleasure centers in our brains that release a neurotransmitter called dopamine, which gives us a feeling of well-being. So it shouldn’t be surprising that we seek to maximize our dopamine fix in the present and neglect the future.

Yuval Noah Harari made this argument in his book Homo Deus, in which he argued that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, Harari argues, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Yet somehow this doesn’t feel persuasive. Adults in even primitive societies are expected to overcome basic urges. Citizens of Ancient Rome were taxed to pay for roads that led to distant lands and took decades to build. Medieval communities built churches that stood for centuries. Why would we lose our ability to think long-term in just the past generation or so?

The Profit Motive

Another explanation of why we neglect the future is the profit motive. Pressed by demanding shareholders to deliver quarterly profits, corporate executives focus on showing short-term profits instead of investing for the future. The result is increased returns to fund managers, but a hollowing out of corporate competitiveness.

A recent article in Harvard Business Review would appear to bear this out. When a team of researchers looked into the health of the innovation ecosystem in the US, they found that corporate America has largely checked out. They also observed that storied corporate research labs, such as Bell Labs and Xerox PARC, have diminished over time.

Yet take a closer look and the argument doesn’t hold up. In fact, data from the National Science Foundation show that corporate research has increased from roughly 40% of total R&D investment in the 1950s and ’60s to more than 60% today. At the same time, while some firms have closed research facilities, others, such as Microsoft, IBM and Google, have either opened new ones or greatly expanded previous efforts. Overall R&D spending has risen over time.

Take a look at how Google innovates and you’ll see the source of some of the dissonance. Fifty years ago, the only real option for corporate investment in research was a corporate lab. Today, however, there are many other avenues, including partnerships with academic researchers, internal venture capital operations, incubators, accelerators and more.

The Free Rider Problem

A third reason we may fail to invest in the future is the free rider problem. In this view, the problem is not that we don’t plan for the future, but that we don’t want to spend money on others who are undeserving. For example, why should we pay higher taxes to educate kids from outside our communities? Or to fund infrastructure projects that are wasteful and corrupt?

This type of welfare queen argument can be quite powerful. Although actual welfare fraud has been shown to be incredibly rare, there are many who believe that the public sector is inherently wasteful and money would be more productively invested elsewhere. This belief doesn’t only apply to low-income people, but also to “elites” such as scientists.

Essentially, this is a form of kin selection. We are more willing to invest in the future of people we see as similar to ourselves, because that is a form of self-survival. However, when we find ourselves asked to invest in the future of those we see as different from ourselves, whether that difference is one of race, social class or even profession, we balk.

Yet here again, a closer look shows the facts don’t quite fit the narrative. Charitable giving, for example, has risen almost every year since 1977. So it’s strange that we’re increasingly generous in giving to those who are in need, but stingy when it comes to things like infrastructure and education.

A New Age of Superstition

What’s especially strange about our inability to plan for the future is that it’s relatively new. In fact, after World War II, we invested heavily in the future. We created new avenues for scientific investment at agencies like the National Science Foundation and the National Institutes of Health, rebuilt Europe with the Marshall Plan and educated an entire generation with the GI Bill.

It wasn’t until the 1980s that our willingness to plan for and invest in the future began to wane, mostly due to two ideas that warped decision making. The first, called the Laffer Curve, argued that by lowering taxes we can increase revenue and that tax cuts, essentially, pay for themselves. The second, shareholder value, argued that whatever was best for shareholders is also best for society.

Both ideas have been partially or thoroughly debunked. Over the past 40 years, lower tax rates have consistently led to lower revenues and higher deficits. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, recently denounced the concept of shareholder value. Yet strangely, many still use both to support anti-future decisions.

We seem to be living in a new era of superstition, where mere belief is enough to inspire action. So projects that easily capture the imagination, such as colonizing Mars, are able to garner fairly widespread support, while investments in basic things like infrastructure, debt reduction or the environment are neglected.

The problem, in other words, seems to lie mostly in the realm of collective narrative. We are more than capable of enduring privation today to benefit tomorrow, just as businesses routinely accept lower profits today to invest in tomorrow. We are even capable of giving altruistically to others in need. All we need is a story to believe in.

There is, however, the possibility that it is not the future we really have a problem with, but each other, and that our lack of a common story arises from a lack of shared values, which leads to major differences in how we view the same facts. In any case, the future suffers.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


We Need a More Biological View of Technology

GUEST POST from Greg Satell

It’s no accident that Mary Shelley’s novel, Frankenstein, was published in the early 19th century, at roughly the same time as the Luddite movement was gaining momentum. It was in that moment that people first began to take stock of the technological advances that brought about the first Industrial Revolution.

Since then, we have oscillated between techno-utopianism and dystopian visions of machines gone mad. For every “space odyssey” promising an automated, enlightened future, there seems to be a “Terminator” series warning of our impending destruction. Neither scenario has ever come to pass, and it is unlikely that either ever will.

What both the optimists and the Cassandras miss is that technology is not something that exists independently from us. It is, in fact, intensely human. We don’t merely build it, but continue to nurture it through how we develop and shape ecosystems. We need to go beyond a simple engineering mindset and focus on a process of revealing, building and emergence.

1. Revealing

World War II brought the destructive potential of technology to the fore of human consciousness. As deadly machines ravaged Europe and bombs of unimaginable power exploded in Asia, the whole planet was engulfed in a maelstrom of human design. It seemed that the technology we had built had become a modern version of Frankenstein’s monster, destined from the start to turn on its master.

Yet the German philosopher Martin Heidegger saw things differently. In his 1954 essay, The Question Concerning Technology, he described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil are also revealed.

He offers the example of a hydroelectric dam, which uncovers a river’s energy and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not so much “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. That process of channeling, in turn, reveals even more.

That’s why, as I wrote in Mapping Innovation, innovation is not about coming up with new ideas, but about identifying meaningful problems. It’s through exploring tough problems that we reveal new things, and those new things can lead to important solutions. Not all who wander are lost.

2. Building

The concept of revealing would seem to support the view of Shelley and the Luddites. It suggests that once a force is revealed, we are powerless to shape its trajectory. J. Robert Oppenheimer, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico, expressed a similar view. “Now I am become Death, the destroyer of worlds,” he said, quoting the Bhagavad Gita.

Yet in another essay, Building Dwelling Thinking, Heidegger explains that what we build for the world is highly dependent on our interpretation of what it means to live in it. The relationship is, of course, reflexive. What we build depends on how we wish to dwell, and that act, in and of itself, shapes how we build further.

Again, Mark Zuckerberg and Facebook are instructive. His insight into human nature led him to build his platform on what he saw as The Hacker Way and to resolve to “move fast and break things.” Unfortunately, that approach left his enterprise highly vulnerable to schemes by actors such as Cambridge Analytica and the Russian GRU.

Yet technology is not, by itself, determinant. Facebook is, to a great extent, the result of conscious choices that Mark Zuckerberg made. If he had had a different set of experiences than those of a young, upper-middle-class kid who had never encountered a moment of true danger in his life, he might have been more cautious and chosen differently.

History has shown that those who build powerful technologies can play a vital role in shaping how they are used. Many of the scientists of Oppenheimer’s day became activists, preparing a manifesto that highlighted the dangers of nuclear weapons, which helped lead to the Partial Test Ban Treaty. In much the same way, the Asilomar Conference, held in 1975, led to important constraints on genomic technologies.

3. Emergence

No technology stands alone, but combines with other technologies to form systems. That’s where things get confusing because when things combine and interact they become more complex. As complexity theorist Sam Arbesman explained in his book, Overcomplicated, this happens because of two forces inherent to the way that technologies evolve.

The first is accretion. A product such as an iPhone represents the accumulation of many different technologies, including microchips, programming languages, gyroscopes, cameras, touchscreens and lithium ion batteries, just to name a few. As we figure out more tasks an iPhone can perform, more technologies are added, building on each other.

The second force is interaction. Put simply, much of the value of an iPhone is embedded in how it works with other technologies to make tasks easier. We want to use it to access platforms such as Facebook to keep in touch with friends, Yelp so that we can pick out a nice restaurant where we can meet them and Google Maps to help us find the place. These interactions, combined with accretion, create an onward march towards greater complexity.

It is through ever increasing complexity that we lose control. Leonard Read pointed out in his classic essay, I, Pencil, that even an object as simple as a pencil is far too complex for any single person to produce by themselves. A smartphone—or even a single microchip—is exponentially more complex.

People work their entire lives to become experts on even a minor aspect of a technology like an iPhone, a narrow practice of medicine or an obscure facet of a single legal code. As complexity increases, so does specialization, making it even harder for any one person to see the whole picture.

Shaping Ecosystems And Taking A Biological View

In 2013, I wrote that we are all Luddites now, because advances in artificial intelligence had become so powerful that anyone who wasn’t nervous didn’t really understand what was going on. Today, as we enter a new era of innovation and technologies become infinitely more powerful, we are entering a new ethical universe.

Typically, the practice of modern ethics has been fairly simple: Don’t lie, cheat or steal. Yet with many of our most advanced technologies, such as artificial intelligence and genetic engineering, the issue isn’t so much about doing the right thing, but figuring out what the right thing is when the issues are novel, abstruse and far reaching.

What’s crucial to understand, however, is that it’s not any particular invention, but ecosystems that create the future. The Luddites were right to fear textile mills, which did indeed shatter their way of life. But the mill was only one technology; when it was combined with other developments, such as agricultural advances, labor unions and modern healthcare, lives greatly improved.

Make no mistake, our future will be shaped by our own choices, which is why we need to abandon our illusions of control. We need to shift from an engineering mindset, where we try to optimize for a limited set of variables, toward a more biological view, growing and shaping ecosystems of talent, technology, information and cultural norms.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
