Tag Archives: Science

Department Of Energy Programs Helping to Create an American Manufacturing Future

GUEST POST from Greg Satell

In the recession that followed the dotcom crash in 2000, the United States lost five million manufacturing jobs and, while there has been an uptick in recent years, all indications are that they may never be coming back. Manufacturing, perhaps more than any other sector, relies on deep networks of skills and assets that tend to be highly regional.

The consequences of this loss are deep and pervasive. Losing a significant portion of our manufacturing base has led not only to economic vulnerability, but to political polarization. Clearly, it is important to rebuild our manufacturing base. But to do that, we need to focus on new, more advanced technologies.

That’s the mission of the Advanced Manufacturing Office (AMO) at the Department of Energy. By providing a crucial link between the cutting-edge science done at the National Labs and private industry, it has been able to make considerable progress. As the collaboration between government scientists and private industry widens and deepens over time, US manufacturing may well be revived.

Linking Advanced Research To Private Industry

The origins of the Department of Energy date back to the Manhattan Project during World War II. The immense project was, in many respects, the start of “big science.” Hundreds of top researchers, used to working in small labs, traveled to newly established outposts to collaborate at places like Los Alamos, New Mexico and Oak Ridge, Tennessee.

After the war was over, the facilities continued their work and similar research centers were established to expand the effort. These National Labs became the backbone of the US government’s internal research efforts. In 1977, the National Labs, along with a number of other programs, were combined to form the Department of Energy.

One of the core missions of the AMO is to link the research done at the National Labs to private industry, and the Lab Embedded Entrepreneurship Programs (LEEP) have been particularly successful in this regard. Currently, there are four such programs: Cyclotron Road, Chain Reaction Innovations, West Gate and Innovation Crossroads.

I was able to visit Innovation Crossroads at Oak Ridge National Laboratory and meet the entrepreneurs in its current cohort. Each is working to transform a breakthrough discovery into a market-changing application, yet, due to technical risk, would not be able to attract funding in the private sector. The LEEP program offers a small amount of seed money, access to lab facilities, and scientific and entrepreneurial mentorship to help them get off the ground.

That’s just one of the ways that the AMO opens up the resources of the National Labs. It also helps businesses get access to supercomputing resources (five of the ten fastest computers in the world are located in the United States, most of them at the National Labs) and conducts early-stage research to benefit private industry.

Leading Public-Private Consortia

Another area in which the AMO supports private industry is by taking a leading role in consortia, such as the Manufacturing Institutes that were set up to give American companies a leg up in advanced areas such as clean energy, composite materials and chemical process intensification.

The idea behind these consortia is to create hubs that provide a critical link between government labs, top scientists at universities and private companies looking to solve real-world problems. It both helps firms advance in key areas and allows researchers to focus their work where they will have the greatest possible impact.

For example, the Critical Materials Institute (CMI) was set up to develop alternatives to materials that are subject to supply disruptions, such as the rare earth elements that are critical to many high tech products and are largely produced in China. A few years ago it developed, along with several National Labs and Eck Industries, an advanced alloy that can replace more costly materials in components of advanced vehicles and aircraft.

“We went from an idea on a whiteboard to a profitable product in less than two years and turned what was a waste product into a valuable asset,” Robert Ivester, Director of the Advanced Manufacturing Office told me.

Technology Assistance Partnerships

In 2011, the International Organization for Standardization released its ISO 50001 guidelines. Like previous guidelines that focused on quality management and environmental impact, ISO 50001 recommends best practices to reduce energy use. These can benefit businesses through lower costs and result in higher margins.

Still, for harried executives facing cutthroat competition and demanding customers, figuring out how to implement new standards can easily get lost in the mix. So a third key role that the AMO plays is to assist companies who wish to implement new standards by providing tools, guides and access to professional expertise.

The AMO offers similar support for a number of critical areas, such as prototype development and also provides energy assessment centers for firms that want to reduce costs. “Helping American companies adopt new technology and standards helps keep American manufacturers on the cutting edge,” Ivester says.

“Spinning In” Rather Than Spinning Out

Traditionally we think of the role of government in business largely in terms of regulation. Legislatures pass laws and watchdog agencies enforce them so that we can have confidence in the food we eat, the products we buy and the medicines that are supposed to cure us. While that is clearly important, we often overlook how government can help drive innovation.

Inventions spun out of government labs include the Internet, GPS and laser scanners, just to name a few. Many of our most important drugs were also originally developed with government funding. Still, traditionally the work has mostly been done in isolation and only later offered to private companies through licensing agreements.

What makes the Advanced Manufacturing Office different from most scientific programs is that it is more focused on “spinning in” private industry rather than spinning out technologies. That enables executives and entrepreneurs with innovative ideas to power them with some of the best minds and advanced equipment in the world.

As Ivester put it to me, “Spinning out technologies is something that the Department of Energy has traditionally done. Increasingly, we want to spin ideas from industry into our labs, so that companies and entrepreneurs can benefit from the resources we have here. It also helps keep our scientists in touch with market needs and helps guide their research.”

Make no mistake, innovation needs collaboration. Combining ideas from the private sector with the cutting-edge science from government labs can help American manufacturing compete in the 21st century.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Time is Not Fundamental

GUEST POST from Geoffrey A. Moore

For all my life I have been taught that time is the fourth dimension in a space-time continuum. I mean, for goodness sake, Einstein said this was so, and all of physics has followed his lead. Nonetheless, I want to argue that, while the universe may indeed have four dimensions, time is not one of them, nor is it a fundamental element of reality.

Before you think I have really jumped off the deep end, let me just say that my claim is that motion is a fundamental element of reality, and it is the one that time is substituting for. This is based simply on observation. That is, we can observe and measure mass. We can observe and measure space. We can observe and measure energy. We can observe and measure motion. Time, on the other hand, is simply a tool we have developed to measure motion. That is, motion is fundamental, and time is derived.

Consider where our concept of time came from. It started with three distinct units—the day, the month, and the year. Each is based on a cyclical motion—the earth turning around its axis, the moon encircling the earth, the earth and moon encircling the sun. All three of these cyclical motions have the property of returning to their starting point. They repeat, over and over and over. That’s how they came to our attention in the first place.

If we call this phenomenon cyclical time, we can contrast it with linear time. The latter is time we experience as passing, the one to which we apply the terms past, present, and future. But in fact, what is passing is not time but motion, motion we are calibrating by time. That is, we use the cyclical units of time to measure the linear distance between any given motion and a reference location.

As I discuss in The Infinite Staircase, by virtue of the Big Bang, the Second Law of Thermodynamics, and the ongoing rush to greater and greater entropy, the universe is inherently in motion. Some of that motion gets redirected to do work, and some of that work has resulted in life emerging on our planet. Motion is intrinsic to our experience of life, much more so than time. As babies we have no sense of time, but we immediately experience mass, space, energy, and motion.

Because mass, space, energy, and motion are core to our experience, we have developed tools to help us engage with them strategically. We can weigh mass and reshape it in myriad ways to serve our ends. We can measure space using anything as a standard length and create structures of whatever size and shape we need. We can measure energy in terms of temperature and pressure and manipulate it to move all kinds of masses through all kinds of spaces. And we can measure motion through space by using standard units of time.

The equation for so doing is typically written as v = d/t. This equation makes us believe that velocity is a concept derived from the primitives of distance and time. But a more accurate way of looking at reality is to say t = d/v. That is, we can observe distance and motion, from which we derive time. If you have a wristwatch with a second hand, this is easily confirmed. A minute consists of a hand traveling through a fixed angular distance, 360°, at a constant velocity set by convention, in this case by the International System of Units, these days atomically calibrated by a specified number of oscillations of cesium. Time is derived by dividing a given distance by a given velocity.
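
As a minimal sketch of that derivation (a toy illustration in Python; the numbers are simply the watch convention described above, nothing more), the snippet below treats the swept distance and the conventional velocity as the observed quantities and computes the elapsed time from them:

    # Time as a derived quantity: t = d / v
    angular_distance_deg = 360.0         # one full sweep of the second hand
    angular_velocity_deg_per_s = 6.0     # set by convention: 360 degrees every 60 seconds

    elapsed_time_s = angular_distance_deg / angular_velocity_deg_per_s
    print(elapsed_time_s)                # 60.0 -- one "minute", derived from distance and motion

Swap in any other distance and velocity and the same division yields the time; nothing in the calculation requires time to be treated as a primitive.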

OK, so what? Here the paths of philosophy and physics diverge, with me being able to pursue the former but not the latter. Before parting, however, I would like to ask the physicists in the room, should there be any, a question: If one accepted the premise that motion was the fourth dimension, not time, such that we described the universe as a continuum of spacemotion instead of spacetime, would that make any difference? Specifically, with respect to Einstein’s theories of special and general relativity, are we just substituting terms here, or are there material consequences? I would love to learn what you think.

At my end, I am interested in the philosophical implications of this question, specifically in relation to phenomenology, the way we experience time. To begin, I want to take issue with the following definition of time served up by Google:

a nonspatial continuum that is measured in terms of events which succeed one another from past through present to future.

From my perspective, this is just wrong. It calls for using events to measure time. The correct approach would focus on using time to measure motion, describing the situation as follows:

an intra-spatial continuum that can be measured in terms of time as one event succeeds another from a position of higher energy to one of lower energy.

The motive for this redefinition is to underscore that the universe is inherently in motion, following the Second Law of Thermodynamics, perpetually seeking to cool itself down by spreading itself out. We here on Earth are born into the midst of that action, boats set afloat upon a river, moving with the current on the way to a sea of ultimate cool. We can go with the flow, we can paddle upstream, we can even divert the river of entropy to siphon off energy to do work. The key point to register is that motion abides, inexorably following the arrow of entropy, moving from hot to cold until heat death is achieved.

If motion is a primary dimension of the universe, there can be no standing still. Phenomenologically, this is quite different from the traditional time-based perspective. In a universe of space and time, events have to be initiated, and one can readily imagine a time with no events, a time when nothing happens, maybe something along the lines of Beckett’s Waiting for Godot. In a universe of space and motion, however, that is impossible. There are always events, and we are always in the midst of doing. A couch potato is as immersed in events as a race car driver. Or, to paraphrase Milton, they also move who only stand and wait.

A second consequence of the spacemotion continuum is that there is no such thing as eternity and no such thing as infinity. Nothing can exist outside the realm of change, and the universe is limited to whatever amount of energy was released at the Big Bang. Now, to be fair, from a phenomenological perspective, the dimensions of the universe are so gigantic that, experientially, they might as well be infinite and eternal. But from a philosophical perspective, the categories of eternity and infinity are not ontologically valid. They are asymptotes, not entities.

Needless to say, all this flies in the face of virtually every religion that has ever taken root in human history. As someone deeply committed to traditional ethics, I am grateful to all religions for supporting ethical action and an ethical mindset. If there were no other way to secure ethics, then I would opt for religion for sure. But we know a lot more about the universe today than we did several thousand years ago, and so there is at least an opportunity to forge a modern narrative, one that can find in secular metaphysics a foundation for traditional values. That’s what The Infinite Staircase is seeking to do.

That’s what I think. What do you think?

Image Credit: Pixabay


The Event That Made Einstein an Icon

GUEST POST from Greg Satell

On April 3rd, 1921, a handful of journalists went to interview a relatively unknown scientist named Albert Einstein. When they arrived to meet his ship they found a crowd of thousands waiting for him, screaming with adulation. Surprised at his popularity, and charmed by his genial personality, the story of Einstein’s arrival made the front page in major newspapers.

It was all a bit of a mistake. The people in the crowd weren’t there to see Einstein, but Chaim Weizmann, the popular Zionist leader that Einstein was traveling with. Nevertheless, that’s how Einstein gained his iconic status. In a way, Einstein didn’t get famous because of relativity, relativity got famous because of Einstein.

This, of course, in no way lessens Einstein’s accomplishments, which were considerable. Yet as Albert-László Barabási, another highly accomplished scientist, explains in The Formula, there is a big difference between success and accomplishment. The truth is that success isn’t what you think it is but, with talent, persistence and some luck, anyone can achieve it.

There Is Virtually No Limit To Success, But There Is To Accomplishment

Einstein was, without a doubt, one of the great scientific minds in history. Yet the first half of the 20th century was a golden age for physics, with many great minds. Niels Bohr, Einstein’s sparring partner at the famous Bohr–Einstein debates (which Bohr is widely considered to have won), was at least as prominent. Yet Einstein towers over all of them.

It’s not just physicists, either. Why is it that Einstein has become a household name and not, say, Watson and Crick, who discovered the structure of DNA, an accomplishment at least as important as relativity? Even less well known is Paul Erdős, the most prolific mathematician since Euler in the 18th century, who had an outrageous personality to boot.

For that matter, consider Richard Feynman, who is probably the second most famous physicist of the 20th century. He was, by all accounts, a man of great accomplishment and charisma. However, his fame is probably more due to his performance on TV following the Space Shuttle Challenger disaster than for his theory of quantum electrodynamics.

There are many great golfers, but only one Tiger Woods, just as there are many great basketball players, but only one LeBron James. The truth is that individual human accomplishment is bounded, but success isn’t. Tiger Woods can’t possibly hit every shot perfectly any more than LeBron James can score every point. But chances are, both will outshine all others in the public consciousness, which will drive their fame and fortune.

What’s probably most interesting about Einstein’s fame is that it grew substantially even as he ceased to be a productive scientist, long after he had become, as Robert Oppenheimer put it, “a landmark, not a beacon.”

Success Relies On Networks

Let’s try and deconstruct what happened after Einstein’s arrival in the United States. The day after thousands came to greet Weizmann and the reporters mistakenly assumed that they were there for Einstein, he appeared on the front pages of major newspapers like The New York Times and the Washington Post. For many readers, it may have been the first time they had heard of any physicist.

As I noted above, this period was something of a heyday for physics, with the basic principles of quantum mechanics first becoming established, so it was a topic that was increasingly discussed. Few could understand the details, but many remembered the genius with the crazy white hair they saw in the newspaper. When the subject of physics came up, people would discuss Einstein, which spread his name further.

Barabási himself established this principle of preferential attachment in networks, also known as the “rich get richer” phenomenon or the Matthew effect. When a particular node gains more connections than its rivals, it tends to gain future connections at a faster rate. Even a slight change in early performance leads to a major advantage going forward.

In his book, Barabási details how this principle applies to things as diverse as petitions on Change.org, projects on Kickstarter and books on Amazon. It also applies to websites on the Internet, computers in a network and proteins in our bodies. Look at any connected system and you’ll see preferential attachment at work.
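
To see the mechanism rather than take it on faith, here is a minimal sketch in Python (a toy version of the “rich get richer” growth rule, not Barabási’s own model or code, and the function name is purely illustrative). Each new node links to an existing node chosen with probability proportional to its current degree, and a handful of early nodes end up as hubs:

    import random
    from collections import Counter

    def grow_preferential_network(n_nodes, seed=1):
        # Each node id appears in 'pool' once per link it has, so a uniform draw
        # from 'pool' picks an existing node with probability proportional to its degree.
        rng = random.Random(seed)
        degree = {0: 1, 1: 1}            # start with two connected nodes
        pool = [0, 1]
        for new_node in range(2, n_nodes):
            target = rng.choice(pool)    # the preferential attachment step
            degree[new_node] = 1
            degree[target] += 1
            pool.extend([new_node, target])
        return degree

    degree = grow_preferential_network(10_000)
    print(Counter(degree).most_common(5))   # a few early nodes dominate the connections

Run it with different seeds and the identity of the hubs changes, but the skew does not: a small early advantage compounds into a large one, which is the point made above.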

Small Groups, Loosely Connected

The civil rights movement will always be associated with Martin Luther King Jr., but he was far from a solitary figure. In fact, he was just one of the Big Six of civil rights. Yet few today speak of the others. The only one besides King still relatively famous today is John Lewis and that’s largely because of his present role as a US congressman.

None of these men was a solitary figure either. Each led his own organization, such as the NAACP, the National Urban League and CORE, and these, in turn, had hundreds of local chapters. It was King’s connection to all of these that made him the historic icon we know today, because it was all of those small groups, loosely connected, that made up the movement.

In my book, Cascades, I explain how many movements fail to bring about change by trying to emulate events like the March on Washington without first building small groups, loosely connected, but united by a shared purpose. It is those, far more than any charismatic personality or inspirational speech, that make a movement powerful.

It also helps explain something about Einstein’s iconic status. He was on the ship with Weizmann not as a physicist, but as a Zionist activist, and that dual status connected him to two separate networks of loosely connected small groups, which enhanced his prestige. So it is quite possible, if not probable, that we equate Einstein with genius today and not, say, Bohr, because of his political activity as much as for his scientific talent.

Randomness Rewards Persistence

None of this should be taken to mean that Einstein could have become a legendary icon if he hadn’t made truly landmark discoveries. It was the combination of his prominence in the scientific community with the happy accident of Weizmann’s adoring crowds being mistaken for his own, that made him a historic figure.

Still, we can imagine an alternate universe in which Einstein becomes just as famous. He was, for example, enormously quotable and very politically active. (He was, at one time, offered the presidency of Israel.) So it is completely possible that some other event, combined with his very real accomplishments, would have catapulted him to fame. There is always an element of luck and randomness in every success.

Yet Einstein’s story tells us some very important things about what makes a great success. It is not, as many tell us, simply a matter of working hard to achieve something because human performance is, as noted above, bounded. You can be better than others, but not that much better. At the same time, it takes more than just luck. It is a combination of both and we can do much to increase our chances of benefiting from them.

Einstein was incredibly persistent, working for ten years on special relativity and another ten for general relativity. He was also a great connector, always working to collaborate with other scientists as well as political figures like Weizmann and even little girls needing help with their math homework. That’s what allowed him to benefit from loosely connected small groups.

Perhaps most importantly, these principles of persistence and connection are ones that any of us can apply. We might not all be Einsteins, but with a little luck, we just might make it someday.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: misterinnovation.com


The Eureka Moment Fallacy

GUEST POST from Greg Satell

In 1928, Alexander Fleming arrived at his lab to find that a mysterious mold had contaminated his Petri dishes and was eradicating the bacteria colonies he was trying to grow. Intrigued, he decided to study the mold. That’s how Fleming came to be known as the discoverer of penicillin.

Fleming’s story is one that is told and retold because it reinforces so much about what we love about innovation. A brilliant mind meets a pivotal moment of epiphany and — Eureka! — the world is forever changed. Unfortunately, that’s not really how things work. It wasn’t true in Fleming’s case and it won’t work for you.

The truth is that innovation is never a single event, but a process of discovery, engineering and transformation, which is why penicillin didn’t become commercially available until 1945 (and the drug was actually a different strain of the mold than Fleming had discovered). We need to stop searching for Eureka moments and get busy with the real work of innovating.

Learning To Recognize And Define Problems

Before Fleming, there was Ignaz Semmelweis and to understand Fleming’s story it helps to understand that of his predecessor. Much like Fleming, Semmelweis was a bright young man of science who had a moment of epiphany. In Semmelweis’s case, he was one of the first to realize that infections could spread from doctor to patient.

That simple insight led him to institute a strict regime of hand washing at Vienna General Hospital. Almost immediately, the incidence of deadly childbed fever dropped precipitously. Yet his ideas were not accepted at the time and Semmelweis didn’t do himself any favors by refusing to format his data properly or to work collaboratively to build support for his ideas. Instead, he angrily railed against the medical establishment he saw as undermining his work.

Semmelweis would die in an insane asylum, ironically from an infection he contracted under care, and never got to see the germ theory of disease emerge from the work of people like Louis Pasteur and Robert Koch. That’s what led to the study of bacteriology, sepsis and Alexander Fleming growing those cultures that were contaminated by the mysterious mold.

When Fleming walked into his lab on that morning in 1928, he was bringing a wealth of experiences to the problem. During World War I, he had witnessed many soldiers die from sepsis and how applying antiseptic agents to the wound often made the problem worse. Later, he found that nasal secretions inhibited bacterial growth.

So when the chance discovery of penicillin happened, it was far from a single moment, but rather a “happy accident” that he had spent years preparing for.

Combining Domains

Today, we remember Fleming’s discovery of penicillin as a historic breakthrough, but it wasn’t considered to be so at the time. In fact, when it was first published in the British Journal of Experimental Pathology, nobody really noticed. The truth is that what Fleming discovered couldn’t have cured anybody. It was just a mold secretion that killed bacteria in a Petri dish.

Perhaps even more importantly, Fleming was ill-equipped to transform penicillin into something useful. He was a pathologist who largely worked alone. To transform his discovery into an actual cure, he would need chemists and other scientists, as well as experts in fermentation, manufacturing, logistics and many other things. To go from milliliters in the lab to metric tons in the real world is no trivial thing.

So Fleming’s paper lay buried in a scientific journal for ten years before it was rediscovered by a team led by Howard Florey and Ernst Chain at the University of Oxford. Chain, a world-class biochemist, was able to stabilize the penicillin compound and another member of the team, Norman Heatley, developed a fermentation process to produce it in greater quantities.

Because Florey and Chain led a larger team in a bigger lab, they also had the staff and equipment to perform experiments on mice, which showed that penicillin was effective in treating infections. However, when they tried to cure a human, they found that they were not able to produce enough of the drug. They simply didn’t have the capacity.

Driving A Transformation

By the time Florey and Chain had established the potential of penicillin it was already 1941 and England was at war, which made it difficult to find funding to scale up their work. Luckily, Florey had spent time doing research in the United States earlier in his career and was able to secure a grant to travel to America and continue the development of penicillin with US-based labs.

That collaboration produced two more important breakthroughs. First, they were able to identify a more powerful strain of the penicillin mold. Second, they developed a fermentation process utilizing corn steep liquor as a medium. Corn steep liquor was common in the American Midwest, but virtually unheard of back in England.

Still, they needed to figure out a way to scale up production and that was far beyond the abilities of research scientists. However, the Office of Scientific Research and Development (OSRD), a government agency in charge of wartime research, understood the potential of penicillin for the war effort and initiated an aggressive program, involving two dozen pharmaceutical companies, to overcome the challenges.

Working feverishly, they were able to produce enough penicillin to deploy the drug for D-Day in 1944 and saved untold thousands of lives. After the war was over, in 1945, penicillin was made commercially available, which touched off a “golden age” of antibiotic research and new drugs were discovered almost every year between 1950 and 1970.

Innovation Is Never A Single Event

The story of Fleming’s Eureka! moment is romantic and inspiring, but also incredibly misleading. It wasn’t one person and one moment that changed the world, but the work of many over decades that made an impact. As I explain in my book, Cascades, it is small groups, loosely connected, but united by a shared purpose that drive transformational change.

In fact, the development of penicillin involved not one, but a series of epiphanies. First, Fleming discovered penicillin. Then, Florey and Chain rediscovered Fleming’s work. Chain stabilized the compound, Heatley developed the fermentation process, other scientists identified the more powerful strain and corn steep liquor as a fermentation medium. Surely, there were many other breakthroughs involving production, logistics and treatment that are lost to history.

This is not the exception, but the rule. The truth is that the next big thing always starts out looking like nothing at all. For example, Jim Allison, who recently won the Nobel Prize for his development of cancer immunotherapy, had his idea rejected by pharmaceutical companies, much like the medical establishment dismissed Semmelweis back in the 1850s.

Yet Allison kept at it. He continued to pound the pavement, connect and collaborate with others, and that’s why today he is hailed as a pioneer and a hero. That’s why we need to focus less on inventions and more on ecosystems. It’s never a single moment of Eureka! that truly changes the world, but many of them.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Are We Abandoning Science?

GUEST POST from Greg Satell

A recent Pew poll found that, while Americans generally hold scientific expertise in high regard, there are deep pockets of mistrust. For example, less than half of Republicans believe that scientists should take an active role in policy debates, and significant minorities question the transparency and integrity of scientific findings.

An earlier study done by researchers at Ohio State University found that, when confronted with scientific evidence that conflicted with their pre-existing views, such as the reality of climate change or the safety of vaccines, partisans would not only reject the evidence, but become hostile and question the objectivity of science.

This is a major problem, because if we are only willing to accept evidence that agrees with what we already think we know, we are unlikely to advance our understanding. Perhaps even worse, it opens us up to being influenced by pundits, those with strong opinions but questionable expertise. When we turn our backs on science, we turn our backs on truth.

The Rise Of Science

When René Descartes wrote “I think, therefore I am” in the mid 1600s, he was doing more than coining a clever phrase, he was making an argument for a rational world ruled by pure logic. He believed that you could find the answers to problems you needed to solve merely by thinking about them clearly.

Yet Descartes and his rational movement soon ran out of steam. Many of the great minds that followed, such as John Locke and David Hume, took a more empirical view and argued that we can only truly understand the world around us through our experiences, however flawed and limited they may be.

It was this emphasis on experiences that led us to the concept of expertise. As the Renaissance and the Enlightenment gave way to the modern world, knowledge became specialized. It was no longer enough to think about things, the creation of knowledge came to be seen as arising from a scientific process of testing hypotheses through experiment.

This was a major shift, because you could no longer simply argue about things like how many angels could fit on the head of a pin, you actually had to put your thoughts to the test. Others could then examine the same evidence and see if they came to the same conclusions as you did. Thinking about things wasn’t enough, you had to show that they worked in the real world.

The Soccer Ball You Can’t See

Science is a funny thing, full of chance discoveries, strange coincidences and unlikely moments of insight. In his book, The God Particle, the Nobel prizewinning physicist Leon Lederman tells a metaphorical story about an alien race watching a soccer game to illustrate how it is practiced.

These aliens are very much like humans except that they cannot see black-and-white patterns. If they went to a soccer game, they would be utterly confused to see a bunch of guys running around a field for no apparent reason. They could come up with theories, formulas and other conjectures, but would fail to make useful predictions.

Eventually, one might notice a slight bulge in the net of the goal just as the crowd erupted in a cheer and come up with a crazy idea about an invisible ball. Through further observation, they could test the hypothesis and build evidence. Although they could never actually see the ball, they could catalogue its effects and use them to understand events.

His point is that science is not common sense. It deals with things that we do not directly experience, but nevertheless have concrete effects on the world we live in. Today, we live in a world of the visceral abstract, where oddball theories like relativity result in real innovations like microprocessors and the Internet.

Cargo Cult Science

Because so much of science deals with stuff we can’t directly experience, we need metaphors like Lederman’s story about the aliens to make sense of things. Part of the fun of science is letting your imagination run wild and seeing where things go. Then you can test those ideas to see if they actually reflect reality.

The problem is that pundits and flakes can do the same thing — let their imagination run wild — and not bother to test whether their ideas are true. Consider the anti-vax movement, which has no scientific basis, but has gone viral and led to a resurgence of diseases that had nearly been eliminated. Nevertheless, dressed up in some scientific-sounding words, the idea that vaccines cause disease in children can be very convincing.

The physicist Richard Feynman called this cargo cult science, after a strange phenomenon that takes place on some islands in the South Pacific in which some tribes try to mimic the use of technology. For example, they build mock airstrips in the hopes that airplanes would appear with valuable cargo.

What makes science real is not fancy sounding words or white lab coats, but the fact that you work under certain constraints. You follow the scientific method, observe professional standards and subject your work to peer review. Pundits, on the other hand, do none of these things. Simply having an opinion on a subject will suffice.

The New Mysticism

Clearly, science is what created the modern world. Without science, you cannot have technology and without technology, you cannot create prosperity. So, on purely economic terms, science is extremely important to our success as a society. We need science in order to progress.

Yet in broader terms, science is the search for truth. In a nutshell, science is the practice of coming up with testable statements to see what’s possible. That’s what separates Darwin’s theory of natural selection and the big bang from nonscientific theories. The former are matters of science, which can be tested through experiment and observation; the latter are matters of faith and belief.

Consider what Marco Rubio said in an interview with GQ about the age of the universe a few years ago:

“I think the age of the universe has zero to do with how our economy is going to grow. I’m not a scientist. I don’t think I’m qualified to answer a question like that. At the end of the day, I think there are multiple theories out there on how the universe was created and I think this is a country where people should have the opportunity to teach them all.”

Yet the big bang is not just a theory, but the result of a set of theories, including general relativity and quantum mechanics, combined with many observations over a period of decades. Students in physics class are supposed to learn about the big bang not to shape their religious beliefs, but because of its importance to those underlying theories.

And those concepts are central to our everyday lives. We use relativity to calibrate GPS satellites, so that we can find restaurants and target missiles. Quantum mechanics gave us lasers and microprocessors, from which we make barcode scanners and iPhones. In fact, the theories underlying the big bang are essential for our modern economy to function.

When we turn our backs on science, what we are left with is essentially a form of mysticism. We can listen to our inner voices to decide what we believe and, when faced with a competing idea, ascribe its provenance to nothing more than someone else’s inner voice. Once we make truth a matter of opinion, we start our way down a slippery slope.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Is China Our New Sputnik Moment?

GUEST POST from Greg Satell

When the Soviets launched Sputnik, the first space satellite, into orbit in 1957, it was a wake-up call for America. Over the next year, President Eisenhower would sign the National Defense Education Act to spur science education, increase funding for research and establish NASA and DARPA to spur innovation.

A few years ago, a report by the Council on Foreign Relations (CFR) argued that we are at a similar point today, but with China. While we have been steadily decreasing federal investment in R&D over the past few decades, our Asian rival has been ramping up and now threatens our leadership in key technologies such as AI, genomics and quantum information technology.

Clearly, we need to increase our commitment to science and innovation and that means increasing financial investment. However, what the report makes clear is that money alone won’t solve the problem. We are, in several important ways, actually undermining our ability to innovate, now and in the future. We need to renew our culture of innovation in America.

Educating And Attracting Talent

The foundation of an innovation economy is education, especially in STEM subjects. Historically, America has been the world’s best educated workforce, but more recently we’ve fallen to fifth among OECD countries for post-secondary education. That’s alarming and something we will certainly need to reverse if we are to compete effectively.

Our educational descent can be attributed to three major causes. First, the rest of the world has become more educated, so the competition has become stiffer. Second is financing: tuition has nearly tripled in the last decade and student debt has become so onerous that it now takes about 20 years to pay off four years of college. Third, we need to work harder to attract talented people to the United States.

The CFR report recommends developing a “21st century National Defense Education Act” to create scholarships in STEM areas and making it easier for foreign students to get Green Cards when they graduate from our universities. It also points out that we need to work harder to attract foreign talent, especially in high impact areas like AI, genomics and quantum computing.

Unfortunately, we seem to be going the other way. The number of international students coming to American universities is declining. Policies like the Muslim ban and concerns about gun violence are deterring scientific talent from coming here. The denial rate for H-1B visas has increased from 4% in 2016 to 18% in the first quarter of 2019.

Throughout our history, it has been our openness to new people and new ideas that has made America exceptional. It’s a legitimate question whether that’s still true.

Building Technology Ecosystems

In the 1980s, the US semiconductor industry was on the ropes. Due to increased competition from low-cost Japanese manufacturers, American market share in the DRAM market fell from 70% to 20%. The situation not only had a significant economic impact, there were also important national security implications.

The federal government responded with two initiatives, the Semiconductor Research Corporation and SEMATECH, both of which were nonprofit consortiums that involved government, academia and industry. By the 1990s, American semiconductor manufacturers were thriving again.

Today, we have similar challenges with rare earth elements, battery technology and many manufacturing areas. The Obama administration responded by building similar consortiums to those that were established for semiconductors: The Critical Materials Institute for rare earth elements, JCESR for advanced batteries and the 14 separate Manufacturing Institutes.

Yet here again, we seem to be backsliding. The current administration has sought to slash funding for the Manufacturing Extension Partnership that supports small and medium sized producers. An addendum to the CFR report also points out that the administration has pushed for a 30% cut in funding for the national labs, which support much of the advanced science critical to driving American technology forward.

Supporting International Trade and Alliances

Another historical strength of the US economy has been our open approach to trade. The CFR report points out that our role as a “central node in a global network of research and development,” gave us numerous advantages, such as access to foreign talent at R&D centers overseas, investment into US industry and cooperative responses to global challenges.

However, the report warns that “the Trump administration’s indiscriminate use of tariffs against China, as well as partners and allies, will harm U.S. innovative capabilities.” It also faults the Trump administration for pulling out of the Trans-Pacific Partnership trade agreement, which would have bolstered our relationship with Asian partners and increased our leverage over China.

The tariffs undermine American industry in two ways. First, because many of the tariffs are on intermediate goods which US firms use to make products for export, we’re undermining our own competitive position, especially in manufacturing. Second, because trade partners such as Canada and the EU have retaliated against our tariffs, our position is weakened further.

Clearly, we compete in an ecosystem driven world in which power does not come from the top, but emanates from the center. Traditionally, America has positioned itself at the center of ecosystems by constantly connecting out. Now that process seems to have reversed itself and we are extremely vulnerable to others, such as China, filling the void.

We Need to Stop Killing Innovation in America

The CFR report, whose task force included such luminaries as Admiral William McRaven, former Google CEO Eric Schmidt and economist Laura Tyson, should set alarm bells ringing. Although the report was focused on national security issues, it pertains to general competitiveness just as well and the picture it paints is fairly bleak.

After World War II, America stood almost alone in the world in terms of production capacity. Through smart policy, we were able to transform that initial advantage into long-term technological superiority. Today, however, we have stiff competition in areas ranging from AI to synthetic biology to quantum systems.

At the same time, we seem to be doing everything we can to kill innovation in America. Instead of working to educate and attract the world’s best talent, we’re making it harder for Americans to attain higher education and for top foreign talent to come and work here. Instead of ramping up our science and technology programs, presidential budgets regularly recommend cutting them. Instead of pulling our allies closer, we are pushing them away.

To be clear, America is still at the forefront of science and technology, vying for leadership in every conceivable area. However, as global competition heats up and we need to be redoubling our efforts, we seem to be doing just the opposite. The truth is that our prosperity is not a birthright to which we are entitled, but a legacy that must be lived up to.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Top 10 Human-Centered Change & Innovation Articles of March 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are March’s ten most popular innovation posts:

  1. Taking Care of Yourself is Not Impossible — by Mike Shipulski
  2. Rise of the Prompt Engineer — by Art Inteligencia
  3. A Guide to Effective Brainstorming — by Diana Porumboiu
  4. What Disruptive Innovation Really Is — by Geoffrey A. Moore
  5. The 6 Building Blocks of Great Teams — by David Burkus
  6. Take Charge of Your Mind to Reclaim Your Potential — by Janet Sernack
  7. Ten Reasons You Must Deliver Amazing Customer Experiences — by Shep Hyken
  8. Deciding You Have Enough Opens Up New Frontiers — by Mike Shipulski
  9. The AI Apocalypse is Here – 3 Reasons You Should Celebrate! — by Robyn Bolton
  10. Artificial Intelligence is Forcing Us to Answer Some Very Human Questions — by Greg Satell

BONUS – Here are five more strong articles published in February that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:


Is Futurology a Pseudoscience?

GUEST POST from Art Inteligencia

Futurology (aka Future Studies or Futures Research) is a subject of study that attempts to make predictions and forecasts about the future. It is an interdisciplinary field that draws from a variety of sources, including science, economics, philosophy, and technology. In recent years, futurology has become a popular topic of debate, with some arguing that it is a pseudoscience and others defending its validity as a legitimate field of study.

One of the main criticisms of futurology is that it relies on speculation and extrapolation of existing trends, rather than on scientific evidence or principles. Critics argue that this makes futurists’ predictions unreliable and that futurology is more of a speculative activity than a rigorous scientific discipline. They also point out that predictions about the future are often wrong, and that the field has had a reputation for making exaggerated claims that have not been borne out by the facts.

“Futurology always ends up telling you more about your own time than about the future.” – Matt Ridley

On the other hand, proponents of futurology argue that the field has a legitimate place in the scientific community. They point to the fact that many futurists are well-educated, highly trained professionals who use rigorous methods and data analysis to make accurate predictions. These futurists also often draw on a wide range of sources, such as history, economics, and psychology, to make their forecasts.

Ultimately, the debate over whether or not futurology (aka future studies or futures research) is a pseudoscience is likely to continue. Some may see it as a legitimate field of study, while others may view it as little more than guesswork. What is certain, however, is that the field is still evolving and that the ability of futurists to accurately predict the future will be an important factor in determining its ultimate validity.

Do you think futurology is a pseudoscience?
(sound off in the comments)

And to the futurists and futurology professionals out there, what say you?
(add a comment)

Bottom line: Futurology and prescience are not fortune telling. Skilled futurologists and futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: Pixabay


Have Humans Evolved Beyond Nature and a Need for It?

GUEST POST from Manuel Berdoy, University of Oxford

Our society has evolved so much, can we still say that we are part of Nature? If not, should we worry – and what should we do about it? Poppy, 21, Warwick.

Such is the extent of our dominion on Earth that the answer to questions around whether we are still part of nature – and whether we even need some of it – relies on an understanding of what we want as Homo sapiens. And to know what we want, we need to grasp what we are.

It is a huge question – but those are the best kind. And as a biologist, here is my humble suggestion for how to address it, and a personal conclusion. You may have a different one, but what matters is that we reflect on it.

Perhaps the best place to start is to consider what makes us human in the first place, which is not as obvious as it may seem.


This article is part of Life’s Big Questions

The Conversation’s new series, co-published with BBC Future, seeks to answer our readers’ nagging questions about life, love, death and the universe. We work with professional researchers who have dedicated their lives to uncovering new perspectives on the questions that shape our lives.


Many years ago, a novel written by Vercors called Les Animaux dénaturés (“Denatured Animals”) told the story of a group of primitive hominids, the Tropis, found in an unexplored jungle in New Guinea, who seem to constitute a missing link.

However, the prospect that this fictional group may be used as slave labour by an entrepreneurial businessman named Vancruysen forces society to decide whether the Tropis are simply sophisticated animals or whether they should be given human rights. And herein lies the difficulty.

Human status had hitherto seemed so obvious that the book describes how it is soon discovered that there is no definition of what a human actually is. Certainly, the string of experts consulted – anthropologists, primatologists, psychologists, lawyers and clergymen – could not agree. Perhaps prophetically, it is a layperson who suggested a possible way forward.

She asked whether some of the hominids’ habits could be described as the early signs of a spiritual or religious mind. In short, were there signs that, like us, the Tropis were no longer “at one” with nature, but had separated from it, and were now looking at it from the outside – with some fear.

It is a telling perspective. Our status as altered or “denatured” animals – creatures who have arguably separated from the natural world – is perhaps both the source of our humanity and the cause of many of our troubles. In the words of the book’s author:

All man’s troubles arise from the fact that we do not know what we are and do not agree on what we want to be.

We will probably never know the timing of our gradual separation from nature – although cave paintings perhaps contain some clues. But a key recent event in our relationship with the world around us is as well documented as it was abrupt. It happened on a sunny Monday morning, at 8.15am precisely.

A new age

The atomic bomb that rocked Hiroshima on August 6 1945, was a wake-up call so loud that it still resonates in our consciousness many decades later.

The day the “sun rose twice” was not only a forceful demonstration of the new era that we had entered, it was a reminder of how paradoxically primitive we remained: differential calculus, advanced electronics and almost godlike insights into the laws of the universe helped build, well … a very big stick. Modern Homo sapiens seemingly had developed the powers of gods, while keeping the psyche of a stereotypical Stone Age killer.

We were no longer fearful of nature, but of what we would do to it, and ourselves. In short, we still did not know where we came from, but began panicking about where we were going.

We now know a lot more about our origins but we remain unsure about what we want to be in the future – or, increasingly, as the climate crisis accelerates, whether we even have one.

Arguably, the greater choices granted by our technological advances make it even more difficult to decide which of the many paths to take. This is the cost of freedom.

I am not arguing against our dominion over nature nor, even as a biologist, do I feel a need to preserve the status quo. Big changes are part of our evolution. After all, oxygen was first a poison which threatened the very existence of early life, yet it is now the fuel vital to our existence.

Similarly, we may have to accept that what we do, even our unprecedented dominion, is a natural consequence of what we have evolved into, and by a process nothing less natural than natural selection itself. If artificial birth control is unnatural, so is reduced infant mortality.

I am also not convinced by the argument against genetic engineering on the basis that it is “unnatural”. By artificially selecting specific strains of wheat or dogs, we had been tinkering more or less blindly with genomes for centuries before the genetic revolution. Even our choice of romantic partner is a form of genetic engineering. Sex is nature’s way of producing new genetic combinations quickly.

Even nature, it seems, can be impatient with itself.

Our natural habitat? Shutterstock

Changing our world

Advances in genomics, however, have opened the door to another key turning point. Perhaps we can avoid blowing up the world, and instead change it – and ourselves – slowly, perhaps beyond recognition.

The development of genetically modified crops in the 1980s quickly moved from early aspirations to improve the taste of food to a more efficient way of destroying undesirable weeds or pests.

In what some saw as the genetic equivalent of the atomic bomb, our early forays into a new technology became once again largely about killing, coupled with worries about contamination. Not that everything was rosy before that. Artificial selection, intensive farming and our exploding population growth were long destroying species quicker than we could record them.

The increasing “silent springs” of the 1950s and 60s caused by the destruction of farmland birds – and, consequently, their song – were only the tip of a deeper and more sinister iceberg. There is, in principle, nothing unnatural about extinction, which has been a recurring pattern (of sometimes massive proportions) in the evolution of our planet long before we came on the scene. But is it really what we want?

The arguments for maintaining biodiversity are usually based on survival, economics or ethics. In addition to preserving obvious key environments essential to our ecosystem and global survival, the economic argument highlights the possibility that a hitherto insignificant lichen, bacteria or reptile might hold the key to the cure of a future disease. We simply cannot afford to destroy what we do not know.

Is it this crocodile’s economic, medical or inherent value which should be important to us? (Image: Shutterstock)

But attaching an economic value to life makes it subject to the fluctuations of markets. It is reasonable to expect that, in time, most biological solutions can be synthesised, and, as the market worth of many lifeforms falls, we will need to scrutinise the significance of the ethical argument. Do we need nature because of its inherent value?

Perhaps the answer may come from peering over the horizon. It is somewhat ironic that, just as the start of the third millennium coincided with the decoding of the human genome, the start of the fourth may be about whether that genome has become redundant.

Just as genetic modification may one day lead to the end of “Homo sapiens naturalis” (that is, humans untouched by genetic engineering), we may one day wave goodbye to the last specimen of Homo sapiens genetica: the last fully genetically based human, living in a world increasingly less burdened by our biological form – minds in a machine.

If the essence of a human, including our memories, desires and values, is somehow reflected in the pattern of the delicate neuronal connections of our brain (and why should it not be?), our minds may also one day be changeable like never before.

And this brings us to the essential question that surely we must ask ourselves now: if, or rather when, we have the power to change anything, what would we not change?

After all, we may be able to transform ourselves into more rational, more efficient and stronger individuals. We may venture out further, have greater dominion over greater areas of space, and inject enough insight to bridge the gap between the issues brought about by our cultural evolution and the abilities of a brain evolved to deal with much simpler problems. We might even decide to move into a bodiless intelligence: in the end, even the pleasures of the body are located in the brain.

And then what? When the secrets of the universe are no longer hidden, what makes it worth being part of it? Where is the fun?

“Gossip and sex, of course!” some might say. And in effect, I would agree (although I might put it differently), as it conveys to me the fundamental need that we have to reach out and connect with others. I believe that the attributes that define our worth in this vast and changing universe are simple: empathy and love. Not power or technology, which occupy so many of our thoughts but which are merely (almost boringly) related to the age of a civilisation.

True gods

Like many a traveller, Homo sapiens may need a goal. But from the strengths that come with attaining it, one realises that one’s worth (whether as an individual or a species) ultimately lies elsewhere. So I believe that the extent of our ability for empathy and love will be the yardstick by which our civilisation is judged. It may well be an important benchmark by which we will judge other civilisations that we may encounter, or indeed be judged by them.

When we can change everything about ourselves, what will we keep? (Image: Shutterstock)

There is something of true wonder at the basis of it all. The fact that chemicals can arise from the austere confines of an ancient molecular soup and, through the cold laws of evolution, combine into organisms that care for other lifeforms (that is, other bags of chemicals) is the true miracle.

Some ancients believed that God made us in “his image”. Perhaps they were right in a sense, as empathy and love are truly godlike features, at least among the benevolent gods.

Cherish those traits and use them now, Poppy, as they hold the solution to our ethical dilemma. It is those very attributes that should compel us to improve the wellbeing of our fellow humans without lowering the condition of what surrounds us.

Anything less will pervert (our) nature.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credits: Pixabay, Shutterstock (via theconversation)


How will humans change in the next 10,000 years?

Future evolution: from looks to brains and personality

GUEST POST from Nicholas R. Longrich, University of Bath

READER QUESTION: If humans don’t die out in a climate apocalypse or asteroid impact in the next 10,000 years, are we likely to evolve further into a more advanced species than what we are at the moment? Harry Bonas, 57, Nigeria

Humanity is the unlikely result of 4 billion years of evolution.

From self-replicating molecules in Archean seas, to eyeless fish in the Cambrian deep, to mammals scurrying from dinosaurs in the dark, and then, finally, improbably, ourselves – evolution shaped us.

Organisms reproduced imperfectly. Mistakes made when copying genes sometimes made them better fit to their environments, so those genes tended to get passed on. More reproduction followed, and more mistakes, the process repeating over billions of generations. Finally, Homo sapiens appeared. But we aren’t the end of that story. Evolution won’t stop with us, and we might even be evolving faster than ever.


This article is part of Life’s Big Questions

The Conversation’s new series, co-published with BBC Future, seeks to answer our readers’ nagging questions about life, love, death and the universe. We work with professional researchers who have dedicated their lives to uncovering new perspectives on the questions that shape our lives.


It’s hard to predict the future. The world will probably change in ways we can’t imagine. But we can make educated guesses. Paradoxically, the best way to predict the future is probably to look back at the past and assume those trends will continue. This suggests some surprising things about our future.

We will likely live longer and become taller, as well as more lightly built. We’ll probably be less aggressive and more agreeable, but have smaller brains. A bit like a golden retriever, we’ll be friendly and jolly, but maybe not that interesting. At least, that’s one possible future. But to understand why I think that’s likely, we need to look at biology.

The end of natural selection?

Some scientists have argued that civilisation’s rise ended natural selection. It’s true that selective pressures that dominated in the past – predators, famine, plague, warfare – have mostly disappeared.

Starvation and famine were largely ended by high-yield crops, fertilisers and family planning. Violence and war are less common than ever, despite modern militaries with nuclear weapons, or maybe because of them. The lions, wolves and sabertoothed cats that hunted us in the dark are endangered or extinct. Plagues that killed millions – smallpox, Black Death, cholera – were tamed by vaccines, antibiotics, clean water.

But evolution didn’t stop; other things just drive it now. Evolution isn’t so much about survival of the fittest as reproduction of the fittest. Even if nature is less likely to murder us, we still need to find partners and raise children, so sexual selection now plays a bigger role in our evolution.

And if nature doesn’t control our evolution anymore, the unnatural environment we’ve created – culture, technology, cities – produces new selective pressures very unlike those we faced in the ice age. We’re poorly adapted to this modern world; it follows that we’ll have to adapt.

And that process has already started. As our diets changed to include grains and dairy, we evolved genes to help us digest starch and milk. When dense cities created conditions for disease to spread, mutations for disease resistance spread too. And for some reason, our brains have got smaller. Unnatural environments create unnatural selection.

To predict where this goes, we’ll look at our prehistory, studying trends over the past 6 million years of evolution. Some trends will continue, especially those that emerged in the past 10,000 years, after agriculture and civilisation were invented.

We’re also facing new selective pressures, such as reduced mortality. Studying the past doesn’t help here, but we can see how other species responded to similar pressures. Evolution in domestic animals may be especially relevant – arguably we’re becoming a kind of domesticated ape, but curiously, one domesticated by ourselves.

I’ll use this approach to make some predictions, if not always with high confidence. That is, I’ll speculate.

Lifespan

Humans will almost certainly evolve to live longer – much longer. Life cycles evolve in response to mortality rates, how likely predators and other threats are to kill you. When mortality rates are high, animals must reproduce young, or might not reproduce at all. There’s also no advantage to evolving mutations that prevent ageing or cancer – you won’t live long enough to use them.

When mortality rates are low, the opposite is true. It’s better to take your time reaching sexual maturity. It’s also useful to have adaptations that extend lifespan, and fertility, giving you more time to reproduce. That’s why animals with few predators – animals that live on islands or in the deep ocean, or are simply big – evolve longer lifespans. Greenland sharks, Galapagos tortoises and bowhead whales mature late, and can live for centuries.

Even before civilisation, people were unique among apes in having low mortality and long lives. Hunter-gatherers armed with spears and bows could defend against predators; food sharing prevented starvation. So we evolved delayed sexual maturity, and long lifespans – up to 70 years.

Still, child mortality was high – approaching 50% by age 15. Average life expectancy was just 35 years. Even after the rise of civilisation, child mortality stayed high until the 19th century, while life expectancy went down – to 30 years – due to plagues and famines.

Then, in the past two centuries, better nutrition, medicine and hygiene reduced youth mortality to under 1% in most developed nations. Life expectancy soared to 70 years worldwide, and 80 in developed countries. These increases are due to improved health, not evolution – but they set the stage for evolution to extend our lifespan.

Now, there’s little need to reproduce early. If anything, the years of training needed to be a doctor, CEO, or carpenter incentivise putting it off. And since our life expectancy has doubled, adaptations to prolong lifespan and child-bearing years are now advantageous. Given that more and more people live to 100 or even 110 years – the record being 122 years – there’s reason to think our genes could evolve until the average person routinely lives 100 years or even more.

Size and strength

Animals often evolve larger size over time; it’s a trend seen in tyrannosaurs, whales, horses and primates – including hominins.

Early hominins like Australopithecus afarensis and Homo habilis were small, four to five feet (120cm-150cm) tall. Later hominins – Homo erectus, Neanderthals, Homo sapiens – grew taller. We’ve continued to gain height in historic times, partly driven by improved nutrition, but genes seem to be evolving too.

Why we got big is unclear. In part, mortality may drive size evolution; growth takes time, so longer lives mean more time to grow. But human females also prefer tall males. So both lower mortality and sexual preferences will likely cause humans to get taller. Today, the tallest people in the world are in Europe, led by the Netherlands. Here, men average 183cm (6ft); women 170cm (5ft 6in). Someday, most people might be that tall, or taller.

As we’ve grown taller, we’ve become more gracile. Over the past 2 million years, our skeletons became more lightly built as we relied less on brute force, and more on tools and weapons. As farming forced us to settle down, our lives became more sedentary, so our bone density decreased. As we spend more time behind desks, keyboards and steering wheels, these trends will likely continue.

Humans have also reduced our muscles compared to other apes, especially in our upper bodies. That will probably continue. Our ancestors had to slaughter antelopes and dig roots; later they tilled and reaped in the fields. Modern jobs increasingly require working with people, words and code – they take brains, not muscle. Even for manual labourers – farmers, fishermen, lumberjacks – machinery such as tractors, hydraulics and chainsaws now shoulders a lot of the work. As physical strength becomes less necessary, our muscles will keep shrinking.

Our jaws and teeth also got smaller. Early, plant-eating hominins had huge molars and mandibles for grinding fibrous vegetables. As we shifted to meat, then started cooking food, jaws and teeth shrank. Modern processed food – chicken nuggets, Big Macs, cookie dough ice cream – needs even less chewing, so jaws will keep shrinking, and we’ll likely lose our wisdom teeth.

Beauty

After people left Africa 100,000 years ago, humanity’s far-flung tribes became isolated by deserts, oceans, mountains, glaciers and sheer distance. In various parts of the world, different selective pressures – different climates, lifestyles and beauty standards – caused our appearance to evolve in different ways. Tribes evolved distinctive skin colour, eyes, hair and facial features.

With civilisation’s rise and new technologies, these populations were linked again. Wars of conquest, empire building, colonisation and trade – including trade of other humans – all shifted populations, which interbred. Today, road, rail and aircraft link us too. Bushmen would walk 40 miles to find a partner; we’ll go 4,000 miles. We’re increasingly one worldwide population – freely mixing. That will create a world of hybrids – light-brown-skinned, dark-haired Afro-Euro-Australo-Americo-Asians, their skin colour and facial features tending toward a global average.

Sexual selection will further accelerate the evolution of our appearance. With most forms of natural selection no longer operating, mate choice will play a larger role. Humans might become more attractive, but more uniform in appearance. Globalised media may also create more uniform standards of beauty, pushing all humans towards a single ideal. Sex differences, however, could be exaggerated if the ideal is masculine-looking men and feminine-looking women.

Intelligence and personality

Last, our brains and minds, our most distinctively human feature, will evolve, perhaps dramatically. Over the past 6 million years, hominin brain size roughly tripled, suggesting selection for big brains driven by tool use, complex societies and language. It might seem inevitable that this trend will continue, but it probably won’t.

Instead, our brains are getting smaller. In Europe, brain size peaked 10,000–20,000 years ago, just before we invented farming. Then, brains got smaller. Modern humans have brains smaller than those of our ancient predecessors, or even of medieval people. It’s unclear why.

It could be that fat and protein were scarce once we shifted to farming, making it more costly to grow and maintain large brains. Brains are also energetically expensive – they burn around 20% of our daily calories. In agricultural societies with frequent famine, a big brain might be a liability.

Maybe hunter-gatherer life was demanding in ways farming isn’t. In civilisation, you don’t need to outwit lions and antelopes, or memorise every fruit tree and watering hole within 1,000 square miles. Making and using bows and spears also requires fine motor control, coordination, the ability to track animals and trajectories — maybe the parts of our brains used for those things got smaller when we stopped hunting.

Or maybe living in a large society of specialists demands less brainpower than living in a tribe of generalists. Stone-age people mastered many skills – hunting, tracking, foraging for plants, making herbal medicines and poisons, crafting tools, waging war, making music and magic. Modern humans perform fewer, more specialised roles as part of vast social networks, exploiting the division of labour. In a civilisation, we specialise in a trade, then rely on others for everything else.

That being said, brain size isn’t everything: elephants and orcas have bigger brains than us, and Einstein’s brain was smaller than average. Neanderthals had brains comparable to ours, but more of the brain was devoted to sight and control of the body, suggesting less capacity for things like language and tool use. So how much the loss of brain mass affects overall intelligence is unclear. Maybe we lost certain abilities, while enhancing others that are more relevant to modern life. It’s possible that we’ve maintained processing power by having fewer, smaller neurons. Still, I worry about what that missing 10% of my grey matter did.

Curiously, domestic animals also evolved smaller brains. Sheep lost 24% of their brain mass after domestication; for cows, it’s 26%; dogs, 30%. This raises an unsettling possibility. Maybe being more willing to passively go with the flow (perhaps even thinking less), like a domesticated animal, has been bred into us, like it was for them.

Our personalities must be evolving too. Hunter-gatherers’ lives required aggression. They hunted large mammals, killed one another over partners and warred with neighbouring tribes. We get meat from a store, and turn to police and courts to settle disputes. If war hasn’t disappeared, it now accounts for fewer deaths, relative to population, than at any time in history. Aggression, now a maladaptive trait, could be bred out.

Changing social patterns will also change personalities. Humans live in much larger groups than other apes; hunter-gatherers formed tribes of around 1,000 people, but in today’s world people live in vast cities of millions. In the past, our relationships were necessarily few, and often lifelong. Now we inhabit seas of people, moving often for work and, in the process, forming thousands of relationships, many fleeting and, increasingly, virtual. This world will push us to become more outgoing, open and tolerant. Yet navigating such vast social networks may also require that we become more willing to adapt ourselves to them – to be more conformist.

Not everyone is psychologically well-adapted to this existence. Our instincts, desires and fears are largely those of stone-age ancestors, who found meaning in hunting and foraging for their families, warring with their neighbours and praying to ancestor-spirits in the dark. Modern society meets our material needs well, but is less able to meet the psychological needs of our primitive caveman brains.

Perhaps because of this, increasing numbers of people suffer from psychological issues such as loneliness, anxiety and depression. Many turn to alcohol and other substances to cope. Selection against vulnerability to these conditions might improve our mental health, and make us happier as a species. But that could come at a price. Many great geniuses had their demons; leaders like Abraham Lincoln and Winston Churchill fought depression, as did scientists such as Isaac Newton and Charles Darwin, and artists like Herman Melville and Emily Dickinson. Some, like Virginia Woolf, Vincent van Gogh and Kurt Cobain, took their own lives. Others – Billie Holiday, Jimi Hendrix and Jack Kerouac – were destroyed by substance abuse.

A disturbing thought is that troubled minds will be removed from the gene pool – but potentially at the cost of eliminating the sort of spark that created visionary leaders, great writers, artists and musicians. Future humans might be better adjusted – but less fun to party with and less likely to launch a scientific revolution — stable, happy and boring.

New species?

There were once nine human species; now it’s just us. But could new human species evolve? For that to happen, we’d need isolated populations subject to distinct selective pressures. Distance no longer isolates us, but reproductive isolation could theoretically be achieved through selective mating. If people were culturally segregated – marrying based on religion, class, caste, or even politics – distinct populations, even species, might evolve.

In The Time Machine, sci-fi novelist H.G. Wells saw a future where class created distinct species. The upper classes evolved into the beautiful but useless Eloi, and the working classes became the ugly, subterranean Morlocks – who revolted and enslaved the Eloi.

In the past, religion and lifestyle have sometimes produced genetically distinct groups, as seen, for example, in Jewish and Gypsy populations. Today, politics also divides us – could it divide us genetically? Liberals now move to be near other liberals, and conservatives to be near conservatives; many on the left won’t date Trump supporters and vice versa.

Could this create two species, with instinctively different views? Probably not. Still, to the extent culture divides us, it could drive evolution in different ways, in different people. If cultures become more diverse, this could maintain and increase human genetic diversity.

Strange new possibilities

So far, I’ve mostly taken a historical perspective, looking back. But in some ways, the future might be radically unlike the past. Evolution itself has evolved.

One of the more extreme possibilities is directed evolution, where we actively control our species’ evolution. We already breed ourselves when we choose partners with appearances and personalities we like. For thousands of years, hunter-gatherers arranged marriages, seeking good hunters for their daughters. Even where children chose their own partners, men were generally expected to seek the approval of the bride’s parents. Similar traditions survive elsewhere today. In other words, we breed our own children.

And going forward, we’ll do this with far more knowledge of what we’re doing, and more control over the genes of our progeny. We can already screen ourselves and embryos for genetic diseases. We could potentially choose embryos for desirable genes, as we do with crops. Direct editing of the DNA of a human embryo has been proven to be possible — but seems morally abhorrent, effectively turning children into subjects of medical experimentation. And yet, if such technologies were proven safe, I could imagine a future where you’d be a bad parent not to give your children the best genes possible.

Computers also provide an entirely new selective pressure. As more and more matches are made on smartphones, we are delegating decisions about what the next generation looks like to computer algorithms, which recommend our potential matches. Digital code now helps choose what genetic code is passed on to future generations, just as it shapes what you stream or buy online. This might sound like dark science fiction, but it’s already happening. Our genes are being curated by computer, just like our playlists. It’s hard to know where this leads, but I wonder if it’s entirely wise to turn over the future of our species to iPhones, the internet and the companies behind them.

Discussions of human evolution are usually backward looking, as if the greatest triumphs and challenges were in the distant past. But as technology and culture enter a period of accelerating change, our genes will too. Arguably, the most interesting parts of evolution aren’t life’s origins, dinosaurs, or Neanderthals, but what’s happening right now, our present – and our future.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Pixabay
