Tag Archives: Science

The Breakthrough Lifecycle

GUEST POST from Greg Satell

Many experts suspect that the COVID crisis is receding into the background. It is, of course, hard to know for sure. There will continue to be debate and we will still need to have some mitigating measures in place. Still, for the most part, people are back at work, kids are in school, and relatively normal routines have returned.

Generations from now, historians will most likely still question what lessons are to be gleaned from the past few years. Should we strengthen our multilateral institutions or have they become so sclerotic that they need to be dismantled? Is the rise of populist nationalism a harbinger for the future or a flash in the pan?

One thing I don’t expect to be hotly debated, in fact seems perfectly clear even now, is that science saved us. Untold thousands, working mostly anonymously in labs around the world, created a vaccine of astonishing efficacy in record time. It is these types of breakthroughs that change the course of history and, if we can embrace their power, lead us to a better future.

A Seemingly Useless Idea

The mRNA technology that led to the Moderna and Pfizer-BioNTech vaccines has the potential to revolutionize medical science. It can rapidly reprogram the machinery in our cells to manufacture proteins that can potentially cure or prevent a wide range of diseases, from cancer to malaria, vastly more efficiently than anything we’ve ever seen before.

Yet while revolutionary, it is not at all a new idea. In fact Katalin Karikó, who pioneered the approach, published her first paper on mRNA-based therapy way back in 1990. Unfortunately, she wasn’t able to win grants to fund her work and, by 1995, things came to a head. She was told that she could either direct her energies in a different way, or be demoted.

This type of thing is not unusual. Jim Allison, who won the Nobel Prize for his work on cancer immunotherapy, had a very similar experience when he had his breakthrough, despite having already become a prominent leader in the field. “It was depressing,” he told me. “I knew this discovery could make a difference, but nobody wanted to invest in it.”

The truth is that the next big thing always starts out looking like nothing at all. Things that really change the world always arrive out of context for the simple reason that the world hasn’t changed yet.

Overcoming Resistance

Humans tend to see things in a linear fashion. It is easier for us to imagine a clear line of cause and effect, like a row of dominoes falling into each other, rather than a series of complex interactions and feedback loops. So it shouldn’t be surprising that, in hindsight, breakthrough ideas seem so obvious that only the most dim-witted would deny their utility.

When we think of something like, say, electricity, we often just assume that it was immediately adopted and the world simply changed overnight. After all, who could deny the superiority of an efficient electric motor over a big, noisy steam engine? Yet as the economist Paul David explained in a famous paper, it took 40 years for it to really take hold.

There are a few reasons why this is the case. The first is switching costs. A new technology almost always has to replace something that already does the job. Another problem involves establishing a learning curve. People need to figure out how to unlock the potential of the new technology. To bring about any significant change you first have to overcome resistance.

With electricity, the transition happened slowly. It wouldn’t have made sense to immediately tear down steam-powered factories and replace them. At first, only new plants used electricity. Yet it wasn’t so much the technology itself as the way people learned to use it to re-imagine how factories functioned that unlocked a revolution in productivity.

In the case of mRNA technology, no one had seen an mRNA vaccine work, so many favored more traditional methods. Johnson & Johnson and AstraZeneca, for example, opted for a much better understood DNA-based approach using adenoviruses rather than take a chance on a newer, unproven one.

We seem to be at a similar point now with mRNA and other technologies, such as CRISPR. They’ve been proven to be viable, but we really don’t understand them well enough yet to unlock their full potential.

Building Out The Ecosystem

When we look back through history, we see a series of inventions. It seems obvious to us that things like the internal combustion engine and electricity would change the world. Still, as late as 1920, roughly 40 years after they were invented, most Americans’ lives remained unchanged. For practical purposes, the impact of those two breakthroughs was negligible.

What made the difference wasn’t so much the inventions themselves, but the ecosystems that formed around them. For internal combustion engines, it took separate networks to supply oil, build roads, manufacture cars and ships, and so on. For electricity, entire industries based on secondary inventions, such as household appliances and radios, needed to form to fully realize the potential of the underlying technology.

Much of what came after could scarcely have been dreamed of. Who could have seen how transportation would transform retail? Or how communications technologies would revolutionize warfare? Do you really think anybody looked at an IBM mainframe in the 1960s and said, “Gee, this will be a real problem for newspapers some day?”

We can expect something similar to happen with mRNA technology. Once penicillin hit the market in 1945, a “golden age” of antibiotics ensued, with revolutionary new drugs introduced every year between 1950 and 1970. We’ve seen a similar bonanza in cancer immunotherapies since Jim Allison’s breakthrough.

In marked contrast to Katalin Karikó’s earlier difficulty in winning grants for her work, the floodgates have now opened, with pharma companies racing to develop mRNA approaches for a myriad of diseases and maladies.

The Paradox Of New Paradigms

The global activist Srdja Popović once told me that when a revolution is successful, it’s difficult to explain the previous order, because it comes to be seen as unbelievable. Just as it’s hard to imagine a world without electricity, internal combustion or antibiotics today, it will be difficult to explain our lives today to future generations.

In much the same way, we cannot understand the future through linear extrapolation. We can, of course, look at today’s breakthroughs in things like artificial intelligence, synthetic biology and quantum computing, but what we don’t see is the second or third order effects, how they will shape societies and how societies will choose to shape them.

Looking at Edison’s lightbulb would tell you nothing about radios, rock music and the counterculture of the 60s, much like taking a ride in Ford’s “Model T” would offer little insight into the suburbs and shopping malls his machine would make possible. Ecosystems are, by definition, chaotic and non-linear.

What is important is that we allow for the unexpected. It was not obvious to anyone that Katalin Karikó could ever get her idea to work, but she shouldn’t have had to risk her career to make a go of it. We’re enormously lucky that she didn’t, as so many others would have, take an easier path. It is, in the final analysis, that one brave decision that we have to thank for what promises to be brighter days ahead.

All who wander are not lost.

— Article courtesy of the Digital Tonto blog
— Image credits: Pixabay

Uber Economy is Killing Innovation, Prosperity and Entrepreneurship

GUEST POST from Greg Satell

Today, it seems that almost everyone wants to be the “Uber” of something, and why not? With very little capital investment, the company has completely disrupted the taxicab industry and attained a market value of over $100 billion. In an earlier era, it would have taken decades to have created that kind of impact on a global scale.

Still, we’re not exactly talking about Henry Ford and his Model T here. Or even the Boeing 707 or the IBM 360. Like Uber, those innovations quickly grew to dominance, but they also unleashed incredible productivity. Uber, on the other hand, gushed red ink for more than a decade despite $25 billion invested. In 2021 it lost more than $6 billion; it made progress in 2022 but still lost money, and only in 2023 did it finally turn a profit.

The truth is that we have a major problem and, while Uber didn’t cause it, the company is emblematic of it. Put simply, a market economy runs on innovation. It is only through consistent gains in productivity that we can create real prosperity. The data and evidence strongly suggest that we have failed to do that for the past 50 years. We need to do better.

The Productivity Paradox Writ Large

The 20th century was, for the most part, an era of unprecedented prosperity. The emergence of electricity and internal combustion kicked off a 50-year productivity boom between 1920 and 1970. Yet after that, gains in productivity mysteriously disappeared even as business investment in computing technology increased, causing economist Robert Solow to observe that “You can see the computer age everywhere but in the productivity statistics.”

When the internet emerged in the mid-90s, things improved and everybody assumed that the mystery of the productivity paradox had been resolved. However, after 2004 productivity growth disappeared once again. Today, despite the hype surrounding things such as Web 2.0, the mobile Internet and, most recently, artificial intelligence, productivity continues to slump.

Take a closer look at Uber and you can begin to see why. Compare the $25 billion invested in the ride-sharing company with the $5 billion (worth about $45 billion today) IBM invested to build its System 360 in the early 1960s. The System 360 was considered revolutionary, changed computing forever and dominated the industry for decades.

Uber, on the other hand, launched with no hardware or software that was particularly new or revolutionary. In fact, the company used fairly ordinary technology to disintermediate relatively low-paid taxi dispatchers. The money invested was largely used to fend off would-be competitors through promoting the service and discounting rides.

Maybe the “productivity paradox” isn’t so mysterious after all.

Two Paths To Profitability

Anybody who’s ever taken an Economics 101 course knows that, under conditions of perfect competition, the forces of supply and demand are supposed to drive markets toward equilibrium. It is at this magical point that prices are high enough to attract supply sufficient to satisfy demand, but not any higher.
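
As a minimal worked illustration of that textbook equilibrium (the linear curves and the numbers below are hypothetical, not from the article), setting quantity supplied equal to quantity demanded pins down the market-clearing price:

```latex
\[
Q_s = -a + bP, \qquad Q_d = c - dP, \qquad a,\,b,\,c,\,d > 0
\]
\[
Q_s = Q_d \;\Rightarrow\; -a + bP^{*} = c - dP^{*} \;\Rightarrow\; P^{*} = \frac{a + c}{b + d}
\]
% Example: a = 10, b = 4, c = 50, d = 2 gives P* = 60/6 = 10 and Q* = -10 + 4(10) = 30.
```

At any price above P*, supply outstrips demand and prices get bid down; below it, the reverse. That is the sense in which prices end up just high enough to attract supply sufficient to satisfy demand, but not any higher.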

Unfortunately for anyone running a business, that equilibrium point is the same point at which economic profit disappears. So to make a profit over the long term, managers need to alter market dynamics, either by limiting competition, often through strategies such as rent seeking and regulatory capture, or by creating new markets through innovation.

As should be clear by now, the digital revolution has been relatively ineffective at creating meaningful innovation. Economists Daron Acemoglu and Pascual Restrepo refer to technologies like Uber, as well as things like automated customer service, as “so-so technologies,” because they displace workers without significantly increasing productivity.

As Joseph Schumpeter pointed out long ago, market economies need innovation to fuel prosperity. Without meaningful innovation, managers are left only with strategies that limit competition, undermine markets and impoverish society, which is largely what seems to have happened over the past few decades.

The Silicon Valley Doomsday Machine

The arrogance of Silicon Valley entrepreneurs seems so outrageous, and so childishly naive, that it is scarcely believable. How could an industry that has produced so little in terms of productivity seem so sure that it has been “changing the world” for the better? And how has it made so much money?

The answer lies in something called increasing returns. As it turns out, under certain conditions, namely high up-front investment, negligible marginal costs, network effects and “winner-take-all markets,” the normal laws of economics can be somewhat suspended. In these conditions, it makes sense to pump as much money as possible into an early Amazon, Google or Facebook.
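
A rough sketch of the cost side of that argument (the figures are illustrative assumptions, not the article’s): with a large up-front investment F and a near-zero marginal cost c per additional user, average cost keeps falling as volume q grows, so whoever scales first gains a structural advantage even before network effects kick in.

```latex
\[
AC(q) = \frac{F}{q} + c, \qquad \lim_{q \to \infty} AC(q) = c \approx 0
\]
% Illustration: F = 100M, c = 0.01 per user
% AC(1M users)  ~ 100.01      AC(100M users) ~ 1.01
```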

However, this seemingly happy story has a few important downsides. First, to a large extent these technologies do not create new markets as much as they disrupt or displace old ones, which is one reason why productivity gains are so meager. Second, the conditions apply to a small set of products, namely software and consumer gadgets, which makes the Silicon Valley model a bad fit for many groundbreaking technologies.

Still, if the perception is that you can make a business viable by pumping a lot of cash into it, you can actually crowd out a lot of good businesses with bad, albeit well-funded, ones. In fact, there is increasing evidence that this is exactly what is happening. Rather than an engine of prosperity, Silicon Valley is increasingly looking like a doomsday machine.

Returning To An Innovation Economy

Clearly, we cannot continue “Ubering” ourselves to death. We must return to an economy fueled by innovation, rather than disruption, which produces the kind of prosperity that lifts all boats, rather than outsized profits for a meager few. It is clearly in our power to do that, but we must begin to make better choices.

First, we need to recognize that innovation is something that people do, but instead of investing in human capital, we are actively undermining it. In the US, food insecurity has become an epidemic on college campuses. To make matters worse, the cost of college has created a student debt crisis, essentially condemning our best and brightest to decades of indentured servitude. To add insult to injury, healthcare costs continue to soar. Should we be at all surprised that entrepreneurship is in decline?

Second, we need to rebuild scientific capital. As Vannevar Bush once put it, “There must be a stream of new scientific knowledge to turn the wheels of private and public enterprise.” To take just one example, it is estimated that the $3.8 billion invested in the Human Genome Project generated nearly $800 billion of economic activity as of 2011. Clearly, we need to renew our commitment to basic research.

Finally, we need to rededicate ourselves to free and fair markets. In the United States, by almost every metric imaginable, whether it is industry concentration, occupational licensing, higher prices, lower wages or whatever else you want to look at, capitalism has been weakened by poor regulation and oversight. Not surprisingly, innovation has suffered.

Perhaps most importantly, we need to shift our focus from disrupting markets to creating them, from “The Hacker Way” to tackling grand challenges, and from a reductionist approach to an economy based on dignity and well-being. Make no mistake: the “Uber Economy” is not the solution, it’s the problem.

— Article courtesy of the Digital Tonto blog
— Image credits: Pixabay

Don’t ‘Follow the Science’, Follow the Scientific Method

GUEST POST from Pete Foley

The scientific method is probably the most useful thing I’ve learnt in my life. It is a near universal tool that can be used in so many ways and so many places. It unleashes a whole world of assisted critical thinking that is invaluable to innovators, but also in our personal lives. Teaching it to individuals or teams who are not trained as scientists is one of the most powerful and enabling things we can do for them. And teaching it to kids, as opposed to endless facts and data that they can easily access from the web, is something we should do far more of.

Recruiting Skills not Expertise: When I was involved in recruiting, I always valued PhDs and engineers. Sometimes that was for their unique, specialized knowledge. But more often than not it was more for the critical thinking skills they had acquired while gaining that specialized knowledge. In today’s rapidly evolving world, specific knowledge typically has a relatively short shelf-life. But the cognitive framework embodied by the scientific method is a tool for life, and one that can be reapplied in so many ways.

Don’t Follow the Science, Follow the Process: All too often today the scientific method gets confused with ‘following the science’. The scientific process is almost infinitely useful, but blindly ‘following the science’ is often far less so, and can be counterproductive. The scientific method is a process that helps us to evaluate information, challenge assumptions, and in so doing, get us closer to truth. Sometimes it confirms our existing ideas, sometimes it improves them, and sometimes it completely replaces them. But it is grounded in productive and informed skepticism, and this is at the heart of why science is constantly evolving.

‘Follow the Science’ is in many ways the opposite. It assumes someone in a position of power already has the right answer. At its best it means blindly following the consensus of today’s experts. All too often it really means ‘do as you are told’. Frequently the people saying this are not the experts themselves, but are instead invoking third-party expertise to support their viewpoint. That, of course, is the opposite of science. It’s often well intended, but not always good advice.

Science is not a Religion: At the heart of this is a fundamental misunderstanding of science, and scientists. In today’s media and social media, all too often science and scientists are presented with a quasi-religious reverence, and challenging the current view is framed as heretical. How often do you hear the framing ‘scientists tell us…’ as a way of validating a position?

This is understandable. The sheer quantity and complexity of information we are faced with in our everyday lives is increasingly unmanageable, while big challenges like climate change are unimaginably complex. I find it almost impossible to keep up with my own interests, let alone everything that is happening. And some topics are so technical that they simply require translation by experts. When someone announces they’ve discovered the Higgs boson, it’s not really practical for any of us to pop over to the local particle accelerator and check for ourselves. So expertise is clearly an important part of any decision chain. But experts come with their own biases. An engineer naturally tends to see problems through an engineering lens, a chemist through a chemical one.

Science in Support of an Agenda: One danger with the ‘follow the science’ mantra is that it is often used to reinforce a belief, opinion, or even an agenda. I’ve seen this all too often in my work life, with the question, ‘Can you find me a paper that supports x?’ This is often benign, in that someone passionately believes something and wants to find evidence to support it. But it is fundamentally the wrong question, and of course, completely ‘unscientific’.

The scientific literature is filled with competing theories, disproven or outdated ideas, and bad science. If you look for literature to support an idea you can usually find it, even if it’s wrong. Scientists are not gods. They make mistakes, they run poor experiments, and they are subject to confirmation bias, ego, and other human frailties. There is a good reason for the saying that science advances one funeral at a time. Science, like virtually every human organization, is hierarchical, and a prestigious scientist can advance a discipline, but can also slow it down by holding onto a deeply held belief. And mea culpa, I know from personal experience that it’s all too easy to fall in love with a theory and resist evidence to the contrary.

Of course, some theories are more robust than others.   Both consensus and longevity are therefore important considerations.  Some science is so well established, and supported by so much observation that it’s unlikely that it will fundamentally change.  For example, we may still have a great deal to learn about gravity, but for practical purposes, apples will still drop from trees.    

Peer Review: Policing the literature is hard. Consensus is right until it’s not. Another phrase I often hear is ‘peer reviewed’, in the context that this makes the paper ‘right’. Of course, peer review is valuable, part of the scientific process, and helps ensure that content has quality and has been subject to a high level of rigor. If one person says it, it can be a breakthrough or utter nonsense. If a lot of smart people agree, it’s more likely to be ‘right’. But that is far from guaranteed, especially if they share the same ingoing assumptions. Scientific consensus has historically embraced many poor theories: a flat earth, or the sun revolving around the earth, are early examples. More tragically, I grew up with the thalidomide generation in Europe. On an even bigger scale, the industrial revolution gave us so much, but also precipitated climate change. And on a personal level, I’ve just been told by my physician to take a statin, and I am in the process of fighting my way through a rapidly growing and evolving literature in order to decide if that is the right decision. So next time a scientist, or worse, a politician, journalist, or random poster on Twitter claims to own scientific truth, enjoins you to ‘follow the science’, or accuses someone else of being a science denier, treat it with a grain of sodium chloride.

They may of course be right, but the more strident they are, or the less qualified, the less likely they are to really understand science, and hence what they are asking you to follow.  And the science they pick is quite possibly influenced by their own goals, biases or experience. Of course, practically we cannot challenge everything. We need to be selective, and the amount of personal effort we put into challenging an idea will depend upon how important it is to us as individuals.      

Owning your Health: Take physicians as an example. At some time or other, we’ve all looked to a physician for expert advice. And there is good reason to do so. They work very hard to secure deep knowledge of their chosen field, and the daily practice of medicine gives them a wealth of practical as well as theoretical knowledge. But physicians are not gods either. The human body is a very complex system, physicians have very little time with an individual patient (over the last 10 years, the average time a physician spends with a patient has shrunk to a little over 15 minutes), the field is vast and expanding, and our theories around how to diagnose and treat disease are constantly evolving. In that way, medicine is a great example of the scientific method in action, but also of how transient ‘scientific truths’ can be.

I already mentioned my current dilemma with statins. But to give an even more deeply personal example, neither my wife nor I would be alive today if we’d blindly followed a physician’s diagnosis.

I had two compounding and comparatively rare conditions that combined to appear like a more common one.  The physician went with the high probability answer.  I took time to dig deeper and incorporate more details.  Together we got to the right answer, and I’m still around!

This is a personal and pragmatic example of how valuable the scientific process can be.  My health is important, so I chose to invest considerable time in the diagnosis I was given, and challenge it productively, instead of blindly accepting an expert opinion. My physicians had far more expertise than I did, but I had far more time and motivation.  We ultimately complemented each other by partnering, and using the scientific method both as a process, and as a way to communicate.   

The Challenge of Science Communication: To be fair, science communication is hard. It requires communicating an often complex concept with sufficient simplicity for it to be understandable, and often requires giving guidance while also embracing appropriate uncertainty. Nowhere was this more evident than with COVID-19, which is where a lot of the ‘follow the science’ and ‘science denier’ language came from. At the beginning of the pandemic, the science was understandably poorly developed, but we still had to make important decisions on often limited data. At first we simply didn’t understand even the basics, like the transmission vectors (was it airborne or surface, how long did it survive outside of the body, etc.). I find it almost surreal to think back to those early months, how little we knew, the now bizarre clean room protocols we used on our weekly shopping, and some of the fear that has now faded into the past.

But because we understood so little, we made a lot of mistakes. The overenthusiastic use of ventilators may have killed some patients, although that is still a hotly debated topic. Early in the pandemic, masks, later to become a controversial and oddly politically charged topic, were specifically not recommended by the US government for the general public. Who knows how many people contracted the disease by following this advice? It was well intentioned, as authorities were trying to prevent a mask shortage for health workers. But it was also mechanistically completely wrong.

At the time I used simple scientific reasoning, and realized this made little sense.  If the virus was transmitted via an airborne vector, a mask would help.  If it wasn’t, it would do no harm, at least as long as I didn’t subtract from someone with greater need. By that time the government had complete control of the mask supply chain anyway, so that was largely a moot point. Instead I dug out a few old N95 masks that had been used for spray painting and DIY, and used them outside of the house (hospitals would not accept donations of used masks). I was lambasted with ‘follow the science’ by at least one friend for doing so, but followed an approach with high potential reward and virtually zero downside. I’ll never know if that specifically worked, but I didn’t get Covid, at least not until much later when it was far less dangerous.

Science doesn’t own truth: Unlike a religion, good science doesn’t pretend to own ultimate truths. But unfortunately it can get used that way. Journalists, politicians, technocrats and others sometimes weaponize (selective) science to support an opinion. Even a few scientists who have become frustrated with ‘science deniers’ can slip into this trap.

Science is a Journey: I should clarify that the scientific method is more of a journey, not so much a single process. To suggest it is a single ‘thing’ is probably an unscientific simplification in its own right. It’s more a way of thinking that embraces empiricism, observation, description, productive skepticism, and the use of experimentation to test and challenge hypotheses. It also helps us to collaborate and communicate with experts in different areas, creating a common framework for collaboration, rather than blindly following directions or other expert opinions.

It can be taught, and it is incredibly useful. But like any tool, it requires time and effort to become a skilled user. If we invest in it, it can be extraordinarily valuable, both in innovation and in life. It’s perhaps not for every situation, as that would mire us in unmanageable procrastination. But if something is important, it’s an invaluable tool.

Image credits: Pixabay

Department Of Energy Programs Helping to Create an American Manufacturing Future

GUEST POST from Greg Satell

In the recession that followed the dotcom crash in 2000, the United States lost five million manufacturing jobs and, while there has been an uptick in recent years, all indications are that they may never be coming back. Manufacturing, perhaps more than any other sector, relies on deep networks of skills and assets that tend to be highly regional.

The consequences of this loss are deep and pervasive. Losing a significant portion of our manufacturing base has led not only to economic vulnerability, but to political polarization. Clearly, it is important to rebuild our manufacturing base. But to do that, we need to focus on new, more advanced technologies.

That’s the mission of the Advanced Manufacturing Office (AMO) at the Department of Energy. By providing a crucial link between the cutting-edge science done at the National Labs and private industry, it has been able to make considerable progress. As the collaboration between government scientists and private industry widens and deepens over time, US manufacturing may well be revived.

Linking Advanced Research To Private Industry

The origins of the Department of Energy date back to the Manhattan Project during World War II. The immense project was, in many respects, the start of “big science.” Hundreds of top researchers, used to working in small labs, traveled to newly established outposts to collaborate at places like Los Alamos, New Mexico and Oak Ridge, Tennessee.

After the war was over, the facilities continued their work and similar research centers were established to expand the effort. These National Labs became the backbone of the US government’s internal research efforts. In 1977, the National Labs, along with a number of other programs, were combined to form the Department of Energy.

One of the core missions of the AMO is to link the research done at the National Labs to private industry, and the Lab Embedded Entrepreneurship Programs (LEEP) have been particularly successful in this regard. Currently, there are four such programs: Cyclotron Road, Chain Reaction Innovations, West Gate and Innovation Crossroads.

I was able to visit Innovation Crossroads at Oak Ridge National Laboratory and meet the entrepreneurs in its current cohort. Each is working to transform a breakthrough discovery into a market changing application, yet due to technical risk, would not be able to attract funding in the private sector. The LEEP program offers a small amount of seed money, access to lab facilities and scientific and entrepreneurial mentorship to help them get off the ground.

That’s just one of the ways that the AMO opens up the resources of the National Labs. It also helps businesses get access to supercomputing resources (5 out of the 10 fastest computers in the world are located in the United States, most of them at the National Labs) and conducts early stage research to benefit private industry.

Leading Public-Private Consortia

Another area in which the AMO supports private industry is through taking a leading role in consortia, such as the Manufacturing Institutes that were set up to give American companies a leg up in advanced areas such as clean energy, composite materials and chemical process intensification.

The idea behind these consortia is to create hubs that provide a critical link between government labs, top scientists at universities and private companies looking to solve real-world problems. It both helps firms advance in key areas and allows researchers to focus their work where it will have the greatest possible impact.

For example, the Critical Materials Institute (CMI) was set up to develop alternatives to materials that are subject to supply disruptions, such as the rare earth elements that are critical to many high tech products and are largely produced in China. A few years ago it developed, along with several National Labs and Eck Industries, an advanced alloy that can replace more costly materials in components of advanced vehicles and aircraft.

“We went from an idea on a whiteboard to a profitable product in less than two years and turned what was a waste product into a valuable asset,” Robert Ivester, Director of the Advanced Manufacturing Office told me.

Technology Assistance Partnerships

In 2011, the International Organization for Standardization released its ISO 50001 guidelines. Like previous guidelines that focused on quality management and environmental impact, ISO 50001 recommends best practices to reduce energy use. These can benefit businesses through lower costs and result in higher margins.

Still, for harried executives facing cutthroat competition and demanding customers, figuring out how to implement new standards can easily get lost in the mix. So a third key role that the AMO plays is to assist companies who wish to implement new standards by providing tools, guides and access to professional expertise.

The AMO offers similar support for a number of critical areas, such as prototype development and also provides energy assessment centers for firms that want to reduce costs. “Helping American companies adopt new technology and standards helps keep American manufacturers on the cutting edge,” Ivester says.

“Spinning In” Rather Than Spinning Out

Traditionally we think of the role of government in business largely in terms of regulation. Legislatures pass laws and watchdog agencies enforce them so that we can have confidence in the food we eat, the products we buy and the medicines that are supposed to cure us. While that is clearly important, we often overlook how government can help drive innovation.

Inventions spun out of government labs include the Internet, GPS and laser scanners, just to name a few. Many of our most important drugs were also originally developed with government funding. Still, traditionally the work has mostly been done in isolation and only later offered to private companies through licensing agreements.

What makes the Advanced Manufacturing Office different from most scientific programs is that it is more focused on “spinning in” private industry rather than spinning out technologies. That enables executives and entrepreneurs with innovative ideas to power them with some of the best minds and most advanced equipment in the world.

As Ivester put it to me, “Spinning out technologies is something that the Department of Energy has traditionally done. Increasingly, we want to spin ideas from industry into our labs, so that companies and entrepreneurs can benefit from the resources we have here. It also helps keep our scientists in touch with market needs and helps guide their research.”

Make no mistake, innovation needs collaboration. Combining ideas from the private sector with the cutting-edge science from government labs can help American manufacturing compete in the 21st century.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Time is Not Fundamental

GUEST POST from Geoffrey A. Moore

For all my life I have been taught that time is the fourth dimension in a space-time continuum. I mean, for goodness sake, Einstein said this was so, and all of physics has followed his lead. Nonetheless, I want to argue that, while the universe may indeed have four dimensions, time is not one of them, nor is it a fundamental element of reality.

Before you think I have really jumped off the deep end, let me just say that my claim is that motion is a fundamental element of reality, and it is the one that time is substituting for. This is based simply on observation. That is, we can observe and measure mass. We can observe and measure space. We can observe and measure energy. We can observe and measure motion. Time, on the other hand, is simply a tool we have developed to measure motion. That is, motion is fundamental, and time is derived.

Consider where our concept of time came from. It started with three distinct units—the day, the month, and the year. Each is based on a cyclical motion—the earth turning around its axis, the moon encircling the earth, the earth and moon encircling the sun. All three of these cyclical motions have the property of returning to their starting point. They repeat, over and over and over. That’s how they came to our attention in the first place.

If we call this phenomenon cyclical time, we can contrast it with linear time. The latter is time we experience as passing, the one to which we apply the terms past, present, and future. But in fact, what is passing is not time but motion, motion we are calibrating by time. That is, we use the cyclical units of time to measure the linear distance between any given motion and a reference location.

As I discuss in The Infinite Staircase, by virtue of the Big Bang, the Second Law of Thermodynamics, and the ongoing rush to greater and greater entropy, the universe is inherently in motion. Some of that motion gets redirected to do work, and some of that work has resulted in life emerging on our planet. Motion is intrinsic to our experience of life, much more so than time. As babies we have no sense of time, but we immediately experience mass, space, energy, and motion.

Because mass, space, energy, and motion are core to our experience, we have developed tools to help us engage with them strategically. We can weigh mass and reshape it in myriad ways to serve our ends. We can measure space using anything as a standard length and create structures of whatever size and shape we need. We can measure energy in terms of temperature and pressure and manipulate it to move all kinds of masses through all kinds of spaces. And we can measure motion through space by using standard units of time.

The equation for so doing is typically written as v = d/t. This equation makes us believe that velocity is a concept derived from the primitives of distance and time. But a more accurate way of looking at reality is to say t = d/v. That is, we can observe distance and motion, from which we derive time. If you have a wristwatch with a second hand, this is easily confirmed. A minute consists of a hand traveling through a fixed angular distance, 360°, at a constant velocity set by convention, in this case by the International System of Units, these days atomically calibrated by a specified number of oscillations of cesium. Time is derived by dividing a given distance by a given velocity.
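
To restate the wristwatch illustration as a small worked calculation (the 6° per second figure simply re-expresses the convention that one full sweep of the second hand takes a minute):

```latex
\[
v = \frac{d}{t} \quad\Longleftrightarrow\quad t = \frac{d}{v}
\]
\[
d = 360^{\circ}, \qquad v = 6^{\circ}/\text{s (by convention)}, \qquad
t = \frac{d}{v} = \frac{360^{\circ}}{6^{\circ}/\text{s}} = 60\ \text{s}
\]
```

Read this way, the quantities we actually observe are the swept distance and the conventionally fixed rate of motion; the time is the number we compute from them.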

OK, so what? Here the paths of philosophy and physics diverge, with me being able to pursue the former but not the latter. Before parting, however, I would like to ask the physicists in the room, should there be any, a question: If one accepted the premise that motion was the fourth dimension, not time, such that we described the universe as a continuum of spacemotion instead of spacetime, would that make any difference? Specifically, with respect to Einstein’s theories of special and general relativity, are we just substituting terms here, or are there material consequences? I would love to learn what you think.

At my end, I am interested in the philosophical implications of this question, specifically in relation to phenomenology, the way we experience time. To begin, I want to take issue with the following definition of time served up by Google:

a nonspatial continuum that is measured in terms of events which succeed one another from past through present to future.

From my perspective, this is just wrong. It calls for using events to measure time. The correct approach would focus on using time to measure motion, describing the situation as follows:

an intra-spatial continuum that can be measured in terms of time as one event succeeds another from a position of higher energy to one of lower energy.

The motive for this redefinition is to underscore that the universe is inherently in motion, following the Second Law of Thermodynamics, perpetually seeking to cool itself down by spreading itself out. We here on Earth are born into the midst of that action, boats set afloat upon a river, moving with the current on the way to a sea of ultimate cool. We can go with the flow, we can paddle upstream, we can even divert the river of entropy to siphon off energy to do work. The key point to register is that motion abides, inexorably following the arrow of entropy, moving from hot to cold until heat death is achieved.

If motion is a primary dimension of the universe, there can be no standing still. Phenomenologically, this is quite different from the traditional time-based perspective. In a universe of space and time, events have to be initiated, and one can readily imagine a time with no events, a time when nothing happens, maybe something along the lines of Beckett’s Waiting for Godot. In a universe of space and motion, however, that is impossible. There are always events, and we are always in the midst of doing. A couch potato is as immersed in events as a race car driver. Or, to paraphrase Milton, they also move who only stand and wait.

A second consequence of the spacemotion continuum is that there is no such thing as eternity and no such thing as infinity. Nothing can exist outside the realm of change, and the universe is limited to whatever amount of energy was released at the Big Bang. Now, to be fair, from a phenomenological perspective, the dimensions of the universe are so gigantic that, experientially, they might as well be infinite and eternal. But from a philosophical perspective, the categories of eternity and infinity are not ontologically valid. They are asymptotes not entities.

Needless to say, all this flies in the face of virtually every religion that has ever taken root in human history. As someone deeply committed to traditional ethics, I am grateful to all religions for supporting ethical action and an ethical mindset. If there were no other way to secure ethics, then I would opt for religion for sure. But we know a lot more about the universe today than we did several thousand years ago, and so there is at least an opportunity to forge a modern narrative, one that can find in secular metaphysics a foundation for traditional values. That’s what The Infinite Staircase is seeking to do.

That’s what I think. What do you think?

Image Credit: Pixabay

The Event That Made Einstein an Icon

GUEST POST from Greg Satell

On April 3rd, 1921, a handful of journalists went to interview a relatively unknown scientist named Albert Einstein. When they arrived to meet his ship they found a crowd of thousands waiting for him, screaming with adulation. Surprised at his popularity, and charmed by his genial personality, the story of Einstein’s arrival made the front page in major newspapers.

It was all a bit of a mistake. The people in the crowd weren’t there to see Einstein, but Chaim Weizmann, the popular Zionist leader that Einstein was traveling with. Nevertheless, that’s how Einstein gained his iconic status. In a way, Einstein didn’t get famous because of relativity, relativity got famous because of Einstein.

This, of course, in no way lessens Einstein’s accomplishments, which were considerable. Yet as Albert-László Barabási, another highly accomplished scientist, explains in The Formula, there is a big difference between success and accomplishment. The truth is that success isn’t what you think it is but, with talent, persistence and some luck, anyone can achieve it.

There Is Virtually No Limit To Success, But There Is To Accomplishment

Einstein was, without a doubt, one of the great scientific minds in history. Yet the first half of the 20th century was a golden age for physics, with many great minds. Niels Bohr, Einstein’s sparring partner at the famous Bohr–Einstein debates (which Bohr is widely considered to have won), was at least as prominent. Yet Einstein towers over all of them.

It’s not just physicists, either. Why is it that Einstein has become a household name and not, say, Watson and Crick, who discovered the structure of DNA, an accomplishment at least as important as relativity? Even less known is Paul Erdős, the most prolific mathematician since Euler in the 18th century, who had an outrageous personality to boot.

For that matter, consider Richard Feynman, who is probably the second most famous physicist of the 20th century. He was, by all accounts, a man of great accomplishment and charisma. However, his fame is probably more due to his performance on TV following the Space Shuttle Challenger disaster than for his theory of quantum electrodynamics.

There are many great golfers, but only one Tiger Woods, just as there are many great basketball players, but only one LeBron James. The truth is that individual human accomplishment is bounded, but success isn’t. Tiger Woods can’t possibly hit every shot perfectly any more than LeBron James can score every point. But chances are, both will outshine all others in the public consciousness, which will drive their fame and fortune.

What’s probably most interesting about Einstein’s fame is that it grew substantially even as he ceased to be a productive scientist, long after he had become, as Robert Oppenheimer put it, “a landmark, not a beacon.”

Success Relies On Networks

Let’s try and deconstruct what happened after Einstein’s arrival in the United States. The day after thousands came to greet Weizmann and the reporters mistakenly assumed that they were there for Einstein, he appeared on the front pages of major newspapers like The New York Times and the Washington Post. For many readers, it may have been the first time they had heard of any physicist.

As I noted above, this period was something of a heyday for physics, with the basic principles of quantum mechanics first becoming established, so it was a topic that was increasingly discussed. Few could understand the details, but many remembered the genius with the crazy white hair they saw in the newspaper. When the subject of physics came up, people would discuss Einstein, which spread his name further.

Barabási himself established this principle of preferential attachment in networks, also known as the “rich get richer” phenomenon or the Matthew effect. When a particular node gains more connections than its rivals, it tends to gain future connections at a faster rate. Even a slight change in early performance leads to a major advantage going forward.
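
To make the “rich get richer” dynamic concrete, here is a minimal, hypothetical Python sketch of preferential-attachment growth in the spirit of the model Barabási describes (the function name, parameters and printed figures are mine, for illustration only):

```python
import random

def grow_network(n_nodes=1000, m=2, seed=42):
    """Toy preferential-attachment growth ("rich get richer").

    Start from a single edge, then attach each new node to m existing
    nodes chosen with probability roughly proportional to their degree.
    """
    random.seed(seed)
    degrees = {0: 1, 1: 1}   # two founding nodes joined by one edge
    endpoints = [0, 1]       # each edge lists both endpoints, so a uniform
                             # draw from this list is a degree-weighted draw
    for new_node in range(2, n_nodes):
        targets = set()
        while len(targets) < min(m, len(degrees)):
            targets.add(random.choice(endpoints))   # better-connected nodes win more often
        degrees[new_node] = 0
        for t in targets:
            degrees[new_node] += 1
            degrees[t] += 1
            endpoints.extend([new_node, t])
    return degrees

if __name__ == "__main__":
    degrees = grow_network()
    ranked = sorted(degrees.values(), reverse=True)
    print("Top 5 degrees: ", ranked[:5])              # early nodes hoard connections
    print("Median degree: ", ranked[len(ranked) // 2])
```

Run it and the earliest, best-connected nodes end up with many times the median degree, the same skew that preferential attachment produces in any connected system.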

In his book, Barabási details how this principle applies to things as diverse as petitions on Change.org, projects on Kickstarter and books on Amazon. It also applies to websites on the Internet, computers in a network and proteins in our bodies. Look at any connected system and you’ll see preferential attachment at work.

Small Groups, Loosely Connected

The civil rights movement will always be associated with Martin Luther King Jr., but he was far from a solitary figure. In fact, he was just one of the Big Six of civil rights. Yet few today speak of the others. The only one besides King still relatively famous today is John Lewis and that’s largely because of his present role as a US congressman.

None of these men was a solitary figure either; each led his own organization, such as the NAACP, the National Urban League and CORE, and these, in turn, had hundreds of local chapters. It was King’s connection to all of these that made him the historic icon we know today, because it was all of those small groups, loosely connected, that made up the movement.

In my book, Cascades, I explain how many movements fail to bring about change by trying to emulate events like the March on Washington without first building small groups, loosely connected, but united by a shared purpose. It is those, far more than any charismatic personality or inspirational speech, that make a movement powerful.

It also helps explain something about Einstein’s iconic status. He was on the ship with Weizmann not as a physicist, but as a Zionist activist, and that dual status connected him to two separate networks of loosely connected small groups, which enhanced his prestige. So it is quite possible, if not probable, that we equate Einstein with genius today and not, say, Bohr, because of his political activity as much as for his scientific talent.

Randomness Rewards Persistence

None of this should be taken to mean that Einstein could have become a legendary icon if he hadn’t made truly landmark discoveries. It was the combination of his prominence in the scientific community with the happy accident of Weizmann’s adoring crowds being mistaken for his own, that made him a historic figure.

Still, we can imagine an alternate universe in which Einstein becomes just as famous. He was, for example, enormously quotable and very politically active. (He was, at one time, offered the presidency of Israel.) So it is completely possible that some other event, combined with his very real accomplishments, would have catapulted him to fame. There is always an element of luck and randomness in every success.

Yet Einstein’s story tells us some very important things about what makes a great success. It is not, as many tell us, simply a matter of working hard to achieve something because human performance is, as noted above, bounded. You can be better than others, but not that much better. At the same time, it takes more than just luck. It is a combination of both and we can do much to increase our chances of benefiting from them.

Einstein was incredibly persistent, working for ten years on special relativity and another ten for general relativity. He was also a great connector, always working to collaborate with other scientists as well as political figures like Weizmann and even little girls needing help with their math homework. That’s what allowed him to benefit from loosely connected small groups.

Perhaps most importantly, these principles of persistence and connection are ones that any of us can apply. We might not all be Einsteins, but with a little luck, we just might make it someday.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: misterinnovation.com

The Eureka Moment Fallacy

GUEST POST from Greg Satell

In 1928, Alexander Fleming arrived at his lab to find that a mysterious mold had contaminated his Petri dishes and was eradicating the bacteria colonies he was trying to grow. Intrigued, he decided to study the mold. That’s how Fleming came to be known as the discoverer of penicillin.

Fleming’s story is one that is told and retold because it reinforces so much about what we love about innovation. A brilliant mind meets a pivotal moment of epiphany and — Eureka! — the world is forever changed. Unfortunately, that’s not really how things work. It wasn’t true in Fleming’s case and it won’t work for you.

The truth is that innovation is never a single event, but a process of discovery, engineering and transformation, which is why penicillin didn’t become commercially available until 1945 (and the drug was actually a different strain of the mold from the one Fleming had discovered). We need to stop searching for Eureka moments and get busy with the real work of innovating.

Learning To Recognize And Define Problems

Before Fleming, there was Ignaz Semmelweis, and to understand Fleming’s story it helps to understand that of his predecessor. Much like Fleming, Semmelweis was a bright young man of science who had a moment of epiphany. In Semmelweis’s case, he was one of the first to realize that infections could spread from doctor to patient.

That simple insight led him to institute a strict regime of hand washing at Vienna General Hospital. Almost immediately, the incidence of deadly childbed fever dropped precipitously. Yet his ideas were not accepted at the time and Semmelweis didn’t do himself any favors by refusing to format his data properly or to work collaboratively to build support for his ideas. Instead, he angrily railed against the medical establishment he saw as undermining his work.

Semmelweis would die in an insane asylum, ironically from an infection he contracted under care, and never got to see the germ theory of disease emerge from the work of people like Louis Pasteur and Robert Koch. That’s what led to the study of bacteriology, sepsis and Alexander Fleming growing those cultures that were contaminated by the mysterious mold.

When Fleming walked into his lab on that morning in 1928, he was bringing a wealth of experience to the problem. During World War I, he had watched many soldiers die from sepsis and had seen how applying antiseptic agents to wounds often made the problem worse. Later, he found that nasal secretions inhibited bacterial growth.

So when the chance discovery of penicillin happened, it was far from a single moment, but rather a “happy accident” that he had spent years preparing for.

Combining Domains

Today, we remember Fleming’s discovery of penicillin as a historic breakthrough, but it wasn’t considered to be so at the time. In fact, when it was first published in the British Journal of Experimental Pathology, nobody really noticed. The truth is that what Fleming discovered couldn’t have cured anybody. It was just a mold secretion that killed bacteria in a Petri dish.

Perhaps even more importantly, Fleming was ill-equipped to transform penicillin into something useful. He was a pathologist who largely worked alone. To transform his discovery into an actual cure, he would need chemists and other scientists, as well as experts in fermentation, manufacturing, logistics and many other things. To go from milliliters in the lab to metric tons in the real world is no trivial thing.

So Fleming’s paper lay buried in a scientific journal for ten years before it was rediscovered by a team led by Howard Florey and Ernst Chain at the University of Oxford. Chain, a world-class biochemist, was able to stabilize the penicillin compound and another member of the team, Norman Heatley, developed a fermentation process to produce it in greater quantities.

Because Florey and Chain led a larger team in a bigger lab, they also had the staff and equipment to perform experiments on mice, which showed that penicillin was effective in treating infections. However, when they tried to cure a human, they found that they were not able to produce enough of the drug. They simply didn’t have the capacity.

Driving A Transformation

By the time Florey and Chain had established the potential of penicillin it was already 1941 and England was at war, which made it difficult to find funding to scale up their work. Luckily, Florey had spent time working in the United States earlier in his career and was able to secure a grant to travel to America and continue the development of penicillin with US-based labs.

That collaboration produced two more important breakthroughs. First, they were able to identify a more powerful strain of the penicillin mold. Second, they developed a fermentation process utilizing corn steep liquor as a medium. Corn steep liquor was common in the American Midwest, but virtually unheard of back in England.

Still, they needed to figure out a way to scale up production and that was far beyond the abilities of research scientists. However, the Office of Scientific Research and Development (OSRD), a government agency in charge of wartime research, understood the potential of penicillin for the war effort and initiated an aggressive program, involving two dozen pharmaceutical companies, to overcome the challenges.

Working feverishly, they were able to produce enough penicillin to deploy the drug for D-Day in 1944 and saved untold thousands of lives. After the war was over, in 1945, penicillin was made commercially available, which touched off a “golden age” of antibiotic research and new drugs were discovered almost every year between 1950 and 1970.

Innovation Is Never A Single Event

The story of Fleming’s Eureka! moment is romantic and inspiring, but also incredibly misleading. It wasn’t one person and one moment that changed the world, but the work of many over decades that made an impact. As I explain in my book, Cascades, it is small groups, loosely connected, but united by a shared purpose that drive transformational change.

In fact, the development of penicillin involved not one, but a series of epiphanies. First, Fleming discovered penicillin. Then, Florey and Chain rediscovered Fleming’s work. Chain stabilized the compound, Heatley developed the fermentation process, other scientists identified the more powerful strain and corn steep liquor as a fermentation medium. Surely, there were many other breakthroughs involving production, logistics and treatment that are lost to history.

This is not the exception, but the rule. The truth is that the next big thing always starts out looking like nothing at all. For example, Jim Allison, who recently won the Nobel Prize for his development of cancer immunotherapy, had his idea rejected by pharmaceutical companies, much like the medical establishment dismissed Semmelweis back in the 1850s.

Yet Allison kept at it. He continued to pound the pavement, connecting and collaborating with others, and that’s why today he is hailed as a pioneer and a hero. That’s why we need to focus less on inventions and more on ecosystems. It’s never a single moment of Eureka! that truly changes the world, but many of them.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Are We Abandoning Science?

Are We Abandoning Science?

GUEST POST from Greg Satell

A recent Pew poll found that, while Americans generally hold scientific expertise in high regard, there are deep pockets of mistrust. For example, less than half of Republicans believe that scientists should take an active role in policy debates, and significant minorities question the transparency and integrity of scientific findings.

An earlier study done by researchers at Ohio State University found that, when confronted with scientific evidence that conflicted with their pre-existing views, such as the reality of climate change or the safety of vaccines, partisans would not only reject the evidence, but become hostile and question the objectivity of science.

This is a major problem, because if we are only willing to accept evidence that agrees with what we already think we know, we are unlikely to advance our understanding. Perhaps even worse, it opens us up to being influenced by pundits, people with strong opinions but questionable expertise. When we turn our backs on science, we turn our backs on truth.

The Rise Of Science

When René Descartes wrote “I think, therefore I am” in the mid-1600s, he was doing more than coining a clever phrase; he was making an argument for a rational world ruled by pure logic. He believed that you could find the answers to the problems you needed to solve merely by thinking about them clearly.

Yet Descartes and his rational movement soon ran out of steam. Many of the great minds that followed, such as John Locke and David Hume, took a more empirical view and argued that we can only truly understand the world around us through our experiences, however flawed and limited they may be.

It was this emphasis on experiences that led us to the concept of expertise. As the Renaissance and the Enlightenment gave way to the modern world, knowledge became specialized. It was no longer enough to think about things; the creation of knowledge came to be seen as arising from a scientific process of testing hypotheses through experiment.

This was a major shift, because you could no longer simply argue about things like how many angels could fit on the head of a pin; you actually had to put your ideas to the test. Others could then examine the same evidence and see whether they came to the same conclusions you did. Thinking about things wasn’t enough; you had to show that they worked in the real world.

The Soccer Ball You Can’t See

Science is a funny thing, full of chance discoveries, strange coincidences and unlikely moments of insight. In his book, The God Particle, the Nobel Prize-winning physicist Leon Lederman tells a metaphorical story about an alien race watching a soccer game to illustrate how science is practiced.

These aliens are very much like humans except that they cannot see black and white patterns. If they went to a soccer game, they would be utterly confused to see a bunch of guys running around a field for no apparent reason. They could come up with theories, formulas and other conjectures, but would fail to make useful predictions.

Eventually, one might notice a slight bulge in the net of the goal just as the crowd erupted in a cheer and come up with a crazy idea about an invisible ball. Through further observation, they could test the hypothesis and build evidence. Although they could never actually see the ball, they could catalogue its effects and use them to understand events.

His point is that science is not common sense. It deals with things that we do not directly experience, but nevertheless have concrete effects on the world we live in. Today, we live in a world of the visceral abstract, where oddball theories like relativity result in real innovations like microprocessors and the Internet.

Cargo Cult Science

Because so much of science deals with stuff we can’t directly experience, we need metaphors like Lederman’s story about the aliens to make sense of things. Part of the fun of science is letting your imagination run wild and seeing where things go. Then you can test those ideas to see if they actually reflect reality.

The problem is that pundits and flakes can do the same thing — let their imagination run wild — and never bother to test whether their ideas are true. Consider the anti-vax movement, which has no scientific basis, but has gone viral and led to a resurgence of diseases that had nearly been eradicated. Nevertheless, dressed up in scientific-sounding words, the idea that vaccines cause disease in children can be very convincing.

The physicist Richard Feynman called this cargo cult science, after a strange phenomenon observed on some islands in the South Pacific, in which tribes tried to mimic the use of technology. For example, they built mock airstrips in the hope that airplanes would appear with valuable cargo.

What makes science real is not fancy sounding words or white lab coats, but the fact that you work under certain constraints. You follow the scientific method, observe professional standards and subject your work to peer review. Pundits, on the other hand, do none of these things. Simply having an opinion on a subject will suffice.

The New Mysticism

Clearly, science is what created the modern world. Without science, you cannot have technology, and without technology, you cannot create prosperity. So, in purely economic terms, science is extremely important to our success as a society. We need science in order to progress.

Yet in broader terms, science is the search for truth. In a nutshell, science is the practice of coming up with testable statements and then seeing whether they hold up. That’s what separates Darwin’s theory of natural selection and the big bang from nonscientific theories: the former can be tested through experiment and observation, while the latter are matters of faith and belief.

Consider what Marco Rubio said in an interview with GQ about the age of the universe a few years ago:

“I think the age of the universe has zero to do with how our economy is going to grow. I’m not a scientist. I don’t think I’m qualified to answer a question like that. At the end of the day, I think there are multiple theories out there on how the universe was created and I think this is a country where people should have the opportunity to teach them all.”

Yet the big bang is not just a theory, but the result of a set of theories, including general relativity and quantum mechanics, combined with many observations over a period of decades. Students in physics class are supposed to learn about the big bang not to shape their religious beliefs, but because of its importance to those underlying theories.

And those concepts are central to our everyday lives. We use relativity to calibrate GPS satellites, so that we can find restaurants and target missiles. Quantum mechanics gave us lasers and microprocessors, from which we make barcode scanners and iPhones. In fact, the theories underlying the big bang are essential for our modern economy to function.

When we turn our backs on science, what we are left with is essentially a form of mysticism. We can listen to our inner voices to decide what we believe and, when faced with a competing idea, dismiss it as nothing more than someone else’s inner voice. Once we make truth a matter of opinion, we start our way down a slippery slope.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Is China Our New Sputnik Moment?

Is China Our New Sputnik Moment?

GUEST POST from Greg Satell

When the Soviets launched Sputnik, the first space satellite, into orbit in 1957, it was a wake-up call for America. Over the next year, President Eisenhower would sign the National Defense Education Act to spur science education, increase funding for research and establish NASA and DARPA to spur innovation.

A few years ago, a report by the Council on Foreign Relations (CFR) argued that we are at a similar point today, but with China. While we have been steadily decreasing federal investment in R&D over the past few decades, our Asian rival has been ramping up and now threatens our leadership in key technologies such as AI, genomics and quantum information technology.

Clearly, we need to increase our commitment to science and innovation and that means increasing financial investment. However, what the report makes clear is that money alone won’t solve the problem. We are, in several important ways, actually undermining our ability to innovate, now and in the future. We need to renew our culture of innovation in America.

Educating And Attracting Talent

The foundation of an innovation economy is education, especially in STEM subjects. Historically, America has been the world’s best educated workforce, but more recently we’ve fallen to fifth among OECD countries for post-secondary education. That’s alarming and something we will certainly need to reverse if we are to compete effectively.

Our educational descent can be attributed to three major causes. First, the rest of the world has become more educated, so the competition has become stiffer. The second is financing: tuition has nearly tripled in the last decade, and student debt has become so onerous that it now takes about 20 years to pay off four years of college. Third, we need to work harder to attract talented people to the United States.

The CFR report recommends developing a “21st century National Defense Education Act” to create scholarships in STEM areas and making it easier for foreign students to get Green Cards when they graduate from our universities. It also points out that we need to work harder to attract foreign talent, especially in high impact areas like AI, genomics and quantum computing.

Unfortunately, we seem to be going the other way. The number of international students coming to American universities is declining. Policies like the Muslim ban and concerns about gun violence are deterring scientific talent from coming here. The denial rate for H-1B visas increased from 4% in 2016 to 18% in the first quarter of 2019.

Throughout our history, it has been our openness to new people and new ideas that has made America exceptional. It’s a legitimate question whether that’s still true.

Building Technology Ecosystems

In the 1980s, the US semiconductor industry was on the ropes. Due to increased competition from low-cost Japanese manufacturers, American market share in the DRAM market fell from 70% to 20%. The situation not only had a significant economic impact, there were also important national security implications.

The federal government responded with two initiatives, the Semiconductor Research Corporation and SEMATECH, both of which were nonprofit consortiums that involved government, academia and industry. By the 1990s, American semiconductor manufacturers were thriving again.

Today, we have similar challenges with rare earth elements, battery technology and many manufacturing areas. The Obama administration responded by building similar consortiums to those that were established for semiconductors: The Critical Materials Institute for rare earth elements, JCESR for advanced batteries and the 14 separate Manufacturing Institutes.

Yet here again, we seem to be backsliding. The current administration has sought to slash funding for the Manufacturing Extension Partnership that supports small and medium sized producers. An addendum to the CFR report also points out that the administration has pushed for a 30% cut in funding for the national labs, which support much of the advanced science critical to driving American technology forward.

Supporting International Trade and Alliances

Another historical strength of the US economy has been our open approach to trade. The CFR report points out that our role as a “central node in a global network of research and development,” gave us numerous advantages, such as access to foreign talent at R&D centers overseas, investment into US industry and cooperative responses to global challenges.

However, the report warns that “the Trump administration’s indiscriminate use of tariffs against China, as well as partners and allies, will harm U.S. innovative capabilities.” It also faults the Trump administration for pulling out of the Trans-Pacific Partnership trade agreement, which would have bolstered our relationship with Asian partners and increased our leverage over China.

The tariffs undermine American industry in two ways. First, because many of the tariffs are on intermediate goods which US firms use to make products for export, we’re undermining our own competitive position, especially in manufacturing. Second, because trade partners such as Canada and the EU have retaliated against our tariffs, our position is weakened further.

Clearly, we compete in an ecosystem driven world in which power does not come from the top, but emanates from the center. Traditionally, America has positioned itself at the center of ecosystems by constantly connecting out. Now that process seems to have reversed itself and we are extremely vulnerable to others, such as China, filling the void.

We Need to Stop Killing Innovation in America

The CFR report, whose task force included such luminaries as Admiral William McRaven, former Google CEO Eric Schmidt and economist Laura Tyson, should set alarm bells ringing. Although the report was focused on national security issues, it pertains to general competitiveness just as well and the picture it paints is fairly bleak.

After World War II, America stood almost alone in the world in terms of production capacity. Through smart policy, we were able to transform that initial advantage into long-term technological superiority. Today, however, we have stiff competition in areas ranging from AI to synthetic biology to quantum systems.

At the same time, we seem to be doing everything we can to kill innovation in America. Instead of working to educate and attract the world’s best talent, we’re making it harder for Americans to attain higher education and for top foreign talent to come and work here. Instead of ramping up our science and technology programs, presidential budgets regularly recommend cutting them. Instead of pulling our allies closer, we are pushing them away.

To be clear, America is still at the forefront of science and technology, vying for leadership in every conceivable area. However, as global competition heats up and we need to be redoubling our efforts, we seem to be doing just the opposite. The truth is that our prosperity is not a birthright to which we are entitled, but a legacy that must be lived up to.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Top 10 Human-Centered Change & Innovation Articles of March 2023

Top 10 Human-Centered Change & Innovation Articles of March 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are March’s ten most popular innovation posts:

  1. Taking Care of Yourself is Not Impossible — by Mike Shipulski
  2. Rise of the Prompt Engineer — by Art Inteligencia
  3. A Guide to Effective Brainstorming — by Diana Porumboiu
  4. What Disruptive Innovation Really Is — by Geoffrey A. Moore
  5. The 6 Building Blocks of Great Teams — by David Burkus
  6. Take Charge of Your Mind to Reclaim Your Potential — by Janet Sernack
  7. Ten Reasons You Must Deliver Amazing Customer Experiences — by Shep Hyken
  8. Deciding You Have Enough Opens Up New Frontiers — by Mike Shipulski
  9. The AI Apocalypse is Here – 3 Reasons You Should Celebrate! — by Robyn Bolton
  10. Artificial Intelligence is Forcing Us to Answer Some Very Human Questions — by Greg Satell

BONUS – Here are five more strong articles published in February that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.