Tag Archives: quantum computing

The Runaway Innovation Train

GUEST POST from Pete Foley

In this blog, I return to, and expand on, a paradox that has concerned me for some time. Are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a double-edged sword. It brings huge benefits, but also commensurate risks.

Ostensibly, change is good. Because of technology, most of us live more comfortable lives than our ancestors did, and enjoy better health, greater longevity, and more leisure and abundance.

Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course innovation is much harder to quantify than transistors, but the general trend in innovation and change approximates exponential growth. The human Stone Age lasted about 300,000 years before ending around 3,000 BC with the advent of metalworking. The culture of the Egyptian pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison: literally from horses and carts delivering milk when she was a child in a poor part of SE London, to today’s world of self-driving cars and the exploration of our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum has experienced.

The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than around new experiences; it simply takes more mental effort to deal with new stuff than with familiar stuff. As a result we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.

Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and from far greater distances, than our ancestors ever dreamed of. The internet gives us unprecedented access to both information and connectivity, but it is also a source of misinformation and manipulation.

The Abundance Dichotomy: Innovation increases abundance, but it’s arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequalities in the distribution of the ‘wealth’ it creates. Behavioral science has shown consistently that humans make far more relative than absolute judgments. Being better off than our ancestors does little for us; we are far more interested in being better off than our peers, neighbors, or the people we compare ourselves to on Instagram. And therein lies yet another challenge. Social media means we now compare ourselves to far more people than past generations did, so the standards we judge ourselves against are higher than ever before.

Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them when they have already become a significant problem. Climate change is the poster child for this, as a huge unanticipated consequence of the industrial revolution; the same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.

The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few if any of us really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with less and less comprehensive understanding of the direct, let alone the unintended, consequences of these complex changes.

The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base almost inevitably leads to multiple further innovations. The connectivity and information access of the internet alone are driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone Age innovation was slow not because our ancestors lacked intelligence; to the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the networks to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’, we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. The trouble with chain reactions is that they can be very hard to control.

Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous invention. How does human culture exist for 50,000 years or more, and then ‘suddenly’ two people, Darwin and Wallace, come up with the theory of evolution independently and almost simultaneously? The same question applies to calculus (Newton and Leibniz), and to the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It’s not coincidence; it simply reflects that once all of the pieces of a puzzle are in place, somebody, and more likely multiple people, will inevitably make the connections and see the next step in the innovation chain.

But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. Unfortunately, the associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. Looking forward, just taking AI as an example, we can already somewhat anticipate some worrying possibilities. But what about the surprises, analogous to climate change, that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness when, despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.

Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police; North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to create initially, opens numerous doors for misuse that require far less resource.

The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Hahn and Strassmann in the late 1930s made the bomb inevitable, and set the stage for a race to turn theory into practice between the Allies and Nazi Germany. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or choose to legislate against them and, in so doing, cede that terrible power to the Nazis. Not an enviable choice.

Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing the work underground, or falling behind and deferring to others. The recent open letter from Meta to the EU chastising it for the potential economic impacts of its AI regulations may have dripped with self-interest, but that didn’t make it wrong: https://euneedsai.com/. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations and less conservative countries will still pursue the upside, and risk the downside. The cat is very much out of the bag.

Muddling Through:  The good news is that when faced with potentially perilous change in the past, we’ve muddled through.  Hopefully we will do so again.   We’ve avoided a nuclear holocaust, at least for now.  Social media has destabilized our social order, but hasn’t destroyed it, yet.  We’ve been through a pandemic, and come out of it, not unscathed, but still functioning.  We are making progress in dealing with climate change, and have made enormous strides in managing pollution.

Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge, mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will the side effects, nefarious uses, mistakes and unintended consequences that derive from it. The key factors that have helped us in the past are time and resources, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.

What can, or should, we do? I certainly don’t have simple answers. We’re all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We’ll never anticipate everything, and we’ll still get blindsided. And putting resources against ‘what if’ scenarios is always a hard sell. But maybe we need to go into sales mode.

Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource to predicting and managing its own risks?

With many emerging technologies, we are now where we were in the early 1900s with climate change. We are implementing massive, unpredictable change, and by definition have no idea what the unanticipated consequences will be. I personally think we’ll deal with climate change. It’s difficult to slow a leviathan that’s been building for over a hundred years, but we’ve taken the important first steps in acknowledging the problem, and are beginning to implement corrective action.

But big issues require big solutions. Long-term, I personally believe the most important thing for humanity is to escape the gravity well. Given the scale of our ability to drive global change, interplanetary colonization is not a luxury, but an essential. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage it, and avoid nuclear war or synthetic pandemics for long enough to get there. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.

Image credits: Microsoft Copilot


Top 10 Human-Centered Change & Innovation Articles of November 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are November’s ten most popular innovation posts:

  1. A Quantum Computing Primer — by Greg Satell
  2. Disagreements Can Be a Good Thing — by Mike Shipulski
  3. What’s Your Mindset — by Dennis Stauffer
  4. We Are Killing Innovation in America — by Greg Satell
  5. Two Kinds of Possible — by Dennis Stauffer
  6. Eddie Van Halen, Simultaneous Innovation and the AI Regulation Conundrum — by Pete Foley
  7. Five Secrets to Being a Great Team Player — by David Burkus
  8. Be Clear on What You Want — by Mike Shipulski
  9. Overcoming Your Assumptions — by Dennis Stauffer
  10. Four Things All Leaders Must Know About Digital Transformation — by Greg Satell


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.








A Quantum Computing Primer

GUEST POST from Greg Satell

Every once in a while, a technology comes along with so much potential that people can’t seem to stop talking about it. That’s fun and exciting, but it can also be confusing. Not all of the people who opine really know what they’re talking about and, as the cacophony of voices increases to a loud roar, it’s hard to know what to believe.

We’re beginning to hit that point with quantum computing. Listen to some and you imagine that you’ll be strolling down to your local Apple store to pick one up any day now. Others will tell you that these diabolical machines will kill encryption and bring global commerce to a screeching halt. None of this is true.

What is true though is that quantum computing is not only almost unimaginably powerful, it is also completely different from anything we’ve ever seen before. You won’t use a quantum computer to write emails or to play videos, but the technology will significantly impact our lives over the next decade or two. Here’s a basic guide to what you really need to know.

Computing In 3 Dimensions

Quantum computing, as any expert will tell you, uses quantum effects such as superposition and entanglement to compute, unlike digital computers that use strings of ones and zeros. Yet quantum effects are so confusing that the great physicist Richard Feynman once remarked that nobody, even world class experts like him, really understands them.

So instead of quantum effects, think of quantum computing as a machine that works in three dimensions rather than two dimensions, like digital computers. The benefits of this should be obvious: you can fit a lot more stuff into three dimensions than you can into two, so a quantum computer can handle vastly more complexity than the ones we’re used to.

Another added benefit is that we live in three dimensions, so quantum computers can simulate the systems we deal with every day, like those in materials and biological organisms. Digital computers can do this to some extent, but some information always gets lost translating the data from a three-dimensional world to a two-dimensional one, which leads to problems.

I want to stress that this isn’t exactly an accurate description of how quantum computers really work, but it’s close enough for you to get the gist of why they are so different and, potentially, so useful.
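To make that “more room” intuition a little more concrete, here is a minimal Python sketch (plain NumPy and textbook conventions only, not any particular vendor’s quantum SDK) showing that an n-qubit register is described by 2^n complex amplitudes, so each added qubit doubles the space a quantum computer works in.

```python
import numpy as np

# Single-qubit basis state |0> as a column vector.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def register(n):
    """State vector of n qubits, each placed in superposition."""
    state = H @ ket0                      # one qubit: (|0> + |1>) / sqrt(2)
    for _ in range(n - 1):
        state = np.kron(state, H @ ket0)  # each extra qubit doubles the amplitude count
    return state

for n in (1, 2, 10, 20):
    print(f"{n:2d} qubits -> {len(register(n)):,} amplitudes tracked at once")

# 20 qubits already mean 1,048,576 amplitudes. A classical simulation like this
# pays that exponential cost explicitly; the promise of a real quantum processor
# is that nature keeps track of those amplitudes for free.
```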

Coherence And Error Correction

Everybody makes mistakes and the same goes for machines. When you think of all the billions of calculations a computer makes, you can see how even an infinitesimally small error rate can cause a lot of problems. That’s why computers have error correction mechanisms built into their code to catch mistakes and correct them.

With quantum computers the problem is much tougher because they work with subatomic particles and these systems are incredibly difficult to keep stable. That’s why quantum chips need to be kept within a fraction of a degree of absolute zero. At even a sliver above that, the system “decoheres” and we won’t be able to make sense out of anything.

It also leads to another problem. Because quantum computers are so prone to error, we need a whole lot of quantum bits (or qubits) for each qubit that performs a logical function. In fact, with today’s technology, we need more than a thousand physical qubits (the kind that are in a machine) for each qubit that can reliably perform a logical function.

This is why fears of quantum computing killing encryption and destroying the financial system are mostly unfounded. The most advanced quantum computers today have only about 50 qubits, not nearly enough to crack anything. We will probably have machines that powerful in a decade or so, but by that time quantum-safe encryption should be fairly common.
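For a sense of the gap being described, here is a rough back-of-the-envelope calculation. The figures are illustrative assumptions only (estimates of the logical qubits needed to threaten RSA-style encryption vary widely with the algorithm and error rates), not published requirements.

```python
# Illustrative arithmetic only; both inputs below are assumptions made for the
# sake of the example, not established engineering figures.
logical_qubits_needed = 4_000   # order-of-magnitude guess for attacking RSA-2048
physical_per_logical = 1_000    # the error-correction overhead described above
todays_machines = 50            # roughly the largest machines when this was written

physical_qubits_needed = logical_qubits_needed * physical_per_logical
print(f"Physical qubits needed: ~{physical_qubits_needed:,}")                    # ~4,000,000
print(f"Factor short of that today: ~{physical_qubits_needed // todays_machines:,}x")
```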

Building Practical Applications

Because quantum computers are so different, it’s hard to make them efficient for the tasks that we use traditional computers for because they effectively have to translate two-dimensional digital problems into their three-dimensional quantum world. The error correction issues only compound the problem.

There are some problems, however, that they’re ideally suited to. One is to simulate quantum systems, like molecules and biological systems, which can be tremendously valuable for people like chemists, materials scientists and medical researchers. Another promising area is large optimization problems for use in the financial industry and helping manage complex logistics.

Yet the people who understand those problems know little about quantum computing. In most cases, they’ve never seen a quantum computer before and have trouble making sense out of the data they generate. So they will have to spend some years working with quantum scientists to figure it out and then some more years explaining what they’ve learned to engineers who can build products and services.

We tend to think of innovation as if it is a single event. The reality is that it’s a long process of discovery, engineering and transformation. We are already well into the engineering phase of quantum computing—we have reasonably powerful machines that work—but the transformation phase has just begun.

The End Of The Digital Revolution And A New Era Of Innovation

One of the reasons that quantum computing has been generating so much excitement is that Moore’s Law is ending. The digital revolution was driven by our ability to cram more transistors onto a silicon wafer, so once we are not able to do that anymore, a key avenue of advancement will no longer be viable.

So many assume that quantum computing will simply take over where digital computing left off. It will not. As noted above, quantum computers are fundamentally different from the ones we are used to. They use different logic, require different programming languages and algorithmic approaches, and are suited to different tasks.

That means the major impacts from quantum computers won’t hit for a decade or more. That’s not at all unusual. For example, although Apple came out with the Macintosh in 1984, it wasn’t until the late 90s that there was a measurable bump in productivity. It takes time for an ecosystem to evolve around a technology and drive a significant impact.

What’s most important to understand, however, is that the quantum era will open up new worlds of possibility, enabling us to manage almost unthinkable complexity and reshape the physical world. We are, in many ways, just getting started.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay







What Pundits Always Get Wrong About the Future

GUEST POST from Greg Satell

Peter Thiel likes to point out that we wanted flying cars, but got 140 characters instead. He’s only partly right. For decades futuristic visions showed everyday families zipping around in flying cars and it’s true that even today we’re still stuck on the ground. Yet that’s not because we’re unable to build one. In fact the first was invented in 1934.

The problem is not so much with engineering, but economics, safety and convenience. We could build a flying car if we wanted to, but to make one that can compete with regular cars is another matter entirely. Besides, in many ways, 140 characters are better than a flying car. Cars only let us travel around town, the Internet helps us span the globe.

That has created far more value than a flying car ever could. We often fail to predict the future accurately because we don’t account for our capacity to surprise ourselves, to see new possibilities and take new directions. We interact with each other, collaborate and change our priorities. The future that we predict is never as exciting as the one we eventually create.

1. The Future Will Not Look Like The Past

We tend to predict the future by extrapolating from the present. So if we invent a car and then an airplane, it seems only natural that we can combine the two. If a family has a car, then having one that flies seems like a logical next step. We don’t look at a car and dream up, say, a computer. So in 1934 we dreamed of flying cars, but not computers.

It’s not just optimists that fall prey to this fundamental error, but pessimists too. In Homo Deus, author and historian Yuval Noah Harari points to several studies that show that human jobs are being replaced by machines. He then paints a dystopian picture. “Humans might become militarily and economically useless,” he writes. Yeesh!

Yet the picture is not as dark as it may seem. Consider the retail apocalypse. Over the past few years, we’ve seen an unprecedented number of retail store closings. Those jobs are gone and they’re not coming back. You can imagine thousands of retail employees sitting at home, wondering how to pay their bills, just as Harari predicts.

Yet economist Michael Mandel argues that the data tell a very different story. First, he shows that the jobs gained from e-commerce far outstrip those lost from traditional retail. Second, he points out that the total e-commerce sector, including lower-wage fulfillment centers, has an average wage of $21.13 per hour, which is 27 percent higher than the $16.65 that the average worker in traditional retail earns.

So not only are more people working, they are taking home more money too. Not only is the retail apocalypse not a tragedy, it’s somewhat of a blessing.

2. The Next Big Thing Always Starts Out Looking Like Nothing At All

Every technology eventually hits theoretical limits. Buy a computer today and you’ll find that the technical specifications are much like they were five years ago. When a new generation of iPhones comes out these days, reviewers tout the camera rather than the processor speed. The truth is that Moore’s law is effectively over.

That seems tragic, because our ability to exponentially increase the number of transistors that we can squeeze onto a silicon wafer has driven technological advancement over the past few decades. Every 18 months or so, a new generation of chips has come out and opened up new possibilities that entrepreneurs have turned into exciting new businesses.

What will we do now?

Yet there’s no real need to worry. There is no 11th commandment that says, “Thou shalt compute with ones and zeros” and the end of Moore’s law will give way to newer, more powerful technologies, like quantum and neuromorphic computing. These are still in their nascent stage and may not have an impact for at least five to ten years, but will likely power the future for decades to come.

The truth is that the next big thing always starts out looking like nothing at all. Einstein never thought that his work would have a practical impact during his lifetime. When Alexander Fleming first discovered penicillin, nobody noticed. In much the same way, the future is not digital. So what? It will be even better!

3. It’s Ecosystems, Not Inventions, That Drive The Future

When the first automobiles came to market, they were called “horseless carriages” because that’s what everyone knew and was familiar with. So it seemed logical that people would use them much like they used horses, to take the occasional trip into town and to work in the fields. Yet it didn’t turn out that way, because driving a car is nothing like riding a horse.

So first people started taking “Sunday drives” to relax and see family and friends, something that would be too tiring to do regularly on a horse. Gas stations and paved roads changed how products were distributed and factories moved from cities in the north, close to customers, to small towns in the south, where land and labor were cheaper.

As the ability to travel increased, people started moving out of cities and into suburbs. When consumers could easily load a week’s worth of groceries into their cars, corner stores gave way to supermarkets and, eventually, shopping malls. The automobile changed a lot more than simply how we got from place to place. It changed our way of life in ways that were impossible to predict.

Look at other significant technologies, such as electricity and computers, and you find a similar story. It’s ecosystems, rather than inventions, that drive the future.

4. We Can Only Validate Patterns Going Forward

G. H. Hardy once wrote that, “a mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.” Futurists often work the same way, identifying patterns in the past and present, then extrapolating them into the future. Yet there is a substantive difference between patterns that we consider to be preordained and those that are to be discovered.

Think about Steve Jobs and Appl for a minute and you will probably recognize the pattern and assume I misspelled the name of his iconic company by forgetting to include the “e” at the end. But I could just as easily have been about to describe an “Applet” he designed for the iPhone, or some connection between Jobs and Appleton, WI, a small town outside Green Bay.

The point is that we can only validate patterns going forward, never backward. That, in essence, is what Steve Blank means when he says that business plans rarely survive first contact with customers and why his ideas about lean startups are changing the world. We need to be careful about the patterns we think we see. Some are meaningful. Others are not.

The problem with patterns is that the future is something we create, not some preordained plan that we are beholden to. The things we create often become inflection points and change our course. That may frustrate the futurists, but it’s what makes life exciting for the rest of us.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay







The Coming Innovation Slowdown

GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radio, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative—and possibly even more so—than those that happened in the 1920s and 30s. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover you won’t invent. And if you don’t invent, you will be disrupted eventually, it’s just a matter of time. It’s always better to prepare than to adapt and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels







A Brave Post-Coronavirus New World

GUEST POST from Greg Satell

In 1973, in the wake of the Arab defeat in the Yom Kippur War with Israel, OPEC instituted an oil embargo on America and its allies. The immediate effects of the crisis were a surge in gas prices and a recession in the West. The ripple effects, however, were far more complex and played out over decades.

The rise in oil prices brought much-needed hard currency to the Soviet Union, prolonging its existence and setting the stage for its later demise. The American auto industry, with its passion for big, gas-guzzling cars, lost ground to emerging, more fuel-efficient competitors. The new consciousness of conservation led to the establishment of the Department of Energy.

Today the Covid-19 crisis has given a shock to the system and we’re at a similar inflection point. The most immediate effects have been economic recession and the rapid adoption of digital tools, such as video conferencing. Over the next decade or so, however, the short-term impacts will combine with other more longstanding trends to reshape technology and society.

Pervasive Transformation

We tend to think about innovation as if it were a single event, but the truth is that it’s a process of discovery, engineering and transformation, which takes decades to run its course. For example, Alan Turing discovered the principles of a universal computer in 1936, but it wasn’t until the 1950s and 60s that digital computers became commercially available.

Even then, digital technology didn’t really begin to become truly transformational until the mid-90s. By this time, it was well enough understood to make the leap from highly integrated systems to modular ecosystems, making the technology cheaper, more functional and more reliable. The number of applications exploded and the market grew quickly.

Still, as the Covid-19 crisis has made clear, we’ve really just been scratching the surface. Although digital technology certainly accelerated the pace of work, it did fairly little to fundamentally change the nature of it. People still commuted to work in an office, where they would attend meetings in person, losing hours of productive time each and every day.

Over the next decade, we will see pervasive transformation. As Mark Zuckerberg has pointed out, once people can work remotely, they can work from anywhere, which will change the nature of cities. Instead of “offsite” meetings, we may very well have “onsite” meetings, where people travel from their home cities to headquarters for more active collaboration.

These trends will combine with nascent technologies like artificial intelligence and blockchain to revolutionize business processes and supply chains. Organizations that cannot adopt key technologies will very likely find themselves unable to compete.

The Rise of Heterogeneous Computing

The digital age did not begin with personal computers in the 70s and 80s, but started back in the 1950s with the shift from electromechanical calculating machines to transistor based mainframes. However, because so few people used computers back then—they were largely relegated to obscure back office tasks and complex scientific calculations—the transformation took place largely out of public view.

A similar process is taking place today with new architectures such as quantum and neuromorphic computing. While these technologies are not yet commercially viable, they are advancing quickly and will eventually become thousands, if not millions, of times more effective than digital systems.

However, what’s most important to understand is that they are fundamentally different from digital computers and from each other. Quantum computers will create incredibly large computing spaces that will handle unimaginable complexity. Neuromorphic systems, based on the human brain, will be massively powerful, vastly more efficient and more responsive.

Over the next decade we’ll be shifting to a heterogeneous computing environment, where we use different architectures for different tasks. Most likely, we’ll still use digital technology as an interface to access systems, but increasingly performance will be driven by more advanced architectures.

A Shift From Bits to Atoms

The digital revolution created a virtual world. My generation was the first to grow up with video games and our parents worried that we were becoming detached from reality. Then computers entered offices and Dan Bricklin created Visicalc, the first spreadsheet program. Eventually smartphones and social media appeared and we began spending almost as much time in the virtual world as we did in the physical one.

Essentially, what we created was a simulation economy. We could experiment with business models in our computers, find flaws and fix them before they became real. Computer-aided design (CAD) software allowed us to quickly and cheaply design products in bits before we got down to the hard, slow work of shaping atoms. Because it’s much cheaper to fail in the virtual world than the physical one, this made our economy more efficient.

Today we’re doing similar things at the molecular level. For example, digital technology was combined with synthetic biology to quickly sequence the Covid-19 virus. These same technologies then allowed scientists to design vaccines in days and to bring them to market in less than a year.

A parallel revolution is taking place in materials science, while at the same time digital technology is beginning to revolutionize traditional industries such as manufacturing and agriculture. The expanded capabilities of heterogeneous computing will accelerate these trends over the next few decades.

What’s important to understand is that we spend vastly more money on atoms than bits. Even at this advanced stage, information technologies only make up about 6% of GDP in advanced economies. Clearly, there is a lot more opportunity in the other 94%, so the potential of the post-digital world is likely to far outstrip anything we’ve seen in our lifetimes.

Collaboration is the New Competitive Advantage

Whenever I think back to when we got that first computer back in the 1980s, I marvel at how different the world was then. We didn’t have email or mobile phones, so unless someone was at home or in the office, they were largely unreachable. Without GPS, we had to either remember where things were or ask for directions.

These technologies have clearly changed our lives dramatically, but they were also fairly simple. Email, mobile and GPS were largely standalone technologies. There were, of course, technical challenges, but these were relatively narrow. The “killer apps” of the post-digital era will require a much higher degree of collaboration over a much more diverse set of skills.

To understand how different this new era of innovation will be, consider how IBM developed the PC. Essentially, they sent some talented engineers to Boca Raton for a year and, in that time, developed a marketable product. For quantum computing, however, it is building a vast network, including national labs, research universities, startups and industrial partners.

The same will be true of the post-Covid world. It’s no accident that Zoom has become the killer app of the pandemic. The truth is that the challenges we will face over the next decade will be far too complex for any one organization to tackle it alone. That’s why collaboration is becoming the new competitive advantage. Power will reside not at the top of hierarchies, but at the center of networks and ecosystems.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash







The Role of Quantum Computing in Future Innovations

GUEST POST from Chateau G Pato

In today’s rapidly evolving technological landscape, innovation is not merely a competitive edge but a necessity. At the heart of future technological advancements lies quantum computing, an enigmatic yet revolutionary field teetering on the brink of mainstream viability. Quantum computing’s potential is vast, with the promise of transforming industries and solving complex problems deemed intractable by classical computers. This article delves into the role of quantum computing in future innovations, highlighting how this powerful technology is poised to reshape our world.

Understanding Quantum Computing

Quantum computing is a paradigm shift from classical computing. While classical computers encode information in binary bits (0s and 1s), quantum computers use quantum bits, or qubits. Through properties such as superposition and entanglement, qubits allow certain calculations to run exponentially faster than they would on classical bits.

Superposition allows a qubit to exist in multiple states simultaneously, enabling quantum computers to explore a vast number of possibilities at once. Entanglement, another fundamental property, allows entangled qubits to remain correlated with each other, no matter the distance separating them. These unique features enable quantum computers to tackle problems involving vast combinatorial spaces, optimization, and simulation tasks with unprecedented efficiency.
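As a small, self-contained illustration of these two properties (a NumPy sketch of the textbook math, not code for any specific quantum platform), the two-qubit Bell state below is in superposition until measured, and its entanglement shows up as perfectly correlated outcomes: sampling it only ever yields 00 or 11, never 01 or 10.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>)/sqrt(2): an equal superposition of "both zero" and "both one".
bell = np.zeros(4, dtype=complex)
bell[0b00] = 1 / np.sqrt(2)
bell[0b11] = 1 / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(bell) ** 2

# Simulate measuring both qubits 1,000 times.
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
counts = {label: int((outcomes == label).sum()) for label in ["00", "01", "10", "11"]}
print(counts)  # roughly {'00': 500, '01': 0, '10': 0, '11': 500}
```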

Potential Innovations Through Quantum Computing

The potential impact of quantum computing spans many sectors, including healthcare, finance, chemistry, logistics, and artificial intelligence (AI). Here, we explore several promising areas where quantum computing could drive future innovations:

  • Drug Discovery and Material Science: Quantum computing can simulate molecules at the quantum level, which allows researchers to understand interactions and reactivity better. This capability could lead to discovering new drugs and materials far faster than today’s time-consuming trial-and-error experiments.
  • Optimization Problems: Complex optimization scenarios exist in logistics, supply chain management, and financial modeling. Quantum algorithms, notably the Quantum Approximate Optimization Algorithm (QAOA), have the potential to solve these rapidly and with greater accuracy (see the sketch after this list).
  • Cryptography and Security: Quantum computing challenges current cryptographic systems, threatening conventional encryption methods. However, it also provides pathways for creating potentially unbreakable encryption forms through quantum cryptography, like Quantum Key Distribution (QKD).
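To give a feel for the kind of problem QAOA targets, here is a tiny classical brute-force MaxCut solver on a hypothetical five-node graph. The graph and code are purely illustrative; the point is that the classical search space doubles with every node added, which is exactly the growth quantum optimization algorithms hope to tame.

```python
from itertools import product

# A small, made-up graph: edges between five nodes (0..4).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

def cut_size(assignment):
    """Number of edges crossing between the two groups."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Exhaustive search: 2**n possible ways to split the nodes into two groups.
best = max(product((0, 1), repeat=n), key=cut_size)
print("Best partition:", best, "cuts", cut_size(best), "edges")

# The search space doubles with every node; this exponential blow-up is what
# quantum optimization approaches such as QAOA aim to search more efficiently.
```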

Case Study 1: Transforming Healthcare with Quantum Computing

In the healthcare industry, the pharmaceutical giant GlaxoSmithKline (GSK) is exploring quantum computing to revolutionize drug discovery. The traditional drug discovery process is notoriously slow and expensive, often taking over a decade and costing billions to bring a new drug to market. Part of this immense challenge lies in correctly predicting how complex molecules will behave.

GSK has partnered with various quantum computing companies to accelerate molecular modeling and simulation tasks. By leveraging quantum algorithms, GSK can analyze how potential drug compounds interact with bodily proteins, simulating thousands, if not millions, of configurations. Early trials have demonstrated that this quantum-enhanced approach significantly reduces the time required for identifying viable compounds, thereby cutting down development times and costs drastically.

Case Study 2: Optimizing Global Logistics

World-leading logistics company DHL has embarked on quantum computing projects aiming to optimize its sprawling global operations. One significant challenge in logistics is route optimization under shifting conditions, a notoriously complex problem that classical approaches tackle slowly and often inefficiently.

DHL is piloting a quantum computing strategy to efficiently optimize supply chains and delivery routes, dramatically reducing fuel consumption and operational costs. By applying Quantum Approximate Optimization Algorithms in simulations, DHL identified optimal routes and strategies that would have been impossible with classical computers due to the sheer number of variables. Initial reports from pilot programs reveal savings of up to 15% in operational efficiency, showing the transformative potential when these quantum methodologies are applied at scale.
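For a sense of why route optimization is such a natural quantum target, here is an entirely illustrative classical sketch (hypothetical coordinates, and in no way DHL’s actual method): even a naive tour through a handful of stops requires examining a factorially growing number of orderings, which is why classical solvers fall back on heuristics at scale.

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical delivery stops as (x, y) coordinates; the depot is stop 0.
stops = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5), (4, 6)]

def route_length(order):
    """Total distance of a round trip from the depot through the stops in 'order'."""
    path = [0, *order, 0]
    return sum(dist(stops[a], stops[b]) for a, b in zip(path, path[1:]))

# Brute force over every visiting order of the five delivery stops (120 routes).
best = min(permutations(range(1, len(stops))), key=route_length)
print("Best route:", best, f"length {route_length(best):.2f}")

# Growth is factorial: 5 stops -> 120 routes, but 20 stops -> ~2.4e18 routes.
print("Candidate routes for 20 stops:", factorial(20))
```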

The Road Ahead

The journey towards fully realizing quantum computing’s potential is not without its challenges. Large-scale, error-free quantum computers are still in development, and will require advances in photonic, trapped-ion, and superconducting qubit technologies. Despite these hurdles, steady progress is being made, with government and private sectors investing heavily in research and development.

Quantum computing holds the promise of reshaping many facets of modern life, driving a future brimming with groundbreaking innovations. While it may take time, its transformative power cannot be overstated, pushing the boundaries of what’s possible in computing.

As we stand on the cusp of this quantum revolution, organizations must be strategic and foresighted, preparing to integrate quantum computing into their innovation roadmap. After all, in the realm of technology, those who embrace change and pioneer new frontiers set the stage for enduring leadership.

As we continue to explore and expand our understanding of quantum computing, we edge closer to a future where its immense potential is unleashed, driving innovation across domains and reshaping our world in unimaginable ways.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone all on the same page for change. Find out more about the methodology and tools, including the book Charting Change by following the link. Be sure and download the TEN FREE TOOLS while you’re here.

Image credit: Pixabay







Quantum Computing is the Next Frontier in Innovation

GUEST POST from Chateau G Pato

As we stand at the crossroads of technological evolution, quantum computing emerges as a groundbreaking force poised to redefine the landscape of innovation. Unlike traditional computers that use bits as the smallest unit of data, quantum computers utilize qubits, which have the ability to exist in multiple states at once, leading to computational capabilities that are exponentially faster and more powerful. This revolutionary technology holds the potential to transform industries, solve complex problems, and open up new avenues for creativity and progress.

The Promise of Quantum Computing

Quantum computing represents a paradigm shift, offering immense power for problem-solving in fields such as materials science, cryptography, and artificial intelligence. Because of their ability to perform complex calculations at unprecedented speeds, quantum computers can analyze and process enormous amounts of data, providing solutions that were previously unimaginable.

Case Study 1: Pharmaceutical Innovation

Pharmaceutical companies are at the forefront of leveraging quantum computing to accelerate drug discovery. One groundbreaking example is the collaboration between the tech giant IBM and the pharmaceutical leader GlaxoSmithKline (GSK). This partnership aims to use quantum computing to simulate molecular interactions at an atomic level, dramatically speeding up the discovery of new compounds. By accurately predicting how molecules interact, GSK hopes to streamline the development of new drugs, reducing the time and cost involved in bringing lifesaving treatments to market.

With quantum computing, researchers are now able to run simulations that capture the complexities of molecular dynamics, leading to a better understanding of drug efficacy and safety. As a result, this case study underscores the transformative potential of quantum computing in the pharmaceutical industry, promising to revolutionize how new therapies are developed and personalized for patients.

Case Study 2: Revolutionizing Transportation with Quantum Optimization

The transportation sector stands to gain immensely from quantum computing, particularly in the realm of optimization. Volkswagen, in collaboration with D-Wave, a pioneer in quantum computing systems, explored the use of quantum algorithms to improve traffic flow and reduce congestion in urban environments. The pilot project targeted reducing wait times and optimizing routes for city buses in Lisbon during the Web Summit.

By leveraging quantum computing to process and analyze real-time traffic data, the project demonstrated its potential to minimize traffic jams and enhance the overall efficiency of transportation networks. This case study illustrates how quantum computing can be an engine of innovation, offering solutions that create value not only for businesses but also for cities and their inhabitants by reducing travel time, cutting emissions, and improving the quality of urban life.

The Road Ahead

While quantum computing is still in its nascent stages, the potential it holds for catalyzing innovation across industries is undeniable. As we invest in research, development, and collaboration, it’s vital for organizations to envision how they can harness the power of quantum computing to address unique challenges and seize new opportunities. As we step into this new frontier, interdisciplinary partnerships and a keen focus on human-centered design will be essential to unlocking the full potential of quantum technologies.

In conclusion, quantum computing is not just the next frontier in innovation; it is a catalyst for the radical transformation of the technological landscape. By continuing to explore and invest in this extraordinary field, we open doors to limitless possibilities that promise to reshape our world for the better.

SPECIAL BONUS: The very best change planners use a visual, collaborative approach to create their deliverables. A methodology and tools like those in Change Planning Toolkit™ can empower anyone to become great change planners themselves.

Image credit: Pixabay







Competing in a New Era of Innovation

GUEST POST from Greg Satell

In 1998, the dotcom craze was going at full steam and it seemed like the entire world was turning upside down. So people took notice when economist Paul Krugman wrote that “by 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s.”

He was obviously quite a bit off base, but these types of mistakes are incredibly common. As the futurist Roy Amara famously put it, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The truth is that it usually takes about 30 years for a technology to go from an initial discovery to a measurable impact.

Today, as we near the end of the digital age and enter a new era of innovation, Amara’s point is incredibly important to keep in mind. New technologies, such as quantum computing, blockchain and gene editing will be overhyped, but really will change the world, eventually. So we need to do more than adapt, we need to prepare for a future we can’t see yet.

Identify A “Hair-On-Fire” Use Case

Today we remember the steam engine for powering factories and railroads. In the process, it made the first industrial revolution possible. Yet that’s not how it started out. Its initial purpose was to pump water out of coal mines. At the time, it would have been tough to get people to imagine a factory that didn’t exist yet, but pretty easy for owners to see that their mine was flooded.

The truth is that innovation is never really about ideas; it's about solving problems. So a technology that is still nascent doesn't gain traction in a large, established market, which by definition is already fairly well served, but in a hair-on-fire use case: a problem that somebody needs solved so badly that they almost literally have their hair on fire.

Early versions of the steam engine, such as Thomas Newcomen's, didn't work well and were ill-suited to running factories or driving locomotives. Still, flooded mines were a major problem, so mine owners were more tolerant of glitches and flaws. Later, after James Watt perfected the steam engine, it became more akin to the technology we remember now.

We can see the same principle at work today. Blockchain has not had much impact as an alternative currency, but has gained traction optimizing supply chains. Virtual reality has not really caught on in the entertainment industry, but is making headway in corporate training. That’s probably not where those technologies will end up, but it’s how they make money now.

So in the early stages of a technology, don't try to imagine how a perfected version would fit in; instead, find a problem that somebody needs solved so badly right now that they are willing to put up with some inconvenience.

The truth is that the “next big thing” never turns out like people think it will. Putting a man on the moon, for example, didn’t lead to flying cars like in the Jetsons, but instead to satellites that bring events to us from across the world, help us navigate to the corner store and call our loved ones from a business trip.

Build A Learning Curve

Things that change the world always arrive out of context, for the simple reason that the world hasn't changed yet. So when a new technology first appears, we don't really know how to use it. It takes time to learn how to leverage its advantages to create an impact.

Consider electricity, which, as the economist Paul David explained in a classic paper, was first used in factories to cut down on construction costs (steam engines were heavy and needed extra bracing). What wasn't immediately obvious was that electricity allowed factories to be designed to optimize workflow, rather than having to be arranged around the power source.

We can see the same forces at work today. Consider Amazon’s recent move to offer quantum computing to its customers through the cloud, even though the technology is so primitive that it has no practical application. Nevertheless, it is potentially so powerful—and so different from digital computing—that firms are willing to pay for the privilege of experimenting with it.
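As a sketch of what that early experimentation can look like, here is a minimal example assuming the Amazon Braket Python SDK (the amazon-braket-sdk package) and its bundled local simulator. The circuit is a textbook two-qubit Bell state rather than anything tied to a particular business problem; the value at this stage is the learning curve, not the answer.

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# A textbook two-qubit Bell-state circuit: enough to start building a learning
# curve on quantum programming without touching real (paid) hardware.
bell = Circuit().h(0).cnot(0, 1)

# The local simulator ships with the SDK; swapping in a managed cloud device
# is what turns experimentation into a line item on the AWS bill.
device = LocalSimulator()
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)  # expect roughly half '00' and half '11'
```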

The truth is that it’s better to prepare than it is to adapt. When you are adapting you are, by definition, already behind. That’s why it’s important to build a learning curve early, before a technology has begun to impact your business.

Beware Of Switching Costs

When we look back today, it seems incredible that it took decades for factories to switch from steam to electricity. Besides the added construction costs for extra bracing, steam engines were dirty and inflexible. Every machine in the factory had to be tied to a single engine, so if that engine broke down or needed maintenance, the whole factory had to be shut down.

However, when you look at the investment from the perspective of a factory owner, things aren't so clear cut. While electricity was more attractive when building a new factory, junking an existing facility to make way for a new technology didn't make as much sense. So most factory owners kept what they had.
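A back-of-the-envelope calculation, with entirely hypothetical numbers, shows the factory owner's dilemma: even when electricity is clearly cheaper to run, the cost of scrapping a working steam plant can push the payback period past the life of the asset.

```python
# Hypothetical figures, purely to illustrate the switching-cost logic.
annual_savings_from_electricity = 5_000   # cheaper, more flexible operation
cost_to_scrap_and_rebuild = 60_000        # junking a working steam facility
remaining_life_of_steam_plant = 8         # years the current setup still has

payback_years = cost_to_scrap_and_rebuild / annual_savings_from_electricity
print(f"Payback period: {payback_years:.0f} years")

# Twelve years of payback on a plant with eight good years left: the rational
# move is to keep the steam engine and adopt electricity only when building new.
if payback_years > remaining_life_of_steam_plant:
    print("Keep the existing steam plant; switch at the next rebuild.")
```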

These types of switching costs still exist today. Consider neuromorphic chips, which are based on the architecture of the human brain and are therefore highly suited to artificial intelligence. They are also potentially millions of times more energy efficient than conventional chips. However, existing AI chips also perform very well, can be manufactured in conventional fabs and run conventional AI algorithms, so neuromorphic chips haven't caught on yet.

All too often, when a new technology emerges we only look at how its performance compares to what exists today and ignore the importance of switching costs—both real and imagined. That’s a big part of the reason we underestimate how long a technology takes to gain traction and underestimate how much impact it will have in the long run.

Find Your Place In The Ecosystem

We tend to see history through the lens of inventions: Watt and his steam engine. Edison and his light bulb. Ford and his assembly line. Yet building a better mousetrap is never enough to truly change the world. Besides the need to identify a use case, build a learning curve and overcome switching costs, every new technology needs an ecosystem to truly drive the future.

Ford’s automobiles needed roads and gas stations, which led to supermarkets, shopping malls and suburbs. Electricity needed secondary inventions, such as home appliances and radios, which created a market for skilled technicians. It is often in the ecosystem, rather than the initial invention, where most of the value is produced.

Today, we can see similar ecosystems beginning to form around emerging technologies. The journal Nature published an analysis which showed that over $450 million was invested in more than 50 quantum startups between 2012 and 2018, but only a handful are actually making quantum computers. The rest are helping to build out the ecosystem.

So for most of us, the opportunities in the post-digital era won't come from creating new technologies themselves, but from the ecosystems they create. That's where we'll see new markets emerge, new jobs created, and new fortunes made.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay







Importance of Long-Term Innovation

Importance of Long-Term Innovation

GUEST POST from Greg Satell

Scientists studying data from Mars recently found that the red planet may have oceans' worth of water embedded in its crust in addition to the ice caps at its poles. The finding is significant because, if we are ever to build a colony there, we will need access to water to sustain life and, eventually, to terraform the planet.

While it’s become fashionable for people to lament short-term thinking and “quarterly capitalism,” it’s worth noting that there are a lot of people working on—and a not insignificant amount of money invested in—colonizing another world. Many dedicate entire careers to a goal they do not expect to be achieved in their lifetime.

The truth is that there is no shortage of organizations willing to invest for the long term. In fact, nascent technologies that are unlikely to pay off for years can still attract significant investment. The challenge is to come up with a vision that is compelling enough to inspire others, while being practical enough that you can actually make it happen.

The Road to a Miracle Vaccine

When the FDA announced that it was granting an emergency use authorization for Covid-19 vaccines, everybody was amazed at how quickly they were developed. That sense of wonder only increased when it was revealed that they were designed in a mere matter of days. Traditionally, vaccines take years, if not decades, to develop.

Yet appearances can be deceiving. What looked like a 10-month sprint to a miracle cure was actually the culmination of a three-decade effort that started in the 90s with the vision of a young researcher named Katalin Karikó, who believed that a molecule called mRNA could hold the key to reprogramming our cells to produce specific proteins.

The problem was that, although in theory mRNA that reaches the cytoplasm can instruct our cellular machinery to produce any protein we want, our bodies tend to reject it. However, working with her colleague Drew Weissman, Karikó figured out that they could slip it past our natural defenses by slightly modifying the mRNA molecule.

It was that breakthrough that led two startup companies, Moderna and BioNTech, to license the technology and investors to back it. Still, it would take more than a decade and a pandemic before the bet paid off.

The Hard Road of Hard Tech

In the mid-90s when the Internet started to take off, companies with no profits soon began attracting valuations that seemed insane. Yet the economist W. Brian Arthur explained that under certain conditions—namely high initial investment, low or negligible marginal costs and network effects—firms could defy economic gravity and produce increasing returns.
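Arthur's conditions are easy to see with a few made-up numbers: when a large fixed cost is spread over near-zero marginal costs and every new user makes the product more valuable to the others, average cost collapses while network value soars as the user base grows.

```python
# Made-up numbers to illustrate increasing returns: high fixed cost, near-zero
# marginal cost, and network effects (value roughly proportional to users^2).
fixed_cost = 1_000_000
marginal_cost = 0.10
value_per_connection = 0.001

for users in (1_000, 100_000, 10_000_000):
    avg_cost = (fixed_cost + marginal_cost * users) / users
    network_value = value_per_connection * users ** 2
    print(f"{users:>10,} users: avg cost ${avg_cost:,.2f}, "
          f"total network value ${network_value:,.0f}")
```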

Arthur’s insight paved the way for the incredible success of Silicon Valley’s brand of venture-funded capitalism. Before long, runaway successes such as Yahoo, Amazon and Google made those who invested in the idea of increasing returns a mountain of money.

Yet the Silicon Valley model only works for a fairly narrow slice of technologies, mostly software and consumer gadgets. For other, so-called “hard technologies,” such as biotech, clean tech, materials science and manufacturing 4.0, the approach isn’t effective. There’s no way to rapidly prototype a cure for cancer or a multimillion-dollar piece of equipment.

Still, over the last decade a new ecosystem has been emerging that specifically targets these technologies. Some, like the LEEP programs at the National Laboratories, are government funded. Others, such as Steve Blank’s I-Corps program, focus on training scientists to become entrepreneurs. There are also increasingly investors who specialize in hard tech.

Look closely and you can see a subtle shift taking place. Traditionally, venture investors have been willing to take market risk but not technical risk. In other words, they wanted to see a working prototype, but were willing to take a flyer on whether demand would emerge. This new breed of investors is taking on technical risk with technologies, such as new sources of energy, for which there is little market risk if they can be made to work.

The Quantum Computing Ecosystem

At the end of 2019, Amazon announced Braket, a new quantum computing service that would offer access to technologies from companies such as D-Wave, IonQ, and Rigetti. They were not alone. IBM had already been building its network of quantum partners for years, one that included high-profile customers ranging from Goldman Sachs to ExxonMobil to Boeing.

Here’s the catch. Quantum computers can't yet be used by anybody for any practical purpose. There's nobody on earth who can tell you definitively how quantum computing should work or exactly what types of problems it can be used to solve. A number of different approaches are being pursued, but none of them has proved out yet.

Nevertheless, an analysis by Nature found that private funding for quantum computing is surging, and not just for hardware but also for enabling technologies like software and services. The US government has created a $1 billion quantum technology plan and has set up five quantum computing centers at the national labs.

So if quantum computing is not yet a proven technology why is it generating so much interest? The truth is that the smart players understand that the potential of quantum is so massive, and the technology itself so different from anything we’ve ever seen before, that it’s imperative to start early. Get behind and you may never catch up.

In other words, they’re thinking for the long-term.

A Plan Isn’t Enough, You Need To Have A Vision

It’s become fashionable to bemoan the influence of investors and blame them for short-termism and “quarterly capitalism,” but that’s just an excuse for failed leadership. If you look at the world’s most valuable companies—the ones investors most highly prize—you’ll find a very different story.

Apple’s Steve Jobs famously disregarded the opinions of investors (and just about everybody else). Amazon’s Jeff Bezos, who habitually keeps margins low in order to increase market share, has long been a Wall Street darling. Microsoft invested heavily in a research division aimed at creating technologies that may not pay off for years or even decades.

The truth is that it’s not enough to have a long-term plan, you have to have a vision to go along with it. Nobody wants to “wait” for profits, but everybody can get excited about a vision that inspires them. Who doesn’t get thrilled by the possibility of a colony on Mars, miracle cures, revolutionary new materials or a new era of computing?

Here’s the thing: Just because you’re not thinking long-term doesn’t mean somebody else isn’t and, quite frankly, if they are able to articulate a vision to go along with that plan, you don’t stand a chance. You won’t survive. So take some time to look around, to dream a little bit and, maybe, to be inspired to do something worthy of a legacy.

Not all who wander are lost.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels
