Tag Archives: quantum computing

Top 10 Human-Centered Change & Innovation Articles of November 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are November’s ten most popular innovation posts:

  1. A Quantum Computing Primer — by Greg Satell
  2. Disagreements Can Be a Good Thing — by Mike Shipulski
  3. What’s Your Mindset — by Dennis Stauffer
  4. We Are Killing Innovation in America — by Greg Satell
  5. Two Kinds of Possible — by Dennis Stauffer
  6. Eddie Van Halen, Simultaneous Innovation and the AI Regulation Conundrum — by Pete Foley
  7. Five Secrets to Being a Great Team Player — by David Burkus
  8. Be Clear on What You Want — by Mike Shipulski
  9. Overcoming Your Assumptions — by Dennis Stauffer
  10. Four Things All Leaders Must Know About Digital Transformation — by Greg Satell

BONUS – Here are five more strong articles published in October that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

A Quantum Computing Primer

GUEST POST from Greg Satell

Every once in a while, a technology comes along with so much potential that people can’t seem to stop talking about it. That’s fun and exciting, but it can also be confusing. Not all of the people who opine really know what they’re talking about and, as the cacophony of voices increases to a loud roar, it’s hard to know what to believe.

We’re beginning to hit that point with quantum computing. Listen to some and you imagine that you’ll be strolling down to your local Apple store to pick one up any day now. Others will tell you that these diabolical machines will kill encryption and bring global commerce to a screeching halt. None of this is true.

What is true though is that quantum computing is not only almost unimaginably powerful, it is also completely different than anything we’ve ever seen before. You won’t use a quantum computer to write emails or to play videos, but the technology will significantly impact our lives over the next decade or two. Here’s a basic guide to what you really need to know.

Computing In 3 Dimensions

Quantum computing, as any expert will tell you, uses quantum effects such as superposition and entanglement to compute, unlike digital computers that use strings of ones and zeros. Yet quantum effects are so confusing that the great physicist Richard Feynman once remarked that nobody, even world class experts like him, really understands them.

So instead of quantum effects, think of a quantum computer as a machine that works in three dimensions rather than the two dimensions of digital computers. The benefits of this should be obvious, because you can fit a lot more stuff into three dimensions than you can into two, so a quantum computer can handle vastly more complexity than the machines we’re used to.

Another benefit is that we live in three dimensions, so quantum computers can simulate the systems we deal with every day, like those in materials and biological organisms. Digital computers can do this to some extent, but some information always gets lost translating the data from a three-dimensional world to a two-dimensional one, which leads to problems.

I want to stress that this isn’t exactly an accurate description of how quantum computers really work, but it’s close enough for you to get the gist of why they are so different and, potentially, so useful.
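
The loose "three dimensions" analogy has a concrete counterpart: describing a register of n qubits classically takes 2^n complex amplitudes, so each added qubit doubles the state space. The short Python sketch below (using NumPy; the function names are mine, purely for illustration) shows the bookkeeping a classical simulator would have to do:

```python
import numpy as np

def state_vector_size(n_qubits: int) -> int:
    """A register of n qubits is described by 2**n complex amplitudes."""
    return 2 ** n_qubits

def make_uniform_superposition(n_qubits: int) -> np.ndarray:
    """An equal superposition over all 2**n basis states."""
    dim = state_vector_size(n_qubits)
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

# Each extra qubit doubles the amplitudes a classical machine must track.
for n in (1, 10, 50):
    print(n, state_vector_size(n))
```

At 50 qubits the state vector already has about 10^15 entries, which is why even today’s modest quantum machines are impractical to simulate exhaustively on digital hardware.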

Coherence And Error Correction

Everybody makes mistakes and the same goes for machines. When you think of all the billions of calculations a computer makes, you can see how even an infinitesimally small error rate can cause a lot of problems. That’s why computers have error correction mechanisms built into their code to catch mistakes and correct them.

With quantum computers the problem is much tougher because they work with subatomic particles and these systems are incredibly difficult to keep stable. That’s why quantum chips need to be kept within a fraction of a degree of absolute zero. At even a sliver above that, the system “decoheres” and we won’t be able to make sense out of anything.

It also leads to another problem. Because quantum computers are so prone to error, we need a whole lot of quantum bits (or qubits) for each qubit that performs a logical function. In fact, with today’s technology, we need more than a thousand physical qubits (the kind that are in a machine) for each qubit that can reliably perform a logical function.

This is why fears of quantum computing killing encryption and destroying the financial system are mostly unfounded. The most advanced quantum computers today have only about 50 qubits, not nearly enough to crack anything. We will probably have machines that strong in a decade or so, but by that time quantum-safe encryption should be fairly common.
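
Using only the figures from the text (roughly 1,000 physical qubits per reliable logical qubit, and machines of about 50 physical qubits today), a back-of-the-envelope calculation shows why encryption is safe for now. The function name and the hypothetical million-qubit machine are my own illustrations:

```python
PHYSICAL_PER_LOGICAL = 1_000  # error-correction overhead cited in the text

def logical_qubits(physical: int, overhead: int = PHYSICAL_PER_LOGICAL) -> int:
    """Reliable logical qubits a machine of a given physical size can support."""
    return physical // overhead

# Today's ~50-qubit machines support zero fully error-corrected logical qubits.
print(logical_qubits(50))         # 0
# Even a hypothetical million-qubit machine yields only 1,000 logical qubits.
print(logical_qubits(1_000_000))  # 1000
```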

Building Practical Applications

Because quantum computers are so different, it’s hard to make them efficient for the tasks that we use traditional computers for because they effectively have to translate two-dimensional digital problems into their three-dimensional quantum world. The error correction issues only compound the problem.

There are some problems, however, that they’re ideally suited to. One is to simulate quantum systems, like molecules and biological systems, which can be tremendously valuable for people like chemists, materials scientists and medical researchers. Another promising area is large optimization problems for use in the financial industry and helping manage complex logistics.

Yet the people who understand those problems know little about quantum computing. In most cases, they’ve never seen a quantum computer before and have trouble making sense out of the data these machines generate. So they will have to spend some years working with quantum scientists to figure it out and then some more years explaining what they’ve learned to engineers who can build products and services.

We tend to think of innovation as if it is a single event. The reality is that it’s a long process of discovery, engineering and transformation. We are already well into the engineering phase of quantum computing—we have reasonably powerful machines that work—but the transformation phase has just begun.

The End Of The Digital Revolution And A New Era Of Innovation

One of the reasons that quantum computing has been generating so much excitement is that Moore’s Law is ending. The digital revolution was driven by our ability to cram more transistors onto a silicon wafer, so once we are not able to do that anymore, a key avenue of advancement will no longer be viable.

So many assume that quantum computing will simply take over where digital computing left off. It will not. As noted above, quantum computers are fundamentally different than the ones we are used to. They use different logic, require different computing languages and algorithmic approaches and are suited to different tasks.

That means the major impacts from quantum computers won’t hit for a decade or more. That’s not at all unusual. For example, although Apple came out with the Macintosh in 1984, it wasn’t until the late 90s that there was a measurable bump in productivity. It takes time for an ecosystem to evolve around a technology and drive a significant impact.

What’s most important to understand, however, is that the quantum era will open up new worlds of possibility, enabling us to manage almost unthinkable complexity and reshape the physical world. We are, in many ways, just getting started.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay


What Pundits Always Get Wrong About the Future

GUEST POST from Greg Satell

Peter Thiel likes to point out that we wanted flying cars, but got 140 characters instead. He’s only partly right. For decades, futuristic visions showed everyday families zipping around in flying cars and it’s true that even today we’re still stuck on the ground. Yet that’s not because we’re unable to build one. In fact, the first was invented in 1934.

The problem is not so much with engineering, but economics, safety and convenience. We could build a flying car if we wanted to, but to make one that can compete with regular cars is another matter entirely. Besides, in many ways, 140 characters are better than a flying car. Cars only let us travel around town, the Internet helps us span the globe.

That has created far more value than a flying car ever could. We often fail to predict the future accurately because we don’t account for our capacity to surprise ourselves, to see new possibilities and take new directions. We interact with each other, collaborate and change our priorities. The future that we predict is never as exciting as the one we eventually create.

1. The Future Will Not Look Like The Past

We tend to predict the future by extrapolating from the present. So if we invent a car and then an airplane, it only seems natural that we can combine the two. If a family has a car, then having one that flies can seem like a logical next step. We don’t look at a car and dream up, say, a computer. So in 1934, we dreamed of flying cars, but not computers.

It’s not just optimists that fall prey to this fundamental error, but pessimists too. In Homo Deus, author and historian Yuval Noah Harari points to several studies that show that human jobs are being replaced by machines. He then paints a dystopian picture. “Humans might become militarily and economically useless,” he writes. Yeesh!

Yet the picture is not as dark as it may seem. Consider the retail apocalypse. Over the past few years, we’ve seen an unprecedented number of retail store closings. Those jobs are gone and they’re not coming back. You can imagine thousands of retail employees sitting at home, wondering how to pay their bills, just as Harari predicts.

Yet economist Michael Mandel argues that the data tell a very different story. First, he shows that the jobs gained from e-commerce far outstrip those lost from traditional retail. Second, he points out that the total e-commerce sector, including lower-wage fulfillment centers, has an average wage of $21.13 per hour, which is 27 percent higher than the $16.65 that the average worker in traditional retail earns.
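
Mandel’s wage comparison is easy to verify with the two figures quoted above; the quoted 27 percent premium is just the ratio of the two hourly averages:

```python
ecommerce_wage = 21.13  # average hourly wage, total e-commerce sector
retail_wage = 16.65     # average hourly wage, traditional retail

# Premium of e-commerce wages over traditional retail wages.
premium = (ecommerce_wage - retail_wage) / retail_wage
print(f"{premium:.0%}")  # 27%
```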

So not only are more people working, they are taking home more money too. Not only is the retail apocalypse not a tragedy, it’s somewhat of a blessing.

2. The Next Big Thing Always Starts Out Looking Like Nothing At All

Every technology eventually hits theoretical limits. Buy a computer today and you’ll find that the technical specifications are much like they were five years ago. When a new generation of iPhones comes out these days, reviewers tout the camera rather than the processor speed. The truth is that Moore’s law is effectively over.

That seems tragic, because our ability to exponentially increase the number of transistors that we can squeeze onto a silicon wafer has driven technological advancement over the past few decades. Every 18 months or so, a new generation of chips has come out and opened up new possibilities that entrepreneurs have turned into exciting new businesses.

What will we do now?

Yet there’s no real need to worry. There is no 11th commandment that says, “Thou shalt compute with ones and zeros” and the end of Moore’s law will give way to newer, more powerful technologies, like quantum and neuromorphic computing. These are still in their nascent stage and may not have an impact for at least five to ten years, but will likely power the future for decades to come.

The truth is that the next big thing always starts out looking like nothing at all. Einstein never thought that his work would have a practical impact during his lifetime. When Alexander Fleming first discovered penicillin, nobody noticed. In much the same way, the future is not digital. So what? It will be even better!

3. It’s Ecosystems, Not Inventions, That Drive The Future

When the first automobiles came to market, they were called “horseless carriages” because that’s what everyone knew and was familiar with. So it seemed logical that people would use them much like they used horses, to take the occasional trip into town and to work in the fields. Yet it didn’t turn out that way, because driving a car is nothing like riding a horse.

So first people started taking “Sunday drives” to relax and see family and friends, something that would be too tiring to do regularly on a horse. Gas stations and paved roads changed how products were distributed and factories moved from cities in the north, close to customers, to small towns in the south, where land and labor were cheaper.

As the ability to travel increased, people started moving out of cities and into suburbs. When consumers could easily load a week’s worth of groceries into their cars, corner stores gave way to supermarkets and, eventually, shopping malls. The automobile changed a lot more than simply how we got from place to place. It changed our way of life in ways that were impossible to predict.

Look at other significant technologies, such as electricity and computers, and you find a similar story. It’s ecosystems, rather than inventions, that drive the future.

4. We Can Only Validate Patterns Going Forward

G. H. Hardy once wrote that, “a mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.” Futurists often work the same way, identifying patterns in the past and present, then extrapolating them into the future. Yet there is a substantive difference between patterns that we consider to be preordained and those that are to be discovered.

Think about Steve Jobs and Appl for a minute and you will probably recognize the pattern and assume I misspelled the name of his iconic company by forgetting to include the “e” at the end. But I could just as easily have been about to describe an “Applet” he designed for the iPhone or some connection between Jobs and Appleton, WI, a small town outside Green Bay.

The point is that we can only validate patterns going forward, never backward. That, in essence, is what Steve Blank means when he says that business plans rarely survive first contact with customers and why his ideas about lean startups are changing the world. We need to be careful about the patterns we think we see. Some are meaningful. Others are not.

The problem with patterns is that the future is something we create, not some preordained plan that we are beholden to. The things we create often become inflection points and change our course. That may frustrate the futurists, but it’s what makes life exciting for the rest of us.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay


The Coming Innovation Slowdown

GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radios, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate out from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative—and possibly even more so—than those that happened in the 1920s and 30s. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover, you won’t invent. And if you don’t invent, you will be disrupted eventually; it’s just a matter of time. It’s always better to prepare than to adapt and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


A Brave Post-Coronavirus New World

GUEST POST from Greg Satell

In 1973, in the wake of the Arab defeat in the Yom Kippur war with Israel, OPEC instituted an oil embargo on America and its allies. The immediate effects of the crisis were a surge in gas prices and a recession in the west. The ripple effects, however, were far more complex and played out over decades.

The rise in oil prices brought much needed hard currency to the Soviet Union, prolonging its existence and setting the stage for its later demise. The American auto industry, with its passion for big, gas-guzzling cars, lost ground to emergent Japanese competitors. The new consciousness of conservation led to the establishment of the Department of Energy.

Today the Covid-19 crisis has given a shock to the system and we’re at a similar inflection point. The most immediate effects have been economic recession and the rapid adoption of digital tools, such as video conferencing. Over the next decade or so, however, the short-term impacts will combine with other more longstanding trends to reshape technology and society.

Pervasive Transformation

We tend to think about innovation as if it were a single event, but the truth is that it’s a process of discovery, engineering and transformation, which takes decades to run its course. For example, Alan Turing discovered the principles of a universal computer in 1936, but it wasn’t until the 1950s and 60s that digital computers became commercially available.

Even then, digital technology didn’t really begin to become truly transformational until the mid-90s. By this time, it was well understood enough to make the leap from highly integrated systems to modular ecosystems, making the technology cheaper, more functional and more reliable. The number of applications exploded and the market grew quickly.

Still, as the Covid-19 crisis has made clear, we’ve really just been scratching the surface. Although digital technology certainly accelerated the pace of work, it did fairly little to fundamentally change the nature of it. People still commuted to work in an office, where they would attend meetings in person, losing hours of productive time each and every day.

Over the next decade, we will see pervasive transformation. As Mark Zuckerberg has pointed out, once people can work remotely, they can work from anywhere, which will change the nature of cities. Instead of “offsite” meetings, we may very well have “onsite” meetings where people travel from their home cities to headquarters to do more active collaboration.

These trends will combine with nascent technologies like artificial intelligence and blockchain to revolutionize business processes and supply chains. Organizations that cannot adopt key technologies will very likely find themselves unable to compete.

The Rise of Heterogeneous Computing

The digital age did not begin with personal computers in the 70s and 80s, but started back in the 1950s with the shift from electromechanical calculating machines to transistor based mainframes. However, because so few people used computers back then—they were largely relegated to obscure back office tasks and complex scientific calculations—the transformation took place largely out of public view.

A similar process is taking place today with new architectures such as quantum and neuromorphic computing. While these technologies are not yet commercially viable, they are advancing quickly and will eventually become thousands, if not millions, of times more effective than digital systems.

However, what’s most important to understand is that they are fundamentally different from digital computers and from each other. Quantum computers will create incredibly large computing spaces that will handle unimaginable complexity. Neuromorphic systems, based on the human brain, will be massively powerful, vastly more efficient and more responsive.

Over the next decade we’ll be shifting to a heterogeneous computing environment, where we use different architectures for different tasks. Most likely, we’ll still use digital technology as an interface to access systems, but increasingly performance will be driven by more advanced architectures.

A Shift From Bits to Atoms

The digital revolution created a virtual world. My generation was the first to grow up with video games and our parents worried that we were becoming detached from reality. Then computers entered offices and Dan Bricklin created VisiCalc, the first spreadsheet program. Eventually smartphones and social media appeared and we began spending almost as much time in the virtual world as we did in the physical one.

Essentially, what we created was a simulation economy. We could experiment with business models in our computers, find flaws and fix them before they became real. Computer-aided design (CAD) software allowed us to quickly and cheaply design products in bits before we got down to the hard, slow work of shaping atoms. Because it’s much cheaper to fail in the virtual world than the physical one, this made our economy more efficient.

Today we’re doing similar things at the molecular level. For example, digital technology was combined with synthetic biology to quickly sequence the Covid-19 virus. These same technologies then allowed scientists to design vaccines in days and to bring them to market in less than a year.

A parallel revolution is taking place in materials science, while at the same time digital technology is beginning to revolutionize traditional industries such as manufacturing and agriculture. The expanded capabilities of heterogeneous computing will accelerate these trends over the next few decades.

What’s important to understand is that we spend vastly more money on atoms than bits. Even at this advanced stage, information technologies only make up about 6% of GDP in advanced economies. Clearly, there is a lot more opportunity in the other 94%, so the potential of the post-digital world is likely to far outstrip anything we’ve seen in our lifetimes.

Collaboration is the New Competitive Advantage

Whenever I think back to when we got that first computer back in the 1980s, I marvel at how different the world was then. We didn’t have email or mobile phones, so unless someone was at home or in the office, they were largely unreachable. Without GPS, we had to either remember where things were or ask for directions.

These technologies have clearly changed our lives dramatically, but they were also fairly simple. Email, mobile and GPS were largely standalone technologies. There were, of course, technical challenges, but these were relatively narrow. The “killer apps” of the post-digital era will require a much higher degree of collaboration over a much more diverse set of skills.

To understand how different this new era of innovation will be, consider how IBM developed the PC. Essentially, they sent some talented engineers to Boca Raton for a year and, in that time, developed a marketable product. For quantum computing, however, IBM is building a vast network, including national labs, research universities, startups and industrial partners.

The same will be true of the post-Covid world. It’s no accident that Zoom has become the killer app of the pandemic. The truth is that the challenges we will face over the next decade will be far too complex for any one organization to tackle it alone. That’s why collaboration is becoming the new competitive advantage. Power will reside not at the top of hierarchies, but at the center of networks and ecosystems.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Competing in a New Era of Innovation

GUEST POST from Greg Satell

In 1998, the dotcom craze was going at full steam and it seemed like the entire world was turning upside down. So people took notice when economist Paul Krugman wrote that “by 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s.”

He was obviously quite a bit off base, but these types of mistakes are incredibly common. As the futurist Roy Amara famously put it, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The truth is that it usually takes about 30 years for a technology to go from an initial discovery to a measurable impact.

Today, as we near the end of the digital age and enter a new era of innovation, Amara’s point is incredibly important to keep in mind. New technologies, such as quantum computing, blockchain and gene editing will be overhyped, but really will change the world, eventually. So we need to do more than adapt, we need to prepare for a future we can’t see yet.

Identify A “Hair-On-Fire” Use Case

Today we remember the steam engine for powering factories and railroads. In the process, it made the first industrial revolution possible. Yet that’s not how it started out. Its initial purpose was to pump water out of coal mines. At the time, it would have been tough to get people to imagine a factory that didn’t exist yet, but pretty easy for owners to see that their mine was flooded.

The truth is that innovation is never really about ideas, it’s about solving problems. So a nascent technology rarely gains traction in a large, established market, which by definition is already fairly well served, but in a hair-on-fire use case: a problem that somebody needs solved so badly that they almost literally have their hair on fire.

Early versions of the steam engine, such as Thomas Newcomen’s, didn’t work well and were ill-suited to running factories or driving locomotives. Still, flooded mines were a major problem, so mine owners were more tolerant of glitches and flaws. Later, after James Watt perfected the steam engine, it became more akin to the technology we remember now.

We can see the same principle at work today. Blockchain has not had much impact as an alternative currency, but has gained traction optimizing supply chains. Virtual reality has not really caught on in the entertainment industry, but is making headway in corporate training. That’s probably not where those technologies will end up, but it’s how they make money now.

So in the early stages of a technology, don’t try to imagine how a perfected version would fit in; find a problem that somebody needs solved so badly right now that they are willing to put up with some inconvenience.

The truth is that the “next big thing” never turns out like people think it will. Putting a man on the moon, for example, didn’t lead to flying cars like in the Jetsons, but instead to satellites that bring events to us from across the world, help us navigate to the corner store and call our loved ones from a business trip.

Build A Learning Curve

Things that change the world always arrive out of context, for the simple reason that the world hasn’t changed yet. So when a new technology first appears, we don’t really know how to use it. It takes time to learn how to leverage its advantages to create an impact.

Consider electricity, which as the economist Paul David explained in a classic paper, was first used in factories to cut down on construction costs (steam engines were heavy and needed extra bracing). What wasn’t immediately obvious was that electricity allowed factories to be designed to optimize workflow, rather than having to be arranged around the power source.

We can see the same forces at work today. Consider Amazon’s recent move to offer quantum computing to its customers through the cloud, even though the technology is so primitive that it has no practical application. Nevertheless, it is potentially so powerful—and so different from digital computing—that firms are willing to pay for the privilege of experimenting with it.

The truth is that it’s better to prepare than it is to adapt. When you are adapting you are, by definition, already behind. That’s why it’s important to build a learning curve early, before a technology has begun to impact your business.

Beware Of Switching Costs

When we look back today, it seems incredible that it took decades for factories to switch from steam to electricity. Besides the construction costs of the extra bracing they required, steam engines were dirty and inflexible. Every machine in the factory needed to be tied to one engine, so if the engine broke down or needed maintenance, the whole factory had to be shut down.

However, when you look at the investment from the perspective of a factory owner, things aren’t so clear cut. While electricity was relatively more attractive when building a new factory, junking an existing facility to make way for a new technology didn’t make as much sense. So most factory owners kept what they had.

These types of switching costs still exist today. Consider neuromorphic chips, which are based on the architecture of the human brain and therefore highly suited to artificial intelligence. They are also potentially millions of times more energy efficient than conventional chips. However, existing AI chips also perform very well, can be manufactured in conventional fabs and run conventional AI algorithms, so neuromorphic chips haven’t caught on yet.

All too often, when a new technology emerges we only look at how its performance compares to what exists today and ignore the importance of switching costs—both real and imagined. That’s a big part of the reason we underestimate how long a technology takes to gain traction and underestimate how much impact it will have in the long run.

Find Your Place In The Ecosystem

We tend to see history through the lens of inventions: Watt and his steam engine. Edison and his light bulb. Ford and his assembly line. Yet building a better mousetrap is never enough to truly change the world. Besides the need to identify a use case, build a learning curve and overcome switching costs, every new technology needs an ecosystem to truly drive the future.

Ford’s automobiles needed roads and gas stations, which led to supermarkets, shopping malls and suburbs. Electricity needed secondary inventions, such as home appliances and radios, which created a market for skilled technicians. It is often in the ecosystem, rather than the initial invention, where most of the value is produced.

Today, we can see similar ecosystems beginning to form around emerging technologies. The journal Nature published an analysis which showed that over $450 million was invested in more than 50 quantum startups between 2012 and 2018, but only a handful are actually making quantum computers. The rest are helping to build out the ecosystem.

So for most of us, the opportunities in the post-digital era won’t lie in creating new technologies themselves, but in the ecosystems they create. That’s where we’ll see new markets emerge, new jobs created and new fortunes made.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Importance of Long-Term Innovation


GUEST POST from Greg Satell

Scientists studying data from Mars recently found that the red planet may have oceans worth of water embedded in its crust in addition to the ice caps at its poles. The finding is significant because, if we are ever to build a colony there, we will need access to water to sustain life and, eventually, to terraform the planet.

While it’s become fashionable for people to lament short-term thinking and “quarterly capitalism,” it’s worth noting that there are a lot of people working on—and a not insignificant amount of money invested in—colonizing another world. Many dedicate entire careers to a goal they do not expect to be achieved in their lifetime.

The truth is that there is no shortage of organizations willing to invest for the long term. In fact, nascent technologies that are unlikely to pay off for years can still attract significant investment. The challenge is to come up with a vision compelling enough to inspire others, while being practical enough that you can actually make it happen.

The Road to a Miracle Vaccine

When the FDA announced that it was granting an emergency use authorization for Covid-19 vaccines, everybody was amazed at how quickly they had been developed. That sense of wonder only increased when it was revealed that they were designed in a mere matter of days. Traditionally, vaccines take years, if not decades, to develop.

Yet appearances can be deceiving. What looked like a 10-month sprint to a miracle cure was actually the culmination of a three-decade effort that started in the 90s with a vision of a young researcher named Katalin Karikó, who believed that a molecule called mRNA could hold the key to reprogramming our cells to produce specific protein molecules.

The problem was that, although in theory mRNA inside the cytoplasm could instruct our cell machinery to produce any protein we wanted, our bodies tend to reject it. However, working with her colleague Drew Weissman, Karikó figured out that they could slip it past our natural defenses by slightly modifying the mRNA molecule.

It was that breakthrough that led two startup companies, Moderna and BioNTech, to license the technology and investors to back it. Still, it would take more than a decade and a pandemic before the bet paid off.

The Hard Road of Hard Tech

In the mid-90s when the Internet started to take off, companies with no profits soon began attracting valuations that seemed insane. Yet the economist W. Brian Arthur explained that under certain conditions—namely high initial investment, low or negligible marginal costs and network effects—firms could defy economic gravity and produce increasing returns.
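Arthur’s three conditions can be illustrated with a toy model (every number below is hypothetical): one large fixed cost, a tiny marginal cost per user, and revenue that grows with the number of user pairs, a Metcalfe-style network effect.

```python
def profit(users, fixed=1_000_000, marginal=0.10, value_per_link=0.001):
    """Toy increasing-returns model. All parameters are hypothetical:
    a large up-front fixed cost, a tiny marginal cost per user, and
    revenue proportional to the number of user pairs (network effect)."""
    revenue = value_per_link * users * (users - 1) / 2
    cost = fixed + marginal * users
    return revenue - cost

# Deep losses at small scale, then profit accelerates with size:
for n in (10_000, 50_000, 100_000):
    print(n, round(profit(n)))
```

Because revenue grows roughly with the square of users while cost grows only linearly, returns increase with scale instead of diminishing, which is the “economic gravity” Arthur argued these firms could defy.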

Arthur’s insight paved the way for the incredible success of Silicon Valley’s brand of venture-funded capitalism. Before long, runaway successes such as Yahoo, Amazon and Google made those who invested in the idea of increasing returns a mountain of money.

Yet the Silicon Valley model only works for a fairly narrow slice of technologies, mostly software and consumer gadgets. For other, so-called “hard technologies,” such as biotech, clean tech, materials science and manufacturing 4.0, the approach isn’t effective. There’s no way to rapidly prototype a cure for cancer or a multimillion-dollar piece of equipment.

Still, over the last decade a new ecosystem has been emerging that specifically targets these technologies. Some, like the LEEP programs at the National Laboratories, are government funded. Others, such as Steve Blank’s I-Corps program, focus on training scientists to become entrepreneurs. There are also increasingly investors who specialize in hard tech.

Look closely and you can see a subtle shift taking place. Traditionally, venture investors have been willing to take market risk but not technical risk. In other words, they wanted to see a working prototype, but were willing to take a flyer on whether demand would emerge. This new breed of investors is taking on technical risk with technologies, such as new sources of energy, that carry little market risk if they can be made to work.

The Quantum Computing Ecosystem

At the end of 2019, Amazon announced Braket, a new quantum computing service that would utilize technologies from companies such as D-Wave, IonQ, and Rigetti. They were not alone. IBM had already been building its network of quantum partners for years which included high profile customers ranging from Goldman Sachs to ExxonMobil to Boeing.

Here’s the catch. Quantum computers can’t yet be used for any practical purpose. In fact, nobody on earth can tell you definitively how quantum computing should work or exactly what types of problems it can be used to solve. There are, in fact, a number of different approaches being pursued, but none of them has proved out yet.

Nevertheless, an analysis by Nature found that private funding for quantum computing is surging and not just for hardware, but enabling technologies like software and services. The US government has created a $1 billion quantum technology plan and has set up five quantum computing centers at the national labs.

So if quantum computing is not yet a proven technology why is it generating so much interest? The truth is that the smart players understand that the potential of quantum is so massive, and the technology itself so different from anything we’ve ever seen before, that it’s imperative to start early. Get behind and you may never catch up.

In other words, they’re thinking for the long-term.

A Plan Isn’t Enough, You Need To Have A Vision

It’s become fashionable to bemoan the influence of investors and blame them for short-term and “quarterly capitalism,” but that’s just an excuse for failed leadership. If you look at the world’s most valuable companies—the ones investors most highly prize—you’ll find a very different story.

Apple’s Steve Jobs famously disregarded the opinions of investors (and just about everybody else as well). Amazon’s Jeff Bezos, who habitually keeps margins low in order to increase market share, has long been a Wall Street darling. Microsoft invested heavily in a research division aimed at creating technologies that may not pay off for years or even decades.

The truth is that it’s not enough to have a long-term plan, you have to have a vision to go along with it. Nobody wants to “wait” for profits, but everybody can get excited about a vision that inspires them. Who doesn’t get thrilled by the possibility of a colony on Mars, miracle cures, revolutionary new materials or a new era of computing?

Here’s the thing: Just because you’re not thinking long-term doesn’t mean somebody else isn’t and, quite frankly, if they are able to articulate a vision to go along with that plan, you don’t stand a chance. You won’t survive. So take some time to look around, to dream a little bit and, maybe, to be inspired to do something worthy of a legacy.

Not all who wander are lost.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Exploring the Potential of Quantum Computing in Solving Complex Problems


GUEST POST from Chateau G Pato

Quantum computing has emerged as an exciting frontier in the field of computer science, promising to revolutionize problem-solving capabilities. By harnessing the unique properties of quantum mechanics, quantum computers have the potential to solve complex problems that are intractable for classical computers. In this thought leadership article, we will delve into the opportunities and challenges associated with quantum computing, while illustrating its potential through two compelling case studies.

Case Study 1: Drug Discovery Acceleration

The process of drug discovery is a time-consuming and expensive endeavor that typically involves screening large chemical databases for potential therapeutic compounds. Quantum computing offers a promising solution by enabling rapid exploration of chemical space. One notable case study involves the collaboration between IBM and pharmaceutical company Merck. By leveraging IBM’s Qiskit software platform and accessing IBM’s quantum systems, researchers at Merck were able to investigate various molecular configurations and accelerate the discovery of novel drug candidates. Quantum simulations provided valuable insights into the interactions of molecules at a quantum level, leading to more efficient drug design and reducing costs associated with traditional laboratory-based testing.

Case Study 2: Optimizing Supply Chain Management

Supply chains are often complex networks with numerous variables and interdependencies, making them difficult to optimize. Quantum computing holds tremendous potential in analyzing and streamlining supply chain processes. Volkswagen, in collaboration with Google and mobileX, explored the application of quantum computing in optimizing electric vehicle spare parts delivery. By utilizing Google’s quantum processors and advanced machine learning algorithms, they demonstrated how the quantum approach can significantly enhance route optimization, reduce transportation costs, and improve overall efficiency in supply chain management. The results showcased the immense potential of quantum computing in revolutionizing traditional logistics strategies.

Challenges and Future Considerations:

While the opportunities presented by quantum computing are undoubtedly transformative, challenges remain on the path to widespread adoption. Quantum systems are highly sensitive to noise and environmental factors, making it challenging to maintain stability and accuracy in computations. Building error-correcting mechanisms and scalable quantum hardware are pivotal for overcoming these hurdles. Furthermore, educating and training a workforce equipped with the required skill sets will be crucial.

To pave the way for the widespread implementation of quantum computing, collaboration between academia, industry, and governments is necessary. Investments in research and development, as well as infrastructure, are key to advancing quantum computing capabilities and fostering innovation.


Quantum computing holds immense potential in solving complex problems that are beyond the reach of classical computers. The case studies involving drug discovery acceleration and supply chain optimization highlight its promising applications in real-world scenarios. Though challenges persist, investments in research, collaboration, and skill development can help unlock the full potential of quantum computing. As the technology continues to evolve, organizations that leverage quantum computing will gain a significant competitive advantage, enabling breakthroughs in a wide array of industries and ultimately shaping a better future for humanity.

SPECIAL BONUS: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: Pixabay


Training Your Quantum Human Computer

Quantum Human Computing

What is quantum computing?

According to Wikipedia, “Quantum computing is the use of quantum phenomena such as superposition and entanglement to perform computation. Computers that perform quantum computations are known as quantum computers.”
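The jargon in that definition can be made concrete with a few lines of ordinary code. This is a toy two-amplitude model of a single qubit (not real quantum hardware or any particular SDK): the Hadamard gate puts the qubit into an equal superposition, and measurement collapses it with probabilities given by the squared amplitudes.

```python
import math
import random

# A single qubit as a pair of amplitudes: alpha|0> + beta|1>.
ket0 = (1.0, 0.0)

def hadamard(state):
    """Apply the Hadamard gate, which puts a basis state into an
    equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(state):
    """Collapse the qubit: returns 0 or 1 with probability equal to
    the squared magnitude of the corresponding amplitude."""
    a, _ = state
    return 0 if random.random() < abs(a) ** 2 else 1

plus = hadamard(ket0)      # amplitudes (~0.707, ~0.707)
p0 = abs(plus[0]) ** 2     # probability of measuring 0: 0.5
```

Run `measure(plus)` many times and roughly half the outcomes will be 0 and half 1; entanglement extends the same idea to four amplitudes shared across two qubits.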

Rather than try to explain all of the ins and outs of how quantum computing differs from traditional computing and why it matters, I encourage you to check out this YouTube video:

In case you were curious, according to the Guinness Book of World Records, the current record holder for quantum computing is a Google machine capable of processing 72 Quantum Bits. There is supposedly a machine in China capable of 76 Qubits, but it has yet to be fully recognized as the new record holder.

So, what does quantum computing have to do with humanity and the human brain and our collective future?

Is the human brain a quantum computer?

The easy answer is – we’re not sure – but scientists are conducting experiments to determine whether the human brain is capable of computing in a quantum way.

As the pace of change in our world accelerates and data proliferates, we will need to train our brains to rely less on traditional brute-force computing, going through every possibility one after another, and more on parallel processing, better pattern recognition, and the ability to see insights straight away.

Connect the Dots

But how can we train our brains?

There are many different ways to better prepare your brain as we move from the Information Age to the Age of Insight. Let me start you off with two good ones and invite you to add more in the comments:

1. Connect the Dots

Many of us grew up doing connect-the-dot puzzles, and they seemed pretty easy. But that is with visual cues. The image above shows a number of different visual cues. Connect-the-dot puzzles, especially ones without numbers or visual cues, are great proving grounds for improving your visual pattern recognition skills.


2. Word Jumbles

One of my favorites is the word game DAILY JUMBLE in my local newspaper. You can also play it online. The key here is not to use brute force to reorder the letters into a word, but to train your brain to just SEE THE WORD – instantly.
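For contrast, here is what the brute-force approach looks like when a computer does it: enumerate every ordering of the letters until one matches a dictionary word. The tiny word list is illustrative only; the point is that this is exactly the slow, conscious-analytical strategy the exercise asks you to train your brain to bypass.

```python
from itertools import permutations

# A tiny stand-in dictionary; a real solver would load a full word list.
WORDS = {"listen", "silent", "enlist", "stone", "notes"}

def unjumble(letters):
    """Brute force: try every ordering of the letters until one is a
    known word. For n letters this can mean up to n! attempts."""
    for perm in permutations(letters):
        candidate = "".join(perm)
        if candidate in WORDS:
            return candidate
    return None

print(unjumble("tsnoe"))  # prints "stone"
```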

Succeeding at this and other ways of training your brain to be more like a quantum computer involves getting better at removing your conscious analytical brain from the picture and letting other parts of your brain take over. It’s not easy. It takes practice – continual practice – because it is really hard to keep the analytical brain out of the way.

So, are you willing to give it a try?

Stay tuned for the next article in this series “The Age of Insight” …

Image credits: Utrecht University, Pixabay



Unraveling the Potential of Quantum Computing in Solving Complex Problems


GUEST POST from Chateau G Pato

In recent years, the field of quantum computing has captured the imagination of scientists, researchers, and technologists worldwide. Promising significant advancements over classical computers, quantum computing has the potential to revolutionize various industries by solving complex problems that were once considered insurmountable. With its ability to harness the principles of superposition and entanglement, quantum computing offers novel approaches to computation, unlocking new frontiers in fields such as cryptography, drug discovery, optimization, and modeling complex physical systems.

Case Study 1 – Cryptography

One of the most exciting prospects of quantum computing lies in its ability to break cryptographic codes that are currently deemed unbreakable by classical computers. Case in point: quantum algorithms such as Shor’s algorithm allow for the efficient factorization of large numbers, the difficulty of which is a crucial foundation of many encryption methods currently employed. To illustrate how this could impact various industries, let’s consider the financial sector. Banks and financial institutions rely on encryption to protect customers’ sensitive information and ensure secure online transactions. Should quantum computers become capable of breaking existing encryption algorithms, the financial industry would need to swiftly adapt by implementing quantum-resistant encryption methods. The ripple effect of quantum computing in cryptography extends beyond finance, affecting communication, military intelligence, and data security for various sectors worldwide.
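The link between period finding and factoring that Shor’s algorithm exploits can be sketched entirely classically. In the toy sketch below the period-finding loop is ordinary brute force; a quantum computer’s speedup comes from replacing exactly that step. The numbers and base are chosen for illustration only.

```python
from math import gcd

def find_period(a, n):
    """Classically brute-force the period r of a^x mod n: the smallest
    r with a^r = 1 (mod n). This is the step Shor's algorithm speeds up."""
    x, value = 1, a % n
    while value != 1:
        x += 1
        value = (value * a) % n
    return x

def shor_classical(n, a):
    """Recover nontrivial factors of n from the period of a mod n."""
    r = find_period(a, n)
    if r % 2:
        return None  # odd period: retry with a different base a
    half = pow(a, r // 2, n)
    p, q = gcd(half - 1, n), gcd(half + 1, n)
    if p in (1, n) or q in (1, n):
        return None
    return tuple(sorted((p, q)))

# Factoring 15 with base 7: the period is 4, yielding factors 3 and 5.
print(shor_classical(15, 7))  # (3, 5)
```

The brute-force loop takes time exponential in the number of digits of n, which is why classical computers cannot scale this attack; a quantum period-finding subroutine would make the whole procedure efficient.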

Case Study 2 – Drug Discovery

Another compelling case study showcasing the potential of quantum computing can be found in the field of drug discovery. The process of discovering new drugs is an intricate and time-consuming task involving extensive computational analysis. Quantum computing has the potential to significantly accelerate this process by simulating the behavior of molecules with unparalleled precision. By leveraging quantum algorithms, researchers can more accurately predict how drugs will interact with target molecules, reducing the need for costly and time-consuming laboratory experiments. This computational power could pave the way for the discovery of new drugs and the ability to personalize treatments based on an individual’s unique molecular makeup, revolutionizing healthcare and ultimately saving lives.

Additionally, quantum computing holds great promise in optimizing complex systems, offering solutions to previously intractable problems. Consider the logistics industry, which relies heavily on optimization algorithms to plan delivery routes, minimize costs, and decrease transportation time. Quantum computing could offer significant advancements in this field by dramatically improving the efficiency of optimization algorithms. By analyzing vast amounts of data and considering intricate variables, quantum computers could determine optimal routes, minimizing fuel consumption and reducing carbon emissions. Such advancements benefit not only the logistics industry but also have implications for supply chain management, traffic control, and urban planning, ultimately leading to more sustainable and efficient infrastructure.
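To give a sense of scale, the routing problem itself can be sketched with an exhaustive classical search over a hypothetical handful of delivery stops (all coordinates made up). The n! blow-up in the number of candidate routes is precisely what optimization heuristics, and prospectively quantum optimizers, aim to tame.

```python
from itertools import permutations

# Hypothetical instance: a depot and four delivery stops on a grid.
DEPOT = (0, 0)
STOPS = [(2, 3), (5, 1), (1, 6), (4, 4)]

def dist(p, q):
    """Straight-line distance between two points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def route_length(order):
    """Total length of the round trip: depot -> stops in `order` -> depot."""
    path = [DEPOT, *order, DEPOT]
    return sum(dist(a, b) for a, b in zip(path, path[1:]))

# Exhaustive search: 4 stops means only 24 candidate routes, but the
# count grows factorially, so this approach collapses at real-world scale.
best = min(permutations(STOPS), key=route_length)
```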

While these case studies provide a glimpse into the future capabilities of quantum computing, it is important to acknowledge that the field is still in its infancy. Overcoming the current challenges of maintaining qubits’ stability, error correction, and scaling remains critical for the practical implementation of quantum computers. However, tremendous strides have been made, and as technology continues to evolve, quantum computing holds the potential to unlock new frontiers and transform countless industries.


Unraveling the potential of quantum computing offers a new chapter in computational possibilities. The breakthroughs it can provide, from breaking encryption codes to accelerating drug discovery and optimizing complex systems, can transform industries and shape the world we live in. Embracing quantum computing’s potential opens up new avenues for innovation and brings us closer to solving complex problems that were once thought to be beyond the reach of classical computation. Let us embrace this frontier with curiosity, resilience, and collaboration, as we stand on the precipice of a quantum revolution.

SPECIAL BONUS: Braden Kelley’s Problem Finding Canvas can be a super useful starting point for doing design thinking or human-centered design.

“The Problem Finding Canvas should help you investigate a handful of areas to explore, choose the one most important to you, extract all of the potential challenges and opportunities and choose one to prioritize.”

Image credit: Pixabay
