Tag Archives: resiliency

Asking the Wrong Questions Gets You the Wrong Answers


GUEST POST from Greg Satell

“Greed… is good,” declared Gordon Gekko, the legendary character from the 80s hit film Wall Street. “Greed is right. Greed works. Greed clarifies, cuts through and captures the essence of the evolutionary spirit. Greed, in all of its forms; greed for life, for money, for love, knowledge has marked the upward surge of mankind.”

The line resonated because it answered a question that people cared deeply about at the time, “how can we become more efficient?” In the face of heightened competition from Japan’s doctrine of total quality management, American firms appeared too sclerotic to compete. Corporate raiders preaching shareholder capitalism offered an easy answer.

The results are clear. Since then, the stock market has crashed a number of times, the last one resulting in a Great Recession. Productivity growth has been depressed for half a century. The incidence of extreme weather events and pandemics like coronavirus is on the rise. Clearly, we’ve been getting the wrong answers. It’s time we started asking different questions.

How Can We Become More Resilient?

We’ve grown accustomed to a reasonably stable world in which disasters were relatively rare. In the 19th century, wars, epidemics and financial panics were relatively common. The 1930s and 40s saw a global depression and a world war that claimed 75 million lives. By 1945, almost all of Europe and large parts of Asia lay in ruins.

Yet out of the ashes, we built a new, more resilient world. Institutions like the United Nations, World Bank and the International Monetary Fund created platforms to solve problems on a global scale. Bretton Woods established a global financial system and the Marshall Plan rebuilt Europe. An emerging welfare state permanently altered the role of the public sector in society.

That began to change in the Go-Go 80s when we shifted our focus from resilience to output maximization. As economists developed exciting new financial engineering techniques, business and governments increased their tolerance for risk and loaded up on debt. Staid chief executives gave way to corporate raiders and tech moguls.

The result is that we’ve become more vulnerable to shocks. In addition to worrying levels of financial debt, we also have considerable environmental debt and infrastructure debt, even as threats from terrorism, cyberattacks, extreme weather events and, of course, pandemics increase. We desperately need to figure out how to increase our resilience.

Clearly, a capitalism that focuses solely on financial capital and ignores other forms, such as social capital, human capital, natural capital, etc., is far too narrowly construed. We need to get better at integrating Environmental, Social and Corporate Governance (ESG) metrics into how we evaluate organizational performance.

What is the Relationship Between Cause and Effect?

Even a young child understands that if she touches a hot stove, her burn was caused by the stove and that it is no coincidence that both happened at the same time. We would expect her to run to her mother crying, "The stove burnt my hand!" rather than "The pain in my hand coincided with touching the hot stove."

Yet our algorithms and equations have no way of making basic distinctions between correlation and causality, which makes it difficult to design interventions. For example, if you find a strong correlation between temperature readings and ice cream sales, you might conclude that moving the thermometer close to a heater will improve ice cream sales.
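The thermometer example can be made concrete in a few lines of Python. This is a toy simulation with invented coefficients, not a real model: both the thermometer reading and ice cream sales are driven by the weather, so the two correlate strongly, yet intervening on the reading has no causal path to sales.

```python
# Toy illustration of why correlation alone can't guide interventions.
import random
import statistics

random.seed(42)

weather = [random.uniform(10, 35) for _ in range(1000)]    # true temperature
thermometer = [w + random.gauss(0, 1) for w in weather]    # noisy reading
sales = [20 * w + random.gauss(0, 40) for w in weather]    # sales track weather

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"corr(thermometer, sales) = {pearson(thermometer, sales):.2f}")

# "Intervention": move every thermometer next to a heater (+15 degrees).
# Sales were generated from the weather, not from the reading, so this
# changes the readings but leaves sales exactly as they were.
heated = [t + 15 for t in thermometer]
print(f"corr(heated reading, sales) = {pearson(heated, sales):.2f}")
print("Sales are unchanged: the reading has no causal effect on them.")
```

The correlation survives the intervention untouched, which is precisely the problem: the statistic cannot tell us that manipulating the reading is useless while manipulating the weather is not.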

Now I admit that sounds a bit silly, but similar mistakes happen all the time. For example, if a correlation is found between certain zip codes, crime rates and recidivism, we will tend to design our systems to punish people from poor neighborhoods more harshly. In fact, there is abundant evidence that mistakes such as these are common.

Debates about correlation and causation may seem academic, but they have real world impacts. If we could incorporate causation into our machine learning algorithms, we would greatly increase the speed and likelihood of finding a cure to Covid-19. At this point, there is a nascent effort to build intelligent systems based on causal principles, but there haven’t been any practical breakthroughs yet.

What is the Right Thing to Do?

In modern times, acting ethically has been seen as a relatively simple matter. You try to be kind to people and don't lie, cheat or steal. In the classical sense, however, the study of ethics has been less about adhering to moral principles and more about trying to understand what the right thing to do is when there is no cut-and-dried answer.

Most important decisions, like those that involve Covid-19 policy, have tradeoffs. It’s not hard to get people to agree that we should do everything possible to save as many lives as we can. Yet it is also true that we need to think about people’s ability to earn a living as well. So coming up with a strategy that saves lives and minimizes economic impact is far from easy, especially when easing restrictions too early could lead to even greater economic and human costs.

As our technology becomes more powerful, more difficult questions emerge. Can we teach an algorithm to understand right from wrong? Who is accountable for decisions machines make? To what extent should artificial intelligence systems be auditable? Or consider the emerging field of synthetic biology. Clearly, it's giving us a leg up in fighting the coronavirus, but to what extent is it okay to alter the genetic code?

Part of the reason we were so unprepared for the Covid-19 pandemic is that most people were completely unaware of how dire the danger was. Clearly, we need a more public dialogue about the technologies we are building to achieve some kind of consensus of what the risks are and what we as a society are willing to accept. As we have seen, the consequences, financial and otherwise, can be catastrophic. We no longer have the luxury of acting cavalierly.

What Will It Take to Make Change Happen?

It should be obvious by now that things need to change. What’s not so obvious is how to bring change about. Theoretically, in a democracy you drive change forward by convincing a majority of your fellow citizens that it’s a good idea. However, research suggests otherwise. In fact, one study found that “when a majority—even a very large majority—of the public favors change, it is not likely to get what it wants.”

We see this play out in the real world as well. It has become common for those calling for change to organize a “March on Washington.” They make some noise for a while and then sputter out. In 2011, the Occupy Movement organized protests in over 950 cities across 62 countries, with little or nothing to show for it.

Yet it’s also misleading to suggest that shadowy special interests dictate what happens. While it is true that there are a number of rich and powerful forces, ranging from the Koch Brothers and George Soros to the NRA and Planned Parenthood, these forces are often in opposition to each other. They are better at blocking change than bringing it about.

As I explain in Cascades, change is not top-down or bottom-up but moves side-to-side. You need to mobilize people to influence institutions that have the power to effect change. Or, as Martin Luther King Jr's biographer put it, "A social movement that only moves people is merely a revolt. A movement that changes both people and institutions is a revolution."

We are where we are today because people convinced institutions that maximizing output was more important than stability and resilience, that correlation was more important than causation and that technology was ethically neutral. We know now that none of these things are true. If we are to come up with better answers, we need to start asking different questions.

— Article courtesy of the Digital Tonto blog
— Image credit: Business Insider

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of October 2022

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are October’s ten most popular innovation posts:

  1. Bridging the Gap Between Strategy and Reality — by Braden Kelley
  2. How Do You Judge Innovation: Guilty or Innocent? — by Robyn Bolton
  3. Scaling New Heights – Building Resilience — by Teresa Spangler
  4. What Great Transformational Leaders Learn from Their Failures — by Greg Satell
  5. Your Brand Isn’t the Problem — by Mike Shipulski
  6. What’s Next – Through the Looking Glass — by Braden Kelley
  7. Don’t Blame Quiet Quitting for a Broken Business Strategy — by Soren Kaplan
  8. The Ways Inflection Points Define Our Future — by Greg Satell
  9. How to Use TikTok for Marketing Your Business — by Shep Hyken
  10. Making Innovation the Way We Do Business (easy as ABC) — by Robyn Bolton

BONUS – Here are five more strong articles published in September that continue to resonate with people:

If you're not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last two years:


The Resilience Conundrum

From the Webb Space Telescope to Dishwashing Liquids


GUEST POST from Pete Foley

Many of us have been watching the spectacular photos coming from the Webb Space Telescope this week. It is a breathtaking example of innovation in action. But what grabbed my attention almost as much as the photos was the challenge of deploying it at the L2 Lagrange point. That not only required extraordinary innovation of core technologies, but also building unprecedented resilience into the design. Deploying a technology a million miles from Earth leaves little room for mistakes, or the opportunity for the kind of repairs that rescued the Hubble mission. Obviously the Webb team were acutely aware of this, and were painstaking in identifying and pre-empting 344 single points of failure, any one of which had the potential to derail it. The result is a triumph. But it is not without cost. Anticipating and protecting against those potential failures played a significant part in taking Webb billions over budget, and years behind its original schedule.

Efficiency versus Adaptability: Most of us will never face quite such an amazing but daunting challenge, or have the corresponding time and budget flexibility. But as an innovation community, and a planet, we are entering a phase of very rapid change as we try to quickly address really big issues, such as climate change and AI. And the speed, scope and interconnected complexity of that change make it increasingly difficult to build resilience into our innovations. This is compounded because a need for speed and efficiency often drives us towards narrow focus and increased specialization. That focus can help us move quickly, but we know from nature that the first species to go extinct in the face of environmental change are often the specialists, who are less able to adapt with their changing world. Efficiency often reduces resilience; it's another conundrum.

Complexity, Systems Effects and Collateral Damage. To pile on the challenges a little, the more breakthrough an innovation is, the less we understand about how it interacts at a systems level, or what secondary effects it may trigger. And secondary failures can be catastrophic. Takata airbags and the batteries in Samsung Galaxy phones were enabling, not core, technologies, but they certainly derailed the core innovations.

Designed Resiliency. One answer to this is to be more systematic about designing resilience into innovation, as the Webb team were. We may not be able to reach the equivalent of 344 points of failure, but we can be systematic about scenario planning, anticipating failure, and investing up front in buffering ourselves against risk. There are a number of approaches we can adopt to achieve this, which I’ll discuss in detail later.

The Resiliency Conundrum. But first let's talk just a little more about the resilience conundrum. For virtually any innovation, time and money are tight, yet anticipating potential failures is time-consuming and expensive. Worse, it rarely adds direct, or at least marketable, value. And when it does work, we often don't see the issues it prevents; we only notice resiliency when it fails. It's a classic trade-off, and one we face at all levels of innovation. For example, when I worked on dishwashing liquids at P&G, a slightly less glamorous field than space exploration, an enormous amount of effort went into maintaining product performance and stability under extreme conditions. Product could be transported in freezing or hot temperatures, and had to work in extremely hard or soft water. These conditions weren't typical, but they were possible. And the cost of protecting against these outliers was often disproportionately high.

And there again lies the trade-off. Design in too much resiliency, and we become inefficient and/or uncompetitive. But too little, and we risk a catastrophic failure like the Takata airbags. We need to find a sweet spot. And finding it is further complicated because we are entering an era of innovation and disruption where we are making rapid changes to multiple systems in parallel. Climate change is driving major structural change in energy, transport and agriculture, and advances in computing are changing how those systems are managed. With dishwashing, we made changes to the formula, but the conditions of use remained fairly constant, meaning we were pretty good at extrapolating what the product would have to navigate. The same applies with the Webb telescope, where conditions at the Lagrange point have not changed during the lifetime of the project. Today we typically have a more complex, moving target.

Low Carbon Energy. Much of the core innovation we are pursuing today is interdependent. As an example, consider energy. Replacing hydrocarbons with, for example, solar power is far more complex than simply swapping one source of energy for another. It impacts the whole energy supply system. Where and how it links into our grid, how we store it, unpredictable power generation based on weather, how much we can store, maintenance protocols, and how quickly we can turn supply up or down are just a few examples. We also create new feedback loops, as variables such as weather can impact both power generation and power usage concurrently. But we are not just pursuing solar; we are pursuing multiple alternatives, all of which have different challenges. And concurrent with changing our power source, we are also trying to switch automobiles, and transport in general, from hydrocarbons to electric power sourced from that same solar energy. This means attempting significant change in both supply and a key usage vector, changing two interdependent variables in parallel. Simply predicting the weather is tricky, but adding it to this complex set of interdependent variables makes surprises inevitable, and hence dialing in the right degree of resilience pretty challenging.

The Grass is Always Greener: And even if we anticipate all of that complexity, I strongly suspect we'll see more, rather than fewer, surprises than we expect. One lesson I've learned and re-learned in innovation is that the grass is always greener. We don't know what we don't know, in part because we cannot see the weeds from a distance. The devil often really is in the details, and there is nothing like moving from theory to practice, or from small to large scale, to ferret out all of the nasty little problems that plague nearly every innovation, but that are often unfathomable when we begin. Finding and solving these is an inherent part of virtually any innovation process, but it usually adds time and cost. There's a reason far more innovations run behind schedule than are delivered ahead of it!

It’s an exciting, but also perilous time to be innovating. But ultimately this is all manageable. We have a lot of smart people working on these problems, and so most of the obvious challenges will have contingencies.   We don’t have the relative time and budget of the Webb Space Telescope, and so we’ll inevitably hit a few unanticipated bumps, and we’ll never get everything right. But there are some things we can do to tip the odds in our favor, and help us find those sweet spots.

  1. Plan for overcapacity during transitions. If possible, don't shut down old supply chains until the new ones are fully established. If that is not possible, stockpile heavily as a buffer during the transition. This sounds obvious, but it's often a hard sell, as it can be a significant expense. Building inventory or capacity of an old product we don't really want to sell, and leaving it in place as we launch, doesn't excite anybody, but the cost of not having a buffer can be catastrophic.
  2. In complex systems, know the weakest link, and focus resilience planning on it. Whether it's a shortage of refills for a new device, packaging for a new product, or charging stations for an EV, innovation is only as good as its weakest link. This sounds obvious, but our bias is to focus on the difficult, core and most interesting parts of innovation, and pay less attention to peripherals. I've known a major consumer project to be held up for months because of a problem with a small plastic bottle cap, a tiny part of a much bigger project. This means looking at resilience across the whole innovation, the system it operates in, and beyond. It goes without saying that a network of compatible charging stations needs to precede any major EV rollout. But never forget, the weakest link may not be within our direct control. We recently had a number of EVs stranded in Vegas when a huge group of drivers left an event at a time when it was really hot. The large group overwhelmed the charging stations, and the high temperatures meant AC use limited the EVs' range, requiring more charging. It's a classic multivariable issue where two apparently unassociated triggers occur at once. And that is a case where the weakest link is visible. If we are not fully vertically integrated, resilience may require multiple sources or suppliers, just to protect us against potential failure points we are not aware of and things we cannot control.
  3. Avoid over-optimization too early. It's always tempting to squeeze as much cost as possible out of an innovation prior to launch. But innovation by its very nature disrupts a market, and creates a moving target. It triggers competitive responses and changes in consumer behavior, supply chains, and raw material demand. If we've optimized to the point of removing flexibility, this can mean trouble. Of course, some optimization is always needed as part of the innovation process, but nailing it down too tightly and too early is often a mistake. I've lost count of the number of initiatives I've seen that had to re-tool or change capacity post launch at a much higher cost than if they'd left some early flexibility and fine-tuned once the initial dust had settled.
  4. Design for the future, not the now. Again this sounds obvious, but we often forget that innovation takes time, and that, depending upon our cycle-time, the world may be quite different when we are ready to roll out than it was when we started. Again, Webb has an advantage here, as the Lagrange point won’t have changed much even in the years the project has been active. But our complex, interconnected world is moving very quickly, especially at a systems level, and so we have to build in enough flexibility to account for that.
  5. Run test markets or real-world experiments if at all possible. Again, this comes with trade-offs, but no simulation or lab test beats real-world experience. Whether it's software, a personal care product, or a solar panel array, the real world will throw challenges at us we didn't anticipate. Some will matter, some may not, but without real-world experience we will nearly always miss something. And the bigger our innovation, generally the more we miss. Sometimes we need to slow down to move fast, and avoid having to backtrack.
  6. Engage devil's advocates. The more interesting or challenging an innovation is, the easier it is to slip into narrow focus, and miss the big picture. Nobody loves having people from ‘outside’ poke holes in the idea they’ve been nurturing for months or years, but that external objectivity is hugely valuable, together with different expertise, perspectives and goals. And cast the net as wide as possible. Try to include people from competing technologies, with different goals, or from the broad surrounding system. There’s nothing like a fierce competitor, or people we disagree with, to find our weaknesses and sharpen an idea. Welcome the naysayers, and listen to them. Just because they may have a different agenda doesn’t mean the issues they see don’t exist.

Of course, this is all a trade-off. I started this with the brilliant Webb Space Telescope, an amazing innovation with extraordinary resilience, enabled by an enormous budget and a great deal of time and resources. As we move through the coming years we are going to be attempting innovation of at least comparable complexity on many fronts, on a far more planetary scale, and with far greater implications if we get it wrong. Resiliency was a critical part of the Webb Telescope's success. But with stakes as high as they are with much of today's innovation, I passionately believe we need to learn from that. And a lot of us can contribute to building that resiliency. It's easy to think of carbon-neutral energy, EVs, or AI as big, isolated innovations. But in reality they comprise and interface with many, many sub-projects. That's a lot of innovation, a lot of complexity, a lot of touch-points, a lot of innovators, and a lot of potential for surprises. A lot of us will be involved in some way, and we can all contribute. Resiliency is certainly not a new concept for innovation, but given the scale, stakes and implications of what we are attempting, we need it more than ever.

Image Credit: NASA, ESA, CSA, and STScI


The Era of Moving Fast and Breaking Things is Over


GUEST POST from Greg Satell

On July 16th, 1945, when the world’s first nuclear explosion shook the plains of New Mexico, the leader of the Manhattan Project, J. Robert Oppenheimer, quoted from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Clearly, he was troubled by what he had unleashed, and for good reason. The world was never truly the same after that.

Today, however, we have lost much of that reverence for the power of technology. Instead of proceeding deliberately and with caution, tech entrepreneurs have prided themselves on their willingness to “move fast and break things” and, almost reflexively, casually deride anyone who questions the practice as those who “don’t get it.”

It’s hard to see how, by any tangible metric, any of this has made us better off. We set out to disrupt industries, but disrupted people instead. It wasn’t always like this. Throughout our history we have asked hard questions and made good choices about technological progress. As we enter a new era of innovation, we desperately need to recapture some of that wisdom.

How We Put the Nuclear Genie Back in the Bottle

The story of nuclear weapons didn’t start with Oppenheimer, not by a long shot. In fact, if we were going to attribute the Manhattan Project to a single person, it would probably be a Hungarian immigrant physicist named Leo Szilard, who was one of the first to conceive of the possibility of a nuclear chain reaction.

In 1939, upon hearing of the discovery of nuclear fission in Germany he, along with fellow Hungarian emigre Eugene Wigner, decided that the authorities needed to be warned. Szilard then composed a letter warning of the possibility of a nuclear bomb that was eventually signed by Albert Einstein and sent to President Roosevelt. That’s what led to the American development program.

Yet after the explosions at Hiroshima and Nagasaki, many of the scientists who worked to develop the bomb wanted to educate the public of its dangers. In 1955, the philosopher Bertrand Russell issued a manifesto signed by a number of scientific luminaries. Based on this, a series of conferences at Pugwash, Nova Scotia were convened to discuss different approaches to protect the world from weapons of mass destruction.

These efforts involved far more than talk, but helped to shape the non-proliferation agenda and led to concrete achievements such as the Partial Test Ban Treaty. In fact, these contributions were so crucially important that the organizers of the Pugwash conferences were awarded the Nobel Peace Prize in 1995 and they continue even today.

Putting Limits On What We Do With the Code of Life

While the nuclear age started with a bang, the genetic age began with a simple article in the scientific journal Nature, written by two relatively unknown scientists named James Watson and Francis Crick, that described the structure of DNA. It was one of those few watershed moments when an entirely new branch of science arose from a single event.

The field progressed quickly and, roughly 20 years later, a brilliant researcher named Paul Berg discovered that you could merge human DNA with that from other living things, creating new genetic material that didn’t exist in nature. Much like Oppenheimer, Berg understood that, due to his work, humanity stood on a precipice and it wasn’t quite clear where the edge was.

He organized a conference at Asilomar State Beach in California to establish guidelines. Importantly, participation wasn’t limited to scientists. A wide swath of stakeholders were invited, including public officials, members of the media and ethical specialists. The result, now known as the Berg Letter, called for a moratorium on the riskiest experiments until the dangers were better understood. These norms were respected for decades.

Today, we’re undergoing another revolution in genomics and synthetic biology. New technologies, such as CRISPR and mRNA techniques, have opened up incredible possibilities, but also serious dangers. Yet here again, pioneers in the field like Jennifer Doudna are taking the lead in devising sensible guardrails and using the technology responsibly.

The New Economy Meets the New Era of Innovation

When Netscape went public in 1995, it hit like a bombshell. It was the first big Internet stock and, although originally priced at $14 per share, it opened at double that amount and quickly zoomed to $75. By the end of the day, it had settled back at $58.25. Still, a tiny enterprise with no profits was almost instantly worth $2.9 billion.

By the late 1990s, increased computing power combined with the Internet to create a new productivity boom. Many economists hailed the digital age as a “new economy” of increasing returns, in which the old rules no longer applied and a small initial advantage would lead to market dominance.

Yet today, it’s clear that the “new economy” was a mirage. Despite very real advances in processing speed, broadband penetration, artificial intelligence and other things, we seem to be in the midst of a second productivity paradox in which we see digital technology everywhere except in the economic statistics.

The digital revolution has been a real disappointment. In fact, when you look at outcomes, if anything we’re worse off. Rather than a democratized economy, market concentration has markedly increased in most industries. Income inequality in advanced economies has soared. In America wages have stagnated and social mobility has declined for decades. At the same time, social media has been destroying our mental health.

Now we’re entering a new era of innovation, in which we will unleash technologies much more powerful. New computing architectures like quantum and neuromorphic technologies will power things like synthetic biology and materials science to create things that would have seemed like science fiction a generation ago. We simply can no longer afford to be so reckless.

Shifting From Agility Toward Resilience

Moving fast and breaking things only seems like a good idea in a stable world. When you operate in a safe environment, it’s okay to take a little risk and see what happens. Clearly, we no longer live in such a world (if we ever did). Taking on more risk in financial markets led to the Great Recession. Being blasé about data security has nearly destroyed our democracy. Failure to prepare for a pandemic has nearly brought modern society to its knees.

Over the next decade, the dangers will only increase. We will undergo four major shifts in technology, resources, migration and demographics. To put that in perspective, a similar shift in demography was enough to make the 60s a tumultuous decade. We haven’t seen a confluence of so many disruptive forces since the 1920s and that didn’t end well.

Unfortunately, it’s far too easy to underinvest in mitigating the risk of a danger that may never come to fruition. Moving fast and breaking things can seem attractive because the costs are often diffuse. Although it has impoverished society as a whole and made us worse off in so many ways, it has created a small cadre of fabulously wealthy plutocrats.

Yet history is not destiny. We have the power to shape our path by making better choices. We can abandon the cult of disruption and begin to invest in resilience. In fact, we have to. By this point there should be no doubt that the dangers are real. The only question is whether we will act now or simply wait for it to happen and accept the consequences.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Rethinking Electric Vehicles and the Power Grid

Ford F150 Lightning Electric Truck

Ford just announced an electric truck for the masses, the Ford F-150 Lightning, with up to 300 miles of range starting at just under $40,000.

That is about as much detail as I’m going to go into about this new electric truck from Ford, and you won’t find me comparing it to Tesla’s Cybertruck or GM’s electric Hummer. I’ll leave that to the gearheads.

The purpose for today’s article on Human-Centered Change™ and Innovation is not to compare electric truck specifications, but instead to highlight a somewhat buried feature of the new Ford F-150 Lightning Electric Truck:

Ford is providing an 80-amp home charging station that completely charges the truck in eight hours, and allows buyers to easily use the truck to power their entire home for around three days in the event of an electricity outage.
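As a rough sanity check, those two numbers can be turned into a back-of-the-envelope calculation. The circuit voltage, charging efficiency and household consumption figures below are illustrative assumptions, not Ford specifications:

```python
# Back-of-envelope estimate of the F-150 Lightning's home-backup claim.
# Assumed values (not from Ford's spec sheet): a 240 V residential
# circuit, ~90% AC-to-battery charging efficiency, and a US-average
# household load of roughly 30 kWh per day.

CHARGER_AMPS = 80            # Ford's quoted home charger rating
CIRCUIT_VOLTS = 240          # assumed US split-phase residential circuit
CHARGE_HOURS = 8             # Ford's quoted full-charge time
CHARGE_EFFICIENCY = 0.90     # assumed charging losses
HOUSEHOLD_KWH_PER_DAY = 30   # assumed average household consumption

charger_kw = CHARGER_AMPS * CIRCUIT_VOLTS / 1000             # 19.2 kW
battery_kwh = charger_kw * CHARGE_HOURS * CHARGE_EFFICIENCY  # ~138 kWh
backup_days = battery_kwh / HOUSEHOLD_KWH_PER_DAY            # ~4.6 days

print(f"Charger power:         {charger_kw:.1f} kW")
print(f"Battery pack (est.):   {battery_kwh:.0f} kWh")
print(f"Home backup (est.):    {backup_days:.1f} days")
```

Under these assumptions the pack works out to roughly 140 kWh and four to five days of average household load, so with inverter losses and a margin of reserve range, Ford’s three-day figure looks entirely plausible.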

Sometimes what seems like a minor benefit outside the typical product feature set actually has the potential to shift mindsets and customer expectations. AND, it leads to a series of questions:

Have you spent $10,000-20,000 on a Tesla Powerwall battery backup system for your house?

Or thousands of dollars on a more traditional partial home generator?

Have you ever thought about using your car or truck to power your house?

What if this were to become a common expectation of consumers of electric vehicles?

If this became a key differentiator between internal combustion and electric vehicles, might this help to accelerate the transition to electric vehicles in the United States and elsewhere?

And what might the implications be for utilities and the power grid?

Stay tuned! It will be interesting to monitor how this situation develops and whether other electric vehicle manufacturers modify their marketing strategies, leading to one final question:

Innovation or not?

Image credit: yahoo

