
How to Use Your Nervous System to Feel Psychologically Safe

or “Why Mandating a Return to the Office Destroys Safety”


GUEST POST from Robyn Bolton

In last week’s episode, we learned that psychological safety is more neuroscience than psychology, and that our nervous system plays a huge role in our experience of safety.

This week, we’re going deeper into our nervous system and how we can use our understanding of it to influence our psychology.


I’m sensing I can’t think my way to safety.  So, can I fix my nervous system to feel safe and smart?

This is where I go beyond Dr. Amy Edmondson’s definition of psychological safety to incorporate neuroscience and how our nervous system works.

Our nervous system has three states:

  1. Immobilization, or the freeze response, which, as you felt, is often accompanied by a sense of overwhelm
  2. Fight-or-flight, when you either try to end the conversation or become more aggressive, resistant, and push back on exploring other alternatives
  3. Rest-and-digest, when you feel safe, social, and connected to the people around you

This third state sets humans and mammals apart from other living things.  Communicating and connecting serve as a survival mechanism and represent a safe state for our nervous system.  When we communicate and connect, our tribe looks out for us and keeps us safe from threats like lions or unfriendly tribes.

So, the answer is to foster more profound connections among human beings, which requires going well beyond our work roles and activities.

Does it require hugging?  I knew it would require hugging.

Don’t worry, hugging isn’t mandatory.

We, as individuals, have a strong desire to connect and communicate, but it doesn’t necessarily require physical proximity. Being physically together doesn’t guarantee anything.

But what about the push to return to the office? There’s even research to support executives’ claims that physical proximity is essential to culture, innovation, and connection.

Not only does physical proximity not guarantee anything, but being forced to return to the office causes more harm than good. 

From a safety perspective, our nervous system doesn’t want to feel trapped. Being forced back to the office activates our fight-or-flight response and erodes safety. Because of how our nervous system perceives choices, the more choices people have, the safer they feel.

Even though I’m tempted to ask questions about building psychological safety at the team or company level, I want to stay on the individual level for a moment. We talked about how I wasn’t consciously unsafe during a phone call. How can I tell when I feel unsafe if I’m not conscious of it?

There’s physical science behind what happens when you feel unsafe. Your heart rate increases, you might hold your breath, and your body may tense up.  Your thoughts might blank out, and your peripheral vision may narrow as your body prepares for fight or flight.  Your body doesn’t differentiate; it treats any stressor as a threatening event.

On the other hand, feeling safe doesn’t mean you lack emotions or feel calm. Feeling calm and internally relaxed signifies safety, but it’s more than that.  When your nervous system is regulated, your emotions align with the situation. They’re not an extreme overreaction or underreaction. There’s congruence. If your emotional response matches the situation, your nervous system and brain feel safe.

That makes sense, but it’s not easy.  We’re trained to hide our emotions and always appear calm.  I can’t tell you how many times I’ve heard and said, “Be a duck.  Calm on the surface and paddling like hell below it.”

And that is not congruent.  But congruence doesn’t mean you act out like a toddler, either.

Step one in creating safety is calming your nervous system by verbalizing your feelings. If you say, “This conversation is overwhelming for me. I need a break. Let me get some water,” you’re safe and regulated at that moment. There’s nothing wrong.

But when you can’t verbalize what you’re experiencing and freeze, that’s a sign you’re no longer in a safe state. Your body starts pumping cortisol and adrenaline, preparing for whatever it perceives as a threat.

Even if you feel overwhelmed, if you’re aware of that feeling and can take some breaths or a short break and return to the conversation, you’re in a safe, regulated state.

I can’t imagine admitting to feeling overwhelmed or asking for a break! Plus, I work with so many people who say, “I feel overwhelmed, but I can’t take a moment for myself.  I need to plow through and get this done.”

It takes a tremendous amount of self-awareness. If you want to create safety and emotional intelligence, you must know what you’re feeling and be able to name it. You also need to sense what others are feeling and understand your emotional impact on them.

For example, if you say, “I’m feeling overwhelmed right now,” and I respond calmly and slow my cadence of speech, your nervous system receives the message that everything is okay.  However, if I’m in “fight or flight” mode and you’re overwhelmed, we’ll end up in a chaotic and unproductive cycle.

Self-awareness and understanding are essential to safety. Unfortunately, many organizations I speak with need help with this.

Amen, sister.


Stay tuned for next week’s exciting conclusion, 3 Steps to Building a Psychologically Safe Environment or The No-Cost, No-Hug Secret to Smarter Teams

Image Credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!


GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI.   It opens doors that were, for me at least, barely imaginable 10 years ago.  And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it’s hard to imagine a more useful tool.

But it is still a tool.  And as with any tool, learning its limitations, and how to use it skillfully, is key.  I make the analogy to an automobile.  We don’t need to know everything about how it works, and we certainly don’t need to understand how to build it.  But we do need to know what it can, and cannot, do. We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant?  An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi’s excellent TED talk that is attached). It has phenomenal ‘data intelligence’, but can also fail on even simple logic puzzles. Part of the problem is that AI lacks ‘common sense’, or the implicit framework that filters a great deal of human decision making and behavior.  Choi calls this the ‘dark matter’ common sense of decision-making. I think of it as the framework of knowledge, morality, biases and common sense that we accumulate over time, and that is foundational to the unconscious ‘System 1’ elements that influence many, if not most, of our decisions. But whatever we call it, it’s an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from unique in having limitations in the quality of its output.   Any information source we use is subject to errors.  We all know not to believe everything we read on the internet. That makes Google searches useful, but also potentially flawed.  Even consulting with human experts has pitfalls.   Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error.  But most of us have learned to be appropriately skeptical of these sources of information.  We routinely cross-reference, challenge data, seek second opinions, and do not simply ‘parrot’ the data they provide.

But increasingly with AI, I’ve seen a tendency to treat its output with perhaps too much respect.   The reasons for this are multi-faceted, but very human.   Part of it may be the potential for generative AI to provide answers in an apparently definitive form.  Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy.  Another element is the ability it gives us to quickly penetrate areas where we may have little domain knowledge or background.  As I’ve already mentioned, this is fantastic for those of us who value exploring new domains and analogies.  But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is for us to miss even basic mistakes.

As for AI’s limitations, Choi provides some sobering examples.  It can pass a bar exam, but can fail abysmally on even simple logic problems.  For example, it has suggested that building a bridge over broken glass and nails is likely to cause punctures!   It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials.  Of course, these negative examples are somewhat cherry-picked to make a point, but they do show how poor, and how low in common sense, some AI answers can be.   When the errors are this obvious, we should automatically filter them out with our own common sense.  But the challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI?  At the root of this is that implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say ‘remove the pot from the heat’, but rarely says ‘remove the pot from the heat and don’t stick your fingers in the flames’. We’re supposed to know that already. Because it is ‘obvious’, and processed quickly, unconsciously and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said.  Because we don’t tend to state the obvious, it is challenging for an AI to learn it.  It learns to take the pot off the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is obviously a known problem, and several strategies are employed to help address it.  These include manually adding crafted examples and direct human input into AI’s training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated into, or highlighted in, AI training, the risk of transferring specific human biases to that AI increases.   It also creates the potential for competing AIs with different ‘viewpoints’, depending upon differences in both human input and the choices around what data-sets are scraped. There is a ‘nature’ component to the development of AI capability, but also a ‘nurture’ influence. This is of course analogous to the influence that parents, teachers and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks.  Parents, peers and teachers provide generational variety, and the gradual and layered process that builds the human implicit decision framework helps us to evolve a supporting network of contextual insight.  It’s obviously imperfect, and the current culture wars are testament to some profound differences in end result.  But to a large extent, we evolve similar, if not identical, common sense frameworks. With AI, the narrower group contributing to curated ‘education’ increases the risk of both intentional and unintentional bias, and of ‘divergent intelligence’.

What Can We Do?  The most important thing is to be skeptical about AI output.  Just because it sounds plausible, don’t assume it is.  Just as we’d not take the first answer on a Google search as absolute truth, don’t do the same with AI.  Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references).  And of course, the more important the output is to us, the more important it is to check it.  As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible, or fits our theory or worldview.  But always challenge the illusion of omnipotence that AI creates.  It’s probably correct, but especially if it’s providing an important or surprising insight, double-check it.

The Sci-Fi Monster!  The concept of a childish superintelligence has been explored by more than one science fiction writer.  But in many ways that is what we are dealing with in the case of AI.  Its informational ‘IQ’ is greater than its contextual or common sense ‘IQ’, making it a different type of intelligence to those we are used to.   And because so much of the human input side is proprietary and complex, it’s difficult to determine whether bias or misinformation is included in its output, and if so, how much.   I’m sure these are solvable challenges.  But some bias is probably unavoidable the moment any human intervention or selection invades the choice of training materials or their interpretation.   And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that narrowing of sources will result in different AIs with different ‘experiences’, and hence potentially different answers to questions.

AI is an incredible gift, but like the three wishes in Aladdin’s lamp, use it wisely and carefully.  A little bit of skepticism and some human validation is a good idea. Something that can pass the bar but lacks common sense is powerful (it could even get elected), but don’t automatically trust everything it says!

Image credits: Pexels


80% of Psychological Safety Has Nothing to Do With Psychology

or Why the Lack of Psychological Safety Makes You Dumber


GUEST POST from Robyn Bolton

It’s been over 20 years since “Psychological Safety” exploded onto the scene and into the business lexicon.  But as good as it sounded, I always felt like it was one of those “safe space, everyone gets a trophy, special snowflake” things we had to do to make the Millennials (and subsequent generations) happy.

Then I read Alla Weinberg’s book, A Culture of Safety, and realized I was very, very wrong.

It’s not the equivalent of an HR-approved hug and high-five. 

It’s the foundation of what we do. Without it, there is no productivity, creativity, or progress.

Needing to know more, I reached out to Alla, who graciously agreed to teach me.


Thanks for speaking with me, Alla.  Let’s get right to the point: why should I, or any business leader, care about psychological safety?

The short answer is that without psychological safety, you are dumber.  When you feel unsafe, your operating IQ, which you use for daily tasks, drops by half.

Think about all the people you work with or all the people in your company.  They’re there because they’re smart, have experience, and demonstrated that they can do the job.  But then something goes wrong, and you wonder why they didn’t anticipate it or plan appropriately to avoid it.  You start to question their competence when, in fact, it may be that they feel unsafe, so parts of their brain have gone offline.  Their operating IQ isn’t operating at 100%.

I am so guilty of this.  When things go wrong, I assume someone didn’t know what to do, so they need to be trained, or they did know what to do and decided not to do it. It never occurs to me that there could be something else, something not logical, going on.

We all forget that human beings are biological creatures, and survival is the number one evolutionary trait for all living beings. Our body and mind are wired to ensure our continued existence.

A part of the brain – the prefrontal cortex, responsible for planning, executive thought, and analysis – is unique to humans, and it goes offline when our body feels unsafe. 

When we experience extreme stress, our body and mind cannot distinguish an impending deadline from a lunging tiger.  Our body and mind prioritize survival, so we experience all the biological responses to a threat, like getting tunnel vision, losing peripheral vision, and perceiving limited options.

So, when you’re trying to meet a deadline, and your manager or supervisor asks why you didn’t consider alternatives or complete a specific task, it’s because you physically couldn’t think of it at that moment. This is how human beings operate.

My first reaction is to wonder who can’t tell the difference between a deadline and a tiger because if you can’t tell the difference between the two, you may have bigger problems.  But when you mentioned the inability to perceive options, I immediately thought of something that happened yesterday.

I was on a call with a client, someone I’ve worked with for years and consider a friend, and we were trying to restructure a program to serve their client’s needs better.  I didn’t feel under threat…

Consciously.  You didn’t consciously feel under threat.

Right, I didn’t feel consciously under threat. But I froze.  I absolutely couldn’t think.  I put my head in my hands and tried to block out all the light and the noise, and I still couldn’t think of any option other than what we were already doing.  My brain came to a screeching halt.

That’s your nervous system, and it’s a huge driver of psychological safety.  80% of the information our brain receives comes from our nervous system.  So, while you didn’t consciously feel unsafe, your body felt unsafe and sent a signal to your brain to go into survival mode, and your brain chose to freeze.

But it was a Zoom call.  I was sitting alone in my office. I wasn’t unsafe.  Why would my nervous system think I was unsafe?

Your nervous system doesn’t think. It perceives and reacts.  Let me give you a simple illustration that we’ve all experienced.  When you touch something hot, your hand immediately pulls away.  You say “ouch” after your hand is away from the heat source.  When you felt the hot object, your nervous system entered survival mode and pulled your hand away.  Your brain then had to catch up, so you said “ouch” after the threat was over.

Hold up.  We’re talking about psychological safety.  What does my nervous system have to do with this?

I define psychological safety as a state of our nervous system with three states: safe, mobilized (fight or flight), and immobilized (freeze response). The tricky part is not psychological but neurobiological. You cannot think your way to safety or unfreeze yourself. The rational mind has no control over this. Mantras and mindsets won’t make you feel safe; it’s a neurobiological process.

That is a plot twist I did not see coming.


Stay tuned for Part 2:

How to Use Your Nervous System to Feel Psychologically Safe, or “Why Mandating a Return to the Office Destroys Safety”

Image Credit: Pexels


Continuous Improvement vs. Incremental Innovation

Are They the Same?


GUEST POST from Robyn Bolton

“Isn’t continuous improvement the same as incremental innovation?  After all, both focus on doing what you do better, faster, or cheaper.”

Ooof, I have a love-hate relationship with questions like this one.

I hate them because, in the moment, they feel like a gut punch.  The answer feels obvious to me – no, they are entirely different things – but I struggle to explain myself clearly and simply.

I love them because, once the frustration and embarrassment of being unable to offer a clear and simple answer passes, they become a clear sign that I don’t understand something well enough or that *gasp* my “obvious” answer may be wrong.

So, is Continuous Improvement the same as Incremental Innovation?

No. They’re different.

But the difference is subtle, so let’s use an analogy to tease it apart.

Imagine learning to ride a bike.  When you first learn, success is staying upright, moving forward, and stopping before you crash into something.  With time and practice, you get better.  You move faster, stop more quickly, and move with greater precision and agility.

That’s continuous improvement.  You’re using the same solution but using it better.

Now, imagine that you’ve mastered your neighborhood’s bike paths and streets and want to do more.  You want to go faster, so you add a motor to your bike.  You want to ride through the neighboring forest, so you change to off-road tires.  You want a smoother feel on your long rides, so you switch to a carbon fiber frame.

That’s incremental innovation.  You changed an aspect of the solution so that it performs better.

It all comes down to the definition of innovation – something different (or new) that creates value.

Both continuous improvement and incremental innovation create value. 

The former does it by improving what exists. The latter does it by changing (making different) what exists.

Got it. They are entirely different things.

Sort of.

Think of them as a Venn diagram – they’re different but similar.

There is evidence that a culture committed to quality and continuous improvement can lead to a culture of innovation because “Both approaches are focused in meeting customer needs, and since CI encourages small but constant changes in current products, processes and working methods its use can lead firms to become innovative by taking these small changes as an approach to innovation, more specifically, incremental innovation.”

Thanks, nerd.  But does this matter where I work, which is in the real world?

Yes.

Continuous Improvement and Incremental Innovation are different things and, as a result, require different resource levels, timelines, and expectations for ROI.

You should expect everyone in your organization to engage in continuous improvement (CI) because (1) “using CI helps the organization’s change adoption and risk taking by evaluating and implementing solutions to current needs” and (2) the problem-solving tools used in CI uncover “opportunities for finding new ideas that could become incremental innovations.”

You should designate specific people and teams to work on incremental innovation because (1) what “better” looks like is less certain, (2) doing something different or new increases risk, and (3) more time and resources are required to learn your way to a more successful outcome.

What do you think?

How do you answer the question at the start of this post?

How do you demonstrate your answer?

Image Credit: Pixabay


Is AI Saving Corporate Innovation or Killing It?


GUEST POST from Robyn Bolton

AI is killing Corporate Innovation.

Last Friday, the brilliant minds of Scott Kirsner, Rita McGrath, and Alex Osterwalder (plus a few guest stars like me, no big deal) gathered to debate the truth of this statement.

Honestly, it was one of the smartest and most thoughtful debates on AI that I’ve heard (biased but right, as my husband would say), and you should definitely listen to the whole thing.

But if you don’t have time for the deep dive over your morning coffee, then here are the highlights (in my humble opinion).

Why this debate is important

Every quarter, InnoLead fields a survey to understand the issues and challenges facing corporate innovators.  The results from their Q2 survey and anecdotal follow-on conversations were eye-opening:

  • Resources are shifting from Innovation to AI: 61.5% of companies are increasing the resources allocated to AI, while 63.9% of companies are maintaining or decreasing their innovation investments
  • IT is more likely to own AI than innovation: 61.5% of companies put IT in charge of exploring potential AI use cases, compared to 53.9% for Innovation departments (percentages sum to more than 100% because multiple departments may have responsibility)
  • Innovation departments are becoming AI departments.  In fact, some former VPs and Directors of Innovation have been retitled to VPs or Directors of AI

So when Scott asked if AI was killing Corporate Innovation, the data said YES.

The people said NO.

What’s killing corporate innovation isn’t technology.  It’s leadership.

Alex Osterwalder didn’t pull his punches and delivered a truth bomb right at the start. Like all the innovation tools and technologies that came before, the impact of AI on innovation isn’t about the technology itself—it’s about the leaders driving it.

If executives take the time to understand AI as a tool that enables successful outcomes and accelerates the accomplishment of key strategies, then there is no reason for it to threaten, let alone supplant, innovation. 

But if they treat it like a shiny new toy or a silver bullet to solve all their growth needs, then it’s just “innovation theater” all over again.

AI is an Inflection Point that leaders need to approach strategically

As Rita wrote in her book Seeing Around Corners, an inflection point has a 10x impact on business, for example, 10x cheaper, 10x faster, or 10x easier.  The emergence and large-scale adoption of AI is, without doubt, an inflection point for business.

Just like the internet and Netscape shook things up and changed the game, AI has the power to do the same—maybe even more. But, to Osterwalder’s point, leaders need to recognize AI as a strategic inflection point and proceed accordingly. 

Leaders don’t need to have it all figured out yet, but they need a plan, and that’s where we come in.

This inflection point is our time to shine

From what I’ve seen, AI isn’t killing corporate innovation. It’s creating the biggest corporate innovation opportunity in decades.  But it’s up to us, as corporate innovators, to seize the moment.

Unlike our colleagues in the core business, we are comfortable navigating ambiguity and uncertainty.  We have experience creating order from what seems like chaos and using innovation to grow today’s business and create tomorrow’s.

We can do this because we’ve done it before.  It’s exactly what we do.

AI is not a problem.  It’s an opportunity.  But only if we make it one.

AI is not the end of corporate innovation —it’s a tool, a powerful one at that.

As corporate innovators, we have the skills and knowledge required to steer businesses through uncertainty and drive meaningful change. So, let’s embrace AI strategically and unlock its full potential.

The path forward may not always be crystal clear, but that’s what makes it exciting. So, let’s seize the moment, navigate the chaos, and embrace AI as the innovation accelerant that it is.

Image Credit: Pixabay


Two Kinds of Persistence – What’s Your Habit?

GUEST POST from Dennis Stauffer

I suspect you’ve heard all your life that it’s important to be persistent, whether that’s studying hard, practicing a sport, launching a new business, or attempting some innovation. You’re told that you need to stick with it until you find success. You need to have GRIT.

But what’s so often lost in that advice is that there’s more than one way to be persistent, and which one you have can make a HUGE difference.

Type 1

The first kind of persistence is sticking with something despite setbacks. That’s the marathoner who pushes through exhaustion and pain. It’s the student who studies until they really “get” the subject matter. It’s the entrepreneur putting in long hours to pursue a dream. That kind of persistence sees a target, pushes toward it, and blocks out any distractions that keep them from pursuing it.

Type 2

The other kind of persistence is about being creative and resourceful. It’s trying more than one way to reach your goals, and sometimes adjusting those goals to fit the realities you confront. It’s the entrepreneur who pivots to a new business model because the first one isn’t working. It’s the student who changes their career plans because a different path better fits their personal strengths and preferences. It’s the athlete who changes their technique to improve rather than just practicing the same approach.

Type 1 versus Type 2

These are radically different—opposing—strategies, and you can be quite good at one of them and lousy at the other.

That first kind of persistence is helpful when things are predictable and the rules are clear, when you know what will work. You just need to go do it. That’s useful at times, but much of life doesn’t work that way.

The challenges you face are often not so clear, and one of the biggest mistakes you can make is thinking they are when they’re not. That’s the entrepreneur who falls in love with an idea and keeps pursuing it long after getting signals that it’s not really working, thinking, “If I just push a little longer,” when they need to change course.

It’s called being stubborn.

Skilled innovators—and those who are most effective generally—favor that second kind of persistence. They don’t just keep plugging along. They’re willing to rethink their strategy, seek feedback and gain new insights. Instead of assuming they know what works, they strive to figure out what works.

That’s not mindless pushing, and it’s not just trying random alternatives. It’s a disciplined process you can learn. A process of innovation that reflects a mindset that values flexibility, adaptability, and resourcefulness more than raw determination.

Which kind of persistence do you believe in? Which do you use?


Image Credit: Pexels


Las Vegas Formula One

Successful Innovation, Learning Experience or Total Disaster?

GUEST POST from Pete Foley

In Las Vegas, we are now clearing up after the Formula 1 Grand Prix on the Strip.  This extremely complex event required a great deal of executional innovation, and I think that, as innovators, we can learn quite a lot from it.

It was certainly a bumpy ride, both for the multi-million dollar Ferrari that hit an errant drain cover during practice, and with respect to broader preparation, logistics, pricing and projections of consumer behavior.  Despite this, the race itself was exciting and largely issue-free, and even won over some of the most skeptical drivers.  In terms of Kahneman’s peak-end effects, there were both memorable lows and a triumphant end result.   So did this ultimately amount to success?

Success?  For now, I think it very much depends upon your perspective and who you talk to.  Perhaps it’s a sign of the times, but in Las Vegas, the race was extremely polarizing, with often-heated debates between pro- and anti-F1-ers that were as competitive as the race itself.

The reality is that it will be months, or more likely years, before the dust settles and we know the answer.  And I strongly suspect that even then, those who are for and against it will all likely be able to claim support for their point of view.  One insight I think innovators can take from this is that success can be quite subjective in and of itself, and greatly depends upon what factors you measure, what period of time you measure over, and often your ingoing biases.  And the bigger and more complex the innovation, the harder it often is to define and measure success.

Compromise Effects:  When you launch a new product, it is often simpler and cheaper to measure its success narrowly, in terms of specific dollar contribution to your business.  But this often misses its holistic impact.  Premium products can elevate an entire category or brand, while poorly executed innovations can do the opposite.  The compromise effect from Behavioral Economics suggests that a premium addition to a brand lineup can shift the ‘Good, Better, Best’ spectrum of a category upwards, boosting dollar sales across the lineup even if the new premium product itself has only moderate sales.  For example, adding high-priced wines to a menu can increase the average dollars per bottle spent by diners, even if the expensive wine itself doesn’t sell: the expensive wines shift the ‘safe middle’ of the consideration set upwards, and thus increase revenue, and hopefully profit.
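The arithmetic behind the wine-list example is easy to sketch. Below is a toy Python model of the compromise effect; the prices and choice shares are invented purely for illustration (they come from no study or data in this article), but they show how a rarely purchased premium option can raise expected revenue per diner by shifting share toward the middle choice.

```python
# Toy sketch of the compromise effect described above. Prices and
# choice shares are hypothetical -- NOT from the article or any study.

def expected_spend(prices, shares):
    """Expected revenue per diner, given bottle prices and choice shares."""
    assert abs(sum(shares) - 1.0) < 1e-9  # shares must cover all diners
    return sum(p * s for p, s in zip(prices, shares))

# Two-bottle list: diners split evenly between a $20 and a $45 bottle.
two_bottle = expected_spend([20, 45], [0.50, 0.50])

# Add a $150 'best' bottle: the $45 option now reads as the safe middle,
# so share shifts toward it even though the premium bottle barely sells.
three_bottle = expected_spend([20, 45, 150], [0.25, 0.65, 0.10])

print(two_bottle, three_bottle)  # expected spend rises from 32.5 to 49.25
```

Under these assumed shares, average spend rises by roughly 50% even though only one diner in ten buys the premium bottle, which is the pattern the wine-list example describes.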

Money, Scope and Intangibles:  In the case of F1, how far can and should we cast the net when trying to measure success?  Can we look just at the bottom line?  Did this specific weekend bring in more than the same weekend the previous year in sports betting, rooms and entertainment?  Did that difference exceed the investments? 

Or is that too narrow?  What about the dollar impact on the weeks surrounding the event?  We know that some people stayed away because of the construction and congestion in the lead-up to the race.  That should probably be added into, or subtracted from, the equation. 

And then there’s the ‘who won and who lost’ question.  The benefits and losses were certainly not homogeneous across stakeholders.  The big casinos benefited disproportionately in comparison to the smaller restaurants that lost business due to construction, some to a degree that almost rivaled Covid.  Gig workers also fared differently: I have friends who gained business from the event, and friends who lost it.  Many Uber drivers simply gave up and stopped working, but those who stayed, and the high-end limo drivers, likely had bumper weekends.  Entertainers working shows that were disrupted by F1 lost out, but the plethora of special events that came with F1 also provided a major uptick in business for many performers and entertainers.

There is also substantial public investment to consider.  Somewhat bizarrely, the contribution of public funds was not agreed prior to the race, and the public-private sharing of tens of millions in costs is still being negotiated.  But even facing that moving target, did increased (or decreased) tax income before, during, and after the race offset those still-to-be-determined costs?

Intangibles:  And then there are the intangibles.  While Vegas is not exactly an unknown entity, F1 certainly upped its exposure, or in marketing terms, its mental availability.  It brought Vegas into the news, but was that in a positive or negative light?  Or is all publicity good publicity in this context?  News coverage was mixed, with a lot of negative focus on the logistical issues, but also global coverage of what was generally regarded as an exciting race.  And of course, that media coverage also by definition marketed other businesses, including the spectacular Sphere. 

Logistics:  Traffic has been a nightmare, with many who work on the Strip facing unprecedented delays for weeks as commutes went from minutes to hours.  This reached the point where casinos were raffling substantial prizes, including a Tesla, just to persuade people not to call in sick.  Longer term, it’s hard to determine the impact on employee morale and retention, but it’s hard to imagine it will be zero, and that brings costs of its own that go well beyond a raffled Tesla.

Measuring Success?  In conclusion, this was a huge operation, and its impact is by definition going to be multidimensional.  The outcome was, not surprisingly, a mixed bag.  It could have been a lot better, or a lot worse.  And even as the dust settles, it’s likely that different groups will be able to cherry-pick data to support their current opinions and biases. 

Innovation Insights:  So what are some of the more generalized innovation insights we can draw?

(a) Innovation is rarely a one-and-done process.  We rarely get it right the first time, and the bigger and more complex an innovation is, the more we usually have to learn.  F1 is the poster child for this, and the organization is going to have an enormous amount of data to plough through.  The value of this will greatly depend on F1’s internal innovation culture.  Is it a learning organization?  In a situation like this, where billions of dollars and careers are on the line, will it be open or defensive?  Great innovation organizations mostly put defensiveness aside, actively learn from mistakes, and adopt Devil’s Advocate approaches to learn from hard-earned data.  But culture is deeply embedded and difficult to change, so much depends on the current culture of the organizations involved.  

(b) Going Fast versus Going Slow:  This project moved very, very quickly.  Turning a city like Las Vegas into a top-of-the-line race track from scratch in less than a year was a massive challenge.  The upside is that if you go fast, you learn fast.  And the complexity of the task meant much of the insight could pragmatically only be gained ‘on the ground’.  But conversely, better scenario planning might have helped anticipate some of the biggest issues, especially around traffic disruption, loss of business for smaller organizations, commuting, and community outreach.  And things like not finalizing public-private contracts prior to execution will likely end up prolonging the agony.  Whatever our innovation is, big or small, hitting the sweet spot between winging it and over-thinking is key. 

(c) Understanding Real Consumer Behavior.  The casinos got pricing horribly wrong.  When the race was announced, hotel prices and race packages for the F1 weekend went through the roof, but in the final run-up to the race, prices for both rooms and the race itself plummeted.  One news article reported a hotel room on the Strip as low as $18!  Tickets that the previous month had cost $1,600 had dropped to $800 or less on race day.  Visitors who had earlier paid top dollar for rooms were reported to be cancelling and rebooking, while those locked into rates were frustrated, and there is even a major lawsuit in progress around a cancelled practice.  I don’t know any details of how pricing was researched, and predicting the market for a new product or innovation is always a challenge; the bigger the innovation, the harder the prediction game, as there are fewer relevant anchors for consumers or the business to work from.  But I think the generalizable lesson for all innovators is to be humble.  Assume you don’t know and that your models are approximate; do as much research as you can in contexts that are as close to realistic as possible; don’t squeeze margins based on unrealistic expectations for the accuracy of business models; and build as much agility into innovation launches as possible.  Easier said than done, I know, but one of the most consistent reasons for new product failure is overconfidence in understanding real consumer response when the rubber hits the road (pun intended), and how it can differ from articulated consumer response gathered in unrealistic contexts.  Focus groups and online surveys can be quite misleading when it comes down to the reality of handing over hard cash, opportunity cost, or how we value our precious time, short-term versus long-term.

Conclusion: Full disclosure, I’ve personally gone through the full spectrum with Formula One in Vegas.  I loved the idea when it was announced, but 6 months of construction, disruption, and the prospect of another two months of tear down have severely dented my enthusiasm.  Ultimately I went from coveting tickets to avoiding the event altogether.  People I know range from ecstatic to furious, and everything in between.  Did I mention it was polarizing? 

The reality is that this is an ongoing innovation process.  There is a 3-year contract with options to extend to 10 years.  How successful it ultimately is will likely depend on how good a learning and innovation culture Formula One and its partners have, or can build.  It’s a steep and expensive learning curve, and how it moves forward is going to be interesting, if nothing else.  And being Vegas, we have both CES and the Super Bowl to distract us in the next few months before we start preparing again for next year. 

Image credits: Pexels


Eddie Van Halen, Simultaneous Innovation and the AI Regulation Conundrum


GUEST POST from Pete Foley

It’s great to have an excuse to post an Eddie Van Halen video to the innovation community.  It’s of course fun just to watch Eddie, but I also have a deeper, innovation relevant reason for doing so.

Art & Science:  I’m a passionate believer in cross-pollination between art and science.  And I especially believe we can learn a great deal from artists and musicians like Eddie who have innovated consistently over a career.  Dig into their processes, and we see serial innovators like The Beatles, Picasso, Elton John, Bowie, George Martin, Freddie Mercury, William Gibson, Lady Gaga, Paul Simon and so many others apply techniques that are highly applicable to all innovation fields: analogy, conceptual blending, collaboration, reapplication, boundary stretching, risk taking, learning from failure, and T-shaped innovation all crop up fairly consistently.  These creative approaches are typically also built upon deep expertise, passion, motivation, and an ability to connect with future consumer needs and to tap into early adopters and passionate consumers.  For me at least, that’s a pretty good toolkit for innovation in any field.  Now, to be fair, their process is often intuitive, and many truly prolific artists are lucky enough to automatically ‘think that way’.  But understanding and then stealing some of their techniques, implicit or explicit, can be a great way both to jump-start our own innovative processes and to understand how innovation works.  As Picasso said, ‘great artists steal’, but I’d argue that so do good innovators, at least within the bounds allowed by the patent literature!

In the past I’ve written quite a lot about Picasso’s and The Beatles’ use of conceptual blending, Paul Simon’s analogies, reapplication, and collaboration, Bowie’s innovative courage, and William Gibson’s ability to project s-curves.  Today, I’d like to focus on some insights I see in the guitar innovations of Eddie.   

(a) Parallel or Simultaneous Innovation.  I suspect this is one of the most important yet under-appreciated concepts in innovation today.  Virtually every innovation is built upon the shoulders of giants.  Past innovations provide the foundation for future ones, to the point where, once the pieces of the puzzle are in place, many innovations become inevitable.  It still takes an agile and creative mind to come up with innovative ideas, but contemporary innovations often set the stage for the next leap forward.  And this applies both to the innovative process and to a customer’s ability to understand and embrace it.  The design of the first skyscraper was innovative, but it was made a lot more obvious by the construction of the Eiffel Tower.  The ubiquitous mobile phone may now seem obvious, but it owes its existence to a very long list of enabling technologies that paved the way for its invention, from electricity to chips to Wi-Fi.

The outcome of this ‘stage setting’ is that often even really big innovations occur simultaneously yet independently.  We’ve seen this play out with calculus (independently developed by Newton and Leibniz), the atomic bomb, where Oppenheimer and company only just beat the Nazis, the theory of evolution, the invention of the thermometer, nylon, and many others.  We even see it in biological evolution, where scavenger birds like vultures and condors superficially appear quite similar due to adaptations that allow them to eat carrion, but actually have quite different genetic lineages.  Similarly, many marsupials look very like the placental mammals that fill similar ecological niches, but typically evolved independently.  Context has a huge impact on innovation, and similar contexts typically create parallel, and often similar, innovations.  As the world becomes more interconnected, and context becomes more homogenized, we are going to see more and more examples of simultaneous innovation.

Faster and More Competitive Innovation:  Today social media, search technology, and the web mean that more people know more of the same ‘stuff’ more quickly than ever before.  This near-instantaneous and democratized access to the latest knowledge sets the scene and context for a next generation of innovation that is faster and more competitive than we’ve ever seen.  More people have access to the pieces of the puzzle, the background information that acts as a precursor for the next innovative leap, far more quickly than ever before.  Eddie had to go and watch Jimmy Page live and in person to get his inspiration for ‘tapping’; he discovered Page’s hammer-ons years after Page started using them.  Today he, and a few million others, would simply need to go onto YouTube, and the lag would likely be days.  That acceleration of ‘innovation context’ has a couple of major implications: 

1.  If you think you’ve just come up with something new, it’s more than likely that several other people have too, or will do so very soon.  More than ever before, you are in a race from the moment you have an idea!  So snooze and you lose.  Assume several others are working on the same idea.

2.  Regulating innovation is becoming really, really difficult.  I think this is possibly the most profound implication.  For example, a very current and somewhat contentious topic today is whether and how we should regulate AI.  And it’s a pretty big decision.  We really don’t know how AI will evolve, but it is certainly moving very quickly, and comes with the potential for earthshaking pros and cons.  It is also almost inevitably subject to simultaneous invention.  So many people are working on it, and so much adjacent innovation is occurring, that it’s somewhat unlikely that any single group is going to get very far out in front.  The proverbial cat is out of the bag, and the race is on.  The issue for regulation then becomes painfully obvious: unless we can somehow implement universal regulation, any regulations simply slow down those who follow the rules.  This unfortunately opens the door to bad actors taking the lead, and controlling potentially devastating technology.

So we are somewhat damned if we do, and damned if we don’t.  If we don’t regulate, we run the risk of potentially dangerous technology getting out of control.  But if we do regulate, we run the risk of enabling bad actors to own that dangerous technology.  We’ve of course been here before.  The race for the nuclear bomb between the Allies and the Nazis was a great example of simultaneous innovation with potentially catastrophic outcomes.  Imagine if we’d decided fission was simply too dangerous, and regulated its development to the point where the Nazis got there first.  We’d likely be living in a very different world today!  Much like AI, it was a tough decision, as without regulation there was a small but real possibility that the outcome could have been devastating.    

Today we have a raft of rapidly evolving technologies that I’d love to regulate, but where I am also profoundly worried about the unintended consequences of doing so: AI of course, but also genetic engineering, gene-manipulating medicines, even climate mediation and behavioral science!  With respect to the latter, the better we get at nudging behavior, and the more reach we have with those techniques, the more dangerous misuse becomes.  

The core problem underlying all of this is that we are human.  Most people try to do the right thing, but there are always bad actors.  And even those trying to do the right thing all too often get it wrong.  And as access to cutting-edge insight becomes more democratized, parallel innovation means more and more contenders for mistakes and bad choices, intentional or unintentional. 

(b) Innovation versus Invention:  A less dramatic, but I think similarly interesting, insight we can draw from Eddie lies in the difference between innovation and invention.  He certainly wasn’t the first guitarist to use the tapping technique.  That goes back centuries, at least as far as the classical composer Paganini; it was a required technique for playing the Chapman Stick in the 1970s, popularized by the great Tony Levin in King Crimson; and it was widely, albeit sparingly (and often obscurely), used by jazz guitarists in the 1950s and 60s.  But Eddie was the first to feature it and turn it into a meaningful innovation in and of itself.  Until him, nobody had packaged the technique in a way that it could be ‘marketed’ and ‘sold’ as a viable product.  He found the killer application, made it his own, and made it a ‘thing’.  I would therefore argue that he wasn’t the inventor, but he was the innovator.  This points to the value of innovation over invention.  If you don’t have the capability or the partners to turn an invention into something useful, it’s still just an idea.  Invention is a critical part of the broader innovation process, but in isolation it’s more curiosity than useful.  Innovation is about reduction to practice and communication, as well as great ideas.

Art & Science:  I love the arts.  I play guitar, paint, and photograph.  It’s a lot of fun, and provides an invaluable outlet from the stresses involved in business and innovation.  But as I suggested at the beginning, a lot of the boundaries we place between art and science, and by extension business, are artificial and counter-productive.  Some of my most productive collaborations as a scientist have been with designers and artists.  As a visual scientist, I’ve found that artists often intuitively command attentional insights that our cutting-edge science is still trying to understand.  It’s a lot of fun to watch Eddie Van Halen, but learning from great artists like him can, via analogy, also be surprisingly insightful and instructive.   

Image credits: Unsplash


AI and Human Creativity Solving Complex Problems Together


GUEST POST from Janet Sernack

A recent McKinsey Leading Off – Essentials for leaders and those they lead email newsletter referred to the article “The organization of the future: Enabled by gen AI, driven by people”, which stated that digitization, automation, and AI will reshape whole industries and every enterprise.  The article elaborated that, in terms of magnitude, the challenge is akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China.  That shift was powered by the defining trait of our species, our human creativity, which is at the heart of all creative problem-solving endeavors, where innovation is the engine of growth, no matter the context.

Moving into Uncharted Job and Skills Territory

We don’t yet know what exact technological, or soft skills, new occupations, or jobs will be required in this fast-moving transformation, or how we might further advance generative AI, digitization, and automation.

We also don’t know how AI will impact the need for humans to tap even more into the defining trait of our species, our human creativity, enabling us to become more imaginative, curious, and creative in the way we solve some of the world’s greatest challenges and most complex and pressing problems, and transform them into innovative solutions.

We can be proactive by asking these two generative questions:

  • What if the true potential of AI lies in embracing its ability to augment human creativity and aid innovation, especially in enhancing creative problem solving, at all levels of civil society, instead of avoiding it? (Ideascale)
  • How might we develop AI as a creative thinking partner to effect profound change, and create innovative solutions that help us build a more equitable and sustainable planet for all humanity? (Hal Gregersen)

Because our human creativity is at the heart of creative problem-solving, and innovation is the engine of growth, competitiveness, and profound and positive change.

Developing a Co-Creative Thinking Partnership

In a recent article in the Harvard Business Review “AI Can Help You Ask Better Questions – and Solve Bigger Problems” by Hal Gregersen and Nicola Morini Bianzino, they state:

“Artificial intelligence may be superhuman in some ways, but it also has considerable weaknesses. For starters, the technology is fundamentally backward-looking, trained on yesterday’s data – and the future might not look anything like the past. What’s more, inaccurate or otherwise flawed training data (for instance, data skewed by inherent biases) produces poor outcomes.”

The authors say that people must manage this limitation if they are going to treat AI as a creative-thinking partner in solving complex problems that enable people to live healthy and happy lives and to co-create an equitable and sustainable planet.

We can achieve this by focusing on specific areas where the human brain and machines might possibly complement one another to co-create the systemic changes the world badly needs through creative problem-solving.

  • A double-edged sword

This perspective is further complemented by a recent Boston Consulting Group article, “How people can create – and destroy – value with generative AI”, where they found that the adoption of generative AI is, in fact, a double-edged sword.

In an experiment, participants using GPT-4 for creative product innovation outperformed the control group (those who completed the task without using GPT-4) by 40%. But for business problem solving, using GPT-4 resulted in performance that was 23% lower than that of the control group.

“Perhaps somewhat counterintuitively, current GenAI models tend to do better on the first type of task; it is easier for LLMs to come up with creative, novel, or useful ideas based on the vast amounts of data on which they have been trained. Where there’s more room for error is when LLMs are asked to weigh nuanced qualitative and quantitative data to answer a complex question. Given this shortcoming, we as researchers knew that GPT-4 was likely to mislead participants if they relied completely on the tool, and not also on their own judgment, to arrive at the solution to the business problem-solving task (this task had a “right” answer)”.

  • Taking the path of least resistance

In McKinsey’s Top Ten Reports This Quarter blog, seven out of the ten articles relate specifically to generative AI: technology trends, the state of AI, the future of work, the future of AI, the new AI playbook, questions to ask about AI, and AI in healthcare.

As it is the most dominant topic across the board globally, a myopic focus on this one significant technology will, if we are not both vigilant and intentional, take us all down the path of least resistance, where our energy moves to where it is easiest to go.  Like a river, which takes the path of least resistance through its surrounding terrain, without a strategic and systemic perspective we will always go, and end up, where we have always gone.

  • Living our lives forwards

According to the Boston Consulting Group article:

“The primary locus of human-driven value creation lies not in enhancing generative AI where it is already great, but in focusing on tasks beyond the frontier of the technology’s core competencies.”

This means that a whole lot of other variables need to be in play, and a newly emerging set of human skills, especially in creative problem solving, needs to be developed to get the most value from generative AI and to generate the most imaginative, novel, and value-adding landing strips of the future.

Creative Problem Solving

In my previous blog posts “Imagination versus Knowledge” and “Why Successful Innovators Are Curious Like Cats” we shared that we are in the midst of a “Sputnik Moment” where we have the opportunity to advance our human creativity.

This human creativity is inside all of us.  It involves the process of bringing something new into being that is original, surprising, useful, or desirable, in ways that add value to the quality of people’s lives and that they appreciate and cherish.

  • Taking a both/and approach

Our human creativity will be paralysed if we focus our attention and intention only on the technology and on the financial gains or potential profits we will get from it, and if we exclude the possibilities of a co-creative thinking partnership with the technology.

Instead, we need to deeply engage people in true creative problem solving, involving them in positively impacting our crucial relationships and connectedness with one another, with the natural world, and with the planet.

  • A marriage between creatives, technologists, and humanities

In a recent Fast Company video presentation, “Innovating Imagination: How Airbnb Is Using AI to Foster Creativity”, Brian Chesky, CEO of Airbnb, states that we need to consider and focus our attention and intention on discovering what is good for people.

We need to develop a “marriage between creatives, technologists, and the humanities” that brings out the human and doesn’t let technology overtake our human element.

Developing Creative Problem-Solving Skills

At ImagineNation, we teach, mentor, and coach clients in creative problem-solving, through developing their Generative Discovery skills.

This involves developing an open and active mind and heart, by becoming flexible, adaptive, and playful in the ways we engage and focus our human creativity in the four stages of creative problem-solving.

This includes sensing, perceiving, and enabling people to deeply listen, inquire, question, and debate from the edges of temporarily hidden or emerging fields of the future.

To know how to emerge, diverge, and converge creative insights, collective breakthroughs, an ideation process, and cognitive and emotional agility shifts to:

  • Deepen our attending, observing, and discerning capabilities to consciously connect with, explore, and discover possibilities that create tension and cognitive dissonance to disrupt and challenge the status quo, and other conventional thinking and feeling processes.
  • Create cracks, openings, and creative thresholds by asking generative questions to push the boundaries, and challenge assumptions and mental and emotional models to pull people towards evoking, provoking, and generating boldly creative ideas.
  • Unleash possibilities, and opportunities for creative problem solving to contribute towards generating innovative solutions to complex problems, and pressing challenges, that may not have been previously imagined.

Experimenting with the generative discovery skill set enables us to juggle multiple theories, models, and strategies to create and plan in an emergent, and non-linear way through creative problem-solving.

As stated by Hal Gregersen:

“Partnering with the technology in this way can help people ask smarter questions, making them better problem solvers and breakthrough innovators.”

Succeeding in the Age of AI

We know that Generative AI will change much of what we do and how we do it, in ways that we cannot yet anticipate.

Success in the age of AI will largely depend on our ability to learn and change faster than we ever have before, in ways that preserve our well-being, connectedness, imagination, curiosity, human creativity, and our collective humanity through partnering with generative AI in the creative problem-solving process.

Find Out More About Our Work at ImagineNation™

Find out about our collective learning products and tools, including The Coach for Innovators, Leaders, and Teams Certified Program, presented by Janet Sernack: a collaborative, intimate, and deeply personalized innovation coaching and learning program, supported by a global group of peers over 9 weeks, which can be customised as a bespoke corporate learning program.

It is a blended and transformational change and learning program that will give you a deep understanding of the language, principles, and applications of an ecosystem focus, human-centric approach, and emergent structure (Theory U) to innovation, and upskill people and teams and develop their future fitness, within your unique innovation context. Find out more about our products and tools.

Image Credit: Pixabay


LEGO Knows Why Companies Don’t Innovate


GUEST POST from Robyn Bolton

“Lego’s Latest Effort to Avoid Oil-Based Plastic Hits Brick Wall” – WSJ

“Lego axes plans to make bricks from recycled bottles” – BBC

“Lego ditches oil-free brick in sustainability setback” – The Financial Times

Recently, LEGO found itself doing the Walk of Atonement (see video below) after announcing to The Financial Times that it was scrapping plans to make bricks from recycled bottles, and media outlets from The Wall Street Journal to Fast Company to WIRED were more than happy to play the Shame Nun.

And it wasn’t just media outlets ringing the Shame Bell:

  • “In the future, they should not make these kinds of announcements (prototype made from recyclable plastic) until they actually do it,” Judith Enck, President of Beyond Plastics
  • “They are not going to survive as an organization if they don’t find a solution,” Paolo Taticchi, corporate sustainability expert at University College London
  • “Lego undoubtedly had good intentions, but if you’re going to to (sic) announce a major environmental initiative like this—one that affects the core of your company—good intentions aren’t enough. And in this instance, it can even undermine progress,” Jesus Diaz, creative director, screenwriter, and producer at The Magic Sauce, writing for Fast Company

As a LEGO lover, I am not unbiased, but WOW, the amount of hypocritical, self-righteous judgment is astounding!  All these publications and pundits espouse the need for innovation, yet when a company falls even the tiniest bit short of aspirations, it’s just SHAME (clang) SHAME (clang) SHAME.

LEGO Atlantis 8073 Manta Warrior (i.e., tiny) bit of context

In 1946, LEGO founder Ole Kirk Christiansen purchased Denmark’s first plastic injection molding machine.  Today, 95% of the company’s 4,400 different bricks are made using acrylonitrile butadiene styrene (ABS), a plastic that requires 4.4 pounds of oil to produce 2.2 pounds of brick.  Admittedly, it’s not a great ratio, and it gets worse.  The material isn’t biodegradable or easily recyclable, so when the 3% of bricks not handed down to the next generation end up in a landfill, they’ll break down into highly polluting microplastics.

With this context, it’s easy to understand why LEGO’s 2018 announcement that it would move to all non-plastic or recycled materials by 2030 and reduce its carbon emissions by 37% (from 2019’s 1.2 million tons) by 2032 was such big news.

Three years later, in 2021, LEGO announced that its prototype bricks made from polyethylene terephthalate (PET) bottles offered a promising alternative to its oil-based plastic bricks. 

But last Monday, after two years of testing, the company shared that what was promising as a prototype isn’t possible at scale because the process required to produce PET-based bricks actually increases carbon emissions.

SHAME!

A LEGO Art World Map (i.e., massive) amount of praise for LEGO

LEGO is doing everything that innovation theorists, consultants, and practitioners recommend:

  • Setting a clear vision and measurable goals so that people know what the priorities are (reduce carbon emissions), why they’re important (“playing our part in building a sustainable future and creating a better world for our children to inherit”), and the magnitude of change required
  • Defining what is on and off the table in terms of innovation, specifically that they are not willing to compromise the quality, durability, or “clutch power” of bricks to improve sustainability
  • Developing a portfolio of bets that includes new materials for products and packaging, new services to keep bricks out of landfills and in kids’ hands, new building and production processes, and active partnerships with suppliers to reduce their climate footprint
  • Prototyping and learning before committing to scale because what is possible at a prototype level is different from what’s possible at pilot, which is different from what’s possible at scale.
  • Focusing on the big picture and the long-term by not going for the near-term myopic win of declaring “we’re making bricks from more sustainable materials” and instead deciding “not to progress” with something that, when taken as a whole process, moves the company further away from its 2032 goal.

Just one minifig’s opinion

If we want companies to innovate (and we do), shaming them for falling short of perfection is the absolute wrong way to do it.

Is it disappointing that something that seemed promising didn’t work out?  Of course.  But it’s just one of many avenues and experiments being pursued.  This project ended, but the pursuit of the goal hasn’t.

Is two years a long time to figure out that you can’t scale a prototype and still meet your goals?  Maybe.  But, then again, it took P&G 10 years to figure out how to develop and scale a perforation that improved one-handed toilet paper tearing.

Should LEGO have kept all its efforts and success a secret until everything was perfect and ready to launch?  Absolutely not.  Sharing its goals and priorities, experiments and results, learnings and decisions shows employees, partners, and other companies what it means to innovate and lead.

Is LEGO perfect? No.

Is it trying to be better? Yes.

Isn’t that what we want?

Image Credit: Pixabay
