Tag Archives: organizational change

Implementing Successful Transformation Initiatives for 2024

GUEST POST from Janet Sernack

Transformation and change initiatives are usually designed as strategic interventions, intended to advance an organization’s growth, deliver increased shareholder value, build competitive advantage, or improve speed and agility in responding to fast-changing industries. These initiatives typically focus on improving efficiency and productivity, resolving IT legacy and technological issues, encouraging innovation, or developing high-performance organizational cultures. Yet, according to research conducted over fifteen years by McKinsey & Co., shared in a recent article, “Losing from day one: Why even successful transformations fall short”, even successful transformations have realized only 67 percent of the maximum financial benefits they could have achieved, while respondents at all other companies say they captured an average of only 37 percent of the potential benefit. Much of that shortfall comes down to a lack of human skills, and people’s inability to adapt, innovate, and thrive in a decade of disruption.

Differences between success and failure

The survey results confirm that “there are no shortcuts to successful transformation and change initiatives. The main differentiator between success and failure was not whether an organization followed a specific subset of actions but rather how many actions it took throughout an organizational transformation’s life cycle” and actions taken by the people involved.

Capacity, confidence, and competence – human skills

What stands out is that thirty-five percent of the value lost occurs in the implementation phase, largely through unproductive actions taken by the people involved.

The Boston Consulting Group (BCG) supports this in a recent article, “How to Create a Transformation That Lasts”: “Transformations are inherently difficult, filled with compressed deadlines and limited resources. Executing them typically requires big changes in processes, product offerings, governance, structure, the operating model itself, and human behavior.”

This reinforces the need for organizations to invest in developing the deep human skills that embed transformation disciplines into business-as-usual structures, processes, and systems, and that help shift the culture. That, in turn, depends on enhancing people’s capacity, confidence, and competence to implement the “annual business-planning processes and review cycles, from executive-level weekly briefings and monthly or quarterly reviews to individual performance dialogue” that deliver and embed the desired changes, especially the cultural enablers.

Complex and difficult to navigate – key challenges

As a result of the impact of our VUCA (volatile, uncertain, complex, ambiguous) and BANI (brittle, anxious, nonlinear, incomprehensible) world, coupled with the global pandemic, current global instability, and geopolitics, many people have had their focus stolen, and are still experiencing dissonance cognitively, emotionally, and viscerally.

This impacts their ability to take intelligent actions, with symptoms ranging from emotional overwhelm and cognitive overload to change fatigue.

It seems that many people lack the capacity, confidence, and competence that underpin their balance, well-being, and resilience, and that resource their ability and GRIT to engage fully in transformation and change initiatives.

The new normal – restoring our humanity

At ImagineNation™ for the past four years, in our coaching and mentoring practice, we have spent more than 1000 hours partnering with leaders and managers around the world to support them in recovering and re-emerging from a range of uncomfortable, disabling, and disempowering feelings.

Some of these unresourceful states include loneliness, disconnection, a lack of belonging, and varying degrees of burnout; they have caused people to withdraw and, in some cases, even resist returning to the office, or to work generally.

It appears that this is the new normal we all have to deal with, knowing there is no playbook to take us there, because it involves restoring the essence of our humanity and deepening our human skills.

Taking a whole-person approach – develop human skills

This means embracing a whole-person approach in all transformation and change initiatives, one that focuses on building people’s capacity, confidence, and competence, and that cultivates their well-being and resilience, in order to:

  • Engage, empower, and enable them to collaborate in setting the targets, business plans, implementation, and follow-up necessary to ensure a successful transformation and change initiative.
  • Safely partner with them through their discomfort, anxiety, fear, and reactive responses.
  • Learn resourceful emotional states, traits, mindsets, behaviors, and human skills to embody, enact and execute the desired changes strategically and systemically.

It then means slowing down to pause, retreat, and reflect, choosing to operate systemically and holistically, and cultivating the “deliberate calm” required to operate at the three different human levels outlined below:

The Neurological Level – which most transformation and change initiatives fail to comprehend, connect to, and work with, because leaders lack the focus, intention, and skills to help people collapse any unconscious RIGIDITY existing in their emotional, cognitive, and visceral states. People may be frozen, distracted, withdrawn, or aggressive as a result of their fears and anxiety.

You can build your capacity, confidence, and competence to operate at this level by accepting “what is”:

  • Paying attention and being present with whatever people are experiencing neurologically by attending, allowing, accepting, naming, and acknowledging whatever is going on for them, and by supporting and enabling them to rest, revitalize and recover in their unique way.
  • Operating from an open mind and an open heart and by being empathic and compassionate, in line with their fragility and vulnerability, being kind, appreciative, and considerate of their individual needs.
  • Being intentional in enabling them to become grounded, mindful, conscious, and truly connected to what is really going on for them, and to rebuild their positivity, optimism, and hope for the future.
  • Creating a collective holding space or container that gives them permission, safety, and trust to pull them towards the benefits and rewards of not knowing, unlearning, and being open to relearning new mental models.
  • Evoking new and multiple perspectives that will help them navigate uncertainty and complexity.

The Emotional Cognition Levels – which most transformation and change initiatives fail to take into account. People need to develop their PLASTICITY and flexibility in regulating and focusing their thoughts, feelings, and actions to adapt and be agile in a world of unknowns, and to deliver the outcomes and results they want.

You can build your capacity, confidence, and competence to operate at this level by supporting them to open their hearts and minds:

  • Igniting their curiosity, imagination, and playfulness, introducing novel ideas, and allowing play and improvisation into their thinking processes, with time out to mind-wander and wonder into new and unexplored territories.
  • Exposing, disrupting, and re-framing negative beliefs, ruminations, overthinking and catastrophizing patterns, imposter syndromes, fears of failure, and feelings of hopelessness and helplessness.
  • Evoking mindset shifts, embracing positivity and an optimistic focus on what might be a future possibility and opportunity.
  • Being empathic, compassionate, and appreciative, and engaging in self-care activities and well-being practices.

The Generative Level – which most transformation and change initiatives ignore, because they fail to develop the critical and creative thinking, and the problem-sensing and problem-solving skills, required to GENERATE the crucial elastic thinking and human skills that result in change and innovation.

You can build your capacity, confidence, and competence to operate at this level by:

  • Creating a safe space to help people reason and make sense of the things occurring within, around, and outside of them.
  • Cultivating their emotional and cognitive agility, creative, critical, and associative thinking skills to challenge the status quo and think differently.
  • Developing behavioral flexibility to collaborate, being inclusive to maximize differences and diversity, and safe experimentation to close their knowing-doing gaps.
  • Taking small bets, and giving people the permission and safety to fail fast in order to learn quickly, be courageous, and be both strategic and systemic in taking smart risks and intelligent actions.

Reigniting our humanity – unlocking human potential  

At the end of the day, we all know that we can’t solve the problem with the same thinking that created it. Yet, so many of us keep on trying to do that, by unconsciously defaulting into a business-as-usual linear thinking process when involved in setting up and implementing a transformation or change initiative.

AI can only take us so far, because the defining trait of our species is our human creativity, which is at the heart of all creative problem-solving endeavors, where innovation can be the engine of change, transformation, and growth, no matter what the context. As Fei-Fei Li, Sequoia Professor of Computer Science at Stanford and co-director of AI4All, a non-profit organization promoting diversity and inclusion in the field of AI, puts it:

“There’s nothing artificial about AI. It’s inspired by people, created by people, and most importantly it has an impact on people”.

  • Develop the human skills

When we have the capacity, confidence, and competence to reignite our humanity, we will unlock human potential and stop producing results no one wants. By developing human skills that enable people to adapt, be resilient, agile, and creative, and to innovate, they will grow through disruption in ways that add value to the quality of people’s lives, and that are appreciated and cherished. Then we can truly serve people, deliver profits, and perhaps save the planet.

Find out more about our work at ImagineNation™

Find out about our collective learning products and tools, including The Coach for Innovators, Leaders, and Teams Certified Program, presented by Janet Sernack. It is a collaborative, intimate, and deeply personalized innovation coaching and learning program, supported by a global group of peers over nine weeks, and can be customized as a bespoke corporate learning and coaching program for leadership and team development and for change and culture transformation initiatives.

Image Credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!

GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI. It opens doors that were, for me at least, barely imaginable 10 years ago. And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it’s hard to imagine a more useful tool.

But it is still a tool. And as with any tool, learning its limitations, and how to use it skillfully, is key. I make the analogy to an automobile. We don’t need to know everything about how it works, and we certainly don’t need to understand how to build it. But we do need to know what it can and cannot do. We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant? An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi’s excellent TED talk). It has phenomenal ‘data intelligence’, but can also fail on even simple logic puzzles. Part of the problem is that AI lacks ‘common sense’, or the implicit framework that filters a great deal of human decision-making and behavior. Choi calls this the ‘dark matter’ of common sense in decision-making. I think of it as the framework of knowledge, morality, biases, and common sense that we accumulate over time, and that is foundational to the unconscious ‘System 1’ elements that influence many, if not most, of our decisions. But whatever we call it, it’s an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from unique in having limitations in the quality of its output. Any information source we use is subject to errors. We all know not to believe everything we read on the internet; that makes Google searches useful, but also potentially flawed. Even consulting with human experts has pitfalls. Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error. But most of us have learned to be appropriately skeptical of these sources of information. We routinely cross-reference, challenge data, seek second opinions, and do not simply ‘parrot’ the data they provide.

But increasingly with AI, I’ve seen a tendency to treat its output with perhaps too much respect. The reasons for this are multi-faceted, but very human. Part of it may be the tendency of generative AI to provide answers in an apparently definitive form. Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy. Another element is the ability it gives us to quickly penetrate areas where we may have little domain knowledge or background. As I’ve already mentioned, this is fantastic for those of us who value exploring new domains and analogies. But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is for us to miss even basic mistakes.

As for AI’s limitations, Choi provides some sobering examples. It can pass a bar exam, but can fail abysmally on even simple logic problems. For example, it suggests that a bridge built over broken glass and nails is likely to cause punctures! It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials. Of course, these negative examples are somewhat cherry-picked to make a point, but they do show how poor some AI answers can be, and how low in common sense. When the errors are this obvious, we automatically filter them out with our own common sense. The challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI? At the root of it is that implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say ‘remove the pot from the heat’, but rarely ‘remove the pot from the heat and don’t stick your fingers in the flames’. We’re supposed to know that already. Because it is ‘obvious’, and processed quickly, unconsciously, and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said. Because we don’t tend to state the obvious, it is challenging for an AI to learn it. It learns to take the pot off the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is obviously a known problem, and several strategies are employed to help address it, including manually adding crafted examples and direct human input into AI’s training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated into, or highlighted in, AI training, the risk of transferring specific human biases to that AI increases. It also creates the potential for competing AIs with different ‘viewpoints’, depending on differences in both human input and the choices around which data sets are scraped. There is a ‘nature’ component to the development of AI capability, but also a ‘nurture’ influence. This is, of course, analogous to the influence that parents, teachers, and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks. Parents, peers, and teachers provide generational variety, and the gradual, layered process that builds the human implicit decision framework helps us evolve a supporting network of contextual insight. It’s obviously imperfect, and the current culture wars are testament to some profound differences in end result. But to a large extent, we evolve similar, if not identical, common-sense frameworks. With AI, the narrower group contributing to curated ‘education’ increases the risk of both intentional and unintentional bias, and of ‘divergent intelligence’.

What Can We Do? The most important thing is to be skeptical about AI output. Just because it sounds plausible, don’t assume it is. Just as we’d not take the first answer on a Google search as absolute truth, don’t do so with AI. Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references). And of course, the more important the output is to us, the more important it is to check it. As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible or fits our theory or worldview. But always challenge the illusion of omniscience that AI creates. It’s probably correct, but especially if it’s providing an important or surprising insight, double-check it.
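As a loose sketch of the “ask it for references, and check them” habit, the hypothetical helper below (not from the article; the function name and reference format are my own assumptions) runs a cheap first-pass filter on AI-supplied citations by checking that each DOI is at least syntactically well-formed. A well-formed DOI can still be fabricated, so anything that passes must still be resolved and read.

```python
import re

# Real DOIs start with "10.", a 4-9 digit registrant code, then "/" and a suffix.
# Syntactic validity is necessary but NOT sufficient: an AI can invent a
# plausible-looking DOI, so this only catches the obviously malformed ones.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def suspicious_references(references):
    """Return the references whose 'doi' field is not even well-formed."""
    return [ref for ref in references if not DOI_PATTERN.match(ref.get("doi", ""))]

ai_supplied = [
    {"title": "Plausible-looking citation", "doi": "10.1038/nature14539"},
    {"title": "Obviously malformed citation", "doi": "doi:not/a-real-identifier"},
]

flagged = suspicious_references(ai_supplied)
```

Everything that survives this filter still deserves the human validation argued for above, for example by resolving the DOI and confirming the paper actually says what the AI claims it does.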

The Sci-Fi Monster! The concept of a childish superintelligence has been explored by more than one science fiction writer, but in many ways that is what we are dealing with in the case of AI. Its informational ‘IQ’ is greater than its contextual or common-sense ‘IQ’, making it a different type of intelligence from those we are used to. And because so much of the human input side is proprietary and complex, it’s difficult to determine whether bias or misinformation is included in its output, and if so, how much. I’m sure these are solvable challenges. But some bias is probably unavoidable the moment any human intervention or selection enters the choice of training materials or their interpretation. And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that a narrowing of sources will result in different AIs with different ‘experiences’, and hence potentially different answers to questions.

AI is an incredible gift, but like the wishes from Aladdin’s lamp, use it wisely and carefully. A little skepticism and some human validation are a good idea. Something that can pass the bar but lacks common sense is powerful (it could even get elected), but don’t automatically trust everything it says!

Image credits: Pexels


Continuous Improvement vs. Incremental Innovation

Are They the Same?

GUEST POST from Robyn Bolton

“Isn’t continuous improvement the same as incremental innovation?  After all, both focus on doing what you do better, faster, or cheaper.”

Ooof, I have a love-hate relationship with questions like this one.

I hate them because, in the moment, they feel like a gut punch.  The answer feels obvious to me – no, they are entirely different things – but I struggle to explain myself clearly and simply.

I love them because, once the frustration and embarrassment of being unable to offer a clear and simple answer passes, they become a clear sign that I don’t understand something well enough or that *gasp* my “obvious” answer may be wrong.

So, is Continuous Improvement the same as Incremental Innovation?

No. They’re different.

But the difference is subtle, so let’s use an analogy to tease it apart.

Imagine learning to ride a bike.  When you first learn, success is staying upright, moving forward, and stopping before you crash into something.  With time and practice, you get better.  You move faster, stop more quickly, and move with greater precision and agility.

That’s continuous improvement.  You’re using the same solution but using it better.

Now, imagine that you’ve mastered your neighborhood’s bike paths and streets and want to do more.  You want to go faster, so add a motor to your bike.  You want to ride through the neighboring forest, so you change to off-road tires.  You want a smoother feel on your long rides, so you switch to a carbon fiber frame.

That’s incremental innovation.  You changed an aspect of the solution so that it performs better.

It all comes down to the definition of innovation – something different (or new) that creates value.

Both continuous improvement and incremental innovation create value. 

The former does it by improving what exists. The latter does it by changing (making different) what exists.

Got it. They are entirely different things.

Sort of.

Think of them as a Venn diagram – they’re different but similar.

There is evidence that a culture committed to quality and continuous improvement can lead to a culture of innovation because “Both approaches are focused in meeting customer needs, and since CI encourages small but constant changes in current products, processes and working methods its use can lead firms to become innovative by taking these small changes as an approach to innovation, more specifically, incremental innovation.”

Thanks, nerd.  But does this matter where I work, which is in the real world?

Yes.

Continuous Improvement and Incremental Innovation are different things and, as a result, require different resource levels, timelines, and expectations for ROI.

You should expect everyone in your organization to engage in continuous improvement (CI) because (1) CI helps the organization’s change adoption and risk-taking “by evaluating and implementing solutions to current needs” and (2) the problem-solving tools used in CI uncover “opportunities for finding new ideas that could become incremental innovations.”

You should designate specific people and teams to work on incremental innovation because (1) what “better” looks like is less certain, (2) doing something different or new increases risk, and (3) more time and resources are required to learn your way to a successful outcome.

What do you think?

How do you answer the question at the start of this post?

How do you demonstrate your answer?

Image Credit: Pixabay


AI and Human Creativity Solving Complex Problems Together

GUEST POST from Janet Sernack

A recent McKinsey Leading Off – Essentials for leaders and those they lead email newsletter referred to an article, “The organization of the future: Enabled by gen AI, driven by people”, which stated that digitization, automation, and AI will reshape whole industries and every enterprise. The article elaborated by saying that, in terms of magnitude, the challenge is akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China. That shift was powered by the defining trait of our species, our human creativity, which is at the heart of all creative problem-solving endeavors, where innovation is the engine of growth, no matter what the context.

Moving into Uncharted Job and Skills Territory

We don’t yet know what exact technological or soft skills, new occupations, or jobs will be required in this fast-moving transformation, or how we might further advance generative AI, digitization, and automation.

We also don’t know how AI will impact the need for humans to tap even more into the defining trait of our species, our human creativity, enabling us to become more imaginative, curious, and creative in the way we solve some of the world’s greatest challenges and most complex and pressing problems, and transform them into innovative solutions.

We can be proactive by asking these two generative questions:

  • What if the true potential of AI lies in embracing its ability to augment human creativity and aid innovation, especially in enhancing creative problem solving, at all levels of civil society, instead of avoiding it? (Ideascale)
  • How might we develop AI as a creative thinking partner to effect profound change, and create innovative solutions that help us build a more equitable and sustainable planet for all humanity? (Hal Gregersen)

Because our human creativity is at the heart of creative problem-solving, and innovation is the engine of growth, competitiveness, and profound and positive change.

Developing a Co-Creative Thinking Partnership

In a recent article in the Harvard Business Review “AI Can Help You Ask Better Questions – and Solve Bigger Problems” by Hal Gregersen and Nicola Morini Bianzino, they state:

“Artificial intelligence may be superhuman in some ways, but it also has considerable weaknesses. For starters, the technology is fundamentally backward-looking, trained on yesterday’s data – and the future might not look anything like the past. What’s more, inaccurate or otherwise flawed training data (for instance, data skewed by inherent biases) produces poor outcomes.”

The authors say that dealing with this issue requires people to manage this limitation if they are going to treat AI as a creative-thinking partner in solving complex problems, the kind that enable people to live healthy and happy lives and to co-create an equitable and sustainable planet.

We can achieve this by focusing on specific areas where the human brain and machines might possibly complement one another to co-create the systemic changes the world badly needs through creative problem-solving.

  • A double-edged sword

This perspective is further complemented by a recent Boston Consulting Group article, “How people can create – and destroy – value with generative AI”, where they found that the adoption of generative AI is, in fact, a double-edged sword.

In an experiment, participants using GPT-4 for creative product innovation outperformed the control group (those who completed the task without using GPT-4) by 40%. But for business problem solving, using GPT-4 resulted in performance that was 23% lower than that of the control group.

“Perhaps somewhat counterintuitively, current GenAI models tend to do better on the first type of task; it is easier for LLMs to come up with creative, novel, or useful ideas based on the vast amounts of data on which they have been trained. Where there’s more room for error is when LLMs are asked to weigh nuanced qualitative and quantitative data to answer a complex question. Given this shortcoming, we as researchers knew that GPT-4 was likely to mislead participants if they relied completely on the tool, and not also on their own judgment, to arrive at the solution to the business problem-solving task (this task had a “right” answer)”.

  • Taking the path of least resistance

In McKinsey’s Top Ten Reports This Quarter blog, seven of the ten articles relate specifically to generative AI: technology trends, the state of AI, the future of work, the future of AI, the new AI playbook, questions to ask about AI, and healthcare and AI.

As it is the most dominant topic across the board globally, if we are not both vigilant and intentional, a myopic focus on this one significant technology will take us all down the path of least resistance, where our energy moves to wherever it is easiest to go. Like a river, which follows the path of least resistance through its surrounding terrain, without a strategic and systemic perspective we will always go, and end up, where we have always gone.

  • Living our lives forwards

According to the Boston Consulting Group article:

“The primary locus of human-driven value creation lies not in enhancing generative AI where it is already great, but in focusing on tasks beyond the frontier of the technology’s core competencies.”

This means that a whole lot of other variables need to be in play, and a newly emerging set of human skills, especially in creative problem-solving, needs to be developed to extract the most value from generative AI, and to generate the most imaginative, novel, and value-adding landing strips of the future.

Creative Problem Solving

In my previous blog posts “Imagination versus Knowledge” and “Why Successful Innovators Are Curious Like Cats” we shared that we are in the midst of a “Sputnik Moment” where we have the opportunity to advance our human creativity.

This human creativity is inside all of us; it involves the process of bringing something new into being that is original, surprising, useful, or desirable, in ways that add value to the quality of people’s lives, in ways they appreciate and cherish.

  • Taking a both/and approach

Our human creativity will be paralysed if we focus our attention and intention only on the technology and on the financial gains or potential profits we might get from it, and if we exclude the possibilities of a co-creative thinking partnership with the technology.

We need instead to deeply engage people in true creative problem-solving, involving them in positively impacting our crucial relationships and connectedness, with one another, with the natural world, and with the planet.

  • A marriage between creatives, technologists, and humanities

In a recent Fast Company video presentation, “Innovating Imagination: How Airbnb Is Using AI to Foster Creativity”, Brian Chesky, CEO of Airbnb, states that we need to consider and focus our attention and intention on discovering what is good for people.

He advocates developing a “marriage between creatives, technologists, and the humanities” that brings the human out and doesn’t let technology overtake our human element.

Developing Creative Problem-Solving Skills

At ImagineNation, we teach, mentor, and coach clients in creative problem-solving, through developing their Generative Discovery skills.

This involves developing an open and active mind and heart, by becoming flexible, adaptive, and playful in the ways we engage and focus our human creativity in the four stages of creative problem-solving.

It includes sensing, perceiving, and enabling people to deeply listen, inquire, question, and debate from the edges of temporarily hidden or emerging fields of the future.

People learn how to emerge, diverge, and converge creative insights, collective breakthroughs, ideation processes, and cognitive and emotional agility shifts to:

  • Deepen our attending, observing, and discerning capabilities to consciously connect with, explore, and discover possibilities that create tension and cognitive dissonance to disrupt and challenge the status quo, and other conventional thinking and feeling processes.
  • Create cracks, openings, and creative thresholds by asking generative questions to push the boundaries, and challenge assumptions and mental and emotional models to pull people towards evoking, provoking, and generating boldly creative ideas.
  • Unleash possibilities, and opportunities for creative problem solving to contribute towards generating innovative solutions to complex problems, and pressing challenges, that may not have been previously imagined.

Experimenting with the generative discovery skill set enables us to juggle multiple theories, models, and strategies to create and plan in an emergent, and non-linear way through creative problem-solving.

As stated by Hal Gregersen:

“Partnering with the technology in this way can help people ask smarter questions, making them better problem solvers and breakthrough innovators.”

Succeeding in the Age of AI

We know that Generative AI will change much of what we do and how we do it, in ways that we cannot yet anticipate.

Success in the age of AI will largely depend on our ability to learn and change faster than we ever have before, in ways that preserve our well-being, connectedness, imagination, curiosity, human creativity, and our collective humanity through partnering with generative AI in the creative problem-solving process.

Find Out More About Our Work at ImagineNation™

Find out about our collective learning products and tools, including The Coach for Innovators, Leaders, and Teams Certified Program, presented by Janet Sernack: a collaborative, intimate, and deeply personalized innovation coaching and learning program, supported by a global group of peers over 9 weeks, which can be customised as a bespoke corporate learning program.

It is a blended and transformational change and learning program that will give you a deep understanding of the language, principles, and applications of an ecosystem focus, human-centric approach, and emergent structure (Theory U) to innovation, and upskill people and teams and develop their future fitness, within your unique innovation context. Find out more about our products and tools.

Image Credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

LEGO Knows Why Companies Don’t Innovate


GUEST POST from Robyn Bolton

“Lego’s Latest Effort to Avoid Oil-Based Plastic Hits Brick Wall” – WSJ

“Lego axes plans to make bricks from recycled bottles” – BBC

“Lego ditches oil-free brick in sustainability setback” – The Financial Times

Recently, LEGO found itself doing the Walk of Atonement (see video below) after announcing to The Financial Times that it was scrapping plans to make bricks from recycled bottles, and media outlets from The Wall Street Journal to Fast Company to WIRED were more than happy to play the Shame Nun.

And it wasn’t just media outlets ringing the Shame Bell:

  • “In the future, they should not make these kinds of announcements (prototype made from recyclable plastic) until they actually do it,” Judith Enck, President of Beyond Plastics
  • “They are not going to survive as an organization if they don’t find a solution,” Paolo Taticchi, corporate sustainability expert at University College London.
  • “Lego undoubtedly had good intentions, but if you’re going to to (sic) announce a major environmental initiative like this—one that affects the core of your company—good intentions aren’t enough. And in this instance, it can even undermine progress.” Jesus Diaz, creative director, screenwriter, and producer at The Magic Sauce, writing for Fast Company

As a LEGO lover, I am not unbiased, but WOW, the amount of hypocritical, self-righteous judgment is astounding!  All these publications and pundits espouse the need for innovation, yet when a company falls even the tiniest bit short of aspirations, it’s just SHAME (clang) SHAME (clang) SHAME.

LEGO Atlantis 8073 Manta Warrior (i.e., tiny) bit of context

In 1946, LEGO founder Ole Kirk Christiansen purchased Denmark’s first plastic injection molding machine.  Today, 95% of the company’s 4,400 different bricks are made using acrylonitrile butadiene styrene (ABS), a plastic that requires 4.4 pounds of oil to produce 2.2 pounds of brick.  Admittedly, it’s not a great ratio, and it gets worse.  The material isn’t biodegradable or easily recyclable, so when the 3% of bricks not handed down to the next generation end up in a landfill, they’ll break down into highly polluting microplastics.

With this context, it’s easy to understand why LEGO’s 2018 announcement that it would move to all non-plastic or recycled materials by 2030 and reduce its carbon emissions by 37% (from 2019’s 1.2 million tons) by 2032 was such big news.

Three years later, in 2021, LEGO announced that its prototype bricks made from polyethylene terephthalate (PET) bottles offered a promising alternative to its oil-based plastic bricks. 

But last Monday, after two years of testing, the company shared that what was promising as a prototype isn’t possible at scale because the process required to produce PET-based bricks actually increases carbon emissions.

SHAME!

LEGO Art World Map (i.e. massive) amount of praise for LEGO

LEGO is doing everything that innovation theorists, consultants, and practitioners recommend:

  • Setting a clear vision and measurable goals so that people know what the priorities are (reduce carbon emissions), why they’re important (“playing our part in building a sustainable future and creating a better world for our children to inherit”), and the magnitude of change required
  • Defining what is on and off the table in terms of innovation, specifically that they are not willing to compromise the quality, durability, or “clutch power” of bricks to improve sustainability
  • Developing a portfolio of bets that includes new materials for products and packaging, new services to keep bricks out of landfills and in kids’ hands, new building and production processes, and active partnerships with suppliers to reduce their climate footprint
  • Prototyping and learning before committing to scale because what is possible at a prototype level is different than what’s possible at pilot, which is different from what’s possible at scale.
  • Focusing on the big picture and the long-term by not going for the near-term myopic win of declaring “we’re making bricks from more sustainable materials” and instead deciding “not to progress” with something that, when taken as a whole process, moves the company further away from its 2032 goal.

Just one minifig’s opinion

If we want companies to innovate (and we do), shaming them for falling short of perfection is the absolute wrong way to do it.

Is it disappointing that something that seemed promising didn’t work out?  Of course.  But it’s just one of many avenues and experiments being pursued.  This project ended, but the pursuit of the goal hasn’t.

Is 2 years a long time to figure out that you can’t scale a prototype and still meet your goals?  Maybe.  But, then again, it took P&G 10 years to figure out how to develop and scale a perforation that improved one-handed toilet paper tearing.

Should LEGO have kept all its efforts and success a secret until everything was perfect and ready to launch?  Absolutely not.  Sharing its goals and priorities, experiments and results, learnings and decisions shows employees, partners, and other companies what it means to innovate and lead.

Is LEGO perfect? No.

Is it trying to be better? Yes.

Isn’t that what we want?

Image Credit: Pixabay


The Comforter Cold War of 2006

(or How Assumptions Stifle Innovation)


GUEST POST from Robyn Bolton

In the room were two single beds, each with a fluffy white comforter folded neatly on top.

“Yeah, this is not gonna work.”

I had just entered my one-bedroom corporate apartment in Copenhagen, and while everything else was pleasantly light and spacious, there was no way I would spend the next six months sleeping in a single bed.

So, I set down my suitcases and immediately pushed the two beds together, using the two nightstands to secure them. The two comforters would work since there was just one of me, and I made a mental note to request a king-sized comforter from the desk when I left for work in the morning.

Thus began the great Comforter Cold War of 2006/2007.

Every few days, I would request a king-sized comforter for my jerry-rigged king-sized bed.  I would return to find one queen-sized comforter.  The luxury of a larger comforter would diminish the disappointment of not getting an appropriately sized one, and I would bask in the warmth of fully covered sleep.  For one night. The next day, I would return to my room only to find that the two single comforters had returned.

This went on for nine months.

I shared this story of passive-aggressive housekeeping at my going away party with my colleagues. Midway through the story, I noticed the absolutely baffled looks on their faces.

“What?”

“Why did you want one comforter?”

“Because I have one bed.  A comforter should cover the bed.”

“Why?  A bed doesn’t need a comforter.  A person does.  You just need a comforter to cover you.”

[extended silence while we try to process each other’s points]

“So, does that mean that in Denmark, if a couple sleeps together, they each have their own comforter?”

“Yes, of course!  Why would we share?  Each person has their own temperature preferences, and there’s no worry about someone stealing your covers.”

My mind.  Was.  Blown.

This made so much sense. A comforter covers a person, so the 1:1 ratio of comforter to people is far more logical than a 1:1 ratio of comforter to bed (and often a 1:2 ratio of comforter to people).  Seriously, how many relationships would be saved by simply having separate comforters?

Yet, for nine months, it made more sense to me to battle for a comforter size that apparently doesn’t exist in the country without ever asking why I couldn’t get what I was so clearly and reasonably (in my mind) requesting.

I assumed the apartment building didn’t have king-sized comforters or only enough for the actual king-sized beds.  I assumed housekeeping was on automatic pilot, not realizing they were replacing a queen-sized comforter with two single ones.  I assumed that communication amongst the staff was poor, so my request wasn’t being shared.  I assumed a lot.

But I never assumed that I was wrong and that the root of the problem was a cultural difference so deeply ingrained and subtle that it never occurred to anyone to question it.

Question your assumptions.

Assumptions are a shortcut to understanding our world.  Based on culture, experiences, and even stereotypes, we make assumptions about what came before, who we’re interacting with, what’s happening now, and what will happen next.

Most of the time, we’re right (or at least more right than wrong), so we keep making assumptions. It’s also why, when our assumptions are wrong, we tend to question everything but our assumptions.

And that kills innovation because it limits our curiosity and imagination, our perception of what’s possible, and our willingness to engage with and learn from others.

We all cling to assumptions that lead to Cold Wars. 

What’s yours?

Image Credit: Pixabay


What Einstein Got Wrong

Defining Design


GUEST POST from Robyn Bolton

“If you can’t explain something simply, you don’t understand it well enough.” – Albert Einstein (supposedly)

This is one of my favorite quotes because it’s an absolute gut punch.  You think you know something, probably because you’ve been saying and doing it for years.  Then someone comes along and asks you to explain it, and suddenly, you’re just standing there, mouth agape, gesturing, hoping that this wacky game of charades produces an answer.

This happened to me last Monday.

While preparing to teach a course titled “Design Innovation Lab,” I thought it would be a good idea to define “design” and “innovation.”  I already had a slide with the definition of “innovation” – something new that creates value – but when I had to make one for “design,” my stomach sank.

My first definition was “pretty pictures,” which is both wrong and slightly demeaning because designers do that and so much more.  My second definition, “I know it when I see it,” was worse.

So, I Googled the definition.

Then I asked ChatGPT.

Then I asked some designer friends.

No one had a simple definition of Design.

As the clock ticked closer to 6:00 pm, I defaulted to a definition from the International Council of Design:

“Design is a discipline of study and practice focused on the interaction between a person – a “user” – and the man-made environment, taking into account aesthetic, functional, contextual, cultural, and societal considerations.  As a formalized discipline, design is a modern construct.”

Before unveiling this definition to a classroom full of degreed designers pursuing their Master’s in Design, I asked them to define “design.”

It went as well as all my previous attempts.  Lots of thoughts and ideas.  Lots of “it’s this but not that.”  Lots of debate about whether it needs to have a purpose for it to be distinct from art.

Absolutely no simple explanations or punchy definitions.

So, when I unveiled the definition from the very official-sounding International Council of Design, we all just stared at it.

“Yes, but it’s not quite right.”

“It is all those things, but it’s more than just those things.”

“I guess it is a ‘modern construct’ when you think of it as a job, but we’ve done it forever.”

As we squinted and puzzled, what was missing slowly dawned on us. 

There was nothing human in this definition. There was no mention of feelings or empathy, life or nature, connection or community, aspirations or dreams.

In this definition, designers consider multiple aspects of an unnatural environment in creating something to be used. Designers are simply the step before mass production begins.

Who wants to do that?

Who wants to be a stop, however necessary, on a conveyor belt of sameness?

Yet that’s what we become when we strip the humanness out of our work.

Humans are messy, emotional, unpredictable, irrational, challenging, and infuriating.

We’re also interesting, creative, imaginative, hopeful, kind, curious, hard-working, and resilient.

When we try to strip away human messiness to create MECE (mutually exclusive, collectively exhaustive) target markets and customer personas, we strip away the human we’re creating for.

When we ignore unpredictable and irrational feedback on our ideas, we ignore the creative and imaginative answers that could improve our ideas.

When we give up on a challenge because it’s more difficult than expected and doesn’t produce immediate results, we give up hope, resiliency, and the opportunity to improve things.

I still don’t have a simple definition of design, but I know that one that doesn’t acknowledge all the aspects of a human beyond just being a “user” isn’t correct.

Even if you explain something simply, you may not understand it well enough.

Image Credit: Misterinnovation.com


An Innovation Rant: Just Because You Can Doesn’t Mean You Should


GUEST POST from Robyn Bolton

Why are people so concerned about, afraid of, or resistant to new things?

Innovation, by its very nature, is good.  It is something new that creates value.

Naturally, the answer has nothing to do with innovation.

It has everything to do with how we experience it. 

And innovation without humanity is a very bad experience.

Over the last several weeks, I’ve heard so many stories of inhuman innovation that I have said, “I hate innovation” more than once.

Of course, I don’t mean that (I would be at an extraordinary career crossroads if I did).  What I mean is that I hate the choices we make about how to use innovation. 

Just because AI can filter resumes doesn’t mean you should remove humans from the process.

Years ago, I oversaw recruiting for a small consulting firm of about 50 people.  I was a full-time project manager, but given our size, everyone was expected to pitch in and take on extra responsibilities.  Because of our founder, we received more resumes than most firms our size, so I usually spent 2 to 3 hours a week reviewing them and responding to applicants.  It was usually boring, sometimes hilarious, and always essential because of our people-based business.

Would I have loved to have an AI system sort through the resumes for me?  Absolutely!

Would we have missed out on incredible talent because they weren’t our “type”?  Absolutely!

AI judges a resume based on keywords and other factors you program in.  This probably means that it filters out people who worked in multiple industries, aren’t following a traditional career path, or don’t have the right degree.

This also means that you are not accessing people who bring a new perspective to your business, who can make the non-obvious connections that drive innovation and growth, and who bring unique skills and experiences to your team and its ideas.

If you permit AI to find all your talent, pretty soon, the only talent you’ll have is AI.
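The kind of naive keyword screen described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual system; the keywords and resumes are invented to show how such a filter discards candidates with non-traditional paths:

```python
# Hypothetical sketch of a naive keyword-based resume screen.
# The keywords and resume snippets are invented for illustration only.

REQUIRED_KEYWORDS = {"mba", "consulting", "strategy"}  # the firm's assumed "type"

def keyword_screen(resume_text):
    """Pass a resume only if it contains every required keyword."""
    words = set(resume_text.lower().split())
    return REQUIRED_KEYWORDS.issubset(words)

traditional = "MBA with five years of strategy consulting experience"
non_traditional = "Physicist turned product designer with startup experience"

print(keyword_screen(traditional))      # the expected profile advances
print(keyword_screen(non_traditional))  # the fresh perspective is filtered out
```

The non-traditional candidate never reaches a human reviewer, which is exactly the loss of non-obvious connections the paragraph above describes.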

Just because you can ghost people doesn’t mean you should.

Rejection sucks.  When you reject someone, and they take it well, you still feel a bit icky and sad.  When they don’t take it well, as one of my colleagues said when viewing a response from a candidate who did not take the decision well, “I feel like I was just assaulted by a bag of feathers.  I’m not hurt.  I’m just shocked.”

So, I understand ghosting feels like the better option.  It’s not.  At best, it’s lazy, and at worst, it’s selfish.  Especially if you’re a big company using AI to screen resumes. 

It’s not hard to add a function that triggers a standard rejection email when the AI filters someone out.  It’s not that hard to have a pre-programmed email that can quickly be clicked and sent when a human makes a decision.
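As a sketch of that point, here is one hedged way such a hook might look. The template, names, and screening callback are all hypothetical, and a real system would hand the message to an email service rather than return it as a string:

```python
# Hypothetical sketch: produce a standard rejection note whenever a screen
# (human or AI) filters a candidate out, so that nobody is ghosted.
# The template and candidate data are invented for illustration only.

REJECTION_TEMPLATE = (
    "Dear {name},\n"
    "Thank you for applying. We will not be moving forward with your "
    "application, but we appreciate your time and interest."
)

def screen_and_respond(candidate, passes_screen):
    """Return a rejection email body if the screen filters the candidate out."""
    if passes_screen(candidate):
        return None  # candidate advances; no rejection is sent
    return REJECTION_TEMPLATE.format(name=candidate["name"])

rejection = screen_and_respond({"name": "Alex"}, lambda c: False)
print(rejection.splitlines()[0])  # the salutation line of the rejection note
```

The point is how little machinery courtesy requires: one template and one branch at the moment the decision is made.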

The Golden Rule – do unto others as you would have done unto you – doesn’t apply to AI.  It does apply to you.

Just because you can stack bots on bots doesn’t mean you should.

At this point, we all know that our first interaction with customer service will be with a bot.  Whether it’s an online chatbot or an automated phone tree, the journey to a human is often long and frustrating. Fine.  We don’t like it, but we don’t have a choice.

But when a bot transfers us to a bot masquerading as a person?  Do you hate your customers that much?

Some companies do, as my husband and I discovered.  I was on the phone with one company trying to resolve a problem, and he was in a completely different part of the house on the phone with another company trying to fix a separate issue.  When I wandered to the room where my husband was to get information that the “person” I was talking to needed, I noticed he was on hold.  Then he started staring at me funny (not as unusual as you might think).  Then he asked me to put my call on speaker (that was unusual).  After listening for a few minutes, he said, “I’m talking to the same woman.”

He was right.  As we listened to each other’s calls, we heard the same “woman” with the same tenor of voice, unusual cadence of speech, and indecipherable accent.  We were talking to a bot.  It was not helpful.  It took each of us several days and several more calls to finally reach humans.  When that happened, our issues were resolved in minutes.

Just because innovation can doesn’t mean you should allow it to.

You are a human.  You know more than the machine knows (for now).

You are interacting with other humans who, like you, have a right to be treated with respect.

If you forget these things – how important you and your choices are and how you want to be treated – you won’t have to worry about AI taking your job.  You already gave it away.

Image Credit: Pexels


Your Innovation is Dictated by Who You Are & What You Do


GUEST POST from Robyn Bolton

Using only three words, how would you describe your company?

Better yet, what three words would your customers use to describe your company?

These three words capture your company’s identity. They answer, “who we are” and “what business we’re in.”  They capture a shared understanding of where customers allow you to play and how you take action to win. 

Everything consistent with this identity is normal, safe, and comfortable.

Everything inconsistent with this identity is weird, risky, and scary.

Your identity is killing innovation.

Innovation is something new that creates value.

Identity is carefully constructed, enduring, and fiercely protected and reinforced.

When innovation and identity conflict, innovation usually loses.

Whether the innovation is incremental, adjacent, or radical doesn’t matter. If it conflicts with the company’s identity, it will join the 99.9% of innovations that are canceled before they ever launch.

Your identity can supercharge innovation.

When innovation and identity guide and reinforce each other, it doesn’t matter if the innovation is incremental, adjacent, or radical.  It can win.

Identity-based Innovation changes your perspective. 

We typically think about innovation as falling into three types based on the scope of change to the business model:

  1. Incremental innovations that make existing offerings better, faster, and cheaper for existing customers and use our existing business model
  2. Adjacent innovations that introduce new offerings in new categories, appeal to new customers, require new processes and activities to create, or use new revenue models
  3. Radical innovations that change everything – offerings, customers, processes and activities, and revenue models

These types make sense IF we’re perfectly logical and rational beings capable of dispassionately evaluating data and making decisions.  SPOILER ALERT: We’re not.  We decide with our hearts (emotions, values, fears, and desires) and justify those decisions with our heads (logic and data).

So, why not use an innovation-typing scheme that reflects our humanity and reality?

That’s where Identity-based Innovation categories come in:

  1. Identity-enhancing innovations reinforce and strengthen people’s comfort and certainty in who they are and what they do relative to the organization.  “Organizational members all ‘know’ what actions are acceptable based on a shared understanding of what the organization represents, and this knowledge becomes codified in a set of heuristics about which innovative activities should be pursued and which should be dismissed.”
  2. Identity-stretching innovations enable and stretch people’s understanding of who they are and what they do in an additive, not threatening, way to their current identities.
  3. Identity-challenging innovations are threats and tend to occur in one of two contexts:
    • Extreme technological change that “results in the obsolescence of a product market or the convergence of multiple product markets.” (challenges “who we are”)
    • Competitors or new entrants that launch new offerings or change the basis of competition (challenges “what we do”)

By looking at your innovations through the lens of identity (and, therefore, people’s decision-making hearts), you can more easily identify the ones that will be supported and those that will be axed.

It also changes your results.

“Ok, nerd,” you’re probably thinking.  “Thanks for dragging me into your innovation portfolio geek-out.”

Fair, but let me illustrate the power of this perspective using some examples from P&G.

  • Charmin Smooth Tear – Business-model type: Incremental (made Charmin easier to tear). Identity-based category: Identity-enhancing (reinforced Charmin’s premium experience).
  • Swiffer – Business-model type: Adjacent (new durable product in an existing category, floor cleaning). Identity-based category: Identity-enhancing (reinforced P&G’s identity as a provider of best-in-class cleaning products).
  • Tide Dry Cleaners – Business-model type: Radical (moved P&G into services using a franchise model). Identity-based category: Identity-stretching (a dry cleaning service is consistent with P&G’s identity but stretches it from just products into services).

Do you see what happened on that third line?  A Radical Innovation was identity-stretching (not challenging), and it’s in the 0.1% of corporate innovations that launched!  It’s in 22 states!

The Bottom Line

If you look at innovation in the same way you always have, through the lens of changes to your business model, you’ll get the same innovation results you always have.

If you look at innovation differently, through the lens of how it affects personal and organizational identity, you’ll get different results.  You may even get radical results.

Image Credit: Unsplash


An Innovation Lesson From The Rolling Stones


GUEST POST from Robyn Bolton

If you’re like most people, you’ve faced disappointment. Maybe the love of your life didn’t return your affection, you didn’t get into your dream college, or you were passed over for promotion.  It hurts.  And sometimes, that hurt lingers for a long time.

Until one day, something happens, and you realize your disappointment was a gift.  You meet the true love of your life while attending college at your fallback school, and years later, when you get passed over for promotion, the two of you quit your jobs, pursue your dreams, and live happily ever after. Or something like that.

We all experience disappointment.  We also all get to choose whether we stay there, lamenting the loss of what coulda shoulda woulda been, or we can persevere, putting one foot in front of the other and playing The Rolling Stones on repeat:

“You can’t always get what you want

But if you try sometimes, well, you might just find

You get what you need”

That’s life.

That’s also innovation.

As innovators, especially leaders of innovators, we rarely get what we want.  But we always get what we need (whether we like it or not).

We want to know. 
We need to be comfortable not knowing.

Most of us want to know the answer because if we know the answer, there is no risk. There is no chance of being wrong, embarrassed, judged, or punished.  But if there is no risk, there is no growth, expansion, or discovery.

Innovation is something new that creates value. If you know everything, you can’t innovate.

As innovators, we need to be comfortable not knowing.  When we admit to ourselves that we don’t know something, we open our minds to new information, new perspectives, and new opportunities. When we say we don’t know, we give others permission to be curious, learn, and create. 

We want the creative genius and billion-dollar idea. 
We need the team and the steady stream of big ideas.

We want to believe that one person blessed with sufficient time, money, and genius can change the world.  Some people like to believe they are that person.  Most of us think we can hire that person and that, when we find them and give them the resources they need, they will give us the billion-dollar idea that transforms our company, disrupts the industry, and changes the world.

Innovation isn’t magic.  Innovation is teamwork.

We need other people to help us see what we can’t and do what we struggle to do.  The idea-person needs the optimizer to bring her idea to life, and the optimizer needs the idea-person so he has a starting point.  We need lots of ideas because most won’t work, but we don’t know which ones those are, so we prototype, experiment, assess, and refine our way to the ones that will succeed.   

We want to be special.
We need to be equal.

We want to work on the latest and most cutting-edge technology and discuss it using terms that no one outside of Innovation understands. We want our work to be on stage, oohed and aahed over on analyst calls, and talked about with envy and reverence in every meeting. We want to be the cool kids, strutting around our super hip offices in our hoodies and flip-flops or calling into the meeting from Burning Man. 

Innovation isn’t about you.  It’s about serving others.

As innovators, we create value by solving problems.  But we can’t do it alone.  We need experienced operators who can quickly spot design flaws and propose modifications.  We need accountants and attorneys who instantly see risks and help us navigate around them.  We need people to help us bring our ideas to life, but that won’t happen if we act like we’re different or better.  Just as we work in service to our customers, we must also work in service to our colleagues by working with them, listening, compromising, and offering help.

What about you?
What do you want?
What are you learning you need?

Image Credit: Unsplash
