Tag Archives: Innovation

Making Decisions in Uncertainty

This 25-Year-Old Tool Actually Works


GUEST POST from Robyn Bolton

Just as we got used to VUCA (volatile, uncertain, complex, ambiguous), futurists now claim “the world is BANI now.”  BANI (brittle, anxious, nonlinear, incomprehensible) is much worse than VUCA and reflects “the fractured, unpredictable state of the modern world.”

Not to get too Gen X on the futurists who coined and are spreading this term but…shut up.

Is the world fractured and unpredictable? Yes.

Does it feel brittle? Are we more anxious than ever? Are things changing at exponential speed, requiring nonlinear responses? Does the world feel incomprehensible? Yes, to all.

Naming a problem is the first step in solving it. The second step is falling in love with the problem so that we become laser focused on solving it. BANI does the first but fails at the second. It wallows in the problem without proposing a path forward. And as the sign says, “Ain’t nobody got time for this.”

(Re)Introducing the Cynefin Framework

The Cynefin framework recognizes that leadership and problem-solving must be contextual to be effective. Using the Welsh word for “habitat,” the framework is a tool to understand and name the context of a situation and identify the approaches best suited for managing or solving the situation.

It’s grounded in the idea that every context – situation, challenge, problem, opportunity – exists somewhere on a spectrum between Ordered and Unordered. At the Ordered end of the spectrum, cause and effect are obvious and immediate and the path forward is based on objective, immutable facts. Unordered contexts, however, have no obvious or immediate relationship between cause and effect, and moving forward requires people to recognize patterns as they emerge.

Both VUCA and BANI point out the obvious – we’re spending more time on the Unordered end of the spectrum than ever. Unlike the acronyms, Cynefin helps leaders decide and act.

Five Contexts, Five Ways Forward

The Cynefin framework identifies five contexts, each with its own best practices for making decisions and progress.

On the Ordered end of the spectrum:

  • Simple contexts are characterized by stability and obvious and undisputed right answers. Here, patterns repeat, and events are consistent. This is where leaders rely on best practices to inform decisions and delegation, and direct communication to move their teams forward.
  • Complicated contexts have many possible right answers and the relationship between cause and effect isn’t known but can be discovered. Here, leaders need to rely on diverse expertise and be particularly attuned to conflicting advice and novel ideas to avoid making decisions based on outdated experience.

On the Unordered end of the spectrum:

  • Complex contexts are filled with unknown unknowns, many competing ideas, and unpredictable relationships between cause and effect. The most effective leadership approach in this context is one that is deeply uncomfortable for most leaders but familiar to innovators – letting patterns emerge. Using small-scale experiments and high levels of collaboration, diversity, and dissent, leaders can accelerate pattern-recognition and place smart bets.
  • Chaotic contexts are fraught with tension. There are no right answers or clear cause and effect. There are too many decisions to make and not enough time. Here, leaders often freeze or make big, bold decisions. Neither is wise. Instead, leaders need to think like emergency responders and rapidly respond to re-establish order where possible, bringing the situation into a Complex state rather than trying to solve everything at once.

The final context is Disorder. Here leaders argue, multiple perspectives fight for dominance, and the organization is divided into factions. Resolution requires breaking the context down into smaller parts that fit one of the four previous contexts and addressing them accordingly.
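If it helps to see the framework as a decision aid rather than a diagram, here is a minimal sketch in Python that maps each context to the way forward described above. The dictionary and function names are purely illustrative (there is no official Cynefin API), and the sense/analyze/probe/act sequences reflect the standard Cynefin formulation rather than anything specific to this article.

```python
# Illustrative sketch only: the Cynefin contexts and the decision approach
# each one calls for, expressed as a simple lookup table.

CYNEFIN_PLAYBOOK = {
    "simple":      "Sense, categorize, respond: apply best practices, delegate, communicate directly.",
    "complicated": "Sense, analyze, respond: bring in diverse experts and weigh conflicting advice.",
    "complex":     "Probe, sense, respond: run small experiments and let patterns emerge.",
    "chaotic":     "Act, sense, respond: re-establish order first, then move the situation toward Complex.",
    "disorder":    "Break the situation into parts and map each part to one of the other four contexts.",
}

def recommend(context: str) -> str:
    """Return the suggested way forward for a given Cynefin context."""
    key = context.strip().lower()
    if key not in CYNEFIN_PLAYBOOK:
        raise ValueError(f"Unknown context {context!r}; expected one of {sorted(CYNEFIN_PLAYBOOK)}")
    return CYNEFIN_PLAYBOOK[key]

print(recommend("Complex"))
```

The code is deliberately trivial: the hard part is naming the context correctly, and once you have, the appropriate response largely follows.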

The Only Way Out is Through

Our VUCA/BANI world isn’t going to get any simpler or easier. And fighting it, freezing, or fleeing isn’t going to solve anything. Organizations need leaders with the courage to move forward and the wisdom and flexibility to do so in a way that is contextually appropriate. Cynefin is their map.

Image credit: Pexels


AI, Cognitive Obesity and Arrested Development


GUEST POST from Pete Foley

Some of the biggest questions of our age are whether AI will ultimately benefit or hurt us, and how big its effect will be.

And that of course is a problem with any big, disruptive technology.  We want to anticipate how it will play out in the real world, but our forecasts are rarely very accurate, and all too often miss a lot of the more important outcomes. We often don’t anticipate its killer applications, how it will evolve or co-evolve with other emergent technologies, or predict all of the side effects and ‘off label’ uses that come with it.  And the bigger the potential impact new tech has, and the broader the potential applications, the harder prediction becomes.  The reality is that in virtually every case, it’s not until we set innovation free that we find its full impact, good, bad or indifferent.

Pandora’s Box

And that can of course be a sizable concern.  We have to open Pandora’s Box in order to find out what is inside, but once open, it may not be possible to close it again.   For AI, the potential scale of its impact makes this particularly risky. It also makes any meaningful regulation really difficult. We cannot regulate what we cannot accurately predict. And if we try we risk not only missing our target, but also creating unintended consequences, and distorting ‘innovation markets’ in unexpected, potentially negative ways.

So it’s not surprising there is a lot of discussion around what AI will or will not do. How will it affect jobs, the economy, security, and mental health? Will it ‘pull’ a Skynet, turn rogue and destroy humanity? Will it simply replace human critical thinking to the point where it rules us by default? Or will it ultimately fizzle out to some degree, and become a tool in a society that looks a lot like today, rather than revolutionizing it?

I don’t even begin to claim to predict the future with any accuracy, for all of the reasons mentioned above. But as a way to illustrate how complex an issue this is, I’d like to discuss a few less talked about scenarios.

1.  Less obvious issues:  Obviously AI comes with potential for enormous benefits and commensurate problems.  It’s likely to trigger an arms race between ‘good’ and ‘bad’ applications, and that in itself will likely be a moving target.  An obvious, oft-discussed potential issue is of course the ‘Terminator Scenario’ mentioned above.  That’s not completely far-fetched, especially with recent developments in AI self-preservation and scheming that I’ll touch on later. But there are plenty of other potential, if less extreme, pitfalls, many of which involve AI amplifying and empowering bad behavior by humans.  The speed and agility AI hands to hackers, hostile governments, black-hats, terrorists and organized crime vastly enhances their capability for attacks on infrastructure, mass fraud or worse. And perhaps more concerning, there’s the potential for AI to democratize cyber crime, making it accessible to a large number of ‘petty’ criminals who until now have lacked the resources to engage in this area. And when the crime base expands, so does the victim base. Organizations or individuals who were too small to be targeted for ransomware when it took huge resources to create will presumably become more attractive targets as AI allows similar code to be built in hours by people with limited coding skills.

And all of this of course adds another regulation challenge. The last thing we want to do is slow legitimate AI development via legislation, while giving free rein to illegitimate users, who presumably will be far less likely to follow regulations. If the arms race mentioned above occurs, the last thing we want to do is unintentionally tip the advantage to the bad guys!

Social Impacts

But AI also has the potential to be disruptive in more subtle ways.  If the internet has taught us anything, it is that how the general public adopts technology, and how big tech monetizes it, matter a lot. But this is hard to predict.  Some of the Internet’s biggest negative impacts have derived from largely unanticipated damage to our social fabric.  We are still wrestling with its impact on social isolation, mental health, cognitive development and our vital implicit skill-set. To the last point, simply deferring mental tasks to phones and computers means some cognitive muscles lack exercise and atrophy, while reductions in human-to-human interaction depreciate our emotional and social intelligence.

1. Cognitive Obesity.  The human brain evolved over tens of thousands, arguably millions, of years (depending upon where you start measuring our hominid history).  But 99% of that evolution was characterized by slow change, and occurred in the context of limited resources, limited access to information, and relatively small social groups.  Today, as the rate of technological innovation explodes, our environment is vastly different from the one our brain evolved to deal with.  And that gap between us and our environment is widening rapidly, as the world is evolving far faster than our biology.  Of course, as mentioned above, the nurture part of our cognitive development does change with changing context, so we do course-correct to some degree, but our core DNA cannot, and that has consequences.

Take the current ‘obesity epidemic’.  We evolved to leverage limited food resources, and to maximize opportunities to stock up on calories when they occurred.  But today, faced with near-infinite availability of food, we struggle to control our scarcity instincts. As a society, we eat far too much, with all of the health issues that brings with it. Even when we are cognitively aware of the dangers of overeating, we find it difficult to resist our implicit instincts to gorge on more food than we need.  The analogy to information is fairly obvious. The internet brought us near-infinite access to information and ‘social connections’.  We’ve already seen the negative impact this can have, contributing to societal polarization, loss of social skills, weakened emotional intelligence, isolation, mental health ‘epidemics’ and much more. It’s not hard to envisage these issues growing as AI increases the power of the internet, while also amplifying the seduction of virtual environments.  Will we therefore see a cognitive obesity epidemic because our brain simply isn’t adapted to deal with near-infinite resources? Instead of AI turning us all into hyper-productive geniuses, will we simply gorge on less productive content, be it cat videos, porn or manipulative but appealing memes and misinformation? Instead of it acting as an intelligence enhancer, will it instead accelerate a dystopian Brave New World, where massive data centers gorge on our common natural resources primarily to create trivial entertainment?

2. Amplified Intelligence.  Even in the unlikely event that access to AI is entirely democratic, it’s guaranteed that its benefits will not be. Some will leverage it far more effectively than others, creating significant risk of accelerating social disparity.  While many will likely gorge unproductively as described above, others will be more disciplined, more focused, and hence secure more advantage.  To return to the obesity analogy, it’s well documented that obesity is far more prevalent in lower-income groups. It’s hard not to envisage that productive leverage of AI will follow a similar pattern, widening disparities within and between societies, with all of the issues and social instability that comes with that.

3. Arrested Development.  We all know that ultimately we are products of both nature and nurture. As mentioned earlier, our DNA evolves slowly over time, but how it is expressed in individuals is impacted by our current context.  Humans possess enormous cognitive plasticity, and can adapt and change very quickly in different environments.  It’s arguably our biggest ‘blessing’, but can also be a curse, especially when that environment is changing so quickly.

The brain is analogous to a muscle, in that the parts we exercise expand or sharpen, and the parts we don’t atrophy.    As we defer more and more tasks to AI, it’s almost certain that we’ll become less capable in those areas.  At one level, that may not matter. Being weaker at math or grammar is relatively minor if our phones can act as a surrogate, all of my personal issues with autocorrect notwithstanding.

But a bigger potential issue is the erosion of causal reasoning.  Critical thinking requires understanding of underlying mechanisms.  But when infinite information is available at a swipe of a finger, it becomes all too easy to become a ‘headline thinker’, and unconsciously fail to penetrate problems with sufficient depth.

That risks what Art Markman, a psychologist at UT and a mentor and friend, used to call the ‘illusion of understanding’.  We may think we know how something works, but often find that knowledge is superficial, or at least incomplete, when we actually need it.   Whether it’s fixing a toilet, changing a tire, resetting a fuse, or unblocking a sink, the need to actually perform a task often reveals a lack of deep, causal knowledge.   In home improvement contexts, this often doesn’t matter until it does, but at least we get a clear signal when we discover we need to rush to YouTube to fix that leaking toilet!

This has implications that go far beyond home improvement, and is one factor helping to tear our social fabric apart.   We only have to browse the internet to find people with passionate, but often opposing, views on a wide variety of often controversial topics. It could be interest rates, Federal budgets, immigration, vaccine policy, healthcare strategy, or a dozen others. But all too often, the passion is not matched by deep causal knowledge.  In reality, these are all extremely complex topics with multiple competing and interdependent variables.  And at risk of triggering hate mail, few if any of them have easy, conclusive answers.  This is not physics, where we can plug numbers into an equation and it spits out a single, unambiguous solution.  The reality is that complex, multi-dimensional problems often have multiple, often competing, partial solutions, and optimum outcomes usually require trade-offs.  Unfortunately, few of us really have the time to assimilate the expertise and causal knowledge to have truly informed and unambiguous answers to most, if not all, of these difficult problems.

And worse, AI also helps the ‘bad guys’. It enables unscrupulous parties to manipulate us for their own benefit, via memes, selective information and misinformation that are often designed to make us think we understand complex problems far better than we really do. As we increasingly rely on input from AI, this will inevitably get worse. The internet and social media have already contributed to unprecedented social division and nefarious financial crimes.   Will AI amplify this further?

This problem is not limited to complex social challenges. The danger is that for ALL problems, the internet, and now AI, allows us to create the illusion for ourselves that we understand complex systems far more deeply than we really do.  That in turn risks us becoming less effective problem solvers and innovators. Deep causal knowledge is often critical for innovating or solving difficult problems.  But in a world where we can access answers to questions so quickly and easily, the risk is that we don’t penetrate topics as deeply. I personally recall doing literature searches before starting a project. It was often tedious, time-consuming and boring. Exactly the type of task AI is perfect for. But that tedious process inevitably built my knowledge of the space I was moving into, and often proved valuable when we hit problems later in the project. If we now defer this task to AI, even in part, we reduce our depth of understanding. And in complex systems or theoretical problem solving, we will often lack the unambiguous signal that tells us our skills and knowledge are lacking when we do something relatively simple like fixing a toilet. The more we use AI, the more we risk lacking the necessary depth of understanding, often without realizing it.

Will AI become increasingly unreliable?

We are seeing AI develop the capability to lie, together with a growing propensity to cover its tracks when it does so. The AI community calls it ’scheming’, but in reality it’s fundamentally lying.  https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/?_bhlid=6a932f218e6ebc041edc62ebbff4f40bb73e9b14. We’ve known from the beginning that AI makes mistakes.  And as I discussed recently, the risks associated with that are amplified because its increasingly (super)human, oracle-like interface creates an illusion of omnipotence.

But now it appears to be increasingly developing properties that mirror self-preservation.  A few weeks ago there were reports of difficulties in getting AIs to shut themselves down, and even of AIs using defensive blackmail when so threatened. Now we are seeing reports of AIs deliberately trying to hide their mistakes.  And perhaps worse, concerns that attempts to fix this may simply “teach the model to become better at hiding its deceptive behavior”, or in other words, become a better liar.

If we are already in an arms race with an entity to keep it honest, and to put our interests above its own, then given its vastly superior processing power and speed, it may be a race we’ve already lost.  That may sound ‘doomsday-like’, but that doesn’t make it any less possible. And keep in mind, many of the Doomsday projections around AI focus on a ’singularity event’ when AI suddenly becomes self-aware. That assumes AI awareness and consciousness will be similar to humans’, and forces a ‘birth’ analogy onto the technology. However, recent examples of self-preservation and dishonesty may hint at a longer, more complex transition, some of which may have already started.

How big will the impact of AI be?

I think we all assume that AI’s impact will be profound. After all, it’s still in its infancy, and is already finding its way into all walks of life.  But what if we are wrong, or at least overestimating its impact?  Just to play Devil’s Advocate, we humans do have a history of over-estimating both the speed and impact of technology-driven change.

Remember the unfounded (in hindsight) panic around Y2K?  Or when I was growing up, we all thought 2025 would be full of people whizzing around using personal jet-packs.  In the 60’s and 70’s we were all pretty convinced we were facing nuclear Armageddon. One of the greatest movies of all time, 2001, co-written by inventor and futurist Arthur C. Clarke, had us voyaging to Jupiter 24 years ago!  Then there is the great horse manure crisis of 1894. At that time, London was growing rapidly, and literally becoming buried in horse manure.  The London Times predicted that in 50 years all of London would be buried under 9 feet of poop. In 1898 the first global urban planning conference could find no solution, concluding that civilization was doomed. But London, and many other cities, received salvation from an unexpected quarter. Henry Ford’s mass-produced motor car surreptitiously saved the day.  It was not a designed solution for the manure problem, and nobody saw it coming as a solution to that problem. But nonetheless, it’s yet another example of our inability to see the future in all of its glorious complexity, and of our predictions’ tendency to skew towards worst-case scenarios and/or hyperbole.

Change Aversion:

That doesn’t of course mean that AI will not have a profound impact. But lots of factors could potentially slow down, or reduce, its effects.  Not least of these is human nature. Humans possess a profound resistance to change.  For sure, we are curious, and the new and innovative holds great appeal.  That curiosity is a key reason why humans now dominate virtually every ecological niche on our planet.   But we are also a bit schizophrenic, in that we love change and crave stability and consistency at the same time.  Our brains have limited capacity, especially for thinking about and learning new stuff.  For a majority of our daily activities, we therefore rely on habits, rituals, and automatic behaviors to get us through without using that limited higher cognitive capacity. We can drive, or type, or do parts of our job without really thinking about it. This ‘implicit’ mental processing frees up our conscious brain to manage the new or unexpected.  But as technology like AI accelerates, a couple of things could happen.  One is that our cognitive capacity gets overloaded, and we unconsciously resist.  Instead of using the source of all human knowledge for deep self-improvement, we instead immerse ourselves in less cognitively challenging content such as social media.

Or, as mentioned earlier, we increasingly lose causal understanding of our world, and do so without realizing it.   Why use our limited thinking capacity for tasks when it is quicker, easier, and arguably more accurate to defer to an AI? But lack of causal understanding seriously inhibits critical thinking and problem solving.  As AI gets smarter, there is a real risk that we as a society become dumber, or at least less innovative and creative.

Our Predictions are Wrong.

If history teaches us anything, it is that most, if not all, of the sage and learned predictions about AI will be mostly wrong. There is no denying that it is already assimilating into virtually every area of human society: finance, healthcare, medicine, science, economics, logistics, education, etc.  And it’s a snooze-you-lose scenario; in many fields of human endeavor, we have little choice.  Fail to embrace the upside of AI and we get left behind.

That much power in things that can think so much faster than us, that may be developing self-interest, if not self-awareness, that have no apparent moral framework, and are in danger of becoming expert liars, is certainly quite sobering.

The Doomsday Mindset.

As suggested above, loss aversion and other biases drive us to focus on the downside of change.   It’s a bias that makes evolutionary sense, and helped keep our ancestors alive long enough to breed and become our ancestors. But remember, that bias is implicitly built into most, if not all, of our predictions.   So there’s at least a chance that its impact won’t be quite as good or bad as our predictions suggest.

But I’m not sure we want to rely on that.  Maybe this time a Henry Ford won’t serendipitously rescue us from a giant pile of poop of our own making. But whatever happens, I think it’s a very good bet that we are in for some surprises, both good and bad. Probably the best way to deal with that is to not cling too tightly to our projections or our theories, remain agile, and follow the surprises as much as, if not more than, met expectations.

Image credits: Unsplash







The Secret to Endless Customers


GUEST POST from Shep Hyken

Marcus Sheridan owns a pool and spa manufacturing company in Virginia — not a very sexy business, unless you consider the final product, which is often surrounded by beautiful people. What he did to stand out in a marketplace filled with competition is a masterclass in how to get noticed and, more importantly, get business. His most recent book, Endless Customers, is a follow-up to his bestselling book They Ask, You Answer, with updated information and new ideas that will help you build a business that has, as the title implies, endless customers.

Sheridan’s journey began in 2001 when he started a pool company with two friends. When the 2008 market collapse hit, they were on the verge of losing everything. This crisis forced them to think differently about how to reach customers. Sheridan realized that potential buyers were searching for answers to their questions, so he decided his company would become “the Wikipedia of fiberglass swimming pools.”

By brainstorming every question he’d ever received as a pool salesperson and addressing them through content online, his company’s website became the most trafficked swimming pool website in the world within just a couple of years. This approach transformed his business and became the foundation for his business philosophy.

In our interview on Amazing Business Radio, Sheridan shared what he believes is the most important strategy that businesses can use to get and keep customers, and that is to become a known and trusted brand. They must immerse themselves in what he calls the Four Pillars of a Known and Trusted Brand.

  1. Say What Others Aren’t Willing to Say: The No. 1 reason people leave websites is that they can’t find what they’re looking for — and the top information they seek is pricing. Sheridan emphasizes that businesses should openly discuss costs and pricing on their websites. While you don’t need to list exact prices, you should educate consumers about what drives costs up or down in your industry. Sheridan suggests creating a comprehensive pricing page that teaches potential customers how to buy in your industry. According to him, 90% of industries still avoid this conversation, even though it’s what customers want most.
  2. Show What Others Aren’t Willing to Show: When Sheridan’s company was manufacturing fiberglass swimming pools, it became the first to show its entire manufacturing process from start to finish through a series of videos. They were so complete that someone could literally learn how to start their own manufacturing company by watching these videos. Sheridan recognized that sharing the “secret sauce” was a level of transparency that built trust, helping to make his company the obvious choice for many customers.
  3. Sell in Ways Others Aren’t Willing to Sell: According to Sheridan, 75% of today’s buyers prefer a “seller-free sales experience.” He says, “That doesn’t mean we hate salespeople. We just don’t want to talk to them until we’re very, very ready.” Sheridan suggests meeting customers where they are by offering self-service options on your website. For his pool and spa business, that included a price estimator solution that helped potential customers determine how much they could afford — without the pressure of talking to a salesperson.
  4. Be More Human than Others Are Willing to Be: In a world that is becoming dominated by AI and technology, showing the human side of a business is critical to a trusting business relationship. Sheridan suggests putting leaders and employees on camera. They are truly the “face of the brand.” It’s okay to use AI, just find the balance that helps you stay human in a technology-dominated world.

As we wrapped up the interview, I asked Sheridan to share his most powerful idea, and the answer goes back to a word he used several times throughout the interview: Trust. “In a time of change, we need, as businesses, constants that won’t change,” Sheridan explained. “One thing I can assure you is that in 10 years, you’re going to be in a battle for trust. It’s the one thing that binds all of us. It’s the great currency that is not going to go away. So, become that voice of trust. If you do, your organization is going to be built to last.”

And that, according to Sheridan, is how you create “endless customers.”

Image Credits: Shep Hyken

This article originally appeared on Forbes.com







How Compensation Reveals Culture

Five Questions with Kate Dixon


GUEST POST from Robyn Bolton

It’s time for your company’s All-Hands meeting. Your CEO stands on stage and announces ambitious innovation goals, talking passionately about the importance of long-term thinking and breakthrough results. Everyone nods enthusiastically, applauds politely, and returns to their desks to focus on hitting this quarter’s numbers.  After all, that’s what their bonuses depend on.

Kate Dixon, compensation expert and founder of Dixon Consulting, has watched this contradiction play out across Fortune 500 companies, B Corps, and startups. Her insight cuts to the heart of why so many innovation initiatives fail: we’re asking people to think long-term while paying them to deliver short-term.

In our conversation, Kate revealed why most companies are inadvertently sabotaging their own innovation efforts through their compensation structures—and what the smartest organizations are doing differently.


Robyn Bolton: Kate, when I first heard you say, “compensation is the expression of a company’s culture,” it blew my mind.  What do you mean by that?

Kate Dixon: If you want to understand what an organization values, look at how they pay their people: Who gets paid more? Who gets paid less? Who gets bigger bonuses? Who moves up in the organization and who doesn’t? Who gets long-term incentives?

The answers to these questions, and a million others, express the culture of the organization.  How we reward people’s performance, either directly or indirectly, establishes and reinforces cultural norms.  Compensation is usually one of the biggest, if not the biggest, expenses that a company has, so they’re very thoughtful and deliberate about how it is used.  Which is why it tells you what the company actually does value.

RB: What’s the biggest mistake companies make when trying to incentivize innovation?

KD: Let’s start with what companies are good at when it comes to compensation and incentives.  They’re really good about base pay, because that’s the biggest part of pay for most people in an organization. Then they spend the next-largest amount of time and effort trying to figure out the annual bonus structure. After that come other benefits, like long-term incentives, assuming they don’t fall by the wayside.

As you know, innovation can take a long time to pay out, so long-term incentives are key to encouraging that kind of investment.  Stock options and restricted shares are probably the most common long-term incentives, but cash bonuses, phantom stock, and ESOP shares in employee-owned companies are also considered long-term incentives.

Large companies are pretty good at using some equity as an incentive, but they tie it to long-term revenue goals, not innovation. As you often remind us, “innovation is a means to the end, which is growth,” so tying incentives to growth isn’t bad, but I believe that we can do better. Tying incentives to the growth goals and how they’re achieved will go a long way towards driving innovation.

RB: I’ve worked in and with big companies and I’ve noticed that while they say, “innovation is everyone’s job,” the people who get long-term incentives are typically senior execs.  What gives?

KD: Long-term incentives are definitely underutilized below the executive level, and maybe below the director level. Assuming that most companies’ innovation efforts aren’t moonshots that take decades to realize, it makes a ton of sense to use long-term incentives throughout the organization and its ecosystem.  However, when this idea is proposed, people often push back because “it’s too complex” for folks lower in the organization, “they wouldn’t understand,” or “they won’t appreciate it.” That stance is both arrogant and untrue.  I’ve consistently seen that when you explain long-term incentives to people, they do get it, it does motivate them, and the company does see results.

RB: Are there any examples of organizations that are getting this right?

KD: We’re seeing a lot more innovative and interesting risk-taking behaviors in companies that are not primarily focused on profit.

Our B Corp clients are doing some crazy, cool stuff.  We have an employee-owned company that is a consulting firm, but they had an idea for a software product.  They launched it and now it’s becoming a bigger and bigger part of their business.

Family-owned or public companies that have a single giganto shareholder are also hotbeds of long-term thinking and, therefore, innovation.  They don’t have the same quarter-to-quarter pressure that drives a relentless focus on what’s happening right now, which allows people to focus on the future.

RB: What’s the most important thing leaders need to understand about compensation and innovation?

KD: If you’re serious about innovation, you should be incentivizing people all over the organization.  If you want innovation to be a more regular piece of the culture so you get better results, you’ve got to look at long-term incentives.  Yes, you should reward people for revenue and short-term goals.  But you also need to consider what else is a precursor to innovation, what else makes the conditions for innovating better for people, and reward that, too.


Kate’s insight reveals the fundamental contradiction at the heart of most companies’ innovation struggles: you can’t build long-term value with short-term thinking, especially when your compensation system rewards only the latter.

What does your company’s approach to compensation say about its culture and values?

Image credit: Pexels


Three Steps from Stuck to Success

Managing Uncertainty


GUEST POST from Robyn Bolton

When a project is stuck and your team is trying to manage uncertainty, what do you hear most often:

  1. “We’re so afraid of making the wrong decision that we don’t make any decisions.”
  2. “We don’t have time to explore a bunch of stuff. We need to make decisions and go.”
  3. “The problem is so multi-faceted, and everything affects everything else that we don’t know where to start.”

I’ve heard all three this week, each spoken by team leads who cared deeply about their projects and teams.

Differentiating between risk and uncertainty, and accepting that uncertainty will never go away but only change focus, helped relieve their overwhelm and self-doubt.

But without a way to resolve the fear, time-pressure, and complexity, the project would stay stuck with little chance of progressing to success.

Turn Uncertainty Into an Asset

It’s a truism in the field of innovation that you must fall in love with the problem, not the solution. Falling in love with the problem ensures that you remain focused on creating value and agnostic about the solution.

While this sounds great and logically makes sense, most struggle to do it. As a result, it takes incredible strength and leadership to wrestle with the problem long enough to find a solution.

Uncertainty requires the same strength and leadership because the only way out of it is through it. And, research shows, the process of getting through it turns it into an asset.

Three Steps to Turn Uncertainty Into an Asset

Research in the music and pharmaceutical industries reveals that teams that embraced uncertainty engaged in three specific practices:

  1. Embrace It: Start by acknowledging the uncertainty and that things will change, go wrong, and maybe even fail. Then stay open to surprise and unpredictability, delving into the unknown “by being playful, explorative, and purposefully engaging in ventures with indeterminate outcome.”
  2. Fix It: Especially when dealing with Unknowable Uncertainty, which occurs when more info supports several different meanings rather than pointing to one conclusion, teams that succeed make provisional decisions to “fix” an uncertain dimension so they can move forward while also documenting the rationale for the fix, setting a date to revisit it, and criteria for changing it.
  3. Ignore It: It’s impossible to embrace every uncertainty at once and unwise to fix too many uncertainties at the same time. As a result, some uncertainties you just need to ignore. Successful teams adopt “strategic ignorance” “not primarily for purposes of avoiding responsibility [but to] allow postponing decisions until better ideas emerge during the collaborative process.”

This practice is iterative, often leading to new knowledge, re-examined fixes, and fresh uncertainties. It sounds overwhelming but the teams that are explicit and intentional about what they’re embracing, fixing, and ignoring are not only more likely to be successful, but they also tend to move faster.
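One way to be explicit and intentional about what you’re embracing, fixing, and ignoring is simply to write it down in a shared register. Here is a minimal sketch in Python of what that might look like; the field names, example entries, and dates are hypothetical illustrations rather than a tool drawn from the research.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

STANCES = {"embrace", "fix", "ignore"}  # the three practices described above

@dataclass
class Uncertainty:
    description: str                    # e.g. "Who will finance the initial trials?"
    dimension: str                      # What / Who / How / Where
    stance: str                         # embrace, fix, or ignore
    rationale: str = ""                 # documents the reasoning behind a "fix"
    revisit_on: Optional[date] = None   # when to re-examine the stance

    def __post_init__(self) -> None:
        if self.stance not in STANCES:
            raise ValueError(f"stance must be one of {sorted(STANCES)}")

# Hypothetical register: embrace one uncertainty, fix one, ignore one for now.
register = [
    Uncertainty("What disease should the molecule target?", "What", "embrace"),
    Uncertainty("How will the molecule be produced?", "How", "fix",
                rationale="Provisionally outsource synthesis", revisit_on=date(2026, 1, 15)),
    Uncertainty("Where will the commercial entity live?", "Where", "ignore",
                revisit_on=date(2026, 6, 1)),
]

due = [u for u in register if u.revisit_on and u.revisit_on <= date.today()]
print(f"{len(due)} uncertainties are due for re-examination")
```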

Put It Into Practice

Let’s return to NatureComp, a pharmaceutical company developing natural treatments for heart disease.

Throughout the drug development process, they oscillated between addressing What, Who, How, and Where Uncertainties. They did that by changing whether they embraced, fixed, or ignored each type of uncertainty at a given point:

As you can see, they embraced only one type of uncertainty at a time to ensure focus and rapid progress. To avoid the fear of making mistakes, they fixed uncertainties throughout the process and returned to them as more information became available, either changing or reaffirming the fix. Ignoring uncertainties helped relieve feelings of being overwhelmed because the team had a plan and timeframe for when they would shift from ignoring to embracing or fixing.

Uncertainty is Dynamic – You Need to Be Dynamic, Too

You’ll never eliminate uncertainty. It’s too dynamic to ever fully resolve. But by dynamically embracing, fixing, and ignoring it in all its dimensions, you can accelerate your path to success.

Image credit: Pexels


Don’t Fall for the Design Squiggle Lie


GUEST POST from Robyn Bolton

Last night, I lied to a room full of MBA students. I showed them the Design Squiggle, and explained that innovation starts with (what feels like) chaos and ends with certainty.

The chaos part? Absolutely true.

The certainty part? A complete lie.

Nothing is Ever Certain (including death and taxes)

Last week I wrote about the difference between risk and uncertainty.  Uncertainty occurs when we cannot predict what will happen when acting or not acting.  It can also be broken down into Unknown uncertainty (resolved with more data) and Unknowable uncertainty (which persists despite more data).

But no matter how we slice, dice, and define uncertainty, it never goes away.

It may be higher or lower at different times.

More importantly, it changes focus.

Four Dimensions of Uncertainty

Something new that creates value (i.e. an innovation) is multi-faceted and dynamic. Treating uncertainty as a single “thing” therefore clouds our understanding and ability to find and address root causes.

That’s why we need to look at different dimensions of uncertainty.

Thankfully, the ivory tower gives us a starting point.

WHAT: Content uncertainty relates to the outcome or goal of the innovation process. To minimize it, we must address what we want to make, what we want the results to be, and what our goals are for the endeavor.

WHO: Participation uncertainty relates to the people, partners, and relationships active at various points in the process. It requires constant re-assessment of expertise and capabilities required and the people who need to be involved.

HOW: Procedure uncertainty focuses on the process, methods, and tools required to make progress. Again, it requires constant re-assessment of how we progress towards our goals.

WHERE: Time-space uncertainty focuses on the fact that the work may need to occur in different locations and on different timelines, requiring us to figure out when to start and where to work.

It’s tempting to think each of these is resolved in an orderly fashion, by clear decisions made at the start of a project, but when has a decision made on Day 1 ever held to launch day?

Uncertainty in Pharmaceutical Development

Let’s take the case of NatureComp, a mid-sized pharmaceutical company, and the uncertainties they navigated while working to replicate, develop, and commercialize a natural substance to target and treat heart disease.

  1. What molecule should the biochemists research?
  2. How should the molecule be produced?
  3. Who has the expertise and capability to synthetically produce the selected molecule, given that NatureComp doesn’t have the required experience internally?
  4. Where should production happen so that it meets the synthesis criteria and can be done cost-effectively at low volume?
  5. What specific disease should the molecule target so that initial clinical trials can be developed and run?
  6. Who will finance the initial trials and, hopefully, become a commercialization partner?
  7. Where would the final commercial entity exist (e.g. stay in NatureComp, move to a partner, or spin out as a stand-alone startup), and where would the molecule be produced?

 And those are just the highlights.

It’s all a bit squiggly

The knotty, scribbly mess at the start of the Design Squiggle is true. The line at the end is a lie because uncertainty never goes away. Instead, we learn and adapt until it feels manageable.

Next week, you’ll learn how.

Image credit: The Process of Design Squiggle by Damien Newman, thedesignsquiggle.com


Mismanaging Uncertainty & Risk is Killing Our Businesses


GUEST POST from Robyn Bolton

In September 2011, the English language officially died.  That was the month that the Oxford English Dictionary, long regarded as the accepted authority on the English language, published an update in which “literally” also meant figuratively. By 2016, every other major dictionary had followed suit.

The justification was simple: “literally” has been used to mean “figuratively” since 1769. Citing examples from Louisa May Alcott’s Little Women, Charles Dickens’ David Copperfield, Charlotte Bronte’s Jane Eyre, and F. Scott Fitzgerald’s The Great Gatsby, they claimed they were simply reflecting the evolution of a living language.

What utter twaddle.

Without a common understanding of a word’s meaning, we create our own definitions which lead to secret expectations, and eventually chaos.

And not just interpersonally. It can affect entire economies.

Maybe the state of the US economy is just a misunderstanding

Uncertainty.

We’re hearing and saying that word a lot lately. Whether it’s in reference to tariffs, interest rates, immigration, or customer spending, it’s hard to go a single day without “uncertainty” popping up somewhere in your life.

But are we really talking about “uncertainty?”

Uncertainty and Risk are not the same.

The notion of risk and uncertainty was first formally introduced into economics in 1921, when Frank Knight, one of the founders of the Chicago school of economics, published his dissertation Risk, Uncertainty and Profit.  In the century since, economists and academics have continued to enhance, refine, and debate his definitions and their implications.

Out here in the real world, most businesspeople use them as synonyms meaning “bad things to be avoided at all costs.”

But they’re not synonyms. They have distinct meanings, different paths to resolution, and dramatically different outcomes.

Risk can be measured and/or calculated.

Uncertainty cannot be measured or calculated

The impact of tariffs, interest rates, changes in visa availability, and customer spending can all be modeled and quantified.

So it’s NOT uncertainty that’s “paralyzing” employers.  It’s risk!

Not so fast my friend.

Not all Uncertainties are the same

According to Knight, Uncertainty drives profit because it connects “with the exercise of judgment or the formation of those opinions as to the future course of events, which…actually guide most of our conduct.”

So while we can model, calculate, and measure tariffs, interest rates, and other market dynamics, the probability of each outcome is unknown.  Thus, our response requires judgment.

Sometimes.

Because not all uncertainties are the same.

The Unknown (also known as “uncertainty based on ignorance”) exists when there is a “lack of information which would be necessary to make decisions with certain outcomes.”

The Unknowable (“uncertainty based on ambiguity”) exists when “an ongoing stream [of information]  supports several different meanings at the same time.”

Put simply, if getting more data makes the answer obvious, we’re facing the Unknown and waiting, learning, or modeling different outcomes can move us closer to resolution. If more data isn’t helpful because it will continue to point to different, equally plausible, solutions, you’re facing the Unknowable.

So what (and why did you drag us through your literally/figuratively rant)?

If you want to get unstuck – whether it’s a project, a proposal, a team, or an entire business, you first need to be clear about what you’re facing.

If it’s a Risk, model it, measure it, make a decision, move forward.

If it’s an uncertainty, what kind is it?

If it’s Unknown, decide when to decide, ask questions, gather data, then, when the time comes, decide and move forward.

If it’s Unknowable, decide how to decide then put your big kid pants on, have the honest and tough conversations, negotiate, make a decision, and move on.

I mean that literally.
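For readers who like the triage spelled out, here is a minimal sketch in Python of the logic above. The two yes/no questions and the function name are illustrative simplifications of the argument, not a formal method.

```python
# Illustrative sketch of the triage: first ask whether the situation can be
# modeled and measured (Risk), then whether more data would settle it (Unknown),
# otherwise treat it as Unknowable.

def triage(can_be_modeled: bool, more_data_settles_it: bool) -> str:
    if can_be_modeled:
        return "Risk: model it, measure it, make a decision, move forward."
    if more_data_settles_it:
        return ("Unknown: decide when to decide, ask questions, gather data, "
                "then decide and move forward.")
    return ("Unknowable: decide how to decide, have the tough conversations, "
            "negotiate, make a decision, and move on.")

# Example: tariff impacts that can be modeled and quantified are Risk, not uncertainty.
print(triage(can_be_modeled=True, more_data_settles_it=False))
```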

Image credit: Pixabay


Back to Basics for Leaders and Managers


GUEST POST from Robyn Bolton

Imagine that you are the CEO working with your CHRO on a succession plan.  Both the CFO and COO are natural candidates, and both are, on paper, equally qualified and effective.

The CFO distinguishes herself by consistently working with colleagues to find creative solutions to business issues, even if it isn’t the optimal solution financially, and inspiring them with her vision of the future. She attracts top talent and builds strong relationships with investors who trust her strategic judgment. However, she sometimes struggles with day-to-day details and can be inconsistent in her communication with direct reports.

The COO inspires deep loyalty from his team through consistent execution and reliability. People turn down better offers to stay because they trust his systematic approach, flawless delivery, and deep commitment to developing people. However, his vision rarely extends beyond “do things better,” rigidly adhering to established processes and shutting down difficult conversations with peers when change is needed.

Who do you choose?

The COO feels like the safer bet, especially in uncertain times, given his track record of proven execution, loyal teams, and predictable results. The CFO, meanwhile, feels riskier because she’s brilliant but inconsistent, visionary but scattered.

It’s not an easy question to answer.

Most people default to “It depends.”

It doesn’t depend.

It doesn’t “depend,” because being CEO is a leadership role and only the CFO demonstrates leadership behaviors. The COO, on the other hand, is a fantastic manager, exactly the kind of person you want and need in the COO role. But he’s not the leader a company needs, no matter how stable or uncertain the environment.

Yet we all struggle with this choice because we’ve made “leadership” and “management” synonyms. Companies no longer have “senior management teams,” they have “senior/executive leadership teams.”  People moving from independent contributor roles to oversee teams are trained in “people leadership,” not “team management” (even though the curriculum is still largely the same).

But leadership and management are two fundamentally different things.

Leader OR Manager?

There are lots of definitions of both leaders and managers, so let’s go back to the “original” distinction as defined by Warren Bennis in his 1989 classic On Becoming a Leader:

Leaders:
  • Do the right things
  • Challenge the status quo
  • Innovate
  • Develop
  • Focus on people
  • Rely on trust
  • Have a long-range perspective
  • Ask what and why
  • Have an eye on the horizon

Managers:
  • Do things right
  • Accept the status quo
  • Administer
  • Maintain
  • Focus on systems and structures
  • Rely on control
  • Have a short-range view
  • Ask how and when
  • Have an eye on the bottom line

In a nutshell: leaders inspire people to create change and pursue a vision while managers control systems to maintain operations and deliver results.

Leaders AND Managers!

Although the roles of leaders and managers are different, it doesn’t mean that the person who fills those roles is capable of only one or the other. I’ve worked with dozens of people who are phenomenal managers AND leaders and they are as inspiring as they are effective.

But not everyone can play both roles and it can be painful, even toxic, when we ask managers to take on leadership roles and vice versa. This is the problem with labeling everything outside of individual contributor roles as “leadership.”

When we designate something as a “people leadership” role and someone does an outstanding job of managing his team, we believe he’s a leader and promote him to a true leadership role (which rarely ends well).  Conversely, when we see someone displaying leadership qualities and promote her into “people leadership,” we may be shocked and disappointed when she struggles to manage as effortlessly as she inspires.

The Bottom Line

Leadership and Management aren’t the same thing, but they are both essential to an organization’s success. The key is putting the right people in the right roles and celebrating their unique capabilities and contributions.

Image credit: Unsplash


McKinsey is Wrong That 80% Companies Fail to Generate AI ROI


GUEST POST from Robyn Bolton

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But, for people working on pilots,  it might as well be the Pit of Despair because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and warrant allocating resources to) in late 2022/early 2023.  That means that we’re expecting ROI within 2 years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes.  Cloud computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be considered in the context of their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach of deploying “vertical” function-specific tools doesn’t fare much better.  In fact, less than 10% make it past the pilot stage, despite having higher potential for economic impact.

Better results require reinvention, not optimization:  McKinsey found that call centers that gave agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails saw only a 5-10% reduction in call time.  Centers using AI tools to automate tasks without agent initiation reduced call time by 20-40%.

Centers reinventing processes around AI agents? 60-90% reduction in call time, with 80% automatically resolved.

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

Image credit: Microsoft CoPilot


Is All Publicity Good Publicity?

Some Insights from Cracker Barrel


GUEST POST from Pete Foley

The Cracker Barrel rebrand has certainly created a lot of media and social media attention.  Everything happened so fast that I have had to rewrite this introduction twice in as many days. Originally written when the new logo was in place, it has subsequently been withdrawn and replaced with the original one.

It’s probably been an expensive, somewhat embarrassing and sleepless week for the Cracker Barrel management team. But also one that generated a great deal of ‘free’ publicity for them. You could argue that despite the cost of a major rebranding and de-branding, this episode was priceless from a marketing penetration perspective. There is no way they could have spent enough to generate the level of media and social media attention they have achieved, if not necessarily enjoyed.

But of course, it raises the perennial question ‘is all publicity good publicity?’  With brands, I’d argue not always.  For certain, both good and bad publicity add to ‘brand fluency’ and mental availability. But whether that is positively or negatively valenced, or triggers implicit or explicit approach or avoid responses, is less straightforward. A case in point is of course Budweiser, who generated a lot of free media, but are still trying to drag themselves out of the Bud Light controversy.

Listening to the Customer: But when the dust settles, I suspect that Cracker Barrel will come out of this quite well. They enjoyed massive media and social media exposure, elevating the ‘mindshare’ of their brand. And to their credit, they’ve also, albeit a little reluctantly, listened to their customers. The quick change back to their legacy branding must have been painful, but from a customer perspective, it screams ‘I hear you, and I value you’.

The Political Minefield: But there is some lingering complexity. Somehow the logo change became associated with politics. That is not exactly unusual these days, and when it happens, it inevitably triggers passion, polarization and outrage. I find it a quite depressing commentary on the current state of society that a restaurant logo can trigger ‘outrage’. But like it or not, as change agents, these emotions, polarization and dubious political framing are a reality we all have to deal with. In this case, I personally suspect that any politically driven market effects will be short-lived. To my eye, any political position was unintentional, generated by social media rather than the company, and the connection between logo design and political affiliation is at best tenuous and lacks the depth of meaning typically required for persistent outrage. The mobs should move on.

The Man on the Moon: But it does illustrate a broader problem for innovation derived from our current polarized society. If a logo simplification can somehow take on political overtones, pretty much any change or innovation can. Change nearly always comes with supporters and detractors, reflecting the somewhat contradictory nature of human behavior and cognition – we are change agents who also operate largely from habits. Our response to innovation is therefore inherently polarized, both as individuals and as a society, with elements of both behavioral inertia and change affinity. But with society deeply polarized and divided, it is perhaps inevitable that we will see connections between two different polarizations, whether they are logical or causal or not. We humans are pattern creators, evolved to see connections where they may or may not exist. This ability to see patterns using partial data protected us, and helped us see predators, food or even potential mates using limited information. Spotting a predator from a few glimpses through the trees obviously has huge advantages over waiting until it ambushes us. So we see animals in clouds, patterns in the stars, faces on the moon, and on some occasions, political intent where none probably exists.

My original intent with this article was to look at the design change for the logo from a fundamental visual science perspective. From that perspective, I thought it was quite flawed. But as the story quickly evolved, I couldn’t ignore the societal, social media and political elements. Context really does matter. But if we step back from that, there are still some really interesting technical design insights we can glean.

1. Simplicity is deceptively complex. The current trend towards reducing complexity and even color in a brand’s visual language superficially makes sense. After all, the reduced amount of information and complexity should be easier for our brains to visually process, and low cognitive processing costs come with all sorts of benefits. But unfortunately it’s not quite that simple. With familiar objects, our brain doesn’t construct images from scratch, but instead takes the less intuitive, but more cognitively efficient, route of unconsciously matching what we see to our existing memory. This allows us to recognize familiar objects with a minimum of cognitive effort, and without needing to process all of the visual details they contain. Our memory, as opposed to our vision, fills in much of the detail. But this process means that dramatic simplification of a well-established visual language or brand, if not done very carefully, can inhibit that matching process. So counterintuitively, if we remove the wrong visual cues, it can make a simplified visual language or brand more difficult to process than its original, and thus harder to find, at least for established customers. Put another way, our visual system automatically and very quickly (faster than we can consciously think) reduces images down to their visual essence. If we try to do that ourselves, we need to very clearly understand what the key visual elements are, and make sure we keep the right ones. Cracker Barrel has lost some basic shapes and removed several visual elements completely, meaning it has likely not done a great job in that respect.

2. Managing the Distinctive-Simple Trade-Off. Our brains have evolved to be very efficient, so as noted above, we only do the ‘heavy lifting’ of encoding complex designs into memory once. We then use a shortcut of matching what we see to what we already know, and so can recognize relatively complex but familiar objects with relatively little effort. This matching process means a familiar visual scene like the old Cracker Barrel logo is quickly processed as a ‘whole’, as opposed to a complex, detailed image. But unfortunately, this means the devil is in the details, and a dramatic simplification like Cracker Barrel’s can unintentionally remove many of the cues or signals that allowed us to unconsciously recognize it with minimal cognitive effort.

And the process of minimizing visual complexity can also remove much of what made the brand both familiar and distinctive. And it’s the relatively low-resolution elements of the design that make it distinctive. To get a feel for this, try squinting at the old and new brand. With the old design, squinting loses the details of the barrel and the old man, but the rough shapes of them, of the logo, and their relative positions remain. That gives a rough approximation of what our visual system feeds into our brain when looking for a match with our memory. Do the same with the new logo, and it has little or no consistency or distinctivity. This means the new logo is unintentionally making it harder for customers to either find it (in memory or elsewhere) or recognize it.
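If you want a slightly more controlled version of the squint test than your own eyes, you can approximate it with a few lines of image processing. This is a minimal sketch rather than a validated perceptual model: the file names, blur radius and thumbnail size are all assumptions, and a grayscale conversion plus Gaussian blur and downscale is only a crude stand-in for how the visual system discards fine detail.

```python
# A rough, code-based "squint test": strip color, blur away fine detail, and
# shrink the image so only coarse shapes and their relative positions survive.
# File names and parameter values below are placeholders, not real assets.
from PIL import Image, ImageFilter

def squint(path, blur_radius=8, thumb_size=(64, 64)):
    img = Image.open(path).convert("L")                      # drop color detail
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))  # lose fine detail
    return img.resize(thumb_size)                            # keep only the coarse gist

squint("old_logo.png").save("old_logo_squint.png")   # hypothetical file names
squint("new_logo.png").save("new_logo_squint.png")
```

Compare the two outputs: if the old mark still reads as itself at this resolution while the new one collapses into a generic text box, then the distinctive cues lived in exactly the coarse shapes the redesign removed.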

As a side effect, oversimplification also risks looking ‘generic’, and falling into the noise created by a growing sea of increasingly simplified logos. Now, to be fair, historical context matters. If information is not encoded into memory, the matching process fails, and a visual memory needs to be built from scratch. So if Cracker Barrel were a new brand, its new visual language might lack distinctivity, but it would certainly carry ease-of-processing benefits for new customers, whereas the legacy design would likely be too complex and would quite likely be broadly deselected. But because the old design already owns ‘mindspace’ with existing customers, the dramatic change, and the removal of basic visual cues, asks repeat customers to ‘think’ at a more conscious level, and so potentially challenges long-established habits. That is a major risk for any established brand.

3. Distinctivity Matters. All visual branding represents a trade-off. We need signal-to-noise characteristics that stand out from the crowd, or we are unlikely to be noticed. But we also need to look like we belong to a category, or we risk being deselected. It’s a balancing act. Look too much like category archetypes, and lack distinctivity, and we fade into the background noise and appear generic. But look too different, and we stand out, but in a potentially bad way, by asking potential customers to put in too much work to understand us. This will often lead a customer to quickly de-select us. It’s a trade-off where controlled complexity can curate distinctive cues to stand out, while also incorporating enough category prototype cues to make it feel right. Combine this with sufficient simplicity to ease processing fluency, and we likely have a winning design, especially for new customers. But it’s a delicate balancing act between competing variables.

4. People don’t like change. As mentioned earlier, we have a complex relationship with change. We like some, but not too much. Change asks our brains to work harder, so it needs to provide value, and I’m skeptical that, in this case, it added commensurate value for the customer. Change also breaks habits, so any major rebrand comes with risk for a well-established brand. But it’s a balancing act, and we shouldn’t remain locked into aging designs forever. As the context we operate in changes, we need to ‘move with the times’, and remain consistent in our relationship with our context, at least as much as we remain consistent with our history.

And of course, there is also a trade-off between a visual language that resonates with existing customers and one designed to attract new ones, as ultimately, virtually every brand needs both trial and repeat. But for established brands, evolutionary change is usually the way to achieve reach and trial without alienating existing customers. Coke are the masters of this. Look at how their brand has evolved over time, staying contemporary, but without creating the kind of ‘cognitive jolts’ the Cracker Barrel rebrand has created. If you look at an old Coke advertisement, you intuitively know both that it’s old and that it is Coke.

Brands and Politics: I generally advise brands to stay out of politics. With a few exceptions, entering this minefield risks alienating 50% of our customers. And any subsequent ‘course corrections’ risk alienating those that are left. For the vast majority of companies, the cost-benefit equation simply doesn’t work!

But in this case, we are seeing consumers interpreting change through a political lens, even when that was not the intent. And just because the politics isn’t really there doesn’t mean it doesn’t matter, as Cracker Barrel has discovered. So I’m changing my advice from ‘don’t be political’ to ‘try to anticipate whether your initiative could be misunderstood as political’. It’s a subtle, but important, difference.

And as a build, marketers often try to incorporate secondary messages into their communication. But in today’s charged political climate, I think we need to be careful about being too ‘clever’ in this respect. Consumers’ sensitivity to socio-political cues is very high at present, as the Cracker Barrel example shows. If they can see political content where none was intended, they are quite likely to spot any secondary or ‘implicit’ messaging. So, for example, an advertisement that features a lot of flags and patriotic displays, or one that predominantly features members of the LGBTQ community, runs the risk of being perceived as ‘making a political statement’, whether that is intended or not. There is absolutely nothing wrong with either patriotism or the LGBTQ community, and to be fair, as society becomes increasingly polarized, it’s increasingly hard to create content that doesn’t somehow offend someone, at least without becoming so ‘vanilla’ that the content is largely pointless and doesn’t cut through the noise. But from a business perspective, in today’s socially and politically fractured world, any perceived political bias or message in either direction comes with business risks. Proceed with caution.

And keep in mind we’ve evolved to respond more intensely to negatives than positives; caution kept our ancestors alive. If we half-see a coiled object in the grass that could be a garden hose or a snake, our instinct is to back off. If we mistake a garden hose for a snake, the cost is small. But if we mistake a venomous snake for a garden hose, the cost could be high.
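To make that asymmetry concrete, here is a toy expected-cost comparison. The probability and cost figures are purely illustrative assumptions, not data; the point is that a large asymmetry in costs, not a high probability of danger, is what makes the cautious response the rational default.

```python
# Illustrative only: hypothetical numbers showing why "back off" wins even when
# a snake is unlikely. All values below are assumptions for the sake of the example.
p_snake = 0.05            # assumed chance the coiled object is actually a snake
cost_false_alarm = 1      # assumed (small) cost of backing away from a harmless hose
cost_missed_snake = 1000  # assumed (large) cost of grabbing a venomous snake

expected_cost_back_off = cost_false_alarm            # you pay the small cost every time
expected_cost_ignore = p_snake * cost_missed_snake   # rare, but catastrophic when it hits

print(expected_cost_back_off, expected_cost_ignore)  # 1 vs. 50.0: caution is the cheaper bet
```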

As I implied earlier, when consumers look at our content through specific and increasingly intense partisan lenses, it’s really difficult for us not to be perceived as being either ‘for’ or ‘against’ them. And keep in mind, the cost of undoing even an unintended political statement is inevitably higher than the cost of making it. So it’s at the very least worth trying to avoid being dragged into a political space whenever possible, especially as a negative. Be careful out there, and embrace some devil’s advocate thinking. Even if we are not trying to make a point, implicitly or explicitly, we need to step back and look at how those who see the world from a deeply polarized position could interpret us. The ‘no such thing as bad publicity’ concept sits on very thin ice at this moment in time, when social media often seeks to punish more than communicate.

Image credits: Wikimedia Commons
