Author Archives: Pete Foley

About Pete Foley

Pete Foley is a consultant who applies Behavioral Science to catalyze innovation for Retail, Hospitality, Product Design, Branding and Marketing Design. He applies insights derived from consumer and shopper psychology, behavioral economics, perceptual science, and behavioral design to create practical solutions to difficult business challenges. He brings 25 years' experience as a serial innovator at P&G. He has over 100 published or granted patents, has published papers in behavioral economics, evolutionary psychology and visual science, is an exhibited artist and photographer, and an accomplished musician.

Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?

Have We Made AI Interfaces Too Human?

GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course a part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also of course a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us who have interacted with AI at any length, me included, have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: This humanizing of the user interface brings multiple benefits. The intuitive, conversational interface makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of that output. Furthermore, by creating an illusion of human-like intelligence, it implicitly implies emotions, compassion and critical thinking behind the output, even if they’re not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, with the empathy, emotional intelligence and compassion that comes with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or add context in the way a human can.

The Uncanny Valley: That brings me to the Uncanny Valley. This describes when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously wrong, but often still ‘felt’ wrong. Early drum machines, similarly, were so perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and so had to be modified to include randomness that was below the threshold of conscious awareness, but made them ‘feel’ real.
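The drum-machine trick is simple enough to sketch in code. Below is a minimal, hypothetical illustration in Python; the timing values and jitter range are my own assumptions for illustration, not the parameters of any actual drum machine. Each perfectly quantized beat is nudged by a few milliseconds of random ‘human’ error.

```python
import random

def humanize(beat_times_ms, max_jitter_ms=8.0, seed=None):
    """Offset each perfectly quantized beat by a small random amount.

    The jitter is kept tiny (a few milliseconds) so it stays below
    conscious awareness while restoring a human 'feel'.
    """
    rng = random.Random(seed)
    return [t + rng.uniform(-max_jitter_ms, max_jitter_ms) for t in beat_times_ms]

# Four perfectly spaced quarter notes at 120 bpm (500 ms apart)
quantized = [0.0, 500.0, 1000.0, 1500.0]
humanized = humanize(quantized, seed=42)
```

The point is that the offsets stay below the threshold of conscious awareness, yet the pattern no longer feels mechanically perfect.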

This difference between conscious and unconscious evaluation creates cognitive dissonance that can result in content feeling odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’. For example, over-photoshopping in a print ad, or poor CGI. But be careful what you wish for. AI appears to have marched through the ‘uncanny valley’ to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This raises a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that used to play a key role in translating search results into output can now be skipped. Synthesizing and processing output from an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough to even realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose he or she can cut and paste into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that also brings up another side effect of low engagement with output – confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and it agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and is still capable of making really stupid mistakes. Okay, so are humans, but AI is still far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is if we let it become an oracle. We not only treat its output as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparent human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger, where instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has reduced our basic literary capability, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water for data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but maybe simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Is All Publicity Good Publicity?

Some Insights from Cracker Barrel

Is All Publicity Good Publicity?

GUEST POST from Pete Foley

The Cracker Barrel rebrand has certainly created a lot of media and social media attention.  Everything happened so fast that I have had to rewrite this introduction twice in as many days. I originally wrote it when the new logo was in place; that logo has subsequently been withdrawn and replaced with the original one.

It’s probably been an expensive, somewhat embarrassing and sleepless week for the Cracker Barrel management team. But also one that generated a great deal of ‘free’ publicity for them. You could argue that despite the cost of a major rebranding and de-branding, this episode was priceless from a marketing penetration perspective. There is no way they could have spent enough to generate the level of media and social media coverage they have achieved, if not necessarily enjoyed.

But of course, it raises the perennial question ‘is all publicity good publicity?’  With brands, I’d argue not always.  For certain, both good and bad publicity add to ‘brand fluency’ and mental availability. But whether that is positively or negatively valenced, or triggers implicit or explicit approach or avoid responses, is less straightforward. A case in point is of course Budweiser, who generated a lot of free media, but are still trying to drag themselves out of the Bud Light controversy.

Listening to the Customer: But when the dust settles, I suspect that Cracker Barrel will come out of this quite well. They enjoyed massive media and social media exposure, elevating the ‘mindshare’ of their brand. And to their credit, they’ve also, albeit a little reluctantly, listened to their customers. The quick change back to their legacy branding must have been painful, but from a customer perspective, it screams ‘I hear you, and I value you’.

The Political Minefield: But there is some lingering complexity. Somehow the logo change became associated with politics. That is not exactly unusual these days, and when it happens, it inevitably triggers passion, polarization and outrage. I find it a quite depressing commentary on the current state of society that a restaurant logo can trigger ‘outrage’. But like it or not, as change agents, these emotions, polarization and dubious political framing are a reality we all have to deal with. In this case, I personally suspect that any politically driven market effects will be short-lived. To my eye, any political position was unintentional, generated by social media rather than the company, and the connection between logo design and political affiliation is at best tenuous, and lacks the depth of meaning typically required for persistent outrage. The mobs should move on.

The Man on the Moon: But it does illustrate a broader problem for innovation derived from our current polarized society. If a logo simplification can somehow take on political overtones, pretty much any change or innovation can. Change nearly always comes with supporters and detractors, reflecting the somewhat contradictory nature of human behavior and cognition – we are change agents who also operate largely from habits. Our response to innovation is therefore inherently polarized, both as individuals and as a society, with elements of both behavioral inertia and change affinity. But with society deeply polarized and divided, it is perhaps inevitable that we will see connections between two different polarizations, whether they are logical or causal or not. We humans are pattern creators, evolved to see connections where they may or may not exist. This ability to see patterns in partial data protected us, helping us spot predators, food or even potential mates from limited information. Spotting a predator from a few glimpses through the trees obviously has huge advantages over waiting until it ambushes us. So we see animals in clouds, patterns in the stars, faces on the moon, and on some occasions, political intent where none probably exists.

My original intent with this article was to look at the design change for the logo from a fundamental visual science perspective. From that perspective, I thought it was quite flawed. But as the story quickly evolved, I couldn’t ignore the societal, social media and political elements. Context really does matter. But if we step back from that, there are still some really interesting technical design insights we can glean.

1.  Simplicity is deceptively complex. The current trend towards reducing complexity and even color in a brand’s visual language superficially makes sense.  After all, the reduced amount of information and complexity should be easier for our brains to visually process.  And low cognitive processing costs come with all sorts of benefits. But unfortunately it’s not quite that simple.  With familiar objects, our brain doesn’t construct images from scratch, but instead takes the less intuitive, but more cognitively efficient route of unconsciously matching what we see to our existing memory.  This allows us to recognize familiar objects with a minimum of cognitive effort, and without needing to process all of the visual details they contain.  Our memory, as opposed to our vision, fills in much of the detail.  But this process means that dramatic simplification of a well-established visual language or brand, if not done very carefully, can inhibit that matching process.  So counterintuitively, if we remove the wrong visual cues, it can make a simplified visual language or brand more difficult to process than its original, and thus harder to find, at least for established customers.  Put another way, the way our visual system operates, it automatically and very quickly (faster than we can consciously think) reduces images down to their visual essence. If we try to do that ourselves, we need to very clearly understand what the key visual elements are, and make sure we keep the right ones. Cracker Barrel has lost some basic shapes, and removed several visual elements completely, meaning it has likely not done a great job in that respect.

2.  Managing the Distinctive-Simple Trade-Off.  Our brains have evolved to be very efficient, so as noted above, we only do the ‘heavy lifting’ of encoding complex designs into memory once.  We then use a shortcut of matching what we see to what we already know, and so can recognize relatively complex but familiar objects with relatively little effort. This matching process means a familiar visual scene like the old Cracker Barrel logo is quickly processed as a ‘whole’, as opposed to a complex, detailed image.  But unfortunately, this means the devil is in the details, and a dramatic simplification like Cracker Barrel’s can unintentionally remove many of the cues or signals that allowed us to unconsciously recognize it with minimal cognitive effort.

And the process of minimizing visual complexity can also remove much of what made the brand both familiar and distinctive in parallel.  And it’s the relatively low-resolution elements of the design that make it distinctive.  To get a feel for this, try squinting at the old and new brand.  With the old design, squinting loses the details of the barrel, or the old man.  But the rough shape of them, and of the logo, and their relative positions remain.  That gives a rough approximation of what our visual system feeds into our brain when looking for a match with our memory. Do the same with the new logo, and it has little or no consistency or distinctivity.  This means the new logo is unintentionally making it harder for customers to either find it (in memory or elsewhere) or recognize it.
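For readers who want to experiment, the squint test can be loosely simulated by throwing away high-resolution detail and keeping only coarse blocks. This is a toy sketch assuming a simple grayscale image array; the block size and the synthetic ‘logo’ are illustrative assumptions, not an analysis of the actual Cracker Barrel designs.

```python
import numpy as np

def squint(image, block=8):
    """Approximate the 'squint test': reduce an image to coarse blocks,
    keeping only the low-resolution shapes our visual system matches first."""
    h, w = image.shape
    h2, w2 = h - h % block, w - w % block  # trim to a multiple of the block size
    coarse = image[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return coarse.mean(axis=(1, 3))  # average each block down to one value

# A toy 'logo': a bright rectangle (the barrel) on a dark background
logo = np.zeros((64, 64))
logo[16:48, 8:40] = 1.0
essence = squint(logo)  # an 8x8 summary; rough shape and position survive
```

If the coarse version of a new logo no longer resembles the coarse version of the old one, the redesign has likely broken the low-resolution cues that customers match against memory.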

As a side effect, oversimplification also risks looking ‘generic’, and falling into the noise created by a growing sea of increasingly simplified logos. Now, to be fair, historical context matters.  If information is not encoded into memory, the matching process fails, and a visual memory needs to be built from scratch.  So if Cracker Barrel were a new brand, its new visual language might lack distinctivity, but it would certainly carry ease-of-processing benefits for new customers, whereas the legacy label would likely be too complex, and would quite likely be broadly deselected.  But because the old design already owns ‘mindspace’ with existing customers, the dramatic change and removal of basic visual cues ask repeat customers to ‘think’ at a more conscious level, and so potentially challenge long-established habits.  A major risk for any established brand.

3.  Distinctivity Matters. All visual branding represents a trade-off.  We need signal-to-noise characteristics that stand out from the crowd, or we are unlikely to be noticed. But we also need to look like we belong to a category, or we risk being deselected.  It’s a balancing act.  Look too much like category archetypes, and lack distinctivity, and we fade into the background noise, and appear generic.  But look too different, and we stand out, but in a potentially bad way, by asking potential customers to put in too much work to understand us. This will often lead a customer to quickly de-select us.  It’s a trade-off where controlled complexity can curate distinctive cues to stand out, while also incorporating enough category prototype cues to make it feel right.  Combine this with sufficient simplicity to ease processing fluency, and we likely have a winning design, especially for new customers.  But it’s a delicate balancing act between competing variables.

4.  People don’t like change. As mentioned earlier, we have a complex relationship with change. We like some, but not too much. Change asks our brains to work harder, so it needs to provide value. I’m skeptical that, in this case, it added commensurate value for the customer.  And change also breaks habits. So any major rebrand comes with risk for a well-established brand.  But it’s a balancing act, and we should not remain locked into aging designs forever.  As the context we operate in changes, we need to ‘move with the times’, and remain consistent in our relationship with our context, at least as much as we remain consistent with our history.

And of course, there is also a trade-off between a visual language that resonates with existing customers and one designed to attract new ones, as ultimately, virtually every brand needs both trial and repeat.   But for established brands, evolutionary change is usually the way to achieve reach and trial without alienating existing customers.  Coke are the masters of this.   Look at how their brand has evolved over time, staying contemporary, but without creating the kind of ‘cognitive jolts’ the Cracker Barrel rebrand has created.  If you look at an old Coke advertisement, you intuitively know both that it’s old and that it’s Coke.

Brands and Politics.    I generally advise brands to stay out of politics. With a few exceptions, entering this minefield risks alienating 50% of our customers. And any subsequent ‘course corrections’ risk alienating those that are left. For the vast majority of companies, the cost-benefit equation simply doesn’t work!

But in this case, we are seeing consumers interpret change through a political lens, even when that was not the intent. And just because it’s not there doesn’t mean it doesn’t matter, as Cracker Barrel has discovered.  So I’m changing my advice from ‘don’t be political’ to ‘try to anticipate whether your initiative could be misunderstood as political’.  It’s a subtle, but important difference.

And as a build, marketers often try to incorporate secondary messages into their communication.  But in today’s charged political climate, I think we need to be careful about being too ‘clever’ in this respect.  Consumers’ sensitivity to socio-political cues is very high at present, as the Cracker Barrel example shows.  So if they can see political content where none was intended, they are quite likely to spot any secondary or ‘implicit’ messaging.   So for example, an advertisement that features a lot of flags and patriotic displays, or one that predominately features members of the LGBTQ community, both run a risk of being perceived as ‘making a political statement’, whether it is intended to or not.  There is absolutely nothing wrong with either patriotism or the LGBTQ community, and to be fair, as society becomes increasingly polarized, it’s increasingly hard to create content that doesn’t somehow offend someone.  At least without becoming so ‘vanilla’ that the content is largely pointless, and doesn’t cut through the noise. But from a business perspective, in today’s socially and politically fractured world, any perceived political bias or message in either direction comes with business risks.  Proceed with caution.

And keep in mind we’ve evolved to respond more intensely to negatives than positives – caution kept our ancestors alive.  If we half see a coiled object in the grass that could be a garden hose or a snake, our instinct is to back off.  If we mistake a garden hose for a snake, the cost is small. But if we mistake a venomous snake for a garden hose, the cost could be high.
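That asymmetry is easy to express as a simple expected-cost calculation. The numbers below are purely illustrative assumptions, chosen only to show why ‘back off’ wins even when a snake is unlikely.

```python
def expected_cost(p_snake, cost_bitten=100.0, cost_backing_off=1.0):
    """Expected cost of each action, given the probability the object is a snake.

    Costs are in illustrative units: being bitten is catastrophic,
    needlessly backing away from a hose is trivially cheap.
    """
    ignore = p_snake * cost_bitten  # walk past: pay dearly if it's a snake
    back_off = cost_backing_off     # retreat: small fixed cost either way
    return ignore, back_off

# Even at only a 5% chance of a snake, backing off is the cheaper bet
ignore, back_off = expected_cost(0.05)
```

Because the cost of a miss is so lopsided, the rational threshold for retreating is very low; the same logic helps explain why consumers over-react to even faint negative political signals.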

As I implied earlier, when consumers look at our content through specific and increasingly intense partisan lenses, it’s really difficult for us not to be perceived as being either ‘for’ or ‘against’ them. And keep in mind, the cost of undoing even an unintended political statement is inevitably higher than the cost of making it. So it’s at the very least worth trying to avoid being dragged into a political space whenever possible, especially as a negative.  So be careful out there, and embrace some devil’s advocate thinking. Even if we are not trying to make a point, implicitly or explicitly, we need to step back and look at how those who see the world from a deeply polarized position could interpret us.  The ‘no such thing as bad publicity’ concept sits on very thin ice at this moment in time, when social media often seeks to punish more than communicate.

Image credits: Wikimedia Commons


‘Stealing’ from Artists to Make Innovations Both Novel and Familiar

AKA Self-Plagiarism

Stealing from Artists to Make Innovations Both Novel and Familiar

GUEST POST from Pete Foley


This morning I came across a wonderful piece of music by one of my guitar heroes, Robert Fripp, of King Crimson fame.  It was a duet with Andy Summers (The Police).  You don’t need to listen to it to connect to the insight it gave me, but if you are interested, you can watch it here. It’s interesting and innovative music.

I’m a fan of Fripp, in part because of his technical expertise with the guitar, but mostly because of his innovation and restless creativity.   King Crimson are not a top 40 band, but they’ve enjoyed a long and successful career going back to the late 1960s.  Their longevity derives, at least in part, from their ability to completely reinvent themselves, and challenge their audience on a regular basis.  But they do so while also retaining a loyal following and owning a unique space in music.  They have, over 50-odd years, managed to walk the tightrope between constant change and ongoing familiarity.

The Novelty-Familiarity Dichotomy:  Stepping back, that tightrope is one of the biggest challenges we all face as innovators.  Hitting the sweet spot between novelty and familiarity is key to both trial and repeat. If we don’t offer something new and interesting, then people have no reason to try us, and are better off staying with their existing habits and behaviors.  But make it too different, and we create a barrier to adoption, because we ask potential users to take a risk by straying from the proven and familiar, and to put effort into trying, using and understanding us. 

This reflects the somewhat schizophrenic, or at least dual, personality of our collective human behavior.  We are drawn to familiarity, but have also evolved to crave novelty.  Our desire to experiment and explore is key to why we are the dominant species on the planet, and have expanded our presence to just about every habitat on it.  But the lower cognitive demands of the familiar mean much of our life is still dominated by habits, comfortable repetition and familiar activities.  Whether we are an artist, a brand, an office worker, or simply a partner in a romantic relationship, we all have to navigate this dichotomy.

Self-Plagiarizing:  That brings me back to Robert Fripp.  Given his history of continuous change, and much as I enjoyed the track, I was surprised that the core riff sounded very, very similar to a King Crimson song, Thela Hun Ginjeet, released the year before.  Both are Robert Fripp co-compositions, so he was effectively ‘stealing’ his own ideas, or self-plagiarizing.

Initially that seemed odd for someone who has for decades been a formidable change agent.   But I often learn a lot about the innovation process via analogy from music and the fine arts.  So I started thinking about self-plagiarism, and whether it is a tool we could or should use more in innovation in general, as a potential way to maintain familiarity while also driving change.

Transferring our own signatures into multiple new executions ensures familiarity, and hence reassures our ‘loyal’ users.  But in parallel, putting those signatures in new contexts also provides a way to draw in new ‘fans’, or safely break monotony for our ‘regulars’.   Of course, at one level, the reassurance element is exactly what branding does.   But the concept of self-plagiarism is potentially a way to achieve this on a more subtle, implicit level.

Name that Band!  The arts community are masters of this.   It’s amazing to me how often we almost instantly recognize an artist, even if the painting or song itself is not familiar.  Maybe it’s a unique voice, a unique style or sound, or perhaps a signature motif.  Whether it’s David Bowie, Mick Jagger, Pablo Picasso, Salvador Dali or Taylor Swift, we intuitively and largely unconsciously recognize their ‘style’.   Of course, explicit continuity and consistency are also important.  The wall of color in a supermarket acts as both a signpost and a reinforcement of important popularity cues.   Even in more dispersed digital environments, more ‘explicit’ cues provide important and cognitively simple signals that tie individual innovations to over-arching brands.

But self-reference, or self-plagiarism, is an additional tool that I think is worth exploring.  It allows us to leverage (implicit) sensory cues to reinforce brand consistency, and is one potential way to reinforce continuity in the face of evolutionary or even disruptive change. And just as you may intuitively recognize a song by your favorite artist without having to ‘think’ about it, it can operate very quickly, and help an innovation to ‘feel’ right.

Bob Dylan Goes Electric: And having more implicit tools can help with some of the inherent constraints of consistent branding.  Chasing familiarity can be both a blessing and a curse; ask any classic rock band on a greatest hits tour.  Or ask any of you who saw the excellent ‘A Complete Unknown’ movie about Dylan, which culminates in the outrage he created with his core fan base by ‘going electric’.  Maintaining familiarity ‘talks’ to a loyal audience, but can also be quite constraining, especially for the most innovative amongst us. And this can be especially challenging if, as in Dylan’s case, the outside world is changing quickly and we need or want to respond.  But there are numerous examples of artists who have done this quite successfully.  For better or for worse, Dylan still sounded distinctly like Dylan after he ‘rebranded’ as electric.  David Bowie, Madonna, or the different ‘periods’ that describe Picasso’s catalog are all good examples of dramatic change and reinvention that still maintain some familiarity and consistency.

What taking this kind of approach looks like for us will of course depend upon the area in which we are innovating.  But sensory cues, shapes, or relative design elements are all cues we can self-plagiarize that add layers of familiarity, and that are often difficult for the competition to copy without evoking us, and hence increasing our ‘mind-share’.

Of course, this is not to suggest replacing brand (visual) language and brand-first design with subtle, implicit cues.   But the journey of a brand is complex, and in today’s world of rapid change, we are likely to increasingly need ways to manage ever greater changes within a ‘familiar’ context.  Thinking about different, potentially complementary ways to do this is never a bad idea.

Image credits: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Should My Brand Take a Political Stand?


GUEST POST from Pete Foley

Many of you may have noticed that we are in a period of unparalleled social and political polarization in the US. For better or for worse, the public is probably more engaged and more passionate about politics and related social issues than it’s ever been.

So how should we, and the organizations we are a part of, respond to this? When we feel passionate about something, there is always motivation to take action. And for many of us, the place where we have the most influence, resources and leverage is at work.

Does Politics Belong at Work? So should we blur the boundary between our personal beliefs and our work? Should our marketing and communication reflect our own social or political passions, and those of our colleagues? It’s a question I’ve been asked a lot over the last few years, and even more over the last few months. And not surprisingly, it’s often fueled by a working group who share passionate common values.

Job Satisfaction: Acting on these shared passions certainly has the potential to benefit job satisfaction, team building and even the perception of work-life balance. Despite this, I nearly always advise against politicizing a brand, and urge caution even around social engagement. That’s often an unpopular opinion, especially if team members care deeply about a cause. But aligning a brand with politics opens a door that is extremely difficult to close.

Bud Light: The news story below is a good example. Anheuser-Busch is currently facing negative social media attention for pulling its support for a Pride Festival.

https://www.fox5vegas.com/2025/03/26/anheuser-busch-pulls-out-pride-festival-after-30-year-partnership/?fbclid=IwY2xjawJRIflleHRuA2FlbQIxMQABHdeKDxDCkmbH0QkJNegb-TZxi1TiwDpqs35z4gcx7AwYH3nCOVH01VEscg_aem_w6v3QjCD_cWvEnFdcP2NIA

It’s not the first time Bud Light has found itself in the news for a politically related topic. I’m sure we all remember the Bud Light controversy over its association with Dylan Mulvaney. That resulted in massive backlash from the ‘right’ and the loss of its position as the #1 beer in the US. Now it’s facing backlash from the ‘left’ over Pride. Basically, it now cannot win, and that is the core issue. Once you’ve taken a position in a controversial space, even somewhat unintentionally as Bud Light did, it becomes a part of your brand, and that lens is applied to virtually everything you do. It is then extremely difficult to recapture a neutral position.

No-Win Scenario? It really doesn’t matter which side of the political fence a brand chooses. Once that door is open, the repercussions can last for years, and any course correction almost inevitably upsets one side or the other. Budweiser, Chick-Fil-A, even Pepsi have all dipped their toes into political and social arenas, and had to manage fallout that is typically disproportionate to the original content.

All of that said, a brand following a purpose can have a positive impact on internal job satisfaction, at least in the short term. And of course, it can and often does resonate positively with a subset of its customers. But unless that purpose is unambiguously and universally supported by all existing and potential customers, and frankly very little is these days, the risks almost inevitably outweigh the benefits. Even apparently successful campaigns like Nike’s featuring Colin Kaepernick, which had strong appeal for their core, younger demographic, are high-risk, high-reward, and come with long-term risks which are hard to quantify. Negative emotions tend to drive stronger and more resilient behavioral changes than positive ones. So even if an initially polarized market sees offsets between positive and negative consumer response, the positive tends to fade faster. Humans have evolved to weight negative experiences more heavily, for good survival-based reasons.

Universal Appeal and Availability: At the heart of this challenge is that growing and maintaining a brand requires reaching and appealing to as many customers as possible.   Whether we view markets through the eye of Ehrenberg-Bass models, or follow more traditional volume forecasting models, the single biggest variable that enables a brand to grow is reach. And that reach needs to operate on both a mental and physical vector. Physical availability is generally achieved via wide distribution or ubiquitous access. Quite simply, if potential customers cannot find you, then most will not buy you. But mental availability is equally important. If and when shoppers do find you, they need to both desire and understand you. This is a bit more complex, and achieved by great marketing, branding, media, packaging and messaging.

But if a brand aligns with a controversial cause, it risks losing positive mental availability, and being either consciously or implicitly rejected. The reality is that pretty much any political or social cause these days carries a real risk of upsetting half of your customers. Positive brand loyalty is often fickle at best, but once someone has decided they dislike a brand, for whatever reason, that de-selection can be quite resilient.

Treat Marketing like Thanksgiving: And it can become even harder when brands try to course correct. Reversals tend to look inauthentic and manipulative, while attempts to ‘read the room’ and go with current trends risk being distrusted by both sides! In a vast majority of cases, by far the best strategy is to treat marketing like Thanksgiving dinner, and keep out of politics and religion.

Keeping Purpose Alive: So should brands abandon any form of purpose or altruism? I’d hope not. Altruism is good for community, good for employee satisfaction, good for long-term equity and more. So what should we do?

I think there are at least three important guidelines.

  1. The first is to stay in your lane. Most people struggle with a drink, food or soap powder having a political or social opinion.
  2. The second is to find ways to contribute that are at least largely universally supported, and avoid the ‘flavor of the month’. Even in today’s polarized society, helping cancer research, disaster victims, kids, and animal shelters carries minimal controversy.
  3. The third is to ask ‘why am I doing this? Is this the best use of company money, and am I doing this for the brand, the business, or is it more in support of my own values?’ If it’s the latter, maybe find ways to achieve that without opening your brand to future risk.

Bottom line, basically anything that politicians talk a lot about, and certainly argue about, is best avoided. And be careful how you frame what you do, to avoid affiliation with groups perceived as political. Channeling money through a non-profit can be very effective, both in endorsements and in validating claims. But many non-profits have become increasingly politicized. I’m not here to make judgment on that, except that from a marketing perspective, we risk becoming aligned with that bias.

But if we are thoughtful, we can combine purpose, innovation and marketing. I think Tide’s ‘Loads of Hope’ is a great positive example. It’s about cleaning laundry, which is perfectly in lane for the brand, and it helps disaster victims, which at least for now is politically neutral, and more importantly, largely future-proofed.

Image credits: Wikimedia Commons


Don’t ‘Follow the Science’, Follow the Scientific Method


GUEST POST from Pete Foley

The scientific method is probably the most useful thing I’ve learnt in my life. It is a near universal tool that can be used in so many ways and so many places.  It unleashes a whole world of assisted critical thinking that is invaluable to innovators, but also in our personal lives.  Teaching it to individuals or teams who are not trained as scientists is one of the most powerful and enabling things we can do for them.  And teaching it to kids, as opposed to endless facts and data that they can easily access from the web is something we should do far more of.  

Recruiting Skills not Expertise: When I was involved in recruiting, I always valued PhDs and engineers. Sometimes that was for their unique, specialized knowledge. But more often than not it was for the critical thinking skills they had acquired while gaining that specialized knowledge. In today’s rapidly evolving world, specific knowledge typically has a relatively short shelf-life. But the cognitive framework embodied by the scientific method is a tool for life, and one that can be reapplied in so many ways.

Don’t Follow the Science, Follow the Process: All too often today the scientific method gets confused with ‘following the science’. The scientific process is almost infinitely useful, but blindly ‘following the science’ is often far less so, and can be counterproductive. The scientific method is a process that helps us to evaluate information, challenge assumptions, and in so doing, get closer to truth. Sometimes it confirms our existing ideas, sometimes it improves them, and sometimes it completely replaces them. But it is grounded in productive and informed skepticism, and this is at the heart of why science is constantly evolving.

‘Follow the Science’ is in many ways the opposite. It assumes someone in a position of power already has the right answer. At its best it means blindly following the consensus of today’s experts. All too often it really means ‘do as you are told’. Frequently the people saying this are not experts themselves, but are instead evoking third-party expertise to support their viewpoint. That of course is the opposite of science. It’s often well intended, but not always good advice.

Science is not a Religion: At the heart of this is a fundamental misunderstanding of science, and scientists. In today’s media and social media, all too often science and scientists are presented with a quasi-religious reverence, and challenging the current view is framed as heretical. How often do you hear the framing ‘scientists tell us…’ as a way of validating a position?

This is understandable. The sheer quantity and complexity of information we are faced with in our everyday lives is increasingly unmanageable, while big challenges like climate change are unimaginably complex. I find it almost impossible to keep up with my own interests, let alone everything that is happening. And some topics are so technical that they simply require translation by experts. When someone announces they’ve discovered the Higgs boson, it’s not really practical for any of us to pop over to the local particle accelerator and check for ourselves. So expertise is clearly an important part of any decision chain. But experts come with their own biases. An engineer naturally tends to see problems through an engineering lens, a chemist through a chemical one.

Science in Support of an Agenda: One danger with the ‘follow the science’ mantra is that it is often used to reinforce a belief, opinion, or even agenda. I’ve seen this all too often in my work life, with the question, ‘Can you find me a paper that supports X?’ This is often benign, in that someone passionately believes something, and wants to find evidence to support it. But it is fundamentally the wrong question, and of course, completely ‘unscientific’.

The scientific literature is filled with competing theories, disproven or outdated ideas, and bad science. If you look for literature to support an idea you can usually find it, even if it’s wrong. Scientists are not gods. They make mistakes, they run poor experiments, and they are subject to confirmation bias, ego, and other human frailties. There is a good reason for the phrase that science advances one funeral at a time. Science, like virtually every human organization, is hierarchical, and a prestigious scientist can advance a discipline, but can also slow it down by holding onto a deeply held belief. And mea culpa, I know from personal experience that it’s all too easy to fall in love with a theory, and resist evidence to the contrary.

Of course, some theories are more robust than others.   Both consensus and longevity are therefore important considerations.  Some science is so well established, and supported by so much observation that it’s unlikely that it will fundamentally change.  For example, we may still have a great deal to learn about gravity, but for practical purposes, apples will still drop from trees.    

Peer Review: Policing the literature is hard. Consensus is right until it’s not. Another phrase I often hear is ‘peer reviewed’, in the context that this makes the paper ‘right’. Of course, peer review is valuable, part of the scientific process, and helps ensure that content has quality, and has been subject to a high level of rigor. If one person says it, it can be a breakthrough or utter nonsense. If a lot of smart people agree, it’s more likely to be ‘right’. But that is far from guaranteed, especially if they share the same ingoing assumptions. Scientific consensus has historically embraced many poor theories; a flat earth, or the sun revolving around the earth, are early examples. More tragically, I grew up with the thalidomide generation in Europe. On an even bigger scale, the industrial revolution gave us so much, but also precipitated climate change. And on a personal level, I’ve just been told by my physician to take a statin, and I am in the process of fighting my way through a rapidly growing and evolving literature in order to decide if that is the right decision. So next time you see a scientist, or worse, a politician, journalist, or a random poster on Twitter claim they own scientific truth, enjoin you to ‘follow the science’, or accuse someone else of being a science denier, treat it with a grain of sodium chloride.

They may of course be right, but the more strident they are, or the less qualified, the less likely they are to really understand science, and hence what they are asking you to follow.  And the science they pick is quite possibly influenced by their own goals, biases or experience. Of course, practically we cannot challenge everything. We need to be selective, and the amount of personal effort we put into challenging an idea will depend upon how important it is to us as individuals.      

Owning your Health: Take physicians as an example. At some time or other, we’ve all looked to a physician for expert advice. And there is good reason to do so. They work very hard to secure deep knowledge of their chosen field, and the daily practice of medicine gives them a wealth of practical as well as theoretical knowledge. But physicians are not gods either. The human body is a very complex system, physicians have very little time with an individual patient (over the last 10 years, the average time a physician spends with a patient has shrunk to a little over 15 minutes), the field is vast and expanding, and our theories around how to diagnose and treat disease are constantly evolving. In that way, medicine is a great example of the scientific method in action, but also of how transient ‘scientific truths’ can be.

I already mentioned my current dilemma with statins. But to give an even more deeply personal example, neither my wife nor I would be alive today if we’d blindly followed a physician’s diagnosis.

I had two compounding and comparatively rare conditions that combined to appear like a more common one.  The physician went with the high probability answer.  I took time to dig deeper and incorporate more details.  Together we got to the right answer, and I’m still around!

This is a personal and pragmatic example of how valuable the scientific process can be.  My health is important, so I chose to invest considerable time in the diagnosis I was given, and challenge it productively, instead of blindly accepting an expert opinion. My physicians had far more expertise than I did, but I had far more time and motivation.  We ultimately complemented each other by partnering, and using the scientific method both as a process, and as a way to communicate.   

The Challenge of Science Communication:  To be fair, science communication is hard.   It requires communicating an often complex concept with sufficient simplicity for it to be understandable, often requires giving guidance, while also embracing appropriate uncertainty. Nowhere was this more evident than in the case of Covid 19, where a lot of ‘follow the science’, and ‘science denier’ language came from.  At the beginning of the pandemic, the science was understandably poorly developed, but we still had to make important decisions on often limited data.  At first we simply didn’t understand even the basics like the transmission vectors (was it airborne or surface, how long did it survive outside of the body, etc).  I find it almost surreal to think back to those early months, how little we knew, the now bizarre clean room protocols we used on our weekly shopping, and some of the fear that has now faded into the past.  

But because we understood so little, we made a lot of mistakes. The over-enthusiastic use of ventilators may have killed some patients, although that is still a hotly debated topic. Early in the pandemic, masks, later to become a controversial and oddly politically charged topic, were specifically not recommended by the US government for the general public. Who knows how many people contracted the disease by following this advice? It was well intentioned, as authorities were trying to prevent a mask shortage for health workers. But it was also mechanistically completely wrong.

At the time I used simple scientific reasoning, and realized this made little sense.  If the virus was transmitted via an airborne vector, a mask would help.  If it wasn’t, it would do no harm, at least as long as I didn’t subtract from someone with greater need. By that time the government had complete control of the mask supply chain anyway, so that was largely a moot point. Instead I dug out a few old N95 masks that had been used for spray painting and DIY, and used them outside of the house (hospitals would not accept donations of used masks). I was lambasted with ‘follow the science’ by at least one friend for doing so, but followed an approach with high potential reward and virtually zero downside. I’ll never know if that specifically worked, but I didn’t get Covid, at least not until much later when it was far less dangerous.

Science doesn’t own truth: Unlike a religion, good science doesn’t pretend to own ultimate truths. But unfortunately it can get used that way. Journalists, politicians, technocrats and others sometimes weaponize (selective) science to support an opinion. Even a few scientists who have become frustrated with ‘science deniers’ can slip into this trap.

Science is a Journey: I should clarify that the scientific method is more of a journey than a single process. To suggest it is a single ‘thing’ is probably an unscientific simplification in its own right. It’s more a way of thinking that embraces empiricism, observation, description, productive skepticism, and the use of experimentation to test and challenge hypotheses. It also helps us to collaborate and communicate with experts in different areas, creating a common framework for collaboration, rather than blindly following directions or other expert opinions.

It can be taught, and is incredibly useful. But like any tool, it requires time and effort to become a skilled user. If we invest in it, it can be extraordinarily valuable, both in innovation and in life. It’s perhaps not for every situation, as that would mire us in unmanageable procrastination. But if something is important, it’s an invaluable tool.

Image credits: Pixabay


The Runaway Innovation Train


GUEST POST from Pete Foley

In this blog, I return to and expand on a paradox that has concerned me for some time. Are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a double-edged sword. It brings huge benefits, but also commensurate risks.

Ostensibly, change is good. Because of technology, today we mostly live more comfortable lives, and enjoy superior health, longevity, and mostly increased leisure and abundance compared to our ancestors.

Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course, innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human stone age lasted about 300,000 years before ending in about 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards, things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison to the past. Literally from horse-drawn carts delivering milk when she was a child in poor SE London, to today’s world of self-driving cars and exploring our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum has experienced.

The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than new experiences. It simply takes more mental effort to manage new stuff than familiar stuff. As a result we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.

Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and more remotely, than our ancestors dreamed of. The internet gives us unprecedented access to both information and connectivity, but is also a source of misinformation and manipulation.

The Abundance Dichotomy:  Innovation increases abundance, but it’s arguable if that actually makes us happier.  It gives us more, but paradoxically brings greater inequalities in distribution of the ‘wealth’ it creates. Behavior science has shown us consistently that humans make far more relative than absolute judgments.  Being better off than our ancestors actually doesn’t do much for us.  Instead we are far more interested in being better off than our peers, neighbors or the people we compare ourselves to on Instagram. And therein lies yet another challenge. Social media means we now compare ourselves to far more people than past generations, meaning that the standards we judge ourselves against are higher than ever before.     

Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them when they have already become a significant problem. Climate change is of course a poster child for this, as a huge unanticipated consequence of the industrial revolution. The same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to increased rates of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.

The More we Innovate, the less we understand:  And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex, and requires increasing specialization, which means few if any really understand the holistic picture.  Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous experiments in behavioral and social manipulation.  But we are doing so with increasingly less pervasive understanding of direct, let alone unintended consequences of these complex changes!   

The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and often this growing knowledge base inevitably leads to multiple further innovations. The connectivity and information access of the internet alone is driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone-age innovation was slow not because our ancestors lacked intelligence. To the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’, we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. But the trouble with chain reactions is that they can be very hard to control.

Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation.   How does human culture exist for 50,000 years or more and then ‘suddenly’ two people, Darwin and Wallace come up with the theory of evolution independently and simultaneously?  The same question for calculus (Newton and Leibniz), or the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction.  It’s not coincidence, but simply reflects that once all of the pieces of a puzzle are in place, somebody, and more likely, multiple people will inevitably make connections and see the next step in the innovation chain. 

But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. But unfortunately associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. And this is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. As we look forward, just taking AI as an example, we can already somewhat anticipate some worrying possibilities. But what about the surprises analogous to climate change that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness, but despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.

Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties, and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police. North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cyber crime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to initially create, opens numerous doors for misuse that require far fewer resources.

The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Hahn and Strassmann in the late 1930s made the bomb inevitable. This set the stage for a race to turn theory into practice between the Allies and Nazi Germany. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or choose to legislate against them, and in so doing cede that terrible power to the Nazis? Not an enviable choice.

Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing them underground, or falling behind and deferring to others. The recent open letter from Meta to the EU, chastising it for the potential economic impacts of its AI regulations, may have dripped with self-interest. But that didn’t make it wrong. https://euneedsai.com/ Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations, and less conservative countries, will still pursue the upside, and risk the downside. The cat is very much out of the bag.

Muddling Through:  The good news is that when faced with potentially perilous change in the past, we’ve muddled through.  Hopefully we will do so again.   We’ve avoided a nuclear holocaust, at least for now.  Social media has destabilized our social order, but hasn’t destroyed it, yet.  We’ve been through a pandemic, and come out of it, not unscathed, but still functioning.  We are making progress in dealing with climate change, and have made enormous strides in managing pollution.

Chain Reactions:  But the innovation chain reaction, and the impact of cumulative knowledge mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will side effects, nefarious use, mistakes and any unintended consequences that derive from it. Key factors that have helped us in the past are time and resource, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.   

What can, or should, we do? I certainly don't have simple answers. We're all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations.  But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging.  No individual or group can realistically understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We'll never anticipate everything, and we'll still get blindsided.  And putting resources against 'what if' scenarios is always a hard sell. But maybe we need to go into sales mode.

Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us.  AI and quantum computing will give us almost infinite capacity for computation and modeling.  Could we collectively assign more of that emerging resource against predicting and managing its own risks?

With many emerging technologies, we are now where we were in the early 1900s with climate change.  We are implementing massive, unpredictable change, and by definition have no idea what its unanticipated consequences will be. I personally think we'll deal with climate change.  It's difficult to slow a leviathan that's been building for over a hundred years, but we've taken the important first steps in acknowledging the problem, and are beginning to implement corrective action.

But big issues require big solutions.  Long-term, I personally believe the most important thing is for humanity to escape the gravity well.  Given the scale of our ability to create global change, interplanetary colonization is not a luxury, but an essential.  Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be.  We will hopefully manage that, and avoid nuclear war or synthetic pandemics for long enough to achieve it.  But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.

Image credits: Microsoft Copilot

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

SpaceX is a Masterclass in Innovation Simplification


GUEST POST from Pete Foley

This capture from a recent SpaceX tweet is a stunning visual example of smart innovation and simplification. 

While I'm not even close to being a rocket scientist, and so am far from familiar with all of the technical details, I've heard that the breakthroughs incorporated into this include innovative sensor design that allows for streamlined feedback loops. But this goes beyond impressive technical innovation.  To innovate at this level requires organizational and cultural flexibility as well as technical brilliance. That flexibility is probably far more broadly transferable and adoptable than specific advances in rocket science, and hence more useful to the broader innovation community. So let's dig a little deeper into that space.

Secret Sauce?  Organizationally, SpaceX is well known for less formal hierarchies, passion, ownership, and engineers working on the production floor.  This hands-on approach creates a different, but important, kind of feedback, while passion feeds the intrinsic motivation, ownership and engagement that are so critical to consistent innovation.

Learning from Failure – An Innovation Superpower?  But perhaps most important of all is the innovation culture. Within SpaceX there is a very clear willingness to experiment and learn from failure.  Not lip service, or the sometimes half-hearted embrace of failure often found in large, bureaucratic organizations, where rewards and career progression often don't reflect the mantra of learning by failing.  This is an authentic willingness to publicly treat productive failure of individual launches as a learning success for the program, and to reward productive failure and appropriate risk-taking.  Of course, it's not always easy to walk the talk of celebrating failure, especially in spacecraft design, where failures are often spectacular, public, and visual gold for the media.  And no doubt this is compounded by Musk's controversial public profile, where media and social media are often only too keen to highlight failures.  But the visual of Raptor 3 is, for me, a compelling advertisement for authentically embedding learning by failure deep in the DNA of an innovative organization.

Stretch Goals:  Musk is famous, and sometimes ridiculed, for setting ambitious stretch goals, and for not always achieving them.  But in a culture where failure is tolerated, or if done right celebrated, missing a stretch goal is not a problem, especially if it propelled innovation along at a pace that exceeds conventional expectation.

Challenging Legacy and 'Givens':  Culturally, this kind of radical simplification requires the systematic challenge of givens that were part of previous iterations.  You cannot make these kinds of innovation leaps unless you are both willing and able to discard legacy technical and organizational structures.

At risk of kicking Boeing while it is down, it is hard not to contrast SpaceX with Boeing, whose space (and commercial aviation) program is very publicly floundering, and facing the potentially humiliating prospect of needing rescue from the more agile SpaceX program. 

Innovation Plaque:  But in the spirit of learning from failure, if we look a bit deeper, perhaps it should not be a surprise that Boeing are struggling to keep up. They have a long, storied, and successful history as a leader in aerospace.  But history and leadership can be a blessing and a curse, as I know from P&G. They bring experience, but also bureaucracy, rigid systems, and a deeply rooted culture that may or may not be optimal for managing change.  Deep institutional knowledge is a similar mixed blessing.  It allows easy access to in-domain experience, and is key to not repeating past mistakes or making naïve errors.  But it also comes with an inherent bias towards traditional solutions and technologies.  Perhaps even more important is the organizationally remembered pain of past failures, especially if a 'learn by failure' culture isn't fully embraced.  Failure is good at telling us what didn't work, and plays an important role in putting processes in place that help us avoid repeating errors.  But over time these 'defensive' processes can build up like plaque in an artery, making it difficult to push cutting-edge technologies or radical changes through the system.

Balance is everything.  Nobody wants to be the Space Cowboy.  Space exploration is expensive, and risks the lives of some extraordinarily brave people.  Getting the balance right between risk-taking and the right kind of failure is even more critical than in most other contexts. But SpaceX are doing it right, at least so far. Whatever the technical details, the impact of the Raptor 3 simplification on speed, efficiency and cost is stunning.  I suspect that reliability and efficiency will ultimately also be helped by the increased simplicity.  But it's a delicate line.  The aforementioned 'plaque' does slow the process, but done right, it can also prevent unnecessary failure.  It's important to be lean, but not to 'slice the salami' too thin.  Great innovation teams mix diverse experience, backgrounds and personalities for this reason.  We need the cynic as well as the gung-ho risk taker.  For SpaceX, so far, so good, but it's important that they don't become overconfident.

The Elon Musk Factor:  For anyone who hasn't noticed, Musk has become a somewhat controversial figure of late. But even if you dislike him, you can still learn from him, and as innovators, I don't think we can afford not to. He is the most effective innovator, or at least innovation leader, of at least a generation. The teams he puts together are brilliant at challenging 'givens', and breaking out of legacy constraints and the 'ghosts of evolution'. We see it across the SpaceX design, not just the engine, but also the launch systems, recycling of parts, etc. We also see an analogous innovation strategy in the way Tesla cars so dramatically challenged so many givens in the auto industry, or in the Boring Company in my hometown of Las Vegas.

Ghosts of Evolution:  I mentioned the challenges of legacy designs and legacy constraints above. I think this is central to SpaceX's success, so it's worth going a little deeper on this topic.  Every technology, and every living thing on our planet, comes with its own ghosts.  They are why humans have a literal blind spot in our vision, why our bodies' pleasure centers are co-located with our effluent outlets, and why the close proximity of our air and liquid/solid intakes leads to thousands of choking deaths every year. Nature is largely stuck with incrementally building on top of past designs, often leading to the types of inefficiency described above. Another example is the pronghorn antelope that lives in my adopted American West. It can reach speeds of close to 55 mph (nearly 90 km/h). This is impressive, but vastly over-designed and inefficient for its current environment. It is a legacy design, evolved at a time when the pronghorn was predated upon by the long-extinct American cheetah. It cannot simply undo that capability now that it's no longer useful. So far it has survived this disadvantage, but it is vulnerable to both competition and a changing environment simply because it is over-designed.

Bio-Inspiration:  I've long believed we can learn a great deal from nature and bio-inspired design, but sometimes learning what not to do is as useful as 'stealing' usable insights. It's OK to love nature, but also to acknowledge that evolution has far more failures than successes. There are far, far more extinct species than living ones, and virtually every one was either too specialized, or lacked the ability to pivot and adapt in the face of changing context.

As innovators, we have the unique option of creating totally new 2.0 designs, and of challenging the often unarticulated givens held within a category. We also have the option of changing our culture and organizational structures. But often we fail to do so because we are individually or organizationally blind to legacy elements that are implicitly part of our assumptions for a category or a company.  The fish doesn't see the water, or at least not until it's dangling from a hook. By then it's too late.  Whatever you think of Musk, he's taught us it is possible to create innovation cultures that challenge legacy designs extremely effectively.  It's a lesson worth learning.

Image credits: Twitter (via SpaceX)


Cover versions, Sequels, Taylor Swift and Innovation


GUEST POST from Pete Foley

An inherent contradiction in almost any new innovation is that it needs to be both new, but also somewhat familiar.  If it doesn’t offer anything new, there is little motivation for consumers to risk abandoning existing habits or preferences to try it.  But if it is not at least anchored in familiarity, then we ask consumers to put a lot of effort into understanding it, in addition to any opportunity cost from what they give up for trying something new.  Innovation is difficult, and a lot of innovations fail, at least in part because of this fundamental contradiction. 

Transformative Performance:  Of course, innovations can be successful, which means we do navigate this challenge.  But how? One way is to deliver something with such transformative benefits that people are willing to push themselves over the hump of learning something new. Huge benefits also create their own 'gravity', often spreading via word of mouth through media, social media, and even old-fashioned human-to-human conversations. This avoids the need for brute-force mass marketing spend, which can create the illusion of familiarity, but with a hefty price tag that is typically beyond smaller companies.

Familiarity: The second option is to leverage what people already know in such a way that the ‘adoption hump’ becomes relatively insignificant, because new users intuitively know what the innovation is and how to use it.

Wow!  The best innovations do both.  ChatGPT and generative AI are a contemporary example, where transformative performance has created an enormous amount of word of mouth, but the interface is so intuitive there is little barrier to adoption, at least superficially.

Of course, using it skillfully is another thing altogether, but I think there is an insight there too.  It’s OK to have an ongoing learning curve after initial adoption, but initial engagement needs to be relatively simple.  The gaming industry are masters of this.    

Little Wows!  ChatGPT is a brilliant innovation.  But realistically, few of us are going to create something quite that extraordinary.  So how do we manage to create more modest wows that still drive trial, engagement and ultimately repeat business?

Science, Art and Analogy:  As a believer that a lot of interesting things happen at the interface between science and art, and that analogy is a great tool, I think we can learn a little about solving this by taking insight from the arts, in this case music and movies. For example, popular music routinely plunders the familiar, and repackages it as new via cover versions.  I often do the same myself!  Movies do something similar, either with the cycle of remakes of classic movies, or with sequels that often closely follow the narrative structure of the original.

But this highlights some of the challenges in solving this dichotomy.  It’s rare for a remake, cover version, or sequel to do better than the original.  But a few do, so what is their secret?  What works, and what doesn’t? 

1.  Distance from the original.  Some of the best movie remakes completely reframe the original in ways that maintain a largely implicit familiarity, but without inviting direct comparison of alignable differences to the original. For example, West Side Story is a brilliant retelling of Romeo and Juliet, Bridget Jones's Diary reframes Pride and Prejudice, She's All That is a retelling of George Bernard Shaw's Pygmalion, and The Lion King retools Hamlet.  I'm not suggesting that nobody sees these connections, but many don't, and even if they do, the context is sufficiently different to avoid constant comparison throughout the experience.  And in most of these cases the originals are not contemporary, so there is temporal as well as conceptual distance between original and remake.  Similarly with cover versions: Hendrix and the Byrds both completely and very successfully reframed Dylan (All Along the Watchtower and Mr. Tambourine Man), and Sinéad O'Connor achieved similar success with Prince's 'Nothing Compares 2 U'.  For those of you with less grey in your hair, last summer's cover of Tracy Chapman's 'Fast Car' by Luke Combs shows that covers can still do this.

2.  Something New.  A different way to fail is to tap familiarity without adding anything sufficiently new or interesting.  All too often, covers, sequels and remakes are simply weaker copies of the original.  I'm sure anyone reading this can come up with their own examples of a disappointing remake or sequel.  Footloose, Annie, Psycho, Tom Cruise's The Mummy and The Karate Kid are all candidates for me.  As for sequels, again, I'm sure you can all name a respectable list of your own wasted two hours, with Highlander 2 and Jaws: The Revenge being my personal cures for insomnia.  And even if we include novelty, it cannot be too predictable; it needs to be at least a little surprising.  For example, the gender reversal in the remake of Overboard is a point of difference from the Goldie Hawn original, but it's not exactly staggeringly novel or surprising.  It's a lot like a joke: if you can see it coming, it's not going to create a wow.

3.  Don't Get De-Selected.  Learning from the two previous approaches can help us create sufficient separation from past experience to engage, and hopefully delight, potential consumers.  But it's important not to get carried away and become untethered from familiarity.  For example, I personally enjoy a lot of jazz, but despite their often extraordinary skill, jazz musicians don't fill many arenas.  That's in part because jazz asks the listener to invest a lot of cognitive bandwidth and time to develop an 'ear', or musical expertise, in order to appreciate it. It often moves a long way from the familiar original, and adds a lot of new into the equation.  As a result, it is a somewhat niche musical form.  Pop music generally doesn't require the same skill or engagement, and successful artists like Taylor Swift understand that.  And when it comes to innovation, most of us want to be mainstream, not niche. This is compounded because consumers today face a bewildering array of options, and a huge amount of information.  One way our brains have evolved to deal with complexity is to quickly ignore or 'de-select' things that don't appear relevant to our goals. A lot of the time, we do this unconsciously.  Faced with more information than we can process, we quickly narrow our choices down to a consideration set that is 'right-sized' for us to make a decision.  From an innovation perspective, if our innovations are too 'jazzy', they risk being de-selected by a majority of consumers before they can be fully appreciated, or even consciously noticed.

There's no precise right or wrong strategy in this context. It's possible to deliver successful innovations by tapping and balancing these approaches in many different ways.  But there are certainly good and bad executions, and I personally find it helpful to use these kinds of analogy when evaluating an innovation.  Are we too jazzy? Do we have separation from incumbents that is meaningful for consumers, and not just for ourselves? The latter is a real challenge for experts. When we are deeply engaged in a category, it's all too easy to get lost in the magic of our own creations.  We see differences more clearly than consumers do, and it's easy to become overly excited by relatively small changes that lack sufficient newness and separation from existing products for consumers who are nowhere near as engaged in our category as we are.  But it's also easy to create 'jazz' for similar reasons, by forgetting that real-world consumers are typically far less interested in our products than we are, and so miss the brilliance of our 'performance', or perhaps don't 'get it' at all.

For me, it is useful to simply ask myself whether I’m a Godfather II or a Highlander II, a Taylor Swift or a Dupree Bolton, or even Larry Coryell.  And there’s the rub.  As a musician, I’d rather be Larry, but as a record company exec, I’d far rather have Taylor Swift on my label. 

Image credits: Wikimedia Commons


AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!


GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI.  It opens doors that were, for me at least, barely imaginable 10 years ago.  And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it's hard to imagine a more useful tool.

But it is still a tool.  And as with any tool, learning its limitations, and how to use it skillfully, is key.  I'd make the analogy to an automobile.  We don't need to know everything about how it works, and we certainly don't need to understand how to build it.  But we do need to know what it can and cannot do. We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant?  An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi's excellent TED talk). It has phenomenal 'data intelligence', but can also fail on even simple logic puzzles. Part of the problem is that AI lacks 'common sense', the implicit framework that filters a great deal of human decision-making and behavior.  Choi calls this the 'dark matter' of common sense in decision-making. I think of it as the framework of knowledge, morality, biases and common sense that we accumulate over time, and that is foundational to the unconscious 'System 1' elements that influence many, if not most, of our decisions. But whatever we call it, it's an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from unique in having limitations in the quality of its output.  Any information source we use is subject to errors.  We all know not to believe everything we read on the internet; that makes Google searches useful, but also potentially flawed.  Even consulting with human experts has pitfalls.  Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error.  But most of us have learned to be appropriately skeptical of these sources of information.  We routinely cross-reference, challenge data, seek second opinions, and do not simply 'parrot' the data they provide.

But increasingly with AI, I've seen a tendency to treat its output with perhaps too much respect.  The reasons for this are multi-faceted, but very human.  Part of it may be the way generative AI provides answers in an apparently definitive form.  Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy.  Another element is the ability it gives us to quickly penetrate areas where we may have little domain knowledge or background.  As I've already mentioned, this is fantastic for those of us who value exploring new domains and analogies.  But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is for us to miss even basic mistakes.

As for AI's limitations, Choi provides some sobering examples.  AI can pass a bar exam, but fail abysmally on even simple logic problems.  For example, it has suggested that crossing a bridge suspended over broken glass and nails is likely to cause punctures!  It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials.  Of course, these negative examples are somewhat cherry-picked to make a point, but they do show how poor some AI answers can be, and how low in common sense.  When the errors are this obvious, we filter them out automatically with our own common sense.  The challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI?  At the root of it, implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say 'remove the pot from the heat', but rarely 'remove the pot from the heat and don't stick your fingers in the flames'. We're supposed to know that already. Because it is 'obvious', and processed quickly, unconsciously and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said.  It learns to take the pot off of the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is obviously a known problem, and several strategies are employed to help address it, including manually adding crafted examples and direct human input into AI training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated or highlighted in AI training, the risk of transferring specific human biases to that AI increases.  It also creates the potential for competing AIs with different 'viewpoints', depending upon differences in both human input and the choices around which datasets are scraped. There is a 'nature' component to the development of AI capability, but also a 'nurture' influence. This is of course analogous to the influence that parents, teachers and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks.  Parents, peers and teachers provide generational variety, and the gradual, layered process that builds the human implicit decision framework helps us evolve a supporting network of contextual insight.  It's obviously imperfect, and the current culture wars are testament to some profound differences in end result.  But to a large extent, we evolve similar, if not identical, common sense frameworks. With AI, the narrower group contributing to its curated 'education' increases the risk of both intentional and unintentional bias, and of 'divergent intelligence'.

What Can We Do?  The most important thing is to be skeptical about AI output.  Just because it sounds plausible, don't assume it is.  Just as we wouldn't take the first answer on a Google search as absolute truth, don't do the same with AI.  Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references).  And of course, the more important the output is to us, the more important it is to check it.  As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible, or fits our theory or worldview.  But always challenge the illusion of omniscience that AI creates.  It's probably correct, but especially if it's providing an important or surprising insight, double-check it.

The Sci-Fi Monster!  The concept of a childish superintelligence has been explored by more than one science fiction writer.  But in many ways that is what we are dealing with in the case of AI.  Its informational 'IQ' is greater than its contextual or common-sense 'IQ', making it a different type of intelligence from those we are used to.  And because so much of the human input side is proprietary and complex, it's difficult to determine whether bias or misinformation is included in its output, and if so, how much.  I'm sure these are solvable challenges.  But some bias is probably unavoidable the moment any human intervention or selection invades the choice of training materials or their interpretation.  And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that a narrowing of sources will result in different AIs with different 'experiences', and hence potentially different answers to questions.

AI is an incredible gift, but like the three wishes in Aladdin's lamp, use it wisely and carefully.  A little bit of skepticism, and some human validation, is a good idea. Something that can pass the bar but lacks common sense is powerful; it could even get elected, but don't automatically trust everything it says!

Image credits: Pexels


Las Vegas Formula One

Successful Innovation, Learning Experience or Total Disaster?

GUEST POST from Pete Foley

In Las Vegas, we are now clearing up after the Formula 1 Grand Prix on the Strip.  This extremely complex event required a great deal of executional innovation, and is one that I think we, as innovators, can learn quite a lot from.

It was certainly a bumpy ride, both for the multi-million-dollar Ferrari that hit an errant drain cover during practice, and with respect to broader preparation, logistics, pricing and projections of consumer behavior.  Despite this, the race itself was exciting and largely issue-free, and even won over some of the most skeptical drivers.  In terms of Kahneman's peak-end effects, there were memorable lows, but also a triumphant end result.  So did this ultimately amount to success?
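The peak-end effect mentioned above can be sketched numerically. The toy model below is my own illustration, not something from the original research or this post: it approximates how an experience is remembered as the average of its most intense moment and its final moment, and all the `weekend` scores are invented for illustration.

```python
# Toy sketch of Kahneman's peak-end rule: experiences tend to be
# remembered by their most intense moment and their ending, rather
# than by the moment-by-moment average. All numbers are invented.

def peak_end_memory(moments):
    """Approximate remembered experience as the mean of the most
    intense moment (largest absolute value) and the final moment."""
    peak = max(moments, key=abs)  # most intense moment, good or bad
    end = moments[-1]
    return (peak + end) / 2

# Hypothetical F1 weekend scored from -10 (awful) to +10 (great):
# drain-cover incident, traffic misery, then an exciting finish.
weekend = [-8, -4, -3, 9, 7]

print(sum(weekend) / len(weekend))  # moment-by-moment average → 0.2
print(peak_end_memory(weekend))     # peak-end approximation → 8.0
```

Under this stylized rule, an episode that averages out as roughly neutral is remembered as strongly positive, which is one way to read how memorable lows followed by a triumphant finish can still land as a success.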

Success?  For now, I think it very much depends upon your perspective and who you talk to.  Perhaps it's a sign of the times, but in Las Vegas the race was extremely polarizing, with heated debates between pro- and anti-F1-ers that were often as competitive as the race itself.

The reality is that it will be months, or more likely years, before the dust settles and we know the answer.  And I strongly suspect that even then, those for and against will all likely be able to claim support for their point of view.  One insight I think innovators can take from this is that success can be quite subjective in and of itself, and depends greatly upon what factors you measure, what period of time you measure over, and often your ingoing biases.  And the bigger and more complex the innovation, the harder it is to define and measure success.

Compromise Effects:  When you launch a new product, it is often simpler and cheaper to measure its success narrowly, in terms of its specific dollar contribution to your business. But this often misses its holistic impact.  Premium products can elevate an entire category or brand, while poorly executed innovations can do the opposite.  For example, the compromise effect from Behavioral Economics suggests that a premium addition to a brand lineup can shift the 'Good, Better, Best' spectrum of a category upwards.  This can boost dollar sales across a lineup, even if the new premium product itself has only moderate sales.  To illustrate, adding high-priced wines to a menu can often increase the average dollars per bottle spent by diners, even if the expensive wine itself doesn't sell.  The expensive wines shift the 'safe middle' of the consideration set upwards, and thus increase revenue, and hopefully profit.
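The wine-list mechanism can be made concrete with a small sketch. The prices and choice shares below are entirely hypothetical, invented purely to illustrate the compromise effect, not real menu or sales data:

```python
# Stylized illustration of the compromise effect: a premium anchor
# shifts the 'safe middle' upward even if it barely sells itself.
# All prices and choice shares are hypothetical.

def average_spend(prices, shares):
    """Expected revenue per purchase for a lineup, given the share
    of buyers choosing each option."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(p * s for p, s in zip(prices, shares))

# Wine list without a premium anchor: most diners take the middle.
print(average_spend([20, 35, 50], [0.25, 0.55, 0.20]))  # → 34.25

# Add a $120 bottle: the 'safe middle' shifts toward the $50 wine,
# lifting revenue even though only 5% buy the premium bottle.
print(average_spend([20, 35, 50, 120], [0.15, 0.35, 0.45, 0.05]))  # → 43.75
```

The premium bottle contributes little directly, but the shift in which options feel 'middle' raises the average spend across the whole lineup.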

Money, Scope and Intangibles:  In the case of F1, how far can and should we cast the net when trying to measure success?  Can we look just at the bottom line?  Did this specific weekend bring in more than the same weekend the previous year in sports betting, rooms and entertainment?  Did that difference exceed the investments? 

Or is that too narrow?  What about the $$ impact on the weeks surrounding the event?  We know that some people stayed away because of the construction and congestion in the lead up to the race.  That should probably be added into, or subtracted from the equation. 

And then there's the 'who won and who lost' question. The benefits and losses were certainly not homogeneous across stakeholders.  The big casinos benefited disproportionately in comparison to the smaller restaurants that lost business due to construction, some to a degree that almost rivaled Covid.  Gig workers also fared differently. I have friends who gained business from the event, and friends who lost it.  Many Uber drivers simply gave up and stopped working, but those who stayed, and the high-end limo drivers, likely had bumper weekends.  Entertainers working shows that were disrupted by F1 lost out, but the plethora of special events that came with F1 also provided a major uptick in business for many performers and entertainers.

There is also substantial public investment to consider. Somewhat bizarrely, the contribution of public funds was not agreed prior to the race, and the public-private cost sharing of tens of millions is still being negotiated. But even facing that moving target, did increased (or decreased) tax income before, during and after the race offset those still-to-be-determined costs?

Intangibles: And then there are the intangibles. While Vegas is not exactly an unknown entity, F1 certainly upped its exposure, or in marketing terms, its mental availability. It brought Vegas into the news, but was that in a positive or a negative light? Or is all publicity good publicity in this context? News coverage was mixed, with a lot of negative focus on the logistical issues, but also global coverage of what was generally regarded as an exciting race. And of course, that media coverage also by definition marketed other businesses, including the spectacular Sphere.

Logistics: Traffic was a nightmare, with many who work on the Strip facing unprecedented delays in their commutes for many weeks, some commutes going from minutes to hours. This reached the point where casinos were raffling substantial prizes, including a Tesla, just to persuade people not to call in sick. Longer term, it's hard to determine the impact on employee morale and retention, but it's hard to imagine that it will be zero, and that brings costs of its own that go well beyond a raffled Tesla.

Measuring Success? In conclusion, this was a huge operation, and its impact is by definition going to be multidimensional. The outcome was, not surprisingly, a mixed bag. It could have been a lot better, or a lot worse. And even as the dust settles, it's likely that different groups will be able to cherry-pick data to support their existing opinions and biases.

Innovation Insights:  So what are some of the more generalized innovation insights we can draw?

(a) Innovation is rarely a one-and-done process. We rarely get it right the first time, and the bigger and more complex an innovation is, the more we usually have to learn. F1 is the poster child for this, and the organization is going to have an enormous amount of data to plough through. The value of this will depend greatly on F1's internal innovation culture. Is it a learning organization? In a situation like this, where billions of dollars, and careers, are on the line, will it be open or defensive? Great innovation organizations mostly put defensiveness aside, actively learn from mistakes, and adopt Devil's Advocate approaches to learn from hard-earned data. But culture is deeply embedded, and difficult to change, so much depends on the current culture of the organizations involved.

(b) Going Fast versus Going Slow: This project moved very, very quickly. Turning the streets of Las Vegas into a top-of-the-line race track in less than a year was a massive challenge. The upside is that if you go fast, you learn fast. And the complexity of the task meant much of the insight could pragmatically only be gained 'on the ground'. But conversely, better scenario planning might have helped anticipate some of the biggest issues, especially around traffic disruption, loss of business for smaller organizations, commuting, and community outreach. And things like not finalizing public-private contracts prior to execution will likely end up prolonging the agony. Whatever our innovation is, big or small, hitting the sweet spot between winging it and over-thinking is key.

(c) Understanding Real Consumer Behavior: The casinos got pricing horribly wrong. When the race was announced, hotel prices and race packages for the F1 weekend went through the roof. But in the final run-up to the race, prices for both rooms and the race itself plummeted. One news article reported a hotel room on the Strip as low as $18! Tickets for the race that had cost $1,600 the previous month had dropped to $800 or less on race day. Visitors who had earlier paid top dollar for rooms were reported to be cancelling and rebooking, while those locked into rates were frustrated. There is even a major lawsuit in progress around a cancelled practice session. I don't know the details of how pricing was researched, and predicting the market for a new product or innovation is always a challenge. The bigger the innovation, the more challenging the prediction game, as there are fewer relevant anchors for consumers or the business to work from. But I think the generalizable lesson for all innovators is to be humble. Assume you don't know, and that your models are approximate; do as much research as you can in contexts that are as close to realistic as possible; don't squeeze margins based on unrealistic expectations for the accuracy of business models; and build as much agility into innovation launches as possible. Easier said than done, I know, but one of the most consistent reasons for new product failure is overconfidence in understanding real consumer response when the rubber hits the road (pun intended), and how it can differ from articulated consumer response gathered in unrealistic contexts. Focus groups and online surveys can be quite misleading when it comes down to the reality of handing over hard cash, opportunity cost, or how we value our precious time in the short versus the long term.

Conclusion: Full disclosure, I've personally gone through the full spectrum with Formula One in Vegas. I loved the idea when it was announced, but six months of construction, disruption, and the prospect of another two months of tear-down have severely dented my enthusiasm. Ultimately I went from coveting tickets to avoiding the event altogether. People I know range from ecstatic to furious, and everything in between. Did I mention it was polarizing?

The reality is that this is an ongoing innovation process. There is a three-year contract with options to extend to ten years. How successful it ultimately is will likely depend on how good a learning and innovation culture Formula One and its partners are, or can become. It's a steep and expensive learning curve, and how it moves forward is going to be interesting, if nothing else. And being Vegas, we have both CES and the Super Bowl to distract us in the next few months, before we start preparing again for next year.

Image credits: Pexels
