Author Archives: Pete Foley

About Pete Foley

Pete Foley is a consultant who applies Behavioral Science to catalyze innovation for Retail, Hospitality, Product Design, Branding and Marketing Design. He applies insights derived from consumer and shopper psychology, behavioral economics, perceptual science, and behavioral design to create practical solutions to difficult business challenges. He brings 25 years of experience as a serial innovator at P&G. He has over 100 published or granted patents, has published papers in behavioral economics, evolutionary psychology and visual science, is an exhibited artist and photographer, and is an accomplished musician.

The Runaway Innovation Train


GUEST POST from Pete Foley

In this blog, I return to and expand on a paradox that has concerned me for some time. Are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a two-edged sword. It brings huge benefits, but also commensurate risks.

Ostensibly, change is good. Because of technology, today we mostly live more comfortable lives, and enjoy superior health, longevity, and, for the most part, greater leisure and abundance than our ancestors.

Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course, innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human Stone Age lasted about 300,000 years before ending in about 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards, things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison: literally from horse-drawn carts delivering milk when she was a child in poor SE London, to today’s world of self-driving cars and the exploration of our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum experienced.

The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change-averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than new experiences. It simply takes more mental effort to manage new stuff than familiar stuff. As a result we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.

Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and more remotely, than our ancestors dreamed of. The internet gives us unprecedented access to both information and connectivity, but is also a source of misinformation and manipulation.

The Abundance Dichotomy: Innovation increases abundance, but it’s arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequalities in distribution of the ‘wealth’ it creates. Behavioral science has shown us consistently that humans make far more relative than absolute judgments. Being better off than our ancestors actually doesn’t do much for us. Instead we are far more interested in being better off than our peers, neighbors, or the people we compare ourselves to on Instagram. And therein lies yet another challenge. Social media means we now compare ourselves to far more people than past generations did, meaning that the standards we judge ourselves against are higher than ever before.

Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them when they have already become a significant problem. Climate change is of course a poster child for this, as a huge unanticipated consequence of the industrial revolution. The same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.

The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few if any of us really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with ever less understanding of the direct, let alone unintended, consequences of these complex changes!

The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base inevitably leads to multiple further innovations. The connectivity and information access of the internet alone is driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone Age innovation was slow not because our ancestors lacked intelligence. To the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’: we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. But the trouble with chain reactions is that they can be very hard to control.

Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation. How does human culture exist for 50,000 years or more, and then ‘suddenly’ two people, Darwin and Wallace, come up with the theory of evolution independently and simultaneously? The same question applies to calculus (Newton and Leibniz), or the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It’s not coincidence, but simply reflects that once all of the pieces of a puzzle are in place, somebody, or more likely multiple people, will inevitably make connections and see the next step in the innovation chain.

As innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. But unfortunately the associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. As we look forward, just taking AI as an example, we can already somewhat anticipate some worrying possibilities. But what about the surprises analogous to climate change that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness, but despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.

Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police. North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to initially create, opens numerous doors for misuse that require far fewer resources.

The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Strassmann and Hahn in the late 1930s made the bomb inevitable. This set the stage for a race between the Allies and Nazi Germany to turn theory into practice. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or choose to legislate against them, but in so doing cede that terrible power to the Nazis. Not an enviable choice.

Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing them underground, or falling behind and deferring to others. The recent open letter from Meta to the EU chastising it for the potential economic impacts of its AI regulations (https://euneedsai.com/) may have dripped with self-interest. But that didn’t make it wrong. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations, and less conservative countries, will still pursue the upside, and risk the downside. The cat is very much out of the bag.

Muddling Through:  The good news is that when faced with potentially perilous change in the past, we’ve muddled through.  Hopefully we will do so again.   We’ve avoided a nuclear holocaust, at least for now.  Social media has destabilized our social order, but hasn’t destroyed it, yet.  We’ve been through a pandemic, and come out of it, not unscathed, but still functioning.  We are making progress in dealing with climate change, and have made enormous strides in managing pollution.

Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge, mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will the side effects, nefarious uses, mistakes and unintended consequences that derive from it. Key factors that have helped us in the past are time and resources, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.

What can, or should, we do? I certainly don’t have simple answers. We’re all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically either understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We’ll never anticipate everything, and we’ll still get blindsided. And putting resources against ‘what if’ scenarios is always a hard sell. But maybe we need to go into sales mode.

Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource against predicting and managing its own risks?

With many emerging technologies, we are now where we were in the early 1900s with climate change. We are implementing massive, unpredictable change, and by definition have no idea what its unanticipated consequences will be. I personally think we’ll deal with climate change. It’s difficult to slow a leviathan that’s been building for over a hundred years. But we’ve taken the important first steps in acknowledging the problem, and are beginning to implement corrective action.

But big issues require big solutions. Long-term, I personally believe the most important thing is for humanity to escape the gravity well. Given the scale of our ability to create global change, interplanetary colonization is not a luxury, but a necessity. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage that, and avoid nuclear war or synthetic pandemics for long enough to achieve it. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.

Image credits: Microsoft Copilot


SpaceX is a Masterclass in Innovation Simplification


GUEST POST from Pete Foley

This capture from a recent SpaceX tweet is a stunning visual example of smart innovation and simplification. 

While I’m not even close to being a rocket scientist, and so am far from familiar with all of the technical details, I’ve heard that the breakthroughs incorporated into this include innovative sensor design that allows for streamlined feedback loops. But this goes beyond impressive technical innovation. Innovating at this level requires organizational and cultural flexibility as well as technical brilliance. That flexibility is probably far more broadly transferable and adoptable than specific advances in rocket science, and hence more useful to the broader innovation community. So let’s dig a little deeper into that space.

Secret Sauce? Organizationally, SpaceX is well known for less formal hierarchies, passion, ownership, and engineers working on the production floor. This hands-on approach creates a different, but important, kind of feedback, while passion feeds the intrinsic motivation, ownership and engagement that are so critical to consistent innovation.

Learning from Failure: An Innovation Superpower? But perhaps most important of all is the innovation culture. Within SpaceX there is a very clear willingness to experiment and learn from failure. Not lip service, or the sometimes half-hearted embrace of failure often found in large, bureaucratic organizations, where rewards and career progression often don’t reflect the mantra of learning by failing. This is an authentic willingness to publicly treat productive failure of individual launches as a learning success for the program, and to reward productive failure and appropriate risk taking. Of course, it’s not always easy to walk the talk of celebrating failure, especially in spacecraft design, where failures are often spectacular, public, and visual gold for the media. And no doubt this is compounded by Musk’s controversial public profile, where media and social media are often only too keen to highlight failures. But the visual of Raptor 3 is for me a compelling advertisement for authentically embedding learning by failure deeply into the DNA of an innovative organization.

Stretch Goals: Musk is famous for, and sometimes ridiculed for, setting ambitious stretch goals, and for not always achieving them. But in a culture where failure is tolerated, or if done right, celebrated, missing a stretch goal is not a problem, especially if it propels innovation along at a pace that goes beyond conventional expectations.

Challenging Legacy and ‘Givens’: Culturally, this kind of radical simplification requires the systematic challenge of givens that were part of previous iterations. You cannot make these kinds of innovation leaps unless you are both willing and able to discard legacy technical and organizational structures.

At the risk of kicking Boeing while it is down, it is hard not to contrast SpaceX with Boeing, whose space (and commercial aviation) program is very publicly floundering, and facing the potentially humiliating prospect of needing rescue from the more agile SpaceX program.

Innovation Plaque: But in the spirit of learning from failure, if we look a bit deeper, perhaps it should not be a surprise that Boeing is struggling to keep up. It has a long, storied, and successful history as a leader in aerospace. But history and leadership can be a blessing and a curse, as I know from P&G. They bring experience, but also bureaucracy, rigid systems, and deeply rooted culture that may or may not be optimal for managing change. Deep institutional knowledge can be a similarly mixed blessing. It of course allows easy access to in-domain experience, and is key to not repeating past mistakes or making naïve errors. But it also comes with an inherent bias towards traditional solutions and technologies. Perhaps even more important is the organizationally remembered pain of past failures, especially if a ‘learn by failure’ culture isn’t fully embraced. Failure is good at telling us what didn’t work, and plays an important role in putting processes in place that help us avoid repeating errors. But over time these ‘defensive’ processes can build up like plaque in an artery, making it difficult to push cutting-edge technologies or radical changes through the system.

Balance is Everything: Nobody wants to be the Space Cowboy. Space exploration is expensive, and risks the lives of some extraordinarily brave people. Getting the balance right between risk taking and the right kind of failure is even more critical here than in most other contexts. But SpaceX is doing it right, at least so far. Whatever the technical details, the impact on speed, efficiency and dollars behind the simplification of Raptor 3 is stunning. I suspect that reliability and efficiency will ultimately also be helped by increased simplicity. But it’s a delicate line. The aforementioned ‘plaque’ does slow the process, but done right, it can also prevent unnecessary failure. It’s important to be lean, but not to ‘slice the salami’ too thin. Great innovation teams mix diverse experience, backgrounds and personalities for this reason. We need the cynic as well as the gung-ho risk taker. For SpaceX, so far, so good, but it’s important that they don’t become overconfident.

The Elon Musk Factor: For anyone who hasn’t noticed, Musk has become a somewhat controversial figure of late. But even if you dislike him, you can still learn from him, and as innovators, I don’t think we can afford not to. He is the most effective innovator, or at least innovation leader, of at least a generation. The teams he puts together are brilliant at challenging ‘givens’, and breaking out of legacy constraints and the ‘ghosts of evolution’. We see it across the SpaceX design, not just the engine, but also the launch systems, recycling of parts, etc. We also see an analogous innovation strategy in the way Tesla cars so dramatically challenged so many givens in the auto industry, or in the Boring Company in my hometown of Las Vegas.

Ghosts of Evolution: I mentioned the challenges of legacy designs and legacy constraints. I think this is central to SpaceX’s success, so it’s worth going a little deeper on this topic. Every technology, and every living thing on our planet, comes with its own ghosts. They are why humans have a literal blind spot in our vision, why our bodies’ pleasure centers are co-located with our effluent outlets, and why the close proximity of our air and liquid/solid intakes leads to thousands of choking deaths every year. Nature is largely stuck with incrementally building on top of past designs, often leading to the types of inefficiency described above. Another example is the Pronghorn antelope that lives in my adopted American West. It can achieve speeds of close to 90 km/h (around 55 mph). This is impressive, but vastly over-designed and inefficient for its current environment. It is a legacy design, evolved at a time when it was predated upon by the long-extinct American cheetah. It cannot simply undo that capability now that it’s no longer useful. So far, it’s survived this disadvantage, but it is vulnerable to both competition and a changing environment simply because it is over-designed.

Bio-Inspiration: I’ve long believed we can learn a great deal from nature and bio-inspired design, but sometimes learning what not to do is as useful as ‘stealing’ usable insights. It’s OK to love nature, but also to acknowledge that evolution has far more failures than successes. There are far, far more extinct species than living ones. And virtually every one was either too specialized, or lacked the ability to pivot and adapt in the face of changing context.

As innovators, we have the unique option of creating totally new 2.0 designs, and challenging the often unarticulated givens that are held within a category. And we have the option of changing our culture and organizational structures too. But often we fail to do so because we are individually or organizationally blind to legacy elements that are implicitly part of our assumptions for a category or a company. The fish doesn’t see the water, or at least not until it’s dangling from a hook. By then it’s too late. Whatever you think of Musk, he’s taught us it is possible to create innovation cultures that challenge legacy designs extremely effectively. It’s a lesson worth learning.

Image credits: Twitter (via SpaceX)


Cover versions, Sequels, Taylor Swift and Innovation


GUEST POST from Pete Foley

An inherent contradiction in almost any new innovation is that it needs to be both new and somewhat familiar. If it doesn’t offer anything new, there is little motivation for consumers to risk abandoning existing habits or preferences to try it. But if it is not at least anchored in familiarity, then we ask consumers to put a lot of effort into understanding it, in addition to any opportunity cost from what they give up by trying something new. Innovation is difficult, and a lot of innovations fail, at least in part because of this fundamental contradiction.

Transformative Performance: Of course, innovations can be successful, which means we do navigate this challenge. But how? One way is to deliver something with such transformative benefits that people are willing to push themselves over the hump of learning something new. Huge benefits also create their own ‘gravity’, often spreading via word of mouth through media, social media, and even old-fashioned human-to-human conversations. This avoids the need for brute-force mass marketing spend, which can create the illusion of familiarity, but comes with a hefty price tag that is typically beyond smaller companies.

Familiarity: The second option is to leverage what people already know in such a way that the ‘adoption hump’ becomes relatively insignificant, because new users intuitively know what the innovation is and how to use it.

Wow! The best innovations do both. ChatGPT’s generative AI is a contemporary example, where transformative performance has created an enormous amount of word of mouth, but the interface is so intuitive there is little barrier to adoption, at least superficially.

Of course, using it skillfully is another thing altogether, but I think there is an insight there too.  It’s OK to have an ongoing learning curve after initial adoption, but initial engagement needs to be relatively simple.  The gaming industry are masters of this.    

Little Wows! ChatGPT is a brilliant innovation. But realistically, few of us are going to create something quite that extraordinary. So how do we manage to create more modest wows that still drive trial, engagement and ultimately repeat business?

Science, Art and Analogy: As a believer that a lot of interesting things happen at the interface between science and art, and that analogy is a great tool, I think we can learn a little about solving this by taking insight from the arts. In this case, music and movies. For example, popular music routinely plunders the familiar and repackages it as new via cover versions. I often do the same myself! Movies do something similar, either with the cycle of remakes of classic movies, or with sequels that often closely follow the narrative structure of the original.

But this highlights some of the challenges in solving this dichotomy.  It’s rare for a remake, cover version, or sequel to do better than the original.  But a few do, so what is their secret?  What works, and what doesn’t? 

1. Distance from the Original. Some of the best movie remakes completely reframe the original in ways that maintain a largely implicit familiarity, but do so without inviting direct comparisons of alignable differences to the original. For example, West Side Story is a brilliant retelling of Romeo and Juliet, Bridget Jones’s Diary reframes Pride and Prejudice, She’s All That is a retelling of George Bernard Shaw’s Pygmalion, while The Lion King retools Hamlet, etc. I’m not suggesting that nobody sees these connections, but many don’t, and even if they do, the context is sufficiently different to avoid constant comparisons throughout the experience. And of course, in most of these cases, the originals are not contemporary, so there is temporal as well as conceptual distance between original and remake. Similarly with cover versions, Hendrix and the Byrds both completely and very successfully reframed Dylan (All Along the Watchtower and Mr. Tambourine Man). Sinead O’Connor achieved similar success with Prince’s “Nothing Compares 2 U”. And for those of you with less grey in your hair, last summer’s cover of Tracy Chapman’s ‘Fast Car’ by Luke Combs shows that covers can still do this.

2. Something New. A different way to fail is to tap familiarity without adding anything sufficiently new or interesting. All too often covers, sequels and remakes are simply weaker copies of the original. I’m sure that anyone reading this can come up with their own examples of a disappointing remake or sequel. Footloose, Annie, Psycho, Tom Cruise’s The Mummy and The Karate Kid are all candidates for me. As for sequels, again, I’m sure you can all name a respectable list of your own wasted two hours, with Highlander II and Jaws: The Revenge being my personal cures for insomnia. And even when we include novelty, it cannot be too predictable. It needs to be at least a little surprising. For example, the gender reversal in the remake of Overboard is a point of difference compared to the Goldie Hawn original, but it’s not exactly staggeringly novel or surprising. It’s a lot like a joke: if you can see it coming, it’s not going to create a wow.

3. Don’t Get De-Selected. Learning from the two previous approaches can help us create sufficient separation from past experience to engage and hopefully delight potential consumers. But it’s important not to get carried away and become untethered from familiarity. For example, I personally enjoy a lot of jazz, but despite their often extraordinary skill, jazz musicians don’t fill many arenas. That’s in part because jazz asks the listener to invest a lot of cognitive bandwidth and time to develop an ‘ear’, or musical expertise, in order to appreciate it. It often moves a long way from the familiar original, and adds a lot of new into the equation. As a result, it is a somewhat niche musical form. Pop music generally doesn’t require the same skill or engagement, and successful artists like Taylor Swift understand that. And when it comes to innovation, most of us want to be mainstream, not niche. This is compounded because consumers today face a bewildering array of options, and a huge amount of information. One way our brains have evolved to deal with complexity is to quickly ignore or ‘de-select’ things that don’t appear relevant to our goals. A lot of the time, we do this unconsciously. Faced with more information than we can process, we quickly narrow our choices down to a consideration set that is ‘right-sized’ for us to make a decision. From an innovation perspective, if our innovations are too ‘jazzy’, they risk being de-selected by a majority of consumers before they can be fully appreciated, or even consciously noticed.

There’s no precise right or wrong strategy in this context. It’s possible to deliver successful innovations by tapping and balancing these approaches in many different ways. But there are certainly good and bad executions, and I personally find it helpful to use these kinds of analogy when evaluating an innovation. Are we too jazzy? Do we have separation from incumbents that is meaningful for consumers, and not just for ourselves? The latter is a real challenge for experts. When we are deeply engaged in a category, it’s all too easy to get lost in the magic of our own creations. We see differences more clearly than consumers do. It’s easy to become overly excited by relatively small changes that lack sufficient newness and separation from existing products for consumers who are nowhere near as engaged in our category as we are. But it’s also easy to create ‘jazz’ for similar reasons, by forgetting that real-world consumers are typically far less interested in our products than we are, and so miss the brilliance of our ‘performance’, or perhaps don’t ‘get it’ at all.

For me, it is useful to simply ask myself whether I’m a Godfather II or a Highlander II, a Taylor Swift or a Dupree Bolton, or even Larry Coryell.  And there’s the rub.  As a musician, I’d rather be Larry, but as a record company exec, I’d far rather have Taylor Swift on my label. 

Image credits: Wikimedia Commons


AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!


GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI. It opens doors that were, for me at least, barely imaginable 10 years ago. And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it’s hard to imagine a more useful tool.

But it is still a tool. And as with any tool, learning its limitations, and how to use it skillfully, is key. I make the analogy to an automobile. We don’t need to know everything about how it works, and we certainly don’t need to understand how to build it. But we do need to know what it can, and cannot, do. We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant? An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi’s excellent TED talk). It has phenomenal ‘data intelligence’, but can also fail on even simple logic puzzles. Part of the problem is that AI lacks ‘common sense’, or the implicit framework that filters a great deal of human decision making and behavior. Choi calls this the ‘dark matter’ of common sense in decision-making. I think of it as the framework of knowledge, morality, biases and common sense that we accumulate over time, and that is foundational to the unconscious ‘System 1’ elements that influence many, if not most, of our decisions. But whatever we call it, it’s an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from being unique in having limitations in the quality of its output. Any information source we use is subject to errors. We all know not to believe everything we read on the internet. That makes Google searches useful, but also potentially flawed. Even consulting with human experts has pitfalls. Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error. But most of us have learned to be appropriately skeptical of these sources of information. We routinely cross-reference, challenge data, seek second opinions, and do not simply ‘parrot’ the data they provide.

But increasingly with AI, I’ve seen a tendency to treat its output with perhaps too much respect. The reasons for this are multi-faceted, but very human. Part of it may be the potential for generative AI to provide answers in an apparently definitive form. Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy. Another element is the ability it gives us to quickly penetrate areas where we may have little domain knowledge or background. As I’ve already mentioned, this is fantastic for those of us who value exploring new domains and analogies. But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is for us to miss even basic mistakes.

As for AI’s limitations, Choi provides some sobering examples. It can pass a bar exam, but fail abysmally on simple logic problems. For example, it has suggested that crossing a bridge suspended over broken glass and nails is likely to cause punctures! It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials. Of course, these negative examples are somewhat cherry-picked to make a point, but they do show how poor some AI answers can be, and how low in common sense. When the errors are this obvious, we should automatically filter them out with our own common sense. But the challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI? At the root of this is that implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say ‘remove the pot from the heat’, but rarely says ‘remove the pot from the heat and don’t stick your fingers in the flames’. We’re supposed to know that already. Because it is ‘obvious’, and processed quickly, unconsciously and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said. Because we don’t tend to state the obvious, it is challenging for an AI to learn it. It learns to take the pot off of the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is of course a known problem, and several strategies are employed to help address it. These include manually adding crafted examples and direct human input into AI’s training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated or highlighted in AI training, the risk of transferring specific human biases to that AI increases. It also creates the potential for competing AIs with different ‘viewpoints’, depending upon differences in both human input and the choices around what datasets are scraped. There is a ‘nature’ component to the development of AI capability, but also a ‘nurture’ influence. This is of course analogous to the influence that parents, teachers and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks. Parents, peers and teachers provide generational variety, and the gradual and layered process that builds the human implicit decision framework helps us evolve a supporting network of contextual insight. It’s obviously imperfect, and the current culture wars are testament to some profound differences in end result. But to a large extent, we evolve similar, if not identical, common sense frameworks. With AI, the narrower the group contributing to its curated ‘education’, the greater the risk of both intentional and unintentional bias, and of ‘divergent intelligence’.

What Can We Do? The most important thing is to be skeptical about AI output. Just because it sounds plausible, don’t assume it is. Just as we’d not take the first answer on a Google search as absolute truth, don’t do the same with AI. Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references). And of course, the more important the output is to us, the more important it is to check it. As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible, or fits our theory or worldview. But always challenge the illusion of omnipotence that AI creates. It’s probably correct, but especially if it’s providing an important or surprising insight, double-check it.
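As a small illustration of what a first-pass check might look like, here is a minimal sketch in Python. It is my own hypothetical helper, not a tool from any AI vendor: it pulls the URLs out of a generated answer and tests whether they even resolve. A link that resolves can still fail to support the claim, so this only filters out the most obviously fabricated references; the real checking is still human work.

```python
# Minimal sketch: a first-pass sanity check on URLs cited in an AI answer.
# A resolving link is not proof the reference is real or relevant; a broken
# link, however, is a strong hint the citation may be fabricated.

import re
import urllib.request

def check_cited_urls(answer: str, timeout: float = 5.0) -> dict:
    """Extract http(s) URLs from the answer and test whether each resolves."""
    urls = re.findall(r"https?://[^\s)\]>\"']+", answer)
    results = {}
    for url in urls:
        try:
            # HEAD keeps it lightweight; note some servers reject HEAD requests.
            req = urllib.request.Request(url, method="HEAD",
                                         headers={"User-Agent": "ref-check/0.1"})
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status < 400
        except Exception:
            results[url] = False  # DNS failure, timeout, 404, etc.
    return results

# Hypothetical AI answer containing one real and one invented reference.
answer = ("See https://en.wikipedia.org/wiki/Common_sense "
          "and https://example.invalid/paper42")
for url, ok in check_cited_urls(answer).items():
    print("resolves" if ok else "broken  ", url)
```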

The Sci-Fi Monster! The concept of a childish superintelligence has been explored by more than one science fiction writer. But in many ways that is what we are dealing with in the case of AI. Its informational ‘IQ’ is greater than its contextual or common-sense ‘IQ’, making it a different type of intelligence from those we are used to. And because so much of the human input side is proprietary and complex, it’s difficult to determine whether bias or misinformation is included in its output, and if so, how much. I’m sure these are solvable challenges. But some bias is probably unavoidable the moment any human intervention or selection invades the choice of training materials or their interpretation. And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that narrowing of sources will result in different AIs with different ‘experiences’, and hence potentially different answers to questions.

AI is an incredible gift, but like the three wishes from the genie of Aladdin’s lamp, use it wisely and carefully. A little bit of skepticism and some human validation are a good idea. Something that can pass the bar but lacks common sense is powerful; it could even get elected. But don’t automatically trust everything it says!

Image credits: Pexels


Las Vegas Formula One

Successful Innovation, Learning Experience or Total Disaster?

GUEST POST from Pete Foley

In Las Vegas, we are now clearing up after the Formula 1 Grand Prix on the Strip. This extremely complex event required a great deal of executional innovation, and it is one that I think we, as innovators, can learn quite a lot from.

It was certainly a bumpy ride, both for the multi-million dollar Ferrari that hit an errant drain cover during practice, and with respect to broader preparation, logistics, pricing and projections of consumer behavior. Despite this, the race itself was exciting and largely issue-free, and even won over some of the most skeptical drivers. In terms of Kahneman’s peak-end effects, there were both memorable lows and a triumphant end result. So did this ultimately amount to success?

Success? For now, I think it very much depends upon your perspective and who you talk to. Perhaps it’s a sign of the times, but in Las Vegas, the race was extremely polarizing, with heated debates between pro- and anti-F1 camps that were often as competitive as the race itself.

The reality is that it will be months, or more likely years, before the dust settles and we know the answer. And I strongly suspect that even then, those who are for and against it will all likely be able to claim support for their point of view. One insight I think innovators can take from this is that success can be quite subjective in and of itself, and greatly depends upon what factors you measure, what period of time you measure over, and often your ingoing biases. And the bigger and more complex the innovation, often the harder it is to define and measure success.

Compromise Effects: When you launch a new product, it is often simpler and cheaper to measure its success narrowly, in terms of its specific dollar contribution to your business. But this often misses its holistic impact. Premium products can elevate an entire category or brand, while poorly executed innovations can do the opposite. For example, the compromise effect from Behavioral Economics suggests that a premium addition to a brand lineup can shift the ‘Good, Better, Best’ spectrum of a category upwards. This can boost dollar sales across a lineup, even if the new premium product itself has only moderate sales. For example, the addition of high-priced wines to a menu can often increase the average dollars per bottle spent by diners, even if the expensive wine itself doesn’t sell. The expensive wines shift the ‘safe middle’ of the consideration set upwards, and thus increase revenue, and hopefully profit.
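To make that arithmetic concrete, here is a toy sketch in Python. The prices and choice shares are hypothetical illustrations, not data from any real menu; the point is simply that adding a rarely chosen premium option can move the ‘safe middle’, and with it the average spend.

```python
# Toy model of the compromise effect (hypothetical prices and choice shares,
# purely for illustration). Diners tend to avoid the extremes of a range,
# so choice share concentrates on whatever reads as the 'safe middle'.

def average_spend(prices, shares):
    """Expected spend per bottle, given each option's share of choices (shares sum to 1)."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return sum(p * s for p, s in zip(prices, shares))

# Two-bottle menu: $40 is the 'expensive extreme', so most diners pick $20.
before = average_spend([20, 40], [0.7, 0.3])           # $26 per bottle

# Add a $100 bottle: now $40 reads as the safe middle, $100 as the extreme.
after = average_spend([20, 40, 100], [0.3, 0.6, 0.1])  # $40 per bottle

print(f"Average spend before: ${before:.2f}, after: ${after:.2f}")
# The $100 bottle sells only 10% of the time, yet average spend rises ~54%.
```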

Money, Scope and Intangibles:  In the case of F1, how far can and should we cast the net when trying to measure success?  Can we look just at the bottom line?  Did this specific weekend bring in more than the same weekend the previous year in sports betting, rooms and entertainment?  Did that difference exceed the investments? 

Or is that too narrow? What about the dollar impact on the weeks surrounding the event? We know that some people stayed away because of the construction and congestion in the lead-up to the race. That should probably be added into, or subtracted from, the equation.

And then there’s the ‘who won and who lost’ question. The benefits and losses were certainly not homogeneous across stakeholders. The big casinos benefited disproportionately in comparison to the smaller restaurants that lost business due to construction, some to a degree that almost rivaled Covid. Gig workers also fared differently. I have friends who gained business from the event, and friends who lost it. Many Uber drivers simply gave up and stopped working, but those who stayed, and the high-end limo drivers, likely had bumper weekends. Entertainers working shows that were disrupted by F1 lost out, but the plethora of special events that came with F1 also provided a major uptick in business for many performers and entertainers.

There is also substantial public investment to consider. Somewhat bizarrely, the contribution of public funds was not agreed prior to the race, and the public-private cost sharing of tens of millions is still being negotiated. But even facing that moving target, did increased (or decreased) tax income before, during and after the race offset those still-to-be-determined costs?

Intangibles: And then there are the intangibles. While Vegas is not exactly an unknown entity, F1 certainly upped its exposure, or in marketing terms, its mental availability. It brought Vegas into the news, but was that in a positive or negative light? Or is all publicity good publicity in this context? News coverage was mixed, with a lot of negative focus on the logistical issues, but also global coverage of what was generally regarded as an exciting race. And of course, that media coverage also by definition marketed other businesses, including the spectacular Sphere.

Logistics: Traffic was a nightmare, with many who work on the Strip facing unprecedented delays in their commutes for many weeks, some going from minutes to hours. This reached a point where casinos were raffling substantial prizes, including a Tesla, just to persuade people not to call in sick. Longer term, it’s hard to determine the impact on employee morale and retention, but it’s hard to imagine that it will be zero, and that brings costs of its own that go well beyond a raffled Tesla.

Measuring Success? In conclusion, this was a huge operation, and its impact is by definition going to be multidimensional. The outcome was, not surprisingly, a mixed bag. It could have been a lot better, or a lot worse. And even as the dust settles, it’s likely that different groups will be able to cherry-pick data to support their current opinions and biases.

Innovation Insights:  So what are some of the more generalized innovation insights we can draw?

(a) Innovation is rarely a one-and-done process. We rarely get it right first time, and the bigger and more complex an innovation is, the more we usually have to learn. F1 is the poster child for this, and the organization is going to have an enormous amount of data to plough through. The value of this will greatly depend on F1’s internal innovation culture. Is it a learning organization? In a situation like this, where billions of dollars, and careers, are on the line, will it be open or defensive? Great innovation organizations mostly put defensiveness aside, actively learn from mistakes, and adopt Devil’s Advocate approaches to learn from hard-earned data. But culture is deeply embedded, and difficult to change, so much depends on the current culture of the organizations involved.

(b) Going Fast versus Going Slow: This project moved very, very quickly. Turning a city like Las Vegas from scratch into a top-of-the-line race track in less than a year was a massive challenge. The upside is that if you go fast, you learn fast. And the complexity of the task meant much of the insight could pragmatically only be gained ‘on the ground’. But conversely, better scenario planning might have helped anticipate some of the biggest issues, especially around traffic disruption, loss of business for smaller organizations, commuting issues and community outreach. And things like not finalizing public-private contracts prior to execution will likely end up prolonging the agony. Whatever our innovation is, big or small, hitting the sweet spot between winging it and over-thinking is key.

(c) Understanding Real Consumer Behavior: The casinos got pricing horribly wrong. When the race was announced, hotel prices and race packages for the F1 weekend went through the roof. But in the final run-up to the race, prices for both rooms and the race itself plummeted. One news article reported a hotel room on the Strip as low as $18! Tickets for the race that the previous month had cost $1,600 had dropped to $800 or less on race day. Visitors who had earlier paid top dollar for rooms were reported to be cancelling and rebooking, while those locked into rates were frustrated. There is even a major lawsuit in progress around a cancelled practice. I don’t know any details of how pricing was researched, and predicting the market for a new product or innovation is always a challenge. In addition, the bigger the innovation, the more challenging the prediction game is, as there are fewer relevant anchors for consumers or the business to work from. But I think the generalizable lesson for all innovators is to be humble. Assume you don’t know, that your models are approximate, do as much research as you can in contexts that are as close to realistic as possible, don’t squeeze margins based on unrealistic expectations for the accuracy of business models, and build as much agility into innovation launches as possible. Easier said than done, I know, but one of the most consistent reasons for new product failure is overconfidence in understanding real consumer response when the rubber hits the road (pun intended), and how it can differ from articulated consumer response derived in unrealistic contexts. Focus groups and online surveys can be quite misleading when it comes down to the reality of handing over hard cash, opportunity cost, or how we value our precious time, short versus long-term.

Conclusion: Full disclosure, I’ve personally gone through the full spectrum with Formula 1 in Vegas. I loved the idea when it was announced, but six months of construction, disruption, and the prospect of another two months of tear-down have severely dented my enthusiasm. Ultimately I went from coveting tickets to avoiding the event altogether. People I know range from ecstatic to furious, and everything in between. Did I mention it was polarizing?

The reality is that this is an ongoing innovation process. There is a 3-year contract with options to extend to 10 years. How successful it ultimately is will likely depend heavily on how good a learning and innovation culture Formula 1 and its partners are, or can become. It’s a steep and expensive learning curve, and how it moves forward is going to be interesting if nothing else. And being Vegas, we have both CES and the Super Bowl to distract us in the next few months, before we start preparing again for next year.

Image credits: Pexels


Eddie Van Halen, Simultaneous Innovation and the AI Regulation Conundrum


GUEST POST from Pete Foley

It’s great to have an excuse to post an Eddie Van Halen video to the innovation community. It’s of course fun just to watch Eddie, but I also have a deeper, innovation-relevant reason for doing so.

Art & Science: I’m a passionate believer in cross-pollination between art and science. And I especially believe we can learn a great deal from artists and musicians like Eddie who have innovated consistently over a career. Dig into their processes, and we see serial innovators like The Beatles, Picasso, Elton John, Bowie, George Martin, Freddie Mercury, William Gibson, Lady Gaga, Paul Simon and so many others apply techniques that are highly applicable to all innovation fields. Techniques such as analogy, conceptual blending, collaboration, reapplication, boundary stretching, risk taking, learning from failure and T-shaped innovation all crop up fairly consistently. And these creative approaches are typically also built upon deep expertise, passion, motivation, and an ability to connect with future consumer needs and tap into early adopters and passionate consumers. For me at least, that’s a pretty good toolkit for innovation in any field. Now, to be fair, their process is often intuitive, and many truly prolific artists are lucky enough to automatically and intuitively ‘think that way’. But understanding and then stealing some of their techniques, either implicit or explicit, can be a great way both to jump-start our own innovative processes and to understand how innovation works. As Picasso said, ‘great artists steal’, but I’d argue that so do good innovators, at least within the bounds allowed by the patent literature!

In the past I’ve written quite a lot about Picasso’s and The Beatles’ use of conceptual blending, Paul Simon’s analogies, reapplication and collaboration, Bowie’s innovative courage, and William Gibson’s ability to project S-curves. Today, I’d like to focus on some insights I see in the guitar innovations of Eddie.

(a) Parallel or Simultaneous Innovation. I suspect this is one of the most important yet under-appreciated concepts in innovation today. Virtually every innovation is built upon the shoulders of giants. Past innovations provide the foundation for future ones, to the point where once the pieces of the puzzle are in place, many innovations become inevitable. It still takes an agile and creative mind to come up with innovative ideas, but contemporary innovations often set the stage for the next leap forward. And this applies both to the innovative process and to a customer’s ability to understand and embrace it. The design of the first skyscraper was innovative, but it was made a lot more obvious by the construction of the Eiffel Tower. The ubiquitous mobile phone may now seem obvious, but it owes its existence to a very long list of enabling technologies that paved the way for its invention, from electricity to chips to Wi-Fi, etc.

The outcome of this ‘stage setting’ is that often even really big innovations occur simultaneously yet independently. We’ve seen this play out with calculus (independently developed by Newton and Leibniz), the atomic bomb, where Oppenheimer and company only just beat the Nazis, the theory of evolution, the invention of the thermometer, nylon and so many others. We even see it in evolution, where scavenging birds like vultures and condors superficially appear quite similar due to adaptations that allow them to eat carrion, but actually have quite different genetic lineages. Similarly, many marsupials look very similar to the placental mammals that fill similar ecological niches, but evolved independently. Context has a huge impact on innovation, and similar contexts typically create parallel, and often similar, innovations. As the world becomes more interconnected, and context becomes more homogenized, we are going to see more and more examples of simultaneous innovation.

Faster and More Competitive Innovation: Today social media, search technology and the web mean that more people know more of the same ‘stuff’ more quickly than ever before. This near-instantaneous and democratized access to the latest knowledge sets the scene and context for a next generation of innovation that is faster and more competitive than we’ve ever seen. More people have access to the pieces of the puzzle far more quickly than ever before; background information that acts as a precursor for the next innovative leap. Eddie had to go and watch Jimmy Page live and in person to get his inspiration for ‘tapping’. Today he, and a few million others, would simply need to go onto YouTube. He therefore discovered Page’s hammer-ons years after Page started using them. Today it would likely be days. That acceleration of ‘innovation context’ has a couple of major implications:

1. If you think you’ve just come up with something new, it’s more than likely that several other people have too, or will do so very soon. More than ever before, you are likely in a race from the moment you have an idea! So snooze and you lose. Assume several others are working on the same idea.

2. Regulating innovation is becoming really, really difficult. I think this is possibly the most profound implication. For example, a very current and somewhat contentious topic today is whether and how we should regulate AI. And it’s a pretty big decision. We really don’t know how AI will evolve, but it is certainly moving very quickly, and comes with the potential for earthshaking pros and cons. It is also almost inevitably subject to simultaneous invention. So many people are working on it, and so much adjacent innovation is occurring, that it’s somewhat unlikely that any single group is going to get very far out in front. The proverbial cat is out of the bag, and the race is on. The issue for regulation then becomes painfully obvious. Unless we can somehow implement universal regulation, any regulations simply slow down those who follow the rules. This unfortunately opens the door to bad actors taking the lead, and controlling potentially devastating technology.

So we are somewhat damned if we do, and damned if we don't.  If we don't regulate, we run the risk of potentially dangerous technology getting out of control.  But if we do regulate, we run the risk of enabling bad actors to own that dangerous technology.  We've of course been here before.  The race for the nuclear bomb between the Allies and the Nazis was a great example of simultaneous innovation with potentially catastrophic outcomes.  Imagine if we'd decided fission was simply too dangerous, and regulated its development to the point where the Nazis got there first.  We'd likely be living in a very different world today!  Much like AI, it was a tough decision: even in pressing ahead without regulation, there was a small but real chance that the outcome would be devastating.

Today we have a raft of rapidly evolving technologies that I'd both love to regulate, and yet am profoundly worried about the unintended consequences of regulating.  AI of course, but also genetic engineering, gene-manipulating medicines, even climate remediation and behavioral science!  With respect to the latter, the better we get at nudging behavior, and the more reach we have with those techniques, the more dangerous misuse becomes.

The core problem underlying all of this is that we are human.  Most people try to do the right thing, but there are always bad actors.  And even those trying to do the right thing all too often get it wrong.  As access to cutting-edge insight becomes more democratized, parallel innovation means we have more and more contenders for mistakes and bad choices, intentional or unintentional.

(b) Innovation versus Invention:  A less dramatic, but I think similarly interesting insight we can draw from Eddie lies in the difference between innovation and invention. He certainly wasn't the first guitarist to use the tapping technique.  That goes back centuries, at least as far as the classical composer Paganini, and it was a required technique for playing the Chapman Stick in the 1970s, popularized by the great Tony Levin in King Crimson. It was also widely, albeit sparingly (and often obscurely), used by jazz guitarists in the 1950s and 60s. But Eddie was the first to feature it, and turn it into a meaningful innovation in its own right. Until him, nobody had packaged the technique in a way that it could be 'marketed' and 'sold' as a viable product. He found the killer application, made it his own, and made it a 'thing'. I would therefore argue that he wasn't the inventor, but he was the 'innovator'.  This points to the value of innovation over invention.  If you don't have the capability or the partners to turn an invention into something useful, it's still just an idea.  Invention is a critical part of the broader innovation process, but in isolation it's more curiosity than utility. Innovation is about reduction to practice and communication, as well as great ideas.

Art & science:  I love the arts.  I play guitar, paint, and photograph.  It’s a lot of fun, and provides a invaluable outlet from the stresses involved in business and innovation.  But as I suggested at the beginning, a lot of the boundaries we place between art and science, and by extension business, are artificial and counter-productive. Some of my most productive collaborations as a scientist have been with designers and artists. As a visual scientist, I’ve found that artists often intuitively have a command of attentional insights that our cutting edge science is still trying to understand.  It’s a lot of fun to watch Eddie Van Halen, but learning from great artists like him can, via analogy, also be surprisingly insightful and instructive.   

Image credits: Unsplash

A New Innovation Sphere

GUEST POST from Pete Foley

I'm obsessed with the newly opened Sphere in Las Vegas as an example of innovation.  As I write this, U2 are preparing for their second show there, and Vegas is buzzing about the new venue they are performing in.  That in and of itself is quite something.  Vegas is a city that is not short of entertainment and visual spectacle, so for an innovation to capture the collective imagination in this way it has to be genuinely 'Wow'.  And that 'Wow' means there are opportunities for the innovation community to learn from it.

For those of you who might have missed it, The Sphere is an approximately 20,000-seat auditorium with razor-sharp, cutting-edge multisensory capabilities: a 16K-resolution wraparound interior LED screen, speakers with beamforming and wave field synthesis technology, and 4D haptic physical effects built into the seats. The exterior of the 366-foot-high building features 580,000 sq ft of LED displays, which have transformed the already ostentatious Las Vegas skyline. Images including a giant eye, moon, earth, smiley face, Halloween pumpkin, and various underwater scenes and geometric animations light up the sky, together with advertisements that are rumored to cost almost $500,000 per day.  Add giant drone displays and giant LED screens on adjacent casinos, and Blade Runner has truly come to Vegas. But these descriptions simply don't do it justice; you really, really have to see it.

Las Vegas U2 Residency at the Sphere

Master of Attention – Leveraging Visual Science to the Full:  The outside is a brilliant example of visual marketing that leverages just about every insight possible for grabbing attention. Its scale is simply 'Wow!', and you can see it from the mountains surrounding Vegas, or from the plane as you come in to land.  The content it displays is brilliantly designed to capture attention. It has the fundamental visual cues of movement, color, luminescence, contrast and scale, but these are all turned up to 11, maybe even 12.  This alone pretty much compels attention, even in a city whose skyline is already replete with all of these.  When designing for visual attention, I often invoke the 'Times Square analogy': when trying to grab attention in a visually crowded context, signal-to-noise is your friend, and a simple, 'less is more' design can stand out against a background of intense, complex visual noise.  But the Sphere has instead leapt s-curves, leveraging new technology to be brighter, bigger, more colorful, and to create an order of magnitude more movement than its surroundings.  It visually shouts above the surrounding noise, and has created genuine separation, at least for now.

But it also leverages many other elements that we know command attention.  It uses faces, eyes, and natural cues that tap into our unconscious cognitive attentional architecture.  The giant eye, giant pumpkin and giant smiley face tap these attentional mechanisms, but in a playful way.  The orange and black of the pumpkin, or the yellow and black of the smiley face, tap into implicit biomimetic 'danger' cues, but in a way that resolves instantly, turning attention from avoid to approach.  The giant jellyfish and whales floating above the Strip tap into our attentional priority mechanisms for natural cues.  And of course, it all fits the surprisingly obvious cognitive structure that creates 'Wow!'.  A giant smiley emoji floating above the Vegas skyline is initially surprising, but also pretty obvious once you realize it is the Sphere!

And this is of course a dynamic display that, once it captures your attention, advertises the upcoming U2 show or other paid content.  As I mentioned before, that advertising does not come cheap, but it does come with pretty much guaranteed engagement.  You really do need to see it for yourself if you can, but I've also captured some video here:

The Real Innovation Magic: The outside of The Sphere is stunning, but the inside goes even further, and provides a new and disruptive technology platform that opens the door for all sorts of creativity and innovation in entertainment and beyond. The potential to leverage the super-additive power of multi-sensory combinations to command attention and emotion is staggering.

The opening act was U2, and the show has received mostly positive, but also mixed, reviews. Everyone raves about the staggering visual effects, the sound quality, and the spectacle. But some do point out that the band itself gets somewhat lost, or is overshadowed by the new technology.

But this is just the beginning.  The Sphere is a truly disruptive technology platform that fundamentally challenges the 'givens' of what a concert is. The U2 show is still based on, and marketed as, the band being the 'star' of the show. But the Sphere is an unprecedented immersive multimedia experience that can, and likely will, change that completely, making the venue itself the star. The potential for great musicians and visual and multisensory artists to create unprecedented customer experiences is enormous.  Artists from Gaga to Muse, or their successors, must be salivating at the potential to bring until-now-impossible visions to life, and to deliver multi-sensory experiences to audiences on a scale not previously imagined. Disruptive innovation often emerges at the interface of previous expertise, and the potential for hybrid sensory experiences that the Sphere offers is unprecedented. Imagine visuals created and inspired by the Webb telescope, accompanied by an orchestra that sonically surrounds the audience in ways they've never experienced or perhaps imagined. And of course, new technology will challenge new creatives to leverage it in ways we haven't yet imagined.  Cawsie Jijina, the engineer who designed the Sphere, maybe says it best:

“You have the best audio there possibly can be today. You have the best visual there can possibly be today. Now you just have to wait and let some artist meet some batshit crazy engineer and techie and create something totally new.”

This technology platform will stimulate emergent blends of creative innovation that will challenge our expectations of what a show is.  It will likely require both creatives and audiences to give up some preconceptions. But I love to see a new technology emerge in front of my eyes. We ain't seen nothing yet.

Las Vegas Sphere Halloween

Image credits: Pete Foley

When Innovation Becomes Magic

GUEST POST from Pete Foley

Arthur C. Clarke's Third Law famously stated:

“Any sufficiently advanced technology is indistinguishable from magic”

In other words, if the technology of an advanced civilization is so far beyond comprehension, it appears magical to a less advanced one. This could take the form of a human encounter with a highly advanced extraterrestrial civilization, how current technology might be viewed by historical figures, or encounters between human cultures with different levels of scientific and technological knowledge.

Clarke's law implicitly assumed that knowledge within a society is sufficiently democratized that we never view technology within our own civilization as 'magic'.  But a combination of specialization, rapid advancement in technology, and a highly stratified society means this is changing.  Generative AI, blockchain and various forms of automation are all 'everyday magic' that we increasingly use, but mostly with little more than an illusion of understanding of how they work.  More technological leaps are on the horizon, and as innovation accelerates exponentially, we are all going to have to navigate a world that looks and feels increasingly magical.  Knowing how to do this effectively is going to become an increasingly important skill for us all.

The Magic Behind the Curtain:  So what’s the problem? Why do we need to understand the ‘magic’ behind the curtain, as long as we can operate the interface, and reap the benefits?  After all, most of us use phones, computers, cars, or take medicines without really understanding how they work.  We rely on experts to guide us, and use interfaces that help us navigate complex technology without a need for deep understanding of what goes on behind the curtain.

It's a nuanced question.  Take a car as an analogy.  We certainly don't need to know how to build one in order to use one.  But we do need to know how to operate it and understand what its performance limitations are.  It also helps to have at least some basic knowledge of how it works; enough to change a tire on a remote road, or enough grasp of basic mechanics to minimize the potential of being ripped off by a rogue mechanic.  In a nutshell, the more we understand it, the more efficiently, safely and economically we leverage it.  It's a similar situation with medicine.  It is certainly possible to defer all of our healthcare decisions to a physician.  But people who partner with their doctors, and become advocates for their own health, generally have superior outcomes, are less likely to die from unintended contraindications, and typically pay less for healthcare.  And this is not trivial: the third leading cause of death in Europe, behind cancer and heart disease, is issues associated with prescription medications.  We don't need to know everything to use a tool, but in most cases, the more we know the better.

The Speed/Knowledge Trade-Off:  With new, increasingly complex technologies coming at us in waves, it's becoming increasingly challenging to make sense of what's 'behind the curtain'. This has the potential for costly mistakes.  But delaying embracing a technology until we fully understand it can come with serious opportunity costs.  Adopt too early, and we risk getting it wrong; too late, and we 'miss the bus'.  How many people who invested in cryptocurrency or NFTs really understood what they were doing?  And how many of those have lost on those deals, often to the benefit of those with deeper knowledge?  That isn't in any way to suggest that those who are knowledgeable in those fields deliberately exploit those who aren't, but markets tend to reward those who know, and punish those who don't.

The AI Oracle:  The recent rise of Generative AI has many people treating it essentially as an oracle.  We ask it a question, and it 'magically' spits out an answer in a very convincing and sharable format.  Few of us understand the basics of how it does this, let alone the details or limitations. We may not call it magic, but we often treat it as such.  We really have little choice, as we lack sufficient understanding to apply quality critical thinking to what we are told, and so have to take answers on trust.  That would be brilliant if AI were foolproof.  But while it is certainly right a lot of the time, it does make mistakes, often quite embarrassing ones. For example, Google's Bard incorrectly claimed the James Webb Space Telescope had taken the first photo of a planet outside our solar system, which led to panic selling of parent company Alphabet's stock.  Generative AI is a superb innovation, but its current iterations are far from perfect.  They are limited by the databases they are fed, are extremely poor at spotting their own mistakes, can be manipulated by the choice of datasets they are trained on, and lack the underlying framework of understanding that is essential for critical thinking or for making analogical connections.  I'm sure that we'll eventually solve these issues, either with iterations of current tech, or via integration of new technology platforms.  But until we do, we have a brilliant, but still flawed, tool.  It's mostly right, and perfect for quickly answering a lot of questions, but its biggest vulnerability is that most users have pretty limited capability to understand when it's wrong.

Technology Blind Spots: That of course is the Achilles' heel, blind spot and dilemma. If an answer is wrong, and we act on it without realizing, it's potentially trouble. But if we already know the answer, we didn't really need to ask the AI. Of course, it's more nuanced than that.  Just getting the right answer is not always enough, as the causal understanding that we pick up by solving a problem ourselves can also be important.  It helps us to spot obvious errors, but also helps to generate memory, experience, problem-solving skills, buy-in, and belief in an idea.  Procedural and associative memory is encoded differently from answers alone, and mechanistic understanding helps us to reapply insights and make analogies.

Need for Causal Understanding:  Belief and buy-in can be particularly important. Different people respond to a lack of 'internal' understanding in different ways.  Some shy away from the unknown, and avoid or oppose what they don't understand. Others embrace it, and trust the experts.  There's really no right or wrong in this.  Science is a mixture of both approaches: it stands on the shoulders of giants, but advances by challenging existing theories.  Good scientists are both data driven and skeptical.  But in some cases, skepticism based on lack of causal understanding can be a huge barrier to adoption. It has contributed to many of the debates we see today around technology adoption, including genetically engineered foods, efficacy of certain pharmaceuticals, environmental contaminants, nutrition, vaccinations, and, during Covid, RNA vaccines and even masks.  Even extremely smart people can make poor decisions because of a lack of causal understanding.  In 2003, Steve Jobs was advised by his physicians to undergo immediate surgery for a rare form of pancreatic cancer.  Instead he delayed the procedure for nine months and attempted to treat himself with alternative medicine, a decision that very likely cut his life tragically short.

What Should We Do?  We need to embrace new tools and opportunities, but we need to do so with our eyes open.  Loss aversion, and the fear of losing out, is a very powerful motivator of human behavior, and so an important driver in the adoption of new technology.  But it can be costly. A lot of people lost out with crypto and NFTs because they had a fairly concrete idea of what they could miss out on if they didn't engage, but a much less defined idea of the risk, because they didn't deeply understand the system. Ironically, in this case, our loss aversion bias caused a significant number of people to lose out!

Similarly with AI, a lot of people are embracing it enthusiastically, in part because they are afraid of being left behind.  That is probably right, but it's important to balance this enthusiasm with an understanding of its potential limitations.  We may not need to know how to build a car, but it really helps to know how to steer and when to apply the brakes.  Knowing how to ask an AI questions, and when to double-check its answers, are both going to be critical skills.  For big decisions, 'second opinions' are going to become extremely important.  And the human ability to interpret answers through a filter of nuance, critical thinking, different perspectives, analogy and appropriate skepticism is going to be a critical element in fully leveraging AI technology, at least for now.

Today AI is still a tool, not an oracle. It augments our intelligence, but for complex, important or nuanced decisions or information retrieval, I'd be wary of sitting back and letting it replace us.  Its ability to process data in quantity is certainly superior to any human's, but we still need humans to interpret, challenge and integrate information.  The winners of this iteration of AI technology will be those who become highly skilled at walking that line, and who are good at managing the trade-off between speed and accuracy using AI as a tool.  The good news is that we are naturally good at this; it's a critical function of the human brain, embodied in the way it balances Kahneman's System 1 and System 2 thinking. Future iterations may not need us, but for now, AI is a powerful partner and tool, not a replacement.

Image credit: Pixabay

Unintended Consequences: The Hidden Risk of Fast-Paced Innovation

GUEST POST from Pete Foley

Most innovations go through a similar cycle, often represented as an s-curve.

We start with something potentially game changing. It's inevitably a rough-cut diamond; unoptimized and not fully understood.  But we then optimize it. This usually starts with a fairly steep learning curve as we address the 'low-hanging fruit', but then evolves into a fine-tuning stage.  Eventually we squeeze so much efficiency from it that further improvement is no longer worth the incremental cost.  We then either commoditize it, or jump to another s-curve.

This is certainly not a new model, and there are multiple variations on the theme.  But as the pace of innovation accelerates, something fundamentally new is happening with this s-curve pattern.  S-curves are getting closer together. Increasingly we are jumping to new s-curves before we’ve fully optimized the previous one.  This means that we are innovating quickly, but also that we are often taking more ‘leaps into the dark’ than ever before.
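To make the shape concrete, here is a minimal sketch in Python, assuming the common logistic form of the s-curve. The growth rates and midpoints are purely illustrative assumptions, not real adoption data, but they show the pattern described above: successive waves whose take-off points arrive closer and closer together.

import math

def s_curve(t, midpoint, rate, ceiling=1.0):
    """Logistic s-curve: slow start, steep middle, diminishing returns."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical technology waves, with midpoints spaced ever closer
# together to mimic the accelerating pace of innovation.
midpoints = [10, 18, 24, 28]  # illustrative, not empirical

for t in range(0, 35, 5):
    maturity = [round(s_curve(t, m, rate=0.5), 2) for m in midpoints]
    print(f"t={t:2d}  maturity of each wave: {maturity}")

Run it and you can see each later wave start climbing before the previous one has flattened out, which is exactly the 'leaping before we've optimized' pattern described above.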

This has some unintended consequences of its own:

1. Cumulative Unanticipated Consequences. No matter how much we try to anticipate how a new technology will fare in the real world, there are always surprises.  Many emerge soon after we hit the market, and create fires that have to be put out quite quickly (literally, in the case of some battery technologies).  But other unanticipated effects can be slower burn (pun intended).  The most pertinent example is of course greenhouse gases from industrialization, and their impact on our climate, which took us years to recognize. But there are many more examples, including the rise of antibiotic resistance, plastic pollution, hidden carcinogens, the rising cost of healthcare, and the mental health issues associated with social media. Just as the killer application for a new innovation is often missed at its inception, its killer flaws can be too.  And if the causal relationship between these issues and the innovation is indirect, they can accumulate across multiple s-curves before we notice them.  By the time we do, the technology is often so entrenched that it can be a huge challenge to extract ourselves from it.

2.  Poorly Understood Complex Network Effects.  The impact of new innovation is very hard to predict when it is introduced into a complex, multivariable system.  A butterfly flapping its wings can cascade and amplify through a system, and when the butterfly is transformative technology, the effect can be profound.  We usually have line of sight to first-generation causal effects: for example, we know that electric cars use the existing electric grid, as do solar energy farms.  But in today's complex, interconnected world, it's difficult to predict second-, third- or fourth-generation network effects, and likely not cost-effective or efficient for an innovator to try to do so (see the sketch after this list). For example, the supply-demand interdependency of solar and electric cars is a second-generation network effect that we are aware of, but that is already challenging to fully predict.  More causally distant effects can be even more challenging: funding for the road network without gas tax, the interdependency of gas and electric cost and supply as we transition, and the impact that will have on broader global energy costs and socio-political stability.  Then add in the complexities of supplying the new raw materials needed to support the new battery technologies.  These are pretty challenging to model, and of course, they are the challenges we are at least aware of. The unanticipated consequences of such a major change are, by definition, unanticipated!

3. Fragile Foundations.  In many cases, one s-curve forms the foundation of the next.  So if we have not optimized the previous s-curve sufficiently, flaws potentially carry over into the next, often in the form of 'givens'.  For example, an electric car is a classic s-curve jump from the internal combustion engine.  But for reasons that include design efficiency, compatibility with existing infrastructure, and perhaps most importantly, consumer cognitive comfort, much of the supporting design and technology carries over from previous designs. We have redesigned the engine, but have only evolved wheels, brakes, etc., and have kept legacies such as 4+ seats.  But automotive technology is, in many ways, one of our more stable foundations; we have had a lot of time to stabilize past s-curves before jumping to new ones.  Newer technologies such as AI, social media and quantum computing have enjoyed far less time to stabilize foundational s-curves before we dance across to embrace closely spaced new ones.  That will likely increase the chances of unintended consequences. And we are already seeing the canary in the coal mine, with unexpected mental health and social instability increasingly associated with social media.
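To illustrate the second point above, here is a minimal sketch that labels downstream effects by 'generation' using a simple breadth-first walk over a cause-and-effect graph. The graph itself is entirely hypothetical, just restating the electric-car examples from the list; real networks are vastly larger and loopier, which is exactly why the distant generations are so hard to predict.

from collections import deque

# Hypothetical cause-and-effect graph for one innovation (illustrative only).
effects = {
    "electric cars": ["electric grid demand", "falling gas demand"],
    "electric grid demand": ["solar supply-demand interdependency"],
    "falling gas demand": ["lost gas tax revenue", "global energy prices"],
    "lost gas tax revenue": ["road network funding"],
    "global energy prices": ["socio-political stability"],
    "solar supply-demand interdependency": ["battery raw material supply"],
}

def effect_generations(root):
    """Breadth-first walk that labels each downstream effect by generation."""
    generation = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nxt in effects.get(node, []):
            if nxt not in generation:                   # first time reached
                generation[nxt] = generation[node] + 1  # one step further out
                queue.append(nxt)
    return generation

for effect, g in sorted(effect_generations("electric cars").items(), key=lambda kv: kv[1]):
    print(f"generation {g}: {effect}")

Even in this toy version, the third- and fourth-generation rows are the ones innovators rarely get around to modeling in practice.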

What's the Answer?  We cannot, and should not, stop innovating.  We face too many fundamental issues with climate, food security and socio-political stability that need solutions, and need them quite quickly.

But the conundrum we face is that many, if not all, of these issues are rooted in past, well-intentioned innovation, and the unintended consequences that derive from it. So a lot of our innovation efforts are focused on solving issues created by previous rounds of innovation.  Nobody expected or intended the industrial revolution to impact our climate, but now much of our current innovation capability is rightly focused on managing the fallout it has created (again, pun intended).  Our challenge is that we need to continue to innovate, but also to break the cycle of today's innovation being increasingly focused on fixing yesterday's!

Today new waves of innovation associated with ‘sustainable’ technology, genetic manipulation, AI and quantum computing are already crashing onto our shores. These interdependent innovations will likely dwarf the industrial revolution in scale and complexity, and have the potential for massive impact, both good and bad. And they are occurring at a pace that gives us little time to deal with anticipated consequences, let alone unanticipated ones.

We'll Find a Way?  One answer is to just let it happen, and fix things as we go. Innovation has always been a bumpy road, and humanity has a long history of muddling through. The agricultural revolution ultimately allowed humans to exponentially expand our population, but only after concentrating people into larger social groups that caused disease to ravage many societies. We largely solved that by dying in large numbers and creating herd immunity: a solution, but not an optimum one.  When London was in danger of being buried in horse poop, the internal combustion engine saved us, but that in turn ultimately resulted in climate change. According to projections from the Club of Rome in the 1970s, economic growth should have ground to a halt long ago, mired in starvation and population contraction.  Instead, advances in farming technology have allowed us to keep growing, but that increase in population contributes substantially to our issues with climate today.  'We'll find a way' is an approach that works until it doesn't.  And even when it works, it is usually not painless, and often simply defers rather than solves issues.

Anticipation?  Another option is to get better both at anticipating issues, and at triaging the unexpected. Maybe AI will give us the processing power to do this, provided of course that it doesn't become our biggest issue in and of itself.

Slow Down and Be More Selective?  In a previous article I asked, 'just because we can do it, does it mean we should?'  That was through a primarily moral lens.  But I think unintended consequences make this an even bigger question for broader innovation strategy.  The more we innovate, the more consequences we likely create.  And the faster we innovate, the more vulnerable we are to fragility. Slowing down creates resilience; speed reduces it.  So one option is to be more choiceful about which innovations we pursue, and look more critically at the benefit-risk balance. For example, how badly do we need some of the new medications and vaccines being rushed to market?  Is all of our gene manipulation research needed? Do we really need a new phone every two years?  For sure, in some cases the benefits are clear, but in other cases, is profit driving us more than it should?

In a similar vein, but to be provocative, are we also moving too quickly with renewable energy?  It is certainly something we need.  But are we, for example, pinning too much on a single, almost first-generation form of large-scale solar technology?  We are still at the steep part of the learning curve, so are quite likely missing unintended consequences.  Would a more staged transition over a decade or so add more resilience, allow us to optimize the technology based on real-world experience, and help us ferret out unanticipated issues? Should we be creating a more balanced portfolio, leaning more on established technology such as nuclear? Sometimes moving a bit more slowly ultimately gets you there faster, and a long-term issue like climate is a prime candidate for balancing speed, optimization and resilience to ultimately create a more efficient, robust and better understood network.

The speed of AI development is another obvious question, but I suspect a more difficult one to evaluate.  In this case, Pandora's box is open, and calls to slow AI research would likely mean responsible players stop while research continues elsewhere, either underground or in less responsible nations.  A North Korean AI that is superior to anyone else's is an example where the risk of not moving likely outweighs the risk of unintended consequences.

Regulation?  Regulation is a good way of forcing more thoughtful evaluation of benefit versus risk. But it only works if regulators (government) understand the technology, or at least its benefits versus risks, better than its developers.  This can work reasonably well in pharma, where we have a long track record, but it is much more challenging in newer areas of technology. AI is a prime example where this is almost certainly not the case.  And as the complexity of all innovation increases, regulation will become less effective, and increasingly likely to create unintended consequences of its own.

I realize that this may all sound a bit alarmist, and certainly any call to slow down renewable energy conversion or pharma development is going to be unpopular.  But history has shown that slowing down creates resilience, while speeding up creates instability and waves of growth and collapse.  And an arms race where much of our current innovative capability is focused on fixing issues created by previous innovations is one we always risk losing.  So, as unanticipated consequences are, by definition, really difficult to anticipate, is this a point in time where we in the innovation community need to have a discussion about slowing down and being more selective?  Where should we innovate, and where not?  When should we move fast, and when might we be better served by some productive procrastination?  Do we need better risk assessment processes? It's always easier to do this kind of analysis in hindsight, but do we really have that luxury?

Image credit: Pixabay

Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week: https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It's clever innovation, but it also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn't Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to anticipate potential problems. We often still miss stuff, especially when it's difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator's Dilemma

Octopus farming doesn't, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is blind, and innovators who've invested themselves in a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI, where the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don't see ChatGPT as Armageddon, it is certainly evidence that Pandora's Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try to avoid doing more harm than good? Do we need an innovator's equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? It's not a bad idea in theory, but how could we practically do it? Innovation and risk go hand in hand, and in reality we often don't know how an innovation will operate in the real world, and often don't fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we'd also eliminate most progress. That said, I do believe that how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of 'should we do it' is 'can we afford not to?' Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project, it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora's Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I'm close to, it's also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But behavioral science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media.

There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It's been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater.  We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume.  It's one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications.

There is no doubt we can use behavioral science for good, whether it's helping people eat better, save better for retirement, drive more carefully, or in many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it's all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and the potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of 'do no harm' Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try to anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the 'Cobra Effect'. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set their now-worthless snakes free, and the result was more cobras than at the original start point. It's amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don't have the answers. But as we start to stack up world-changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should, we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay
