Author Archives: Pete Foley

About Pete Foley

Pete Foley is a consultant who applies Behavioral Science to catalyze innovation for Retail, Hospitality, Product Design, Branding and Marketing Design. He applies insights derived from consumer and shopper psychology, behavioral economics, perceptual science, and behavioral design to create practical solutions to difficult business challenges. He brings 25 years experience as a serial innovator at P&G. He has over 100 published or granted patents, has published papers in behavioral economics, evolutionary psychology and visual science, is an exhibited artist and photographer, and an accomplished musician.

AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!


GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI. It opens doors that were, for me at least, barely imaginable ten years ago. And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it's hard to imagine a more useful tool.

But it is still a tool. And as with any tool, learning its limitations, and how to use it skillfully, is key. I make an analogy to an automobile. We don't need to know everything about how it works, and we certainly don't need to understand how to build it. But we do need to know what it can and cannot do. We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant? An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi's excellent TED talk that is attached). It has phenomenal 'data intelligence', but can also fail on even simple logic puzzles. Part of the problem is that AI lacks 'common sense', or the implicit framework that filters a great deal of human decision making and behavior. Choi calls this the 'dark matter' common sense of decision-making. I think of it as the framework of knowledge, morality, biases and common sense that we accumulate over time, and that is foundational to the unconscious 'System 1' elements that influence many, if not most, of our decisions. But whatever we call it, it's an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from being unique in having limitations in the quality of its output. Any information source we use is subject to errors. We all know not to believe everything we read on the internet. That makes Google searches useful, but also potentially flawed. Even consulting with human experts has pitfalls. Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error. But most of us have learned to be appropriately skeptical of these sources of information. We routinely cross-reference, challenge data, seek second opinions, and do not simply 'parrot' the data they provide.

But increasingly with AI, I've seen a tendency to treat its output with perhaps too much respect. The reasons for this are multi-faceted, but very human. Part of it may be the potential for generative AI to provide answers in an apparently definitive form. Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy. Another element is the ability it gives us to quickly penetrate areas where we may have little domain knowledge or background. As I've already mentioned, this is fantastic for those of us who value exploring new domains and analogies. But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is for us to miss even basic mistakes.

As for AI's limitations, Choi provides some sobering examples. It can pass a bar exam, but fail abysmally on even simple logic problems. For example, it has suggested that building a bridge over broken glass and nails is likely to cause punctures! It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials. Of course, these negative examples are somewhat cherry-picked to make a point, but they do show how poor some AI answers can be, and how low in common sense. When the errors are this obvious, we automatically filter them out with our own common sense. But the challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI? At the root of this is that implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say 'remove the pot from the heat', but rarely says 'remove the pot from the heat and don't stick your fingers in the flames'. We're supposed to know that already. Because it is 'obvious', and processed quickly, unconsciously and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said, and because we don't tend to state the obvious, it struggles to learn it. It learns to take the pot off the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is obviously a known problem, and several strategies are employed to help address it. These include manually adding crafted examples and direct human input into AI's training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated into, or highlighted in, AI training, the risk of transferring specific human biases to that AI increases. It also creates the potential for competing AIs with different 'viewpoints', depending upon differences in both human input and the choices around which data sets are scraped. There is a 'nature' component to the development of AI capability, but also a 'nurture' influence. This is of course analogous to the influence that parents, teachers and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks. Parents, peers and teachers provide generational variety, and the gradual and layered process that builds the human implicit decision framework helps us to evolve a supporting network of contextual insight. It's obviously imperfect, and the current culture wars are testament to some profound differences in end result. But to a large extent, we evolve similar, if not identical, common-sense frameworks. With AI, the narrower group contributing to curated 'education' increases the risk of both intentional and unintentional bias, and of 'divergent intelligence'.

What Can We Do? The most important thing is to be skeptical about AI output. Just because it sounds plausible, don't assume it is. Just as we'd not take the first answer on a Google search as absolute truth, don't do the same with AI. Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references). And of course, the more important the output is to us, the more important it is to check it. As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible, or fits our theory or worldview. But always challenge the illusion of omniscience that AI creates. It's probably correct, but especially if it's providing an important or surprising insight, double-check it.

The Sci-Fi Monster! The concept of a childish superintelligence has been explored by more than one science fiction writer. But in many ways that is what we are dealing with in the case of AI. Its informational 'IQ' is greater than its contextual or common-sense 'IQ', making it a different type of intelligence from those we are used to. And because so much of the human input side is proprietary and complex, it's difficult to determine whether bias or misinformation is included in its output, and if so, how much. I'm sure these are solvable challenges. But some bias is probably unavoidable the moment any human intervention or selection invades the choice of training materials or their interpretation. And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that a narrowing of sources will result in different AIs with different 'experiences', and hence potentially different answers to questions.

AI is an incredible gift, but like the three wishes from Aladdin's lamp, use it wisely and carefully. A little bit of skepticism and some human validation are a good idea. Something that can pass the bar exam but lacks common sense is powerful; it could even get elected. But don't automatically trust everything it says!

Image credits: Pexels

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Las Vegas Formula One

Successful Innovation, Learning Experience or Total Disaster?

GUEST POST from Pete Foley

In Las Vegas, we are now clearing up after the Formula 1 Grand Prix on the Strip. This extremely complex event required a great deal of executional innovation, and I think it is one that we, as innovators, can learn quite a lot from.

It was certainly a bumpy ride, both for the multi-million-dollar Ferrari that hit an errant drain cover during practice, and with respect to broader preparation, logistics, pricing and projections of consumer behavior. Despite this, the race itself was exciting and largely issue-free, and even won over some of the most skeptical drivers. In terms of Kahneman's peak-end effects, there were memorable lows, but also a triumphant end result. So did this ultimately amount to success?
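As an aside, Kahneman's peak-end rule can be caricatured in a few lines of code: retrospective judgments of an experience tend to track the average of its most intense moment and its ending, rather than the sum or average of every moment. This is a minimal sketch, and the affect ratings below are invented purely for illustration:

```python
# Toy sketch of Kahneman's peak-end rule. The moment-by-moment ratings
# are hypothetical, not measured data.

def peak_end_score(moments):
    """moments: affect ratings over time (positive = good, negative = bad).
    Returns the average of the most intense moment and the final moment."""
    peak = max(moments, key=abs)  # most intense moment, good or bad
    end = moments[-1]
    return (peak + end) / 2

# An F1-weekend caricature: painful construction lows, then a strong finish.
weekend = [-3, -5, -2, 1, 4]
print(peak_end_score(weekend))  # (-5 + 4) / 2 = -0.5
```

Note how a triumphant ending pulls the remembered evaluation up sharply, even when most individual moments were negative.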

Success?: For now, I think it very much depends upon your perspective and who you talk to. Perhaps it's a sign of the times, but in Las Vegas the race was extremely polarizing, with heated debates between the pro- and anti-F1 camps that were often as competitive as the race itself.

The reality is that it will be months, or more likely years, before the dust settles and we know the answer. And I strongly suspect that even then, those who are for and against it will each be able to claim support for their point of view. One insight innovators can take from this is that success can be quite subjective in and of itself, and greatly depends upon what factors you measure, what period of time you measure over, and often your ingoing biases. And the bigger and more complex the innovation, often the harder it is to define and measure success.

Compromise Effects: When you launch a new product, it is often simpler and cheaper to measure its success narrowly, in terms of its specific dollar contribution to your business. But this often misses its holistic impact. Premium products can elevate an entire category or brand, while poorly executed innovations can do the opposite. For example, the compromise effect from behavioral economics suggests that a premium addition to a brand lineup can shift the 'Good, Better, Best' spectrum of a category upwards. This can boost dollar sales across a lineup, even if the new premium product itself has only moderate sales. The addition of high-priced wines to a menu, for instance, can often increase the average dollars per bottle spent by diners, even if the expensive wine itself doesn't sell. The expensive wines shift the 'safe middle' of the consideration set upwards, and thus increase revenue, and hopefully profit.
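The compromise-effect mechanism described above can be sketched as a toy model. The menu prices, and the assumption that every diner simply picks the middle-priced option, are deliberate simplifications for illustration, not real choice data:

```python
# Toy model of the compromise effect ("extremeness aversion"): adding a
# premium option shifts which option counts as the "safe middle".
# All prices are hypothetical.

def average_spend(prices):
    """Assume every diner picks the median-priced ('safe middle') bottle."""
    ranked = sorted(prices)
    return ranked[len(ranked) // 2]

menu_before = [20, 35, 50]        # good / better / best
menu_after = [20, 35, 50, 120]    # add a premium bottle few will ever buy

print(average_spend(menu_before))  # middle of three options -> 35
print(average_spend(menu_after))   # "safe middle" shifts upward -> 50
```

Even though nobody in this toy model buys the $120 bottle, its mere presence raises the average spend per bottle, which is the effect the wine-list example describes.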

Money, Scope and Intangibles:  In the case of F1, how far can and should we cast the net when trying to measure success?  Can we look just at the bottom line?  Did this specific weekend bring in more than the same weekend the previous year in sports betting, rooms and entertainment?  Did that difference exceed the investments? 

Or is that too narrow? What about the $$ impact on the weeks surrounding the event? We know that some people stayed away because of the construction and congestion in the lead-up to the race. That should probably be added into, or subtracted from, the equation.

And then there's the 'who won and who lost' question. The benefits and losses were certainly not homogeneous across stakeholders. The big casinos benefited disproportionately in comparison to the smaller restaurants that lost business due to construction, some to a degree that almost rivaled Covid. Gig workers also fared differently. I have friends who gained business from the event, and friends who lost it. Many Uber drivers simply gave up and stopped working, but those who stayed, and the high-end limo drivers, likely had bumper weekends. Entertainers working shows that were disrupted by F1 lost out, but the plethora of special events that came with F1 also provided a major uptick in business for many performers and entertainers.

There is also substantial public investment to consider. Somewhat bizarrely, the contribution of public funds was not agreed prior to the race, and the public-private sharing of tens of millions in costs is still being negotiated. But even facing that moving target, did increased (or decreased) tax income before, during and after the race offset those still-to-be-determined costs?

Intangibles: And then there are the intangibles. While Vegas is not exactly an unknown entity, F1 certainly upped its exposure, or in marketing terms, its mental availability. It brought Vegas into the news, but was that in a positive or negative light? Or is all publicity good publicity in this context? News coverage was mixed, with a lot of negative focus on the logistics issues, but also global coverage of what was generally regarded as an exciting race. And of course, that media coverage also by definition marketed other businesses, including the spectacular Sphere.

Logistics: Traffic was a nightmare, with many who work on the Strip facing unprecedented delays for weeks, and some commutes going from minutes to hours. This reached a point where casinos were raffling substantial prizes, including a Tesla, just to persuade people not to call in sick. Longer term, it's hard to determine the impact on employee morale and retention, but it's hard to imagine that it will be zero, and that brings costs of its own that go well beyond a raffled Tesla.

Measuring Success? In conclusion, this was a huge operation, and its impact is by definition going to be multidimensional. The outcome was, not surprisingly, a mixed bag. It could have been a lot better, or a lot worse. And even as the dust settles, it's likely that different groups will be able to cherry-pick data to support their current opinions and biases.

Innovation Insights:  So what are some of the more generalized innovation insights we can draw?

(a) Innovation is rarely a one-and-done process. We rarely get it right the first time, and the bigger and more complex an innovation is, the more we usually have to learn. F1 is the poster child for this, and the organization is going to have an enormous amount of data to plough through. The value of this will greatly depend on F1's internal innovation culture. Is it a learning organization? In a situation like this, where billions of dollars, and careers, are on the line, will it be open or defensive? Great innovation organizations mostly put defensiveness aside, actively learn from mistakes, and adopt Devil's Advocate approaches to learn from hard-earned data. But culture is deeply embedded, and difficult to change, so much depends on the current culture of the organizations involved.

(b) Going Fast versus Going Slow: This project moved very, very quickly. Turning a city like Las Vegas from scratch into a top-of-the-line race track in less than a year was a massive challenge. The upside is that if you go fast, you learn fast. And the complexity of the task meant much of the insight could pragmatically only be gained 'on the ground'. But conversely, better scenario planning might have helped anticipate some of the biggest issues, especially around traffic disruption, loss of business for smaller organizations, commuting and community outreach. And things like not finalizing public-private contracts prior to execution will likely end up prolonging the agony. Whatever our innovation is, big or small, hitting the sweet spot between winging it and over-thinking is key.

(c) Understanding Real Consumer Behavior. The casinos got pricing horribly wrong. When the race was announced, hotel prices and race packages for the F1 weekend went through the roof. But in the final run-up to the race, prices for both rooms and the race itself plummeted. One news article reported a hotel room on the Strip as low as $18! Tickets for the race that the previous month had cost $1,600 had dropped to $800 or less on race day. Visitors who had earlier paid top dollar for rooms were reported to be cancelling and rebooking, while those locked into rates were frustrated. There is even a major lawsuit in progress around a cancelled practice. I don't know any details around how pricing was researched, and predicting the market for a new product or innovation is always a challenge. In addition, the bigger the innovation, the more challenging the prediction game is, as there are fewer relevant anchors for consumers or the business to work from. But I think the generalizable lesson for all innovators is to be humble. Assume you don't know and that your models are approximate; do as much research as you can in contexts that are as close to realistic as possible; don't squeeze margins based on unrealistic expectations for the accuracy of business models; and build as much agility into innovation launches as possible. Easier said than done, I know, but one of the most consistent reasons for new product failure is overconfidence in understanding real consumer response when the rubber hits the road (pun intended), and how it can differ from articulated consumer response derived in unrealistic contexts. Focus groups and online surveys can be quite misleading when it comes down to the reality of handing over hard cash, opportunity cost, or how we value our precious time short versus long term.

Conclusion: Full disclosure: I've personally gone through the full spectrum with Formula One in Vegas. I loved the idea when it was announced, but six months of construction, disruption, and the prospect of another two months of tear-down have severely dented my enthusiasm. Ultimately I went from coveting tickets to avoiding the event altogether. People I know range from ecstatic to furious, and everything in between. Did I mention it was polarizing?

The reality is that this is an ongoing innovation process. There is a three-year contract with options to extend to ten years. How successful it ultimately is will likely depend upon how good a learning and innovation culture Formula One and its partners are, or can become. It's a steep and expensive learning curve, and how it moves forward is going to be interesting if nothing else. And being Vegas, we have both CES and the Super Bowl to distract us in the next few months, before we start preparing again for next year.

Image credits: Pexels


Eddie Van Halen, Simultaneous Innovation and the AI Regulation Conundrum


GUEST POST from Pete Foley

It's great to have an excuse to post an Eddie Van Halen video to the innovation community. It's of course fun just to watch Eddie, but I also have a deeper, innovation-relevant reason for doing so.

Art & Science: I'm a passionate believer in cross-pollination between art and science. And I especially believe we can learn a great deal from artists and musicians like Eddie who have innovated consistently over a career. Dig into their processes, and we see serial innovators like The Beatles, Picasso, Elton John, Bowie, George Martin, Freddie Mercury, William Gibson, Lady Gaga, Paul Simon and so many others apply techniques that are highly applicable to all innovation fields. Techniques such as analogy, conceptual blending, collaboration, reapplication, boundary stretching, risk taking, learning from failure and T-shaped innovation all crop up fairly consistently. And these creative approaches are typically also built upon deep expertise, passion, motivation, and an ability to connect with future consumer needs and to tap into early adopters and passionate consumers. For me at least, that's a pretty good toolkit for innovation in any field. Now, to be fair, their process is often intuitive, and many truly prolific artists are lucky enough to automatically and intuitively 'think that way'. But understanding and then stealing some of their techniques, either implicit or explicit, can be a great way both to jump-start our own innovative processes and to understand how innovation works. As Picasso said, 'great artists steal', but I'd argue that so do good innovators, at least within the bounds allowed by the patent literature!

In the past I've written quite a lot about Picasso's and The Beatles' use of conceptual blending, Paul Simon's analogies, reapplication and collaboration, Bowie's innovative courage, and William Gibson's ability to project s-curves. Today, I'd like to focus on some insights I see in the guitar innovations of Eddie.

(a) Parallel or Simultaneous Innovation. I suspect this is one of the most important yet under-appreciated concepts in innovation today. Virtually every innovation is built upon the shoulders of giants. Past innovations provide the foundation for future ones, to the point where, once the pieces of the puzzle are in place, many innovations become inevitable. It still takes an agile and creative mind to come up with innovative ideas, but contemporary innovations often set the stage for the next leap forward. And this applies both to the innovative process and to a customer's ability to understand and embrace it. The design of the first skyscraper was innovative, but it was made a lot more obvious by the construction of the Eiffel Tower. The ubiquitous mobile phone may now seem obvious, but it owes its existence to a very long list of enabling technologies that paved the way for its invention, from electricity to chips to Wi-Fi.

The outcome of this 'stage setting' is that often even really big innovations occur simultaneously yet independently. We've seen this play out with calculus (independently developed by Newton and Leibniz), the atomic bomb, where Oppenheimer and company only just beat the Nazis, the theory of evolution, the invention of the thermometer, nylon, and many others. We even see it in evolution, where scavenger birds such as vultures and condors superficially appear quite similar, due to adaptations that allow them to eat carrion, but actually have quite different genetic lineages. Similarly, many marsupials look very similar to placental mammals that fill similar ecological niches, but typically evolved independently. Context has a huge impact on innovation, and similar contexts typically create parallel, and often similar, innovations. As the world becomes more interconnected, and context becomes more homogenized, we are going to see more and more examples of simultaneous innovation.

Faster and More Competitive Innovation: Today, social media, search technology and the web mean that more people know more of the same 'stuff' more quickly than ever before. This near-instantaneous and democratized access to the latest knowledge sets the scene and context for a next generation of innovation that is faster and more competitive than we've ever seen. More people have access to the pieces of the puzzle far more quickly than ever before: background information that acts as a precursor for the next innovative leap. Eddie had to go and watch Jimmy Page live and in person to get his inspiration for 'tapping'. Today he, and a few million others, would simply need to go onto YouTube. He discovered Page's hammer-ons years after Page started using them; today it would likely be days. That acceleration of 'innovation context' has a couple of major implications:

1.  If you think you've just come up with something new, it's more than likely that several other people have too, or will do so very soon. More than ever before, you are in a race from the moment you have an idea! So snooze and you lose. Assume several others are working on the same idea.

2.  Regulating innovation is becoming really, really difficult. I think this is possibly the most profound implication. For example, a very current and somewhat contentious topic today is whether and how we should regulate AI. And it's a pretty big decision. We really don't know how AI will evolve, but it is certainly moving very quickly, and comes with the potential for earthshaking pros and cons. It is also almost inevitably subject to simultaneous invention. So many people are working on it, and so much adjacent innovation is occurring, that it's somewhat unlikely that any single group is going to get very far out in front. The proverbial cat is out of the bag, and the race is on. The issue for regulation then becomes painfully obvious. Unless we can somehow implement universal regulation, any regulations simply slow down those who follow the rules. This unfortunately opens the door to bad actors taking the lead, and controlling potentially devastating technology.

So we are somewhat damned if we do, and damned if we don't. If we don't regulate, we run the risk of potentially dangerous technology getting out of control. But if we do regulate, we run the risk of enabling bad actors to own that dangerous technology. We've of course been here before. The race for the nuclear bomb between the Allies and the Nazis was a great example of simultaneous innovation with potentially catastrophic outcomes. Imagine if we'd decided fission was simply too dangerous, and regulated its development to the point where the Nazis had got there first. We'd likely be living in a very different world today! Much like AI, it was a tough decision, as without regulation there was a small but real chance that the outcome could have been devastating.

Today we have a raft of rapidly evolving technologies that I'd love to regulate, but where I'm also profoundly worried about the unintended consequences of doing so. AI of course, but also genetic engineering, gene-manipulating medicines, even climate mitigation and behavioral science! With respect to the latter, the better we get at nudging behavior, and the more reach we have with those techniques, the more dangerous misuse becomes.

The core problem underlying all of this is that we are human. Most people try to do the right thing, but there are always bad actors. And even those trying to do the right thing all too often get it wrong. And the more democratized access to cutting-edge insight becomes, the more contenders parallel innovation gives us for mistakes and bad choices, intentional or unintentional.

(b) Innovation versus Invention: A less dramatic, but I think similarly interesting, insight we can draw from Eddie lies in the difference between innovation and invention. He certainly wasn't the first guitarist to use the tapping technique. That goes back centuries! At least as far as the classical composer Paganini; it was a required technique for playing the Chapman Stick in the 1970s, popularized by the great Tony Levin in King Crimson; and it was widely, albeit sparingly (and often obscurely), used by jazz guitarists in the 1950s and 60s. But Eddie was the first to feature it, and turn it into a meaningful innovation in of itself. Until him, nobody had packaged the technique in a way that it could be 'marketed' and 'sold' as a viable product. He found the killer application, made it his own, and made it a 'thing'. I would therefore argue that he wasn't the inventor, but he was the 'innovator'. This points to the value of innovation over invention. If you don't have the capability or the partners to turn an invention into something useful, it's still just an idea. Invention is a critical part of the broader innovation process, but in isolation it's more a curiosity than useful. Innovation is about reduction to practice and communication, as well as great ideas.

Art & Science: I love the arts. I play guitar, paint, and photograph. It's a lot of fun, and provides an invaluable outlet from the stresses involved in business and innovation. But as I suggested at the beginning, a lot of the boundaries we place between art and science, and by extension business, are artificial and counter-productive. Some of my most productive collaborations as a scientist have been with designers and artists. As a visual scientist, I've found that artists often intuitively have a command of attentional insights that our cutting-edge science is still trying to understand. It's a lot of fun to watch Eddie Van Halen, but learning from great artists like him can, via analogy, also be surprisingly insightful and instructive.

Image credits: Unsplash


A New Innovation Sphere


GUEST POST from Pete Foley

I'm obsessed with the newly opened Sphere in Las Vegas as an example of innovation. As I write this, U2 are preparing for their second show there, and Vegas is buzzing about the innovative new venue they are performing in. That in and of itself is quite something. Vegas is a city that is not short of entertainment and visual spectacle, so for an innovation to capture the collective imagination in this way, it has to be genuinely 'Wow'. And that 'Wow' means there are opportunities for the innovation community to learn from it.

For those of you who might have missed it, the Sphere is an approximately 20,000-seat auditorium with cutting-edge multisensory capabilities that include a 16K-resolution wraparound interior LED screen, speakers with beamforming and wave field synthesis technology, and 4D haptic effects built into the seats. The exterior of the 366-foot-high building features 580,000 sq ft of LED displays, which have transformed the already ostentatious Las Vegas skyline. Images including a giant eye, moon, earth, smiley face, Halloween pumpkin and various underwater scenes and geometric animations light up the sky, together with advertisements that are rumored to cost almost $500,000 per day. Add in giant drone shows and the huge LED displays on adjacent casinos, and Blade Runner has truly come to Vegas. But these descriptions simply don't do it justice; you really, really have to see it.

Las Vegas U2 Residency at the Sphere

Master of Attention – Leveraging Visual Science to the Full: The outside is a brilliant example of visual marketing that leverages just about every insight possible for grabbing attention. Its scale is simply 'Wow!', and you can see it from the mountains surrounding Vegas, or from the plane as you come in to land. The content it displays on its outside is brilliantly designed to capture attention. It has the fundamental visual cues of movement, color, luminescence, contrast and scale, but these are all turned up to 11, maybe even 12. This alone pretty much compels attention, even in a city whose skyline is already replete with all of these. When designing for visual attention, I often invoke the 'Times Square analogy'. When trying to grab attention in a visually crowded context, signal-to-noise is your friend, and a simple, 'less is more' design can stand out against a background of intense, complex visual noise. But the Sphere has instead leapt s-curves, leveraging new technology to be brighter, bigger, more colorful, and to create an order of magnitude more movement than its surroundings. It visually shouts above the surrounding visual noise, and has created genuine separation, at least for now.

But it also leverages many other elements that we know command attention. It uses faces, eyes, and natural cues that tap into our unconscious attentional architecture. The giant eye, giant pumpkin and giant smiley face engage these attentional mechanisms, but in a playful way. The orange and black of the pumpkin, or the yellow and black of the smiley face, tap into implicit biomimetic ‘danger’ cues, but in a way that resolves instantly, turning attention from avoid to approach. The giant jellyfish and whales floating above the strip tap into our attentional priority for natural cues. And of course, it all fits the surprising-yet-obvious cognitive structure that creates ‘Wow!’. A giant smiley emoji floating above the Vegas skyline is initially surprising, but also pretty obvious once you realize it is the Sphere!

And this is of course a dynamic display that, once it captures your attention, advertises the upcoming U2 show or other paid content. As I mentioned before, that advertising does not come cheap, but it does come with pretty much guaranteed engagement. You really do need to see it for yourself if you can, but I’ve also captured some video here:

The Real Innovation Magic: The outside of The Sphere is stunning, but the inside goes even further, and provides a new and disruptive technology platform that opens the door for all sorts of creativity and innovation in entertainment and beyond. The potential to leverage the super-additive power of multi-sensory combinations to command attention and emotion is staggering.

The opening act was U2, and the show has received mostly positive, if somewhat mixed, reviews. Everyone raves about the staggering visual effects, the sound quality, and the spectacle. But some point out that the band itself gets somewhat lost, or is overshadowed by the new technology.

But this is just the beginning. The technology platform is truly disruptive innovation that will open the door for all sorts of innovation and creativity. It fundamentally challenges the ‘givens’ of what a concert is. The U2 show is still built around, and marketed on, the band being the ‘star’ of the show. But the Sphere is an unprecedented immersive multimedia experience that can, and likely will, change that completely, making the venue itself the star. The potential for great musicians and visual and multisensory artists to create unprecedented customer experiences is enormous. Artists from Gaga to Muse, or their successors, must be salivating at the potential to bring until-now impossible visions to life, and deliver multisensory experiences to audiences on a scale not previously imagined. Disruptive innovation often emerges at the interface of existing expertise, and the potential for hybrid sensory experiences that the Sphere offers is unprecedented. Imagine visuals created and inspired by the Webb telescope, accompanied by an orchestra that sonically surrounds the audience in ways they’ve never experienced or perhaps imagined. And of course, new technology will challenge new creatives to leverage it in ways we haven’t yet imagined. Cawsie Jijina, the engineer who designed the Sphere, maybe says it best:

“You have the best audio there possibly can be today. You have the best visual there can possibly be today. Now you just have to wait and let some artist meet some batshit crazy engineer and techie and create something totally new.”

This technology platform will stimulate emergent blends of creative innovation that will challenge our expectations of what a show is. It will likely require both creatives and audiences to give up some preconceptions. But I love to see a new technology emerge in front of my eyes. We ain’t seen nothing yet.

Las Vegas Sphere Halloween

Image credits: Pete Foley

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

When Innovation Becomes Magic

GUEST POST from Pete Foley

Arthur C Clarke’s 3rd Law famously stated:

“Any sufficiently advanced technology is indistinguishable from magic”

In other words, if the technology of an advanced civilization is so far beyond comprehension, it appears magical to a less advanced one. This could take the form of a human encounter with a highly advanced extraterrestrial civilization, how current technology might be viewed by historical figures, or encounters between human cultures with different levels of scientific and technological knowledge.

Clarke’s law implicitly assumed that knowledge within a society is sufficiently democratized that we never view technology within a civilization as ‘magic’.  But a combination of specialization, rapid advancements in technology, and a highly stratified society means this is changing.  Generative AI, Blockchain and various forms of automation are all ‘everyday magic’ that we increasingly use, but mostly with little more than an illusion of understanding around how they work.  More technological leaps are on the horizon, and as innovation accelerates exponentially, we are all going to have to navigate a world that looks and feels increasingly magical.   Knowing how to do this effectively is going to become an increasingly important skill for us all.  

The Magic Behind the Curtain:  So what’s the problem? Why do we need to understand the ‘magic’ behind the curtain, as long as we can operate the interface, and reap the benefits?  After all, most of us use phones, computers, cars, or take medicines without really understanding how they work.  We rely on experts to guide us, and use interfaces that help us navigate complex technology without a need for deep understanding of what goes on behind the curtain.

It’s a nuanced question.  Take a car as an analogy.  We certainly don’t need to know how to build one in order to use one.  But we do need to know how to operate it and understand what its performance limitations are.  It also helps to have at least some basic knowledge of how it works; enough to change a tire on a remote road, or enough basic mechanics to minimize the potential of being ripped off by a rogue mechanic.  In a nutshell, the more we understand it, the more efficiently, safely and economically we can leverage it.  It’s a similar situation with medicine.  It is certainly possible to defer all of our healthcare decisions to a physician.  But people who partner with their doctors, and become advocates for their own health, generally have superior outcomes, are less likely to die from unintended contraindications, and typically pay less for healthcare.  And this is not trivial.  The third leading cause of death in Europe, behind cancer and heart disease, is issues associated with prescription medications.  We don’t need to know everything to use a tool, but in most cases, the more we know the better.

The Speed/Knowledge Trade-Off:  With new, increasingly complex technologies coming at us in waves, it’s becoming increasingly challenging to make sense of what’s ‘behind the curtain’. This has the potential for costly mistakes.  But delaying embracing technology until we fully understand it can come with serious opportunity costs.  Adopt too early, and we risk getting it wrong; too late, and we ‘miss the bus’.  How many people who invested in cryptocurrency or NFTs really understood what they were doing?  And how many of those have lost on those deals, often to the benefit of those with deeper knowledge?  That isn’t in any way to suggest that those who are knowledgeable in those fields deliberately exploit those who aren’t, but markets tend to reward those who know, and punish those who don’t.

The AI Oracle:  The recent rise of Generative AI has many people treating it essentially as an oracle.  We ask it a question, and it ‘magically’ spits out an answer in a very convincing and sharable format.  Few of us understand the basics of how it does this, let alone the details or limitations. We may not call it magic, but we often treat it as such.  We really have little choice; we lack sufficient understanding to apply quality critical thinking to what we are told, so we have to take answers on trust.  That would be brilliant if AI were foolproof.  But while it is certainly right a lot of the time, it does make mistakes, often quite embarrassing ones.  For example, Google’s BARD incorrectly claimed the James Webb Space Telescope had taken the first photo of a planet outside our solar system, which led to panic selling of parent company Alphabet’s stock.  Generative AI is a superb innovation, but its current iterations are far from perfect.  They are limited by the data they are trained on, are extremely poor at spotting their own mistakes, can be manipulated by the choice of those training data sets, and lack the underlying framework of understanding that is essential for critical thinking or for making analogical connections.  I’m sure that we’ll eventually solve these issues, either with iterations of current tech, or via integration of new technology platforms.  But until we do, we have a brilliant, but still flawed tool.  It’s mostly right, is perfect for quickly answering a lot of questions, but its biggest vulnerability is that most users have pretty limited capability to understand when it’s wrong.

Technology Blind Spots: That of course is the Achilles’ heel, or blind spot, and a dilemma. If an answer is wrong, and we act on it without realizing, it’s potentially trouble. But if we already know the answer, we didn’t really need to ask the AI. Of course, it’s more nuanced than that.  Just getting the right answer is not always enough, as the causal understanding that we pick up by solving a problem ourselves can also be important.  It helps us to spot obvious errors, but also helps to generate memory, experience, problem-solving skills, buy-in, and belief in an idea.  Procedural and associative memory is encoded differently from answers, and mechanistic understanding helps us to reapply insights and make analogies.

Need for Causal Understanding:  Belief and buy-in can be particularly important. Different people respond to a lack of ‘internal’ understanding in different ways.  Some shy away from the unknown and avoid or oppose what they don’t understand. Others embrace it, and trust the experts.  There’s really no right or wrong in this.  Science is a mixture of both approaches: it stands on the shoulders of giants, but advances by challenging existing theories.  Good scientists are both data driven and skeptical.  But in some cases skepticism based on lack of causal understanding can be a huge barrier to adoption. It has contributed to many of the debates we see today around technology adoption, including genetically engineered foods, efficacy of certain pharmaceuticals, environmental contaminants, nutrition, vaccinations, and during Covid, RNA vaccines and even masks.  Even extremely smart people can make poor decisions because of a lack of causal understanding.  In 2003, Steve Jobs was advised by his physicians to undergo immediate surgery for a rare form of pancreatic cancer.  Instead he delayed the procedure for nine months and attempted to treat himself with alternative medicine, a decision that very likely cut his life tragically short.

What Should We Do?  We need to embrace new tools and opportunities, but we need to do so with our eyes open.   Loss aversion, and the fear of losing out, is a very powerful motivator of human behavior, and so an important driver in the adoption of new technology.  But it can be costly. A lot of people lost out with crypto and NFTs because they had a fairly concrete idea of what they could miss out on if they didn’t engage, but a much less defined idea of the risk, because they didn’t deeply understand the system. Ironically, in this case, our loss aversion bias caused a significant number of people to lose out!

Similarly with AI, a lot of people are embracing it enthusiastically, in part because they are afraid of being left behind.  That is probably right, but it’s important to balance this enthusiasm with an understanding of its potential limitations.  We may not need to know how to build a car, but it really helps to know how to steer and when to apply the brakes.  Knowing how to ask an AI questions, and when to double-check answers, are both going to be critical skills.  For big decisions, ‘second opinions’ are going to become extremely important.  And the human ability to interpret answers through a filter of nuance, critical thinking, different perspectives, analogy and appropriate skepticism is going to be a critical element in fully leveraging AI technology, at least for now.

Today AI is still a tool, not an oracle. It augments our intelligence, but for complex, important or nuanced decisions or information retrieval, I’d be wary of sitting back and letting it replace us.  Its ability to process data in quantity is certainly superior to any human’s, but we still need humans to interpret, challenge and integrate information.  The winners of this iteration of AI technology will be those who become highly skilled at walking that line, and who are good at managing the trade-off between speed and accuracy using AI as a tool.  The good news is that we are naturally good at this; it’s a critical function of the human brain, embodied in the way it balances Kahneman’s System 1 and System 2 thinking. Future iterations may not need us, but for now AI is a powerful partner and tool, not a replacement.

Image credit: Pixabay


Unintended Consequences. The Hidden Risk of Fast-Paced Innovation

GUEST POST from Pete Foley

Most innovations go through a similar cycle, often represented as an s-curve.

We start with something potentially game-changing. It’s inevitably a rough-cut diamond; un-optimized and not fully understood.  But we then optimize it. This usually starts with a fairly steep learning curve as we address ‘low hanging fruit’, but then evolves into a fine-tuning stage.  Eventually we squeeze efficiency from it to the point where the incremental cost of improving it becomes inefficient.  We then either commoditize it, or jump to another s-curve.

This is certainly not a new model, and there are multiple variations on the theme.  But as the pace of innovation accelerates, something fundamentally new is happening with this s-curve pattern.  S-curves are getting closer together. Increasingly we are jumping to new s-curves before we’ve fully optimized the previous one.  This means that we are innovating quickly, but also that we are often taking more ‘leaps into the dark’ than ever before.
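This closely spaced pattern can be sketched with a simple logistic model. To be clear, the midpoints, steepness, and the timing of the jump below are my own invented parameters for illustration, not data from any real technology:

```python
import math

def s_curve(t, t0, k=1.0, cap=1.0):
    """Logistic adoption/optimization curve: slow start, steep middle, plateau near `cap`."""
    return cap / (1.0 + math.exp(-k * (t - t0)))

# Two hypothetical technology generations. Generation 2 launches at t = 7,
# while generation 1 is still being optimized (its midpoint t0 = 5 means it
# has recently left the steep part of its curve).
gen1_maturity_at_jump = s_curve(7, t0=5, k=1.2)  # roughly 92% optimized
gen2_maturity_at_jump = s_curve(7, t0=9, k=1.2)  # barely begun, roughly 8%
```

The closer together the midpoints get, the less mature the earlier generation is at the moment we leap to the next one, which is exactly the ‘leap into the dark’ described above.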

This has some unintended consequences of its own:

1. Cumulative Unanticipated Consequences. No matter how much we try to anticipate how a new technology will fare in the real world, there are always surprises.  Many surprises emerge soon after we hit the market, and create fires that have to be put out quite quickly (and literally in the cases of some battery technologies).  But other unanticipated effects can be slower burn (pun intended).  The most pertinent example of this is of course greenhouse gases from industrialization, and their impact on our climate, which took us years to recognize. But there are many more examples, including the rise of antibiotic resistance, plastic pollution, hidden carcinogens, the rising cost of healthcare and the mental health issues associated with social media. Just as the killer application for a new innovation is often missed at its inception, its killer flaws can be too.  And if the causal relationship between these issues and the innovation is indirect, they can accumulate across multiple s-curves before we notice them.  By the time we do, the technology is often so entrenched that it can be a huge challenge to extract ourselves from it.

2. Poorly Understood Complex Network Effects.  The impact of new innovation is very hard to predict when it is introduced into a complex, multivariable system.  A butterfly flapping its wings can cascade and amplify through a system, and when the butterfly is transformative technology, the effect can be profound.  We usually have line of sight to first-generation causal effects: for example, we know that electric cars use an existing electric grid, as do solar energy farms.  But in today’s complex, interconnected world, it’s difficult to predict second, third or fourth generation network effects, and likely not cost effective or efficient for an innovator to try to do so. For example, the supply-demand interdependency of solar and electric cars is a second-generation network effect that we are aware of, but that is already challenging to fully predict.  More causally distant effects can be even more challenging: for example, funding for the road network without gas tax, the interdependency of gas and electric cost and supply as we transition, and the impact that will have on broader global energy costs and socio-political stability.  Then add in the complexities of supplying the new raw materials needed to support the new battery technologies.  These are pretty challenging to model, and of course, are the challenges we are at least aware of. The unanticipated consequences of such a major change are, by definition, unanticipated!

3. Fragile Foundations.  In many cases, one s-curve forms the foundation of the next.  So if we have not optimized the previous s-curve sufficiently, flaws potentially carry over into the next, often in the form of ‘givens’.  For example, an electric car is a classic s-curve jump from the internal combustion engine.  But for reasons that include design efficiency, compatibility with existing infrastructure, and perhaps most importantly, consumer cognitive comfort, much of the supporting design and technology carries over from previous designs. We have redesigned the engine, but have only evolved wheels, brakes, etc., and have kept legacies such as 4+ seats.  But automobiles are, in many ways, one of our more stable foundations. We have had a lot of time to stabilize past s-curves before jumping to new ones.  But newer technologies such as AI, social media and quantum computing have enjoyed far less time to stabilize foundational s-curves before we dance across to embrace closely spaced new ones.  That will likely increase the chances of unintended consequences. And we are already seeing the canary in the coal mine with some, with unexpected mental health and social instability increasingly associated with social media.
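The generational network effects described in point 2 can be made concrete with a deliberately toy cascade model. Everything here, the dependency graph, the damping factor, and the node names, is a hypothetical illustration loosely based on the electric-car example, not anything measured:

```python
def cascade(adjacency, shock, damping=0.5, depth=4):
    """Propagate an initial shock through a dependency network.

    Each generation passes a damped fraction of its impact to the nodes
    that depend on it, so later generations feel smaller, but non-zero,
    effects."""
    impact = dict.fromkeys(adjacency, 0.0)
    frontier = dict(shock)
    for _ in range(depth):
        next_frontier = {}
        for node, size in frontier.items():
            impact[node] += size
            for neighbor in adjacency.get(node, []):
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + size * damping
        frontier = next_frontier
    return impact

# Hypothetical dependency graph: which systems feel the effect of a shock
# to the one before them.
graph = {
    "electric_cars": ["power_grid", "battery_minerals"],
    "power_grid": ["energy_prices"],
    "battery_minerals": ["mining_supply"],
    "energy_prices": ["political_stability"],
    "mining_supply": [],
    "political_stability": [],
}
effects = cascade(graph, {"electric_cars": 1.0})
# First-generation effects (power grid, minerals) are easy to see; the
# third-generation effect on "political_stability" is smaller, indirect,
# and easy to overlook.
```

The point is not the numbers but the structure: each extra hop multiplies uncertainty, which is why effects three or four generations out are rarely modeled by innovators.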

What’s the Answer?  We cannot, or should not, stop innovating.  We face too many fundamental issues with climate, food security and socio-political stability that need solutions, and need them quite quickly.

But the conundrum we face is that many, if not all, of these issues are rooted in past, well-intentioned innovation, and the unintended consequences that derive from it. So a lot of our innovation efforts are focused on solving issues created by previous rounds of innovation.  Nobody expected or intended the industrial revolution to impact our climate, but now much of our current innovation capability is rightly focused on managing the fallout it has created (again, pun intended).  Our challenge is that we need to continue to innovate, but also to break the cycle of today’s innovation being increasingly focused on fixing yesterday’s!

Today new waves of innovation associated with ‘sustainable’ technology, genetic manipulation, AI and quantum computing are already crashing onto our shores. These interdependent innovations will likely dwarf the industrial revolution in scale and complexity, and have the potential for massive impact, both good and bad. And they are occurring at a pace that gives us little time to deal with anticipated consequences, let alone unanticipated ones.

We’ll Find a Way?  One answer is to just let it happen, and fix things as we go. Innovation has always been a bumpy road, and humanity has a long history of muddling through. The agricultural revolution ultimately allowed humans to exponentially expand our population, but only after concentrating people into larger social groups that caused disease to ravage many societies. We largely solved that by dying in large numbers and creating herd immunity. It was a solution, but not an optimum one.  When London was in danger of being buried in horse poop, the internal combustion engine saved us, but that in turn ultimately resulted in climate change. According to projections from the Club of Rome in the ’70s, economic growth should have ground to a halt long ago, mired in starvation and population contraction.  Instead, advances in farming technology have allowed us to keep growing.  But that increase in population contributes substantially to our issues with climate today.  ‘We’ll find a way’ is an approach that works until it doesn’t.  And even when it works, it is usually not painless, and often simply defers rather than solves issues.

Anticipation?  Another option is to get better at both anticipating issues and triaging the unexpected. Maybe AI will give us the processing power to do this, provided of course that it doesn’t become our biggest issue in and of itself.

Slow Down and Be More Selective?  In a previous article I asked if ‘just because we can do it, does it mean we should?’.  That was through a primarily moral lens.  But I think unintended consequences make this an even bigger question for broader innovation strategy.  The more we innovate, the more consequences we likely create.  And the faster we innovate, the more vulnerable we are to fragility.  Slowing down creates resilience; speed reduces it.  So one option is to be more choiceful about innovations, and look more critically at the benefit-risk balance. For example, how badly do we need some of the new medications and vaccines being rushed to market?  Is all of our gene manipulation research needed? Do we really need a new phone every two years?   For sure, in some cases the benefits are clear, but in other cases, is profit driving us more than it should?

In a similar vein, but to be provocative, are we also moving too quickly with renewable energy?  It is certainly something we need.  But are we, for example, pinning too much on a single, almost first-generation form of large-scale solar technology?  We are still at the steep part of the learning curve, so are quite likely missing unintended consequences.  Would a more staged transition over a decade or so add more resilience, allow us to optimize the technology based on real-world experience, and help us ferret out unanticipated issues? Should we be creating a more balanced portfolio, and leaning more on established technology such as nuclear? Sometimes moving a bit more slowly ultimately gets you there faster, and a long-term issue like climate is a prime candidate for balancing speed, optimization and resilience to ultimately create a more efficient, robust and better understood network.

The speed of AI development is another obvious question, but I suspect a more difficult one to evaluate.  In this case, Pandora’s Box is open, and calls to slow AI research would likely mean responsible players would stop, but research would continue elsewhere, either underground or in less responsible nations.  A North Korean AI that is superior to anyone else’s is an example where the risk of not moving likely outweighs the risk of unintended consequences.

Regulation?  Regulation is a good way of forcing more thoughtful evaluation of benefit versus risk. But it only works if regulators (government) understand technology, or at least its benefits versus risks, better than its developers.  This can work reasonably well in pharma, where we have a long track record. But it is much more challenging in newer areas of technology. AI is a prime example where this is almost certainly not the case.  And as the complexity of all innovation increases, regulation will become less effective, and increasingly likely to create unintended consequences of its own.

I realize that this may all sound a bit alarmist, and certainly any call to slow down renewable energy conversion or pharma development is going to be unpopular.  But history has shown that slowing down creates resilience, while speeding up creates instability and waves of growth and collapse.  And an arms race where much of our current innovative capability is focused on fixing issues created by previous innovations is one we always risk losing.  So, as unanticipated consequences are, by definition, really difficult to anticipate, is this a point in time where we in the innovation community need to have a discussion about slowing down and being more selective?  Where should we innovate, and where not?  When should we move fast, and when might we be better served by some productive procrastination?  Do we need better risk assessment processes? It’s always easier to do this kind of analysis in hindsight, but do we really have that luxury?

Image credit: Pixabay







Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to anticipate potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI.  In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project, it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have the potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do both great good or cause great harm. Public opinion is one of the few tools we have to help curb mis-use of technology, especially in democracies. But Behavioral Science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media. There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950’s. But technical innovation has created a potentially far more influential infrastructure than the 1950’s movie theater.   We now spend a significant portion of our lives on line, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume.   It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications. There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully or many other situations where the benefit/paternalism equation is pretty clear. 
But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it's all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and the potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of 'do no harm' Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try to anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the 'Cobra Effect'. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the British scrapped the reward. Cobra breeders then set the now-worthless snakes free. The result was more cobras than at the original starting point. It's amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Will ChatGPT Make Us More or Less Innovative?


GUEST POST from Pete Foley

The rapid emergence of increasingly sophisticated 'AI' programs such as ChatGPT will profoundly impact our world in many ways. That will inevitably include innovation, especially the front end. But will it ultimately help or hurt us? Better access to information should be a huge benefit, and my intuition was to dive in and take full advantage. I still think it has enormous upside, but I also think it needs to be treated with care. At this point at least, it's still a tool, not an oracle. It's an excellent source for tapping existing information, but it's not (yet) a source of new ideas. As with any tool, those who understand deeply how it works, its benefits and its limitations, will get the most from it. And those who use it wrongly could end up doing more harm than good. So below I've mapped out a few pros and cons that I see. It's new, and like everybody else, I'm on a learning curve, so I would welcome any and all thoughts on these pros and cons:

What is Innovation?

First a bit of a sidebar. To understand how to use a tool, I need at least a reasonably clear idea of what goals I want it to help me achieve. Obviously 'what is innovation' is a somewhat debatable topic, but my working model is that the front end of innovation typically involves taking existing knowledge or technology, and combining it in new, useful ways, or in new contexts, to create something that is new, useful and ideally understandable and accessible. This requires deep knowledge, curiosity and the ability to reframe problems to find new uses for existing assets. A recent illustrative example is Oculus Rift, an innovation that helped make virtual reality accessible by combining fairly mundane components, including a mobile phone screen, a tracking sensor and ski goggles, into something new. But innovation comes in many forms, and can also involve serendipity and keen observation, as in Alexander Fleming's original discovery of penicillin. But even this required deep domain knowledge to spot the opportunity and reframe undesirable mold into a (very) useful pharmaceutical. So my starting point is: which parts of this can ChatGPT help with?

Another sidebar is that innovation is of course far more than simply discovery or a Eureka moment. Turning an idea into a viable product or service usually requires considerable work, with the development of penicillin being a case in point. I've no doubt that ChatGPT and its inevitable 'progeny' will be of considerable help in that part of the process too. But for starters I've focused on what it brings to the discovery phase, and the generation of big, game-changing ideas.

First the Pros:

1. Staying Current: We all have to strike a balance between keeping up with developments in our own fields, and trying to come up with new ideas. The sheer volume of new information, especially in developing fields, means that keeping pace with even our own area of expertise has become challenging. But spend too much time just keeping up, and we become followers, not innovators, so we have to carve out time to also stretch existing knowledge. If we don't get the balance right, and fail to stay current, we risk getting leapfrogged by those who more diligently track the latest discoveries. Simultaneous invention has been pervasive at least since the development of calculus, as one discovery often signposts and lays the path for the next. So fail to stay on top of our field, and we potentially miss a relatively easy step to the next big idea. ChatGPT can become an extremely efficient tool for tracking advances without getting buried in them.

2. Pushing Outside of Our Comfort Zone: Breakthrough innovation almost by definition requires us to step beyond the boundaries of our existing knowledge. Whether we are Dyson stealing filtration technology from a sawmill for his unique 'filterless' vacuum cleaner, physicians combining stem cell innovation with tech to create rejection-resistant artificial organs, or the Oculus tech mentioned above, innovation almost always requires tapping resources from outside of the established field. If we don't do this, then we not only tend towards incremental ideas, but also tend to stay in lockstep with other experts in our field. This becomes increasingly the case as an area matures, low-hanging fruit is exhausted, and domain knowledge becomes somewhat commoditized. ChatGPT simply allows us to explore beyond our field far more efficiently than we've ever been able to before. And as it or related tech evolves, it will inevitably enable ever more sophisticated search. From my experience it already enables some degree of analogical search if you are thoughtful about how you frame questions, thus allowing us to more effectively expand searches for existing solutions to problems that lie beyond the obvious. That is potentially really exciting.

Some Possible Cons:

1. Going Down the Rabbit Hole: ChatGPT is crack cocaine for the curious. Mea culpa, this has probably been the most time-consuming blog I've ever written. Answers inevitably lead to more questions, and it's almost impossible to resist playing well beyond the specific goals I initially had. It's fascinating, it's fun, you learn a lot of stuff you didn't know, but I at least struggle with discipline and focus when using it. Hopefully that will wear off, and I will find a balance that uses it efficiently.

2. The Illusion of Understanding: This is a bit more subtle, but the process of researching a topic inevitably enhances our understanding of it. The act of asking questions is as much a part of learning as reading answers, and often requires deep mechanistic understanding. ChatGPT helps us probe faster, and its explanations may help us to understand concepts more quickly. But it also risks the illusion of understanding. When the heavy lifting of searching is shifted away from us, we get quick answers, but may also miss out on the deeper mechanistic understanding we'd have gleaned if we'd been forced to work a bit harder. And that deeper understanding can be critical when we are trying to integrate superficially different domains as part of the innovation process. For example, knowing that we can use a patient's stem cells to minimize rejection of an artificial organ is quite different from understanding how the immune system differentiates between its own and other stem cells. The risk is that sophisticated search engines will do more of the heavy lifting, allowing us to move faster, but also resulting in a more superficial understanding, which reduces our ability to spot roadblocks early, or to solve problems as we move to the back end of innovation and reduce an idea to practice.

3. Eureka Moments: That's the 'conscious' watch-out, but there is also an unconscious one. It's no secret that quite often our biggest ideas come when we are not actually trying. Archimedes had his Eureka moment in the bath, and many of my better ideas come when I least expect them, perhaps in the shower, when I first wake up, or when I'm out having dinner. The neuroscience of creativity helps explain this: the restructuring of problems that leads to new insight, and the integration of ideas, works mostly unconsciously, when we are not consciously focused on a problem. It's analogous to the 'tip of the tongue' effect, where the harder we try to remember something, the harder it gets, but then it comes to us later when we are not trying. But the key for the Eureka moment is that we need sufficiently deep knowledge for those integrations to occur. If ChatGPT increases the illusion of understanding, we could see fewer of those Eureka moments, and the 'obvious in hindsight' ideas they create.

Conclusion

I think that ultimately innovation will be accelerated by ChatGPT and what follows, perhaps quite dramatically. But I also think that we as innovators need to try to peel back the layers and understand as much as we can about these tools, as there is potential for us to trip up. We need to constantly reinvent the way we interact with them, leveraging them as sophisticated innovation tools while avoiding them becoming oracles. We also need to ensure that we, and future generations, use them to extend our thinking skill set, not as a proxy for it. The calculator has in some ways made us all mathematical geniuses, but in other ways has reduced large swathes of the population's ability to do basic math. We need to be careful that ChatGPT doesn't do the same for our need for cognition, and for deep mechanistic and critical thinking.

Image credit: Pixabay







Pele and Vivienne Westwood – Innovators Lost


GUEST POST from Pete Foley

The loss of Pele and Vivienne Westwood, two giants of innovation in their respective fields, marks a sad end to 2022. But both left legacies that can inspire us as we navigate a likely challenging New Year.

Humble Beginnings: Both rose from humble beginnings to become national and international institutions. Pele was an artist with a football, Westwood with fabric and design. Both were resilient, multifaceted, creative, and had the courage to challenge the status quo. Pele famously honed his football skills by kicking around grapefruits in a desperately poor neighborhood. Westwood came from humble British working-class origins, where her parents were factory and mill workers.

Pele was a complete footballer, talented with head, foot and mind. He was both creative and practical, and turned football into an art form. A graceful embodiment of the beautiful game, he invented moves, and developed techniques and skills that not only entertained, but also created a new technical platform for future masters such as Cruyff, Neymar and Messi. But he was also extremely successful, winning three World Cups, and scoring over 700 goals for club and country. Furthermore, he was a great ambassador for Brazil and for football. But perhaps most important of all, he was an inspiration to countless youngsters. He embodied the idea that hard work, hard-earned skill, a creative mindset and a passionate work ethic could forge a path from poverty to success. A model that inspired many in sports and beyond.

Westwood was similarly both skilled and creatively fearless. She emerged as part of the leading edge of the punk scene in the UK, closely entwined with Malcolm McLaren and the Sex Pistols. But after splitting with McLaren, she forged her own unique and highly successful path. She blended historical materials and fashion references with post-punk individualism to create emergent, maverick designs. Designs that somewhat ironically mainstreamed and embodied British eccentricity, but that also held global appeal.  Like Pele, she was a leader who saw things before anyone else in her field, and ultimately, as the first Dame of Punk, turned that vision into both financial and social success.

Nobody lives forever, and few get to reach the heights of Pele or Westwood. But we can all hopefully learn a little from them. Both were leaders, unafraid to follow their own vision. Both were resilient, with the courage and belief to overcome hardship and challenges. Both blended existing norms in new ways to create new, emergent forms. They didn't just stand on, but rose above, the shoulders of giants. Both are missed, but both live on in the legacy and lessons they leave.

Published simultaneously on LinkedIn

Image credit: Unsplash







Preserving Ecosystems as an Innovation Superpower

Lessons from Picasso and David Attenborough


GUEST POST from Pete Foley

We probably all agree that the conservation of our natural world is important. Sharing the planet with other species is not only ethically and emotionally the right thing to do, but it's also enlightened self-interest. A healthy ecosystem helps equilibrate and stabilize our climate, while the largely untapped biochemical reservoir of the natural world has enormous potential for pharmaceuticals, medicine and hence long-term human survival.

Today I’m going to propose yet another reason why conservation is in our best interest. And not just the preservation of individual species, but also the maintenance of the complex, interactive ecosystems in which individual species exist.

Biomimicry: Nature is not only a resource for pharmaceuticals, but also an almost infinite resource for innovation that transcends virtually every field we can, or will, imagine. This is not a new idea. Biomimicry, the concept of mimicking nature's solutions to a broad range of problems, was popularized by Janine Benyus in 1997. But humans have intuitively looked to nature to help solve problems throughout history. Silk production is an ancient biotechnology that co-opts the silkworm, while many early human habitations were caves, a natural phenomenon. More recently, Velcro, wind turbines, and elements of bullet train design have all been attributed to innovation inspired by nature.

And biomimicry, together with related areas such as biomechanics and bio-utilization, taps into the fundamental core of what the front end of innovation is all about. Dig deep into virtually any innovation, and we'll find it has been stolen from another source. For example, early computers reapplied punch cards from tapestry looms. The Beatles stole and blended liberally from the blues, skiffle, music hall, reggae and numerous other sources. 'Uberization' has created a multitude of new businesses, from AirBNB to nanny, housecleaning and food prep services. Medical suturing was directly 'stolen' from embroidery, the Dyson vacuum from a sawmill, oral care calcium deposition technology was reapplied from laundry detergents, and so on.

Picasso – Great Artists Steal! This is also the creative process espoused by Pablo Picasso when he said ‘good artists borrow, great artists steal’. He ‘stole’ elements of African sculpture and blended them with ideas from contemporaries such as Cézanne to create analytical cubism. In so doing he combined existing knowledge in new ways that created a revolutionary and emergent form of art – one that asked the viewer to engage with a painting in a whole new way. Innovation incarnate!

Ecosystems as an Innovation Resource: The biological world is the biggest source of potential innovative ideas we have at our disposal anywhere. Hence it is an intuitive place to go looking for ideas to solve our biggest innovation challenges. But despite many people trying to leverage this potential goldmine, including myself, it's never really achieved its full potential. For sure, there are a few great examples, such as Velcro, bullet train flow dynamics or sharkskin surfaces. But given how long we've been playing in this sandbox, there are far too few successes. And of those, far too many are based on hindsight, as opposed to using nature to solve a specific challenge. Just look at virtually any article on biomimicry, and the same few success stories show up year after year.

The Resource/Source Paradox: One issue that helps explain this is that the natural world is an almost infinite repository of information. That scale creates a challenging 'signal to noise' search problem. The result is enormous potential, coupled with almost inevitably high failure rates, as we struggle to find the most useful insights.

Innovation is More than Ideation: Another challenge is that innovation is not just about ideas or invention; it's about turning those ideas into practice. In the case of biomimicry, that is particularly hard, as converting natural technology into viable commercial technology is hampered because nature works on fundamentally different design principles, and uses very different materials than we do. Evolution builds at a nano scale, is highly context dependent, and is result- rather than theory-led. Materials are usually organic, often water based, and are grown rather than manufactured. Very different from most conventional human engineering.

Tipping Point: But the good news is that materials science, technology, 3D printing, and computational and data processing power, together with nascent AI, are evolving at such a fast rate that I'm optimistic we will soon reach a tipping point that will make the search and translation of natural innovations considerably easier than it is today. Self-learning systems should be able to more easily replicate natural information processing, and 3D printing and nano structures should be able to better mimic the physical constructs of natural systems. AI, or at least massively increased computing power, should make it easier for us both to ask the right questions and to search large, complex databases.

Conservation as an Innovation Superpower: And that brings me back to conservation as an innovation superpower. If we don’t protect our natural environment, we’ll have a lot less to search, and a lot less to mimic. And that applies to ecosystems as well as individual species. Take the animal or plant out of its natural environment, and it becomes far more difficult to untangle how or why it has evolved in a certain way.

Evolution is the ultimate exploiter of serendipity. It does not have to understand why something works; it simply runs experiments until it stumbles on solutions that do, and natural selection picks the winner(s). That leads to some surprisingly sophisticated innovation. For example, we are only just starting to understand the quantum effects used in avian navigation and photosynthesis. Migratory birds don't have deep knowledge of quantum mechanics; the beauty of evolution is that they don't need to. The benefit to us is that we can potentially tap into sophisticated innovation at the leading edge of our theoretical knowledge, provided we know how to define problems, where to look, and have sufficient knowledge to decipher it and reduce it to practice. The bad news is that we don't know what we don't know. Evolution tapped into quantum mechanics millennia before we knew what it was, so who knows what other innovations lie waiting to be discovered as our knowledge catches up with nature, the ultimate experimenter.

Ecosystems Matter: But a species without the context of its ecosystem is at best half the story. Nature has solved flight, deep-water exploration, carbon sequestration, renewable energy, high and low temperature resilience and so many more challenges. And it has also done so with 100% utilization and recycling on a systems basis. But most of the underlying innovations solve very specific problems, and so require deep understanding of context.

The Zebra Conundrum: Take the zebra as an example. I was recently watching a David Attenborough documentary about zebras. As a tasty prey animal surrounded by highly efficient predators such as lions, leopards, cheetahs and hyenas, the zebra is an evolutionary puzzle. Why has it evolved a high contrast coat that grabs attention and makes it visible from miles away? High contrast is a fundamental visual cue, meaning that even if a predator is not particularly hungry, it is pretty much compelled to take notice of the hapless zebra. But despite this, the zebra has done pretty well, and the plains of Africa are scattered with this very successful animal. The explanation for this has understandably been the topic of much conjecture and research, and to this day remains somewhat controversial. But more and more, the explanation is narrowing onto a surprisingly obvious culprit: the tsetse fly. When we think of the dangers to a large mammal, we automatically think of large predators. But while zebras undoubtedly prefer to avoid being eaten by lions, diseases associated with tsetse fly bites kill more of them. That means that avoiding tsetse flies likely creates stronger evolutionary pressure than avoiding lions, and that is proving to be a promising explanation for the zebra's coat. Far fewer flies land on or bite animals with stripes. Exactly why that is remains debatable, and theories range from disrupting the flies' vision when landing, to creating mini weather fronts due to differential heating and cooling from the stripes. But whatever the mechanism ultimately turns out to be, stripes stop flies. It appears that the obvious big predators were not the answer after all.

Context Matters: But without deep understanding of the context in which the zebra evolved, this would have been very difficult to unravel. Even if we’d conserved zebras in zoos, finding the tsetse fly connection without the context of the complex African savannah would be quite challenging. It’s all too easy to enthusiastically chase an obvious cause of a problem, and so miss the real one, and our confirmation bias routinely amplifies this.

We often talk about protecting species. But as our technology evolves to more effectively 'steal' ideas from natural systems, preserving context, in the form of complex ecosystems, may turn out to be at least as important as preserving individual species, from an innovation perspective alone. We don't know what we don't know, and often the surprisingly obvious and critical answer to a puzzle can only be found by exploring that puzzle in its natural environment.

Enlightened Self-Interest: Could we use an analogy to the zebra to help control malaria? Could we steal avian navigation for GPS? I have no idea, but I believe this makes pursuing conservation enlightened self-interest of the highest order. We want to save the environment for all sorts of reasons, but one of the most interesting is that one day, some part of it could save us.

Image credit: Pixabay
