Author Archives: Pete Foley

About Pete Foley

Pete Foley is a consultant who applies Behavioral Science to catalyze innovation for Retail, Hospitality, Product Design, Branding and Marketing Design. He applies insights derived from consumer and shopper psychology, behavioral economics, perceptual science, and behavioral design to create practical solutions to difficult business challenges. He brings 25 years’ experience as a serial innovator at P&G. He has over 100 published or granted patents, has published papers in behavioral economics, evolutionary psychology and visual science, is an exhibited artist and photographer, and an accomplished musician.

When Innovation Becomes Magic


GUEST POST from Pete Foley

Arthur C. Clarke’s Third Law famously states:

“Any sufficiently advanced technology is indistinguishable from magic”

In other words, if the technology of an advanced civilization is far enough beyond a less advanced one’s comprehension, it appears to that civilization as magic. This could take the form of a human encounter with a highly advanced extraterrestrial civilization, how current technology might be viewed by historical figures, or encounters between human cultures with different levels of scientific and technological knowledge.

Clarke’s law implicitly assumes that knowledge within a society is sufficiently democratized that we never view technology from within our own civilization as ‘magic’. But a combination of specialization, rapid advancements in technology, and a highly stratified society means this is changing. Generative AI, blockchain and various forms of automation are all ‘everyday magic’ that we increasingly use, but mostly with little more than an illusion of understanding of how they work. More technological leaps are on the horizon, and as innovation accelerates exponentially, we are all going to have to navigate a world that looks and feels increasingly magical. Knowing how to do this effectively is going to become an increasingly important skill for us all.

The Magic Behind the Curtain:  So what’s the problem? Why do we need to understand the ‘magic’ behind the curtain, as long as we can operate the interface, and reap the benefits?  After all, most of us use phones, computers, cars, or take medicines without really understanding how they work.  We rely on experts to guide us, and use interfaces that help us navigate complex technology without a need for deep understanding of what goes on behind the curtain.

It’s a nuanced question. Take a car as an analogy. We certainly don’t need to know how to build one in order to use one. But we do need to know how to operate it, and to understand its performance limitations. It also helps to have at least some basic knowledge of how it works; enough to change a tire on a remote road, or to have some concept of basic mechanics to minimize the potential of being ripped off by a rogue mechanic. In a nutshell, the more we understand it, the more efficiently, safely and economically we leverage it. It’s a similar situation with medicine. It is certainly possible to defer all of our healthcare decisions to a physician. But people who partner with their doctors and become advocates for their own health generally have superior outcomes, are less likely to die from unintended contraindications, and typically pay less for healthcare. And this is not trivial: issues associated with prescription medications are the third leading cause of death in Europe, behind cancer and heart disease. We don’t need to know everything to use a tool, but in most cases, the more we know, the better.

The Speed/Knowledge Trade-Off: With new, increasingly complex technologies coming at us in waves, it’s becoming increasingly challenging to make sense of what’s ‘behind the curtain’. This has the potential for costly mistakes. But delaying embracing technology until we fully understand it can come with serious opportunity costs. Adopt too early, and we risk getting it wrong; too late, and we ‘miss the bus’. How many people who invested in cryptocurrency or NFTs really understood what they were doing? And how many of those have lost on those deals, often to the benefit of those with deeper knowledge? That isn’t in any way to suggest that those who are knowledgeable in those fields deliberately exploit those who aren’t, but markets tend to reward those who know, and punish those who don’t.

The AI Oracle: The recent rise of Generative AI has many people treating it essentially as an oracle. We ask it a question, and it ‘magically’ spits out an answer in a very convincing and shareable format. Few of us understand the basics of how it does this, let alone the details or limitations. We may not call it magic, but we often treat it as such. We really have little choice, as we lack sufficient understanding to apply quality critical thinking to what we are told, and so have to take answers on trust. That would be brilliant if AI were foolproof. But while it is certainly right a lot of the time, it does make mistakes, often quite embarrassing ones. For example, Google’s Bard incorrectly claimed the James Webb Space Telescope had taken the first photo of a planet outside our solar system, which led to panic selling of parent company Alphabet’s stock. Generative AI is a superb innovation, but its current iterations are far from perfect. They are limited by the databases they are fed, are extremely poor at spotting their own mistakes, can be manipulated through the choice of datasets they are trained on, and lack the underlying framework of understanding that is essential for critical thinking or for making analogical connections. I’m sure that we’ll eventually solve these issues, either with iterations of current tech, or via integration of new technology platforms. But until we do, we have a brilliant but still flawed tool. It’s mostly right, and perfect for quickly answering a lot of questions, but its biggest vulnerability is that most users have pretty limited capability to recognize when it’s wrong.

Technology Blind Spots: That of course is the Achilles’ heel, or blind spot, and a dilemma. If an answer is wrong, and we act on it without realizing it, it’s potentially trouble. But if we already know the answer, we didn’t really need to ask the AI. Of course, it’s more nuanced than that. Just getting the right answer is not always enough, as the causal understanding that we pick up by solving a problem ourselves can also be important. It helps us to spot obvious errors, but also helps to generate memory, experience, problem-solving skills, buy-in, and belief in an idea. Procedural and associative memory is encoded differently from simple answers, and mechanistic understanding helps us to reapply insights and make analogies.

Need for Causal Understanding: Belief and buy-in can be particularly important. Different people respond to a lack of ‘internal’ understanding in different ways. Some shy away from the unknown and avoid or oppose what they don’t understand. Others embrace it, and trust the experts. There’s really no right or wrong in this. Science is a mixture of both approaches: it stands on the shoulders of giants, but advances by challenging existing theories. Good scientists are both data-driven and skeptical. But in some cases skepticism based on lack of causal understanding can be a huge barrier to adoption. It has contributed to many of the debates we see today around technology adoption, including genetically engineered foods, the efficacy of certain pharmaceuticals, environmental contaminants, nutrition, vaccinations, and, during Covid, mRNA vaccines and even masks. Even extremely smart people can make poor decisions because of a lack of causal understanding. In 2003, Steve Jobs was advised by his physicians to undergo immediate surgery for a rare form of pancreatic cancer. Instead he delayed the procedure for nine months and attempted to treat himself with alternative medicine, a decision that very likely cut his life tragically short.

What Should We Do?  We need to embrace new tools and opportunities, but we need to do so with our eyes open. Loss aversion and the fear of missing out are powerful motivators of human behavior, and so important drivers in the adoption of new technology. But they can be costly. A lot of people lost out with crypto and NFTs because they had a fairly concrete idea of what they could miss out on if they didn’t engage, but a much less defined idea of the risk, because they didn’t deeply understand the system. Ironically, in this case, our loss aversion bias caused a significant number of people to lose out!

Similarly with AI, a lot of people are embracing it enthusiastically, in part because they are afraid of being left behind. That is probably right, but it’s important to balance this enthusiasm with an understanding of its potential limitations. We may not need to know how to build a car, but it really helps to know how to steer and when to apply the brakes. Knowing how to ask an AI questions, and when to double-check answers, are both going to be critical skills. For big decisions, ‘second opinions’ are going to become extremely important. And the human ability to interpret answers through a filter of nuance, critical thinking, different perspectives, analogy and appropriate skepticism is going to be a critical element in fully leveraging AI technology, at least for now.

Today AI is still a tool, not an oracle. It augments our intelligence, but for complex, important or nuanced decisions or information retrieval, I’d be wary of sitting back and letting it replace us. Its ability to process data in quantity is certainly superior to any human’s, but we still need humans to interpret, challenge and integrate information. The winners of this iteration of AI technology will be those who become highly skilled at walking that line, and who are good at managing the trade-off between speed and accuracy using AI as a tool. The good news is that we are naturally good at this; it’s a critical function of the human brain, embodied in the way it balances Kahneman’s System 1 and System 2 thinking. Future iterations may not need us, but for now AI is a powerful partner and tool, not a replacement.

Image credit: Pixabay


Unintended Consequences: The Hidden Risk of Fast-Paced Innovation


GUEST POST from Pete Foley

Most innovations go through a similar cycle, often represented as an s-curve.

We start with something potentially game-changing. It’s inevitably a rough-cut diamond: un-optimized and not fully understood. But we then optimize it. This usually starts with a fairly steep learning curve as we address ‘low-hanging fruit’, but then evolves into a fine-tuning stage. Eventually we squeeze efficiency from it to the point where incremental improvements no longer justify their cost. We then either commoditize it, or jump to another s-curve.

This is certainly not a new model, and there are multiple variations on the theme.  But as the pace of innovation accelerates, something fundamentally new is happening with this s-curve pattern.  S-curves are getting closer together. Increasingly we are jumping to new s-curves before we’ve fully optimized the previous one.  This means that we are innovating quickly, but also that we are often taking more ‘leaps into the dark’ than ever before.
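As a purely illustrative aside (a toy sketch with invented numbers, not anything from the original analysis), the classic s-curve is often modeled as a logistic function. The Python sketch below pictures successive technology generations whose hypothetical midpoints arrive closer and closer together, so each jump happens before the previous curve has fully matured:

```python
import numpy as np

def s_curve(t, midpoint, steepness=0.5, ceiling=1.0):
    """Logistic function: a standard way to model an innovation s-curve.
    Performance ramps up slowly, accelerates, then plateaus near the ceiling."""
    return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

# Hypothetical technology generations, with midpoints arriving closer
# and closer together as the pace of innovation accelerates.
midpoints = [5, 11, 15, 17]

# How mature is each generation at the moment we jump to the next one?
for this_gen, next_gen in zip(midpoints, midpoints[1:]):
    maturity = s_curve(next_gen, this_gen)
    print(f"Generation centered at t={this_gen} is ~{maturity:.0%} optimized "
          f"when the jump at t={next_gen} happens")
```

With these made-up numbers, the three successive jumps happen at roughly 95%, 88% and 73% of the previous generation’s potential: each leap is taken a little more ‘in the dark’ than the one before.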

This closer spacing has some unintended consequences of its own:

1. Cumulative Unanticipated Consequences. No matter how much we try to anticipate how a new technology will fare in the real world, there are always surprises. Many surprises emerge soon after we hit the market, and create fires that have to be put out quite quickly (literally, in the case of some battery technologies). But other unanticipated effects can be slower burn (pun intended). The most pertinent example of this is of course greenhouse gases from industrialization, and their impact on our climate, which took us years to recognize. But there are many more examples, including the rise of antibiotic resistance, plastic pollution, hidden carcinogens, the rising cost of healthcare and the mental health issues associated with social media. Just as the killer application for a new innovation is often missed at its inception, its killer flaws can be too. And if the causal relationship between these issues and the innovation is indirect, they can accumulate across multiple s-curves before we notice them. By the time we do, the technology is often so entrenched that it can be a huge challenge to extract ourselves from it.

2. Poorly Understood Complex Network Effects. The impact of new innovation is very hard to predict when it is introduced into a complex, multivariable system. A butterfly flapping its wings can cascade and amplify through a system, and when the butterfly is transformative technology, the effect can be profound. We usually have line of sight to first-generation causal effects: for example, we know that electric cars use an existing electric grid, as do solar energy farms. But in today’s complex, interconnected world, it’s difficult to predict second-, third- or fourth-generation network effects, and likely not cost-effective or efficient for an innovator to try to do so. For example, the supply-demand interdependency of solar and electric cars is a second-generation network effect that we are aware of, but that is already challenging to fully predict. More causally distant effects can be even more challenging: for example, funding the road network without a gas tax, the interdependency of gas and electric cost and supply as we transition, and the impact that will have on broader global energy costs and socio-political stability. Then add in the complexities of supplying the new raw materials needed to support the new battery technologies. These are pretty challenging to model, and of course they are only the challenges we are at least aware of. The unanticipated consequences of such a major change are, by definition, unanticipated!

3. Fragile Foundations. In many cases, one s-curve forms the foundation of the next. So if we have not optimized the previous s-curve sufficiently, flaws potentially carry over into the next, often in the form of ‘givens’. For example, an electric car is a classic s-curve jump from internal combustion engines. But for reasons that include design efficiency, compatibility with existing infrastructure, and perhaps most importantly, consumer cognitive comfort, much of the supporting design and technology carries over from previous designs. We have redesigned the engine, but have only evolved wheels, brakes, etc., and have kept legacies such as 4+ seats. But automobiles are, in many ways, one of our more stable foundations: we have had a lot of time to stabilize past s-curves before jumping to new ones. Newer technologies such as AI, social media and quantum computing have enjoyed far less time to stabilize foundational s-curves before we dance across to embrace closely spaced new ones. That will likely increase the chances of unintended consequences. And we are already seeing canaries in the coal mine, with unexpected mental health and social instability increasingly associated with social media.

What’s the Answer?  We cannot, and should not, stop innovating. We face too many fundamental issues with climate, food security and socio-political stability that need solutions, and need them quite quickly.

But the conundrum we face is that many, if not all, of these issues are rooted in past, well-intentioned innovation, and the unintended consequences that derive from it. So a lot of our innovation efforts are focused on solving issues created by previous rounds of innovation. Nobody expected or intended the industrial revolution to impact our climate, but now much of our current innovation capability is rightly focused on managing the fallout it has created (again, pun intended). Our challenge is that we need to continue to innovate, but also to break the cycle of today’s innovation being increasingly focused on fixing yesterday’s!

Today new waves of innovation associated with ‘sustainable’ technology, genetic manipulation, AI and quantum computing are already crashing onto our shores. These interdependent innovations will likely dwarf the industrial revolution in scale and complexity, and have the potential for massive impact, both good and bad. And they are occurring at a pace that gives us little time to deal with anticipated consequences, let alone unanticipated ones.

We’ll Find a Way?  One answer is to just let it happen, and fix things as we go. Innovation has always been a bumpy road, and humanity has a long history of muddling through. The agricultural revolution ultimately allowed humans to exponentially expand our population, but only after concentrating people into larger social groups that caused disease to ravage many societies. We largely solved that by dying in large numbers and creating herd immunity. It was a solution, but not an optimum one. When London was in danger of being buried in horse poop, the internal combustion engine saved us, but that in turn ultimately resulted in climate change. According to projections from the Club of Rome in the 1970s, economic growth should have ground to a halt long ago, mired in starvation and population contraction. Instead, advances in farming technology have allowed us to keep growing. But that increase in population contributes substantially to our issues with climate today. ‘We’ll find a way’ is an approach that works until it doesn’t. And even when it works, it is usually not painless, and often simply defers rather than solves issues.

Anticipation?  Another option is that we have to get better both at anticipating issues and at triaging the unexpected. Maybe AI will give us the processing power to do this, provided of course that it doesn’t become our biggest issue in and of itself.

Slow Down and Be More Selective?  In a previous article I asked if ‘just because we can do it, does it mean we should?’ That was through a primarily moral lens. But I think unintended consequences make this an even bigger question for broader innovation strategy. The more we innovate, the more consequences we likely create. And the faster we innovate, the more vulnerable we are to fragility. Slowing down creates resilience; speed reduces it. So one option is to be more choiceful about which innovations we pursue, and to look more critically at the benefit-risk balance. For example, how badly do we need some of the new medications and vaccines being rushed to market? Is all of our gene manipulation research needed? Do we really need a new phone every two years? For sure, in some cases the benefits are clear, but in other cases, is profit driving us more than it should?

In a similar vein, but to be provocative, are we also moving too quickly with renewable energy? It is certainly something we need. But are we, for example, pinning too much on a single, almost first-generation form of large-scale solar technology? We are still at the steep part of the learning curve, so are quite likely missing unintended consequences. Would a more staged transition over a decade or so add more resilience, allow us to optimize the technology based on real-world experience, and help us ferret out unanticipated issues? Should we be creating a more balanced portfolio, and leaning more on established technology such as nuclear? Sometimes moving a bit more slowly ultimately gets you there faster, and a long-term issue like climate is a prime candidate for balancing speed, optimization and resilience to ultimately create a more efficient, robust and better understood network.

The speed of AI development is another obvious question, but I suspect a more difficult one to evaluate. In this case, Pandora’s box is open, and calls to slow AI research would likely mean responsible players would stop, while research continued elsewhere, either underground or in less responsible nations. A North Korean AI that is superior to anyone else’s is an example where the risk of not moving likely outweighs the risk of unintended consequences.

Regulation?  Regulation is a good way of forcing more thoughtful evaluation of benefit versus risk. But it only works if regulators (government) understand technology, or at least its benefits versus risks, better than its developers.  This can work reasonably well in pharma, where we have a long track record. But it is much more challenging in newer areas of technology. AI is a prime example where this is almost certainly not the case.  And as the complexity of all innovation increases, regulation will become less effective, and increasingly likely to create unintended consequences of its own.

I realize that this may all sound a bit alarmist, and certainly any call to slow down renewable energy conversion or pharma development is going to be unpopular. But history has shown that slowing down creates resilience, while speeding up creates instability and waves of growth and collapse. And an arms race where much of our current innovative capability is focused on fixing issues created by previous innovations is one we always risk losing. So as unanticipated consequences are, by definition, really difficult to anticipate, is this a point in time where we in the innovation community need to have a discussion about slowing down and being more selective? Where should we innovate, and where not? When should we move fast, and when might we be better served by some productive procrastination? Do we need better risk assessment processes? It’s always easier to do this kind of analysis in hindsight, but do we really have that luxury?

Image credit: Pixabay

Just Because We Can, Doesn’t Mean That We Should!


GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to catch potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is blind, and innovators who’ve invested themselves in a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have the potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But behavioral science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media. There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications. There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully, or in many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and the potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try to anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Airbags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set their now-worthless snakes free, and the result was more cobras than at the original start point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay

Will ChatGPT Make Us More or Less Innovative?


GUEST POST from Pete Foley

The rapid emergence of increasingly sophisticated ‘AI’ programs such as ChatGPT will profoundly impact our world in many ways. That will inevitably include innovation, especially the front end. But will it ultimately help or hurt us? Better access to information should be a huge benefit, and my intuition was to dive in and take full advantage. I still think it has enormous upside, but I also think it needs to be treated with care. At this point at least, it’s still a tool, not an oracle. It’s an excellent source for tapping existing information, but it’s not (yet) a source of new ideas. As with any tool, those who understand deeply how it works, its benefits and its limitations, will get the most from it. And those who use it wrongly could end up doing more harm than good. So below I’ve mapped out a few pros and cons that I see. It’s new, and like everybody else, I’m on a learning curve, so I would welcome any and all thoughts on these pros and cons:

What is Innovation?

First, a bit of a sidebar. To understand how to use a tool, I at least need to have a reasonably clear idea of what goals I want it to help me achieve. Obviously ‘what is innovation’ is a somewhat debatable topic, but my working model is that the front end of innovation typically involves taking existing knowledge or technology and combining it in new, useful ways, or in new contexts, to create something that is new, useful and ideally understandable and accessible. This requires deep knowledge, curiosity and the ability to reframe problems to find new uses for existing assets. A recent illustrative example is Oculus Rift, an innovation that helped to make virtual reality accessible by combining fairly mundane components, including a mobile phone screen, a tracking sensor and ski goggles, into something new. But innovation comes in many forms, and can also involve serendipity and keen observation, as in Alexander Fleming’s original discovery of penicillin. But even this required deep domain knowledge to spot the opportunity, and the reframing of an undesirable mold into a (very) useful pharmaceutical. So, my start-point is: which parts of this can ChatGPT help with?

Another sidebar is that innovation is of course far more than simply discovery or a Eureka moment. Turning an idea into a viable product or service usually requires considerable work, with the development of penicillin being a case in point. I’ve no doubt that ChatGPT and its inevitable ‘progeny’ will be of considerable help in that part of the process too. But for starters I’ve focused on what it brings to the discovery phase, and the generation of big, game-changing ideas.

First the Pros:

1. Staying Current: We all have to strike a balance between keeping up with developments in our own fields and trying to come up with new ideas. The sheer volume of new information, especially in developing fields, means that keeping pace with even our own area of expertise has become challenging. But spend too much time just keeping up, and we become followers, not innovators, so we have to carve out time to also stretch existing knowledge. If we don’t get the balance right, and fail to stay current, we risk getting leapfrogged by those who more diligently track the latest discoveries. Simultaneous invention has been pervasive at least since the development of calculus, as one discovery often signposts and lays the path for the next. So fail to stay on top of our field, and we potentially miss a relatively easy step to the next big idea. ChatGPT can become an extremely efficient tool for tracking advances without getting buried in them.

2. Pushing Outside of our Comfort Zone: Breakthrough innovation almost by definition requires us to step beyond the boundaries of our existing knowledge. Whether it’s Dyson stealing filtration technology from a sawmill for his unique ‘filterless’ vacuum cleaner, physicians combining stem cell innovation with tech to create rejection-resistant artificial organs, or the Oculus tech mentioned above, innovation almost always requires tapping resources from outside the established field. If we don’t do this, then we not only tend towards incremental ideas, but also tend to stay in lock-step with other experts in our field. This becomes increasingly the case as an area matures, low-hanging fruit is exhausted, and domain knowledge becomes somewhat commoditized. ChatGPT simply allows us to explore beyond our field far more efficiently than we’ve ever been able to before. And as it or related tech evolves, it will inevitably enable ever more sophisticated search. From my experience it already enables some degree of analogous search if you are thoughtful about how you frame questions, allowing us to more effectively expand searches for existing solutions to problems that lie beyond the obvious. That is potentially really exciting.

Some Possible Cons:

1. Going Down the Rabbit Hole: ChatGPT is crack cocaine for the curious. Mea culpa: this has probably been the most time-consuming blog I’ve ever written. Answers inevitably lead to more questions, and it’s almost impossible to resist playing well beyond the specific goals I started with. It’s fascinating, it’s fun, and you learn a lot of stuff you didn’t know, but I at least struggle with discipline and focus when using it. Hopefully that will wear off, and I will find a balance that uses it efficiently.

2. The Illusion of Understanding: This is a bit more subtle, but the act of researching a topic inevitably enhances our understanding of it. Asking questions is as much a part of learning as reading answers, and often requires deep mechanistic understanding. ChatGPT helps us probe faster, and its explanations may help us to understand concepts more quickly. But it also risks creating an illusion of understanding. When the heavy lifting of searching is shifted away from us, we get quick answers, but may also miss out on the deeper mechanistic understanding we’d have gleaned if we’d been forced to work a bit harder. And that deeper understanding can be critical when we are trying to integrate superficially different domains as part of the innovation process. For example, knowing that we can use a patient’s stem cells to minimize rejection of an artificial organ is quite different from understanding how the immune system differentiates between its own and other stem cells. The risk is that sophisticated search engines will do more of the heavy lifting and allow us to move faster, but also leave us with a more superficial understanding, which reduces our ability to spot roadblocks early, or to solve problems as we move to the back end of innovation and reduce an idea to practice.

3. Eureka Moment: That’s the ‘conscious’ watch-out, but there is also an unconscious one. It’s no secret that quite often our biggest ideas come when we are not actually trying. Archimedes had his Eureka moment in the bath, and many of my better ideas come when I least expect them, perhaps in the shower, when I first wake up, or when I’m out having dinner. The neuroscience of creativity helps explain this, in that the restructuring of problems that leads to new insight, and the integration of ideas, works mostly unconsciously, when we are not consciously focused on a problem. It’s analogous to the ‘tip of the tongue’ effect, where the harder we try to remember something, the harder it gets, but then it comes to us later when we are not trying. But the key to the Eureka moment is that we need sufficiently deep knowledge for those integrations to occur. If ChatGPT increases the illusion of understanding, we could see fewer of those Eureka moments, and fewer of the ‘obvious in hindsight’ ideas they create.

Conclusion

I think that ultimately innovation will be accelerated by ChatGPT and what follows, perhaps quite dramatically. But I also think that we as innovators need to try to peel back the layers and understand as much as we can about these tools, as there is potential for us to trip up. We need to constantly reinvent the way we interact with them, leveraging them as sophisticated innovation tools while avoiding them becoming oracles. We also need to ensure that we, and future generations, use them to extend our thinking skill set, not as a proxy for it. The calculator has in some ways made us all mathematical geniuses, but in other ways has reduced large swathes of the population’s ability to do basic math. We need to be careful that ChatGPT doesn’t do the same for our need for cognition, and for deep mechanistic and critical thinking.

Image credit: Pixabay

Pele and Vivienne Westwood – Innovators Lost


GUEST POST from Pete Foley

The loss of Pele and Vivienne Westwood, two giants of innovation in their respective fields, marks a sad end to 2022. But both left legacies that can inspire us as we navigate a likely challenging New Year.

Humble Beginnings: Both rose from humble beginnings to become national and international institutions. Pele was an artist with a football, Westwood with fabric and design. Both were resilient, multifaceted and creative, and had the courage to challenge the status quo. Pele famously honed his football skills by kicking around grapefruits in a desperately poor neighborhood. Westwood came from humble British working-class origins, where her parents were factory and mill workers.

Pele was a complete footballer, talented with head, foot and mind. He was both creative and practical, and turned football into an art form. A graceful embodiment of the beautiful game, he invented moves, and developed techniques and skills that not only entertained, but also created a new technical platform for future masters such as Cruyff, Neymar and Messi. But he was also extremely successful, winning three World Cups and scoring over 700 goals for club and country. Furthermore, he was a great ambassador for Brazil and for football. But perhaps most important of all, he was an inspiration to countless youngsters. He embodied the idea that hard work, hard-earned skill, a creative mindset and a passionate work ethic could forge a path from poverty to success, a model that inspired many in sports and beyond.

Westwood was similarly both skilled and creatively fearless. She emerged as part of the leading edge of the punk scene in the UK, closely entwined with Malcolm McLaren and the Sex Pistols. But after splitting with McLaren, she forged her own unique and highly successful path. She blended historical materials and fashion references with post-punk individualism to create emergent, maverick designs. Designs that somewhat ironically mainstreamed and embodied British eccentricity, but that also held global appeal.  Like Pele, she was a leader who saw things before anyone else in her field, and ultimately, as the first Dame of Punk, turned that vision into both financial and social success.

Nobody lives forever, and few get to reach the heights of Pele or Westwood. But we can all hopefully learn a little from them. Both were leaders, unafraid to follow their own vision. Both were resilient, with the courage and belief to overcome hardship and challenges. Both blended existing norms in new ways to create new, emergent forms. They didn’t just stand on, but rose above, the shoulders of giants. Both are missed, but both live on in the legacy and lessons they leave.

Published simultaneously on LinkedIn

Image credit: Unsplash

Preserving Ecosystems as an Innovation Superpower

Lessons from Picasso and David Attenborough


GUEST POST from Pete Foley

We probably all agree that the conservation of our natural world is important. Sharing the planet with other species is not only ethically and emotionally the right thing to do, but it’s also enlightened self-interest. A healthy ecosystem helps equilibrate and stabilize our climate, while the largely untapped biochemical reservoir of the natural world holds enormous potential for pharmaceuticals, medicine and hence long-term human survival.

Today I’m going to propose yet another reason why conservation is in our best interest. And not just the preservation of individual species, but also the maintenance of the complex, interactive ecosystems in which individual species exist.

Biomimicry: Nature is not only a resource for pharmaceuticals, but also an almost infinite resource for innovation that transcends virtually every field we can, or will, imagine. This is not a new idea. Biomimicry, the concept of mimicking nature’s solutions to a broad range of problems, was popularized by Janine Benyus in 1997. But humans have intuitively looked to nature to help solve problems throughout history. Silk production is an ancient bio-technology that co-opts the silkworm, while many early human habitations were based on caves, a natural phenomenon. More recently, Velcro, wind turbines, and elements of bullet train design have all been attributed to innovation inspired by nature.

And biomimicry, together with related areas such as biomechanics and bio-utilization, taps into the fundamental core of what the front end of innovation is all about. Dig deep into virtually any innovation, and we’ll find it has been stolen from another source. For example, early computers reapplied punch cards from tapestry looms. The Beatles stole and blended liberally from the blues, skiffle, music hall, reggae and numerous other sources. ‘Uberization’ has created a multitude of new businesses, from AirBNB to nanny, housecleaning and food prep services. Medical suturing was directly ‘stolen’ from embroidery, the Dyson vacuum from a sawmill, oral care calcium deposition technology was reapplied from laundry detergents, and so on.

Picasso – Great Artists Steal! This is also the creative process espoused by Pablo Picasso when he said ‘good artists borrow, great artists steal’. He ‘stole’ elements of African sculpture and blended them with ideas from contemporaries such as Cézanne to create analytical cubism. In so doing he combined existing knowledge in new ways that created a revolutionary and emergent form of art – one that asked the viewer to engage with a painting in a whole new way. Innovation incarnate!

Ecosystems as an Innovation Resource: The biological world is the biggest source of potential innovative ideas we have at our disposal. Hence it is an intuitive place to go looking for ideas to solve our biggest innovation challenges. But despite many people, myself included, trying to leverage this goldmine, it has never really achieved its full potential. For sure, there are a few great examples, such as Velcro, bullet train flow dynamics or sharkskin surfaces. But given how long we’ve been playing in this sandbox, there are far too few successes. And of those, far too many are based on hindsight, as opposed to using nature to solve a specific challenge. Just look at virtually any article on biomimicry, and the same few success stories show up year after year.

The Resource/Source Paradox. One issue that helps explain this is that the natural world is an almost infinite repository of information. That scale creates a challenging ‘signal-to-noise’ search problem. The result is enormous potential, coupled with almost inevitably high failure rates, as we struggle to find the most useful insights.

Innovation is More than Ideation: Another challenge is that innovation is not just about ideas or invention; it’s about turning those ideas into practice. In the case of biomimicry, that is particularly hard: converting natural technology into viable commercial technologies is hampered because nature works on fundamentally different design principles, and uses very different materials, to us. Evolution builds at a nano scale, is highly context-dependent, and is result-led rather than theory-led. Materials are usually organic, often water-based, and are grown rather than manufactured. All of this is very different to most conventional human engineering.

Tipping Point: But the good news is that materials science, technology, 3D printing, and computational and data processing power, together with nascent AI, are evolving at such a fast rate that I’m optimistic we will soon reach a tipping point that makes search and translation of natural innovations considerably easier than today. Self-learning systems should be able to more easily replicate natural information processing, and 3D printing and nano structures should be able to better mimic the physical constructs of natural systems. AI, or at least massively increased computing power, should make it easier for us both to ask the right questions and to search large, complex databases.

Conservation as an Innovation Superpower: And that brings me back to conservation as an innovation superpower. If we don’t protect our natural environment, we’ll have a lot less to search, and a lot less to mimic. And that applies to ecosystems as well as individual species. Take the animal or plant out of its natural environment, and it becomes far more difficult to untangle how or why it has evolved in a certain way.

Evolution is the ultimate exploiter of serendipity. It does not have to understand why something works; it simply runs experiments until it stumbles on solutions that do, and natural selection picks the winner(s). That leads to some surprisingly sophisticated innovation. For example, we are only just starting to understand the quantum effects used in avian navigation and photosynthesis. Migratory birds don’t have deep knowledge of quantum mechanics; the beauty of evolution is that they don’t need it. The benefit to us is that we can potentially tap into sophisticated innovation at the leading edge of our theoretical knowledge, provided we know how to define problems, know where to look, and have sufficient knowledge to decipher what we find and reduce it to practice. The bad news is that we don’t know what we don’t know. Evolution tapped into quantum mechanics millennia before we knew what it was, so who knows what other innovations lie waiting to be discovered as our knowledge catches up with nature, the ultimate experimenter.

Ecosystems Matter: But a species without the context of its ecosystem is at best half the story. Nature has solved flight, deep-water exploration, carbon sequestration, renewable energy, high and low temperature resilience and so many more challenges. And it has also done so with 100% utilization and recycling on a systems basis. But most of the underlying innovations solve very specific problems, and so require deep understanding of context.

The Zebra Conundrum: Take the zebra as an example. I was recently watching a David Attenborough documentary about zebras. As a tasty prey animal surrounded by highly efficient predators such as lions, leopards, cheetahs and hyenas, the zebra is an evolutionary puzzle. Why has it evolved a high-contrast coat that grabs attention and makes it visible from miles away? High contrast is a fundamental visual cue, meaning that even if a predator is not particularly hungry, it is pretty much compelled to take notice of the hapless zebra. But despite this, the zebra has done pretty well, and the plains of Africa are scattered with this very successful animal. The explanation for this has understandably been the topic of much conjecture and research, and to this day remains somewhat controversial. But more and more, the explanation is narrowing onto a surprisingly obvious culprit: the tsetse fly. When we think of the dangers to a large mammal, we automatically think of large predators. But while zebras undoubtedly prefer to avoid being eaten by lions, diseases associated with tsetse fly bites kill more of them. That means that avoiding tsetse flies likely creates stronger evolutionary pressure than avoiding lions, and that is proving to be a promising explanation for the zebra’s coat. Far fewer flies land on, or bite, animals with stripes. Exactly why that is remains debatable, and theories range from disrupting the flies’ vision when landing, to creating mini weather fronts due to differential heating and cooling across the stripes. But whatever the mechanism ultimately turns out to be, stripes stop flies. It appears that the obvious big predators were not the answer after all.

Context Matters: But without deep understanding of the context in which the zebra evolved, this would have been very difficult to unravel. Even if we’d conserved zebras in zoos, finding the tsetse fly connection without the context of the complex African savannah would be quite challenging. It’s all too easy to enthusiastically chase an obvious cause of a problem, and so miss the real one, and our confirmation bias routinely amplifies this.

We often talk about protecting species. But as our technology evolves to more effectively ‘steal’ ideas from natural systems, preserving context, in the form of complex ecosystems, may turn out to be at least as important as preserving individual species, even from an innovation perspective alone. We don’t know what we don’t know, and often the surprisingly obvious but critical answer to a puzzle can only be found by exploring that puzzle in its natural environment.

Enlightened Self-Interest. Could we use an analogy to the zebra to help control malaria? Could we steal avian navigation for GPS? I have no idea, but I believe this makes pursuing conservation enlightened self-interest of the highest order. We want to save the environment for all sorts of reasons, but one of the most interesting is that one day, some part of it could save us.

Image credit: Pixabay

The Resilience Conundrum

From the Webb Space Telescope to Dishwashing Liquids


GUEST POST from Pete Foley

Many of us have been watching the spectacular photos coming from the Webb Space Telescope this week. It is a breathtaking example of innovation in action. But what grabbed my attention almost as much as the photos was the challenge of deploying it at the L2 Lagrange point. That not only required extraordinary innovation in core technologies, but also building unprecedented resilience into the design. Deploying a technology a million miles from Earth leaves little room for mistakes, or the opportunity for the kind of repairs that rescued the Hubble mission. Obviously the Webb team were acutely aware of this, and were painstaking in identifying and pre-empting 344 single points of failure, any one of which had the potential to derail the mission. The result is a triumph. But it was not without cost: anticipating and protecting against those potential failures played a significant part in taking Webb billions over budget, and years behind its original schedule.

Efficiency versus Adaptability: Most of us will never face quite such an amazing but daunting challenge, or have the corresponding time and budget flexibility. But as an innovation community, and a planet, we are entering a phase of very rapid change as we try to quickly address really big issues, such as climate change and AI. And the speed, scope and interconnected complexity of that change make it increasingly difficult to build resilience into our innovations. This is compounded because the need for speed and efficiency often drives us towards narrow focus and increased specialization. That focus can help us move quickly, but we know from nature that the first species to go extinct in the face of environmental change are often the specialists, who are less able to adapt to their changing world. Efficiency often reduces resilience; it’s another conundrum.

Complexity, Systems Effects and Collateral Damage. To pile on the challenges a little, the more breakthrough an innovation is, the less we understand about how it interacts at a systems level, or the secondary effects it may trigger. And secondary failures can be catastrophic. Takata airbags and the batteries in Samsung Galaxy phones were enabling technologies, not core ones, but they certainly derailed the core innovations.

Designed Resiliency. One answer to this is to be more systematic about designing resilience into innovation, as the Webb team were. We may not be able to match their treatment of 344 points of failure, but we can be systematic about scenario planning, anticipating failure, and investing up front in buffering ourselves against risk. There are a number of approaches we can adopt to achieve this, which I'll discuss in detail later.

The Resiliency Conundrum. But first let's talk just a little more about the resilience conundrum. For virtually any innovation, time and money are tight, yet anticipating potential failures is time consuming and expensive. Worse, it rarely adds direct, or at least marketable, value. And when it does work, we often don't see the issues it prevents; we only notice them when resiliency fails. It's a classic trade-off, and one we face at all levels of innovation. For example, when I worked on dishwashing liquids at P&G, a slightly less glamorous field than space exploration, an enormous amount of effort went into maintaining product performance and stability under extreme conditions. Product could be transported in freezing or hot temperatures, and had to work in extreme water hardness or softness. These conditions weren't typical, but they were possible. And the cost of protecting against these outliers was often disproportionately high.

And there again lies the trade-off. Design in too much resiliency, and we become inefficient and/or uncompetitive. Too little, and we risk a catastrophic failure like the Takata airbags. We need to find a sweet spot. And finding it is further complicated because we are entering an era of innovation and disruption where we are making rapid changes to multiple systems in parallel. Climate change is driving major structural change in energy, transport and agriculture, and advances in computing are changing how those systems are managed. With dishwashing, we made changes to the formula, but the conditions of use remained fairly constant, meaning we were pretty good at extrapolating what the product would have to navigate. The same applies to the Webb telescope, where conditions at the Lagrange point have not changed during the lifetime of the project. Today we typically face a more complex, moving target.

Low Carbon Energy. Much of the core innovation we are pursuing today is interdependent. As an example, consider energy. Replacing hydrocarbons with, for example, solar is far more complex than simply swapping one source of energy for another. It impacts the whole energy supply system: where and how it links into our grid, how and how much we can store, weather-dependent and unpredictable power generation, maintenance protocols, and how quickly we can turn supply up or down, to name just a few examples. We also create new feedback loops, as variables such as weather can impact both power generation and power usage concurrently. And we are not just pursuing solar, but multiple alternatives, all of which have different challenges. Concurrent with changing our power source, we are also trying to switch automobiles, and transport in general, from hydrocarbons to electric power sourced from that same solar energy. This means attempting significant change in both supply and a key usage vector, changing two interdependent variables in parallel. Simply predicting the weather is tricky, but adding it to this complex set of interdependent variables makes surprises inevitable, and hence dialing in the right degree of resilience pretty challenging.
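To make the feedback-loop point concrete, here is a toy Monte Carlo sketch. Every number is a made-up assumption, not grid data; it simply shows that when one weather driver simultaneously dents supply (hot photovoltaic panels lose efficiency) and boosts demand (air conditioning and EV charging), deep shortfalls become markedly more common than if the shocks were independent:

```python
import random

def daily_margin(coupled: bool) -> float:
    heat = random.gauss(0.0, 1.0)                 # shared weather driver (toy scale)
    supply = 100.0 - 5.0 * heat                   # hotter panels run less efficiently
    shock = heat if coupled else random.gauss(0.0, 1.0)
    demand = 100.0 + 12.0 * shock                 # heat drives AC + EV-charging load
    return supply - demand                        # positive = headroom, negative = strain

def deep_shortfall_rate(coupled: bool, days: int = 200_000) -> float:
    return sum(daily_margin(coupled) < -30.0 for _ in range(days)) / days

random.seed(1)
print(f"independent shocks: {deep_shortfall_rate(False):.1%} of days")
print(f"weather-coupled:    {deep_shortfall_rate(True):.1%} of days")
# The coupled case roughly triples the frequency of deep shortfalls.
```

The absolute numbers mean nothing; the gap between the two lines is the lesson, and it is what resilience planning has to buffer against.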

The Grass is Always Greener: And even if we anticipate all of that complexity, I strongly suspect we'll see more, rather than fewer, surprises than we expect. One lesson I've learned and re-learned in innovation is that the grass is always greener. We don't know what we don't know, in part because we cannot see the weeds from a distance. The devil often really is in the details, and there is nothing like moving from theory to practice, or from small to large scale, to ferret out all of the nasty little problems that plague nearly every innovation, but that are often unfathomable when we begin. Finding and solving these is an inherent part of virtually any innovation process, but it usually adds time and cost. There is a reason why far more innovations run behind schedule than are delivered ahead of it!

It's an exciting, but also perilous, time to be innovating. But ultimately this is all manageable. We have a lot of smart people working on these problems, so most of the obvious challenges will have contingencies. We don't have the relative time and budget of the Webb Space Telescope, so we'll inevitably hit a few unanticipated bumps, and we'll never get everything right. But there are some things we can do to tip the odds in our favor, and help us find those sweet spots.

  1. Plan for over-capacity during transitions. If possible, don't shut down old supply chains until the new ones are fully established. If that is not possible, stockpile heavily as a buffer during the transition. This sounds obvious, but it's often a hard sell, as it can be a significant expense. Building inventory or capacity of an old product we don't really want to sell, and leaving it in place as we launch, doesn't excite anybody, but the cost of not having a buffer can be catastrophic (a rough sizing sketch follows this list).
  2. In complex systems, know the weakest link, and focus resilience planning on it. Whether it's a shortage of refills for a new device, packaging for a new product, or charging stations for an EV, innovation is only as good as its weakest link. This sounds obvious, but our bias is to focus on the difficult, core and most interesting parts of innovation, and pay less attention to peripherals. I've seen a major consumer project held up for months because of a problem with a small plastic bottle cap, a tiny part of a much bigger project. This means looking at resilience across the whole innovation, the system it operates in, and beyond. It goes without saying that a network of compatible charging stations needs to precede any major EV rollout. But never forget, the weakest link may not be within our direct control. We recently had a group of EVs stranded in Vegas because a huge crowd left an event at a time when it was really hot. The crowd overwhelmed the charging stations, and the high temperatures meant AC use limited the EVs' range, requiring more charging. It's a classic multivariable issue, where two apparently unassociated triggers occur at once. And that is a case where the weakest link is visible. If we are not fully vertically integrated, resilience may require multiple sources or suppliers, just to protect against failure points we cannot see or control.
  3. Avoid over-optimization too early. It's always tempting to squeeze as much cost as possible out of an innovation prior to launch. But innovation by its very nature disrupts a market, and creates a moving target. It triggers competitive responses, and changes in consumer behavior, supply chains, and raw material demand. If we've optimized to the point of removing flexibility, this can mean trouble. Of course, some optimization is always needed as part of the innovation process, but nailing it down too tightly and too early is often a mistake. I've lost count of the number of initiatives I've seen that had to re-tool or change capacity post launch, at a much higher cost than if they'd left some early flexibility and fine-tuned once the initial dust had settled.
  4. Design for the future, not the now. Again, this sounds obvious, but we often forget that innovation takes time, and that, depending upon our cycle time, the world may be quite different when we are ready to roll out than it was when we started. Webb has an advantage here, as the Lagrange point won't have changed much even in the years the project has been active. But our complex, interconnected world is moving very quickly, especially at a systems level, and so we have to build in enough flexibility to account for that.
  5. Run test markets or real-world experiments if at all possible. Again, this comes with trade-offs, but no simulation or lab test beats real-world experience. Whether it's software, a personal care product, or a solar panel array, the real world will throw challenges at us we didn't anticipate. Some will matter, some may not, but without real-world experience we will nearly always miss something. And the bigger our innovation, generally the more we miss. Sometimes we need to slow down to move fast, and avoid having to backtrack.
  6. Engage devil's advocates. The more interesting or challenging an innovation is, the easier it is to slip into narrow focus and miss the big picture. Nobody loves having people from 'outside' poke holes in the idea they've been nurturing for months or years, but that external objectivity is hugely valuable, together with different expertise, perspectives and goals. And cast the net as wide as possible. Try to include people from competing technologies, with different goals, or from the broader surrounding system. There's nothing like a fierce competitor, or people we disagree with, to find our weaknesses and sharpen an idea. Welcome the naysayers, and listen to them. Just because they may have a different agenda doesn't mean the issues they see don't exist.
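As promised in point 1, here is a rough sizing sketch for a transition buffer. Every number is a hypothetical placeholder; the structure of the calculation, covering the planned ramp-up plus a plausible slip, plus demand headroom, is the useful part:

```python
# Rough transition-buffer sizing (all inputs are hypothetical placeholders).
weekly_demand = 10_000        # units/week of the outgoing product
ramp_up_weeks = 8             # planned weeks until the new line reaches capacity
slip_allowance_weeks = 6      # schedule slip we choose to be able to survive
demand_headroom = 1.2         # 20% cushion for demand spikes

buffer_units = weekly_demand * (ramp_up_weeks + slip_allowance_weeks) * demand_headroom
print(f"target transition stockpile: {buffer_units:,.0f} units")  # 168,000 units
```

The uncomfortable part is the slip allowance: it is pure insurance, it looks like waste on a spreadsheet, and it is exactly the line item that gets cut when budgets tighten.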

Of course, this is all a trade-off. I started with the brilliant Webb Space Telescope, an amazing innovation with extraordinary resilience, enabled by an enormous budget and a great deal of time and resource. As we move through the coming years we are going to be attempting innovation of at least comparable complexity on many fronts, on a far more planetary scale, and with far greater implications if we get it wrong. Resiliency was a critical part of the Webb Telescope's success, and with stakes as high as they are with much of today's innovation, I passionately believe we need to learn from that. And a lot of us can contribute to building that resiliency. It's easy to think of carbon-neutral energy, EVs, or AI as big, isolated innovations. But in reality they comprise, and interface with, many, many sub-projects. That's a lot of innovation, a lot of complexity, a lot of touch-points, a lot of innovators, and a lot of potential for surprises. Many of us will be involved in some way, and we can all contribute. Resiliency is certainly not a new concept for innovation, but given the scale, stakes and implications of what we are attempting, we need it more than ever.

Image Credit: NASA, ESA, CSA, and STScI







Can We Innovate Like Elon Musk?

GUEST POST from Pete Foley

When we see someone do something really well, it’s always tempting to try to emulate them. And this is clearly a smart strategy; humans have evolved to automatically copy the successful strategies of others. We are cognitive misers, and it requires considerably less thinking to copy a great idea than to come up with it ourselves. As a result, more of us are the ancestors of people who were good at copying big ideas than of the people who originally came up with them.

In that context, it’s hard to ignore Elon Musk at present. A polarizing character perhaps, but as an innovator, he is second to none. As if leading the electric car revolution was not enough, he has reinvented and reinvigorated space travel, and is currently in the process of doing the same for robotics, AI and public transport, the latter via his tunneling technology. Now he’s added social media to his collection, and it’s hard to imagine even his greatest critics aren’t just a little bit interested to see how he’ll shake that field up. So should we, or can we copy him?

Can we become 'Mini-Musks'? As tempting as that is, I'm not sure it's even close to possible. It's really difficult to closely emulate someone else. Everyone has different natural skill sets, motivations, personalities, thinking styles and resources, and so what works for one person may not work for us. It's no coincidence that the learning curve to effective leadership and innovation is paved with abandoned role models: people who were successful as individuals but not as 'templates'. I'm old enough to remember when everyone was trying to emulate Jack Welch, or more recently Steve Jobs. Even when I was attempting to be a professional musician, every A&R person we spoke to wanted us to be the next Sex Pistols or Dire Straits, as they were the big new bands at the time (yes, I'm old, and yes, those are quite different bands). Nobody was looking for U2, or even Guns N' Roses, neither of whom sound much like either the Sex Pistols or Dire Straits!

We don't become the next big thing by mimicking the current big thing. To the best of my knowledge, none of the aforementioned role models were themselves trying to be the 'new' anybody, any more than U2 wanted to be the new Sex Pistols. In reality, we don't become the next big thing by mimicking the current big thing; it's already too late for that. The reasons are complex. In addition to the individual differences mentioned above, the world has typically moved on, and even if it hasn't, everybody else has the same opportunity to study the same examples, so there is limited advantage to be had from closely copying the current best in class. True innovation leadership comes from originality, and from creating our own path. But that doesn't mean we cannot learn a few things from current or past people who were really good at 'stuff'.

The Gaga Effect: One of my favorite examples of that is Lady Gaga. She didn't try to copy whoever was the gold standard at the time she emerged; she is a unique talent. But I could argue that she did borrow from both Madonna and Bowie, just as Bowie borrowed liberally from Lou Reed, Anthony Newley and the mime artist Lindsay Kemp. We all stand on the shoulders of giants, and can borrow from them. But I believe that the best strategy is a blending one: taking ideas from others that fit us, or the situation we are in, and blending them to create something original.

Musk Master Class? So can we learn anything useful from Musk, or is he just a once-in-a-generation genius, with a unique thinking style that we cannot emulate? I believe he is unique, but I also think we can learn a few things from him.

  1. Think Big, but be flexible in how you get there. Musk is the master of the stretch goal. It's easy to forget how ambitious the electric sports car was when he first pitched the idea. His space program has achieved what NASA couldn't, his public transport tunnel system in Vegas looks like something from Blade Runner, and now he's talking about AI personal robots in the near future. But while he uses high expectations to drive progress, he's also willing to back off, albeit reluctantly, when he hits a roadblock. Few of us can set ourselves, or others, goals of this magnitude, but my experience, especially in corporate R&D, is that we often do the opposite. Corporate culture means that nobody wants to be the one who derails an aggressive goal, and all too often this is managed by under-promising in the hope of over-delivering. But the reality is that innovation rarely happens faster than scheduled, so building padding into initiatives simply slows us down. Don't get me wrong, we often miss even padded goals, but it's rarely because of the issues we plan and pad for. It's nearly always the unexpected that derails us, and aggressive goals tend to root out the unexpected faster.
  2. Take time to define the right problem, and make it stretching and systems-based. In his recent TED interview, Musk talked at some length about Douglas Adams, the author of "The Hitchhiker's Guide to the Galaxy", as a philosopher; in particular, in the context of our collective tendency to race to find answers without spending long enough refining our questions. In Adams' book, a race of super-beings builds an ultimate AI, with the goal of answering the ultimate question of 'life, the universe and everything'. The ambiguous, and somewhat unsatisfying, answer they eventually get is '42', a lesson in the importance of asking well-defined questions, which becomes the quest in the next couple of books. I share Musk's love of Adams, but always thought of him as more of a playful satirist than a philosopher. He does, however, make a great philosophical point: in our haste to action, we are often so busy looking for answers that we forget to effectively define the question, and so ultimately miss the big opportunity. And this applies to both size of the prize and scope of our thinking. Musk is brilliant at setting ambitious goals and aggressive timelines, as mentioned above. But he's also great at taking a systems approach, illustrated by Tesla leading the charge (pun intended) in creating not just EVs, but also the charging infrastructure they need to compete with legacy automobiles.
  3. Tenacity. Musk personifies vision, belief and bloody-mindedness. Innovation can be expensive, not just in financial terms, but also in personal terms. Musk describes pushing himself to the absolute edge, sleeping in factories, risking his mental health, and committing to his vision with an obsession where work-life balance is not even a consideration. I'm certainly not advocating that any of us should, or could, go to those extremes. But that alone is a great insight, as in reality very few of us, mea culpa, really want to be the next Jobs or Musk. We'd love to have the success, but few really want to commit to that degree. That's why few of us will lead a space program. But we can take a realistic look at how much we are willing to push ourselves ahead of time, and set stretching, but realistic, goals within that scope.
  4. Seek out criticism. Nobody really likes having their ideas criticized. But it really is better to have potential problems pointed out earlier rather than later in the process. As Musk took over Twitter, he said 'I hope that even my worst critics stay on Twitter'. We can all learn from that. Echo chambers do not drive innovation; they drive incrementalism at best. Criticism is a really inexpensive form of learning by failing. Even when it's painful, it's valuable.
  5. Neuro-diversity. This is a tough one, as we cannot choose to be neuro-diverse, or directly emulate it. And it is at best highly speculative whether unique thinking processes are important to Musk's success. Mea culpa, I personally sit on 'the spectrum', albeit not terribly far along it, but part of the problem with not being 'normal' is that you don't really know what normal is. And of course, vice versa: 'normal' thinkers, whatever that is, cannot really imagine being on the spectrum. But while none of us has any control over how our minds are wired, we can embrace different thinking styles in our networks. We can encourage and support diverse thinking styles. In reality, though, it's hard to embrace mavericks in large, structured organizations. I'd speculate that Musk probably wouldn't have lasted six months at P&G, or many other multinationals. But for a company that prides itself on being innovative, the fact that the world's greatest innovator likely wouldn't have flourished there should at least be food for thought.

Change, Controversy and the Abundance Economy. Full disclosure: if you hadn't already guessed, I'm a fan of Elon Musk. I don't always agree with him, but I admire the traits described above, and his willingness to be controversial. That's probably another lesson for innovators. You really cannot make an omelet without breaking a few eggs, and if you are driving radical change, you will likely upset a few people on that journey. But most of all, I admire his vision: a vision of breaking free of our planet, and of the abundance economy he discusses towards the end of the TED interview.

Abundance is the innovator's ultimate dream, and it's a topic I've been lucky enough to discuss with some very smart advocates for it, including James Burke and Matt Mason. Visionaries tend to get a little ahead of themselves sometimes, and I suspect that in some ways Musk may be a little optimistic in this case. I grew up on Gerry Anderson, Thunderbirds, Star Trek, and a little later Arthur C Clarke and Neal Stephenson. Even if I suspected that warp speed and teleporting might not arrive in my lifetime, I did believe that by now we'd be zipping around on jet packs or in flying cars, have colonies on Mars, be talking to AIs on our video watches and flip phones, and that everybody would be wearing metallic versions of 60's fashions. We're not quite there with all of them, but we're not that far away either. So maybe Musk is not that far wrong after all?

Image Credit: Pixabay







Cognitive Bandwidth – Staying Innovative in ‘Interesting’ Times

GUEST POST from Pete Foley

'May you live in interesting times' is popularly described as the English translation of an ancient Chinese curse, although its true origin is murky. Superficially presented as a blessing, its real meaning is of course far from positive. As memes go, it has lasted quite a while, perhaps because, from a cognitive perspective, that little twist, and the little puzzle it forces us to solve, makes it more subtle, but also more impactful, than a more direct insult. But the 'blessing and a curse' dichotomy it embodies is also a fundamental insight. Opportunity usually brings potential for trouble, and trouble usually brings potential for opportunity, largely because both involve change. Many of us are going through an awful time on many fronts at the moment, but if that has a silver lining, it is that it brings change. And change ultimately creates an opportunity for innovation, and hopefully better times.

Big Issues Create Big Opportunity: I’ve written before about the opportunity that Covid-19 presented for innovation. The shattering of habits and established behaviors, combined with dramatic shifts in personal and work situations opened the door to trial of new products and services to a degree not seen in a generation. But as we (hopefully) continue to emerge from Covid, we’ve been sucker punched by numerous other things. The horror of war in Europe being the most shocking, but we are also facing enormous economic challenges in the form of energy shortages, inflation, supply chain issues, the great resignation and rapidly changing socio-political landscapes.  And of course, we still have numerous other pressing ‘pre-Covid’ issues such as climate change, pollution and economic inequality that also require urgent attention.

That is a lot of problems that need solving. And as awful as Covid was for everyone, the current issues around supply chains, global economic instability, inflation and the increased cost of debt likely create operational issues that are at least as immediate for many organizations, and hence an equally urgent need for innovation.

Another Innovator's Dilemma. Unfortunately, the time when we most need innovation is often when it is hardest to deliver. Innovation doesn't happen overnight, and usually needs clear strategy, resources, funding, creativity and knowledge. All of these are currently in short supply. An uncertain and rapidly changing world makes setting long-term strategy challenging. Supply chain challenges can have huge short-term operational impact, and suck up resources and expertise normally allocated to longer-term innovation. The great resignation and early retirements reduce available expertise. And on top of all of this, inflation, increasing interest rates, raw material prices and labor costs are squeezing finances. None of this is terribly new or insightful, but it does provide context for another, sometimes less obvious barrier to innovation that I want to talk about, one that operates at the individual level: the squeeze of cognitive bandwidth.

Cognitive Bandwidth: The innovation journey needs creativity everywhere from the nascent front end through to launching into market. Ultimately that creativity comes from individuals. That in turn requires those individuals to be allowed the cognitive bandwidth, or ‘quality thinking time’ to ideate. We can only effectively think deeply about one thing at a time. This is our ‘cognitive bandwidth’, and it is a finite resource. There are only so many hours in a day, and most of us can only allocate a small fraction of those to think deeply about problems or process information. And of course the more problems we are facing, the less bandwidth we usually have. The more difficult the situation, the more of our time is spent distracted, jumping from one issue to another, or attempting to ‘multi-task’. Even when we carve out time, the current climate means all too often we are stressed, or in an elevated emotional state. This reduces the quality as well as quantity of our thinking, and so further narrows our individual cognitive bandwidth.
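A toy back-of-envelope model makes the squeeze tangible. All of these numbers are assumptions (estimates of the refocus cost after an interruption vary, with some research putting it at 15 to 25 minutes), but the shape of the arithmetic is the point:

```python
# Toy model of cognitive bandwidth (all inputs are rough assumptions).
hours_at_desk = 8.0
shallow_work_hours = 3.0          # email, admin, status meetings
interruptions_per_day = 12        # pings, fire drills, drive-bys
refocus_cost_hours = 0.25         # ~15 min to regain depth after each switch

deep_hours = hours_at_desk - shallow_work_hours - interruptions_per_day * refocus_cost_hours
print(f"deep thinking time left: {deep_hours:.1f} of {hours_at_desk:.0f} hours")  # 2.0
```

Double the interruptions in that toy model and deep-work time goes negative; that is what a bandwidth squeeze looks like in practice.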

The Covid Squeeze: Covid-19 of course sucked up a lot of cognitive bandwidth. We had to find new ways to work, learn new tools, and find new ways to manage personal lives and work-life balance, as many found themselves taking on new roles as educators, caregivers and chefs, or simply learning how to share an office with a spouse for the first time. There were some compensating effects, such as reduced travel, but even that likely had some less obvious and hard-to-measure impacts on the creative process that I'll discuss later. But perhaps the biggest, albeit largely intangible, impact on cognitive bandwidth was the effect Covid had on our collective emotional state. Covid, and the changes it brought, was hard on everybody. Everyone has their own stories, and we've all seen the increase in mental health issues that accompanied the pandemic. But this is almost certainly the tip of the iceberg. Virtually everyone has experienced some degree of increased stress and negative emotion during Covid, and this directly impacts cognitive bandwidth, and hence individual innovative capacity.

The Post-Covid Sucker Punch: One thing I think we were all looking forward to was a return to some semblance of normal. But unfortunately, as Covid (hopefully) subsides, reentry into the post-Covid world is proving to be very bumpy, and we are facing the cornucopia of other issues described above. This not only creates a host of 'fires' that need to be put out, but also inevitably takes an emotional toll. After two years of disrupted work and home life, we are now asking people to again step up and be 'unusually' innovative in difficult circumstances, and against a backdrop of war and human suffering. Fatigue and burnout are almost inevitable.

At a practical level, I see this on a day-to-day basis. I sit in a lot of innovation teams, and one pattern I observe consistently is the workforce getting increasingly stretched, both from a time and an emotional perspective. I see more and more people getting pulled out of meetings to fight fires, attempting to double-task, stepping in and out of meetings, or simply looking frazzled and overworked. Of course, none of this is new; overwork and stress existed long before Covid. But it's also not surprising that it appears to be increasing during a long period of constant change.

The Neuroscience of the Creative Moment. Innovative thinking comes in multiple forms, but it all requires time. We need time to think deeply and consciously about problems, and to assimilate data and knowledge. But 'downtime' is also a critical, if less understood, part of the creative process. There is a very good reason that Eureka moments often happen in the bath, the shower, or the middle of the night. When the mind is relaxed, has time, and is not focused on an immediate problem, it is more likely to make surprisingly obvious connections, or see things in different ways. This is often when the biggest ideas occur. We need conscious thinking to build essential foundations of knowledge, but the most interesting ideas and connections often happen when we are not trying. Have you ever had a name on the tip of your tongue, but no matter how hard you try, you cannot find it? Then a few hours later, when you are not trying, it pops into your head? This is an analogous mechanism: conscious focus keeps converging on the same, sometimes unwanted, result, but when we relax, it opens the channel to the needed connection.

There is a lot of research around how this works, including the interaction between the default mode network and executive function, the role of alpha waves and flow states, and the conceptual blending process. It's still very much an evolving science, but one thing that is fairly consistent across this research is that downtime and periods of reduced stress play an important role in the creative process and in making connections.

Unfortunately, for many, the pandemic reduced relaxation and 'own time'. Needing to learn new skills and new ways of working, while also having to solve a myriad of new and ever-changing problems, sucked up time. Even the loss of commutes took away a period of solo reflection where many of us consciously or unconsciously processed and synthesized the day's information. But perhaps the hardest pill to swallow has been that while we all hoped the end of Covid would provide some relief, if anything the news cycle has got worse. This takes an emotional toll. Part of this reflects the ratings competition within media that favors an ever-increasing stream of bad news. But unfortunately it also reflects a very challenging global reality, and very real problems and suffering.

What Can We Do?

There are of course limits to what we can do within our sphere of influence. Most of us cannot directly impact the war in Ukraine, the supply chain crisis or global diplomacy. But we can take steps to reduce pressure on our teams, and ourselves, and thus make innovation and creativity a little easier.

1. Make tough strategic priority decisions. Primarily this is a leadership task, but it's also something we can manage to some degree in our personal portfolios. One reason we see so much innovation during a crisis is focus, and a willingness to sacrifice some goals or standards for more important ones. Replicating this means being very selective about which fires to fight, while being willing to let others burn themselves out. This is not without risk, as short-term survival is of course a prerequisite for any successful long-term strategy. But during periods of rapid change, we also see rapid reversals. For example, spikes in raw material costs are often short-term, and developing alternatives can take longer than the problem lasts. It sounds obvious, but it is often deceptively difficult, especially as deciding to let the wrong fire burn can be quite career-limiting. But making difficult priority calls, and saying 'no', can be critical to maintaining our innovative and competitive edge, by keeping limited cognitive bandwidth focused on the most important tasks.

2. Help talent focus on what is really important, and grow the skills that are most relevant to the future. There has been an ongoing trend to ask talent to handle more of their own administrative and organizational work. This is partly driven by technology that reduces the need for specialized knowledge to manage many logistics tasks. And eliminating support roles looks good on margins and fixed costs. But asking a highly skilled technical expert to cover their own admin not only adds to their workload, it is also inefficient, as we are effectively overpaying them to complete tasks that don't play to their core skills. Conversely, there is a lot of skilled talent on the sidelines at the moment, much of it now practiced at working remotely. So one option is to leverage this to free up innovators and experts: let them focus more on their areas of expertise by bringing back more general support roles, or bring in temporary outside help where short-term issues require expertise that is not anticipated to be part of long-term strategy.

3. Schedule downtime, and create a culture where it is encouraged. Build protected spaces in calendars where meetings are not allowed. Encourage lunch breaks, and enable casual team-building events and wellness practices. It's easy to view these as non-essential, the type of activities we cut first when times get tough. But they are critical to an innovative culture. Mental downtime is not a luxury or a perk, but an essential part of the creative process. And in too many cases, we've been in crisis mode for so long that this tool has become blunt, and people burnt out.

4. Further support this with the design of our physical environments. Another trend has been the move to open offices and shared space. This has benefits for collaboration, and for space efficiency as hybrid home/office working models emerge. But studies have also shown that more innovative ideas emerge when people work alone than in brainstorming environments. So it is critical to provide both physical spaces and a culture that enable private reflection and quiet concentration, where people can synthesize information and make connections. The key to a cognitively diverse innovation culture is to provide options for different thinking styles. This also means acknowledging that the benefits of working from home are not one-size-fits-all. For some it's a blessing, but both work style and personal circumstance can make working from home a challenge for others. To support a cognitively diverse workforce, some people, especially those early in their careers, may need work as a sanctuary, and a bigger physical footprint at work than others.

5. Finally, distribute work evenly. I remember someone telling me early in my career that 'if you need something done quickly, go to the busiest person'. There is some truth in that, and some people thrive on a high workload. But it only works to a point, and if taken too far, we risk overloading the cognitive bandwidth of our most creative people, even if they may not realize it themselves. By all means give the most challenging and most important tasks to the best people. But don't overload them too much. They will often be happy to take on more, but it may not be best for them, their creativity, or the organization. Look very hard at whether the load is evenly distributed within an organization, and if it is not, ask hard questions about why. And if you are the person everyone comes to, practice saying 'no' occasionally!

The good news is that humans are pretty resilient, so it doesn’t always take huge changes to get significant results. We are all the progeny of ancestors who survived wars, famine, disease, social upheaval and natural disasters. And it’s worth noting that we are often at our most creative during periods of greatest tragedy.

Technology advanced at a phenomenal pace during WW-II, and more recently the speed of development of Covid vaccines was staggering. But there are clues in those situations that we can learn from. Resources and focus were unprecedented. During WW-II virtually everything was thrown against the war effort, and tough, sometimes brutal priority calls were the norm.

Operation Warp Speed put enormous resources against the Covid vaccine and took huge risks on uncertain bets. Of course, most of us working in innovation don't have these almost infinite resources, but we can be very strategic in how we use what we have. And keep in mind that a wartime mentality is meant to be short-term; Operation Warp Speed was designed to last about a year.

We are in the business of creating a sustainable innovation culture. So we are not just protecting the cognitive bandwidth of individuals in the short term; we are also preventing burnout, and creating a sustainable cognitive culture.

Image credit: Pixabay







Innovation in the time of Covid – Satisficing Organizations

GUEST POST from Pete Foley

Many of us spend a lot of time thinking about consumer habits, and how to change or reinforce them.  As innovators that’s pretty central to our job.  And Covid has presented us with a unique opportunity, as so many consumer habits have been disrupted.  But work habits are as ingrained and as hard to break as consumer behavior, and so Covid provides a similar once in a generation opportunity to change work processes.

We have been forced to work differently. Remote working has meant less oversight and more autonomy. In parallel, the world has changed rapidly around us, forcing us to make quicker decisions while relying on less data. As a result, we've probably made a few mistakes, but hopefully also learned from them. It's been tough, but it's also been a unique opportunity for learning and change.

The Organizational Brain: I love analogies, and an obvious one is that the change in many organizations brings their processes closer to how the human brain makes decisions. They've been satisficing – a concept borrowed from Behavioral Economics, originally coined by Herbert Simon, that describes decisions that are good enough: not always perfect, but reached faster, and with less 'process'.

This is a key concept in understanding real human behavior. Time is critical to survival. An early human being chased by a hungry saber tooth didn’t have time to ponder every possible escape route.  He or she just had to get away from the predator before it reached them, or at least move away faster than the slowest member of the tribe. As a result, we are the ancestors of people who made timely decisions based on limited data, not those who stood pondering every possibility in search of perfection.

Even contemporary decisions, while often not quite as urgent as escaping from a hungry predator, typically involve an analogous trade-off between time and completeness of information. How many people know every detail about a stock, or even a car, before they buy? In reality we rarely have time to fully process every relevant piece of information for any decision we make; instead we use a mixture of heuristics, proxies and gut feel, together with some analysis, to make good enough, but often not perfect, decisions.
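For the code-minded, here is a minimal sketch of the distinction, with toy numbers rather than a model of any real decision. A maximizer scores every option before choosing; a satisficer stops at the first option that clears an aspiration threshold:

```python
import random

random.seed(7)
options = [random.random() for _ in range(1000)]   # hypothetical option scores

def maximize(opts):
    return max(opts), len(opts)                    # best score, options examined

def satisfice(opts, aspiration=0.9):
    for examined, score in enumerate(opts, start=1):
        if score >= aspiration:                    # good enough: stop searching
            return score, examined
    return max(opts), len(opts)                    # fall back if nothing clears the bar

best, n_max = maximize(options)
good, n_sat = satisfice(options)
print(f"maximize:  {best:.3f} after {n_max} evaluations")
print(f"satisfice: {good:.3f} after {n_sat} evaluations")
```

The satisficer typically gives up a sliver of quality in exchange for an order-of-magnitude saving in search cost, which is exactly the trade the saber tooth forces.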

A Corporate Flaw: A flaw in traditional economics was that time was largely ignored. It assumed that humans made perfect decisions based on all available data, no matter how long that took. In many ways businesses, especially big corporations, lean towards this much slower, data-heavy type of decision. Employees have to justify decisions to a far greater degree than we do as individuals. Telling a boss or a shareholder that a decision 'just felt right' is probably career-limiting, especially if it turns out to be the wrong decision. But this slows organizations down, and leaves them vulnerable to more agile, less risk-averse competition making good enough decisions faster. I'd also argue that a lot of time is spent creating the illusion of certainty. We collect supporting data, pre-align with a boss, or seek consensus via a team, but all too often this is an exercise in precision, not accuracy. We are only as good as our models, and these often struggle to accurately predict the complex, fast-moving real world we live in. I'm sure a few people will not be comfortable with this premise, and I'll dive a little deeper later, but it's borne out by the high proportion of innovations that fail, despite great supporting consumer data and business projections.
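The precision-versus-accuracy distinction is worth a tiny numeric illustration. These samples are invented: the 'biased' forecast repeats almost the same wrong answer (precise, inaccurate), while the 'noisy' one scatters around the truth (imprecise, accurate):

```python
import statistics

true_demand = 100.0
biased_forecasts = [112.1, 111.8, 112.3, 111.9, 112.0]   # tight spread, wrong center
noisy_forecasts  = [88.0, 104.0, 117.0, 95.0, 99.0]      # wide spread, right center

for name, samples in (("biased", biased_forecasts), ("noisy", noisy_forecasts)):
    mean = statistics.mean(samples)
    spread = statistics.stdev(samples)
    print(f"{name}: mean {mean:.1f} +/- {spread:.1f} (truth {true_demand})")
```

A confident, tight interval around the wrong number is more dangerous than an honest wide one, because it feels like certainty.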

The Covid Change: The good news is that Covid has forced us to change. Meetings have switched to virtual, and in many cases participation has been trimmed. We haven't abandoned consensus, but in many cases we've had to be more choiceful about when and where it's needed. We have been forced to give people more autonomy, if only because oversight has been impossible. And hand in hand with all of this, we've often been forced to make decisions without the level of supporting data we are used to. The pace of change has accelerated, while many of our usual methods of testing have been stymied, or at least had to go through significant changes. Before Covid we might have debated and aligned, or run additional research or tests, both to make more informed decisions and to CYA should things go wrong. In the last 18 months we've more often had to go with our gut, or at least make decisions where we're far less 'certain' about the outcome.

We will not know how this has worked out for some time, if ever, as we lack a frame of reference for operating in a pandemic. But my guess is that it has probably worked out fairly well. We have probably made a few more mistakes, or at least sub-optimal decisions, and we've likely learnt a few hard lessons as well. But most of the time, we've probably made good enough decisions. And we've likely compensated by learning and adapting on the fly, or have perhaps built more flexibility into our plans to make up for the lack of 'certainty' in our business plans. In other words, we've been mirroring more closely, at an organizational level, how the human brain works.

I’m going to argue that this is a good thing, for at least four reasons.

1.  Fewer Meetings!!  When the work we have to do is too big, too difficult, or beyond the expertise of one person, we create a team to do it. But teams also represent a trade-off. It's a conundrum that the very differences that make teams so valuable can also make them cumbersome and time consuming. As we add different skills and perspectives to a team, transaction costs increase, all too often resulting in seemingly endless meetings in the pursuit of consensus. At P&G it wasn't uncommon to have entire days of back-to-back meetings.

And mea culpa, I'm a recovering meeting addict. At times that back-to-back schedule almost felt like a badge of honor. Conversely, sitting at a desk and thinking, or quietly reading, was treated with deep suspicion in some circles, despite it often being a highly productive exercise.

2.  We've grown capability. We've been forced to give people more autonomy, which develops skills and motivation. Not everyone will have thrived when pushed out of their comfort zone, but we'll have given people opportunity, and that will ultimately pay dividends.

3.  We've been forced to embrace more learning from failure. We talk a lot about this, especially in innovation, but more often than not we still celebrate success far more than failure. A good scientist designs tests to fail, in order to challenge a hypothesis. This does happen in business, but realistically most consumer research is designed to demonstrate success, and hence move us through the next stage-gate in our business process. But we've probably made a few more mistakes, so we've probably learned a bit more.

4.  Perhaps most importantly, we've learned to live, and act, with less data. Humans all have a risk-aversion bias, some more, some less. Data makes us believe we are increasing the quality of our decisions. It can even provide a rationale for procrastination: 'Let's get more data before we push the button.' Historically this has often caused us to run big, expensive consumer research, generate complex volume forecasts, and present detailed and precise (if not accurate) business plans to management. It feels good to believe we are betting on a near certainty, but that's often unrealistic. A majority of new products fail, despite having excellent consumer and volume forecasting data to back them up. The reality is that the world we place innovation into is usually too complex to accurately predict. The very act of introducing something new disrupts the system, as does any competitive response. And if we are truly introducing something innovative or disruptive, it should by its very nature invalidate at least some of the careful validation work that has gone into our forecasting models and methodologies. All too often, our research creates an illusion of certainty, or at best overestimates our ability to predict the future. It feels better than it performs.

I'm not suggesting we completely abandon consensus, or consumer testing and modeling. These are great tools for weeding out bad ideas, and for anticipating and fixing issues that are more obvious in hindsight than in enthusiastic foresight. And they can certainly help us to ball-park initiatives, especially if they are not too disruptive. But the success rate of innovation in market strongly suggests that our models are not as reliably predictive as we'd like to believe. It certainly suggests that, where we can, we are better off fine-tuning in market than fine-tuning for a volume forecast.

Conversely, the human brain is, at least for the next few years, the smartest decision-making 'entity' we know. It routinely makes satisficing decisions that balance the need for action against the cost of obtaining and processing additional information. It accepts 'good enough' as a starting point, and is really, really good at not locking into decisions prematurely, instead using feedback loops to adjust on the fly. It uses heuristics for quick decisions rather than certainty. Given that it's the pinnacle of millions of years of evolution, it's probably not a bad thing if our organizations more closely mirror it.

Assuming we eventually vanquish Covid, we'll all be searching for new equilibria as the world restabilizes. There are things I'm personally really keen to bring back, such as the serendipity that comes from real human-to-human interaction. But I also hope we don't lose what we've learned. Risk aversion will nudge us to revert to higher degrees of certainty. And there will certainly be contexts where this makes sense, especially in pharmaceuticals and medicine, where we've taken unusual risks because of exceptional time constraints. But in less life-and-death fields, we may have found we can give people more autonomy, be more selective about consensus, have fewer meetings, better embrace learning from failure, and may not need as many consumer tests or as precise volume forecasts as we previously thought. A little bit of agility built into the back end can go a long way to reducing the perceived need for illusory certainty at the front.

Image credit: Pixabay
