Tag Archives: AI

Framing Your 2024 Strategy

GUEST POST from Geoffrey A. Moore

Fall is in the air, which brings to mind the season’s favorite sport—no, not football, strategic planning! Let’s face it, 2023 has been a tough year for most of us, with few annual plans surviving first contact with an economy that was not so much sluggish as simply hesitant. With the exception of generative AI’s burst onto the scene, most technology sectors have been more or less trudging along, and that raises the question: what do we think we can do in 2024? Time to bring out the strategy frameworks, polish up those crystal balls that have been a bit murky of late, and chart our course forward.

This post will kick off a series of blogs about framing strategy, all organized around a meta-model we call the Hierarchy of Powers:

Geoffrey Moore Strategy Framework

The inspiration for this model came from looking at how investors prioritize their portfolios. The first thing they do is allocate by sector, based primarily on category power, referring both to the growth rate of the category and to its potential size. Rising tides float all boats, and one of the toughest challenges in business is how to manage a premier franchise when category growth is negative. In conjunction with assessing our current portfolio’s category power, this is also a time to look at adjacent categories, whether as threats or as opportunities, to see if there are any transformative acquisitions that deserve our immediate attention.

Returning to our current set of assets, within each category the next question to answer is, what is our company power within that category? This is largely a factor of market share. The more share a company has of a given category, the more likely the ecosystem of partners that supports the category is to focus first on that company’s installed base, adding more value to its offers, and to recommend that company’s products first, again because of the added leverage from partner engagement. Marketplaces, in other words, self-organize around category leaders, accelerating the sales and offloading the support costs of the market share leaders.

But what do you do when you don’t have company power? That’s when you turn your attention to market power. Marketplaces destabilize around problematic use cases that the incumbent vendors do not handle well. This creates openings for new entrants, provided they can authentically address the customer’s problems. The key is to focus product management on the whole product (not just what your enterprise supplies, but rather, everything the customer needs to be successful) and to focus your go-to-market engine on the target market segment. This is the playbook that has kept Crossing the Chasm on entrepreneurs’ book lists some thirty years in, but it is a different matter to execute it in a large enterprise where sales and marketing are organized for global coverage, not rifle-shot initiatives. Nonetheless, when properly executed, it is the most reliable play in all of high-tech market development.

If market power is key to taking market share, offer power is key to maintaining it, both in high-growth categories as well as mature ones. Offer power is a function of three disciplines—differentiation to create customer preference, neutralization to catch up to and reduce a competitor’s differentiation, and optimization to eliminate non-value-adding costs. Anything that does not contribute materially to one of these three outcomes is waste.

Finally, execution power is the ability to take advantage of one’s inertial momentum rather than having it take advantage of you. Here the discipline of zone management has proved particularly valuable to enterprises who are seeking to balance investment in their existing lines of business, typically in mature categories, with forays into new categories that promise higher growth.

In upcoming blog posts I am going to dive deeper into each of the five powers outlined above to share specific frameworks that clarify what decisions need to be made during the strategic planning process and what principles can best guide them. In the meantime, there is still one more quarter left in 2023, and we all must do our best to make the most of it.

That’s what I think. What do you think?

Image Credit: Pixabay, Geoffrey A. Moore

Innovation Evolution in the Era of AI

GUEST POST from Stefan Lindegaard

Half a decade ago, I laid out a perspective on the evolution of innovation. Now, I return to these reflections with a sentiment of both awe and unease as I observe the profound impacts of AI on innovation and business at large. The transformation unfolding before us presents a remarkable panorama of opportunities, yet it also carries with it the potential for disruption, hence the mixed feelings.

1. The Reign of R&D (1970-2015): There was a time when the Chief Technology Officer (CTO) held the reins. The focus was almost exclusively on Research and Development (R&D), with the power of the CTO often towering over the innovative impulses of the organization. Technology drove progress, but a tech-exclusive vision could sometimes be a hidden pitfall.

2. Era of Innovation Management (1990-2001): A shift towards understanding innovation as a strategic force began to emerge in the ’90s. The concept of managing innovation, previously only a flicker in the business landscape, began its journey towards being a guiding light. Pioneers like Christensen brought innovation into the educational mainstream, marking a paradigm shift in the mindsets of future business leaders.

3. Business Models & Customer Experience (2001-2008): The millennium ushered in an era where simply possessing superior technology wasn’t a winning card anymore. Process refinement, service quality, and most critically, innovative business models became the new mantra. Firms like Microsoft demonstrated this shift, evolving their strategies to stay competitive in this new game.

4. Ecosystems & Platforms (2008-2018): This phase saw the rise of ecosystems and platforms, representing a shift from isolated competition to interconnected collaboration. The lines that once defined industries began to blur. Companies from emerging markets, particularly China, became global players, and we saw industries morphing and intermingling. Case in point: was it still the automotive industry, or had the mobility industry arrived?

5. Corporate Transformation (2019-2025): With the onslaught of digital technologies, corporations faced the need to transform from within. Technological adoption wasn’t a mere surface-level change anymore; it demanded a thorough, comprehensive rethinking of strategies, structures, and processes. Anything less was simply insufficient to weather the storm of this digital revolution.

6. Comborg Transformation (2025-??): As we gaze into the future, the ‘Comborg’ era comes into view. This era sees organizations fusing human elements and digital capabilities into a harmonious whole. In this stage, the equilibrium between human creativity and AI-driven efficiency will be crucial, an exciting but challenging frontier to explore.

I believe that revisiting this timeline of innovation’s evolution highlights the remarkable journey we’ve undertaken. As we now figure out the role of AI in innovation and business, it’s an exciting but also challenging time. Even though it can be a bit scary, I believe we can create a successful future if we use AI in a responsible and thoughtful way.

Stefan Lindegaard Evolution of Innovation

Image Credit: Stefan Lindegaard, Unsplash

AI and Human Creativity Solving Complex Problems Together

GUEST POST from Janet Sernack

A recent McKinsey “Leading Off – Essentials for leaders and those they lead” email newsletter referred to an article, “The organization of the future: Enabled by gen AI, driven by people,” which stated that digitization, automation, and AI will reshape whole industries and every enterprise. The article elaborated further by saying that, in terms of magnitude, the challenge is akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China. That shift was powered by the defining trait of our species, our human creativity, which is at the heart of all creative problem-solving endeavors, where innovation is the engine of growth, no matter what the context.

Moving into Uncharted Job and Skills Territory

We don’t yet know exactly what technological or soft skills, new occupations, or jobs will be required in this fast-moving transformation, or how we might further advance generative AI, digitization, and automation.

We also don’t know how AI will impact the need for humans to tap even more into the defining trait of our species, our human creativity, to enable us to become more imaginative, curious, and creative in the way we solve some of the world’s greatest challenges and most complex and pressing problems, and transform them into innovative solutions.

We can be proactive by asking these two generative questions:

  • What if the true potential of AI lies in embracing its ability to augment human creativity and aid innovation, especially in enhancing creative problem solving, at all levels of civil society, instead of avoiding it? (Ideascale)
  • How might we develop AI as a creative thinking partner to effect profound change, and create innovative solutions that help us build a more equitable and sustainable planet for all humanity? (Hal Gregersen)

We can be proactive because our human creativity is at the heart of creative problem-solving, and innovation is the engine of growth, competitiveness, and profound and positive change.

Developing a Co-Creative Thinking Partnership

In a recent article in the Harvard Business Review “AI Can Help You Ask Better Questions – and Solve Bigger Problems” by Hal Gregersen and Nicola Morini Bianzino, they state:

“Artificial intelligence may be superhuman in some ways, but it also has considerable weaknesses. For starters, the technology is fundamentally backward-looking, trained on yesterday’s data – and the future might not look anything like the past. What’s more, inaccurate or otherwise flawed training data (for instance, data skewed by inherent biases) produces poor outcomes.”

The authors say that dealing with this issue requires people to manage this limitation if they are going to treat AI as a creative-thinking partner in solving complex problems in ways that enable people to live healthy and happy lives and to co-create an equitable and sustainable planet.

We can achieve this by focusing on specific areas where the human brain and machines might possibly complement one another to co-create the systemic changes the world badly needs through creative problem-solving.

  • A double-edged sword

This perspective is further complemented by a recent Boston Consulting Group article, “How people can create – and destroy – value with generative AI,” which found that the adoption of generative AI is, in fact, a double-edged sword.

In an experiment, participants using GPT-4 for creative product innovation outperformed the control group (those who completed the task without using GPT-4) by 40%. But for business problem solving, using GPT-4 resulted in performance that was 23% lower than that of the control group.

“Perhaps somewhat counterintuitively, current GenAI models tend to do better on the first type of task; it is easier for LLMs to come up with creative, novel, or useful ideas based on the vast amounts of data on which they have been trained. Where there’s more room for error is when LLMs are asked to weigh nuanced qualitative and quantitative data to answer a complex question. Given this shortcoming, we as researchers knew that GPT-4 was likely to mislead participants if they relied completely on the tool, and not also on their own judgment, to arrive at the solution to the business problem-solving task (this task had a “right” answer)”.

  • Taking the path of least resistance

In McKinsey’s Top Ten Reports This Quarter blog, seven out of the ten articles relate specifically to generative AI: technology trends, state of AI, future of work, future of AI, the new AI playbook, questions to ask about AI, and healthcare and AI.

As it is the most dominant topic across the board globally, if we are not both vigilant and intentional, a myopic focus on this one significant technology will take us all down the path of least resistance, where our energy will move to where it is easiest to go. Like a river, which takes the path of least resistance through its surrounding terrain, unless we take a strategic and systemic perspective we will always go, and end up, where we have always gone.

  • Living our lives forwards

According to the Boston Consulting Group article:

“The primary locus of human-driven value creation lies not in enhancing generative AI where it is already great, but in focusing on tasks beyond the frontier of the technology’s core competencies.”

This means that a whole lot of other variables need to be at play, and a newly emerging set of human skills, especially in creative problem solving, needs to be developed to maximize the value from generative AI and to generate the most imaginative, novel, and value-adding landing strips of the future.

Creative Problem Solving

In my previous blog posts “Imagination versus Knowledge” and “Why Successful Innovators Are Curious Like Cats” we shared that we are in the midst of a “Sputnik Moment” where we have the opportunity to advance our human creativity.

This human creativity is inside all of us. It involves the process of bringing something new into being that is original, surprising, useful, or desirable, in ways that add value to the quality of people’s lives and that they appreciate and cherish.

  • Taking a both/and approach

Our human creativity will be paralysed if we focus our attention and intention only on the technology, and on the financial gains or potential profits we will get from it, and if we exclude the possibilities of a co-creative thinking partnership with the technology.

Instead, we need to deeply engage people in true creative problem solving, involving them in positively impacting our crucial relationships and connectedness with one another, with the natural world, and with the planet.

  • A marriage between creatives, technologists, and humanities

In a recent Fast Company video presentation, “Innovating Imagination: How Airbnb Is Using AI to Foster Creativity,” Brian Chesky, CEO of Airbnb, states that we need to consider and focus our attention and intention on discovering what is good for people.

He calls for a “marriage between creatives, technologists, and the humanities” that brings out the human and doesn’t let technology overtake our human element.

Developing Creative Problem-Solving Skills

At ImagineNation, we teach, mentor, and coach clients in creative problem-solving, through developing their Generative Discovery skills.

This involves developing an open and active mind and heart, by becoming flexible, adaptive, and playful in the ways we engage and focus our human creativity in the four stages of creative problem-solving.

This includes sensing, perceiving, and enabling people to deeply listen, inquire, question, and debate from the edges of temporarily hidden or emerging fields of the future.

It also means knowing how to emerge, diverge, and converge creative insights, collective breakthroughs, an ideation process, and cognitive and emotional agility shifts to:

  • Deepen our attending, observing, and discerning capabilities to consciously connect with, explore, and discover possibilities that create tension and cognitive dissonance to disrupt and challenge the status quo, and other conventional thinking and feeling processes.
  • Create cracks, openings, and creative thresholds by asking generative questions to push the boundaries, and challenge assumptions and mental and emotional models to pull people towards evoking, provoking, and generating boldly creative ideas.
  • Unleash possibilities, and opportunities for creative problem solving to contribute towards generating innovative solutions to complex problems, and pressing challenges, that may not have been previously imagined.

Experimenting with the generative discovery skill set enables us to juggle multiple theories, models, and strategies to create and plan in an emergent, and non-linear way through creative problem-solving.

As stated by Hal Gregersen:

“Partnering with the technology in this way can help people ask smarter questions, making them better problem solvers and breakthrough innovators.”

Succeeding in the Age of AI

We know that Generative AI will change much of what we do and how we do it, in ways that we cannot yet anticipate.

Success in the age of AI will largely depend on our ability to learn and change faster than we ever have before, in ways that preserve our well-being, connectedness, imagination, curiosity, human creativity, and our collective humanity through partnering with generative AI in the creative problem-solving process.

Find Out More About Our Work at ImagineNation™

Find out about our collective learning products and tools, including The Coach for Innovators, Leaders, and Teams Certified Program, presented by Janet Sernack. It is a collaborative, intimate, and deeply personalized innovation coaching and learning program, supported by a global group of peers over nine weeks, and it can be customised as a bespoke corporate learning program.

It is a blended and transformational change and learning program that will give you a deep understanding of the language, principles, and applications of an ecosystem focus, human-centric approach, and emergent structure (Theory U) to innovation, and upskill people and teams and develop their future fitness, within your unique innovation context. Find out more about our products and tools.

Image Credit: Pixabay

AI and the Productivity Paradox

GUEST POST from Greg Satell

In the 1970s and ’80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely, though, productivity growth decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth, and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry and process specific software and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped that “You can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is actually often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

The Hard Problem of Consciousness is Not That Hard

GUEST POST from Geoffrey A. Moore

We human beings like to believe we are special—and we are, but not as special as we might like to think. One manifestation of our need to be exceptional is the way we privilege our experience of consciousness. This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:

“… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”

— David Chalmers, Facing up to the problem of consciousness

The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.

The key word here is experience. It emerges out of cognitive processes, but it is not completely reducible to them. For anyone who has read much in the field of complexity, this should not come as a surprise. All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate. Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence. Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct. Shakespeare is reducible to quantum effects? Good luck with that.

Most people’s problem with explaining experience, on the other hand, is that they place it too high. They want to use their own personal experience as a grounding point. The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets. Philosophers call such experiences qualia, and they attribute all sorts of ineluctable and mysterious qualities to them. But there is a much better way to understand what qualia really are—namely, the pre-linguistic mind’s predecessor to ideas. That is, they are representations of reality that confer strategic advantage to the organism that can host and act upon them.

Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated. Experiences are what we remember. That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged. Of course, we do have experiences with language directly—lots of them. But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability.

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

  • The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences. Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage to organisms under conditions of natural and sexual selection. Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival. Once brain emerges, chemical signaling is supplemented by electrical signaling to the same ends. There is no magic here, only a change of medium.
  • Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.” The answer to this question is, “level by level.” First sentience has to emerge from non-sentience. That happens with the emergence of life at the cellular level. Then sentience has to spread beyond the cell. That happens when chemical signaling enables cellular communication. Then sentience has to speed up to enable mobile life. That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems. Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences. Again, no miracles required.
  • Others, such as Daniel Dennett and Patricia Churchland believe that the hard problem is really more of a collection of easy problems, and will be solved through further analysis of the brain and behavior. If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.
  • Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience. Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later. Specifically, to experience anything as like anything else is not possible without the intervention of language. That is, likeness is not a qualia, it is a language-enabled idea. Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he is posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected in that only human beings have selves. Selves, in other words, are creations of language. More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives. This is a completely language-dependent undertaking and thus not available to pets or infants. Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, then hear stories about themselves, that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness.

  • Chalmers argues that experience is more than the sum of its parts. In other words, experience is irreducible. Yes, but let’s not be mysterious here. Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements. To say something is irreducible does not mean that it is unexplainable.
  • Wolfgang Fasching argues that the hard problem is not about qualia, but about pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate. In this case, the dangerous phrase is “the nature of seeing.” There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here. Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land). Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors. But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me. I don’t think I’ve exhausted the topic, so let me close by saying…

That’s what I think, what do you think?

Image Credit: Pixabay

The Robots Aren’t Really Going to Take Over

GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that doesn’t seem bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs, it replaces tasks, and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t with automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a matter of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine anyone working in agriculture a hundred years ago would be horrified to find that their jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They may have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

A Triumph of Artificial Intelligence Rhetoric

Understanding ChatGPT

GUEST POST from Geoffrey A. Moore

I recently finished reading Stephen Wolfram’s very approachable introduction to ChatGPT, What is ChatGPT Doing . . . And Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.

First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows:

  1. Ingest an enormous corpus of text from every available digitized source.
  2. While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
  3. Within the confines of each text, record the location of every token relative to every other token.
  4. Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
  5. Feed these probabilities into a neural network to cluster words and build a map of relationships.
  6. Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect).
  7. Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
  8. As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
  9. Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.

OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, specifically when it comes to making what I think is the most important point to register when it comes to understanding ChatGPT. That point is a simple one. ChatGPT has no idea what it is talking about.

Indeed, ChatGPT has no ideas of any kind—no knowledge or expertise—because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it had been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.
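
To make “it is all math” concrete, here is a deliberately toy sketch in Python. It is my own illustration, not Wolfram’s or OpenAI’s code: it swaps the neural network for a simple word-adjacency (bigram) table, yet it makes the same basic move of predicting the next word purely from co-occurrence statistics, with no semantics anywhere in the loop.

```python
# A toy "next-word predictor": an illustrative sketch only, not ChatGPT's actual mechanism.
# It replaces the neural network with a bigram (word-adjacency) table, but makes the same
# point that prediction here is pure statistics over tokens and positions.
from collections import Counter, defaultdict
import random

# A tiny, hypothetical corpus standing in for "an enormous corpus of text".
corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Record, for every word, how often each other word immediately follows it.
next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen right after `word` in the corpus."""
    counts = next_word_counts.get(word)
    if not counts:
        return random.choice(corpus)  # unseen word: fall back to a random token
    return counts.most_common(1)[0][0]

def generate(prompt_word, length=8):
    """Extend a one-word prompt by repeatedly predicting the most probable next word."""
    words = [prompt_word]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

# Prints a fluent-looking but meaning-free continuation built purely from adjacency counts.
print(generate("the"))
```

The output reads like language because the statistics of the corpus read like language, not because the program understands anything, which is the point being made above at a vastly larger scale.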

OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.

First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.
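
As a small, hypothetical illustration of what guardrails at the input end of the funnel can look like, the sketch below curates a corpus by keeping only documents from an allowlist of trusted sources before they ever reach the model; the domain names, URLs, and document structure are invented for the example.

```python
# A minimal sketch of input-side guardrails: restrict the corpus to trusted sources
# before it is used for training or retrieval. All names and URLs here are hypothetical.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"docs.example.com", "support.example.com"}  # hypothetical allowlist

documents = [
    {"url": "https://docs.example.com/setup-guide", "text": "Official setup guide ..."},
    {"url": "https://randomforum.example.org/thread/42", "text": "Unverified forum post ..."},
]

def is_trusted(doc):
    """Keep only documents whose host appears on the allowlist."""
    return urlparse(doc["url"]).netloc in TRUSTED_DOMAINS

curated_corpus = [doc for doc in documents if is_trusted(doc)]
print(f"{len(curated_corpus)} of {len(documents)} documents kept for the model's corpus")
```

Filtering the corpus this way trades a little rhetorical polish for trustworthiness, which, as noted above, is a small price to pay for many use cases.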

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web, but it has no knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of cross-checking computing, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.

Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.

Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.

Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.

That’s what I think. What do you think?

Image Credit: Pixabay

Generation AI Replacing Generation Z

by Braden Kelley

The boundary lines between different named generations are a bit fuzzy, but the goal should always be to draw the boundary at an event significant enough to create substantial behavior changes in the new generation that are worthy of consideration in strategy formation.

I believe we have arrived at such a point and that it is time for GenZ to cede the top of strategy mountain to a new generation I call Generation AI (GenAI).

The dividing line for Generation AI falls around 2014, and the people of GenAI are characterized by being the first group to grow up never knowing a world without easy access to generative artificial intelligence (AI) tools that are beginning to transform their interactions with our institutions and with each other.

We have already seen professors and teachers having to police AI-generated school essays, while the rest of us are trying to cope with frighteningly realistic deep fake audio and video. But what other impacts on people’s behavior will we see as a result of the coming ubiquity of artificial intelligence?

It is important to remember that generative artificial intelligence is not really artificial intelligence but collective intelligence, informed by what we the people have contributed to the training/reference set. As such, these large language models are predicting the next word or combining existing content based on whatever training set they are exposed to. They are not creating original thought.

Generative AI is being built into nearly all of our existing software and cloud tools, and GenAI will grow up only knowing a reality where every application and web site they interact with has an AI component to it. Generation AI will not know a time when they cannot ask an AI, in the same way that GenZ relies on social search, and Gen X and Millennials assume search engines hold their answers.

Our brains are changing to focus more on processing and less on storage. These changes make us more capable, but more vulnerable too.

This new AI technology represents a double-edged sword, and its effects could fall on either edge of the sword in different areas:

Option 1 – Best Case

  • Generative AI will amplify creativity by encouraging the recombination of existing images, text, audio, and video in new and inspiring ways, using the outputs of AI as inputs into human creativity.

Option 2 – Worst Case

  • Generative AI will reduce creativity because people will become reliant on artificial intelligence to create, producing an echo chamber of new content generated only from existing content, leading to AI outputs becoming the only outputs and a world where people spend more time interacting with AIs than with other people.

Which of these two options on the impact of AI reliance do you see as the most likely in the areas where you focus?

How do you see Generation AI impacting the direction of societies around the world?

Are you planning to add Generation AI to your marketing strategies and strategic planning for 2024 or beyond?

Reference

For reference, here is a timeline of previous American generations according to an article from NPR:

Though there is a consensus on the general time period for generations, there is not an agreement on the exact year that each generation begins and ends.

Generation Z – Born 2001-2013 (Age 10-22)

These kids were the first born with the Internet and are suspected to be the most individualistic and technology-dependent generation. Sometimes referred to as the iGeneration.

EDITOR’S NOTE: This description is erroneous; the differentiating factor of GenZ is that they experienced the rise of social media.

Millennials – Born 1980-2000 (Age 23-43)

They experienced the rise of the Internet, Sept. 11 and the wars that followed. Sometimes called Generation Y. Because of their dependence on technology, they are said to be entitled and narcissistic.

Generation X – Born 1965-1979 (Age 44-58)

They were originally called the baby busters because fertility rates fell after the boomers. As teenagers, they experienced the AIDS epidemic and the fall of the Berlin Wall. Sometimes called the MTV Generation, the “X” in their name refers to this generation’s desire not to be defined.

EDITOR’S NOTE: GenX also experienced the rise of the personal computer, and this has influenced their parenting of a large portion of Millennials and GenZ.

Baby Boomers – Born 1943-1964 (Age 59-80)

The boomers were born during an economic and baby boom following World War II. These hippie kids protested against the Vietnam War and participated in the civil rights movement, all with rock ‘n’ roll music blaring in the background.

Silent Generation – Born 1925-1942 (Age 81-98)

They were too young to see action in World War II and too old to participate in the fun of the Summer of Love. This label describes their conformist tendencies and belief that following the rules was a sure ticket to success.

GI Generation – Born 1901-1924 (Age 99+)

They were teenagers during the Great Depression and fought in World War II. Sometimes called the greatest generation (following a book by journalist Tom Brokaw) or the swing generation because of their jazz music.
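
For readers who want the boundaries spelled out, here is a small Python sketch that encodes the ranges listed above together with the roughly 2014 dividing line proposed for Generation AI. Since the exact boundary years are debated, treat the table as illustrative rather than definitive.

```python
# Encodes the (admittedly fuzzy) generation boundaries listed above, plus the
# proposed ~2014 cutoff for Generation AI. Ranges are illustrative, not definitive.
GENERATIONS = [
    ("GI Generation", 1901, 1924),
    ("Silent Generation", 1925, 1942),
    ("Baby Boomers", 1943, 1964),
    ("Generation X", 1965, 1979),
    ("Millennials", 1980, 2000),
    ("Generation Z", 2001, 2013),
    ("Generation AI", 2014, None),  # open-ended, per the ~2014 dividing line above
]

def generation_for(birth_year):
    """Return the generation label for a birth year, or None if it predates the table."""
    for name, start, end in GENERATIONS:
        if birth_year >= start and (end is None or birth_year <= end):
            return name
    return None

print(generation_for(1985))  # Millennials
print(generation_for(2016))  # Generation AI
```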

If you’d like to sign up to learn more about my new FutureHacking™ methodology and set of tools, go here.

When Innovation Becomes Magic

GUEST POST from Pete Foley

Arthur C. Clarke’s Third Law famously states:

“Any sufficiently advanced technology is indistinguishable from magic”

In other words, if the technology of an advanced civilization is so far beyond comprehension, it appears magical to a less advanced one. This could take the form of a human encounter with a highly advanced extraterrestrial civilization, how current technology might be viewed by historical figures, or encounters between human cultures with different levels of scientific and technological knowledge.

Clarke’s law implicitly assumed that knowledge within a society is sufficiently democratized that we never view technology within a civilization as ‘magic’. But a combination of specialization, rapid advancements in technology, and a highly stratified society means this is changing. Generative AI, blockchain, and various forms of automation are all ‘everyday magic’ that we increasingly use, but mostly with little more than an illusion of understanding of how they work. More technological leaps are on the horizon, and as innovation accelerates exponentially, we are all going to have to navigate a world that looks and feels increasingly magical. Knowing how to do this effectively is going to become an increasingly important skill for us all.

The Magic Behind the Curtain: So what’s the problem? Why do we need to understand the ‘magic’ behind the curtain, as long as we can operate the interface and reap the benefits? After all, most of us use phones, computers, and cars, or take medicines, without really understanding how they work. We rely on experts to guide us, and use interfaces that help us navigate complex technology without a need for deep understanding of what goes on behind the curtain.

It's a nuanced question. Take a car as an analogy. We certainly don't need to know how to build one in order to use one. But we do need to know how to operate it and understand what its performance limitations are. It also helps to have at least some basic knowledge of how it works; enough to change a tire on a remote road, or enough basic mechanics to minimize the chance of being ripped off by a rogue mechanic. In a nutshell, the more we understand it, the more efficiently, safely and economically we leverage it. It's a similar situation with medicine. It is certainly possible to defer all of our healthcare decisions to a physician. But people who partner with their doctors and become advocates for their own health generally have superior outcomes, are less likely to die from unintended contraindications, and typically pay less for healthcare. And this is not trivial: issues associated with prescription medications are the third leading cause of death in Europe, behind cancer and heart disease. We don't need to know everything to use a tool, but in most cases, the more we know, the better.

The Speed/Knowledge Trade-Off: With new, increasingly complex technologies coming at us in waves, it's becoming increasingly challenging to make sense of what's 'behind the curtain'. This creates the potential for costly mistakes. But delaying embracing a technology until we fully understand it can come with serious opportunity costs. Adopt too early, and we risk getting it wrong; too late, and we 'miss the bus'. How many people who invested in cryptocurrency or NFTs really understood what they were doing? And how many of those have lost on those deals, often to the benefit of those with deeper knowledge? That isn't in any way to suggest that those who are knowledgeable in those fields deliberately exploit those who aren't, but markets tend to reward those who know, and punish those who don't.

The AI Oracle: The recent rise of Generative AI has many people treating it essentially as an oracle. We ask it a question, and it 'magically' spits out an answer in a very convincing and sharable format. Few of us understand the basics of how it does this, let alone the details or limitations. We may not call it magic, but we often treat it as such. We really have little choice: we lack sufficient understanding to apply quality critical thinking to what we are told, so we have to take answers on trust. That would be brilliant if AI were foolproof. But while it is certainly right a lot of the time, it does make mistakes, often quite embarrassing ones. For example, Google's Bard incorrectly claimed the James Webb Space Telescope had taken the first photo of a planet outside our solar system, which led to panic selling of parent company Alphabet's stock. Generative AI is a superb innovation, but its current iterations are far from perfect. They are limited by the data they are trained on, are extremely poor at spotting their own mistakes, can be manipulated through the choice of training data, and lack the underlying framework of understanding that is essential for critical thinking or for making analogical connections. I'm sure that we'll eventually solve these issues, either with iterations of current tech, or via integration of new technology platforms. But until we do, we have a brilliant, but still flawed, tool. It's mostly right, and is perfect for quickly answering a lot of questions, but its biggest vulnerability is that most users have a pretty limited ability to recognize when it's wrong.

Technology Blind Spots: That, of course, is the Achilles' heel, the blind spot, and the dilemma. If an answer is wrong and we act on it without realizing, it's potentially trouble. But if we already know the answer, we didn't really need to ask the AI. Of course, it's more nuanced than that. Just getting the right answer is not always enough, as the causal understanding that we pick up by solving a problem ourselves can also be important. It helps us to spot obvious errors, but also helps to generate memory, experience, problem-solving skills, buy-in, and belief in an idea. Procedural and associative memory are encoded differently from answers, and mechanistic understanding helps us to reapply insights and make analogies.

Need for Causal Understanding: Belief and buy-in can be particularly important. Different people respond to a lack of 'internal' understanding in different ways. Some shy away from the unknown and avoid or oppose what they don't understand. Others embrace it, and trust the experts. There's really no right or wrong in this. Science is a mixture of both approaches: it stands on the shoulders of giants, but advances by challenging existing theories. Good scientists are both data-driven and skeptical. But in some cases skepticism based on a lack of causal understanding can be a huge barrier to adoption. It has contributed to many of the debates we see today around technology adoption, including genetically engineered foods, efficacy of certain pharmaceuticals, environmental contaminants, nutrition, vaccinations, and, during Covid, RNA vaccines and even masks. Even extremely smart people can make poor decisions because of a lack of causal understanding. In 2003, Steve Jobs was advised by his physicians to undergo immediate surgery for a rare form of pancreatic cancer. Instead, he delayed the procedure for nine months and attempted to treat himself with alternative medicine, a decision that very likely cut his life tragically short.

What Should We Do? We need to embrace new tools and opportunities, but we need to do so with our eyes open. Loss aversion, and the fear of losing out, is a very powerful motivator of human behavior, and so an important driver of the adoption of new technology. But it can be costly. A lot of people lost out with crypto and NFTs because they had a fairly concrete idea of what they could miss out on if they didn't engage, but a much less defined idea of the risk, because they didn't deeply understand the system. Ironically, in this case, our loss aversion bias caused a significant number of people to lose out!

Similarly with AI, a lot of people are embracing it enthusiastically, in part because they are afraid of being left behind. That is probably right, but it's important to balance this enthusiasm with an understanding of its potential limitations. We may not need to know how to build a car, but it really helps to know how to steer and when to apply the brakes. Knowing how to ask an AI questions, and when to double-check its answers, are both going to be critical skills. For big decisions, 'second opinions' are going to become extremely important. And the human ability to interpret answers through a filter of nuance, critical thinking, different perspectives, analogy and appropriate skepticism is going to be a critical element in fully leveraging AI technology, at least for now.
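
To make the 'second opinion' habit concrete, here is a minimal sketch in Python of what it might look like to ask two independent AI services the same question and flag disagreement for human review. The ask_model_a and ask_model_b functions are hypothetical stand-ins (here they simply return canned strings), and the agreement test is a deliberately crude word-overlap proxy; the point is the pattern of cross-checking before acting, not any particular implementation.

    # A minimal sketch of the "second opinion" pattern described above.
    # ask_model_a / ask_model_b are hypothetical placeholders for calls to
    # two independent AI services; swap in whichever clients you actually use.

    def ask_model_a(question: str) -> str:
        # Placeholder answer standing in for a real generative AI response.
        return "Answer A: the figure you asked about is roughly 40 percent."

    def ask_model_b(question: str) -> str:
        # Placeholder answer from a second, independent service.
        return "Answer B: the figure is closer to 25 percent."

    def answers_agree(a: str, b: str, threshold: float = 0.6) -> bool:
        # Crude proxy for agreement: fraction of shared words (Jaccard overlap).
        # A real check would compare the extracted claims, not vocabulary.
        words_a, words_b = set(a.lower().split()), set(b.lower().split())
        overlap = len(words_a & words_b) / max(len(words_a | words_b), 1)
        return overlap >= threshold

    def second_opinion(question: str) -> str:
        first, second = ask_model_a(question), ask_model_b(question)
        if answers_agree(first, second):
            return first
        # Disagreement is the signal to slow down and apply human judgment.
        return "ANSWERS DIVERGE - verify manually:\n  " + first + "\n  " + second

    print(second_opinion("What share of the market does product X hold?"))

Run as written, the two canned answers diverge and the sketch flags them for manual verification, which is exactly the behavior the paragraph above argues for on big decisions.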

Today AI is still a tool, not an oracle. It augments our intelligence, but for complex, important or nuanced decisions or information retrieval, I'd be wary of sitting back and letting it replace us. Its ability to process data in quantity is certainly superior to any human's, but we still need humans to interpret, challenge and integrate information. The winners of this iteration of AI technology will be those who become highly skilled at walking that line, and who are good at managing the trade-off between speed and accuracy using AI as a tool. The good news is that we are naturally good at this; it's a critical function of the human brain, embodied in the way it balances Kahneman's System 1 and System 2 thinking. Future iterations may not need us, but for now AI is a powerful partner and tool, not a replacement.

Image credit: Pixabay


Sustaining Imagination is Hard

by Braden Kelley

Recently I stumbled across a new Royal Institution video of Martin Reeves, a managing director and senior partner in BCG's San Francisco office. Martin leads the BCG Henderson Institute, BCG's vehicle for exploring ideas from beyond the world of business that have implications for business strategy and management.

I previously interviewed Martin along with his co-author Dr. Jack Fuller in a post titled ‘Building an Imagination Machine‘. In this video you’ll find him presenting content along similar themes. I think you’ll enjoy it:

Bonus points to anyone who can name this napkin sketch in the comments.

In the video Martin explores several of the frameworks introduced in his book The Imagination Machine. One of the central tenets of Martin's video is that sustaining imagination is hard. There are three core reasons why this is so:

  1. Overspecialization – As companies grow, jobs become increasingly smaller in scope and greater in specialization, leading to myopia as fewer and fewer people see the problems that the company started to solve in the first place
  2. Insularity – As companies grow, the majority of employees shift from being externally facing to being internally facing, isolating more and more employees from the customer and their evolving wants and needs
  3. Complacency – As companies become successful, predictably, the successful parts of the business receive most of the attention and investment, making it difficult for new efforts to receive the care and feeding necessary for them to grow and, dare I say, replace the currently idolized parts of the business

I do like the notion Martin presents that companies wishing to be continuously successful should continuously seek to be surprised and invest energy in rethinking, exploring and probing in areas where they find themselves surprised.

Martin also explores some of the common misconceptions about imagination, including the ideas that imagination is:

  1. A solitary endeavor
  2. Something that comes out of nowhere
  3. Unmanageable

And finally, Martin puts forward his ideas on how imagination can be harnessed systematically, using a simple six-step model:

  1. Seduction – Where can we find surprise?
  2. Idea – Do we embrace the messiness of the napkin sketch? Or expect perfection?
  3. Collision – Where can we collide this idea with the real world for validation or more surprise?
  4. Epidemic – How can we foster collective imagination? What behaviors are we encouraging?
  5. New Ordinary – How can we create new norms? What evolvable scripts can we create that live in between the 500-page manual and the one-sentence vision?
  6. Encore – How can we sustain imagination? How can we maintain a Day One mentality?

And no speech in 2023 would be complete without some analysis of what role artificial intelligence (AI) has to play. Martin’s perspective is that when it comes to the different levels of cognition, AI might be good at finding patterns of correlation, but humans have more advanced capabilities than machines when it comes to finding causation and counterfactual opportunities. There is an opportunity for all of us to think about how we can leverage AI across the six steps in the model above to accelerate or enhance our human efforts.

To close, Martin highlighted that when it comes to leading re-imagination, it is important to look outward, self-disrupt, establish heroic goals, utilize multiple mental models, and foster playfulness and experimentation across the organization to help keep imagination alive.

p.s. If you're committed to learning the art and science of getting to the future first, then be sure to subscribe to my newsletter to make sure you're one of the first to get certified in the FutureHacking™ methodology.

Image credits: Netflix
