Tag Archives: AI

AI Strategy Should Have Nothing to do with AI

GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study that reviewed “more than 100 of the most commonly used social and behavior models” and identified eight styles that distinguish a culture and can be measured. I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer. Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions. As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, and company. Bottom line, the eight cultures are a tool, not a law (and I glossed over a lot of stuff from the report).

Image credit: Wikimedia Commons


How I Use AI to Understand Humans

(and Cut Research Time by 80%)

GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research. Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation. No matter how good the Large Language Model (LLM) is, it can’t tell you how people’s feelings, aspirations, and motivations drive their decisions.

Then I watched JTBD Untangled’s video with Evan Shore, Walmart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What because of its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done Research?  Yes, please!  That’s precisely what JobLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments.  For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research templates. Another colleague recently sang the praises of theydo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI. Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them. If you want to start from scratch, you can learn how to write super-effective prompts here and here. If you’d rather build on someone else’s work, Brian at JobsLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point.  Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect.    It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
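
To make that test-and-refine loop concrete, here is a minimal sketch of a prompt-testing harness in Python. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompt wording, and transcript path are illustrative placeholders to adapt, not recommendations, and the same loop works with whichever tool you settle on.

```python
# A minimal sketch of a prompt-testing harness, assuming the OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY in the environment. The model
# name, prompt wording, and transcript path are placeholders, not recommendations.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

SYNTHESIS_PROMPT = """You are a qualitative researcher. From the interview
transcript below, list the customer's functional, social, and emotional
Jobs to be Done, each with one short supporting quote."""


def synthesize(transcript_path: str, model: str = "gpt-4o-mini") -> str:
    """Run one past-project transcript through the candidate prompt."""
    transcript = Path(transcript_path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYNTHESIS_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


# Compare the output against the insights you already pulled from that project,
# tweak SYNTHESIS_PROMPT, and repeat until the two reliably line up, e.g.:
# print(synthesize("transcripts/2022_interview_03.txt"))
```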

Defend your budget

Using AI for customer research will save you time and money, but it is not free. It’s also not just the cost of the subscription or license for your chosen tool(s).  

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently. Some will love the idea of using AI for customer research, while others will resist it. Everyone, however, will pounce on any evidence that they’re right. So be prepared. Take advantage of free trials to play with tools. Test tools on friends, family, and colleagues. Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash


Humans Are Not as Different from AI as We Think

GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But, do you know why that is so?

The answer is, because that is exactly what we humans do as well.

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).
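
To see what that word-prediction machinery looks like in miniature, here is a toy sketch in Python. It simply counts which word follows which in a tiny made-up corpus and picks the most frequent successor; real LLMs condition on long contexts with billions of learned parameters rather than bigram counts, but the underlying question (given these words, what most likely comes next?) is the same.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then answer "what most probably comes next?" from those counts. Purely
# illustrative; real LLMs learn these probabilities over long contexts with
# neural networks, not bigram tables.
from collections import Counter, defaultdict

corpus = (
    "we choose our words we ride the flow of words "
    "we read and react and ride the flow"
).split()

# following[w] counts every word observed immediately after w
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1


def most_likely_next(word):
    """Return the most frequently observed successor of `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None


print(most_likely_next("ride"))  # -> "the"
print(most_likely_next("the"))   # -> "flow"
```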

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis would be. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are a, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?

Image Credit: Pixabay







Will Innovation Management Leverage AI in the Future?

GUEST POST from Jesse Nieminen

What role can AI play in innovation management, and how can we unlock its true potential?

Unless you’ve been living under a rock, you’ve probably heard a thing or two about AI in the last year. The launch of ChatGPT has supercharged the hype around AI, and now we’re seeing dramatic progress at a pace unlike anything that’s come before.

For those of us into innovation, it’s an exciting time.

Much has been said about the topic at large so I won’t go over the details here. At HYPE, what we’re most excited about is what AI can do for innovation management specifically. We’ve had AI capabilities for years, and have been looking into the topic at large for quite some time.

Here, I share HYPE’s current thinking and answer some key questions:

  • What can AI do for innovation management?
  • What are some common use cases?
  • How can you operationalize AI’s use in innovation management?

The Current State of Innovation Management

Before we answer those questions, let’s review how most organizations carry out innovation management.

We’re all familiar with the innovation funnel.

Hype Innovation Image 1

To oversimplify, you gather ideas, review them, and then select the best ones to move forward to the pilot stage and eventual implementation. After each phase, poor ideas get weeded out.

It’s systematic, it’s conceptually simple, and investment is tiered so that you don’t spend too much time or money before an idea has shown its potential. What’s not to love?

Well, there are a few key challenges: the process is slow, linear, and usually biased by the evaluation criteria selected for the gates or decision points (if you use a Phase-Gate model).

Each of these challenges can be mitigated with smart adaptations of the process, but the funnel has another fundamental limitation: It’s generally built for a world where innovation requires significant capital expenditures and vast amounts of proprietary information.

But, regardless of your industry, that just isn’t the case anymore. Now most information is freely available, and technology has come a long way, in many cases because of AI. For example, pharmaceutical companies use AI to accelerate drug discovery while infrastructure and manufacturing companies use advanced simulation techniques, digital twins (virtual replicas of physical objects or systems), and rapid prototyping.

It’s now possible to innovate, test, and validate ideas faster than ever with minimal investment. With the right guidance, these tasks don’t have to be limited to innovation experts like you anymore. That can be an intimidating thought, but it’s also an empowering one. Soon, thanks to AI, you’ll be able to scale your expertise and make an impact significantly bigger than before.

For more than 20 years, we’ve been helping our customers succeed in this era of systematic innovation management. Today, countless organizations manage trends at scale, collect insights and ideas from a wide and diverse audience, and then manage that funnel highly effectively.

Yet, despite, or maybe because of this, more and more seemingly well-run organizations are struggling to keep up and adapt to the future.

What gives?

Some say that innovation is decelerating. Research reveals that as technology gets more complex, coming up with the next big scientific breakthrough is likely to require more and more investment, which makes intuitive sense. This type of research is actually about invention, not innovation per se.

Innovation is using those inventions to drive measurable value. The economic impact of these inventions has always come and gone in waves, as highlighted in ARK Investment’s research, illustrated below.

Throughout history, significant inventions have created platforms that enable dramatic progress through their practical application or, in other words, through innovation. ARK firmly believes that we’re on the precipice of another such wave and one that is likely to be bigger than any that has come before. AI is probably the most important of these platforms, but it’s not the only one.

Mckinsey Hype Innovation Image 2

Whether that will be the case remains to be seen, but regardless, the economic impact of innovation typically derives from the creative combination of existing “building blocks,” be they technologies, processes, or experiences.

Famously, the more such building blocks, or types of innovation, you combine to solve a specific pain point or challenge holistically, the more successful you’re likely to be. Thanks to more and more information and technology becoming free or highly affordable worldwide, change has accelerated rapidly in most industries.

That’s why, despite the evident deceleration of scientific progress in many industries, companies have to fight harder to stay relevant and change dramatically more quickly, as evidenced by the average tenure of S&P500 companies dropping like a stone.

Hype Innovation 3

In most industries, sustainable competitive advantages are a thing of the past. Now, it’s all about strategically planning for, as well as adapting to, change. This is what’s known as transient advantage, and it’s already a reality for most organizations.

How Innovation Management Needs to Change

In this landscape, the traditional innovation funnel isn’t cutting it anymore. Organizations can’t just focus on research and then turn that into new products and expect to do well.

To be clear, that doesn’t mean that the funnel no longer works, just that managing it well is no longer enough. It’s now table stakes. With that approach, innovating better than the next company is getting harder and more expensive.

When we look at our most successful customers and the most successful companies in the world in general, they have several things in common:

  • They have significantly faster cycle times than the competition at every step of the innovation process, i.e., they simply move faster.
  • For them, innovation is not a team, department, or process. It’s an activity the entire organization undertakes.
  • As such, they innovate everything, not just their products but also processes, experiences, business models, and more.

When you put these together, the pace of innovation leaves the competition in the dust.

How can you then maximize the pace of innovation at your organization? In a nutshell, it comes down to having:

  • A well-structured and streamlined set of processes for different kinds of innovation;
  • Appropriate tools, techniques, capabilities, and structures to support each of these processes;
  • A strategy and culture that values innovation;
  • A network of partners to accelerate learning and progress.

With these components in place, you’ll empower most people in the organization to deliver innovation, not just come up with ideas, and that makes all the difference in the world.

Hype Innovation 4

What Role Does AI Play in Innovation Management?

In the last couple of years, we’ve seen massive advancements not just in the quality of AI models and tools, but especially in the affordability and ease of their application. What used to be feasible for just a handful of the biggest and wealthiest companies out there is now quickly commoditizing. Generative AI, which has attracted most of the buzz, is merely the tip of the iceberg.

In just a few years, AI is likely to play a transformative role in the products and services most organizations provide.

For innovation managers too, AI will have dramatic and widely applicable benefits by speeding up and improving the way you work and innovate.

Let’s dive a bit deeper.

AI as an Accelerator

At HYPE, because we believe that using AI as a tool is something every organization that wants to innovate needs to do, we’ve been focusing on applying it to innovation management for some time. For example, we’ve identified and built a plethora of use cases where AI can be helpful, and it’s not just about generative AI. Other types of models and approaches still have their place as well.

There are too many use cases to cover here in detail, but we generally view AI’s use as falling into three buckets:

  • Augmenting: AI can augment human creativity, uncover new perspectives, kickstart work, help alleviate some of the inevitable biases, and make top-notch coaching available for everyone.
  • Assisting: AI-powered tools can assist innovators in research and ideation, summarize large amounts of information quickly, provide feedback, and help find, analyze, and make the most of vast quantities of structured or unstructured information.
  • Automating: AI can automate both routine and challenging work, to improve the speed and efficiency at which you can operate and save time so that you can focus on the value-added tasks at the heart of innovation.

In a nutshell, with the right AI tools, you can move faster, make smarter decisions, and operate more efficiently across virtually every part of the innovation management process.

While effective on their own, it’s only by putting the “three As” together and operationalizing them across the organization that you can unlock the full power of AI and take your innovation work to the next level.


Putting AI Into Practice

So, what’s the key to success with AI?

At HYPE, we think the key is understanding that AI is not just one “big thing.” It’s a versatile and powerful enabling technology that has become considerably cheaper and will likely continue on the same trajectory.

There are significant opportunities for using AI to deliver more value for customers, but organizations need the right data and talent to maximize the opportunities and to enable AI to support how their business operates, not least in the field of innovation management. It’s essential to find the right ways to apply AI to specific business needs; just asking everybody to use ChatGPT won’t cut it.

The anecdotal evidence we’re hearing highlights that learning to use a plethora of different AI tools and operationalizing these across an organization can often become challenging, time-consuming, and expensive.

To overcome these issues, there’s a real benefit in finding ways to operationalize AI as a part of the tools and processes you already use. And that’s where we believe The HYPE Suite with its built-in AI capabilities can make a big difference for our customers.

Final Thoughts

At the start of this article, we asked “Is AI the future of innovation management?”

In short, we think the answer is yes. But the question misses the real point.

Almost everyone is already using AI in at least some way, and over time, it will be everywhere. As an enabling technology, it’s a bit like computers or the Internet: Sure, you can innovate without them, but if everyone else uses them and you don’t, you’ll be slower and end up with a worse outcome.

The real question is how well you use and operationalize AI to support your innovation ambitions, whatever they may be. Using AI in combination with the right tools and processes, you can innovate better and faster than the competition.

At HYPE, we have many AI features in our development roadmap that will complement the software solutions we already have in place. Please reach out to us if you’d like to get an early sneak peek into what’s coming up!

Originally published at https://www.hypeinnovation.com.

Image credits: Pixabay, Hype, McKinsey


Top 10 Human-Centered Change & Innovation Articles of January 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are January’s ten most popular innovation posts:

  1. Top 40 Innovation Bloggers of 2023 — Curated by Braden Kelley
  2. Creating Organizational Agility — by Howard Tiersky
  3. 5 Simple Steps to Team Alignment — by David Burkus
  4. 5 Essential Customer Experience Tools to Master — by Braden Kelley
  5. Four Ways To Empower Change In Your Organization — by Greg Satell
  6. AI as an Innovation Tool – How to Work with a Deeply Flawed Genius! — by Pete Foley
  7. Top 100 Innovation and Transformation Articles of 2023 — Curated by Braden Kelley
  8. 80% of Psychological Safety Has Nothing to Do With Psychology — by Robyn Bolton
  9. How will you allocate your time differently in 2024? — by Mike Shipulski
  10. Leadership Development Fundamentals – Work Products — by Mike Shipulski

BONUS – Here are five more strong articles published in December that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:







AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!

GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI. It opens doors that were, for me at least, barely imaginable 10 years ago. And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it’s hard to imagine a more useful tool.

But it is still a tool. And as with any tool, learning its limitations and how to use it skillfully is key. I make the analogy to an automobile. We don’t need to know everything about how it works, and we certainly don’t need to understand how to build it. But we do need to know what it can and cannot do. We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant? An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi’s excellent TED talk that is attached). It has phenomenal ‘data intelligence’, but can also fail on even simple logic puzzles. Part of the problem is that AI lacks ‘common sense’, or the implicit framework that filters a great deal of human decision making and behavior. Choi calls this common sense the ‘dark matter’ of decision-making. I think of it as the framework of knowledge, morality, biases and common sense that we accumulate over time, and that is foundational to the unconscious ‘System 1’ elements that influence many, if not most, of our decisions. But whatever we call it, it’s an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from being unique in having limitations in the quality of its output. Any information source we use is subject to errors. We all know not to believe everything we read on the internet. That makes Google searches useful, but also potentially flawed. Even consulting with human experts has pitfalls. Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error. But most of us have learned to be appropriately skeptical of these sources of information. We routinely cross-reference, challenge data, seek second opinions and do not simply ‘parrot’ the data they provide.

But increasingly with AI, I’ve seen a tendency to treat its output with perhaps too much respect. The reasons for this are multi-faceted, but very human. Part of it may be the potential for generative AI to provide answers in an apparently definitive form. Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy. Another element is the ability it gives us to quickly penetrate areas where we may have little domain knowledge or background. As I’ve already mentioned, this is fantastic for those of us who value exploring new domains and analogies. But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is for us to miss even basic mistakes.

As for AI’s limitations, Choi provides some sobering examples. It can pass a bar exam, but can fail abysmally on even simple logic problems. For example, it suggests building a bridge over broken glass and nails is likely to cause punctures! It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials. Of course, these negative examples are somewhat cherry-picked to make a point, but they do show how poor some AI answers can be, and how low in common sense they can be. Of course, when the errors are this obvious, we should automatically filter them out with our own common sense. But the challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI? At the root of this is that implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say ‘remove the pot from the heat’, but rarely says ‘remove the pot from heat and don’t stick your fingers in the flames’. We’re supposed to know that already. Because it is ‘obvious’, and processed quickly, unconsciously and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said. And because we don’t tend to state the obvious, it can be challenging for an AI to learn it. It learns to take the pot off of the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is obviously a known problem, and several strategies are employed to help address it. These include manually adding crafted examples and direct human input into AI’s training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated into, or highlighted in, AI training, the risk of transferring specific human biases to that AI increases. It also creates the potential for competing AIs with different ‘viewpoints’, depending upon differences in both human input and the choices around what data-sets are scraped. There is a ‘nature’ component to the development of AI capability, but also a nurture influence. This is of course analogous to the influence that parents, teachers and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks. Parents, peers and teachers provide generational variety, and the gradual and layered process that builds the human implicit decision framework helps us to evolve a supporting network of contextual insight. It’s obviously imperfect, and the current culture wars are testament to some profound differences in end result. But to a large extent, we evolve similar, if not identical, common sense frameworks. With AI, the narrower group contributing to curated ‘education’ increases the risk of both intentional and unintentional bias, and of ‘divergent intelligence’.

What Can We Do? The most important thing is to be skeptical about AI output. Just because it sounds plausible, don’t assume it is. Just as we’d not take the first answer on a Google search as absolute truth, don’t do the same with AI. Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references). And of course, the more important the output is to us, the more important it is to check it. As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible, or fits our theory or worldview. But always challenge the illusion of omnipotence that AI creates. It’s probably correct, but especially if it’s providing an important or surprising insight, double-check it.

The Sci-Fi Monster! The concept of a childish superintelligence has been explored by more than one science fiction writer. But in many ways that is what we are dealing with in the case of AI. Its informational ‘IQ’ is greater than its contextual or common sense ‘IQ’, making it a different type of intelligence to those we are used to. And because so much of the human input side is proprietary and complex, it’s difficult to determine whether bias or misinformation is included in its output, and if so, how much. I’m sure these are solvable challenges. But some bias is probably unavoidable the moment any human intervention or selection invades the choice of training materials or their interpretation. And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that a narrowing of sources will result in different AIs with different ‘experiences’, and hence potentially different answers to questions.

AI is an incredible gift, but like the three wishes in Aladdin’s lamp, use it wisely and carefully. A little bit of skepticism and some human validation is a good idea. Something that can pass the bar but lacks common sense is powerful (it could even get elected), but don’t automatically trust everything it says!

Image credits: Pexels


Is AI Saving Corporate Innovation or Killing It?

GUEST POST from Robyn Bolton

AI is killing Corporate Innovation.

Last Friday, the brilliant minds of Scott Kirsner, Rita McGrath, and Alex Osterwalder (plus a few guest stars like me, no big deal) gathered to debate the truth of this statement.

Honestly, it was one of the smartest and most thoughtful debates on AI that I’ve heard (biased but right, as my husband would say), and you should definitely listen to the whole thing.

But if you don’t have time for the deep dive over your morning coffee, then here are the highlights (in my humble opinion).

Why this debate is important

Every quarter, InnoLead fields a survey to understand the issues and challenges facing corporate innovators.  The results from their Q2 survey and anecdotal follow-on conversations were eye-opening:

  • Resources are shifting from Innovation to AI: 61.5% of companies are increasing the resources allocated to AI, while 63.9% of companies are maintaining or decreasing their innovation investments
  • IT is more likely to own AI than innovation: 61.5% of companies put IT in charge of exploring potential AI use cases, compared to 53.9% of Innovation departments (percentages sum to more than 100% because multiple departments may have responsibility)
  • Innovation departments are becoming AI departments.  In fact, some former VPs and Directors of Innovation have been retitled to VPs or Directors of AI

So when Scott asked if AI was killing Corporate Innovation, the data said YES.

The people said NO.

What’s killing corporate innovation isn’t technology.  It’s leadership.

Alex Osterwalder didn’t pull his punches and delivered a truth bomb right at the start. Like all the innovation tools and technologies that came before, the impact of AI on innovation isn’t about the technology itself—it’s about the leaders driving it.

If executives take the time to understand AI as a tool that enables successful outcomes and accelerates the accomplishment of key strategies, then there is no reason for it to threaten, let alone supplant, innovation. 

But if they treat it like a shiny new toy or a silver bullet to solve all their growth needs, then it’s just “innovation theater” all over again.

AI is an Inflection Point that leaders need to approach strategically

As Rita wrote in her book Seeing Around Corners, an inflection point has a 10x impact on business, for example, 10x cheaper, 10x faster, or 10x easier.  The emergence and large-scale adoption of AI is, without doubt, an inflection point for business.

Just like the internet and Netscape shook things up and changed the game, AI has the power to do the same—maybe even more. But, to Osterwalder’s point, leaders need to recognize AI as a strategic inflection point and proceed accordingly. 

Leaders don’t need to have it all figured out yet, but they need a plan, and that’s where we come in.

This inflection point is our time to shine

From what I’ve seen, AI isn’t killing corporate innovation. It’s creating the biggest corporate innovation opportunity in decades.  But it’s up to us, as corporate innovators, to seize the moment.

Unlike our colleagues in the core business, we are comfortable navigating ambiguity and uncertainty.  We have experience creating order from what seems like chaos and using innovation to grow today’s business and create tomorrow’s.

We can do this because we’ve done it before. It’s exactly what we do.

AI is not a problem.  It’s an opportunity.  But only if we make it one.

AI is not the end of corporate innovation — it’s a tool, a powerful one at that.

As corporate innovators, we have the skills and knowledge required to steer businesses through uncertainty and drive meaningful change. So, let’s embrace AI strategically and unlock its full potential.

The path forward may not always be crystal clear, but that’s what makes it exciting. So, let’s seize the moment, navigate the chaos, and embrace AI as the innovation accelerant that it is.

Image Credit: Pixabay


Top 10 Thanksgiving Innovations of All Time

According to OpenAI, ChatGPT, and Tom the Turkey

by Braden Kelley

Call me crazy, but I’ve always been a big believer in fun as a key component to the successful pursuit of innovation.

In the spirit of this belief and to celebrate this year’s Thanksgiving Holiday I thought I would ask OpenAI to make a list of the top ten Thanksgiving innovations of all time because I honestly have no idea what to put on such a list. So, in honor of the holiday, here is what it came up with:

1. Turkey Fryer: The invention of the turkey fryer revolutionized Thanksgiving cooking, providing a faster and delicious alternative to traditional roasting methods.

2. Canned Cranberry Sauce: The introduction of canned cranberry sauce made it easier to serve this beloved Thanksgiving side dish, eliminating the need to cook fresh cranberries.

3. Instant Pot: This versatile kitchen appliance has made Thanksgiving meal preparation much quicker and hassle-free, reducing cooking time for dishes like mashed potatoes and soups.

4. Green Bean Casserole: Created in 1955 by a staff member of Campbell Soup Company, the green bean casserole quickly became a Thanksgiving staple, providing a tasty and easy-to-make side dish.

5. Thanksgiving Parades: Starting with the Macy’s Thanksgiving Day Parade in 1924, Thanksgiving parades have become a cherished tradition, bringing joy and entertainment to families across the country.

6. Stuffing Mixes: Pre-packaged stuffing mixes have made it simpler to prepare the traditional stuffing, saving time and effort while still delivering the traditional flavors.

7. Pumpkin Pie Filling: Canned pumpkin pie filling has made it convenient for home cooks to make the quintessential Thanksgiving dessert, allowing everyone to enjoy a delicious pumpkin pie without the hassle of preparing fresh puree.

8. Thanksgiving Football: Thanksgiving football games have been a long-standing tradition for many families, providing entertainment and friendly competition after the big feast.

9. Online Recipe Resources: The availability of online recipe resources has made it easier than ever to find new and creative recipes for Thanksgiving dishes, expanding our culinary horizons and inspiring unique menu ideas.

10. Pop-Up Turkey Timer: This simple but effective innovation takes the guesswork out of roasting a turkey by indicating when it is done, ensuring a perfectly cooked bird.

Maybe I have been living in a cave, but I had never heard of Instant Pot so I had to Bing it. ChatGPT also suggested “Thanksgiving Themed Decor” which I thought was a bad suggestion, so I asked it for three more options to replace that one and ended up swapping it out for the beloved “Pop-Up Turkey Timer.”

I hope you enjoyed the list, have great holiday festivities (however you choose to celebrate) and finally – I am grateful for all of you!

What is your favorite Thanksgiving innovation that you’ve seen or experienced recently?

SPECIAL BONUS: My publisher is having a Thanksgiving sale that will allow you to get the hardcover or the digital version (eBook) of my latest best-selling book Charting Change for 55% off using code CYB23 only until November 30, 2023!







Framing Your 2024 Strategy

GUEST POST from Geoffrey A. Moore

Fall is in the air, which brings to mind the season’s favorite sport—no, not football, strategic planning! Let’s face it, 2023 has been a tough year for most of us, with few annual plans surviving first contact with an economy that was not so much sluggish as simply hesitant. With the exception of generative AI’s burst onto the scene, most technology sectors have been more or less trudging along, and that begs the question, what do we think we can do in 2024? Time to bring out the strategy frameworks, polish up those crystal balls that have been a bit murky of late, and chart our course forward.

This post will kick off a series of blogs about framing strategy, all organized around a meta-model we call the Hierarchy of Powers:

Geoffrey Moore Strategy Framework

The inspiration for this model came from looking at how investors prioritize their portfolios. The first thing they do is allocate by sector, based primarily on category power, referring both to the growth rate of the category as well as its potential size. Rising tides float all boats, and one of the toughest challenges in business is how to manage a premier franchise when category growth is negative. In conjunction with assessing our current portfolio’s category power, this is also a time to look at adjacent categories, whether as threats or as opportunities, to see if there are any transformative acquisitions that deserve our immediate attention.

Returning to our current set of assets, within each category the next question to answer is, what is our company power within that category? This is largely a factor of market share. The more share a company has of a given category, the more likely the ecosystem of partners that supports the category will focus first on that company’s installed base, adding more value to its offers, and recommend that company’s products first, again because of the added leverage from partner engagement. Marketplaces, in other words, self-organize around category leaders, accelerating the sales and offloading the support costs of the market share leaders.

But what do you do when you don’t have company power? That’s when you turn your attention to market power. Marketplaces destabilize around problematic use cases that the incumbent vendors do not handle well. This creates openings for new entrants, provided they can authentically address the customer’s problems. The key is to focus product management on the whole product (not just what your enterprise supplies, but rather, everything the customer needs to be successful) and to focus your go-to-market engine on the target market segment. This is the playbook that has kept Crossing the Chasm on entrepreneurs’ book lists some thirty years in, but it is a different matter to execute it in a large enterprise where sales and marketing are organized for global coverage, not rifle-shot initiatives. Nonetheless, when properly executed, it is the most reliable play in all of high-tech market development.

If market power is key to taking market share, offer power is key to maintaining it, both in high-growth categories as well as mature ones. Offer power is a function of three disciplines—differentiation to create customer preference, neutralization to catch up to and reduce a competitor’s differentiation, and optimization to eliminate non-value-adding costs. Anything that does not contribute materially to one of these three outcomes is waste.

Finally, execution power is the ability to take advantage of one’s inertial momentum rather than having it take advantage of you. Here the discipline of zone management has proved particularly valuable to enterprises who are seeking to balance investment in their existing lines of business, typically in mature categories, with forays into new categories that promise higher growth.

In upcoming blog posts I am going to dive deeper into each of the five powers outlined above to share specific frameworks that clarify what decisions need to be made during the strategic planning process and what principles can best guide them. In the meantime, there is still one more quarter left in 2023, and we all must do our best to make the most of it.

That’s what I think. What do you think?

Image Credit: Pixabay, Geoffrey A. Moore







Innovation Evolution in the Era of AI

GUEST POST from Stefan Lindegaard

Half a decade ago, I laid out a perspective on the evolution of innovation. Now, I return to these reflections with a sentiment of both awe and unease as I observe the profound impacts of AI on innovation and business at large. The transformation unfolding before us presents a remarkable panorama of opportunities, yet it also carries with it the potential for disruption, hence the mixed feelings.

1. The Reign of R&D (1970-2015): There was a time when the Chief Technology Officer (CTO) held the reins. The focus was almost exclusively on Research and Development (R&D), with the power of the CTO often towering over the innovative impulses of the organization. Technology drove progress, but a tech-exclusive vision could sometimes be a hidden pitfall.

2. Era of Innovation Management (1990-2001): A shift towards understanding innovation as a strategic force began to emerge in the ’90s. The concept of managing innovation, previously only a flicker in the business landscape, began its journey towards being a guiding light. Pioneers like Christensen brought innovation into the educational mainstream, marking a paradigm shift in the mindsets of future business leaders.

3. Business Models & Customer Experience (2001-2008): The millennium ushered in an era where simply possessing superior technology wasn’t a winning card anymore. Process refinement, service quality, and most critically, innovative business models became the new mantra. Firms like Microsoft demonstrated this shift, evolving their strategies to stay competitive in this new game.

4. Ecosystems & Platforms (2008-2018): This phase saw the rise of ecosystems and platforms, representing a shift from isolated competition to interconnected collaboration. The lines that once defined industries began to blur. Companies from emerging markets, particularly China, became global players, and we saw industries morphing and intermingling. Case in point: was it still the automotive industry, or had the mobility industry arrived?

5. Corporate Transformation (2019-2025): With the onslaught of digital technologies, corporations faced the need to transform from within. Technological adoption wasn’t a mere surface-level change anymore; it demanded a thorough, comprehensive rethinking of strategies, structures, and processes. Anything less was simply insufficient to weather the storm of this digital revolution.

6. Comborg Transformation (2025-??): As we gaze into the future, the ‘Comborg’ era comes into view. This era sees organizations fusing human elements and digital capabilities into a harmonious whole. In this stage, the equilibrium between human creativity and AI-driven efficiency will be crucial, an exciting but challenging frontier to explore.

I believe that revisiting this timeline of innovation’s evolution highlights the remarkable journey we’ve undertaken. As we now figure out the role of AI in innovation and business, it’s an exciting but also challenging time. Even though it can be a bit scary, I believe we can create a successful future if we use AI in a responsible and thoughtful way.

Stefan Lindegaard Evolution of Innovation

Image Credit: Stefan Lindegaard, Unsplash
