Tag Archives: AI

Artificial Intelligence is a No-Brainer

Why innovation management needs co-intelligence


GUEST POST from John Bessant

Long fuse, big bang. A great descriptor which Andrew Hargadon uses for the way some major innovations arrive and have impact. For a long time they exist but we hardly notice them; they are confined to limited applications, there are constraints on what the technology can do, and so on. But suddenly, almost as if by magic, they move center stage and seem to have impact everywhere we look.

Which is pretty much the story we now face with the wonderful world of AI. While there is plenty of debate about labels — artificial intelligence, machine learning, different models and approaches — the result is the same. Everywhere we look there is AI — and it’s already having an impact.

More than that: the pace of innovation within the world of AI is breath-taking, even by today's rapid product-cycle standards. We've become used to seeing major shifts in things like mobile phones, with change happening on a cycle measured in months. But breakthrough AI announcements seem to arrive with weekly frequency.

That's also reflected in the extent of use — from the 'early days' (only last year!) of hearing about ChatGPT and other models, we've now reached a situation where estimates suggest that millions of people are experimenting with them. ChatGPT has grown from a handful of users to over 200 million in less than a year; it added its first million users within five days of launch! Similar figures show massive and rapid take-up of competing products like Anthropic's Claude and Google's Gemini. It's pretty clear that there's a high-paced 'arms race' going on, and it's drawing in all the big players.

This rapid rate of adoption is being led by an even faster proliferation on the supply side, with many new players entering the market, especially in niche fields. As with the apps market, there's a huge number of players jumping on the bandwagon, and significant growth in the open-source availability of models. And many models now allow users to create their own custom versions — 'mini-GPTs' and 'co-pilots' — which they can deploy for highly specific needs.

Not surprisingly, estimates suggest that the growth potential in the market for AI technologies is vast: around 200 billion U.S. dollars in 2023, expected to grow to over 1.8 trillion U.S. dollars by 2030.
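(If those estimates hold, the implied compound annual growth rate is (1800/200)^(1/7) − 1 ≈ 0.37, that is, roughly 37% a year over the seven years.)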

Growth in Artificial Intelligence

There's another important aspect to this growth. As Ethan Mollick suggests in his excellent book 'Co-Intelligence', everything that we see AI doing today is the product of a far-from-perfect version of the technology; in a very short time, given the rate of growth so far, we can expect much more power, integration and multi-modality.

The all-singing, all-dancing, do-pretty-much-anything-else version of AI we can imagine isn't far off. Speculation about when AGI — artificial general intelligence — will arrive is still just that, speculation, but the direction of travel is clear.

Not that the impact is seen as entirely positive. While there have been impressive breakthroughs using AI to help understand and innovate in fields as diverse as healthcare, distribution and education, these are matched by growing concern about, for example, privacy and data security, deep-fake abuse and significant employment effects.

With its demonstrable potential for undertaking a wide range of tasks, AI certainly poses a threat to the quality and quantity of a wide range of jobs — and at the limit could eliminate them entirely. And where earlier generations of technological automation impacted simple manual operations or basic tasks, AI has the capacity to undertake many complex operations — often doing so faster and more effectively than humans.

AI models like ChatGPT can now routinely pass difficult exams for law or medical school, they can interpret complex data sets and spot patterns better than their human counterparts, and they can quickly combine and analyze complex data to arrive at decisions which may often be of better quality than those made by even experienced practitioners. Not surprisingly, the policy discussion around this potential impact has proliferated at a similarly fast rate, echoing growing public concern about the darker side of AI.

But is it inevitably going to be a case of replacement, with human beings shunted to the sidelines? No one is sure, and it is still early days. We've had technological revolutions before — think back fifty years to when we first felt the early shock waves of what was to become the 'microelectronics revolution'. Newspaper headlines and media programs with provocative titles like 'Now the chips are down' prompted frenzied discussion and policy planning for a future world automated to the point where most activity would be undertaken by machines, overseen by one man and a dog. The role of the dog being to act as security guard, the role of the man being confined to feeding the dog.

Automation Man and Dog

This didn't materialize; as many commentators pointed out at the time, and as history has shown, there were shifts and job changes, but there was also the compensating creation of new roles and tasks for which new skills were needed. Change, yes — but not always in the negative direction, and with growing potential for improving the content and quality of remaining and new jobs.

So if history is any guide then there are some grounds for optimism. Certainly we should be exploring and anticipating, and in particular trying to match skills and capacity-building to likely future needs.

Not least in the area of innovation management. What impact is AI having — and what might the future hold? It’s certainly implicated in a major shift right across the innovation space in terms of its application. If we take a simple ‘innovation compass’ to map these developments we can find plenty of examples:

Exploring Innovation Space

Innovation in terms of what we offer the world — our products and services. Here AI already has a strong presence in everything from toys, through intelligent and interactive services on our phones, to advanced weapon systems.

And it’s the same story if we look at process innovation — changes in the ways we create and deliver whatever it is we offer. AI is embedded in automated and self-optimizing control systems for a huge range of tasks from mining, through manufacturing and out to service delivery.

Position innovation is another dimension where we innovate in opening up new or under-served markets, and changing the stories we tell to existing ones. AI has been a key enabler here, helping spot emerging trends, providing detailed market analysis and underpinning so many of the platform businesses which effectively handle the connection between multi-sided markets. Think Amazon, Uber, Alibaba or AirBnB and imagine them without the support of AI.

And innovation is possible through rethinking the whole approach to what we do, coming up with new business models. Rethinking the underlying value and how it might be delivered — think Spotify, Netflix and many others replacing the way we consume and enjoy our entertainment. Once again AI steps forward as a key enabler.

AI is already a 360-degree solution looking for problems to attach itself to. Importantly, this isn't just in the commercial world; the power of AI is also being harnessed to enable social innovation in many different ways.

But perhaps the real question is not about AI-enabled innovations but about how AI affects innovators — and the organizations employing them. By now we know that innovation isn't some magical force that strikes blindly in the light-bulb moment. It's a process which can be organized and managed so that we are able to repeat the trick. And after over 100 years of research and documented hard-won experience we know the kind of things we need to put in place — how to manage innovation. It's reached the point where we can codify it into an international standard — ISO 56001 — and use this as a template to check the ways in which we build and operate our innovation management systems.

So how will AI affect this — and, more to the point, how is it already doing so? Let's take our helicopter and look down on where and how AI is playing a role in the key areas of innovation management systems.

Typically the ‘front end’ of innovation involves various kinds of search activity, picking up strong and weak signals about needs and opportunities for change. And this kind of exploration and forecasting is something which AI has already shown itself to be very good at — whether in the search for new protein forms or the generation of ideas for consumer products.

Frank Piller's research team published an excellent piece last year describing their exploration of this aspect of innovation. They looked at the potential which AI offered and tested their predictions out by tasking ChatGPT with a number of prompts based on the needs of a fictitious outdoor activities company. They had it monitoring and picking up on trends, scraping online communities for early warning signals about new consumer themes and, crucially, actually doing idea generation to come up with new product concepts. Their results echo many other studies which suggest that AI is very good at this — in fact, as Mollick reports, it often does the job better than humans.

Of course finding opportunities is only the start of the innovation process; a key next stage is some kind of strategic selection. Out of all the possibilities of what we could do, what are we going to do and why? Limited resources mean we have to make choices — and the evidence is that AI is pretty helpful here too. It can explore and compare alternatives, make better bets and build more viable business models to take emerging value propositions forward. (At least in the test case where it competed against MBA students…!)

Innovation Process John Bessant

And then we are in the world of implementation, the long and winding road to converting our value proposition into something which will actually work and be wanted. Today's agile innovation involves a cycle of testing, trial-and-error learning, gradually pivoting and homing in on what works and building from that. And once again AI is good at this — not least because it's at the heart of how it does what it does. There's a clue in the label — machine learning is all about deploying different learning and improvement strategies. AI can carry out fast experiments and focus in; it can simulate markets and treat many of the influences on adoption as probabilistic variables it can work with.

Of course launching a successful version of a value proposition converted into a viable solution is still only half the innovation journey. To have impact we need to scale — but here again AI is likely to change the game. Much of the scaling journey involves understanding and configuring your solution to match the high variability across populations and accelerate diffusion. We know a lot about what influences this (not least thanks to the extensive work of Everett Rogers), and AI has particular capabilities in making sense of the preferences and predilections of populations through studying big datasets. Its record in persuasion in fields like election campaigning suggests it has the capacity to enhance our ability to influence the innovation adoption decision process.

Scaling also involves complementary assets — the ‘who else?’ and ‘what else?’ which we need to have impact at scale. We need to assemble value networks, ecosystems of co-operating stakeholders — but to do this we need to be able to make connections. Specifically finding potential partners, forming relationships and getting the whole system to perform with emergent properties, where the whole is greater than the sum of the parts.

And here too AI has a growing track record in enabling recombinant innovation: cross-linking, connecting and making sense of patterns, even if we humans can't always see them.

So far, so disturbing — at least if you are a practicing innovation manager looking over your shoulder at the AI competition rapidly catching up. But what about the bigger picture, the idea of developing and executing an innovation strategy? Here our concern is with the long term: managing the process of accumulating competencies and capabilities to create long-term competitiveness in volatile and unpredictable markets.

It involves being able to imagine and explore different options and make decisions based on the best use of resources and the likely fit with a future world. Which is, once again, the kind of thing which AI has shown itself to be good at. It’s moved a long way from playing chess and winning by brute calculating force. Now it can beat world champions at complex games of strategy like Go and win poker tournaments, bluffing with the best of them to sweep the pot.

Artificial Intelligence Poker Player

So what are we left with? In many ways it takes us right back to basics. We’ve survived as a species on the back of our imaginations — we’re not big or fast, or able to fly, but we are able to think. And our creativity has helped us devise and share tools and techniques, to innovate our way out of trouble. Importantly we’ve learned to do this collectively — shared creativity is a key part of the puzzle.

We’ve seen this throughout history; the recent response to the Covid-19 pandemic provides yet another illustration. In the face of crisis we can work together and innovate radically. It’s something we see in the humanitarian innovation world and in many other crisis contexts. Innovation benefits from more minds on the job.

So one way forward is not to wring our hands and say that the game is over and we should step back and let the AI take over. Rather it points towards us finding ways of working with it — as Mollick's book title suggests, learning to treat it as a 'co-intelligence'. Different, certainly, but often in complementary ways. Diversity has always mattered in innovation teams — so maybe by recruiting AI to our team we amplify that effect. There's enough to do in meeting the challenge of managing innovation against a background of uncertainty; it makes sense to take advantage of all the help we can get.

AI may seem to point to a direction in which our role becomes superfluous — the ‘no-brain needed’ option. But we’re also seeing real possibilities for it to become an effective partner in the process.

And subscribe to my (free) newsletter here

You can find my podcast here and my videos here

And if you’d like to learn with me take a look at my online course here

Image credits: Dall-E via Microsoft CoPilot, John Bessant


AI Can Help Attract, Retain and Grow Customer Relationships


GUEST POST from Shep Hyken

How do you know what your customers want if they don’t tell you? It’s more than sending surveys and interpreting data. Joe Tyrrell is the CEO of Medallia, a company that helps its customers tailor experiences through “intelligent personalization” and automation. I had a chance to interview him on Amazing Business Radio and he shared how smart companies are using AI to build and retain customer relationships. Below are some of his comments followed by my commentary:

  • The generative AI momentum is so widespread that 85% of executives say the technology will be interacting directly with customers in the next two years. AI has been around for longer than most people realize. When a customer is on a website that makes suggestions, when they interact with a chatbot or get the best answers to frequently asked questions, they are interacting with AI-infused technology, whether they know it or not.
  • While most executives want to use AI, they don’t know how they want to use it, the value it will bring and the problems it will solve. In other words, they know they want to use it, but don’t know how (yet). Tyrrell says, “Most organizations don’t know how they are going to use AI responsibly and ethically, and how they will use it in a way that doesn’t introduce unintended consequences, and even worse, unintended bias.” There needs to be quality control and oversight to ensure that AI is meeting the goals and intentions of the company or brand.
  • Generative AI is different than traditional AI. According to Tyrrell, the nature of generative AI is to, “Give me something in real time while I’m interacting with it.” In other words, it’s not just finding answers. It’s communicating with me, almost like human-to-human. When you ask it to clarify a point, it knows exactly how to respond. This is quite different from a traditional search bar on a website—or even a Google search.
  • AI’s capability to personalize the customer experience will be the focus of the next two years. Based on the comment about how AI technology currently interacts with customers, I asked Tyrrell to be more specific about how AI will be used. His answer was focused on personalization. The data we extract from multiple sources will allow for personalization like never before. According to Tyrrell, 82% of consumers say a personalized experience will influence which brand they end up purchasing from in at least half of all shopping situations. The question isn’t whether a company should personalize the customer experience. It is what happens if they don’t.
  • Personalization isn’t about being seen as a consumer, but as a person. That’s the goal of personalization. Medallia’s North Star, which guides all its decisions and investments, is its mission to personalize every customer experience. What makes this a challenge is the word every. If customers experience this one time but the next time the brand acts as if they don’t recognize them, all the work from the previous visit along with the credibility built with the customer is eroded.
  • The next frontier of AI is interpreting social feedback. Tyrrell is excited about Medallia’s future focus. “Surveys may validate information,” says Tyrrell, “but it is often what’s not said that can be just as important, if not even more so.” Tyrrell talked about Medallia’s capability to look everywhere beyond the places customers traditionally express themselves — surveys, social media comments, reviews and ratings. There is behavioral feedback, which Tyrrell refers to as social feedback, not to be confused with social media feedback. Technology can track customer behavior on a website. What pages do they spend the most time on? How do they use the mouse to navigate the page? Tyrrell says, “Wherever people are expressing themselves, we capture the information, aggregate it, translate it, interpret it, correlate it and then deliver insights back to our customers.” This isn’t about communicating with customers about customer support issues. It’s mining data to understand customers and make products and experiences better.

Tyrrell’s insights emphasize the opportunities for AI to support the relationship a company or brand has with its customers. The future of customer engagement will be about an experience that creates customer connection. Even though technology is driving the experience, customers appreciate being known and recognized when they return. Tyrrell and I joked about the theme song from the TV sitcom Cheers, which debuted in 1982 and lasted 11 seasons. But it really isn’t a joke at all. It’s what customers want, and it’s so simple. As the song title suggests, customers want to go to a place Where Everybody Knows Your Name.

Image Credits: Unsplash







Time is a Flat Circle

Jamie Dimon’s Comments on AI Just Proved It


GUEST POST from Robyn Bolton


“Time is a flat circle.  Everything we have done or will do we will do over and over and over and over again – forever.” — Rusty Cohle, played by Matthew McConaughey, in True Detective

For the whole of human existence, we have created new things with no idea if, when, or how they will affect humanity, society, or business.  New things can be a distraction, sucking up time and money and offering nothing in return.  Or they can be a bridge to a better future.

As a leader, it’s your job to figure out which things are a bridge (i.e., innovation) and which things suck (i.e., shiny objects).

Innovation is a flat circle

The concept of eternal recurrence, that time repeats itself in an infinite loop, was first taught by Pythagoras (of Pythagorean theorem fame) in the 6th century BC. It re-emerged (thereby proving its own truth) in Friedrich Nietzsche’s writings in the 19th century, then again in 2014’s first season of True Detective, and then again on Monday in Jamie Dimon’s Annual Letter to Shareholders.

Mr. Dimon, the CEO and Chairman of JPMorgan Chase & Co, first mentioned AI in his 2017 Letter to Shareholders.  So, it wasn’t the mention of AI that was newsworthy. It was how it was mentioned.  Before mentioning geopolitical risks, regulatory issues, or the recent acquisition of First Republic, Mr. Dimon spends nine paragraphs talking about AI, its impact on banking, and how JPMorgan Chase is responding.

Here’s a screenshot of the first two paragraphs:

JP Morgan Annual Letter 2017

He’s right. We don’t know “the full effect or the precise rate at which AI will change our business—or how it will affect society at large.” We were similarly clueless in 1436 (when the printing press was invented), 1712 (when the first commercially successful steam engine was invented), 1882 (when electricity was first commercially distributed), and 1993 (when the World Wide Web was released to the public).

Innovation, it seems, is also a flat circle.

Our response doesn’t have to be.

Historically, people responded to innovation in one of two ways: panic because it’s a sign of the apocalypse or rejoice because it will be our salvation. And those reactions aren’t confined to just “transformational” innovations.  In 2015, a visiting professor at King’s College London declared that the humble eraser (1770) was “an instrument of the devil” because it creates “a culture of shame about error.  It’s a way of lying to the world, which says, ‘I didn’t make a mistake.  I got it right the first time.’”

Neither reaction is true. Fortunately, as time passes, more people recognize that the truth is somewhere between the apocalypse and salvation and that we can influence what that “between” place is through intentional experimentation and learning.

JPMorgan started experimenting with AI over a decade ago, well before most of its competitors.  As a result, they “now have over 400 use cases in production in areas such as marketing, fraud, and risk” that are producing quantifiable financial value for the company. 

It’s not just JPMorgan.  Organizations as varied as John Deere, BMW, Amazon, the US Department of Energy, Vanguard, and Johns Hopkins Hospital have been experimenting with AI for years, trying to understand if and how it could improve their operations and enable them to serve customers better.  Some experiments worked.  Some didn’t.  But every company brave enough to try learned something and, as a result, got smarter and more confident about “the full effect or the precise rate at which AI will change our business.”

You have free will.  Use it to learn.

Cynics believe that time is a flat circle.  Leaders believe it is an ever-ascending spiral, one in which we can learn, evolve, and influence what’s next.  They also have the courage to act on (and invest in) that belief.

What do you believe?  More importantly, what are you doing about it?

Image credit: Pixabay


Top 10 Human-Centered Change & Innovation Articles of May 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are May’s ten most popular innovation posts:

  1. Five Lessons from the Apple Car’s Demise — by Robyn Bolton
  2. Six Causes of Employee Burnout — by David Burkus
  3. Learning About Innovation – From a Skateboard? — by John Bessant
  4. Fighting for Innovation in the Trenches — by Geoffrey A. Moore
  5. A Case Study on High Performance Teams — by Stefan Lindegaard
  6. Growth Comes From What You Don’t Have — by Mike Shipulski
  7. Innovation Friction Risks and Pitfalls — by Howard Tiersky
  8. Difference Between Customer Experience Perception and Reality — by Shep Hyken
  9. How Tribalism Can Kill Innovation — by Greg Satell
  10. Preparing the Next Generation for a Post-Digital Age — by Greg Satell


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.








AI Strategy Should Have Nothing to do with AI


GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that any of the conversation had been recorded and transcribed (understandable, because both calls were about highly sensitive topics).

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models [and] identified eight styles that distinguish a culture and can be measured.”  I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer: Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions.  As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, or company.  Bottom line: the eight cultures are a tool, not a law (and I glossed over a lot of the report).

Image credit: Wikimedia Commons


How I Use AI to Understand Humans

(and Cut Research Time by 80%)


GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research.  Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation.  No matter how good the Large Language Model (LLM) is, it can’t tell you how people’s feelings, aspirations, and motivations drive their decisions.

Then I watched JTBD Untangled’s video with Evan Shore, Walmart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What because of its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done Research?  Yes, please!  That’s precisely what JobLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments.  For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research artifacts.  Another colleague recently sang the praises of theydo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI.  Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them.  If you want to start from scratch, you can learn how to write super-effective prompts here and here.  If you’d rather build on someone else’s work, Brian at JobsLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point.  Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect.    It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
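To make that refine-against-a-known-project loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: call_model() is a hypothetical stand-in for whichever LLM tool or API you actually use, and the template, persona, and "known insights" are invented.

```python
# Hypothetical sketch of prompt refinement against a past project.
# call_model() is a placeholder for your chosen LLM tool; the template,
# persona, and known insights below are invented for illustration.

PROMPT_TEMPLATE = """You are a consumer researcher. For a {company_type},
list the functional, social, and emotional Jobs to be Done of {persona},
one per line."""

# Insights the earlier (human-led) project already validated.
KNOWN_INSIGHTS = {"keep gear dry", "signal expertise to peers", "feel prepared"}

def call_model(prompt: str) -> set[str]:
    """Send the prompt to your LLM of choice; return its insights as a set."""
    raise NotImplementedError("wire this up to the tool you actually use")

def coverage(prompt: str) -> float:
    """Fraction of the previously validated insights the prompt surfaces."""
    found = call_model(prompt)
    return len(found & KNOWN_INSIGHTS) / len(KNOWN_INSIGHTS)

prompt = PROMPT_TEMPLATE.format(
    company_type="outdoor activities company",
    persona="weekend backpackers",
)
# Tweak the template, re-run coverage(prompt), and keep the wording that
# scores highest; the finished template is then reusable on new projects.
```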

Defend your budget

Using AI for customer research will save you time and money, but it is not free. It’s also not just the cost of the subscription or license for your chosen tool(s).  

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently.  Some will love the idea of using AI for customer research, while others will resist it.  Everyone, however, will pounce on any evidence that they’re right.  So be prepared.  Take advantage of free trials to play with tools.  Test tools on friends, family, and colleagues.  Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash


Humans Are Not as Different from AI as We Think


GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But, do you know why that is so?

The answer is, because that is exactly what we humans do as well.

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).
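The mechanics are easy to caricature. Below is a toy "teeny weeny language model" in Python, a bigram counter over an invented ten-word corpus (nothing like a real transformer), that shows the core move Moore describes: tally which word follows which, then keep emitting the most probable next word.

```python
from collections import Counter, defaultdict

# Toy "teeny weeny language model": learn next-word probabilities by
# counting, then continue a prompt with the likeliest next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # count how often nxt follows current

def most_likely_next(word):
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

sentence = ["the"]
for _ in range(4):
    nxt = most_likely_next(sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))  # -> "the cat sat on the"
```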

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis would be. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are a, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?

Image Credit: Pixabay







Will Innovation Management Leverage AI in the Future?


GUEST POST from Jesse Nieminen

What role can AI play in innovation management, and how can we unlock its true potential?

Unless you’ve been living under a rock, you’ve probably heard a thing or two about AI in the last year. The launch of ChatGPT has supercharged the hype around AI, and now we’re seeing dramatic progress at a pace unlike anything that’s come before.

For those of us into innovation, it’s an exciting time.

Much has been said about the topic at large so I won’t go over the details here. At HYPE, what we’re most excited about is what AI can do for innovation management specifically. We’ve had AI capabilities for years, and have been looking into the topic at large for quite some time.

Here, I share HYPE’s current thinking and answer some key questions:

  • What can AI do for innovation management?
  • What are some common use cases?
  • How can you operationalize AI’s use in innovation management?

The Current State of Innovation Management

Before we answer those questions, let’s review how most organizations carry out innovation management.

We’re all familiar with the innovation funnel.

Hype Innovation Image 1

To oversimplify, you gather ideas, review them, and then select the best ones to move forward to the pilot stage and eventual implementation. After each phase, poor ideas get weeded out.
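As a toy illustration of that tiered filtering (the stage names, scores, and thresholds here are invented, not taken from the article), the funnel logic might be sketched like this:

```python
# Toy phase-gate funnel: ideas advance only past gates whose bar they clear.
# Stage names, scores, and thresholds are invented for illustration.

ideas = {"idea A": 0.9, "idea B": 0.55, "idea C": 0.3}   # judged potential
gates = [("screening", 0.4), ("business case", 0.6), ("pilot", 0.8)]

alive = dict(ideas)
for stage, bar in gates:
    alive = {name: score for name, score in alive.items() if score >= bar}
    print(f"after {stage}: {sorted(alive)}")
# Investment is tiered: each successive gate spends more, but only on the
# ideas still alive, so weak ideas are weeded out cheaply and early.
```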

It’s systematic, it’s conceptually simple, and investment is tiered so that you don’t spend too much time or money before an idea has shown its potential. What’s not to love?

Well, there are a few key challenges: the process is slow and linear, and it is usually biased by the evaluation criteria selected for the gates or decision points (if you use a Phase-Gate model).

Each of these challenges can be mitigated with smart adaptations of the process, but the funnel has another fundamental limitation: It’s generally built for a world where innovation requires significant capital expenditures and vast amounts of proprietary information.

But, regardless of your industry, that just isn’t the case anymore. Now most information is freely available, and technology has come a long way, in many cases because of AI. For example, pharmaceutical companies use AI to accelerate drug discovery while infrastructure and manufacturing companies use advanced simulation techniques, digital twins (virtual replicas of physical objects or systems), and rapid prototyping.

It’s now possible to innovate, test, and validate ideas faster than ever with minimal investment. With the right guidance, these tasks don’t have to be limited to innovation experts like you anymore. That can be an intimidating thought, but it’s also an empowering one. Soon, thanks to AI, you’ll be able to scale your expertise and make an impact significantly bigger than before.

For more than 20 years, we’ve been helping our customers succeed in this era of systematic innovation management. Today, countless organizations manage trends at scale, collect insights and ideas from a wide and diverse audience, and then manage that funnel highly effectively.

Yet, despite, or maybe because of this, more and more seemingly well-run organizations are struggling to keep up and adapt to the future.

What gives?

Some say that innovation is decelerating. Research reveals that as technology gets more complex, coming up with the next big scientific breakthrough is likely to require more and more investment, which makes intuitive sense. This type of research is actually about invention, not innovation per se.

Innovation is using those inventions to drive measurable value. The economic impact of these inventions has always come and gone in waves, as highlighted in ARK Investment’s research, illustrated below.

Throughout history, significant inventions have created platforms that enable dramatic progress through their practical application or, in other words, through innovation. ARK firmly believes that we’re on the precipice of another such wave and one that is likely to be bigger than any that has come before. AI is probably the most important of these platforms, but it’s not the only one.

Mckinsey Hype Innovation Image 2

Whether that will be the case remains to be seen, but regardless, the economic impact of innovation typically derives from the creative combination of existing “building blocks,” be they technologies, processes, or experiences.

Famously, the more such building blocks, or types of innovation, you combine to solve a specific pain point or challenge holistically, the more successful you’re likely to be. Thanks to more and more information and technology becoming free or highly affordable worldwide, change has accelerated rapidly in most industries.

That’s why, despite the evident deceleration of scientific progress in many industries, companies have to fight harder to stay relevant and change dramatically more quickly, as evidenced by the average tenure of S&P 500 companies dropping like a stone.

Hype Innovation 3

In most industries, sustainable competitive advantages are a thing of the past. Now, it’s all about strategically planning for, as well as adapting to, change. This is what’s known as transient advantage, and it’s already a reality for most organizations.

How Innovation Management Needs to Change

In this landscape, the traditional innovation funnel isn’t cutting it anymore. Organizations can’t just focus on research and then turn that into new products and expect to do well.

To be clear, that doesn’t mean that the funnel no longer works, just that managing it well is no longer enough. It’s now table stakes. With that approach, innovating better than the next company is getting harder and more expensive.

When we look at our most successful customers and the most successful companies in the world in general, they have several things in common:

  • They have significantly faster cycle times than the competition at every step of the innovation process, i.e., they simply move faster.
  • For them, innovation is not a team, department, or process. It’s an activity the entire organization undertakes.
  • As such, they innovate everything, not just their products but also processes, experiences, business models, and more.

When you put these together, the pace of innovation leaves the competition in the dust.

How can you then maximize the pace of innovation at your organization? In a nutshell, it comes down to having:

  • A well-structured and streamlined set of processes for different kinds of innovation;
  • Appropriate tools, techniques, capabilities, and structures to support each of these processes;
  • A strategy and culture that values innovation;
  • A network of partners to accelerate learning and progress.

With these components in place, you’ll empower most people in the organization to deliver innovation, not just come up with ideas, and that makes all the difference in the world.

Hype Innovation 4

What Role Does AI Play in Innovation Management?

In the last couple of years, we’ve seen massive advancements not just in the quality of AI models and tools, but especially in the affordability and ease of their application. What used to be feasible for just a handful of the biggest and wealthiest companies out there is now quickly commoditizing. Generative AI, which has attracted most of the buzz, is merely the tip of the iceberg.

In just a few years, AI is likely to play a transformative role in the products and services most organizations provide.

For innovation managers too, AI will have dramatic and widely applicable benefits by speeding up and improving the way you work and innovate.

Let’s dive a bit deeper.

AI as an Accelerator

At HYPE, because we believe that using AI as a tool is something every organization that wants to innovate needs to do, we’ve been focusing on applying it to innovation management for some time. For example, we’ve identified and built a plethora of use cases where AI can be helpful, and it’s not just about generative AI. Other types of models and approaches still have their place as well.

There are too many use cases to cover here in detail, but we generally view AI’s use as falling into three buckets:

  • Augmenting: AI can augment human creativity, uncover new perspectives, kickstart work, help alleviate some of the inevitable biases, and make top-notch coaching available for everyone.
  • Assisting: AI-powered tools can assist innovators in research and ideation, summarize large amounts of information quickly, provide feedback, and help find, analyze, and make the most of vast quantities of structured or unstructured information.
  • Automating: AI can automate both routine and challenging work, to improve the speed and efficiency at which you can operate and save time so that you can focus on the value-added tasks at the heart of innovation.

In a nutshell, with the right AI tools, you can move faster, make smarter decisions, and operate more efficiently across virtually every part of the innovation management process.

While effective on their own, it’s only by putting the “three As” together and operationalizing them across the organization that you can unlock the full power of AI and take your innovation work to the next level.


Putting AI Into Practice

So, what’s the key to success with AI?

At HYPE, we think the key is understanding that AI is not just one “big thing.” It’s a versatile and powerful enabling technology that has become considerably cheaper and will likely continue on the same trajectory.

There are significant opportunities for using AI to deliver more value for customers, but organizations need the right data and talent to maximize the opportunities and to enable AI to support how their business operates, not least in the field of innovation management. It’s essential to find the right ways to apply AI to specific business needs; just asking everybody to use ChatGPT won’t cut it.

The anecdotal evidence we’re hearing highlights that learning to use a plethora of different AI tools and operationalizing these across an organization can often become challenging, time-consuming, and expensive.

To overcome these issues, there’s a real benefit in finding ways to operationalize AI as a part of the tools and processes you already use. And that’s where we believe The HYPE Suite with its built-in AI capabilities can make a big difference for our customers.

Final Thoughts

At the start of this article, we asked “Is AI the future of innovation management?”

In short, we think the answer is yes. But the question misses the real point.

Almost everyone is already using AI in at least some way, and over time, it will be everywhere. As an enabling technology, it’s a bit like computers or the Internet: Sure, you can innovate without them, but if everyone else uses them and you don’t, you’ll be slower and end up with a worse outcome.

The real question is how well you use and operationalize AI to support your innovation ambitions, whatever they may be. Using AI in combination with the right tools and processes, you can innovate better and faster than the competition.

At HYPE, we have many AI features in our development roadmap that will complement the software solutions we already have in place. Please reach out to us if you’d like to get an early sneak peek into what’s coming up!

Originally published at https://www.hypeinnovation.com.

Image credits: Pixabay, Hype, McKinsey


Top 10 Human-Centered Change & Innovation Articles of January 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are January’s ten most popular innovation posts:

  1. Top 40 Innovation Bloggers of 2023 — Curated by Braden Kelley
  2. Creating Organizational Agility — by Howard Tiersky
  3. 5 Simple Steps to Team Alignment — by David Burkus
  4. 5 Essential Customer Experience Tools to Master — by Braden Kelley
  5. Four Ways To Empower Change In Your Organization — by Greg Satell
  6. AI as an Innovation Tool – How to Work with a Deeply Flawed Genius! — by Pete Foley
  7. Top 100 Innovation and Transformation Articles of 2023 — Curated by Braden Kelley
  8. 80% of Psychological Safety Has Nothing to Do With Psychology — by Robyn Bolton
  9. How will you allocate your time differently in 2024? — by Mike Shipulski
  10. Leadership Development Fundamentals – Work Products — by Mike Shipulski


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.








AI as an Innovation Tool – How to Work with a Deeply Flawed Genius!


GUEST POST from Pete Foley

For those of us working in the innovation and change field, it is hard to overstate the value and importance of AI.  It opens doors that were, for me at least, barely imaginable 10 years ago.  And for someone who views analogy, crossing expertise boundaries, and the reapplication of ideas across domains as central to innovation, it’s hard to imagine a more useful tool.

But it is still a tool.  And as with any tool, learning its limitations, and how to use it skillfully, is key.  I make the analogy to an automobile.  We don’t need to know everything about how it works, and we certainly don’t need to understand how to build it.  But we do need to know what it can, and cannot, do.  We also need to learn how to drive it, and the better our driving skills, the more we get out of it.

AI, the Idiot Savant? An issue with current AI is that it is both intelligent and stupid at the same time (see Yejin Choi’s excellent TED talk). It has phenomenal ‘data intelligence’, but can also fail on even simple logic puzzles. Part of the problem is that AI lacks ‘common sense’, the implicit framework that filters a great deal of human decision making and behavior. Choi calls this the ‘dark matter’ of decision-making. I think of it as the framework of knowledge, morality, biases and common sense that we accumulate over time, and that is foundational to the unconscious ‘System 1’ elements that influence many, if not most, of our decisions. But whatever we call it, it’s an important, but sometimes invisible and unintuitive, part of human information processing that can be missing from AI output.

Of course, AI is far from unique in having limitations in the quality of its output. Any information source we use is subject to errors. We all know not to believe everything we read on the internet; that makes Google searches useful, but also potentially flawed. Even consulting human experts has pitfalls. Not all experts agree, and even the most eminent expert can be subject to biases, or just good old-fashioned human error. But most of us have learned to be appropriately skeptical of these sources of information. We routinely cross-reference, challenge data, seek second opinions, and do not simply ‘parrot’ the data they provide.

But increasingly with AI, I’ve seen a tendency to treat its output with perhaps too much respect. The reasons for this are multi-faceted, but very human. Part of it may be the tendency of generative AI to deliver answers in an apparently definitive form. Part may simply be awe of its capabilities, and a tendency to confuse breadth of knowledge with accuracy. Another element is the ability it gives us to quickly penetrate areas where we have little domain knowledge or background. As I’ve already mentioned, this is fantastic for those of us who value exploring new domains and analogies. But it comes with inherent challenges, as the further we step away from our own expertise, the easier it is to miss even basic mistakes.

As for AI’s limitations, Choi provides some sobering examples. It can pass a bar exam, but fail abysmally on even simple logic problems. For example, it has suggested that a bridge built over broken glass and nails is likely to cause punctures! It has even suggested increasing the efficiency of paperclip manufacture by using humans as raw materials. These negative examples are somewhat cherry-picked to make a point, but they do show how poor in common sense some AI answers can be. When the errors are this obvious, we should automatically filter them out with our own common sense. The challenge comes when we are dealing in areas where we have little experience, and AI delivers superficially plausible but flawed answers.

Why is this a weak spot for AI? At the root of it is that implicit knowledge is rarely articulated in the data AI scrapes. For example, a recipe will often say ‘remove the pot from the heat’, but rarely ‘remove the pot from the heat and don’t stick your fingers in the flames’. We’re supposed to know that already. Because it is ‘obvious’, and processed quickly, unconsciously and often automatically by our brains, it is rarely explicitly articulated. AI, however, cannot learn what is not said. Because we don’t tend to state the obvious, an AI struggles to learn it. It learns to take the pot off the heat, but not the more obvious insight, which is to avoid getting burned when we do so.

This is a known problem, and several strategies are employed to help address it, including manually adding crafted examples and direct human input into AI’s training. But this level of human curation creates other potential risks. The minute humans start deciding what content should and should not be incorporated into, or highlighted within, AI training, the risk of transferring specific human biases to that AI increases. It also creates the potential for competing AIs with different ‘viewpoints’, depending upon differences in both human input and the choices around which datasets are scraped. There is a ‘nature’ component to the development of AI capability, but also a ‘nurture’ influence. This is of course analogous to the influence that parents, teachers and peers have on the values and biases of children as they develop their own frameworks.

But most humans are exposed to at least some diversity in the influences that shape their decision frameworks. Parents, peers and teachers provide generational variety, and the gradual, layered process that builds the human implicit decision framework helps us to evolve a supporting network of contextual insight. It’s obviously imperfect, and the current culture wars are testament to some profound differences in end result. But to a large extent, we evolve similar, if not identical, common-sense frameworks. With AI, the narrower group contributing to its curated ‘education’ increases the risk of both intentional and unintentional bias, and of ‘divergent intelligence’.

What Can We Do? The most important thing is to be skeptical about AI output. Just because it sounds plausible, don’t assume it is correct. Just as we don’t take the first answer on a Google search as absolute truth, we shouldn’t with AI. Ask it for references, and check them (early iterations were known to make up plausible-looking but nonsense references). And of course, the more important the output is to us, the more important it is to check it. As I said at the beginning, it can be tempting to take verbatim output from AI, especially if it sounds plausible, or fits our theory or worldview. But always challenge the illusion of omniscience that AI creates. It’s probably correct, but especially if it’s providing an important or surprising insight, double-check it.
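To make ‘ask it for references, and check them’ concrete, here is a minimal sketch in Python of what an automated first pass might look like. It is an illustration under stated assumptions, not part of the original article: it assumes the openai and requests packages are installed, that an OPENAI_API_KEY environment variable is set, and the model name and prompt are purely illustrative.

```python
# A minimal "trust but verify" sketch, assuming the openai and requests
# packages and an OPENAI_API_KEY in the environment. The model name and
# prompt below are illustrative assumptions, not recommendations.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to answer AND to cite its sources as URLs.
response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": (
            "What are the documented benefits of spaced repetition for "
            "learning? List the URLs of your sources, one per line."
        ),
    }],
)
answer = response.choices[0].message.content
print(answer)

# Naive first-pass check: do the cited URLs even resolve?
# A live URL is no guarantee the source supports the claim;
# a human still needs to read it.
for line in answer.splitlines():
    candidate = line.strip().lstrip("-* ")
    if candidate.startswith("http"):
        try:
            status = requests.head(
                candidate, timeout=5, allow_redirects=True
            ).status_code
            print(f"{candidate} -> HTTP {status}")
        except requests.RequestException as err:
            print(f"{candidate} -> unreachable ({err})")
```

Even a crude check like this catches the fabricated, plausible-looking references mentioned above. The substantive check, whether the source actually says what the model claims, still needs a human reader.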

The Sci-Fi Monster! The concept of a childish superintelligence has been explored by more than one science fiction writer, but in many ways that is what we are dealing with in the case of AI. Its informational ‘IQ’ is greater than its contextual or common-sense ‘IQ’, making it a different type of intelligence from those we are used to. And because so much of the human input side is proprietary and complex, it’s difficult to determine whether bias or misinformation is included in its output, and if so, how much. I’m sure these are solvable challenges. But some bias is probably unavoidable the moment any human intervention or selection enters into the choice of training materials or their interpretation. And as we see an increase in copyright lawsuits and settlements associated with AI, it becomes increasingly plausible that a narrowing of sources will result in different AIs with different ‘experiences’, and hence potentially different answers to the same questions.

AI is an incredible gift, but like the three wishes in Aladdin’s lamp, use it wisely and carefully. A little skepticism and some human validation are a good idea. Something that can pass the bar but lacks common sense is powerful; it could even get elected. But don’t automatically trust everything it says!

Image credits: Pexels
