Tag Archives: AI

The State of Customer Experience and the Contact Center


GUEST POST from Shep Hyken

Oh, what a difference a year makes. A few months ago I traveled to Las Vegas to attend the Customer Contact Week (CCW), the largest conference and trade show in the contact center industry. For the past several years, the big discussion has centered on artificial intelligence (AI), and that continues, but Customer Experience (CX) is also moving into the spotlight. AI and natural language models can give customers an almost human-like experience when they have a question or complaint. However, no surprise, some companies do it better than others.

First, all the hype around AI is not new. AI has been in our lives for decades, just at a much simpler level. How do you think Outlook and other email providers recognize that an email is spam and belongs in the junk/spam folder? Of course, it’s not 100% perfect, and neither are today’s best AI programs.

Many of us use Siri and Alexa. That’s AI. And as simple as that is, it’s obviously more sophisticated when you apply it to customer support and CX.

Let’s go back 10 years to when I attended the IBM Watson conference in Las Vegas. The big hype then was around AI. There were some incredible cases of AI changing customer service, sales and marketing, not to mention automated processes. One of the demonstrations during the general session showcased AI’s stunning capability. Here’s what I saw:

A customer called the contact center. While the customer service agent listened to the customer, the computer (fueled by AI) listened to the conversation and fed the agent answers without the agent typing the questions. In addition, the computer informed the agent how long the customer had been doing business with the company, how often they made purchases, what products they had bought and more. The computer also compared this customer to others who had asked the same questions and suggested the agent answer those questions as well. Even though the customer didn’t yet know to ask them, they would surely call back to do so at some point.

That demonstration was a preview of what we have today. One big difference is that implementing that type of solution back then could have cost hundreds of thousands of dollars, if not more than a million. Today, that technology is affordable to almost any company, costing a fraction of what it cost back then (as in just a few thousand dollars).

Voice Technology Gets Better

Less than two years ago, ChatGPT was introduced to the world. Similar technologies have been developed. The capability continues to improve at an incredibly rapid pace. The response from an AI-fueled chatbot is lightning fast. Now, the technology is moving to voice. Rather than type a question for the chatbot, you talk, and it responds in a human-like voice. While voice technology has existed for years, it’s never been this good. Google introduced voice technology that seemed almost human-like. The operative word here is almost. As good as it was, people could still sense they weren’t talking to a human. Today, the best systems are human-like, not almost human-like. Think Alexa and Siri on steroids.

Foreign Accents Are Disappearing

We’ve all experienced calling customer support, and an offshore customer service agent with a heavy accent answers the call. Sometimes, it’s nearly impossible to understand the agent. New technologies are neutralizing accents. A year ago, the software sounded a little “digital.” Today, it sounds almost perfect.

Why Customers Struggle with AI and Other Self-Service Solutions

As far as these technologies have come, customers still struggle to accept them. Our customer service research (sponsored by RingCentral) found that 63% of customers are frustrated by self-service options, such as ChatGPT and similar technologies. Furthermore, 56% of customers admit to being scared of these technologies. Even though 32% of the customers surveyed said they had successfully resolved a customer service issue using AI or ChatGPT-type technologies, it’s not their top preference, as 70% still choose the phone as their first level of support. Inconsistency is part of the problem. Some companies still use old technology. The result is that the customer experience varies from company to company. In other words, customers don’t know whether the next AI solution they encounter will be good or not. Inconsistency destroys trust and confidence.

Companies Are Investing in Creating a Better CX

I’ve never been more excited about customer service, CX and the contact center. The main reason is that almost everything about this conference was focused on creating a better experience for the customer. The above examples are just the tip of the iceberg. Companies and brands know what customers want and expect. They know the only way to keep customers is to give them a product that works with an experience they can count on. Price is no longer a barrier as the cost of some of these technologies has dropped to a level that even small companies can afford.

Customer Service Goes Beyond Technology: We Still Need People!

This article focused on the digital experience rather than the traditional human experience. But to nail it for customers, a company can’t invest in just tech. It must also invest in its employees. Even the best technology doesn’t always get the customer what they need, which means the customer will be transferred to a live agent. That agent must be properly trained to deliver the experience that gets customers to say, “I’ll be back.”

Image Credits: Pexels, Shep Hyken

This article originally appeared on Forbes.com


Top 10 Human-Centered Change & Innovation Articles of November 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are November’s ten most popular innovation posts:

  1. A Shared Language for Radical Change — by Greg Satell
  2. Leadership Best Quacktices from Oregon’s Dan Lanning — by Braden Kelley
  3. Navigating Uncertainty Requires a Map — by John Bessant
  4. The Most Successful Innovation Approach is … — by Howard Tiersky
  5. Don’t Listen to These Three Change Consultant Recommendations — by Greg Satell
  6. What We Can Learn from MrBeast’s Onboarding — by Robyn Bolton
  7. Does Diversity Increase Team Performance? — by David Burkus
  8. Customer Experience Audit 101 — by Braden Kelley and Art Inteligencia
  9. Daily Practices of Great Managers — by David Burkus
  10. An Innovation Leadership Fable – Wisdom from the Waters — by Robyn Bolton

BONUS – Here are five more strong articles published in October that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

SPECIAL BONUS: While supplies last, you can get the hardcover version of my first bestselling book Stoking Your Innovation Bonfire for 51% OFF until Amazon runs out of stock or changes the price. This deal won’t last long, so grab your copy while it lasts!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:


AI Requires Conversational Intelligence


GUEST POST from Greg Satell

Historically, building technology had been about capabilities and features. Engineers and product designers would come up with new things that they thought people wanted, figure out how to make them work and ship “new and improved” products. The result was often things that were maddeningly difficult to use.

That began to change when Don Norman published his classic, The Design of Everyday Things and introduced concepts like dominant design, affordances and natural mapping into industrial design. The book is largely seen as pioneering the user-centered design movement. Today, UX has become a thriving field.

Yet artificial intelligence poses new challenges. We speak or type into an interface and expect machines to respond appropriately. Often they do not. With the popularity of smart speakers like Amazon Alexa and Google Home, we have a dire need for clear principles for human-AI interactions. A few years ago, two researchers at IBM embarked on a journey to do just that.

The Science Of Conversations

Bob Moore first came across conversation analysis as an undergraduate in the late 1980s, became intensely interested and later earned a PhD based on his work in the field. The central problems are well known to anybody who has ever watched Seinfeld or Curb Your Enthusiasm: our conversations are riddled with complex, unwritten rules that aren’t always obvious.

For example, every conversation has an unstated goal, whether it is just to pass the time, to exchange information or to inspire an emotion. Yet our conversations are also shaped by context. The unwritten rules would be different, for instance, for a conversation between a pair of friends, between a boss and a subordinate, in a courtroom setting or in a doctor’s office.

“What conversation analysis basically tries to reveal are the unwritten rules people follow, bend and break when engaging in conversations,” Moore told me, and he soon found that the tech industry was beginning to ask similar questions. So he took a position at Xerox PARC and then Yahoo! before landing at IBM in 2012.

As the company was working to integrate its Watson system with applications from other industries, he began to work with Raphael Arar, an award-winning visual designer and user experience expert. The two began to see that their interests were strangely intertwined and formed a partnership to design better conversations for machines.

Establishing The Rules Of Engagement

Typically, we use natural language interfaces, both voice and text, like a search box. We announce our intention to seek information by saying, “Hey Siri,” or “Hey Alexa,” followed by a simple query, like “where is the nearest Starbucks.” This can be useful, especially when driving or walking down the street, but it is also fairly limited, especially for more complex tasks.

What’s far more interesting — and potentially far more useful — is being able to use natural language interfaces in conjunction with other interfaces, like a screen. That’s where the marriage of conversation analysis and user experience becomes important, because it will help us build conventions for more complex human-computer interactions.

“We wanted to come up with a clear set of principles for how the various aspects of the interface would relate to each other,” Arar told me. “What happens in the conversation when someone clicks on a button to initiate an action?” What makes this so complex is that different conversations will necessarily have different contexts.

For example, when we search for a restaurant on our phone, should the screen bring up a map, information about pricing, pictures of food, user ratings or some combination? How should the rules change when we are looking for a doctor, a plumber or a travel destination?

Deriving Meaning Through Preserving Context

Another aspect of conversations is that they are highly dependent on context, which can shift and evolve over time. For example, if we ask someone for a restaurant nearby, it would be natural for them to ask a question to narrow down the options, such as “what kind of food are you looking for?” If we answer, “Mexican,” we would expect that person to know we are still interested in restaurants, not, say, the Mexican economy or culture.

Another issue is that when we follow a particular logical chain, we often find some disqualifying factor. For instance, a doctor might be looking for a clinical trial for her patient, find one that looks promising but then see that that particular study is closed. Typically, she would have to retrace her steps to go back to find other options.

“A true conversational interface allows us to preserve context across the multiple turns in the interaction,” Moore says. “If we’re successful, the machine will be able to adapt to the user’s level of competence, serving the expert efficiently but also walking the novice through the system, explaining itself as needed.”

And that’s the true potential of the ability to initiate more natural conversations with computers. Much like working with humans, the better we are able to communicate, the more value we can get out of our relationships.

Making The Interface Disappear

In the early days of web usability, there was a constant tension between user experience and design. Media designers were striving to be original. User experience engineers, on the other hand, were trying to build conventions. Putting a search box in the upper right hand corner of a web page might not be creative, but that’s where users look to find it.

Yet eventually a productive partnership formed and today most websites seem fairly intuitive. We mostly know where things are supposed to be and can navigate things easily. The challenge now is to build that same type of experience for artificial intelligence, so that our relationships with the technology become more natural and more useful.

“Much like we started to do with user experience for conventional websites two decades ago, we want the user interface to disappear,” Arar says. Because when we aren’t wrestling with the interface, constantly having to repeat ourselves or figure out how to rephrase our questions, we can make our interactions much more efficient and productive.

As Moore put it to me, “Much of the value of systems today is locked in the data and, as we add exabytes to that every year, the potential is truly enormous. However, our ability to derive value from that data is limited by the effectiveness of the user interface. The more we can make the interface become intelligent and largely disappear, the more value we will be able to unlock.”

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Five Keys to Personalizing the Customer Experience


GUEST POST from Shep Hyken

Earlier this year, we surveyed more than 1,000 consumers in the U.S. for our 2024 State of Customer Service and Customer Experience (CX) Study. We asked about the importance of a personalized experience. We found that 81% of customers prefer companies that offer a personalized experience, and 70% say a personalized experience in which the employee knows who they are and their history with the company (past purchases, buying patterns, support calls and more) is important. They also want the experience to go beyond people and include the platforms where they prefer to do business.

For a recent episode of Amazing Business Radio, I talked with Elizabeth Tobey, head of Marketing, Digital & AI of NICE, which helps companies apply AI to manage customer experience. The focus of the discussion was personalization. Here are some of the highlights from the interview:

1. Channel of Choice: This is where the modern-day concept of personalization begins. Tobey said, “In a world where people carry computers in their pockets (also known as mobile phones), it’s important to meet your customers when and where they want to be met.” Customers used to have two main choices when communicating with a brand. They could either walk into a store or call on the phone. Today, there are multiple channels and platforms. They can still visit in person or call, but they can also go to a website with self-service options, visit a social channel like Facebook, conduct business using an app, communicate with a brand’s chatbot and more. Customers want convenience, and part of that is being able to connect with a brand the way they want to connect. Some companies and brands do that better than others. The ones that get it right have educated customers on what they should expect, in effect raising the bar for all others who haven’t yet recognized the importance of communication.

2. Communicate on the Customer’s Terms: Tobey shared a frustrating personal experience that illustrated what happens when a brand falls short of how customers like to communicate. Tobey was getting home late from an event. She contacted a company through its support channel on its website and was communicating with a customer support agent via chat. It was late, and she said, “I have to go to sleep,” expecting she could continue the chat the next morning with another agent. But when she went to resume the conversation, she was forced to restart the process. She logged back into the website and repeated the authentication process, which was expected, but what she didn’t expect was having to start over with a new agent, repeating her conversation from the beginning as if she had never contacted the company before. Tobey made a case for technology that allows for asynchronous conversations on the customer’s timeline, eliminating “over-authentication” and the need to start over, which waste time and create an experience marred with friction.

3. Eliminate Friction: How could an interview with an executive at a technology company like NICE not bring up the topic of AI? In the story Tobey told about having to start over with a new agent, going through the authentication process again and repeating her issue, there is a clear message, which is to eliminate unnecessary steps. I shared an experience about visiting a doctor’s office where I had to fill out numerous forms with repeat information: name, address, date of birth, etc. Why should any patient have to fill in the same information more than once? The answer to the question, according to Tobey, is AI. She says, “Take all data that’s coming in from a customer journey and feed it into our AI so that the engine is continuously learning, growing and getting smarter. That means for every customer interaction, the automation and self-service can evolve.” In other words, once AI has the customer’s information, it should be used appropriately to eliminate needless steps (also known as friction) to give the customer the easiest and most convenient experience.

4. It’s Not Just About the Customer: In addition to AI supporting the customer’s self-service and automated experience, any data that is picked up in the customer’s journey can be fed to customer support agents, supervisors and CX leaders, changing how they work and making them more agile with the ability to make decisions faster. Agents get information about the customer, enabling them to provide the personalized experience customers desire. Tobey says, “Agents get a co-pilot or collaborator who listens to every interaction, offers them the best information they need and gives them suggestions.” Supervisors and CX leaders, in turn, get information that helps them make better decisions, faster.

5. Knowledge Management: To wrap up our interview, Tobey said, “AI management is knowledge management. Your AI is only as good as your data and knowledge. If you put garbage in, you might get garbage out.” AI should constantly learn and communicate the best information and data, allowing customers, agents and CX leaders to access the right information quickly and create a better and more efficient experience for all.

This article originally appeared on Forbes.com

Image Credits: Unsplash


Artificial Intelligence is a No-Brainer

Why innovation management needs co-intelligence


GUEST POST from John Bessant

Long fuse, big bang. A great descriptor which Andrew Hargadon uses to describe the way some major innovations arrive and have impact. For a long time they exist, but we hardly notice them: they are confined to limited applications, there are constraints on what the technology can do, and so on. But suddenly, almost as if by magic, they move center stage and seem to have impact everywhere we look.

Which is pretty much the story we now face with the wonderful world of AI. While there is plenty of debate about labels — artificial intelligence, machine learning, different models and approaches — the result is the same. Everywhere we look there is AI — and it’s already having an impact.

More than that, the pace of innovation within the world of AI is breathtaking, even by today’s rapid product-cycle standards. We’ve become used to seeing major shifts in things like mobile phones, with change happening on a cycle measured in months. But AI announcements of a breakthrough nature seem to happen with weekly frequency.

That’s also reflected in the extent of use — from the ‘early days’ (only last year!) of hearing about ChatGPT and other models, we’ve now reached a situation where estimates suggest that millions of people are experimenting with them. ChatGPT has grown from a handful of users to over 200 million in less than a year; it added its first million users within five days of launch! Similar figures show massive and rapid take-up of competing products like Anthropic’s Claude and Google’s Gemini. It’s pretty clear that there’s a high-paced ‘arms race’ going on and it’s drawing in all the big players.

This rapid rate of adoption is being led by an even faster proliferation on the supply side, with many new players entering the market, especially in niche fields. As with the apps market, there’s a huge number of players jumping on the bandwagon, and significant growth in the open-source availability of models. And many models now allow users to create their own custom versions — ‘mini-GPTs’ and ‘co-pilots’ which they can deploy for highly specific needs.

Not surprisingly, estimates suggest that the growth potential in the market for AI technologies is vast, amounting to around 200 billion U.S. dollars in 2023 and expected to grow to over 1.8 trillion U.S. dollars by 2030.

Growth in Artificial Intelligence

There’s another important aspect to this growth. As Ethan Mollick suggests in his excellent book ‘Co-intelligence’, everything that we see AI doing today is the product of a far-from-perfect version of the technology; in a very short time, given the rate of growth so far, we can expect much more power, integration and multi-modality.

The all-singing, all-dancing, do-pretty-much-anything-else version of AI we can imagine isn’t far off. Speculation about when AGI — artificial general intelligence — will arrive is still just that — speculative — but the direction of travel is clear.

Not that the impact is seen as entirely positive. Whilst there have been impressive breakthroughs using AI to help understand and innovate in fields as diverse as healthcare, distribution and education, these are matched by growing concern about, for example, privacy and data security, deep-fake abuse and significant employment effects.

With its demonstrable potential for undertaking a wide range of tasks, AI certainly poses a threat to the quality and quantity of a wide range of jobs — and at the limit could eliminate them entirely. And where earlier generations of technological automation impacted simple manual operations or basic tasks, AI has the capacity to undertake many complex operations — often doing so faster and more effectively than humans.

AI models like ChatGPT can now routinely pass difficult exams for law or medical school; they can interpret complex data sets and spot patterns better than their human counterparts, and they can quickly combine and analyze complex data to arrive at decisions which may often be better quality than those made by even experienced practitioners. Not surprisingly, the policy discussion around this potential impact has proliferated at a similarly fast rate, echoing growing public concern about the darker side of AI.

But is it inevitably going to be a case of replacement, with human beings shunted to the side-lines? No-one is sure and it is still early days. We’ve had technological revolutions before — think back fifty years to when we first felt the early shock waves of what was to become the ‘microelectronics revolution’. Newspaper headlines and media programs with provocative titles like ‘Now the chips are down’ prompted frenzied discussion and policy planning for a future world staffed by robots and automated to the point where most activity would be undertaken by automated systems, overseen by one man and a dog. The role of the dog would be to act as security guard; the role of the man would be confined to feeding the dog.

Automation Man and Dog

This didn’t materialize; as many commentators pointed out at the time, and as history has shown, there were shifts and job changes, but there was also a compensating creation of new roles and tasks for which new skills were needed. Change, yes — but not always in the negative direction, and with growing potential for improving the content and quality of remaining and new jobs.

So if history is any guide, there are some grounds for optimism. Certainly we should be exploring and anticipating, and particularly trying to match skills and capacity building to likely future needs.

Not least in the area of innovation management. What impact is AI having — and what might the future hold? It’s certainly implicated in a major shift right across the innovation space in terms of its application. If we take a simple ‘innovation compass’ to map these developments, we can find plenty of examples:

Exploring Innovation Space

Innovation in terms of what we offer the world — our products and services — is one dimension. Here AI already has a strong presence in everything from toys, through intelligent and interactive services on our phones, to advanced weapon systems.

And it’s the same story if we look at process innovation — changes in the ways we create and deliver whatever it is we offer. AI is embedded in automated and self-optimizing control systems for a huge range of tasks from mining, through manufacturing and out to service delivery.

Position innovation is another dimension where we innovate in opening up new or under-served markets, and changing the stories we tell to existing ones. AI has been a key enabler here, helping spot emerging trends, providing detailed market analysis and underpinning so many of the platform businesses which effectively handle the connection between multi-sided markets. Think Amazon, Uber, Alibaba or AirBnB and imagine them without the support of AI.

And innovation is possible through rethinking the whole approach to what we do, coming up with new business models. Rethinking the underlying value and how it might be delivered — think Spotify, Netflix and many others replacing the way we consume and enjoy our entertainment. Once again AI steps forward as a key enabler.

AI is already a 360-degree solution looking for problems to attach to. Importantly this isn’t just in the commercial world; the power of AI is also being harnessed to enable social innovation in many different ways.

But perhaps the real question is not about AI-enabled innovations but one of how it affects innovators — and the organizations employing them? By now we know that innovation isn’t some magical force that strikes blindly in the light bulb moment. It’s a process which can be organized and managed so that we are able to repeat the trick. And after over 100 years of research and documenting hard-won experience we know the kind of things we need to put in place — how to manage innovation. It’s reached the point where we can codify it into an international standard — ISO 56001 — and use this as a template to check out the ways in which we build and operate our innovation management systems.

So how will AI affect this — and, more to the point, how is it already doing so? Let’s take our helicopter and look down on where and how AI is playing a role in the key areas of innovation management systems.

Typically the ‘front end’ of innovation involves various kinds of search activity, picking up strong and weak signals about needs and opportunities for change. And this kind of exploration and forecasting is something which AI has already shown itself to be very good at — whether in the search for new protein forms or the generation of ideas for consumer products.

Frank Piller’s research team published an excellent piece last year describing their exploration of this aspect of innovation. They looked at the potential which AI offered and tested their predictions out by tasking ChatGPT with a number of prompts based on the needs of a fictitious outdoor activities company. They had it monitoring and picking up on trends, scraping online communities for early warning signals about new consumer themes and, crucially, actually doing idea generation to come up with new product concepts. Their results mimic many other studies which suggest that AI is very good at this — in fact, as Mollick reports, it often does the job better than humans.

Of course finding opportunities is only the start of the innovation process; a key next stage is some kind of strategic selection. Out of all the possibilities of what we could do, what are we going to do and why? Limited resources mean we have to make choices — and the evidence is that AI is pretty helpful here too. It can explore and compare alternatives, make better bets and build more viable business models to take emerging value propositions forward. (At least in the test case where it competed against MBA students…!)

Innovation Process John Bessant

And then we are in the world of implementation, the long and winding road to converting our value proposition into something which will actually work and be wanted. Today’s agile innovation involves a cycle of testing, trial and error learning, gradually pivoting and homing in on what works and building from that. And once again AI is good at this — not least because it’s at the heart of how it does what it does. There’s a clue in the label — machine learning is all about deploying different learning and improvement strategies. AI can carry out fast experiments and focus in; it can simulate markets and bring to bear many of the adoption influences as probabilistic variables which it can work with.

Of course launching a successful version of a value proposition converted to a viable solution is still only half the innovation journey. To have impact we need to scale — but here again AI is likely to change the game. Much of the scaling journey involves understanding and configuring your solution to match the high variability across populations and accelerate diffusion. We know a lot about what influences this (not least thanks to the extensive work of Everett Rogers) and AI has particular capabilities in making sense of the preferences and predilections of populations through studying big datasets. Its record in persuasion in fields like election campaigning suggests it has the capacity to enhance our ability to influence the innovation adoption decision process.

Scaling also involves complementary assets — the ‘who else?’ and ‘what else?’ which we need to have impact at scale. We need to assemble value networks, ecosystems of co-operating stakeholders — but to do this we need to be able to make connections. Specifically, finding potential partners, forming relationships and getting the whole system to perform with emergent properties, where the whole is greater than the sum of the parts.

And here too AI has a growing track record in enabling recombinant innovation, cross-linking, connecting and making sense of patterns, even if we humans can’t always see them.

So far, so disturbing — at least if you are a practicing innovation manager looking over your shoulder at the AI competition rapidly catching up. But what about the bigger picture, the idea of developing and executing an innovation strategy? Here our concern is with the long term: managing the process of accumulating competencies and capabilities to create long-term competitiveness in volatile and unpredictable markets.

It involves being able to imagine and explore different options and make decisions based on the best use of resources and the likely fit with a future world. Which is, once again, the kind of thing which AI has shown itself to be good at. It’s moved a long way from playing chess and winning by brute calculating force. Now it can beat world champions at complex games of strategy like Go and win poker tournaments, bluffing with the best of them to sweep the pot.

Artificial Intelligence Poker Player

So what are we left with? In many ways it takes us right back to basics. We’ve survived as a species on the back of our imaginations — we’re not big or fast, or able to fly, but we are able to think. And our creativity has helped us devise and share tools and techniques, to innovate our way out of trouble. Importantly we’ve learned to do this collectively — shared creativity is a key part of the puzzle.

We’ve seen this throughout history; the recent response to the Covid-19 pandemic provides yet another illustration. In the face of crisis we can work together and innovate radically. It’s something we see in the humanitarian innovation world and in many other crisis contexts. Innovation benefits from more minds on the job.

So one way forward is not to wring our hands and say that the game is over and we should step back and let the AI take over. Rather it points towards us finding ways of working with it — as Mollick’s book title suggests, learning to treat it as a ‘co-intelligence’. Different, certainly, but often in complementary ways. Diversity has always mattered in innovation teams — so maybe by recruiting AI to our team we amplify that effect. There’s enough to do in meeting the challenge of managing innovation against a background of uncertainty; it makes sense to take advantage of all the help we can get.

AI may seem to point to a direction in which our role becomes superfluous — the ‘no-brain needed’ option. But we’re also seeing real possibilities for it to become an effective partner in the process.

And subscribe to my (free) newsletter here

You can find my podcast here and my videos here

And if you’d like to learn with me take a look at my online course here

Image credits: Dall-E via Microsoft CoPilot, John Bessant


AI Can Help Attract, Retain and Grow Customer Relationships


GUEST POST from Shep Hyken

How do you know what your customers want if they don’t tell you? It’s more than sending surveys and interpreting data. Joe Tyrrell is the CEO of Medallia, a company that helps its customers tailor experiences through “intelligent personalization” and automation. I had a chance to interview him on Amazing Business Radio and he shared how smart companies are using AI to build and retain customer relationships. Below are some of his comments followed by my commentary:

  • The generative AI momentum is so widespread that 85% of executives say the technology will be interacting directly with customers in the next two years. AI has been around for longer than most people realize. When a customer is on a website that makes suggestions, when they interact with a chatbot or get the best answers to frequently asked questions, they are interacting with AI-infused technology, whether they know it or not.
  • While most executives want to use AI, they don’t know how they want to use it, the value it will bring and the problems it will solve. In other words, they know they want to use it, but don’t know how (yet). Tyrrell says, “Most organizations don’t know how they are going to use AI responsibly and ethically, and how they will use it in a way that doesn’t introduce unintended consequences, and even worse, unintended bias.” There needs to be quality control and oversight to ensure that AI is meeting the goals and intentions of the company or brand.
  • Generative AI is different than traditional AI. According to Tyrrell, the nature of generative AI is to, “Give me something in real time while I’m interacting with it.” In other words, it’s not just finding answers. It’s communicating with me, almost like human-to-human. When you ask it to clarify a point, it knows exactly how to respond. This is quite different from a traditional search bar on a website—or even a Google search.
  • AI’s capability to personalize the customer experience will be the focus of the next two years. Based on the comment about how AI technology currently interacts with customers, I asked Tyrrell to be more specific about how AI will be used. His answer was focused on personalization. The data we extract from multiple sources will allow for personalization like never before. According to Tyrrell, 82% of consumers say a personalized experience will influence which brand they end up purchasing from in at least half of all shopping situations. The question isn’t whether a company should personalize the customer experience. It is what happens if they don’t.
  • Personalization isn’t about being seen as a consumer, but as a person. That’s the goal of personalization. Medallia’s North Star, which guides all its decisions and investments, is its mission to personalize every customer experience. What makes this a challenge is the word every. If customers experience this one time, but the next time the brand acts as if it doesn’t recognize them, all the work from the previous visit, along with the credibility built with the customer, is eroded.
  • The next frontier of AI is interpreting social feedback. Tyrrell is excited about Medallia’s future focus. “Surveys may validate information,” says Tyrrell, “but it is often what’s not said that can be just as important, if not even more so.” Tyrrell talked about Medallia’s capability to look everywhere, outside of surveys and social media comments, reviews and ratings, where customers traditionally express themselves. There is behavioral feedback, which Tyrrell refers to as social feedback, not to be confused with social media feedback. Technology can track customer behavior on a website. What pages do they spend the most time on? How do they use the mouse to navigate the page? Tyrrell says, “Wherever people are expressing themselves, we capture the information, aggregate it, translate it, interpret it, correlate it and then deliver insights back to our customers.” This isn’t about communicating with customers about customer support issues. It’s mining data to understand customers and make products and experiences better.

Tyrrell’s insights emphasize the opportunities for AI to support the relationship a company or brand has with its customers. The future of customer engagement will be about an experience that creates customer connection. Even though technology is driving the experience, customers appreciate being known and recognized when they return. Tyrrell and I joked about the theme song from the TV sitcom Cheers, which debuted in 1982 and lasted 11 seasons. But it really isn’t a joke at all. It’s what customers want, and it’s so simple. As the song title suggests, customers want to go to a place “Where Everybody Knows Your Name.”

Image Credits: Unsplash







Time is a Flat Circle

Jamie Dimon’s Comments on AI Just Proved It


GUEST POST from Robyn Bolton


“Time is a flat circle. Everything we have done or will do we will do over and over and over and over again – forever.” – Rusty Cohle, played by Matthew McConaughey, in True Detective

For the whole of human existence, we have created new things with no idea if, when, or how they will affect humanity, society, or business.  New things can be a distraction, sucking up time and money and offering nothing in return.  Or they can be a bridge to a better future.

As a leader, it’s your job to figure out which things are a bridge (i.e., innovation) and which things suck (i.e., shiny objects).

Innovation is a flat circle

The concept of eternal recurrence, that time repeats itself in an infinite loop, was first taught by Pythagoras (of Pythagorean theorem fame) in the 6th century BC. It re-emerged (thereby proving its own truth) in Friedrich Nietzsche’s writings in the 19th century, then again in 2014’s first season of True Detective, and then again on Monday in Jamie Dimon’s Annual Letter to Shareholders.

Mr. Dimon, the CEO and Chairman of JPMorgan Chase & Co, first mentioned AI in his 2017 Letter to Shareholders.  So, it wasn’t the mention of AI that was newsworthy. It was how it was mentioned.  Before mentioning geopolitical risks, regulatory issues, or the recent acquisition of First Republic, Mr. Dimon spends nine paragraphs talking about AI, its impact on banking, and how JPMorgan Chase is responding.

Here’s a screenshot of the first two paragraphs:

JP Morgan Annual Letter 2017

He’s right. We don’t know “the full effect or the precise rate at which AI will change our business—or how it will affect society at large.” We were similarly clueless in 1436 (when the printing press was invented), 1712 (when the first commercially successful steam engine was invented), 1882 (when electricity was first commercially distributed), and 1993 (when the World Wide Web was released to the public).

Innovation, it seems, is also a flat circle.

Our response doesn’t have to be.

Historically, people responded to innovation in one of two ways: panic because it’s a sign of the apocalypse or rejoice because it will be our salvation. And those reactions aren’t confined to just “transformational” innovations. In 2015, a visiting professor at King’s College London declared that the humble eraser (1770) was “an instrument of the devil” because it creates “a culture of shame about error. It’s a way of lying to the world, which says, ‘I didn’t make a mistake. I got it right the first time.’”

Neither reaction is true. Fortunately, as time passes, more people recognize that the truth is somewhere between the apocalypse and salvation and that we can influence what that “between” place is through intentional experimentation and learning.

JPMorgan started experimenting with AI over a decade ago, well before most of its competitors.  As a result, they “now have over 400 use cases in production in areas such as marketing, fraud, and risk” that are producing quantifiable financial value for the company. 

It’s not just JPMorgan.  Organizations as varied as John Deere, BMW, Amazon, the US Department of Energy, Vanguard, and Johns Hopkins Hospital have been experimenting with AI for years, trying to understand if and how it could improve their operations and enable them to serve customers better.  Some experiments worked.  Some didn’t.  But every company brave enough to try learned something and, as a result, got smarter and more confident about “the full effect or the precise rate at which AI will change our business.”

You have free will.  Use it to learn.

Cynics believe that time is a flat circle.  Leaders believe it is an ever-ascending spiral, one in which we can learn, evolve, and influence what’s next.  They also have the courage to act on (and invest in) that belief.

What do you believe?  More importantly, what are you doing about it?

Image credit: Pixabay


Top 10 Human-Centered Change & Innovation Articles of May 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are May’s ten most popular innovation posts:

  1. Five Lessons from the Apple Car’s Demise — by Robyn Bolton
  2. Six Causes of Employee Burnout — by David Burkus
  3. Learning About Innovation – From a Skateboard? — by John Bessant
  4. Fighting for Innovation in the Trenches — by Geoffrey A. Moore
  5. A Case Study on High Performance Teams — by Stefan Lindegaard
  6. Growth Comes From What You Don’t Have — by Mike Shipulski
  7. Innovation Friction Risks and Pitfalls — by Howard Tiersky
  8. Difference Between Customer Experience Perception and Reality — by Shep Hyken
  9. How Tribalism Can Kill Innovation — by Greg Satell
  10. Preparing the Next Generation for a Post-Digital Age — by Greg Satell

BONUS – Here are five more strong articles published in April that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:







AI Strategy Should Have Nothing to do with AI


GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models” that “identified eight styles that distinguish a culture and can be measured.” I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer. Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions. As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, and company. Bottom line, the eight cultures are a tool, not a law (and I glossed over a lot of stuff from the report).

Image credit: Wikimedia Commons


How I Use AI to Understand Humans

(and Cut Research Time by 80%)


GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research. Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation. No matter how good the Large Language Model (LLM) is, it can’t tell you how people’s feelings, aspirations, and motivations drive their decisions.

Then I watched JTBD Untangled’s video with Evan Shore, WalMart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What because of its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done Research?  Yes, please!  That’s precisely what JobLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments.  For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research templates. Another colleague recently sang the praises of theydo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI. Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them. If you want to start from scratch, you can learn how to write super-effective prompts here and here. If you’d rather build on someone else’s work, Brian at JobsLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point.  Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect.    It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
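To make that refinement loop concrete, here is a minimal sketch in Python of what testing candidate prompts against a finished project could look like. It assumes the OpenAI Python package (v1+) and an API key in your environment; the file name, the prompts, and the themes are hypothetical placeholders, and any LLM or research tool you prefer could stand in for the one shown.

```python
# A rough sketch of prompt refinement against a past project.
# Assumptions: the openai package (v1+) is installed, OPENAI_API_KEY is set,
# and you have a transcript you already analyzed by hand.
# File names, prompts, and themes below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("past_project_transcript.txt") as f:  # hypothetical transcript file
    transcript = f.read()

# Prompt variants you are testing; refine the wording between runs.
candidate_prompts = [
    "List the jobs to be done expressed in this customer interview:\n\n{t}",
    ("You are a Jobs to be Done researcher. Identify every functional, social, "
     "and emotional job in this interview, one per line:\n\n{t}"),
]

# Themes your original, human-led analysis already surfaced.
known_themes = ["save time on reporting", "look credible to leadership"]

for prompt in candidate_prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt.format(t=transcript)}],
    )
    answer = response.choices[0].message.content.lower()
    recovered = [theme for theme in known_themes if theme in answer]
    print(f"Recovered {len(recovered)}/{len(known_themes)} known themes: {recovered}")
```

Because the transcript has already been analyzed by hand, each run shows whether a wording change moves the prompt closer to, or further from, the insights you already know are there.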

Defend your budget

Using AI for customer research will save you time and money, but it is not free. It’s also not just the cost of the subscription or license for your chosen tool(s).  

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently. Some will love the idea of using AI for customer research, while others will resist it. Everyone, however, will pounce on any evidence that they’re right. So be prepared. Take advantage of free trials to play with tools. Test tools on friends, family, and colleagues. Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash
