Tag Archives: Artificial Intelligence

AI Can Help Attract, Retain and Grow Customer Relationships


GUEST POST from Shep Hyken

How do you know what your customers want if they don’t tell you? It’s more than sending surveys and interpreting data. Joe Tyrrell is the CEO of Medallia, a company that helps its customers tailor experiences through “intelligent personalization” and automation. I had a chance to interview him on Amazing Business Radio and he shared how smart companies are using AI to build and retain customer relationships. Below are some of his comments followed by my commentary:

  • The generative AI momentum is so widespread that 85% of executives say the technology will be interacting directly with customers in the next two years. AI has been around for longer than most people realize. When a customer is on a website that makes suggestions, when they interact with a chatbot or get the best answers to frequently asked questions, they are interacting with AI-infused technology, whether they know it or not.
  • While most executives want to use AI, they don’t know how they want to use it, the value it will bring and the problems it will solve. In other words, they know they want to use it, but don’t know how (yet). Tyrrell says, “Most organizations don’t know how they are going to use AI responsibly and ethically, and how they will use it in a way that doesn’t introduce unintended consequences, and even worse, unintended bias.” There needs to be quality control and oversight to ensure that AI is meeting the goals and intentions of the company or brand.
  • Generative AI is different from traditional AI. According to Tyrrell, the nature of generative AI is to “Give me something in real time while I’m interacting with it.” In other words, it’s not just finding answers. It’s communicating with me, almost like human-to-human. When you ask it to clarify a point, it knows exactly how to respond. This is quite different from a traditional search bar on a website—or even a Google search.
  • AI’s capability to personalize the customer experience will be the focus of the next two years. Based on the comment about how AI technology currently interacts with customers, I asked Tyrrell to be more specific about how AI will be used. His answer was focused on personalization. The data we extract from multiple sources will allow for personalization like never before. According to Tyrrell, 82% of consumers say a personalized experience will influence which brand they end up purchasing from in at least half of all shopping situations. The question isn’t whether a company should personalize the customer experience. It is what happens if they don’t.
  • Personalization isn’t about being seen as a consumer, but as a person. That’s the goal of personalization. Medallia’s North Star, which guides all its decisions and investments, is its mission to personalize every customer experience. What makes this a challenge is the word every. If customers experience this once, but on the next visit the brand acts as if it doesn’t recognize them, all the work from the previous visit, along with the credibility built with the customer, is eroded.
  • The next frontier of AI is interpreting social feedback. Tyrrell is excited about Medallia’s future focus. “Surveys may validate information,” says Tyrrell, “but it is often what’s not said that can be just as important, if not even more so.” Tyrrell talked about Medallia’s capability to look beyond surveys, social media comments, reviews and ratings, the channels where customers traditionally express themselves. There is also behavioral feedback, which Tyrrell refers to as social feedback, not to be confused with social media feedback. Technology can track customer behavior on a website. What pages do they spend the most time on? How do they use the mouse to navigate the page? Tyrrell says, “Wherever people are expressing themselves, we capture the information, aggregate it, translate it, interpret it, correlate it and then deliver insights back to our customers.” This isn’t about communicating with customers about customer support issues. It’s mining data to understand customers and make products and experiences better.

Tyrrell’s insights emphasize the opportunities for AI to support the relationship a company or brand has with its customers. The future of customer engagement will be about an experience that creates customer connection. Even though technology is driving the experience, customers appreciate being known and recognized when they return. Tyrrell and I joked about the theme song from the TV sitcom Cheers, which debuted in 1982 and lasted 11 seasons. But it really isn’t a joke at all. It’s what customers want, and it’s so simple. As the song title suggests, customers want to go to a place Where Everybody Knows Your Name.

Image Credits: Unsplash


Top 10 Human-Centered Change & Innovation Articles of June 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are June’s ten most popular innovation posts:

  1. The Surprising Downside of Collaboration in Problem-Solving — by Robyn Bolton
  2. Designing Organizational Change and Transformation — by Stefan Lindegaard
  3. Four Principles of Successful Digital Transformation — by Greg Satell
  4. Managers Make the Difference – Four Common Mistakes Managers Make — by David Burkus
  5. Learning to Innovate — by Janet Sernack
  6. Think Outside Which Box? — by Howard Tiersky
  7. Innovation the Amazon Way — by Greg Satell
  8. Irrelevant Innovation — by John Bessant
  9. Nike Should Stop Blaming Working from Home for Their Innovation Struggles — by Robyn Bolton
  10. Time is a Flat Circle – Jamie Dimon’s Comments on AI Just Proved It — by Robyn Bolton

BONUS – Here are five more strong articles published in May that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:


Humans Wanted for the Decade’s Biggest Innovation Challenges


GUEST POST from Greg Satell

Every era is defined by the problems it tackles. At the beginning of the 20th century, harnessing the power of internal combustion and electricity shaped society. In the 1960s there was the space race. Since the turn of this century, we’ve learned how to decode the human genome and make machines intelligent.

None of these were achieved by one person or even one organization. In the case of electricity, Faraday and Maxwell established key principles in the early and mid 1800s. Edison, Westinghouse and Tesla came up with the first applications later in that century. Scores of people made contributions for decades after that.

The challenges we face today will be fundamentally different because they won’t be solved by humans alone, but through complex human-machine interactions. That will require a new division of labor in which the highest-level skills won’t be things like the ability to retain information or manipulate numbers, but the ability to connect and collaborate with other humans.

Making New Computing Architectures Useful

Technology over the past century has been driven by a long succession of digital devices. First vacuum tubes, then transistors and finally microchips transformed electrical power into something approaching an intelligent control system for machines. That has been the key to the electronic and digital eras.

Yet today that smooth procession is coming to an end. Microchips are hitting their theoretical limits and will need to be replaced by new computing paradigms such as quantum computing and neuromorphic chips. The new technologies will not be digital, but will work fundamentally differently from what we’re used to.

They will also have fundamentally different capabilities and will be applied in very different ways. Quantum computing, for example, will be able to simulate physical systems, which may revolutionize sciences like chemistry, materials research and biology. Neuromorphic chips may be thousands of times more energy efficient than conventional chips, opening up new possibilities for edge computing and intelligent materials.

There is still a lot of work to be done to make these technologies useful. To be commercially viable, not only do important applications need to be identified, but much like with classical computers, an entire generation of professionals will need to learn how to use them. That, in truth, may be the most significant hurdle.

Ethics For AI And Genomics

Artificial intelligence, once the stuff of science fiction, has become an everyday technology. We speak into our devices as a matter of course and expect to get back coherent answers. In the near future, we will see autonomous cars and other vehicles regularly deliver products and eventually become an integral part of our transportation system.

This opens up a significant number of ethical dilemmas. If given a choice to protect a passenger or a pedestrian, which should be encoded into the software of an autonomous car? Who gets to decide which factors are encoded into systems that make decisions about our education, whether we get hired or if we go to jail? How will these systems be trained? We all worry about who’s educating our kids, but who’s teaching our algorithms?

Powerful genomics techniques like CRISPR open up further ethical dilemmas. What are the guidelines for editing human genes? What are the risks of a mutation inserted in one species jumping to another? Should we revive extinct species, Jurassic Park style? What are the potential consequences?

What’s striking about the moral and ethical issues of both artificial intelligence and genomics is that they have no precedent, save for science fiction. We are in totally uncharted territory. Nevertheless, it is imperative that we develop a consensus about what principles should be applied, in what contexts and for what purpose.

Closing A Perpetual Skills Gap

Education used to be something that you underwent in preparation for your “real life.” Afterwards, you put away the schoolbooks and got down to work, raised a family and never really looked back. Even today, Pew Research reports that nearly one in four adults in the US did not read a single book last year.

Today technology is making many things we learned obsolete. In fact, a study at Oxford estimated that nearly half of the jobs that exist today will be automated in the next 20 years. That doesn’t mean that there won’t be jobs for humans to do; in fact, we are in the midst of an acute labor shortage, especially in manufacturing, where automation is most pervasive.

Yet just as advanced technologies are eliminating the need for skills, they are also increasingly able to help us learn new ones. A number of companies are using virtual reality to train workers and finding that it can boost learning efficiency by as much as 40%. IBM, with the Rensselaer Polytechnic Institute, has recently unveiled a system that helps you learn a new language, such as Mandarin. This video shows how it works.

Perhaps the most important challenge is a shift in mindset. We need to treat education as a lifelong need that extends long past childhood. If we only retrain workers once their industry has become obsolete and they’ve lost their jobs, then we are needlessly squandering human potential, not to mention courting an abundance of misery.

Shifting Value To Humans

The industrial revolution replaced the physical labor of humans with that of machines. The result was often mind-numbing labor in factories. Yet further automation opened up new opportunities for knowledge workers who could design ways to boost the productivity of both humans and machines.

Today, we’re seeing a similar shift from cognitive to social skills. Go into a highly automated Apple Store, to take just one example, and you don’t see a futuristic robot dystopia, but a small army of smiling attendants on hand to help you. The future of technology always seems to be more human.

In much the same way, when I talk to companies implementing advanced technologies like artificial intelligence or cloud computing, the one thing I constantly hear is that the human element is often the most important. Unless you can shift your employees to higher-level tasks, you miss out on many of the most important benefits.

What’s important to consider is that when a task is automated, it is also democratized and value shifts to another place. So, for example, e-commerce devalues the processing of transactions, but increases the value of things like customer service, expertise and resolving problems with orders, which is why we see all those smiling faces when we walk into an Apple Store.

That’s what we often forget about innovation. It’s essentially a very human endeavor and, for it to count as true progress, humans always need to be at the center.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


How To Create the IKEA Effect

A Customer Experience That Will Be Appreciated


GUEST POST from Shep Hyken

When reaching out for customer service and support, most customers still prefer to communicate with a company or brand via the traditional phone call. That said, more and more customers are attracted to and embracing a do-it-yourself customer service experience, known as self-service.

I had a chance to sit down with Venk Korla, the president and CEO of HGS Digital, which recently released its HGS Buyers Insight Report. We talked about the investments CX (customer experience) leaders are making in AI and digital self-support and the importance of creating a similar experience for employees, which I will get to in a moment. But first, I want to share some comments Korla made about comparing customer service to an IKEA experience.

The IKEA Effect

The IKEA effect was identified and named by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale and Dan Ariely of Duke, who published the results of three studies in 2011. A short description of the IKEA effect is that some customers not only enjoy putting furniture together themselves but also find more value in the experience than if a company delivered pre-assembled furniture.

“It’s the same in the customer service/support world,” Korla said. “Customers who easily resolve their issues or have their questions answered on a brand’s self-service portal, either through traditional FAQ pages on a website or something more advanced, such as AI-powered solutions, will not only be happy with the experience but will also be grateful to the company for providing such an easy, fulfilling experience.”

To support this notion, our customer service research (sponsored by RingCentral) found that even with the phone being the No. 1 way customers like to interact with brands, 26% of customers stopped doing business with a company or brand because self-service options were not provided. (Note: Younger generations prefer self-service solutions more than older generations.) As the self-service experience improves, more will adopt it as their go-to method of getting questions answered and problems resolved.

The Big Bet On AI

In the next 18 months, CX decision-makers are betting big on artificial intelligence. The research behind the HGS Buyers Insight Report found that 37% of the leaders surveyed will deploy customer-facing chatbots, 30% will use generative AI or text-speech solutions to support employees taking care of customers, and 28% will invest in and deploy robotic process automation. All of these investments are meant to improve both the customer and employee experience.

While Spending On CX Is A Top Priority, Spending On Employee Experience (EX) Is Lagging

Korla recognizes the need to support not only customers with AI, but also employees. Companies betting on AI must also consider employees as they invest in technology to support customers. Just as a customer uses an AI-powered chatbot to communicate using natural language, the employee interacting directly with the customer should be able to use similar tools.

Imagine the customer support agent receives a call from a customer with a difficult question. As the customer describes the issue, the agent inputs notes into the computer. Within seconds, the answer to the question appears on the agent’s screen. In addition, the AI tool shares insights about the customer, such as their buying patterns, how long they have been a customer, what they’ve called about in the past and more. At this point, a good agent can interpret the information and communicate it in the style that best suits the customer.

Korla explains that the IKEA effect is just as powerful for employees as it is for customers. When employees are armed with the right tools to do their jobs effectively, allowing them to easily support customers and solve their most difficult problems, they are more fulfilled. In the HGS report, 54% of CX leaders surveyed cited talent attraction and retention as a top investment priority. So, for the company that invests in EX tools—specifically AI and automation—the result translates into lower turnover and more engaged employees.

Korla’s insights highlight the essence of the IKEA effect in creating empowering customer experiences and employee experiences. He reminds us that an amazing CX is supported by an amazing EX. As your company prepares to invest in AI and other self-service tools for your customers, consider an investment in similar tools for your employees.

Download the HGS Buyers Insight Report to find out what CX decision-makers will invest in and focus on for 2024 and beyond.

Image Credits: Pixabay
This article originally appeared on Forbes.com


Balancing Artificial Intelligence with the Human Touch

GUEST POST from Shep Hyken

As AI and ChatGPT-type technologies grow in capability and ease of use and become more cost-effective, more and more companies are making their way to the digital experience. Still, the best companies know better than to switch to 100% digital.

I had a chance to interview Nicole Kyle, managing director and co-founder of CMP Research (Customer Management Practice), for Amazing Business Radio. Kyle’s team provides research and advisory services for the contact center industry and conducts some of the most critical research on the topic of self-service and digital customer service. I first met Kyle at CCW, the largest contact center conference in the industry. I’ve summarized seven of her key observations below, followed by my commentary:

  1. The Amazon Effect has trained customers to expect a level of service that’s not always in line with what companies and brands can provide. This is exactly what’s happening with customer expectations. They no longer compare you just to your direct competitors but to the best experience they’ve had from any company. Amazon and other rockstar brands focused on CX (customer experience) have set the bar higher for all companies in all industries.
  2. People’s acceptance and eventual normalization of digital experiences accelerated during the pandemic, and they have become a way of life for many customers. The pandemic forced customers to accept self-service. For example, many customers never went online to buy groceries, vehicles or other items that were traditionally shopped for in person. Once customers got used to it, as the pandemic became history, many never returned to the “old way” of doing business. At a minimum, many customers expect a choice between the two.
  3. Customers have new priorities and are placing a premium on their time. Seventy-two percent of customers say they want to spend less time interacting with customer service. They want to be self-sufficient in managing typical customer service issues. In other words, they want self-service options that will get them answers to their questions efficiently and in a timely manner. Our CX research differs, finding a number less than half of that 72%. When I asked Kyle about the discrepancy, she responded, “Customers who have a poor self-service experience are less likely to return to self-service. While there is an increase in preference, you’re not seeing the adoption because some companies aren’t offering the type of self-service experience the customer wants.”
  4. The digital dexterity of society is improving! That phrase is a great way to describe self-service adoption, specifically how customers view chatbots or other ChatGPT-type technologies. Kyle explained, “Digital experiences became normalized during the pandemic, and digital tools, such as generative AI, are now starting to help people in their daily lives, making them more digitally capable.” That translates into customers’ higher acceptance and desire for digital support and CX.
  5. Many customers can tell the difference between talking to an AI chatbot and a live chat with a human agent, depending on their access to technology and the quality of the chatbot. However, customers are still willing to use the tools if the results are good. When it comes to AI interacting with customers via text or voice, don’t get hung up on how lifelike (or not) the experience is as long as it gets your customers what they want quickly and efficiently.
  6. The No. 1 driver of satisfaction (according to 78% of customers surveyed) in a self-service experience is personalization. Personalization is more important than ever in customer service and CX. So, how do you personalize digital support? The “machine” must not only be capable of delivering the correct answers and solutions, but it must also recognize the existing customer, remember issues the customer had in the past, make suggestions that are specific to the customer and provide other customized, personalized approaches to the experience.
  7. With increased investments in self-service and generative AI, 60% of executives say they will reduce the number of frontline customer-facing jobs. But, the good news is that jobs will be created for employees to monitor performance, track data and more. I’m holding firm in my predictions over the past two years that while there may be some job disruption, the frontline customer support agent job will not be eliminated. To Kyle’s point, there will be job opportunities related to the contact center, even if they are not on the front line.

Self-service and automation are a balancing act. The companies that have gone “all in” and eliminated human-to-human customer support have had pushback from customers. Companies that have not adopted newer technologies are frustrating many customers who want and expect self-service solutions. The right balance may differ from one company to the next, but it is critical, and smart leaders will find it and continue to adapt to the ever-changing expectations of their customers.

Image Credits: Unsplash
This article originally appeared on Forbes.com


Time is a Flat Circle

Jamie Dimon’s Comments on AI Just Proved It


GUEST POST from Robyn Bolton


“Time is a flat circle.  Everything we have done or will do we will do over and over and over and over again – forever.” – Rusty Cohle, played by Matthew McConaughey, in True Detective

For the whole of human existence, we have created new things with no idea if, when, or how they will affect humanity, society, or business.  New things can be a distraction, sucking up time and money and offering nothing in return.  Or they can be a bridge to a better future.

As a leader, it’s your job to figure out which things are a bridge (i.e., innovation) and which things suck (i.e., shiny objects).

Innovation is a flat circle

The concept of eternal recurrence, that time repeats itself in an infinite loop, was first taught by Pythagoras (of Pythagorean theorem fame) in the 6th century BC. It re-emerged (thereby proving its own truth) in Friedrich Nietzsche’s writings in the 19th century, then again in 2014’s first season of True Detective, and then again on Monday in Jamie Dimon’s Annual Letter to Shareholders.

Mr. Dimon, the CEO and Chairman of JPMorgan Chase & Co, first mentioned AI in his 2017 Letter to Shareholders.  So, it wasn’t the mention of AI that was newsworthy. It was how it was mentioned.  Before mentioning geopolitical risks, regulatory issues, or the recent acquisition of First Republic, Mr. Dimon spends nine paragraphs talking about AI, its impact on banking, and how JPMorgan Chase is responding.

Here’s a screenshot of the first two paragraphs:


He’s right. We don’t know “the full effect or the precise rate at which AI will change our business—or how it will affect society at large.” We were similarly clueless in 1436 (when the printing press was invented), 1712 (when the first commercially successful steam engine was invented), 1882 (when electricity was first commercially distributed), and 1993 (when the World Wide Web was released to the public).

Innovation, it seems, is also a flat circle.

Our response doesn’t have to be.

Historically, people responded to innovation in one of two ways: panic because it’s a sign of the apocalypse or rejoice because it will be our salvation. And those reactions aren’t confined to just “transformational” innovations.  In 2015, a visiting professor at King’s College London declared that the humble eraser (1770) was “an instrument of the devil” because it creates “a culture of shame about error.  It’s a way of lying to the world, which says, ‘I didn’t make a mistake.  I got it right the first time.’”

Neither reaction is true. Fortunately, as time passes, more people recognize that the truth is somewhere between the apocalypse and salvation and that we can influence what that “between” place is through intentional experimentation and learning.

JPMorgan started experimenting with AI over a decade ago, well before most of its competitors.  As a result, they “now have over 400 use cases in production in areas such as marketing, fraud, and risk” that are producing quantifiable financial value for the company. 

It’s not just JPMorgan.  Organizations as varied as John Deere, BMW, Amazon, the US Department of Energy, Vanguard, and Johns Hopkins Hospital have been experimenting with AI for years, trying to understand if and how it could improve their operations and enable them to serve customers better.  Some experiments worked.  Some didn’t.  But every company brave enough to try learned something and, as a result, got smarter and more confident about “the full effect or the precise rate at which AI will change our business.”

You have free will.  Use it to learn.

Cynics believe that time is a flat circle.  Leaders believe it is an ever-ascending spiral, one in which we can learn, evolve, and influence what’s next.  They also have the courage to act on (and invest in) that belief.

What do you believe?  More importantly, what are you doing about it?

Image credit: Pixabay


Top 10 Human-Centered Change & Innovation Articles of May 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are May’s ten most popular innovation posts:

  1. Five Lessons from the Apple Car’s Demise — by Robyn Bolton
  2. Six Causes of Employee Burnout — by David Burkus
  3. Learning About Innovation – From a Skateboard? — by John Bessant
  4. Fighting for Innovation in the Trenches — by Geoffrey A. Moore
  5. A Case Study on High Performance Teams — by Stefan Lindegaard
  6. Growth Comes From What You Don’t Have — by Mike Shipulski
  7. Innovation Friction Risks and Pitfalls — by Howard Tiersky
  8. Difference Between Customer Experience Perception and Reality — by Shep Hyken
  9. How Tribalism Can Kill Innovation — by Greg Satell
  10. Preparing the Next Generation for a Post-Digital Age — by Greg Satell

BONUS – Here are five more strong articles published in April that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:







AI Strategy Should Have Nothing to do with AI


GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models [that] identified eight styles that distinguish a culture and can be measured.” I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers.

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict.

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy.

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly.

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer. Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions.  As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, and company.  Bottom line, the eight cultures are a tool, not a law (and I glossed over a lot of stuff from the report).

Image credit: Wikimedia Commons


Top 10 Human-Centered Change & Innovation Articles of April 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are April’s ten most popular innovation posts:

  1. Ignite Innovation with These 3 Key Ingredients — by Howard Tiersky
  2. What Have We Learned About Digital Transformation? — by Geoffrey A. Moore
  3. The Collective Growth Mindset — by Stefan Lindegaard
  4. Companies Are Not Families — by David Burkus
  5. 24 Customer Experience Mistakes to Stop in 2024 — by Shep Hyken
  6. Transformation is Human Not Digital — by Greg Satell
  7. Embrace the Art of Getting Started — by Mike Shipulski
  8. Trust as a Competitive Advantage — by Greg Satell
  9. 3 Innovation Lessons from The Departed — by Robyn Bolton
  10. Humans Are Not as Different from AI as We Think — by Geoffrey A. Moore

BONUS – Here are five more strong articles published in March that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:







How I Use AI to Understand Humans

(and Cut Research Time by 80%)


GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research.  Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation.  No matter how good the Large Language Model (LLM) is, it can’t tell you how feelings, aspirations, and motivations drive people’s decisions.

Then I watched JTBD Untangled’s video with Evan Shore, Walmart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What because of its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done Research?  Yes, please!  That’s precisely what JobLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments.  For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research artifacts.  Another colleague recently sang the praises of theydo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI.  Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them.  If you want to start from scratch, you can learn how to write super-effective prompts here and here.  If you’d rather build on someone else’s work, Brian at JobsLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point.  Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect.    It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
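As a rough sketch of that refine-against-a-known-project loop: run a candidate prompt against transcripts from a finished project and score how many of the insights you already know it recovers. The example below is only an illustration, assuming the OpenAI Python SDK; the file name, model, themes, and prompt wording are hypothetical placeholders, not tools or prompts from this article.

```python
# Hypothetical prompt-testing harness (illustration only, not a tool from this article).
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; file name, model, themes, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# Transcripts and known themes from a finished project you already trust.
past_transcripts = open("past_project_transcripts.txt", encoding="utf-8").read()
known_themes = ["switching costs", "fear of lock-in", "time to first value"]

candidate_prompt = (
    "You are a Jobs to be Done researcher. From the interview transcripts below, "
    "list the functional, social, and emotional jobs customers are hiring for.\n\n"
    "{transcripts}"
)

def theme_coverage(prompt_template: str) -> float:
    """Run the prompt once and measure how many known themes the model recovers."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": prompt_template.format(transcripts=past_transcripts),
        }],
    )
    output = response.choices[0].message.content.lower()
    hits = sum(theme in output for theme in known_themes)
    return hits / len(known_themes)

print(f"Theme coverage: {theme_coverage(candidate_prompt):.0%}")
# Reword the prompt, re-run, and keep the version with the best coverage.
```

The specific harness matters far less than the habit: score every prompt variant against insights you already trust before pointing it at new research.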

Defend your budget

Using AI for customer research will save you time and money, but it is not free. It’s also not just the cost of the subscription or license for your chosen tool(s).  

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently.  Some will love the idea of using AI for customer research, while others will resist it.  Everyone, however, will pounce on any evidence that they’re right.  So be prepared.  Take advantage of free trials to play with tools.  Test tools on friends, family, and colleagues.  Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash
