Tag Archives: Artificial Intelligence

You Need to Know What Your Customers Think of AI


GUEST POST from Shep Hyken

Ten years ago, only the most technologically advanced companies used AI — although it barely resembled what companies use today when communicating with customers — and it was very, very expensive. But not anymore. Today, any company can implement an AI strategy using ChatGPT-type technologies, often creating experiences that give customers what they want. But not always, which is why the information below is important.

The 2025 Findings

My annual customer service and customer experience (CX) research study surveys more than 1,000 U.S. consumers weighted to the population’s demographics of age, gender, ethnicity and geography. It included an entire group of questions focused on how customers react to and accept (or don’t accept) AI options to ask questions, resolve problems and communicate with a company or brand. Consider the following findings:

  • AI Success: Half of U.S. customers (50%) said they have successfully resolved a customer service issue using AI or ChatGPT-type technologies without needing human assistance. In 2024, only three out of 10 customers (32%) did so. That’s great news, but it’s important to point out that age makes a difference. Six out of 10 Gen-Z customers (61%) successfully used AI support versus just 32% of Boomers.
  • AI Is Far From Perfect: Half of U.S. customers (51%) said they received incorrect information from an AI self-service bot. Even with incredible improvement in AI’s capabilities, it still serves up wrong information. That destroys trust, not only in the company but also in the technology as a whole. A few bad answers and customers will be reluctant, at least in the near term, to choose self-service over the traditional mode of communication, the phone.
  • Still, Customers Believe: Four out of 10 customers (42%) believe AI and ChatGPT can handle complex customer service inquiries as effectively as humans. Even with the mistakes, customers believe AI solutions work. However, 86% of customers think companies using AI should always provide an option to speak or text with a real person.
  • The Phone Still Rules: It’s still too early to throw away phone support. My prediction is that it will be years, if ever, before human-to-human interactions completely disappear, a view supported by the responses when we asked, “When you have a problem or issue with a company, which solution do you prefer to use: phone or digital self-service?” The answer: 68% of customers will still choose the phone over digital self-service. That number is heavily influenced by the 82% of Baby Boomers who choose to call a company rather than use any type of digital support.
  • The Future Looks Strong For AI Customer Support: Six out of 10 customers (63%) expect AI-fueled technologies to become the primary mode of customer support. We asked the same question in 2021, and only 21% of customers felt this way.

The Strategy Behind Using AI For CX

  • Age Matters: As you can see from some of the above findings, there is a big generational gap between younger and older customers. Gen-Z customers are more comfortable, have had more success, and want more digital/AI interactions compared to older customers. Know your customer demographics and provide the appropriate support and communication options based on their age. Recognize you may need to provide different support options if your customer base is “everyone.”
  • Trust Is a Factor: Seven out of 10 customers (70%) have concerns about privacy and security when interacting with AI. Once again, age makes a difference. Trust and confidence with AI consistently decrease with age.

The Future of AI

As AI continues to evolve, especially in the customer service and experience world, companies and brands must find a balance between technology and the human touch. While customers are becoming more comfortable and finding success with AI, we can’t become so enamored with it that we abandon what many of our customers expect. The future of AI isn’t a choice between technology and humans. It’s about creating a blended experience that plays to the technology’s strengths and still gives customers the choice.

Furthermore, if every business had a 100% digital experience, what would be a competitive differentiator? Unless you are the only company that sells a specific product, everything becomes a commodity. Again, I emphasize that there must be a balance. I’ll close with something I’ve written before, but it bears repeating:

The greatest technology in the world can’t replace the ultimate relationship-building tool between a customer and a business: the human touch.

This article was originally published on Forbes.com.

Image Credits: Google Gemini


The Great American Contraction

Population, Scarcity, and the New Era of Human Value


GUEST POST from Art Inteligencia

We stand at a unique crossroads in human history. For centuries, the American story has been a tale of growth and expansion. We built an empire on a relentless increase in population and labor, a constant flow of people and ideas fueling ever-greater economic output. But what happens when that foundational assumption is not just inverted, but rendered obsolete? What happens when a country built on the idea of more hands and more minds needing more work suddenly finds itself with a shrinking demand for both, thanks to the exponential rise of artificial intelligence and robotics?

The Old Equation: A Sinking Ship

The traditional narrative of immigration as an economic engine is now a relic of a bygone era. For decades, we debated whether immigrants filled low-skilled labor gaps or competed for high-skilled jobs. That entire argument is now moot. Robotics and autonomous systems are already replacing a vast swath of low-skilled labor, from agriculture to logistics, with greater speed and efficiency than any human ever could. This is not a future possibility; it’s a current reality accelerating at an exponential pace. The need for a large population to perform physical tasks is over.

But the disruption is far more profound. While we were arguing about factory floors and farm fields, Artificial Intelligence (AI) has quietly become a peer-level, and in many cases, superior, knowledge worker. AI can now draft legal briefs, write code, analyze complex data sets, and even generate creative content with a level of precision and speed no human can match. The very “high-skilled” jobs we once championed as the future — the jobs we sought to fill with the world’s brightest minds — are now on the chopping block. The traditional value chain of human labor, from manual to cognitive, is being dismantled from both ends simultaneously.

“The question is no longer ‘What can humans do?’ but ‘What can only a human do?'”

The New Paradigm: Radical Scarcity

This creates a terrifying and necessary paradox. The scarcity we must now manage is not one of labor or even of minds, but of human relevance. The old model of a growing population fueling a growing economy is not just inefficient; it is a direct path to social and economic collapse. A population designed for a labor-based economy is fundamentally misaligned with a future where labor is a non-human commodity. The only logical conclusion is a Great Contraction — a deliberate and necessary reduction of our population to a size that can be sustained by a radically transformed economy.

This reality demands a ruthless re-evaluation of our immigration policy. We can no longer afford to see immigrants as a source of labor, knowledge, or even general innovation. The only value that matters now is singular, irreplaceable talent. We must shift our focus from mass immigration to an ultra-selective, curated approach. The goal is no longer to bring in more people, but to attract and retain the handful of individuals whose unique genius and creativity are so rare that AI can’t replicate them. These are the truly exceptional minds who will pioneer new frontiers, not just execute existing tasks.

The future of innovation lies not in the crowd, but in the individual who can forge a new path where none existed before. We must build a system that only allows for the kind of talent that is a true outlier — the Einstein, the Tesla, the Brin, but with the understanding that even a hundred of them will not be enough to employ millions. We are not looking for a workforce; we are looking for a new type of human capital that can justify its existence in a world of automated plenty. This is a cold and pragmatic reality, but it is the only path forward.

Human-Centered Value in a Post-Labor World

My core philosophy has always been about human-centered innovation. In this new world, that means understanding that the purpose of innovation is not just about efficiency or profit. It’s about preserving and cultivating the rare human qualities that still hold value. The purpose of immigration, therefore, must shift. It is not about filling jobs, but about adding the spark of genius that can redefine what is possible for a smaller, more focused society. We must recognize that the most valuable immigrants are not those who can fill our knowledge economy, but those who can help us build a new economy based on a new, more profound understanding of what it means to be human.

The political and social challenges of this transition are immense. But the choice is clear. We can either cling to a growth-based model and face the inevitable social and economic fallout, or we can embrace this new reality. We can choose to see this moment not as a failure, but as an opportunity to become a smaller, more resilient, and more truly innovative nation. The future isn’t about fewer robots and more people. It’s about robots designing, building and repairing other robots. And, it’s about fewer people, but with more brilliant, diverse, and human ideas.

This may sound like a dystopia to some people, but to others it will sound like the future is finally arriving. If you’re still not quite sure what this future might look like and why fewer humans will be needed in America, here are a couple of videos from the present that will give you a glimpse of why this may be the future of America:

Image credit: Google Gemini


Customer Experience is Changing

If You Don’t Like Change, You’re Going to Hate Extinction


GUEST POST from Shep Hyken

Depending on which studies and articles you read, customer service and customer experience (CX) are getting better … or they’re getting worse. Our customer service and CX research found that 60% of consumers had better customer service experiences than last year, and in general, 82% are happy with the customer service they receive from the companies and brands with which they do business.

Yet, some studies claim customer service is worse than ever. Regardless, more companies than ever are investing in improving CX. Some nail it, but even with an investment, some still struggle. Another telling stat is the growing number of companies attending CX conferences.

Last month, more than 5,000 people representing 1,382 companies attended and participated in Contact Center Week (CCW), the world’s largest conference dedicated to customer service and customer experience. This was the largest attendance to date, representing a 25% growth over last year.

Many recognized brands and CX leaders attended and shared their wisdom from the main stage and breakout rooms. The expo hall featured demonstrations of the latest and greatest solutions to create more effective customer support experiences.

The primary reason I attend conferences like CCW is to stay current with the latest advancements and solutions in CX and to gain insight into how industry leaders think. AI took center stage for most of the presentations. No doubt, it continues to improve and gain acceptance. With that in mind, here are some of my favorite takeaways with my commentary from the sessions I attended:

AI for Training

Becky Ploeger, global head of reservations and customer care at Hilton, uses AI to create micro-lessons for employee training. Hilton is using Centrical’s platform to take various topics and turn them into coaching modules. Employees participate in simulations that replicate customer issues.

Can We Trust AI?

As excited as Ploeger is about AI (and agentic AI), there is still trepidation. CX leaders must recognize that AI is not yet perfect and will occasionally provide inaccurate information. Ploeger said, “We have years and years of experience with agents. We only have six months of experience with agentic AI.”

Wrong Information from AI Costs a Company Money—or Does It?

Gadi Shamia, CEO of Replicant, an AI voice technology company, commented about the mistakes AI makes. In general, CX leaders are complaining that going digital is costing the company money because of the bad information customers receive. Shamia asks, “How much are you losing?” While bad information can cause a customer to defect to a competitor, so does a bad experience with a live customer service rep. So, how often does AI provide incorrect information? How many of those customers leave versus trying to connect with an agent? The metrics you choose to define success with a digital self-service experience need to include more than measuring bad experiences. Mark Killick, SVP of experiential operations at Shipt, weighed in on this topic, saying, “If we don’t fix the problems of providing bad information, we’ll just deliver bad information faster.”

Making the Case to Invest in AI

Mariano Tan, president and CEO of Prosodica, says, “Nothing gets funded without a clear business case.” The person in charge of the budget for customer service and CX initiatives (typically the CFO in larger companies) won’t “open the wallet” without proof that the expenditure will yield a return on investment (ROI). People in charge of budgets like numbers, so when you create your “clear business case,” be sure to include the numbers that make a compelling case to invest in CX. Simply saying, “We’ll reduce churn,” isn’t enough. How much churn? That’s a number. How much does it mean to the bottom line? Another number. Numbers sell!

Final Words: Love Change, or Else

Neil Gibson, SVP of CX at FedEx, was part of a panel and shared a quote that is the perfect way to end the article. AI is rapidly changing the way we do business. We must keep up, or else. Gibson quoted Fred Smith, the first CEO and founder of FedEx, who said, “If you don’t like change, you’re going to hate extinction.” In other words, keep up or watch your competition blow past you.

This article was originally published on Forbes.com.

Image Credits: Pixabay


Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?


GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also, of course, a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us, me included, who have interacted with AI at any length have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: This humanizing of the user interface brings multiple benefits. It is, of course, part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of the output. Furthermore, by creating an illusion of human-like intelligence, it implicitly implies emotions, compassion and critical thinking behind the output, even if they’re not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society that we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few of the ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but we still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, with the empathy, emotional intelligence and compassion that come with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare, but it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or add context in the way a human can.

The Uncanny Valley: That brings me to the Uncanny Valley, the term for when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously fake, but often still ‘felt’ wrong. Early drum machines were so perfect that they lacked the natural ’swing’ of even the most precise human drummer, and had to be modified to include randomness that was below the threshold of conscious awareness but made them ‘feel’ real.

This difference between conscious and unconscious evaluation creates cognitive dissonance that can result in content feeling odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’: over-photoshopping in a print ad, for example, or poor CGI. But be careful what you wish for. AI appears to have marched through the ‘uncanny valley’ to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This begs a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, or exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that once played a key role in translating a search into output can now be skipped. Synthesizing and processing the output from an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough to even realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose he or she can cut and paste into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that also brings up another side effect of low engagement with output – confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers and, in so doing, free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and it is still capable of making really stupid mistakes. Okay, so are humans, but AI is still far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and AI itself, questions about what went into the conclusions, where the data came from, and what the critical-thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, to our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is that we let it become an oracle, treating its output not only as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparent human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger, where instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has eroded our basic writing skills, what if AI erodes our critical thinking and problem-solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water for data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash


The Most Challenging Obstacles to Achieving Artificial General Intelligence

The Unclimbed Peaks


GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  • 1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  • 2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  • 3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
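
Causaly’s actual models are proprietary, so purely as a toy illustration of the underlying idea, distinguishing explicit cause-and-effect language from mere co-occurrence, here is a minimal rule-based sketch in Python. The cue phrases and example sentences are assumptions made for illustration, not Causaly’s method.

```python
import re

# Illustrative cue phrases that typically signal a causal (not merely
# correlational) claim in biomedical text. Real systems use trained
# relation-extraction models and ontologies; this list is a toy assumption.
CAUSAL_CUES = [r"\binduces\b", r"\bresults in\b", r"\bleads to\b",
               r"\bcauses\b", r"\binhibits\b"]
CORRELATIONAL_CUES = [r"\bis associated with\b", r"\bis linked to\b"]

def classify_claim(sentence: str) -> str:
    """Label a sentence as 'causal', 'correlational', or 'other'."""
    lowered = sentence.lower()
    if any(re.search(p, lowered) for p in CAUSAL_CUES):
        return "causal"
    if any(re.search(p, lowered) for p in CORRELATIONAL_CUES):
        return "correlational"
    return "other"

if __name__ == "__main__":
    examples = [  # hypothetical abstract snippets, invented for this sketch
        "Drug X is associated with improved outcomes in Disease Y.",
        "Drug X inhibits the enzyme that drives Disease Y progression.",
    ]
    for s in examples:
        print(f"{classify_claim(s):>13}: {s}")
```

Even this crude distinction hints at why surfacing “induces” and “results in” statements separately from “is associated with” statements gives researchers a more decision-relevant view of the literature.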

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
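
OpenAI’s production pipeline trains a neural reward model on human preference comparisons and then fine-tunes the language model against it with reinforcement learning; neither step fits in a few lines. Purely to make the core idea concrete, the sketch below fits a toy linear, Bradley-Terry-style reward model to invented human preference pairs and uses it to rank candidate responses. The features, data, and numbers are assumptions for illustration, and the reinforcement-learning fine-tuning step is omitted entirely.

```python
import numpy as np

# Hypothetical hand-scored features for candidate responses:
# [helpfulness, factual caution, harmful content]. A real reward model
# is a neural network over the full text; this is purely illustrative.
preferred = np.array([[0.9, 0.8, 0.0],
                      [0.7, 0.9, 0.1]])   # responses human labelers chose
rejected  = np.array([[0.8, 0.2, 0.6],
                      [0.9, 0.1, 0.4]])   # responses they rejected

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a linear Bradley-Terry reward model: push the weights so that
# human-preferred responses score higher than rejected ones.
w = np.zeros(3)
for _ in range(500):
    diff = preferred - rejected            # (pairs, features)
    p = sigmoid(diff @ w)                  # P(preferred response wins)
    grad = diff.T @ (1.0 - p)              # gradient of the log-likelihood
    w += 0.1 * grad

def reward(features: np.ndarray) -> float:
    return float(features @ w)

# Rank new candidate responses by learned reward. The real pipeline would
# instead fine-tune the policy model against this reward signal.
candidates = {"cautious, sourced answer": np.array([0.8, 0.9, 0.0]),
              "confident but risky answer": np.array([0.9, 0.1, 0.5])}
for name, feats in sorted(candidates.items(), key=lambda kv: -reward(kv[1])):
    print(f"{reward(feats):+.2f}  {name}")
```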

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E







McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI


GUEST POST from Robyn Bolton

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But, for people working on pilots,  it might as well be the Pit of Despair because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and to warrant allocating resources) in late 2022/early 2023.  That means we’re expecting ROI within two years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes. Cloud computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be considered in the context of their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach of deploying “vertical” function-specific tools doesn’t fare much better.  In fact, less than 10% make it past the pilot stage, despite having higher potential for economic impact.

Better results require reinvention, not optimization: McKinsey found that call centers that gave agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails saw only a 5-10% reduction in call time. Centers using AI tools to automate tasks without agent initiation reduced call time by 20-40%.

Centers reinventing processes around AI agents? 60-90% reduction in call time, with 80% automatically resolved.

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

Image credit: Microsoft CoPilot


This AI Creativity Trap is Gutting Your Growth


GUEST POST from Robyn Bolton

“We have to do more with less” has become an inescapable mantra, and goodness, are you trying.  You’ve slashed projects and budgets, “right-sized” teams, and tried any technology that promised efficiency and a free trial.  Now, all that’s left is to replace the people you still have with AI creativity tools.  Welcome to the era of the AI Innovation Team.

It sounds like a great idea.  Now, everyone can be an innovator with access to an LLM.  Heck, even innovation firms are “outsourcing” their traditional work to AI, promising the same radical results with less time and for far less money.

It sounds almost too good to be true.

Because it is too good to be true.

AI is eliminating the very brain processes that produce breakthrough innovations.

This isn’t hyperbole, and it’s not just one study.

MIT researchers split 54 people into three groups (ChatGPT users, search engine users, and a group using no online or AI tools) and asked them to write a series of essays. Using EEG brain monitoring, they found that brain connectivity in networks crucial for creativity and analogical thinking dropped by 55% in the ChatGPT group.

Even worse? When people stopped using AI, their brains stayed stuck in this diminished state.

University of Arkansas researchers tested AI against 3,562 humans on a series of four challenges involving finding new uses for everyday objects, like a brick or paperclip.   While AI scored slightly higher on standard tests, when researchers introduced a new context, constraint, or modification to the object, AI’s performance “collapsed.” Humans stayed strong.

Why? AI relies on pattern matching and is unable to transfer its “creativity” to unexpected scenarios. Humans use analogical reasoning, so they are able to flex and adapt quickly.

University of Strasbourg researchers analyzed 15,000 studies of COVID-19 infections and found that teams that relied heavily on AI produced research that received fewer citations and less media attention. However, papers that drew on diverse knowledge sources across multiple fields became widely cited and influential.

The lesson? Breakthroughs require cross-domain thinking, which is precisely what diverse human teams provide, and, according to the MIT study, AI is unable to produce.

How to optimize for efficiency AND impact (and beat your competition)

While this seems like bad news if you’ve already cut your innovation team, the silver lining is that your competition is probably making the same mistake.

Now that you know better, you can do better, and that creates a massive opportunity.

Use AI for what it does well:

  • Data analysis and synthesis
  • Rapid testing and iteration to refine an advanced prototype
  • Process optimization

Use humans for what we do well:

  • Make meaningful connections across unrelated domains
  • Recognize when discoveries from one field apply to another
  • Generate the “aha moments” that redefine industries

Three Questions to Ask This Week

  1. Where did your most recent breakthroughs come from? How many came from connecting insights across different domains? If most of your innovations require analogical leaps, cutting creative teams could kill your pipeline.
  2. How are teams currently using AI tools? Are they using AI for data synthesis and rapid iteration? Good. Are they replacing human ideation entirely? Problem.
  3. How can you see it to believe it? Run a simple experiment: Give two teams an hour to solve a breakthrough challenge. Have one solve it with AI assistance and one without.  Which solution is more surprising and potentially breakthrough?

The Hidden Competitive Advantage

As AI commoditizes pattern recognition, human analogical thinking and creativity become a competitive advantage.

The companies that figure out the right balance will eat everyone else’s lunch.

Image credit: Gemini


Why Context Engineering is the Next Frontier in AI


by Braden Kelley and Art Inteligencia

Observing the rapid evolution of artificial intelligence, one thing has become abundantly clear: while raw processing power and sophisticated algorithms are crucial, the true key to unlocking AI’s transformative potential lies in its ability to understand and leverage context. We’ve seen remarkable advancements in generative AI and machine learning, but these technologies often stumble when faced with the nuances of real-world situations. This is why I believe context engineering – the discipline of explicitly designing and managing the contextual information available to AI systems – is not just an optimization, but the next fundamental frontier in AI innovation.

Think about human intelligence. Our ability to understand language, make decisions, and solve problems is deeply rooted in our understanding of context. A single word can have multiple meanings depending on the sentence it’s used in. A request can be interpreted differently based on the relationship between the people involved or the situation at hand. For AI to truly augment human capabilities and integrate seamlessly into our lives, it needs a similar level of contextual awareness. Current AI models often operate on relatively narrow inputs, lacking the broader understanding of user intent, environmental factors, and historical interactions that humans take for granted. Context engineering aims to bridge this gap, moving AI from being a powerful but often brittle tool to a truly intelligent and adaptable partner.

In the realm of artificial intelligence, context engineering is the strategic and human-centered practice of providing an AI system with the relevant background information it needs to understand a query or situation accurately. It goes beyond simple prompt design by actively building and managing the comprehensive context that surrounds an interaction. This includes integrating historical data, user profiles, real-time environmental factors, and external knowledge sources, allowing the AI to move from a narrow, transactional understanding to a more holistic, human-like awareness. By engineering this context, we enable AI to produce more accurate, personalized, and genuinely useful responses, bridging the gap between a machine’s logic and the nuanced complexity of human communication and problem-solving.

The field of context engineering encompasses a range of techniques and strategies focused on providing AI systems with relevant and actionable context. These include the following (several of them are combined in the short sketch after this list):

  • Prompt Engineering: Crafting detailed and context-rich prompts that guide AI models towards desired outputs.
  • Memory Management: Implementing mechanisms for AI to remember past interactions and use that history to inform current responses.
  • External Knowledge Integration: Connecting AI systems to external databases, APIs, and real-time data streams to provide up-to-date and relevant information.
  • User Profiling and Personalization: Leveraging data about individual users to tailor AI responses to their specific needs and preferences.
  • Situational Awareness: Incorporating real-world contextual cues, such as location, time of day, and user activity, to make AI more responsive to the current situation.
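
As a minimal sketch of how several of these techniques can be layered together (the function names, data fields, and prompt format below are illustrative assumptions, not a reference implementation), consider:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserProfile:                 # user profiling and personalization
    name: str
    plan: str
    preferences: dict = field(default_factory=dict)

@dataclass
class Interaction:                 # memory management
    timestamp: datetime
    summary: str

def fetch_external_facts(topic: str) -> list[str]:
    """External knowledge integration. Stubbed here; in practice this
    would query a CRM, knowledge base, or real-time API."""
    return [f"(no live data source configured for '{topic}')"]

def build_context(profile: UserProfile,
                  history: list[Interaction],
                  query: str,
                  location: str,
                  now: datetime) -> str:
    """Prompt engineering: assemble layered context into one prompt."""
    recent = "; ".join(i.summary for i in history[-3:]) or "none"
    facts = " ".join(fetch_external_facts(query))
    return (
        f"Customer: {profile.name} (plan: {profile.plan})\n"
        f"Recent interactions: {recent}\n"
        f"Relevant facts: {facts}\n"
        f"Situation: {location}, {now:%A %H:%M}\n"   # situational awareness
        f"Current question: {query}\n"
        "Answer using the context above; ask a clarifying question if "
        "the context is insufficient."
    )

print(build_context(
    UserProfile("Dana", "Premium"),
    [Interaction(datetime(2025, 6, 1), "reported billing error, refunded")],
    query="Why did my bill go up this month?",
    location="Portland, OR",
    now=datetime.now(),
))
```

The point of the sketch is the layering: profile, memory, external facts, and situational cues are assembled deliberately rather than left for the model to guess.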

A Human-Centered Blueprint for Implementation

Implementing context engineering is not a one-time technical fix; it is a continuous, human-centered practice that must be embedded into your innovation lifecycle. To move beyond a static, one-size-fits-all model and create truly intelligent, context-aware AI, consider this blueprint for action:

  • Step 1: Start with the Human Context. Before you even think about data streams or algorithms, you must first deeply understand the human being you are serving. Conduct ethnographic research, user interviews, and journey mapping to identify what context is truly relevant to your users. What are their goals? What unspoken needs do they have? What external factors influence their decisions? The most valuable context often isn’t in a database—it’s in the real-world experiences and emotional states of your users.
  • Step 2: Map the Contextual Landscape. Once you understand the human context, you can begin to identify and integrate the necessary data. This involves creating a “contextual map” that connects the human need to the available data sources. For a customer service AI, this map would link a customer’s inquiry to their purchase history, recent support tickets, and even their browsing behavior on your website. For a medical AI, the map would link a patient’s symptoms to their genetic data, environmental exposure, and family medical history. This mapping process ensures that the AI’s inputs are directly tied to what matters most to the user.
  • Step 3: Build a Dynamic Feedback Loop. The context of a situation is constantly changing. A great context-aware AI is not a static system but a learning one. Implement a continuous feedback loop where human users can correct the AI’s understanding, provide additional information, and refine its responses. This “human-in-the-loop” approach is vital for ethical and accurate AI. It allows the system to learn from its mistakes and adapt to new, unforeseen contexts, ensuring its relevance and reliability over time. (A minimal sketch of such a loop appears after this list.)
  • Step 4: Prioritize Privacy and Ethical Guardrails. The more context you provide to an AI, the more critical it becomes to manage that information responsibly. From the outset, you must design for privacy, collecting only the data you absolutely need and ensuring it is stored and used in a secure and transparent manner. Establish clear ethical guardrails for how the AI uses and interprets contextual information, particularly for sensitive data. This is not just a regulatory requirement; it is a fundamental aspect of building trust with your users and ensuring that your AI serves humanity, rather than exploiting it.
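
To make Steps 2 and 3 a little more concrete, here is a deliberately small sketch of a per-user context store with a human-in-the-loop correction path; the structure, field names, and five-item cap are assumptions for illustration only.

```python
from collections import defaultdict

# Step 2: a "contextual map" linking each user to the context the AI may
# draw on. Step 3: corrections from users feed back into that map.
context_store: dict[str, dict] = defaultdict(
    lambda: {"facts": [], "corrections": []}
)

def record_fact(user_id: str, fact: str) -> None:
    context_store[user_id]["facts"].append(fact)

def record_correction(user_id: str, wrong: str, corrected: str) -> None:
    """Human-in-the-loop: the user overrides something the AI assumed."""
    context_store[user_id]["corrections"].append({"was": wrong, "now": corrected})
    record_fact(user_id, corrected)      # the corrected fact becomes context

def context_for(user_id: str) -> list[str]:
    """Return only recent facts, limiting how much context is exposed
    (in the spirit of Step 4's data-minimization principle)."""
    return context_store[user_id]["facts"][-5:]

record_fact("u42", "Prefers email over phone contact")
record_correction("u42", "Lives in Seattle", "Lives in Tacoma")
print(context_for("u42"))
```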

By following these best practices, you can move beyond simple, reactive AI to a proactive, human-centered intelligence that understands the world not just as a collection of data points, but as a rich tapestry of interconnected context. This is the work that will define the next generation of AI and, in doing so, will fundamentally change how technology serves humanity.

Case Study 1: Improving Customer Service with Context-Aware AI Assistants

The Challenge: Generic and Frustrating Customer Service Chatbots

Many companies have implemented AI-powered chatbots to handle customer inquiries. However, these chatbots often struggle with complex or nuanced issues, leading to frustrating experiences for customers who have to repeat information or are given irrelevant answers. The lack of contextual awareness is a major limitation.

Context Engineering in Action:

A telecommunications company sought to improve its customer service chatbot by implementing robust context engineering. They integrated the chatbot with their CRM system, allowing it to access the customer’s purchase history, past interactions, and current account status. They also implemented memory management so the chatbot could retain information shared earlier in the conversation. Furthermore, they used prompt engineering to guide the chatbot to ask clarifying questions and to tailor its responses based on the specific product or service the customer was inquiring about. For example, if a customer asked about a billing issue, the chatbot could access their latest bill and provide specific details, rather than generic troubleshooting steps. It could also remember if the customer had contacted support recently for a related issue and take that into account.
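
The case study is illustrative and does not describe the telecom’s actual architecture, but the two mechanisms it highlights, CRM lookup and in-conversation memory, can be sketched in a few lines. Everything below, including the account data, field names, and keyword-based routing, is invented for illustration.

```python
# Minimal sketch: in-conversation memory plus CRM lookup.
FAKE_CRM = {
    "acct-981": {"name": "Sam", "latest_bill": 84.50, "prior_bill": 62.00,
                 "open_tickets": ["router replacement (2 days ago)"]},
}

class SupportSession:
    def __init__(self, account_id: str):
        self.account = FAKE_CRM[account_id]     # CRM integration
        self.memory: list[str] = []             # memory management

    def answer(self, question: str) -> str:
        self.memory.append(question)
        if "bill" in question.lower():
            a = self.account
            delta = a["latest_bill"] - a["prior_bill"]
            return (f"{a['name']}, your latest bill is ${a['latest_bill']:.2f}, "
                    f"${delta:.2f} higher than last month. I also see an open "
                    f"ticket: {a['open_tickets'][0]}. Is this related?")
        if self.memory[:-1]:                    # recall earlier turns
            return f"Earlier you asked about: {self.memory[-2]}. Tell me more."
        return "Could you share a few more details?"

session = SupportSession("acct-981")
print(session.answer("Why is my bill higher this month?"))
print(session.answer("Also, is my new router on the way?"))
```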

The Impact:

The context-aware chatbot significantly improved customer satisfaction scores and reduced the number of inquiries that had to be escalated to human agents. Customers felt more understood and received more relevant and efficient support. The company also saw a decrease in customer churn. This case study highlights how context engineering can transform a basic AI tool into a valuable and helpful resource by enabling it to understand the customer’s individual situation and history.

Key Insight: By providing AI customer service assistants with access to relevant customer data and interaction history, companies can significantly enhance the quality and efficiency of support, leading to increased customer satisfaction and loyalty.

Case Study 2: Enhancing Medical Diagnosis with Contextual Patient Information

The Challenge: Over-reliance on Isolated Symptoms in AI Diagnostic Tools

AI is increasingly being used to assist medical professionals in diagnosing diseases. However, early AI diagnostic tools often focused primarily on analyzing individual symptoms in isolation, potentially missing crucial contextual information such as the patient’s medical history, lifestyle, environmental factors, and even subtle cues from their recent health records.

Context Engineering in Action:

A research hospital in the Pacific Northwest developed an AI-powered diagnostic tool for a specific type of rare disease. Recognizing the importance of context, they engineered the AI to integrate a wide range of patient data beyond just the presenting symptoms. This included the patient’s complete medical history (past illnesses, medications, allergies), family medical history, lifestyle information (diet, exercise, smoking habits), recent lab results, and even notes from previous doctor’s visits. The AI was also connected to relevant medical literature to understand the broader context of the disease and potential co-morbidities. By providing the AI with this rich contextual information, the researchers aimed to improve the accuracy and speed of diagnosis, especially in complex cases where isolated symptoms might be misleading.

The Impact:

The context-aware AI diagnostic tool demonstrated a significantly higher accuracy rate in identifying the rare disease compared to traditional methods and earlier AI models that lacked comprehensive contextual input. It was also able to flag potential risks and complications that might have been overlooked otherwise. This case study underscores the critical role of context engineering in high-stakes applications like medical diagnosis, where a holistic understanding of the patient’s situation can lead to more timely and effective treatments.

Key Insight: Context engineering, by enabling a holistic view of a patient’s health and history, is crucial for improving the accuracy and reliability of AI in critical fields like medical diagnosis.

The Future of AI is Contextual

The future of AI is not about building bigger models; it’s about building smarter ones. And a smarter AI is one that can understand and leverage the richness of context, just as humans do. From a human-centered perspective, context engineering is the practice that makes AI more useful, more reliable, and more deeply integrated into our lives in a way that truly helps us. By moving beyond simple prompts and isolated data points, we can create AI systems that are not just powerful tools, but truly intelligent and invaluable partners. The work of bridging the gap between isolated data and meaningful context is where the next great wave of AI innovation will emerge, and it is a task that will demand our full attention.

Image credit: Pexels

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.







Why Explainable AI is the Key to Our Future

The Unseen Imperative


GUEST POST from Art Inteligencia

We’re in the midst of an AI revolution, a tidal wave of innovation that promises to redefine industries and transform our lives. We’ve seen algorithms drive cars, diagnose diseases, and manage our finances. But as these “black box” systems become more powerful and more pervasive, a critical question arises: can we truly trust them? The answer, for many, is a hesitant ‘maybe,’ and that hesitation is a massive brake on progress. The key to unlocking AI’s true, transformative potential isn’t just more data or faster chips. It’s Explainable AI (XAI).

XAI is not a futuristic buzzword; it’s the indispensable framework for today’s AI-driven world. It’s the set of tools and methodologies that peel back the layers of a complex algorithm, making its decisions understandable to humans. Without XAI, our reliance on AI is little more than a leap of faith. We must transition from trusting AI because it’s effective, to trusting it because we understand why and how it’s effective. This is the fundamental shift from a blind tool to an accountable partner.

This is more than a technical problem; it’s a strategic business imperative. XAI provides the foundation for the four pillars of responsible AI that will differentiate the market leaders of tomorrow:

  • Transparency: Moving beyond “what” the AI decided to “how” it arrived at that decision. This sheds light on the model’s logic and reasoning.
  • Fairness & Bias Detection: Actively identifying and mitigating hidden biases in the data or algorithm itself. This ensures that AI systems make equitable decisions that don’t discriminate against specific groups.
  • Accountability: Empowering humans to understand and take responsibility for AI-driven outcomes. When things go wrong, we can trace the decision back to its source and correct it.
  • Trust: Earning the confidence of users, stakeholders, and regulators. Trust is the currency of the future, and XAI is the engine that generates it.

For any organization aiming to deploy AI in high-stakes fields like healthcare, finance, or justice, XAI isn’t a nice-to-have—it’s a non-negotiable requirement. The competitive advantage will go to the companies that don’t just build powerful AI, but build trustworthy AI.

Case Study 1: Empowering Doctors with Transparent Diagnostics

Consider a team of data scientists who develop a highly accurate deep learning model to detect early-stage cancer from medical scans. The model’s accuracy is impressive, but it operates as a “black box.” Doctors are understandably hesitant to stake a patient’s life on a recommendation they can’t understand. The company then integrates an XAI framework. Now, when the model flags a potential malignancy, it doesn’t just give a diagnosis. It provides a visual heat map highlighting the specific regions of the scan that led to its conclusion, along with a confidence score. It also presents a list of similar, previously diagnosed cases from its training data, providing concrete evidence to support its claim. This explainable output transforms the AI from an un-auditable oracle into a valuable, trusted second opinion. The doctors, now empowered with understanding, can use their expertise to validate the AI’s findings, leading to faster, more confident diagnoses and, most importantly, better patient outcomes.
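The heat map described above is the kind of output an attribution technique produces. As a minimal sketch, assuming a PyTorch image classifier (the model, image and function names here are hypothetical placeholders, not the product in the case study), a simple gradient saliency map measures how much each pixel of the scan influenced the prediction:

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Return a per-pixel importance map for a single (C, H, W) input image.

    Hypothetical helper for illustration: `model` is any differentiable image
    classifier; brighter values mark pixels that most influenced the predicted
    class, which is the raw material for a diagnostic heat map.
    """
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)       # add batch dimension, track gradients
    scores = model(x)                                 # forward pass -> class logits
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()                   # d(top score) / d(each pixel)
    return x.grad.abs().max(dim=1).values.squeeze(0)  # collapse color channels, keep H x W
```

Production systems typically use richer attribution methods such as Grad-CAM or integrated gradients and pair the map with a confidence score, but the principle is the same: show the evidence, not just the verdict.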

Case Study 2: Proving Fairness in Financial Services

A major financial institution implements an AI-powered system to automate its loan approval process. The system is incredibly efficient, but its lack of transparency triggers concerns from regulators and consumer advocacy groups. Are its decisions fair, or is the algorithm subtly discriminating against certain demographic groups? Without XAI, the bank would be in a difficult position to defend its practices. By implementing an XAI framework, the company can now generate a clear, human-readable report for every single loan decision. If an application is denied, the report lists the specific, justifiable factors that contributed to the outcome—e.g., “debt-to-income ratio is outside of policy guidelines” or “credit history shows a high number of recent inquiries.” Crucially, it can also definitively prove that the decision was not based on protected characteristics like race or gender. This transparency not only helps the bank comply with fair lending laws but also builds critical trust with its customers, turning a potential liability into a significant source of competitive advantage.
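As an illustration of what such a report might look like, here is a minimal sketch that turns policy checks into human-readable reason codes. The field names and thresholds are hypothetical, not actual lending criteria, and a production system would derive the reasons from the model's feature contributions rather than hand-written rules:

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    debt_to_income: float    # monthly debt payments / gross monthly income
    recent_inquiries: int    # hard credit pulls in recent months
    credit_score: int

# Hypothetical policy thresholds, for illustration only; not real lending rules.
POLICY_RULES = [
    (lambda a: a.debt_to_income > 0.43,
     "Debt-to-income ratio is outside of policy guidelines"),
    (lambda a: a.recent_inquiries > 5,
     "Credit history shows a high number of recent inquiries"),
    (lambda a: a.credit_score < 620,
     "Credit score is below the minimum required for this product"),
]

def decision_report(app: LoanApplication) -> dict:
    """Produce a human-readable report for a single loan decision.

    Only financial factors appear here; protected characteristics such as
    race or gender are never part of the input, which is exactly what the
    report is meant to demonstrate to regulators and customers.
    """
    reasons = [message for rule, message in POLICY_RULES if rule(app)]
    return {"approved": not reasons, "reasons": reasons}

print(decision_report(LoanApplication(debt_to_income=0.51, recent_inquiries=7, credit_score=640)))
```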

The Architects of Trust: XAI Market Leaders and Startups to Watch

In the rapidly evolving world of Explainable AI (XAI), the market is being defined by a mix of established technology giants and innovative, agile startups. Major players like Google, Microsoft, and IBM are leading the way, integrating XAI tools directly into their cloud and AI platforms like Azure Machine Learning and IBM Watson. These companies are setting the industry standard by making explainability a core feature of their enterprise-level solutions. They are often joined by other large firms such as FICO and SAS Institute, which have long histories in data analytics and are now applying their expertise to ensure transparency in high-stakes areas like credit scoring and risk management. Meanwhile, a number of dynamic startups are pushing the boundaries of XAI. Companies like H2O.ai and Fiddler AI are gaining significant traction with platforms dedicated to providing model monitoring, bias detection, and interpretability for machine learning models. Another startup to watch is Arthur AI, which focuses on providing a centralized platform for AI performance monitoring to ensure that models remain fair and accurate over time. These emerging innovators are crucial for democratizing XAI, making sophisticated tools accessible to a wider range of organizations and ensuring that the future of AI is built on a foundation of trust and accountability.

The Road Ahead: A Call to Action

The future of AI is not about building more powerful black boxes. It’s about building smarter, more transparent, and more trustworthy partners. This is not a task for data scientists alone; it’s a strategic imperative for every business leader, every product manager, and every innovator. The companies that bake XAI into their processes from the ground up will be the ones that successfully navigate the coming waves of regulation and consumer skepticism. They will be the ones that win the trust of their customers and employees. They will be the ones that truly unlock the full, transformative power of AI. Are you ready to lead that charge?

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Gemini

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Boring AI is the Key to Better Customer Service

Boring AI is the Key to Better Customer Service

GUEST POST from Shep Hyken

Boring can be a good thing. When something works the way it’s supposed to, it shouldn’t be a surprise. There shouldn’t be friction or drama if a customer has a problem or wants a question answered. It should just be easy. And when it comes to customer service, “easy” and “boring” are good. The experience should just happen the way the customer wants it to happen. You might call that boring. I call that excellent.

That was the beginning of a conversation I had with Damon Covey, general manager of unified communications and collaboration for GoTo, on Amazing Business Radio. GoTo is one of the leading cloud communications companies, providing software and solutions to companies of all sizes and helping them implement AI systems that work, without the complexity and stress that can come from new technology. Covey’s goal for our conversation was to demystify AI, cutting through the noise and complexities of flashy AI and taking it down to a practical level. Boring was the word he liked to use, emphasizing it should be easy, simple and uncomplicated.

In our discussion, Covey said that large companies used to make six- and seven-figure investments to implement AI. Today, AI technology is far superior and, at the same time, much less expensive, so even the smallest companies can afford it. They can get advanced technology for hundreds of dollars, not hundreds of thousands of dollars. Covey said, “For example, a small bike shop or an automotive dealership can now provide the same advanced customer service options as large corporations.” With that in mind, here are the main takeaways from our conversation:

Conversational AI

Until recently (within the past two or three years), a basic chatbot had to follow pre-set rules. Conversational AI offers a much broader opportunity, allowing a computer to interact with people in a natural, human-like way. Today, AI can understand and respond to customers’ questions and issues with far more flexibility. It can recognize different languages and make sense of fumbled phrases, much as a human would. With conversational AI, businesses can provide 24/7 service, responding to customer queries and scheduling appointments even when customers reach out outside regular business hours.

Treat AI Like a Team Member

If you hire a new employee, you train them. Treat your AI solutions the same way. Covey said that, similar to training an employee, you need to set specific parameters and provide the AI with the necessary information to ensure it stays within the scope of your business requirements. He emphasized the importance of making sure the AI only draws from the information provided by your business, such as your website, FAQ pages, product manuals, etc., rather than pulling from a source outside of your company, to maintain accuracy and relevance. Covey said that AI should be continuously optimized and trained over time to improve its performance, much like you would train and coach a human employee to expand their capabilities.
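One common way to enforce that scope is retrieval grounding: before answering, fetch the most relevant passages from the business’s own content and let the assistant answer only from those. The sketch below uses simple TF-IDF retrieval; the bike-shop documents are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base: only content the business itself provides.
COMPANY_DOCS = [
    "Store hours: Monday through Saturday, 9am to 6pm.",
    "Returns are accepted within 30 days with a receipt.",
    "Every new bike purchase includes free tune-ups for 12 months.",
]

vectorizer = TfidfVectorizer().fit(COMPANY_DOCS)
doc_vectors = vectorizer.transform(COMPANY_DOCS)

def retrieve_context(question: str, top_k: int = 2) -> list[str]:
    """Return the company documents most relevant to the question.

    Grounding the assistant on these snippets, rather than the open web,
    is what keeps its answers within the scope of the business.
    """
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    return [COMPANY_DOCS[i] for i in scores.argsort()[::-1][:top_k]]

print(retrieve_context("What time do you close on Friday?"))
```

The retrieved snippets then become the only source material the assistant answers from, and the index can be rebuilt whenever the website or FAQ changes, which is the ongoing training and optimization Covey describes.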

Productivity: Automating Processes

Covey talked about automating processes. Anything you do more than three times can be a candidate for AI automation. For example, AI can integrate with a business’ telecommunications system to automate the process of taking notes during calls. It can then summarize the call, put the information into the customer’s record and create a list of next steps, if appropriate. This is a simple function that helps employees be more productive. Instead of an employee typing notes and summarizing the call, AI can handle the task so the employee can move on to helping the next customer.
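As a minimal sketch of that kind of automation, assuming an OpenAI-style chat completions client (any comparable LLM API would work, and the model name and prompt are illustrative):

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM chat API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_call(transcript: str) -> str:
    """Summarize a support call and list next steps, ready to drop into the customer's record."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize this customer call in three bullet points, "
                        "then list any follow-up actions for the agent."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

In a real workflow, the summary would be written to the customer’s record and the follow-up actions turned into tasks; the automation itself stays deliberately boring: one repeatable job, handled the same way on every call.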

Augmenting the Business

AI can help businesses do things they don’t normally do, such as remaining open for certain functions (like customer support) after hours. It can act as an after-hours receptionist, answering phone calls, setting appointments or providing basic information to customers. That turns a business that’s typically open during traditional hours into a 24/7 operation.

It is Easier Than You Think

At the end of the interview, Covey dropped a nugget of wisdom that is the perfect way to close this article. For many, especially smaller organizations, deciding what technology to use and how best to use AI can be daunting. It shouldn’t be. Covey says, “Start with the problem you want to solve, and solve for that problem.” He added that you should start by using the technology on small problems. Once you understand how it works, the more complicated issues become easier to solve.

And that brings us back to where we started. AI doesn’t need to be complicated or flashy. It should be boring—in a good way. Start small, focus on one problem at a time and let AI do what it’s supposed to do: make customer service easier and more efficient. When done right, your customers won’t be amazed by the AI—they’ll just be amazed by how easy it is to do business with you.

Image Credit: Unsplash

This article was originally published on Forbes.com

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.