Category Archives: Technology

Customer Experience is Changing

If You Don’t Like Change, You’re Going to Hate Extinction

GUEST POST from Shep Hyken

Depending on which studies and articles you read, customer service and customer experience (CX) are getting better … or they’re getting worse. Our customer service and CX research found that 60% of consumers had better customer service experiences than last year, and in general, 82% are happy with the customer service they receive from the companies and brands with which they do business.

Yet, some studies claim customer service is worse than ever. Regardless, more companies than ever are investing in improving CX. Some nail it, but even with an investment, some still struggle. Another telling stat is the growing number of companies attending CX conferences.

Last month, more than 5,000 people representing 1,382 companies attended and participated in Contact Center Week (CCW), the world’s largest conference dedicated to customer service and customer experience. This was the largest attendance to date, representing a 25% growth over last year.

Many recognized brands and CX leaders attended and shared their wisdom from the main stage and breakout rooms. The expo hall featured demonstrations of the latest and greatest solutions to create more effective customer support experiences.

The primary reason I attend conferences like CCW is to stay current with the latest advancements and solutions in CX and to gain insight into how industry leaders think. AI took center stage for most of the presentations. No doubt, it continues to improve and gain acceptance. With that in mind, here are some of my favorite takeaways from the sessions I attended, along with my commentary:

AI for Training

Becky Ploeger, global head of reservations and customer care at Hilton, uses AI to create micro-lessons for employee training. Hilton is using Centrical’s platform to take various topics and turn them into coaching modules. Employees participate in simulations that replicate customer issues.

Can We Trust AI?

As excited as Ploeger is about AI (and agentic AI), there is still trepidation. CX leaders must recognize that AI is not yet perfect and will occasionally provide inaccurate information. Ploeger said, “We have years and years of experience with agents. We only have six months of experience with agentic AI.”

Wrong Information from AI Costs a Company Money—or Does It?

Gadi Shamia, CEO of Replicant, an AI voice technology company, commented on the mistakes AI makes. In general, CX leaders are complaining that going digital is costing the company money because of the bad information customers receive. Shamia asks, “How much are you losing?” Bad information can cause a customer to defect to a competitor, but so can a bad experience with a live customer service rep. So, how often does AI provide incorrect information? How many of those customers leave versus trying to connect with an agent? The metrics you choose to define success with a digital self-service experience need to include more than measuring bad experiences. Mark Killick, SVP of experiential operations at Shipt, weighed in on this topic, saying, “If we don’t fix the problems of providing bad information, we’ll just deliver bad information faster.”
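To make Shamia’s question concrete, here is a minimal back-of-the-envelope sketch. Every rate and dollar figure in it is a hypothetical assumption for illustration, not data from the conference:

```python
# Hypothetical sketch of Shamia's "How much are you losing?" question:
# compare expected defections from AI mistakes against the live-agent baseline.
# All rates and dollar figures below are illustrative assumptions.

monthly_contacts = 50_000         # self-service interactions per month (assumed)
ai_error_rate = 0.04              # share of AI answers that are wrong (assumed)
defect_after_ai_error = 0.10      # customers who leave after a bad AI answer (assumed)
agent_bad_experience_rate = 0.06  # bad experiences with live reps (assumed)
defect_after_agent_error = 0.08   # customers who leave after a bad rep call (assumed)
lifetime_value = 900              # revenue lost per defecting customer (assumed)

ai_loss = monthly_contacts * ai_error_rate * defect_after_ai_error * lifetime_value
agent_loss = monthly_contacts * agent_bad_experience_rate * defect_after_agent_error * lifetime_value

print(f"Expected monthly loss from AI errors:    ${ai_loss:,.0f}")
print(f"Expected monthly loss from agent errors: ${agent_loss:,.0f}")
```

Under these assumed numbers, bad live-agent experiences actually cost more than AI errors, which is exactly why the defection math, not the error count alone, should drive the metric.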

Making the Case to Invest in AI

Mariano Tan, president and CEO of Prosodica, says, “Nothing gets funded without a clear business case.” The person in charge of the budget for customer service and CX initiatives (typically the CFO in larger companies) won’t “open the wallet” without proof that the expenditure will yield a return on investment (ROI). People in charge of budgets like numbers, so when you create your “clear business case,” be sure to include the numbers that make a compelling reason to invest in CX. Simply saying, “We’ll reduce churn,” isn’t enough. How much churn? That’s a number. How much does it mean to the bottom line? Another number. Numbers sell!
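As a hedged illustration of what a numbers-driven business case can look like, here is a minimal sketch. All figures are hypothetical placeholders you would replace with your own churn and revenue data:

```python
# Hypothetical business case for a CX investment, in numbers.
# Every figure below is an illustrative assumption -- plug in your own.

customers = 100_000               # current customer base (assumed)
annual_value_per_customer = 600   # average annual revenue per customer (assumed)
current_churn_rate = 0.18         # 18% of customers leave each year (assumed)
projected_churn_rate = 0.15       # churn after the CX investment (assumed)
investment = 1_500_000            # cost of the initiative (assumed)

customers_saved = customers * (current_churn_rate - projected_churn_rate)
retained_revenue = customers_saved * annual_value_per_customer
roi = (retained_revenue - investment) / investment

print(f"Customers retained per year: {customers_saved:,.0f}")
print(f"Revenue protected per year:  ${retained_revenue:,.0f}")
print(f"First-year ROI:              {roi:.0%}")
```

A CFO can argue with any of these assumptions, and that is precisely the point: the argument is now about numbers, not about faith in CX.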

Final Words: Love Change, or Else

Neil Gibson, SVP of CX at FedEx, was part of a panel and shared a quote that is the perfect way to end the article. AI is rapidly changing the way we do business. We must keep up, or else. Gibson quoted Fred Smith, the first CEO and founder of FedEx, who said, “If you don’t like change, you’re going to hate extinction.” In other words, keep up or watch your competition blow past you.

This article was originally published on Forbes.com.

Image Credits: Pixabay

Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?

GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course a part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also, of course, a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us, me included, who have interacted with AI at any length have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: This humanizing of the user interface brings multiple benefits. It is of course a part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of the output. Furthermore, by creating an illusion of human-like intelligence, it implicitly implies emotions, compassion and critical thinking behind the output, even if they’re not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society that we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, and with the empathy, emotional intelligence and compassion that come with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or to add context in the way a human can.

The Uncanny Valley: That brings me to the Uncanny Valley. This describes what happens when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously wrong, but often still ‘felt’ wrong. Early drum machines were so perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and so had to be modified to include randomness that was below the threshold of conscious awareness but made them ‘feel’ real.

This difference between conscious and unconscious evaluation creates cognitive dissonance that can result in content feeling odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’. For example, over-photoshopping in a print ad, or poor CGI. But be careful what you wish for. AI appears to have marched through the ‘uncanny valley’ to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This begs a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, or exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that used to play a key role in translating search results into output can now be skipped. Synthesizing and processing output from an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough to even realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose he or she can cut and paste into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that also brings up another side effect of low engagement with output – confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and it is still capable of making really stupid mistakes. Okay, so are humans, but AI is still far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer to this lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and the AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, to our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is that we let it become an oracle. We not only treat its output as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparent human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger, where instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has eroded our basic writing skills, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water for data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash

The Most Challenging Obstacles to Achieving Artificial General Intelligence

The Unclimbed Peaks

GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
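As an illustration of the general idea (and emphatically not Causaly’s actual implementation), here is a toy sketch showing how cue phrases such as “induces” or “results in” can separate causal claims from mere co-occurrence:

```python
import re

# Toy sketch of causal-phrase extraction from biomedical-style sentences.
# This is NOT Causaly's implementation -- just an illustration of how cue
# phrases can distinguish causal claims from simple co-occurrence.

CAUSAL_CUES = r"(induces|inhibits|results in|leads to|causes)"
PATTERN = re.compile(
    rf"(?P<cause>[\w\s-]+?)\s+{CAUSAL_CUES}\s+(?P<effect>[\w\s-]+)",
    re.IGNORECASE,
)

sentences = [
    "Drug X induces apoptosis in tumor cells",          # causal cue present
    "Drug X is often mentioned alongside Disease Y",    # co-occurrence only
    "Chronic inflammation results in tissue fibrosis",  # causal cue present
]

for s in sentences:
    m = PATTERN.search(s)
    if m:
        print(f"CAUSAL: {m.group('cause').strip()} -> {m.group('effect').strip()}")
    else:
        print(f"CO-OCCURRENCE ONLY: {s}")
```

A production system would use trained language models rather than regular expressions, but the contrast between the two kinds of sentences is the essential point.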

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
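For readers who want a feel for the mechanics, here is a minimal sketch of the preference-comparison step at the heart of reward modeling for RLHF, using a Bradley-Terry style loss. The scores are made-up stand-ins for a learned reward model’s outputs, and real pipelines are far more involved:

```python
import math

# Minimal sketch of the preference step behind RLHF reward modeling.
# A reward model scores two candidate responses; the Bradley-Terry loss
# pushes the score of the human-preferred response above the other.
# All scores here are hypothetical stand-ins for a learned model's outputs.

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-likelihood that the chosen response beats the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# A human trainer preferred response A over response B for the same prompt.
score_a, score_b = 1.8, 0.4   # hypothetical reward-model scores

print(f"Loss when ranked correctly: {preference_loss(score_a, score_b):.3f}")
# If the reward model ranked them the wrong way round, the loss rises, and
# gradient descent would nudge the model toward the human judgment.
print(f"Loss when mis-ranked:       {preference_loss(score_b, score_a):.3f}")
```

Full RLHF then uses a reward model trained on many such comparisons to fine-tune the language model itself.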

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E

The Crisis Innovation Trap

Why Proactive Innovation Wins

by Braden Kelley and Art Inteligencia

In the narrative of business, we often romanticize the idea of “crisis innovation”: the sudden, high-stakes moment when a company, backed against a wall, unleashes a burst of creativity to survive. The pandemic, for instance, forced countless businesses to pivot their models overnight. While this showcases incredible human resilience, it also reveals a dangerous and costly trap: the belief that innovation is something you turn on only when there’s an emergency. As a human-centered change and innovation thought leader, I’ve seen firsthand that relying on crisis as a catalyst is a recipe for short-term fixes and long-term decline. True, sustainable innovation is not a reaction; it’s a proactive, continuous discipline.

The problem with waiting for a crisis is that by the time it hits, you’re operating from a position of weakness. You’re making decisions under immense pressure, with limited resources, and with a narrow focus on survival. This reactive approach rarely leads to truly transformative breakthroughs. Instead, it produces incremental changes and tactical adaptations—often at a steep price in terms of burnout, strategic coherence, and missed opportunities. The most successful organizations don’t innovate to escape a crisis; they innovate continuously to prevent one from ever happening.

The Cost of Crisis-Driven Innovation

Relying on crisis as your innovation driver comes with significant hidden costs:

  • Reactive vs. Strategic: Crisis innovation is inherently reactive. You’re fixing a symptom, not addressing the root cause. This prevents you from engaging in the deep, strategic thinking necessary for true market disruption.
  • Loss of Foresight: When you’re in a crisis, all attention is on the immediate threat. This short-term focus blinds you to emerging trends, shifting customer needs, and new market opportunities that could have been identified and acted upon proactively.
  • Burnout and Exhaustion: Innovation requires creative energy. Forcing your teams into a constant state of emergency to innovate leads to rapid burnout, high turnover, and a culture of fear, not creativity.
  • Suboptimal Outcomes: The solutions developed in a crisis are often rushed, inadequately tested, and sub-optimized. They are designed to solve an immediate problem, not to create a lasting competitive advantage.

“Crisis innovation is a sprint for survival. Proactive innovation is a marathon for market leadership. You can’t win a marathon by only practicing sprints when the gun goes off.”

Building a Culture of Proactive, Human-Centered Innovation

The alternative to the crisis innovation trap is to embed innovation into your organization’s DNA. This means creating a culture where curiosity, experimentation, and a deep understanding of human needs are constant, not sporadic. It’s about empowering your people to solve problems and create value every single day.

  1. Embrace Psychological Safety: Create an environment where employees feel safe to share half-formed ideas, question assumptions, and even fail. This is the single most important ingredient for continuous innovation.
  2. Allocate Dedicated Resources: Don’t expect innovation to happen in people’s spare time. Set aside dedicated time, budget, and talent for exploratory projects and initiatives that don’t have an immediate ROI.
  3. Focus on Human-Centered Design: Continuously engage with your customers and employees to understand their frustrations and aspirations. True innovation comes from solving real human problems, not just from internal brainstorming.
  4. Reward Curiosity, Not Just Results: Celebrate learning, even from failures. Recognize teams for their efforts in exploring new ideas and for the insights they gain, not just for the products they successfully launch.

Case Study 1: Blockbuster vs. Netflix – The Foresight Gap

The Challenge:

In the late 1990s, Blockbuster was the undisputed king of home video rentals. It had a massive physical footprint, brand recognition, and a highly profitable business model based on late fees. The crisis of digital disruption and streaming was not a sudden event; it was a slow-moving signal on the horizon.

The Reactive Approach (Blockbuster):

Blockbuster’s management was aware of the shift to digital, but they largely viewed it as a distant threat. They were so profitable from their existing model that they had no incentive to proactively innovate. When Netflix began gaining traction with its subscription-based, DVD-by-mail service, Blockbuster’s response was a reactive, half-hearted attempt to mimic it. They launched an online service but failed to integrate it with their core business, and their culture remained focused on the physical store model. They only truly panicked and began a desperate, large-scale innovation effort when it was already too late and the market had irreversibly shifted to streaming.

The Result:

Blockbuster’s crisis-driven innovation was a spectacular failure. By the time they were forced to act, they lacked the necessary strategic coherence, internal alignment, and cultural agility to compete. They didn’t innovate to get ahead; they innovated to survive, and they failed. They went from market leader to bankruptcy, a powerful lesson in the dangers of waiting for a crisis to force your hand.


Case Study 2: Lego’s Near-Death and Subsequent Reinvention

The Challenge:

In the early 2000s, Lego was on the brink of bankruptcy. The brand, once a global icon, had become a sprawling, unfocused company that was losing relevance with children increasingly drawn to video games and digital entertainment. The company’s crisis was not a sudden external shock, but a slow, painful internal decline caused by a lack of proactive innovation and a departure from its core values. They had innovated, but in a scattered, unfocused way that diluted the brand.

The Proactive Turnaround (Lego):

Lego’s new leadership realized that a reactive, last-ditch effort wouldn’t save them. They saw the crisis as a wake-up call to fundamentally reinvent how they innovate. Their strategy was not just to survive but to thrive by returning to a proactive, human-centered approach. They went back to their core product, the simple plastic brick, and focused on deeply understanding what their customers—both children and adult fans—wanted. They launched several initiatives:

  • Re-focus on the Core: They trimmed down their product lines and doubled down on what made Lego special—creativity and building.
  • Embracing the Community: They proactively engaged with their most passionate fans, the “AFOLs” (Adult Fans of Lego), and co-created new products like the highly successful Lego Architecture and Ideas series. This wasn’t a reaction to a trend; it was a strategic partnership.
  • Thoughtful Digital Integration: Instead of panicking and launching a thousand digital products, they carefully integrated their physical and digital worlds with games like Lego Star Wars and movies like The Lego Movie. These weren’t rushed reactions; they were part of a long-term, strategic vision.

The Result:

Lego’s transformation from a company on the brink to a global powerhouse is a powerful example of the superiority of proactive innovation. By not just reacting to their crisis but using it as a catalyst to build a continuous, human-centered innovation engine, they not only survived but flourished. They turned a painful crisis into a foundation for a new era of growth, proving that the best time to innovate is always, not just when you have no other choice.


The Eight I’s of Infinite Innovation

Braden Kelley’s Eight I’s of Infinite Innovation provides a comprehensive framework for organizations seeking to embed continuous innovation into their DNA. The model starts with Ideation, the spark of new concepts, which must be followed by Inspiration—connecting those ideas to a compelling, human-centered vision. This vision is refined through Investigation, a process of deeply understanding customer needs and market dynamics, leading to the Iteration of prototypes and solutions based on real-world feedback. The framework then moves from development to delivery with Implementation, the critical step of bringing a viable product to market. This is not the end, however; it’s a feedback loop that requires Invention of new business models, a constant process of Improvement based on outcomes, and finally, the cultivation of an Innovation culture where the cycle can repeat infinitely. Each ‘I’ builds upon the last, creating a holistic and sustainable engine for growth.

Conclusion: The Time to Innovate is Now

The notion of “crisis innovation” is seductive because it offers a heroic narrative. But behind every such story is a cautionary tale of a company that let a problem fester for far too long. The most enduring, profitable, and relevant organizations don’t wait for a burning platform to jump; they are constantly building new platforms. They have embedded a culture of continuous, proactive innovation driven by a deep understanding of human needs. They innovate when times are good so they are prepared when times are tough.

The time to innovate is not when your stock price plummets or your competitor launches a new product. The time to innovate is now, and always. By making innovation a fundamental part of your business, you ensure your organization’s longevity and its ability to not just survive the future, but to shape it.

Image credit: Pixabay

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.

McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI

GUEST POST from Robyn Bolton

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But, for people working on pilots,  it might as well be the Pit of Despair because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and warrant allocating resources to) in late 2022/early 2023.  That means that we’re expecting ROI within 2 years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes.  Cloud computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be considered in the context of their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach of deploying “vertical” function-specific tools doesn’t fare much better.  In fact, less than 10% make it past the pilot stage, despite having higher potential for economic impact.

Better results require reinvention, not optimization: McKinsey found that call centers giving agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails saw only a 5-10% reduction in call time. Centers using AI tools to automate tasks without agent initiation reduced call time by 20-40%.

Centers reinventing processes around AI agents? A 60-90% reduction in call time, with 80% of calls automatically resolved.

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

Image credit: Microsoft CoPilot

Why Context Engineering is the Next Frontier in AI

by Braden Kelley and Art Inteligencia

Observing the rapid evolution of artificial intelligence, one thing has become abundantly clear: while raw processing power and sophisticated algorithms are crucial, the true key to unlocking AI’s transformative potential lies in its ability to understand and leverage context. We’ve seen remarkable advancements in generative AI and machine learning, but these technologies often stumble when faced with the nuances of real-world situations. This is why I believe context engineering – the discipline of explicitly designing and managing the contextual information available to AI systems – is not just an optimization, but the next fundamental frontier in AI innovation.

Think about human intelligence. Our ability to understand language, make decisions, and solve problems is deeply rooted in our understanding of context. A single word can have multiple meanings depending on the sentence it’s used in. A request can be interpreted differently based on the relationship between the people involved or the situation at hand. For AI to truly augment human capabilities and integrate seamlessly into our lives, it needs a similar level of contextual awareness. Current AI models often operate on relatively narrow inputs, lacking the broader understanding of user intent, environmental factors, and historical interactions that humans take for granted. Context engineering aims to bridge this gap, moving AI from being a powerful but often brittle tool to a truly intelligent and adaptable partner.

In the realm of artificial intelligence, context engineering is the strategic and human-centered practice of providing an AI system with the relevant background information it needs to understand a query or situation accurately. It goes beyond simple prompt design by actively building and managing the comprehensive context that surrounds an interaction. This includes integrating historical data, user profiles, real-time environmental factors, and external knowledge sources, allowing the AI to move from a narrow, transactional understanding to a more holistic, human-like awareness. By engineering this context, we enable AI to produce more accurate, personalized, and genuinely useful responses, bridging the gap between a machine’s logic and the nuanced complexity of human communication and problem-solving.

The field of context engineering encompasses a range of techniques and strategies focused on providing AI systems with relevant and actionable context. This includes:

  • Prompt Engineering: Crafting detailed and context-rich prompts that guide AI models towards desired outputs.
  • Memory Management: Implementing mechanisms for AI to remember past interactions and use that history to inform current responses.
  • External Knowledge Integration: Connecting AI systems to external databases, APIs, and real-time data streams to provide up-to-date and relevant information.
  • User Profiling and Personalization: Leveraging data about individual users to tailor AI responses to their specific needs and preferences.
  • Situational Awareness: Incorporating real-world contextual cues, such as location, time of day, and user activity, to make AI more responsive to the current situation.
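To make these techniques concrete, here is a toy sketch that combines several of them into a single context-assembly step. All data sources, field names, and the prompt shape are hypothetical; a real system would pull from a CRM, a vector store, and so on:

```python
# Toy sketch: assembling context for an AI call by combining several of the
# techniques above. Everything here is an illustrative assumption.

from datetime import datetime

def lookup_knowledge(query: str) -> str:
    # Stub for external knowledge integration (database, API, or vector search).
    return "billing FAQ: refunds post within 3-5 business days"

def build_context(user_id: str, query: str, memory: list[str]) -> str:
    # User profiling: stand-in for a real profile lookup.
    profile = {"user_id": user_id, "plan": "premium", "region": "US-West"}
    situation = f"local time: {datetime.now():%H:%M}"   # situational awareness
    history = "\n".join(memory[-5:])                    # memory management: last 5 turns
    # Prompt engineering: a structured, context-rich prompt for the model.
    return (
        f"User profile: {profile}\n"
        f"Situation: {situation}\n"
        f"Relevant knowledge: {lookup_knowledge(query)}\n"
        f"Conversation so far:\n{history}\n"
        f"Current question: {query}"
    )

memory = ["User: My bill looks wrong.", "Assistant: I can check that for you."]
print(build_context("cust-042", "When will my refund arrive?", memory))
```

A production system would draw each of these fields from live systems and govern them with the privacy guardrails discussed below.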

A Human-Centered Blueprint for Implementation

Implementing context engineering is not a one-time technical fix; it is a continuous, human-centered practice that must be embedded into your innovation lifecycle. To move beyond a static, one-size-fits-all model and create truly intelligent, context-aware AI, consider this blueprint for action:

  • Step 1: Start with the Human Context. Before you even think about data streams or algorithms, you must first deeply understand the human being you are serving. Conduct ethnographic research, user interviews, and journey mapping to identify what context is truly relevant to your users. What are their goals? What unspoken needs do they have? What external factors influence their decisions? The most valuable context often isn’t in a database—it’s in the real-world experiences and emotional states of your users.
  • Step 2: Map the Contextual Landscape. Once you understand the human context, you can begin to identify and integrate the necessary data. This involves creating a “contextual map” that connects the human need to the available data sources. For a customer service AI, this map would link a customer’s inquiry to their purchase history, recent support tickets, and even their browsing behavior on your website. For a medical AI, the map would link a patient’s symptoms to their genetic data, environmental exposure, and family medical history. This mapping process ensures that the AI’s inputs are directly tied to what matters most to the user.
  • Step 3: Build a Dynamic Feedback Loop. The context of a situation is constantly changing. A great context-aware AI is not a static system but a learning one. Implement a continuous feedback loop where human users can correct the AI’s understanding, provide additional information, and refine its responses. This “human-in-the-loop” approach is vital for ethical and accurate AI. It allows the system to learn from its mistakes and adapt to new, unforeseen contexts, ensuring its relevance and reliability over time.
  • Step 4: Prioritize Privacy and Ethical Guardrails. The more context you provide to an AI, the more critical it becomes to manage that information responsibly. From the outset, you must design for privacy, collecting only the data you absolutely need and ensuring it is stored and used in a secure and transparent manner. Establish clear ethical guardrails for how the AI uses and interprets contextual information, particularly for sensitive data. This is not just a regulatory requirement; it is a fundamental aspect of building trust with your users and ensuring that your AI serves humanity, rather than exploiting it.
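As a minimal illustration of the dynamic feedback loop described in Step 3, here is a hypothetical sketch in which human corrections are stored and folded back into future answers. The function names and storage shape are assumptions for illustration, not a reference design:

```python
# Hypothetical sketch of a human-in-the-loop feedback store (Step 3).
# Names and storage shape are illustrative assumptions.

corrections: dict[str, str] = {}   # query -> human-corrected answer

def model_answer(query: str) -> str:
    return "Our store opens at 8am."                # stubbed raw model output

def answer(query: str) -> str:
    if query in corrections:                        # prefer human-verified context
        return corrections[query] + " (human-verified)"
    return model_answer(query)                      # otherwise, the raw model

def record_correction(query: str, corrected: str) -> None:
    corrections[query] = corrected                  # humans refine the system over time

print(answer("When does the store open?"))          # before feedback: raw model
record_correction("When does the store open?", "We open at 9am on weekends.")
print(answer("When does the store open?"))          # after feedback: human-corrected
```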

By following these best practices, you can move beyond simple, reactive AI to a proactive, human-centered intelligence that understands the world not just as a collection of data points, but as a rich tapestry of interconnected context. This is the work that will define the next generation of AI and, in doing so, will fundamentally change how technology serves humanity.

Case Study 1: Improving Customer Service with Context-Aware AI Assistants

The Challenge: Generic and Frustrating Customer Service Chatbots

Many companies have implemented AI-powered chatbots to handle customer inquiries. However, these chatbots often struggle with complex or nuanced issues, leading to frustrating experiences for customers who have to repeat information or are given irrelevant answers. The lack of contextual awareness is a major limitation.

Context Engineering in Action:

A telecommunications company sought to improve its customer service chatbot by implementing robust context engineering. They integrated the chatbot with their CRM system, allowing it to access the customer’s purchase history, past interactions, and current account status. They also implemented memory management so the chatbot could retain information shared earlier in the conversation. Furthermore, they used prompt engineering to guide the chatbot to ask clarifying questions and to tailor its responses based on the specific product or service the customer was inquiring about. For example, if a customer asked about a billing issue, the chatbot could access their latest bill and provide specific details, rather than generic troubleshooting steps. It could also remember if the customer had contacted support recently for a related issue and take that into account.

The Impact:

The context-aware chatbot significantly improved customer satisfaction scores and reduced the number of inquiries that had to be escalated to human agents. Customers felt more understood and received more relevant and efficient support. The company also saw a decrease in customer churn. This case study highlights how context engineering can transform a basic AI tool into a valuable and helpful resource by enabling it to understand the customer’s individual situation and history.

Key Insight: By providing AI customer service assistants with access to relevant customer data and interaction history, companies can significantly enhance the quality and efficiency of support, leading to increased customer satisfaction and loyalty.

Case Study 2: Enhancing Medical Diagnosis with Contextual Patient Information

The Challenge: Over-reliance on Isolated Symptoms in AI Diagnostic Tools

AI is increasingly being used to assist medical professionals in diagnosing diseases. However, early AI diagnostic tools often focused primarily on analyzing individual symptoms in isolation, potentially missing crucial contextual information such as the patient’s medical history, lifestyle, environmental factors, and even subtle cues from their recent health records.

Context Engineering in Action:

A research hospital in the Pacific Northwest developed an AI-powered diagnostic tool for a specific type of rare disease. Recognizing the importance of context, they engineered the AI to integrate a wide range of patient data beyond just the presenting symptoms. This included the patient’s complete medical history (past illnesses, medications, allergies), family medical history, lifestyle information (diet, exercise, smoking habits), recent lab results, and even notes from previous doctor’s visits. The AI was also connected to relevant medical literature to understand the broader context of the disease and potential co-morbidities. By providing the AI with this rich contextual information, the researchers aimed to improve the accuracy and speed of diagnosis, especially in complex cases where isolated symptoms might be misleading.

The Impact:

The context-aware AI diagnostic tool demonstrated a significantly higher accuracy rate in identifying the rare disease compared to traditional methods and earlier AI models that lacked comprehensive contextual input. It was also able to flag potential risks and complications that might have been overlooked otherwise. This case study underscores the critical role of context engineering in high-stakes applications like medical diagnosis, where a holistic understanding of the patient’s situation can lead to more timely and effective treatments.

Key Insight: Context engineering, by enabling a holistic view of a patient’s health and history, is crucial for improving the accuracy and reliability of AI in critical fields like medical diagnosis.

The Future of AI is Contextual

The future of AI is not about building bigger models; it’s about building smarter ones. And a smarter AI is one that can understand and leverage the richness of context, just as humans do. From a human-centered perspective, context engineering is the practice that makes AI more useful, more reliable, and more deeply integrated into our lives in a way that truly helps us. By moving beyond simple prompts and isolated data points, we can create AI systems that are not just powerful tools, but truly intelligent and invaluable partners. The work of bridging the gap between isolated data and meaningful context is where the next great wave of AI innovation will emerge, and it is a task that will demand our full attention.

Image credit: Pexels

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.

The Future is Rotary

Human-Centered Innovation in Rotating Detonation Engines

GUEST POST from Art Inteligencia

For decades, the pursuit of more efficient and sustainable propulsion systems has driven innovation in aerospace and beyond. Among the most promising advancements on the horizon is the Rotating Detonation Engine (RDE). This technology, which harnesses supersonic combustion waves traveling in a circular channel, offers the potential for significant leaps in fuel efficiency and reduced emissions compared to traditional combustion methods. However, the true impact of RDEs will not solely be defined by their technical prowess, but by a human-centered approach to their development and integration.

A Paradigm Shift for a Better Future

Human-centered change and innovation focuses on understanding and addressing the needs and aspirations of people affected by technological advancements. In the context of RDEs, this means considering not only the engineers and scientists developing the technology but also the pilots, passengers, communities living near airports, and the planet as a whole. The potential benefits are immense:

  • Enhanced Fuel Efficiency: RDEs promise a significant reduction in fuel consumption, leading to lower operating costs and a smaller carbon footprint for air travel and other applications.
  • Reduced Emissions: More efficient combustion can translate to lower emissions of harmful pollutants, contributing to cleaner air and a healthier environment.
  • Increased Performance: The unique properties of detonation combustion could lead to more powerful and lighter engines, opening up new possibilities for aircraft design and space travel.
  • Economic Growth: The development and adoption of RDE technology will create new jobs in research, manufacturing, and maintenance, fostering economic growth.

Navigating the Winds of Change: Key Areas for Innovation

Realizing the full potential of RDEs requires a concerted effort across various domains, guided by a human-centered perspective:

  • Materials Science: Developing materials that can withstand the extreme temperatures and pressures of detonation combustion is crucial. This requires innovative research and collaboration between material scientists and engineers.
  • Engine Design and Control Systems: Creating robust and reliable RDE designs, along with sophisticated control systems to manage the complex detonation process, is essential for safe and efficient operation. Human factors engineering will play a vital role in designing intuitive and user-friendly control interfaces.
  • Manufacturing Processes: Scaling up the production of RDE components will require innovative manufacturing techniques that are both cost-effective and environmentally sustainable.
  • Infrastructure Development: The widespread adoption of RDEs may necessitate changes in fuel production, storage, and delivery infrastructure. Planning for these changes with community needs and environmental impact in mind is critical.
  • Education and Training: A new generation of engineers, technicians, and pilots will need to be trained in the principles and operation of RDE technology. Educational programs must adapt to incorporate this emerging field.
  • Regulatory Frameworks: Governments and regulatory bodies will need to develop new standards and certifications to ensure the safe and responsible deployment of RDE-powered systems. Engaging stakeholders in the development of these frameworks is crucial.

Companies and Startups to Watch

The landscape of RDE development is dynamic, with several established aerospace companies and innovative startups making significant strides. Keep an eye on organizations like GE Aerospace and Rolls-Royce, which have publicly acknowledged their research into detonation technologies. Emerging startups such as Venus Aerospace are focusing on leveraging RDEs for high-speed flight, while university programs like Purdue University’s research labs often spin out promising technologies. These entities are pushing the boundaries of RDE technology and demonstrating potential pathways for its future application, always with an eye on the practical and societal implications of their work.

Case Studies in Human-Centered RDE Application

Case Study 1: Sustainable Air Travel

Imagine a future where short-haul flights are powered by RDEs running on sustainable aviation fuels (SAFs). The increased fuel efficiency of RDEs could significantly reduce the amount of SAF required per flight, making sustainable travel more economically viable and environmentally friendly. This benefits passengers through potentially lower ticket prices in the long run and contributes to the well-being of communities near airports by reducing noise and air pollution. Aircraft manufacturers would need to prioritize designs that minimize noise impact and ensure passenger comfort within the new performance parameters of RDE-powered aircraft. This human-centered approach ensures that the technological advancement directly addresses the need for sustainable and accessible air travel.

Case Study 2: Enhanced Emergency Response

Consider the application of compact, high-power RDEs in heavy-lift drones for disaster relief. Their potential for increased payload capacity and range could enable faster and more efficient delivery of critical supplies to disaster-stricken areas. For first responders and affected populations, this translates to quicker access to necessities like medical equipment, food, and shelter. Developing user-friendly drone control systems and ensuring the safe operation of these powerful machines in complex, real-world scenarios are key human-centered considerations. The focus here is on leveraging RDE technology to improve the speed and effectiveness of humanitarian aid, directly impacting the lives and safety of vulnerable individuals.

A Future Forged Together

The future of rotating detonation engines is not just about technological advancement; it’s about creating a future where propulsion is more efficient, sustainable, and ultimately benefits humanity. By embracing a human-centered approach to innovation, we can navigate the challenges and unlock the transformative potential of RDEs, ushering in a new era of cleaner, more powerful, and more responsible propulsion.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Gemini

Decoding the Code of Life

Human-Centered Innovation in Synthetic Biology

Decoding the Code of Life

GUEST POST from Art Inteligencia

From my vantage point here in Seattle, I’m constantly tracking emerging technologies that hold the potential to reshape our world. One area that consistently sparks my interest, and demands a strong human-centered lens, is synthetic biology. This revolutionary field combines biology and engineering principles to design and build new biological parts, devices, and systems—essentially allowing us to program life itself. While the possibilities are immense, so too are the ethical and societal considerations, making a human-centered approach to its innovation crucial.

Synthetic biology stands at the intersection of several scientific disciplines, leveraging our increasing understanding of genomics, molecular biology, and genetic engineering. It moves beyond simply reading the code of life to actively writing and rewriting it. This capability opens doors to addressing some of humanity’s most pressing challenges, from developing new medicines and sustainable fuels to creating novel materials and revolutionizing agriculture. However, as we gain the power to manipulate the fundamental building blocks of life, we must ensure that our innovation is guided by ethical principles, societal needs, and a deep understanding of the potential consequences.

A human-centered approach to innovation in synthetic biology means prioritizing the well-being of individuals and the planet. It involves engaging with the public to understand their concerns and aspirations, fostering transparency in research and development, and proactively addressing potential risks. It requires us to ask not just “can we do this?” but “should we do this?” and “what are the potential impacts on human health, the environment, and the fabric of society?” This proactive ethical framework is essential for building trust and ensuring that the transformative potential of synthetic biology is harnessed responsibly and for the benefit of all.

Case Study 1: Engineering Microbes for Sustainable Fuel Production

The Challenge: Dependence on Fossil Fuels and Climate Change

Our current reliance on fossil fuels is a major driver of climate change and environmental degradation. Finding sustainable and renewable alternatives is a critical global challenge. Synthetic biology offers a promising pathway by enabling the engineering of microorganisms to produce biofuels from renewable resources, such as agricultural waste or even captured carbon dioxide.

The Innovation:

Companies and research labs are now engineering yeast and algae to efficiently convert sugars and other feedstocks into biofuels like ethanol, butanol, and even advanced hydrocarbons that can directly replace gasoline or jet fuel. This involves designing new metabolic pathways within these organisms, optimizing their growth conditions, and scaling up production in bioreactors. The human-centered aspect here lies in the potential to create a cleaner, more sustainable energy future, reducing our carbon footprint and mitigating the impacts of climate change. Furthermore, these bioproduction processes can potentially utilize waste streams, contributing to a more circular economy.
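
To make the bioreactor side of this concrete, here is a minimal Python sketch of a batch fermentation modeled with textbook Monod growth kinetics, the kind of first-pass simulation an engineer might run before touching a real strain. Every parameter value below is an illustrative assumption, not data from any actual organism or process.

```python
# A first-pass batch-bioreactor simulation using textbook Monod growth kinetics.
# Every parameter value is an illustrative assumption, not data from a real strain.
from scipy.integrate import solve_ivp

MU_MAX = 0.4   # 1/h, assumed maximum specific growth rate
K_S    = 2.0   # g/L, assumed half-saturation constant
Y_XS   = 0.5   # g biomass per g substrate, assumed yield
Q_P    = 0.15  # g product per g biomass per hour, assumed production rate

def batch_reactor(t, y):
    X, S, P = y                           # biomass, substrate, biofuel (g/L)
    mu = MU_MAX * S / (K_S + S)           # Monod specific growth rate
    dX = mu * X                           # growth
    dS = -mu * X / Y_XS                   # substrate consumption
    dP = Q_P * X * S / (K_S + S)          # production slows as substrate runs out
    return [dX, dS, dP]

sol = solve_ivp(batch_reactor, (0.0, 48.0), [0.1, 50.0, 0.0])
X, S, P = sol.y[:, -1]
print(f"After 48 h: biomass {X:.1f} g/L, substrate {S:.1f} g/L, biofuel {P:.1f} g/L")
```

Even a toy model like this shows why growth conditions matter: the product yield is capped by how much substrate the culture can convert before it runs out.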

The Potential Impact:

Successful development and deployment of these bio-based fuels could significantly reduce our dependence on finite fossil fuel reserves and lower greenhouse gas emissions. Imagine fueling our cars and airplanes with fuels produced by engineered microbes, utilizing resources that would otherwise go to waste. This innovation has the potential to create new jobs in biorefineries and contribute to energy independence, while simultaneously addressing a critical environmental need. However, careful consideration of land use, water resources, and the potential for unintended environmental consequences is paramount to ensure a truly sustainable solution.

Key Insight: Synthetic biology offers powerful tools to engineer sustainable solutions to global challenges like climate change, but a human-centered approach requires careful consideration of the entire lifecycle and potential impacts.

Case Study 2: Cell-Based Agriculture for a Sustainable Food System

The Challenge: Environmental Impact and Ethical Concerns of Traditional Animal Agriculture

Traditional animal agriculture has a significant environmental footprint, contributing to deforestation, greenhouse gas emissions, and water pollution. It also raises ethical concerns about animal welfare. Synthetic biology is paving the way for cell-based agriculture, where meat and other animal products are grown directly from animal cells in a lab, without the need to raise and slaughter animals.

The Innovation:

Companies are now developing methods to cultivate animal cells in bioreactors, providing them with the necessary nutrients and growth factors to proliferate and differentiate into muscle tissue, fat, and other components of meat. This “cultured meat” has the potential to drastically reduce the environmental impact associated with traditional farming and address ethical concerns about animal treatment. From a human-centered perspective, this innovation could lead to a more sustainable and ethical food system, ensuring food security for a growing global population while minimizing harm to the planet and animals.

The Potential Impact:

Widespread adoption of cell-based agriculture could revolutionize the food industry, offering consumers real meat with a significantly lower environmental footprint. It could also reduce the risk of zoonotic diseases and the need for antibiotics in animal agriculture. However, challenges remain in scaling up production, reducing costs, and gaining consumer acceptance. Addressing public perceptions, ensuring the safety and nutritional value of lab-grown meat, and understanding the potential socio-economic impacts on traditional farming communities are crucial human-centered considerations for this transformative technology.

Key Insight: Synthetic biology can contribute to a more sustainable and ethical food system through cell-based agriculture, but public engagement and careful consideration of societal impacts are essential for its responsible adoption.

Startups and Companies to Watch

The field of synthetic biology is rapidly evolving, with numerous innovative startups and established companies making significant strides. Keep an eye on companies like Ginkgo Bioworks, which is building a platform for organism design and which acquired Zymergen, a pioneer in creating novel materials and ingredients through microbial engineering; Impossible Foods, which uses engineered yeast to produce the heme protein at the heart of its plant-based meat; cultivated-meat companies such as Upside Foods, growing real meat directly from animal cells; Moderna and BioNTech, which used mRNA technology (a product of synthetic biology advancements) for their groundbreaking COVID-19 vaccines; and companies like Pivot Bio, developing sustainable microbial fertilizers. This dynamic landscape is constantly generating new solutions and pushing the boundaries of what’s biologically possible.

As we continue to unlock the power of synthetic biology here in America and around the world, it is imperative that we do so with a strong sense of human-centered responsibility. By prioritizing ethics, engaging with society, and focusing on solutions that address fundamental human needs and environmental sustainability, we can ensure that this remarkable technology truly serves the betterment of humanity.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Gemini







Why Explainable AI is the Key to Our Future

The Unseen Imperative

Why Explainable AI is the Key to Our Future

GUEST POST from Art Inteligencia

We’re in the midst of an AI revolution, a tidal wave of innovation that promises to redefine industries and transform our lives. We’ve seen algorithms drive cars, diagnose diseases, and manage our finances. But as these “black box” systems become more powerful and more pervasive, a critical question arises: can we truly trust them? The answer, for many, is a hesitant ‘maybe,’ and that hesitation is a massive brake on progress. The key to unlocking AI’s true, transformative potential isn’t just more data or faster chips. It’s Explainable AI (XAI).

XAI is not a futuristic buzzword; it’s the indispensable framework for today’s AI-driven world. It’s the set of tools and methodologies that peel back the layers of a complex algorithm, making its decisions understandable to humans. Without XAI, our reliance on AI is little more than a leap of faith. We must transition from trusting AI because it’s effective to trusting it because we understand why and how it’s effective. This is the fundamental shift from AI as a blind tool to AI as an accountable partner.

This is more than a technical problem; it’s a strategic business imperative. XAI provides the foundation for the four pillars of responsible AI that will differentiate the market leaders of tomorrow:

  • Transparency: Moving beyond “what” the AI decided to “how” it arrived at that decision. This sheds light on the model’s logic and reasoning.
  • Fairness & Bias Detection: Actively identifying and mitigating hidden biases in the data or algorithm itself. This ensures that AI systems make equitable decisions that don’t discriminate against specific groups.
  • Accountability: Empowering humans to understand and take responsibility for AI-driven outcomes. When things go wrong, we can trace the decision back to its source and correct it.
  • Trust: Earning the confidence of users, stakeholders, and regulators. Trust is the currency of the future, and XAI is the engine that generates it.

For any organization aiming to deploy AI in high-stakes fields like healthcare, finance, or justice, XAI isn’t a nice-to-have—it’s a non-negotiable requirement. The competitive advantage will go to the companies that don’t just build powerful AI, but build trustworthy AI.
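
To ground the transparency pillar in something runnable, here is a minimal sketch using the open-source shap library to attribute a single prediction to its input features. The random data and random-forest model are toy stand-ins; a real deployment would explain the organization’s own production models.

```python
# A minimal feature-attribution sketch using the open-source `shap` library.
# The random data and random-forest model are toy stand-ins for a real pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # four synthetic input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # label depends only on features 0 and 1

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # efficient, exact attributions for trees
shap_values = explainer.shap_values(X[:1])       # explain a single prediction

# Sanity check for the transparency pillar: attributions should concentrate on
# features 0 and 1 and be near zero for the uninformative features 2 and 3.
print(shap_values)
```

On this toy task the attribution should concentrate on the two features the label actually depends on, which is exactly the sanity check a transparency review is looking for.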

Case Study 1: Empowering Doctors with Transparent Diagnostics

Consider a team of data scientists who develop a highly accurate deep learning model to detect early-stage cancer from medical scans. The model’s accuracy is impressive, but it operates as a “black box.” Doctors are understandably hesitant to stake a patient’s life on a recommendation they can’t understand. The company then integrates an XAI framework. Now, when the model flags a potential malignancy, it doesn’t just give a diagnosis. It provides a visual heat map highlighting the specific regions of the scan that led to its conclusion, along with a confidence score. It also presents a list of similar, previously diagnosed cases from its training data, providing concrete evidence to support its claim. This explainable output transforms the AI from an un-auditable oracle into a valuable, trusted second opinion. The doctors, now empowered with understanding, can use their expertise to validate the AI’s findings, leading to faster, more confident diagnoses and, most importantly, better patient outcomes.
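
One simple way to produce the kind of heat map described above is occlusion sensitivity: mask one patch of the scan at a time and record how much the model’s confidence drops. The sketch below uses a hypothetical predict function as a stand-in for a real diagnostic model; it illustrates the general technique, not any particular vendor’s product.

```python
# Occlusion sensitivity: mask one patch at a time and measure how much the
# model's confidence drops. `predict` is a hypothetical stand-in for a real model.
import numpy as np

def predict(image: np.ndarray) -> float:
    """Hypothetical diagnostic model: returns a 'malignancy score' for a 2-D scan."""
    return float(image[28:36, 28:36].mean())   # toy model that only looks at one region

def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Larger values mark regions whose occlusion hurts the prediction most."""
    baseline = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # gray out this patch
            heat[i // patch, j // patch] = baseline - predict(occluded)
    return heat

scan = np.random.rand(64, 64)            # stand-in for a real medical scan
print(occlusion_heatmap(scan).round(2))  # hottest cells sit over the toy model's focus
```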

Case Study 2: Proving Fairness in Financial Services

A major financial institution implements an AI-powered system to automate its loan approval process. The system is incredibly efficient, but its lack of transparency triggers concerns from regulators and consumer advocacy groups. Are its decisions fair, or is the algorithm subtly discriminating against certain demographic groups? Without XAI, the bank would be in a difficult position to defend its practices. By implementing an XAI framework, the company can now generate a clear, human-readable report for every single loan decision. If an application is denied, the report lists the specific, justifiable factors that contributed to the outcome—e.g., “debt-to-income ratio is outside of policy guidelines” or “credit history shows a high number of recent inquiries.” Crucially, it can also demonstrate that the decision was not driven by protected characteristics like race or gender. This transparency not only helps the bank comply with fair lending laws but also builds critical trust with its customers, turning a potential liability into a significant source of competitive advantage.
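
A minimal sketch of that last step, turning raw attribution scores into the human-readable reasons a denied applicant would see, might look like the following. The feature names, scores, and reason wording are all hypothetical.

```python
# Sketch: turn per-feature attribution scores into human-readable reason codes
# for a denied application. Feature names, scores, and wording are hypothetical.
REASON_TEXT = {
    "debt_to_income":   "Debt-to-income ratio is outside of policy guidelines",
    "recent_inquiries": "Credit history shows a high number of recent inquiries",
    "utilization":      "Revolving credit utilization is high",
    "tenure":           "Length of credit history is short",
}

def reason_codes(attributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Return the top_n features that pushed the decision toward denial."""
    negative = [(f, s) for f, s in attributions.items() if s < 0]
    negative.sort(key=lambda fs: fs[1])           # most negative first
    return [REASON_TEXT[f] for f, _ in negative[:top_n]]

# Example attribution scores from an explainer (negative = pushed toward denial).
scores = {"debt_to_income": -0.31, "recent_inquiries": -0.12,
          "utilization": -0.05, "tenure": 0.04}
print(reason_codes(scores))
```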

The Architects of Trust: XAI Market Leaders and Startups to Watch

In the rapidly evolving world of Explainable AI (XAI), the market is being defined by a mix of established technology giants and innovative, agile startups. Major players like Google, Microsoft, and IBM are leading the way, integrating XAI tools directly into their cloud and AI platforms like Azure Machine Learning and IBM Watson. These companies are setting the industry standard by making explainability a core feature of their enterprise-level solutions. They are often joined by other large firms such as FICO and SAS Institute, which have long histories in data analytics and are now applying their expertise to ensure transparency in high-stakes areas like credit scoring and risk management.

Meanwhile, a number of dynamic startups are pushing the boundaries of XAI. Companies like H2O.ai and Fiddler AI are gaining significant traction with platforms dedicated to model monitoring, bias detection, and interpretability for machine learning models. Another startup to watch is Arthur AI, which focuses on a centralized platform for AI performance monitoring, ensuring that models remain fair and accurate over time. These emerging innovators are crucial for democratizing XAI, making sophisticated tools accessible to a wider range of organizations and ensuring that the future of AI is built on a foundation of trust and accountability.

The Road Ahead: A Call to Action

The future of AI is not about building more powerful black boxes. It’s about building smarter, more transparent, and more trustworthy partners. This is not a task for data scientists alone; it’s a strategic imperative for every business leader, every product manager, and every innovator. The companies that bake XAI into their processes from the ground up will be the ones that successfully navigate the coming waves of regulation and consumer skepticism. They will be the ones that win the trust of their customers and employees. They will be the ones that truly unlock the full, transformative power of AI. Are you ready to lead that charge?

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Gemini







Why Innovators Can’t Ignore the Quantum Revolution

Why Innovators Can’t Ignore the Quantum Revolution

GUEST POST from Art Inteligencia

In the world of innovation, we are always looking for the next big thing—the technology that will fundamentally change how we solve problems, create value, and shape the future. For the past several decades, that technology has been the classical computer, with its exponential increase in processing power. But a new paradigm is on the horizon, one that promises to unlock capabilities previously thought impossible: quantum computing. While it may seem like a distant, esoteric concept, innovators and business leaders who ignore quantum computing are doing so at their own peril. This isn’t just about faster computers; it’s about a complete re-imagining of what is computationally possible.

The core difference is simple but profound. A classical computer is like a single light switch—it can be either ON or OFF (1 or 0). A quantum computer, however, uses qubits that can be ON, OFF, or in a state of superposition: a weighted blend of both at once that resolves to a definite value only when measured. This ability, combined with entanglement, allows quantum computers to explore enormous solution spaces and tackle problems that are intractable for even the most powerful supercomputers. The shift is not incremental; it is a fundamental leap in computational power, moving from a deterministic, linear process to a probabilistic, multi-dimensional one.
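
For readers who want to see the math behind “a blend of both at once,” here is a self-contained NumPy sketch of a single qubit: a Hadamard gate puts it into an equal superposition, and repeated measurement reproduces the resulting 50/50 statistics.

```python
# Minimal single-qubit simulation in NumPy: Hadamard gate, then measurement.
import numpy as np

ket0 = np.array([1.0, 0.0])                      # |0> : the "OFF" state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                                 # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                       # Born rule: measurement probabilities
print(probs)                                     # -> [0.5 0.5]

# Simulate 1000 measurements: roughly half 0s, half 1s.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(samples))
```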

Quantum as an Innovation Engine: Solving the Unsolvable

For innovators, quantum computing is not a threat to be feared, but a tool to be mastered. It provides a new lens through which to view and solve the world’s most complex challenges. The problems that are “hard” for classical computers—like simulating complex molecules, optimizing global supply chains, or cracking certain types of encryption—are the very problems where quantum computers are expected to excel. By leveraging this technology, innovators can create new products, services, and business models that were simply impossible before.

Key Areas Where Quantum Will Drive Innovation

  • Revolutionizing Material Science: Simulating how atoms and molecules interact is a notoriously difficult task for classical computers. Quantum computers can model these interactions with unprecedented accuracy, accelerating the discovery of new materials, catalysts, and life-saving drugs in fields from energy storage to pharmaceuticals.
  • Optimizing Complex Systems: From optimizing financial portfolios to routing delivery trucks in a complex network, optimization problems become exponentially more difficult as the number of variables increases. Quantum algorithms promise dramatically faster solutions to many of these problems, unlocking major efficiencies and cost savings.
  • Fueling the Next Wave of AI: Quantum machine learning (QML) can process vast, complex datasets in ways that are impossible for classical AI. This could lead to more accurate predictive models, better image recognition, and new forms of artificial intelligence that can find patterns in data that humans and classical machines would miss.
  • Securing Our Digital Future: While quantum computing poses a threat to current encryption methods, it also offers a solution. Quantum key distribution promises communication channels whose security rests on the laws of physics rather than computational difficulty, ushering in a new era of secure data transmission.

Case Study 1: Accelerating Drug Discovery for a New Tomorrow

A major pharmaceutical company was struggling to develop a new drug for a rare disease. The traditional method involved months of painstaking laboratory experiments and classical computer simulations to model the interactions of a new molecule with its target protein. The sheer number of variables and possible molecular configurations made the process a slow and expensive trial-and-error loop, often with no clear path forward.

They partnered with a quantum computing research firm to apply quantum simulation algorithms. The quantum computer was able to model the complex quantum mechanical properties of the molecules with a level of precision and speed that was previously unattainable. Simulations that once took months ran in days. This allowed the human research team to rapidly narrow down the most promising molecular candidates, saving years of R&D time and millions of dollars. The quantum computer didn’t invent the drug, but it acted as a powerful co-pilot, guiding the human innovators to the most probable solutions and dramatically accelerating the path to a breakthrough.

This case study demonstrates how quantum computing can transform the bottleneck of complex simulation into a rapid discovery cycle, augmenting the human innovator’s ability to find life-saving solutions.

Case Study 2: Optimizing Global Logistics for a Sustainable Future

A global shipping and logistics company faced the monumental task of optimizing its entire network of ships, trucks, and warehouses. Factors like fuel costs, weather patterns, traffic, and delivery windows created a mind-bogglingly complex optimization problem. The company’s classical optimization software could only provide a suboptimal solution, leading to wasted fuel, delayed deliveries, and significant carbon emissions.

Recognizing the limitations of their current technology, they began to explore quantum optimization. By using a quantum annealer, a type of quantum computer designed for optimization problems, they were able to model the entire network simultaneously. The quantum algorithm found a more efficient route and scheduling solution that reduced fuel consumption by 15% and cut delivery times by an average of 10%. This innovation not only provided a significant competitive advantage but also had a profound positive impact on the company’s environmental footprint. It was an innovation that leveraged quantum computing to solve a business problem that was previously too complex for existing technology.
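
For the curious, problems like this are typically handed to an annealer in QUBO form (quadratic unconstrained binary optimization): binary decision variables, with costs on each choice and penalties on conflicting pairs. The sketch below brute-forces a four-variable toy QUBO classically just to show the shape of the problem; the matrix values are illustrative, and a real annealer searches the same kind of landscape across thousands of variables.

```python
# Toy QUBO (quadratic unconstrained binary optimization), the problem form a
# quantum annealer accepts. We brute-force 4 binary route choices classically;
# the Q matrix values are illustrative, not real logistics data.
import itertools
import numpy as np

# Q[i][i] = cost of choosing route i; Q[i][j] = penalty for choosing i and j together.
Q = np.array([
    [-3.0,  2.0,  0.0,  1.0],
    [ 0.0, -2.0,  1.5,  0.0],
    [ 0.0,  0.0, -4.0,  2.5],
    [ 0.0,  0.0,  0.0, -1.0],
])

best = min(
    (np.array(x) for x in itertools.product([0, 1], repeat=4)),
    key=lambda x: float(x @ Q @ x),   # QUBO objective: x^T Q x
)
print("best route selection:", best, "energy:", float(best @ Q @ best))
```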

This example shows that quantum’s power to solve previously intractable optimization problems can lead to both significant cost savings and sustainable, planet-friendly outcomes.

The Innovator’s Call to Action

The quantum revolution is not a distant sci-fi fantasy; it is a reality in its nascent stages. For innovators, the key is not to become a quantum physicist overnight, but to understand the potential of the technology and to start experimenting now. Here are the steps you must take to prepare for this new era:

  • Educate and Evangelize: Start a dialogue about quantum computing and its potential applications in your industry. Find internal champions who can explore this new frontier and evangelize its possibilities.
  • Find Your Partners: You don’t have to build your own quantum computer. Partner with academic institutions, research labs, or quantum-as-a-service providers to start running pilot projects on a cloud-based quantum machine.
  • Identify the Right Problems: Look for the “intractable” problems in your business—the optimization challenges, the material science hurdles, the data analysis bottlenecks—and see if they are a fit for quantum computing. These are the problems where a quantum solution will deliver a true breakthrough.

The greatest innovations are born from a willingness to embrace new tools and new ways of thinking. Quantum computing may be the most powerful new tool we have ever seen. For the innovator of tomorrow, understanding and leveraging this technology will be the key to staying ahead. The quantum leap is upon us—are you ready to take it?

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Gemini
