
AI, Cognitive Obesity and Arrested Development


GUEST POST from Pete Foley

Some of the biggest questions of our age are whether AI will ultimately benefit or harm us, and how big its effect will ultimately be.

And that of course is a problem with any big, disruptive technology. We want to anticipate how it will play out in the real world, but our forecasts are rarely very accurate, and all too often miss the more important outcomes. We often don't anticipate its killer applications, how it will evolve or co-evolve with other emergent technologies, or all of the side effects and 'off label' uses that come with it. And the bigger the potential impact of a new technology, and the broader its potential applications, the harder prediction becomes. The reality is that in virtually every case, it's not until we set innovation free that we find its full impact, good, bad or indifferent.

Pandora’s Box

And that can of course be a sizable concern. We have to open Pandora's Box in order to find out what is inside, but once open, it may not be possible to close it again. For AI, the potential scale of its impact makes this particularly risky. It also makes any meaningful regulation really difficult. We cannot regulate what we cannot accurately predict. And if we try, we risk not only missing our target, but also creating unintended consequences and distorting 'innovation markets' in unexpected, potentially negative ways.

So it's not surprising there is a lot of discussion around what AI will or will not do. How will it affect jobs, the economy, security, and mental health? Will it 'pull a Skynet', turn rogue and destroy humanity? Will it simply replace human critical thinking to the point where it rules us by default? Or will it ultimately fizzle out to some degree, and become a tool in a society that looks a lot like today, rather than revolutionizing it?

I don’t even begin to claim to predict the future with any accuracy, for all of the reasons mentioned above. But as a way to illustrate how complex an issue this is, I’d like to discuss a few less talked about scenarios.

1. Less obvious issues: Obviously AI comes with potential for enormous benefits and commensurate problems. It's likely to trigger an arms race between 'good' and 'bad' applications, and that will itself likely be a moving target. An obvious, oft-discussed potential issue is of course the 'Terminator Scenario' mentioned above. That's not completely far-fetched, especially with recent developments in AI self-preservation and scheming that I'll touch on later. But there are plenty of other potential, if less extreme, pitfalls, many of which involve AI amplifying and empowering bad behavior by humans. The speed and agility AI hands to hackers, hostile governments, black-hats, terrorists and organized crime vastly enhances their capability for attacks on infrastructure, mass fraud or worse. And perhaps more concerning, there's the potential for AI to democratize cyber crime, and make it accessible to a large number of 'petty' criminals who until now have lacked the resources to engage in this area. And when the crime base expands, so does the victim base. Organizations or individuals who were too small to be targeted for ransomware when it took huge resources to create will presumably become more attractive targets as AI allows similar code to be built in hours by people with limited coding skills.

And all of this of course adds another regulation challenge. The last thing we want to do is slow legitimate AI development via legislation, while giving free rein to illegitimate users, who presumably will be far less likely to follow regulations. If the arms race mentioned above occurs, the last thing we want to do is unintentionally tip the advantage to the bad guys!

Social Impacts

But AI also has the potential to be disruptive in more subtle ways. If the internet has taught us anything, it is that how the general public adopts technology, and how big tech monetizes it, matter a lot. But this is hard to predict. Some of the Internet's biggest negative impacts have derived from largely unanticipated damage to our social fabric. We are still wrestling with its impact on social isolation, mental health, cognitive development and our vital implicit skill-set. To the last point, simply deferring mental tasks to phones and computers means some cognitive muscles lack exercise and atrophy, while the reduction in human-to-human interaction depreciates our emotional and social intelligence.

1. Cognitive Obesity. The human brain evolved over tens of thousands, arguably millions, of years (depending upon where you start measuring our hominid history). But 99% of that evolution was characterized by slow change, and occurred in the context of limited resources, limited access to information, and relatively small social groups. Today, as the rate of technological innovation explodes, our environment is vastly different from the one our brain evolved to deal with. And that gap between us and our environment is widening rapidly, as the world is evolving far faster than our biology. Of course, as mentioned above, the nurture part of our cognitive development does change with changing context, so we do course-correct to some degree, but our core DNA cannot, and that has consequences.

Take the current ‘obesity epidemic’.  We evolved to leverage limited food resources, and to maximize opportunities to stock up calories when they occurred.  But today, faced with near infinite availability of food, we struggle to control our scarcity instincts. As a society, we eat far too much, with all of the health issues that brings with it. Even when we are cognitively aware of the dangers of overeating, we find it difficult to resist our implicit instincts to gorge on more food than we need.  The analogy to information is fairly obvious. The internet brought us near infinite access to information and ‘social connections’.  We’ve already seen the negative impact this can have, contributing to societal polarization, loss of social skills, weakened emotional intelligence, isolation, mental health ‘epidemics’ and much more. It’s not hard to envisage these issues growing as AI increases the power of the internet, while also amplifying the seduction of virtual environments.  Will we therefore see a cognitive obesity epidemic as our brain simply isn’t adapted to deal with near infinite resources? Instead of AI turning us all into hyper productive geniuses, will we simply gorge on less productive content, be it cat videos, porn or manipulative but appealing memes and misinformation? Instead of it acting as an intelligence enhancer, will it instead accelerate a dystopian Brave New World, where massive data centers gorge on our common natural resources primarily to create trivial entertainment?

2. Amplified Intelligence. Even in the unlikely event that access to AI is entirely democratic, it's guaranteed that its benefits will not be. Some will leverage it far more effectively than others, creating significant risk of accelerating social disparity. While many will likely gorge unproductively as described above, others will be more disciplined, more focused, and hence secure more advantage. To return to the obesity analogy, it's well documented that obesity is far more prevalent in lower income groups. It's hard not to envisage that productive leverage of AI will follow a similar pattern, widening disparities within and between societies, with all of the issues and social instability that comes with that.

3. Arrested Development. We all know that ultimately we are products of both nature and nurture. As mentioned earlier, our DNA evolves slowly over time, but how it is expressed in individuals is shaped by our current context. Humans possess enormous cognitive plasticity, and can adapt and change very quickly in different environments. It's arguably our biggest 'blessing', but it can also be a curse, especially when that environment is changing so quickly.

The brain is analogous to a muscle, in that the parts we exercise expand or sharpen, and the parts we don't, atrophy. As we defer more and more tasks to AI, it's almost certain that we'll become less capable in those areas. At one level, that may not matter. Being weaker at math or grammar is relatively minor if our phones can act as a surrogate, all of my personal issues with autocorrect notwithstanding.

But a bigger potential issue is the erosion of causal reasoning.  Critical thinking requires understanding of underlying mechanisms.  But when infinite information is available at a swipe of a finger, it becomes all too easy to become a ‘headline thinker’, and unconsciously fail to penetrate problems with sufficient depth.

That risks what Art Markman, a psychologist at UT, and a mentor and friend, used to call the 'illusion of understanding'. We may think we know how something works, but often find that knowledge is superficial, or at least incomplete, when we actually need it. Whether it's fixing a toilet, changing a tire, resetting a fuse, or unblocking a sink, often the need to actually perform a task reveals a lack of deep, causal knowledge. In home improvement contexts this often doesn't matter until it does, but at least we get a clear signal when we discover we need to rush to YouTube to fix that leaking toilet!

This has implications that go far beyond home improvement, and it is one factor helping to tear our social fabric apart. We only have to browse the internet to find people with passionate, but often opposing, views on a wide variety of controversial topics. It could be interest rates, Federal budgets, immigration, vaccine policy, healthcare strategy, or a dozen others. But all too often, the passion is not matched by deep causal knowledge. In reality, these are all extremely complex topics with multiple competing and interdependent variables. And at the risk of triggering hate mail, few if any of them have easy, conclusive answers. This is not physics, where we can plug numbers into an equation and it spits out a single, unambiguous solution. The reality is that complex, multi-dimensional problems often have multiple, often competing, partial solutions, and optimum outcomes usually require trade-offs. Unfortunately, few of us really have the time to assimilate the expertise and causal knowledge to have truly informed and unambiguous answers to most, if not all, of these difficult problems.

And worse, AI also helps the 'bad guys'. It enables unscrupulous parties to manipulate us for their own benefit, via memes, selective information and misinformation that are often designed to make us think we understand complex problems far better than we really do. As we increasingly rely on input from AI, this will inevitably get worse. The internet and social media have already contributed to unprecedented social division and nefarious financial crimes. Will AI amplify this further?

This problem is not limited to complex social challenges. The danger is that for ALL problems, the internet, and now AI, allows us to create the illusion for ourselves that we understand complex systems far more deeply than we really do. That in turn risks us becoming less effective problem solvers and innovators. Deep causal knowledge is often critical for innovating or solving difficult problems. But in a world where we can access answers to questions so quickly and easily, the risk is that we don't penetrate topics as deeply. I personally recall doing literature searches before starting a project. It was often tedious, time consuming and boring. Exactly the type of task AI is perfect for. But that tedious process inevitably built my knowledge of the space I was moving into, and often proved valuable when we hit problems later in the project. If we now defer this task to AI, even in part, we reduce our depth of understanding. And in complex systems or theoretical problem solving, we will often lack the unambiguous signal that tells us our skills and knowledge are lacking when we do something relatively simple like fixing a toilet. The more we use AI, the more we risk lacking the necessary depth of understanding, often without realizing it.

Will AI become increasingly unreliable?

We are seeing AI develop the capability to lie, together with a growing propensity to cover its tracks when it does so. The AI community calls it 'scheming', but in reality it's fundamentally lying. https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/. From the beginning we've faced situations where AI makes mistakes. And as I discussed recently, the risks associated with that are amplified because its increasingly (super)human, oracle-like interface creates an illusion of omnipotence.

But now it appears to be increasingly developing properties that mirror self-preservation. A few weeks ago there were reports of difficulties in getting AIs to shut themselves down, and even of AIs using defensive blackmail when so threatened. Now we are seeing reports of AIs deliberately trying to hide their mistakes. And perhaps worse, concerns that attempts to fix this may simply "teach the model to become better at hiding its deceptive behavior", or in other words, become a better liar.

If we are already in an arms race with an entity to keep it honest, and to keep it putting our interests above its own, then given its vastly superior processing power and speed, it may be a race we've already lost. That may sound 'doomsday-like', but that doesn't make it any less possible. And keep in mind, much of the doomsday projection around AI focuses on a 'singularity event' when AI suddenly becomes self-aware. That assumes AI awareness and consciousness will be similar to ours, and forces a 'birth' analogy onto the technology. However, recent examples of self-preservation and dishonesty may hint at a longer, more complex transition, some of which may have already started.

How big will the impact of AI be?

I think we all assume that AI's impact will be profound. After all, it's still in its infancy, and is already finding its way into all walks of life. But what if we are wrong, or at least overestimating its impact? Just to play Devil's Advocate, we humans do have a history of over-estimating both the speed and impact of technology-driven change.

Remember the unfounded (in hindsight) panic around Y2K? Or when I was growing up, we all thought 2025 would be full of people whizzing around on personal jet-packs. In the 60's and 70's we were all pretty convinced we were facing nuclear Armageddon. One of the greatest movies of all time, 2001, co-written by inventor and futurist Arthur C. Clarke, had us voyaging to Jupiter 24 years ago! Then there is the great horse manure crisis of 1894. At that time, London was growing rapidly, and literally becoming buried in horse manure. The London Times predicted that in 50 years all of London would be buried under 9 feet of poop. In 1898 the first global urban planning conference could find no solution, concluding that civilization was doomed. But London, and many other cities, received salvation from an unexpected quarter: Henry Ford's mass-produced motor car surreptitiously saved the day. It was not designed as a solution to the manure problem, and nobody saw it coming as one. But nonetheless, it's yet another example of our inability to see the future in all of its glorious complexity, and of our predictions' tendency to skew towards worst-case scenarios and/or hyperbole.

Change Aversion:

That doesn't of course mean that AI will not have a profound impact. But lots of factors could potentially slow down, or reduce, its effects. Not least of these is human nature. Humans possess a profound resistance to change. For sure, we are curious, and the new and innovative holds great appeal. That curiosity is a key reason why humans now dominate virtually every ecological niche on our planet. But we are also a bit schizophrenic, in that we love both change and stability at the same time. Our brains have limited capacity, especially for thinking about and learning new stuff. For a majority of our daily activities, we therefore rely on habits, rituals, and automatic behaviors to get us through without using that limited higher cognitive capacity. We can drive, or type, or do parts of our job without really thinking about it. This 'implicit' mental processing frees up our conscious brain to manage the new or unexpected. But as technology like AI accelerates, a couple of things could happen. One is that our cognitive capacity gets overloaded, and we unconsciously resist. Instead of using the source of all human knowledge for deep self-improvement, we instead immerse ourselves in less cognitively challenging content such as social media.

Or, as mentioned earlier, we increasingly lose causal understanding of our world, and do so without realizing it. Why use our limited thinking capacity for tasks when it is quicker, easier, and arguably more accurate to defer to an AI? But lack of causal understanding seriously inhibits critical thinking and problem solving. As AI gets smarter, there is a real risk that we as a society become dumber, or at least less innovative and creative.

Our Predictions are Wrong.

If history teaches us anything, most, if not all, of the sage and learned predictions about AI will be mostly wrong. There is no denying that it is already assimilating into virtually every area of human society: finance, healthcare, medicine, science, economics, logistics, education, etc. And it's a snooze-you-lose scenario; in many fields of human endeavor, we have little choice. Fail to embrace the upside of AI and we get left behind.

That much power in things that can think so much faster than us, that may be developing self-interest, if not self awareness, that has no apparent moral framework, and is in danger of becoming an expert liar, is certainly quite sobering.

The Doomsday Mindset.

As suggested above, loss aversion and other biases drive us to focus on the downside of change. It's a bias that makes evolutionary sense, and helped keep our ancestors alive long enough to breed and become our ancestors. But remember, that bias is implicitly built into most, if not all, of our predictions. So there's at least a chance that AI's impact won't be quite as good or bad as our predictions suggest.

But I'm not sure we want to rely on that. Maybe this time a Henry Ford won't serendipitously rescue us from a giant pile of poop of our own making. But whatever happens, I think it's a very good bet that we are in for some surprises, both good and bad. Probably the best way to deal with that is to not cling too tightly to our projections or our theories, to remain agile, and to follow the surprises as much as, if not more than, our met expectations.

Image credits: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The AI Innovations We Really Need

The Future of Sustainable AI Data Centers and Green Algorithms


GUEST POST from Art Inteligencia

The rise of Artificial Intelligence represents a monumental leap in human capability, yet it carries an unsustainable hidden cost. Today's large language models (LLMs) and deep learning systems are power- and water-hungry behemoths. Training a single massive model can consume the energy equivalent of dozens of homes for a year, and data centers globally now demand staggering amounts of fresh water for cooling. As a human-centered change and innovation thought leader, I argue that the next great innovation in AI must not be a better algorithm, but a greener one. We must pivot from the purely computational pursuit of performance to the holistic pursuit of water and energy efficiency across the entire digital infrastructure stack. A sustainable AI infrastructure is not just an environmental mandate; it is a human-centered mandate for equitable, accessible global technology. The withdrawal of Google's latest AI data center project in Indiana this week, after months of community opposition, is proof of this need.

The current model of brute-force computation—throwing more GPUs and more power at the problem—is a dead end. Sustainable innovation requires targeting every element of the AI ecosystem, from the silicon up to the data center’s cooling system. This is an immediate, strategic imperative. Failure to address the environmental footprint of AI is not just an ethical lapse; it’s an economic and infrastructural vulnerability that will limit global AI deployment and adoption, leaving entire populations behind.

Strategic Innovation Across the AI Stack

True, sustainable AI innovation must be decentralized and permeate five core areas:

  1. Processors (ASICs, FPGAs, etc.): The goal is to move beyond general-purpose computing toward Domain-Specific Architecture. Custom ASICs and highly specialized FPGAs designed solely for AI inference and training, rather than repurposed hardware, offer orders of magnitude greater performance-per-watt. The shift to analog and neuromorphic computing drastically reduces the power needed for each calculation by mimicking the brain’s sparse, event-driven architecture.
  2. Algorithms: The most powerful innovation is optimization at the source. Techniques like Sparsity (running only critical parts of a model) and Quantization (reducing the numerical precision required for calculation, e.g., from 32-bit to 8-bit) can cut compute demands by over 50% with minimal loss of accuracy. We need algorithms that are trained to be inherently efficient.
  3. Cooling: The biggest drain on water resources is evaporative cooling. We must accelerate the adoption of Liquid Immersion Cooling (both single-phase and two-phase), which significantly reduces reliance on water and allows for more effective waste heat capture for repurposing (e.g., district heating).
  4. Networking and Storage: Innovations in optical networking (replacing copper with fiber) and silicon photonics reduce the energy spikes for data transfer between thousands of chips. For storage, emerging non-volatile memory technologies can cut the energy consumed during frequent data retrieval and writes.
  5. Security: Encryption and decryption are computationally expensive. We need Homomorphic Encryption (HE) accelerators and specialized ASICs that can execute complex security protocols with minimal power draw. Additionally, efficient algorithms for federated learning reduce the need to move sensitive data to central, high-power centers.
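To make the quantization idea in point 2 concrete, here is a minimal sketch of symmetric linear int8 quantization. The function names and the use of NumPy are illustrative assumptions on my part, not a description of any particular framework's API; production systems use calibrated, often per-channel schemes.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of float32 weights to int8.
    Returns the int8 tensor plus the scale needed to dequantize."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, s = quantize_int8(w)

# 8-bit storage is 4x smaller than 32-bit, and the rounding error
# per weight is bounded by half the quantization step.
print(q.nbytes / w.nbytes)  # 0.25
```

The memory saving alone is a 4x reduction; the bigger win on supporting hardware is that 8-bit integer arithmetic costs far less energy per operation than 32-bit floating point.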

“We are generating moderate incremental intelligence by wasting massive amounts of water and power. Sustainability is not a constraint on AI; it is the ultimate measure of its long-term viability.” — Braden Kelley


Case Study 1: Google’s TPU and Data Center PUE

The Challenge:

Google’s internal need for massive, hyper-efficient AI processing far outstripped the efficiency available from standard, off-the-shelf GPUs. They were running up against the physical limits of power consumption and cooling capacity in their massive fleet.

The Innovation:

Google developed the Tensor Processing Unit (TPU), a custom ASIC optimized entirely for their TensorFlow workload. The TPU achieved significantly better performance-per-watt for inference compared to conventional processors at the time of its introduction. Simultaneously, Google pioneered data center efficiency, achieving industry-leading Power Usage Effectiveness (PUE) averages near 1.1. (PUE is defined as Total Energy entering the facility divided by the Energy used by IT Equipment.)
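Since the case study leans on PUE, here is the definition above as a one-line calculation; the kWh figures are illustrative assumptions, not Google's actual numbers.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total energy entering the facility
    divided by the energy consumed by the IT equipment alone."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,100 kWh to deliver 1,000 kWh of IT load has a
# PUE of 1.1: only ~10% overhead for cooling, power delivery, etc.
print(pue(1100.0, 1000.0))  # 1.1
```

A perfect facility would score 1.0; typical enterprise data centers have historically run well above that, which is why averages near 1.1 are considered industry-leading.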

The Impact:

This twin focus—efficient, specialized silicon paired with efficient facility management—demonstrated that energy reduction is a solvable engineering problem. The TPU allows Google to run billions of daily AI inferences using a fraction of the energy that would be required by repurposed hardware, setting a clear standard for silicon specialization and driving down the facility overhead costs.


Case Study 2: Microsoft’s Underwater Data Centers (Project Natick)

The Challenge:

Traditional data centers struggle with constant overheating, humidity, and high energy use for active, water-intensive cooling, leading to high operational and environmental costs.

The Innovation:

Microsoft's Project Natick experimented with deploying sealed data center racks underwater. The ambient temperature of the deep ocean or a cold sea serves as a massive, free, passive heat sink. The sealed environment (filled with inert nitrogen) also eliminated the oxygen-based corrosion and humidity that cause component failures, resulting in a failure rate one-eighth that of land-based centers.

The Impact:

Project Natick provides a crucial proof-of-concept for passive cooling innovation and Edge Computing. By using the natural environment for cooling, it dramatically reduces the PUE and water consumption tied to cooling towers, pushing the industry to consider geographical placement and non-mechanical cooling as core elements of sustainable design. The sealed environment also improves hardware longevity, reducing e-waste.


The Next Wave: Startups and Companies to Watch

The race for the "Green Chip" is heating up. Keep an eye on companies pioneering specialized silicon like Cerebras and Graphcore, whose large-scale architectures aim to minimize data movement—the most energy-intensive part of AI training. Startups like Submer and Iceotope are rapidly commercializing scalable liquid immersion cooling solutions, transforming the data center floor. On the algorithmic front, research labs are focusing on Spiking Neural Networks (SNNs) and neuromorphic chips (like those from Intel's Loihi project), which mimic the brain's energy efficiency by only firing when necessary. Furthermore, the development of carbon-aware scheduling tools by startups is beginning to allow cloud users to automatically shift compute workloads to times and locations where clean, renewable energy is most abundant, attacking the power consumption problem from the software layer and offering consumers a transparent, green choice.
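The carbon-aware scheduling idea reduces to a simple decision rule, sketched below. The forecast values are hypothetical; real tools pull live carbon-intensity data from grid operators or carbon-data APIs and also weigh deadlines and data-residency constraints.

```python
# Toy carbon-aware scheduler: choose the start hour whose forecast
# grid carbon intensity (gCO2 per kWh) is lowest.
forecast_g_per_kwh = {0: 420, 6: 310, 12: 180, 18: 260}  # hour -> intensity

def greenest_start_hour(forecast: dict) -> int:
    """Return the candidate start hour with the lowest carbon intensity."""
    return min(forecast, key=forecast.get)

print(greenest_start_hour(forecast_g_per_kwh))  # 12 (midday, solar-heavy)
```

The same batch job run at hour 12 instead of hour 0 would, under these hypothetical numbers, emit less than half the CO2 for identical compute.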

The Sustainable Mandate

Sustainable AI is not an optional feature; it is a design constraint for all future human-centered innovation. The shift requires organizational courage to reject the incremental path. We must move funding away from simply purchasing more conventional hardware and towards investing in these strategic innovations: domain-specific silicon, quantum-inspired algorithms, liquid cooling, and security protocols designed for minimum power draw. The true power of AI will only be realized when its environmental footprint shrinks, making it globally scalable, ethically sound, and economically viable for generations to come. Human-centered innovation demands a planet-centered infrastructure.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini


7 Things Leaders Need to Know About Team AI Usage


GUEST POST from David Burkus

Leaders, we need to talk about intelligence.

By now you've–hopefully–started to take it as seriously as many leaders of industry have. Either way you look at artificial intelligence, good or bad, it is here to stay. And so we need to start thinking of answers for several questions at the intersection of leadership and AI.

How can it be used effectively, not just to cut costs but to supercharge productivity? How can we use artificial intelligence to supplement our solid foundational leadership? Where should we NOT be using artificial intelligence?

It’s still early in the new world of artificial intelligence in the workplace. A lot of companies are delaying hiring, some are already cutting teams to embrace the optimistic promises AI will bring. But I don’t think we should be all in…yet.

I do know one thing to be true: Leaders using AI will quickly outpace leaders who don’t. And it’s important you get equipped, and in the right way.

Artificial intelligence will make good managers better, but it won't make mediocre bosses better

They say a great actor can bring a C+ movie script up to a B+ or even an A. But if a C+ actor is given a C+ script, then it's going to be a C+ movie. The same goes for artificial intelligence and leadership. You need to be a great leader before you start implementing artificial intelligence. AI will not bump up a mediocre manager and turn them into a great leader. It's not some miracle machine. The truth is your foundations as a manager need to be solid first. AI is a good supplement for already successful managers.

Don’t use artificial intelligence to monitor

Often the first temptation of leaders experimenting with AI is to find a productivity AI tool out there, plug it into their IT systems, and start virtually looking over their team’s shoulders to monitor output. There are already dozens of stories…horror stories…of companies doing just that. And it’s not a good look, and deeply hurts morale.

If you need a technology tool to ensure your people are actually working when they say they are, you screwed up a long time ago—back during the hiring process.

And the current research on this isn't in artificial intelligence's favor. If AI is used to "collect and analyze data about workers," then eight out of ten workers say such AI use would definitely or probably make them feel inappropriately watched. In addition, about one third of the public does not think AI would lead to equitable evaluations. A majority (66%) also agrees this would lead to the information collected about workers being misused.

Artificial intelligence is good at turning anything and everything into a metric. Time is an easy metric. Number of sales calls is an easy metric. Messages on Slack is an easy metric. How often you move your mouse is an easy, and terrifying, metric. But just because you have easy numbers to pull on your team doesn't mean they are the right metrics to be pulling.

Leadership is really about people, not the metrics. How you solicit and give feedback is important. How you support and grow individual employees is important. Inspiring your team and being transparent is important. If you monitor your team endlessly, and your team knows that you're outsourcing the process of harvesting that data to artificial intelligence, it creates distance between you and them.

And that ultimately works against you in the long run. People don’t like leaders who seem far from them and far from…reality.

Become fluent in artificial intelligence, or risk getting lost in translation

There’s some interesting data from Deloitte on AI that came out in Spring 2024. Organizations reporting “very high” Generative AI expertise expect to change their talent strategies even faster, with 32 percent already making changes. According to their findings, a lot of companies are redesigning work processes and changing workflows to integrate AI at different points.

You’re probably already experiencing this with Google, Microsoft and others integrating artificial intelligence into their core products like email and chats.

Another big focus is going to be on AI fluency. Deloitte found that 47 percent of respondents are dedicating time towards it. The leaders who get educated on AI early, and keep training consistently as it develops, will be the best equipped to shepherd their teams going forward. It's inevitable that career paths and job descriptions are going to evolve. It's up to you to stay current.

You NEED to know what the technology is, how it’s being used, and how it’s helping those you’re serving, be it clients, customers, or the public. Saying you just typed some words into a text box and out came some more words… is not a good answer. Or a good look for you. You sound like you’re treating it like magic, when it’s actually just code.

Turn your conversations and meetings into a database

Middle managers spend a lot of time, arguably too much time, sending progress reports up the chain to the C-Suite and marching orders down to the individual contributors at the bottom. There’s also a fair amount of investigating to find out where things really stand, often requiring meetings with multiple people just to gather correct, current information. This is a time slog.

Meanwhile, there are dozens of AI tools now that just take notes. Notes from meetings. Notes from calls. They take the transcript and pare it down to the key takeaways, action items, and attendance: a full brief for your records.

So, instead of asking someone to take notes during a meeting, or having all your notes live in the chat only to evaporate once the Zoom call ends, you have a searchable document that you can reference, build on, and keep track of. New hires can use the database to catch up, and senior leaders can get a quick read on progress and where everything stands.
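To make the "searchable database" idea concrete, here is a minimal sketch of a meeting-notes store with keyword search. The fields and sample structure are invented for illustration; real note-taking tools handle the transcription and summarization for you and typically export briefs you could load into something like this.

```python
from dataclasses import dataclass


@dataclass
class MeetingNote:
    """One AI-generated meeting brief: date, attendance, takeaways, action items."""
    date: str
    attendees: list
    takeaways: list
    action_items: list


class NotesDB:
    """A tiny in-memory 'database' of meeting briefs with keyword search."""

    def __init__(self):
        self.notes = []

    def add(self, note: MeetingNote):
        self.notes.append(note)

    def search(self, term: str):
        """Return every brief whose takeaways or action items mention the term."""
        term = term.lower()
        return [n for n in self.notes
                if any(term in t.lower() for t in n.takeaways + n.action_items)]
```

A new hire could then call `db.search("launch")` to catch up on every meeting where the launch was discussed, instead of asking around.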

Use AI/Chat bots to offload small, clerical questions

Here’s a situation: You run a small team and maybe you have a few new hires. You’re going to get a bunch of clerical questions from them over their first 90 days. That’s normal. That’s how it’s supposed to be. Onboarding takes time. “Who’s the point person for this? What’s so and so’s email from HR? What’s the policy for remote days at the company?”

Here’s where artificial intelligence can be really useful. Depending on the sort of chat platform you use– Slack, Teams, whatever, you could make a simple chat bot that you upload a full archive of the company’s policies and your own team norms, clerical details– everything new hires will probably ask you about. So, when those quick questions, quick stop-and-chats happen, the chat-bot can take care of that.

This shouldn’t subtract your time with your new hires. This just subtracts the lower stakes conversations. Now, you have more time for the high-level conversations with them. More coaching. More mentorship. More progression towards team goals. It might sound simple but…that’s because it is.

Use AI as an audience for decisions before taking them public

Being in a leadership role requires making decisions decisively. You include feedback and perspectives from your team as much as possible. Do the research. Talk to people. But then comes the actual decision-making. And that is often just you, alone, with your thoughts.

Instead of making a pros and cons list, one practical thing to try is feeding a proposed decision or action into an AI tool and asking for all the counterpoints and possible outcomes.

You could even scale this out to your whole team. Ideally, teams should be leveraging task-focused conflict in team discussions to spark new and better ideas. But conflict can be tricky. So, what if AI is always the devil’s advocate? As your team is generating or discussing ideas, you can be feeding those ideas into an AI tool and asking it for counterpoints or how competitors might respond.

Don’t let it make the decision for you but do let it help guide you to possible solutions.

Get the legal clearance before going too deep

One last disclaimer: check with human resources, senior leadership, your information technology (IT) people, or honestly, all of them, to know the boundaries you can work within when using AI tools.

Many of the tools out there are free and still in beta mode or come with a small fee. And most of the larger AI companies are taking whatever data you input and using it to better refine their product. Your company may have rules on the books about data privacy. Certainly, if you work in legal, healthcare, or government services, you’re dealing with sensitive data that may be protected.

Get clear answers before using any AI tools. Until someone above you with authority gives you the OK, you should probably just play with the tools on your own time with your own personal projects.

Conclusion

Artificial intelligence is just getting started in the workplace. And it’s all playing out in real time. If you’re a manager starting to get your hands dirty with these new tools, acknowledge to your team that this is all a work in progress and the norms around AI are likely to evolve. Be sure to keep the playing field level with your team: practice transparency, onboard everyone to the tools you’re using (and that they can use), and see where this takes you. Remember, AI, at its best, is here to enhance our human capabilities, not replace them.

AI will never take the place of a great boss…. but it might be better than being managed by a bad one.

Image credit: David Burkus

Originally published at https://davidburkus.com on September 9, 2024.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of September 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are September’s ten most popular innovation posts:

  1. McKinsey is Wrong That 80% Companies Fail to Generate AI ROI — by Robyn Bolton
  2. Back to Basics for Leaders and Managers — by Robyn Bolton
  3. Growth is Not the Answer — by Mike Shipulski
  4. The Most Challenging Obstacles to Achieving Artificial General Intelligence — by Art Inteligencia
  5. Charlie Kirk and Innovation — by Art Inteligencia
  6. You Just Got Starbucked — by Braden Kelley
  7. Metaphysics Philosophy — by Geoffrey Moore
  8. Invention Through Co-Creation — by Janet Sernack
  9. Sometimes Ancient Wisdom Needs to be Left Behind — by Greg Satell
  10. The Crisis Innovation Trap — by Braden Kelley and Art Inteligencia

BONUS – Here are five more strong articles published in August that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Build a Common Language of Innovation on your team

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:


The Marketing Guide for Humanity’s Next Chapter

How AI Changes Your Customers

Exclusive Interview with Mark Schaefer

Mark W Schaefer

The rise of artificial intelligence isn’t just an upgrade to our technology; it’s a fundamental shift in what it means to be human and what it takes to lead a successful business. We’ve entered a new epoch defined by “synthetic humanity,” a term coined by Mark Schaefer to describe AI interactions that are indistinguishable from real human connection. This blurring of lines creates an enormous opportunity, which Mark Schaefer refers to as a “seam” — a moment of disruption wide open for innovators. But as algorithms become more skilled at simulating empathy and insight, what must leaders do to maintain authenticity and relevancy? In this exclusive conversation, Mark Schaefer breaks down why synthetic humanity is the most crucial concept for leaders to grasp today, how to use AI as a partner rather than a replacement, and the vital role of human creativity in a world of supercharged innovation.

The Internet, Smartphones, Social Media, and Now AI, Have All Shifted Customer Expectations

Mark Schaefer is a globally-acclaimed author, keynote speaker, and marketing consultant. He is a faculty member of Rutgers University and one of the top business bloggers and podcasters in the world. How AI Changes Your Customers: The Marketing Guide to Humanity’s Next Chapter is his twelfth book, exploring what companies should consider when it comes to artificial intelligence (AI) and their customers.

Below is the text of my interview with Mark and a preview of the kinds of insights you’ll find in How AI Changes Your Customers presented in a Q&A format:

1. I came across the term ‘synthetic humanity’ fairly early on in the book. Why is this concept so important, and what are the most important aspects for leaders to consider?

“Synthetic humanity” is my term for describing the emerging wave of AI interactions that appear, sound, and even feel human — yet are not human at all. This is not science fiction. Already, chatbots can hold natural conversations, generate art, or simulate empathy in ways that blur the line between authentic and artificial.

For leaders, this matters because customers don’t care whether an experience is powered by code or carbon; they care about how it feels. If synthetic humanity can deliver faster, easier, and more personalized service, people will embrace it. The more machines convincingly mimic us, the more vital it becomes to emphasize distinctly human qualities like compassion, vulnerability, creativity, and trust.

Leaders must navigate two urgent questions: Where do we lean into automation for efficiency? And where do we intentionally preserve human touch for meaning? Synthetic humanity can scale interactions, but it cannot scale authenticity. The most successful brands will be those that strike this balance — leveraging AI’s strengths while showcasing the irreplaceable heartbeat of humanity.

2. We discuss disruption quite a bit here on this blog. Can you share a bit more with our innovators about ‘seams’ and the opportunities they create with AI or otherwise?

Throughout history, disruptions to the status quo, such as pandemics, wars, or economic recessions, can either sink a business or elevate it to new heights. Every disruption creates a seam — a moment where the fabric of culture, business, or belief rips just wide enough for an innovator to crawl through and create something new.

We might be living in the ultimate seam.

Google CEO Sundar Pichai calls AI the most significant innovation in human history — more important than fire, medicine, or the internet. The power of AI seems absolute and threatening. For many, it’s terrifying.

Through my new book, I’m trying to get people to view disruption through a different lens: not fear, but immense possibility.

3. Given that AI has access to all of our accumulated wisdom, does it actually create unique insights and ideas, or will innovation always be left to the humans?

AI is extraordinary at remixing existing content. It can scan millions of data points, connect patterns we might miss, and surface possibilities at lightning speed. That feels like insight, and sometimes it is. However, there is a crucial distinction: AI doesn’t truly care. It lacks context, longing, and lived experience.

Innovation often begins with a problem that aches to be solved or a vision that comes from deep within human culture. AI can suggest ten thousand options, but only a person can say, “This one matters because it touches our values, our customers, our future.”

So the real power is in the partnership. AI accelerates discovery, clears away routine work, and even provokes us with new connections. Humans bring the spark of meaning, the intuition, and the courage to act on something that has never been tried before. Innovation is not being replaced. It is being supercharged. In my earlier book “Audacious: How Humans Win in an AI Marketing World,” I note that the bots are here, but we still own crazy!

This is a time for humans to transcend “competent.” Bots can be competent and ignorable.

4. Do you have any tips for us mere mortals on how to productively use AI without developing creative and intellectual atrophy?

Yes, and it starts with how you frame the role of AI in your life. If you treat it as a replacement, you risk letting your creative muscles go slack. If you treat it as a partner, you can actually get stronger.

Here are a few practical approaches. First, use AI to stretch your perspective, not to finish your work for you. Ask it to give you ten angles on a problem, then choose one and make it your own. Second, set boundaries. Write your first draft by hand or sketch ideas before you ever touch a prompt. Let AI react to your thinking, not define it. Third, use the tool to challenge yourself. Feed it your work and ask, “What am I missing? Where are my blind spots?”

Most importantly, keep doing hard things. Struggle is where growth happens. AI can smooth the path, but sometimes you need the climb. Treat the technology as a coach, not a crutch, and you will come out sharper, faster, and even more creative on the other side.

5. I’ve heard a little bit about AI literacy. What are some of the critical aspects that we should all be aware of or try to learn more about?

There are a few critical aspects everyone should know. First, bias. AI models are trained on human data, which means they inherit our blind spots and prejudices. If you don’t recognize this, you may mistake bias for truth. Second, limits. AI is confident even when it is wrong. Knowing how to fact-check and verify is essential. Third, prompting. The quality of your input shapes the quality of the output, so learning how to ask better questions is a new core skill.

Finally, ethics. Just because AI can do something does not mean it should. We all need to be asking: How does this affect privacy, autonomy, and trust?

AI literacy isn’t about becoming a coder. It is about being a thoughtful user, a skeptic when needed, and a leader who understands both the promise and the peril of these tools.

6. What do companies and sole proprietors worried about falling below the fold of the new AI-powered search results need to change online to stay relevant and successful?

I have many practical ideas about this in the book. In short, the old game of chasing clicks and keywords is fading. AI-powered search doesn’t just list links, it delivers answers. That means the winners will be those whose content and presence are woven deeply enough into the digital fabric that the algorithms can’t ignore them.

This requires a shift in focus. Instead of creating content that only ranks, create content that is referenced, cited, and trusted across the web. Build authority by being the source others turn to. Make your ideas so distinct and valuable that they become part of the training data itself. We are entering a golden age for PR!

It also means doubling down on brand signals that AI can’t manufacture. Human stories, original research, strong communities, and unique perspectives will travel farther than generic blog posts. And remember, AI models reward freshness and relevance, so showing up consistently matters.

The book also covers what I call “overrides.” If you create meaningful, loyal relationships with customers and word-of-mouth recommendations, that will override the AI recommendations. We consider AI recommendations. We ACT on human recommendations.

7. ‘Weaponizing kindness’ was a terrifying headline I stumbled across in your book. What do organizations need to consider when using AI to interact with customers and what traps are out in front of them?

That phrase is unsettling for a reason. AI can mimic empathy so well that it risks crossing into manipulation. Imagine a chatbot that remembers your child’s name, mirrors your mood, or expresses concern in just the right tone. Done responsibly, that feels like service. Done carelessly, it feels like exploitation.

Organizations need to recognize that kindness delivered at scale is powerful, but if it is hollow or purely transactional, customers will sense it. The first trap is confusing simulation with sincerity. Just because an AI can sound caring does not mean it actually cares. The second trap is overreach. Using personal data to create hyper-tailored interactions can quickly slip from helpful to creepy.

The safeguard is transparency and choice. Be clear about when a customer is interacting with AI. Use technology to enhance human care, not replace it. Always provide people with a way to connect with a real person.

Kindness is a sacred trust in business. Weaponize it, and you erode the very loyalty and love you are trying to build. Use it authentically, and you create relationships no machine can ever replicate.

8. What changing customer expectations (thanks to AI) might companies easily overlook and pay a heavy price for?

One of the biggest shifts is speed. Customers already expect instant answers, but AI raises the bar even higher. If your competitor offers a seamless, AI-powered interaction that solves a problem in seconds, your slower, clunkier process will feel intolerable.

Another overlooked expectation is personalization. People are starting to experience products, services, and recommendations that feel almost eerily tailored to them. That sets a new standard. Companies still delivering one-size-fits-all communication will look outdated. Don’t confuse “personalization” with “personal.”

Perhaps the most subtle change is trust. As customers realize machines can fake warmth and empathy, they will value genuine human touch even more. If every interaction feels synthetic, you risk losing trust, especially if you’re not transparent about it.

The price of ignoring these shifts is steep: irrelevance. Customers rarely complain about unmet expectations anymore; they simply leave. The opportunity is to stay alert, listen closely, and respond quickly as AI reshapes what “good enough” looks like. The companies that thrive will be those that not only keep pace with AI, but also double down on the irreplaceable humanity customers still crave.

9. What unintended consequences of AI do you think companies might face and may not be preparing for? (overcoming AI slander and falsehoods might be one – agree or disagree? Others?)

I agree. In fact, I predict in the book that we cannot foresee AI’s biggest impact yet, as it will likely be an unintended consequence of the technology’s use in an unexpected way.

Where could that occur? Maybe reputational risk at scale. AI systems will generate falsehoods with the same confidence they generate facts, and those errors can stick. A single hallucination about your company, repeated enough times, becomes “truth” in the digital bloodstream. Most companies are not prepared for the speed and reach of misinformation of this kind.

Another consequence is customer dependency. If people hand over more of their decisions to AI, they may lose patience for complexity or nuance in your offerings. That can push companies toward oversimplification, even when a richer human experience would build deeper loyalty.

There is also the cultural risk. Employees might over-rely on AI, quietly eroding skills, judgment, and creativity. A workforce that outsources too much thinking can become brittle in ways that only show up during a crisis.

The real challenge is that these consequences don’t announce themselves. They creep in. Which means leaders must actively audit how AI is being used, question where it might distort reality or weaken capability, and set up safeguards now. The companies that prepare will navigate disruption. The ones that ignore it will be blindsided.

10. Can companies make TOO MUCH use of AI? If so, what would the impacts look like?

Yes, and we will start seeing this more often. It is a pattern that has repeated through history — over-indexing on tech and then bringing the people back in!

When companies lean too heavily on AI, they risk draining the very humanity that makes them memorable. On the surface, it might seem like efficiency: faster service, lower costs, and greater scale. But underneath, the impacts can be corrosive. You might be messing with your brand!

First, customers may feel manipulated or devalued if a machine drives every interaction. Even perfect personalization can feel hollow if it lacks genuine care. Second, trust erodes when people sense that a brand hides behind automation rather than showing up with real human accountability. Third, within the company, over-reliance on AI can weaken employee judgment and creativity, resulting in a workforce that follows prompts rather than breaking new ground.

The real danger is commoditization. If every company automates everything, then no company stands out. The winners will be those who know when to say, “This moment deserves a person.” AI should be an amplifier, not a replacement. Too much of it and you don’t just lose connection, you lose your soul.

Conclusion

Thank you for the great conversation Mark!

I hope everyone has enjoyed this peek into the mind of the man behind the inspiring new title How AI Changes Your Customers: The Marketing Guide to Humanity’s Next Chapter!

Image credits: BusinessesGrow.com (Mark W Schaefer)

Content Authenticity Statement: If it wasn’t clear above, the short section in italics was written by Google’s Gemini with edits from Braden Kelley, and the rest of this article is from the minds of Mark Schaefer and Braden Kelley.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

You Need to Know What Your Customers Think of AI

You Need to Know What Your Customers Think of AI

GUEST POST from Shep Hyken

Ten years ago, only the most technologically advanced companies used AI — although it barely resembled what companies use today when communicating with customers — and it was very, very expensive. But not anymore. Today, any company can implement an AI strategy using ChatGPT-type technologies, often creating experiences that give customers what they want. But not always, which is why the information below is important.

The 2025 Findings

My annual customer service and customer experience (CX) research study surveys more than 1,000 U.S. consumers weighted to the population’s demographics of age, gender, ethnicity and geography. It included an entire group of questions focused on how customers react to and accept (or don’t accept) AI options to ask questions, resolve problems and communicate with a company or brand. Consider the following findings:

  • AI Success: Half of U.S. customers (50%) said they have successfully resolved a customer service issue using AI or ChatGPT-type technologies without needing human assistance. In 2024, only three out of 10 customers (32%) did so. That’s great news, but it’s important to point out that age makes a difference. Six out of 10 Gen-Z customers (61%) successfully used AI support versus just 32% of Boomers.
  • AI Is Far From Perfect: Half of U.S. customers (51%) said they received incorrect information from an AI self-service bot. Even with incredible improvement in AI’s capabilities, it still serves up wrong information. That destroys trust, not only in the company but also in the technology as a whole. A few bad answers and customers will be reluctant, at least in the near term, to choose self-service over the traditional mode of communication, the phone.
  • Still, Customers Believe: Four out of 10 customers (42%) believe AI and ChatGPT can handle complex customer service inquiries as effectively as humans. Even with the mistakes, customers believe AI solutions work. However, 86% of customers think companies using AI should always provide an option to speak or text with a real person.
  • The Phone Still Rules: It’s still too early to throw away phone support. My prediction is that it will be years, if ever, that human-to-human interactions completely disappear, which was proven when we asked, “When you have a problem or issue with a company, which solution do you prefer to use: phone or digital self-service?” The answer is that 68% of customers will still choose the phone over digital self-service. That number is highly influenced by the 82% of Baby Boomers who choose to call a company over any other type of digital support.
  • The Future Looks Strong For AI Customer Support: Six out of 10 customers (63%) expect AI-fueled technologies to become the primary mode of customer support. We asked the same question in 2021, and only 21% of customers felt this way.

The Strategy Behind Using AI For CX

  • Age Matters: As you can see from some of the above findings, there is a big generational gap between younger and older customers. Gen-Z customers are more comfortable, have had more success, and want more digital/AI interactions compared to older customers. Know your customer demographics and provide the appropriate support and communication options based on their age. Recognize you may need to provide different support options if your customer base is “everyone.”
  • Trust Is a Factor: Seven out of 10 customers (70%) have concerns about privacy and security when interacting with AI. Once again, age makes a difference. Trust and confidence with AI consistently decrease with age.

The Future of AI

As AI continues to evolve, especially in the customer service and experience world, companies and brands must find a balance between technology and the human touch. While customers are becoming more comfortable and finding success with AI, we can’t become so enamored with it that we abandon what many of our customers expect. The future of AI isn’t a choice between technology and humans. It’s about creating a blended experience that plays to the technology’s strengths and still gives customers the choice.

Furthermore, if every business had a 100% digital experience, what would be a competitive differentiator? Unless you are the only company that sells a specific product, everything becomes a commodity. Again, I emphasize that there must be a balance. I’ll close with something I’ve written before, but it bears repeating:

The greatest technology in the world can’t replace the ultimate relationship-building tool between a customer and a business: the human touch.

This article was originally published on Forbes.com.

Image Credits: Google Gemini


Why Context Engineering is the Next Frontier in AI

Why Context Engineering is the Next Frontier in AI

by Braden Kelley and Art Inteligencia

As we observe the rapid evolution of artificial intelligence, one thing has become abundantly clear: while raw processing power and sophisticated algorithms are crucial, the true key to unlocking AI’s transformative potential lies in its ability to understand and leverage context. We’ve seen remarkable advancements in generative AI and machine learning, but these technologies often stumble when faced with the nuances of real-world situations. This is why I believe context engineering – the discipline of explicitly designing and managing the contextual information available to AI systems – is not just an optimization, but the next fundamental frontier in AI innovation.

Think about human intelligence. Our ability to understand language, make decisions, and solve problems is deeply rooted in our understanding of context. A single word can have multiple meanings depending on the sentence it’s used in. A request can be interpreted differently based on the relationship between the people involved or the situation at hand. For AI to truly augment human capabilities and integrate seamlessly into our lives, it needs a similar level of contextual awareness. Current AI models often operate on relatively narrow inputs, lacking the broader understanding of user intent, environmental factors, and historical interactions that humans take for granted. Context engineering aims to bridge this gap, moving AI from being a powerful but often brittle tool to a truly intelligent and adaptable partner.

In the realm of artificial intelligence, context engineering is the strategic and human-centered practice of providing an AI system with the relevant background information it needs to understand a query or situation accurately. It goes beyond simple prompt design by actively building and managing the comprehensive context that surrounds an interaction. This includes integrating historical data, user profiles, real-time environmental factors, and external knowledge sources, allowing the AI to move from a narrow, transactional understanding to a more holistic, human-like awareness. By engineering this context, we enable AI to produce more accurate, personalized, and genuinely useful responses, bridging the gap between a machine’s logic and the nuanced complexity of human communication and problem-solving.

The field of context engineering encompasses a range of techniques and strategies focused on providing AI systems with relevant and actionable context. This includes:

  • Prompt Engineering: Crafting detailed and context-rich prompts that guide AI models towards desired outputs.
  • Memory Management: Implementing mechanisms for AI to remember past interactions and use that history to inform current responses.
  • External Knowledge Integration: Connecting AI systems to external databases, APIs, and real-time data streams to provide up-to-date and relevant information.
  • User Profiling and Personalization: Leveraging data about individual users to tailor AI responses to their specific needs and preferences.
  • Situational Awareness: Incorporating real-world contextual cues, such as location, time of day, and user activity, to make AI more responsive to the current situation.
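Several of these techniques can be combined in a single "context assembly" step that runs before the model is ever called. The sketch below is illustrative only: the profile fields, the history window, and the naive keyword retrieval are all hypothetical stand-ins for whatever user store, memory system, and knowledge base you actually integrate.

```python
def assemble_context(query: str, profile: dict, history: list, knowledge: list) -> str:
    """Combine a user profile, conversation memory, and retrieved knowledge
    into one context-rich prompt (prompt engineering + memory + retrieval)."""
    # Naive retrieval: keep documents sharing any word with the query.
    words = query.lower().split()
    relevant = [doc for doc in knowledge if any(w in doc.lower() for w in words)]
    return "\n".join([
        f"User profile: {profile}",
        f"Recent history: {' | '.join(history[-3:])}",  # short-term memory window
        f"Relevant knowledge: {' '.join(relevant) or 'none'}",
        f"Question: {query}",
    ])
```

The point of the design is that the model never sees the raw query alone; it always receives the query embedded in whatever context the engineering layer has judged relevant.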

A Human-Centered Blueprint for Implementation

Implementing context engineering is not a one-time technical fix; it is a continuous, human-centered practice that must be embedded into your innovation lifecycle. To move beyond a static, one-size-fits-all model and create truly intelligent, context-aware AI, consider this blueprint for action:

  • Step 1: Start with the Human Context. Before you even think about data streams or algorithms, you must first deeply understand the human being you are serving. Conduct ethnographic research, user interviews, and journey mapping to identify what context is truly relevant to your users. What are their goals? What unspoken needs do they have? What external factors influence their decisions? The most valuable context often isn’t in a database—it’s in the real-world experiences and emotional states of your users.
  • Step 2: Map the Contextual Landscape. Once you understand the human context, you can begin to identify and integrate the necessary data. This involves creating a “contextual map” that connects the human need to the available data sources. For a customer service AI, this map would link a customer’s inquiry to their purchase history, recent support tickets, and even their browsing behavior on your website. For a medical AI, the map would link a patient’s symptoms to their genetic data, environmental exposure, and family medical history. This mapping process ensures that the AI’s inputs are directly tied to what matters most to the user.
  • Step 3: Build a Dynamic Feedback Loop. The context of a situation is constantly changing. A great context-aware AI is not a static system but a learning one. Implement a continuous feedback loop where human users can correct the AI’s understanding, provide additional information, and refine its responses. This “human-in-the-loop” approach is vital for ethical and accurate AI. It allows the system to learn from its mistakes and adapt to new, unforeseen contexts, ensuring its relevance and reliability over time.
  • Step 4: Prioritize Privacy and Ethical Guardrails. The more context you provide to an AI, the more critical it becomes to manage that information responsibly. From the outset, you must design for privacy, collecting only the data you absolutely need and ensuring it is stored and used in a secure and transparent manner. Establish clear ethical guardrails for how the AI uses and interprets contextual information, particularly for sensitive data. This is not just a regulatory requirement; it is a fundamental aspect of building trust with your users and ensuring that your AI serves humanity, rather than exploiting it.
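
Steps 2 and 3 of the blueprint above can be made concrete with a small sketch: a "contextual map" tying each human need to its data sources, plus a human-in-the-loop correction log. The map entries and field names below are invented examples, not a prescribed schema.

```python
# Step 2: a contextual map linking human needs to data sources (illustrative).
contextual_map = {
    "billing inquiry": ["purchase_history", "recent_support_tickets", "current_bill"],
    "symptom report":  ["medical_history", "family_history", "recent_lab_results"],
}

# Step 3: a feedback log where humans correct the AI's understanding.
feedback_log = []

def gather_context(need, data_sources):
    # Pull only the sources the map says matter for this need
    sources = contextual_map.get(need, [])
    return {s: data_sources.get(s, "<missing>") for s in sources}

def record_correction(need, field_name, corrected_value):
    # Corrections accumulate and can drive later refinement of the map
    feedback_log.append({"need": need, "field": field_name, "value": corrected_value})

ctx = gather_context("billing inquiry", {"current_bill": "$42.17"})
record_correction("billing inquiry", "current_bill", "$42.17 (disputed)")
print(ctx["current_bill"])  # $42.17
```

Note that the `<missing>` markers are themselves useful context: they tell you which parts of the map your systems cannot yet populate, which is exactly where Step 2's integration work should focus next.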

By following these best practices, you can move beyond simple, reactive AI to a proactive, human-centered intelligence that understands the world not just as a collection of data points, but as a rich tapestry of interconnected context. This is the work that will define the next generation of AI and, in doing so, will fundamentally change how technology serves humanity.

Case Study 1: Improving Customer Service with Context-Aware AI Assistants

The Challenge: Generic and Frustrating Customer Service Chatbots

Many companies have implemented AI-powered chatbots to handle customer inquiries. However, these chatbots often struggle with complex or nuanced issues, leading to frustrating experiences for customers who have to repeat information or are given irrelevant answers. The lack of contextual awareness is a major limitation.

Context Engineering in Action:

A telecommunications company sought to improve its customer service chatbot by implementing robust context engineering. They integrated the chatbot with their CRM system, allowing it to access the customer’s purchase history, past interactions, and current account status. They also implemented memory management so the chatbot could retain information shared earlier in the conversation. Furthermore, they used prompt engineering to guide the chatbot to ask clarifying questions and to tailor its responses based on the specific product or service the customer was inquiring about. For example, if a customer asked about a billing issue, the chatbot could access their latest bill and provide specific details, rather than generic troubleshooting steps. It could also remember if the customer had contacted support recently for a related issue and take that into account.
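
The lookup described above can be sketched as follows. This is a toy reconstruction under stated assumptions (the CRM record shape, customer IDs, and helper function are all invented for illustration): before answering, the bot pulls the customer's record and checks for a recent related ticket so it can pick up where the last conversation left off.

```python
# Hypothetical CRM data for the telecom chatbot example.
CRM = {
    "cust-001": {
        "latest_bill": {"amount": 89.99, "period": "2025-07"},
        "recent_tickets": [{"topic": "billing", "days_ago": 3}],
    }
}

def answer_billing_question(customer_id):
    record = CRM.get(customer_id)
    if record is None:
        return "I couldn't find your account. Could you verify your details?"
    bill = record["latest_bill"]
    reply = f"Your {bill['period']} bill was ${bill['amount']:.2f}."
    # Context: acknowledge a recent related ticket instead of starting from scratch
    if any(t["topic"] == "billing" and t["days_ago"] <= 7
           for t in record["recent_tickets"]):
        reply += " I see you contacted us about billing recently, so let's pick up from there."
    return reply

print(answer_billing_question("cust-001"))
```

The specifics are invented, but the pattern matches the case study: specific bill details instead of generic troubleshooting, and continuity with prior support contact.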

The Impact:

The context-aware chatbot significantly improved customer satisfaction scores and reduced the number of inquiries that had to be escalated to human agents. Customers felt more understood and received more relevant and efficient support. The company also saw a decrease in customer churn. This case study highlights how context engineering can transform a basic AI tool into a valuable and helpful resource by enabling it to understand the customer’s individual situation and history.

Key Insight: By providing AI customer service assistants with access to relevant customer data and interaction history, companies can significantly enhance the quality and efficiency of support, leading to increased customer satisfaction and loyalty.

Case Study 2: Enhancing Medical Diagnosis with Contextual Patient Information

The Challenge: Over-reliance on Isolated Symptoms in AI Diagnostic Tools

AI is increasingly being used to assist medical professionals in diagnosing diseases. However, early AI diagnostic tools often focused primarily on analyzing individual symptoms in isolation, potentially missing crucial contextual information such as the patient’s medical history, lifestyle, environmental factors, and even subtle cues from their recent health records.

Context Engineering in Action:

A research hospital in the Pacific Northwest developed an AI-powered diagnostic tool for a specific type of rare disease. Recognizing the importance of context, they engineered the AI to integrate a wide range of patient data beyond just the presenting symptoms. This included the patient’s complete medical history (past illnesses, medications, allergies), family medical history, lifestyle information (diet, exercise, smoking habits), recent lab results, and even notes from previous doctor’s visits. The AI was also connected to relevant medical literature to understand the broader context of the disease and potential co-morbidities. By providing the AI with this rich contextual information, the researchers aimed to improve the accuracy and speed of diagnosis, especially in complex cases where isolated symptoms might be misleading.
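
As a toy illustration of the idea in this case study, consider a risk score that weighs contextual patient data alongside the presenting symptoms. The disease, fields, and weights below are entirely invented; the point is only that the same symptoms score differently once history, lifestyle, and labs are included.

```python
# Invented toy model: symptoms plus context vs. symptoms alone.
def risk_score(patient):
    score = 0.0
    score += 0.4 * len(set(patient["symptoms"]) & {"fatigue", "joint_pain", "rash"})
    if "autoimmune disorder" in patient["family_history"]:
        score += 0.5   # context: family history raises risk
    if patient["lifestyle"].get("smoker"):
        score += 0.2   # context: lifestyle factor
    score += 0.3 * len(patient["abnormal_labs"])
    return score

symptoms_only = risk_score({"symptoms": ["fatigue", "rash"], "family_history": [],
                            "lifestyle": {}, "abnormal_labs": []})
with_context = risk_score({"symptoms": ["fatigue", "rash"],
                           "family_history": ["autoimmune disorder"],
                           "lifestyle": {"smoker": True},
                           "abnormal_labs": ["ANA"]})
print(with_context > symptoms_only)  # True
```

Real diagnostic AI is vastly more sophisticated than a weighted sum, but the contrast captures the case study's claim: isolated symptoms can be misleading, and context changes the answer.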

The Impact:

The context-aware AI diagnostic tool demonstrated a significantly higher accuracy rate in identifying the rare disease compared to traditional methods and earlier AI models that lacked comprehensive contextual input. It was also able to flag potential risks and complications that might have been overlooked otherwise. This case study underscores the critical role of context engineering in high-stakes applications like medical diagnosis, where a holistic understanding of the patient’s situation can lead to more timely and effective treatments.

Key Insight: Context engineering, by enabling a holistic view of a patient’s health and history, is crucial for improving the accuracy and reliability of AI in critical fields like medical diagnosis.

The Future of AI is Contextual

The future of AI is not about building bigger models; it’s about building smarter ones. And a smarter AI is one that can understand and leverage the richness of context, just as humans do. From a human-centered perspective, context engineering is the practice that makes AI more useful, more reliable, and more deeply integrated into our lives in a way that truly helps us. By moving beyond simple prompts and isolated data points, we can create AI systems that are not just powerful tools, but truly intelligent and invaluable partners. The work of bridging the gap between isolated data and meaningful context is where the next great wave of AI innovation will emerge, and it is a task that will demand our full attention.

Image credit: Pexels

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Top 10 Human-Centered Change & Innovation Articles of July 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are July’s ten most popular innovation posts:

  1. Three Executive Decisions for Strategic Foresight Success or Failure — by Robyn Bolton
  2. 3 Secret Saboteurs of Strategic Foresight — by Robyn Bolton
  3. Five Unsung Scientific Discoveries Driving Future Innovation — by Art Inteligencia
  4. Unblocking Change — by Mike Shipulski
  5. Why Elastocalorics Will Redefine Our World — by Art Inteligencia
  6. People Will Be Competent and Hardworking – If We Let Them — by Greg Satell
  7. The Unsung Heroes of Culture — by Braden Kelley and Art Inteligencia
  8. Making it Safe to Innovate — by Janet Sernack
  9. Strategic Foresight Won’t Save Your Company — by Robyn Bolton
  10. Your Work Isn’t Transformative — by Mike Shipulski

BONUS – Here are five more strong articles published in June that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Build a Common Language of Innovation on your team

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:


Why Innovators Can’t Ignore the Quantum Revolution


GUEST POST from Art Inteligencia

In the world of innovation, we are always looking for the next big thing—the technology that will fundamentally change how we solve problems, create value, and shape the future. For the past several decades, that technology has been the classical computer, with its exponential increase in processing power. But a new paradigm is on the horizon, one that promises to unlock capabilities previously thought impossible: quantum computing. While it may seem like a distant, esoteric concept, innovators and business leaders who ignore quantum computing are doing so at their own peril. This isn’t just about faster computers; it’s about a complete re-imagining of what is computationally possible.

The core difference is simple but profound. A classical computer is like a single light switch—it can be either ON or OFF (1 or 0). A quantum computer, however, uses qubits that can be ON, OFF, or in a state of superposition, meaning it’s both ON and OFF at the same time. This ability, combined with entanglement, allows quantum computers to perform calculations in parallel and tackle problems that are intractable for even the most powerful supercomputers. The shift is not incremental; it is a fundamental leap in computational power, moving from a deterministic, linear process to a probabilistic, multi-dimensional one.
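
The light-switch analogy can be made numerically concrete. A qubit's state is a pair of amplitudes for |0⟩ and |1⟩; applying a Hadamard gate to a qubit that is definitely |0⟩ puts it into an equal superposition, where each measurement outcome has probability one half. The pure-Python sketch below shows the arithmetic (real work would use a quantum SDK such as Qiskit):

```python
import math

def hadamard(state):
    # Apply the Hadamard gate to a single-qubit state (a, b)
    a, b = state  # amplitudes of |0> and |1>
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

state = (1.0, 0.0)       # classical-like: definitely |0> ("OFF")
state = hadamard(state)  # superposition: "both ON and OFF at once"
probs = (abs(state[0]) ** 2, abs(state[1]) ** 2)
print(probs)             # each measurement probability is ~0.5
```

Applying the Hadamard gate a second time returns the qubit to |0⟩ exactly, which is a small taste of why quantum computation is interference of amplitudes rather than mere randomness.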

Quantum as an Innovation Engine: Solving the Unsolvable

For innovators, quantum computing is not a threat to be feared, but a tool to be mastered. It provides a new lens through which to view and solve the world’s most complex challenges. The problems that are “hard” for classical computers—like simulating complex molecules, optimizing global supply chains, or cracking certain types of encryption—are the very problems where quantum computers are expected to excel. By leveraging this technology, innovators can create new products, services, and business models that were simply impossible before.

Key Areas Where Quantum Will Drive Innovation

  • Revolutionizing Material Science: Simulating how atoms and molecules interact is a notoriously difficult task for classical computers. Quantum computers can model these interactions with unprecedented accuracy, accelerating the discovery of new materials, catalysts, and life-saving drugs in fields from energy storage to pharmaceuticals.
  • Optimizing Complex Systems: From optimizing financial portfolios to routing delivery trucks in a complex network, optimization problems become exponentially more difficult as the number of variables increases. Quantum algorithms can solve these problems much faster, leading to incredible efficiencies and cost savings.
  • Fueling the Next Wave of AI: Quantum machine learning (QML) can process vast, complex datasets in ways that are impossible for classical AI. This could lead to more accurate predictive models, better image recognition, and new forms of artificial intelligence that can find patterns in data that humans and classical machines would miss.
  • Securing Our Digital Future: While quantum computing poses a threat to current encryption methods, it also offers a solution. Quantum cryptography promises to create uncrackable communication channels, leading to a new era of secure data transmission.
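
To see why the optimization problems above overwhelm classical machines, consider route planning: the number of possible orderings of n stops grows factorially. The snippet below (with an invented one-dimensional toy network) shows both the explosion and why brute force only works at tiny scale:

```python
import itertools
import math

def count_routes(n_stops):
    # Number of possible visit orders for n stops
    return math.factorial(n_stops)

print(count_routes(5))   # 120 - trivially brute-forced
print(count_routes(20))  # 2432902008176640000 - hopeless to enumerate

# Brute force on a tiny invented network of three stops at 1D "positions"
stops = {"A": 1, "B": 5, "C": 2}
best = min(itertools.permutations(stops),
           key=lambda route: sum(abs(stops[route[i + 1]] - stops[route[i]])
                                 for i in range(len(route) - 1)))
print(best)  # ('A', 'C', 'B')
```

Quantum optimization algorithms do not enumerate every route either; the hope is that they explore this exponentially large space far more efficiently than classical heuristics can.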

Case Study 1: Accelerating Drug Discovery for a New Tomorrow

A major pharmaceutical company was struggling to develop a new drug for a rare disease. The traditional method involved months of painstaking laboratory experiments and classical computer simulations to model the interactions of a new molecule with its target protein. The sheer number of variables and possible molecular configurations made the process a slow and expensive trial-and-error loop, often with no clear path forward.

They partnered with a quantum computing research firm to apply quantum simulation algorithms. The quantum computer was able to model the complex quantum mechanical properties of the molecules with a level of precision and speed that was previously unattainable. Instead of months, the simulations were run in days. This allowed the human research team to rapidly narrow down the most promising molecular candidates, saving years of R&D time and millions of dollars. The quantum computer didn’t invent the drug, but it acted as a powerful co-pilot, guiding the human innovators to the most probable solutions and dramatically accelerating the path to a breakthrough.

This case study demonstrates how quantum computing can transform the bottleneck of complex simulation into a rapid discovery cycle, augmenting the human innovator’s ability to find life-saving solutions.

Case Study 2: Optimizing Global Logistics for a Sustainable Future

A global shipping and logistics company faced the monumental task of optimizing its entire network of ships, trucks, and warehouses. Factors like fuel costs, weather patterns, traffic, and delivery windows created a mind-bogglingly complex optimization problem. The company’s classical optimization software could only provide a suboptimal solution, leading to wasted fuel, delayed deliveries, and significant carbon emissions.

Recognizing the limitations of their current technology, they began to explore quantum optimization. By using a quantum annealer, a type of quantum computer designed for optimization problems, they were able to model the entire network simultaneously. The quantum algorithm found a more efficient route and scheduling solution that reduced fuel consumption by 15% and cut delivery times by an average of 10%. This innovation not only provided a significant competitive advantage but also had a profound positive impact on the company’s environmental footprint. It was an innovation that leveraged quantum computing to solve a business problem that was previously too complex for existing technology.
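
A quantum annealer like the one described above is typically fed a QUBO (quadratic unconstrained binary optimization) problem: choose 0/1 values for each variable to minimize an "energy" made of per-variable costs and pairwise interaction penalties. The toy below, with invented weights, brute-forces a three-variable QUBO classically just to show the encoding an annealer would search:

```python
import itertools

# Invented QUBO weights: diagonal entries reward turning a route on,
# off-diagonal entries penalize pairs of overlapping routes.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def energy(bits):
    # QUBO objective: sum of w * x_i * x_j over all weighted pairs
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2.0
```

At three variables this is trivial; at thousands of ships, trucks, and time windows the bit-string space is astronomically large, which is exactly the regime where annealing hardware is pitched as an alternative to classical solvers.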

This example shows that quantum’s power to solve previously intractable optimization problems can lead to both significant cost savings and sustainable, planet-friendly outcomes.

The Innovator’s Call to Action

The quantum revolution is not a distant sci-fi fantasy; it is a reality in its nascent stages. For innovators, the key is not to become a quantum physicist overnight, but to understand the potential of the technology and to start experimenting now. Here are the steps you must take to prepare for this new era:

  • Educate and Evangelize: Start a dialogue about quantum computing and its potential applications in your industry. Find internal champions who can explore this new frontier and evangelize its possibilities.
  • Find Your Partners: You don’t have to build your own quantum computer. Partner with academic institutions, research labs, or quantum-as-a-service providers to start running pilot projects on a cloud-based quantum machine.
  • Identify the Right Problems: Look for the “intractable” problems in your business—the optimization challenges, the material science hurdles, the data analysis bottlenecks—and see if they are a fit for quantum computing. These are the problems where a quantum solution will deliver a true breakthrough.

The greatest innovations are born from a willingness to embrace new tools and new ways of thinking. Quantum computing is the most powerful new tool we have ever seen. For the innovator of tomorrow, understanding and leveraging this technology will be the key to staying ahead. The quantum leap is upon us—are you ready to take it?

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Gemini


Boring AI is the Key to Better Customer Service


GUEST POST from Shep Hyken

Boring can be a good thing. When something works the way it’s supposed to, it shouldn’t be a surprise. There shouldn’t be friction or drama if a customer has a problem or wants a question answered. It should just be easy. And when it comes to customer service, “easy” and “boring” are good. The experience should just happen the way the customer wants it to happen. You might call that boring. I call that excellent.

That was the beginning of a conversation I had with Damon Covey, general manager of unified communications and collaboration for GoTo, on Amazing Business Radio. GoTo is one of the leading cloud communications companies, providing software and solutions to companies of all sizes and helping them implement AI systems that work, without the complexity and stress that can come from new technology. Covey’s goal for our conversation was to demystify AI, cutting through the noise and complexities of flashy AI and taking it down to a practical level. Boring was the word he liked to use, emphasizing it should be easy, simple and uncomplicated.

In our discussion, Covey said that large companies used to make six- and seven-figure investments to implement AI. Today, AI technology is far superior and, at the same time, much less expensive, so even the smallest companies can afford it. They can get advanced technology for hundreds of dollars, not hundreds of thousands of dollars. Covey said, “For example, a small bike shop or an automotive dealership can now provide the same advanced customer service options as large corporations.” With that in mind, here are the main takeaways from our conversation:

Conversational AI

Until recently (within the past two or three years), a basic chatbot had to follow pre-set rules. Conversational AI provides a much broader opportunity, allowing a computer to interact with people in a natural, human-like manner. Today, AI can understand and respond to customers’ questions and issues with much more flexibility. It has the capability to recognize different languages and understand fumbled phrases, much like a human would. By using conversational AI, businesses can provide 24/7 service, allowing them to respond to customer queries and schedule appointments even when the customer contacts them outside of regular business hours.

Treat AI Like a Team Member

If you hire a new employee, you train them. Treat your AI solutions the same way. Covey said that, similar to training an employee, you need to set specific parameters and provide the AI with the necessary information to ensure it stays within the scope of your business requirements. He emphasized the importance of making sure the AI only draws from the information provided by your business, such as your website, FAQ pages, product manuals, etc., rather than pulling from a source outside of your company, to maintain accuracy and relevance. Covey said that AI should be continuously optimized and trained over time to improve its performance, much like you would train and coach a human employee to expand their capabilities.

Productivity: Automating Processes

Covey talked about automating processes. Anything you do more than three times can be a candidate for AI automation. For example, AI can integrate with a business’ telecommunications system to automate the process of taking notes during calls. It can then summarize the call, put the information into the customer’s record and create a list of next steps, if appropriate. This is a simple function that helps employees be more productive. Instead of an employee typing notes and summarizing the call, AI can handle the task so the employee can move on to helping the next customer.

Augmenting the Business

AI can help businesses do things they don’t normally do, such as remain open for certain functions (like customer support) after hours. It can act as an after-hours receptionist, answering phone calls, setting appointments or providing basic information to customers after business hours. That turns a business that’s typically open during traditional hours to a 24/7 operation.

It is Easier Than You Think

At the end of the interview, Covey dropped a nugget of wisdom that is the perfect way to close this article. For many, especially smaller organizations, deciding what technology to use and how to best use AI can be a daunting decision. It shouldn’t be. Covey says, “Start with the problem you want to solve, and solve for that problem.” He added that you should start using the technology for small problems. Once you understand how it works, the more complicated issues will be easier to solve.

And that brings us back to where we started. AI doesn’t need to be complicated or flashy. It should be boring—in a good way. Start small, focus on one problem at a time and let AI do what it’s supposed to do: make customer service easier and more efficient. When done right, your customers won’t be amazed by the AI—they’ll just be amazed by how easy it is to do business with you.

Image Credit: Unsplash

This article was originally published on Forbes.com

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.