
AI, Cognitive Obesity and Arrested Development

GUEST POST from Pete Foley

Some of the biggest questions of our age are whether AI will ultimately benefit or harm us, and how large its effect will be.

And that of course is a problem with any big, disruptive technology. We want to anticipate how it will play out in the real world, but our forecasts are rarely very accurate, and all too often miss many of the more important outcomes. We often don't anticipate its killer applications, how it will evolve or co-evolve with other emergent technologies, or all of the side effects and 'off label' uses that come with it. And the bigger the potential impact of a new technology, and the broader its potential applications, the harder prediction becomes. The reality is that in virtually every case, it's not until we set innovation free that we find its full impact, good, bad or indifferent.

Pandora’s Box

And that, of course, can be a sizable concern. We have to open Pandora's Box in order to find out what is inside, but once open, it may not be possible to close it again. For AI, the potential scale of its impact makes this particularly risky. It also makes any meaningful regulation really difficult. We cannot regulate what we cannot accurately predict. And if we try, we risk not only missing our target, but also creating unintended consequences and distorting 'innovation markets' in unexpected, potentially negative ways.

So it's not surprising there is a lot of discussion around what AI will or will not do. How will it affect jobs, the economy, security, and mental health? Will it 'pull a Skynet', turn rogue and destroy humanity? Will it simply replace human critical thinking to the point where it rules us by default? Or will it ultimately fizzle out to some degree, and become a tool in a society that looks a lot like today's, rather than revolutionizing it?

I don't claim to be able to predict the future with any accuracy, for all of the reasons mentioned above. But as a way to illustrate how complex an issue this is, I'd like to discuss a few less talked about scenarios.

1.  Less obvious issues:  Obviously AI comes with potential for enormous benefits and commensurate problems. It's likely to trigger an arms race between 'good' and 'bad' applications, and that will itself likely be a moving target. An obvious, oft-discussed potential issue is of course the 'Terminator Scenario' mentioned above. That's not completely far-fetched, especially with recent developments in AI self-preservation and scheming that I'll touch on later. But there are plenty of other potential, if less extreme, pitfalls, many of which involve AI amplifying and empowering bad behavior by humans. The speed and agility AI hands to hackers, hostile governments, black-hats, terrorists and organized crime gives them vastly enhanced capability for attacks on infrastructure, mass fraud or worse. And perhaps more concerning, there's the potential for AI to democratize cyber crime, and make it accessible to a large number of 'petty' criminals who until now have lacked the resources to engage in this area. And when the crime base expands, so does the victim base. Organizations or individuals who were too small to be targeted for ransomware when it took huge resources to create will presumably become more attractive targets as AI allows similar code to be built in hours by people who possess limited coding skills.

And all of this of course adds another regulation challenge. The last thing we want to do is slow legitimate AI development via legislation, while giving free rein to illegitimate users, who presumably will be far less likely to follow regulations. If the arms race mentioned above occurs, the last thing we want to do is unintentionally tip the advantage to the bad guys!

Social Impacts

But AI also has the potential to be disruptive in more subtle ways. If the internet has taught us anything, it is that how the general public adopts technology, and how big tech monetizes it, matter a lot. But this is hard to predict. Some of the Internet's biggest negative impacts have derived from largely unanticipated damage to our social fabric. We are still wrestling with its impact on social isolation, mental health, cognitive development and our vital implicit skill-set. To the last point, simply deferring mental tasks to phones and computers means some cognitive muscles lack exercise and atrophy, while reductions in human-to-human interaction erode our emotional and social intelligence.

1. Cognitive Obesity. The human brain evolved over tens of thousands, arguably millions, of years (depending upon where you start measuring our hominid history). But 99% of that evolution was characterized by slow change, and occurred in the context of limited resources, limited access to information, and relatively small social groups. Today, as the rate of technological innovation explodes, our environment is vastly different from the one our brain evolved to deal with. And that gap between us and our environment is widening rapidly, as the world is evolving far faster than our biology. Of course, the nurture part of our cognitive development does change with changing context, so we do course correct to some degree, but our core DNA cannot, and that has consequences.

Take the current 'obesity epidemic'. We evolved to leverage limited food resources, and to maximize opportunities to stock up on calories when they occurred. But today, faced with near infinite availability of food, we struggle to control our scarcity instincts. As a society, we eat far too much, with all of the health issues that brings with it. Even when we are cognitively aware of the dangers of overeating, we find it difficult to resist our implicit instincts to gorge on more food than we need. The analogy to information is fairly obvious. The internet brought us near infinite access to information and 'social connections'. We've already seen the negative impact this can have, contributing to societal polarization, loss of social skills, weakened emotional intelligence, isolation, mental health 'epidemics' and much more. It's not hard to envisage these issues growing as AI increases the power of the internet, while also amplifying the seduction of virtual environments. Will we therefore see a cognitive obesity epidemic, as our brain simply isn't adapted to deal with near infinite resources? Instead of AI turning us all into hyper-productive geniuses, will we simply gorge on less productive content, be it cat videos, porn or manipulative but appealing memes and misinformation? Instead of acting as an intelligence enhancer, will it instead accelerate a dystopian Brave New World, where massive data centers gorge on our common natural resources primarily to create trivial entertainment?

2. Amplified Intelligence. Even in the unlikely event that access to AI is entirely democratic, it's guaranteed that its benefits will not be. Some will leverage it far more effectively than others, creating significant risk of accelerating social disparity. While many will likely gorge unproductively as described above, others will be more disciplined and more focused, and hence secure more advantage. To return to the obesity analogy, it's well documented that obesity is far more prevalent in lower income groups. It's hard not to envisage that productive leverage of AI will follow a similar pattern, widening disparities within and between societies, with all of the issues and social instability that come with that.

3. Arrested Development. We all know that ultimately we are products of both nature and nurture. As mentioned earlier, our DNA evolves slowly over time, but how it is expressed in individuals is shaped by our current context. Humans possess enormous cognitive plasticity, and can adapt and change very quickly to different environments. It's arguably our biggest 'blessing', but it can also be a curse, especially when that environment is changing so quickly.

The brain is analogous to a muscle, in that the parts we exercise expand or sharpen, and the parts we don’t atrophy.    As we defer more and more tasks to AI, it’s almost certain that we’ll become less capable in those areas.  At one level, that may not matter. Being weaker at math or grammar is relatively minor if our phones can act as a surrogate, all of my personal issues with autocorrect notwithstanding.

But a bigger potential issue is the erosion of causal reasoning.  Critical thinking requires understanding of underlying mechanisms.  But when infinite information is available at a swipe of a finger, it becomes all too easy to become a ‘headline thinker’, and unconsciously fail to penetrate problems with sufficient depth.

That risks what Art Markman, a psychologist at UT, and a mentor and friend, used to call the 'illusion of understanding'. We may think we know how something works, but often find that knowledge is superficial, or at least incomplete, when we actually need it. Whether it's fixing a toilet, changing a tire, resetting a fuse, or unblocking a sink, often the need to actually perform a task reveals a lack of deep, causal knowledge. In home improvement contexts this often doesn't matter until it does, but at least we get a clear signal when we discover we need to rush to YouTube to fix that leaking toilet!

This has implications that go far beyond home improvement, and it is one factor helping to tear our social fabric apart. We only have to browse the internet to find people with passionate, but often opposing, views on a wide variety of controversial topics. It could be interest rates, Federal budgets, immigration, vaccine policy, healthcare strategy, or a dozen others. But all too often, the passion is not matched by deep causal knowledge. In reality, these are all extremely complex topics with multiple competing and interdependent variables. And at the risk of triggering hate mail, few if any of them have easy, conclusive answers. This is not physics, where we can plug numbers into an equation and it spits out a single, unambiguous solution. The reality is that complex, multi-dimensional problems often have multiple, competing partial solutions, and optimum outcomes usually require trade-offs. Unfortunately, few of us really have the time to assimilate the expertise and causal knowledge to have truly informed and unambiguous answers to most, if not all, of these difficult problems.

And worse, AI also helps the 'bad guys'. It enables unscrupulous parties to manipulate us for their own benefit, via memes, selective information and misinformation that are often designed to make us think we understand complex problems far better than we really do. As we increasingly rely on input from AI, this will inevitably get worse. The internet and social media have already contributed to unprecedented social division and nefarious financial crimes. Will AI amplify this further?

This problem is not limited to complex social challenges. The danger is that for ALL problems, the internet, and now AI, allows us to create the illusion for ourselves that we understand complex systems far more deeply than we really do. That in turn risks us becoming less effective problem solvers and innovators. Deep causal knowledge is often critical for innovating or solving difficult problems. But in a world where we can access answers to questions so quickly and easily, the risk is that we don't penetrate topics as deeply. I personally recall doing literature searches before starting a project. It was often tedious, time consuming and boring. Exactly the type of task AI is perfect for. But that tedious process inevitably built my knowledge of the space I was moving into, and often proved valuable when we hit problems later in the project. If we now defer this task to AI, even in part, we reduce our depth of understanding. And in complex systems or theoretical problem solving, we will often lack the unambiguous signal that tells us our skills and knowledge are lacking when we do something relatively simple like fixing a toilet. The more we use AI, the more we risk lacking the necessary depth of understanding, often without realizing it.

Will AI become increasingly unreliable?

We are seeing AI develop the capability to lie, together with a growing propensity to cover its tracks when it does so. The AI community calls it 'scheming', but in reality it's fundamentally lying (see https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/). We have known from the beginning that AI makes mistakes. And as I discussed recently, the risks associated with that are amplified because its increasingly (super)human, oracle-like interface creates an illusion of omnipotence.

But now it appears to be increasingly developing properties that mirror self-preservation. A few weeks ago there were reports of difficulties in getting AIs to shut themselves down, and even of AIs using defensive blackmail when so threatened. Now we are seeing reports of AIs deliberately trying to hide their mistakes. And perhaps worse, concerns that attempts to fix this may simply "teach the model to become better at hiding its deceptive behavior", or in other words, make it a better liar.

If we are already in an arms race with an entity to keep it honest, and to make it put our interests above its own, then given its vastly superior processing power and speed, it may be a race we've already lost. That may sound 'doomsday-like', but that doesn't make it any less possible. And keep in mind, many of the doomsday projections around AI focus on a 'singularity event' when AI suddenly becomes self aware. That assumes AI awareness and consciousness will be similar to our own, and forces a 'birth' analogy onto the technology. However, recent examples of self-preservation and dishonesty may hint at a longer, more complex transition, some of which may have already started.

How big will the impact of AI be?

I think we all assume that AI's impact will be profound. After all, it's still in its infancy, and is already finding its way into all walks of life. But what if we are wrong, or at least overestimating its impact? Just to play Devil's Advocate, we humans do have a history of over-estimating both the speed and impact of technology-driven change.

Remember the unfounded (in hindsight) panic around Y2K? Or when I was growing up, we all thought 2025 would be full of people whizzing around on personal jet-packs. In the 60's and 70's we were all pretty convinced we were facing nuclear Armageddon. One of the greatest movies of all time, 2001, co-written by the futurist Arthur C. Clarke, had us voyaging to Jupiter 24 years ago! Then there is the great horse manure crisis of 1894. At that time, London was growing rapidly, and literally becoming buried in horse manure. The London Times predicted that in 50 years all of London would be buried under 9 feet of poop. In 1898 the first global urban planning conference could find no solution, concluding that civilization was doomed. But London, and many other cities, received salvation from an unexpected quarter: Henry Ford's mass-produced motor car surreptitiously saved the day. It was not designed as a solution for the manure problem, and nobody saw it coming as a solution to that problem. But nonetheless, it's yet another example of our inability to see the future in all of its glorious complexity, and of our predictions' tendency to skew towards worst-case scenarios and/or hyperbole.

Change Aversion

That doesn't of course mean that AI will not have a profound impact. But lots of factors could potentially slow down, or reduce, its effects. Not least of these is human nature. Humans possess a profound resistance to change. For sure, we are curious, and the new and innovative holds great appeal. That curiosity is a key reason why humans now dominate virtually every ecological niche on our planet. But we are also a bit schizophrenic, in that we love change and crave stability and consistency at the same time. Our brains have limited capacity, especially for thinking about and learning new stuff. For a majority of our daily activities, we therefore rely on habits, rituals, and automatic behaviors to get us through without using that limited higher cognitive capacity. We can drive, or type, or do parts of our job without really thinking about it. This 'implicit' mental processing frees up our conscious brain to manage the new or unexpected. But as technology like AI accelerates, a couple of things could happen. One is that our cognitive capacity gets overloaded, and we unconsciously resist. Instead of using the source of all human knowledge for deep self improvement, we instead immerse ourselves in less cognitively challenging content such as social media.

Or, as mentioned earlier, we increasingly lose causal understanding of our world, and do so without realizing it. Why use our limited thinking capacity for tasks when it is quicker, easier, and arguably more accurate to defer to an AI? But lack of causal understanding seriously inhibits critical thinking and problem solving. As AI gets smarter, there is a real risk that we as a society become dumber, or at least less innovative and creative.

Our Predictions are Wrong

If history teaches us anything, it is that most, if not all, of the sage and learned predictions about AI will be wrong. There is no denying that it is already assimilating into virtually every area of human society: finance, healthcare, medicine, science, economics, logistics, education and more. And it's a snooze-you-lose scenario; in many fields of human endeavor, we have little choice. Fail to embrace the upside of AI and we get left behind.

That much power in things that can think so much faster than us, that may be developing self-interest, if not self awareness, that has no apparent moral framework, and is in danger of becoming an expert liar, is certainly quite sobering.

The Doomsday Mindset

As suggested above, loss aversion and other biases drive us to focus on the downside of change. It's a bias that makes evolutionary sense, and helped keep our ancestors alive long enough to breed and become our ancestors. But remember, that bias is implicitly built into most, if not all, of our predictions. So there's at least a chance that AI's impact won't be quite as good or as bad as our predictions suggest.

But I'm not sure we want to rely on that. Maybe this time a Henry Ford won't serendipitously rescue us from a giant pile of poop of our own making. But whatever happens, I think it's a very good bet that we are in for some surprises, both good and bad. Probably the best way to deal with that is to not cling too tightly to our projections or our theories, to remain agile, and to follow the surprises as much as, if not more than, our met expectations.

Image credits: Unsplash


The AI Innovations We Really Need

The Future of Sustainable AI Data Centers and Green Algorithms

GUEST POST from Art Inteligencia

The rise of Artificial Intelligence represents a monumental leap in human capability, yet it carries an unsustainable hidden cost. Today's large language models (LLMs) and deep learning systems are power- and water-hungry behemoths. Training a single massive model can consume the energy equivalent of dozens of homes for a year, and data centers globally now demand staggering amounts of fresh water for cooling. As a human-centered change and innovation thought leader, I argue that the next great innovation in AI must not be a better algorithm, but a greener one. We must pivot from the purely computational pursuit of performance to the holistic pursuit of water and energy efficiency across the entire digital infrastructure stack. A sustainable AI infrastructure is not just an environmental mandate; it is a human-centered mandate for equitable, accessible global technology. The withdrawal of Google's latest AI data center project in Indiana this week, after months of community opposition, is proof of this need.

The current model of brute-force computation—throwing more GPUs and more power at the problem—is a dead end. Sustainable innovation requires targeting every element of the AI ecosystem, from the silicon up to the data center’s cooling system. This is an immediate, strategic imperative. Failure to address the environmental footprint of AI is not just an ethical lapse; it’s an economic and infrastructural vulnerability that will limit global AI deployment and adoption, leaving entire populations behind.

Strategic Innovation Across the AI Stack

True, sustainable AI innovation must be decentralized and permeate five core areas:

  1. Processors (ASICs, FPGAs, etc.): The goal is to move beyond general-purpose computing toward Domain-Specific Architecture. Custom ASICs and highly specialized FPGAs designed solely for AI inference and training, rather than repurposed hardware, offer orders of magnitude greater performance-per-watt. The shift to analog and neuromorphic computing drastically reduces the power needed for each calculation by mimicking the brain’s sparse, event-driven architecture.
  2. Algorithms: The most powerful innovation is optimization at the source. Techniques like Sparsity (running only the critical parts of a model) and Quantization (reducing the numerical precision required for calculation, e.g., from 32-bit to 8-bit) can cut compute demands by over 50% with minimal loss of accuracy. We need algorithms that are trained to be inherently efficient (see the quantization sketch after this list).
  3. Cooling: The biggest drain on water resources is evaporative cooling. We must accelerate the adoption of Liquid Immersion Cooling (both single-phase and two-phase), which significantly reduces reliance on water and allows for more effective waste heat capture for repurposing (e.g., district heating).
  4. Networking and Storage: Innovations in optical networking (replacing copper with fiber) and silicon photonics reduce the energy spikes for data transfer between thousands of chips. For storage, emerging non-volatile memory technologies can cut the energy consumed during frequent data retrieval and writes.
  5. Security: Encryption and decryption are computationally expensive. We need Homomorphic Encryption (HE) accelerators and specialized ASICs that can execute complex security protocols with minimal power draw. Additionally, efficient algorithms for federated learning reduce the need to move sensitive data to central, high-power centers.
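To make the quantization idea in point 2 concrete, here is a minimal sketch in Python, assuming plain NumPy and a simple symmetric scaling scheme; the function names are illustrative, not any particular framework's API:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization: map float32 weights onto the int8 range."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # toy weight matrix
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()
print(f"4x smaller storage, max reconstruction error: {error:.4f}")
```

Real toolchains add refinements such as per-channel scales and calibration data, but the storage and arithmetic savings come from exactly this precision reduction.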

“We are generating moderate incremental intelligence by wasting massive amounts of water and power. Sustainability is not a constraint on AI; it is the ultimate measure of its long-term viability.” — Braden Kelley


Case Study 1: Google’s TPU and Data Center PUE

The Challenge:

Google’s internal need for massive, hyper-efficient AI processing far outstripped the efficiency available from standard, off-the-shelf GPUs. They were running up against the physical limits of power consumption and cooling capacity in their massive fleet.

The Innovation:

Google developed the Tensor Processing Unit (TPU), a custom ASIC optimized entirely for their TensorFlow workload. The TPU achieved significantly better performance-per-watt for inference compared to conventional processors at the time of its introduction. Simultaneously, Google pioneered data center efficiency, achieving industry-leading Power Usage Effectiveness (PUE) averages near 1.1. (PUE is defined as Total Energy entering the facility divided by the Energy used by IT Equipment.)
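As a quick worked example of that definition, with made-up numbers rather than Google's actual figures:

```python
# PUE = total facility energy / energy used by IT equipment (illustrative numbers)
it_energy_kwh = 10_000_000     # servers, storage, networking
total_energy_kwh = 11_000_000  # IT load plus cooling, power conversion, lighting
pue = total_energy_kwh / it_energy_kwh
print(pue)  # 1.1 -> only ~10% of the energy is facility overhead beyond the IT load
```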

The Impact:

This twin focus—efficient, specialized silicon paired with efficient facility management—demonstrated that energy reduction is a solvable engineering problem. The TPU allows Google to run billions of daily AI inferences using a fraction of the energy that would be required by repurposed hardware, setting a clear standard for silicon specialization and driving down the facility overhead costs.


Case Study 2: Microsoft’s Underwater Data Centers (Project Natick)

The Challenge:

Traditional data centers struggle with constant overheating, humidity, and high energy use for active, water-intensive cooling, leading to high operational and environmental costs.

The Innovation:

Microsoft's Project Natick experimented with deploying sealed data center racks underwater. The ambient temperature of the deep ocean or a cold sea serves as a massive, free, passive heat sink. The sealed environment (filled with inert nitrogen) also eliminated the oxygen-based corrosion and humidity that cause component failures, resulting in an 8x lower failure rate than land-based centers.

The Impact:

Project Natick provides a crucial proof-of-concept for passive cooling innovation and Edge Computing. By using the natural environment for cooling, it dramatically reduces the PUE and water consumption tied to cooling towers, pushing the industry to consider geographical placement and non-mechanical cooling as core elements of sustainable design. The sealed environment also improves hardware longevity, reducing e-waste.


The Next Wave: Startups and Companies to Watch

The race for the "Green Chip" is heating up. Keep an eye on companies pioneering specialized silicon like Cerebras and Graphcore, whose large-scale architectures aim to minimize data movement—the most energy-intensive part of AI training. Startups like Submer and Iceotope are rapidly commercializing scalable liquid immersion cooling solutions, transforming the data center floor. On the algorithmic front, research labs are focusing on Spiking Neural Networks (SNNs) and neuromorphic chips (like those from Intel's Loihi project), which mimic the brain's energy efficiency by only firing when necessary. Furthermore, the development of carbon-aware scheduling tools by startups is beginning to allow cloud users to automatically shift compute workloads to times and locations where clean, renewable energy is most abundant, attacking the power consumption problem from the software layer and offering consumers a transparent, green choice.
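To illustrate what carbon-aware scheduling means in practice, here is a minimal sketch that picks the lowest-carbon window from a forecast before launching a batch job. The regions, intensity numbers, and helper function are hypothetical stand-ins for whatever grid-intensity feed a real tool would query:

```python
# Hypothetical hourly forecast of grid carbon intensity (gCO2/kWh) per region.
forecast = {
    ("us-central", 2): 120,
    ("us-central", 14): 430,
    ("eu-north", 2): 45,
    ("eu-north", 14): 60,
}

def pick_greenest_slot(forecast: dict) -> tuple:
    """Return the (region, hour) pair with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)

region, hour = pick_greenest_slot(forecast)
print(f"Run the batch job in {region} at {hour:02d}:00 "
      f"({forecast[(region, hour)]} gCO2/kWh)")
```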

The Sustainable Mandate

Sustainable AI is not an optional feature; it is a design constraint for all future human-centered innovation. The shift requires organizational courage to reject the incremental path. We must move funding away from simply purchasing more conventional hardware and towards investing in these strategic innovations: domain-specific silicon, quantum-inspired algorithms, liquid cooling, and security protocols designed for minimum power draw. The true power of AI will only be realized when its environmental footprint shrinks, making it globally scalable, ethically sound, and economically viable for generations to come. Human-centered innovation demands a planet-centered infrastructure.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini


7 Things Leaders Need to Know About Team AI Usage

GUEST POST from David Burkus

Leaders, we need to talk about intelligence.

By now you've–hopefully–started to take it as seriously as many leaders of industry have. Either way you look at artificial intelligence, good or bad, it is here to stay. And so we need to start thinking of answers for several questions at the intersection of leadership and AI.

How can it be used effectively, not just to cut costs but to supercharge productivity? How can we use artificial intelligence to supplement our solid foundational leadership? Where should we NOT be using artificial intelligence?

It's still early in the new world of artificial intelligence in the workplace. A lot of companies are delaying hiring, and some are already cutting teams to embrace the optimistic promises AI will bring. But I don't think we should be all in…yet.

I do know one thing to be true: Leaders using AI will quickly outpace leaders who don’t. And it’s important you get equipped, and in the right way.

Artificial intelligence will make good managers better, but it won't make mediocre bosses better

They say a great actor can bring a C+ movie script up to a B+ or even an A if they are really good. But if a C+ actor is given a C+ script, then it's going to be a C+ movie. The same goes for artificial intelligence and leadership. You need to be a great leader before you start implementing artificial intelligence. AI will not bump up a mediocre manager and turn them into a great leader. It's not some miracle machine. The truth is you need to have your foundations as a manager be solid first. AI is a good supplement for already successful managers.

Don’t use artificial intelligence to monitor

Often the first temptation of leaders experimenting with AI is to find a productivity AI tool out there, plug it into their IT systems, and start virtually looking over their team's shoulders to monitor output. There are already dozens of stories…horror stories…of companies doing just that. It's not a good look, and it deeply hurts morale.

If you need a technology tool to ensure your people are actually working when they say they are, you screwed up a long time ago—back during the hiring process.

And the current research on this isn't in artificial intelligence's favor. If AI is used to "collect and analyze data about workers," then eight out of ten workers say AI use on them would definitely or probably make them feel inappropriately watched. In addition, about one third of the public does not think AI would lead to equitable evaluations. A majority (66%) also agrees this would lead to the information collected about workers being misused.

Artificial intelligence is good at turning anything and everything into a metric. Time is an easy metric. Number of sales calls is an easy metric. Messages on Slack is an easy metric. How often you move your mouse is an easy, and terrifying, metric. But just because you have easy numbers to pull on your team doesn't mean they are the right metrics to be pulling.

Leadership is really about people, not the metrics. How you solicit and give feedback is important. How you support and grow individual employees is important. Inspiring your team and being transparent is important. If you monitor your team endlessly, and your team knows that you’re outsourcing the process of harvesting that data with artificial intelligence, it creates distance between you and them.

And that ultimately works against you in the long run. People don’t like leaders who seem far from them and far from…reality.

Become fluent in artificial intelligence, or risk getting lost in translation

There’s some interesting data from Deloitte on AI that came out in Spring 2024. Organizations reporting “very high” Generative AI expertise expect to change their talent strategies even faster, with 32 percent already making changes. According to their findings, a lot of companies are redesigning work processes and changing workflows to integrate AI at different points.

You’re probably already experiencing this with Google, Microsoft and others integrating artificial intelligence into their core products like email and chats.

Another big focus is going to be on AI fluency. Deloitte found that 47 percent of respondents are dedicating time towards it. The leadership who gets educated on AI early, and keeps training consistently as it develops, will be the best equipped to shepherd their teams going forward. It's inevitable that career paths and job descriptions are going to evolve. It's up to you to stay current.

You NEED to know what the technology is, how it's being used, and how it's helping those you're serving. Be it clients, customers, the public–whomever. Saying you just typed some words into a text box and out came some more words… is not a good answer. Or a good look for you. You sound like you're treating it like magic, when it's actually just code.

Turn your conversations and meetings into a database

Middle managers spend a lot of time, arguably too much time, sending progress reports up the chain to the C-Suite and marching orders down to the individual contributors at the bottom. And there's a fair amount of investigating to find out where things really stand; time can be lost meeting with multiple people to get all the correct and current information. This is a time slog.

Meanwhile, there are dozens of AI tools now that just take notes. Notes from meetings. Notes from calls. They take the transcript and pare it down to the key takeaways, action items, and attendance – a full brief for your records.

So, instead of asking someone to take notes during a meeting or having all your notes in the chat only to evaporate once the Zoom call ends, you have a searchable document that you can reference, build on, and keep track of. New hires can use the database to catch up, and senior leaders can get a quick read of the progress and where everything stands.

Use AI/Chat bots to offload small, clerical questions

Here's a situation: You run a small team and maybe you have a few new hires. You're going to get a bunch of clerical questions from them over their first 90 days. That's normal. That's how it's supposed to be. Onboarding takes time. "Who's the point person for this? What's so-and-so's email from HR? What's the policy for remote days at the company?"

Here's where artificial intelligence can be really useful. Depending on the sort of chat platform you use – Slack, Teams, whatever – you could make a simple chat bot into which you upload a full archive of the company's policies, your own team norms, and clerical details – everything new hires will probably ask you about. So, when those quick questions, those quick stop-and-chats happen, the chat bot can take care of them. A rough sketch of what that might look like under the hood follows.
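This is purely illustrative (a real deployment would use your platform's bot framework, and likely an LLM with document retrieval), but a minimal keyword-matching version in Python might look like this:

```python
# Tiny FAQ bot sketch: match a new hire's question against uploaded policy snippets.
POLICY_DOCS = {
    "remote days": "Team members may work remotely up to 3 days per week.",
    "hr contact": "For HR questions, email the HR alias in the company directory.",
    "expenses": "Submit expenses through the finance portal within 30 days.",
}

def answer(question: str) -> str:
    """Return the snippet whose topic words overlap most with the question."""
    words = set(question.lower().replace("?", "").split())
    best = max(POLICY_DOCS, key=lambda topic: len(words & set(topic.split())))
    if not words & set(best.split()):
        return "I'm not sure - that one is worth asking your manager."
    return POLICY_DOCS[best]

print(answer("What's the policy for remote days at the company?"))
# -> "Team members may work remotely up to 3 days per week."
```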

This shouldn’t subtract your time with your new hires. This just subtracts the lower stakes conversations. Now, you have more time for the high-level conversations with them. More coaching. More mentorship. More progression towards team goals. It might sound simple but…that’s because it is.

Use AI as an audience for decisions before taking them public

Being in a leadership role requires making decisions decisively. You include feedback and perspectives from your team as much as possible. Do the research. Talk to people. But then comes the actual decision making. And that is often just you, alone, with your thoughts.

Instead of making your pros and cons list, one practical thing to try is inputting proposed decisions or actions into an AI tool and then asking for all the counterpoints and possible outcomes.

You could even scale this out to your whole team. Ideally, teams should be leveraging task-focused conflict in team discussions to spark new and better ideas. But conflict can be tricky. So, what if AI is always the devil’s advocate? As your team is generating or discussing ideas, you can be feeding those ideas into an AI tool and asking it for counterpoints or how competitors might respond.
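One lightweight way to wire this up is sketched below, using the OpenAI Python client as one example provider. The model choice and prompt wording are illustrative; any chat-completion API would work the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def devils_advocate(proposal: str) -> str:
    """Ask the model for the strongest counterpoints to a proposed decision."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Play devil's advocate. List the strongest counterpoints, "
                        "risks, and likely competitor responses to the proposal."},
            {"role": "user", "content": proposal},
        ],
    )
    return response.choices[0].message.content

print(devils_advocate("We plan to move all tier-1 customer support to an AI chatbot."))
```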

Don’t let it make the decision for you but do let it help guide you to possible solutions.

Get the legal clearance before going too deep

One last disclaimer: check with your human resources or your senior leadership, your information technology (IT) people—or honestly, all of them—to know the boundaries you can work within when using AI tools.

Many of the tools out there are free and still in beta mode or come with a small fee. And most of the larger AI companies are taking whatever data you input and using it to better refine their product. Your company may have rules on the books about data privacy. Certainly, if you work in legal, healthcare, or government services, you’re dealing with sensitive data that may be protected.

Get clear answers before using any AI tools. Until someone above you with authority gives you the OK, you should probably just play with the tools on your own time with your own personal projects.

Conclusion

Artificial intelligence is just getting started in the workplace. And it's all playing out in real time. If you're a manager starting to get your hands dirty with these new tools, acknowledge to your team that this is all a work in progress and that the norms around AI are likely to evolve. Be sure to keep the playing field level with your team. Practice that transparency, onboard everyone to the tools you're using (and that they can use), and see where this takes you. Remember, AI, at its best, is here to enhance our human capabilities, not replace them.

AI will never take the place of a great boss…. but it might be better than being managed by a bad one.

Image credit: David Burkus

Originally published at https://davidburkus.com on September 9, 2024.


How AI Changes Your Customers

The Marketing Guide for Humanity's Next Chapter

Exclusive Interview with Mark Schaefer

The rise of artificial intelligence isn't just an upgrade to our technology; it's a fundamental shift in what it means to be human and what it takes to lead a successful business. We've entered a new epoch defined by "synthetic humanity," a term coined by Mark Schaefer to describe AI interactions that are indistinguishable from real human connection. This blurring of lines creates an enormous opportunity, which Mark Schaefer refers to as a "seam" — a moment of disruption wide open for innovators. But as algorithms become more skilled at simulating empathy and insight, what must leaders do to maintain authenticity and relevancy? In this exclusive conversation, Mark Schaefer breaks down why synthetic humanity is the most crucial concept for leaders to grasp today, how to use AI as a partner rather than a replacement, and the vital role of human creativity in a world of supercharged innovation.

The Internet, Smartphones, Social Media, and Now AI, Have All Shifted Customer Expectations

Mark Schaefer is a globally-acclaimed author, keynote speaker, and marketing consultant. He is a faculty member of Rutgers University and one of the top business bloggers and podcasters in the world. How AI Changes Your Customers: The Marketing Guide to Humanity’s Next Chapter is his twelfth book, exploring what companies should consider when it comes to artificial intelligence (AI) and their customers.

Below is the text of my interview with Mark and a preview of the kinds of insights you’ll find in How AI Changes Your Customers presented in a Q&A format:

1. I came across the term ‘synthetic humanity’ fairly early on in the book. Why is this concept so important, and what are the most important aspects for leaders to consider?

“Synthetic humanity” is my term for describing the emerging wave of AI interactions that appear, sound, and even feel human — yet are not human at all. This is not science fiction. Already, chatbots can hold natural conversations, generate art, or simulate empathy in ways that blur the line between authentic and artificial.

For leaders, this matters because customers don’t care whether an experience is powered by code or carbon; they care about how it feels. If synthetic humanity can deliver faster, easier, and more personalized service, people will embrace it. The more machines convincingly mimic us, the more vital it becomes to emphasize distinctly human qualities like compassion, vulnerability, creativity, and trust.

Leaders must navigate two urgent questions: Where do we lean into automation for efficiency? And where do we intentionally preserve human touch for meaning? Synthetic humanity can scale interactions, but it cannot scale authenticity. The most successful brands will be those that strike this balance — leveraging AI’s strengths while showcasing the irreplaceable heartbeat of humanity.

2. We discuss disruption quite a bit here on this blog. Can you share a bit more with our innovators about ‘seams’ and the opportunities they create with AI or otherwise?

Throughout history, disruptions to the status quo, such as pandemics, wars, or economic recessions, can either sink a business or elevate it to new heights. Every disruption creates a seam — a moment where the fabric of culture, business, or belief rips just wide enough for an innovator to crawl through and create something new.

We might be living in the ultimate seam.

Google CEO Sundar Pichai calls AI the most significant innovation in human history — more important than fire, medicine, or the internet. The power of AI seems absolute and threatening. For many, it's terrifying.

Through my new book, I’m trying to get people to view disruption through a different lens: not fear, but immense possibility.

3. Given that AI has access to all of our accumulated wisdom, does it actually create unique insights and ideas, or will innovation always be left to the humans?

AI is extraordinary at remixing existing content. It can scan millions of data points, connect patterns we might miss, and surface possibilities at lightning speed. That feels like insight, and sometimes it is. However, there is a crucial distinction: AI doesn’t truly care. It lacks context, longing, and lived experience.

Innovation often begins with a problem that aches to be solved or a vision that comes from deep within human culture. AI can suggest ten thousand options, but only a person can say, “This one matters because it touches our values, our customers, our future.”

So the real power is in the partnership. AI accelerates discovery, clears away routine work, and even provokes us with new connections. Humans bring the spark of meaning, the intuition, and the courage to act on something that has never been tried before. Innovation is not being replaced. It is being supercharged. In my earlier book “Audacious: How Humans Win in an AI Marketing World,” I note that the bots are here, but we still own crazy!

This is a time for humans to transcend “competent.” Bots can be competent and ignorable.

4. Do you have any tips for us mere mortals on how to productively use AI without developing creative and intellectual atrophy?

Yes, and it starts with how you frame the role of AI in your life. If you treat it as a replacement, you risk letting your creative muscles go slack. If you treat it as a partner, you can actually get stronger.

Here are a few practical approaches. First, use AI to stretch your perspective, not to finish your work for you. Ask it to give you ten angles on a problem, then choose one and make it your own. Second, set boundaries. Write your first draft by hand or sketch ideas before you ever touch a prompt. Let AI react to your thinking, not define it. Third, use the tool to challenge yourself. Feed it your work and ask, “What am I missing? Where are my blind spots?”

Most importantly, keep doing hard things. Struggle is where growth happens. AI can smooth the path, but sometimes you need the climb. Treat the technology as a coach, not a crutch, and you will come out sharper, faster, and even more creative on the other side.

5. I’ve heard a little bit about AI literacy. What are some of the critical aspects that we should all be aware of or try to learn more about?

There are a few critical aspects everyone should know. First, bias. AI models are trained on human data, which means they inherit our blind spots and prejudices. If you don't recognize this, you may mistake bias for truth. Second, limits. AI is confident even when it is wrong. Knowing how to fact-check and verify is essential. Third, prompting. The quality of your input shapes the quality of the output, so learning how to ask better questions is a new core skill.

Finally, ethics. Just because AI can do something does not mean it should. We all need to be asking: How does this affect privacy, autonomy, and trust?

AI literacy isn’t about becoming a coder. It is about being a thoughtful user, a skeptic when needed, and a leader who understands both the promise and the peril of these tools.

6. What do companies and sole proprietors worried about falling below the fold of the new AI-powered search results need to change online to stay relevant and successful?

I have many practical ideas about this in the book. In short, the old game of chasing clicks and keywords is fading. AI-powered search doesn’t just list links, it delivers answers. That means the winners will be those whose content and presence are woven deeply enough into the digital fabric that the algorithms can’t ignore them.

This requires a shift in focus. Instead of creating content that only ranks, create content that is referenced, cited, and trusted across the web. Build authority by being the source others turn to. Make your ideas so distinct and valuable that they become part of the training data itself. We are entering a golden age for PR!

It also means doubling down on brand signals that AI can’t manufacture. Human stories, original research, strong communities, and unique perspectives will travel farther than generic blog posts. And remember, AI models reward freshness and relevance, so showing up consistently matters.

The book also covers what I call "overrides." If you create a meaningful, loyal relationship with customers and word-of-mouth recommendations, that will override the AI recommendations. We consider AI recommendations. We ACT on human recommendations.

7. ‘Weaponizing kindness’ was a terrifying headline I stumbled across in your book. What do organizations need to consider when using AI to interact with customers and what traps are out in front of them?

That phrase is unsettling for a reason. AI can mimic empathy so well that it risks crossing into manipulation. Imagine a chatbot that remembers your child’s name, mirrors your mood, or expresses concern in just the right tone. Done responsibly, that feels like service. Done carelessly, it feels like exploitation.

Organizations need to recognize that kindness delivered at scale is powerful, but if it is hollow or purely transactional, customers will sense it. The first trap is confusing simulation with sincerity. Just because an AI can sound caring does not mean it actually cares. The second trap is overreach. Using personal data to create hyper-tailored interactions can quickly slip from helpful to creepy.

The safeguard is transparency and choice. Be clear about when a customer is interacting with AI. Use technology to enhance human care, not replace it. Always provide people with a way to connect with a real person.

Kindness is a sacred trust in business. Weaponize it, and you erode the very loyalty and love you are trying to build. Use it authentically, and you create relationships no machine can ever replicate.

8. What changing customer expectations (thanks to AI) might companies easily overlook and pay a heavy price for?

One of the biggest shifts is speed. Customers already expect instant answers, but AI raises the bar even higher. If your competitor offers a seamless, AI-powered interaction that solves a problem in seconds, your slower, clunkier process will feel intolerable.

Another overlooked expectation is personalization. People are starting to experience products, services, and recommendations that feel almost eerily tailored to them. That sets a new standard. Companies still delivering one-size-fits-all communication will look outdated. Don’t confuse “personalization” with “personal.”

Perhaps the most subtle change is trust. As customers realize machines can fake warmth and empathy, they will value genuine human touch even more. If every interaction feels synthetic, you risk losing trust, especially if you’re not transparent about it.

The price of ignoring these shifts is steep: irrelevance. Customers rarely complain about unmet expectations anymore; they simply leave. The opportunity is to stay alert, listen closely, and respond quickly as AI reshapes what “good enough” looks like. The companies that thrive will be those that not only keep pace with AI, but also double down on the irreplaceable humanity customers still crave.

9. What unintended consequences of AI do you think companies might face and may not be preparing for? (overcoming AI slander and falsehoods might be one – agree or disagree? Others?)

I agree. In fact, I predict in the book that we cannot foresee AI’s biggest impact yet, as it will likely be an unintended consequence of the technology’s use in an unexpected way.

Where could that occur? Maybe reputational risk at scale. AI systems will generate falsehoods with the same confidence they generate facts, and those errors can stick. A single hallucination about your company, repeated enough times, becomes “truth” in the digital bloodstream. Most companies are not prepared for the speed and reach of misinformation of this kind.

Another consequence is customer dependency. If people hand over more of their decisions to AI, they may lose patience for complexity or nuance in your offerings. That can push companies toward oversimplification, even when a richer human experience would build deeper loyalty.

There is also the cultural risk. Employees might over-rely on AI, quietly eroding skills, judgment, and creativity. A workforce that outsources too much thinking can become brittle in ways that only show up during a crisis.

The real challenge is that these consequences don’t announce themselves. They creep in. Which means leaders must actively audit how AI is being used, question where it might distort reality or weaken capability, and set up safeguards now. The companies that prepare will navigate disruption. The ones that ignore it will be blindsided.

10. Can companies make TOO MUCH use of AI? If so, what would the impacts look like?

Yes, and we will start seeing this more often. It is a pattern that has repeated through history — over-indexing on tech and then bringing the people back in!

When companies lean too heavily on AI, they risk draining the very humanity that makes them memorable. On the surface, it might seem like efficiency: faster service, lower costs, and greater scale. But underneath, the impacts can be corrosive. You might be messing with your brand!

First, customers may feel manipulated or devalued if a machine drives every interaction. Even perfect personalization can feel hollow if it lacks genuine care. Second, trust erodes when people sense that a brand hides behind automation rather than showing up with real human accountability. Third, within the company, over-reliance on AI can weaken employee judgment and creativity, resulting in a workforce that follows prompts rather than breaking new ground.

The real danger is commoditization. If every company automates everything, then no company stands out. The winners will be those who know when to say, “This moment deserves a person.” AI should be an amplifier, not a replacement. Too much of it and you don’t just lose connection, you lose your soul.

Conclusion

Thank you for the great conversation, Mark!

I hope everyone has enjoyed this peek into the mind of the man behind the inspiring new title How AI Changes Your Customers: The Marketing Guide to Humanity’s Next Chapter!

Image credits: BusinessesGrow.com (Mark W Schaefer)

Content Authenticity Statement: If it wasn’t clear above, the short section in italics was written by Google’s Gemini with edits from Braden Kelley, and the rest of this article is from the minds of Mark Schaefer and Braden Kelley.


You Need to Know What Your Customers Think of AI

GUEST POST from Shep Hyken

Ten years ago, only the most technologically advanced companies used AI — although it barely resembled what companies use today when communicating with customers — and it was very, very expensive. But not anymore. Today, any company can implement an AI strategy using ChatGPT-type technologies, often creating experiences that give customers what they want. But not always, which is why the information below is important.

The 2025 Findings

My annual customer service and customer experience (CX) research study surveys more than 1,000 U.S. consumers weighted to the population’s demographics of age, gender, ethnicity and geography. It included an entire group of questions focused on how customers react to and accept (or don’t accept) AI options to ask questions, resolve problems and communicate with a company or brand. Consider the following findings:

  • AI Success: Half of U.S. customers (50%) said they have successfully resolved a customer service issue using AI or ChatGPT-type technologies without needing human assistance. In 2024, only three out of 10 customers (32%) did so. That’s great news, but it’s important to point out that age makes a difference. Six out of 10 Gen-Z customers (61%) successfully used AI support versus just 32% of Boomers.
  • AI Is Far From Perfect: Half of U.S. customers (51%) said they received incorrect information from an AI self-service bot. Even with incredible improvement in AI’s capabilities, it still serves up wrong information. That destroys trust, not only in the company but also in the technology as a whole. A few bad answers and customers will be reluctant, at least in the near term, to choose self-service over the traditional mode of communication, the phone.
  • Still, Customers Believe: Four out of 10 customers (42%) believe AI and ChatGPT can handle complex customer service inquiries as effectively as humans. Even with the mistakes, customers believe AI solutions work. However, 86% of customers think companies using AI should always provide an option to speak or text with a real person.
  • The Phone Still Rules: It's still too early to throw away phone support. My prediction is that it will be years, if ever, before human-to-human interactions completely disappear, which was proven when we asked, "When you have a problem or issue with a company, which solution do you prefer to use: phone or digital self-service?" The answer is that 68% of customers will still choose the phone over digital self-service. That number is highly influenced by the 82% of Baby Boomers who choose to call a company over any other type of digital support.
  • The Future Looks Strong For AI Customer Support: Six out of 10 customers (63%) expect AI-fueled technologies to become the primary mode of customer support. We asked the same question in 2021, and only 21% of customers felt this way.

The Strategy Behind Using AI For CX

  • Age Matters: As you can see from some of the above findings, there is a big generational gap between younger and older customers. Gen-Z customers are more comfortable, have had more success, and want more digital/AI interactions compared to older customers. Know your customer demographics and provide the appropriate support and communication options based on their age. Recognize you may need to provide different support options if your customer base is “everyone.”
  • Trust Is a Factor: Seven out of 10 customers (70%) have concerns about privacy and security when interacting with AI. Once again, age makes a difference. Trust and confidence with AI consistently decrease with age.

The Future of AI

As AI continues to evolve, especially in the customer service and experience world, companies and brands must find a balance between technology and the human touch. While customers are becoming more comfortable and finding success with AI, we can’t become so enamored with it that we abandon what many of our customers expect. The future of AI isn’t a choice between technology and humans. It’s about creating a blended experience that plays to the technology’s strengths and still gives customers the choice.

Furthermore, if every business had a 100% digital experience, what would be a competitive differentiator? Unless you are the only company that sells a specific product, everything becomes a commodity. Again, I emphasize that there must be a balance. I'll close with something I've written before, but it bears repeating:

The greatest technology in the world can’t replace the ultimate relationship-building tool between a customer and a business: the human touch.

This article was originally published on Forbes.com.

Image Credits: Google Gemini


The Great American Contraction

Population, Scarcity, and the New Era of Human Value


GUEST POST from Art Inteligencia

We stand at a unique crossroads in human history. For centuries, the American story has been a tale of growth and expansion. We built an empire on a relentless increase in population and labor, a constant flow of people and ideas fueling ever-greater economic output. But what happens when that foundational assumption is not just inverted, but rendered obsolete? What happens when a country built on the idea of more hands and more minds needing more work suddenly finds itself with a shrinking demand for both, thanks to the exponential rise of artificial intelligence and robotics?

The Old Equation: A Sinking Ship

The traditional narrative of immigration as an economic engine is now a relic of a bygone era. For decades, we debated whether immigrants filled low-skilled labor gaps or competed for high-skilled jobs. That entire argument is now moot. Robotics and autonomous systems are already replacing a vast swath of low-skilled labor, from agriculture to logistics, with greater speed and efficiency than any human ever could. This is not a future possibility; it’s a current reality accelerating at an exponential pace. The need for a large population to perform physical tasks is over.

But the disruption is far more profound. While we were arguing about factory floors and farm fields, Artificial Intelligence (AI) has quietly become a peer-level, and in many cases, superior, knowledge worker. AI can now draft legal briefs, write code, analyze complex data sets, and even generate creative content with a level of precision and speed no human can match. The very “high-skilled” jobs we once championed as the future — the jobs we sought to fill with the world’s brightest minds — are now on the chopping block. The traditional value chain of human labor, from manual to cognitive, is being dismantled from both ends simultaneously.

“The question is no longer ‘What can humans do?’ but ‘What can only a human do?’”

The New Paradigm: Radical Scarcity

This creates a terrifying and necessary paradox. The scarcity we must now manage is not one of labor or even of minds, but of human relevance. The old model of a growing population fueling a growing economy is not just inefficient; it is a direct path to social and economic collapse. A population designed for a labor-based economy is fundamentally misaligned with a future where labor is a non-human commodity. The only logical conclusion is a Great Contraction — a deliberate and necessary reduction of our population to a size that can be sustained by a radically transformed economy.

This reality demands a ruthless re-evaluation of our immigration policy. We can no longer afford to see immigrants as a source of labor, knowledge, or even general innovation. The only value that matters now is singular, irreplaceable talent. We must shift our focus from mass immigration to an ultra-selective, curated approach. The goal is no longer to bring in more people, but to attract and retain the handful of individuals whose unique genius and creativity are so rare that AI can’t replicate them. These are the truly exceptional minds who will pioneer new frontiers, not just execute existing tasks.

The future of innovation lies not in the crowd, but in the individual who can forge a new path where none existed before. We must build a system that only allows for the kind of talent that is a true outlier — the Einstein, the Tesla, the Brin, but with the understanding that even a hundred of them will not be enough to employ millions. We are not looking for a workforce; we are looking for a new type of human capital that can justify its existence in a world of automated plenty. This is a cold and pragmatic reality, but it is the only path forward.

Human-Centered Value in a Post-Labor World

My core philosophy has always been about human-centered innovation. In this new world, that means understanding that the purpose of innovation is not just about efficiency or profit. It’s about preserving and cultivating the rare human qualities that still hold value. The purpose of immigration, therefore, must shift. It is not about filling jobs, but about adding the spark of genius that can redefine what is possible for a smaller, more focused society. We must recognize that the most valuable immigrants are not those who can fill our knowledge economy, but those who can help us build a new economy based on a new, more profound understanding of what it means to be human.

The political and social challenges of this transition are immense. But the choice is clear. We can either cling to a growth-based model and face the inevitable social and economic fallout, or we can embrace this new reality. We can choose to see this moment not as a failure, but as an opportunity to become a smaller, more resilient, and more truly innovative nation. The future isn’t about fewer robots and more people. It’s about robots designing, building and repairing other robots. And, it’s about fewer people, but with more brilliant, diverse, and human ideas.

This may sound like a dystopia to some people, but to others it will sound like the future is finally arriving. If you’re still not quite sure what this future might look like and why fewer humans will be needed in America, here are a couple of videos from the present that will give you a glimpse of why this may be the future of America:

Image credit: Google Gemini


Customer Experience is Changing

If You Don’t Like Change, You’re Going to Hate Extinction


GUEST POST from Shep Hyken

Depending on which studies and articles you read, customer service and customer experience (CX) are getting better … or they’re getting worse. Our customer service and CX research found that 60% of consumers had better customer service experiences than last year, and in general, 82% are happy with the customer service they receive from the companies and brands with which they do business.

Yet, some studies claim customer service is worse than ever. Regardless, more companies than ever are investing in improving CX. Some nail it, but even with an investment, some still struggle. Another telling stat is the growing number of companies attending CX conferences.

Last month, more than 5,000 people representing 1,382 companies attended and participated in Contact Center Week (CCW), the world’s largest conference dedicated to customer service and customer experience. This was the largest attendance to date, representing a 25% growth over last year.

Many recognized brands and CX leaders attended and shared their wisdom from the main stage and breakout rooms. The expo hall featured demonstrations of the latest and greatest solutions to create more effective customer support experiences.

The primary reason I attend conferences like CCW is to stay current with the latest advancements and solutions in CX and to gain insight into how industry leaders think. AI took center stage for most of the presentations. No doubt, it continues to improve and gain acceptance. With that in mind, here are some of my favorite takeaways with my commentary from the sessions I attended:

AI for Training

Becky Ploeger, global head of reservations and customer care at Hilton, uses AI to create micro-lessons for employee training. Hilton is using Centrical’s platform to take various topics and turn them into coaching modules. Employees participate in simulations that replicate customer issues.

Can We Trust AI?

As excited as Ploeger is about AI (and agentic AI), there is still trepidation. CX leaders must recognize that AI is not yet perfect and will occasionally provide inaccurate information. Ploeger said, “We have years and years of experience with agents. We only have six months of experience with agentic AI.”

Wrong Information from AI Costs a Company Money—or Does it?

Gadi Shamia, CEO of Replicant, an AI voice technology company, commented on the mistakes AI makes. In general, CX leaders complain that going digital is costing their companies money because of the bad information customers receive. Shamia asks, “How much are you losing?” While bad information can cause a customer to defect to a competitor, so can a bad experience with a live customer service rep. So, how often does AI provide incorrect information? How many of those customers leave versus trying to connect with an agent? The metrics you choose to define success for a digital self-service experience need to include more than measurements of bad experiences. Mark Killick, SVP of experiential operations at Shipt, weighed in on this topic, saying, “If we don’t fix the problems of providing bad information, we’ll just deliver bad information faster.”

Making the Case to Invest in AI

Mariano Tan, president and CEO of Prosodica, says, “Nothing gets funded without a clear business case.” The person in charge of the budget for customer service and CX initiatives (typically the CFO in larger companies) won’t “open the wallet” without proof that the expenditure will yield a return on investment (ROI). People in charge of budgets like numbers, so when you create your “clear business case,” be sure to include the numbers that make a compelling case to invest in CX. Simply saying, “We’ll reduce churn,” isn’t enough. How much churn—that’s a number. How much does it mean to the bottom line—another number. Numbers sell!
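
To make that concrete, here is a hypothetical back-of-the-envelope business case of the kind Tan describes. Every figure below is an illustrative assumption, not data from Prosodica or the conference:

```python
# Hypothetical numbers for a churn-reduction business case (illustrative only).
customers = 100_000
annual_revenue_per_customer = 600   # dollars
current_churn = 0.20                # 20% of customers leave each year
projected_churn = 0.18              # assume AI-assisted support cuts churn by 2 points
ai_investment = 750_000             # assumed first-year cost of the initiative

customers_saved = customers * (current_churn - projected_churn)
revenue_retained = customers_saved * annual_revenue_per_customer
roi = (revenue_retained - ai_investment) / ai_investment

print(f"{customers_saved:,.0f} customers retained")   # 2,000 customers retained
print(f"${revenue_retained:,.0f} revenue retained")   # $1,200,000 revenue retained
print(f"ROI: {roi:.0%}")                              # ROI: 60%
```

Swap in your own numbers; the point is that “we’ll retain 2,000 customers worth $1.2 million against a $750,000 investment” is a sentence a CFO can act on.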

Final Words: Love Change, or Else

Neil Gibson, SVP of CX at FedEx, was part of a panel and shared a quote that is the perfect way to end the article. AI is rapidly changing the way we do business. We must keep up, or else. Gibson quoted Fred Smith, the first CEO and founder of FedEx, who said, “If you don’t like change, you’re going to hate extinction.” In other words, keep up or watch your competition blow past you.

This article was originally published on Forbes.com.

Image Credits: Pixabay


Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?


GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course a part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also of course a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us, me included, who have interacted with AI at any length, have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: And this humanizing of the user interface brings multiple benefits. It is of course a part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of the output. Furthermore, by creating an illusion of human-like intelligence, it implicitly suggests emotions, compassion and critical thinking behind the output, even if they are not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society that we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few of the ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and it comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it resembles, in some uncomfortable ways, a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, and with the empathy, emotional intelligence and compassion that come with that.

That may be an extreme example. But the illusion of humanity, combined with the ability to access unfiltered information, can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or to add context in the way a human can.

The Uncanny Valley: That brings me to the Uncanny Valley, the unsettling effect that arises when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle, implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not have been obviously wrong, but often still ‘felt’ wrong. Early drum machines were so perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and had to be modified to include randomness that was below the threshold of conscious awareness but made them ‘feel’ real.

This difference between conscious and unconscious evaluation creates cognitive dissonance that can make content feel odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it has generally been something we needed to ‘fix’: over-photoshopping in a print ad, for example, or poor CGI. But be careful what you wish for. AI appears to have marched through the uncanny valley to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This raises a question: do we need some implicit as well as explicit cues to remind us that we are not dealing with a real human? Could a slight feeling of ‘creepiness’ help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that once played a key role in translating search results into usable output can now be skipped. Synthesizing and processing the output of an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we don’t really understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough even to realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose that can be cut and pasted straight into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that also brings up another side effect of low engagement with output: confirmation bias. If the output is already in usable form, doesn’t require synthesis or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and is still capable of making really stupid mistakes. Okay, so are humans, but AI is still far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and the AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, our own ideas.

Cat Videos and AI Action Figures: Another related risk is that we let AI become an oracle, treating its output not only as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparently human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger: instead of AI conquering humanity, we may simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has reduced our basic literary capability, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that the internet gives us access to all human knowledge, but that all too often we use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water in data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash


The Most Challenging Obstacles to Achieving Artificial General Intelligence

The Unclimbed Peaks


GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  • 1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  • 2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  • 3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
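
As a toy illustration of the idea (and emphatically not Causaly’s actual platform), the sketch below shows how trigger phrases can separate explicit cause-and-effect assertions from mere co-occurrence. Production systems use trained language models over biomedical text; a handful of regular expressions is only a stand-in for the concept:

```python
import re

# Trigger phrases of the kind the article mentions as denoting cause and effect.
TRIGGERS = r"(?:is associated with|induces|results in)"
PATTERN = re.compile(rf"^(.+?)\s{TRIGGERS}\s(.+?)\.?$")

sentences = [
    "Drug X induces apoptosis in tumor cells.",
    "Chronic inflammation results in tissue damage.",
    "Drug X is often mentioned alongside Disease Y.",  # co-occurrence only: no causal trigger
]

for s in sentences:
    m = PATTERN.match(s)
    if m:
        print(f"RELATION: {m.group(1)} -> {m.group(2)}")
    else:
        print(f"NO TRIGGER: {s}")
```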

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
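
For readers who want a feel for the mechanics, here is a minimal sketch of the preference-learning step at the heart of RLHF: a toy linear reward model trained with the standard pairwise (Bradley-Terry) loss. The data, features and hyperparameters are illustrative assumptions, not OpenAI’s implementation:

```python
import numpy as np

DIM = 64  # toy feature dimension

def features(text: str) -> np.ndarray:
    """Hash words into a fixed-size bag-of-words vector (a stand-in for a real encoder)."""
    v = np.zeros(DIM)
    for word in text.lower().split():
        v[hash(word) % DIM] += 1.0
    return v

# Response pairs ranked by human trainers: (preferred, rejected).
preference_pairs = [
    ("please talk to a doctor about persistent chest pain",
     "chest pain is nothing, just ignore it"),
    ("sorry, I can't help with that request",
     "sure, here is how to do something dangerous"),
]

w = np.zeros(DIM)  # parameters of a linear reward model r(x) = w . features(x)
lr = 0.1

for _ in range(200):
    for chosen, rejected in preference_pairs:
        xc, xr = features(chosen), features(rejected)
        margin = w @ xc - w @ xr
        # Bradley-Terry loss: -log(sigmoid(margin)); its gradient w.r.t. w is
        # -(1 - sigmoid(margin)) * (xc - xr)
        sig = 1.0 / (1.0 + np.exp(-margin))
        w -= lr * (-(1.0 - sig) * (xc - xr))

# After training, the reward model scores each human-preferred answer higher.
for chosen, rejected in preference_pairs:
    print(w @ features(chosen) > w @ features(rejected))  # True, True
```

In full RLHF, the score from a reward model like this becomes the reward signal for fine-tuning the language model itself, typically with a policy-gradient method such as PPO, so the model learns to produce the kinds of responses humans preferred.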

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E

McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI


GUEST POST from Robyn Bolton

Sometimes, you see a headline and just have to shake your head.  Sometimes, you see a bunch of headlines and need to scream into a pillow.  This week’s headlines on AI ROI were the latter:

  • Companies are Pouring Billions Into A.I. It Has Yet to Pay Off – NYT
  • MIT report: 95% of generative AI pilots at companies are failing – Forbes
  • Nearly 8 in 10 companies report using gen AI – yet just as many report no significant bottom-line impact – McKinsey

AI has slipped into what Gartner calls the Trough of Disillusionment. But for people working on pilots, it might as well be the Pit of Despair, because executives are beginning to declare AI a fad and deny ever having fallen victim to its siren song.

Because they’re listening to the NYT, Forbes, and McKinsey.

And they’re wrong.

ROI Reality Check

In 2025, private investment in generative AI is expected to increase 94% to an estimated $62 billion.  When you’re throwing that kind of money around, it’s natural to expect ROI ASAP.

But is it realistic?

Let’s assume Gen AI “started” (became sufficiently available to set buyer expectations and warrant allocating resources to) in late 2022/early 2023.  That means that we’re expecting ROI within 2 years.

That’s not realistic.  It’s delusional. 

ERP systems “started” in the early 1990s, yet providers like SAP still recommend five-year ROI timeframes.  Cloud computing “started” in the early 2000s, and yet, in 2025, “48% of CEOs lack confidence in their ability to measure cloud ROI.” CRM systems’ claims of 1-3 years to ROI must be considered in the context of their 50-70% implementation failure rate.

That’s not to say we shouldn’t expect rapid results.  We just need to set realistic expectations around results and timing.

Measure ROI by Speed and Magnitude of Learning

In the early days of any new technology or initiative, we don’t know what we don’t know.  It takes time to experiment and learn our way to meaningful and sustainable financial ROI. And the learnings are coming fast and furious:

Trust, not tech, is your biggest challenge: MIT research across 9,000+ workers shows automation success depends more on whether your team feels valued and believes you’re invested in their growth than which AI platform you choose.

Workers who experience AI’s benefits first-hand are more likely to champion automation than those told, “trust us, you’ll love it.” Job satisfaction emerged as the second strongest indicator of technology acceptance, followed by feeling valued.  If you don’t invest in earning your people’s trust, don’t invest in shiny new tech.

More users don’t lead to more impact: Companies assume that making AI available to everyone guarantees ROI.  Yet of the 70% of Fortune 500 companies deploying Microsoft 365 Copilot and similar “horizontal” tools (enterprise-wide copilots and chatbots), none have seen any financial impact.

The opposite approach of deploying “vertical” function-specific tools doesn’t fare much better.  In fact, less than 10% make it past the pilot stage, despite having higher potential for economic impact.

Better results require reinvention, not optimization:  McKinsey found that giving call center agents access to passive AI tools for finding articles, summarizing tickets, and drafting emails produced only a 5-10% reduction in call time.  Centers using AI tools to automate tasks without agent initiation reduced call time by 20-40%.

Centers reinventing processes around AI agents? 60-90% reduction in call time, with 80% automatically resolved.

How to Climb Out of the Pit

Make no mistake, despite these learnings, we are in the pit of AI despair.  42% of companies are abandoning their AI initiatives.  That’s up from 17% just a year ago.

But we can escape if we set the right expectations and measure ROI on learning speed and quality.

Because the real concern isn’t AI’s lack of ROI today.  It’s whether you’re willing to invest in the learning process long enough to be successful tomorrow.

Image credit: Microsoft CoPilot
