
The Future of Military Innovation is Analog, Digital, and Human-Centered

The Hybrid Advantage


GUEST POST from Art Inteligencia

In the high-stakes world of defense and security, the innovation conversation is often hijacked by the pursuit of the most complex, esoteric, and expensive technology — hypersonic weapons, next-generation stealth fighters, and pure AI command structures. But as a human-centered change and innovation thought leader, I argue that this obsession with technological complexity is a critical strategic mistake. The future of military innovation isn’t a matter of choosing between analog or digital; it’s about mastering Hybrid Resilience — the symbiotic deployment of low-cost, human-centric, and commercially available technologies that create disproportionate impact. The best solutions are often not the most advanced, but the ones that are simplest to deploy, easiest to maintain, and most effective at leveraging the human element at the edge of the conflict.

The true measure of innovation effectiveness is not its unit cost, but its cost-per-impact ratio. When simplicity meets massive scale, the result is a disruptive force that can overwhelm even the most sophisticated, closed-loop military industrial complexes. This shift is already defining modern conflict, forcing traditional defense giants to rethink how they invest and innovate.
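The cost-per-impact idea above can be made concrete with a toy calculation. The figures below are hypothetical assumptions chosen only to illustrate the asymmetry, not sourced data:

```python
# Illustrative cost-per-impact comparison of two notional systems.
# All unit costs and damage figures are hypothetical examples.

def cost_per_impact(unit_cost, units, damage_inflicted):
    """Total spend divided by the value of the damage it produces."""
    return (unit_cost * units) / damage_inflicted

# Notional low-cost FPV drone swarm: 100 drones at $500 each
# disabling assets worth $1,000,000.
swarm = cost_per_impact(unit_cost=500, units=100, damage_inflicted=1_000_000)

# Notional precision missile: one $2,000,000 round destroying
# a single $1,000,000 asset.
missile = cost_per_impact(unit_cost=2_000_000, units=1, damage_inflicted=1_000_000)

print(f"Swarm:   ${swarm:.2f} spent per $1 of impact")   # $0.05
print(f"Missile: ${missile:.2f} spent per $1 of impact") # $2.00
```

Under these assumptions the swarm delivers forty times more impact per dollar, which is the whole argument of this section in one number.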

The New Equation: Low-Cost Digital and The Power of Speed

The most devastating innovations often come with the smallest price tags, leveraging the widespread accessibility of digital tools and talent. The goal is to maximize chaos and damage while minimizing investment.

Operation Spiderweb: Asymmetric Genius Deep Behind Enemy Lines

The coordinated drone attacks known as “Operation Spiderweb” perfectly illustrate the principle of low-cost, high-impact hybrid warfare. This was not a cyberattack, but an ingenious physical and digital operation in which Ukraine’s Security Service (SBU) successfully smuggled over 100 small, commercially available FPV (First-Person View) drones into Russia, hidden inside wooden structures on trucks. The drones were then launched deep inside Russian territory, far beyond the reach of conventional long-range weapons, striking strategic bomber aircraft at five different airbases, including one in Eastern Siberia — a distance of over 4,000 km from Ukraine. With a relatively small financial investment in commercial drone technology and a logistics chain that leveraged analog disguise and stealth, Ukraine inflicted damage estimated in the billions of dollars on critical, irreplaceable Russian military assets. This was a triumph of human-centered strategic planning over centralized, predictable defense.

This principle of scale and rapid deployability is also seen in the physical domain. The threat posed by drone swarms that China can fit in a single shipping container is precisely that they are cheap, numerous, and rapidly deployable. This innovation isn’t about the individual drone’s complexity, but the simplicity of its collective deployment. The containerized system makes the deployment highly mobile and scalable, transforming a single cargo vessel or truck into an instant, overwhelming air force.


The Return of Analog: Simplicity for Survivability

While the digital world provides scale, the analog world provides resilience. True innovation anticipates technological failure, deliberately integrating low-tech, human-proof solutions for survivability.

Take, for example, fiber-optic-controlled drones, which trade a radio link for a physical cable. In an era of intense electronic warfare and GPS denial, a drone tethered by a fiber-optic cable is immune to jamming. The drone’s data link, command, and control remain secure, offering an unassailable connection in a highly contested electromagnetic environment. This is an elegant, human-centered solution that embraces an “old” technology (the cable) to solve a cutting-edge digital problem (signal jamming). Similarly, in drone defense, the most effective tool for neutralizing small, hostile drones is often not a multi-million-dollar missile system, but a net gun. Net guns are a low-tech, high-effectiveness solution that causes zero collateral damage, is easily trainable, and is vastly cheaper than the target itself. They are the ultimate embodiment of human ingenuity solving a technical problem with strategic simplicity.

The Chevy ISV: Commercial Off-the-Shelf Agility

The Chevy ISV (Infantry Squad Vehicle) is a prime example of human-centered innovation prioritizing Commercial Off-the-Shelf (COTS) solutions. Instead of spending decades and billions designing a bespoke vehicle, the U.S. military adapted a proven, commercially available chassis (the Chevy Colorado ZR2) to meet the requirements for rapid, light infantry mobility. This approach is superior because COTS is faster to acquire, cheaper to maintain (parts are globally accessible), and inherently easier for a soldier to operate and troubleshoot. The ISV prioritizes the soldier’s speed, autonomy, and operational simplicity over hyper-specialized military complexity. It’s innovation through rapid procurement and smart adaptation.


The Human-Augmented Future: Decentralized Command

The most cutting-edge military innovation is the marriage of AI and decentralized human judgment. The future warfighter isn’t a passive recipient of intelligence; they are an AI-augmented decision-maker. For instance, programs inspired by DARPA’s vision for adaptive, decentralized command structures use AI to process the vast amounts of sensor data (the digital part) but distribute the processed intelligence to small, autonomous human teams (the analog part) who make rapid, contextual decisions without needing approval from a centralized HQ. This human-in-the-loop architecture values the ethical judgment, local context, and adaptability that only a human can provide, allowing for innovation and mission execution at the tactical edge.
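The human-in-the-loop architecture described above can be sketched in a few lines: an automated stage scores and filters sensor contacts, and a human stage makes the final, authoritative call. Everything here, including the names, data shapes, and thresholds, is a hypothetical illustration of the pattern, not any real program or system:

```python
# Minimal human-in-the-loop triage sketch: the machine ranks, the human decides.
# All names, thresholds, and data below are hypothetical.

from dataclasses import dataclass

@dataclass
class Contact:
    contact_id: str
    threat_score: float  # 0.0-1.0, produced upstream by an AI model

def ai_triage(contacts, threshold=0.5):
    """Automated stage: filter and rank contacts worth human attention."""
    flagged = [c for c in contacts if c.threat_score >= threshold]
    return sorted(flagged, key=lambda c: c.threat_score, reverse=True)

def human_decide(contact, approve):
    """Human stage: the operator's judgment, not the model, is authoritative."""
    return {"contact": contact.contact_id, "engage": approve(contact)}

contacts = [Contact("alpha", 0.92), Contact("bravo", 0.31), Contact("charlie", 0.67)]
queue = ai_triage(contacts)  # the machine narrows three contacts to two
decisions = [human_decide(c, approve=lambda c: c.threat_score > 0.9) for c in queue]
print(decisions)
```

The design point is the split of responsibilities: the AI compresses the data volume, but every irreversible action passes through the `approve` callback, which stands in for a human operator's contextual judgment.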


The Innovation Ecosystem: Disruptors on the Front Line

The speed of defense innovation is now being set by agile, often venture-backed startups, not just traditional primes. Companies like Anduril are aggressively driving hardware/software integration and autonomous systems with a focus on COTS and rapid deployment. Palantir continues to innovate on the data side, making complex intelligence accessible and actionable for human commanders. In the specialized drone space, companies are constantly emerging with highly specialized, affordable solutions that utilize commercial components and open-source principles to achieve specialized military effects. These disruptors are forcing the entire defense industry to adopt a “fail-fast” mentality, shortening development cycles from decades to months by prioritizing iterative, human-centered feedback and scalable digital infrastructure.


Conclusion: The Strategy of Strategic Simplicity

The future of military innovation belongs to those who embrace strategic simplicity. It is an innovation landscape where a low-cost digital intrusion can be more damaging than a high-cost missile, where resilience is built with fiber-optic cable, and where the most effective vehicle is a clever adaptation of a commercial pickup truck. Leaders must shift their focus from what money can buy to what human ingenuity can create. By prioritizing Hybrid Resilience — the thoughtful integration of analog durability, digital scale, and, most importantly, human-centered design — we ensure that tomorrow’s forces are not only technologically advanced but also adaptable, sustainable, and capable of facing any challenge with ingenuity and strategic simplicity.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Marketing Guide to Humanity’s Next Chapter

How AI Changes Your Customers

Exclusive Interview with Mark Schaefer

Mark W Schaefer

LAST UPDATED: October 1, 2025 at 12:00PM

The rise of artificial intelligence isn’t just an upgrade to our technology; it’s a fundamental shift in what it means to be human and what it takes to lead a successful business. We’ve entered a new epoch defined by “synthetic humanity,” a term coined by Mark Schaefer to describe AI interactions that are indistinguishable from real human connection. This blurring of lines creates an enormous opportunity, which Mark Schaefer refers to as a “seam” — a moment of disruption wide open for innovators. But as algorithms become more skilled at simulating empathy and insight, what must leaders do to maintain authenticity and relevancy? In this exclusive conversation, Mark Schaefer breaks down why synthetic humanity is the most crucial concept for leaders to grasp today, how to use AI as a partner rather than a replacement, and the vital role of human creativity in a world of supercharged innovation.

The Internet, Smartphones, Social Media, and Now AI, Have All Shifted Customer Expectations

Mark Schaefer is a globally-acclaimed author, keynote speaker, and marketing consultant. He is a faculty member of Rutgers University and one of the top business bloggers and podcasters in the world. How AI Changes Your Customers: The Marketing Guide to Humanity’s Next Chapter is his twelfth book, exploring what companies should consider when it comes to artificial intelligence (AI) and their customers.

Below is the text of my interview with Mark and a preview of the kinds of insights you’ll find in How AI Changes Your Customers presented in a Q&A format:

1. I came across the term ‘synthetic humanity’ fairly early on in the book. Why is this concept so important, and what are the most important aspects for leaders to consider?

“Synthetic humanity” is my term for describing the emerging wave of AI interactions that appear, sound, and even feel human — yet are not human at all. This is not science fiction. Already, chatbots can hold natural conversations, generate art, or simulate empathy in ways that blur the line between authentic and artificial.

For leaders, this matters because customers don’t care whether an experience is powered by code or carbon; they care about how it feels. If synthetic humanity can deliver faster, easier, and more personalized service, people will embrace it. The more machines convincingly mimic us, the more vital it becomes to emphasize distinctly human qualities like compassion, vulnerability, creativity, and trust.

Leaders must navigate two urgent questions: Where do we lean into automation for efficiency? And where do we intentionally preserve human touch for meaning? Synthetic humanity can scale interactions, but it cannot scale authenticity. The most successful brands will be those that strike this balance — leveraging AI’s strengths while showcasing the irreplaceable heartbeat of humanity.

2. We discuss disruption quite a bit here on this blog. Can you share a bit more with our innovators about ‘seams’ and the opportunities they create with AI or otherwise?

Throughout history, disruptions to the status quo, such as pandemics, wars, or economic recessions, have either sunk businesses or elevated them to new heights. Every disruption creates a seam — a moment where the fabric of culture, business, or belief rips just wide enough for an innovator to crawl through and create something new.

We might be living in the ultimate seam.

Google CEO Sundar Pichai calls AI the most significant innovation in human history — more important than fire, medicine, or the internet. The power of AI seems absolute and threatening. For many, it’s terrifying.

Through my new book, I’m trying to get people to view disruption through a different lens: not fear, but immense possibility.

3. Given that AI has access to all of our accumulated wisdom, does it actually create unique insights and ideas, or will innovation always be left to the humans?

AI is extraordinary at remixing existing content. It can scan millions of data points, connect patterns we might miss, and surface possibilities at lightning speed. That feels like insight, and sometimes it is. However, there is a crucial distinction: AI doesn’t truly care. It lacks context, longing, and lived experience.

Innovation often begins with a problem that aches to be solved or a vision that comes from deep within human culture. AI can suggest ten thousand options, but only a person can say, “This one matters because it touches our values, our customers, our future.”

So the real power is in the partnership. AI accelerates discovery, clears away routine work, and even provokes us with new connections. Humans bring the spark of meaning, the intuition, and the courage to act on something that has never been tried before. Innovation is not being replaced. It is being supercharged. In my earlier book “Audacious: How Humans Win in an AI Marketing World,” I note that the bots are here, but we still own crazy!

This is a time for humans to transcend “competent.” Bots can be competent and ignorable.

4. Do you have any tips for us mere mortals on how to productively use AI without developing creative and intellectual atrophy?

Yes, and it starts with how you frame the role of AI in your life. If you treat it as a replacement, you risk letting your creative muscles go slack. If you treat it as a partner, you can actually get stronger.

Here are a few practical approaches. First, use AI to stretch your perspective, not to finish your work for you. Ask it to give you ten angles on a problem, then choose one and make it your own. Second, set boundaries. Write your first draft by hand or sketch ideas before you ever touch a prompt. Let AI react to your thinking, not define it. Third, use the tool to challenge yourself. Feed it your work and ask, “What am I missing? Where are my blind spots?”

Most importantly, keep doing hard things. Struggle is where growth happens. AI can smooth the path, but sometimes you need the climb. Treat the technology as a coach, not a crutch, and you will come out sharper, faster, and even more creative on the other side.

5. I’ve heard a little bit about AI literacy. What are some of the critical aspects that we should all be aware of or try to learn more about?

There are a few critical aspects everyone should know. First, bias. AI models are trained on human data, which means they inherit our blind spots and prejudices. If you don’t recognize this, you may mistake bias for truth. Second, limits. AI is confident even when it is wrong. Knowing how to fact-check and verify is essential. Third, prompting. The quality of your input shapes the quality of the output, so learning how to ask better questions is a new core skill.

Finally, ethics. Just because AI can do something does not mean it should. We all need to be asking: How does this affect privacy, autonomy, and trust?

AI literacy isn’t about becoming a coder. It is about being a thoughtful user, a skeptic when needed, and a leader who understands both the promise and the peril of these tools.

6. What do companies and sole proprietors worried about falling below the fold of the new AI-powered search results need to change online to stay relevant and successful?

I have many practical ideas about this in the book. In short, the old game of chasing clicks and keywords is fading. AI-powered search doesn’t just list links, it delivers answers. That means the winners will be those whose content and presence are woven deeply enough into the digital fabric that the algorithms can’t ignore them.

This requires a shift in focus. Instead of creating content that only ranks, create content that is referenced, cited, and trusted across the web. Build authority by being the source others turn to. Make your ideas so distinct and valuable that they become part of the training data itself. We are entering a golden age for PR!

It also means doubling down on brand signals that AI can’t manufacture. Human stories, original research, strong communities, and unique perspectives will travel farther than generic blog posts. And remember, AI models reward freshness and relevance, so showing up consistently matters.

The book also covers what I call “overrides.” If you create meaningful, loyal relationships with customers and strong word-of-mouth recommendations, those will override the AI recommendations. We consider AI recommendations. We ACT on human recommendations.

7. ‘Weaponizing kindness’ was a terrifying headline I stumbled across in your book. What do organizations need to consider when using AI to interact with customers and what traps are out in front of them?

That phrase is unsettling for a reason. AI can mimic empathy so well that it risks crossing into manipulation. Imagine a chatbot that remembers your child’s name, mirrors your mood, or expresses concern in just the right tone. Done responsibly, that feels like service. Done carelessly, it feels like exploitation.

Organizations need to recognize that kindness delivered at scale is powerful, but if it is hollow or purely transactional, customers will sense it. The first trap is confusing simulation with sincerity. Just because an AI can sound caring does not mean it actually cares. The second trap is overreach. Using personal data to create hyper-tailored interactions can quickly slip from helpful to creepy.

The safeguard is transparency and choice. Be clear about when a customer is interacting with AI. Use technology to enhance human care, not replace it. Always provide people with a way to connect with a real person.

Kindness is a sacred trust in business. Weaponize it, and you erode the very loyalty and love you are trying to build. Use it authentically, and you create relationships no machine can ever replicate.

8. What changing customer expectations (thanks to AI) might companies easily overlook and pay a heavy price for?

One of the biggest shifts is speed. Customers already expect instant answers, but AI raises the bar even higher. If your competitor offers a seamless, AI-powered interaction that solves a problem in seconds, your slower, clunkier process will feel intolerable.

Another overlooked expectation is personalization. People are starting to experience products, services, and recommendations that feel almost eerily tailored to them. That sets a new standard. Companies still delivering one-size-fits-all communication will look outdated. Don’t confuse “personalization” with “personal.”

Perhaps the most subtle change is trust. As customers realize machines can fake warmth and empathy, they will value genuine human touch even more. If every interaction feels synthetic, you risk losing trust, especially if you’re not transparent about it.

The price of ignoring these shifts is steep: irrelevance. Customers rarely complain about unmet expectations anymore; they simply leave. The opportunity is to stay alert, listen closely, and respond quickly as AI reshapes what “good enough” looks like. The companies that thrive will be those that not only keep pace with AI, but also double down on the irreplaceable humanity customers still crave.

9. What unintended consequences of AI do you think companies might face and may not be preparing for? (overcoming AI slander and falsehoods might be one – agree or disagree? Others?)

I agree. In fact, I predict in the book that we cannot foresee AI’s biggest impact yet, as it will likely be an unintended consequence of the technology’s use in an unexpected way.

Where could that occur? Maybe reputational risk at scale. AI systems will generate falsehoods with the same confidence they generate facts, and those errors can stick. A single hallucination about your company, repeated enough times, becomes “truth” in the digital bloodstream. Most companies are not prepared for the speed and reach of misinformation of this kind.

Another consequence is customer dependency. If people hand over more of their decisions to AI, they may lose patience for complexity or nuance in your offerings. That can push companies toward oversimplification, even when a richer human experience would build deeper loyalty.

There is also the cultural risk. Employees might over-rely on AI, quietly eroding skills, judgment, and creativity. A workforce that outsources too much thinking can become brittle in ways that only show up during a crisis.

The real challenge is that these consequences don’t announce themselves. They creep in. Which means leaders must actively audit how AI is being used, question where it might distort reality or weaken capability, and set up safeguards now. The companies that prepare will navigate disruption. The ones that ignore it will be blindsided.

10. Can companies make TOO MUCH use of AI? If so, what would the impacts look like?

Yes, and we will start seeing this more often. It is a pattern that has repeated through history — over-indexing on tech and then bringing the people back in!

When companies lean too heavily on AI, they risk draining the very humanity that makes them memorable. On the surface, it might seem like efficiency: faster service, lower costs, and greater scale. But underneath, the impacts can be corrosive. You might be messing with your brand!

Customers may feel manipulated or devalued if a machine drives every interaction. Even perfect personalization can feel hollow if it lacks genuine care. Second, trust erodes when people sense that a brand hides behind automation rather than showing up with real human accountability. Third, within the company, over-reliance on AI can weaken employee judgment and creativity, resulting in a workforce that follows prompts rather than breaking new ground.

The real danger is commoditization. If every company automates everything, then no company stands out. The winners will be those who know when to say, “This moment deserves a person.” AI should be an amplifier, not a replacement. Too much of it and you don’t just lose connection, you lose your soul.

Conclusion

Thank you for the great conversation Mark!

I hope everyone has enjoyed this peek into the mind of the man behind the inspiring new title How AI Changes Your Customers: The Marketing Guide to Humanity’s Next Chapter!

Image credits: BusinessesGrow.com (Mark W Schaefer)

Content Authenticity Statement: If it wasn’t clear above, the short section in italics was written by Google’s Gemini with edits from Braden Kelley, and the rest of this article is from the minds of Mark Schaefer and Braden Kelley.


What the Heck is Electrofermentation?

The Convergence of Biology, Technology, and Human-Centered Innovation


GUEST POST from Art Inteligencia

For centuries, the principles of manufacturing have been rooted in a linear, resource-intensive model: extract, produce, use, and dispose. In this paradigm, our most creative biological processes, like fermentation, have been limited by their own inherent constraints—slow yields, inconsistent outputs, and reliance on non-renewable inputs like sugars. But as a human-centered change and innovation thought leader, I see a new convergence emerging, one that promises to rewrite the rules of industry. It’s a profound synthesis of biology and technology, a marriage of microbes and micro-currents. I’m talking about electrofermentation, and it’s not just a scientific breakthrough; it’s a paradigm shift that enables us to produce the goods of the future in a way that is smarter, cleaner, and fundamentally more sustainable. This is about using electricity to guide and accelerate nature’s most powerful processes, turning waste into value and inefficiency into a new engine for growth.

The Case for a ‘Smarter’ Fermentation

Traditional fermentation, from brewing beer to creating biofuels, is an impressive but imperfect process. It is a biological balancing act, often limited by thermodynamic and redox imbalances that reduce yield and produce unwanted byproducts. Think of it as a chef trying to cook a complex dish without being able to precisely control the heat or the ingredients. This lack of fine-tuned control leads to waste and inefficiency, a costly reality in a world where every resource counts.

Electrofermentation revolutionizes this by introducing electrodes directly into the microbial bioreactor. This allows scientists to apply an electric current that acts as an electron source or sink, providing a powerful, precise control mechanism. This subtle electrical “nudge” steers the microbial metabolism, overcoming the natural limitations of traditional fermentation. The result is a process that is not only more efficient but also more versatile. It enables us to use unconventional feedstocks, such as industrial waste gases or CO₂, and convert them into valuable products with unprecedented speed and yield. It’s the difference between guessing and knowing, between a linear process and a circular one.
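The electrode's role as an electron source or sink can be quantified with Faraday's law: the moles of electrons delivered to the culture scale linearly with current and time. The relationship below is standard electrochemistry; the current and duration are arbitrary example values, not figures from any specific bioreactor:

```python
# Moles of electrons an electrode supplies to a bioreactor, via Faraday's law:
#   n_e = (I * t) / F
# where I is current (amperes), t is time (seconds),
# and F is the Faraday constant (coulombs per mole of electrons).

FARADAY = 96485.0  # C/mol, charge carried by one mole of electrons

def moles_of_electrons(current_amps, seconds):
    """Electrons (in moles) delivered by a steady current over a duration."""
    return current_amps * seconds / FARADAY

# Example: a 2 A current applied continuously for 24 hours.
n_e = moles_of_electrons(current_amps=2.0, seconds=24 * 3600)
print(f"{n_e:.3f} mol of electrons delivered")  # about 1.791 mol
```

This is why the electrical "nudge" is controllable in a way traditional fermentation is not: dialing the current up or down sets the electron supply directly, and with it the redox balance available to the microbes.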

The Startups and Companies Leading the Charge

This revolution is already underway, driven by a new generation of companies and startups that are harnessing the power of electrofermentation to solve some of the world’s most pressing problems. At the forefront is LanzaTech, a company that has pioneered a process to recycle carbon emissions. They are essentially retrofitting breweries onto industrial sites like steel mills, using their proprietary microbes to ferment waste carbon gases into ethanol and other valuable chemicals. In the food sector, companies like Arkeon are redefining what we eat. They are building a new food system from the ground up by using microbes to convert CO₂ and hydrogen into sustainable proteins. And in the materials science space, innovators are exploring how this technology can create everything from biodegradable plastics to advanced biopolymers, all from non-traditional and renewable sources. These are not just scientific curiosities; they are real-world ventures creating scalable, impactful solutions that are actively building a circular economy.


Case Study 1: LanzaTech – Turning Pollution into Products

The Challenge:

Industrial emissions from steel mills and other heavy industries are a major contributor to climate change. These waste gases—rich in carbon monoxide (CO) and carbon dioxide (CO₂)—are a significant liability, but they also represent a vast, untapped resource. The challenge was to find a commercially viable way to capture these emissions and transform them into something valuable, rather than simply releasing them into the atmosphere.

The Electrofermentation Solution:

LanzaTech developed a gas fermentation process that uses a special strain of bacteria (Clostridium autoethanogenum) that feeds on carbon-rich industrial gases. This is a form of electrofermentation where the microbes use the electrons from the gas to power their metabolism. The process diverts carbon from being a pollutant and, through a biological synthesis, converts it into useful products. It’s like a biological recycling plant that fits onto a smokestack. The bacteria consume the waste gas, and in return, they produce fuels and chemicals like ethanol, which can then be used to make sustainable aviation fuel, packaging, and household goods. The key to its success is the precision of the fermentation process, which maximizes the conversion of waste carbon to valuable products.

The Human-Centered Result:

LanzaTech’s innovation is a powerful example of a human-centered approach to a global problem. It’s a technology that not only addresses a critical environmental challenge but also creates new economic opportunities and supply chains. By turning industrial emissions from a “bad” into a “good,” it redefines our relationship with waste. It’s a move away from a linear, extractive economy and toward a circular, regenerative one, proving that sustainability can be a catalyst for both innovation and profit. It has commercial plants in operation, showing that this is not just a theoretical solution but a scalable reality.


Case Study 2: Arkeon – The Future of Food from Air

The Challenge:

The global food system is under immense pressure. Rising populations, climate change, and resource-intensive agricultural practices are straining our ability to feed everyone sustainably. The production of protein, in particular, has a significant environmental footprint, requiring vast amounts of land and water and generating substantial greenhouse gas emissions. The challenge is to find a new, highly efficient, and sustainable source of protein that is not dependent on traditional agriculture.

The Electrofermentation Solution:

Arkeon is using a form of electrofermentation to create a protein-rich biomass from air. Their process involves using specialized microbes called archaea, which thrive in extreme environments and can be “fed” on CO₂ and hydrogen gas. By using an electrical current to power this process, Arkeon can precisely control the microbial activity to produce amino acids, the building blocks of protein, with incredible efficiency. This innovative process decouples food production from agricultural land, water, and sunlight, making it a highly resilient and sustainable source of nutrition. It’s a closed-loop system where waste (CO₂) is the primary input, and a high-value, functional protein powder is the output.

The Human-Centered Result:

Arkeon’s work is a powerful human-centered innovation because it tackles one of the most fundamental human needs: food security. By developing a method to create protein from waste gases, the company is not only providing a sustainable alternative but also building a more resilient food system. This technology could one day enable localized, decentralized food production, reducing reliance on complex supply chains and making communities more self-sufficient. It is a bold, forward-looking solution that envisions a future where the air we breathe can be a source of sustainable, high-quality nutrition for everyone.


Conclusion: The Dawn of a New Industrial Revolution

Electrofermentation is far more than a technical trick. It represents a paradigm shift from a linear, extractive model to a circular, regenerative one. By converging biology and technology, we are unlocking the ability to produce what we need, not from the earth’s finite resources, but from the waste and byproducts of our own civilization. It is a testament to the power of human-centered innovation, where the goal is not just to build a better widget but to create a better world. For leaders, the question is not if this will impact your industry, but how you will embrace it. The future belongs to those who see waste not as a liability, but as a feedstock, and who are ready to venture beyond the traditional. This is the dawn of a new industrial revolution, and it’s powered by a jolt of electricity and a microbe’s silent work, promising a more sustainable and abundant future for us all.



Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

You Need to Know What Your Customers Think of AI


GUEST POST from Shep Hyken

Ten years ago, only the most technologically advanced companies used AI — although it barely resembled what companies use today when communicating with customers — and it was very, very expensive. But not anymore. Today, any company can implement an AI strategy using ChatGPT-type technologies, often creating experiences that give customers what they want. But not always, which is why the information below is important.

The 2025 Findings

My annual customer service and customer experience (CX) research study surveys more than 1,000 U.S. consumers, weighted to the population’s demographics of age, gender, ethnicity and geography. This year it included an entire group of questions focused on how customers react to and accept (or don’t accept) AI options to ask questions, resolve problems and communicate with a company or brand. Consider the following findings:

  • AI Success: Half of U.S. customers (50%) said they have successfully resolved a customer service issue using AI or ChatGPT-type technologies without needing human assistance. In 2024, only three out of 10 customers (32%) did so. That’s great news, but it’s important to point out that age makes a difference. Six out of 10 Gen-Z customers (61%) successfully used AI support versus just 32% of Boomers.
  • AI Is Far From Perfect: Half of U.S. customers (51%) said they received incorrect information from an AI self-service bot. Even with incredible improvement in AI’s capabilities, it still serves up wrong information. That destroys trust, not only in the company but also in the technology as a whole. A few bad answers and customers will be reluctant, at least in the near term, to choose self-service over the traditional mode of communication, the phone.
  • Still, Customers Believe: Four out of 10 customers (42%) believe AI and ChatGPT can handle complex customer service inquiries as effectively as humans. Even with the mistakes, customers believe AI solutions work. However, 86% of customers think companies using AI should always provide an option to speak or text with a real person.
  • The Phone Still Rules: It’s still too early to throw away phone support. My prediction is that it will be years, if ever, that human-to-human interactions completely disappear, which was proven when we asked, “When you have a problem or issue with a company, which solution do you prefer to use: phone or digital self-service?” The answer is that 68% of customers will still choose the phone over digital self-service. That number is highly influenced by the 82% of Baby Boomers who choose to call a company over any other type of digital support.
  • The Future Looks Strong For AI Customer Support: Six out of 10 customers (63%) expect AI-fueled technologies to become the primary mode of customer support. We asked the same question in 2021, and only 21% of customers felt this way.

The Strategy Behind Using AI For CX

  • Age Matters: As you can see from some of the above findings, there is a big generational gap between younger and older customers. Gen-Z customers are more comfortable, have had more success, and want more digital/AI interactions compared to older customers. Know your customer demographics and provide the appropriate support and communication options based on their age. Recognize you may need to provide different support options if your customer base is “everyone.”
  • Trust Is a Factor: Seven out of 10 customers (70%) have concerns about privacy and security when interacting with AI. Once again, age makes a difference. Trust and confidence with AI consistently decrease with age.

The Future of AI

As AI continues to evolve, especially in the customer service and experience world, companies and brands must find a balance between technology and the human touch. While customers are becoming more comfortable and finding success with AI, we can’t become so enamored with it that we abandon what many of our customers expect. The future of AI isn’t a choice between technology and humans. It’s about creating a blended experience that plays to the technology’s strengths and still gives customers the choice.

Furthermore, if every business had a 100% digital experience, what would be a competitive differentiator? Unless you are the only company that sells a specific product, everything becomes a commodity. Again, I emphasize that there must be a balance. I’ll close with something I’ve written before, but it bears repeating:

The greatest technology in the world can’t replace the ultimate relationship-building tool between a customer and a business: the human touch.

This article was originally published on Forbes.com.

Image Credits: Google Gemini


The Great American Contraction

Population, Scarcity, and the New Era of Human Value

LAST UPDATED: December 3, 2025 at 6:17 PM

GUEST POST from Art Inteligencia

We stand at a unique crossroads in human history. For centuries, the American story has been a tale of growth and expansion. We built an empire on a relentless increase in population and labor, a constant flow of people and ideas fueling ever-greater economic output. But what happens when that foundational assumption is not just inverted, but rendered obsolete? What happens when a country built on the idea of more hands and more minds needing more work suddenly finds itself with a shrinking demand for both, thanks to the exponential rise of artificial intelligence and robotics?

The Old Equation: A Sinking Ship

The traditional narrative of immigration as an economic engine is now a relic of a bygone era. For decades, we debated whether immigrants filled low-skilled labor gaps or competed for high-skilled jobs. That entire argument is now moot. Robotics and autonomous systems are already replacing a vast swath of low-skilled labor, from agriculture to logistics, with greater speed and efficiency than any human ever could. This is not a future possibility; it’s a current reality accelerating at an exponential pace. The need for a large population to perform physical tasks is over.

But the disruption is far more profound. While we were arguing about factory floors and farm fields, Artificial Intelligence (AI) has quietly become a peer-level, and in many cases, superior, knowledge worker. AI can now draft legal briefs, write code, analyze complex data sets, and even generate creative content with a level of precision and speed no human can match. The very “high-skilled” jobs we once championed as the future — the jobs we sought to fill with the world’s brightest minds — are now on the chopping block. The traditional value chain of human labor, from manual to cognitive, is being dismantled from both ends simultaneously.

But workers are not the only thing being disrupted. Governments will be disrupted as well. Why? Because companies will be incentivized to decrease profitability by investing in compute to remain competitive. This means the tax base will shrink at the same time that humans will need increased financial assistance from the government. Taxes are only paid by businesses when there is profit (unless you switch to a revenue basis), and workers only pay taxes when they’re employed. A decreasing tax base combined with rising welfare costs is obviously unsustainable, and it is another proof point for why smart countries have already started reducing their population to decrease the chances of default and social unrest.
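To make that dynamic concrete, consider a deliberately crude toy model of the fiscal squeeze: profits shrink as firms reinvest in compute, the wage base shrinks as jobs automate, and welfare outlays grow to support displaced workers. Every number below is hypothetical, chosen only to illustrate the direction of travel, not to forecast anything.

```python
# Toy model of the fiscal squeeze described above.
# All parameters are hypothetical placeholders, for illustration only.

def fiscal_balances(years=10,
                    corp_profit=1_000.0,   # taxable corporate profit ($B)
                    wage_base=5_000.0,     # taxable wages ($B)
                    welfare=500.0,         # annual welfare outlays ($B)
                    profit_decline=0.06,   # profit diverted into compute each year
                    wage_decline=0.04,     # wage base lost to automation each year
                    welfare_growth=0.08,   # growth in support for displaced workers
                    corp_rate=0.21, income_rate=0.20):
    """Return the annual budget balance (revenue minus welfare) over time."""
    balances = []
    for _ in range(years):
        revenue = corp_profit * corp_rate + wage_base * income_rate
        balances.append(revenue - welfare)
        corp_profit *= 1 - profit_decline
        wage_base *= 1 - wage_decline
        welfare *= 1 + welfare_growth
    return balances

for year, bal in enumerate(fiscal_balances(), start=1):
    print(f"year {year:2d}: balance {bal:+8.1f} $B")
```

Under these assumptions the balance starts positive and turns negative within the decade. Change any parameter and the crossover year moves, but as long as profits and wages shrink while welfare grows, the trajectory only points one way.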

“The question is no longer ‘What can humans do?’ but ‘What can only a human do?'”

The New Paradigm: Radical Scarcity

This creates a terrifying and necessary paradox. The scarcity we must now manage is not one of labor or even of minds, but of human relevance. The old model of a growing population fueling a growing economy is not just inefficient; it is a direct path to social and economic collapse. A population designed for a labor-based economy is fundamentally misaligned with a future where labor is a non-human commodity. The only logical conclusion is a Great Contraction — a deliberate and necessary reduction of our population to a size that can be sustained by a radically transformed economy.

This reality demands a ruthless re-evaluation of our immigration policy. We can no longer afford to see immigrants as a source of labor, knowledge, or even general innovation. The only value that matters now is singular, irreplaceable talent. We must shift our focus from mass immigration to an ultra-selective, curated approach. The goal is no longer to bring in more people, but to attract and retain the handful of individuals whose unique genius and creativity are so rare that AI can’t replicate them. These are the truly exceptional minds who will pioneer new frontiers, not just execute existing tasks.

The future of innovation lies not in the crowd, but in the individual who can forge a new path where none existed before. We must build a system that only allows for the kind of talent that is a true outlier — the Einstein, the Tesla, the Brin, but with the understanding that even a hundred of them will not be enough to employ millions. We are not looking for a workforce; we are looking for a new type of human capital that can justify its existence in a world of automated plenty. This is a cold and pragmatic reality, but it is the only path forward.

Human-Centered Value in a Post-Labor World

My core philosophy has always been about human-centered innovation. In this new world, that means understanding that the purpose of innovation is not just about efficiency or profit. It’s about preserving and cultivating the rare human qualities that still hold value. The purpose of immigration, therefore, must shift. It is not about filling jobs, but about adding the spark of genius that can redefine what is possible for a smaller, more focused society. We must recognize that the most valuable immigrants are not those who can fill our knowledge economy, but those who can help us build a new economy based on a new, more profound understanding of what it means to be human.

The political and social challenges of this transition are immense. But the choice is clear. We can either cling to a growth-based model and face the inevitable social and economic fallout, or we can embrace this new reality. We can choose to see this moment not as a failure, but as an opportunity to become a smaller, more resilient, and more truly innovative nation. The future isn’t about fewer robots and more people. It’s about robots designing, building and repairing other robots. And, it’s about fewer people, but with more brilliant, diverse, and human ideas.

This may sound like a dystopia to some people, but to others it will sound like the future is finally arriving. If you’re still not quite sure what this future might look like and why fewer humans will be needed in America, here are a couple of videos from the present that will give you a glimpse of why this may be the future of America:

INFOGRAPHIC ADDED DECEMBER 3, 2025:

The Great American Contraction Infographic

Image credits: Google Gemini


Augmented Expertise

How XR is Redefining “In-the-Flow” Training


GUEST POST from Art Inteligencia

In our relentless pursuit of innovation and efficiency, we often talk about automation, AI, and the promise of a future where machines handle the heavy lifting. But what about the human element? How do we empower our greatest asset – our people – to perform at their peak, adapt to rapid change, and master increasingly complex tasks without being overwhelmed? The answer, increasingly, lies not in replacing humans, but in augmenting human expertise through sophisticated, intuitive technologies.

One of the most compelling frontiers in this space is Extended Reality (XR) for “in-the-flow” training. This isn’t about traditional classroom learning or even simulated environments that mimic reality. This is about bringing learning directly into the operational context, providing real-time, context-aware guidance that enhances performance precisely when and where it’s needed. Imagine a technician performing a complex repair, seeing holographic instructions overlaid directly onto the machinery. Or a surgeon practicing a new procedure with anatomical data projected onto a mannequin. This is the promise of XR in-the-flow training: learning by doing, with intelligence baked into the environment itself.

Beyond Simulation: The Power of Contextual Learning

For decades, training has largely been a pull-based system: individuals seek out knowledge, or organizations push it through scheduled courses. While effective for foundational understanding, this model struggles in dynamic environments where information decays rapidly, and complexity demands immediate, precise application. The “forgetting curve” is a well-documented phenomenon; we lose a significant portion of what we learn very quickly if it’s not applied.
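One common way to model the forgetting curve, going back to Ebbinghaus, is exponential decay of retention, R = e^(−t/S), where t is the time since learning and S is the stability of the memory. The stability values below are purely illustrative, but they show why a one-off classroom session fades so fast compared with knowledge reinforced in the flow of work:

```python
import math

def retention(t_days: float, stability: float) -> float:
    """Ebbinghaus-style retention: fraction recalled t days after learning."""
    return math.exp(-t_days / stability)

# Hypothetical stability values: a one-off lecture vs. knowledge
# repeatedly applied in the flow of work.
for label, s in [("classroom only", 5.0), ("applied in-flow", 30.0)]:
    recalled = [f"day {t}: {retention(t, s):.0%}" for t in (1, 7, 30)]
    print(f"{label:15s} " + "  ".join(recalled))
```

With these placeholder numbers, the unreinforced lecture is mostly gone within a week, while the in-flow knowledge is still largely intact: the model is simplistic, but the gap it illustrates matches what the retention literature reports.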

XR in-the-flow training flips this script. It leverages augmented reality (AR) and mixed reality (MR) to provide just-in-time, just-enough, just-for-me information. Instead of abstract concepts, learners engage with real-world problems, receiving immediate feedback and instruction that is directly relevant to their current task. This approach drastically improves retention, reduces errors, and accelerates skill acquisition because the learning is deeply embedded in the context of action.
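In software terms, “just-in-time, just-enough, just-for-me” can be as simple as surfacing only the first step the sensors cannot yet confirm as done. The sketch below uses a hypothetical task model and hypothetical sensor keys; it does not reflect any specific vendor’s API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    instruction: str                  # overlay text shown in the smart glasses
    done: Callable[[dict], bool]      # True when sensor state confirms the step

def next_instruction(steps: list[Step], state: dict) -> Optional[str]:
    """Return only the instruction the worker needs right now:
    the first step the sensors do not yet confirm as complete."""
    for step in steps:
        if not step.done(state):
            return step.instruction
    return None  # task complete: show nothing and stay out of the way

# Hypothetical wiring-harness task with made-up sensor fields.
harness_task = [
    Step("Seat connector J3 into the port bracket",
         lambda s: s.get("j3_seated", False)),
    Step("Torque retaining bolt to 12 Nm",
         lambda s: s.get("bolt_torque", 0) >= 12),
]

print(next_instruction(harness_task, {"j3_seated": False}))
print(next_instruction(harness_task, {"j3_seated": True, "bolt_torque": 8}))
print(next_instruction(harness_task, {"j3_seated": True, "bolt_torque": 12}))
```

The design choice matters: because the function is driven by live state rather than a fixed lesson sequence, the guidance automatically tracks the worker’s actual progress, which is the essence of in-the-flow delivery.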

“The future of work isn’t about replacing humans with machines; it’s about seamlessly augmenting human capabilities with intelligent tools that empower us to achieve more.”

This paradigm shift has profound implications for human-centered design. We’re moving from designing for a user who consumes information to designing for a user who *interacts* with information as an integral part of their physical workflow. The interface becomes the environment, and the learning experience is woven into the fabric of the task itself.

Case Study 1: Transforming Aerospace Manufacturing

Consider the aerospace industry, where precision, safety, and efficiency are paramount. As aircraft become more sophisticated, the complexity of assembly and maintenance tasks escalates, leading to longer training cycles and higher potential for human error. One leading aerospace manufacturer faced challenges with new hires in assembly operations, particularly with intricate wiring harnesses and component installation.

They deployed an AR-based in-the-flow training system using smart glasses. When a technician dons the headset, holographic overlays guide them through each step of the assembly process. Arrows point to specific components, digital models show correct placement, and textual instructions appear precisely where needed. The system can even detect if a step is performed incorrectly and provide immediate corrective feedback. The results were dramatic: training time for complex tasks was reduced by 30%, and error rates plummeted by 40% in pilot programs. More importantly, new employees felt more confident and productive much faster, leading to higher job satisfaction and retention.

Case Study 2: Revolutionizing Healthcare Procedures

In healthcare, the stakes are even higher. Doctors, nurses, and technicians constantly need to learn new procedures, operate complex medical equipment, and adapt to evolving protocols. Traditional methods often involve classroom sessions, practice on mannequins (away from the real patient context), or observation, which can be time-consuming and resource-intensive.

A major hospital network implemented a mixed reality training solution for surgical residents learning a minimally invasive procedure. Using an MR headset, residents could visualize a patient’s internal anatomy (from MRI or CT scans) as a 3D hologram directly superimposed onto a high-fidelity surgical mannequin. The system provided real-time guidance on instrument placement, incision angles, and potential risks, all without obscuring the physical tools or the training environment. This allowed residents to practice repeatedly in a highly realistic yet safe environment, receiving immediate visual and auditory feedback. The program demonstrated a significant increase in procedural proficiency and a reduction in the learning curve, leading to better patient outcomes and increased surgeon confidence.

The Ecosystem of Augmented Expertise

The innovation in this space is fueled by a dynamic ecosystem of companies and startups. Microsoft with its HoloLens continues to be a leader, providing a robust platform for mixed reality applications in enterprise. Magic Leap is also making strides with its advanced optical technology. Specialized software providers like PTC (Vuforia), Scope AR, and Librestream are developing powerful authoring tools and platforms that enable companies to create their own AR work instructions and remote assistance solutions without extensive coding. Startups like DAQRI (though recently restructured) have pushed the boundaries of industrial smart glasses, while others focus on specific verticals, offering tailored solutions for manufacturing, logistics, and healthcare. The competition is fierce, driving rapid advancements in hardware form factors, content creation tools, and AI integration for more intelligent guidance.

The Path Forward: Designing for Human Potential

The shift towards XR in-the-flow training is more than just a technological upgrade; it’s a fundamental rethinking of how we empower the human workforce. It’s about recognizing that expertise isn’t just accumulated knowledge, but the ability to apply that knowledge effectively in complex, dynamic situations. By integrating learning directly into the flow of work, we unlock unprecedented levels of productivity, safety, and human potential.

For leaders in human-centered change, innovation, and experience design, this presents a massive opportunity. We must move beyond simply adopting technology and focus on designing holistic systems where the technology seamlessly serves the human. This means:

  • Empathy Mapping: Truly understanding the challenges, cognitive loads, and pain points of front-line workers.
  • Iterative Design: Prototyping and testing XR solutions directly with users to ensure they are intuitive, non-intrusive, and genuinely helpful.
  • Ethical Considerations: Addressing concerns around data privacy, cognitive overload, and the psychological impact of constant augmentation.
  • Integration Strategy: Ensuring XR training solutions are integrated with existing learning management systems and operational data streams.

The future of work is not just augmented reality; it’s augmented human capability. By embracing XR for in-the-flow training, we are not just making tasks easier; we are making our people smarter, more adaptable, and ultimately, more valuable. This is true innovation, designed with humanity at its core.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Pexels


Customer Experience is Changing

If You Don’t Like Change, You’re Going to Hate Extinction


GUEST POST from Shep Hyken

Depending on which studies and articles you read, customer service and customer experience (CX) are getting better … or they’re getting worse. Our customer service and CX research found that 60% of consumers had better customer service experiences than last year, and in general, 82% are happy with the customer service they receive from the companies and brands with which they do business.

Yet, some studies claim customer service is worse than ever. Regardless, more companies than ever are investing in improving CX. Some nail it, but even with an investment, some still struggle. Another telling stat is the growing number of companies attending CX conferences.

Last month, more than 5,000 people representing 1,382 companies attended and participated in Contact Center Week (CCW), the world’s largest conference dedicated to customer service and customer experience. This was the largest attendance to date, representing a 25% growth over last year.

Many recognized brands and CX leaders attended and shared their wisdom from the main stage and breakout rooms. The expo hall featured demonstrations of the latest and greatest solutions to create more effective customer support experiences.

The primary reason I attend conferences like CCW is to stay current with the latest advancements and solutions in CX and to gain insight into how industry leaders think. AI took center stage for most of the presentations. No doubt, it continues to improve and gain acceptance. With that in mind, here are some of my favorite takeaways with my commentary from the sessions I attended:

AI for Training

Becky Ploeger, global head of reservations and customer care at Hilton, uses AI to create micro-lessons for employee training. Hilton is using Centrical’s platform to take various topics and turn them into coaching modules. Employees participate in simulations that replicate customer issues.

Can We Trust AI?

As excited as Ploeger is about AI (and agentic AI), there is still trepidation. CX leaders must recognize that AI is not yet perfect and will occasionally provide inaccurate information. Ploeger said, “We have years and years of experience with agents. We only have six months of experience with agentic AI.”

Wrong Information from AI Costs a Company Money—or Does it?

Gadi Shamia, CEO of Replicant, an AI voice technology company, commented about the mistakes AI makes. In general, CX leaders are complaining that going digital is costing the company money because of the bad information customers receive. Shamia asks, “How much are you losing?” While bad information can cause a customer to defect to a competitor, so can a bad experience with a live customer service rep. So, how often does AI provide incorrect information? How many of those customers leave versus trying to connect with an agent? The metrics you choose to define success with a digital self-service experience need to include more than measuring bad experiences. Mark Killick, SVP of experiential operations at Shipt, weighed in on this topic, saying, “If we don’t fix the problems of providing bad information, we’ll just deliver bad information faster.”
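Shamia’s question, “How much are you losing?”, becomes answerable with simple expected-value arithmetic once both channels are instrumented. All of the volumes and rates below are placeholders, not benchmarks:

```python
def expected_defections(contacts, wrong_info_rate, defect_given_wrong):
    """Customers lost to bad answers in a channel: an expected-value view."""
    return contacts * wrong_info_rate * defect_given_wrong

# Hypothetical inputs for one quarter; every rate here is a placeholder.
ai_losses    = expected_defections(100_000, 0.08, 0.10)  # AI self-service
human_losses = expected_defections( 40_000, 0.04, 0.15)  # live agents

print(f"AI channel:    {ai_losses:,.0f} expected defections")
print(f"Human channel: {human_losses:,.0f} expected defections")
```

The comparison only makes sense per contact and per channel: a higher error rate in the AI channel can still cost less per interaction once volume and deflection savings are netted out, which is exactly why the metric set has to go beyond counting bad experiences.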

Making the Case to Invest in AI

  • Making the Case to Invest in AI: Mariano Tan, president and CEO of Prosodica, says, “Nothing gets funded without a clear business case.” The person in charge of the budget for customer service and CX initiatives (typically the CFO in larger companies) won’t “open the wallet” without proof that the expenditure will yield a return on investment (ROI). People in charge of budgets like numbers, so when you create your “clear business case,” be sure to include the numbers that make a compelling reason to invest in CX. Simply saying, “We’ll reduce churn,” isn’t enough. How much churn—that’s a number. How much does it mean to the bottom line—another number. Numbers sell!
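The “numbers sell” advice translates directly into a few lines of arithmetic. Every figure below is a placeholder to show the shape of the case, not a benchmark:

```python
def cx_business_case(customers=50_000,
                     annual_value=1_200.0,     # average revenue per customer ($)
                     churn_now=0.18,           # current annual churn rate
                     churn_after=0.15,         # projected churn with the program
                     investment=1_200_000.0):  # cost of the CX/AI program ($)
    """Express a churn-reduction pitch as the numbers a CFO asks for."""
    saved_customers = customers * (churn_now - churn_after)
    retained_revenue = saved_customers * annual_value
    roi = (retained_revenue - investment) / investment
    return saved_customers, retained_revenue, roi

saved, revenue, roi = cx_business_case()
print(f"customers retained: {saved:,.0f}")
print(f"revenue retained:   ${revenue:,.0f}")
print(f"first-year ROI:     {roi:.0%}")
```

With these illustrative inputs, a three-point churn reduction retains 1,500 customers and $1.8M of revenue against a $1.2M spend: the kind of concrete framing the article argues budget holders respond to.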

Final Words: Love Change, or Else

Neil Gibson, SVP of CX at FedEx, was part of a panel and shared a quote that is the perfect way to end the article. AI is rapidly changing the way we do business. We must keep up, or else. Gibson quoted Fred Smith, the first CEO and founder of FedEx, who said, “If you don’t like change, you’re going to hate extinction.” In other words, keep up or watch your competition blow past you.

This article was originally published on Forbes.com.

Image Credits: Pixabay


Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?


GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course a part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us, me included, who have interacted with AI at any length have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: And this humanizing of the user interface brings multiple benefits. It is part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of that output. Furthermore, by creating an illusion of human-like intelligence, it implicitly implies emotions, compassion and critical thinking behind the output, even if they’re not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society that we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, and the empathy, emotional intelligence and compassion that comes with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or to add context in the way a human can.

The Uncanny Valley: That brings me to the Uncanny Valley. This describes when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously wrong, but often still ‘felt’ wrong. Or early drum machines were so perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and so had to be modified to include randomness that was below the threshold of conscious awareness, but made them ‘feel’ real.

This difference between conscious and unconscious evaluation creates cognitive dissonance that can result in content feeling odd, or even ‘creepy’. And often, the closer we got to eliminating that dissonance, the creepier it felt. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’. For example, over-photoshopping in a print ad, or poor CGI. But be careful what you wish for. AI appears to have marched through the ‘uncanny valley’ to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This begs a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, or exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that once played a key role in translating search results into output can now be skipped. Synthesizing and processing output from an ‘old fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough to even realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose he or she can cut and paste directly into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that brings up another side effect of low engagement with output: confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and is still capable of making really stupid mistakes. Okay, so are humans, but AI is far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is letting it become an oracle. We not only treat its output as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparent human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger, where instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has reduced our basic literacy skills, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water for data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Most Challenging Obstacles to Achieving Artificial General Intelligence

The Unclimbed Peaks

The Most Challenging Obstacles to Achieving Artificial General Intelligence

GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  • 1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  • 2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  • 3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
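To make the idea concrete, here is a deliberately minimal sketch of cue-phrase matching, the kind of distinction between causal and merely associative language described above. This is an illustration only, not Causaly’s actual implementation; the cue lists and function name are invented for this example, and real systems use far more sophisticated language models.

```python
import re

# Hypothetical cue phrases. Real causal-extraction systems learn these
# distinctions from annotated biomedical text rather than fixed lists.
CAUSAL_CUES = [r"\binduces\b", r"\bresults in\b", r"\bcauses\b", r"\bleads to\b"]
ASSOCIATIVE_CUES = [r"\bis associated with\b", r"\bcorrelates with\b"]

def classify_claim(sentence: str) -> str:
    """Label a sentence as 'causal', 'associative', or 'unknown' by cue matching."""
    text = sentence.lower()
    if any(re.search(pattern, text) for pattern in CAUSAL_CUES):
        return "causal"
    if any(re.search(pattern, text) for pattern in ASSOCIATIVE_CUES):
        return "associative"
    return "unknown"

print(classify_claim("Drug X induces apoptosis in tumor cells."))      # causal
print(classify_claim("Drug X is associated with improved outcomes."))  # associative
```

Even this toy version shows why the distinction matters to a researcher: the two sentences mention the same drug, but only the first supports a testable causal hypothesis.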

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
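The heart of RLHF is a reward model trained from human preference pairs. The toy sketch below shows just that preference-learning step, using the standard Bradley-Terry pairwise loss (the preferred response should score higher than the rejected one). It is illustrative only, not OpenAI’s code: the one-dimensional “features” and the data are invented, whereas real reward models are full neural networks scoring text.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical data: each pair is (chosen_score_feature, rejected_score_feature),
# where "chosen" is the response human trainers preferred.
pairs = [(2.0, 0.5), (1.5, -0.2), (3.0, 1.0)]

w = 0.0   # single reward-model weight (a stand-in for network parameters)
lr = 0.1
for _ in range(200):
    for x_chosen, x_rejected in pairs:
        # Bradley-Terry loss: -log sigmoid(reward_chosen - reward_rejected)
        margin = w * x_chosen - w * x_rejected
        grad = -(1 - sigmoid(margin)) * (x_chosen - x_rejected)
        w -= lr * grad  # gradient step: push preferred responses to score higher

# After training, the reward model ranks every human-preferred response higher.
assert all(w * chosen > w * rejected for chosen, rejected in pairs)
```

In full RLHF, this learned reward then drives a reinforcement-learning step that fine-tunes the language model itself, which is what injects the human “common sense compass” the article describes.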

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






The Crisis Innovation Trap

Why Proactive Innovation Wins

LAST UPDATED: September 3, 2025 at 12:00PM
The Crisis Innovation Trap

by Braden Kelley and Art Inteligencia

In the narrative of business, we often romanticize the idea of “crisis innovation.” The sudden, high-stakes moment when a company, backed against a wall, unleashes a burst of creativity to survive. The pandemic, for instance, forced countless businesses to pivot their models overnight. While this showcases incredible human resilience, it also reveals a dangerous and costly trap: the belief that innovation is something you turn on only when there’s an emergency. As a human-centered change and innovation thought leader, I’ve seen firsthand that relying on crisis as a catalyst is a recipe for short-term fixes and long-term decline. True, sustainable innovation is not a reaction; it’s a proactive, continuous discipline.

The problem with waiting for a crisis is that by the time it hits, you’re operating from a position of weakness. You’re making decisions under immense pressure, with limited resources, and with a narrow focus on survival. This reactive approach rarely leads to truly transformative breakthroughs. Instead, it produces incremental changes and tactical adaptations—often at a steep price in terms of burnout, strategic coherence, and missed opportunities. The most successful organizations don’t innovate to escape a crisis; they innovate continuously to prevent one from ever happening.

The Cost of Crisis-Driven Innovation

Relying on crisis as your innovation driver comes with significant hidden costs:

  • Reactive vs. Strategic: Crisis innovation is inherently reactive. You’re fixing a symptom, not addressing the root cause. This prevents you from engaging in the deep, strategic thinking necessary for true market disruption.
  • Loss of Foresight: When you’re in a crisis, all attention is on the immediate threat. This short-term focus blinds you to emerging trends, shifting customer needs, and new market opportunities that could have been identified and acted upon proactively.
  • Burnout and Exhaustion: Innovation requires creative energy. Forcing your teams into a constant state of emergency to innovate leads to rapid burnout, high turnover, and a culture of fear, not creativity.
  • Suboptimal Outcomes: The solutions developed in a crisis are often rushed, inadequately tested, and sub-optimized. They are designed to solve an immediate problem, not to create a lasting competitive advantage.

“Crisis innovation is a sprint for survival. Proactive innovation is a marathon for market leadership. You can’t win a marathon by only practicing sprints when the gun goes off.”

Building a Culture of Proactive, Human-Centered Innovation

The alternative to the crisis innovation trap is to embed innovation into your organization’s DNA. This means creating a culture where curiosity, experimentation, and a deep understanding of human needs are constant, not sporadic. It’s about empowering your people to solve problems and create value every single day.

  1. Embrace Psychological Safety: Create an environment where employees feel safe to share half-formed ideas, question assumptions, and even fail. This is the single most important ingredient for continuous innovation.
  2. Allocate Dedicated Resources: Don’t expect innovation to happen in people’s spare time. Set aside dedicated time, budget, and talent for exploratory projects and initiatives that don’t have an immediate ROI.
  3. Focus on Human-Centered Design: Continuously engage with your customers and employees to understand their frustrations and aspirations. True innovation comes from solving real human problems, not just from internal brainstorming.
  4. Reward Curiosity, Not Just Results: Celebrate learning, even from failures. Recognize teams for their efforts in exploring new ideas and for the insights they gain, not just for the products they successfully launch.

Case Study 1: Blockbuster vs. Netflix – The Foresight Gap

The Challenge:

In the late 1990s, Blockbuster was the undisputed king of home video rentals. It had a massive physical footprint, brand recognition, and a highly profitable business model based on late fees. The crisis of digital disruption and streaming was not a sudden event; it was a slow-moving signal on the horizon.

The Reactive Approach (Blockbuster):

Blockbuster’s management was aware of the shift to digital, but they largely viewed it as a distant threat. They were so profitable from their existing model that they had no incentive to proactively innovate. When Netflix began gaining traction with its subscription-based, DVD-by-mail service, Blockbuster’s response was a reactive, half-hearted attempt to mimic it. They launched an online service but failed to integrate it with their core business, and their culture remained focused on the physical store model. They only truly panicked and began a desperate, large-scale innovation effort when it was already too late and the market had irreversibly shifted to streaming.

The Result:

Blockbuster’s crisis-driven innovation was a spectacular failure. By the time they were forced to act, they lacked the necessary strategic coherence, internal alignment, and cultural agility to compete. They didn’t innovate to get ahead; they innovated to survive, and they failed. They went from market leader to bankruptcy, a powerful lesson in the dangers of waiting for a crisis to force your hand.


Case Study 2: Lego’s Near-Death and Subsequent Reinvention

The Challenge:

In the early 2000s, Lego was on the brink of bankruptcy. The brand, once a global icon, had become a sprawling, unfocused company that was losing relevance with children increasingly drawn to video games and digital entertainment. The company’s crisis was not a sudden external shock, but a slow, painful internal decline caused by a lack of proactive innovation and a departure from its core values. They had innovated, but in a scattered, unfocused way that diluted the brand.

The Proactive Turnaround (Lego):

Lego’s new leadership realized that a reactive, last-ditch effort wouldn’t save them. They saw the crisis as a wake-up call to fundamentally reinvent how they innovate. Their strategy was not just to survive but to thrive by returning to a proactive, human-centered approach. They went back to their core product, the simple plastic brick, and focused on deeply understanding what their customers—both children and adult fans—wanted. They launched several initiatives:

  • Re-focus on the Core: They trimmed down their product lines and doubled down on what made Lego special—creativity and building.
  • Embracing the Community: They proactively engaged with their most passionate fans, the “AFOLs” (Adult Fans of Lego), and co-created new products like the highly successful Lego Architecture and Ideas series. This wasn’t a reaction to a trend; it was a strategic partnership.
  • Thoughtful Digital Integration: Instead of panicking and launching a thousand digital products, they carefully integrated their physical and digital worlds with games like Lego Star Wars and movies like The Lego Movie. These weren’t rushed reactions; they were part of a long-term, strategic vision.

The Result:

Lego’s transformation from a company on the brink to a global powerhouse is a powerful example of the superiority of proactive innovation. By not just reacting to their crisis but using it as a catalyst to build a continuous, human-centered innovation engine, they not only survived but flourished. They turned a painful crisis into a foundation for a new era of growth, proving that the best time to innovate is always, not just when you have no other choice.


Eight I's of Infinite Innovation

The Eight I’s of Infinite Innovation

Braden Kelley’s Eight I’s of Infinite Innovation provides a comprehensive framework for organizations seeking to embed continuous innovation into their DNA. The model starts with Ideation, the spark of new concepts, which must be followed by Inspiration—connecting those ideas to a compelling, human-centered vision. This vision is refined through Investigation, a process of deeply understanding customer needs and market dynamics, leading to the Iteration of prototypes and solutions based on real-world feedback. The framework then moves from development to delivery with Implementation, the critical step of bringing a viable product to market. This is not the end, however; it’s a feedback loop that requires Invention of new business models, a constant process of Improvement based on outcomes, and finally, the cultivation of an Innovation culture where the cycle can repeat infinitely. Each ‘I’ builds upon the last, creating a holistic and sustainable engine for growth.

Conclusion: The Time to Innovate is Now

The notion of “crisis innovation” is seductive because it offers a heroic narrative. But behind every such story is a cautionary tale of a company that let a problem fester for far too long. The most enduring, profitable, and relevant organizations don’t wait for a burning platform to jump; they are constantly building new platforms. They have embedded a culture of continuous, proactive innovation driven by a deep understanding of human needs. They innovate when times are good so they are prepared when times are tough.

The time to innovate is not when your stock price plummets or your competitor launches a new product. The time to innovate is now, and always. By making innovation a fundamental part of your business, you ensure your organization’s longevity and its ability to not just survive the future, but to shape it.

Image credit: Pixabay

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.