Category Archives: Technology

The Voicebots are Coming

Your Next Customer Support Agent May Not Be a Human

LAST UPDATED: October 27, 2025 at 1:00PM

by Braden Kelley

Last week I had the opportunity to attend Customer Contact Week (CCW) in Nashville, Tennessee and learn that the familiar, frustrating tyranny of the touch-tone IVR (Interactive Voice Response) system is finally ending. For too long, the gateway to customer service has felt like a maze designed to prevent contact, not facilitate it. But thanks to the rapid evolution of Conversational AI — fueled by Generative Large Language Models (LLMs) — the entire voice interaction landscape is undergoing a revolutionary, and necessary, change. As a thought leader focused on human-centered change, innovation and experience design, I can tell you the future of the call center isn’t purely automated; it’s intelligently orchestrated.

The voicebot — the modern AI-powered voice agent — is moving past its days as a simple chatbot with a synthesized voice. Today’s AI agents use Natural Language Processing (NLP) to understand intent, context, and even tone, allowing them to handle complex, multi-step issues with startling accuracy. More importantly, they are ushering in the era of the bionic contact center, where the human agent is augmented, not replaced. This hybrid model — where AI handles the heavy lifting and humans provide empathy, complex reasoning, and necessary approvals — is the key to achieving both massive scale and superior Customer Experience (CX).

Overcoming the Voice Friction: The Tech Foundation

The shift to true voice AI required overcoming significant friction points that plagued older systems:

  • Barge-In and Latency: Modern voicebots offer near-instantaneous response times and can handle barge-in (when a customer interrupts the bot) naturally, mimicking human conversation flow.
  • Acoustic Noise: Advanced speech recognition models are highly resilient to background noise and varied accents, ensuring high accuracy even in noisy home or car environments.
  • Intent Nuance: LLMs provide the deep contextual understanding needed to identify customer intent, even when the customer uses vague or emotional language, turning frustrated calls into productive ones.

The Dual Pillars of Voice AI in CX

Conversational AI is transforming voice service through two primary deployment models, both of which reduce Customer Effort Score (CES) and boost Customer Satisfaction (CSAT):

1. Full Call Automation (The AI Front Line)

This model is deployed for high-volume, routine, yet critical interactions. The voicebot connects directly to the company’s backend systems (CRM, ERP, knowledge base) to pull personalized information and take action in real-time. Crucially, these new AI agents move beyond rigid scripts, using Generative AI to create dynamic, human-like dialogue that resolves the issue instantly. This 24/7 self-service capability slashes queue times and dramatically lowers the cost-to-serve.

2. Human-AI Collaboration (The Bionic Agent)

This is where the real human-centered innovation lies. The AI agent handles the bulk of the call — identifying the customer, verifying identity, diagnosing the problem, and gathering data. When the request hits a complexity threshold — such as requiring a policy override, handling an escalated complaint, or needing a final human authorization — the AI performs a contextual handoff. The human agent receives the call along with a complete, structured summary of the conversation, the customer’s intent, and often a recommended next step, turning a frustrating transfer into a seamless, empowered human interaction.

Better still is a model in which a single human agent provides approvals or other guidance to multiple AI voice agents. Each agent continues to own its call while waiting for the human to respond (often helping the customer with additional queries in the meantime) before carrying the conversation through to resolution.
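To make the contextual handoff concrete, here is a minimal sketch of the kind of structured summary an AI voice agent could pass along with the call. The field names and the escalate_to_human helper are illustrative assumptions for this example, not any vendor’s actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HandoffPacket:
    """Structured context an AI voice agent hands to a human agent.
    Field names are illustrative, not a specific vendor's schema."""
    customer_id: str
    verified_identity: bool
    intent: str                  # e.g. "dispute_duplicate_charge"
    conversation_summary: str    # short, human-readable recap
    data_gathered: dict          # account numbers, dates, amounts, etc.
    recommended_next_step: str   # the AI's suggested resolution
    reason_for_escalation: str   # e.g. "policy override required"
    transcript_excerpts: List[str] = field(default_factory=list)

def escalate_to_human(packet: HandoffPacket) -> None:
    """Hypothetical routing call: pushes the packet to the human agent's
    desktop so the customer never has to repeat their story."""
    print(f"Routing {packet.customer_id}: {packet.intent} "
          f"-> suggested: {packet.recommended_next_step}")

# Example handoff after the AI has identified, verified and diagnosed
packet = HandoffPacket(
    customer_id="C-10482",
    verified_identity=True,
    intent="dispute_duplicate_charge",
    conversation_summary="Customer was billed twice on Oct 3; requests a refund.",
    data_gathered={"invoice": "INV-7731", "amount": 49.99},
    recommended_next_step="Approve one-time refund of $49.99",
    reason_for_escalation="Refund above the bot's approval threshold",
)
escalate_to_human(packet)
```

If everything the bot learned travels with the call like this, the customer never has to repeat their story and the human agent starts from the recommendation rather than from scratch.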

Customer Contact Week Nashville

“The most powerful application of voice AI isn’t automation, it’s augmentation. By freeing human agents from transactional drudgery, we elevate them to be empathic problem solvers, enhancing both their job satisfaction and the customer’s outcome.” — Braden Kelley


Measuring the Success of the Handoff

The quality of the transition between AI and human is the true measure of success. Leaders must track metrics that assess the efficacy of the handoff itself (a minimal measurement sketch follows the list):

  • Repeat Story Rate: The percentage of customers who have to repeat information to the human agent after an AI handoff. This must be near zero.
  • Agent Ramp-up Time (Post-Transfer): The time it takes for the human agent to absorb the AI-generated context and take meaningful action. Lower is better.
  • Post-Handoff CSAT: The customer satisfaction score specifically captured after a complex AI-to-human transfer, measuring the seamlessness of the experience.
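As a rough illustration, the sketch below computes all three handoff metrics from a handful of post-handoff call records. The record fields (repeated_story, ramp_up_seconds, post_handoff_csat) are hypothetical names chosen for the example rather than fields from any particular contact center platform.

```python
from statistics import mean

# Hypothetical post-handoff call records; field names are illustrative
calls = [
    {"repeated_story": False, "ramp_up_seconds": 22, "post_handoff_csat": 5},
    {"repeated_story": True,  "ramp_up_seconds": 95, "post_handoff_csat": 2},
    {"repeated_story": False, "ramp_up_seconds": 30, "post_handoff_csat": 4},
]

repeat_story_rate = mean(c["repeated_story"] for c in calls) * 100  # % of calls
avg_ramp_up = mean(c["ramp_up_seconds"] for c in calls)             # seconds
post_handoff_csat = mean(c["post_handoff_csat"] for c in calls)     # 1-5 scale

print(f"Repeat Story Rate:  {repeat_story_rate:.0f}% (target: near zero)")
print(f"Agent Ramp-up Time: {avg_ramp_up:.0f}s (lower is better)")
print(f"Post-Handoff CSAT:  {post_handoff_csat:.1f}/5")
```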

The Agentic Future

The voicebots are indeed coming, and they are bringing with them the most significant shift in customer service since the telephone itself. The next evolution will see agentic AI — bots that can dynamically choose between multiple tools and knowledge sources to resolve novel problems without being strictly pre-scripted. The challenge for leaders is to ensure that as this technology scales, our focus remains firmly on the human experience, leveraging the best of AI’s speed and the best of human empathy to create a truly effortless and satisfying customer journey.

🤖 Companies to Watch in AI Voicebots

The voicebot space is rapidly evolving, driven by generative AI, and the recent Customer Contact Week (CCW) in Nashville highlighted several key players. Companies to watch in this generative AI voicebot and contact center space include market-leading platforms like NICE, Genesys, Zoom and Five9, all of which are heavily integrating generative and agentic AI features — such as real-time coaching and automated post-call summaries — into their core Contact Center as a Service (CCaaS) offerings.

Beyond the traditional CCaaS providers, specialist AI firms like Replicant, Voice.AI and ASAPP (which had a significant presence at the event) continue to stand out by focusing either on full end-to-end voice automation for complex transactions or on advanced Human-in-the-Loop AI features that augment live agents, particularly in regulated industries like financial services.

Additionally, major cloud vendors like Google Cloud and AWS (Amazon Connect) are increasingly aggressive, leveraging their foundational AI models to provide scalable, next-generation AI agents and contact center platforms, ensuring they remain transformative forces in customer experience (CX) automation.

HALLOWEEN BONUS: Save 30% on the eBook, hardcover or softcover of my latest book Charting Change (now in its second edition) — FREE SHIPPING WORLDWIDE — using code HAL30 until midnight October 31, 2025

Image credits: Customer Management Practice, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, vendors to mention, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.


The Future of Humanity in an Artificially Intelligent World


GUEST POST from Greg Satell

The Argentinian writer Jorge Borges had a fascination with a concept known as the infinite monkey theorem. The idea is that if you had an infinite number of monkeys pecking away at an infinite number of typewriters, they would randomly create the collected works of Tolstoy and every other masterwork ever written (or that could be written).

The theorem, which has been around for at least a century, is troubling because it calls into question what it means to be human. If we can be inspired by something that could so easily be randomly generated, then what does it mean to be meaningful? Is meaning just an illusion we construct to make ourselves happy?

In recent years, the rise of artificial intelligence has transformed this theoretical dilemma into an intensely practical issue. In a world in which machines are taking over work long thought of as intensely human, what is the role of human labor? How do we create value that is distinct from what machines can do faster and cheaper? The answers will shape our future.

Machines Replacing Humans

The first industrial robot, called Unimate, was installed on an assembly line at General Motors in 1961. Since then, robots have become highly integrated into our economy. They do dangerous jobs, like bomb disposal, as well as more prosaic ones, like running warehouses and legal discovery.

And lest you think that automation only applies to low-skill, mechanical jobs, robots are also invading the creative realm. One book written by a machine was even accepted as a submission for the prestigious Hoshi Shinichi Literary Award in Japan. There are now robots that advise physicians, movie producers and music executives. Some even write songs.

Yet these things have been, for the most part, going on in the background. Very few of us see robots at work in factories, much less evaluating scripts, reading mammograms or doing research for lawyers. However, the introduction of ChatGPT brings the power of AI to anyone with an Internet connection.

If you haven’t tried it yourself yet, here’s an example: I asked it to explain the infinite monkey theorem to me and this is what I got:

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. The theorem is often used as a humorous illustration of the basics of probability theory. However, the theorem is actually a misnomer, since the chances of a monkey randomly typing a meaningful sequence of characters is incredibly small, and the concept of an “infinite” amount of time is physically impossible.

And just for fun, I asked it to produce a one paragraph promotional biography of Greg Satell:

Greg Satell is a thought leader in innovation and business strategy, with over 20 years of experience in the industry. He is a popular speaker and author, known for his bestselling book “Mapping Innovation: A Playbook for Navigating a Disruptive Age.” Greg has also held executive roles at leading companies and has a track record of driving innovation and growth through effective business strategies.

Each of these responses appeared in seconds and, while they are not what I would have chosen to write myself, they are more than passable. With the technology advancing at this pace, how can we compete with the machines?

The Automation Paradox

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs due to automation. Yet those out-of-work farmers weren’t impoverished. In fact, the 20th century was an era of unprecedented prosperity.

Consider this: Although the workforce in the US has more than doubled since 1950, labor participation rates remain close to all-time highs. Still, a recent report by the US Chamber of Commerce found that we have a massive labor shortage. In the highly-automated manufacturing sector, it estimated that even if every unemployed person with experience were employed, it would only fill half of the vacant jobs.

In fact, when you look at highly automated fields, they tend to be the ones that have major labor shortages. You see touchscreens everywhere you go, but 70% of openings in the retail sector go unfilled. Autopilot has been around for decades, but we face a massive global pilot shortage that’s getting worse every year.

Once a task becomes automated, it also becomes largely commoditized and value is then created in an area that wasn’t quite obvious when people were busy doing more basic things. Go to an Apple store and you’ll notice two things: lots of automation and a sea of employees in blue shirts there to help, troubleshoot and explain things to you. Value doesn’t disappear, it just shifts to a different place.

One striking example of this is the humble community bookstore. With the domination of Amazon, you might think that small independent bookstores would be doomed, but instead they’re thriving. While it’s true that they can’t match Amazon’s convenience, selection or prices, people are flocking to small local shops for other reasons, such as deep expertise in particular subject matter and the chance to meet people with similar interests.

The Irrational Mind

To understand where value is shifting now, the work of neuroscientist Antonio Damasio can shed some light. He studied patients who, despite having perfectly normal cognitive ability, had lost the ability to feel emotion. Many would assume that, without emotions to distract them, these people would be great at making perfectly rational decisions.

But they weren’t. In fact, they couldn’t make any decisions at all. They could list the factors at play and explain their significance, but they couldn’t feel one way or another about them. In effect, without emotion they couldn’t form any intention. One decision was just like any other, leading to an outcome that they cared nothing about.

The social psychologist Jonathan Haidt built on Damasio’s work to form his theory of social intuitionism. What Haidt found in his research is that we don’t make moral judgments through conscious reasoning, but rather through unconscious intuition. Essentially, we automatically feel a certain way about something and then come up with reasons that we should feel that way.

Once you realize that, it becomes clear why Apple needs so many blue shirts at its stores and why independent bookstores are thriving. An artificial intelligence can access all the information in the world, curate that information and present it to us in an understandable way, but it can’t understand why we should care about it.

In fact, humans often disguise our true intent, even to ourselves. A student might say he wants a new computer to do schoolwork, but may really want a stronger graphics engine to play video games. In much the same way, a person may want to buy a book about a certain subject, but also truly covet a community which shares the same interest.

The Library of Babel And The Intention Economy

In his story The Library of Babel, Borges describes a library which contains books with all potential word combinations in all possible languages. Such a place would encompass all possible knowledge, but would also be completely useless, because the vast majority of books would be gibberish consisting of random strings of symbols.

In essence, deriving meaning would be an exercise in curation, which machines could do if they perfectly understood our intentions. However, human motives are almost hopelessly complex. So much so, in fact, that even we ourselves often have difficulty understanding why we want one thing and not another.

There are some things that a computer will never do. Machines will never strike out at a Little League game, have their hearts broken in a summer romance or see their children born. The inability to share human experiences makes it difficult, if not impossible, for computers to relate to human emotions and infer how those feelings shape preferences in a given context.

That’s why the rise of artificial intelligence is driving a shift from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers—we now use computers for those things—than with humans serving other humans. That requires deeper collaboration, teamwork and emotional intelligence.

To derive meaning in an artificially intelligent world we need to look to each other and how we can better understand our intentions. The future of technology is always more human.


— Article courtesy of the Digital Tonto blog
— Image credit: Gemini


How Cobots are Humanizing the Factory Floor

The Collaborative Revolution

LAST UPDATED: October 25, 2025 at 4:33PM

GUEST POST from Art Inteligencia

For decades, industrial automation has been defined by isolation. Traditional robots were caged behind steel barriers, massive, fast, and inherently dangerous to humans. They operated on the principle of replacement, seeking to swap out human labor entirely for speed and precision. But as a thought leader focused on human-centered change and innovation, I see this model as fundamentally outdated. The future of manufacturing, and indeed, all operational environments, is not about replacement — it’s about augmentation.

Enter the Collaborative Robot, or Cobot. These smaller, flexible, and safety-certified machines are the definitive technology driving the next phase of the Industrial Revolution. Unlike their predecessors, Cobots are designed to work alongside human employees without protective caging. They are characterized by their force-sensing capabilities, allowing them to stop instantly upon contact, and their ease of programming, often achieved through simple hand-guiding (or “teaching”). The most profound impact of Cobots is not on the balance sheet, but on the humanization of work, transforming dull, dirty, and dangerous tasks into collaborative, high-value roles. This shift requires leaders to address the initial psychological barrier of automation, re-framing the technology as a partner in productivity and safety.

The Three Pillars of Cobot-Driven Human-Centered Innovation

The true value of Cobots lies in how they enable the three core tenets of modern innovation:

  • 1. Flexibility and Agility: Cobots are highly portable and quick to redeploy. A human worker can repurpose a Cobot for a new task — from picking parts to applying glue — in a matter of hours. This means production lines can adapt to short runs and product customization far faster than large, fixed automation systems, giving businesses the agility required in today’s volatile market.
  • 2. Ergonomic and Safety Improvement: Cobots take on the ergonomically challenging or repetitive tasks that lead to human injury (like repeated lifting, twisting, or precise insertion). By handling the “Four Ds” (Dull, Dirty, Dangerous, and Difficult-to-Ergonomically-Design), they dramatically improve worker health, morale, and long-term retention.
  • 3. Skill Elevation and Mastery: Instead of being relegated to simple assembly, human workers are freed to focus on high-judgment tasks: quality control, complex troubleshooting, system management, and, crucially, Cobot programming and supervision. This elevates the entire workforce, shifting roles from manual labor to process management and robot literacy.

“Cobots are the innovation that tells human workers: ‘We value your brain and your judgment, not just your back.’ The factory floor is becoming a collaborative workspace, not a cage, but leaders must proactively communicate the upskilling opportunity.”


Case Study 1: Transforming Aerospace Assembly with Human-Robot Teams

The Challenge:

A major aerospace manufacturer faced significant challenges in the final assembly stage of large aircraft components. Tasks involved repetitive drilling and fastener application in tight, ergonomically challenging spaces. The precision required meant workers were often in awkward positions for extended periods, leading to fatigue, potential errors, and high rates of Musculoskeletal Disorders (MSDs).

The Cobot Solution:

The company deployed a fleet of UR-style Cobots equipped with vision systems. The human worker now performs the initial high-judgment setup — identifying the part and initiating the sequence. The Cobot then precisely handles the heavy, repetitive drilling and fastener insertion. The human worker remains directly alongside the Cobot, performing simultaneous quality checks and handling tasks that require tactile feedback or complex dexterity (like cable routing).

The Innovation Impact:

The process yielded a 30% reduction in assembly time and, critically, a near-zero rate of MSDs related to the process. The human role shifted entirely from physical exertion to supervision and quality assurance, turning an exhausting, injury-prone role into a highly skilled, collaborative function. This demonstrates Cobots’ power to improve both efficiency and human well-being, increasing overall job satisfaction.


Case Study 2: Flexible Automation in Small-to-Medium Enterprises (SMEs)

The Challenge:

A small, family-owned metal fabrication business needed to increase production to meet demand for specialized parts. Traditional industrial robotics were too expensive, too large, and required complex, fixed programming — an impossible investment given their frequent product changeovers and limited engineering staff.

The Cobot Solution:

They invested in a single, affordable, lightweight Cobot (e.g., a FANUC CR series) and installed it on a mobile cart. The Cobot was tasked with machine tending — loading and unloading parts from a CNC machine, a task that previously required a dedicated, monotonous human shift. Because the Cobot could be programmed by simple hand-guiding and a user-friendly interface, existing line workers were trained to set up and manage the robot in under a day, focusing on Human-Robot Interaction (HRI) best practices.

The Innovation Impact:

The Cobot enabled lights-out operation for the single CNC machine, freeing up human workers to focus on higher-value tasks like complex welding, custom finishing, and customer consultation. This single unit increased the company’s throughput by 40% without increasing floor space or headcount. More importantly, it democratized automation, proving that Cobots are the essential innovation that makes high-level automation accessible and profitable for small businesses, securing their future competitiveness.


Companies and Startups to Watch in the Cobot Space

The market is defined by both established players leveraging their industrial expertise and nimble startups pushing the envelope on human-AI collaboration. Universal Robots (UR) remains the dominant market leader, largely credited with pioneering the field and setting the standard for user-friendliness and safety. They are focused on expanding their software ecosystem to make deployment even simpler. FANUC and ABB are the industrial giants who have quickly integrated Cobots into their massive automation portfolios, offering hybrid solutions for high-mix, low-volume production. Among the startups, keep an eye on companies specializing in advanced tactile sensing and vision — the critical technologies that will allow Cobots to handle true dexterity. Companies focusing on AI-driven programming (where the Cobot learns tasks from human demonstration) and mobile manipulation (Cobots mounted on Autonomous Mobile Robots, or AMRs) are defining the next generation of truly collaborative, fully mobile smart workspaces.

The shift to Cobots signals a move toward agile manufacturing and a renewed respect for the human worker. The future factory floor will be a hybrid environment where human judgment, creativity, and problem-solving are amplified, not replaced, by safe, intelligent robotic partners. Leaders who fail to see the Cobot as a tool for human-centered upskilling and empowerment will be left behind in the race for true productivity and innovation. The investment must be as much in robot literacy as it is in the robots themselves.


Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini


Are You Getting Your Fair Share of $860 Billion?


GUEST POST from Shep Hyken

According to Qualtrics, there is an estimated $860 billion worth of revenue and cost savings available for companies that figure out how to create an improved Customer Experience (CX) using AI to better understand and serve their customers. (That includes $420 billion for B2B and $440 billion for B2C.) Qualtrics recently released these figures in a report/eBook titled Unlock the Potential through AI-Enabled CX.

I had a chance to interview Isabelle Zdatny, head of thought leadership at Qualtrics Experience Management Institute, for Amazing Business Radio. She shared insights from the report, including ways in which AI is reshaping how organizations measure, understand and improve their relationships with customers. These ideas are what will help you get more customers, keep existing customers and improve your processes, giving you a share of the $860 billion that is up for grabs. Here are some of the top takeaways from our interview.

AI-Enabled CX Represents a Financial Opportunity

The way AI is used in customer experience is much more than just a way to deflect customers’ questions and complaints to an AI-fueled chatbot or other self-service solution. Qualtrics’ report findings show that the value comes through increased employee productivity, process improvement and revenue growth. Zdatny notes a gap between leadership’s recognition of AI’s potential and their readiness to lead and make a change. Early adopters will likely capture “compounding advantages,” as every customer interaction makes their systems smarter and their advantage more difficult for competitors to overcome. My response to this is that if you aren’t on board with AI for the many opportunities it creates, you’re not only going to be playing catch-up with your competitors, but also having to catch up with the market share you’re losing.

Customers Want Convenience

While overall CX quality is improving, thanks to innovation, today’s customers have less tolerance for friction and mistakes. A single bad experience can cause customers to defect. My customer experience research says an average customer will give you two chances. Zdatny says, “Customers are less tolerant of friction these days. … Deliver one bad experience, and that sends the relationship down a bad path more quickly than it used to.”

AI Takes Us Beyond Surveys

Customer satisfaction surveys can frustrate customers. AI collects the data from interactions between customers and the company and analyzes it using natural language processing and sentiment. It can predict churn and tension. It analyzes customer behavior, and while it doesn’t look at a specific customer (although it can), it is able to spot trends in problems, opportunities and more. The company that uses this information the right way can reap huge financial rewards by creating a better customer experience.
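As a loose illustration of that trend-spotting idea, the sketch below aggregates sentiment-scored interactions by topic and flags an at-risk customer. The scores, field names and thresholds are invented for the example; in practice the scores would come from an NLP pipeline run over calls, chats and reviews.

```python
from statistics import mean

# Hypothetical sentiment-scored interactions (scores in [-1, 1])
interactions = [
    {"customer": "A", "sentiment": -0.6, "topic": "billing"},
    {"customer": "A", "sentiment": -0.4, "topic": "billing"},
    {"customer": "B", "sentiment": 0.7,  "topic": "delivery"},
    {"customer": "B", "sentiment": 0.5,  "topic": "product"},
]

# Trend-spotting across all customers: which topics drive negative sentiment?
by_topic = {}
for i in interactions:
    by_topic.setdefault(i["topic"], []).append(i["sentiment"])
problem_topics = {t: mean(s) for t, s in by_topic.items() if mean(s) < 0}

# A crude churn signal for an individual customer (possible, though the
# bigger wins usually come from the aggregate view above)
def churn_risk(customer: str) -> bool:
    scores = [i["sentiment"] for i in interactions if i["customer"] == customer]
    return bool(scores) and mean(scores) < -0.3

print("Problem topics:", problem_topics)
print("Customer A at churn risk:", churn_risk("A"))
```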

Agentic AI

Agentic AI takes customer interactions to a new level. As a customer interacts with AI-fueled self-service support, the system can do more than give customers information and analyze the interaction. It can also take appropriate action. This is a huge opportunity to make it easier on the workforce as AI processes action items that employees might otherwise handle manually. Think about the dollars saved (part of the $860 billion) by having AI support part of the process so people don’t have to.

Customer Loyalty is at Risk

To wrap this up, Zdatny and I talked about the concept of customer loyalty and how vulnerable companies are to losing their most loyal customers. According to Zdatny, a key reason is the number of options available to consumers. (While there may be fewer options in the B2B world, the concern should still be the same.) Switching brands is easy, and customers are more finicky than ever. Our CX research finds that typical customers give you a second chance before they switch. A loyal customer will give you a third chance — but to put it in baseball terms, “Three strikes and you’re out!” Manage the experience right the first time, and keep in mind that whatever interaction you’re having at that moment is the reason customers will come back—or not—to buy whatever you sell.

Image Credits: Pexels

This article was originally published on Forbes.com


The Agentic Browser Wars Have Begun

LAST UPDATED: October 22, 2025 at 9:11AM


GUEST POST from Art Inteligencia

Before he headed out of town to Nashville for Customer Contact Week (CCW), I managed to catch the ear of Braden Kelley (follow him on LinkedIn) to discuss the news that OpenAI is launching its own “agentic” web browser, something that neither of us saw coming given OpenAI’s multi-billion-dollar partnership with Microsoft on Copilot. He had some interesting perspectives to share that prompted me to explore the future of the web browser. I hope you enjoy this article (and its embedded videos) on the growing integration of AI into our browsing experiences!

For decades, the web browser has been our window to the digital world — a passive tool that simply displays information. We, the users, have been the active agents, navigating tabs, clicking links, and manually synthesizing data. But a profound shift is underway. The era of the “Agentic Browser” is dawning, and with it, a new battle for the soul of our digital experience. This isn’t just about faster rendering or new privacy features; it’s about embedding proactive, intelligent agents directly into the browser to fundamentally change how we interact with the internet. As a human-centered change and innovation thought leader, I see this as the most significant evolution of the browser since its inception, with massive implications for productivity, information access, and ultimately, our relationship with technology. The Browser Wars 2.0 aren’t about standards; they’re about autonomy.

The core promise of the Agentic Browser is to move from a pull model (we pull information) to a push model (intelligence pushes relevant actions and insights to us). These AI agents, integrated into the browser’s fabric, can observe our intent, learn our preferences, and execute complex, multi-step tasks across websites autonomously. Imagine a browser that doesn’t just show you flight prices, but books your ideal trip, handling preferences, loyalty points, and calendar integration. This isn’t futuristic fantasy; it’s the new battleground, and the titans of tech are already drawing their lines, vying for control over our digital workflow and attention economy.

The Shift: From Passive Viewer to Active Partner

The Agentic Browser represents a paradigm leap. Traditional browsers operate at the rendering layer; Agentic Browsers will operate at the intent layer. They understand why you are on a page, what you are trying to achieve, and can proactively take steps to help you. This requires:

  • Deep Contextual Understanding: Beyond keywords, the agent understands the semantic meaning of pages and user queries, across tabs and sessions.
  • Multi-Step Task Execution: The ability to automate a sequence of actions across different domains (e.g., finding information on one site, comparing on another, completing a form on a third). This is the leap from macro automation to intelligent workflow orchestration, as sketched after this list.
  • Personalized Learning: Agents learn from user feedback and preferences, refining their autonomy and effectiveness over time, making them truly personal co-pilots.
  • Ethical and Safety Guardrails: Crucially, these agents must operate with transparent consent, robust safeguards, and clear audit trails to prevent misuse or unintended consequences. This builds the foundational trust architecture.
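Here is a minimal sketch of what such an intent-layer loop could look like, with the consent and audit guardrails built in. Every name in it (Step, plan_steps, ask_user_approval, execute_step) is a hypothetical placeholder for illustration, not any browser vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    requires_consent: bool = False

def plan_steps(user_request: str) -> list:
    """Stub planner: a real agent would derive these steps from the user's intent."""
    return [
        Step("Search three retailers for noise-canceling headphones"),
        Step("Compare prices and reviews across the open tabs"),
        Step("Add the best match to the cart", requires_consent=True),
    ]

def ask_user_approval(description: str) -> bool:
    """Guardrail: sensitive actions require explicit, transparent consent."""
    answer = input(f"Allow the agent to: {description}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_step(step: Step) -> str:
    """Stub executor: a real agentic browser would drive pages and forms here."""
    return f"completed: {step.description}"

def run_agentic_task(user_request: str) -> list:
    audit_log = []  # clear audit trail of everything done on the user's behalf
    for step in plan_steps(user_request):
        if step.requires_consent and not ask_user_approval(step.description):
            audit_log.append(("skipped", step.description))
            continue
        audit_log.append(("done", execute_step(step)))
    return audit_log

print(run_agentic_task("Find me noise-canceling headphones under $300"))
```

The design point is that autonomy and control are not opposites: the plan, the consent prompt and the audit log are what make delegation trustworthy.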

“The Agentic Browser isn’t just a smarter window; it’s an intelligent co-pilot, transforming the internet from a library into a laboratory where your intentions are actively fulfilled. This is where competitive advantage will be forged.” — Braden Kelley


Case Study 1: OpenAI’s Atlas Browser – A New Frontier, Redefining the Default

The Anticipated Innovation:

While still emerging, OpenAI’s foray into the browser space with ‘Atlas’ (a rumored codename that became real) is reported to aim at redefining web interaction. Unlike existing browsers that integrate AI as an add-on, Atlas is expected to have generative AI and autonomous agents at its core. This isn’t just a chatbot in your browser; it’s the browser itself becoming an agent, fundamentally challenging the definition of a web session.

The Agentic Vision:

Atlas could seamlessly perform tasks like:

  • Dynamic Information Synthesis: Instead of listing search results, it could directly answer complex questions by browsing, synthesizing, and summarizing information across multiple sources, presenting a coherent answer — effectively replacing the manual search-and-sift paradigm.
  • Automated Research & Comparison: A user asking “What’s the best noise-canceling headphone for long flights under $300?” wouldn’t get links; they’d get a concise report, comparative table, and perhaps even a personalized recommendation based on their past purchase history and stated preferences, dramatically reducing decision fatigue.
  • Proactive Task Completion: If you’re on a travel site, Atlas might identify your upcoming calendar event and proactively suggest hotels near your conference location, or even manage the booking process with minimal input, turning intent into seamless execution.



The Implications for the Wars:

If successful, Atlas could significantly reduce the cognitive load of web interaction, making information access more efficient and task completion more automated. It pushes the boundaries of how much the browser knows and does on your behalf, potentially challenging the existing search, content consumption, and even advertising models that underpin the current internet economy. This represents a bold, ground-up approach to seizing the future of internet interaction.


Case Study 2: Google Gemini and Chrome – The Incumbent’s Agentic Play

The Incumbent’s Response:

Google, with its dominant Chrome browser and powerful Gemini AI model, is uniquely positioned to integrate agentic capabilities. Their strategy seems to be more iterative, building AI into existing products rather than launching a completely new browser from scratch (though they could). This is a play for ecosystem lock-in and leveraging existing market share.

Current and Emerging Agentic Features:

Google’s approach is visible through features like:

  • Gemini in Workspace Integration: Already, Gemini can draft emails, summarize documents, and generate content within Google Workspace. Extending this capability directly into Chrome means the browser could understand a tab’s content and offer to summarize it, extract key data, or generate follow-up actions (e.g., “Draft an email to this vendor summarizing their pricing proposal”), transforming Chrome into an active productivity hub.
  • Enhanced Shopping & Productivity: Chrome’s existing shopping features, when supercharged with Gemini, could become truly agentic. Imagine asking the browser, “Find me a pair of running shoes like these, but with better arch support, on sale.” Gemini could then browse multiple retailers, apply filters, compare reviews, and present tailored options, potentially even initiating a purchase, fundamentally reshaping e-commerce pathways.
  • Contextual Browsing Assistants: Future iterations could see Gemini acting as a dynamic tutor or research assistant. On a complex technical page, it might offer to explain jargon, find related academic papers, or even help you debug code snippets you’re viewing in a web IDE, creating a personalized learning environment.



The Implications for the Wars:

Google’s strategy is about leveraging its vast ecosystem and existing user base. By making Chrome an agentic hub for Gemini, they can offer seamless, context-aware assistance across search, content consumption, and productivity. The challenge will be balancing powerful automation with user control and data privacy — a tightrope walk for any company dealing with such immense data, and a key battleground for user trust and regulatory scrutiny. Other players like Microsoft (Copilot in Edge) are making similar moves, indicating a clear direction for the entire browser market and intensifying the competitive pressure.


Case Study 3: Microsoft Edge and Copilot – An Incumbent’s Agentic Strategy

The Incumbent’s Response:

Microsoft is not merely a spectator in the nascent Agentic Browser Wars; it’s a significant player, leveraging its robust Copilot AI and the omnipresence of its Edge browser. Their strategy centers on deeply integrating generative AI into the browsing experience, transforming Edge from a content viewer into a dynamic, proactive assistant.



A prime example of this is the “Ask Copilot” feature directly embedded into Edge’s address bar. This isn’t just a search box; it’s an intelligent entry point where users can pose complex queries, ask for summaries of the page they’re currently viewing, compare products from different tabs, or even generate content based on their browsing context. By making Copilot instantly accessible and context-aware, Microsoft aims to make Edge the default browser for intelligent assistance, enabling users to move beyond manual navigation and towards seamless, AI-driven task completion and information synthesis without ever leaving their browser.


The Human-Centered Imperative: Control, Trust, and the Future of Work

As these Agentic Browsers evolve, the human-centered imperative is paramount. We must ensure that users retain control, understand how their data is being used, and can trust the agents acting on their behalf. The future of the internet isn’t just about more intelligence; it’s about more empowered human intelligence. The browser wars of the past were about speed and features. The Agentic Browser Wars will be fought on the battleground of trust, utility, and seamless human-AI collaboration, fundamentally altering our digital workflows and requiring us to adapt.

For businesses, this means rethinking your digital presence: How will your website interact with agents? Are your services agent-friendly? For individuals, it means cultivating a new level of digital literacy: understanding how to delegate tasks, verify agent output, and guard your privacy in an increasingly autonomous online world. The passive web is dead. Long live the agentic web. The question is, are you ready to engage in the fight for its future?

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Gemini


How Tangible AI Artifacts Accelerate Learning and Alignment

Seeing the Invisible

By Douglas Ferguson, Founder & CEO of Voltage Control
Originally inspired by “A Lantern in the Fog” on Voltage Control, where teams learn to elevate their ways of working through facilitation mastery and AI-enabled collaboration.

Innovation isn’t just about generating ideas — it’s about testing assumptions before they quietly derail your progress. The faster a team can get something tangible in front of real eyes and minds, the faster they can learn what works, what doesn’t, and why.

Yet many teams stay stuck in abstraction for too long. They debate concepts before they draft them, reason about hypotheses before they visualize them, and lose energy to endless interpretation loops. That’s where AI, when applied strategically, becomes a powerful ally in human-centered innovation — not as a shortcut, but as a clarifier.

How Tangible AI Artifacts Accelerate Learning and Alignment

At Voltage Control, we’ve been experimenting with a practice we call AI Teaming — bringing AI into the collaborative process as a visible, participatory teammate. Using new features in Miro, like AI Flows and Sidekicks, we’re able to layer prompts in sequence so that teams move from research to prototypes in minutes. We call this approach Instant Prototyping — because the prototype isn’t the end goal. It’s the beginning of the real conversation.


Tangibility Fuels Alignment

In human-centered design, the first artifact is often the first alignment. When a team sees a draft — even one that’s flawed — it changes how they think and talk. Suddenly, discussions move from “what if” to “what now.” That’s the tangible magic: the moment ambiguity becomes visible enough to react to.

AI can now accelerate that moment. With one-click flows in Miro, facilitators can generate structured artifacts — such as user flows, screen requirements, or product briefs — based on real research inputs. The output isn’t meant to be perfect; it’s meant to be provocative. A flawed draft surfaces hidden assumptions faster than another round of theorizing ever could.

Each iteration reveals new learning: the missing user story, the poorly defined need, the contradiction in the strategy. These insights aren’t AI’s achievement — they’re the team’s. The AI simply provides a lantern, lighting up the fog so humans can decide where to go next.


Layering Prompts for Better Hypothesis Testing

One of the most powerful aspects of Miro’s new AI Flows is the ability to layer prompts in connected sequences. Instead of a single one-off query, you create a chain of generative steps that build on each other. For example:

  1. Synthesize research into user insights.
  2. Translate insights into “How Might We” statements.
  3. Generate user flows based on selected opportunities.
  4. Draft prototype screens or feature lists.

Each layer of the flow uses the prior outputs as inputs — so when you adjust one, the rest evolves. Change a research insight or tweak your “How Might We” framing, and within seconds, your entire prototype ecosystem updates. It’s an elegant way to make hypothesis testing iterative, dynamic, and evidence-driven.
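That chained structure can be sketched generically in a few lines of Python. The generate function below stands in for whatever model call an AI Flow makes under the hood; it and the prompt wording are assumptions made for illustration, not Miro’s actual API.

```python
def generate(prompt: str, context: str) -> str:
    """Placeholder for a model call; in Miro this happens inside an AI Flow."""
    return f"[model output for: {prompt} | given {len(context)} chars of context]"

def instant_prototyping_flow(research_notes: str) -> dict:
    """Each layer feeds the next, so editing an upstream step regenerates
    everything downstream."""
    insights = generate("Synthesize this research into user insights.", research_notes)
    hmws = generate("Translate these insights into 'How Might We' statements.", insights)
    flows = generate("Generate user flows for the selected opportunities.", hmws)
    screens = generate("Draft prototype screens or a feature list.", flows)
    return {"insights": insights, "hmws": hmws, "flows": flows, "screens": screens}

# Adjust the research input (or any intermediate prompt) and simply re-run:
artifacts = instant_prototyping_flow("Facilitators dread breakout-room logistics...")
for name, text in artifacts.items():
    print(name, "->", text)
```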

Seeing the Invisible

In traditional innovation cycles, these transitions can take weeks of hand-offs. With AI flows, they happen in minutes — creating immediate feedback loops that invite teams to think in public and react in real time.

(You can see this process in action in the video embedded below — where we walk through how small prompt adjustments yield dramatically different outputs.)


The Human Element: Facilitating Sensemaking

The irony of AI-assisted innovation is that the faster machines generate, the more valuable human facilitation becomes. Instant prototypes don’t replace discussion — they accelerate it. They make reflection, critique, and sensemaking more productive because there’s something concrete to reference.

Facilitators play a critical role here. Their job is to:

  • Name the decision up front: “By the end of this session, we’ll have a directionally correct concept we’re ready to test.”
  • Guide feedback: Ask, “What’s useful? What’s missing? What will we try next?”
  • Anchor evidence: Trace changes to specific research insights so teams stay grounded.
  • Enable iteration: Encourage re-running the flow after prompt updates to test the effect of new assumptions.

Through this rhythm of generation, reflection, and adjustment, AI becomes a conversation catalyst — not a black box. And the process stays deeply human-centered because it focuses on learning through doing.


Case in Point: Building “Breakout Buddy”

We recently used this exact approach to prototype a new tool called Breakout Buddy — a Zoom app designed to make virtual breakout rooms easier for facilitators. The problem was well-known in our community: facilitators love the connection of small-group moments but dread the logistics. No drag-and-drop, no dynamic reassignment, no simple timers.

Using our Instant Prototyping flow, we gathered real facilitator pain points, synthesized insights, and created an initial app concept in under two hours. The first draft had errors — it misunderstood terms like “preformatted” and missed saving room configurations — but that’s precisely what made it valuable. Those gaps surfaced the assumptions we hadn’t yet defined.

After two quick iterations, we had a working prototype detailed enough for a designer to polish. Within days, we had a testable artifact, a story grounded in user evidence, and a clear set of next steps. The magic wasn’t in the speed — it was in how visible our thinking became.


Designing for Evidence, Not Perfection

If innovation is about learning, then prototypes are your hypotheses made tangible. AI just helps you create more of them — faster — so you can test, compare, and evolve. But the real discipline lies in how you use them.

  • Don’t rush past the drafts. Study what’s wrong and why.
  • Don’t hide your versions. Keep early artifacts visible to trace the evolution.
  • Don’t over-polish. Each iteration should teach, not impress.

When teams treat AI outputs as living evidence rather than final answers, they stay in the human-centered loop — grounded in empathy, focused on context, and oriented toward shared understanding.


A Lantern in the Fog

At Voltage Control, we see AI not as a replacement for creative process, but as a lantern in the fog — illuminating just enough of the path for teams to take their next confident step. Whether you’re redesigning a product, reimagining a service, or exploring cultural transformation, the goal isn’t to hand creativity over to AI. It’s to use AI to make your learning visible faster.

Because once the team can see it, they can improve it. And that’s where innovation truly begins.


🎥 Watch the Demo: How layered AI prompts accelerate hypothesis testing in Miro

Join the waitlist to get your hands on the Instant Prototyping template

Image Credit: Douglas Ferguson, Unsplash


Innovation or Not – Chemical-Free Farming with Autonomous Robots

Greenfield Robotics and the Human-Centered Reboot of Agriculture

LAST UPDATED: October 20, 2025 at 9:35PM

GUEST POST from Art Inteligencia

The operating system of modern agriculture is failing. We’ve optimized for yield at the cost of health—human health, soil health, and planetary health. The relentless pursuit of chemical solutions has led to an inevitable biological counter-strike: herbicide-resistant superweeds and a spiraling input cost crisis. We’ve hit the wall of chemical dependency, and the system is demanding a reboot.

This is where the story of Greenfield Robotics — a quiet, powerful disruption born out of a personal tragedy and a regenerative ethos—begins to rewrite the agricultural playbook. Founded by third-generation farmer Clint Brauer, their mission isn’t just to sell a better tool; it’s to eliminate chemicals from our food supply entirely. This is the essence of true, human-centered innovation: identifying a catastrophic systemic failure and providing an elegantly simple, autonomous solution.

The Geometry of Disruption: From Spray to Scalpel

For decades, weed control has been a brute-force exercise. Farmers apply massive spray rigs, blanketing fields with chemicals to kill the unwanted. This approach is inefficient, environmentally harmful, and, critically, losing the biological war.

Greenfield Robotics flips this model from a chemical mass application to a mechanical, autonomous precision action. Their fleet of small, AI-powered robots—the “Weedbots” or BOTONY fleet—are less like tractors and more like sophisticated surgical instruments. They are autonomous, modular, and relentless.

Imagine a swarm of yellow, battery-powered devices, roughly two feet wide, moving through vast crop rows 18 hours a day, day or night. This isn’t mere automation; it’s coordinated, intelligent fleet management. Using proprietary AI-powered machine vision, the bots navigate with centimeter accuracy, identifying the crop from the weed. Their primary weapon is not a toxic spray, but a spinning blade that mechanically scalps the ground, severing the weed right at the root, ensuring chemical-free eradication.

This seemingly simple mechanical action represents a quantum leap in agricultural efficiency. By replacing chemical inputs with a service-based autonomous fleet, Greenfield solves three concurrent crises:

  • Biological Resistance: Superweeds cannot develop resistance to being physically cut down.
  • Environmental Impact: Zero herbicide use means zero chemical runoff, protecting water systems and beneficial insects.
  • Operational Efficiency: The fleet runs continuously and autonomously (up to 1.6 meters per second), drastically increasing the speed of action during critical growth windows and reducing the reliance on increasingly scarce farm labor.

The initial success is staggering. Working across broadacre crops like soybeans, cotton, and sweet corn, farmers are reporting higher yields and lower costs comparable to, or even better than, traditional chemical methods. The economic pitch is the first step, but the deeper change is the regenerative opportunity it unlocks.

The Human-Centered Harvest: Regenerative Agriculture at Scale

As an innovation leader, I look for technologies that don’t just optimize a process, but fundamentally elevate the human condition around that process. Greenfield Robotics is a powerful example of this.

The human-centered core of this innovation is twofold: the farmer and the consumer.

For the farmer, this technology is an act of empowerment. It removes the existential dread of mounting input costs and the stress of battling resistant weeds with diminishing returns. More poignantly, it addresses the long-term health concerns associated with chemical exposure—a mission deeply personal to Brauer, whose father’s Parkinson’s diagnosis fueled the company’s genesis. This is a profound shift: A technology designed to protect the very people who feed the world.

Furthermore, the modular chassis of the Weedbot is the foundation for an entirely new Agri-Ecosystem Platform. The robot is not limited to cutting weeds. It can be equipped to:

  • Plant cover crops in-season.
  • Apply targeted nutrients, like sea kelp, with surgical precision.
  • Act as a mobile sensor platform, collecting data on crop nutrient deficiencies to guide farmer decision-making.

This capability transforms the farmer’s role from a chemical applicator to a regenerative data strategist. The focus shifts from fighting nature to working with it, utilizing practices that build soil health—reduced tillage, increased biodiversity, and water retention. The human element moves up the value chain, focused on strategic field management powered by real-time autonomous data, while the robot handles the tireless, repeatable, physical labor.

For the consumer, the benefit is clear: chemical-free food at scale. The investment from supply chain giants like Chipotle, through their Cultivate Next venture fund, is a validation of this consumer-driven imperative. They understand that meeting the demand for cleaner, healthier food requires a fundamental, scalable change in production methods. Greenfield provides the industrialized backbone for regenerative, herbicide-free farming—moving this practice from niche to normalized.

Beyond the Bot: A Mindset for Tomorrow’s Food System

The challenge for Greenfield Robotics, and any truly disruptive innovator, is not the technology itself, but the organizational and cultural change required for mass adoption. We are talking about replacing a half-century-old paradigm of chemical dependency with an autonomous, mechanical model. This requires more than just selling a machine; it requires cultivating a Mindset Shift in the farming community.

The company’s initial “Robotics as a Service” model was a brilliant, human-centered strategy for adoption. By deploying, operating, and maintaining the fleets themselves for a per-acre fee, they lowered the financial and technical risk for farmers. This reduced-friction introduction proves that the best innovation is often wrapped in the most accessible business model. As the technology matures, transitioning toward a purchase/lease model shows the market confidence and maturity necessary for exponential growth.

Greenfield Robotics is more than a promising startup; it is a signal. It tells us that the future of food is autonomous, chemical-free, and profoundly human-centered. The next chapter of agriculture will be written not with larger, more powerful tractors and sprayers, but with smaller, smarter, and more numerous robots that quietly tend the soil, remove the toxins, and enable the regenerative practices necessary for a sustainable, profitable future.

This autonomous awakening is our chance to heal the rift between technology and nature, and in doing so, secure a healthier, cleaner food supply for the next generation. The future of farming is not just about growing food; it’s about growing change.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Greenfield Robotics







The Nuclear Fusion Accelerator

How AI is Commercializing Limitless Power


GUEST POST from Art Inteligencia

For decades, nuclear fusion — the process that powers the sun and promises clean, virtually limitless energy from basic elements like hydrogen — has been the “holy grail” of power generation. The famous joke has always been that fusion is “30 years away.” However, as a human-centered change and innovation thought leader, I can tell you that we are no longer waiting for a scientific miracle; we are waiting for an engineering and commercial breakthrough. And the key catalyst accelerating us across the finish line isn’t a new coil design or a stronger laser. It is Artificial Intelligence.

The journey to commercial fusion involves taming plasma — a superheated, unstable state of matter hotter than the sun’s core — for sustained periods. This process is characterized by extraordinary complexity, high costs, and a constant, data-intensive search for optimal control parameters. AI is fundamentally changing the innovation equation by replacing the slow, iterative process of trial-and-error experimentation with rapid, predictive optimization. Fusion experiments generate petabytes of diagnostic data; AI serves as the missing cognitive layer, enabling physicists and engineers to solve problems in days that once took months or even years of physical testing. AI isn’t just a tool; it is the accelerator that is finally making fusion a question of when, not if, and critically, at a commercially viable price point.

AI’s Core Impact: From Simulation to Scalability

AI accelerates commercialization by directly addressing fusion’s three biggest engineering hurdles, all of which directly affect capital expenditure and time-to-market:

  • 1. Real-Time Plasma Control & Digital Twins: Fusion plasma is highly turbulent and prone to disruptive instabilities. Reinforcement Learning (RL) models and Digital Twins — virtual, real-time replicas of the reactor — learn optimal control strategies. This allows fusion machines to maintain plasma confinement and temperature far more stably, which is essential for continuous, reliable power production.
  • 2. Accelerating Materials Discovery: The extreme environment within a fusion reactor destroys conventional materials. AI, particularly Machine Learning (ML), is used to screen vast material databases and even design novel, radiation-resistant alloys faster than traditional metallurgy, shrinking the time-to-discovery from years to weeks. This cuts R&D costs and delays significantly (a minimal screening sketch follows this list).
  • 3. Design and Manufacturing Optimization: Designing the physical components is immensely complex. AI uses surrogate models — fast-running, ML-trained replicas of expensive high-fidelity physics codes — to quickly test thousands of design iterations. Furthermore, AI is being used to optimize manufacturing processes like the winding of complex high-temperature superconducting magnets, ensuring precision and reducing production costs.
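
To make the materials-screening idea a little more concrete, here is a minimal sketch of how ML-driven screening can work in principle: a regression model is fit to a small (and here entirely synthetic) table of alloy compositions with measured radiation-tolerance scores, and is then used to rank thousands of untested candidates in seconds. The features, property values, and model choice are illustrative assumptions, not a description of any specific fusion program's workflow.

```python
# Minimal sketch: ML-based screening of candidate alloy compositions.
# All feature names, property values, and candidates are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: four composition fractions plus a processing
# temperature, paired with a measured "radiation tolerance" score.
X_train = rng.random((200, 5))
y_train = (
    0.6 * X_train[:, 1]          # pretend the second element helps tolerance
    - 0.4 * X_train[:, 2]        # pretend the third element hurts it
    + 0.1 * rng.standard_normal(200)
)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Screen a large batch of untested candidate compositions in one shot.
candidates = rng.random((10_000, 5))
scores = model.predict(candidates)

# Rank and keep the top few for (hypothetical) synthesis and irradiation testing.
top = np.argsort(scores)[::-1][:5]
for rank, idx in enumerate(top, start=1):
    print(f"#{rank}: candidate {idx}, predicted tolerance {scores[idx]:.3f}")
```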

“AI is the quantum leap in speed, turning the decades-long process of fusion R&D into a multi-year sprint towards commercial viability.” — Dr. Michl Binderbauer, CEO of TAE Technologies


Case Study 1: The Predict-First Approach to Plasma Turbulence

The Challenge:

A major barrier to net-positive energy is plasma turbulence, the chaotic, swirling structures inside the reactor that cause heat to leak out, dramatically reducing efficiency. Traditionally, understanding this turbulence required running extremely time-intensive, high-fidelity computer codes for weeks on supercomputers to simulate one set of conditions.

The AI Solution:

Researchers at institutions like MIT and others have successfully utilized machine learning to build surrogate models. These models are trained on the output of the complex, weeks-long simulations. Once trained, the surrogate can predict the performance and turbulence levels of a given plasma configuration in milliseconds. This “predict-first” approach allows engineers to explore thousands of potential operating scenarios and refine the reactor’s control parameters efficiently, a process that would have been physically impossible just a few years ago.
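
As a rough illustration of the idea, the sketch below stands in a cheap analytic function for the weeks-long high-fidelity code, trains a Gaussian process surrogate on a few dozen "simulation" runs, and then scores thousands of candidate operating points in milliseconds. The function, parameters, and model choice are simplifying assumptions made purely for illustration.

```python
# Minimal "predict-first" sketch: a fast surrogate trained on the outputs of an
# expensive simulation. expensive_turbulence_sim below is a stand-in for a
# high-fidelity physics code, not a real plasma model.
import time
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_turbulence_sim(params):
    """Placeholder for a weeks-long simulation: params -> heat-loss metric."""
    density, temperature, field = params
    return np.sin(3 * density) * np.exp(-temperature) + 0.2 * field ** 2

# A modest number of "full simulation" runs we can afford.
rng = np.random.default_rng(1)
X_train = rng.random((40, 3))
y_train = np.array([expensive_turbulence_sim(p) for p in X_train])

surrogate = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
surrogate.fit(X_train, y_train)

# Now sweep thousands of candidate operating points almost instantly.
X_candidates = rng.random((5000, 3))
t0 = time.perf_counter()
pred, std = surrogate.predict(X_candidates, return_std=True)
elapsed_ms = (time.perf_counter() - t0) * 1000

best = np.argmin(pred)  # lowest predicted heat loss
print(f"Scanned {len(X_candidates)} configurations in {elapsed_ms:.1f} ms")
print(f"Best candidate: {X_candidates[best]}, predicted loss {pred[best]:.3f} +/- {std[best]:.3f}")
```

In a real workflow the training data would come from archived runs of the actual physics code, and the surrogate's uncertainty estimate (the standard deviation returned above) would tell engineers when a configuration is novel enough to justify falling back to the full simulation.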

The Commercial Impact:

This application of AI dramatically reduces the design cycle time. By rapidly optimizing plasma behavior through simulation, engineers can confirm promising configurations before they ever build a new physical machine, translating directly into lower capital costs, reduced reliance on expensive physical prototypes, and a faster path to commercial-scale deployment.


Case Study 2: Real-Time Stabilization in Commercial Reactor Prototypes

The Challenge:

Modern magnetic confinement fusion devices require precise, continuous adjustment of complex magnetic fields to hold the volatile plasma in place. Slight shifts can lead to a plasma disruption — a sudden, catastrophic event that can damage reactor walls and halt operations. Traditional feedback loops are often too slow and rely on simple, linear control rules.

The AI Solution:

Private companies and large public projects (like ITER) are deploying Reinforcement Learning controllers. These AI systems are given a reward function (e.g., maintaining maximum plasma temperature and density) and train themselves across millions of virtual experiments to operate the magnetic ‘knobs’ (actuators) in optimal, often non-intuitive ways. The result is an AI controller that can detect an instability milliseconds before a human or conventional system can, and execute complex corrective maneuvers in real time to mitigate or avoid disruptions entirely.
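
The toy sketch below conveys the flavor of the approach under deliberately simplified assumptions: a one-dimensional "plasma" that drifts away from center unless a coil-current action pushes it back, a reward that penalizes displacement and disruption, and tabular Q-learning standing in for the far more sophisticated controllers used on real devices.

```python
# Toy sketch of a reinforcement-learning plasma controller. The "plasma" is a
# 1-D unstable point; this is tabular Q-learning on a toy model, not the
# controllers used on real machines.
import numpy as np

rng = np.random.default_rng(2)
DT, DRIFT = 0.05, 1.5
ACTIONS = np.array([-1.0, 0.0, 1.0])   # coil-current adjustments (arbitrary units)
X_BINS = np.linspace(-1.0, 1.0, 13)    # discretized displacement
V_BINS = np.linspace(-2.0, 2.0, 13)    # discretized velocity

def step(x, v, a):
    """Semi-implicit Euler step of the unstable toy dynamics."""
    v = v + DT * (DRIFT * x + a)
    x = x + DT * v
    disrupted = abs(x) > 1.0            # "disruption": plasma hits the wall
    reward = -x * x - (10.0 if disrupted else 0.0)
    return x, v, reward, disrupted

def state(x, v):
    return np.digitize(x, X_BINS), np.digitize(v, V_BINS)

Q = np.zeros((len(X_BINS) + 1, len(V_BINS) + 1, len(ACTIONS)))
alpha, gamma, eps = 0.2, 0.97, 0.1

for _ in range(3000):                   # "virtual experiments"
    x, v = rng.uniform(-0.2, 0.2), 0.0
    for _ in range(200):
        sx, sv = state(x, v)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[sx, sv]))
        x, v, r, done = step(x, v, ACTIONS[a])
        nx, nv = state(x, v)
        Q[sx, sv, a] += alpha * (r + gamma * np.max(Q[nx, nv]) - Q[sx, sv, a])
        if done:
            break

# Greedy rollout: how long does the learned policy keep the toy plasma confined?
x, v, held = 0.15, 0.0, 0
for _ in range(2000):
    sx, sv = state(x, v)
    x, v, _, done = step(x, v, ACTIONS[int(np.argmax(Q[sx, sv]))])
    held += 1
    if done:
        break
print(f"Learned controller kept the toy plasma confined for {held} steps")
```

Real controllers learn in high-fidelity simulators or digital twins and drive many actuators simultaneously; the point of the toy is simply that reward-driven trial and error, rather than hand-tuned linear rules, discovers the control policy.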

The Commercial Impact:

This shift from reactive to proactive control is critical for commercial viability. A commercial fusion plant needs to operate continuously and reliably to make its levelized cost of electricity competitive. By using AI to prevent costly equipment damage and extend plasma burn duration, the technology becomes more reliable, safer, and ultimately more financially attractive as a baseload power source.


The New Fusion Landscape: Companies to Watch

The private sector, recognizing the accelerating potential of AI, is now dominating the race, backed by billions in private capital. Companies like Commonwealth Fusion Systems (CFS), a spin-out from MIT, are leveraging AI-optimized high-temperature superconducting magnets to shrink the tokamak design to a commercially viable size. Helion Energy, which famously signed the first power purchase agreement with Microsoft, uses machine learning to control their pulsed Magneto-Inertial Fusion systems with unprecedented precision to achieve high plasma temperatures. TAE Technologies applies advanced computing to its field-reversed configuration approach, optimizing its non-radioactive fuel cycle. Other startups like Zap Energy and Tokamak Energy are also deeply integrating AI into their core control and design strategies. The partnership between these agile startups and large compute providers (like AWS and Google) highlights that fusion is now an information problem as much as a physics one.

The Human-Centered Future of Energy

AI is not just optimizing the physics; it is optimizing the human innovation cycle. By automating the data-heavy, iterative work, AI frees up the world’s best physicists and engineers to focus on the truly novel, high-risk breakthroughs that only human intuition can provide. When fusion is commercialized — a time frame that has shrunk from decades to perhaps the next five to ten years — it will not just be a clean energy source; it will be a human-centered energy source. It promises energy independence, grid resiliency, and the ability to meet the soaring demands of a globally connected, AI-driven digital economy without contributing to climate change. The fusion story is rapidly becoming the ultimate story of human innovation, powered by intelligence, both artificial and natural.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini







The Ongoing Innovation War Between Hackers and Cybersecurity Firms

Last Updated: October 15, 2025 at 8:36PM PDT

The Ongoing Innovation War Between Hackers and Cybersecurity Firms

GUEST POST from Art Inteligencia

In the world of change and innovation, we often celebrate disruptive breakthroughs — the new product, the elegant service, the streamlined process. But there is a parallel, constant, and far more existential conflict that drives more immediate innovation than any market force: the Innovation War between cyber defenders and adversaries. This conflict isn’t just a cat-and-mouse game; it is a Vicious Cycle of Creative Destruction where every defensive breakthrough creates a target for a new offensive tactic, and every successful hack mandates a fundamental reinvention of the defense at firms like F5 and CrowdStrike. As a human-centered change leader, I find this battleground crucial because its friction dictates the speed of digital progress and, more importantly, the erosion or restoration of citizen and customer trust.

We’ve moved past the era of simple financial hacks. Today’s sophisticated adversaries — nation-states, organized crime syndicates, and activist groups — target the supply chain of trust itself. Their strategies are now turbocharged by Generative AI, allowing for the automated creation of zero-day exploits and hyper-realistic phishing campaigns, fundamentally accelerating the attack lifecycle. This forces cybersecurity firms to innovate in response, focusing on achieving Active Cyber Resilience — the ability to not only withstand attacks but to learn, adapt, and operate continuously even while under fire. The human cost of failure — loss of privacy, psychological distress from disruption, and decreased public faith in institutions — is the real metric of this war.

The Three Phases of Cyber Innovation

The defensive innovation cycle, driven by adversary pressure, can be broken down into three phases:

  • 1. The Breach as Discovery (The Hack): An adversary finds a zero-day vulnerability or exploits a systemic weakness. The hack itself is the ultimate proof-of-concept, revealing a blind spot that internal R&D teams failed to predict. This painful discovery is the genesis of new innovation.
  • 2. The Race to Resilience (The Fix): Cybersecurity firms immediately dedicate immense resources — often leveraging AI and automation for rapid detection and response — to patch the vulnerability, not just technically, but systematically. This results in the rapid development of new threat intelligence, monitoring tools, and architectural changes.
  • 3. The Shift in Paradigm (The Reinvention): Over time, repeated attacks exploiting similar vectors force a foundational change in design philosophy. The innovation becomes less about the patch and more about a new, more secure default state. We transition from building walls to implementing Zero Trust principles, treating every user and connection as potentially hostile (a minimal sketch of what that looks like per request follows this list).
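
As a minimal illustration of that Zero Trust posture, the sketch below evaluates every request on identity, device posture, and context rather than on network location. The fields, thresholds, and outcomes are invented for illustration and are not drawn from any particular vendor's implementation.

```python
# Minimal illustration of a Zero Trust decision point: every request is scored on
# identity, device posture, and context, never on network location alone. The
# fields and policy thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # strong (e.g., MFA-backed) identity verified
    device_compliant: bool     # patched, managed, disk-encrypted, etc.
    geo_risk: float            # 0.0 = expected location, 1.0 = highly anomalous
    resource_sensitivity: int  # 1 = public docs ... 3 = production secrets

def authorize(req: AccessRequest) -> str:
    """Return 'allow', 'step-up' (require fresh authentication), or 'deny'."""
    if not req.user_authenticated:
        return "deny"
    if req.resource_sensitivity >= 3 and not req.device_compliant:
        return "deny"        # sensitive resources never go to unmanaged devices
    if req.geo_risk > 0.7 or not req.device_compliant:
        return "step-up"     # suspicious context: demand fresh proof of identity
    return "allow"

if __name__ == "__main__":
    print(authorize(AccessRequest(True, True, 0.1, 2)))   # allow
    print(authorize(AccessRequest(True, False, 0.2, 3)))  # deny
    print(authorize(AccessRequest(True, True, 0.9, 1)))   # step-up
```

The intermediate "step-up" outcome matters: Zero Trust is less about blocking everything than about continuously re-earning trust as context changes.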

“In cybersecurity, your adversaries are your involuntary R&D partners. They expose your weakness, forcing you to innovate beyond your comfort zone and into your next generation of defense.” — Frank Hersey


Case Study 1: F5 Networks and the Supply Chain of Trust

The Attack:

F5 Networks, whose BIG-IP products are central to application delivery and security for governments and major corporations globally, was breached by a suspected nation-state actor. The attackers reportedly stole proprietary BIG-IP source code and details on undisclosed security vulnerabilities that F5 was internally tracking.

The Innovation Mandate:

This was an attack on the supply chain of security itself. The theft provides adversaries with a blueprint for crafting highly tailored, future exploits that target F5’s massive client base. The innovation challenge for F5 and the entire industry shifts from simply patching products to fundamentally rethinking their Software Development Lifecycle (SDLC). This demands a massive leap in threat intelligence integration, secure coding practices, and isolating development environments from corporate networks to prevent future compromise of the IP that protects the world.

The Broader Impact:

The F5 breach compels every organization to adopt an unprecedented level of vendor risk management. It drives innovation in how infrastructure is secured, shifting the paradigm from trusting the vendor’s product to verifying the vendor’s integrity and securing the entire delivery pipeline.


Case Study 2: Airport Public Address (PA) System Hacks

The Attack:

Hackers gained unauthorized access to the Public Address (PA) systems and Flight Information Display Screens (FIDS) at various airports (e.g., in Canada and the US). They used these systems to broadcast political and disruptive messages, causing passenger confusion, flight delays, and the immediate deployment of emergency protocols.

The Innovation Mandate:

These attacks were not financially motivated, but aimed at disruption and psychological impact — exploiting the human fear factor. The vulnerability often lay in a seemingly innocuous area: a cloud-based, third-party software provider for the PA system. The innovation mandate here is a change in architectural design philosophy. Security teams must discard the concept of “low-value” systems. They must implement micro-segmentation to isolate all operational technology (OT) and critical public-facing systems from the corporate network. Furthermore, it forces an innovation in physical-digital security convergence, requiring security protocols to manage and authenticate the content being pushed to public-facing devices, treating text-to-speech APIs with the same scrutiny as a financial transaction. The priority shifts to minimizing public disruption and maximizing operational continuity.
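
One way to picture that transaction-level scrutiny is sketched below: the back office signs every announcement, and the edge device verifies the signature and freshness before playing it. The key handling, payload fields, and freshness window are illustrative assumptions rather than a description of any airport's actual architecture.

```python
# Sketch: treat announcements pushed to PA and flight-display systems like
# transactions: signed at the source, verified at the device, rejected otherwise.
# Key handling, payload fields, and freshness window are invented for illustration.
import hmac
import hashlib
import json
import time

SHARED_KEY = b"demo-key-distributed-out-of-band"  # in practice: per-device keys in an HSM
MAX_AGE_SECONDS = 60                               # reject stale or replayed messages

def sign_announcement(text: str, gate: str) -> dict:
    payload = {"text": text, "gate": gate, "issued_at": time.time()}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_announcement(message: dict) -> bool:
    message = dict(message)
    signature = message.pop("signature", "")
    body = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    fresh = time.time() - message.get("issued_at", 0) < MAX_AGE_SECONDS
    return hmac.compare_digest(signature, expected) and fresh

if __name__ == "__main__":
    msg = sign_announcement("Flight 417 now boarding", gate="B12")
    print(verify_announcement(msg))                 # True: authentic and fresh
    msg["text"] = "Unauthorized political message"  # tampering breaks the signature
    print(verify_announcement(msg))                 # False: device refuses to play it
```

The design point is simply that content, and not just network access, carries verifiable provenance before it ever reaches a public-facing speaker or screen.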

The Broader Impact:

The PA system hack highlights the critical need for digital humility. Every connected device, from the smart thermostat to the public announcement system, is an attack vector. The innovation is moving security from the data center floor to the terminal wall, reinforcing that the human-centered goal is continuity and maintaining public trust.


Conclusion: The Innovation Imperative

The war between hackers and cybersecurity firms is relentless, but it is ultimately a net positive for innovation, albeit a brutally expensive and high-stakes one. Each successful attack provides the industry with a blueprint for a more resilient, better-designed future.

For organizational leaders, the imperative is clear: stop viewing cybersecurity as a cost center and start treating it as the foundational innovation platform. Your investment in security dictates your speed and trust in the market. Adopt the mindset of Continuous Improvement and Adaptation. Leaders must mandate a Zero Trust roadmap and treat security talent as mission-critical R&D personnel. The speed and quality of your future products will depend not just on your R&D teams, but on how quickly your security teams can learn from the enemy’s last move. In the digital economy, cyber resilience is the ultimate competitive differentiator.

Image credit: Unsplash







AI, Cognitive Obesity and Arrested Development

AI, Cognitive Obesity and Arrested Development

GUEST POST from Pete Foley

Some of the biggest questions of our age are whether AI will ultimately benefit or hurt us, and how big its effect will be.

And that of course is a problem with any big, disruptive technology.  We want to anticipate how it will play out in the real world, but our forecasts are rarely very accurate, and all too often miss a lot of the more important outcomes. We often don’t anticipate its killer applications, how it will evolve or co-evolve with other emergent technologies, or predict all of the side effects and ‘off-label’ uses that come with it.  And the bigger the potential impact new tech has, and the broader the potential applications, the harder prediction becomes.  The reality is that in virtually every case, it’s not until we set innovation free that we find its full impact, good, bad or indifferent.

Pandora’s Box

And that can of course be a sizable concern.  We have to open Pandora’s Box in order to find out what is inside, but once open, it may not be possible to close it again.   For AI, the potential scale of its impact makes this particularly risky. It also makes any meaningful regulation really difficult. We cannot regulate what we cannot accurately predict. And if we try we risk not only missing our target, but also creating unintended consequences, and distorting ‘innovation markets’ in unexpected, potentially negative ways.

So it’s not surprising there is a lot of discussion around what AI will or will not do. How will it affect jobs, the economy, security, and mental health? Will it ‘pull’ a Skynet, turn rogue and destroy humanity? Will it simply replace human critical thinking to the point where it rules us by default? Or will it ultimately fizzle out to some degree, and become a tool in a society that looks a lot like today, rather than revolutionizing it?

I don’t even begin to claim to predict the future with any accuracy, for all of the reasons mentioned above. But as a way to illustrate how complex an issue this is, I’d like to discuss a few less talked about scenarios.

1.  Less obvious issues:  Obviously AI comes with potential for enormous benefits and commensurate problems.  It’s likely to trigger an arms race between ‘good’ and ‘bad’ applications, and that will itself be a moving target.  An obvious, oft-discussed potential issue is of course the ‘Terminator Scenario’ mentioned above.  That’s not completely far-fetched, especially with recent developments in AI self-preservation and scheming that I’ll touch on later. But there are plenty of other potential, if less extreme, pitfalls, many of which involve AI amplifying and empowering bad behavior by humans.  The speed and agility AI hands to hackers, hostile governments, black-hats, terrorists and organized crime vastly enhances their capability for attacks on infrastructure, mass fraud or worse. And perhaps more concerning, there’s the potential for AI to democratize cybercrime, making it accessible to a large number of ‘petty’ criminals who until now have lacked the resources to engage in this area. And when the crime base expands, so does the victim base. Organizations or individuals who were too small to be targeted for ransomware when it took huge resources to create will presumably become more attractive targets as AI allows similar code to be built in hours by people with limited coding skills.

And all of this of course adds another regulation challenge. The last thing we want to do is slow legitimate AI development via legislation, while giving free rein to illegitimate users, who presumably will be far less likely to follow regulations. If the arms race mentioned above occurs, the last thing we want to do is unintentionally tip the advantage to the bad guys!

Social Impacts

But AI also has the potential to be disruptive in more subtle ways.  If the internet has taught us anything, it is that how the general public adopts technology, and how big tech monetizes it, matter a lot. But this is hard to predict.  Some of the internet’s biggest negative impacts have derived from largely unanticipated damage to our social fabric.  We are still wrestling with its impact on social isolation, mental health, cognitive development and our vital implicit skill-set. To the last point, simply deferring mental tasks to phones and computers means some cognitive muscles lack exercise and atrophy, while reductions in human-to-human interaction erode our emotional and social intelligence.

1. Cognitive Obesity.  The human brain evolved over tens of thousands, arguably millions, of years (depending upon where you start measuring our hominid history).  But 99% of that evolution was characterized by slow change, and occurred in the context of limited resources, limited access to information, and relatively small social groups.  Today, as the rate of technological innovation explodes, our environment is vastly different from the one our brain evolved to deal with.  And that gap between us and our environment is widening rapidly, as the world is evolving far faster than our biology.  Of course, as mentioned above, the nurture part of our cognitive development does change with changing context, so we do course-correct to some degree, but our core DNA cannot, and that has consequences.

Take the current ‘obesity epidemic’.  We evolved to leverage limited food resources, and to maximize opportunities to stock up on calories when they occurred.  But today, faced with near infinite availability of food, we struggle to control our scarcity instincts. As a society, we eat far too much, with all of the health issues that brings with it. Even when we are cognitively aware of the dangers of overeating, we find it difficult to resist our implicit instincts to gorge on more food than we need.  The analogy to information is fairly obvious. The internet brought us near infinite access to information and ‘social connections’.  We’ve already seen the negative impact this can have, contributing to societal polarization, loss of social skills, weakened emotional intelligence, isolation, mental health ‘epidemics’ and much more. It’s not hard to envisage these issues growing as AI increases the power of the internet, while also amplifying the seduction of virtual environments.  Will we therefore see a cognitive obesity epidemic as our brains simply aren’t adapted to deal with near infinite resources? Instead of AI turning us all into hyper-productive geniuses, will we simply gorge on less productive content, be it cat videos, porn or manipulative but appealing memes and misinformation? Instead of it acting as an intelligence enhancer, will it instead accelerate a dystopian Brave New World, where massive data centers gorge on our common natural resources primarily to create trivial entertainment?

2. Amplified Intelligence.  Even in the unlikely event that access to AI is entirely democratic, it’s guaranteed that its benefits will not be. Some will leverage it far more effectively than others, creating significant risk of accelerating social disparity.  While many will likely gorge unproductively as described above, others will be more disciplined, more focused, and hence secure more advantage.  To return to the obesity analogy, it’s well documented that obesity is far more prevalent in lower-income groups. It’s hard not to envisage that productive leverage of AI will follow a similar pattern, widening disparities within and between societies, with all of the issues and social instability that comes with that.

3. Arrested Development.  We all know that ultimately we are products of both nature and nurture. As mentioned earlier, our DNA evolves slowly over time, but how it is expressed in individuals is shaped by our current context.  Humans possess enormous cognitive plasticity, and can adapt and change very quickly to different environments.  It’s arguably our biggest ‘blessing’, but can also be a curse, especially when that environment is changing so quickly.

The brain is analogous to a muscle, in that the parts we exercise expand or sharpen, and the parts we don’t atrophy.    As we defer more and more tasks to AI, it’s almost certain that we’ll become less capable in those areas.  At one level, that may not matter. Being weaker at math or grammar is relatively minor if our phones can act as a surrogate, all of my personal issues with autocorrect notwithstanding.

But a bigger potential issue is the erosion of causal reasoning.  Critical thinking requires understanding of underlying mechanisms.  But when infinite information is available at a swipe of a finger, it becomes all too easy to become a ‘headline thinker’, and unconsciously fail to penetrate problems with sufficient depth.

That risks what Art Markman, a psychologist at UT, and a mentor and friend, used to call the ‘illusion of understanding’.  We may think we know how something works, but often find that knowledge is superficial, or at least incomplete, when we actually need it.   Whether it’s fixing a toilet, changing a tire, resetting a fuse, or unblocking a sink, the need to actually perform a task often reveals a lack of deep, causal knowledge.   In home improvement contexts this often doesn’t matter until it does, but at least we get a clear signal when we discover we need to rush to YouTube to fix that leaking toilet!

This has implications that go far beyond home improvement, and is one factor helping to tear our social fabric apart.   We only have to browse the internet to find people with passionate but often opposing views on a wide variety of controversial topics. It could be interest rates, Federal budgets, immigration, vaccine policy, healthcare strategy, or a dozen others. But all too often, the passion is not matched by deep causal knowledge.  In reality, these are all extremely complex topics with multiple competing and interdependent variables.  And at risk of triggering hate mail, few if any of them have easy, conclusive answers.  This is not physics, where we can plug numbers into an equation and it spits out a single, unambiguous solution.  The reality is that complex, multi-dimensional problems often have multiple, often competing partial solutions, and optimum outcomes usually require trade-offs.  Unfortunately, few of us really have the time to assimilate the expertise and causal knowledge needed for truly informed and unambiguous answers to most, if not all, of these difficult problems.

And worse, AI also helps the ‘bad guys’. It enables unscrupulous parties to manipulate us for their own benefit, via memes, selective information and misinformation that are often designed to make us think we understand complex problems far better than we really do. As we increasingly rely on input from AI, this will inevitably get worse. The internet and social media have already contributed to unprecedented social division and nefarious financial crimes.   Will AI amplify this further?

This problem is not limited to complex social challenges. The danger is that for ALL problems, the internet, and now AI, allows us to create the illusion for ourselves that we understand complex systems far more deeply than we really do.  That in turn risks us becoming less effective problem solvers and innovators. Deep causal knowledge is often critical for innovating or solving difficult problems.  But in a world where we can access answers to questions so quickly and easily, the risk is that we don’t penetrate topics as deeply. I personally recall doing literature searches before starting a project. It was often tedious, time-consuming and boring. Exactly the type of task AI is perfect for. But that tedious process inevitably built my knowledge of the space I was moving into, and often proved valuable when we hit problems later in the project. If we now defer this task to AI, even in part, we reduce our depth of understanding. And in complex systems or theoretical problem solving, we will often lack the unambiguous signal that tells us our skills and knowledge are lacking when we do something relatively simple like fixing a toilet. The more we use AI, the more we risk lacking the necessary depth of understanding, often without realizing it.

Will AI become increasingly unreliable?

We are seeing AI develop the capability to lie, together with a growing propensity to cover its tracks when it does so. The AI community calls it ‘scheming’, but in reality it’s fundamentally lying.  https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/?_bhlid=6a932f218e6ebc041edc62ebbff4f40bb73e9b14. We have known from the beginning that AI makes mistakes.  And as I discussed recently, the risks associated with that are amplified because its increasingly (super)human, oracle-like interface creates an illusion of omnipotence.

But now it appears to be increasingly developing properties that mirror self-preservation.  A few weeks ago there were reports of difficulties in getting AIs to shut themselves down, and even of AIs using defensive blackmail when so threatened. Now we are seeing reports of AIs deliberately trying to hide their mistakes.  And perhaps worse, there are concerns that attempts to fix this may simply “teach the model to become better at hiding its deceptive behavior”, or in other words, make it a better liar.

If we are already in an arms race with an entity to keep it honest, and to put our interests above its own, then given its vastly superior processing power and speed, it may be a race we’ve already lost.  That may sound ‘doomsday-like’, but that doesn’t make it any less possible. And keep in mind, many of the doomsday projections around AI focus on a ‘singularity event’ when AI suddenly becomes self-aware. That assumes AI awareness and consciousness will be similar to our own, and forces a ‘birth’ analogy onto the technology. However, recent examples of self-preservation and dishonesty may hint at a longer, more complex transition, some of which may have already started.

How big will the impact of AI be?

I think we all assume that AI’s impact will be profound. After all, it’s still in its infancy, and is already finding its way into all walks of life.  But what if we are wrong, or at least overestimating its impact?  Just to play Devil’s Advocate, we humans do have a history of over-estimating both the speed and impact of technology-driven change.

Remember the unfounded (in hindsight) panic around Y2K?  Or when I was growing up, we all thought 2025 would be full of people whizzing around using personal jet-packs.  In the ’60s and ’70s we were all pretty convinced we were facing nuclear Armageddon. One of the greatest movies of all time, 2001, co-written by inventor and futurist Arthur C. Clarke, had us voyaging to Jupiter 24 years ago!  Then there is the great horse manure crisis of 1894. At that time, London was growing rapidly, and literally becoming buried in horse manure.  The London Times predicted that in 50 years all of London would be buried under 9 feet of poop. In 1898 the first global urban planning conference could find no solution, concluding that civilization was doomed. But London, and many other cities, received salvation from an unexpected quarter. Henry Ford’s mass-produced motor car surreptitiously saved the day.  It was not a designed solution for the manure problem, and nobody saw it coming as a solution to that problem. But nonetheless, it’s yet another example of our inability to see the future in all of its glorious complexity, and of our tendency for predictions to skew towards worst-case scenarios and/or hyperbole.

Change Aversion:

That doesn’t of course mean that AI will not have a profound impact. But lots of factors could potentially slow down, or reduce, its effects.  Not least of these is human nature. Humans possess a profound resistance to change.  For sure, we are curious, and the new and innovative holds great appeal.  That curiosity is a key reason why humans now dominate virtually every ecological niche on our planet.   But we are also a bit schizophrenic, in that we love both change and stability at the same time.  Our brains have limited capacity, especially for thinking about and learning new stuff.  For a majority of our daily activities, we therefore rely on habits, rituals, and automatic behaviors to get us through without using that limited higher cognitive capacity. We can drive, or type, or do parts of our job without really thinking about it. This ‘implicit’ mental processing frees up our conscious brain to manage the new or unexpected.  But as technology like AI accelerates, a couple of things could happen.  One is that our cognitive capacity gets overloaded, and we unconsciously resist.  Instead of using the source of all human knowledge for deep self-improvement, we instead immerse ourselves in less cognitively challenging content such as social media.

Or, as mentioned earlier, we increasingly lose causal understanding of our world, and do so without realizing it.   Why use our limited thinking capacity for tasks when it is quicker, easier, and arguably more accurate to defer to an AI? But lack of causal understanding seriously inhibits critical thinking and problem solving.  As AI gets smarter, there is a real risk that we as a society become dumber, or at least less innovative and creative.

Our Predictions are Wrong.

If history teaches us anything, it is that most, if not all, of the sage and learned predictions about AI will be mostly wrong. There is no denying that it is already assimilating into virtually every area of human society: finance, healthcare, medicine, science, economics, logistics, education and more.  And it’s a snooze-and-you-lose scenario; in many fields of human endeavor, we have little choice.  Fail to embrace the upside of AI and we get left behind.

That much power in things that can think so much faster than us, that may be developing self-interest, if not self-awareness, that have no apparent moral framework, and that are in danger of becoming expert liars, is certainly quite sobering.

The Doomsday Mindset.

As suggested above, loss aversion and other biases drive us to focus on the downside of change.   It’s a bias that makes evolutionary sense, and helped keep our ancestors alive long enough to breed and become our ancestors. But remember, that bias is implicitly built into most, if not all, of our predictions.   So there’s at least a chance that AI’s impact won’t be quite as good or bad as our predictions suggest.

But I’m not sure we want to rely on that.  Maybe this time a Henry Ford won’t serendipitously rescue us from a giant pile of poop of our own making. But whatever happens, I think it’s a very good bet that we are in for some surprises, both good and bad. Probably the best way to deal with that is to not cling too tightly to our projections or our theories, to remain agile, and to follow the surprises as much as, if not more than, our met expectations.

Image credits: Unsplash
