Category Archives: Technology

The Evolution of Trapped Value in Cloud Computing

GUEST POST from Geoffrey A. Moore

Releasing trapped value drives the adoption of disruptive technology and subsequent category development. The trapped part inspires the technical innovation while the value part funds the business. As targeted trapped value gets released, the remaining value is held in place by a secondary set of traps, calling for a second generation of innovation, and a second round of businesses. This pattern continues until all the energy in the system is exhausted, and the economic priority shifts from growth to maintenance.

Take cloud computing, for example. Amazon and Salesforce were early disrupters. The trapped value in retail was consumer access anytime, anywhere. The trapped value in SaaS CRM was a corporate IT model that prioritized forecasting and reporting applications for upper management over tools for improving sales productivity in the trenches. As their models grew in success, however, they outgrew the data center operating model upon which they were based, and that created problems for both companies.

Help came from an unexpected quarter. Consumer computing, led by Google and Facebook, tackled the trapped value in the data center model by inventing the data-center-as-a-computer operation. The trapped value was in computers and network equipment that were optimized for scaling up to get more power. The new model relentlessly focused on commoditizing both, with stripped-down compute blocks and software-enabled switching, much to the consternation of the established hardware vendors, who had no easy place to retreat to.

Their situation was further exacerbated by the rise of hyperscaler compute vendors who offered to outsource the entire enterprise footprint. But as they did, the value trap moved again, and this time it was the hyperscaler pricing model that was holding things back, particularly when switching costs were high. That has given rise to a hybrid architecture that is, at present, muddling its way through to a moderating norm. Here companies like Equinix and Digital Realty are helping enterprises combine approaches to find their optimal balance.

As this norm takes over more and more of the playing field, we may approach an asymptote of releasable trapped value at the computing layer. If so, that just means it will migrate elsewhere—in this case, up the stack. We are already seeing this in at least three areas of hypergrowth today:

  1. Cybersecurity, where the trapped value is in patching together component subsystems to address ongoing exposure to catastrophic risk.
  2. Content generation, where the trapped value is in time to market, as well as unfulfilled demand, for fresh digital media, both in consumer markets and in the enterprise.
  3. Co-piloting, where the trapped value is in low-yielding engagement with high-value digital services due to topic complexity and the lack of sophistication on the part of the end user.

All three of these opportunities will push further innovation in cloud computing, but the higher margins will now migrate to the next generation.

The net of all this is a fundamental investment thesis that applies equally well to venture investing, enterprise spending, and personal wealth management. As the Watergate pair of Woodward and Bernstein taught us many decades ago: follow the money! In this case, the money is in the trapped value. So before you invest in any context, first identify the trapped value that, when released, will create the ROI you are looking for; then monitor the early stages to determine whether it is indeed getting released and, if so, whether a fair share of the returns is coming back to you.

That’s what I think. What do you think?

Image Credit: Pixabay

Why 4D Printing is the Next Frontier of Human-Centered Change

The Adaptive Product

LAST UPDATED: November 29, 2025 at 9:23 AM

GUEST POST from Art Inteligencia

For centuries, the pinnacle of manufacturing innovation has been the creation of a static, rigid, and perfect form. Additive Manufacturing, or 3D printing, perfected this, giving us complexity without molds. But a seismic shift is underway, introducing the fourth dimension: time. 4D Printing is the technology that builds products designed to change their shape, composition, or functionality autonomously in response to environmental cues.

The innovation isn’t merely in the print, but in the programmable matter. These are objects with embedded behavioral code, turning raw materials into self-assembling, self-repairing, or self-adapting systems. For the Human-Centered Change leader, this is profoundly disruptive, moving design thinking from What the object is, to How the object behaves across its entire lifespan and in shifting circumstances.

The core difference is simple: 3D printing creates a fixed object. 4D printing creates a dynamic system.

The Mechanics of Transformation: Smart Materials

4D printing leverages existing 3D printing technologies (like Stereolithography or Fused Deposition Modeling) but uses Smart Materials instead of traditional static plastics. These materials have properties programmed into their geometry that cause them to react to external stimuli. The key material categories include:

  • Shape Memory Polymers (SMPs): These materials can be printed into one shape (Shape A), deformed into a temporary shape (Shape B), and then recover Shape A when exposed to a specific trigger, usually heat (thermo-responsive).
  • Hydrogels: These polymers swell or shrink significantly when exposed to moisture or water (hygromorphic), allowing for large-scale, water-driven shape changes.
  • Biomaterials and Composites: Complex structures combining stiff and responsive materials to create controlled folding, bending, or twisting motions.

This allows for the creation of Active Origami—intricate, flat-packed structures that self-assemble into complex 3D forms when deployed or activated.

Case Study 1: The Self-Adapting Medical Stent

Challenge: Implanting Devices in Dynamic Human Biology

Traditional medical stents (small tubes used to open blocked arteries) are fixed in size and delivered via invasive surgery or catheter-based deployment. Once implanted, they cannot adapt to a patient’s growth or unexpected biological changes, sometimes requiring further intervention.

4D Printing Intervention: The Time-Lapse Stent

Researchers have pioneered the use of 4D printing to create stents made of bio-absorbable, shape-memory polymers. These devices are printed in a compact, temporarily fixed state, allowing for minimally invasive insertion. Upon reaching the target location inside the body, the polymer reacts to the patient’s body temperature (the Thermal Stimulus).

  • The heat triggers the material to return to its pre-programmed, expanded shape, safely opening the artery.
  • The material is designed to gradually and safely dissolve over months or years once its structural support is no longer needed, eliminating the need for a second surgical removal.

The Human-Centered Lesson:

This removes the human risk and cost associated with two major steps: the complexity of surgical deployment (by making the stent initially small and flexible) and the future necessity of removal (by designing it to disappear). The product adapts to the patient, rather than the patient having to surgically manage the product.

Case Study 2: The Adaptive Building Facade

Challenge: Passive Infrastructure in Dynamic Climates

Buildings are static, but the environment is not. Traditional building systems require complex, motor-driven hardware and electrical sensors to adapt to sun, heat, and rain, leading to high energy costs and mechanical failure.

4D Printing Intervention: Hygromorphic Shading Systems

Inspired by how pinecones open and close based on humidity, researchers are 4D-printing building facade elements (shades, shutters) using bio-based, hygromorphic composites (materials that react to moisture). These large-scale prints are installed without any wires or motors.

  • When the air is dry and hot (high sun exposure), the material remains rigid, allowing light in.
  • When humidity increases (signaling impending rain or high moisture), the material absorbs the water vapor and is designed to automatically bend and curl, creating a self-shading or self-closing surface.

The Human-Centered Lesson:

This shifts the paradigm of sustainability from complex digital control systems to material intelligence. It reduces energy consumption and maintenance costs by eliminating mechanical components. The infrastructure responds autonomously and elegantly to the environment, making the building a more resilient and sustainable partner for the human occupants.

The Companies and Startups Driving the Change

The field is highly collaborative, bridging material science and industrial design. Leading organizations are often found in partnership with academic pioneers like MIT’s Self-Assembly Lab. Major additive manufacturing companies like Stratasys and Autodesk have made significant investments, often focusing on the software and material compatibility required for programmable matter. Other key players include HP Development Company and the innovative work coming from specialized bioprinting firms like Organovo, which explores responsive tissues. Research teams at institutions like the Georgia Institute of Technology continue to push the boundaries of multi-material 4D printing systems, making the production of complex, shape-changing structures faster and more efficient. The next generation of breakthroughs will emerge from the seamless integration of these material, design, and software leaders.

“4D printing is the ultimate realization of design freedom. We are no longer limited to designing for the moment of creation, but for the entire unfolding life of the product.”

The implications of 4D printing are vast, spanning aerospace (self-deploying antennae), consumer goods (adaptive footwear), and complex piping systems (self-regulating valves). For change leaders, the mandate is clear: start viewing your products and infrastructure not as static assets, but as programmable actors in a continuous, changing environment.

Frequently Asked Questions About 4D Printing

1. What is the “fourth dimension” in 4D Printing?

The fourth dimension is time. 4D printing refers to 3D-printed objects that are created using smart, programmable materials that change their shape, color, or function over time in response to specific external stimuli like heat, light, or water/humidity.

2. How is 4D Printing different from 3D Printing?

3D printing creates a final, static object. 4D printing uses the same additive manufacturing process but employs smart materials (like Shape Memory Polymers) that are programmed to autonomously transform into a second, pre-designed shape or state when a specific environmental condition is met, adding the element of time-based transformation.

3. What are the main applications for 4D Printing?

Applications are strongest where adaptation or deployment complexity is key. This includes biomedical devices (self-deploying stents), aerospace (self-assembling structures), soft robotics (flexible, adaptable grippers), and self-regulating infrastructure (facades that adjust to weather).

Your first step toward adopting 4D innovation: Identify one maintenance-heavy, mechanical component in your operation that is currently failing due to environmental change (e.g., a simple valve or a passive weather seal). Challenge your design team to rethink it as an autonomous, 4D-printed shape-memory structure that requires no external power source.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini

The Reasons Customers May Refuse to Speak with AI

GUEST POST from Shep Hyken

If you want to anger your customers, make them do something they don’t want to do.

Up to 66% of U.S. customers say that when it comes to getting help, resolving an issue or making a complaint, they only want to speak to a live person. That’s according to the 2025 State of Customer Service and Customer Experience (CX) annual study. If you don’t provide the option to speak to a live person, you are at risk of losing many customers.

But not all customers feel that way. We asked another sample of more than 1,000 customers about using AI and self-service tools to get customer support, and 34% said they stopped doing business with a company or brand because self-service options were not provided.

These findings reveal the contrasting needs and expectations customers have when communicating with the companies they do business with. While the majority prefer human-to-human interaction, a substantial number (about one-third) not only prefer self-service options — AI-fueled solutions, robust frequently asked question pages on a website, video tutorials and more — but demand it or they will actually leave to find a competitor that can provide what they want.

This creates a big challenge for CX decision-makers that directly impacts customer retention and revenue.

Why Some Customers Resist AI

Our research finds that age makes a difference. For example, Baby Boomers show the strongest preference for human interaction, with 82% preferring the phone over digital solutions. Only half (52%) of Gen-Z feels the same way about the phone. Here’s why:

  1. Lack of Trust: Almost half of customers (49%) say they are scared of technologies like AI and ChatGPT.
  2. Privacy Concerns: Seventy percent of customers are concerned about data privacy and security when interacting with AI.
  3. Success — Or Lack of Success: While I think it’s positive that 50% of customers surveyed have successfully resolved a customer service issue using AI without the need for a live agent, that also means that 50% have not.

Customers aren’t necessarily anti-technology. They’re anti-ineffective technology. When AI fails to understand requests and lacks empathy in sensitive situations, the negative experience can make certain customers want to only communicate with a human. Even half of Gen-Z (48%) says they are frustrated with AI technology (versus 17% of Baby Boomers).

Why Some Customers Embrace AI

The 34% of customers who prefer self-service options to the point of saying they are willing to stop doing business with a company if self-service isn’t available present a dilemma for CX leaders. This can paralyze the decision process for what solutions to buy and implement. Understanding some of the reasons certain customers embrace AI is important:

  1. Speed, Convenience and Efficiency: The ability to get immediate support without having to call a company, wait on hold, be authenticated, etc., is enough to get customers using AI. If you had the choice between getting an answer immediately or having to wait 15 minutes, which would you prefer? (That’s a rhetorical question.)
  2. 24/7 Availability: Immediate support is important, but having immediate access to support outside of normal business hours is even better.
  3. A Belief in the Future: There is optimism about the future of AI, as 63% of customers expect AI technologies to become the primary mode of customer service in the future — a significant increase from just 21% in 2021. That optimism has customers trying and outright adopting the use of AI.

CX leaders must recognize the generational differences — and any other impactful differences — as they make decisions. For companies that sell to customers across generations, this becomes increasingly important, especially as Gen-Z and Millennials gain purchasing power. Turning your back on a generation’s technology expectations puts you at risk of losing a large percentage of customers.

What’s a CX Leader To Do?

Some companies have experimented with forcing customers to use only AI and self-service solutions. This is risky, and for the most part, the experiments have failed. Yet, as AI improves — and it’s doing so at a very rapid pace — it’s okay to push customers to use self-service. Just support it with a seamless transfer to a human if needed. An AI-first approach works as long as there’s a backup.

Forcing customers to use a 100% solution, be it AI or human, puts your company at risk of losing customers. Today’s strategy should be a balanced choice between new and traditional customer support. It should be about giving customers the experience they want and expect — one that makes them say, “I’ll be back!”

Image credit: Pixabay

This article originally appeared on Forbes.com

We Must Stop Worshiping Algorithms

GUEST POST from Greg Satell

In 1954 the economist Paul Samuelson received a postcard from his friend Jimmie Savage asking, “Ever hear of this guy?” The “guy” in question was Louis Bachelier, an obscure mathematician who wrote a dissertation in 1900 that anticipated Einstein’s famous paper on Brownian motion published five years later.

The operative phrase in Bachelier’s paper, “the mathematical expectation of the speculator is zero,” was as powerful as it was unassuming. It implied that markets could be tamed using statistical techniques developed more than a century earlier and would set us down the path that led to the 2008 financial crisis.
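In modern terms, Bachelier’s phrase is usually read as a martingale condition on prices: conditional on everything known today, the expected gain from speculation is zero. A minimal rendering in present-day notation (a paraphrase, not Bachelier’s original formula):

```latex
\mathbb{E}\left[\,P_{t+1} - P_t \mid \mathcal{F}_t\,\right] = 0
```

where $P_t$ is the price at time $t$ and $\mathcal{F}_t$ is the information available at that time.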

For decades we’ve been trying to come up with algorithms to help us engineer our way out of uncertainty, and they always fail for the same reason: the world is a messy place. Trusting our destiny to mathematical formulas does not eliminate human error; it merely gives preference to judgments encoded in systems beforehand over choices made by people in real time.

The False Promise Of Financial Engineering

By the 1960s a revolution in mathematical finance, based on Bachelier’s paper and promoted by Samuelson, began to gain momentum. A constellation of new discoveries such as efficient portfolios, the capital asset pricing model (CAPM) and, later, the Black-Scholes model for options pricing created a standard model for thinking about economics and finance.

As things gathered steam, Samuelson’s colleague at MIT, Paul Cootner, compiled the most promising papers in a 500-page tome, The Random Character of Stock Market Prices, which became an instant classic. The book would become a basic reference for the new industries of financial engineering and risk management that were just beginning to emerge at the time.

However, early signs of trouble were being ignored. Included in Cootner’s book was a paper by Benoit Mandelbrot that warned that there was something seriously wrong afoot. He showed, with very clear reasoning and analysis, that actual market data displayed far more volatility than was being predicted. In essence, he was pointing out that Samuelson and his friends were vastly underestimating risk in the financial system.

Leading up to the Great Recession, other warning signs would emerge, such as the collapse of the LTCM hedge fund in 1998 and of Enron three years later, but the idea that mathematical formulas could engineer risk out of the system endured. The dreams turned to nightmares in 2008, when the entire house of cards collapsed into the worst financial crisis since the 1930s.

The Road To Shareholder Value

By 1970, Samuelson’s revolution in economics was well underway, but companies were still run much as they had been for decades. Professional managers ran companies according to their best judgment about what was best for their shareholders, customers, employees and the communities that they operated in, which left room for variance in performance.

That began to change when Milton Friedman published an op-ed in The New York Times arguing that managers had only one responsibility: to maximize shareholder value. Much like Bachelier’s paper, Friedman’s assertion implied that a simple rule of thumb, with only one variable to optimize for, rather than personal judgment, should govern.

This was great news for people managing businesses, who no longer had to face the same complex tradeoffs when making decisions. All they had to worry about was whether the stock price went up. Rather than having to choose between investing in factories and equipment to produce more product, or R&D to invent new things, they could simply buy back more stock.

The results are now in and they are abysmal. Productivity growth has been depressed since the 1970s. While corporate profits have grown as a percentage of GDP, household incomes have decoupled from economic growth and stagnated. Markets are less free and less competitive. Even social mobility in the US, the ability for ordinary people to achieve the American dream, has been significantly diminished.

The Chimera Of “Consumer Welfare”

The Gilded Age in America, at the end of the 19th century, was a period of rapid industrialization and the amassing of great wealth. As railroads began to stretch across the continent, the fortunes of the Rockefellers, Vanderbilts, Carnegies and Morgans were built. The power of these men began to rival that of governments.

It was also an era of great financial instability. The Panic of 1873 and the Panic of 1893 devastated a populace already at the mercy of the often avaricious tycoons who dominated the marketplace. The Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 were designed to re-balance the scales and bring competition back to the market.

For the most part they were successful. The breakup of AT&T in the 1980s paved the way for immense innovation in telecommunications. Antitrust action against IBM paved the way for the era of the PC, and regulatory action against Microsoft helped promote competition on the Internet. American markets were the most competitive in the world.

Still, competition is an imprecise term. Robert Bork and other conservative legal thinkers wanted a simpler, more precise standard, based on consumer welfare. In their view, for regulators to bring action against a company, they had to show that the firm’s actions raised the prices of goods or services.

Here again, human judgment was replaced with an algorithmic approach that led to worse outcomes. Over 75% of industries have seen a rise in industry concentration levels since the late 1990s, which has helped to bring about a decline in business dynamism and record income inequality.

The Chimera Of Objectivity

Humans can be irrational and maddening. Decades of research have shown that, when given the exact same set of facts, even experts will make very different assessments. Some people will be more strict, others more lenient. Some of us are naturally optimistic, others are cynics. A family squabble in the morning can affect the choices we make all day.

So it’s not unreasonable to want to improve quality and reduce variance in our decision making by taking a more algorithmic approach: clear sets of instructions that hold sway no matter who applies them. Such rules promise to make things more reliable, reduce uncertainty and, hopefully, improve effectiveness.

Yet as Yassmin Abdel-Magied and I explained in Harvard Business Review, algorithms don’t eliminate human biases, they merely encode them. Humans design the algorithms, collect the data that form the basis for decisions and interpret the results. The notion that algorithms are purely objective is a chimera.

The problem with algorithms is that they encourage us to check out, to fool ourselves into thinking we’ve taken human error out of the system and stop paying attention. They allow us to escape accountability, at least for a while, as we pass the buck to systems that spit out answers which affect real people.

Over the past 20 or 30 years, we’ve allowed this experiment to play out and the results have been tragic. It’s time we try something else.

— Article courtesy of the Digital Tonto blog
— Image credit: Google Gemini (NanoBanana)

Distributed Quantum Computing

Unleashing the Networked Future of Human Potential

LAST UPDATED: November 21, 2025 at 5:49 PM

GUEST POST from Art Inteligencia

For years, quantum computing has occupied the realm of scientific curiosity and theoretical promise. The captivating vision of a single, powerful quantum machine capable of solving problems intractable for even the most potent classical supercomputers has long driven research. However, the emerging reality of practical, fault-tolerant quantum computation is proving to be less about a single monolithic giant and more about a network of interconnected quantum resources. Recent news, highlighting major collaborations between industry titans, signals a pivotal shift: the world is moving aggressively towards Distributed Quantum Computing.

This isn’t merely a technical upgrade; it’s a profound architectural evolution that will dramatically accelerate the realization of quantum advantage and, in doing so, demand a radical human-centered approach to innovation, ethics, and strategic foresight across every sector. For leaders committed to human-centered change, understanding this paradigm shift is not optional; it’s paramount. Distributed quantum computing promises to unlock unprecedented problem-solving capabilities, but only if we proactively prepare our organizations and our people to harness its immense power ethically and effectively.

The essence of Distributed Quantum Computing lies in connecting multiple, smaller quantum processors — each a “quantum processing unit” (QPU) — through quantum networks. This allows them to function collectively as a much larger, more powerful, and inherently more resilient quantum computer, capable of tackling problems far beyond the scope of any single QPU. This parallel, networked approach will form the bedrock of the future quantum internet, enabling a world where quantum resources are shared, secured, and scaled globally to address humanity’s grand challenges.
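To make that architecture concrete, here is a minimal sketch of the workflow the paragraph above describes: a job is partitioned into subproblems, each subproblem is dispatched to one of several networked QPUs, and the partial results are combined classically. Everything here is hypothetical and illustrative; the `QPUEndpoint` class and its methods are stand-ins, not any vendor's SDK, and the quantum work itself is stubbed out.

```python
# Illustrative sketch only. "QPUEndpoint" is a hypothetical placeholder, not a
# real vendor SDK; the quantum computation itself is stubbed out. The point is
# the shape of a distributed workflow: partition a job into subproblems,
# dispatch them to several networked QPUs, and combine the results classically.

from dataclasses import dataclass
from typing import Callable


@dataclass
class QPUEndpoint:
    """Stand-in for one networked quantum processing unit."""
    name: str
    max_qubits: int

    def run(self, subproblem: dict) -> dict:
        # A real endpoint would submit a circuit over the quantum network and
        # return measurement statistics; here we just echo a placeholder result.
        return {"qpu": self.name, "subproblem": subproblem["id"], "value": 0.0}


def partition(subproblems: list[dict], qpus: list[QPUEndpoint]) -> dict[str, list[dict]]:
    """Round-robin assignment of subproblems to the available QPUs."""
    assignment: dict[str, list[dict]] = {q.name: [] for q in qpus}
    for i, sub in enumerate(subproblems):
        assignment[qpus[i % len(qpus)].name].append(sub)
    return assignment


def run_distributed(
    subproblems: list[dict],
    qpus: list[QPUEndpoint],
    combine: Callable[[list[dict]], dict],
) -> dict:
    """Dispatch each subproblem to its assigned QPU, then merge the partial results."""
    assignment = partition(subproblems, qpus)
    partials = [q.run(sub) for q in qpus for sub in assignment[q.name]]
    return combine(partials)


if __name__ == "__main__":
    qpus = [QPUEndpoint("lab-a", 20), QPUEndpoint("lab-b", 24), QPUEndpoint("lab-c", 16)]
    job = [{"id": i} for i in range(6)]  # e.g., six interacting molecular fragments
    summary = run_distributed(job, qpus, combine=lambda ps: {"partials_merged": len(ps)})
    print(summary)
```

In a real deployment the partitioning, entanglement distribution, and error mitigation would be handled by quantum middleware of the kind the case studies below describe.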

The Three-Dimensional Impact of Distributed Quantum Computing

The strategic shift to distributed quantum computing creates a multi-faceted impact on innovation and organizational design:

1. Exponential Scaling of Computational Power

By linking individual QPUs into a cohesive network, we overcome the physical limitations of building ever-larger single quantum chips. This allows for an exponential scaling of computational power that dramatically accelerates the timeline for solving currently intractable problems in areas like molecular simulation, complex optimization, and advanced cryptography. This means a faster path to new drugs, revolutionary materials, and genuinely secure communication protocols for critical infrastructure.
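The word "exponential" here is literal in one respect: an n-qubit register spans a state space of dimension 2^n, so, assuming ideal entangling links between QPUs and ignoring communication overhead, networking processors multiplies rather than adds the state space they can jointly address. A back-of-the-envelope sketch:

```latex
\dim \mathcal{H}_{n} = 2^{\,n}, \qquad
\dim\left(\mathcal{H}_{m} \otimes \mathcal{H}_{m}\right) = 2^{m}\cdot 2^{m} = 2^{\,2m}
```

Two ideally linked m-qubit QPUs thus reach the same state-space dimension as a single 2m-qubit machine; in practice, inter-QPU links introduce noise and latency that erode this ideal.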

2. Enhanced Resilience and Fault Tolerance

Individual QPUs are inherently susceptible to noise and errors, a significant hurdle for practical applications. A distributed architecture offers a robust path to fault tolerance through redundancy and sophisticated error correction techniques spread across the entire network. If one QPU encounters an error, the network can compensate, making quantum systems far more robust and reliable for real-world, long-term quantum solutions.

3. Distributed Data & Security Implications

Quantum networks will enable the secure distribution of quantum information, paving the way for truly unbreakable quantum communication (e.g., Quantum Key Distribution – QKD) and distributed quantum sensing. This has massive implications for national security, the integrity of global financial transactions, and any domain requiring ultra-secure, decentralized data handling. Concurrently, it introduces pressing new considerations for data sovereignty, ethical data access, and the responsible governance of this powerful technology.

Key Benefits for Human-Centered Innovation and Change

Organizations that proactively engage with and invest in understanding distributed quantum computing will gain significant competitive and societal advantages:

  • Accelerated Breakthroughs: Dramatically faster discovery cycles in R&D for pharmaceuticals, advanced materials science, and clean energy, directly impacting human health, environmental sustainability, and quality of life.
  • Unprecedented Problem Solving: The ability to tackle highly complex optimization problems (e.g., global logistics, nuanced climate modeling, real-time financial market predictions) with a level of accuracy and speed previously unimaginable, leading to greater efficiency and resource allocation.
  • New Security Paradigms: The capacity to develop next-generation, quantum-resistant encryption and establish truly unhackable communication networks, profoundly protecting critical infrastructure, sensitive data, and individual privacy against future threats.
  • Decentralized Innovation Ecosystems: Fosters entirely new models of collaborative research and development where diverse organizations can securely pool quantum resources, accelerating open science initiatives and tackling industry-wide challenges more effectively.
  • Strategic Workforce Transformation: Drives the urgent need for comprehensive up-skilling and re-skilling programs in quantum information science, preparing a human workforce capable of designing, managing, and ethically leveraging quantum solutions, ensuring human oversight and value creation.

Case Study 1: Pharma’s Quantum Drug Discovery Network

Challenge: Simulating Complex Protein Folding for Drug Design

A global pharmaceutical consortium faced an intractable problem: accurately simulating the dynamic folding behavior of highly complex proteins to design targeted drugs for debilitating neurological disorders. Classical supercomputers could only approximate these intricate molecular interactions, leading to incredibly lengthy, expensive, and often unsuccessful trial-and-error processes in drug synthesis.

Distributed Quantum Intervention:

The consortium piloted a collaborative Distributed Quantum Simulation Network. Instead of one pharma company trying to acquire or develop a single, massive QPU, they leveraged a quantum networking solution to securely link smaller QPUs from three different member labs (each in a separate geographical location). Each QPU was assigned to focus on simulating a specific, interacting component of the target protein, and the distributed network then combined their entangled computational power to run highly complex simulations. Advanced quantum middleware managed the secure workload distribution and the fusion of quantum data.

The Human-Centered Lesson:

This networked approach allowed for a level of molecular simulation previously impossible, significantly reducing the vast search space for new drug candidates. It fostered unprecedented, secure collaboration among rival labs, effectively democratizing access to cutting-edge quantum resources. The consortium successfully identified several promising lead compounds within months, reducing R&D costs by millions and dramatically accelerating the potential path to a cure for a debilitating disease. This demonstrated that distributed quantum computing not only solves technical problems but also catalyzes human collaboration for greater collective societal good.

Case Study 2: The Logistics Giant and Quantum Route Optimization

Challenge: Optimizing Global Supply Chains in Real-Time

A major global logistics company struggled profoundly with optimizing its vast, dynamic, and interconnected supply chain. Factors like constantly fluctuating fuel prices, real-time traffic congestion, unforeseen geopolitical disruptions, and the immense complexity of last-mile delivery meant their classical optimization algorithms were perpetually lagging, leading to significant inefficiencies, increased carbon emissions, and frequently missed delivery windows.

Distributed Quantum Intervention:

The company made a strategic investment in a dedicated quantum division, which then accessed a commercially available Distributed Quantum Optimization Service. This advanced service securely connected their massive logistics datasets to a network of QPUs located across different cloud providers globally. The distributed quantum system could process millions of variables and complex constraints in near real-time, constantly re-optimizing routes, warehouse inventory, and transportation modes based on live data feeds from myriad sources. The output was not just a single best route, but a probabilistic distribution of highly optimal solutions.

The Human-Centered Lesson:

The quantum-powered optimization led to an impressive 15% reduction in fuel consumption (and thus emissions) and a 20% improvement in on-time delivery metrics. Critically, it freed human logistics managers from the constant, reactive fire-fighting, allowing them to focus on high-level strategic planning, enhancing customer experience, and adapting proactively to unforeseen global events. The ability to model complex interdependencies across a distributed network empowered human decision-makers with superior, real-time insights, transforming a historically reactive operation into a highly proactive, efficient, and sustainable one, all while significantly reducing their global carbon footprint.

Companies and Startups to Watch in Distributed Quantum Computing

The ecosystem for distributed quantum computing is rapidly evolving, attracting significant investment and innovation. Key players include established tech giants like IBM (with its quantum networking efforts and Quantum Network Units – QNUs) and Cisco (investing heavily in the foundational quantum networking infrastructure). Specialized startups are also emerging to tackle the unique challenges of quantum interconnectivity, hardware, and middleware, such as Quantum Machines (for sophisticated quantum control systems), QuEra Computing (pioneering neutral atom qubits for scalable architectures), and PsiQuantum (focused on photonic quantum computing with a long-term goal of fault tolerance). Beyond commercial entities, leading academic institutions like QuTech (TU Delft) are driving foundational research into quantum internet protocols and standards, forming a crucial part of this interconnected future.

The Human Imperative: Preparing for the Quantum Era

Distributed quantum computing is not a distant fantasy; it is an active engineering and architectural challenge unfolding in real-time. For human-centered change leaders, the imperative is crystal clear: we must begin preparing our organizations, developing our talent, and establishing robust ethical frameworks today, not tomorrow.

This means actively fostering quantum literacy across our workforces, identifying strategic and high-impact use cases, and building diverse, interdisciplinary teams capable of bridging the complex gap between theoretical quantum physics and tangible, real-world business and societal value. The future of innovation will be profoundly shaped by our collective ability to ethically harness this networked computational power, not just for unprecedented profit, but for sustainable progress that genuinely benefits all humanity.

“The quantum revolution isn’t coming as a single, overwhelming wave; it’s arriving as a distributed, interconnected network. Our greatest challenge, and our greatest opportunity, is to consciously connect the human potential to its immense power.”

Frequently Asked Questions About Distributed Quantum Computing

1. What is Distributed Quantum Computing?

Distributed Quantum Computing involves connecting multiple individual quantum processors (QPUs) via specialized quantum networks to work together on complex computations. This allows for far greater processing power, enhanced resilience through fault tolerance, and broader problem-solving capability than any single quantum computer could achieve alone, forming the fundamental architecture of a future “quantum internet.”

2. How is Distributed Quantum Computing different from traditional quantum computing?

Traditional quantum computing focuses on building a single, monolithic, and increasingly powerful quantum processor. Distributed Quantum Computing, in contrast, aims to achieve computational scale and inherent fault tolerance by networking smaller, individual QPUs. This architectural shift addresses physical limitations and enables new applications like ultra-secure quantum communication and distributed quantum sensing that are not feasible with single QPUs.

3. What are the key benefits for businesses and society?

Key benefits include dramatically accelerated breakthroughs in critical fields like drug discovery and advanced materials science, unprecedented optimization capabilities for complex problems (e.g., global supply chains, climate modeling), enhanced data security through quantum-resistant encryption, and the creation of entirely new decentralized innovation ecosystems. It also highlights the urgent need for strategic workforce transformation and robust ethical governance frameworks to manage its powerful implications.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini

The Larger-Than-Life Story of Isaac Merritt Singer

Sewing up the Competition

GUEST POST from John Bessant

‘To be, or not to be…?’

Sooner or later an actor will find themselves declaiming those words – whether delivering Hamlet’s soliloquy or reflecting on the precarious career prospects of the thespian calling. If the answer turns out to be in the ‘not to be…’ direction, then the follow-up question is what else you might be. And if you have a leaning towards high-risk options, you might select ‘become an entrepreneur’ as an alternative.

Torquay is a drama queen of a town. Displaying itself in the summer for the tourists who flock to the English Riviera, attracted by its mild weather and (occasionally) sparkling blue bay. Full of larger-than-life characters, birthplace and home of Agatha Christie and still hosting plenty of theaters to add to the offstage stories playing out in the streets. And tucked away in the town cemetery is the last resting place of one of the largest of characters, an actor and entrepreneur to the end. Isaac Merritt Singer, father of the sewing machine and responsible for much more besides.

Born in 1811 in Pittstown, New York, Singer was the youngest of eight children, and from an early age learned to hustle, taking on various odd jobs including learning the skills of joinery and lathe turning. His passion for acting emerged early; when he was twelve he ran away to join an acting troupe called the Rochester Players. Even in those days acting was not a reliable profession, and so at nineteen he worked as an apprentice machinist, a move which helped support his early days of family life; he married fifteen-year-old Catherine Haley and had two children with her before finally succumbing once again to the siren call of the stage and joining the Baltimore Strolling Players.

His machinist studies paid off, however, when in 1839 he patented a rock-drilling machine.

He’d been working with an older brother to help dig the Illinois waterway and saw how he could improve the process; it worked and he sold it for $2,000 (around $150,000 in today’s money). This windfall gave him the chance to return to the dramatic world and he formed a troupe known as the “Merritt Players”.

On tour he appeared onstage under the name “Isaac Merritt”, with a certain Mary Ann Sponsler who called herself “Mrs. Merritt”; backstage they looked after a family which had begun growing in 1837 and eventually swelled to eight children. The tour lasted about five years, during which time he became engaged to her (neglecting to mention that he was already married).

Fortunately he’d kept up his craftsman’s skills and developed and patented a “machine for carving wood and metal” on April 10, 1849. Financially struggling once again, he moved the family back to New York City, hoping to market his machine. He built a prototype and, more important, met a bookseller, G. B. Zieber, who was to become his partner and long-suffering financier.

Unfortunately the prototype was destroyed in a fire; Zieber persuaded Singer to make a new start in Boston in 1850, using space kindly offered by Orson Phelps, who ran a small machine shop. Orders for his wood-cutting machine were not, however, forthcoming, so he turned his inventive eye to the world of sewing machines.

Singer Sewing Machine

A short history of sewing machines…

People started sewing by hand some 20,000 years ago, when the first needles were made from bones or animal horns and the thread from animal sinew. But it remained a largely manual process until the Industrial Revolution in the 18th century and the growing demand for clothing which manual labor couldn’t really meet. Demand-pull innovation prompted plenty of entrepreneurs to try their hand at improving on the basic manual process.

Their task wasn’t easy; sewing is a complex task involving different materials whose shape isn’t fixed in the way that wood or metal can be. And manual labor was still cheaply available so the costs of a machine to replace it would also need to be low. Not surprisingly many of the early inventors died in straitened circumstances.

A German-born engineer working in England, Charles Fredrick Wiesenthal, can lay claim to one of the first patents, awarded in Britain for a mechanical device to aid the art of sewing, in 1755. But this was more of a mechanical aid; it wasn’t until 1790 that an English cabinet maker by the name of Thomas Saint was granted a patent for five types of varnishes and their uses, a machine for ‘spinning, twisting, and doubling the thread’, a machine for ‘stitching, quilting, or sewing’, and a machine for ‘platting or weaving’. A specification which didn’t quite include the kitchen sink but came pretty close to covering it!

His very broad-ranging patent somewhat obscured its real value – the machine for ‘stitching, quilting, or sewing’. (So much so that when the Patent Office republished older patents and arranged them into new classes, it was placed into ‘wearing apparel’ rather than ‘sewing and embroidering’).

But his machine brought together several novel features including a mechanism for feeding material into the machine and a vertical needle. It was particularly designed for working with leather to make saddles and bridles but it was adapted for other materials like canvas to make ship sails.

Saint’s vision somewhat outstripped his ability to make and sell the machine but his underlying model introduced the key elements of what became the basic configuration – the ‘dominant design’ – for sewing machines. Much later, in 1874, a sewing machine manufacturer, William Newton Wilson, found Saint’s drawings in the UK Patent Office, made a few adjustments and built a working machine, which is still on display today on the Science Museum in London).

Saint wasn’t alone in seeing the possibilities in mechanization of sewing. Innovation often involves what’s called ‘swarming’ – many players see the potential and experiment with different designs, borrowing and building on these as they converge towards something which solves the core problem and eventually becomes the ‘dominant design’.

In the following years various attempts were made to develop a viable machine, some more successful than others. In 1804, two Englishmen, Thomas Stone and James Henderson, built a simple sewing device and John Duncan in Scotland offered an embroidery machine. An Austrian tailor, Josef Madersperger, presented his first working sewing machine publicly in 1814. And in 1818 John Doge and John Knowles invented America’s first sewing machine, but it could only sew a few bits of fabric before breaking.

But it wasn’t until 40 years after Saint’s patent that a viable machine emerged. Barthelemy Thimonnier, a French tailor, invented a machine that used a hooked needle and one thread, creating a chain stitch. The patent for his machine was issued on 17 July 1830, and in the same year, he and his partners opened the first machine-based clothing manufacturing company in the world to create uniforms for the French Army.

(Unfortunately sewing machine inventors seem to have a poor track record as far as fire risk is concerned; Thimonnier’s factory was burned down, reportedly by workers fearful of losing their livelihood, following the issuing of the patent).

Over in America, Walter Hunt joined the party, bringing his contribution in 1832 in the form of the first lock-stitch machine. Up till then machines had used a simple chain stitch, but the lock stitch was a big step forward since it allowed for tighter, more durable seams of the kind needed in many clothes. It wasn’t without its teething troubles and Hunt only sold a handful of machines; he only bothered to patent his idea much later, in 1854.

Meanwhile, in 1841, British inventors Newton and Archibold improved on the emerging technology with a better needle and the use of two pressing surfaces to keep the pieces of fabric in position. And John Greenough registered a patent for the first sewing machine in the United States in 1842.

Each of these machines had some of the important elements but it was only in 1844 that they converged in the machine built by English inventor John Fisher. All should have been well – except that the apparent curse of incomplete filing (which seems to have afflicted many sewing machine inventors) struck him down. His patent was delayed and he failed to get the recognition he probably deserves as the architect of the modern sewing machine.

Instead it was Elias Howe from America, with his 1845 machine (which closely resembled Fisher’s), who took the title. His patent was for “a process that uses thread from two different sources…,” building on the idea of a lockstitch which Walter Hunt had actually developed thirteen years earlier. Hunt’s failure to patent this meant that Howe could eventually reap the not inconsiderable rewards, earning him $5 for every sewing machine sold in America which used the lockstitch principle.

Howe’s machine was impressive but, like all the others, was slow to take off, and he decided to try to market it in Europe, sailing for England and leaving the American market open for other entrants, including one Isaac Merritt Singer, who patented his machine in 1851.

Singer Sewing Table

Image: Public domain, via Wikimedia Commons

Singer’s machine

Singer became interested in sewing machines by trying to make them better. Orson Phelps (in whose machine shop Singer was working) had recently started making sewing machines for the modestly successful Lerow and Blodgett Company. Zieber and Phelps convinced Singer to take a look at the machine to see if he could improve upon its design.

Legend has it that Singer was sceptical at first, questioning its market potential. “You want to do away with the only thing that keeps women quiet?” But they managed to persuade him and in 1850, the three men formed a partnership, with Zieber putting up the money, Singer doing the inventing, and Phelps the manufacturing.

Instead of repairing the machine, Singer redesigned it by installing a treadle to help power the fabric feed and rethinking the way the shuttle mechanism worked, replacing the curved needle with a straight one.

Like Henry Ford after him, Singer’s gift was not in pure invention but rather in adapting and recombining different elements. His eventual design for a machine combined elements of Thimonnier’s, Hunt’s and Howe’s machines; the idea of using a foot treadle to leave both hands free dated back to the Middle Ages.

Importantly, the new design caused less thread breakage with the innovation of an arm-like apparatus that extended over the worktable, holding the needle at its end. It could sew 900 stitches per minute, a dramatic improvement over an accomplished seamstress’s rate of 40 on simple work. On an item as complex as a shirt the time required could be reduced from fifteen hours to less than one.

Singer obtained US Patent number 8294 for his improvements on August 12, 1851.

But having perfected the machine, the partners faced a couple of obstacles in the way of reaping the rewards from transforming the market. First was the problem of economics; their machine (and others like it) opened up the possibility of selling for home use – but at $125 each ($4,000 in 2022 dollars) the machines were expensive and slow to catch on.

And then there was the small matter of sorting out the legal tangles involved in the intellectual property rights to sewing machinery.

Climbing out of the patent thicket

Elias Howe had been understandably annoyed to find Singer’s machine using elements of his own patent and duly took him to court for patent infringement. Singer tried to argue that Howe had actually infringed upon William Hunt’s original idea; unfortunately for him since Hunt hadn’t patented it that argument failed. The judge ruled that Hunt’s lock-stitch idea was free for anyone – including Howe – to use. Consequently, Singer was forced to pay a lump sum and patent royalties to Howe.

(Interestingly, if John Fisher’s UK patent hadn’t been filed wrongly, he too would have been involved in the lawsuit, since both Howe’s and Singer’s designs were almost identical to the one Fisher created.)

Sounds complicated? It gets worse, mainly because they weren’t the only ones in the game. Inventors like Allen B. Wilson were slugging it out with others like John Bradshaw; both of them had developed and patented devices which improved on Singer’s and Howe’s ideas. Wilson partnered up with Nathaniel Wheeler to produce a new machine which used a hook instead of a shuttle and was much quieter and smoother in operation. That helped the Wheeler & Wilson Company to make and sell more machines in the 1850s and 1860s than any other manufacturer. Wilson also invented the feed mechanism that is still used on every sewing machine today, drawing the cloth through the machine in a smooth and even fashion. Others like Charles Miller patented machinery to help with accessories like buttonhole stitching.

The result was that in the 1850s a rapidly increasing number of companies were vying with each other not only to produce sewing machines but also to file lawsuits for patent infringement by the others. It became known as the Sewing Machine War – and like most wars risked ending up benefiting no-one. It’s an old story and often a vicious and expensive one in which the lawyers end up the only certain winners.

Fortunately this one, though not without its battles, was to arrive at a mutually successful cease-fire. In 1856, the major manufacturers (including Singer and Wheeler & Wilson) met in Albany, New York, and Orlando Potter, president of the Grover and Baker Company, proposed that, rather than squander their profits on litigation, they pool their patents.

They agreed to form the Sewing Machine Combination, merging nine of the most important patents; they were able to secure the cooperation of Elias Howe by offering him a royalty on every sewing machine manufactured. Any other manufacturer had to obtain a license for $15 per machine. This lasted until 1877 when the last patent expired.

Singing the Singer song

So the stage was finally set for Isaac Singer to act his most famous role – one which predated Henry Ford as one of the fathers of mass production. In late 1857, Singer opened in New York the world’s first facility for mass-producing something other than firearms and was soon able to cut production costs. Sales volume increased rapidly; in 1855 he’d sold 855 machines, a year later over 2,500, and in 1858 his production reached 3,591 and he opened three more New York-based manufacturing plants.

Efficiency in production allowed the machines to drop in price to $100, then $60, then $30, and demand exploded. By 1860, selling over 13,000 machines a year, Singer had become the largest manufacturer of sewing machines in the world. Ten years later that number had risen tenfold; twenty years on they sold over half a million machines a year.

Like Ford he was something of a visionary, seeing the value of a systems approach to the problem of making and selling sewing machines. His was a recombinant approach, taking ideas like standardised and interchangeable parts, division of labour, specialisation of key managerial roles and intensive mechanisation to mass produce and bring costs down.

His thespian skills were usefully deployed in the marketing campaign; amongst other stunts he staged demonstrations of the sewing machine in city centre shop windows where bystanders could watch a (skilled) young woman effortlessly sewing her own creations. And he was famous for his ‘Song of the Shirt’ number which he would deliver as background accompaniment in events at which, once again, an attractive and accomplished seamstress would demonstrate the product.

It’s often easy to overlook the contribution of others in the innovation story – not least when the chief protagonist is an actor with a gift for self-publicity. Much of the development of the Singer business was actually down to the ideas and efforts of his partner at the time Edward Cabot Clark. It was Clark, for example, who came up with the concept of instalment purchasing plans which literally opened the door to many salesmen trying to push their product. He also suggested the model of trading in an older model for one with newer features – something enthusiastically deployed a century later in the promotion of a host of products from smart-phones to saloon cars.

Singer and Clark worked to create the necessary infrastructure to support scaling the business. They opened attractive showrooms, developed a rapid spare parts distribution system and employed a network of repair mechanics.

This emerging market for domestic sewing machines attracted others; in 1863 an enterprising tailor, Ebenezer Butterick, began selling dress patterns and helped open up the home dressmaking business. Magazines, pattern books and sewing circles emerged as women saw the opportunities in doing something which could bring both social and economic benefit to their lives. Schools and colleges began offering courses to teach the required skills, many of them helpfully sponsored by the Singer Sewing Machine Company.

It wasn’t just a new business opportunity; this movement provided important impetus to a redefinition of the role of women in the home and their access to activity which could become more than a simple hobby. Singer’s advertising put women in control with advertisements suggesting that their machine was ‘… sold only by the maker directly to the women of the family’. Charitable groups such as the Ladies Work Society and the Co-operative Needlewoman’s Society emerged aimed at helping poorer women find useful skills and respectable employment in sewing.

By 1863 Singer’s machine had become America’s most popular sewing machine and was on its way to a similar worldwide role. They pioneered international manufacturing, particularly in Europe, having first tried to enter the overseas market by licensing their patents to others. Quality and service problems forced them to rethink, and they moved instead to setting up their own facilities.

Their Clydebank complex in Scotland, opened in 1885, became the world’s largest sewing machine factory, with two main manufacturing buildings on three levels. One made domestic machines, the other industrial models; the whole was overseen by a giant 60-metre-high tower emblazoned with the name ‘Singer’ and bearing four clock faces, the world’s largest. Employing over 3,500 people, it turned out 8,000 sewing machines a week. By the 1900s it was making over 1.5 million machines to be sold around the world.

Estimates place Singer’s share of global production at around 80% from 1880 until at least the 1920s. Over one thousand different models for industrial and home use were offered. Singer had 1,700 stores in the United States and 4,300 overseas, supported by 60,000 salesmen.

Singer Sewing Machine Two

Image: Public domain via Wikimedia Commons

Off-stage activities

Singer was a big man with a commanding presence and a huge appetite for experiences. But he had no need of a Shakespeare to conjure up a plot for his own dramatic personal life; his was quite rich enough. The kind of life where it might help to have a few thousand miles of Atlantic Ocean between you and what’s going on when your past suddenly and rapidly catches up with you…

(Pay attention, this gets more complicated than the patent thicket).

Catherine, his first wife, had separated from him back in the 1830s but remained married to him, benefitting from his payments to her. She finally agreed to a divorce in 1860, at which point his long-suffering mistress and mother of eight of his children, Mary Ann, believed Isaac was free to marry her. He wasn’t keen to change his arrangements with her, but in any case the question soon became somewhat academic.

In 1860 she was riding in her carriage along Fifth Avenue in New York when she happened to see Isaac in another carriage, seated alongside Mary McGonigal, one of Isaac’s employees about whom Mary Ann already had suspicions. Confronting him, she discovered that not only had he fathered seven children with McGonigal but that he had also had an affair with her sister Kate!

Hell hath no fury like a woman scorned, and Mary Ann really went for Isaac, having him arrested and charged with bigamy; he fled to London on bail, taking Mary McGonigal with him. But he left behind even more trouble: further digging uncovered a fourth ‘wife’, one Mary Walters, who had been one of his glamorous sewing machine demonstrators and who added another child to the list of his offspring. The final tally was four New York families, all living in Manhattan in ignorance of each other, with sixteen of his children between them!

Isaac’s escape to England allowed him enough breathing space to pick up on another affair he had started in France the previous year with Isabella Boyer, a young Frenchwoman said to have later been the model for the face of the Statue of Liberty. He’d managed to leave her pregnant, so she left her husband and moved to England to join Isaac, marrying him in 1863. They settled down to life on their huge estate in Devon, where they had a further six children.

Legacy

Singer left behind a lot – not least a huge fortune. On his death in 1875 he was worth around $13m (close to $400 million in today’s money). From considerably humbler beginnings he’d made his way to a position where he was able to buy a sizeable plot of land near Torquay and build a grand 110-room house (Oldway) modelled on the royal palace at Versailles, complete with a hall of mirrors, maze and grotto garden.

And when he was finally laid to rest, it was in a cedar, silver, satin and oak-lined marble tomb, at a funeral attended by over 2,000 mourners.

His wider legacy is, of course, the sewing machine, which formed the basis of the company he helped found and which became such a powerful symbol of industrial and social innovation. He reminds us that innovation isn’t a single flash of inspiration but an extended journey, and he deployed his skills at navigating that journey in many directions. He is remembered chiefly for product innovations like the sewing machine, but throughout his life he developed many other ideas into serviceable (and sometimes profitable) ventures.

But he also pioneered extensive process innovation, anticipating Henry Ford’s mass production approach to change the economics of selling consumer goods and rethinking the ways in which his factories could continue to develop. He had the salesman’s gift, but his wasn’t just an easy patter to persuade reluctant adopters. Together with Edward Clark he pioneered ways of targeting and then opening up new markets, particularly in the emerging world of the domestic consumer. And he was above all a systems thinker, recognizing that the success or failure of innovation depends on thinking around a complete business model to ensure that good ideas have an architecture through which they can create value.

Isaac Singer retained his interest in drama up to his death, leaving his adopted home of Torbay with a selection of imposing theaters which still offer performances today. It can only be a matter of time before someone puts together the script for a show based on this larger than life character and the tangled web that he managed to weave.


You can find my podcast here and my videos here

And if you’d like to learn with me take a look at my online courses here

And subscribe to my (free) newsletter here

All images generated by Substack AI unless otherwise indicated

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Don’t Adopt Artificial Incompetence

Don't Adopt Artificial Incompetence

GUEST POST from Shep Hyken

I’ve been reviewing my customer experience research, specifically the section on the future of customer service and AI (Artificial Intelligence). A few findings prove that customers are frustrated and lack confidence in how companies are using AI:

  • In general, 57% of customers are frustrated by AI-fueled self-service options.
  • 49% of customers say technologies like AI and ChatGPT scare them.
  • 51% of customers have received wrong or incorrect information from an AI self-service bot.

As negative as these findings sound, there are plenty of findings that point to AI getting better and more customers feeling comfortable using AI solutions. The technology continues to improve quickly. While it’s only been five months since we surveyed more than 1,000 U.S. consumers, I bet a new survey would show continued improvement in the technology and in customers’ comfort level with AI. But for this short article, let’s focus on the problem that needs to be resolved.

Upon reviewing the numbers, I realized that there’s another kind of AI: Artificial Incompetence. That’s my new label for companies that improperly use AI and cause customers to be frustrated, scared, and/or to receive bad information. After thinking I was clever and had invented this term, I was disheartened to discover, after a Google search, that the term already exists; however, it’s not widely used.

So, AI – as in Artificial Incompetence – is a problem you don’t want to have. To avoid it, start by recognizing that AI isn’t perfect. Be sure to have a human backup that’s fast and easy to reach when the customer feels frustrated, angry, or scared.

And now, as the title of this article implies, there’s more. After sharing the new concept of AI with my team, we brainstormed and had fun coming up with two more phrases based on some of the ideas I covered in my past articles and videos:

Feedback Constipation: When you get so much feedback and don’t take action, it’s like eating too much and not being able to “go.” (I know … a little graphic … but it makes the point.) This came from my article Turning Around Declining Customer Satisfaction, which teaches that collecting feedback isn’t valuable unless you use it.

Jargon Jeopardy: Most people – but not everyone – know what CX means. If you are using it with a customer and they don’t know what it means, how do you think they feel? I was once talking to a customer service rep who kept using abbreviations. I could only guess what they meant. So I asked him to stop with the E-I-E-I-O’s (a reference to the lyrics of “Old MacDonald Had a Farm”). This was the main theme of my article titled Other Experiences Exist Beyond Customer Experience (EX, WX, DX, UX and more).

So, this was a fun way of poking fun at companies that may think they are doing CX right (and doing it well) when the customer’s perception is the opposite. Don’t use AI that frustrates customers and projects an image of incompetence. Don’t collect feedback unless you plan to use it. Otherwise, it’s a waste of everyone’s time and effort. Finally, don’t confuse customers – and even employees – with jargon and acronyms that make them feel like they are forced to relearn the alphabet.

Image Credits: 1 of 950+ FREE quote slides available at http://misterinnovation.com

This article originally appeared on Forbes.com

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Re-engineering Trust and Retention in the AI Contact Center

The Empathy Engine

LAST UPDATED: November 9, 2025 at 1:36PM
Re-engineering Trust and Retention in the AI Contact Center

by Braden Kelley

The contact center remains the single most critical point of human truth for a brand. It is where marketing promises meet operational reality. The challenge today, as highlighted by leaders like Bruce Gilbert of Young Energy at Customer Contact Week (CCW) in Nashville recently, is profound: customers expect friction-less experiences with empathetic responses. The solution is not merely throwing technology at the problem; it’s about strategically weaving automation into the existing human fabric to create an Empathy Engine.

The strategic error most organizations make is starting with the technology’s capability rather than the human need. The conversation must start with empathy not the technology — focusing first on the customer and agent pain points. AI is not a replacement for human connection; it is an amplification tool designed to remove friction, build trust, and elevate the human agent’s role to that of a high-value relationship manager.

The Trust Imperative: The Cautious Adoption Framework

The first goal when introducing AI into the customer journey is simple: Building trust. The consumer public, after years of frustrating Interactive Voice Response (IVR) systems and rigid chatbots, remains deeply skeptical of automation. A grand, “all-in” AI deployment is often met with immediate resistance, which can manifest as call abandonment or increased churn.

To overcome this, innovation must adhere to a principle of cautious, human-centered rollout — a Cautious Adoption Framework: starting small, with simple things, helps to build this trust. Implement AI where the risk of failure is low and the utility is high — such as automating password resets, updating billing addresses, or providing initial diagnostics. These are the repetitive, low-value tasks that bore agents and frustrate customers. By successfully automating these simple, transactional elements, you build confidence in the system, preparing both customers and agents for more complex, AI-assisted interactions down the line. This approach honors the customer’s pace of change.
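
To make the framework concrete, here is a minimal sketch of that kind of “start small” routing: only a whitelist of low-risk, transactional intents is automated, and anything else (or anything the model is unsure about) goes straight to a human. The intent names, handlers, and confidence threshold are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of a "start small" intent router: only low-risk, transactional
# intents are automated; everything else goes straight to a human agent.
# Intent names, handlers, and the confidence threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class IntentPrediction:
    name: str          # e.g. "password_reset"
    confidence: float  # 0.0 - 1.0, from whatever NLU model you already run

# Whitelist of low-risk tasks the AI is allowed to complete end to end.
AUTOMATED_INTENTS: dict[str, Callable[[str], str]] = {
    "password_reset": lambda customer_id: f"Reset link sent for {customer_id}",
    "update_billing_address": lambda customer_id: f"Address update started for {customer_id}",
    "initial_diagnostics": lambda customer_id: f"Diagnostics running for {customer_id}",
}

CONFIDENCE_FLOOR = 0.85  # below this, don't guess -- hand off to a human

def route(prediction: IntentPrediction, customer_id: str) -> str:
    """Automate only when the task is whitelisted AND the model is confident."""
    handler: Optional[Callable[[str], str]] = AUTOMATED_INTENTS.get(prediction.name)
    if handler is None or prediction.confidence < CONFIDENCE_FLOOR:
        return f"Escalating '{prediction.name}' to a human agent with full context."
    return handler(customer_id)

print(route(IntentPrediction("password_reset", 0.93), "cust-001"))
print(route(IntentPrediction("billing_dispute", 0.97), "cust-002"))  # never automated
```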

The Agent Retention Strategy: Alleviating Cognitive Load

The operational cost of the contact center is inextricably linked to agent retention. Finding and keeping high-quality agents remains a persistent challenge, primarily because the job is often highly stressful and repetitive. AI provides a powerful retention tool by directly addressing the root cause: cognitive load.

Reducing the cognitive load and stress level on agents is a non-negotiable step for long-term operational health. AI co-pilots must be designed to act as true partners, not simply data overlays. They should instantly surface relevant knowledge base articles, summarize the customer’s entire history before the agent picks up the call, or even handle real-time data entry. This frees the human agent to focus entirely on the empathetic response — active listening, problem-solving, and de-escalation. By transforming the agent’s role from a low-paid data processor into a high-value relationship manager, we elevate the profession, directly improving agent retention and turning contact center employment into an aspirational career path.
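
As a rough illustration of what such a co-pilot might assemble before the agent picks up the call, the sketch below builds a “pre-call brief” from interaction history and a knowledge base. The summarizer and article-ranking logic are deliberately naive stand-ins for whatever retrieval and LLM tooling you actually use; all names and data are hypothetical.

```python
# Minimal sketch of a co-pilot "pre-call brief": summarize the customer's history
# and attach the most relevant knowledge base articles so the agent starts the
# call with context instead of hunting for it. All data and scoring are stand-ins.

from dataclasses import dataclass, field

@dataclass
class PreCallBrief:
    customer_id: str
    summary: str
    suggested_articles: list[str] = field(default_factory=list)

def summarize_history(interactions: list[str], max_items: int = 3) -> str:
    # Stand-in for an LLM or extractive summarizer: keep the most recent events.
    recent = interactions[-max_items:]
    return " | ".join(recent) if recent else "No prior interactions."

def rank_articles(issue: str, knowledge_base: dict[str, str], top_k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring as a placeholder for real retrieval.
    issue_words = set(issue.lower().split())
    scored = sorted(
        knowledge_base.items(),
        key=lambda kv: len(issue_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:top_k]]

def build_brief(customer_id: str, issue: str,
                interactions: list[str], knowledge_base: dict[str, str]) -> PreCallBrief:
    return PreCallBrief(
        customer_id=customer_id,
        summary=summarize_history(interactions),
        suggested_articles=rank_articles(issue, knowledge_base),
    )

brief = build_brief(
    "cust-001",
    issue="bill higher than expected after promotion ended",
    interactions=["2025-09: signed up on intro promotion",
                  "2025-10: reported slow service",
                  "2025-11: promotion expired"],
    knowledge_base={
        "Explaining promotional pricing": "how intro promotion pricing changes when it ended expired bill",
        "Troubleshooting slow service": "steps for diagnosing slow service speed",
    },
)
print(brief.summary)
print(brief.suggested_articles)
```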

The Systemic Challenge: Orchestrating the AI Ecosystem

A major limiting factor in today’s contact center is the presence of fragmented AI deployments. Many organizations deploy AI in isolated pockets — a siloed chatbot here, a transcription service there. The future demands that we move far beyond siloed AI. The goal is complete AI orchestration across the enterprise, requiring us to get the AIs to talk to each other.

A friction-less customer experience requires intelligence continuity: a Voice AI must seamlessly hand off its collected context to a Predictive AI (which assesses the call risk), which then informs the Generative AI (that drafts the agent’s suggested response). This is the necessary chain of intelligence that supports friction-less service. Furthermore, complexity demands a blended AI approach, recognizing that the solution may involve more than one method (generative vs. directed).

For high-compliance tasks, a directed approach ensures precision: for instance, a flow can insert “read as is” instructions for regulatory disclosures, ensuring legal text is delivered exactly as designed. For complex, personalized problem-solving, a generative approach is vital. The best systems understand the regulatory and emotional context, knowing when to switch modes instantly and without customer intervention.
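
A simplified sketch of that chain and mode switch might look like the following: context collected by a Voice AI feeds a predictive risk score, which then determines whether the response is generated freely or delivered as directed, read-as-is compliance text. The risk heuristic, disclosure wording, and drafting stub are illustrative assumptions only.

```python
# Minimal sketch of the "chain of intelligence": Voice AI context -> predictive
# risk score -> generative or directed response. All heuristics and text are
# illustrative assumptions, not a production design.

from dataclasses import dataclass

@dataclass
class CallContext:                 # produced by the Voice AI layer
    customer_id: str
    transcript: str
    requires_disclosure: bool      # e.g. the intent touches a regulated product

def predict_risk(ctx: CallContext) -> float:
    # Stand-in for a predictive model assessing churn/escalation risk.
    angry_words = {"cancel", "lawyer", "complaint"}
    hits = sum(word in ctx.transcript.lower() for word in angry_words)
    return min(1.0, 0.2 + 0.3 * hits)

READ_AS_IS_DISCLOSURE = (
    "This call may be recorded. Terms and conditions apply as written in your agreement."
)

def draft_response(ctx: CallContext, risk: float) -> str:
    if ctx.requires_disclosure:
        # Directed mode: compliance text is delivered exactly as designed.
        return READ_AS_IS_DISCLOSURE
    # Generative mode: stand-in for an LLM drafting the agent's suggested response.
    tone = "empathetic and de-escalating" if risk > 0.5 else "friendly and efficient"
    return f"[Generative draft, {tone}] Addressing: {ctx.transcript[:60]}..."

ctx = CallContext("cust-001", "I want to cancel, this is my third complaint",
                  requires_disclosure=False)
risk = predict_risk(ctx)
print(f"risk={risk:.2f}")
print(draft_response(ctx, risk))
```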

The Strategic Pivot: Investing in Predictive Empathy

The ultimate strategic advantage lies not in reacting to calls, but in preventing them. This requires a deeper investment in data science, moving from descriptive reporting on what happened to predictive analytics to understand why our customers are calling in before they dial the number.

This approach, which I call Predictive Empathy, uses machine learning to identify customers whose usage patterns, payment history, or recent service interactions suggest a high probability of confusion or frustration (e.g., first-time promotions expiring, unusual service interruptions). The organization then proactively initiates a personalized, AI-assisted outreach to address the problem or explain the confusion before the customer reaches the point of anxiety and makes the call. This shifts the interaction from reactive conflict to proactive support, immediately lowering call volume and transforming brand perception.
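
As a hypothetical illustration, a first pass at Predictive Empathy could be as simple as scoring a handful of signals that tend to precede frustrated calls and queuing proactive outreach for anyone above a threshold. The features, weights, and threshold below are invented for the sketch; a real deployment would use a model trained on your own data.

```python
# Minimal sketch of a Predictive Empathy pass: score customers on signals that
# often precede a frustrated call and queue proactive outreach for the riskiest.
# Feature names, weights, and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CustomerSignals:
    customer_id: str
    days_until_promo_expiry: int      # negative means already expired
    service_interruptions_30d: int
    missed_payments_90d: int

def frustration_score(s: CustomerSignals) -> float:
    score = 0.0
    if -7 <= s.days_until_promo_expiry <= 7:   # bill is about to change or just changed
        score += 0.4
    score += min(0.4, 0.2 * s.service_interruptions_30d)
    score += min(0.2, 0.1 * s.missed_payments_90d)
    return min(1.0, score)

OUTREACH_THRESHOLD = 0.5

def plan_outreach(customers: list[CustomerSignals]) -> list[str]:
    """Return proactive, AI-assisted outreach actions before the customer calls."""
    return [
        f"Send personalized bill-change explainer to {c.customer_id} "
        f"(score={frustration_score(c):.2f})"
        for c in customers
        if frustration_score(c) >= OUTREACH_THRESHOLD
    ]

cohort = [
    CustomerSignals("cust-001", days_until_promo_expiry=3,
                    service_interruptions_30d=1, missed_payments_90d=0),
    CustomerSignals("cust-002", days_until_promo_expiry=200,
                    service_interruptions_30d=0, missed_payments_90d=0),
]
for action in plan_outreach(cohort):
    print(action)
```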

The Organizational Checkpoint: Post-Deployment Evolution

Once you’ve successfully implemented AI to address pain points, the work is not finished. A crucial strategic question must be addressed: What happens after AI deployment? What’s your plan?

As AI absorbs simple transactions, the nature of the calls that reach the human agent becomes disproportionately more complex, emotional, and high-value. This creates a skills gap in the remaining human workforce. The organization must plan for and fund the Up-skilling Initiative necessary to handle these elevated interactions, focusing on conflict resolution, complex sales, and deep relationship management. The entire organizational structure — training programs, compensation models, and career paths — must evolve to support this higher-skilled human workforce. By raising the value of the human role, the contact center transitions from a cost center into a profit-generating Relationship Hub.

Conclusion: Architecting the Human Layer

The goal of innovation in the contact center is not the elimination of the human, but the elevation of the human. By using AI to build trust, reduce cognitive load, enable predictive empathy, and connect disparate systems, we free the human agent to deliver on the fundamental customer expectation: a friction-less experience coupled with an empathetic response. This is how we re-engineer the contact center from a cost center into a powerful engine for talent retention and customer loyalty.

“AI handles the transaction. The human handles the trust. Design your systems to protect both.” — Braden Kelley

Your first step into the Empathy Engine: Map the single most stressful task for your top-performing agent and commit to automating 80% of its cognitive load using a simple AI co-pilot within the next 90 days.

What is that task for your organization?

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, insights captured from the Customer Contact Week session, panelists to mention, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Are We Suffering from AI Confirmation Bias?

Are We Suffering From AI Confirmation Bias?

GUEST POST from Geoffrey A. Moore

When social media first appeared on the scene, many of us had high hopes it could play a positive role in community development and civic affairs, as indeed it has. What we did not anticipate was the long-term impact of the digital advertising model that supported it. That model is based on click-throughs, and one of the most effective ways to increase them is to present content that reinforces the recipient’s existing views.

Statisticians call the attraction to one’s existing point of view confirmation bias, and we all have it. As individuals, we believe we are in control of this, but it is obvious that at the level of populations, we are not. Confirmation bias, fed first by social media, and then by traditional media once it is converted to digital, has driven political and social polarization throughout the world. It has been further inflamed by conspiracy theories, malicious communications, fake news, and the like. And now we are faced with the advent of yet another amplifier—artificial intelligence. A significant portion of the fears about how AI could impact human welfare stem from how easily it can be put to malicious use through disinformation campaigns.

The impact of all this on our political life is chilling. Polarized media amplifies the impact of extremism and dampens the impact of moderation. This has most obviously been seen in primary elections, but it has now carried over into general elections to the point where highly unqualified individuals who have no interest in public service hold some of the most important roles in state and federal government. The resulting dysfunction is deeply disturbing, but it is not clear if and where a balance can be found.

Part of the problem is that confirmation bias is an essential part of healthy socialization. It reflects the impact that narratives have on our personal and community identities. What we might see as arrant folly another person sees as a necessary leap of faith. Our founding fathers were committed to protecting our nation from any authority imposing its narratives on unwilling recipients, hence our Constitutional commitment to both freedom of religion and freedom of speech.

In effect, this makes it virtually impossible to legislate our way out of this dilemma. Instead, we must embrace it as a Darwinian challenge, one that calls for us as individuals to adapt our strategies for living to a dangerous new circumstance. Here I think we can take a lesson from our recent pandemic experience. Faced with the threat of a highly contagious, ever-mutating Covid virus, most of the developed economies embraced rapid vaccination as their core response. China, however, did not. It embraced regulation instead. What they and we learned is that you cannot solve problems of contagion through regulation.

We can apply this learning to dealing with the universe of viral memes that have infected our digital infrastructure and driven social discord. Instead of regulation, we need to think of vaccination. The vaccine that protects people from fake news and its many variants is called critical thinking, and the healthcare provider that dispenses it is called public education.

We have spent the past several decades focusing on the STEM wing of our educational system, but at the risk of exercising my own confirmation bias, the immunity protection we need now comes from the liberal arts. Specifically, it emerges from supervised classroom discussions in which students are presented with a wide variety of challenging texts and experiences accompanied by a facilitated dialog that instructs them in the practices of listening, questioning, proposing, debating, and ultimately affirming or denying the validity of the argument under consideration. These discussions are not about promoting or endorsing any particular point of view. Rather, they teach one how to engage with any point of view in a respectful, powerful way. This is the intellectual discipline that underlies responsible citizenship. We have it in our labs. We just need to get it distributed more broadly.

That’s what I think. What do you think?

Image Credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






The AI Agent Paradox

How E-commerce Must Proactively Manage Experiences Created Without Their Consent

LAST UPDATED: November 7, 2025 at 4:31 PM

The AI Agent Paradox

GUEST POST from Art Inteligencia

A fundamental shift is underway in the world of e-commerce, moving control of the customer journey out of the hands of the brand and into the hands of the AI Agent. The recent lawsuit by Amazon against Perplexity regarding unauthorized access to user accounts by its agentic browser is not an isolated legal skirmish; it is a red flag moment for every company that sells online. The core challenge is this: AI agents are building and controlling the shopping experience — the selection, the price comparison, the checkout path — often without the e-commerce site’s knowledge or consent.

This is the AI Agent Paradox: The most powerful tool for customer convenience (the agent) simultaneously poses the greatest threat to brand control, data integrity, and monetization models. The era of passively optimizing a webpage is over. The future belongs to brands that actively manage their relationship with the autonomous, agentic layer that sits between them and their human customers.

The Three Existential Threats of the Autonomous Agent

Unmanaged AI agents, operating as digital squatters on your site, create immediate systemic problems for e-commerce operations:

  1. Data Integrity and Scraping Overload: Agents typically use resource-intensive web scraping techniques that overload servers and pollute internal analytics. The shopping experience they create is invisible to the brand’s A/B testing and personalization engines.
  2. Brand Bypass and Commoditization: Agents prioritize utility over loyalty. If a customer asks for “best price on noise-cancelling headphones,” the agent may bypass your brand story, unique value propositions, and even your preferred checkout flow, reducing your products to mere SKU and price points. This is the Brand Bypass threat.
  3. Security and Liability: Unauthorized access, especially to user accounts (as demonstrated by the Amazon-Perplexity case), creates massive security vulnerabilities and legal liability for the e-commerce platform, which is ultimately responsible for protecting user data.

The How-To: Moving from Resistance to Proactive Partnership

Instead of relying solely on defensive legal action (which is slow and expensive), e-commerce brands must embrace a proactive, human-centered API strategy. The goal is to provide a superior, authorized experience for the AI agents, turning them from adversaries into accelerated sales channels — and honoring the trust the human customer places in their proxy.

Step 1: Build the Agent-Optimized API Layer

Treat the AI agent as a legitimate, high-volume customer with unique needs (structured data, speed). Design a specific, clean Agent API separate from your public-facing web UI. This API should allow agents to retrieve product information, pricing, inventory status, and execute checkout with minimal friction and maximum data hygiene. This immediately prevents the resource-intensive web scraping that plagues servers.
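
A minimal sketch of what such an Agent API might look like, assuming a Python service built with FastAPI, is shown below. The paths, fields, and in-memory catalog are illustrative assumptions rather than a real storefront schema; the point is that agents get clean, structured data and an auditable checkout instead of scraping HTML.

```python
# Minimal sketch of an Agent API layer (assuming FastAPI and pydantic are installed):
# structured endpoints for product data and checkout, separate from the web UI.
# Paths, fields, and the in-memory catalog are illustrative assumptions.
# Run with: uvicorn agent_api:app

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Agent API", version="1.0")

CATALOG = {
    "SKU-1001": {"name": "Noise-Cancelling Headphones", "price_usd": 249.00, "in_stock": 42},
}

class CheckoutRequest(BaseModel):
    sku: str
    quantity: int
    agent_id: str          # identifies the calling agent for attribution and auditing

@app.get("/agent/v1/products/{sku}")
def get_product(sku: str) -> dict:
    """Structured product data -- no HTML to scrape, no layout to break."""
    product = CATALOG.get(sku)
    if product is None:
        raise HTTPException(status_code=404, detail="Unknown SKU")
    return {"sku": sku, **product}

@app.post("/agent/v1/checkout")
def checkout(req: CheckoutRequest) -> dict:
    product = CATALOG.get(req.sku)
    if product is None or product["in_stock"] < req.quantity:
        raise HTTPException(status_code=409, detail="Unavailable at requested quantity")
    total = round(product["price_usd"] * req.quantity, 2)
    return {"status": "confirmed", "sku": req.sku, "quantity": req.quantity,
            "total_usd": total, "attributed_agent": req.agent_id}
```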

Step 2: Define and Enforce the Rules of Engagement

Your Terms of Service (TOS) must clearly articulate the acceptable use of your data by autonomous agents. Furthermore, the Agent API must enforce these rules programmatically. You can reward compliant agents (faster access, richer data) and throttle or block non-compliant agents (those attempting unauthorized access or violating rate limits). This is where you insert your brand’s non-negotiables, such as attribution requirements or user privacy protocols, thereby regaining control.
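
A bare-bones sketch of that programmatic enforcement, using invented tiers and an in-memory request log, might look like this. In practice the logic would live in an API gateway in front of the Agent API, but the principle is the same: known, compliant agents get generous limits, unknown or misbehaving ones get throttled or blocked.

```python
# Minimal sketch of "rules of engagement" enforcement: agents are identified by an
# API key, assigned a tier, and throttled or blocked by a sliding-window rate limit.
# Tiers, limits, and the in-memory stores are illustrative assumptions.

import time
from collections import defaultdict, deque

TIERS = {
    "partner":  {"requests_per_minute": 600},  # compliant, attributed agents get richer access
    "standard": {"requests_per_minute": 60},
    "blocked":  {"requests_per_minute": 0},    # e.g. repeated unauthorized-access attempts
}

AGENT_REGISTRY = {"agent-key-abc": "partner", "agent-key-xyz": "standard"}
_request_log: dict[str, deque] = defaultdict(deque)   # api_key -> recent request timestamps

def allow_request(api_key: str, now: float | None = None) -> bool:
    """Return True if this agent may make a request right now."""
    now = time.time() if now is None else now
    tier = AGENT_REGISTRY.get(api_key, "blocked")      # unknown agents are not served
    limit = TIERS[tier]["requests_per_minute"]
    window = _request_log[api_key]
    while window and now - window[0] > 60:             # drop entries older than one minute
        window.popleft()
    if limit == 0 or len(window) >= limit:
        return False
    window.append(now)
    return True

print(allow_request("agent-key-abc"))   # True: registered partner within its limit
print(allow_request("unknown-agent"))   # False: unregistered agents are blocked
```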

Step 3: Offer Value-Added Agent Services and Data

This is the shift from defense to offense. Give agents a reason to partner with you and prefer your site. Offer exclusive agent-only endpoints that provide aggregated, structured data your competitors don’t, such as sustainable sourcing information, local inventory availability, or complex configurator data. This creates a competitive advantage where the agent actually prefers to send traffic to your optimized channel because it provides a superior outcome for the human user.
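
Continuing the same hypothetical Agent API, an agent-only enriched endpoint could expose exactly this kind of structured detail. The fields and values below are invented for illustration.

```python
# Minimal sketch of a value-added, agent-only endpoint (same FastAPI assumptions as
# the Agent API sketch above): enriched attributes such as sustainability scores,
# local inventory, and configurator data that are not on the public product page.

from fastapi import FastAPI

app = FastAPI(title="Agent API - enriched endpoints")

ENRICHED = {
    "SKU-1001": {
        "sustainability": {"recycled_materials_pct": 35, "repairability_score": 8},
        "local_inventory": [{"store": "Seattle-01", "units": 6},
                            {"store": "Portland-02", "units": 2}],
        "configurator": {"colors": ["black", "sand"], "warranty_years": [1, 3]},
    },
}

@app.get("/agent/v1/products/{sku}/enriched")
def enriched_product(sku: str) -> dict:
    """Agent-only detail that makes this channel the one the agent prefers."""
    return {"sku": sku, **ENRICHED.get(sku, {})}
```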

Case Study 1: The Furniture Retailer and the AI Interior Designer

Challenge: Complex, Multivariable E-commerce Decisions

A high-end furniture and décor retailer struggled with low conversion rates because buying furniture requires complex decisions (size, material, delivery time). Customers were leaving the site to use third-party AI interior design tools.

Proactive Partnership:

The retailer created a “Design Agent API.” This API didn’t just provide price and SKU; it offered rich, structured data on 3D model compatibility, real-time customization options, and material sustainability scores. They partnered with a leading AI interior design platform, providing the agent direct, authorized access to this structured data. The AI agent, in turn, could generate highly accurate virtual room mock-ups using the retailer’s products. This integration streamlined the complex path to purchase, turning the agent from a competitor into the retailer’s most effective pre-visualization sales tool.

Case Study 2: The Specialty Grocer and the AI Recipe Planner

Challenge: Fragmented Customer Journey from Inspiration to Purchase

An online specialty grocer, focused on rare and organic ingredients, saw their customers using third-party AI recipe planners and shopping list creators, which often failed to locate the grocer’s unique SKUs or sent traffic to competitors.

Proactive Partnership:

The grocer developed a “Recipe Fulfillment Endpoint.” They partnered with two popular AI recipe apps. When a user generated a recipe, the AI agent, using the grocer’s endpoint, could instantly check ingredient availability, price, and even offer substitute suggestions from the grocer’s unique inventory. The agent generated a “One-Click, Fully-Customized Cart” for the grocer. The grocer ensured the agent received a small attribution fee (a form of commission), turning the agent into a reliable, high-converting affiliate sales channel. This formalized partnership eliminated the friction between inspiration and purchase, driving massive, high-margin sales.
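
A toy sketch of such a fulfillment endpoint’s core logic, with invented inventory, substitutes, and fee rate, might look like this:

```python
# Minimal sketch of the "Recipe Fulfillment" idea: check ingredient availability,
# offer substitutes, and return a one-click cart with an attribution fee for the
# referring agent. Inventory, substitutes, and the fee rate are illustrative assumptions.

INVENTORY = {
    "saffron threads": {"price": 12.50, "in_stock": True},
    "smoked paprika":  {"price": 4.25,  "in_stock": True},
    "black garlic":    {"price": 6.00,  "in_stock": False},
}
SUBSTITUTES = {"black garlic": "roasted garlic paste"}
ATTRIBUTION_RATE = 0.03   # 3% fee credited to the referring agent

def build_cart(ingredients: list[str], agent_id: str) -> dict:
    items, notes = [], []
    for name in ingredients:
        product = INVENTORY.get(name)
        if product and product["in_stock"]:
            items.append({"item": name, "price": product["price"]})
        elif name in SUBSTITUTES:
            sub = SUBSTITUTES[name]
            items.append({"item": sub, "price": INVENTORY.get(sub, {"price": 5.00})["price"]})
            notes.append(f"Substituted {sub} for {name}")
        else:
            notes.append(f"Unavailable: {name}")
    subtotal = round(sum(i["price"] for i in items), 2)
    return {
        "agent_id": agent_id,
        "items": items,
        "notes": notes,
        "subtotal": subtotal,
        "agent_attribution_fee": round(subtotal * ATTRIBUTION_RATE, 2),
    }

print(build_cart(["saffron threads", "black garlic", "smoked paprika"], agent_id="recipe-app-01"))
```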

The Human-Centered Imperative

Ultimately, this is a human-centered change challenge. The human customer trusts their AI agent to act on their behalf. By providing a clean, transparent, and optimized path for the agent, the e-commerce brand is honoring that trust. The focus shifts from control over the interface to control over the data and the rules of interaction. This strategy not only improves server performance and data integrity but also secures the brand’s place in the customer’s preferred, agent-mediated future.

“The AI agent is your customer’s proxy. If you treat the proxy poorly, you treat the customer poorly. The future of e-commerce is not about fighting the agents; it’s about collaborating with them to deliver superior value.” — Braden Kelley

The time to move beyond the reactive defense and into proactive partnership is now. The e-commerce leaders of tomorrow will be the ones who design the best infrastructure for the machines that shop for humans. Your essential first step: Form a dedicated internal team to prototype your Agent API, defining the minimum viable, structured data you can share to incentivize collaboration over scraping.

Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.