Category Archives: Technology

Distributed Quantum Computing

Unleashing the Networked Future of Human Potential

LAST UPDATED: November 21, 2025 at 5:49 PM

GUEST POST from Art Inteligencia

For years, quantum computing has occupied the realm of scientific curiosity and theoretical promise. The captivating vision of a single, powerful quantum machine capable of solving problems intractable for even the most potent classical supercomputers has long driven research. However, the emerging reality of practical, fault-tolerant quantum computation is proving to be less about a single monolithic giant and more about a network of interconnected quantum resources. Recent news, highlighting major collaborations between industry titans, signals a pivotal shift: the world is moving aggressively towards Distributed Quantum Computing.

This isn’t merely a technical upgrade; it’s a profound architectural evolution that will dramatically accelerate the realization of quantum advantage and, in doing so, demand a radical human-centered approach to innovation, ethics, and strategic foresight across every sector. For leaders committed to human-centered change, understanding this paradigm shift is not optional; it’s paramount. Distributed quantum computing promises to unlock unprecedented problem-solving capabilities, but only if we proactively prepare our organizations and our people to harness its immense power ethically and effectively.

The essence of Distributed Quantum Computing lies in connecting multiple, smaller quantum processors — each a “quantum processing unit” (QPU) — through quantum networks. This allows them to function collectively as a much larger, more powerful, and inherently more resilient quantum computer, capable of tackling problems far beyond the scope of any single QPU. This parallel, networked approach will form the bedrock of the future quantum internet, enabling a world where quantum resources are shared, secured, and scaled globally to address humanity’s grand challenges.
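
To make the architecture a little more concrete, here is a minimal, purely illustrative Python sketch of the scheduling idea behind distributed quantum workloads. The QPU class, the qubit counts, and the capacity-based partitioning are hypothetical stand-ins rather than any vendor's API; real distributed quantum middleware would also have to manage entanglement distribution and cross-network error correction.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class QPU:
    """A single networked quantum processing unit (illustrative stand-in)."""
    name: str
    qubits: int

def partition(subproblems: List[str], qpus: List[QPU]) -> Dict[str, List[str]]:
    """Assign each subproblem to the QPU with the most spare relative capacity.

    This only illustrates the scheduling idea; it does not model entanglement
    links, error correction, or result fusion across the network.
    """
    load = {q.name: 0 for q in qpus}
    plan: Dict[str, List[str]] = {q.name: [] for q in qpus}
    for job in subproblems:
        # Pick the QPU with the lightest load relative to its qubit count.
        target = min(qpus, key=lambda q: load[q.name] / q.qubits)
        plan[target.name].append(job)
        load[target.name] += 1
    return plan

if __name__ == "__main__":
    network = [QPU("lab-a", 127), QPU("lab-b", 433), QPU("lab-c", 156)]
    jobs = [f"molecular-fragment-{i}" for i in range(8)]
    for qpu_name, assigned in partition(jobs, network).items():
        print(qpu_name, assigned)
```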

The Three-Dimensional Impact of Distributed Quantum Computing

The strategic shift to distributed quantum computing creates a multi-faceted impact on innovation and organizational design:

1. Exponential Scaling of Computational Power

By linking individual QPUs into a cohesive network, we overcome the physical limitations of building ever-larger single quantum chips. This allows for an exponential scaling of computational power that dramatically accelerates the timeline for solving currently intractable problems in areas like molecular simulation, complex optimization, and advanced cryptography. This means a faster path to new drugs, revolutionary materials, and genuinely secure communication protocols for critical infrastructure.

2. Enhanced Resilience and Fault Tolerance

Individual QPUs are inherently susceptible to noise and errors, a significant hurdle for practical applications. A distributed architecture offers a robust path to fault tolerance through redundancy and sophisticated error correction techniques spread across the entire network. If one QPU encounters an error, the network can compensate, making quantum systems far more robust and reliable for real-world, long-term quantum solutions.

3. Distributed Data & Security Implications

Quantum networks will enable the secure distribution of quantum information, paving the way for truly unbreakable quantum communication (e.g., Quantum Key Distribution – QKD) and distributed quantum sensing. This has massive implications for national security, the integrity of global financial transactions, and any domain requiring ultra-secure, decentralized data handling. Concurrently, it introduces pressing new considerations for data sovereignty, ethical data access, and the responsible governance of this powerful technology.

Key Benefits for Human-Centered Innovation and Change

Organizations that proactively engage with and invest in understanding distributed quantum computing will gain significant competitive and societal advantages:

  • Accelerated Breakthroughs: Dramatically faster discovery cycles in R&D for pharmaceuticals, advanced materials science, and clean energy, directly impacting human health, environmental sustainability, and quality of life.
  • Unprecedented Problem Solving: The ability to tackle highly complex optimization problems (e.g., global logistics, nuanced climate modeling, real-time financial market predictions) with a level of accuracy and speed previously unimaginable, leading to greater efficiency and resource allocation.
  • New Security Paradigms: The capacity to develop next-generation, quantum-resistant encryption and establish truly unhackable communication networks, profoundly protecting critical infrastructure, sensitive data, and individual privacy against future threats.
  • Decentralized Innovation Ecosystems: Foster entirely new models of collaborative research and development where diverse organizations can securely pool quantum resources, accelerating open science initiatives and tackling industry-wide challenges more effectively.
  • Strategic Workforce Transformation: Drives the urgent need for comprehensive up-skilling and re-skilling programs in quantum information science, preparing a human workforce capable of designing, managing, and ethically leveraging quantum solutions, ensuring human oversight and value creation.

Case Study 1: Pharma’s Quantum Drug Discovery Network

Challenge: Simulating Complex Protein Folding for Drug Design

A global pharmaceutical consortium faced an intractable problem: accurately simulating the dynamic folding behavior of highly complex proteins to design targeted drugs for debilitating neurological disorders. Classical supercomputers could only approximate these intricate molecular interactions, leading to incredibly lengthy, expensive, and often unsuccessful trial-and-error processes in drug synthesis.

Distributed Quantum Intervention:

The consortium piloted a collaborative Distributed Quantum Simulation Network. Instead of one pharma company trying to acquire or develop a single, massive QPU, they leveraged a quantum networking solution to securely link smaller QPUs from three different member labs (each in a separate geographical location). Each QPU was assigned to focus on simulating a specific, interacting component of the target protein, and the distributed network then combined their entangled computational power to run highly complex simulations. Advanced quantum middleware managed the secure workload distribution and the fusion of quantum data.

The Human-Centered Lesson:

This networked approach allowed for a level of molecular simulation previously impossible, significantly reducing the vast search space for new drug candidates. It fostered unprecedented, secure collaboration among rival labs, effectively democratizing access to cutting-edge quantum resources. The consortium successfully identified several promising lead compounds within months, reducing R&D costs by millions and dramatically accelerating the potential path to a cure for a debilitating disease. This demonstrated that distributed quantum computing not only solves technical problems but also catalyzes human collaboration for greater collective societal good.

Case Study 2: The Logistics Giant and Quantum Route Optimization

Challenge: Optimizing Global Supply Chains in Real-Time

A major global logistics company struggled profoundly with optimizing its vast, dynamic, and interconnected supply chain. Factors like constantly fluctuating fuel prices, real-time traffic congestion, unforeseen geopolitical disruptions, and the immense complexity of last-mile delivery meant their classical optimization algorithms were perpetually lagging, leading to significant inefficiencies, increased carbon emissions, and frequently missed delivery windows.

Distributed Quantum Intervention:

The company made a strategic investment in a dedicated quantum division, which then accessed a commercially available Distributed Quantum Optimization Service. This advanced service securely connected their massive logistics datasets to a network of QPUs located across different cloud providers globally. The distributed quantum system could process millions of variables and complex constraints in near real-time, constantly re-optimizing routes, warehouse inventory, and transportation modes based on live data feeds from myriad sources. The output was not just a single best route, but a probabilistic distribution of highly optimal solutions.

The Human-Centered Lesson:

The quantum-powered optimization led to an impressive 15% reduction in fuel consumption (and thus emissions) and a 20% improvement in on-time delivery metrics. Critically, it freed human logistics managers from the constant, reactive fire-fighting, allowing them to focus on high-level strategic planning, enhancing customer experience, and adapting proactively to unforeseen global events. The ability to model complex interdependencies across a distributed network empowered human decision-makers with superior, real-time insights, transforming a historically reactive operation into a highly proactive, efficient, and sustainable one, all while significantly reducing their global carbon footprint.

Companies and Startups to Watch in Distributed Quantum Computing

The ecosystem for distributed quantum computing is rapidly evolving, attracting significant investment and innovation. Key players include established tech giants like IBM (with its quantum networking efforts and Quantum Network Units – QNUs) and Cisco (investing heavily in the foundational quantum networking infrastructure). Specialized startups are also emerging to tackle the unique challenges of quantum interconnectivity, hardware, and middleware, such as Quantum Machines (for sophisticated quantum control systems), QuEra Computing (pioneering neutral atom qubits for scalable architectures), and PsiQuantum (focused on photonic quantum computing with a long-term goal of fault tolerance). Beyond commercial entities, leading academic institutions like QuTech (TU Delft) are driving foundational research into quantum internet protocols and standards, forming a crucial part of this interconnected future.

The Human Imperative: Preparing for the Quantum Era

Distributed quantum computing is not a distant fantasy; it is an active engineering and architectural challenge unfolding in real-time. For human-centered change leaders, the imperative is crystal clear: we must begin preparing our organizations, developing our talent, and establishing robust ethical frameworks today, not tomorrow.

This means actively fostering quantum literacy across our workforces, identifying strategic and high-impact use cases, and building diverse, interdisciplinary teams capable of bridging the complex gap between theoretical quantum physics and tangible, real-world business and societal value. The future of innovation will be profoundly shaped by our collective ability to ethically harness this networked computational power, not just for unprecedented profit, but for sustainable progress that genuinely benefits all humanity.

“The quantum revolution isn’t coming as a single, overwhelming wave; it’s arriving as a distributed, interconnected network. Our greatest challenge, and our greatest opportunity, is to consciously connect the human potential to its immense power.”

Frequently Asked Questions About Distributed Quantum Computing

1. What is Distributed Quantum Computing?

Distributed Quantum Computing involves connecting multiple individual quantum processors (QPUs) via specialized quantum networks to work together on complex computations. This allows for far greater processing power, enhanced resilience through fault tolerance, and broader problem-solving capability than any single quantum computer could achieve alone, forming the fundamental architecture of a future “quantum internet.”

2. How is Distributed Quantum Computing different from traditional quantum computing?

Traditional quantum computing focuses on building a single, monolithic, and increasingly powerful quantum processor. Distributed Quantum Computing, in contrast, aims to achieve computational scale and inherent fault tolerance by networking smaller, individual QPUs. This architectural shift addresses physical limitations and enables new applications like ultra-secure quantum communication and distributed quantum sensing that are not feasible with single QPUs.

3. What are the key benefits for businesses and society?

Key benefits include dramatically accelerated breakthroughs in critical fields like drug discovery and advanced materials science, unprecedented optimization capabilities for complex problems (e.g., global supply chains, climate modeling), enhanced data security through quantum-resistant encryption, and the creation of entirely new decentralized innovation ecosystems. It also highlights the urgent need for strategic workforce transformation and robust ethical governance frameworks to manage its powerful implications.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

Image credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Larger-Than-Life Story of Isaac Merritt Singer

Sewing up the Competition

GUEST POST from John Bessant

‘To be or not to be…?’

Sooner or later an actor will find themselves declaiming those words – whether delivering Hamlet’s soliloquy or reflecting on the precarious career prospects of the thespian calling. If the answer turns out to be in the ‘not to be…’ direction then the follow-up question is what else might you be. And if you have a leaning towards high risk options you might select ‘become an entrepreneur’ as an alternative choice.

Torquay is a drama queen of a town, displaying itself in the summer for the tourists who flock to the English Riviera, attracted by its mild weather and (occasionally) sparkling blue bay. Full of larger-than-life characters, it is the birthplace and home of Agatha Christie and still hosts plenty of theaters to add to the offstage stories playing out in the streets. And tucked away in the town cemetery is the last resting place of one of the largest of those characters, an actor and entrepreneur to the end: Isaac Merritt Singer, father of the sewing machine and responsible for much more besides.

Born in 1811 in Pittstown, New York, Singer was the youngest of eight children, and from an early age learned to hustle, taking on various odd jobs and picking up the skills of joinery and lathe turning. His passion for acting emerged early; when he was twelve he ran away to join an acting troupe called the Rochester Players. Even in those days acting was not a reliable profession, and so at nineteen he worked as an apprentice machinist, a move which helped support his early days of family life; he married fifteen-year-old Catherine Haley and had two children with her before finally succumbing once again to the siren call of the stage and joining the Baltimore Strolling Players.

His machinist studies paid off, however, when in 1839 he patented a rock-drilling machine.

He’d been working with an older brother to help dig the Illinois waterway and saw how he could improve the process; it worked and he sold it for $2,000 (around $150,000 in today’s money). This windfall gave him the chance to return to the dramatic world and he formed a troupe known as the “Merritt Players”.

On tour he appeared onstage under the name “Isaac Merritt”, with a certain Mary Ann Sponsler who called herself “Mrs. Merritt”; backstage they looked after a family which had begun growing in 1837 and had swollen to eight children. The tour lasted about five years, during which time he became engaged to her (neglecting to mention that he was already married).

Fortunately he’d kept up his craft skills, and he developed and patented a “machine for carving wood and metal” on April 10, 1849. Financially struggling once again, he moved the family back to New York City, hoping to market his machine. He built a prototype and, more importantly, met a bookseller, G. B. Zieber, who was to become his partner and long-suffering financier.

Unfortunately the prototype was destroyed in a fire; Zieber persuaded Singer to make a new start in Boston in 1850, using space kindly offered by Orson Phelps, who ran a small machine shop. Orders for his wood-cutting machine were not, however, forthcoming, so he turned his inventive eye to the world of sewing machines.

Singer Sewing Machine

A short history of sewing machines…

People started sewing by hand some 20,000 years ago, when the first needles were made from bones or animal horns and the thread from animal sinew. But it remained a largely manual process until the Industrial Revolution in the 18th century, when the growing demand for clothing was more than manual labor could really meet. Demand-pull innovation prompted plenty of entrepreneurs to try their hand at improving on the basic manual process.

Their task wasn’t easy; sewing is a complex task involving different materials whose shape isn’t fixed in the way that wood or metal can be. And manual labor was still cheaply available so the costs of a machine to replace it would also need to be low. Not surprisingly many of the early inventors died in straitened circumstances.

A German-born engineer working in England, Charles Fredrick Wiesenthal, can lay claim to one of the first patents, awarded in Britain in 1755 for a mechanical device to aid the art of sewing. But this was more of a mechanical aid; it wasn’t until 1790 that an English cabinet maker by the name of Thomas Saint was granted a patent for five types of varnishes and their uses, a machine for ‘spinning, twisting, and doubling the thread’, a machine for ‘stitching, quilting, or sewing’, and a machine for ‘platting or weaving’. A specification which didn’t quite include the kitchen sink but came pretty close to covering it!

His very broad-ranging patent somewhat obscured its real value – the machine for ‘stitching, quilting, or sewing’. (So much so that when the Patent Office republished older patents and arranged them into new classes, it was placed into ‘wearing apparel’ rather than ‘sewing and embroidering’).

But his machine brought together several novel features including a mechanism for feeding material into the machine and a vertical needle. It was particularly designed for working with leather to make saddles and bridles but it was adapted for other materials like canvas to make ship sails.

Saint’s vision somewhat outstripped his ability to make and sell the machine, but his underlying model introduced the key elements of what became the basic configuration – the ‘dominant design’ – for sewing machines. Much later, in 1874, a sewing machine manufacturer, William Newton Wilson, found Saint’s drawings in the UK Patent Office, made a few adjustments and built a working machine, which is still on display today in the Science Museum in London.

Saint wasn’t alone in seeing the possibilities in mechanization of sewing. Innovation often involves what’s called ‘swarming’ – many players see the potential and experiment with different designs, borrowing and building on these as they converge towards something which solves the core problem and eventually becomes the ‘dominant design’.

In the following years various attempts were made to develop a viable machine, some more successful than others. In 1804, two Englishmen, Thomas Stone and James Henderson, built a simple sewing device and John Duncan in Scotland offered an embroidery machine. An Austrian tailor, Josef Madersperger, presented his first working sewing machine publicly in 1814. And in 1818 John Doge and John Knowles invented America’s first sewing machine, but it could only sew a few bits of fabric before breaking.

But it wasn’t until 40 years after Saint’s patent that a viable machine emerged. Barthelemy Thimonnier, a French tailor, invented a machine that used a hooked needle and one thread, creating a chain stitch. The patent for his machine was issued on 17 July 1830, and in the same year, he and his partners opened the first machine-based clothing manufacturing company in the world to create uniforms for the French Army.

(Unfortunately sewing machine inventors seem to have a poor track record as far as fire risk is concerned; Thimonnier’s factory was burned down, reportedly by workers fearful of losing their livelihood, following the issuing of the patent).

Over in America, Walter Hunt joined the party, bringing his contribution in 1832 in the form of the first lock-stitch machine. Up till then machines had used a simple chain stitch, but the lock stitch was a big step forward since it allowed for the tighter, more durable seams needed in many clothes. It wasn’t without its teething troubles and Hunt only sold a handful of machines; he only bothered to patent his idea much later, in 1854.

Meanwhile, in 1841, British inventors Newton and Archibold improved on the emerging technology with a better needle and the use of two pressing surfaces to keep the pieces of fabric in position. And John Greenough registered a patent for the first sewing machine in the United States in 1842.

Each of these machines had some of the important elements but it was only in 1844 that they converged in the machine built by English inventor John Fisher. All should have been well – except that the apparent curse of incomplete filing (which seems to have afflicted many sewing machine inventors) struck him down. His patent was delayed and he failed to get the recognition he probably deserves as the architect of the modern sewing machine.

Instead it was Elias Howe from America, with his 1845 machine (which closely resembled Fisher’s), who took the title. His patent was for “a process that uses thread from 2 different sources….”, building on the idea of a lockstitch which Walter Hunt had actually developed thirteen years earlier. Hunt’s failure to patent this meant that Howe could eventually reap the not inconsiderable rewards, earning $5 for every sewing machine sold in America which used the lockstitch principle.

Howe’s machine was impressive but, like all the others, slow to take off, and he decided to try to market it in Europe, sailing for England and leaving the American market open for other entrants, including one Isaac Merritt Singer, who patented his machine in 1851.

Singer Sewing Table

Image: Public domain, via Wikimedia Commons

Singer’s machine

Singer became interested in sewing machines by trying to make them better. Orson Phelps (in whose machine shop Singer was working) had recently started making sewing machines for the modestly successful Lerow and Blodgett Company. Zieber and Phelps convinced Singer to take a look at the machine to see if he could improve upon its design.

Legend has it that Singer was sceptical at first, questioning its market potential. “You want to do away with the only thing that keeps women quiet?” But they managed to persuade him and in 1850, the three men formed a partnership, with Zieber putting up the money, Singer doing the inventing, and Phelps the manufacturing.

Instead of repairing the machine, Singer redesigned it by installing a treadle to help power the fabric feed and rethinking the way the shuttle mechanism worked, replacing the curved needle with a straight one.

Like Henry Ford after him, Singer’s gift was not in pure invention but rather in adapting and recombining different elements. His eventual design for a machine combined elements of Thimonnier’s, Hunt’s and Howe’s machines; the idea of using a foot treadle to leave both hands free dated back to the Middle Ages.

Importantly, the new design caused less thread breakage with the innovation of an arm-like apparatus that extended over the worktable, holding the needle at its end. It could sew 900 stitches per minute, a dramatic improvement over an accomplished seamstress’s rate of 40 on simple work. On an item as complex as a shirt the time required could be reduced from fifteen hours to less than one.

Singer obtained US Patent number 8294 for his improvements on August 12, 1851.

But having perfected the machine there were a couple of obstacles in the way of their reaping the rewards from transforming the market. First was the problem of economics; their machine (and others like it) opened up the possibility of selling for home use – but at $125 each ($4,000 in 2022 dollars) the machines were expensive and slow to catch on.

And then there was the small matter of sorting out the legal tangles involved in the intellectual property rights to sewing machinery.

Climbing out of the patent thicket

Elias Howe had been understandably annoyed to find Singer’s machine using elements of his own patent and duly took him to court for patent infringement. Singer tried to argue that Howe had actually infringed upon Walter Hunt’s original idea; unfortunately for him, since Hunt hadn’t patented it, that argument failed. The judge ruled that Hunt’s lock-stitch idea was free for anyone – including Howe – to use. Consequently, Singer was forced to pay a lump sum and patent royalties to Howe.

(Interestingly, if John Fisher’s UK patent hadn’t been filed wrongly, he too would have been involved in the lawsuit, since both Howe’s and Singer’s designs were almost identical to the one Fisher created).

Sounds complicated? It gets worse, mainly because they weren’t the only ones in the game. Inventors like Allen B. Wilson were slugging it out with others like John Bradshaw; both of them had developed and patented devices which improved on Singer and Howe’s ideas. Wilson partnered with Nathaniel Wheeler to produce a new machine which used a hook instead of a shuttle and was much quieter and smoother in operation. That helped the Wheeler & Wilson Company make and sell more machines in the 1850s and 1860s than any other manufacturer. Wilson also invented the feed mechanism that is still used on every sewing machine today, drawing the cloth through the machine in a smooth and even fashion. Others, like Charles Miller, patented machinery to help with accessories like buttonhole stitching.

The result was that in the 1850s a rapidly increasing number of companies were vying with each other not only to produce sewing machines but also to file lawsuits for patent infringement by the others. It became known as the Sewing Machine War – and like most wars risked ending up benefiting no-one. It’s an old story and often a vicious and expensive one in which the lawyers end up the only certain winners.

Fortunately this one, though not without its battles, was to arrive at a mutually successful cease-fire. In 1856, the major manufacturers (including Singer, Wheeler & Wilson) met in Albany, New York and Orlando Potter, president of the Grover and Baker Company, proposed that, rather than squander their profits on litigation, they pool their patents.

They agreed to form the Sewing Machine Combination, merging nine of the most important patents; they were able to secure the cooperation of Elias Howe by offering him a royalty on every sewing machine manufactured. Any other manufacturer had to obtain a license for $15 per machine. This lasted until 1877 when the last patent expired.

Singing the Singer song

So the stage was finally set for Isaac Singer to act his most famous role – one which predated Henry Ford as one of the fathers of mass production. In late 1857, Singer opened in New York the world’s first facility for mass-producing something other than firearms and was soon able to cut production costs. Sales volume increased rapidly; in 1855 he’d sold 855 machines, a year later over 2,500, and in 1858 his production reached 3,591 and he opened three more New York manufacturing plants.

Efficiency in production allowed the machines to drop in price to $100, then $60, then $30, and demand exploded. By 1860, selling over 13,000 machines, Singer had become the largest manufacturer of sewing machines in the world. Ten years later that number had risen tenfold; twenty years on they were selling over half a million machines a year.

Like Ford he was something of a visionary, seeing the value of a systems approach to the problem of making and selling sewing machines. His was a recombinant approach, taking ideas like standardised and interchangeable parts, division of labour, specialisation of key managerial roles and intensive mechanisation to mass produce and bring costs down.

His thespian skills were usefully deployed in the marketing campaign; amongst other stunts he staged demonstrations of the sewing machine in city centre shop windows where bystanders could watch a (skilled) young woman effortlessly sewing her own creations. And he was famous for his ‘Song of the Shirt’ number which he would deliver as background accompaniment in events at which, once again, an attractive and accomplished seamstress would demonstrate the product.

It’s often easy to overlook the contribution of others in the innovation story – not least when the chief protagonist is an actor with a gift for self-publicity. Much of the development of the Singer business was actually down to the ideas and efforts of his partner at the time, Edward Cabot Clark. It was Clark, for example, who came up with the concept of instalment purchasing plans, which literally opened the door to many salesmen trying to push their product. He also suggested the model of trading in an older model for one with newer features – something enthusiastically deployed a century later in the promotion of a host of products from smart-phones to saloon cars.

Singer and Clark worked to create the necessary infrastructure to support scaling the business. They opened attractive showrooms, developed a rapid spare parts distribution system and employed a network of repair mechanics.

This emerging market for domestic sewing machines attracted others; in 1863 an enterprising tailor, Ebenezer Butterick, began selling dress patterns and helped open up the home dressmaking business. Magazines, pattern books and sewing circles emerged as women saw the opportunities in doing something which could bring both social and economic benefit to their lives. Schools and colleges began offering courses to teach the required skills, many of them helpfully sponsored by the Singer Sewing Machine Company.

It wasn’t just a new business opportunity; this movement provided important impetus to a redefinition of the role of women in the home and their access to activity which could become more than a simple hobby. Singer’s advertising put women in control with advertisements suggesting that their machine was ‘… sold only by the maker directly to the women of the family’. Charitable groups such as the Ladies Work Society and the Co-operative Needlewoman’s Society emerged aimed at helping poorer women find useful skills and respectable employment in sewing.

By 1863 Singer’s machine had become America’s most popular sewing machine and was on its way to a similar worldwide role. The company pioneered international manufacturing, particularly in Europe, having first tried to enter the overseas market by licensing its patents to others. Quality and service problems forced a rethink, and they moved instead to setting up their own facilities.

Their Clydebank complex in Scotland, opened in 1885, became the world’s largest sewing machine factory, with two main manufacturing buildings on three levels. One made domestic machines, the other industrial models; the whole was overseen by a giant 60-metre-high tower emblazoned with the name ‘Singer’ and carrying four clock faces, the world’s largest. Employing over 3,500 people, it turned out 8,000 sewing machines a week. By the 1900s, it was making over 1.5 million machines to be sold around the world.

Estimates place Singer’s market share at 80% of global production, from 1880 through at least 1920 and beyond. Over one thousand different models for industrial and home use were offered. Singer had 1,700 stores in the United States and 4,300 overseas, supported by 60,000 salesmen.

Singer Sewing Machine Two

Image: Public domain via Wikimedia Commons

Off-stage activities

Singer was a big man with a commanding presence and a huge appetite for experiences. But he had no need of a Shakespeare to conjure up a plot for his own dramatic personal life; his was quite rich enough. The kind where it might help to have a few thousand miles of Atlantic Ocean between you and what’s going on when your past is suddenly and rapidly catching up with you…

(Pay attention, this gets more complicated than the patent thicket).

Catherine, his first wife, had separated from him back in the 1830s but remained married to him, benefitting from his payments to her. She finally agreed to a divorce in 1860, at which point his long-suffering mistress and mother of eight of his children, Mary Ann, believed Isaac was free to marry her. He wasn’t keen to change his arrangements with her, but in any case the question became somewhat academic.

In 1860 she was riding in her carriage along Fifth Avenue in New York when she happened to see Isaac in another carriage, seated alongside Mary McGonigal, one of Isaac’s employees about whom Mary Ann already had suspicions. Confronting him, she discovered that not only had he fathered seven children with McGonigal but that he had also had an affair with her sister Kate!

Hell hath no fury like a woman scorned and Mary Ann really went for Isaac, having him arrested and charged with bigamy; he fled to London on bail taking Mary McGonigal with him. But leaving behind even more trouble; further research uncovered a fourth ‘wife’, one Mary Walters who had been one of his glamorous sewing machine demonstrators. She also added another child to the list of his offspring. The final tally of his New York wives netted a total of four families, all living in Manhattan in ignorance of each other with a total of sixteen of his children!

Isaac’s escape to England allowed him enough breathing space to pick up on another affair he had started in France the previous year with Isabella Boyer, a young Frenchwoman whose face had been the model for the Statue of Liberty. He’d managed to leave her pregnant and so she left her husband and moved to England to join Isaac, marrying him in 1863. They settled down to life on their huge estate in Devon where they had a further six children.

Legacy

Singer left behind a lot – not least a huge fortune. On his death in 1875 he was worth around $13m (close to $400 million today). From considerably humbler beginnings he’d managed to make his way to a position where he was able to buy a sizeable plot of land near Torquay and build a grand 110-room house (Oldway) modeled on the royal palace at Versailles, complete with a hall of mirrors, maze and grotto garden.

And when he was finally laid to rest it was in a cedar, silver, satin and oak-lined marble tomb in a funeral attended by over 2000 mourners.

His wider legacy is, of course, the sewing machine which formed the basis of the company he helped found and which became such a powerful symbol of industrial and social innovation. He reminds us that innovation isn’t a single flash of inspiration but an extended journey and he deployed his skills at navigating that journey in many directions. He’s of course remembered for his product innovations like the sewing machine but throughout his life he developed many ideas into serviceable (and sometimes profitable) ventures.

But he also pioneered extensive process innovation, anticipating Henry Ford’s mass production approach to change the economics of selling consumer goods and rethinking the ways in which his factories could continue to develop. He had the salesman’s gift, but his wasn’t just an easy patter to persuade reluctant adopters. Together with Edward Clark he pioneered ways of targeting and then opening up new markets, particularly in the emerging world of the domestic consumer. And he was above all a systems thinker, recognizing that the success or failure of innovation depends on thinking around a complete business model to ensure that good ideas have an architecture through which they can create value.

Isaac Singer retained his interest in drama up to his death, leaving his adopted home of Torbay with a selection of imposing theaters which still offer performances today. It can only be a matter of time before someone puts together the script for a show based on this larger than life character and the tangled web that he managed to weave.


You can find my podcast here and my videos here

And if you’d like to learn with me take a look at my online courses here

And subscribe to my (free) newsletter here

All images generated by Substack AI unless otherwise indicated

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Don’t Adopt Artificial Incompetence

GUEST POST from Shep Hyken

I’ve been reviewing my customer experience research, specifically the section on the future of customer service and AI (Artificial Intelligence). A few findings prove that customers are frustrated and lack confidence in how companies are using AI:

  • In general, 57% of customers are frustrated by AI-fueled self-service options.
  • 49% of customers say technologies like AI and ChatGPT scare them.
  • 51% of customers have received wrong or incorrect information from an AI self-service bot.

As negative as these findings sound, there are plenty of findings that point to AI getting better and more customers feeling comfortable using AI solutions. The technology continues to improve quickly. While it’s only been five months since we surveyed more than 1,000 U.S. consumers, I bet a new survey would show continued improvement and comfort level regarding AI. But for this short article, let’s focus on the problem that needs to be resolved.

Upon reviewing the numbers, I realized that there’s another kind of AI: Artificial Incompetence. That’s my new label for companies that improperly use AI and cause customers to be frustrated, scared and/or receive bad information. After thinking I was clever and invented this term, I was disheartened to discover, after a Google search, that the term already exists; however, it’s not widely used.

So, AI – as in Artificial Incompetence – is a problem you don’t want to have. To avoid it, start by recognizing that AI isn’t perfect. Be sure to have a human backup that’s fast and easy to reach when the customer feels frustrated, angry, or scared.

And now, as the title of this article implies, there’s more. After sharing the new concept of AI with my team, we brainstormed and had fun coming up with two more phrases based on some of the ideas I covered in my past articles and videos:

Feedback Constipation: When you get so much feedback and don’t take action, it’s like eating too much and not being able to “go.” (I know … a little graphic … but it makes the point.) This came from my article Turning Around Declining Customer Satisfaction, which teaches that collecting feedback isn’t valuable unless you use it.

Jargon Jeopardy: Most people – but not everyone – know what CX means. If you are using it with a customer and they don’t know what it means, how do you think they feel? I was once talking to a customer service rep who kept using abbreviations. I could only guess what they meant. So I asked him to stop with the E-I-E-I-O’s (referencing the lyrics of the song about Old MacDonald’s farm). This was the main theme of my article titled Other Experiences Exist Beyond Customer Experience (EX, WX, DX, UX and more).

So, this was a fun way of poking fun at companies that may think they are doing CX right (and doing it well), but where the customer’s perception is the opposite. Don’t use AI that frustrates customers and projects an image of incompetence. Don’t collect feedback unless you plan to use it. Otherwise, it’s a waste of everyone’s time and effort. Finally, don’t confuse customers – and even employees – with jargon and acronyms that make them feel like they are forced to relearn the alphabet.

Image Credits: 1 of 950+ FREE quote slides available at http://misterinnovation.com

This article originally appeared on Forbes.com

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Re-engineering Trust and Retention in the AI Contact Center

The Empathy Engine

LAST UPDATED: November 9, 2025 at 1:36 PM

by Braden Kelley

The contact center remains the single most critical point of human truth for a brand. It is where marketing promises meet operational reality. The challenge today, as highlighted by leaders like Bruce Gilbert of Young Energy at Customer Contact Week (CCW) in Nashville recently, is profound: customers expect friction-less experiences with empathetic responses. The solution is not merely throwing technology at the problem; it’s about strategically weaving automation into the existing human fabric to create an Empathy Engine.

The strategic error most organizations make is starting with the technology’s capability rather than the human need. The conversation must start with empathy, not the technology — focusing first on the customer and agent pain points. AI is not a replacement for human connection; it is an amplification tool designed to remove friction, build trust, and elevate the human agent’s role to that of a high-value relationship manager.

The Trust Imperative: The Cautious Adoption Framework

The first goal when introducing AI into the customer journey is simple: Building trust. The consumer public, after years of frustrating Interactive Voice Response (IVR) systems and rigid chatbots, remains deeply skeptical of automation. A grand, “all-in” AI deployment is often met with immediate resistance, which can manifest as call abandonment or increased churn.

To overcome this, innovation must adhere to a principle of cautious, human-centered rollout — a Cautious Adoption Framework: Starting small and starting with simple things can help to build this trust. Implement AI where the risk of failure is low and the utility is high — such as automating password resets, updating billing addresses, or providing initial diagnostics. These are the repetitive, low-value tasks that bore agents and frustrate customers. By successfully automating these simple, transactional elements, you build confidence in the system, preparing both customers and agents for more complex, AI-assisted interactions down the line. This approach honors the customer’s pace of change.
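
As a rough sketch of this “start small” principle, the snippet below routes only pre-approved, low-risk intents to automation and sends everything else straight to a human agent. The intent names, the confidence threshold, and the routing labels are illustrative assumptions, not a reference implementation.

```python
# Cautious-adoption routing sketch: automate only low-risk, transactional
# intents, and only when the intent classifier is highly confident.
LOW_RISK_INTENTS = {"password_reset", "update_billing_address", "order_status"}

def route(intent: str, confidence: float, threshold: float = 0.9) -> str:
    """Return 'automated_flow' only for approved intents at high confidence."""
    if intent in LOW_RISK_INTENTS and confidence >= threshold:
        return "automated_flow"
    return "human_agent"

print(route("password_reset", 0.97))   # -> automated_flow
print(route("billing_dispute", 0.95))  # -> human_agent (not on the approved list)
```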

The Agent Retention Strategy: Alleviating Cognitive Load

The operational cost of the contact center is inextricably linked to agent retention. Finding and keeping high-quality agents remains a persistent challenge, primarily because the job is often highly stressful and repetitive. AI provides a powerful retention tool by directly addressing the root cause: cognitive load.

Reducing the cognitive load and stress level on agents is a non-negotiable step for long-term operational health. AI co-pilots must be designed to act as true partners, not simply data overlays. They should instantly surface relevant knowledge base articles, summarize the customer’s entire history before the agent picks up the call, or even handle real-time data entry. This frees the human agent to focus entirely on the empathetic response — active listening, problem-solving, and de-escalation. By transforming the agent’s role from a low-paid data processor into a high-value relationship manager, we elevate the profession, directly improving agent retention and turning contact center employment into an aspirational career path.

The Systemic Challenge: Orchestrating the AI Ecosystem

A major limiting factor in today’s contact center is the presence of fragmented AI deployments. Many organizations deploy AI in isolated pockets — a siloed chatbot here, a transcription service there. The future demands that we move far beyond siloed AI. The goal is complete AI orchestration across the enterprise, requiring us to get the AIs to talk to each other.

A friction-less customer experience requires intelligence continuity: a Voice AI must seamlessly hand off its collected context to a Predictive AI (which assesses the call risk), which then informs the Generative AI (that drafts the agent’s suggested response). This is the necessary chain of intelligence that supports friction-less service. Furthermore, complexity demands a blended AI approach, recognizing that the solution may involve more than one method (generative vs. directed).
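
One way to picture this “chain of intelligence” is a single context object handed from stage to stage, as in the sketch below. Each function is a stand-in stub, and the intent label, risk score, and drafted reply are fabricated for illustration; the point is simply that no stage starts from zero, because the upstream context travels with the interaction.

```python
from typing import Any, Dict

def voice_ai(transcript: str) -> Dict[str, Any]:
    """Stand-in for a Voice AI: extracts intent and context from the call."""
    return {"transcript": transcript, "intent": "billing_question"}

def predictive_ai(context: Dict[str, Any]) -> Dict[str, Any]:
    """Stand-in for a Predictive AI: scores risk using the shared context."""
    context["churn_risk"] = 0.72 if context["intent"] == "billing_question" else 0.10
    return context

def generative_ai(context: Dict[str, Any]) -> Dict[str, Any]:
    """Stand-in for a Generative AI: drafts an agent-facing suggestion."""
    context["suggested_reply"] = (
        f"High-risk caller ({context['churn_risk']:.0%}): acknowledge the billing "
        "concern, apologize for the confusion, and offer a detailed bill review."
    )
    return context

# Intelligence continuity: one context object handed along the whole chain.
context = generative_ai(predictive_ai(voice_ai("Why did my bill go up this month?")))
print(context["suggested_reply"])
```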

For high-compliance tasks, a directed approach ensures precision: for instance, a flow can insert “read as is” instructions for regulatory disclosures, ensuring legal text is delivered exactly as designed. For complex, personalized problem-solving, a generative approach is vital. The best systems understand the regulatory and emotional context, knowing when to switch modes instantly and without customer intervention.
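
A minimal sketch of this blended, directed-versus-generative routing follows. The step format, the sample disclosure text, and the generate_reply stub standing in for a generative model call are all illustrative assumptions; the point is only that compliance text passes through verbatim while open-ended steps are delegated to the model.

```python
from typing import Dict, List

def generate_reply(prompt: str) -> str:
    """Stand-in for a call to a generative model."""
    return f"[drafted, personalized response for: {prompt}]"

def run_flow(steps: List[Dict[str, str]]) -> List[str]:
    """Execute a blended flow: 'directed' steps are emitted exactly as written,
    'generative' steps are delegated to the model."""
    transcript = []
    for step in steps:
        if step["mode"] == "directed":
            transcript.append(step["text"])          # read as is (compliance text)
        else:
            transcript.append(generate_reply(step["text"]))
    return transcript

flow = [
    {"mode": "generative", "text": "Customer asks why their bill increased"},
    {"mode": "directed",
     "text": "This call may be recorded for quality and compliance purposes."},
]
print("\n".join(run_flow(flow)))
```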

The Strategic Pivot: Investing in Predictive Empathy

The ultimate strategic advantage lies not in reacting to calls, but in preventing them. This requires a deeper investment in data science, moving from descriptive reporting on what happened to predictive analytics to understand why our customers are calling in before they dial the number.

This approach, which I call Predictive Empathy, uses machine learning to identify customers whose usage patterns, payment history, or recent service interactions suggest a high probability of confusion or frustration (e.g., first-time promotions expiring, unusual service interruptions). The organization then proactively initiates a personalized, AI-assisted outreach to address the problem or explain the confusion before the customer reaches the point of anxiety and makes the call. This shifts the interaction from reactive conflict to proactive support, immediately lowering call volume and transforming brand perception.
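
A toy version of such a Predictive Empathy scorer might look like the following, assuming scikit-learn is available. The features, the synthetic training labels, and the 0.6 outreach threshold are invented for illustration; a real model would be trained on historical contact data and validated carefully before it ever triggered outreach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features per customer: [days_since_promo_expired,
# service_interruptions_last_30d, failed_payments_last_90d]
X_train = np.array([
    [0, 0, 0], [2, 0, 0], [0, 3, 1], [10, 1, 0],
    [1, 4, 2], [0, 0, 1], [14, 2, 1], [3, 0, 0],
])
y_train = np.array([0, 0, 1, 1, 1, 0, 1, 0])  # 1 = customer ended up calling support

model = LogisticRegression().fit(X_train, y_train)

def should_reach_out(features, threshold: float = 0.6) -> bool:
    """Trigger proactive, AI-assisted outreach when call probability is high."""
    call_probability = model.predict_proba([features])[0][1]
    return call_probability >= threshold

print(should_reach_out([12, 2, 1]))  # likely caller -> queue a proactive message
print(should_reach_out([0, 0, 0]))   # unlikely caller -> no outreach needed
```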

The Organizational Checkpoint: Post-Deployment Evolution

Once you’ve successfully implemented AI to address pain points, the work is not finished. A crucial strategic question must be addressed: What happens after AI deployment? What’s your plan?

As AI absorbs simple transactions, the nature of the calls that reach the human agent becomes disproportionately more complex, emotional, and high-value. This creates a skills gap in the remaining human workforce. The organization must plan for and fund the Up-skilling Initiative necessary to handle these elevated interactions, focusing on conflict resolution, complex sales, and deep relationship management. The entire organizational structure — training programs, compensation models, and career paths — must evolve to support this higher-skilled human workforce. By raising the value of the human role, the contact center transitions from a cost center into a profit-generating Relationship Hub.

Conclusion: Architecting the Human Layer

The goal of innovation in the contact center is not the elimination of the human, but the elevation of the human. By using AI to build trust, reduce cognitive load, enable predictive empathy, and connect disparate systems, we free the human agent to deliver on the fundamental customer expectation: a friction-less experience coupled with an empathetic response. This is how we re-engineer the contact center from a cost center into a powerful engine for talent retention and customer loyalty.

“AI handles the transaction. The human handles the trust. Design your systems to protect both.” — Braden Kelley

Your first step into the Empathy Engine: Map the single most stressful task for your top-performing agent and commit to automating 80% of its cognitive load using a simple AI co-pilot within the next 90 days.

What is that task for your organization?

Image credits: Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, insights captured from the Customer Contact Week session, panelists to mention, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.






Are We Suffering from AI Confirmation Bias?

GUEST POST from Geoffrey A. Moore

When social media first appeared on the scene, many of us had high hopes it could play a positive role in community development and civic affairs, as indeed it has. What we did not anticipate was the long-term impact of the digital advertising model that supported it. That model is based on click-throughs, and one of the most effective ways to increase them was to present content that reinforces the recipient’s existing views.

Statisticians call the attraction to one’s existing point of view confirmation bias, and we all have it. As individuals, we believe we are in control of this, but it is obvious that at the level of populations, we are not. Confirmation bias, fed first by social media, and then by traditional media once it is converted to digital, has driven political and social polarization throughout the world. It has been further inflamed by conspiracy theories, malicious communications, fake news, and the like. And now we are faced with the advent of yet another amplifier—artificial intelligence. A significant portion of the fears about how AI could impact human welfare stem from how easily it can be put to malicious use through disinformation campaigns.

The impact of all this on our political life is chilling. Polarized media amplifies the impact of extremism and dampens the impact of moderation. This has most obviously been seen in primary elections, but it has now carried over into general elections to the point where highly unqualified individuals who have no interest in public service hold some of the most important roles in state and federal government. The resulting dysfunction is deeply disturbing, but it is not clear if and where a balance can be found.

Part of the problem is that confirmation bias is an essential part of healthy socialization. It reflects the impact that narratives have on our personal and community identities. What we might see as arrant folly another person sees as a necessary leap of faith. Our founding fathers were committed to protecting our nation from any authority imposing its narratives on unwilling recipients, hence our Constitutional commitment to both freedom of religion and freedom of speech.

In effect, this makes it virtually impossible to legislate our way out of this dilemma. Instead, we must embrace it as a Darwinian challenge, one that calls for us as individuals to adapt our strategies for living to a dangerous new circumstance. Here I think we can take a lesson from our recent pandemic experience. Faced with the threat of a highly contagious, ever-mutating Covid virus, most of the developed economies embraced rapid vaccination as their core response. China, however, did not. It embraced regulation instead. What they and we learned is that you cannot solve problems of contagion through regulation.

We can apply this learning to dealing with the universe of viral memes that have infected our digital infrastructure and driven social discord. Instead of regulation, we need to think of vaccination. The vaccine that protects people from fake news and its many variants is called critical thinking, and the healthcare provider that dispenses it is called public education.

We have spent the past several decades focusing on the STEM wing of our educational system, but at the risk of exercising my own confirmation bias, the immunity protection we need now comes from the liberal arts. Specifically, it emerges from supervised classroom discussions in which students are presented with a wide variety of challenging texts and experiences accompanied by a facilitated dialog that instructs them in the practices of listening, questioning, proposing, debating, and ultimately affirming or denying the validity of the argument under consideration. These discussions are not about promoting or endorsing any particular point of view. Rather, they teach one how to engage with any point of view in a respectful, powerful way. This is the intellectual discipline that underlies responsible citizenship. We have it in our labs. We just need to get it distributed more broadly.

That’s what I think. What do you think?

Image Credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






The AI Agent Paradox

How E-commerce Must Proactively Manage Experiences Created Without Their Consent

LAST UPDATED: November 7, 2025 at 4:31 PM

GUEST POST from Art Inteligencia

A fundamental shift is underway in the world of e-commerce, moving control of the customer journey out of the hands of the brand and into the hands of the AI Agent. The recent lawsuit by Amazon against Perplexity regarding unauthorized access to user accounts by its agentic browser is not an isolated legal skirmish; it is a red flag moment for every company that sells online. The core challenge is this: AI agents are building and controlling the shopping experience — the selection, the price comparison, the checkout path — often without the e-commerce site’s knowledge or consent.

This is the AI Agent Paradox: The most powerful tool for customer convenience (the agent) simultaneously poses the greatest threat to brand control, data integrity, and monetization models. The era of passively optimizing a webpage is over. The future belongs to brands that actively manage their relationship with the autonomous, agentic layer that sits between them and their human customers.

The Three Existential Threats of the Autonomous Agent

Unmanaged AI agents, operating as digital squatters on your site, create immediate systemic problems:

  1. Data Integrity and Scraping Overload: Agents typically use resource-intensive web scraping techniques that overload servers and pollute internal analytics. The shopping experience they create is invisible to the brand’s A/B testing and personalization engines.
  2. Brand Bypass and Commoditization: Agents prioritize utility over loyalty. If a customer asks for “best price on noise-cancelling headphones,” the agent may bypass your brand story, unique value propositions, and even your preferred checkout flow, reducing your products to mere SKU and price points. This is the Brand Bypass threat.
  3. Security and Liability: Unauthorized access, especially to user accounts (as demonstrated by the Amazon-Perplexity case), creates massive security vulnerabilities and legal liability for the e-commerce platform, which is ultimately responsible for protecting user data.

The How-To: Moving from Resistance to Proactive Partnership

Instead of relying solely on defensive legal action (which is slow and expensive), e-commerce brands must embrace a proactive, human-centered API strategy. The goal is to provide a superior, authorized experience for the AI agents, turning them from adversaries into accelerated sales channels — and honoring the trust the human customer places in their proxy.

Step 1: Build the Agent-Optimized API Layer

Treat the AI agent as a legitimate, high-volume customer with unique needs (structured data, speed). Design a specific, clean Agent API separate from your public-facing web UI. This API should allow agents to retrieve product information, pricing, inventory status, and execute checkout with minimal friction and maximum data hygiene. This immediately prevents the resource-intensive web scraping that plagues servers.
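
As one possible shape for such an agent-facing endpoint, here is a small Flask sketch. The route path, the response fields, and the in-memory catalog are hypothetical; the point is that an authorized agent gets clean, structured JSON instead of having to scrape a rendered product page.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory catalog; a real deployment would query inventory,
# pricing, and availability services instead.
CATALOG = {
    "SKU-1001": {"name": "Noise-Cancelling Headphones", "price": 249.00,
                 "currency": "USD", "in_stock": 42},
}

@app.get("/agent/v1/products/<sku>")
def product_for_agent(sku: str):
    """Structured, agent-facing product data (no HTML, no tracking scripts)."""
    product = CATALOG.get(sku)
    if product is None:
        return jsonify({"error": "unknown_sku"}), 404
    return jsonify({"sku": sku, **product})

if __name__ == "__main__":
    app.run(port=8080)
```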

Step 2: Define and Enforce the Rules of Engagement

Your Terms of Service (TOS) must clearly articulate the acceptable use of your data by autonomous agents. Furthermore, the Agent API must enforce these rules programmatically. You can reward compliant agents (faster access, richer data) and throttle or block non-compliant agents (those attempting unauthorized access or violating rate limits). This is where you insert your brand’s non-negotiables, such as attribution requirements or user privacy protocols, thereby regaining control.
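
Programmatic enforcement could be as simple as tiered rate limiting keyed to an agent's API key, as in the sketch below. The tier names and per-minute budgets are made-up policy values; a production system would persist counters in a shared store rather than process memory.

```python
import time
from collections import defaultdict

# Illustrative policy: registered, compliant agents earn larger request
# budgets; unknown agents are throttled aggressively. Values are made up.
RATE_LIMITS = {"partner": 1000, "registered": 100, "unknown": 10}  # requests/minute

_request_log = defaultdict(list)  # api_key -> recent request timestamps

def allow_request(api_key: str, tier: str) -> bool:
    """Sliding one-minute-window rate limiter keyed by agent API key."""
    now = time.time()
    recent = [t for t in _request_log[api_key] if now - t < 60]
    _request_log[api_key] = recent
    if len(recent) >= RATE_LIMITS.get(tier, RATE_LIMITS["unknown"]):
        return False  # throttle: the API layer would return HTTP 429 here
    _request_log[api_key].append(now)
    return True

# Example: an unknown agent is cut off once its small budget is spent.
for i in range(12):
    print(i, allow_request("agent-xyz", "unknown"))
```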

Step 3: Offer Value-Added Agent Services and Data

This is the shift from defense to offense. Give agents a reason to partner with you and prefer your site. Offer exclusive agent-only endpoints that provide aggregated, structured data your competitors don’t, such as sustainable sourcing information, local inventory availability, or complex configurator data. This creates a competitive advantage where the agent actually prefers to send traffic to your optimized channel because it provides a superior outcome for the human user.
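
As a purely illustrative example, an agent-only response might bundle structured data like this (every field below is a hypothetical placeholder):

# Illustrative shape of a value-added, agent-only payload: richer than anything
# an agent could scrape, and differentiated from competitors' data.
AGENT_PRODUCT_VIEW = {
    "sku": "SKU-1001",
    "price": 249.00,
    "local_inventory": [{"store": "Downtown", "units": 3, "pickup_today": True}],
    "sustainability": {"recycled_content_pct": 42, "certifications": ["FSC"]},
    "configurator": {"colors": ["black", "sand"], "bundle_discounts": True},
}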

Case Study 1: The Furniture Retailer and the AI Interior Designer

Challenge: Complex, Multivariable E-commerce Decisions

A high-end furniture and décor retailer struggled with low conversion rates because buying furniture requires complex decisions (size, material, delivery time). Customers were leaving the site to use third-party AI interior design tools.

Proactive Partnership:

The retailer created a “Design Agent API.” This API didn’t just provide price and SKU; it offered rich, structured data on 3D model compatibility, real-time customization options, and material sustainability scores. They partnered with a leading AI interior design platform, providing the agent direct, authorized access to this structured data. The AI agent, in turn, could generate highly accurate virtual room mock-ups using the retailer’s products. This integration streamlined the complex path to purchase, turning the agent from a competitor into the retailer’s most effective pre-visualization sales tool.

Case Study 2: The Specialty Grocer and the AI Recipe Planner

Challenge: Fragmented Customer Journey from Inspiration to Purchase

An online specialty grocer, focused on rare and organic ingredients, saw their customers using third-party AI recipe planners and shopping list creators, which often failed to locate the grocer’s unique SKUs or sent traffic to competitors.

Proactive Partnership:

The grocer developed a “Recipe Fulfillment Endpoint.” They partnered with two popular AI recipe apps. When a user generated a recipe, the AI agent, using the grocer’s endpoint, could instantly check ingredient availability, price, and even offer substitute suggestions from the grocer’s unique inventory. The agent could then generate a “One-Click, Fully-Customized Cart” on the grocer’s site. The grocer ensured the agent received a small attribution fee (a form of commission), turning the agent into a reliable, high-converting affiliate sales channel. This formalized partnership eliminated the friction between inspiration and purchase, driving massive, high-margin sales.
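
A minimal sketch of what such a fulfillment endpoint might do behind the scenes. The inventory, substitution logic, and 2% attribution fee are illustrative assumptions, not details from the case study.

# Sketch of a recipe-to-cart flow (all data and the fee structure are hypothetical).
INVENTORY = {
    "saffron threads": {"sku": "SKU-88", "price": 14.50, "in_stock": True},
    "smoked paprika": {"sku": "SKU-42", "price": 4.25, "in_stock": False},
}
SUBSTITUTES = {"smoked paprika": "sweet paprika + chipotle powder"}

def build_cart(ingredients: list[str], agent_id: str) -> dict:
    """Turn an AI-generated recipe into a one-click cart, with substitutions and attribution."""
    items, suggestions = [], []
    for name in ingredients:
        match = INVENTORY.get(name)
        if match and match["in_stock"]:
            items.append({"ingredient": name, **match})
        else:
            suggestions.append({"ingredient": name, "substitute": SUBSTITUTES.get(name)})
    subtotal = sum(item["price"] for item in items)
    return {
        "agent_id": agent_id,  # used for attribution / affiliate commission
        "items": items,
        "substitution_suggestions": suggestions,
        "subtotal": round(subtotal, 2),
        "attribution_fee": round(subtotal * 0.02, 2),  # illustrative 2% commission
    }

print(build_cart(["saffron threads", "smoked paprika"], agent_id="recipe-app-1"))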

The Human-Centered Imperative

Ultimately, this is a human-centered change challenge. The human customer trusts their AI agent to act on their behalf. By providing a clean, transparent, and optimized path for the agent, the e-commerce brand is honoring that trust. The focus shifts from control over the interface to control over the data and the rules of interaction. This strategy not only improves server performance and data integrity but also secures the brand’s place in the customer’s preferred, agent-mediated future.

“The AI agent is your customer’s proxy. If you treat the proxy poorly, you treat the customer poorly. The future of e-commerce is not about fighting the agents; it’s about collaborating with them to deliver superior value.” — Braden Kelley

The time to move beyond the reactive defense and into proactive partnership is now. The e-commerce leaders of tomorrow will be the ones who design the best infrastructure for the machines that shop for humans. Your essential first step: Form a dedicated internal team to prototype your Agent API, defining the minimum viable, structured data you can share to incentivize collaboration over scraping.

Image credit: Google Gemini







Why Amazon Paid $3.9 Billion to Get into the Healthcare Business

GUEST POST from Shep Hyken

Amazon is known for its amazing customer experience, despite most customers never talking to an Amazon employee. How does this digital experience — with no human interaction — drive so much loyalty? The short answer is confidence. There is very little that goes wrong with an Amazon experience, and if by chance it does, its system takes care of almost all problems — again, without human interaction. That said, if a customer does need to talk to a human, which is very seldom, the customer support team is there.

But what happens if you combine technology with a high-touch business, like a doctor’s office? You get One Medical, which Amazon bought in 2023 for $3.9 billion. One Medical’s founder, Dr. Tom Lee, is a Harvard-trained primary care physician who then went on to Stanford to get an MBA. Before opening his first clinic, he asked himself, “Why do we do things in healthcare like we’ve always done them? Why does every waiting room look like some sterile IKEA? Why do I wait in a reception area and then wait again in the exam room?” It was questions like these that caused Lee to tinker with and disrupt the traditional medical visit model.

Starting with one clinic, Lee created a different experience. He built an app and charged patients an $89/year subscription that gave them access to doctors. He focused on simple things like getting an appointment without making a call. Those little things were the start of what turned out to be a stellar experience that allowed him to expand, ultimately catching the eye of Amazon.

When the Amazon deal was completed, Healthcare Dive reported that Amazon now had a network of more than 220 medical offices in 27 U.S. markets with more than 836,000 members, plus 9,000 enterprise clients. Neil Lindsay, SVP of Amazon Health Services, said, “We’re on a mission to make it dramatically easier for people to find, choose, afford and engage with the services, products and professionals they need to get and stay healthy, and coming together with One Medical is a big step on that journey.” That’s what Amazon does. They make it easy for customers.

Joseph Michelli, bestselling business author of numerous books that tell the stories of iconic brands like The Ritz-Carlton, Starbucks, Mercedes and others, recently released a new book, All Business Is Personal: One Medical’s Human-Centered, Technology-Powered Approach to Customer Engagement, that tells the One Medical story. I had a chance to interview him on Amazing Business Radio, and here are the highlights that will give you some insight into why Amazon became interested in acquiring this amazing company.

Question Everything

Just ask, “Why?” It doesn’t matter what type of business you are in, there are reasons for everything. Often the reason a company or person does something is because “We’ve always done it this way.” So, question everything. Maybe you’ll still do it the way you’ve always done it, but at least you will have tried to find a better way.

Create a Stellar Customer (Patient) Experience

As Lee created a Customer Experience (CX) that drove impressive ratings, he looked at the friction most patients experienced. He started with an obvious pain point, the waiting room, which is, as the name implies, a room for people to wait. Some patients in traditional medical practices are forced to wait for unreasonable amounts of time. But not at One Medical. In addition to noting how easy it is to get a same-day or next-day appointment, Michelli shared that 95% of patients are seen within three minutes of their scheduled times. As already mentioned, Lee questioned every aspect of the patient’s experience, and he found many ways to make it better.

Blend Technology with the Human Touch

Technology, like apps and AI, makes life more convenient for customers by allowing things like easy online scheduling or getting immediate answers from AI chatbots. Yet technology can often feel cold and impersonal, especially in healthcare. The best use of technology is to make things faster and simpler, but smart businesses, like One Medical, know to offer human backup when a customer/patient needs it. Finding the right balance between tech and the human touch keeps your business from being a commodity — just “another faceless service.”

Convenience Is King

People love doing business with companies that create convenient experiences. For One Medical, this means offering same-day appointments and speedy callbacks, or handling many issues online rather than over the phone, so the patient doesn’t have to wait on hold or for a callback. Research shows that 73% of customers will pay more for a convenient experience. The easier you make someone’s experience, the more likely they will come back as well as tell others about you.

Make It Personal, Not Just Personalized

It’s great to remember a customer’s name or recall past purchases. That’s personalization. To take it a step further, make it personal. Make the customer feel that you care about them. That means when the customer (or patient) talks to an employee, they feel cared for, listened to and valued. Personal connections build trust and confidence, which leads to repeat business and potential loyalty.

The Effort Is Worth It

These five reasons (and a few more) are what gave Amazon 3.9 billion reasons (as in dollars) to acquire One Medical. Even if you practice these principles flawlessly, you may never catch Amazon’s attention, but you will catch your customers’ (and potential customers’) attention. And that will make the effort worthwhile.

Image Credits: Pixabay

This article originally appeared on Forbes.com







Cutting-Edge Ways to Decouple Data Growth from Power and Water Consumption

The Sustainability Imperative

LAST UPDATED: November 1, 2025 at 8:59 AM

Cutting-Edge Ways to Decouple Data Growth from Power and Water Consumption

GUEST POST from Art Inteligencia

The global digital economy runs on data, and data runs on power and water. As AI and machine learning rapidly accelerate our reliance on high-density compute, the energy and environmental footprint of data centers has become an existential challenge. This isn’t just an engineering problem; it’s a Human-Centered Change imperative. We cannot build a sustainable future on an unsustainable infrastructure. Leaders must pivot from viewing green metrics as mere compliance to seeing them as the ultimate measure of true operational innovation — the critical fuel for your Innovation Bonfire.

The single greatest drain on resources in any data center is cooling, often accounting for 30% to 50% of total energy use, and requiring massive volumes of water for evaporative systems. The cutting edge of sustainable data center design is focused on two complementary strategies: moving the cooling load outside the traditional data center envelope and radically reducing the energy consumed at the chip level. This fusion of architectural and silicon-level innovation is what will decouple data growth from environmental impact.

The Radical Shift: Immersive and Locational Cooling

Traditional air conditioning is inefficient and water-intensive. The next generation of data centers is moving toward direct-contact cooling systems that use non-conductive liquids or leverage natural environments.

Immersion Cooling: Direct-to-Chip Efficiency

Immersion Cooling involves submerging servers directly into a tank of dielectric (non-conductive) fluid. This is up to 1,000 times more efficient at transferring heat than air. There are two primary approaches: single-phase (fluid remains liquid, circulating to a heat exchanger) and two-phase (fluid boils off the server, condenses, and drips back down).

This method drastically reduces cooling energy and virtually eliminates water consumption, leading to Power Usage Effectiveness (PUE) ratios approaching 1.05, close to the theoretical ideal of 1.0. Furthermore, the fluid maintains a more stable, higher operating temperature, making the waste heat easier to capture and reuse, which leads us to our first case study.
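
For readers new to the metric, PUE is simply total facility energy divided by the energy delivered to IT equipment. A quick sketch with made-up numbers shows the arithmetic:

# Power Usage Effectiveness = total facility energy / IT equipment energy.
# The kWh figures are invented purely to illustrate the calculation.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(1_500_000, 1_000_000))  # typical air-cooled facility: 1.5
print(pue(1_050_000, 1_000_000))  # well-engineered immersion-cooled facility: 1.05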

Case Study 1: China’s Undersea Data Center – Harnessing the Blue Economy

China’s deployment of a commercial Undersea Data Center (UDC) off the coast of Shanghai is perhaps the most audacious example of locational cooling. This project, developed by Highlander and supported by state entities, involves submerging sealed server modules onto the seabed, where the stable, low temperature of the ocean water is used as a natural, massive heat sink.

The energy benefits are staggering: developers claim UDCs can reduce electricity consumption for cooling by up to 90% compared to traditional land-based facilities. The accompanying Power Usage Effectiveness (PUE) target is below 1.15 — a world-class benchmark. Crucially, by operating in a closed system, it eliminates the need for freshwater entirely. The UDC also draws nearly all its remaining power from nearby offshore wind farms, making it a near-zero carbon, near-zero water compute center. This bold move leverages the natural environment as a strategic asset, turning a logistical challenge (cooling) into a competitive advantage.

Case Study 2: The Heat Reuse Revolution at a Major Cloud Provider

Another powerful innovation is the shift from waste heat rejection to heat reuse. This is where true circular economy thinking enters data center design. A major cloud provider (Microsoft, through several of its projects) has pioneered systems that capture the heat expelled from liquid-cooled servers and redirect it to local district heating networks.

In one of their Nordic facilities, the waste heat recovered from the servers is fed directly into a local district heating system. The data center effectively acts as a boiler for the surrounding community, warming homes, offices, and water. This dramatically changes the entire PUE calculation. By utilizing the heat rather than simply venting it, the effective PUE dips well below the reported operational figure, transforming the data center from an energy consumer into an energy contributor. This demonstrates that the true goal is not just to lower consumption, but to create a symbiotic relationship where the output of one system (waste heat) becomes the valuable input for another (community heating).
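
One published way to account for this is the Green Grid’s Energy Reuse Effectiveness (ERE) metric, which credits heat exported to users outside the data center. A small sketch with illustrative numbers:

# ERE = (total facility energy - reused energy) / IT equipment energy.
# Unlike PUE, ERE can drop below 1.0 when enough heat is exported. Numbers are illustrative.
def ere(total_facility_kwh: float, reused_kwh: float, it_equipment_kwh: float) -> float:
    return (total_facility_kwh - reused_kwh) / it_equipment_kwh

print(ere(1_200_000, 0, 1_000_000))        # no heat reuse: 1.2
print(ere(1_200_000, 400_000, 1_000_000))  # with district heating export: 0.8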

“The most sustainable data center is the one that gives back more value to the community than it takes resources from the planet. This requires a shift from efficiency thinking to regenerative design.”

Innovators Driving the Sustainability Stack

Innovation is happening at every layer, from infrastructure to silicon:

Leading companies and startups are rapidly advancing sustainable data centers. In the cooling space, companies like Submer Technologies specialize in immersion cooling solutions, making it commercially viable for enterprises. Meanwhile, the power consumption challenge is being tackled at the chip level. AI chip startups like Cerebras Systems and Groq are designing new architectures (wafer-scale and Tensor Streaming Processors, respectively) that aim to deliver performance with vastly improved energy efficiency for AI workloads compared to general-purpose GPUs. Furthermore, cloud infrastructure provider Crusoe focuses on powering AI data centers exclusively with renewable or otherwise stranded, environmentally aligned power sources, such as converting flared natural gas into electricity for compute, tackling the emissions challenge head-on.

The Future of Decoupling Growth

To lead effectively in the next decade, organizations must recognize that the convergence of these technologies — immersion cooling, locational strategy, chip efficiency, and renewable power integration — is non-negotiable. Data center sustainability is the new frontier for strategic change. It requires empowered agency at the engineering level, allowing teams to move fast on Minimum Viable Actions (MVAs) — small, rapid tests of new cooling fluids or localized heat reuse concepts — without waiting for monolithic, years-long CapEx approval. By embedding sustainability into the very definition of performance, we don’t just reduce a footprint; we create a platform for perpetual, human-driven innovation.

You can learn more about how the industry is adapting to these challenges in the face of rising heat from AI in the video below, which discusses the limitations of traditional cooling methods and the necessity of liquid cooling solutions for next-generation AI data centers.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and is subject to ongoing research and development.

UPDATE: Apparently, Microsoft has been experimenting with underwater data centers for years, and you can learn more about its progress in this area in this video:

Image credit: Google Gemini







The Indispensable Role of CX

Insights from CCW’s 25-Year Journey

LAST UPDATED: October 28, 2025 at 12:00 PM

The Indispensable Role of CX

by Braden Kelley

I recently had the privilege of sitting down with Mario Matulich, President of Customer Management Practice, at Customer Contact Week (CCW) in Nashville. As an organization celebrating its 25th anniversary, CCW has been a critical barometer for the entire customer experience and contact center industry. Our conversation wasn’t just a look back, but a powerful exploration of the strategic mandate facing CX leaders today, particularly how we manage innovation and human-centered change in an era dominated by AI and tightening budgets.

CCW at 25: The Hub for Benchmarking and Breakthroughs

Mario underscored that CCW is far more than just a conference; it’s a living repository of industry knowledge. Professionals attend for actionable takeaways, which primarily fall into three categories: benchmarking performance against industry leaders, learning about new trends (like Generative AI’s impact), and, critically, sourcing the right vendors and capabilities needed to execute their strategies. It’s where leaders come to calibrate their investment strategies and learn how to do more with their finite resources.

This pursuit of excellence is driven by a single, powerful market force: The Amazon Effect. As Mario put it, customers no longer judge your experience solely against your industry peers. They expect every single touchpoint with your company to be as seamless, intuitive, and effective as the best experience they’ve had anywhere. This constantly escalating bar for Customer Effort Score (CES) and Customer Satisfaction (CSAT) makes a complacent CX investment a near-fatal strategic mistake. The customer experience must always be top-tier, or you simply lose the right to compete.

The Strategic Disconnect: CX vs. The Contact Center

One of the most valuable parts of our discussion centered on the subtle, yet crucial, distinction between a Customer Experience (CX) professional and a Contact Center (CC) professional. While both are dedicated to the customer journey, their scope and focus often differ:

  • The CX Professional: Often owns the entire end-to-end customer journey, from marketing to product use to support. Their responsibilities and definition of success are deeply influenced by where CX sits organizationally — is it under Marketing, Operations, or the CEO?
  • The CC Professional: Focused on the operational efficiency, quality, and effectiveness of the voice and digital support channels. Their reality is one of doing a lot with a little: constantly asked to manage complex interactions while, ironically, being viewed as a prime source of cuts in a downturn.

Social media, for instance, is still a relevant customer service channel, not just a marketing one. However, the operational reality is that many companies, looking for cost-effective solutions, outsource social media support to Business Process Outsourcing (BPO) providers, highlighting the ongoing tension between strategic experience design and operational efficiency.

“Being a CX leader in your industry is not a temporary investment you can cut and reinstate later. Those who cut, discover quickly that regaining customer trust and market position is exponentially harder than maintaining it.” — Braden Kelley

AI in the Contact Center: From Hypothesis to Hyper-Efficiency

The conversation inevitably turned to the single biggest factor transforming the industry today: Artificial Intelligence. Mario and I agreed that while the promise of AI is vast, the quickest, most immediate win for nearly every organization lies in agent assist.

This is where Generative AI tools empower the human agent in real-time — providing instant knowledge base look-ups, auto-summarizing previous interactions, and drafting responses. It’s a human-centric approach that immediately boosts productivity and confidence, improving Agent Experience (AX) and reducing training time.

However, implementing AI successfully isn’t a “flip-the-switch” deployment. The greatest danger is the wholesale adoption of complex technology without rigor. True AI success, Mario noted, must be implemented via the classic innovation loop: hypothesis, prototyping, and testing. AI isn’t a solution; it’s a tool that must be carefully tuned and validated against human-centered metrics before scaling.

The Mandate for Enduring Investment

A recurring theme was the strategic folly of viewing CX as a cost center. In a downturn, the contact center is often the first place management looks for budgetary reductions. Yet, the evidence is overwhelming: CX leadership is not a temporary investment. When you are leading in your industry in customer experience, that position must be maintained. Cut your investment at your peril, and you risk a long, painful road to recovery when the market turns. The CX team, despite being resource-constrained, often represents the last line of defense for the brand, embodying the human-centered change we preach.

As CCW moves into its next 25 years, the lesson is clear: customer expectations are only rising. The best leaders will leverage AI not just to cut costs, but to augment their people and apply the innovation principles of rigorous testing to truly master the new era of customer orchestration. The commitment to a great customer experience is the single, enduring investment that will future-proof your business.

HALLOWEEN BONUS: Save 30% on the eBook, hardcover or softcover of my latest book Charting Change (now in its second edition) — FREE SHIPPING WORLDWIDE — using code HAL30 until midnight October 31, 2025

Image credits: Customer Management Practice

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.







The Voicebots are Coming

Your Next Customer Support Agent May Not Be a Human

LAST UPDATED: October 27, 2025 at 1:00 PM

The Voicebots are Coming

by Braden Kelley

Last week I had the opportunity to attend Customer Contact Week (CCW) in Nashville, Tennessee, and learn that the familiar, frustrating tyranny of the touch-tone IVR (Interactive Voice Response) system is finally ending. For too long, the gateway to customer service has felt like a maze designed to prevent contact, not facilitate it. But thanks to the rapid evolution of Conversational AI — fueled by Generative Large Language Models (LLMs) — the entire voice interaction landscape is undergoing a revolutionary, and necessary, change. As a thought leader focused on human-centered change, innovation and experience design, I can tell you the future of the call center isn’t purely automated; it’s intelligently orchestrated.

The voicebot — the modern AI-powered voice agent — is moving past its days as a simple chatbot with a synthesized voice. Today’s AI agents use Natural Language Processing (NLP) to understand intent, context, and even tone, allowing them to handle complex, multi-step issues with startling accuracy. More importantly, they are ushering in the era of the bionic contact center, where the human agent is augmented, not replaced. This hybrid model — where AI handles the heavy lifting and humans provide empathy, complex reasoning, and necessary approvals — is the key to achieving both massive scale and superior Customer Experience (CX).

Overcoming the Voice Friction: The Tech Foundation

The shift to true voice AI required overcoming significant friction points that plagued older systems:

  • Barge-In and Latency: Modern voicebots offer near-instantaneous response times and can handle barge-in (when a customer interrupts the bot) naturally, mimicking human conversation flow.
  • Acoustic Noise: Advanced speech recognition models are highly resilient to background noise and varied accents, ensuring high accuracy even in noisy home or car environments.
  • Intent Nuance: LLMs provide the deep contextual understanding needed to identify customer intent, even when the customer uses vague or emotional language, turning frustrated calls into productive ones.

The Dual Pillars of Voice AI in CX

Conversational AI is transforming voice service through two primary deployment models, both of which reduce Customer Effort Score (CES) and boost Customer Satisfaction (CSAT):

1. Full Call Automation (The AI Front Line)

This model is deployed for high-volume, routine, yet critical interactions. The voicebot connects directly to the company’s backend systems (CRM, ERP, knowledge base) to pull personalized information and take action in real-time. Crucially, these new AI agents move beyond rigid scripts, using Generative AI to create dynamic, human-like dialogue that resolves the issue instantly. This 24/7 self-service capability slashes queue times and dramatically lowers the cost-to-serve.
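
As a minimal illustration of the pattern (not any vendor’s actual implementation), a voicebot’s turn handler might route a recognized intent to a hypothetical backend action like this:

# Sketch of a voicebot resolving routine requests end-to-end.
# The intents and backend functions are hypothetical stand-ins for CRM/ERP calls.
def get_order_status(order_id: str) -> str:
    return f"Order {order_id} has shipped and arrives Thursday."  # stand-in for an ERP lookup

def update_address(order_id: str, new_address: str) -> str:
    return f"Delivery address for order {order_id} updated to {new_address}."  # stand-in for a CRM update

ACTIONS = {"order_status": get_order_status, "change_address": update_address}

def handle_turn(intent: str, **slots) -> str:
    """Route a recognized intent to a backend action and speak the result back."""
    action = ACTIONS.get(intent)
    if action is None:
        return "Let me connect you with a specialist."  # fall back to a human agent
    return action(**slots)

print(handle_turn("order_status", order_id="A-1234"))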

2. Human-AI Collaboration (The Bionic Agent)

This is where the real human-centered innovation lies. The AI agent handles the bulk of the call — identifying the customer, verifying identity, diagnosing the problem, and gathering data. When the request hits a complexity threshold — such as requiring a policy override, handling an escalated complaint, or needing a final human authorization — the AI performs a contextual handoff. The human agent receives the call along with a complete, structured summary of the conversation, the customer’s intent, and often a recommended next step, turning a frustrating transfer into a seamless, empowered human interaction.
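
A minimal sketch of the kind of structured context an AI agent could pass to the human at handoff; the schema below is an illustrative assumption, not an industry standard:

# Hypothetical handoff payload: everything the human needs to act without asking the
# customer to repeat themselves.
from dataclasses import dataclass, field

@dataclass
class HandoffSummary:
    customer_id: str
    identity_verified: bool
    intent: str
    conversation_summary: str
    data_gathered: dict = field(default_factory=dict)
    recommended_next_step: str = ""

handoff = HandoffSummary(
    customer_id="C-5521",
    identity_verified=True,
    intent="dispute_late_fee",
    conversation_summary="Customer was charged a late fee during a service outage and requests a waiver.",
    data_gathered={"fee_amount": 35.00, "outage_confirmed": True},
    recommended_next_step="Approve one-time fee waiver (requires human authorization).",
)
print(handoff.recommended_next_step)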

Better still, a single human agent can provide approvals or other guidance to multiple AI voice agents at once. Each agent continues to own its call while waiting for the human to respond, possibly helping the customer with additional queries in the meantime, before carrying the conversation through to resolution.
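
Reduced to its simplest form, that approval pattern is just a shared queue between many AI agents and one human reviewer; the payload fields and decision callback below are hypothetical:

# Sketch of one human approving requests from several concurrent AI voice agents.
import queue

approval_queue: "queue.Queue[dict]" = queue.Queue()

def request_approval(call_id: str, action: str) -> None:
    """Called by an AI agent; the agent keeps serving the customer while it waits."""
    approval_queue.put({"call_id": call_id, "action": action})

def human_review_loop(decide) -> None:
    """One human drains approval requests arriving from many simultaneous calls."""
    while not approval_queue.empty():
        item = approval_queue.get()
        item["approved"] = decide(item)  # human decision injected as a callable
        print(item)

request_approval("call-1", "waive $35 late fee")
request_approval("call-2", "expedite replacement shipment")
human_review_loop(lambda item: True)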

Customer Contact Week Nashville

“The most powerful application of voice AI isn’t automation, it’s augmentation. By freeing human agents from transactional drudgery, we elevate them to be empathic problem solvers, enhancing both their job satisfaction and the customer’s outcome.” — Braden Kelley


Measuring the Success of the Handoff

The quality of the transitions between AI and human is the true measure of success. Leaders must track metrics that assess the efficacy of the handoff itself (a simple computation sketch follows the list):

  • Repeat Story Rate: The percentage of customers who have to repeat information to the human agent after an AI handoff. This must be near zero.
  • Agent Ramp-up Time (Post-Transfer): The time it takes for the human agent to absorb the AI-generated context and take meaningful action. Lower is better.
  • Post-Handoff CSAT: The customer satisfaction score specifically captured after a complex AI-to-human transfer, measuring the seamlessness of the experience.
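
A minimal sketch of how these three metrics could be computed from call records, assuming a hypothetical logging schema:

# Computing handoff-quality metrics from call records (field names are hypothetical).
records = [
    {"repeated_story": False, "ramp_up_seconds": 22, "post_handoff_csat": 4.6},
    {"repeated_story": True, "ramp_up_seconds": 95, "post_handoff_csat": 3.1},
    {"repeated_story": False, "ramp_up_seconds": 30, "post_handoff_csat": 4.8},
]

repeat_story_rate = sum(r["repeated_story"] for r in records) / len(records)
avg_ramp_up = sum(r["ramp_up_seconds"] for r in records) / len(records)
avg_post_handoff_csat = sum(r["post_handoff_csat"] for r in records) / len(records)

print(f"Repeat Story Rate:  {repeat_story_rate:.0%}")    # target: near zero
print(f"Agent Ramp-up Time: {avg_ramp_up:.0f} seconds")  # lower is better
print(f"Post-Handoff CSAT:  {avg_post_handoff_csat:.1f}")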

The Agentic Future

The voicebots are indeed coming, and they are bringing with them the most significant shift in customer service since the telephone itself. The next evolution will see agentic AI — bots that can dynamically choose between multiple tools and knowledge sources to resolve novel problems without being strictly pre-scripted. The challenge for leaders is to ensure that as this technology scales, our focus remains firmly on the human experience, leveraging the best of AI’s speed and the best of human empathy to create a truly effortless and satisfying customer journey.

🤖 Companies to Watch in AI Voicebots

The voicebot space is rapidly evolving, driven by generative AI, and the recent Customer Contact Week (CCW) in Nashville highlighted several key players. Companies to watch in this generative AI voicebot and contact center space include market-leading platforms like NICE, Genesys, Zoom and Five9, all of whom are heavily integrating generative and agentic AI features — such as real-time coaching and automated post-call summaries — into their core Contact Center as a Service (CCaaS) offerings.

Beyond the traditional CCaaS providers, specialist AI firms like Replicant, Voice.AI and ASAPP (who had a significant presence at the event) continue to stand out by focusing on either full end-to-end voice automation for complex transactions or providing advanced Human-in-the-Loop AI features to augment live agents, particularly in regulated industries like financial services.

Additionally, major cloud vendors like Google Cloud and AWS (Amazon Connect) are increasingly aggressive, leveraging their foundational AI models to provide scalable, next-generation AI agents and contact center platforms, ensuring they remain transformative forces in customer experience (CX) automation.

HALLOWEEN BONUS: Save 30% on the eBook, hardcover or softcover of my latest book Charting Change (now in its second edition) — FREE SHIPPING WORLDWIDE — using code HAL30 until midnight October 31, 2025

Image credits: Customer Management Practice, Google Gemini

Content Authenticity Statement: The topic area, key elements to focus on, vendors to mention, etc. were decisions made by Braden Kelley, with a little help from Google Gemini to clean up the article.
