Apple Watch Must Die

At least temporarily, because it’s proven bad for innovation

by Braden Kelley

I came across an article in The Hill, titled ‘Apple flexes lobbying power as Apple Watch ban comes before Biden next week’, that highlighted how the U.S. International Trade Commission (ITC) found that Apple infringed the intellectual property of startup AliveCor to provide the wearable electrocardiogram features in its Apple Watch.

Apple is now trying to get President Biden to veto the ruling (I didn’t know that was a thing) so that it can keep selling Apple Watches. In my opinion this is a matter for the courts and yet another example of how big tech (and big companies in general) far too often brazenly misappropriates the intellectual property of the little guys. So much so in Apple’s case that over the last 30+ years a popular term for the practice has emerged: ‘Sherlocking’.

According to the new Microsoft Bing (with ChatGPT):

Sherlocking is a term that refers to Apple’s practice of copying features from third-party apps and integrating them into its own software¹². The term originated from a search tool named Sherlock that Apple developed in the late 90s and later updated to include features from a similar app named Watson²³.

President Biden must let the courts do their job and not intervene if innovation is to thrive in America.

Apple has been found by the ITC to have infringed and should be forced to stop selling Apple Watches if that is what has been decided. They should pay damages and redesign their product to remove the infringing intellectual property. And, if they feel they are innocent, they have an avenue of appeal and should exercise it.

But, bottom line, turning a blind eye to intellectual property theft is bad for innovation. We must encourage and protect entrepreneurship for innovation to thrive.

I’ll leave you with this clip from the movie Tucker to ponder on the way out:

And a trailer from probably the best movie on the subject of the struggle of the innovator against big business, based on the real life story of the inventor of the intermittent wiper – Dr. Robert Kearns, it’s called ‘Flash of Genius’:

Hopefully President Biden will stay out of it and let the courts decide based on the evidence.

Keep innovating!

SPECIAL UPDATE: On February 21, 2023 the Biden Administration elected NOT to veto the ITC ruling, leaving the courts to decide whether Apple is innocent or guilty.

Source: Conversation with Bing, 2/18/2023
(1) Apple ‘Sherlocking’ Highlighted in Antitrust Probe—Google Also …. https://www.itechpost.com/articles/105413/20210422/apple-sherlocking-highlighted-antitrust-probe-google-questioned-over-firewall.htm Accessed 2/18/2023.
(2) What Does It Mean When Apple “Sherlocks” an App? – How-To Geek. https://www.howtogeek.com/297651/what-does-it-mean-when-a-company-sherlocks-an-app/ Accessed 2/18/2023.
(3) Sherlock (software) – Wikipedia. https://en.wikipedia.org/wiki/Sherlock_(software) Accessed 2/18/2023.
(4) All the things Apple Sherlocked at WWDC 2022 – TechCrunch. https://techcrunch.com/2022/06/13/all-the-things-apple-sherlocked-at-wwdc-2022/ Accessed 2/18/2023.

Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Reality Behind Netflix’s Amazing Success

GUEST POST from Greg Satell

Today, it’s hard to think of Netflix as anything but an incredible success. Its business has grown at breakneck speed and now streams to 190 countries, yet it has also been consistently profitable, earning over $12 billion last year. With hit series like Orange is the New Black and Stranger Things, it broke the record for Emmy Nominations in 2018.

Most of all, the company has consistently disrupted the media business through its ability to relentlessly innovate. Its online subscription model upended the movie rental business and drove industry giant Blockbuster into bankruptcy. Later, it pioneered streaming video and introduced binge watching to the world.

Ordinarily, a big success like Netflix would offer valuable lessons for the rest of us. Unfortunately, its story has long been shrouded in myth and misinformation. That’s why Netflix Co-Founder Marc Randolph’s book, That Will Never Work, is so valuable. It not only sets the story straight, it offers valuable insight into how to create a successful business.

The Founding Myth

Anthropologists have long been fascinated by origin myths. The Greek gods battled and defeated the Titans to establish Olympus. Remus and Romulus were suckled by a she-wolf and then established Rome. Adam and Eve were seduced by a serpent, ate the forbidden fruit and were banished from the Garden of Eden.

The reason every culture invents origin myths is that they help make sense of a confusing world and reinforce the existing order. Before science, people were ill-equipped to explain things like disease and natural disasters. So, stories, even if they were apocryphal, gave people comfort that there was a rhyme and reason to things.

So it shouldn’t be surprising that an unlikely success such as Netflix has its own origin myth. As legend has it, Co-Founder Reed Hastings misplaced a movie he rented and was charged a $40 late fee. Incensed, he set out to start a movie business that had no late fees. That simple insight led to a disruptive business model that upended the entire industry.

The truth is that late fees had nothing to do with the founding of Netflix. What really happened is that Reed Hastings and Marc Randolph, soon to be unemployed after the sale of their company, Pure Atria, were looking to ride the new e-commerce wave and become the “Amazon of” something. Netflix didn’t arise out of a moment of epiphany, but a process of elimination.

The Subscription Model Was an Afterthought

Netflix really got its start through a morning commute. As Pure Atria was winding down, Randolph and Hastings would drive together from Santa Cruz on Highway 17 over the mountain into Silicon Valley. It was a long drive, which gave them lots of time to toss around e-commerce ideas that ranged from customized baseball bats to personalized shampoo.

The reason they eventually settled on movies was the introduction of DVDs. In 1997, there were very few titles available, so stores didn’t stock them. They were also small and light, which made them easy to ship. Best of all, the movie studios recognized that they had made a mistake pricing movies on videotape too high and planned to offer DVDs at a price consumers would actually pay.

In the beginning, Netflix earned most of its money selling movies, not renting them. However, before long they realized that it was only a matter of time before Amazon and Walmart began selling DVDs as well. Once that happened, it was unlikely that Netflix would be able to compete, and they would have to find a way to make the rental model work.

The subscription model began as an experiment. No one seemed to want to rent movies by mail, so they were desperate to find a different model and kept trying things until they hit on something that worked. It wasn’t part of a master plan, but the result of trial and error. “If you would have asked me on launch day to describe what Netflix would eventually look like,” Randolph wrote, “I would have never come up with a monthly subscription service.”

The Canada Principle

As Netflix took off, it was constantly looking for ways to grow its business. One idea that continually came up was expanding to Canada. It’s just over the border, is largely English-speaking, has a business-friendly regulatory environment and shares many cultural traits with the US. It just seemed like an obvious way to increase sales.

Yet they didn’t do it for two reasons. First, while Canada is very similar to the US, it is still another country, with its own currency, laws and other complicating factors. Also, while English is commonly spoken in most parts of Canada, in some regions French predominates. So, what looked simple at first had the potential to become maddeningly complex.

The second and more important reason was that it would have diluted their focus. Nobody has unlimited resources. You only have a certain number of people who can do a certain number of things. For every Canadian problem they had to solve, that was one problem that they weren’t solving in the much larger US business.

That became what Randolph called the “Canada Principle,” or the idea that you need to maximize your focus by limiting the number of opportunities that you pursue. It’s why they dropped DVD sales to focus on renting movies and then dropped a la carte rental to focus on the subscription business. That singularity of focus played a big part in Netflix’s success.

Nobody Knows Anything

Randolph’s mantra throughout the book is that “nobody knows anything.” He borrowed the phrase from the writer William Goldman’s memoir Adventures in the Screen Trade. What Goldman meant was that nobody truly knows how a movie will do until it’s out. Some movies with the biggest budgets and greatest stars flop, while some of the unlikeliest indie films are hits.

For Randolph though, it’s more of a guiding business philosophy. “For every good idea,” he says, “there are a thousand bad ideas it is indistinguishable from.” The only real way to tell the difference is to go out and try them, see what works, discard the failures and build on the successes. You have to, in other words, dare to be crap.

Over the years, I’ve had the chance to get to know hundreds of great innovators and they all tell a different version of the same story. While they often became known for one big idea, they had tried thousands of others before they arrived at the one that worked. It was perseverance and a singularity of focus, not a sudden epiphany, that made the difference.

That’s why the myth of the $40 late fee, while seductive, can be so misleading. What made Netflix successful wasn’t just one big idea. In fact, just about every assumption they made when they started the company was wrong. Rather, it was what they learned along the way that made the difference. That’s the truth of how Netflix became a media powerhouse.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Frontier Airlines Ends Human-to-Human Customer Service

GUEST POST from Shep Hyken

In a bold move to cut costs, Frontier Airlines announced that it would no longer offer human-to-human customer support. As a customer service expert, I was surprised at this move. I have waited to see the fallout, if any, and thought the company might backpedal and reinstate traditional phone support. After almost two months, it hasn’t returned to conventional customer support. The dust has settled a bit, and people (passengers and employees) are adjusting to the decision.

The decision to go digital is different from the decision Northwest Airlines (which eventually merged with Delta) made in 1999 to introduce online check-in to its passengers. The idea behind that technology, and eventually the technology driving online reservations, was to give the customer a better and more convenient experience while at the same time increasing efficiency. The big difference between that decision and Frontier’s is that there has always been (and still is) an option to connect to a live agent. If passengers didn’t want to use the self-service tools the airline provided, they could still talk to someone who could help them.

That does not appear to be the case with Frontier. There is no other option. The airline is relying on digital support. If you check the website for ways to contact the airline outside of the self-service options on the site or mobile app, you can use chat, send an email or file a formal written complaint. Chat happens in the moment and can deliver a good experience—even if it’s AI doing the chatting (and not a human). Email or a written complaint could take too long to resolve an immediate problem, such as rebooking a flight for any last-minute reason.

For some background, Frontier Airlines is a low-cost carrier based in Denver. It has plenty of competition, and when you combine that with rising expenses in almost every area of business and a tough economy, Frontier, just like any other company in almost any industry, is looking to cut costs. In a recent Forbes article, I shared the prediction that some companies will make the mistake of cutting expenses in the wrong places. Those “wrong places” are anywhere the customer will notice. Cutting off phone support to a live human, just one of Frontier’s cost-cutting strategies, is one of those places the customer may notice first.

If a customer wants to change or cancel a flight, make a lost-luggage claim or handle a similar task, and they have the information they need on hand and the system is intuitive and easy to navigate, the experience could be better than waiting on hold for a live agent. Our customer service research found that 71% of customers are willing to use self-service options. That said, the phone is still the No. 1 channel customers prefer to use when they have a problem, question or complaint.

Frontier’s decision to stop human-to-human customer support has generated controversy and criticism from customers/passengers and employees. The company’s management defends its decision, stating that they need to cut costs to remain competitive. They claim you can eventually reach a human, but their passengers will first have to exhaust the digital options. While self-service automated customer support may help the airline cut costs and increase efficiency, it obviously frustrates customers and negatively impacts employees.

The big concern is that 100% digital or self-service support is still too new. We are still a long way from technology completely replacing the human-to-human interactions we’re used to in the customer service and support worlds. Efficiency is important, but so is the relationship you maintain with your customers and employees. It takes a balance. The best companies figure this out.

Consider this: Video did not kill the radio star. ATMs were predicted to eliminate the need for bank tellers. And for the foreseeable future, technology will not kill live, human-to-human interactions. Frontier customers looking to save money will be forced to adapt to its new way of customer service. Knowing this upfront will help. But also consider this, something I’ve been preaching for several years: The greatest technology in the world hasn’t replaced the ultimate relationship-building tool between a customer and a business, and that is the human touch.

This article was originally published on Forbes.com.

Image Credit: Pixabay

Technology Pushing Us into a New Ethical Universe

GUEST POST from Greg Satell

We take it for granted that we’re supposed to act ethically and, usually, that seems pretty simple. Don’t lie, cheat or steal, don’t hurt anybody on purpose and act with good intentions. In some professions, like law or medicine, the issues are somewhat more complex, and practitioners are trained to make good decisions.

Yet ethics in the more classical sense isn’t so much about doing what you know is right, but thinking seriously about what the right thing is. Unlike the classic “ten commandments” type of morality, there are many situations that arise in which determining the right action to take is far from obvious.

Today, as our technology becomes vastly more powerful and complex, ethical issues are rising to the fore. Over the next decade we will have to build some consensus on issues like what accountability a machine should have and to what extent we should alter the nature of life. The answers are far from clear-cut, but we desperately need to find them.

The Responsibility of Agency

For decades intellectuals have pondered an ethical dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.

That’s just the start. It’s become increasingly clear that data bias can vastly distort decisions about everything from whether we are admitted to a school, get a job or even go to jail. Still, we’ve yet to achieve any real clarity about who should be held accountable for decisions an algorithm makes.

As we move forward, we need to give serious thought to the responsibility of agency. Who’s responsible for the decisions a machine makes? What should guide those decisions? What recourse should those affected by a machine’s decision have? These are no longer theoretical debates, but practical problems that need to be solved.

Evaluating Tradeoffs

“Now I am become Death, the destroyer of worlds,” said J. Robert Oppenheimer, quoting the Bhagavad Gita, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico. It was clear that we had crossed a Rubicon. There was no turning back and Oppenheimer, as the leader of the project, felt an enormous sense of responsibility.

Yet the specter of nuclear Armageddon was only part of the story. In the decades that followed, nuclear medicine saved thousands, if not millions of lives. Mildly radioactive isotopes, which allow us to track molecules as they travel through a biological system, have also been a boon for medical research.

The truth is that every significant advancement has the potential for both harm and good. Consider CRISPR, the gene editing technology that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies.

In the case of nuclear technology many scientists, including Oppenheimer, became activists. They actively engaged with the wider public, including politicians, intellectuals and the media to raise awareness about the very real dangers of nuclear technology and work towards practical solutions.

Today, we need similar engagement between people who create technology and the public square to explore the implications of technologies like AI and CRISPR, but it has scarcely begun. That’s a real problem.

Building A Consensus Based on Transparency

It’s easy to paint pictures of technology going haywire. However, when you take a closer look, the problem isn’t so much with technological advancement, but ourselves. For example, the recent scandals involving Facebook were not about issues inherent to social media websites, but had more to do with an appalling breach of trust and lack of transparency. The company has paid dearly for it and those costs will most likely continue to pile up.

It doesn’t have to be that way. Consider the case of Paul Berg, a pioneer in the creation of recombinant DNA, for which he won the Nobel Prize. Unlike Zuckerberg, he recognized the gravity of the Pandora’s box he had opened and convened the Asilomar Conference to discuss the dangers, which resulted in the Berg Letter that called for a moratorium on the riskiest experiments until the implications were better understood.

In her book, A Crack in Creation, Jennifer Doudna, who made the pivotal discovery for CRISPR gene editing, points out that a key aspect of the Asilomar conference was that it included not only scientists, but also lawyers, government officials and media. It was the dialogue between a diverse set of stakeholders, and the sense of transparency it produced, that helped the field advance.

The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. We can’t control what we reveal through exploration and discovery, but we can—and should—be wise about what we build. If you just “move fast and break things,” don’t be surprised if you break something important.

Meeting New Standards

In Homo Deus, Yuval Noah Harari writes that the best reason to learn history is “not in order to predict, but to free yourself of the past and imagine alternative destinies.” As we have already seen, when we rush into technologies like nuclear power, we create problems like Chernobyl and Fukushima and reduce technology’s potential.

The issues we will have to grasp over the next few decades will be far more complex and consequential than anything we have faced before. Nuclear technology, while horrifying in its potential for destruction, requires a tremendous amount of scientific expertise to produce it. Even today, it remains confined to governments and large institutions.

New technologies, such as artificial intelligence and gene editing are far more accessible. Anybody with a modicum of expertise can go online and download powerful algorithms for free. High school kids can order CRISPR kits for a few hundred dollars and modify genes. We need to employ far better judgment than organizations like Facebook and Google have shown in the recent past.

Some seem to grasp this. Most of the major tech companies have joined with the ACLU, UNICEF and other stakeholders to form the Partnership on AI to create a forum that can develop sensible standards for artificial intelligence. Salesforce recently hired a Chief Ethical and Human Use Officer. Jennifer Doudna has begun a similar process for CRISPR at the Innovative Genomics Institute.

These are important developments, but they are little more than first steps. We need a more public dialogue about the technologies we are building to achieve some kind of consensus of what the risks are and what we as a society are willing to accept. If not, the consequences, financial and otherwise, may be catastrophic.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Human-AI Co-Pilot

Redefining the Creative Brief for Generative Tools

GUEST POST from Art Inteligencia

The dawn of generative AI (GenAI) has ushered in an era where creation is no longer constrained by human speed or scale. Yet, for many organizations, the promise of the AI co-pilot remains trapped in the confines of simple, often shallow prompt engineering. We are treating these powerful, pattern-recognizing, creative machines like glorified interns, giving them minimal direction and expecting breakthrough results. This approach fundamentally misunderstands the machine’s capability and the new role of the human professional—which is shifting from creator to strategic editor and director.

This is the fundamental disconnect: a traditional creative brief is designed to inspire and constrain a human team—relying heavily on shared context, nuance, and cultural shorthand. An AI co-pilot, however, requires a brief that is explicitly structured to transmit strategic intent, defined constraints, and measurable parameters while leveraging the machine’s core strength: rapid, combinatorial creativity.

The solution is the Human-AI Co-Pilot Creative Brief, a structured document that moves beyond simple what (the output) to define the how (the parameters) and the why (the strategic goal). It transforms the interaction from one of command-and-response to one of genuine, strategic co-piloting.

The Three Failures of the Traditional Prompt

A simple prompt—”Write a blog post about our new product”—fails because it leaves the strategic and ethical heavy lifting to the unpredictable AI default:

  1. It Lacks Strategic Intent: The AI doesn’t know why the product matters to the business (e.g., is it a defensive move against a competitor, or a new market entry?). It defaults to generic, promotional language that lacks a strategic purpose.
  2. It Ignores Ethical Guardrails: It provides no clear instructions on bias avoidance, data sourcing, or the ethical representation of specific communities. The risk of unwanted, biased, or legally problematic output rises dramatically.
  3. It Fails to Define Success: The AI doesn’t know if success means 1,000 words of basic information, or 500 words of emotional resonance that drives a 10% click-through rate. The human is left to manually grade subjective output, wasting time and resources.

The Four Pillars of the Human-AI Co-Pilot Brief

A successful Co-Pilot Brief must be structured data for the machine and clear strategic direction for the human. It contains four critical sections:

1. Strategic Context and Constraint Data

This section is non-negotiable data: Brand Voice Guidelines (tone, lexicon, forbidden words), Target Persona Definition (with explicit demographic and psychographic data), and Measurable Success Metrics (e.g., “Must achieve a Sentiment Score above 75” or “Must reduce complexity score by 20%”). The Co-Pilot needs hard, verifiable parameters, not soft inspiration.
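To make this concrete, here is a minimal sketch of that constraint data expressed as a structure a pipeline could validate before any prompt is sent. All field names and threshold values are hypothetical illustrations, not a standard schema:

```python
# Hypothetical constraint-data section of a Co-Pilot Brief.
# Field names and thresholds are illustrative, not a standard.
brief_constraints = {
    "brand_voice": {
        "tone": "confident, plainspoken",
        "forbidden_words": ["synergy", "game-changing", "leverage"],
    },
    "target_persona": {
        "role": "mid-career operations manager",
        "reading_level": "8th grade",
    },
    "success_metrics": {
        "min_sentiment_score": 75,  # "Sentiment Score above 75"
        "complexity_reduction_pct": 20,
    },
}

REQUIRED_SECTIONS = {"brand_voice", "target_persona", "success_metrics"}

def missing_sections(brief: dict) -> list[str]:
    """Return any required constraint sections the brief omits."""
    return sorted(REQUIRED_SECTIONS - brief.keys())

print(missing_sections(brief_constraints))  # → []
```

Even a check this simple is enough to reject a "soft inspiration" brief before it ever reaches the model.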

2. Unlearning Instructions (Bias Mitigation)

This is the human-centered, ethical section. It explicitly instructs the AI on what cultural defaults and historical biases to avoid. For example: “Do not use common financial success clichés,” or “Ensure visual representations of leadership roles are diverse and avoid gender stereotypes.” This actively forces the AI to challenge its training data and align with the brand’s ethical standards.

3. Iterative Experimentation Mandates

Instead of asking for one final product, the brief asks for a portfolio of directed experiments. This instructs the AI on the dimensions of variance to explore (e.g., “Generate 3 headline clusters: 1. Fear-based urgency, 2. Aspiration-focused long-term value, 3. Humorous and self-deprecating tone”). This leverages the AI’s speed to deliver human-directed exploration, allowing the human to focus on selection, refinement, and A/B testing—the high-value tasks.
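Mechanically, a mandate like this can be expanded into one directed prompt per variance dimension before anything reaches the model. A minimal sketch; the task wording and dimension labels are assumptions for illustration:

```python
# Expand one experimentation mandate into a portfolio of directed prompts,
# one per variance dimension named in the brief. Task text is illustrative.
BASE_TASK = "Write a landing-page headline for our new product."
VARIANCE_DIMENSIONS = [
    "fear-based urgency",
    "aspiration-focused long-term value",
    "humorous and self-deprecating tone",
]

def build_prompts(task: str, dimensions: list[str]) -> list[str]:
    """One prompt per dimension, so every variant is strategically labeled."""
    return [f"{task} Creative angle: {dim}." for dim in dimensions]

prompts = build_prompts(BASE_TASK, VARIANCE_DIMENSIONS)
print(len(prompts))  # → 3
```

The human then reviews a labeled portfolio rather than one anonymous draft, which is what makes the downstream A/B testing meaningful.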

4. Attribution and Integration Protocol

This section ensures the output is useful and compliant. It defines the required format (Markdown, JSON, XML), the needed metadata (source citation for facts, confidence score of the output), and the Human Intervention Point (e.g., “Draft 1 must be edited by the Chief Marketing Officer for final narrative tone and legal review”). This manages the handover and legal chain of custody for the final, approved asset.
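One way to picture the protocol: every generated draft travels inside an envelope that carries the required metadata plus an explicit human sign-off flag. The keys and values below are hypothetical, not a standard format:

```python
# Hypothetical integration envelope for one generated draft.
envelope = {
    "format": "markdown",
    "draft": "## Why your privacy matters\nYour records stay with you...",
    "metadata": {
        "confidence_score": 0.82,
        "citations": ["https://example.com/source-of-cited-fact"],
    },
    "human_intervention": {
        "reviewer": "Chief Marketing Officer",
        "status": "pending_review",  # flips to "approved" after review
    },
}

def ready_to_publish(env: dict) -> bool:
    """Publishing is gated on the human sign-off, never on the model alone."""
    return env["human_intervention"]["status"] == "approved"

print(ready_to_publish(envelope))  # → False
```

Gating on the sign-off field is what preserves the legal chain of custody the section describes.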

Case Study 1: The E-commerce Retailer and the A/B Testing Engine

Challenge: Slow and Costly Product Description Generation

A large e-commerce retailer needed to rapidly create product descriptions for thousands of new items across various categories. The human copywriting team was slow, and their A/B testing revealed that the descriptions lacked variation, leading to plateaued conversion rates.

Co-Pilot Brief Intervention:

The team implemented a Co-Pilot Brief that enforced the Iterative Experimentation Mandate. The brief dictated: 1) Persona Profile, 2) Output Length, and crucially, 3) Mandate: “Generate 5 variants that maximize different psychological triggers: Authority, Scarcity, Social Proof, Reciprocity, and Liking.” The AI delivered a rich portfolio of five distinct, strategically differentiated options for every product. The human team spent time selecting the best option and running the A/B test. This pivot increased the speed of description creation by 400% and—more importantly—increased the success rate of the A/B tests by 30%, proving the value of AI-directed variance.

Case Study 2: The Healthcare Network and Ethical Compliance Messaging

Challenge: Creating Sensitive, High-Compliance Patient Messaging

A national healthcare provider needed to draft complex, highly sensitive communication materials regarding new patient privacy laws (HIPAA) that were legally compliant yet compassionate and easy to understand. The complexity often led to dry, inaccessible language.

Co-Pilot Brief Intervention:

The team utilized a Co-Pilot Brief emphasizing Constraint Data and Unlearning Instructions. The brief included: 1) Full legal text and mandatory compliance keywords (Constraint Data), 2) Unlearning Instructions: “Avoid all medical jargon; do not use the passive voice; maintain a 6th-grade reading level; project a tone of empathetic assurance, not legal warning,” and 3) Success Metric: “Must achieve Flesch-Kincaid Reading Ease Score above 65.” The AI successfully generated drafts that satisfied the legal constraints while adhering to the reading ease metric. The human experts spent less time checking legal compliance and more time refining the final emotional tone, reducing the legal review cycle by 50% and significantly increasing patient comprehension scores.
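A reading-ease gate like that one is straightforward to automate. The sketch below implements the Flesch Reading Ease formula with a crude vowel-group syllable heuristic; a production pipeline would use a vetted readability library, and the sample draft is invented:

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count vowel groups; treat a trailing silent 'e' as
    # non-syllabic. Rough, but adequate for gating early drafts.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

draft = "We keep your health records private. You choose who can see them."
print(flesch_reading_ease(draft) > 65)  # plain wording clears the 65 bar
```

Wiring a check like this into the review loop lets the humans spend their time on tone, exactly as the case study describes.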

Conclusion: From Prompt Engineer to Strategic Architect

The Human-AI Co-Pilot Creative Brief is the most important new artifact for innovation teams. It forces us to transition from thinking of the AI as a reactive tool to treating it as a strategic partner that must be precisely directed. It demands that humans define the ethical boundaries, strategic intent, and success criteria, freeing the AI to do what it does best: explore the design space at speed. This elevates the human role from creation to strategic architecture.

“The value of a generative tool is capped by the strategic depth of its brief. The better the instructions, the higher the cognitive floor for the output.”

The co-pilot era is here. Your first step: Take your last successful creative brief and re-write the Objectives section entirely as a set of measurable, hard constraints and non-negotiable unlearning instructions for an AI.

Extra Extra: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: 1 of 950+ FREE quote slides available at http://misterinnovation.com

The Phygital Future

Designing Seamless Experiences Across Worlds

GUEST POST from Chateau G Pato

For too long, organizations have treated their physical and digital channels as separate silos, managed by different teams, budgets, and metrics. This disconnect is the root cause of friction, frustration, and failure in the modern customer journey. Customers do not think in channels; they think in experiences.

The future of customer engagement, employee empowerment, and service delivery is Phygital: the seamless, human-centered integration of the digital (technology, data, online) and the physical (locations, people, products). Phygital design is not about adding a screen to a store; it’s about using technology to dissolve the boundaries, focusing entirely on a single, continuous, and highly contextual journey. The goal is to maximize the utility and speed of the digital world while preserving the authenticity and human connection of the physical world.

The Failure of the Digital-First Mandate

The pendulum swung hard toward “Digital-First,” driven by efficiency and the push toward automation. While automation is vital, the pure digital-first mandate often fails at the last mile — the human interaction. Imagine a customer who spends 45 minutes online researching a product, only to have to repeat their entire story to an employee when they walk into a physical store. This is the Phygital Friction Gap — a moment where the digital intelligence is lost, forcing the human to restart the process. This failure occurs because the organization hasn’t designed the two worlds to share context, forcing the customer to carry the burden of the organization’s internal silos.

Phygital design solves this by recognizing that the highest value comes from the intersection, where the speed and intelligence of the digital world elevate the sensory and relational depth of the physical world.

Three Pillars of Seamless Phygital Design

Designing for the Phygital future requires a shift in mindset and strategy, moving from parallel channels to a single, interconnected Experience Architecture.

  1. Contextual Continuity:
    The fundamental rule of Phygital design is Never Ask the Customer to Repeat Themselves. The digital system must carry the customer’s intent, history, and context forward, regardless of the channel they jump to. This requires integrating the CRM, data analytics, and inventory systems so that an in-store associate can see the customer’s browsing history and cart status instantly via a mobile device.
  2. Human-Augmentation, Not Replacement:
    Technology should not be used to replace human interaction, but to augment the human professional. Use AI for mundane, high-volume tasks (data entry, scheduling) to free up employees to focus on high-value, high-empathy interactions (problem-solving, creative consultation). A Phygital environment uses digital intelligence to make the human associate smarter, faster, and more efficient.
  3. Experiential Utility and Delight:
    The physical space must be designed to maximize what the digital cannot offer — sensory experience, immediate gratification, and social connection. If a customer can buy the product cheaper and faster online, the store must offer a compelling reason to visit, such as interactive prototyping, localized expert advice, or a community event. Technology is used to add delight to the physical world, not just efficiency.

Case Study 1: Transforming the Bank Branch into a Consultation Hub

Challenge: Dying Relevance of the Physical Bank Branch

A major retail bank faced the imminent closure of many branches as customers shifted to mobile banking. The few customers who still visited branches were usually facing complex financial problems that demanded significant human expertise and time, clogging up service lines.

Phygital Intervention:

The bank didn’t just add tablets; they re-architected the entire journey. Customers were required to pre-book complex appointments through the mobile app (Digital). This allowed the digital system to collect context and queue the request to the correct specialist before the customer arrived. When the customer walked in, geo-fencing technology alerted the specialist (Physical) to the customer’s arrival. The specialist greeted the customer by name, already possessing their case history, eliminating the need to repeat their issue. This fusion of digital scheduling and physical, informed human contact cut wait times for complex issues by 70% and successfully repositioned the branch as a high-value Consultation Hub rather than a mere transaction counter.
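The geo-fencing trigger in that journey is conceptually simple: compare the customer's position to the branch location and alert the specialist inside a radius. A minimal sketch, with illustrative coordinates and a hypothetical appointment record:

```python
import math

BRANCH = (47.6062, -122.3321)      # illustrative branch coordinates (lat, lon)
GEOFENCE_RADIUS_M = 150            # alert when the customer is this close

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def maybe_alert_specialist(customer_pos, appointment):
    """Fire the arrival alert only once the customer enters the geofence."""
    if haversine_m(customer_pos, BRANCH) <= GEOFENCE_RADIUS_M:
        return f"Alert {appointment['specialist']}: {appointment['customer']} arriving"
    return None

appt = {"customer": "J. Rivera", "specialist": "Mortgage desk"}
print(maybe_alert_specialist((47.6063, -122.3322), appt))  # within the fence
print(maybe_alert_specialist((47.70, -122.33), appt))      # too far away: None
```

Production geofencing is handled by mobile OS location services rather than a hand-rolled distance check, but the trigger logic is the same.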

The Ethical Imperative: Transparency and Trust

As we design Phygital experiences, we must address the ethical imperative. The constant collection of data (from location tracking to browsing history) to enable seamlessness can be perceived as invasive. Phygital Trust is built on transparency: customers must understand what data is being used and why, and feel they have genuine control. The seamlessness of the experience should always feel helpful, never creepy.

Case Study 2: Supply Chain Visibility in Manufacturing

Challenge: Lack of Visibility and Trust Between Partners

A global industrial manufacturer struggled with complex, long-lead-time orders, leading to constant back-and-forth communication and mistrust with clients regarding production status. Clients wanted the assurance of seeing the physical process but couldn’t visit the plant.

Phygital Intervention:

The manufacturer implemented a real-time Digital Twin strategy. They placed IoT sensors on key machines and production stations (Physical) and aggregated this data onto a secure, cloud-based platform (Digital). This allowed the client, via a secure web portal, to see the exact stage and location of their custom component in the plant, complete with real-time video feed snapshots and verifiable production data. The physical asset became the source of truth, but the digital interface provided the constant, transparent access the client needed. This Phygital visibility didn’t just improve efficiency; it transformed the client relationship from transactional to one of deep, shared trust, proving the ROI of transparency.
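At its core, the digital-twin portal is an aggregation: IoT readings from physical stations are reduced to a per-order status view the client can query. A toy sketch with invented station names and payload fields:

```python
# Minimal sketch of the digital-twin idea: sensor readings from physical
# stations are aggregated into the per-order view the client portal serves.
readings = [
    {"order": "PO-881", "station": "machining", "ts": 100, "status": "done"},
    {"order": "PO-881", "station": "welding",   "ts": 180, "status": "in_progress"},
    {"order": "PO-902", "station": "machining", "ts": 150, "status": "in_progress"},
]

def twin_status(order_id, stream):
    """Latest reading per station for one order - the 'source of truth' view."""
    latest = {}
    for r in sorted(stream, key=lambda r: r["ts"]):  # newest reading wins
        if r["order"] == order_id:
            latest[r["station"]] = r["status"]
    return latest

print(twin_status("PO-881", readings))
```

A real deployment streams these readings through a message broker into time-series storage; the client-facing query is still this reduction.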

Conclusion: Experience Architecture is the New Battleground

The Phygital Future is here, and it demands that we stop designing for channels and start designing for the human journey. Leaders must champion Experience Architecture — a holistic view of the customer’s path. The organizations that win will be the ones that use the invisible power of data to create visible, human-first magic in the physical world.

“Phygital design is not about technology; it’s about context. It’s the art of giving the human everything they need, exactly where they need it, whether they are holding a smartphone or standing in a store.”

Your first step into the Phygital future: Map one critical customer journey and identify every point where Contextual Continuity is lost when the customer jumps from digital to physical. Eliminate that friction.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone all on the same page for change. Find out more about the methodology and tools, including the book Charting Change by following the link. Be sure and download the TEN FREE TOOLS while you’re here.

Image credit: Pixabay


4 Key Aspects of Robots Taking Our Jobs

4 Key Aspects of Robots Taking Our Jobs

GUEST POST from Greg Satell

A 2019 study by the Brookings Institution found that over 61% of jobs will be affected by automation. That comes on the heels of a 2017 report from the McKinsey Global Institute that found that 51% of total working hours and $2.7 trillion in wages are highly susceptible to automation, and a 2013 Oxford study that found 47% of jobs will be replaced.

The future looks pretty grim indeed until you start looking at jobs that have already been automated. Fly-by-wire was introduced in 1968, but today we’re facing a massive pilot shortage. The number of bank tellers has doubled since ATMs were introduced. Overall, the US is facing a massive labor shortage.

In fact, although the workforce has doubled since 1970, labor participation rates have risen by more than 10% since then. Everywhere you look, as automation increases, so does the demand for skilled humans. So the challenge ahead isn’t so much finding work for humans as preparing humans to do the types of work that will be in demand in the years to come.

1. Automation Doesn’t Replace Jobs, It Replaces Tasks

To understand the disconnect between all the studies that seem to be predicting the elimination of jobs and the increasingly dire labor shortage, it helps to look a little deeper at what those studies are actually measuring. The truth is that they don’t actually look at the rate of jobs being created or lost, but tasks that are being automated. That’s something very different.

To understand why, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So, it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. Yet the number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

In a similar way we don’t expect bank tellers to process transactions anymore, but to help us with things that we can’t do at an ATM. As the retail sector becomes more automated, demand for e-commerce workers is booming. Go to a highly automated Apple Store and you’ll find far more workers than at a traditional store, but we expect them to do more than just ring us up.

2. When Tasks Become Automated, They Become Commoditized

Let’s think back to what a traditional bank looked like before ATMs or the Internet. In a typical branch, you would see a long row of tellers there to process deposits and withdrawals. Often, especially on Fridays when workers typically got paid, you would expect to see long lines of people waiting to be served.

In those days, tellers needed to process transactions quickly or the people waiting in line would get annoyed. Good service was fast service. If a bank had slow tellers, people would leave and go to one where the lines moved faster. So training tellers to process transactions efficiently was a key competitive trait.

Today, however, nobody waits in line at the bank because processing transactions is highly automated. Our paychecks are usually sent electronically. We can pay bills online and get cash from an ATM. What’s more, these aren’t considered competitive traits, but commodity services. We expect them as a basic requisite of doing business.

In the same way, we don’t expect real estate agents to find us a house or travel agents to book us a flight or find us a hotel room. These are things that we used to happily pay for, but today we expect something more.

3. When Things Become Commodities, Value Shifts Elsewhere

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Still, the twentieth century became an era of unprecedented prosperity.

We’re in the midst of a similar transformation today. Just as our ancestors toiled in the fields, many of us today spend much of our time doing rote, routine tasks. However, as two economists from MIT explain in a paper, the jobs of the future are not white collar or blue collar, but those focused on non-routine tasks, especially those that involve other humans.

Consider the case of bookstores. Clearly, by automating the book buying process, Amazon disrupted superstore book retailers like Barnes & Noble and Borders. Borders filed for bankruptcy in 2011 and was liquidated later that same year. Barnes & Noble managed to survive but has been declining for years.

Yet a study at Harvard Business School found that small independent bookstores are thriving by adding value elsewhere, such as providing community events, curating titles and offering personal recommendations to customers. These are things that are hard to do well at a big box retailer and virtually impossible to do online.

4. Value Is Shifting from Cognitive Skills to Social Skills

20 or 30 years ago, the world was very different. High value work generally involved retaining information and manipulating numbers. Perhaps not surprisingly, education and corporate training programs were focused on teaching those skills and people would build their careers on performing well on knowledge and quantitative tasks.

Today, however, an average teenager has more access to information and computing power than a typical large enterprise had a generation ago, so knowledge retention and quantitative ability have largely been automated and devalued. High value work has shifted from cognitive skills to social skills.

Consider that the journal Nature has found that the average scientific paper today has four times as many authors as one did in 1950, and the work they are doing is far more interdisciplinary and done at greater distances than in the past. So even in highly technical areas, the ability to communicate and collaborate effectively is becoming an important skill.

There are some things that a machine will never do. Machines will never strike out at a Little League game, have their hearts broken or see their children born. That makes it difficult, if not impossible, for machines to relate to humans as well as a human can. The future of work is humans collaborating with other humans to design work for machines.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Practical Applications of AI for Human-Centered Innovation

Beyond the Hype

Practical Applications of AI for Human-Centered Innovation

GUEST POST from Chateau G Pato

The air is thick with the buzz of Artificial Intelligence. From Davos to daily headlines, the conversation often oscillates between utopian dreams and dystopian fears. As a thought leader focused on human-centered change and innovation, my perspective cuts through this noise: AI is not just a technology; it is a powerful amplifier of human capability, especially when applied with empathy and a deep understanding of human needs. The true innovation isn’t in what AI can do, but in how it enables humans to do more, better, and more humanely.

Too many organizations are chasing AI for the sake of AI, hoping to find a magic bullet for efficiency. This misses the point entirely. The most transformative applications of AI in innovation are those that don’t replace humans, but rather augment their unique strengths — creativity, empathy, critical thinking, and ethical judgment. This article explores practical, human-centered applications of AI that move beyond the hype to deliver tangible value by putting people at the core of the AI-driven innovation process. It’s about designing a future where humanity remains in the loop, guiding and benefiting from intelligent systems.

AI as an Empathy Amplifier: Deepening Understanding

Human-centered innovation begins with deep empathy for users, customers, and employees. Traditionally, gathering and synthesizing this understanding has been a labor-intensive, often qualitative, process. AI is revolutionizing this by giving innovators superpowers in understanding human context:

  • Sentiment Analysis for Voice of Customer (VoC): AI can process vast quantities of unstructured feedback — customer reviews, social media comments, call center transcripts — to identify emerging pain points, unspoken desires, and critical satisfaction drivers, often in real-time. This provides a granular, data-driven understanding of user sentiment that human analysts alone could never achieve at scale, leading to faster, more targeted product improvements.
  • Personalized Journeys & Predictive Needs: By analyzing behavioral data, AI can predict individual user needs and preferences, allowing for hyper-personalized product recommendations, customized learning paths, or proactive support. This moves from reactive service to anticipatory human care, boosting customer loyalty and reducing friction.
  • Contextualizing Employee Experience (EX): AI can analyze internal communications, HR feedback, and engagement surveys to identify patterns of burnout, identify skill gaps, or flag cultural friction points, allowing leaders to intervene with targeted, human-centric solutions that improve employee well-being and productivity. This directly impacts talent retention and operational efficiency.
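The VoC sentiment pipeline in the first bullet can be sketched with a deliberately tiny lexicon-based scorer. Production systems use trained models; the word lists and reviews below are invented, and only the pipeline shape (classify each item, aggregate into a summary) is the point:

```python
# Toy lexicon-based sketch of Voice-of-Customer sentiment scoring.
POSITIVE = {"love", "great", "fast", "helpful", "easy"}
NEGATIVE = {"slow", "broken", "confusing", "crash", "frustrating"}

def sentiment(text):
    """Classify one piece of feedback by counting lexicon hits."""
    words = set(text.lower().replace(".", " ").replace(",", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

reviews = [
    "Love the new dashboard, setup was easy",
    "Checkout is slow and confusing",
    "It does the job",
]
summary = {}
for r in reviews:
    label = sentiment(r)
    summary[label] = summary.get(label, 0) + 1
print(summary)
```

Scaled up to millions of reviews and transcripts with a real model, this aggregation is what surfaces pain points faster than manual reading ever could.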

“The best AI applications don’t automate human intuition; they liberate it, freeing us to focus on the ‘why’ and ‘how’ of human experience. This is AI as a partner, not a replacement.” — Braden Kelley


Case Study 1: AI-Powered User Research at Adobe

The Challenge:

Adobe, with its vast suite of creative tools, faces the constant challenge of understanding the diverse, evolving needs of millions of users — from professional designers to casual creators. Traditional user research (surveys, interviews, focus groups) is time-consuming and expensive, making it difficult to keep pace with rapid product development cycles and emerging user behaviors.

The AI-Powered Human-Centered Solution:

Adobe developed internal AI tools that leverage natural language processing (NLP) to analyze immense volumes of unstructured user feedback from forums, support tickets, app store reviews, and in-app telemetry. These AI systems identify recurring themes, emerging feature requests, and points of friction with remarkable speed and accuracy. Instead of replacing human researchers, the AI acts as an ‘insight engine,’ highlighting critical areas for human qualitative investigation. Researchers then use these AI-generated insights to conduct more focused, empathetic interviews and design targeted usability tests, ensuring human intelligence remains in the loop for crucial interpretation and validation.

The Innovation Impact:

This approach drastically accelerates the ideation and validation phases of Adobe’s product development, translating directly into faster time-to-market for new features. It allows human designers to spend less time sifting through data and more time synthesizing insights, collaborating on creative solutions, and directly interacting with users on the most impactful issues. Products are developed with a deeper, faster, and more scalable understanding of user pain points and desires, leading to higher adoption, stronger user loyalty, and ultimately, increased revenue.
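An ‘insight engine’ pass like the one described in this case study can be caricatured in a few lines: surface the most recurrent terms across unstructured feedback so human researchers know where to dig. Real systems use NLP topic models rather than raw word counts; the tickets and stopword list below are invented:

```python
from collections import Counter

# Hypothetical sketch of a theme-surfacing pass over unstructured feedback.
STOPWORDS = {"the", "is", "a", "to", "and", "my", "i", "it", "of", "in", "on"}

def top_themes(feedback, n=3):
    """Return the n most frequent non-stopword terms across all items."""
    words = Counter()
    for item in feedback:
        words.update(w for w in item.lower().split() if w not in STOPWORDS)
    return [w for w, _ in words.most_common(n)]

tickets = [
    "export to pdf is broken",
    "pdf export crashes large files",
    "please fix pdf export",
]
print(top_themes(tickets))
```

The output is not the insight itself; it is a pointer telling a human researcher which thread to pull in interviews.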


AI as a Creativity & Productivity Partner: Amplifying Output

Beyond empathy, AI is fundamentally transforming how human innovators generate ideas, prototype solutions, and execute complex projects, not by replacing creative thought, but by amplifying it while maintaining human oversight.

  • Generative AI for Ideation & Concepting: Large Language Models (LLMs) can act as powerful brainstorming partners, generating hundreds of diverse ideas, marketing slogans, or design concepts from a simple prompt. This allows human creatives to explore a broader solution space faster, finding novel angles they might have missed, thereby reducing ideation cycle time and boosting innovation output.
  • Automated Prototyping & Simulation: AI can rapidly generate low-fidelity prototypes from design specifications, simulate user interactions, or even predict the performance of a physical product before it’s built. This drastically reduces the time and cost of the early innovation cycle, making experimentation more accessible and leading to significant R&D savings.
  • Intelligent Task Automation (Beyond RPA): While Robotic Process Automation (RPA) handles repetitive tasks, AI goes further. It can intelligently automate the contextual parts of a job, managing schedules, prioritizing communications, or summarizing complex documents, freeing human workers for higher-value, creative problem-solving. This leads to increased employee satisfaction and higher strategic output.
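The generative-ideation pattern in the first bullet reduces to a loop: generate many candidates, de-duplicate, and hand a broad-but-small slate to the human. The generator below is a deterministic stand-in for a real LLM call (any actual model API is an assumption and not shown):

```python
import random

def fake_llm_generate(prompt, n, seed=0):
    """Stand-in for an LLM: emits n candidate ideas for a prompt."""
    rng = random.Random(seed)
    angles = ["speed", "trust", "cost", "simplicity", "community", "craft"]
    return [f"{prompt} - angle: {rng.choice(angles)}" for _ in range(n)]

def shortlist(prompt, n=20, keep=5):
    """Generate widely, de-duplicate, and return a small slate for human review."""
    seen, unique = set(), []
    for idea in fake_llm_generate(prompt, n):
        if idea not in seen:
            seen.add(idea)
            unique.append(idea)
    return unique[:keep]

ideas = shortlist("Tagline for a bicycle repair subscription")
print(len(ideas), ideas[0])
```

The human stays in the loop at the only step that matters creatively: judging which of the surviving angles is worth pursuing.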

Case Study 2: Spotify’s AI-Driven Music Discovery & Creator Tools

The Challenge:

Spotify’s core challenge is matching millions of users with tens of millions of songs, constantly evolving tastes, and emerging artists. Simultaneously, they need to empower artists to find their audience and create efficiently in a crowded market. Traditional human curation alone couldn’t scale to this complexity.

The AI-Powered Human-Centered Solution:

Spotify uses a sophisticated AI engine to power its personalized recommendation algorithms (Discover Weekly, Daily Mixes). This AI doesn’t just match songs; it understands context — mood, activity, time of day, and even the subtle social signals of listening. This frees human curators to focus on high-level thematic curation, editorial playlists, and breaking new artists, rather than sifting through endless catalogs. More recently, Spotify is also exploring AI tools for artists, assisting with everything from mastering tracks to suggesting optimal release times based on audience analytics, always with human creators retaining final creative control.
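The idea of context-aware recommendation described here can be sketched as a scoring function that combines taste match with a boost for the listener's moment. Spotify's actual models are learned from behavioral data at massive scale; the catalog, tags, and weights below are purely illustrative:

```python
# Toy sketch of context-aware recommendation: taste match plus a context boost.
catalog = [
    {"title": "Dawn Run",    "genres": {"electronic"}, "contexts": {"workout", "morning"}},
    {"title": "Slow Ember",  "genres": {"ambient"},    "contexts": {"evening", "focus"}},
    {"title": "Night Shift", "genres": {"electronic"}, "contexts": {"evening"}},
]

def recommend(taste, moment, items):
    """Pick the track whose genres match the listener's taste and whose
    context tags best fit the current moment (mood, activity, time of day)."""
    def score(item):
        taste_match = len(item["genres"] & taste)
        context_match = len(item["contexts"] & moment)
        return 2 * taste_match + context_match  # weight taste above context
    return max(items, key=score)["title"]

print(recommend({"electronic"}, {"evening"}, catalog))
```

Even in this caricature, the same taste produces different recommendations at different moments, which is exactly the contextual behavior the text describes.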

The Innovation Impact:

The AI system allows Spotify to deliver a highly personalized and human-feeling music discovery experience at an unimaginable scale, directly driving user engagement and subscriber retention. For artists, AI acts as a creative assistant and market intelligence tool, allowing them to focus on making music while gaining insights into audience behavior and optimizing their reach. This symbiotic relationship between human creativity and AI efficiency is a hallmark of human-centered innovation, resulting in a stronger platform ecosystem for both consumers and creators.

The future of innovation isn’t about AI replacing humans; it’s about AI elevating humanity. By focusing on how AI can amplify empathy, foster creativity, and liberate us from mundane tasks, we can build a future where technology truly serves people. This requires a commitment to responsible AI development — ensuring fairness, transparency, and human oversight. The challenge for leaders is not just to adopt AI, but to design its integration with a human-centered lens, ensuring it empowers, rather than diminishes, the human spirit of innovation, and delivers measurable value across the organization.


Image credit: Unsplash


3 Steps to Find the Horse’s A** In Your Company (and Create Space for Innovation)

3 Steps to Find the Horse's A** In Your Company (and Create Space for Innovation)

GUEST POST from Robyn Bolton

Innovation thrives within constraints.

Constraints create the need for questions, creative thinking, and experiments.

But as real as constraints are and as helpful as they can be, don’t simply accept them. Instead, question them, push on them, and explore around them.

But first, find the horse’s a**

How Ancient Rome influenced the design of the Space Shuttle

In 1974, Thiokol, an aerospace and chemical manufacturing company, won the contract to build the solid rocket boosters (SRBs) for the Space Shuttle. The SRBs were to be built in a factory in Utah and transported to the launch site via train.

The train route ran through a mountain tunnel that was just barely wider than the tracks.

The standard width of railroad tracks (the distance between the rails, or the railroad gauge) in the US is 4 feet 8.5 inches, which means that Thiokol’s engineers needed to design SRBs that could fit through a tunnel that was slightly wider than 4 feet 8.5 inches.

4 feet 8.5 inches wide is a constraint. But where did such an oddly specific constraint come from?

The designers and builders of America’s first railroads were the same people and companies that built England’s tramways. Using the existing tramway tools and equipment to build railroads was more efficient and cost-effective, so railroads ended up with the same gauge as tramways – 4 feet 8.5 inches.

The designers and builders of England’s tramways were the same businesses that, for centuries, built wagons. Wanting to use their existing tools and equipment (it was more efficient and cost-effective, after all), the wagon builders built tramways with the same distance between the rails as wagons had between wheels – 4 feet 8.5 inches.

Wagon wheels were 4 feet 8.5 inches apart to fit into the well-worn grooves in most old European roads. The Romans built those roads, and Roman chariots made those grooves. Horses pulled those chariots, and the combined width of the two horses pulling each one was, you guessed it, 4 feet 8.5 inches.

To recap – the width of two horses’ a**es (approximately 4 feet 8.5 inches) determined the distance between wheels on the Roman chariots that wore grooves into ancient roads. Those grooves ultimately dictated the width of wagon wheels, tramways, railroad tracks, a mountain tunnel, and the Space Shuttle’s SRBs.

How to find the horse’s a**

When you understand the origin of a constraint, aka find the horse’s a**, it’s easier to find ways around it or to accept and work with it. You can also suddenly understand and even anticipate people’s reactions when you challenge the constraints.

Here’s how you do it – when someone offers a constraint:

  1. Thank them for being honest with you and for helping you work more efficiently.
  2. Find the horse’s a** by asking questions to understand the constraint – why it exists, what it protects, the risk of ignoring it, who enforces it, and what happened to the last person who challenged it.
  3. Find your degrees of freedom by paying attention to their answers and how they give them. Do they roll their eyes in knowing exasperation? Shrug their shoulders in resignation? Become animated and dogmatic, agitated that someone would question something so obvious?

How to use the horse’s a** to innovate

You must do all three steps because stopping short of Step 3 stops creativity in its tracks.

If you stop after Step 1 (which most people do), you only know the constraint, and you’ll probably be tempted to take it as fixed. But maybe it’s not. Perhaps it’s just a habit or heuristic waiting to be challenged.

If you do all three steps, however, you learn tons of information about the constraint, how people feel about it, and the data and evidence that could nudge or even eliminate it.

At the very least, you’ll understand the horse’s a** driving your company’s decisions.

Image credit: Pixabay

Endnotes:

  1. To be very clear, the origin of the constraint is the horse’s a**. The person telling you about the constraint is NOT the horse’s a**.
  2. The truth is never as simple as the story, and railroads used to come in different gauges. For a deeper dive into this “more true than not” story (and an alternative theory that it was the North’s triumph in the Civil War that influenced the design of the SRBs), click here.


Top 5 Tech Trends Artificial Intelligence is Monitoring

Top 5 Tech Trends Artificial Intelligence is Monitoring

GUEST POST from Art Inteligencia

Artificial Intelligence is constantly scanning the Internet to identify the technology trends that are the most interesting and potentially the most impactful. At present, according to artificial intelligence, the Top Five Technology Trends being tracked for futurology are:

1. Artificial Intelligence (AI): Artificial Intelligence is the development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other.

2. Autonomous Vehicles: Autonomous vehicles are vehicles that can navigate without human input, relying instead on sensors, GPS, and computer technology to determine their location and trajectory. Autonomous vehicles are used in a variety of applications, from consumer transportation to military drones.

3. Virtual Reality (VR): Virtual reality is a computer-generated simulation of a three-dimensional environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment. VR uses technologies such as gesture control and stereoscopic displays to create immersive experiences for the user.

4. Augmented Reality (AR): Augmented reality is a technology that superimposes computer-generated content onto the real world to enhance or supplement a user’s physical experience. AR is used in a variety of contexts, from gaming to industrial design.

5. Internet of Things (IoT): The Internet of Things is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, and connectivity that enable these objects to connect and exchange data. The IoT has the potential to revolutionize many aspects of our lives, from manufacturing and transportation to healthcare and energy management.

It’s obviously amusing that artificial intelligence considers artificial intelligence to be the number one technology trend at present in its futurology work. I would personally rank it number one, but I would rank autonomous vehicles and virtual reality lower. I would put augmented reality and IoT number two and number three respectively, but what do I know …

Image credit: Pixabay
