
Has AI Killed Design Thinking?

Or Just Removed Its Excuses?

LAST UPDATED: March 2, 2026 at 5:13 PM


by Braden Kelley and Art Inteligencia


I. The Question Everyone Is Whispering

Something fundamental has changed in how products are created.

Artificial intelligence can now generate working software in minutes. Designers can move from an idea to a functional prototype without waiting for engineering. Engineers can generate interface concepts, user flows, and even early product ideas with a few well-crafted prompts.

The traditional product development cycle — design, then build, then test — is collapsing into something faster, messier, and far more fluid.

In the past, the biggest constraint in innovation was the cost and time required to build something. Today, AI dramatically reduces that barrier. Entire features, experiments, and even applications can be created almost instantly.

Which raises an uncomfortable question that many product leaders, designers, and engineers are quietly asking:

If we can ship almost immediately, do we still need design thinking?

At first glance, the answer might seem obvious. Design thinking was created to help teams understand people, define the right problems, and avoid building the wrong solutions. Those goals have not disappeared.

But when the cost of building approaches zero, the role of design inevitably changes. The traditional pacing of discovery, ideation, prototyping, and testing begins to compress. The boundaries between designer and engineer begin to blur.

And as those boundaries dissolve, the question is no longer simply whether design thinking still matters.

The deeper question is whether the discipline itself must evolve to survive in a world where almost anyone can turn an idea into working software.

II. Design Thinking Was Built for a World of Scarcity

To understand how artificial intelligence is reshaping product creation, it helps to remember the environment in which design thinking originally emerged.

Design thinking did not appear because organizations suddenly discovered empathy or creativity. It emerged because building things was expensive, slow, and risky. Every product decision carried significant cost, and mistakes could take months or years to correct.

In that world, organizations needed a structured way to reduce uncertainty before committing engineering resources. Design thinking provided that structure.

Its now-famous stages helped teams move deliberately from understanding people to building solutions:

  • Empathize — deeply understand the people you are designing for.
  • Define — frame the real problem worth solving.
  • Ideate — generate a wide range of possible solutions.
  • Prototype — create rough representations of potential ideas.
  • Test — validate whether those ideas actually work for people.

The goal was simple: avoid spending months building something no one actually needed.

Design thinking slowed teams down in the right places so they could move faster later. It created space for exploration before the heavy machinery of engineering was set in motion.

But this entire framework assumed one critical constraint:

Building was the most expensive part of innovation.

Prototypes were often static mockups. Experiments required engineering time. Even small product changes could take weeks or months to ship.

In other words, design thinking was optimized for a world where the biggest risk was building the wrong thing.

Today, AI is rapidly changing that assumption. When working software can be generated in minutes rather than months, the bottleneck shifts — and the role of design must evolve with it.

III. AI Has Flipped the Innovation Constraint

For most of the history of digital product development, the limiting factor in innovation was the ability to build. Even the best ideas had to wait in line for scarce engineering resources, long development cycles, and complex release processes.

Artificial intelligence is rapidly dismantling that constraint.

Today, AI tools can generate functional code, working interfaces, and interactive prototypes in minutes. What once required a team of specialists and weeks of effort can often be produced by a single individual in an afternoon.

Designers can now:

  • Create interactive prototypes that behave like real products
  • Generate front-end code directly from design concepts
  • Rapidly explore multiple product directions

Engineers can now:

  • Generate user interfaces and layouts
  • Experiment with product concepts before committing to full builds
  • Quickly iterate on product experiences

The barrier between idea and implementation is shrinking dramatically.

As a result, the core constraint in innovation is no longer the ability to build something. The new constraint is the ability to decide what should actually be built.

When creation becomes cheap, judgment becomes the scarce resource.

Organizations can now generate more ideas, features, and experiments than they have the capacity to evaluate thoughtfully. The risk is no longer simply building the wrong thing slowly.

The risk is building thousands of things quickly without enough clarity about which ones actually matter.

This shift fundamentally changes the role of design. Instead of primarily helping teams avoid costly mistakes in development, design increasingly becomes the discipline that helps organizations navigate overwhelming possibility.

IV. The Blurring of Roles: Designers Reach Forward, Engineers Reach Back

One of the most profound effects of AI in product development is the erosion of traditional professional boundaries.

For decades, the technology industry operated with relatively clear separations of responsibility. Designers focused on user needs, interaction models, and visual systems. Engineers translated those designs into working software. Product managers coordinated priorities and timelines between the two.

That structure was largely a reflection of technical limitations. Designing and building required specialized tools, knowledge, and workflows that made cross-disciplinary work difficult.

AI is rapidly dissolving those barriers.

Designers can now reach forward into the domain that once belonged exclusively to engineering. With AI-assisted tools, they can generate working interfaces, produce front-end code, and simulate complex user interactions without waiting for implementation.

At the same time, engineers can reach backward into design. AI systems can help them generate layouts, propose interface structures, and explore experience flows that once required specialized design expertise.

The result is a new kind of creative overlap:

  • Designers who can prototype in code
  • Engineers who can explore experience design
  • Product creators who move fluidly between disciplines

The traditional model of work moving through a linear chain — research to design to engineering — begins to give way to a far more integrated creative process.

The future product creator is not defined by a job title, but by the ability to move fluidly between understanding problems and building solutions.

This does not mean design expertise or engineering skill become less important. If anything, the opposite is true. As tools make it easier for everyone to participate in creation, the depth of real craft becomes more visible and more valuable.

But it does mean the rigid boundaries between “designer” and “builder” are beginning to dissolve, creating a new generation of hybrid creators who can move seamlessly between imagining, designing, and shipping experiences.

V. The Death of the Handoff

For decades, most product development operated like a relay race. Work moved from one team to the next through a series of formal handoffs.

Researchers gathered insights and passed them to designers. Designers created wireframes and mockups that were handed to engineering. Engineers translated those designs into working software and eventually passed the finished product to testing and operations.

Each transition introduced delays, misinterpretations, and loss of context. The original understanding of the problem often became diluted as it traveled through the system.

Artificial intelligence is accelerating the collapse of this model.

When individuals can move rapidly from idea to prototype to functional product, the need for rigid handoffs begins to disappear. A single person can now:

  • Explore a user problem
  • Design a potential solution
  • Generate working code
  • Launch an experiment

Instead of waiting for work to pass from one discipline to another, creators can stay connected to the entire lifecycle of an idea.

The distance between insight and implementation is shrinking.

This shift has profound implications for how innovation happens inside organizations. Instead of large teams coordinating complex handoffs, smaller groups — or even individuals — can rapidly test ideas and learn from real-world feedback.

Product development begins to look less like an industrial assembly line and more like a creative studio, where ideas are explored, built, and refined continuously.

The most effective teams in this environment will not simply move faster. They will maintain ownership of ideas from the moment a problem is discovered all the way through to the moment a solution is experienced by real people.

VI. What AI Actually Kills

Artificial intelligence is not killing design thinking.

What it is killing are many of the habits that organizations adopted in the name of design thinking but that were never truly about understanding people or solving meaningful problems.

For years, some teams have mistaken the appearance of innovation for the practice of it. Workshops replaced experiments. Sticky notes replaced decisions. Slide decks replaced prototypes.

When building was slow and expensive, these behaviors were often tolerated because teams needed time to align before committing resources. But in a world where working solutions can be generated almost instantly, those habits quickly become friction.

AI removes the excuses that allowed these patterns to persist.

Process Theater

Innovation workshops that generate energy but not outcomes become difficult to justify when teams can build and test ideas immediately.

Endless Ideation

Brainstorming sessions that produce dozens of ideas without committing to experiments lose their value when ideas can be rapidly turned into prototypes and evaluated in the real world.

Documentation Instead of Exploration

Detailed reports, long strategy decks, and static artifacts once helped communicate ideas across teams. But when AI allows concepts to be expressed through working experiences, documentation becomes less important than experimentation.

Safe Innovation

Perhaps most importantly, AI challenges organizations that use process as a shield against risk. When it becomes easy to test bold ideas quickly and cheaply, avoiding experimentation becomes a choice rather than a necessity.

AI doesn’t eliminate design thinking. It eliminates the distance between thinking and doing.

The organizations that thrive in this environment will not be the ones with the most polished innovation processes. They will be the ones that are most willing to replace discussion with discovery and ideas with experiments.

Has AI Killed Design Thinking Infographic

VII. The New Role of Design: Decision Velocity

When the cost of building drops dramatically, the nature of competitive advantage changes.

In the past, organizations succeeded by efficiently transforming ideas into products. Engineering capacity, technical expertise, and operational discipline were often the primary constraints.

But when AI can generate working software, prototypes, and experiments almost instantly, the challenge is no longer how quickly something can be built.

The challenge becomes how quickly and wisely teams can decide what is actually worth building.

In an AI-driven world, innovation speed is no longer about development velocity — it is about decision velocity.

This is where the role of design evolves.

Design shifts from primarily producing artifacts — wireframes, mockups, and prototypes — to guiding the choices that shape meaningful innovation.

Designers increasingly become the people who help teams:

  • Frame the right problems to solve
  • Clarify human needs and motivations
  • Prioritize which ideas deserve experimentation
  • Interpret signals from real-world user behavior

In other words, design becomes less about shaping the interface of a product and more about shaping the direction of learning.

When organizations can generate thousands of potential solutions, the real value lies in identifying the small number that actually create meaningful value for people.

Designers, at their best, help organizations navigate that complexity. They connect technology to human context, helping teams avoid the trap of building faster without thinking better.

In the AI era, design is not slowing innovation down. It is helping organizations move quickly without losing their sense of where they should be going.

VIII. From Design Thinking to Design Doing

As artificial intelligence compresses the distance between idea and implementation, the nature of design practice begins to change. The emphasis shifts away from structured stages and toward continuous experimentation.

Traditional design thinking frameworks helped teams organize their thinking before committing to build. But in an AI-enabled environment, building itself becomes part of the thinking process.

Instead of long cycles of analysis followed by development, teams can now explore ideas directly through working prototypes and rapid experiments.

The most effective teams no longer separate thinking from building. They think by building.

This shift marks a move from design thinking to what might be called design doing.

In this model, learning happens through fast cycles of creation, feedback, and refinement. Ideas are not debated endlessly in workshops or captured in lengthy documents. They are explored through tangible experiences that can be observed, tested, and improved.

The practical differences begin to look like this:

  Traditional Model → AI-Enabled Model

  • Workshops and brainstorming sessions → Rapid experiments and live prototypes
  • Personas and research summaries → Behavioral data and real-world signals
  • Concept mockups → Functional prototypes
  • Long planning cycles → Continuous learning loops

None of this diminishes the importance of understanding people. If anything, the need for deep human insight becomes even more important as the pace of experimentation accelerates.

What changes is how that understanding is expressed. Instead of existing primarily as documents or presentations, insight becomes embedded directly into the experiences teams create and test.

In an AI-native organization, design is no longer a phase that happens before development begins. It becomes an ongoing activity woven directly into the act of building and learning.

IX. Human Trust Becomes the New Design Material

As artificial intelligence accelerates the speed of building, the most important design challenges begin to shift away from usability and toward something deeper: trust.

When products can be created, modified, and deployed almost instantly, the risk is not simply poor interface design. The risk is creating experiences that feel disconnected from human values, human context, and human expectations.

AI makes it easier than ever to generate functionality. But it does not automatically ensure that what is generated is responsible, understandable, or aligned with the needs of the people who will use it.

In an AI-driven world, the most important design material is no longer pixels or screens — it is human trust.

This raises a new set of responsibilities for designers, engineers, and product leaders alike.

Teams must think carefully about questions such as:

  • Do people understand what the system is doing?
  • Are decisions being made transparently?
  • Does the experience respect human autonomy?
  • Does the technology reinforce or erode confidence?

As AI systems become more powerful, the danger is not just that they might fail. The danger is that they might succeed in ways that quietly undermine the relationship between organizations and the people they serve.

Design therefore becomes a critical safeguard. It ensures that rapid technological capability does not outpace thoughtful consideration of human consequences.

In this sense, the role of design expands beyond shaping products. It becomes the discipline that ensures technology remains grounded in human meaning, responsibility, and trust.

X. The Future: Designers Who Ship, Engineers Who Empathize

As AI blurs the traditional boundaries between design and engineering, the most valuable creators in the future will be those who can move fluidly between imagining, designing, and building.

Designers will need to ship working products, not just static prototypes. Engineers will need to empathize deeply with users, understanding problems and shaping experiences that align with human needs.

The new hybrid product creator embodies both curiosity and capability, bridging the gap between thinking and doing. They are able to:

  • Rapidly translate insights into working solutions
  • Experiment and learn from real-world user behavior
  • Balance technical feasibility with human desirability
  • Maintain alignment between strategy, design, and execution

In this new landscape, design thinking does not disappear — it evolves. AI removes many of the barriers that previously prevented designers and engineers from collaborating fully and iterating quickly.

The organizations that succeed will be those where everyone has the ability to both understand humans and act on that understanding at the speed of AI.

The future belongs to hybrid creators who can navigate ambiguity, make fast decisions, and embed human trust into every experiment. In such a world, innovation is no longer the domain of specialists — it is the responsibility of anyone capable of connecting insight with action.

XI. The Real Question Leaders Should Be Asking

The debate is often framed as a dramatic question: “Has AI killed design thinking?” But this framing misses the deeper challenge facing organizations today.

The real question is not whether design thinking survives — it is whether organizations are prepared to operate in a world where anyone can turn ideas into working products almost instantly.

In this AI-accelerated environment, success depends less on the speed of coding or the elegance of design frameworks. It depends on human judgment, understanding, and alignment.

Leaders must ask themselves:

  • Do our teams know what problems are truly worth solving?
  • Can we prioritize experiments that create real human value?
  • Are we embedding human trust and ethical consideration into everything we build?
  • Are our designers and engineers equipped to operate across traditional boundaries?

In this new era, the organizations that thrive will not be the ones with the fastest developers or the slickest design processes.

They will be the organizations that can rapidly identify meaningful opportunities, make thoughtful decisions, and maintain human-centered principles while moving at the speed of AI.

Innovation will no longer belong to the people who can code. It will belong to the people who understand humans well enough to know what should be built in the first place.

The role of leadership is no longer just managing workflows — it is shaping the environment in which hybrid creators can think, act, and build responsibly at unprecedented speed.

New Tools for the New Design Reality

Get the new design thinking downloads

To help you find problems worth solving and to design and execute experiments, I created a couple of visual and collaborative tools to help you thrive in this new reality. Download them both from my store and enjoy!

  1. Problem Finding Canvas — Only $4.99 for a limited time
  2. Experiment Canvas — FREE

FAQ: AI and the Evolution of Design Thinking

1. Has AI made design thinking obsolete?
No. AI has not killed design thinking, but it has changed the context in which it operates. Traditional design thinking frameworks assumed that building was slow and expensive. With AI accelerating the creation of prototypes and software, design thinking evolves from a staged process into a continuous cycle of experimentation and decision-making.
2. How are the roles of designers and engineers changing with AI?
AI blurs the traditional boundaries between designers and engineers. Designers can now generate working code and functional prototypes, while engineers can explore user experience and interface design. The future favors hybrid creators who can both understand human needs and rapidly implement solutions.
3. What becomes the main focus of design in an AI-driven product environment?
The primary focus shifts from producing artifacts to guiding decision-making and protecting human trust. Design becomes the discipline that helps teams prioritize meaningful experiments, interpret real-world feedback, and ensure that rapid technological development remains aligned with human values and needs.


Image credits: ChatGPT

Content Authenticity Statement: The topic area, key elements to focus on, etc. were decisions made by Braden Kelley, with a little help from ChatGPT to clean up the article and add citations.

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?


GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also, of course, a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing, and most of us, me included, who have interacted with AI at any length have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: This humanizing of the user interface brings multiple benefits. It is part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of the output. Furthermore, the illusion of human-like intelligence implicitly suggests emotions, compassion and critical thinking behind the output, even if they are not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society that we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information have come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few of the ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but we still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, and the empathy, emotional intelligence and compassion that comes with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or to add context in the way a human can.

The Uncanny Valley: That brings me to the uncanny valley. This describes what happens when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously wrong, but often still ‘felt’ wrong. Early drum machines were so perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and so had to be modified to include randomness that was below the threshold of conscious awareness but made them ‘feel’ real.
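The drum-machine trick described above — adding randomness small enough to escape conscious notice — can be sketched in a few lines of Python. This is purely illustrative: the function name, the 8 ms jitter ceiling, and the timing values are my assumptions, not taken from any particular sequencer.

```python
import random

def humanize(hit_times_ms, max_jitter_ms=8.0, seed=None):
    """Offset perfectly quantized drum hits by a few milliseconds.

    The jitter is kept below roughly 10 ms, a commonly cited
    threshold of conscious timing perception, so the groove
    'feels' human without sounding sloppy. (The 8 ms default
    is an illustrative assumption.)
    """
    rng = random.Random(seed)
    return [t + rng.uniform(-max_jitter_ms, max_jitter_ms)
            for t in hit_times_ms]

# One bar of perfectly quantized 16th notes at 120 BPM
# (16th notes fall every 125 ms).
quantized = [i * 125.0 for i in range(16)]
humanized = humanize(quantized, seed=42)
```

Each hit lands within a few milliseconds of its grid position: measurably imperfect, but below the listener's conscious threshold, which is exactly the kind of implicit cue discussed here.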

This difference between conscious and unconscious evaluation creates cognitive dissonance that can make content feel odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’: over-photoshopping in a print ad, for example, or poor CGI. But be careful what you wish for. AI appears to have marched through the uncanny valley to the point where its output feels human. But despite feeling right, it may still lack the ethical, moral or emotional framework of the human responses it mimics.

This raises a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an enemy to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, or exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that used to play a key role in translating search results into output can now be skipped. Synthesizing and processing output from an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough to even realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose that can be cut and pasted directly into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that brings up another side effect of low engagement with output: confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and it is still capable of making really stupid mistakes. Okay, so are humans, but AI is still far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting it, ask ourselves, and the AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much the same as we would, or should, to our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is that we let it become an oracle. We not only treat its output as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparently human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger: instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has eroded our basic writing ability, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water in data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Beyond UI/UX: Crafting Truly Holistic Human Experiences

GUEST POST from Art Inteligencia

From my vantage point here in America, I’ve observed a growing tendency to equate human-centered design solely with UI (user interface) and UX (user experience). While these elements are undoubtedly crucial, they represent only a fraction of what it truly means to craft holistic human experiences. True innovation in this space requires us to look beyond the screen and consider the entire journey, encompassing not just usability and aesthetics, but also emotional resonance, social impact, and long-term well-being.

The focus on UI/UX has brought significant improvements to the digital products we use every day, making them more intuitive and visually appealing. However, a beautifully designed interface or a seamless user flow is insufficient if the underlying service or product fails to meet deeper human needs or creates negative externalities. Think of a highly addictive social media app with a flawless UX but detrimental effects on mental health, or a convenient delivery service that contributes to unsustainable traffic congestion and gig worker precarity. These examples highlight the limitations of a design approach that stops at the surface level.

Crafting truly holistic human experiences demands a broader perspective, one that considers the entire ecosystem surrounding a product or service. It requires us to empathize not just with the direct user, but with all stakeholders impacted, including employees, communities, and the environment. This involves moving beyond user-centricity to a more human-centric approach, where we consider the broader consequences of our creations and strive to design solutions that contribute to overall human flourishing. Key elements of this holistic approach include:

  • Emotional Resonance: Designing for positive emotional connections and memorable moments throughout the entire experience, not just during direct interaction with a digital interface.
  • Ethical Considerations: Proactively addressing potential negative consequences, biases, and unintended harms that our creations might inflict on individuals or society.
  • Accessibility and Inclusivity: Designing experiences that are usable and equitable for people of all abilities, backgrounds, and contexts.
  • Service Design Integration: Mapping the entire customer journey, both online and offline, to identify opportunities for improvement and ensure a consistent and positive experience across all touchpoints.
  • Sustainability and Impact: Considering the environmental and social impact of our designs throughout their lifecycle, striving for solutions that are both beneficial and sustainable.

Case Study 1: Airbnb – Beyond the Booking Interface

The Initial Focus: Streamlining the Accommodation Search

Initially, Airbnb’s primary focus was on creating a user-friendly platform for finding and booking accommodations. Their UI and UX were designed to make this process as seamless and efficient as possible. However, as the platform grew, Airbnb recognized that the true value proposition extended far beyond the transaction itself.

Crafting a Holistic Experience:

Airbnb began to focus on the entire travel experience, recognizing that it encompasses not just finding a place to stay but also the sense of connection with a local community. They introduced “Experiences,” allowing travelers to book unique activities led by local hosts, fostering cultural exchange and deeper connections. They also invested in building trust and safety within their community through enhanced verification processes and host-guest communication tools. Furthermore, they have begun to address their environmental impact through initiatives aimed at promoting sustainable travel. By expanding their focus beyond the booking interface, Airbnb aimed to create a more holistic and enriching human experience for both travelers and hosts.

The Results:

Airbnb’s evolution beyond a simple booking platform has led to increased customer loyalty and a stronger brand identity. The introduction of “Experiences” has diversified their revenue streams and provided unique value to travelers seeking more than just a place to sleep. Their focus on trust and safety has been crucial for scaling their community globally. By considering the broader human needs and the wider impact of their platform, Airbnb has moved beyond providing a service to facilitating meaningful human experiences centered around travel and connection.

Key Insight: Truly holistic design considers the entire user journey and seeks to create meaningful connections and positive impact beyond the core functionality of a product or service.

Case Study 2: IDEO and the Redesign of the Hospital Experience

The Initial Challenge: Focusing on Clinical Efficiency

Traditional hospital design often prioritizes clinical efficiency and medical needs, sometimes at the expense of the patient’s emotional and psychological well-being. While UI/UX might apply to digital interfaces within the hospital, the overall patient experience can feel sterile, confusing, and disempowering.

A Human-Centered Approach to Service Design:

Design firm IDEO has worked with numerous healthcare organizations to redesign the entire hospital experience from a human-centered perspective. This goes far beyond the layout of rooms or the design of medical devices. They have focused on understanding the emotional journey of patients and their families, identifying pain points and opportunities for creating a more supportive and healing environment. This includes rethinking communication between staff and patients, improving wayfinding, creating more comfortable waiting areas, and even designing systems that empower patients to have more control over their care. Their approach considers all touchpoints, both physical and digital, to create a cohesive and empathetic experience.

The Results:

IDEO’s holistic design approach in healthcare has led to significant improvements in patient satisfaction, reduced anxiety, and even better clinical outcomes. By focusing on the emotional and psychological needs of patients, they have transformed the hospital experience from a purely clinical one to a more human and supportive one. Their work demonstrates that truly impactful design considers the entire service ecosystem and aims to create positive experiences for all stakeholders, not just the direct users of a specific interface. This comprehensive approach recognizes that healing involves more than just medical treatment; it also requires emotional support and a sense of well-being.

Key Insight: Holistic human experience design in complex service environments like healthcare requires mapping the entire journey and addressing emotional, physical, and informational needs across all touchpoints.

Moving Towards a More Human-Centered Future

As we continue to innovate here in America and beyond, it’s crucial that we broaden our definition of design to encompass the full spectrum of human experience. By moving beyond a narrow focus on UI/UX and embracing a more holistic, human-centered approach, we can create products, services, and systems that not only are usable and aesthetically pleasing but also contribute to emotional well-being, ethical considerations, accessibility, and a sustainable future. The true power of design lies in its ability to shape not just interfaces, but entire human experiences that are both meaningful and beneficial in the long run. It’s time to design for humanity, in its fullest sense.

Image credit: Unsplash
