Oh, what a difference a year makes. A few months ago I traveled to Las Vegas to attend Customer Contact Week (CCW), the largest conference and trade show in the contact center industry. For the past several years, the big discussion has centered on artificial intelligence (AI), and that continues, but Customer Experience (CX) is also moving into the spotlight. AI and natural language models can give customers an almost human-like experience when they have a question or complaint. However, no surprise, some companies do it better than others.
First, all the hype around AI is not new. AI has been in our lives for decades, just at a much simpler level. How do you think Outlook and other email companies recognize that an email is spam and belongs in the junk/spam folder? Of course, it’s not 100% perfect, and neither are today’s best AI programs.
Many of us use Siri and Alexa. That’s AI. And as simple as that is, it’s obviously more sophisticated when you apply it to customer support and CX.
Let’s go back 10 years to when I attended the IBM Watson conference in Las Vegas. The big hype then was around AI. There were some incredible cases of AI changing customer service, sales and marketing, not to mention automating processes. One of the demonstrations during the general session showcased AI’s stunning capability. Here’s what I saw:
A customer called the contact center. While the customer service agent listened to the customer, the computer (fueled by AI) listened to the conversation and fed the agent answers without the agent typing the questions. In addition, the computer informed the agent how long the customer had been doing business with the company, how often they made purchases, what products they had bought and more. The computer also compared this customer to others who had asked similar questions and suggested that the agent answer those questions proactively. Even though the customer didn’t yet know to ask them, they would surely have called back about them at some point.
That demonstration was a preview of what we have today. One big difference is that implementing that type of solution back then could have cost hundreds of thousands of dollars, if not more than a million. Today, that technology is affordable to almost any company, costing a fraction of what it cost back then (as in just a few thousand dollars).
Voice Technology Gets Better
Less than two years ago, ChatGPT was introduced to the world, and similar technologies have followed. The capability continues to improve at an incredibly rapid pace. The response from an AI-fueled chatbot is lightning fast. Now, the technology is moving to voice. Rather than type a question for the chatbot, you talk, and it responds in a human-like voice. While voice technology has existed for years, it’s never been this good. Google introduced voice technology that seemed almost human-like. The operative word here is almost. As good as it was, people could still sense they weren’t talking to a human. Today, the best systems are human-like, not almost human-like. Think Alexa and Siri on steroids.
Foreign Accents Are Disappearing
We’ve all experienced calling customer support, and an offshore customer service agent with a heavy accent answers the call. Sometimes, it’s nearly impossible to understand the agent. New technologies are neutralizing accents. A year ago, the software sounded a little “digital.” Today, it sounds almost perfect.
Why Customers Struggle with AI and Other Self-Service Solutions
As far as these technologies have come, customers still struggle to accept them. Our customer service research (sponsored by RingCentral) found that 63% of customers are frustrated by self-service options, such as ChatGPT and similar technologies. Furthermore, 56% of customers admit to being scared of these technologies. Even though 32% of the customers surveyed said they had successfully resolved a customer service issue using AI or ChatGPT-type technologies, it’s not their top preference, as 70% still choose the phone as their first level of support. Inconsistency is part of the problem. Some companies still use old technology. The result is that the customer experience varies from company to company. In other words, customers don’t know whether the next AI solution they encounter will be good or not. Inconsistency destroys trust and confidence.
Companies Are Investing in Creating a Better CX
I’ve never been more excited about customer service, CX and the contact center. The main reason is that almost everything about this conference was focused on creating a better experience for the customer. The above examples are just the tip of the iceberg. Companies and brands know what customers want and expect. They know the only way to keep customers is to give them a product that works with an experience they can count on. Price is no longer a barrier as the cost of some of these technologies has dropped to a level that even small companies can afford.
Customer Service Goes Beyond Technology: We Still Need People!
This article focused on the digital experience rather than the traditional human experience. But to nail it for customers, a company can’t invest in just tech. It must also invest in its employees. Even the best technology doesn’t always get the customer what they need, which means the customer will be transferred to a live agent. That agent must be properly trained to deliver the experience that gets customers to say, “I’ll be back.”
At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?
But enough delay, here are November’s ten most popular innovation posts:
If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!
SPECIAL BONUS: While supplies last, you can get the hardcover version of my first bestselling book Stoking Your Innovation Bonfire for 51% OFF until Amazon runs out of stock or changes the price. This deal won’t last long, so grab your copy while it lasts!
Have something to contribute?
Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.
P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:
Looking back at the beginning of this decade now that we’re closing in on the halfway point, it’s clearly been a wild ride!
We’ve had a global pandemic, groundbreaking technological breakthroughs, geopolitical shocks, supply chain disruptions, and so much more.
These challenges have revealed a critical truth: organizations need to adapt and innovate faster than ever before.
Add to this the tough economic climate, shrinking capital availability, the disillusionment many business leaders feel toward their innovation teams (sometimes justified, sometimes less so), and we’re looking at a highly turbulent environment for corporate innovation.
The mandate has never been so clear: deliver more results, faster, and with fewer resources. For seasoned innovators, that’s just business as usual. However, structural shifts are poised to reshape the innovation management landscape.
With that background, here’s our take on the top trends to watch in 2025.
1. Innovation as a Distributed Core Capability
Combine tighter budgets, the rise of AI and other transformative technologies, and the pressing need for organizations to reinvent themselves, and you can see why innovation is increasingly owned by individual business units.
This shift can arise from necessity—businesses needing to transform—or simply from a desire for better strategic alignment and more measurable outcomes.
Don’t get me wrong, there’s still a need for innovation expertise, but the role of corporate innovators is undoubtedly evolving. Instead of driving innovation directly, they are now enablers and educators, equipping the broader organization to innovate effectively. TD Bank, for example, embodies this shift:
“The program is truly driven by each line of business—we’re here as a tool to empower their innovation, not to direct it.”
– Josh Death, VP of Intellectual Property and Ideation at TD Bank.
Innovation is now at a transition point similar to where IT was during the digital transformation era a couple of decades ago: the exact methods and approaches can be debated, but it is clear that every organization must embed innovation as a core capability. Just as some organizations are “digital natives,” others will become “innovation natives.”
To pull that off, every organization needs to have three key elements in place:
Frameworks, toolkits, and best practices: Innovation isn’t (always) rocket science, but you still need to know what you’re doing. To pull this off, the organization needs to provide its employees with practical tools, frameworks and practices, preferably in the form of a well-designed Innovation System or Program. The recently published ISO 56000 series of standards is a great starting point, but it needs to be complemented with tools that innovators across the organization can use.
Education, coaching, and enablement: A good framework serves as an efficient and effective launching pad, but without proper education, most employees won’t benefit from it. This is where corporate innovation leaders play a key role. They need to organize education and enablement for innovators across the organization, and coach people on how to get past common obstacles. However, doing that at the scale of a large organization is complex—that’s where programs such as The Innovation System, which is included for all HYPE software customers, can be highly effective.
Scalable and adaptive system support: To get measurable outcomes from innovation, you need to operationalize your program. Even the best designed programs with highly effective leaders and coaches can struggle to scale their work and get the outcomes they want without proper system support. That’s where a holistic innovation platform, such as the HYPE Suite, can play a key supporting role.
2. AI-Powered Innovation
Generative AI has been the focus of most of the hype around AI lately, and for good reason, but there’s more to AI than that. When you combine the latest generative AI models with proven innovation best practices, more traditional machine learning algorithms, and data from your innovation ecosystem, you have a powerful toolkit that enables a variety of different use cases.
AI can:
Analyze and structure large datasets.
Provide actionable recommendations.
Help users locate relevant information more efficiently.
Detect market signals earlier.
Generate novel ideas.
Coach innovators to enhance their work.
The common denominator for all of them is that AI can help streamline, automate, and accelerate work, and provide easier access to information and skills that used to be the domain of only a few experts within the organization.
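To make the “more traditional machine learning” part of that toolkit a bit more tangible, here is a minimal sketch that clusters free-text idea submissions into themes. It is only an illustration: the idea texts, the cluster count, and the choice of TF-IDF with k-means are assumptions of mine, not a description of any particular vendor’s implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical idea submissions from an innovation campaign.
ideas = [
    "Reduce packaging waste with reusable shipping containers",
    "Switch warehouse lighting to motion-activated LEDs",
    "Offer a subscription model for replacement parts",
    "Bundle maintenance services with product sales",
    "Recycle production offcuts into entry-level products",
    "Create a customer loyalty program with service credits",
]

# Turn the free text into numeric vectors, then group similar ideas together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(ideas)
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(vectors)

for cluster in range(3):
    print(f"Theme {cluster}:")
    for idea, label in zip(ideas, labels):
        if label == cluster:
            print("  -", idea)
```

In practice, a platform would likely layer generative AI on top of this kind of clustering, for example to label each theme or summarize the ideas it contains.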
However, scaling AI’s benefits isn’t without challenges. Most employees aren’t going to be expert prompters or data analysts that know all the right innovation best practices. So, to unlock the real benefits of using AI, you’re going to need a capable system that is specifically designed for corporate innovation and deeply integrated with AI across the board. When deployed right, AI can help democratize, scale and accelerate innovation like never before.
3. Democratization of Innovation
The third trend builds on the first two. As innovation becomes a core capability better supported by tools, processes, and technology, it will also become more democratized.
Here are the three key shifts driving this transformation:
Innovation tools, frameworks, and best practices are becoming more widely available, understood, and easier to use: This makes it easier for anyone who wants to be an innovator to get started on the right path and avoid many of the common beginner mistakes.
Technology reduces barriers to entry: Thanks to technologies such as 3D printing, low or no-code software, and Gen AI, it’s never been easier, faster, and cheaper to prototype innovations, whether focused on digital solutions, physical products, or process improvements.
Organizations are looking for more bottom-up, employee- and team-led innovation and intrapreneurship: Corporate innovation is no longer solely driven by top management. While management needs to set the strategy and targets, more and more organizations are looking to empower their employees to help them get where they want to go. It all starts from ideas, but self-organized teams, business units, and intrapreneurship programs are all on the rise. Companies increasingly want to encourage employees to think and act more like entrepreneurs.
When you put all three together, they create a powerful combination that can propel organizations to new heights of innovation and growth.
4. Partner Innovation and the Venture Client Model
No organization, no matter how large or powerful, can house all the best talent on every topic. That’s why the “Not Invented Here” syndrome can be particularly dangerous.
When you need to move fast, and do so with a lower budget, your best bet is to leverage talent from outside your organization.
The trick? Partnering with leaders and early movers in your area of interest to accelerate time to market and gain valuable insights. These partners can include research institutes, universities, or, increasingly, startups.
Historically, large organizations have relied on accelerators or Corporate Venture Capital (CVC) investments to engage with startups. However, both approaches have limitations:
Learning is indirect and secondhand.
They often fail to directly contribute to strategic business goals.
CVC investments require significant capital that could be allocated elsewhere.
The better approach? The Venture Client Model. This approach allows organizations to act as customers and development partners to startups that align with their strategic goals, resulting in:
Lower costs and faster time to market.
Accelerated learning through direct engagement.
Quick ROI by leveraging the organization’s existing scale.
To succeed with this model, you need a systematic approach, the right tools—like HYPE Partnering—and a clear focus on addressing real business problems, not just nice-to-haves.
The Venture Client Model, featured in Gartner’s latest Hype Cycle for Innovation Practices, brings all these elements together, making it a proven and effective strategy for driving innovation.
5. Cross-industry Collaboration
Building on the trend of partnering, companies are increasingly looking beyond their industries to find innovation opportunities.
Experienced innovators know that there’s no such thing as a new idea. Every idea is simply a combination of previous concepts and ideas applied to solve a specific problem. By partnering with organizations in different industries, companies can leverage highly advanced, specialized capabilities to uncover surprising opportunities and tackle the often-difficult execution phase of innovation.
As such, we’re seeing more and more strategic partnerships between companies from different industries, such as automotive or life science firms partnering with tech companies, to not just learn from one another, but to co-create hybrid solutions and products that unlock new value for customers and enable breakthroughs that neither industry could achieve alone.
6. Sustainability and ESG-driven Innovation
Last decade, ESG (Environmental, Social, and Governance) was all the rage. In the last couple of years, many of these initiatives took a backseat due to economic pressures and growing disillusionment with the failures associated with many of these programs.
The problem was that many organizations implemented ESG at a superficial level—promises and policies with little real-world impact—leading to skepticism about the value behind the topic at large.
However, the fundamental need for transformation remains critical. From addressing government deficits to combating climate change, the urgency for sustainable innovation is greater than ever.
What’s different now? The drivers and enablers are firmly in place:
Regulatory Pressure: Many governments across the globe are introducing stricter mandates for sustainable practices.
Technological Advancements: Breakthroughs in renewable energy, electrification, AI, and circular solutions provide tools for real change.
Consumer Preferences: Shifts toward sustainability are influencing demand and shaping circular economic models.
For innovators, this is a perfect storm—a unique opportunity to create breakthroughs that move the needle for both their organizations and the planet. Sustainability has been through the Hype Cycle, and is now nearing the plateau of productivity. For many, it’s no longer a “nice-to-have” but a strategic imperative, making ESG-driven innovation one of the most significant trends shaping the future of corporate innovation and strategy.
Conclusion
These trends highlight a clear shift toward more agile, sustainable, and externally focused innovation practices. For many organizations, they’re not just a nice addition, but a must to stay competitive in increasingly complex and fast-moving global markets. What hasn’t changed is that organizations that master innovation unlock new opportunities to create value and drive impact. They will be able to future-proof themselves and leave the competition in the dust.
This article was originally published in HYPE’s blog. Images from Unsplash and Pixabay.
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
Almost a year and a half ago I read Stephen Wolfram’s very approachable introduction to ChatGPT, What is ChatGPT Doing . . . And Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.
First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows:
Ingest an enormous corpus of text from every available digitized source.
While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
Within the confines of each text, record the location of every token relative to every other token.
Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
Feed these probabilities into a neural network to cluster words and build a map of relationships.
Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect).
Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.
OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, specifically when it comes to making what I think is the most important point to register when it comes to understanding ChatGPT. That point is a simple one. ChatGPT has no idea what it is talking about.
Indeed, ChatGPT has no ideas of any kind — no knowledge or expertise — because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it had been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.
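To make the token-and-probability idea concrete, here is a deliberately tiny sketch in Python. It is a toy, not how ChatGPT is actually built (a bigram counter rather than a neural network trained on an enormous corpus), but it captures the mechanics described above: words become tokens, co-occurrence becomes probability, and “answering” is just predicting the most likely continuation.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "every available digitized source".
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
words = corpus.split()

# Assign each unique word a numeric token.
vocab = {w: i for i, w in enumerate(dict.fromkeys(words))}
inverse_vocab = {i: w for w, i in vocab.items()}
tokens = [vocab[w] for w in words]

# Record which token follows which; the counts stand in for probabilities.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most probable next word, given only the previous word."""
    best_token, _ = following[vocab[word]].most_common(1)[0]
    return inverse_vocab[best_token]

print(predict_next("the"))   # prints 'cat', the most frequent continuation in this corpus
print(predict_next("sat"))   # prints 'on'
```

A real large language model replaces the raw counts with a neural network trained over long contexts and billions of parameters, but the target of the exercise is the same: the next most probable token, with no semantic understanding attached to it.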
OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.
First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.
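To illustrate what an input-side guardrail might look like in practice, here is a minimal sketch that admits only documents from an approved source list into the corpus the system is allowed to draw on. The source names and the allow-list are invented for illustration; the point is simply that the filtering happens before the model ever sees the data, not after it has generated an answer.

```python
# Hypothetical input-side guardrail: only documents from trusted sources are
# admitted to the corpus; everything else is excluded before ingestion.
TRUSTED_SOURCES = {"product_manuals", "support_kb", "published_policies"}

documents = [
    {"source": "product_manuals", "text": "To reset the device, hold the power button for 10 seconds."},
    {"source": "forum_scrape",    "text": "Just pry the back cover off with a screwdriver, works fine."},
    {"source": "support_kb",      "text": "Warranty claims must be filed within 90 days of purchase."},
]

def build_corpus(docs):
    """Keep only documents whose source is on the allow-list."""
    return [d for d in docs if d["source"] in TRUSTED_SOURCES]

corpus = build_corpus(documents)
print(f"{len(corpus)} of {len(documents)} documents admitted to the corpus")
for doc in corpus:
    print("-", doc["source"], ":", doc["text"])
```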
Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web, but it has no knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of cross-checking computing, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.
Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.
Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.
Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.
That’s what I think. What do you think?
Image Credit: Pexels
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
In this blog, I return to and expand on a paradox that has concerned me for some time. Are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a two-edged sword. It brings huge benefits, but also commensurate risks.
Ostensibly, change is good. Because of technology, most of us today live more comfortable lives and enjoy better health, greater longevity, and more leisure and abundance than our ancestors did.
Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course, innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human Stone Age lasted about 300,000 years before ending around 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards, things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison to the past. She has gone, literally, from horse-drawn carts delivering milk when she was a child in poor SE London to today’s world of self-driving cars and exploring our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum has experienced.
The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than new experiences. It simply takes more mental effort to manage new stuff than familiar stuff. As a result, we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.
Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and more remotely, than our ancestors ever dreamed of. The internet gives us unprecedented access to both information and connectivity, but it is also a source of misinformation and manipulation.
The Abundance Dichotomy: Innovation increases abundance, but it’s arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequalities in the distribution of the ‘wealth’ it creates. Behavioral science has shown us consistently that humans make far more relative than absolute judgments. Being better off than our ancestors actually doesn’t do much for us. Instead, we are far more interested in being better off than our peers, neighbors or the people we compare ourselves to on Instagram. And therein lies yet another challenge. Social media means we now compare ourselves to far more people than past generations did, meaning that the standards we judge ourselves against are higher than ever before.
Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them when they have already become a significant problem. Climate change is of course a poster child for this, as a huge unanticipated consequence of the industrial revolution. The same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.
The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few, if any, really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with less and less understanding of the direct, let alone the unintended, consequences of these complex changes!
The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base inevitably leads to multiple further innovations. The connectivity and information access of the internet alone are driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone Age innovation was slow not because our ancestors lacked intelligence. To the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’, we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. But the trouble with chain reactions is that they can be very hard to control.
Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation. How does human culture exist for 50,000 years or more and then ‘suddenly’ two people, Darwin and Wallace, come up with the theory of evolution independently and simultaneously? The same question applies to calculus (Newton and Leibniz), or the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It’s not coincidence, but simply reflects that once all of the pieces of a puzzle are in place, somebody, and more likely multiple people, will inevitably make the connections and see the next step in the innovation chain.
But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. Unfortunately, the associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. As we look forward, just taking AI as an example, we can already somewhat anticipate some worrying possibilities. But what about the surprises analogous to climate change that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness when, despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.
Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police. North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to create initially, opens numerous doors for misuse that require far fewer resources.
The Atomic Weapons Conundrum: The challenge with getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Strassmann and Hahn in the late 1930s made the bomb inevitable. This set the stage for a race to turn theory into practice between the Allies and Nazi Germany. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or legislate against them and, in so doing, cede that terrible power to the Nazis? Not an enviable choice.
Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing the work underground or falling behind and deferring to others. The recent open letter from Meta to the EU (https://euneedsai.com/), chastising it for the potential economic impacts of its AI regulations, may have dripped with self-interest. But that didn’t make it wrong. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations and less conservative countries will still pursue the upside, and risk the downside. The cat is very much out of the bag.
Muddling Through: The good news is that when faced with potentially perilous change in the past, we’ve muddled through. Hopefully we will do so again. We’ve avoided a nuclear holocaust, at least for now. Social media has destabilized our social order, but hasn’t destroyed it, yet. We’ve been through a pandemic, and come out of it, not unscathed, but still functioning. We are making progress in dealing with climate change, and have made enormous strides in managing pollution.
Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will side effects, nefarious use, mistakes and any unintended consequences that derive from it. Key factors that have helped us in the past are time and resource, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.
What can, or should, we do? I certainly don’t have simple answers. We’re all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We’ll never anticipate everything, and we’ll still get blindsided. And putting resources against ‘what if’ scenarios is always a hard sell. But maybe we need to go into sales mode.
Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource against predicting and managing its own risks?
With many emerging technologies, we are now where we were in the 1900s with climate change. We are implementing massive, unpredictable change, and by definition have no idea what the unanticipated consequences of that will be. I personally think we’ll deal with climate change. It’s difficult to slow a leviathan that’s been building for over a hundred years, but we’ve taken the important first steps in acknowledging the problem, and we are beginning to implement corrective action.
But big issues require big solutions. Long-term, I personally believe the most important thing for humanity is to escape the gravity well. Given the scale of our ability to create global change, interplanetary colonization is not a luxury, but an essential. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage that, and avoid nuclear war or synthetic pandemics, for long enough to achieve it. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.
Image credits: Microsoft Copilot
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?
But enough delay, here are September’s ten most popular innovation posts:
If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!
SPECIAL BONUS – THREE DAYS ONLY: From now until 11:59PM ET you can get either the eBook or the hardcover version of the SECOND EDITION of my latest bestselling book Charting Change for 50% OFF using code FLSH50. This deal won’t last long, so grab your copy while supplies last!
Have something to contribute?
Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.
P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:
AI is everywhere: in our workplaces, homes, schools, art galleries, concert halls, and even neighborhood coffee shops. We can’t seem to escape it. Some hope it will unlock our full potential and usher in an era of creativity, prosperity, and peace. Others worry it will eventually replace us. While both outcomes are extreme, if you’ve ever used AI to conduct research with synthetic users, the idea of being “replaced” isn’t so wild.
For the past month, I’ve beta-tested an AI research tool that allows you to create surveys, specify segments of respondents, send the survey to synthetic respondents (AI-generated personas), and get results within minutes.
Sound too good to be true?
Here are the results from my initial test:
150 respondents in 3 niche segments (50 respondents each)
51 questions, including ten open-ended questions requiring short prose responses
1 hour to complete and generate an AI executive summary and full data set of individual responses, enabling further analysis
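I don’t know how the beta tool is built under the hood, but for the curious, here is a rough sketch of how synthetic respondents can be generated with an off-the-shelf LLM API. The persona, the questions, and the model name are placeholders I chose for illustration, not details of the actual product.

```python
from openai import OpenAI  # requires an OPENAI_API_KEY in the environment

client = OpenAI()

# Hypothetical persona and survey questions.
persona = (
    "You are a 34-year-old operations manager at a mid-sized logistics firm "
    "who evaluates warehouse software purchases. Answer survey questions in character."
)
questions = [
    "On a scale of 1-7, how satisfied are you with your current inventory software?",
    "What is the single biggest frustration in your daily workflow?",
]

def ask_synthetic_respondent(persona: str, question: str) -> str:
    """Ask one survey question of one AI-generated persona and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

for q in questions:
    print(q, "->", ask_synthetic_respondent(persona, q))
```

Repeat that loop across 150 personas and 51 questions and you have, in effect, a one-hour field period.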
The Tool is Brilliant
It took just one hour to gather data that traditional survey methods require a month or more to collect, clean, and synthesize. Think of how much time you’ve spent waiting for survey results, checking interim data, and cleaning up messy responses. I certainly did and it made me cry.
The qualitative responses were on-topic, useful, and featured enough quirks to seem somewhat human. I’m pretty sure that has never happened in the history of surveys. Typically, respondents skip open-ended questions or use them to air unrelated opinions.
Every respondent completed the entire survey! There is no need to look for respondents who went too quickly, chose the same option repeatedly, or abandoned the effort altogether. You no longer need to spend hours cleaning data, weeding out partial responses, and hoping you’re left with enough that you can generate statistically significant findings.
The Results are Dangerous
When I presented the results to my client, complete with caveats about AI’s limitations and the tool’s early-stage development, they did what any reasonable person would do – they started making decisions based on the survey results.
STOP!
As humans, we want to solve problems. In business, we are rewarded for solving problems. So, when we see something that looks like a solution, we jump at it.
However, strategic or financially significant decisions should never rely on a single data source. They are too complex, risky, and costly. And they definitely shouldn’t be made based on fake people’s answers to survey questions!
They’re Also Useful
Although the synthetic respondents’ data may not be true, it is probably directionally correct because it is based on millions and maybe billions of data points. So, while you shouldn’t make pricing decisions based on data showing that 40% of your target consumers are willing to pay a 30%+ premium for your product, it’s reasonable to believe they may be willing to pay more for your product.
The ability to field an absurdly long survey was also valuable. My client is not unusual in their desire to ask everything they may ever need to know for fear that they won’t have another chance to gather quantitative data (and budgets being what they are, they’re usually right). They often ignore warnings that long surveys lead to abandonment and declining response quality. With AI, we could ask all the questions and then identify the most critical ones for follow-up surveys sent to actual humans.
We Aren’t Being Replaced, We’re Being Spared
AI consumer research won’t replace humans. But it will spare us the drudgery of long surveys filled with useless questions, months of waiting for results, and weeks of data cleaning and analysis. It may just free us up to be creative and spend time with other humans. And that is brilliant.
Image credit: Microsoft Copilot
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
Whenever I get the opportunity to interview the CEO of a major CX company, I jump at the chance. I recently conducted a second interview with Alan Masarek, the CEO of Avaya, a company focused on creating customer experience solutions for large enterprises.
My first interview covered an amazing turnaround that Masarek orchestrated in his first year at Avaya, taking the company through Chapter 11 and coming out strong. Masarek admits that even with his extensive financial background, he’s always been a product person, and it’s the combination of the two mindsets that makes him the perfect leader for Avaya.
In our discussion, he shared his view on AI and how it must deliver value in the contact center. What follows is a summary of the main points of our interview, followed by my commentary.
Why Customer Service and CX Are Important: Thanks to the internet, it’s harder for brands to differentiate themselves. Within minutes, a customer can compare prices, check availability, find a company that can deliver the product within a day or two, or find comparable products from other retailers, vendors and manufacturers. Furthermore, while the purchasing experience needs to be positive, it’s what happens beyond the purchase that becomes most important. Masarek says, “Brands are now trying to differentiate based upon the experience they provide. So any tool that can help the brand achieve this is the winner.”
Customer Service Is Rooted in Communications: Twenty years ago, the primary way to communicate with a company was on the phone. While we still do that, the world has evolved to what is referred to as omni-channel, which includes voice, chat, email, brand apps, social media and more. As we move from the phone to alternative channels of communication, companies and brands must find ways to bring them all together to create a seamless journey for the customer.
Organizations Want to Minimize Voice: According to Masarek, companies want to move away from traditional voice communication, which is a human on the phone. That “one-to-one” is very expensive. With digital solutions, you have one-to-many. Masarek says, “It’s asynchronous. And the beauty is you can introduce AI utilities into the customer experience, which creates greater efficiency. You’re solving so many things either digitally or deflecting it altogether via the chatbot, the voice bot or what have you.”
AI Will Not Eliminate Jobs: Masarek says, “There’s a bull and a bear case for an employment point of view relative to AI. Will it be a destroyer of jobs, the bear case, or will it grow jobs, the bull case?” He shared an example that perfectly describes the situation we’re in today. In the 1960s, Barclays Bank introduced the ATM. Everyone thought it would be the end of tellers working at banks. That never happened. What did happen is that tellers took on a more important role, going beyond just cashing checks or depositing money. It’s the same in the customer service world. AI technologies will take care of simple tasks, freeing customer service agents to help with more complicated issues. (For more on how AI will not eliminate jobs, read this Forbes article from September 2023.)
The Employee Experience Drives the Customer Experience: AI is not just about supporting the customer. It can also support the agent. When the agent is talking to a customer, generative AI technology can listen in the background, search through a company’s knowledge base and feed the agent information in real time. Masarek said, “Think about what a pleasant experience that is for both the agent and the customer!”
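Avaya hasn’t published the internals of this capability, so the sketch below is just one generic way the “listen, search the knowledge base, feed the agent” loop could work: take what the customer just said and surface the most similar knowledge-base article. The articles and the use of TF-IDF similarity are illustrative assumptions, not Avaya’s implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge-base articles.
kb_articles = [
    "How to reset a customer password and unlock the account",
    "Refund policy for orders returned within 30 days",
    "Troubleshooting steps when the mobile app will not sync",
]

def suggest_article(transcript_snippet: str) -> str:
    """Return the knowledge-base article most similar to what the customer just said."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(kb_articles + [transcript_snippet])
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    return kb_articles[scores.argmax()]

print(suggest_article("My app keeps failing to sync my orders on my phone"))
```

A production agent-assist system would likely use speech-to-text, semantic embeddings and generative summarization rather than simple keyword similarity, but the real-time loop is the same.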
Innovation Without Disruption: A company may invest in a better customer experience, but sometimes, that causes stress to the organization. Masarek is proud of Avaya’s value proposition, which is to add innovation without disruption. This means there’s a seamless integration versus total replacement of existing systems and processes. Regarding the upgrade, Masarek says, “The last thing you want is to rip it all out.”
The Customer-In Approach: As we wrapped up our interview, I asked Masarek for one final nugget of wisdom. He shared his Customer-In approach. Not that long ago, you could compete on product, price and availability. Today, that’s table stakes. What separates one brand from another is the experience. Masarek summarized this point by saying, “You have to set your North Star on as few things as possible. Focus wins. And so, if you’re always thinking Customer First and all your decisions are rooted in that concept, your business will be successful. At the end of the day, brands win on how they make the customer feel. It’s no longer just about product, price and availability.”
Image Credits: Pixabay
This article was originally published on Forbes.com.
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
Historically, building technology had been about capabilities and features. Engineers and product designers would come up with new things that they thought people wanted, figure out how to make them work and ship “new and improved” products. The result was often things that were maddeningly difficult to use.
That began to change when Don Norman published his classic, The Design of Everyday Things and introduced concepts like dominant design, affordances and natural mapping into industrial design. The book is largely seen as pioneering the user-centered design movement. Today, UX has become a thriving field.
Yet artificial intelligence poses new challenges. We speak or type into an interface and expect machines to respond appropriately. Often they do not. With the popularity of smart speakers like Amazon Alexa and Google Home, we have a dire need for clear principles for human-AI interactions. A few years ago, two researchers at IBM embarked on a journey to do just that.
The Science Of Conversations
Bob Moore first came across conversation analysis as an undergraduate in the late 1980s, became intensely interested and later earned a PhD based on his work in the field. The central problems are well known to anybody who has ever watched Seinfeld or Curb Your Enthusiasm: our conversations are riddled with complex, unwritten rules that aren’t always obvious.
For example, every conversation has an unstated goal, whether it is just to pass the time, exchange information or to inspire an emotion. Yet our conversations are also shaped by context. For example, the unwritten rules would be different for a conversation between a pair of friends, a boss and subordinate, in a courtroom setting or in a doctor’s office.
“What conversation analysis basically tries to reveal are the unwritten rules people follow, bend and break when engaging in conversations,” Moore told me and he soon found that the tech industry was beginning to ask similar questions. So he took a position at Xerox PARC and then Yahoo! before landing at IBM in 2012.
As the company was working to integrate its Watson system with applications from other industries, he began to work with Raphael Arar, an award-winning visual designer and user experience expert. The two began to see that their interests were strangely intertwined and formed a partnership to design better conversations for machines.
Establishing The Rules Of Engagement
Typically, we use natural language interfaces, both voice and text, like a search box. We announce our intention to seek information by saying, “Hey Siri,” or “Hey Alexa,” followed by a simple query, like “where is the nearest Starbucks.” This can be useful, especially when driving or walking down the street, but it is also fairly limited, particularly for more complex tasks.
What’s far more interesting — and potentially far more useful — is being able to use natural language interfaces in conjunction with other interfaces, like a screen. That’s where the marriage of conversational analysis and user experience becomes important, because it will help us build conventions for more complex human-computer interactions.
“We wanted to come up with a clear set of principles for how the various aspects of the interface would relate to each other,” Arar told me. “What happens in the conversation when someone clicks on a button to initiate an action?” What makes this so complex is that different conversations will necessarily have different contexts.
For example, when we search for a restaurant on our phone, should the screen bring up a map, information about pricing, pictures of food, user ratings or some combination? How should the rules change when we are looking for a doctor, a plumber or a travel destination?
Deriving Meaning Through Preserving Context
Another aspect of conversations is that they are highly dependent on context, which can shift and evolve over time. For example, if we ask someone for a restaurant nearby, it would be natural for them to ask a question to narrow down the options, such as “what kind of food are you looking for?” If we answer, “Mexican,” we would expect that person to know we are still interested in restaurants, not, say, the Mexican economy or culture.
Another issue is that when we follow a particular logical chain, we often find some disqualifying factor. For instance, a doctor might be looking for a clinical trial for her patient, find one that looks promising but then see that that particular study is closed. Typically, she would have to retrace her steps to go back to find other options.
“A true conversational interface allows us to preserve context across the multiple turns in the interaction,” Moore says. “If we’re successful, the machine will be able to adapt to the user’s level of competence, serving the expert efficiently but also walking the novice through the system, explaining itself as needed.”
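A toy example helps show what “preserving context across the multiple turns” means in practice. The sketch below is not IBM’s system; it simply keeps the active intent and the details gathered so far, so that a one-word follow-up like “Mexican” is still understood as part of the restaurant search, and a rejected option doesn’t force the user to start over.

```python
# A minimal dialogue context: remember the goal, the details gathered so far,
# and the options already ruled out, across turns.
class ConversationContext:
    def __init__(self):
        self.intent = None    # e.g. "find_restaurant"
        self.slots = {}       # e.g. {"cuisine": "Mexican"}
        self.rejected = []    # options the user has already ruled out

    def update(self, intent=None, **slots):
        if intent:
            self.intent = intent
        self.slots.update(slots)

ctx = ConversationContext()

# Turn 1: "Find me a restaurant nearby."
ctx.update(intent="find_restaurant", location="nearby")

# Turn 2: the system asks "What kind of food?" and the user answers just "Mexican".
# Because the context is preserved, the one-word reply is interpreted in place.
ctx.update(cuisine="Mexican")

# Turn 3: the first suggestion turns out to be closed; remember that instead of
# making the user retrace their steps and repeat the whole request.
ctx.rejected.append("first suggestion")

print(ctx.intent, ctx.slots, ctx.rejected)
```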
And that’s the true potential of the ability to initiate more natural conversations with computers. Much like working with humans, the better we are able to communicate, the more value we can get out of our relationships.
Making The Interface Disappear
In the early days of web usability, there was a constant tension between user experience and design. Media designers were striving to be original. User experience engineers, on the other hand, were trying to build conventions. Putting a search box in the upper right hand corner of a web page might not be creative, but that’s where users look to find it.
Yet eventually a productive partnership formed and today most websites seem fairly intuitive. We mostly know where things are supposed to be and can navigate things easily. The challenge now is to build that same type of experience for artificial intelligence, so that our relationships with the technology become more natural and more useful.
“Much like we started to do with user experience for conventional websites two decades ago, we want the user interface to disappear,” Arar says. When we aren’t wrestling with the interface, constantly having to repeat ourselves or figure out how to rephrase our questions, we can make our interactions much more efficient and productive.
As Moore put it to me, “Much of the value of systems today is locked in the data and, as we add exabytes to that every year, the potential is truly enormous. However, our ability to derive value from that data is limited by the effectiveness of the user interface. The more we can make the interface become intelligent and largely disappear, the more value we will be able to unlock.”
— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?
But enough delay, here are August’s ten most popular innovation posts:
If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!
Have something to contribute?
Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.
P.S. Here are our Top 40 Innovation Bloggers lists from the last four years: