Category Archives: Technology

Balancing Artificial Intelligence with the Human Touch

GUEST POST from Shep Hyken

As AI and ChatGPT-type technologies grow in capability and ease of use and become more cost-effective, more and more companies are moving toward digital experiences. Still, the best companies know better than to switch to 100% digital.

I had a chance to interview Nicole Kyle, managing director and co-founder of CMP Research (Customer Management Practice), for Amazing Business Radio. Kyle’s team provides research and advisory services for the contact center industry and conducts some of the most critical research on the topic of self-service and digital customer service. I first met Kyle at CCW, the largest contact center conference in the industry. I’ve summarized seven of her key observations below, followed by my commentary:

  1. The Amazon Effect has trained customers to expect a level of service that’s not always in line with what companies and brands can provide. This is exactly what’s happening with customer expectations. They no longer compare you just to your direct competitors but to the best experience they’ve had from any company. Amazon and other rockstar brands focused on CX (customer experience) have set the bar higher for all companies in all industries.
  2. People’s acceptance and eventual normalization of digital experiences accelerated during the pandemic, and they have become a way of life for many customers. The pandemic forced customers to accept self-service. For example, many customers had never gone online to buy groceries, vehicles or other items that were traditionally shopped for in person. Once customers got used to it, many never returned to the “old way” of doing business, even after the pandemic became history. At a minimum, many customers now expect a choice between the two.
  3. Customers have new priorities and are placing a premium on their time. Seventy-two percent of customers say they want to spend less time interacting with customer service. They want to be self-sufficient in managing typical customer service issues. In other words, they want self-service options that will get them answers to their questions efficiently and in a timely manner. Our own CX research differs, putting that preference at less than half of the 72% figure. When I asked Kyle about the discrepancy, she responded, “Customers who have a poor self-service experience are less likely to return to self-service. While there is an increase in preference, you’re not seeing the adoption because some companies aren’t offering the type of self-service experience the customer wants.”
  4. The digital dexterity of society is improving! That phrase is a great way to describe self-service adoption, specifically how customers view chatbots or other ChatGPT-type technologies. Kyle explained, “Digital experiences became normalized during the pandemic, and digital tools, such as generative AI, are now starting to help people in their daily lives, making them more digitally capable.” That translates into customers’ higher acceptance and desire for digital support and CX.
  5. Many customers can tell the difference between talking to an AI chatbot and having a live chat with a human agent, depending on their experience with the technology and the quality of the chatbot. However, customers are still willing to use the tools if the results are good. When it comes to AI interacting with customers via text or voice, don’t get hung up on how lifelike (or not) the experience is as long as it gets your customers what they want quickly and efficiently.
  6. The No. 1 driver of satisfaction (according to 78% of customers surveyed) in a self-service experience is personalization. Personalization is more important than ever in customer service and CX. So, how do you personalize digital support? The “machine” must not only be capable of delivering the correct answers and solutions, but it must also recognize the existing customer, remember issues the customer had in the past, make suggestions that are specific to the customer and provide other customized, personalized approaches to the experience.
  7. With increased investments in self-service and generative AI, 60% of executives say they will reduce the number of frontline customer-facing jobs. But the good news is that jobs will be created for employees to monitor performance, track data and more. I’m holding firm in my prediction of the past two years that while there may be some job disruption, the frontline customer support agent job will not be eliminated. To Kyle’s point, there will be job opportunities related to the contact center, even if they are not on the front line.

Self-service and automation are a balancing act. The companies that have gone “all in” and eliminated human-to-human customer support have had pushback from customers. Companies that have not adopted newer technologies are frustrating the many customers who want and expect self-service solutions. The right balance may differ from one company to the next, but it is critical, and smart leaders will find it and continue to adapt to the ever-changing expectations of their customers.

Image Credits: Unsplash
This article originally appeared on Forbes.com

Bad Questions to Ask When Developing Technology

GUEST POST from Mike Shipulski

I know you’re trying to do something that has never been done before, but when will you be done?

I don’t know. We’ll run the next experiment then decide what to do next. If it works, we’ll do more of that. And if it doesn’t, we’ll do less of that. That’s all we know right now.

I know you’re trying to create something that is new to our industry, but how many will we sell?

I don’t know. Initial interviews with customers made it clear that this is an important customer problem. So, we’re trying to figure out if the technology can provide a viable solution. That’s all we know right now.

No one is asking for that obscure technology. Why are you wasting time working on that?

Well, the voice of the technology and the S-curve analyses suggest the technology wants to move in this direction, so we’re investigating this solution space. It might work and it might not. That’s all we know right now.

Why aren’t you using best practices?

If it hasn’t been done before, there can be no best practice. We prefer to use good practice or emergent practice.

It doesn’t seem like there’s been much progress. Why aren’t you running more experiments?

We don’t know which experiments to run, so we’re taking some time to think about what to do next.

Will it work?

I don’t know.

That new technology may obsolete our most profitable product line. Shouldn’t you stop work on that?

No. If we don’t obsolete our best work, someone else will. Wouldn’t it be better if we did the obsoleting?

How many more people do you need to accelerate the technology development work?

None. Small teams are better.

Sure, it’s a cool technology, but how much will it cost?

We haven’t earned the right to think about the cost. We’re still trying to make it work.

So, what’s your solution?

We don’t know yet. We’re still trying to formulate the customer problem.

You said you’d be done two months ago. Why aren’t you done yet?

I never said we’d be done two months ago. You asked me for a completion date and I could not tell you when we’d be done. You didn’t like that answer so I suggested that you choose your favorite date and put that into your spreadsheet. We were never going to hit that date, and we didn’t.

We’ve got a tight timeline. Why are you going home at 5:00?

We’ve been working on this technology for the last two years. This is a marathon. We’re mentally exhausted. See you tomorrow.

If you don’t work harder, we’ll get someone else to do the technology development work. What do you think about that?

You are confusing activity with progress. We are doing the right analyses and the right thinking and we’re working hard. But if you’d rather have someone else lead this work, so would I.

We need a patented solution. Will your solution be patentable?

I don’t know because we don’t yet have a solution. And when we do have a solution, we still won’t know because it takes a year or three for the Patent Office to make that decision.

So, you’re telling me this might not work?

Yes. That’s what I’m telling you.

So, you don’t know when you’ll be done with the technology work, you don’t know how much the technology will cost, you don’t know if it will be patentable, or who will buy it?

That’s about right.

Image credit: Unsplash

AI Strategy Should Have Nothing to do with AI

GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of roughly the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models” that identified eight styles that distinguish a culture and can be measured. I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer. Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions. As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, and company. Bottom line: the eight cultures are a tool, not a law (and I glossed over a lot of material from the report).

Image credit: Wikimedia Commons

DNA May Be the Next Frontier of Computing and Data Storage

GUEST POST from Greg Satell

Data, as many have noted, has become the new oil, meaning that we no longer regard the information we store as merely a cost of doing business, but a valuable asset and a potential source of competitive advantage. It has become the fuel that powers advanced technologies such as machine learning.

A problem that’s emerging, however, is that our ability to produce data is outstripping our ability to store it. In fact, an article in the journal Nature predicts that by 2040, data storage will consume 10–100 times the expected supply of microchip-grade silicon if current technology is used. Clearly, we need a data storage breakthrough.

One potential solution is DNA, which is a million times more information dense than today’s flash drives. It also is more stable, more secure and uses minimal energy. The problem is that it is currently prohibitively expensive. However, a startup that has emerged out of MIT, called CATALOG, may have found the breakthrough we’re looking for: low-cost DNA Storage.

The Makings Of A Scientist-Entrepreneur

Growing up in his native Korea, Hyunjun Park never planned on a career in business, much less the technology business, but expected to become a biologist. He graduated with honors from Seoul National University and then went on to earn a PhD from the University of Wisconsin. Later he joined Tim Lu’s lab at MIT, which specializes in synthetic biology.

In an earlier time, he would have followed an established career path, from PhD to post-doc to assistant professor to tenure. These days, however, there is a growing trend for graduate students to get an entrepreneurial education in parallel with the traditional scientific curriculum. Park, for example, participated in both the Wisconsin Entrepreneurial Bootcamp and Start MIT.

He also met a kindred spirit in Nate Roquet, a PhD candidate who, about to finish his thesis, had started thinking about what to do next. Inspired by a talk given by the Chief Science Officer at IndieBio, a seed fund, the two began to talk in earnest about starting a company together based on their work in synthetic biology.

As they batted around ideas, the subject of DNA storage came up. By this time, the advantages of the technology were well known, but it was not considered practical, costing hundreds of thousands of dollars to store just a few hundred megabytes of data. However, the two did some back-of-the-envelope calculations and became convinced they could do it far more cheaply.

Moving From Idea To Product

The basic concept of DNA storage is simple. Essentially, you just encode the ones and zeros of digital code into the T, G, A and C’s of genetic code. However, stringing those genetic molecules together is tedious and expensive. The idea that Park and Roquet came up with was to use enzymes to alter strands of DNA, rather than building them up piece by piece.
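To make the encoding concrete, here is a minimal sketch of the naive scheme described above, where every two bits of data map to one of the four bases. To be clear, this is only an illustration of the basic concept, not CATALOG’s enzymatic method, which exists precisely to avoid this kind of base-by-base synthesis.

```python
# Naive DNA storage encoding: two bits per nucleotide.
# Illustrative only -- not CATALOG's enzyme-based approach.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Encode raw bytes as a DNA strand (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a DNA strand."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")          # -> "CGGACGGC"
assert decode(strand) == b"hi"  # round-trip works
```

One byte becomes just four bases; the cost problem the founders attacked is that synthesizing those bases one at a time is exactly the tedious, expensive step described above.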

Contrary to popular opinion, most traditional venture capital firms, such as those that populate Sand Hill Road in Silicon Valley, don’t invest in ideas. They invest in products. IndieBio, however, isn’t your typical investor. They give only a small amount of seed capital, but offer other services, such as wet labs, entrepreneurial training and scientific mentorship. Park and Roquet reached out to them and found some interest.

“We invest in problems, not necessarily solutions,” Arvind Gupta, founder of IndieBio, told me. “Here the problem is massive. How do you keep the world’s knowledge safe? We know DNA can last thousands of years and can be replicated very inexpensively. That’s a really big deal, and Hyunjun and Nate’s approach was incredibly exciting.”

Once the pair entered IndieBio’s four-month program, they found both promise and disappointment. Their approach could dramatically reduce the cost of storing information in DNA, but not nearly quickly enough to build a commercially viable product. They would need to pivot if they were going to turn their idea into an actual business.

Scaling To Market

One flaw in CATALOG’s approach was that the process was too complex to scale. Yet they found that by starting with just a few different DNA strands and attaching them together, much like a printing press pre-arranges words in a book, they could come up with something that was not only scalable, but commercially viable from a cost perspective.
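A bit of back-of-the-envelope arithmetic shows why a premade library scales. The numbers below are hypothetical, my own illustration of the combinatorial idea rather than CATALOG’s published design: with k reusable strands arranged into n positions, you can address k**n distinct values while only ever synthesizing k parts.

```python
# Why premade parts beat custom synthesis, printing-press style.
# The library size and position count below are hypothetical.
k = 100                      # distinct premade DNA components
n = 10                       # positions in an assembled strand
distinct_values = k ** n     # orderings you can express: 10**20
bits_per_assembly = distinct_values.bit_length() - 1

print(f"{k} parts in {n} positions address {distinct_values:.2e} values")
print(f"that is roughly {bits_per_assembly} bits per assembled strand")
```

The point of the analogy: instead of paying the synthesis cost per bit, you pay it once per library part and then reuse the parts, much as a press reuses movable type.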

The second problem was more thorny. Working with enzymes is incredibly labor intensive and, being biologists, Park and Roquet didn’t have the mechanical engineering expertise to make their process feasible. Fortunately, an advisor, Darren Link, connected the pair to Cambridge Consultants, an innovation consultancy that could help them.

“We started looking at the problem and it seemed that, on paper at least, we could make it work,” Richard Hammond, Technology Director and Head of Synthetic Biology at Cambridge Consultants, told me. “Now we’re about halfway through making the first prototype and we believe we can make it work and scale it significantly. We’re increasingly confident that we can solve the core technical challenges.”

In 2018 CATALOG introduced the world to Shannon, its prototype DNA writer. In 2022 CATALOG announced its DNA computation work at the HPC User Forum. But CATALOG isn’t without competition in the space. For example, Western Digital’s LTO-9, from 2022, can store 18 TB per cartridge. CATALOG, for its part, is partnering with Seagate “on several initiatives to advance scalable and automated DNA-based storage and computation platforms, including making DNA-based platforms up to 1000 times smaller.” That should make the process competitive for archival storage, such as medical and legal records, as well as storing film databases at movie studios.

“I think the fact that we’re inventing a completely new medium for data storage is really exciting,” Park told me. “I don’t think that we know yet what the true potential is because the biggest use cases probably don’t exist yet. What I do know is that our demand for data storage will soon outstrip our supply and we are thrilled about the possibility of solving that problem.”

Going Beyond Digital

A generation ago, the task of improving data storage would have been seen as solely a computer science problem. Yet today, the digital era is ending and we’re going to have to look further and wider for solutions to the problems we face. With the vast improvement in genomics, which is far outpacing Moore’s law these days, we can expect biology to increasingly play a role.

“Traditionally, information technology has been strictly the realm of electrical engineers, physicists and coders,” Gupta of IndieBio told me. “What we’re increasingly finding is that biology, which has been honed for millions of years by evolution, can often point the way to solutions that are more robust and, potentially, much cheaper and more efficient.”

Yet this phenomenon goes far beyond biology. We’re also seeing similar accelerations in other fields, such as materials science and space-related technologies. We’re also seeing a new breed of investors, like IndieBio, that focus specifically on scientist entrepreneurs. “I consider myself a product of the growing ecosystem for scientific entrepreneurs at universities and in the investor community,” Park told me.

Make no mistake. We are entering a new era of innovation and the traditional Silicon Valley approach will not get us where we need to go. Instead, we need to forge greater collaboration between the scientific community, the investor community and government agencies to solve problems that are increasingly complex and interdisciplinary.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Video Killed More Than the Radio Star

by Braden Kelley

If you are a child of the eighties, you will remember when MTV went live 24 hours a day with music videos on cable television on August 1, 1981, with the broadcast of “Video Killed the Radio Star” by The Buggles.

But I was thinking the other day about how video (or, taken more broadly, streaming media – including television, movies, gaming, social media, and the internet) has killed far more things than just radio stars. Many activities have experienced substantial declines as people stay home and engage in these forms of entertainment – often by themselves – where in the past they would leave their homes for more human-to-human interactions.

The nine declines listed below have not only reshaped the American landscape – literally – but have also fed declines in the mental health of modern nations. Without further ado, here is the list:

1. Bowling Alleys:

Bowling alleys, once bustling with players and leagues, have faced challenges in recent years. The communal experience of bowling has been replaced by digital alternatives, impacting the industry.

2. Roller Skating Rinks:

Roller skating rinks, which were once popular hangout spots for families and teens, have seen declining attendance. The allure of roller disco and skating parties has waned as people turn to other forms of entertainment.

3. Drive-In Movie Theaters:

Drive-in movie theaters, iconic symbols of mid-20th-century entertainment, have faced challenges in recent decades. While they once provided a unique way to watch films from the comfort of your car, changing lifestyles and technological advancements have impacted their popularity.

4. Arcade Game Centers:

In the ’80s and ’90s, video game arcades were buzzing hubs of entertainment. People flocked to play games like Pac-Man, Street Fighter, and Mortal Kombat. Traditional arcade game centers, filled with pinball machines, classic video games, and ticket redemption games, have struggled to compete with home gaming consoles and online multiplayer experiences. The convenience of playing video games at home has led to a decline in arcade visits. Nostalgia keeps some arcades alive, but they are no longer as prevalent as they once were.

5. Miniature Golf Courses:

Mini-golf courses, with their whimsical obstacles and family-friendly appeal, used to be popular weekend destinations. However, the rise of digital entertainment has impacted their attendance. The allure of playing a round of mini-golf under the sun has faded for many.

6. Indoor Trampoline Parks:

Indoor trampoline parks gained popularity as a fun and active way to spend time with friends and family. However, the pandemic and subsequent lockdowns forced many of these parks to close temporarily. Even before the pandemic, the availability of home trampolines and virtual fitness classes reduced the need for indoor trampoline parks. People can now bounce and exercise at home or virtually, without leaving their living rooms.

7. Live Music Venues:

Live music venues, including small clubs, concert halls, and outdoor amphitheaters, have struggled due to changing entertainment preferences. While some artists and bands continue to perform, the rise of virtual concerts and streaming services has affected attendance. People can now enjoy live music from the comfort of their homes, reducing the need to attend physical venues. The pandemic also disrupted live events, leading to further challenges for the industry.

8. Public Libraries (In-Person Visits):

Public libraries, once bustling with readers and community events, have seen a decline in in-person visits. E-books, audiobooks, and online research resources have made it easier for people to access information without physically visiting a library. While libraries continue to offer valuable services, their role has shifted from being primarily physical spaces (including a place for latchkey kids to wait for their parents to get off work) to digital hubs for learning and exploration.

9. Shopping Malls:

Once bustling centers of retail and social activity, shopping malls have faced significant challenges in recent years. Various technological shifts have contributed to their decline, including e-commerce and online shopping, social media and influencer culture, and changing demographics and urbanization. Shopping malls are yet another place where parents no longer drop off the younger generation for the day.

And if that’s not enough, here is a bonus one for you:

10. Diners, Malt Shops, Coffee Shops, Dive Bars/Taverns, Neighborhood Pubs (UK) and Drive-In Burger Joints:

If you’re a child of the seventies or eighties, you probably tuned in to watch Richie, Potsie, Joanie, Fonzie and Ralph Malph gather every day at Arnold’s. Unfortunately, many of the more social and casual drinking and dining places are experiencing declines as changes in diet, habits and technology have kicked in. Demographic changes (aging out of nostalgia) and the rise of food delivery apps and takeout culture have helped sign their death warrant.

Conclusion

In the ever-evolving landscape of entertainment, video and streaming media have reshaped our experiences and interactions. As we bid farewell to once-thriving institutions, we recognize both the convenience and the cost of this digital transformation. For example, the echoes of strikes and spares have faded as digital alternatives replace the communal joy of bowling. As we navigate this digital era, let us cherish what remains and adapt to what lies ahead. Video may have transformed our world, but the echoes of lost experiences linger, urging us to seek balance in our screens and our souls. As these once ubiquitous gathering places disappear, consumer tastes change and social isolation increases, will we as a society seek to reverse course or evolve to some new way of reconnecting as humans in person? And if so, how?

What other places and/or activities would you have added to the list?
(sound off in the comments)

P.S. Be sure to follow both my personal account and the Human-Centered Change and Innovation community on LinkedIn.

Image credit: Pixabay

How I Use AI to Understand Humans

(and Cut Research Time by 80%)

GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research. Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation. No matter how good the Large Language Model (LLM) is, it can’t tell you how people’s feelings, aspirations, and motivations drive their decisions.

Then I watched JTBD Untangled’s video with Evan Shore, Walmart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What because of its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done Research?  Yes, please!  That’s precisely what JobLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments.  For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research templates. Another colleague recently sang the praises of TheyDo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI. Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them. If you want to start from scratch, you can learn how to write super-effective prompts here and here. If you’d rather build on someone else’s work, Brian at JobLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point.  Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect.    It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
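As a sketch of what that refinement loop can look like in practice, the snippet below scores candidate prompts against insights you already trust from a finished project. Everything here is hypothetical: `call_llm` is a stand-in for whichever tool’s API you actually use, and the keyword-overlap score is a deliberately crude proxy for “returns what you expect.”

```python
# Hypothetical prompt-refinement harness. Replace call_llm() with a
# real call to your chosen tool; the stub below just returns canned text.

def call_llm(prompt: str, transcript: str) -> str:
    """Stand-in for your AI tool's API call."""
    return "Customers hire it to save time; fear of looking foolish looms large."

def overlap_score(output: str, known_insights: list[str]) -> float:
    """Crude proxy for quality: the share of known insights that reappear."""
    hits = sum(1 for insight in known_insights if insight.lower() in output.lower())
    return hits / len(known_insights)

# Inputs from a project you already completed, so you know what "good" looks like.
transcript = "...interview transcript from a past JTBD project..."
known_insights = ["save time", "fear of looking foolish", "switching costs"]

candidate_prompts = [
    "Summarize the functional, social, and emotional jobs in this interview.",
    "List this customer's struggles, desired outcomes, and hiring criteria.",
]

for prompt in candidate_prompts:
    output = call_llm(prompt, transcript)
    print(f"{overlap_score(output, known_insights):.0%}  {prompt}")
```

Once a prompt reliably recovers the insights you already know are in the data, you can promote it to live projects with more confidence.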

Defend your budget

Using AI for customer research will save you time and money, but it is not free. And the cost isn’t just the subscription or license for your chosen tool(s).

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently. Some will love the idea of using AI for customer research, while others will resist it. Everyone, however, will pounce on any evidence that they’re right. So be prepared. Take advantage of free trials to play with tools. Test them on friends, family, and colleagues. Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash

Value Doesn’t Disappear

It Shifts From One Place to Another

GUEST POST from Greg Satell

A few years ago, I published an article about no-code software platforms, which was very well received. Before long, however, I began to get angry — and sometimes downright nasty — comments from software engineers who were horrified by the notion that you can produce software without actually understanding the code behind it.

Of course, no-code platforms don’t obviate the need for software engineers, but rather automate basic tasks so that amateurs can design applications by themselves. These platforms are, necessarily, limited but can increase productivity dramatically and help line managers customize technology to fit the task at hand.

Similarly, when FORTRAN, the first high-level programming language, was invented, many who wrote machine code objected, much as the software engineers did to my article. Yet FORTRAN didn’t destroy computer programming; it democratized and expanded it. The truth is that value never disappears. It just shifts to another place, and that’s what we need to learn to focus on.

Why Robots Aren’t Taking Our Jobs

Ever since the financial crisis we’ve been hearing about robots taking our jobs. Yet just the opposite seems to be happening. In fact, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive. So what’s going on?

The fact is that automation doesn’t actually replace jobs, it replaces tasks. To understand how this works, think about the last time you walked into a highly automated Apple store, which actually employs more people than a typical retail location of the same size. They aren’t there to ring up your purchase any faster, but to do all the things that a machine can’t do, like answer your questions and solve your problems.

A few years ago I came across an even more stark example when I asked Vijay Mehta, Chief Innovation Officer for Consumer Information Services at Experian, about the effect that shifting to the cloud had on his firm’s business. The first-order effect was simple: the company needed far fewer technicians to manage its infrastructure, and those people could easily have been laid off.

Yet they weren’t. Instead, Experian shifted much of that talent and expertise to focus on creating new services for its customers. One of these, a cloud-enabled “data on demand” platform called Ascend, has since become one of the $4 billion company’s most profitable products.

Now think of what would have happened if Experian had merely seen cloud technology as an opportunity to cut costs. Sure, it would have fattened its profit margins temporarily, but as its competitors moved to the cloud, that advantage would soon have been eroded and, without new products, its business would soon have declined.

The Outsourcing Dilemma

Another source of disruption in the job market has been outsourcing. While no one seemed to notice when large multinational corporations were outsourcing blue-collar jobs to low cost countries, now so-called “gig economy” sites like Upwork and Fiverr are doing the same thing for white collar professionals like graphic designers and web developers.

So you would expect to see a high degree of unemployment for those job categories, right? Actually no. The Bureau of Labor Statistics expects demand for graphic designers to increase 4% by 2026 and web developers to increase 15%. The site Mashable recently named web development as one of 8 skills you need to get hired in today’s economy.

It’s not hard to see why. While it is true that a skilled professional in a low-cost country can do small projects of the same caliber as those in high-cost countries, those tasks do not constitute a whole job. For large, important projects, professionals must collaborate closely to solve complex problems. That’s hard to do through text messages on a website.

So while it’s true that many tasks are being outsourced, the number of jobs has actually increased. Just like with automation, outsourcing doesn’t make value disappear, but shifts it somewhere else.

The Social Impact

None of this is to say that the effects of technology and globalization haven’t been real. While it’s fine to speak analytically about value shifting here and there, if a task that you spent years learning to do well becomes devalued, you take it hard. Economists have also found evidence that disruptions in the job market have contributed to political polarization.

The most obvious thing to do is retrain workers who have been displaced, but it turns out that’s not so simple. In Janesville, a book that chronicles a small town’s struggle to recover from the closing of a GM plant, author Amy Goldstein found that the workers who sought retraining actually did worse than those who didn’t.

When someone loses their job, they don’t need training. They need another job, and removing themselves from the job market to take training courses can have serious costs. Work relationships begin to decay, and there is no guarantee that the new skills they learn will be in any more demand than the old ones they already had.

In fact, Peter Cappelli at the Wharton School argues that the entire notion of a skills gap in America is largely a myth. One reason there is such a mismatch between the rhetoric about skills and the data is that the most effective training often comes on the job from an employer. It is augmenting skills, not replacing them, that creates value.

At the same time, increased complexity in the economy is making collaboration more important, so often the most important skills workers need to learn are soft skills, like writing, listening and being a better team player.

You Can’t Compete With A Robot By Acting Like One

The future is always hard to predict. While it was easy to see that Amazon posed a real problem for large chain bookstores like Barnes & Noble and Borders, it was much less obvious that small independent bookstores would thrive. In much the same way, few saw that, ten years after the launch of the Kindle, paper books would surge amid a decline in e-books.

The one overriding trend of the past 50 years or so is that the future is always more human. In his recent book, Back to Human, Dan Schawbel finds that the antidote for our overly automated age is deeper personal relationships. Things like trust, empathy and caring can’t be automated or outsourced.

There are some things a machine will never do. It will never strike out in a little league game, have its heart broken or see its child born. That makes it hard — impossible really — for a machine ever to work effectively with humans as a real person would. The work of humans is increasingly to work with other humans to design work for machines.

That’s why perhaps the biggest shift in value is from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers (we now use computers for those things) and more to do with deep collaboration, teamwork and emotional intelligence.

So while even the most technically inept line manager can now easily produce an application that would once have required a highly skilled software engineer, designing the next generation of technology will require engineers and line managers to work more closely together.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Don't Blame Technology When Innovation Goes Wrong

GUEST POST from Greg Satell

When I speak at conferences, I’ve noticed that people are increasingly asking me about the unintended consequences of technological advance. As our technology becomes almost unimaginably powerful, there is growing apprehension and fear that we will be unable to control what we create.

This, of course, isn’t anything new. When trains first appeared, many worried that human bodies would melt at high speeds. In ancient Greece, Plato argued that the invention of writing would destroy conversation. None of these things ever came to pass, but clearly technology has changed the world for good and bad.

The truth is that we can’t fully control technology any more than we can fully control nature or each other. The emergence of significant new technologies unleashes forces we can’t hope to understand at the outset and struggle to deal with long after. Yet the most significant issues are most likely to be social in nature, and those are the ones we desperately need to focus on.

The Frankenstein Archetype

It’s no accident that Mary Shelley’s novel Frankenstein was published at roughly the same time as the Luddite movement was in full swing. As cottage industries were replaced by smoke-belching factories, the sense that man’s creations could turn against him was palpable, and the gruesome tale, considered by many to be the first true work of science fiction, touched a nerve.

In many ways, trepidation about technology can be healthy. Concern about industrialization led to social policies that helped mitigate its worst effects. In much the same way, scientists concerned about the threat of nuclear Armageddon did much to help establish policies that would prevent it.

Yet the initial fears almost always prove to be unfounded. While the Luddites burned mills and smashed machines to prevent their economic disenfranchisement, the industrial age led to a rise in the living standards of working people. In a similar vein, more advanced weapons have coincided with a reduction in violent deaths throughout history.

On the other hand, the most challenging aspects of technological advance are often things that we do not expect. While industrialization led to rising incomes, it also led to climate change, something neither the fears of the Luddites nor the creative brilliance of Shelley could have ever conceived of.

The New Frankensteins

Today, the technologies we create will shape the world as never before. Artificially intelligent systems are automating not only physical, but cognitive labor. Gene editing techniques, such as CRISPR, are enabling us to re-engineer life itself. Digital and social media have reshaped human discourse.

So it’s not surprising that there are newfound fears about where it’s all going. A study at Oxford found that 47% of US jobs are at risk of being automated over the next 20 years. The speed and ease of gene editing raise the possibility of biohackers wreaking havoc, and the rise of social media has coincided with a disturbing rise of authoritarianism around the globe.

Yet I suspect these fears are mostly misplaced. Instead of massive unemployment, we find ourselves in a labor shortage. While it is true that biohacking is a real possibility, our increased ability to cure disease will most probably greatly exceed the threat. The increased velocity of information also allows good ideas to travel faster and farther.

On the other hand, these technologies will undoubtedly unleash new challenges that we are only beginning to understand. Artificial intelligence raises disturbing questions about what it means to be human, just as the power of genomics will force us to grapple with questions about the nature of the individual and social media forces us to define the meaning of truth.

Revealing And Building

Clearly, Shelley and the Luddites were very different. While Shelley was an aristocratic intellectual, the Luddites were working-class weavers. Yet both saw the rise of technology as the end of a way of life and, in that way, both were right. Technology, if nothing else, forces us to adapt, often in ways we don’t expect.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He describes technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil are also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

Yet in another essay, Building Dwelling Thinking, he explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. The revealing power of technology forces us to rethink old truths and re-imagine new societal norms. That, more than anything else, is where the challenges lie.

Learning To Ask The Hard Questions

We are now nearing the end of the digital age and entering a new era of innovation which will likely be more impactful than anything we’ve seen since the rise of electricity and internal combustion a century ago. This, in turn, will initiate a new cycle of revealing and building that will be as challenging as anything humanity has ever faced.

So while it is unlikely that we will ever face a robot uprising, artificial intelligence does pose a number of troubling questions. Should safety systems in a car prioritize the life of a passenger or a pedestrian? Who is accountable for the decisions an automated system makes? We worry about who is teaching our children, but scarcely stop to think about who is training our algorithms.

These are all questions that need answers within the next decade. Beyond that, we will have further quandaries to unravel, such as what is the nature of work and how do we value it? How should we deal with the rising inequality that automation creates? Who should benefit from technological breakthroughs?

The unintended consequences of technology have less to do with the relationship between us and our inventions than with the relationship between us and each other. Every technological shift brings about a societal shift that reshapes values and norms. Clearly, we are not helpless, but we are responsible. These are very difficult questions and we need to start asking them. Only then can we begin the cycle of revealing truths and building a better future.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Humans Are Not as Different from AI as We Think

GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But, do you know why that is so?

The answer is, because that is exactly what we humans do as well.

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).
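For readers who want to see the mechanic rather than take it on faith, here is a literal teeny weeny language model, a toy sketch of my own. It does nothing but count which word follows which and then emit the most probable next word. Real LLMs condition on long contexts with neural networks, but the predict-the-next-word objective is the same.

```python
# A teeny weeny language model: count next-word frequencies,
# then always emit the most probable continuation.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat chased the dog . "
          "the dog sat on the rug .").split()

next_word: defaultdict[str, Counter] = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1   # tally what follows each word

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        if word not in next_word:
            break
        word = next_word[word].most_common(1)[0][0]  # most probable next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # "the cat sat on the cat sat on the"
```

Swap the lookup table for a neural network trained on trillions of words of text and you have the “steroids.”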

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis would be. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are a, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?

Image Credit: Pixabay

Technical, Market and Emotional Risks

GUEST POST from Mike Shipulski

Technical risk – Will it work?
Market risk – Will they buy it?
Emotional risk – Will people laugh at your crazy idea?

Technical risk – Test it in the lab.
Market risk – Test it with the customer.
Emotional risk – Try it with a friend.

Technical risk – Define the right test.
Market risk – Define the right customer.
Emotional risk – Define the right friend.

Technical risk – Define the minimum acceptable performance criteria.
Market risk – Define the minimum acceptable response from the customer.
Emotional risk – Define the minimum acceptable criticism from your friend.

Technical risk – Can you manufacture it?
Market risk – Can you sell it?
Emotional risk – Can you act on your crazy idea?

Technical risk – How sure are you that you can manufacture it?
Market risk – How sure are you that you can sell it?
Emotional risk – How sure are you that you can act on your crazy idea?

Technical risk – When the VP says it can’t be manufactured, what do you do?
Market risk – When the VP says it can’t be sold, what do you do?
Emotional risk – When the VP says your idea is too crazy, what do you do?

Technical risk – When you knew the technical risk was too high, what did you do?
Market risk – When you knew the market risk was too high, what did you do?
Emotional risk – When you knew someone’s emotional risk was going to be too high, what did you do?

Technical risk – Can you teach others to reduce technical risk? How about increasing it?
Market risk – Can you teach others to reduce market risk? How about increasing it?
Emotional risk – Can you teach others to reduce emotional risk? How about increasing it?

Technical risk – What does it look like when technical risk is too low? And the consequences?
Market risk – What does it look like when market risk is too low? And the consequences?
Emotional risk – What does it look like when emotional risk is too low? And the consequences?

We are most aware of technical risk and spend most of our time trying to reduce it. We have the mindset and toolset to reduce it. We know how to do it. But we were not taught to recognize when technical risk is too low. And if we do recognize it’s too low, we don’t know how to articulate the negative consequences. With all this said, market risk is far more dangerous.

We’re unfamiliar with the toolset and mindset to reduce market risk. Where we can change the design, run the test, and reduce technical risk, market risk is not like that. It’s difficult to understand what drives the customers’ buying decision and it’s difficult to directly (and quickly) change their buying decision. In short, it’s difficult to know what to change so they make a different buying decision. And if they don’t buy, you don’t sell. And that’s a big problem. With that said, emotional risk is far more debilitating.

When a culture creates high emotional risk, people keep their best ideas to themselves. They don’t want to be laughed at or ridiculed, so their best ideas don’t see the light of day. The result is a collection of wonderful ideas known only to the underground Trust Network. A culture that creates high emotional risk has insufficient technical and market risk because everyone is afraid of the consequences of doing something new and different. The result – the company with high emotional risk follows the same old script and does what it did last time. And this works well, right up until it doesn’t.

Here’s a three-pronged approach that may help.

  1. Continue to reduce technical risk.
  2. Learn to reduce market risk early in a project.
  3. And behave in a way that reduces emotional risk so you’ll have the opportunity to reduce technical and market risk.

Image credit: Unsplash
