Tag Archives: Artificial Intelligence

How To Create the IKEA Effect

A Customer Experience That Will Be Appreciated

GUEST POST from Shep Hyken

When reaching out for customer service and support, most customers still prefer to communicate with a company or brand via the traditional phone call. That said, more and more customers are attracted to and embracing a do-it-yourself customer service experience, known as self-service.

I had a chance to sit down with Venk Korla, the president and CEO of HGS Digital, which recently released its HGS Buyers Insight Report. We talked about the investments CX (customer experience) leaders are making in AI and digital self-service and the importance of creating a similar experience for employees, which I’ll get to in a moment. But first, I want to share some comments Korla made comparing customer service to an IKEA experience.

The IKEA Effect

The IKEA effect was identified and named by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale and Dan Ariely of Duke, who published the results of three studies in 2011. A short description of the IKEA effect is that some customers not only enjoy putting furniture together themselves but also find more value in the experience than if a company delivered pre-assembled furniture.

“It’s the same in the customer service/support world,” Korla said. “Customers who easily resolve their issues or have their questions answered on a brand’s self-service portal, either through traditional FAQ pages on a website or something more advanced, such as AI-powered solutions, will not only be happy with the experience but will also be grateful to the company for providing such an easy, fulfilling experience.”

To support this notion, our customer service research (sponsored by RingCentral) found that even with the phone being the No. 1 way customers like to interact with brands, 26% of customers stopped doing business with a company or brand because self-service options were not provided. (Note: Younger generations prefer self-service solutions more than older generations.) As the self-service experience improves, more will adopt it as their go-to method of getting questions answered and problems resolved.

The Big Bet On AI

In the next 18 months, CX decision-makers are betting big on artificial intelligence. The research behind the HGS Buyers Insight Report found that 37% of the leaders surveyed will deploy customer-facing chatbots, 30% will use generative AI or text-to-speech solutions to support employees taking care of customers, and 28% will invest in and deploy robotic process automation. All of these investments are meant to improve both the customer and employee experience.

While Spending On CX Is A Top Priority, Spending On Employee Experience (EX) Is Lagging

Korla recognizes the need to support not only customers with AI, but also employees. Companies betting on AI must also consider employees as they invest in technology to support customers. Just as a customer uses an AI-powered chatbot to communicate using natural language, the employee interacting directly with the customer should be able to use similar tools.

Imagine a customer support agent receiving a call from a customer with a difficult question. As the customer describes the issue, the agent types notes into the computer. Within seconds, the answer to the question appears on the agent’s screen. In addition, the AI tool shares insights about the customer, such as their buying patterns, how long they have been a customer, what they’ve called about in the past and more. At this point, a good agent can interpret the information and communicate it in the style that best suits the customer.
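
Korla didn’t describe a specific implementation, but conceptually an agent-assist tool like this combines two lookups: a best-match answer from a knowledge base and a summary of the customer’s history. Here is a minimal sketch of that flow; the data fields, function names, and keyword-matching approach are illustrative assumptions, not HGS’s actual product.

```python
# Hypothetical agent-assist flow. All names and data sources are
# illustrative; a production system would use semantic search and a CRM.
from dataclasses import dataclass, field

@dataclass
class CustomerContext:
    customer_id: str
    tenure_years: float
    buying_patterns: list[str] = field(default_factory=list)
    past_issues: list[str] = field(default_factory=list)

def suggest_answer(issue_notes: str, knowledge_base: dict[str, str]) -> str:
    """Pick the knowledge-base entry whose question shares the most words
    with the agent's notes (a crude stand-in for semantic search)."""
    words = set(issue_notes.lower().split())
    best_question = max(
        knowledge_base,
        key=lambda q: len(words & set(q.lower().split())),
    )
    return knowledge_base[best_question]

def assist(agent_notes: str, customer: CustomerContext,
           knowledge_base: dict[str, str]) -> dict:
    # Surface the suggested answer and customer insights together, the way
    # the article describes them appearing on the agent's screen.
    return {
        "suggested_answer": suggest_answer(agent_notes, knowledge_base),
        "customer_insights": {
            "tenure_years": customer.tenure_years,
            "buying_patterns": customer.buying_patterns,
            "past_issues": customer.past_issues,
        },
    }
```

The value is in the combination: the agent gets the answer and the context in one view, leaving them free to focus on how to communicate it.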

Korla explains that the IKEA effect is just as powerful for employees as it is for customers. When employees are armed with the right tools to do their jobs effectively, allowing them to easily support customers and solve their most difficult problems, they are more fulfilled. In the HGS report, 54% of CX leaders surveyed cited talent attraction and retention as a top investment priority. So, for the company that invests in EX tools—specifically AI and automation—the result translates into lower turnover and more engaged employees.

Korla’s insights highlight the essence of the IKEA effect in creating empowering customer experiences and employee experiences. He reminds us that an amazing CX is supported by an amazing EX. As your company prepares to invest in AI and other self-service tools for your customers, consider an investment in similar tools for your employees.

Download the HGS Buyers Insight Report to find out what CX decision-makers will invest in and focus on for 2024 and beyond.

Image Credits: Pixabay
This article originally appeared on Forbes.com

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Balancing Artificial Intelligence with the Human Touch

GUEST POST from Shep Hyken

As AI and ChatGPT-type technologies grow in capability and ease of use and become more cost-effective, more and more companies are making their way to the digital experience. Still, the best companies know better than to switch to 100% digital.

I had a chance to interview Nicole Kyle, managing director and co-founder of CMP Research (Customer Management Practice), for Amazing Business Radio. Kyle’s team provides research and advisory services for the contact center industry and conducts some of the most critical research on the topic of self-service and digital customer service. I first met Kyle at CCW, the largest contact center conference in the industry. I’ve summarized seven of her key observations below, followed by my commentary:

  1. The Amazon Effect has trained customers to expect a level of service that’s not always in line with what companies and brands can provide. This is exactly what’s happening with customer expectations. They no longer compare you just to your direct competitors but to the best experience they’ve had from any company. Amazon and other rockstar brands focused on CX (customer experience) have set the bar higher for all companies in all industries.
  2. People’s acceptance and eventual normalization of digital experiences accelerated during the pandemic, and they have become a way of life for many customers. The pandemic forced customers to accept self-service. For example, many customers never went online to buy groceries, vehicles or other items that were traditionally shopped for in person. Once customers got used to it, as the pandemic became history, many never returned to the “old way” of doing business. At a minimum, many customers expect a choice between the two.
  3. Customers have new priorities and are placing a premium on their time. Seventy-two percent of customers say they want to spend less time interacting with customer service. They want to be self-sufficient in managing typical customer service issues. In other words, they want self-service options that will get them answers to their questions efficiently and in a timely manner. Our CX research differs, putting the number at less than half of that 72%. When I asked Kyle about the discrepancy, she responded, “Customers who have a poor self-service experience are less likely to return to self-service. While there is an increase in preference, you’re not seeing the adoption because some companies aren’t offering the type of self-service experience the customer wants.”
  4. The digital dexterity of society is improving! That phrase is a great way to describe self-service adoption, specifically how customers view chatbots or other ChatGPT-type technologies. Kyle explained, “Digital experiences became normalized during the pandemic, and digital tools, such as generative AI, are now starting to help people in their daily lives, making them more digitally capable.” That translates into customers’ higher acceptance and desire for digital support and CX.
  5. Many customers can tell the difference between talking to an AI chatbot and a live chat with a human agent, depending on their access to technology and the quality of the chatbot. However, customers are still willing to use the tools if the results are good. When it comes to AI interacting with customers via text or voice, don’t get hung up on how lifelike (or not) the experience is as long as it gets your customers what they want quickly and efficiently.
  6. The No. 1 driver of satisfaction (according to 78% of customers surveyed) in a self-service experience is personalization. Personalization is more important than ever in customer service and CX. So, how do you personalize digital support? The “machine” must not only be capable of delivering the correct answers and solutions, but it must also recognize the existing customer, remember issues the customer had in the past, make suggestions that are specific to the customer, and provide other customized, personalized approaches to the experience (see the code sketch after this list).
  7. With increased investments in self-service and generative AI, 60% of executives say they will reduce the number of frontline customer-facing jobs. But, the good news is that jobs will be created for employees to monitor performance, track data and more. I’m holding firm in my predictions over the past two years that while there may be some job disruption, the frontline customer support agent job will not be eliminated. To Kyle’s point, there will be job opportunities related to the contact center, even if they are not on the front line.
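
The personalization Kyle describes in point 6 boils down to assembling customer-specific context before generating a reply: recognize the customer, recall their history, and constrain the suggestions to what fits them. A minimal sketch follows; the record fields and the llm callable are hypothetical placeholders, not anything CMP Research described.

```python
# Illustrative personalization sketch: field names and the `llm` callable
# are placeholders, not a real vendor API.
from typing import Any, Callable

def build_personalized_prompt(customer: dict[str, Any], question: str) -> str:
    """Fold identity, history, and tailoring instructions into the prompt,
    so the self-service 'machine' recognizes and remembers the customer."""
    past_issues = "; ".join(customer.get("past_issues", [])) or "none on record"
    return (
        f"Customer: {customer['name']} "
        f"(customer for {customer['tenure_years']} years)\n"
        f"Past issues: {past_issues}\n"
        f"Current question: {question}\n"
        "Answer using the history above, and suggest only options relevant "
        "to this customer's plan and purchase history."
    )

def personalized_answer(customer: dict[str, Any], question: str,
                        llm: Callable[[str], str]) -> str:
    # `llm` is any callable mapping a prompt string to a response string.
    return llm(build_personalized_prompt(customer, question))
```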

Self-service and automation are a balancing act. Companies that have gone “all in” and eliminated human-to-human customer support have had pushback from customers. Companies that have not adopted newer technologies are frustrating many customers who want and expect self-service solutions. The right balance may differ from one company to the next, but smart leaders will find it and continue to adapt to the ever-changing expectations of their customers.

Image Credits: Unsplash
This article originally appeared on Forbes.com

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Time is a Flat Circle

Jamie Dimon’s Comments on AI Just Proved It

GUEST POST from Robyn Bolton

“Time is a flat circle.  Everything we have done or will do we will do over and over and over and over again – forever.” – Rusty Cohle, played by Matthew McConaughey, in True Detective

For the whole of human existence, we have created new things with no idea if, when, or how they will affect humanity, society, or business.  New things can be a distraction, sucking up time and money and offering nothing in return.  Or they can be a bridge to a better future.

As a leader, it’s your job to figure out which things are a bridge (i.e., innovation) and which things suck (i.e., shiny objects).

Innovation is a flat circle

The concept of eternal recurrence, that time repeats itself in an infinite loop, was first taught by Pythagoras (of Pythagorean theorem fame) in the 6th century BC. It re-emerged (thereby proving its own truth) in Friedrich Nietzsche’s writings in the 19th century, then again in 2014’s first season of True Detective, and then again on Monday in Jamie Dimon’s Annual Letter to Shareholders.

Mr. Dimon, the CEO and Chairman of JPMorgan Chase & Co, first mentioned AI in his 2017 Letter to Shareholders.  So, it wasn’t the mention of AI that was newsworthy. It was how it was mentioned.  Before mentioning geopolitical risks, regulatory issues, or the recent acquisition of First Republic, Mr. Dimon spends nine paragraphs talking about AI, its impact on banking, and how JPMorgan Chase is responding.

Here’s a screenshot of the first two paragraphs:

[Image: excerpt from JPMorgan’s Annual Letter to Shareholders]

He’s right. We don’t know “the full effect or the precise rate at which AI will change our business—or how it will affect society at large.” We were similarly clueless in 1436 (when the printing press was invented), 1712 (when the first commercially successful steam engine was invented), 1882 (when electricity was first commercially distributed), and 1993 (when the World Wide Web was released to the public).

Innovation, it seems, is also a flat circle.

Our response doesn’t have to be.

Historically, people responded to innovation in one of two ways: panic because it’s a sign of the apocalypse or rejoice because it will be our salvation. And those reactions aren’t confined to just “transformational” innovations.  In 2015, a visiting professor at King’s College London declared that the humble eraser (1770) was “an instrument of the devil” because it creates “a culture of shame about error.  It’s a way of lying to the world, which says, ‘I didn’t make a mistake.  I got it right the first time.’”

Neither reaction is true. Fortunately, as time passes, more people recognize that the truth is somewhere between the apocalypse and salvation and that we can influence what that “between” place is through intentional experimentation and learning.

JPMorgan started experimenting with AI over a decade ago, well before most of its competitors.  As a result, they “now have over 400 use cases in production in areas such as marketing, fraud, and risk” that are producing quantifiable financial value for the company. 

It’s not just JPMorgan.  Organizations as varied as John Deere, BMW, Amazon, the US Department of Energy, Vanguard, and Johns Hopkins Hospital have been experimenting with AI for years, trying to understand if and how it could improve their operations and enable them to serve customers better.  Some experiments worked.  Some didn’t.  But every company brave enough to try learned something and, as a result, got smarter and more confident about “the full effect or the precise rate at which AI will change our business.”

You have free will.  Use it to learn.

Cynics believe that time is a flat circle.  Leaders believe it is an ever-ascending spiral, one in which we can learn, evolve, and influence what’s next.  They also have the courage to act on (and invest in) that belief.

What do you believe?  More importantly, what are you doing about it?

Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Top 10 Human-Centered Change & Innovation Articles of May 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are May’s ten most popular innovation posts:

  1. Five Lessons from the Apple Car’s Demise — by Robyn Bolton
  2. Six Causes of Employee Burnout — by David Burkus
  3. Learning About Innovation – From a Skateboard? — by John Bessant
  4. Fighting for Innovation in the Trenches — by Geoffrey A. Moore
  5. A Case Study on High Performance Teams — by Stefan Lindegaard
  6. Growth Comes From What You Don’t Have — by Mike Shipulski
  7. Innovation Friction Risks and Pitfalls — by Howard Tiersky
  8. Difference Between Customer Experience Perception and Reality — by Shep Hyken
  9. How Tribalism Can Kill Innovation — by Greg Satell
  10. Preparing the Next Generation for a Post-Digital Age — by Greg Satell

BONUS – Here are five more strong articles published in April that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

AI Strategy Should Have Nothing to do with AI

GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models [and] identified eight styles that distinguish a culture and can be measured.”  I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers.

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict.

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy.

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer: Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions.  As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, and company.  Bottom line, the eight cultures are a tool, not a law (and I glossed over a lot of stuff from the report).

Image credit: Wikimedia Commons

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Top 10 Human-Centered Change & Innovation Articles of April 2024

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are April’s ten most popular innovation posts:

  1. Ignite Innovation with These 3 Key Ingredients — by Howard Tiersky
  2. What Have We Learned About Digital Transformation? — by Geoffrey A. Moore
  3. The Collective Growth Mindset — by Stefan Lindegaard
  4. Companies Are Not Families — by David Burkus
  5. 24 Customer Experience Mistakes to Stop in 2024 — by Shep Hyken
  6. Transformation is Human Not Digital — by Greg Satell
  7. Embrace the Art of Getting Started — by Mike Shipulski
  8. Trust as a Competitive Advantage — by Greg Satell
  9. 3 Innovation Lessons from The Departed — by Robyn Bolton
  10. Humans Are Not as Different from AI as We Think — by Geoffrey A. Moore

BONUS – Here are five more strong articles published in March that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

How I Use AI to Understand Humans

(and Cut Research Time by 80%)

GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research.  Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation.  No matter how good the Large Language Model (LLM) is, it can’t tell you how people’s feelings, aspirations, and motivations drive their decisions.

Then I watched JTBD Untangled’s video with Evan Shore, Walmart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What because of its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done research?  Yes, please!  That’s precisely what JobsLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments.  For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research templates.  Another colleague recently sang the praises of TheyDo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI.  Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them.  If you want to start from scratch, you can learn how to write super-effective prompts here and here.  If you’d rather build on someone else’s work, Brian at JobsLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point.  Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect.    It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
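
One way to make this concrete is to treat the previous project as a regression test: run each candidate prompt against the old transcripts and score the output against the insights you already trust. A minimal sketch, where call_llm is a deliberate stub (wire in whichever model API you use) and the word-overlap score is a crude proxy for a human read:

```python
# Prompt-refinement harness sketch. `call_llm` is a placeholder for your
# model provider's API; the scoring is a rough proxy for human review.
def call_llm(prompt: str, transcript: str) -> str:
    raise NotImplementedError("wire up your model API here")

def overlap_score(generated: str, expected: str) -> float:
    """Fraction of the expected output's vocabulary that the generated
    output hits. Crude, but enough to rank candidate prompts."""
    got = set(generated.lower().split())
    want = set(expected.lower().split())
    return len(got & want) / max(len(want), 1)

def rank_prompts(prompts: list[str], transcript: str,
                 known_insights: str) -> list[tuple[float, str]]:
    # Score every candidate against the insights the old project produced,
    # then keep refining the top performer.
    scored = [
        (overlap_score(call_llm(p, transcript), known_insights), p)
        for p in prompts
    ]
    return sorted(scored, reverse=True)
```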

Defend your budget

Using AI for customer research will save you time and money, but it is not free. It’s also not just the cost of the subscription or license for your chosen tool(s).  

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently.  Some will love the idea of using AI for customer research, while others will resist it.  Everyone, however, will pounce on any evidence that they’re right.  So be prepared.  Take advantage of free trials to play with tools.  Test tools on friends, family, and colleagues.  Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Don’t Blame Technology When Innovation Goes Wrong

GUEST POST from Greg Satell

When I speak at conferences, I’ve noticed that people are increasingly asking me about the unintended consequences of technological advance. As our technology becomes almost unimaginably powerful, there is growing apprehension and fear that we will be unable to control what we create.

This, of course, isn’t anything new. When trains first appeared, many worried that human bodies would melt at the high speeds. In ancient Greece, Plato argued that the invention of writing would destroy conversation. None of these things ever came to pass, of course, but clearly technology has changed the world for good and bad.

The truth is that we can’t fully control technology any more than we can fully control nature or each other. The emergence of significant new technologies unleashes forces we can’t hope to understand at the outset and struggle to deal with long after. Yet the most significant issues are most likely to be social in nature, and those are the ones we desperately need to focus on.

The Frankenstein Archetype

It’s no accident that Mary Shelley’s novel Frankenstein was published at roughly the same time as the Luddite movement was in full swing. As cottage industries were replaced by smoke-belching factories, the sense that man’s creations could turn against him was palpable, and the gruesome tale, considered by many to be the first true work of science fiction, touched a nerve.

In many ways, trepidation about technology can be healthy. Concern about industrialization led to social policies that helped mitigate its worst effects. In much the same way, scientists concerned about the threat of nuclear Armageddon did much to help establish policies that would prevent it.

Yet the initial fears almost always prove to be unfounded. While the Luddites burned mills and smashed machines to prevent their economic disenfranchisement, the industrial age led to a rise in the living standards of working people. In a similar vein, more advanced weapons have coincided with a reduction in violent deaths throughout history.

On the other hand, the most challenging aspects of technological advance are often things that we do not expect. While industrialization led to rising incomes, it also led to climate change, something neither the fears of the Luddites nor the creative brilliance of Shelley could have ever conceived of.

The New Frankensteins

Today, the technologies we create will shape the world as never before. Artificially intelligent systems are automating not only physical, but cognitive labor. Gene editing techniques, such as CRISPR, are enabling us to re-engineer life itself. Digital and social media have reshaped human discourse.

So it’s not surprising that there are newfound fears about where it’s all going. A study at Oxford found that 47% of US jobs are at risk of being automated over the next 20 years. The speed and ease of gene editing raises the possibility of biohackers wreaking havoc and the rise of social media has coincided with a disturbing rise of authoritarianism around the globe.

Yet I suspect these fears are mostly misplaced. Instead of massive unemployment, we find ourselves in a labor shortage. While it is true that biohacking is a real possibility, our increased ability to cure disease will most probably greatly exceed the threat. The increased velocity of information also allows good ideas to travel faster and farther.

On the other hand, these technologies will undoubtedly unleash new challenges that we are only beginning to understand. Artificial intelligence raises disturbing questions about what it means to be human, just as the power of genomics will force us to grapple with questions about the nature of the individual and social media forces us to define the meaning of truth.

Revealing And Building

Clearly, Shelley and the Luddites were very different. While Shelley was an aristocratic intellectual, the Luddites were working-class weavers. Yet both saw the rise of technology as the end to a way of life and, in that way, both were right. Technology, if nothing else, forces us to adapt, often in ways we don’t expect.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

Yet in another essay, Building Dwelling Thinking, he explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. The revealing power of technology forces us to rethink old truths and re-imagine new societal norms. That, more than anything else, is where the challenges lie.

Learning To Ask The Hard Questions

We are now nearing the end of the digital age and entering a new era of innovation which will likely be more impactful than anything we’ve seen since the rise of electricity and internal combustion a century ago. This, in turn, will initiate a new cycle of revealing and building that will be as challenging as anything humanity has ever faced.

So while it is unlikely that we will ever face a robot uprising, artificial intelligence does pose a number of troubling questions. Should safety systems in a car prioritize the life of a passenger or a pedestrian? Who is accountable for the decisions an automated system makes? We worry about who is teaching our children, but scarcely stop to think about who is training our algorithms.

These are all questions that need answers within the next decade. Beyond that, we will have further quandaries to unravel, such as what is the nature of work and how do we value it? How should we deal with the rising inequality that automation creates? Who should benefit from technological breakthroughs?

The unintended consequences of technology have less to do with the relationship between us and our inventions than with the relationship between us and each other. Every technological shift brings about a societal shift that reshapes values and norms. Clearly, we are not helpless, but we are responsible. These are very difficult questions, and we need to start asking them. Only then can we begin the cycle of revealing truths and building a better future.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Humans Are Not as Different from AI as We Think

GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But, do you know why that is so?

The answer is, because that is exactly what we humans do as well.

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).
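
To make “auto-correct on steroids” concrete, here is a toy next-word predictor. Real LLMs use deep neural networks over enormous corpora rather than lookup tables, but the core move Moore describes, predicting the next word from the words so far, is the same; the corpus and variable names below are purely illustrative.

```python
# Toy bigram model: count which word follows which, then predict the most
# likely continuation. This is the "probability that any given sequence of
# words will be followed by a specific next word" idea in its simplest form.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    follows: dict = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(follows: dict, word: str):
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams(
    "time is a flat circle and time is a river and time is money"
)
print(most_likely_next(model, "time"))  # -> "is"
print(most_likely_next(model, "is"))    # -> "a" (seen twice vs. "money" once)
```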

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis would be. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are a, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?

Image Credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Powering the Google Innovation Machine with the World’s Top Minds

GUEST POST from Greg Satell

It’s no secret that Google is one of the most innovative companies on the planet. Besides pioneering and then dominating the search industry, it has also become a leader in developing futuristic technologies such as artificial intelligence, driverless cars and quantum computing. It has even launched a life science company.

What makes Google so successful is not one particular process, but how it integrates multiple strategies into a seamless whole. For example, Google Brain started out as a 20% time project, then migrated out to its “X” Division to accelerate development and finally came back to the mothership, where it now collaborates closely with engineering teams to build new products.

Yet perhaps its most important strategy, in fact the one that makes much of the rest possible, is how it partners with top scientists in the academic world. This is no “quick hit,” but a well thought out, long-term game plan designed to establish deep relationships based on cutting edge science and embed that knowledge deeply into just about everything Google does.

Building Deep Relationships to the Academic Community

“We design a variety of programs that widen and deepen our relationships with academic scientists,” Maggie Johnson, who heads up University Relations at Google, told me. In fact, there are three distinct ways that Google engages directly with scientists beyond the typical research partnerships with universities.

The first is its Faculty Research Awards program, which provides small one-year grants, usually to graduate students or postdocs whose work may be of interest to Google. These are unrestricted gifts that allow the company to develop relationships with young talent at the beginning of their careers, although recipients are highly encouraged to publish their work publicly.

While anybody can apply for a Faculty Research Award, Focused Research Awards are only available by invitation. Typically, these are awarded to more senior researchers that Google has already had some contact with and last two to three years. However, they are also unrestricted grants that researchers can use as they see fit.

The third way that Google engages with scientists is to proactively recruit leaders in a particular field of interest. Geoffrey Hinton, for example, is a pioneer in neural networks and widely considered one of the top AI experts in the world. He splits his time between his faculty position at the University of Toronto and working on Google Brain.

“Spinning In” World Class Scientists

The academic research programs provide many benefits to Google as a company. They give the company access to the most promising students for recruiting, allow it to help shape university curriculums and keep it connected to breakthrough research in important fields. However, the most direct benefits probably come from inviting researchers to spend a sabbatical year at Google, which it calls its Visiting Faculty Program.

For example, Andrew Ng, a top AI researcher, decided to spend a year working at Google and quickly formed a close working relationship with two of the company’s brightest minds, Greg Corrado and Jeff Dean, who were interested in what was then a new brand of artificial intelligence called deep learning. Their collaboration became the Google Brain project.

The Visiting Faculty Program touches on everything Google does. Recently, they’ve had people visiting the company like John Canny of UC Berkeley, who helped with the development of TPUs, chips specialized to run Google’s AI algorithms, and Michael Rabin, a Turing Award-winning mathematician who was working on auction algorithms. For every Google priority, at least one of the world’s top minds is working with the company on it.

What makes the sabbatical program unusual is how deeply it is integrated into everyday work at the company. “In most cases, these scientists have already been working with our teams through one of our other programs, so the groundwork for a productive relationship has already been laid,” Maggie Johnson told me.

Developing “Win-Win” Relationships

One of the things that makes Google’s outreach to researchers work so well is that it is truly a win-win arrangement. Yes, the company gets top experts in important fields to work on its problems, but the researchers themselves get to work with unparalleled tools and data sets. They also get a much better sense of what problems are considered important in a commercial environment.

Katya Scheinberg, a Professor at Lehigh University who focuses on optimization problems, found working at Google to be a logical extension of her earlier collaboration with the company. “I had been working on large-scale machine learning problems and had some connections with Google scientists. So spending part of my sabbatical year at the company seemed fairly natural. I learned a lot about the practical problems that private sector researchers are working on,” she told me.

Since leaving Google, she’s found that her time at the company has shifted the focus of her research. “Working at Google got me interested in some different problems and alerted me to the possibility of applying some approaches I had worked on before to different fields of application.”

Sometimes scholars stay for longer and can have a transformative impact on the company. As noted above, Andrew Ng spent several years at the company. Andrew Moore, a renowned computer scientist and a former dean of Carnegie Mellon’s School of Computer Science, took a leave of absence from his university to set up Google’s Research Center in Pittsburgh. Lasting relationships like these are rare in industry, but incredibly valuable.

Connecting to Discovery Is Something Anyone Can Do, But You Have to Make the Effort

Clearly, Google is an unusual company. There aren’t many places that can attract the type of talent that it can. However, just about any business can, for example, support the work of a young graduate student or postdoc at a local university. In much the same way, inviting even a senior researcher to come for a short time is not prohibitively expensive.

Innovation is never a single event, but a process of discovery, engineering and transformation. It is by connecting to discovery that businesses can truly see into the future and develop the next generation of breakthrough products. Unfortunately, few businesses realize the importance of connecting with the academic world.

Make no mistake, if you don’t discover, you won’t invent and if you don’t invent you will be disrupted eventually. It’s just a matter of time. However, you can’t just show up one day and decide you want to work with the world’s greatest minds. Even Google, with all its resources and acumen, has had to work really hard at it.

It’s made these investments in time, focus and resources because it understands that the search business, as great as it is, won’t deliver outsized profits forever. Today, we no longer have the luxury to manage for stability, but must prepare for disruption.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Dall-E on Bing

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.