Category Archives: Technology

Artificial Intelligence is a No-Brainer

Why innovation management needs co-intelligence

GUEST POST from John Bessant

Long fuse, big bang. It’s a great descriptor which Andrew Hargadon uses to describe the way some major innovations arrive and have impact. For a long time they exist but we hardly notice them: they are confined to limited applications, there are constraints on what the technology can do, and so on. But suddenly, almost as if by magic, they move center stage and seem to have an impact everywhere we look.

Which is pretty much the story we now face with the wonderful world of AI. While there is plenty of debate about labels — artificial intelligence, machine learning, different models and approaches — the result is the same. Everywhere we look there is AI — and it’s already having an impact.

More than that; the pace of innovation within the world of AI is breathtaking, even by today’s rapid product-cycle standards. We’ve become used to seeing major shifts in things like mobile phones, with change happening on a cycle measured in months. But breakthrough AI announcements seem to arrive with weekly frequency.

That’s also reflected in the extent of use. From the ‘early days’ (only last year!) of hearing about ChatGPT and other models, we’ve now reached a situation where estimates suggest that millions of people are experimenting with them. ChatGPT has grown from a handful of people to over 200 million in less than a year; it added its first million users within five days of launch! Similar figures show massive and rapid take-up of competing products like Anthropic’s Claude and Google’s Gemini. It’s pretty clear that there’s a high-paced ‘arms race’ going on, and it’s drawing in all the big players.

This rapid rate of adoption is being led by an even faster proliferation on the supply side, with many new players entering the market, especially in niche fields. As with the apps market, there’s a huge number of players jumping on the bandwagon, and significant growth in the open-source availability of models. And many models now allow users to create their own custom versions — ‘mini-GPTs’ and ‘Co-pilots’ which they can deploy for highly specific needs.

Not surprisingly, estimates suggest that the growth potential in the market for AI technologies is vast: around 200 billion U.S. dollars in 2023, expected to grow to over 1.8 trillion U.S. dollars by 2030.
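Those two figures imply a strikingly steep growth curve. As a quick back-of-the-envelope check (using the rounded estimates above), the implied compound annual growth rate works out to roughly 37% per year:

```python
# Back-of-the-envelope: implied compound annual growth rate (CAGR)
# for the AI market figures cited above (rounded estimates).
start_value = 200e9   # ~$200 billion in 2023
end_value = 1.8e12    # ~$1.8 trillion projected for 2030
years = 2030 - 2023   # a 7-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 37% per year
```

That is, the projection assumes the market multiplies roughly ninefold in seven years.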

Growth in Artificial Intelligence

There’s another important aspect to this growth. As Ethan Mollick suggests in his excellent book ‘Co-Intelligence’, everything that we see AI doing today is the product of a far-from-perfect version of the technology; in a very short time, given the rate of growth so far, we can expect much more power, integration and multi-modality.

The all-singing, all-dancing, do-pretty-much-anything-else version of AI we can imagine isn’t far off. Speculation about when AGI — artificial general intelligence — will arrive is still just that, speculation, but the direction of travel is clear.

Not that the impact is seen as entirely positive. Whilst there have been impressive breakthroughs using AI to help understand and innovate in fields as diverse as healthcare, distribution and education, these are matched by growing concern about, for example, privacy and data security, deep-fake abuse and significant employment effects.

With its demonstrable potential for undertaking a wide range of tasks, AI certainly poses a threat to the quality and quantity of a wide range of jobs — and at the limit could eliminate them entirely. And where earlier generations of technological automation impacted simple manual operations or basic tasks, AI has the capacity to undertake many complex operations — often doing so faster and more effectively than humans.

AI models like ChatGPT can now routinely pass difficult law and medical school exams, they can interpret complex data sets and spot patterns better than their human counterparts, and they can quickly combine and analyze complex data to arrive at decisions which may often be of better quality than those made by even experienced practitioners. Not surprisingly, the policy discussion around this potential impact has proliferated at a similarly fast rate, echoing growing public concern about the darker side of AI.

But is it inevitably going to be a case of replacement, with human beings shunted to the side-lines? No-one is sure, and it is still early days. We’ve had technological revolutions before — think back fifty years to when we first felt the early shock waves of what was to become the ‘microelectronics revolution’. Newspaper headlines and media programs with provocative titles like ‘Now the chips are down’ prompted frenzied discussion and policy planning for a future world staffed by robots and automated to the point where most activity would be undertaken by machines, overseen by one man and a dog. The role of the dog being to act as security guard; the role of the man being confined to feeding the dog.

Automation Man and Dog

This didn’t materialize. As many commentators pointed out at the time, and as history has shown, there were shifts and job changes, but there was also compensating creation of new roles and tasks for which new skills were needed. Change, yes — but not always in the negative direction, and with growing potential for improving the content and quality of remaining and new jobs.

So if history is any guide then there are some grounds for optimism. Certainly we should be exploring and anticipating, and in particular trying to match skills and capacity building to likely future needs.

Not least in the area of innovation management. What impact is AI having — and what might the future hold? It’s certainly implicated in a major shift right across the innovation space in terms of its application. If we take a simple ‘innovation compass’ to map these developments we can find plenty of examples:

Exploring Innovation Space

Innovation in terms of what we offer the world — our products and services. Here AI already has a strong presence in everything from toys, through intelligent and interactive services on our phones, to advanced weapon systems.

And it’s the same story if we look at process innovation — changes in the ways we create and deliver whatever it is we offer. AI is embedded in automated and self-optimizing control systems for a huge range of tasks from mining, through manufacturing and out to service delivery.

Position innovation is another dimension where we innovate in opening up new or under-served markets, and changing the stories we tell to existing ones. AI has been a key enabler here, helping spot emerging trends, providing detailed market analysis and underpinning so many of the platform businesses which effectively handle the connection between multi-sided markets. Think Amazon, Uber, Alibaba or AirBnB and imagine them without the support of AI.

And innovation is possible through rethinking the whole approach to what we do, coming up with new business models — rethinking the underlying value and how it might be delivered. Think of Spotify, Netflix and many others changing the way we consume and enjoy our entertainment. Once again AI steps forward as a key enabler.

AI is already a 360-degree solution looking for problems to attach itself to. Importantly, this isn’t just in the commercial world; the power of AI is also being harnessed to enable social innovation in many different ways.

But perhaps the real question is not about AI-enabled innovations but about how it affects innovators — and the organizations employing them. By now we know that innovation isn’t some magical force that strikes blindly in the light-bulb moment. It’s a process which can be organized and managed so that we are able to repeat the trick. And after over 100 years of research and documenting hard-won experience, we know the kind of things we need to put in place — how to manage innovation. It’s reached the point where we can codify it into an international standard — ISO 56001 — and use this as a template to check the ways in which we build and operate our innovation management systems.

So how will AI affect this — and, more to the point, how is it already doing so? Let’s take our helicopter and look down on where and how AI is playing a role in the key areas of innovation management systems.

Typically the ‘front end’ of innovation involves various kinds of search activity, picking up strong and weak signals about needs and opportunities for change. And this kind of exploration and forecasting is something which AI has already shown itself to be very good at — whether in the search for new protein forms or the generation of ideas for consumer products.

Frank Piller’s research team published an excellent piece last year describing their exploration of this aspect of innovation. They looked at the potential which AI offered and tested their predictions by tasking ChatGPT with a number of prompts based on the needs of a fictitious outdoor activities company. They had it monitoring and picking up on trends, scraping online communities for early warning signals about new consumer themes and, crucially, actually doing idea generation to come up with new product concepts. Their results mimic many other studies which suggest that AI is very good at this — in fact, as Mollick reports, it often does the job better than humans.

Of course finding opportunities is only the start of the innovation process; a key next stage is some kind of strategic selection. Out of all the possibilities of what we could do, what are we going to do and why? Limited resources mean we have to make choices — and the evidence is that AI is pretty helpful here too. It can explore and compare alternatives, make better bets and build more viable business models to take emerging value propositions forward. (At least in the test case where it competed against MBA students…!)

Innovation Process John Bessant

And then we are in the world of implementation, the long and winding road to converting our value proposition into something which will actually work and be wanted. Today’s agile innovation involves a cycle of testing, trial-and-error learning, gradually pivoting and homing in on what works, and building from that. And once again AI is good at this — not least because it’s at the heart of how it does what it does. There’s a clue in the label: machine learning is all about deploying different learning and improvement strategies. AI can carry out fast experiments and focus in, it can simulate markets, and it can treat many of the influences on adoption as probabilistic variables which it can work with.

Of course, launching a successful version of a value proposition converted to a viable solution is still only half the innovation journey. To have impact we need to scale — but here again AI is likely to change the game. Much of the scaling journey involves understanding and configuring your solution to match the high variability across populations and accelerate diffusion. We know a lot about what influences this (not least thanks to the extensive work of Everett Rogers), and AI has particular capabilities in making sense of the preferences and predilections of populations through studying big datasets. Its record in persuasion in fields like election campaigning suggests it has the capacity to enhance our ability to influence the innovation adoption decision process.

Scaling also involves complementary assets — the ‘who else?’ and ‘what else?’ which we need to have impact at scale. We need to assemble value networks, ecosystems of co-operating stakeholders — but to do this we need to be able to make connections: finding potential partners, forming relationships and getting the whole system to perform with emergent properties, where the whole is greater than the sum of the parts.

And here too AI has a growing track record in enabling recombinant innovation: cross-linking, connecting and making sense of patterns, even if we humans can’t always see them.

So far, so disturbing — at least if you are a practicing innovation manager looking over your shoulder at the AI competition rapidly catching up. But what about the bigger picture, the idea of developing and executing an innovation strategy? Here our concern is with the long term: managing the process of accumulating competencies and capabilities to create long-term competitiveness in volatile and unpredictable markets.

It involves being able to imagine and explore different options and make decisions based on the best use of resources and the likely fit with a future world. Which is, once again, the kind of thing which AI has shown itself to be good at. It’s moved a long way from playing chess and winning by brute calculating force. Now it can beat world champions at complex games of strategy like Go and win poker tournaments, bluffing with the best of them to sweep the pot.

Artificial Intelligence Poker Player

So what are we left with? In many ways it takes us right back to basics. We’ve survived as a species on the back of our imaginations — we’re not big or fast, or able to fly, but we are able to think. And our creativity has helped us devise and share tools and techniques, to innovate our way out of trouble. Importantly we’ve learned to do this collectively — shared creativity is a key part of the puzzle.

We’ve seen this throughout history; the recent response to the Covid-19 pandemic provides yet another illustration. In the face of crisis we can work together and innovate radically. It’s something we see in the humanitarian innovation world and in many other crisis contexts. Innovation benefits from more minds on the job.

So one way forward is not to wring our hands, say that the game is over, and step back to let the AI take over. Rather it points towards us finding ways of working with it — as Mollick’s book title suggests, learning to treat it as a ‘co-intelligence’. Different, certainly, but often in complementary ways. Diversity has always mattered in innovation teams — so maybe by recruiting AI to our team we amplify that effect. There’s enough to do in meeting the challenge of managing innovation against a background of uncertainty; it makes sense to take advantage of all the help we can get.

AI may seem to point to a direction in which our role becomes superfluous — the ‘no-brain needed’ option. But we’re also seeing real possibilities for it to become an effective partner in the process.


Image credits: Dall-E via Microsoft CoPilot, John Bessant

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

AI Can Help Attract, Retain and Grow Customer Relationships

GUEST POST from Shep Hyken

How do you know what your customers want if they don’t tell you? It’s more than sending surveys and interpreting data. Joe Tyrrell is the CEO of Medallia, a company that helps its customers tailor experiences through “intelligent personalization” and automation. I had a chance to interview him on Amazing Business Radio and he shared how smart companies are using AI to build and retain customer relationships. Below are some of his comments followed by my commentary:

  • The generative AI momentum is so widespread that 85% of executives say the technology will be interacting directly with customers in the next two years. AI has been around for longer than most people realize. When a customer is on a website that makes suggestions, when they interact with a chatbot or get the best answers to frequently asked questions, they are interacting with AI-infused technology, whether they know it or not.
  • While most executives want to use AI, they don’t know how they want to use it, the value it will bring and the problems it will solve. In other words, they know they want to use it, but don’t know how (yet). Tyrrell says, “Most organizations don’t know how they are going to use AI responsibly and ethically, and how they will use it in a way that doesn’t introduce unintended consequences, and even worse, unintended bias.” There needs to be quality control and oversight to ensure that AI is meeting the goals and intentions of the company or brand.
  • Generative AI is different from traditional AI. According to Tyrrell, the nature of generative AI is to, “Give me something in real time while I’m interacting with it.” In other words, it’s not just finding answers. It’s communicating with me, almost like human-to-human. When you ask it to clarify a point, it knows exactly how to respond. This is quite different from a traditional search bar on a website—or even a Google search.
  • AI’s capability to personalize the customer experience will be the focus of the next two years. Based on the comment about how AI technology currently interacts with customers, I asked Tyrrell to be more specific about how AI will be used. His answer was focused on personalization. The data we extract from multiple sources will allow for personalization like never before. According to Tyrrell, 82% of consumers say a personalized experience will influence which brand they end up purchasing from in at least half of all shopping situations. The question isn’t whether a company should personalize the customer experience. It is what happens if they don’t.
  • Personalization isn’t about being seen as a consumer, but as a person. That’s the goal of personalization. Medallia’s North Star, which guides all its decisions and investments, is its mission to personalize every customer experience. What makes this a challenge is the word every. If customers experience this one time but the next time the brand acts as if they don’t recognize them, all the work from the previous visit along with the credibility built with the customer is eroded.
  • The next frontier of AI is interpreting social feedback. Tyrrell is excited about Medallia’s future focus. “Surveys may validate information,” says Tyrrell, “but it is often what’s not said that can be just as important, if not even more so.” Tyrrell talked about Medallia’s capability to look everywhere outside of surveys, social media comments, reviews and ratings, the channels where customers traditionally express themselves. There is behavioral feedback, which Tyrrell refers to as social feedback, not to be confused with social media feedback. Technology can track customer behavior on a website. What pages do they spend the most time on? How do they use the mouse to navigate the page? Tyrrell says, “Wherever people are expressing themselves, we capture the information, aggregate it, translate it, interpret it, correlate it and then deliver insights back to our customers.” This isn’t about communicating with customers about customer support issues. It’s mining data to understand customers and make products and experiences better.

Tyrrell’s insights emphasize the opportunities for AI to support the relationship a company or brand has with its customers. The future of customer engagement will be about an experience that creates customer connection. Even though technology is driving the experience, customers appreciate being known and recognized when they return. Tyrrell and I joked about the theme song from the TV sitcom Cheers, which debuted in 1982 and lasted 11 seasons. But it really isn’t a joke at all. It’s what customers want, and it’s so simple. As the song title suggests, customers want to go to a place Where Everybody Knows Your Name.

Image Credits: Unsplash


Creating Effective Digital Teams

GUEST POST from Howard Tiersky

Creating digital products is a multi-disciplinary process, blending creativity, engineering, strategy, customer support, legal and regulatory concerns, and more. A major challenge faced by large enterprises and global brands undergoing a digital transformation is how to structure their teams. Specifically, they need to answer the following three questions:

  1. What’s the optimal way to organize the necessary roles and responsibilities?
  2. Which part of the organization should own each capability?
  3. How do we get everyone working together?

The optimal structure for digital teams varies across different organizations. At FROM, we use a base framework that identifies fifteen key roles or competencies that are part of creating and operating most digital properties. Those roles are divided into three conceptual teams: the Digital Business Team, the Digital Technology Team, and the Extended Business Team.

The Digital Business Team

  1. Digital Business Vision Owner: The Business Vision Owner defines the key business measures and objectives for the digital property, including target market segments and their objectives. This “visioneer” makes final decisions on product direction.
  2. Product Management: Product Management owns the product on a day-to-day basis and liaises with other areas to make sure the digital value proposition is realized. They’re responsible for commissioning and reviewing customer research, for developing and maintaining the product roadmap in line with the business vision, and for prioritizing the backlog of changes and improvements.
  3. Program Management: Distinct from the Product Manager, the Program Manager is responsible for owning the long-term plan to achieve the product roadmap, including budgets and resource allocations, and for maintaining the release schedule.
  4. User Interface/User Experience: UI/UX is responsible for the overall look and feel of the digital product. They develop and maintain UI standards to be used as the product is developed, are involved in user testing, and QA new releases.
  5. Content Development: Content Development creates non-campaign, non-marketing editorial content for the site, including articles, instructions, and FAQ or help content. Their job is to create content that’s easy to understand and consistent with the brand and voice of the product or site.

The Digital Technology Team

  1. Front End Development: Front End Development selects frameworks and defines front-end coding standards for any technologies that will be used. They’re also responsible for writing code that will execute in the browser, such as HTML, HTML5, JavaScript, and mobile code (e.g., Objective-C). Front End Development drives requirements for back-end development teams, to ensure the full user experience can be implemented.
  2. Back End Development: Back End Development manages core enterprise systems, including inventory, financial, and CRM. They’re responsible for exposing, as web services, the capabilities that are needed for front-end development. They’re responsible for developing and enforcing standards to protect the integrity of those enterprise systems, as well as reviewing requests for and implementing new capabilities.
  3. Data: Data develops and maintains enterprise and digital specific data models, managing data, and creating and maintaining plans for data management and warehousing. They monitor the health of databases, expose services for data access, and manage data architecture.
  4. Infrastructure: Infrastructure maintains the physical hardware used for applications and data. They maintain disaster and business continuity programs and monitor the scalability and reliability of the physical infrastructure. They also monitor and proactively manage the security of the infrastructure environment.
  5. Quality Assurance: Quality Assurance creates and maintains QA standards for code in production, develops automated and manual test scripts, and executes any integration, browser, or performance testing scenarios. They also monitor site metrics to identify problems proactively. (It should be noted that, though you want dedicated QA professionals on your team, QA is everyone’s responsibility!)

The Extended Business Team

  1. Marketing: Marketing is responsible for some key digital operations. They develop offers and campaigns to drive traffic, manage email lists and email execution, and manage and maintain the CRM system.
  2. Product and Pricing: Product and Pricing responsibility can vary, depending on industry and type of digital property. When appropriate, they develop, license or merchandise anything sold on the site. They set pricing and drive requirements for aligning digital features with any new products based on those products’ parameters.
  3. Operations: Operations is responsible for fulfillment of the value proposition. For commerce sites, for example, this includes picking, packing and shipping orders. For something like a digital video aggregation site, responsibilities include finding, vetting and uploading new video content.
  4. Business Development: Business Development is focused on creating partnerships that increase traffic and sales, or finding new streams of revenue.
  5. Customer Support: Customer support is responsible for maintaining knowledge of digital platforms, policies, and known issues and solutions. They assist customers with problems and questions and track customer interactions to report on trends and satisfaction levels.

How these teams and the roles within them fit together varies from company to company. However, it’s good practice to review this model to see, first, if you have these key roles represented in your organization. Then, make sure to create well-defined responsibilities and processes, and finally, look at how they function together, to see if they’re organized in the most effective manner. If your Digital Business, Digital Technology, and Extended Business teams are in sync, all your projects will benefit.

This article originally appeared on the Howard Tiersky blog
Image Credits: Pixabay


Is Disruption About to Claim a New Victim?

Kodak. Blockbuster. Google?

GUEST POST from Robyn Bolton

You know the stories.  Kodak developed a digital camera in the 1970s, but its images weren’t as good as film images, so it ended the project.  Decades later, that decision ended Kodak.  Blockbuster was given the chance to buy Netflix but declined due to its paltry library of titles (and the absence of late fees).  A few years later, that decision led to Blockbuster’s decline and demise.  Now, in the age of AI, disruption may be about to claim another victim – Google.

A very brief history of Google’s AI efforts

In 2017, Google Research invented the Transformer, a neural network architecture that could be trained to read sentences and paragraphs, pay attention to how the words relate to each other, and predict the words that would come next.

In 2020, Google developed LaMDA, or Language Model for Dialogue Applications, using Transformer-based models trained on dialogue and able to chat. 

Three years later, Google began developing its own conversational AI using its LaMDA system. The only wrinkle is that OpenAI launched ChatGPT in November 2022. 

Now to The Financial Times for the current state of things:

“In early 2023, months after the launch of OpenAI’s groundbreaking ChatGPT, Google was gearing up to launch its competitor to the model that underpinned the chatbot.

…

The search company had been testing generative AI software internally for several months by then.  But as the company rallied its resources, multiple competing models emerged from different divisions within Google, vying for internal attention.”

That last sentence is worrying.  Competition in the early days of innovation can be great because it pushes people to think differently, ask tough questions, and take risks. But, eventually, one solution should emerge as superior to the others so you can focus your scarce resources on refining, launching, and scaling it. Multiple models “vying for internal attention” so close to launch indicate that something isn’t right and is about to go very wrong.

“None was considered good enough to launch as the singular competitor to OpenAI’s model, known as ChatGPT-4.  The company was forced to postpone its plans while it tried to sort through the scramble of research projects.  Meanwhile, it pushed out a chatbot, Bard, that was widely viewed to be far less sophisticated than ChatGPT.”

Nothing signals the threat of disruption more than “good enough.”  If Google, like most incumbent companies, defined “good enough” as “better than the best thing out there,” then it’s no surprise that they wouldn’t want to launch anything. 

What’s weird is that instead of launching one of the “not good enough” models, they launched Bard, an obviously inferior product. Either the other models were terrible (or non-functional), or different people were making different decisions to achieve different definitions of success.  Neither is a good sign.

“When Google’s finished product, Gemini, was finally ready nearly a year later, it came with flaws in image generation that CEO Sundar Pichai called ‘completely unacceptable’ – a let-down for what was meant to be a demonstration of Google’s lead in a key new technology.”

“A let-down” is an understatement.  You don’t have to be first.  You don’t have to be the best.  But you also shouldn’t embarrass yourself.  And you definitely shouldn’t launch things that are “completely unacceptable.”

What happens next?

Disruption takes a long time and doesn’t always mean death.  BlackBerry still exists, and integrated steel mills, one of Clayton Christensen’s original examples of disruption, still operate.

AI, LLMs, and LaMDAs are still in their infancy, so it’s too early to declare a winner.  Market creation and consumer behavior change take time, and Google certainly has the knowledge and resources to stage a comeback.

Except that knowledge may be their undoing.  Companies aren’t disrupted because their executives are idiots. They’re disrupted because their executives focus on extending existing technologies and business models to better serve their best customers with higher-profit offerings.  In fact, Professor Christensen often warned that one of the first signs of disruption was a year of record profits.

In 2021, Google posted a profit of $76.033 billion, an 88.81% increase from the previous year.
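Those two numbers are enough to back out the prior year’s figure, a quick sanity check on the “record profits” warning (figures as quoted above, rounding mine):

```python
# Back out the prior-year profit implied by the figures quoted above:
# $76.033 billion in 2021, described as an 88.81% year-over-year increase.
profit_2021 = 76.033   # billions of USD
growth = 0.8881        # 88.81% increase

profit_2020 = profit_2021 / (1 + growth)
print(f"Implied 2020 profit: ${profit_2020:.2f} billion")  # about $40.27 billion
```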

2022 and 2023 profits have both been lower.

Image credit: Unsplash


Humans Wanted for the Decade’s Biggest Innovation Challenges

GUEST POST from Greg Satell

Every era is defined by the problems it tackles. At the beginning of the 20th century, harnessing the power of internal combustion and electricity shaped society. In the 1960s there was the space race. Since the turn of this century, we’ve learned how to decode the human genome and make machines intelligent.

None of these were achieved by one person or even one organization. In the case of electricity, Faraday and Maxwell established key principles in the early and mid 1800s. Edison, Westinghouse and Tesla came up with the first applications later in that century. Scores of people made contributions for decades after that.

The challenges we face today will be fundamentally different because they won’t be solved by humans alone, but through complex human-machine interactions. That will require a new division of labor in which the highest-level skills won’t be things like the ability to retain information or manipulate numbers, but the ability to connect and collaborate with other humans.

Making New Computing Architectures Useful

Technology over the past century has been driven by a long succession of digital devices. First vacuum tubes, then transistors and finally microchips transformed electrical power into something approaching an intelligent control system for machines. That has been the key to the electronic and digital eras.

Yet today that smooth procession is coming to an end. Microchips are hitting their theoretical limits and will need to be replaced by new computing paradigms such as quantum computing and neuromorphic chips. The new technologies will not be digital, but will work fundamentally differently from what we’re used to.

They will also have fundamentally different capabilities and will be applied in very different ways. Quantum computing, for example, will be able to simulate physical systems, which may revolutionize sciences like chemistry, materials research and biology. Neuromorphic chips may be thousands of times more energy efficient than conventional chips, opening up new possibilities for edge computing and intelligent materials.

There is still a lot of work to be done to make these technologies useful. To be commercially viable, not only do important applications need to be identified, but much like with classical computers, an entire generation of professionals will need to learn how to use them. That, in truth, may be the most significant hurdle.

Ethics For AI And Genomics

Artificial intelligence, once the stuff of science fiction, has become an everyday technology. We speak into our devices as a matter of course and expect to get back coherent answers. In the near future, we will see autonomous cars and other vehicles regularly deliver products and eventually become an integral part of our transportation system.

This opens up a significant number of ethical dilemmas. If given a choice to protect a passenger or a pedestrian, which should be encoded into the software of an autonomous car? Who gets to decide which factors are encoded into systems that make decisions about our education, whether we get hired or if we go to jail? How will these systems be trained? We all worry about who’s educating our kids, but who’s teaching our algorithms?

Powerful genomics techniques like CRISPR open up further ethical dilemmas. What are the guidelines for editing human genes? What are the risks of a mutation inserted in one species jumping to another? Should we revive extinct species, Jurassic Park style? What are the potential consequences?

What’s striking about the moral and ethical issues of both artificial intelligence and genomics is that they have no precedent, save for science fiction. We are in totally uncharted territory. Nevertheless, it is imperative that we develop a consensus about what principles should be applied, in what contexts and for what purpose.

Closing A Perpetual Skills Gap

Education used to be something that you underwent in preparation for your “real life.” Afterwards, you put away the schoolbooks and got down to work, raised a family and never really looked back. Even today, Pew Research reports that nearly one in four adults in the US did not read a single book last year.

Today technology is making many things we learned obsolete. In fact, a study at Oxford estimated that nearly half of the jobs that exist today will be automated in the next 20 years. That doesn’t mean that there won’t be jobs for humans to do; in fact, we are in the midst of an acute labor shortage, especially in manufacturing, where automation is most pervasive.

Yet just as advanced technologies are eliminating the need for skills, they are also increasingly able to help us learn new ones. A number of companies are using virtual reality to train workers and finding that it can boost learning efficiency by as much as 40%. IBM, with the Rensselaer Polytechnic Institute, has recently unveiled a system that helps you learn a new language like Mandarin.

Perhaps the most important challenge is a shift in mindset. We need to treat education as a lifelong need that extends long past childhood. If we only retrain workers once their industry has become obsolete and they’ve lost their jobs, then we are needlessly squandering human potential, not to mention courting an abundance of misery.

Shifting Value To Humans

The industrial revolution replaced the physical labor of humans with that of machines. The result was often mind-numbing labor in factories. Yet further automation opened up new opportunities for knowledge workers who could design ways to boost the productivity of both humans and machines.

Today, we’re seeing a similar shift from cognitive to social skills. Go into a highly automated Apple Store, to take just one example, and you don’t see a futuristic robot dystopia, but a small army of smiling attendants on hand to help you. The future of technology always seems to be more human.

In much the same way, when I talk to companies implementing advanced technologies like artificial intelligence or cloud computing, the one thing I constantly hear is that the human element is often the most important. Unless you can shift your employees to higher level tasks, you miss out on many of the most important benefits.

What’s important to consider is that when a task is automated, it is also democratized and value shifts to another place. So, for example, e-commerce devalues the processing of transactions, but increases the value of things like customer service, expertise and resolving problems with orders, which is why we see all those smiling faces when we walk into an Apple Store.

That’s what we often forget about innovation. It’s essentially a very human endeavor and, for it to count as true progress, humans always need to be at the center.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay







How To Create the IKEA Effect

A Customer Experience That Will Be Appreciated


GUEST POST from Shep Hyken

When reaching out for customer service and support, most customers still prefer to communicate with a company or brand via the traditional phone call. That said, more and more customers are attracted to and embracing a do-it-yourself customer service experience, known as self-service.

I had a chance to sit down with Venk Korla, the president and CEO of HGS Digital, which recently released its HGS Buyers Insight Report. We talked about investments CX (customer experience) leaders are making into AI and digital self-support and the importance of creating a similar experience for employees, which we’ll get to in a moment. But first, I want to share some comments Korla made about comparing customer service to an IKEA experience.

The IKEA Effect

The IKEA effect was identified and named by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale and Dan Ariely of Duke, who published the results of three studies in 2011. A short description of the IKEA effect is that some customers not only enjoy putting furniture together themselves but also find more value in the experience than if a company delivered pre-assembled furniture.

“It’s the same in the customer service/support world,” Korla said. “Customers who easily resolve their issues or have their questions answered on a brand’s self-service portal, either through traditional FAQ pages on a website or something more advanced, such as AI-powered solutions, will not only be happy with the experience but will also be grateful to the company for providing such an easy, fulfilling experience.”

To support this notion, our customer service research (sponsored by RingCentral) found that even with the phone being the No. 1 way customers like to interact with brands, 26% of customers stopped doing business with a company or brand because self-service options were not provided. (Note: Younger generations prefer self-service solutions more than older generations.) As the self-service experience improves, more will adopt it as their go-to method of getting questions answered and problems resolved.

The Big Bet On AI

In the next 18 months, CX decision-makers are betting big on artificial intelligence. The research behind the HGS Buyers Insight Report found that 37% of the leaders surveyed will deploy customer-facing chatbots, 30% will use generative AI or text-speech solutions to support employees taking care of customers, and 28% will invest in and deploy robotic process automation. All of these investments are meant to improve both the customer and employee experience.

While Spending On CX Is A Top Priority, Spending On Employee Experience (EX) Is Lagging

Korla recognizes the need to support not only customers with AI, but also employees. Companies betting on AI must also consider employees as they invest in technology to support customers. Just as a customer uses an AI-powered chatbot to communicate using natural language, the employee interacting directly with the customer should be able to use similar tools.

Imagine the customer support agent receives a call from a customer with a difficult question. As the customer describes the issue, the agent inputs notes into the computer. Within seconds, the answer to the question appears on the agent’s screen. In addition, the AI tool shares insights about the customer, such as their buying patterns, how long they have been a customer, what they’ve called about in the past and more. At this point, a good agent can interpret the information and communicate it in the style that best suits the customer.

Korla explains that the IKEA effect is just as powerful for employees as it is for customers. When employees are armed with the right tools to do their jobs effectively, allowing them to easily support customers and solve their most difficult problems, they are more fulfilled. In the HGS report, 54% of CX leaders surveyed cited talent attraction and retention as a top investment priority. So, for the company that invests in EX tools—specifically AI and automation—the result translates into lower turnover and more engaged employees.

Korla’s insights highlight the essence of the IKEA effect in creating empowering customer experiences and employee experiences. He reminds us that an amazing CX is supported by an amazing EX. As your company prepares to invest in AI and other self-service tools for your customers, consider an investment in similar tools for your employees.

Download the HGS Buyers Insight Report to find out what CX decision-makers will invest in and focus on for 2024 and beyond.

Image Credits: Pixabay
This article originally appeared on Forbes.com







Department Of Energy Programs Helping to Create an American Manufacturing Future


GUEST POST from Greg Satell

In the recession that followed the dotcom crash in 2000, the United States lost five million manufacturing jobs and, while there has been an uptick in recent years, all indications are that they may never be coming back. Manufacturing, perhaps more than any other sector, relies on deep networks of skills and assets that tend to be highly regional.

The consequences of this loss are deep and pervasive. Losing a significant portion of our manufacturing base has led not only to economic vulnerability, but to political polarization. Clearly, it is important to rebuild our manufacturing base. But to do that, we need to focus on new, more advanced technologies.

That’s the mission of the Advanced Manufacturing Office (AMO) at the Department of Energy. By providing a crucial link between the cutting edge science done at the National Labs and private industry, it has been able to make considerable progress. As the collaboration between government scientists and private industry widens and deepens over time, US manufacturing may well be revived.

Linking Advanced Research To Private Industry

The origins of the Department of Energy date back to the Manhattan Project during World War II. The immense project was, in many respects, the start of “big science.” Hundreds of top researchers, used to working in small labs, traveled to newly established outposts to collaborate at places like Los Alamos, New Mexico and Oak Ridge, Tennessee.

After the war was over, the facilities continued their work and similar research centers were established to expand the effort. These National Labs became the backbone of the US government’s internal research efforts. In 1977, the National Labs, along with a number of other programs, were combined to form the Department of Energy.

One of the core missions of the AMO is to link the research done at the National Labs to private industry, and the Lab Embedded Entrepreneurship Programs (LEEP) have been particularly successful in this regard. Currently, there are four such programs: Cyclotron Road, Chain Reaction Innovations, West Gate and Innovation Crossroads.

I was able to visit Innovation Crossroads at Oak Ridge National Laboratory and meet the entrepreneurs in its current cohort. Each is working to transform a breakthrough discovery into a market changing application, yet due to technical risk, would not be able to attract funding in the private sector. The LEEP program offers a small amount of seed money, access to lab facilities and scientific and entrepreneurial mentorship to help them get off the ground.

That’s just one of the ways that the AMO opens up the resources of the National Labs. It also helps businesses get access to supercomputing resources (5 out of the 10 fastest computers in the world are located in the United States, most of them at the National Labs) and conducts early stage research to benefit private industry.

Leading Public-Private Consortia

Another area in which the AMO supports private industry is through taking a leading role in consortia, such as the Manufacturing Institutes that were set up to give American companies a leg up in advanced areas such as clean energy, composite materials and chemical process intensification.

The idea behind these consortia is to create hubs that provide a critical link with government labs, top scientists at academic universities and private companies looking to solve real-world problems. It both helps firms advance in key areas and allows researchers to focus their work on where they will have the greatest possible impact.

For example, the Critical Materials Institute (CMI) was set up to develop alternatives to materials that are subject to supply disruptions, such as the rare earth elements that are critical to many high tech products and are largely produced in China. A few years ago it developed, along with several National Labs and Eck Industries, an advanced alloy that can replace more costly materials in components of advanced vehicles and aircraft.

“We went from an idea on a whiteboard to a profitable product in less than two years and turned what was a waste product into a valuable asset,” Robert Ivester, Director of the Advanced Manufacturing Office told me.

Technology Assistance Partnerships

In 2011, the International Organization for Standardization released its ISO 50001 guidelines. Like previous guidelines that focused on quality management and environmental impact, ISO 50001 recommends best practices to reduce energy use. These can benefit businesses through lower costs and result in higher margins.

Still, for harried executives facing cutthroat competition and demanding customers, figuring out how to implement new standards can easily get lost in the mix. So a third key role that the AMO plays is to assist companies who wish to implement new standards by providing tools, guides and access to professional expertise.

The AMO offers similar support for a number of critical areas, such as prototype development and also provides energy assessment centers for firms that want to reduce costs. “Helping American companies adopt new technology and standards helps keep American manufacturers on the cutting edge,” Ivester says.

“Spinning In” Rather Than Spinning Out

Traditionally we think of the role of government in business largely in terms of regulation. Legislatures pass laws and watchdog agencies enforce them so that we can have confidence in the food we eat, the products we buy and the medicines that are supposed to cure us. While that is clearly important, we often overlook how government can help drive innovation.

Inventions spun out of government labs include the Internet, GPS and laser scanners, just to name a few. Many of our most important drugs were also originally developed with government funding. Still, traditionally the work has mostly been done in isolation and only later offered to private companies through licensing agreements.

What makes the Advanced Manufacturing Office different than most scientific programs is that it is more focused on “spinning in” private industry rather than spinning out technologies. That enables executives and entrepreneurs with innovative ideas to power them with some of the best minds and advanced equipment in the world.

As Ivester put it to me, “Spinning out technologies is something that the Department of Energy has traditionally done. Increasingly, we want to spin ideas from industry into our labs, so that companies and entrepreneurs can benefit from the resources we have here. It also helps keep our scientists in touch with market needs and helps guide their research.”

Make no mistake, innovation needs collaboration. Combining the ideas from the private sector with the cutting edge science from government labs can help American manufacturing compete in the 21st century.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay







Balancing Artificial Intelligence with the Human Touch

GUEST POST from Shep Hyken

As AI and ChatGPT-type technologies grow in capability and ease of use and become more cost-effective, more and more companies are making their way to the digital experience. Still, the best companies know better than to switch to 100% digital.

I had a chance to interview Nicole Kyle, managing director and co-founder of CMP Research (Customer Management Practice), for Amazing Business Radio. Kyle’s team provides research and advisory services for the contact center industry and conducts some of the most critical research on the topic of self-service and digital customer service. I first met Kyle at CCW, the largest contact center conference in the industry. I’ve summarized seven of her key observations below, followed by my commentary:

  1. The Amazon Effect has trained customers to expect a level of service that’s not always in line with what companies and brands can provide. This is exactly what’s happening with customer expectations. They no longer compare you just to your direct competitors but to the best experience they’ve had from any company. Amazon and other rockstar brands focused on CX (customer experience) have set the bar higher for all companies in all industries.
  2. People’s acceptance and eventual normalization of digital experiences accelerated during the pandemic, and they have become a way of life for many customers. The pandemic forced customers to accept self-service. For example, many customers never went online to buy groceries, vehicles or other items that were traditionally shopped for in person. Once customers got used to it, as the pandemic became history, many never returned to the “old way” of doing business. At a minimum, many customers expect a choice between the two.
  3. Customers have new priorities and are placing a premium on their time. Seventy-two percent of customers say they want to spend less time interacting with customer service. They want to be self-sufficient in managing typical customer service issues. In other words, they want self-service options that will get them answers to their questions efficiently and in a timely manner. Our CX research differs, putting that figure at less than half of the 72% number. When I asked Kyle about the discrepancy, she responded, “Customers who have a poor self-service experience are less likely to return to self-service. While there is an increase in preference, you’re not seeing the adoption because some companies aren’t offering the type of self-service experience the customer wants.”
  4. The digital dexterity of society is improving! That phrase is a great way to describe self-service adoption, specifically how customers view chatbots or other ChatGPT-type technologies. Kyle explained, “Digital experiences became normalized during the pandemic, and digital tools, such as generative AI, are now starting to help people in their daily lives, making them more digitally capable.” That translates into customers’ higher acceptance and desire for digital support and CX.
  5. Many customers can tell the difference between talking to an AI chatbot and a live chat with a human agent due to their ability to access technology and the quality of the chatbot. However, customers are still willing to use the tools if the results are good. When it comes to AI interacting with customers via text or voice, don’t get hung up on how lifelike (or not) the experience is as long as it gets your customers what they want quickly and efficiently.
  6. The No. 1 driver of satisfaction (according to 78% of customers surveyed) in a self-service experience is personalization. Personalization is more important than ever in customer service and CX. So, how do you personalize digital support? The “machine” must not only be capable of delivering the correct answers and solutions, but it must also recognize the existing customer, remember issues the customer had in the past, make suggestions that are specific to the customer and provide other customized, personalized approaches to the experience.
  7. With increased investments in self-service and generative AI, 60% of executives say they will reduce the number of frontline customer-facing jobs. But, the good news is that jobs will be created for employees to monitor performance, track data and more. I’m holding firm in my predictions over the past two years that while there may be some job disruption, the frontline customer support agent job will not be eliminated. To Kyle’s point, there will be job opportunities related to the contact center, even if they are not on the front line.

Self-service and automation are a balancing act. The companies that have gone “all in” and eliminated human-to-human customer support have had pushback from customers. Companies that have not adopted newer technologies are frustrating many customers who want and expect self-service solutions. The right balance may differ from one company to the next, but it is critical, and smart leaders will find it and continue to adapt to the ever-changing expectations of their customers.

Image Credits: Unsplash
This article originally appeared on Forbes.com







Bad Questions to Ask When Developing Technology


GUEST POST from Mike Shipulski

I know you’re trying to do something that has never been done before, but when will you be done?

I don’t know. We’ll run the next experiment then decide what to do next. If it works, we’ll do more of that. And if it doesn’t, we’ll do less of that. That’s all we know right now.

I know you’re trying to create something that is new to our industry, but how many will we sell?

I don’t know. Initial interviews with customers made it clear that this is an important customer problem. So, we’re trying to figure out if the technology can provide a viable solution. That’s all we know right now.

No one is asking for that obscure technology. Why are you wasting time working on that?

Well, the voice of the technology and the S-curve analyses suggest the technology wants to move in this direction, so we’re investigating this solution space. It might work and it might not. That’s all we know right now.

Why aren’t you using best practices?

If it hasn’t been done before, there can be no best practice. We prefer to use good practice or emergent practice.

There doesn’t seem like there’s been much progress. Why aren’t you running more experiments?

We don’t know which experiments to run, so we’re taking some time to think about what to do next.

Will it work?

I don’t know.

That new technology may obsolete our most profitable product line. Shouldn’t you stop work on that?

No. If we don’t obsolete our best work, someone else will. Wouldn’t it be better if we did the obsoleting?

How many more people do you need to accelerate the technology development work?

None. Small teams are better.

Sure, it’s a cool technology, but how much will it cost?

We haven’t earned the right to think about the cost. We’re still trying to make it work.

So, what’s your solution?

We don’t know yet. We’re still trying to formulate the customer problem.

You said you’d be done two months ago. Why aren’t you done yet?

I never said we’d be done two months ago. You asked me for a completion date and I could not tell you when we’d be done. You didn’t like that answer so I suggested that you choose your favorite date and put that into your spreadsheet. We were never going to hit that date, and we didn’t.

We’ve got a tight timeline. Why are you going home at 5:00?

We’ve been working on this technology for the last two years. This is a marathon. We’re mentally exhausted. See you tomorrow.

If you don’t work harder, we’ll get someone else to do the technology development work. What do you think about that?

You are confusing activity with progress. We are doing the right analyses and the right thinking and we’re working hard. But if you’d rather have someone else lead this work, so would I.

We need a patented solution. Will your solution be patentable?

I don’t know because we don’t yet have a solution. And when we do have a solution, we still won’t know because it takes a year or three for the Patent Office to make that decision.

So, you’re telling me this might not work?

Yes. That’s what I’m telling you.

So, you don’t know when you’ll be done with the technology work, you don’t know how much the technology will cost, you don’t know if it will be patentable, or who will buy it?

That’s about right.

Image credit: Unsplash







AI Strategy Should Have Nothing to do with AI


GUEST POST from Robyn Bolton

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models” that “identified eight styles that distinguish a culture and can be measured.”  I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy – centralized control with a single-minded mission to win quickly

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer. Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions.  As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist in a single team, function, and company.  Bottom line, the eight cultures are a tool, not a law (and I glossed over a lot of stuff from the report).

Image credit: Wikimedia Commons
