Category Archives: Technology

28 Things I Learned the Hard Way

GUEST POST from Mike Shipulski

  1. If you want to have an IoT (Internet of Things) program, you’ve got to connect your products.
  2. If you want to build trust, give without getting.
  3. If you need someone with experience in manufacturing automation, hire a pro.
  4. If the engineering team wants to spend a year playing with a new technology, before the bell rings for recess ask them what solution they’ll provide and then go ask customers how much they’ll pay and how many they’ll buy.
  5. If you don’t have the resources, you don’t have a project.
  6. If you know how it will turn out, let someone else do it.
  7. If you want to make a friend, help them.
  8. If your products are not connected, you may think you have an IoT program, but you have something else.
  9. If you don’t have trust, you have just what you earned.
  10. If you hire a pro in manufacturing automation, listen to them.
  11. If Marketing has an optimistic sales forecast for the yet-to-be-launched product, go ask customers how much they’ll pay and how many they’ll buy.
  12. If you don’t have a project manager, you don’t have a project.
  13. If you know how it will turn out, teach someone else how to do it.
  14. If a friend needs help, help them.
  15. If you want to connect your products at a rate faster than you sell them, connect the products you’ve already sold.
  16. If you haven’t started building trust, you started too late.
  17. If you want to pull in the delivery date for your new manufacturing automation, instead, tell your customers you’ve pushed out the launch date.
  18. If the VP knows it’s a great idea, go ask customers how much they’ll pay and how many they’ll buy.
  19. If you can’t commercialize, you don’t have a project.
  20. If you know how it will turn out, do something else.
  21. If a friend asks you twice for help, drop what you’re doing and help them immediately.
  22. If you can’t figure out how to make money with IoT, it’s because you’re focusing on how to make money at the expense of delivering value to customers.
  23. If you don’t have trust, you don’t have much.
  24. If you don’t like extreme lead times and exorbitant capital costs, manufacturing automation is not for you.
  25. If the management team doesn’t like the idea, go ask customers how much they’ll pay and how many they’ll buy.
  26. If you’re not willing to finish a project, you shouldn’t be willing to start.
  27. If you know how it will turn out, it’s not innovation.
  28. If you see a friend that needs help, help them ask you for help.

Image credit: Pixabay


Everyone Clear Now on What ChatGPT is Doing?

GUEST POST from Geoffrey A. Moore

Almost a year and a half ago I read Stephen Wolfram’s very approachable introduction to ChatGPT, What is ChatGPT Doing . . . And Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.

First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows:

  1. Ingest an enormous corpus of text from every available digitized source.
  2. While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
  3. Within the confines of each text, record the location of every token relative to every other token.
  4. Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
  5. Feed these probabilities into a neural network to cluster words and build a map of relationships.
  6. Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect).
  7. Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
  8. As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
  9. Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.
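
To make the mechanics of steps 2 through 6 concrete, here is a minimal sketch in Python. It is emphatically not what ChatGPT does internally (a real model uses a neural network over learned token representations rather than a lookup table, and the toy corpus below is invented for illustration), but it shows how merely counting which words follow which, and then sampling from those probabilities, is enough to generate plausible-looking text with no understanding at all.

```python
import random
from collections import defaultdict

# A stand-in for the "enormous corpus" of step 1 (invented for illustration).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Steps 2-4, with words standing in for numeric tokens: record which
# word follows which, and how often.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Turn the raw counts into next-word probabilities.
probs = {prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
         for prev, nxts in counts.items()}

# Steps 5-6, radically simplified: given a prompt word, repeatedly
# sample the next word from the learned distribution, AutoCorrect-style.
def generate(word, length=8):
    out = [word]
    for _ in range(length):
        nxts = probs.get(out[-1])
        if not nxts:
            break
        words, weights = zip(*nxts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look locally grammatical, yet the program attaches no meaning to any of it, which is precisely the point that follows.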

OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, specifically with respect to what I think is the most important point to register about ChatGPT. That point is a simple one: ChatGPT has no idea what it is talking about.

Indeed, ChatGPT has no ideas of any kind — no knowledge or expertise — because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it has been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.

OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.

First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web but has no knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of cross-checking computation, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.

Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.

Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.

Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.

That’s what I think. What do you think?

Image Credit: Pexels


Innovation is Combination

Silicon Valley’s Innovator’s Dilemma – The Atom, the Bit and the Gene

GUEST POST from Greg Satell

Over the past several decades, innovation has become largely synonymous with digital technology. When the topic of innovation comes up, somebody points to a company like Apple, Google or Meta rather than, say, a car company, a hotel or a restaurant. Management gurus wax poetic about the “Silicon Valley way.”

Of course, that doesn’t mean that other industries haven’t been innovative. In fact, there is no shortage of excellent examples of innovation in cars, hotels, restaurants and many other things. Still, the fact remains that for most of recent memory digital technology has moved further and faster than anything else.

This has been largely due to Moore’s Law, our ability to consistently double the number of transistors we’re able to cram onto a silicon wafer. Now, however, Moore’s Law is ending and we’re entering a new era of innovation. Our future will not be written in ones and zeros, but will be determined by our ability to use information to shape the physical world.

The Atom

The concept of the atom has been around at least since the time of the ancient Greek philosopher Democritus. Yet it didn’t take on any real significance until the early 20th century. In fact, the paper Albert Einstein used for his dissertation helped to establish the existence of atoms through a statistical analysis of Brownian motion.

Yet it was the other papers from Einstein’s miracle year of 1905 that transformed the atom from an abstract concept to a transformative force, maybe even the most transformative force in the 20th century. His theory of mass-energy equivalence would usher in the atomic age, while his work on black-body radiation would give rise to quantum mechanics and ideas so radical that even he would refuse to accept them.

Ironically, despite Einstein’s reluctance, quantum theory would lead to the development of the transistor and the rise of computers. These, in turn, would usher in the digital economy, which provided an alternative to the physical economy of goods and services based on things made from atoms and molecules.

Still, the vast majority of what we buy is made up of what we live in, ride in, eat and wear. In fact, information and communication technologies only make up about 6% of GDP in advanced countries, which is what makes the recent revolution in materials science so exciting. We’re beginning to exponentially improve the efficiency of how we design the materials that make up everything from solar panels to building materials.

The Bit

While the concept of the atom evolved slowly over millennia, the bit is one of the rare instances in which an idea seems to have arisen in the mind of a single person with little or no real precursor. Introduced by Claude Shannon in a paper in 1948—incidentally, the same year the transistor was invented—the bit has shaped how we see and interact with the world ever since.

The basic idea was that information isn’t a function of content, but of the absence of ambiguity, which can be broken down to a single unit – a choice between two alternatives. Much like a coin toss, which lacks information while in the air but takes on certainty when it lands, information arises when ambiguity disappears.
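
As a quick illustration (my worked example, not Shannon’s own notation), the amount of ambiguity a choice resolves can be computed directly. A fair coin toss resolves exactly one bit; a heavily biased coin, whose outcome is largely predictable, resolves less; a certain outcome resolves nothing.

```python
import math

# Shannon entropy: the ambiguity of a choice, measured in bits.
def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))  # fair coin: exactly 1.0 bit resolved when it lands
print(entropy([0.9, 0.1]))  # biased coin: ~0.47 bits; less ambiguity to resolve
print(entropy([1.0]))       # foregone conclusion: 0.0 bits; no information at all
```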

He called this unit a “binary digit,” or “bit,” and much like the pound, quart, meter or liter, it has become such a basic unit of measurement that it’s hard to imagine our modern world without it. Shannon’s work would soon combine with Alan Turing’s concept of a universal computer to create the digital computer.

Now the digital revolution is ending and we will soon be entering a heterogeneous computing environment that will include things like quantum, neuromorphic and biological computing. Still, Claude Shannon’s simple idea will remain central to how we understand how information interacts with the world it describes.

The Gene

The gene was first discovered by an obscure Austrian monk named Gregor Mendel, but in one of those strange peculiarities of history, his work went almost totally unnoticed until the turn of the century. Even then, no one really knew what a gene was or how it functioned. The term was, for the most part, just an abstract concept.

That changed abruptly when James Watson and Francis Crick published their article in the scientific journal Nature. In a single stroke, the pair were able to show that genes were, in fact, made up of a molecule called DNA and that they operated through a surprisingly simple code made up of A, T, C and G.

Things really began to kick into high gear when the Human Genome Project was completed in 2003. Since then the cost to sequence a genome has been falling faster than the rate of Moore’s Law, which has unleashed a flurry of innovation. Jennifer Doudna’s discovery of CRISPR in 2012 revolutionized our ability to edit genes. More recently, mRNA technology has helped develop COVID-19 vaccines in record time.

Today, we have entered a new era of synthetic biology in which we can manipulate the genetic code of A, T, C and G almost as easily as we can the bits in the machines that Turing imagined all those years ago. Researchers are also exploring how we can use genes to create advanced materials and maybe even create better computers.

Innovation Is Combination

The similarity of the atom, the bit and the gene as elemental concepts is hard to miss and they’ve allowed us to understand our universe in a visceral, substantial way. Still, they arose in vastly different domains and have been largely applied to separate and distinct fields. In the future, however, we can expect vastly greater convergence between the three.

We’ve already seen glimpses of this. For example, as a graduate student Charlie Bennett was a teaching assistant for James Watson. Yet in between his sessions instructing undergraduates in Watson’s work on genes, he took an elective course on the theory of computing in which he learned about the work of Shannon and Turing. That led him to go work for IBM and become a pioneer in quantum computing.

In much the same way, scientists are applying powerful computers to develop new materials and design genetic sequences. Some of these new materials will be used to create more powerful computers. In the future, we can expect the concepts of the atom, the bit and the gene to combine and recombine in exciting ways that we can only begin to imagine today.

The truth is that innovation is combination, and it always has been. The past few decades, in which one technology so thoroughly dominated that it was able to function largely in isolation from other fields, were an anomaly. What we are beginning to see now is, in large part, a reversion to the mean, where the most exciting work will be interdisciplinary.

This is Silicon Valley’s innovator’s dilemma. Nerdy young geeks will no longer be able to prosper coding blithely away in blissful isolation. It is no longer sufficient to work in bits alone. Increasingly we need to combine those bits with atoms and genes to create significant value. If you want to get a glimpse of the future, that’s where to look.

— Article courtesy of the Digital Tonto blog
— Image credits: Pixabay


The Runaway Innovation Train

GUEST POST from Pete Foley

In this blog, I return to and expand on a paradox that has concerned me for some time: Are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a two-edged sword. It brings huge benefits, but also commensurate risks.

Ostensibly, change is good. Because of technology, today we mostly live more comfortable lives, and enjoy superior health, longevity, and increased leisure and abundance compared to our ancestors.

Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course, innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human Stone Age lasted about 300,000 years before ending around 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards, things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison to the past: literally from horse-drawn carts delivering milk when she was a child in poor SE London, to today’s world of self-driving cars and the exploration of our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum has experienced.

The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than new experiences. It simply takes more mental effort to manage new stuff than familiar stuff. As a result we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.

Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and more remotely, than our ancestors ever dreamed of. The internet gives us unprecedented access to both information and connectivity, but is also a source of misinformation and manipulation.

The Abundance Dichotomy: Innovation increases abundance, but it’s arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequalities in the distribution of the ‘wealth’ it creates. Behavioral science has shown us consistently that humans make far more relative than absolute judgments. Being better off than our ancestors actually doesn’t do much for us. Instead we are far more interested in being better off than our peers, neighbors or the people we compare ourselves to on Instagram. And therein lies yet another challenge. Social media means we now compare ourselves to far more people than past generations did, meaning that the standards we judge ourselves against are higher than ever before.

Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag the innovations themselves. All too often, we only become aware of them when they have already become a significant problem. Climate change is of course the poster child for this, as a huge unanticipated consequence of the industrial revolution. The same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.

The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few if any of us really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with less and less understanding of the direct, let alone the unintended, consequences of these complex changes!

The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base inevitably leads to multiple further innovations. The connectivity and information access of the internet alone are driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone Age innovation was slow not because our ancestors lacked intelligence. To the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’, we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. But the trouble with chain reactions is that they can be very hard to control.

Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation. How does human culture exist for 50,000 years or more and then ‘suddenly’ two people, Darwin and Wallace, come up with the theory of evolution independently and simultaneously? The same question applies to calculus (Newton and Leibniz), or to the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It’s not coincidence, but simply reflects that once all of the pieces of a puzzle are in place, somebody, and more likely multiple people, will inevitably make the connections and see the next step in the innovation chain.

But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. Unfortunately, the associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. As we look forward, just taking AI as an example, we can already somewhat anticipate some worrying possibilities. But what about the surprises analogous to climate change that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness when, despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.

Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police. North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to initially create, opens numerous doors for misuse that require far fewer resources.

The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Strassmann and Hahn in the late 1930s made the bomb inevitable. This set the stage for a race between the Allies and Nazi Germany to turn theory into practice. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or legislate against them and, in so doing, cede that terrible power to the Nazis? Not an enviable choice.

Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing the work underground or falling behind and deferring to others. The recent open letter from Meta to the EU (https://euneedsai.com/), chastising it for the potential economic impacts of its AI regulations, may have dripped with self-interest. But that didn’t make it wrong. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations and less conservative countries will still pursue the upside, and risk the downside. The cat is very much out of the bag.

Muddling Through: The good news is that when faced with potentially perilous change in the past, we’ve muddled through. Hopefully we will do so again. We’ve avoided a nuclear holocaust, at least for now. Social media has destabilized our social order, but hasn’t destroyed it, yet. We’ve been through a pandemic, and come out of it, not unscathed, but still functioning. We are making progress in dealing with climate change, and have made enormous strides in managing pollution.

Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge, mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will the side effects, nefarious uses, mistakes and unintended consequences that derive from it. Key factors that have helped us in the past are time and resources, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.

What can, or should, we do? I certainly don’t have simple answers. We’re all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically either understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We’ll never anticipate everything, and we’ll still get blindsided. And putting resources against ‘what if’ scenarios is always a hard sell. But maybe we need to go into sales mode.

Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource against predicting and managing its own risks?

With many emerging technologies, we are now where we were in the early 1900s with climate change. We are implementing massive, unpredictable change, and by definition have no idea what its unanticipated consequences will be. I personally think we’ll deal with climate change. It’s difficult to slow a leviathan that’s been building for over a hundred years, but we’ve taken the important first steps in acknowledging the problem, and are beginning to implement corrective action.

But big issues require big solutions. Long-term, I personally believe the most important thing is for humanity to escape the gravity well. Given the scale of our ability to create global change, interplanetary colonization is not a luxury but an essential. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage that, and avoid nuclear war or synthetic pandemics long enough to achieve it. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.

Image credits: Microsoft Copilot


Revolutionizing Customer Service

Brian Higgins On Driving Verizon’s Customer Experience Vision

GUEST POST from Shep Hyken

If you have the best product in the world, that’s nice, but it’s not enough. You need a strong customer experience to go with it.

If you have the best service in the world, that’s nice, but it’s not enough. You need a strong product to go with it.

And one other thing. You also need customers! Without them, it doesn’t matter if you have the best product and the best service; you will eventually go out of business.

That’s why I’m excited about this week’s article. I had the opportunity to have an Amazing Business Radio interview with Brian Higgins, the chief customer experience officer at Verizon Consumer. After a career of 20-plus years working for one of the most recognized brands in the world, he has a lot to share about what it takes to get customers to say, “I’ll be back.”

Verizon is one of the most recognizable brands on the planet. A Fortune 50 company, it has more than 100,000 employees, a global presence serving more than 150 countries, more than $130 billion in annual revenue and a market cap of more than $168 billion.

Higgins made it clear that in addition to a premium network and product offerings, there needs to be a focus on customer experience with three primary objectives: addressing pain points, enhancing digital experiences and highlighting signature experiences exclusive to Verizon customers/members. They want to be easy to do business with and to use Customer Experience (CX) to capture market share and retain customers. What follows is a summary of Higgins’ most important points in our interview, followed by my commentary:

  1. Who Reports to Whom?: With Verizon’s emphasis on CX, one of the first questions I asked Higgins was about the company’s structure. Does CX report to marketing? Is CX over sales and marketing? Different companies put an emphasis on marketing, sales or experience. Often, one reports to the other. At Verizon, sales, revenue and experience work together. Higgins says, “We work in partnership with each other. You can’t build an experience if you don’t have the sales, revenue and customer care teams all on board.” The chief sales officer, chief revenue officer and chief experience officer “sit next to each other.”
  2. Membership: In our conversation, Higgins referred to Verizon’s customers as customers, members and subscribers. I asked which he preferred, and he quickly responded, “I would refer to them as members.” The membership is diverse, but the goal is to create a consistent and positive experience regardless of how individuals interact with the company. He sees the relationship with members as a partnership that is an integral part of their lives. Most people check their phone the moment they wake up, throughout the day, and often, it’s one of the last things they check before going to bed. Verizon is a part of its members’ lives, and that’s an opportunity that cannot be mismanaged or abused.
  3. Employees Must Be Happy Too: More companies are recognizing that their CX must also include EX (employee experience). Employees must have the tools they need. This is an emphasis in his organization. Simplifying the employee experience with better tools and policies is the key to elevating the customer’s experience. Higgins shared the perfect description of why employee experience is paramount to the success of a business: “If employees aren’t happy and don’t feel they have the policies and tools they need that are right to engage with customers, you’re not going to get the experience right.”
  4. Focus on Little Pain Points: One of the priorities Higgins focuses on is what he refers to as “small cracks in the experience.” Seventy-five percent of the calls coming in to customer care are for small problems or questions, such as a promo code that didn’t work or an issue with a bill. His team continuously analyzes all customer journeys and works to fix them when needed. This helps to minimize recurring issues, thereby reducing customer support calls and the time employees spend fixing the same issue.
  5. The Digital Experience: Customers are starting to get comfortable with—and sometimes prefer—digital experiences. Making these experiences seamless and user-friendly increases overall customer satisfaction. More and more, they are using digital platforms to help with the “small cracks in the experience.” Employees also get an AI-infused digital experience. Higgins said Verizon uses AI to analyze customer conversations and provide real-time answers and solutions to employees, demonstrating how AI can support both employees and customers.
  6. Amplifying the Power of One Interaction: The final piece of wisdom Higgins shared was about recognizing how important a single interaction can be. Most customers don’t call very often. They may call once every three years, so each interaction needs to be treated like it’s a special moment—a unique opportunity to leave a lasting positive impression, one that leaves no doubt the customer made the right decision to do business with Verizon. Higgins believes in treating the customer like a relative visiting your home for a holiday. He closed by saying, “You’d be amazed how getting that one interaction with a customer right versus anything less than right can have a huge impact on the brand.”

Higgins’ vision for Verizon is not just about maintaining a superior network. It’s about creating an unparalleled customer experience that resonates with every interaction. As Verizon continues integrating advanced AI technologies and streamlining its processes, the focus continues to be on personalizing and enhancing every customer touchpoint, creating an experience that fosters high customer satisfaction and loyalty.

Image Credits: Pexels

This article originally appeared on Forbes.com

Push versus Pull in the Productivity Zone

GUEST POST from Geoffrey A. Moore

Digital transformation is hardly new. Advances in computing create more powerful infrastructure, which in turn enables more productive operating models, which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the World Wide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked both to keep the current systems running and to enable the enterprise to catch the next wave. And that’s a problem.

The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.

It is a rare IT organization that does not know how to maintain its own systems. That’s Job One, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs — specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.

Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot — indeed, must not — commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you — they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.

Given all that, here’s the playbook to follow:

  1. If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
  2. To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
  3. The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
  4. Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.

Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.

That’s what I think. What do you think?

Image Credit: Unsplash

We Need to Solve the Productivity Crisis

GUEST POST from Greg Satell

When politicians and pundits talk about the economy, they usually do so in terms of numbers. Unemployment is too high or GDP is too low. Inflation should be at this level or at that. You get the feeling that somebody somewhere is turning knobs and flicking levers in order to get the machine humming at just the right speed.

Yet the economy is really about our well-being. It is, at its core, our capacity to produce goods and services that we want and need, such as the food that sustains us, the homes that shelter us and the medicines that cure us, not to mention all of the little niceties and guilty pleasures that we love to enjoy.

Our capacity to generate these things is determined by our productive capacity. Despite all the hype about digital technology creating a “new economy,” productivity growth for the past 50 years has been tremendously sluggish. If we are going to revive it and improve our lives, we need to renew our commitment to scientific capital, human capital and free markets.

Restoring Scientific Capital

In 1945, Vannevar Bush delivered a report, Science, The Endless Frontier, which argued that the US government needed to invest in “scientific capital” through basic research and scientific education. The report set in motion a number of programs that would lay the groundwork for America’s technological dominance during the second half of the century.

Bush’s report led to the development of America’s scientific infrastructure, including agencies such as the National Science Foundation (NSF), National Institutes of Health (NIH) and DARPA. Others, such as the National Labs and science programs at the Department of Agriculture, also contribute significantly to our scientific capital.

The results speak for themselves and returns on public research investment have been shown to surpass those in private industry. To take just one example, it has been estimated that the $3.8 billion invested in the Human Genome Project resulted in nearly $800 billion in economic impact and created over 300,000 jobs in just the first decade.

Unfortunately, we forgot those lessons. Government investment in research as a percentage of GDP has been declining for decades, limiting our ability to produce the kinds of breakthrough discoveries that lead to exciting new industries. What passes for innovation these days displaces workers, but does not lead to significant productivity gains.

So the first step to solving the productivity puzzle would be to renew our commitment to investing in the type of scientific knowledge that, as Bush put it, can “turn the wheels of private and public enterprise.” There was a bill before Congress to do exactly that, but unfortunately it got bogged down in the Senate due to infighting.

Investing In Human Capital

Innovation, at its core, is something that people do, which is why education was every bit as important to Bush’s vision as investment was. “If ability, and not the circumstance of family fortune, is made to determine who shall receive higher education in science, then we shall be assured of constantly improving quality at every level of scientific activity,” he wrote.

Programs like the GI Bill delivered on that promise. We made what is perhaps the biggest investment ever in human capital, sending millions to college and creating a new middle class. American universities, considered far behind their European counterparts earlier in the century, especially in the sciences, came to be seen as the best in the world by far.

Today, however, things have gone horribly wrong. A recent study found that about half of all college students struggle with food insecurity, which is probably why only 60% of students at 4-year institutions, and even fewer at community colleges, ever earn a degree. The ones that do graduate are saddled with decades of debt.

So the bright young people we don’t starve, we condemn to decades of what is essentially indentured servitude. That’s no way to run an entrepreneurial economy. In fact, a study done by the Federal Reserve Bank of Philadelphia found that student debt has a measurable negative impact on new business creation.

Recommitting Ourselves To Free and Competitive Markets

There is no principle more basic to capitalism than that of free markets, which provide the “invisible hand” to efficiently allocate resources. When market signals get corrupted, we get less of what we need and more of what we don’t. Without vigorous competition, firms feel less of a need to invest and innovate, and become less productive.

There is abundant evidence that this is exactly what has happened. Since the late 1970s, antitrust enforcement has become lax, ushering in a new gilded age. While digital technology was hyped as a democratizing force, over 75% of industries have seen a rise in concentration levels since the late 1990s, which has led to a decline in business dynamism.

The problem isn’t just monopoly power dominating consumers, either, but also monopsony, the domination of suppliers by buyers, especially in labor markets. There is increasing evidence of collusion among employers designed to keep wages low, along with an astonishing abuse of non-compete agreements, which have affected more than a third of the workforce.

In a sense, this is nothing new. Adam Smith himself observed in The Wealth of Nations that “Our merchants and master-manufacturers complain much of the bad effects of high wages in raising the price, and thereby lessening the sale of their goods both at home and abroad. They say nothing concerning the bad effects of high profits. They are silent with regard to the pernicious effects of their own gains. They complain only of those of other people.”

Getting Back On Track

In the final analysis, solving the productivity puzzle shouldn’t be that complicated. It seems that everything we need to do we’ve done before. We built a scientific architecture that remains unparalleled even today. We led the world in educating our people. American markets were the most competitive on the planet.

Yet somewhere we lost our way. Beginning in the early 1970s, we started reducing our investment in scientific research and public education. In the early 1980s, the Chicago school of competition law started to gain traction and antitrust enforcement began to wane. Since 2000, competitive markets in the United States have been in serious decline.

None of this was inevitable. We made choices and those choices had consequences. We can make other ones. We can choose to invest in discovering new knowledge, to educate our children without impoverishing them, to demand that our industries compete and to hold our institutions to account. We’ve done these things before and can do so again.

All that’s left is the will and the understanding that the economy doesn’t exist in the financial press, on the floor of the stock markets or in the boardrooms of large corporations, but in our own welfare as well as in our ability to actualize our potential and realize our dreams. Our economy should be there to serve our needs, not the other way around.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash

Are You Continuing to Stop and Start the Hard Way?

GUEST POST from Mike Shipulski

The stop, start, continue method (SSC) is a simple, yet powerful, way to plan your day, week and year. And though it’s simple, it’s not simplistic. And though it looks straightforward, it’s onion-like in its layers.

Stop, start, continue (SSC) is interesting in that it’s forward-looking, present-looking, and rearward-looking at the same time. And its power comes from the requirement that the three time perspectives must be reconciled with each other. Stopping is easy, but what will start? Starting is easy, unless nothing is stopped. Continuing is easy, but it’s not the right thing if the rules have changed. And starting can’t start if everything continues.

Stop. With SSC, stopping is the most important part. That’s why it’s first in the sequence. When everyone’s plates are full and every meeting is an all-you-can-eat buffet, without stopping, all the new action items slathered on top simply slip off the plate and fall to the floor. And this is double trouble because while it’s clear new action items are assigned, there’s no admission that the carpet is soiled with all those recently added action items.

Here’s a rule: If you don’t stop, you can’t start.
And here’s another: Pros stop, and rookies start.

With continuous improvement, you should stop what didn’t work. But with innovation, you should stop what was successful. Let others fan the flames of success while you invent the new thing that will start a bigger blaze.

Start. With SSC, starting is the easy part, but it shouldn’t be. Resources are finite, but we conveniently ignore this reality so we can start starting. The trouble with starting is that no one wants to let go of continuing. Do everything you did last year and start three new initiatives. Continue with your current role, but start doing the new job so you can get the promotion in three years.

Here’s a rule: Starting must come at the expense of continuing.
And here’s another: Pros do stop, start, continue, and rookies do start, start, start.

Continue. With SSC, continue is underrated. If you’re always starting, it’s because you have nothing good to continue. And if you’ve got a lot of continuing to do, it’s because you’ve got a lot of good things going on. And continuing is efficient because you’re not doing something for the first time. And everyone knows how to do the work and it goes smoothly.

But there’s a dark side to continue – it’s called the status quo. The status quo is a powerful, one-trick pony that only knows how to continue. It hates stopping and blocks all starting. Continuing is the mortal enemy of innovation.

Here’s a rule: Continuing must stop, or starting can’t start.
And here’s another: Pros continue and stop before they start, and rookies start.

SSC is like juggling three balls at once. Just as it’s not juggling unless it’s three balls at the same time, it’s not SSC unless it’s stop, start, continue all done at the same time. And just as juggling two balls at once isn’t juggling, it’s not SSC if it’s just two out of the three. And just as dropping two of the three balls on the floor isn’t juggling, it’s not SSC if it’s starting, starting, starting.

Image credit: Pexels

Coping with the Chasm

GUEST POST from Geoffrey A. Moore

I’ve been talking about crossing the chasm incessantly for over thirty years, and I’m not likely to stop, but it does raise the question: How should you operate when you are in the chasm? What is the chasm itself about, and what actions is it likely to reward or punish?

The chasm is a lull in the Technology Adoption Life Cycle, one that comes after the enthusiasts and visionaries have made their splash and before the pragmatists are willing to commit. At this time the new category is on the map, people are talking about it, often quite enthusiastically, but no one has budgeted for it as yet. That means that conventional go-to-market efforts, based on generating and pursuing qualified leads with prospects who have both budget and intent to purchase, cannot get traction. It does not mean, however, that they won’t entertain sales meetings and demos. They actually want to learn more about this amazing new thing, and so they can keep your go-to-market engine humming with activity. They just won’t buy anything.

Crossing the Chasm says it is time for you to select a beachhead market segment with a compelling reason to buy and approach them with a whole product that addresses an urgent unsolved problem. All well and good, but what if you don’t know enough about the market (or your own product for that matter) to make a sound choice? What if you are stuck in the chasm and have to stay there for a while? What can you do?

First of all, take good care of the early adopter customers you do have. Give them more service than you normally would, in part because you want them to succeed and be good references, but also because in delivering that service, you can get a closer look at their use cases and learn more about the ones that might pull you out of the chasm.

Second, keep your go-to-market organization lean and mean. You cannot sell your way out of the chasm. You cannot market your way out either. The only way out is to find that targetable beachhead segment with the compelling use case that they cannot address through any conventional means. This is an exercise in discovery, so your go-to-market efforts need to be provocative enough to get the meeting (this is where thought leadership marketing is so valuable) and your sales calls need to be intellectually curious about the prospect’s current business challenges (and not presentations about how amazing your company is or flashy demos to show off your product). In short, in the chasm, you are a solution looking for a problem.

Third, get your R&D team directly in contact with the customer, blending engineering, professional services, and customer success all into one flexible organization, all in search of the beachhead use case and the means for mastering its challenges. You made it to the chasm based on breakthrough technology that won the hearts of enthusiasts and visionaries, but that won’t get you across. You have to get pulled out of the chasm by prospective customers who will make a bet on you because they are desperate for a new approach to an increasingly vexing problem, and you have made a convincing case that your technology, product, talent, and commitment can fill the bill.

Finally, let’s talk about what you should not do. You cannot perform your way out of the chasm. You have no power. So, this is not a time to focus on execution. Instead, you have to find a way to increase your power. In the short term, you can do this through consulting projects—you have unique technology power that people want to consume; they just don’t want to consume through a product model at this time. They are happy to pay for bespoke projects, however, and that is really what the Early Market playbook is all about. Of course, projects don’t scale, so they are not a long-term answer, but they do generate income, and they do keep you in contact with the market. What you are looking for is solution power, tying your technology power to a specific use case in a specific segment, one that you could deliver on a repeatable basis and get you out of the chasm. Often these use cases are embedded in bespoke projects, just a part of the visionary’s big picture, but with more than enough meat on the bone to warrant a pragmatist’s attention.

Sooner or later you have to make a bet. You can recognize a good opportunity by the following traits:

  • There is budget to address the problem, and it is being spent now.
  • The results the prospect is getting are not promising and, if anything, the situation is deteriorating.
  • You know from at least one of your projects that you can do a lot better.

That’s about all the data you are going to get. That’s why we call crossing the chasm a high-risk, low-data decision. But it beats staying in the chasm by a long shot.

That’s what I think. What do you think?

Image Credit: Microsoft Copilot

Creating the Ultimate Customer Experience with AI

Delivering Real Value the Key

GUEST POST from Shep Hyken

Whenever I get the chance to interview the CEO of a major CX company, I jump at it. I recently conducted a second interview with Alan Masarek, the CEO of Avaya, a company focused on creating customer experience solutions for large enterprises.

My first interview covered an amazing turnaround that Masarek orchestrated in his first year at Avaya, taking the company through Chapter 11 and coming out strong. Masarek admits that even with his extensive financial background, he’s always been a product person, and it’s the combination of the two mindsets that makes him the perfect leader for Avaya.

In our discussion, he shared his view on AI and how it must deliver value in the contact center. What follows is a summary of the main points of our interview, followed by my commentary.

Why Customer Service and CX Are Important: Thanks to the internet, it’s harder for brands to differentiate themselves. Within minutes, a customer can compare prices, check availability, find a company that can deliver the product within a day or two, or find comparable products from other retailers, vendors and manufacturers. Furthermore, while the purchasing experience needs to be positive, it’s what happens beyond the purchase that becomes most important. Masarek says, “Brands are now trying to differentiate based upon the experience they provide. So any tool that can help the brand achieve this is the winner.”

Customer Service Is Rooted in Communications: Twenty years ago, the primary way to communicate with a company was on the phone. While we still do that, the world has evolved to what is referred to as omni-channel, which includes voice, chat, email, brand apps, social media and more. As we move from the phone to alternative channels of communication, companies and brands must find ways to bring them all together to create a seamless journey for the customer.

Organizations Want to Minimize Voice: According to Masarek, companies want to move away from traditional voice communication, which is a human on the phone. That “one-to-one” is very expensive. With digital solutions, you have one-to-many. Masarek says, “It’s asynchronous. And the beauty is you can introduce AI utilities into the customer experience, which creates greater efficiency. You’re solving so many things either digitally or deflecting it altogether via the chatbot, the voice bot or what have you.”

AI Will Not Eliminate Jobs: Masarek says, “There’s a bull and a bear case for an employment point of view relative to AI. Will it be a destroyer of jobs, a bear case, or will it grow jobs, the bull case?” He shared an example that perfectly describes the situation we’re in today. In the 1960s, Barclays Bank introduced the ATM. Everyone thought it would be the end of tellers working at banks. That never happened. What did happen is that tellers took on a more important role, going beyond just cashing checks or depositing money. It’s the same in the customer service world. AI technologies will take care of simple tasks, freeing customer service agents to help with more complicated issues. (For more on how AI will not eliminate jobs, read this Forbes article from September 2023.)

The Employee Experience Drives the Customer Experience: AI is not just about supporting the customer. It can also support the agent. When the agent is talking to a customer, generative AI technology can listen in the background, search through a company’s knowledge base and feed the agent information in real time. Masarek said, “Think about what a pleasant experience that is for both the agent and the customer!”

Innovation Without Disruption: A company may invest in a better customer experience, but sometimes, that causes stress to the organization. Masarek is proud of Avaya’s value proposition, which is to add innovation without disruption. This means there’s a seamless integration versus total replacement of existing systems and processes. Regarding the upgrade, Masarek says, “The last thing you want is to rip it all out.”

The Customer-In Approach: As we wrapped up our interview, I asked Masarek for one final nugget of wisdom. He shared his Customer-In approach. Not that long ago, you could compete on product, price and availability. Today, that’s table stakes. What separates one brand from another is the experience. Masarek summarized this point by saying, “You have to set your North Star on as few things as possible. Focus wins. And so, if you’re always thinking Customer First and all your decisions are rooted in that concept, your business will be successful. At the end of the day, brands win on how they make the customer feel. It’s no longer just about product, price and availability.”

Image Credits: Pixabay

This article was originally published on Forbes.com.
