Category Archives: Technology

CEO Secrets of a Successful Turnaround

GUEST POST from Shep Hyken

While most people outside the tech industry won’t know the Avaya brand, anyone who has contacted customer support or communicated directly with a brand has likely experienced its technology. Avaya is a multinational technology company based in the U.S. that provides communications and collaboration technologies for contact centers in 172 countries, serving 90% of the Fortune 100. Its products help its customers deliver a better service experience to their own customers.

I had the opportunity to interview Alan Masarek about the Avaya story. Specifically, we discussed what has happened since he joined the company less than a year ago. The short version of the story is that he and his leadership team successfully guided the company through Chapter 11 bankruptcy, restructuring its finances and streamlining its operations. And they did this while maintaining what Masarek calls Avaya’s North Star.

In referring to that “North Star,” Masarek says, “Customer service and experience is core to who we are and for every role in the company. Our customers count on us for the communications and collaboration technology that make customer interactions not only work, but work better.” He went on to explain the four core components they focus on:

1. Culture: Everything starts with culture. Masarek wants to make Avaya a “destination place to work,” which means attracting and keeping the best talent. Once you get good people, you must keep them. His strategy for creating a “destination place to work” includes three components. The first is a rewards and recognition program that validates an employee’s efforts and creates a sense of accomplishment. The second is to create a culture employees want to be a part of. And the third is to provide an opportunity for growth. Masarek says a company’s positive reviews and ratings on Glassdoor.com, where employees rate their employers, are among the success criteria he looks at.

2. Product: Avaya is a technology company and must continuously innovate and improve. They created a “product roadmap” where customers can see what products are being phased out, retained and, most importantly, being developed for the future. “We must deliver innovation—the right innovation—and we have to deliver it on time and with quality,” said Masarek. “We will be successful when we are both transparent (which is why Avaya published the roadmap) and reliable. When we deliver on that commitment over time, that reliability becomes trust.”

3. Customer Delight: If your customers don’t like the experience, or the product doesn’t do what it’s supposed to do, they will find another company and product that meets their needs. Masarek recognizes the importance of customer delight and has invested heavily in hearing and understanding the “Voice of the Customer,” paying attention to customer satisfaction scores and NPS (Net Promoter Score; a quick sketch of the calculation follows this list). Masarek is emphatic about customer delight, stating, “We are in service to the customer. CX is everyone’s responsibility.” And this isn’t just lip service. Those satisfaction and NPS numbers are tied to some employees’ compensation plans.

4. Accountability: “We must be accountable,” Masarek says, “to one another, to the customers, and to the results. When you take care of the first three (culture, product and customer delight), this fourth one becomes much easier to achieve.”
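
For readers unfamiliar with the metric, NPS comes from a single 0–10 “how likely are you to recommend us?” survey question. Here is a minimal sketch of the standard calculation; the survey scores below are invented purely for illustration:

```python
def net_promoter_score(scores):
    """Standard NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey responses, for illustration only
print(net_promoter_score([10, 9, 9, 8, 7, 6, 10, 3, 9, 8]))  # 30.0
```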

While sharing the entire story in a short article is impossible, you can see the overarching strategies and thinking behind Masarek’s leadership and Avaya’s success. And here’s my observation: It’s not complicated!

If you look at the four core components Avaya focuses on, you might say, “There’s nothing new here,” but don’t let their simplicity, or the fact that they seem like common sense, get in the way of incorporating them into your strategy. In good times and bad, culture, product, customer delight and accountability/results are the undeniable strategies that drive success.

This article originally appeared on Forbes.com

Image Credit: Unsplash

The Power of Dreams - A Veterans Day Innovation Story

by Braden Kelley

On this Veterans Day I send my thanks to all of my fellow veterans for the sacrifices they and their families have made in support of the great nations of the world. Military science has long been a source of innovation that goes beyond the defense of a population. From duct tape, GPS, jet engines and the Internet to nuclear power, sanitary napkins and digital photography, there is an endless list of innovations that owe their existence to investments in military research.

Innovation has always been fueled by exceptional ideas that push the boundaries of what is possible. Some of the most groundbreaking inventions in history have originated from the most unexpected sources, proving that inspiration knows no boundaries. One such remarkable innovation that emerged from the realm of dreams is the M9 Gun Director, a groundbreaking concept envisioned by David Parkinson. Today, we explore the fascinating story of how an ordinary dream sparked an extraordinary revolution in military technology.

Dreams have long been a source of fascination for humanity, acting as the gateway to our subconscious minds, guiding our creativity and problem-solving abilities. Great minds throughout history, from Albert Einstein to Nikola Tesla, have attested to the transformative power of dreams shaping their inventions and discoveries. In the case of David Parkinson, the M9 Gun Director serves as a testament to the astounding potential that lies within our dreams.

The Birth of a Revolutionary Concept

In the spring of 1940, Parkinson, then a young engineer at Bell Telephone Laboratories, experienced a vivid dream that would forever change the world of military technology. In this dream, he envisioned a device capable of automatically predicting and adjusting the trajectory of an anti-aircraft gun, enabling unparalleled precision in aiming and firing. This visionary concept would ultimately become the foundation for the M9 Gun Director and revolutionize anti-aircraft warfare as we knew it.

Pursuing the Unconventional

David Parkinson, driven by an insatiable curiosity and an unwavering belief in his dream, embarked on a journey to transform this abstract idea into a tangible reality. Despite facing skepticism and opposition, Parkinson remained undeterred, recognizing the immense potential in his concept. He tirelessly invested his time in research, experimentation, and collaboration, all the while fueled by the hope of revolutionizing military technology.

Bringing Dreams to Life

After years of relentless persistence, Parkinson and his Bell Labs colleagues succeeded in developing a prototype that embodied his vision of the M9 Gun Director. Built around potentiometers and servomechanisms, it was an electrical analog computer that predicted and adjusted anti-aircraft gun trajectories with remarkable accuracy. This revolutionary innovation significantly enhanced the efficiency, precision, and destructive power of anti-aircraft systems, forever changing the course of warfare worldwide.

Implications and Significance

The advent of the M9 Gun Director marked a turning point in military history, fundamentally altering the dynamics of armed conflict. By harnessing the power of dream-inspired innovation, Parkinson had unlocked a level of precision previously unimaginable in anti-aircraft fire. This groundbreaking invention significantly reduced casualties, transformed strategic planning, and tilted the balance of power on the battlefield.

Embracing the Power of Dreams

The story of David Parkinson and the M9 Gun Director serves as a testament to the incredible creative potential that lies within each of us. It encourages us to embrace the unexplored territories of our dreams, recognizing them not just as fleeting nocturnal experiences, but as wellsprings of unmatched inspiration. Who knows what other world-changing ideas are waiting to be unleashed from within our subconscious minds?

Image credits: Pixabay

A Quantum Computing Primer

GUEST POST from Greg Satell

Every once in a while, a technology comes along with so much potential that people can’t seem to stop talking about it. That’s fun and exciting, but it can also be confusing. Not all of the people who opine really know what they’re talking about and, as the cacophony of voices increases to a loud roar, it’s hard to know what to believe.

We’re beginning to hit that point with quantum computing. Listen to some and you imagine that you’ll be strolling down to your local Apple store to pick one up any day now. Others will tell you that these diabolical machines will kill encryption and bring global commerce to a screeching halt. None of this is true.

What is true, though, is that quantum computing is not only almost unimaginably powerful, it is also completely different from anything we’ve ever seen before. You won’t use a quantum computer to write emails or to play videos, but the technology will significantly impact our lives over the next decade or two. Here’s a basic guide to what you really need to know.

Computing In 3 Dimensions

Quantum computing, as any expert will tell you, uses quantum effects such as superposition and entanglement to compute, unlike digital computers that use strings of ones and zeros. Yet quantum effects are so confusing that the great physicist Richard Feynman once remarked that nobody, even world-class experts like him, really understands them.

So instead of wrestling with quantum effects, think of a quantum computer as a machine that works in three dimensions rather than the two dimensions of a digital computer. The benefit should be obvious: you can fit a lot more stuff into three dimensions than into two, so a quantum computer can handle vastly more complexity than the machines we’re used to.
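
For readers who want the formalism behind this analogy: in the standard textbook description (not the author’s framing), a qubit holds a superposition of 0 and 1,

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

and a register of $n$ qubits is described by $2^n$ complex amplitudes at once:

$$|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x |x\rangle$$

That exponential state space, not a literal third dimension, is where the extra room to “fit more stuff” comes from.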

Another benefit is that we live in three dimensions, so quantum computers can simulate the systems we deal with every day, like those in materials and biological organisms. Digital computers can do this to some extent, but some information always gets lost translating data from a three-dimensional world to a two-dimensional one, which leads to problems.

I want to stress that this isn’t exactly an accurate description of how quantum computers really work, but it’s close enough for you to get the gist of why they are so different and, potentially, so useful.

Coherence And Error Correction

Everybody makes mistakes and the same goes for machines. When you think of all the billions of calculations a computer makes, you can see how even an infinitesimally small error rate can cause a lot of problems. That’s why computers have error correction mechanisms built into their code to catch mistakes and correct them.
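
As a concrete illustration of the classical case, here is a minimal even-parity check in Python. It is a sketch of one of the simplest error-detection schemes, not what any particular machine uses, and parity alone only detects errors; real systems use codes (Hamming codes, ECC memory) that can also locate and correct them.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def has_error(bits_with_parity):
    """An odd number of 1s means at least one bit flipped in transit."""
    return sum(bits_with_parity) % 2 == 1

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
word[2] ^= 1                      # simulate a single bit flip
print(has_error(word))            # True: the flip is detected
```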

With quantum computers the problem is much tougher because they work with subatomic particles, and these systems are incredibly difficult to keep stable. That’s why quantum chips need to be kept within a fraction of a degree of absolute zero. At even a sliver above that, the system “decoheres” and we can no longer make sense of anything.

It also leads to another problem. Because quantum computers are so prone to error, we need a whole lot of physical quantum bits (or qubits) for each qubit that performs a logical function. In fact, with today’s technology, we need more than a thousand physical qubits (the kind that are actually in a machine) for each qubit that can reliably perform a logical function.

This is why the fears of quantum computing killing encryption and destroying the financial system are mostly unfounded. The most advanced quantum computers today have only about 50 qubits, not nearly enough to crack anything. We will probably have machines that strong in a decade or so, but by then quantum-safe encryption should be fairly common.
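
A back-of-the-envelope check using the author’s own figures makes the point:

$$\left\lfloor \frac{50 \ \text{physical qubits}}{1000 \ \text{physical per logical}} \right\rfloor = 0 \ \text{logical qubits},$$

while even a modest machine with 100 error-corrected logical qubits would need on the order of $100 \times 1000 = 10^5$ physical qubits.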

Building Practical Applications

Because quantum computers are so different, it’s hard to make them efficient for the tasks that we use traditional computers for because they effectively have to translate two-dimensional digital problems into their three-dimensional quantum world. The error correction issues only compound the problem.

There are some problems, however, that they’re ideally suited to. One is to simulate quantum systems, like molecules and biological systems, which can be tremendously valuable for people like chemists, materials scientists and medical researchers. Another promising area is large optimization problems for use in the financial industry and helping manage complex logistics.

Yet the people who understand those problems know little about quantum computing. In most cases, they’ve never seen a quantum computer before and have trouble making sense out of the data they generate. So they will have to spend some years working with quantum scientists to figure it out and then some more years explaining what they’ve learned to engineers who can build products and services.

We tend to think of innovation as if it is a single event. The reality is that it’s a long process of discovery, engineering and transformation. We are already well into the engineering phase of quantum computing—we have reasonably powerful machines that work—but the transformation phase has just begun.

The End Of The Digital Revolution And A New Era Of Innovation

One of the reasons that quantum computing has been generating so much excitement is that Moore’s Law is ending. The digital revolution was driven by our ability to cram more transistors onto a silicon wafer, so once we are not able to do that anymore, a key avenue of advancement will no longer be viable.

So many assume that quantum computing will simply take over where digital computing left off. It will not. As noted above, quantum computers are fundamentally different from the ones we are used to. They use different logic, require different computing languages and algorithmic approaches, and are suited to different tasks.

That means the major impacts from quantum computers won’t hit for a decade or more. That’s not at all unusual. For example, although Apple came out with the Macintosh in 1984, it wasn’t until the late 90s that there was a measurable bump in productivity. It takes time for an ecosystem to evolve around a technology and drive a significant impact.

What’s most important to understand, however, is that the quantum era will open up new worlds of possibility, enabling us to manage almost unthinkable complexity and reshape the physical world. We are, in many ways, just getting started.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay

AI and Human Creativity Solving Complex Problems Together

GUEST POST from Janet Sernack

A recent McKinsey “Leading Off – Essentials for leaders and those they lead” email newsletter referred to the article “The organization of the future: Enabled by gen AI, driven by people,” which stated that digitization, automation, and AI will reshape whole industries and every enterprise. The article elaborated that, in terms of magnitude, the challenge is akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China. That shift was powered by the defining trait of our species: our human creativity, which is at the heart of all creative problem-solving endeavors, where innovation is the engine of growth, no matter the context.

Moving into Uncharted Job and Skills Territory

We don’t yet know exactly what technical or soft skills, new occupations, or jobs will be required in this fast-moving transformation, or how we might further advance generative AI, digitization, and automation.

We also don’t know how AI will impact the need for humans to tap even more deeply into the defining trait of our species, our human creativity, enabling us to become more imaginative, curious, and creative in the way we solve some of the world’s greatest challenges and most complex and pressing problems, and transform them into innovative solutions.

We can be proactive by asking these two generative questions:

  • What if the true potential of AI lies in embracing its ability to augment human creativity and aid innovation, especially in enhancing creative problem solving, at all levels of civil society, instead of avoiding it? (Ideascale)
  • How might we develop AI as a creative thinking partner to effect profound change, and create innovative solutions that help us build a more equitable and sustainable planet for all humanity? (Hal Gregersen)

Our human creativity is at the heart of creative problem-solving, and innovation is the engine of growth, competitiveness, and profound, positive change.

Developing a Co-Creative Thinking Partnership

In a recent Harvard Business Review article, “AI Can Help You Ask Better Questions – and Solve Bigger Problems,” Hal Gregersen and Nicola Morini Bianzino state:

“Artificial intelligence may be superhuman in some ways, but it also has considerable weaknesses. For starters, the technology is fundamentally backward-looking, trained on yesterday’s data – and the future might not look anything like the past. What’s more, inaccurate or otherwise flawed training data (for instance, data skewed by inherent biases) produces poor outcomes.”

The authors say that people must manage this limitation if they are going to treat AI as a creative-thinking partner in solving the complex problems that stand in the way of people living healthy and happy lives and of co-creating an equitable and sustainable planet.

We can achieve this by focusing on specific areas where the human brain and machines might complement one another to co-create, through creative problem-solving, the systemic changes the world badly needs.

  • A double-edged sword

This perspective is further complemented by a recent Boston Consulting Group article, “How People Can Create – and Destroy – Value with Generative AI,” which found that the adoption of generative AI is, in fact, a double-edged sword.

In an experiment, participants using GPT-4 for creative product innovation outperformed the control group (those who completed the task without using GPT-4) by 40%. But for business problem solving, using GPT-4 resulted in performance that was 23% lower than that of the control group.

“Perhaps somewhat counterintuitively, current GenAI models tend to do better on the first type of task; it is easier for LLMs to come up with creative, novel, or useful ideas based on the vast amounts of data on which they have been trained. Where there’s more room for error is when LLMs are asked to weigh nuanced qualitative and quantitative data to answer a complex question. Given this shortcoming, we as researchers knew that GPT-4 was likely to mislead participants if they relied completely on the tool, and not also on their own judgment, to arrive at the solution to the business problem-solving task (this task had a “right” answer).”

  • Taking the path of least resistance

In McKinsey’s Top Ten Reports This Quarter blog, seven of the ten articles relate specifically to generative AI: technology trends, the state of AI, the future of work, the future of AI, the new AI playbook, questions to ask about AI, and healthcare and AI.

As generative AI is the most dominant topic across the board globally, a myopic focus on this one technology will, if we are not both vigilant and intentional, take us all down the path of least resistance, where our energy moves to wherever it is easiest to go. Like a river following the surrounding terrain, without a strategic and systemic perspective we will always go, and end up, where we have always gone.

  • Living our lives forwards

According to the Boston Consulting Group article:

“The primary locus of human-driven value creation lies not in enhancing generative AI where it is already great, but in focusing on tasks beyond the frontier of the technology’s core competencies.”

This means that a whole lot of other variables need to be at play, and a newly emerging set of human skills, especially in creative problem-solving, needs to be developed to extract the most value from generative AI and to generate the most imaginative, novel, and value-adding landing strips of the future.

Creative Problem Solving

In my previous blog posts “Imagination versus Knowledge” and “Why Successful Innovators Are Curious Like Cats,” I shared that we are in the midst of a “Sputnik Moment,” where we have the opportunity to advance our human creativity.

This human creativity is inside all of us. It involves the process of bringing something new into being that is original, surprising, useful, or desirable, in ways that add value to the quality of people’s lives and that they appreciate and cherish.

  • Taking a both/and approach

Our human creativity will be paralysed if we focus our attention and intention only on the technology and the financial gains or potential profits we will get from it, and if we exclude the possibilities of a co-creative thinking partnership with it.

Such a partnership deeply engages people in true creative problem-solving and involves them in impacting positively our crucial relationships and connectedness, with one another, with the natural world, and with the planet.

  • A marriage between creatives, technologists, and humanities

In a recent Fast Company video presentation, “Innovating Imagination: How Airbnb Is Using AI to Foster Creativity,” Brian Chesky, CEO of Airbnb, states that we need to consider and focus our attention and intention on discovering what is good for people.

We need to develop a “marriage between creatives, technologists, and the humanities” that brings out the human element and doesn’t let technology overtake it.

Developing Creative Problem-Solving Skills

At ImagineNation, we teach, mentor, and coach clients in creative problem-solving, through developing their Generative Discovery skills.

This involves developing an open and active mind and heart, by becoming flexible, adaptive, and playful in the ways we engage and focus our human creativity in the four stages of creative problem-solving.

These skills include sensing, perceiving, and enabling people to deeply listen, inquire, question, and debate from the edges of temporarily hidden or emerging fields of the future.

They also include knowing how to emerge, diverge, and converge creative insights, collective breakthroughs, ideation processes, and cognitive and emotional agility shifts to:

  • Deepen our attending, observing, and discerning capabilities to consciously connect with, explore, and discover possibilities that create tension and cognitive dissonance to disrupt and challenge the status quo, and other conventional thinking and feeling processes.
  • Create cracks, openings, and creative thresholds by asking generative questions to push the boundaries, and challenge assumptions and mental and emotional models to pull people towards evoking, provoking, and generating boldly creative ideas.
  • Unleash possibilities, and opportunities for creative problem solving to contribute towards generating innovative solutions to complex problems, and pressing challenges, that may not have been previously imagined.

Experimenting with the generative discovery skill set enables us to juggle multiple theories, models, and strategies to create and plan in an emergent, and non-linear way through creative problem-solving.

As stated by Hal Gregersen:

“Partnering with the technology in this way can help people ask smarter questions, making them better problem solvers and breakthrough innovators.”

Succeeding in the Age of AI

We know that Generative AI will change much of what we do and how we do it, in ways that we cannot yet anticipate.

Success in the age of AI will largely depend on our ability to learn and change faster than we ever have before, in ways that preserve our well-being, connectedness, imagination, curiosity, human creativity, and our collective humanity through partnering with generative AI in the creative problem-solving process.

Find Out More About Our Work at ImagineNation™

Find out about our collective learning products and tools, including The Coach for Innovators, Leaders, and Teams Certified Program, presented by Janet Sernack. It is a collaborative, intimate, and deeply personalized innovation coaching and learning program, supported by a global group of peers over nine weeks, and it can be customised as a bespoke corporate learning program.

It is a blended and transformational change and learning program that will give you a deep understanding of the language, principles, and applications of an ecosystem focus, human-centric approach, and emergent structure (Theory U) to innovation, and upskill people and teams and develop their future fitness, within your unique innovation context. Find out more about our products and tools.

Image Credit: Pixabay

What Pundits Always Get Wrong About the Future

GUEST POST from Greg Satell

Peter Thiel likes to point out that we wanted flying cars, but got 140 characters instead. He’s only partly right. For decades, futuristic visions showed everyday families zipping around in flying cars, and it’s true that even today we’re still stuck on the ground. Yet that’s not because we’re unable to build one. In fact, the first was invented in 1934.

The problem is not so much with engineering as with economics, safety and convenience. We could build a flying car if we wanted to, but making one that can compete with regular cars is another matter entirely. Besides, in many ways, 140 characters are better than a flying car. Cars only let us travel around town; the Internet helps us span the globe.

That has created far more value than a flying car ever could. We often fail to predict the future accurately because we don’t account for our capacity to surprise ourselves, to see new possibilities and take new directions. We interact with each other, collaborate and change our priorities. The future that we predict is never as exciting as the one we eventually create.

1. The Future Will Not Look Like The Past

We tend to predict the future by extrapolating from the present. So if we invent a car and then an airplane, it only seems natural that we can combine the two. If a family has a car, then having one that flies can seem like a logical next step. We don’t look at a car and dream up, say, a computer. So in 1934, we dreamed of flying cars, but not computers.

It’s not just optimists that fall prey to this fundamental error, but pessimists too. In Homo Deus, author and historian Yuval Noah Harari points to several studies that show that human jobs are being replaced by machines. He then paints a dystopian picture. “Humans might become militarily and economically useless,” he writes. Yeesh!

Yet the picture is not as dark as it may seem. Consider the retail apocalypse. Over the past few years, we’ve seen an unprecedented number of retail store closings. Those jobs are gone and they’re not coming back. You can imagine thousands of retail employees sitting at home, wondering how to pay their bills, just as Harari predicts.

Yet economist Michael Mandel argues that the data tell a very different story. First, he shows that the jobs gained from e-commerce far outstrip those lost from traditional retail. Second, he points out that the total e-commerce sector, including lower-wage fulfillment centers, has an average wage of $21.13 per hour, which is 27 percent higher than the $16.65 that the average worker in traditional retail earns.

So not only are more people working, they are taking home more money too. Not only is the retail apocalypse not a tragedy, it’s somewhat of a blessing.

2. The Next Big Thing Always Starts Out Looking Like Nothing At All

Every technology eventually hits theoretical limits. Buy a computer today and you’ll find that the technical specifications are much like they were five years ago. When a new generation of iPhones comes out these days, reviewers tout the camera rather than the processor speed. The truth is that Moore’s law is effectively over.

That seems tragic, because our ability to exponentially increase the number of transistors that we can squeeze onto a silicon wafer has driven technological advancement over the past few decades. Every 18 months or so, a new generation of chips has come out and opened up new possibilities that entrepreneurs have turned into exciting new businesses.

What will we do now?

Yet there’s no real need to worry. There is no 11th commandment that says, “Thou shalt compute with ones and zeros” and the end of Moore’s law will give way to newer, more powerful technologies, like quantum and neuromorphic computing. These are still in their nascent stage and may not have an impact for at least five to ten years, but will likely power the future for decades to come.

The truth is that the next big thing always starts out looking like nothing at all. Einstein never thought that his work would have a practical impact during his lifetime. When Alexander Fleming first discovered penicillin, nobody noticed. In much the same way, the future is not digital. So what? It will be even better!

3. It’s Ecosystems, Not Inventions, That Drive The Future

When the first automobiles came to market, they were called “horseless carriages” because that’s what everyone knew and was familiar with. So it seemed logical that people would use them much like they used horses, to take the occasional trip into town and to work in the fields. Yet it didn’t turn out that way, because driving a car is nothing like riding a horse.

So first people started taking “Sunday drives” to relax and see family and friends, something that would be too tiring to do regularly on a horse. Gas stations and paved roads changed how products were distributed, and factories moved from cities in the north, close to customers, to small towns in the south, where land and labor were cheaper.

As the ability to travel increased, people started moving out of cities and into suburbs. When consumers could easily load a week’s worth of groceries into their cars, corner stores gave way to supermarkets and, eventually, shopping malls. The automobile changed a lot more than simply how we got from place to place. It changed our way of life in ways that were impossible to predict.

Look at other significant technologies, such as electricity and computers, and you find a similar story. It’s ecosystems, rather than inventions, that drive the future.

4. We Can Only Validate Patterns Going Forward

G. H. Hardy once wrote that “a mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.” Futurists often work the same way, identifying patterns in the past and present, then extrapolating them into the future. Yet there is a substantive difference between patterns that we consider to be preordained and those that are to be discovered.

Think about Steve Jobs and Apple for a minute and you will probably recognize the pattern and assume I misspelled the name of his iconic company by forgetting to include the “e” at the end. But I could just as easily have been about to describe an “Applet” he designed for the iPhone or some connection between Jobs and Appleton, WI, a small town outside Green Bay.

The point is that we can only validate patterns going forward, never backward. That, in essence, is what Steve Blank means when he says that business plans rarely survive first contact with customers and why his ideas about lean startups are changing the world. We need to be careful about the patterns we think we see. Some are meaningful. Others are not.

The problem with patterns is that the future is something we create, not some preordained plan that we are beholden to. The things we create often become inflection points and change our course. That may frustrate the futurists, but it’s what makes life exciting for the rest of us.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay

An Innovation Rant: Just Because You Can Doesn’t Mean You Should

GUEST POST from Robyn Bolton

Why are people so concerned about, afraid of, or resistant to new things?

Innovation, by its very nature, is good.  It is something new that creates value.

Naturally, the answer has nothing to do with innovation.

It has everything to do with how we experience it. 

And innovation without humanity is a very bad experience.

Over the last several weeks, I’ve heard so many stories of inhuman innovation that I have said, “I hate innovation” more than once.

Of course, I don’t mean that (I would be at an extraordinary career crossroads if I did).  What I mean is that I hate the choices we make about how to use innovation. 

Just because AI can filter resumes doesn’t mean you should remove humans from the process.

Years ago, I oversaw recruiting for a small consulting firm of about 50 people.  I was a full-time project manager, but given our size, everyone was expected to pitch in and take on extra responsibilities.  Because of our founder, we received more resumes than most firms our size, so I usually spent 2 to 3 hours a week reviewing them and responding to applicants.  It was usually boring, sometimes hilarious, and always essential because of our people-based business.

Would I have loved to have an AI system sort through the resumes for me?  Absolutely!

Would we have missed out on incredible talent because they weren’t our “type”? Absolutely!

AI judges a resume based on keywords and other factors you program in.  This probably means that it filters out people who worked in multiple industries, aren’t following a traditional career path, or don’t have the right degree.

This also means that you are not accessing people who bring a new perspective to your business, who can make the non-obvious connections that drive innovation and growth, and who bring unique skills and experiences to your team and its ideas.

If you permit AI to find all your talent, pretty soon, the only talent you’ll have is AI.
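
To make the failure mode concrete, here is a minimal sketch of the kind of naive keyword screen described above. The keywords and threshold are hypothetical, not any vendor’s actual logic:

```python
# A deliberately simplistic keyword screen (hypothetical criteria)
REQUIRED_KEYWORDS = {"mba", "consulting", "strategy"}

def passes_screen(resume_text, min_hits=2):
    """Advance a resume only if it mentions enough of the 'right' words."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= min_hits

traditional = "MBA with five years in strategy consulting"
nontraditional = "Navy veteran turned data scientist and startup founder"

print(passes_screen(traditional))     # True  -> advances to a human
print(passes_screen(nontraditional))  # False -> filtered out, never seen
```

The nontraditional candidate, exactly the kind of non-obvious hire described above, scores zero and never reaches a human.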

Just because you can ghost people doesn’t mean you should.

Rejection sucks.  When you reject someone, and they take it well, you still feel a bit icky and sad.  When they don’t take it well, as one of my colleagues said when viewing a response from a candidate who did not take the decision well, “I feel like I was just assaulted by a bag of feathers.  I’m not hurt.  I’m just shocked.”

So, I understand ghosting feels like the better option.  It’s not.  At best, it’s lazy, and at worst, it’s selfish.  Especially if you’re a big company using AI to screen resumes. 

It’s not hard to add a function that triggers a standard rejection email when the AI filters someone out.  It’s not that hard to have a pre-programmed email that can quickly be clicked and sent when a human makes a decision.
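
To the author’s point, wiring a standard rejection into such a pipeline takes only a few lines. A sketch under obvious assumptions: send_email is a placeholder for whatever mail service you use, not a real API:

```python
def send_email(to_address, subject, body):
    # Placeholder: in practice this would call your mail provider's API
    print(f"To: {to_address}\nSubject: {subject}\n\n{body}")

def on_filtered_out(candidate_email):
    """Send a standard, respectful rejection the moment the screen says no."""
    send_email(
        candidate_email,
        "Update on your application",
        "Thank you for applying. We will not be moving forward at this time.",
    )

on_filtered_out("applicant@example.com")  # hypothetical address
```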

The Golden Rule – do unto others as you would have done unto you – doesn’t apply to AI.  It does apply to you.

Just because you can stack bots on bots doesn’t mean you should.

At this point, we all know that our first interaction with customer service will be with a bot.  Whether it’s an online chatbot or an automated phone tree, the journey to a human is often long and frustrating. Fine.  We don’t like it, but we don’t have a choice.

But when a bot transfers us to a bot masquerading as a person?  Do you hate your customers that much?

Some companies do, as my husband and I discovered.  I was on the phone with one company trying to resolve a problem, and he was in a completely different part of the house on the phone with another company trying to fix a separate issue.  When I wandered to the room where my husband was to get information that the “person” I was talking to needed, I noticed he was on hold.  Then he started staring at me funny (not as unusual as you might think).  Then he asked me to put my call on speaker (that was unusual).  After listening for a few minutes, he said, “I’m talking to the same woman.”

He was right.  As we listened to each other’s calls, we heard the same “woman” with the same tenor of voice, unusual cadence of speech, and indecipherable accent.  We were talking to a bot.  It was not helpful.  It took each of us several days and several more calls to finally reach humans.  When that happened, our issues were resolved in minutes.

Just because innovation can doesn’t mean you should allow it to.

You are a human.  You know more than the machine knows (for now).

You are interacting with other humans who, like you, have a right to be treated with respect.

If you forget these things – how important you and your choices are and how you want to be treated – you won’t have to worry about AI taking your job.  You already gave it away.

Image Credit: Pexels

An Innovation Lesson From The Rolling Stones

GUEST POST from Robyn Bolton

If you’re like most people, you’ve faced disappointment. Maybe the love of your life didn’t return your affection, you didn’t get into your dream college, or you were passed over for promotion.  It hurts.  And sometimes, that hurt lingers for a long time.

Until one day, something happens, and you realize your disappointment was a gift.  You meet the true love of your life while attending college at your fallback school, and years later, when you get passed over for promotion, the two of you quit your jobs, pursue your dreams, and live happily ever after. Or something like that.

We all experience disappointment.  We also all get to choose whether we stay there, lamenting the loss of what coulda shoulda woulda been, or we can persevere, putting one foot in front of the other and playing The Rolling Stones on repeat:

“You can’t always get what you want

But if you try sometimes, well, you might just find

You get what you need”

That’s life.

That’s also innovation.

As innovators, and especially as leaders of innovators, we rarely get what we want. But we always get what we need (whether we like it or not).

We want to know. 
We need to be comfortable not knowing.

Most of us want to know the answer because if we know the answer, there is no risk. There is no chance of being wrong, embarrassed, judged, or punished.  But if there is no risk, there is no growth, expansion, or discovery.

Innovation is something new that creates value. If you know everything, you can’t innovate.

As innovators, we need to be comfortable not knowing.  When we admit to ourselves that we don’t know something, we open our minds to new information, new perspectives, and new opportunities. When we say we don’t know, we give others permission to be curious, learn, and create. 

We want the creative genius and billion-dollar idea. 
We need the team and the steady stream of big ideas.

We want to believe that one person blessed with sufficient time, money, and genius can change the world. Some people like to believe they are that person, and most of us think we can hire that person, and that when we do find that person and give them the resources they need, they will give us the billion-dollar idea that transforms our company, disrupts the industry, and changes the world.

Innovation isn’t magic. Innovation is teamwork.

We need other people to help us see what we can’t and do what we struggle to do.  The idea-person needs the optimizer to bring her idea to life, and the optimizer needs the idea-person so he has a starting point.  We need lots of ideas because most won’t work, but we don’t know which ones those are, so we prototype, experiment, assess, and refine our way to the ones that will succeed.   

We want to be special.
We need to be equal.

We want to work on the latest and most cutting-edge technology and discuss it using terms that no one outside of Innovation understands. We want our work to be on stage, oohed and aahed over on analyst calls, and talked about with envy and reverence in every meeting. We want to be the cool kids, strutting around our super hip offices in our hoodies and flip-flops or calling into the meeting from Burning Man. 

Innovation isn’t about you.  It’s about serving others.

As innovators, we create value by solving problems. But we can’t do it alone. We need experienced operators who can quickly spot design flaws and propose modifications. We need accountants and attorneys who instantly see risks and help us navigate around them. We need people to help us bring our ideas to life, but that won’t happen if we act like we’re different or better. Just as we work in service to our customers, we must also work in service to our colleagues by working with them, listening, compromising, and offering help.

What about you?
What do you want?
What are you learning you need?

Image Credit: Unsplash

AI and the Productivity Paradox

GUEST POST from Greg Satell

In the 1970s and ’80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely, though, productivity growth decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their spending with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.
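
One way to make the measurement problem concrete: productivity is conventionally computed as output per unit of input, so an investment that improves service quality without changing output simply vanishes from the statistic:

$$\text{productivity} = \frac{\text{output (widgets shipped)}}{\text{inputs (labor and capital)}}$$

The IT spending raises the denominator while the numerator stays flat, so measured productivity can even fall while customers are demonstrably better served.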

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back-office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry and process specific software and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped that “you can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is actually often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

The Hard Problem of Consciousness is Not That Hard

GUEST POST from Geoffrey A. Moore

We human beings like to believe we are special—and we are, but not as special as we might like to think. One manifestation of our need to be exceptional is the way we privilege our experience of consciousness. This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:

“… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”

— David Chalmers, “Facing Up to the Problem of Consciousness”

The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.

The key word here is experience. It emerges out of cognitive processes, but it is not completely reducible to them. For anyone who has read much in the field of complexity, this should not come as a surprise. All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate. Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence. Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct. Shakespeare is reducible to quantum effects? Good luck with that.

Most people’s problem with explaining experience, on the other hand, is that they place it too high. They want to use their own personal experience as a grounding point. The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets. Philosophers call such experiences qualia, and they attribute all sorts of ineffable and mysterious qualities to them. But there is a much better way to understand what qualia really are—namely, the pre-linguistic mind’s predecessor to ideas. That is, they are representations of reality that confer strategic advantage to the organism that can host and act upon them.

Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated. Experiences are what we remember. That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged. Of course, we do have experiences with language directly—lots of them. But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability.

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

  • The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences. Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage to organisms under conditions of natural and sexual selection. Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival. Once brains emerge, chemical signaling is supplemented by electrical signaling to the same ends. There is no magic here, only a change of medium.
  • Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.” The answer to this question is, “level by level.” First sentience has to emerge from non-sentience. That happens with the emergence of life at the cellular level. Then sentience has to spread beyond the cell. That happens when chemical signaling enables cellular communication. Then sentience has to speed up to enable mobile life. That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems. Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences. Again, no miracles required.
  • Others, such as Daniel Dennett and Patricia Churchland, believe that the hard problem is really more of a collection of easy problems that will be solved through further analysis of the brain and behavior. If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.
  • Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience. Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later. Specifically, to experience anything as like anything else is not possible without the intervention of language. That is, likeness is not a quale, it is a language-enabled idea. Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he was posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected, in that only human beings have selves. Selves, in other words, are creations of language. More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives. This is a completely language-dependent undertaking and thus not available to pets or infants. Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, and then hear stories about themselves that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness.

  • Chalmers argues that experience is more than the sum of its parts. In other words, experience is irreducible. Yes, but let’s not be mysterious here. Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements. To say something is irreducible does not mean that it is unexplainable.
  • Wolfgang Fasching argues that the hard problem is not about qualia, but about pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate. In this case, the dangerous phrase is “the nature of seeing.” There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here. Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land). Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors. But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me. I don’t think I’ve exhausted the topic, so let me close by saying…

That’s what I think, what do you think?

Image Credit: Pixabay

Leaders Avoid Doing This One Thing

GUEST POST from Robyn Bolton

Being a leader isn’t easy. You must BE accountable, compassionate, confident, curious, empathetic, focused, service-driven, and many other things. You must DO many things, including build relationships, communicate clearly, constantly learn, create accountability, develop people, inspire hope and trust, provide stability, and think critically. But if you’re not doing this one thing, none of the other things matter.

Show up.

It seems obvious, but you’ll be surprised how many “leaders” struggle with this. 

Especially when they’re tasked with managing both operations and innovation.

It’s easy to show up to lead operations.

When you have experience and confidence, know likely cause and effect, and can predict with relative certainty what will happen next, it’s easy to show up. You’re less likely to be wrong, which means you face less risk to your reputation, current role, and career prospects.

When it’s time to be a leader in the core business, you don’t think twice about showing up. It’s your job. If you don’t, the business, your career, and your reputation suffer. So, you show up, make decisions, and lead the team out of the unexpected.

It’s hard to show up to lead innovation.

When you are doing something new, facing more unknowns than knowns, and can’t guarantee an outcome, let alone success, showing up is scary. No one will blame you if you’re not there because you’re focused on the core business and its known risks and rewards. If you “lead from the back” (i.e., abdicate your responsibility to lead), you can claim that the team, your peers, or the company are not ready to do what it takes.

When it’s time to be a leader in innovation, there is always something in the core business that is more urgent, more important, and more demanding of your time and attention. Innovation may be your job, but the company rewards you for delivering the core business, so of course, you think twice.

Show up anyway

There’s a reason people use the term “incubation” to describe the early days of the innovation process. To incubate means to “cause or aid the development of” but that’s the 2nd definition. The 1st definition is “to sit on so as to hatch by the warmth of the body.”

You can’t incubate if you don’t show up.

Show up to the meeting or call, even if something else feels more urgent. Nine times out of ten, it can wait half an hour. If it can’t, reschedule the meeting to the next day (or the first day after the crisis) and tell your team why. Don’t say, “I don’t have time,” own your choice and explain, “This isn’t a priority at the moment because….”

Show up when the team is actively learning and learn along with them. Attend a customer interview, join the read-out at the end of an ideation session, and observe people using your (or competitive) solutions. Ask questions, engage in experiments, and welcome the experiences that will inform your decisions.

Show up when people question what the innovation team is doing and why. Especially when they complain that those resources could be put to better use in the core business. Explain that the innovation resources are investments in the company’s future, paving the way for success in an industry and market that is changing faster than ever.

You can’t lead if you don’t show up.

Early in my career, a boss said, “A leader without followers is just a person wandering lost.” Your followers can’t follow you if they can’t find you.

After all, “80% of success is showing up.”

Image credit: Pixabay
