Category Archives: Technology

An Innovation Lesson From The Rolling Stones

GUEST POST from Robyn Bolton

If you’re like most people, you’ve faced disappointment. Maybe the love of your life didn’t return your affection, you didn’t get into your dream college, or you were passed over for promotion.  It hurts.  And sometimes, that hurt lingers for a long time.

Until one day, something happens, and you realize your disappointment was a gift.  You meet the true love of your life while attending college at your fallback school, and years later, when you get passed over for promotion, the two of you quit your jobs, pursue your dreams, and live happily ever after. Or something like that.

We all experience disappointment.  We also all get to choose whether we stay there, lamenting the loss of what coulda shoulda woulda been, or we can persevere, putting one foot in front of the other and playing The Rolling Stones on repeat:

“You can’t always get what you want

But if you try sometimes, well, you might just find

You get what you need”

That’s life.

That’s also innovation.

As innovators, especially leaders of innovators, we rarely get what we want. But we always get what we need (whether we like it or not).

We want to know. 
We need to be comfortable not knowing.

Most of us want to know the answer because if we know the answer, there is no risk. There is no chance of being wrong, embarrassed, judged, or punished.  But if there is no risk, there is no growth, expansion, or discovery.

Innovation is something new that creates value. If you know everything, you can’t innovate.

As innovators, we need to be comfortable not knowing.  When we admit to ourselves that we don’t know something, we open our minds to new information, new perspectives, and new opportunities. When we say we don’t know, we give others permission to be curious, learn, and create. 

We want the creative genius and billion-dollar idea. 
We need the team and the steady stream of big ideas.

We want to believe that one person blessed with sufficient time, money, and genius can change the world. Some people like to believe they are that person, and most of us think we can hire that person. When we do find that person and give them the resources they need, we expect them to deliver the billion-dollar idea that transforms our company, disrupts the industry, and changes the world.

Innovation isn’t magic.  Innovation is teamwork.

We need other people to help us see what we can’t and do what we struggle to do.  The idea-person needs the optimizer to bring her idea to life, and the optimizer needs the idea-person so he has a starting point.  We need lots of ideas because most won’t work, but we don’t know which ones those are, so we prototype, experiment, assess, and refine our way to the ones that will succeed.   

We want to be special.
We need to be equal.

We want to work on the latest and most cutting-edge technology and discuss it using terms that no one outside of Innovation understands. We want our work to be on stage, oohed and aahed over on analyst calls, and talked about with envy and reverence in every meeting. We want to be the cool kids, strutting around our super hip offices in our hoodies and flip-flops or calling into the meeting from Burning Man. 

Innovation isn’t about you.  It’s about serving others.

As innovators, we create value by solving problems.  But we can’t do it alone.  We need experienced operators who can quickly spot design flaws and propose modifications.  We need accountants and attorneys who instantly see risks and help us navigate around them.  We need people to help us bring our ideas to life, but that won’t happen if we act like we’re different or better.  Just as we work in service to our customers, we must also work in service to our colleagues by working with them, listening, compromising, and offering help.

What about you?
What do you want?
What are you learning you need?

Image Credit: Unsplash

AI and the Productivity Paradox

GUEST POST from Greg Satell

In the 1970s and 80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely, though, productivity growth decreased over the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back-office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry- and process-specific software, and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped, “You can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

The Hard Problem of Consciousness is Not That Hard

GUEST POST from Geoffrey A. Moore

We human beings like to believe we are special—and we are, but not as special as we might like to think. One manifestation of our need to be exceptional is the way we privilege our experience of consciousness. This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:

“… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”

— David Chalmers, Facing up to the problem of consciousness

The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.

The key word here is experience. It emerges out of cognitive processes, but it is not completely reducible to them. For anyone who has read much in the field of complexity, this should not come as a surprise. All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate. Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence. Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct. Shakespeare is reducible to quantum effects? Good luck with that.

Most people’s problem with explaining experience, on the other hand, is that they place it too high. They want to use their own personal experience as a grounding point. The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets. Philosophers call such experiences qualia, and they attribute all sorts of ineluctable and mysterious qualities to them. But there is a much better way to understand what qualia really are—namely, the pre-linguistic mind’s predecessor to ideas. That is, they are representations of reality that confer strategic advantage to the organism that can host and act upon them.

Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated. Experiences are what we remember. That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged. Of course, we do have experiences with language directly—lots of them. But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability.

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

  • The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences. Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage to organisms under conditions of natural and sexual selection. Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival. Once the brain emerges, chemical signaling is supplemented by electrical signaling to the same ends. There is no magic here, only a change of medium.
  • Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.” The answer to this question is, “level by level.” First sentience has to emerge from non-sentience. That happens with the emergence of life at the cellular level. Then sentience has to spread beyond the cell. That happens when chemical signaling enables cellular communication. Then sentience has to speed up to enable mobile life. That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems. Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences. Again, no miracles required.
  • Others, such as Daniel Dennett and Patricia Churchland, believe that the hard problem is really more of a collection of easy problems, and will be solved through further analysis of the brain and behavior. If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.
  • Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience. Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later. Specifically, to experience anything as like anything else is not possible without the intervention of language. That is, likeness is not a quale; it is a language-enabled idea. Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he is posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected in that only human beings have selves. Selves, in other words, are creations of language. More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives. This is a completely language-dependent undertaking and thus not available to pets or infants. Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, then hear stories about themselves, that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness.

  • Chalmers argues that experience is more than the sum of its parts. In other words, experience is irreducible. Yes, but let’s not be mysterious here. Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements. To say something is irreducible does not mean that it is unexplainable.
  • Wolfgang Fasching argues that the hard problem is not about qualia, but about pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate. In this case, the dangerous phrase is “the nature of seeing.” There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here. Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land). Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors. But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me. I don’t think I’ve exhausted the topic, so let me close by saying…

That’s what I think, what do you think?

Image Credit: Pixabay

Leaders Avoid Doing This One Thing

GUEST POST from Robyn Bolton


Being a leader isn’t easy. You must BE accountable, compassionate, confident, curious, empathetic, focused, service-driven, and many other things. You must DO many things, including build relationships, communicate clearly, constantly learn, create accountability, develop people, inspire hope and trust, provide stability, and think critically. But if you’re not doing this one thing, none of the other things matter.

Show up.

It seems obvious, but you’ll be surprised how many “leaders” struggle with this. 

Especially when they’re tasked with managing both operations and innovation.

It’s easy to show up to lead operations.

When you have experience and confidence, know likely cause and effect, and can predict with relative certainty what will happen next, it’s easy to show up. You’re less likely to be wrong, which means you face less risk to your reputation, current role, and career prospects.

When it’s time to be a leader in the core business, you don’t think twice about showing up. It’s your job. If you don’t, the business, your career, and your reputation suffer. So, you show up, make decisions, and lead the team out of the unexpected.

It’s hard to show up to lead innovation.

When you are doing something new, facing more unknowns than knowns, and can’t guarantee an outcome, let alone success, showing up is scary. No one will blame you if you’re not there because you’re focused on the core business and its known risks and rewards. If you “lead from the back” (i.e., abdicate your responsibility to lead), you can claim that the team, your peers, or the company are not ready to do what it takes.

When it’s time to be a leader in innovation, there is always something in the core business that is more urgent, more important, and more demanding of your time and attention. Innovation may be your job, but the company rewards you for delivering the core business, so of course, you think twice.

Show up anyway

There’s a reason people use the term “incubation” to describe the early days of the innovation process. To incubate means to “cause or aid the development of,” but that’s the second definition. The first definition is “to sit on so as to hatch by the warmth of the body.”

You can’t incubate if you don’t show up.

Show up to the meeting or call, even if something else feels more urgent. Nine times out of ten, it can wait half an hour. If it can’t, reschedule the meeting to the next day (or the first day after the crisis) and tell your team why. Don’t say, “I don’t have time.” Own your choice and explain, “This isn’t a priority at the moment because….”

Show up when the team is actively learning and learn along with them. Attend a customer interview, join the read-out at the end of an ideation session, and observe people using your (or competitive) solutions. Ask questions, engage in experiments, and welcome the experiences that will inform your decisions.

Show up when people question what the innovation team is doing and why. Especially when they complain that those resources could be put to better use in the core business. Explain that the innovation resources are investments in the company’s future, paving the way for success in an industry and market that is changing faster than ever.

You can’t lead if you don’t show up.

Early in my career, a boss said, “A leader without followers is just a person wandering lost.” Your followers can’t follow you if they can’t find you.

After all, “80% of success is showing up.”

Image credit: Pixabay

The Robots Aren’t Really Going to Take Over

GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that doesn’t seem bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs; it replaces tasks, and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t with automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agricultural workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine anyone working in agriculture a hundred years ago would be horrified to find that their jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They may have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Impact of Artificial Intelligence on Future Employment

GUEST POST from Chateau G Pato

The rapid progression of artificial intelligence (AI) has ignited both intrigue and fear among experts in various industries. While the advancements in AI hold promises of improved efficiency, increased productivity, and innumerable benefits, concerns have been raised about the potential impact on employment. As AI technology continues to evolve and permeate different sectors, it is crucial to examine the implications it may have on the workforce. This article will delve into the impact of AI on future employment, exploring two case study examples that shed light on the subject.

Case Study 1: Autonomous Vehicles

One area where AI has gained significant traction in recent years is autonomous vehicles. While self-driving cars promise to revolutionize transportation, they also pose a potential threat to traditional driving jobs. According to a study conducted by the University of California, Berkeley, an estimated 300,000 truck driving jobs could be at risk in the coming decades due to the rise of autonomous vehicles.

Although this projection may seem alarming, it is important to note that AI-driven automation can also create new job opportunities. With the emergence of autonomous vehicles, positions such as remote monitoring operators, vehicle maintenance technicians, and safety supervisors are likely to be in demand. Additionally, the introduction of AI in this sector could also lead to the creation of entirely new industries such as ride-hailing services, data analysis, and infrastructure development related to autonomous vehicles. Therefore, while some jobs may be displaced, others will potentially emerge, resulting in a shift rather than a complete loss in employment opportunities.

Case Study 2: Healthcare and Diagnostics

The healthcare industry is another sector profoundly impacted by artificial intelligence. AI has already demonstrated remarkable prowess in diagnosing diseases and providing personalized treatment plans. For instance, IBM’s Watson, a cognitive computing system, has proved capable of analyzing vast amounts of medical literature and patient data to assist physicians in making more accurate diagnoses.

While AI undoubtedly enhances healthcare outcomes, concerns arise regarding the future of certain medical professions. Radiologists, for example, who primarily interpret medical images, may face challenges as AI algorithms become increasingly proficient at detecting abnormalities. A study published in Nature in 2020 revealed that AI could outperform human radiologists in interpreting mammograms. As AI is more widely incorporated into the healthcare system, the role of radiologists may evolve to focus on higher-level tasks such as treatment decisions, patient consultation, and research.

Moreover, the integration of AI into healthcare offers new employment avenues. The demand for data scientists, AI engineers, and software developers specialized in healthcare will likely increase. Additionally, healthcare professionals with expertise in data analysis and managing AI systems will be in high demand. As AI continues to transform the healthcare industry, the focus should be on retraining and up-skilling to ensure a smooth transition for affected employees.

Conclusion

The impact of artificial intelligence on future employment is a complex subject with both opportunities and challenges. While certain job roles may face disruption, AI also creates the potential for new roles to emerge. The cases of autonomous vehicles and AI in healthcare provide compelling examples of how the workforce can adapt and evolve alongside technology. Preparing for this transition will require a concerted effort from policymakers, employers, and individuals to ensure a smooth integration of AI into the workplace while safeguarding the interests of employees.

Bottom line: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: Pexels

Will Artificial Intelligence Make Us Stupid?

GUEST POST from Shep Hyken

I was just at an industry conference focusing on AI (Artificial Intelligence). Someone commented, “AI is going to make us stupid.” Elaborating on that statement, the commenter’s reasoning was that it takes thinking and problem-solving out of the process. We will be given the answer and won’t have to know anything else.

I can see his point, but there is another way of looking at this. In the form of a question, “Did calculators make us dumb?”

I remember getting a calculator and being excited that I could do long division just by pushing the buttons. Even though it gave me the correct answer, I still had to know what to do with it. It didn’t make me dumb. It made me more efficient.

I liken this to my school days when the teacher said we could bring our books and notes to the final exam. Specifically, I remember my college algebra teacher saying, “I don’t care if you memorize formulas or not. What I care about is that you know how to use the formulas. So, on your way out of today’s class, you will receive a sheet with all the formulas you need to solve the problems on the test.”

Believe me when I tell you that having the formulas didn’t make taking the test easier. However, it did make studying easier. I didn’t have to spend time memorizing formulas. Instead, I focused on how to use the information to efficiently get the correct answer.

[Image: Shep Hyken Artificial Intelligence Cartoon]

So, how does this apply to customer service? Many people think that AI will be used to replace customer support agents – and even salespeople. They believe all customer questions can be answered digitally with AI-infused technology. That may work for basic questions. For higher-level questions and problems, we still need experts. But there is much more.

AI can’t build relationships. Humans can. So, imagine the customer service agent or salesperson using AI to help them solve problems and get the best answers for their customers. But rather than just reciting the information in front of them, they put their personality into the responses. They communicate the information in a way their customers understand and can relate to. They answer additional and clarifying questions. They can even make suggestions outside of the original intent of the customer’s call. This mixes the best of both worlds: almost instantly accessible, accurate information with a live person’s relationship- and credibility-building skills. That’s a winning combination.

No, AI won’t make us dumb unless we let it. Instead, AI will help us be more efficient and effective. And it could even make us appear to be smarter!

Image Credits: Shep Hyken, Pixabay

Why Most Corporate Innovation Programs Fail

(And How To Make Them Succeed)

GUEST POST from Greg Satell

Today, everybody needs to innovate. So it shouldn’t be surprising that corporate innovation programs have become wildly popular. There is an inherent tradeoff between innovation and the type of optimization that operational executives excel at. Creating a separate unit to address innovation just makes intuitive sense.

Yet corporate innovation programs often fail and it’s not hard to see why. Unlike other business functions, like marketing or finance, in a healthy organization everybody takes pride in their ability to innovate. Setting up a separate innovation unit can often seem like an affront to those who work hard to innovate in operational units.

Make no mistake, a corporate innovation program is no panacea. It doesn’t replace the need to innovate every day. Yet a well-designed program can augment those efforts, take the business in new directions and create real value. The key to a successful innovation program is a clear mandate, built on shared purpose, to solve important problems.

A Good Innovation Program Extends, It Doesn’t Replace

It’s no secret that Alphabet is one of the most powerful companies in the world. Nevertheless, it has a vulnerability that is often overlooked. Much like Xerox and Kodak decades ago, it’s highly dependent on a single revenue stream. In 2018, 86% of its revenues came from advertising, mostly from its Google search business.

It is with this in mind that the company created its X division. Because the unit was set up to pursue opportunities outside of its core search business, it didn’t encounter significant resistance. In fact, the X division is widely seen as an extension of what made Alphabet so successful in the first place.

Another important aspect is that the X division provides a platform to incubate internal projects. For example, Google Brain started out as a “20% time project.” As it progressed and needed more resources, it was moved to the X division, where it was scaled up further. Eventually, it returned to the mothership and today is an integral part of the core business.

Notice how the vision of the X division was never to replace innovation efforts in the core business, but to extend them. That’s been a big part of its success and has led to exciting new businesses like Waymo autonomous vehicles and the Verily healthcare division.

Focus On Commonality, Not Difference

All too often, innovation programs thrive on difference. They are designed to put together a band of mavericks and disruptors who think differently than the rest of the organization. That may be great for instilling a strong esprit de corps among those involved with the innovation program, but it’s likely to alienate others.

As I explain in Cascades, any change effort must be built on shared purpose and shared values. That’s how you build trust and form the basis for effective collaboration between the innovation program and the rest of the organization. Without those bonds of trust, any innovation effort is bound to fail.

You can see how that works in Alphabet’s X division. It is not seen as fundamentally different from the core Google business, but rather as channeling the company’s strengths in new directions. The business opportunities it pursues may be different, but the core values are the same.

The key question to ask is why you need a corporate innovation program in the first place. If the answer is that you don’t feel your organization is innovative enough, then you need to address that problem first. A well-designed innovation program can’t be a band-aid for larger issues within the core business.

Executive Sponsorship Isn’t Enough

Clearly, no corporate innovation program can be successful without strong executive sponsorship. Commitment has to come from the top. Yet just as clearly, executive sponsorship isn’t enough. Unless you can build support among key stakeholders inside and outside the organization, support from the top is bound to erode.

For example, when Eric Haller started Datalabs at Experian, he designed it to be focused on customers, rather than ideas developed internally. “We regularly sit down with our clients and try and figure out what’s causing them agita,” he told me, “because we know that solving problems is what opens up enormous business opportunities for us.”

Because the Datalabs unit works directly with customers to solve problems that are important to them, it has strong support from a key stakeholder group. Another important aspect at Datalabs is that once a project gets beyond the prototype stage, it goes to one of the operational units within the company to be scaled up into a real business. Over the past five years, businesses originated at Datalabs have added over $100 million in new revenues.

Perhaps most importantly, Haller is acutely aware of how innovation programs can cause resentment, so he works hard to reduce tensions by building collaborations around the organization. Datalabs is not where “innovation happens” at Experian. Rather, it serves to augment and expand capabilities that were already there.

Don’t Look For Ideas, Identify Meaningful Problems

Perhaps most importantly, an innovation program should not be seen as a place to generate ideas. The truth is that ideas can come from anywhere. So designating one particular program in which ideas are supposed to happen will not only alienate the rest of the organization, it is also likely to overlook important ideas generated elsewhere.

The truth is that innovation isn’t about ideas. It’s about solving problems. In researching my book, Mapping Innovation, I came across dozens of stories from every conceivable industry and field and it always started with someone who came across a problem they wanted to solve. Sometimes, it happened by chance, but in most cases I found that great innovators were actively looking for problems that interested them.

If you look at successful innovation programs like Alphabet’s X division and Experian’s Datalabs, the fundamental activity is exploration. X division explores domains outside of search, while Datalabs explores problems that its customers need solved. Once you identify a meaningful problem, the ideas will come.

That’s the real potential of innovation programs. They provide a space to explore areas that don’t fit with the current business, but may play an important role in its future. A good innovation program doesn’t replace capabilities in the core organization, but leverages them to create new opportunities.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Four Major Shifts Driving the 21st Century

GUEST POST from Greg Satell

In 1900, most people lived much like their ancestors had for millennia. They lived and worked on farms, using animal power and hand tools to augment their own abilities. They inhabited small communities and rarely, if ever, traveled far from home. They engaged in small scale violence and lived short, hard lives.

That would all change over the next century as we learned to harness the power of internal combustion, electricity and atoms. These advancements allowed us to automate physical labor on a large scale, engage in mass production, travel globally and wage violence that could level entire cities.

Today, at the beginning of a new century, we are seeing similar shifts that are far more powerful and are moving far more quickly. Disruption is no longer seen as merely an event, but a way of life and the fissures are there for all to see. Our future will depend on our determination to solve problems faster than our proclivity to continually create them.

1. Technology Shifts

At the turn of the 20th century, electricity and internal combustion were over a decade old, but hadn’t made much of an impact yet. That would change in the 1920s, as roads got built and new appliances that harnessed the power of electricity were invented. As ecosystems formed around new technologies, productivity growth soared and quality of life increased markedly.

There would be two more major technology shifts over the course of the century. The Green Revolution and the golden age of antibiotics in the 50s and 60s saved an untold number of lives. The digital revolution in the 90s created a new era of communication and media that still reverberates today.

These technological shifts worked for both good and ill in that they revealed the best and worst parts of human nature. Increased mobility helped to bring about violence on a massive scale during two world wars. The digital revolution made war seem almost antiseptic, enabling precision strikes to kill people half a world away at the press of a button.

Today, we are on the brink of a new set of technological shifts that will be more powerful and more pervasive than any we have seen before. The digital revolution is ending, yet new technologies, such as novel computing architectures, artificial intelligence, as well as rapid advancements in genomics and materials science promise to reshape the world as we know it.

2. Resource Shifts

As new technologies reshaped the 20th century, they also reshaped our use of resources. Some of these shifts were subtle, such as how the invention of synthetic indigo dye in Germany affected farmers in India. Yet the biggest resource shift, of course, was the increase in the demand for oil.

The most obvious impact from the rise of oil was how it affected the Middle East. Previously nomadic societies were suddenly awash in money. Within just a single generation, countries like Saudi Arabia, Iraq and Iran became global centers of power. The Arab Oil Embargo of the 1970s nearly brought western societies to their knees and prolonged the existence of the Soviet Union.

So I was more than surprised to find, at a conference in Bahrain last year, that nearly every official talked openly about the need to “get off oil.” With the rise of renewable energy, depending on a single commodity is no longer a viable way to run a society. Today, solar power is soaring in the Middle East.

Still, resource availability remains a powerful force. As the demand for electric vehicles increases, the supply of lithium could become a serious issue. Already China is threatening to leverage its dominance in rare earth elements in the trade war with the United States. Climate change and population growth are also making water a scarce resource in many places.

3. Migrational Shifts

One of the most notable shifts in the 20th century was how the improvement in mobility enabled people to “vote with their feet.” Those who faced persecution or impoverishment could, if they dared, sail off to some other place where the prospects were better. These migrational shifts also helped shape the 20th century and will likely do the same in the 21st.

Perhaps the most notable migration in the 20th century was from Europe to the United States. Before World War I, immigrants from Southern and Eastern Europe flooded American shores and the backlash led to the Immigration Act of 1924. Later, the rise of fascism led to another exodus from Europe that included many of its greatest scientists.

It was largely through the efforts of immigrant scientists that the United States was able to develop technologies like the atomic bomb and radar during World War II. Less obvious, though, are the contributions of second- and third-generation citizens, who make up a large proportion of the economic and political elite in the US.

Today, the most noteworthy shift is the migration of largely Muslim people from war-torn countries into Europe. Much like America in the 1920s, the strains of taking in so many people so quickly has led to a backlash, with nationalist parties making significant gains in many countries.

4. Demographic Shifts

While the first three shifts played strong roles throughout the 20th century, demographic shifts, in many ways, shaped the second half of the century. The post war generation of Baby Boomers repeatedly challenged traditional values and led the charge in political movements such as the struggle for civil rights in the US, the Prague Spring in Czechoslovakia and the March 1968 protests in Poland.

The main drivers of the Baby Boomers’ influence have been the generation’s size and economic prosperity. In America alone, 76 million people were born between 1946 and 1964, and they came of age in the prosperous years of the 1960s. These factors gave them unprecedented political and economic clout that continues to this day.

Yet now, Millennials, who are more diverse and focused on issues such as the environment and tolerance, are beginning to outnumber Baby Boomers. Much like in the 1960s, their increasing influence is driving trends in politics, the economy and the workplace, and their values often put them in conflict with the Baby Boomers.

However, unlike the Baby Boomers, Millennials are coming of age in an era where prosperity seems to be waning. With Baby Boomers retiring and putting further strains on the economy, especially with regard to healthcare costs, tensions are on the rise.

Building On Progress

As Mark Twain is reputed to have said, “History doesn’t repeat itself, but it does rhyme.” While shifts in technology, resources, migration and demographics were spread throughout the 20th century, today we’re experiencing shifts in all four areas at once. Given that the 20th century was rife with massive wars and genocide, that is somewhat worrying.

Many of the disturbing trends around the world, such as the rise of authoritarian and populist movements, global terrorism and cyber warfare, can be attributed to the four shifts. Yet the 20th century was also a time of great progress. Wars became less frequent, life expectancy doubled and poverty fell while quality of life improved dramatically.

So today, while we face seemingly insurmountable challenges, we should also remember that many of the shifts that cause tensions also give us the power to solve our problems. Advances in genomics and materials science can address climate change and rising healthcare costs. A rising, multicultural generation can unlock creativity and innovation. Migration can move workers to places where they are sorely needed.

The truth is that every disruptive era is not only fraught with danger, but also opportunity. Every generation faces unique challenges and must find the will to solve them. My hope is that we will do the same. The alternative is unthinkable.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Digital Revolution Has Been A Giant Disappointment

GUEST POST from Greg Satell

One of the most often repeated episodes in the history of technology is when Steve Jobs was recruiting John Sculley from his lofty position as CEO at Pepsi to come to Apple. “Do you want to sell sugar water for the rest of your life,” Jobs asked, “or do you want to come with me and change the world?”

It’s a strange conceit of digital denizens that their businesses are something nobler than other industries. While it is true that technology can do some wonderful things, if the aim of Silicon Valley entrepreneurs was truly to change the world, why wouldn’t they apply their formidable talents to something like curing cancer or feeding the hungry?

The reality, as economist Robert Gordon explains in The Rise and Fall of American Growth, is that the measurable impact has been relatively meager. According to the IMF, except for a relatively short burst in growth between 1996 and 2004, productivity has been depressed since the 1970s. We need to rethink how technology impacts our world.

The Old Productivity Paradox

In the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year. Strangely, though, productivity growth decreased over the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, businesses weren’t necessarily investing to improve productivity, but to survive in a more demanding marketplace.

Yet by the late 1990s, increased computing power combined with the Internet to create a new productivity boom. Many economists hailed the digital age as a “new economy” of increasing returns, in which the old rules no longer applied and a small initial advantage would lead to market dominance. The mystery of the productivity paradox, it seemed, had been solved. We just needed to wait for the technology to hit critical mass.

The New Productivity Paradox

By 2004, the law of increasing returns was there for everyone to see. Google already dominated search, Amazon ruled e-commerce, Apple would go on to dominate mobile computing and Facebook would rule social media. Yet as the dominance of the tech giants grew, productivity would once again fall to depressed levels.

Yet today, more than a decade later, we’re in the midst of a second productivity paradox, just as mysterious as the first one. New technologies like mobile computing and artificial intelligence are there for everyone to see, but they have done little, if anything, to boost productivity.

At the same time, the power of digital technology is diminishing. Moore’s law, the decades-old paradigm of continuous doubling in the power of computer processing, is slowing down and will soon end completely. Without advancement in the underlying technology, it is hard to see how digital technology will ever power another productivity boom.

Considering the optimistic predictions of digital entrepreneurs like Steve Jobs, this is incredibly disappointing. Compare the meager eight years of elevated productivity that digital technology produced with the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that digital technology simply doesn’t measure up.

The Baumol Effect, The Clothesline Paradox and Other Headwinds

Much like the first productivity paradox, it’s hard to determine exactly why the technological advancement over the last 15 years has amounted to so little. Most likely, it is not one factor in particular, but the confluence of a number of them. Increasing productivity growth in an advanced economy is no simple thing.

One possibility for the lack of progress is the Baumol effect, the principle that some sectors of the economy are resistant to productivity growth. For example, despite the incredible efficiency that Jeff Bezos has produced at Amazon, his barber still only cuts one head of hair at a time. In a similar way, sectors like healthcare and education, which require a large amount of labor inputs that resist automation, will act as a drag on productivity growth.
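To see why a few stagnant sectors can hold back a whole economy, here is a minimal sketch in Python. The sector shares and growth rates below are invented purely for illustration, not actual economic data; the point is simply that aggregate productivity growth is an output-weighted average, so large, labor-intensive sectors drag the total down.

    # Illustrative only: hypothetical sector shares and productivity growth rates,
    # not actual economic data. Aggregate growth is the output-weighted average,
    # so large sectors that resist automation (the Baumol effect) drag it down.
    sectors = {
        # name: (share of economy, annual productivity growth)
        "manufacturing": (0.20, 0.040),  # heavily automated, improves quickly
        "retail":        (0.15, 0.030),
        "healthcare":    (0.35, 0.005),  # labor-intensive, resists automation
        "education":     (0.30, 0.005),
    }

    aggregate_growth = sum(share * growth for share, growth in sectors.values())
    print(f"Aggregate productivity growth: {aggregate_growth:.2%}")
    # Roughly 1.6% overall, even though the automated sectors grow at 3-4%,
    # because two-thirds of this toy economy barely improves at all.

Under these made-up numbers, shifting even more of the economy toward the stagnant sectors pushes the aggregate lower still, which is exactly the drag Baumol described.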

Another factor is the Clothesline paradox, which gets its name from the fact that when you dry your clothes in a machine, it figures into GDP data, but when you hang them on a clothesline, no measurable output is produced. In much the same way, when you use a smartphone to take pictures or to give you directions, there is considerable benefit that doesn’t result in any financial transactions. In fact, because you use less gas and don’t develop film, GDP decreases somewhat.

Additionally, the economist Robert Gordon, mentioned above, notes six headwinds to economic growth, including aging populations, limits to increasing education, income inequality, outsourcing, environmental costs due to climate change and rising household and government debt. It’s hard to see how digital technology will make a dent in any of these problems.

Technology is Never Enough to Change the World

Perhaps the biggest reason that the digital revolution has been such a big disappointment is because we expected the technology to largely do the work for us. While there is no doubt that computers are powerful tools, we still need to put them to good use and we have clearly missed opportunities in that regard.

Think about what life was like in 1900, when the typical American family didn’t have access to running water, electricity or gas-powered machines such as tractors or automobiles. Even something as simple as cooking a meal took hours of backbreaking labor. Yet investments in infrastructure and education combined with technology to produce prosperity.

Today, however, there is no comparable effort to invest in education and healthcare for those who cannot afford it, to limit the effects of climate change, to reduce debt or to do anything of significance to mitigate the headwinds we face. We are awash in nifty gadgets, but in many ways we are no better off than we were 30 years ago.

None of this was inevitable; rather, it is the result of choices that we have made. We can, if we really want to, make different choices in the days and years ahead. What I hope we have learned from our digital disappointments is that technology itself is never enough. We are truly the masters of our fate, for better or worse.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
