Tag Archives: Artificial Intelligence

Innovation Evolution in the Era of AI


GUEST POST from Stefan Lindegaard

Half a decade ago, I laid out a perspective on the evolution of innovation. Now, I return to these reflections with a sentiment of both awe and unease as I observe the profound impacts of AI on innovation and business at large. The transformation unfolding before us presents a remarkable panorama of opportunities, yet it also carries with it the potential for disruption, hence the mixed feelings.

1. The Reign of R&D (1970-2015): There was a time when the Chief Technology Officer (CTO) held the reins. The focus was almost exclusively on Research and Development (R&D), with the power of the CTO often towering over the innovative impulses of the organization. Technology drove progress, but a tech-exclusive vision could sometimes be a hidden pitfall.

2. Era of Innovation Management (1990-2001): A shift towards understanding innovation as a strategic force began to emerge in the ’90s. The concept of managing innovation, previously only a flicker in the business landscape, began its journey towards being a guiding light. Pioneers like Christensen brought innovation into the educational mainstream, marking a paradigm shift in the mindsets of future business leaders.

3. Business Models & Customer Experience (2001-2008): The millennium ushered in an era where simply possessing superior technology wasn’t a winning card anymore. Process refinement, service quality, and most critically, innovative business models became the new mantra. Firms like Microsoft demonstrated this shift, evolving their strategies to stay competitive in this new game.

4. Ecosystems & Platforms (2008-2018): This phase saw the rise of ecosystems and platforms, representing a shift from isolated competition to interconnected collaboration. The lines that once defined industries began to blur. Companies from emerging markets, particularly China, became global players, and we saw industries morphing and intermingling. Case in point: was it still the automotive industry, or had the mobility industry arrived?

5. Corporate Transformation (2019-2025): With the onslaught of digital technologies, corporations faced the need to transform from within. Technological adoption wasn’t a mere surface-level change anymore; it demanded a thorough, comprehensive rethinking of strategies, structures, and processes. Anything less was simply insufficient to weather the storm of this digital revolution.

6. Comborg Transformation (2025-??): As we gaze into the future, the ‘Comborg’ era comes into view. This era sees organizations fusing human elements and digital capabilities into a harmonious whole. In this stage, the equilibrium between human creativity and AI-driven efficiency will be crucial, an exciting but challenging frontier to explore.

I believe that revisiting this timeline of innovation’s evolution highlights the remarkable journey we’ve undertaken. As we now figure out the role of AI in innovation and business, it’s an exciting but also challenging time. Even though it can be a bit scary, I believe we can create a successful future if we use AI in a responsible and thoughtful way.

Stefan Lindegaard Evolution of Innovation

Image Credit: Stefan Lindegaard, Unsplash


An Innovation Rant: Just Because You Can Doesn’t Mean You Should


GUEST POST from Robyn Bolton

Why are people so concerned about, afraid of, or resistant to new things?

Innovation, by its very nature, is good.  It is something new that creates value.

Naturally, the answer has nothing to do with innovation.

It has everything to do with how we experience it. 

And innovation without humanity is a very bad experience.

Over the last several weeks, I’ve heard so many stories of inhuman innovation that I have said, “I hate innovation” more than once.

Of course, I don’t mean that (I would be at an extraordinary career crossroads if I did).  What I mean is that I hate the choices we make about how to use innovation. 

Just because AI can filter resumes doesn’t mean you should remove humans from the process.

Years ago, I oversaw recruiting for a small consulting firm of about 50 people.  I was a full-time project manager, but given our size, everyone was expected to pitch in and take on extra responsibilities.  Because of our founder, we received more resumes than most firms our size, so I usually spent 2 to 3 hours a week reviewing them and responding to applicants.  It was usually boring, sometimes hilarious, and always essential because of our people-based business.

Would I have loved to have an AI system sort through the resumes for me?  Absolutely!

Would we have missed out on incredible talent because they weren’t our “type?”  Absolutely!

AI judges a resume based on keywords and other factors you program in.  This probably means that it filters out people who worked in multiple industries, aren’t following a traditional career path, or don’t have the right degree.

This also means that you are not accessing people who bring a new perspective to your business, who can make the non-obvious connections that drive innovation and growth, and who bring unique skills and experiences to your team and its ideas.

If you permit AI to find all your talent, pretty soon, the only talent you’ll have is AI.
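To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of keyword screen described above. The keywords, cutoff, and sample resume are invented for illustration; real applicant-tracking systems are more elaborate, but they share the same failure mode: anyone whose experience doesn’t match the pre-programmed pattern never reaches a human.

```python
# Hypothetical keyword-based resume screen (illustration only).
# The keywords and cutoff are invented; real systems differ in detail,
# but they only find what they were told to look for.

REQUIRED_KEYWORDS = {"mba", "consulting", "strategy", "powerpoint"}
MIN_MATCHES = 3  # arbitrary cutoff

def passes_screen(resume_text: str) -> bool:
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= MIN_MATCHES

# A career-changer with rare, relevant experience but the "wrong" vocabulary
# is rejected before any human ever sees the resume.
nontraditional = "Ten years leading field hospitals; built logistics strategy under pressure"
print(passes_screen(nontraditional))  # False
```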

Just because you can ghost people doesn’t mean you should.

Rejection sucks.  When you reject someone, and they take it well, you still feel a bit icky and sad.  When they don’t take it well, as one of my colleagues said when viewing a response from a candidate who did not take the decision well, “I feel like I was just assaulted by a bag of feathers.  I’m not hurt.  I’m just shocked.”

So, I understand ghosting feels like the better option.  It’s not.  At best, it’s lazy, and at worst, it’s selfish.  Especially if you’re a big company using AI to screen resumes. 

It’s not hard to add a function that triggers a standard rejection email when the AI filters someone out.  It’s not that hard to have a pre-programmed email that can quickly be clicked and sent when a human makes a decision.
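For what it’s worth, the kind of function being described really is a few lines of code. Here is a hypothetical sketch; the template wording and the send_email helper are stand-ins for whatever mail service a company already uses.

```python
# Hypothetical rejection-notification hook (illustration only).
# send_email() is a stand-in for a real mail service integration.

REJECTION_TEMPLATE = (
    "Dear {name},\n\n"
    "Thank you for applying. After careful review, we will not be moving "
    "forward with your application. We appreciate your interest and wish "
    "you the best in your search.\n"
)

def send_email(to_address: str, body: str) -> None:
    print(f"To: {to_address}\n{body}")  # replace with an actual mail API call

def on_candidate_rejected(name: str, email: str) -> None:
    """Call this whenever the screen (or a human) rejects a candidate."""
    send_email(email, REJECTION_TEMPLATE.format(name=name))

on_candidate_rejected("Jordan", "jordan@example.com")
```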

The Golden Rule – do unto others as you would have done unto you – doesn’t apply to AI.  It does apply to you.

Just because you can stack bots on bots doesn’t mean you should.

At this point, we all know that our first interaction with customer service will be with a bot.  Whether it’s an online chatbot or an automated phone tree, the journey to a human is often long and frustrating. Fine.  We don’t like it, but we don’t have a choice.

But when a bot transfers us to a bot masquerading as a person?  Do you hate your customers that much?

Some companies do, as my husband and I discovered.  I was on the phone with one company trying to resolve a problem, and he was in a completely different part of the house on the phone with another company trying to fix a separate issue.  When I wandered to the room where my husband was to get information that the “person” I was talking to needed, I noticed he was on hold.  Then he started staring at me funny (not as unusual as you might think).  Then he asked me to put my call on speaker (that was unusual).  After listening for a few minutes, he said, “I’m talking to the same woman.”

He was right.  As we listened to each other’s calls, we heard the same “woman” with the same tenor of voice, unusual cadence of speech, and indecipherable accent.  We were talking to a bot.  It was not helpful.  It took each of us several days and several more calls to finally reach humans.  When that happened, our issues were resolved in minutes.

Just because innovation can doesn’t mean you should allow it to.

You are a human.  You know more than the machine knows (for now).

You are interacting with other humans who, like you, have a right to be treated with respect.

If you forget these things – how important you and your choices are and how you want to be treated – you won’t have to worry about AI taking your job.  You already gave it away.

Image Credit: Pexels


Top 10 Human-Centered Change & Innovation Articles of September 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are September’s ten most popular innovation posts:

  1. The Malcolm Gladwell Trap — by Greg Satell
  2. Where People Go Wrong with Minimum Viable Products — by Greg Satell
  3. Our People Metrics Are Broken — by Mike Shipulski
  4. Why You Don’t Need An Innovation Portfolio — by Robyn Bolton
  5. Do you have a fixed or growth mindset? — by Stefan Lindegaard
  6. Building a Psychologically Safe Team — by David Burkus
  7. Customer Wants and Needs Not the Same — by Shep Hyken
  8. The Hard Problem of Consciousness is Not That Hard — by Geoffrey A. Moore
  9. Great Coaches Do These Things — by Mike Shipulski
  10. How Not to Get in Your Own Way — by Mike Shipulski

BONUS – Here are five more strong articles published in August that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:







AI and the Productivity Paradox


GUEST POST from Greg Satell

In the 1970s and ’80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely, though, productivity growth decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth, and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free-market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.
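A back-of-the-envelope illustration (with invented numbers) shows why that kind of investment is invisible in the statistics: output and labor hours don’t change, so measured productivity doesn’t either.

```python
# Invented numbers: a widget maker before and after buying an order-tracking system.
widgets_shipped = 100_000   # the IT system speeds up shipping, not production
labor_hours = 50_000        # headcount is unchanged

productivity_before = widgets_shipped / labor_hours
productivity_after = widgets_shipped / labor_hours   # same widgets, same hours

# 2.0 widgets per labor hour either way -- no measured gain, even though
# customers are happier and the firm has defended its competitive position.
print(productivity_before, productivity_after)
```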

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back-office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry and process specific software and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped, “You can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is actually often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels







The Hard Problem of Consciousness is Not That Hard


GUEST POST from Geoffrey A. Moore

We human beings like to believe we are special—and we are, but not as special as we might like to think. One manifestation of our need to be exceptional is the way we privilege our experience of consciousness. This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:

“… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”

— David Chalmers, Facing up to the problem of consciousness

The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.

The key word here is experience. It emerges out of cognitive processes, but it is not completely reducible to them. For anyone who has read much in the field of complexity, this should not come as a surprise. All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate. Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence. Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct. Shakespeare is reducible to quantum effects? Good luck with that.

Most people’s problem with explaining experience, on the other hand, is that they place it too high. They want to use their own personal experience as a grounding point. The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets. Philosophers call such experiences qualia, and they attribute all sorts of ineluctable and mysterious qualities to them. But there is a much better way to understand what qualia really are—namely, the pre-linguistic mind’s predecessor to ideas. That is, they are representations of reality that confer strategic advantage to the organism that can host and act upon them.

Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated. Experiences are what we remember. That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged. Of course, we do have experiences with language directly—lots of them. But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability.

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

  • The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences. Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage to organisms under conditions of natural and sexual selection. Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival. Once the brain emerges, chemical signaling is supplemented by electrical signaling to the same ends. There is no magic here, only a change of medium.
  • Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.” The answer to this question is, “level by level.” First sentience has to emerge from non-sentience. That happens with the emergence of life at the cellular level. Then sentience has to spread beyond the cell. That happens when chemical signaling enables cellular communication. Then sentience has to speed up to enable mobile life. That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems. Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences. Again, no miracles required.
  • Others, such as Daniel Dennett and Patricia Churchland, believe that the hard problem is really more of a collection of easy problems, and will be solved through further analysis of the brain and behavior. If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.
  • Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience. Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later. Specifically, to experience anything as like anything else is not possible without the intervention of language. That is, likeness is not a quale; it is a language-enabled idea. Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he is posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected in that only human beings have selves. Selves, in other words, are creations of language. More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives. This is a completely language-dependent undertaking and thus not available to pets or infants. Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, then hear stories about themselves, that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness.

  • Chalmers argues that experience is more than the sum of its parts. In other words, experience is irreducible. Yes, but let’s not be mysterious here. Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements. To say something is irreducible does not mean that it is unexplainable.
  • Wolfgang Fasching argues that the hard problem is not about qualia, but about pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate. In this case, the dangerous phrase is “the nature of seeing.” There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here. Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land). Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors. But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me. I don’t think I’ve exhausted the topic, so let me close by saying…

That’s what I think, what do you think?

Image Credit: Pixabay







Top 10 Human-Centered Change & Innovation Articles of August 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are August’s ten most popular innovation posts:

  1. The Paradox of Innovation Leadership — by Janet Sernack
  2. Why Most Corporate Innovation Programs Fail — by Greg Satell
  3. A Top-Down Open Innovation Approach — by Geoffrey A. Moore
  4. Innovation Management ISO 56000 Series Explained — by Diana Porumboiu
  5. Scale Your Innovation by Mapping Your Value Network — by John Bessant
  6. The Impact of Artificial Intelligence on Future Employment — by Chateau G Pato
  7. Leaders Avoid Doing This One Thing — by Robyn Bolton
  8. Navigating the Unpredictable Terrain of Modern Business — by Teresa Spangler
  9. Imagination versus Knowledge — by Janet Sernack
  10. Productive Disagreement Requires Trust — by Mike Shipulski

BONUS – Here are five more strong articles published in July that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:







The Robots Aren’t Really Going to Take Over


GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that weren’t bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs; it replaces tasks, and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil are also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agricultural workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine anyone working in agriculture a hundred years ago would be horrified to find that their jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They might have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have less farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay







The Impact of Artificial Intelligence on Future Employment


GUEST POST from Chateau G Pato

The rapid progression of artificial intelligence (AI) has ignited both intrigue and fear among experts in various industries. While the advancements in AI hold promises of improved efficiency, increased productivity, and innumerable benefits, concerns have been raised about the potential impact on employment. As AI technology continues to evolve and permeate into different sectors, it is crucial to examine the implications it may have on the workforce. This article will delve into the impact of AI on future employment, exploring two case study examples that shed light on the subject.

Case Study 1: Autonomous Vehicles

One area where AI has gained significant traction in recent years is autonomous vehicles. While self-driving cars promise to revolutionize transportation, they also pose a potential threat to traditional driving jobs. According to a study conducted by the University of California, Berkeley, an estimated 300,000 truck driving jobs could be at risk in the coming decades due to the rise of autonomous vehicles.

Although this projection may seem alarming, it is important to note that AI-driven automation can also create new job opportunities. With the emergence of autonomous vehicles, positions such as remote monitoring operators, vehicle maintenance technicians, and safety supervisors are likely to be in demand. Additionally, the introduction of AI in this sector could also lead to the creation of entirely new industries such as ride-hailing services, data analysis, and infrastructure development related to autonomous vehicles. Therefore, while some jobs may be displaced, others will potentially emerge, resulting in a shift rather than a complete loss in employment opportunities.

Case Study 2: Healthcare and Diagnostics

The healthcare industry is another sector profoundly impacted by artificial intelligence. AI has already demonstrated remarkable prowess in diagnosing diseases and providing personalized treatment plans. For instance, IBM’s Watson, a cognitive computing system, has proved capable of analyzing vast amounts of medical literature and patient data to assist physicians in making more accurate diagnoses.

While AI undoubtedly enhances healthcare outcomes, concerns arise regarding the future of certain medical professions. Radiologists, for example, who primarily interpret medical images, may face challenges as AI algorithms become increasingly proficient at detecting abnormalities. A study published in Nature in 2020 revealed that AI could outperform human radiologists in interpreting mammograms. As AI is more widely incorporated into the healthcare system, the role of radiologists may evolve to focus on higher-level tasks such as treatment decisions, patient consultation, and research.

Moreover, the integration of AI into healthcare offers new employment avenues. The demand for data scientists, AI engineers, and software developers specialized in healthcare will likely increase. Additionally, healthcare professionals with expertise in data analysis and managing AI systems will be in high demand. As AI continues to transform the healthcare industry, the focus should be on retraining and up-skilling to ensure a smooth transition for affected employees.

Conclusion

The impact of artificial intelligence on future employment is a complex subject with both opportunities and challenges. While certain job roles may face disruption, AI also creates the potential for new roles to emerge. The cases of autonomous vehicles and AI in healthcare provide compelling examples of how the workforce can adapt and evolve alongside technology. Preparing for this transition will require a concerted effort from policymakers, employers, and individuals to ensure a smooth integration of AI into the workplace while safeguarding the interests of employees.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone all on the same page for change. Find out more about the methodology and tools, including the book Charting Change by following the link. Be sure and download the TEN FREE TOOLS while you’re here.

Image credit: Pexels







Will Artificial Intelligence Make Us Stupid?


GUEST POST from Shep Hyken

I was just at an industry conference focusing on AI (Artificial Intelligence). Someone commented, “AI is going to make us stupid.” Elaborating on that statement, the commenter’s reasoning was that it takes thinking and problem-solving out of the process. We will be given the answer and won’t have to know anything else.

I can see his point, but there is another way of looking at this, in the form of a question: “Did calculators make us dumb?”

I remember getting a calculator and being excited that I could do long division just by pushing its buttons. Even though it gave me the correct answer, I still had to know what to do with it. It didn’t make me dumb. It made me more efficient.

I liken this to my school days when the teacher said we could bring our books and notes to the final exam. Specifically, I remember my college algebra teacher saying, “I don’t care if you memorize formulas or not. What I care about is that you know how to use the formulas. So, on your way out of today’s class, you will receive a sheet with all the formulas you need to solve the problems on the test.”

Believe me when I tell you that having the formulas didn’t make taking the test easier. However, it did make studying easier. I didn’t have to spend time memorizing formulas. Instead, I focused on how to use the information to efficiently get the correct answer.

Shep Hyken Artificial Intelligence Cartoon

So, how does this apply to customer service? Many people think that AI will be used to replace customer support agents – and even salespeople. They believe all customer questions can be answered digitally with AI-infused technology. That may work for basic questions. For higher-level questions and problems, we still need experts. But there is much more.

AI can’t build relationships. Humans can. So, imagine the customer service agent or salesperson using AI to help them solve problems and get the best answers for their customers. But rather than just reciting the information in front of them, they put their personality into the responses. They communicate the information in a way their customers understand and can relate to. They answer additional and clarifying questions. They can even make suggestions outside of the original intent of the customer’s call. This mixes the best of both worlds: almost instantly accessible, accurate information with a live person’s relationship- and credibility-building skills. That’s a winning combination.

No, AI won’t make us dumb unless we let it. Instead, AI will help us be more efficient and effective. And it could even make us appear to be smarter!

Image Credits: Shep Hyken, Pixabay


 






Top 10 Human-Centered Change & Innovation Articles of July 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are July’s ten most popular innovation posts:

  1. 95% of Work is Noise — by Mike Shipulski
  2. Four Characteristics of High Performing Teams — by David Burkus
  3. 39 Digital Transformation Hacks — by Stefan Lindegaard
  4. How to Create Personas That Matter — by Braden Kelley
  5. The Real Problem with Problems — by Mike Shipulski
  6. A Triumph of Artificial Intelligence Rhetoric — by Geoffrey A. Moore
  7. Ideas Have Limited Value — by Greg Satell
  8. Three Cognitive Biases That Can Kill Innovation — by Greg Satell
  9. Navigating the AI Revolution — by Teresa Spangler
  10. How to Make Navigating Ambiguity a Super Power — by Robyn Bolton

BONUS – Here are five more strong articles published in June that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or Linkedin feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:
