Tag Archives: Artificial Intelligence

The Hard Problem of Consciousness is Not That Hard

GUEST POST from Geoffrey A. Moore

We human beings like to believe we are special—and we are, but not as special as we might like to think. One manifestation of our need to be exceptional is the way we privilege our experience of consciousness. This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:

“… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”

— David Chalmers, Facing up to the problem of consciousness

The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.

The key word here is experience. It emerges out of cognitive processes, but it is not completely reducible to them. For anyone who has read much in the field of complexity, this should not come as a surprise. All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate. Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence. Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct. Shakespeare is reducible to quantum effects? Good luck with that.

Most people’s problem with explaining experience, on the other hand, is that they place it too high. They want to use their own personal experience as a grounding point. The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets. Philosophers call such experiences qualia, and they attribute all sorts of ineluctable and mysterious qualities to them. But there is a much better way to understand what qualia really are—namely, as the pre-linguistic mind’s predecessor to ideas. That is, they are representations of reality that confer strategic advantage on the organism that can host and act upon them.

Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated. Experiences are what we remember. That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged. Of course, we do have experiences with language directly—lots of them. But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability.

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

  • The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences. Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage on organisms under conditions of natural and sexual selection. Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival. Once brains emerge, chemical signaling is supplemented by electrical signaling to the same ends. There is no magic here, only a change of medium.
  • Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.” The answer to this question is, “level by level.” First sentience has to emerge from non-sentience. That happens with the emergence of life at the cellular level. Then sentience has to spread beyond the cell. That happens when chemical signaling enables cellular communication. Then sentience has to speed up to enable mobile life. That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems. Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences. Again, no miracles required.
  • Others, such as Daniel Dennett and Patricia Churchland, believe that the hard problem is really a collection of easy problems that will be solved through further analysis of the brain and behavior. If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.
  • Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience. Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later. Specifically, to experience anything as like anything else is not possible without the intervention of language. That is, likeness is not a quale; it is a language-enabled idea. Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he was posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected, in that only human beings have selves. Selves, in other words, are creations of language. More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives. This is a completely language-dependent undertaking and thus not available to pets or infants. Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, and then hear stories about themselves that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness.

  • Chalmers argues that experience is more than the sum of its parts. In other words, experience is irreducible. Yes, but let’s not be mysterious here. Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements. To say something is irreducible does not mean that it is unexplainable.
  • Wolfgang Fasching argues that the hard problem is not about qualia, but about pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate. In this case, the dangerous phrase is “the nature of seeing.” There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here. Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land). Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors. But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me. I don’t think I’ve exhausted the topic, so let me close by saying…

That’s what I think, what do you think?

Image Credit: Pixabay

Top 10 Human-Centered Change & Innovation Articles of August 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are August’s ten most popular innovation posts:

  1. The Paradox of Innovation Leadership — by Janet Sernack
  2. Why Most Corporate Innovation Programs Fail — by Greg Satell
  3. A Top-Down Open Innovation Approach — by Geoffrey A. Moore
  4. Innovation Management ISO 56000 Series Explained — by Diana Porumboiu
  5. Scale Your Innovation by Mapping Your Value Network — by John Bessant
  6. The Impact of Artificial Intelligence on Future Employment — by Chateau G Pato
  7. Leaders Avoid Doing This One Thing — by Robyn Bolton
  8. Navigating the Unpredictable Terrain of Modern Business — by Teresa Spangler
  9. Imagination versus Knowledge — by Janet Sernack
  10. Productive Disagreement Requires Trust — by Mike Shipulski

BONUS – Here are five more strong articles published in July that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:

The Robots Aren’t Really Going to Take Over

GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that weren’t bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs doing?

The truth is that automation doesn’t replace jobs, it replaces tasks, and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He describes technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine anyone working in agriculture a hundred years ago would be horrified to find that their jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot you could tell them that “work” would consist largely of talking to other people. They might have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Impact of Artificial Intelligence on Future Employment

GUEST POST from Chateau G Pato

The rapid progression of artificial intelligence (AI) has ignited both intrigue and fear among experts in various industries. While the advancements in AI hold promises of improved efficiency, increased productivity, and innumerable benefits, concerns have been raised about the potential impact on employment. As AI technology continues to evolve and permeate different sectors, it is crucial to examine the implications it may have on the workforce. This article will delve into the impact of AI on future employment, exploring two case study examples that shed light on the subject.

Case Study 1: Autonomous Vehicles

One area where AI has gained significant traction in recent years is autonomous vehicles. While self-driving cars promise to revolutionize transportation, they also pose a potential threat to traditional driving jobs. According to a study conducted by the University of California, Berkeley, an estimated 300,000 truck driving jobs could be at risk in the coming decades due to the rise of autonomous vehicles.

Although this projection may seem alarming, it is important to note that AI-driven automation can also create new job opportunities. With the emergence of autonomous vehicles, positions such as remote monitoring operators, vehicle maintenance technicians, and safety supervisors are likely to be in demand. Additionally, the introduction of AI in this sector could also lead to the creation of entirely new industries such as ride-hailing services, data analysis, and infrastructure development related to autonomous vehicles. Therefore, while some jobs may be displaced, others will potentially emerge, resulting in a shift rather than a complete loss in employment opportunities.

Case Study 2: Healthcare and Diagnostics

The healthcare industry is another sector profoundly impacted by artificial intelligence. AI has already demonstrated remarkable prowess in diagnosing diseases and providing personalized treatment plans. For instance, IBM’s Watson, a cognitive computing system, has proved capable of analyzing vast amounts of medical literature and patient data to assist physicians in making more accurate diagnoses.

While AI undoubtedly enhances healthcare outcomes, concerns arise regarding the future of certain medical professions. Radiologists, for example, who primarily interpret medical images, may face challenges as AI algorithms become increasingly proficient at detecting abnormalities. A study published in Nature in 2020 revealed that AI could outperform human radiologists in interpreting mammograms. As AI is more widely incorporated into the healthcare system, the role of radiologists may evolve to focus on higher-level tasks such as treatment decisions, patient consultation, and research.

Moreover, the integration of AI into healthcare offers new employment avenues. The demand for data scientists, AI engineers, and software developers specialized in healthcare will likely increase. Additionally, healthcare professionals with expertise in data analysis and managing AI systems will be in high demand. As AI continues to transform the healthcare industry, the focus should be on retraining and up-skilling to ensure a smooth transition for affected employees.

Conclusion

The impact of artificial intelligence on future employment is a complex subject with both opportunities and challenges. While certain job roles may face disruption, AI also creates the potential for new roles to emerge. The cases of autonomous vehicles and AI in healthcare provide compelling examples of how the workforce can adapt and evolve alongside technology. Preparing for this transition will require a concerted effort from policymakers, employers, and individuals to ensure a smooth integration of AI into the workplace while safeguarding the interests of employees.

Bottom line: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but with a methodology and tools like those in FutureHacking™, anyone can engage in futurology themselves.

Image credit: Pexels

Will Artificial Intelligence Make Us Stupid?

GUEST POST from Shep Hyken

I was just at an industry conference focusing on AI (Artificial Intelligence). Someone commented, “AI is going to make us stupid.” The commenter’s reasoning was that AI takes thinking and problem-solving out of the process. We will be given the answer and won’t have to know anything else.

I can see his point, but there is another way of looking at this, in the form of a question: “Did calculators make us dumb?”

I remember getting a calculator and being excited that I could do long division just by pushing its buttons. Even though it gave me the correct answer, I still had to know what to do with it. It didn’t make me dumb. It made me more efficient.

I liken this to my school days when the teacher said we could bring our books and notes to the final exam. Specifically, I remember my college algebra teacher saying, “I don’t care if you memorize formulas or not. What I care about is that you know how to use the formulas. So, on your way out of today’s class, you will receive a sheet with all the formulas you need to solve the problems on the test.”

Believe me when I tell you that having the formulas didn’t make taking the test easier. However, it did make studying easier. I didn’t have to spend time memorizing formulas. Instead, I focused on how to use the information to efficiently get the correct answer.

So, how does this apply to customer service? Many people think that AI will be used to replace customer support agents – and even salespeople. They believe all customer questions can be answered digitally with AI-infused technology. That may work for basic questions. For higher-level questions and problems, we still need experts. But there is much more.

AI can’t build relationships. Humans can. So, imagine the customer service agent or salesperson using AI to help them solve problems and get the best answers for their customers. But rather than just reciting the information in front of them, they put their personality into the responses. They communicate the information in a way their customers understand and can relate to. They answer additional and clarifying questions. They can even make suggestions outside of the original intent of the customer’s call. This mixes the best of both worlds: almost instantly accessible, accurate information with a live person’s relationship- and credibility-building skills. That’s a winning combination.

No, AI won’t make us dumb unless we let it. Instead, AI will help us be more efficient and effective. And it could even make us appear to be smarter!

Image Credits: Shep Hyken, Pixabay

Top 10 Human-Centered Change & Innovation Articles of July 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are July’s ten most popular innovation posts:

  1. 95% of Work is Noise — by Mike Shipulski
  2. Four Characteristics of High Performing Teams — by David Burkus
  3. 39 Digital Transformation Hacks — by Stefan Lindegaard
  4. How to Create Personas That Matter — by Braden Kelley
  5. The Real Problem with Problems — by Mike Shipulski
  6. A Triumph of Artificial Intelligence Rhetoric — by Geoffrey A. Moore
  7. Ideas Have Limited Value — by Greg Satell
  8. Three Cognitive Biases That Can Kill Innovation — by Greg Satell
  9. Navigating the AI Revolution — by Teresa Spangler
  10. How to Make Navigating Ambiguity a Super Power — by Robyn Bolton

BONUS – Here are five more strong articles published in June that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:

Navigating the AI Revolution

Leveraging the Three Horizons for Tomorrow’s Triumphs

GUEST POST from Teresa Spangler

The future belongs to those who prepare for it today. As we stand at the dawn of the AI revolution, we must not merely adapt to change; we must anticipate it, shape it, and turn it to our advantage. Embracing the three horizons of AI is not just about technology or strategy; it’s about purpose – our purpose as leaders to guide our organizations, our people, and our society into a prosperous, equitable, and truly human future.

— Teresa Spangler

As we turn the page on a year of profound transformation, the horizon of 2024 and beyond takes shape. Artificial Intelligence (AI) is steadfastly marching forward, and as leaders, the pressing call to pilot our organizations through these new frontiers couldn’t be more urgent. We must explore how executive leadership can initiate actionable measures today to harness tomorrow’s opportunities.

As the silhouette of 2024 looms ahead, we realize that maneuvering through the turbulent waters of change requires not just a reactive approach, but a meticulously charted plan. A navigational tool that can prove invaluable in this journey is the Three Horizons framework for futures planning. This framework allows us to methodically comprehend, envision, and shape our path through the cascading waves of AI development. By exploring each horizon in detail, we can create a strategic roadmap that integrates immediate actions, mid-term plans, and long-term visions. Let’s delve deeper into this process, beginning with the groundwork of understanding today’s AI landscape.

The Groundwork: Understanding Today’s AI Landscape – Horizon 1

Diving into the fast-paced whirlwind of AI, a comprehensive grasp of today’s landscape is the cornerstone for future triumphs. Familiarity with various AI technologies, like machine learning, natural language processing, robotics, and computer vision, is now an indispensable part of the executive toolkit. However, theory is merely the starting point.

Turning this knowledge into strategic assets necessitates that you:

  • Actively interact with AI tools like ChatGPT, DALL-E, DeepArt, DeepDream, Stable Diffusion, and Midjourney. Developing even rudimentary AI models with platforms like TensorFlow or PyTorch can shed light on AI’s potential and limitations (see the sketch after this list). For instance, IBM’s Project Debater showcases how AI can understand context and form logical arguments, pushing the boundary of natural language processing.
  • Forecast AI’s immediate future by leveraging trends in AI research, market dynamics, societal needs, and regulatory shifts. Access the best industry reports and collaborate with external experts who offer invaluable insights. A recent McKinsey report, for instance, found that companies integrating AI were nearly twice as likely to be top-quartile performers in their industry.
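
To ground the “rudimentary AI models” suggestion above, here is a minimal sketch of what that first hands-on step might look like in PyTorch: a toy binary classifier trained on synthetic data. Everything in it (the data, the architecture, the hyperparameters) is illustrative, not a recommended production setup:

```python
# Minimal PyTorch sketch: train a tiny classifier on synthetic data.
# Purely illustrative -- the data and architecture are arbitrary toys.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy dataset: 100 samples with 4 features; label is 1 when the features sum positive.
X = torch.randn(100, 4)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()  # expects raw logits, so no final sigmoid layer

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # forward pass + loss
    loss.backward()              # backpropagate gradients
    optimizer.step()             # adjust parameters

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"training accuracy: {accuracy.item():.2f}")
```

Even a toy like this makes the moving parts of AI tangible: data, a model, a loss to minimize, and an optimizer that learns from feedback. That first-hand intuition is exactly what the bullet above argues for.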

It’s widely acknowledged that AI will significantly alter the dynamics of how our world operates. While the intricacies of this transformation can seem complex, it’s certainly not an insurmountable challenge! The Three Horizons methodology is one of many effective strategies your organization can adopt to manage this transition. By strategically navigating through these horizons with a cohesive team and a well-articulated plan, your organization will be well-positioned to embrace the AI revolution. Here are a few other methodologies you might consider:

  1. Scenario Planning: This approach involves envisioning different future states and developing strategies to succeed in each potential scenario.
  2. Backcasting: Starting with a desirable future end-state, this method works backward to identify the strategic steps required to reach that goal.
  3. Roadmapping: This technique charts out the evolution of technologies and products, helping you understand how technological progress might affect your business over time.

Choosing the right methodology will depend on your specific circumstances and objectives. Regardless of the approach, remember that the key to success lies in aligning your team and developing a clear, comprehensive plan of action.

On to Horizons 2 and 3

Navigating the Waves: Crafting the Mid-Term AI Future – Horizon 2

As part of the C-suite, your role extends beyond mere reactions to change – you’re a herald of future trends. Structuring the mid-term AI future necessitates:

  • Assimilating the implications of AI for your industry. Evaluate how job roles might evolve, identify the ethical and privacy concerns, and understand the geopolitical interplays of AI on your global strategies. For instance, AI-driven automation could reshape employment, as seen with Amazon’s warehouse robots.
  • Tailoring a 3-5 year forecast using foresight platforms like FuturePlatform to incorporate technological breakthroughs, policy changes, societal trends, and economic factors. Staying informed about AI regulations through think tanks like the AI Now Institute can help you navigate this complex terrain.

Setting the Sails: Envisioning a Decade Ahead – Horizon 3

Leadership in the AI epoch means having the courage to gaze beyond the immediate future. For the long-term horizon, consider the following:

  • Contemplating the possibilities. Quantum computing, advanced neural networks, and sophisticated AI-human interfaces might be the norm a decade from now. Consider how Microsoft’s recent advancements in quantum computing could revolutionize data processing and analysis in your industry.
  • Employing scenario planning to prepare for a multitude of futures. Use strategic planning software like Lucidchart to visualize different assumptions about technological progress, regulatory changes, and societal evolution.
  • Formulating strategic plans based on these scenarios. The essence of leadership is making today’s decisions with an eye on tomorrow’s probabilities.
  • Maximizing the power of external expertise. Benefit from programs like Plazabridge Group’s Innovation Pro™, Innofusion™ Transformation, Innofusion™ Sprint, and Innofusion™ Sustainability Assessment to aid your journey. These programs offer valuable outside perspectives that can enrich your understanding and application of AI. They provide fresh insights, hands-on experience, and expert guidance in navigating the complex AI landscape.

External experts act as crucial navigators in this AI expedition. They help decode ethical challenges, demystify technological complexities, and forecast future trends, equipping executives to make well-informed, strategic decisions in the face of AI’s rapid evolution.

As we draw closer to 2024, remember that we’re not merely spectators of the emerging AI revolution – we’re the trailblazers. As leaders, we have the power to do more than respond to change; we can architect it. The ripples of our leadership will extend beyond our organizations, shaping the very fabric of our society. The future isn’t something that simply happens to us – we’re active participants in its creation. Now is the time to embrace this momentous journey, and lead with boldness and determination.

Image credit: Unsplash

A Triumph of Artificial Intelligence Rhetoric

Understanding ChatGPT

GUEST POST from Geoffrey A. Moore

I recently finished reading Stephen Wolfram’s very approachable introduction to ChatGPT, What is ChatGPT Doing . . . And Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.

First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows:

  1. Ingest an enormous corpus of text from every available digitized source.
  2. While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
  3. Within the confines of each text, record the location of every token relative to every other token.
  4. Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
  5. Feed these probabilities into a neural network to cluster words and build a map of relationships.
  6. Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect; a toy version is sketched after this list).
  7. Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
  8. As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
  9. Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.
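
To make the token-and-location idea in steps 2 through 6 concrete, here is a toy sketch in Python: a bigram counter that predicts the next word purely from adjacency counts. It is a drastic miniature (ChatGPT uses a neural network over an enormous corpus and far longer contexts), and the corpus and function names here are invented for illustration:

```python
# Toy illustration of steps 2-6: tokenize words, count adjacencies,
# then predict the most probable next word. A bigram counter, not a neural net.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the cream".split()

# Step 2: assign each unique word a numeric token.
token_of = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
word_of = {i: w for w, i in token_of.items()}

# Steps 3-4: record, for each token, how often every other token follows it.
follows = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[token_of[prev_word]][token_of[next_word]] += 1

# Steps 5-6: given a prompt word, emit the most probable next word.
def predict_next(word):
    counts = follows[token_of[word]]
    best_token, best_count = counts.most_common(1)[0]
    return word_of[best_token], best_count / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" twice out of four
```

Notice that nothing in the sketch knows what a cat or a mat is. It manipulates tokens and counts, and that observation is the heart of the argument that follows.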

OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, specifically when it comes to the most important point to register about ChatGPT. That point is a simple one: ChatGPT has no idea what it is talking about.

Indeed, ChatGPT has no ideas of any kind—no knowledge or expertise—because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it had been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.

OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.

First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.
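
To picture what guardrails at the input end might look like, here is a minimal sketch of allowlist filtering applied to training documents before a model ever sees them. The source labels and the document format are hypothetical:

```python
# Minimal sketch of input-end guardrails: keep only documents whose source
# appears on an allowlist of trusted origins. Source names are hypothetical.
TRUSTED_SOURCES = {"internal-kb", "product-docs", "peer-reviewed"}

documents = [
    {"source": "internal-kb", "text": "To reset the device, hold the button for 10 seconds."},
    {"source": "random-forum", "text": "just unplug it lol"},
    {"source": "product-docs", "text": "Firmware updates are issued quarterly."},
]

training_set = [doc for doc in documents if doc["source"] in TRUSTED_SOURCES]
print(f"kept {len(training_set)} of {len(documents)} documents")  # kept 2 of 3
```

Everything downstream inherits whatever trust the allowlist encodes, which is why the curation step, not the model, is where the real governance work happens.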

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web, but it has no knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of cross-checking computing, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.

Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.

Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.

Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.

That’s what I think. What do you think?

Image Credit: Pixabay

Top 10 Human-Centered Change & Innovation Articles of June 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are June’s ten most popular innovation posts:

  1. Generation AI Replacing Generation Z — by Braden Kelley
  2. Mission Critical Doesn’t Mean What You Think it Does — by Geoffrey A. Moore
  3. “I don’t know,” is a clue you’re doing it right — by Mike Shipulski
  4. 5 Tips for Leaders Navigating Uncertainty – From Executives at P&G, CVS, Hannaford, and Intel — by Robyn Bolton
  5. Reverse Innovation — by Mike Shipulski
  6. Change Management Best Practices for Maximum Adoption — by Art Inteligencia
  7. Making Employees Happy at Work — by David Burkus
  8. 4 Things Leaders Must Know About Artificial Intelligence and Automation — by Greg Satell
  9. Be Human – People Will Notice — by Mike Shipulski
  10. How to Fail Your Way to Success — by Robyn Bolton

BONUS – Here are five more strong articles published in May that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:

4 Things Leaders Must Know About Artificial Intelligence and Automation

GUEST POST from Greg Satell

In 2011, MIT economists Erik Brynjolfsson and Andrew McAfee self-published an unassuming e-book titled Race Against The Machine. It quickly became a runaway hit. Before long, the two signed a contract with W. W. Norton & Company to publish a full-length version, The Second Machine Age, which was an immediate bestseller.

The subject of both books was how “digital technologies are rapidly encroaching on skills that used to belong to humans alone.” Although the authors were careful to point out that automation is nothing new, they argued, essentially, that at some point a difference in scale becomes a difference in kind, and they forecast that we were close to hitting a tipping point.

In recent years, their vision has come to be seen as deterministic and apocalyptic, with humans struggling to stay relevant in the face of a future ruled by robot overlords. There’s no evidence that’s true. The future, in fact, will be driven by humans collaborating with other humans to design work for machines to create value for other humans.

1. Automation Doesn’t Replace Jobs, It Replaces Tasks

When a new technology appears, we always seem to assume that its primary value will be to replace human workers and reduce costs, but that’s rarely true. For example, when automatic teller machines first appeared in the early 1970s, most people thought they would lead to fewer branches and tellers, but actually just the opposite happened.

What really happens is that as a task is automated, it becomes commoditized and value shifts somewhere else. That’s why today, as artificial intelligence is ramping up, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive.

That’s why the objective of any viable cognitive strategy is not to cut costs, but to extend capabilities. For example, when simple customer service tasks are automated, that can free up time for human agents to help with more thorny issues. In much the same way, when algorithms can do much of the analytical grunt work, human executives can focus on long-term strategy, which computers tend not to do so well.

The winners in the cognitive era will not be those who can reduce costs the fastest, but those who can unlock the most value over the long haul. That will take more than simply implementing projects. It will require serious thinking about what your organization’s mission is and how best to achieve it.

2. Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Still, the twentieth century was seen as an era of unprecedented prosperity.

We’re in the midst of a similar transformation today. Just as our ancestors toiled in the fields, many of us today spend much of our time doing rote, routine tasks. Yet, as two economists from MIT explain in a paper, the jobs of the future are not white collar or blue collar, but those focused on non-routine tasks, especially those that involve other humans.

Far too often, however, managers fail to recognize value hidden in the work their employees do. They see a certain job description, such as taking an order in a restaurant or answering a customer’s call, and see how that task can be automated to save money. What they don’t see, however, is the hidden value of human interaction often embedded in many jobs.

When we go to a restaurant, we want somebody to take care of us (which is why we didn’t order takeout). When we have a problem with a product or service, we want to know somebody cares about solving it. So the most viable strategy is not to cut jobs, but to redesign them to leverage automation to empower humans to become more effective.

3. As Machines Learn To Think, Cognitive Skills Are Being Replaced By Social Skills

Twenty or thirty years ago, the world was very different. High-value work generally involved retaining information and manipulating numbers. Perhaps not surprisingly, education and corporate training programs were focused on building those skills, and people would build their careers on performing well on knowledge and quantitative tasks.

Today, however, an average teenager has more access to information and computing power than even a large enterprise had a generation ago. Knowledge retention and quantitative ability have largely been automated and devalued, and high-value work has shifted from cognitive skills to social skills.

To take just one example, the journal Nature has noted that the average scientific paper today has four times as many authors as one did in 1950 and the work they are doing is far more interdisciplinary and done at greater distances than in the past. So even in highly technical areas, the ability to communicate and collaborate effectively is becoming an important skill.

There are some things that a machine will never do. Machines will never strike out at a Little League game, have their hearts broken or see their children born. That makes it difficult, if not impossible, for machines to relate to humans as well as a human can.

4. AI Is A Force Multiplier, Not A Magic Box

The science fiction author Arthur C. Clarke noted that “any sufficiently advanced technology is indistinguishable from magic,” and that’s largely true. So when we see a breakthrough technology for the first time, such as when IBM’s Watson system beat top human players at Jeopardy!, many of us immediately begin imagining all the magical possibilities that could be unleashed.

Unfortunately, that always leads to trouble. Many firms raced to implement AI applications without understanding them and were immediately disappointed that the technology was just that — technology — and not actually magic. Besides wasting resources, these projects were also missed opportunities to implement something truly useful.

As Josh Sutton, CEO of Agorai, a platform that helps companies build AI applications for their business, put it, “What I tell business leaders is that AI is useful for tasks you understand well enough that you could do them if you had enough people and enough time, but not so useful if you couldn’t do it with more people and more time. It’s a force multiplier, not a magic box.”

So perhaps most importantly, what business leaders need to understand about artificial intelligence is that it is not inherently utopian or apocalyptic, but a business tool. Much like any other business tool, its performance is largely dependent on context, and it is a leader’s job to help create that context.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
