Tag Archives: Artificial Intelligence

Artificial Intelligence is Forcing Us to Answer Some Very Human Questions


GUEST POST from Greg Satell

Chris Dixon, who invested early in companies ranging from Warby Parker to Kickstarter, once wrote that the next big thing always starts out looking like a toy. That’s certainly true of artificial intelligence, which started out playing games like chess and Go, and competing against humans on the game show Jeopardy!

Yet today, AI has become so pervasive we often don’t even recognize it anymore. Besides enabling us to speak to our phones and get answers back, intelligent algorithms are often working in the background, providing things like predictive maintenance for machinery and automating basic software tasks.

As the technology becomes more powerful, it’s also forcing us to ask some uncomfortable questions that were once confined to science fiction or late-night dorm room discussions. When machines start doing things traditionally considered to be uniquely human, we need to reevaluate what it means to be human and what it means to be a machine.

What Is Original and Creative?

There is an old literary concept called the Infinite Monkey Theorem. The basic idea is that if you had an infinite number of monkeys pecking away at an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare or Tolstoy or any other literary masterpiece.
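As a back-of-the-envelope illustration (a sketch, not part of the original argument), the per-attempt odds can be computed directly; the 26-letter alphabet below is a simplifying assumption:

```python
# Illustrative sketch: the probability that a single random keystroke
# sequence reproduces a target text, assuming a 26-letter alphabet
# (a simplification; real keyboards have far more keys).
def chance_of_typing(target: str, alphabet_size: int = 26) -> float:
    return float(alphabet_size) ** -len(target)

# Even a 13-letter phrase is astronomically unlikely per attempt,
# which is why the theorem needs "infinite" monkeys and infinite time.
p = chance_of_typing("tobeornottobe")
print(f"roughly 1 in {1 / p:.3e} attempts")
```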

Today, our technology is powerful enough to simulate infinite monkeys and produce something that looks a whole lot like original work. Music scholar and composer David Cope has been able to create algorithms that produce original works of music so good that even experts can’t tell them apart from human compositions. Companies like Narrative Science are able to produce coherent documents from raw data this way.

So there’s an interesting philosophical discussion to be had about what qualifies as true creation and what’s merely curation. If an algorithm produces War and Peace randomly, does it retain the same meaning? Or is the intent of the author a crucial component of what creativity is about? Reasonable people can disagree.

However, as AI technology becomes more common and pervasive, some very practical issues are arising. For example, Amazon’s Audible unit has created a new captions feature for audio books. Publishers sued, saying it’s a violation of copyright, but Amazon claims that because the captions are created with artificial intelligence, it is essentially a new work.

When machines can create, does the result qualify as original, creative work? Under what circumstances can a work be considered new and original? We are going to have to decide.

Bias And Transparency

We generally accept that humans have biases. In fact, Wikipedia lists over 100 documented biases that affect our judgments. Marketers and salespeople try to exploit these biases to influence our decisions. At the same time, professional training is supposed to mitigate them. To make good decisions, we need to conquer our tendency toward bias.

Yet however much we strive to minimize bias, we cannot eliminate it, which is why transparency is so crucial for any system to work. When a CEO is hired to run a corporation, for example, he or she can’t just make decisions willy-nilly, but is held accountable to a board of directors who represent shareholders. Records are kept and audited to ensure transparency.

Machines also have biases which are just as pervasive and difficult to root out. Amazon had to scrap an AI system that analyzed resumes because it was biased against female candidates. Google’s algorithm designed to detect hate speech was found to be racially biased. If two of the most sophisticated firms on the planet are unable to eliminate bias, what hope is there for the rest of us?

So, we need to start asking the same questions of machine-based decisions as we do of human ones. What information was used to make a decision? On what basis was a judgment made? How much oversight should be required and by whom? We all worry about who and what are influencing our children; we need to ask the same questions about our algorithms.

The Problem of Moral Agency

For centuries, philosophers have debated the issue of what constitutes a moral agent, meaning to what extent someone is able to make and be held responsible for moral judgments. For example, we generally do not consider those who are insane to be moral agents. Minors under the age of eighteen are also not fully held responsible for their actions.

Yet sometimes the issue of moral agency isn’t so clear. Consider a moral dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
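To make concrete what "explicitly encoded" means here, consider a deliberately naive sketch. It is purely illustrative: the minimize-casualties rule below is one possible policy, not a recommendation and not any manufacturer's actual logic.

```python
# Purely illustrative: one way a trolley-style choice could be
# explicitly encoded. The rule (minimize expected casualties) is a
# design decision someone must make, own, and be accountable for.
def choose_action(casualties_if_stay: int, casualties_if_switch: int) -> str:
    """Return 'switch' only if switching strictly reduces casualties."""
    return "switch" if casualties_if_switch < casualties_if_stay else "stay"

print(choose_action(casualties_if_stay=5, casualties_if_switch=1))
```

The point is not that this rule is right; it is that once the decision lives in code, someone had to choose it, and that choice can be questioned and audited.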

On a more basic level, we need to ask who is responsible for a decision an algorithm makes, especially since AI systems are increasingly capable of making judgments humans can’t understand. Who is culpable for an algorithmically driven decision gone bad? By what standard should they be evaluated?

Working Towards Human-Machine Coevolution

Before the industrial revolution, most people earned their living through physical labor. Much like today, tradesmen saw mechanization as a threat — and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher paid cognitive work.

Today, we’re going through a similar shift, but now machines are taking over cognitive tasks. Just as the industrial revolution devalued certain skills and increased the value of others, the age of thinking machines is catalyzing a shift from cognitive skills to social skills. The future will be driven by humans collaborating with other humans to design work for machines that creates value for other humans.

Technology is, as Marshall McLuhan pointed out long ago, an extension of man. We are constantly coevolving with our creations. Value never really disappears; it just shifts to another place. So, when we use technology to automate a particular task, humans must find a way to create value elsewhere, which creates an opportunity to create new technologies.

This is how humans and machines coevolve. The dilemma that confronts us now is that when machines replace tasks that were once thought of as innately human, we must redefine ourselves and that raises thorny questions about our relationship to the moral universe. When men become gods, the only thing that remains to conquer is ourselves.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Coming Innovation Slowdown


GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radios, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate out from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative—and possibly even more so—than those that happened in the 1920s and 30s. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover, you won’t invent. And if you don’t invent, you will eventually be disrupted; it’s just a matter of time. It’s always better to prepare than to adapt, and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Rise of the Prompt Engineer


GUEST POST from Art Inteligencia

The world of tech is ever-evolving, and the rise of the prompt engineer is just the latest development. Prompt engineers are software developers who specialize in building natural language processing (NLP) systems, like voice assistants and chatbots, to enable users to interact with computer systems using spoken or written language. This burgeoning field is quickly becoming essential for businesses of all sizes, from startups to large enterprises, to remain competitive.

Five Skills to Look for When Hiring a Prompt Engineer

But with the rapid growth of the prompt engineer field, it can be difficult to hire the right candidate. To ensure you’re getting the best engineer for your project, there are a few key skills you should look for:

1. Technical Knowledge: A competent prompt engineer should have a deep understanding of the underlying technologies used to create NLP systems, such as machine learning, natural language processing, and speech recognition. They should also have experience developing complex algorithms and working with big data.

2. Problem-Solving: Prompt engineering is a highly creative field, so the ideal candidate should have the ability to think outside the box and come up with innovative solutions to problems.

3. Communication: A prompt engineer should be able to effectively communicate their ideas to both technical and non-technical audiences in both written and verbal formats.

4. Flexibility: With the ever-changing landscape of the tech world, prompt engineers should be comfortable working in an environment of constant change and innovation.

5. Time Management: Prompt engineers are often involved in multiple projects at once, so they should be able to manage their own time efficiently.

These are just a few of the skills to look for when hiring a prompt engineer. The right candidate will be able to combine these skills to create effective and user-friendly natural language processing systems that will help your business stay ahead of the competition.

But what if you want or need to build your own artificial intelligence queries without the assistance of a professional prompt engineer?

Four Secrets of Writing a Good AI Prompt

As AI technology continues to advance, it is important to understand how to write a good prompt for AI to ensure that it produces accurate and meaningful results. Here are some of the secrets to writing a good prompt for AI.

1. Start with a clear goal: Before you begin writing a prompt for AI, it is important to have a clear goal in mind. What are you trying to accomplish with the AI? What kind of outcome do you hope to achieve? Knowing the answers to these questions will help you write a prompt that is focused and effective.

2. Keep it simple: AI prompts should be as straightforward and simple as possible. Avoid using jargon or complicated language that could confuse the AI. Also, try to keep the prompt as short as possible so that it is easier for the AI to understand.

3. Be specific: To get the most accurate results from your AI, you should provide a specific prompt that clearly outlines what you are asking. You should also provide any relevant information, such as the data or information that the AI needs to work with.

4. Test your prompt: Before you use your AI prompt in a real-world situation, it is important to test it to make sure that it produces the results that you are expecting. This will help you identify any issues with the prompt or the AI itself and make the necessary adjustments.

By following these tips, you can ensure that your AI prompt is effective and produces the results that you are looking for. Writing a good prompt for AI is a skill that takes practice, but by following these secrets you can improve your results.
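The four tips above can be mechanized into a simple template. This is a hypothetical sketch (the function and its parameter names are illustrative, not a standard API), but it shows how a clear goal, plain language and specific context can be assembled, and then tested before use:

```python
def build_prompt(goal: str, context: str = "", output_format: str = "") -> str:
    """Assemble a prompt: a clear goal first, then any specific
    context, then the desired output format. Names are illustrative."""
    parts = [goal.strip()]
    if context:
        parts.append(f"Context: {context.strip()}")
    if output_format:
        parts.append(f"Respond as: {output_format.strip()}")
    return "\n".join(parts)

# Tip 4: inspect and test the prompt before using it for real.
prompt = build_prompt(
    goal="Write a short article on how to brush your teeth.",
    context="Audience: first-time patients of a family dental practice.",
    output_format="three short paragraphs",
)
print(prompt)
```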

So, whether you look to write your own AI prompts or feel the need to hire a professional prompt engineer, now you are equipped to be successful either way!

Image credit: Pexels


AI is a Powerful New Tool for Entrepreneurs


by Braden Kelley

In today’s digital, always-connected world, Google too often stands as a gatekeeper between entrepreneurs and small businesses and their financial success. Ranking well in the search engines requires time and expertise that many entrepreneurs and small business owners don’t have, because their focus must be on fine-tuning the value proposition and operations of their business.

The day after Google was invented, the first search engine marketing firm was probably created to make money off of hard-working entrepreneurs and small business owners trying to make the most of their investment in a web site through search engine optimization (SEO), keyword advertising, and social media strategies.

According to IBISWorld the market size of the SEO & Internet Marketing Consulting industry is $75.0 Billion. Yes, that’s billion with a ‘b’.

Creating content for web sites is an even bigger market. According to Technavio, the global content marketing market is estimated to increase by $584.0 billion between 2022 and 2027. That is the growth number alone; the market itself is much larger.

The introduction of ChatGPT threatens to upend these markets, to the detriment of this group of businesses, but to the benefit of the nearly 200,000 dentists in the United States, the more than 100,000 plumbers, the million and a half real estate agents, and numerous other categories of small businesses.

Many of these content marketing businesses create a number of different types of content for the tens of millions of small businesses in the United States, from blog articles to tweets to Facebook pages and everything in between. The agencies that small businesses hire employ recent college graduates or offshore resources in places like the Philippines, India, Pakistan, Ecuador, Romania, and lots of other locations around the world, and bill their work to their clients at a much higher rate.

Outsourcing content creation has been a great way for small businesses to leverage external resources so they can focus on the business, but now may be the time to bring some of this content creation work back in house, particularly where the content is straightforward and informational for the average visitor to the web site.

With ChatGPT you can ask it to "write me an article on how to brush your teeth" or "write me ten tweets on toothbrushing" or "write me a Facebook post on the most common reasons a toilet won't flush."

I asked it to do the last one for me and here is what it came up with:

Continue reading the rest of this article on CustomerThink (including the ChatGPT results)

Image credits: Pixabay


Our Fear of China is Overblown


GUEST POST from Greg Satell

The rise of China over the last 40 years has been one of history’s great economic miracles. According to the World Bank, since it began opening up its economy in 1979, China’s GDP has grown from a paltry $178 billion to a massive $13.6 trillion. At the same time, research by McKinsey shows that its middle class is expanding rapidly.

What’s more, it seems like the Asian giant is just getting started. China has become increasingly dominant in scientific research and has embarked on two major initiatives: Made in China 2025, which aims to make it the leading power in 10 emerging industries, and a massive Belt and Road infrastructure initiative that seeks to shore up its power throughout Asia.

Many predict that China will dominate the 21st century in much the same way that America dominated the 20th. Yet I’m not so sure. First, American dominance was due to an unusual confluence of forces unlikely to be repeated. Second, China has weaknesses—and we have strengths—that aren’t immediately obvious. We need to be clear-headed about China’s rise.

The Making of an American Century

America wasn’t always a technological superpower. In fact, at the turn of the 20th century, much like China at the beginning of this century, the United States was largely a backwater. Still mostly an agrarian nation, the US lacked the industrial base and intellectual heft of Europe. Bright young students would often need to go overseas for advanced degrees. With no central bank, financial panics were common.

Yet all that changed quickly. Industrialists like Thomas Edison and Henry Ford put the United States at the forefront of the two most important technologies of the time, electricity and internal combustion. Great fortunes produced by a rising economy endowed great educational institutions. In 1913 the Federal Reserve Act was passed, finally bringing financial stability to a growing nation. By the 1920s, much like China today, America had emerged as a major world power.

Immigration also played a role. Throughout the early 1900s immigrants coming to America provided enormous entrepreneurial energy as well as cheap labor. With the rise of fascism in the 1930s, our openness to new people and new ideas attracted many of the world’s greatest scientists to our shores and created a massive brain drain in Europe.

At the end of World War II, the United States was the only major power left with its industrial base still intact. We seized the moment wisely, using the Marshall Plan to rebuild our allies and creating scientific institutions, such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) that fueled our technological and economic dominance for the rest of the century.

There are many parallels between the 1920s and the historical moment of today, but there are also many important differences. It was a number of forces, including our geography, two massive world wars, our openness as a culture and a number of wise policy choices that led to America’s dominance. Some of these factors can be replicated, but others cannot.

MITI and the Rise of Japan

Long before China loomed as a supposed threat to American prosperity and dominance, Japan was considered to be a chief economic rival. Throughout the 1970s and 80s, Japanese firms came to lead in many key industries, such as automobiles, electronics and semiconductors. The United States, by comparison, seemed feckless and unable to compete.

Key to Japan’s rise was a long-term industrial policy. The Ministry of International Trade and Industry (MITI) directed investment and funded research that fueled an economic miracle. Compared to America’s haphazard policies, Japan’s deliberate and thoughtful strategy seemed like a decidedly more rational and wiser model.

Yet before long things began to unravel. While Japan continued to perform well in many of the industries and technologies that MITI focused on, it completely missed out on new technologies, such as minicomputers and workstations in the 1980s and personal computers in the 1990s. As MITI continued to support failing industries, growth slowed and debt piled up, leading to a lost decade of economic malaise.

At the same time, innovative government policy in the US also helped turn the tide. For example, in 1987 a non-profit consortium made up of government labs, research universities and private sector companies, called SEMATECH, was created to regain competitiveness in the semiconductor industry. America soon retook the lead, which continues even today.

China 2025 and the Belt and Road Initiative

While the parallels with America in the 1920s underline China’s potential, Japan’s experience in the 1970s and 80s highlights its peril. Much like Japan, China is centralizing decision-making around a relatively small number of bureaucrats and focusing on a relatively small number of industries and technologies.

Much like Japan back then, China seems wise and rational. Certainly, the technologies it is targeting, such as artificial intelligence, electric cars and robotics would be on anybody’s list of critical technologies for the future. The problem is that the future always surprises us. What seems clear and obvious today may look ridiculous and naive a decade from now.

To understand the problem, consider quantum computing, which China is investing heavily in. However, the technology is far from monolithic. In fact, there are a wide variety of approaches being championed by different firms, such as IBM, Microsoft, Google, Intel and others. Clearly, some of these firms are going to be right and some will be wrong.

The American firms that get it wrong will fail, but others will surely succeed. In China, however, the ones that get it wrong will likely be government bureaucrats who will have the power to prop up state supported firms indefinitely. Debt will pile up and competitiveness will decrease, much like it did in Japan in the 1990s.

This is, of course, speculation. However, there are indications that it is already happening. A recent bike sharing bubble has ignited concerns that similar over-investment is happening in artificial intelligence. Many investors have also become concerned that China’s slowing economy will be unable to support its massive debt load.

The Path Forward

The rise of China presents a generational challenge. Clearly, we cannot ignore a rising power, yet we shouldn’t overreact either. While many have tried to cast China as a bad actor, engaging in intellectual theft, currency manipulation and other unfair trade policies, others point out that it is wisely investing for the long-term while the US manages by the quarter.

Interestingly, as Fareed Zakaria recently pointed out, the same accusations made about China’s unfair trade policies today were leveled at Japan 40 years ago. In retrospect, however, our fears about Japan seem almost quaint. Not only were we not crushed by Japan’s rise, we are clearly better for it, incorporating Japanese ideas like lean manufacturing and combining them with our own innovations.

I suspect, or at least I hope, that we will benefit from China’s rise much as we did from Japan’s. We will learn from its innovations and be inspired to develop more of our own. If a Chinese scientist invents a cure for cancer, American lives will be saved. If an American scientist invents a better solar panel, fewer Chinese will be choking on smog.

Perhaps most of all, we need to remember that what made the 20th Century the American Century was our ability to rise to the challenges that history presented. Whether it was rebuilding Europe in the 40s and 50s, Sputnik in the 50s and 60s, or Japan in the 70s and 80s, competition always brought out the best in us. Then, as now, our destiny was our own to determine.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Top 10 Human-Centered Change & Innovation Articles of January 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are January’s ten most popular innovation posts:

  1. Top 40 Innovation Bloggers of 2022 — Curated by Braden Kelley
  2. Back to Basics: The Innovation Alphabet — by Robyn Bolton
  3. 99.7% of Innovation Processes Miss These 3 Essential Steps — by Robyn Bolton
  4. Top 100 Innovation and Transformation Articles of 2022 — Curated by Braden Kelley
  5. Ten Ways to Make Time for Innovation — by Nick Jain
  6. Agility is the 2023 Success Factor — by Soren Kaplan
  7. Five Questions All Leaders Should Always Be Asking — by David Burkus
  8. 23 Ways in 2023 to Create Amazing Experiences — by Shep Hyken
  9. Startups Must Be Where Their Customers Are — by Steve Blank
  10. Will ChatGPT make us more or less innovative? — by Pete Foley

BONUS – Here are five more strong articles published in December that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last three years:


Will ChatGPT make us more or less innovative?


GUEST POST from Pete Foley

The rapid emergence of increasingly sophisticated ‘AI’ programs such as ChatGPT will profoundly impact our world in many ways. That will inevitably include innovation, especially the front end. But will it ultimately help or hurt us? Better access to information should be a huge benefit, and my intuition was to dive in and take full advantage. I still think it has enormous upside, but I also think it needs to be treated with care. At this point, at least, it’s still a tool, not an oracle. It’s an excellent source for tapping existing information, but it’s not (yet) a source of new ideas. As with any tool, those who deeply understand how it works, its benefits and its limitations will get the most from it. And those who use it wrongly could end up doing more harm than good. So below I’ve mapped out a few pros and cons that I see. It’s new, and like everybody else I’m on a learning curve, so I would welcome any and all thoughts on these pros and cons:

What is Innovation?

First, a bit of a sidebar. To understand how to use a tool, I need to have a reasonably clear idea of what goals I want it to help me achieve. Obviously ‘what is innovation’ is a somewhat debatable topic, but my working model is that the front end of innovation typically involves taking existing knowledge or technology and combining it in new, useful ways, or in new contexts, to create something that is new, useful, and ideally understandable and accessible. This requires deep knowledge, curiosity and the ability to reframe problems to find new uses for existing assets. A recent illustrative example is Oculus Rift, an innovation that helped make virtual reality accessible by combining fairly mundane components, including a mobile phone screen, a tracking sensor and ski goggles, into something new. But innovation comes in many forms, and can also involve serendipity and keen observation, as in Alexander Fleming’s original discovery of penicillin. Even this, though, required deep domain knowledge to spot the opportunity and to reframe an undesirable mold into a (very) useful pharmaceutical. So my starting point is: which parts of this can ChatGPT help with?

Another sidebar is that innovation is of course far more than simply discovery or a Eureka moment. Turning an idea into a viable product or service usually requires considerable work, with the development of penicillin being a case in point. I’ve no doubt that ChatGPT and its inevitable ‘progeny’ will be of considerable help in that part of the process too. But for starters I’ve focused on what it brings to the discovery phase, and the generation of big, game-changing ideas.

First the Pros:

1. Staying Current: We all have to strike a balance between keeping up with developments in our own fields and trying to come up with new ideas. The sheer volume of new information, especially in developing fields, means that keeping pace with even our own area of expertise has become challenging. But spend too much time just keeping up, and we become followers, not innovators, so we have to carve out time to also stretch existing knowledge. If we don’t get the balance right and fail to stay current, we risk getting leapfrogged by those who more diligently track the latest discoveries. Simultaneous invention has been pervasive at least since the development of calculus, as one discovery often signposts and lays the path for the next. So fail to stay on top of our field, and we potentially miss a relatively easy step to the next big idea. ChatGPT can become an extremely efficient tool for tracking advances without getting buried in them.

2. Pushing Outside of Our Comfort Zone: Breakthrough innovation almost by definition requires us to step beyond the boundaries of our existing knowledge. Whether we are Dyson borrowing filtration technology from a sawmill for his unique ‘filterless’ vacuum cleaner, physicians combining stem cell innovation with tech to create rejection-resistant artificial organs, or the Oculus tech mentioned above, innovation almost always requires tapping resources from outside of the established field. If we don’t do this, we not only tend towards incremental ideas, but also tend to stay in lockstep with other experts in our field. This becomes increasingly the case as an area matures, low-hanging fruit is exhausted, and domain knowledge becomes somewhat commoditized. ChatGPT simply allows us to explore beyond our field far more efficiently than we’ve ever been able to before. And as it or related tech evolves, it will inevitably enable ever more sophisticated search. From my experience it already enables some degree of analogous search if you are thoughtful about how you frame questions, allowing us to more effectively expand searches for existing solutions to problems that lie beyond the obvious. That is potentially really exciting.

Some Possible Cons:

1. Going Down the Rabbit Hole: ChatGPT is crack cocaine for the curious. Mea culpa, this has probably been the most time-consuming blog I’ve ever written. Answers inevitably lead to more questions, and it’s almost impossible to resist playing well beyond the specific goals I started with. It’s fascinating, it’s fun, and you learn a lot of stuff you didn’t know, but I at least struggle with discipline and focus when using it. Hopefully that will wear off, and I will find a balance that uses it efficiently.

2. The Illusion of Understanding: This is a bit more subtle, but searching a topic inevitably enhances our understanding of it. The act of asking questions is as much a part of learning as reading answers, and often requires deep mechanistic understanding. ChatGPT helps us probe faster, and its explanations may help us to understand concepts more quickly. But it also risks creating the illusion of understanding. When the heavy lifting of searching is shifted away from us, we get quick answers, but may also miss out on the deeper mechanistic understanding we’d have gleaned if we’d been forced to work a bit harder. And that deeper understanding can be critical when we are trying to integrate superficially different domains as part of the innovation process. For example, knowing that we can use a patient’s stem cells to minimize rejection of an artificial organ is quite different from understanding how the immune system differentiates between its own and other stem cells. The risk is that sophisticated search engines will do more of the heavy lifting, allowing us to move faster, but also leaving us with a more superficial understanding, which reduces our ability to spot roadblocks early, or to solve problems as we move to the back end of innovation and reduce an idea to practice.

3. Eureka Moments: That’s the ‘conscious’ watch-out, but there is also an unconscious one. It’s no secret that quite often our biggest ideas come when we are not actually trying. Archimedes had his Eureka moment in the bath, and many of my better ideas come when I least expect them, perhaps in the shower, when I first wake up, or when I’m out having dinner. The neuroscience of creativity helps explain this: the restructuring of problems that leads to new insight, and the integration of ideas, works mostly unconsciously, when we are not consciously focused on a problem. It’s analogous to the ‘tip of the tongue’ effect, where the harder we try to remember something, the harder it gets, but then it comes to us later when we are not trying. But the key to the Eureka moment is that we need sufficiently deep knowledge for those integrations to occur. If ChatGPT increases the illusion of understanding, we could see fewer of those Eureka moments, and fewer of the ‘obvious in hindsight’ ideas they create.

Conclusion

I think that ultimately innovation will be accelerated by ChatGPT and what follows it, perhaps quite dramatically. But I also think that we as innovators need to try to peel back the layers and understand as much as we can about these tools, as there is potential for us to trip up. We need to constantly reinvent the way we interact with them, leveraging them as sophisticated innovation tools while not letting them become oracles. We also need to ensure that we, and future generations, use them to extend our thinking skill set, not as a proxy for it. The calculator has in some ways made us all mathematical geniuses, but in other ways it has reduced large swathes of the population’s ability to do basic math. We need to be careful that ChatGPT doesn’t do the same to our need for cognition, and to deep mechanistic and critical thinking.

Image credit: Pixabay


Top 100 Innovation and Transformation Articles of 2022

2021 marked the re-birth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being re-branded as InnovationExcellence.com and being ultimately sold to DisruptorLeague.com.

Thanks to an outpouring of support I’ve ignited the fuse of this new multiple author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate we’ve pulled together the Top 100 Innovation and Transformation Articles of 2022 from our archive of over 1,000 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Bloggers of 2022, and as the volume of this blog has grown, we have brought back our monthly article ranking to complement this annual one.

But enough delay, here are the 100 most popular innovation and transformation posts of 2022.

Did your favorite make the cut?

1. A Guide to Organizing Innovation – by Jesse Nieminen

2. The Education Business Model Canvas – by Arlen Meyers, M.D.

3. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

4. Why Innovation Heroes Indicate a Dysfunctional Organization – by Steve Blank

5. The One Movie All Electric Car Designers Should Watch – by Braden Kelley

6. Don’t Forget to Innovate the Customer Experience – by Braden Kelley

7. What Latest Research Reveals About Innovation Management Software – by Jesse Nieminen

8. Is Now the Time to Finally End Our Culture of Disposability? – by Braden Kelley

9. Free Innovation Maturity Assessment – by Braden Kelley

10. Cognitive Bandwidth – Staying Innovative in ‘Interesting’ Times – by Pete Foley

11. Is Digital Different? – by John Bessant

12. Top 40 Innovation Bloggers of 2021 – Curated by Braden Kelley

13. Can We Innovate Like Elon Musk? – by Pete Foley

14. Why Amazon Wants to Sell You Robots – by Shep Hyken

15. Free Human-Centered Change Tools – by Braden Kelley

16. What is Human-Centered Change? – by Braden Kelley

17. Not Invented Here – by John Bessant

18. Top Five Reasons Customers Don’t Return – by Shep Hyken

19. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

20. Nine Innovation Roles – by Braden Kelley

21. How Consensus Kills Innovation – by Greg Satell

22. Why So Much Innoflation? – by Arlen Meyers, M.D.

23. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

24. 12 Reasons to Write Your Own Letter of Recommendation – by Arlen Meyers, M.D.

25. The Five Keys to Successful Change – by Braden Kelley

26. Innovation Theater – How to Fake It ‘Till You Make It – by Arlen Meyers, M.D.

27. Five Immutable Laws of Change – by Greg Satell

28. How to Free Ourselves of Conspiracy Theories – by Greg Satell

29. An Innovation Action Plan for the New CTO – by Steve Blank

30. How to Write a Failure Resume – by Arlen Meyers, M.D.




31. Entrepreneurs Must Think Like a Change Leader – by Braden Kelley

32. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

33. Parallels Between the 1920’s and Today Are Frightening – by Greg Satell

34. Technology Not Always the Key to Innovation – by Braden Kelley

35. The Era of Moving Fast and Breaking Things is Over – by Greg Satell

36. A Startup’s Guide to Marketing Communications – by Steve Blank

37. You Must Be Comfortable with Being Uncomfortable – by Janet Sernack

38. Four Key Attributes of Transformational Leaders – by Greg Satell

39. We Were Wrong About What Drove the 21st Century – by Greg Satell

40. Stoking Your Innovation Bonfire – by Braden Kelley

41. Now is the Time to Design Cost Out of Our Products – by Mike Shipulski

42. Why Good Ideas Fail – by Greg Satell

43. Five Myths That Kill Change and Transformation – by Greg Satell

44. 600 Free Innovation, Transformation and Design Quote Slides – Curated by Braden Kelley

45. FutureHacking – by Braden Kelley

46. Innovation Requires Constraints – by Greg Satell

47. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

48. The Pyramid of Results, Motivation and Ability – by Braden Kelley

49. Four Paradigm Shifts Defining Our Next Decade – by Greg Satell

50. Why Most Corporate Mindset Programs Are a Waste of Time – by Alain Thys




51. Impact of Cultural Differences on Innovation – by Jesse Nieminen

52. 600+ Downloadable Quote Posters – Curated by Braden Kelley

53. The Four Secrets of Innovation Implementation – by Shilpi Kumar

54. What Entrepreneurship Education Really Teaches Us – by Arlen Meyers, M.D.

55. Reset and Reconnect in a Chaotic World – by Janet Sernack

56. You Can’t Innovate Without This One Thing – by Robyn Bolton

57. Why Change Must Be Built on Common Ground – by Greg Satell

58. Four Innovation Ecosystem Building Blocks – by Greg Satell

59. Problem Seeking 101 – by Arlen Meyers, M.D.

60. Taking Personal Responsibility – Back to Leadership Basics – by Janet Sernack

61. The Lost Tribe of Medicine – by Arlen Meyers, M.D.

62. Invest Yourself in All That You Do – by Douglas Ferguson

63. Bureaucracy and Politics versus Innovation – by Braden Kelley

64. Dare to Think Differently – by Janet Sernack

65. Bridging the Gap Between Strategy and Reality – by Braden Kelley

66. Innovation vs. Invention vs. Creativity – by Braden Kelley

67. Building a Learn It All Culture – by Braden Kelley

68. Real Change Requires a Majority – by Greg Satell

69. Human-Centered Innovation Toolkit – by Braden Kelley

70. Silicon Valley Has Become a Doomsday Machine – by Greg Satell

71. Three Steps to Digital and AI Transformation – by Arlen Meyers, M.D.

72. We need MD/MBEs not MD/MBAs – by Arlen Meyers, M.D.

73. What You Must Know Before Leading a Design Thinking Workshop – by Douglas Ferguson

74. New Skills Needed for a New Era of Innovation – by Greg Satell

75. The Leader’s Guide to Making Innovation Happen – by Jesse Nieminen

76. Marriott’s Approach to Customer Service – by Shep Hyken

77. Flaws in the Crawl Walk Run Methodology – by Braden Kelley

78. Disrupt Yourself, Your Team and Your Organization – by Janet Sernack

79. Why Stupid Questions Are Important to Innovation – by Greg Satell

80. Breaking the Iceberg of Company Culture – by Douglas Ferguson




81. A Brave Post-Coronavirus New World – by Greg Satell

82. What Can Leaders Do to Have More Innovative Teams? – by Diana Porumboiu

83. Mentors Advise and Sponsors Invest – by Arlen Meyers, M.D.

84. Increasing Organizational Agility – by Braden Kelley

85. Should You Have a Department of Artificial Intelligence? – by Arlen Meyers, M.D.

86. This 9-Box Grid Can Help Grow Your Best Future Talent – by Soren Kaplan

87. Creating Employee Connection Innovations in the HR, People & Culture Space – by Chris Rollins

88. Developing 21st-Century Leader and Team Superpowers – by Janet Sernack

89. Accelerate Your Mission – by Brian Miller

90. How the Customer in 9C Saved Continental Airlines from Bankruptcy – by Howard Tiersky

91. How to Effectively Manage Remotely – by Douglas Ferguson

92. Leading a Culture of Innovation from Any Seat – by Patricia Salamone

93. Bring Newness to Corporate Learning with Gamification – by Janet Sernack

94. Selling to Generation Z – by Shep Hyken

95. Importance of Measuring Your Organization’s Innovation Maturity – by Braden Kelley

96. Innovation Champions and Pilot Partners from Outside In – by Arlen Meyers, M.D.

97. Transformation Insights – by Bruce Fairley

98. Teaching Old Fish New Tricks – by Braden Kelley

99. Innovating Through Adversity and Constraints – by Janet Sernack

100. It is Easier to Change People than to Change People – by Annette Franz

Curious which article just missed the cut? Well, here it is just for fun:

101. Chance to Help Make Futurism and Foresight Accessible – by Braden Kelley

These are the Top 100 innovation and transformation articles of 2022 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2022.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.


Unlocking the Power of Cause and Effect

GUEST POST from Greg Satell

In 2011, IBM’s Watson system beat the best human players on the game show Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn’t understand that one causes the other, which limits how helpful computers can be.

That’s beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, are working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if they’re successful we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association

Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to see them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data and that’s helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The achievement of the last decade or so is that advancements in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn’t “know” it’s cancer. It doesn’t associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can’t imagine other possibilities and suggest alternative tests.

Confounding Intervention

The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those with more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired for those particular qualities and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it’s so important that we design our systems to be auditable, explainable and transparent.
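The thermometer example above can be made concrete with a small simulation. This is a purely illustrative Python sketch (the linear model and all numbers are invented for the demo): a hidden confounder, the ambient temperature, drives both the thermometer reading and ice cream sales, so the two correlate almost perfectly, yet setting the thermometer directly, in the spirit of Pearl's do-operator, has no relationship to sales at all.

```python
import random

random.seed(0)
n = 1000

# Hidden confounder: ambient temperature (degrees C).
temperature = [random.uniform(0, 35) for _ in range(n)]

# Temperature drives BOTH the thermometer reading and ice cream sales.
thermometer = [t + random.gauss(0, 1) for t in temperature]
sales = [10 + 3 * t + random.gauss(0, 5) for t in temperature]

def corr(x, y):
    """Pearson correlation, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Observation: readings and sales look tightly linked.
print(round(corr(thermometer, sales), 2))   # near 1.0

# Intervention do(thermometer): set the reading independently of
# temperature (the "put it next to a heater" trick).
forced = [random.uniform(20, 55) for _ in range(n)]
print(round(corr(forced, sales), 2))        # near 0.0
```

Nothing in the observational data alone distinguishes the two variables; only knowledge of the intervention (or of the causal structure) reveals that the thermometer is an effect, not a cause.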

Imagining A Counterfactual

Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we’re dealing with. The way to do that is by imagining a different set of facts.

Let’s return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask if lousy teachers are enough to produce poor test scores, or if there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining a counterfactual, such as “What if there were better materials, facilities and administrators?”

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there’s been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on over the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans’ innate ability to imagine counterfactuals with machines’ ability to crunch almost limitless amounts of data can really be a game changer.

Moving Towards Smarter Machines

Make no mistake, AI systems’ ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don’t understand that thermometers don’t increase ice cream sales limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we’ll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven’t, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would result in a decrease in employment from zero to four million workers, based on a number of observational studies. That’s an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. McKinsey recently announced the launch of CausalNex, an open source library designed to identify cause and effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause and effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Challenges of Artificial Intelligence Adoption, Dissemination and Implementation

GUEST POST from Arlen Meyers, M.D.

Dissemination and Implementation Science (DIS) is a growing research field that seeks to inform how evidence-based interventions can be successfully adopted, implemented, and maintained in health care delivery and community settings.

Here is what you should know about dissemination and implementation.

Sickcare artificial intelligence products and services have a unique set of barriers to dissemination and implementation.

Every sickcare AI entrepreneur will eventually be faced with the task of finding customers willing and able to buy the product and integrate it into their facility. But not every potential customer or segment is the same.

There are differences in:

  1. The governance structure
  2. The process for vetting and choosing a particular vendor or solution
  3. The makeup of the buying group and decision makers
  4. The process customers use to disseminate and implement the solution
  5. Whether or not they are willing to work with vendors on pilots
  6. The terms and conditions of contracts
  7. The business model of the organization when it comes to working with early-stage companies
  8. How stakeholders are educated and trained
  9. When, how, and which end users and stakeholders have input into the decision
  10. The length of the sales cycle
  11. The complexity of the decision-making process
  12. Whether the product is a point solution or platform
  13. Whether the product can be used throughout all parts of the sickcare delivery network or just a few
  14. A transactional approach vs. a partnership and future development one
  15. The service after the sale arrangement

Here is what Sales Navigator won’t tell you.

Here is why ColdLinking does not work.

When it comes to AI product marketing and sales, when you have seen one successful integration, you have seen one process to make it happen, and the success of the dissemination and implementation that creates the promised results will vary from one place to the next.

Do your homework. One size does not fit all.

Image credit: Pixabay
