Tag Archives: AI

Generation AI Replacing Generation Z

by Braden Kelley

The boundary lines between named generations are a bit fuzzy, but the goal should always be to draw the boundary at an event significant enough to create substantial behavioral changes in the new generation, changes worthy of consideration when forming strategy.

I believe we have arrived at such a point and that it is time for GenZ to cede the top of strategy mountain to a new generation I call Generation AI (GenAI).

The dividing line for Generation AI falls around 2014. The people of GenAI are the first group to grow up never knowing a world without easy access to generative artificial intelligence (AI) tools, tools that have begun to transform their interactions with our institutions and with each other.

We have already seen professors and teachers having to police AI-generated school essays, while the rest of us are trying to cope with frighteningly realistic deep fake audio and video. But what other impacts on people’s behavior will we see as a result of the coming ubiquity of artificial intelligence?

It is important to remember that generative artificial intelligence is not really artificial intelligence but collective intelligence, informed by what we the people have contributed to the training/reference set. As such, these large language models are predicting the next word or combining existing content based on whatever training set they are exposed to. They are not creating original thought.

Generative AI is being built into nearly all of our existing software and cloud tools, and GenAI will grow up only knowing a reality where every application and website they interact with has an AI component to it. Generation AI will not know a time when they cannot ask an AI, in the same way that GenZ relies on social search, and GenX and Millennials assume search engines hold their answers.

Our brains are changing to focus more on processing and less on storage. These changes make us more capable, but more vulnerable too.

This new AI technology is a double-edged sword, and its effects could fall on either edge in different areas:

Option 1 – Best Case

  • Generative AI will amplify creativity by encouraging recombination of existing images, text, audio and video in new, inspiring ways, using the outputs of AI as inputs into human creativity

Option 2 – Worst Case

  • Generative AI will reduce creativity because people will become reliant on artificial intelligence to create, producing an echo chamber in which new content is generated only from existing content, AI outputs become the only outputs, and people spend more time interacting with AIs than with other people

Which of these two options on the impact of AI reliance do you see as the most likely in the areas where you focus?

How do you see Generation AI impacting the direction of societies around the world?

Are you planning to add Generation AI to your marketing strategies and strategic planning for 2024 or beyond?

Reference

For reference, here is a timeline of previous American generations, according to an article from NPR:

Though there is a consensus on the general time period for generations, there is not an agreement on the exact year that each generation begins and ends.

Generation Z – Born 2001-2013 (Age 10-22)

These kids were the first born with the Internet and are suspected to be the most individualistic and technology-dependent generation. Sometimes referred to as the iGeneration.

EDITOR’S NOTE: This description is erroneous; the differentiating factor of GenZ is that they experienced the rise of social media.

Millennials – Born 1980-2000 (Age 23-43)

They experienced the rise of the Internet, Sept. 11 and the wars that followed. Sometimes called Generation Y. Because of their dependence on technology, they are said to be entitled and narcissistic.

Generation X – Born 1965-1979 (Age 44-58)

They were originally called the baby busters because fertility rates fell after the boomers. As teenagers, they experienced the AIDS epidemic and the fall of the Berlin Wall. Sometimes called the MTV Generation, the “X” in their name refers to this generation’s desire not to be defined.

EDITOR’S NOTE: GenX also experienced the rise of the personal computer, and this has influenced their parenting of a large portion of Millennials and GenZ.

Baby Boomers – Born 1943-1964 (Age 59-80)

The boomers were born during an economic and baby boom following World War II. These hippie kids protested against the Vietnam War and participated in the civil rights movement, all with rock ‘n’ roll music blaring in the background.

Silent Generation – Born 1925-1942 (Age 81-98)

They were too young to see action in World War II and too old to participate in the fun of the Summer of Love. This label describes their conformist tendencies and belief that following the rules was a sure ticket to success.

GI Generation – Born 1901-1924 (Age 99+)

They were teenagers during the Great Depression and fought in World War II. Sometimes called the greatest generation (following a book by journalist Tom Brokaw) or the swing generation because of their jazz music.

If you’d like to sign up to learn more about my new FutureHacking™ methodology and set of tools, go here.

Build a Common Language of Innovation on your team

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

When Innovation Becomes Magic

GUEST POST from Pete Foley

Arthur C. Clarke’s Third Law famously states:

“Any sufficiently advanced technology is indistinguishable from magic”

In other words, if the technology of an advanced civilization is so far beyond comprehension, it appears magical to a less advanced one. This could take the form of a human encounter with a highly advanced extraterrestrial civilization, how current technology might be viewed by historical figures, or encounters between human cultures with different levels of scientific and technological knowledge.

Clarke’s law implicitly assumed that knowledge within a society is sufficiently democratized that we never view technology within our own civilization as ‘magic’. But a combination of specialization, rapid advancements in technology, and a highly stratified society means this is changing. Generative AI, blockchain and various forms of automation are all ‘everyday magic’ that we increasingly use, but mostly with little more than an illusion of understanding of how they work. More technological leaps are on the horizon, and as innovation accelerates exponentially, we are all going to have to navigate a world that looks and feels increasingly magical. Knowing how to do this effectively is going to become an important skill for us all.

The Magic Behind the Curtain:  So what’s the problem? Why do we need to understand the ‘magic’ behind the curtain, as long as we can operate the interface, and reap the benefits?  After all, most of us use phones, computers, cars, or take medicines without really understanding how they work.  We rely on experts to guide us, and use interfaces that help us navigate complex technology without a need for deep understanding of what goes on behind the curtain.

It’s a nuanced question. Take a car as an analogy. We certainly don’t need to know how to build one in order to use one. But we do need to know how to operate it and understand what its performance limitations are. It also helps to have at least some basic knowledge of how it works; enough to change a tire on a remote road, or to have some concept of basic mechanics to minimize the potential of being ripped off by a rogue mechanic. In a nutshell, the more we understand it, the more efficiently, safely and economically we leverage it. It’s a similar situation with medicine. It is certainly possible to defer all of our healthcare decisions to a physician. But people who partner with their doctors, and become advocates for their own health, generally have superior outcomes, are less likely to die from unintended contraindications, and typically pay less for healthcare. And this is not trivial. The third leading cause of death in Europe, behind cancer and heart disease, is issues associated with prescription medications. We don’t need to know everything to use a tool, but in most cases, the more we know the better.

The Speed/Knowledge Trade-Off: With new, increasingly complex technologies coming at us in waves, it’s becoming increasingly challenging to make sense of what’s ‘behind the curtain’. This has the potential for costly mistakes. But delaying embracing technology until we fully understand it can come with serious opportunity costs. Adopt too early, and we risk getting it wrong; too late, and we ‘miss the bus’. How many people who invested in cryptocurrency or NFTs really understood what they were doing? And how many of those have lost on those deals, often to the benefit of those with deeper knowledge? That isn’t in any way to suggest that those who are knowledgeable in those fields deliberately exploit those who aren’t, but markets tend to reward those who know, and punish those who don’t.

The AI Oracle: The recent rise of Generative AI has many people treating it essentially as an oracle. We ask it a question, and it ‘magically’ spits out an answer in a very convincing and sharable format. Few of us understand the basics of how it does this, let alone the details or limitations. We may not call it magic, but we often treat it as such. We really have little choice; we lack sufficient understanding to apply quality critical thinking to what we are told, so we have to take answers on trust. That would be brilliant if AI were foolproof. But while it is certainly right a lot of the time, it does make mistakes, often quite embarrassing ones. For example, Google’s Bard incorrectly claimed the James Webb Space Telescope had taken the first photo of a planet outside our solar system, which led to panic selling of parent company Alphabet’s stock. Generative AI is a superb innovation, but its current iterations are far from perfect. They are limited by the databases they are fed, are extremely poor at spotting their own mistakes, can be manipulated by the choice of data sets they are trained on, and lack the underlying framework of understanding that is essential for critical thinking or for making analogical connections. I’m sure that we’ll eventually solve these issues, either with iterations of current tech, or via integration of new technology platforms. But until we do, we have a brilliant, but still flawed tool. It’s mostly right, and is perfect for quickly answering a lot of questions, but its biggest vulnerability is that most users have pretty limited capability to understand when it’s wrong.

Technology Blind Spots: That, of course, is the Achilles’ heel, the blind spot, and a dilemma. If an answer is wrong and we act on it without realizing, it’s potentially trouble. But if we already know the answer, we didn’t really need to ask the AI. Of course, it’s more nuanced than that. Just getting the right answer is not always enough, as the causal understanding we pick up by solving a problem ourselves can also be important. It helps us to spot obvious errors, but also helps to generate memory, experience, problem-solving skills, buy-in, and belief in an idea. Procedural and associative memory is encoded differently from answers, and mechanistic understanding helps us to reapply insights and make analogies.

Need for Causal Understanding: Belief and buy-in can be particularly important. Different people respond to a lack of ‘internal’ understanding in different ways. Some shy away from the unknown and avoid or oppose what they don’t understand. Others embrace it, and trust the experts. There’s really no right or wrong in this. Science is a mixture of both approaches: it stands on the shoulders of giants, but advances by challenging existing theories. Good scientists are both data driven and skeptical. But in some cases skepticism based on lack of causal understanding can be a huge barrier to adoption. It has contributed to many of the debates we see today around technology adoption, including genetically engineered foods, efficacy of certain pharmaceuticals, environmental contaminants, nutrition, vaccinations, and, during Covid, RNA vaccines and even masks. Even extremely smart people can make poor decisions because of a lack of causal understanding. In 2003, Steve Jobs was advised by his physicians to undergo immediate surgery for a rare form of pancreatic cancer. Instead, he delayed the procedure for nine months and attempted to treat himself with alternative medicine, a decision that very likely cut his life tragically short.

What Should We Do? We need to embrace new tools and opportunities, but we need to do so with our eyes open. Loss aversion, and the fear of missing out, is a very powerful motivator of human behavior, and so an important driver in the adoption of new technology. But it can be costly. A lot of people lost out with crypto and NFTs because they had a fairly concrete idea of what they could miss out on if they didn’t engage, but a much less defined idea of the risk, because they didn’t deeply understand the system. Ironically, in this case, our loss aversion bias caused a significant number of people to lose out!

Similarly with AI, a lot of people are embracing it enthusiastically, in part because they are afraid of being left behind. That is probably right, but it’s important to balance this enthusiasm with an understanding of its potential limitations. We may not need to know how to build a car, but it really helps to know how to steer and when to apply the brakes. Knowing how to ask an AI questions, and when to double-check answers, are both going to be critical skills. For big decisions, ‘second opinions’ are going to become extremely important. And the human ability to interpret answers through a filter of nuance, critical thinking, different perspectives, analogy and appropriate skepticism is going to be a critical element in fully leveraging AI technology, at least for now.

Today AI is still a tool, not an oracle. It augments our intelligence, but for complex, important or nuanced decisions or information retrieval, I’d be wary of sitting back and letting it replace us. Its ability to process data in quantity is certainly superior to any human’s, but we still need humans to interpret, challenge and integrate information. The winners of this iteration of AI technology will be those who become highly skilled at walking that line, and who are good at managing the trade-off between speed and accuracy using AI as a tool. The good news is that we are naturally good at this; it’s a critical function of the human brain, embodied in the way it balances Kahneman’s System 1 and System 2 thinking. Future iterations may not need us, but for now AI is a powerful partner and tool, not a replacement.

Image credit: Pixabay

Sustaining Imagination is Hard

by Braden Kelley

Recently I stumbled across a new Royal Institution video of Martin Reeves, a managing director and senior partner in BCG’s San Francisco office. Martin leads the BCG Henderson Institute, BCG’s vehicle for exploring ideas from beyond the world of business that have implications for business strategy and management.

I previously interviewed Martin along with his co-author Dr. Jack Fuller in a post titled ‘Building an Imagination Machine’. In this video you’ll find him presenting content along similar themes. I think you’ll enjoy it:

Bonus points to anyone who can name this napkin sketch in the comments.

In the video Martin explores several of the frameworks introduced in his book The Imagination Machine. One of the central tenets of Martin’s video is the fact that sustaining imagination is hard. There are three core reasons why this is so:

  1. Overspecialization – As companies grow, jobs become smaller in scope and greater in specialization, leading to myopia as fewer and fewer people see the problems the company was started to solve in the first place
  2. Insularity – As companies grow, the majority of employees shift from being externally facing to internally facing, isolating more and more employees from the customer and their evolving wants and needs
  3. Complacency – As companies become successful, the successful parts of the business predictably receive most of the attention and investment, making it difficult for new efforts to receive the care and feeding necessary for them to grow and, dare I say, replace the currently idolized parts of the business

I do like the notion Martin presents that companies wishing to be continuously successful should continuously seek to be surprised, and should invest energy in rethinking, exploring and probing in areas where they find themselves surprised.

Martin also explores some of the common misconceptions about imagination, including the ideas that imagination is:

  1. It is a solitary endeavor
  2. It comes out of nowhere
  3. It is unmanageable

And finally, Martin puts forward his ideas on how imagination can be harnessed systematically, using a simple six-step model:

  1. Seduction – Where can we find surprise?
  2. Idea – Do we embrace the messiness of the napkin sketch? Or expect perfection?
  3. Collision – Where can we collide this idea with the real world for validation or more surprise?
  4. Epidemic – How can we foster collective imagination? What behaviors are we encouraging?
  5. New Ordinary – How can we create new norms? What evolvable scripts can we create that live in between the 500-page manual and the one-sentence vision?
  6. Encore – How can we sustain imagination? How can we maintain a Day One mentality?

And no speech in 2023 would be complete without some analysis of what role artificial intelligence (AI) has to play. Martin’s perspective is that when it comes to the different levels of cognition, AI might be good at finding patterns of correlation, but humans have more advanced capabilities than machines when it comes to finding causation and counterfactual opportunities. There is an opportunity for all of us to think about how we can leverage AI across the six steps in the model above to accelerate or enhance our human efforts.

To close, Martin highlighted that when it comes to leading re-imagination, it is important to look outward, to self-disrupt, to establish heroic goals, utilize multiple mental models, and foster playfulness and experimentation across the organization to help keep imagination alive.

p.s. If you’re committed to learning the art and science of getting to the future first, then be sure and subscribe to my newsletter to make sure you’re one of the first to get certified in the FutureHacking™ methodology.

Image credits: Netflix


Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week: https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s a clever innovation, but it also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to anticipate potential problems. We often still miss things, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus with recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But Behavioral Science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media. There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications. There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully, or many other situations where the benefit/paternalism equation is pretty clear.
But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, potentially enabling damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and the potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try to anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Airbags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the British scrapped the reward. Cobra breeders then set the now-worthless snakes free. The result was more cobras than at the original start point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay


Rise of the Prompt Engineer

GUEST POST from Art Inteligencia

The world of tech is ever-evolving, and the rise of the prompt engineer is just the latest development. Prompt engineers are software developers who specialize in building natural language processing (NLP) systems, like voice assistants and chatbots, to enable users to interact with computer systems using spoken or written language. This burgeoning field is quickly becoming essential for businesses of all sizes, from startups to large enterprises, to remain competitive.

Five Skills to Look for When Hiring a Prompt Engineer

But with the rapid growth of the prompt engineer field, it can be difficult to hire the right candidate. To ensure you’re getting the best engineer for your project, there are a few key skills you should look for:

1. Technical Knowledge: A competent prompt engineer should have a deep understanding of the underlying technologies used to create NLP systems, such as machine learning, natural language processing, and speech recognition. They should also have experience developing complex algorithms and working with big data.

2. Problem-Solving: Prompt engineering is a highly creative field, so the ideal candidate should have the ability to think outside the box and come up with innovative solutions to problems.

3. Communication: A prompt engineer should be able to effectively communicate their ideas to both technical and non-technical audiences in both written and verbal formats.

4. Flexibility: With the ever-changing landscape of the tech world, prompt engineers should be comfortable working in an environment of constant change and innovation.

5. Time Management: Prompt engineers are often involved in multiple projects at once, so they should be able to manage their own time efficiently.

These are just a few of the skills to look for when hiring a prompt engineer. The right candidate will be able to combine these skills to create effective and user-friendly natural language processing systems that will help your business stay ahead of the competition.

But what if you want or need to build your own artificial intelligence queries without the assistance of a professional prompt engineer?

Four Secrets of Writing a Good AI Prompt

As AI technology continues to advance, it is important to understand how to write a good prompt for AI to ensure that it produces accurate and meaningful results. Here are some of the secrets to writing a good prompt for AI.

1. Start with a clear goal: Before you begin writing a prompt for AI, it is important to have a clear goal in mind. What are you trying to accomplish with the AI? What kind of outcome do you hope to achieve? Knowing the answers to these questions will help you write a prompt that is focused and effective.

2. Keep it simple: AI prompts should be as straightforward and simple as possible. Avoid using jargon or complicated language that could confuse the AI. Also, try to keep the prompt as short as possible so that it is easier for the AI to understand.

3. Be specific: To get the most accurate results from your AI, you should provide a specific prompt that clearly outlines what you are asking. You should also provide any relevant information, such as the data or information that the AI needs to work with.

4. Test your prompt: Before you use your AI prompt in a real-world situation, it is important to test it to make sure that it produces the results that you are expecting. This will help you identify any issues with the prompt or the AI itself and make the necessary adjustments.

By following these tips, you can ensure that your AI prompt is effective and produces the results that you are looking for. Writing a good prompt for AI is a skill that takes practice, but by following these secrets you can improve your results.
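As a minimal illustration, the four tips above can be captured in a small prompt-builder helper. The function and field names here are purely illustrative and not part of any particular AI tool:

```python
def build_prompt(goal: str, context: str = "", constraints: str = "") -> str:
    """Assemble an AI prompt following the four tips: a clear goal
    up front, simple language, specific context, and a short,
    testable form."""
    parts = [goal.strip()]  # 1. start with a clear goal
    if context:
        parts.append("Context: " + context.strip())  # 3. be specific
    if constraints:
        parts.append("Constraints: " + constraints.strip())  # 2. keep it simple
    return "\n".join(parts)  # 4. short enough to test and iterate on

prompt = build_prompt(
    goal="Write a 100-word blog intro about home tooth-brushing habits.",
    context="Audience: patients of a family dental practice.",
    constraints="Plain language, no medical jargon.",
)
```

Testing (tip 4) then amounts to running the assembled prompt against the AI, checking the output, and adjusting the goal, context, or constraints before relying on it.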

So, whether you look to write your own AI prompts or feel the need to hire a professional prompt engineer, now you are equipped to be successful either way!

Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






AI is a Powerful New Tool for Entrepreneurs

by Braden Kelley

In today’s digital, always connected world, Google too often stands as a gatekeeper between entrepreneurs and small businesses on one side and financial success on the other. Ranking well in the search engines requires time and expertise that many entrepreneurs and small business owners don’t have, because their focus must be on fine-tuning the value proposition and operations of their business.

The day after Google was invented, the first search engine marketing firm was probably created to make money off of hardworking entrepreneurs and small business owners trying to make the most of their investment in a web site through search engine optimization (SEO), keyword advertising, and social media strategies.

According to IBISWorld the market size of the SEO & Internet Marketing Consulting industry is $75.0 Billion. Yes, that’s billion with a ‘b’.

Creating content for web sites is an even bigger market. According to Technavio, the global content marketing market is estimated to INCREASE by $584.0 Billion between 2022 and 2027. That is the growth number alone; the market itself is MUCH larger.

The introduction of ChatGPT threatens to upend these markets, to the detriment of this group of businesses, but to the benefit of the nearly 200,000 dentists in the United States, the more than 100,000 plumbers, the million and a half real estate agents, and numerous other categories of small businesses.

Many of these content marketing businesses create a number of different types of content for the tens of millions of small businesses in the United States, from blog articles to tweets to Facebook pages and everything in between. These agencies hire recent college graduates or offshore resources in places like the Philippines, India, Pakistan, Ecuador, Romania, and lots of other locations around the world, and bill the work to their clients at a much higher rate.

Outsourcing content creation has been a great way for small businesses to leverage external resources so they can focus on the business, but now may be the time to bring some of this content creation work back in house, particularly where the content is fairly straightforward and informational for the average visitor to the web site.

With ChatGPT you can ask it to “write me an article on how to brush your teeth” or “write me ten tweets on tooth brushing” or “write me a Facebook post on the most common reasons a toilet won’t flush.”
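For those comfortable with a little code, the same kind of request can be sketched programmatically. The payload below follows the general shape of a chat-style API request; the model name and the commented-out client call are assumptions for illustration, not a guaranteed interface:

```python
def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion-style request payload.
    The model name is a placeholder assumption; substitute whatever
    model your account actually has access to."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request(
    "Write me a Facebook post on the most common reasons a toilet won't flush."
)

# Sending it would look roughly like this (requires the `openai`
# package and an API key; shown only as a sketch):
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**payload)
# print(response.choices[0].message.content)
```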

I asked it to do the last one for me and here is what it came up with:

Continue reading the rest of this article on CustomerThink (including the ChatGPT results)

Image credits: Pixabay







Will ChatGPT Make Us More or Less Innovative?

GUEST POST from Pete Foley

The rapid emergence of increasingly sophisticated ‘AI’ programs such as ChatGPT will profoundly impact our world in many ways. That will inevitably include innovation, especially the front end. But will it ultimately help or hurt us? Better access to information should be a huge benefit, and my intuition was to dive in and take full advantage. I still think it has enormous upside, but I also think it needs to be treated with care. At this point at least, it’s still a tool, not an oracle. It’s an excellent source for tapping existing information, but it is not (yet) a source of new ideas. As with any tool, those who deeply understand how it works, its benefits, and its limitations will get the most from it. And those who use it wrongly could end up doing more harm than good. So below I’ve mapped out a few pros and cons that I see. It’s new, and like everybody else, I’m on a learning curve, so I would welcome any and all thoughts on these pros and cons:

What is Innovation?

First, a bit of a sidebar. To understand how to use a tool, I need at least a reasonably clear idea of what goals I want it to help me achieve. Obviously ‘what is innovation’ is a somewhat debatable topic, but my working model is that the front end of innovation typically involves taking existing knowledge or technology and combining it in new, useful ways, or in new contexts, to create something that is new, useful, and ideally understandable and accessible. This requires deep knowledge, curiosity, and the ability to reframe problems to find new uses for existing assets. A recent illustrative example is Oculus Rift, an innovation that helped make virtual reality accessible by combining fairly mundane components, including a mobile phone screen, a tracking sensor, and ski goggles, into something new. But innovation comes in many forms, and can also involve serendipity and keen observation, as in Alexander Fleming’s original discovery of penicillin. Even this required deep domain knowledge to spot the opportunity and reframe an undesirable mold into a (very) useful pharmaceutical. So my starting point is: which parts of this can ChatGPT help with?

Another sidebar is that innovation is of course far more than simply discovery or a Eureka moment. Turning an idea into a viable product or service usually requires considerable work, with the development of penicillin being a case in point. I’ve no doubt that ChatGPT and its inevitable ‘progeny’ will be of considerable help in that part of the process too. But for starters I’ve focused on what it brings to the discovery phase, and the generation of big, game-changing ideas.

First the Pros:

1. Staying Current: We all have to strike a balance between keeping up with developments in our own fields and trying to come up with new ideas. The sheer volume of new information, especially in developing fields, means that keeping pace with even our own area of expertise has become challenging. But spend too much time just keeping up, and we become followers, not innovators, so we have to carve out time to also stretch existing knowledge. If we don’t get the balance right and fail to stay current, we risk getting leapfrogged by those who more diligently track the latest discoveries. Simultaneous invention has been pervasive at least since the development of calculus, as one discovery often signposts and lays the path for the next. Fail to stay on top of our field, and we potentially miss a relatively easy step to the next big idea. ChatGPT can become an extremely efficient tool for tracking advances without getting buried in them.

2. Pushing Outside of our Comfort Zone: Breakthrough innovation almost by definition requires us to step beyond the boundaries of our existing knowledge. Whether we are Dyson stealing filtration technology from a sawmill for his unique ‘filterless’ vacuum cleaner, physicians combining stem cell innovation with tech to create rejection-resistant artificial organs, or the Oculus tech mentioned above, innovation almost always requires tapping resources from outside of the established field. If we don’t do this, then we not only tend toward incremental ideas, but also tend to stay in lockstep with other experts in our field. This becomes increasingly the case as an area matures, low-hanging fruit is exhausted, and domain knowledge becomes somewhat commoditized. ChatGPT simply allows us to explore beyond our field far more efficiently than we’ve ever been able to before. And as it or related tech evolves, it will inevitably enable ever more sophisticated search. From my experience it already enables some degree of analogous search if you are thoughtful about how to frame questions, allowing us to more effectively expand searches for existing solutions to problems that lie beyond the obvious. That is potentially really exciting.

Some Possible Cons:

1. Going Down the Rabbit Hole: ChatGPT is crack cocaine for the curious. Mea culpa: this has probably been the most time-consuming blog I’ve ever written. Answers inevitably lead to more questions, and it’s almost impossible to resist playing well beyond the specific goals I started with. It’s fascinating, it’s fun, and you learn a lot of stuff you didn’t know, but I at least struggle with discipline and focus when using it. Hopefully that will wear off, and I will find a balance that uses it efficiently.

2. The Illusion of Understanding: This is a bit more subtle, but the act of researching a topic inevitably enhances our understanding of it. Asking questions is as much a part of learning as reading answers, and often requires deep mechanistic understanding. ChatGPT helps us probe faster, and its explanations may help us to understand concepts more quickly. But it also risks creating an illusion of understanding. When the heavy lifting of searching is shifted away from us, we get quick answers, but may also miss out on the deeper mechanistic understanding we’d have gleaned if we’d been forced to work a bit harder. And that deeper understanding can be critical when we are trying to integrate superficially different domains as part of the innovation process. For example, knowing that we can use a patient’s stem cells to minimize rejection of an artificial organ is quite different from understanding how the immune system differentiates between its own and other stem cells. The risk is that sophisticated search engines will do more heavy lifting and allow us to move faster, but also leave us with a more superficial understanding, which reduces our ability to spot roadblocks early, or to solve problems as we move to the back end of innovation and reduce an idea to practice.

3. Eureka Moments: That’s the ‘conscious’ watch-out, but there is also an unconscious one. It’s no secret that quite often our biggest ideas come when we are not actually trying. Archimedes had his Eureka moment in the bath, and many of my better ideas come when I least expect them, perhaps in the shower, when I first wake up, or when I’m out having dinner. The neuroscience of creativity helps explain this: the restructuring of problems that leads to new insight, and the integration of ideas, works mostly unconsciously, when we are not consciously focused on a problem. It’s analogous to the ‘tip of the tongue’ effect, where the harder we try to remember something, the harder it gets, but then it comes to us later when we are not trying. But the key to the Eureka moment is that we need sufficiently deep knowledge for those integrations to occur. If ChatGPT increases the illusion of understanding, we could see fewer of those Eureka moments, and fewer of the ‘obvious in hindsight’ ideas they create.

Conclusion

I think that ultimately innovation will be accelerated by ChatGPT and what follows it, perhaps quite dramatically. But I also think that we as innovators need to try to peel back the layers and understand as much as we can about these tools, as there is potential for us to trip up. We need to constantly reinvent the way we interact with them, leveraging them as sophisticated innovation tools while avoiding letting them become oracles. We also need to ensure that we, and future generations, use them to extend our thinking skill set, not as a proxy for it. The calculator has in some ways made us all mathematical geniuses, but in other ways has reduced large swathes of the population’s ability to do basic math. We need to be careful that ChatGPT doesn’t do the same to our need for cognition, and to deep mechanistic and critical thinking.

Image credit: Pixabay







Beyond the Hype

Practical Applications of AI for Human-Centered Innovation

GUEST POST from Chateau G Pato

The air is thick with the buzz of Artificial Intelligence. From Davos to daily headlines, the conversation often oscillates between utopian dreams and dystopian fears. As a thought leader focused on human-centered change and innovation, my perspective cuts through this noise: AI is not just a technology; it is a powerful amplifier of human capability, especially when applied with empathy and a deep understanding of human needs. The true innovation isn’t in what AI can do, but in how it enables humans to do more, better, and more humanely.

Too many organizations are chasing AI for the sake of AI, hoping to find a magic bullet for efficiency. This misses the point entirely. The most transformative applications of AI in innovation are those that don’t replace humans, but rather augment their unique strengths — creativity, empathy, critical thinking, and ethical judgment. This article explores practical, human-centered applications of AI that move beyond the hype to deliver tangible value by putting people at the core of the AI-driven innovation process. It’s about designing a future where humanity remains in the loop, guiding and benefiting from intelligent systems.

AI as an Empathy Amplifier: Deepening Understanding

Human-centered innovation begins with deep empathy for users, customers, and employees. Traditionally, gathering and synthesizing this understanding has been a labor-intensive, often qualitative, process. AI is revolutionizing this by giving innovators superpowers in understanding human context:

  • Sentiment Analysis for Voice of Customer (VoC): AI can process vast quantities of unstructured feedback — customer reviews, social media comments, call center transcripts — to identify emerging pain points, unspoken desires, and critical satisfaction drivers, often in real-time. This provides a granular, data-driven understanding of user sentiment that human analysts alone could never achieve at scale, leading to faster, more targeted product improvements.
  • Personalized Journeys & Predictive Needs: By analyzing behavioral data, AI can predict individual user needs and preferences, allowing for hyper-personalized product recommendations, customized learning paths, or proactive support. This moves from reactive service to anticipatory human care, boosting customer loyalty and reducing friction.
  • Contextualizing Employee Experience (EX): AI can analyze internal communications, HR feedback, and engagement surveys to identify patterns of burnout, identify skill gaps, or flag cultural friction points, allowing leaders to intervene with targeted, human-centric solutions that improve employee well-being and productivity. This directly impacts talent retention and operational efficiency.
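To make the sentiment-analysis bullet concrete, here is a deliberately tiny lexicon-based scorer. Real VoC pipelines use trained language models, so treat this purely as a sketch of the input/output shape; the word lists are invented for illustration:

```python
# Toy sentiment lexicons -- invented for illustration only.
POSITIVE = {"love", "great", "fast", "helpful", "easy"}
NEGATIVE = {"slow", "broken", "confusing", "crash", "hate"}

def sentiment(comment: str) -> float:
    """Score a piece of customer feedback in [-1, 1]:
    +1 if all sentiment-bearing words are positive,
    -1 if all are negative, 0 when there is no signal."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0
    return (pos - neg) / (pos + neg)
```

A production system would replace the word lists with a trained classifier, but the pipeline shape is the same: raw comments in, a comparable score out, aggregated across thousands of reviews.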

“The best AI applications don’t automate human intuition; they liberate it, freeing us to focus on the ‘why’ and ‘how’ of human experience. This is AI as a partner, not a replacement.” — Braden Kelley


Case Study 1: AI-Powered User Research at Adobe

The Challenge:

Adobe, with its vast suite of creative tools, faces the constant challenge of understanding the diverse, evolving needs of millions of users — from professional designers to casual creators. Traditional user research (surveys, interviews, focus groups) is time-consuming and expensive, making it difficult to keep pace with rapid product development cycles and emerging user behaviors.

The AI-Powered Human-Centered Solution:

Adobe developed internal AI tools that leverage natural language processing (NLP) to analyze immense volumes of unstructured user feedback from forums, support tickets, app store reviews, and in-app telemetry. These AI systems identify recurring themes, emerging feature requests, and points of friction with remarkable speed and accuracy. Instead of replacing human researchers, the AI acts as an ‘insight engine,’ highlighting critical areas for human qualitative investigation. Researchers then use these AI-generated insights to conduct more focused, empathetic interviews and design targeted usability tests, ensuring human intelligence remains in the loop for crucial interpretation and validation.

The Innovation Impact:

This approach drastically accelerates the ideation and validation phases of Adobe’s product development, translating directly into faster time-to-market for new features. It allows human designers to spend less time sifting through data and more time synthesizing insights, collaborating on creative solutions, and directly interacting with users on the most impactful issues. Products are developed with a deeper, faster, and more scalable understanding of user pain points and desires, leading to higher adoption, stronger user loyalty, and ultimately, increased revenue.


AI as a Creativity & Productivity Partner: Amplifying Output

Beyond empathy, AI is fundamentally transforming how human innovators generate ideas, prototype solutions, and execute complex projects, not by replacing creative thought, but by amplifying it while maintaining human oversight.

  • Generative AI for Ideation & Concepting: Large Language Models (LLMs) can act as powerful brainstorming partners, generating hundreds of diverse ideas, marketing slogans, or design concepts from a simple prompt. This allows human creatives to explore a broader solution space faster, finding novel angles they might have missed, thereby reducing ideation cycle time and boosting innovation output.
  • Automated Prototyping & Simulation: AI can rapidly generate low-fidelity prototypes from design specifications, simulate user interactions, or even predict the performance of a physical product before it’s built. This drastically reduces the time and cost of the early innovation cycle, making experimentation more accessible and leading to significant R&D savings.
  • Intelligent Task Automation (Beyond RPA): While Robotic Process Automation (RPA) handles repetitive tasks, AI goes further. It can intelligently automate the contextual parts of a job, managing schedules, prioritizing communications, or summarizing complex documents, freeing human workers for higher-value, creative problem-solving. This leads to increased employee satisfaction and higher strategic output.

Case Study 2: Spotify’s AI-Driven Music Discovery & Creator Tools

The Challenge:

Spotify’s core challenge is matching millions of users with tens of millions of songs, constantly evolving tastes, and emerging artists. Simultaneously, they need to empower artists to find their audience and create efficiently in a crowded market. Traditional human curation alone couldn’t scale to this complexity.

The AI-Powered Human-Centered Solution:

Spotify uses a sophisticated AI engine to power its personalized recommendation algorithms (Discover Weekly, Daily Mixes). This AI doesn’t just match songs; it understands context — mood, activity, time of day, and even the subtle social signals of listening. This frees human curators to focus on high-level thematic curation, editorial playlists, and breaking new artists, rather than sifting through endless catalogs. More recently, Spotify is also exploring AI tools for artists, assisting with everything from mastering tracks to suggesting optimal release times based on audience analytics, always with human creators retaining final creative control.

The Innovation Impact:

The AI system allows Spotify to deliver a highly personalized and human-feeling music discovery experience at an unimaginable scale, directly driving user engagement and subscriber retention. For artists, AI acts as a creative assistant and market intelligence tool, allowing them to focus on making music while gaining insights into audience behavior and optimizing their reach. This symbiotic relationship between human creativity and AI efficiency is a hallmark of human-centered innovation, resulting in a stronger platform ecosystem for both consumers and creators.
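Spotify’s actual engine is proprietary and far richer, but the core matching idea behind such recommenders can be sketched with cosine similarity between a listener’s taste profile and candidate-track feature vectors. The track names and feature values below are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical catalog: (energy, acousticness, danceability) per track.
TRACKS = {
    "Track A": (0.9, 0.1, 0.8),
    "Track B": (0.2, 0.9, 0.3),
    "Track C": (0.8, 0.2, 0.9),
}

def recommend(taste, k=2):
    """Return the k catalog tracks closest to the listener's taste vector."""
    ranked = sorted(TRACKS, key=lambda t: cosine(taste, TRACKS[t]), reverse=True)
    return ranked[:k]
```

A high-energy, danceable taste profile surfaces the high-energy tracks first; a mellow, acoustic profile surfaces the acoustic one. The real system layers context (mood, time of day, social signals) on top of this basic matching step.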

The future of innovation isn’t about AI replacing humans; it’s about AI elevating humanity. By focusing on how AI can amplify empathy, foster creativity, and liberate us from mundane tasks, we can build a future where technology truly serves people. This requires a commitment to responsible AI development — ensuring fairness, transparency, and human oversight. The challenge for leaders is not just to adopt AI, but to design its integration with a human-centered lens, ensuring it empowers, rather than diminishes, the human spirit of innovation, and delivers measurable value across the organization.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone all on the same page for change. Find out more about the methodology and tools, including the book Charting Change by following the link. Be sure and download the TEN FREE TOOLS while you’re here.

Image credit: Unsplash







The Ecosystem Advantage

Why No Organization Innovates Alone Anymore

GUEST POST from Chateau G Pato

For centuries, the story of innovation was a story of closed walls and proprietary secrets. Companies poured resources into internal R&D labs, operating under the fiercely competitive belief that only self-reliance could guarantee advantage. This mindset, rooted in the industrial age, is now the single greatest obstacle to sustained change and growth. As a human-centered change and innovation thought leader, I assert that today’s most profound breakthroughs occur not within the isolated organization, but within expansive, fluid innovation ecosystems. The future belongs to the orchestrators, not the hoarders.

The speed and complexity of modern disruption — from advanced digital services to grand societal challenges—render the solo innovation model obsolete. No single company, no matter how large or well-funded, possesses all the necessary capital, talent, data, or technical expertise. The Ecosystem Advantage is the strategic realization that exponential innovation requires the symbiotic sharing of risk, resources, and intellectual property across a network of partners—customers, suppliers, competitors, startups, and academia. Critically, this collaborative model is inherently more human-centered because it forces the integration of diverse perspectives, mitigating internal blind spots and algorithmic bias.

Modern technology — APIs for seamless data exchange, cloud platforms for shared development, and secure tools like blockchain for transparent IP tracking — makes this complex collaboration technically feasible. The challenge is no longer technological; it is strategic and cultural: managing complexity and balancing competition with collaboration.

The Three Strategic Imperatives of Ecosystem Innovation

To transition from isolated R&D to ecosystem orchestration, leaders must embrace three core strategic shifts:

  • 1. Shift from Ownership to Access: Abandon the idea that you must own every asset, technology, or line of code. The strategic imperative is to gain timely access to specialized capabilities, whether through open-source collaboration, strategic partnerships, or co-development agreements. This drastically reduces sunk costs and accelerates time-to-market.
  • 2. Curate the Edges for Diversity: Innovation often arises from the periphery—from startups, adjacent industries, or unexpected voices. Ecosystem leaders must proactively curate relationships at the “edges” of their industry, using ventures, accelerators, and challenge platforms to source disruptive ideas and integrate them rapidly. This diversity of thought is the engine of human-centered innovation.
  • 3. Govern for Trust, Not Control: Traditional contracts focused on control and IP protection can stifle the necessary fluid exchange of an ecosystem. Effective orchestration requires governance frameworks that prioritize trust, transparency, and a clearly defined mutual value proposition. The reward must be distributed fairly and clearly articulated to incentivize continuous participation and manage the inherent complexity.

“If you try to innovate alone, your speed is limited to your weakest internal link. If you innovate in an ecosystem, your speed is limited only by the velocity and diversity of your network.”


Case Study 1: Apple’s App Store – Ecosystem as a Business Model

The Challenge:

When the iPhone launched in 2007, its initial functionality was limited. The challenge was rapidly expanding the utility and perceived value of the platform beyond Apple’s internal capacity to develop software, making it indispensable to billions of users globally.

The Ecosystem Solution:

Apple did not try to develop all the necessary applications internally. Instead, it built the App Store — a highly curated platform that served as a controlled gateway for third-party developers. This move fundamentally shifted Apple’s role from a monolithic software provider to an ecosystem orchestrator. Apple provided the core technology (iOS, hardware APIs, payment processing) and governance rules, while external developers contributed the innovation, content, and diverse features.

The Innovation Impact:

The App Store unlocked an unprecedented flywheel effect. External developers created billions of dollars in new services, simultaneously making the iPhone platform exponentially more valuable and cementing Apple’s dominance. This model proved that by prioritizing access to external intellectual capital and accepting the risk of external development, the orchestrator gains massive leverage, speed, and market penetration.


Case Study 2: The Partnership on AI (PAI) – Ecosystem for Ethical Governance

The Challenge:

The development of advanced Artificial Intelligence poses complex, societal-level challenges related to ethics, fairness, and safety—issues that cannot be solved by any one company, given the competitive pressures in the sector.

The Ecosystem Solution:

The Partnership on AI (PAI) was established by major tech competitors (including Google, Amazon, Meta, Microsoft, and others), alongside civil society, academic, and journalistic organizations. PAI functions as a non-competitive ecosystem designed for pre-competitive alignment on ethical and human-centered AI standards. Instead of hoarding proprietary research, members collaborate openly on principles, best practices, and research that aims to ensure AI benefits society while mitigating risks like bias and misuse.

The Innovation Impact:

PAI demonstrates that ecosystems are not just for product innovation; they are essential for governance innovation. By establishing a shared, multi-stakeholder framework, the partnership reduces regulatory risk for all participants and ensures that the human element (represented by civil society and academics) is integrated into the design of core AI principles. This collaboration creates a foundational layer of ethical trust and shared responsibility, which is a prerequisite for the public adoption of exponential technologies.


The New Leadership Imperative: Be the Nexus

The Ecosystem Advantage is a human-centered mandate. It recognizes that the best ideas are often housed outside your walls and that true change requires collective action. For leaders, this means shedding the scarcity mindset and adopting a role as a Nexus — a strategic connector who enables value to flow freely and safely across boundaries.

Success is no longer measured by the size of your internal R&D budget, but by the health, diversity, and velocity of your external network. To thrive in the era of exponential change, you must master the three imperatives: prioritizing access over ownership, proactively curating the edges of your industry, and establishing governance models built on trust. Stop trying to win the race alone. Start building the highway for everyone; that is the new competitive advantage.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone all on the same page for change. Find out more about the methodology and tools, including the book Charting Change by following the link. Be sure and download the TEN FREE TOOLS while you’re here.

Image credit: Pixabay







The Symbiotic Relationship

When Humans and AI Innovate Together

GUEST POST from Chateau G Pato

The narrative surrounding Artificial Intelligence often veers into two extremes: utopian savior or dystopian overlord. Both miss the profound truth of our current inflection point. As a human-centered change and innovation thought leader, I argue that the most impactful future of AI is not one where machines replace humans, nor one where humans merely manage machines. Instead, it is a symbiotic relationship — a partnership where the unique strengths of human creativity, empathy, and intuition merge with AI’s unparalleled speed, scale, and analytical power. This “Human-AI Teaming” is not just an operational advantage; it is the definitive engine for exponential, human-centered innovation.

The true genius of AI lies not in its ability to replicate human thought, but to augment it. Humans excel at divergent thinking, ethical reasoning, abstract problem framing, and connecting seemingly unrelated concepts. AI excels at convergent thinking, pattern recognition in vast datasets, rapid prototyping, and optimizing complex systems. When these distinct capabilities are deliberately integrated, the result is a cognitive leap forward—a powerful fusion, much like a mythical centaur, that delivers solutions previously unimaginable. This shift demands a radical rethink of organizational structures, skill development, and how we define “innovation” itself, acknowledging potential pitfalls like algorithmic bias and explainability challenges not as roadblocks, but as design challenges for stronger symbiosis.

The Pillars of Human-AI Symbiosis in Innovation

Building a truly symbiotic innovation capability requires focus on three strategic pillars:

  1. AI as a Cognitive Multiplier: Treat AI not as an autonomous decision-maker but as an extension of human intellect. AI excels at hypothesis generation, data synthesis, anomaly detection, and surfacing diverse perspectives drawn from vast amounts of information, supercharging human problem-solving and letting us explore far more options than before.
  2. Humans as Ethical & Creative Architects: The human role is elevated to that of architect and guide. We define the problem, set the ethical boundaries, provide the contextual nuance, and apply the “human filter” to AI’s outputs. Our unique capacity for empathy, for understanding unspoken needs, and for managing AI’s inherent biases remains irreplaceable in truly human-centered design.
  3. Iterative Feedback Loops: The symbiotic relationship thrives on constant learning. Humans train AI with nuanced feedback, helping it handle complex, subjective scenarios and correct for bias. AI, in turn, provides data-driven insights and rapid experimentation capabilities that help humans refine their hypotheses and accelerate the innovation cycle. This continuous exchange improves both human understanding and AI performance.
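The third pillar can be sketched as a simple loop, purely as an illustration: the function names are invented, and reducing human judgment to a scoring function is a deliberate (and drastic) simplification, not a claim about how real human-AI teams work.

```python
def ai_propose(candidates, scores, k=3):
    """Toy 'AI' step: rank the known candidates by accumulated human
    feedback and return the top-k for the next review round."""
    return sorted(candidates, key=lambda c: scores.get(c, 0.0), reverse=True)[:k]

def human_review(proposals, preference):
    """Toy 'human' step: score each proposal. Here 'preference' stands in
    for human judgment (empathy, ethics, contextual nuance)."""
    return {p: preference(p) for p in proposals}

def feedback_loop(candidates, preference, rounds=3, k=3):
    """Alternate AI proposal and human review; each round's feedback
    shapes what the AI surfaces next."""
    scores = {}
    for _ in range(rounds):
        proposals = ai_propose(candidates, scores, k)
        scores.update(human_review(proposals, preference))
    return ai_propose(candidates, scores, k=1)[0]
```

The point of the sketch is the shape of the exchange, not the mechanics: the AI narrows the field quickly, the human supplies the judgment the AI lacks, and each pass refines the next.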

“The future of innovation isn’t about AI or humans. It’s about how elegantly we can weave the unparalleled strengths of both into a singular, accelerated creative force.” — Satya Nadella


Case Study 1: Moderna and AI-Driven Vaccine Development

The Challenge:

Developing a vaccine for a novel pathogen like SARS-CoV-2 traditionally takes years, an impossibly long timeline during a pandemic. The complexity of mRNA sequence design, protein folding prediction, and clinical trial design exceeded what human teams could manage alone.

The Symbiotic Innovation:

Moderna leveraged an AI-first approach where human scientists defined the immunological targets and ethical parameters, but AI algorithms rapidly designed, optimized, and tested millions of potential mRNA sequences. AI analyzed vast genomic databases to predict optimal antigen structures and identify potential immune responses. Human scientists then performed the critical biological testing and validation, refined these AI-generated candidates, and managed the ethical and logistical complexities of clinical trials and regulatory approval. The explainability of AI’s outputs was crucial for human trust and regulatory acceptance.
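As a loose illustration of this division of labor (emphatically not Moderna's actual pipeline), a screening workflow can be mocked up: the "AI" step cheaply ranks an enormous candidate space, and the expensive "human" step is applied only to the short list. The sequence alphabet is real RNA notation, but the stability predictor and the wet-lab check below are invented stand-ins.

```python
import itertools

BASES = "ACGU"  # RNA nucleotide alphabet

def toy_stability(seq):
    """Invented stand-in for an AI structure predictor: reward G/C
    content (G-C pairs bind more strongly in real RNA)."""
    return sum(1 for b in seq if b in "GC") / len(seq)

def ai_screen(length=4, top_k=5):
    """'AI' step: exhaustively enumerate the candidate space and rank it
    with the fast computational predictor."""
    candidates = ("".join(p) for p in itertools.product(BASES, repeat=length))
    return sorted(candidates, key=toy_stability, reverse=True)[:top_k]

def human_validate(candidates, wet_lab_ok):
    """'Human' step: subject only the AI-ranked short list to slow,
    expensive biological testing and ethical review."""
    return [c for c in candidates if wet_lab_ok(c)]
```

Even in this toy form, the asymmetry is visible: enumeration and ranking scale to millions of sequences, while validation capacity stays scarce, which is exactly why the AI's job is to make every human experiment count.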

The Exponential Impact:

This human-AI partnership dramatically accelerated the vaccine development timeline, bringing a highly effective mRNA vaccine from concept to clinical trials in a matter of weeks, not years. AI handled the computational heavy lifting of molecular design, freeing human experts to focus on the high-level strategy, rigorous validation, and the profound human impact of global health. It exemplifies AI as a cognitive multiplier in a crisis, under human-led ethical governance.


Case Study 2: Generative Design in Engineering (e.g., Autodesk Fusion 360)

The Challenge:

Traditional engineering design is constrained by human experience and iterative trial-and-error, leading to designs that are often sub-optimal in terms of weight, material usage, or performance. Designing for radical efficiency requires exploring millions of permutations—a task beyond human capacity.

The Symbiotic Innovation:

Platforms like Autodesk Fusion 360 integrate Generative Design AI. Human engineers define the essential design parameters: materials, manufacturing methods, load-bearing requirements, weight constraints, and optimization goals (e.g., minimum weight, maximum stiffness). The AI then autonomously explores hundreds or thousands of design options, often generating organic, complex structures that no human designer would conceive. The human engineer then acts as a discerning curator and refiner, selecting the most promising AI-generated designs, applying aesthetic and practical considerations, and testing them for real-world viability and manufacturability.
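A toy version of this constraint-driven exploration, with invented mass and stiffness models standing in for real simulation (none of this reflects Autodesk's actual algorithms): humans set the constraints and the optimization goal, the "AI" samples the design space, and only the feasible, best-scoring candidates come back for human curation.

```python
import random

def ai_generate_designs(n, rng):
    """'AI' step: explore the design space far faster than a human could,
    here by sampling random (thickness_mm, rib_count) combinations."""
    return [(rng.uniform(1.0, 10.0), rng.randint(0, 20)) for _ in range(n)]

def weight(design):
    thickness, ribs = design
    return thickness * 2.0 + ribs * 0.5   # invented mass model

def stiffness(design):
    thickness, ribs = design
    return thickness * 3.0 + ribs * 1.5   # invented stiffness model

def generative_design(max_weight, min_stiffness, n=10_000, top_k=3, seed=0):
    """Human engineers define the constraints and the goal (minimum
    weight); the 'AI' explores; the short list goes to human curation."""
    rng = random.Random(seed)
    feasible = [d for d in ai_generate_designs(n, rng)
                if weight(d) <= max_weight and stiffness(d) >= min_stiffness]
    return sorted(feasible, key=weight)[:top_k]
```

The design choice worth noticing is where the human sits: not drawing candidate geometries, but encoding the constraints up front and judging the survivors afterward, which is the curator role the paragraph above describes.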

The Exponential Impact:

This collaboration has led to breakthroughs in lightweighting and material efficiency across industries, from aerospace to automotive. AI explores an immense solution space, while humans inject creativity, contextual understanding, and final aesthetic and ethical judgment. The result is parts that are significantly lighter, stronger, and more sustainable—innovations that would have been impossible for either human or AI to achieve alone. It’s AI expanding the realm of possibility for human architects, leading to more sustainable and cost-effective products.


The Leadership Mandate: Cultivating the Centaur Organization

Building a truly symbiotic human-AI innovation engine is not merely a technical problem; it is a profound leadership challenge. It demands investing in new skills (prompt engineering, AI ethics, data literacy, and critical thinking to evaluate AI outputs), redesigning workflows to integrate AI at key decision points, and—most crucially—cultivating a culture of psychological safety where employees are encouraged to experiment with AI, understand its limitations, and provide frank feedback without fear.

Leaders must define AI not as a replacement, but as an unparalleled partner, actively addressing challenges like algorithmic bias and the need for explainability through robust human oversight. By strategically integrating AI as a cognitive multiplier, empowering humans as ethical and creative architects, and establishing robust iterative feedback loops, organizations can unlock an era of innovation previously confined to science fiction. The future of human-centered innovation is not human-only, nor AI-only. It is a powerful, elegant dance between both, continuously learning and adapting.

Image credit: Pixabay
