Tag Archives: AI

Sustaining Imagination is Hard

by Braden Kelley

Recently I stumbled across a new Royal Institution video of Martin Reeves, a managing director and senior partner in BCG’s San Francisco office. Martin leads the BCG Henderson Institute, BCG’s vehicle for exploring ideas from beyond the world of business that have implications for business strategy and management.

I previously interviewed Martin along with his co-author Dr. Jack Fuller in a post titled ‘Building an Imagination Machine’. In this video you’ll find him presenting content along similar themes. I think you’ll enjoy it:

Bonus points to anyone who can name this napkin sketch in the comments.

In the video Martin explores several of the frameworks introduced in his book The Imagination Machine. One of the central tenets of Martin’s video is that sustaining imagination is hard. There are three core reasons why this is so:

  1. Overspecialization – As companies grow, jobs become narrower in scope and more specialized, leading to myopia as fewer and fewer people see the problems the company was founded to solve in the first place
  2. Insularity – As companies grow, the majority of employees shift from being externally facing to being internally facing, isolating more and more employees from the customer and their evolving wants and needs
  3. Complacency – As companies become successful, predictably, the successful parts of the business receive most of the attention and investment, making it difficult for new efforts to receive the care and feeding necessary for them to grow and dare I say – replace – the currently idolized parts of the business

I do like the notion Martin presents that companies wishing to be continuously successful should continuously seek to be surprised, and invest energy in rethinking, exploring and probing the areas where they find themselves surprised.

Martin also explores some of the common misconceptions about imagination, including the ideas that imagination is:

  1. A solitary endeavor
  2. Something that comes out of nowhere
  3. Unmanageable

And finally, Martin puts forward his ideas on how imagination can be harnessed systematically, using a simple six-step model:

  1. Seduction – Where can we find surprise?
  2. Idea – Do we embrace the messiness of the napkin sketch? Or expect perfection?
  3. Collision – Where can we collide this idea with the real world for validation or more surprise?
  4. Epidemic – How can we foster collective imagination? What behaviors are we encouraging?
  5. New Ordinary – How can we create new norms? What evolvable scripts can we create that live in between the 500-page manual and the one-sentence vision?
  6. Encore – How can we sustain imagination? How can we maintain a Day One mentality?

And no speech in 2023 would be complete without some analysis of what role artificial intelligence (AI) has to play. Martin’s perspective is that when it comes to the different levels of cognition, AI might be good at finding patterns of correlation, but humans have more advanced capabilities than machines when it comes to finding causation and counterfactual opportunities. There is an opportunity for all of us to think about how we can leverage AI across the six steps in the model above to accelerate or enhance our human efforts.

To close, Martin highlighted that when it comes to leading re-imagination, it is important to look outward, to self-disrupt, to establish heroic goals, utilize multiple mental models, and foster playfulness and experimentation across the organization to help keep imagination alive.

p.s. If you’re committed to learning the art and science of getting to the future first, then be sure and subscribe to my newsletter to make sure you’re one of the first to get certified in the FutureHacking™ methodology.

Image credits: Netflix

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to anticipate potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus with recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do both great good and great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But behavioral science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media. There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s, but technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume.

It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications. There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully or many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, in the worst case flirting with propaganda, and thus potentially enabling damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovations accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try and anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set the now-worthless snakes free, and the result was more cobras than at the original starting point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay


Rise of the Prompt Engineer

GUEST POST from Art Inteligencia

The world of tech is ever-evolving, and the rise of the prompt engineer is just the latest development. Prompt engineers specialize in designing and refining the inputs given to natural language processing (NLP) systems, like voice assistants and chatbots, so that users can interact with computer systems effectively using spoken or written language. This burgeoning field is quickly becoming essential for businesses of all sizes, from startups to large enterprises, to remain competitive.

Five Skills to Look for When Hiring a Prompt Engineer

But with the rapid growth of the prompt engineer field, it can be difficult to hire the right candidate. To ensure you’re getting the best engineer for your project, there are a few key skills you should look for:

1. Technical Knowledge: A competent prompt engineer should have a deep understanding of the underlying technologies used to create NLP systems, such as machine learning, natural language processing, and speech recognition. They should also have experience developing complex algorithms and working with big data.

2. Problem-Solving: Prompt engineering is a highly creative field, so the ideal candidate should have the ability to think outside the box and come up with innovative solutions to problems.

3. Communication: A prompt engineer should be able to effectively communicate their ideas to both technical and non-technical audiences in both written and verbal formats.

4. Flexibility: With the ever-changing landscape of the tech world, prompt engineers should be comfortable working in an environment of constant change and innovation.

5. Time Management: Prompt engineers are often involved in multiple projects at once, so they should be able to manage their own time efficiently.

These are just a few of the skills to look for when hiring a prompt engineer. The right candidate will be able to combine these skills to create effective and user-friendly natural language processing systems that will help your business stay ahead of the competition.

But what if you want or need to build your own artificial intelligence queries without the assistance of a professional prompt engineer?

Four Secrets of Writing a Good AI Prompt

As AI technology continues to advance, it is important to understand how to write a good prompt for AI to ensure that it produces accurate and meaningful results. Here are some of the secrets to writing a good prompt for AI.

1. Start with a clear goal: Before you begin writing a prompt for AI, it is important to have a clear goal in mind. What are you trying to accomplish with the AI? What kind of outcome do you hope to achieve? Knowing the answers to these questions will help you write a prompt that is focused and effective.

2. Keep it simple: AI prompts should be as straightforward and simple as possible. Avoid using jargon or complicated language that could confuse the AI. Also, try to keep the prompt as short as possible so that it is easier for the AI to understand.

3. Be specific: To get the most accurate results from your AI, you should provide a specific prompt that clearly outlines what you are asking. You should also provide any relevant information, such as the data or information that the AI needs to work with.

4. Test your prompt: Before you use your AI prompt in a real-world situation, it is important to test it to make sure that it produces the results that you are expecting. This will help you identify any issues with the prompt or the AI itself and make the necessary adjustments.

By following these tips, you can ensure that your AI prompt is effective and produces the results that you are looking for. Writing a good prompt for AI is a skill that takes practice, but by following these secrets you can improve your results.
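As a concrete (and purely hypothetical) illustration, the four tips above can be folded into a tiny prompt-builder helper. The function name and field labels below are my own invention, not part of any AI vendor’s API:

```python
def build_prompt(goal: str, context: str = "", output_format: str = "") -> str:
    """Assemble a focused AI prompt: a clear goal (tip 1), kept short and
    plain (tip 2), with specific supporting details (tip 3)."""
    parts = [goal.strip()]  # lead with the goal
    if context:
        parts.append("Context: " + context.strip())  # relevant specifics
    if output_format:
        parts.append("Format: " + output_format.strip())  # expected output
    return "\n".join(parts)


# Tip 4: test the prompt before relying on it -- e.g. check that it
# stays concise and still contains the details the AI will need.
prompt = build_prompt(
    goal="Write a Facebook post on the most common reasons a toilet won't flush.",
    context="Audience: homeowners with no plumbing experience.",
    output_format="Under 100 words, friendly tone.",
)
print(prompt)
```

The same goal/context/format structure works whether the prompt is pasted into a chat window or sent through an API.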

So, whether you look to write your own AI prompts or feel the need to hire a professional prompt engineer, now you are equipped to be successful either way!

Image credit: Pexels


AI is a Powerful New Tool for Entrepreneurs

by Braden Kelley

In today’s digital, always-connected world, Google too often stands as a gatekeeper between entrepreneurs and small businesses and the financial success they seek. Ranking well in the search engines requires time and expertise that many entrepreneurs and small business owners don’t have, because their focus must be on fine-tuning the value proposition and operations of their business.

The day after Google was invented, the first search engine marketing firm was probably created to make money off of hard-working entrepreneurs and small business owners trying to make the most of their investment in a web site through search engine optimization (SEO), keyword advertising, and social media strategies.

According to IBISWorld the market size of the SEO & Internet Marketing Consulting industry is $75.0 Billion. Yes, that’s billion with a ‘b’.

Creating content for web sites is an even bigger market. According to Technavio, the global content marketing market is estimated to INCREASE by $584.0 Billion between 2022 and 2027. That is just the growth number; the market itself is MUCH larger.

The introduction of ChatGPT threatens to upend these markets, to the detriment of this group of businesses, but to the benefit of the nearly 200,000 dentists in the United States, the more than 100,000 plumbers, the million and a half real estate agents, and numerous other categories of small businesses.

Many of these content marketing businesses create a number of different types of content for the tens of millions of small businesses in the United States, from blog articles to tweets to Facebook pages and everything in between. The content marketing agencies that small businesses hire often use recent college graduates or offshore resources in places like the Philippines, India, Pakistan, Ecuador, Romania, and lots of other locations around the world, and bill their work to their clients at a much higher rate.

Outsourcing content creation has been a great way for small businesses to leverage external resources so they can focus on the business, but now may be the time to bring some of this content creation work back in house, particularly where the content is pretty straightforward and informational for an average visitor to the web site.

With ChatGPT you can ask it to “write me an article on how to brush your teeth” or “write me ten tweets on tooth brushing” or “write me a Facebook post on the most common reasons a toilet won’t flush.”
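For anyone who would rather script such requests than type them into the chat window, here is a minimal sketch of what one might look like through OpenAI’s Python client. The model name and system message are illustrative assumptions, and the actual network call is left commented out because it requires an API key:

```python
# Build a chat request for the last example prompt above.
prompt = "Write me a Facebook post on the most common reasons a toilet won't flush."

request = {
    "model": "gpt-3.5-turbo",  # illustrative model name
    "messages": [
        {"role": "system",
         "content": "You write short, friendly posts for a small plumbing business."},
        {"role": "user", "content": prompt},
    ],
}

# With the `openai` package (v1+) the call would look roughly like this:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(**request)
# print(reply.choices[0].message.content)
```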

I asked it to do the last one for me and here is what it came up with:

Continue reading the rest of this article on CustomerThink (including the ChatGPT results)

Image credits: Pixabay


Will ChatGPT Make Us More or Less Innovative?

GUEST POST from Pete Foley

The rapid emergence of increasingly sophisticated ‘AI’ programs such as ChatGPT will profoundly impact our world in many ways. That will inevitably include innovation, especially the front end. But will it ultimately help or hurt us? Better access to information should be a huge benefit, and my intuition was to dive in and take full advantage. I still think it has enormous upside, but I also think it needs to be treated with care. At this point at least, it’s still a tool, not an oracle. It’s an excellent source for tapping existing information, but it’s (not yet) a source of new ideas. As with any tool, those who understand deeply how it works, its benefits and its limitations, will get the most from it. And those who use it wrongly could end up doing more harm than good. So below I’ve mapped out a few pros and cons that I see. It’s new, and like everybody else, I’m on a learning curve, so I would welcome any and all thoughts on them:

What is Innovation?

First a bit of a sidebar. To understand how to use a tool, I at least need to have a reasonably clear idea of what goals I want it to help me achieve. Obviously ‘what is innovation’ is a somewhat debatable topic, but my working model is that the front end of innovation typically involves taking existing knowledge or technology, and combining it in new, useful ways, or in new contexts, to create something that is new, useful and ideally understandable and accessible. This requires deep knowledge, curiosity and the ability to reframe problems to find new uses for existing assets. A recent illustrative example is Oculus Rift, an innovation that helped to make virtual reality accessible by combining fairly mundane components, including a mobile phone screen, a tracking sensor and ski goggles, into something new. But innovation comes in many forms, and can also involve serendipity and keen observation, as in Alexander Fleming’s original discovery of penicillin. But even this required deep domain knowledge to spot the opportunity and reframe undesirable mold into a (very) useful pharmaceutical. So, my starting point is: which parts of this can ChatGPT help with?

Another sidebar is that innovation is of course far more than simply discovery or a Eureka moment. Turning an idea into a viable product or service usually requires considerable work, with the development of penicillin being a case in point. I’ve no doubt that ChatGPT and its inevitable ‘progeny’ will be of considerable help in that part of the process too. But for starters I’ve focused on what it brings to the discovery phase, and the generation of big, game-changing ideas.

First the Pros:

1. Staying Current: We all have to strike a balance between keeping up with developments in our own fields, and trying to come up with new ideas. The sheer volume of new information, especially in developing fields, means that keeping pace with even our own area of expertise has become challenging. But spend too much time just keeping up, and we become followers, not innovators, so we have to carve out time to also stretch existing knowledge. If we don’t get the balance right, and fail to stay current, we risk getting leapfrogged by those who more diligently track the latest discoveries. Simultaneous invention has been pervasive at least since the development of calculus, as one discovery often signposts and lays the path for the next. So fail to stay on top of our field, and we potentially miss a relatively easy step to the next big idea. ChatGPT can become an extremely efficient tool for tracking advances without getting buried in them.

2. Pushing Outside of our Comfort Zone: Breakthrough innovation almost by definition requires us to step beyond the boundaries of our existing knowledge. Whether we are Dyson stealing filtration technology from a sawmill for his unique ‘filterless’ vacuum cleaner, physicians combining stem cell innovation with tech to create rejection-resistant artificial organs, or the Oculus tech mentioned above, innovation almost always requires tapping resources from outside of the established field. If we don’t do this, then we not only tend towards incremental ideas, but also tend to stay in lock step with other experts in our field. This becomes increasingly the case as an area matures, low-hanging fruit is exhausted, and domain knowledge becomes somewhat commoditized. ChatGPT simply allows us to explore beyond our field far more efficiently than we’ve ever been able to before. And as it or related tech evolves, it will inevitably enable ever more sophisticated search. From my experience it already enables some degree of analogous search if you are thoughtful about how to frame questions, thus allowing us to more effectively expand searches for existing solutions to problems that lie beyond the obvious. That is potentially really exciting.

Some Possible Cons:

1. Going Down the Rabbit Hole: ChatGPT is crack cocaine for the curious. Mea culpa: this has probably been the most time-consuming blog I’ve ever written. Answers inevitably lead to more questions, and it’s almost impossible to resist playing well beyond the specific goals I initially have. It’s fascinating, it’s fun, you learn a lot of stuff you didn’t know, but I at least struggle with discipline and focus when using it. Hopefully that will wear off, and I will find a balance that uses it efficiently.

2. The Illusion of Understanding: This is a bit more subtle, but the act of researching a topic inevitably enhances our understanding of it. Asking questions is as much a part of learning as reading answers, and often requires deep mechanistic understanding. ChatGPT helps us probe faster, and its explanations may help us to understand concepts more quickly. But it also risks the illusion of understanding. When the heavy lifting of searching is shifted away from us, we get quick answers, but may also miss out on the deeper mechanistic understanding we’d have gleaned if we’d been forced to work a bit harder. And that deeper understanding can be critical when we are trying to integrate superficially different domains as part of the innovation process. For example, knowing that we can use a patient’s stem cells to minimize rejection of an artificial organ is quite different from understanding how the immune system differentiates between its own and other stem cells. The risk is that sophisticated search engines will do more heavy lifting and allow us to move faster, but also leave us with a more superficial understanding, which reduces our ability to spot roadblocks early, or solve problems as we move to the back end of innovation and reduce an idea to practice.

3. Eureka Moment: That’s the ‘conscious’ watch-out, but there is also an unconscious one. It’s no secret that quite often our biggest ideas come when we are not actually trying. Archimedes had his Eureka moment in the bath, and many of my better ideas come when I least expect them, perhaps in the shower, when I first wake up, or when I’m out having dinner. The neuroscience of creativity helps explain this, in that the restructuring of problems that leads to new insight, and the integration of ideas, works mostly unconsciously, when we are not consciously focused on a problem. It’s analogous to the ‘tip of the tongue’ effect, where the harder we try to remember something, the harder it gets, but then it comes to us later when we are not trying. But the key to the Eureka moment is that we need sufficiently deep knowledge for those integrations to occur. If ChatGPT increases the illusion of understanding, we could see fewer of those Eureka moments, and the ‘obvious in hindsight’ ideas they create.

Conclusion

I think that ultimately innovation will be accelerated by ChatGPT and what follows, perhaps quite dramatically. But I also think that we as innovators need to try to peel back the layers and understand as much as we can about these tools, as there is potential for us to trip up. We need to constantly reinvent the way we interact with them, leverage them as sophisticated innovation tools, but avoid them becoming oracles. We also need to ensure that we, and future generations, use them to extend our thinking skill set, not as a proxy for it. The calculator has in some ways made us all mathematical geniuses, but in other ways has reduced large swathes of the population’s ability to do basic math. We need to be careful that ChatGPT doesn’t do the same for our need for cognition, and for deep mechanistic and/or critical thinking.

Image credit: Pixabay


Beyond the Hype

Practical Applications of AI for Human-Centered Innovation

GUEST POST from Chateau G Pato

The air is thick with the buzz of Artificial Intelligence. From Davos to daily headlines, the conversation often oscillates between utopian dreams and dystopian fears. As a thought leader focused on human-centered change and innovation, my perspective cuts through this noise: AI is not just a technology; it is a powerful amplifier of human capability, especially when applied with empathy and a deep understanding of human needs. The true innovation isn’t in what AI can do, but in how it enables humans to do more, better, and more humanely.

Too many organizations are chasing AI for the sake of AI, hoping to find a magic bullet for efficiency. This misses the point entirely. The most transformative applications of AI in innovation are those that don’t replace humans, but rather augment their unique strengths — creativity, empathy, critical thinking, and ethical judgment. This article explores practical, human-centered applications of AI that move beyond the hype to deliver tangible value by putting people at the core of the AI-driven innovation process. It’s about designing a future where humanity remains in the loop, guiding and benefiting from intelligent systems.

AI as an Empathy Amplifier: Deepening Understanding

Human-centered innovation begins with deep empathy for users, customers, and employees. Traditionally, gathering and synthesizing this understanding has been a labor-intensive, often qualitative, process. AI is revolutionizing this by giving innovators superpowers in understanding human context:

  • Sentiment Analysis for Voice of Customer (VoC): AI can process vast quantities of unstructured feedback — customer reviews, social media comments, call center transcripts — to identify emerging pain points, unspoken desires, and critical satisfaction drivers, often in real-time. This provides a granular, data-driven understanding of user sentiment that human analysts alone could never achieve at scale, leading to faster, more targeted product improvements.
  • Personalized Journeys & Predictive Needs: By analyzing behavioral data, AI can predict individual user needs and preferences, allowing for hyper-personalized product recommendations, customized learning paths, or proactive support. This moves from reactive service to anticipatory human care, boosting customer loyalty and reducing friction.
  • Contextualizing Employee Experience (EX): AI can analyze internal communications, HR feedback, and engagement surveys to identify patterns of burnout, identify skill gaps, or flag cultural friction points, allowing leaders to intervene with targeted, human-centric solutions that improve employee well-being and productivity. This directly impacts talent retention and operational efficiency.
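To make the VoC idea concrete, here is a deliberately tiny sketch of the pattern: score each piece of feedback, then surface the most common negative themes. The keyword lists and the `sentiment` and `pain_points` helpers are invented for illustration only; a real pipeline would use a trained NLP model or an LLM API rather than a hand-built lexicon.

```python
from collections import Counter

# Toy lexicon: a stand-in for a production sentiment model.
POSITIVE = {"love", "great", "fast", "intuitive", "helpful"}
NEGATIVE = {"slow", "crash", "confusing", "broken", "frustrating"}

def sentiment(feedback: str) -> int:
    """Return +1, -1, or 0 for a single piece of feedback."""
    words = feedback.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

def pain_points(feedback_items: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Surface the most frequent negative terms across negative feedback."""
    negatives = Counter()
    for item in feedback_items:
        if sentiment(item) < 0:
            negatives.update(w for w in item.lower().split() if w in NEGATIVE)
    return negatives.most_common(top_n)

reviews = [
    "Export is slow and the dialog is confusing",
    "Love the new brush engine, great update",
    "App crash on startup, so frustrating",
]
print(pain_points(reviews))
```

Even at this toy scale, the division of labor is visible: the machine tallies themes across every piece of feedback, and the human decides which flagged pain point deserves qualitative follow-up.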

“The best AI applications don’t automate human intuition; they liberate it, freeing us to focus on the ‘why’ and ‘how’ of human experience. This is AI as a partner, not a replacement.” — Braden Kelley


Case Study 1: AI-Powered User Research at Adobe

The Challenge:

Adobe, with its vast suite of creative tools, faces the constant challenge of understanding the diverse, evolving needs of millions of users — from professional designers to casual creators. Traditional user research (surveys, interviews, focus groups) is time-consuming and expensive, making it difficult to keep pace with rapid product development cycles and emerging user behaviors.

The AI-Powered Human-Centered Solution:

Adobe developed internal AI tools that leverage natural language processing (NLP) to analyze immense volumes of unstructured user feedback from forums, support tickets, app store reviews, and in-app telemetry. These AI systems identify recurring themes, emerging feature requests, and points of friction with remarkable speed and accuracy. Instead of replacing human researchers, the AI acts as an ‘insight engine,’ highlighting critical areas for human qualitative investigation. Researchers then use these AI-generated insights to conduct more focused, empathetic interviews and design targeted usability tests, ensuring human intelligence remains in the loop for crucial interpretation and validation.

The Innovation Impact:

This approach drastically accelerates the ideation and validation phases of Adobe’s product development, translating directly into faster time-to-market for new features. It allows human designers to spend less time sifting through data and more time synthesizing insights, collaborating on creative solutions, and directly interacting with users on the most impactful issues. Products are developed with a deeper, faster, and more scalable understanding of user pain points and desires, leading to higher adoption, stronger user loyalty, and ultimately, increased revenue.


AI as a Creativity & Productivity Partner: Amplifying Output

Beyond empathy, AI is fundamentally transforming how human innovators generate ideas, prototype solutions, and execute complex projects, not by replacing creative thought, but by amplifying it while maintaining human oversight.

  • Generative AI for Ideation & Concepting: Large Language Models (LLMs) can act as powerful brainstorming partners, generating hundreds of diverse ideas, marketing slogans, or design concepts from a simple prompt. This allows human creatives to explore a broader solution space faster, finding novel angles they might have missed, thereby reducing ideation cycle time and boosting innovation output.
  • Automated Prototyping & Simulation: AI can rapidly generate low-fidelity prototypes from design specifications, simulate user interactions, or even predict the performance of a physical product before it’s built. This drastically reduces the time and cost of the early innovation cycle, making experimentation more accessible and leading to significant R&D savings.
  • Intelligent Task Automation (Beyond RPA): While Robotic Process Automation (RPA) handles repetitive tasks, AI goes further. It can intelligently automate the contextual parts of a job, managing schedules, prioritizing communications, or summarizing complex documents, freeing human workers for higher-value, creative problem-solving. This leads to increased employee satisfaction and higher strategic output.
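As one concrete instance of the "summarizing complex documents" idea, here is the classic frequency-based extractive technique in plain Python: score each sentence by the frequency of its content words and keep the top scorers in their original order. This is a sketch of the general approach, not any particular product's implementation; modern assistants would typically use an LLM instead.

```python
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    """Frequency-based extractive summary: score sentences by how many
    high-frequency content words they contain, keep the best in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "for"}
    freq = Counter(w for w in words if w not in stop)
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(scored[:max_sentences])  # restore reading order
    return " ".join(sentences[i] for i in keep)

doc = ("AI helps teams move fast. AI systems summarize long documents "
       "so teams can move fast. The weather is nice.")
print(summarize(doc))
```

The point of the sketch is the workflow, not the algorithm: the machine compresses the routine reading, and the human spends the reclaimed time on judgment and creative work.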

Case Study 2: Spotify’s AI-Driven Music Discovery & Creator Tools

The Challenge:

Spotify’s core challenge is matching millions of users with tens of millions of songs amid constantly evolving tastes and a steady stream of emerging artists. Simultaneously, they need to empower artists to find their audience and create efficiently in a crowded market. Traditional human curation alone couldn’t scale to this complexity.

The AI-Powered Human-Centered Solution:

Spotify uses a sophisticated AI engine to power its personalized recommendation algorithms (Discover Weekly, Daily Mixes). This AI doesn’t just match songs; it understands context — mood, activity, time of day, and even the subtle social signals of listening. This frees human curators to focus on high-level thematic curation, editorial playlists, and breaking new artists, rather than sifting through endless catalogs. More recently, Spotify is also exploring AI tools for artists, assisting with everything from mastering tracks to suggesting optimal release times based on audience analytics, always with human creators retaining final creative control.

The Innovation Impact:

The AI system allows Spotify to deliver a highly personalized and human-feeling music discovery experience at an unimaginable scale, directly driving user engagement and subscriber retention. For artists, AI acts as a creative assistant and market intelligence tool, allowing them to focus on making music while gaining insights into audience behavior and optimizing their reach. This symbiotic relationship between human creativity and AI efficiency is a hallmark of human-centered innovation, resulting in a stronger platform ecosystem for both consumers and creators.

The future of innovation isn’t about AI replacing humans; it’s about AI elevating humanity. By focusing on how AI can amplify empathy, foster creativity, and liberate us from mundane tasks, we can build a future where technology truly serves people. This requires a commitment to responsible AI development — ensuring fairness, transparency, and human oversight. The challenge for leaders is not just to adopt AI, but to design its integration with a human-centered lens, ensuring it empowers, rather than diminishes, the human spirit of innovation, and delivers measurable value across the organization.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone all on the same page for change. Find out more about the methodology and tools, including the book Charting Change by following the link. Be sure and download the TEN FREE TOOLS while you’re here.

Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Why No Organization Innovates Alone Anymore

The Ecosystem Advantage


GUEST POST from Chateau G Pato

For centuries, the story of innovation was a story of closed walls and proprietary secrets. Companies poured resources into internal R&D labs, operating under the fiercely competitive belief that only self-reliance could guarantee advantage. This mindset, rooted in the industrial age, is now the single greatest obstacle to sustained change and growth. As a human-centered change and innovation thought leader, I assert that today’s most profound breakthroughs occur not within the isolated organization, but within expansive, fluid innovation ecosystems. The future belongs to the orchestrators, not the hoarders.

The speed and complexity of modern disruption — from advanced digital services to grand societal challenges—render the solo innovation model obsolete. No single company, no matter how large or well-funded, possesses all the necessary capital, talent, data, or technical expertise. The Ecosystem Advantage is the strategic realization that exponential innovation requires the symbiotic sharing of risk, resources, and intellectual property across a network of partners—customers, suppliers, competitors, startups, and academia. Critically, this collaborative model is inherently more human-centered because it forces the integration of diverse perspectives, mitigating internal blind spots and algorithmic bias.

Modern technology — APIs for seamless data exchange, cloud platforms for shared development, and secure tools like blockchain for transparent IP tracking — makes this complex collaboration technically feasible. The challenge is no longer technological; it is strategic and cultural: managing complexity and balancing competition with collaboration.

The Three Strategic Imperatives of Ecosystem Innovation

To transition from isolated R&D to ecosystem orchestration, leaders must embrace three core strategic shifts:

  • 1. Shift from Ownership to Access: Abandon the idea that you must own every asset, technology, or line of code. The strategic imperative is to gain timely access to specialized capabilities, whether through open-source collaboration, strategic partnerships, or co-development agreements. This drastically reduces sunk costs and accelerates time-to-market.
  • 2. Curate the Edges for Diversity: Innovation often arises from the periphery—from startups, adjacent industries, or unexpected voices. Ecosystem leaders must proactively curate relationships at the “edges” of their industry, using ventures, accelerators, and challenge platforms to source disruptive ideas and integrate them rapidly. This diversity of thought is the engine of human-centered innovation.
  • 3. Govern for Trust, Not Control: Traditional contracts focused on control and IP protection can stifle the necessary fluid exchange of an ecosystem. Effective orchestration requires governance frameworks that prioritize trust, transparency, and a clearly defined mutual value proposition. The reward must be distributed fairly and clearly articulated to incentivize continuous participation and manage the inherent complexity.

“If you try to innovate alone, your speed is limited to your weakest internal link. If you innovate in an ecosystem, your speed is limited only by the velocity and diversity of your network.”


Case Study 1: Apple’s App Store – Ecosystem as a Business Model

The Challenge:

When the iPhone launched in 2007, its initial functionality was limited. The challenge was rapidly expanding the utility and perceived value of the platform beyond Apple’s internal capacity to develop software, making it indispensable to billions of users globally.

The Ecosystem Solution:

Apple did not try to develop all the necessary applications internally. Instead, it built the App Store — a highly curated platform that served as a controlled gateway for third-party developers. This move fundamentally shifted Apple’s role from a monolithic software provider to an ecosystem orchestrator. Apple provided the core technology (iOS, hardware APIs, payment processing) and governance rules, while external developers contributed the innovation, content, and diverse features.

The Innovation Impact:

The App Store unlocked an unprecedented flywheel effect. External developers created billions of dollars in new services, simultaneously making the iPhone platform exponentially more valuable and cementing Apple’s dominance. This model proved that by prioritizing access to external intellectual capital and accepting the risk of external development, the orchestrator gains massive leverage, speed, and market penetration.


Case Study 2: The Partnership on AI (PAI) – Ecosystem for Ethical Governance

The Challenge:

The development of advanced Artificial Intelligence poses complex, societal-level challenges related to ethics, fairness, and safety—issues that cannot be solved by any one company, given the competitive pressures in the sector.

The Ecosystem Solution:

The Partnership on AI (PAI) was established by major tech competitors (including Google, Amazon, Meta, Microsoft, and others), alongside civil society, academic, and journalistic organizations. PAI functions as a non-competitive ecosystem designed for pre-competitive alignment on ethical and human-centered AI standards. Instead of hoarding proprietary research, members collaborate openly on principles, best practices, and research that aims to ensure AI benefits society while mitigating risks like bias and misuse.

The Innovation Impact:

PAI demonstrates that ecosystems are not just for product innovation; they are essential for governance innovation. By establishing a shared, multi-stakeholder framework, the partnership reduces regulatory risk for all participants and ensures that the human element (represented by civil society and academics) is integrated into the design of core AI principles. This collaboration creates a foundational layer of ethical trust and shared responsibility, which is a prerequisite for the public adoption of exponential technologies.


The New Leadership Imperative: Be the Nexus

The Ecosystem Advantage is a human-centered mandate. It recognizes that the best ideas are often housed outside your walls and that true change requires collective action. For leaders, this means shedding the scarcity mindset and adopting a role as a Nexus — a strategic connector who enables value to flow freely and safely across boundaries.

Success is no longer measured by the size of your internal R&D budget, but by the health, diversity, and velocity of your external network. To thrive in the era of exponential change, you must master the three imperatives: prioritizing access over ownership, proactively curating the edges of your industry, and establishing governance models built on trust. Stop trying to win the race alone. Start building the highway for everyone; that is the new competitive advantage.


Image credit: Pixabay


When Humans and AI Innovate Together

The Symbiotic Relationship


GUEST POST from Chateau G Pato

The narrative surrounding Artificial Intelligence often veers into two extremes: utopian savior or dystopian overlord. Both miss the profound truth of our current inflection point. As a human-centered change and innovation thought leader, I argue that the most impactful future of AI is not one where machines replace humans, nor one where humans merely manage machines. Instead, it is a symbiotic relationship — a partnership where the unique strengths of human creativity, empathy, and intuition merge with AI’s unparalleled speed, scale, and analytical power. This “Human-AI Teaming” is not just an operational advantage; it is the definitive engine for exponential, human-centered innovation.

The true genius of AI lies not in its ability to replicate human thought, but to augment it. Humans excel at divergent thinking, ethical reasoning, abstract problem framing, and connecting seemingly unrelated concepts. AI excels at convergent thinking, pattern recognition in vast datasets, rapid prototyping, and optimizing complex systems. When these distinct capabilities are deliberately integrated, the result is a cognitive leap forward—a powerful fusion, much like a mythical centaur, that delivers solutions previously unimaginable. This shift demands a radical rethink of organizational structures, skill development, and how we define “innovation” itself, acknowledging potential pitfalls like algorithmic bias and explainability challenges not as roadblocks, but as design challenges for stronger symbiosis.

The Pillars of Human-AI Symbiosis in Innovation

Building a truly symbiotic innovation capability requires focus on three strategic pillars:

  • 1. AI as a Cognitive Multiplier: Treat AI not as an autonomous decision-maker, but as an extension of human intellect. This means AI excels at hypothesis generation, data synthesis, anomaly detection, and providing diverse perspectives based on vast amounts of information, all to supercharge human problem-solving, allowing us to explore far more options than before.
  • 2. Humans as Ethical & Creative Architects: The human role is elevated to architect and guide. We define the problem, set the ethical boundaries, provide the contextual nuance, and apply the “human filter” to AI’s outputs. Our unique capacity for empathy, understanding unspoken needs, and managing the inherent biases of AI remains irreplaceable in truly human-centered design.
  • 3. Iterative Feedback Loops: The symbiotic relationship thrives on constant learning. Humans train AI with nuanced feedback, helping it understand complex, subjective scenarios and correct for biases. AI, in turn, provides data-driven insights and rapid experimentation capabilities that help humans refine their hypotheses and accelerate the innovation cycle. This continuous exchange refines both human understanding and AI performance.

“The future of innovation isn’t about AI or humans. It’s about how elegantly we can weave the unparalleled strengths of both into a singular, accelerated creative force.” — Satya Nadella


Case Study 1: Moderna and AI-Driven Vaccine Development

The Challenge:

Developing a vaccine for a novel pathogen like SARS-CoV-2 traditionally takes years, an impossibly long timeline during a pandemic. The complexity of mRNA sequencing, protein folding, and clinical trial design overwhelmed human capacity alone.

The Symbiotic Innovation:

Moderna leveraged an AI-first approach where human scientists defined the immunological targets and ethical parameters, but AI algorithms rapidly designed, optimized, and tested millions of potential mRNA sequences. AI analyzed vast genomic databases to predict optimal antigen structures and identify potential immune responses. Human scientists then performed the critical biological testing and validation, refined these AI-generated candidates, and managed the ethical and logistical complexities of clinical trials and regulatory approval. The explainability of AI’s outputs was crucial for human trust and regulatory acceptance.

The Exponential Impact:

This human-AI partnership dramatically accelerated the vaccine development timeline, bringing a highly effective mRNA vaccine from concept to clinical trials in a matter of weeks, not years. AI handled the computational heavy lifting of molecular design, freeing human experts to focus on the high-level strategy, rigorous validation, and the profound human impact of global health. It exemplifies AI as a cognitive multiplier in a crisis, under human-led ethical governance.


Case Study 2: Generative Design in Engineering (e.g., Autodesk Fusion 360)

The Challenge:

Traditional engineering design is constrained by human experience and iterative trial-and-error, leading to designs that are often sub-optimal in terms of weight, material usage, or performance. Designing for radical efficiency requires exploring millions of permutations—a task beyond human capacity.

The Symbiotic Innovation:

Platforms like Autodesk Fusion 360 integrate Generative Design AI. Human engineers define the essential design parameters: materials, manufacturing methods, load-bearing requirements, weight constraints, and optimization goals (e.g., minimum weight, maximum stiffness). The AI then autonomously explores hundreds or thousands of design options, often generating organic, complex structures that no human designer would conceive. The human engineer then acts as a discerning curator and refiner, selecting the most promising AI-generated designs, applying aesthetic and practical considerations, and testing them for real-world viability and manufacturability.
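The generative-design loop just described can be sketched in a few lines: the human fixes constraints and an objective, the machine samples the design space, and the human reviews a shortlist. The beam "physics" and every number below are invented toy values for illustration; Autodesk's actual solvers use far more sophisticated optimization than random search.

```python
import random

# Human-defined constraints and goal (toy beam design problem).
MAX_WEIGHT = 10.0      # kg, hard constraint
MIN_STIFFNESS = 50.0   # arbitrary units, hard constraint

def evaluate(thickness: float, width: float) -> tuple[float, float]:
    """Toy physics: weight grows with cross-section, stiffness with width * thickness^3."""
    weight = 2.5 * thickness * width
    stiffness = 40.0 * width * thickness ** 3
    return weight, stiffness

def explore(n_candidates: int = 10_000, seed: int = 0) -> list[tuple[float, float, float]]:
    """Machine's role: blindly explore the design space, keep feasible designs."""
    rng = random.Random(seed)
    feasible = []
    for _ in range(n_candidates):
        t = rng.uniform(0.5, 3.0)
        w = rng.uniform(0.5, 3.0)
        weight, stiffness = evaluate(t, w)
        if weight <= MAX_WEIGHT and stiffness >= MIN_STIFFNESS:
            feasible.append((weight, t, w))
    feasible.sort()        # lightest designs first
    return feasible[:5]    # human's role: curate a shortlist

for weight, t, w in explore():
    print(f"weight={weight:.2f}kg  thickness={t:.2f}  width={w:.2f}")
```

Note where the boundary sits: the constraints and the final pick are human decisions; only the brute exploration in between is delegated to the machine.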

The Exponential Impact:

This collaboration has led to breakthroughs in lightweighting and material efficiency across industries, from aerospace to automotive. AI explores an immense solution space, while humans inject creativity, contextual understanding, and final aesthetic and ethical judgment. The result is parts that are significantly lighter, stronger, and more sustainable—innovations that would have been impossible for either human or AI to achieve alone. It’s AI expanding the realm of possibility for human architects, leading to more sustainable and cost-effective products.


The Leadership Mandate: Cultivating the Centaur Organization

Building a truly symbiotic human-AI innovation engine is not merely a technical problem; it is a profound leadership challenge. It demands investing in new skills (prompt engineering, AI ethics, data literacy, and critical thinking to evaluate AI outputs), redesigning workflows to integrate AI at key decision points, and—most crucially—cultivating a culture of psychological safety where employees are encouraged to experiment with AI, understand its limitations, and provide frank feedback without fear.

Leaders must define AI not as a replacement, but as an unparalleled partner, actively addressing challenges like algorithmic bias and the need for explainability through robust human oversight. By strategically integrating AI as a cognitive multiplier, empowering humans as ethical and creative architects, and establishing robust iterative feedback loops, organizations can unlock an era of innovation previously confined to science fiction. The future of human-centered innovation is not human-only, nor AI-only. It is a powerful, elegant dance between both, continuously learning and adapting.


Image credit: Pixabay


Preparing Your Workforce for Collaborative Intelligence

Upskilling for the AI Era


GUEST POST from Chateau G Pato

The rise of Artificial Intelligence is not a distant threat looming on the horizon; it is the fundamental reality of business today. Yet, the conversation is often dominated by fear—the fear of job replacement, of technical obsolescence, and of organizational disruption. As a human-centered change and innovation thought leader, I argue that this narrative misses the most profound opportunity: the chance to redefine the very nature of human work. The true imperative for leaders is not to acquire AI tools, but to upskill their human workforce for a symbiotic partnership with those tools. We must shift our focus from automation to Collaborative Intelligence, where the strength of the machine (speed, data processing) complements the genius of the human (creativity, empathy, judgment).

The AI Era demands a strategic pivot in talent development. We need to move past reactive technical training and invest in the skills that are uniquely human, those that machines can augment but never truly replicate. The future of competitive advantage lies not in owning the best algorithms, but in cultivating the workforce most skilled at collaborating with algorithms. This requires a shift in mindset, skills, and organizational design, ensuring that every employee — from the frontline associate to the senior executive — understands their new role as an AI partner, strategist, and ethical steward.

The Three Pillars of Collaborative Intelligence

Preparing your workforce for the AI era means focusing on three critical, human-centric skill areas that machines will struggle to master:

  • 1. Strategic Judgment and Empathy: AI excels at calculation, but it lacks contextual awareness, cultural nuance, and empathy. The human role shifts to interpreting the AI’s output, exercising ethical judgment, and translating data into emotionally resonant actions for customers and colleagues. This requires deep training in human-centered design principles and ethical decision-making.
  • 2. Creative Problem-Solving and Experimentation: The most valuable new skill is not coding, but prompt engineering and defining the right questions. Humans must conceptualize new use cases, challenge the AI’s assumptions, and rapidly prototype new solutions. This demands a culture of psychological safety where continuous experimentation and failure are encouraged as essential steps toward innovation.
  • 3. Data Literacy and AI Stewardship: Every employee must become literate in data and AI concepts. They don’t need to write code, but they must understand how the AI makes decisions, where its data comes from, and why a result might be biased or flawed. The human is the ethical backstop and the responsible steward of the algorithm’s power.

“The AI won’t take your job; a person skilled in AI will. The upskilling challenge is not about the technology; it’s about the partnership.” — Braden Kelley


Case Study 1: The Global Consulting Firm – From Analyst to Interpreter

The Challenge:

A major global consulting firm faced the threat of AI automation taking over their junior analysts’ core tasks: data aggregation, slide creation, and basic research. They realized that their competitive edge was not in performing these routine tasks, but in their consultants’ ability to synthesize, communicate, and build client trust—all uniquely human skills.

The Collaborative Intelligence Solution:

The firm launched a massive internal upskilling initiative focused on transforming the junior analyst role from “data processor” to “AI interpreter and client strategist.” The training focused heavily on non-technical skills: narrative storytelling (using AI-generated data to craft compelling client stories), ethical deliberation (identifying bias in AI-generated recommendations), and active listening (improving client empathy). AI was positioned not as a replacement, but as an instant, tireless research assistant that handled 80% of the routine work.

The Human-Centered Result:

By investing in human judgment and communication, the firm increased the value of its junior workforce. Consultants spent less time creating slides and more time on high-impact client interactions, leading to stronger relationships and more innovative solutions. This shift proved that the ultimate value-add in a service industry is the human capacity for strategic synthesis and trustworthy communication — skills that thrive when augmented by AI.


Case Study 2: Leading Retail Bank – Embedding AI into Customer Service

The Challenge:

A large retail bank implemented AI chatbots and automated routing systems to handle routine customer inquiries, intending to reduce call center costs. However, customer satisfaction plummeted because complex or emotionally charged issues were being mishandled by the automation. The human agents felt demoralized, fearing redundancy.

The Collaborative Intelligence Solution:

The bank pivoted its strategy, creating a new role: the Augmented Human Agent. The human agents were upskilled in two key areas. First, they received intensive training in emotional regulation and conflict resolution to handle the high-stress, complex calls that the AI flagged and escalated. Second, they were trained in “AI tuning” — learning to review the chatbot’s transcripts, identify common failure points, and provide direct feedback to the AI development team. This turned the agents from passive recipients of technology into active partners in its improvement.

The Human-Centered Result:

This approach restored customer trust. Customers felt valued because their most difficult problems were routed quickly to a highly skilled, emotionally intelligent human. Employee engagement improved because agents felt empowered and recognized as essential collaborators in the bank’s digital transformation. The result was a successful blend: AI handled the volume and efficiency, while highly skilled humans handled the emotion and complexity, achieving both cost savings and higher customer satisfaction.


Conclusion: The Future of Work is Partnership

The AI Era is not about a technological race; it is about a human race to redefine skills, value, and purpose. The most forward-thinking leaders will treat AI deployment as a catalyst for human capital development. This means shifting budget from outdated legacy training programs to investments in judgment, ethics, creativity, and empathy. The future of work is not about the “Man vs. Machine” conflict, but the Man with Machine partnership.

Your competitive advantage tomorrow will be determined by how effectively your people can collaborate with the intelligent systems at their disposal. By focusing your upskilling efforts on the three pillars of Collaborative Intelligence, you ensure that your workforce is not just surviving the AI revolution, but actively leading it—creating a future that is not just efficient, but fundamentally human-centered and more innovative.


Image credit: Pixabay


AI-Powered Foresight

Predicting Trends and Uncovering New Opportunities


GUEST POST from Chateau G Pato

In a world of accelerating change, the ability to see around corners is no longer a luxury; it’s a strategic imperative. For decades, organizations have relied on traditional market research, analyst reports, and expert intuition to predict the future. While these methods provide a solid view of the present and the immediate horizon, they often struggle to detect the faint, yet potent, signals of a more distant future. As a human-centered change and innovation thought leader, I believe that Artificial Intelligence is the most powerful new tool for foresight. AI is not here to replace human intuition, but to act as a powerful extension of it, allowing us to process vast amounts of data and uncover patterns that are invisible to the human eye. The future of innovation isn’t about predicting what’s next; it’s about systematically sensing and shaping what’s possible. AI is the engine that makes that shift possible.

The human brain is a marvel of pattern recognition, but it is limited by its own biases, a finite amount of processing power, and the sheer volume of information available today. AI, however, thrives in this chaos. It can ingest and analyze billions of data points—from consumer sentiment on social media, to patent filings, to macroeconomic indicators—in a fraction of the time. It can identify subtle correlations and weak signals that, when combined, point to a major market shift years before it becomes a mainstream trend. By leveraging AI for foresight, we can move from a reactive position to a proactive one, turning our organizations from followers into first-movers.

The AI Foresight Blueprint

Leveraging AI for foresight isn’t a one-and-done task; it’s a continuous, dynamic process. Here’s a blueprint for how organizations can implement it:

  • Data-Driven Horizon Scanning: Use AI to continuously monitor a wide range of data sources, from academic papers and startup funding rounds to online forums and cultural movements. An AI can flag anomalies and emerging clusters of activity that fall outside of your industry’s current focus.
  • Pattern Recognition & Trend Identification: AI models can connect seemingly unrelated data points to identify nascent trends. For example, an AI might link a rise in plant-based food searches to an increase in sustainable packaging patents and a surge in home gardening interest, pointing to a larger “Conscious Consumer” trend.
  • Scenario Generation: Once a trend is identified, an AI can help generate multiple future scenarios. By varying key variables—e.g., “What if the trend accelerates rapidly?” or “What if a major competitor enters the market?”—an AI can help teams visualize and prepare for a range of possible futures.
  • Opportunity Mapping: AI can go beyond trend prediction to identify specific market opportunities. It can analyze the intersection of an emerging trend with a known customer pain point, generating a list of potential product or service concepts that address an unmet need.
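The horizon-scanning step above can be sketched in a few lines of Python. This is a toy illustration with invented mention counts, not a production system: it flags topics whose latest activity is a statistical outlier against their own history, which is the simplest form of weak-signal detection.

```python
from statistics import mean, stdev

# Hypothetical weekly mention counts per topic, e.g. gathered from
# forums, patent filings, or funding announcements (illustrative data).
mentions = {
    "sustainable packaging": [3, 4, 3, 5, 4, 4, 5, 12],
    "legacy ERP": [20, 19, 21, 20, 22, 21, 20, 21],
    "home gardening": [2, 2, 3, 2, 3, 3, 4, 9],
}

def emerging_topics(series_by_topic, z_threshold=2.0):
    """Flag topics whose latest count is a statistical outlier
    relative to their own history (a crude weak-signal detector)."""
    flagged = []
    for topic, counts in series_by_topic.items():
        history, latest = counts[:-1], counts[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            flagged.append(topic)
    return flagged

print(emerging_topics(mentions))
```

Here the steady "legacy ERP" topic is ignored, while the two topics with sudden spikes are surfaced for human review, which is exactly the division of labor the blueprint describes: the machine flags anomalies, people interpret them.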

“AI for foresight isn’t about getting a crystal ball; it’s about building a powerful telescope to see what’s on the horizon and a microscope to see what’s hidden in the data.”


Case Study 1: Stitch Fix – Algorithmic Personal Styling

The Challenge:

In the crowded and highly subjective world of fashion retail, predicting what a single customer will want to wear—let alone an entire market segment—is a monumental challenge. Traditional methods relied on seasonal buying patterns and the intuition of human stylists. This often led to excess inventory and a high rate of returns.

The AI-Powered Foresight Response:

Stitch Fix, the online personal styling service, built its entire business model on AI-powered foresight. The company’s core innovation was not in fashion, but in its algorithm. The AI ingests data from every single customer interaction—what they kept, what they returned, their style feedback, and even their Pinterest boards. This data is then cross-referenced with a vast inventory and emerging fashion trends. The AI can then:

  • Predict Individual Preference: The algorithm learns each customer’s taste over time, predicting with high accuracy which items they will like. This is a form of micro-foresight.
  • Uncover Macro-Trends: By analyzing thousands of data points across its customer base, the AI can detect emerging fashion trends long before they hit the mainstream. For example, it might notice a subtle shift in the popularity of a certain color, fabric, or cut among its early adopters.
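The micro-foresight idea above can be illustrated with a toy sketch: learning attribute-level preferences from keep/return feedback, then scoring unseen items. The data schema and scoring rule here are invented for illustration and are not Stitch Fix’s actual algorithm.

```python
from collections import defaultdict

# Hypothetical feedback: items one customer kept or returned, each
# described by style attributes (illustrative schema, invented data).
feedback = [
    ({"color": "navy", "fit": "slim"}, "kept"),
    ({"color": "navy", "fit": "relaxed"}, "kept"),
    ({"color": "beige", "fit": "slim"}, "returned"),
    ({"color": "beige", "fit": "relaxed"}, "returned"),
]

def attribute_affinities(history):
    """Score each attribute value in [-1, 1]: +1 if always kept,
    -1 if always returned, 0 if mixed."""
    tally = defaultdict(lambda: [0, 0])  # value -> [kept, seen]
    for attrs, outcome in history:
        for value in attrs.values():
            tally[value][1] += 1
            if outcome == "kept":
                tally[value][0] += 1
    return {v: 2 * kept / seen - 1 for v, (kept, seen) in tally.items()}

def score_item(attrs, affinities):
    """Predict an item's appeal as the mean affinity of its attributes."""
    scores = [affinities.get(v, 0.0) for v in attrs.values()]
    return sum(scores) / len(scores)

aff = attribute_affinities(feedback)
print(score_item({"color": "navy", "fit": "slim"}, aff))    # high score
print(score_item({"color": "beige", "fit": "relaxed"}, aff))  # low score
```

Even this crude version shows the mechanism: individual predictions come from the same feedback data that, aggregated across thousands of customers, reveals macro-trends such as a color quietly gaining favor.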

The Result:

Stitch Fix’s AI-driven foresight has allowed them to operate with a level of efficiency and personalization that is nearly impossible for traditional retailers to replicate. By predicting consumer demand, they can optimize their inventory, reduce waste, and provide a highly tailored customer experience. The AI doesn’t just help them sell clothes; it gives them a real-time, data-backed view of future consumer behavior, making them a leader in a fast-moving and unpredictable industry.


Case Study 2: Netflix – The Algorithm That Sees the Future of Entertainment

The Challenge:

In the early days of streaming, content production was a highly risky and expensive gamble. Studios would greenlight shows based on the intuition of executives, focus group data, and the past success of a director or actor. This process was slow and often led to costly failures.

The AI-Powered Foresight Response:

Netflix, a pioneer of AI-powered foresight, revolutionized this model. They used their massive trove of user data—what people watched, when they watched it, what they re-watched, and what they skipped—to predict not just what their customers wanted to watch, but what kind of content would be successful to produce. When they decided to create their first original series, House of Cards, they didn’t do so on a hunch. Their analysis showed that a significant segment of their audience had a high affinity for the original British series, enjoyed films starring Kevin Spacey, and gravitated toward films directed by David Fincher. The AI identified the convergence of these three seemingly unrelated signals as a major opportunity.

  • Predictive Content Creation: The algorithm predicted that a show with these specific attributes would have a high probability of success, a hypothesis that was proven correct.
  • Cross-Genre Insight: The AI’s ability to see patterns across genres and user demographics allowed Netflix to move beyond traditional content silos and identify new, commercially viable niches.
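The convergence logic described above can be illustrated with simple set arithmetic. The viewer IDs below are invented, and Netflix’s real models are far richer, but the core idea of measuring how strongly several audience signals overlap looks something like this:

```python
# Hypothetical viewer IDs per audience segment (invented data).
liked_uk_original = {1, 2, 3, 4, 5, 6}
watched_spacey_films = {2, 3, 4, 5, 9}
watched_fincher_films = {3, 4, 5, 7, 8}

# Viewers at the intersection of all three signals: the core audience
# for a show combining those attributes.
core_audience = liked_uk_original & watched_spacey_films & watched_fincher_films

# Share of the combined audience sitting in that core: one crude
# measure of how strongly the three signals converge.
all_viewers = liked_uk_original | watched_spacey_films | watched_fincher_films
convergence = len(core_audience) / len(all_viewers)

print(core_audience, round(convergence, 2))
```

A large core relative to the combined audience suggests the attributes reinforce one another rather than appealing to disjoint niches, which is the kind of signal that can justify a production bet.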

The Result:

Netflix’s success with House of Cards was a watershed moment that proved the power of AI-powered foresight. By using data to inform its creative decisions, Netflix was able to move from a content distributor to a powerful content creator. The company now uses AI to inform everything from production budgets to marketing campaigns, transforming the entire entertainment industry and proving that a data-driven approach to creativity is not only possible but incredibly profitable. Their foresight wasn’t a lucky guess; it was a systematic, AI-powered process.


Conclusion: The Augmented Innovator

The era of “gut-feel” innovation is drawing to a close. The most successful organizations of the future will be those that have embraced a new model of augmented foresight, where human intuition and AI’s analytical power work in harmony. AI can provide the objective, data-backed foundation for our predictions, but it is up to us, as human leaders, to provide the empathy, creativity, and ethical judgment to turn those predictions into a better future.

AI is not here to tell you what to do; it’s here to show you what’s possible. Our role is to ask the right questions, to lead with a strong sense of purpose, and to have the courage to act on the opportunities that AI uncovers. By training our teams to listen to the whispers in the data and to trust in this new collaborative process, we can move from simply reacting to the future to actively creating it, one powerful insight at a time.

Image credit: Microsoft CoPilot
