
What Are We Going to Do Now With GenAI?

GUEST POST from Geoffrey A. Moore

In 2023 we simply could not stop talking about Generative AI. But in 2024 the question for each enterprise, including yours, became (and remains today): What are we going to do about it? Tough questions call for tough frameworks, so let’s run this one through the Hierarchy of Powers to see if it can shine some light on what might be your company’s best bet.

Category Power

GenAI can have an impact anywhere in the Category Maturity Life Cycle, but the way it does so differs depending on where your category is, as follows:

  • Early Market. GenAI will almost certainly be a differentiating ingredient that is enabling a disruptive innovation, and you need to be on the bleeding edge. Think ChatGPT.
  • Crossing the Chasm. Nailing your target use case is your sole priority, so you should use GenAI if, and only if, it helps you do so, and avoid getting distracted by its other bells and whistles. Think Khan Academy at the school district level.
  • Inside the Tornado. Grabbing as much market share as you can is now the game to play, and GenAI-enabled features can help you do so provided they are fully integrated (no “some assembly required”). You cannot afford to slow your adoption down just at the time it needs to be at full speed. Think Microsoft Copilot.
  • Growth Main Street (category still growing double digits). Market share boundaries are settling in, so the goal now is to grow your patch as fast as you can, solidifying your position and taking as much share as you can from the also-rans. Adding GenAI to the core product can provide a real boost as long as the disruption is minimal. Think Salesforce CRM.
  • Mature Main Street (category stabilized, single-digit growth). You are now marketing primarily to your installed base, secondarily seeking to pick up new logos as they come into play. GenAI can give you a midlife kicker provided you can use it to generate meaningful productivity gains. Think Adobe Photoshop.
  • Late Main Street (category declining, negative growth). The category has never been more profitable, so you are looking to extend its life in as low-cost a way as you can. GenAI can introduce innovative applications that otherwise would never occur to your end users. Think HP home printing.

Company Power

There are two dimensions of company power to consider when analyzing the ROI from a GenAI investment, as follows:

  • Market Share Status. Are you the market share leader, a challenger, or simply a participant? As a challenger, you can use GenAI to disrupt the market pecking order provided you differentiate in a way that is challenging for the leader to copy. On the other hand, as a leader, you can use GenAI to neutralize the innovations coming from challengers provided you can get it to market fast enough to keep the ecosystem in your camp. As a participant, you would add GenAI only if it were your single point of differentiation (as a low-share participant, your R&D budget cannot fund more than one).
  • Default Operating Model. Is your core business better served by the complex systems operating model (typical for B2B companies with hundreds to thousands of large enterprises for customers) or the volume operations operating model (typical for B2C companies with hundreds of thousands to millions of consumers)? The complex systems model has sufficient margins to invest in professional services across the entire ownership life cycle, from design consulting to installation to expansion. You are going to need deep in-house expertise to win big in this game. By contrast, GenAI deployed via the volume operations model has to work out of the box. Consumers have neither the courage nor the patience to work through any disconnects.

Market Power

Whereas category share leaders benefit most from going broad, market segment leaders win big by going deep. The key tactic is to overdo it on the use cases that mean the most to your target customers, taking your offer beyond anything reasonable for a category leader to copy. GenAI can certainly be a part of this approach, as the two segmentation models below illustrate:

Market Segmentation for Complex Systems

In the complex systems operating model, GenAI should accentuate the differentiation of your whole product, the complete solution to whatever problem you are targeting. That might mean, for example, taking your Large Language Model to a level of specificity that would normally not be warranted. This sets you apart from the incumbent vendor, who has nothing like what you offer, as well as from other technology vendors who have not embraced your target segment’s specific concerns. Think CrowdStrike’s Charlotte AI for cybersecurity analysis.

Market Segmentation for Volume Operations

In the volume operations operating model, GenAI should accentuate the differentiation of your brand promise by overdelivering on the relevant value discipline. Once again, it is critical not to get distracted by shiny objects—you want to differentiate in one quadrant only, although you can use GenAI in the other three for neutralization purposes. For Performance, think knowledge discovery. For Productivity, think writing letters. For Economy, think tutoring. For Convenience, think gift suggestions.

Offer Power

Everybody wants to “be innovative,” but it is worth stepping back a moment to ask, how do we get a Return on Innovation? Compared to its financial cousin, this kind of ROI is more of a leading indicator and thus of more strategic value. Basically, it comes in three forms:

  1. Differentiation. This creates customer preference, the goal being not just to be different but to create a clear separation from the competition, one that they cannot easily emulate. Think OpenAI.
  2. Neutralization. This closes the gap between you and a competitor who is taking market share away from you, the goal being to get to “good enough, fast enough,” thereby allowing your installed base to stay loyal. Think Google Bard.
  3. Optimization. This reduces cost while maintaining performance, the goal being to expand the total available market. Think edge GenAI running on-device on PCs and Macs.

For most of us, GenAI will be an added ingredient rather than a core product, which makes the ROI question even more important. The easiest way to waste innovation dollars is to spend them on differentiation that does not go far enough, neutralization that does not go fast enough, or optimization that does not go deep enough. So the key lesson here is to pick one and only one as your ROI goal, and then go all in to get a positive return.

Execution Power

How best to incorporate GenAI into your existing enterprise depends on which zone of operations you are looking to enhance, as illustrated by the zone management framework below:

Zone Management Framework

If you are unsure exactly what to do, assign the effort to the Incubation Zone and put that team on the clock to come up with a good answer as fast as possible. If you can incorporate GenAI directly into your core business’s offerings at relatively low risk, by all means do so, as it is the current hot ticket, and assign it to the Performance Zone. If there is not a good fit, consider using it internally instead to improve your own productivity, assigning it to the Productivity Zone. Finally, although it is awfully early days for this, if you are convinced it is an absolutely essential ingredient in a big bet you feel compelled to make, then assign it to the Transformation Zone and go all in. Again, the overall point is to manage your investment in GenAI out of one zone and only one zone, as the success metrics for each zone are incompatible with those of the other three.

One final point. Embracing anything as novel as GenAI has to feel risky. I submit, however, that in 2025 not building upon meaningful GenAI action taken in 2024 is even more so.

That’s what I think. What do you think?


The Role Platforms Play in Business Networks

GUEST POST from Geoffrey A. Moore

A decade and a half ago, my colleague at TCG Advisors, Philip Lay, led a body of work with SAP around the topic of business network transformation. It was spurred by the unfolding transition from client-server architecture to a cloud-first, mobile-first world, and it explored the implications for managing both high-volume transactions and high-complexity relationships. Our hypothesis was that high-volume networks would be dominated by a small number of very powerful concentrators, whereas high-complexity networks would be orchestrated by a small number of very influential orchestrators.

The concentrator model has played out pretty much as expected, although the astounding success of Amazon in dominating retail is in itself a story for the ages. The key has been how IT platforms anchored in cloud and mobile, now supplemented with AI, have enabled transactional enterprises in multiple sectors of the economy to scale to levels previously unimaginable. And these same platforms, when opened to third parties, have proved equally valuable to the long tail of small entrepreneurial businesses, giving them access to a mass-market distribution channel for their offerings, something well beyond their reach in the prior era.

The impact on the orchestrator model, by contrast, is harder to see, in part because so much of it plays out behind closed doors “in the room where it happens.” Enterprises like JP Morgan Chase, Accenture, Salesforce, Cisco, and SAP clearly extend their influence well beyond their borders. Their ability to orchestrate their value chains, however, has historically been grounded primarily in a network of personal relationships maintained through trustworthiness, experience, and intelligence, not technology. So, where does an IT platform fit into that kind of ecosystem?

Here it helps to bring in a distinction between core and context. Core is what differentiates your business; context is everything else you do. Unless you are yourself a major platform provider, the platform per se is always context, never core. So all the talk about “What is your platform strategy?” is frankly a bit overblown. Nonetheless, in both the business models under discussion, platforms can impinge upon the core, and that is where your attention does need to be focused.

In the case of the high-volume transaction model, where commoditization is an everyday fact of life, many vendors have sought to differentiate the customer experience, both during the buying process and over the useful life of the offer. This calls for deep engagement with the digital resources available, including accessing and managing multiple sources of data, applying sophisticated analytics, and programming real-time interactions. That said, such data-driven personalization is a tactic that has been pursued for well over a decade now, and the opportunities to differentiate have diminished considerably. The best of those remaining are in industries dominated by an oligopoly of Old Guard enterprises that are so encumbered with legacy systems that they cannot field a credible digital game. If you are playing elsewhere, you will likely fare better if you get back to innovating on the offering itself.

In the case of managing context in a high-complexity relationship model, it is friction that is the everyday fact of life worth worrying about. Most of it lies in the domain of transaction processing, the “paperwork” that tags along with every complex sale. Anything vendors can do to simplify transactional processes will pay off not only in higher customer satisfaction but also in faster order processing, better retention, and improved cross-sell and up-sell. It is not core and it does not differentiate, but it does make everyone breathe easier, including your own workforce. Here, given the remarkable recent advances in data management, machine learning, and generative AI, there is enormous opportunity to change the game, and very little downside risk in doing so. The challenge is to prioritize this effort, especially in established enterprises where the inertia of budget entitlement keeps resources trapped in the coffers of the prior era’s winning teams.

The key takeaway from all this is that, for most of us, platforms are not strategic so much as they are operational. That is, the risk is less that you might choose an unsuitable platform and more that you will underinvest in exploiting whichever one you do choose. So the sooner you get this issue off the board’s agenda and into your OKRs, the better.

That’s what I think. What do you think?


Humans Are Not as Different from AI as We Think

GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But, do you know why that is so?

The answer is, because that is exactly what we humans do as well.
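
As a toy illustration of the next-word-probability idea described above, here is a minimal sketch in Python: a simple bigram counter over a tiny corpus. A real LLM is a neural network trained on vastly more text rather than a lookup table, so treat this purely as an intuition pump; the corpus and function names are illustrative.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the mountains of text a real model ingests.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each preceding word (a bigram model).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def next_word_probabilities(prev: str) -> dict:
    """Estimate P(next word | previous word) from the observed counts."""
    counts = next_word_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# "Auto-correct on steroids": given "the", favor the likeliest continuation.
print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'sofa': 0.25}
```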

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are a, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?


AI and Human Creativity Solving Complex Problems Together

GUEST POST from Janet Sernack

A recent McKinsey Leading Off – Essentials for leaders and those they lead email newsletter referred to the article “The organization of the future: Enabled by gen AI, driven by people,” which stated that digitization, automation, and AI will reshape whole industries and every enterprise. The article elaborated further, saying that, in terms of magnitude, the challenge is akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China. This shift was powered by the defining trait of our species, our human creativity, which is at the heart of all creative problem-solving endeavors, where innovation is the engine of growth, no matter what the context.

Moving into Uncharted Job and Skills Territory

We don’t yet know exactly what technological or soft skills, new occupations, or jobs will be required in this fast-moving transformation, or how we might further advance generative AI, digitization, and automation.

We also don’t know how AI will impact the need for humans to tap even more into the defining trait of our species, our human creativity, enabling us to become more imaginative, curious, and creative in the way we solve some of the world’s greatest challenges and most complex and pressing problems and transform them into innovative solutions.

We can be proactive by asking these two generative questions:

  • What if the true potential of AI lies in embracing its ability to augment human creativity and aid innovation, especially in enhancing creative problem solving, at all levels of civil society, instead of avoiding it? (Ideascale)
  • How might we develop AI as a creative thinking partner to effect profound change, and create innovative solutions that help us build a more equitable and sustainable planet for all humanity? (Hal Gregersen)

Because our human creativity is at the heart of creative problem-solving, and innovation is the engine of growth, competitiveness, and profound and positive change.

Developing a Co-Creative Thinking Partnership

In a recent Harvard Business Review article, “AI Can Help You Ask Better Questions – and Solve Bigger Problems,” Hal Gregersen and Nicola Morini Bianzino state:

“Artificial intelligence may be superhuman in some ways, but it also has considerable weaknesses. For starters, the technology is fundamentally backward-looking, trained on yesterday’s data – and the future might not look anything like the past. What’s more, inaccurate or otherwise flawed training data (for instance, data skewed by inherent biases) produces poor outcomes.”

The authors say that people need to manage this limitation if they are going to treat AI as a creative-thinking partner in solving complex problems, so that people can live healthy and happy lives and co-create an equitable and sustainable planet.

We can achieve this by focusing on specific areas where the human brain and machines might possibly complement one another to co-create the systemic changes the world badly needs through creative problem-solving.

A double-edged sword

This perspective is further complemented by a recent Boston Consulting Group article, “How people can create – and destroy – value with generative AI,” which found that the adoption of generative AI is, in fact, a double-edged sword.

In an experiment, participants using GPT-4 for creative product innovation outperformed the control group (those who completed the task without using GPT-4) by 40%. But for business problem solving, using GPT-4 resulted in performance that was 23% lower than that of the control group.

“Perhaps somewhat counterintuitively, current GenAI models tend to do better on the first type of task; it is easier for LLMs to come up with creative, novel, or useful ideas based on the vast amounts of data on which they have been trained. Where there’s more room for error is when LLMs are asked to weigh nuanced qualitative and quantitative data to answer a complex question. Given this shortcoming, we as researchers knew that GPT-4 was likely to mislead participants if they relied completely on the tool, and not also on their own judgment, to arrive at the solution to the business problem-solving task (this task had a “right” answer)”.

Taking the path of least resistance

In McKinsey’s Top Ten Reports This Quarter blog, seven of the ten articles relate specifically to generative AI: technology trends, the state of AI, the future of work, the future of AI, the new AI playbook, questions to ask about AI, and healthcare and AI.

As it is the most dominant topic across the board globally, if we are not both vigilant and intentional, a myopic focus on this one significant technology will take us all down the path of least resistance, where our energy moves to wherever it is easiest to go. Like a river yielding to its surrounding terrain, unless we take a strategic and systemic perspective, we will always go, and end up, where we have always gone.

Living our lives forwards

According to the Boston Consulting Group article:

“The primary locus of human-driven value creation lies not in enhancing generative AI where it is already great, but in focusing on tasks beyond the frontier of the technology’s core competencies.”

This means that a whole lot of other variables need to be at play, and a newly emerging set of human skills, especially in creative problem-solving, needs to be developed to extract the most value from generative AI and to generate the most imaginative, novel, and value-adding landing strips of the future.

Creative Problem Solving

In my previous blog posts “Imagination versus Knowledge” and “Why Successful Innovators Are Curious Like Cats” we shared that we are in the midst of a “Sputnik Moment” where we have the opportunity to advance our human creativity.

This human creativity is inside all of us; it involves the process of bringing something new into being that is original, surprising, useful, or desirable, in ways that add value to the quality of people’s lives, in ways they appreciate and cherish.

Taking a both/and approach

Our human creativity will be paralysed if we focus our attention and intention only on the technology and on the financial gains or potential profits we will get from it, and if we exclude the possibilities of a co-creative thinking partnership with the technology.

Instead, we need to deeply engage people in true creative problem-solving, involving them in positively impacting our crucial relationships and connectedness with one another, with the natural world, and with the planet.

A marriage between creatives, technologists, and humanities

In a recent Fast Company video presentation, “Innovating Imagination: How Airbnb Is Using AI to Foster Creativity,” Brian Chesky, CEO of Airbnb, states that we need to consider and focus our attention and intention on discovering what is good for people.

We need to develop a “marriage between creatives, technologists, and the humanities” that brings out the human and doesn’t let technology overtake our human element.

Developing Creative Problem-Solving Skills

At ImagineNation, we teach, mentor, and coach clients in creative problem-solving, through developing their Generative Discovery skills.

This involves developing an open and active mind and heart, by becoming flexible, adaptive, and playful in the ways we engage and focus our human creativity in the four stages of creative problem-solving.

This includes sensing, perceiving, and enabling people to deeply listen, inquire, question, and debate from the edges of temporarily hidden or emerging fields of the future.

It also means knowing how to emerge, diverge, and converge creative insights, collective breakthroughs, ideation processes, and cognitive and emotional agility shifts in order to:

  • Deepen our attending, observing, and discerning capabilities to consciously connect with, explore, and discover possibilities that create tension and cognitive dissonance to disrupt and challenge the status quo, and other conventional thinking and feeling processes.
  • Create cracks, openings, and creative thresholds by asking generative questions to push the boundaries, and challenge assumptions and mental and emotional models to pull people towards evoking, provoking, and generating boldly creative ideas.
  • Unleash possibilities, and opportunities for creative problem solving to contribute towards generating innovative solutions to complex problems, and pressing challenges, that may not have been previously imagined.

Experimenting with the generative discovery skill set enables us to juggle multiple theories, models, and strategies to create and plan in an emergent, and non-linear way through creative problem-solving.

As stated by Hal Gregersen:

“Partnering with the technology in this way can help people ask smarter questions, making them better problem solvers and breakthrough innovators.”

Succeeding in the Age of AI

We know that Generative AI will change much of what we do and how we do it, in ways that we cannot yet anticipate.

Success in the age of AI will largely depend on our ability to learn and change faster than we ever have before, in ways that preserve our well-being, connectedness, imagination, curiosity, human creativity, and our collective humanity through partnering with generative AI in the creative problem-solving process.

Find Out More About Our Work at ImagineNation™

Find out about our collective learning products and tools, including The Coach for Innovators, Leaders, and Teams Certified Program presented by Janet Sernack. It is a collaborative, intimate, and deeply personalized innovation coaching and learning program, supported by a global group of peers over nine weeks, which can be customised as a bespoke corporate learning program.

It is a blended and transformational change and learning program that will give you a deep understanding of the language, principles, and applications of an ecosystem focus, human-centric approach, and emergent structure (Theory U) to innovation, and upskill people and teams and develop their future fitness, within your unique innovation context. Find out more about our products and tools.


The Human-AI Co-Pilot

Redefining the Creative Brief for Generative Tools

GUEST POST from Art Inteligencia

The dawn of generative AI (GenAI) has ushered in an era where creation is no longer constrained by human speed or scale. Yet, for many organizations, the promise of the AI co-pilot remains trapped in the confines of simple, often shallow prompt engineering. We are treating these powerful, pattern-recognizing, creative machines like glorified interns, giving them minimal direction and expecting breakthrough results. This approach fundamentally misunderstands the machine’s capability and the new role of the human professional—which is shifting from creator to strategic editor and director.

This is the fundamental disconnect: a traditional creative brief is designed to inspire and constrain a human team—relying heavily on shared context, nuance, and cultural shorthand. An AI co-pilot, however, requires a brief that is explicitly structured to transmit strategic intent, defined constraints, and measurable parameters while leveraging the machine’s core strength: rapid, combinatorial creativity.

The solution is the Human-AI Co-Pilot Creative Brief, a structured document that moves beyond simple what (the output) to define the how (the parameters) and the why (the strategic goal). It transforms the interaction from one of command-and-response to one of genuine, strategic co-piloting.

The Three Failures of the Traditional Prompt

A simple prompt—”Write a blog post about our new product”—fails because it leaves the strategic and ethical heavy lifting to the unpredictable AI default:

  1. It Lacks Strategic Intent: The AI doesn’t know why the product matters to the business (e.g., is it a defensive move against a competitor, or a new market entry?). It defaults to generic, promotional language that lacks a strategic purpose.
  2. It Ignores Ethical Guardrails: It provides no clear instructions on bias avoidance, data sourcing, or the ethical representation of specific communities. The risk of unwanted, biased, or legally problematic output rises dramatically.
  3. It Fails to Define Success: The AI doesn’t know if success means 1,000 words of basic information, or 500 words of emotional resonance that drives a 10% click-through rate. The human is left to manually grade subjective output, wasting time and resources.

The Four Pillars of the Human-AI Co-Pilot Brief

A successful Co-Pilot Brief must be structured data for the machine and clear strategic direction for the human. It contains four critical sections:

1. Strategic Context and Constraint Data

This section is non-negotiable data: Brand Voice Guidelines (tone, lexicon, forbidden words), Target Persona Definition (with explicit demographic and psychographic data), and Measurable Success Metrics (e.g., “Must achieve a Sentiment Score above 75” or “Must reduce complexity score by 20%”). The Co-Pilot needs hard, verifiable parameters, not soft inspiration.

2. Unlearning Instructions (Bias Mitigation)

This is the human-centered, ethical section. It explicitly instructs the AI on what cultural defaults and historical biases to avoid. For example: “Do not use common financial success clichés,” or “Ensure visual representations of leadership roles are diverse and avoid gender stereotypes.” This actively forces the AI to challenge its training data and align with the brand’s ethical standards.

3. Iterative Experimentation Mandates

Instead of asking for one final product, the brief asks for a portfolio of directed experiments. This instructs the AI on the dimensions of variance to explore (e.g., “Generate 3 headline clusters: 1. Fear-based urgency, 2. Aspiration-focused long-term value, 3. Humorous and self-deprecating tone”). This leverages the AI’s speed to deliver human-directed exploration, allowing the human to focus on selection, refinement, and A/B testing—the high-value tasks.

4. Attribution and Integration Protocol

This section ensures the output is useful and compliant. It defines the required format (Markdown, JSON, XML), the needed metadata (source citation for facts, confidence score of the output), and the Human Intervention Point (e.g., “Draft 1 must be edited by the Chief Marketing Officer for final narrative tone and legal review”). This manages the handover and legal chain of custody for the final, approved asset.
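
Taken together, the four pillars lend themselves to being captured as structured data that both the machine and the human team can read. The sketch below shows one hypothetical way to encode such a brief in Python; the field names and values are illustrative assumptions, not a standard schema.

```python
# A hypothetical Co-Pilot Creative Brief as structured data (illustrative only).
co_pilot_brief = {
    "strategic_context": {                      # Pillar 1: hard, verifiable parameters
        "business_goal": "Defend mid-market share against a lower-priced entrant",
        "brand_voice": {"tone": "confident, plainspoken",
                        "forbidden_words": ["revolutionary", "game-changing"]},
        "target_persona": {"role": "IT operations manager", "org_size": "200-2,000 employees"},
        "success_metrics": ["Sentiment score above 75", "Complexity score reduced by 20%"],
    },
    "unlearning_instructions": [                # Pillar 2: bias mitigation
        "Avoid common financial success cliches",
        "Depict leadership roles with diverse, non-stereotyped representation",
    ],
    "experimentation_mandate": {                # Pillar 3: directed variance, not one answer
        "variants": 3,
        "dimensions": ["fear-based urgency", "aspirational long-term value",
                       "humorous, self-deprecating tone"],
    },
    "integration_protocol": {                   # Pillar 4: format, metadata, handover
        "output_format": "markdown",
        "required_metadata": ["source citations", "confidence score"],
        "human_intervention_point": "CMO edits draft 1 for narrative tone and legal review",
    },
}
```

Serialized to JSON, a brief like this can be pasted into a prompt or sent through an API call, versioned in source control, and audited alongside the asset it produced.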

Case Study 1: The E-commerce Retailer and the A/B Testing Engine

Challenge: Slow and Costly Product Description Generation

A large e-commerce retailer needed to rapidly create product descriptions for thousands of new items across various categories. The human copywriting team was slow, and their A/B testing revealed that the descriptions lacked variation, leading to plateaued conversion rates.

Co-Pilot Brief Intervention:

The team implemented a Co-Pilot Brief that enforced the Iterative Experimentation Mandate. The brief dictated: 1) Persona Profile, 2) Output Length, and crucially, 3) Mandate: “Generate 5 variants that maximize different psychological triggers: Authority, Scarcity, Social Proof, Reciprocity, and Liking.” The AI delivered a rich portfolio of five distinct, strategically differentiated options for every product. The human team spent time selecting the best option and running the A/B test. This pivot increased the speed of description creation by 400% and—more importantly—increased the success rate of the A/B tests by 30%, proving the value of AI-directed variance.

Case Study 2: The Healthcare Network and Ethical Compliance Messaging

Challenge: Creating Sensitive, High-Compliance Patient Messaging

A national healthcare provider needed to draft complex, highly sensitive communication materials regarding new patient privacy laws (HIPAA) that were legally compliant yet compassionate and easy to understand. The complexity often led to dry, inaccessible language.

Co-Pilot Brief Intervention:

The team utilized a Co-Pilot Brief emphasizing Constraint Data and Unlearning Instructions. The brief included: 1) Full legal text and mandatory compliance keywords (Constraint Data), 2) Unlearning Instructions: “Avoid all medical jargon; do not use the passive voice; maintain a 6th-grade reading level; project a tone of empathetic assurance, not legal warning,” and 3) Success Metric: “Must achieve Flesch-Kincaid Reading Ease Score above 65.” The AI successfully generated drafts that satisfied the legal constraints while adhering to the reading ease metric. The human experts spent less time checking legal compliance and more time refining the final emotional tone, reducing the legal review cycle by 50% and significantly increasing patient comprehension scores.
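
As a rough sketch of how a readability gate like that success metric might be automated before human review, the snippet below computes the standard Flesch Reading Ease score with a deliberately naive syllable counter (production tools use pronunciation dictionaries, so scores will differ slightly); the draft text is illustrative.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count contiguous vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

draft = "We keep your health records private. You choose who can see them."
score = flesch_reading_ease(draft)
print(round(score, 1), "PASS" if score > 65 else "REVISE")  # brief requires a score above 65
```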

Conclusion: From Prompt Engineer to Strategic Architect

The Human-AI Co-Pilot Creative Brief is the most important new artifact for innovation teams. It forces us to transition from thinking of the AI as a reactive tool to treating it as a strategic partner that must be precisely directed. It demands that humans define the ethical boundaries, strategic intent, and success criteria, freeing the AI to do what it does best: explore the design space at speed. This elevates the human role from creation to strategic architecture.

“The value of a generative tool is capped by the strategic depth of its brief. The better the instructions, the higher the cognitive floor for the output.”

The co-pilot era is here. Your first step: Take your last successful creative brief and re-write the Objectives section entirely as a set of measurable, hard constraints and non-negotiable unlearning instructions for an AI.

Extra Extra: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.
