Winning with Artificial Intelligence in 90 Days

Exclusive Interview with Charlene Li

The rapid evolution of artificial intelligence (AI) has shifted the technology from a futuristic curiosity to the primary engine of modern organizational growth. In an era defined by data-driven decision-making, the ability to effectively harness machine learning and predictive analytics is no longer just a competitive advantage; it is a fundamental requirement for long-term viability. However, the path to integration is rarely linear. Many organizations find themselves caught between the urgent need for transformation and the daunting reality of legacy infrastructure, talent shortages, and the cultural shifts required to move beyond small-scale pilots toward true enterprise-wide intelligence.

While the potential for increased efficiency and innovation is clear, the execution remains a significant hurdle.

The organizations that thrive in this new landscape are those that treat AI as a core strategic pillar rather than a plug-and-play software update. This requires a rethink of how human talent and machine intelligence coexist, ensuring that the technology enhances human capability rather than simply automating existing inefficiencies. Overcoming these challenges involves not just technical prowess, but a disciplined approach to change management and a clear vision for how intelligence will redefine the value the organization provides to its customers.

Today we will dive deep into what it takes to quickly achieve success with artificial intelligence with our special guest.

Creating a 90-Day Blueprint to Win with Artificial Intelligence

I recently had the opportunity to interview Charlene Li, a New York Times bestselling author, keynote speaker, and AI transformation strategist. Her latest book, Winning with AI: The 90-Day Blueprint for Success, co-authored with Dr. Katia Walsh, gives senior leaders a practical framework for moving from AI experimentation to measurable business value. Her prior books include The Disruption Mindset, Open Leadership, and Groundswell. Fast Company named her one of the most creative people in business, and she has worked with global organizations including 14 of the Dow Jones Industrial 30 companies. She is the founder of Altimeter Group (acquired by Prophet) and currently leads Quantum Networks Group.

Below is the text of my interview with Charlene, presented in a Q&A format, offering a preview of the kinds of insights you’ll find in Winning with AI: The 90-Day Blueprint for Success:

1. What confusion is being created by speaking of “AI” as one thing when there are different kinds of AI, and how does this hold back AI adoption?

When people say “AI,” they’re usually thinking ChatGPT. But ChatGPT is generative AI — and that’s just one of three types of AI showing up in business today. There’s also predictive AI, which has been quietly running in your CRM, your fraud detection, and your streaming recommendations for years. And there’s agentic AI, which takes autonomous action toward a goal rather than waiting for a prompt.

The Oracle (predictive), the Creator (generative), and the Agent (agentic) — that’s how Katia and I describe them in Winning with AI. They do fundamentally different things, and they require fundamentally different things from you.

The conflation matters because it leads to bad decisions. Leaders see a generative AI demo, get excited, and ask their teams to “do something with AI” — when the actual business problem might be better solved with predictive AI (and probably already could’ve been three years ago). Or they hear “agentic AI” and assume their organization is ready to deploy autonomous agents when they haven’t even gotten generative AI into their workforce yet.

The winners aren’t choosing among types — they’re using all three strategically, in combination. A customer care transformation might use predictive AI to route inquiries, generative AI to draft responses, and agentic AI to handle routine cases autonomously. Once you can see the three distinctly, the question stops being “what can I do with AI?” and starts being “what can AI do for me?” That’s the question that actually unlocks value.

2. What are some of the key characteristics of AI inertia and some of the best ways to break free?

We call it pilot purgatory — and almost every organization we work with is stuck there. The signs are easy to spot: dozens of disconnected pilots, lots of conference attendance, lots of slide decks, no measurable financial impact. An MIT study found 95% of AI initiatives fail to scale. That’s not a technology failure. It’s a failure of leadership and culture.

The classic characteristics:

    • Use cases as a strategy. Collecting use cases is procrastination. A long list of pilots is how organizations look busy without committing to anything.
    • Diffused accountability. When the CIO, CFO, and CMO all “share” responsibility for AI, no one owns the outcome.
    • Waiting for the foundation to be perfect. Clean data, the right platform, the perfect org structure — these become reasons to delay rather than constraints to solve through.
    • Confusing motion with progress. Running pilots feels like progress. It isn’t, unless those pilots are tied to your most important business problems.

To break free: pick your biggest strategic problems, figure out how AI solves them, invest heavily in those solutions, and move with urgency. Appoint one AI value owner who lives, breathes, and dreams AI outcomes. Kill pilots that aren’t on a path to scale. And replace “fail fast” with “learn fast” — nobody actually rewards failure, and the language of failure lets people walk away from things that should be pushed through.

Speed is the new moat. The companies that win aren’t the ones with the best technology. They’re the ones that adapt faster than their competitors.

3. There are still a lot of people out there not using AI (or not realizing that they are). What are some of the best ways for people to get started with AI?

Most people are already using AI — every spam filter, every Google Maps route, every recommendation on a streaming service is AI. So the real question is: how do you get started with the kind of AI that’s reshaping work right now, which is generative AI?

My advice is genuinely simple. Pick one of the major tools — Claude, ChatGPT, Gemini, Copilot — and start using it for one real task you do every week. Not a toy task. A real one. Drafting an email. Prepping for a meeting. Summarizing a long document. Brainstorming an approach to a problem you’re stuck on.

Two practical tips that make a big difference:

Write better prompts. A good prompt has a role (“Act as a marketing strategist”), instructions (what you want done), context (the background the AI needs), and an output format (memo, table, slide outline). Then refine through dialogue. Most people give AI two sentences and judge it on the result. Give it two paragraphs and you’ll be amazed.

Try the flipped interaction. Instead of asking AI for an answer, ask it to ask you questions until it has enough context to give a good answer. For example, at the end of a prompt, add this sentence: “Ask me any clarifying questions you may have.” It turns your prompt into a conversation.
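The four-part prompt structure and the flipped interaction can be sketched as a small template helper. This is a minimal illustration of the advice above, not code from the book; the function name and fields are illustrative.

```python
def build_prompt(role, instructions, context, output_format, flipped=True):
    """Assemble a prompt from the four parts Charlene describes:
    role, instructions, context, and output format."""
    parts = [
        f"Act as {role}.",
        f"Instructions: {instructions}",
        f"Context: {context}",
        f"Output format: {output_format}",
    ]
    if flipped:
        # The "flipped interaction": invite the model to gather
        # missing context before answering.
        parts.append("Ask me any clarifying questions you may have.")
    return "\n\n".join(parts)


prompt = build_prompt(
    role="a marketing strategist",
    instructions="draft a launch email for our new analytics feature",
    context="B2B SaaS audience; existing customers on the Pro plan",
    output_format="a short email with a subject line and three paragraphs",
)
print(prompt)
```

Pasting a structured prompt like this into any of the major tools (Claude, ChatGPT, Gemini, Copilot) gives the model far more to work with than the two-sentence prompts most people start from.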

I think of AI fluency as learning to eat with chopsticks: at first you’re concentrating on every motion, and eventually it’s just how you eat. You won’t get there by reading about it. You get there by using it. Every day. On real work.

4. Does AI safety really matter? It seems like all of the major AI players are just focused on speed and getting to AGI before China, am I wrong?

You’re not wrong about what the AI players are doing. But you’re probably not playing that game – more on that below. First, I’d push back on the framing that safety and speed are opposites.

Think of Formula 1. The drivers who win championships have absolute confidence in their brakes, their crash structures, their fire suppression systems. That’s why they can push so hard on speed. Safety is what makes speed possible. The companies moving fastest on AI adoption aren’t the ones cutting corners on responsibility — they’re the ones with the highest ethical standards, because trust eliminates friction. When your team knows where the guardrails are, when your customers trust your intentions, when your board has confidence in your approach, you can move at the speed AI demands.

The 2024 Edelman Trust Barometer found that 43% of people would reject AI in products and services if they don’t believe the innovation has been thoroughly scrutinized. That’s not a PR problem — it’s a revenue and competitive position problem.

On the AGI race specifically, the geopolitical framing oversimplifies what’s actually a much more textured conversation about how AI is deployed within companies, governments, and communities. Most leaders I work with aren’t worrying about AGI — they’re worrying about whether their AI customer service tool is treating customers fairly, whether their AI-driven hiring screen is introducing bias, and whether their data is being used in ways customers didn’t consent to. Those are the safety questions that matter for the next five years, regardless of what the frontier players are doing.

5. Where is the government being too hands off with AI and its impacts, and what conversations should governments and societies be having about AI and its impacts that they’re not?

I’ll be careful here because I’m not a policy person — I work with the leaders implementing AI inside organizations. But from that vantage point, a few things stand out.

The conversation we aren’t having enough is about workforce transition. Not “will AI take jobs” — we’ve been arguing about that abstractly for three years. The real question is what happens to the millions of people whose roles will substantially change in the next five years, and who’s responsible for helping them adapt. Right now, that’s mostly being left to individual employers, and the gap between what enlightened employers are doing and what the median employer is doing is enormous. That gap will become a societal problem long before regulators catch up.

The second underdiscussed conversation is about education. We’re training a generation of students with curricula designed for a pre-AI world. By the time we figure out what AI fluency looks like in K–12, the kids who needed it most will be in the workforce.

Third — and this is where I’d actually like to see governments lean in more — is data. Most AI regulation focuses on the models. The leverage is in the data: who owns it, how it can be used, what consent looks like in a world where data collected for one purpose can be repurposed for AI training that wasn’t imagined when it was collected.

That said, regulations always lag technology. Anchoring your responsible and ethical AI policy in your organization’s values rather than waiting for rules is the right move, regardless of what governments do.

6. What are the key pillars that form the basis of a strong AI foundation for those who seek to take full advantage of AI in their organization?

In Winning with AI, Katia and I lay out four building blocks. They develop together, not sequentially.

Mindset — the cultural ability to move at AI’s speed. Speed, focus, customer-centricity, experimentation, and learning from setbacks rather than treating them as evidence that the technology doesn’t work. Without the right mindset, you can have the best tools in the world, and they’ll sit unused.

Skillset — AI fluency across the workforce, not just in IT. Everyone needs to understand what AI can and can’t do, how to use it responsibly, and how to apply it to their actual work.

Toolset — the technical foundation. We tell leaders to build with LEGO, not cathedrals. Modular, interchangeable components you can swap as the technology evolves, sitting on top of data that’s good enough to start with.

Decision-set — the governance and decision-making structures that let you move fast without breaking things. Who decides what, how quickly, with what oversight.

The mistake organizations make is treating these as a sequence — first we’ll fix the data, then we’ll train people, then we’ll deploy. That sequence will take you a decade. The right approach is to build the blocks while delivering value, using each AI application to strengthen multiple blocks at once.

And one piece that wraps all four: leadership. Without active, visible commitment from the top, the four building blocks don’t compound. With it, they accelerate.

7. Of all the outcomes that the different types of AI can achieve, which activities create the most value for organizations?

We frame the value AI creates in three areas: engagement, efficiencies, and reinvention.

Engagement is about deepening relationships with customers and employees through personalization, prediction, and proactive service. Anticipating what someone needs before they articulate it.

Efficiencies are about doing what you already do, faster and cheaper. This is where most organizations start — and where most get stuck. Efficiency gains are real, but they’re easy for competitors to replicate, which means they don’t create lasting advantage.

Reinvention is the most transformational and the most uncomfortable. It’s not asking “how can we do what we do faster?” — it’s asking “what becomes possible now that the old constraints are gone?” New business models. New revenue streams. New markets that were never economical before.

The trap is thinking efficiency is AI’s value. We call it the efficiency trap. Companies that limit themselves to efficiency are using a strategic weapon as a cost-cutting tool. The real competitive advantage comes from engagement and reinvention.

A great example: Coursera. Translation used to cost about $10,000 per course, which made global expansion economically impossible at the scale of their 5,000+ course catalog. Generative AI eliminated that constraint overnight. CEO Jeff Maggioncalda saw it immediately and launched Project Genesis by the end of 2022. That’s reinvention — AI removing a constraint that defined the business model.

If I had to pick one activity that creates the most value, it would be: using AI to remove a constraint that has shaped your industry’s economics for so long that nobody questions it anymore.

8. There was a lot of talk for a while about becoming an AI-first organization. Is this something that companies should be trying to do?

No. Be AI-ready instead.

“AI-first” is a technology company’s framing. It puts the technology in the driver’s seat, which sounds visionary but in practice produces dozens of disconnected pilots with no strategic impact. You end up chasing AI because it’s shiny rather than because it solves a real problem.

“AI-ready” is a business leader’s framing. It puts strategy in the driver’s seat. You’re building the culture, the skills, the decision systems, and the technical foundation that let AI create real value against the strategic priorities you already have.

Said simply: AI-first is a technology mindset. AI-ready is a business mindset.

You don’t actually need an AI strategy. You need a business strategy that uses AI. Anyone selling you on an AI strategy is selling you the wrong thing.

9. What should people be doing as individuals to maintain their value to their organizations and to grow their careers?

Three things, in order.

One: develop genuine AI fluency. Not “I’ve used ChatGPT a few times” fluency. Real fluency — the kind where AI is woven into how you think, prepare, decide, and communicate. The people and organizations who get to AI fluency in 2026 will pull dramatically ahead of those who don’t, and the gap will be very hard to close once it opens.

Two: deepen what’s uniquely human. AI can amplify cognition at speeds and scales no individual can match. What it can’t do is exercise empathy, self-reflection, intuition, judgment, and wisdom. These five traits — the foundation of what Katia and I call “superhumans” in the book — become more valuable, not less, as AI handles more of the cognitive work. The leaders who pair AI’s reach with these distinctly human capacities are the ones creating the most value.

Three: build a lifelong learning practice. The shelf life of any specific skill is shrinking. The skill that doesn’t depreciate is the ability to learn — quickly, repeatedly, with intellectual humility. Normalize not knowing. Embed reflection into how you work. Treat curiosity as a professional asset, not a side hobby.

If you do those three things, you’ll be more valuable in the future than you are today, regardless of what happens to your specific role.

10. What have organizations gotten wrong about rolling out AI and what can the early adopters do to recover from botched initial rollouts?

The biggest things organizations get wrong:

  • Treating AI as a technology project. It’s a business initiative for value creation that happens to use technology. When IT owns it, it stays small.
  • Use cases instead of strategy. A laundry list of pilots is procrastination dressed up as progress.
  • Diffused accountability. Without a single AI value owner, the work fragments.
  • Skipping the people work. Throwing tools at employees without addressing the fear underneath. Until fear is replaced by trust, no amount of training will change behavior.

If you’ve already botched the rollout, here’s the recovery path:

Stop and audit. What’s actually scaling, what’s not, what’s draining resources without producing value? Be honest. Sunset the dead ends.

Appoint one accountable AI leader. If no single person is accountable for AI value creation across the enterprise, fix that this quarter. Not part-time, not committee-led — one person whose performance is measured on the value that AI creates.

Pick one strategically meaningful problem and go after it. Not the easiest problem. The one whose solution would matter most to the business.

Learn from Ally Bank. When generative AI emerged, Ally’s CIO Sathish Muthukrishnan deliberately chose the most resistant audience — customer service agents — and a low-stakes problem: summarizing customer calls. The result was so valuable that the agents who’d been most skeptical became the loudest advocates: “Don’t take this away from me.” Targeting the skeptics with a real win is one of the most powerful change strategies we’ve seen.

A botched rollout isn’t a death sentence. It’s actually a useful clearing of the underbrush — assuming you learn from it.

11. Several studies have come out recently about the negative effects of AI on human cognition. Any tips for how to best use AI without degrading your brain?

This is a real concern and worth taking seriously. The risk isn’t AI itself — it’s lazy AI use. Using AI to skip thinking rather than to enhance it.

A few habits I’ve found useful:

Think first, then prompt. Before going to AI for an answer, write down what you think. Coursera’s Jeff Maggioncalda calls this cognitive bootstrapping — write your perspective on a decision, then ask AI to challenge it: “What are the strengths and weaknesses of this view? What are my blind spots? What would you recommend I improve?” AI sharpens your thinking instead of replacing it.

Treat AI outputs as drafts, not deliverables. Read critically. Push back. Ask why. Verify facts. The moment you stop questioning AI’s outputs is the moment your thinking starts to atrophy.

Protect deep work. Schedule time for thinking that doesn’t involve AI at all. Reading, writing, reflecting, walking — the unstructured time where your brain consolidates what it knows. AI can compress research, but it can’t compress wisdom. That still has to come from lived experience, integrated over time.

Notice the difference between using AI to accelerate something you understand and using AI to substitute for understanding. Acceleration is healthy. Substitution erodes you.

The promise of AI isn’t to do our thinking for us. It’s to help us think better. The discipline is staying on the right side of that line.

12. Any question you wish I had asked but didn’t?

Yes — I’d love a question about the human possibility on the other side of this.

Most AI conversation is about risk, displacement, and disruption. Those are real. But the conversation Katia and I get most excited about is what becomes possible when AI handles the cognitive work that has been depleting people for decades — the synthesis, the routing, the routine analysis — and frees up human capacity for what only humans can do.

We call those people “superhumans” — not because they’re enhanced by technology in some sci-fi sense, but because they finally have the room to be more deeply human. To exercise empathy, self-reflection, intuition, judgment, and wisdom at a level that’s been crowded out by cognitive overload.

The first companies to deliberately develop an organization filled with superhumans won’t just have a competitive advantage. They’ll be creating an entirely new form of value — one we haven’t fully named yet. That’s the future I want leaders thinking about. Not “how do I survive AI?” but “what becomes possible for my people on the other side of this?”

Dream it. Then build it.

Conclusion

Thank you for the great conversation, Charlene!

I hope everyone has enjoyed this peek into the mind of one of the women behind the insightful new title Winning with AI: The 90-Day Blueprint for Success!

Image credits: Charlene Li, Pexels

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
