

Co-Creating AI with Frontline Stakeholders

GUEST POST from Art Inteligencia


I. The “Stable Spine” of Trust: Anchoring AI in Human Safety

To scale any innovation — especially one as disruptive as Agentic AI — an organization must first establish what I call the “Stable Spine.” This is the rigid, dependable core of organizational values, psychological safety, and transparent communication that allows the “Modular Wings” of technological experimentation to flex without breaking the culture.

Establishing Psychological Safety First

The greatest barrier to AI adoption isn’t technical debt; it’s automation anxiety. When frontline stakeholders feel that AI is being “done to” them, they instinctively protect their tribal knowledge. Co-creation flips this script. By involving employees before a single line of code is written, we shift the narrative from replacement to augmentation.

  • The Pre-Mortem Dialogue: Openly discussing “What happens if this works?” and “How does this change your value to the firm?”
  • Vulnerability in Leadership: Admitting that the AI is a “student” and the frontline workers are the “teachers” provides the grounding needed for honest feedback.

Moving from “Black Box” to “Glass Box” Collaboration

Traditional AI implementations often fail because they are opaque. A Human-Centered approach demands a “Glass Box” philosophy where the logic, data inputs, and intent of the AI are visible to those using it. When a Regulatory Compliance Officer understands why an agent flagged a specific document, they transition from a skeptic to a supervisor of the technology.

Defining the Shared Purpose

The “Stable Spine” is reinforced when the AI’s goals are perfectly aligned with the frontline’s daily friction points. We aren’t just implementing AI to “increase efficiency” (a corporate-centric goal); we are implementing it to “remove the soul-crushing administrative burden” (a human-centric goal). Shared Purpose is the glue that keeps stakeholders engaged when the initial novelty of the tech wears off.

“Innovation is not about the technology; it’s about the humans the technology serves. If the spine of trust isn’t straight, the wings of innovation will never lift.” — Braden Kelley

II. Identifying High-Friction “Experience Level Measures” (XLMs)

To move beyond the hype of AI, we must move beyond the vanity of traditional metrics. In a human-centered innovation framework, we don’t just look at Key Performance Indicators (KPIs); we look at Experience Level Measures (XLMs). While a KPI tells you what happened (e.g., “Average Handle Time”), an XLM tells you how it felt for the human involved. This is where the real “Revenue Leakage” and “Engagement Leakage” are hidden.

The CX/EX Audit: Hunting for Friction

Innovation starts by identifying where human potential is being throttled. We conduct a dual audit of the Customer Experience (CX) and the Employee Experience (EX). When frontline stakeholders are forced to perform “swivel-chair” data entry or navigate fragmented legacy systems, their cognitive load is exhausted before they ever reach a high-value task. These are the high-friction zones ripe for AI co-creation.

Mapping the “Soul-Crushing” Journey

By mapping the stakeholder journey, we can pinpoint the specific moments where AI agents can act as a lubricant. We look for three specific types of friction:

  • Cognitive Friction: Where a worker must synthesize too much disparate data to make a simple decision.
  • Process Friction: Where “the way we’ve always done it” creates unnecessary loops or wait times.
  • Emotional Friction: Where the task is so repetitive or mundane that it leads to burnout and disengagement.

From SLAs to XLMs: Redefining Value

Traditional Service Level Agreements (SLAs) are often centered on the machine or the process. In a co-created AI environment, we shift the focus to the human outcome. If an AI agent reduces a task from 60 minutes to 10 minutes, the value isn’t just the 50 minutes saved; the value is what the human does with that newly found 50 minutes. Does it go toward deep work, creative problem solving, or building a stronger relationship with the customer?

Traditional Metric (KPI) | Human-Centered Metric (XLM) | The AI Opportunity
Task Completion Rate | Cognitive Ease Score | Automating "low-value" data synthesis.
Response Time | Empathy Availability | Freeing up humans for complex emotional labor.
Error Rate | Confidence Index | Using AI as a "second pair of eyes" to reduce stress.
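
To make the KPI-to-XLM shift concrete, here is a minimal sketch of how a team might capture XLMs alongside the KPIs they complement. The field names and the 1-to-5 self-report scale are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import date
from statistics import mean

@dataclass
class XLMSample:
    """One frontline worker's self-reported experience of a task, scored 1-5."""
    worker_id: str
    task: str
    cognitive_ease: int        # complements Task Completion Rate: how easy was it to think through?
    empathy_availability: int  # complements Response Time: how much attention was left for the customer?
    confidence_index: int      # complements Error Rate: how confident were you in the outcome?
    recorded_on: date = field(default_factory=date.today)

def xlm_summary(samples: list[XLMSample]) -> dict[str, float]:
    """Average each experience measure so it can sit beside the traditional KPI dashboard."""
    return {
        "cognitive_ease": mean(s.cognitive_ease for s in samples),
        "empathy_availability": mean(s.empathy_availability for s in samples),
        "confidence_index": mean(s.confidence_index for s in samples),
    }
```

Reviewing these averages quarterly, next to the KPIs they complement, is what keeps the dashboard honest about how the work actually felt.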

“Efficiency is doing things right; Effectiveness is doing the right things. XLMs ensure that our AI initiatives are making us more effective, not just faster at being frustrated.” — Braden Kelley

III. The Co-Creation Workshop: Where Art Meets Science

In the world of innovation, we often talk about the “Science” of data and the “Art” of human intuition. The Co-Creation Workshop is the laboratory where these two forces collide. We don’t just ask frontline stakeholders what they want; we observe how they solve problems and then design AI “agents” that mimic their best instincts while automating their worst hurdles.

Empathy-Driven Design and Personas

We begin by building robust Personas for our frontline stakeholders. Whether it’s a Global Supply Chain Manager balancing logistics during a port strike or a Customer Success Lead managing a high-churn account, we need to understand the emotional and contextual landscape they inhabit. This empathy-driven approach ensures the AI is built for the “messy reality” of the job, not a sanitized version of the process manual.

[Image of an Empathy Map for User Experience Design]

Designing “Modular Wings” for Human Agency

A key Braden Kelley principle is that while the organization needs a “Stable Spine,” the frontline needs “Modular Wings.” In our workshop, we identify which parts of the AI system should be rigid (compliance, data integrity) and which should be flexible (UI preferences, decision-making thresholds).

  • The Rigidity: The underlying LLM and the corporate data safety protocols.
  • The Flexibility: The ability for the frontline worker to “tune” the agent’s tone, level of detail, and escalation triggers.

By giving users the “knobs and dials,” we increase their sense of ownership over the final product.
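
One way to picture those "knobs and dials" is a configuration object that keeps the spine rigid and the wings flexible. This is a minimal sketch under assumed names (AgentSpine, AgentWings, tone, detail_level, escalation_threshold); it is an illustration, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpine:
    """The rigid core: set by the organization, not editable by individual users."""
    model_id: str = "approved-llm-v1"   # assumption: an identifier for the vetted model
    data_residency: str = "eu-only"     # where data is allowed to live
    pii_redaction: bool = True          # compliance controls stay locked

@dataclass
class AgentWings:
    """The flexible edges: each frontline worker tunes these to fit their own workflow."""
    tone: str = "concise"               # e.g. "concise" or "conversational"
    detail_level: int = 2               # 1 = headline only, 3 = full rationale
    escalation_threshold: float = 0.8   # confidence below this routes the decision to a human

def build_agent_config(spine: AgentSpine, wings: AgentWings) -> dict:
    """Combine the non-negotiable spine with the user-owned wings into one runtime config."""
    return {**spine.__dict__, **wings.__dict__}
```

The design choice that matters is the frozen spine: users can retune their wings daily, but no amount of tuning can switch off redaction or move data out of bounds.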

Rapid Prototyping: The Experience Walkthrough

Instead of long development cycles, we use Experience Prototypes. These are low-fidelity simulations — sometimes as simple as a storyboard or a “Wizard of Oz” test — where the human interacts with a “pretend” AI. This allows us to map the Human-AI Handoff:

  1. The Trigger: What event causes the human to turn to the AI?
  2. The Interaction: How does the AI present information? (Is it a suggestion, a summary, or a draft?)
  3. The Judgment: How does the human validate or correct the AI’s output?
  4. The Feedback Loop: How does the AI learn from that correction?
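
To see the four handoff moments in one place, here is a minimal sketch of the cycle in code. The collaborators (ai_agent, worker, feedback_store) and their method names are hypothetical stand-ins for whatever systems the co-creation team actually wires together.

```python
def handoff_cycle(event, ai_agent, worker, feedback_store):
    """Walk one event through the four stages of the human-AI handoff."""
    # 1. The Trigger: something happens that makes the human turn to the AI.
    if not ai_agent.detect_trigger(event):
        return None

    # 2. The Interaction: the AI presents a suggestion, summary, or draft.
    draft = ai_agent.propose_draft(event)

    # 3. The Judgment: the human validates or corrects the output.
    final = worker.review(draft)

    # 4. The Feedback Loop: the correction is captured so the AI can learn from it.
    feedback_store.record_correction(draft=draft, final=final, event=event)
    return final
```

In a "Wizard of Oz" prototype, ai_agent can simply be a teammate typing answers behind the scenes, which is often enough to validate the handoff before any model is built.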

The “Art” of Intuition vs. The “Science” of Automation

The workshop highlights that AI excels at Synthesizing (Science), but humans excel at Contextualizing (Art). We use this session to define the “Escalation Matrix.” If the data is 90% certain but the human “gut feeling” says otherwise, how does the system handle that conflict? Designing for this tension is what makes an AI tool truly innovative rather than just “efficient.”

“Co-creation is the bridge between a tool that is technically impressive and a tool that is actually used. If the frontline doesn’t see their ‘Art’ reflected in the ‘Science’ of the AI, they will find a way to bypass it.” — Braden Kelley

IV. Solving for “Causal AI” and Intent: From Correlation to Context

In the “Science” of standard machine learning, models are often built on correlations — patterns in data that suggest what might happen next. But for a frontline worker in a high-stakes environment, “what” isn’t enough. To truly co-create, we must move toward Causal AI, where the system and the human collaborate to understand the why behind a recommendation. This is where we bridge the gap between algorithmic output and human intent.

Moving Beyond the Correlation Trap

If an AI agent suggests a supply chain reroute or a specific credit adjustment, the frontline stakeholder needs to see the “connective tissue” of that logic. Without causality, the AI is just a black box throwing out guesses. In our co-creation sessions, we design Explainability Interfaces that highlight the primary drivers of a decision.

  • The “Why” Prompt: Every AI suggestion should include a “Show Logic” feature that maps the causal factors (e.g., “Delayed shipment in Suez + Low local inventory + 10% surge in regional demand”).
  • The Counter-Factual: Allowing users to ask, “What if the shipment wasn’t delayed?” to see how the AI’s intent changes.
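
As a hedged sketch of what a "Show Logic" payload could look like in practice, consider the snippet below. The driver names and weights are invented for illustration, and the counterfactual math is a toy; a production Causal AI stack would derive both from an actual causal model rather than a hard-coded dictionary.

```python
suggestion = {
    "action": "Reroute shipment via Rotterdam",
    "confidence": 0.86,
    # The "Why" prompt: the causal drivers surfaced when the user clicks "Show Logic".
    "causal_drivers": [
        {"factor": "Delayed shipment in Suez", "weight": 0.45},
        {"factor": "Low local inventory", "weight": 0.35},
        {"factor": "10% surge in regional demand", "weight": 0.20},
    ],
}

def show_logic(s: dict) -> str:
    """Render the causal chain as one readable line for the frontline user."""
    chain = " + ".join(d["factor"] for d in s["causal_drivers"])
    return f'{s["action"]} (confidence {s["confidence"]:.0%}) because: {chain}'

def counterfactual(s: dict, removed_factor: str) -> dict:
    """The 'What if?' view: drop one driver so the user can see how much the recommendation hinged on it."""
    kept = [d for d in s["causal_drivers"] if d["factor"] != removed_factor]
    remaining_weight = sum(d["weight"] for d in kept) or 1.0
    return {**s, "confidence": round(s["confidence"] * remaining_weight, 2), "causal_drivers": kept}

print(show_logic(suggestion))
print(counterfactual(suggestion, "Delayed shipment in Suez"))
```

Even a rough interface like this changes the conversation: the user is no longer arguing with an answer, they are interrogating a chain of reasons.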

Context Injection: The Frontline as the “Ground Truth”

Data science often suffers from “Data Silos” — it sees the numbers but misses the Context. A frontline worker knows that a 20% spike in orders might be a one-time anomaly due to a local event, not a permanent trend.

Co-creation allows us to build “Context Injection” points where the human can feed the “Art” of their situational awareness back into the “Science” of the model. This transforms the AI from a static tool into a dynamic partner that respects the Ground Truth of the shop floor or the call center.

Human-in-the-Loop (HITL) 2.0: From Safety Net to Co-Pilot

We are evolving the concept of Human-in-the-Loop. In version 1.0, the human was merely a “kill switch” for when the AI failed. In HITL 2.0, the human is a Co-Pilot. We design the interaction so that:

  1. The AI Proposes: Offering 2–3 paths based on data.
  2. The Human Disposes: Choosing the path that aligns with the current organizational intent (which might shift faster than the data).
  3. The System Learns: Capturing the reasoning behind the human’s choice to refine future causal models.
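
To make the propose-dispose-learn rhythm tangible, here is a minimal sketch of a single HITL 2.0 turn. The class and field names are assumptions for illustration; the point is that the human's reasoning, not just their choice, gets captured for the next refinement cycle.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    path: str
    rationale: str

def hitl_turn(proposals: list[Proposal], chosen_index: int, human_reasoning: str) -> dict:
    """One co-pilot turn: the AI proposes, the human disposes, the system learns."""
    # 1. The AI proposes: two or three candidate paths, each with its data-driven rationale.
    assert 2 <= len(proposals) <= 3, "keep the choice set small enough to reason about"

    # 2. The human disposes: picks the path that matches current organizational intent.
    chosen = proposals[chosen_index]

    # 3. The system learns: store both the choice and the *why* behind it for future causal models.
    return {
        "chosen_path": chosen.path,
        "ai_rationale": chosen.rationale,
        "human_reasoning": human_reasoning,  # the context the data alone could not see
        "rejected": [p.path for p in proposals if p is not chosen],
    }
```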

The Outcome: Cognitive Alignment

When we solve for intent, we achieve Cognitive Alignment. The frontline stakeholder no longer views the AI as a competitor or a mystery, but as an extension of their own expertise. They aren’t just using an app; they are directing an agent that understands their goals, their constraints, and their “Art.”

“An AI that can’t explain its ‘Why’ will eventually be ignored by the people who know ‘How.’ Causal AI is the key to moving from temporary adoption to permanent innovation.”

V. Scaling the Innovation Bonfire: From Pilot to Organizational Agility

The final challenge of any innovation isn’t the spark; it’s the sustainment. Too often, co-creation is treated as a “one-off” workshop. To truly scale, we must take the lessons from our frontline stakeholders and feed them back into the organizational furnace. This is how we move from a single pilot to what I call the “Innovation Bonfire” — a self-sustaining culture of continuous improvement.

Avoiding the “Pilot Trap”

Many AI initiatives die in “Pilot Purgatory” because they fail to account for the Systemic Friction of a full-scale rollout. Scaling requires moving from a specialized co-creation group to a broader “Modular Wings” approach across the enterprise. We must ensure that the insights gained from one department (e.g., Supply Chain) are translated into reusable components for another (e.g., R&D Project Management).

  • Internal Advocacy: Empowering your original co-creators to act as “Innovation Ambassadors.” Their peers are more likely to trust a tool recommended by a colleague than one mandated by IT.
  • Feedback Loops: Implementing automated mechanisms where frontline users can “vote” on AI suggestions or flag hallucinations in real-time.
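
One lightweight way to implement that real-time voting, sketched here under assumed field names, is a small feedback event attached to every AI suggestion:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Verdict(Enum):
    HELPFUL = "helpful"
    UNHELPFUL = "unhelpful"
    HALLUCINATION = "hallucination"  # flagged as invented or factually wrong

@dataclass
class SuggestionFeedback:
    """A one-tap vote a frontline user can attach to any AI suggestion in real time."""
    suggestion_id: str
    user_id: str
    verdict: Verdict
    note: str = ""  # optional context, e.g. "this spike was a local festival, not a trend"
    created_at: datetime = field(default_factory=datetime.utcnow)
```

Aggregated across teams, these votes become the scaling signal: which agents have earned trust, and which are quietly being bypassed.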

The Flywheel of Continuous Learning

Innovation is not a destination; it’s a cycle. As the AI handles more of the “Science” (the repetitive, high-rigor tasks), the frontline stakeholders have more bandwidth for the “Art” (the complex, high-empathy tasks). This creates a Flywheel Effect:

  1. Release: The AI releases human capacity by removing friction.
  2. Reinvest: Humans reinvest that capacity into solving higher-order problems.
  3. Refine: Those new solutions provide fresh data and “Ground Truth” to further refine the AI.

Maintaining the “Human-Centered” Spark at Scale

As you scale, the temptation is to “standardize” everything until the “Art” is squeezed out. This is a mistake. Organizational Agility depends on your ability to maintain that Stable Spine of core processes while allowing different teams the autonomy to adapt the AI to their unique workflows.

We must continuously ask: “Is this technology still serving the human, or have we started serving the technology?” Revisiting your Experience Level Measures (XLMs) quarterly ensures that the innovation remains grounded in actual human value rather than just technical efficiency.

The Outcome: An Agentic Organization

An organization that masters co-creation doesn’t just “use AI.” It becomes an Agentic Organization — a living system where humans and machines are seamlessly integrated, each playing to their strengths. The “Science” of the AI provides the scale, but the “Art” of your people provides the competitive advantage. That is how you win in a world of constant change.

“To scale an innovation bonfire, you don’t just need more fuel; you need more oxygen. In an organization, that oxygen is the trust, empathy, and agency of your frontline people.” — Braden Kelley

Conclusion: Leading the Agentic Revolution with Empathy

The journey from top-down implementation to bottom-up co-creation is the defining shift of the current technological era. As we have explored, successfully integrating AI into the fabric of an organization is not merely a technical hurdle — it is a human-centered design challenge. When we balance the Science of algorithmic rigor with the Art of human empathy, we don’t just “deploy software”; we empower a workforce.

The Human-Centered Dividend

By prioritizing the “Stable Spine” of trust and focusing on Experience Level Measures (XLMs), organizations can unlock a level of agility that was previously impossible. The dividend of this approach is twofold:

  • Operational Resilience: Systems built on the “Ground Truth” of frontline expertise are inherently more robust and adaptable to market shifts.
  • Human Flourishing: By removing “soul-crushing” friction, we allow our people to return to the work they were meant to do — creative problem solving, strategic thinking, and high-empathy customer connection.

A Call to Action for Innovation Leaders

The Innovation Bonfire is waiting to be lit, but it requires leaders who are brave enough to share the matches. If you are ready to move beyond the “Black Box” and start co-creating with your most valuable asset — your people — start with these three steps:

  1. Audit the Friction: Use XLMs to find where your frontline is currently being throttled.
  2. Invite the Experts: Bring the people who do the work into the design room before the technology is finalized.
  3. Design for “Why”: Prioritize causal clarity over simple correlation to build a “Glass Box” culture.

Final Thought

In a world increasingly dominated by Agentic AI, the ultimate competitive advantage isn’t the code you own; it’s the Human-AI Synergy you cultivate. Innovation is, and always has been, a team sport. Your most important teammates are already on your payroll, waiting to help you build the future.

“We shape our tools, and thereafter our tools shape us. Let us ensure we shape our AI with enough heart to make the future a place where humans truly belong.” — Braden Kelley

Continue the Conversation

Are you ready to audit your organization’s Customer Experience or develop a Human-Centered AI Strategy? Let’s work together to turn your innovation friction into a scalable bonfire.

Contact: Book an advisory session

Frequently Asked Questions

To help both human readers and search engines better understand the core concepts of co-creating AI, I’ve prepared this brief FAQ. The same questions and answers can also be published as JSON-LD structured data so that “answer engines” index this content accurately.

1. What is the difference between a KPI and an XLM in AI implementation?

While a Key Performance Indicator (KPI) measures the “What” (output, speed, efficiency), an Experience Level Measure (XLM) measures the “How” (the human experience of the process). In AI, XLMs track things like cognitive load and emotional friction to ensure the technology is actually helping people, not just making a broken process faster.

2. Why is “Causal AI” important for frontline stakeholders?

Standard AI often shows correlations, but Causal AI explains the logic or “Why” behind a suggestion. For frontline workers, understanding the intent and cause of an AI recommendation builds trust and allows them to apply their own contextual expertise — the “Art” — to the AI’s “Science.”

3. How does the “Stable Spine” framework assist with AI adoption?

The Stable Spine represents the rigid core of trust, safety, and transparency within an organization. By establishing this foundation first, leaders provide the security employees need to experiment with the “Modular Wings” — the flexible, innovative applications of AI that can change and adapt over time.
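
For readers who want to publish the FAQ as structured data, here is a minimal sketch that assembles a schema.org FAQPage object from the three questions above and prints it as JSON-LD. The answer strings are abbreviated for brevity and should be replaced with the full text when published.

```python
import json

faq = [
    ("What is the difference between a KPI and an XLM in AI implementation?",
     "A KPI measures the 'what' (output, speed, efficiency); an XLM measures the 'how' "
     "(cognitive load, emotional friction) for the human doing the work."),
    ("Why is 'Causal AI' important for frontline stakeholders?",
     "It explains the 'why' behind a suggestion, which builds trust and lets workers "
     "apply their own contextual expertise."),
    ("How does the 'Stable Spine' framework assist with AI adoption?",
     "A rigid core of trust, safety, and transparency gives employees the security to "
     "experiment with the flexible 'Modular Wings' of AI."),
]

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq
    ],
}

# Emit the JSON-LD block that would sit inside a <script type="application/ld+json"> tag.
print(json.dumps(faq_page, indent=2))
```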

Image credit: Google Gemini
