
What's Under the Hood

The core technologies behind AI - no prior knowledge required

What even is AI?

At its core, AI is software that finds patterns in data and uses those patterns to make predictions. When you type "The sky is..." and it predicts "blue" - that's pattern matching from billions of examples it learned from.

The Evolution: How We Got Here

Click each era to understand what changed and why it matters:

AI Evolution Timeline
1. Neural Networks (1950s-2010s)
2. Transformers (2017)
3. LLMs (2020s)
4. Agents (2024+)

Step 1: How AI Reads Text (Tokenization)

Here's a key insight: AI can't read words like you do. It converts text into numbers called "tokens." Think of tokens as puzzle pieces the AI uses to understand language.

Tokenizer Demo
Tokens (what the model sees)
· = space
Spaces are tokens too! "·my" means the space is attached to that token.
Case matters
"Hello" and "hello" are different tokens with different IDs.
Rare words split
Uncommon names like "austin" become "aust" + "in" or similar pieces.
Why this matters: The AI doesn't "understand" words - it processes token IDs. This is why it can struggle with spelling, rare names, and word boundaries. Try typing your name!
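A toy sketch makes this concrete. The vocabulary, token IDs, and greedy longest-match rule below are all invented for illustration - real tokenizers (like BPE) are learned from data - but the behaviors match the demo: spaces attach to tokens, case changes the ID, and rare words split into pieces.

```python
# Toy subword tokenizer - a simplified sketch, NOT a real BPE implementation.
# The vocabulary and IDs are made up for illustration.
VOCAB = {
    "Hello": 1, "hello": 2, "·my": 3, "·name": 4, "·is": 5,
    "·aust": 6, "in": 7, "·": 8,
    # single characters as a fallback so most input can be tokenized
    **{c: 100 + i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")},
}

def tokenize(text):
    """Greedy longest-match tokenization (spaces shown as '·')."""
    text = text.replace(" ", "·")
    tokens = []
    i = 0
    while i < len(text):
        # try the longest substring first, shrinking until a vocab hit
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append((piece, VOCAB[piece]))
                i = j
                break
        else:
            i += 1  # skip characters we can't represent
    return tokens

print(tokenize("Hello my name is austin"))
# [('Hello', 1), ('·my', 3), ('·name', 4), ('·is', 5), ('·aust', 6), ('in', 7)]
```

Note how "austin" has no token of its own, so it splits into "·aust" + "in" - exactly the behavior the demo describes.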

Step 2: The Context Window (AI's Memory)

The context window is like AI's short-term memory - it's how much text the AI can "see" at once. Everything outside the window is forgotten.

Context Window Visualizer
Simulated conversation (drag the slider)
Window Size 8 tokens
What the AI can "see"
Real-world context windows:
• GPT-3.5: ~4,000 tokens (~3,000 words)
• GPT-4: ~128,000 tokens (~100,000 words)
• Claude 3: ~200,000 tokens (~150,000 words)

Why it matters: If a customer conversation is too long, the AI might "forget" important details from the beginning.
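A few lines of Python show why details fall out of view. This is a simplified sketch - real systems count model tokens, but here one word stands in for one token:

```python
# Minimal sketch of context-window truncation.
# Simplification: one word ≈ one token (real models count actual tokens).
def visible_context(conversation, window_size):
    """Return only the most recent `window_size` tokens - everything
    earlier is effectively 'forgotten' by the model."""
    tokens = conversation.split()
    return tokens[-window_size:]

chat = "My order 4512 never arrived and I would like a refund please"
print(visible_context(chat, 8))
# ['arrived', 'and', 'I', 'would', 'like', 'a', 'refund', 'please']
# With an 8-token window, the order number at the start falls out of view
```

This is exactly the failure mode above: the order number was mentioned, but it's outside the window, so the model can't see it.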

Step 3: How AI Generates Responses (Prediction)

Here's the core mechanic: AI generates text one token at a time by predicting "what word is most likely to come next?" It's like autocomplete on your phone, but much more sophisticated.

Next Token Predictor
Input sequence
"The customer said they wanted to _"
Predicted next tokens (by probability)
Temperature 0.7

Low = predictable, High = creative/random
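Temperature is just a number that rescales the model's scores before they become probabilities. The scores below are made up for illustration - a real model produces one score per vocabulary token:

```python
import math

# Sketch of temperature-scaled next-token probabilities (softmax).
# The logits below are invented; a real model scores every vocab token.
def next_token_probs(logits, temperature):
    scaled = {tok: score / temperature for tok, score in logits.items()}
    biggest = max(scaled.values())
    # subtract the max for numerical stability, then normalize
    exps = {tok: math.exp(s - biggest) for tok, s in scaled.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

logits = {"return": 2.0, "cancel": 1.5, "upgrade": 0.5}

low = next_token_probs(logits, temperature=0.2)   # sharp: top token dominates
high = next_token_probs(logits, temperature=2.0)  # flat: more random choices
print(low)
print(high)
```

At low temperature, "return" takes nearly all the probability mass (predictable); at high temperature, the distribution flattens and less likely tokens get real chances (creative/random).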

Step 4: How AI Understands Context (Attention)

The breakthrough that made modern AI possible: attention. When processing a word, the AI "looks at" all other words to understand meaning. For example, in "I need help tracking my order" - the word "order" pays attention to "tracking" to understand the context.

Attention Weights

Darker purple = stronger attention. Notice how "order" strongly attends to "tracking" - the AI is connecting these concepts.

Why this matters: Attention is how AI understands that "bank" means different things in "river bank" vs "bank account" - by looking at surrounding words.
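Here's a toy version of that computation. The embedding vectors are invented - real models learn them during training - but the mechanics are the same: dot products between word vectors become attention weights:

```python
import math

# Toy attention sketch with tiny made-up embedding vectors. The numbers
# are chosen only to illustrate that "order" attends most to "tracking".
embeddings = {
    "I":        [0.1, 0.0, 0.2],
    "need":     [0.2, 0.1, 0.0],
    "help":     [0.3, 0.2, 0.1],
    "tracking": [0.9, 0.8, 0.1],
    "my":       [0.1, 0.1, 0.1],
    "order":    [0.8, 0.9, 0.2],
}

def attention_weights(query_word):
    """Scaled dot-product attention from one word to every word."""
    q = embeddings[query_word]
    scores = {w: sum(a * b for a, b in zip(q, v)) / math.sqrt(len(q))
              for w, v in embeddings.items()}
    total = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / total for w, s in scores.items()}

weights = attention_weights("order")
others = {w: p for w, p in weights.items() if w != "order"}
print(max(others, key=others.get))  # -> tracking
```

"tracking" wins because its vector points in a similar direction to "order" - that similarity is what the dot product measures, and what attention turns into "these words belong together."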

Why AI "Hallucinates"

Key insight: AI doesn't "know" facts - it predicts statistically likely text.

When it confidently states something false, it's because that pattern was statistically common in its training data, or it's interpolating between patterns. This is why grounding AI in real-time data (like customer databases) is so important in CX.

From Chat to Agent

How we went from chatbots to autonomous systems

Chatbot vs Agent: What's the Difference?

Chatbot: Responds to your message, then waits. It's reactive.

Agent: Receives your request, then autonomously takes multiple steps to complete the task. It can use tools, check databases, and loop until the job is done.

The Agent Loop

Every AI agent runs on this continuous cycle:

💡 Think → Act → 👁 Observe → Done? If yes, respond; if not, loop back to Think.
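The loop can be sketched in a few lines of Python. The tool, the order ID, and the decision logic are invented stand-ins - a real agent asks an LLM to decide each step:

```python
# Toy Think → Act → Observe loop. lookup_order, the order ID "4512", and
# the decision rule are all invented for this demo; a real agent uses an
# LLM to choose its next action.
def lookup_order(order_id):
    return {"id": order_id, "status": "shipped", "eta": "Friday"}

def agent_loop(request):
    memory = [request]
    while True:
        # Think: do we already have the data we need?
        have_data = any(isinstance(m, dict) for m in memory)
        if not have_data:
            # Act: call a tool...
            observation = lookup_order("4512")  # ID hardcoded for the demo
            # ...Observe: add the result to memory, then loop again
            memory.append(observation)
        else:
            # Done: enough information to answer
            order = memory[-1]
            return f"Order {order['id']} is {order['status']}, arriving {order['eta']}."

print(agent_loop("Where's my order?"))
# Order 4512 is shipped, arriving Friday.
```

The key difference from a chatbot is the `while True`: the agent keeps acting and observing until it decides the job is done.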

Agent Loop Example: "Where's my order?"

Click through each step to see how an agent handles this common request:

Agent Loop Walkthrough

Interactive: Watch an Agent Work in Real-Time

Chat with a simulated agent and watch the backend as it processes your request:

Agent Simulator
Order Status
Process Return
Get Recommendation
Customer Chat
Select a scenario and type a message to see the agent in action
Backend (What's happening)
System
Agent initialized. Waiting for input...

The Agentic Stack

Every agent system is built from layers. Think of it like a cake - each layer has a job. Click each layer to learn more:

Your Application
The user-facing product

What it is: The interface customers actually see and use - a chat widget on your website, a voice assistant on the phone, or an email handler.

Example: When a customer clicks "Chat with us", that chat window is the application layer.

Agent SDK
The engine that runs the loop

What it is: The "brain infrastructure" - software that handles the Think → Act → Observe loop we just covered.

Why it matters: Without an SDK, developers would have to build all that loop logic from scratch. SDKs make building agents dramatically faster.

Examples: Claude Agent SDK (Anthropic), Agents SDK (OpenAI)

Hooks
The guardrails

What it is: Automatic rules that ALWAYS run at certain moments - before the agent uses a tool, after it finishes, etc.

Why it matters: AI is probabilistic - it might forget to do something 15% of the time. Hooks guarantee certain behaviors happen 100% of the time.

Example: "Every time the agent writes to the database, log it for compliance" - this hook runs every single time, no exceptions.

Analogy: Self-driving cars make decisions, but they always stop at red lights. Hooks are red lights for AI.
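In code, a hook is just a wrapper that runs before and after every tool call - plain software, not model behavior. A minimal sketch (tool and hook names invented):

```python
# Sketch of pre/post-tool hooks. Because the hooks are ordinary code
# wrapped around the tool, they run on every call - 100% of the time -
# regardless of what the model does or forgets.
audit_log = []

def with_hooks(tool_name, tool_fn):
    """Wrap a tool so the hooks fire on every single call."""
    def wrapped(*args, **kwargs):
        audit_log.append(f"BEFORE {tool_name}")   # pre-tool hook
        result = tool_fn(*args, **kwargs)
        audit_log.append(f"AFTER {tool_name}")    # post-tool hook
        return result
    return wrapped

write_db = with_hooks("write_db", lambda record: "saved")
write_db({"refund": 25})
write_db({"refund": 40})
print(audit_log)  # four entries - both calls were logged, no exceptions
```

That's the whole trick: the compliance log doesn't depend on the model "remembering" to log, because logging happens outside the model.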
Skills
The playbooks

What it is: Pre-packaged instructions for specific tasks. Instead of explaining your return policy every time, you create a "Returns" skill the AI loads when relevant.

Why it matters: Skills keep the AI focused. It doesn't load everything it knows - just what's relevant to the current task.

Example: A "Process Return" skill might include: return policy rules, required customer data, approved refund methods, and escalation triggers.
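Skill loading can be as simple as matching triggers in the message and injecting only the relevant instructions into the prompt. A sketch with invented skill names and triggers:

```python
# Toy dynamic skill loading. The skill names, triggers, and instruction
# text are invented for illustration.
SKILLS = {
    "returns": {
        "triggers": ["return", "refund", "send back"],
        "instructions": "Check the 30-day policy and verify proof of purchase.",
    },
    "shipping": {
        "triggers": ["where", "tracking", "delivery"],
        "instructions": "Look up the order and report status and ETA.",
    },
}

def load_skills(message):
    """Return only the skills whose triggers appear in the message."""
    text = message.lower()
    return [name for name, skill in SKILLS.items()
            if any(t in text for t in skill["triggers"])]

print(load_skills("I want to return these shoes"))  # ['returns']
print(load_skills("Where is my delivery?"))         # ['shipping']
```

Only the matched skill's instructions get loaded into context - the AI stays focused and doesn't carry every playbook into every conversation.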

MCP (Model Context Protocol)
Connections to tools

What it is: A universal standard for connecting AI to external tools and databases. Build one connection, and any AI that speaks MCP can use it.

Why it matters: Before MCP, connecting AI to Salesforce required custom code. Connecting to Shopify? Different custom code. MCP standardizes all of this.

The numbers: 97M monthly SDK downloads. Supported by Claude, ChatGPT, Gemini, Copilot. Donated to Linux Foundation Dec 2025.

Analogy: Remember when every phone needed a different charger? MCP is USB-C for AI integrations.
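To make the idea concrete, here's a conceptual sketch - not the actual MCP SDK or wire format - of the pattern MCP standardizes: tools described in one uniform shape that any client can first discover, then call:

```python
# Conceptual sketch only - NOT the real MCP SDK. It illustrates the core
# idea: every tool is exposed through the same discover-then-call shape.
class ToolServer:
    def __init__(self):
        self.tools = {}

    def register(self, name, description, fn):
        self.tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        # step 1: a client asks "what can you do?"
        return [{"name": n, "description": t["description"]}
                for n, t in self.tools.items()]

    def call(self, name, **kwargs):
        # step 2: the client invokes a tool by name with structured args
        return self.tools[name]["fn"](**kwargs)

server = ToolServer()
server.register("get_order", "Look up an order by ID",
                lambda order_id: {"id": order_id, "status": "shipped"})

print(server.list_tools())
print(server.call("get_order", order_id="4512"))
```

Because every server speaks the same discover/call shape, a client written once can use any of them - that's the "USB-C" property.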
A2A (Agent-to-Agent Protocol)
Connections to other agents

What it is: A standard for AI agents to talk to each other. One agent can delegate tasks to specialists.

Why it matters: Imagine a "Travel Planner" agent that coordinates with a Flight agent, Hotel agent, and Budget agent - each specialized, working together.

Example: Customer asks "Plan my Tokyo trip" โ†’ Primary agent coordinates with flight booking agent, hotel agent, and tour agent โ†’ assembles complete itinerary.

Analogy: MCP is plugging your laptop into a power outlet. A2A is your laptop joining a video call with other computers.
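Here's a conceptual sketch of the delegation pattern - not the real A2A protocol. Each "agent" is a stand-in function; in practice each would be its own LLM-backed service exchanging structured messages:

```python
# Toy agent-to-agent delegation - NOT the actual A2A protocol. The
# specialist agents and their outputs are invented for illustration.
def flight_agent(task):
    return {"flight": "NRT, Mar 3-10"}

def hotel_agent(task):
    return {"hotel": "Shinjuku, 7 nights"}

SPECIALISTS = {"flights": flight_agent, "hotels": hotel_agent}

def travel_planner(request):
    """Primary agent: delegate sub-tasks, then assemble the itinerary."""
    itinerary = {}
    for name, agent in SPECIALISTS.items():
        itinerary.update(agent(request))
    return itinerary

print(travel_planner("Plan my Tokyo trip"))
```

The primary agent never books anything itself - it coordinates specialists and merges their answers, which is the orchestration pattern described above.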

Deep Dive: See Each Layer in Action

MCP: Before & After

See why MCP matters - toggle between the old way and the new way:

Before MCP
After MCP
A2A: Multi-Agent Orchestration

Scenario: Customer asks "Plan my Tokyo trip" - watch how agents collaborate:

Hooks: Reliability Comparison

Scenario: Agent edits a file - should it auto-format? Run the simulation multiple times:

Without Hooks (~85% reliable)
With Hooks (100% reliable)
Skills: Dynamic Loading

Type a customer message and watch the relevant skill activate:

Agent SDK: What It Handles

The SDK handles the hard infrastructure so developers focus on the application:

Without SDK: Build Everything
With SDK: Focus on Your App
500+ lines → ~20 lines

Same functionality, fraction of the code

Who's Building What

The major players shaping the AI landscape

Quick Context: What's a "Foundation Model"?

Foundation models are the massive AI systems (like GPT-4, Claude, Gemini) trained on huge datasets. Companies then build applications on top of these foundations. Think of it like: Foundation models = the engine, Applications = the car.

The Foundation Model Providers

These companies build the core AI "engines" that power everything else. Each has distinct strengths:

Foundation Model Landscape
Capability Comparison
Key insight: No single provider "wins" across all dimensions. The choice depends on your priorities: consumer reach (OpenAI), safety/enterprise (Anthropic), cloud integration (Google), distribution (Microsoft), or control/cost (Open Source).

The Governance Layer

AAIF (Agentic AI Foundation)

Founded December 2025 under Linux Foundation. Co-founders: Anthropic, OpenAI, Block.

Members: AWS, Google, Microsoft, Cloudflare, Bloomberg. Governs MCP.

Pattern: TCP/IP, HTML, Kubernetes - the standards that won weren't owned by any one company.

Agentic AI in CX

How AI agents are transforming customer experience

Why CX is the AI Frontier

Customer experience has massive volumes (millions of conversations), clear success metrics (resolution, satisfaction), and direct revenue impact. This makes it the perfect proving ground for AI agents - and why billions are being invested in CX AI.

The Customer Journey

AI agents are appearing at every stage. Click each to explore:

📢
Marketing
💰
Sales
💬
Service
🏆
Success

Marketing Agents

Capabilities: Personalization, content generation, campaign optimization, lead scoring

Examples: AI that writes email sequences, optimizes ad spend, predicts conversion

Sales Agents

Capabilities: Outbound prospecting, lead qualification, meeting scheduling, deal intelligence

Examples: AI that researches prospects, sends personalized outreach, handles scheduling

Service Agents

Capabilities: Ticket resolution, escalation handling, knowledge retrieval, sentiment detection

Examples: AI that resolves returns, updates accounts, answers product questions autonomously

Success Agents

Capabilities: Onboarding, proactive outreach, churn prediction, renewal automation

Examples: AI that detects at-risk customers, triggers retention plays, identifies upsells

Interactive: Deflection vs Resolution

Key industry terms:

Deflection: Pushing customers away from expensive channels (agents) to cheaper ones (FAQ, phone tree). Success = "they stopped asking."

Resolution: Actually solving the customer's problem. Success = "their issue is fixed."

Watch the same scenario play out with each approach:

Compare Approaches
Deflection Bot
Resolution Agent
Customer Conversation
What's Happening

🎯 The CX Competitive Landscape

Understanding who's building what - and why it matters for Gladly.

๐Ÿ“ The Framework: Three Approaches to CX AI

🌱
Build Native
Start fresh with AI at the core. No legacy constraints.
🚀
Transform Platform
Evolve an established platform with AI-first capabilities.
📦
Bolt-On
Add AI features to existing ticket-based systems.
🔥 Disruptors

AI-Native

Purpose-built from day one for autonomous resolution. No legacy architecture. Multi-channel native.

Sierra
$10B
Sep 2025 · $100M ARR
Decagon
$4-5B
Talks Nov 2025 · $35M ARR
⚡ Innovators

AI-First Platform

AI agents built on established CX platforms with proven customer relationships and data.

Intercom Fin 3
$0.99/resolution
Voice · Email · Chat · Social
Gladly
People, Not Tickets
AI agents across full journey: pre-purchase → conversion → post-purchase → loyalty
Retail-native · Agent + Helpdesk unified · AI tooling for human agents
Full Journey AI + Human 300M+ Conversations
🏢 Giants

Legacy + AI

Enterprise incumbents adding AI capabilities to traditional ticket-based systems.

Salesforce
Agentforce 360
Voice · Multi-model
Zendesk
$200M AI ARR
80% auto-resolution
Kustomer
Meta-owned
AI Add-on

🚀 The Funding Velocity

AI-native startups are raising at unprecedented speed - context worth knowing for every competitive conversation.

Valuation Timeline (2024-2025)

• Sierra: $1B · Feb 2024 (Seed + Series A)
• Sierra: $4.5B · Oct 2024 (Series B · $175M raised)
• Sierra: $10B · Sep 2025 (Series C · $350M raised · $100M ARR · Voice + Chat)
• Decagon: Seed · 2024 (founded, stealth mode)
• Decagon: $1.5B · Jun 2025 (Series C · $131M raised)
• Decagon: $4-5B · Nov 2025 (in talks, not closed)

📋 Strategic Playbooks by Category

How each category approaches winning - and what it means for competition.

🌱 AI-Native Playbook

  • Speed to market - Ship fast, iterate faster
  • Vertical focus - Deep in specific industries
  • Outcome pricing - Per-resolution economics
  • Voice-enabled - Sierra pioneered voice AI agents
Watch for: Enterprise deals, vertical expansion

🚀 Platform Playbook

  • Customer context - Leverage existing data
  • Unified experience - AI + human seamless
  • Trust & relationships - Known vendor, lower risk
  • Full stack - Not just AI, complete solution
Gladly's edge: Customer-centric, not ticket-centric

📦 Bolt-On Playbook

  • Enterprise distribution - Already deployed
  • Feature checkbox - "We have AI too"
  • Multi-model - GPT, Gemini, Claude options
  • Gradual migration - Low disruption path
Limitation: Ticket architecture constrains AI potential

📈 Macro Trends Shaping CX AI

The bigger picture - what's driving the market and where it's headed.

Design for Devotion, Not Deflection

The Gladly approach to customer experience AI

The "AND, Not OR" Principle

Efficiency and cost savings are essential, but only one dimension. The best CX AI delivers operational efficiency AND lasting customer value.

Interactive: The Architecture Difference

See how the same customer interaction differs:

Architecture Comparison
Ticket-Based System
Customer-Centered (Gladly)
Customer: Sarah
System View

Key Differentiators

👤

Customer at the Center

Designed around people, not tickets. Every conversation builds on complete customer history - because customers aren't case numbers.

People-First Architecture
🔗

Unified Platform

Comprehensive context from all systems, surfaced automatically in every experience.

📅

A Decade of CX Focus

10+ years and 300M+ conversations of expertise built into the platform.

🔁

True Omnichannel

One continuous thread across voice, chat, SMS, email, and social.

🤝

Seamless Handoffs

Imperceptible AI-to-human transitions. Customers never repeat themselves.

📈

CX as Strategic Partner

Reporting that positions CX as a business driver, not a cost center.

Simply Powerful

An intuitive platform built for CX professionals, not IT departments.

✨ Explore Interactive Deep Dive →

See each differentiator in action with interactive demos

The Measurement Shift

Efficiency Metrics (Essential)   | Devotion Metrics (Also Essential)
Resolution rate                  | Customer satisfaction (CSAT)
Cost per contact                 | Customer effort score (CES)
Handle time                      | Repeat purchase rate
First contact resolution         | Customer lifetime value (LTV)

โญ The North Star Framework

How we position Gladly's approach to AI-powered CX:

๐Ÿ”๏ธ
The Mountain
Customer Devotion
What we're climbing toward
๐Ÿ‘น
The Villain
Deflection Bots
& ticket-based systems
โš ๏ธ
The Problem
One-Dimensional AI
Success = only cost savings
๐Ÿ’œ
The Solution
Gladly
Efficiency today + LTV always

🎤 The 30-Second Pitch


Every business needs efficiency - that's AI table stakes. But most AI stops there, optimizing for deflection and leaving long-term value on the table.

Gladly is the only customer experience AI that delivers the cost savings you need AND the customer devotion that drives lasting business value.

It's not either/or. It's both.

Questions?

Let's discuss what this means for our team and customers.
