What's Under the Hood
The core technologies behind AI - no prior knowledge required
What even is AI?
At its core, AI is software that finds patterns in data and uses those patterns to make predictions. When you type "The sky is..." and it predicts "blue" - that's pattern matching from billions of examples it learned from.
The Evolution: How We Got Here
Click each era to understand what changed and why it matters:
Step 1: How AI Reads Text (Tokenization)
Here's a key insight: AI can't read words like you do. It converts text into numbers called "tokens." Think of tokens as puzzle pieces the AI uses to understand language.
Spaces are tokens too! "·my" means the space is attached to that token.
"Hello" and "hello" are different tokens with different IDs.
Uncommon names like "austin" become "aust" + "in" or similar pieces.
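The rules above can be sketched with a toy tokenizer. Real tokenizers (like BPE) learn their vocabularies from data; the tiny vocabulary below is invented purely for illustration, using greedy longest-match to split text into pieces:

```python
# Toy subword tokenizer: greedy longest-match against a tiny, hand-made
# vocabulary. Note "Hello" and "hello" get different IDs, spaces are
# attached to pieces, and "austin" splits into " aust" + "in".
VOCAB = {"Hello": 1, "hello": 2, " my": 3, " name": 4, " is": 5,
         " aust": 6, "in": 7}

def tokenize(text):
    tokens = []
    while text:
        # Try the longest prefix of the remaining text that is in the vocab.
        for length in range(len(text), 0, -1):
            piece = text[:length]
            if piece in VOCAB:
                tokens.append((piece, VOCAB[piece]))
                text = text[length:]
                break
        else:
            raise ValueError(f"no token for: {text!r}")
    return tokens

print(tokenize("Hello my name is austin"))
```

Running this splits the sentence into six pieces, with the uncommon name broken into two sub-pieces, just as described above.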
Step 2: The Context Window (AI's Memory)
The context window is like AI's short-term memory - it's how much text the AI can "see" at once. Everything outside the window is forgotten.
• GPT-3.5: ~4,000 tokens (~3,000 words)
• GPT-4 Turbo: ~128,000 tokens (~100,000 words)
• Claude 3: ~200,000 tokens (~150,000 words)
Why it matters: If a customer conversation is too long, the AI might "forget" important details from the beginning.
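That "forgetting" can be sketched as a sliding window over the conversation. The word-count tokenizer below is a crude stand-in for a real one, and the token budget is invented for illustration:

```python
# Sketch: keep only the most recent messages that fit in a token budget.
# Counting words stands in for real tokenization here.

def count_tokens(message):
    return len(message.split())

def trim_to_window(messages, max_tokens):
    """Drop the oldest messages until the rest fit the context window."""
    kept = []
    total = 0
    for message in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(message)
        if total + cost > max_tokens:
            break  # everything older than this is "forgotten"
        kept.append(message)
        total += cost
    return list(reversed(kept))

history = ["I ordered shoes last week", "They have not arrived",
           "Order number is 12345", "Can you check the status?"]
print(trim_to_window(history, max_tokens=10))
```

With a 10-token budget, only the two newest messages survive; the start of the conversation falls outside the window.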
Step 3: How AI Generates Responses (Prediction)
Here's the core mechanic: AI generates text one token at a time by predicting "what word is most likely to come next?" It's like autocomplete on your phone, but much more sophisticated.
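A minimal sketch of that mechanic: a table of "given this word, how likely is each next word", walked one token at a time. Real models learn these probabilities from billions of examples; the numbers below are invented for illustration:

```python
# Toy next-token predictor with hand-written probabilities.
NEXT_WORD = {
    "The": {"sky": 0.4, "cat": 0.3, "end": 0.3},
    "sky": {"is": 0.9, "was": 0.1},
    "is":  {"blue": 0.7, "falling": 0.3},
}

def generate(start, steps):
    words = [start]
    for _ in range(steps):
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break
        # Greedy decoding: always take the single most likely next token.
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate("The", 3))  # The sky is blue
```

Real models also sample from the distribution instead of always taking the top choice, which is why the same prompt can produce different answers.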
Step 4: How AI Understands Context (Attention)
The breakthrough that made modern AI possible: attention. When processing a word, the AI "looks at" all other words to understand meaning. For example, in "I need help tracking my order" - the word "order" pays attention to "tracking" to understand the context.
Darker purple = stronger attention. Notice how "order" strongly attends to "tracking" - the AI is connecting these concepts.
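The core arithmetic of attention is small enough to sketch: compare a word's vector against every other word's vector, then turn the similarity scores into weights with a softmax. The vectors below are invented; real models learn them during training:

```python
import math

# Toy dot-product attention over made-up word vectors.
vectors = {
    "tracking": [1.0, 0.2],
    "my":       [0.1, 0.1],
    "order":    [0.9, 0.3],
}

def attention_weights(query_word):
    query = vectors[query_word]
    # Dot product: how similar the query word is to each other word.
    scores = {w: sum(q * v for q, v in zip(query, vec))
              for w, vec in vectors.items()}
    # Softmax turns raw scores into positive weights that sum to 1.
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

weights = attention_weights("order")
print(weights)  # "order" attends far more to "tracking" than to "my"
```

Because the "order" and "tracking" vectors point in similar directions, "tracking" gets most of the attention weight, which is exactly the connection the heatmap visualizes.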
Why AI "Hallucinates"
Key insight: AI doesn't "know" facts - it predicts statistically likely text.
When it confidently states something false, it's because that pattern was statistically common in its training data, or it's interpolating between patterns. This is why grounding AI in real-time data (like customer databases) is so important in CX.
From Chat to Agent
How we went from chatbots to autonomous systems
Chatbot vs Agent: What's the Difference?
Chatbot: Responds to your message, then waits. It's reactive.
Agent: Receives your request, then autonomously takes multiple steps to complete the task. It can use tools, check databases, and loop until the job is done.
The Agent Loop
Every AI agent runs on this continuous cycle:
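The Think → Act → Observe cycle can be sketched in a few lines. Everything below is a hypothetical stand-in: a real agent would call a model to decide and real APIs to act.

```python
# Minimal sketch of an agent loop: Think -> Act -> Observe, repeated
# until the task is done or a safety cap is hit.

def lookup_order(order_id):
    # Stand-in for a real order-tracking API call.
    return {"id": order_id, "status": "shipped", "eta": "Friday"}

def run_agent(request):
    observations = []
    for _ in range(5):  # safety cap on loop iterations
        # Think: decide the next step from what we have observed so far.
        if not observations:
            action = ("lookup_order", "12345")  # hard-coded for illustration
        else:
            latest = observations[-1]
            return f"Your order is {latest['status']}, arriving {latest['eta']}."
        # Act: call the chosen tool, then Observe the result.
        tool, arg = action
        observations.append(lookup_order(arg))
    return "Escalating to a human agent."

print(run_agent("Where's my order?"))
```

The key property is the loop itself: the agent keeps acting and observing until it can answer, rather than replying once and waiting.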
Agent Loop Example: "Where's my order?"
Click through each step to see how an agent handles this common request:
Interactive: Watch an Agent Work in Real-Time
Chat with a simulated agent and watch the backend as it processes your request:
The Agentic Stack
Every agent system is built from layers. Think of it like a cake - each layer has a job. Click each layer to learn more:
What it is: The interface customers actually see and use - a chat widget on your website, a voice assistant on the phone, or an email handler.
Example: When a customer clicks "Chat with us", that chat window is the application layer.
What it is: The "brain infrastructure" - software that handles the Think → Act → Observe loop we just covered.
Why it matters: Without an SDK, developers would have to build all that loop logic from scratch. SDKs make building agents dramatically faster.
Examples: Claude Agent SDK (Anthropic), Agents SDK (OpenAI)
What it is: Automatic rules that ALWAYS run at certain moments - before the agent uses a tool, after it finishes, etc.
Why it matters: AI is probabilistic - a model may occasionally skip a step it was asked to take. Hooks guarantee certain behaviors happen 100% of the time.
Example: "Every time the agent writes to the database, log it for compliance" - this hook runs every single time, no exceptions.
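The compliance-logging example can be sketched as a tiny hook registry. The event names and tool shapes below are invented for illustration, not any specific SDK's API:

```python
# Sketch of deterministic hooks: callbacks registered to fire every time
# a tool runs, no matter what the model decides.
audit_log = []
HOOKS = {"after_tool": []}

def on(event):
    def register(func):
        HOOKS[event].append(func)
        return func
    return register

@on("after_tool")
def log_for_compliance(tool_name, args):
    audit_log.append(f"{tool_name} called with {args}")

def run_tool(tool_name, func, *args):
    result = func(*args)
    # Hooks run unconditionally -- 100% of the time, not probabilistically.
    for hook in HOOKS["after_tool"]:
        hook(tool_name, args)
    return result

run_tool("write_database", lambda record: f"saved {record}", "refund #42")
print(audit_log)
```

Because the hook lives in ordinary code around the tool call, it runs on every invocation - the guarantee comes from the harness, not from the model remembering.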
What it is: Pre-packaged instructions for specific tasks. Instead of explaining your return policy every time, you create a "Returns" skill the AI loads when relevant.
Why it matters: Skills keep the AI focused. It doesn't load everything it knows - just what's relevant to the current task.
Example: A "Process Return" skill might include: return policy rules, required customer data, approved refund methods, and escalation triggers.
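Skill selection can be sketched as "load only what's relevant". Keyword matching below stands in for the real relevance check, and the skill contents are invented:

```python
# Sketch: pick the one skill relevant to the current message, instead of
# loading every policy into the prompt at once.
SKILLS = {
    "returns": {
        "keywords": ["return", "refund", "send back"],
        "instructions": "Check the 30-day policy, verify the order, "
                        "offer approved refund methods, escalate disputes.",
    },
    "shipping": {
        "keywords": ["where", "tracking", "delivery"],
        "instructions": "Look up the order and report carrier status.",
    },
}

def select_skill(message):
    text = message.lower()
    for name, skill in SKILLS.items():
        if any(kw in text for kw in skill["keywords"]):
            return name, skill["instructions"]
    return None, "No skill loaded; answer from general instructions."

print(select_skill("I want to return these shoes"))
```

Only the "Returns" instructions get loaded for a returns question, keeping the context window focused on the task at hand.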
What it is: A universal standard for connecting AI to external tools and databases. Build one connection, and any AI that speaks MCP can use it.
Why it matters: Before MCP, connecting AI to Salesforce required custom code. Connecting to Shopify? Different custom code. MCP standardizes all of this.
The numbers: 97M monthly SDK downloads. Supported by Claude, ChatGPT, Gemini, Copilot. Donated to Linux Foundation Dec 2025.
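The idea behind a standardized tool protocol can be sketched like this: every backend exposes the same two operations (list tools, call a tool), so one client works against any of them. This mirrors the spirit of MCP but is not the real protocol or any real SDK:

```python
# Two unrelated backends exposing the same minimal interface shape.
class CRMServer:
    def list_tools(self):
        return [{"name": "get_customer", "args": ["email"]}]

    def call_tool(self, name, **kwargs):
        if name == "get_customer":
            return {"email": kwargs["email"], "tier": "gold"}
        raise ValueError(f"unknown tool: {name}")

class ShopServer:
    def list_tools(self):
        return [{"name": "get_order", "args": ["order_id"]}]

    def call_tool(self, name, **kwargs):
        if name == "get_order":
            return {"order_id": kwargs["order_id"], "status": "shipped"}
        raise ValueError(f"unknown tool: {name}")

def describe(server):
    # The client needs no server-specific code -- just the shared shape.
    return [tool["name"] for tool in server.list_tools()]

for server in (CRMServer(), ShopServer()):
    print(describe(server))
```

That shared shape is the whole point: build the connector once, and any client that speaks the standard can discover and call the tools.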
What it is: A standard for AI agents to talk to each other. One agent can delegate tasks to specialists.
Why it matters: Imagine a "Travel Planner" agent that coordinates with a Flight agent, Hotel agent, and Budget agent - each specialized, working together.
Example: Customer asks "Plan my Tokyo trip" → Primary agent coordinates with flight booking agent, hotel agent, and tour agent → assembles complete itinerary.
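The Tokyo-trip example can be sketched as a primary agent delegating to specialists. The specialist agents below just return canned data for illustration:

```python
# Sketch of agent-to-agent delegation: a primary agent farms subtasks out
# to specialists and assembles the results into one answer.

def flight_agent(task):
    return "Flight: NRT arrival, Mon 9:00"

def hotel_agent(task):
    return "Hotel: Shinjuku, 4 nights"

def tour_agent(task):
    return "Tour: Tsukiji food walk, Tue"

SPECIALISTS = {"flights": flight_agent, "hotel": hotel_agent, "tours": tour_agent}

def plan_trip(request):
    # The primary agent decides which specialists are needed, delegates,
    # then assembles the complete itinerary.
    itinerary = [SPECIALISTS[name](request) for name in ("flights", "hotel", "tours")]
    return "\n".join(itinerary)

print(plan_trip("Plan my Tokyo trip"))
```

Each specialist stays narrow and good at one thing; the primary agent's job is routing and assembly.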
Deep Dive: See Each Layer in Action
See why MCP matters - toggle between the old way and the new way:
Scenario: Customer asks "Plan my Tokyo trip" - watch how agents collaborate:
Scenario: Agent edits a file - should it auto-format? Run the simulation multiple times:
Type a customer message and watch the relevant skill activate:
The SDK handles the hard infrastructure so developers focus on the application:
Same functionality, fraction of the code
Who's Building What
The major players shaping the AI landscape
Quick Context: What's a "Foundation Model"?
Foundation models are the massive AI systems (like GPT-4, Claude, Gemini) trained on huge datasets. Companies then build applications on top of these foundations. Think of it like: Foundation models = the engine, Applications = the car.
The Foundation Model Providers
These companies build the core AI "engines" that power everything else. Each has distinct strengths:
The Governance Layer
AAIF (Agentic AI Foundation)
Founded December 2025 under Linux Foundation. Co-founders: Anthropic, OpenAI, Block.
Members: AWS, Google, Microsoft, Cloudflare, Bloomberg. Governs MCP.
Agentic AI in CX
How AI agents are transforming customer experience
Why CX is the AI Frontier
Customer experience has massive volumes (millions of conversations), clear success metrics (resolution, satisfaction), and direct revenue impact. This makes it the perfect proving ground for AI agents - and why billions are being invested in CX AI.
The Customer Journey
AI agents are appearing at every stage. Click each to explore:
Marketing Agents
Capabilities: Personalization, content generation, campaign optimization, lead scoring
Examples: AI that writes email sequences, optimizes ad spend, predicts conversion
Sales Agents
Capabilities: Outbound prospecting, lead qualification, meeting scheduling, deal intelligence
Examples: AI that researches prospects, sends personalized outreach, handles scheduling
Service Agents
Capabilities: Ticket resolution, escalation handling, knowledge retrieval, sentiment detection
Examples: AI that resolves returns, updates accounts, answers product questions autonomously
Success Agents
Capabilities: Onboarding, proactive outreach, churn prediction, renewal automation
Examples: AI that detects at-risk customers, triggers retention plays, identifies upsells
Interactive: Deflection vs Resolution
Key industry terms:
● Deflection: Pushing customers away from expensive channels (human agents) to cheaper ones (FAQ, phone tree). Success = "they stopped asking."
● Resolution: Actually solving the customer's problem. Success = "their issue is fixed."
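The gap between the two metrics is easy to show with arithmetic. The contact log below is invented for illustration:

```python
# The same four contacts scored two ways. Deflection counts any contact
# that never reached a human; resolution counts contacts where the
# customer's issue was actually fixed.
contacts = [
    {"reached_human": False, "issue_fixed": True},
    {"reached_human": False, "issue_fixed": False},  # bounced to FAQ, gave up
    {"reached_human": True,  "issue_fixed": True},
    {"reached_human": False, "issue_fixed": False},  # abandoned phone tree
]

deflection_rate = sum(not c["reached_human"] for c in contacts) / len(contacts)
resolution_rate = sum(c["issue_fixed"] for c in contacts) / len(contacts)

print(f"Deflection: {deflection_rate:.0%}")  # Deflection: 75%
print(f"Resolution: {resolution_rate:.0%}")  # Resolution: 50%
```

A 75% deflection rate looks like success, yet half the customers still have unresolved issues - the two metrics can tell opposite stories about the same contacts.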
Watch the same scenario play out with each approach:
The CX Competitive Landscape
Understanding who's building what - and why it matters for Gladly.
AI-Native
Purpose-built from day one for autonomous resolution. No legacy architecture. Multi-channel native.
AI-First Platform
AI agents built on established CX platforms with proven customer relationships and data.
Legacy + AI
Enterprise incumbents adding AI capabilities to traditional ticket-based systems.
The Funding Velocity
AI-native startups are raising at unprecedented speed. Context for every conversation.
$1B · Feb 2024
Seed + Series A
$4.5B · Oct 2024
Series B · $175M raised
$10B · Sep 2025
Series C · $350M raised
$100M ARR · Voice + Chat
Seed · 2024
Founded, stealth mode
$1.5B · Jun 2025
Series C · $131M raised
$4-5B · Nov 2025
In talks · Not closed
Strategic Playbooks by Category
How each category approaches winning - and what it means for competition.
🌱 AI-Native Playbook
- Speed to market - Ship fast, iterate faster
- Vertical focus - Deep in specific industries
- Outcome pricing - Per-resolution economics
- Voice-enabled - Sierra pioneered voice AI agents
🚀 Platform Playbook
- Customer context - Leverage existing data
- Unified experience - AI + human seamless
- Trust & relationships - Known vendor, lower risk
- Full stack - Not just AI, complete solution
📦 Bolt-On Playbook
- Enterprise distribution - Already deployed
- Feature checkbox - "We have AI too"
- Multi-model - GPT, Gemini, Claude options
- Gradual migration - Low disruption path
Macro Trends Shaping CX AI
The bigger picture โ what's driving the market and where it's headed.
The Economics Shift
From: Per-seat licensing (pay for capacity)
To: Per-resolution pricing (pay for outcomes)
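The difference in how the two models charge can be shown with simple arithmetic. The prices below are invented for illustration, not vendor quotes:

```python
# Per-seat: pay for capacity whether or not issues get solved.
# Per-resolution: pay only when an issue is actually resolved.
seats, seat_price = 20, 150                  # 20 licensed seats at $150/month
resolutions, resolution_price = 3000, 0.99   # 3,000 resolved issues at $0.99

per_seat_cost = seats * seat_price                    # fixed cost, any outcome
per_resolution_cost = resolutions * resolution_price  # scales with outcomes

print(per_seat_cost, per_resolution_cost)
```

The structural point is what the number responds to: the per-seat bill is identical whether zero or three thousand issues get resolved, while the per-resolution bill moves with outcomes.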
The Metrics Revolution
Dying: Deflection rate, tickets closed
Rising: Resolution rate, CSAT, LTV impact
Voice is the New Frontier
ElevenLabs partnerships: Deutsche Telekom, Cisco, SharpenCX
All major players now have voice: Sierra, Intercom, Zendesk, Salesforce
Multi-Model is Standard
Salesforce Agentforce: GPT-5, Gemini, Claude
Sierra AgentOS 2.0: Multiple models with supervision layers
Design for Devotion, Not Deflection
The Gladly approach to customer experience AI
The "AND, Not OR" Principle
Efficiency and cost savings are essential, but only one dimension. The best CX AI delivers operational efficiency AND lasting customer value.
Interactive: The Architecture Difference
See how the same customer interaction differs:
Key Differentiators
Customer at the Center
Designed around people, not tickets. Every conversation builds on complete customer history โ because customers aren't case numbers.
People-First Architecture
Unified Platform
Comprehensive context from all systems, surfaced automatically in every experience.
A Decade of CX Focus
10+ years and 300M+ conversations of expertise built into the platform.
True Omnichannel
One continuous thread across voice, chat, SMS, email, and social.
Seamless Handoffs
Imperceptible AI-to-human transitions. Customers never repeat themselves.
CX as Strategic Partner
Reporting that positions CX as a business driver, not a cost center.
Simply Powerful
An intuitive platform built for CX professionals, not IT departments.
See each differentiator in action with interactive demos
The Measurement Shift
| Efficiency Metrics (Essential) | Devotion Metrics (Also Essential) |
|---|---|
| Resolution rate | Customer satisfaction (CSAT) |
| Cost per contact | Customer effort score (CES) |
| Handle time | Repeat purchase rate |
| First contact resolution | Customer lifetime value (LTV) |
⭐ The North Star Framework
How we position Gladly's approach to AI-powered CX: