AI Agents in Plain English
If you've built software before, you're used to writing explicit instructions: "If the user clicks this button, do that. When this condition is true, execute this code."
An AI agent is different. Instead of explicit instructions for every scenario, you give it a goal and tools, and it figures out the steps.
The Core Difference
Traditional software:
- You write: IF condition THEN action
- You handle: every possible path

AI agent:
- You write: here's the goal, here are the tools
- The agent decides: which tools to use and when
A Simple Example
Let's say you want to monitor job postings and send alerts.
Traditional approach:
```typescript
// You write every step explicitly
async function checkJobPostings() {
  const posts = await fetchJobBoard();
  const filtered = posts.filter(p => p.title.includes("TypeScript"));
  if (filtered.length > 0) {
    await sendEmail(filtered);
  }
}
```
Agent approach:
```typescript
// You define the goal and tools
const agent = new Agent({
  name: "Job Alert Agent",
  instructions: "Monitor job postings and alert me about TypeScript roles",
  tools: [fetchJobBoardTool, filterJobsTool, sendEmailTool],
});

// Agent decides which tools to use and when
await agent.run();
```
The agent might:
- Call `fetchJobBoardTool`
- Analyze the results
- Call `filterJobsTool` with relevant criteria
- Call `sendEmailTool` with findings
You didn't write the if/else logic. The agent reasoned about what to do.
Why This Matters for the Web
The web has always been about access to information and services. But connecting them means dealing with:
- APIs that change
- Data formats that differ
- Logic that has to adapt to context
Agents excel at this. They can:
- Read documentation and figure out how to use APIs
- Handle inconsistent data formats (see the sketch after this list)
- Make contextual decisions
- Adapt to changes without code updates
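To make the inconsistent-data point concrete, here is a minimal sketch of an agent that reconciles two job boards whose APIs use different field names. It assumes the same `Agent` and `generate()` API used in the examples later in this chapter; the payloads, the agent name, and the `normalizeJobs` helper are hypothetical, and the JSON parsing is deliberately simplified.

```typescript
import { Agent } from '@mastra/core';

// Two job boards return the same information in different shapes
// (hypothetical payloads for illustration).
const boardA = [{ title: 'Senior TypeScript Engineer', org: 'Acme', city: 'Remote' }];
const boardB = [{ position: 'TS Developer', company: 'Globex', location: 'NYC' }];

// Instead of writing a mapper for each board, describe the target shape
// and let the agent reconcile whatever fields it receives.
const normalizerAgent = new Agent({
  name: 'Job Normalizer',
  instructions: `You receive job postings from different sources with
inconsistent field names. Return only a JSON array where every posting
has exactly these keys: title, company, location.`,
  model: { provider: 'OPEN_AI', name: 'gpt-4o-mini' },
});

async function normalizeJobs() {
  const response = await normalizerAgent.generate(
    `Normalize these postings:\n${JSON.stringify([...boardA, ...boardB])}`
  );
  // Simplified: a real app would validate the output (e.g., with zod)
  return JSON.parse(response.text) as { title: string; company: string; location: string }[];
}
```

A traditional integration needs a hand-written mapper per source; here the mapping lives in the instructions, so a new source with yet another shape doesn't require new code.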
The Three Modalities
Not every AI application needs full autonomy. There are three distinct modalities, each suited to different use cases:
Chatbot (Reactive) → Copilot (Assistive) → Agent (Autonomous)
Chatbots
What: Conversational interfaces that respond to user input in real-time.
Characteristics:
- Every interaction is user-initiated
- Synchronous request/response
- No autonomy between conversations
- Stateless or short-term memory
When to use:
- Customer support
- Information retrieval
- Interactive Q&A
- Command interfaces
Example use cases:
- "What's my order status?"
- "Explain this error message"
- "Search our documentation"
TypeScript Example: Simple Support Chatbot
```typescript
import { Agent } from '@mastra/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define tools the chatbot can use
const searchDocsTool = {
  description: 'Search documentation for relevant information',
  parameters: z.object({
    query: z.string().describe('Search query'),
  }),
  execute: async ({ query }: { query: string }) => {
    // In reality, this would search a vector database
    return {
      results: [
        { title: 'Getting Started', snippet: 'To begin...' },
        { title: 'API Reference', snippet: 'The API...' },
      ],
    };
  },
};

const getOrderStatusTool = {
  description: 'Get the status of a customer order',
  parameters: z.object({
    orderId: z.string().describe('Order ID'),
  }),
  execute: async ({ orderId }: { orderId: string }) => {
    // In reality, this would call your order management system
    return {
      orderId,
      status: 'shipped',
      tracking: 'ABC123',
      estimatedDelivery: '2025-10-05',
    };
  },
};

// Create the chatbot agent
const supportChatbot = new Agent({
  name: 'Support Chatbot',
  instructions: `You are a helpful customer support chatbot.

Your role is to:
- Answer questions about orders
- Help users find documentation
- Provide clear, friendly responses

Always be concise and helpful.`,
  model: {
    provider: 'OPEN_AI',
    name: 'gpt-4o-mini',
    toolChoice: 'auto',
  },
  tools: {
    searchDocs: searchDocsTool,
    getOrderStatus: getOrderStatusTool,
  },
});

// Use the chatbot
async function chat(userMessage: string) {
  const response = await supportChatbot.generate(userMessage);
  return response.text;
}

// Example usage
const response1 = await chat("What's the status of order #12345?");
console.log(response1);
// "Your order #12345 has been shipped! The tracking number is ABC123..."

const response2 = await chat("How do I get started with the API?");
console.log(response2);
// "I found the Getting Started guide for you. To begin..."
```
Key characteristics:
- Responds only when the user sends a message
- Each conversation is independent
- Uses tools to fetch information
- Returns immediately
Copilots
What: AI assistants that work alongside you, suggesting actions and helping with tasks while you maintain control.
Characteristics:
- Proactive suggestions
- User always in control (approve/reject)
- Context-aware
- Embedded in workflows
When to use:
- Code completion
- Writing assistance
- Design suggestions
- Decision support
Example use cases:
- "Suggest improvements to this code"
- "Help me draft this email"
- "Recommend next steps in this workflow"
TypeScript Example: Code Review Copilot
```typescript
import { Agent } from '@mastra/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const analyzeCodeTool = {
  description: 'Analyze code for issues and improvements',
  parameters: z.object({
    code: z.string().describe('Code to analyze'),
    language: z.string().describe('Programming language'),
  }),
  execute: async ({ code, language }: { code: string; language: string }) => {
    // In reality, this would run static analysis
    return {
      issues: [
        {
          type: 'performance',
          line: 12,
          message: 'Consider using a Map instead of object for lookups',
        },
        {
          type: 'security',
          line: 24,
          message: 'User input should be sanitized before use',
        },
      ],
      suggestions: [
        {
          type: 'refactor',
          message: 'This function could be split into smaller pieces',
        },
      ],
    };
  },
};

const suggestTestCasesTool = {
  description: 'Suggest test cases for the given code',
  parameters: z.object({
    code: z.string().describe('Code to generate tests for'),
    framework: z.string().describe('Testing framework to use'),
  }),
  execute: async ({ code, framework }: { code: string; framework: string }) => {
    return {
      testCases: [
        'Test with empty input',
        'Test with invalid data type',
        'Test boundary conditions',
        'Test error handling',
      ],
      exampleTest: `
describe('myFunction', () => {
  it('should handle empty input', () => {
    expect(myFunction('')).toBe(null);
  });
});`,
    };
  },
};

// Create the copilot
const codeReviewCopilot = new Agent({
  name: 'Code Review Copilot',
  instructions: `You are a code review assistant.

Your role is to:
- Identify potential issues in code
- Suggest improvements
- Recommend test cases
- Explain your reasoning

Always provide specific, actionable suggestions.`,
  model: {
    provider: 'OPEN_AI',
    name: 'gpt-4o',
    toolChoice: 'auto',
  },
  tools: {
    analyzeCode: analyzeCodeTool,
    suggestTestCases: suggestTestCasesTool,
  },
});

// Use the copilot
async function reviewCode(code: string, language: string) {
  const response = await codeReviewCopilot.generate(
    `Please review this ${language} code and provide suggestions:\n\n${code}`
  );
  return response.text;
}

// Example usage
const myCode = `
function processUser(user) {
  return user.name.toUpperCase();
}
`;

const review = await reviewCode(myCode, 'javascript');
console.log(review);
// "I've analyzed your code. Here are some suggestions:
// 1. Add null checking for user.name to prevent errors
// 2. Consider type checking or TypeScript for better safety
// ..."
```
Key characteristics:
- Provides suggestions proactively
- User decides what to accept
- Works within existing workflows
- Context-aware (understands what you're doing)
Agents
What: Autonomous systems that pursue goals independently, making decisions and taking actions without constant human input.
Characteristics:
- Autonomous operation
- Multi-step workflows
- Long-running tasks
- Self-directed decision-making
When to use:
- Background monitoring
- Scheduled analysis
- Event-driven workflows
- Complex multi-step tasks
Example use cases:
- "Monitor competitor sites and alert on changes"
- "Process incoming emails and route to teams"
- "Analyze data daily and generate reports"
TypeScript Example: Autonomous Monitoring Agent
```typescript
import { Agent } from '@mastra/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const fetchWebsiteTool = {
  description: 'Fetch content from a website',
  parameters: z.object({
    url: z.string().describe('URL to fetch'),
  }),
  execute: async ({ url }: { url: string }) => {
    const response = await fetch(url);
    const html = await response.text();
    return { url, content: html.slice(0, 1000) }; // Simplified
  },
};

const detectChangesTool = {
  description: 'Detect changes between old and new content',
  parameters: z.object({
    oldContent: z.string().describe('Previous content'),
    newContent: z.string().describe('Current content'),
  }),
  execute: async ({ oldContent, newContent }: { oldContent: string; newContent: string }) => {
    const changed = oldContent !== newContent;
    return {
      changed,
      summary: changed ? 'Content has been updated' : 'No changes detected',
    };
  },
};

const sendAlertTool = {
  description: 'Send an alert notification',
  parameters: z.object({
    subject: z.string().describe('Alert subject'),
    message: z.string().describe('Alert message'),
  }),
  execute: async ({ subject, message }: { subject: string; message: string }) => {
    console.log(`ALERT: ${subject}\n${message}`);
    // In reality, would send via email/Slack/SMS
    return { sent: true };
  },
};

// Create the autonomous agent
const monitoringAgent = new Agent({
  name: 'Website Monitor Agent',
  instructions: `You are a website monitoring agent that runs autonomously.

Your task:
1. Fetch content from specified URLs
2. Compare with previous versions
3. If changes are detected, analyze what changed
4. Send alerts for significant changes

You should:
- Be thorough in detecting changes
- Provide clear summaries of what changed
- Only alert on meaningful changes (not minor formatting)

You operate independently on a schedule.`,
  model: {
    provider: 'OPEN_AI',
    name: 'gpt-4o',
    toolChoice: 'auto',
  },
  tools: {
    fetchWebsite: fetchWebsiteTool,
    detectChanges: detectChangesTool,
    sendAlert: sendAlertTool,
  },
});

// Agent runs autonomously (scheduled or triggered)
async function runMonitoringCycle(urls: string[], previousContent: Record<string, string>) {
  const task = `Check these URLs for changes and alert if anything significant changed:
${urls.join('\n')}

Previous content is available in your context.`;

  const response = await monitoringAgent.generate(task, {
    // In reality, you'd pass previous content from a database
    context: { previousContent },
  });

  return response.text;
}

// Example: Run on a schedule (e.g., every hour)
setInterval(async () => {
  const urls = [
    'https://competitor.com/pricing',
    'https://competitor.com/features',
  ];
  const previousContent = {
    // Load from database
  };

  await runMonitoringCycle(urls, previousContent);
}, 60 * 60 * 1000); // Every hour
```
Key characteristics:
- Runs without human intervention
- Makes decisions independently
- Multi-step reasoning
- Long-running operation
Workflows vs Agents
One of the most common questions: "Should I build a workflow or an agent?"
The answer isn't binary—it's a spectrum. Let's understand the framework.
The Key Question
Who designs the flowchart, and when?
Traditional Workflows
- You design the flowchart
- At build time
- Every step is predetermined
- Execution follows exact paths
Agentic Workflows
- The AI designs the flowchart
- At run time
- Steps are determined dynamically
- Execution adapts to context
The Spectrum
Fixed Workflow ←──────────────────────→ Autonomous Agent
(you determine all steps)        (the AI determines all steps)
Most real-world systems land somewhere in the middle:
Low Autonomy ──────────────────────────────────────────────────→ High Autonomy
Fixed Workflow → Branching Workflow → Decision Points → Dynamic Planning → Fully Autonomous
1. Fixed Workflow
Characteristics:
- Predetermined steps
- No deviation
- Consistent output
Example: Image processing pipeline
```typescript
async function processImage(imageUrl: string) {
  const image = await download(imageUrl);
  const resized = await resize(image, 800, 600);
  const optimized = await optimize(resized);
  const uploaded = await upload(optimized);
  return uploaded.url;
}
```
When to use:
- Steps are always the same
- Predictability is critical
- No ambiguity in requirements
2. Branching Workflow
Characteristics:
- Predetermined branches
- Conditions you define
- Multiple paths, all known
Example: Email triage
```typescript
async function triageEmail(email: Email) {
  const category = await classifyEmail(email);

  if (category === 'support') {
    await routeToSupport(email);
  } else if (category === 'sales') {
    await routeToSales(email);
  } else {
    await routeToGeneral(email);
  }
}
```
When to use:
- Known categories or conditions
- Clear decision criteria
- Limited branching paths
3. Decision Points
Characteristics:
- Mostly fixed workflow
- AI makes specific decisions
- Human-defined guardrails
Example: Content moderation
```typescript
const moderationAgent = new Agent({
  name: 'Content Moderator',
  instructions: 'Review content and decide if it should be approved, flagged, or rejected',
  tools: { analyzeContent, checkGuidelines },
});

async function moderateContent(content: string) {
  // Fixed: get the decision (the agent returns text, as in the earlier examples)
  const result = await moderationAgent.generate(`Review this content: ${content}`);
  const decision = result.text;

  // Fixed: apply the decision
  if (decision.includes('approved')) {
    await publishContent(content);
  } else if (decision.includes('flagged')) {
    await sendForHumanReview(content);
  } else {
    await rejectContent(content);
  }
}
```
When to use:
- Most steps are clear
- Specific decisions need AI judgment
- Outcomes are well-defined
4. Dynamic Planning
Characteristics:
- AI plans the sequence
- You provide tools and constraints
- Multi-step reasoning
Example: Research agent
```typescript
const researchAgent = new Agent({
  name: 'Research Agent',
  instructions: `Given a research question, plan and execute research steps:
1. Break down the question into sub-questions
2. Search for relevant information
3. Analyze and synthesize findings
4. Generate a comprehensive report`,
  tools: {
    searchWeb,
    extractContent,
    summarize,
    analyzeData,
  },
});

async function research(question: string) {
  // Agent determines which tools to use, in what order
  const result = await researchAgent.generate(
    `Research this question: ${question}`
  );
  return result.text;
}
```
When to use:
- Problem requires multiple steps
- Steps depend on intermediate results
- You want the AI to figure out the approach
5. Fully Autonomous
Characteristics:
- Agent operates independently
- Long-running tasks
- Handles unexpected situations
- Makes high-level decisions
Example: Autonomous business analyst
```typescript
const analystAgent = new Agent({
  name: 'Business Analyst Agent',
  instructions: `You are an autonomous business analyst.

Your job:
- Monitor key business metrics daily
- Investigate anomalies
- Generate insights and recommendations
- Alert stakeholders when needed

You have full autonomy to:
- Decide which metrics to analyze
- Determine what's significant
- Choose how to investigate
- Decide when to alert`,
  tools: {
    queryDatabase,
    runAnalysis,
    generateReport,
    sendAlert,
    scheduleFollowup,
  },
});

// Agent runs on schedule, makes all decisions
async function dailyAnalysis() {
  await analystAgent.generate('Perform your daily business analysis');
}
```
When to use:
- Task is open-ended
- You trust the AI's judgment
- Human oversight is available but not constant
- Value is in autonomy
Choosing the Right Level
Ask yourself:
1. Do I know all the steps ahead of time?
   - Yes → Fixed or Branching Workflow
   - Sometimes → Decision Points
   - No → Dynamic Planning or Autonomous

2. How much do I trust the AI?
   - Limited → Keep tight control
   - Moderate → Decision Points
   - High → More autonomy

3. What are the consequences of mistakes?
   - High stakes → Less autonomy, more human review (sketched after this list)
   - Low stakes → More autonomy acceptable

4. How much does context vary?
   - Consistent → Fixed workflow
   - Variable → More autonomy

5. Do I need to explain every decision?
   - Yes → Fixed workflow (transparent)
   - No → Autonomy acceptable
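For the high-stakes case, a common middle ground is a decision point with human review: the agent proposes an action, but anything above a risk threshold waits for a person. Here is a minimal sketch; the `Proposal` shape, the $100 threshold, and the `notifyReviewer`/`executeAction` helpers are hypothetical placeholders for your own notification and execution logic.

```typescript
// Sketch of gating agent-proposed actions behind human approval.
type Proposal = { action: 'refund' | 'reply'; amountUsd?: number; draft: string };

// Hypothetical helpers: in a real system these would post to Slack/email
// and call your backend.
async function notifyReviewer(proposal: Proposal) {
  console.log('Queued for human review:', proposal);
}
async function executeAction(proposal: Proposal) {
  console.log('Executing:', proposal.action);
}

async function executeWithApproval(proposal: Proposal) {
  // Hypothetical rule: refunds over $100 are high stakes
  const highStakes = proposal.action === 'refund' && (proposal.amountUsd ?? 0) > 100;

  if (highStakes) {
    await notifyReviewer(proposal); // park it instead of acting
    return { status: 'pending_review' as const };
  }

  await executeAction(proposal); // low stakes: let the agent's decision stand
  return { status: 'executed' as const };
}
```

The agent still does the reasoning; the gate only controls which of its decisions take effect without a human in the loop.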
Code Comparison: Same Task, Different Approaches
Let's see the same task (processing customer inquiries) at different autonomy levels:
Fixed Workflow Approach
```typescript
async function handleInquiry(inquiry: string) {
  // You decide all steps
  const category = await classifyInquiry(inquiry);
  const response = await generateResponse(inquiry, category);
  const sentiment = await analyzeSentiment(response);

  if (sentiment === 'negative') {
    await escalateToHuman(inquiry, response);
  } else {
    await sendResponse(response);
  }
}
```
Agentic Approach
```typescript
const customerAgent = new Agent({
  name: 'Customer Service Agent',
  instructions: 'Handle customer inquiries appropriately',
  tools: {
    classifyInquiry,
    generateResponse,
    analyzeSentiment,
    escalateToHuman,
    sendResponse,
    searchKnowledgeBase,
    checkOrderStatus,
  },
});

async function handleInquiry(inquiry: string) {
  // Agent decides which tools to use and when
  await customerAgent.generate(
    `Handle this customer inquiry: ${inquiry}`
  );
}
```
The agentic approach lets the AI:
- Decide if classification is needed
- Choose whether to search the knowledge base
- Determine if order status is relevant
- Make the escalation decision
Your First TypeScript Agent
Let's build a simple but functional agent from scratch. This agent will check the weather and suggest what to wear.
The Complete Code
```typescript
// weather-agent.ts
import { Agent } from '@mastra/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Tool 1: Get current weather
const getWeatherTool = {
  description: 'Get the current weather for a location',
  parameters: z.object({
    location: z.string().describe('City name or zip code'),
  }),
  execute: async ({ location }: { location: string }) => {
    // In a real app, call a weather API (e.g., OpenWeatherMap)
    // For this example, we'll return mock data
    return {
      location,
      temperature: 72,
      conditions: 'Partly cloudy',
      humidity: 65,
      windSpeed: 8,
    };
  },
};

// Tool 2: Get weather forecast
const getForecastTool = {
  description: 'Get the weather forecast for the next few hours',
  parameters: z.object({
    location: z.string().describe('City name or zip code'),
    hours: z.number().describe('Number of hours to forecast'),
  }),
  execute: async ({ location, hours }: { location: string; hours: number }) => {
    return {
      location,
      forecast: [
        { time: '2pm', temp: 75, conditions: 'Sunny' },
        { time: '4pm', temp: 78, conditions: 'Sunny' },
        { time: '6pm', temp: 72, conditions: 'Clear' },
      ].slice(0, hours),
    };
  },
};

// Create the agent
const weatherAgent = new Agent({
  name: 'Weather Assistant',
  instructions: `You are a helpful weather assistant.

Your job is to:
1. Get weather information for the requested location
2. Provide relevant forecast information
3. Give practical suggestions based on conditions
4. Be friendly and conversational

When suggesting what to wear, consider:
- Temperature (cold: <60°F, mild: 60-75°F, warm: >75°F)
- Conditions (rain, snow, sun)
- Wind and humidity

Always provide specific, actionable advice.`,
  model: {
    provider: 'OPEN_AI',
    name: 'gpt-4o-mini', // Fast and cost-effective
    toolChoice: 'auto', // Agent decides when to use tools
  },
  tools: {
    getWeather: getWeatherTool,
    getForecast: getForecastTool,
  },
});

// Use the agent
async function askWeather(question: string) {
  const response = await weatherAgent.generate(question);
  return response.text;
}

// Example usage
async function main() {
  console.log('Weather Agent Demo\n');

  // Example 1: Simple weather check
  const response1 = await askWeather(
    "What's the weather like in San Francisco?"
  );
  console.log("User: What's the weather like in San Francisco?");
  console.log(`Agent: ${response1}\n`);

  // Example 2: Outfit suggestion
  const response2 = await askWeather(
    "I'm going out in New York this evening. What should I wear?"
  );
  console.log("User: I'm going out in New York this evening. What should I wear?");
  console.log(`Agent: ${response2}\n`);

  // Example 3: Planning ahead
  const response3 = await askWeather(
    "Should I bring an umbrella to Seattle tomorrow?"
  );
  console.log('User: Should I bring an umbrella to Seattle tomorrow?');
  console.log(`Agent: ${response3}\n`);
}

main().catch(console.error);
```
How to Test It
1. Save the code to `weather-agent.ts`.

2. Set up your environment:

```bash
# Create .env file
echo "OPENAI_API_KEY=your-key-here" > .env
```

3. Run the agent:

```bash
npx tsx weather-agent.ts
```
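If the run fails with missing modules, install the packages the example imports first (package names taken from the import statements above; versions omitted):

```bash
npm install @mastra/core @ai-sdk/openai zod
```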
What's Happening
Let's trace through what happens when you ask:
"What's the weather like in San Francisco?"
- Agent receives the question
- Agent reasons: "I need current weather data for San Francisco"
- Agent calls `getWeatherTool` with `{ location: "San Francisco" }`
- Tool returns weather data
- Agent synthesizes a natural response using the data
- Returns: "It's currently 72°F and partly cloudy in San Francisco..."
The key: You didn't write the if/else logic. The agent decided:
- Which tool to call
- What parameters to pass
- How to format the response
Extending the Agent
Want to make it more useful? Add more tools:
```typescript
// Add UV index tool
const getUVIndexTool = {
  description: 'Get UV index for sun protection advice',
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }: { location: string }) => {
    return { location, uvIndex: 7, risk: 'High' };
  },
};

// Add air quality tool
const getAirQualityTool = {
  description: 'Get air quality index for outdoor activity advice',
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }: { location: string }) => {
    return { location, aqi: 45, quality: 'Good' };
  },
};

// Add to agent
const enhancedWeatherAgent = new Agent({
  // ... same config
  tools: {
    getWeather: getWeatherTool,
    getForecast: getForecastTool,
    getUVIndex: getUVIndexTool,
    getAirQuality: getAirQualityTool,
  },
});
```
Now the agent can give more comprehensive advice:
- "UV index is high, wear sunscreen"
- "Air quality is good for outdoor exercise"
Next Steps
You've built your first agent! Here's what to explore next:
- Connect real APIs: Replace mock data with actual weather APIs
- Add memory: Make the agent remember your location preferences (see the sketch after this list)
- Add scheduling: Run the agent daily and send morning briefings
- Add more tools: Calendar integration, location services, etc.
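As a taste of the memory idea, here is a framework-agnostic sketch that builds on the `weatherAgent` from the example above: it keeps per-user preferences in an in-memory Map and prepends them to the prompt. The `userPreferences` store and `rememberLocation` helper are hypothetical; a real app would persist this in a database or use your framework's memory features.

```typescript
// Hypothetical in-memory preference store.
const userPreferences = new Map<string, { location?: string }>();

function rememberLocation(userId: string, location: string) {
  userPreferences.set(userId, { ...userPreferences.get(userId), location });
}

// Builds on the weatherAgent and its generate() call from the example above.
async function askWeatherWithMemory(userId: string, question: string) {
  const prefs = userPreferences.get(userId);
  const context = prefs?.location
    ? `The user's preferred location is ${prefs.location}.`
    : 'No location preference is known yet.';

  const response = await weatherAgent.generate(`${context}\n\n${question}`);
  return response.text;
}

// Usage
rememberLocation('user-123', 'Seattle');
const answer = await askWeatherWithMemory('user-123', 'Do I need a jacket today?');
console.log(answer);
```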
The pattern is always the same:
- Define tools (functions the agent can call)
- Write clear instructions
- Let the agent decide the rest
Key Takeaways
1. Agents Are Different
Traditional software: You write every step
Agents: You define goals and tools, AI figures out steps
2. Three Modalities
- Chatbots: Reactive, respond to user input
- Copilots: Assistive, work alongside you
- Agents: Autonomous, pursue goals independently
3. It's a Spectrum
Not binary. Most real systems combine workflows and agentic behavior:
- Start with workflows where steps are clear
- Add autonomy where flexibility matters
- Match autonomy level to trust and stakes
4. Tools Are the Key
Agents are only as capable as the tools you give them (compare the two tool definitions sketched after this list):
- Good tools = capable agent
- Poor tools = limited agent
- No tools = expensive chatbot
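To see what "good tools" means in practice, compare these two definitions of the same tool. Both wrap the same hypothetical `lookupWeather` function; the only difference is how much the description and parameter schema tell the model.

```typescript
import { z } from 'zod';

// Hypothetical backing function so the sketch is self-contained.
async function lookupWeather(city: string, units: string = 'imperial') {
  return { city, temperature: units === 'metric' ? 22 : 72, conditions: 'Clear' };
}

// Poor: vague description, untyped input. The model has to guess what to
// pass and when to call this tool.
const weatherToolPoor = {
  description: 'Weather stuff',
  parameters: z.object({ input: z.string() }),
  execute: async ({ input }: { input: string }) => lookupWeather(input),
};

// Good: specific description, named and documented parameters. The model
// knows when this tool applies and how to fill in valid arguments.
const weatherToolGood = {
  description:
    'Get current weather conditions for a city. Use when the user asks about weather, temperature, or what to wear.',
  parameters: z.object({
    city: z.string().describe('City name, e.g. "San Francisco"'),
    units: z.enum(['metric', 'imperial']).describe('Unit system for the temperature'),
  }),
  execute: async ({ city, units }: { city: string; units: 'metric' | 'imperial' }) =>
    lookupWeather(city, units),
};
```

Same underlying capability, very different agent behavior: the vague version gets misused or ignored, while the specific version gets called reliably with sensible arguments.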
5. Start Simple
You don't need full autonomy on day one:
- Build a simple chatbot
- Add tools gradually
- Increase autonomy as you gain confidence
What's Next
Now that you understand what agents are, the next chapters will teach you:
- The 7 Components of an Agent: Deep dive into prompts, tools, memory, and more
- Triggers: When and how agents wake up to do their work
- Core Patterns: Proven architectures for common agentic applications
You're ready to go deeper. Let's build.