Introduction
An agent without a trigger is like a sports car with no ignition—technically impressive but functionally useless.
Triggers are how agents know when to wake up and do their work.
Why Triggers Are Critical
The trigger is often the most important architectural decision. A badly designed trigger can:
- Miss critical events
- Run too frequently (wasting money)
- Create race conditions
- Fail silently
The 5 Types of Triggers
```
┌─────────────────────────────────────────────────────────────┐
│ TRIGGER TYPES                                               │
│                                                             │
│ 1. MANUAL TRIGGERS                                          │
│ ┌────────────┬────────────┬────────────┬────────────┐       │
│ │ CLI        │ API        │ Voice      │ App UI     │       │
│ │ "Run now"  │ POST /run  │"Hey agent" │ [Button]   │       │
│ └────────────┴────────────┴────────────┴────────────┘       │
│                                                             │
│ 2. SCHEDULED TRIGGERS                                       │
│ ┌────────────┬────────────┬────────────┐                    │
│ │ Daily      │ Hourly     │ Weekly     │                    │
│ │ 8:00 AM    │ :00        │ Monday 9AM │                    │
│ └────────────┴────────────┴────────────┘                    │
│                                                             │
│ 3. EVENT-BASED (INTERNAL)                                   │
│ ┌────────────┬────────────┬────────────┐                    │
│ │ Webhook    │ DB Change  │ File Watch │                    │
│ │ POST /hook │ INSERT user│ file.txt   │                    │
│ └────────────┴────────────┴────────────┘                    │
│                                                             │
│ 4. EVENT-BASED (EXTERNAL)                                   │
│ ┌────────────┬────────────┬────────────┐                    │
│ │ RSS Feed   │ 3rd Party  │Social Media│                    │
│ │ New post   │Stripe hook │ @mention   │                    │
│ └────────────┴────────────┴────────────┘                    │
│                                                             │
│ 5. VISION-BASED TRIGGERS                                    │
│ ┌────────────┬────────────┬────────────┐                    │
│ │Object Det. │ Scene Chg. │ Image Upld │                    │
│ │Person seen │ Background │ User photo │                    │
│ └────────────┴────────────┴────────────┘                    │
└─────────────────────────────────────────────────────────────┘
```
| Type             | Latency      | Control | Complexity  | Cost      |
|------------------|--------------|---------|-------------|-----------|
| Manual           | Instant      | Full    | Low         | Per run   |
| Scheduled        | Predictable  | High    | Low         | Fixed     |
| Event (Internal) | Instant      | High    | Medium      | Variable  |
| Event (External) | Delayed      | Low     | High        | Variable  |
| Vision           | Near-instant | Medium  | Medium-High | Per image |
Trigger Type 1: Manual
Manual triggers are user-initiated—explicitly telling the agent to run.
Pattern 1: CLI Trigger
```typescript
// agent-cli.ts
import { jobAlertAgent } from './agents/job-alert';

async function runCLI() {
  const userId = process.argv[2];
  if (!userId) {
    console.error('Usage: node agent-cli.ts <userId>');
    process.exit(1);
  }

  console.log(`🚀 Running job alert for: ${userId}`);
  const result = await jobAlertAgent.generate(`Find jobs for user ${userId}`);
  console.log('✅ Result:', result.text);
}

runCLI().catch(console.error);
```
Usage: node agent-cli.ts user-123
Pattern 2: API Trigger
```typescript
// app/api/agents/run/route.ts
import { NextResponse } from 'next/server';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

export async function POST(request: Request) {
  const { userId, parameters } = await request.json();

  const result = await jobAlertAgent.generate(
    `Run job alert for user ${userId} with: ${JSON.stringify(parameters)}`
  );

  return NextResponse.json({
    success: true,
    result: result.text,
    executedAt: new Date().toISOString(),
  });
}
```
Client usage:
```typescript
const response = await fetch('/api/agents/run', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    userId: 'user-123',
    parameters: { keywords: ['TypeScript'] },
  }),
});
```
Pattern 3: Voice Trigger
```typescript
// lib/voice-trigger.ts
export class VoiceTrigger {
  private recognition: SpeechRecognition;

  constructor() {
    // webkitSpeechRecognition is the prefixed implementation in Chrome/Safari
    const SpeechRecognitionImpl =
      window.SpeechRecognition || (window as any).webkitSpeechRecognition;
    this.recognition = new SpeechRecognitionImpl();
    this.setupRecognition();
  }

  private setupRecognition() {
    this.recognition.continuous = true;
    this.recognition.onresult = async (event) => {
      const transcript =
        event.results[event.results.length - 1][0].transcript;
      await this.handleCommand(transcript);
    };
  }

  private async handleCommand(command: string) {
    if (command.toLowerCase().includes('check for jobs')) {
      await fetch('/api/agents/run', { method: 'POST' });
      this.speak('Checking for jobs now');
    }
  }

  private speak(text: string) {
    const utterance = new SpeechSynthesisUtterance(text);
    window.speechSynthesis.speak(utterance);
  }

  start() {
    this.recognition.start();
  }

  stop() {
    this.recognition.stop();
  }
}
```
React component:
```typescript
'use client';
import { useState } from 'react';
import { VoiceTrigger } from '@/lib/voice-trigger';

export function VoiceControl() {
  const [voiceTrigger] = useState(() => new VoiceTrigger());
  const [listening, setListening] = useState(false);

  const toggle = () => {
    listening ? voiceTrigger.stop() : voiceTrigger.start();
    setListening(!listening);
  };

  return (
    <button onClick={toggle}>
      {listening ? '🎤 Listening...' : '🎙️ Voice Control'}
    </button>
  );
}
```
Pattern 4: Application UI Trigger
```typescript
'use client';
import { useState } from 'react';
import { Button } from '@/components/ui/button';

export function AgentTrigger() {
  const [running, setRunning] = useState(false);
  const [result, setResult] = useState<string | null>(null);

  const runAgent = async () => {
    setRunning(true);
    const res = await fetch('/api/agents/run', { method: 'POST' });
    const data = await res.json();
    setResult(data.result);
    setRunning(false);
  };

  return (
    <div>
      <Button onClick={runAgent} disabled={running}>
        {running ? 'Running...' : 'Check Jobs Now'}
      </Button>
      {result && <p>{result}</p>}
    </div>
  );
}
```
Trigger Type 2: Scheduled
Scheduled triggers run at specific times or intervals.
Cron Patterns
```
┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6, Sunday = 0)
│ │ │ │ │
* * * * *
```

Examples:

```
0 8 * * *      # Daily at 8 AM
0 */4 * * *    # Every 4 hours
0 9 * * 1      # Monday at 9 AM
*/15 * * * *   # Every 15 minutes
```
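As a sanity check on how the five fields combine, the matching logic can be sketched in a few lines of TypeScript. This is illustrative only — it handles `*`, plain numbers, and `*/n` steps, not ranges or lists, and the function names are ours; use a real cron library in production.

```typescript
// Match one cron field against a value: "*", a number, or a "*/n" step.
function fieldMatches(field: string, value: number): boolean {
  if (field === '*') return true;
  if (field.startsWith('*/')) return value % Number(field.slice(2)) === 0;
  return Number(field) === value;
}

// A date matches when every one of the five fields matches (UTC here).
export function matchesCron(expr: string, date: Date): boolean {
  const [min, hour, dom, month, dow] = expr.trim().split(/\s+/);
  return (
    fieldMatches(min, date.getUTCMinutes()) &&
    fieldMatches(hour, date.getUTCHours()) &&
    fieldMatches(dom, date.getUTCDate()) &&
    fieldMatches(month, date.getUTCMonth() + 1) &&
    fieldMatches(dow, date.getUTCDay())
  );
}
```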
With Vercel Cron (Recommended)
Step 1: Configure cron in vercel.json
```json
{
  "crons": [
    { "path": "/api/cron/daily-job-alerts", "schedule": "0 8 * * *" },
    { "path": "/api/cron/hourly-check", "schedule": "0 * * * *" }
  ]
}
```
Step 2: Create API route that calls your agent
```typescript
// app/api/cron/daily-job-alerts/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

export async function GET(request: NextRequest) {
  // Verify this is called by Vercel Cron
  const authHeader = request.headers.get('authorization');
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  try {
    console.log('🕐 Running daily job alerts');

    // Call your Mastra agent
    const result = await jobAlertAgent.generate(
      'Generate daily job digest for all active users'
    );

    return NextResponse.json({
      success: true,
      result: result.text,
      executedAt: new Date().toISOString(),
    });
  } catch (error) {
    console.error('Job alert failed:', error);
    return NextResponse.json(
      { error: 'Execution failed' },
      { status: 500 }
    );
  }
}
```
Why this works:
- Vercel cron hits your API route on schedule
- API route calls your Mastra agent
- Serverless, no need to keep a process running
- Built-in monitoring in Vercel dashboard
With node-cron (Self-Hosted)
If you're self-hosting or need more control:
```typescript
import cron from 'node-cron';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

cron.schedule(
  '0 8 * * *',
  async () => {
    console.log('🕐 Running daily job alerts');
    await jobAlertAgent.generate('Daily job digest');
  },
  { timezone: 'America/Los_Angeles' }
);
```
Time Zone Aware
```typescript
// Run hourly; alert only the users whose local time is currently 8 AM
cron.schedule('0 * * * *', async () => {
  const currentHour = new Date().getUTCHours();

  // A user's local hour is 8 when currentHour + offset ≡ 8 (mod 24)
  const raw = ((8 - currentHour) % 24 + 24) % 24;
  const utcOffset = raw > 12 ? raw - 24 : raw;

  const users = await db.users.findMany({
    // `timezone` here stores the user's whole-hour UTC offset (e.g. -7)
    where: { alertHour: 8, timezone: utcOffset },
  });

  for (const user of users) {
    await jobAlertAgent.generate(`Alert for user ${user.id}`);
  }
});
```
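The offset arithmetic behind "8 AM in each user's timezone" is easy to fumble, so it helps to isolate it in a pure, testable helper. This is a sketch with a name of our choosing, and it assumes whole-hour UTC offsets.

```typescript
// Given the current UTC hour, return the whole-hour UTC offset a user
// must have for their local time to equal targetHour.
// Example: at 15:00 UTC, users at UTC-7 see 8:00 local time.
export function offsetForLocalHour(targetHour: number, utcHour: number): number {
  const raw = ((targetHour - utcHour) % 24 + 24) % 24; // normalize into 0..23
  return raw > 12 ? raw - 24 : raw;                    // map into -11..+12
}
```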
Common Schedules
```typescript
// Morning digest
cron.schedule('0 8 * * *', async () => {
  await dailyDigestAgent.generate('Morning briefing');
});

// Hourly monitoring
cron.schedule('0 * * * *', async () => {
  await monitorAgent.generate('Health check');
});

// Weekly report (Monday 9 AM)
cron.schedule('0 9 * * 1', async () => {
  await reportAgent.generate('Weekly analytics');
});

// First of the month at midnight
cron.schedule('0 0 1 * *', async () => {
  await billingAgent.generate('Monthly invoices');
});
```
Trigger Type 3: Event-Based (Internal)
Internal events from your own system.
Pattern 1: Webhooks
```typescript
// app/api/webhooks/job-application/route.ts
import { NextResponse } from 'next/server';
import { jobAlertAgent } from '@/mastra/agents/job-alert';
import { verifySignature } from '@/lib/webhook-security';

export async function POST(request: Request) {
  // Read the body once — a request body can only be consumed a single time
  const payload = await request.text();

  // Verify the webhook signature against the raw body
  const signature = request.headers.get('x-webhook-signature');
  if (!verifySignature(signature, payload)) {
    return NextResponse.json({ error: 'Invalid signature' }, { status: 401 });
  }

  const event = JSON.parse(payload);

  if (event.type === 'job.applied') {
    await jobAlertAgent.generate(`
      User ${event.userId} applied to ${event.jobId}.
      Update preferences based on this application.
    `);
  }

  return NextResponse.json({ success: true });
}
```
Security helper:
```typescript
import crypto from 'crypto';

export function verifySignature(
  signature: string | null,
  payload: string
): boolean {
  if (!signature) return false;

  const hmac = crypto.createHmac('sha256', process.env.WEBHOOK_SECRET!);
  hmac.update(payload);
  const expected = hmac.digest('hex');

  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // timingSafeEqual throws if the buffers differ in length, so check first
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```
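For completeness, here is a sketch of both sides of the handshake: the sender signs the raw payload with the shared secret, and the receiver recomputes the HMAC and compares in constant time. Function names are illustrative.

```typescript
import crypto from 'node:crypto';

// Sender side: sign the raw payload with the shared secret.
export function signPayload(payload: string, secret: string): string {
  return crypto.createHmac('sha256', secret).update(payload).digest('hex');
}

// Receiver side: recompute and compare with timingSafeEqual.
export function verifyPayload(
  signature: string,
  payload: string,
  secret: string
): boolean {
  const expected = signPayload(payload, secret);
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard first
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

Any change to the payload (or a forged signature) fails verification, which is exactly what protects the webhook route from spoofed events.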
Pattern 2: Database Changes
```typescript
// With Prisma middleware
import { db } from '@/lib/db';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

db.$use(async (params, next) => {
  const result = await next(params);

  if (params.model === 'JobPreferences' && params.action === 'update') {
    await jobAlertAgent.generate(`
      User ${result.userId} updated preferences.
      Run immediate search with: ${JSON.stringify(result)}
    `);
  }

  return result;
});
```
MongoDB change streams:
```typescript
import { MongoClient } from 'mongodb';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

const client = new MongoClient(process.env.MONGODB_URI!);
await client.connect();
const db = client.db('jobs');

const changeStream = db.collection('jobPostings').watch([
  { $match: { operationType: 'insert' } },
]);

changeStream.on('change', async (change) => {
  const newJob = change.fullDocument;
  await jobAlertAgent.generate(`
    New job: ${JSON.stringify(newJob)}
    Find matching users and send instant alerts.
  `);
});
```
Pattern 3: File System Watch
```typescript
import chokidar from 'chokidar';
import { readFile } from 'fs/promises';
import { parse } from 'csv-parse/sync';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

const watcher = chokidar.watch('./uploads/jobs/*.csv');

watcher.on('add', async (path) => {
  console.log(`📁 New file: ${path}`);
  const content = await readFile(path, 'utf-8');
  const jobs = parse(content, { columns: true });

  await jobAlertAgent.generate(`
    Process ${jobs.length} jobs from CSV.
    Match with user preferences.
  `);
});
```
Trigger Type 4: Event-Based (External)
External events from third-party systems.
Pattern 1: RSS Polling
```typescript
import Parser from 'rss-parser';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

const parser = new Parser();
const seenItems = new Set<string>();

async function pollRSS(feedUrl: string) {
  const feed = await parser.parseURL(feedUrl);

  for (const item of feed.items) {
    if (seenItems.has(item.guid!)) continue;
    seenItems.add(item.guid!);

    await jobAlertAgent.generate(`
      New job from RSS:
      Title: ${item.title}
      Link: ${item.link}
      Description: ${item.contentSnippet}
    `);
  }
}

// Poll every 5 minutes
setInterval(async () => {
  await pollRSS('https://stackoverflow.com/jobs/feed');
  await pollRSS('https://weworkremotely.com/categories/remote-programming-jobs.rss');
}, 300000);
```
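One caveat with the `seenItems` set above: it grows without bound and resets on every restart. A bounded FIFO dedupe store is a small improvement (sketch only; for durability across restarts you would persist seen ids to a database or Redis):

```typescript
// Bounded dedupe: remembers the most recent `capacity` ids, evicting the oldest.
export class SeenStore {
  private seen = new Set<string>();

  constructor(private capacity = 1000) {}

  /** Returns true the first time an id is seen, false on repeats. */
  markIfNew(id: string): boolean {
    if (this.seen.has(id)) return false;
    if (this.seen.size >= this.capacity) {
      // Sets iterate in insertion order, so the first key is the oldest.
      const oldest = this.seen.values().next().value as string;
      this.seen.delete(oldest);
    }
    this.seen.add(id);
    return true;
  }
}
```

In the polling loop, `if (!store.markIfNew(item.guid!)) continue;` replaces the two manual `Set` operations.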
Pattern 2: Third-Party Webhooks
```typescript
// app/api/webhooks/stripe/route.ts
import Stripe from 'stripe';
import { NextResponse } from 'next/server';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(request: Request) {
  const sig = request.headers.get('stripe-signature')!;
  const body = await request.text();

  // Throws if the signature is invalid
  const event = stripe.webhooks.constructEvent(
    body,
    sig,
    process.env.STRIPE_WEBHOOK_SECRET!
  );

  if (event.type === 'customer.subscription.created') {
    await jobAlertAgent.generate(`
      New subscriber: ${event.data.object.customer}
      Set up daily job alerts.
    `);
  }

  return NextResponse.json({ received: true });
}
```
Pattern 3: Social Media Monitoring
```typescript
import { TwitterApi } from 'twitter-api-v2';
import { jobAlertAgent } from '@/mastra/agents/job-alert';

const client = new TwitterApi(process.env.TWITTER_BEARER_TOKEN!);

// Monitor mentions every minute
setInterval(async () => {
  const mentions = await client.v2.search('@YourJobBot', { max_results: 10 });

  for (const tweet of mentions.tweets) {
    await jobAlertAgent.generate(`
      Twitter mention from ${tweet.author_id}:
      "${tweet.text}"
      Reply with relevant job suggestions.
    `);
  }
}, 60000);
```
Trigger Type 5: Vision-Based
Vision triggers wake agents when images contain specific content or conditions. This is one of the most overlooked but powerful trigger types—giving agents "eyes" to see and react to visual information.
Why Vision Triggers Matter
Traditional agents are blind. They can process text, but can't "see" what's happening in images, videos, or the real world. Vision triggers unlock entirely new categories of agentic applications:
- Security & Monitoring - Detect people in restricted areas
- Quality Control - Identify defects in products
- Content Moderation - Flag inappropriate images
- Accessibility - Describe scenes for visually impaired users
- Inventory - Track stock levels from shelf photos
- Document Processing - Extract data from receipts, forms, IDs
Moondream: Vision AI for Agents
Moondream is a lightweight, fast vision model perfect for agent triggers. It's:
- Ridiculously lightweight - 9B params, 2B active (faster than GPT-4V)
- Actually affordable - Fraction of the cost of frontier vision models
- Simple by design - Clean API, works with Node.js/TypeScript
- Versatile - Query, detect, point, caption built-in
Key capabilities:
- Visual Q&A - "Is there a person in this image?"
- Object detection - Bounding boxes for "car", "face", etc.
- Pointing - Find exact coordinates of objects
- Captioning - Natural language descriptions
- OCR & document understanding
Setup
Install the Moondream SDK:
```bash
npm install moondream
```
Get an API key from Moondream Cloud Console (free tier available).
```
MOONDREAM_API_KEY=your_api_key_here
```
Pattern 1: Image Upload Trigger
User uploads an image → Vision analysis → Agent takes action.
Use case: Content moderation agent
```typescript
// app/api/upload-image/route.ts
import { NextRequest, NextResponse } from 'next/server';
import moondream from 'moondream';
import { contentModerationAgent } from '@/mastra/agents/content-moderation';

const vision = moondream.vl({ apiKey: process.env.MOONDREAM_API_KEY! });

export async function POST(request: NextRequest) {
  const formData = await request.formData();
  const image = formData.get('image') as File;

  if (!image) {
    return NextResponse.json({ error: 'No image provided' }, { status: 400 });
  }

  // Convert image to buffer
  const buffer = Buffer.from(await image.arrayBuffer());

  // Use Moondream to analyze the image
  const visionResult = await vision.query(
    buffer,
    'Does this image contain any inappropriate content, violence, or explicit material? Answer yes or no, then explain.'
  );

  // If vision detects issues, trigger the moderation agent
  if (visionResult.answer.toLowerCase().startsWith('yes')) {
    await contentModerationAgent.generate(`
      Image flagged by vision system:
      Analysis: ${visionResult.answer}
      Take appropriate action: flag, notify moderators, or auto-remove.
    `);

    return NextResponse.json({
      success: true,
      flagged: true,
      reason: visionResult.answer,
    });
  }

  return NextResponse.json({
    success: true,
    flagged: false,
    message: 'Image approved',
  });
}
```
Pattern 2: Real-Time Object Detection
Continuously monitor images for specific objects.
Use case: Warehouse inventory agent
```typescript
// lib/vision-triggers/inventory-monitor.ts
import moondream from 'moondream';
import { inventoryAgent } from '@/mastra/agents/inventory';

const vision = moondream.vl({ apiKey: process.env.MOONDREAM_API_KEY! });

export class InventoryMonitor {
  async processShelfImage(imageBuffer: Buffer, shelfId: string) {
    // Step 1: Ask what products are visible
    const queryResult = await vision.query(
      imageBuffer,
      'List all product types visible on this shelf as a comma-separated list.'
    );

    const products = queryResult.answer
      .split(',')
      .map(p => p.trim())
      .filter(p => p.length > 0);

    // Step 2: Detect each product and count
    const detections = await Promise.all(
      products.map(async (product) => {
        const detectResult = await vision.detect(imageBuffer, product);
        return {
          product,
          count: detectResult.objects?.length || 0,
          locations: detectResult.objects,
        };
      })
    );

    // Step 3: Trigger inventory agent if stock is low
    const lowStock = detections.filter(d => d.count < 3);

    if (lowStock.length > 0) {
      await inventoryAgent.generate(`
        Low stock detected on shelf ${shelfId}:
        ${lowStock.map(item => `- ${item.product}: ${item.count} remaining`).join('\n')}
        Trigger restock workflow and notify warehouse manager.
      `);
    }

    return detections;
  }
}
```
Usage with scheduled monitoring:
```typescript
// app/api/cron/monitor-inventory/route.ts
import { NextResponse } from 'next/server';
import { InventoryMonitor } from '@/lib/vision-triggers/inventory-monitor';
import { fetchShelfImages } from '@/lib/warehouse-cameras';

export async function GET() {
  const monitor = new InventoryMonitor();
  const shelfImages = await fetchShelfImages(); // From warehouse camera system

  const results = await Promise.all(
    shelfImages.map(({ shelfId, image }) =>
      monitor.processShelfImage(image, shelfId)
    )
  );

  return NextResponse.json({ success: true, results });
}
```
Pattern 3: Scene Change Detection
Trigger when visual scene changes significantly.
Use case: Security monitoring agent
```typescript
// lib/vision-triggers/security-monitor.ts
import moondream from 'moondream';
import { securityAgent } from '@/mastra/agents/security';

const vision = moondream.vl({ apiKey: process.env.MOONDREAM_API_KEY! });

export class SecurityMonitor {
  private lastSceneDescription: string | null = null;

  async monitorCamera(imageBuffer: Buffer, cameraId: string) {
    // Get current scene description
    const captionResult = await vision.caption(imageBuffer);
    const currentScene = captionResult.caption;

    // Check for people in restricted area
    const peopleCheck = await vision.detect(imageBuffer, 'person');
    const peopleDetected = peopleCheck.objects && peopleCheck.objects.length > 0;

    if (peopleDetected) {
      // Get detailed analysis
      const analysis = await vision.query(
        imageBuffer,
        'Describe the people in this image: How many? What are they doing? Are they wearing uniforms or badges?'
      );

      // Trigger security agent
      await securityAgent.generate(`
        ALERT: Person detected in restricted area
        Camera: ${cameraId}
        Time: ${new Date().toISOString()}
        People count: ${peopleCheck.objects?.length}
        Analysis: ${analysis.answer}
        Scene: ${currentScene}
        Determine threat level and take appropriate action.
      `);
    }

    // Detect significant scene changes
    if (
      this.lastSceneDescription &&
      this.hasSignificantChange(this.lastSceneDescription, currentScene)
    ) {
      await securityAgent.generate(`
        Scene change detected:
        Camera: ${cameraId}
        Previous: ${this.lastSceneDescription}
        Current: ${currentScene}
        Investigate and log incident.
      `);
    }

    this.lastSceneDescription = currentScene;

    return {
      peopleDetected,
      peopleCount: peopleCheck.objects?.length || 0,
      scene: currentScene,
    };
  }

  private hasSignificantChange(prev: string, current: string): boolean {
    // Simple word overlap detection
    const prevWords = new Set(prev.toLowerCase().split(/\s+/));
    const currentWords = new Set(current.toLowerCase().split(/\s+/));
    const overlap = [...prevWords].filter(w => currentWords.has(w)).length;
    const similarity = overlap / Math.max(prevWords.size, currentWords.size);
    return similarity < 0.5; // Less than 50% overlap = significant change
  }
}
```
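The word-overlap check in `hasSignificantChange` is worth pulling out as a pure function, so the 0.5 threshold can be tuned against real captions and unit-tested in isolation (standalone sketch, name is ours):

```typescript
// Word-overlap similarity between two scene descriptions:
// shared unique words divided by the larger vocabulary. Returns 0..1.
export function sceneSimilarity(prev: string, current: string): number {
  const prevWords = new Set(prev.toLowerCase().split(/\s+/).filter(Boolean));
  const currWords = new Set(current.toLowerCase().split(/\s+/).filter(Boolean));
  if (prevWords.size === 0 || currWords.size === 0) return 0;

  const overlap = [...prevWords].filter(w => currWords.has(w)).length;
  return overlap / Math.max(prevWords.size, currWords.size);
}
```

A change then triggers when `sceneSimilarity(prev, current) < threshold`; word overlap is crude (it ignores synonyms and word order), so embeddings are the natural next step if false positives become a problem.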
Pattern 4: Document Processing
Extract structured data from images of documents.
Use case: Receipt processing agent
```typescript
// lib/vision-triggers/receipt-processor.ts
import moondream from 'moondream';
import { expenseAgent } from '@/mastra/agents/expense';

const vision = moondream.vl({ apiKey: process.env.MOONDREAM_API_KEY! });

export async function processReceipt(imageBuffer: Buffer, userId: string) {
  // Extract key information using vision
  const merchantQuery = await vision.query(
    imageBuffer,
    'What is the merchant or store name on this receipt?'
  );
  const totalQuery = await vision.query(
    imageBuffer,
    'What is the total amount on this receipt? Include currency symbol.'
  );
  const dateQuery = await vision.query(
    imageBuffer,
    'What is the date on this receipt?'
  );
  const itemsQuery = await vision.query(
    imageBuffer,
    'List all the purchased items on this receipt as a comma-separated list.'
  );

  // Trigger expense agent with extracted data
  const result = await expenseAgent.generate(`
    New receipt uploaded by user ${userId}:
    Merchant: ${merchantQuery.answer}
    Total: ${totalQuery.answer}
    Date: ${dateQuery.answer}
    Items: ${itemsQuery.answer}

    1. Validate this expense against company policy
    2. Categorize the expense
    3. Create expense report entry
    4. Check if receipt is duplicate
    5. Flag if approval needed
  `);

  return {
    merchant: merchantQuery.answer,
    total: totalQuery.answer,
    date: dateQuery.answer,
    items: itemsQuery.answer,
    agentAction: result.text,
  };
}
```
Pattern 5: Combining Vision with Other Triggers
Vision + Scheduled = Powerful monitoring.
```typescript
// app/api/cron/daily-site-check/route.ts
import { NextResponse } from 'next/server';
import puppeteer from 'puppeteer';
import moondream from 'moondream';
import { monitoringAgent } from '@/mastra/agents/monitoring';

const vision = moondream.vl({ apiKey: process.env.MOONDREAM_API_KEY! });

export async function GET() {
  // Take screenshot of your production site
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://yourapp.com');
  const screenshot = await page.screenshot({ fullPage: true });
  await browser.close();

  // Use vision to check if site looks normal
  const analysis = await vision.query(
    screenshot,
    'Does this website appear to be displaying correctly? Are there any error messages, missing content, or broken layouts?'
  );

  if (
    analysis.answer.toLowerCase().includes('error') ||
    analysis.answer.toLowerCase().includes('broken')
  ) {
    await monitoringAgent.generate(`
      ALERT: Visual regression detected on production site
      Analysis: ${analysis.answer}
      1. Check application logs
      2. Notify on-call engineer
      3. Take site offline if critical
    `);

    // Also send immediate alert
    return NextResponse.json({
      success: true,
      alert: true,
      issue: analysis.answer,
    });
  }

  return NextResponse.json({
    success: true,
    alert: false,
    message: 'Site appears healthy',
  });
}
```
Vision Trigger Best Practices
1. Optimize Image Size
```typescript
import sharp from 'sharp';

async function optimizeForVision(buffer: Buffer): Promise<Buffer> {
  return sharp(buffer)
    .resize(1024, 1024, { fit: 'inside' }) // Max dimension
    .jpeg({ quality: 80 })
    .toBuffer();
}
```
2. Cache Vision Results
```typescript
import { createHash } from 'crypto';

const visionCache = new Map<string, any>();

async function analyzeWithCache(imageBuffer: Buffer, query: string) {
  // Key on image bytes + query so the same question about the same
  // image never hits the API twice
  const hash = createHash('sha256')
    .update(imageBuffer)
    .update(query)
    .digest('hex');

  if (visionCache.has(hash)) {
    return visionCache.get(hash);
  }

  const result = await vision.query(imageBuffer, query);
  visionCache.set(hash, result);
  return result;
}
```
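One wrinkle: the `Map` above never evicts, so a long-running process will leak memory as images accumulate. A TTL-based variant is a small, dependency-free upgrade (sketch only; a proper LRU library or Redis is the production answer):

```typescript
// Cache entries expire ttlMs after insertion; expired entries are
// removed lazily on the next read.
export class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

Swapping `visionCache` for a `TtlCache` bounds how stale a cached vision answer can get, which matters when cameras re-capture the same scene.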
3. Handle Vision Failures Gracefully
```typescript
async function safeVisionQuery(
  imageBuffer: Buffer,
  query: string,
  fallback: string = 'Unable to analyze image'
) {
  try {
    const result = await vision.query(imageBuffer, query);
    return result.answer;
  } catch (error) {
    console.error('Vision analysis failed:', error);
    return fallback;
  }
}
```
4. Rate Limiting
```typescript
import PQueue from 'p-queue';

const visionQueue = new PQueue({
  concurrency: 5,  // Max 5 concurrent vision requests
  interval: 1000,  // Per second
  intervalCap: 10, // Max 10 requests per interval
});

async function queuedVisionAnalysis(imageBuffer: Buffer, query: string) {
  return visionQueue.add(() => vision.query(imageBuffer, query));
}
```
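If you'd rather avoid the `p-queue` dependency, a basic concurrency limiter is small enough to write by hand. This sketch caps in-flight requests only; it does not implement `p-queue`'s per-interval rate cap.

```typescript
// Returns a function that runs async tasks with at most `concurrency`
// of them in flight at once; extra tasks wait in FIFO order.
export function createLimiter(concurrency: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= concurrency) {
      // Park until a running task frees a slot
      await new Promise<void>(resolve => waiting.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      waiting.shift()?.(); // wake the next waiter, if any
    }
  };
}
```

Usage mirrors the `p-queue` version: `const limit = createLimiter(5);` then `limit(() => vision.query(imageBuffer, query))`.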
Real-World Use Cases
1. E-commerce: Product Quality Control
- Suppliers upload product photos
- Vision detects defects, wrong colors, missing labels
- Agent automatically accepts/rejects or flags for human review
2. Healthcare: Patient Monitoring
- Camera monitors patient in recovery room
- Vision detects if patient is in distress or fallen
- Agent alerts nurses with urgency level
3. Smart Home: Presence Detection
- Doorbell camera detects package delivery
- Vision confirms package on doorstep
- Agent notifies homeowner and logs delivery time
4. Finance: Document Verification
- User uploads ID for KYC compliance
- Vision extracts data and verifies authenticity
- Agent checks against database and approves/flags
5. Accessibility: Scene Description
- Visually impaired user takes photo
- Vision generates detailed description
- Agent reads aloud via text-to-speech
💡 Deep Dive: This introduction to vision triggers sets the foundation. For complete production-ready vision agent architectures with full working examples, see Chapter 13: Pattern 5 - Vision-Based Agents.
Testing Triggers Locally
Mock Webhooks
```typescript
// test/mock-webhook.ts
export async function triggerWebhook(event: any) {
  const response = await fetch('http://localhost:3000/api/webhooks/test', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-webhook-signature': generateTestSignature(event),
    },
    body: JSON.stringify(event),
  });
  return response.json();
}

// Usage in tests
await triggerWebhook({
  type: 'job.applied',
  userId: 'test-user',
  jobId: 'test-job',
});
```
Test Scheduled Jobs
```typescript
import { jobAlertAgent } from '@/mastra/agents/job-alert';

describe('Scheduled job alerts', () => {
  it('should generate daily digest', async () => {
    const result = await jobAlertAgent.generate('Daily digest');
    expect(result.text).toContain('jobs');
  });
});
```
Simulate Events
```typescript
// test/event-simulator.ts
export class EventSimulator {
  static async simulateJobApplication(userId: string, jobId: string) {
    return await fetch('http://localhost:3000/api/webhooks/job-application', {
      method: 'POST',
      body: JSON.stringify({ type: 'job.applied', userId, jobId }),
    });
  }

  static async simulateRSSUpdate(items: any[]) {
    // Mock RSS feed
  }
}
```
Production Trigger Patterns
Idempotency
```typescript
import { NextResponse } from 'next/server';

// In-memory only — resets on restart. Use Redis or a DB table in production.
const processed = new Set<string>();

export async function POST(request: Request) {
  const event = await request.json();

  if (processed.has(event.id)) {
    return NextResponse.json({ duplicate: true });
  }

  await processEvent(event);
  processed.add(event.id);

  return NextResponse.json({ success: true });
}
```
Retry Logic
```typescript
const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

async function processWithRetry(event: any, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await jobAlertAgent.generate(`Process: ${JSON.stringify(event)}`);
    } catch (error) {
      if (attempt === maxRetries) throw error;
      await sleep(1000 * 2 ** (attempt - 1)); // Exponential backoff: 1s, 2s, 4s...
    }
  }
}
```
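The retry delay is worth factoring into its own helper. Adding "full jitter" (a random delay up to the exponential ceiling) spreads retries out when many events fail at the same moment — otherwise every failed trigger retries in lockstep. Sketch with defaults of our choosing:

```typescript
// Exponential backoff with full jitter: a random delay in
// [0, baseMs * 2^(attempt - 1)), capped at maxMs.
export function backoffDelay(
  attempt: number,
  baseMs = 1000,
  maxMs = 30_000
): number {
  const ceiling = Math.min(maxMs, baseMs * 2 ** (attempt - 1));
  return Math.floor(Math.random() * ceiling);
}
```

Inside the retry loop this becomes `await sleep(backoffDelay(attempt));`.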
Dead Letter Queue
```typescript
import { Queue } from 'bullmq';

const dlq = new Queue('failed-triggers');

try {
  await processEvent(event);
} catch (error) {
  await dlq.add('failed', {
    event,
    error: (error as Error).message,
    timestamp: new Date(),
  });
}
```
Monitoring
```typescript
import cron from 'node-cron';
import { logger } from '@/lib/logger';

cron.schedule('0 8 * * *', async () => {
  const start = Date.now();
  try {
    await jobAlertAgent.generate('Daily digest');
    logger.info('Job alert succeeded', { duration: Date.now() - start });
  } catch (error) {
    logger.error('Job alert failed', { error, duration: Date.now() - start });
  }
});
```
Key Takeaways
- Choose the right trigger type - Manual for testing, scheduled for regular tasks, events for real-time, vision for image-based workflows
- Vision triggers are powerful - Give agents eyes for security, inventory, document processing, and more
- Voice and UI triggers - Make agents accessible and interactive
- Secure webhooks - Always verify signatures
- Handle failures - Idempotency, retries, monitoring
- Test locally - Mock events before production
- Monitor in production - Track success/failure rates
What's Next
You now understand all five trigger types. Next chapters cover:
- Chapter 8: Building Agentic UIs with Cedar - Create interfaces that work with all trigger types
- Chapter 9-12: Core Patterns - Event processing, monitoring, and scheduled analysis
- Chapter 13: Pattern 5 - Vision-Based Agents - Deep dive into production vision agent architectures