Server-Sent Events: A Practical Guide for the Real World

Tiger Abrodi
14 min read

Introduction to SSE

Server-Sent Events (SSE) let servers push real-time updates to clients over a single HTTP connection. Unlike WebSockets, SSE is one-way (server to client) and uses standard HTTP, making it easier to set up and compatible with existing infrastructure.

SSE is great for streaming data from server to client, like:

  • Live notifications and alerts

  • Real-time dashboard updates

  • Streaming AI-generated content

  • Activity feeds

  • Status updates for long-running operations

How SSE Works

Server-Sent Events work through a persistent HTTP connection with a special text/event-stream content type. The server keeps the connection open and sends messages when new data is available.

The basic flow works like this:

  1. Client establishes an SSE connection by making a GET request to a server endpoint

  2. Server responds with Content-Type: text/event-stream header

  3. Server keeps the connection open

  4. When new data is available, server formats it as an event and sends it over the connection

  5. Client receives and processes events as they arrive

The event format follows a simple text-based protocol:

event: eventName
id: eventId
data: Event data goes here

Each event is terminated by a blank line (two consecutive newlines, \n\n). The event, id, and retry fields are optional; the data field carries the payload, and a payload spanning multiple lines is sent as multiple data: lines.
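As a concrete sketch, here's a small TypeScript helper that produces this wire format (the SSEEvent shape is our own convention for this guide, not part of the spec):

```typescript
interface SSEEvent {
  event?: string  // optional event name (clients default to "message")
  id?: string     // optional id, echoed back via Last-Event-ID on reconnect
  retry?: number  // optional reconnection delay hint in milliseconds
  data: string    // the payload
}

function formatSSEEvent({ event, id, retry, data }: SSEEvent): string {
  const lines: string[] = []
  if (event) lines.push(`event: ${event}`)
  if (id) lines.push(`id: ${id}`)
  if (retry !== undefined) lines.push(`retry: ${retry}`)
  // Split the payload so embedded newlines don't terminate the event early;
  // the client reassembles consecutive data: lines into one payload
  for (const line of data.split('\n')) lines.push(`data: ${line}`)
  return lines.join('\n') + '\n\n' // the blank line terminates the event
}
```

Note how a payload containing newlines has to be split into multiple data: lines; writing the raw newline directly would end the event in the middle of your data.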

Backend Implementation with Hono

Let's implement SSE for streaming AI responses using Hono, a lightweight TypeScript web framework:

import { Hono } from 'hono'
// streamText is defined below as a mock; in a real app, import it from your AI service

const app = new Hono()

app.get('/api/stream', async (c) => {
  // Set SSE headers
  c.header('Content-Type', 'text/event-stream')
  c.header('Cache-Control', 'no-cache')
  c.header('Connection', 'keep-alive')

  // Get query parameters
  const prompt = c.req.query('prompt') || 'Hello'

  // Create streaming response
  const stream = new TransformStream()
  const writer = stream.writable.getWriter()
  const encoder = new TextEncoder()

  // Handle client disconnect
  c.req.raw.signal.addEventListener('abort', () => {
    writer.close().catch(() => {}) // Ignore the error if the writer is already closed
  })

  // Start AI text generation in the background
  generateAiResponse(prompt, writer, encoder)

  return c.body(stream.readable)
})

async function generateAiResponse(
  prompt: string,
  writer: WritableStreamDefaultWriter,
  encoder: TextEncoder
) {
  try {
    // Send start event
    const startEvent = `event: start\ndata: {"id":"${Date.now()}"}\n\n`
    writer.write(encoder.encode(startEvent))

    // Stream the AI response
    for await (const chunk of streamText(prompt)) {
      // Format each chunk as an SSE event
      const event = `event: chunk\ndata: ${JSON.stringify({
        text: chunk,
        timestamp: Date.now()
      })}\n\n`

      writer.write(encoder.encode(event))
    }

    // Send completion event
    const endEvent = `event: complete\ndata: {"timestamp":${Date.now()}}\n\n`
    writer.write(encoder.encode(endEvent))
  } catch (error: any) {
    // Send error event
    const errorEvent = `event: error\ndata: ${JSON.stringify({
      error: error.message
    })}\n\n`
    writer.write(encoder.encode(errorEvent))
  } finally {
    writer.close().catch(() => {}) // May already be closed after an abort
  }
}

// Mock AI text generation service
async function* streamText(prompt: string) {
  const response = `This is a simulated AI response to: "${prompt}". It streams word by word.`
  const words = response.split(' ')

  for (const word of words) {
    await new Promise(r => setTimeout(r, 200)) // Simulate processing time
    yield word + ' '
  }
}

export default app

This implementation includes:

  • Proper SSE headers

  • Typed event structure with event names

  • Error handling

  • Client disconnect handling

  • Clean resource management

PS. The proper way with Hono

The code above is for educational purposes; in practice, use Hono's streamSSE helper, which handles the headers and event formatting for you.

import { Hono } from 'hono'
import { streamSSE } from 'hono/streaming'

const app = new Hono()

app.get('/api/stream', async (c) => {
  const prompt = c.req.query('prompt') || 'Hello'

  return streamSSE(c, async (stream) => {
    try {
      // Send start event
      await stream.writeSSE({ event: 'start', data: JSON.stringify({ id: Date.now() }) })

      // Stream AI response
      for await (const chunk of streamText(prompt)) {
        await stream.writeSSE({
          event: 'chunk',
          data: JSON.stringify({ text: chunk, timestamp: Date.now() }),
        })
      }

      // Send completion event
      await stream.writeSSE({ event: 'complete', data: JSON.stringify({ timestamp: Date.now() }) })
    } catch (error: any) {
      // Send error event
      await stream.writeSSE({ event: 'error', data: JSON.stringify({ error: error.message }) })
    }
  })
})

// Mock AI text generation service
async function* streamText(prompt: string) {
  const response = `This is a simulated AI response to: "${prompt}". It streams word by word.`
  const words = response.split(' ')
  for (const word of words) {
    await new Promise((r) => setTimeout(r, 200)) // Simulate processing time
    yield word + ' '
  }
}

export default app

Frontend Implementation with React

Now let's create a React component that connects to our SSE endpoint and displays the streamed AI response:

import React, { useState, useEffect, useRef } from 'react';

// Define component props interface
interface AIStreamProps {
  prompt: string; // The input prompt to process
}

// Define event data structure for TypeScript type safety
interface ChunkEvent {
  text: string;       // Individual text chunk from the stream
  timestamp: number;  // When the chunk was generated
}

const AIStream: React.FC<AIStreamProps> = ({ prompt }) => {
  // State management
  const [content, setContent] = useState<string>('');      // Accumulated stream content
  const [status, setStatus] = useState<                   // Current stream state
    'idle' | 'loading' | 'complete' | 'error'
  >('idle');
  const [error, setError] = useState<string | null>(null); // Any error messages

  // Ref for cleanup on unmount or prompt change
  const eventSourceRef = useRef<EventSource | null>(null); // Reference to EventSource instance

  useEffect(() => {
    // Only run when prompt changes and is not empty
    if (!prompt) return;

    // Reset state for new request
    setContent('');
    setStatus('loading');
    setError(null);

    // Cleanup previous connection if exists
    if (eventSourceRef.current) {
      eventSourceRef.current.close();
    }

    // Create EventSource connection with the prompt URL-encoded.
    // Note: the EventSource constructor does not accept an AbortController
    // signal; calling close() is how the client ends the connection.
    const eventSource = new EventSource(
      `/api/stream?prompt=${encodeURIComponent(prompt)}`
    );
    eventSourceRef.current = eventSource;

    // Event: Connection established
    eventSource.onopen = () => {
      console.log('SSE connection opened');
      setStatus('loading');
    };

    // Event: Named 'chunk' events from the server.
    // onmessage only fires for events without an event: field, so we
    // listen for the custom event name our backend uses instead.
    eventSource.addEventListener('chunk', (e) => {
      try {
        // Parse and accumulate content
        const data: ChunkEvent = JSON.parse(e.data);
        setContent(prev => prev + data.text);
      } catch (err) {
        console.error('Error parsing SSE data:', err);
        setError('Failed to parse server response');
        setStatus('error');
      }
    });

    // Event: Custom completion event from server
    eventSource.addEventListener('complete', () => {
      console.log('Stream completed');
      setStatus('complete');
      eventSource.close(); // Explicitly close connection
    });

    // Event: Error handling
    eventSource.onerror = (e) => {
      // Differentiate between connection states
      switch (eventSource.readyState) {
        case EventSource.CONNECTING:
          console.log('Reconnecting...');
          break;
        case EventSource.CLOSED:
          setError('Connection failed - please try again');
          setStatus('error');
          break;
      }
    };

    // Cleanup function for component unmount
    return () => {
      console.log('Cleaning up SSE connection');
      // Close SSE connection
      eventSource.close();
      // Clear references
      eventSourceRef.current = null;
    };
  }, [prompt]); // Dependency array - runs when prompt changes

  return (
    <div className="ai-stream">
      {/* Status indicator */}
      <div className="status" role="status">
        Status: {status}
      </div>

      {/* Main content area with accessibility features */}
      <div 
        className="content"
        role="region"            // Identify content area
        aria-live="polite"       // Screen reader updates
        aria-busy={status === 'loading'} // Loading state
      >
        {content || (status === 'loading' 
          ? 'Generating response...' 
          : 'Enter a prompt to begin')}
      </div>

      {/* Error display */}
      {error && (
        <div className="error" role="alert">
          Error: {error}
        </div>
      )}
    </div>
  );
};

export default AIStream;

The full request/response flow as a sequence diagram:

sequenceDiagram
    participant User
    participant React as React Frontend
    participant Server as Backend Server
    participant AI as AI Service

    User->>React: Enter prompt
    React->>Server: GET /api/stream?prompt=Hello
    Note over Server: Set SSE headers
    Server->>React: 200 OK with text/event-stream

    Server->>AI: Initialize AI processing
    Server->>React: event: start<br>data: {"id":"123"}

    loop For each generated chunk
        AI-->>Server: Generate text chunk
        Server-->>React: event: chunk<br>data: {"text":"Hello "}
        React-->>User: Update UI with chunk
    end

    AI-->>Server: Generation complete
    Server-->>React: event: complete<br>data: {"timestamp":1234567890}
    React-->>User: Show complete status

    Note over React,Server: Connection closed

Adding a Streaming UI Component

Let's create a simple user interface that allows users to input prompts and see the streaming response:

import React, { useState } from 'react';
import AIStream from './AIStream';

const AIChat: React.FC = () => {
  const [prompt, setPrompt] = useState('');
  const [activePrompt, setActivePrompt] = useState<string | null>(null);

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (prompt.trim()) {
      setActivePrompt(prompt);
    }
  };

  return (
    <div className="ai-chat">
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          placeholder="Ask something..."
        />
        <button type="submit">Send</button>
      </form>

      {activePrompt && <AIStream prompt={activePrompt} />}
    </div>
  );
};

export default AIChat;

Typical flow

A typical flow for AI applications can look like:

  1. User starts on the home page.

  2. User submits a prompt.

  3. Backend creates a resource and potentially does some preparation.

  4. User is redirected to the detail page once the resource is created.

  5. On the detail page, the SSE connection is set up.

  6. When the initial prompt finishes streaming, the SSE connection is closed.

  7. A new connection is set up for each prompt submitted on the detail page.

Best Practices for SSE in Production

1. Event Formatting

Events need a consistent, structured format to ensure reliable processing on the client. A good format helps with ordering, deduplication, and error recovery.

// Good event structure
const event = `event: ${eventType}\ndata: ${JSON.stringify({
  id: eventId, // Unique identifier for deduplication
  content: payload, // The actual data
  timestamp: Date.now(), // For ordering and debugging
  sequence: sequenceNumber, // Essential for correct ordering on reconnection
})}\n\n`;

Always use UTF-8 encoding, as it's required by the SSE specification:

c.header("Content-Type", "text/event-stream; charset=utf-8");

Deduplication explained

Deduplication prevents duplicate processing of events during reconnections. When an SSE connection drops, the browser automatically reconnects and sends the last event ID it received via the "Last-Event-ID" header. The server uses this to determine which events to send, avoiding duplicates.

The EventSource API handles this automatically as long as you:

  1. Include the id: field in your SSE format (not just in JSON data)

  2. Support "Last-Event-ID" header on your server
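On the server side, the catch-up logic boils down to filtering your event log by the last ID the client saw. A minimal sketch, assuming numeric IDs and an in-memory log (the StoredEvent shape is hypothetical):

```typescript
interface StoredEvent {
  id: number       // monotonically increasing event id
  type: string     // SSE event name
  payload: unknown // data payload
}

// Return only the events the client has not yet seen
function eventsSince(log: StoredEvent[], lastEventId?: string): StoredEvent[] {
  if (!lastEventId) return [] // fresh connection: nothing to catch up on
  const lastId = Number(lastEventId)
  return log.filter((e) => e.id > lastId)
}
```

In a real application the log would live in a database or a capped buffer; the key invariant is that IDs are ordered, so a single comparison tells you what the client missed.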

2. Reconnection Strategy

SSE reconnection involves two sides working together:

Client-side (automatic): The EventSource API automatically:

  • Attempts to reconnect when the connection drops

  • Sends the last received event ID in the "Last-Event-ID" HTTP header

  • Skips duplicate events it has already processed

Server-side (you must implement): Your server must:

  • Check for the "Last-Event-ID" header

  • Fetch any events the client missed during disconnection

  • Send those events before starting new ones

// Server implementation for handling reconnections
app.get("/api/stream", async (c) => {
  // EventSource automatically sends this header on reconnection
  const lastEventId = c.req.header("Last-Event-ID");

  if (lastEventId) {
    // You need to implement this part - fetch missed events
    const missedEvents = await db.getEventsSince(lastEventId);

    // Send missed events to catch the client up
    for (const event of missedEvents) {
      const eventData = `id: ${event.id}\nevent: ${
        event.type
      }\ndata: ${JSON.stringify(event.payload)}\n\n`;
      writer.write(encoder.encode(eventData));
    }
  }

  // Then continue with new events...
});

// Optional: Control reconnection timing to prevent reconnection storms
const event = `event: chunk\nretry: 5000\ndata: {...}\n\n`;

// Only needed for complex cases beyond automatic reconnection
// Most apps don't need this custom error handling
eventSource.addEventListener("error", (e) => {
  if (eventSource.readyState === EventSource.CLOSED) {
    // Custom reconnection for special cases (authentication expired, etc.)
    setTimeout(() => {
      const newEventSource = new EventSource("/api/stream");
    }, 3000);
  }
});

Without proper server implementation, automatic reconnection will work but events that occurred during disconnection will be permanently lost.

sequenceDiagram
    participant Client
    participant Server

    Client->>Server: GET /api/stream
    Server->>Client: event: chunk<br>id: 1<br>data: {"text":"First "}
    Server->>Client: event: chunk<br>id: 2<br>data: {"text":"chunk "}

    Note over Client,Server: Connection lost

    Client->>Server: GET /api/stream<br>Last-Event-ID: 2
    Note over Server: Resume from event ID 2
    Server->>Client: event: chunk<br>id: 3<br>data: {"text":"of text "}
    Server->>Client: event: chunk<br>id: 4<br>data: {"text":"continues "}

    Note right of Client: Browser automatically<br>stores last event ID

3. Connection Management

Proper connection handling prevents resource leaks and ensures your application stays in a clean state when connections end. Without proper cleanup, your server can waste memory, CPU, and other resources.

// Backend: Handle client disconnect
c.req.raw.signal.addEventListener("abort", () => {
  // Clean up resources
  writer.close();

  // Cancel any ongoing processing
  if (aiStreamTask) {
    aiStreamTask.cancel("Client disconnected");
  }

  // Release any locks or database connections
  releaseResources(userId);
});

// Frontend: Close connections when component unmounts
useEffect(() => {
  const eventSource = new EventSource("...");

  // Set up event listeners
  eventSource.addEventListener("message", handleMessage);
  eventSource.addEventListener("error", handleError);

  return () => {
    // Clean up event listeners
    eventSource.removeEventListener("message", handleMessage);
    eventSource.removeEventListener("error", handleError);

    // Close the connection
    eventSource.close();
  };
}, []);

Remember that over HTTP/1.1, browsers limit concurrent connections to the same domain (typically 6), so use SSE connections carefully and only when necessary. For applications needing multiple streams, consider serving SSE over HTTP/2, which multiplexes many streams over a single connection, or use different subdomains to work around the limit.
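If you do shard streams across subdomains, picking an endpoint per stream can be as simple as the sketch below (the sse1/sse2/sse3 hostnames are placeholders for DNS entries you would configure yourself):

```typescript
// Spread SSE streams across subdomains so each host stays under the
// per-domain HTTP/1.1 connection cap (hostnames are hypothetical)
const SSE_HOSTS = ['sse1.example.com', 'sse2.example.com', 'sse3.example.com']

function sseEndpoint(streamIndex: number, path: string): string {
  const host = SSE_HOSTS[streamIndex % SSE_HOSTS.length]
  return `https://${host}${path}`
}
```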

Common resource leaks to avoid:

  • Unclosed database connections

  • Running background tasks for disconnected clients

  • Memory allocated for processing user requests

  • Temporary files or buffers created during streaming

4. Error Handling

Different errors require different handling strategies. Classify errors as transient (retry) or fatal (report to user).

// Backend
try {
  // Process request
} catch (error) {
  // Differentiate between error types
  if (isTransientError(error)) {
    // For temporary issues (e.g., temporary service unavailability)
    const retryEvent = `event: error\nretry: 3000\ndata: ${JSON.stringify({
      error: error.message,
      code: error.code,
      recoverable: true,
    })}\n\n`;
    writer.write(encoder.encode(retryEvent));
  } else {
    // For permanent issues (e.g., invalid input, authentication)
    const fatalEvent = `event: error\ndata: ${JSON.stringify({
      error: error.message,
      code: error.code,
      recoverable: false,
    })}\n\n`;
    writer.write(encoder.encode(fatalEvent));
  }
} finally {
  // Always close the writer here, exactly once (a second close() would throw)
  writer.close();
}

// Frontend
// Note: the EventSource API does not expose HTTP status codes, so you
// cannot branch on 429 vs 500 here. Native connection errors arrive as
// plain Events with no data; server-sent `event: error` messages arrive
// as MessageEvents whose data field carries the payload written above.
eventSource.addEventListener("error", (e) => {
  if (!(e instanceof MessageEvent)) {
    // Native connection error: EventSource retries automatically.
    // Implement custom backoff here only if the default isn't enough.
    return;
  }

  // Parse the server-sent error payload
  const errorData = e.data ? JSON.parse(e.data) : { error: "Unknown error" };

  if (errorData.recoverable === false) {
    // Show user-friendly error message for non-recoverable errors
    setError(getUserFriendlyError(errorData.code));
    eventSource.close(); // Don't retry for non-recoverable errors
  }
});
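If you do implement custom reconnection, exponential backoff keeps a fleet of clients from hammering a recovering server all at once. A small sketch (the base delay and cap are arbitrary choices):

```typescript
// Delay doubles with each failed attempt, capped so it never grows unbounded
function backoffDelay(retryCount: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(Math.pow(2, retryCount) * baseMs, maxMs)
}
```

You would pass backoffDelay(retryCount) to setTimeout before creating a new EventSource; adding random jitter on top is a common refinement to spread reconnections out further.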

5. Performance Considerations

SSE connections consume server resources, so manage them carefully:

// Send heartbeats to keep connections alive through proxies
const keepAliveInterval = setInterval(() => {
  const heartbeat = `: heartbeat\n\n`; // Comment line keeps connection alive
  writer.write(encoder.encode(heartbeat));
}, 30000); // Every 30 seconds

// Set a reasonable timeout for idle connections
const timeout = setTimeout(() => {
  const timeoutEvent = `event: timeout\ndata: {"message":"Connection idle"}\n\n`;
  writer.write(encoder.encode(timeoutEvent));
  writer.close();
}, 5 * 60 * 1000); // 5 minutes

// Clear intervals if client disconnects or stream ends
c.req.raw.signal.addEventListener("abort", () => {
  clearTimeout(timeout);
  clearInterval(keepAliveInterval);
});

Note: the heartbeat approach (: heartbeat\n\n) uses a comment line in the SSE protocol. Comment lines (starting with a colon) are ignored by the EventSource API but still keep the connection alive through proxies and load balancers that might otherwise time out idle connections.
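To see why this works, here's a minimal sketch of parsing one SSE event block the way the spec describes: lines starting with a colon are comments and are simply dropped.

```typescript
// Parse a single SSE event block (the text between two blank lines)
function parseSSEBlock(block: string): { event: string; data: string[] } {
  const result = { event: 'message', data: [] as string[] }
  for (const line of block.split('\n')) {
    if (line.startsWith(':')) continue // comment line: ignored by clients
    if (line.startsWith('event: ')) result.event = line.slice(7)
    else if (line.startsWith('data: ')) result.data.push(line.slice(6))
  }
  return result
}
```

A block containing only : heartbeat parses to no data at all, so nothing ever reaches your event handlers; the bytes exist purely to keep intermediaries from timing out the connection.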

For high-scale applications, consider these additional techniques:

  • Connection limiting: Cap maximum connections per user to prevent resource exhaustion

      // Check active connection count before accepting new ones
      if ((await getUserConnectionCount(userId)) >= MAX_CONNECTIONS_PER_USER) {
        return c.json({ error: "Too many connections" }, 429);
      }
    
  • Worker pools for CPU-intensive tasks: Offload processing to prevent blocking the main event loop

      // Using worker threads for AI processing
      const result = await workerPool.runTask({
        type: "process_ai_request",
        payload: userPrompt,
      });
    

PS. A worker pool is essentially a group of worker threads (or processes) that stand ready to handle CPU-intensive tasks. This design pattern helps prevent these heavy computations from blocking your main application thread.

  • Event batching: Group rapid events to reduce overhead

      // Batch multiple updates if they occur within a short window
      let pendingUpdates = [];
      let batchTimeout = null;
    
      function queueUpdate(update) {
        pendingUpdates.push(update);
    
        if (!batchTimeout) {
          batchTimeout = setTimeout(() => {
            const batchEvent = `event: batch\ndata: ${JSON.stringify(
              pendingUpdates
            )}\n\n`;
            writer.write(encoder.encode(batchEvent));
    
            pendingUpdates = [];
            batchTimeout = null;
          }, 100); // 100ms batching window
        }
      }
    

Common Challenges and Solutions

Request Deduplication

If a user refreshes the page, the browser might create a duplicate SSE connection while the previous one is still active. This wastes server resources and can cause unexpected behavior.

// Backend: Track connections by client ID
const connections = new Map();

app.get("/api/stream", async (c) => {
  const clientId = c.req.query("clientId");

  // Close existing connection if any
  if (connections.has(clientId)) {
    const { writer } = connections.get(clientId);
    writer.close();
  }

  // Store new connection
  connections.set(clientId, { writer });

  // Remove on disconnect
  c.req.raw.signal.addEventListener("abort", () => {
    connections.delete(clientId);
  });
});

// Frontend: Generate or retrieve client ID
const clientId = localStorage.getItem("clientId") || crypto.randomUUID();
localStorage.setItem("clientId", clientId);
const eventSource = new EventSource(`/api/stream?clientId=${clientId}`);

Authentication and Authorization

SSE connections can outlive authentication tokens. Implement a token refresh mechanism:

// Backend: Check token validity on each message
async function sendMessage(writer, userId, message) {
  // Verify token is still valid before sending
  if (!(await isTokenValid(userId))) {
    writer.write(
      encoder.encode(`event: authError\ndata: {"message":"Token expired"}\n\n`)
    );
    writer.close();
    return;
  }

  writer.write(encoder.encode(`data: ${JSON.stringify(message)}\n\n`));
}

// Frontend: Handle auth errors
eventSource.addEventListener("authError", (e) => {
  eventSource.close();
  // Redirect to login or refresh token
  refreshToken().then(() => {
    // Reconnect with new token
    createNewEventSource();
  });
});

Conclusion

Server-Sent Events (SSE) are a simple way to stream data from server to client using standard HTTP. Unlike WebSockets, SSE is designed for server-to-client communication.

For AI apps, SSE is great for streaming responses to users in real-time, making the experience more interactive.

By using the tips in this guide and handling errors and reconnections well, you can create reliable streaming apps with SSE.

graph TD
    SSE[Server-Sent Events]

    SSE --> Headers[Headers<br>Content-Type: text/event-stream<br>Cache-Control: no-cache]
    SSE --> Format[Event Format<br>event: type<br>id: 123<br>data: JSON payload]
    SSE --> Features[Key Features]
    SSE --> UseCases[Use Cases]
    SSE --> Challenges[Challenges]

    Features --> Auto[Automatic Reconnection]
    Features --> LastEvent[Last-Event-ID Support]
    Features --> Standard[Standard HTTP/HTTPS]
    Features --> Unidirectional[Server to Client Only]

    UseCases --> AI[AI Response Streaming]
    UseCases --> Notifications[Real-time Notifications]
    UseCases --> Progress[Progress Updates]
    UseCases --> Stock[Financial Data Updates]

    Challenges --> Timeout[Proxy Timeouts]
    Challenges --> Connection[Connection Limits]
    Challenges --> Browser[Browser Compatibility]

    style SSE fill:#f96,stroke:#333,stroke-width:2px
    style Headers fill:#9cf,stroke:#333,stroke-width:1px
    style Format fill:#9cf,stroke:#333,stroke-width:1px
    style Features fill:#9cf,stroke:#333,stroke-width:1px
    style UseCases fill:#9cf,stroke:#333,stroke-width:1px
    style Challenges fill:#9cf,stroke:#333,stroke-width:1px
Written by

Tiger Abrodi

Just a guy who loves to write code and watch anime.