
Idempotency explained — Part 1: basics, idempotency key and Go implementation

DEV Community
Odilon HUGONNOT

The user clicks "Pay". Nothing happens. The network is slow, the spinner keeps spinning. They click again. This time it goes through. Your API received two identical requests 800 ms apart.

Two possible scenarios. Either your customer just got charged twice — and you have a legal problem, a chargeback to handle, and a difficult conversation ahead. Or you thought about idempotency, and the second request is treated as a duplicate: same response, zero additional side effects.

The mathematical definition: an operation f is idempotent if f(f(x)) = f(x). Applying it multiple times gives the same result as applying it once. In practice: pressing the elevator call button five times — the elevator arrives once. That's idempotent. Ordering the same dish five times at a restaurant — you get five plates. That's not idempotent, and neither is your bill.

In HTTP, verbs have precise guarantees:

- GET — idempotent. Calling GET /orders/42 a hundred times changes nothing server-side.
- PUT — idempotent. Updating a resource with the same data multiple times: same result.
- DELETE — idempotent. The first deletion removes the resource; subsequent calls return 404. The final result is the same: the resource no longer exists.
- POST — not idempotent by default. Each call to POST /orders creates a new order.

A commonly misunderstood point: idempotent doesn't mean "no side effects". DELETE /users/42 does delete the user — that's a very concrete side effect. But that effect is stable: after the first call, subsequent ones no longer change the system's state. That's idempotency.

Duplicates arrive in three ways in a distributed system. None of them is rare or theoretical.

The client sends a request. The network times out after 30 seconds. The client automatically retries. Except your server had already received the first request — it was just busy processing it. You now have two executions for the same intent. This is the classic case of a payment SDK that retries three times on network failure.
If your POST /payments endpoint is not idempotent, the customer is potentially charged three times.

The user clicks "Confirm order". The interface gives no immediate feedback. They click again. Two identical requests go out a few hundred milliseconds apart. Two orders created in the database. Customer support is going to love it.

Kafka, RabbitMQ, Stripe or GitHub webhooks guarantee message delivery at least once — not exactly once. If the consumer crashes after processing a message but before acknowledging receipt, the broker resends it. That's a normal protocol guarantee, not a bug. Your consumer must be able to receive the same message twice without creating a duplicate.

The concrete consequences:

- Double bank charge — chargeback, critical incident.
- Confirmation email sent three times — the customer thinks it's a bug or an attack.
- Order created twice — incorrect stock, duplicated delivery, wrong accounting.

The idempotency key is the canonical solution for making a POST endpoint idempotent. Stripe has been using it for years and makes it a prerequisite for payments in production.

The principle: the client generates a UUID v4 for each intent of an operation and sends it in the Idempotency-Key header. The server stores the result of the first processing. For any subsequent request with the same key, it returns the stored result without reprocessing.

```http
POST /payments
Idempotency-Key: 550e8400-e29b-41d4-a716-446655440000
Content-Type: application/json

{"amount": 100, "currency": "EUR", "customer_id": "cus_8Rn2xM"}
```

The complete flow:

- First request with the key: normal processing, result stored.
- Same key, same payload: stored result returned immediately. Zero reprocessing.
- Same key, different payload: 422 error. Client-side inconsistency.
- Different key: new intent, normal processing.

The key is generated client-side, not server-side — the client is the one who knows that "this request is a repeat of the one from 30 seconds ago". The server can't guess that.
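Generating the key client-side requires no external library. A minimal sketch using only the standard library — the `/payments` URL is a placeholder, and `newIdempotencyKey` is an illustrative name, not from the article:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net/http"
	"strings"
)

// newIdempotencyKey builds a random UUID v4 from crypto/rand.
func newIdempotencyKey() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	// Generate the key ONCE per intent, then reuse it on every retry.
	key := newIdempotencyKey()
	body := strings.NewReader(`{"amount": 100, "currency": "EUR"}`)
	req, _ := http.NewRequest(http.MethodPost, "https://api.example.com/payments", body)
	req.Header.Set("Idempotency-Key", key)
	req.Header.Set("Content-Type", "application/json")
	fmt.Println(len(key) == 36, req.Header.Get("Idempotency-Key") == key)
}
```

The important detail is where the key is generated: in the code path that expresses the intent (the "Pay" click handler), not in the retry loop, so every retry carries the same key.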
```go
package middleware

import (
	"bytes"
	"net/http"
	"sync"
	"time"
)

type CachedResult struct {
	StatusCode int
	Body       []byte
	CreatedAt  time.Time
}

// IdempotencyStore — in-memory map, demo only.
// For production, see the next section: PostgreSQL store.
type IdempotencyStore struct {
	mu      sync.RWMutex
	results map[string]CachedResult
}

func NewIdempotencyStore() *IdempotencyStore {
	return &IdempotencyStore{results: make(map[string]CachedResult)}
}

func (s *IdempotencyStore) Get(key string) (CachedResult, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	result, ok := s.results[key]
	return result, ok
}

func (s *IdempotencyStore) Set(key string, result CachedResult) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.results[key] = result
}

// responseRecorder captures the response for storage.
type responseRecorder struct {
	http.ResponseWriter
	code int
	body bytes.Buffer
}

func (r *responseRecorder) WriteHeader(code int) {
	r.code = code
	r.ResponseWriter.WriteHeader(code)
}

func (r *responseRecorder) Write(b []byte) (int, error) {
	r.body.Write(b)
	return r.ResponseWriter.Write(b)
}

func IdempotencyMiddleware(store *IdempotencyStore) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if r.Method != http.MethodPost {
				next.ServeHTTP(w, r)
				return
			}
			key := r.Header.Get("Idempotency-Key")
			if key == "" {
				http.Error(w, `{"error": "Idempotency-Key header required"}`, http.StatusBadRequest)
				return
			}
			// Already cached? Return immediately.
			if cached, ok := store.Get(key); ok {
				w.Header().Set("X-Idempotent-Replayed", "true")
				w.WriteHeader(cached.StatusCode)
				w.Write(cached.Body)
				return
			}
			// First pass: capture and store the response.
			rec := &responseRecorder{ResponseWriter: w, code: http.StatusOK}
			next.ServeHTTP(rec, r)
			store.Set(key, CachedResult{
				StatusCode: rec.code,
				Body:       rec.body.Bytes(),
				CreatedAt:  time.Now(),
			})
		})
	}
}
```

This version has two unacceptable limitations in production:

- Multi-instance: each pod has its own map. A request handled on pod A isn't visible to pod B.
- No TTL: keys accumulate in memory until restart.

The usual solution is Redis. But if your stack doesn't include Redis — which is often the case on projects that want to stay simple — PostgreSQL does exactly the same job.

The idea: a dedicated idempotency_keys table that acts as the shared store. All pods read and write to the same database. PostgreSQL handles atomicity itself.

```sql
CREATE TABLE idempotency_keys (
    key         UUID PRIMARY KEY,
    status      VARCHAR(16) NOT NULL DEFAULT 'pending',  -- pending | completed
    status_code INT,
    body        TEXT,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Automatic cleanup of old keys (Stripe keeps 24h)
-- Run periodically via a job or a background goroutine
DELETE FROM idempotency_keys
WHERE created_at < NOW() - INTERVAL '24 hours';
```

The cleanup of keys older than 24h runs in the background; the middleware wiring itself doesn't change:

```go
handler := middleware.IdempotencyMiddleware(store)(myHandler)
```

For simple cases where the idempotency key can live directly on the business table (payments, orders), no dedicated table is needed. A UNIQUE constraint is enough — it's the most solid layer because it's atomic by construction.
```sql
CREATE TABLE payments (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    idempotency_key UUID UNIQUE NOT NULL,
    customer_id     VARCHAR(64) NOT NULL,
    amount          DECIMAL(10,2) NOT NULL,
    currency        CHAR(3) NOT NULL,
    status          VARCHAR(20) NOT NULL DEFAULT 'pending',
    created_at      TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

INSERT INTO payments (idempotency_key, customer_id, amount, currency)
VALUES ($1, $2, $3, $4)
ON CONFLICT (idempotency_key) DO NOTHING
RETURNING id, status, created_at;
```

```go
package payments

import (
	"context"
	"database/sql"
	"errors"
	"fmt"

	"github.com/jmoiron/sqlx"
)

func CreatePayment(ctx context.Context, db *sqlx.DB, req PaymentRequest) (*Payment, error) {
	var payment Payment
	err := db.GetContext(ctx, &payment, `
		INSERT INTO payments (idempotency_key, customer_id, amount, currency)
		VALUES ($1, $2, $3, $4)
		ON CONFLICT (idempotency_key) DO NOTHING
		RETURNING id, idempotency_key, customer_id, amount, currency, status, created_at
	`, req.IdempotencyKey, req.CustomerID, req.Amount, req.Currency)

	if errors.Is(err, sql.ErrNoRows) {
		// Key already known — return the existing payment.
		err = db.GetContext(ctx, &payment,
			"SELECT * FROM payments WHERE idempotency_key = $1",
			req.IdempotencyKey,
		)
		if err != nil {
			return nil, fmt.Errorf("fetching existing payment: %w", err)
		}
		return &payment, nil
	}
	if err != nil {
		return nil, fmt.Errorf("inserting payment: %w", err)
	}
	return &payment, nil
}
```

Two concurrent requests with the same key: PostgreSQL locks the unique index entry during insertion, so only one INSERT succeeds. No race condition, no Redis, no intermediate state to manage.

To sum up:

- Duplicates come from three places: automatic retries, double-clicks, and at-least-once delivery from brokers.
- The idempotency key is the universal pattern for POST: the client generates a UUID per intent, the server deduplicates.
- The PostgreSQL store replaces Redis: an idempotency_keys table, INSERT ... ON CONFLICT for the atomic lock, a TTL cleanup goroutine. Multi-instance, zero external dependency.
- The UNIQUE constraint in the database is the most solid layer: atomic by construction, no possible race condition.
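The at-least-once delivery case can be sketched the same way: the consumer keeps a set of processed message IDs and skips redeliveries. This is an in-memory illustration under my own names (`Consumer`, `Handle`); a real consumer would persist the IDs, for instance with the UNIQUE-constraint pattern above:

```go
package main

import (
	"fmt"
	"sync"
)

type Consumer struct {
	mu        sync.Mutex
	processed map[string]bool // message IDs already handled
	orders    int
}

// Handle creates at most one order per message ID,
// even if the broker redelivers the message.
func (c *Consumer) Handle(messageID string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.processed[messageID] {
		return // duplicate delivery: no additional side effect
	}
	c.processed[messageID] = true
	c.orders++ // the side effect: create one order
}

func main() {
	c := &Consumer{processed: make(map[string]bool)}
	c.Handle("msg-42")
	c.Handle("msg-42") // broker resent after a missed ack
	fmt.Println(c.orders) // 1, not 2
}
```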
Part 2 goes further: in a CQRS and Event Sourcing architecture, idempotency touches commands, events and aggregates. Optimistic locking, outbox pattern — the four layers that make a distributed system never create a duplicate, even under load.