Event Debouncing with Logic Apps and Azure Table Storage
Forwarding every webhook event directly to a downstream API is a recipe for throttling, duplicate processing, and out-of-order writes. This post walks through a simple pattern — three Logic Apps and one table — that buffers events and processes only the latest state per entity.

A source system fires events on every create or update. The same entity can be updated dozens of times in minutes. You don't need to process every intermediate state — only the final one. If an entity is updated 10 times in 30 minutes, only the most recent state matters.

```
Source System Webhook
        │
        ▼
rcv-events (HTTP trigger)
        │  upsert each event → EventBuffer table
        ▼
Azure Table Storage: EventBuffer
        │  PartitionKey: "relation-events"
        │  RowKey: entityId
        │  Status: "Pending"
        ▼
prc-events (Timer: every 5 min)
        │  query Pending rows older than X min → dispatch each
        ▼
prc-process-single-event
        │  mark Processing → fetch fresh from source → call downstream
        │  delete on success / reset to Pending on failure
        ▼
Downstream API
```

rcv-events accepts a batch of events via HTTP and upserts each one into the buffer table. No queue, no broker — the HTTP trigger is the ingress. Each row looks like this:

```json
{
  "PartitionKey": "relation-events",
  "RowKey": "<entityId>",
  "Event": "updated",
  "EntityType": "Record",
  "Status": "Pending",
  "ReceivedAt": "2026-04-20T14:30:00Z"
}
```

RowKey = entityId is the key insight. No matter how many events arrive for the same entity, there is always exactly one row. The tenth update overwrites the ninth. Deduplication is a schema decision, not code.

prc-events runs on a timer (every 5 minutes) and queries rows where `Status eq 'Pending' and LastUpdated <= utcNow() - X minutes`. The time window is your debounce threshold — nothing gets processed until the burst settles.
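To make the dedup-by-RowKey and debounce-window behavior concrete, here is a minimal in-memory sketch of the buffer table. The `EventBuffer` class and its method names are illustrative stand-ins, not the Azure SDK; the partition key and five-minute window match the setup above.

```python
from datetime import datetime, timedelta, timezone

class EventBuffer:
    """In-memory stand-in for the EventBuffer table: one row per (PartitionKey, RowKey)."""

    def __init__(self):
        self.rows = {}

    def upsert(self, entity_id, event, entity_type, now):
        # RowKey = entityId: a second upsert for the same entity overwrites
        # the first, so deduplication falls out of the key schema.
        self.rows[("relation-events", entity_id)] = {
            "PartitionKey": "relation-events",
            "RowKey": entity_id,
            "Event": event,
            "EntityType": entity_type,
            "Status": "Pending",
            "LastUpdated": now,
        }

    def due(self, now, window=timedelta(minutes=5)):
        # Equivalent of: Status eq 'Pending' and LastUpdated <= utcNow() - X
        return [row for row in self.rows.values()
                if row["Status"] == "Pending" and row["LastUpdated"] <= now - window]
```

Because every upsert refreshes `LastUpdated`, a burst of updates keeps pushing the row's due time forward — exactly the debounce you want.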
For each pending row, prc-process-single-event:

1. Marks the row Processing — prevents double-processing if the timer fires again mid-run
2. Fetches the current state from the source system — never trusts the buffered payload, which may already be stale
3. Calls the downstream API with fresh data
4. Deletes the row on success / resets it to Pending on failure

This gives at-least-once delivery with automatic retry — no custom infrastructure needed.

```
Pending → Processing → [deleted]
              │
              └──(on failure)──→ Pending
```

Three states, one field. Fully visible in Azure Storage Explorer during an incident.

Why this works:

- Deduplication for free — one row per entity, always the latest
- No ordering concerns — you fetch fresh data at processing time, so intermediate states are irrelevant
- Protects downstream — one API call per entity per window, not one per event
- Operationally transparent — query the table, see exactly what's pending or stuck
- No broker needed at low-to-moderate scale (~16 events/hour is well within range)

Consider adding Service Bus only if you need strict ordering, dead-lettering, or multiple consumers on the same stream.

No Service Bus. No custom retry logic. No ordering guarantees needed. Just a table, a timer, and one row per entity.
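The four steps above can be sketched as a single function over a plain dict of rows. This is an illustrative stand-in, not the Logic App itself: `fetch_fresh` and `call_downstream` stub the HTTP actions, and the function name is assumed.

```python
def process_single_event(rows, key, fetch_fresh, call_downstream):
    """One buffered row: Pending → Processing → deleted, or back to Pending."""
    row = rows[key]
    row["Status"] = "Processing"             # guard against the timer firing again mid-run
    try:
        fresh = fetch_fresh(row["RowKey"])   # never trust the buffered payload
        call_downstream(fresh)
        del rows[key]                        # success: the row disappears
    except Exception:
        row["Status"] = "Pending"            # failure: picked up on the next timer run
```

The retry loop is implicit: a failed row simply becomes Pending again and is re-dispatched by the next timer run, which is where the at-least-once guarantee comes from.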
