Keycloak Knows. Why Doesn't The Rest Of Your Stack?

DEV Community
Mr. Buch

Here's a situation I've been in more times than I'd like to admit. You set up Keycloak. It works great. Users register, log in, reset passwords — all handled. You move on to building the actual product.

Then three weeks later, someone asks why the CRM doesn't have half the users in it. Or why the billing system is charging people who deleted their accounts six months ago. Or why the welcome email never went out. Because Keycloak knew. Nobody else did.

So you go looking for the clean solution. Maybe you poll the admin API every few minutes? Sure, if you enjoy stale data and hammering your auth server for no reason. Maybe you query Keycloak's database directly? Works great until the next upgrade shuffles the schema and you spend a weekend figuring out why everything broke. Maybe you just... duplicate the registration logic in your backend and keep both in sync manually? I've seen this in production. It's exactly as bad as it sounds.

None of these are good options. They're just different ways to be annoyed later. What I actually wanted was simple: when something happens in Keycloak, POST it to my backend. That's it. I don't want to poll. I don't want to touch the database. I just want an event, a payload, and an endpoint to send it to.

So I built it. Keycloak Webhook is a small Keycloak extension — drop the JAR in, add two fields to your client config, and you start getting HTTP POSTs every time a user does something. Registration, login, logout, password reset, email change, account deletion. Your backend just handles the request and moves on.

The extension registers itself as a Keycloak event listener. When a user event fires, it looks up the webhook config on that client, then hands off the HTTP POST to a background thread. Keycloak doesn't wait. The user doesn't wait. If your endpoint is slow, fine. If it's down, it retries three times with a short backoff (1s, 2s, 3s) and logs what happened. Then life goes on.
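The dispatch behavior described above can be sketched roughly like this. The extension itself is Java; this is a Python stand-in for illustration, and the function names (`post_event`, `fire_and_forget`) are mine, not the extension's:

```python
import threading
import time
import urllib.request

# Retry schedule described above: three attempts with a short backoff between them.
BACKOFFS = [1, 2, 3]

def post_event(url: str, api_key: str, body: bytes) -> None:
    """Try the webhook up to three times, then log and give up."""
    for attempt, delay in enumerate(BACKOFFS, start=1):
        try:
            req = urllib.request.Request(
                url,
                data=body,
                headers={
                    "Content-Type": "application/json",
                    "Authorization": f"Bearer {api_key}",
                },
            )
            with urllib.request.urlopen(req, timeout=10):
                return  # success: nothing more to do
        except Exception as exc:
            print(f"WARN: Webhook request failed (attempt {attempt}/3): {exc}")
            if attempt < len(BACKOFFS):
                time.sleep(delay)
    print("WARN: Max retries exceeded for webhook.")

def fire_and_forget(url: str, api_key: str, body: bytes) -> None:
    # Hand off to a background thread so the login/registration flow never blocks.
    threading.Thread(target=post_event, args=(url, api_key, body), daemon=True).start()
```

The key point is the hand-off: the event listener returns immediately, and all the waiting, failing, and retrying happens off the request path.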
The payload you get looks like this:

```json
{
  "type": "REGISTER",
  "user_id": "6f8df73e-9c42-4f8b-b3a1-c1d9bcb45f0b",
  "user_name": "john_doe",
  "email": "john@example.com",
  "first_name": "John",
  "last_name": "Doe",
  "email_verified": false,
  "created_timestamp": 1715726400000,
  "user_ip": "203.0.113.45",
  "user_agent": "Mozilla/5.0...",
  "delete_by_admin": false,
  "user_roles": ["user"],
  "organizations": [
    {
      "id": "org-123",
      "name": "ACME Corporation",
      "alias": "acme"
    }
  ]
}
```

Supported events: `REGISTER`, `REGISTER_ERROR`, `LOGIN`, `LOGOUT`, `RESET_PASSWORD`, `VERIFY_EMAIL`, `UPDATE_EMAIL`, `DELETE_ACCOUNT`.

`REGISTER_ERROR` is the weird one — registration failed, so there's no user in Keycloak yet, but we still send what we have (email, name from the form, error details). Useful for tracking failed signups or debugging onboarding drop-off.

Build the JAR:

```shell
git clone <repository-url>
cd keycloak-webhook
mvn clean package
```

Mount it into Keycloak:

```shell
docker run -p 8080:8080 \
  -v ./target/keycloak-webhook.jar:/opt/keycloak/providers/keycloak-webhook.jar \
  quay.io/keycloak/keycloak:26.6 start-dev
```

It's a Keycloak SPI — auto-registers on startup, no theme files, no extra config.

Now the step everyone skips: go to Admin console → your realm → Realm Settings → Events, and add `brew-event-webhook` to the Event Listeners field. Save. Do this for every realm you care about. The JAR alone does nothing until you activate it here. I know because I've forgotten this myself.

Then configure the webhook endpoint on whichever client you want. There's no Attributes tab in the UI for this — you'll need the Keycloak Admin API. You can get the client UUID from Admin console → Clients → your client → the URL in your browser. For the token, don't use your admin user credentials.
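On the receiving side, the handler can stay tiny. Here's a minimal, framework-agnostic sketch of what your endpoint does with that payload. The names (`handle_keycloak_event`, `HANDLED_EVENTS`) are mine, and which events you care about is up to you; wire this into whatever HTTP framework you already use:

```python
import json

# Events this backend cares about; everything else is acknowledged and dropped.
HANDLED_EVENTS = {"REGISTER", "UPDATE_EMAIL", "DELETE_ACCOUNT"}

def handle_keycloak_event(raw_body: str, auth_header: str, expected_key: str) -> int:
    """Process one webhook request; return the HTTP status code to respond with."""
    # Reject requests that don't carry the api.key configured on the Keycloak client.
    if auth_header != f"Bearer {expected_key}":
        return 401
    event = json.loads(raw_body)
    if event["type"] not in HANDLED_EVENTS:
        return 204  # not interesting: acknowledge so the sender doesn't retry
    if event["type"] == "REGISTER":
        # e.g. create the CRM contact (REGISTER_ERROR events would have no user_id)
        print(f"sync user {event['user_id']} ({event['email']})")
    return 200
```

Respond fast and do the heavy lifting (CRM calls, emails) after acknowledging, so you stay well inside the sender's patience.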
Instead, create a dedicated client for this:

1. Admin console → Clients → Create client
2. Enable Service account roles (under Capability config)
3. Go to that client → Service accounts roles tab → Assign role → filter by realm-management → add manage-clients

Then get a token from that client:

```shell
curl -X POST \
  "https://your-keycloak/realms/{realm}/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id={your-service-client-id}" \
  -d "client_secret={your-service-client-secret}"
```

Now fetch the full client representation first — the PUT replaces the entire object, so you need the existing data:

```shell
curl -X GET \
  "https://your-keycloak/admin/realms/{realm}/clients/{client-uuid}" \
  -H "Authorization: Bearer {access_token}"
```

Take that JSON, add (or merge) your webhook attributes into it, and PUT it back:

```shell
curl -X PUT \
  "https://your-keycloak/admin/realms/{realm}/clients/{client-uuid}" \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -d '{
    ...existing client JSON...,
    "attributes": {
      ...existing attributes...,
      "api.url": "https://yourapi.com/webhooks/keycloak",
      "api.key": "your-secret-token"
    }
  }'
```

Don't skip the GET step. Sending a partial body to the PUT will wipe out existing client config.

That's the whole setup. Different clients can point to completely different endpoints with different secrets — a web app and mobile app posting to separate backends, each with their own auth key. No global config file, no redeploy.

The full set of client attributes:

| Attribute | Required | Description |
| --- | --- | --- |
| `api.url` | Yes | Your webhook endpoint |
| `api.key` | Yes | Bearer token, sent in the `Authorization` header |
| `disable.autologin` | No | `true` to prevent Keycloak from auto-logging in users after registration |
| `trusted.proxy.count` | No | Number of reverse proxies in front of Keycloak (default: 1). If client IPs are coming out wrong, this is probably why |

What happens if your endpoint is down? Short answer: nothing bad.
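If you script that GET-merge-PUT flow, the merge is the part worth getting right. A small Python sketch of just that step; `merge_webhook_attributes` is a hypothetical helper of mine, not part of the extension or the admin API:

```python
import copy

def merge_webhook_attributes(client_rep: dict, api_url: str, api_key: str) -> dict:
    """Return a full client representation with the webhook attributes merged in.

    client_rep must be the complete JSON from the GET: the admin API's PUT
    replaces the whole object, so we only ever add to it, never start fresh.
    """
    updated = copy.deepcopy(client_rep)           # don't mutate the caller's copy
    attrs = dict(updated.get("attributes") or {})  # keep every existing attribute
    attrs["api.url"] = api_url
    attrs["api.key"] = api_key
    updated["attributes"] = attrs
    return updated
```

PUT the returned object back as-is with the service-account token; because it started life as the full GET response, nothing gets wiped.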
Keycloak keeps running, users keep getting logged in, and you get log lines that look like this:

```
WARN: Webhook request failed (attempt 1/3): 500 Internal Server Error
WARN: Webhook request failed (attempt 2/3): 500 Internal Server Error
WARN: Webhook request failed (attempt 3/3): 500 Internal Server Error
WARN: Max retries exceeded for webhook. Event: REGISTER, User: testuser
```

After three failures, the event is gone. There's no queue, no database, no replay mechanism. This is a deliberate tradeoff — adding durable queuing would mean adding infrastructure, and most people don't need it. For syncing a CRM or sending a welcome email, losing one webhook during a 3am outage is acceptable. If you genuinely can't lose events, pair this with Keycloak's built-in event log as a backup, or replay from the admin API after recovery. But in practice, I've found that the retry behavior covers most real outage scenarios — by the third attempt, you're probably back up.

Keycloak event listeners are synchronous. If I block on the HTTP POST, I block Keycloak — the user stares at a spinner while we wait for your endpoint to respond. That's a bad time. Every webhook runs on a background thread pool instead. Your endpoint can take 10 seconds, throw a 503, or be unreachable. The user already logged in. Keycloak already moved on. This is non-negotiable — it's the whole reason the extension is useful in production.

No payload transformation, no event filtering, no guaranteed delivery, no replay. If you only want `REGISTER` events, filter in your handler. If you need to reshape the payload for your CRM, do it in your backend. The extension does one thing — get events out of Keycloak and into your hands — and it does it without making itself complicated to operate.

Found a bug or want a feature? Open an issue on GitLab.
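One footnote on the replay option mentioned above: Keycloak's stock admin API can serve as the replay source after an outage. A rough Python sketch, assuming the realm's event log is enabled under Realm Settings → Events; the `/admin/realms/{realm}/events` endpoint with `type` and `dateFrom` query parameters is standard Keycloak admin API, but the function name is mine:

```python
import json
import urllib.parse
import urllib.request

def fetch_missed_events(base_url: str, realm: str, token: str,
                        event_type: str, date_from: str) -> list:
    """Pull events from Keycloak's built-in event log, e.g. after a webhook outage.

    date_from uses the admin API's yyyy-MM-dd format. Returns the parsed JSON list.
    """
    query = urllib.parse.urlencode({"type": event_type, "dateFrom": date_from})
    url = f"{base_url}/admin/realms/{realm}/events?{query}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```

Feed the returned events through the same handler your webhook endpoint uses and you've reconciled the gap without any extra infrastructure.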