# OpenAI Euphony: A Browser-Based Viewer for Harmony Conversations and Codex CLI Sessions
OpenAI just shipped Euphony, an Apache 2.0 open source web app for inspecting two specific formats that have been a pain to read in a text editor:

- **Harmony conversations**: the structural wire format gpt-oss models are trained on
- **Codex CLI sessions**: the `rollout-*.jsonl` files auto-created by Codex CLI

There's already a hosted build you can use without installing anything.

## Why a viewer is needed

If you've ever looked at a raw Harmony conversation, you know the pain. The format uses explicit role/channel/stop tokens like `<|start|>`, `<|channel|>`, `<|message|>`, and `<|end|>`: great for training, terrible for human reading (a rendered example appears after the quickstart below). Codex CLI sessions are similar. Every session dumps a JSONL file to `$CODEX_HOME/sessions/YYYY/MM/DD/rollout-*.jsonl`, so if you want to understand what your agent did last Tuesday, you're scrolling through hundreds of lines of structured tool calls and responses.

Euphony fills this gap with a proper timeline UI, filters, and metadata inspection.

## Three ways to load data

- Paste JSON/JSONL from the clipboard
- Drag and drop a local `.json` or `.jsonl` file
- Enter a public HTTP(S) URL (great for Hugging Face datasets)

## Two ways to run it

**Frontend-only (recommended for deployment):**

```bash
export VITE_EUPHONY_FRONTEND_ONLY=true
pnpm run dev
```

All processing happens in the browser, URL fetches are client-side, and translation uses the user's own OpenAI API key, so it's safe to host on GitHub Pages, Cloudflare, etc.

**Backend-assisted (local dev only):**

```bash
uvicorn fastapi-main:app --app-dir server --host 127.0.0.1 --port 8020 --reload
```

This adds a FastAPI server for large remote files, server-side translation, and Harmony rendering. Do not expose it externally: it is an SSRF risk.

## Querying conversations

You can query large datasets right in the UI (a programmatic version of these queries appears after the quickstart below):

| Goal | Query |
| --- | --- |
| Assistant messages only | `messages[?role=='assistant']` |
| Specific tool calls | `messages[?recipient=='browser']` |
| Last 10 messages | `messages \| [-10:]` |

## Token inspection

For Harmony debugging, the token view shows:

- Raw Harmony renderer output
- Token ID arrays
- Decoded token strings
- Display string conversions

Invaluable if you're tracking down tokenizer mismatches.

## Embedding as a Web Component

This is the feature I find most compelling. You can drop Euphony into any web stack via a custom element.

Theme it with CSS variables:

```css
euphony-conversation {
  --euphony-user-color: #4F46E5;
  --euphony-assistant-color: #10B981;
  --euphony-background: #0D1117;
}
```

In React:

```tsx
import './lib/euphony.js';

declare global {
  namespace JSX {
    interface IntrinsicElements {
      'euphony-conversation': { src?: string; data?: string };
    }
  }
}

export function ConversationViewer({ url }: { url: string }) {
  return <euphony-conversation src={url} />;
}
```

In Vue:

```vue
<script setup lang="ts">
import './lib/euphony.js';
defineProps<{ src: string }>();
</script>

<template>
  <euphony-conversation :src="src" />
</template>
```

Remember to add `compilerOptions.isCustomElement` so Vue recognizes the tag; a config sketch follows the quickstart below.

## Getting started

```bash
git clone https://github.com/openai/euphony.git
cd euphony
pnpm install

# Recommended: frontend-only
export VITE_EUPHONY_FRONTEND_ONLY=true
pnpm run dev
```

Open http://localhost:3000/ and drag in any JSONL file.
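For a quick sense of what the raw format looks like, here is a minimal rendered exchange written in the style of the published Harmony cookbook; it's illustrative, not output captured from Euphony:

```
<|start|>user<|message|>What is 2 + 2?<|end|><|start|>assistant<|channel|>final<|message|>2 + 2 = 4.<|return|>
```

Reading a few hundred turns of this in a text editor is exactly the problem the timeline UI solves.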
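The filter expressions in the query table above use JMESPath syntax. Assuming Euphony's query box is plain JMESPath, here is a sketch that replays the same three queries in Node with the `jmespath` npm package; the `rollout-example.jsonl` filename and the `{ messages: [...] }` wrapper are assumptions for the demo:

```ts
import { readFileSync } from 'node:fs';
import { search } from 'jmespath';

// Parse a JSONL session file: one JSON object per non-empty line.
// 'rollout-example.jsonl' is a placeholder path.
const messages = readFileSync('rollout-example.jsonl', 'utf8')
  .split('\n')
  .filter((line) => line.trim().length > 0)
  .map((line) => JSON.parse(line));

// Wrap the records so the expressions from the table apply unchanged.
const session = { messages };

console.log(search(session, "messages[?role=='assistant']"));  // assistant messages only
console.log(search(session, "messages[?recipient=='browser']")); // specific tool calls
console.log(search(session, 'messages | [-10:]'));              // last 10 messages
```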
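And here is the Vue configuration mentioned above. `isCustomElement` is the standard Vue template-compiler option for this; the only Euphony-specific part is the tag name, shown here in a `vite.config.ts` sketch using `@vitejs/plugin-vue`:

```ts
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
  plugins: [
    vue({
      template: {
        compilerOptions: {
          // Treat <euphony-conversation> as a native custom element so the
          // compiler doesn't warn about an unresolved Vue component.
          isCustomElement: (tag) => tag === 'euphony-conversation',
        },
      },
    }),
  ],
});
```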
To embed in your own app, build the library bundle:

```bash
pnpm install
pnpm run build:library
# Output: ./lib/euphony.js
```

## Deployment notes

- Frontend-only mode: safe to deploy to static hosts
- Set `VITE_EUPHONY_FRONTEND_ONLY=true` at build time
- Backend mode: keep it bound to 127.0.0.1, never expose it to the internet
- Translation: pass the user's API key at runtime, never hardcode it

## Use cases

- **Codex CLI debugging**: paste in a `rollout-*.jsonl` session and step through the agent's reasoning
- **Dataset inspection**: point at a Hugging Face URL to audit conversation quality
- **Agent platform UI**: drop the Web Component into your admin dashboard
- **Tokenizer debugging**: use the token inspector to track down Harmony rendering issues

## Links

- Repo
- Hosted app
- Harmony format cookbook
- Harmony renderer
- OpenAI Devs announcement

The fact that OpenAI is open-sourcing its internal inspection tools signals something important: agent workflow debugging is becoming a first-class concern. If you're building on gpt-oss or using Codex CLI, Euphony is worth 10 minutes of your time. And if you're shipping your own agent platform, the Web Components embed means you can have a proper conversation viewer without writing one from scratch.

I'll probably integrate it into my own tooling this week. Curious if anyone else is already using it in production.
