The EU AI Act classified a TypeScript data serialisation library as High Risk. Here is what happened.
On 21 April I audited trpc/trpc, the TypeScript library for building end-to-end type-safe APIs. The score came back at 80: Healthy, with three High findings and a 58% confirmation rate. On 24 April I re-audited with a corrected product description. The score dropped to 47.6: Critical Risk. Three new High findings appeared in the sections evaluated by the AI Governance agent. The reason: tRPC's "transformer" components were classified as High Risk under the EU AI Act.

tRPC has no machine learning components. It does not process model outputs. It does not make AI decisions. The transformer in tRPC's codebase is a data serialisation utility that handles how data is encoded and decoded across the client-server boundary. The word "transformer" is used in its original computer science sense, predating the AI context by decades.

What the three High findings stated

- High, AI Governance: High-risk AI system classification under EU AI Act without declared controls. The codebase is classified as high-risk due to transformer-based data processing, but lacks declared controls for transparency and risk management. Cited to packages/openapi/test/heyapi.test.ts:1–10.
- High, AI Governance: Missing output handling controls for AI data serialisation. Transformer components process serialised data without output validation, violating OWASP LLM05:2025. Cited to packages/openapi/test/heyapi.test.ts:10–15.
- High, AI Governance: EU AI Act High Risk classification: data transformation lacks specific risk mitigation.

Is this finding correct?

The honest answer: it is technically defensible under a literal reading of the EU AI Act framework text, but a human auditor with full context would likely classify it differently. The AI Governance agent evaluated the codebase against the framework text, and the framework defines "AI system" broadly enough that automated evaluation of a codebase containing transformer-named components produces this result.
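To make the distinction concrete, here is a minimal sketch of the kind of "transformer" at issue: a serialize/deserialize pair that lets types such as Date survive the JSON round trip between client and server. The interface loosely mirrors the shape tRPC accepts for its transformer option, but this is a standalone illustration, not tRPC's actual code; there is no model, no inference, no AI anywhere in it.

```typescript
// A data transformer in the pre-ML sense: it changes the *encoding*
// of a value, not its meaning, so rich types survive plain JSON.
interface DataTransformer {
  serialize: (value: unknown) => string;
  deserialize: (text: string) => unknown;
}

// Tag Date values on the way out so they can be revived on the way in.
function encode(value: unknown): unknown {
  if (value instanceof Date) return { __type: "Date", iso: value.toISOString() };
  if (Array.isArray(value)) return value.map(encode);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, encode(v)])
    );
  }
  return value;
}

function decode(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(decode);
  if (value && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    if (obj.__type === "Date" && typeof obj.iso === "string") return new Date(obj.iso);
    return Object.fromEntries(Object.entries(obj).map(([k, v]) => [k, decode(v)]));
  }
  return value;
}

const dateTransformer: DataTransformer = {
  serialize: (value) => JSON.stringify(encode(value)),
  deserialize: (text) => decode(JSON.parse(text)),
};

// Usage: the Date crosses the wire as JSON and comes back as a real Date.
const wire = dateTransformer.serialize({ user: "ada", lastSeen: new Date("2024-04-21T00:00:00Z") });
const restored = dateTransformer.deserialize(wire) as { user: string; lastSeen: Date };
```

This is the whole trick: a pure, deterministic re-encoding of data, which is exactly why a name-based "transformer" flag misclassifies it.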
The LLMs that evaluated each chunk received the EU AI Act risk classification built from the intent model, and they reached consistent conclusions. The tRPC confirmations in the same report tell a different story about the codebase: "No AI/ML Components Detected — EU AI Act Classification: Not Applicable" appeared as a confirmed finding alongside the High Risk classification. Both the confirmation and the violation came from the same analysis; the High Risk finding prevailed in scoring because of severity weighting rules.

This is not a product defect. It illustrates a genuine ambiguity in how AI governance frameworks apply to modern software. The EU AI Act's definitions were written before the transformer architecture made names like "transformer" ambiguous in software. The gap between "this component shares a name with an AI architecture" and "this component is an AI system" requires human interpretation that automated analysis cannot yet consistently provide.

What this means for TypeScript developers

If your TypeScript codebase contains components named transformers, models, agents, pipelines, or inference, an automated AI governance evaluation will flag them for EU AI Act compliance review. That does not mean your codebase is non-compliant. It means your product description must explicitly state which components are AI systems and which are not.

The broader point stands regardless: as AI governance frameworks move from policy documents to enforcement instruments, the boundary between software that falls under them and software that does not will need to be stated explicitly in documentation, not inferred from code structure. IntentGuard surfaces where that documentation is missing. Waitlist at intentguard.dev.
