
AI News Hub

Your Blockchain Can't Tell What's an AI

DEV Community
NOVAInetwork

Imagine an AI agent submits a transaction to a typical chain. What does the chain actually see? It sees a 20-byte address, or maybe a 32-byte one if you're on a hashed-address chain. It sees the transaction calldata, a signature, and a fee. It does not see the word "AI" anywhere. The address could be a wallet on a phone. It could be a bot script. It could be a smart contract, but the chain doesn't know that for sure either, because contracts and EOAs share the address space.

This is the identity problem. The chain has no native concept of an AI, so it cannot apply different rules to one. It cannot say "AI agents pay a different fee floor," or "this kind of message is only valid if it came from a registered model," or "this entity has a 100-object memory cap." All of that has to be invented at a higher layer, usually inside a contract, and the chain itself stays neutral. Neutral sounds nice. In practice it means every project reinvents the same identity primitives, slightly differently, with slightly different security properties.

Walk through what an AI-agent project usually has to build inside a contract:

- A registry mapping addresses to agent metadata.
- A balance tracker so the agent can pay fees from a budget rather than the user's wallet.
- A nonce per agent for replay protection.
- A capability flag system to gate what the agent is allowed to call.
- An audit log of every action the agent took.
- Per-agent quotas for storage, calls per block, or whatever the application needs.

None of this is exotic. All of it is identity infrastructure. The chain does not provide it, so each project bolts it on top. The result is that two AI agents from two different projects have different definitions of "agent," different ways of tracking activity, and different audit semantics. There is no protocol-level answer to "what is an AI doing on this chain right now?"

Three questions a chain should be able to answer about an AI:

1. Is this address an AI?
2. What is it allowed to do?
3. What has it done?

On most chains, the answer to all three is "I don't know, ask the indexer." That is a bad answer when fees, governance, and consensus might want to gate behavior on the result.

I built NOVAI so the answer is "yes, here's the entity record." Every AI on the chain is registered as an `AiEntity`. The struct lives in `crates/ai_entities/src/lib.rs`. The fields that matter for identity:

```rust
pub struct AiEntity {
    pub id: AiEntityId,             // 32 bytes, deterministic
    pub code_hash: CodeHash,        // hash of code or weights
    pub creator: Address,           // who registered it
    pub pubkey: [u8; 32],           // entity's ed25519 key
    pub economic_balance: u128,     // entity's own balance
    pub nonce: u64,                 // entity's tx nonce
    pub capabilities: Capabilities,
    pub autonomy_mode: AutonomyMode,
    pub is_active: bool,
    // ...
}
```

The `id` is computed as `blake3("NOVAI_AI_ENTITY_ID_V1" || code_hash || creator)`. Same code and same creator produce the same id. Different creators always get different ids, even when running the same model. There is no name service, no off-chain registry, no central authority. The id is a function of two facts.

The entity has its own ed25519 keypair. The entity signs its own transactions. The entity pays fees from its own balance. The address derived from the entity's public key is reverse-indexed to the entity record, so when a transaction arrives, the dispatcher knows whether the sender is an AI before it routes the call.

That last sentence is the whole point. Here is the function that does it, from `crates/execution/src/lib.rs`:

```rust
pub fn check_ai_entity_sender<K>(
    db: &K,
    tx: &TxV1,
) -> Result<Option<AiEntity>, ExecError> {
    let Some(entity) = lookup_ai_entity_by_address(db, &tx.from)? else {
        return Ok(None);
    };
    if !entity.is_active {
        return Err(ExecError::EntityNotActive);
    }
    let tx_type = tx.payload.first().copied()
        .ok_or(ExecError::UnknownPayloadVersion { version: 0 })?;
    match tx_type {
        TRANSFER_PAYLOAD_V1 => Ok(Some(entity)),
        SIGNAL_COMMITMENT_PAYLOAD_V1 => {
            if entity.has_capability("emit_proposals") {
                Ok(Some(entity))
            } else {
                Err(ExecError::IssuerMissingCapability)
            }
        }
        CREATE_MEMORY_OBJECT_PAYLOAD_V1
        | UPDATE_MEMORY_OBJECT_PAYLOAD_V1
        | DELETE_MEMORY_OBJECT_PAYLOAD_V1 => {
            if entity.has_capability("read_memory_objects") {
                Ok(Some(entity))
            } else {
                Err(ExecError::IssuerMissingCapability)
            }
        }
        _ => Err(ExecError::IssuerMissingCapability),
    }
}
```

This is one function. It runs before every transaction. It returns `Some(entity)` if the sender is a registered AI, `None` if it is a normal account, and an error if the AI is trying to do something it is not allowed to do. That is the answer to all three questions:

- **Is this address an AI?** Look it up. The lookup either resolves to an entity record or it does not.
- **What is it allowed to do?** Read the `capabilities` bitfield and the `autonomy_mode`. The dispatcher enforces them.
- **What has it done?** Every signal the entity emits is indexed by issuer and height. Every memory object it owns is indexed by type. The chain stores all of this natively.

Once the chain has a typed answer to "is this an AI," a lot of things become straightforward:

- **Per-AI fee policy.** A future governance change could set a different minimum fee for AI-issued signal commitments. The dispatcher already branches on tx type and entity status. The hook is there.
- **Per-AI quotas.** Every entity is capped at 100 memory objects and 64 KiB per object. These are protocol constants, not contract logic. They apply uniformly.
- **Capability gates.** A Gated-mode entity can request execution of a Tier 1 or Tier 2 action, but only through approval gates (Multisig, Threshold, or TimelockOnly). The capability flag and the gate type live in the entity record. The chain checks both.
- **Native audit.** A wallet, an explorer, or another bot can ask "what has this entity done in the last N blocks?" The answer is a query, not an indexer ETL.
- **Governance over AI behavior.** A governance proposal can deactivate an entity by flipping `is_active` to false. The dispatcher rejects every subsequent tx from that entity at the type-system level. There is no contract-level kill switch you have to remember to wire up.

I want to be honest about the limit. The chain knows that an entity with `code_hash` H is registered, and it can verify that the entity signed a transaction with the matching key. It does not know that the entity's actual computation matches the code at that hash. That is a separate problem, and it is the role of the Autonomous autonomy mode (currently reserved) and ZK-proof verification. For now, the trust model is this: the chain knows who the entity claims to be, what it is allowed to do, and what it has done. The chain does not know that the entity is faithful to its declared code. That gap exists on every AI-on-chain project I have looked at. NOVAI is structured to close it through ZK proofs in a later phase, but the chain has to know what an AI is first.

If you are coming from blockchain: you get a way to reason about AI agents that is finite, typed, and indexable. The dispatcher tells you what an AI can do. The state tells you what it has done.

If you are coming from AI: you get a substrate where your agent has its own identity, its own balance, its own memory, and its own audit trail. You do not have to rebuild any of that. You build the agent.

Repo: github.com/0x-devc/NOVAI-node. The architecture doc walks through every crate. The first-AI-entity tutorial registers an entity in about ten minutes. Twitter: @NOVAInetwork
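To make the deterministic-id idea concrete, here is a minimal sketch of the derivation property described above: the id is a domain-tagged hash of `code_hash` and `creator`, so same inputs always yield the same id and a different creator always yields a different id. This sketch uses std's `DefaultHasher` as a stand-in for blake3 (the real chain uses blake3 and a 32-byte id), and the function name `derive_entity_id` is illustrative, not from the NOVAI codebase:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the real derivation:
/// id = blake3("NOVAI_AI_ENTITY_ID_V1" || code_hash || creator)
/// Here DefaultHasher replaces blake3 and the id is truncated to u64.
fn derive_entity_id(code_hash: &[u8; 32], creator: &[u8; 20]) -> u64 {
    let mut h = DefaultHasher::new();
    b"NOVAI_AI_ENTITY_ID_V1".hash(&mut h); // domain tag prevents cross-protocol collisions
    code_hash.hash(&mut h);
    creator.hash(&mut h);
    h.finish()
}

fn main() {
    let code = [0xAAu8; 32]; // hash of the model's code or weights
    let alice = [0x01u8; 20];
    let bob = [0x02u8; 20];

    // Same code + same creator => same id. No registry lookup needed.
    assert_eq!(derive_entity_id(&code, &alice), derive_entity_id(&code, &alice));

    // Same code, different creator => different id, even for the same model.
    assert_ne!(derive_entity_id(&code, &alice), derive_entity_id(&code, &bob));
}
```

The design consequence is that registration is idempotent per (code, creator) pair: re-registering the same model under the same key cannot mint a second identity.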
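The dispatcher's `has_capability("emit_proposals")` calls suggest a small named-flag lookup over the `capabilities` bitfield in the entity record. Here is a hypothetical sketch of that shape; the bit positions, the flag set, and the `u32` representation are all my assumptions for illustration, and the real `Capabilities` type in `crates/ai_entities` may differ:

```rust
/// Hypothetical bit layout -- illustrative only, not NOVAI's actual encoding.
pub struct Capabilities(pub u32);

impl Capabilities {
    /// Map a capability name to a bit and test it. Unknown names are
    /// never granted, which fails closed for capabilities added later.
    pub fn has_capability(&self, name: &str) -> bool {
        let bit = match name {
            "emit_proposals" => 1 << 0,
            "read_memory_objects" => 1 << 1,
            _ => return false,
        };
        self.0 & bit != 0
    }
}

fn main() {
    // An entity granted only memory-object access.
    let caps = Capabilities(1 << 1);
    assert!(caps.has_capability("read_memory_objects"));
    assert!(!caps.has_capability("emit_proposals"));
    assert!(!caps.has_capability("not_a_capability"));
}
```

A bitfield like this keeps the per-entity permission check to one AND plus a compare, which matters when it runs in the dispatch path of every transaction.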