
I built a centralized data orchestration engine for distributed Minecraft servers — here's how

DEV Community
Mustafa Bingül

This post is about a real architectural problem I ran into while working on a multi-server Minecraft network, and how I solved it by building a dedicated data microservice called Nexus Core.

## The problem

A large Minecraft network runs many game servers at the same time: lobby servers, PvP arenas, survival worlds, minigame instances. Each one is a separate JVM process running a Spigot plugin. The traditional approach is simple: every server connects directly to MongoDB, manages its own in-memory cache, and handles its own data logic. This works fine at small scale. At larger scale, three problems emerge:

- **Connection pool exhaustion.** Each server holds its own pool of MongoDB connections. With 20 servers, you're throwing 20× the connections at your database for no good reason.
- **Cache incoherency.** Server A caches a player's balance. Server B modifies it. Server A's cache is now stale. The player sees incorrect data depending on which server they're on.
- **Duplicated logic.** The same data access code exists in every plugin. Changing how player data is structured means touching 15 codebases.

## The architecture

Nexus Core is a standalone Java application that acts as the single source of truth for all data in the network. Game servers never query MongoDB directly. Instead, they publish a structured packet to a Redis channel, and Nexus Core handles the rest.

```
Spigot #1 ─┐
Spigot #2 ──► Redis (pub/sub) ──► Nexus Core ──► MongoDB
Spigot #3 ─┘
```

This gives us:

- A single, optimized MongoDB connection pool
- A centralized Redis cache that all servers read from
- Data logic defined once per data type, shared everywhere

## The packet protocol

Every request from a game server is a JSON packet published to a Redis channel:

```json
{
  "protocol": 100,
  "source": "pvp-1",
  "type": "GET_DATA",
  "data": {
    "uuid": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```

`protocol` is a numeric ID that identifies which data type is being requested.
`source` is the server's identifier, used to publish the response back to the right channel. `type` is one of `GET`, `SET`, `DELETE`, or `GET_ALL`.

Nexus Core subscribes to the inbound channel, deserializes the packet, and routes it to the right handler in O(1) via a `Map`.

## Defining a data type: the DataAddon

A `DataAddon` is an abstract class that you extend to define a new data type. It maps to exactly one MongoDB collection and one Redis cache namespace. Here's what a minimal addon looks like:

```java
public class PlayerStatsAddon extends DataAddon {

    @Override
    public int addonId() { return 100; }

    @Override
    public String addonName() { return "Player Stats"; }

    @Override
    public String getDatabase() { return "nexus_core_db"; }

    @Override
    public String getCollection() { return "player_stats"; }

    @Override
    public String cacheKeyHeaderTag() { return "stats"; }

    @DbDataModels(isId = true)
    private String uuid;

    @DbDataModels(defaultValue = "0", isId = false)
    private int kills;

    @DbDataModels(defaultValue = "0", isId = false)
    private int deaths;

    @Override
    public boolean handleRequest(String source, RequestType type, NexusJsonDataContainer data) {
        return true; // return false to reject the request
    }
}
```

Nexus Core discovers the schema at boot time via reflection over `@DbDataModels` annotations. It finds which field is the primary key, reads default values for missing fields, and knows how to serialize and deserialize documents without you writing any mapping code.

Registering the addon is one line:

```java
NexusApplication.getInstance().getProtocolHandler().registerAddon(new PlayerStatsAddon());
```

From that point on, any game server can send a packet with `protocol: 100` and get a full player stats document back.
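To make the registration-plus-routing idea concrete, here's a minimal sketch of an O(1) protocol dispatcher backed by a `HashMap`. Note that `ProtocolRegistry`, `RoutingDemo`, and this trimmed-down two-method `DataAddon` are illustrative stand-ins, not the real Nexus Core classes:

```java
import java.util.HashMap;
import java.util.Map;

// Trimmed-down stand-in for the addon base type; only the two
// methods needed to demonstrate routing are included.
abstract class DataAddon {
    public abstract int addonId();
    public abstract String addonName();
}

class PlayerStatsAddon extends DataAddon {
    @Override public int addonId() { return 100; }
    @Override public String addonName() { return "Player Stats"; }
}

// Illustrative O(1) dispatcher: protocol ID -> addon, backed by a HashMap.
class ProtocolRegistry {
    private final Map<Integer, DataAddon> addons = new HashMap<>();

    public void registerAddon(DataAddon addon) {
        // Refuse duplicate protocol IDs instead of silently overwriting.
        DataAddon previous = addons.putIfAbsent(addon.addonId(), addon);
        if (previous != null) {
            throw new IllegalStateException("Protocol " + addon.addonId()
                    + " is already taken by " + previous.addonName());
        }
    }

    // Called once per inbound packet, after deserialization.
    public DataAddon route(int protocol) {
        return addons.get(protocol); // average-case O(1) lookup
    }
}

class RoutingDemo {
    public static void main(String[] args) {
        ProtocolRegistry registry = new ProtocolRegistry();
        registry.registerAddon(new PlayerStatsAddon());
        System.out.println(registry.route(100).addonName()); // prints "Player Stats"
    }
}
```

An unknown protocol ID simply yields `null`, which the dispatcher can turn into an empty response for the requesting server.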
## The life of a GET request

Here's the full flow for a GET request:

1. Game server publishes a packet to Redis
2. Nexus Core receives it and looks up the addon by protocol ID
3. `handleRequest()` is called; if it returns `false`, an empty response is sent back immediately
4. Nexus Core checks Redis for the key `stats:{uuid}`
5. Cache hit → serialize and publish the response
6. Cache miss → query MongoDB, write the result to Redis, publish the response

`SET` writes through to both MongoDB and Redis. `DELETE` removes from both. `GET_ALL` bypasses the cache entirely: caching arbitrary result sets would make invalidation logic very complex.

## Request guarding

This is one of my favorite parts of the design. Before any database operation happens, your addon's `handleRequest()` method is called synchronously. You can inspect the source server, the request type, and the payload, and reject the request outright by returning `false`. This is where authorization lives. For example:

```java
@Override
public boolean handleRequest(String source, RequestType type, NexusJsonDataContainer data) {
    if (type == RequestType.REMOVE_DATA) {
        return source.equals("admin"); // only the admin server can delete
    }
    return true;
}
```

Keep this method fast; it runs synchronously and blocks the request pipeline.

## Live monitoring

Nexus Core ships with a live monitoring UI built entirely in Java Swing with Java2D, no external libraries. It shows real-time JVM CPU load, process RAM usage, the number of objects currently in the Redis cache, and the count of active addons, all rendered as animated donut charts and scrolling line graphs.

## What I'd do differently

Looking back, the main thing missing is test coverage. The addon system and cache invalidation logic are exactly the kind of thing that benefits from solid unit tests, and I didn't write them. That's the next thing I'm adding. I'd also consider replacing the `protocol` integer with a string identifier; numeric IDs work fine, but they require a separate constants file to stay readable, which is an avoidable friction point.
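As a sketch of what those unit tests could target, the cache-aside GET path and the `handleRequest()` guard can be exercised with plain `HashMap`s standing in for Redis and MongoDB. `CacheAsideStore` and this three-value `RequestType` are hypothetical simplifications, not the real Nexus Core API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical simplification of one addon's data path. In-memory maps
// stand in for Redis and MongoDB so the flow is testable without either.
enum RequestType { GET_DATA, SET_DATA, REMOVE_DATA }

class CacheAsideStore {
    private final Map<String, String> redis = new HashMap<>(); // stand-in cache
    private final Map<String, String> mongo = new HashMap<>(); // stand-in database
    private final String cacheTag;

    CacheAsideStore(String cacheTag) {
        this.cacheTag = cacheTag;
    }

    // The guard: runs before any storage access and can veto the request.
    boolean handleRequest(String source, RequestType type) {
        if (type == RequestType.REMOVE_DATA) {
            return source.equals("admin"); // only the admin server may delete
        }
        return true;
    }

    Optional<String> get(String source, String uuid) {
        if (!handleRequest(source, RequestType.GET_DATA)) {
            return Optional.empty();              // rejected -> empty response
        }
        String key = cacheTag + ":" + uuid;
        String cached = redis.get(key);
        if (cached != null) {
            return Optional.of(cached);           // cache hit
        }
        String fromDb = mongo.get(uuid);          // cache miss -> database
        if (fromDb != null) {
            redis.put(key, fromDb);               // warm the cache for next time
        }
        return Optional.ofNullable(fromDb);
    }

    void set(String source, String uuid, String document) {
        if (!handleRequest(source, RequestType.SET_DATA)) return;
        mongo.put(uuid, document);                  // write through to the database...
        redis.put(cacheTag + ":" + uuid, document); // ...and keep the cache coherent
    }

    boolean remove(String source, String uuid) {
        if (!handleRequest(source, RequestType.REMOVE_DATA)) return false;
        mongo.remove(uuid);
        redis.remove(cacheTag + ":" + uuid);
        return true;
    }
}
```

With the stores swapped out like this, the hit/miss behavior and the admin-only delete rule reduce to a handful of plain assertions in a test.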
The source is on GitHub: https://github.com/mustafabinguldev/nexus-core

If you're building anything on distributed Java infrastructure or have thoughts on the addon model, I'd love to hear from you in the comments.