# How I Conquered the Brimble Task: Insights and Strategies

Onatade Abdulmajeed

## Table of Contents

- Introduction
- Understanding the Brimble Task
- Deployment Strategy and Debugging Workflow
- System Architecture and Core Components
- Concrete Implementation Details
- Request–Response Flow
- Reflecting on the Journey
- Conclusion
- Additional Resources

> "You don't need to know everything to start, you just need the courage to continue." – Onatade Abdulmajeed

## Introduction

I took the Brimble Challenge as a junior full-stack developer, with the help of my mentor and brother. At first, I thought I couldn't do it because it was more complicated than the tasks I had taken on in the past. I had less than three days to complete it, and there was also an issue with the electricity supply in my neighborhood. For me, this task was a chance to push myself and explore technologies like Railpack and Caddy. It pushed me to move beyond just writing code and into handling real production concerns. It wasn't only about finishing the task; it was about proving to myself that I can handle complex systems and debug under pressure.

The Brimble task involves building a simple one-page platform that can deploy applications from a Git repository and stream logs live. The backend is responsible for cloning the repo, building the application, and running it. The entire system should work together and be runnable with a single `docker compose up` command.

In this article, I'll share how I tackled the Brimble task, how I overcame the challenges along the way (including the errors I faced and the solutions I found), and a breakdown of how the core components of my application work.

## Understanding the Brimble Task

The Brimble task introduces a few hard requirements that go beyond standard application development:

- **Runs end-to-end with `docker compose up`:** the whole system must come up with a single `docker compose up` on a clean machine.
- **Live log streaming:** build and deploy logs must stream to the UI in real time over SSE or WebSocket.
- **Routing traffic:** Caddy must be configured to handle multiple deployments under unique paths without conflicts.

Taking on the Brimble task was more than just showing I could write code. It tested my ability to adapt, solve problems, and think like an engineer dealing with production systems. It challenged me to pick up unfamiliar tools, debug issues in a production-like environment, and make sure the different parts of the system work end to end.

## Deployment Strategy and Debugging Workflow

I started by working on the backend and frontend separately, making sure each part ran smoothly on its own. Once both the UI and API were working locally, I tried connecting them into a single deployment pipeline. That's when I started getting errors. Once I entered the deployment phase, several issues surfaced, mostly around configuration and the build process. At that point, I had to shift from just "building features" to debugging the system step by step.

The first issues appeared during the frontend setup, particularly around routing and TypeScript configuration.

### Missing generated route file

```
Cannot find module './routeTree.gen'
```

This happened because I introduced TanStack Router without setting up its file generation process. The router expects a generated route tree file, which didn't exist in my project. Instead of adding more complexity just to fix this, I removed the router entirely and simplified the setup. Since the task only required a single-page interface, this made the application cleaner and easier to manage.

### TypeScript configuration conflict

```
server.ts is not under rootDir
```

This issue came from the frontend trying to compile backend files. My TypeScript configuration wasn't properly scoped, so it ended up including files outside the frontend directory. I fixed this by restricting the configuration to only include the `src` folder and explicitly excluding the backend. This separation made the project structure more stable and predictable.
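To make that fix concrete, here is roughly what a scoped frontend `tsconfig.json` can look like. This is a minimal sketch rather than my exact configuration, and the `src` and `server.ts` paths are illustrative:

```jsonc
{
  "compilerOptions": {
    "rootDir": "src",   // keep compilation rooted in the frontend sources
    "outDir": "dist",
    "strict": true
  },
  "include": ["src"],                                 // compile only the frontend
  "exclude": ["node_modules", "dist", "server.ts"]    // keep backend files out of this build
}
```

With the `include` narrowed to `src`, the frontend compiler no longer tries to pull in files that live outside its own root.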
After fixing the frontend, the next set of problems appeared during the deployment pipeline, specifically at the build stage.

### Build failure during the tar step

```
TypeError: callback only supported with file option
```

This error came from how I was using the `tar` package. I was relying on a callback-based approach that is no longer supported in newer versions. Instead of trying to patch the implementation, I stepped back and reconsidered the approach. Since Docker can build directly from a project directory, I removed the tar step completely and switched to a direct `docker build` command. This simplified the pipeline and removed an unnecessary layer of complexity.

### Docker build failing without clear error output

```
Command failed: docker build ...
```

At this stage, the build was still failing, but the logs weren't helpful. I was only logging a generic error message, which hid the actual issue. To fix this, I updated the logging system to include the full output from the build process. Once I could see the detailed error messages, it became much easier to identify and fix the real problems.

### Duplicate deployment execution

While testing, I noticed that some deployments were being triggered multiple times. The logs showed repeated steps (cloning, building, etc.), which indicated that the pipeline was running more than once for a single action. This turned out to be a frontend issue: multiple requests were being sent, likely due to repeated clicks. I resolved it by adding a simple control to prevent duplicate submissions. Small sketches of both fixes follow below.
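Going back to the build stage: here is a minimal sketch of what invoking `docker build` directly and capturing its full output can look like. It is not the exact code from my project; the `appendLog` helper and the image/path names are illustrative stand-ins for the real log pipeline.

```typescript
import { spawn } from "node:child_process";

// Stand-in for the real log sink, which feeds the live log stream shown in the UI.
const appendLog = (deploymentId: string, line: string) =>
  console.log(`[${deploymentId}] ${line}`);

// Build an image straight from the cloned repository directory (no tar step),
// forwarding every chunk of stdout AND stderr so real build errors are visible.
function buildImage(deploymentId: string, repoDir: string, imageTag: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const build = spawn("docker", ["build", "-t", imageTag, "."], { cwd: repoDir });

    build.stdout.on("data", (chunk) => appendLog(deploymentId, chunk.toString()));
    build.stderr.on("data", (chunk) => appendLog(deploymentId, chunk.toString()));

    build.on("error", reject); // e.g. the docker binary is missing
    build.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`docker build exited with code ${code}`))
    );
  });
}

// Example: buildImage("abc123", "/tmp/clones/abc123", "deploy-abc123:latest");
```

Streaming stderr as well as stdout is what surfaces the real build errors that a generic `Command failed` message hides.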
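And for the duplicate deployments, the frontend guard can be as small as an in-flight flag around the deploy request. Again just a sketch, with an assumed `POST /deploy` payload shape:

```typescript
let deployInFlight = false;

// Called from the deploy button's click handler; ignores clicks while a
// deployment request is already in progress.
async function submitDeployment(repoUrl: string): Promise<void> {
  if (deployInFlight) return; // drop duplicate clicks
  deployInFlight = true;
  try {
    const res = await fetch("/deploy", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ repoUrl }),
    });
    if (!res.ok) throw new Error(`Deploy request failed: ${res.status}`);
  } finally {
    deployInFlight = false; // re-enable once the request settles
  }
}
```

Disabling the button while `deployInFlight` is true gives the user the same feedback visually.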
At the beginning, I approached problems by trying quick fixes based on assumptions. As more issues appeared, that approach stopped working, and I shifted to a more structured method:

- Relying on logs instead of guesswork
- Understanding each error before attempting a fix
- Simplifying parts of the system instead of over-engineering
- Fixing issues step by step rather than all at once

By resolving the configuration issues, simplifying the build process, and improving how errors were logged, I was able to stabilize the deployment pipeline up to the container build stage. More importantly, this process helped me understand how systems behave outside of local development, and why structured debugging and clear configuration are critical when working with deployment pipelines.

## System Architecture and Core Components

The system is a mini deployment pipeline made up of four main components: the frontend, the backend, a container runtime (Docker), and Caddy as the reverse proxy.

The frontend is a single-page interface where users can submit a Git repository and trigger a deployment. It also displays deployment status and streams logs in real time so users can see what is happening during the process.

The backend handles the core logic of the system. When a request is received from the frontend, it clones the repository, builds the application into a Docker image, and starts a container from that image.

Docker is used to run applications in isolated environments. This ensures that each deployment runs consistently, regardless of differences between development and production environments.

Caddy acts as a reverse proxy and serves as the entry point to all deployed applications. It routes requests to the correct running container, making each deployed app accessible through a single entry point.

In the deployment pipeline, these components are connected in a clear flow: the frontend sends a deployment request to the backend, the backend processes the request and uses Docker to build and run the application, and logs from each stage are streamed back to the frontend. Once the application is running, Caddy routes traffic to it, allowing users to access the deployed service. The overall flow can be summarized as UI → backend → Docker → Caddy → user, with logs streaming back to the UI at every stage.

The backend is the coordinator: it accepts deployment requests, validates input, enqueues build jobs, invokes Docker to build and run containers, streams logs to the UI, and persists deployment state.

**Router and endpoints**

- `POST /deploy`: triggers a new deployment
- `GET /status/:id`: fetches deployment status
- `GET /logs/:id`: streams build/runtime logs

**Controllers**

- Validate incoming deployment requests
- Queue build jobs
- Return immediate deployment status

**Worker** (responsible for the actual execution of deployment tasks)

- Clones Git repositories
- Runs `docker build`
- Starts containers via `docker run`
- Handles retries and timeouts
- Streams logs during execution

**Persistence**

- Deployment ID
- Status (pending, building, running, failed)
- Log references
- Stored in a database or file-based system

**Log streamer**

- Uses SSE (Server-Sent Events) or WebSockets
- Streams build and runtime logs to the frontend
- Enables live deployment monitoring

## Concrete Implementation Details

**Worker execution model**

- In-process worker, separate service, or queue-based system (e.g., a Redis queue)
- Queue-based systems improve scalability and fault tolerance

**Environment variables and secrets**

- Passed into Docker build/run via `.env` files, `-e` flags, or a secret store
- Sensitive values must never be logged

**Common issues**

- File permission errors in volumes
- Docker permission issues
- Port conflicts between containers
- Secret leakage via logs

### Routing (Caddy reverse proxy)

Caddy is the single entry point for external traffic. It reverse-proxies requests to running containers and manages TLS automatically.

**Caddyfile structure**

- Site blocks per domain
- `reverse_proxy` for routing
- `handle_path` for URL rewriting

**Example Caddyfile**

```
example.com {
    handle_path /apps/{id}/* {
        reverse_proxy 127.0.0.1:{container_port}
    }
}
```

**Route pattern used:** `/apps/{id}/`

**Route registration**

The backend updates the Caddy configuration dynamically, either by reloading Caddy after updating the config file or by using the Caddy Admin API for live updates (sketched after the list below).

**Common routing errors**

- Invalid Caddyfile syntax
- Path collisions between deployments
- Missing `handle_path` causing broken asset routing
- Forgotten reload after a config update
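To illustrate the Admin API option for route registration, here is a rough sketch of appending a per-deployment route by POSTing to Caddy's admin endpoint. The server name `srv0` and the port handling are assumptions; the exact JSON shape depends on how the base Caddy config is structured.

```typescript
// Registers /apps/<id>/* -> 127.0.0.1:<port> via the Caddy Admin API
// (listening on localhost:2019 by default). POSTing to a ".../routes" path
// appends a new route object to that array.
async function registerRoute(deploymentId: string, containerPort: number): Promise<void> {
  const route = {
    "@id": `deploy-${deploymentId}`, // lets us address or remove this route later
    match: [{ path: [`/apps/${deploymentId}/*`] }],
    handle: [
      // strip the /apps/<id> prefix, similar to handle_path in the Caddyfile
      { handler: "rewrite", strip_path_prefix: `/apps/${deploymentId}` },
      { handler: "reverse_proxy", upstreams: [{ dial: `127.0.0.1:${containerPort}` }] },
    ],
  };

  const res = await fetch("http://localhost:2019/config/apps/http/servers/srv0/routes", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(route),
  });
  if (!res.ok) {
    throw new Error(`Caddy route registration failed: ${res.status} ${await res.text()}`);
  }
}
```

Tagging the route with `@id` means it can later be removed with a `DELETE` against the admin API's `/id/deploy-<id>` path when the deployment is torn down.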
### Middleware and logging

Middleware ensures security, validation, and observability before requests reach the controllers.

**Auth middleware**

- Validates authentication tokens
- Injects user context into the request object

**Validation middleware**

- Validates request payloads for `POST /deploy`
- Ensures the repository URL and build config are correct

**Logging middleware**

- Attaches metadata to each request: request ID, user ID, route, timestamp

**Error middleware**

- Centralized error handler
- Returns consistent JSON error responses
- Prevents stack traces from leaking to the frontend

**Token format**

- Typically a JWT (Bearer token in the `Authorization` header)
- Verified at the middleware level before controller execution

**Logging fields**

- `requestId`, `route`, `method`, `userId`, `startTime` / `endTime`, `statusCode`

**Error handling UX**

The frontend receives structured errors:

```json
{ "error": "Deployment failed", "code": "BUILD_ERROR" }
```

## Request–Response Flow

**High-level sequence**

1. Frontend calls `POST /deploy`
2. Backend validates the request
3. Job is queued for the worker
4. Worker clones the repo and builds the Docker image
5. Worker runs the container
6. Caddy route is registered
7. Deployment status is updated
8. Logs stream to the frontend in real time

**Concrete details**

- **Log streaming:** SSE or WebSockets; the frontend subscribes using the deployment ID
- **Timeouts and retries:** a build timeout (10–15 minutes) and a limited number of retry attempts for failures
- **Deployment readiness:** a deployment is only marked "ready" when the container is running and its route is registered in Caddy

The UI triggers a deployment request, the backend orchestrates building and running Docker containers, Caddy exposes them via dynamic routing, and logs stream back to the frontend in real time.
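As an example of the log-streaming half of this flow, here is a minimal SSE endpoint sketch. It assumes an Express backend and a hypothetical in-memory `logEmitter` keyed by deployment ID; my actual implementation differs, but the shape of the response is the important part.

```typescript
import express from "express";
import { EventEmitter } from "node:events";

const app = express();

// Hypothetical per-deployment log bus; the worker emits log lines onto it.
const logEmitter = new EventEmitter();

app.get("/logs/:id", (req, res) => {
  // SSE needs these headers and a response that stays open.
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  res.flushHeaders();

  const send = (line: string) => res.write(`data: ${line}\n\n`);
  logEmitter.on(req.params.id, send);

  // Stop forwarding once the client disconnects.
  req.on("close", () => logEmitter.off(req.params.id, send));
});

app.listen(3000);
```

On the frontend, `new EventSource("/logs/" + deploymentId)` with an `onmessage` handler is enough to append each line to the log view as it arrives.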
## Reflecting on the Journey

Skills and knowledge I acquired from the Brimble take-home task:

- **Technical skills:** improved containerization (Docker image builds, volume and permission handling), reverse-proxy configuration (Caddy routing and rewrites), and real-time log streaming (SSE/WebSocket).
- **Operational skills:** designing a deployment pipeline, writing idempotent route registration, and adding health checks and retries.
- **Debugging and observability:** reading multi-layer logs, tracing request/job IDs across services, and isolating failures with incremental tests.
- **Process skills:** documenting environment expectations, creating reproducible steps for fixes, and using peer reviews to catch config mistakes early.

Impact on personal and professional growth:

- **Confidence:** completing the Brimble task made me more comfortable owning full-stack deployment problems and troubleshooting production-like failures.
- **Efficiency:** I now approach deployments with checklists and small, testable changes, which reduces downtime and wasted effort.
- **Career value:** the experience strengthened my systems thinking and made me better at coordinating cross-cutting concerns, skills that translate directly to larger infrastructure and DevOps roles.
- **Mindset shift:** I moved from "fix the immediate error" to "improve the pipeline," which yields longer-term gains and fewer repeat incidents.

Encouragement for others facing similar challenges:

- **Start small and iterate:** break the pipeline into testable steps; validate each layer before chaining them together.
- **Make failures visible:** stream logs early and add health checks so problems surface where you can act on them.
- **Document assumptions:** keep a versioned `.env.example`, a short runbook for common fixes, and a debugging journal.
- **Ask for quick reviews:** a second pair of eyes on Caddyfiles, Dockerfiles, or env configs often prevents hours of troubleshooting.
- **Persist and learn:** every failure teaches a concrete improvement; capture it, apply it, and move forward.

## Conclusion

This project taught me that deployments are their own engineering challenge: prioritize explicit configuration, early log streaming, health checks, and small, reversible changes to make the pipeline reliable. Treat failures as feedback: instrument handoffs, trace contextual IDs, and apply systems thinking so fixes reduce repeat incidents and improve long-term stability.

Share a challenge you faced and how you solved it in the comments; your experience could save someone hours of debugging.

## Additional Resources

- [Brimble careers page](https://www.brimble.io/careers/fullstack-infra-engineer): overview of the role and hiring context for the task.
- [Project repository](https://github.com/Spider1201/deployment-system): source code, configuration, and deployment scripts for the system.
- [Video demo](https://youtu.be/dEDjUQ5iZow?si=clLN7kKQr1prFSmQ): short walkthrough showing the deployment flow and UI log streaming.
- Docker documentation and best practices for image builds and volume permissions.
- Caddy docs for reverse proxying, `handle_path`, and dynamic config approaches.
- Guides on SSE/WebSocket patterns for real-time log streaming and observability.

Subscribe for more practical tips on deployments, debugging, and systems engineering. If this article helped, share it or leave a comment with your own challenge and one lesson learned.