# I Built a Private AI Assistant That Runs 100% Locally – No Cloud, No Subscriptions
Every time I used ChatGPT or similar tools, the same thought crossed my mind: "Where is this conversation going? Who has access to it? What are they doing with my data?"

So I decided to build my own solution. Meet **CrustAI** 🦀 – a fully private, self-hosted AI assistant that runs entirely on your own machine. No cloud. No subscriptions. No data leaving your computer. Ever.

📖 [Documentation](https://documentcrustai.netlify.app) | ⭐ [GitHub](https://github.com/DaveSimoes/CrustAI)

## The Problem

Most AI assistants have one fundamental issue: your data belongs to someone else. When you ask ChatGPT something personal, that conversation is:

- Stored on OpenAI's servers
- Potentially used to train future models
- Subject to their privacy policy (which can change)
- Exposed in the event of a data breach

I wanted an AI assistant that was genuinely mine – one I could use daily through apps I already use (Telegram, WhatsApp) without worrying about privacy.

## What Is CrustAI?

CrustAI is a self-hosted AI assistant powered by Ollama that connects to your favorite messaging platforms and runs completely offline after setup.

- 🔒 **100% Private** – all processing happens on your hardware
- 🧠 **Local LLM** – powered by Ollama (llama3.2, tinyllama, phi3, mistral...)
- 📱 **Multi-platform** – Telegram, WhatsApp, Discord and Slack
- 🧬 **Long-term memory** – remembers facts across conversations
- 🗣️ **Offline voice** – speech-to-text and text-to-speech (pt-BR)
- ⚡ **REST API** – built-in Fastify server for custom integrations
- 🎭 **Custom personality** – fully configurable assistant behavior

## Tech Stack

| Technology | Purpose |
|---|---|
| Node.js | Runtime environment |
| Ollama | Local LLM inference engine |
| node-telegram-bot-api | Telegram integration |
| @whiskeysockets/baileys | WhatsApp integration |
| discord.js | Discord integration |
| @slack/bolt | Slack integration |
| Fastify | REST API server |
| sql.js | Embedded SQLite for memory |
| yaml | Configuration management |

## Architecture

The architecture is straightforward: a central message handler receives messages from any platform and routes them through the same pipeline:

```text
┌────────────────────────────────────────────────────┐
│                    CrustAI Core                    │
│                                                    │
│  ┌──────────┐  ┌──────────┐  ┌──────────────────┐  │
│  │ Telegram │  │ Discord  │  │     WhatsApp     │  │
│  │ Adapter  │  │ Adapter  │  │     Adapter      │  │
│  └────┬─────┘  └────┬─────┘  └────────┬─────────┘  │
│       └─────────────┼─────────────────┘            │
│                     ▼                              │
│            ┌─────────────────┐                     │
│            │ Message Handler │                     │
│            └────────┬────────┘                     │
│         ┌───────────┼───────────┐                  │
│         ▼           ▼           ▼                  │
│   ┌─────────┐  ┌────────┐  ┌──────────┐            │
│   │ Ollama  │  │ Memory │  │ REST API │            │
│   │  (LLM)  │  │ Store  │  │  Server  │            │
│   └─────────┘  └────────┘  └──────────┘            │
└────────────────────────────────────────────────────┘
```

Each platform adapter is independent – you enable only what you need in `config.yml`.
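To make the pipeline concrete, here is a minimal sketch of the adapter pattern described above. This is an illustration, not CrustAI's actual source: the function names (`normalize`, `handleMessage`) and the shape of the dependencies are assumptions, but the idea – every adapter reduces platform messages to one common shape, and a single handler serves them all – is the one the diagram shows.

```javascript
// Hypothetical sketch of the adapter -> handler pipeline (not CrustAI's
// real internals). Each adapter converts a platform-specific event into
// one normalized message shape.
function normalize(platform, userId, text) {
  return { platform, userId, text, receivedAt: Date.now() };
}

// Central handler: the same pipeline runs regardless of which
// adapter produced the message.
async function handleMessage(msg, deps) {
  const facts = deps.memory.recall(msg.userId);           // long-term memory
  const reply = await deps.llm.generate(msg.text, facts); // local LLM call
  return reply;
}

// Example wiring with stubbed dependencies so the sketch runs standalone.
const deps = {
  memory: { recall: () => [] },
  llm: { generate: async (text) => `echo: ${text}` },
};

handleMessage(normalize('telegram', 42, 'hello'), deps)
  .then((reply) => console.log(reply)); // prints "echo: hello"
```

Adding a new platform then only means writing one more adapter that calls `normalize` – the handler, memory, and LLM code never change.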
## Getting Started

### Prerequisites

- Node.js ≥ 20
- Ollama installed
- A Telegram bot token from @BotFather

### Step 1 – Clone the repository

```bash
git clone https://github.com/DaveSimoes/CrustAI.git
cd CrustAI
npm install
```

### Step 2 – Start Ollama and pull a model

```bash
# Start the Ollama server (keep this terminal open)
ollama serve

# In a new terminal, pull a lightweight model
ollama pull tinyllama   # 600MB – works on modest hardware
# or
ollama pull llama3.2    # 2GB – needs ~8GB RAM
```

### Step 3 – Configure

```bash
cp config/config.example.yml config/config.yml
```

Edit `config/config.yml`:

```yaml
model: tinyllama
ollama_url: http://localhost:11434
language: pt-BR

telegram:
  enabled: true
  token: YOUR_BOT_TOKEN_HERE
  allowed_user_ids: []
```

### Step 4 – Run

```bash
npm start
```

Expected output:

```text
✅ Ollama connected (tinyllama)
✅ Memory store ready (./data/memory.db)
✅ Telegram ready
✅ REST API ready (http://localhost:3000)
🦀 CrustAI is ready. Your shell awaits.
```

## Commands

Once running, use these commands directly in Telegram (or any connected platform):

| Command | Description |
|---|---|
| /ping | Check if the bot is alive |
| /help | Show all commands |
| /model | Show which AI model is running |
| /remember | Store a fact in long-term memory |
| /forget | Erase all stored facts |
| /clear | Clear conversation history |

## Privacy by Design

CrustAI was built with privacy as its core principle, not an afterthought:

- ✅ All conversations processed locally – nothing leaves your hardware

## Performance

One concern I had was performance. I'm running CrustAI with tinyllama on a machine with limited RAM, and it handles daily conversations well. For basic Q&A and conversation, tinyllama is surprisingly capable. If you have more resources, llama3.2 or phi3 give significantly better results.

## Roadmap

- [ ] Web UI dashboard
- [ ] Image understanding (multimodal LLMs)
- [ ] Plugin system for custom tools
- [ ] Docker one-click deployment
- [ ] Mobile app companion

## Contributing

CrustAI is open source and I'd love contributions from the community:

- 🐛 **Bug reports** – open an issue
- 💡 **Feature ideas** – let's discuss in issues
- 🔧 **Pull requests** – always welcome
- ⭐ **Star the repo** – helps others discover it!
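One more tip before the links: if the bot ever misbehaves, it helps to query the model directly, bypassing the messaging adapters. All inference goes through Ollama's local HTTP API (the `ollama_url` from `config.yml`), and its `/api/generate` endpoint is Ollama's own interface, not a CrustAI route. The sketch below assumes `ollama serve` is running locally with `tinyllama` pulled; the helper names are mine.

```javascript
// Query Ollama's local HTTP API directly (requires `ollama serve` running
// and the model pulled). Helper names are illustrative.
function buildGenerateRequest(model, prompt) {
  return {
    url: 'http://localhost:11434/api/generate',
    // stream: false asks Ollama for one complete JSON reply
    // instead of a stream of partial tokens.
    body: { model, prompt, stream: false },
  };
}

async function ask(model, prompt) {
  const { url, body } = buildGenerateRequest(model, prompt);
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.response; // the model's answer as a plain string
}

// Uncomment to try it against a running Ollama instance:
// ask('tinyllama', 'Say hello in one word.').then(console.log);
```

If this call works but the bot doesn't respond, the problem is in the platform configuration (tokens, `allowed_user_ids`), not in the model.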
## Links

- **GitHub:** https://github.com/DaveSimoes/CrustAI
- **Documentation:** https://documentcrustai.netlify.app

## Final Thoughts

Building CrustAI taught me a lot about local LLM inference, multi-platform bot development, and the real meaning of privacy-first software.

If you're tired of sending your conversations to the cloud, give CrustAI a try. Your data deserves to stay yours.

Made with 🦀 and ❤️ by Dave Simoes

*If you found this useful, consider starring the repo and sharing with others who care about privacy!*
