If you have been using Helicone to track LLM costs and traces, you may have noticed that it was acquired by Mintlify in March 2026. Since then, development has effectively stopped: open issues against the self-hosted version go unfixed, and the team's attention is on Mintlify.
If you need a replacement, this post covers how to migrate to Torrix, a self-hosted LLM observability proxy that uses the same proxy-header model as Helicone.
What Torrix does
Torrix is a single Docker container that sits between your app and any LLM provider. Every call is logged to a local SQLite database with the full prompt and response, plus token counts, cost, and latency. Nothing leaves your server.
It supports OpenAI, Anthropic, Gemini, Groq, Mistral, Ollama, DeepSeek, Azure OpenAI, and any other provider that exposes an OpenAI-compatible /v1/chat/completions endpoint.
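Because the proxy speaks the same chat-completions protocol as the providers behind it, a call through Torrix is just an ordinary HTTP request aimed at localhost. Here is a minimal sketch; the port (8788) is an assumption, so use whatever you mapped when starting the container, and auth headers are omitted for brevity:

```python
import json
import urllib.request

# Assumed local Torrix address -- substitute your own host/port mapping.
TORRIX_URL = "http://localhost:8788/v1/chat/completions"

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "ping"}],
}

# Build the request exactly as you would against a provider's API;
# only the hostname differs. Torrix forwards it upstream and logs the
# prompt, response, tokens, cost, and latency to its SQLite database.
req = urllib.request.Request(
    TORRIX_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a running Torrix instance
```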
What changes when migrating from Helicone
Only three things change: the base URL, the auth header name, and where you pass your OpenAI key. Your prompts, models, and messages stay exactly the same.
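The three changes can be sketched as a small config helper for the OpenAI Python SDK. The Torrix base URL, the `Torrix-Auth` header name, and the `TORRIX_API_KEY` env var are assumptions here; check your Torrix deployment for the actual values. The Helicone values shown in the comments are its standard proxy settings:

```python
import os

# Assumed local Torrix address -- adjust to your deployment.
TORRIX_BASE_URL = "http://localhost:8788/v1"

def torrix_client_kwargs() -> dict:
    """Keyword arguments for OpenAI(**...), swapping Helicone's proxy
    settings for Torrix equivalents. Only the base URL and the auth
    header change; the OpenAI key is passed the same way as before."""
    return {
        "api_key": os.environ.get("OPENAI_API_KEY"),  # unchanged
        "base_url": TORRIX_BASE_URL,  # was "https://oai.helicone.ai/v1"
        "default_headers": {
            # was {"Helicone-Auth": "Bearer <helicone-key>"};
            # the "Torrix-Auth" name is an assumption
            "Torrix-Auth": f"Bearer {os.environ.get('TORRIX_API_KEY', '')}",
        },
    }

# client = OpenAI(**torrix_client_kwargs())
# client.chat.completions.create(...)  # prompts and models unchanged
```

Everything inside `chat.completions.create(...)` stays exactly as it was; only the client construction differs.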