Overview
I host a private system I call HackerAI on my Tailscale network. It is built around OpenWebUI and configured with multiple uncensored models that act like a hacking council for security work.
The goal was to create an internal AI environment that could help me think through problems from multiple angles while staying inside a private, controlled network instead of depending on public chat tools.
Why I Built It
When I am working on authorized penetration testing, exploit development, security auditing, or report writing, one model is usually not enough. Different models are good at different things:
- Some are better at technical reasoning
- Some are stronger at code review or debugging
- Some are better at summarizing findings clearly for reports
I wanted a private setup where I could use them together as a kind of advisory panel instead of treating AI like a single all-purpose assistant.
Architecture
The system is hosted behind my Tailscale network and exposed only through that private mesh. OpenWebUI serves as the front end, giving me a single interface to work with multiple models without exposing the environment to the public internet.

That approach gave me:
- Private access control through Tailscale
- A centralized interface for multiple models
- Better separation between sensitive security work and public AI platforms
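As a rough sketch of that layout (the container image and port mapping come from the OpenWebUI README; the `tailscale serve` invocation is an assumption and its syntax varies between Tailscale versions, so check the docs for yours):

```shell
# Run OpenWebUI bound to localhost only, so nothing listens
# on a public interface.
docker run -d \
  -p 127.0.0.1:3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Publish the local port to the tailnet over HTTPS so that
# only devices on the private Tailscale mesh can reach it.
tailscale serve --bg 3000
```

The key idea is that Docker never binds a public interface; Tailscale is the only path in.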
How I Use It
HackerAI is designed to support the real workflow around security work, not just chat.
It helps with:
- Brainstorming approaches during authorized penetration tests
- Assisting with exploit-development research and debugging
- Reviewing technical findings during audits
- Turning raw notes into cleaner, more structured reports
Instead of relying on one response stream, I can compare model output, pressure-test ideas, and use the system as a collaborative reasoning layer.
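The compare-and-contrast step can be sketched as a small fan-out over an OpenAI-compatible chat endpoint, which is the style of API OpenWebUI exposes. The URL, API key, and model IDs below are placeholders, and the `send` hook is injectable so the logic can be exercised without a live server; this is a minimal sketch, not the actual HackerAI implementation.

```python
"""Fan one prompt out to several models and collect their replies
for side-by-side comparison. Hypothetical endpoint and model names."""
import json
import urllib.request

OPENWEBUI_URL = "http://hackerai.example:3000/api/chat/completions"  # placeholder tailnet host
API_KEY = "sk-local-example"  # placeholder key

MODELS = ["model-a", "model-b", "model-c"]  # hypothetical model IDs


def build_request(model: str, prompt: str) -> dict:
    """Build one OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def fan_out(prompt: str, models=MODELS, send=None) -> dict:
    """Send the same prompt to every model; return {model: reply}.

    `send` takes a payload dict and returns the reply text. The
    default POSTs JSON to OPENWEBUI_URL; tests can pass a stub.
    """
    if send is None:
        def send(payload):
            req = urllib.request.Request(
                OPENWEBUI_URL,
                data=json.dumps(payload).encode(),
                headers={
                    "Content-Type": "application/json",
                    "Authorization": f"Bearer {API_KEY}",
                },
            )
            with urllib.request.urlopen(req) as resp:
                body = json.load(resp)
            # Standard OpenAI-compatible response shape.
            return body["choices"][0]["message"]["content"]
    return {m: send(build_request(m, prompt)) for m in models}
```

Running the same question through `fan_out` and diffing the returned dict is the "advisory panel" pattern in miniature: one prompt, several independent answers, then a human decides.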
Why It Matters
This project reflects a lot of what I care about technically: private infrastructure, practical security workflows, AI orchestration, and building tools that are actually useful in the middle of real work.
It is also a good example of how I think about AI: not as magic, but as something that becomes much more valuable when it is deployed inside a secure system with a clear job to do.