Overview
I configured a dedicated Ubuntu 24.04 server on high-performance Alienware hardware and optimized it for remote AI and ML workloads. The goal was to build a machine that could stay online as persistent infrastructure, remain remotely manageable, and avoid the overhead and risk that come with a traditional desktop-style setup.

Security Hardening
The environment was set up with a strict UFW firewall policy and stripped of unneeded services to reduce exposure. I disabled physical TTY consoles and GUI access to minimize the local attack surface, and I used Tailscale SSH for secure remote administration instead of relying on broader public-facing access patterns.
That gave me:
- Tighter control over inbound access
- Cleaner remote management through an encrypted mesh
- A smaller and more intentional operating footprint
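
The hardening steps above can be sketched roughly as follows. This is a minimal outline, not the exact commands from the build; the Tailscale interface name (`tailscale0`) and the specific TTY units masked are assumptions, and the firewall rules shown are the common default-deny pattern rather than the machine's full ruleset.

```shell
# Default-deny inbound traffic; allow only the Tailscale mesh interface
# (assumed to be tailscale0, the usual name).
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow in on tailscale0
sudo ufw enable

# Let Tailscale handle SSH logins over the encrypted mesh.
sudo tailscale up --ssh

# Boot to a text-only target and mask the local console gettys
# to shrink the physical attack surface.
sudo systemctl set-default multi-user.target
sudo systemctl mask getty@tty1.service getty@tty2.service
```

With inbound access limited to the tailnet, no SSH port needs to be exposed to the public internet at all.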

GPU Optimization
Because the box was meant for headless AI workloads, GPU stability mattered just as much as raw performance. I configured NVIDIA Driver 590+ with Persistence Mode enabled and worked through software-rendering issues that can show up in headless environments.
The end result was a more reliable CUDA setup with lower friction for long-running inference or training tasks, even without a traditional monitor-attached workstation flow.
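
Enabling persistence mode keeps the NVIDIA driver loaded even when no client holds the GPU, which avoids initialization latency and flakiness for long-running headless jobs. A sketch of the setup, assuming the driver's persistence daemon is packaged as a systemd service (the usual arrangement on recent driver releases):

```shell
# Preferred on current drivers: run the persistence daemon as a service.
sudo systemctl enable --now nvidia-persistenced

# Legacy equivalent (deprecated in favor of the daemon, but handy for
# toggling the mode directly).
sudo nvidia-smi -pm 1

# Confirm the mode is active on each GPU.
nvidia-smi --query-gpu=persistence_mode --format=csv
```

The query should report `Enabled`; if it reverts after reboot, the daemon service is the piece to check.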

Power Resilience
Availability was another important part of the build. I configured Wake-on-LAN and BIOS-level AC recovery so the machine could recover cleanly after power events and remain usable in remote deployment scenarios.
That meant the server was designed not just to perform well, but to stay reachable and recoverable when it mattered.
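
The Wake-on-LAN side of this can be sketched as below. The interface name and MAC address are placeholders, not values from the actual build, and the BIOS-level AC recovery setting ("restore on AC power loss" or similar) lives in firmware and cannot be scripted from the OS:

```shell
# Arm magic-packet wake on the wired NIC (interface name is an
# assumption; find yours with `ip link`).
sudo ethtool -s enp5s0 wol g

# Verify: the output should include "Wake-on: g".
sudo ethtool enp5s0 | grep Wake-on

# From another machine on the same LAN, wake the server by its
# NIC's MAC address (placeholder shown).
wakeonlan 00:11:22:33:44:55
```

One caveat worth noting: `ethtool` settings typically reset on reboot, so persisting them (for example via a systemd unit, or a systemd-networkd `.link` file with `WakeOnLan=magic`) is usually part of making this reliable.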

Why It Matters
This project sits at the intersection of security, systems administration, networking, and AI infrastructure. It reflects the kind of work I enjoy most: building machines that are practical, hardened, and actually ready for real remote use instead of just looking good on paper.