GPU A16
Entry-level GPU for local inference and AI experimentation.
Private Inference
Run LLMs locally on the A16 with 16 GB of VRAM; your data never leaves the VM.
Ollama Pre-installed
Download and serve models instantly. No setup, no configuration, just inference.
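As a concrete example, here is a minimal sketch of local inference against Ollama's REST API from Python, using only the standard library. It assumes Ollama's default port 11434 and a model already pulled onto the VM; the llama3 name is illustrative:

```python
import json
import urllib.request

# Ollama's local REST API listens on localhost:11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",  # illustrative; use any model pulled with `ollama pull`
    "prompt": "Explain in one sentence what 16 GB of VRAM lets you run.",
    "stream": False,    # return a single JSON object instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The request targets localhost, so prompt and completion stay on the VM.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

Because the whole round trip runs against localhost, nothing crosses the network edge of the VM.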
Full Agent Capabilities
Shell commands, file editing, code execution, and any LLM provider — all in a secure sandbox.
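One way to wire the on-device model into an agent stack is Ollama's OpenAI-compatible endpoint, sketched below with the openai Python package. The base URL and placeholder API key follow Ollama's compatibility convention; the model name is again illustrative:

```python
from openai import OpenAI  # pip install openai

# Any OpenAI-compatible client can target the VM's local Ollama server.
# Ollama ignores the API key, but the client library requires one.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3",  # illustrative; any pulled model works
    messages=[{"role": "user", "content": "Draft a shell command to list open ports."}],
)
print(reply.choices[0].message.content)
```

The same pattern lets an agent swap between the local model and a hosted provider by changing only the base URL.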
Specifications
CPU: 6 vCPU
Memory: 64 GB
Storage: 500 GB NVMe
Workspaces: 3
GPU: A16
VRAM: 16 GB
Most Popular
GPU A16
Entry-level GPU for local inference and AI experimentation.
$449/month
CPU: 6 vCPU
RAM: 64 GB
Disk: 500 GB NVMe
BW: 8 TB Transfer
GPU: A16 — 16 GB VRAM
- Everything in Ultra
- NVIDIA A16 — 16 GB GDDR6
- Ollama pre-installed
- Private on-device inference
- Run 7B-13B parameter models
- 3 workspaces
- Daily backups
- Firewall management (50 rules)
- SSH access