
GPU A100

High-performance GPU for production inference and fine-tuning.

Private Inference

Run LLMs locally on an A100 with 40 GB VRAM — your data never leaves the VM.

Ollama Pre-installed

Download and serve models instantly. No setup, no configuration, just inference.
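Once connected to the VM, models can be queried through Ollama's local HTTP API, which listens on port 11434 by default. A minimal sketch using only the Python standard library (the model name `llama3` is illustrative; substitute any model you have pulled):

```python
import json
import urllib.request

def generate(prompt, model="llama3", host="http://localhost:11434"):
    """Send a single non-streaming generation request to the local Ollama server."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns a JSON object whose "response" field holds the completion
        return json.loads(resp.read())["response"]
```

Because the server runs on the VM itself, requests and responses stay on localhost and never cross the network boundary.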

Full Agent Capabilities

Shell commands, file editing, code execution, and any LLM provider — all in a secure sandbox.

Specifications

CPU: 6 vCPU
Memory: 60 GB
Storage: 800 GB NVMe
Workspaces: 5
GPU: A100
VRAM: 40 GB

GPU A100


$1249/month
CPU: 6 vCPU
RAM: 60 GB
Disk: 800 GB NVMe
BW: 10 TB Transfer
GPU: A100 — 40 GB VRAM
  • Everything in GPU A16
  • NVIDIA A100 — 40 GB HBM2e
  • Run 30B-70B parameter models
  • Fine-tuning capable
  • 5 workspaces
  • Hourly backups
  • Firewall management (50 rules)
  • SSH access
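As a rough sanity check on the 30B-70B claim: weight memory scales with parameter count times bytes per parameter. This back-of-envelope sketch ignores KV cache and runtime overhead, so treat the results as lower bounds:

```python
def weight_vram_gb(params_billion, bytes_per_param):
    """Approximate VRAM needed for model weights alone, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# FP16 (2 bytes/param): a 70B model needs ~140 GB of weights -- too large for 40 GB
print(weight_vram_gb(70, 2))    # 140.0
# 4-bit quantization (~0.5 bytes/param): ~35 GB, which fits in 40 GB VRAM
print(weight_vram_gb(70, 0.5))  # 35.0
```

In practice, 70B-class models fit on a single 40 GB A100 only when quantized; smaller 30B-class models leave more headroom for context.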
