The GPU shortage is your opportunity

H100s cost $30K+. Cloud GPU waitlists stretch for months. Meanwhile, a MacBook Pro with 96 GB unified memory can run a 70B parameter model while you sip coffee. No CUDA drivers. No cloud bills. No Jensen Huang fan club membership required. Your Mac is already an AI workstation — DevPulse tells you exactly which models fit alongside your dev tools.
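To see why a 70B model fits, here is a back-of-the-envelope sketch in Python. The formula and the 20% overhead factor are illustrative assumptions for this example, not DevPulse's actual estimation logic: quantized weights take roughly `params × bits / 8` bytes, plus headroom for the KV cache and runtime.

```python
# Rough memory estimate for running a quantized LLM locally.
# Assumption (not DevPulse's formula): weights ≈ params * bits / 8,
# plus ~20% overhead for KV cache and runtime buffers.

def estimated_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Approximate RAM needed to run a model at the given quantization."""
    weight_bytes = params_billion * 1e9 * bits / 8
    return weight_bytes * overhead / 1e9  # decimal GB

# A 70B model at 4-bit quantization:
print(round(estimated_gb(70, bits=4)))   # ~42 GB — fits in 96 GB unified memory
# The same model at full 16-bit precision:
print(round(estimated_gb(70, bits=16)))  # ~168 GB — does not fit
```

At 4-bit quantization the weights alone are about 35 GB, so even with generous overhead a 96 GB machine has room left for Chrome, Docker, and the rest of your stack.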

Edge & Tiny (< 5B)

Small (5–10B)

Medium (10–20B)

Large (20–40B)

XL (70B+)

Mixture of Experts

Model data and VRAM estimates sourced from CanIRun.ai, which aggregates data from llama.cpp, Ollama, and LM Studio. Thank you to the CanIRun.ai team for making this data available.

Skip the GPU waitlist

Your Mac already has the hardware. DevPulse tells you exactly which models fit alongside Chrome, Docker, and everything else you're running.

Download for macOS

macOS 14+ · Apple Silicon & Intel · Free during launch