Local AI Models
While everyone's fighting over NVIDIA GPUs like it's Black Friday at Costco, your M-series Mac has been sitting on up to 192 GB of unified memory this whole time. Check which of 17+ AI models your Mac can run locally — based on your actual available memory, not just your spec sheet.
The GPU shortage is your opportunity
H100s cost $30K+. Cloud GPU waitlists stretch for months. Meanwhile, a MacBook Pro with 96 GB of unified memory can run a 70B parameter model while you sip coffee. No CUDA drivers. No cloud bills. No Jensen Huang fan club membership required. Your Mac is already an AI workstation.
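Curious about the back-of-envelope math behind that 70B claim? A 4-bit quantized model needs roughly half a byte per parameter, plus headroom for the KV cache and runtime buffers. Here's a minimal sketch in Swift; the bytes-per-weight figures and the 20% overhead factor are rule-of-thumb assumptions for illustration, not DevPulse's or CanIRun.ai's actual estimator:

```swift
import Foundation

// Rough bytes per weight at common quantization levels.
enum Quantization: Double {
    case q4 = 0.5   // ~4 bits per weight
    case q8 = 1.0   // ~8 bits per weight
    case f16 = 2.0  // 16-bit weights
}

// Estimate total memory footprint in GB: weights plus a flat
// overhead factor for KV cache and runtime buffers. Real numbers
// vary with context length and runtime.
func estimatedFootprintGB(parametersInBillions: Double,
                          quant: Quantization,
                          overhead: Double = 1.2) -> Double {
    let weightBytes = parametersInBillions * 1e9 * quant.rawValue
    return weightBytes * overhead / 1e9
}

// 70B at 4-bit: 70e9 * 0.5 bytes * 1.2 ≈ 42 GB.
let need = estimatedFootprintGB(parametersInBillions: 70, quant: .q4)
print(String(format: "~%.0f GB needed", need)) // ~42 GB needed
```

About 42 GB with overhead, in other words, which leaves a 96 GB machine plenty of room for everything else you're running.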
Model data and VRAM estimates sourced from CanIRun.ai, which aggregates data from llama.cpp, Ollama, and LM Studio. Thank you to the CanIRun.ai team for making this data available.
Your Mac already has the hardware. DevPulse tells you exactly which models fit alongside Chrome, Docker, and everything else you're running.
Download for macOS