Framework Desktop

AMD Ryzen AI Max+ 395 — 128GB LPDDR5x-8000

AI in a Box: Massive gaming capability, heavy-duty AI compute, and standard PC parts, all in 4.5L.

Overview

The Framework Desktop with the AMD Ryzen AI Max+ 395 processor and 128GB of soldered LPDDR5x-8000 memory is the ultimate AI workstation in a mini PC form factor. Released in 2025, it features the powerful Radeon 8060S integrated GPU with up to 96GB of dedicated VRAM (on Windows), making it capable of running very large language models like OpenAI's gpt-oss-120b at 38 tokens/second.

Starting at $1,269 for the base 32GB model. It ships as a DIY kit with moderate assembly difficulty and roughly 10 minutes of setup time, and supports Windows 11 or any Linux distribution.

Key Specs (128GB Config)

| Spec | Details |
|---|---|
| Processor | AMD Ryzen AI Max+ 395 (soldered) |
| Base Clock | 3.0 GHz |
| Max Boost | Up to 5.1 GHz |
| Cores / Threads | 16 cores / 32 threads |
| L3 Cache | 64 MB |
| Processor Power | 120 W sustained, 140 W boost |
| Memory | 128 GB LPDDR5x-8000 (soldered) |
| Memory Bus | 256-bit at 8000 MT/s |
| GPU | AMD Radeon 8060S |
| GPU Clock | Up to 2.9 GHz |
| Compute Units | 40 CUs |
| MALL Cache | 32 MB |
| NPU | 32 tiles, up to 50 TOPS |
| Storage | 2x Samsung 990 EVO Plus 4TB NVMe SSD |
| Form Factor | Mini-ITX, 4.5 L volume |
| Weight | 3.1 kg |
| Case Dimensions | 96.8 x 205.5 x 226.1 mm (H x W x D) |
| Power Supply | 400 W FlexATX, ATX 3.0, 80 Plus Gold (110 V) / Silver (230 V) |
| PSU Fan | Delta AFB0412SHBYQB 40x40 mm (0-RPM mode) |
| OS | Arch Linux |
| Release Year | 2025 |

Cooling

| Fan | Speed | Noise | Airflow | Connector |
|---|---|---|---|---|
| Noctua NF-A12x25 HS-PWM | 2400 RPM | 28.8 dBA | 117.6 m³/h (69.25 CFM) | 4-pin PWM |

Machine Learning Performance

With up to 96GB of memory accessible to the Radeon 8060S GPU (and even more on Linux), very large language models like OpenAI's gpt-oss-120b can run in real time.
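The measured speeds line up with a simple memory-bandwidth ceiling. A back-of-envelope sketch (the ~5.1B active-parameter figure for gpt-oss-120b's MoE design is a published number; treating decode as purely bandwidth-bound is a simplifying assumption):

```python
# Rough decode-speed ceiling from memory bandwidth (sketch, not a benchmark).

BUS_WIDTH_BITS = 256        # LPDDR5x bus width
TRANSFER_RATE = 8000e6      # 8000 MT/s
bandwidth = BUS_WIDTH_BITS / 8 * TRANSFER_RATE   # bytes/s
print(f"Peak bandwidth: {bandwidth / 1e9:.0f} GB/s")

# gpt-oss-120b is a mixture-of-experts model: roughly 5.1e9 parameters
# are active per generated token. MXFP4 stores about 4.25 bits/param.
active_params = 5.1e9
bytes_per_token = active_params * 4.25 / 8

# Every active weight is read once per token, so bandwidth alone caps
# decode speed at roughly:
ceiling = bandwidth / bytes_per_token
print(f"Bandwidth ceiling: {ceiling:.0f} tok/s")
```

This gives a ceiling in the mid-90s tok/s; the measured 38 tok/s is about 40% of that, a plausible real-world efficiency once KV-cache reads, attention compute, and framework overhead are included.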

Recommended Models & Token Speed (LM Studio on Fedora 42)

| Model | Quantization | Speed |
|---|---|---|
| OpenAI gpt-oss-20b | MXFP4 | 58 tok/s |
| OpenAI gpt-oss-120b | MXFP4 | 38 tok/s |

VRAM by Configuration

| Configuration | Total Memory | Max Dedicated VRAM (Windows) |
|---|---|---|
| Max+ 395 (128GB) | 128GB | 96GB |

* On Linux, you can override the VRAM limit to allocate even more.
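The usual override is to raise the GTT (GPU-addressable system memory) limit via kernel parameters at boot. A sketch, assuming a GRUB setup as on the Arch Linux install above; parameter names are `ttm.pages_limit` / `ttm.page_pool_size` on recent kernels (older kernels used an `amdttm` prefix), and the value here is illustrative:

```shell
# /etc/default/grub -- pages are 4 KiB, so e.g. 112 GiB:
#   112 * 1024^3 / 4096 = 29360128 pages
GRUB_CMDLINE_LINUX_DEFAULT="... ttm.pages_limit=29360128 ttm.page_pool_size=29360128"
```

After editing, regenerate the config (`sudo grub-mkconfig -o /boot/grub/grub.cfg` on Arch) and reboot.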

Supported Models & Tools

LM Studio, Ollama, llama.cpp, and other open-source tools work out of the box on Windows and Linux.
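All three expose an OpenAI-compatible HTTP API, so any OpenAI-style client can talk to the local server. A minimal sketch using only the standard library (port 1234 is LM Studio's default, Ollama's is 11434; the model name must match whatever you have loaded):

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# LM Studio listens on 1234 by default; use http://localhost:11434 for Ollama.
req = build_chat_request("http://localhost:1234", "gpt-oss-20b", "Hello!")
# with urllib.request.urlopen(req) as resp:   # requires a running server
#     print(json.load(resp)["choices"][0]["message"]["content"])
```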

llama.cpp & Toolboxes

kyuz0's AMD Strix Halo toolboxes provide prebuilt container environments for optimized llama.cpp inference on this hardware.

Coding Setup

VS Code with AI coding assistants.

CLI Usage

Models

| Model | Status |
|---|---|
| Qwen3.5-122B-A10B | Old champion |
| Qwen3.6-35B-A3B | Currently testing |
| Gemma-4-26B-A4B-IT | |

Back to Home