
How to Run Qwen (14B) on AMD MI200 with vLLM

Posted on February 3, 2026

If you are trying to run LLMs on AMD Instinct MI200 (MI210/MI250) cards, you have probably already experienced the pain of “HSA errors,” random segmentation faults, or containers that just hang forever.

We went through the struggle of finding the right Docker image so you don’t have to. Here is the definitive, battle-tested guide to running an OpenAI-compatible API for Qwen (14B) on ROCm.

The MI200 is a beast, but it uses the gfx90a architecture, while most “bleeding edge” Docker images today are optimized for the newer MI300 (gfx942). If you try to run the latest vLLM (0.11.x) with the default settings, it will crash because the new V1 engine pulls in aiter kernels that aren’t fully compatible with MI200 yet.

We are going to use a stable setup that disables the experimental features and just runs fast.

Prerequisites

  • Host OS: Linux with ROCm kernel drivers installed (rocm-dkms).
  • Docker: Installed and running.
  • GPU: AMD Instinct MI200 series (MI210, MI250/X).
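Before pulling anything, it is worth confirming that the host actually sees the card. A quick sanity check, assuming the ROCm CLI tools are installed on the host:

# Confirm the kernel driver exposes the GPU device nodes
ls /dev/kfd /dev/dri

# List detected GPUs and their VRAM
rocm-smi --showproductname --showmeminfo vram

If rocm-smi reports your MI210/MI250 here, Docker will be able to see it too.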

Step 1: Pick the Right Image and Launch the Container

Don’t use latest. Don’t use 0.11.x. We are using vLLM 0.10.1 on ROCm 6.4: it offers the best balance of modern model support (like Qwen 2.5) and stability on gfx90a.

Copy and paste this exact command.

docker run -it --rm \
    --device /dev/kfd \
    --device /dev/dri \
    --group-add video \
    --ipc=host \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    -p 48700:8000 \
    -e HUGGING_FACE_HUB_TOKEN="your_hf_token_here" \
    -e VLLM_USE_V1=0 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --name qwen-server \
    rocm/vllm:rocm6.4.1_vllm_0.10.1_20250909 \
    vllm serve Qwen/Qwen2.5-14B-Instruct \
    --dtype float16 \
    --gpu-memory-utilization 0.90 \
    --max-model-len 32768 \
    --tensor-parallel-size 1 \
    --host 0.0.0.0 \
    --port 8000
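Tip: the command above runs in the foreground (-it --rm), which is ideal for a first run. If you later start it detached (replace -it --rm with -d), you can still watch the startup from the host:

# Follow the server logs until you see "Application startup complete"
docker logs -f qwen-server

# Confirm the container is up
docker ps --filter name=qwen-server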


Why these flags matter

  • VLLM_USE_V1=0: This is the most important line. The new “V1” engine in vLLM crashes on MI200 when loading JIT kernels. We force the legacy engine (V0) for rock-solid stability.
  • --dtype float16: We don’t trust auto mode on ROCm containers. Explicitly telling it to use float16 prevents initialization stalls.
  • --security-opt seccomp=unconfined: the ROCm runtime makes memory-mapping syscalls that Docker’s default seccomp profile blocks. Without this, you get permission errors at startup.
  • The Model Size (14B): We chose 14B because the 32B model (at float16) requires ~64GB of VRAM just for the weights. On a single GPU, you’ll hit OOM (Out Of Memory) the moment the KV cache is allocated. 14B sits in the “sweet spot”—fast, smart, and leaves room for context (see the VRAM check below).
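To see that sizing on your own card, watch VRAM fill up while the model loads. A minimal check from the host, assuming rocm-smi is available:

# 14B params x 2 bytes (float16) is roughly 28 GB of weights;
# on a 64 GB MI210 that leaves ~36 GB for KV cache and activations.
watch -n 2 rocm-smi --showmeminfo vram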

Step 2: Testing the API

Once the container says Application startup complete, your API is live on port 48700.

You can test it with curl. Note: Be careful with your JSON syntax! Use straight quotes ("), not curly smart quotes (“), or the API will throw a 400 Bad Request error.

Here is a test command with a system prompt:

curl http://localhost:48700/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer any_token_is_fine" \
    -d '{
        "model": "Qwen/Qwen2.5-14B-Instruct",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant. Please answer in JSON format."
            },
            {
                "role": "user",
                "content": "Are you running on an AMD GPU?"
            }
        ],
        "temperature": 0.7
    }'
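If you only want the assistant’s reply rather than the full JSON envelope, pipe the response through jq (assuming jq is installed on the host):

curl -s http://localhost:48700/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "Qwen/Qwen2.5-14B-Instruct", "messages": [{"role": "user", "content": "Say hello in five words."}]}' \
    | jq -r '.choices[0].message.content'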


Note: If you serve a different model (Qwen3, for example), make sure the "model" field in your JSON matches exactly what you passed to vllm serve in the Docker command.

Troubleshooting

1. It hangs at “Loading model weights…”

  • Cause: ROCm is compiling kernels (JIT) for your specific GPU.
  • Fix: Wait. On the very first run, this can take 2–5 minutes. Subsequent runs reuse the cached kernels and start much faster.

2. RuntimeError: Engine core initialization failed

  • Cause: You forgot VLLM_USE_V1=0.
  • Fix: Add the env var. The V1 engine tries to use aiter libraries optimized for MI300, which segfault on MI200.
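You can confirm the variable actually made it into the running container:

docker exec qwen-server env | grep VLLM_USE_V1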

3. HIP out of memory

  • Cause: The model’s weights plus KV cache don’t fit in your VRAM.
  • Fix: If you absolutely need a 32B or 70B model, you must use quantization. Change the serve line to an AWQ model:

vllm serve Qwen/Qwen2.5-32B-Instruct-AWQ --quantization awq
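Dropped into the full command from Step 1, the serve portion would look roughly like this (a sketch; we also trim the context length to leave headroom for the KV cache):

vllm serve Qwen/Qwen2.5-32B-Instruct-AWQ \
    --quantization awq \
    --dtype float16 \
    --max-model-len 16384 \
    --gpu-memory-utilization 0.90 \
    --host 0.0.0.0 \
    --port 8000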

