prl_vm_app memory leak and kernel panics on M5 / M5 Max, macOS 26.4

Discussion in 'Parallels Desktop on a Mac with Apple silicon' started by JohnS46, Apr 11, 2026 at 9:20 AM.

  1. JohnS46


    Messages:
    1
    Posting this in case anyone else is seeing the same thing, and to get attention on what I believe is an M5-specific regression in Parallels' graphics path.

    The symptom: prl_vm_app grows unbounded over time while a Windows 11 VM is running with 3D acceleration enabled. On my MacBook Air M5 (32 GB RAM), it climbs roughly 1.8 GB per hour until it saturates physical memory, triggers Jetsam events, and kernel panics the host. Six unexpected reboots since March 25. On my MacBook Pro M5 Max (128 GB RAM), the exact same leak is reproducing but hasn't yet exhausted headroom, so the symptoms are subtler: just steadily climbing prl_vm_app memory and elevated SSD write activity.
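For anyone who'd rather log this than eyeball Activity Monitor, here's a rough sketch (mine, not anything from Parallels) that samples prl_vm_app's resident set via ps and estimates growth in GB/hour. The sampling interval, the process name match, and the ps column parsing are assumptions you may need to adapt:

```python
import subprocess
import time

def rss_kib(ps_output: str, name: str = "prl_vm_app") -> int:
    """Sum the RSS (KiB) of every process matching `name` in `ps aux` output."""
    total = 0
    for line in ps_output.splitlines():
        if name in line:
            fields = line.split()
            # In `ps aux` output, column 6 (index 5) is RSS in KiB.
            if len(fields) > 5 and fields[5].isdigit():
                total += int(fields[5])
    return total

def growth_gb_per_hour(samples) -> float:
    """samples: list of (unix_time, rss_kib) tuples. GB/hour, first vs last."""
    (t0, r0), (t1, r1) = samples[0], samples[-1]
    return (r1 - r0) * 1024 / 1e9 / ((t1 - t0) / 3600)

def sample_growth(count: int = 13, interval_s: int = 300) -> float:
    """Sample `count` times, `interval_s` apart (defaults: hourly window)."""
    samples = []
    for _ in range(count):
        out = subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout
        samples.append((time.time(), rss_kib(out)))
        time.sleep(interval_s)
    return growth_gb_per_hour(samples)

# Usage: print(f"{sample_growth():.2f} GB/hour")
```

On my Air the equivalent measurement comes out around 1.8 GB/hour with the VM idle at the desktop.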

    What macOS itself is saying: Both machines have multiple prl_vm_app_*.diag microstackshot reports in /Library/Logs/DiagnosticReports/. These are macOS's own resource-abuse diagnostics flagging prl_vm_app for excessive file-backed dirty-memory writes. On the M5 Max, one capture shows prl_vm_app writing 8.59 GB of file-backed memory over 6.1 hours. Another two show over 2 GB in ~10 minutes each. The threshold for triggering these captures is 24.86 KB/sec over 24 hours. Parallels is hitting 3,400+ KB/sec, roughly 138x the allowed rate.
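To put those numbers in context, the rate math is simple (I'm assuming decimal GB and KB here, which is how the figures in the .diag reports appear to be counted; the threshold constant is taken from the captures):

```python
DIAG_THRESHOLD_KB_S = 24.86  # macOS capture threshold quoted in the .diag reports

def write_rate_kb_s(bytes_written: float, seconds: float) -> float:
    """File-backed write rate in KB/sec (decimal units)."""
    return bytes_written / seconds / 1000

# 8.59 GB over 6.1 hours: the long, slow capture.
slow_burn = write_rate_kb_s(8.59e9, 6.1 * 3600)  # ~391 KB/s, ~16x the threshold
# ~2 GB in ~10 minutes: the burst captures.
burst = write_rate_kb_s(2e9, 600)                # ~3,333 KB/s, ~134x the threshold
```

The ~10-minute captures show "over 2 GB", so the exact multiple lands in the high 130s; either way it's two orders of magnitude past what macOS considers acceptable.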

    What the stack traces show: The dominant call in every capture is repeated pwrite() from prl_vm_app. Deeper stacks from my earlier panic reports go through IOGPUMetalTexture getBytes and AGXMetalG17G 350.38 readRegion -- that's Apple's M5-generation GPU driver texture-readback path (T8142 SoC). With the leak reproducing on two different machines, with different VM memory configs and different auto-sizing settings, the common factor is the M5 GPU driver family.

    My working hypothesis: Parallels' zero-copy IOSurface path may be failing a capability check on AGXMetalG17G and silently falling back to a readRegion-based copy path that leaks its staging buffers. Every frame Windows DWM paints, Parallels reads pixels back from the M5 GPU, and that memory piles up as file-backed dirty pages until macOS flushes it to swap. Over hours, the buffer pile grows until RAM is exhausted.
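One sanity check on the hypothesis: if every painted frame leaks a staging buffer, the observed growth rate implies a per-frame leak size. This toy arithmetic is my own inference, not measured data from Parallels:

```python
def implied_leak_per_frame(gb_per_hour: float, fps: float = 60.0) -> float:
    """KB leaked per painted frame implied by an observed growth rate."""
    return gb_per_hour * 1e9 / 3600 / fps / 1000

# At the ~1.8 GB/hour I'm seeing, a 60 fps desktop implies only ~8.3 KB per
# frame -- far less than a full 1080p BGRA frame (1920*1080*4 ~ 8.3 MB). That
# would be consistent with readRegion copies of small dirty sub-rectangles
# leaking their staging buffers, rather than whole-frame readbacks.
```

That also explains why the leak is steady rather than bursty: DWM repaints something nearly every frame, even on an "idle" desktop.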

    I've opened a detailed ticket with Parallels Support (case #5680322) with two full Technical Reports (517507467 and 517512131) capturing this on both machines, verbose logging enabled, with all the evidence above plus correlation to host kernel panics. No response from engineering yet.

    If you're on an M5 or M5 Max running Parallels with a 3D-accelerated Windows VM, please check:
    1. Open Activity Monitor, sort by Memory, find prl_vm_app, and watch its Real Memory column over a couple of hours. If it climbs steadily while your VM's configuration and workload stay stable, you're probably affected.
    2. In Terminal: ls -lht /Library/Logs/DiagnosticReports/ | grep prl_vm_app. If you see .diag files dated recently, macOS has already flagged your installation for the same behavior.
    3. Open one of those .diag files and look for Event: disk writes and the pwrite stack. If it's there, please reply with the byte counts and durations from your captures.
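    Steps 2 and 3 can be automated with something like the sketch below. The exact .diag layout varies between macOS builds, so the field names I'm matching on (Event:, Duration:, pwrite) are a starting point, not a spec; you may need sudo to read the directory:

```python
import re
from pathlib import Path

DIAG_DIR = Path("/Library/Logs/DiagnosticReports")

def prl_diag_files(diag_dir: Path = DIAG_DIR):
    """prl_vm_app resource diagnostics, newest first."""
    files = list(diag_dir.glob("prl_vm_app*.diag"))
    return sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)

def interesting_lines(text: str):
    """Pull the lines worth posting: event type, durations, write stacks."""
    pattern = re.compile(r"event:|duration:|writes|pwrite", re.IGNORECASE)
    return [ln.strip() for ln in text.splitlines() if pattern.search(ln)]

# Usage:
# for p in prl_diag_files():
#     print(p.name)
#     print("\n".join(interesting_lines(p.read_text(errors="replace"))))
```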
    The more machines we can document this on, the harder it'll be for Parallels to write this off as generic tuning advice. If anyone from Parallels engineering sees this: happy to share both Technical Report bundles, the panic .ips files, and the full analysis.