prl_vm_app memory leak and kernel panics on M5 / M5 Max, macOS 26.4

Discussion in 'Parallels Desktop on a Mac with Apple silicon' started by JohnS46, Apr 11, 2026.

  1. JohnS46

    JohnS46

    Messages:
    1
    Posting this in case anyone else is seeing the same thing, and to draw attention to what I believe is an M5-specific regression in Parallels' graphics path.

    The symptom: prl_vm_app grows unbounded over time while a Windows 11 VM is running with 3D acceleration enabled. On my MacBook Air M5 (32 GB RAM), it climbs roughly 1.8 GB per hour until it saturates physical memory, triggers Jetsam events, and kernel panics the host. Six unexpected reboots since March 25. On my MacBook Pro M5 Max (128 GB RAM), the exact same leak is reproducing but hasn't yet exhausted headroom, so the symptoms are subtler: just steadily climbing prl_vm_app memory and elevated SSD write activity.
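    For scale, a quick back-of-the-envelope on time-to-exhaustion. The 20 GB of free headroom is an assumed figure (baseline usage varies); the 1.8 GB/hour rate is what I measured:

```shell
# Hours until an ASSUMED 20 GB of free RAM is consumed at the observed
# ~1.8 GB/hour leak rate. Both inputs are approximate.
hours=$(awk 'BEGIN { printf "%.1f", 20 / 1.8 }')
echo "~${hours} hours to exhaust 20 GB at 1.8 GB/hour"
```

    Which is consistent with the panics showing up after long sessions rather than right away.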

    What macOS itself is saying: Both machines have multiple prl_vm_app_*.diag microstackshot reports in /Library/Logs/DiagnosticReports/. These are macOS's own resource-abuse diagnostics flagging prl_vm_app for excessive file-backed dirty-memory writes. On the M5 Max, one capture shows prl_vm_app writing 8.59 GB of file-backed memory over 6.1 hours. Another two show over 2 GB in ~10 minutes each. The threshold for triggering these captures is 24.86 KB/sec sustained over 24 hours; in the ~10-minute captures, Parallels is hitting 3,400+ KB/sec, roughly 138x the allowed rate.
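    Quick arithmetic behind that ratio (my own back-of-the-envelope, using a round 2 GiB over 600 seconds, so it lands slightly above the ~138x the exact capture figures give):

```shell
# Write rate implied by one of the ~10-minute captures, vs. macOS's
# 24.86 KB/sec resource-abuse threshold. Round inputs, so the ratio
# comes out a touch higher than the exact capture data (~138x).
bytes=$((2 * 1024 * 1024 * 1024))   # ~2 GB written
secs=$((10 * 60))                   # over ~10 minutes
rate_kb=$(awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.0f", b / 1024 / s }')
ratio=$(awk -v r="$rate_kb" 'BEGIN { printf "%.0f", r / 24.86 }')
echo "${rate_kb} KB/sec, ~${ratio}x the threshold"
```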

    What the stack traces show: The dominant call in every capture is repeated pwrite() from prl_vm_app. Deeper stacks from my earlier panic reports go through IOGPUMetalTexture getBytes and AGXMetalG17G 350.38 readRegion -- that's Apple's M5-generation GPU driver texture-readback path (T8142 SoC). With the issue reproducing on two different machines, under different VM memory configs and different auto-sizing settings, the common factor is the M5 GPU driver family.

    My working hypothesis: Parallels' zero-copy IOSurface path may be failing a capability check on AGXMetalG17G and silently falling back to a readRegion-based copy path that leaks its staging buffers. Every frame Windows DWM paints, Parallels reads pixels back from the M5 GPU, and that memory piles up as file-backed dirty pages until macOS flushes it to swap. Over hours, the buffer pile grows until RAM is exhausted.

    I've opened a detailed ticket with Parallels Support (case #5680322) with two full Technical Reports (517507467 and 517512131) capturing this on both machines, verbose logging enabled, with all the evidence above plus correlation to host kernel panics. No response from engineering yet.

    If you're on an M5 or M5 Max running Parallels with a 3D-accelerated Windows VM, please check:
    1. Open Activity Monitor, sort by Memory, find prl_vm_app, and watch its Real Memory column over a couple of hours. If it climbs steadily against a stable VM config, you're probably affected.
    2. In Terminal: ls -lht /Library/Logs/DiagnosticReports/ | grep prl_vm_app. If you see .diag files dated recently, macOS has already flagged your installation for the same behavior.
    3. Open one of those .diag files and look for Event: disk writes and the pwrite stack. If it's there, please reply with the byte counts and durations from your captures.
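    If it helps, the three checks above can be wrapped in a small script (my own sketch, not a Parallels tool; run the memory sample from a loop or cron to watch growth over time):

```shell
#!/bin/sh
# Check 1: one timestamped sample of prl_vm_app resident memory.
# Run repeatedly, e.g.: while true; do ./check_prl.sh; sleep 300; done
rss_kb=$(ps -axo rss=,comm= 2>/dev/null | awk '/prl_vm_app/ { kb += $1 } END { print kb + 0 }')
printf '%s prl_vm_app RSS: %s MB\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$((rss_kb / 1024))"

# Check 2: recent macOS resource-abuse diagnostics naming prl_vm_app.
ls -lht /Library/Logs/DiagnosticReports/ 2>/dev/null | grep prl_vm_app

# Check 3: the event lines (e.g. "Event: disk writes") inside those reports.
grep -h 'Event:' /Library/Logs/DiagnosticReports/prl_vm_app_*.diag 2>/dev/null
```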
    The more machines we can document this on, the harder it'll be for Parallels to route this back as generic tuning advice. If anyone from Parallels engineering sees this: happy to share both Technical Report bundles, the panic .ips files, and the full analysis.
     
  2. Keshav Dosieah

    Keshav Dosieah Staff Member

    Messages:
    59
    Hello John,

    Thank you for your post and for the very detailed explanation you provided both here and in the ticket; we are grateful for the time you have invested in gathering all this information.

    We will be communicating with you via the ticket you created, #5680322.

    Regards,
    Parallels Team
     
  3. MartinO10

    MartinO10 Bit poster

    Messages:
    1
    Confirming this is not M5-specific. I'm seeing the identical behaviour on a MacBook Air M3 (Mac15,12), 16 GB RAM, Parallels 26.3.2 (57398), macOS 26.4.1 (25E253).

    Symptoms: two unexpected VM resets over four weeks. Windows Event Viewer shows Kernel-Power Event 41 (BugCheck 63) each time, no preceding BSOD. macOS logs show prl_vm_app assertions being invalidated and the VM process losing state without any SIGKILL or Jetsam event. The VM process survived but the guest OS was reset.

    I have five prl_vm_app .diag microstackshot reports in /Library/Logs/DiagnosticReports/ from the last week. They show the same pattern you described: Event: disk writes, pwrite() dominating the stack, footprint growing from 10.95 GB to 40.36 GB over ~19.5 hours, with 34.36 GB of file-backed memory dirtied (489.50 KB/sec average). The heaviest stack is pwrite via QtCore on a worker thread.

    I monitored prl_vm_app VSZ in real time and observed ~1.7 GB of growth every 15 minutes with 3D acceleration enabled. I tried disabling 3D acceleration via prlctl set "Windows 11" --3d-accelerate off. VSZ growth initially appeared to slow, but over 24 hours it still grew by ~170 GB, so the leak remains active with 3D acceleration off. Not a fix.
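    As a sanity check, those two rates are consistent with each other (my own arithmetic):

```shell
# ~170 GB of VSZ growth over 24 hours, expressed per 15-minute interval;
# it comes out to roughly the ~1.7 GB per 15 minutes observed live.
gb_per_15min=$(awk 'BEGIN { printf "%.2f", 170 / (24 * 60 / 15) }')
echo "~${gb_per_15min} GB per 15 minutes"
```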

    This is reproducible on M3, not just M5, so the leak appears to affect multiple Apple GPU driver families.

    Happy to share my .diag files if Parallels engineering wants M3-specific captures.
     
  4. Adarsh Jeetun

    Adarsh Jeetun Staff Member

    Messages:
    130
    Hello @MartinO10,
    We will be contacting you further via ticket #57***06 for more details.
    Please check your mailbox.
    Thank you for your cooperation!
     
