AI Development Package GPU usage

Discussion in 'Linux Virtual Machine' started by HubertS4, Oct 31, 2024.

  1. HubertS4

    HubertS4 Bit poster

    Messages:
    1
    Hello,

    Has anyone here used the Ubuntu VM AI development package? I understand that it includes Ollama pre-installed. Does anyone know whether Ollama inside the Ubuntu VM can utilize the Apple Silicon GPU?

    Thanks,
    Hubert
     
  2. Nico Gonzalez

    Nico Gonzalez

    Messages:
    1
    Yes, I've worked with the Ubuntu VM AI development package a few times. From what I've seen, Ollama running inside the Ubuntu VM doesn't directly utilize the Apple Silicon GPU, mainly because the virtualized environment can't access Apple's Metal API. The VM layer exposes a paravirtualized GPU for graphics, not for compute, so the heavy lifting of model inference happens on the CPU.
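
    You can verify this from inside the VM. Here's a minimal sketch, assuming a recent Ollama release that exposes the /api/ps endpoint on its default port 11434, and that you have the requests package installed:

        import requests

        # Ask the local Ollama server which models are loaded and where they live.
        # /api/ps is served on Ollama's default port, 11434.
        resp = requests.get("http://localhost:11434/api/ps", timeout=5)
        resp.raise_for_status()

        for model in resp.json().get("models", []):
            size = model.get("size", 0)        # total memory the model occupies
            vram = model.get("size_vram", 0)   # portion resident in GPU memory
            where = "GPU" if size and vram >= size else "CPU (or split)"
            print(f"{model['name']}: {where} ({vram}/{size} bytes in VRAM)")

    Inside the Ubuntu VM, size_vram stays at 0, which is exactly the CPU fallback described above.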

    If you're looking for better GPU performance on Apple Silicon for AI workloads, I'd suggest running Ollama natively on macOS instead of inside a VM. That setup allows direct GPU access, and you'll notice a big speed improvement for model inference.
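
    If you want numbers rather than a feel, here's a rough sketch that measures generation speed through Ollama's REST API; the model name and prompt are just placeholders for whatever you've pulled:

        import requests

        OLLAMA_URL = "http://localhost:11434/api/generate"

        # One non-streaming generation; Ollama reports timing fields we can
        # turn into tokens/sec (eval_duration is in nanoseconds).
        payload = {
            "model": "llama3",  # placeholder: any model you've pulled locally
            "prompt": "Explain GPU virtualization in one paragraph.",
            "stream": False,
        }
        data = requests.post(OLLAMA_URL, json=payload, timeout=300).json()

        tokens = data["eval_count"]
        seconds = data["eval_duration"] / 1e9
        print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.1f} tokens/sec")

    Run it once inside the VM and once against a native macOS install, and the difference is hard to miss.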

    I ran into the same issue when testing local LLM setups for client AI apps. Running Ollama natively has consistently given us the best performance results.
     
  3. RohitR5

    RohitR5

    Messages:
    1
    Yes, the Ubuntu VM AI development package does include Ollama pre-installed, which makes it convenient for deploying and testing AI models. However, when running inside a virtual machine (VM) on Apple Silicon (M1, M2, and later chips), Ollama cannot directly use the Apple GPU for acceleration, because the guest has no access to Apple's Metal API.

    If you need full GPU support, it's best to run Ollama natively on macOS rather than inside a VM.
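
    A practical middle ground if your tooling lives in the Ubuntu VM: run Ollama natively on macOS, start it with OLLAMA_HOST=0.0.0.0 so it listens beyond localhost, and point clients in the guest at the Mac's address. A minimal sketch with the official ollama Python client; the IP below is only a placeholder for your Mac's address on the Parallels shared network:

        from ollama import Client

        # Ollama runs natively on the macOS host (launched with
        # OLLAMA_HOST=0.0.0.0 so it accepts connections from the VM).
        # Replace the placeholder IP with your Mac's address as seen
        # from the guest.
        client = Client(host="http://10.211.55.2:11434")

        response = client.generate(
            model="llama3",  # placeholder: any model pulled on the macOS side
            prompt="Hello from the Ubuntu guest!",
        )
        print(response["response"])

    That way the models get full Metal acceleration while your development environment stays in the VM.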

     
