another AI question
Martin Cracauer
cracauer at cons.org
Tue Apr 7 17:50:21 EDT 2026
The situation with LLMs on FreeBSD is not totally catastrophic.
The NVIDIA drivers are currently broken for my 5090, so I cannot
compare Vulkan/FreeBSD to CUDA/Linux.
But they work on my 2080 Ti with Vulkan and run both ollama and
llama.cpp, accelerated.
My laptop with an "AMD Ryzen 7 PRO 4750U with Radeon Graphics" also
runs Vulkan and accelerates ollama (although only by a factor of 3
over the CPU). This combo does not run llama.cpp.
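For anyone who wants to try the same setup, here is a minimal sketch of
building llama.cpp with its Vulkan backend. The package names are guesses
at the relevant FreeBSD ports, and the model path is a placeholder;
adjust both for your system.

```shell
# Assumed prerequisites (port names are guesses; adjust as needed):
pkg install cmake git vulkan-loader vulkan-headers shaderc

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# GGML_VULKAN=ON selects the Vulkan backend instead of CPU-only.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j4

# -ngl 99 offloads all layers to the GPU; llama.cpp prints the
# detected Vulkan device(s) at startup, which confirms acceleration.
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"
```

The same Vulkan build should work on both the NVIDIA and the AMD
machines described above, which is the appeal of Vulkan over CUDA here.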
Now that the NVIDIA drivers are working on at least one of my cards,
I'll give running CUDA through the Linuxulator another go.
Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <cracauer at cons.org> http://www.cons.org/cracauer/
More information about the talk mailing list