I’m running AI on an old 1080 Ti. You can run AI on almost anything, but the less VRAM you have, the smaller (i.e. dumber) your models will have to be.
As for the “how”, I use Ollama and Open WebUI. It’s pretty easy to set up.
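For anyone curious, the setup is roughly this (a sketch for Linux with Docker installed; model name `llama3` is just an example, pick whatever fits your VRAM):

```shell
# Install Ollama (official Linux install script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a model in the terminal
ollama pull llama3
ollama run llama3

# Run Open WebUI in Docker, pointed at the local Ollama instance,
# then open http://localhost:3000 in a browser
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

On a 1080 Ti (11 GB VRAM) you’re generally looking at 7B–13B models at reasonable quantization; bigger than that and it spills into system RAM and slows way down.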
Wait until he learns they make chips. I can totally see him saying, “If Lay’s can make chips in the US, then why can’t Nvidia?”
This is even dumber than tracking how many lines of code each dev commits.