Running an LLM locally (AMD GPU/CPU compatible)

Things I want to do

Run an LLM (chat AI) locally using llama.cpp. This article uses gemma, Google's openly available model that can be run locally. ...
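As a rough illustration of the goal, here is a minimal sketch using the llama-cpp-python bindings (one common way to drive llama.cpp from Python; the article itself may use the llama.cpp CLI instead). The model filename and parameter values are placeholders, not the article's exact setup; a gemma GGUF file must be downloaded separately.

```python
# Minimal local-inference sketch with llama-cpp-python and a Gemma GGUF model.
# Assumptions: the GGUF filename below is a placeholder; n_gpu_layers=0 keeps
# everything on the CPU, while a GPU-enabled build (e.g. ROCm/Vulkan for AMD)
# can offload layers by setting it to a positive value or -1.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-2b-it-Q4_K_M.gguf",  # placeholder path to a local Gemma GGUF
    n_ctx=4096,       # context window size
    n_gpu_layers=0,   # 0 = CPU only; raise this on a GPU-enabled build
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Please introduce yourself briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```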