Check hardware support: https://docs.ollama.com/gpu
Check how much VRAM is available.
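A quick way to check free VRAM, sketched here assuming an NVIDIA GPU (nvidia-smi ships with the driver); on AMD the equivalent tool is rocm-smi, and on Apple silicon the GPU shares system memory:

```shell
# Print total and free VRAM per GPU (assumes NVIDIA; falls back with a hint otherwise).
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.total,memory.free --format=csv
else
  echo "nvidia-smi not found; use your vendor's tool (e.g. rocm-smi on AMD)"
fi
```

As a rule of thumb, the model size reported on the Ollama model page should fit in the free VRAM with headroom left for the context.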
Find a model that fits in your GPU's VRAM from the main page.
Try:
ollama run <model-name>
Check that it is running 100% on the GPU with:
ollama ps
NAME         ID            SIZE    PROCESSOR  CONTEXT  UNTIL
llama3.2:3b  a80c4f17acd5  2.8 GB  100% GPU   4096     4 minutes from now
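If the PROCESSOR column shows anything other than 100% GPU, part of the model has spilled into system RAM and inference will be slower. A minimal sketch of that check, using the sample output above stored in a variable for illustration (in practice you would pipe `ollama ps` directly):

```shell
# Sample "ollama ps" output (copied from above) used as stand-in input.
ps_output='NAME         ID            SIZE    PROCESSOR  CONTEXT  UNTIL
llama3.2:3b  a80c4f17acd5  2.8 GB  100% GPU   4096     4 minutes from now'

# Real usage would be: ollama ps | grep -q '100% GPU'
if echo "$ps_output" | grep -q '100% GPU'; then
  echo "model is fully on the GPU"
else
  echo "model is partially offloaded to CPU/RAM"
fi
```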