Track_Shovel@slrpnk.net to Lemmy Shitpost@lemmy.world · English · 1 year ago
Hexadecimal (image post)
vvilld@lemmy.world · 1 year ago

I meant, how does one run it locally? I see a lot of people saying to just "run it locally," but for someone without a background in coding that doesn't really mean much.
coldsideofyourpillow@lemmy.cafe · edited · 1 year ago

You don't need a background in coding at all. In fact, the spaces of machine learning and programming are almost completely separate.

Download Ollama. Depending on the power of your GPU, run one of the following commands:

- DeepSeek-R1-Distill-Qwen-1.5B: `ollama run deepseek-r1:1.5b`
- DeepSeek-R1-Distill-Qwen-7B: `ollama run deepseek-r1:7b`
- DeepSeek-R1-Distill-Llama-8B: `ollama run deepseek-r1:8b`
- DeepSeek-R1-Distill-Qwen-14B: `ollama run deepseek-r1:14b`
- DeepSeek-R1-Distill-Qwen-32B: `ollama run deepseek-r1:32b`
- DeepSeek-R1-Distill-Llama-70B: `ollama run deepseek-r1:70b`

Bigger models mean better output, but also longer generation times.
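Once a model is pulled, you can also call it from a script instead of the interactive prompt. Here is a minimal sketch using the official ollama Python package (assuming the Ollama server is already running locally and you've downloaded one of the tags above; the prompt text is just an example, not from the thread):

```python
# Minimal sketch: query a locally running DeepSeek-R1 distill via the
# official `ollama` Python package (pip install ollama). Assumes the
# Ollama server is running and `ollama run deepseek-r1:7b` has already
# pulled the model.
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",  # swap in whichever tag matches your GPU
    messages=[{"role": "user", "content": "What is 0xFF in decimal?"}],
)

# Dict-style access works across ollama-python versions.
print(response["message"]["content"])
```

The same call works for any of the tags listed above; only the model string changes.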