Hello everyone! Over the past few weeks, you’ve probably heard about how impressively DeepSeek has been performing. As an open-source AI model, DeepSeek has shown incredible capabilities, rivaling even OpenAI’s most advanced model, o1. In today’s tutorial, I’m going to show you how to easily set up the DeepSeek R1 model on your own device locally—completely free!
NOTE: Before continuing, please note that you will need a PC or phone with decent specifications: a 64-bit CPU and at least 8 GB of RAM.
Steps
1. Install Ollama
If you’re using Windows or Mac, you can simply visit the official Ollama website and download the installer from there. Alternatively, if you prefer the terminal (as Linux users typically do), you can run one of the following commands:
1. Windows
# Scoop
scoop bucket add extras
scoop install extras/ollama-full
# Chocolatey
choco install ollama
# Winget
winget install --id=Ollama.Ollama -e
2. Mac
# Brew
brew install ollama
3. Linux
# Arch Linux
yay -Sy ollama
# NixOS
nix-env -iA nixos.ollama
# Manual
curl -fsSL https://ollama.com/install.sh | sh
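Whichever route you choose, it’s worth a quick sanity check that the binary actually ended up on your PATH (a minimal sketch; the exact version string varies by release):

```shell
# Confirm ollama is installed and reachable from your shell.
if command -v ollama >/dev/null 2>&1; then
  ollama --version   # prints the installed version
else
  echo "ollama not found in PATH; re-check the install step"
fi
```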
2. Start Ollama
On Arch Linux (or any systemd-based distro), you can enable and start the Ollama service in one step with the following command:
systemctl enable --now ollama
On Windows and macOS, the desktop app typically starts the Ollama server in the background for you. Alternatively, on any platform you can start it manually:
ollama serve
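By default the server listens on http://localhost:11434 (configurable via the OLLAMA_HOST environment variable), and a plain GET to that address should answer with "Ollama is running" once it’s up:

```shell
# Health check: prints "Ollama is running" when the server is up.
curl -s --max-time 3 http://localhost:11434 \
  || echo "server not reachable -- is 'ollama serve' running?"
```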
3. Install DeepSeek R1
To install DeepSeek R1, the process is quite straightforward. You can simply run the following command:
ollama run deepseek-r1:1.5b
NOTE: The 1.5b tag is the most lightweight option for DeepSeek R1. It runs perfectly fine on my ThinkPad L380, which is equipped with an i5-8250U processor and 8GB of RAM.
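Besides the interactive chat that `ollama run` drops you into, you can pass a one-shot prompt directly and list the models you’ve pulled (a sketch; assumes the download from the command above finished):

```shell
if command -v ollama >/dev/null 2>&1; then
  # List locally available models; deepseek-r1:1.5b should appear.
  ollama list
  # Ask a single question and exit, instead of opening an interactive chat.
  ollama run deepseek-r1:1.5b "Explain what a mutex is in one sentence."
fi
```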
4. Enjoy!
Yes, that’s all. It’s quite straightforward, right? Now, you can use Deepseek R1 directly from your terminal. There’s also a way to connect it to a decent frontend, which you can often find on GitHub, but that’s a topic for another day. Stay tuned for updates!
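As a small teaser for that frontend topic: frontends generally talk to Ollama through its local REST API, which you can also poke at yourself with curl (a minimal sketch; assumes the server and model from the steps above):

```shell
# Send one prompt to the /api/generate endpoint; with "stream": false
# the full response comes back as a single JSON object.
curl -s --max-time 120 http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?",
  "stream": false
}' || echo "request failed -- is the Ollama server running?"
```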