How to Install and Run DeepSeek-R1 AI Locally on Your Computer

DeepSeek AI has been gaining a lot of popularity recently, primarily because of its advanced capabilities, low API cost, and reasoning features. However, some users have raised privacy and data-sharing concerns since the chatbot was developed by a Chinese firm, and others have accused the bot of censoring information.

So if you want to try DeepSeek AI but are concerned about your data, the best way to use it is to install it locally on your device. Once installed, the chatbot runs entirely on your machine with no cloud dependency, so your conversations are stored on your device only.

What are the advantages of running DeepSeek AI locally?

There are several advantages to running DeepSeek AI locally. The most important is data privacy and security: since all your data is stored locally on your device, you have full control over it.

Further, since a local installation removes any dependence on the internet, you can use the AI anywhere, anytime. Response times can also improve because there is no network latency.

How to Install and Run DeepSeek-R1 AI Locally

There are two ways to run DeepSeek AI locally: Method 1 uses LM Studio, while Method 2 uses Ollama. Neither requires any technical expertise, and both are easy to set up.

Run DeepSeek AI Locally using LM Studio

Step 1: Download LM Studio v0.3.0 or later for your device. (Download Link)

Step 2: Once the download is finished, run the .exe file and complete the installation. It takes only a few seconds.

Step 3: Open the installed application, click the “Search” icon, select “DeepSeek R1 Distill (Qwen 7B),” and click the “Download” button. Since the file is around 4.5 GB, the download may take some time.

Step 4: After the file is downloaded, click the “Chat” icon and then click “Select a model to load” (or press CTRL+L). Select DeepSeek R1 and click “Load Model.”

That’s it. You will now be able to run DeepSeek R1 locally. If you get an error, go back to Step 4 and drag the “GPU offload” slider down to the minimum (or zero); this usually resolves the issue.
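If you’d rather script against the model than use the chat window, LM Studio can also expose a local OpenAI-compatible server (enabled from its Developer/Server tab; port 1234 is the usual default, but check your version). The model identifier below is a placeholder, not a confirmed name — a minimal sketch, assuming those defaults:

```python
import json
import urllib.request

# Assumed defaults: LM Studio's local server on port 1234 with an
# OpenAI-compatible /v1/chat/completions endpoint.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_payload(prompt, model="deepseek-r1-distill-qwen-7b"):
    # The model name here is a guess; query the server's /v1/models
    # endpoint to see the exact identifier of the loaded model.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt):
    # Send the request to the local server and return the reply text.
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because nothing leaves localhost, scripting this way keeps the same privacy benefits as the chat window.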

Run DeepSeek AI locally using Ollama

Step 1: Download Ollama and install it on your device. (Download Link)

Step 2: Open a terminal, copy the command below, paste it in, and hit Enter.

ollama run deepseek-r1:1.5b

Step 3: Once the download is finished, you can chat with DeepSeek R1 directly in the terminal. To exit, type “/bye”, press CTRL+D, or simply close the terminal.
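Beyond the interactive terminal chat, Ollama also runs a local REST API (on port 11434 by default), which you can call from your own scripts. A minimal sketch, assuming the model from the command above has already been pulled:

```python
import json
import urllib.request

# Ollama's local REST API; 11434 is its default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="deepseek-r1:1.5b"):
    # stream=False asks for one complete JSON response instead of
    # a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt):
    # POST the prompt to the local Ollama server and return the reply.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Swap the model name for any other variant you have pulled (for example, a larger DeepSeek R1 distill) without changing the rest of the code.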

Conclusion

These were two simple methods to install and run DeepSeek AI locally on your device. Keep in mind that the local model is not connected to the internet, so it cannot look up current information and may get some facts or answers wrong. It’s therefore best to use the local bot for general purposes.

For more advanced use, you can turn to the tool’s web or app version. Local installation protects your data, but it comes with some trade-offs in overall performance and reliability.


Chandan

I’m a consumer tech writer passionate about breaking down the latest gadgets and smartphones into easy-to-understand guides and news. I love exploring new tech firsthand and sharing practical, relatable insights to help readers stay ahead in the ever-evolving world of technology.
