How to Run DeepSeek R1 Locally on Windows and Mac

February 25, 2025

DeepSeek is known for giving precise answers and quickly drew comparisons with ChatGPT after its launch. Its ability to understand complex queries, generate insightful responses, and maintain contextual accuracy has propelled it up the ladder of AI innovation.

However, as AI reliance grows, so do privacy concerns. A wave of discussions surrounding data security has led many users to explore ways to run DeepSeek locally. Running it on your own device not only ensures greater control over your data but also allows for faster processing without dependence on external servers. In this article, I will walk you through how to run DeepSeek R1 locally, giving you a step-by-step guide to setting it up and making the most of its capabilities while keeping your data private.

What Is DeepSeek R1?

DeepSeek R1 is one of the latest generative AI models, making waves with its advanced reasoning and problem-solving capabilities. It has been highlighted as a powerful contender in AI, seemingly outpacing rivals in efficiency and depth. Built on a cutting-edge architecture, DeepSeek excels at complex reasoning, coding, and multilingual understanding.

It is an advanced AI model designed for strong reasoning. This means it can analyze complex problems, break them down logically, and generate well-structured, accurate answers.

How Does Its Reasoning Work?

Multi-Step Thinking: Instead of giving a simple response, DeepSeek R1 follows a step-by-step approach to solve problems, similar to human logical reasoning.

Mathematical and Logical Precision: It can process and solve math problems, programming challenges, and logic puzzles more effectively than many other models.

Context Awareness: It understands relationships between different pieces of information, allowing it to make better decisions and predictions.

Error Detection & Correction: If you present incorrect or ambiguous information, it tries to detect inconsistencies and suggest better solutions.

Advantages of Running DeepSeek Locally

Running DeepSeek R1 locally offers several benefits you won't get from the web or app versions:

Privacy & Security: Your data stays on your machine, reducing risks associated with cloud-based AI services.

Customization: You can fine-tune the model, integrate it into specific workflows, or modify parameters as needed.

No API Limits or Costs: Unlike cloud-based models that may have usage limits or require payments, running it locally gives you full access without restrictions.

Offline Access: You can use it even without an internet connection, making it reliable for uninterrupted work.

Faster Processing (If You Have Good Hardware): Running locally on a powerful GPU can sometimes be faster than relying on cloud services, especially for large-scale tasks.

System Considerations for Version Selection

Your system requirements depend on the model variant you choose. The figures below are rough guidelines for the default (quantized) Ollama builds; actual usage varies with quantization and context length:

  • 1.5B parameters → entry-level GPU with roughly 4GB VRAM, or even CPU-only

  • 7B / 8B parameters → NVIDIA RTX 3060 (8–12GB VRAM) or higher

  • 14B parameters → NVIDIA RTX 3080 (10–12GB VRAM) or higher

  • 32B parameters → NVIDIA RTX 4090 (24GB VRAM)

  • 70B parameters → NVIDIA RTX 4090 (24GB VRAM) ×2

  • 671B parameters → Multi-GPU setup (e.g., NVIDIA A100 80GB ×16)

For most personal setups, 7B or 14B offers a good balance between performance and feasibility. If you have a high-end GPU, you can push for 32B or higher.
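If you want to sanity-check these numbers yourself, a common rule of thumb (an approximation, not an official figure) is that the weights alone take roughly parameters × bytes per parameter, where Ollama's default 4-bit quantized builds use about 0.5 bytes per parameter; real usage is higher once activations and context are loaded. A quick sketch of that arithmetic:

```shell
# Rough VRAM estimate for model weights alone (rule-of-thumb, not official):
# params (billions) * bytes per parameter, converted to GiB.
# 4-bit quantization ~ 0.5 bytes/param; 16-bit ~ 2 bytes/param.
# Add ~1-2 GB on top for activations and context.
estimate_vram() {
  awk -v p="$1" -v bpp="$2" 'BEGIN { printf "%.1f GB\n", p * 1e9 * bpp / 2^30 }'
}

estimate_vram 7 0.5    # 7B at 4-bit  -> 3.3 GB
estimate_vram 14 0.5   # 14B at 4-bit -> 6.5 GB
estimate_vram 7 2      # 7B at 16-bit -> 13.0 GB, for comparison
```

This also shows why quantization matters so much: the same 7B model that fits comfortably on an 8GB card at 4-bit would not fit at full 16-bit precision.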

Run DeepSeek R1 Locally on Windows

Running DeepSeek on Windows is a great choice, especially if you’re concerned about privacy. If you want to use this AI model without an internet connection, you can set it up with Ollama. Follow these steps to install and run DeepSeek on Windows safely.

Step 1: Open your web browser, go to the Ollama website, and click "Download for Windows" to start the download.

Step 2: Once the download is complete, find the installer file and double-click it to open.

Step 3: A setup window will appear. Click "Install" and wait for the process to finish.

Step 4: After installation, Ollama will automatically start running in the background.

Now, you’re ready to install DeepSeek using PowerShell.

Step 5: Click the search bar on your taskbar, type "PowerShell", and open Windows PowerShell.

Step 6: Go to the Ollama website, click the "Models" tab, and find "deepseek-r1" in the list.

(Screenshot: selecting the deepseek-r1 model)

Step 7: Click on "Parameters", then pick the model version that suits your computer.

Tip: If you're unsure, pick a lighter model, as it puts far less load on your system.

Step 8: Click the "Copy" button next to the command needed to install DeepSeek.

(Screenshot: copying the DeepSeek model command)

Step 9: Switch to PowerShell, press CTRL + V to paste the command, and hit Enter to start the installation.

Step 10: Once the installation is finished, a confirmation message will appear—DeepSeek is now set up!

Now, to make sure everything is working, disconnect from the internet, type a message at the prompt in PowerShell, and see whether DeepSeek responds. If it does, DeepSeek is running entirely on your system.
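For reference, the command you copy from the Ollama model page boils down to a single line. The `deepseek-r1:7b` tag below is just an example; substitute the tag for the parameter size you picked. These commands assume Ollama is installed and on your PATH:

```shell
# Download (if needed) and start an interactive chat with DeepSeek R1.
# Replace the :7b tag with the size you chose on the Ollama model page.
ollama run deepseek-r1:7b

# Other useful commands while managing models:
ollama list                  # show every model installed locally
ollama rm deepseek-r1:7b     # delete a model to free disk space
```

Model downloads range from about a gigabyte for the smallest builds to tens of gigabytes for the larger ones, so the first `ollama run` can take a while.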

Run DeepSeek R1 Locally on Mac

If you're using macOS and want to run DeepSeek offline without any complicated setup, the process is simple. Follow these steps to install and use DeepSeek on your Mac with Ollama.

Step 1: Open your browser, go to the Ollama website, and click “Download for macOS” to start downloading the file.

Step 2: Once the download is complete, you need to find the file. Open Finder, go to the Downloads folder, and look for the Ollama.zip file. Double-click on it to extract the installation package.

Step 3: After extracting the file, you’ll see the Ollama application file. Double-click on it to start the installation.

Note: You may get a pop-up asking you to move Ollama to your Applications folder. Click “Move to Applications” to proceed.

Step 4: Once the installation is finished, open Launchpad (or go to your Applications folder) and click on Ollama to launch the app. A welcome screen will appear. Click “Next” to move forward.

Step 5: Ollama requires a command line tool to function properly. When prompted, click “Install” to begin this process.

At this point, your Mac will ask for administrator permission. Enter your macOS username and password, then press Enter to continue the installation.

Step 6: Once the setup is complete, Ollama will provide a run command that you need to use in Terminal. Click on the “Copy” button to copy this command to your clipboard.

Step 7: Next, to run the command, open Terminal by clicking the magnifying glass icon in the top-right corner of your screen, typing “Terminal” in the search bar, and pressing Enter.

Step 8: Once Terminal is open, click inside the window, press CMD + V to paste the copied command, then press Enter to start installing Ollama in the background.

Step 9: After the installation process finishes, return to the Ollama window and click “Finish” to exit the setup.

At this point, Ollama is successfully installed on your Mac, and you’re ready to install DeepSeek.

Step 10: To install DeepSeek, close Terminal, go back to the Ollama website, and use the search bar to look for “DeepSeek R1”. Click the first result that appears.

Step 11: Once on the DeepSeek page, you will see a section labeled “Parameters”. Here, you can choose the DeepSeek R1 model version that best fits your needs.

Tip: Larger models require more system resources but provide better accuracy. If your Mac has lower specifications, consider selecting a smaller model for better performance.

Step 12: Once you’ve selected the model version you want, look for the “Copy” button next to the Terminal command. Click on it to copy the command.

Step 13: Since DeepSeek is installed through Terminal, you need to reopen it:

  • If Terminal is already open, close the current window and open a new Terminal window.

  • If Terminal is closed, repeat Step 7 to open it again.

Once Terminal is open, click inside the window and press CMD + V to paste the DeepSeek installation command.

Step 14: DeepSeek will now start installing on your Mac. The process may take a few minutes, depending on your internet speed and system performance.

Once the installation is complete, you’re ready to start using DeepSeek offline.

Step 15: Now that DeepSeek is installed, you can test it by typing a message in Terminal. Simply enter any text and press Enter to see if DeepSeek responds.

Step 16: When you’re finished using DeepSeek, type “/bye” in Terminal and press Enter to close it.

Note: If you ever need to reopen DeepSeek, simply launch Terminal and paste the DeepSeek R1 command you copied earlier.
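Typing into the interactive Terminal session isn't the only way to talk to the model. Once the Ollama app is running, it also serves a local HTTP API on port 11434 that you can script against; the endpoint and fields below follow Ollama's documented `/api/generate` interface, with `deepseek-r1:7b` as an example tag:

```shell
# Ask the locally running model a question over Ollama's HTTP API.
# The server listens on localhost:11434 by default; "stream": false
# makes it return a single JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "List three prime numbers greater than 100.",
  "stream": false
}'
```

Because the server only listens on localhost by default, this stays entirely on your machine, which is the whole point of running it locally.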

FAQs

1. What are the minimum hardware requirements to run DeepSeek R1?

Running DeepSeek-R1 models locally requires hardware that scales with the model's parameter size. Smaller models, like the 1.5 billion parameter version, can operate on consumer-grade GPUs with at least 3.5 GB of VRAM. For the 7 billion parameter model, a GPU with a minimum of 6 GB VRAM, such as an NVIDIA RTX 3060, is recommended. Larger models, like the 70 billion parameter version, demand high-end GPUs with substantial VRAM, and the largest 671 billion parameter model necessitates multiple high-performance GPUs, making it impractical for most local setups. Having at least 16 GB of RAM (preferably 32 GB) and sufficient storage for model files is advisable.

2. Which version of DeepSeek R1 should I choose?

It depends on your hardware and what you need DeepSeek for. If you have a high-end GPU (like an RTX 3090 or better), you can try the 14B or 32B versions for more powerful performance. If you're running on a CPU or a lower-end GPU, the 7B model is a good balance between capability and hardware requirements. The 1.5B version is the lightest and best for basic tasks on minimal hardware.

3. Can I run DeepSeek R1 in a container or on a remote server?

Yes, you can run DeepSeek R1 in a container or on a remote server. As long as Ollama is installed, DeepSeek R1 can be deployed using Docker, cloud virtual machines (VMs), or on-premises servers.

  • Docker: You can create a containerized environment for DeepSeek R1, ensuring consistency across different systems.

  • Cloud VMs: Platforms like AWS, Google Cloud, or Azure allow you to run DeepSeek R1 remotely, provided they support Ollama.

  • On-Prem Servers: If you have a local machine with the required hardware, you can set up DeepSeek R1 without relying on cloud resources.
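For the Docker route, Ollama publishes an official image, and the two commands below follow its documented usage. This sketch is CPU-only; GPU passthrough additionally needs the NVIDIA Container Toolkit and a `--gpus=all` flag, and `deepseek-r1:7b` is again just an example tag:

```shell
# Start the Ollama server in a container, persisting downloaded models
# in a named volume so they survive container restarts.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Then pull and chat with DeepSeek R1 inside that container.
docker exec -it ollama ollama run deepseek-r1:7b
```

Mapping port 11434 also exposes the same HTTP API a native install provides, so anything that talks to a local Ollama server works unchanged against the container.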

Run DeepSeek Securely and Confidently

Running DeepSeek locally gives you greater control over your privacy because everything is processed directly on your device, without relying on cloud servers. This means your data and interactions remain private, reducing the risk of third-party access or data leaks. Unlike online AI models that may store or analyze your inputs, a locally-run model ensures that no external service can track or misuse your information. Plus, once installed, DeepSeek can run without an internet connection, adding another layer of security.

Hopefully, this guide on how to run DeepSeek R1 has answered your questions and helped you get it up and running!

AI enthusiast with 10 years in tech and writing. Follow me for the latest insights on AI trends and tools to boost your writing, workplace efficiency, and productivity!