Installing Ollama and Running DeepSeek Locally

This guide will walk you through the process of installing Ollama and running the DeepSeek language model on your local machine.

Prerequisites

Before you start, make sure you have:

- A machine running macOS, Windows, or Linux
- Enough free disk space and RAM for the model you plan to run (the weights are several gigabytes, depending on the variant)
- An internet connection for the initial model download

What is Ollama?

Ollama is a simple and efficient way to run large language models locally. It offers a command-line interface and takes care of model downloading, execution, and environment setup behind the scenes.

Steps to Install and Run DeepSeek Locally

1. Install Ollama

Visit Ollama’s official website (https://ollama.com) and download the installer for your operating system (macOS, Windows, or Linux).
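
On Linux, the installer can also be run directly from the terminal using the one-line install script from the Ollama site:

curl -fsSL https://ollama.com/install.sh | sh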

After installation, verify Ollama is working by running:

ollama --version

2. Run the DeepSeek Model

Ollama uses a simple pull-and-run workflow. To download and start a DeepSeek model, refer to its tag in the Ollama model library, for example deepseek-r1 (other variants such as deepseek-coder, or size-specific tags like deepseek-r1:7b, are also available):

ollama run deepseek-r1

This command will automatically:

- Pull (download) the model weights if they are not already cached locally
- Load the model into memory
- Start an interactive chat prompt in your terminal

⚠️ Depending on the size of the model and your internet speed, the initial pull may take a few minutes.
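
If you would rather download the weights ahead of time (for example, before working offline), you can pull the model without starting a chat session:

ollama pull deepseek-r1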

3. Manage Local Models

To list the models you have downloaded locally:

ollama list

And to remove a model you no longer need:

ollama rm <model-name>

4. Run DeepSeek in the Background (Optional)

If you want to keep the Ollama API server running in the background so that other applications can send it requests:

ollama serve &

You can now make API requests to http://localhost:11434, or point apps that support custom LLM endpoints (for example, Obsidian plugins or VS Code extensions) at that address.
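
As a quick sanity check, here is a minimal request against Ollama's generate endpoint (this assumes the deepseek-r1 tag pulled earlier; substitute whichever tag you are actually running):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain what Ollama does in one sentence.",
  "stream": false
}'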

5. Customize the DeepSeek Model (Optional)

You can create a custom Modelfile to tweak how DeepSeek behaves. Example:

FROM deepseek-r1
PARAMETER temperature 0.7

Then run:

ollama create deepseek-custom -f Modelfile
ollama run deepseek-custom
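
To double-check what the custom model was built from, recent Ollama versions can print its Modelfile back:

ollama show deepseek-custom --modelfile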

Troubleshooting Tips

- If Ollama reports that a model cannot be found, double-check the exact tag (for example deepseek-r1 rather than deepseek) against the library on the Ollama website, or run ollama list to see what is already on disk.
- If responses are extremely slow or the model fails to load, it may be too large for your available RAM; try a smaller variant with a lower parameter count.
- If ollama serve complains that port 11434 is already in use, an Ollama server is most likely already running (for example, started by the desktop app), and you can simply use that one.

Conclusion

You’re now set up to run DeepSeek locally using Ollama! This is a great way to experiment with open-source language models without relying on the cloud. For more details, check out the Ollama documentation or the DeepSeek GitHub.