# Deploying DeepSeek on your server in just a few clicks

DeepSeek is a powerful open-source AI model that can run without a GPU. Combined with Ollama, it lets you run AI locally with full control over performance and privacy.

Hosting DeepSeek on your own server gives you a high level of security: your prompts and data never leave the machine, so there is no risk of interception through a third-party API. The official DeepSeek servers are also frequently overloaded; by deploying the model locally, you get the AI's resources exclusively for your own needs instead of sharing them with other users.

Note: DeepSeek is a third-party development. SpaceCore Solution LTD is not responsible for the operation of this software.

## What Are the System Requirements?

Let’s compare different models with varying requirements. Each model can run on both CPU and GPU. However, since we are using a server, this guide will focus on the installation and operation of the model on CPU power.

<table><thead><tr><th width="174">Model Version</th><th width="107">Size</th><th width="100">RAM</th><th width="131">Suitable Plan</th><th>Capabilities</th></tr></thead><tbody><tr><td>deepseek-r1:1.5b</td><td>1.1 GB</td><td>4 GB</td><td><a href="https://spacecore.pro/virtual-servers/">Hi-CPU Galaxy</a></td><td>Simple tasks, perfect for testing and trial runs.</td></tr><tr><td>deepseek-r1:7b</td><td>4.7 GB</td><td>10 GB</td><td><a href="https://spacecore.pro/virtual-servers/">Hi-CPU Orion</a></td><td>Writing and translating texts. Developing simple code.</td></tr><tr><td>deepseek-r1:14b</td><td>9 GB</td><td>20 GB</td><td><a href="https://spacecore.pro/virtual-servers/">Hi-CPU Pulsar</a></td><td>Advanced capabilities in development and copywriting. Excellent balance of speed and functionality.</td></tr><tr><td>deepseek-r1:32b</td><td>19 GB</td><td>40 GB</td><td><a href="https://spacecore.pro/virtual-servers/">Hi-CPU Infinity</a></td><td>Comparable in power to ChatGPT o1 mini. In-depth data analysis.</td></tr><tr><td>deepseek-r1:70b</td><td>42 GB</td><td>85 GB</td><td><a href="https://spacecore.pro/dedicated-servers/">Minimal Dedicated Server</a></td><td>High-level computations for business tasks. Deep data analysis and comprehensive development.</td></tr><tr><td>deepseek-r1:671b</td><td>720 GB</td><td>768 GB</td><td><a href="https://spacecore.pro/dedicated-servers/">Powerful dedicated server on request</a></td><td>The most advanced DeepSeek R1 model, offering computational capabilities comparable to the latest ChatGPT versions. Recommended to be hosted on a high-performance dedicated server with NVMe drives.</td></tr></tbody></table>
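Before choosing a model, it is worth confirming that your server actually has the memory and disk space listed above. A quick check with standard Linux tools:

```shell
# Show total and available RAM; the "available" column should exceed
# the RAM requirement of the model you plan to run
free -h

# Show free disk space on the root filesystem; the model file must fit here
df -h /
```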

## DeepSeek 14B Installation

Let's install the 14B model, chosen for its good balance of performance and resource consumption. The same steps apply to any available model, so you can substitute a different version if needed.

{% hint style="info" %}
The installation is performed on a [Hi-CPU Pulsar](https://spacecore.pro/virtual-servers/) plan with **Ubuntu 22.04**, which is an ideal choice for DeepSeek.
{% endhint %}

Run the following command to update all system packages to the latest version:

```
sudo apt update && sudo apt upgrade -y
```

Ollama is a tool for downloading and running large language models locally, and it is required for deploying DeepSeek. Install and start it with:

```
curl -fsSL https://ollama.com/install.sh | sh && sudo systemctl start ollama
```
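Before pulling a model, you can quickly confirm that the installation succeeded. This is a small sketch; it assumes the install script placed the `ollama` binary on your PATH:

```shell
# Print the installed Ollama version if the binary is available
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found; re-run the install script"
fi
```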

Once the installation is complete, use the following command to download the required DeepSeek model:

```
ollama pull deepseek-r1:14b
```

{% hint style="info" %}
deepseek-r1:14b is the name of the selected model. To install a different version, simply replace it, e.g., deepseek-r1:32b
{% endhint %}

The installation process takes approximately 2 minutes on a [Hi-CPU Pulsar](https://spacecore.pro/virtual-servers/) server due to high network speed.\
Execute the following command to launch the DeepSeek model:

```
ollama run deepseek-r1:14b
```

Once started, a command-line interface will appear where you can communicate with the AI.

<figure><img src="https://287241268-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyXvdNIxNLIOcexv7AfU2%2Fuploads%2F3IDyfHfKc2JnBRHukiT4%2FTermius_HybpSyx7YW.png?alt=media&#x26;token=df9884d0-e387-47ea-bc97-ccf83747373d" alt=""><figcaption></figcaption></figure>

You can also run a single query directly from the command line, for example:

```
ollama run deepseek-r1:14b "What is heavier: 1 kg of iron or 1 kg of feathers?"
```

<figure><img src="https://287241268-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyXvdNIxNLIOcexv7AfU2%2Fuploads%2F6tMBKRWLlzpmjBCtdhTJ%2Ftg_image_299192087.png?alt=media&#x26;token=53035879-c36f-4399-a891-490bbcbffbd3" alt=""><figcaption></figcaption></figure>

To start the Ollama server and enable HTTP API access (on port 11434 by default), use:

```
ollama serve
```
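Once the server is running, other applications can talk to it over HTTP. The sketch below assumes the default port 11434 and uses Ollama's `/api/generate` endpoint; the prompt is just an example:

```shell
# JSON request body: "stream": false returns the whole answer in one JSON object
BODY='{"model": "deepseek-r1:14b", "prompt": "What is heavier: 1 kg of iron or 1 kg of feathers?", "stream": false}'

# Query the local Ollama API; the fallback message prints if the server is not reachable
curl -s http://localhost:11434/api/generate -d "$BODY" || echo "Ollama server is not reachable"
```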

To check installed models and their status:

```
ollama list
```
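If you later want to check which models are loaded in memory or free up disk space, Ollama provides `ollama ps` and `ollama rm`. A minimal sketch, guarded so it is safe to paste on a machine where Ollama is not installed:

```shell
# Show models currently loaded in memory, then remove one to free disk space
if command -v ollama >/dev/null 2>&1; then
  ollama ps
  ollama rm deepseek-r1:14b
else
  echo "ollama is not installed"
fi
```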

<figure><img src="https://287241268-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FyXvdNIxNLIOcexv7AfU2%2Fuploads%2FKXZ5PO7VDt8Vq29NTesv%2Ftg_image_2963635539.png?alt=media&#x26;token=dff75425-a22f-4442-8c0d-1c371b18985e" alt=""><figcaption></figcaption></figure>

## Conclusion

We highly recommend deploying DeepSeek R1 models on servers with sufficient RAM. For stable operation, choose a server with some RAM headroom beyond the model's requirement and fast NVMe storage.

The server plans listed in the comparison table are perfectly optimized for DeepSeek AI hosting. We guarantee the quality and reliability of our servers at SpaceCore. Entrust your server deployment to us and build a robust infrastructure for seamless and efficient AI usage in your business!
