Changes between Version 8 and Version 9 of Deepseek


Timestamp: 02/05/25 15:49:25
Author: krit
  • Deepseek

ref [https://hub.docker.com/r/ollama/ollama here][[br]]
ref [https://www.kdnuggets.com/using-deepseek-r1-locally here][[br]]
To run deepseek locally, we need to install ollama and then pull one of the deepseek-r1 models: 1.5b, 7b, 8b, 14b, 32b, or 70b.[[br]]
The deepseek-r1:14b model is likely a 14-billion-parameter model. The size of such models can vary depending on the precision (e.g., FP16, INT8, etc.), but as a rough estimate:[[br]]

 * A 14B-parameter model in FP16 precision typically requires around 28 GB of disk space (2 bytes per parameter).
 * If the model is quantized (e.g., INT8), it could be smaller, around 14 GB (1 byte per parameter).

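The estimates above are just parameter count times bytes per parameter; a quick sketch of the arithmetic (the 14B figure is taken from the model name, actual on-disk sizes will differ somewhat due to quantization overhead and metadata):

{{{
# Rough disk-size estimate: parameters x bytes-per-parameter.
# 14B parameters at FP16 (2 bytes each) vs. INT8 (1 byte each).
params=14000000000
fp16_gb=$(( params * 2 / 1000000000 ))
int8_gb=$(( params * 1 / 1000000000 ))
echo "FP16: ${fp16_gb} GB, INT8: ${int8_gb} GB"
}}}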
{{{
~]$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama