To run DeepSeek locally, we need to install Ollama and then pull one of the deepseek-r1 models: 1.5b, 7b, 8b, 14b, 32b, or 70b. [[br]]
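As a rough sketch, the install-and-pull step might look like the following (the install script is the one published at ollama.com; the model tag is one of the sizes listed above, and the download runs into gigabytes):

```shell
# Install Ollama (official install script for Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run a DeepSeek model; pick a tag that fits your RAM/VRAM
ollama run deepseek-r1:14b
```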
The deepseek-r1:14b model is a 14-billion-parameter model. The on-disk size of such models varies with the precision used (e.g., FP16, INT8), but as a rough estimate:[[br]]

 * A 14B-parameter model in FP16 precision typically requires around 28 GB of disk space (2 bytes per parameter).

 * If the model is quantized (e.g., to INT8), it can be smaller, around 14 GB (1 byte per parameter).
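The estimates above are just parameter count times bytes per parameter; a small calculation makes that explicit (note that the quantized GGUF files Ollama actually ships use mixed precisions, so real downloads will differ somewhat):

```python
def model_size_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate model size in GB (1 GB = 10^9 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# 14B parameters at FP16 (2 bytes each) -> 28.0 GB
print(model_size_gb(14, 2))

# 14B parameters at INT8 (1 byte each) -> 14.0 GB
print(model_size_gb(14, 1))
```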