1 Overview and Prerequisites
2 Option 1: Run Locally with Ollama, Zero Setup (Recommended)
Verify the installation with `ollama -v`; if a version number is printed, Ollama is installed correctly. Then pull and run the model size that fits your hardware:

ollama run deepseek-r1:1.5b
ollama run deepseek-r1:8b
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b

Use `ollama list` to see which models are installed locally, and type `/bye` inside a chat session to exit.

3 Option 2: Load Locally with Python Transformers (Advanced)
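Besides the interactive CLI, a running Ollama instance also exposes a local HTTP API, by default at `localhost:11434`. A minimal standard-library sketch of calling its `/api/generate` endpoint (the model tag and prompt below are illustrative, and the server must already be running):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming generate request for Ollama's /api/generate."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    """POST the prompt and return the generated text; requires the Ollama server to be up."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("deepseek-r1:8b", "Write a quicksort in Python"))
```

With `"stream": False` the server returns one JSON object whose `response` field holds the full completion; omit it to receive a stream of JSON lines instead.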
First, create and activate a virtual environment (Windows shown):

python -m venv deepseek_env && deepseek_env\Scripts\activate

Install PyTorch with CUDA 11.8 support, then Transformers (quote the version specifier so the shell does not interpret `>`):

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install "transformers>=4.34.0"

Optionally, clone the official repository:

git clone https://github.com/deepseek-ai/DeepSeek-R1.git

Then load the model and run a test generation:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-7B", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-7B")
inputs = tokenizer("Write a quicksort in Python", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

On machines with limited VRAM, load the model in 4-bit instead (requires the bitsandbytes package):

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.float16, load_in_4bit=True)

4 Common Issues and Troubleshooting
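To see why 4-bit loading matters, a rough back-of-the-envelope for the memory taken by model weights alone (parameter count × bits per parameter; this ignores activations and the KV cache, so real usage is somewhat higher):

```python
def weights_gb(params_billion: float, bits: int) -> float:
    """Approximate memory for model weights alone, in decimal gigabytes."""
    # params * (bits / 8) bytes per parameter, converted to GB
    return params_billion * 1e9 * bits / 8 / 1e9

# float16 (16 bits) vs. 4-bit quantization for a 7B-parameter model
print(weights_gb(7, 16))  # 14.0 GB at fp16
print(weights_gb(7, 4))   # 3.5 GB at 4-bit
```

Quantizing from fp16 to 4-bit cuts weight memory by 4x, which is what brings a 7B model within reach of consumer GPUs.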
If the `ollama` command is not recognized, run `ollama -v` to verify the installation; restart the terminal, or the machine, if necessary. If a model download is interrupted, re-running `ollama run deepseek-r1:<size>` resumes it; speeds improve on a stable network. For GPU problems, run `nvidia-smi` to check the driver and CUDA status.
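A "command not found" error right after an apparently successful install usually means the binary is not on the current shell's PATH. A quick standard-library check (the `ollama` name assumes the default install):

```python
import shutil

def tool_available(name: str) -> bool:
    """Return True if `name` resolves to an executable on the current PATH."""
    return shutil.which(name) is not None

if __name__ == "__main__":
    if not tool_available("ollama"):
        print("ollama not found on PATH; restart the terminal or reinstall")
```

If the check fails in a freshly opened terminal as well, the installer likely did not update PATH and a reinstall (or a manual PATH entry) is needed.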