Setting Up a Llama 3 Environment on Linux
1. Environment Preparation and Installation
Install Ollama with the official one-line script:

curl -fsSL https://ollama.com/install.sh | sh

Verify the installation:

ollama --version

Alternatively, download the release tarball and unpack it to a directory of your choice:

curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama.tgz
sudo mkdir -p /opt/ollama && sudo tar -xzf ollama.tgz -C /opt/ollama
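If you installed from the tarball, the sketch below puts the binary on the PATH and creates the dedicated service account referenced by the systemd unit in the next section. It roughly mirrors what the official install script does; the symlink location and useradd flags are illustrative assumptions, not part of the original steps.

# Expose the binary extracted to /opt/ollama
sudo ln -s /opt/ollama/bin/ollama /usr/local/bin/ollama
# Create a system user/group named "ollama" for the service to run as
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
# Confirm the binary is reachable
ollama --version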
2. Starting the Service and Remote Access

Create a systemd unit so Ollama runs as a background service:

sudo vim /etc/systemd/system/ollama.service

[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/opt/ollama/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
Environment="OLLAMA_MODELS=/opt/ollama/models"
[Install]
WantedBy=multi-user.target

Reload systemd, enable the service at boot, and start it:

sudo systemctl daemon-reload && sudo systemctl enable ollama && sudo systemctl start ollama

Check the service locally (an "Ollama is running" response means it is working):

curl http://127.0.0.1:11434

Because OLLAMA_HOST is bound to 0.0.0.0:11434, the API is also reachable from other machines:

curl http://<server-IP>:11434

If the port is already in use, find the PID with sudo lsof -i :11434, kill it, and start the service again.
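When the service fails to start or the remote test times out, a short diagnostic sketch may help (the firewall-cmd lines assume firewalld, the same tool used for Open WebUI below; substitute ufw or iptables as appropriate):

# Service state and recent logs
systemctl status ollama
journalctl -u ollama --no-pager -n 50
# Confirm the listener is bound to 0.0.0.0 rather than 127.0.0.1
ss -tlnp | grep 11434
# Open the API port if the host firewall blocks remote access
sudo firewall-cmd --permanent --add-port=11434/tcp && sudo firewall-cmd --reload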
3. Running Llama 3 and Basic Commands

Pull and chat with the default 8B model:

ollama run llama3

Run the 70B variant (needs far more memory):

ollama run llama3:70b

If the background service is not running, start it in the foreground with ollama serve, or restart the systemd service:

sudo systemctl restart ollama

Confirm the port is listening:

netstat -tulpn | grep 11434

Call the HTTP API directly (the example prompt asks for a Chinese-language introduction to Llama 3):

curl http://localhost:11434/api/generate -d '{ "model":"llama3", "prompt":"请用中文介绍Llama 3", "stream":false }'

Models are stored under ~/.ollama/models by default; to move them, set OLLAMA_MODELS=/your/path and declare it in the service unit, as done above with /opt/ollama/models.
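Besides /api/generate, Ollama also exposes a multi-turn /api/chat endpoint and a set of model-management subcommands. A minimal sketch (the prompt text is illustrative):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "Summarize what Llama 3 is in one sentence." }
  ],
  "stream": false
}'

# Everyday model management
ollama list            # show locally downloaded models
ollama pull llama3     # download or update a model without starting a chat
ollama rm llama3:70b   # delete a model to free disk space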
4. Open WebUI Visual Interface

Run Open WebUI in Docker and point it at the local Ollama service:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Open http://localhost:3000 and select Llama 3 in the interface. If the container cannot reach Ollama on the host, either run it with --network=host (avoids the container-to-host port problem) or keep the --add-host=host.docker.internal:host-gateway mapping shown above. To reach the interface from other machines, open the port in the firewall:

firewall-cmd --permanent --add-port=3000/tcp && firewall-cmd --reload

An existing model directory can also be mounted into the container:

docker run ... -v /opt/ollama/models:/app/backend/models ...
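If host.docker.internal resolution is unreliable on your Docker version, a host-network variant is a common workaround; the OLLAMA_BASE_URL value below is an assumption based on the port configured earlier, not taken from the original guide:

# Host networking: -p mappings are ignored, the UI listens on port 8080 directly
docker run -d --network=host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main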
5. Common Issues and Optimization

If model weights fill the system disk, point OLLAMA_MODELS at a larger volume (as in the service unit above) instead of the default ~/.ollama/models. If port 3000 is already taken, change the host side of the Open WebUI mapping, e.g. -p 5000:8080.
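A sketch of relocating the model directory for the systemd service using a drop-in override; the /data/ollama path is an example, not from the guide:

# Create the new directory and give the service account access
sudo mkdir -p /data/ollama/models
sudo chown -R ollama:ollama /data/ollama/models
# Add a drop-in override rather than editing ollama.service in place
sudo systemctl edit ollama
# In the editor, add:
#   [Service]
#   Environment="OLLAMA_MODELS=/data/ollama/models"
sudo systemctl daemon-reload && sudo systemctl restart ollama

Then re-pull models or move the existing files from the old directory into the new path.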