Deploying Ollama + webUI on Amazon EC2
Recently, while chatting with a colleague, we wondered whether we could deploy an LLM on our own internal network, which led to this Ollama + webUI experiment.
On Linux, a single command installs Ollama:
curl -fsSL https://ollama.com/install.sh | sh
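After the script finishes, the Ollama service should already be running and listening on port 11434 (the default, matching the port mapping used later). As a quick sanity check, query the local API root endpoint, which simply reports that the service is up:
curl http://127.0.0.1:11434
# expected output: Ollama is running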
ollama --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
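With the CLI installed, pulling a model and chatting with it takes one command; llama3 here is just an example model name, and anything from the library [2] works the same way:
ollama run llama3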
Ollama also ships as an official Docker image [3]. On a GPU instance, start the container with all GPUs attached and port 11434 exposed:
docker run -d --gpus=all -p 11434:11434 --name ollama ollama/ollama
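To download and run a model inside that container, exec into it; again using llama3 as an example:
docker exec -it ollama ollama run llama3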
Besides Llama 3, Phi 3, Mistral, and Gemma 2, the Ollama model library [2] offers many other models to pull and run.
For the web interface, run Open WebUI and point OLLAMA_BASE_URL at the Ollama server; here the CUDA image is used and the UI is published on host port 3001:
docker run -d -p 3001:8080 --gpus all --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=https://example.com --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
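Once the container is up, tail its logs to confirm Open WebUI started, then open http://<EC2 public IP>:3001 in a browser; note that the EC2 security group must allow inbound TCP on port 3001 for the page to be reachable:
docker logs -f open-webui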
[1] https://ollama.com/
[2] https://ollama.com/library
[3] https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image