AI Tools -- Installing Ollama and Deploying Large Language Models Locally
References
- Ollama official site: https://ollama.com/
- Ollama front-end UI: https://www.cnblogs.com/zepc007/p/18152751
- Configuring local Ollama in VS Code: https://kxq.io/archives/localai-assistedcodewritingwithollama
- Using Ollama: https://blog.csdn.net/jxjdhdnd/article/details/139184347
- Integrating Ollama with the Spring AI framework to call local LLMs: https://blog.csdn.net/qq_41712271/article/details/138389221
- [AIGC] Local deployment of ollama + open-webui: https://blog.csdn.net/kida_yuan/article/details/138435649
1. Ollama
1.1 Installation
- Download and install Ollama from the official site (https://ollama.com/).
- After installation, start Ollama and use commands such as the following to download and launch a pretrained model:
ollama run llama3
ollama run llama2
...
- Once the download completes, you can chat with the model directly in the terminal (a quick sanity check is sketched below).
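A quick way to confirm everything is working (a minimal sketch; it assumes Ollama's default local port 11434 and that the llama3 model has already been downloaded):
ollama --version                                 # confirm the CLI is on the PATH
curl http://localhost:11434                      # the background server should reply "Ollama is running"
ollama run llama3 "Say hello in one sentence."   # one-shot, non-interactive prompt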
Example models and the commands to download and run them:
Model               Parameters  Size    Command
Llama 3             8B          4.7GB   ollama run llama3
Llama 3             70B         40GB    ollama run llama3:70b
Phi 3 Mini          3.8B        2.3GB   ollama run phi3
Phi 3 Medium        14B         7.9GB   ollama run phi3:medium
Gemma               2B          1.4GB   ollama run gemma:2b
Gemma               7B          4.8GB   ollama run gemma:7b
Mistral             7B          4.1GB   ollama run mistral
Moondream 2         1.4B        829MB   ollama run moondream
Neural Chat         7B          4.1GB   ollama run neural-chat
Starling            7B          4.1GB   ollama run starling-lm
Code Llama          7B          3.8GB   ollama run codellama
Llama 2 Uncensored  7B          3.8GB   ollama run llama2-uncensored
LLaVA               7B          4.5GB   ollama run llava
Solar               10.7B       6.1GB   ollama run solar
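Besides the interactive CLI, the Ollama background service exposes a REST API on http://localhost:11434; front-end UIs and frameworks such as Spring AI (see the references above) talk to this endpoint. A minimal sketch, assuming llama3 has already been pulled:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'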
1.2 Removing models
- List the models that are currently installed:
ollama list
- Remove a model by name:
ollama rm <model_name>
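For example, to reclaim the roughly 40GB used by the 70B Llama 3 variant (a sketch; the exact names depend on which models you have pulled):
ollama list            # shows each installed model's name and size
ollama rm llama3:70b   # delete the 70B variant and free its disk space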
2. Front-end UI configuration
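The ollama + open-webui reference above covers one common choice of front end: Open WebUI, a browser-based chat UI that connects to the local Ollama API. A minimal sketch based on its documented Docker setup (the published port and volume name are adjustable):
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
# then open http://localhost:3000 in a browser and select a local model such as llama3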
3. Configuring local llama3 in VS Code
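The VS Code reference above uses the local Ollama server as an AI coding assistant. One common route (a sketch; the choice of the Continue extension is an assumption, not prescribed by this post) is to install Continue, which supports an "ollama" provider and by default connects to http://localhost:11434:
code --install-extension Continue.continue   # install the Continue extension from the command line
# in Continue's model settings, add a model with provider "ollama" and a local model such as llama3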