Model Inference with onnxruntime-gpu
onnxruntime
microsoft/onnxruntime is an open-source library for running machine learning models. It is aimed at people working on machine learning and deep learning, particularly those who need to deal with models and operators from different frameworks when developing and deploying. It runs ONNX models exported from frameworks such as TensorFlow, PyTorch, and Caffe, with high performance and broad compatibility.
Project page: https://gitcode.com/gh_mirrors/on/onnxruntime
1. Install onnxruntime-gpu
Recent versions of onnxruntime-gpu support both GPU and CPU inference.
Uninstall the old 1.7.1 CPU-only version and install the new GPU version:
pip uninstall onnxruntime
pip install onnxruntime-gpu
Check that the installation succeeded:
>>> import onnxruntime
>>> onnxruntime.__version__
'1.10.0'
>>> onnxruntime.get_device()
'GPU'
>>> onnxruntime.get_available_providers()
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
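For scripted deployments, the same checks can run programmatically so a wrong install fails fast. A minimal sketch using only the calls shown above:

import onnxruntime

print(onnxruntime.__version__)    # e.g. '1.10.0'
print(onnxruntime.get_device())   # 'GPU' for the onnxruntime-gpu build

# Fail fast if this build does not ship the CUDA provider at all.
available = onnxruntime.get_available_providers()
assert 'CUDAExecutionProvider' in available, f'No CUDA provider; got {available}'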
2. Modify the inference code
Add the providers argument when creating the inference session to choose the execution backend. Simply list the providers your environment supports.
session = onnxruntime.InferenceSession('yolov5s.onnx', None)
# change to:
session = onnxruntime.InferenceSession(
    'yolov5s.onnx',
    providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
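For reference, here is a complete minimal run with the new call. This is a sketch rather than the article's exact pipeline: it assumes the standard YOLOv5 export (one float32 input of shape 1x3x640x640) and reads the input name from the model instead of hard-coding it:

import numpy as np
import onnxruntime

# Providers are tried in the order given; ONNX Runtime falls back to the
# next entry when a provider cannot be created.
session = onnxruntime.InferenceSession(
    'yolov5s.onnx',
    providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])

input_name = session.get_inputs()[0].name          # 'images' in the standard export
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})   # None = return all outputs
print([o.shape for o in outputs])
print(session.get_providers())                     # providers actually registered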
If the inference code reports that neither TensorRT nor CUDA can be used, as shown below, then your ONNX Runtime, TensorRT, and CUDA versions do not match each other.
2022-08-09 15:38:31.386436528 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:509 CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
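One way to avoid requesting providers the local build does not even ship is to intersect the preferred list with get_available_providers(). A sketch; note this only filters by what the package was compiled with, so a session can still fall back (with the warning above) if the CUDA or TensorRT libraries fail to load at runtime:

import onnxruntime

preferred = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
available = onnxruntime.get_available_providers()

# Keep the preferred order, but drop providers this build does not include.
providers = [p for p in preferred if p in available]

session = onnxruntime.InferenceSession('yolov5s.onnx', providers=providers)
print(session.get_providers())  # what was actually created for this session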
The corresponding versions are listed in the official requirements tables: https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements and https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements.
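To collect the version numbers to check against those tables, something like the following works (a sketch; it assumes nvcc and nvidia-smi are on PATH and that TensorRT was installed with its Python bindings):

import subprocess
import onnxruntime

print('onnxruntime-gpu:', onnxruntime.__version__)

try:
    import tensorrt
    print('TensorRT:', tensorrt.__version__)
except ImportError:
    print('TensorRT Python package not installed')

# CUDA toolkit and driver versions from the NVIDIA command-line tools.
print(subprocess.run(['nvcc', '--version'], capture_output=True, text=True).stdout)
print(subprocess.run(['nvidia-smi'], capture_output=True, text=True).stdout)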