ONNX Runtime GPU on Windows 1.13.1

| ONNX Runtime version | ONNX version | ONNX opset version | ONNX ML opset version | ONNX IR version | CUDA | cuDNN |
|---|---|---|---|---|---|---|
| 1.15 | 1.14 | 19 | 3 | 8 | | |
| 1.14 | 1.13 | 18 | 3 | 8 | | |
| 1.13 | 1.12 | 17 | 3 | 8 | 11.6/11.4 | 8.2.4.15 |
| 1.12 | 1.12 | 17 | 3 | 8 | | |
| 1.11 | 1.11 | 16 | 2 | 8 | | |
| 1.10 | 1.10 | 15 | 2 | 8 | | |
| 1.9 | 1.10 | 15 | 2 | 8 | 11.4 | 8.2.2.26 |
| 1.8 | 1.9 | 14 | 2 | 7 | | |
| 1.7 | 1.8 | 13 | 2 | 7 | 11.0.3 | 8.0.2.39 |
| 1.6 | 1.8 | 13 | 2 | 7 | | |

For CUDA 11.6, the matching cuDNN version is 8.2; cuDNN 8.4 or 8.5 will NOT work!
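The CUDA/cuDNN pairings from the table can be encoded as a small lookup so a script can fail fast before loading a model. This is a sketch, not part of the onnxruntime API; the helper name `required_cuda_cudnn` is hypothetical:

```python
# Hypothetical helper encoding the table above: ONNX Runtime GPU
# version -> (CUDA, cuDNN) that the prebuilt Windows wheels expect.
ORT_CUDA_CUDNN = {
    "1.13": ("11.6/11.4", "8.2.4.15"),
    "1.9":  ("11.4",      "8.2.2.26"),
    "1.7":  ("11.0.3",    "8.0.2.39"),
}

def required_cuda_cudnn(ort_version: str):
    """Return the (CUDA, cuDNN) pair for an ONNX Runtime version, or None."""
    return ORT_CUDA_CUDNN.get(ort_version)
```

For example, `required_cuda_cudnn("1.13")` returns `("11.6/11.4", "8.2.4.15")`, matching the 1.13 row of the table.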

TensorRT

TensorRT 8.0 GA Update 1 == TensorRT-8.0.3.4, CUDA 11.1~11.3, cuDNN 8.2

TensorRT 8.2 GA Update 4 == TensorRT-8.2.5.1, CUDA 11.1~11.4, cuDNN 8.2

TensorRT 8.4 GA Update 2 == TensorRT-8.4.3.1, CUDA 11.1~11.6, cuDNN 8.4

TensorRT 8.5 GA Update 2 == TensorRT-8.5.3.1, CUDA 11.1~11.8, cuDNN 8.6
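The TensorRT mappings above can likewise be turned into a compatibility check. This is a sketch under the assumption that each GA release supports the full CUDA range listed; the names `TRT_COMPAT` and `cuda_supported` are illustrative, not TensorRT APIs:

```python
# Sketch encoding the TensorRT GA releases above: each build maps to
# its supported CUDA range (inclusive) and required cuDNN version.
TRT_COMPAT = {
    "8.0.3.4": {"cuda": ("11.1", "11.3"), "cudnn": "8.2"},
    "8.2.5.1": {"cuda": ("11.1", "11.4"), "cudnn": "8.2"},
    "8.4.3.1": {"cuda": ("11.1", "11.6"), "cudnn": "8.4"},
    "8.5.3.1": {"cuda": ("11.1", "11.8"), "cudnn": "8.6"},
}

def cuda_supported(trt_version: str, cuda: str) -> bool:
    """True if a CUDA version (e.g. '11.6') is inside the build's range."""
    lo, hi = TRT_COMPAT[trt_version]["cuda"]
    as_key = lambda v: tuple(int(p) for p in v.split("."))
    return as_key(lo) <= as_key(cuda) <= as_key(hi)
```

So CUDA 11.6 passes for TensorRT-8.4.3.1 but fails for TensorRT-8.2.5.1, which tops out at 11.4.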

ONNX

When exporting to ONNX, set batch_size=1 and dynamic=False.
