Preface

  1. Many of the existing articles on this setup are old and target long-outdated version pairings, so this guide uses recent versions.
  2. Version compatibility matters: here the latest onnxruntime-gpu (1.16.0) is paired with CUDA 11.7.
  3. A minimal snippet for verifying that the setup works is given at the end.

Environment versions

This guide uses VS 2022, CUDA 11.7, cuDNN 8.9.1, and onnxruntime-gpu 1.16.0.

Downloading onnxruntime

On the releases page, select the version matching your environment, click the download link, and save the package to a path you will remember.

Visual Studio

Create a new project

Go to Tools -> NuGet Package Manager -> Package Manager Console and add the NuGet package you just downloaded to the package sources in the settings.
Once added, click Browse and install the package from the right-hand pane.

Add the dependencies

Open the Property Manager and, under VC++ Directories, add the header path to Include Directories:

build/native/include

Then add the library directory:

runtime/win-x64

Finally, under Linker -> Input, add the library to Additional Dependencies:
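For reference, the entry under Additional Dependencies is the import library that ships in the package's native folder. For the onnxruntime-gpu NuGet package this is typically (verify against the actual file name in your package):

```
onnxruntime.lib
```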

Environment variables

Add the folder containing onnxruntime.dll to the system PATH environment variable, so the DLL can be found at run time.
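On Windows this can be done once from a command prompt; the path below is a placeholder, so substitute the folder that actually contains onnxruntime.dll on your machine:

```shell
:: Append the onnxruntime.dll folder to the user PATH (placeholder path).
:: Note: setx truncates values longer than 1024 characters; for long PATHs,
:: edit the variable through the System Properties dialog instead.
setx PATH "%PATH%;C:\path\to\onnxruntime\runtime\win-x64"
```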

Testing the setup

#include <iostream>
#include <onnxruntime_cxx_api.h>

int main(int argc, char* argv[]) {
    // Creating an Ort::Env is enough to confirm that the headers are found
    // at compile time and onnxruntime.dll is found at run time; the program
    // should build, run, and exit cleanly.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "test");
    return 0;
}
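To confirm that the GPU package (not the CPU-only one) is in use, a slightly longer sketch can list the execution providers compiled into the build and, if CUDA is among them, enable it on the session options. This assumes the `Ort::GetAvailableProviders()` and `Ort::SessionOptions::AppendExecutionProvider_CUDA()` APIs from onnxruntime_cxx_api.h:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "cuda-check");

    // List the execution providers available in this onnxruntime build.
    // The GPU package should report "CUDAExecutionProvider".
    std::vector<std::string> providers = Ort::GetAvailableProviders();
    for (const auto& p : providers)
        std::cout << p << "\n";

    bool has_cuda = std::find(providers.begin(), providers.end(),
                              "CUDAExecutionProvider") != providers.end();
    if (has_cuda) {
        // Enable the CUDA execution provider (device 0, default settings).
        Ort::SessionOptions so;
        OrtCUDAProviderOptions cuda_options{};
        so.AppendExecutionProvider_CUDA(cuda_options);
        std::cout << "CUDA execution provider enabled\n";
    } else {
        std::cout << "CUDA execution provider not available\n";
    }
    return 0;
}
```

If the CPU-only package was installed by mistake, the provider list will contain only "CPUExecutionProvider".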