Calling sklearn models from C++ via onnxruntime
Overview
Python's sklearn makes it easy to train classical machine-learning models, but calling them for inference from C++ tends to run into all kinds of unexpected problems. A clean way around this is to export the sklearn model to ONNX and have C++ run it through onnxruntime.
Training in sklearn and saving the model as ONNX
Using an already-trained random forest model as an example.
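The article assumes a random forest already trained and pickled as rfc.pkl. For completeness, here is a minimal sketch of how such a file might be produced (the dummy data and hyperparameters are placeholders, not from the original article):

# Sketch: train a RandomForestClassifier on placeholder data and pickle it.
# The 5-feature width matches the DoubleTensorType([None, 5]) used below.
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(100, 5)           # placeholder features, 5 columns
y = np.random.randint(0, 2, 100)     # placeholder binary labels
model = RandomForestClassifier(n_estimators=100).fit(X, y)

with open("rfc.pkl", "wb") as f:
    pickle.dump(model, f)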
The pickled model is then loaded and converted to ONNX:

import pickle
import onnxmltools
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import DoubleTensorType

# Load the pickled random forest
with open("rfc.pkl", "rb") as f:
    model = pickle.load(f)

# Define the type and shape of the input: rows of 5 double-precision features.
# The name 'float_input' is arbitrary, but the C++ code must use the same string.
initial_type = [('float_input', DoubleTensorType([None, 5]))]
onnx_model = convert_sklearn(model, initial_types=initial_type)
onnxmltools.utils.save_model(onnx_model, "model.onnx")
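The input/output node names used by the C++ code below must match the names stored in the model. If in doubt, they can be read back from the exported file; a small sketch:

# Sketch: print the graph's input/output names recorded in model.onnx,
# so the C++ side can use exactly the same strings.
import onnx

m = onnx.load("model.onnx")
print([i.name for i in m.graph.input])    # e.g. ['float_input']
print([o.name for o in m.graph.output])   # e.g. ['output_label', 'output_probability']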
# Verify the exported model in Python before moving to C++
import onnxruntime as rt
import numpy as np

sess = rt.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
probability_name = sess.get_outputs()[1].name
# Replace with real features; np.random.rand yields float64, matching DoubleTensorType
feature = np.random.rand(1, 5)
pred_onx = sess.run([label_name, probability_name], {input_name: feature})
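One thing to be aware of: for classifiers, skl2onnx wraps output_probability in a ZipMap by default, i.e. a sequence of label-to-probability maps, which is awkward to parse from C++. If only raw probabilities are needed, ZipMap can be disabled at conversion time; a sketch (the options dict is keyed by the model object):

# Sketch: convert with ZipMap disabled so output_probability becomes a plain
# probability tensor instead of a sequence of maps (easier to read from C++).
onnx_model = convert_sklearn(
    model,
    initial_types=initial_type,
    options={id(model): {"zipmap": False}},
)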
Calling from C++
#include <iostream>
#include <vector>
#include <assert.h>
#include <core/session/onnxruntime_cxx_api.h> // also pulls in the C API; no CUDA headers needed for CPU-only inference
int main(int argc,char** argv)
{
Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "Default");
#ifdef _WIN32
const wchar_t* model_path = L"model.onnx";
#else
const char* model_path = "model.onnx";
#endif
Ort::Session session{env, model_path, Ort::SessionOptions{nullptr}}; // run on CPU with default session options
Ort::AllocatorWithDefaultOptions allocator;
// Query the number of input and output nodes in the model
size_t num_input_nodes = session.GetInputCount();
size_t num_output_nodes = session.GetOutputCount();
// Print the input and output names stored in the model
// (note: in onnxruntime 1.13+ GetInputName/GetOutputName were replaced
// by GetInputNameAllocated/GetOutputNameAllocated)
for (size_t i = 0; i < num_input_nodes; i++)
{
char* in_name = session.GetInputName(i, allocator);
std::cout << in_name << std::endl;
}
for (size_t i = 0; i < num_output_nodes; i++)
{
char* out_name = session.GetOutputName(i, allocator);
std::cout << out_name << std::endl;
}
// These names must match the model's input/output node names printed above
std::vector<const char*> input_node_names = {"float_input"};
std::vector<const char*> output_node_names = {"output_label", "output_probability"};
// Set the input dimensions; a single row of 5 features as an example
std::vector<int64_t> input_node_dims = {1, 5};
size_t input_tensor_size = 1 * 5;
// Fill in some example input data
std::vector<double> input_tensor_values(input_tensor_size);
for (unsigned int i = 0; i < input_tensor_size; i++)
{
input_tensor_values[i] = (double)i / (input_tensor_size + 1);
std::cout << input_tensor_values[i] << std::endl;
}
// create input tensor object from data values
auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
Ort::Value input_tensor = Ort::Value::CreateTensor<double>(memory_info, input_tensor_values.data(), input_tensor_size, input_node_dims.data(), 2);
assert(input_tensor.IsTensor());
std::vector<Ort::Value> ort_inputs;
ort_inputs.push_back(std::move(input_tensor));
// score model & input tensor, get back output tensor
auto output_tensors = session.Run(Ort::RunOptions{nullptr}, input_node_names.data(), ort_inputs.data(), ort_inputs.size(), output_node_names.data(), 2);
// The first output (output_label) is the predicted class; for sklearn
// classifiers with integer classes this is an int64 tensor, not float
int64_t* label = output_tensors[0].GetTensorMutableData<int64_t>();
std::cout << *label << std::endl;
// output_tensors[1] (output_probability) defaults to a sequence of maps
// (ZipMap); disable ZipMap at conversion time (see above) to get a plain tensor
}