OpenVINO Learning Notes: Inference Engine
The Inference Engine is the execution component of OpenVINO. It supports Intel hardware platforms such as CPU, GPU, FPGA, Movidius (VPU), and GNA, and exposes an API for running inference. The core API is C++, and Python bindings are also available.
Inference Engine lib
The core library of the Inference Engine is:
Linux: libinference_engine.so
Windows: inference_engine.dll
It provides the main functionality of the API as an abstraction layer over the various capabilities, independent of any specific hardware platform.
Besides this library, running on actual hardware also requires a device-specific plugin library (roughly the equivalent of a driver layer). The plugin names differ per platform and per release; these notes are based on the 2019 R2 release.
Platform | Library (Linux) | Dependency libraries (Linux) | Library (Windows) | Dependency libraries (Windows)
---------|-----------------|------------------------------|-------------------|-------------------------------
CPU | libMKLDNNPlugin.so | libmklml_tiny.so, libiomp5md.so | MKLDNNPlugin.dll | mklml_tiny.dll, libiomp5md.dll
GPU | libclDNNPlugin.so | libclDNN64.so | clDNNPlugin.dll | clDNN64.dll
FPGA | libdliaPlugin.so | libdla_compiler_core.so, libdla_runtime_core.so | dliaPlugin.dll | dla_compiler_core.dll, dla_runtime_core.dll
MYRIAD | libmyriadPlugin.so | NA | myriadPlugin.dll | NA
HDDL | libHDDLPlugin.so | libbsl.so, libhddlapi.so, libmvnc-hddl.so | HDDLPlugin.dll | bsl.dll, hddlapi.dll, json-c.dll, libcrypto-1_1-x64.dll, libssl-1_1-x64.dll, mvnc-hddl.dll
GNA | libGNAPlugin.so | libgna_api.so | GNAPlugin.dll | gna.dll
HETERO | libHeteroPlugin.so | Same as for selected plugins | HeteroPlugin.dll | Same as for selected plugins
MULTI | libMultiDevicePlugin.so | Same as for selected plugins | MultiDevicePlugin.dll | Same as for selected plugins
On Linux, the relevant library paths can be added to LD_LIBRARY_PATH, or set by sourcing bin/setupvars.sh.
On Windows, the relevant library paths can be added to PATH, or set by running bin/setupvars.bat.
libinference_engine.so mainly provides (note that the R1 and R2 APIs differ significantly):
Reading a CNN network (InferenceEngine::CNNNetReader)
Holding CNN network information (InferenceEngine::CNNNetwork)
Creating the Inference Engine Core object (InferenceEngine::Core)
Running the network and handling input/output data (InferenceEngine::ExecutableNetwork and InferenceEngine::InferRequest)
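A minimal sketch, assuming a standard 2019 R2 installation, to confirm the headers and the core library are reachable before touching the classes above (only the version query is shown; the classes themselves are exercised in the workflow below):

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    // Query the API version exposed by libinference_engine.so
    const InferenceEngine::Version *ver = InferenceEngine::GetInferenceEngineVersion();
    std::cout << "API version: " << ver->apiVersion.major << "."
              << ver->apiVersion.minor << std::endl;
    std::cout << "Build: " << ver->buildNumber << std::endl;
    return 0;
}
```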
Inference Engine API
The typical Inference Engine API workflow is:
1: Read the IR. Use InferenceEngine::CNNNetReader to read the IR into an InferenceEngine::CNNNetwork.
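A sketch of this step, assuming a hypothetical IR pair model.xml / model.bin produced by the Model Optimizer:

```cpp
InferenceEngine::CNNNetReader netReader;
netReader.ReadNetwork("model.xml");   // network topology
netReader.ReadWeights("model.bin");   // binary weights
InferenceEngine::CNNNetwork network = netReader.getNetwork();
```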
2: Prepare the input and output formats. After loading the network, query its inputs and outputs:
InferenceEngine::CNNNetwork::getInputsInfo()
InferenceEngine::CNNNetwork::getOutputsInfo()
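A sketch of this step: iterate over the maps returned above and choose precision and layout (U8/NCHW for the input and FP32 for the output are assumptions for a typical image model, not requirements):

```cpp
InferenceEngine::InputsDataMap inputsInfo = network.getInputsInfo();
for (auto &item : inputsInfo) {
    item.second->setPrecision(InferenceEngine::Precision::U8);
    item.second->setLayout(InferenceEngine::Layout::NCHW);
}

InferenceEngine::OutputsDataMap outputsInfo = network.getOutputsInfo();
for (auto &item : outputsInfo) {
    item.second->setPrecision(InferenceEngine::Precision::FP32);
}
```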
3: Create the Core object. InferenceEngine::Core is the class that dispatches the network to a specific device and provides the central functionality.
Configuration can be set with InferenceEngine::Core::SetConfig
Extensions can be registered with InferenceEngine::Core::AddExtension
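A sketch of this step; the CPU thread count and the extension library path are illustrative assumptions, and both calls are optional:

```cpp
InferenceEngine::Core ie;

// Optional: device-specific configuration, e.g. limit CPU inference threads.
ie.SetConfig({{InferenceEngine::PluginConfigParams::KEY_CPU_THREADS_NUM, "4"}}, "CPU");

// Optional: register a custom-layer extension library (path is a placeholder).
// auto ext = InferenceEngine::make_so_pointer<InferenceEngine::IExtension>("libcpu_extension.so");
// ie.AddExtension(ext, "CPU");
```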
4: Compile and load the network onto the device with InferenceEngine::Core::LoadNetwork()
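A sketch of this step; "CPU" is just one possible device name (others include "GPU", "MYRIAD", "HDDL", "FPGA", "GNA", "HETERO:...", "MULTI:..."):

```cpp
InferenceEngine::ExecutableNetwork execNetwork = ie.LoadNetwork(network, "CPU");
```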
5: Set the input data. Create an InferenceEngine::InferRequest from the ExecutableNetwork and fill its input blobs.
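A sketch of this step; the input name is taken from the inputsInfo map queried in step 2, and the actual image preprocessing is left out:

```cpp
InferenceEngine::InferRequest inferRequest = execNetwork.CreateInferRequest();

// Get the input blob by name and obtain a writable pointer to its data.
InferenceEngine::Blob::Ptr inputBlob = inferRequest.GetBlob(inputsInfo.begin()->first);
auto inputData = inputBlob->buffer().as<uint8_t *>();
// ... copy preprocessed U8 NCHW image data into inputData here ...
```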
6: Execute.
Synchronous execution: InferenceEngine::InferRequest::Infer()
Asynchronous execution: InferenceEngine::InferRequest::StartAsync(); in the asynchronous case, InferenceEngine::InferRequest::Wait() can be used to wait for completion.
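A sketch of both modes (a given request would normally use only one of them):

```cpp
// Synchronous: blocks until inference finishes.
inferRequest.Infer();

// Asynchronous: returns immediately; Wait() blocks until the result is ready.
inferRequest.StartAsync();
inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
```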
7: Get the output results with InferenceEngine::IInferRequest::GetBlob()
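A sketch of this step using the InferRequest wrapper; the output name comes from the outputsInfo map of step 2, and FP32 matches the precision chosen there:

```cpp
InferenceEngine::Blob::Ptr outputBlob = inferRequest.GetBlob(outputsInfo.begin()->first);
const float *scores = outputBlob->buffer().as<float *>();
size_t count = outputBlob->size();
// ... post-process the `count` float values in `scores` (e.g. arg-max for classification) ...
```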
References
https://docs.openvinotoolkit.org/2019_R2/_docs_IE_DG_inference_engine_intro.html