Added DLLEXPORT so that FOnnxTensorInfo can be used from C++ in other modules.
Added conditional compilation to work with the latest API of ONNX Runtime. See Use latest ONNX Runtime for more details.
v1.6 (Jan 27, 2023)
Added support for Unreal Engine 5.1.
Added a call to the OnnxModel destructor when UOnnxModelWrapper is destroyed.
v1.5 (Apr 10, 2022)
Added support for Unreal Engine 5.0 official release.
To bind single-precision float inputs/outputs from Blueprint in UE5, see this page.
In UE5, the OnnxRuntime module has been renamed to the OnnxRuntimeNNEngine module to avoid a conflict with the engine's own module of the same name. Please take care when migrating existing projects.
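For existing projects, migrating usually means updating the module name in the project's *.Build.cs dependency list. A minimal sketch (the project name and exact dependency list are placeholders; only the renamed module name comes from the note above):

```csharp
// MyProject.Build.cs (sketch)
// In UE5, depend on the renamed module instead of "OnnxRuntime":
PublicDependencyModuleNames.AddRange(new string[] { "Core", "OnnxRuntimeNNEngine" });
```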
You need an NVIDIA GPU that supports CUDA, cuDNN, and TensorRT.
You need to install CUDA 11.4.2, cuDNN 8.2.4, and TensorRT 8.2.3.0.
DNN models that contain unsupported operators cannot be loaded when TensorRT is enabled.
See the official document for the list of supported operators.
(NNEngine uses TensorRT 8.2 as its backend on Linux)
Tested environment:
Unreal Engine: 4.26.2, 4.27.2
Vulkan utils: 1.1.70+dfsg1-1ubuntu0.18.04.1
.NET SDK: 6.0.101-1
OS: Ubuntu 18.04.6 Desktop 64bit
CPU: Intel i3-8350K
GPU: NVIDIA GeForce GTX 1080 Ti
Driver: 470.130.01
CUDA: 11.4.2-1
cuDNN: 8.2.4
TensorRT: 8.2.3.0
Added EXPERIMENTAL support for Android as a build target.
Tested environment:
Device: Xiaomi Redmi Note 9S
Android version: 10 QKQ1.191215.002
Note:
You need to convert your model to ORT format.
See the official document for details.
Some DNN models cannot be loaded on Android.
NNEngine uses ONNX Runtime Mobile ver 1.8.1 on Android.
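As a sketch, the ORT-format conversion mentioned above is typically done with the converter bundled in the onnxruntime Python package (the model filename is a placeholder; check the official document for the options matching the ONNX Runtime Mobile version you target):

```shell
# Convert an ONNX model to ORT format for ONNX Runtime Mobile.
# Requires the onnxruntime pip package; model.onnx is a placeholder path.
# The converter writes model.ort next to the input file by default.
python -m onnxruntime.tools.convert_onnx_models_to_ort model.onnx
```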