
# onnxruntime-nim

The onnxruntime C API, wrapped for Nim.

## Install

1. Install onnxruntime

   - Go to the [onnxruntime releases](https://github.com/microsoft/onnxruntime/releases) page.

   - Choose the source code or the prebuilt binaries for your system, e.g. the Linux x86_64 CPU 1.6.0 package.

   - Unzip the archive and copy `./include` and `./lib` to locations your toolchain can find.

     For example, on Linux you may copy `./lib` to `/usr/lib/x86_64-linux-gnu/` and `./include/` to `/usr/include/`. Alternatively, keep `./include` and `./lib` inside your project, as long as your build knows where they are.

2. Install the onnxruntime C API wrapper for Nim

   Either build from source:

   ```
   git clone https://github.com/YesDrX/onnxruntime-nim
   cd onnxruntime-nim
   nimble install
   ```

   or install via nimble:

   ```
   nimble install onnxruntime
   ```

3. Run the sample code

   ```
   cd ./sample
   nim c --run C_Api_Sample.nim
   ```

   Output:

   ```
   Using Onnxruntime C Api : 1.6.0
   WARNING: Since openmp is enabled in this build, this API cannot be used to configure intra op num threads. Please use the openmp environment variables to control the number of threads.
   Using Onnxruntime C API
   Number of inputs = 1
   Input 0 : name= data_0
   Input 0 : type = ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT
   Input 0 : num_dims = 4
   Input 0 : dim 0 = 1
   Input 0 : dim 1 = 3
   Input 0 : dim 2 = 224
   Input 0 : dim 3 = 224
   [Class 0] :  0.000045
   [Class 1] :  0.003846
   [Class 2] :  0.000125
   [Class 3] :  0.001180
   [Class 4] :  0.001317
   Done!
   ```
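If you keep `./include` and `./lib` inside your project rather than installing them system-wide (the option mentioned in step 1), the Nim compiler needs to be told where they are. A minimal sketch of a project-local `nim.cfg`; the relative paths are assumptions about your layout, adjust them to wherever you unpacked the release:

```
# nim.cfg, placed next to your .nim source (paths are hypothetical)
--passC:"-I./include"             # let the C compiler find the onnxruntime headers
--passL:"-L./lib -lonnxruntime"   # link against libonnxruntime from ./lib
```

At run time the dynamic loader must also find the shared library, e.g. via `LD_LIBRARY_PATH=./lib` on Linux, unless the library was copied into a standard system location.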