INFERENCE INTRODUCTION

Welcome to the REBOTNIX World! You can use our VISIONTOOLS to build your own custom AI pipelines, including Tracking, Training and Inference.

We have language bindings in Python!

SETUP

A REBOTNIX license key is required. Without a valid key, results are limited to 10 annotations or 100 frames.

To utilize your NVIDIA GPU for a speedup, please install the CUDA Toolkit as well as cuDNN.
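Before going further, you can verify that the driver actually sees your GPU. The snippet below is just a sanity-check sketch, not part of VISIONTOOLS; it assumes `nvidia-smi` (which ships with the NVIDIA driver) is on your PATH.

```python
import subprocess

# ask the NVIDIA driver to list the installed GPUs;
# this fails if the driver is missing or no GPU is usable
try:
    subprocess.run(["nvidia-smi"], check=True)
    print("NVIDIA GPU detected.")
except (OSError, subprocess.CalledProcessError):
    print("No usable NVIDIA GPU found - GPU-only modules will not run.")
```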

Please use Python version 3.7 and OpenCV version 3.2.
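To catch version mismatches early, you can assert the expected versions at the top of your script. This is a minimal sketch that simply encodes the versions stated above:

```python
import sys
import cv2

# the versions this documentation targets: Python 3.7 and OpenCV 3.2
assert sys.version_info[:2] == (3, 7), "Python 3.7 is required"
assert cv2.__version__.startswith("3.2"), "OpenCV 3.2 is required"
```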

In addition, we strongly recommend installing all necessary Python packages within a virtual environment. To do so, we provide a file named "requirements.txt". An example of how to install the packages can be found on the right side.

```shell
# execute in terminal
virtualenv -p python3.7 rebotnix_venv
source rebotnix_venv/bin/activate
pip install -r requirements.txt
```

INFERENCE

REBOTNIX Detection is a tool to detect objects using a custom-trained Object Detection Model. This module runs only on an NVIDIA GPU.


```python
# import detector
from detector_rebotnix_license import detector_rebotnix
import sys

# set the path to the desired model weights and provide your license key
license_key = ""
weights = "/path/to/rebotnix_training.weights"
detector = detector_rebotnix(weights, license_key)

# input image or video file
file = "/path/to/example.jpg"

# set an output path if you want to store a rendered image or video
output_path = "/path/to/output_dir"

# confidence threshold
conf = 0.75

# start frame number if the input is a video file
start_frame = 1

# stop frame number if the input is a video file
stop_frame = 10

result = detector.detect(file, conf, output_path, start_frame, stop_frame)

# print the returned JSON with all detected objects
sys.stdout.write(str(result))
```

The above command returns JSON structured like this:

```json
{
    "modeldescription": "REBOTNIX CUSTOM OBJECT DETECTION MODEL",
    "modelid": "/path/to/rebotnix_training.weights",
    "width": 2048,
    "height": 1536,
    "file_path": "/path/to/example.jpg",
    "results": [
        {
            "frame": 1,
            "annotations": [
                {
                    "xpos1": 1489,
                    "ypos1": 1023,
                    "xpos2": 1508,
                    "ypos2": 1052,
                    "category": "category_0",
                    "accuracy": 0.9951561093330383
                },
                ...
            ]
        },
        ...
    ]
}
```
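Once you have the result, you can iterate over it directly. The sketch below assumes `detect()` hands back the structure above as plain Python dicts and lists; if your version returns a JSON string instead, it is parsed first with `json.loads`.

```python
import json

# assumption: result is either the parsed structure or a JSON string
data = json.loads(result) if isinstance(result, str) else result

# walk every frame and every annotation in the response
for frame_result in data["results"]:
    for ann in frame_result["annotations"]:
        width = ann["xpos2"] - ann["xpos1"]
        height = ann["ypos2"] - ann["ypos1"]
        print("frame %d: %s (%.2f) box %dx%d px" % (
            frame_result["frame"], ann["category"],
            ann["accuracy"], width, height))
```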