Toolchain Python API

The toolchain docker ships with a Python package named ktc. You can start simply by adding import ktc to your Python script. This Python API is intended to simplify the toolchain workflow. The following sections introduce the API functions and their usage; you can also look them up with Python's built-in help() function.

The general workflow is the same as described in the toolchain manual. A model passes through several stages to produce the final result: ONNX optimization, model analysis, compilation, and inference.

There is also a simple example at /workspace/examples/test_python_api.py in the docker, which may help you understand the Python API usage.

Note that this package is only available inside the docker due to its dependencies.
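Putting the stages together, a minimal end-to-end script inside the docker might look like the sketch below. The file paths, model ID, version, and platform number are placeholder values for illustration only; the exact arguments of each call are documented in the sections that follow.

```python
import onnx
import ktc

# Load and optimize an ONNX model (the path is a placeholder).
model = onnx.load("/workspace/models/example.onnx")
model = ktc.onnx_optimizer.onnx2onnx_flow(model)

# Wrap the model in a ModelConfig. The id, version, and platform
# values here are illustrative only.
km = ktc.ModelConfig(32769, "0001", "720", onnx_model=model)

# Fixed-point analysis produces a bie file; compilation produces a nef file.
# The input_mapping argument is sketched with a placeholder input name/path.
bie_path = km.analysis({"input": ["/workspace/examples/input.jpg"]})
nef_path = ktc.compile([km])
```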

1 ONNX Optimizer and Editor

The ONNX optimizer and editor provide the following API functions.

1.1 Converters

Keras to ONNX

ktc.onnx_optimizer.keras2onnx_flow(keras_model_path, optimize, input_shape)

Convert a Keras model to an ONNX object and return the converted object.

Args:
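A hedged sketch of a conversion call follows; the model path is a placeholder, and the optimize and input_shape values are illustrative guesses rather than documented defaults.

```python
import ktc

# Convert a Keras .h5 model to an in-memory ONNX object.
# The path is a placeholder; optimize/input_shape values are illustrative.
onnx_model = ktc.onnx_optimizer.keras2onnx_flow(
    "/workspace/models/example.h5", optimize=0, input_shape=[1, 224, 224, 3])
```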

Caffe to ONNX

ktc.onnx_optimizer.caffe2onnx_flow(caffe_model_path, caffe_weight_path)

Convert a Caffe model to an ONNX object and return the converted object.

Args:
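For example, a Caffe conversion takes the structure file and the weight file separately. Both paths in this sketch are placeholders.

```python
import ktc

# A Caffe model comes in two files: the prototxt (network structure)
# and the caffemodel (trained weights). Paths are placeholders.
onnx_model = ktc.onnx_optimizer.caffe2onnx_flow(
    "/workspace/models/example.prototxt",
    "/workspace/models/example.caffemodel")
```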

TFLite to ONNX

ktc.onnx_optimizer.tflite2onnx_flow(tflite_path, release_mode, bottom_nodes)

Convert a TFLite model to an ONNX object and return the converted object.

Args:
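A hedged sketch of a TFLite conversion follows. The path and node name are placeholders, and the sketch assumes bottom_nodes names the nodes to keep as graph outputs; the release_mode value is an illustrative guess.

```python
import ktc

# Convert a TFLite model to an in-memory ONNX object.
# Path and node names are placeholders for illustration.
onnx_model = ktc.onnx_optimizer.tflite2onnx_flow(
    "/workspace/models/example.tflite",
    release_mode=True,
    bottom_nodes=["output_node"])
```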

1.2 Optimizers

ONNX version update

ktc.onnx_optimizer.onnx1_4to1_6(model)

Return the updated onnx model. Update the model ir_version from 4 to 6 and the opset from 9 to 11.

Args:

PyTorch-exported onnx optimization

ktc.onnx_optimizer.torch_exported_onnx_flow(m, disable_fuse_bn=False)

Return the optimized model. Optimize a PyTorch-exported onnx model. Note that onnx2onnx_flow is still needed after running this optimization.

Args:

General onnx optimization

ktc.onnx_optimizer.onnx2onnx_flow(m, disable_fuse_bn=False, bn_on_skip=False, bn_before_add=False, bgr=False, norm=False, rgba2yynn=False, eliminate_tail=False)

Return the optimized model. Optimize the onnx model.

Args:
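The two optimizer passes are typically chained for a PyTorch-exported model, as in the sketch below. The file paths are placeholders.

```python
import onnx
import ktc

# Paths are placeholders. A PyTorch-exported model first goes through
# the torch-specific pass, then the general onnx2onnx_flow pass.
m = onnx.load("/workspace/models/exported_from_torch.onnx")
m = ktc.onnx_optimizer.torch_exported_onnx_flow(m)
m = ktc.onnx_optimizer.onnx2onnx_flow(m)
onnx.save(m, "/workspace/models/optimized.onnx")
```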

1.3 Editors

Delete specific nodes

ktc.onnx_optimizer.delete_nodes(model, node_names)

Return the result onnx model. Delete nodes with the given names.

Args:

Delete specific inputs

ktc.onnx_optimizer.delete_inputs(model, value_names)

Return the result onnx model. Delete the specified inputs.

Args:

Delete specific outputs

ktc.onnx_optimizer.delete_outputs(model, value_names)

Return the result onnx model. Delete the specified outputs.

Args:

Cut the graph from the given nodes

ktc.onnx_optimizer.cut_graph_from_nodes(model, node_names)

Return the result onnx model. Cut the graph from the given nodes. The difference between this function and delete_nodes is that this function also deletes all the nodes following the specified nodes.

Args:

Cut the graph from the given operator types

ktc.onnx_optimizer.remove_nodes_with_types(model, type_names)

Return the result onnx model. Cut the graph from the nodes with specific operation types. Similar behaviour to cut_graph_from_nodes.

Args:

Change input/output shapes

ktc.onnx_optimizer.change_input_output_shapes(model, input_shape_mapping=None, output_shape_mapping=None)

Return the result onnx model. Change input shapes and output shapes.

Args:

Add do-nothing Conv nodes after specific values

ktc.onnx_optimizer.add_conv_after(model, value_names)

Return the result onnx model. Add a do-nothing Conv node after the specific value.

Args:

Add do-nothing BN nodes after specific values

ktc.onnx_optimizer.add_bn_after(model, value_names)

Return the result onnx model. Add a do-nothing BN node after the specific value.

Args:

Rename an output

ktc.onnx_optimizer.rename_output(model, old_name, new_name)

Return the result onnx model. Rename the specified output.

Args:
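The editor functions can be chained on one model object. In the hedged sketch below, every file path, node name, and value name is a placeholder invented for illustration.

```python
import onnx
import ktc

m = onnx.load("/workspace/models/example.onnx")

# Remove a post-processing node and everything after it.
m = ktc.onnx_optimizer.cut_graph_from_nodes(m, ["postprocess_reshape"])

# Give the remaining output a stable, descriptive name.
m = ktc.onnx_optimizer.rename_output(m, "old_output_name", "output")

onnx.save(m, "/workspace/models/edited.onnx")
```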

2 Toolchain Utilities

This section mainly covers the API of the analyzer, the compiler, and the IP evaluator.

2.1 Model Config

To start using the toolchain utilities, one must first initialize a ModelConfig object.

class ktc.ModelConfig(id, version, platform, onnx_model=None, onnx_path=None, bie_path=None)

Create a Kneron model config object. One of these three parameters is required: onnx_model, onnx_path, bie_path.

Args:

2.2 Model Analysis

analysis is a public function of the ModelConfig class.

classmethod analysis(input_mapping, output_bie=None, threads=4, quantize_mode="default")

Perform fixed-point analysis on the model. This step is required before compiling if the object was initialized with an ONNX model. The path to the resulting bie file will be returned.

Args:
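Assuming km is a ModelConfig created from an ONNX model, an analysis call might look like the sketch below. The input name and image path are placeholders, and the sketch assumes input_mapping maps each model input name to its quantization input data.

```python
# `km` is assumed to be a ktc.ModelConfig created from an ONNX model.
bie_path = km.analysis(
    {"input": ["/workspace/examples/img_0.jpg"]},  # placeholder name/path
    threads=4,
    quantize_mode="default")
print(bie_path)  # path to the generated bie file
```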

2.3 Model Evaluation

evaluate is a public function of the ModelConfig class.

classmethod evaluate()

Return the evaluation result as a str. The IP evaluator estimates the model's running performance. It can run with either an onnx or a bie file; in other words, one can run it without first running analysis(...).

2.4 Compiler

The compile functions serve the same purpose as the batch compiler in the toolchain manual. The nef file path is returned by both functions.

ktc.compile(model_list, output_dir=None, dedicated_output_buffer=True, weight_compress=False)

Compile the models and generate the nef file. The nef path will be returned.

Args:

ktc.encrypt_compile(model_list, output_dir=None, dedicated_output_buffer=True, mode=None, key="", key_file="", encryption_efuse_key="", weight_compress=False)

Compile the models and generate an encrypted nef file. The nef path will be returned.

Args:
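Assuming km1 and km2 are ModelConfig objects that have already passed analysis, compiling them into a single nef might look like the sketch below. The mode and key values in the encrypted variant are illustrative guesses, not documented examples.

```python
import ktc

# km1 and km2 are assumed to be analysed ktc.ModelConfig objects.
nef_path = ktc.compile([km1, km2])

# Encrypted variant of the same step; mode and key are placeholders.
nef_path = ktc.encrypt_compile([km1, km2], mode=1, key="0x12345678")
```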

3 Inferencer

The inferencer can be called through ktc.kneron_inference(...). The usage is the same as in the E2E simulator. Please check its documentation for details.

Also, for those wondering what the radix should be, we provide a function that derives the radix from the input files.

Get radix

ktc.get_radix(inputs)

Get the radix value from the given inputs.

Args:
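For intuition, the sketch below reimplements a plausible radix computation, assuming the common signed fixed-point convention in which the radix is the number of fractional bits. This is an illustrative guess at the logic, not ktc's actual implementation, and the function name estimate_radix is invented.

```python
import math

def estimate_radix(values, bit_width=8):
    # Illustrative guess: in a signed `bit_width`-bit fixed-point format,
    # (bit_width - 1) - ceil(log2(max_abs)) fractional bits keep the
    # largest magnitude representable.
    max_abs = max(abs(v) for v in values)
    if max_abs == 0:
        return bit_width - 1  # all bits can be fractional
    return (bit_width - 1) - math.ceil(math.log2(max_abs))

print(estimate_radix([0.5, -0.25, 0.75]))  # values in [-1, 1) keep 7 fractional bits
```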

Raises: * ValueError: raise if the input values are out of range