from_onnx()

import set_env
from tvm.relay.frontend.onnx import from_onnx
from_onnx?
Signature:
from_onnx(
    model,
    shape=None,
    dtype='float32',
    opset=None,
    freeze_params=True,
    convert_config=None,
    export_node_renamed_model_path=None,
)
Docstring:
Convert an ONNX model into an equivalent Relay Function.

ONNX graphs are represented as Python Protobuf objects.
The companion parameters will be handled automatically.
However, the input names from the onnx graph are vague, mixing inputs and
network weights/biases such as "1", "2"...
For convenience, we rename the `real` input names to "input_0",
"input_1"... and rename the parameters to "param_0", "param_1"...

By default, ONNX defines models in terms of dynamic shapes. The ONNX importer
retains that dynamism upon import, and the compiler attempts to convert the
model into static shapes at compile time. If this fails, there may still
be dynamic operations in the model. Not all TVM kernels currently support
dynamic shapes; please file an issue on discuss.tvm.apache.org
if you hit an error with dynamic kernels.

Parameters
----------
model : protobuf object
    ONNX ModelProto after ONNX v1.1.0

shape : dict of str to tuple, optional
    The input shape to the graph

dtype : str or dict of str to str
    The input types to the graph

opset : int, optional
    Override to autodetected opset.
    This can be helpful for some testing.

freeze_params: bool
    If this parameter is True, the importer will take any provided
    onnx input values (weights, shapes, etc.) and embed them into the relay model
    as Constants instead of variables. This allows more aggressive optimizations
    at compile time and helps in making models static if certain inputs represent
    attributes relay would traditionally consider compile-time constants.

convert_config : Optional[Dict[str, Any]]
    Default config:
        use_nt_batch_matmul : bool = True
            True to convert qualified onnx `matmul` to `nn.batch_matmul` in strict NT format
            (transpose_a=False, transpose_b=True).

export_node_renamed_model_path : str, optional
    Export the node renamed onnx model to the path.
    Some models do not contain names in their nodes. During the conversion, if names of nodes
    are empty, new names will be assigned based on their op types. The exported model can be the
    reference to spans.

Returns
-------
mod : tvm.IRModule
    The relay module for compilation

params : dict of str to tvm.nd.NDArray
    The parameter dict to be used by relay
File:      /media/pc/data/lxw/ai/tvm/python/tvm/relay/frontend/onnx.py
Type:      function
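
A minimal usage sketch, assuming a model file model.onnx exists and that the graph's (renamed) input is "input_0" with shape (1, 3, 224, 224); inspect onnx_model.graph.input for the actual names:

import onnx

# Hypothetical model file; any ONNX ModelProto after v1.1.0 works.
onnx_model = onnx.load("model.onnx")

# Pin the input shape so the importer can produce a static-shape module.
# The dict key must match the graph's input name.
shape_dict = {"input_0": (1, 3, 224, 224)}

mod, params = from_onnx(onnx_model, shape=shape_dict, dtype="float32")
print(mod)  # inspect the imported Relay module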

The from_onnx function converts an ONNX model into an equivalent Relay function. ONNX graphs are represented as Python Protobuf objects. The function accepts the following parameters:

  • model: an ONNX ModelProto object, as defined after ONNX v1.1.0.

  • shape: (optional) the input shapes of the graph, a dict mapping input names (strings) to shape tuples.

  • dtype: the input types of the graph, either a single string or a dict mapping input names to type strings.

  • opset: (optional) an integer overriding the autodetected opset; this can be helpful for some testing.

  • freeze_params: a bool; if True, the importer embeds any provided ONNX input values (weights, shapes, etc.) into the Relay model as constants instead of variables. This allows more aggressive optimizations at compile time and helps make the model static when certain inputs represent attributes that Relay would traditionally treat as compile-time constants.

  • convert_config: an optional dict. The default config contains the bool use_nt_batch_matmul; if True, qualifying ONNX matmul ops are converted to nn.batch_matmul in strict NT format (transpose_a=False, transpose_b=True).

  • export_node_renamed_model_path: (optional) a string path to which the node-renamed ONNX model is exported. Some models do not contain names in their nodes; during conversion, if a node's name is empty, a new name is assigned based on its op type. The exported model can serve as the reference for spans. A sketch exercising these optional parameters follows this list.
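
A sketch exercising the optional parameters documented above; the opset value, the onnx_model object, and the output path renamed.onnx are illustrative assumptions:

mod, params = from_onnx(
    onnx_model,                    # ModelProto loaded with onnx.load
    shape={"input_0": (1, 3, 224, 224)},
    dtype="float32",
    opset=13,                      # override the autodetected opset (assumed value)
    freeze_params=True,            # embed weights as Relay constants
    convert_config={"use_nt_batch_matmul": True},
    export_node_renamed_model_path="renamed.onnx",  # hypothetical output path
)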

The function returns two values (a compile-and-run sketch follows the list):

  • mod: a tvm.IRModule, the Relay module for compilation.

  • params: a dict mapping strings to tvm.nd.NDArray objects, the parameter dict to be used by Relay.
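
The returned pair feeds directly into relay.build. A sketch, assuming an LLVM CPU target:

import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Compile the imported module together with its parameter dict.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Wrap the compiled library in a graph executor for inference.
dev = tvm.device(target, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))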

Because the input names in an ONNX graph are vague, mixing inputs with network weights/biases such as "1", "2", and so on, the real input names are renamed to "input_0", "input_1", ... for convenience, and the parameters are renamed to "param_0", "param_1", ...

By default, ONNX defines models in terms of dynamic shapes. The ONNX importer retains that dynamism on import, and the compiler attempts to convert the model to static shapes at compile time. If this fails, dynamic operations may remain in the model; not all TVM kernels currently support dynamic shapes. One way to try to remove the remaining dynamism is sketched below.
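
If dynamic operations remain after import, one option (a sketch, not the only approach) is the relay.transform.DynamicToStatic pass, which rewrites dynamic ops whose shapes can be inferred into static equivalents:

from tvm import relay

# Attempt to replace dynamic ops with static equivalents; ops whose
# shapes cannot be inferred are left unchanged.
mod = relay.transform.DynamicToStatic()(mod)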