diff --git a/docs/lite/docs/source_en/use/nnie.md b/docs/lite/docs/source_en/use/nnie.md
index 3cfc49e36b4631e839aa770ce0392ea575fffa30..563af48b752e21c538ad2560f8aa80ab35e0d5e1 100644
--- a/docs/lite/docs/source_en/use/nnie.md
+++ b/docs/lite/docs/source_en/use/nnie.md
@@ -339,7 +339,7 @@ During model conversion, the `nnie.cfg` file declared by the NNIE_CONFIG_PATH en
 
 You only need to provide image_list whose quantity is the same as that of model inputs. If the model contains the ROI pooling or PSROI pooling layer, you need to provide roi_coordinate_file, the quantity and sequence correspond to the number and sequence of the ROI pooling or PSROI pooling layer in the .prototxt file.
 
-### Suffix \_cpu of the Node Name in the prototxt File
+### Suffix cpu of the Node Name in the prototxt File
 
 In the .prototxt file, you can add _cpu to the end of the node name to declare CPU custom operator. The_cpu suffix is ignored in MindSpore Lite and is not supported. If you want to redefine the implementation of an existing operator or add an operator, you can register the operator in custom operator mode.
 
@@ -369,7 +369,7 @@ During model conversion, the `nnie.cfg` file declared by the NNIE_CONFIG_PATH en
 
 In this example, a custom operator of the MY_CUSTOM type is defined. During inference, you need to register a custom operator of the MY_CUSTOM type.
 
-### Suffix \_report of the Top Domain in the prototxt File
+### Suffix report of the Top Domain in the prototxt File
 
 When converting the NNIE model, MindSpore Lite fuses most operators into the binary file for NNIE running. Users cannot view the output of the intermediate operators. In this case, you can add the _report suffix to the top domain, during image composition conversion, the output of the intermediate operator is added to the output of the fused layer. If the operator has output (not fused), the output remains unchanged.
 
@@ -410,4 +410,4 @@ During model conversion, the `nnie.cfg` file declared by the NNIE_CONFIG_PATH en
 ### Segmentation Mechanism and Restrictions
 
 Due to the restrictions on the operators supported by the NNIE chip, if there are operators that are not supported by the NNIE chip, the model needs to be divided into supported layers and unsupported layers.
- The chip on the board supports a maximum of eight supported layers. If the number of supported layers after segmentation is greater than 8, the model cannot run. You can observe the custom operator (whose attribute contains type:NNIE) by using Netron to obtain the number of supported layers after conversion.
\ No newline at end of file
+ The chip on the board supports a maximum of eight supported layers. If the number of supported layers after segmentation is greater than 8, the model cannot run. You can observe the custom operator (whose attribute contains type:NNIE) by using Netron to obtain the number of supported layers after conversion.