diff --git a/docs/lite/docs/source_en/use/runtime_train_cpp.md b/docs/lite/docs/source_en/use/runtime_train_cpp.md
index 1468a1113dee55f9462b5dea8675e19d04136b51..8a2f2ecc2ec3abbe8ce69791e398d461a13f57b4 100644
--- a/docs/lite/docs/source_en/use/runtime_train_cpp.md
+++ b/docs/lite/docs/source_en/use/runtime_train_cpp.md
@@ -454,7 +454,7 @@ MindSpore Lite framework allows the user to set two callback functions that will
 - Name and type of the running node
 While the node name and type will be the same before and after running the node, the output tensors will differ between the two callbacks invocations.
-For some operators, also the input tesnors will vary.
+For some operators, the input tensors will also vary.
 ```cpp
 /// \brief CallBackParam defines input arguments for callback function.
diff --git a/docs/mindspore/source_en/faq/data_processing.md b/docs/mindspore/source_en/faq/data_processing.md
index ba93d4b5efee8d0c6759b817e42f44299abd23be..50c3bec17ff5a15dcc05eb45106ef387f5de0214 100644
--- a/docs/mindspore/source_en/faq/data_processing.md
+++ b/docs/mindspore/source_en/faq/data_processing.md
@@ -405,7 +405,7 @@ A: The main reason is that the parameter `num_parallel_workers` is configured to
 A: When the `GeneratorDataset` is used to load Numpy array returned by Pyfunc, MindSpore performs conversion from the Numpy array to the MindSpore Tensor. If the memory pointed to by the Numpy array has been freed, a memory copy error may occur. An example is as shown below:
-- Perform an in place conversion among Numpy array, MindSpore Tensor and Numpy array in `__getitem__` function. Tensor `tensor` and Numpy array `ndarray_1` share the same memory and Tensor `tesnor` will go out of scope when the function exits, and the memory which is pointed to by Numpy array will be freed.
+- Perform an in-place conversion among Numpy array, MindSpore Tensor and Numpy array in the `__getitem__` function. Tensor `tensor` and Numpy array `ndarray_1` share the same memory; Tensor `tensor` goes out of scope when the function exits, and the memory pointed to by the Numpy array is freed.
 ```python
diff --git a/docs/mindspore/source_en/migration_guide/model_development/model_development.md b/docs/mindspore/source_en/migration_guide/model_development/model_development.md
index 615fe3e0ac0093d24aa4564fb43d2450a026dff7..c75e369977997373955463d4579e494e6f4c2d1c 100644
--- a/docs/mindspore/source_en/migration_guide/model_development/model_development.md
+++ b/docs/mindspore/source_en/migration_guide/model_development/model_development.md
@@ -54,7 +54,7 @@ During MindSpore network implementation, there are some problem-prone areas. Whe
 1. The MindSpore operator is used in data processing. Multi-threaded/multi-process is usually in the data processing process, so there is a limitation of using MindSpore operators in this scenario. It is recommended to use a three-party implementation instead of the operator use in the data processing process, such as numpy, opencv, pandas, PIL.
 2. Control flow. For details, refer to [Flow Control Statements](https://www.mindspore.cn/tutorials/experts/en/r1.9/network/control_flow.html). Compilation in graph mode can be slow when multiple layers of conditional control statements are called.
-3. Slicing operation. When it comes to slicing a Tesnor, note that whether subscript of the slice is a variable. When it is a variable, there will be restrictions. Please refer to network body and loss building for dynamic shape mitigation.
+3. Slicing operation. When slicing a Tensor, note whether the subscript of the slice is a variable. When it is a variable, there are restrictions. Please refer to network body and loss building for dynamic shape mitigation.
 4. Customized mixed precision conflicts with `amp_level` in Model, so don't set `amp_level` in Model if you use customized mixed precision.
 5. In Ascend environment, Conv, Sort and TopK can only be float16, and add [loss scale](https://mindspore.cn/tutorials/experts/en/r1.9/others/mixed_precision.html) to avoid overflow.
 6. In the Ascend environment, operators with the stride property such as Conv and Pooling have rules about the length of the stride, which needs to be mitigated.
diff --git a/docs/mindspore/source_zh_cn/migration_guide/analysis_and_preparation.md b/docs/mindspore/source_zh_cn/migration_guide/analysis_and_preparation.md
index 6a55c58c5527537e951978314a46a20e9d813190..3419213ec605b4d9f9acfde353576c5704c7a551 100644
--- a/docs/mindspore/source_zh_cn/migration_guide/analysis_and_preparation.md
+++ b/docs/mindspore/source_zh_cn/migration_guide/analysis_and_preparation.md
@@ -331,7 +331,7 @@ MindSpore在持续交付中,部分功能存在限制,在网络迁移过程
 想要了解动态shape,需要先了解什么是静态shape。
 静态shape指在网路执行阶段Tensor的shape没有发生变化。
-比如resnet50网络如果保证图片的输入shape一直是`224*224`的,那么在网络训练阶段,四个残差模块的输出Tesnor的shape分别是`B*64*56*56`,`B*128*28*28`,`B*256*14*14`,`B*512*7*7`,`B`指`BatchSize`,在训练过程中也是固定的,此时网络中全部是静态的shape,没有动态shape。
+比如resnet50网络如果保证图片的输入shape一直是`224*224`的,那么在网络训练阶段,四个残差模块的输出Tensor的shape分别是`B*64*56*56`,`B*128*28*28`,`B*256*14*14`,`B*512*7*7`,`B`指`BatchSize`,在训练过程中也是固定的,此时网络中全部是静态的shape,没有动态shape。
 如果输入的shape不一定是`224*224`的,那么四个残差模块输出Tensor的shape将会随输入shape变化,此时就不是静态shape,而是动态shape了。一般动态shape引入的原因有:
 #### 输入shape不固定
diff --git a/docs/mindspore/source_zh_cn/migration_guide/model_development/model_development.md b/docs/mindspore/source_zh_cn/migration_guide/model_development/model_development.md
index b8d24b4736c432f324ac9fb77d03e824345142dd..53af31d61a18d07199cf2596e0647105bfb72fa7 100644
--- a/docs/mindspore/source_zh_cn/migration_guide/model_development/model_development.md
+++ b/docs/mindspore/source_zh_cn/migration_guide/model_development/model_development.md
@@ -58,7 +58,7 @@
 1. 数据处理中使用MindSpore的算子。数据处理过程一般会有多线程/多进程,此场景下数据处理使用MindSpore的算子存在限制,数据处理过程中使用的算子建议使用三方的实现代替,如numpy,opencv,pandas,PIL等。
 2. 控制流。详情请参考[流程控制语句](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.9/network/control_flow.html)。当多层调用条件控制语句时在图模式下编译会很慢。
-3. 切片操作,当遇到对一个Tesnor进行切片时需要注意,切片的下标是否是变量,当是变量时会有限制,请参考[网络主体和loss搭建](https://www.mindspore.cn/docs/zh-CN/r1.9/migration_guide/model_development/model_and_loss.html)对动态shape规避。
+3. 切片操作,当遇到对一个Tensor进行切片时需要注意,切片的下标是否是变量,当是变量时会有限制,请参考[网络主体和loss搭建](https://www.mindspore.cn/docs/zh-CN/r1.9/migration_guide/model_development/model_and_loss.html)对动态shape规避。
 4. 自定义混合精度和Model里的`amp_level`冲突,使用自定义的混合精度就不要设置Model里的`amp_level`。
 5. 在Ascend环境下Conv,Sort,TopK只能是float16的,注意加[loss scale](https://mindspore.cn/tutorials/experts/zh-CN/r1.9/others/mixed_precision.html)避免溢出。
 6. 在Ascend环境下Conv,Pooling等带有stride属性的算子对stride的长度有规定,需要规避。
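For context on the `GeneratorDataset` fix in the data_processing.md hunk, the pattern the FAQ entry describes can be sketched as follows. This is a minimal, hypothetical illustration, not text from the docs: the class name `MyDataset` and the column name `data` are placeholders. The point is that the array returned by `Tensor.asnumpy()` shares memory with a local Tensor, so the sketch returns a copy instead.

```python
import numpy as np
import mindspore as ms
import mindspore.dataset as ds


class MyDataset:
    """Pyfunc-style source whose __getitem__ round-trips a Numpy array through a MindSpore Tensor."""

    def __init__(self):
        self.data = [np.ones((2, 2), dtype=np.float32) * i for i in range(4)]

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        tensor = ms.Tensor(self.data[index])
        # Risky pattern from the FAQ entry: `tensor.asnumpy()` shares memory with the
        # local Tensor `tensor`, which goes out of scope when this function returns,
        # so the returned ndarray may point to freed memory:
        #   ndarray_1 = tensor.asnumpy()
        #   return ndarray_1
        # Safer: hand back an array that owns its own memory.
        return np.array(tensor.asnumpy())


dataset = ds.GeneratorDataset(MyDataset(), column_names=["data"], shuffle=False)
for item in dataset.create_tuple_iterator(output_numpy=True):
    print(item[0].shape)
```

Returning a copy such as `np.array(tensor.asnumpy())` (or `ndarray_1.copy()`) is one way to avoid handing the pipeline memory owned by a temporary Tensor.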
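The slicing caveat addressed in both model_development.md hunks can be illustrated in the same way. The sketch below is hypothetical (the `SliceNet` cell is not from the docs) and only executes the constant-subscript case; the comment marks where a data-dependent subscript would introduce dynamic shape and hit the graph-mode restrictions the migration guide warns about.

```python
import numpy as np
import mindspore as ms
from mindspore import context, nn

# Graph mode is where the slicing restriction matters.
context.set_context(mode=context.GRAPH_MODE)


class SliceNet(nn.Cell):
    """Slices its input with constant subscripts only."""

    def construct(self, x):
        # Constant subscripts: the output shape is known at compile time (static shape).
        head = x[:2]
        # A variable subscript, e.g. x[:n] where n is a Tensor or depends on the input
        # data, would make the output shape data-dependent (dynamic shape) and is
        # subject to the graph-mode restrictions mentioned above.
        return head


net = SliceNet()
out = net(ms.Tensor(np.arange(12, dtype=np.float32).reshape(4, 3)))
print(out.shape)  # (2, 3)
```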