From 011e49dd0ae403cb116a2f9fd27abd80533b98a7 Mon Sep 17 00:00:00 2001
From: huanxiaoling <3174348550@qq.com>
Date: Tue, 18 Oct 2022 13:38:36 +0800
Subject: [PATCH] update en files
---
.../image_classfication_dataset_process.md | 452 ++++++++++
docs/federated/docs/source_en/index.rst | 1 +
.../image_classfication_dataset_process.md | 12 +-
...lassification_application_in_cross_silo.md | 4 +-
docs/mindspore/source_en/index.rst | 3 +
.../analysis_and_preparation.md | 475 ++++++++++
.../migration_guide/debug_and_tune.md | 296 ++++++
.../images/evaluation_procession.png | Bin 0 -> 23286 bytes
.../images/parameter_freeze.png | Bin 0 -> 34703 bytes
.../images/train_procession.png | Bin 0 -> 39117 bytes
.../model_development/model_and_loss.md | 702 +++++++++++++++
.../model_development/model_development.md | 8 +-
.../training_and_evaluation_procession.md | 2 +-
.../source_en/migration_guide/overview.md | 10 +-
.../source_en/migration_guide/sample_code.md | 840 ++++++++++++++++++
.../typical_api_comparision.md | 396 +++++++++
.../migration_guide/use_third_party_op.md | 2 +-
.../migration_guide/enveriment_preparation.md | 2 +-
18 files changed, 3187 insertions(+), 18 deletions(-)
create mode 100644 docs/federated/docs/source_en/image_classfication_dataset_process.md
create mode 100644 docs/mindspore/source_en/migration_guide/analysis_and_preparation.md
create mode 100644 docs/mindspore/source_en/migration_guide/debug_and_tune.md
create mode 100644 docs/mindspore/source_en/migration_guide/model_development/images/evaluation_procession.png
create mode 100644 docs/mindspore/source_en/migration_guide/model_development/images/parameter_freeze.png
create mode 100644 docs/mindspore/source_en/migration_guide/model_development/images/train_procession.png
create mode 100644 docs/mindspore/source_en/migration_guide/model_development/model_and_loss.md
create mode 100644 docs/mindspore/source_en/migration_guide/sample_code.md
create mode 100644 docs/mindspore/source_en/migration_guide/typical_api_comparision.md
diff --git a/docs/federated/docs/source_en/image_classfication_dataset_process.md b/docs/federated/docs/source_en/image_classfication_dataset_process.md
new file mode 100644
index 0000000000..07cca7214f
--- /dev/null
+++ b/docs/federated/docs/source_en/image_classfication_dataset_process.md
@@ -0,0 +1,452 @@
+# Federated Learning Image Classification Dataset Process
+
+
+
+This tutorial uses the federated learning dataset `FEMNIST` from the `leaf` dataset, which contains 62 categories of handwritten digits and letters (digits 0 to 9, 26 lowercase letters, and 26 uppercase letters) with an image size of `28 x 28` pixels. The dataset contains the handwritten digits and letters of 3500 users (up to 3500 clients can be simulated to participate in federated learning). The total number of samples is 805,263, each user holds 226.83 samples on average, and the variance of the sample count across users is 88.94.
+
+## Device-Cloud Federated Learning Image Classification Dataset Process
+
+Refer to the [leaf dataset instructions](https://github.com/TalwalkarLab/leaf) to download the dataset.
+
+1. Install the following environment requirements before downloading the dataset.
+
+ ```sh
+ numpy==1.16.4
+ scipy # conda install scipy
+ tensorflow==1.13.1 # pip install tensorflow
+ Pillow # pip install Pillow
+ matplotlib # pip install matplotlib
+ jupyter # conda install jupyter notebook==5.7.8 tornado==4.5.3
+ pandas # pip install pandas
+ ```
+
+2. Use git to download the official dataset generation script.
+
+ ```sh
+ git clone https://github.com/TalwalkarLab/leaf.git
+ ```
+
+ After downloading the project, the directory structure is as follows:
+
+ ```sh
+ leaf/data/femnist
+    ├── data # Stores the dataset generated by the command
+    ├── preprocess # Stores the code related to data pre-processing
+    ├── preprocess.sh # Shell script for generating the femnist dataset
+    └── README.md # Official dataset download guide
+ ```
+
+3. Taking the `femnist` dataset as an example, run the following command to go to the dataset directory.
+
+ ```sh
+ cd leaf/data/femnist
+ ```
+
+4. Run the command `./preprocess.sh -s niid --sf 1.0 -k 0 -t sample` to generate a dataset containing 3500 users, where each user's data is split into a training set and a test set at a ratio of 9:1, as shown below.
+
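+    The command is run from the `leaf/data/femnist` directory entered in the previous step:
+
+    ```sh
+    ./preprocess.sh -s niid --sf 1.0 -k 0 -t sample
+    ```
+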
+ The meaning of the parameters in the command can be found in the `leaf/data/femnist/README.md` file.
+
+ The directory structure after running is as follows:
+
+ ```text
+ leaf/data/femnist/35_client_sf1_data/
+    ├── all_data # All data mixed together without separating training and test sets, containing a total of 35 json files, each of which contains the data of 100 users
+    ├── test # Test sets (each user's data is split into training and test sets at a ratio of 9:1), containing a total of 35 json files, each of which contains the test data of 100 users
+    ├── train # Training sets (each user's data is split into training and test sets at a ratio of 9:1), containing a total of 35 json files, each of which contains the training data of 100 users
+    └── ... # Other files are not needed and are not described here
+ ```
+
+ Each json file contains the following three parts:
+
+ - `users`: User list.
+    - `num_samples`: The list of sample counts for each user.
+    - `user_data`: A dictionary object with user names as keys and their respective data as values. For each user, the data is represented as a list of images, with each image represented as a list of 784 integers (obtained by flattening the `28 x 28` image array).
+
+ Before rerunning `preprocess.sh`, make sure to delete the `rem_user_data`, `sampled_data`, `test` and `train` subfolders from the data directory.
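+
+    For example, assuming the generated folders are under `leaf/data/femnist/data`, they can be removed as follows:
+
+    ```sh
+    cd leaf/data/femnist/data && rm -rf rem_user_data sampled_data test train
+    ```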
+
+5. Divide the 35 json files into 3500 json files (each json file represents a user).
+
+ The code is as follows:
+
+ ```python
+ import os
+ import json
+
+ def mkdir(path):
+ if not os.path.exists(path):
+ os.mkdir(path)
+
+ def partition_json(root_path, new_root_path):
+ """
+        partition 35 json files into 3500 json files
+
+ Each raw .json file is an object with 3 keys:
+ 1. 'users', a list of users
+ 2. 'num_samples', a list of the number of samples for each user
+ 3. 'user_data', an object with user names as keys and their respective data as values; for each user, data is represented as a list of images, with each image represented as a size-784 integer list (flattened from 28 by 28)
+
+ Each new .json file is an object with 3 keys:
+        1. 'user_name', the name of the user
+        2. 'num_samples', the number of samples for the user
+        3. 'user_data', a dict object with 'x' as the key for the user's data and 'y' as the key for the corresponding labels
+
+ Args:
+ root_path (str): raw root path of 35 json files
+ new_root_path (str): new root path of 3500 json files
+ """
+ paths = os.listdir(root_path)
+ count = 0
+ file_num = 0
+ for i in paths:
+ file_num += 1
+ file_path = os.path.join(root_path, i)
+ print('======== process ' + str(file_num) + ' file: ' + str(file_path) + '======================')
+ with open(file_path, 'r') as load_f:
+ load_dict = json.load(load_f)
+ users = load_dict['users']
+ num_users = len(users)
+ num_samples = load_dict['num_samples']
+ for j in range(num_users):
+ count += 1
+ print('---processing user: ' + str(count) + '---')
+ cur_out = {'user_name': None, 'num_samples': None, 'user_data': {}}
+ cur_user_id = users[j]
+ cur_data_num = num_samples[j]
+ cur_user_path = os.path.join(new_root_path, cur_user_id + '.json')
+ cur_out['user_name'] = cur_user_id
+ cur_out['num_samples'] = cur_data_num
+ cur_out['user_data'].update(load_dict['user_data'][cur_user_id])
+ with open(cur_user_path, 'w') as f:
+ json.dump(cur_out, f)
+ f = os.listdir(new_root_path)
+ print(len(f), ' users have been processed!')
+ # partition train json files
+ partition_json("leaf/data/femnist/35_client_sf1_data/train", "leaf/data/femnist/3500_client_json/train")
+ # partition test json files
+ partition_json("leaf/data/femnist/35_client_sf1_data/test", "leaf/data/femnist/3500_client_json/test")
+ ```
+
+    where `root_path` is `leaf/data/femnist/35_client_sf1_data/{train,test}`, and `new_root_path` is a user-defined path for storing the generated 3500 user json files. The training and test folders need to be processed separately.
+
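+    Note that `partition_json` does not create `new_root_path`; a minimal sketch for creating the output directories beforehand (paths follow the example above, using the `mkdir` helper defined in the script) is:
+
+    ```python
+    # Create the output directories before calling partition_json
+    mkdir("leaf/data/femnist/3500_client_json/")
+    mkdir("leaf/data/femnist/3500_client_json/train")
+    mkdir("leaf/data/femnist/3500_client_json/test")
+    ```
+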
+ Each of the 3500 newly generated user json files contains the following three parts:
+
+ - `user_name`: User name.
+    - `num_samples`: The number of samples of the user.
+ - `user_data`: A dictionary object with 'x' as key and user data as value; with 'y' as key and the label corresponding to the user data as value.
+
+    If the script runs successfully, the following output is printed:
+
+ ```sh
+ ======== process 1 file: /leaf/data/femnist/35_client_sf1_data/train/all_data_16_niid_0_keep_0_train_9.json======================
+ ---processing user: 1---
+ ---processing user: 2---
+ ---processing user: 3---
+ ......
+ ```
+
+6. Convert a json file to an image file.
+
+ Refer to the following code:
+
+ ```python
+ import os
+ import json
+ import numpy as np
+ from PIL import Image
+
+ name_list = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
+ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U',
+ 'V', 'W', 'X', 'Y', 'Z',
+ 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u',
+ 'v', 'w', 'x', 'y', 'z'
+ ]
+
+ def mkdir(path):
+ if not os.path.exists(path):
+ os.mkdir(path)
+
+ def json_2_numpy(img_size, file_path):
+ """
+ read json file to numpy
+ Args:
+ img_size (list): contain three elements: the height, width, channel of image
+ file_path (str): root path of 3500 json files
+ return:
+ image_numpy (numpy)
+ label_numpy (numpy)
+ """
+ # open json file
+ with open(file_path, 'r') as load_f_train:
+ load_dict = json.load(load_f_train)
+ num_samples = load_dict['num_samples']
+ x = load_dict['user_data']['x']
+ y = load_dict['user_data']['y']
+ size = (num_samples, img_size[0], img_size[1], img_size[2])
+ image_numpy = np.array(x, dtype=np.float32).reshape(size) # mindspore doesn't support float64 and int64
+ label_numpy = np.array(y, dtype=np.int32)
+ return image_numpy, label_numpy
+
+ def json_2_img(json_path, save_path):
+ """
+ transform single json file to images
+
+ Args:
+            json_path (str): the path of the json file
+ save_path (str): the root path to save images
+
+ """
+ data, label = json_2_numpy([28, 28, 1], json_path)
+ for i in range(data.shape[0]):
+            img = data[i] * 255  # PIL does not support 0/1 images; convert them to the 0-255 range
+ im = Image.fromarray(np.squeeze(img))
+ im = im.convert('L')
+ img_name = str(label[i]) + '_' + name_list[label[i]] + '_' + str(i) + '.png'
+ path1 = os.path.join(save_path, str(label[i]))
+ mkdir(path1)
+ img_path = os.path.join(path1, img_name)
+ im.save(img_path)
+ print('-----', i, '-----')
+
+ def all_json_2_img(root_path, save_root_path):
+ """
+ transform json files to images
+ Args:
+            root_path (str): the root path of the 3500 json files
+            save_root_path (str): the root path to save images
+ """
+ usage = ['train', 'test']
+ for i in range(2):
+ x = usage[i]
+ files_path = os.path.join(root_path, x)
+ files = os.listdir(files_path)
+
+ for name in files:
+ user_name = name.split('.')[0]
+ json_path = os.path.join(files_path, name)
+ save_path1 = os.path.join(save_root_path, user_name)
+ mkdir(save_path1)
+ save_path = os.path.join(save_path1, x)
+ mkdir(save_path)
+ print('=============================' + name + '=======================')
+ json_2_img(json_path, save_path)
+
+ all_json_2_img("leaf/data/femnist/3500_client_json/", "leaf/data/femnist/3500_client_img/")
+ ```
+
+    If the script runs successfully, the following output is printed:
+
+ ```sh
+ =============================f0644_19.json=======================
+ ----- 0 -----
+ ----- 1 -----
+ ----- 2 -----
+ ......
+ ```
+
+7. Since the datasets in some user folders are small, if the number of samples is smaller than the batch size, random expansion (by duplicating existing images) is required.
+
+ The entire dataset `"leaf/data/femnist/3500_client_img/"` can be checked and expanded by referring to the following code:
+
+ ```python
+ import os
+ import shutil
+ from random import choice
+
+ def count_dir(path):
+ num = 0
+ for root, dirs, files in os.walk(path):
+ for file in files:
+ num += 1
+ return num
+
+ def get_img_list(path):
+ img_path_list = []
+ label_list = os.listdir(path)
+ for i in range(len(label_list)):
+ label = label_list[i]
+ imgs_path = os.path.join(path, label)
+ imgs_name = os.listdir(imgs_path)
+ for j in range(len(imgs_name)):
+ img_name = imgs_name[j]
+ img_path = os.path.join(imgs_path, img_name)
+ img_path_list.append(img_path)
+ return img_path_list
+
+ def data_aug(data_root_path, batch_size = 32):
+ users = os.listdir(data_root_path)
+ tags = ["train", "test"]
+ aug_users = []
+ for i in range(len(users)):
+ user = users[i]
+ for tag in tags:
+ data_path = os.path.join(data_root_path, user, tag)
+ num_data = count_dir(data_path)
+ if num_data < batch_size:
+ aug_users.append(user + "_" + tag)
+ print("user: ", user, " ", tag, " data number: ", num_data, " < ", batch_size, " should be aug")
+ aug_num = batch_size - num_data
+ img_path_list = get_img_list(data_path)
+ for j in range(aug_num):
+ img_path = choice(img_path_list)
+ info = img_path.split(".")
+ aug_img_path = info[0] + "_aug_" + str(j) + ".png"
+ shutil.copy(img_path, aug_img_path)
+ print("[aug", j, "]", "============= copy file:", img_path, "to ->", aug_img_path)
+ print("the number of all aug users: " + str(len(aug_users)))
+ print("aug user name: ", end=" ")
+ for k in range(len(aug_users)):
+ print(aug_users[k], end = " ")
+
+ if __name__ == "__main__":
+ data_root_path = "leaf/data/femnist/3500_client_img/"
+ batch_size = 32
+ data_aug(data_root_path, batch_size)
+ ```
+
+8. Convert the expanded image dataset into a bin file format usable in the Federated Learning Framework.
+
+ Refer to the following code:
+
+ ```python
+ import numpy as np
+ import os
+ import mindspore.dataset as ds
+ import mindspore.dataset.vision as vision
+ import mindspore.dataset.transforms as transforms
+ import mindspore
+
+ def mkdir(path):
+ if not os.path.exists(path):
+ os.mkdir(path)
+
+ def count_id(path):
+ files = os.listdir(path)
+ ids = {}
+ for i in files:
+ ids[i] = int(i)
+ return ids
+
+ def create_dataset_from_folder(data_path, img_size, batch_size=32, repeat_size=1, num_parallel_workers=1, shuffle=False):
+ """ create dataset for train or test
+ Args:
+ data_path: Data path
+ batch_size: The number of data records in each group
+ repeat_size: The number of replicated data records
+ num_parallel_workers: The number of parallel workers
+ """
+ # define dataset
+ ids = count_id(data_path)
+ mnist_ds = ds.ImageFolderDataset(dataset_dir=data_path, decode=False, class_indexing=ids)
+ # define operation parameters
+ resize_height, resize_width = img_size[0], img_size[1] # 32
+
+ transform = [
+ vision.Decode(True),
+ vision.Grayscale(1),
+ vision.Resize(size=(resize_height, resize_width)),
+ vision.Grayscale(3),
+ vision.ToTensor(),
+ ]
+ compose = transforms.Compose(transform)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(input_columns="label", operations=transforms.TypeCast(mindspore.int32))
+ mnist_ds = mnist_ds.map(input_columns="image", operations=compose)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ if shuffle:
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+ return mnist_ds
+
+ def img2bin(root_path, root_save):
+ """
+ transform images to bin files
+
+ Args:
+ root_path: the root path of 3500 images files
+ root_save: the root path to save bin files
+
+ """
+
+ use_list = []
+ train_batch_num = []
+ test_batch_num = []
+ mkdir(root_save)
+ users = os.listdir(root_path)
+ for user in users:
+ use_list.append(user)
+ user_path = os.path.join(root_path, user)
+ train_test = os.listdir(user_path)
+ for tag in train_test:
+ data_path = os.path.join(user_path, tag)
+ dataset = create_dataset_from_folder(data_path, (32, 32, 1), 32)
+ batch_num = 0
+ img_list = []
+ label_list = []
+ for data in dataset.create_dict_iterator():
+ batch_x_tensor = data['image']
+ batch_y_tensor = data['label']
+ trans_img = np.transpose(batch_x_tensor.asnumpy(), [0, 2, 3, 1])
+ img_list.append(trans_img)
+ label_list.append(batch_y_tensor.asnumpy())
+ batch_num += 1
+
+ if tag == "train":
+ train_batch_num.append(batch_num)
+ elif tag == "test":
+ test_batch_num.append(batch_num)
+
+            imgs = np.array(img_list)  # shape (batch_num, 32, 32, 32, 3) after the NCHW -> NHWC transpose
+ labels = np.array(label_list)
+ path1 = os.path.join(root_save, user)
+ mkdir(path1)
+ image_path = os.path.join(path1, user + "_" + "bn_" + str(batch_num) + "_" + tag + "_data.bin")
+ label_path = os.path.join(path1, user + "_" + "bn_" + str(batch_num) + "_" + tag + "_label.bin")
+
+ imgs.tofile(image_path)
+ labels.tofile(label_path)
+ print("user: " + user + " " + tag + "_batch_num: " + str(batch_num))
+ print("total " + str(len(use_list)) + " users finished!")
+
+ root_path = "leaf/data/femnist/3500_client_img/"
+ root_save = "leaf/data/femnist/3500_clients_bin"
+ img2bin(root_path, root_save)
+ ```
+
+    If the script runs successfully, the following output is printed:
+
+ ```sh
+ user: f0141_43 test_batch_num: 1
+ user: f0141_43 train_batch_num: 10
+ user: f0137_14 test_batch_num: 1
+ user: f0137_14 train_batch_num: 11
+ ......
+ total 3500 users finished!
+ ```
+
+9. The generated `3500_clients_bin` folder contains a total of 3500 user folders. Its directory structure is as follows:
+
+ ```sh
+ leaf/data/femnist/3500_clients_bin
+    ├── f0000_14 # User ID
+    │   ├── f0000_14_bn_10_train_data.bin # Training data of user f0000_14 (the number 10 after bn_ is the batch number)
+    │   ├── f0000_14_bn_10_train_label.bin # Training labels of user f0000_14
+    │   ├── f0000_14_bn_1_test_data.bin # Test data of user f0000_14 (the number 1 after bn_ is the batch number)
+    │   └── f0000_14_bn_1_test_label.bin # Test labels of user f0000_14
+    ├── f0001_41 # User ID
+    │   ├── f0001_41_bn_11_train_data.bin # Training data of user f0001_41 (the number 11 after bn_ is the batch number)
+    │   ├── f0001_41_bn_11_train_label.bin # Training labels of user f0001_41
+    │   ├── f0001_41_bn_1_test_data.bin # Test data of user f0001_41 (the number 1 after bn_ is the batch number)
+    │   └── f0001_41_bn_1_test_label.bin # Test labels of user f0001_41
+    │   ...
+    └── f4099_10 # User ID
+        ├── f4099_10_bn_4_train_data.bin # Training data of user f4099_10 (the number 4 after bn_ is the batch number)
+        ├── f4099_10_bn_4_train_label.bin # Training labels of user f4099_10
+        ├── f4099_10_bn_1_test_data.bin # Test data of user f4099_10 (the number 1 after bn_ is the batch number)
+        └── f4099_10_bn_1_test_label.bin # Test labels of user f4099_10
+ ```
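+
+    The bin files can be read back with `numpy` for a quick check. A minimal sketch (shapes follow the conversion code above: images are float32 with shape `(batch_num, 32, 32, 32, 3)`, labels are int32 with shape `(batch_num, 32)`; the file name is an example from the tree above):
+
+    ```python
+    import numpy as np
+
+    bn = 10  # batch number parsed from the file name, e.g. f0000_14_bn_10_train_data.bin
+    data = np.fromfile("leaf/data/femnist/3500_clients_bin/f0000_14/f0000_14_bn_10_train_data.bin", dtype=np.float32)
+    data = data.reshape(bn, 32, 32, 32, 3)  # (batch_num, batch_size, H, W, C)
+    label = np.fromfile("leaf/data/femnist/3500_clients_bin/f0000_14/f0000_14_bn_10_train_label.bin", dtype=np.int32)
+    label = label.reshape(bn, 32)  # (batch_num, batch_size)
+    print(data.shape, label.shape)
+    ```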
+
+The `3500_clients_bin` folder generated according to steps 1 to 9 above can be directly used as the input data for the device-cloud federated image classification task.
diff --git a/docs/federated/docs/source_en/index.rst b/docs/federated/docs/source_en/index.rst
index 39f7e69a8f..9c76cd68c9 100644
--- a/docs/federated/docs/source_en/index.rst
+++ b/docs/federated/docs/source_en/index.rst
@@ -108,4 +108,5 @@ Common Application Scenarios
:maxdepth: 1
:caption: References
+ image_classfication_dataset_process
faq
\ No newline at end of file
diff --git a/docs/federated/docs/source_zh_cn/image_classfication_dataset_process.md b/docs/federated/docs/source_zh_cn/image_classfication_dataset_process.md
index e7fec758a0..18ff515ef6 100644
--- a/docs/federated/docs/source_zh_cn/image_classfication_dataset_process.md
+++ b/docs/federated/docs/source_zh_cn/image_classfication_dataset_process.md
@@ -2,7 +2,7 @@
-本教程采用`leaf`数据集中的联邦学习数据集`FEMNIST`, 该数据集包含62个不同类别的手写数字和字母(数字0~9、26个小写字母、26个大写字母),图像大小为`28 x 28`像素,数据集包含3500个用户的手写数字和字母(最多可模拟3500个客户端参与联邦学习),总数据量为805263,平均每个用户包含数据量为226.83,所有用户数据量的方差为88.94。
+本教程采用`leaf`数据集中的联邦学习数据集`FEMNIST`,该数据集包含62个不同类别的手写数字和字母(数字0~9、26个小写字母、26个大写字母),图像大小为`28 x 28`像素,数据集包含3500个用户的手写数字和字母(最多可模拟3500个客户端参与联邦学习),总数据量为805263,平均每个用户包含数据量为226.83,所有用户数据量的方差为88.94。
## 端云联邦学习图像分类数据集处理
@@ -132,7 +132,7 @@
- `user_name`: 用户名。
- `num_samples`: 用户的样本数。
- - `user_data`: 一个以'x'为key,以用户数据为value的字典对象; 以'y'为key,以用户数据对应的标签为value。
+ - `user_data`: 一个以'x'为key,以用户数据为value的字典对象;以'y'为key,以用户数据对应的标签为value。
运行该脚本打印如下,代表运行成功:
@@ -434,18 +434,18 @@
├── f0000_14 # 用户编号
│ ├── f0000_14_bn_10_train_data.bin # 用户f0000_14的训练数据 (bn_后面的数字10代表batch number)
│ ├── f0000_14_bn_10_train_label.bin # 用户f0000_14的训练标签
- │ ├── f0000_14_bn_1_test_data.bin # 用户f0000_14的测试数据 (bn_后面的数字1代表batch number)
+ │ ├── f0000_14_bn_1_test_data.bin # 用户f0000_14的测试数据 (bn_后面的数字1代表batch number)
│ └── f0000_14_bn_1_test_label.bin # 用户f0000_14的测试标签
├── f0001_41 # 用户编号
│ ├── f0001_41_bn_11_train_data.bin # 用户f0001_41的训练数据 (bn_后面的数字11代表batch number)
│ ├── f0001_41_bn_11_train_label.bin # 用户f0001_41的训练标签
- │ ├── f0001_41_bn_1_test_data.bin # 用户f0001_41的测试数据 (bn_后面的数字1代表batch number)
- │ └── f0001_41_bn_1_test_label.bin # 用户f0001_41的测试标签
+ │ ├── f0001_41_bn_1_test_data.bin # 用户f0001_41的测试数据 (bn_后面的数字1代表batch number)
+ │ └── f0001_41_bn_1_test_label.bin # 用户f0001_41的测试标签
│ ...
└── f4099_10 # 用户编号
├── f4099_10_bn_4_train_data.bin # 用户f4099_10的训练数据 (bn_后面的数字4代表batch number)
├── f4099_10_bn_4_train_label.bin # 用户f4099_10的训练标签
- ├── f4099_10_bn_1_test_data.bin # 用户f4099_10的测试数据 (bn_后面的数字1代表batch number)
+ ├── f4099_10_bn_1_test_data.bin # 用户f4099_10的测试数据 (bn_后面的数字1代表batch number)
└── f4099_10_bn_1_test_label.bin # 用户f4099_10的测试标签
```
diff --git a/docs/federated/docs/source_zh_cn/image_classification_application_in_cross_silo.md b/docs/federated/docs/source_zh_cn/image_classification_application_in_cross_silo.md
index 2fec556f21..a332654cfd 100644
--- a/docs/federated/docs/source_zh_cn/image_classification_application_in_cross_silo.md
+++ b/docs/federated/docs/source_zh_cn/image_classification_application_in_cross_silo.md
@@ -8,7 +8,7 @@
## 下载数据集
-本示例采用[leaf数据集](https://github.com/TalwalkarLab/leaf)中的联邦学习数据集`FEMNIST`, 该数据集包含62个不同类别的手写数字和字母(数字0~9、26个小写字母、26个大写字母),图像大小为`28 x 28`像素,数据集包含3500个用户的手写数字和字母(最多可模拟3500个客户端参与联邦学习),总数据量为805263,平均每个用户包含数据量为226.83,所有用户数据量的方差为88.94。
+本示例采用[leaf数据集](https://github.com/TalwalkarLab/leaf)中的联邦学习数据集`FEMNIST`,该数据集包含62个不同类别的手写数字和字母(数字0~9、26个小写字母、26个大写字母),图像大小为`28 x 28`像素,数据集包含3500个用户的手写数字和字母(最多可模拟3500个客户端参与联邦学习),总数据量为805263,平均每个用户包含数据量为226.83,所有用户数据量的方差为88.94。
可参考文档[端云联邦学习图像分类数据集处理](https://www.mindspore.cn/federated/docs/zh-CN/master/image_classfication_dataset_process.html)中步骤1~7获取图片形式的3500个用户数据集`3500_client_img`。
@@ -153,7 +153,7 @@ if __name__ == "__main__":
### 安装MindSpore和Mindspore Federated
-包括源码和下载发布版两种方式,支持CPU、GPU硬件平台,根据硬件平台选择安装即可。安装步骤可参考[MindSpore安装指南](https://www.mindspore.cn/install), [Mindspore Federated安装指南](https://www.mindspore.cn/federated/docs/zh-CN/master/federated_install.html)。
+包括源码和下载发布版两种方式,支持CPU、GPU硬件平台,根据硬件平台选择安装即可。安装步骤可参考[MindSpore安装指南](https://www.mindspore.cn/install),[Mindspore Federated安装指南](https://www.mindspore.cn/federated/docs/zh-CN/master/federated_install.html)。
目前联邦学习框架只支持Linux环境中部署,cross-silo联邦学习框架需要MindSpore版本号>=1.5.0。
diff --git a/docs/mindspore/source_en/index.rst b/docs/mindspore/source_en/index.rst
index 8cf01bc7b6..9163308ef1 100644
--- a/docs/mindspore/source_en/index.rst
+++ b/docs/mindspore/source_en/index.rst
@@ -76,7 +76,10 @@ MindSpore Documentation
migration_guide/overview
migration_guide/enveriment_preparation
+ migration_guide/analysis_and_preparation
migration_guide/model_development/model_development
+ migration_guide/debug_and_tune
+ migration_guide/sample_code
migration_guide/use_third_party_op
.. toctree::
diff --git a/docs/mindspore/source_en/migration_guide/analysis_and_preparation.md b/docs/mindspore/source_en/migration_guide/analysis_and_preparation.md
new file mode 100644
index 0000000000..2925e33cce
--- /dev/null
+++ b/docs/mindspore/source_en/migration_guide/analysis_and_preparation.md
@@ -0,0 +1,475 @@
+# Model Analysis and Preparation
+
+
+
+## Obtaining Sample Code
+
+When you migrate a paper's model to MindSpore, you need to find reference code that has already been implemented in another framework. In principle, the reference code should meet at least one of the following requirements:
+
+1. The implementation is open-sourced by the paper's authors.
+2. The implementation is starred and forked by many developers, which means it is widely recognized.
+3. The code is recent and actively maintained by developers.
+4. The PyTorch reference code is preferred.
+
+If a new paper has no reference implementation, you can refer to [Constructing MindSpore Network](https://www.mindspore.cn/docs/en/master/migration_guide/model_development/model_development.html).
+
+## Analyzing Algorithm and Network Structure
+
+First, when reading the paper and reference code, analyze the network structure so that you can organize the code accordingly. The following shows the general network structure of YOLOX.
+
+| Module | Implementation |
+| ---- | ---- |
+| backbone | CSPDarknet (s, m, l, x)|
+| neck | FPN |
+| head | Decoupled Head |
+
+Second, analyze the innovative points of the algorithm to be migrated and record the tricks used during training, for example, data augmentation added during data processing, shuffle, the optimizer, the learning rate decay policy, and parameter initialization. You can prepare a checklist and fill in the corresponding items during the analysis.
+
+For example, the following records some tricks used by the YOLOX network during training.
+
+
+| Trick | Record |
+| ---- | ---- |
+| Data augmentation | Mosaic, including random scaling, crop, and layout; MixUp |
+| Learning rate decay policy | Multiple decay modes are available. By default, the cosine learning rate decay is used. |
+| Optimizer parameters | SGD momentum=0.9, nesterov=True, and no weight decay |
+| Training parameters | epoch: 300; batch size: 8 |
+| Network structure optimization points | Decoupled Head; Anchor Free; SimOTA |
+| Training process optimization points | EMA; no data augmentation for the last 15 epochs; mixed precision |
+
+## Function Debugging
+
+During network migration, you are advised to use the PyNative mode for debugging. In PyNative mode, you can debug the code easily and log printing is convenient. After debugging is complete, switch to the graph mode, which provides better execution performance. Graph compilation can also reveal problems in the network, for example, gradient truncation caused by third-party operators.
+For details, see [Function Debugging](https://www.mindspore.cn/tutorials/experts/en/master/debug/function_debug.html).
+
+## Accuracy Debugging
+
+The accuracy debugging process is as follows:
+
+### 1. Checking Parameters
+
+This part includes checking the number of all parameters and of trainable parameters, and checking the shape of each parameter.
+
+#### Obtaining MindSpore Parameters
+
+In MindSpore, both trainable and untrainable parameters are of the `Parameter` type.
+
+```python
+from mindspore import nn
+
+class msNet(nn.Cell):
+ def __init__(self):
+ super(msNet, self).__init__()
+ self.fc = nn.Dense(1, 1, weight_init='normal')
+ def construct(self, x):
+ output = self.fc(x)
+ return output
+
+msnet = msNet()
+# Obtain all parameters.
+all_parameter = []
+for item in msnet.get_parameters():
+ all_parameter.append(item)
+ print(item.name, item.data.shape)
+print(f"all parameter numbers: {len(all_parameter)}")
+
+# Obtain trainable parameters.
+trainable_params = msnet.trainable_params()
+for item in trainable_params:
+ print(item.name, item.data.shape)
+print(f"trainable parameter numbers: {len(trainable_params)}")
+```
+
+```text
+ fc.weight (1, 1)
+ fc.bias (1,)
+ all parameter numbers: 2
+ fc.weight (1, 1)
+ fc.bias (1,)
+ trainable parameter numbers: 2
+```
+
+#### Obtaining PyTorch Parameters
+
+In PyTorch, trainable parameters are `Parameter` objects, and untrainable parameters are either parameters with `requires_grad=False` or buffers.
+
+```python
+from torch import nn
+
+class ptNet(nn.Module):
+ def __init__(self):
+ super(ptNet, self).__init__()
+ self.fc = nn.Linear(1, 1)
+    def forward(self, x):
+ output = self.fc(x)
+ return output
+
+
+ptnet = ptNet()
+all_parameter = []
+trainable_params = []
+# Obtain network parameters.
+for name, item in ptnet.named_parameters():
+ if item.requires_grad:
+ trainable_params.append(item)
+ all_parameter.append(item)
+ print(name, item.shape)
+
+for name, buffer in ptnet.named_buffers():
+ all_parameter.append(buffer)
+ print(name, buffer.shape)
+print(f"all parameter numbers: {len(all_parameter)}")
+print(f"trainable parameter numbers: {len(trainable_params)}")
+```
+
+```text
+ fc.weight torch.Size([1, 1])
+ fc.bias torch.Size([1])
+ all parameter numbers: 2
+ trainable parameter numbers: 2
+```
+
+The parameters of MindSpore and PyTorch are named similarly except for BatchNorm, whose parameter names are mapped as follows. Note that MindSpore has no parameter corresponding to `num_batches_tracked`; you can use `global_step` in the optimizer instead.
+
+| MindSpore | PyTorch |
+| --------- | --------|
+| gamma | weight |
+| beta | bias |
+| moving_mean | running_mean |
+| moving_variance | running_var |
+| -| num_batches_tracked |
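+
+The following is a minimal sketch of applying this mapping when converting a PyTorch checkpoint into a MindSpore checkpoint. The `pt_to_ms_name` helper, the assumption that BatchNorm layers contain `bn` in their names, and the file paths are illustrative assumptions, not MindSpore or PyTorch APIs:
+
+```python
+import mindspore as ms
+import torch
+
+def pt_to_ms_name(name):
+    """Map PyTorch BatchNorm parameter names to their MindSpore counterparts (illustrative helper)."""
+    mapping = {'weight': 'gamma', 'bias': 'beta',
+               'running_mean': 'moving_mean', 'running_var': 'moving_variance'}
+    prefix, _, suffix = name.rpartition('.')
+    if 'bn' in prefix and suffix in mapping:  # assumes BatchNorm layers have 'bn' in their names
+        return prefix + '.' + mapping[suffix]
+    return name
+
+pt_params = torch.load("resnet.pth", map_location="cpu")  # hypothetical PyTorch state_dict
+ms_params = [{'name': pt_to_ms_name(name), 'data': ms.Tensor(value.detach().numpy())}
+             for name, value in pt_params.items()
+             if not name.endswith('num_batches_tracked')]  # no MindSpore counterpart
+ms.save_checkpoint(ms_params, "resnet_from_torch.ckpt")
+```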
+
+### 2. Model Verification
+
+The implementation of the model algorithm is irrelevant to the framework. The trained parameters can be converted into the [checkpoint](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html) file of MindSpore and loaded to the network for inference verification.
+
+For details about the model verification process, see [ResNet Network Migration](https://www.mindspore.cn/docs/en/master/migration_guide/sample_code.html#model-validation).
+
+### 3. Inference Verification
+
+After confirming that the model structures are the same, you are advised to verify the inference process. In addition to the model, the entire inference process also involves the dataset and metrics. When the inference results are inconsistent, use the control variable method to locate the fault step by step.
+
+For details about the inference verification process, see [ResNet Network Migration](https://www.mindspore.cn/docs/en/master/migration_guide/sample_code.html#inference-process).
+
+### 4. Training Accuracy
+
+If the inference verification passes, the basic model, data processing, and metrics calculation are confirmed to be correct. If the training accuracy is still abnormal, locate the fault as follows:
+
+- Add a loss scale. On Ascend, operators such as Conv, Sort, and TopK support only float16, and MatMul is recommended to use float16 for performance reasons. Therefore, it is recommended that the loss scale be used as a standard configuration for network training.
+
+```python
+import mindspore as ms
+from mindspore import nn
+# Model
+loss_scale_manager = ms.FixedLossScaleManager(drop_overflow_update=False) # Static loss scale
+# loss_scale_manager = ms.DynamicLossScaleManager() # Dynamic loss scale
+
+# 1. General process
+loss = nn.MSELoss()
+opt = nn.Adam(params=msnet.trainable_params(), learning_rate=0.01)
+model = ms.Model(network=msnet, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale_manager)
+
+# 2. Self-packaged forward network and loss function
+msnet.to_float(ms.float16)
+loss.to_float(ms.float32)
+net_with_loss = nn.WithLossCell(msnet, loss)
+# It is recommended that loss_fn be used for the mixed precision of the model. Otherwise, float16 is used for calculation of the loss part, which may cause overflow.
+model = ms.Model(network=net_with_loss, optimizer=opt)
+
+# 3. Self-packaged training process
+scale_sense = nn.FixedLossScaleUpdateCell(1)  # Static loss scale; replace 1 with config.loss_scale as needed
+# scale_sense = nn.DynamicLossScaleUpdateCell(loss_scale_value=config.loss_scale,
+# scale_factor=2, scale_window=1000) # Dynamic loss scale
+train_net = nn.TrainOneStepWithLossScaleCell(net_with_loss, optimizer=opt, scale_sense=scale_sense)
+model = ms.Model(network=train_net)
+```
+
+- Check whether overflow occurs. When loss scale is added, overflow detection is added by default to monitor the overflow result. If overflow occurs continuously, you are advised to use the [debugger](https://www.mindspore.cn/mindinsight/docs/en/master/debugger.html) or [dump data](https://mindspore.cn/tutorials/experts/en/master/debug/dump.html) of MindInsight to check why overflow occurs.
+
+```python
+import numpy as np
+from mindspore import dataset as ds
+
+def get_data(num, w=2.0, b=3.0):
+ for _ in range(num):
+ x = np.random.uniform(-10.0, 10.0)
+ noise = np.random.normal(0, 1)
+ y = x * w + b + noise
+ yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)
+
+
+def create_dataset(num_data, batch_size=16, repeat_size=1):
+ input_data = ds.GeneratorDataset(list(get_data(num_data)), column_names=['data', 'label'])
+ input_data = input_data.batch(batch_size, drop_remainder=True)
+ input_data = input_data.repeat(repeat_size)
+ return input_data
+
+train_net.set_train()
+dataset = create_dataset(1600)
+iterator = dataset.create_tuple_iterator()
+for i, data in enumerate(iterator):
+ loss, overflow, scaling_sens = train_net(*data)
+ print("step: {}, loss: {}, overflow:{}, scale:{}".format(i, loss, overflow, scaling_sens))
+```
+
+```text
+ step: 0, loss: 138.42825, overflow:False, scale:1.0
+ step: 1, loss: 118.172104, overflow:False, scale:1.0
+ step: 2, loss: 159.14542, overflow:False, scale:1.0
+ step: 3, loss: 150.65671, overflow:False, scale:1.0
+ ... ...
+ step: 97, loss: 69.513245, overflow:False, scale:1.0
+ step: 98, loss: 51.903114, overflow:False, scale:1.0
+ step: 99, loss: 42.250656, overflow:False, scale:1.0
+```
+
+- Check the optimizer, loss, and parameter initialization. Besides the model and dataset, only the optimizer, loss, and parameter initialization are added in the entire training process, so check them if the training is abnormal. Problems are especially likely in the loss function and parameter initialization.
+- Check whether random seeds are added in the multi-device scenario to ensure that the initialization on each card is consistent, and determine whether to perform gradient aggregation during [customized training](https://www.mindspore.cn/docs/en/master/migration_guide/model_development/training_and_gradient.html#customizing-training-cell).
+
+```python
+import mindspore as ms
+ms.set_seed(1) # The random seeds of MindSpore, NumPy, and dataset are fixed. The random seed of the API needs to be set in the API attribute.
+```
+
+- Check whether the data processing meets the expectation through visualization. Focus on data shuffle and check whether data mismatch occurs.
+
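+For example, a minimal sketch (reusing the `dataset` from the overflow-check example above) that prints the shapes and dtypes of a few batches for a quick sanity check:
+
+```python
+for i, data in enumerate(dataset.create_dict_iterator(output_numpy=True)):
+    print(i, {name: (value.shape, value.dtype) for name, value in data.items()})
+    if i >= 2:  # inspect only the first few batches
+        break
+```
+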
+For details about more accuracy debugging policies, see [Accuracy Debugging](https://mindspore.cn/mindinsight/docs/en/master/accuracy_problem_preliminary_location.html).
+
+## Performance Tuning
+
+The performance tuning directions are as follows:
+
+1. Operator performance tuning
+2. Framework enabling performance tuning
+3. Multi-Node synchronization performance tuning
+4. Data processing performance tuning
+
+For details, see [ResNet Network Migration](https://www.mindspore.cn/docs/en/master/migration_guide/sample_code.html).
+
+> Some networks are large or there are many [process control statements](https://mindspore.cn/tutorials/en/master/advanced/modules/control_flow.html). In this case, the build is slow in graph mode. During performance tuning, distinguish graph build from network execution. This section describes the performance tuning policies in the network execution phase. If graph build is slow, try [incremental operator build](https://mindspore.cn/tutorials/experts/en/master/debug/op_compilation.html) or contact [MindSpore community](https://gitee.com/mindspore/mindspore/issues) for feedback.
+
+### Operator Performance Tuning
+
+#### Poor Operator Performance
+
+If a single operator takes a long time and the performance of the same operator varies greatly in different shapes or data types, the problem is caused by the operator performance. The solution is as follows:
+
+1. Use data types with less computational workload. For example, if there is no obvious difference between the precision of the same operator in float16 and float32 modes, you can use the float16 format with less calculation workload.
+2. Use other operators with the same algorithm to avoid this problem.
+3. Pay attention to 16-alignment in the Ascend environment. Due to the design of the Ascend AI Processors, it is recommended that the calculation on the AI core be 16-alignment (each dimension in the shape is a multiple of 16).
+4. [Operator Tuning](https://mindspore.cn/tutorials/experts/en/master/debug/auto_tune.html).
+
+If you find an operator with poor performance, you are advised to contact [MindSpore community](https://gitee.com/mindspore/mindspore/issues) for feedback. We will optimize the operator in time after confirming that the problem is caused by poor performance.
+
+### Framework Enabling Performance Tuning
+
+#### Using the Static Graph Mode
+
+Generally, MindSpore in static graph mode is much faster than that in PyNative mode. It is recommended that training and inference be performed in static graph mode. For details, see [Combination of Dynamic and Static Graphs](https://www.mindspore.cn/docs/en/master/design/dynamic_graph_and_static_graph.html).
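+
+A minimal sketch of switching between the two modes:
+
+```python
+import mindspore as ms
+
+ms.set_context(mode=ms.GRAPH_MODE)      # static graph mode for training and inference
+# ms.set_context(mode=ms.PYNATIVE_MODE)  # PyNative mode for debugging
+```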
+
+#### On-device Execution
+
+MindSpore provides an [on-device execution method](https://www.mindspore.cn/docs/en/master/design/overview.html) to concurrently process data and execute the network on the device. You only need to set `dataset_sink_mode=True` in `model.train`. Note that this configuration is `True` by default. When it is enabled, the network result is returned only once per epoch. You are advised to set it to `False` during debugging.
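+
+For example (a sketch assuming `model` and `dataset` are defined as in the earlier examples):
+
+```python
+model.train(10, dataset, dataset_sink_mode=True)     # data sinking enabled (default)
+# model.train(10, dataset, dataset_sink_mode=False)  # easier to debug step by step
+```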
+
+#### Using Automatic Mixed Precision
+
+The mixed precision training method accelerates the deep neural network training process by mixing the single-precision floating-point data format and the half-precision floating-point data format without compromising the network accuracy. Mixed precision training can accelerate the computing process, reduce memory usage and retrieval, and enable a larger model or batch size to be trained on specific hardware.
+
+For details, see [Mixed Precision Tutorial](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html).
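+
+A minimal sketch (assuming `msnet`, `loss`, and `opt` from the earlier examples) that lets `Model` apply automatic mixed precision:
+
+```python
+import mindspore as ms
+
+model = ms.Model(network=msnet, loss_fn=loss, optimizer=opt, amp_level="O2")  # "O2" keeps BatchNorm in float32
+```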
+
+#### Enabling Graph Kernel Fusion
+
+Graph kernel fusion is a unique network performance optimization technology of MindSpore. It can automatically analyze and optimize the logic of existing network computational graphs, simplify and replace computational graphs, split and fuse operators, and build operators in a special way based on the target hardware capability to improve the computing resource utilization of devices and optimize the overall network performance. Compared with traditional optimization technologies, the graph kernel fusion technology has unique advantages, such as joint optimization of multiple operators across boundaries, cross-layer collaboration with operator compilation, and real-time compilation of operators based on Polyhedral. In addition, the entire optimization process of graph kernel fusion can be automatically completed after users enable the corresponding configuration. Network developers do not need to perform extra perception, so that users can focus on network algorithm implementation.
+
+Graph kernel fusion applies to scenarios that have high requirements on network execution time, and to scenarios where basic operators are combined into custom combined operators and are automatically fused to improve the performance of these custom combined operators.
+
+For details, see [Graph Kernel Fusion Tutorial](https://www.mindspore.cn/docs/en/master/design/graph_fusion_engine.html).
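+
+Graph kernel fusion is enabled through the context. A minimal sketch (assuming a backend that supports it, such as Ascend or GPU):
+
+```python
+import mindspore as ms
+
+ms.set_context(enable_graph_kernel=True)
+```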
+
+#### Others
+
+If there are too many conversion operators (TransData and Cast operators) and the conversion takes a long time, analyze the necessity of the manually added Cast operator. If the accuracy is not affected, delete the redundant Cast and TransData operators.
+
+If there are too many conversion operators automatically generated by MindSpore, the MindSpore framework may not be fully optimized for some special cases. In this case, contact [MindSpore community](https://gitee.com/mindspore/mindspore/issues) for feedback.
+
+In the [dynamic shape scenario](https://www.mindspore.cn/docs/en/master/migration_guide/analysis_and_preparation.html), the graph needs to be recompiled continuously, which may cause a long end-to-end training time. You are advised to [avoid dynamic shapes](https://www.mindspore.cn/docs/en/master/migration_guide/model_development/model_and_loss.html).
+
+### Multi-Node Synchronization Performance Tuning
+
+During distributed training, after forward propagation and gradient calculation are complete in a step training process, each machine starts to perform AllReduce gradient synchronization. The AllReduce synchronization time is mainly affected by the number of weights and machines. For a more complex network with a larger machine scale, the AllReduce gradient update time is longer. In this case, you can perform AllReduce segmentation to reduce the time consumption.
+
+In normal cases, AllReduce gradient synchronization waits until all backward operators are executed, that is, all gradients are computed before the gradients of all machines are synchronized at a time. With AllReduce segmentation, as soon as the gradients of some weights are computed, gradient synchronization for these weights starts immediately. In this way, gradient synchronization and the gradient computation of the remaining operators are performed concurrently, hiding part of the AllReduce synchronization time. The shard strategy is usually tuned manually to find an optimal solution (more than two shards are supported).
+The [ResNet-50](https://gitee.com/mindspore/models/blob/master/official/cv/resnet/train.py) is used as an example. The network has 160 weights. [85, 160] indicates that gradient synchronization is performed immediately after the gradients of weights 0 to 85 are calculated, and gradient synchronization is performed after the gradients of weights 86 to 160 are calculated. The network is divided into two shards. Therefore, gradient synchronization needs to be performed twice. The sample code is as follows:
+
+```python
+import os
+import mindspore as ms
+from mindspore.communication import init
+
+device_id = int(os.getenv('DEVICE_ID', '0'))
+rank_size = int(os.getenv('RANK_SIZE', '1'))
+rank_id = int(os.getenv('RANK_ID', '0'))
+
+# init context
+ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend', device_id=device_id)
+if rank_size > 1:
+ ms.set_auto_parallel_context(device_num=rank_size, parallel_mode=ms.ParallelMode.DATA_PARALLEL,
+ gradients_mean=True)
+ ms.set_auto_parallel_context(all_reduce_fusion_config=[85, 160])
+ init()
+```
+
+For details, see [Cluster Performance Profiling](https://www.mindspore.cn/mindinsight/docs/en/master/performance_profiling_of_cluster.html).
+
+### Data Processing Performance Tuning
+
+The performance jitter of a single step and an empty data queue for a period of time are both caused by poor performance of data preprocessing, which cannot keep up with the single-step iteration speed. The two symptoms usually occur in pairs.
+
+When data processing is slow, the data queue, which is full at the beginning, is gradually drained until it is empty. The training process then starts to wait for the queue to be filled with data; once new data arrives, the network continues single-step training. Because no queue buffers the data processing anymore, any jitter in data processing performance is directly reflected in the single-step performance, which causes the single-step performance jitter.
+
+For details about data performance problems, see [Data Preparation Performance Analysis](https://www.mindspore.cn/mindinsight/docs/en/master/performance_profiling_ascend.html#data-preparation-performance-analysis) of MindInsight. This describes common data performance problems and solutions.
+
+For more performance debugging methods, see [Performance Tuning](https://www.mindspore.cn/tutorials/experts/en/master/debug/performance_optimization.html).
diff --git a/docs/mindspore/source_en/migration_guide/model_development/images/evaluation_procession.png b/docs/mindspore/source_en/migration_guide/model_development/images/evaluation_procession.png
new file mode 100644
index 0000000000000000000000000000000000000000..12392b142d1528f145d89f079b1860c2ca28b0ee
GIT binary patch
literal 23286
[binary image data omitted]

diff --git a/docs/mindspore/source_en/migration_guide/model_development/images/parameter_freeze.png b/docs/mindspore/source_en/migration_guide/model_development/images/parameter_freeze.png
new file mode 100644
index 0000000000000000000000000000000000000000..598d2a7f9a02cc1ebf80b1155485da3b1b66506e
GIT binary patch
literal 34703
[binary image data omitted]
z)D9pG{!1Sp)UQYMCj%*d<1=FqiwIg09;?r3u6u>X54V^7EU-nxQ#40eG9-9i_sg}{ zE@Z~>-Ga~gFd&8ny*`N7F531gaRK_!WAq>P)ceVt77ebgg6}BqAZgW!6!w$}VrNGl zVC`=sDZdYz&qo|r&gPhUBGmO;`wGoH7{`$agXhGL4bMEG6H9>kzrOn7;9x(HrWDBI2k>tQ zMf~;=D!6}hKxG_ 1n9)VeCLd(Me zTq2W;3*^^PW-|7h)bnQTwDTPOldwH!*SsMRgqs8MkUQGaKX4<3t`aq0cUmLEIdjr8 z{YT~vM^gA&;>7bsdAv^qM0arM0u3nN<6uE5g+H|@Zp6$tRA?v50OQpZDNdF8{Y?r) zELt1dpMFpQw -}GA?tS!g1od|({_Y$hjkN)ee%%p9O?cPK702e=LBqj z9BO0E5Jq)T_L~#@)YtjCAve^ZD-zPhs5}1ndGQ}_b5{b(BuOuqBR6JBzgcddZ+ks% z1EGu=H#J~(n^=PD329@|BddDO{e1W#o_#9tLDIlbWjJQu;bI^0MT3&a^yEnXh0d6m zWRMaE@bq+7u^2ceWmn=a;jb$ hK9oY13XZX1hn8-gBAXDZZ!^ziRqmI9u!gTe3Y{qjFJ|@r$|62(81-P4 zp5RQFtA?|_(M61#OjFS~2EL0O9&=&M&zu1vt4<=bev2UBD0r^R;*z0wC=@1M#B~5s z`-h>lTKAU6HPEBMu*Y3xq(=$c@d6Ne hzU9J}eCEnm0HjKk(kk~gls zo~}LQeW#}ya9I6pUvGH!okwxul)Y{EC?3=jh~Xn=w#YOH{{zJ9L+H%7T`tC?9+r>t zI9>Rg5?zt8d?U(PKE-ut^)0Y0dfpwLI@B^Zms{7bq3CNlDK*CH5uc4%EuCH3(q=RI z$9a{<2YU4`4aBMI{k8;R_nYQrOwHEM?u6Hm>$YoyA7t)WuLB=i6cwwV(S+CjvWDdv zl~e-TDfSjzeLyUQ&23>}IsOko0tAh|^P5&@T`%Gr7FVYZEeg~>85JJI9;Zssu?t?o z)s-C8($Mz5zVtyQ|9)M+^WAm4LuQb^G(V8s_4`k3uXKLFW@}+Uh4>w(`Uf#Wj%R5y zkA9s>0FV%z&$`(>*&F_aFyXony;qFFqc87oT6sA7>X9qVR9ak9SLTba_7^8h`;!MR zt{t+s7sY+{uF0&NE5K~bT~P*-Mq95JgW&+OV!d!=A}@3~xdEh2&nq{uK9zr8b$>fy z_JI^8!s@4E${zK*z3kjz<_Ulb){Laj#v(mA;cckkRioPGi*ech+spl!P}atRjJGFx zkGl$9N%wQ<55E<5-Dk;2Ptp7At*ZC=&^pQ%B%2W%UQ`im;Jjzntm=A h k i) jF9wn3fN{YSI#8Ecqf#g3Iv)n~)9*jBXlxchodG-V_%t5r^HQm%044h@u?g z{6~kkg$_lzmcPyK`iHwa&zl*N@qOPyLl+cCG0}tC@IT32!caz*tjrO7Eh#Aod|7xS zO6)#u1e!xQC5hP0wcxs6TK)VbcN=wEj`7$L168@LRSmKZ0^CuZjI5au{H9l!JfUqKc?sdEvUPw(In|ChA6cKRn9jWX0WdH;1`3F%n3oi--ha zErg6=6A0O6=K@Sqd)Xj8SF=Jp=QuzxxcHJkpjW&ZMEVS6 C!Cf<=SaoM+IIie`%Si(W3Q*X zejrHptNH2Sm q?Da{g AtQ%}UDMa62RG7$nt^~TH1E?Yb#BwAVBNGa4MIAovonTnv@F-vspHDz`bs;dXp z$-j}M!#;jdls)kmPawoX+gOZ4xWc(8kSHSA k>5n42DKXYp_?#5i+k5>t(%;z*Y<@hQiif`gmN}^fnW;;yKgWNE8TBiqs z%-OdTgLfE^xlaCtvfZ-u)|gJ*$G=4$9 PgyC}^XI4Z1{ty#X%f-;#OOfYI~C2>kSVk1g8)VnycLR#i1h5w0-#Alh_ydi9= zL}O2Trj|Tq!*I%2FR!htp*NIWrf@6oitBz9?QT#d Z`e=-sj- c3k(qZfzaD_p F5YJ+U}+X#0Hs2 qeLfSV&WaZX(Zgzopicc(<9?Z}$*|du1Fg8! zSVD-I3)!yZeqPL3N|BD(AARp}Ub2v^Qjuvan@%O^O$uCUx#(e>`{K~e`(lOs=v&;n z+C}{?K!N+_n?;PCf41f!-@4~4LQz`$2TXSo%ygA}cX~WB*CFM zfCG`;m5X{E A3#4-A>+ zm?_FIR~il`cv<{9lqkr6)Ef@?M%6JCz2sS{Js7wiTSfdjK;L2i7Dg6f|E6`$I+V}= zwy&zLt9tWyZ)%RIAcIw_^6Afwn=brzzuas-72U9x%;`cqf-Zf-S!xUzI4Sr0{7)+X z8xMV=PU)-2kq!2~L!2$Rpx@RQgN|Js9Jntv9@zajDj1{Ve%fs!lxs&^ZEMtmTPdj; z9Fc#FuX>v>Uc{+2Fkn@V!&IP%gB^&UZexkavut}XeywCjJR$(v7MSaWi?_Ui%U>(8 zLkdd9|IoH!vtYeHATUHG&==Cewfp&--6&wiZV@=7p614X9ahPXode+-&!2F&@;gno zHjmFteKcAkfbwPff$M$i)x5d(e9uPzbk7A>?w0}8qNCO)PwhV;R>C^SB{F=k1uy=J z)fL5kyI>Qe>`81 i(bhb16B_#FwShV!Dl3}|BV#s zFH)N}1W{>d!joW-oF=U}9n#X*t`h&mV=5`U>EytgnAOf`{S5@hG#$e-&6~eDzOLTb z!@x~i{aP48YDZyy8nM&0JKn% wDh70yembv8sv!;Z-!j Z;eM?Z~sR| z*lj5d-$s}d8Ei^B-?N5qlk-kIKDQ3b6;^Epr GLTEmwuc{n4~nu*Ka97(-o= zUGl(};uMEz&AK*;uQb<>&SJ0+jQ-@{!TL#4YLmz1$p@|N2@k~ye?tmz-CJ2mIS z+L>r=s|H&5G^_<`#Q63)1qn{~X_SSV(iN|{=`L`PwF%He24djYs03f+U*oU7VB#-z zA|ZN@&J~--QqcF22;v_pv!R66GFHZWBW1BI!zh{p5;@*>Y#ROcCP?D;rRUdf-=`a& zsn;@QtKnWBf@RFArWW$_W}A_@gh=?0cw9x7J$5dB-=+T5nX$@mJMt 3!K{$B~hq)t0X+M4C!_t zXt$`$3w#qHoMulcU{${s@aB%+_RLrd4RPsjxYp5aAgNCRSa=Cr?1paN-;y)mKujQoZ)-;6>)E>`(y*V-mb {mnKt$ zZ+wOOo;}cf`|S}fuP<2MhBuSS3Ue!5n+dJ=P(|@#XS9xUy?rrwW=M;F2;<7dJABpV za{&_fIc;4AXU)6GU$*VYL>o}06P!yoexl~pfBfg&3BG>PQJtB*;IODYZOtBt^b+!! zAunp(a9G<18Mj~X1Me#l2?7a$UBYK_t$8H-&GI#KaWcKG7}x;;f0~7;boQwkuYBog z^I1xY+w6+FBg*I7vAVf7vutZ!w?YD)`EOenpUsVWPA5!~op`*RN
#c0C)ceH$OZzpd8`!b@UY i`4Reu5Q-A(Bcv>AW;xv6NI9@s&WQ*x`D6GJP!EhA45fgHr&dw?MhsWGqOt|A#l zkt-B``pA3qGh@mRfwf6lch9p&X1pWKMibk3Z#{U0nNLQEXWRhegM!e($hT3#ZJ|Hp z5PjvPeaI^@b%%#*f$+x1eV?!`T#-}hY!v9QVFI`VlHm0&r#$lPgKz>mDU0sh<;rMR zE0OW+(fum!3HHoL+E$FKPubD!F^}ZfrG9|<11y`AvN5Tpq9Ekg8%+(qn rd+ z|MA>?N652pjpS%Sio8|=24_%#YxI*!P>V!J(OaU7A0t1jT&f~D9$#3sbc40fuRo?o ze$f>Ay?Ijr%*P@b&@xco0#>NuqIYxy1zHx!v9R-OZQK?ZVz=gmVE|QZ-*@~e_pUXj z^6w{^-(NMI*;eb41c#r2u(m=Yjrcqha2_C!-p``oKFkj%OSrAct82rT!1J wrRq4mZ#7q zQCXDXjlQ|uZ^$HfuQQ|P%c #$XPY|T>FA!8Zlmrs2fIlo@%}cP2=#SyQF(5N zQFfsLn~{NjqvqkZT1 QGZi^x-0U~7AR7nw*Wqihmuv!5>g^N0{n zdTQ9>YNw}P-sWQLE%wt1wx{e5V%Y@NaABI8zbxNNAwj2s6PVCg;8G9WpDsN&w-1u{ zX_dYSZ%%o!`z=p|+*iZx;Lr7PORhK_J21+7llH)6jQL5pb8793yJ0KI++8+UeholJ zh5*BibOrWJ%nt`|h^CJ}=)u4Ww&eIlO#tnY@LsOt7-@{dJ>m~#Iiy7QnX|P32$Pnn zF7dOoLAGz9ge)P)njv|#_ByDQ+;fJZELHnuO(U?R#P0yRvTOlpQ>Q|A6viCD7&pR@ zck`=CVMQ%9A~rn>Hd1!>ibFWb2@I_j(-_Hkws!muO-tbHDihM`c&N4V(~xR4P<0oo z(11ckIzcfic6P;gf6`gKo$j>^%Xn+82M}Wxz@>ll8;kHyCttuL5!ve;>#!;;7dvC? zV^Gecc`k96pAB4mc5*?0H|+;bDMQx0KU==^inLg=i#nH&LsT_(&43C--efc_wfQvM z`hvl*WNk5n)Vsq6LYr8@Mw4DBO7lEezSCj&IZcr7_s4JYRh?Myy)HviXMmqQ(7ao; z_g{#~4rvIPOkjTRw?qG3b9m_m4H@ 9wAF#6jz1|IIC;Ko`CTxDQ&+`QnK%&-^Yo?!f zqBq9w4)%L2?3S%lsnAUXjKGk1-sRUe>i+yzal#EfS}^9&$48YN9dJ_u_Q-G;i;~XE zh~-r)vHd!qpq6RQH!a!7m*o^j0^wRow!6F8a#a`E@zYoZv#*V>GlyV+=V7q9s2$UO zU=QB