# Kubespray Offline Deploy

**Repository Path**: cuigray/kubespray-offilne-deploy

## Basic Information

- **Project Name**: Kubespray Offline Deploy
- **Description**: Offline deployment of a k8s cluster using Kubespray
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None

## README

# 1. Refs

https://github.com/kubernetes-sigs/kubespray/blob/release-2.23/docs/offline-environment.md

# 2. Obtaining the Lists of Required Materials

The official helper script generates the lists of required binaries and images:

```shell
cd ~/kubespray-v2.23.1/contrib/offline/
sh generate_list.sh
```

When it finishes, a `temp` directory is created containing four files:

```shell
$ tree temp/
temp/
├── files.list
├── files.list.template
├── images.list
└── images.list.template

0 directories, 4 files
```

`files.list` lists the required binaries, and `images.list` lists every image involved.

# 3. Preparing the Packages

Since an online installation was already attempted once, all the binaries have been kept; alternatively, you can download the files listed in `files.list` with a shell script.

## 3.1 Environment Configuration

```shell
# SELinux
sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0 && sestatus

# Firewalld
systemctl disable --now firewalld && systemctl status firewalld

sync && reboot
```

# 4. Preparing the Images

## 4.0 Installing the Containerd Service

### a. Extract the binaries

```shell
tar xf /data/KubeSprayInstallK8S/kubespray-binaries/containerd-1.7.5-linux-amd64.tar.gz -C /usr/local/
```

### b. Create the service file

```shell
cat <<-EOF > /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
LimitMEMLOCK=infinity

# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

# Set the cgroup slice of the service so that kube reserved takes effect

[Install]
WantedBy=multi-user.target
EOF
```

### c. Start and verify the service

```shell
systemctl enable --now containerd.service
systemctl status containerd.service
containerd --version
```
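Beyond `containerd --version`, a quick runtime-level check with `ctr` (the debugging client shipped in the same containerd release tarball) confirms the daemon is actually answering on its socket; a minimal sketch:

```shell
# Both client and server versions should print if the containerd socket is reachable
ctr version

# Core plugins (snapshotter, runtime, cri) should report "ok" in the STATUS column
ctr plugins ls
```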
### d. Install the nerdctl CLI tool

```shell
tar xf /data/KubeSprayInstallK8S/kubespray-binaries/nerdctl-1.5.0-linux-amd64.tar.gz -C /usr/local/bin/
nerdctl --version
```

## ~~4.1 Download and Import the Images -- (skip this step if the local registry service is already running)~~

```shell
# Download the images
# (awk builds one pull command per image and only prints it; append "| sh" to execute)
cat images.list | awk '{print "nerdctl -n k8s.io --debug=true image pull --insecure-registry --platform linux/amd64 "$0}'

# Export the images
nerdctl -n k8s.io image save --platform=linux/amd64 -o kubespray-offline-k8s-images-v1.27.7.tar $(cat images.list | xargs)
```

Import the required images:

```shell
# Import the images
nerdctl -n k8s.io image load --platform=linux/amd64 -i /data/KubeSprayInstallK8S/images/kubespray-offline-k8s-images-v1.27.7.tar
nerdctl image load --platform=linux/amd64 -i /data/KubeSprayInstallK8S/images/registry-v2.tar
```

## 4.2 Start the Registry Service

```shell
# Start the registry service
## The CNI plugins and runc must be installed first
mkdir -pv /opt/cni/bin
tar xf /data/KubeSprayInstallK8S/kubespray-binaries/cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/
cp /data/KubeSprayInstallK8S/kubespray-binaries/runc.amd64 /usr/local/bin/runc
chmod +x /usr/local/bin/runc

nerdctl container run -td -p 5000:5000 --restart=always --name=registry registry:2
```

## 4.3 ~~Configure the insecure_registries Parameter~~ -- (can be skipped for now; the configuration step comes later)

Once the registry is up, edit containerd.yml in the corresponding kubespray inventory directory so the local address is trusted. Here 192.168.67.131:5000 is the local registry service we just started:

```yaml
# file name: inventory/<inventory name>/group_vars/all/containerd.yml
containerd_insecure_registries:
  "192.168.67.131:5000": "http://192.168.67.131:5000"
```

## 4.4 Tag and Push the Images

```shell
ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v '(lo$|nerdctl)' | awk '{print $2}' | cut -d'/' -f1)
echo ${ip_addr}

local_registry="${ip_addr}:5000"

cd /data/KubeSprayInstallK8S/soft/kubespray-2.23.1_customized/contrib/offline/
for image in $(cat temp/images.list); do
    # Strip the source registry prefix and retag against the local registry
    img_new_name=${local_registry}/${image#*/}
    nerdctl -n k8s.io image tag ${image} ${img_new_name}
    nerdctl -n k8s.io image push --insecure-registry ${img_new_name}
done
```

~~Generate the image packaging and push script in the kubespray main directory, copy it to the machine that holds all the images, and run it to push them to the local registry.~~

~~The steps below are only needed if you want to build your own image registry:~~

```shell
cd /data/KubeSprayInstallK8S/images/
nerdctl -n k8s.io image load --platform=linux/amd64 -i ./kubespray-offline-k8s-images-v1.27.7.tar

# Make absolutely sure this resolves to the correct IP address of this host
ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v '(lo$|nerdctl)' | awk '{print $2}' | cut -d'/' -f1)
echo ${ip_addr}

cd /data/KubeSprayInstallK8S/soft/kubespray-2.23.1_customized
cat image-push.sh | sed -e 's!192.168.67.131!'${ip_addr}'!g' -e 's!image push!-n k8s.io image push!g' > /tmp/imgs.sh
sh -x /tmp/imgs.sh
rm -f /tmp/imgs.sh
```
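Whichever push path you use, you can confirm the local registry actually holds the images through the Docker Registry v2 HTTP API; a quick sketch (the repository name queried below is only an example — names follow the tagging scheme above, i.e. with the source registry prefix stripped):

```shell
ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v '(lo$|nerdctl)' | awk '{print $2}' | cut -d'/' -f1)

# List every repository the registry knows about
curl -s http://${ip_addr}:5000/v2/_catalog

# List the tags of one repository (example name; pick one from the catalog output)
curl -s http://${ip_addr}:5000/v2/pause/tags/list
```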
# 5. Local yum Repository and Python Environment

## 5.1 Local yum Repository

First use the local file-based yum repository to temporarily install the nginx service; once installed, switch to the HTTP repository served by nginx:

```shell
cd /etc/yum.repos.d/
tar zcf repos.tar *.repo && rm -f *.repo

cat <<-EOF > /etc/yum.repos.d/local.repo
[local-repo]
name=Local Yum Repository
baseurl=file:///data/KubeSprayInstallK8S/repos/yum-local
enabled=1
gpgcheck=0
EOF

yum clean all && yum makecache
yum install -y nginx
systemctl enable --now nginx
```

Then point yum at the HTTP-served repository:

```shell
ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v '(lo$|nerdctl)' | awk '{print $2}' | cut -d'/' -f1)
echo ${ip_addr}

host="http://${ip_addr}:8000"

cat <<-EOF > /etc/yum.repos.d/local.repo
[local-repo]
name=Local Yum Repository
baseurl=${host}/repos/yum
enabled=1
gpgcheck=0
EOF

cp /data/KubeSprayInstallK8S/conf/nginx/local_service.conf /etc/nginx/conf.d/
nginx -t
nginx -s reload

yum clean all && yum makecache fast
```

The `openssl11` packages, if needed, can be installed separately from the `~/KubeSprayInstallK8S/yum-repos` directory.

## 5.2 Dependencies

If the yum repo turns out to be broken, try rebuilding the repository metadata:

```shell
cd /data/KubeSprayInstallK8S/repos/yum-local
rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh libxml2-python-2.9.1-6.el7.5.x86_64.rpm
rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh createrepo-0.9.9-28.el7.noarch.rpm
createrepo .
yum clean all && yum makecache fast
```

Install the related dependency packages. (To build the offline repository, copy the packages from the installation ISO and generate the metadata with createrepo; make sure the target systems were all installed from that same ISO.)

```shell
yum install -y perl-devel createrepo yum-utils net-tools tree rsync
yum install -y gcc ncurses-devel gdbm-devel xz-devel sqlite-devel tk-devel uuid-devel readline-devel bzip2-devel libffi-devel
```

## 5.3 Build Python

```shell
cd /data/KubeSprayInstallK8S/soft/ && rm -fr Python-3.10.13
tar xf Python-3.10.13.tar.xz
cd Python-3.10.13
./configure --prefix=/opt/Python-3.10.13 --enable-optimizations && make altinstall
```

## 5.4 Create a Virtual Environment

```shell
mkdir -pv /data/venv && cd /data/venv
/opt/Python-3.10.13/bin/python3.10 -m venv kubespray
echo -e '\nsource /data/venv/kubespray/bin/activate\n' >> ~/.bashrc
source /data/venv/kubespray/bin/activate
```

## 5.5 Install ansible

```shell
source /data/venv/kubespray/bin/activate

# Offline pip install (assuming the offline pip packages have been uploaded to /data/KubeSprayInstallK8S/pip_pkgs/)
cd /data/KubeSprayInstallK8S/soft/kubespray-2.23.1_customized
pip install -r requirements.txt --no-index --find-links=/data/KubeSprayInstallK8S/pip_pkgs/

# If the installed ansible-core version is not 2.14.11, install it separately
pip install --no-index --find-links=/data/KubeSprayInstallK8S/pip_pkgs/ ansible-core==2.14.11
```

# 6. HTTP Service

We use the kubespray host itself to run the nginx service:

```shell
yum install -y nginx
systemctl enable --now nginx
```

Configuration file contents:

```shell
cat <<-EOF > /etc/nginx/conf.d/local_services.conf
server {
    listen       8000;
    server_name  _;
    root         /usr/share/nginx/html;

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }

    location /repos/yum {
        alias /data/KubeSprayInstallK8S/repos/yum-local;
        # alias /media/iso;
    }

    location /bin {
        alias /data/KubeSprayInstallK8S/kubespray-binaries/;
    }

    access_log /var/log/nginx/yum-local.access.log main;
    error_log  /var/log/nginx/yum-local.error.log;
}
EOF
```
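It's worth confirming that nginx really serves both locations before pointing the cluster at them; a quick check (the `kubeadm` file name is an example — use any binary you actually mirrored):

```shell
nginx -t && nginx -s reload

ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v '(lo$|nerdctl)' | awk '{print $2}' | cut -d'/' -f1)

# The createrepo metadata should answer HTTP 200
curl -sI http://${ip_addr}:8000/repos/yum/repodata/repomd.xml | head -n1

# A mirrored binary under the /bin alias (example file name)
curl -sI http://${ip_addr}:8000/bin/kubeadm | head -n1
```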
# 7. Create the Inventory and Install

## 7.1 Create the Host Inventory

```shell
cd /data/KubeSprayInstallK8S/soft/kubespray-2.23.1_customized

inventory_name="offline-cluster"
[ -d inventory/${inventory_name} ] && rm -fr ./inventory/${inventory_name}
cp -frp inventory/mycluster-template inventory/${inventory_name}

declare -a IPS=(192.168.10.151 192.168.10.152 192.168.10.153 192.168.10.154 192.168.10.155)
# Show the contents of the array
echo ${IPS[@]}

# Generate the hosts.yaml file (note: the number of control-plane nodes must not be even)
KUBE_CONTROL_HOSTS=3 HOST_PREFIX=k8s-node CONFIG_FILE=inventory/${inventory_name}/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```

## 7.2 Rename the Hosts

```shell
# Batch-rename our hosts
# (pay extra attention to the number of control-plane nodes and adjust accordingly)
sed -i.origin \
    -e 's#k8s-node1#k8s-master01#g' \
    -e 's#k8s-node2#k8s-master02#g' \
    -e 's#k8s-node3#k8s-master03#g' \
    -e 's#k8s-node4#k8s-worker01#g' \
    -e 's#k8s-node5#k8s-worker02#g' \
    inventory/${inventory_name}/hosts.yaml
```

If you do not want Pods to run on the master nodes, you also need to edit hosts.yaml.

> [!NOTE]
> The number of **control-plane** & **etcd** nodes must be odd.

```shell
# Excerpt:
$ cat inventory/offline-cluster/hosts.yaml
all:
  hosts:
    k8s-master01:
      ansible_host: 192.168.10.161
      ip: 192.168.10.161
      access_ip: 192.168.10.161
    k8s-master02:
      ansible_host: 192.168.10.162
      ip: 192.168.10.162
      access_ip: 192.168.10.162
    k8s-master03:
      ansible_host: 192.168.10.163
      ip: 192.168.10.163
      access_ip: 192.168.10.163
    k8s-worker01:
      ansible_host: 192.168.10.164
      ip: 192.168.10.164
      access_ip: 192.168.10.164
    k8s-worker02:
      ansible_host: 192.168.10.165
      ip: 192.168.10.165
      access_ip: 192.168.10.165
  children:
    kube_control_plane:
      hosts:
        k8s-master01:
        k8s-master02:
        k8s-master03:
    kube_node:
      hosts:
        k8s-worker01:
        k8s-worker02:
    etcd:
      hosts:
        k8s-master01:
        k8s-master02:
        k8s-master03:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```

## 7.3 Copy the Template Files

```shell
inventory_name="offline-cluster"
yml_file="inventory/${inventory_name}/group_vars/all/offline.yml"
yml_ctd_file="inventory/${inventory_name}/group_vars/all/containerd.yml"

ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v '(lo$|nerdctl)' | awk '{print $2}' | cut -d'/' -f1)
echo ${ip_addr}

# Offline download URLs
\cp ${yml_file} ${yml_file}.origin
sed 's!<++>!'${ip_addr}'!g' inventory/mycluster-template/group_vars/all/offline.yml > ${yml_file}

## Registry address
local_registry_addr="${ip_addr}:5000"
sed -i 's!# registry_host: "myprivateregisry.com"!registry_host: "'${local_registry_addr}'"!g' ${yml_file}

## yum repo
yum_repo_addr="http://${ip_addr}:8000/repos/yum"
sed -i 's!http://myinternalyumrepo!'${yum_repo_addr}'!g' ${yml_file}

# containerd settings
\cp ${yml_ctd_file} ${yml_ctd_file}.origin
sed 's!<++>!'${ip_addr}'!g' inventory/mycluster-template/group_vars/all/containerd.yml > ${yml_ctd_file}
```

## 7.4 Configure the Cluster Host Systems

```shell
IPS="192.168.10.151,192.168.10.152,192.168.10.153,192.168.10.154,192.168.10.155"

ansible all -i ${IPS} -m shell -a 'cd /etc/yum.repos.d/ && tar cf repos.tar *.repo && rm -f *.repo'
ansible all -i ${IPS} -m copy -a 'src=/etc/yum.repos.d/local.repo dest=/etc/yum.repos.d/local.repo force=yes'
ansible all -i ${IPS} -m shell -a 'yum clean all && yum makecache'

ansible all -i ${IPS} -m shell -a 'systemctl disable --now firewalld'
ansible all -i ${IPS} -m shell -a 'systemctl is-enabled firewalld'

ansible all -i ${IPS} -m shell -a 'sed -i "s!SELINUX=permissive!SELINUX=disabled!" /etc/selinux/config'
ansible all -i ${IPS} -m shell -a 'sed -i "s!SELINUX=enforcing!SELINUX=disabled!" /etc/selinux/config'

ansible all -i ${IPS} -m shell -a 'systemctl disable --now firewalld && sync && reboot'
```
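Once the hosts come back from the reboot, a short readiness pass avoids wasting a long playbook run on an unreachable or misconfigured node; a minimal sketch using the same inline inventory:

```shell
IPS="192.168.10.151,192.168.10.152,192.168.10.153,192.168.10.154,192.168.10.155"

# SSH reachability and a working Python on every host
ansible all -i ${IPS} -m ping

# SELinux should report Disabled and firewalld inactive
# (the "|| true" keeps ansible from treating the inactive state's non-zero rc as a failure)
ansible all -i ${IPS} -m shell -a 'getenforce; systemctl is-active firewalld || true'
```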
## 7.5 Deploy the Cluster

```shell
inventory_name="offline-cluster"
ansible-playbook -i inventory/${inventory_name}/hosts.yaml --become --become-user=root cluster.yml -vvvv
```

At this point, if nothing goes wrong, the cluster is fully deployed. If the run fails, retry it a few times, add `-vvvvv` to see exactly which step went wrong, and refer back to section 7.4 to rerun the relevant host preparation steps.

# 8. Configuration File Reference

## inventory/mycluster/group_vars/all/offline.yml

The IP here must be replaced with the address of the host providing the HTTP service:

```yaml
---
files_repo: "http://192.168.67.129:8000/bin"

kubeadm_download_url: "{{ files_repo }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubelet"
cni_download_url: "{{ files_repo }}/cni-plugins-linux-amd64-v1.3.0.tgz"
crictl_download_url: "{{ files_repo }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
etcd_download_url: "{{ files_repo }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
calicoctl_download_url: "{{ files_repo }}/calicoctl-linux-{{ image_arch }}"
calico_crds_download_url: "{{ files_repo }}/{{ calico_version }}.tar.gz"
ciliumcli_download_url: "{{ files_repo }}/cilium-linux-{{ image_arch }}.tar.gz"
helm_download_url: "{{ files_repo }}/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
crun_download_url: "{{ files_repo }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
kata_containers_download_url: "{{ files_repo }}/kata-static-{{ kata_containers_version }}-{{ ansible_architecture }}.tar.xz"
cri_dockerd_download_url: "{{ files_repo }}/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
runc_download_url: "{{ files_repo }}/runc.{{ image_arch }}"
crio_download_url: "{{ files_repo }}/cri-o.{{ image_arch }}.{{ crio_version }}.tar.gz"
skopeo_download_url: "{{ files_repo }}/skopeo-linux-{{ image_arch }}"
containerd_download_url: "{{ files_repo }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
nerdctl_download_url: "{{ files_repo }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
gvisor_runsc_download_url: "{{ files_repo }}/{{ ansible_architecture }}/runsc"
gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
krew_download_url: "{{ files_repo }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"
```

Replace the corresponding IP address with:

```shell
ip_addr=$(ip a | egrep 'inet [0-9]{1,3}\.' | egrep -v 'lo$' | awk '{print $2}' | cut -d'/' -f1)
sed -ri.bak 's!192.168.67.129!'${ip_addr}'!g' inventory/mycluster/group_vars/all/offline.yml

# You can verify the change took effect with
$ diff inventory/mycluster/group_vars/all/offline.yml{,.bak}
5c5
< files_repo: "http://192.168.10.156:8000/bin"
---
> files_repo: "http://192.168.67.129:8000/bin"
```

## inventory/mycluster/group_vars/all/containerd.yml

```yaml
---
containerd_insecure_registries:
  "192.168.67.131:5000": "http://192.168.67.131:5000"
```

## inventory/mycluster/hosts.yaml

```yaml
all:
  hosts:
    k8s-master01:
      ansible_host: 192.168.67.131
      ip: 192.168.67.131
      access_ip: 192.168.67.131
    k8s-worker01:
      ansible_host: 192.168.67.132
      ip: 192.168.67.132
      access_ip: 192.168.67.132
    k8s-worker02:
      ansible_host: 192.168.67.133
      ip: 192.168.67.133
      access_ip: 192.168.67.133
  children:
    kube_control_plane:
      hosts:
        k8s-master01:
    kube_node:
      hosts:
        k8s-worker01:
        k8s-worker02:
    etcd:
      hosts:
        k8s-master01:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```
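As a closing check, once `cluster.yml` finishes you can verify the cluster from any control-plane node (Kubespray deploys with kubeadm, which places the admin kubeconfig at /etc/kubernetes/admin.conf there):

```shell
# Run on a control-plane node, e.g. k8s-master01
export KUBECONFIG=/etc/kubernetes/admin.conf

# All nodes should be Ready and every kube-system pod Running
kubectl get nodes -o wide
kubectl get pods -A -o wide
```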