Kubernetes in Practice v1.0

Kubernetes, commonly abbreviated as K8s (the 8 replaces the eight letters "ubernete" in the middle of the name), is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

Changelog

v1.0: 1.1 Binary deployment of the ETCD Server distributed store and Kubernetes deployment

1. Installation and Deployment

1.1 Kubernetes + External ETCD Cluster + High-Availability Deployment

1.1.1 Environment

Because the hosts sit on several different network segments, deploying the ETCD cluster directly through kubeadm initialization fails: when certificates are generated for the master10 segment, the gateway certificate cannot be created, so the master10 node cannot join the ETCD cluster. The ETCD cluster is therefore deployed manually from binaries, and Kubernetes is connected to this local ETCD cluster together with a VIP for high availability.

Host       IP             VIP             Role
master01   192.168.0.20   192.168.0.250   etcd01, master, apiserver
master02   192.168.0.21   192.168.0.250   etcd02, master, apiserver
master10   192.168.1.20   -               etcd03, master, apiserver
worker00   192.168.0.30   -               node
worker01   192.168.0.31   -               node
worker02   192.168.0.32   -               node
worker10   192.168.1.30   -               node
worker11   192.168.1.31   -               node

1.1.2 Operating System Initialization

  • Set the hostname on each node
hostnamectl set-hostname master00
hostnamectl set-hostname master01
hostnamectl set-hostname master10
hostnamectl set-hostname worker00
hostnamectl set-hostname worker01
hostnamectl set-hostname worker02
hostnamectl set-hostname worker10
hostnamectl set-hostname worker11
  • Add host entries on each node
cat >> /etc/hosts << EOF
10.10.0.20 k8s-master00
10.10.0.21 k8s-master01
10.10.1.20 k8s-master10
10.10.0.30 k8s-worker00
10.10.0.31 k8s-worker01
10.10.0.32 k8s-worker02
10.10.1.30 k8s-worker10
10.10.1.31 k8s-worker11
EOF
  • On the primary master00 node, set up passwordless SSH login
ssh-keygen -t rsa
ssh-copy-id master00
ssh-copy-id master01
ssh-copy-id master10
ssh-copy-id worker00
ssh-copy-id worker01
ssh-copy-id worker02
ssh-copy-id worker10
ssh-copy-id worker11
  • Install Docker (script installation)
bash <(curl -sSL https://gitee.com/SuperManito/LinuxMirrors/raw/main/DockerInstallation.sh)
  • Create the environment setup script for kubeadm, kubectl, and kubelet
vi master.sh
#!/bin/bash

echo "[TASK 1] Disable and turn off SWAP"
sed -i '/swap/d' /etc/fstab
swapoff -a

echo "[TASK 2] Stop and Disable firewall"
systemctl disable --now ufw >/dev/null 2>&1

echo "[TASK 3] Enable and Load Kernel modules"
cat >>/etc/modules-load.d/containerd.conf<<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

echo "[TASK 4] Add Kernel settings"
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system >/dev/null 2>&1

echo "[TASK 5] Install containerd runtime"
apt update -qq >/dev/null 2>&1
apt install -qq -y containerd ipset ipvsadm apt-transport-https >/dev/null 2>&1
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
systemctl enable containerd >/dev/null 2>&1

echo "[TASK 6] Add apt repo for kubernetes"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - >/dev/null 2>&1
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/dev/null 2>&1

echo "[TASK 7] Install Kubernetes components (kubeadm, kubelet and kubectl)"
apt install -qq -y kubeadm=1.24.0-00 kubelet=1.24.0-00 kubectl=1.24.0-00 >/dev/null 2>&1
  • Run the initialization script
sh master.sh
  • Copy the setup script to the other nodes
scp -r /Data/Kubernetes/master.sh root@master00:/root/
scp -r /Data/Kubernetes/master.sh root@master01:/root/
scp -r /Data/Kubernetes/master.sh root@master10:/root/
scp -r /Data/Kubernetes/master.sh root@worker00:/root/
scp -r /Data/Kubernetes/master.sh root@worker01:/root/
scp -r /Data/Kubernetes/master.sh root@worker02:/root/
scp -r /Data/Kubernetes/master.sh root@worker10:/root/
scp -r /Data/Kubernetes/master.sh root@worker11:/root/
  • Log in to each node and run the script (a one-loop alternative follows this list)
ssh root@master00
ssh root@master01
ssh root@master10
ssh root@worker00
ssh root@worker01
ssh root@worker02
ssh root@worker10
ssh root@worker11
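  • The per-node copy and run steps above can also be driven from master00 in a single loop; a minimal sketch, assuming passwordless SSH is already configured and the script sits at /Data/Kubernetes/master.sh:
for host in master01 master10 worker00 worker01 worker02 worker10 worker11; do
  scp /Data/Kubernetes/master.sh root@${host}:/root/
  ssh root@${host} 'sh /root/master.sh'
done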
  • Log in to each node, install ntpdate, and synchronize the time
apt install ntpdate -y
ntpdate time.windows.com
  • Log in to each master node and create the directories
mkdir -p /opt/etcd/{bin,conf,data,json_file,ssl}
mkdir -p /etc/kubernetes/pki

1.1.3 Download the ETCD and CFSSL Components

  • Download the cfssl binaries used to issue certificates; official site: https://pkg.cfssl.org/
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
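  • Optionally verify that the cfssl tool is on the PATH and executable:
cfssl version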
  • Download the ETCD binary package
wget https://github.com/etcd-io/etcd/releases/download/v3.4.19/etcd-v3.4.19-linux-amd64.tar.gz
tar xf etcd-v3.4.19-linux-amd64.tar.gz
ls etcd-v3.4.19-linux-amd64/

1.1.4 Install the ETCD Server and Issue SSL Certificates

  • Move the binaries into the bin directory
mv etcd-v3.4.19-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
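  • Optionally confirm that the binaries run and report the expected version:
/opt/etcd/bin/etcd --version
/opt/etcd/bin/etcdctl version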
  • Create the CA signing configuration and CA certificate request
cat > /opt/etcd/ssl/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > /opt/etcd/ssl/ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShanXi",
            "ST": "HouMa"
        }
    ]
}
EOF
  • Enter the /opt/etcd/ssl directory and generate the CA certificate
cd /opt/etcd/ssl
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • Create the ETCD server certificate request
cat > /opt/etcd/ssl/server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.0.2",
    "192.168.0.20",
    "192.168.0.21",
    "192.168.0.30",
    "192.168.0.31",
    "192.168.0.32",
    "192.168.1.2",
    "192.168.1.20",
    "192.168.1.30",
    "192.168.1.31"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "ShanXi",
            "ST": "HouMa"
        }
    ]
}
EOF
  • Generate the ETCD server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
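  • Optionally inspect the issued certificate and confirm the node IPs appear as Subject Alternative Names (assumes openssl is installed):
openssl x509 -in server.pem -noout -text | grep -A 1 'Subject Alternative Name'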
  • The following files are generated
ls /opt/etcd/ssl
ca.csr # CA certificate signing request
ca-key.pem # CA private key
ca.pem # CA certificate
server.csr # etcd server certificate signing request
server-key.pem # etcd server private key
server.pem # etcd server certificate
  • Create the ETCD configuration file
cat > /opt/etcd/conf/etcd.conf <<END
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.20:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.20:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.20:2380,etcd2=https://192.168.0.21:2380,etcd3=https://192.168.1.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

END
  • Create the systemd service unit for ETCD
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/conf/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Copy all ETCD files to the other two master nodes
scp /usr/lib/systemd/system/etcd.service root@master01:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@master10:/usr/lib/systemd/system/
scp -r /opt/etcd root@master01:/opt/
scp -r /opt/etcd root@master10:/opt/
  • Log in to the other master nodes and modify the configuration file
vi /opt/etcd/conf/etcd.conf
cat > /opt/etcd/conf/etcd.conf <<END
#[Member]
ETCD_NAME="etcd1" # change to this node's etcd name (etcd2 / etcd3)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.20:2380" # change to this master node's IP address
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.20:2379" # change to this master node's IP address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.20:2380" # change to this master node's IP address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.20:2379" # change to this master node's IP address
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.20:2380,etcd2=https://192.168.0.21:2380,etcd3=https://192.168.1.20:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

END
  • Start the ETCD cluster on each master node
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
  • Check the ETCD cluster health; output like the following means the cluster is healthy
cd /opt/etcd/ssl
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.20:2379,https://192.168.0.21:2379,https://192.168.1.20:2379" endpoint health --write-out=table
+---------------------------+----------+----------------+----------+
|        ENDPOINT           | HEALTH   |    TOOK        | ERROR    |
+---------------------------+----------+----------------+----------+
| https://192.168.0.20:2379 |   true   |   22.483116ms  |          |
| https://192.168.0.21:2379 |   true   |   31.319528ms  |          |
| https://192.168.1.20:2379 |   true   |   139.27626ms  |          |
+---------------------------+----------+----------------+----------+
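  • Besides endpoint health, listing the members confirms that all three nodes joined the cluster; a minimal sketch reusing the same certificates:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.0.20:2379" \
  member list --write-out=table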

1.1.5 kube-vip Initial Configuration

First, pull the kube-vip image and generate a static-pod manifest in /etc/kubernetes/manifests so that Kubernetes automatically runs the kube-vip pod on every control-plane node. (Configure and start it on the first master only; once the cluster has been initialized, start kube-vip on the other masters.)

  • Set the VIP address (the VIP is a virtual address; choose an unused IP on the same network segment)
export VIP=192.168.0.250
export INTERFACE=ens160
ctr image pull docker.io/plndr/kube-vip:0.3.1
ctr run --rm --net-host docker.io/plndr/kube-vip:0.3.1 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaderElection | tee  /etc/kubernetes/manifests/kube-vip.yaml
  • Review the generated manifest (no changes needed)
cat /etc/kubernetes/manifests/kube-vip.yaml 
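  • A quick sanity check is to confirm the VIP and the interface made it into the generated manifest (the exact field names vary between kube-vip versions, but both values should appear):
grep -E "192.168.0.250|ens160" /etc/kubernetes/manifests/kube-vip.yaml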

1.1.6 Initialize the Cluster with kubeadm

  • Edit the config.yaml file (an optional dry-run check follows the file)
vi config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"  # use IPVS mode instead of iptables
---
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.250:6443  # API server (VIP) address
controllerManager: {}
dns:
  type: CoreDNS  # default DNS: CoreDNS
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # Aliyun mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.24.0   # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 172.201.0.0/16  # Service network CIDR
  podSubnet: 172.200.0.0/16     # Pod network CIDR
apiServer:
  certSANs:
  - 192.168.0.20
  - 192.168.0.21
  - 192.168.1.20
  extraArgs:
    etcd-cafile: /opt/etcd/ssl/ca.pem
    etcd-certfile: /opt/etcd/ssl/server.pem
    etcd-keyfile: /opt/etcd/ssl/server-key.pem
etcd:  # use the external etcd cluster for high availability
  external:
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
    endpoints:
    - https://192.168.0.20:2379
    - https://192.168.0.21:2379
    - https://192.168.1.20:2379
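  • Before running the real initialization, the configuration can optionally be validated with kubeadm's dry-run mode, which prints what would be done without changing the node:
kubeadm init --config config.yaml --dry-run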
  • Deploy using the configuration file
kubeadm init --config config.yaml
  • Output like the following means the initialization completed successfully
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
 
  kubeadm join 192.168.0.250:6443 --token 3agpl3.32d0qhws3w4exvyx \
        --discovery-token-ca-cert-hash sha256:1d5708425143e92aa096cb1742361e71326f90863bb25709aba4ae9b487765c3 \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.250:6443 --token 3agpl3.32d0qhws3w4exvyx \
        --discovery-token-ca-cert-hash sha256:1d5708425143e92aa096cb1742361e71326f90863bb25709aba4ae9b487765c3
  • Set up kubectl access for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Copy the Kubernetes certificate files to the other master nodes
scp -r /etc/kubernetes/pki root@master01:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@master10:/etc/kubernetes/
  • Join the other master nodes to the cluster
kubeadm join 10.10.0.250:6443 --token 3agpl3.32d0qhws3w4exvyx \
        --discovery-token-ca-cert-hash sha256:1d5708425143e92aa096cb1742361e71326f90863bb25709aba4ae9b487765c3 \
        --control-plane
  • Join the worker nodes to the cluster
kubeadm join 10.10.0.250:6443 --token 3agpl3.32d0qhws3w4exvyx \
        --discovery-token-ca-cert-hash sha256:1d5708425143e92aa096cb1742361e71326f90863bb25709aba4ae9b487765c3
  • Print the cluster join command
kubeadm token create --print-join-command
  • Install the Calico network plugin
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml
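  • Optionally watch the Calico and CoreDNS pods start; the nodes should turn Ready once the CNI is running. Depending on the manifest version, the pods land in kube-system:
kubectl get pods -n kube-system -w
kubectl get nodes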

1.1.7 Inspect the Cluster

  • Check pod status across the cluster
kubectl get pod -A
  • Check node status
kubectl get nodes
  • Check the status of the control-plane components
kubectl get cs

1.1.8 kube-vip Cluster Configuration

  • Copy the kube-vip.yaml manifest to master01 (kube-vip only takes effect within its own network segment)
scp /etc/kubernetes/manifests/kube-vip.yaml root@master01:/etc/kubernetes/manifests/
  • Check the VIP; it should appear on exactly one node
ip addr | grep -A5 192.168.0.250
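  • A simple check that the VIP really fronts the API server is to query it through the VIP; under default RBAC bindings the /version endpoint should be readable without credentials. This is only a quick sketch, not a full failover test:
curl -k https://192.168.0.250:6443/version
# For a failover test, temporarily move kube-vip.yaml out of /etc/kubernetes/manifests
# on the node currently holding the VIP; the address should move to the other master.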

1.1.9 Troubleshooting

  • Error reported when another master node joins the cluster
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: E0725 08:28:43.725092   27421 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-07-25T08:28:43+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
  • The error is caused by the containerd configuration; delete the config file and restart the service to resolve it (an alternative fix follows the commands)
rm -rf /etc/containerd/config.toml
systemctl restart containerd
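  • Deleting config.toml works because containerd then falls back to its built-in defaults, in which the CRI plugin is enabled; the config shipped with the package typically lists cri under disabled_plugins. An equivalent fix that keeps a config file on disk is to regenerate the defaults:
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
ctr plugins ls | grep cri   # the cri plugin should now report "ok"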

1.2 Deploy a Kubernetes Cluster with sealos

  • Download sealos
wget https://github.com/labring/sealos/releases/download/v4.0.0/sealos_4.0.0_linux_amd64.tar.gz
  • Extract the archive, make the binary executable, and move it into place
tar zxvf sealos_4.0.0_linux_amd64.tar.gz && chmod +x sealos && mv sealos /usr/bin
  • Run the master initialization (a sketch for adding nodes later follows the command explanation)
sealos run labring/kubernetes:v1.24.0 labring/calico:v3.22.1 \
     --masters 10.10.0.20 \
     --nodes 10.10.0.21,10.10.0.22,10.10.0.23 -p password
  • Command explanation

labring/kubernetes:v1.24.0 # Kubernetes version to run
--masters # master node IPs (comma-separated)
--nodes # worker node IPs (comma-separated)
-p # root password
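  • sealos can also scale an existing cluster later; a hedged sketch based on the sealos v4 CLI (the node IPs below are examples, and the exact flags should be confirmed with sealos --help for your version):
sealos add --nodes 10.10.0.24,10.10.0.25    # add worker nodes (example IPs)
sealos add --masters 10.10.0.26             # add a control-plane node (example IP)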

1.3 Deploy a Cluster with kubeadm

1.3.1 Operating System Initialization

  • Create the script file
vi master.sh
  • Paste the script contents
#!/bin/bash

echo "[TASK 1] Disable and turn off SWAP"
sed -i '/swap/d' /etc/fstab
swapoff -a

echo "[TASK 2] Stop and Disable firewall"
systemctl disable --now ufw >/dev/null 2>&1

echo "[TASK 3] Enable and Load Kernel modules"
cat >>/etc/modules-load.d/containerd.conf<<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

echo "[TASK 4] Add Kernel settings"
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system >/dev/null 2>&1

echo "[TASK 5] Install containerd runtime"
apt update -qq >/dev/null 2>&1
apt install -qq -y containerd apt-transport-https >/dev/null 2>&1
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
systemctl enable containerd >/dev/null 2>&1

echo "[TASK 6] Add apt repo for kubernetes"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - >/dev/null 2>&1
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/dev/null 2>&1

echo "[TASK 7] Install Kubernetes components (kubeadm, kubelet and kubectl)"
apt install -qq -y kubeadm=1.24.0-00 kubelet=1.24.0-00 kubectl=1.24.0-00 >/dev/null 2>&1
  • Run the script
sh master.sh
  • Install additional packages
apt install conntrack socat ipvsadm ipset chrony -y
  • Verify the installation
kubeadm version
kubectl version

1.3.2 Initialize the Master Node

  • Optionally pre-pull the images required by the cluster
kubeadm config images pull
  • Initialize the master node with kubeadm
kubeadm init \
--apiserver-advertise-address=192.168.0.20 \
--control-plane-endpoint=master01 \
--kubernetes-version v1.24.0 \
--service-cidr=10.96.0.0/16 --pod-network-cidr=10.97.0.0/16
  • Set up kubectl access for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.3.3 Deploy a Pod Network: the Flannel Plugin

Pick a network add-on from https://kubernetes.io/docs/concepts/cluster-administration/addons/ and deploy it by following the link provided there.
Here we choose an overlay solution, flannel.

  • Download the manifest
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
  • Review the manifest
  • Make sure Network matches the --pod-network-cidr we configured, 10.97.0.0/16 (a sed sketch for this edit follows the snippet)
net-conf.json: |
  {
    "Network": "10.97.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
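  • The downloaded manifest ships with Flannel's default Network of 10.244.0.0/16, so it has to be changed to our pod CIDR; a one-line edit, assuming the default value is still present in the file:
sed -i 's|10.244.0.0/16|10.97.0.0/16|g' kube-flannel.yml
grep -n '"Network"' kube-flannel.yml   # confirm the change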
  • Make sure iface=enp0s8, where iface is the name of the node's network interface
- name: kube-flannel
 #image: flannelcni/flannel:v0.18.0 for ppc64le and mips64le (dockerhub limitations may apply)
  image: rancher/mirrored-flannelcni-flannel:v0.18.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=enp0s8
  • The interface name can be checked with
ip addr
  • Sample output; enp0s8 is the interface name
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:59:c5:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.10/24 brd 192.168.56.255 scope global enp0s8
      valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe59:c526/64 scope link
      valid_lft forever preferred_lft forever
  • Deploy the Flannel network plugin
kubectl apply -f kube-flannel.yml

1.3.4 Inspect the Cluster

  • Check pod status across the cluster
kubectl get pods -A
  • Check node status
kubectl get nodes

1.3.5 Reset the Master Node

  • Run on the master node (additional cleanup steps follow this subsection)
kubeadm reset
  • Remove and recreate the kubectl configuration
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
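  • kubeadm reset does not remove CNI configuration, iptables rules, or IPVS tables, so before re-initializing it is usually worth clearing those as well; a sketch of the commonly suggested cleanup:
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear   # only if kube-proxy ran in IPVS mode and ipvsadm is installed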

2. Persistent Storage

2.1 NFS Persistent Storage

2.1.1 Environment

A NAS system is already available locally, so there is no need to set up a separate NFS server.

2.1.2 Client

  • Install the client tools
sudo apt install nfs-common -y
  • List the shares exported by the NAS NFS server
sudo showmount -e 192.168.0.5
  • Mount the shared directory
sudo mount -t nfs 192.168.0.5:/volume1/Kubernetes_NFS /Data
  • Verify the mount
df -h | grep 192.168.0.5
  • To unmount the directory on the client, run
umount /Data
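  • If the mount should survive reboots, an fstab entry can be used instead of mounting by hand; a minimal sketch for the same share (the _netdev option delays mounting until the network is up):
echo '192.168.0.5:/volume1/Kubernetes_NFS /Data nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount -a   # verify the entry mounts cleanly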

2.1.3 Create a StorageClass

  • Create the manifest file
vi storageclass.yaml
  • Edit the manifest
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs  
parameters:
  archiveOnDelete: "true"  ## whether to archive (keep) the PV contents when a PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs  
            - name: NFS_SERVER
              value: 192.168.0.5 ## your NFS server address
            - name: NFS_PATH  
              value: /volume1/Kubernetes_NFS  ## directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.5
            path: /volume1/Kubernetes_NFS
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  • Create the StorageClass
kubectl apply -f storageclass.yaml
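  • To confirm that dynamic provisioning works, create a small PVC against the new StorageClass and check that it becomes Bound; a minimal sketch (the name test-nfs-pvc is just an example):
cat > test-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl apply -f test-pvc.yaml
kubectl get pvc test-nfs-pvc   # STATUS should become Bound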

3. Deploying Applications

3.1 Deploy KubeSphere

3.1.1 Install KubeSphere on an Existing Kubernetes Cluster

  • Download the manifests
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
  • Edit cluster-configuration.yaml and enable the services you need (a minimal installation can apply it as-is)
  • Deploy
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
  • Check the installation logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
  • Check the KubeSphere console port
kubectl get svc/ks-console -n kubesphere-system
  • Open <master node IP>:30880 in a browser to reach the console
192.168.0.250:30880
  • Log in with the default account and password (admin / P@88w0rd)