Networking: Kubernetes Network Model Principles
Pods in a cluster can communicate with any other Pod without network address translation (NAT).
Programs running on a cluster node can communicate with any Pod on the same node without NAT.
Each Pod has its own IP address (IP-per-Pod), and every other Pod can reach it at that same address.
The CNI standard is how Kubernetes solves container networking: network plugins are integrated in a pluggable way so that the cluster's internal network is connected, and a plugin only has to implement the core operations defined by the CNI specification (ADD: attach a container to a network; DEL: remove a container from a network; CHECK: verify that a container's network matches expectations; and so on). CNI plugins generally focus on container-to-container communication.
The CNI "interface" is not an HTTP or gRPC API; it is an exec-style invocation of executable programs. The default CNI plugin path on a Kubernetes node is /opt/cni/bin.
CNI describes network configuration with JSON files. When a container's network needs to be set up, the container runtime executes the CNI plugin, passes the configuration over the plugin's standard input (stdin), and reads the result from its standard output (stdout). By function, network plugins fall into five categories:
Main plugins — create the actual network device (bridge: a bridge connecting container and host; ipvlan: adds an ipvlan interface to the container; loopback: the lo device; macvlan: creates a MAC address for the container; ptp: creates a veth pair; vlan: allocates a vlan device; host-device: moves an existing device into the container).
IPAM plugins — allocate IP addresses (dhcp: the container asks a DHCP server to issue or reclaim the Pod's IP address; host-local: allocates from a pre-configured address range; static: assigns a static IPv4/IPv6 address, mainly for debugging).
META plugins — everything else (tuning: adjusts network device parameters via sysctl; portmap: configures port mappings with iptables; bandwidth: rate-limits with a Token Bucket Filter; sbr: sets up source-based routing for an interface; firewall: restricts inbound/outbound container traffic with iptables).
Windows plugins — CNI plugins specific to the Windows platform (win-bridge and win-overlay).
Third-party plugins — there are many open-source third-party plugins, each with its own strengths and target scenarios, so no single standard component has emerged; common choices are Flannel, Calico, Cilium, and OVN.

Provider | Network model | Route distribution | Network policy | Mesh | External datastore | Encryption | Ingress/Egress policy
Canal    | Encapsulated (VXLAN) | No | Yes | No | k8s API | Yes | Yes
Flannel  | Encapsulated (VXLAN) | No | No | No | k8s API | Yes | No
Calico   | Encapsulated (VXLAN, IPIP) or unencapsulated | Yes | Yes | Yes | etcd and k8s API | Yes | Yes
Weave    | Encapsulated | Yes | Yes | Yes | No | Yes | Yes
Cilium   | Encapsulated (VXLAN) | Yes | Yes | Yes | etcd and k8s API | Yes | Yes
Network model: whether traffic is encapsulated or unencapsulated.
Route distribution: an exterior gateway protocol used to exchange routing and reachability information on the Internet. BGP can help with pod networking across clusters. Route distribution is required for unencapsulated CNI plugins and is usually done with BGP; it is a useful feature if you want to build a cluster that spans network segments.
Network policy: Kubernetes can enforce rules that decide which services may talk to one another. This has been stable since Kubernetes 1.7 and can be used with certain network plugins.
Mesh: allows service-to-service communication across different Kubernetes clusters.
External datastore: CNI plugins with this requirement need an external datastore for their data.
Encryption: allows an encrypted, secure control plane and data plane.
Ingress/Egress policy: lets you control routing between Kubernetes and non-Kubernetes traffic.
Calico is a pure layer-3 virtual network. It does not reuse Docker's docker0 bridge but implements its own networking; Calico adds no extra encapsulation to packets and needs no NAT or port mapping.
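To make the stdin-based JSON configuration described above concrete, here is a minimal sketch that combines a Main plugin (bridge) with an IPAM plugin (host-local); the network name, bridge name, and subnet are illustrative only:

{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}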
Calico architecture:
Felix
bird (BGP client)
The BGP client advertises routes to the remaining Calico nodes over the BGP protocol, which is how inter-node connectivity is achieved.
confd
Watches etcd for changes to BGP configuration and global defaults. Based on updates to the data in etcd, confd dynamically generates BIRD configuration files and triggers BIRD to reload whenever the files change.
Calico network mode — VXLAN:
What is VXLAN?
VXLAN (Virtual Extensible LAN) is a network virtualization technology supported natively by Linux. VXLAN performs encapsulation and decapsulation entirely in the kernel and uses this "tunnel" mechanism to build an overlay network.
It is "layer-2" communication carried over layer 3: the layer-3 part is the VXLAN packet encapsulated in a UDP datagram, which only requires UDP reachability between the Kubernetes nodes at layer 3; the layer-2 part is that the source and destination MAC addresses of the VXLAN frame are the local VXLAN device's MAC and the peer VXLAN device's MAC.
Packet encapsulation: on the vxlan device, the source and destination MACs of the packet coming from the Pod are replaced with the MAC of the local vxlan interface and the MAC of the peer node's vxlan interface. The destination IP of the outer UDP packet is obtained from the routing table and the FDB entry for the peer vxlan MAC.
Advantages: as long as the Kubernetes nodes can reach each other at layer 3, it works across subnets and places no special requirements on the hosts' gateways or routes. The nodes communicate "at layer 2" over layer 3 through their vxlan devices, as described above.
Disadvantages: VXLAN encapsulation and decapsulation add some performance overhead.
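To see this data path on a node, you can inspect the VXLAN device and its forwarding database; a sketch, assuming Calico's default device name vxlan.calico:

# Show the VXLAN interface (VNI, UDP port, local address)
ip -d link show vxlan.calico
# MAC-to-node (FDB) entries used to pick the outer destination IP
bridge fdb show dev vxlan.calico
# Routes that steer pod traffic into the VXLAN device
ip route | grep vxlan.calico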
Calico configuration to enable VXLAN
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
- name: CALICO_IPV4POOL_VXLAN
  value: "Always"
- name: CALICO_IPV6POOL_VXLAN
  value: "Always"
calico_backend: "vxlan"
Calico network mode — IPIP:
Natively supported by the Linux kernel.
An IPIP tunnel works by wrapping the source host's IP packet inside a new IP packet whose destination address is the other end of the tunnel. At the far end, the receiver decapsulates the original IP packet and delivers it to the target host. IPIP tunnels can connect different networks; related tunnel types can even carry one address family over another (for example, IPv6 traffic over an IPv4 network).
Packet encapsulation: on the tunl0 device, the MAC layer of the packet sent by the Pod is stripped and only the IP layer is encapsulated. The destination IP of the outer packet is obtained from the routing table.
Advantages: as long as the Kubernetes nodes can reach each other at layer 3, it works across subnets and places no special requirements on the hosts' gateways or routes.
Disadvantages: IPIP encapsulation and decapsulation add some performance overhead.
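A quick way to confirm the IPIP data path on a node is to look at the tunl0 device and the routes that point at it; a sketch:

# The IPIP tunnel device used by Calico
ip -d link show tunl0
# Pod-network routes that reach other nodes via tunl0
ip route | grep tunl0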
Calico configuration to enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
- name: CALICO_IPV4POOL_VXLAN
  value: "Never"
- name: CALICO_IPV6POOL_VXLAN
  value: "Never"
Calico network mode — BGP:
The Border Gateway Protocol (BGP) is a core decentralized routing protocol of the Internet. It achieves reachability between autonomous systems (AS) by maintaining IP routing (prefix) tables and is a path-vector protocol. BGP does not use the metrics of traditional interior gateway protocols (IGPs); it makes routing decisions based on paths, network policies, or rule sets, so it is better described as a reachability protocol than a conventional routing protocol. In plain terms, BGP merges the multiple carrier uplinks into a data center (China Telecom, China Unicom, China Mobile, and so on) so that multiple lines can be served from a single IP. The advantage of a BGP data center is that the server needs only one IP address; the best path is chosen by backbone routers based on hop count and other metrics, and nothing extra has to run on the server itself.
Packet encapsulation: none required.
Advantages: no encapsulation or decapsulation; pod networks become reachable at layer 3 between hosts via the BGP protocol.
Disadvantages: when crossing subnets the configuration is more complex and the network requirements are higher; the hosts' gateway routers must also act as BGP speakers.
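If calicoctl is installed on a node, the BGP peering state can be checked with the command below (a sketch; the exact output columns vary by version):

# Lists this node's BGP peers and whether each session is Established
calicoctl node status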
Calico configuration to enable BGP
- name: CALICO_IPV4POOL_IPIP
  value: "Off"
- name: CALICO_IPV4POOL_VXLAN
  value: "Never"
- name: CALICO_IPV6POOL_VXLAN
  value: "Never"
Service
Every node in a Kubernetes cluster runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services.
In Kubernetes v1.0 the proxy ran entirely in userspace. Kubernetes v1.1 added the iptables proxy, although it was not the default mode. Since Kubernetes v1.2 the iptables proxy has been the default. The ipvs proxy was added in Kubernetes v1.8.0-beta.0.
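To check which mode a running kube-proxy is actually using, you can query its metrics endpoint on the node; a sketch, assuming the default metrics port 10249:

curl http://127.0.0.1:10249/proxyMode
# Typically prints: iptables or ipvs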
userspace
kube-proxy:
Watches the API server for Service changes and updates the local iptables rules; it also proxies the requests that Pods on the current node send to Services.
iptables
kube-proxy:
Watches the API server for Service changes and updates the local iptables rules. Compared with the userspace mode, kube-proxy is decoupled from the data path and carries far less load.
ipvs
kube-proxy:
Watches the API server for Service changes and updates the local ipvs rules.
Secret
Kubernetes protects Secrets by distributing them only to the nodes that run Pods which need to access them. A Secret is kept only in the node's memory and is never written to physical storage, so nothing has to be wiped from disk when the Secret is removed from the node.
Starting with Kubernetes 1.7, Secrets can be stored in etcd in encrypted form, which improves their security to a degree.
Secret types:
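As a minimal sketch of the most common type, Opaque, the name and values below are illustrative and the data fields must be base64-encoded:

apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials      # illustrative name
type: Opaque
data:
  username: YWRtaW4=          # "admin", base64-encoded
  password: MWYyZDFlMmU2N2Rm  # base64-encoded password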
Downward API
The Downward API is a Kubernetes feature that lets containers obtain information about themselves from the Kubernetes API server at runtime. This information can be injected into the container as environment variables or files, so the container can learn about its environment: the Pod name, namespace, labels, and so on, as in the sketch below.
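A minimal sketch of injecting the Pod name and namespace as environment variables through fieldRef (the pod and container names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace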
Typical uses: exposing container metadata, dynamic configuration, and integration with the Kubernetes environment.
HELM
Helm is the official package manager, similar in spirit to YUM, and packages up the workflow of deploying an environment. Helm has two important concepts: chart and release.
Chart: the collection of information needed to create an application, including configuration templates for the various Kubernetes objects, parameter definitions, dependencies, and documentation. A chart is the self-contained logical unit of an application deployment; think of it as the software package in apt or yum.
Release: a running instance of a chart, representing a deployed application. Installing a chart into a Kubernetes cluster produces a release; the same chart can be installed into the same cluster many times, each installation creating a new release.
Helm CLI: the Helm client component, responsible for talking to the Kubernetes API server.
Repository: a repository used to publish and store charts.
Download and install
Download Helm
wget https://get.helm.sh/helm-v3.18.4-linux-amd64.tar.gz
tar -zxvf helm-v3.18.4-linux-amd64.tar.gz
cp -a linux-amd64/helm /usr/local/bin/
chmod a+x /usr/local/bin/helm
helm version
Add a chart repository (mirror reachable from mainland China)
helm repo add bitnami https://helm-charts.itboon.top/bitnami --force-update
helm repo update
# Search the repository
helm search repo bitnami
Install a chart (example)
# Show the configurable values of the apache chart
helm show values bitnami/apache
# Install apache
helm install bitnami/apache --generate-name
# Check the result
helm list -n default
kubectl get svc
kubectl get pod
# Show basic chart information
helm show chart bitnami/apache
# Show all chart information
helm show all bitnami/apache
# Uninstall a release
helm uninstall apache-1753181984
# Uninstall but keep the release history
helm uninstall apache-1753181984 --keep-history
# Inspect that release
helm status apache-1753182488
Extras
# Search the local repositories for wordpress charts
helm search repo wordpress
# Search the official hub for wordpress charts
helm search hub wordpress
# Install apache with an explicit release name, apache-1753234488
helm install apache-1753234488 bitnami/apache
Install a customized chart
# Show the configurable values of the apache chart
helm show values bitnami/apache
# Create a YAML file containing the values to override
vi apache.yml

service:
  type: NodePort

# Install apache with the overridden values
helm install -f apache.yml bitnami/apache --generate-name
Besides overriding values with a YAML file, you can also use --set to override individual entries on the command line. If both are used, the values from --set are merged into the --values ones, with --set taking precedence; values overridden via --set are saved in a ConfigMap. You can inspect the --set values of a given release with helm get values <release-name>, and you can clear them by running helm upgrade with --reset-values.
Format and limitations of --set
--set takes zero or more name/value pairs. The simplest usage is --set name=value, which is equivalent to the following YAML:
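name: value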
Multiple values are separated by commas, so --set a=b,c=d is represented in YAML as:
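a: b
c: d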
More complex expressions are also supported. For example, --set outer.inner=value is translated to:
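outer:
  inner: value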
Lists are expressed with curly braces ({}); for example, --set name={a,b,c} is translated to:
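name:
  - a
  - b
  - c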
Certain names/keys can be set to null or to an empty array, for example --set name=[],a=null.
Upgrade and rollback
helm upgrade performs a minimally invasive upgrade, updating only what has changed since the last release.
# helm upgrade -f <values file> <release name> <chart>
helm upgrade -f apache.yml apache-1753183272 bitnami/apache
Roll back a release
# List the existing revisions
# helm history <release name>
helm history apache-1753183272
# Perform the rollback
# helm rollback <release name> <revision>
helm rollback apache-1753183272 1
创建自定义chart包 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 # 创建chart包 helm create test # 删除不需要的文件 # 在templates中创建yaml资源清单 vi nodePort.yaml # apiVersion: v1 kind: Service metadata: name: myapp-test-202401110926-svc labels: app: myapp-test spec: type: NodePort selector: app: myapp-test ports: - name: "80-80" protocol: TCP port: 80 targetPort: 80 nodePort: 31111 # vi deplyment.yaml # apiVersion: apps/v1 kind: Deployment metadata: name: myapp-test-202401110926-deploy labels: app: myapp-test spec: replicas: 5 selector: matchLabels: app: myapp-test template: metadata: labels: app: myapp-test spec: containers: - name: myapp image: wangyanglinux/myapp:v1.0 # # 发布部署 helm install test test/
Complete example
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 vi templates/NOTES.txt # 1. 这是一个测试的 myapp chart 2. myapp release 名字:myapp-test-{{ now | date "20060102030405" }}-deploy 3. service 名字:myapp-test-{{ now | date "20060102030405" }}-svc # vi templates/deplyment.yaml # apiVersion: apps/v1 kind: Deployment metadata: name: myapp-test-{{ now | date "20060102030405" }}-deploy labels: app: myapp-test spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: myapp-test template: metadata: labels: app: myapp-test spec: containers: - name: myapp image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" # vi templates/service.yaml # apiVersion: v1 kind: Service metadata: name: myapp-test-{{ now | date "20060102030405" }}-svc labels: app: myapp-test spec: type: {{ .Values.service.type | quote }} selector: app: myapp-test ports: - name: "80-80" protocol: TCP port: 80 targetPort: 80 {{- if eq .Values.service.type "NodePort" }} nodePort: {{ .Values.service.nodePort }} {{- end }} # # 与templates目录同一级 vi values.yaml # replicaCount: 5 image: repository: wangyanglinux/myapp tag: "v1.0" service: type: NodePort nodePort: 32321 #
Deploying a Highly Available Kubernetes Cluster from Binaries
Preface
Five servers are used to deploy, from binaries, a highly available Kubernetes cluster with three masters and two workers.
Cluster architecture
(1) Base environment
OS: Rocky Linux release 10.0
Software: Kubernetes 1.33.4, Docker 28.3.3
(2) Environment preparation

Hostname     | IP            | Cluster and component roles
k8s-master01 | 192.168.0.111 | master: api-server, controller-manager, scheduler, etcd, kubelet, kube-proxy, nginx
k8s-master02 | 192.168.0.112 | master: api-server, controller-manager, scheduler, etcd, kubelet, kube-proxy, nginx
k8s-master03 | 192.168.0.113 | master: api-server, controller-manager, scheduler, etcd, kubelet, kube-proxy, nginx
k8s-node01   | 192.168.0.114 | worker: kubelet, kube-proxy, nginx
k8s-node02   | 192.168.0.115 | worker: kubelet, kube-proxy, nginx

Environment initialization
(1) Replace the system package mirrors and install dependencies
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
    -i.bak \
    /etc/yum.repos.d/[Rr]ocky*.repo
dnf makecache
# Install dependencies
yum install -y wget openssl gcc gcc-c++ zlib-devel openssl-devel make redhat-rpm-config
(2) Set the hostnames (each command on its corresponding host)
hostnamectl set-hostname k8s-master01 && bash
hostnamectl set-hostname k8s-master02 && bash
hostnamectl set-hostname k8s-master03 && bash
hostnamectl set-hostname k8s-node01 && bash
hostnamectl set-hostname k8s-node02 && bash
(3) System configuration changes
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 # 关闭firewalld防火墙 systemctl stop firewalld systemctl disable firewalld firewall-cmd --state # 安装iptables yum install -y iptables-services systemctl start iptables iptables -F systemctl enable iptables # selinux永久关闭 setenforce 0 sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config cat /etc/selinux/config # swap永久关闭 swapoff -a sed -ri 's/.*swap.*/#&/' /etc/fstab cat /etc/fstab # 设置时区 timedatectl set-timezone Asia/Shanghai date # 添加hosts cat >> /etc/hosts << EOF 192.168.0.111 k8s-master01 192.168.0.112 k8s-master02 192.168.0.113 k8s-master03 192.168.0.114 k8s-node01 192.168.0.115 k8s-node02 EOF # 查看 cat /etc/hosts
(4) Install ipvs
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 # 安装ipvs yum -y install ipvsadm sysstat conntrack libseccomp # 开启路由转发 echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf sysctl -p # ipvs加载模块 cat >> /etc/modules-load.d/ipvs.conf <<EOF ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack ip_tables ip_set xt_set ipt_set ipt_rpfilter ipt_REJECT ipip EOF systemctl restart systemd-modules-load.service lsmod | grep -e ip_vs -e nf_conntrack
(5) Keep the Calico interfaces out of NetworkManager's control
# Prevent the calico interfaces from being managed by NetworkManager
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager
(6) Configure time synchronization
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 # 修改k8s-master01上的chrony配置文件 sed -i -e 's/2\.rocky\.pool\.ntp\.org/ntp.aliyun.com/g' -e 's/#allow 192\.168\.0\.0\/16/allow 192.168.0.0\/24/g' -e 's/#local stratum 10/local stratum 10/g' /etc/chrony.conf # 修改k8s-master02上的chrony配置文件 sed -i -e 's/2\.rocky\.pool\.ntp\.org/ntp.aliyun.com/g' -e 's/#allow 192\.168\.0\.0\/16/allow 192.168.0.0\/24/g' -e 's/#local stratum 10/local stratum 11/g' /etc/chrony.conf # 修改k8s-master03上的chrony配置文件 sed -i -e 's/2\.rocky\.pool\.ntp\.org/ntp.aliyun.com/g' -e 's/#allow 192\.168\.0\.0\/16/allow 192.168.0.0\/24/g' -e 's/#local stratum 10/local stratum 12/g' /etc/chrony.conf # 修改k8s-node01、k8s-node02、k8s-node03上的chrony配置文件 sed -i 's/^pool 2\.rocky\.pool\.ntp\.org iburst$/pool 192.168.0.111 iburst\ pool 192.168.0.112 iburst\ pool 192.168.0.113 iburst/g' /etc/chrony.conf # 重启chronyd systemctl restart chronyd # 验证 chronyc sources -v
(7) Raise the maximum number of open files per process
# Configure ulimit
ulimit -SHn 65535
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
(8) Tune kernel parameters
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 cat <<EOF > /etc/sysctl.d/k8s.conf net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 fs.may_detach_mounts = 1 vm.overcommit_memory=1 vm.panic_on_oom=0 fs.inotify.max_user_watches=89100 fs.file-max=52706963 fs.nr_open=52706963 net.netfilter.nf_conntrack_max=2310720 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_probes = 3 net.ipv4.tcp_keepalive_intvl =15 net.ipv4.tcp_max_tw_buckets = 36000 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_max_orphans = 327680 net.ipv4.tcp_orphan_retries = 3 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.ip_conntrack_max = 65536 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.tcp_timestamps = 0 net.core.somaxconn = 16384 net.ipv6.conf.all.disable_ipv6 = 0 net.ipv6.conf.default.disable_ipv6 = 0 net.ipv6.conf.lo.disable_ipv6 = 0 net.ipv6.conf.all.forwarding = 1 EOF sysctl --system
Install Docker
(1) Install Docker with the install script
bash <(curl -sSL https://linuxmirrors.cn/docker.sh)
(2) Adjust the Docker configuration
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 cat >/etc/docker/daemon.json <<EOF { "exec-opts": ["native.cgroupdriver=systemd"], "registry-mirrors": [ "http://hub-mirror.c.163.com", "https://hub.rat.dev", "https://docker.mirrors.ustc.edu.cn", "https://docker.1panel.live", "https://docker.m.daocloud.io", "https://docker.1ms.run" ], "max-concurrent-downloads": 10, "log-driver": "json-file", "log-level": "warn", "log-opts": { "max-size": "10m", "max-file": "3" }, "data-root": "/data/dockerData" } EOF systemctl daemon-reload systemctl restart docker
Install cri-dockerd
(1) Download and install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.18/cri-dockerd-0.3.18.amd64.tgz
tar xvf cri-dockerd-*.amd64.tgz
cp -r cri-dockerd/* /usr/bin/
chmod +x /usr/bin/cri-dockerd
(2) Add the cri-docker service unit
cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
(3) Add the cri-docker socket unit
1 2 3 4 5 6 7 8 9 10 11 12 13 14 cat > /usr/lib/systemd/system/cri-docker.socket <<EOF [Unit] Description=CRI Docker Socket for the API PartOf=cri-docker.service [Socket] ListenStream=%t/cri-dockerd.sock SocketMode=0660 SocketUser=root SocketGroup=docker [Install] WantedBy=sockets.target EOF
(4) Start cri-dockerd so the configuration takes effect
systemctl daemon-reload
systemctl enable --now cri-docker.service
systemctl status cri-docker.service
Install the etcd cluster (master nodes)
(1) Download and install etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.6.4/etcd-v3.6.4-linux-amd64.tar.gz
tar -xf etcd-*.tar.gz
mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/
ls /usr/local/bin/
etcdctl version
Install the Kubernetes cluster
(1) Download the Kubernetes binary package and install it
wget https://dl.k8s.io/v1.33.2/kubernetes-server-linux-amd64.tar.gz
# On the master nodes
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# On the worker nodes
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,-proxy}
ls /usr/local/bin/
kubelet --version
# On all nodes: directory for the CNI plugins
mkdir -p /opt/cni/bin
Generate the certificates (master nodes)
Install the cfssl certificate tooling
wget https://hub.gitmirror.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl-certinfo_1.6.5_linux_amd64 -O /usr/local/bin/cfssl-certinfo
wget https://hub.gitmirror.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssljson_1.6.5_linux_amd64 -O /usr/local/bin/cfssljson
wget https://hub.gitmirror.com/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl_1.6.5_linux_amd64 -O /usr/local/bin/cfssl
# Make them executable and check the version
chmod +x /usr/local/bin/cfssl*
cfssl version
生成ETCD证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl # 创建生成证书的配置文件 cat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } } EOF # 创建证书签发请求文件 cat > etcd-ca-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } } EOF
Sign the etcd CA certificate and key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
Create the signing request used to generate the etcd server certificate
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 cat > etcd-csr.json << EOF { "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] } EOF
Sign the etcd server certificate
cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.111,192.168.0.112,192.168.0.113 -profile=kubernetes etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
生成Kubernetes证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki # 创建证书签发请求文件 cat > ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } } EOF
Sign the Kubernetes CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
生成ApiServer证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 # 创建证书签发请求文件 cat > apiserver-csr.json << EOF { "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] } EOF # 创建生成证书的配置文件 cat > ca-config.json << EOF { "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } } EOF
Sign the kube-apiserver certificate and key
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.111,192.168.0.112,192.168.0.113,192.168.0.114,192.168.0.115,192.168.0.116,192.168.0.117,192.168.0.118,192.168.0.119,192.168.0.120 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
10.96.0.1 is the default address of the Kubernetes "kubernetes" Service.
kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, and kubernetes.default.svc.cluster.local are the default DNS names for the Kubernetes API.
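To double-check that every IP and DNS name you expect actually ended up in the certificate, you can list its Subject Alternative Names; a quick verification sketch:

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"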
生成ApiServer聚合证书 1 2 3 4 5 6 7 8 9 10 11 12 13 # 创建证书签发请求文件 cat > front-proxy-ca-csr.json << EOF { "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } } EOF
Sign the aggregation-layer (front-proxy) CA certificate and key
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
生成ApiServer聚合证书的客户端证书 1 2 3 4 5 6 7 8 9 10 # 创建证书签发请求文件 cat > front-proxy-client-csr.json << EOF { "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } } EOF
Sign the aggregation-layer client certificate and key
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
生成controller-manager证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 # 创建证书签发请求文件 cat > manager-csr.json << EOF { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] } EOF
Sign the controller-manager certificate and key
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
Generate the kubeconfig file for controller-manager
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig kubectl config set-context system:kube-controller-manager@kubernetes \ --cluster=kubernetes \ --user=system:kube-controller-manager \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig kubectl config set-credentials system:kube-controller-manager \ --client-certificate=/etc/kubernetes/pki/controller-manager.pem \ --client-key=/etc/kubernetes/pki/controller-manager-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig kubectl config use-context system:kube-controller-manager@kubernetes \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
生成kube-scheduler证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 # 创建证书签发请求文件 cat > scheduler-csr.json << EOF { "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] } EOF
Sign the kube-scheduler certificate and key
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
Generate the kubeconfig file for kube-scheduler
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=/etc/kubernetes/pki/scheduler.pem \ --client-key=/etc/kubernetes/pki/scheduler-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config set-context system:kube-scheduler@kubernetes \ --cluster=kubernetes \ --user=system:kube-scheduler \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig kubectl config use-context system:kube-scheduler@kubernetes \ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
生成admin证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 # 创建证书签发请求文件 cat > admin-csr.json << EOF { "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] } EOF
Sign the admin certificate and key
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
Generate the kubeconfig file for admin
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-credentials kubernetes-admin \ --client-certificate=/etc/kubernetes/pki/admin.pem \ --client-key=/etc/kubernetes/pki/admin-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config set-context kubernetes-admin@kubernetes \ --cluster=kubernetes \ --user=kubernetes-admin \ --kubeconfig=/etc/kubernetes/admin.kubeconfig kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
生成kube-proxy证书 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 # 创建证书签发请求文件 cat > kube-proxy-csr.json << EOF { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] } EOF
Sign the kube-proxy certificate and key
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
Generate the kubeconfig file for kube-proxy
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.pem \ --embed-certs=true \ --server=https://127.0.0.1:8443 \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-credentials kube-proxy \ --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \ --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \ --embed-certs=true \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config set-context kube-proxy@kubernetes \ --cluster=kubernetes \ --user=kube-proxy \ --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Create the ServiceAccount token signing key pair
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
Add the component configurations and start the services
etcd component (master nodes)
(1) etcd configuration file on k8s-master01
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master01' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.0.111:2380' listen-client-urls: 'https://192.168.0.111:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.0.111:2380' advertise-client-urls: 'https://192.168.0.111:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.0.111:2380,k8s-master02=https://192.168.0.112:2380,k8s-master03=https://192.168.0.113:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF
(2) etcd configuration file on k8s-master02
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master02' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.0.112:2380' listen-client-urls: 'https://192.168.0.112:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.0.112:2380' advertise-client-urls: 'https://192.168.0.112:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.0.111:2380,k8s-master02=https://192.168.0.112:2380,k8s-master03=https://192.168.0.113:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF
(3) etcd configuration file on k8s-master03
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 cat > /etc/etcd/etcd.config.yml << EOF name: 'k8s-master03' data-dir: /var/lib/etcd wal-dir: /var/lib/etcd/wal snapshot-count: 5000 heartbeat-interval: 100 election-timeout: 1000 quota-backend-bytes: 0 listen-peer-urls: 'https://192.168.0.113:2380' listen-client-urls: 'https://192.168.0.113:2379,http://127.0.0.1:2379' max-snapshots: 3 max-wals: 5 cors: initial-advertise-peer-urls: 'https://192.168.0.113:2380' advertise-client-urls: 'https://192.168.0.113:2379' discovery: discovery-fallback: 'proxy' discovery-proxy: discovery-srv: initial-cluster: 'k8s-master01=https://192.168.0.111:2380,k8s-master02=https://192.168.0.112:2380,k8s-master03=https://192.168.0.113:2380' initial-cluster-token: 'etcd-k8s-cluster' initial-cluster-state: 'new' strict-reconfig-check: false enable-v2: true enable-pprof: true proxy: 'off' proxy-failure-wait: 5000 proxy-refresh-interval: 30000 proxy-dial-timeout: 1000 proxy-write-timeout: 5000 proxy-read-timeout: 0 client-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true peer-transport-security: cert-file: '/etc/kubernetes/pki/etcd/etcd.pem' key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem' peer-client-cert-auth: true trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem' auto-tls: true debug: false log-package-levels: log-outputs: [default] force-new-cluster: false EOF
(4) Create the etcd systemd unit
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Service Documentation=https://coreos.com/etcd/docs/latest/ After=network.target [Service] Type=notify ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml Restart=on-failure RestartSec=10 LimitNOFILE=65536 [Install] WantedBy=multi-user.target Alias=etcd3.service EOF
(5) Start the etcd service
mkdir -p /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd.service
(6) Check the health of the etcd cluster
export ETCDCTL_API=3
etcdctl --endpoints="192.168.0.111:2379,192.168.0.112:2379,192.168.0.113:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

Example output (abridged):
ENDPOINT           | ID               | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX
192.168.0.111:2379 | 9c35553b47538310 | 3.6.4   | 20 kB   | true      | 3         | 6
192.168.0.112:2379 | 545bae002651f913 | 3.6.4   | 20 kB   | false     | 2         | 7
192.168.0.113:2379 | d7497b3a31d15f9e | 3.6.4   | 20 kB   | false     | 2         | 7

# Save the etcd-related firewall rules so the service can still start after a reboot
service iptables save

If the etcd service fails to start after a server reboot, flush the firewall rules, start etcd, and then save the rules again.
Install Nginx for high availability
(1) Download and build Nginx
wget https://nginx.org/download/nginx-1.28.0.tar.gz
tar xvf nginx-1.28.0.tar.gz
cd nginx-1.28.0
# Build and install; --with-stream enables the layer-4 (stream) proxy
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
(2) Create the configuration file
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server 192.168.0.111:6443 max_fails=3 fail_timeout=30s;
        server 192.168.0.112:6443 max_fails=3 fail_timeout=30s;
        server 192.168.0.113:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
(3) Add the nginx service unit
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 cat > /etc/systemd/system/kube-nginx.service <<EOF [Unit] Description=kube-apiserver nginx proxy After=network.target After=network-online.target Wants=network-online.target [Service] Type=forking ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload PrivateTmp=true Restart=always RestartSec=5 StartLimitInterval=0 LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF
(4) Start the service
systemctl daemon-reload
systemctl enable --now kube-nginx.service
systemctl status kube-nginx.service

ApiServer component
# Create the required directories on all nodes
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

(1) Add the kube-apiserver unit on k8s-master01
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.0.111 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.0.111:2379,https://192.168.0.112:2379,https://192.168.0.113:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF
(2) Add the kube-apiserver unit on k8s-master02
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2 \\
      --allow-privileged=true \\
      --bind-address=0.0.0.0 \\
      --secure-port=6443 \\
      --advertise-address=192.168.0.112 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\
      --service-node-port-range=30000-32767 \\
      --etcd-servers=https://192.168.0.111:2379,https://192.168.0.112:2379,https://192.168.0.113:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
      --authorization-mode=Node,RBAC \\
      --enable-bootstrap-token-auth=true \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
      --requestheader-allowed-names=aggregator \\
      --requestheader-group-headers=X-Remote-Group \\
      --requestheader-extra-headers-prefix=X-Remote-Extra- \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
(3) Add the kube-apiserver unit on k8s-master03
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 cat > /usr/lib/systemd/system/kube-apiserver.service << EOF [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-apiserver \\ --v=2 \\ --allow-privileged=true \\ --bind-address=0.0.0.0 \\ --secure-port=6443 \\ --advertise-address=192.168.0.113 \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --service-node-port-range=30000-32767 \\ --etcd-servers=https://192.168.0.111:2379,https://192.168.0.112:2379,https://192.168.0.113:2379 \\ --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\ --etcd-certfile=/etc/etcd/ssl/etcd.pem \\ --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\ --client-ca-file=/etc/kubernetes/pki/ca.pem \\ --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\ --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\ --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\ --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\ --service-account-key-file=/etc/kubernetes/pki/sa.pub \\ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\ --service-account-issuer=https://kubernetes.default.svc.cluster.local \\ --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\ --authorization-mode=Node,RBAC \\ --enable-bootstrap-token-auth=true \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\ --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\ --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\ --requestheader-allowed-names=aggregator \\ --requestheader-group-headers=X-Remote-Group \\ --requestheader-extra-headers-prefix=X-Remote-Extra- \\ --requestheader-username-headers=X-Remote-User \\ --enable-aggregator-routing=true Restart=on-failure RestartSec=10s LimitNOFILE=65535 [Install] WantedBy=multi-user.target EOF
(4) Start the kube-apiserver service
systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver
controller-manager component
Add the kube-controller-manager unit on all master nodes
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-controller-manager \\ --v=2 \\ --bind-address=0.0.0.0 \\ --root-ca-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\ --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\ --leader-elect=true \\ --use-service-account-credentials=true \\ --node-monitor-grace-period=40s \\ --node-monitor-period=5s \\ --controllers=*,bootstrapsigner,tokencleaner \\ --allocate-node-cidrs=true \\ --service-cluster-ip-range=10.96.0.0/12,fd00:1111::/112 \\ --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\ --node-cidr-mask-size-ipv4=24 \\ --node-cidr-mask-size-ipv6=120 \\ --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF
Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
kube-scheduler component (master nodes)
Add the kube-scheduler unit on all master nodes
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 cat > /usr/lib/systemd/system/kube-scheduler.service << EOF [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --v=2 \\ --bind-address=0.0.0.0 \\ --leader-elect=true \\ --kubeconfig=/etc/kubernetes/scheduler.kubeconfig Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF
Start the kube-scheduler service
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
TLS Bootstrapping configuration (on master01, used for automatic certificate issuance)
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true --server=https://127.0.0.1:8443 \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
    --token=c8ad9c.2e4d610cf3e7426e \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token is defined in bootstrap-secret.yaml; if you change it, change it there as well
mkdir -p /root/.kube && cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Check the cluster status
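The original does not show the exact command used for this step; a reasonable sketch is:

kubectl get cs
# Or, since componentstatuses is deprecated, query the aggregated health endpoint:
kubectl get --raw='/readyz?verbose'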
Create the bootstrap-secret authorization manifest
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 cat > bootstrap-secret.yaml << EOF apiVersion: v1 kind: Secret metadata: name: bootstrap-token-c8ad9c namespace: kube-system type: bootstrap.kubernetes.io/token stringData: description: "The default bootstrap token generated by 'kubelet '." token-id: "c8ad9c" token-secret: "2e4d610cf3e7426e" usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubelet-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrapper subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:bootstrappers:default-node-token --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: node-autoapprove-certificate-rotation roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:kube-apiserver-to-kubelet rules: - apiGroups: - "" resources: - nodes/proxy - nodes/stats - nodes/log - nodes/spec - nodes/metrics verbs: - "*" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: system:kube-apiserver namespace: "" roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:kube-apiserver-to-kubelet subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube-apiserver EOF
Apply the bootstrap-secret manifest
kubectl apply -f bootstrap-secret.yaml
Copy the certificate files to the other nodes
cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done
kubelet component (all nodes)
(1) Add the kubelet unit
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 cat > /usr/lib/systemd/system/kubelet.service << EOF [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=network-online.target firewalld.service cri-docker.service docker.socket containerd.service Wants=network-online.target Requires=docker.socket containerd.service [Service] ExecStart=/usr/local/bin/kubelet \\ --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\ --config=/etc/kubernetes/kubelet-conf.yml \\ --container-runtime-endpoint=unix:///run/cri-dockerd.sock \\ --node-labels=node.kubernetes.io/node= [Install] WantedBy=multi-user.target EOF
(2) Create the kubelet configuration file
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 cat > /etc/kubernetes/kubelet-conf.yml <<EOF apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration address: 0.0.0.0 port: 10250 readOnlyPort: 10255 authentication: anonymous: enabled: false webhook: cacheTTL: 2m0s enabled: true x509: clientCAFile: /etc/kubernetes/pki/ca.pem authorization: mode: Webhook webhook: cacheAuthorizedTTL: 5m0s cacheUnauthorizedTTL: 30s cgroupDriver: systemd cgroupsPerQOS: true clusterDNS: - 10.96.0.10 clusterDomain: cluster.local containerLogMaxFiles: 5 containerLogMaxSize: 10Mi contentType: application/vnd.kubernetes.protobuf cpuCFSQuota: true cpuManagerPolicy: none cpuManagerReconcilePeriod: 10s enableControllerAttachDetach: true enableDebuggingHandlers: true enforceNodeAllocatable: - pods eventBurst: 10 eventRecordQPS: 5 evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5% evictionPressureTransitionPeriod: 5m0s failSwapOn: true fileCheckFrequency: 20s hairpinMode: promiscuous-bridge healthzBindAddress: 127.0.0.1 healthzPort: 10248 httpCheckFrequency: 20s imageGCHighThresholdPercent: 85 imageGCLowThresholdPercent: 80 imageMinimumGCAge: 2m0s iptablesDropBit: 15 iptablesMasqueradeBit: 14 kubeAPIBurst: 10 kubeAPIQPS: 5 makeIPTablesUtilChains: true maxOpenFiles: 1000000 maxPods: 110 nodeStatusUpdateFrequency: 10s oomScoreAdj: -999 podPidsLimit: -1 registryBurst: 10 registryPullQPS: 5 resolvConf: /etc/resolv.conf rotateCertificates: true runtimeRequestTimeout: 2m0s serializeImagePulls: true staticPodPath: /etc/kubernetes/manifests streamingConnectionIdleTimeout: 4h0m0s syncFrequency: 1m0s volumeStatsAggPeriod: 1m0s EOF
(3) Start the kubelet service
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet
kube-proxy component (all nodes)
(1) Send the kubeconfig to the other nodes
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
(2) Add the kube-proxy unit
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 cat > /usr/lib/systemd/system/kube-proxy.service << EOF [Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes After=network.target [Service] ExecStart=/usr/local/bin/kube-proxy \\ --config=/etc/kubernetes/kube-proxy.yaml \\ --cluster-cidr=172.16.0.0/12,fc00:2222::/112 \\ --v=2 Restart=always RestartSec=10s [Install] WantedBy=multi-user.target EOF
(3) Create the kube-proxy configuration file
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 cat > /etc/kubernetes/kube-proxy.yaml << EOF apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig qps: 5 clusterCIDR: 172.16.0.0/12,fc00:2222::/112 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: masqueradeAll: true minSyncPeriod: 5s scheduler: "rr" syncPeriod: 30s kind: KubeProxyConfiguration metricsBindAddress: 127.0.0.1:10249 mode: "ipvs" nodePortAddresses: null oomScoreAdj: -999 portRange: "" udpIdleTimeout: 250ms EOF
(4) Start the kube-proxy service
systemctl daemon-reload
systemctl enable --now kube-proxy.service
systemctl status kube-proxy.service
Install the Calico network plugin
(1) Download and install Calico
wget https://hub.gitmirror.com/https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/calico.yaml
# Uncomment and change the pod CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/12"
# Switch the container image registry
sed -i "s#docker.io/calico/#m.daocloud.io/docker.io/calico/#g" calico.yaml
# Apply
kubectl apply -f calico.yaml
Install CoreDNS
(1) Download and install Helm
wget https://get.helm.sh/helm-v3.18.4-linux-amd64.tar.gz
tar -zxvf helm-v3.18.4-linux-amd64.tar.gz
cp -a linux-amd64/helm /usr/local/bin/
chmod a+x /usr/local/bin/helm
helm version
(2) Fetch the CoreDNS chart with Helm
helm repo add coredns https://coredns.github.io/helm
helm pull coredns/coredns
tar xvf coredns-*.tgz
cd coredns/
(3) Edit values.yaml
clusterIP: "10.96.0.10"
# Switch the image registries
sed -i 's|coredns/coredns|m.daocloud.io/docker.io/coredns/coredns|g' values.yaml
sed -i "s|registry.k8s.io/cpa/cluster-proportional-autoscaler|m.daocloud.io/registry.k8s.io/cpa/cluster-proportional-autoscaler|g" values.yaml
(4) Install
helm install coredns ./ -n kube-system
# Check the installation
kubectl get pod -n kube-system
安装Metrics Server 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 wget https://hub.gitmirror.com/https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.8.0/components.yaml # 修改配置 - args: - --cert-dir=/tmp - --secure-port=10250 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --kubelet-use-node-status-port - --metric-resolution=15s - --kubelet-insecure-tls - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki # 修改镜像源 sed -i "s#registry.k8s.io/#m.daocloud.io/registry.k8s.io/#g" components.yaml # 执行部署 kubectl apply -f components.yaml # 验证 kubectl top pod -A
Set node role labels and taints
(1) Taint the master nodes
kubectl taint node k8s-master01 node-role.kubernetes.io/master:NoSchedule
kubectl taint node k8s-master02 node-role.kubernetes.io/master:NoSchedule
kubectl taint node k8s-master03 node-role.kubernetes.io/master:NoSchedule
(2) Set the role labels on each node
# k8s-master01
kubectl label nodes k8s-master01 node-role.kubernetes.io/master=
kubectl label nodes k8s-master01 node-role.kubernetes.io/control-plane=
# k8s-master02
kubectl label nodes k8s-master02 node-role.kubernetes.io/master=
kubectl label nodes k8s-master02 node-role.kubernetes.io/control-plane=
# k8s-master03
kubectl label nodes k8s-master03 node-role.kubernetes.io/master=
kubectl label nodes k8s-master03 node-role.kubernetes.io/control-plane=
# k8s-node01
kubectl label nodes k8s-node01 node-role.kubernetes.io/worker=
# k8s-node02
kubectl label nodes k8s-node02 node-role.kubernetes.io/worker=
Set up command completion
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Verify high availability
(1) Verify that the etcd cluster is highly available
# Verify the etcd cluster
export ETCDCTL_API=3
etcdctl --endpoints="192.168.0.113:2379,192.168.0.112:2379,192.168.0.111:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
(2) Use etcdhelper to inspect the Kubernetes resource data stored in etcd
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 # 拉取etcdhelper源码文件 git clone --depth 1 https://hub.gitmirror.com/https://github.com/openshift/origin.git # 下载依赖软件 yum install -y go # 构建etcdhelper执行程序 cd origin/tools/etcdhelper go build etcdhelper.go mv etcdhelper /usr/local/bin/ # 设定参数别名 echo 'alias ectl="etcdhelper -endpoint https://192.168.0.111:2379 -cacert /etc/etcd/ssl/etcd-ca.pem -key /etc/etcd/ssl/etcd-key.pem -cert /etc/etcd/ssl/etcd.pem"' >> ~/.bashrc # 执行配置生效 source ~/.bashrc # 查看etcd数据 ectl ls # 查看当前的 schedule 主节点 ectl get /registry/leases/kube-system/kube-scheduler # 查看当前的 controllermanager 主节点 ectl get /registry/leases/kube-system/kube-controller-manager
部署deployment验证 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 vi test.yml apiVersion: apps/v1 kind: Deployment metadata: name: test01 spec: replicas: 6 selector: matchLabels: app: test01 template: metadata: labels: app: test01 spec: containers: - name: test01 image: nginx ports: - containerPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: test02 spec: replicas: 5 selector: matchLabels: app: test02 template: metadata: labels: app: test02 spec: nodeName: k8s-node02 containers: - name: test02 image: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: test01 spec: type: NodePort selector: app: test01 ports: - name: http port: 80 targetPort: 80 --- apiVersion: v1 kind: Service metadata: name: test02 spec: type: NodePort selector: app: test02 ports: - name: http port: 80 targetPort: 80 kubectl apply -f test.yml kubectl get pod,svc,deployments -o wide
The material in this post comes from the Bilibili video BV1PbeueyE8V.