The following steps are still performed on the Master Node, which also serves as a Worker Node.
1. Create the working directory and copy the binaries
Create the working directory on every Worker Node (run on each node); the cp/scp commands below distribute the binaries from the master:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cd /opt/tools/kubernetes/server/bin/
cp kubelet kube-proxy /opt/kubernetes/bin/
scp kubelet kube-proxy k8s-node1:/opt/kubernetes/bin/
scp kubelet kube-proxy k8s-node2:/opt/kubernetes/bin/
2 Deploy kubelet (run on the master node)
2.1 Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0"
EOF
- --hostname-override: the node's display name; must be unique within the cluster
- --network-plugin: enable CNI
- --kubeconfig: an empty path; the file is generated automatically and is later used to connect to the apiserver (see the check sketched after this list)
- --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
- --config: the configuration parameter file
- --cert-dir: directory where the kubelet certificates are generated
- --pod-infra-container-image: image of the infrastructure (pause) container that manages the Pod's network namespace
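A minimal post-bootstrap check (paths as configured above; run it after the CSR has been approved in Section 3) to confirm that the kubelet really did generate these files on its own:
ls -l /opt/kubernetes/cfg/kubelet.kubeconfig   # written automatically after TLS bootstrap
ls -l /opt/kubernetes/ssl/kubelet*             # client certificate/key issued into --cert-dir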
2.2 Configuration parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
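Note that cgroupDriver (cgroupfs above) must match the container runtime's cgroup driver. Assuming Docker is the runtime, as in this setup, you can check it with:
docker info --format '{{.CgroupDriver}}'   # should print "cgroupfs" to match kubelet-config.yml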
2.3 Generate the bootstrap.kubeconfig file
KUBE_APISERVER="https://10.20.17.20:6443" # apiserver IP:PORT
TOKEN="063e91e42837f2a2b36860457f515053" # must match the token in token.csv
cd /root/TLS/k8s
# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
cp /root/TLS/k8s/bootstrap.kubeconfig /opt/kubernetes/cfg
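To confirm the file was assembled correctly, you can inspect it (certificate and token data are redacted in the output by default):
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig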
2.4 Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
2.5 Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
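If the kubelet does not come up cleanly, its service status and recent journal entries usually show why (detailed logs also land under /opt/kubernetes/logs per --log-dir):
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 20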
3 Approve the kubelet certificate request and join the cluster
# View pending kubelet certificate signing requests
[root@k8s-master ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-LHEDjWtPT39E8gkKemznF7a5GgEfX4Y5Q34E-MgzJbw 9m53s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
kubectl certificate approve node-csr-LHEDjWtPT39E8gkKemznF7a5GgEfX4Y5Q34E-MgzJbw
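If several nodes bootstrap at the same time, the pending requests can be approved in one pass; a minimal sketch, assuming every pending CSR in the list is one you intend to approve:
kubectl get csr -o name | xargs -r kubectl certificate approve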
# View the nodes
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady <none> 21s v1.18.3
The node reports NotReady at this point because no CNI network plugin has been deployed yet; it becomes Ready after Section 5.
4 Deploy kube-proxy (run on the master node)
4.1 Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
4.2 Configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF
4.3 Generate the kube-proxy.kubeconfig file
# Change to the working directory
cd /root/TLS/k8s
# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# ls kube-proxy*pem
kube-proxy-key.pem kube-proxy.pem
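A quick way to double-check the certificate's subject and validity period (assumes openssl is installed):
openssl x509 -in kube-proxy.pem -noout -subject -dates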
Generate the kubeconfig file:
KUBE_APISERVER="https://10.20.17.20:6443"
cd /root/TLS/k8s
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
cp /root/TLS/k8s/kube-proxy.kubeconfig /opt/kubernetes/cfg/
4.4 Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4.5 Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
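Once kube-proxy is running, the metrics endpoint bound above (0.0.0.0:10249) can confirm which proxy mode it selected; the /proxyMode path should be available on this version, but treat it as a convenience check rather than a guarantee:
curl -s 127.0.0.1:10249/proxyMode   # typically prints "iptables" or "ipvs"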
5 Deploy the CNI network (run on the master node)
5.1 Prepare the CNI binaries
Download URL: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
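A typical way to fetch the archive into the directory used in the next step (assuming the host can reach GitHub directly):
mkdir -p /opt/tools && cd /opt/tools
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz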
5.2 Unpack the archive into the default working directory
mkdir -p /opt/cni/bin
cd /opt/tools/
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
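After extraction, /opt/cni/bin should contain the standard plugins (bridge, host-local, loopback, flannel, portmap, and so on):
ls /opt/cni/bin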
5.3 Deploy the CNI network
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If the default image registry is unreachable, switch it to a Docker Hub mirror; here we leave the default unchanged
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
# kubectl apply -f kube-flannel.yml
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-c4t2v 1/1 Running 0 25s
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 36m v1.18.3
6 Authorize the apiserver to access the kubelet (run on the master node)
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
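This grant is what allows apiserver-initiated calls such as kubectl logs and kubectl exec to reach the kubelet. A quick sanity check, impersonating the kubernetes user named in the binding (assumes your admin kubeconfig allows impersonation):
kubectl auth can-i get nodes/proxy --as kubernetes   # should print "yes"
kubectl auth can-i get pods/log --as kubernetes      # should print "yes"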
7 Add a new Worker Node
7.1 Copy the deployed Node files to the new node
Run on the master node
Copy the Worker Node files from the master to the new nodes (node1 and node2):
scp -r /opt/kubernetes k8s-node1:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service k8s-node1:/usr/lib/systemd/system
scp -r /opt/cni/ k8s-node1:/opt/
scp /opt/kubernetes/ssl/ca.pem k8s-node1:/opt/kubernetes/ssl
7.2 Delete the kubelet certificate and kubeconfig file
Run on the node (these files were generated during the master's TLS bootstrap and are node-specific, so they must be removed and regenerated)
rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*
7.3 Change the hostname
Run on the node
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
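Instead of editing both files by hand, the rename can be done in one line; a small sketch for node1 (adjust the hostname for each node):
sed -i 's/k8s-master/k8s-node1/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml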
7.4 Start the services and enable them at boot
Run on the node
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
7.5 Approve the new node's kubelet certificate request on the Master
Run on the master node
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-LHEDjWtPT39E8gkKemznF7a5GgEfX4Y5Q34E-MgzJbw 68m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-eFXMlBTEP1jYeRrMur_ZdpMeWyKmtyQ-A_LGOQZ74a0 57s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
kubectl certificate approve node-csr-eFXMlBTEP1jYeRrMur_ZdpMeWyKmtyQ-A_LGOQZ74a0
7.6 Check the Node status
Run on the master node
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 60m v1.18.3
k8s-node1 Ready <none> 62s v1.18.3
Note: if a newly added node shows NotReady, check the kubelet with journalctl -f -u kubelet. If you see the following error:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
This means the network plugin is not ready yet. Run docker images | grep flannel to check whether the flannel image has been pulled; in my case the pull was just slow and the node became Ready after a short wait. You can also pull the image manually: docker pull quay.io/coreos/flannel:v0.12.0-amd64
To add the second node (node2), repeat the steps above; remember to change the hostname!
Original article: https://blog.csdn.net/cljdsc/article/details/134758668