Building a single-node test Kubernetes cluster

Many people buy lightweight cloud servers to learn on or to host a few personal services; as a new user you can often get a decently specced instance for a year for a few tens of RMB. A lot of those people then want to use that server to learn Kubernetes, but with only one machine the usual multi-node setup is awkward. One option is k3s: Tencent Cloud, for example, ships a k3s image, and reinstalling the server from it gives you a ready-made k3s cluster. The other option, used here, is kubeadm: with a single server we simply let the one node act as both master and worker. The steps below walk through that setup.

Preparation

The server runs CentOS 7.8 with the minimum 2-core / 4 GB configuration. First, a few preparation steps on the node.

Set the hostname; with only one server we may as well call it master, then add it to the hosts file:

# hostnamectl set-hostname master

# In sed, $ addresses the last line of the file and the a command appends after it
# sed -i '$a 10.0.8.6 master' /etc/hosts
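One caveat: the `sed '$a …'` append is not idempotent, so running the command twice leaves two identical hosts entries. A small guard keeps re-runs safe; the sketch below demonstrates it on a temp file standing in for /etc/hosts:

```shell
# Illustrated on a temp file; on the server, point HOSTS at /etc/hosts instead.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Append the entry only when it is not already present, so re-runs are safe.
grep -qF '10.0.8.6 master' "$HOSTS" || echo '10.0.8.6 master' >> "$HOSTS"
grep -qF '10.0.8.6 master' "$HOSTS" || echo '10.0.8.6 master' >> "$HOSTS"  # second run: no-op

grep -cF '10.0.8.6 master' "$HOSTS"   # prints 1, not 2
rm -f "$HOSTS"
```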

Next, configure kernel forwarding and bridge parameters:

# cat <<EOF > /etc/sysctl.d/Kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

# modprobe br_netfilter   # without this, sysctl fails: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
# sysctl -p /etc/sysctl.d/Kubernetes.conf
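Note that modprobe only loads the module for the current boot. To have br_netfilter come back after a reboot, you can register it with systemd's modules-load.d mechanism (a standard systemd feature; this step is an addition to the original walkthrough):

```shell
# systemd reads /etc/modules-load.d/*.conf at boot and loads each listed module.
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```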

Installing Docker

First add the Docker yum repository and point it at the Tencent mirror:

# wget -O /etc/yum.repos.d/docker-ce.repo http://download.docker.com/linux/centos/docker-ce.repo
# sed -i 's+download.docker.com+mirrors.cloud.tencent.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

Then install Docker 19.03:

# yum install -y docker-ce-19.03.9-3.el7
# systemctl start docker && systemctl enable docker

Configure Docker to use Tencent Cloud's registry mirror, and set the systemd cgroup driver to match the kubelet:

# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://mirrors.ccs.tencentyun.com"]
}
EOF
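A syntax error in daemon.json stops the daemon from starting at all, so it is worth validating the JSON before restarting. A sketch, demonstrated on a temp copy (on the server, point CONF at /etc/docker/daemon.json):

```shell
# Demonstrated on a temp copy; on the server point CONF at /etc/docker/daemon.json.
CONF=$(mktemp)
cat <<EOF > "$CONF"
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://mirrors.ccs.tencentyun.com"]
}
EOF

# Any JSON syntax error is reported here, before it can break the daemon restart.
PY=$(command -v python3 || command -v python)
"$PY" -c 'import json,sys; json.load(open(sys.argv[1])); print("daemon.json OK")' "$CONF"
rm -f "$CONF"
```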

Then reload the configuration and restart Docker:

# systemctl daemon-reload && systemctl restart docker

Installing kubeadm, kubectl and kubelet

We are deploying Kubernetes 1.20, so install the matching 1.20 packages. First add the Kubernetes yum repository:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.cloud.tencent.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

Then install kubeadm, kubectl and kubelet at 1.20, and enable kubelet at boot:

# yum install -y kubeadm-1.20.0 kubectl-1.20.0 kubelet-1.20.0
# systemctl enable kubelet
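One step worth adding here: kubeadm refuses to initialize while swap is enabled (the vm.swappiness=0 setting above alone is not enough). A sketch of disabling it, demonstrated on a copy of fstab:

```shell
# Demonstrated on a copy of fstab; on the real server run `swapoff -a`
# first and then apply the same sed to /etc/fstab itself.
FSTAB=$(mktemp)
printf '/dev/vda1 / xfs defaults 0 0\n/dev/vda2 none swap sw 0 0\n' > "$FSTAB"

# Comment out any active swap entry so swap stays off after a reboot.
sed -i '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' "$FSTAB"
grep swap "$FSTAB"   # prints: #/dev/vda2 none swap sw 0 0
rm -f "$FSTAB"
```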

By default the Kubernetes images are pulled from k8s.gcr.io, which is not reachable from mainland China without a proxy. You can pull them from my personal registry instead; the full image list is:

kube-apiserver:v1.20.0
kube-scheduler:v1.20.0
kube-controller-manager:v1.20.0
kube-proxy:v1.20.0
etcd:3.4.13-0
coredns:1.7.0
pause:3.2
flannel:v0.13.1-rc1 # network plugin
dashboard:v2.0.0-rc7 # dashboard component, optional
metrics-scraper:v1.0.4 # dashboard component, optional

Pull the images with the commands below, then retag them to the names kubeadm expects:

docker pull ccr.ccs.tencentyun.com/niewx-k8s/kube-controller-manager:v1.20.0
docker pull ccr.ccs.tencentyun.com/niewx-k8s/kube-apiserver:v1.20.0
docker pull ccr.ccs.tencentyun.com/niewx-k8s/kube-scheduler:v1.20.0
docker pull ccr.ccs.tencentyun.com/niewx-k8s/etcd:3.4.13-0
docker pull ccr.ccs.tencentyun.com/niewx-k8s/coredns:1.7.0
docker pull ccr.ccs.tencentyun.com/niewx-k8s/pause:3.2
docker pull ccr.ccs.tencentyun.com/niewx-k8s/kube-proxy:v1.20.0
docker pull ccr.ccs.tencentyun.com/niewx-k8s/flannel:v0.13.1-rc1
docker pull ccr.ccs.tencentyun.com/niewx-k8s/dashboard:v2.0.0-rc7
docker pull ccr.ccs.tencentyun.com/niewx-k8s/metrics-scraper:v1.0.4

docker tag ccr.ccs.tencentyun.com/niewx-k8s/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
docker tag ccr.ccs.tencentyun.com/niewx-k8s/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
docker tag ccr.ccs.tencentyun.com/niewx-k8s/kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
docker tag ccr.ccs.tencentyun.com/niewx-k8s/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag ccr.ccs.tencentyun.com/niewx-k8s/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag ccr.ccs.tencentyun.com/niewx-k8s/pause:3.2 k8s.gcr.io/pause:3.2
docker tag ccr.ccs.tencentyun.com/niewx-k8s/kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
docker tag ccr.ccs.tencentyun.com/niewx-k8s/flannel:v0.13.1-rc1 quay.io/coreos/flannel:v0.13.1-rc1
docker tag ccr.ccs.tencentyun.com/niewx-k8s/dashboard:v2.0.0-rc7 kubernetesui/dashboard:v2.0.0-rc7
docker tag ccr.ccs.tencentyun.com/niewx-k8s/metrics-scraper:v1.0.4 kubernetesui/metrics-scraper:v1.0.4
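The repetitive pull/tag pairs above can also be driven from a single list. A sketch; `echo` is used so the loop can run without a docker daemon - drop the echos on the server:

```shell
# Each input line: source image name, then the name kubeadm/flannel expects.
SRC=ccr.ccs.tencentyun.com/niewx-k8s
while read -r img target; do
  echo docker pull "$SRC/$img"
  echo docker tag "$SRC/$img" "$target"
done <<'EOF'
kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
pause:3.2 k8s.gcr.io/pause:3.2
flannel:v0.13.1-rc1 quay.io/coreos/flannel:v0.13.1-rc1
EOF
```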

Initializing the cluster

First generate the default init configuration:

kubeadm config print init-defaults > /root/kubeadm-config.yaml

You can also use my configuration below. Note that certSANs includes the node's public IP, so the apiserver can be reached over the public network via kubeconfig. The main edits are: set advertiseAddress to the master node's IP, and add podSubnet: "10.244.0.0/16" under dnsDomain: cluster.local.
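If you start from the generated default config instead, those two edits can be scripted. A sketch, shown on a minimal stand-in file; the same seds apply to the generated /root/kubeadm-config.yaml:

```shell
# Minimal stand-in for the generated config, for illustration only.
CFG=$(mktemp)
printf 'localAPIEndpoint:\n  advertiseAddress: 1.2.3.4\nnetworking:\n  dnsDomain: cluster.local\n' > "$CFG"

# Point advertiseAddress at the master node's IP...
sed -i 's/advertiseAddress: .*/advertiseAddress: 10.0.8.6/' "$CFG"
# ...and add podSubnet (flannel's default range) right under dnsDomain.
sed -i 's|dnsDomain: cluster.local|&\n  podSubnet: "10.244.0.0/16"|' "$CFG"

cat "$CFG"
rm -f "$CFG"
```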

cat <<EOF > /root/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.8.6
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - 10.0.8.6
  - 42.193.xx.xx
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF

Then initialize the cluster:

kubeadm init --config=/root/kubeadm-config.yaml

Copy the kubeconfig into place:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check component status with kubectl get cs:

# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                      ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

Here scheduler and controller-manager show as unhealthy. To fix it, edit the two static-pod manifests kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ and comment out (or delete) the --port=0 flag in each.
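A sketch of that edit, demonstrated on a stand-in directory (on the server, MANIFESTS is /etc/kubernetes/manifests; the kubelet notices the file change and restarts both static pods by itself):

```shell
# Stand-in manifest dir with just the flag lines, for illustration.
MANIFESTS=$(mktemp -d)
printf -- '    - --port=0\n    - --leader-elect=true\n' > "$MANIFESTS/kube-scheduler.yaml"
printf -- '    - --port=0\n    - --leader-elect=true\n' > "$MANIFESTS/kube-controller-manager.yaml"

# Drop the --port=0 line from both static-pod manifests.
for f in kube-scheduler.yaml kube-controller-manager.yaml; do
  sed -i '/- --port=0/d' "$MANIFESTS/$f"
done

! grep -q -- '--port=0' "$MANIFESTS"/*.yaml && echo "flag removed from both files"
rm -rf "$MANIFESTS"
```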

Once the files are saved, the status returns to healthy after a short wait:

# kubectl get cs
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Deploying the flannel network plugin

At this point the node is still NotReady because the flannel network plugin is not installed yet. Download the flannel manifest and apply it:

# wget https://www.tx97106.com/resource/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the cluster state:

[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45h   v1.20.0

[root@master ~]# kubectl get pod -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-qt54q          1/1     Running   0          45h
kube-system   etcd-master                      1/1     Running   0          45h
kube-system   kube-apiserver-master            1/1     Running   0          45h
kube-system   kube-controller-manager-master   1/1     Running   0          45h
kube-system   kube-flannel-ds-7mjg5            1/1     Running   0          45h
kube-system   kube-proxy-j22z7                 1/1     Running   0          45h
kube-system   kube-scheduler-master            1/1     Running   0          45h

Both the node and all pods are healthy, so the cluster itself is up. By default the master carries a taint so that ordinary workloads are not scheduled onto it; since this is our only node, remove the taint so pods can land on master:

kubectl taint node master node-role.kubernetes.io/master-

Now deploy a test workload:

kubectl apply -f - << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debug
  namespace: default
  labels:
    app: debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debug
  template:
    metadata:
      labels:
        app: debug
    spec:
      containers:
      - name: debug
        image: ccr.ccs.tencentyun.com/nwx_registry/troubleshooting:latest
      tolerations:
      - operator: Exists
      dnsPolicy: ClusterFirst
EOF

The pod runs normally, confirming the cluster deployment works end to end:

[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
debug-5b5bbcbb44-v9tl4   1/1     Running   0          95s   10.244.0.8   master   <none>           <none>

Building a single-node test Kubernetes cluster
https://www.niewx.cn/2022/03/24/2022-03-24-Build-a-test-k8s-cluster-with-a-single-node/
Author: VashonNie
Published: March 24, 2022