Building a Kubernetes 1.15.1 Cluster on Linux + Setting Up a Harbor Private Registry (Part 1)

Prerequisites:

Three virtual machines: one master node and two worker nodes
VM specs: 4 GB RAM + 2 CPUs + 100 GB disk + 1 NAT network adapter
ISO image: CentOS-7-x86_64-DVD-1810.iso
The packages and YAML files needed during the setup are linked in the comments section.
After the VMs are installed, configure their networking (installation and network configuration are not shown in detail here).
The subnet is 192.168.66.0/24, and the three hosts use the following IPs:
k8s-master: 192.168.66.10
k8s-node1: 192.168.66.20
k8s-node2: 192.168.66.21

All three hosts must be able to reach the internet. Once the above is done, take an initial snapshot of each VM; it will make things much easier later on.
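As an optional sanity check, you can confirm from each VM that both the NAT gateway and an external site are reachable. The gateway address below assumes the usual VMware NAT gateway at .2 on this subnet; adjust it to your own environment:

[root@localhost ~]# ping -c 3 192.168.66.2
[root@localhost ~]# ping -c 3 mirrors.aliyun.com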

1. Basic configuration (master, node1, node2)

Note: run the following steps on all three nodes.

1.1 Set the hostnames

[root@localhost ~]# hostnamectl set-hostname k8s-master
[root@localhost ~]# hostnamectl set-hostname k8s-node1
[root@localhost ~]# hostnamectl set-hostname k8s-node2
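Each of the three commands above is run on its own machine. A quick way to confirm the new name took effect (log out and back in to refresh the shell prompt):

[root@k8s-master ~]# hostnamectl status | grep hostname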

1.2 Edit the hosts file

[root@k8s-master ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.66.10   k8s-master
192.168.66.20   k8s-node1
192.168.66.21   k8s-node2
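The same three entries are needed on node1 and node2 as well; you can either edit /etc/hosts on each node or simply copy the file from the master, then verify that the names resolve:

[root@k8s-master ~]# scp /etc/hosts root@192.168.66.20:/etc/hosts
[root@k8s-master ~]# scp /etc/hosts root@192.168.66.21:/etc/hosts
[root@k8s-master ~]# ping -c 2 k8s-node1
[root@k8s-master ~]# ping -c 2 k8s-node2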


1.3 Install required dependency packages

[root@k8s-master ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget  vim net-tools


1.4 Disable firewalld and flush the iptables rules

[root@k8s-master ~]# systemctl  stop firewalld  &&  systemctl  disable firewalld 
[root@k8s-master ~]# yum -y install iptables-services  &&  systemctl  start iptables  &&  systemctl  enable iptables &&  iptables -F  &&  service iptables save
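Optionally, confirm that firewalld is stopped and that the iptables chains are now empty:

[root@k8s-master ~]# systemctl is-active firewalld
[root@k8s-master ~]# iptables -L -n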


1.5 Disable swap and SELinux

[root@k8s-master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
[root@k8s-master ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
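You can verify that both changes took effect (swap should show 0, and SELinux reports Permissive until the next reboot):

[root@k8s-master ~]# free -m | grep -i swap
[root@k8s-master ~]# getenforce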


1.6 Tune kernel parameters for Kubernetes

[root@k8s-master ~]# cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 
vm.overcommit_memory=1 
vm.panic_on_oom=0  
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

[root@k8s-master ~]# cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
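Two notes on this file, as general pitfalls rather than anything specific to this setup: the net.bridge.* keys only exist once the br_netfilter module is loaded (it is loaded later in section 1.11), so if sysctl -p complains about them, load the module first; and net.ipv4.tcp_tw_recycle was removed from the kernel in 4.12+, so after the kernel upgrade in section 1.10 that line can simply be deleted from the file.

[root@k8s-master ~]# modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf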


1.7 Set the system time zone

The clocks on all three nodes must be consistent.

Set the system time zone to Asia/Shanghai:

[root@k8s-master ~]# timedatectl set-timezone Asia/Shanghai

Write the current UTC time to the hardware clock (keep the RTC in UTC):

[root@k8s-master ~]# timedatectl set-local-rtc 0

Install time synchronization

The VM time should match the time on your physical host machine.

[root@k8s-master ~]# yum install -y chrony
[root@k8s-master ~]# systemctl restart chronyd
[root@k8s-master ~]# systemctl enable chronyd
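To confirm the nodes are actually syncing, you can check the chrony sources and the timedatectl summary:

[root@k8s-master ~]# chronyc sources -v
[root@k8s-master ~]# timedatectl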


1.8 Stop unneeded services to avoid wasting resources

[root@k8s-master ~]# systemctl stop postfix && systemctl disable postfix

1.9 Configure rsyslogd and systemd journald

[root@k8s-master ~]# mkdir /var/log/journal 
[root@k8s-master ~]# mkdir /etc/systemd/journald.conf.d
[root@k8s-master ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk space used: 10G
SystemMaxUse=10G
# Maximum size of a single log file: 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF

[root@k8s-master ~]# systemctl restart systemd-journald
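A quick way to confirm that journald is now persisting logs to /var/log/journal:

[root@k8s-master ~]# journalctl --disk-usage
[root@k8s-master ~]# ls /var/log/journal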


1.10 Upgrade the system kernel

The stock 3.10.x kernel in CentOS 7.x has known bugs that make Docker and Kubernetes unstable. After the kernel installation finishes, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if it does not, install the kernel again.

[root@k8s-master ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm  
[root@k8s-master ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt   

Set the machine to boot from the new kernel by default.

The kernel in the elrepo repository is updated over time, so the version number you get may differ from the one shown here; just use whichever version string appears when you grep grub.cfg on your own system.

[root@k8s-master ~]# grub2-editenv list
[root@k8s-master ~]# cat /boot/grub2/grub.cfg | grep menuentry
[root@k8s-master ~]# grub2-set-default 'CentOS Linux (5.4.114-1.el7.elrepo.x86_64) 7 (Core)'
[root@k8s-master ~]# grub2-editenv list
[root@k8s-master ~]# reboot

Check whether the kernel upgrade succeeded:

[root@k8s-master ~]# uname -r
5.4.114-1.el7.elrepo.x86_64


1.11 Prerequisites for running kube-proxy in IPVS mode

Note: on kernels 4.19 and later (including the 5.4 kernel installed above), nf_conntrack_ipv4 has been merged into nf_conntrack, so that is the module to load here.

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# source /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          147456  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
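Once kube-proxy is running in IPVS mode (after the cluster is initialized in section 2), you can optionally confirm that virtual servers are being created with ipvsadm:

[root@k8s-master ~]# ipvsadm -Ln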


1.12 Install Docker

[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum install -y docker-ce

Create the /etc/docker directory and configure the daemon:

[root@k8s-master ~]# mkdir /etc/docker
[root@k8s-master ~]# cat > /etc/docker/daemon.json <<EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
       "max-size": "100m"
    }
}
EOF

[root@k8s-master ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@k8s-master ~]# systemctl daemon-reload && systemctl restart docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
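Optionally, confirm that Docker picked up the systemd cgroup driver from daemon.json (kubelet expects the cgroup drivers to match):

[root@k8s-master ~]# docker info | grep -i 'cgroup driver'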


1.13 Add the Kubernetes yum repository

Create the repo file for the yum source:

[root@k8s-master ~]# cat  <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install Kubernetes 1.17.4:

[root@k8s-master ~]# yum install -y kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0
[root@k8s-master ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
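At this point kubelet will keep restarting until kubeadm init (or kubeadm join) writes its configuration; that is expected. You can still check the installed versions:

[root@k8s-master ~]# kubeadm version -o short
[root@k8s-master ~]# kubelet --version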

1.14 Prepare the cluster images

Before installing the Kubernetes cluster, the images it needs must be prepared in advance. The required images can be listed with the command below:

[root@k8s-master ~]# kubeadm config images list

Download the images. They live in the upstream Kubernetes registry (k8s.gcr.io), which is often unreachable due to network restrictions, so the workaround below pulls each image from an Alibaba Cloud mirror, re-tags it under k8s.gcr.io, and removes the mirror tag. Write the commands into a pull script:

[root@k8s-master ~]# vim image.sh
#!/bin/bash
images=(
    kube-apiserver:v1.17.4
    kube-controller-manager:v1.17.4
    kube-scheduler:v1.17.4
    kube-proxy:v1.17.4
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

Make the script executable and run it:

[root@k8s-master ~]# chmod +x image.sh
[root@k8s-master ~]# ./image.sh

Check that the Kubernetes images were pulled successfully:

[root@k8s-master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.17.4   6dec7cfde1e5   19 months ago   116MB
k8s.gcr.io/kube-controller-manager   v1.17.4   7f997fcf3e94   19 months ago   161MB
k8s.gcr.io/kube-apiserver            v1.17.4   2e1ba57fe95a   19 months ago   171MB
k8s.gcr.io/kube-scheduler            v1.17.4   5db16c1c7aff   19 months ago   94.4MB
k8s.gcr.io/coredns                   1.6.5     70f311871ae1   23 months ago   41.6MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   24 months ago   288MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   3 years ago     742kB
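These images were pulled on the master, but the worker nodes also need at least the kube-proxy and pause images locally. The simplest option is to run the same image.sh on node1 and node2; alternatively, the images can be exported and copied over, roughly like this:

[root@k8s-master ~]# docker save -o node-images.tar k8s.gcr.io/kube-proxy:v1.17.4 k8s.gcr.io/pause:3.1
[root@k8s-master ~]# scp node-images.tar root@192.168.66.20:/root/
[root@k8s-node1 ~]# docker load -i node-images.tar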



2. Kubeadm master/worker configuration (cluster initialization)

2.1 Initialize the master node (master)

Run the following only on the master node.

[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm-config.yaml

2.1.1 Edit the YAML file

Only the fields to change are listed below; everything not shown is left at its default.

[root@k8s-master ~]# vim kubeadm-config.yaml 
  advertiseAddress: 192.168.66.10     # master node IP
kubernetesVersion: v1.15.1            # cluster version 1.15.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

The podSubnet of 10.244.0.0/16 is added to match flannel's default network, and the appended KubeProxyConfiguration block switches kube-proxy to IPVS mode.


2.1.2 Initialize the master
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
....
.....
........
.....
...
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ce8565bd22cc56904e8b387a2c65568bf8de9118ad05cbca32271dd52c43989d 

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# 


2.2 Join the worker nodes to the master (node1, node2)

Run the following only on the worker nodes, node1 and node2.

[root@k8s-node1 ~]# kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:ce8565bd22cc56904e8b387a2c65568bf8de9118ad05cbca32271dd52c43989d 
[root@k8s-node2 ~]# kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:ce8565bd22cc56904e8b387a2c65568bf8de9118ad05cbca32271dd52c43989d 
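The token recorded in kubeadm-init.log is only valid for 24 hours by default. If you join a node later than that, generate a fresh join command on the master first:

[root@k8s-master ~]# kubeadm token create --print-join-command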

After the nodes join successfully, check them with kubectl on the master:

[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   9m53s   v1.15.1
k8s-node1    NotReady   <none>   77s     v1.15.1
k8s-node2    NotReady   <none>   51s     v1.15.1

You can see that all three nodes are NotReady; they will become Ready once the network add-on is deployed.

2.3 Deploy the flannel network (master)

Run the following only on the master node.

[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

If the download hangs with no response, you can use the manifest file provided here:

Change the image source in two places:

 image: lizhenliang/flannel:v0.12.0-amd64
 image: lizhenliang/flannel:v0.12.0-amd64

After replacing these images, apply the manifest again:

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
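You can watch the flannel DaemonSet pods come up before checking the node status (with this version of the manifest they run in the kube-system namespace):

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide | grep flannel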

After waiting a moment, check the nodes again; all three should now be Ready.

[root@k8s-master ~]# kubectl get node 
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   15m     v1.15.1
k8s-node1    Ready    <none>   6m25s   v1.15.1
k8s-node2    Ready    <none>   5m59s   v1.15.1
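As an optional smoke test of the new cluster, you can run a throwaway nginx Deployment, expose it as a NodePort service, and then clean it up afterwards:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pods,svc
[root@k8s-master ~]# kubectl delete svc,deployment nginx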


This completes the Kubernetes cluster setup. Setting up the Harbor private registry is covered in the next part.
