Install Docker on all three machines.
Because of compatibility with this Kubernetes version, the latest Docker release (1.13) is not very stable here, so we use version 1.12 instead.
The Docker 1.12 packages can be found at http://vault.centos.org/7.4.1708/extras/x86_64/Packages/
1) Download the packages
wget http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
wget http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
wget http://vault.centos.org/7.4.1708/extras/x86_64/Packages/docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
2) Install the packages
k8s-master:
[root@k8s-master soft]# yum -y localinstall docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-master soft]# yum -y localinstall docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-master soft]# yum -y localinstall docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
k8s-node1:
[root@k8s-node1 soft]# yum -y localinstall docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-node1 soft]# yum -y localinstall docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-node1 soft]# yum -y localinstall docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
k8s-node2:
[root@k8s-node2 soft]# yum -y localinstall docker-common-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-node2 soft]# yum -y localinstall docker-client-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
[root@k8s-node2 soft]# yum -y localinstall docker-1.12.6-71.git3e8e77d.el7.centos.x86_64.rpm
etcd only needs to be installed on k8s-master.
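The three download/install pairs follow one naming pattern, so they can be scripted. A minimal sketch, with the base URL and version string copied from the wget commands above; the echo makes it a dry run, and the real download/install lines are left commented:

```shell
# Build the URL for each of the three Docker 1.12 RPMs.
# BASE and VER come from the wget commands above.
BASE=http://vault.centos.org/7.4.1708/extras/x86_64/Packages
VER=1.12.6-71.git3e8e77d.el7.centos.x86_64
for pkg in docker-common docker-client docker; do
  echo "$BASE/${pkg}-${VER}.rpm"
  # wget "$BASE/${pkg}-${VER}.rpm"             # uncomment to actually download
  # yum -y localinstall "${pkg}-${VER}.rpm"    # install in the same order as above
done
```

The order matters: docker-common and docker-client are dependencies of the docker package, which is why they are installed first.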
1. Install the etcd database
[root@k8s-master ~]# yum -y install etcd
2. Configure etcd
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.81.210:2379"
3. Start etcd
[root@k8s-master ~]# systemctl restart etcd
[root@k8s-master ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
4. Set a test key
[root@k8s-master ~]# etcdctl set testdir/testkey01 jiangxl
jiangxl
[root@k8s-master ~]# etcdctl get testdir/testkey01
jiangxl
[root@k8s-master ~]# etcdctl ls /testdir
5. Check etcd health
[root@k8s-master ~]# etcdctl -C http://192.168.81.210:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.81.210:2379
cluster is healthy
The master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler.
1. Install the k8s master services
[root@k8s-master ~]# yum install kubernetes-master -y
2. Configure the apiserver file
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    // address the API server listens on
KUBE_API_PORT="--port=8080"    // port the API server listens on
KUBELET_PORT="--kubelet-port=10250"    // port used to talk to the kubelet on each node
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.81.210:2379"    // etcd service endpoint
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
3. Configure the config file (this file must be edited on both master and nodes)
[root@k8s-master ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.81.210:8080"    // IP and port of the master
4. Start kube-apiserver, kube-controller-manager, and kube-scheduler
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
5. Verify the k8s master
[root@k8s-master ~]# kubectl get nodes
No resources found.
This output means the master installation is complete (no nodes have registered yet).
The node side runs two main services: kubelet and kube-proxy.
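The three start/enable pairs in step 4 are identical in shape, so they can be collapsed into one loop. A sketch that only prints the commands it would run; on a real master, drop the echo:

```shell
# Start and enable the three master services in dependency order.
# echo is kept so this is a dry run; remove it to execute for real.
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  echo systemctl start "$svc.service"
  echo systemctl enable "$svc.service"
done
```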
Every node needs the flannel network, because containers must communicate across hosts. We previously covered Docker's macvlan and overlay networks; here we use flannel instead.
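flanneld hands each host a subnet carved out of a network range that it reads from etcd, so that range has to be written before flanneld starts. A sketch of that step: the key /atomic.io/network/config is the default prefix of the CentOS 7 flannel package, and the 172.16.0.0/16 range is an assumption chosen to match the 16-bit mask mentioned below; the echo is a dry run and the real etcdctl call is commented:

```shell
# Write flannel's overlay network range into etcd (dry run).
FLANNEL_KEY=/atomic.io/network/config
FLANNEL_NET='{ "Network": "172.16.0.0/16" }'
echo "$FLANNEL_KEY -> $FLANNEL_NET"
# etcdctl -C http://192.168.81.210:2379 set "$FLANNEL_KEY" "$FLANNEL_NET"
```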
The configuration is identical on every node.
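On CentOS 7 the flannel package is configured through /etc/sysconfig/flanneld; a sketch of the two lines that usually need editing (the etcd endpoint matches the master configured above; the key prefix is the package default and must agree with the key written into etcd):

```
# /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.81.210:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
```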
Restart the services:
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# systemctl restart kubelet
[root@k8s-node1 ~]# systemctl restart kube-proxy
Once flannel is installed, every machine gains an extra flannel0 network interface.
The third octet of the flannel0 address differs on each machine, because the netmask we use is 16 bits wide.
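Each node's flanneld records its allocation in /run/flannel/subnet.env. The values below are hypothetical examples (on a real node, `source /run/flannel/subnet.env` instead), but they show why only the third octet varies: within the /16 network, flannel hands each host its own smaller subnet:

```shell
# Hypothetical sample of /run/flannel/subnet.env values.
FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.55.1/24
# Within the /16, each host gets its own per-host subnet, so the
# third octet is the part that differs from machine to machine.
echo "$FLANNEL_SUBNET" | cut -d. -f3   # prints the host-specific octet: 55
```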
1) Pull the busybox image on all nodes
[root@k8s-master ~]# docker pull busybox
Using default tag: latest
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
91f30d776fb2: Pull complete
Digest: sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
2) Run a container and test the network
[root@k8s-master ~]# docker run -it docker.io/busybox /bin/sh
[root@k8s-node1 ~]# docker run -it docker.io/busybox /bin/sh
[root@k8s-node2 ~]# docker run -it docker.io/busybox /bin/sh
Without an image registry, every new container means pulling an image from the official site, which is tedious, so we set up a registry of our own and keep all our images in it.
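Inside the busybox shells started above, the cross-host test is to note each container's flannel address and ping a container on another node; the addresses below are hypothetical examples:

```
/ # ip addr show eth0      # note this container's flannel address, e.g. 172.16.55.2
/ # ping -c 3 172.16.89.2  # a busybox container on another node; replies confirm cross-host networking
```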
Install Harbor
Docker and docker-compose must be installed before installing Harbor.
[root@docker03 ~]# tar xf harbor-offline-installer-v1.5.1.tgz
[root@docker03 ~]# cd harbor/
Edit the Harbor configuration file; only the IP and the password need changing:
[root@docker03 harbor]# vim harbor.cfg
hostname = 192.168.81.230
harbor_admin_password = admin
[root@docker03 harbor]# ./install.sh
Browse to http://192.168.81.230/harbor/projects
Edit the docker configuration file:
[root@k8s-master ~]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.81.210'
[root@k8s-master ~]# systemctl restart docker
Upload an image:
[root@k8s-master ~]# docker tag docker.io/nginx:1.15 192.168.81.240/nginx:1.15
[root@k8s-master ~]# docker login 192.168.81.240
Username: admin
Password:
Login Succeeded
[root@k8s-master ~]# docker tag docker.io/nginx:1.15 192.168.81.240/k8s/nginx:1.15
[root@k8s-master ~]# docker push 192.168.81.240/k8s/nginx:1.15
The push refers to a repository [192.168.81.240/k8s/nginx]
332fa54c5886: Pushed
6ba094226eea: Pushed
6270adb5794c: Pushed
1.15: digest: sha256:e770165fef9e36b990882a4083d8ccf5e29e469a8609bb6b2e3b47d9510e2c8d size: 948
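To close the loop, any node whose docker daemon trusts the registry (via --insecure-registry) can pull the image back by its full reference. A dry-run sketch; the real pull is commented:

```shell
# Assemble the full image reference used for push/pull against Harbor.
REGISTRY=192.168.81.240
IMAGE=k8s/nginx:1.15
echo "$REGISTRY/$IMAGE"            # the reference docker pull would use
# docker pull "$REGISTRY/$IMAGE"   # run on a node configured to trust $REGISTRY
```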