1. Problem 1: No networks found in /etc/cni/net.d

  Jan 30 11:55:52 master kubelet[186184]: E0130 11:55:52.804619 186184 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "master": Get https://10.115.0.230:6443/api/v1/nodes/master?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Jan 30 11:55:55 master kubelet[186184]: E0130 11:55:55.915679 186184 dns.go:132] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.1.21 192.168.48.1 8.8.8.8
  Jan 30 11:55:56 master kubelet[186184]: W0130 11:55:56.858221 186184 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
  Jan 30 11:55:56 master kubelet[186184]: E0130 11:55:56.858380 186184 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message: docker: network plugin is not ready: cni config uninitialized

Solution:

  $ kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.27.226
  $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Check again after a short wait and the Master should be Ready:

  $ kubectl get nodes
  NAME     STATUS   ROLES    AGE     VERSION
  master   Ready    master   8m34s   v1.13.2
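If the node stays NotReady after applying the flannel manifest, a quick sanity check is whether a CNI config file was actually written to the directory the kubelet complained about (the exact filename depends on the flannel version):

```
# confirm the flannel DaemonSet dropped a CNI config file
# (name varies by version, e.g. 10-flannel.conf or 10-flannel.conflist)
$ ls /etc/cni/net.d
```

If the directory is still empty, the flannel pods themselves are likely failing; see the log-checking steps in section 6 below.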

2. Problem 2: Upgrading Docker

  $ yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
  $ yum install -y yum-utils device-mapper-persistent-data lvm2
  $ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  $ yum install -y docker-ce
  $ systemctl start docker
  $ systemctl enable docker
  $ systemctl status docker
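Besides `systemctl status`, you can confirm the daemon is actually running the newly installed version:

```
# print just the server (daemon) version string
$ docker version --format '{{.Server.Version}}'
```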

3. Problem 3: Adding a proxy for Docker

  # mkdir -p /etc/systemd/system/docker.service.d
  # cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
  [Service]
  Environment="HTTP_PROXY=http://192.168.1.1:8118/"
  Environment="HTTPS_PROXY=http://192.168.1.1:8118/"
  EOF
  # systemctl daemon-reload
  # systemctl restart docker
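To verify the drop-in was actually picked up after the reload, you can inspect the environment systemd passes to the service (if you pull from an internal registry you may also want a `NO_PROXY` entry in the same file so local traffic bypasses the proxy):

```
# show the environment variables systemd injects into the docker service
$ systemctl show --property=Environment docker
```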

4. Problem 4: Let the master schedule pods on a single-node cluster

  $ kubectl taint nodes --all node-role.kubernetes.io/master-

Output like the following means it succeeded:

  node/master untainted
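If you later add worker nodes and want the master to stop accepting ordinary pods again, the taint can be put back; the key and effect below mirror the default that kubeadm applies:

```
# re-apply the master taint (empty value, NoSchedule effect)
$ kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
```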

5. kubernetes service external ip pending

It looks like you are using a custom Kubernetes cluster (built with minikube, kubeadm or the like). In that case there is no LoadBalancer integration (unlike on AWS or Google Cloud), so the external IP of a LoadBalancer service will stay pending forever. With this default setup you can only use a NodePort service (more info here: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) or an Ingress controller. With an Ingress controller you can set up a domain name that maps to your pods (more information here: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers).
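As a sketch of the NodePort alternative (the deployment name `my-app` and the port are hypothetical placeholders):

```
# expose an existing deployment on a port of every node
$ kubectl expose deployment my-app --type=NodePort --port=80
# EXTERNAL-IP stays <none>; the service is reachable at <NodeIP>:<NodePort>
$ kubectl get svc my-app
```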

6. Checking flannel's logs

Problem 1 does not always resolve that smoothly; when it doesn't, you need to look at flannel's logs. This takes two steps:

  1. Find the pod name: $ kubectl get po --namespace kube-system -l app=flannel
  2. Read its logs:     $ kubectl logs --namespace kube-system <POD_ID> -c kube-flannel
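The two steps can also be collapsed into one, since `kubectl logs` accepts the same label selector directly:

```
# fetch logs from all pods matching the flannel label in one command
$ kubectl logs --namespace kube-system -l app=flannel -c kube-flannel
```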

7. Setting a Node's roles

  # kubectl get nodes
  NAME         STATUS   ROLES    AGE     VERSION
  lq-master    Ready    master   28m     v1.14.1
  lq-slave01   Ready    <none>   5m29s   v1.14.1
  lq-slave02   Ready    <none>   5m15s   v1.14.1

When listing the nodes I noticed the worker nodes had no Role. This doesn't seem to cause any problems, but I still wanted to set it.

  # kubectl label node lq-slave01 node-role.kubernetes.io/worker=worker
  node/lq-slave01 labeled
  # kubectl label node lq-slave02 node-role.kubernetes.io/worker=worker
  node/lq-slave02 labeled
  # kubectl get nodes
  NAME         STATUS   ROLES    AGE     VERSION
  lq-master    Ready    master   31m     v1.14.1
  lq-slave01   Ready    worker   7m57s   v1.14.1
  lq-slave02   Ready    worker   7m43s   v1.14.1

In the end this just attaches a special label to the node.
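Since the role is just a label, it can be removed the same way, using the trailing-dash convention (the same syntax used to remove the taint in problem 4):

```
# remove the worker role label from a node
$ kubectl label node lq-slave01 node-role.kubernetes.io/worker-
```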