1. No networks found in /etc/cni/net.d

Jan 30 11:55:52 master kubelet[186184]: E0130 11:55:52.804619  186184 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "master": Get https://10.115.0.230:6443/api/v1/nodes/master?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 30 11:55:55 master kubelet[186184]: E0130 11:55:55.915679  186184 dns.go:132] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 192.168.1.21 192.168.48.1 8.8.8.8
Jan 30 11:55:56 master kubelet[186184]: W0130 11:55:56.858221  186184 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 30 11:55:56 master kubelet[186184]: E0130 11:55:56.858380  186184 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message: docker: network plugin is not ready: cni config uninitialized

Solution:

$ kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.27.226
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Wait a moment and check again; the master should now report a normal status:

$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   8m34s   v1.13.2

2. Upgrading Docker

$ yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install docker-ce

$ systemctl start docker
$ systemctl enable docker
$ systemctl status docker
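A plain `yum install docker-ce` pulls the newest release, which a given Kubernetes version may not have been validated against. If a specific version is needed, it can be listed and pinned; the version string below is only an example, substitute one from the list:

```shell
# List the available docker-ce versions, newest first.
yum list docker-ce --showduplicates | sort -r

# Install a specific version; replace the string with one from the list above.
yum install -y docker-ce-18.06.3.ce
```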

3. Adding a proxy to Docker

# mkdir -p /etc/systemd/system/docker.service.d
# cat <<'EOF' > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.1:8118/"
Environment="HTTPS_PROXY=http://192.168.1.1:8118/"
EOF
# systemctl daemon-reload
# systemctl restart docker
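Traffic to local registries or cluster-internal addresses usually should not go through the proxy. A NO_PROXY entry can be added to the same [Service] section (the addresses below are examples and should match your environment), and the result verified after restarting:

```shell
# Example extra line for http-proxy.conf (inside the [Service] section):
#   Environment="NO_PROXY=localhost,127.0.0.1,10.244.0.0/16"

# After daemon-reload and restart, confirm the daemon sees the variables:
systemctl show --property=Environment docker
```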

4. Allowing the master to schedule pods in a single-node cluster

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Output like the following indicates success:

node/master untainted
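If worker nodes are added later and the master should stop running ordinary pods again, the default taint can be restored (shown here for a node named master, as in the example above):

```shell
# Re-apply the NoSchedule taint on the master node.
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
```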

5. Kubernetes service external IP pending

It looks like you are using a custom Kubernetes cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integration (unlike AWS or Google Cloud). With this default setup, you can only use NodePort (more info here: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) or an Ingress Controller. With an Ingress Controller you can set up a domain name which maps to your pod (more information here: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers).
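As a sketch of the NodePort route: a hypothetical deployment named my-nginx could be exposed like this, after which the cluster assigns a port in the 30000-32767 range on every node:

```shell
# Expose a deployment (my-nginx is a placeholder name) on a node port.
kubectl expose deployment my-nginx --type=NodePort --port=80

# The PORT(S) column shows the allocated node port, e.g. 80:3xxxx/TCP.
kubectl get svc my-nginx
```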

6. Viewing flannel logs

Step 1 above may not always go so smoothly; when it doesn't, check the flannel logs. This takes two steps:

  1. Get the pod ID: $ kubectl get po --namespace kube-system -l app=flannel
  2. View that pod's logs: $ kubectl logs --namespace kube-system <POD_ID> -c kube-flannel
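Assuming a reasonably recent kubectl, the two steps can also be combined with a label selector, which skips looking up the pod ID and prints logs from each matching flannel pod:

```shell
# Fetch logs from all pods matching the flannel label in one command.
kubectl logs --namespace kube-system -l app=flannel -c kube-flannel
```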

7. Setting node roles

[root@lq-master liqiang.io]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
lq-master    Ready    master   28m     v1.14.1
lq-slave01   Ready    <none>   5m29s   v1.14.1
lq-slave02   Ready    <none>   5m15s   v1.14.1

When listing the nodes, the worker nodes show no role. This appears to have no functional impact, but I still wanted to set it.

[root@lq-master liqiang.io]# kubectl label node lq-slave01 node-role.kubernetes.io/worker=worker
node/lq-slave01 labeled
[root@lq-master liqiang.io]# kubectl label node lq-slave02 node-role.kubernetes.io/worker=worker
node/lq-slave02 labeled
[root@lq-master liqiang.io]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
lq-master    Ready    master   31m     v1.14.1
lq-slave01   Ready    worker   7m57s   v1.14.1
lq-slave02   Ready    worker   7m43s   v1.14.1

This is really just attaching a special label to the node: kubectl populates the ROLES column from labels of the form node-role.kubernetes.io/<role>.
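If a role ever needs to be removed, deleting the label (by appending `-` to the key) reverts the ROLES column to <none>; the node name here matches the example above:

```shell
# Remove the role label from a worker node.
kubectl label node lq-slave01 node-role.kubernetes.io/worker-
```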