Supported network modes:
1. Overlay Network: a virtual network layered on top of the underlying (physical) network; the hosts in this network are connected to one another through virtual links.
2. VXLAN: encapsulates the original packet in UDP, using the underlying network's IP/MAC addresses as the outer header, then transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers the data to the target address.
3. Flannel: one kind of overlay network; it likewise wraps the original packet inside another network packet for routing, forwarding, and communication. It currently supports UDP, VXLAN, AWS VPC, and GCE routing as data-forwarding backends.
Other mainstream solutions for multi-host container networking: tunneling schemes (Weave, Open vSwitch) and routing schemes (Calico), among others.
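Once flannel is up (deployed below), the VXLAN mechanics described above can be observed directly on a node. A minimal sketch, assuming flannel's default VXLAN UDP port of 8472 and the interface names used in this deployment:
# Show the VXLAN details (VNI, local VTEP address, UDP port) of flannel's interface
ip -d link show flannel.1
# Watch the encapsulated traffic leave the physical NIC while pinging across nodes
tcpdump -nn -i ens33 udp port 8472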
Package preparation
flannel download URL:
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
Deploy flannel:
1. Software preparation: flannel is installed only on the worker nodes, not on the master (the master just downloads the tarball and distributes the binaries).
[root@k8s-master ~]# wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
[root@k8s-master ~]# tar zxvf flannel-v0.9.1-linux-amd64.tar.gz
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh root@192.168.1.102:/opt/kubernetes/bin/
flanneld 100% 33MB 32.9MB/s 00:01
mk-docker-opts.sh 100% 2139 3.3MB/s 00:00
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh root@192.168.1.103:/opt/kubernetes/bin/
flanneld 100% 33MB 32.9MB/s 00:01
mk-docker-opts.sh 100% 2139 1.6MB/s 00:00
#[root@k8s-master ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/    # not executed: flannel is not installed on the master
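A quick check that the binaries actually landed on both nodes (same paths as above):
ssh root@192.168.1.102 'ls -l /opt/kubernetes/bin/flanneld /opt/kubernetes/bin/mk-docker-opts.sh'
ssh root@192.168.1.103 'ls -l /opt/kubernetes/bin/flanneld /opt/kubernetes/bin/mk-docker-opts.sh'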
Install and deploy flannel (performed on node2)
1. Create the flannel configuration file
[root@k8s-node-2 ~]# cat <<EOF > /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379 \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF
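Before starting flanneld, it is worth confirming that these endpoints and certificates actually work; etcdctl (v2 API, as used throughout this deployment) has a built-in health check:
# Verify TLS connectivity to all three etcd members with the same certificates
/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" cluster-health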
2. Configure the network information in etcd
# Create the flannel network key in etcd; the ca.pem, server.pem, and server-key.pem certificates are required
[root@k8s-node-2 ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
# Retrieve the key's value
[root@k8s-node-2 ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" get /coreos.com/network/config
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
3. Create the flannel systemd unit file
[root@k8s-node-2 ~]# cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
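The mk-docker-opts.sh helper in ExecStartPost only translates the FLANNEL_* variables that flanneld writes into Docker daemon flags. A rough shell sketch of the equivalent logic, for understanding only (the real script handles more options):
# flanneld (run with --ip-masq) first writes FLANNEL_SUBNET, FLANNEL_MTU and
# FLANNEL_IPMASQ into /run/flannel/subnet.env; the helper then emits Docker
# flags, inverting the masquerade setting (flannel masquerades, so Docker must not)
source /run/flannel/subnet.env
echo "DOCKER_NETWORK_OPTIONS=\" --bip=${FLANNEL_SUBNET} --ip-masq=false --mtu=${FLANNEL_MTU}\""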
4. Inspect the environment file generated after flannel starts (the file exists only once flanneld is running; it is started in step 6)
[root@k8s-node-2 kubernetes]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.55.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.55.1/24 --ip-masq=false --mtu=1450"
5. Modify the Docker systemd unit file so that Docker reads flannel's environment file
[root@k8s-node-2 ~]# cat <<EOF > /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
6. Restart flannel and Docker
[root@k8s-node-2 ~]# systemctl daemon-reload
[root@k8s-node-2 ~]# systemctl start flanneld
[root@k8s-node-2 ~]# systemctl restart docker
[root@k8s-node-2 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
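If either service fails to come up, the standard systemd checks apply before continuing; the running dockerd should also show the flags it picked up from the environment file:
systemctl status flanneld docker
journalctl -u flanneld --no-pager | tail -n 20
ps -ef | grep [d]ockerd    # should include --bip=... --mtu=1450 from DOCKER_NETWORK_OPTIONS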
7. Check the network: a new interface should have appeared, and flannel and Docker should be on the same subnet
[root@k8s-node-2 kubernetes]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.17.55.1 netmask 255.255.255.0 broadcast 172.17.55.255
        ether 02:42:84:0f:5d:f5 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.103 netmask 255.255.255.0 broadcast 192.168.1.255
        inet6 fe80::7972:d223:8d4:5e84 prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:80:ae:64 txqueuelen 1000 (Ethernet)
        RX packets 681439 bytes 252771300 (241.0 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 556663 bytes 55376987 (52.8 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 172.17.55.0 netmask 255.255.255.255 broadcast 0.0.0.0
        inet6 fe80::60a2:c6ff:fe10:fa prefixlen 64 scopeid 0x20<link>
        ether 62:a2:c6:10:00:fa txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
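Instead of scanning the full ifconfig output, the two addresses can also be compared directly (plain iproute2 commands):
# docker0's --bip address and flannel.1's address must fall in the same /24
ip -4 addr show docker0 | grep inet
ip -4 addr show flannel.1 | grep inet
# flannel.1's MTU of 1450 leaves 50 bytes of the 1500-byte physical MTU for the
# outer Ethernet/IP/UDP/VXLAN headers added during encapsulation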
Deploy flannel on node1
1. Copy the configuration files to node1
[root@k8s-node-2 kubernetes]# scp /opt/kubernetes/cfg/flanneld root@192.168.1.102:/opt/kubernetes/cfg/
flanneld 100% 251 4.2KB/s 00:00
[root@k8s-node-2 kubernetes]# scp /usr/lib/systemd/system/flanneld.service root@192.168.1.102:/usr/lib/systemd/system/flanneld.service
flanneld.service 100% 417 299.8KB/s 00:00
[root@k8s-node-2 kubernetes]# scp /usr/lib/systemd/system/docker.service root@192.168.1.102:/usr/lib/systemd/system/docker.service
docker.service 100% 527 840.7KB/s 00:00
2. Restart flannel and Docker (on node1)
[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl start flanneld
[root@k8s-node-1 ~]# systemctl restart docker
[root@k8s-node-1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
3. Check the network: a new interface should have appeared, and flannel and Docker should be on the same subnet
[root@k8s-node-1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.17.86.1 netmask 255.255.255.0 broadcast 172.17.86.255
        ether 02:42:4f:54:59:ca txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.102 netmask 255.255.255.0 broadcast 192.168.1.255
        inet6 fe80::ccf5:8e87:bd29:b01f prefixlen 64 scopeid 0x20<link>
        ether 00:0c:29:82:17:5e txqueuelen 1000 (Ethernet)
        RX packets 641238 bytes 270432271 (257.9 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 546530 bytes 172633725 (164.6 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
        inet 172.17.86.0 netmask 255.255.255.255 broadcast 0.0.0.0
        inet6 fe80::44e2:edff:fe09:c3c2 prefixlen 64 scopeid 0x20<link>
        ether 46:e2:ed:09:c3:c2 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
Test the flannel network
1. Ping tests
# From node1, ping node2's docker0 gateway address
[root@k8s-node-1 ~]# ping 172.17.55.1
PING 172.17.55.1 (172.17.55.1) 56(84) bytes of data.
64 bytes from 172.17.55.1: icmp_seq=1 ttl=64 time=0.364 ms
64 bytes from 172.17.55.1: icmp_seq=2 ttl=64 time=0.191 ms
64 bytes from 172.17.55.1: icmp_seq=3 ttl=64 time=0.218 ms
# From node1, ping node2's flannel.1 address
[root@k8s-node-1 ~]# ping 172.17.55.0
PING 172.17.55.0 (172.17.55.0) 56(84) bytes of data.
64 bytes from 172.17.55.0: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 172.17.55.0: icmp_seq=2 ttl=64 time=0.589 ms
64 bytes from 172.17.55.0: icmp_seq=3 ttl=64 time=0.110 ms
# From node2, ping node1's docker0 gateway address
[root@k8s-node-2 ~]# ping 172.17.86.1
PING 172.17.86.1 (172.17.86.1) 56(84) bytes of data.
64 bytes from 172.17.86.1: icmp_seq=1 ttl=64 time=0.287 ms
64 bytes from 172.17.86.1: icmp_seq=2 ttl=64 time=0.268 ms
64 bytes from 172.17.86.1: icmp_seq=3 ttl=64 time=0.297 ms
# From node2, ping node1's flannel.1 address
[root@k8s-node-2 ~]# ping 172.17.86.0
PING 172.17.86.0 (172.17.86.0) 56(84) bytes of data.
64 bytes from 172.17.86.0: icmp_seq=1 ttl=64 time=0.258 ms
64 bytes from 172.17.86.0: icmp_seq=2 ttl=64 time=0.226 ms
64 bytes from 172.17.86.0: icmp_seq=3 ttl=64 time=0.231 ms
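The pings above only prove host-to-host reachability across the overlay; the end-to-end test is container to container. A minimal sketch, assuming a busybox image is available on both nodes (the container name and IP below are illustrative):
# On node2: start a long-lived container and read its address (from 172.17.55.0/24)
docker run -d --name pingtest busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' pingtest
# On node1: ping that address from inside a container (substitute the real IP)
docker run --rm busybox ping -c 3 172.17.55.2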
2. List the subnet keys stored in etcd
[root@k8s-node-1 ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.55.0-24
/coreos.com/network/subnets/172.17.86.0-24
3. Get a key's detailed information
[root@k8s-node-2 ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" get /coreos.com/network/subnets/172.17.55.0-24
{"PublicIP":"192.168.1.103","BackendType":"vxlan","BackendData":{"VtepMAC":"62:a2:c6:10:00:fa"}}
4. Check the routing table
[root@k8s-node-2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 ens33
172.17.55.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.17.86.0     172.17.86.0     255.255.255.0   UG    0      0        0 flannel.1
192.168.1.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33
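Reading the table: the local container subnet 172.17.55.0/24 goes straight out docker0, while node1's subnet 172.17.86.0/24 is sent into flannel.1 to be VXLAN-encapsulated toward 192.168.1.102. The kernel's decision can be checked per destination (the address below is illustrative):
# Ask the kernel which interface a cross-node container address would use
ip route get 172.17.86.2    # expected to go via dev flannel.1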