1. Routing configuration

Routing is the core of internetworking: a network without routes is an isolated island, so being able to configure routing is a basic skill for any IT administrator.

Example: three hosts need to communicate. A and B are on the same network segment, C is on a different segment, and the two segments are separated by three routers. How do we make them reachable from one another?

Host A: IP = 192.168.1.100/24
Host B: IP = 192.168.1.63/24
Host C: IP = 10.2.110.100/16
R1: interface 0: IP = 192.168.1.1/24,  interface 1: IP = 110.1.24.10/24
R2: interface 0: IP = 110.1.24.20/24,  interface 1: IP = 72.98.2.10/16
R3: interface 0: IP = 72.98.70.20/16,  interface 1: IP = 10.2.0.1/16

From this topology we can derive the routing information for R1, R2 and R3. Here we write out a static routing table for each router (a gateway of 0.0.0.0 means the destination network is directly connected).

R1 routing table:
Destination        Gateway                          Interface
192.168.1.0/24     0.0.0.0                          eth0
110.1.24.0/24      0.0.0.0                          eth1
72.98.0.0/16       110.1.24.20                      eth1
10.2.0.0/16        110.1.24.20                      eth1
0.0.0.0/0          110.1.24.20                      eth1

R2 routing table:
Destination        Gateway                          Interface
192.168.1.0/24     110.1.24.10                      eth0
110.1.24.0/24      0.0.0.0                          eth0
72.98.0.0/16       0.0.0.0                          eth1
10.2.0.0/16        72.98.70.20                      eth1
0.0.0.0/0          upstream/Internet IP (not used here)

R3 routing table:
Destination        Gateway                          Interface
192.168.1.0/24     72.98.2.10                       eth0
110.1.24.0/24      72.98.2.10                       eth0
72.98.0.0/16       0.0.0.0                          eth0
10.2.0.0/16        0.0.0.0                          eth1
0.0.0.0/0          72.98.2.10                       eth0

Three CentOS machines serve as the routers. (The 10.1.0.0/16 and 169.254.0.0/16 entries that show up in the outputs below belong to each machine's management NIC, eth0, and are not part of this topology.)

node1 serves as router R1:

[root@node1 ~]# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e2:96:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global eth1
    inet6 fe80::20c:29ff:fee2:967c/64 scope link
       valid_lft forever preferred_lft forever
[root@node1 ~]# ip addr show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e2:96:86 brd ff:ff:ff:ff:ff:ff
    inet 110.1.24.10/24 scope global eth2
    inet6 fe80::20c:29ff:fee2:9686/64 scope link
       valid_lft forever preferred_lft forever
[root@node1 ~]# route add -net 10.2.0.0/16 gw 110.1.24.20 dev eth2
[root@node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
110.1.24.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2
10.2.0.0        110.1.24.20     255.255.0.0     UG    0      0        0 eth2
72.98.0.0       110.1.24.20     255.255.0.0     UG    0      0        0 eth2
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0
[root@node1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
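Note that both `route add` and writing 1 into /proc/sys/net/ipv4/ip_forward only last until the next reboot. A minimal sketch of making them persistent, assuming the routers are RHEL/CentOS-style systems (shown for R1; the other routers are analogous):

# enable IPv4 forwarding permanently; "sysctl -p" applies it immediately
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

A hypothetical /etc/sysconfig/network-scripts/route-eth2 file on R1, using the ip-command format (one route per line), could carry the static routes from R1's table:

10.2.0.0/16 via 110.1.24.20
72.98.0.0/16 via 110.1.24.20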
node2 serves as router R2:

[root@node2 ~]# ip addr add 110.1.24.20/24 dev eth1
[root@node2 ~]# ip addr add 72.98.2.10/16 dev eth2
[root@node2 ~]# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:00:90:24 brd ff:ff:ff:ff:ff:ff
    inet 110.1.24.20/24 scope global eth1
    inet6 fe80::20c:29ff:fe00:9024/64 scope link
       valid_lft forever preferred_lft forever
[root@node2 ~]# ip addr show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:00:90:2e brd ff:ff:ff:ff:ff:ff
    inet 72.98.2.10/16 scope global eth2
    inet6 fe80::20c:29ff:fe00:902e/64 scope link
       valid_lft forever preferred_lft forever
[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
110.1.24.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
72.98.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth2
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
[root@node2 ~]# route add -net 192.168.1.0/24 gw 110.1.24.10 dev eth1
[root@node2 ~]# route add -net 10.2.0.0/16 gw 72.98.70.20 dev eth2
[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     110.1.24.10     255.255.255.0   UG    0      0        0 eth1
110.1.24.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.2.0.0        72.98.70.20     255.255.0.0     UG    0      0        0 eth2
72.98.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth2
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
[root@node2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

node3 serves as router R3:

[root@node3 ~]# ip addr add 72.98.70.20/16 dev eth1
[root@node3 ~]# ip addr add 10.2.0.1/16 dev eth2
[root@node3 ~]# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:47:d8:e1 brd ff:ff:ff:ff:ff:ff
    inet 72.98.70.20/16 scope global eth1
    inet6 fe80::20c:29ff:fe47:d8e1/64 scope link
       valid_lft forever preferred_lft forever
[root@node3 ~]# ip addr show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:47:d8:eb brd ff:ff:ff:ff:ff:ff
    inet 10.2.0.1/16 scope global eth2
    inet6 fe80::20c:29ff:fe47:d8eb/64 scope link
       valid_lft forever preferred_lft forever
[root@node3 ~]# route add -net 110.1.24.0/24 gw 72.98.2.10 dev eth1
[root@node3 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     72.98.2.10      255.255.255.0   UG    0      0        0 eth1
110.1.24.0      72.98.2.10      255.255.255.0   UG    0      0        0 eth1
10.2.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth2
72.98.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth1
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
[root@node3 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

Host A:

[root@host1 ~]# ip addr add 192.168.1.100/24 dev eno33554984
[root@host1 ~]# ip route add default via 192.168.1.1
[root@host1 ~]# ip addr show dev eno33554984
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2b:82:a6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2b:82a6/64 scope link
       valid_lft forever preferred_lft forever
[root@host1 ~]# route -n
-bash: route: command not found
[root@host1 ~]# ip route show
10.1.0.0/16 dev eno16777736 proto kernel scope link src 10.1.70.171 metric 100
192.168.1.0/24 dev eno33554984 proto kernel scope link src 192.168.1.100
0.0.0.0 via 192.168.1.1 dev eno33554984

Host B:

[root@host2 ~]# ip addr show dev eno33554984
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:aa:22:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.63/24 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feaa:2247/64 scope link
       valid_lft forever preferred_lft forever
[root@host2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.1.0.0        0.0.0.0         255.255.0.0     U     100    0        0 eno16777736
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eno33554984
0.0.0.0         192.168.1.1     255.255.255.255 UGH   0      0        0 eno33554984

Host C:

root@debian:~# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f1:04:08 brd ff:ff:ff:ff:ff:ff
    inet 10.2.110.100/16 scope global eth1
       valid_lft forever preferred_lft forever
root@debian:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
10.2.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth1
0.0.0.0         10.2.0.1        255.255.255.255 UGH   0      0        0 eth1
root@debian:~#

At this point all configuration is in place. Before testing, disable the firewall and SELinux on every host and router.
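How exactly the firewall is disabled depends on the distribution; a minimal sketch for the CentOS machines (temporary only, nothing is persisted across reboots):

# CentOS 7: stop firewalld and put SELinux into permissive mode for this session
systemctl stop firewalld
setenforce 0

# CentOS 6 style: stop iptables instead
service iptables stop
setenforce 0

To keep SELinux permissive across reboots, set SELINUX=permissive in /etc/selinux/config. Alternatively, instead of disabling the firewall entirely, one could add explicit rules that allow forwarding and ICMP.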
Now test connectivity. On host C:

root@debian:~# ping -I eth1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 10.2.110.100 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=62 time=0.691 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=62 time=1.17 ms
^C
--- 192.168.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.691/0.931/1.171/0.240 ms
root@debian:~# ping -I eth1 192.168.1.63
PING 192.168.1.63 (192.168.1.63) from 10.2.110.100 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.63: icmp_seq=1 ttl=61 time=1.22 ms
64 bytes from 192.168.1.63: icmp_seq=2 ttl=61 time=0.927 ms
^C
--- 192.168.1.63 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.927/1.074/1.221/0.147 ms
root@debian:~# ping -I eth1 192.168.1.100
PING 192.168.1.100 (192.168.1.100) from 10.2.110.100 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=61 time=1.21 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=61 time=1.78 ms
^C
--- 192.168.1.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.214/1.497/1.780/0.283 ms
root@debian:~#

On host A:

[root@host1 ~]# ping -I eno33554984 10.2.110.100
PING 10.2.110.100 (10.2.110.100) from 192.168.1.100 eno33554984: 56(84) bytes of data.
64 bytes from 10.2.110.100: icmp_seq=1 ttl=61 time=0.985 ms
64 bytes from 10.2.110.100: icmp_seq=2 ttl=61 time=1.09 ms
64 bytes from 10.2.110.100: icmp_seq=3 ttl=61 time=1.89 ms
64 bytes from 10.2.110.100: icmp_seq=4 ttl=61 time=2.00 ms
^C
--- 10.2.110.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 0.985/1.493/2.008/0.459 ms
[root@host1 ~]#

On host B:

[root@host2 ~]# ping -I eno33554984 10.2.110.100
PING 10.2.110.100 (10.2.110.100) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 10.2.110.100: icmp_seq=1 ttl=61 time=1.15 ms
64 bytes from 10.2.110.100: icmp_seq=2 ttl=61 time=1.93 ms
64 bytes from 10.2.110.100: icmp_seq=3 ttl=61 time=0.979 ms
^C
--- 10.2.110.100 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.979/1.355/1.930/0.412 ms
[root@host2 ~]# ping -I eno33554984 72.98.70.20
PING 72.98.70.20 (72.98.70.20) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 72.98.70.20: icmp_seq=1 ttl=62 time=0.751 ms
64 bytes from 72.98.70.20: icmp_seq=2 ttl=62 time=0.807 ms
64 bytes from 72.98.70.20: icmp_seq=3 ttl=62 time=1.33 ms
^C
--- 72.98.70.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.751/0.964/1.335/0.264 ms
[root@host2 ~]# ping -I eno33554984 72.98.70.10    ### this address does not answer
PING 72.98.70.10 (72.98.70.10) from 192.168.1.63 eno33554984: 56(84) bytes of data.
From 110.1.24.20 icmp_seq=1 Destination Host Unreachable
From 110.1.24.20 icmp_seq=2 Destination Host Unreachable
From 110.1.24.20 icmp_seq=3 Destination Host Unreachable
^C
--- 72.98.70.10 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4002ms
pipe 4

This failure is to be expected: 72.98.70.10 is not assigned to any interface in this topology (R2's address on that segment is 72.98.2.10 and R3's is 72.98.70.20), so R2 forwards the packet onto the directly connected 72.98.0.0/16 segment, gets no ARP reply, and reports "Destination Host Unreachable" from 110.1.24.20.
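When a ping fails and the cause is less obvious than here, it helps to see where along the path packets stop. Two generic checks (traceroute and tcpdump may need to be installed first; the addresses and interface names are the ones used in this lab):

# from host B: list the hops on the way to host C
traceroute -n 10.2.110.100

# on a router, e.g. R2: watch ICMP traffic leaving towards the 72.98.0.0/16 segment
tcpdump -n -i eth2 icmp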
The router interfaces along the path answer as well:

[root@host2 ~]# ping -I eno33554984 110.1.24.20
PING 110.1.24.20 (110.1.24.20) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 110.1.24.20: icmp_seq=1 ttl=63 time=0.556 ms
64 bytes from 110.1.24.20: icmp_seq=2 ttl=63 time=2.15 ms
64 bytes from 110.1.24.20: icmp_seq=3 ttl=63 time=0.972 ms
^C
--- 110.1.24.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.556/1.228/2.157/0.678 ms
[root@host2 ~]# ping -I eno33554984 110.1.24.10
PING 110.1.24.10 (110.1.24.10) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 110.1.24.10: icmp_seq=1 ttl=64 time=0.282 ms
64 bytes from 110.1.24.10: icmp_seq=2 ttl=64 time=0.598 ms
64 bytes from 110.1.24.10: icmp_seq=3 ttl=64 time=0.367 ms
^C
--- 110.1.24.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.282/0.415/0.598/0.135 ms
[root@host2 ~]#

2. Network teaming on CentOS 7

A network team is similar to bonding on CentOS 6: several NICs share one IP address, which makes the network more robust.
A network team aggregates multiple NICs to provide fault tolerance and higher throughput.
Teaming differs from the older bonding implementation and offers better performance and extensibility.
It is implemented by a kernel driver plus the teamd daemon; the package name is teamd.
Bringing up the team interface does not automatically bring up its port interfaces.
The available runner modes are described in man 5 teamd.conf.
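For reference, man 5 teamd.conf documents the runners broadcast, roundrobin, random, activebackup, loadbalance and lacp. The team configuration itself is a small JSON document; a minimal, illustrative example in the same format that is passed to nmcli below:

{
    "runner":     { "name": "activebackup" },
    "link_watch": { "name": "ethtool" }
}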
Create the team interface:

[root@linux ~]# nmcli con add type team con-name test ifname team0 config '{"runner":{"name":"activebackup"}}'
Connection 'test' (5a3bfb26-993f-45ad-add6-246ff419e7bd) successfully added.

This generates a configuration file under the network-scripts directory:

[root@linux ~]# ls /etc/sysconfig/network-scripts/ifcfg-test
/etc/sysconfig/network-scripts/ifcfg-test
[root@linux ~]# nmcli dev show team0
GENERAL.DEVICE:                         team0
GENERAL.TYPE:                           team
GENERAL.HWADDR:                         82:D0:69:2C:48:6E
GENERAL.MTU:                            1500
GENERAL.STATE:                          70 (connecting (getting IP configuration))
GENERAL.CONNECTION:                     test
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/3
[root@linux ~]# nmcli con show
NAME          UUID                                  TYPE            DEVICE
eno33554984   fb67dbad-ec81-39b4-42b1-ebf975c3ff13  802-3-ethernet  eno33554984
eno16777736   d329fbf7-4423-4a10-b097-20b266c26768  802-3-ethernet  eno16777736
eno50332208   d2665055-8e83-58f1-e9e3-49a5fb133641  802-3-ethernet  eno50332208
test          5a3bfb26-993f-45ad-add6-246ff419e7bd  team            team0

Give team0 a static IP and enable autoconnect at boot:

[root@linux ~]# nmcli con mod test ipv4.method manual ipv4.addresses "10.1.70.24/16" connection.autoconnect yes
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test
DEVICE=team0
TEAM_CONFIG="{\"runner\":{\"name\":\"activebackup\"}}"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=test
UUID=5a3bfb26-993f-45ad-add6-246ff419e7bd
ONBOOT=yes
IPADDR=10.1.70.24
PREFIX=16
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
[root@linux ~]#

Create the two port interfaces:

[root@linux ~]# nmcli con add type team-slave con-name test-1 ifname eno33554984 master team0
Connection 'test-1' (234c3e91-d90d-421c-ae88-133deddfce94) successfully added.
[root@linux ~]# nmcli con add type team-slave con-name test-2 ifname eno50332208 master team0
Connection 'test-2' (116ef596-d983-456c-a6ae-a74a4f8c03dc) successfully added.
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test-1
NAME=test-1
UUID=234c3e91-d90d-421c-ae88-133deddfce94
DEVICE=eno33554984
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test-2
NAME=test-2
UUID=116ef596-d983-456c-a6ae-a74a4f8c03dc
DEVICE=eno50332208
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort

Check the team state:

[root@linux ~]# teamdctl team0 stat
setup:
  runner: activebackup
runner:
  active port:

Neither port is up yet, so activate them:

[root@linux ~]# nmcli con up test-1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
[root@linux ~]# nmcli con up test-2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
[root@linux ~]# teamdctl team0 stat
setup:
  runner: activebackup
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eno50332208
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eno33554984

Both ports are now up.

[root@linux ~]# ping -I team0 10.1.70.172
PING 10.1.70.172 (10.1.70.172) from 10.1.70.24 team0: 56(84) bytes of data.
64 bytes from 10.1.70.172: icmp_seq=1 ttl=64 time=0.500 ms
64 bytes from 10.1.70.172: icmp_seq=2 ttl=64 time=0.804 ms
^C
--- 10.1.70.172 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.500/0.652/0.804/0.152 ms
[root@linux ~]#

The configuration works, and the currently active port is eno33554984. Next, test whether the team keeps working when that port is disabled:

[root@linux ~]# nmcli device disconnect eno33554984
Device 'eno33554984' successfully disconnected.
[root@linux ~]# ping -I team0 10.1.70.172
PING 10.1.70.172 (10.1.70.172) from 10.1.70.24 team0: 56(84) bytes of data.

No replies come back, so the failover test fails. Some research shows that with the activebackup runner an additional parameter, hwaddr_policy, has to be set (by_active makes the team adopt the MAC address of the currently active port):

[root@linux ~]# nmcli con modify test team.config '{"runner":{"name":"activebackup","hwaddr_policy":"by_active"}}'
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test
DEVICE=team0
TEAM_CONFIG="{\"runner\":{\"name\":\"activebackup\",\"hwaddr_policy\":\"by_active\"}}"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=test
UUID=5a3bfb26-993f-45ad-add6-246ff419e7bd
ONBOOT=yes
IPADDR=10.1.70.24
PREFIX=16
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
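The write-up stops here without repeating the failover test. A minimal sketch of how one might re-apply the changed configuration and verify failover, using the same connection and device names as above (output omitted):

# re-activate the team so the modified team.config takes effect
nmcli con down test && nmcli con up test
nmcli con up test-1
nmcli con up test-2

# fail the currently active port again while pinging over the team interface
nmcli device disconnect eno33554984
ping -I team0 10.1.70.172

# check which port is active now
teamdctl team0 stat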