The four simple network models in KVM are as follows:
1. Isolation model (QEMU's built-in user-mode networking): a network exists only among the virtual machines themselves. This mode cannot communicate with the host or with other networks; it is as if the virtual machines were plugged into an isolated switch.
2. Routing model (direct allocation of network devices, including VT-d and SR-IOV): equivalent to the virtual machine connecting to a router. Traffic is forwarded uniformly by the router (the physical network card), but the source address is not changed.
3. NAT model: in routing mode, virtual machines can reach other hosts, but packets from other hosts cannot find a route back to the virtual machines. In NAT mode the source address is translated to the router's (physical network card's) address, so other hosts can also reach the virtual machines. This scheme is often used in Docker environments.
4. Bridge model (Bridge): a virtual network card is created on the host to serve as the host's own interface, while the physical network card acts as a switch.
Each of the four network models is explained and implemented below. First, the test environment:
Host: VMware virtual machine, 2 CPUs, 4 GB memory, 40 GB storage
Host operating system: CentOS 6.5 x86_64
Network card 1: 192.168.49.10 (NAT mode), default gateway 192.168.49.2, with access to the external network
Network card 2: 172.28.88.100 (Host-Only mode)
1. Isolation model
As shown in the figure above, Guest1 and Guest2 are virtual machines created on the host. Each virtual machine's network card is split into a front half and a back half: the front half lives inside the virtual machine, and the back half lives on the host. In the figure, the front half is eth0, the interface name seen inside the virtual machine; the back halves are vnet0 and vnet1, the interface names seen on the host. In effect, all data sent to eth0 in Guest1 goes directly to vnet0, which handles the actual transmission.
In isolation mode, the host creates a virtual switch (vSwitch) and attaches vnet0 and vnet1 to it. The switch can also be called a bridge. Because vnet0 and vnet1 are on the same bridge, they can communicate with each other, and each virtual machine's eth0 transmits data through its back half. As long as the virtual machines' IP addresses are in the same network segment, they can reach each other. This is the isolation model.
Implementation method:
1. Create virtual bridge br0
[root@kvm-node1 ~]# yum -y install bridge-utils
[root@kvm-node1 ~]# brctl addbr br0
[root@kvm-node1 ~]# ifconfig br0 up
[root@kvm-node1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000000000000       no
2. Write a network card startup script
[root@kvm-node1 ~]# vi /opt/tools/qemu-ifup.sh
[root@kvm-node1 ~]# cat /opt/tools/qemu-ifup.sh
#!/bin/bash
# Bring up the tap interface passed in as $1 and attach it to the bridge.
BRIDGE=br0
if [ -n "$1" ]; then
    ip link set "$1" up
    sleep 2
    brctl addif $BRIDGE "$1"
    [ $? -eq 0 ] && exit 0 || exit 1
else
    echo -e "\033[1;31mYou must give an interface.\033[0m"
    exit 3
fi
The script is invoked as: /opt/tools/qemu-ifup.sh <interface name> (qemu-kvm runs it automatically via the script= option when a virtual machine starts).
3. Use qumu-kvm to create 2 virtual machines
Create the first kvm virtual machine named centos5-1:
qemu-kvm -name "centos5-1" -smp 1 -m 512 \
    -drive file=/images/kvm/centos5.img,if=virtio,media=disk,cache=writeback \
    -net nic,model=virtio,macaddr=00:0c:29:86:4e:1a \
    -net tap,ifname=vnet0.0,script=/opt/tools/qemu-ifup.sh
Create a second kvm virtual machine named centos5-2:
qemu-kvm -name "centos5-2" -smp 1 -m 512 \
    -drive file=/images/kvm/centos5_2.img,if=virtio,media=disk,cache=writeback \
    -net nic,model=virtio,macaddr=00:0c:29:86:4e:6a \
    -net tap,ifname=vnet0.1,script=/opt/tools/qemu-ifup.sh
Here I used a previously installed virtual machine image file, skipping the installation process and entering the system directly. For the installation process, please refer to the previous article.
4. Go to the host machine to view the second half of the virtual machine’s network card
After both virtual machines are started, two virtual network cards, vnet0.0 and vnet0.1, appear on the host. These are the back halves of the two virtual machines' network cards.
[root@kvm-node1 ~]# ifconfig
...
vnet0.0   Link encap:Ethernet  HWaddr D2:65:97:7B:15:00
          inet6 addr: fe80::d065:97ff:fe7b:1500/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:4993 (4.8 KiB)  TX bytes:468 (468.0 b)

vnet0.1   Link encap:Ethernet  HWaddr 82:94:C0:92:18:34
          inet6 addr: fe80::8094:c0ff:fe92:1834/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:5329 (5.2 KiB)  TX bytes:468 (468.0 b)
View virtual bridge information:
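With both guests running, `brctl show` should list both tap interfaces under br0; the bridge id below is illustrative, not taken from the original output:

```shell
brctl show
# Expected shape of the output (bridge id is illustrative):
# bridge name     bridge id               STP enabled     interfaces
# br0             8000.d265977b1500       no              vnet0.0
#                                                         vnet0.1
```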
The IP addresses of eth0 on the two virtual machines were assigned manually. We then try to ping each virtual machine from the other.
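The article does not record the addresses used; as a sketch, assuming the two guests are given 10.0.0.11 and 10.0.0.12 (any two addresses in one shared segment work):

```shell
# Inside centos5-1 (addresses are illustrative):
ifconfig eth0 10.0.0.11 netmask 255.255.255.0 up
# Inside centos5-2:
ifconfig eth0 10.0.0.12 netmask 255.255.255.0 up
# Then, from centos5-1:
ping -c 3 10.0.0.12
```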
The two virtual machines can communicate with each other, but what about the virtual machine and the host?
The virtual machine cannot ping the host machine
The host machine also cannot ping the virtual machine.
This is the isolation model. Virtual machines can communicate with each other, and the network between the host and the virtual machine is isolated.
2. Routing model
Building on the isolation model, a virtual network card on the host (called virnet0 here, implemented as tap0 below) is added to the virtual bridge so that it can exchange traffic with the virtual machines. The virtual machines' default gateway is set to this interface's IP address, and IP forwarding is enabled on the host, so the virtual machines can reach the host. At this point, however, the virtual machines can only send packets toward the external network: because the external network has no route back to them, replies cannot be delivered.
Implementation method:
1. Create a virtual network card on the host machine
[root@kvm-node1 ~]# yum -y install tunctl
[root@kvm-node1 ~]# tunctl -t tap0
[root@kvm-node1 ~]# ifconfig tap0
tap0      Link encap:Ethernet  HWaddr 12:37:6E:87:3C:A8
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
2. Add the tap0 network card to the virtual bridge br0
[root@kvm-node1 ~]# ifconfig tap0 192.168.220.100 up
[root@kvm-node1 ~]# brctl addif br0 tap0
[root@kvm-node1 ~]# ifconfig tap0 0.0.0.0
[root@kvm-node1 ~]# ifconfig br0 192.168.220.100 netmask 255.255.255.0 up
3. Enter the virtual machine and set the default gateway of the virtual machine to the IP address of tap0
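A minimal sketch of the guest-side commands, assuming the guest's eth0 is given 192.168.220.11 (an arbitrary free address in 192.168.220.0/24):

```shell
# Inside the guest (address is illustrative):
ifconfig eth0 192.168.220.11 netmask 255.255.255.0 up
# Default gateway = the tap0/br0 address configured on the host:
route add default gw 192.168.220.100
```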
4. Turn on the host’s IP forwarding function
[root@kvm-node1 ~]# sed -i '/ip_forward/s/0/1/' /etc/sysctl.conf
[root@kvm-node1 ~]# sysctl -p
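To confirm that forwarding is actually on after reloading sysctl:

```shell
cat /proc/sys/net/ipv4/ip_forward   # should print 1
sysctl net.ipv4.ip_forward          # should print net.ipv4.ip_forward = 1
```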
5. Try to ping the host’s IP address in the virtual machine
You can ping the IP address of the host machine
6. Try to ping the IP address of the host’s gateway
Unable to ping the IP address of the host’s gateway
The ping fails because once the packet reaches the gateway, the gateway has no route back to the 192.168.220.0/24 network and therefore cannot reply. We add an iptables rule on the host so that replies can return:
iptables -t nat -A POSTROUTING -s 192.168.220.0/24 -j MASQUERADE
At this point, we try again to ping the host gateway address on the virtual machine
You can now ping the gateway of the host machine.
7. Ping the address of the virtual machine on a host other than the host computer
For example, my host runs VMware, and the host's gateway is also on that machine (anyone familiar with VMware will recognize this). The Windows host's IP addresses are 192.168.49.1 and 172.28.88.1. I try to ping the virtual machine's address from the Windows host.
At this point, the external (Windows) host cannot ping the virtual machine. To make this work, we add a route on the Windows host:
route add 192.168.220.0 mask 255.255.255.0 192.168.49.10
After adding this route, the external host can communicate with the virtual machine.
These experiments also expose the flaw of the routing model: although the virtual machine can reach the host and send packets to the external network, the external network cannot send packets back unless a corresponding route is added on every external host. For large-scale virtual environments this is clearly impractical.
3. NAT model
The NAT model is essentially SNAT. In the routing model, the virtual machine can send packets to external hosts, but the external hosts cannot reply because they have no route to the virtual machine. External hosts can, however, reach the host machine, so NAT forwarding is added on the host: when a virtual machine sends a packet out, its source IP address is translated to an address of the host machine, so replies come back to the host, which forwards them to the virtual machine. In practice this is just address translation in the POSTROUTING chain of the iptables nat table.
Implementation method:
1. Write a virtual machine startup script
The script is invoked as: /opt/tools/qemu-natup.sh <interface name>
[root@kvm-node1 ~]# cat /opt/tools/qemu-natup.sh
#!/bin/bash
bridge=br0
net="192.168.122.1/24"

checkbr() {
    # Return 0 if the bridge already exists.
    if brctl show | grep -i "$1"; then
        return 0
    else
        return 1
    fi
}

initbr() {
    brctl addbr $bridge
    ip link set $bridge up
    ip addr add $net dev $bridge
}

enable_ip_forward() {
    sysctl -w net.ipv4.ip_forward=1
    sysctl -p
}

setup_nat() {
    checkbr $bridge
    if [ $? -eq 1 ]; then
        initbr
        enable_ip_forward
        iptables -t nat -A POSTROUTING -s $net ! -d $net -j MASQUERADE
    fi
}

if [ -n "$1" ]; then
    setup_nat
    ip link set "$1" up
    brctl addif $bridge "$1"
    exit 0
else
    echo "Error: no interface specified."
    exit 1
fi
2. Write a virtual machine stop script
[root@kvm-node1 ~]# cat /opt/tools/qemu-natdown
#!/bin/bash
bridge=br0
net="192.168.122.0/24"

remove_rule() {
    iptables -t nat -F
}

isalone_bridge() {
    # Tear the bridge down only when no other interface is still attached.
    if ! brctl show | awk "/^$bridge/{print \$4}" | grep "[^[:space:]]" &> /dev/null; then
        ip link set $bridge down
        brctl delbr $bridge
        remove_rule
    fi
}

if [ -n "$1" ]; then
    ip link set "$1" down
    brctl delif $bridge "$1"
    isalone_bridge
    exit 0
else
    echo "Error: no interface specified."
    exit 1
fi
3. Start the virtual machine
qemu-kvm -name "centos5-nat" -smp 1 -m 512 -cpu host \
    -drive file=/images/kvm/centos5.img,if=virtio,media=disk,cache=writeback \
    -net nic,model=virtio,macaddr=00:0c:29:86:4e:1a \
    -net tap,ifname=vnet0.0,script=/opt/tools/qemu-natup.sh,downscript=/opt/tools/qemu-natdown \
    -daemonize
4. Enter the virtual machine and configure the network
5. Add a default gateway to the virtual machine and set the IP address of br0 as the default gateway of the virtual machine
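As a sketch, assuming the guest is given 192.168.122.10 (any free address in br0's 192.168.122.0/24 segment):

```shell
# Inside the guest (address is illustrative):
ifconfig eth0 192.168.122.10 netmask 255.255.255.0 up
route add default gw 192.168.122.1    # br0's address on the host
ping -c 3 192.168.49.1                # an external address, reached via SNAT
```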
After adding the default gateway, the virtual machine can already ping the host. And because the iptables rule is in place on the host, the external network (192.168.49.1) can be reached as well. The NAT model is complete.
4. Bridge model
Create a bridge device on the host and attach the host's eth0 to the bridge. In this way, eth0 in Guest1 sends its frames to vnet0, and the bridge passes them straight out through the host's physical eth0.
When a response frame arrives at the physical machine's eth0, how does it decide whether the frame is destined for a virtual machine or for the physical machine itself?
The physical machine first creates a virtual network card for itself and puts the physical network card into promiscuous mode (it accepts frames regardless of whether the destination MAC address is its own). If a frame's destination MAC is the host's own, it is delivered to the host's virtual network card; otherwise it is forwarded to vnet0. Because the physical machine's network card now functions as a bridge, this is called the bridge model.
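The delivery decision described above can be caricatured as a tiny shell function. This is an illustration of the logic only (a real bridge uses learned MAC tables in the kernel), and HOST_MAC is a made-up value:

```shell
#!/bin/bash
# Toy model of the bridge's delivery decision for an incoming frame.
HOST_MAC="00:0c:29:aa:bb:cc"   # hypothetical MAC of the host's own interface

deliver() {
    # $1 = destination MAC of a frame arriving on the physical eth0
    if [ "$1" = "$HOST_MAC" ]; then
        echo "host stack"      # frame is for the physical machine itself
    else
        echo "vnet0"           # otherwise hand it to the guest's tap device
    fi
}
```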
Implementation method:
1. Create a virtual bridge startup script
[root@kvm-node1 tools]# cat qemu-brup
#!/bin/bash
bridge=br0
device=eth1
device_ip=`ifconfig eth1 | awk '/inet addr/{print $2}' | cut -d: -f2`

checkbr() {
    # Return 0 if the bridge already exists.
    if brctl show | grep -i "$1"; then
        return 0
    else
        return 1
    fi
}

initbr() {
    brctl addbr $bridge
    ip link set $bridge up
    brctl addif $bridge $device
    ifconfig $device 0.0.0.0
    ifconfig $bridge ${device_ip} netmask 255.255.255.0 up
}

setup_bridge() {
    checkbr $bridge
    if [ $? -eq 1 ]; then
        initbr
    fi
}

if [ -n "$1" ]; then
    setup_bridge
    ip link set "$1" up
    brctl addif $bridge "$1"
    exit 0
else
    echo "Error: no interface specified."
    exit 1
fi
2. Create a virtual bridge stop script
[root@kvm-node1 tools]# cat qemu-brdown
#!/bin/bash
bridge=br0
device=eth1

isalone_bridge() {
    # Remove the bridge only when no other interface is still attached.
    if ! brctl show | awk "/^$bridge/{print \$4}" | grep "[^[:space:]]" &> /dev/null; then
        ip link set $bridge down
        brctl delbr $bridge
    fi
}

if [ -n "$1" ]; then
    ip link set "$1" down
    brctl delif $bridge "$1"
    ifconfig $bridge 0.0.0.0
    brctl delif $bridge $device
    isalone_bridge
    exit 0
else
    echo "Error: no interface specified."
    exit 1
fi
3. Use qemu-kvm to create a virtual machine
qemu-kvm -name "centos5-bridge" -smp 1 -m 512 \
    -drive file=/images/kvm/centos5.img,if=virtio,media=disk,cache=writeback \
    -net nic,model=virtio,macaddr=00:0c:29:86:4e:6a \
    -net tap,ifname=vnet0.0,script=/opt/tools/qemu-brup,downscript=/opt/tools/qemu-brdown \
    -daemonize
4. Log in to the virtual machine and check the virtual machine network
The virtual machine's IP address is in the same network segment as eth1 on the host (the host's eth0 is used for SSH, so eth1 is used for bridging here).
5. Ping the host test on the virtual machine
The virtual machine can ping the host’s IP address (the IP address of eth1) or the external network (the gateway address of eth1).
At this point, the bridge model is implemented.