KVM virtualization (2)

Article directory

    • 4.7 kvm virtual machine clone
      • 4.7.1 Full clone
      • 4.7.2 Linked clones
    • 4.8 Bridged network for kvm virtual machines
      • 4.8.1 Create a bridge network card
      • 4.8.2 New Virtual Machine Using Bridge Mode
      • 4.8.3 Modify the existing virtual machine network to bridge mode
    • 4.9 Hot Addition Technology
      • 4.9.1 kvm hot add hard disk
      • 4.9.2 kvm virtual machine online hot add network card
      • 4.9.3 kvm virtual machine online hot add memory
      • 4.9.4 kvm virtual machine online hot add cpu
    • 4.10 kvm virtual machine live migration (shared network file system)
    • 5.1 Automatically deploy openstack M version with script

4.7 kvm virtual machine clone

The automatic method and the manual method work on the same principle; the automatic method simply does everything in one step.

4.7.1 Full clone

Automatic method:

# shut down the virtual machine before cloning
[root@localhost ~]# virsh shutdown web01
Domain web01 is being shutdown

# -o old virtual machine -n new virtual machine (full clone)
[root@localhost ~]# virt-clone --auto-clone -o web01 -n web02

Tips:

# --auto-clone creates the new disk file in the same directory as the original disk file
[root@localhost ~]# virt-clone -o web01 -n web02 --auto-clone
# --file specifies an explicit path for the new disk file
[root@localhost ~]# virt-clone -o web01 -n web02 --file /mnt/web03.qcow2

Manual method:

[root@localhost opt]# qemu-img convert -f qcow2 -O qcow2 -c web01.qcow2 web03.qcow2
[root@localhost opt]# virsh dumpxml web01 >web02.xml
[root@localhost opt]# vim web02.xml
#Modify the name of the virtual machine
#Delete the virtual machine uuid
#Delete the mac address line
#Modify the disk path to point at the copied disk (web03.qcow2 here)
[root@localhost opt]# virsh define web02.xml
[root@localhost opt]# virsh start web02

4.7.2 Linked clones

Linked clones can only be created manually, step by step, but the steps can be written into a script so the clone is generated automatically.

To manually create a linked-clone virtual machine:

# (1) Generate the virtual machine disk file; -b specifies the backing (reference) disk
[root@localhost opt]# qemu-img create -f qcow2 -b web01.qcow2 web03.qcow2

# (2) Generate the configuration file of the virtual machine
[root@localhost opt]# virsh dumpxml web01 > web03.xml
[root@localhost opt]# vim web03.xml
#Modify the name of the virtual machine
<name>web03</name>
#Delete virtual machine uuid
<uuid>8e505e25-5175-46ab-a9f6-feaa096daaa4</uuid>
#Delete mac address
<mac address='52:54:00:4e:5b:89'/>
#Modify disk path
<source file='/opt/web03.qcow2'/>

# (3) Import the virtual machine and start the test
[root@localhost opt]# virsh define web03.xml
[root@localhost opt]# virsh start web03

Tips:

# View web03.qcow2 disk information
[root@localhost opt]# qemu-img info web03.qcow2
image: web03.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
# The disk file of a linked clone starts out very small
disk size: 3.8M
cluster_size: 65536
# web03.qcow2 uses web01.qcow2 as its backing file
backing file: web01.qcow2

# web01.qcow2 is the template (backing) disk; make it immutable to prevent accidental deletion
[root@localhost opt]# chattr +i web01.qcow2
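
To confirm the protection, a quick check (assuming the standard lsattr tool from e2fsprogs is installed):

[root@localhost opt]# lsattr web01.qcow2     # the 'i' attribute should appear in the flags
# to remove the protection later:
[root@localhost opt]# chattr -i web01.qcow2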

Fully automatic linked clone script:

[root@kvm01 scripts]# cat link_clone.sh
#!/bin/bash
old_vm=$1
new_vm=$2
#a: Generate virtual machine disk files
old_disk=`virsh dumpxml $old_vm|grep "<source file"|awk -F"'" '{print $2}'`
disk_tmp=`dirname $old_disk`
qemu-img create -f qcow2 -b $old_disk ${disk_tmp}/${new_vm}.qcow2
#b: Generate the configuration file of the virtual machine
virsh dumpxml $old_vm >/tmp/${new_vm}.xml
#Modify the name of the virtual machine
sed -ri "s#(<name>)(.*)(</name>)#\1${new_vm}\3#g" /tmp/${new_vm}.xml
#Delete virtual machine uuid
sed -i '/<uuid>/d' /tmp/${new_vm}.xml
#Delete mac address
sed -i '/<mac address/d' /tmp/${new_vm}.xml
#Modify disk path
sed -ri "s#(<source file=')(.*)('/>)#\1${disk_tmp}/${new_vm}.qcow2\3#g" /tmp/ ${new_vm}.xml
#c: Import the virtual machine and perform a startup test
virsh define /tmp/${new_vm}.xml
virsh start ${new_vm}
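
Usage example (the new virtual machine name is just a placeholder):

[root@kvm01 scripts]# sh link_clone.sh web01 web04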

4.8 Bridged network for kvm virtual machines

The default virtual machine network is NAT mode, network segment 192.168.122.0/24

4.8.1 Create a bridge network card

Prerequisites for configuring the bridge network card on the kvm host:

# (1) Turn off NetworkManager
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager

# (2) Modify the network card configuration file; lines other than the following are not required
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.11
PREFIX=24
GATEWAY=10.0.0.254
DNS1=114.114.114.114
DNS2=223.5.5.5

# (3) Turn off selinux (temporary; to turn it off permanently, modify the configuration file as shown below)
[root@localhost ~]# setenforce 0
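
To make the SELinux change permanent, a minimal sketch assuming the standard /etc/selinux/config layout:

[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@localhost ~]# grep '^SELINUX=' /etc/selinux/config
SELINUX=disabled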

Start creating a bridge network card:

# (1) Create a bridge network card command, create br0 based on eth0
[root@localhost ~]# virsh iface-bridge eth0 br0
Created bridge br0 with attached device eth0
Bridge interface br0 started

# (2) Check the ip address of br0
[root@localhost ~]# ifconfig br0
[root@localhost ~]# ifconfig eth0


# Tips: command to remove the bridge network card (do not run this step now)
[root@localhost ~]# virsh iface-unbridge br0

4.8.2 New virtual machine using bridge mode

If you create a new virtual machine, you can specify the bridge network mode directly at install time. Because reinstalling the system just for this experiment is too troublesome, we instead modify the configuration file of an existing virtual machine; see 4.8.3 Modify the existing virtual machine network to bridge mode.

# (1) Default NAT mode
virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name web04 --memory 1024 --vcpus 1 --disk /opt/web04.qcow2 --boot hd --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole

# (2) bridge mode
virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name web04 --memory 1024 --vcpus 1 --disk /data/web04.qcow2 --boot hd --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole

Tips:
(1) Production environment network card configuration

In production a physical server generally has four network cards:
eth0 is dedicated to the public network segment and is bridged to br0
eth1 is dedicated to the intranet segment and is bridged to br1
A virtual machine can then connect either to the external network or to the internal network; see the sketch below.
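
A second bridge for the intranet could be created the same way (a sketch; eth1 and br1 are assumed names for the second NIC):

[root@localhost ~]# virsh iface-bridge eth1 br1
# a guest then picks its network at install time with --network bridge=br0 or --network bridge=br1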

(2) If the virtual machine cannot obtain an ip address, please configure it as shown in the figure below

(3) Little knowledge

# (1) View the virtual networks defined in libvirt (the network chosen when installing a virtual machine)
[root@localhost ~]# virsh net-list --all
 Name      State    Autostart   Persistent
----------------------------------------------------------
 default   active   yes         yes

# (2) The configuration file of the default network lives under /etc/libvirt/qemu/networks
[root@localhost ~]# cat /etc/libvirt/qemu/networks/default.xml
<network>
  <name>default</name>
  <uuid>55320028-c40f-4b38-a9fc-b9c66beac7da</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:ac:e0:f6'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

# (3) As long as the libvirtd service is running, the default bridge virbr0 exists (bridge name='virbr0' above)
[root@localhost ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c2907b027       yes             eth0
                                                        vnet0
virbr0          8000.525400ace0f6       yes             virbr0-nic

4.8.3 Modify the existing virtual machine network to bridge mode

# (1) Modify the virtual machine configuration file while the machine is shut down, otherwise the change may not take effect
[root@localhost ~]# virsh shutdown web01
[root@localhost ~]# virsh edit web01
<interface type='bridge'>
  <source bridge='br0'/>

# (2) Start the virtual machine and test its network (if dhcp is not available upstream, configure the ip address manually: IPADDR, NETMASK, GATEWAY, DNS1=180.76.76.76; a static configuration sketch follows the output below)
[root@localhost ~]# virsh start web01
[root@localhost ~]# virsh console web01

# The virtual machine automatically obtains 10.0.0.20 (the upstream dhcp range here is 10.0.0.20-10.0.0.254)
[root@web01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:33:7c:ad brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.20/24 brd 10.0.0.255 scope global dynamic eth0
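
If there is no upstream dhcp server, a static configuration inside the guest might look like this (a sketch; the address values are assumptions matching this lab's 10.0.0.0/24 segment):

[root@web01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.20
PREFIX=24
GATEWAY=10.0.0.254
DNS1=180.76.76.76
[root@web01 ~]# systemctl restart network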

Network schematic diagram:


NAT address translation:

# If the kernel forwarding parameter is set to 0, the virtual machine can no longer reach the outside (e.g. it cannot ping Baidu)
sysctl -a | grep ipv4 | grep forward
# Turn the kernel parameter off
sysctl net.ipv4.ip_forward=0
# Turn the kernel parameter on
sysctl net.ipv4.ip_forward=1

# Look at the iptables rules
iptables -t nat -L -n
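
To make the forwarding setting survive a reboot, a standard sysctl sketch (not kvm-specific):

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p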


4.9 Hot Addition Technology

Hot add hard disk, network card, memory, cpu

4.9.1 kvm hot add hard drive

# Temporarily attach (takes effect immediately)
virsh attach-disk web01 /data/web01-add.qcow2 vdb --subdriver qcow2

# Permanently attach (written to the configuration; takes effect after a restart)
virsh attach-disk web01 /data/web01-add.qcow2 vdb --subdriver qcow2 --config

# Temporarily detach the hard disk
virsh detach-disk web01 vdb

# Permanently detach the hard disk
virsh detach-disk web01 vdb --config
Expansion of an attached disk (see the sketch after this list):
(1) In the virtual machine, unmount the mount point of the disk to be expanded
(2) On the host, detach the disk: virsh detach-disk web01 vdb
(3) On the host, resize the disk file with qemu-img resize
(4) On the host, attach the disk again: virsh attach-disk web01 /data/web01-add.qcow2 vdb --subdriver qcow2
(5) In the virtual machine, mount the expanded disk again
(6) In the virtual machine, run xfs_growfs to update the superblock information of the expanded filesystem
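
A sketch of that expansion workflow, assuming the disk from the example above holds an xfs filesystem mounted at /data01 inside the guest (device name, size, and mount point are examples):

# inside the guest
[root@web01 ~]# umount /data01
# on the host
virsh detach-disk web01 vdb
qemu-img resize /data/web01-add.qcow2 +10G
virsh attach-disk web01 /data/web01-add.qcow2 vdb --subdriver qcow2
# inside the guest again
[root@web01 ~]# mount /dev/vdb /data01
[root@web01 ~]# xfs_growfs /data01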

4.9.2 kvm virtual machine online hot add network card

# Temporarily hot add a network card
virsh attach-interface web04 --type bridge --source br0 --model virtio

# Permanently add a network card
virsh attach-interface web04 --type bridge --source br0 --model virtio --config

# Hot remove a network card (identified by its mac address)
virsh detach-interface web04 --type bridge --mac 52:54:00:35:d3:71
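
To find the mac address to pass to detach-interface, the guest's interfaces can be listed first:

virsh domiflist web04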

4.9.3 kvm virtual machine hot add memory online

# Set the maximum memory at creation time (maxmemory=2048); setmem can only go up to this limit
virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name web04 --memory 512,maxmemory=2048 --vcpus 1 --disk /data/web04.qcow2 --boot hd --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole

# Temporarily hot add memory
virsh setmem web04 1024M
# Permanently increase memory
virsh setmem web04 1024M --config

# Adjust the maximum memory of the virtual machine (permanent by default)
virsh setmaxmem web04 4G
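
Current and maximum memory can be verified with dominfo:

virsh dominfo web04 | grep -i memory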

4.9.4 kvm virtual machine hot add cpu online

# The maximum number of vcpus must also be set at creation time (maxvcpus=10)
virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name web04 --memory 512,maxmemory=2048 --vcpus 1,maxvcpus=10 --disk /data/web04.qcow2 --boot hd --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole

# Temporarily hot add cpu cores
virsh setvcpus web04 4
# Permanently add cpu cores
virsh setvcpus web04 4 --config

# Adjust the maximum number of vcpus of the virtual machine
virsh setvcpus web01 --maximum 4 --config
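
Current and maximum vcpu counts can be verified with:

virsh vcpucount web04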

4.10 kvm virtual machine live migration (shared network file system)

Cold migration of a kvm virtual machine: copy the configuration file and the disk file to the new host.

Live migration of a kvm virtual machine: copy only the configuration file; the disk lives on an nfs share.

kvm virtual machine live migration
1: Environment on both kvm hosts (bridge network card)

hostname   ip          memory   network                  software      virtualization
kvm01      10.0.0.11   2G       create br0 bridge NIC    kvm and nfs   enabled
kvm02      10.0.0.12   2G       create br0 bridge NIC    kvm and nfs   enabled
nfs01      10.0.0.31   1G       none                     nfs           not required
2: Implement shared storage (nfs)

yum install nfs-utils rpcbind -y

vim /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)

systemctl start rpcbind nfs

#kvm01 and kvm02
mount -t nfs 10.0.0.11:/data /data
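
A quick check that the share is exported and mounted (showmount ships with nfs-utils):

showmount -e 10.0.0.11
df -h /data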
3: Online live migration

#temporary migration
virsh migrate --live --verbose web04 qemu+ssh://10.0.0.11/system --unsafe
#permanent migration
virsh migrate --live --verbose web03 qemu+ssh://10.0.0.11/system --unsafe --persistent --undefinesource
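
After the migration, the guest should show up as running on the destination host and, with --undefinesource, should no longer be defined on the source host:

virsh list --all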
5. kvm management platform

Imagine 2000 kvm host machines. You need to be able to check:
How many virtual machines does each host run?
How many resources are left on each host?
What is the ip address of each host and of each virtual machine?
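
A rough sketch of how this information could be gathered per host with plain virsh (the VM name is an example; domifaddr only reports addresses for guests on the default NAT network unless the qemu guest agent or another source is used):

virsh list --all | grep -c running     # number of running virtual machines on this host
virsh nodeinfo                         # total cpu and memory of the host
virsh domifaddr web01                  # ip address of one guest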

excel asset management cmdb

kvm management platform, database tools

Host information: total configuration, remaining configuration
Virtual machine information: configuration, ip address, operating system

kvm management platform with a billing function (openstack uses ceilometer for billing ecs instances at the IaaS layer)
Automatic management of kvm hosts, customized operations on cloud hosts

5.1 Automatically deploy openstack M version with script

Deploy openstack
Clone an openstack template machine:

all-in-one environment

4G memory, virtualization enabled, centos7.6 CD mounted


After the virtual machine is turned on, modify the ip address to 10.0.0.11

Upload the script openstack-mitaka-autoinstall.sh to the /root directory
Upload image: cirros-0.3.4-x86_64-disk.img to /root directory
Upload the configuration file: local_settings to the /root directory
Upload openstack_rpm.tar.gz to /root,
tar xf openstack_rpm.tar.gz -C /opt/
mount /dev/cdrom /mnt

sh /root/openstack-mitaka-autoinstall.sh
The installation takes about 10-30 minutes
Visit http://10.0.0.11/dashboard
Domain: default
Username: admin
Password: ADMIN_PASS

Note: add the host resolution entry (10.0.0.11 controller) to the hosts file on the windows system

Add a compute (node) node:
Modify the ip address to 10.0.0.12
hostnamectl set-hostname compute1
Log in again for the new hostname to take effect
Upload openstack_rpm.tar.gz to /root,
tar xf openstack_rpm.tar.gz -C /opt/
mount /dev/cdrom /mnt

Upload script openstack_compute_install.sh

sh openstack_compute_install.sh

openstack controller: the main control node (in the all-in-one setup it is also a compute node / kvm host)
compute node, kvm host
compute node, kvm host
compute node, kvm host