[Xinghai Essay] SDN neutron (2) core-plugin (ML2)

Neutron Architecture / Neutron-plugin / Core-plugin (ML2)

Neutron-server receives two types of requests:

REST API requests: received from API clients and dispatched to the corresponding plugin (e.g. L3RouterPlugin).
RPC requests: received from the plugin agents (e.g. neutron-l3-agent) and dispatched to the corresponding plugin.

Neutron-plugin is divided into Core-plugin and Service-plugin.

Core-plugin: ML2, responsible for managing the layer-2 network. ML2 manages three types of core resources: Network, Subnet, and Port. The REST API for operating these three resource types is supported natively.
Service-plugin: implements L3-L7 network services, including Router, Firewall, and VPN.

core-plugin

core-plugin(ML-2)
|- – – – – – – – – – – – – – – – – – – – – – – – – |
Type Manager(Vxlan) – – – – – – – Mechanism Manager (openvswitch)
Each part is divided into Manager and Driver

Execution flow

_eventlet_wsgi_server()
  -> service.serve_wsgi(service.NeutronApiService)
  -> WsgiService.start()
  -> config.load_paste_app(app_name)
  -> neutron.api.v2.router:APIRouter.factory
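To make the last step concrete, here is a minimal, hedged sketch of how load_paste_app() ends up invoking APIRouter.factory through paste.deploy; the config path and application name below are assumptions based on a typical installation, not taken from this post.

from paste import deploy

# load_paste_app() essentially does this: read the paste pipeline from
# api-paste.ini and build the WSGI app whose factory is
# neutron.api.v2.router:APIRouter.factory (path and app name assumed).
config_path = '/etc/neutron/api-paste.ini'
app = deploy.loadapp('config:%s' % config_path, name='neutron')
# 'app' is the wsgi.Router assembled by APIRouter.__init__, shown later in this post.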

The Neutron plugin is an important part of Neutron: it is what allows Neutron to integrate with different network technologies.
First we need to install a Neutron plugin. Available plugins include Open vSwitch, Linux Bridge, Cisco Nexus, VMware NSX, and others. Before installing a plugin, make sure the Neutron service itself is already installed.

Taking Open vSwitch as an example, we can use the following command to install the Open vSwitch plug-in:

sudo apt-get install neutron-plugin-openvswitch-agent

After installing the Neutron plugin, we need to configure it. Configuration files are usually located in the /etc/neutron/plugins/ directory. Taking Open vSwitch as an example, we need to edit the /etc/neutron/plugins/ml2/ml2_conf.ini file.

In this file, we need to configure the following parameters:

- type_drivers: Specify supported network types, such as vlan, vxlan, etc.
- tenant_network_types: Specify the tenant network type, such as vlan, vxlan, etc.
- mechanism_drivers: Specify network mechanism drivers, such as Open vSwitch, Linux Bridge, etc.
- bridge_mappings: Specify the mapping between physical (provider) network names and the bridges that connect to them, e.g. external:br-ex.

The following is an example configuration file:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = external

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
bridge_mappings = external:br-ex

In the example above, the Open vSwitch plugin supports the flat, vlan, and vxlan network types, the tenant network type is vxlan, and the mechanism driver is openvswitch. We also map the physical network to the virtual network, in this case mapping the external network to the br-ex bridge.
After the plugin is configured, the Neutron services must be restarted for the configuration to take effect:

sudo service neutron-server restart
sudo service neutron-plugin-openvswitch-agent restart

neutron-plugin is divided into two categories: core-plugin and service-plugin

The core plugin covers the L2 resources: network, subnet, and port.
Service plugins cover L3-L7 services: router, firewall, loadbalancer, VPN, metering, and so on.

How OpenStack Neutron loads the plugin (from neutron/api/v2/router.py):
RESOURCES = {'network': 'networks',
             'subnet': 'subnets',
             'subnetpool': 'subnetpools',
             'port': 'ports'}
SUB_RESOURCES = {}
COLLECTION_ACTIONS = ['index', 'create']
MEMBER_ACTIONS = ['show', 'update', 'delete']
REQUIREMENTS = {'id': attributes.UUID_PATTERN, 'format': 'json'}
class APIRouter(wsgi.Router):

    @classmethod
    def factory(cls, global_config, **local_config):
        return cls(**local_config)

    def __init__(self, **local_config):
        # Create the mapper
        mapper = routes_mapper.Mapper()
        # Get the core_plugin from NeutronManager; it is set in /etc/neutron/neutron.conf, e.g. core_plugin = ml2
        plugin = manager.NeutronManager.get_plugin()
        #Scan all extensions under neutron/extensions
        ext_mgr = extensions.PluginAwareExtensionManager.get_instance()
        ext_mgr.extend_resources("2.0", attributes.RESOURCE_ATTRIBUTE_MAP)

        col_kwargs = dict(collection_actions=COLLECTION_ACTIONS,
                          member_actions=MEMBER_ACTIONS)
        # Helper that builds the mapping rules for one resource
        def _map_resource(collection, resource, params, parent=None):
            allow_bulk = cfg.CONF.allow_bulk
            allow_pagination = cfg.CONF.allow_pagination
            allow_sorting = cfg.CONF.allow_sorting
            #Create controller
            controller = base.create_resource(
                collection, resource, plugin, params, allow_bulk=allow_bulk,
                parent=parent, allow_pagination=allow_pagination,
                allow_sorting=allow_sorting)
            path_prefix = None
            if parent:
                path_prefix = "/%s/{%s_id}/%s" % (parent['collection_name'],
                                                  parent['member_name'],
                                                  collection)
            mapper_kwargs = dict(controller=controller,
                                 requirements=REQUIREMENTS,
                                 path_prefix=path_prefix,
                                 **col_kwargs)
            # mapper.collection() builds the mapping rules
            # collection: a string, the plural form of the resource, e.g. "networks"
            # resource: a string, the singular form of the resource, e.g. "network"
            return mapper.collection(collection, resource,
                                     **mapper_kwargs)
        # Map the '/' index route
        mapper.connect('index', '/', controller=Index(RESOURCES))
        
        for resource in RESOURCES:
            # For each core resource (network, subnet, etc.), call _map_resource to build its mapping rules
            _map_resource(RESOURCES[resource], resource,
                          attributes.RESOURCE_ATTRIBUTE_MAP.get(
                              RESOURCES[resource], dict()))
            resource_registry.register_resource_by_name(resource)
        # SUB_RESOURCES is empty here, so the following loop does nothing
        for resource in SUB_RESOURCES:
            _map_resource(SUB_RESOURCES[resource]['collection_name'], resource,
                          attributes.RESOURCE_ATTRIBUTE_MAP.get(
                              SUB_RESOURCES[resource]['collection_name'],
                              dict()),
                          SUB_RESOURCES[resource]['parent'])

        # Clear all rules currently loaded in the policy engine. Calling this last, both in unit tests and in core API router initialization, ensures the rules are loaded only after all extensions have been loaded.
        policy.reset()
        #Initialize wsgi.Router
        super(APIRouter, self).__init__(mapper)
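For readers unfamiliar with the routes library, the following standalone sketch shows the kind of URL mapping mapper.collection() produces for the networks resource; the controller string and the printed dictionaries are only illustrative.

import routes

mapper = routes.Mapper()
mapper.collection('networks', 'network',
                  controller='network_controller',
                  collection_actions=['index', 'create'],
                  member_actions=['show', 'update', 'delete'])

# GET /networks -> index, POST /networks -> create,
# GET/PUT/DELETE /networks/{id} -> show/update/delete
print(mapper.match('/networks', environ={'REQUEST_METHOD': 'GET'}))
print(mapper.match('/networks/abc', environ={'REQUEST_METHOD': 'GET'}))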
Four networking models

Type Manager

#neutron/plugins/ml2/managers.py
class TypeManager(stevedore.named.NamedExtensionManager):
    """Manage network segment types using drivers."""
    def __init__(self):
        # Mapping from type name to DriverManager
        self.drivers = {}
        LOG.info(_("Configured type driver names: %s"),
                 cfg.CONF.ml2.type_drivers)
        super(TypeManager, self).__init__('neutron.ml2.type_drivers',
                                          cfg.CONF.ml2.type_drivers,
                                          invoke_on_load=True)
        LOG.info(_("Loaded type driver names: %s"), self.names())
        self._register_types()
        self._check_tenant_network_types(cfg.CONF.ml2.tenant_network_types)

    def create_network_segments(self, context, network, tenant_id):
        """Call type drivers to create network segments."""
        segments = self._process_provider_create(network)
        session = context.session
        with session.begin(subtransactions=True):
            network_id = network['id']
            if segments:
                for segment in segments:
                    segment = self.reserve_provider_segment(
                        session, segment)
                    db.add_network_segment(session, network_id, segment)
            else:
                segment = self.allocate_tenant_segment(session)
                db.add_network_segment(session, network_id, segment)
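Since TypeManager subclasses stevedore.named.NamedExtensionManager, loading the type drivers boils down to resolving setuptools entry points. A standalone, hedged sketch follows; the driver names are just an example configuration, and the lookup only finds drivers if Neutron (which registers these entry points) is installed.

from stevedore import named

# Resolve the 'neutron.ml2.type_drivers' entry points listed in
# cfg.CONF.ml2.type_drivers and instantiate each driver.
mgr = named.NamedExtensionManager(
    namespace='neutron.ml2.type_drivers',
    names=['flat', 'vlan', 'vxlan'],
    invoke_on_load=True)

for ext in mgr:
    # ext.obj is the TypeDriver instance, e.g. a VxlanTypeDriver
    print(ext.name, ext.obj)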
Type Drivers
  1. The Flat model is the simplest: all virtual machines share one private IP segment, and the IP address is injected when the VM boots. Traffic between VMs is forwarded directly by the bridge in the hypervisor, and public-network traffic is NATed on the gateway of the segment (nova-network implements this with iptables in the kernel of the nova-network host; Neutron implements it with the l3-agent on the network node). The Flat DHCP model differs from Flat only in that a DHCP process runs on the bridge and the VMs obtain their IP addresses via DHCP messages (nova-network implements this with dnsmasq on the nova-network host; Neutron with the dhcp-agent on the network node).

How Dnsmasq works
When Dnsmasq receives a DNS request, it first looks up /etc/hosts. If the requested record is not there, it queries the upstream DNS servers defined in /etc/resolv.conf (the nameserver entries); the upstream server resolves the name recursively and returns the answer, and dnsmasq caches the result in memory for later queries.
Configure Dnsmasq as a caching DNS server and add the local intranet records to /etc/hosts. Every intranet query then hits that hosts file first, which effectively shares one /etc/hosts with the whole network and lets intranet machines resolve each other. Editing a single hosts file is much easier than editing hosts machine by machine or maintaining BIND DNS records.
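A toy sketch of that lookup order, purely illustrative (this is not dnsmasq code, and the host names are made up):

import socket

HOSTS = {'db01.internal': '10.0.0.12'}   # stands in for /etc/hosts
CACHE = {}                               # dnsmasq's in-memory cache

def resolve(name):
    if name in HOSTS:                    # 1. local hosts file wins
        return HOSTS[name]
    if name in CACHE:                    # 2. previously cached answer
        return CACHE[name]
    addr = socket.gethostbyname(name)    # 3. upstream DNS from /etc/resolv.conf
    CACHE[name] = addr
    return addr

print(resolve('db01.internal'))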

  2. The VLAN model introduces multi-tenancy. Virtual machines can use different private IP segments, and one tenant can own several segments. VMs obtain their IP addresses via DHCP messages (nova-network: dnsmasq on the nova-network host; Neutron: dhcp-agent on the network node). Traffic between VMs in the same segment is forwarded directly by the bridge in the hypervisor; traffic between segments of the same tenant is routed through the gateway; different tenants are isolated by ACLs on the gateway; public-network traffic is NATed on the gateway of the segment (nova-network: iptables in the kernel of the nova-network host; Neutron: l3-agent on the network node). If different tenants logically share one gateway, IP addresses cannot be reused between tenants.
  3. The Overlay model (mainly the GRE and VXLAN tunnel technologies) improves on the VLAN model in several ways: 1) the number of tenants grows from 4K to 16 million; 2) tenant-internal traffic can cross any IP network, so virtual machines can be migrated anywhere; 3) each tenant logically gets its own gateway instance, so IP addresses can be reused between tenants; 4) it can be combined with SDN technology to optimise traffic.

    In the 802.1Q header, the TPID (EtherType) is 0x8100; PRI is the priority field; CFI is rarely used and is 0 on Ethernet; the 12-bit VID field carries the VLAN tag. In other words, an 802.1Q LAN can hold at most 4096 VLANs, and because switch vendors reserve parts of that range the number actually available to users is much smaller. The VID is a very simple field: it has no hierarchy and cannot be used for addressing, it is just a label, but for network virtualisation in a traditional LAN that is enough.

The difference between VXLAN and VLAN (see the quick arithmetic after this comparison):
VXLAN supports more layer-2 segments
VLAN uses 12 bits for the VLAN ID, so it supports at most 2^12 = 4096 IDs (of which about 4094 are usable)
The VXLAN ID (VNI) uses 24 bits and can support up to 2^24 segments.

Existing network paths are used more efficiently
VLAN relies on the spanning tree protocol to avoid loops, which blocks half of the network paths
VXLAN packets are encapsulated in UDP and forwarded at the network layer, so all network paths can be used

It avoids exhausting the physical switch's MAC table
With VLAN, the switch must learn every MAC address in its MAC table
With VXLAN the traffic is tunnelled, so the inner MAC addresses do not need to be learned by the physical switch.
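The quick arithmetic behind the ID-space comparison above, as a tiny illustration:

vlan_id_bits = 12       # 802.1Q VID field
vxlan_vni_bits = 24     # VXLAN Network Identifier (VNI)

print(2 ** vlan_id_bits)    # 4096 IDs (0 and 4095 are reserved, so 4094 usable)
print(2 ** vxlan_vni_bits)  # 16777216, roughly 16 million segments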

Building a virtual layer 2 on top of a physical layer 2 (VLAN mode)

Physical layer 2 means the physical network is an L2 network that communicates via Ethernet broadcast.
Virtual layer 2 means the virtual network implemented by Neutron is also an L2 network (the network used by OpenStack VMs must be one large L2 domain) that likewise communicates via Ethernet broadcast; this virtual network clearly depends on the physical L2 network underneath.

Building a virtual layer 2 on top of a physical layer 3 (GRE and VXLAN modes)

Physical layer 3 means the physical network is an L3 network that communicates via IP routing.
Virtual layer 2 means the virtual network implemented by Neutron is still an L2 network (again, one large L2 domain for the VMs) that communicates via Ethernet broadcast, but here it depends on the physical L3 network. The idea is somewhat like a VPN: the private network packets are encapsulated and carried across the IP network in a tunnel.

A user (for example the demo user) can create a network (an L2 network) under its own project. Such a network is carved out at layer 2 and is an isolated L2 segment, similar to a VLAN in the physical world. Every L2 network is assigned a segment ID that identifies a broadcast domain; the ID is assigned automatically unless an administrator specifies it manually from the admin menu.
A subnet is a group of IPv4 or IPv6 addresses plus their associated configuration. It is the address pool from which OpenStack assigns IP addresses to virtual machines (VMs). Each subnet is specified as a Classless Inter-Domain Routing (CIDR) range and must be associated with a network. Besides the address range, a tenant can specify a gateway, a list of Domain Name System (DNS) name servers, and a set of host routes; VM instances on the subnet then inherit this configuration automatically.
Each network uses its own DHCP agent, and each DHCP agent lives in its own network namespace.
IP addresses in different networks may overlap.
Traffic between subnets in different networks must go through an L3 virtual router.

The difference between Provider and Tenant L2 networks

A Provider network is created by the admin user, while a Tenant network is created by an ordinary tenant user.
A Provider network maps directly onto a segment of the physical network, for example a specific VLAN, so the physical network has to be configured accordingly in advance. A Tenant network is a virtualized network, and Neutron is responsible for its routing and other layer-3 functions.
For Flat and VLAN type networks, only Provider networks really make sense; even a tenant network of these types essentially corresponds to an actual physical segment.
For GRE and VXLAN type networks, only Tenant networks make sense, because they do not depend on a specific physical segment; the physical network only needs to provide IP connectivity and multicast.
A Provider network is created from the physical-network parameters entered by the admin user, whereas for a Tenant network Neutron chooses the concrete configuration itself based on its network settings, including the network type, physical network, and segmentation_id.
When creating a Provider network, a segmentation_id outside the configured ranges is allowed.
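To make the distinction tangible, here is a hedged sketch using the openstacksdk; the cloud name, segment values, and physical network name are assumptions for illustration only.

import openstack

conn = openstack.connect(cloud='mycloud')   # assumed clouds.yaml entry

# Provider network: the admin supplies the physical parameters explicitly.
provider = conn.network.create_network(
    name='provider-vlan-101',
    provider_network_type='vlan',
    provider_physical_network='physnet1',
    provider_segmentation_id=101)

# Tenant network: the user gives only a name; Neutron picks the type,
# physical network and segmentation_id from its own configuration.
tenant = conn.network.create_network(name='tenant-net')
conn.network.create_subnet(network_id=tenant.id, ip_version=4,
                           cidr='10.10.0.0/24', name='tenant-subnet')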

Mechanism Manager

The ML2 plugin interacts with the configured mechanism driver through neutron.plugins.ml2.managers.MechanismManager.

MechanismManager manages the MechanismDrivers.
A MechanismDriver is called whenever a network or port is created, updated, or deleted. It is responsible for the concrete implementation of the underlying layer-2 technology and for interacting with the network devices. For each event, two MechanismDriver methods are invoked: a precommit method inside the database transaction and a postcommit method after the transaction has committed.
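A minimal, hedged skeleton of such a driver is shown below; the class name and comments are illustrative, not an existing driver, and the base class lives in neutron_lib in recent releases (in neutron.plugins.ml2.driver_api in older ones).

from neutron_lib.plugins.ml2 import api

class NoopMechanismDriver(api.MechanismDriver):
    def initialize(self):
        # one-time setup, called after all drivers are loaded
        pass

    def create_network_precommit(self, context):
        # runs inside the DB transaction; raising here aborts the create
        pass

    def create_network_postcommit(self, context):
        # runs after the commit; push the change to the backend
        # (agent, switch, SDN controller, ...)
        pass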

Mechanism Driver

This part provides support for the various L2 technologies, such as Open vSwitch, Linux bridge, and so on.

The corresponding function of each driver is called in turn, following the configuration order. For example, an operation that configures a switch may need to configure both the Open vSwitch virtual switch and an external physical switch such as a Cisco switch; in that case both the Open vSwitch mechanism driver and the Cisco mechanism driver are called.

If any of the extension drivers, type drivers, or mechanism drivers cannot accept the newly created Network, the creation of the Network fails and its records are removed from the DB.

Creating a Port follows the same overall flow as creating a Network, with two differences. The type drivers are not called, because they only deal with network types and are not needed for Port creation. On the other hand, creating a Port triggers an RPC notification to the L2 agents: in ML2 a Port is just an object in memory and in the DB, while what actually does the work in the SDN is the virtual port on each L2 agent. ML2 notifies the agents over RPC so that the corresponding virtual ports are created, and virtual machines, virtual routers, DHCP services, and so on can then get network service through those ports.
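A hedged illustration of that RPC notification step using oslo.messaging; the topic, method name, and payload below are made up for the example and are not Neutron's actual RPC interface.

import oslo_messaging
from oslo_config import cfg

# requires a transport_url (e.g. rabbit://...) in the loaded configuration
transport = oslo_messaging.get_rpc_transport(cfg.CONF)
target = oslo_messaging.Target(topic='q-agent-notifier-port-update')
client = oslo_messaging.RPCClient(transport, target)

# fanout cast: fire-and-forget notification to every listening L2 agent
client.prepare(fanout=True).cast(
    {}, 'port_update', port={'id': 'PORT_ID', 'status': 'ACTIVE'})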

By default, Neutron uses the virtual switch provided by the open-source Open vSwitch project.

The way Mechanism Manager dispatches an operation to its drivers is the same as in Type Manager, except that an operation handled by mechanism drivers is completed by calling every configured driver in turn, as in the switch-configuration example above.

yum install openstack-neutron-openvswitch -y
vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
tunnel_bridge = br-tun
# tunnel endpoint IP: this node's tunnel/management NIC address
local_ip = 192.168.100.20
integration_bridge = br-int
tenant_network_type = vxlan
tunnel_type = vxlan
tunnel_id_ranges = 1:1000
enable_tunneling = true

[agent]
tunnel_types = vxlan
l2_population = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true

systemctl start neutron-openvswitch-agent.service
systemctl enable neutron-openvswitch-agent.service
ifconfig
brctl show
ovs-vsctl show
Neutron-VLAN-OVS

Unlike the Linux bridge setup, Open vSwitch does not isolate VLANs with VLAN sub-interfaces such as eth1.1 or eth1.2; instead it uses OpenFlow flow-table rules to decide how traffic entering and leaving br-int is forwarded, and achieves VLAN isolation that way.

On the compute node, all virtual machines attach to the integration bridge (br-int), and the VMs belong to two different networks. br-int tags incoming packets with a VLAN ID according to their network and then forwards them to the ethernet bridge, which is connected directly to the physical network; the traffic then flows from the compute node to the network node.
The eth and int bridges on the network node work similarly, with an additional external bridge (br-ex). This bridge is created by the administrator in advance and is attached to the physical NIC; br-ex and br-int are connected through a pair of patch ports, and after the virtual machine traffic reaches br-int it is routed out through br-ex.

Composition of OpenvSwitch

1. The main components of ovs are as follows:

ovs-vswitchd: the OVS daemon and the core component of OVS; it implements the switching function and, together with the Linux kernel datapath module, performs flow-based switching. It talks to an upper-layer controller using the OpenFlow protocol, to ovsdb-server using the OVSDB protocol, and to the kernel module via netlink. It supports multiple independent datapaths (bridges) and implements features such as bonding and VLANs by manipulating the flow table.
ovsdb-server: a lightweight database service that stores the entire OVS configuration, including interfaces, switching content, and VLANs; ovs-vswitchd works according to this configuration. It exchanges information with managers and ovs-vswitchd over OVSDB (JSON-RPC).
ovs-dpctl: a tool for configuring the switch kernel module and controlling forwarding rules.
ovs-vsctl: mainly used to query or change the configuration of ovs-vswitchd; its operations update the database in ovsdb-server.
ovs-appctl: mainly used to send commands to the running OVS daemons; rarely needed.
ovsdbmonitor: a GUI tool that displays the data in ovsdb-server.
ovs-controller: a simple OpenFlow controller.
ovs-ofctl: used to manage the flow table contents when OVS acts as an OpenFlow switch.

2. Open vSwitch workflow

Through OVS, a virtual machine communicates with the outside world as follows:

1. The VM instance generates a packet and sends it to its virtual NIC, which is eth0 inside the instance.
2. The packet is delivered to the corresponding VNIC on the physical host, the vnet interface.
3. From the vnet interface the packet reaches the bridge (virtual switch), e.g. br100.
4. The bridge processes the packet and sends it out through a physical interface on the node, e.g. eth0 on the physical machine.
5. Once the packet leaves eth0, it is handled according to the routing table and default gateway of the physical node and is no longer under OVS's control.

Note: the port that connects the physical NIC (eth0) to the L2 switch is usually a trunk port, because the vnet devices that back the virtual machines often carry VLAN tags. By setting a VLAN tag on a VM's vnet you control which broadcast domain the VM belongs to; if several VMs run on the host, their vnets can be given different VLAN tags, so the packets leaving eth0 in step 4 carry tags, which is why a trunk port is required.

# Add a bridge:
ovs-vsctl add-br br0

# List all bridges:
ovs-vsctl list-br

# Check whether a bridge exists:
ovs-vsctl br-exists br0

# Attach a physical NIC to the bridge:
ovs-vsctl add-port br0 eth0

# List all ports on the bridge:
ovs-vsctl list-ports br0

# Show which bridge a port belongs to:
ovs-vsctl port-to-br eth0

# Show the OVS network state:
ovs-vsctl show

# Remove a port from the bridge:
ovs-vsctl del-port br0 eth0

# Delete the bridge:
ovs-vsctl del-br br0

# Set the controller:
ovs-vsctl set-controller br0 tcp:ip:6633

# Remove the controller:
ovs-vsctl del-controller br0

# Enable OpenFlow 1.3 support:
ovs-vsctl set bridge br0 protocols=OpenFlow13

# Clear the OpenFlow protocol setting:
ovs-vsctl clear bridge br0 protocols

# Add an internal port with a VLAN tag:
ovs-vsctl add-port br0 vlan3 tag=3 -- set interface vlan3 type=internal

# Delete the VLAN port:
ovs-vsctl del-port br0 vlan3

# Inspect the VLAN:
ovs-vsctl show
ifconfig vlan3

# Show the status of all switch ports on the bridge:
ovs-ofctl dump-ports br0

# Show all flow rules on the bridge:
ovs-ofctl dump-flows br0

# Show the OVS version:
ovs-ofctl -V

# Set a port tag:
ovs-vsctl set port br-ex tag=101

The OpenFlow protocol defines how a switch asks the controller for flow entries when packet matching fails. When the switch receives a packet that matches no flow in its current flow tables, it wraps the relevant information about the unmatched packet in a Packet-In message and sends it to the controller; the controller thereby learns about the miss and installs new flow entries on the switch with Flow-Mod (and related) messages.
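As a concrete illustration, here is a hedged sketch of a Packet-In handler written with the Ryu OpenFlow controller framework (assumed to be installed); a real application would also install a table-miss flow when the switch connects, which is omitted here.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PacketInExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        # the switch sent a Packet-In because no flow matched this packet
        dp = ev.msg.datapath
        ofproto, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(in_port=ev.msg.match['in_port'])
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # Flow-Mod: teach the switch to handle similar packets itself
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=match, instructions=inst))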

extension

In the neutron/plugins/ml2/managers.py file, besides the TypeManager and the MechanismManager, an ExtensionManager is also defined.

The extension should be placed in the neutron/extensions directory, or api_extensions_path should be set in the configuration file.
The extension's class name should match the file name, with the first letter capitalized.
It should implement the interface defined by ExtensionDescriptor in neutron/api/extensions.py.
Add the extension's alias to the supported_extension_aliases of the corresponding plugin, as in the skeleton below.
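A minimal, hedged skeleton following those rules; the file name, alias, and timestamp are made up for illustration, and depending on the Neutron release get_namespace() may also be required.

# neutron/extensions/myext.py (hypothetical file)
from neutron.api import extensions

class Myext(extensions.ExtensionDescriptor):
    """Example extension descriptor; class name matches the file name."""

    @classmethod
    def get_name(cls):
        return "My Extension"

    @classmethod
    def get_alias(cls):
        # this alias goes into the plugin's supported_extension_aliases
        return "my-ext"

    @classmethod
    def get_description(cls):
        return "An illustrative, do-nothing API extension"

    @classmethod
    def get_updated(cls):
        return "2015-01-01T00:00:00-00:00"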

create_network

create_network_in_db
extension_manager.process_create_network
type_manager.create_network_segments
mechanism_manager.create_network_precommit
db commit
mechanism_manager.create_network_postcommit

create_subnet

create_subnet_in_db
extension_manager.process_create_subnet
mechanism_manager.create_subnet_precommit
db commit
mechanism_manager.create_subnet_postcommit

create_port

create_port_in_db
extension_manager.process_create_port
port binding
mechanism_manager.create_port_precommit
db commit
mechanism_manager.create_port_postcommit