An OpenShift cluster is configured using an overlay software-defined network (SDN) for both the Pod and Service networks. By default, VMs are connected to the SDN and have the same features and connectivity as Pod-based applications.
Host-level networking configurations are created and applied using the NMState Operator. This includes the ability to report the current network state of each node, as well as to apply desired-state configuration for entities such as bonds, bridges, and VLAN tags that help segregate network traffic.
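For example, the operator publishes each node's observed network state as a NodeNetworkState resource, which can be queried directly. The node name below is illustrative, not taken from this document:

```shell
# Show the network state the NMState operator has observed on a node
# ("compute-0" is an example node name).
oc get nodenetworkstate compute-0 -o yaml

# List policies and their per-node enactments to confirm a policy was applied.
oc get nncp
oc get nnce
```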
The initial bond interface, consisting of two adapters bonded together and assigned an IP address on the machine network at install time, carries the SDN, management traffic between the node and the control plane (including administrator access), and live migration traffic. During installation, use the host network interface configuration options to create the bond and set the IP address needed.
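With the agent-based installer, this install-time bond can be declared per host in `agent-config.yaml` using the same NMState syntax. The following is a sketch only; the hostname, MAC addresses, interface names, and IP address are placeholders, not values from this document:

```yaml
# Hypothetical agent-config.yaml fragment (agent-based installer).
hosts:
  - hostname: compute-0
    interfaces:                      # map adapters by MAC address
      - name: enp6s0f0
        macAddress: "52:54:00:00:00:01"
      - name: enp6s0f1
        macAddress: "52:54:00:00:00:02"
    networkConfig:                   # NMState desired state for this host
      interfaces:
        - name: bond0
          type: bond
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
              - ip: 192.168.52.10
                prefix-length: 24
          link-aggregation:
            mode: 802.3ad
            port:
              - enp6s0f0
              - enp6s0f1
```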
Additional dedicated Network Interfaces for traffic types
The following is a sample NMState configuration that uses two adapters on the host to create a bonded interface in LACP (802.3ad) mode. The bond is intended to isolate network traffic by purpose. This avoids noisy-neighbor scenarios on interfaces where the impact can be large, for example a virtual machine backup consuming significant network throughput and affecting ODF or etcd traffic on a shared interface.
```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  annotations:
    description: a bond for VM traffic and VLANs
  name: bonding-policy
spec:
  desiredState:
    interfaces:
      - name: bond1
        type: bond
        state: up
        link-aggregation:
          mode: 802.3ad
          port:
            - enp6s0f0
            - enp6s0f1
```
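To carry tagged traffic over the bond, a VLAN interface can be layered on top of it using the same NMState syntax. The following sketch assumes VLAN ID 100; the policy name, interface name, and VLAN ID are illustrative, not values from this document:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-100-policy        # hypothetical name
spec:
  desiredState:
    interfaces:
      - name: bond1.100        # VLAN interface on top of the bond
        type: vlan
        state: up
        vlan:
          base-iface: bond1
          id: 100
```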
An example configuration for VM network connectivity is below. Note that the bond configuration should be part of the same NodeNetworkConfigurationPolicy to ensure they are configured together.
```shell
oc apply -f - <<EOF
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-ens3-policy-workers
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge with ens3 as a port
        type: linux-bridge
        state: up
        ipv4:
          enabled: true
          dhcp: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens3
EOF
```
```
[root@compute-0 ~]# nmcli con show
NAME                UUID                                  TYPE      DEVICE
br1                 2ae82518-2ff3-4d49-b95c-fc8fbf029d48  bridge    br1
bridge-slave-ens3   faac459f-ce51-4ce9-8616-ea9d23aff675  ethernet  ens3
Wired connection 1  e158d160-1743-3b00-9f67-258849993562  ethernet  --

[root@compute-0 ~]# nmcli -f bridge con show br1
bridge.mac-address:             --
bridge.stp:                     no
bridge.priority:                32768
bridge.forward-delay:           15
bridge.hello-time:              2
bridge.max-age:                 20
bridge.ageing-time:             300
bridge.group-forward-mask:      0
bridge.multicast-snooping:      yes
bridge.vlan-filtering:          no
bridge.vlan-default-pvid:       1
bridge.vlans:                   --

[root@compute-0 ~]# ip a show dev ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
    link/ether 52:54:00:a8:34:0d brd ff:ff:ff:ff:ff:ff

[root@compute-0 ~]# ip a show dev br1
17: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:a8:34:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.52.13/24 brd 192.168.52.255 scope global dynamic noprefixroute br1
       valid_lft 3523sec preferred_lft 3523sec
    inet6 fe80::70f0:71c5:53ea:71ee/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
```
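Once the bridge exists on the nodes, VMs attach to it through a NetworkAttachmentDefinition in the VM's namespace, referenced from the VM's network configuration. The following is a sketch using the `cnv-bridge` CNI type shipped with OpenShift Virtualization; the NAD name is illustrative:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br1-network            # hypothetical name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "br1-network",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
```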
Connection problem with kubevirt.io/allow-pod-bridge-network-live-migration after live migration