Networking
An OpenShift cluster is configured using an overlay software-defined network (SDN) for both the Pod and Service networks. By default, VMs are configured with connectivity to the SDN and have the same features/connectivity as Pod-based applications.
Host-level networking configuration is created and applied using the NMState Operator. This includes reporting the current network state of each node and applying desired-state configuration for entities such as bonds, bridges, and VLAN tags, which help segregate networking resources.
Bonded NICs for Management and SDN
The initial bond interface, consisting of two adapters bonded together with an IP address on the machine network, is specified and configured at install time. It carries the SDN, management traffic between the node and the control plane (including administrator access), and live migration traffic. During installation, use the host network interface configuration options to create the bond and set the required IP address.
Additional dedicated Network Interfaces for traffic types
The following sample NMState configuration uses two adapters on the host to create a bonded interface in LACP (802.3ad) mode. Such bonds are intended to isolate different types of network traffic. This avoids noisy-neighbor scenarios in which one traffic type has an outsized impact, for example a virtual machine backup consuming significant network throughput and degrading ODF or etcd traffic on a shared interface.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  annotations:
    description: a bond for VM traffic and VLANs
  name: bonding-policy
spec:
  desiredState:
    interfaces:
    - link-aggregation:
        mode: 802.3ad
        port:
        - enp6s0f0
        - enp6s0f1
      name: bond1
      state: up
      type: bond
Example VM Network Configuration
An example configuration for VM network connectivity is below. Note that the bond configuration should be part of the same NodeNetworkConfigurationPolicy to ensure the interfaces are configured together.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-br1-vlan-trunk
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
    - name: ovs-br1
      description: |-
        A dedicated OVS bridge with bond2 as a port
        allowing all VLANs and untagged traffic
      type: ovs-bridge
      state: up
      bridge:
        allow-extra-patch-ports: true
        options:
          stp: true
        port:
        - name: bond2
    ovn:
      bridge-mappings:
      - localnet: vlan-2024
        bridge: ovs-br1
        state: present
      - localnet: vlan-1993
        bridge: ovs-br1
        state: present
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    description: VLAN 2024 connection for VMs
  name: vlan-2024
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "vlan-2024",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "default/vlan-2024",
      "vlanID": 2024,
      "ipam": {}
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    description: VLAN 1993 connection for VMs
  name: vlan-1993
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "vlan-1993",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "default/vlan-1993",
      "vlanID": 1993,
      "ipam": {}
    }
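The bond2 port referenced by ovs-br1 is not defined in the snippet above. A sketch of the interface entry that could be added to the same interfaces list follows; the member adapter names enp7s0f0 and enp7s0f1 are assumptions:

```yaml
# Sketch: bond2 definition to place alongside ovs-br1 in the same
# NodeNetworkConfigurationPolicy. The member NICs are hypothetical.
- name: bond2
  type: bond
  state: up
  link-aggregation:
    mode: 802.3ad
    port:
    - enp7s0f0
    - enp7s0f1
```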
Create a bridge on the main interface
The configuration is applied on all nodes matched by the policy; network connectivity on those nodes may be briefly interrupted while it is applied.
oc apply -f - <<EOF
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-ens3-policy-workers
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with ens3 as a port
      type: linux-bridge
      state: up
      ipv4:
        enabled: true
        dhcp: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens3
EOF
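To verify that a policy has rolled out, the NMState operator exposes per-node status resources; for example:

```shell
# Cluster-wide policy status
oc get nncp
# Per-node enactments of each policy
oc get nnce
# Full reported network state of a single node (<node-name> is a placeholder)
oc get nns <node-name> -o yaml
```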
Create Network Attachment Definition
cat << EOF | oc apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: tuning-bridge-fixed
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "br1",
    "plugins": [
      {
        "type": "cnv-bridge",
        "bridge": "br1"
      },
      {
        "type": "cnv-tuning"
      }
    ]
  }'
EOF
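A VirtualMachine can then attach to the NetworkAttachmentDefinition via a Multus network. A minimal sketch of the relevant parts of the VM spec (all names other than tuning-bridge-fixed are placeholders):

```yaml
# Fragment of a VirtualMachine spec; only networking-related fields shown.
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}   # default pod network
          - name: bridge-net
            bridge: {}       # L2 bridge binding for the secondary network
      networks:
      - name: default
        pod: {}
      - name: bridge-net
        multus:
          networkName: tuning-bridge-fixed
```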
Example: Localnet
Apply the localnet-demo example:
1. Create a new project: oc new-project localnet-demo
2. Create the net-attach-def.
3. Attach a Fedora VM to it.
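A sketch of what the net-attach-def for this demo could look like, assuming an OVN localnet bridge mapping named localnet-demo already exists on the nodes (all names here are assumptions):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-demo
  namespace: localnet-demo
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "localnet-demo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "localnet-demo/localnet-demo",
      "ipam": {}
    }
```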
Example: Bonding -> VLAN -> LocalNet & Bridge
Tested with OpenShift 4.17.11
NMState for initial setup / add-node
hosts:
- hostname: inf49
  rootDeviceHints:
    deviceName: /dev/sda
  interfaces:
  - macAddress: b4:99:ba:b4:49:d2
    name: enp3s0f0
  - macAddress: 00:1b:21:b5:6a:20
    name: ens2f0
  - macAddress: 00:1b:21:b5:6a:21
    name: ens2f1
  networkConfig:
    interfaces:
    - name: enp3s0f0
      type: ethernet
      ipv6:
        enabled: false
      ipv4:
        enabled: false
    - name: bond0.32
      type: vlan
      state: up
      ipv4:
        enabled: true
        dhcp: true
      ipv6:
        enabled: false
      vlan:
        base-iface: bond0
        id: 32
    - name: bond0
      type: bond
      state: up
      link-aggregation:
        mode: active-backup
        options:
          primary: ens2f0
          miimon: '140'
        port:
        - ens2f0
        - ens2f1
NodeNetworkConfigurationPolicy (NNCP): create a Linux bridge connected to bond0
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: coe-bridge
spec:
  desiredState:
    interfaces:
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: bond0
      name: coe-bridge
      state: up
      type: linux-bridge
  nodeSelector:
    bond0-available: "true"
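Because the policy selects on the bond0-available label rather than a node role, that label has to be set beforehand on each node that actually carries bond0, e.g.:

```shell
# <node-name> is a placeholder for a node with bond0 configured
oc label node <node-name> bond0-available=true
```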
net-attach-def connecting to the bridge
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/coe-bridge
  name: vlan1004
  namespace: coe-bridge-test
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "vlan1004",
      "type": "bridge",
      "bridge": "coe-bridge",
      "ipam": {},
      "macspoofchk": false,
      "preserveDefaultVlan": false,
      "vlan": 1004
    }
Example: OVN Bonding (balance-slb)
Tested with OpenShift 4.18.13
Warning: balance-slb is only supported with OVS bonds ("OVN bonding", as configured below), not with Linux bonds!
NodeNetworkConfigurationPolicy, bond1
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond1
spec:
  desiredState:
    interfaces:
    - ipv4:
        enabled: false
      ipv6:
        enabled: false
      name: enp2s0
      state: up
      type: ethernet
    - ipv4:
        enabled: false
      ipv6:
        enabled: false
      name: enp3s0
      state: up
      type: ethernet
    - bridge:
        allow-extra-patch-ports: true
        port:
        - name: patch-phy-to-ex
        - link-aggregation:
            mode: balance-slb
            port:
            - name: enp2s0
            - name: enp3s0
          name: ovs-bond
      ipv4:
        dhcp: false
        enabled: false
      ipv6:
        dhcp: false
        enabled: false
      name: br-pub
      state: up
      type: ovs-bridge
    ovn:
      bridge-mappings:
      - bridge: br-pub
        localnet: localnet-pub
        state: present
  nodeSelector:
    kubernetes.io/hostname: ocp1-worker-0
NetworkAttachmentDefinition
Via YAML or the web UI
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.ovn.org/network-id: '7'
    k8s.ovn.org/network-name: localnet-pub
  name: nad-localnet-pub
  namespace: bonding-test
spec:
  config: |-
    {
      "cniVersion": "0.4.0",
      "name": "localnet-pub",
      "type": "ovn-k8s-cni-overlay",
      "netAttachDefName": "bonding-test/nad-localnet-pub",
      "topology": "localnet"
    }
Example: Firewalling (Isolation)
Enable MultiNetworkPolicy
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/multiple-networks#nw-multi-network-policy-enable_configuring-multi-network-policy
oc patch network.operator.openshift.io cluster \
  --type=merge \
  -p '{"spec":{"useMultiNetworkPolicy":true}}'
Wait for the rollout / configuration
$ oc get co/network
NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
network   4.18.17   True        True          False      3d17h   DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out (3 out of 6 updated)
Create two VMs connected to the coe network
Let's apply some MultiNetworkPolicy
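The original policies used in this test are not reproduced here. As an illustration, a MultiNetworkPolicy restricting ingress on the secondary network could look like the following; the policy name and CIDR are assumptions, and the policy-for annotation ties it to the vlan1004 net-attach-def from the earlier example:

```yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-subnet-only
  namespace: coe-bridge-test
  annotations:
    k8s.v1.cni.cncf.io/policy-for: coe-bridge-test/vlan1004
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.100.0/24   # assumed VM subnet on VLAN 1004
```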
For debugging purposes
Create br1 via nmcli
nmcli con show --active
nmcli con add type bridge ifname br1 con-name br1
nmcli con add type bridge-slave ifname ens3 master br1
nmcli con modify br1 bridge.stp no
nmcli con down 'Wired connection 1'
nmcli con up br1
nmcli con mod br1 connection.autoconnect yes
nmcli con mod 'Wired connection 1' connection.autoconnect no
[root@compute-0 ~]# nmcli con show
NAME                UUID                                  TYPE      DEVICE
br1                 2ae82518-2ff3-4d49-b95c-fc8fbf029d48  bridge    br1
bridge-slave-ens3   faac459f-ce51-4ce9-8616-ea9d23aff675  ethernet  ens3
Wired connection 1  e158d160-1743-3b00-9f67-258849993562  ethernet  --
[root@compute-0 ~]# nmcli -f bridge con show br1
bridge.mac-address: --
bridge.stp: no
bridge.priority: 32768
bridge.forward-delay: 15
bridge.hello-time: 2
bridge.max-age: 20
bridge.ageing-time: 300
bridge.group-forward-mask: 0
bridge.multicast-snooping: yes
bridge.vlan-filtering: no
bridge.vlan-default-pvid: 1
bridge.vlans: --
[root@compute-0 ~]# ip a show dev ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
    link/ether 52:54:00:a8:34:0d brd ff:ff:ff:ff:ff:ff
[root@compute-0 ~]# ip a show dev br1
17: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:a8:34:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.52.13/24 brd 192.168.52.255 scope global dynamic noprefixroute br1
       valid_lft 3523sec preferred_lft 3523sec
    inet6 fe80::70f0:71c5:53ea:71ee/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Connection problem with kubevirt.io/allow-pod-bridge-network-live-migration after live migration
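For context: live migration is normally blocked for VMs using a bridge binding on the pod network, and KubeVirt can be told to allow it with the kubevirt.io/allow-pod-bridge-network-live-migration annotation. A sketch of its use follows; the placement shown here on the VM metadata is an assumption, so verify against the KubeVirt documentation:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm-name>   # placeholder
  annotations:
    # allow live migration despite the pod-network bridge binding
    kubevirt.io/allow-pod-bridge-network-live-migration: "true"
```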
Hosted control plane (HCP) cluster sendling
oc get nodes
NAME STATUS ROLES AGE VERSION
sendling-d0c14274-6nbvl Ready worker 11d v1.27.8+4fab27b
sendling-d0c14274-sz7rb Ready worker 11d v1.27.8+4fab27b
Ping check details node/sendling-d0c14274-6nbvl
oc debug node/sendling-d0c14274-6nbvl
Starting pod/sendling-d0c14274-6nbvl-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.128.8.133
If you don't see a command prompt, try pressing enter.
sh-4.4# ping www.google.de
PING www.google.de (172.253.62.94) 56(84) bytes of data.
64 bytes from bc-in-f94.1e100.net (172.253.62.94): icmp_seq=1 ttl=99 time=112 ms
64 bytes from bc-in-f94.1e100.net (172.253.62.94): icmp_seq=2 ttl=99 time=98.3 ms
^C
--- www.google.de ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 98.310/105.047/111.785/6.745 ms
sh-4.4# exit
exit
Removing debug pod ...
Ping check details node/sendling-d0c14274-sz7rb
$ oc debug node/sendling-d0c14274-sz7rb
Starting pod/sendling-d0c14274-sz7rb-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.131.9.28
If you don't see a command prompt, try pressing enter.
sh-4.4# ping www.google.de
PING www.google.de (172.253.62.94) 56(84) bytes of data.
Node sendling-d0c14274-6nbvl - Ping google ✅
Node sendling-d0c14274-sz7rb - Ping google ❌
$ oc get pods -l kubevirt.io=virt-launcher -o wide -n rbohne-hcp-sendling
NAME                                          READY   STATUS      RESTARTS   AGE     IP             NODE                 NOMINATED NODE   READINESS GATES
virt-launcher-sendling-d0c14274-6nbvl-pb6zd   1/1     Running     0          6d2h    10.128.8.133   inf8                 <none>           1/1
virt-launcher-sendling-d0c14274-sz7rb-cw5vj   1/1     Running     0          3d20h   10.131.9.28    ucs-blade-server-1   <none>           1/1
virt-launcher-sendling-d0c14274-sz7rb-mbmv8   0/1     Completed   0          3d20h   10.131.9.28    ucs-blade-server-3   <none>           1/1
virt-launcher-sendling-d0c14274-sz7rb-nb25r   0/1     Completed   0          6d2h    10.131.9.28    ucs-blade-server-1   <none>           1/1
$
Check the node routing
Host subnets:
$ oc get nodes -o custom-columns="NODE:.metadata.name,node-subnets:.metadata.annotations.k8s\.ovn\.org/node-subnets"
NODE                 node-subnets
...
inf8                 {"default":["10.131.8.0/21"]}
ucs-blade-server-1   {"default":["10.131.0.0/21"]}
ucs-blade-server-3   {"default":["10.130.8.0/21"]}
...
$ oc get pods -n openshift-ovn-kubernetes -o wide -l app=ovnkube-node
NAME                 READY   STATUS    RESTARTS       AGE    IP             NODE                 NOMINATED NODE   READINESS GATES
...
ovnkube-node-9xt5n   8/8     Running   8              2d7h   10.32.96.101   ucs-blade-server-1   <none>           <none>
ovnkube-node-hhsx5   8/8     Running   8              2d7h   10.32.96.8     inf8                 <none>           <none>
ovnkube-node-qx9bh   8/8     Running   9 (2d6h ago)   2d7h   10.32.96.103   ucs-blade-server-3   <none>           <none>
...
$ oc exec -n openshift-ovn-kubernetes -c ovn-controller ovnkube-node-9xt5n -- ovn-nbctl lr-route-list ovn_cluster_router
IPv4 Routes
Route Table <main>:
10.128.8.133   100.88.0.9    dst-ip
10.129.8.107   10.129.8.107  dst-ip rtos-ucs-blade-server-1 ecmp
10.129.8.107   100.88.0.8    dst-ip ecmp
10.130.10.29   10.130.10.29  dst-ip rtos-ucs-blade-server-1
10.131.8.41    10.131.8.41   dst-ip rtos-ucs-blade-server-1
10.131.9.28    10.131.9.28   dst-ip rtos-ucs-blade-server-1 ecmp
10.131.9.28    100.88.0.8    dst-ip ecmp
10.131.9.44    10.131.9.44   dst-ip rtos-ucs-blade-server-1
100.64.0.2     100.88.0.2    dst-ip
100.64.0.3     100.88.0.3    dst-ip
100.64.0.4     100.88.0.4    dst-ip
100.64.0.5     100.64.0.5    dst-ip
100.64.0.6     100.88.0.6    dst-ip
100.64.0.8     100.88.0.8    dst-ip
100.64.0.9     100.88.0.9    dst-ip
100.64.0.10    100.88.0.10   dst-ip
10.128.0.0/21  100.88.0.2    dst-ip
10.128.8.0/21  100.88.0.6    dst-ip
10.128.16.0/21 100.88.0.10   dst-ip
10.129.0.0/21  100.88.0.3    dst-ip
10.130.0.0/21  100.88.0.4    dst-ip
10.130.8.0/21  100.88.0.8    dst-ip
10.131.8.0/21  100.88.0.9    dst-ip
10.128.0.0/14  100.64.0.5    src-ip
$ oc exec -n openshift-ovn-kubernetes -c ovn-controller ovnkube-node-hhsx5 -- ovn-nbctl lr-route-list ovn_cluster_router
IPv4 Routes
Route Table <main>:
10.128.8.133   10.128.8.133  dst-ip rtos-inf8
10.129.8.107   100.88.0.5    dst-ip ecmp
10.129.8.107   100.88.0.8    dst-ip ecmp
10.130.10.29   100.88.0.5    dst-ip
10.131.8.41    100.88.0.5    dst-ip
10.131.9.28    100.88.0.5    dst-ip ecmp
10.131.9.28    100.88.0.8    dst-ip ecmp
10.131.9.44    100.88.0.5    dst-ip
100.64.0.2     100.88.0.2    dst-ip
100.64.0.3     100.88.0.3    dst-ip
100.64.0.4     100.88.0.4    dst-ip
100.64.0.5     100.88.0.5    dst-ip
100.64.0.6     100.88.0.6    dst-ip
100.64.0.8     100.88.0.8    dst-ip
100.64.0.9     100.64.0.9    dst-ip
100.64.0.10    100.88.0.10   dst-ip
10.128.0.0/21  100.88.0.2    dst-ip
10.128.8.0/21  100.88.0.6    dst-ip
10.128.16.0/21 100.88.0.10   dst-ip
10.129.0.0/21  100.88.0.3    dst-ip
10.130.0.0/21  100.88.0.4    dst-ip
10.130.8.0/21  100.88.0.8    dst-ip
10.131.0.0/21  100.88.0.5    dst-ip
10.128.0.0/14  100.64.0.9    src-ip
$
$ oc exec -n openshift-ovn-kubernetes -c ovn-controller ovnkube-node-qx9bh -- ovn-nbctl lr-route-list ovn_cluster_router
IPv4 Routes
Route Table <main>:
10.128.8.133   100.88.0.9    dst-ip
10.129.8.107   100.88.0.5    dst-ip
10.130.10.29   100.88.0.5    dst-ip
10.131.8.41    100.88.0.5    dst-ip
10.131.9.28    100.88.0.5    dst-ip
10.131.9.44    100.88.0.5    dst-ip
100.64.0.2     100.88.0.2    dst-ip
100.64.0.3     100.88.0.3    dst-ip
100.64.0.4     100.88.0.4    dst-ip
100.64.0.5     100.88.0.5    dst-ip
100.64.0.6     100.88.0.6    dst-ip
100.64.0.8     100.64.0.8    dst-ip
100.64.0.9     100.88.0.9    dst-ip
100.64.0.10    100.88.0.10   dst-ip
10.128.0.0/21  100.88.0.2    dst-ip
10.128.8.0/21  100.88.0.6    dst-ip
10.128.16.0/21 100.88.0.10   dst-ip
10.129.0.0/21  100.88.0.3    dst-ip
10.130.0.0/21  100.88.0.4    dst-ip
10.131.0.0/21  100.88.0.5    dst-ip
10.131.8.0/21  100.88.0.9    dst-ip
10.128.0.0/14  100.64.0.8    src-ip
Last updated: 2025-08-05 (created 2020-05-06)