
Add Node to an existing cluster

Tested with OpenShift 4.17

Doc bug to improve RH Documentation: OSDOCS-13020

Documentation

Documentation Notes
Adding worker nodes to an on-premise cluster
  • Supports adding worker nodes only
  • Single Node documentation: 10.1.3. Adding worker nodes using the Assisted Installer API
  • Ignored, because all my clusters are installed without the Assisted Installer (SaaS)
  • Single Node documentation: 10.1.4. Adding worker nodes to single-node OpenShift clusters manually
  • This works with all cluster types, regardless of cluster size or installation type
  • How to get RHEL CoreOS boot image

    Download the generic version from Red Hat resources

    via oc

    % oc -n openshift-machine-config-operator \
        get configmap/coreos-bootimages \
        -o jsonpath='{.data.stream}' \
        | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location'
    https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.17-9.4/builds/417.94.202410090854-0/x86_64/rhcos-417.94.202410090854-0-live.x86_64.iso
    
    curl -L -O ...
    

    via openshift-install

    % openshift-install coreos print-stream-json \
      | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location'
    https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.17-9.4/builds/417.94.202410090854-0/x86_64/rhcos-417.94.202410090854-0-live.x86_64.iso
    

    Add control-plane node

    Node overview

    Node IP Mac
    cp-1 (0) 10.32.105.69 0E:C0:EF:20:69:45
    cp-2 (1) 10.32.105.70 0E:C0:EF:20:69:46
    cp-3 (2) 10.32.105.71 0E:C0:EF:20:69:47
    cp-4 (4) 10.32.105.72 0E:C0:EF:20:69:48
    cp-5 (5) 10.32.105.73 0E:C0:EF:20:69:49

    Configure DHCP & DNS

    DHCP

    host ocp1-cp-4 {
      hardware ethernet 0E:C0:EF:20:69:48;
      fixed-address 10.32.105.72;
      option host-name "ocp1-cp-4";
      option domain-name "stormshift.coe.muc.redhat.com";
    }
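
    The stanza above can be rendered from a "name mac ip" line, which keeps additional nodes consistent. A minimal sketch (values for ocp1-cp-4 taken from the node table above; the domain name is copied from the existing entry):

    ```shell
    # Sketch: render a dhcpd host stanza from a "name mac ip" input line.
    stanza=$(
      while read -r name mac ip; do
        cat <<EOF
    host ${name} {
      hardware ethernet ${mac};
      fixed-address ${ip};
      option host-name "${name}";
      option domain-name "stormshift.coe.muc.redhat.com";
    }
    EOF
      done <<'TABLE'
    ocp1-cp-4 0E:C0:EF:20:69:48 10.32.105.72
    TABLE
    )
    echo "$stanza"
    ```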
    

    DNS

    72.105.32.10.in-addr.arpa. 120  IN      PTR     ocp1-cp-4.stormshift.coe.muc.redhat.com.
    ocp1-cp-4.stormshift.coe.muc.redhat.com. 60 IN A 10.32.105.72
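
    The PTR record name in the zone entry above is just the IPv4 octets reversed under `in-addr.arpa.`; a quick sketch to derive it (pure awk, no DNS server needed):

    ```shell
    # Sketch: derive the reverse (PTR) record name from an IPv4 address.
    ptr=$(echo "10.32.105.72" | awk -F. '{ print $4"."$3"."$2"."$1".in-addr.arpa." }')
    echo "$ptr"
    ```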
    

    At target cluster (stormshift-ocp1)

    Get the RHCOS URL and download it

    % oc -n openshift-machine-config-operator \
        get configmap/coreos-bootimages \
        -o jsonpath='{.data.stream}' \
        | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location'
    https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.17-9.4/builds/417.94.202410090854-0/x86_64/rhcos-417.94.202410090854-0-live.x86_64.iso
    curl -L -O https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.17-9.4/builds/417.94.202410090854-0/x86_64/rhcos-417.94.202410090854-0-live.x86_64.iso
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 1187M  100 1187M    0     0  30.4M      0  0:00:39  0:00:39 --:--:-- 32.9M
    

    Extract the Ignition config and put it on a web server

    Control plane node ignition:

    oc extract -n openshift-machine-api \
      secret/master-user-data-managed \
      --keys=userData --to=- > cp.ign
    

    Worker node ignition:

    oc extract -n openshift-machine-api \
      secret/worker-user-data-managed \
      --keys=userData --to=- > worker.ign
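
    The extracted files are small "pointer" Ignition configs that merge the full config from the cluster's Machine Config Server. A sketch of a sanity check using python3's stdlib JSON parser; the sample document below is illustrative (run the same check against your actual cp.ign / worker.ign):

    ```shell
    # Sketch: verify the pointer Ignition config references the Machine Config
    # Server. The sample payload here is made up for demonstration.
    sample='{"ignition":{"version":"3.2.0","config":{"merge":[{"source":"https://api-int.example.com:22623/config/master"}]}}}'
    mcs=$(echo "$sample" | python3 -c 'import sys, json; c = json.load(sys.stdin); print(c["ignition"]["config"]["merge"][0]["source"])')
    echo "$mcs"
    ```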
    

    At hosting cluster (ISAR)

    Upload ISO

    % oc project stormshift-ocp1-infra
    Now using project "stormshift-ocp1-infra" on server "https://api.isar.coe.muc.redhat.com:6443".
    % virtctl image-upload dv rhcos-417-94-202410090854-0-live --size=2Gi --storage-class coe-netapp-nas --image-path rhcos-417.94.202410090854-0-live.x86_64.iso
    PVC stormshift-ocp1-infra/rhcos-417-94-202410090854-0-live not found
    DataVolume stormshift-ocp1-infra/rhcos-417-94-202410090854-0-live created
    Waiting for PVC rhcos-417-94-202410090854-0-live upload pod to be ready...
    Pod now ready
    Uploading data to https://cdi-uploadproxy-openshift-cnv.apps.isar.coe.muc.redhat.com
    
     1.16 GiB / 1.16 GiB [===================================================================================================================================] 100.00% 11s
    
    Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
    Processing completed successfully
    Uploading rhcos-417.94.202410090854-0-live.x86_64.iso completed successfully
    

    Create VM

    ---
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: ocp1-cp-4
      namespace: stormshift-ocp1-infra
    spec:
      dataVolumeTemplates:
        - metadata:
            creationTimestamp: null
            name: ocp1-cp-4-root
          spec:
            source:
              blank: {}
            storage:
              accessModes:
                - ReadWriteMany
              resources:
                requests:
                  storage: 120Gi
              storageClassName: coe-netapp-san
      running: true
      template:
        metadata:
          creationTimestamp: null
        spec:
          architecture: amd64
          domain:
            cpu:
              cores: 8
            devices:
              disks:
                - bootOrder: 1
                  disk:
                    bus: virtio
                  name: root
                - bootOrder: 2
                  cdrom:
                    bus: sata
                  name: cdrom
              interfaces:
                - bridge: {}
                  macAddress: '0E:C0:EF:20:69:48'
                  model: virtio
                  name: coe
            machine:
              type: pc-q35-rhel9.4.0
            memory:
              guest: 16Gi
            resources:
              limits:
                memory: 16706Mi
              requests:
                memory: 16Gi
          networks:
            - multus:
                networkName: coe-bridge
              name: coe
          volumes:
            - name: cdrom
              persistentVolumeClaim:
                claimName: rhcos-417-94-202410090854-0-live
            - dataVolume:
                name: ocp1-cp-4-root
              name: root
    
    • ToDo: Serial console does not work

    Install CoreOS via the console

    curl -L -O http://10.32.96.31/stormshift-ocp1-cp.ign
    sudo coreos-installer install -i stormshift-ocp1-cp.ign /dev/vda
    sudo reboot
    

    Wait for and approve CSR

    oc get csr | awk '/Pending/ { print $1 }' | xargs oc adm certificate approve
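
    Note that approval usually has to happen twice: first the client CSR, then a kubelet-serving CSR once the node has joined. The awk filter picks the name (column 1) of every Pending request; demonstrated here on canned `oc get csr`-style output (sample data, no cluster needed):

    ```shell
    # Sketch: filter Pending CSR names from sample 'oc get csr' output.
    sample='csr-8vnps   2m   kubernetes.io/kube-apiserver-client-kubelet   bootstrapper            Pending
    csr-l5k9x   1m   kubernetes.io/kubelet-serving                 system:node:ocp1-cp-4   Pending
    csr-old01   1h   kubernetes.io/kubelet-serving                 system:node:ocp1-cp-1   Approved,Issued'
    pending=$(echo "$sample" | awk '/Pending/ { print $1 }')
    echo "$pending"
    ```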
    

    In case of a control-plane node

    Create the BareMetalHost and Machine objects

    ---
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: ocp1-cp-4
      namespace: openshift-machine-api
    spec:
      automatedCleaningMode: metadata
      bootMACAddress: 0E:C0:EF:20:69:48
      bootMode: legacy
      customDeploy:
        method: install_coreos
      externallyProvisioned: true
      online: true
      userData:
        name: master-user-data-managed
        namespace: openshift-machine-api
    
    ---
    apiVersion: machine.openshift.io/v1beta1
    kind: Machine
    metadata:
      annotations:
        machine.openshift.io/instance-state: externally provisioned
        metal3.io/BareMetalHost: openshift-machine-api/ocp1-cp-4
      labels:
        machine.openshift.io/cluster-api-cluster: ocp1-nlxjs
        machine.openshift.io/cluster-api-machine-role: master
        machine.openshift.io/cluster-api-machine-type: master
      name: ocp1-cp-4
      namespace: openshift-machine-api
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: baremetal.cluster.k8s.io/v1alpha1
          customDeploy:
            method: install_coreos
          hostSelector: {}
          image:
            checksum: ""
            url: ""
          kind: BareMetalMachineProviderSpec
          metadata:
            creationTimestamp: null
          userData:
            name: master-user-data-managed
    

    Patch BareMetalHost status

    Open an API proxy in one terminal

    oc proxy
    

    Patch the object in another terminal

    export HOST_PROXY_API_PATH="http://127.0.0.1:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts"
    
    read -r -d '' host_patch << EOF
    {
      "status": {
        "hardware": {
          "nics": [
            {
              "ip": "10.32.105.72",
              "mac": "0E:C0:EF:20:69:48"
            }
          ]
        }
      }
    }
    EOF
    
    curl -vv \
         -X PATCH \
         "${HOST_PROXY_API_PATH}/ocp1-cp-4/status" \
         -H "Content-type: application/merge-patch+json" \
         -d "${host_patch}"
    

    Add worker node

    • I added two interfaces to the VM for bonding tests
    • nodes-config.yaml does not match the VM example!
    nodes-config.yaml for a bare-metal node
    hosts:
      - hostname: inf49
        rootDeviceHints:
          deviceName: /dev/sda
        interfaces:
          - macAddress: b4:99:ba:b4:49:d2
            name: enp3s0f0
          - macAddress: 00:1b:21:b5:6a:20
            name: ens2f0
          - macAddress: 00:1b:21:b5:6a:21
            name: ens2f1
        networkConfig:
          interfaces:
            - name: enp3s0f0
              type: ethernet
              ipv6:
                enabled: false
              ipv4:
                enabled: false
            - name: bond0.32
              type: vlan
              state: up
              ipv4:
                enabled: true
                dhcp: true
              ipv6:
                enabled: false
              vlan:
                base-iface: bond0
                id: 32
            - name: bond0
              type: bond
              state: up
              link-aggregation:
                mode: active-backup
                options:
                  primary: ens2f0
                  miimon: '140'
                port:
                  - ens2f0
                  - ens2f1
    

    Create & upload the ISO:

    oc adm node-image create nodes-config.yaml
    virtctl image-upload dv extra-worker-1-iso --size=2Gi --storage-class coe-netapp-nas --image-path node.x86_64.iso
    
    Example VM definition extra-worker-1.yaml
    ---
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: extra-worker-1
      namespace: stormshift-ocp1-infra
    spec:
      dataVolumeTemplates:
        - metadata:
            creationTimestamp: null
            name: extra-worker-1-root
          spec:
            source:
              blank: {}
            storage:
              accessModes:
                - ReadWriteMany
              resources:
                requests:
                  storage: 120Gi
              storageClassName: coe-netapp-san
      running: true
      template:
        metadata:
          creationTimestamp: null
        spec:
          architecture: amd64
          domain:
            cpu:
              cores: 8
            devices:
              disks:
                - bootOrder: 1
                  disk:
                    bus: virtio
                  name: root
                - bootOrder: 2
                  cdrom:
                    bus: sata
                  name: cdrom
              interfaces:
                - bridge: {}
                  macAddress: '0E:C0:EF:20:69:4B'
                  model: virtio
                  name: coe
                - bridge: {}
                  macAddress: '0E:C0:EF:20:69:4C'
                  model: virtio
                  name: coe2
            machine:
              type: pc-q35-rhel9.4.0
            memory:
              guest: 16Gi
            resources:
              limits:
                memory: 16706Mi
              requests:
                memory: 16Gi
          networks:
            - multus:
                networkName: coe-bridge
              name: coe
            - multus:
                networkName: coe-bridge
              name: coe2
          volumes:
            - name: cdrom
              persistentVolumeClaim:
                claimName: extra-worker-1-iso
            - dataVolume:
                name: extra-worker-1-root
              name: root
    

    Last update: 2025-01-19 · Created: 2024-12-27 · Contributors: Robert Bohne