
Agent-based installation with Proxy on OpenShift Virt

This walkthrough installs an on-premise cluster with the Agent-based Installer on OpenShift Virtualization (lab: ISAR). Cluster nodes live on a dedicated VLAN (2001); outbound HTTP/HTTPS uses a Squid proxy, cluster DNS is served on the same segment, and a bastion builds the agent ISO and uploads it for the control-plane and worker VMs.

Official documentation: Installing an on-premise cluster with the Agent-based Installer

Overview

What you deploy: on VLAN 2001, a DNS server, an HTTP proxy, and a bastion with openshift-install, oc, and virtctl; then six VMs (three control-plane, three worker) that boot the generated agent.x86_64.iso.

Why a proxy: Restricted networks do not allow nodes to reach Red Hat mirrors and APIs directly. Traffic is sent to the proxy (for example 192.168.201.2:3128) so pulls and related traffic leave the segment in a controlled way. DNS (for example 192.168.201.3) resolves API and ingress names for the install.

Flow:

  1. Create a project on ISAR and attach VLAN 2001 (bridge manifest).
  2. Deploy DNS and the Squid proxy.
  3. Deploy the bastion; verify proxy and DNS; install client tools.
  4. Author install-config.yaml and agent-config.yaml, run openshift-install agent create image, upload the ISO to the cluster, and create VMs from the template using the MACs in the table below.

Tested with:

Component Version
OpenShift v4.20.14

VLAN 2001 IP Overview

IP              MAC                Usage
192.168.201.2   0E:C0:EF:A8:C9:02  Proxy
192.168.201.3   0E:C0:EF:A8:C9:03  DNS
192.168.201.4   0E:C0:EF:A8:C9:04  Bastion
192.168.201.10  -                  API VIP
192.168.201.11  -                  Ingress VIP
192.168.201.12  0E:C0:EF:A8:C9:0C  cp-0
192.168.201.13  0E:C0:EF:A8:C9:0D  cp-1
192.168.201.14  0E:C0:EF:A8:C9:0E  cp-2
192.168.201.15  0E:C0:EF:A8:C9:0F  worker-0
192.168.201.16  0E:C0:EF:A8:C9:10  worker-1
192.168.201.17  0E:C0:EF:A8:C9:11  worker-2

Prepare the project/namespace on ISAR

oc new-project rbohne-2026-03-13-proxy

Attach VLAN 2001 via a bridge NetworkAttachmentDefinition:

oc apply -f https://examples.openshift.pub/cluster-installation/agent-base-proxy/coe-bridge-2001.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: coe-bridge-2001
spec:
  config: |
    {
      "name": "coe-bridge",
      "type": "bridge",
      "cniVersion": "0.3.1",
      "bridge": "coe-bridge",
      "macspoofchk": false,
      "ipam": {},
      "vlan": 2001,
      "preserveDefaultVlan": false
    }

Deploy DNS server

oc apply -f https://examples.openshift.pub/cluster-installation/agent-base-proxy/dns-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dns-scc-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:anyuid
subjects:
  - kind: ServiceAccount
    name: dns
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: coe-bridge-2001-3
spec:
  config: |-
    {
        "cniVersion": "0.3.1",
        "name": "coe-bridge-2001-3",
        "type": "bridge",
        "bridge": "coe-bridge",
        "ipam":{
          "type": "static",
            "addresses": [
              {
                "address": "192.168.201.3/24"
              }
            ]
        },
        "macspoofchk": false,
        "preserveDefaultVlan": false,
        "vlan": 2001
    }
---
apiVersion: v1
data:
  Corefile: |-
    .:53 {
        file proxy.test.zone proxy.test
        # forward . 10.32.96.1:53 10.32.96.31:53
    }
  proxy.test.zone: |-
    ;
    ; BIND data file for proxy.test
    ;
    $ORIGIN proxy.test.
    $TTL    60
    @       IN      SOA       @ dns.proxy.test. (
                              2021060201 ; Serial
                              60       ; Refresh
                              60       ; Retry
                              60       ; Expire
                              60 )     ; Negative caching TTL

            IN      NS      dns.proxy.test.

    ; Glue record
    dns.proxy.test. IN A 192.168.201.3

    bastion.proxy.test. IN A 192.168.201.4
    squid.proxy.test. IN A 192.168.201.2

    api IN A 192.168.201.10
    *.apps IN A 192.168.201.11

kind: ConfigMap
metadata:
  name: dns-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dns
    app.kubernetes.io/component: dns
    app.kubernetes.io/instance: dns
    app.kubernetes.io/name: dns
    app.kubernetes.io/part-of: dns
    app.openshift.io/runtime: coredns
  name: dns
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: dns
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[ { "name": "coe-bridge-2001-3", "mac": "0E:C0:EF:A8:C9:03",
          "interface": "eth1" } ]'
        openshift.io/required-scc: anyuid
      labels:
        app: dns
        deployment: dns
    spec:
      containers:
        - image: registry.redhat.io/openshift4/ose-coredns-rhel9:v4.21.0
          imagePullPolicy: IfNotPresent
          name: dns
          ports:
            - containerPort: 53
              protocol: TCP
            - containerPort: 53
              protocol: UDP
          resources:
            limits:
              cpu: "1"
              memory: 128Mi
            requests:
              cpu: 500m
              memory: 64Mi
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /config
              name: dns-config
          workingDir: /config/
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/instance-type: Cisco-UCS-C460-M4
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: dns
      serviceAccountName: dns
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            name: dns-config
          name: dns-config

Deploy proxy server

oc apply -f https://examples.openshift.pub/cluster-installation/agent-base-proxy/squid-proxy.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: proxy
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: proxy-scc-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
  - kind: ServiceAccount
    name: proxy
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: coe-bridge-2001-2
spec:
  config: |-
    {
        "cniVersion": "0.3.1",
        "name": "coe-bridge-2001-2",
        "type": "bridge",
        "bridge": "coe-bridge",
        "ipam":{
          "type": "static",
            "addresses": [
              {
                "address": "192.168.201.2/24"
              }
            ]
        },
        "macspoofchk": false,
        "preserveDefaultVlan": false,
        "vlan": 2001
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: squid-proxy-config
data:
  squid.conf: |
      #
      # Recommended minimum configuration:
      #

      # Example rule allowing access from your local networks.
      # Adapt to list your (internal) IP networks from where browsing
      # should be allowed
      acl localnet src 0.0.0.1-0.255.255.255    # RFC 1122 "this" network (LAN)
      acl localnet src 10.0.0.0/8       # RFC 1918 local private network (LAN)
      acl localnet src 100.64.0.0/10        # RFC 6598 shared address space (CGN)
      acl localnet src 169.254.0.0/16   # RFC 3927 link-local (directly plugged) machines
      acl localnet src 172.16.0.0/12        # RFC 1918 local private network (LAN)
      acl localnet src 192.168.0.0/16       # RFC 1918 local private network (LAN)
      acl localnet src fc00::/7         # RFC 4193 local private network range
      acl localnet src fe80::/10        # RFC 4291 link-local (directly plugged) machines

      acl SSL_ports port 443
      acl SSL_ports port 8443
      acl SSL_ports port 8200

      acl Safe_ports port 80        # http
      acl Safe_ports port 21        # ftp
      acl Safe_ports port 443       # https
      acl Safe_ports port 70        # gopher
      acl Safe_ports port 210       # wais
      acl Safe_ports port 1025-65535    # unregistered ports
      acl Safe_ports port 280       # http-mgmt
      acl Safe_ports port 488       # gss-http
      acl Safe_ports port 591       # filemaker
      acl Safe_ports port 777       # multiling http

      #
      # Recommended minimum Access Permission configuration:
      #
      # Deny requests to certain unsafe ports
      http_access deny !Safe_ports

      # Deny CONNECT to other than secure SSL ports
      http_access deny CONNECT !SSL_ports

      http_access deny !localnet

      # acl allow_domains dstdomain "/etc/squid/allow_domains"

      acl allow_domains dstdomain .github.com
      acl allow_domains dstdomain github.githubassets.com
      acl allow_domains dstdomain objects.githubusercontent.com
      acl allow_domains dstdomain www.googleapis.com
      acl allow_domains dstdomain external-secrets.io
      acl allow_domains dstdomain charts.external-secrets.io
      acl allow_domains dstdomain vault.corp.redhat.com

      acl allow_domains dstdomain .quay.io
      acl allow_domains dstdomain registry.redhat.io
      acl allow_domains dstdomain registry.access.redhat.com
      http_access allow allow_domains

      # Allow coe network
      acl allow_ips dst 10.32.96.0/20
      http_access allow allow_ips

      # And finally deny all other access to this proxy
      # http_access deny all

      # Squid normally listens to port 3128
      http_port 3128

      # Uncomment and adjust the following to add a disk cache directory.
      #cache_dir ufs /var/spool/squid 100 16 256

      # Leave coredumps in the first cache dir
      coredump_dir /var/spool/squid

      #
      # Add any of your own refresh_pattern entries above these.
      #
      refresh_pattern ^ftp:     1440    20% 10080
      refresh_pattern ^gopher:  1440    0%  1440
      refresh_pattern -i (/cgi-bin/|\?) 0   0%  0
      refresh_pattern .     0   20% 4320
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: squid-proxy
    app.kubernetes.io/component: squid-proxy
    app.kubernetes.io/instance: squid-proxy
    app.kubernetes.io/name: squid-proxy
    app.kubernetes.io/part-of: squid-proxy
    app.openshift.io/runtime: linux
  name: squid-proxy
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: squid-proxy
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[ { "name": "coe-bridge-2001-2", "mac": "0E:C0:EF:A8:C9:02",
          "interface": "eth1" } ]'
        openshift.io/required-scc: privileged
      labels:
        app: squid-proxy
        deployment: squid-proxy
    spec:
      containers:
        - command:
            - /usr/sbin/squid
            - --foreground
            - -f
            - /config/squid.conf
          image: quay.io/stormshift/ubi-squid:202603131038
          imagePullPolicy: IfNotPresent
          name: squid-proxy
          ports:
            - containerPort: 3128
              name: proxy
              protocol: TCP
          resources: {}
          securityContext:
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /config
              name: squid-proxy-config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: proxy
      serviceAccountName: proxy
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            name: squid-proxy-config
          name: squid-proxy-config

Deploy bastion

oc process -n openshift rhel9-desktop-medium -p=NAME=bastion | oc apply -f -

Configure the VM in the web console: attach VLAN 2001 with the bastion MAC (0E:C0:EF:A8:C9:04, see the table above), then verify proxy and DNS reachability and install the client tools (oc, openshift-install, virtctl).
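Per step 3 of the flow, it is worth verifying from the bastion that the proxy and DNS answer before building the ISO. A minimal sketch (the helper names are my own; adjust the test URL to a domain the Squid allow-list permits):

```shell
# Sanity-check the VLAN 2001 services from the bastion.
# Assumes the proxy at 192.168.201.2:3128 and DNS at 192.168.201.3 (see the IP table).
check_proxy() {
  # Print only the HTTP status of a request routed through Squid.
  curl -sS -o /dev/null -w '%{http_code}\n' \
    -x http://192.168.201.2:3128 https://registry.redhat.io/
}
check_dns() {
  # Resolve the cluster API name against the CoreDNS pod.
  dig +short @192.168.201.3 api.proxy.test
}
```

Per the zone file above, check_dns should print 192.168.201.10; check_proxy printing 000 means curl never reached Squid.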

OpenShift installation

Create the agent ISO on the bastion

install-config.yaml

apiVersion: v1
baseDomain: test
compute:
  - architecture: amd64
    name: worker
    platform: {}
    replicas: 3
controlPlane:
  architecture: amd64
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: proxy
networking:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  machineNetwork:
    - cidr: 192.168.201.0/24
  networkType: OVNKubernetes
  serviceNetwork:
    - 172.30.0.0/16
platform:
  baremetal:
    apiVIP: 192.168.201.10
    ingressVIP: 192.168.201.11
proxy:
  httpProxy: http://192.168.201.2:3128/
  httpsProxy: http://192.168.201.2:3128/
  noProxy: 192.168.201.0/24,.apps.proxy.test,api.proxy.test
sshKey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQM82o2imwpHyGVO7DxCNbdE0ZWnkp6oxdawb7/MOCT coe-muc"
pullSecret: '{"auths":{"cloud...}'

agent-config.yaml

apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: proxy
rendezvousIP: 192.168.201.12
# minimalISO: true # disabled here: the minimal ISO requires DHCP to fetch RHCOS at the initramfs stage
hosts:
  - hostname: cp-0
    role: master
    interfaces:
      - name: eth0
        macAddress: 0E:C0:EF:A8:C9:0C
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: 0E:C0:EF:A8:C9:0C
          ipv4:
            enabled: true
            address:
              - ip: 192.168.201.12
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
  - hostname: cp-1
    role: master
    interfaces:
      - name: eth0
        macAddress: 0E:C0:EF:A8:C9:0D
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: 0E:C0:EF:A8:C9:0D
          ipv4:
            enabled: true
            address:
              - ip: 192.168.201.13
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
  - hostname: cp-2
    role: master
    interfaces:
      - name: eth0
        macAddress: 0E:C0:EF:A8:C9:0E
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: 0E:C0:EF:A8:C9:0E
          ipv4:
            enabled: true
            address:
              - ip: 192.168.201.14
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
  - hostname: worker-0
    role: worker
    interfaces:
      - name: eth0
        macAddress: 0E:C0:EF:A8:C9:0F
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: 0E:C0:EF:A8:C9:0F
          ipv4:
            enabled: true
            address:
              - ip: 192.168.201.15
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
  - hostname: worker-1
    role: worker
    interfaces:
      - name: eth0
        macAddress: 0E:C0:EF:A8:C9:10
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: 0E:C0:EF:A8:C9:10
          ipv4:
            enabled: true
            address:
              - ip: 192.168.201.16
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
  - hostname: worker-2
    role: worker
    interfaces:
      - name: eth0
        macAddress: 0E:C0:EF:A8:C9:11
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: 0E:C0:EF:A8:C9:11
          ipv4:
            enabled: true
            address:
              - ip: 192.168.201.17
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
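The six host entries above differ only in hostname, role, MAC, and IP. As an optional convenience they can be generated from a single table; a sketch ("agent-hosts.yaml" is an arbitrary output name, paste its contents into agent-config.yaml):

```shell
# Emit the repetitive per-host "hosts:" entries from one name/role/MAC/IP table.
while read -r name role mac ip; do
cat <<EOF
  - hostname: $name
    role: $role
    interfaces:
      - name: eth0
        macAddress: $mac
    networkConfig:
      interfaces:
        - name: eth0
          type: ethernet
          state: up
          mtu: 1450
          mac-address: $mac
          ipv4:
            enabled: true
            address:
              - ip: $ip
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.201.3
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.201.1
            next-hop-interface: eth0
            table-id: 254
EOF
done > agent-hosts.yaml <<'TABLE'
cp-0 master 0E:C0:EF:A8:C9:0C 192.168.201.12
cp-1 master 0E:C0:EF:A8:C9:0D 192.168.201.13
cp-2 master 0E:C0:EF:A8:C9:0E 192.168.201.14
worker-0 worker 0E:C0:EF:A8:C9:0F 192.168.201.15
worker-1 worker 0E:C0:EF:A8:C9:10 192.168.201.16
worker-2 worker 0E:C0:EF:A8:C9:11 192.168.201.17
TABLE
```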

Create ISO:

[cloud-user@bastion ~]$ openshift-install-fips agent create image --dir=conf/
INFO Configuration has 3 master replicas, 0 arbiter replicas, and 3 worker replicas
INFO The rendezvous host IP (node0 IP) is 192.168.201.12
INFO Extracting base ISO from release payload
INFO Verifying cached file
INFO Using cached Base ISO /home/cloud-user/.cache/agent/image_cache/coreos-x86_64.iso
INFO Consuming Install Config from target directory
INFO Consuming Agent Config from target directory
INFO Generated  ISO at conf/agent.x86_64.iso

Upload ISO to ISAR:

virtctl image-upload pvc agent-iso \
    --size=2Gi \
    --image-path=agent.x86_64.iso \
    --storage-class coe-netapp-nas \
    --volume-mode=filesystem \
    --access-mode=ReadWriteMany \
    --force-bind --insecure
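The upload can be confirmed before creating the VMs. A sketch (assumes the PVC name agent-iso from the command above and a logged-in oc session):

```shell
# Print the phase of the ISO PVC; "Bound" means CDI finished the upload.
check_iso_pvc() {
  oc get pvc agent-iso -o jsonpath='{.status.phase}{"\n"}'
}
```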

Deploy VirtualMachines

oc process -f vm-template.yaml -p NAME=cp-0 -p MAC=0E:C0:EF:A8:C9:0C | oc apply -f  -
oc process -f vm-template.yaml -p NAME=cp-1 -p MAC=0E:C0:EF:A8:C9:0D | oc apply -f  -
oc process -f vm-template.yaml -p NAME=cp-2 -p MAC=0E:C0:EF:A8:C9:0E | oc apply -f  -
oc process -f vm-template.yaml -p NAME=worker-0 -p MAC=0E:C0:EF:A8:C9:0F | oc apply -f  -
oc process -f vm-template.yaml -p NAME=worker-1 -p MAC=0E:C0:EF:A8:C9:10 | oc apply -f  -
oc process -f vm-template.yaml -p NAME=worker-2 -p MAC=0E:C0:EF:A8:C9:11 | oc apply -f  -
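Once the VMs boot the agent ISO, installation progress can be watched from the bastion. A sketch that picks whichever installer binary is on the PATH (this lab uses the FIPS build):

```shell
# Wait for bootstrap, then for the full installation, against the conf/ assets dir.
bin=$(command -v openshift-install-fips || command -v openshift-install || true)
if [ -n "$bin" ]; then
  "$bin" agent wait-for bootstrap-complete --dir=conf/
  "$bin" agent wait-for install-complete --dir=conf/
fi
```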

2026-03-19 Contributors: Robert Bohne