diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/.$detail.drawio.bkp b/content/cluster-installation/hosted-control-plane/tenant-network/.$detail.drawio.bkp
new file mode 100644
index 00000000..076cc4a1
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/.$detail.drawio.bkp
@@ -0,0 +1,190 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/.$overview.drawio.bkp b/content/cluster-installation/hosted-control-plane/tenant-network/.$overview.drawio.bkp
new file mode 100644
index 00000000..3d1d9e74
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/.$overview.drawio.bkp
@@ -0,0 +1,79 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/api-lb.conf b/content/cluster-installation/hosted-control-plane/tenant-network/api-lb.conf
new file mode 100644
index 00000000..b5a7e9a1
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/api-lb.conf
@@ -0,0 +1,29 @@
+global
+ log 127.0.0.1 local2
+ pidfile /var/run/haproxy.pid
+ maxconn 4000
+ daemon
+defaults
+ mode http
+ log global
+ option dontlognull
+ option http-server-close
+ option redispatch
+ retries 3
+ timeout http-request 10s
+ timeout queue 1m
+ timeout connect 10s
+ timeout client 1m
+ timeout server 1m
+ timeout http-keep-alive 10s
+ timeout check 10s
+ maxconn 3000
+
+listen api
+ bind *:6443
+ mode tcp
+ balance source
+ server ucs-blade-server-5 10.32.96.105:30918 check inter 1s
+ server ucs-blade-server-6 10.32.96.106:30918 check inter 1s
+ server ucs-blade-server-7 10.32.96.107:30918 check inter 1s
+ server ucs-blade-server-8 10.32.96.108:30918 check inter 1s
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/index.md b/content/cluster-installation/hosted-control-plane/tenant-network/index.md
new file mode 100644
index 00000000..8326ff68
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/index.md
@@ -0,0 +1,230 @@
+---
+title: Hosted Control Plane and tenant networking
+linktitle: Hosted Control Plane and tenant networking
+description: Hosted Control Plane and tenant networking
+tags: ['hcp','v4.21']
+---
+# Hosted Control Plane and tenant networking
+
+Official documentation: Not yet available
+
+Tested with:
+
+|Component|Version|
+|---|---|
+|OpenShift|v4.21.9|
+|OpenShift Virt|v4.21.0|
+
+## ToDo's
+
+* Add a custom endpoint publishing strategy
+* Find a solution for the node-port chicken-and-egg problem of the external API load balancer
+* Check WebUI bug: the ingress domain is displayed incorrectly
+
+## Overview
+
+Challenge: run a hosted cluster in a different tenant network segment/VLAN without wide-open access from the tenant segment to the management segment.
+
+An additional requirement: the hub cluster must not have any address or network connection in the tenant network segment. It is only allowed to place virtual machines into that segment.
+
+{ page="Page-1" }
+
+The worker nodes of the hosted cluster are easy to solve: just connect them to the tenant network segment (important: DHCP is required).
+
+Exposing the hosted control plane components into the tenant network segment is more challenging. The following components have to be considered:
+
+* API Server
+* OAuth
+* Konnectivity
+* Ignition
+
+Here is a list of possible exposing options for these components:
+
+|Component/Service|Exposing strategy (`servicePublishingStrategy`)|Kubernetes Service type `LoadBalancer`|Ingress/Route|
+|---|---|---|---|
+|API Server|`LoadBalancer` (recommended, K8s Service type LoadBalancer)<br>`NodePort`* (not for production)|✅|❌|
+|OAuth|`Route`/Ingress (default)<br>`NodePort`* (not for production)|❌|✅|
+|Konnectivity|`Route`/Ingress (default)<br>`LoadBalancer` (K8s Service type LoadBalancer)<br>`NodePort`* (not for production)|✅|✅|
+|Ignition|`Route`/Ingress (default)<br>`NodePort`* (not for production)|✅|❌|
+
+For our proof of concept we expose the components as follows:
+
+* API Server: LoadBalancer
+* OAuth: Route/Ingress via a dedicated router shard
+* Konnectivity: Route/Ingress via a dedicated router shard
+* Ignition: Route/Ingress via a dedicated router shard
+
+## Exposing components via a router/ingress shard
+
+The idea behind the dedicated router/ingress shard is to expose it into the tenant network segment, and only for the hosted cluster components.
+
+In front of the router/ingress shard sits an external load balancer (for example, F5 BIG-IP, NetScaler, ...) that has access to the management network segment and exposes the router shard into the tenant network segment.
+
+## Proof of concept environment overview
+
+{ page="Page-2" }
+
+### Router between Mgmt and Tenant-A
+
+A [VyOS](https://vyos.io/) router & firewall. It does not allow traffic between the Mgmt and Tenant-A networks except for DNS and the gateway, and provides a direct internet connection via NAT.
+
+??? example "VyOS config commands"
+
+ ```shell
+ --8<-- "content/cluster-installation/hosted-control-plane/tenant-network/vyos-router-2003.txt"
+ ```
+
+### Ingress Sharding
+
+* [2.3.4. Ingress sharding in OpenShift Container Platform](https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/ingress_and_load_balancing/configuring-ingress-cluster-traffic#nw-ingress-sharding-concept_configuring-ingress-cluster-traffic-ingress-controller)
+* [3.1.3.8.1. Example load balancer configuration for user-provisioned clusters](https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/installing_on_vmware_vsphere/user-provisioned-infrastructure)
+
+???+ example "Ingress Controller"
+
+ ```yaml
+ --8<-- "content/cluster-installation/hosted-control-plane/tenant-network/ingress-controller-shard.yaml"
+ ```
+
+```shell
+% oc get svc -n openshift-ingress router-nodeport-tenant-a
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+router-nodeport-tenant-a   NodePort   172.30.141.209   <none>        80:32460/TCP,443:32488/TCP,1936:32095/TCP   106s
+```
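+
+The node ports shown above feed the HAProxy backend definitions below. They can be printed with a plain `oc` jsonpath query (query written for this service, not taken from the original text):
+
+```shell
+oc get svc -n openshift-ingress router-nodeport-tenant-a \
+  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.nodePort}{"\n"}{end}'
+```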
+
+The ingress sharding load balancer is a RHEL 9 system with HAProxy:
+
+* Install HAProxy: `dnf install haproxy`
+* Configure SELinux: `setsebool -P haproxy_connect_any 1`
+* Apply the example haproxy.conf (don't forget to update the node ports)
+* Enable and start HAProxy: `systemctl enable --now haproxy`
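+
+The steps above can be sketched as one sequence (the local config filename is an assumption; adjust the node ports in it first):
+
+```shell
+dnf install -y haproxy
+setsebool -P haproxy_connect_any 1
+cp ingress-shared-haproxy.conf /etc/haproxy/haproxy.cfg
+haproxy -c -f /etc/haproxy/haproxy.cfg   # validate the config before starting
+systemctl enable --now haproxy
+```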
+
+??? example "HAProxy config"
+
+ ```shell
+ --8<-- "content/cluster-installation/hosted-control-plane/tenant-network/ingress-shared-haproxy.conf"
+ ```
+
+Add DNS records:
+
+```bind
+konnectivity.tenant-a.coe.muc.redhat.com. IN A 192.168.203.111
+oauth.tenant-a.coe.muc.redhat.com. IN A 192.168.203.111
+ignition.tenant-a.coe.muc.redhat.com. IN A 192.168.203.111
+```
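+
+A quick sanity check that the records resolve (hostnames as defined above):
+
+```shell
+for host in konnectivity oauth ignition; do
+  dig +short "${host}.tenant-a.coe.muc.redhat.com"
+done
+```
+
+Each lookup should return `192.168.203.111`.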
+
+### Start hosted control plane and nodepool
+
+```yaml
+apiVersion: hypershift.openshift.io/v1beta1
+kind: HostedCluster
+metadata:
+ name: 'tenant-a'
+ namespace: 'clusters'
+ labels:
+ "cluster.open-cluster-management.io/clusterset": 'default'
+spec:
+ configuration:
+ ingress:
+ appsDomain: apps.tenant-a.coe.muc.redhat.com
+ domain: ''
+ loadBalancer:
+ platform:
+ type: ''
+ channel: fast-4.21
+ etcd:
+ managed:
+ storage:
+ persistentVolume:
+ size: 8Gi
+ type: PersistentVolume
+ managementType: Managed
+ release:
+ image: quay.io/openshift-release-dev/ocp-release:4.21.11-multi
+ pullSecret:
+ name: pullsecret-cluster-tenant-a
+ sshKey:
+ name: sshkey-cluster-tenant-a
+ networking:
+ clusterNetwork:
+ - cidr: 10.132.0.0/14
+ serviceNetwork:
+ - cidr: 172.31.0.0/16
+ networkType: OVNKubernetes
+ controllerAvailabilityPolicy: SingleReplica
+ infrastructureAvailabilityPolicy: SingleReplica
+ platform:
+ type: KubeVirt
+ kubevirt:
+ baseDomainPassthrough: false
+ infraID: 'tenant-a'
+ services:
+ - service: APIServer
+ servicePublishingStrategy:
+ type: LoadBalancer
+ loadBalancer:
+ hostname: api.tenant-a.coe.muc.redhat.com
+ - service: OAuthServer
+ servicePublishingStrategy:
+ type: Route
+ route:
+ hostname: oauth.tenant-a.coe.muc.redhat.com
+ - service: OIDC
+ servicePublishingStrategy:
+ type: Route
+ - service: Konnectivity
+ servicePublishingStrategy:
+ type: Route
+ route:
+ hostname: konnectivity.tenant-a.coe.muc.redhat.com
+ - service: Ignition
+ servicePublishingStrategy:
+ type: Route
+ route:
+ hostname: ignition.tenant-a.coe.muc.redhat.com
+```
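+
+Assuming the manifest above is saved as `hostedcluster.yaml` (filename is an assumption), apply it and watch the control plane come up:
+
+```shell
+oc apply -f hostedcluster.yaml
+oc get hostedcluster -n clusters tenant-a -w
+oc get pods -n clusters-tenant-a
+```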
+
+```yaml
+---
+apiVersion: hypershift.openshift.io/v1beta1
+kind: NodePool
+metadata:
+ name: 'tenant-a'
+ namespace: 'clusters'
+spec:
+ arch: amd64
+ clusterName: 'tenant-a'
+ replicas: 2
+ management:
+ autoRepair: false
+ upgradeType: Replace
+ platform:
+ type: KubeVirt
+ kubevirt:
+ compute:
+ cores: 2
+ memory: 8Gi
+ rootVolume:
+ type: Persistent
+ persistent:
+ size: 32Gi
+ additionalNetworks:
+ - name: default/cudn-localnet1-2003
+ attachDefaultNetwork: false
+ release:
+ image: quay.io/openshift-release-dev/ocp-release:4.21.11-multi
+```
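+
+Likewise for the NodePool (filename again an assumption); the KubeVirt worker VMs should appear in the hosted control plane namespace:
+
+```shell
+oc apply -f nodepool.yaml
+oc get nodepool -n clusters tenant-a
+oc get vmi -n clusters-tenant-a
+```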
+
+### Deploy external load balancer for ingress of hosted cluster
+
+The external load balancer for the hosted cluster's ingress is again a RHEL 9 system with HAProxy:
+
+* Install HAProxy: `dnf install haproxy`
+* Configure SELinux: `setsebool -P haproxy_connect_any 1`
+* Apply the example haproxy.conf (don't forget to update the node ports)
+* Enable and start HAProxy: `systemctl enable --now haproxy`
+
+??? example "HAProxy config"
+
+ ```shell
+ --8<-- "content/cluster-installation/hosted-control-plane/tenant-network/ingress-lb.conf"
+ ```
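+
+Once the load balancer is up, a simple end-to-end check from inside Tenant-A (the console route name follows the usual OpenShift pattern and is an assumption, not verified here):
+
+```shell
+curl -k -o /dev/null -w '%{http_code}\n' \
+  https://console-openshift-console.apps.tenant-a.coe.muc.redhat.com
+```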
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/ingress-controller-shard.yaml b/content/cluster-installation/hosted-control-plane/tenant-network/ingress-controller-shard.yaml
new file mode 100644
index 00000000..e5c7b064
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/ingress-controller-shard.yaml
@@ -0,0 +1,17 @@
+apiVersion: operator.openshift.io/v1
+kind: IngressController
+metadata:
+ name: tenant-a
+ namespace: openshift-ingress-operator
+spec:
+ domain: tenant-a.coe.muc.redhat.com
+
+ endpointPublishingStrategy:
+ type: NodePortService
+ namespaceSelector:
+ matchExpressions:
+ - key: kubernetes.io/metadata.name
+ operator: In
+ values:
+ - ingress-test
+ - clusters-tenant-a
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/ingress-lb.conf b/content/cluster-installation/hosted-control-plane/tenant-network/ingress-lb.conf
new file mode 100644
index 00000000..518833e1
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/ingress-lb.conf
@@ -0,0 +1,34 @@
+global
+ log 127.0.0.1 local2
+ pidfile /var/run/haproxy.pid
+ maxconn 4000
+ daemon
+defaults
+ mode http
+ log global
+ option dontlognull
+ option http-server-close
+ option redispatch
+ retries 3
+ timeout http-request 10s
+ timeout queue 1m
+ timeout connect 10s
+ timeout client 1m
+ timeout server 1m
+ timeout http-keep-alive 10s
+ timeout check 10s
+ maxconn 3000
+
+listen ingress-router-443
+ bind *:443
+ mode tcp
+ balance source
+ server tenant-a-gngj5-mfwp6 192.168.203.101:30190 check inter 1s
+ server tenant-a-gngj5-rrbmv 192.168.203.102:30190 check inter 1s
+
+listen ingress-router-80
+ bind *:80
+ mode tcp
+ balance source
+ server tenant-a-gngj5-mfwp6 192.168.203.101:30282 check inter 1s
+ server tenant-a-gngj5-rrbmv 192.168.203.102:30282 check inter 1s
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/ingress-shared-haproxy.conf b/content/cluster-installation/hosted-control-plane/tenant-network/ingress-shared-haproxy.conf
new file mode 100644
index 00000000..e111e741
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/ingress-shared-haproxy.conf
@@ -0,0 +1,39 @@
+global
+ log 127.0.0.1 local2
+ pidfile /var/run/haproxy.pid
+ maxconn 4000
+ daemon
+defaults
+ mode http
+ log global
+ option dontlognull
+ option http-server-close
+ option redispatch
+ retries 3
+ timeout http-request 10s
+ timeout queue 1m
+ timeout connect 10s
+ timeout client 1m
+ timeout server 1m
+ timeout http-keep-alive 10s
+ timeout check 10s
+ maxconn 3000
+
+listen ingress-router-443
+ bind *:443
+ mode tcp
+ balance source
+ server ucs-blade-server-5 10.32.96.105:32488 check inter 1s
+ server ucs-blade-server-6 10.32.96.106:32488 check inter 1s
+ server ucs-blade-server-7 10.32.96.107:32488 check inter 1s
+ server ucs-blade-server-8 10.32.96.108:32488 check inter 1s
+
+listen ingress-router-80
+ bind *:80
+ mode tcp
+ balance source
+ server ucs-blade-server-5 10.32.96.105:32460 check inter 1s
+ server ucs-blade-server-6 10.32.96.106:32460 check inter 1s
+ server ucs-blade-server-7 10.32.96.107:32460 check inter 1s
+ server ucs-blade-server-8 10.32.96.108:32460 check inter 1s
+
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/overview.drawio b/content/cluster-installation/hosted-control-plane/tenant-network/overview.drawio
new file mode 100644
index 00000000..af78cf9b
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/overview.drawio
@@ -0,0 +1,264 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/content/cluster-installation/hosted-control-plane/tenant-network/vyos-router-2003.txt b/content/cluster-installation/hosted-control-plane/tenant-network/vyos-router-2003.txt
new file mode 100644
index 00000000..5e09bc44
--- /dev/null
+++ b/content/cluster-installation/hosted-control-plane/tenant-network/vyos-router-2003.txt
@@ -0,0 +1,27 @@
+set firewall group address-group ALLOWED-IPS address '10.32.96.1'
+set firewall group address-group ALLOWED-IPS address '10.32.96.31'
+set firewall group address-group ALLOWED-IPS address '10.32.111.254'
+set firewall ipv4 forward filter rule 49 action 'accept'
+set firewall ipv4 forward filter rule 49 description 'Allow IPs'
+set firewall ipv4 forward filter rule 49 destination group address-group 'ALLOWED-IPS'
+set firewall ipv4 forward filter rule 50 action 'drop'
+set firewall ipv4 forward filter rule 50 description 'Drop entire coe lab'
+set firewall ipv4 forward filter rule 50 destination address '10.32.96.0/20'
+
+set interfaces ethernet eth0 address 'dhcp'
+set interfaces ethernet eth1 address '192.168.203.1/24'
+
+set nat source rule 100 outbound-interface name 'eth0'
+set nat source rule 100 source address '192.168.203.0/24'
+set nat source rule 100 translation address 'masquerade'
+set service dhcp-server listen-interface 'eth1'
+set service dhcp-server shared-network-name coe-2003 authoritative
+set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 option default-router '192.168.203.1'
+set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 option name-server '10.32.96.1'
+set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 range 1 start '192.168.203.100'
+set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 range 1 stop '192.168.203.200'
+set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 subnet-id '1'
+set service ssh
+set system host-name 'router-2003'
+set system name-server '10.32.96.1'
+set system name-server '10.32.96.31'
diff --git a/mkdocs.yml b/mkdocs.yml
index 4e97e62a..758dbf29 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -57,6 +57,7 @@ extra:
extra_javascript:
- https://viewer.diagrams.net/js/viewer-static.min.js
- javascripts/drawio-reload.js
+
# Extensions
markdown_extensions:
- pymdownx.emoji:
@@ -99,6 +100,7 @@ plugins:
verbose: false
- glightbox
- drawio:
+ viewer_js: "https://viewer.diagrams.net/js/viewer-static.min.js"
toolbar: false # control if hovering on a diagram shows a toolbar for zooming or not (default: true)
tooltips: false # control if tooltips will be shown (default: true)
edit: false # control if edit button will be shown in the lightbox view (default: true)
@@ -131,6 +133,7 @@ nav:
- Hosted Control Plane:
- cluster-installation/hosted-control-plane/index.md
- KubeVirt Networking: cluster-installation/hosted-control-plane/kubevirt-networking.md
+ - Tenant Network: cluster-installation/hosted-control-plane/tenant-network/index.md
- Nvidia GPU:
- cluster-installation/gpu/index.md
- GPU on-prem: cluster-installation/gpu/gpu-on-prem.md
diff --git a/requirements.txt b/requirements.txt
index 3502fd9b..56da76a8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8,5 +8,5 @@ git+https://github.com/fralau/mkdocs_macros_plugin.git@v1.3.7
# Only for pre-commit checks not for mkdocs it selfe
pre-commit==4.0.1
mkdocs-git-authors-plugin==0.9.2
-mkdocs-drawio==1.8.2
+mkdocs-drawio==1.15.0
mike==2.1.3
diff --git a/run-local.sh b/run-local.sh
index 25ab3bbc..83382995 100755
--- a/run-local.sh
+++ b/run-local.sh
@@ -1,4 +1,4 @@
podman run -ti --user 0 --rm \
-v $(pwd):/opt/app-root/src:z \
- -p 8080:8080 quay.io/openshift-examples/builder:202601121657
+ -p 8080:8080 quay.io/openshift-examples/builder:202604300846