As more teams adopt Ambient Mode with OpenShift Service Mesh 3 to simplify their service mesh architecture, an important operational consideration often gets overlooked: how to control where the Istio CNI agent runs in your cluster. In a typical OpenShift deployment, the Istio CNI DaemonSet schedules a pod on every node to handle traffic redirection for workloads participating in the mesh. However, there are cases where you don't want every node running these components: for example, specialized infrastructure nodes, GPU-backed worker pools, or isolated system nodes where mesh traffic redirection is unnecessary or even disruptive. OpenShift Service Mesh provides a simple yet powerful way to exclude specific nodes from hosting IstioCNI pods by using a dedicated node label, allowing you to fine-tune your mesh footprint and operational boundaries.
Check existing deployment
In our previous post on getting started with Ambient Mode in OpenShift Service Mesh 3, we explored deploying the Istio, IstioCNI, and Ztunnel components as part of the mesh. By default, the IstioCNI DaemonSet schedules a pod on every node in the cluster. In this post, we'll walk through how to exclude specific nodes from running IstioCNI pods, giving you more control over your cluster's mesh footprint.
The following command output shows all Istio CNI pods running across every node in the cluster.
❯ oc get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName | grep istio-cni
istio-cni-node-68gl4 ip-10-0-9-189.ap-southeast-1.compute.internal
istio-cni-node-9hcs8 ip-10-0-53-158.ap-southeast-1.compute.internal
istio-cni-node-g49tp ip-10-0-33-126.ap-southeast-1.compute.internal
istio-cni-node-gg56v ip-10-0-32-165.ap-southeast-1.compute.internal
istio-cni-node-h7ggt ip-10-0-10-137.ap-southeast-1.compute.internal
istio-cni-node-hzst8 ip-10-0-45-84.ap-southeast-1.compute.internal
Deploy the changes
Next, we'll exclude IstioCNI pods from all master nodes, since those nodes don't run the application workloads that participate in the mesh. To do this, we'll update the IstioCNI custom resource (CR) with the following configuration:
kind: IstioCNI
apiVersion: sailoperator.io/v1
metadata:
  name: default
spec:
  namespace: istio-system
  profile: ambient
  version: v1.27.3
  values:
    cni:
      ambient:
        reconcileIptablesOnStartup: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: maistra.io/exclude-cni
                operator: NotIn
                values:
                - "true"
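The exclusion relies on standard Kubernetes node affinity: a required NotIn term matches every node except those labeled maistra.io/exclude-cni=true. Note that a node with no such label also matches, because NotIn treats a missing key as "not in the set". The following is a minimal sketch of that evaluation logic (illustrative only, not code from the operator):

```python
def matches_not_in(node_labels: dict, key: str, excluded: list) -> bool:
    """Evaluate a NotIn matchExpression: the node matches when the
    label is absent or its value is not in the excluded list."""
    return node_labels.get(key) not in excluded


# A plain worker node (no exclusion label) matches the required term,
# so the DaemonSet schedules a CNI pod on it.
worker = {"kubernetes.io/hostname": "ip-10-0-9-189"}
print(matches_not_in(worker, "maistra.io/exclude-cni", ["true"]))   # True

# A labeled node fails the required term, so the DaemonSet skips it.
master = {"maistra.io/exclude-cni": "true"}
print(matches_not_in(master, "maistra.io/exclude-cni", ["true"]))   # False
```

Because the rule is requiredDuringSchedulingIgnoredDuringExecution, it is enforced at scheduling time; existing CNI pods on a newly labeled node are removed when the DaemonSet controller reconciles.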
Post-change validation
The CR above configures required node affinity so that any node labeled maistra.io/exclude-cni=true is skipped by the IstioCNI DaemonSet. Once you apply the CR and add the label to your chosen nodes, such as the master nodes, IstioCNI pods will no longer be scheduled on them.
❯ oc label node <node-name> maistra.io/exclude-cni=true
node/<node-name> labeled
❯ oc get po -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName | grep istio-cni
istio-cni-node-2wzk4 ip-10-0-9-189.ap-southeast-1.compute.internal
istio-cni-node-b5ptq ip-10-0-10-137.ap-southeast-1.compute.internal
istio-cni-node-rg5rn ip-10-0-45-84.ap-southeast-1.compute.internal
If the master nodes no longer appear in the list, you've successfully excluded IstioCNI pods from them, and the same approach works for any other nodes you label. Congratulations!
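If you'd rather script this check than eyeball the output, a small helper like the sketch below (a hypothetical convenience, not part of any Istio tooling) can parse the captured NAME/NODE columns and flag any istio-cni pod that landed on a node you excluded:

```python
def pods_on_excluded_nodes(oc_output: str, excluded_nodes: set) -> list:
    """Parse `oc get po -o custom-columns=NAME:...,NODE:...` output and
    return the names of istio-cni pods scheduled on excluded nodes."""
    offenders = []
    for line in oc_output.strip().splitlines():
        parts = line.split()
        if len(parts) != 2:
            continue  # skip headers or blank lines
        name, node = parts
        if name.startswith("istio-cni") and node in excluded_nodes:
            offenders.append(name)
    return offenders


# Output captured after applying the CR and labeling the master nodes.
sample = """\
istio-cni-node-2wzk4   ip-10-0-9-189.ap-southeast-1.compute.internal
istio-cni-node-b5ptq   ip-10-0-10-137.ap-southeast-1.compute.internal
istio-cni-node-rg5rn   ip-10-0-45-84.ap-southeast-1.compute.internal
"""
masters = {"ip-10-0-53-158.ap-southeast-1.compute.internal"}
print(pods_on_excluded_nodes(sample, masters))  # [] -> exclusion worked
```

An empty result means no CNI pod is running where it shouldn't be; any names returned point at nodes that still need the label or a reconcile.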
Summary
By using the maistra.io/exclude-cni=true label, you can easily exclude specific nodes from running IstioCNI daemonsets, ensuring a more efficient and controlled OpenShift Service Mesh deployment.
I'm a cloud-native software architect passionate about building resilient, scalable systems. My work focuses on Java and modern frameworks like Quarkus and Spring, microservices architecture, Kubernetes, Service Mesh, and DevSecOps automation. I'm currently working as a Consulting Architect at Red Hat Asia Pacific.