<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:hashnode="https://hashnode.com/rss"><channel><title><![CDATA[On Cloud Nine with Ratnopam]]></title><description><![CDATA[Reduce EKS cross-AZ traffic cost using topology aware hints]]></description><link>https://blog.ratnopamc.com</link><generator>RSS for Node</generator><lastBuildDate>Mon, 02 Dec 2024 20:38:27 GMT</lastBuildDate><atom:link href="https://blog.ratnopamc.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="next" href="https://blog.ratnopamc.com/rss.xml?page=2"/><atom:link rel="previous" href="https://blog.ratnopamc.com/rss.xml"/><item><title><![CDATA[Reduce cross-AZ traffic costs on EKS using topology aware hints]]></title><description><![CDATA[Reduce cross zone traffic cost and network latency using topology-aware-hints in EKS clusters]]></description><link>https://blog.ratnopamc.com/reduce-cross-az-traffic-costs-on-eks-using-topology-aware-hints</link><guid isPermaLink="true">https://blog.ratnopamc.com/reduce-cross-az-traffic-costs-on-eks-using-topology-aware-hints</guid><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Ratnopam Chakrabarti]]></dc:creator><pubDate>Wed, 04 Jan 2023 02:56:37 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;From a high availability standpoint, it&apos;s considered a best practice to spread workloads across multiple nodes in an EKS cluster. In addition to having multiple replicas of the application, one should also consider spreading the workload across multiple Availability Zones to attain high availability and improve reliability. This ensures fault-tolerance and avoids application downtime in the event of a worker node failure. 
One way to achieve this kind of deployment in EKS is to use &lt;code&gt;podAntiAffinity&lt;/code&gt;. For example, the below manifest tells the Kubernetes scheduler to deploy each replica of the Pod on a node that&apos;s in a separate Availability Zone (AZ).&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-az
  labels:
    app: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-server
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: web-app
        image: nginx:1.16-alpine&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The above manifest makes use of the topologyKey &lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt;. It tells the Kubernetes scheduler not to schedule two of these Pods in the same AZ.&lt;/p&gt;&lt;p&gt;Another approach that can be used to spread Pods across AZs is &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/&quot;&gt;Pod Topology Spread Constraints&lt;/a&gt;, which became GA in Kubernetes 1.19.
This mechanism aims to spread Pods evenly across multiple node topologies.&lt;/p&gt;&lt;p&gt;While both of these approaches provide high availability and resiliency for application workloads, customers incur data transfer costs for inter-AZ traffic within an EKS cluster. For large EKS clusters running hundreds of nodes and thousands of Pods, the data transfer costs for cross-AZ traffic can be significant.&lt;/p&gt;&lt;h2 id=&quot;heading-enter-topology-aware-hints&quot;&gt;Enter &lt;em&gt;Topology Aware Hints&lt;/em&gt;&lt;/h2&gt;&lt;p&gt;To address cross-AZ data transfer costs (which come up in many EKS conversations on cost optimization), Pods running in a cluster must be able to perform topology-aware routing based on Availability Zone. And this is precisely what &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/&quot;&gt;Topology Aware Hints&lt;/a&gt; helps achieve. Topology Aware Hints provides a mechanism to help keep traffic within the zone it originated from. Prior to topology aware hints, Service &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/service-topology/#examples&quot;&gt;topology keys&lt;/a&gt; could be used for similar functionality. Topology keys were deprecated in Kubernetes 1.21 in favor of topology aware hints, which was introduced as alpha in the same release and graduated to &quot;beta&quot; in Kubernetes 1.23.
With &lt;a target=&quot;_blank&quot; href=&quot;https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-eks-eks-distro-support-kubernetes-version-1-24/&quot;&gt;EKS 1.24&lt;/a&gt;, however, this feature is enabled by default, and EKS customers can leverage it to keep Kubernetes service traffic within the same AZ.&lt;/p&gt;&lt;p&gt;Let&apos;s dive in further and see this in action!&lt;/p&gt;&lt;p&gt;For the purposes of this blog post, let&apos;s create a three-node EKS cluster.&lt;/p&gt;&lt;p&gt;Type the following command in your Cloud9 terminal.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt; eks-config.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: topology-demo-cluster
  region: us-west-2
  version: &quot;1.24&quot;
managedNodeGroups:
  - name: appservers
    instanceType: t3.xlarge
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    labels: { role: appservers }
    volumeSize: 8
    iam:
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        xRay: true
        cloudWatch: true
        albIngress: true
    ssh:
      enableSsm: true
EOF
eksctl create cluster -f eks-config.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once the cluster is created, check the status of the worker nodes and their distribution across AZs.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl get nodes -L topology.kubernetes.io/zone
NAME                                           STATUS   ROLES    AGE   VERSION               ZONE
ip-192-168-4-149.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   36h   v1.24.7-eks-fb459a0   us-west-2b
ip-192-168-48-125.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   36h   v1.24.7-eks-fb459a0   us-west-2c
ip-192-168-75-68.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   36h   v1.24.7-eks-fb459a0   us-west-2d&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We have each worker node deployed in a separate AZ in our EKS cluster. Let&apos;s now run a sample application in this cluster.&lt;/p&gt;&lt;p&gt;Use the below manifest to deploy three replicas of our sample application in the newly created EKS cluster.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt; app-manifest.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: topology-demo-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getaz
  namespace: topology-demo-ns
  labels:
    app: getaz
spec:
  replicas: 3
  selector:
    matchLabels:
      app: getaz
  template:
    metadata:
      labels:
        app: getaz
      namespace: topology-demo-ns
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: getaz
      containers:
      - name: getaz-container
        image: getazcontainer:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
          name: web-port
        resources:
          requests:
            cpu: &quot;256m&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: getazservice
  namespace: topology-demo-ns
spec:
  selector:
    app: getaz
  ports:
    - port: 80
      targetPort: web-port
      protocol: TCP
EOF
kubectl apply -f app-manifest.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The application manifest creates:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;a Namespace named &quot;topology-demo-ns&quot;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;a Deployment named &quot;getaz&quot; with three Pods.
Each Pod runs a container named &quot;getaz-container&quot;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;a Service named &quot;getazservice&quot;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The &quot;getaz&quot; Pods and the &quot;getazservice&quot; Service all run in the &quot;topology-demo-ns&quot; namespace.&lt;/p&gt;&lt;p&gt;In the above example, we&apos;re using Pod &lt;code&gt;topologySpreadConstraints&lt;/code&gt; with &lt;code&gt;maxSkew&lt;/code&gt; set to 1 and &lt;code&gt;whenUnsatisfiable&lt;/code&gt; set to &quot;DoNotSchedule&quot; to deploy each replica of our sample application in a separate AZ. As the &lt;code&gt;topologyKey&lt;/code&gt;, this example leverages the well-known node label &lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt;, which is assigned to worker nodes in an EKS cluster by default. To see the labels on a worker node in the EKS cluster that we spun up, use the below command&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;$ kubectl describe node ip-192-168-48-125.us-west-2.compute.internal
Name:               ip-192-168-48-125.us-west-2.compute.internal
Roles:              &amp;lt;none&amp;gt;
Labels:             alpha.eksctl.io/cluster-name=topology-demo-cluster
                    alpha.eksctl.io/nodegroup-name=appservers
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=t3.xlarge
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/capacityType=ON_DEMAND
                    eks.amazonaws.com/nodegroup=appservers
                    eks.amazonaws.com/nodegroup-image=ami-0b149b4c68ab69dce
                    eks.amazonaws.com/sourceLaunchTemplateId=lt-0a47ee5069d44e8d4
                    eks.amazonaws.com/sourceLaunchTemplateVersion=1
                    failure-domain.beta.kubernetes.io/region=us-west-2
                    failure-domain.beta.kubernetes.io/zone=us-west-2c
                    k8s.io/cloud-provider-aws=8d60a23f89f8b00a31bfef5d05edc662
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-192-168-48-125.us-west-2.compute.internal
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=t3.xlarge
                    role=appservers
                    topology.kubernetes.io/region=us-west-2
                    topology.kubernetes.io/zone=us-west-2c&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the &lt;code&gt;topologySpreadConstraints&lt;/code&gt; section of the example manifest,&lt;/p&gt;&lt;p&gt;&lt;strong&gt;maxSkew&lt;/strong&gt; defines the degree to which Pods may be distributed unevenly. The field is required, and its value must be greater than zero. Its semantics vary depending on the value of the &lt;em&gt;whenUnsatisfiable&lt;/em&gt; field.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;whenUnsatisfiable&lt;/strong&gt; specifies how to handle a Pod placement that does not satisfy the spread constraint:&lt;/p&gt;&lt;p&gt;- &lt;em&gt;DoNotSchedule&lt;/em&gt; (the default value) instructs the scheduler not to schedule it.&lt;/p&gt;&lt;p&gt;- &lt;em&gt;ScheduleAnyway&lt;/em&gt; instructs the scheduler to schedule it anyway while prioritizing nodes with the lowest skew.&lt;/p&gt;&lt;p&gt;Let&apos;s check the status and spread of our application Pods.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl get po -n topology-demo-ns -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
getaz-9685bbd44-65wcn   1/1     Running   0          2m    192.168.63.154   ip-192-168-48-125.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
getaz-9685bbd44-kf7gs   1/1     Running   0          2m    192.168.69.57    ip-192-168-75-68.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
getaz-9685bbd44-tjqkd   1/1     Running   0          2m    192.168.24.149   ip-192-168-4-149.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We see from the above output that each replica is running on a separate node, and since each node runs in a separate AZ, we effectively have three Pods each running in a different AZ in the EKS cluster.&lt;/p&gt;&lt;p&gt;To get detailed information about &lt;code&gt;topologySpreadConstraints&lt;/code&gt;, you can use the &lt;code&gt;kubectl explain Pod.spec.topologySpreadConstraints&lt;/code&gt; command. You can mix and match these attributes to achieve different spread topologies.&lt;/p&gt;&lt;p&gt;Let us now check the &lt;code&gt;Service&lt;/code&gt; that was created by deploying the app-manifest.yaml file.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl -n topology-demo-ns get svc
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
getazservice   ClusterIP   10.100.9.165   &amp;lt;none&amp;gt;        80/TCP    53m

kubectl -n topology-demo-ns describe svc getazservice
Name:              getazservice
Namespace:         topology-demo-ns
Labels:            &amp;lt;none&amp;gt;
Annotations:       &amp;lt;none&amp;gt;
Selector:          app=getaz
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.9.165
IPs:               10.100.9.165
Port:              &amp;lt;unset&amp;gt;  80/TCP
TargetPort:        web-port/TCP
Endpoints:         192.168.24.149:3000,192.168.63.154:3000,192.168.69.57:3000
Session Affinity:  None
Events:            &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We have a &lt;code&gt;Service&lt;/code&gt; named &quot;getazservice&quot; of type &lt;code&gt;ClusterIP&lt;/code&gt; deployed.
The Service doesn&apos;t have any &lt;code&gt;Annotations&lt;/code&gt; set on it.&lt;/p&gt;&lt;p&gt;As a next step, let&apos;s deploy a test container that we&apos;re going to use to call &quot;getazservice&quot; and check whether we can spot any inter-AZ calls.&lt;/p&gt;&lt;p&gt;Use the below command to deploy a curl container and ensure &lt;code&gt;curl&lt;/code&gt; is installed.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl run curl-debug --image=radial/busyboxplus:curl -l &quot;type=debug&quot; -n topology-demo-ns -it --tty -- sh
# check if curl is installed
curl --version
# exit the container
exit&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once the debug container is running, create a bash script to call &quot;getazservice&quot; in a loop and print the Availability Zone of the Pod that responded to each call.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl exec -it --tty -n topology-demo-ns $(kubectl get pod -l &quot;type=debug&quot; -n topology-demo-ns -o jsonpath=&apos;{.items[0].metadata.name}&apos;) -- sh

# create a test script and call the service
cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt; test.sh
n=1
while [ \$n -le 5 ]
do
    curl -s getazservice.topology-demo-ns
    sleep 1
    echo &quot;---&quot;
    n=\$(( n+1 ))
done
EOF
chmod +x test.sh
clear
./test.sh
# exit the test container
exit&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Running the test script in the debug container should produce an output like the below, which shows that calls to the &quot;getazservice&quot; Service and its backing Pods are distributed across AZs.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;us-west-2d
---
us-west-2b
---
us-west-2d
---
us-west-2c
---
us-west-2d
---&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The load-balancing and forwarding logic of the service call in this case depends on the &lt;code&gt;kube-proxy&lt;/code&gt; mode. EKS by default uses the &quot;iptables&quot; mode of &lt;code&gt;kube-proxy&lt;/code&gt;. When the &lt;code&gt;curl-debug&lt;/code&gt; container sends the &lt;code&gt;curl&lt;/code&gt; request to the &quot;getazservice&quot; virtual IP, the packet is processed by the iptables rules on that worker node, which are configured by &lt;code&gt;kube-proxy&lt;/code&gt;. A Pod backing the &quot;getazservice&quot; &lt;code&gt;Service&lt;/code&gt; is then chosen at random by default.
For detailed documentation on the different kube-proxy modes (iptables, ipvs), please refer to the &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/reference/networking/virtual-ips/&quot;&gt;Kubernetes Documentation&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;To avoid this &quot;randomness&quot; of routing and reduce both the cost of inter-AZ traffic and network latency, topology aware hints can be activated for the &lt;code&gt;Service&lt;/code&gt; to ensure that a service call is routed to a Pod that resides in the same AZ as the Pod the request originated from.&lt;/p&gt;&lt;p&gt;To enable topology-aware routing, simply set the &lt;code&gt;service.kubernetes.io/topology-aware-hints&lt;/code&gt; annotation to &quot;auto&quot; on the &quot;getazservice&quot; Service as below and re-deploy the manifest.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;apiVersion: v1
kind: Service
metadata:
  name: getazservice
  namespace: topology-demo-ns
  annotations:
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: getaz
  ports:
    - port: 80
      targetPort: web-port
      protocol: TCP&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When we describe the Service, we see the &lt;code&gt;Annotation&lt;/code&gt; associated with it.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl -n topology-demo-ns describe svc getazservice
Name:              getazservice
Namespace:         topology-demo-ns
Labels:            &amp;lt;none&amp;gt;
Annotations:       service.kubernetes.io/topology-aware-hints: auto
Selector:          app=getaz
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.9.165
IPs:               10.100.9.165
Port:              &amp;lt;unset&amp;gt;  80/TCP
TargetPort:        web-port/TCP
Endpoints:         192.168.24.149:3000,192.168.63.154:3000,192.168.69.57:3000
Session Affinity:  None
Events:            &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If we run the same test as before with the debug container, this time we should see an output similar to the below&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;us-west-2b
---
us-west-2b
---
us-west-2b
---
us-west-2b
---
us-west-2b
---&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This shows that calls to &quot;getazservice&quot; are consistently picked up by the backing Pod that resides in the same AZ as the requester Pod. Topology aware routing in this case is enabled by the &lt;code&gt;EndpointSlice&lt;/code&gt; controller and the &lt;code&gt;kube-proxy&lt;/code&gt; components.
The &lt;code&gt;EndpointSlice&lt;/code&gt; API in Kubernetes provides a way to track network endpoints within a cluster. &lt;code&gt;EndpointSlices&lt;/code&gt; offer a more scalable and extensible alternative to &lt;code&gt;Endpoints&lt;/code&gt; and have been stable since Kubernetes 1.21. When calculating the endpoints for a &lt;code&gt;Service&lt;/code&gt; that&apos;s annotated with &lt;code&gt;service.kubernetes.io/topology-aware-hints: auto&lt;/code&gt;, the &lt;code&gt;EndpointSlice&lt;/code&gt; controller considers the topology (region and zone) of each &lt;code&gt;Service&lt;/code&gt; endpoint and populates the &lt;code&gt;hints&lt;/code&gt; field to allocate it to a zone. Once the &quot;hints&quot; are populated, &lt;code&gt;kube-proxy&lt;/code&gt; can consume them and use them to influence how traffic is routed (favoring topologically closer endpoints).&lt;/p&gt;&lt;p&gt;This solution reduces inter-AZ traffic routing and in turn lowers the cross-AZ data transfer costs in an EKS cluster. By enabling &quot;intelligent&quot; routing, it also helps reduce network latency. While this approach works well in most cases, sometimes the &lt;code&gt;EndpointSlice&lt;/code&gt; controller allocates endpoints from a different zone to ensure a more even distribution of endpoints between zones. This results in some traffic being routed to other zones. Thus, when using topology aware hints, it&apos;s important to keep application Pods balanced across AZs using topology spread constraints to avoid imbalances in the amount of traffic handled by each Pod. Additionally, there are some other &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/#safeguards&quot;&gt;safeguards and constraints&lt;/a&gt; that one should be aware of before using this approach.
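&lt;/p&gt;&lt;p&gt;To make the &lt;code&gt;hints&lt;/code&gt; field concrete, below is a trimmed, hypothetical sketch of what an &lt;code&gt;EndpointSlice&lt;/code&gt; for &quot;getazservice&quot; could look like once the controller has populated zone hints. The slice name is auto-generated (the suffix here is made up), and only one of the three endpoints is shown.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  # name is generated by the controller; shown here for illustration only
  name: getazservice-abc12
  namespace: topology-demo-ns
  labels:
    kubernetes.io/service-name: getazservice
addressType: IPv4
endpoints:
  - addresses:
      - &quot;192.168.24.149&quot;
    zone: us-west-2b
    # the controller allocates this endpoint to serve traffic from us-west-2b
    hints:
      forZones:
        - name: us-west-2b
ports:
  - port: 3000
    protocol: TCP&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can inspect the hints on the live slices with &lt;code&gt;kubectl -n topology-demo-ns get endpointslices -l kubernetes.io/service-name=getazservice -o yaml&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;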
As alternative solutions, one can use Service Mesh technologies like Istio or Linkerd to achieve topology-aware routing; however, service-mesh-based solutions present additional complexity for cluster operators to manage. In comparison, using topology-aware-hints is much simpler to implement, is supported out-of-the-box in EKS 1.24, and works great at reducing cross-AZ traffic costs within an EKS cluster.&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;p&gt;From a high availability standpoint, it&apos;s considered a best practice to spread workloads across multiple nodes in an EKS cluster. In addition to having multiple replicas of the application, one should also consider spreading the workload across multiple Availability Zones to attain high availability and improve reliability. This ensures fault-tolerance and avoids application downtime in the event of a worker node failure. One way to achieve this kind of deployment in EKS is to use &lt;code&gt;podAntiAffinity&lt;/code&gt;. 
For example, the manifest below tells the EKS scheduler to deploy each replica of the Pod on a node that&apos;s in a separate Availability Zone (AZ).&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apps/v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Deployment&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;spread-az&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;labels:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-server&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;replicas:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;3&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;selector:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;matchLabels:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-server&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;template:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;labels:&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-server&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;affinity:&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;podAntiAffinity:&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;requiredDuringSchedulingIgnoredDuringExecution:&lt;/span&gt;          &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span 
class=&quot;hljs-attr&quot;&gt;labelSelector:&lt;/span&gt;              &lt;span class=&quot;hljs-attr&quot;&gt;matchExpressions:&lt;/span&gt;              &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;key:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;app&lt;/span&gt;                &lt;span class=&quot;hljs-attr&quot;&gt;operator:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;In&lt;/span&gt;                &lt;span class=&quot;hljs-attr&quot;&gt;values:&lt;/span&gt;                &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-server&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;topologyKey:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology.kubernetes.io/zone&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;containers:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-app&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;nginx:1.16-alpine&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The above manifest makes use of the topologyKey &lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt;. It tells the Kubernetes scheduler not to schedule two of these Pods in the same AZ.&lt;/p&gt;&lt;p&gt;One of the other approaches that can be used to spread Pods across AZs is to use &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/&quot;&gt;Pod Topology Spread Constraints&lt;/a&gt;, which went GA in Kubernetes 1.19. 
This mechanism aims to spread pods evenly across multiple node topology domains.&lt;/p&gt;&lt;p&gt;While both of these approaches provide high-availability and resiliency for application workloads, customers incur costs for data transfers in inter-AZ traffic routing within an EKS cluster. For large EKS clusters running hundreds of nodes and thousands of pods, the data transfer costs for cross-AZ traffic can be significant.&lt;/p&gt;&lt;h2 id=&quot;heading-enter-topology-aware-hints&quot;&gt;Enter &lt;em&gt;Topology Aware Hints&lt;/em&gt;&lt;/h2&gt;&lt;p&gt;To address cross-AZ data transfer costs (which come up in many EKS conversations on cost optimization), pods running in a cluster must be able to perform topology-aware routing based on Availability Zone. And this is precisely what &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/&quot;&gt;Topology Aware Hints&lt;/a&gt; helps achieve. Topology Aware Hints provides a mechanism to help keep traffic within the zone it originated from. Prior to topology-aware-hints, Service &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/service-topology/#examples&quot;&gt;topology-keys&lt;/a&gt; could be used for similar functionality. That feature was deprecated in Kubernetes 1.21 in favor of topology-aware-hints, which was introduced in Kubernetes 1.21 and became &quot;beta&quot; in Kubernetes 1.23. 
With &lt;a target=&quot;_blank&quot; href=&quot;https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-eks-eks-distro-support-kubernetes-version-1-24/&quot;&gt;EKS 1.24&lt;/a&gt;, however, this feature is enabled by default, and EKS users and customers can leverage it to keep Kubernetes service traffic within the same AZ.&lt;/p&gt;&lt;p&gt;Let&apos;s dive in further and see this in action!&lt;/p&gt;&lt;p&gt;For the purposes of this blog post, let&apos;s create a three-node EKS cluster.&lt;/p&gt;&lt;p&gt;Type the following command in your Cloud9 terminal.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-string&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&amp;lt;&amp;lt;EOF&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;eks-config.yaml&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;eksctl.io/v1alpha5&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;ClusterConfig&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology-demo-cluster&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;region:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;us-west-2&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;version:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;1.24&quot;&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;managedNodeGroups:&lt;/span&gt;  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;appservers&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;instanceType:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;t3.xlarge&lt;/span&gt;    &lt;span 
class=&quot;hljs-attr&quot;&gt;desiredCapacity:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;3&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;minSize:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;maxSize:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;4&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;labels:&lt;/span&gt; { &lt;span class=&quot;hljs-attr&quot;&gt;role:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;appservers&lt;/span&gt; }    &lt;span class=&quot;hljs-attr&quot;&gt;volumeSize:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;8&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;iam:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;withAddonPolicies:&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;imageBuilder:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;autoScaler:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;xRay:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;cloudWatch:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;albIngress:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;ssh:&lt;/span&gt;       &lt;span class=&quot;hljs-attr&quot;&gt;enableSsm:&lt;/span&gt; &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;eksctl&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;create&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cluster&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-f&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;eks-config.yaml&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once the cluster is created, check the status of the worker nodes and their distribution across AZs.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl get nodes -L topology.kubernetes.io/zoneNAME                                           STATUS   ROLES    AGE   VERSION               ZONEip-192-168-4-149.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   36h   v1.24.7-eks-fb459a0   us-west-2bip-192-168-48-125.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   36h   v1.24.7-eks-fb459a0   us-west-2cip-192-168-75-68.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   36h   v1.24.7-eks-fb459a0   us-west-2d&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We now have each worker node deployed in a separate AZ in our EKS cluster. Let&apos;s now try to run a sample application in this cluster.&lt;/p&gt;&lt;p&gt;Use the below application manifest to deploy three replicas of our sample application in the newly created EKS cluster.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-string&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&amp;lt;&amp;lt;EOF&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;app-manifest.yaml&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Namespace&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;   &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology-demo-ns&lt;/span&gt;&lt;span class=&quot;hljs-meta&quot;&gt;---&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apps/v1&lt;/span&gt;&lt;span 
class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Deployment&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;namespace:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology-demo-ns&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;labels:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;replicas:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;3&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;selector:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;matchLabels:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;template:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;labels:&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;namespace:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology-demo-ns&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;topologySpreadConstraints:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;maxSkew:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;topologyKey:&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;topology.kubernetes.io/zone&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;whenUnsatisfiable:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;DoNotSchedule&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;labelSelector:&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;matchLabels:&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;containers:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz-container&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;image:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getazcontainer:latest&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;imagePullPolicy:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Always&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;ports:&lt;/span&gt;        &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;containerPort:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;3000&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-port&lt;/span&gt;        &lt;span class=&quot;hljs-attr&quot;&gt;resources:&lt;/span&gt;          &lt;span class=&quot;hljs-attr&quot;&gt;requests:&lt;/span&gt;            &lt;span class=&quot;hljs-attr&quot;&gt;cpu:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;256m&quot;&lt;/span&gt;&lt;span class=&quot;hljs-meta&quot;&gt;---&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span 
class=&quot;hljs-string&quot;&gt;Service&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getazservice&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;namespace:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology-demo-ns&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;selector:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;ports:&lt;/span&gt;    &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;port:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;80&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;targetPort:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-port&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;protocol:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;TCP&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;kubectl&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;apply&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;-f&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;app-manifest.yaml&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The application manifest creates -&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;a Namespace named &quot;topology-demo-ns&quot;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;a Deployment named &quot;getaz&quot; with three Pods. 
Each Pod runs a container named &quot;getaz-container&quot;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;a Service named &quot;getazservice&quot;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The &quot;getaz&quot; Pods and Service &quot;getazservice&quot; all run in the &quot;topology-demo-ns&quot; namespace.&lt;/p&gt;&lt;p&gt;In the above example, we&apos;re using the Pod &lt;code&gt;topologySpreadConstraints&lt;/code&gt; with &lt;code&gt;maxSkew&lt;/code&gt; set to 1 and &lt;code&gt;whenUnsatisfiable&lt;/code&gt; set to &quot;DoNotSchedule&quot; to deploy each replica of our sample application in a separate AZ. This example leverages a well-known node label, &lt;code&gt;topology.kubernetes.io/zone&lt;/code&gt;, which worker nodes in an EKS cluster are assigned by default, as the &lt;code&gt;topologyKey&lt;/code&gt; in the pod topology spread constraint. To get the labels on a worker node in the EKS cluster that we spun up, use the below command:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;$ kubectl describe node ip-192-168-48-125.us-west-2.compute.internalName:               ip-192-168-48-125.us-west-2.compute.internalRoles:              &amp;lt;none&amp;gt;Labels:             alpha.eksctl.io/cluster-name=topology-demo-cluster                    alpha.eksctl.io/nodegroup-name=appservers                    beta.kubernetes.io/arch=amd64                    beta.kubernetes.io/instance-type=t3.xlarge                    beta.kubernetes.io/os=linux                    eks.amazonaws.com/capacityType=ON_DEMAND                    eks.amazonaws.com/nodegroup=appservers                    eks.amazonaws.com/nodegroup-image=ami-0b149b4c68ab69dce                    eks.amazonaws.com/sourceLaunchTemplateId=lt-0a47ee5069d44e8d4                    eks.amazonaws.com/sourceLaunchTemplateVersion=1                    failure-domain.beta.kubernetes.io/region=us-west-2                    failure-domain.beta.kubernetes.io/zone=us-west-2c                    
k8s.io/cloud-provider-aws=8d60a23f89f8b00a31bfef5d05edc662                    kubernetes.io/arch=amd64                    kubernetes.io/hostname=ip-192-168-48-125.us-west-2.compute.internal                    kubernetes.io/os=linux                    node.kubernetes.io/instance-type=t3.xlarge                    role=appservers                    topology.kubernetes.io/region=us-west-2                    topology.kubernetes.io/zone=us-west-2c&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the &lt;code&gt;topologySpreadConstraints&lt;/code&gt; section of the example manifest,&lt;/p&gt;&lt;p&gt;&lt;strong&gt;maxSkew&lt;/strong&gt; defines the degree to which Pods may be distributed unevenly. This field is required, and its value must be greater than zero. Its semantics vary depending on the value of the &lt;em&gt;whenUnsatisfiable&lt;/em&gt; field.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;whenUnsatisfiable&lt;/strong&gt; specifies how to handle a Pod placement that does not satisfy the spread constraint:&lt;/p&gt;&lt;p&gt;- &lt;em&gt;DoNotSchedule&lt;/em&gt; (the default value) instructs the scheduler not to schedule it.&lt;/p&gt;&lt;p&gt;- &lt;em&gt;ScheduleAnyway&lt;/em&gt; instructs the scheduler to continue scheduling it while prioritizing Nodes with the lowest skew.&lt;/p&gt;&lt;p&gt;Let&apos;s check the status and spread of our application pods.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl get po -n topology-demo-ns -o wideNAME                    READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATESgetaz-9685bbd44-65wcn   1/1     Running   0          2m    192.168.63.154   ip-192-168-48-125.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;getaz-9685bbd44-kf7gs   1/1     Running   0          2m    192.168.69.57    ip-192-168-75-68.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;getaz-9685bbd44-tjqkd   1/1     Running  
 0          2m    192.168.24.149   ip-192-168-4-149.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We see from the above output that each replica is running on a separate node, and since each node is in a separate AZ, we effectively have three pods each running in a different AZ in the EKS cluster.&lt;/p&gt;&lt;p&gt;For detailed information about &lt;code&gt;topologySpreadConstraints&lt;/code&gt;, you can use the &lt;code&gt;kubectl explain Pod.spec.topologySpreadConstraints&lt;/code&gt; command. You can mix and match these attributes to achieve different spread topologies.&lt;/p&gt;&lt;p&gt;Let us now check the &lt;code&gt;Service&lt;/code&gt; that got created by deploying the app-manifest.yaml file.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl -n topology-demo-ns get svcNAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGEgetazservice   ClusterIP   10.100.9.165   &amp;lt;none&amp;gt;        80/TCP    53mkubectl -n topology-demo-ns describe svc getazservice Name:              getazserviceNamespace:         topology-demo-nsLabels:            &amp;lt;none&amp;gt;Annotations:       &amp;lt;none&amp;gt;Selector:          app=getazType:              ClusterIPIP Family Policy:  SingleStackIP Families:       IPv4IP:                10.100.9.165IPs:               10.100.9.165Port:              &amp;lt;&lt;span class=&quot;hljs-built_in&quot;&gt;unset&lt;/span&gt;&amp;gt;  80/TCPTargetPort:        web-port/TCPEndpoints:         192.168.24.149:3000,192.168.63.154:3000,192.168.69.57:3000Session Affinity:  NoneEvents:            &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;We have a &lt;code&gt;Service&lt;/code&gt; named &quot;getazservice&quot; of &lt;code&gt;ClusterIP&lt;/code&gt; type deployed. 
The service doesn&apos;t have any &lt;code&gt;Annotations&lt;/code&gt; set on it.&lt;/p&gt;&lt;p&gt;As a next step, let&apos;s deploy a test container that we&apos;re going to use to call &quot;getazservice&quot; and check if there are any inter-AZ calls we can spot.&lt;/p&gt;&lt;p&gt;Use the below command to deploy a curl container and ensure &lt;code&gt;curl&lt;/code&gt; is installed.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl run curl-debug --image=radial/busyboxplus:curl -l &lt;span class=&quot;hljs-string&quot;&gt;&quot;type=debug&quot;&lt;/span&gt; -n topology-demo-ns -it -- sh&lt;span class=&quot;hljs-comment&quot;&gt;# check if curl is installed&lt;/span&gt;curl --version&lt;span class=&quot;hljs-comment&quot;&gt;#exit the container&lt;/span&gt;&lt;span class=&quot;hljs-built_in&quot;&gt;exit&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once the debug container is running, create a bash script to call &quot;getazservice&quot; in a loop and print the Availability Zone of the pod that responded to the call.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl &lt;span class=&quot;hljs-built_in&quot;&gt;exec&lt;/span&gt; -it -n topology-demo-ns $(kubectl get pod -l  &lt;span class=&quot;hljs-string&quot;&gt;&quot;type=debug&quot;&lt;/span&gt; -n topology-demo-ns -o  jsonpath=&lt;span class=&quot;hljs-string&quot;&gt;&apos;{.items[0].metadata.name}&apos;&lt;/span&gt;) -- sh &lt;span class=&quot;hljs-comment&quot;&gt;#create a test script and call service&lt;/span&gt; cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt; test.sh n=1 &lt;span class=&quot;hljs-keyword&quot;&gt;while&lt;/span&gt; [ \&lt;span class=&quot;hljs-variable&quot;&gt;$n&lt;/span&gt; -le 5 ] &lt;span class=&quot;hljs-keyword&quot;&gt;do&lt;/span&gt;     curl -s getazservice.topology-demo-ns     sleep 1     &lt;span class=&quot;hljs-built_in&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;---&quot;&lt;/span&gt;     n=\$(( n+
class=&quot;hljs-number&quot;&gt;1&lt;/span&gt; )) &lt;span class=&quot;hljs-keyword&quot;&gt;done&lt;/span&gt;EOF chmod +x test.sh clear ./test.sh &lt;span class=&quot;hljs-comment&quot;&gt;#exit the test container&lt;/span&gt; &lt;span class=&quot;hljs-built_in&quot;&gt;exit&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Running the test script in the debug container should produce an output like the one below, which shows that the calls to the &quot;getazservice&quot; Service and its backing Pods are distributed across AZs.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;us-west-2d---us-west-2b---us-west-2d---us-west-2c---us-west-2d---&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The load-balancing and forwarding logic of the service call in this case is based on the &lt;code&gt;kube-proxy&lt;/code&gt; mode. EKS by default uses the &quot;iptables&quot; mode of &lt;code&gt;kube-proxy&lt;/code&gt;. When the &lt;code&gt;curl-debug&lt;/code&gt; container sends the &lt;code&gt;curl&lt;/code&gt; request to the &quot;getazservice&quot; virtual IP, the packet is then processed by the iptables rules on that worker node, which are configured by &lt;code&gt;kube-proxy&lt;/code&gt;. A Pod backing the &quot;getazservice&quot; &lt;code&gt;Service&lt;/code&gt; is then chosen at random by default. 
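To see where this randomness comes from, you can dump the nat table on a worker node (for example, with `sudo iptables-save -t nat` over an SSM session). kube-proxy generates one rule per endpoint in the Service chain, selected via the `statistic` match. The fragment below is an illustrative sketch for a three-endpoint Service; the chain-name suffixes are hypothetical hashes that will differ on your nodes:

```
# Illustrative kube-proxy iptables rules (chain suffixes are hypothetical)
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.33333 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.50000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-CCCCCCCCCCCCCCCC
```

Each endpoint ends up with an equal overall chance of being chosen: the first rule matches 1/3 of the time, the second matches 1/2 of the remaining 2/3, and the last rule catches everything else.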
For detailed documentation on the different kube-proxy modes (iptables, IPVS), please refer to the &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/reference/networking/virtual-ips/&quot;&gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;To avoid this &quot;randomness&quot; of routing and reduce the cost of inter-AZ traffic routing and network latency, topology-aware-hints can be activated for the &lt;code&gt;Service&lt;/code&gt; to ensure that the service call is routed to a Pod that resides in the same AZ as the Pod from which the request originated.&lt;/p&gt;&lt;p&gt;To enable topology-aware routing, simply set the &lt;code&gt;service.kubernetes.io/topology-aware-hints&lt;/code&gt; annotation to &quot;auto&quot; on the &quot;getazservice&quot; Service as below and re-deploy the manifest.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;v1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Service&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getazservice&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;namespace:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;topology-demo-ns&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;annotations:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;service.kubernetes.io/topology-aware-hints:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;auto&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;selector:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;app:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;getaz&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;ports:&lt;/span&gt;    &lt;span 
class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;port:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;80&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;targetPort:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;web-port&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;protocol:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;TCP&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When we describe the Service, we see the &lt;code&gt;Annotation&lt;/code&gt; associated with it.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl -n topology-demo-ns describe svc getazservice Name:              getazserviceNamespace:         topology-demo-nsLabels:            &amp;lt;none&amp;gt;Annotations:       service.kubernetes.io/topology-aware-hints: autoSelector:          app=getazType:              ClusterIPIP Family Policy:  SingleStackIP Families:       IPv4IP:                10.100.9.165IPs:               10.100.9.165Port:              &amp;lt;&lt;span class=&quot;hljs-built_in&quot;&gt;unset&lt;/span&gt;&amp;gt;  80/TCPTargetPort:        web-port/TCPEndpoints:         192.168.24.149:3000,192.168.63.154:3000,192.168.69.57:3000Session Affinity:  NoneEvents:            &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If we run the same test as before with the debug container, this time we should see an output similar to the one below:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;us-west-2b---us-west-2b---us-west-2b---us-west-2b---us-west-2b---&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This shows that the calls to &quot;getazservice&quot; are consistently picked up by the backing Pod that resides in the same AZ as the requester Pod. Topology aware routing in this case is enabled by the &lt;code&gt;EndpointSlice&lt;/code&gt; controller and the &lt;code&gt;kube-proxy&lt;/code&gt; components. 
The &lt;code&gt;EndpointSlice&lt;/code&gt; API in Kubernetes provides a way to track network endpoints within a cluster. &lt;code&gt;EndpointSlices&lt;/code&gt; offer a more scalable and extensible alternative to &lt;code&gt;Endpoints&lt;/code&gt; and have been generally available since Kubernetes 1.21. When calculating the endpoints for a &lt;code&gt;Service&lt;/code&gt; that&apos;s annotated with &lt;code&gt;service.kubernetes.io/topology-aware-hints: auto&lt;/code&gt;, the &lt;code&gt;EndpointSlice&lt;/code&gt; controller considers the topology (region and zone) of each &lt;code&gt;Service&lt;/code&gt; endpoint and populates the &lt;code&gt;hints&lt;/code&gt; field to allocate it to a zone. Once the &quot;hints&quot; are populated, &lt;code&gt;kube-proxy&lt;/code&gt; can then consume these hints and use them to influence how the traffic is routed (favoring topologically closer endpoints).&lt;/p&gt;&lt;p&gt;This solution reduces inter-AZ traffic routing and in turn lowers the cross-AZ data transfer costs in an EKS cluster. By enabling &quot;intelligent&quot; routing, it also helps reduce the network latency. While this approach works well in most cases, sometimes the &lt;code&gt;EndpointSlice&lt;/code&gt; controller allocates endpoints from a different zone to ensure more even distribution of endpoints between zones. This results in some traffic being routed to other zones. Thus, when using topology aware hints, it&apos;s important to have application pods balanced across AZs using Topology Spread Constraints to avoid imbalances in the amount of traffic handled by each pod. Additionally, there are some other &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/#safeguards&quot;&gt;safeguards and constraints&lt;/a&gt; that one should be aware of before using this approach. 
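You can see the hints that the controller populated by inspecting the EndpointSlice objects for the Service, for example with `kubectl get endpointslices -n topology-demo-ns -o yaml`. The fragment below is an illustrative sketch of what one endpoint entry should look like; the address and zone values here are examples and will vary by cluster:

```yaml
# Illustrative EndpointSlice fragment (discovery.k8s.io/v1); values are examples only
addressType: IPv4
endpoints:
- addresses:
  - "192.168.24.149"
  conditions:
    ready: true
  zone: us-west-2b
  hints:
    forZones:
    - name: us-west-2b   # kube-proxy favors this endpoint for traffic originating in us-west-2b
```

If the `hints` block is missing, one of the documented safeguards has likely disabled hint allocation for the Service.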
As alternative solutions, one can use Service Mesh technologies like Istio or Linkerd to achieve topology-aware routing; however, service-mesh-based solutions present additional complexity for cluster operators to manage. In comparison, using topology-aware-hints is much simpler to implement, is supported out-of-the-box in EKS 1.24, and works well in reducing cross-AZ traffic costs within an EKS cluster.&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/3b1ffa1d6b218b1b41368e3a38e45f61.jpeg</hashnode:coverImage></item><item><title><![CDATA[Taking Amazon EKS Anywhere for a spin]]></title><description><![CDATA[Prelude
Before we discuss EKS Anywhere, it's useful to have a basic idea of Amazon EKS and, more importantly, Amazon EKS Distro.
Amazon Elastic Kubernetes Service a.k.a. Amazon EKS is a managed kubernetes service from AWS to run and scale Kube...]]></description><link>https://blog.ratnopamc.com/taking-amazon-eks-anywhere-for-a-spin</link><guid isPermaLink="true">https://blog.ratnopamc.com/taking-amazon-eks-anywhere-for-a-spin</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[EKS]]></category><dc:creator><![CDATA[Ratnopam Chakrabarti]]></dc:creator><pubDate>Sun, 26 Jun 2022 10:14:49 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;h1 id=&quot;heading-prelude&quot;&gt;Prelude&lt;/h1&gt;&lt;p&gt;Before we discuss about EKS Anywhere, it&apos;s useful to have a basic idea about Amazon EKS and more importantly Amazon EKS Distro.&lt;/p&gt;&lt;p&gt;Amazon Elastic Kubernetes Service a.k.a. Amazon EKS is a managed kubernetes service from AWS to run and scale Kubernetes applications on AWS cloud platform. EKS uses Amazon EKS Distro otherwise known as EKS-D to create reliable and secure Kubernetes clusters.&lt;/p&gt;&lt;p&gt;EKS Distro includes binaries and containers of open-source Kubernetes, etcd (cluster configuration database), networking, and storage plugins, tested for compatibility.&lt;/p&gt;&lt;p&gt;Maintaining and running your own Kubernetes clusters takes a lot of effort for teams in tracking updates, figuring out compatibility between different kubernetes versions and simply keeping up-to-date with upstream kubernetes release cadence. This is where EKS-D comes to the rescue. EKS Distro reduces the need to track updates, determine compatibility, and standardize on a common Kubernetes version across teams.&lt;/p&gt;&lt;h1 id=&quot;heading-what-is-eks-anywhere&quot;&gt;WHAT is EKS Anywhere&lt;/h1&gt;&lt;p&gt;With that basic understanding of EKS and EKS-D, let&apos;s now take a closer look at EKS-Anywhere. 
&lt;/p&gt;&lt;p&gt;Amazon EKS Anywhere is a new, open-source deployment option for Amazon EKS that builds on the strengths of EKS-D and enables teams to easily create and operate Kubernetes clusters on-premises with their own virtual machines. EKS Anywhere is based on an overarching design principle of supporting a BYOI (Bring Your Own Infrastructure) model for deploying kubernetes clusters. It supports deploying production-grade kubernetes clusters on VMware&apos;s vSphere and plans to add support for bare metal in 2022.&lt;/p&gt;&lt;h1 id=&quot;heading-what-problem-does-it-solve&quot;&gt;WHAT problem does it solve&lt;/h1&gt;&lt;p&gt;The salient use cases that EKS Anywhere addresses are &lt;/p&gt;&lt;ul&gt;&lt;li&gt;Hybrid cloud consistency&lt;/li&gt;&lt;li&gt;Disconnected environments&lt;/li&gt;&lt;li&gt;Application modernization&lt;/li&gt;&lt;li&gt;Data sovereignty&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The core benefit of using EKS Anywhere is that it offers customers a consistent and reliable mechanism for running Amazon&apos;s kubernetes distribution within their own on-premises infrastructure. Some enterprises have a mixed deployment architecture, with some kubernetes workloads running in the cloud on Amazon EKS while other applications still run in on-premises kubernetes clusters. EKS Anywhere offers strong operational consistency with Amazon EKS so teams can standardize their Kubernetes operations across a hybrid cloud environment based on a unified toolset.&lt;/p&gt;&lt;p&gt;Businesses that have a large on-premises footprint and want to modernize their applications can leverage EKS Anywhere to simplify the creation and operation of on-premises Kubernetes clusters and focus more on developing and modernizing applications. 
In addition, customers who want to keep their data within private data centers for legal reasons can benefit by bringing the trusted Amazon EKS Kubernetes distribution and tools to where their data needs to be.&lt;/p&gt;&lt;h1 id=&quot;heading-why-should-one-use-it&quot;&gt;WHY should one use it&lt;/h1&gt;&lt;p&gt;Kubernetes adoption is growing every day and the tooling around it also keeps piling up. While this provides customers with multiple options, it is quite challenging to ensure that teams pick the right tools for the job and don&apos;t add complexity to their operations workflow. Keeping pace with the upstream kubernetes release cadence without breaking existing applications is also a non-trivial task. &lt;/p&gt;&lt;p&gt;Teams that operate kubernetes clusters on-premises typically need to take on a lot of operational challenges, such as creating and upgrading clusters in a timely manner with upstream releases, maintaining and resolving version mismatches between kubernetes releases, and integrating a variety of third-party tools to perform cluster operations. The same applies to hybrid cluster setups and leads to unnecessary complexity, fragmented tooling and support options, and inconsistencies between the cloud and on-premises clusters that make it hard to manage applications across environments.&lt;/p&gt;&lt;p&gt;With Amazon EKS Anywhere, teams have Kubernetes operational tooling that is consistent with Amazon EKS and is optimized to simplify cluster installation with default configurations for the operating system and networking needed to operate Kubernetes on-premises.  
If you&apos;re someone who wants to reduce operational complexity, adopt a consistent and reliable workflow for managing kubernetes clusters across cloud and on-premises, and leverage the latest tooling and a security-hardened, up-to-date kubernetes distribution, then EKS Anywhere might be a worthy option for you.&lt;/p&gt;&lt;h1 id=&quot;heading-kicking-the-tires&quot;&gt;Kicking The Tires&lt;/h1&gt;&lt;p&gt;EKS Anywhere allows you to create and manage production kubernetes clusters on VMware vSphere. However, if you don&apos;t have a vSphere environment at your disposal, EKS Anywhere also supports creating development clusters locally with the Docker provider.&lt;/p&gt;&lt;p&gt;Here, I will walk you through the cluster creation process on a virtual machine using the Docker provider. This setup is for local use and not recommended for Production purposes. However, the concepts around the cluster creation and management workflow are the same across providers. &lt;/p&gt;&lt;h2 id=&quot;heading-clusters-and-more-clusters&quot;&gt;Clusters and more Clusters&lt;/h2&gt;&lt;p&gt; This is the path we&apos;re going to take for the purposes of this post. &lt;/p&gt;&lt;h3 id=&quot;heading-cluster-management-workflow&quot;&gt;Cluster Management Workflow&lt;/h3&gt;&lt;p&gt;The EKS Anywhere cluster creation process makes it easy not only to bring up a cluster initially, but also to update configuration settings and to upgrade Kubernetes versions going forward. 
The cluster creation process involves stepping through different types of clusters.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Bootstrap cluster - A temporary, ephemeral kubernetes cluster used solely for creating a management cluster.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Management cluster - A Kubernetes cluster that manages the lifecycle of Workload Clusters.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Workload cluster - A Kubernetes cluster whose lifecycle is managed by a Management Cluster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;To manage the lifecycle of a Workload kubernetes cluster we need to have a Management kubernetes cluster in place first. And to have a Management cluster, we need to spin up a bootstrap kubernetes cluster to start the cluster creation workflow.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Think of the Bootstrap cluster as a launchpad for EKS Anywhere to start the Workload cluster creation process. Once the Management cluster is created, it continues the process from that point on and takes over the lifecycle management of the Workload cluster. The essence of this design paradigm is to make kubernetes clusters &quot;self-aware&quot; and able to manage their own lifecycle without the need for a launchpad (i.e. Bootstrap cluster). A common practice is to delete the bootstrap cluster once its job is done and repurpose the infrastructure to save resources.&lt;/p&gt;&lt;p&gt; An obvious question about the above scenario is how to spin up the bootstrap cluster and how to avoid a chicken-and-egg situation where we attempt to create the bootstrap cluster using EKS Anywhere even before there&apos;s a launchpad cluster in place.&lt;/p&gt;&lt;p&gt;Enter &lt;a target=&quot;_blank&quot; href=&quot;https://kind.sigs.k8s.io/&quot;&gt;KinD&lt;/a&gt;, or Kubernetes in Docker. KinD can create kubernetes clusters that run kubernetes nodes as docker containers. 
EKS Anywhere runs a &lt;code&gt;KinD&lt;/code&gt; cluster on an administrative workstation or virtual machine to act as a &lt;code&gt;bootstrap cluster&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Let&apos;s roll up our sleeves now and see EKS Anywhere in action!&lt;/p&gt;&lt;h3 id=&quot;heading-prerequisites&quot;&gt;Prerequisites&lt;/h3&gt;&lt;p&gt;To start with, prepare your &lt;code&gt;Administrative&lt;/code&gt; Workstation with the following prerequisites.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Docker 20.x.x&lt;/li&gt;&lt;li&gt;Ubuntu (20.04.2 LTS). If you&apos;re on a Mac, use &lt;code&gt;10.15&lt;/code&gt;&lt;/li&gt;&lt;li&gt;4 CPU cores&lt;/li&gt;&lt;li&gt;16GB memory&lt;/li&gt;&lt;li&gt;30GB free disk space&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;Make sure that your local workstation or virtual machine in the cloud meets all of the above requirements. &lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;I used a virtual machine in the cloud running Ubuntu 20.04 LTS with the above configuration.&lt;/p&gt;&lt;h3 id=&quot;heading-docker&quot;&gt;Docker&lt;/h3&gt;&lt;p&gt; &lt;a target=&quot;_blank&quot; href=&quot;https://docs.docker.com/engine/install/ubuntu/&quot;&gt;Install Docker on Ubuntu&lt;/a&gt;. Check the docker version to make sure it&apos;s 20.x.x. 
&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-string&quot;&gt;$&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;docker&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;Client:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Docker&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Engine&lt;/span&gt; &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Community&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;           &lt;span class=&quot;hljs-number&quot;&gt;20.10&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.17&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;API version:&lt;/span&gt;       &lt;span class=&quot;hljs-number&quot;&gt;1.41&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Go version:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;go1.17.11&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Git commit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;100c701&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Built:&lt;/span&gt;             &lt;span class=&quot;hljs-string&quot;&gt;Mon&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Jun&lt;/span&gt;  &lt;span class=&quot;hljs-number&quot;&gt;6&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;23&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;:02:57&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;2022&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;OS/Arch:&lt;/span&gt;           &lt;span class=&quot;hljs-string&quot;&gt;linux/amd64&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Context:&lt;/span&gt;           &lt;span class=&quot;hljs-string&quot;&gt;default&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Experimental:&lt;/span&gt;      &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;&lt;span 
class=&quot;hljs-attr&quot;&gt;Server:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Docker&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Engine&lt;/span&gt; &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Community&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Engine:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;20.10&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.17&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;API version:&lt;/span&gt;      &lt;span class=&quot;hljs-number&quot;&gt;1.41&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;(minimum&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;version&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1.12&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;)&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Go version:&lt;/span&gt;       &lt;span class=&quot;hljs-string&quot;&gt;go1.17.11&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Git commit:&lt;/span&gt;       &lt;span class=&quot;hljs-string&quot;&gt;a89b842&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Built:&lt;/span&gt;            &lt;span class=&quot;hljs-string&quot;&gt;Mon&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Jun&lt;/span&gt;  &lt;span class=&quot;hljs-number&quot;&gt;6&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;23&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;:01:03&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;2022&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;OS/Arch:&lt;/span&gt;          &lt;span class=&quot;hljs-string&quot;&gt;linux/amd64&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Experimental:&lt;/span&gt;     &lt;span class=&quot;hljs-literal&quot;&gt;false&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;containerd:&lt;/span&gt;  &lt;span 
class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;1.6&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;GitCommit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;runc:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;1.1&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.2&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;GitCommit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;v1.1.2-0-ga916309&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;docker-init:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;0.19&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;GitCommit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;de40ad0&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;heading-kubectl&quot;&gt;kubectl&lt;/h3&gt;&lt;p&gt;You need &lt;code&gt;kubectl&lt;/code&gt; installed to connect to kubernetes clusters from your workstation. If you don&apos;t have it installed, use the &lt;code&gt;snap&lt;/code&gt; commands to install it.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo snap &lt;span class=&quot;hljs-keyword&quot;&gt;install&lt;/span&gt; kubectl &lt;span class=&quot;hljs-comment&quot;&gt;--classic&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;heading-eksctl&quot;&gt;eksctl&lt;/h3&gt;&lt;p&gt;Install the latest release of eksctl. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer. 
&lt;/p&gt;&lt;pre&gt;&lt;code&gt;curl &lt;span class=&quot;hljs-string&quot;&gt;&quot;https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz&quot;&lt;/span&gt; \    &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;silent &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;location \    &lt;span class=&quot;hljs-operator&quot;&gt;|&lt;/span&gt; tar xz &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;C &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;tmpsudo mv &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;tmp&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;eksctl &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;usr&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;local&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;bin&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;heading-eksctl-anywhere-plugin&quot;&gt;eksctl anywhere plugin&lt;/h3&gt;&lt;p&gt;Install the &lt;code&gt;eksctl-anywhere&lt;/code&gt; plugin.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;export EKSA_RELEASE&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;0.9.1&quot;&lt;/span&gt; OS&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;$(uname -s | tr A-Z a-z)&quot;&lt;/span&gt; RELEASE_NUMBER&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;12&lt;/span&gt;curl &lt;span class=&quot;hljs-string&quot;&gt;&quot;https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz&quot;&lt;/span&gt; \    &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;silent &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;location \    &lt;span class=&quot;hljs-operator&quot;&gt;|&lt;/span&gt; tar xz ./eksctl&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;anywheresudo mv ./eksctl&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;anywhere &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;usr&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;local&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;bin&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify your installed version.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ eksctl anywhere &lt;span class=&quot;hljs-keyword&quot;&gt;version&lt;/span&gt;v0&lt;span class=&quot;hljs-number&quot;&gt;.9&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.1&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-create-your-local-eks-anywhere-cluster&quot;&gt;Create your local EKS Anywhere cluster&lt;/h2&gt;&lt;p&gt;Now that you have all the tools installed, let&apos;s proceed with creating the &lt;code&gt;local&lt;/code&gt; EKS Anywhere cluster using the &lt;code&gt;Docker&lt;/code&gt; provider.&lt;/p&gt;&lt;p&gt;First we create a cluster configuration and save it in a file.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ CLUSTER_NAME&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;devubuntu@ip&lt;span class=&quot;hljs-number&quot;&gt;-10&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-12&lt;/span&gt;:&lt;span class=&quot;hljs-operator&quot;&gt;~&lt;/span&gt;$ eksctl anywhere generate clusterconfig $CLUSTER_NAME \&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;    &lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;provider docker &lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt; $CLUSTER_NAME.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Check the configuration file.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-string&quot;&gt;$&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev.yaml&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;anywhere.eks.amazonaws.com/v1alpha1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Cluster&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;clusterNetwork:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;cniConfig:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;cilium:&lt;/span&gt; {}    &lt;span class=&quot;hljs-attr&quot;&gt;pods:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;cidrBlocks:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;192.168&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;/16&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;services:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;cidrBlocks:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;10.96&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span 
class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;/12&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;controlPlaneConfiguration:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;count:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;datacenterRef:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;DockerDatacenterConfig&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;externalEtcdConfiguration:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;count:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;kubernetesVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;1.22&quot;&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;managementCluster:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;workerNodeGroupConfigurations:&lt;/span&gt;  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;count:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;md-0&lt;/span&gt;&lt;span class=&quot;hljs-meta&quot;&gt;---&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;anywhere.eks.amazonaws.com/v1alpha1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;DockerDatacenterConfig&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span 
class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt; {}&lt;span class=&quot;hljs-meta&quot;&gt;---&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;By default, EKS Anywhere creates a kubernetes cluster with one control plane and one worker node. Currently it installs the kubernetes &lt;code&gt;1.22&lt;/code&gt; version. &lt;/p&gt;&lt;p&gt;Another configuration worth noticing is the default cni provider which is &lt;code&gt;cilium&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;You can customize and alter these settings to suit your workload cluster requirement. &lt;/p&gt;&lt;h3 id=&quot;heading-start-small&quot;&gt;Start Small&lt;/h3&gt;&lt;p&gt;We&apos;re going to start with installing our first EKS Anywhere cluster with bare minimum set up with 1 control plane and 1 worker node. Later on, we&apos;d increase the worker node count.&lt;/p&gt;&lt;p&gt;Once we have the configuration file, all you need to do is&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ time eksctl anywhere &lt;span class=&quot;hljs-keyword&quot;&gt;create&lt;/span&gt; cluster -f $CLUSTER_NAME.yamlPerforming setup &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; validations&lt;span class=&quot;hljs-keyword&quot;&gt;Warning&lt;/span&gt;: The docker infrastructure provider &lt;span class=&quot;hljs-keyword&quot;&gt;is&lt;/span&gt; meant &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;local&lt;/span&gt; development &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; testing &lt;span class=&quot;hljs-keyword&quot;&gt;only&lt;/span&gt; Docker Provider setup &lt;span class=&quot;hljs-keyword&quot;&gt;is&lt;/span&gt; valid &lt;span class=&quot;hljs-keyword&quot;&gt;Validate&lt;/span&gt; certificate &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; registry mirror &lt;span class=&quot;hljs-keyword&quot;&gt;Create&lt;/span&gt; preflight 
validations passCreating &lt;span class=&quot;hljs-keyword&quot;&gt;new&lt;/span&gt; bootstrap clusterProvider specific pre-capi-&lt;span class=&quot;hljs-keyword&quot;&gt;install&lt;/span&gt;-setup &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; bootstrap clusterInstalling cluster-api providers &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; bootstrap clusterProvider specific post-setupCreating &lt;span class=&quot;hljs-keyword&quot;&gt;new&lt;/span&gt; workload clusterInstalling networking &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling cluster-api providers &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling EKS-A secrets &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterMoving cluster &lt;span class=&quot;hljs-keyword&quot;&gt;management&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;from&lt;/span&gt; bootstrap &lt;span class=&quot;hljs-keyword&quot;&gt;to&lt;/span&gt; workload clusterInstalling EKS-A custom components (CRD &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; controller) &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling EKS-D components &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterCreating EKS-A CRDs instances &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling AddonManager &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; GitOps Toolkit &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterGitOps &lt;span class=&quot;hljs-keyword&quot;&gt;field&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;not&lt;/span&gt; specified, bootstrap flux skippedWriting cluster config &lt;span class=&quot;hljs-keyword&quot;&gt;file&lt;/span&gt;Deleting bootstrap cluster🎉 Cluster created!&lt;span class=&quot;hljs-built_in&quot;&gt;real&lt;/span&gt;    &lt;span 
class=&quot;hljs-number&quot;&gt;5&lt;/span&gt;m1&lt;span class=&quot;hljs-number&quot;&gt;.553&lt;/span&gt;s&lt;span class=&quot;hljs-keyword&quot;&gt;user&lt;/span&gt;    &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;m2&lt;span class=&quot;hljs-number&quot;&gt;.941&lt;/span&gt;s&lt;span class=&quot;hljs-keyword&quot;&gt;sys&lt;/span&gt;    &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;m2&lt;span class=&quot;hljs-number&quot;&gt;.084&lt;/span&gt;s&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Within approx. 5 minutes, you have the workload cluster ready for use without much hassle; that&apos;s pretty neat!&lt;/p&gt;&lt;p&gt;By default, the console log above highlights the different stages of the workload cluster creation lifecycle; however, if you want to see more logs, add the &lt;code&gt;-v&lt;/code&gt; parameter to the &lt;code&gt;eksctl anywhere&lt;/code&gt; cluster create command to turn on verbose mode. &lt;/p&gt;&lt;p&gt;You can check the bootstrap cluster by issuing the below command. 
Install &lt;code&gt;KinD&lt;/code&gt; if you don&apos;t have it on your workstation by following &lt;a target=&quot;_blank&quot; href=&quot;https://kind.sigs.k8s.io/docs/user/quick-start/#installation&quot;&gt;these&lt;/a&gt; instructions.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kind &lt;span class=&quot;hljs-keyword&quot;&gt;get&lt;/span&gt; clustersrc-dev&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now it&apos;s time to verify the cluster and check kubernetes version installed.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# export the kubeconfig &lt;span class=&quot;hljs-keyword&quot;&gt;to&lt;/span&gt; &lt;span class=&quot;hljs-type&quot;&gt;point&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;to&lt;/span&gt; the &lt;span class=&quot;hljs-keyword&quot;&gt;cluster&lt;/span&gt;$ export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-&lt;span class=&quot;hljs-keyword&quot;&gt;cluster&lt;/span&gt;.kubeconfig# &lt;span class=&quot;hljs-keyword&quot;&gt;check&lt;/span&gt; nodes &lt;span class=&quot;hljs-keyword&quot;&gt;of&lt;/span&gt; the k8s &lt;span class=&quot;hljs-keyword&quot;&gt;cluster&lt;/span&gt;$ kubectl &lt;span class=&quot;hljs-keyword&quot;&gt;get&lt;/span&gt; nodes&lt;span class=&quot;hljs-type&quot;&gt;NAME&lt;/span&gt;                           STATUS   ROLES                  AGE   &lt;span class=&quot;hljs-keyword&quot;&gt;VERSION&lt;/span&gt;rc-dev-gzktk                   Ready    control-plane,master   &lt;span class=&quot;hljs-number&quot;&gt;15&lt;/span&gt;m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;-eks-bb942e6rc-dev-md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-7&lt;/span&gt;c4c7f595d&lt;span class=&quot;hljs-number&quot;&gt;-5&lt;/span&gt;lcq8   Ready    &amp;lt;&lt;span class=&quot;hljs-keyword&quot;&gt;none&lt;/span&gt;&amp;gt;                 &lt;span class=&quot;hljs-number&quot;&gt;14&lt;/span&gt;m   v1&lt;span 
class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;-eks-bb942e&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-spin-up-a-multi-node-cluster&quot;&gt;Spin Up a multi-node cluster&lt;/h2&gt;&lt;p&gt;While the basic cluster with a control plane and a worker node is nice, having the ability to spin up a multi-node kubernetes cluster is fantastic. Let&apos;s see if EKS Anywhere is up to the task.&lt;/p&gt;&lt;p&gt;In order to increase the number of nodes in the cluster, you need to modify the cluster configuration file. There&apos;s no option to pass this as a parameter to the &lt;code&gt;eksctl anywhere&lt;/code&gt; command; at least for now.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ cp rc-dev.yaml rc-dev-multinode.yaml&lt;span class=&quot;hljs-comment&quot;&gt;#change metadata and name of the cluster configuration&lt;/span&gt;$ sed -i &lt;span class=&quot;hljs-string&quot;&gt;&apos;s/rc-dev/rc-dev-multinode/g&apos;&lt;/span&gt; rc-dev-multinode.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And finally, change the &lt;code&gt;workerNodeGroupConfigurations.count&lt;/code&gt; value to 3. 
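&lt;/p&gt;&lt;p&gt;After that edit, the relevant section of &lt;code&gt;rc-dev-multinode.yaml&lt;/code&gt; should look like this:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;workerNodeGroupConfigurations:
- count: 3
  name: md-0&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;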
Now let&apos;s deploy the cluster.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ eksctl anywhere create cluster -f rc-dev-multinode.yaml&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-verify-cluster&quot;&gt;Verify cluster&lt;/h2&gt;&lt;p&gt;Export the kubeconfig file as before and check the node status.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get nodes
NAME                                     STATUS   ROLES                  AGE     VERSION
rc-dev-multinode-7dwtd                   Ready    control-plane,master   2m40s   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-dsfrw   Ready    &amp;lt;none&amp;gt;                 2m9s    v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-gpx2x   Ready    &amp;lt;none&amp;gt;                 2m10s   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-tx6b4   Ready    &amp;lt;none&amp;gt;                 2m10s   v1.22.6-eks-bb942e6&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Also, check that all system pods are in the &lt;code&gt;Running&lt;/code&gt; state.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get po -A
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-6466d54c9d-j9klq                         1/1     Running   0          112s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-7c6bcf98b7-tb2zx       1/1     Running   0          2m1s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-68b4c848dc-zz6gs   1/1     Running   0          115s
capi-system                         capi-controller-manager-64647f455d-x5zx7                         1/1     Running   0          2m3s
cert-manager                        cert-manager-8674857d7b-5hmgq                                    1/1     Running   0          2m41s
cert-manager                        cert-manager-cainjector-f5b94ccdf-9bglv                          1/1     Running   0          2m41s
cert-manager                        cert-manager-webhook-84fbd6fb68-6xkkj                            1/1     Running   0          2m40s
eksa-system                         eksa-controller-manager-79b4b76bb8-lbc79                         2/2     Running   0          92s
etcdadm-bootstrap-provider-system   etcdadm-bootstrap-provider-controller-manager-664f699b7c-q44db   1/1     Running   0          2m
etcdadm-controller-system           etcdadm-controller-controller-manager-59dc96c7b9-hpnnp           1/1     Running   0          118s
kube-system                         cilium-858bk                                                     1/1     Running   0          2m51s
kube-system                         cilium-89r45                                                     1/1     Running   0          2m51s
kube-system                         cilium-bxfv6                                                     1/1     Running   0          2m51s
kube-system                         cilium-operator-7698596ff4-2k264                                 1/1     Running   0          2m51s
kube-system                         cilium-operator-7698596ff4-cgjqn                                 1/1     Running   0          2m51s
kube-system                         cilium-z4zgt                                                     1/1     Running   0          2m51s
kube-system                         coredns-55467bc785-n4mv2                                         1/1     Running   0          3m15s
kube-system                         coredns-55467bc785-qgdhx                                         1/1     Running   0          3m15s
kube-system                         kube-apiserver-rc-dev-multinode-7dwtd                            1/1     Running   0          3m18s
kube-system                         kube-controller-manager-rc-dev-multinode-7dwtd                   1/1     Running   0          3m18s
kube-system                         kube-proxy-2npkx                                                 1/1     Running   0          3m16s
kube-system                         kube-proxy-7j7vm                                                 1/1     Running   0          2m56s
kube-system                         kube-proxy-8ntxj                                                 1/1     Running   0          2m56s
kube-system                         kube-proxy-cwgtn                                                 1/1     Running   0          2m55s
kube-system                         kube-scheduler-rc-dev-multinode-7dwtd                            1/1     Running   0          3m18s&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To verify that the cluster control plane is up and running, use &lt;code&gt;kubectl&lt;/code&gt; to show that the control plane pods are all running.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get po -A -l control-plane=controller-manager
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-6466d54c9d-j9klq                         1/1     Running   0          20m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-7c6bcf98b7-tb2zx       1/1     Running   0          20m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-68b4c848dc-zz6gs   1/1     Running   0          20m
capi-system                         capi-controller-manager-64647f455d-x5zx7                         1/1     Running   0          20m
etcdadm-bootstrap-provider-system   etcdadm-bootstrap-provider-controller-manager-664f699b7c-q44db   1/1     Running   0          20m
etcdadm-controller-system           etcdadm-controller-controller-manager-59dc96c7b9-hpnnp           1/1     Running   0          20m&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once you&apos;ve verified that all nodes in the workload cluster are in the &lt;code&gt;Ready&lt;/code&gt; state and the pods are in the &lt;code&gt;Running&lt;/code&gt; state, you can go ahead and deploy some test workloads onto the cluster. 
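&lt;/p&gt;&lt;p&gt;If you&apos;d rather script this readiness check than eyeball the output, a small shell sketch like the one below works. The sample output is illustrative; against a real cluster you&apos;d feed in &lt;code&gt;kubectl get nodes&lt;/code&gt; instead.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;
```shell
# Sketch: count nodes whose STATUS column is anything other than "Ready".
# The sample output below is made up for illustration; in practice, replace it with:
#   kubectl get nodes | tail -n +2 | awk '$2 != "Ready" {c++} END {print c+0}'
sample_output='NAME     STATUS     ROLES           AGE   VERSION
node-a   Ready      control-plane   15m   v1.22.6-eks-bb942e6
node-b   NotReady   worker          14m   v1.22.6-eks-bb942e6'

# Skip the header row, then count rows where column 2 is not "Ready".
not_ready=$(printf '%s\n' "$sample_output" | tail -n +2 | awk '$2 != "Ready" {c++} END {print c+0}')
echo "$not_ready"   # prints 1 for the sample data
```
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A count of 0 means every node is &lt;code&gt;Ready&lt;/code&gt;.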
&lt;/p&gt;&lt;h2 id=&quot;heading-deploy-sample-workload&quot;&gt;Deploy sample workload&lt;/h2&gt;&lt;p&gt;To deploy a sample test workload in your shiny new multi-node workload cluster, run:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl apply -f &quot;https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml&quot;
deployment.apps/hello-eks-a created
service/hello-eks-a created&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Check all Kubernetes resources in the &lt;code&gt;default&lt;/code&gt; namespace.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/hello-eks-a-9644dd8dc-znwzr   1/1     Running   0          37s

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/hello-eks-a   NodePort    10.106.132.175   &amp;lt;none&amp;gt;        80:30687/TCP   37s
service/kubernetes    ClusterIP   10.96.0.1        &amp;lt;none&amp;gt;        443/TCP        6m32s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-eks-a   1/1     1            1           37s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-eks-a-9644dd8dc   1         1         1       37s&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To access the default web page of the sample workload, forward the deployment port to your localhost.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl port-forward deploy/hello-eks-a 8000:80
Forwarding from 127.0.0.1:8000 -&amp;gt; 80
Forwarding from [::1]:8000 -&amp;gt; 80&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;From a second terminal, try accessing the web page with &lt;code&gt;curl localhost:8000&lt;/code&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1656229177222/UrfVS0j3Y.png&quot; alt=&quot;Screen Shot 2022-06-26 at 12.35.28 AM.png&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-scale-your-cluster&quot;&gt;Scale your cluster&lt;/h2&gt;&lt;p&gt;Currently, the only way to scale the workload cluster is to manually increment the number of control plane and worker nodes in the cluster configuration file and upgrade the cluster.&lt;/p&gt;&lt;p&gt;Increment the worker node count to 5 as shown below.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ cat rc-dev-multinode-more.yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: rc-dev-multinode
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: rc-dev-multinode
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: &quot;1.21&quot;
  managementCluster:
    name: rc-dev-multinode
  workerNodeGroupConfigurations:
  - count: 5
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: rc-dev-multinode
spec: {}
---&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Let&apos;s give it a try and see if we can upgrade the workload cluster to add more worker nodes.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ eksctl anywhere upgrade cluster -f rc-dev-multinode-more.yaml
Performing setup and validations
Docker Provider setup is valid
Validate certificate for registry mirror
Control plane ready
Worker nodes ready
Nodes ready
Cluster CRDs ready
Cluster object present on workload cluster
Upgrade cluster kubernetes version increment
Validate immutable fields
Upgrade preflight validations pass
Ensuring etcd CAPI providers exist on management cluster before upgrade
Upgrading core components
Pausing EKS-A cluster controller reconcile
Pausing Flux kustomization
GitOps field not specified, pause flux kustomization skipped
Creating bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Moving cluster management from workload to bootstrap cluster
Upgrading workload cluster
Moving cluster management from bootstrap to workload cluster
Applying new 
EKS&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;A cluster resource; resuming reconcileResuming EKS&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;A controller reconciliationUpdating Git Repo with &lt;span class=&quot;hljs-keyword&quot;&gt;new&lt;/span&gt; EKS&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;A cluster specGitOps field not specified, update git repo skippedForcing reconcile Git repo with latest commitGitOps not configured, force reconcile flux git repo skippedResuming Flux kustomizationGitOps field not specified, resume flux kustomization skippedWriting cluster config file🎉 Cluster upgraded&lt;span class=&quot;hljs-operator&quot;&gt;!&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Let&apos;s check the nodes to verify the state of the cluster.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get nodesNAME                                     STATUS   ROLES                  AGE   VERSIONrc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7dwtd                   Ready    control&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;plane,master   43m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dsfrw   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span 
class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 43m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;gpx2x   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 43m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;stkx8   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 10m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;tx6b4   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 43m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;vqpf9   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 10m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And sure enough, we have all 5 worker nodes in &lt;code&gt;Ready&lt;/code&gt; status.&lt;/p&gt;&lt;p&gt;You can scale workload clusters in a semi-automatic way by storing your cluster config manifest in git and having a CI/CD system deploy your changes. Or you can use a GitOps controller to apply the changes. EKS Anywhere supports this style of cluster management natively, using Flux to manage clusters with GitOps.
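The semi-automatic flow comes down to editing the `count` field in the stored manifest and letting automation re-apply it. A minimal, illustrative sketch of that edit step (the file path and `sed` pattern are assumptions; in practice you would commit the change to git and let CI/CD or Flux run `eksctl anywhere upgrade cluster`):

```shell
# Illustrative fragment of a cluster config (real files contain much more).
cat > /tmp/eksa-scale-demo.yaml <<'EOF'
workerNodeGroupConfigurations:
- count: 3
  name: md-0
EOF

# Bump the worker node count from 3 to 5 in place; a YAML-aware tool
# such as yq would be more robust than sed for real configs.
sed -i 's/- count: 3/- count: 5/' /tmp/eksa-scale-demo.yaml
grep 'count:' /tmp/eksa-scale-demo.yaml
```

After the edited manifest lands in git, the CI/CD job (or Flux) applies it exactly as the manual upgrade above did.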
More on this in a future post.&lt;/p&gt;&lt;h2 id=&quot;heading-adding-integration-to-your-eks-anywhere-cluster&quot;&gt;Adding Integration to your EKS Anywhere Cluster&lt;/h2&gt;&lt;p&gt;Standing up a workload cluster is awesome; however, as mentioned earlier, Kubernetes can quickly become operationally intensive and needs the flexibility to integrate with a wide array of tools. And EKS Anywhere doesn&apos;t disappoint.&lt;/p&gt;&lt;p&gt;EKS Anywhere offers custom integration for certain third-party vendor components, namely Ubuntu TLS, Cilium, and Flux. It also provides the flexibility to integrate with your choice of tools in other areas. This frees teams from being locked in to a particular vendor and enables them to swap out default components for tools of their choice; for instance, swapping out Cilium for a different CNI such as Calico.&lt;/p&gt;&lt;p&gt;Some of the key integration components compatible with EKS Anywhere clusters are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Load balancer - Kube-Vip, MetalLB&lt;/li&gt;&lt;li&gt;Local container registry - Harbor&lt;/li&gt;&lt;li&gt;Monitoring - Prometheus, Grafana, Datadog, or New Relic&lt;/li&gt;&lt;li&gt;Logging - Splunk or Fluent Bit&lt;/li&gt;&lt;li&gt;Secret management - HashiCorp Vault&lt;/li&gt;&lt;li&gt;Policy agent - Open Policy Agent (OPA)&lt;/li&gt;&lt;li&gt;Service mesh - Istio, Linkerd&lt;/li&gt;&lt;li&gt;Cost management - Kubecost&lt;/li&gt;&lt;li&gt;etcd backup and restore - Velero&lt;/li&gt;&lt;/ul&gt;&lt;h1 id=&quot;heading-cluster-api-kubernetes-capi&quot;&gt;Kubernetes Cluster API (CAPI)&lt;/h1&gt;&lt;p&gt;Any introduction to EKS Anywhere would be incomplete without mentioning the Kubernetes Cluster API project.&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://cluster-api.sigs.k8s.io/&quot;&gt;Kubernetes Cluster API&lt;/a&gt;, or CAPI, is a Kubernetes SIG (Special Interest Group) project focused on providing declarative APIs and tooling to simplify
provisioning, upgrading, and operating multiple Kubernetes clusters. Two of the main goals of the CAPI project are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;To manage the lifecycle (create, scale, upgrade, destroy) of Kubernetes-conformant clusters using a declarative API.&lt;/li&gt;&lt;li&gt;To work in different environments, both on-premises and in the cloud.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;To add some context, the notion of Bootstrap, Management, and Workload clusters in managing the Kubernetes cluster lifecycle was first championed by CAPI.&lt;/p&gt;&lt;p&gt;EKS Anywhere works under the same principles and uses CAPI underneath to implement some of these features. It uses an infrastructure provider model for creating, upgrading, and managing Kubernetes clusters that leverages the Kubernetes Cluster API project. The first supported EKS Anywhere provider, VMware vSphere, is implemented based on the Kubernetes Cluster API Provider vSphere (CAPV) specifications. Similarly, EKS Anywhere supports the Cluster API Provider for Docker (CAPD) for creating development and test workload clusters.&lt;/p&gt;&lt;p&gt;The EKS Anywhere project wraps Cluster API and various other CLIs and plugins (the eksctl CLI, the anywhere plugin, kubectl, aws-iam-authenticator) and bundles them in a single package to simplify the creation of workload clusters.&lt;/p&gt;&lt;h1 id=&quot;heading-epilogue&quot;&gt;Epilogue&lt;/h1&gt;&lt;p&gt;Amazon EKS Anywhere aims to solve the pain points of managing the lifecycle of Kubernetes clusters in on-premises setups and provides a consistent and reliable workflow for creating and managing Kubernetes clusters across deployment models and providers.
With its currently supported VMware vSphere provider and upcoming support for bare metal, I am keen to explore its potential and study its adoption across customers and teams.&lt;/p&gt;&lt;p&gt;Let me know your thoughts in the comments.&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;h1 id=&quot;heading-prelude&quot;&gt;Prelude&lt;/h1&gt;&lt;p&gt;Before we discuss EKS Anywhere, it&apos;s useful to have a basic idea about Amazon EKS and, more importantly, Amazon EKS Distro.&lt;/p&gt;&lt;p&gt;Amazon Elastic Kubernetes Service, a.k.a. Amazon EKS, is a managed Kubernetes service from AWS for running and scaling Kubernetes applications on the AWS cloud platform. EKS uses Amazon EKS Distro, otherwise known as EKS-D, to create reliable and secure Kubernetes clusters.&lt;/p&gt;&lt;p&gt;EKS Distro includes binaries and containers of open-source Kubernetes, etcd (the cluster configuration database), networking, and storage plugins, tested for compatibility.&lt;/p&gt;&lt;p&gt;Maintaining and running your own Kubernetes clusters takes a lot of effort: tracking updates, figuring out compatibility between different Kubernetes versions, and simply keeping up with the upstream Kubernetes release cadence. This is where EKS-D comes to the rescue. EKS Distro reduces the need to track updates, determine compatibility, and standardize on a common Kubernetes version across teams.&lt;/p&gt;&lt;h1 id=&quot;heading-what-is-eks-anywhere&quot;&gt;WHAT is EKS Anywhere&lt;/h1&gt;&lt;p&gt;With that basic understanding of EKS and EKS-D, let&apos;s now take a closer look at EKS Anywhere.&lt;/p&gt;&lt;p&gt;Amazon EKS Anywhere is an open-source deployment option for Amazon EKS that builds on the strengths of EKS-D and enables teams to easily create and operate Kubernetes clusters on-premises with their own virtual machines.
EKS Anywhere is based on the overarching design principle of a BYOI (Bring Your Own Infrastructure) model for deploying Kubernetes clusters. It supports deploying production-grade Kubernetes clusters on VMware vSphere and plans to add support for bare metal in 2022.&lt;/p&gt;&lt;h1 id=&quot;heading-what-problem-does-it-solve&quot;&gt;WHAT problem does it solve&lt;/h1&gt;&lt;p&gt;The salient use cases that EKS Anywhere addresses are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Hybrid cloud consistency&lt;/li&gt;&lt;li&gt;Disconnected environments&lt;/li&gt;&lt;li&gt;Application modernization&lt;/li&gt;&lt;li&gt;Data sovereignty&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The core benefit of EKS Anywhere is that it offers customers a consistent and reliable mechanism for running Amazon&apos;s Kubernetes distribution within their own on-premises infrastructure. Many enterprises have a mixed deployment architecture, with some Kubernetes workloads running in the cloud on Amazon EKS while other applications still run in on-premises Kubernetes clusters. EKS Anywhere offers strong operational consistency with Amazon EKS, so teams can standardize their Kubernetes operations across a hybrid cloud environment based on a unified toolset.&lt;/p&gt;&lt;p&gt;Businesses that have a large on-premises footprint and want to modernize their applications can leverage EKS Anywhere to simplify the creation and operation of on-premises Kubernetes clusters and focus more on developing and modernizing applications. In addition, customers who must keep their data within private data centers for legal reasons can benefit by bringing the trusted Amazon EKS Kubernetes distribution and tooling to where their data needs to be.&lt;/p&gt;&lt;h1 id=&quot;heading-why-should-one-use-it&quot;&gt;WHY should one use it&lt;/h1&gt;&lt;p&gt;Kubernetes adoption is growing every day, and the tooling around it keeps piling up.
While this provides customers with multiple options, it is quite challenging to ensure that teams pick the right tools for the job and don&apos;t add complexity to their operations workflow. Keeping pace with the upstream Kubernetes release cadence without breaking existing applications is also a non-trivial task.&lt;/p&gt;&lt;p&gt;Teams operating Kubernetes clusters on-premises typically take on a lot of operational challenges, such as creating and upgrading clusters in step with upstream releases, resolving version mismatches between Kubernetes releases, and integrating a variety of third-party tools to perform cluster operations. The same applies to hybrid cluster setups and leads to unnecessary complexity, fragmented tooling and support options, and inconsistencies between the cloud and on-premises clusters that make it hard to manage applications across environments.&lt;/p&gt;&lt;p&gt;With Amazon EKS Anywhere, teams have Kubernetes operational tooling that is consistent with Amazon EKS and is optimized to simplify cluster installation, with default configurations for the operating system and networking needed to operate Kubernetes on-premises. If you want to reduce operational complexity, adopt a consistent and reliable workflow for managing Kubernetes clusters across cloud and on-premises, and run on an up-to-date, security-hardened Kubernetes distribution, then EKS Anywhere might be a worthy option for you.&lt;/p&gt;&lt;h1 id=&quot;heading-kicking-the-tires&quot;&gt;Kicking The Tires&lt;/h1&gt;&lt;p&gt;EKS Anywhere allows you to create and manage production Kubernetes clusters on VMware vSphere. However, if you don&apos;t have a vSphere environment at your disposal, EKS Anywhere also supports creating development clusters locally with the Docker provider.&lt;/p&gt;&lt;p&gt;Here, I will walk you through the cluster creation process on a virtual machine using the Docker provider.
This setup is for local use and is not recommended for production purposes. However, the concepts around the cluster creation and management workflow are the same across providers.&lt;/p&gt;&lt;h2 id=&quot;heading-clusters-and-more-clusters&quot;&gt;Clusters and more Clusters&lt;/h2&gt;&lt;p&gt;This is the path we&apos;re going to take for the purposes of this post.&lt;/p&gt;&lt;h3 id=&quot;heading-cluster-management-workflow&quot;&gt;Cluster Management Workflow&lt;/h3&gt;&lt;p&gt;The EKS Anywhere cluster creation process makes it easy not only to bring up a cluster initially, but also to update configuration settings and to upgrade Kubernetes versions going forward. The cluster creation process involves stepping through different types of clusters:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Bootstrap cluster - A temporary, ephemeral Kubernetes cluster used solely to create a Management cluster.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Management cluster - A Kubernetes cluster that manages the lifecycle of Workload clusters.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Workload cluster - A Kubernetes cluster whose lifecycle is managed by a Management cluster.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;To manage the lifecycle of a Workload cluster, we need a Management cluster in place first. And to have a Management cluster, we need to spin up a Bootstrap cluster to start the cluster creation workflow.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;Think of the Bootstrap cluster as a launchpad for EKS Anywhere to start the Workload cluster creation process. Once the Management cluster is created, it continues the process and takes over the lifecycle management of the Workload cluster.
The essence of this design paradigm is to make Kubernetes clusters &quot;self-aware&quot; so they can manage their own lifecycle without the need for a launchpad (i.e., a Bootstrap cluster). A common practice is to delete the Bootstrap cluster once its job is done and repurpose the infrastructure to save resources.&lt;/p&gt;&lt;p&gt;An obvious question here is how to spin up the Bootstrap cluster in the first place, and how to avoid a chicken-and-egg situation where we would need EKS Anywhere to create the Bootstrap cluster before there&apos;s a launchpad cluster in place.&lt;/p&gt;&lt;p&gt;Enter &lt;a target=&quot;_blank&quot; href=&quot;https://kind.sigs.k8s.io/&quot;&gt;KinD&lt;/a&gt;, or Kubernetes in Docker. KinD creates Kubernetes clusters whose nodes run as Docker containers. EKS Anywhere runs a &lt;code&gt;KinD&lt;/code&gt; cluster on an administrative workstation or virtual machine to act as the &lt;code&gt;bootstrap cluster&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Let&apos;s roll up our sleeves now and see EKS Anywhere in action!&lt;/p&gt;&lt;h3 id=&quot;heading-prerequisites&quot;&gt;Prerequisites&lt;/h3&gt;&lt;p&gt;To start with, prepare your &lt;code&gt;Administrative&lt;/code&gt; Workstation with the following prerequisites.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Docker 20.x.x&lt;/li&gt;&lt;li&gt;Ubuntu (20.04.2 LTS). If you&apos;re on a Mac, use &lt;code&gt;10.15&lt;/code&gt;&lt;/li&gt;&lt;li&gt;4 CPU cores&lt;/li&gt;&lt;li&gt;16GB memory&lt;/li&gt;&lt;li&gt;30GB free disk space&lt;/li&gt;&lt;/ul&gt;&lt;blockquote&gt;&lt;p&gt;Make sure that your local workstation or virtual machine in the cloud meets all of the above requirements.&lt;/p&gt;&lt;/blockquote&gt;&lt;p&gt;I used a virtual machine in the cloud running Ubuntu 20.04 LTS with the above configuration.&lt;/p&gt;&lt;h3 id=&quot;heading-docker&quot;&gt;Docker&lt;/h3&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://docs.docker.com/engine/install/ubuntu/&quot;&gt;Install Docker on Ubuntu&lt;/a&gt;. Check the Docker version to make sure it&apos;s 20.x.x.
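For intuition, the launchpad flow that KinD enables looks roughly like the commands sketched below. This is illustrative only: the cluster name is made up, and EKS Anywhere drives the bootstrap KinD cluster for you rather than you invoking KinD directly.

```shell
# Illustrative sketch; the cluster name is a made-up example.
BOOTSTRAP_NAME="eksa-bootstrap"

# 1. Launchpad: a KinD cluster whose nodes are plain Docker containers.
CREATE_CMD="kind create cluster --name ${BOOTSTRAP_NAME}"
# 2. Once the Management cluster is up, the launchpad can be deleted
#    and its resources reclaimed.
DELETE_CMD="kind delete cluster --name ${BOOTSTRAP_NAME}"

printf '%s\n%s\n' "${CREATE_CMD}" "${DELETE_CMD}"
```

In the EKS Anywhere workflow these steps happen behind `eksctl anywhere create cluster`, so you never run KinD yourself.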
&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-string&quot;&gt;$&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;docker&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;Client:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Docker&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Engine&lt;/span&gt; &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Community&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;           &lt;span class=&quot;hljs-number&quot;&gt;20.10&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.17&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;API version:&lt;/span&gt;       &lt;span class=&quot;hljs-number&quot;&gt;1.41&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Go version:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;go1.17.11&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Git commit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;100c701&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Built:&lt;/span&gt;             &lt;span class=&quot;hljs-string&quot;&gt;Mon&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Jun&lt;/span&gt;  &lt;span class=&quot;hljs-number&quot;&gt;6&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;23&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;:02:57&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;2022&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;OS/Arch:&lt;/span&gt;           &lt;span class=&quot;hljs-string&quot;&gt;linux/amd64&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Context:&lt;/span&gt;           &lt;span class=&quot;hljs-string&quot;&gt;default&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Experimental:&lt;/span&gt;      &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;&lt;span 
class=&quot;hljs-attr&quot;&gt;Server:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Docker&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Engine&lt;/span&gt; &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Community&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;Engine:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;20.10&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.17&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;API version:&lt;/span&gt;      &lt;span class=&quot;hljs-number&quot;&gt;1.41&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;(minimum&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;version&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1.12&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;)&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Go version:&lt;/span&gt;       &lt;span class=&quot;hljs-string&quot;&gt;go1.17.11&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Git commit:&lt;/span&gt;       &lt;span class=&quot;hljs-string&quot;&gt;a89b842&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Built:&lt;/span&gt;            &lt;span class=&quot;hljs-string&quot;&gt;Mon&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Jun&lt;/span&gt;  &lt;span class=&quot;hljs-number&quot;&gt;6&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;23&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;:01:03&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;2022&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;OS/Arch:&lt;/span&gt;          &lt;span class=&quot;hljs-string&quot;&gt;linux/amd64&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Experimental:&lt;/span&gt;     &lt;span class=&quot;hljs-literal&quot;&gt;false&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;containerd:&lt;/span&gt;  &lt;span 
class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;1.6&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;GitCommit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;runc:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;1.1&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.2&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;GitCommit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;v1.1.2-0-ga916309&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;docker-init:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;Version:&lt;/span&gt;          &lt;span class=&quot;hljs-number&quot;&gt;0.19&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;GitCommit:&lt;/span&gt;        &lt;span class=&quot;hljs-string&quot;&gt;de40ad0&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;heading-kubectl&quot;&gt;kubectl&lt;/h3&gt;&lt;p&gt;You need &lt;code&gt;kubectl&lt;/code&gt; installed to connect to Kubernetes clusters from your workstation. If you don&apos;t have it installed, use the &lt;code&gt;snap&lt;/code&gt; command below to install it.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;sudo snap &lt;span class=&quot;hljs-keyword&quot;&gt;install&lt;/span&gt; kubectl &lt;span class=&quot;hljs-comment&quot;&gt;--classic&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;heading-eksctl&quot;&gt;eksctl&lt;/h3&gt;&lt;p&gt;Install the latest release of eksctl. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.
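Since 0.66.0 is the minimum, it is worth confirming the installed version before continuing. A small sketch using `sort -V` for the comparison (the `INSTALLED` value is a placeholder; substitute the output of `eksctl version`):

```shell
# Placeholder value; in practice: INSTALLED="$(eksctl version)"
INSTALLED="0.112.0"
MINIMUM="0.66.0"

# sort -V orders version strings numerically; if MINIMUM sorts first,
# INSTALLED meets or exceeds the minimum.
if [ "$(printf '%s\n' "$MINIMUM" "$INSTALLED" | sort -V | head -n1)" = "$MINIMUM" ]; then
  echo "eksctl ${INSTALLED} meets the ${MINIMUM} minimum"
else
  echo "eksctl ${INSTALLED} is older than ${MINIMUM}; upgrade first"
fi
```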
&lt;/p&gt;&lt;pre&gt;&lt;code&gt;curl &lt;span class=&quot;hljs-string&quot;&gt;&quot;https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz&quot;&lt;/span&gt; \    &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;silent &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;location \    &lt;span class=&quot;hljs-operator&quot;&gt;|&lt;/span&gt; tar xz &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;C &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;tmpsudo mv &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;tmp&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;eksctl &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;usr&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;local&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;bin&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&quot;heading-eksctl-anywhere-plugin&quot;&gt;eksctl anywhere plugin&lt;/h3&gt;&lt;p&gt;Install the &lt;code&gt;eksctl-anywhere&lt;/code&gt; plugin.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;export EKSA_RELEASE&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;0.9.1&quot;&lt;/span&gt; OS&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;$(uname -s | tr A-Z a-z)&quot;&lt;/span&gt; RELEASE_NUMBER&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;12&lt;/span&gt;curl &lt;span class=&quot;hljs-string&quot;&gt;&quot;https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz&quot;&lt;/span&gt; \    &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;silent &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;location \    &lt;span class=&quot;hljs-operator&quot;&gt;|&lt;/span&gt; tar xz ./eksctl&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;anywheresudo mv ./eksctl&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;anywhere &lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;usr&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;local&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;bin&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Verify your installed version.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ eksctl anywhere &lt;span class=&quot;hljs-keyword&quot;&gt;version&lt;/span&gt;v0&lt;span class=&quot;hljs-number&quot;&gt;.9&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.1&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-create-your-local-eks-anywhere-cluster&quot;&gt;Create your local EKS Anywhere cluster&lt;/h2&gt;&lt;p&gt;Now that you have all the tools installed, let&apos;s proceed with creating the &lt;code&gt;local&lt;/code&gt; EKS Anywhere cluster using the &lt;code&gt;Docker&lt;/code&gt; provider.&lt;/p&gt;&lt;p&gt;First we create a cluster configuration and save it in a file.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ CLUSTER_NAME&lt;span class=&quot;hljs-operator&quot;&gt;=&lt;/span&gt;rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;devubuntu@ip&lt;span class=&quot;hljs-number&quot;&gt;-10&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-12&lt;/span&gt;:&lt;span class=&quot;hljs-operator&quot;&gt;~&lt;/span&gt;$ eksctl anywhere generate clusterconfig $CLUSTER_NAME \&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;    &lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;provider docker &lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt; $CLUSTER_NAME.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Check the configuration file.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;&lt;span class=&quot;hljs-string&quot;&gt;$&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev.yaml&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;anywhere.eks.amazonaws.com/v1alpha1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Cluster&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;clusterNetwork:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;cniConfig:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;cilium:&lt;/span&gt; {}    &lt;span class=&quot;hljs-attr&quot;&gt;pods:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;cidrBlocks:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;192.168&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;/16&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;services:&lt;/span&gt;      &lt;span class=&quot;hljs-attr&quot;&gt;cidrBlocks:&lt;/span&gt;      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;10.96&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span 
class=&quot;hljs-number&quot;&gt;.0&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;/12&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;controlPlaneConfiguration:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;count:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;datacenterRef:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;DockerDatacenterConfig&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;externalEtcdConfiguration:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;count:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;kubernetesVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;&quot;1.22&quot;&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;managementCluster:&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;  &lt;span class=&quot;hljs-attr&quot;&gt;workerNodeGroupConfigurations:&lt;/span&gt;  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;count:&lt;/span&gt; &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;    &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;md-0&lt;/span&gt;&lt;span class=&quot;hljs-meta&quot;&gt;---&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;anywhere.eks.amazonaws.com/v1alpha1&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;DockerDatacenterConfig&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;  &lt;span 
class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;rc-dev&lt;/span&gt;&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt; {}&lt;span class=&quot;hljs-meta&quot;&gt;---&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;By default, EKS Anywhere creates a Kubernetes cluster with one control plane node and one worker node. It currently installs Kubernetes version &lt;code&gt;1.22&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Another setting worth noticing is the default CNI provider, which is &lt;code&gt;cilium&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;You can customize these settings to suit your workload cluster requirements.&lt;/p&gt;&lt;h3 id=&quot;heading-start-small&quot;&gt;Start Small&lt;/h3&gt;&lt;p&gt;We&apos;re going to start by installing our first EKS Anywhere cluster with a bare-minimum setup of 1 control plane node and 1 worker node. Later on, we&apos;ll increase the worker node count.&lt;/p&gt;&lt;p&gt;Once we have the configuration file, all you need to do is:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ time eksctl anywhere &lt;span class=&quot;hljs-keyword&quot;&gt;create&lt;/span&gt; cluster -f $CLUSTER_NAME.yamlPerforming setup &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; validations&lt;span class=&quot;hljs-keyword&quot;&gt;Warning&lt;/span&gt;: The docker infrastructure provider &lt;span class=&quot;hljs-keyword&quot;&gt;is&lt;/span&gt; meant &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;local&lt;/span&gt; development &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; testing &lt;span class=&quot;hljs-keyword&quot;&gt;only&lt;/span&gt; Docker Provider setup &lt;span class=&quot;hljs-keyword&quot;&gt;is&lt;/span&gt; valid &lt;span class=&quot;hljs-keyword&quot;&gt;Validate&lt;/span&gt; certificate &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; registry mirror &lt;span class=&quot;hljs-keyword&quot;&gt;Create&lt;/span&gt; preflight 
validations passCreating &lt;span class=&quot;hljs-keyword&quot;&gt;new&lt;/span&gt; bootstrap clusterProvider specific pre-capi-&lt;span class=&quot;hljs-keyword&quot;&gt;install&lt;/span&gt;-setup &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; bootstrap clusterInstalling cluster-api providers &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; bootstrap clusterProvider specific post-setupCreating &lt;span class=&quot;hljs-keyword&quot;&gt;new&lt;/span&gt; workload clusterInstalling networking &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling cluster-api providers &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling EKS-A secrets &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterMoving cluster &lt;span class=&quot;hljs-keyword&quot;&gt;management&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;from&lt;/span&gt; bootstrap &lt;span class=&quot;hljs-keyword&quot;&gt;to&lt;/span&gt; workload clusterInstalling EKS-A custom components (CRD &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; controller) &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling EKS-D components &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterCreating EKS-A CRDs instances &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterInstalling AddonManager &lt;span class=&quot;hljs-keyword&quot;&gt;and&lt;/span&gt; GitOps Toolkit &lt;span class=&quot;hljs-keyword&quot;&gt;on&lt;/span&gt; workload clusterGitOps &lt;span class=&quot;hljs-keyword&quot;&gt;field&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;not&lt;/span&gt; specified, bootstrap flux skippedWriting cluster config &lt;span class=&quot;hljs-keyword&quot;&gt;file&lt;/span&gt;Deleting bootstrap cluster🎉 Cluster created!&lt;span class=&quot;hljs-built_in&quot;&gt;real&lt;/span&gt;    &lt;span 
class=&quot;hljs-number&quot;&gt;5&lt;/span&gt;m1&lt;span class=&quot;hljs-number&quot;&gt;.553&lt;/span&gt;s&lt;span class=&quot;hljs-keyword&quot;&gt;user&lt;/span&gt;    &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;m2&lt;span class=&quot;hljs-number&quot;&gt;.941&lt;/span&gt;s&lt;span class=&quot;hljs-keyword&quot;&gt;sys&lt;/span&gt;    &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;m2&lt;span class=&quot;hljs-number&quot;&gt;.084&lt;/span&gt;s&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Within approximately 5 minutes, you have the workload cluster ready for use without much hassle; that&apos;s pretty neat!&lt;/p&gt;&lt;p&gt;By default, the console log above highlights the different stages of the workload cluster creation lifecycle; however, if you want to see more detailed logs, add the &lt;code&gt;-v&lt;/code&gt; parameter to the &lt;code&gt;eksctl anywhere&lt;/code&gt; cluster create command to turn on verbose mode.&lt;/p&gt;&lt;p&gt;You can check the bootstrap cluster by issuing the below command. 
Install &lt;code&gt;KinD&lt;/code&gt; if you don&apos;t have it on your workstation by following &lt;a target=&quot;_blank&quot; href=&quot;https://kind.sigs.k8s.io/docs/user/quick-start/#installation&quot;&gt;these&lt;/a&gt; instructions.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kind &lt;span class=&quot;hljs-keyword&quot;&gt;get&lt;/span&gt; clustersrc-dev&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now it&apos;s time to verify the cluster and check the installed Kubernetes version.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;# export the kubeconfig &lt;span class=&quot;hljs-keyword&quot;&gt;to&lt;/span&gt; &lt;span class=&quot;hljs-type&quot;&gt;point&lt;/span&gt; &lt;span class=&quot;hljs-keyword&quot;&gt;to&lt;/span&gt; the &lt;span class=&quot;hljs-keyword&quot;&gt;cluster&lt;/span&gt;$ export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-&lt;span class=&quot;hljs-keyword&quot;&gt;cluster&lt;/span&gt;.kubeconfig# &lt;span class=&quot;hljs-keyword&quot;&gt;check&lt;/span&gt; nodes &lt;span class=&quot;hljs-keyword&quot;&gt;of&lt;/span&gt; the k8s &lt;span class=&quot;hljs-keyword&quot;&gt;cluster&lt;/span&gt;$ kubectl &lt;span class=&quot;hljs-keyword&quot;&gt;get&lt;/span&gt; nodes&lt;span class=&quot;hljs-type&quot;&gt;NAME&lt;/span&gt;                           STATUS   ROLES                  AGE   &lt;span class=&quot;hljs-keyword&quot;&gt;VERSION&lt;/span&gt;rc-dev-gzktk                   Ready    control-plane,master   &lt;span class=&quot;hljs-number&quot;&gt;15&lt;/span&gt;m   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;-eks-bb942e6rc-dev-md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;-7&lt;/span&gt;c4c7f595d&lt;span class=&quot;hljs-number&quot;&gt;-5&lt;/span&gt;lcq8   Ready    &amp;lt;&lt;span class=&quot;hljs-keyword&quot;&gt;none&lt;/span&gt;&amp;gt;                 &lt;span class=&quot;hljs-number&quot;&gt;14&lt;/span&gt;m   v1&lt;span 
class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;-eks-bb942e&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-spin-up-a-multi-node-cluster&quot;&gt;Spin up a multi-node cluster&lt;/h2&gt;&lt;p&gt;While the basic cluster with a control plane and a worker node is nice, having the ability to spin up a multi-node Kubernetes cluster is fantastic. Let&apos;s see if EKS Anywhere is up to the task.&lt;/p&gt;&lt;p&gt;To increase the number of nodes in the cluster, you need to modify the cluster configuration file. There&apos;s no option to pass this as a parameter to the &lt;code&gt;eksctl anywhere&lt;/code&gt; command; at least not for now.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ cp rc-dev.yaml rc-dev-multinode.yaml&lt;span class=&quot;hljs-comment&quot;&gt;#change metadata and name of the cluster configuration&lt;/span&gt;$ sed -i &lt;span class=&quot;hljs-string&quot;&gt;&apos;s/rc-dev/rc-dev-multinode/g&apos;&lt;/span&gt; rc-dev-multinode.yaml&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And finally, change the &lt;code&gt;workerNodeGroupConfigurations.count&lt;/code&gt; value to 3. 
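For reference, after that edit the worker node group section of the copied configuration file should look like the following sketch (based on the generated configuration shown earlier; only the count changes):

```yaml
# rc-dev-multinode.yaml (excerpt) - only the worker node group section changes
workerNodeGroupConfigurations:
- count: 3        # was 1 in the generated config
  name: md-0
```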
Now let&apos;s deploy the cluster.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ eksctl anywhere create cluster &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;f rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode.yaml&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-verify-cluster&quot;&gt;Verify cluster&lt;/h2&gt;&lt;p&gt;Export the kubeconfig file as before and check the node status.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get nodesNAME                                     STATUS   ROLES                  AGE     VERSIONrc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7dwtd                   Ready    control&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;plane,master   2m40s   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dsfrw   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 2m9s    v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;gpx2x   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 2m10s   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;md&lt;span class=&quot;hljs-number&quot;&gt;-0&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;86786b8dfb&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;tx6b4   Ready    &lt;span class=&quot;hljs-operator&quot;&gt;&amp;lt;&lt;/span&gt;none&lt;span class=&quot;hljs-operator&quot;&gt;&amp;gt;&lt;/span&gt;                 2m10s   v1&lt;span class=&quot;hljs-number&quot;&gt;.22&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;.6&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;eks&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bb942e6&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Also, check for all system pods if they&apos;re in &lt;code&gt;running&lt;/code&gt; state.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get po &lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;ANAMESPACE                           NAME                                                             READY   STATUS    
RESTARTS   AGEcapd&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         capd&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;6466d54c9d&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;j9klq                         &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          112scapi&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;kubeadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bootstrap&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system       capi&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;kubeadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bootstrap&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7c6bcf98b7&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;tb2zx       &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m1scapi&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;kubeadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;control&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;plane&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system   capi&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;kubeadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;control&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;plane&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;68b4c848dc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;zz6gs   &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          115scapi&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         capi&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;64647f455d&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;x5zx7                         &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m3scert&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager                        cert&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;8674857d7b&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;5hmgq                                    &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m41scert&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager                        cert&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;cainjector&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;f5b94ccdf&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;9bglv                          &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m41scert&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager                        cert&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;webhook&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;84fbd6fb68&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;6xkkj                            &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m40seksa&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         eksa&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;79b4b76bb8&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;lbc79                         &lt;span class=&quot;hljs-number&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;2&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          92setcdadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bootstrap&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;provider&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system   etcdadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bootstrap&lt;span 
class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;provider&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;664f699b7c&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;q44db   &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2metcdadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system           etcdadm&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;59dc96c7b9&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;hpnnp           &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          118skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         cilium&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;858bk                                                     &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m51skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         cilium&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;89r45                                   
                  &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m51skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         cilium&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;bxfv6                                                     &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m51skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         cilium&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;operator&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7698596ff4&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;2k264                                 &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m51skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         cilium&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;operator&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7698596ff4&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;cgjqn                                 &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m51skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                 
        cilium&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;z4zgt                                                     &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          2m51skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         coredns&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;55467bc785&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;n4mv2                                         &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          3m15skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         coredns&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;55467bc785&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;qgdhx                                         &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          3m15skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         kube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;apiserver&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7dwtd                            &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span 
class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          3m18skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         kube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;controller&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;manager&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;rc&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;dev&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;multinode&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7dwtd                   &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          3m18skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         kube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;proxy&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;2npkx                                                 &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          3m16skube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;system                         kube&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;proxy&lt;span class=&quot;hljs-operator&quot;&gt;-&lt;/span&gt;7j7vm                                                 &lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;hljs-operator&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;hljs-number&quot;&gt;1&lt;/span&gt;     Running   &lt;span class=&quot;hljs-number&quot;&gt;0&lt;/span&gt;          
2m56s
kube-system                         kube-proxy-8ntxj                                                 1/1     Running   0          2m56s
kube-system                         kube-proxy-cwgtn                                                 1/1     Running   0          2m55s
kube-system                         kube-scheduler-rc-dev-multinode-7dwtd                            1/1     Running   0          3m18s&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To verify that the cluster control plane is up and running, use &lt;code&gt;kubectl&lt;/code&gt; to show that the control-plane pods are all running.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get po -A -l control-plane=controller-manager
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-6466d54c9d-j9klq                         1/1     Running   0          20m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-7c6bcf98b7-tb2zx       1/1     Running   0          20m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-68b4c848dc-zz6gs   1/1     Running   0          20m
capi-system                         capi-controller-manager-64647f455d-x5zx7                         1/1     Running   0          20m
etcdadm-bootstrap-provider-system   etcdadm-bootstrap-provider-controller-manager-664f699b7c-q44db   1/1     Running   0          20m
etcdadm-controller-system           etcdadm-controller-controller-manager-59dc96c7b9-hpnnp           1/1     Running   0          20m&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once you&apos;ve verified that all nodes in the workload cluster are in &lt;code&gt;Ready&lt;/code&gt; state and the pods are in &lt;code&gt;Running&lt;/code&gt; status, you can go ahead and deploy some test workloads onto the cluster.
&lt;/p&gt;&lt;h2 id=&quot;heading-deploy-sample-workload&quot;&gt;Deploy sample workload&lt;/h2&gt;&lt;p&gt;To deploy a sample test workload in your shiny new multinode workload cluster, run&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl apply -f &quot;https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml&quot;
deployment.apps/hello-eks-a created
service/hello-eks-a created&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Check all Kubernetes resources in the &lt;code&gt;default&lt;/code&gt; namespace.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/hello-eks-a-9644dd8dc-znwzr   1/1     Running   0          37s

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/hello-eks-a   NodePort    10.106.132.175   &amp;lt;none&amp;gt;        80:30687/TCP   37s
service/kubernetes    ClusterIP   10.96.0.1        &amp;lt;none&amp;gt;        443/TCP        6m32s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-eks-a   1/1     1            1           37s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-eks-a-9644dd8dc   1         1         1       37s&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To access the default web page of the sample workload, forward the deployment port to your localhost.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl port-forward deploy/hello-eks-a 8000:80
Forwarding from 127.0.0.1:8000 -&amp;gt; 80
Forwarding from [::1]:8000 -&amp;gt; 80&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;From a second terminal, try accessing the webpage with &lt;code&gt;curl localhost:8000&lt;/code&gt;&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1656229177222/UrfVS0j3Y.png&quot; alt=&quot;Screen Shot 2022-06-26 at 12.35.28 AM.png&quot; /&gt;&lt;/p&gt;&lt;h2 id=&quot;heading-scale-your-cluster&quot;&gt;Scale your 
cluster&lt;/h2&gt;&lt;p&gt;Currently, the only way to scale the workload cluster is to manually increment the number of control plane and worker nodes in the cluster configuration file and upgrade the cluster.&lt;/p&gt;&lt;p&gt;Increment the worker node count to 5 as shown below.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ cat rc-dev-multinode-more.yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: rc-dev-multinode
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: rc-dev-multinode
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: &quot;1.21&quot;
  managementCluster:
    name: rc-dev-multinode
  workerNodeGroupConfigurations:
  - count: 5
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: rc-dev-multinode
spec: {}
---&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Let&apos;s give it a try and upgrade the workload cluster to add more worker nodes.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ eksctl anywhere upgrade cluster -f rc-dev-multinode-more.yaml
Performing setup and validations
Docker Provider setup is valid
Validate certificate for registry mirror
Control plane ready
Worker nodes ready
Nodes ready
Cluster CRDs ready
Cluster object present on workload cluster
Upgrade cluster kubernetes version increment
Validate immutable fields
Upgrade preflight validations pass
Ensuring etcd CAPI providers exist on management cluster before upgrade
Upgrading core components
Pausing EKS-A cluster controller reconcile
Pausing Flux kustomization
GitOps field not specified, pause flux kustomization skipped
Creating bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Moving cluster management from workload to bootstrap cluster
Upgrading workload cluster
Moving cluster management from bootstrap to workload cluster
Applying new EKS-A cluster resource; resuming reconcile
Resuming EKS-A controller reconciliation
Updating Git Repo with new EKS-A cluster spec
GitOps field not specified, update git repo skipped
Forcing reconcile Git repo with latest commit
GitOps not configured, force reconcile flux git repo skipped
Resuming Flux kustomization
GitOps field not specified, resume flux kustomization skipped
Writing cluster config file
🎉 Cluster upgraded!&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Let&apos;s check the nodes to verify the state of the cluster.&lt;/p&gt;&lt;pre&gt;&lt;code&gt;$ kubectl get nodes
NAME                                     STATUS   ROLES                  AGE   VERSION
rc-dev-multinode-7dwtd                   Ready    control-plane,master   43m   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-dsfrw   Ready    &amp;lt;none&amp;gt;                 43m   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-gpx2x   Ready    &amp;lt;none&amp;gt;                 43m   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-stkx8   Ready    &amp;lt;none&amp;gt;                 10m   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-tx6b4   Ready    &amp;lt;none&amp;gt;                 43m   v1.22.6-eks-bb942e6
rc-dev-multinode-md-0-86786b8dfb-vqpf9   Ready    &amp;lt;none&amp;gt;                 10m   v1.22.6-eks-bb942e6&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And sure enough, we have all 5 worker nodes in &lt;code&gt;Ready&lt;/code&gt; status.&lt;/p&gt;&lt;p&gt;You can scale workload clusters in a semi-automatic way by storing your cluster config manifest in Git and having a CI/CD system deploy your changes, or you can use a GitOps controller to apply the changes. EKS Anywhere supports this style of cluster management, using Flux to manage clusters with GitOps. 
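&lt;/p&gt;&lt;p&gt;As a rough sketch of what enabling GitOps looks like (this follows the &lt;code&gt;v1alpha1&lt;/code&gt; &lt;code&gt;GitOpsConfig&lt;/code&gt; shape from the EKS Anywhere docs; the repository details below are placeholders, not taken from this walkthrough), you point the cluster spec at a &lt;code&gt;GitOpsConfig&lt;/code&gt; via &lt;code&gt;gitOpsRef&lt;/code&gt;:&lt;/p&gt;&lt;pre&gt;&lt;code&gt;apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: rc-dev-multinode
spec:
  # ...rest of the cluster spec unchanged...
  gitOpsRef:
    kind: GitOpsConfig
    name: rc-dev-gitops
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: rc-dev-gitops
spec:
  flux:
    github:
      owner: my-github-user        # placeholder
      repository: eks-a-clusters   # placeholder
      branch: main
      clusterConfigPath: clusters/rc-dev-multinode&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With this in place, Flux treats the repo as the source of truth, and a change pushed to the cluster spec (such as bumping the worker node count) is reconciled into the cluster automatically.&lt;/p&gt;&lt;p&gt;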
More on this in a next post.&lt;/p&gt;&lt;h2 id=&quot;heading-adding-integration-to-your-eks-anywhere-cluster&quot;&gt;Adding integrations to your EKS Anywhere cluster&lt;/h2&gt;&lt;p&gt;Standing up a workload cluster is great; however, as mentioned earlier, operating Kubernetes can quickly become operationally intensive, and it demands the flexibility to integrate with a wide array of tools. EKS Anywhere doesn&apos;t disappoint here.&lt;/p&gt;&lt;p&gt;EKS Anywhere offers custom integration for certain third-party vendor components, namely Ubuntu TLS, Cilium, and Flux. It also provides the flexibility to integrate with your choice of tools in other areas. This frees teams from being locked in with a particular vendor and lets them swap out default components for tools of their choice; for instance, replacing Cilium with a different CNI such as Calico.&lt;/p&gt;&lt;p&gt;Some of the key integration components compatible with EKS Anywhere clusters are&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Load balancer - KubeVip, MetalLB&lt;/li&gt;&lt;li&gt;Local container registry - Harbor&lt;/li&gt;&lt;li&gt;Monitoring - Prometheus, Grafana, Datadog, or New Relic&lt;/li&gt;&lt;li&gt;Logging - Splunk or Fluent Bit&lt;/li&gt;&lt;li&gt;Secret management - HashiCorp Vault&lt;/li&gt;&lt;li&gt;Policy agent - Open Policy Agent (OPA)&lt;/li&gt;&lt;li&gt;Service mesh - Istio, Linkerd&lt;/li&gt;&lt;li&gt;Cost management - Kubecost&lt;/li&gt;&lt;li&gt;Etcd backup and restore - Velero&lt;/li&gt;&lt;/ul&gt;&lt;h1 id=&quot;heading-cluster-api-kubernetes-capi&quot;&gt;Kubernetes Cluster API (CAPI)&lt;/h1&gt;&lt;p&gt;No introduction to EKS Anywhere is complete without mentioning the Kubernetes Cluster API project.&lt;/p&gt;&lt;p&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://cluster-api.sigs.k8s.io/&quot;&gt;Kubernetes Cluster API&lt;/a&gt;, or CAPI, is a Kubernetes SIG (Special Interest Group) project focused on providing declarative APIs and tooling to simplify 
provisioning, upgrading, and operating multiple Kubernetes clusters. Two of the main goals of the CAPI project are&lt;/p&gt;&lt;ul&gt;&lt;li&gt;To manage the lifecycle (create, scale, upgrade, destroy) of Kubernetes-conformant clusters using a declarative API.&lt;/li&gt;&lt;li&gt;To work in different environments, both on-premises and in the cloud.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;To add some context, the notions of a bootstrap cluster, a management cluster, and a workload cluster in the context of managing the Kubernetes cluster lifecycle were first championed by CAPI.&lt;/p&gt;&lt;p&gt;EKS Anywhere works on the same principles and uses CAPI underneath to implement some of these features. It uses an infrastructure-provider model, built on the Kubernetes Cluster API project, for creating, upgrading, and managing Kubernetes clusters. The first supported EKS Anywhere provider, VMware vSphere, is implemented based on the Kubernetes Cluster API Provider vSphere (CAPV) specifications. Similarly, EKS Anywhere supports the Cluster API Provider for Docker (CAPD) for creating development and test workload clusters.&lt;/p&gt;&lt;p&gt;The EKS Anywhere project wraps Cluster API and various other CLIs and plugins (the eksctl CLI, the anywhere plugin, kubectl, aws-iam-authenticator) and bundles them in a single package to simplify the creation of workload clusters.&lt;/p&gt;&lt;h1 id=&quot;heading-epilogue&quot;&gt;Epilogue&lt;/h1&gt;&lt;p&gt;Amazon EKS Anywhere aims to solve the pain points of managing the lifecycle of Kubernetes clusters in on-premises setups and provides a consistent and reliable workflow for creating and managing Kubernetes clusters across deployment models and providers. 
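&lt;/p&gt;&lt;p&gt;For a concrete feel of the declarative model CAPI brings, here is a minimal illustration of the kind of objects a CAPI-based tool manages for a Docker-backed cluster (the API versions and names here are generic CAPI examples, not output from this walkthrough):&lt;/p&gt;&lt;pre&gt;&lt;code&gt;apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: rc-dev-multinode
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: rc-dev-multinode
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: rc-dev-multinode&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Scaling, upgrading, or deleting the cluster then becomes an edit to objects like these, which controllers reconcile; this is the machinery EKS Anywhere drives for you under the hood.&lt;/p&gt;&lt;p&gt;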
With its currently supported VMware vSphere provider and upcoming support for bare metal, I am keen to explore its potential and follow its adoption across customers and teams.&lt;/p&gt;&lt;p&gt;Let me know your thoughts in the comments.&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1656293933822/0DFAElA1_.png</hashnode:coverImage></item></channel></rss>