Tutorial: Custom networking
By default, when the Amazon VPC CNI plugin for Kubernetes creates secondary elastic network interfaces (network interfaces) for your Amazon EC2 node, it creates them in the same subnet as the node's primary network interface. It also associates the same security groups to the secondary network interface that are associated to the primary network interface. For one or more of the following reasons, you might want the plugin to create secondary network interfaces in a different subnet or want to associate different security groups to the secondary network interfaces, or both:
There's a limited number of IPv4 addresses available in the subnet that the primary network interface is in. This might limit the number of Pods that you can create in the subnet. By using a different subnet for secondary network interfaces, you can increase the number of IPv4 addresses available for Pods.
For security reasons, your Pods might need to use a different subnet or security groups than the node's primary network interface.
The nodes are configured in public subnets, and you want to place the Pods in private subnets. The route table associated to a public subnet includes a route to an internet gateway. The route table associated to a private subnet doesn't include a route to an internet gateway.
Considerations
With custom networking enabled, no IP addresses assigned to the primary network interface are assigned to Pods. Only IP addresses from secondary network interfaces are assigned to Pods.
If your cluster uses the IPv6 family, you can't use custom networking. If you plan to use custom networking only to help alleviate IPv4 address exhaustion, you can create a cluster using the IPv6 family instead. For more information, see Tutorial: Assigning IPv6 addresses to Pods and services.
Even though Pods deployed to subnets specified for secondary network interfaces can use a different subnet and security groups than the node's primary network interface, the subnets and security groups must be in the same VPC as the node.
Prerequisites
Familiarity with how the Amazon VPC CNI plugin for Kubernetes creates secondary network interfaces and assigns IP addresses to Pods. For more information, see ENI Allocation.
Version 2.12.3 or later or 1.27.160 or later of the AWS CLI installed and configured on your device or AWS CloudShell. You can check your current version with aws --version | cut -d / -f2 | cut -d ' ' -f1.
Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see Installing, updating, and uninstalling the AWS CLI and Quick configuration with aws configure in the AWS Command Line Interface User Guide. The AWS CLI version installed in the AWS CloudShell may also be several versions behind the latest version. To update it, see Installing AWS CLI to your home directory in the AWS CloudShell User Guide.
The kubectl command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.26, you can use kubectl version 1.25, 1.26, or 1.27 with it. To install or upgrade kubectl, see Installing or updating kubectl.
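For example, to compare your kubectl client version with your cluster's Kubernetes version, you can run the following command; the exact output format varies with the kubectl release.
kubectl version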
We recommend that you complete the steps in this topic in a Bash shell. If you aren't using a Bash shell, some script commands such as line continuation characters and the way variables are set and used require adjustment for your shell. Additionally, the quoting and escaping rules for your shell might be different. For more information, see Using quotation marks with strings in the AWS CLI in the AWS Command Line Interface User Guide.
For this tutorial, we recommend using the example values, except where it's noted to replace them. You can replace any example value when completing the steps for a production cluster. We recommend completing all steps in the same terminal. This is because variables are set and used throughout the steps and won't exist in different terminals.
The commands in this topic are formatted using the conventions listed in Using the AWS CLI examples. If you're running commands from the command line against resources that are in a different AWS Region than the default AWS Region defined in the AWS CLI profile that you're using, then you need to add --region region-code to the commands.
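For example, to run this topic's describe-cluster command against a cluster in a Region other than your profile's default, you would append the Region, as in the following hypothetical command (us-west-2 is only an example Region).
aws eks describe-cluster --name my-custom-networking-cluster --region us-west-2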
When you want to deploy custom networking to your production cluster, skip to Step 2: Configure your VPC.
Step 1: Create a test VPC and cluster
To create a cluster
The following procedures help you create a test VPC and cluster and configure custom networking for that cluster. We don't recommend using the test cluster for production workloads because several unrelated features that you might use on your production cluster aren't covered in this topic. For more information, see Creating an Amazon EKS cluster.
Define a few variables to use in the remaining steps.
export cluster_name=my-custom-networking-cluster
account_id=$(aws sts get-caller-identity --query Account --output text)
Create a VPC.
Create a VPC using an Amazon EKS AWS CloudFormation template.
aws cloudformation create-stack --stack-name my-eks-custom-networking-vpc \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml \
  --parameters ParameterKey=VpcBlock,ParameterValue=192.168.0.0/24 \
  ParameterKey=PrivateSubnet01Block,ParameterValue=192.168.0.64/27 \
  ParameterKey=PrivateSubnet02Block,ParameterValue=192.168.0.96/27 \
  ParameterKey=PublicSubnet01Block,ParameterValue=192.168.0.0/27 \
  ParameterKey=PublicSubnet02Block,ParameterValue=192.168.0.32/27
The AWS CloudFormation stack takes a few minutes to create. To check on the stack's deployment status, run the following command.
aws cloudformation describe-stacks --stack-name my-eks-custom-networking-vpc --query Stacks\[\].StackStatus --output text
Don't continue to the next step until the output of the command is CREATE_COMPLETE.
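If you prefer not to poll, the AWS CLI provides a waiter that returns when the stack reaches CREATE_COMPLETE; the following is an optional alternative using the stack name from this tutorial.
aws cloudformation wait stack-create-complete --stack-name my-eks-custom-networking-vpc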
Define variables with the values of the private subnet IDs created by the template.
subnet_id_1=$(aws cloudformation describe-stack-resources --stack-name my-eks-custom-networking-vpc \
  --query "StackResources[?LogicalResourceId=='PrivateSubnet01'].PhysicalResourceId" --output text)
subnet_id_2=$(aws cloudformation describe-stack-resources --stack-name my-eks-custom-networking-vpc \
  --query "StackResources[?LogicalResourceId=='PrivateSubnet02'].PhysicalResourceId" --output text)
Define variables with the Availability Zones of the subnets retrieved in the previous step.
az_1=$(aws ec2 describe-subnets --subnet-ids $subnet_id_1 --query 'Subnets[*].AvailabilityZone' --output text)
az_2=$(aws ec2 describe-subnets --subnet-ids $subnet_id_2 --query 'Subnets[*].AvailabilityZone' --output text)
Create a cluster IAM role.
Run the following command to create an IAM trust policy JSON file.
cat >eks-cluster-role-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
Create the Amazon EKS cluster IAM role. If necessary, preface eks-cluster-role-trust-policy.json with the path on your computer that you wrote the file to in the previous step. The command associates the trust policy that you created in the previous step to the role. To create an IAM role, the IAM principal that is creating the role must be assigned the iam:CreateRole action (permission).
aws iam create-role --role-name myCustomNetworkingAmazonEKSClusterRole --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
Attach the Amazon EKS managed policy named AmazonEKSClusterPolicy to the role. To attach an IAM policy to an IAM principal, the principal that is attaching the policy must be assigned one of the following IAM actions (permissions): iam:AttachUserPolicy or iam:AttachRolePolicy.
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy --role-name myCustomNetworkingAmazonEKSClusterRole
Create an Amazon EKS cluster and configure your device to communicate with it.
Create a cluster.
aws eks create-cluster --name my-custom-networking-cluster \
  --role-arn arn:aws:iam::$account_id:role/myCustomNetworkingAmazonEKSClusterRole \
  --resources-vpc-config subnetIds=$subnet_id_1","$subnet_id_2
Note
You might receive an error that one of the Availability Zones in your request doesn't have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see Insufficient capacity.
The cluster takes several minutes to create. To check on the cluster's deployment status, run the following command.
aws eks describe-cluster --name my-custom-networking-cluster --query cluster.status
Don't continue to the next step until the output of the command is "ACTIVE".
Configure kubectl to communicate with your cluster.
aws eks update-kubeconfig --name my-custom-networking-cluster
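As an optional check that kubectl can reach the new cluster, you can list the default Kubernetes service; a single kubernetes service entry indicates that the kubeconfig update worked.
kubectl get svc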
Step 2: Configure your VPC
This tutorial requires the VPC created in Step 1: Create a test VPC and cluster. For a production cluster, adjust the steps accordingly for your VPC by replacing all of the example values with your own.
Confirm that your currently-installed Amazon VPC CNI plugin for Kubernetes is the latest version. To determine the latest version for the Amazon EKS add-on type and update your version to it, see Updating an add-on. To determine the latest version for the self-managed add-on type and update your version to it, see Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
Retrieve the ID of your cluster VPC and store it in a variable for use in later steps. For a production cluster, replace my-custom-networking-cluster with the name of your cluster.
vpc_id=$(aws eks describe-cluster --name my-custom-networking-cluster --query "cluster.resourcesVpcConfig.vpcId" --output text)
Associate an additional Classless Inter-Domain Routing (CIDR) block with your cluster's VPC. The CIDR block can't overlap with any existing associated CIDR blocks.
View the current CIDR blocks associated to your VPC.
aws ec2 describe-vpcs --vpc-ids $vpc_id \
  --query 'Vpcs[*].CidrBlockAssociationSet[*].{CIDRBlock: CidrBlock, State: CidrBlockState.State}' --output table
The example output is as follows.
----------------------------------
|          DescribeVpcs          |
+-----------------+--------------+
|    CIDRBlock    |    State     |
+-----------------+--------------+
|  192.168.0.0/24 |  associated  |
+-----------------+--------------+
Associate an additional CIDR block to your VPC. For more information, see Associate additional IPv4 CIDR blocks with your VPC in the Amazon VPC User Guide.
aws ec2 associate-vpc-cidr-block --vpc-id $vpc_id --cidr-block 192.168.1.0/24
Confirm that the new block is associated.
aws ec2 describe-vpcs --vpc-ids $vpc_id --query 'Vpcs[*].CidrBlockAssociationSet[*].{CIDRBlock: CidrBlock, State: CidrBlockState.State}' --output table
The example output is as follows.
----------------------------------
|          DescribeVpcs          |
+-----------------+--------------+
|    CIDRBlock    |    State     |
+-----------------+--------------+
|  192.168.0.0/24 |  associated  |
|  192.168.1.0/24 |  associated  |
+-----------------+--------------+
Don't proceed to the next step until your new CIDR block's State is associated.
Create as many subnets as you want to use in each Availability Zone that your existing subnets are in. Specify a CIDR block that's within the CIDR block that you associated with your VPC in a previous step.
Create new subnets. The subnets must be created in a different VPC CIDR block than your existing subnets are in, but in the same Availability Zones as your existing subnets. In this example, one subnet is created in the new CIDR block in each Availability Zone that the current private subnets exist in. The IDs of the subnets created are stored in variables for use in later steps. The Name values match the values assigned to the subnets created using the Amazon EKS VPC template in a previous step. Names aren't required. You can use different names.
new_subnet_id_1=$(aws ec2 create-subnet --vpc-id $vpc_id --availability-zone $az_1 --cidr-block 192.168.1.0/27 \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-eks-custom-networking-vpc-PrivateSubnet01},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
  --query Subnet.SubnetId --output text)
new_subnet_id_2=$(aws ec2 create-subnet --vpc-id $vpc_id --availability-zone $az_2 --cidr-block 192.168.1.32/27 \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-eks-custom-networking-vpc-PrivateSubnet02},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
  --query Subnet.SubnetId --output text)
Important
By default, your new subnets are implicitly associated with your VPC's main route table. This route table allows communication between all the resources that are deployed in the VPC. However, it doesn't allow communication with resources that have IP addresses that are outside the CIDR blocks that are associated with your VPC. You can associate your own route table to your subnets to change this behavior. For more information, see Subnet route tables in the Amazon VPC User Guide.
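For example, if you want the new subnets to use a route table other than the VPC's main route table, you could create one and associate it with each subnet, as in the following hypothetical sketch; the variable name custom_rtb_id and the routes you would add to the table depend on your own requirements.
# Hypothetical example: create a custom route table and associate it with the new subnets
custom_rtb_id=$(aws ec2 create-route-table --vpc-id $vpc_id --query RouteTable.RouteTableId --output text)
aws ec2 associate-route-table --route-table-id $custom_rtb_id --subnet-id $new_subnet_id_1
aws ec2 associate-route-table --route-table-id $custom_rtb_id --subnet-id $new_subnet_id_2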
View the current subnets in your VPC.
aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc_id" \
  --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
  --output table
The example output is as follows.
----------------------------------------------------------------------
|                           DescribeSubnets                          |
+------------------+--------------------+----------------------------+
| AvailabilityZone |     CidrBlock      |          SubnetId          |
+------------------+--------------------+----------------------------+
|  us-west-2d      |  192.168.0.0/27    |  subnet-example1           |
|  us-west-2a      |  192.168.0.32/27   |  subnet-example2           |
|  us-west-2a      |  192.168.0.64/27   |  subnet-example3           |
|  us-west-2d      |  192.168.0.96/27   |  subnet-example4           |
|  us-west-2a      |  192.168.1.0/27    |  subnet-example5           |
|  us-west-2d      |  192.168.1.32/27   |  subnet-example6           |
+------------------+--------------------+----------------------------+
You can see the subnets in the 192.168.1.0 CIDR block that you created are in the same Availability Zones as the subnets in the 192.168.0.0 CIDR block.
Step 3: Configure Kubernetes resources
To configure Kubernetes resources
Set the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG environment variable to true in the aws-node DaemonSet.
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
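To confirm that the variable is now set on the DaemonSet, you can optionally run the following check.
kubectl describe daemonset aws-node -n kube-system | grep AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG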
Retrieve the ID of your cluster security group and store it in a variable for use in the next step. Amazon EKS automatically creates this security group when you create your cluster.
cluster_security_group_id=$(aws eks describe-cluster --name $cluster_name --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text)
Create an ENIConfig custom resource for each subnet that you want to deploy Pods in.
Create a unique file for each network interface configuration. The following commands create separate ENIConfig files for the two subnets that were created in a previous step. The value for name must be unique. The name is the same as the Availability Zone that the subnet is in. The cluster security group is assigned to the ENIConfig.
cat >$az_1.yaml <<EOF
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $az_1
spec:
  securityGroups:
    - $cluster_security_group_id
  subnet: $new_subnet_id_1
EOF
cat >$az_2.yaml <<EOF
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $az_2
spec:
  securityGroups:
    - $cluster_security_group_id
  subnet: $new_subnet_id_2
EOF
For a production cluster, you can make the following changes to the previous commands:
Replace $cluster_security_group_id with the ID of an existing security group that you want to use for each ENIConfig.
We recommend naming your ENIConfigs the same as the Availability Zone that you'll use the ENIConfig for, whenever possible. You might need to use different names for your ENIConfigs than the names of the Availability Zones for a variety of reasons. For example, if you have more than two subnets in the same Availability Zone and want to use them both with custom networking, then you need multiple ENIConfigs for the same Availability Zone. Since each ENIConfig requires a unique name, you can't name more than one of your ENIConfigs using the Availability Zone name.
If your ENIConfig names aren't all the same as Availability Zone names, then replace $az_1 and $az_2 with your own names in the previous commands and annotate your nodes with the ENIConfig later in this tutorial.
Note
If you don't specify a valid security group for use with a production cluster and you're using:
version 1.8.0 or later of the Amazon VPC CNI plugin for Kubernetes, then the security groups associated with the node's primary elastic network interface are used.
a version of the Amazon VPC CNI plugin for Kubernetes that's earlier than 1.8.0, then the default security group for the VPC is assigned to secondary network interfaces.
Important
AWS_VPC_K8S_CNI_EXTERNALSNAT=false is a default setting in the configuration for the Amazon VPC CNI plugin for Kubernetes. If you're using the default setting, then traffic that is destined for IP addresses that aren't within one of the CIDR blocks associated with your VPC uses the security groups and subnets of your node's primary network interface. The subnets and security groups defined in your ENIConfigs that are used to create secondary network interfaces aren't used for this traffic. For more information about this setting, see SNAT for Pods.
If you also use security groups for Pods, the security group that's specified in a SecurityGroupPolicy is used instead of the security group that's specified in the ENIConfigs. For more information, see Tutorial: Security groups for Pods.
Apply each custom resource file that you created to your cluster with the following commands.
kubectl apply -f $az_1.yaml
kubectl apply -f $az_2.yaml
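Optionally, and only after reviewing the important note above and SNAT for Pods, you can change the default SNAT behavior so that traffic destined outside your VPC uses the subnets and security groups from your ENIConfigs rather than the node's primary network interface; the following command sets the variable described in that note.
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true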
Confirm that your ENIConfigs were created.
kubectl get ENIConfigs
The example output is as follows.
NAME          AGE
us-west-2a    117s
us-west-2d    105s
If you're enabling custom networking on a production cluster and named your ENIConfigs something other than the Availability Zone that you're using them for, then skip to the next step to deploy Amazon EC2 nodes.
Enable Kubernetes to automatically apply the ENIConfig for an Availability Zone to any new Amazon EC2 nodes created in your cluster.
For the test cluster in this tutorial, skip to the next step.
For a production cluster, check to see if an annotation with the key k8s.amazonaws.com/eniConfig for the ENI_CONFIG_ANNOTATION_DEF environment variable exists in the container spec for the aws-node DaemonSet.
kubectl describe daemonset aws-node -n kube-system | grep ENI_CONFIG_ANNOTATION_DEF
If output is returned, the annotation exists. If no output is returned, then the variable is not set. For a production cluster, you can use either this setting or the setting in the following step. If you use this setting, it overrides the setting in the following step. In this tutorial, the setting in the next step is used.
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
Create a node IAM role.
Run the following command to create an IAM trust policy JSON file.
cat >node-role-trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
Run the following command to set a variable for your role name. You can replace myCustomNetworkingAmazonEKSNodeRole with any name you choose.
export node_role_name=myCustomNetworkingAmazonEKSNodeRole
Create the IAM role and store its returned Amazon Resource Name (ARN) in a variable for use in a later step.
node_role_arn=$(aws iam create-role --role-name $node_role_name --assume-role-policy-document file://"node-role-trust-relationship.json" \
  --query Role.Arn --output text)
Attach three required IAM managed policies to the IAM role.
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  --role-name $node_role_name
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --role-name $node_role_name
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --role-name $node_role_name
Important
For simplicity in this tutorial, the AmazonEKS_CNI_Policy policy is attached to the node IAM role. In a production cluster however, we recommend attaching the policy to a separate IAM role that is used only with the Amazon VPC CNI plugin for Kubernetes. For more information, see Configuring the Amazon VPC CNI plugin for Kubernetes to use IAM roles for service accounts.
Create one of the following types of node groups. To determine the instance type that you want to deploy, see Choosing an Amazon EC2 instance type. For this tutorial, complete the Managed, Without a launch template or with a launch template without an AMI ID specified option. If you're going to use the node group for production workloads, then we recommend that you familiarize yourself with all of the managed and self-managed node group options before deploying the node group.
Without a launch template or with a launch template without an AMI ID specified – Run the following command. For this tutorial, use the example values. For a production node group, replace all example values with your own. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
aws eks create-nodegroup --cluster-name $cluster_name --nodegroup-name my-nodegroup \
  --subnets $subnet_id_1 $subnet_id_2 --instance-types t3.medium --node-role $node_role_arn
With a launch template with a specified AMI
Determine the Amazon EKS recommended number of maximum Pods for your nodes. Follow the instructions in Amazon EKS recommended maximum Pods for each Amazon EC2 instance type, adding --cni-custom-networking-enabled to step 3 in that topic. Note the output for use in the next step.
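As a sketch of what that step might look like for this tutorial's instance type: the script name and the --instance-type and --cni-version options shown here are assumptions based on that topic, and --cni-custom-networking-enabled is the option this tutorial adds.
# Assumed invocation of the max Pods calculator script from the referenced topic
./max-pods-calculator.sh --instance-type t3.medium --cni-version 1.12.6-eksbuild.1 --cni-custom-networking-enabled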
In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI, then deploy the node group using a launch template and provide the following user data in the launch template. This user data passes arguments into the bootstrap.sh file. For more information about the bootstrap file, see bootstrap.sh on GitHub. You can replace 20 with either the value from the previous step (recommended) or your own value.
/etc/eks/bootstrap.sh my-cluster --use-max-pods false --kubelet-extra-args '--max-pods=20'
If you've created a custom AMI that is not built off the Amazon EKS optimized AMI, then you need to create the configuration yourself.
Self-managed
Determine the Amazon EKS recommended number of maximum Pods for your nodes. Follow the instructions in Amazon EKS recommended maximum Pods for each Amazon EC2 instance type, adding --cni-custom-networking-enabled to step 3 in that topic. Note the output for use in the next step.
Deploy the node group using the instructions in Launching self-managed Amazon Linux nodes. Specify the following text for the BootstrapArguments parameter. You can replace 20 with either the value from the previous step (recommended) or your own value.
--use-max-pods false --kubelet-extra-args '--max-pods=20'
Note
If you want nodes in a production cluster to support a significantly higher number of Pods, run the script in Amazon EKS recommended maximum Pods for each Amazon EC2 instance type again. Also, add the --cni-prefix-delegation-enabled option to the command. For example, 110 is returned for an m5.large instance type. For instructions on how to enable this capability, see Increase the amount of available IP addresses for your Amazon EC2 nodes. You can use this capability with custom networking.
Node group creation takes several minutes. You can check the status of the creation of a managed node group with the following command.
aws eks describe-nodegroup --cluster-name $cluster_name --nodegroup-name my-nodegroup --query nodegroup.status --output text
Don't continue to the next step until the output returned is ACTIVE.
For the tutorial, you can skip this step.
For a production cluster, if you didn't name your ENIConfigs the same as the Availability Zone that you're using them for, then you must annotate your nodes with the ENIConfig name that should be used with the node. This step isn't necessary if you only have one subnet in each Availability Zone and you named your ENIConfigs with the same names as your Availability Zones. This is because the Amazon VPC CNI plugin for Kubernetes automatically associates the correct ENIConfig with the node for you when you enabled it to do so in a previous step.
Get the list of nodes in your cluster.
kubectl get nodes
The example output is as follows.
NAME                                          STATUS   ROLES    AGE     VERSION
ip-192-168-0-126.us-west-2.compute.internal   Ready    <none>   8m49s   v1.22.9-eks-810597c
ip-192-168-0-92.us-west-2.compute.internal    Ready    <none>   8m34s   v1.22.9-eks-810597c
Determine which Availability Zone each node is in. Run the following command for each node that was returned in the previous step.
aws ec2 describe-instances --filters Name=network-interface.private-dns-name,Values=ip-192-168-0-126.us-west-2.compute.internal \
  --query 'Reservations[].Instances[].{AvailabilityZone: Placement.AvailabilityZone, SubnetId: SubnetId}'
The example output is as follows.
[
    {
        "AvailabilityZone": "us-west-2d",
        "SubnetId": "subnet-Example5"
    }
]
Annotate each node with the ENIConfig that you created for the subnet ID and Availability Zone. You can only annotate a node with one ENIConfig, though multiple nodes can be annotated with the same ENIConfig. Replace the example values with your own.
kubectl annotate node ip-192-168-0-126.us-west-2.compute.internal k8s.amazonaws.com/eniConfig=EniConfigName1
kubectl annotate node ip-192-168-0-92.us-west-2.compute.internal k8s.amazonaws.com/eniConfig=EniConfigName2
Make sure that you have available nodes that are using the custom networking feature.
Cordon and drain the nodes to gracefully shut down the Pods. For more information, see Safely Drain a Node in the Kubernetes documentation.
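For example, to cordon and drain one of the nodes shown earlier in this tutorial, you might run the following; replace the node name with your own and repeat for each node (kubectl drain cordons the node if it isn't already cordoned).
kubectl cordon ip-192-168-0-126.us-west-2.compute.internal
kubectl drain ip-192-168-0-126.us-west-2.compute.internal --ignore-daemonsets --delete-emptydir-data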
Terminate the nodes. If the nodes are in an existing managed node group, you can delete the node group. Copy the command that follows to your device. Make the following modifications to the command as needed and then run the modified command: Replace my-cluster with the name of your cluster and my-nodegroup with the name of your node group.
aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name my-nodegroup
Only new nodes that are registered with the k8s.amazonaws.com/eniConfig label use the custom networking feature.
Confirm that Pods are assigned an IP address from a CIDR block that's associated to one of the subnets that you created in a previous step.
kubectl get pods -A -o wide
The example output is as follows.
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE     IP              NODE                                          NOMINATED NODE   READINESS GATES
kube-system   aws-node-2rkn4             1/1     Running   0          7m19s   192.168.0.92    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
kube-system   aws-node-k96wp             1/1     Running   0          7m15s   192.168.0.126   ip-192-168-0-126.us-west-2.compute.internal   <none>           <none>
kube-system   coredns-657694c6f4-smcgr   1/1     Running   0          56m     192.168.1.23    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
kube-system   coredns-657694c6f4-stwv9   1/1     Running   0          56m     192.168.1.28    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
kube-system   kube-proxy-jgshq           1/1     Running   0          7m19s   192.168.0.92    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
kube-system   kube-proxy-wx9vk           1/1     Running   0          7m15s   192.168.0.126   ip-192-168-0-126.us-west-2.compute.internal   <none>           <none>
You can see that the coredns Pods are assigned IP addresses from the 192.168.1.0 CIDR block that you added to your VPC. Without custom networking, they would have been assigned addresses from the 192.168.0.0 CIDR block, because it was the only CIDR block originally associated with the VPC.
If a Pod's spec contains hostNetwork=true, it's assigned the primary IP address of the node. It isn't assigned an address from the subnets that you added. By default, this value is set to false. This value is set to true for the kube-proxy and Amazon VPC CNI plugin for Kubernetes (aws-node) Pods that run on your cluster. This is why the kube-proxy and the plugin's aws-node Pods aren't assigned 192.168.1.x addresses in the previous output. For more information about a Pod's hostNetwork setting, see PodSpec v1 core in the Kubernetes API reference.
Step 5: Delete tutorial resources
After you complete the tutorial, we recommend that you delete the resources that you created. You can then adjust the steps to enable custom networking for a production cluster.
To delete the tutorial resources
If the node group that you created was just for testing, then delete it.
aws eks delete-nodegroup --cluster-name $cluster_name --nodegroup-name my-nodegroup
Even after the AWS CLI output says that the node group is deleted, the delete process might not actually be complete. The delete process takes a few minutes. Confirm that it's complete by running the following command.
aws eks describe-nodegroup --cluster-name $cluster_name --nodegroup-name my-nodegroup --query nodegroup.status --output text
Don't continue until the returned output is similar to the following output.
An error occurred (ResourceNotFoundException) when calling the DescribeNodegroup operation: No node group found for name: my-nodegroup.
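Alternatively, you can use the AWS CLI waiter for node group deletion, which returns once the node group no longer exists.
aws eks wait nodegroup-deleted --cluster-name $cluster_name --nodegroup-name my-nodegroup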
If the node group that you created was just for testing, then delete the node IAM role.
Detach the policies from the role.
aws iam detach-role-policy --role-name myCustomNetworkingAmazonEKSNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam detach-role-policy --role-name myCustomNetworkingAmazonEKSNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
aws iam detach-role-policy --role-name myCustomNetworkingAmazonEKSNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
Delete the role.
aws iam delete-role --role-name myCustomNetworkingAmazonEKSNodeRole
Delete the cluster.
aws eks delete-cluster --name $cluster_name
Confirm the cluster is deleted with the following command.
aws eks describe-cluster --name $cluster_name --query cluster.status --output text
When output similar to the following is returned, the cluster is successfully deleted.
An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: my-cluster.
Delete the cluster IAM role.
Detach the policies from the role.
aws iam detach-role-policy --role-name myCustomNetworkingAmazonEKSClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
Delete the role.
aws iam delete-role --role-name myCustomNetworkingAmazonEKSClusterRole
Delete the subnets that you created in a previous step.
aws ec2 delete-subnet --subnet-id $new_subnet_id_1
aws ec2 delete-subnet --subnet-id $new_subnet_id_2