Introduction
This post is part 2 of the series on creating and configuring the Avi controller on AWS and integrating it with an EKS cluster to serve deployed applications via L4 and L7 services.
The step-by-step process to install and configure the Avi controller and its SE group is discussed in the previous post ➡️ Avi on AWS: Comprehensive Installation Guide.
In this post, we continue the journey and set up AKO on the EKS cluster to connect with the Avi controller and serve the deployed apps. The Avi Kubernetes Operator (AKO) works as an ingress controller and performs Avi-specific functions in a Kubernetes environment with an accessible Avi Controller. AKO stays in sync with the necessary Kubernetes objects and calls Avi Controller APIs to configure the virtual services.
Pre-requisites
It's assumed that the Avi controller is installed and configured to connect with the Kubernetes cluster. We can create an AWS EKS cluster to configure the ingress setup.
Accessing EKS from workstation
There are many ways to install EKS, and we assume the operator follows one of them to build the EKS cluster; a minimal eksctl sketch is shown below. The cluster can then be accessed from a local workstation using the eksctl or aws CLI tooling.
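For illustration, a minimal eksctl sketch (the cluster name, region, and node sizing below are placeholder assumptions; adjust to your environment):

```bash
# Create a basic EKS cluster (placeholder name/region/sizing)
eksctl create cluster \
  --name demo-eks \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.medium
```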
We can fetch the kubeconfig for the EKS cluster to configure kubectl on the local workstation, as shown below:
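Either of the following commands writes the kubeconfig entry (the cluster name and region are placeholders):

```bash
# Using the AWS CLI
aws eks update-kubeconfig --name demo-eks --region us-east-1

# Or using eksctl
eksctl utils write-kubeconfig --cluster demo-eks --region us-east-1

# Verify access from the workstation
kubectl get nodes
```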
Route53 hosted zone
We need a Route53 hosted zone with a domain to support the ingress and load balancer(s) created for apps deployed in EKS. We already set up the Avi Cloud and the corresponding Service Engine Group with the DNS connection type as Route53 in the previous post on configuring Avi.
Install AKO in EKS
Avi Kubernetes Operator (AKO) can be installed on any Kubernetes cluster, including clusters from popular clouds (e.g., AWS, Azure, GCP).
AKO can be installed on Kubernetes with the Helm package manager. The latest version of AKO can be found on the Avi Vantage site or at VMware docs.
Install AKO
Create the avi-system namespace
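AKO is expected to run in the avi-system namespace:

```bash
kubectl create namespace avi-system
```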
Access the helm package for AKO
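A sketch of adding the AKO chart repository with Helm; at the time of writing, the chart was published on VMware's public registry, but verify the current repo URL in the AKO docs:

```bash
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update
helm search repo ako --versions   # list available AKO chart versions
```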
In the next step, we fetch the values.yaml for the AKO Helm package in order to modify the template and prepare the AKO install for the configured Kubernetes environment.
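For example (the chart version below is a placeholder; pick the version compatible with your controller):

```bash
helm show values ako/ako --version 1.9.3 > values.yaml
```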
We have to configure AKO (via values.yaml) for the Avi controller with the following parameters; a sample values.yaml snippet follows the list below.
- ControllerSettings.controllerVersion (Installed Avi controller version)
- ControllerSettings.controllerHost (IP address or Hostname of Avi Controller)
- ControllerSettings.cloudName (The configured cloud name on Avi controller)
- ControllerSettings.serviceEngineGroupName (Name of ServiceEngine Group)
- AKOSettings.clusterName (A unique identifier for kubernetes cluster)
- avicredentials.username
- avicredentials.password
- avicredentials.certificateAuthorityData
- NetworkSettings.nodeNetworkList (List of networks and corresponding CIDR mappings for the K8s nodes. It's optional in NodePort mode, when static routes are disabled, or for non-vCenter clouds)
- NetworkSettings.vipNetworkList (List of network names or subnet [format: subnet-xxx] information for the VIP network; multiple networks are allowed only for AWS Cloud)
- L4Settings.defaultDomain (Specify a default sub-domain for L4 LB services, as per the Route53 hosted zone configured in the pre-requisites)
- L4Settings.autoFQDN (ENUM: default (<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), disabled; if the value is disabled, FQDN generation is turned off)
- L7Settings.serviceType (NodePort | ClusterIP | NodePortLocal, defaults to ClusterIP)
- AKOSettings.cniPlugin (We can leave this field blank or "" for EKS with the default CNI)
- AKOSettings.layer7Only: false (If this flag is switched on, AKO will only do layer 7 load balancing)
- AKOSettings.disableStaticRouteSync: true (If the pod networks are reachable from the Avi SE, set this knob to true)
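Putting it together, a hedged sample of the relevant values.yaml settings (all values below are placeholders for illustration; substitute your own controller address, credentials, cloud name, SE group, subnet, and domain):

```yaml
AKOSettings:
  clusterName: demo-eks            # unique identifier for this cluster
  cniPlugin: ""                    # default VPC CNI for EKS
  disableStaticRouteSync: "true"   # SE can reach the pod network (see next section)
  layer7Only: false

ControllerSettings:
  controllerVersion: "22.1.3"      # placeholder; match your installed version
  controllerHost: "10.0.1.50"      # placeholder controller IP or hostname
  cloudName: "aws-cloud"           # placeholder; the cloud configured on the controller
  serviceEngineGroupName: "eks-se-group"

NetworkSettings:
  vipNetworkList:
    - networkName: "subnet-0abc123def456"   # placeholder VIP subnet

L4Settings:
  defaultDomain: "kubetest.com"    # Route53 hosted zone from the pre-requisites
  autoFQDN: "default"              # <svc>.<ns>.<subdomain>

L7Settings:
  serviceType: ClusterIP

avicredentials:
  username: "admin"
  password: "<password>"
  certificateAuthorityData: |
    -----BEGIN CERTIFICATE-----
    <raw ca.crt data>
    -----END CERTIFICATE-----
```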
To get the ca.crt for configuring HTTPS access from AKO to the Avi controller, we can access the controller at Templates > Security > SSL/TLS Certificates, fetch the installed ca.crt, and put the raw cert data in values.yaml.
Pod CIDR reachability
The values.yaml for the AKO Helm chart contains a flag called disableStaticRouteSync. If the pod networks are reachable from the Avi SE, this knob should be set to true. In the case of EKS, the pod CIDR is a private network and is not reachable from the subnet network by default, so setting the flag to true throws an error in the AKO pod log, as below:
Thus, to achieve the connectivity, we need to make the EKS node and pod CIDR networks accessible to the Avi SE. We could manually add a rule in the EKS security group allowing access from the Avi SE(s) security group. But that's too much work: every time an SE gets added or changed, its auto-generated security group would need to be re-referenced in the EKS security group.
Thus, the better solution is to configure the EKS security group within the SE Group settings in the Avi Controller. The Avi SE Group config has an option (Data vNIC Custom Security Groups) to associate a custom security group with the Data vNICs of SE instances. We can set the EKS security group there to make the SEs reachable to the EKS network, and then configure the AKO parameter disableStaticRouteSync: true to get direct connectivity between the Avi SE and EKS networks.
AKO Service Type
The default serviceType in the AKO settings is ClusterIP, which enables the Avi SE to reach an app in Kubernetes via its ClusterIP service. As an alternative, we can set the AKO serviceType to NodePort and access apps in Kubernetes via their NodePort services; then we won't need pod CIDR reachability, as the EKS nodes are already reachable from the Avi SE.
We cannot configure NodePortLocal as the AKO serviceType unless we use Antrea as the CNI for EKS. The default VPC CNI for EKS allows configuring ClusterIP or NodePort as the serviceType in AKO.
As the next step, let's install AKO in the EKS cluster.
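A sketch of the install command (the chart version is a placeholder; match it to the values.yaml fetched earlier):

```bash
helm install ako/ako --generate-name \
  --version 1.9.3 \
  -f values.yaml \
  --namespace avi-system
```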
We can verify the installation in Kubernetes:
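For example (the AKO StatefulSet pod is typically named ako-0):

```bash
kubectl get statefulset -n avi-system
kubectl get pods -n avi-system
# Inspect the AKO logs if anything looks off
kubectl logs ako-0 -n avi-system
```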
We can observe that AKO gets installed as a StatefulSet, and we can set replicas to 3 in values.yaml for high availability and better performance.
Once AKO is configured, the operator can access and even change the AKO configs (though not recommended) via the configMap avi-k8s-config created in the avi-system namespace.
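For example, to view the running configuration:

```bash
kubectl get configmap avi-k8s-config -n avi-system -o yaml
```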
Check for provisioned SE
At this point, we can observe that the Service Engine (SE) has been provisioned and is waiting for apps in EKS to create an ingress or LB service, which will be assigned a corresponding Virtual Service in Avi.
The Avi controller manages the lifecycle of the Service Engines (SEs): it provisions the corresponding EC2 instances, and deletes SEs based on the idle-time setting of the SE Group if no Virtual Service exists.
Avi also creates an AWS security group for the SEs and assigns it to the corresponding Service Engine EC2 instances.
Build an app to test ingress with Avi
We can follow the steps below to create a simple nginx-based web app to test the AKO and Avi setup:
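A minimal sketch, assuming a namespace appns and a Deployment/Service named webapp (the names are placeholders consistent with the rest of this post):

```bash
kubectl create namespace appns
kubectl create deployment webapp --image=nginx -n appns
kubectl expose deployment webapp --port=80 -n appns
```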
Ingress resource for app
As we have configured Route53 for DNS in Avi, we can visit Route53 to verify the setup and use the domain when creating the application's ingress resource.
We can create an ingress resource for this application:
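A hedged example manifest; the host sits in the Route53 hosted zone used in this post, and avi-lb is the IngressClass that AKO registers by default (verify the class name in your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: appns
spec:
  ingressClassName: avi-lb        # IngressClass created by AKO
  rules:
    - host: webapp.kubetest.com   # host within the Route53 hosted zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
```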
Describe the generated ingress and observe that the AKO annotations get added; Avi will start provisioning a virtual service for the ingress.
Verify the ingress resource to check its details, along with the IP assigned by Avi, as shown below:
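For example:

```bash
kubectl describe ingress webapp -n appns   # look for AKO annotations and events
kubectl get ingress webapp -n appns        # the ADDRESS column shows the Avi VIP
```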
We can verify in Route53 that Avi has added an A record to the hosted zone for the created ingress (webapp.kubetest.com), exposed on a VIP (10.0.10.193) managed by Avi. The app domain is accessible across the VPC, and externally if we used a public hosted zone instead of a private one.
We can check Avi for the created virtual service assigned to the ingress for the application.
And we can also verify by accessing the application:
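For example, from a host that can resolve the domain and reach the VIP:

```bash
curl -I http://webapp.kubetest.com
```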
We can perform further tests using the ingress resource for webapp, and the analytics can be accessed on the Avi Dashboard.
LoadBalancer service for app
There might be a need to expose an application with a LoadBalancer service and access the app via its external IP. For such use cases, EKS by default provisions an ELB for any LoadBalancer service.
Once we have configured the Avi controller with AKO and enabled AKO to serve L7 and L4 services for EKS, an IP from the VIP range gets assigned to the app's LoadBalancer service. Thus, provisioning the VIP for the LB and managing the connectivity of the LB service is handled by Avi. Another good point is that a DNS subdomain is created and assigned in Route53 to serve the LoadBalancer service, based on the AKO setting (L4Settings.autoFQDN).
To test the setup, we create another app, "testapp", with the nginx image in the appns namespace of the EKS cluster.
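A sketch of the equivalent commands (reusing the appns namespace assumed earlier):

```bash
kubectl create deployment testapp --image=nginx -n appns
kubectl expose deployment testapp --port=80 -n appns
```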
As the next step, we can edit the service to set its type to LoadBalancer and let Avi provision the VIP for the app's LB service.
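For example, patching the existing service in place:

```bash
kubectl patch service testapp -n appns -p '{"spec": {"type": "LoadBalancer"}}'
```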
Once done, we can check that the service gets assigned the Avi-related annotations, as below:
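For example:

```bash
kubectl get service testapp -n appns -o yaml   # inspect the annotations and external IP
```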
We can validate that Avi provisions a virtual service of type L4 and assigns a VIP (10.0.5.200), which gets exposed on the domain (testapp.appns.kubetest.com) for the app's LoadBalancer service.
We can check the AWS Route53 hosted zone getting assigned another A record for the app's LoadBalancer service.
Avi Dashboard showing the service interaction with the server (pod) in EKS.
And we can also verify by accessing the application:
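Again, from a host with access to the VIP:

```bash
curl -I http://testapp.appns.kubetest.com
```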
We can perform further tests using the LoadBalancer service for testapp, and the analytics can be accessed on the Avi Dashboard.
Conclusion
In this post, we have observed the step-by-step process to configure AKO on EKS with the Avi controller, and also performed tests creating L4 and L7 services. I hope these steps help operators configure Avi on AWS. Thank You 🙂