TCA 2.1 What's new - Appliance sizing changes
As the number of features delivered by Telco Cloud Automation grows, the sizing of the appliances changes with it. This post explores exactly what has changed between releases 2.0 and 2.1 from a sizing perspective.
The following table summarizes the sizing requirements for each component in a production deployment of Telco Cloud Automation 2.0. You would choose either a VM based or a cloud native (CNA) based deployment.
Component | vCPU | Memory | Disk Space |
---|---|---|---|
TCA Manager - VM | 4 | 12 GB | 60 GB |
TCA CP - VM | 4 | 12 GB | 60 GB |
Control Plane 3x - CNA | 8 | 16 GB | 50 GB |
Worker Nodes 3x - CNA | 8 | 16 GB | 50 GB |
Bootstrap CP 1x - CNA | 2 | 4 GB | 50 GB |
Bootstrap Worker 1x - CNA | 2 | 4 GB | 50 GB |
In a VM based deployment you require at least one TCA Manager and one TCA CP VM. Depending on the number of Kubernetes clusters to be deployed and the number of vCenter Servers backing the physical hosts, additional TCA CP VMs may be required in a specific design. Beyond CaaS Management, when a VIM such as VMware Cloud Director or VMware Integrated OpenStack is to be integrated, additional TCA CP VMs also need to be accounted for. A future post will explore how a multi-cloud / VIM deployment impacts the design choices, how the current scale limits hold up in deployments requiring thousands of ESXi hosts (as in a larger RAN deployment), and what a possible Telco Cloud Automation architecture could look like.
For a cloud native based deployment the considerations change due to the need to run a highly available control plane (e.g. three etcd nodes) as well as highly available worker nodes. This bumps a minimal production deployment up to at least six virtual machines, onto which the TCA Manager and TCA CP pods are then deployed. If additional TCA CPs are required, as in the examples given above, a new Kubernetes cluster is needed today because the pods are deployed into a predetermined namespace (tca-system).
In addition to the Kubernetes cluster running the TCA workloads, resources also need to be planned and allocated for a temporary bootstrap cluster, which gets deployed whenever lifecycle operations on a management cluster are necessary. This is not required in a VM based deployment, where the temporary cluster is simply spun up via kind (Kubernetes in Docker) on the TCA CP. While these two virtual machines are transient in nature, they should still be accounted for from a capacity planning perspective to ensure proper performance.
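The overall footprint of a minimal cloud native production deployment, including the transient bootstrap cluster, can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below simply totals the per-node figures from the 2.0 sizing table above (the dictionary keys are just illustrative labels, not product names):

```python
# Aggregate footprint of a minimal cloud native TCA 2.0 deployment,
# including the temporary bootstrap cluster.
# Per node: (vCPU, memory in GB, disk in GB), plus node count.
nodes = {
    "control_plane":    (8, 16, 50, 3),  # 3x Kubernetes control plane
    "worker":           (8, 16, 50, 3),  # 3x worker nodes
    "bootstrap_cp":     (2,  4, 50, 1),  # 1x bootstrap control plane
    "bootstrap_worker": (2,  4, 50, 1),  # 1x bootstrap worker
}

total_vcpu = sum(cpu * n for cpu, _, _, n in nodes.values())
total_mem  = sum(mem * n for _, mem, _, n in nodes.values())
total_disk = sum(disk * n for _, _, disk, n in nodes.values())

print(f"{total_vcpu} vCPU, {total_mem} GB RAM, {total_disk} GB disk")
# -> 52 vCPU, 104 GB RAM, 400 GB disk
```

So even though the bootstrap nodes are small and transient, planning for the full eight-node footprint avoids surprises during management cluster lifecycle operations.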
As mentioned before, Telco Cloud Automation 2.1 has slightly different resource requirements because new functionality was added. The table below outlines the new requirements for both the virtual machine based and the cloud native deployment model.
Component | vCPU | Memory | Disk Space |
---|---|---|---|
TCA Manager - VM | 6 | 18 GB | 200 GB |
TCA CP - VM | 6 | 18 GB | 200 GB |
Control Plane 3x - CNA | 2 | 8 GB | 50 GB |
Worker Nodes 3x - CNA | 8 | 16 GB | 50 GB |
Bootstrap CP 1x - CNA | 2 | 4 GB | 50 GB |
Bootstrap Worker 1x - CNA | 2 | 4 GB | 50 GB |
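Comparing the two tables component by component makes the changes easier to see. The sketch below computes the per-component delta from 2.0 to 2.1 using the figures published above (component keys are illustrative labels only):

```python
# Per-component sizing as (vCPU, memory GB, disk GB), taken from the
# 2.0 and 2.1 tables in this post.
sizing_20 = {
    "tca_manager_vm": (4, 12, 60),
    "tca_cp_vm":      (4, 12, 60),
    "cna_control":    (8, 16, 50),
    "cna_worker":     (8, 16, 50),
}
sizing_21 = {
    "tca_manager_vm": (6, 18, 200),
    "tca_cp_vm":      (6, 18, 200),
    "cna_control":    (2, 8, 50),
    "cna_worker":     (8, 16, 50),
}

# Delta = 2.1 requirement minus 2.0 requirement, per component.
deltas = {
    comp: tuple(new - old for new, old in zip(sizing_21[comp], sizing_20[comp]))
    for comp in sizing_20
}

for comp, delta in deltas.items():
    print(comp, delta)
# tca_manager_vm (2, 6, 140)  <- disk grows the most
# tca_cp_vm      (2, 6, 140)
# cna_control    (-6, -8, 0)  <- control plane nodes actually shrink
# cna_worker     (0, 0, 0)
```

The deltas show the VM appliances growing mainly in disk (60 GB to 200 GB), while the cloud native control plane nodes are now sized smaller than in 2.0.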
As can be seen, the largest increase is in disk space for the virtual machine based deployment, along with some additional memory and CPU requirements; the cloud native control plane nodes, by contrast, now require fewer resources than in 2.0. When upgrading a virtual machine based deployment from an older release of Telco Cloud Automation, you cannot simply grow the existing appliances: you will have to perform a backup and then restore into a freshly deployed pair of appliances, as expanding the partitions of an existing deployment is not supported today.