TCA 2.3 automatically handles the regeneration of the Kubernetes cluster certificates.
Checking if Cluster Certificates are valid:
Run the following commands on the Kubernetes cluster Control Plane node to check the status and expiration dates of the cluster certificates:
SSH to the Control Plane node of the CaaS Cluster and switch to the sudo user:
ssh capv@K8S-CONTROL-PLANE-IP
sudo -i
Check if certificates are expired:
kubeadm certs check-expiration
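If kubeadm is unavailable, the same information can be read directly from the certificate files with openssl. The helper below is only a sketch; the path /etc/kubernetes/pki/apiserver.crt in the usage comment is the default kubeadm location and may differ in your deployment.

```shell
#!/bin/sh
# Print the subject and the number of days remaining until a PEM certificate expires.
cert_days_left() {
    cert_file="$1"
    openssl x509 -noout -subject -in "$cert_file"
    # Pull the notAfter date out of the certificate and convert it to an epoch timestamp.
    end_date=$(openssl x509 -noout -enddate -in "$cert_file" | cut -d= -f2)
    end_epoch=$(date -d "$end_date" +%s)
    now_epoch=$(date +%s)
    echo "days left: $(( (end_epoch - now_epoch) / 86400 ))"
}

# Example (run on the Control Plane node; default kubeadm certificate path):
# cert_days_left /etc/kubernetes/pki/apiserver.crt
```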
Another verification point is to log in to each Telco Cloud Automation Control Plane (TCA-CP) Appliance Management UI and review the status of the Kubernetes clusters registered within TCA-CP:
A green dot means that communication is healthy and there is no certificate-related issue. A red dot means that communication is broken, possibly due to a certificate-related issue.
Updating the Cluster certificate within CaaS
TCA 2.3 introduces automatic renewal of the cluster certificates, provided the CaaS clusters have been upgraded to a supported Tanzu Kubernetes Grid (TKG) version: 1.24.10, 1.23.16, or 1.22.17.
Clusters upgraded to TKG 1.24.10, 1.23.16 or 1.22.17:
TCA 2.3 will automatically handle the regeneration of the Kubernetes cluster certificates. However, these clusters will still need their kubeconfig updated. Please proceed to the Updating the references of new Cluster Certificates within TCA-M and TCA-CP section.
NOTE: The following steps are applicable only for TKG clusters that were deployed via older versions of TCA and have NOT been upgraded in TCA 2.3.
Renewing the Workload Cluster Certificate
SSH into the TCA-CP where the corresponding management cluster is deployed as the admin user and switch to the sudo user:
ssh admin@mgmt_cluster_tca_cp_fqdn
su -
Note: Replace mgmt_cluster_tca_cp_fqdn with the actual value in the command provided.
Download the certificate renewal tool file (cluster-cert-renew.tar.gz) to the TCA-CP:
curl -kfsSL https://vmwaresaas.jfrog.io/artifactory/generic-registry/kb/20230413/cluster-cert-renew.tar.gz --output cluster-cert-renew.tar.gz
Note: For airgap environments, download the file on a machine with internet access and manually SCP it over to the TCA-CP.
Untar the cluster-cert-renew.tar.gz tar ball:
tar -zxvf cluster-cert-renew.tar.gz
Renew the workload cluster certificate:
cd /home/admin/cluster-cert-renew
bash cert-renew -wc workload-cluster-name -mc mgmt-cluster-name -t workload
Note: Replace workload-cluster-name and mgmt-cluster-name with the actual values in the command provided.
Verify the new workload cluster certificate has been stored on management cluster:
kubectl config use-context mgmt-cluster-name-admin@mgmt-cluster-name
kubectl get secret workloadcluster-name-kubeconfig -n workloadcluster-name -ojsonpath='{.data.value}' | base64 -d
Note: Replace mgmt-cluster-name and workloadcluster-name with the actual values in the commands provided.
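To confirm that the certificate embedded in the stored kubeconfig is actually new, its client certificate can be decoded and its expiry date printed. This is a sketch: the helper function name is illustrative, and it assumes the kubeconfig carries an inline client-certificate-data field (the default for these kubeconfigs).

```shell
#!/bin/sh
# Print the expiry date of the client certificate embedded in a kubeconfig file.
kubeconfig_cert_expiry() {
    kubeconfig="$1"
    # Pull the base64-encoded client certificate out of the kubeconfig,
    # decode it, and print its notAfter date.
    sed -n 's/^ *client-certificate-data: //p' "$kubeconfig" | head -n 1 |
        base64 -d | openssl x509 -noout -enddate
}

# Usage on the management cluster (cluster names are placeholders, as in the note above):
# kubectl get secret workloadcluster-name-kubeconfig -n workloadcluster-name \
#   -ojsonpath='{.data.value}' | base64 -d > /tmp/wc.kubeconfig
# kubeconfig_cert_expiry /tmp/wc.kubeconfig
```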
Renew the Management Cluster Certificates
SSH into the TCA-CP and switch to the sudo user:
ssh admin@tca_cp
su -
Note: Replace tca_cp with the IP of the TCA-CP where the management cluster is configured in the command provided.
Download the cluster-cert-renew scripts tar (cluster-cert-renew.tar.gz) to the TCA-CP, as described in the workload cluster procedure above.
Untar the cluster-cert-renew.tar.gz tar ball and change to the cluster-cert-renew directory:
tar -zxvf cluster-cert-renew.tar.gz
cd /home/admin/cluster-cert-renew
Obtain the Control Plane node IP:
Note: This control-plane-node-ip is different from the static cluster kube-vip IP.
SSH to the management cluster:
ssh capv@mgmt-kube-vip
Note: Replace mgmt-kube-vip with the actual value in the command provided.
Run the kubectl get nodes command:
kubectl get nodes -owide | grep control-plane | awk '{print $6}' | head -n 1
Renew the management cluster certificate.
bash cert-renew -mc mgmt-cluster-name -t management -ip control-plane-node-ip
Note: Replace control-plane-node-ip with the control-plane-node-ip from the previous step.
Note: This can take several minutes to complete.
Synchronize the kubeconfig for the TCA-Manager (TCA-M) and TCA-CP
Note: All clusters (upgraded and non-upgraded) require their kubeconfig to be synchronized.
POST the following API call, from any machine that has access to the TCA-M web layer, to generate an authentication token:
curl -D - --location --insecure --request POST 'https://tca-m-url/hybridity/api/sessions' --header 'Accept: application/json' --header 'Content-Type: text/plain' --data-raw '{"username": "username","password": "plain_text_password"}'
Note: Replace tca-m-url, username, and plain_text_password with the actual values in the command provided.
Take note of the x-hm-authorization header value from the output of the previous step:
Sample: 95XXXXX4:dXX2:4XX3:bXX2:7XXXXXXXXXX5
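The token can also be captured into a shell variable by filtering the response headers. The helper function below is a sketch; only the x-hm-authorization header name comes from the steps above, and the URL and credentials in the usage comment are placeholders.

```shell
#!/bin/sh
# Extract the x-hm-authorization token value from HTTP response headers on stdin.
extract_token() {
    # Match the header name case-insensitively and strip any trailing carriage return.
    awk 'tolower($1) == "x-hm-authorization:" {gsub(/\r/, "", $2); print $2}'
}

# Usage against a live TCA-M (tca-m-url, username, plain_text_password are placeholders):
# TOKEN=$(curl -D - -s --insecure --request POST 'https://tca-m-url/hybridity/api/sessions' \
#   --header 'Accept: application/json' --header 'Content-Type: text/plain' \
#   --data-raw '{"username": "username","password": "plain_text_password"}' \
#   -o /dev/null | extract_token)
```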
Update the TCA-M and TCA-CP database by synchronizing the kubeconfig:
curl --location --insecure --request POST 'https://tca-m-fqdn/telco/api/caas/v2/clusters/cluster_name/syncKubeconfig' --header 'Accept: application/json' --header 'Content-Type: application/json' --header 'x-hm-authorization: auth-token'
Note: Replace tca-m-fqdn, cluster_name, and auth-token with the actual values in the command provided.
Note: The operation can take several minutes.
To verify that the operation succeeded, run the following API call:
curl --location --insecure --request GET 'https://tca-m-fqdn/hybridity/api/jobs/job_id_from_above_response' --header 'Accept: application/json' --header 'x-hm-authorization: auth-token'
Note: Replace tca-m-fqdn and auth-token with the actual values in the command provided.
Note: Take note of the isDone and didFail flags. The isDone flag should return true and the didFail flag should return false.
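The isDone/didFail check can be scripted when polling the job. The helper below is a sketch that only greps the raw JSON response for the two flags named above rather than parsing it properly; use a JSON parser such as jq if one is available.

```shell
#!/bin/sh
# Return success (exit 0) if the job JSON reports isDone=true and didFail=false.
job_succeeded() {
    json="$1"
    echo "$json" | grep -Eq '"isDone"[[:space:]]*:[[:space:]]*true' &&
        ! echo "$json" | grep -Eq '"didFail"[[:space:]]*:[[:space:]]*true'
}

# Usage (tca-m-fqdn, job_id and auth-token are placeholders):
# resp=$(curl -s --insecure --request GET "https://tca-m-fqdn/hybridity/api/jobs/job_id" \
#   --header 'Accept: application/json' --header 'x-hm-authorization: auth-token')
# if job_succeeded "$resp"; then echo "sync completed"; else echo "still running or failed"; fi
```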
SSH into the TCA-CP and switch to the sudo user to restart the services:
ssh admin@tca-cp
su -
Note: Replace tca-cp with the TCA-CP where the cluster is configured.
Restart the following TCA-CP services:
systemctl restart app-engine
systemctl restart web-engine