Symptoms: CNF termination was failing in TCA Manager with an error.
Cause: the Kubernetes certificates on the TKG cluster expired because one year had passed since the cluster was deployed. The certificates were automatically renewed by the TCA Control Plane; however, due to a bug, the cluster connection in the TCA Control Plane was shown as Down. Moreover, the reconnect from TCA was failing with Internal Server Error 500.
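The expiry status of the cluster certificates can be double-checked directly on the cluster. A minimal sketch, assuming SSH access to a TKG control-plane node as the capv user (the node address below is a placeholder) and that kubeadm is available on the node:

# connect to a control-plane node of the affected TKG cluster (address is a placeholder)
ssh capv@<control-plane-node-ip>
# list the expiry dates of the kubeadm-managed Kubernetes certificates
sudo kubeadm certs check-expiration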
Resolution: this is a known issue that has already been resolved in TCA 2.2.
Workaround:
Step 1 – from the TCA Control Plane, as root, execute the following commands.
[root@ITTC1VTCTCP002 ~]# mongo hybridity
MongoDB shell version: 3.2.5
connecting to: hybridity
Server has startup warnings:
2023-01-30T14:37:22.944+0000 I CONTROL [initandlisten]
2023-01-30T14:37:22.944+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2023-01-30T14:37:22.944+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2023-01-30T14:37:22.944+0000 I CONTROL [initandlisten]
> db.ApplianceConfig.find({section:"kubernetes"}).pretty()   // many results will be returned; identify the cluster that must be updated/fixed
{
"_id" : ObjectId("61f3b6624eb1a5a310f44a05"),
"config" : {
"url" : https://10.131.151.67:6443,
"clusterName" : "nokia-ccs-lab1",
"kubeconfig" : <TRUNCATED (base64 encoded kubeconfig file)>,
"UUID" : "e63435a1-92d5-4326-944a-cf9ca4a6055e",
"version" : "1.20",
"kubeSystemUUID" : "a794d00e-b9df-4894-b57c-472130f64e51"
},
"section" : "kubernetes",
"enterprise" : "HybridityAdmin",
"organization" : "HybridityAdmin",
"lastUpdated" : ISODate("2022-07-15T15:34:05.354Z"),
"lastUpdateEnterprise" : "HybridityAdmin",
"lastUpdateOrganization" : "HybridityAdmin",
"lastUpdateUser" : "HybridityAdmin",
"creationDate" : ISODate("2022-01-28T09:24:50.954Z"),
"creationEnterprise" : "HybridityAdmin",
"creationOrganization" : "HybridityAdmin",
"creationUser" : "HybridityAdmin",
"isDeleted" : false
}
> db.ApplianceConfig.update({"_id" : ObjectId("61f3b6624eb1a5a310f44a05")},{$set:{"config.type":"WORKLOAD"}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
>
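If many documents are returned by the find query above, the search can be narrowed by cluster name, and the update can be verified afterwards. An illustrative continuation of the same mongo session, reusing the cluster name and ObjectId shown above:

> db.ApplianceConfig.find({section:"kubernetes", "config.clusterName":"nokia-ccs-lab1"}).pretty()   // narrow the search to a single cluster
> db.ApplianceConfig.find({"_id" : ObjectId("61f3b6624eb1a5a310f44a05")}, {"config.type":1}).pretty()   // confirm that config.type is now "WORKLOAD"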
Step 2 – from the TCA CP Appliance Manager GUI, reconnect the TKG cluster by editing it and pasting in the kubeconfig, which can be found on the cluster itself at /home/capv/.kube/config.
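If the kubeconfig is not already at hand, it can be retrieved from the cluster before editing the connection in the GUI. A minimal sketch, assuming SSH access to a control-plane node as the capv user (the node address below is a placeholder):

# print the kubeconfig stored on the cluster node, then copy and paste it into the Appliance Manager GUI
ssh capv@<control-plane-node-ip> 'cat /home/capv/.kube/config'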
Step 3 – Save and reboot the Appliance.