
Late-Binding Memory Problem in TCA 2.1

Issue: During customization, the VMConfig plug-in tries to fit the virtual machines onto the correct NUMA nodes of the ESXi host based on their SR-IOV, Passthrough, and CPU pinning specifications.


Workaround:

To disable the virtual machine placement operation by the plug-in and let vSphere DRS place the virtual machines instead, configure the following settings on the node pool before instantiating the network function. The documented workaround is as follows:

  • Connect to the TKG management cluster using SSH.

  • Edit the VMConfigSet of the selected node pool: kubectl edit vmconfigset -n test test-np1

  • Under spec.vmConfig, add preferHosts: [] and save. (A non-interactive alternative is sketched right after this list.)
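
If you prefer not to open an interactive editor, the same change can be applied with a one-line merge patch. This is only a sketch, assuming the namespace test and the VMConfigSet name test-np1 from the example below; substitute your own namespace and node pool name:

  kubectl patch vmconfigset test-np1 -n test --type merge -p '{"spec":{"vmConfig":{"preferHosts":[]}}}'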

Example before change:

CCLI@tph-ci-vmw-caas-mgmt1>>kubectl get vmconfigset -n test test-np1 -o yaml

apiVersion: acm.vmware.com/v1alpha1
kind: VMConfigSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"acm.vmware.com/v1alpha1","kind":"VMConfigSet","metadata":{"annotations":{},"name":"test-np1","namespace":"test"},"spec":{"nodeLabels":{"telco.vmware.com/nodepool":"test-np1","type":"h"},"vmConfig":{"extraConfig":{"sched.cpu.latencySensitivity":"high"},"preferHosts":null,"revision":"1643191324469087192"}}}
  creationTimestamp: "2022-01-24T14:20:12Z"
  generation: 3
  managedFields:
  - apiVersion: acm.vmware.com/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:nodeLabels:
          .: {}
          f:telco.vmware.com/nodepool: {}
          f:type: {}
        f:vmConfig:
          .: {}
          f:extraConfig:
            .: {}
            f:sched.cpu.latencySensitivity: {}
          f:revision: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-01-24T15:41:32Z"
  name: test-np1
  namespace: test
  resourceVersion: "82684707"
  uid: d821db3b-e062-48d9-9cb5-e61195ec63d6
spec:
  nodeLabels:
    telco.vmware.com/nodepool: test-np1
    type: h
  vmConfig:
    extraConfig:
      sched.cpu.latencySensitivity: high
    revision: "1643191324469087192"


Example after change:

CCLI@tph-ci-vmw-caas-mgmt1>>kubectl get vmconfigset -n test test-np1 -o yaml

apiVersion: acm.vmware.com/v1alpha1
kind: VMConfigSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"acm.vmware.com/v1alpha1","kind":"VMConfigSet","metadata":{"annotations":{},"name":"test-np1","namespace":"test"},"spec":{"nodeLabels":{"telco.vmware.com/nodepool":"test-np1","type":"h"},"vmConfig":{"extraConfig":{"sched.cpu.latencySensitivity":"high"},"preferHosts":null,"revision":"1643191324469087192"}}}
  creationTimestamp: "2022-01-24T14:20:12Z"
  generation: 3
  managedFields:
  - apiVersion: acm.vmware.com/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:nodeLabels:
          .: {}
          f:telco.vmware.com/nodepool: {}
          f:type: {}
        f:vmConfig:
          .: {}
          f:extraConfig:
            .: {}
            f:sched.cpu.latencySensitivity: {}
          f:revision: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-01-24T15:41:32Z"
  name: test-np1
  namespace: test
  resourceVersion: "82684707"
  uid: d821db3b-e062-48d9-9cb5-e61195ec63d6
spec:
  nodeLabels:
    telco.vmware.com/nodepool: test-np1
    type: h
  vmConfig:
    extraConfig:
      sched.cpu.latencySensitivity: high
    preferHosts: []
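
To confirm that the field is set without scanning the full YAML, you can read back just the preferHosts value with kubectl's JSONPath output. This is a sketch assuming the same namespace and resource name as above; an empty list prints as []:

  kubectl get vmconfigset -n test test-np1 -o jsonpath='{.spec.vmConfig.preferHosts}'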
