A Platform Engineer has been tasked with building a custom image for the deployment of NKP management and worker nodes. The engineer needs to ensure that the proper package versions are used when creating these images. The security team has only authorized version 1.30.5 of Kubernetes and version 1.7.22 of containerd. Where should the engineer go to verify that these are the versions being used when building the custom image?
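For context (not asserted as the exam answer), a hedged sketch of where such version pins typically live when building images with Konvoy Image Builder. The override file name, the variable keys, and the exact build invocation below are assumptions for illustration, to be checked against the image builder documentation:
# Illustrative only: pin the authorized versions in an overrides file and pass
# it to the image build, rather than relying on the builder's built-in defaults.
cat <<'EOF' > versions-override.yaml
kubernetes_version: "1.30.5"
containerd_version: "1.7.22"
EOF
konvoy-image build <provider> <image-definition.yaml> --overrides versions-override.yaml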
A Kubernetes administrator needs to deploy a new Kubernetes cluster into a new workspace. This cluster requires a predictive analytics solution that detects current and future anomalies. Which option does the administrator need to deploy after the cluster is ready?
A company was using a test application called temp-shop, developed in the temp-ecommerce NKP Starter cluster. The cluster is now just taking up valuable resources that could be used for other projects, so the development team has decided to remove it.
Before proceeding, they verified that they had the cluster configuration file temp-ecommerce.conf.
What command should the development team execute to delete the cluster with its nodes and application?
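For illustration only (not asserted as the exam answer), a hedged sketch of the general shape of an NKP cluster deletion that uses the saved configuration file; confirm the exact flags with nkp delete cluster --help for the NKP version in use:
# Hedged sketch: delete the Starter cluster, its nodes, and the workloads on it,
# pointing the CLI at the saved cluster configuration file from the scenario.
nkp delete cluster \
--cluster-name=temp-ecommerce \
--kubeconfig=temp-ecommerce.conf \
--self-managed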
A Platform Engineer needs to create an NKP custom image for vSphere.
Which option should the engineer use?
A Platform Engineer is getting started with NKP and has created a bastion host with all needed prerequisites.
How should the engineer install Kommander?
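As background, a hedged sketch of a common Kommander installation flow, generating an installer configuration first and then installing against the cluster's kubeconfig; flag names should be confirmed against the NKP documentation for the version in use:
# Hedged sketch: generate a default installer configuration, then install
# Kommander on the target cluster using that configuration.
nkp install kommander --init > kommander.yaml
nkp install kommander --installer-config kommander.yaml --kubeconfig ${CLUSTER_NAME}.conf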
A Platform Engineer works for an organization that does research in Antarctica. The engineer is preparing a bastion host for deploying NKP while the infrastructure is isolated. Which programs should the engineer ensure are installed on the bastion host before shipping the infrastructure?
An administrator has been trying to deploy an initial AHV-based NKP cluster in a dark site (no Internet connectivity) environment using the command shown below:
nkp create cluster nutanix \
--cluster-name=$CLUSTER_NAME \
--control-plane-prism-element-cluster=$PE_NAME \
--worker-prism-element-cluster=$PE_NAME \
--control-plane-subnets=$SUBNET_ASSOCIATED_WITH_PE \
--worker-subnets=$SUBNET_ASSOCIATED_WITH_PE \
--control-plane-endpoint-ip=$AVAILABLE_IP_FROM_SAME_SUBNET \
--csi-storage-container=$NAME_OF_YOUR_STORAGE_CONTAINER \
--endpoint=$PC_ENDPOINT_URL \
--control-plane-vm-image=$NAME_OF_OS_IMAGE_CREATED_BY_NKP_CLI \
--worker-vm-image=$NAME_OF_OS_IMAGE_CREATED_BY_NKP_CLI \
--registry-url=${REGISTRY_URL} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubernetes-service-load-balancer-ip-range $START_IP-$END_IP \
--self-managed
Which missing attribute needs to be added in order for the deployment to succeed?
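One hedged way to narrow down which air-gap and registry related attributes the CLI actually supports for this provider (the available flags vary by NKP version):
# List the provider's supported flags and filter for air-gap related options.
nkp create cluster nutanix --help | grep -i -E 'registry|airgap|bundle'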
A Platform Engineer would like to install some NKP applications, but with a few modifications to the default configuration specs of some of the components. Additionally, Velero itself can be disabled, as the company already utilizes a different backup utility for Kubernetes.
Which procedure would the engineer utilize to accomplish these goals when deploying the applications?
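As background, a hedged sketch of the flow this scenario points at: generate the default installer configuration, edit the application list and configuration overrides, then install from the edited file. The exact structure of the file (for example an apps: section containing a velero entry) is an assumption to verify against the NKP documentation:
# Hedged sketch: customize application configuration before installation.
nkp install kommander --init > kommander.yaml
# Edit kommander.yaml: adjust component configuration overrides and remove or
# disable the velero entry so the existing backup utility is used instead.
nkp install kommander --installer-config kommander.yaml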
In an effort to control cloud cost consumption, autoscaling is configured to meet demand as needed.
What is the behavior when nodes are scaled down?
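For context, a hedged illustration of how node-pool scaling bounds are commonly expressed with the Cluster API cluster-autoscaler (the annotation names come from upstream Cluster API and may differ in a given NKP release); scale-down then removes underutilized nodes only down to the configured minimum:
# Set the autoscaler's minimum and maximum node counts on a node pool's
# MachineDeployment; NODEPOOL_NAME is a placeholder for this illustration.
kubectl annotate machinedeployment ${NODEPOOL_NAME} \
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size="2" \
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size="6"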
Prior to implementing NKP, a company had created a number of Kubernetes (K8s) clusters using kubeadm. While they are deploying new managed clusters via NKP, the company does not wish to migrate workloads from these pre-existing native K8s clusters over to new NKP clusters just yet.
What are the requirements to have these clusters attached to their NKP management cluster?
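As background, a hedged sketch of the manual attachment flow for an existing cluster: its kubeconfig is stored as a Secret in the target workspace namespace, and a KommanderCluster resource references that Secret. The apiVersion, field names, and file names below are assumptions to check against the NKP documentation:
# existing-cluster.conf is a placeholder for the pre-existing kubeadm cluster's
# kubeconfig; the Secret is created in the workspace namespace that should own
# the attached cluster.
kubectl create secret generic existing-cluster-kubeconfig \
--from-file=kubeconfig=existing-cluster.conf \
-n ${WORKSPACE_NAMESPACE}
cat <<'EOF' | kubectl apply -n ${WORKSPACE_NAMESPACE} -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: existing-kubeadm-cluster
spec:
  kubeconfigRef:
    name: existing-cluster-kubeconfig
EOF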