The best option for selecting a machine type that optimizes CPU utilization in the fewest steps is to use the Recommender API and apply the suggested recommendations. The Recommender API provides recommendations for optimizing your Google Cloud resources, such as Compute Engine instances, disks, and firewall rules. You can use it to get recommendations for changing the machine type of your Compute Engine instances based on historical system metrics, such as CPU utilization, and you can apply the suggested recommendations through the Recommender API or the Google Cloud console. This way, you can move your application to the most suitable machine type with minimal effort.
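As a hedged sketch of the workflow described above, the machine-type recommendations can be listed and applied with gcloud. `PROJECT_ID`, `ZONE`, `VM_NAME`, and the `e2-standard-4` target type are placeholders, not values from the question; these commands require an authenticated gcloud session with access to the project.

```shell
# List machine-type recommendations produced from historical CPU utilization.
gcloud recommender recommendations list \
    --project=PROJECT_ID \
    --location=ZONE \
    --recommender=google.compute.instance.MachineTypeRecommender

# Applying a recommendation typically means stopping the VM and switching it
# to the suggested machine type, then starting it again.
gcloud compute instances stop VM_NAME --zone=ZONE
gcloud compute instances set-machine-type VM_NAME \
    --zone=ZONE --machine-type=e2-standard-4   # type suggested by the recommendation
gcloud compute instances start VM_NAME --zone=ZONE
```

The same recommendations also surface in the Google Cloud console on the VM instances page, where they can be applied with one click.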
Your CTO has asked you to implement a postmortem policy on every incident for internal use. You want to define what a good postmortem is to ensure that the policy is successful at your company. What should you do?
Choose 2 answers
A. Ensure that all postmortems include what caused the incident, identify the person or team responsible for causing the incident, and how to prevent a future occurrence of the incident.
B. Ensure that all postmortems include what caused the incident, how the incident could have been worse, and how to prevent a future occurrence of the incident.
C. Ensure that all postmortems include the severity of the incident, how to prevent a future occurrence of the incident, and what caused the incident without naming internal system components.
D. Ensure that all postmortems include how the incident was resolved and what caused the incident without naming customer information.
E. Ensure that all postmortems include all incident participants in postmortem authoring and share postmortems as widely as possible.
Answer: BE
The correct answers are B and E.
A good postmortem should include what caused the incident, how the incident could have been worse, and how to prevent a future occurrence of the incident. This helps to identify the root cause of the problem, the impact of the incident, and the actions to take to mitigate or eliminate the risk of recurrence.
A good postmortem should also include all incident participants in postmortem authoring and share postmortems as widely as possible. This helps to foster a culture of learning and collaboration, as well as to increase the visibility and accountability of the incident response process.
Answer A is incorrect because it assigns blame to a person or team, which goes against the principle of blameless postmortems. Blameless postmortems focus on finding solutions rather than pointing fingers, and encourage honest and constructive feedback without fear of punishment.
Answer C is incorrect because it omits how the incident could have been worse, which is an important factor to consider when evaluating the severity and impact of the incident. It also avoids naming internal system components, which makes it harder to understand the technical details and root cause of the problem.
Answer D is incorrect because it omits how to prevent a future occurrence of the incident, which is the main goal of a postmortem. It also avoids naming customer information, which may be relevant for understanding the impact and scope of the incident.
Your organization uses Jenkins running on Google Cloud VM instances for CI/CD. You need to extend the functionality to use infrastructure as code automation by using Terraform. You must ensure that the Terraform Jenkins instance is authorized to create Google Cloud resources. You want to follow Google-recommended practices. What should you do?
A. Add the auth application-default command as a step in Jenkins before running the Terraform commands.
B. Create a dedicated service account for the Terraform instance. Download and copy the secret key value to the GOOGLE environment variable on the Jenkins server.
C. Confirm that the Jenkins VM instance has an attached service account with the appropriate Identity and Access Management (IAM) permissions.
D. Use the Terraform module so that Secret Manager can retrieve credentials.
Answer: C
The correct answer is C.
Confirming that the Jenkins VM instance has an attached service account with the appropriate Identity and Access Management (IAM) permissions is the best way to ensure that the Terraform Jenkins instance is authorized to create Google Cloud resources. This follows the Google-recommended practice of using service accounts to authenticate and authorize applications running on Google Cloud. When a service account is attached to a VM, applications on that VM obtain access tokens for Google Cloud APIs from the metadata server without managing any key files. By attaching a service account to the Jenkins VM instance, Terraform can use the Application Default Credentials (ADC) strategy to automatically find and use the service account credentials.
Answer A is incorrect because the gcloud auth application-default command obtains user credentials, not service account credentials. User credentials are not recommended for applications running on Google Cloud, as they are less secure and less scalable than service account credentials.
Answer B is incorrect because it involves downloading and copying the secret key value of the service account, which is neither a secure nor a reliable way of managing credentials. The key should be kept private and not exposed to any other system or user. Moreover, setting a GOOGLE environment variable on the Jenkins server is not a valid way of providing credentials to Terraform. Terraform expects the credentials either in a file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable, or in a provider block with the credentials argument.
Answer D is incorrect because it involves using the Terraform module for Secret Manager, which is a service that stores and manages sensitive data such as API keys, passwords, and certificates. While Secret Manager can be used to store and retrieve credentials, it is not necessary or sufficient for authorizing the Terraform Jenkins instance. The Terraform Jenkins instance still needs a service account with the appropriate IAM permissions to access Secret Manager and other Google Cloud resources.
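The verification and IAM setup described above can be sketched with gcloud. The instance name `jenkins-ci`, the zone, and the `roles/compute.admin` role are hypothetical examples, not values from the question, and the commands require an authenticated gcloud session.

```shell
# Confirm which service account is attached to the Jenkins VM. Terraform
# running on this VM will pick up this account automatically via ADC.
gcloud compute instances describe jenkins-ci \
    --zone=us-central1-a \
    --format="value(serviceAccounts[].email)"

# Grant the attached service account the IAM role it needs to create
# resources (role shown is only an example; grant the least privilege
# your Terraform configuration actually requires).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" \
    --role="roles/compute.admin"
```

With this in place, the Terraform google provider block needs no credentials argument at all: ADC finds the attached service account through the metadata server, so no key files ever touch the Jenkins server.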
You are analyzing Java applications in production. All applications have Cloud Profiler and Cloud Trace installed and configured by default. You want to determine which applications need performance tuning. What should you do?
Choose 2 answers
A. Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the CPU resource allocation.
B. Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the memory resource allocation.
C. Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the local disk storage allocation.
D. Examine the latency time, the wall-clock time, and the CPU time of the application. If the latency time is slowly burning down the error budget, and the difference between wall-clock time and CPU time is minimal, mark the application for optimization.
E. Examine the heap usage of the application. If the usage is low, mark the application for optimization.
Answer: AD
The correct answers are A and D.
Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the CPU resource allocation. A large gap between wall-clock time and CPU time means the application spends much of its time waiting rather than computing; when that waiting is caused by CPU contention or throttling, increasing the CPU resource allocation improves performance.
Examine the latency time, the wall-clock time, and the CPU time of the application. If the latency time is slowly burning down the error budget, and the difference between wall-clock time and CPU time is minimal, mark the application for optimization. A minimal difference means the application is CPU-bound: nearly all of its wall-clock time is spent on actual computation, so adding resources will not help much, and the code itself should be optimized to reduce the work it performs.
Answer B is incorrect because increasing the memory resource allocation will not help if the application is CPU-bound or I/O-bound. Memory allocation affects how much data the application can store and access in memory, but it does not affect how fast the application can process that data.
Answer C is incorrect because increasing the local disk storage allocation will not help if the application is CPU-bound or I/O-bound. Disk storage affects how much data the application can store and access on disk, but it does not affect how fast the application can process that data.
Answer E is incorrect because examining the heap usage of the application will not help to determine if the application needs performance tuning. Heap usage affects how much memory the application allocates for dynamic objects, but it does not affect how fast the application can process those objects. Moreover, low heap usage does not necessarily mean that the application is inefficient or unoptimized.
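The wall-clock versus CPU-time distinction above can be seen without any GCP tooling; as an illustrative sketch, the shell's `time` keyword reports the same split that Cloud Profiler surfaces. Here `sleep` stands in for an I/O-bound step and a busy loop stands in for a CPU-bound one; both commands are plain bash.

```shell
# I/O-like waiting: real (wall-clock) time grows, but user/sys (CPU) time
# stays near zero -- the "substantial difference" from answer A.
time sleep 2

# CPU-bound work: real and user time track each other closely -- the
# "minimal difference" from answer D, where code optimization is the fix.
time bash -c 'i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'
```

If wall-clock time greatly exceeds CPU time, look at what the process is waiting on before buying more CPU; if the two are nearly equal, profile the code paths that burn the cycles.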
You deployed an application into a large Standard Google Kubernetes Engine (GKE) cluster. The application is stateless, and multiple pods run at the same time. Your application receives inconsistent traffic. You need to ensure that the user experience remains consistent regardless of changes in traffic, and that the resource usage of the cluster is optimized.
What should you do?
A. Configure a cron job to scale the deployment on a schedule.
B. Configure a Horizontal Pod Autoscaler.
C. Configure a Vertical Pod Autoscaler.
D. Configure cluster autoscaling on the node pool.
Answer: B
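A Horizontal Pod Autoscaler adds or removes pod replicas as load changes, which suits a stateless application with inconsistent traffic. As a minimal sketch, it can be configured with kubectl; the deployment name `web` and the thresholds are hypothetical, and the commands require kubectl access to the cluster.

```shell
# Create an HPA that keeps average CPU utilization around 60%, scaling the
# "web" deployment between 2 and 20 replicas as traffic rises and falls.
kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=20

# Inspect the current target, utilization, and replica count.
kubectl get hpa web
```

On a Standard GKE cluster this pairs naturally with cluster autoscaling on the node pool, but the HPA is what keeps the user experience consistent at the pod level.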