Google Professional Cloud DevOps Engineer Real Exam Questions
The questions for Professional Cloud DevOps Engineer were last updated on Nov 22, 2024.
- Exam Code: Professional Cloud DevOps Engineer
- Exam Name: Google Cloud Certified - Professional Cloud DevOps Engineer Exam
- Certification Provider: Google
- Latest update: Nov 22, 2024
Your team uses Cloud Build for all CI/CD pipelines. You want to use the kubectl builder for Cloud Build to deploy new images to Google Kubernetes Engine (GKE). You need to authenticate to GKE while minimizing development effort.
What should you do?
- A . Assign the Container Developer role to the Cloud Build service account.
- B . Specify the Container Developer role for Cloud Build in the cloudbuild.yaml file.
- C . Create a new service account with the Container Developer role and use it to run Cloud Build.
- D . Create a separate step in Cloud Build to retrieve service account credentials and pass these to kubectl.
A
Explanation:
https://cloud.google.com/build/docs/deploying-builds/deploy-gke
https://cloud.google.com/build/docs/securing-builds/configure-user-specified-service-accounts
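For illustration, a minimal sketch of option A using gcloud, with PROJECT_ID and PROJECT_NUMBER as placeholders for your own values. Once the role is bound, the gcr.io/cloud-builders/kubectl build step can authenticate to the cluster without any extra credential-handling steps:

```bash
# Grant the Container Developer role to the default Cloud Build service account.
# PROJECT_ID and PROJECT_NUMBER are placeholders; look up the project number with
# `gcloud projects describe PROJECT_ID --format="value(projectNumber)"`.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/container.developer"
```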
You support a high-traffic web application and want to ensure that the home page loads in a timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent home page request latency with an acceptable page load time set to 100 ms.
What is the Google-recommended way of calculating this SLI?
- A . Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
- B . Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
- C . Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.
- D . Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.
C
Explanation:
https://sre.google/workbook/implementing-slos/
The SRE Workbook recommends treating an SLI as the ratio of two numbers: the number of good events divided by the total number of events. For example: number of successful HTTP requests / total HTTP requests (the success rate).
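As a rough sketch of that good-events-over-total-events calculation, assuming a hypothetical latencies.txt file with one home page request latency in milliseconds per line:

```bash
# Count requests faster than the 100 ms threshold (good events) and divide by
# the total request count to get the SLI as a ratio between 0 and 1.
good=$(awk '$1 < 100' latencies.txt | wc -l)
total=$(wc -l < latencies.txt)
awk -v g="$good" -v t="$total" 'BEGIN { printf "home page latency SLI: %.4f\n", g / t }'
```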
You support an application deployed on Compute Engine. The application connects to a Cloud SQL instance to store and retrieve data. After an update to the application, users report errors showing database timeout messages. The number of concurrent active users remained stable. You need to find the most probable cause of the database timeout.
What should you do?
- A . Check the serial port logs of the Compute Engine instance.
- B . Use Stackdriver Profiler to visualize the resource utilization throughout the application.
- C . Determine whether there is an increased number of connections to the Cloud SQL instance.
- D . Use Cloud Security Scanner to see whether your Cloud SQL is under a Distributed Denial of Service (DDoS) attack.
C
Explanation:
The most probable cause of the database timeout is an increased number of connections to the Cloud SQL instance. This could happen if the application does not close connections properly or if it creates too many connections at once. You can check the number of connections to the Cloud SQL instance using Cloud Monitoring or the Cloud SQL Admin API.
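One way to spot-check this is to query the cloudsql.googleapis.com/database/network/connections metric through the Monitoring API's timeSeries.list method. A sketch, assuming a one-hour lookback window and GNU date; PROJECT_ID is a placeholder:

```bash
# List the Cloud SQL connection-count time series for the last hour.
# The `date -d` flag as used here requires GNU date.
curl -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/PROJECT_ID/timeSeries" \
  --data-urlencode 'filter=metric.type="cloudsql.googleapis.com/database/network/connections"' \
  --data-urlencode "interval.startTime=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --data-urlencode "interval.endTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```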
You have a CI/CD pipeline that uses Cloud Build to build new Docker images and push them to Docker Hub. You use Git for code versioning. After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are being built by the pipeline. You need to resolve the issue following Site Reliability Engineering practices.
What should you do?
- A . Disable the CI pipeline and revert to manually building and pushing the artifacts.
- B . Change the CI pipeline to push the artifacts to Container Registry instead of Docker Hub.
- C . Upload the configuration YAML file to Cloud Storage and use Error Reporting to identify and fix the issue.
- D . Run a Git compare between the previous and current Cloud Build Configuration files to find and fix the bug.
D
Explanation:
"After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are
being built by the pipeline"- means something wrong on the recent change not with the image registry.
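Following the SRE practice of examining the most recent change first, for example (assuming the configuration lives in cloudbuild.yaml at the repository root):

```bash
# Show what changed in the Cloud Build configuration in the most recent commit.
git diff HEAD~1 HEAD -- cloudbuild.yaml

# Or compare against a known-good revision (last-good-build is a hypothetical tag).
git diff last-good-build HEAD -- cloudbuild.yaml
```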
Your company experiences bugs, outages, and slowness in its production systems. Developers use the production environment for new feature development and bug fixes. Configuration and experiments are done in the production environment, causing outages for users. Testers use the production environment for load testing, which often slows the production systems. You need to redesign the environment to reduce the number of bugs and outages in production and to enable testers to load test new features.
What should you do?
- A . Create an automated testing script in production to detect failures as soon as they occur.
- B . Create a development environment with smaller server capacity and give access only to developers and testers.
- C . Secure the production environment to ensure that developers can’t change it and set up one controlled update per year.
- D . Create a development environment for writing code and a test environment for configurations, experiments, and load testing.
D
Explanation:
Creating a development environment for writing code and a test environment for configurations, experiments, and load testing is the best practice to reduce the number of bugs and outages in production and to enable testers to load test new features. This way, the production environment is isolated from changes that could affect its stability and performance.
You need to deploy a new service to production. The service needs to automatically scale using a Managed Instance Group (MIG) and should be deployed over multiple regions. The service needs a large number of resources for each instance and you need to plan for capacity.
What should you do?
- A . Use the n2-highcpu-96 machine type in the configuration of the MIG.
- B . Monitor results of Stackdriver Trace to determine the required amount of resources.
- C . Validate that the resource requirements are within the available quota limits of each region.
- D . Deploy the service in one region and use a global load balancer to route traffic to this region.
C
Explanation:
https://cloud.google.com/compute/quotas#understanding_quotas
https://cloud.google.com/compute/quotas
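You can inspect a region's current quota usage and limits with gcloud. A sketch, using us-central1 as an arbitrary example region; the projection syntax may need adjusting for your gcloud version:

```bash
# Print per-metric quota usage and limits for one region, one quota per row.
gcloud compute regions describe us-central1 \
  --flatten="quotas[]" \
  --format="table(quotas.metric, quotas.usage, quotas.limit)"
```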
Your team is designing a new application for deployment into Google Kubernetes Engine (GKE). You need to set up monitoring to collect and aggregate various application-level metrics in a centralized location. You want to use Google Cloud Platform services while minimizing the amount of work required to set up monitoring.
What should you do?
- A . Publish various metrics from the application directly to the Stackdriver Monitoring API, and then observe these custom metrics in Stackdriver.
- B . Install the Cloud Pub/Sub client libraries, push various metrics from the application to various topics, and then observe the aggregated metrics in Stackdriver.
- C . Install the OpenTelemetry client libraries in the application, configure Stackdriver as the export destination for the metrics, and then observe the application’s metrics in Stackdriver.
- D . Emit all metrics in the form of application-specific log messages, pass these messages from the containers to the Stackdriver logging collector, and then observe metrics in Stackdriver.
A
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics#custom_metrics
https://github.com/GoogleCloudPlatform/k8s-stackdriver/blob/master/custom-metrics-stackdriver-adapter/README.md
Your application can report a custom metric to Cloud Monitoring. You can configure Kubernetes to respond to these metrics and scale your workload automatically. For example, you can scale your application based on metrics such as queries per second, writes per second, network performance, latency when communicating with a different application, or other metrics that make sense for your workload. https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics
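As a sketch of option A, the following writes a single point of a hypothetical custom metric (the name custom.googleapis.com/homepage/active_sessions is invented for illustration) to the Monitoring API's timeSeries.create method; PROJECT_ID is a placeholder:

```bash
# Report one gauge data point for a custom metric. Cloud Monitoring
# auto-creates the metric descriptor on the first write.
PROJECT_ID=my-project   # placeholder project ID
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT_ID}/timeSeries" \
  -d @- <<EOF
{
  "timeSeries": [{
    "metric": { "type": "custom.googleapis.com/homepage/active_sessions" },
    "resource": { "type": "global", "labels": { "project_id": "${PROJECT_ID}" } },
    "points": [{
      "interval": { "endTime": "${NOW}" },
      "value": { "int64Value": "42" }
    }]
  }]
}
EOF
```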