Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) Questions and Answers

Question 4

For this question, refer to the TerramEarth case study.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?

Options:

A.

Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.

B.

Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.

C.

Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.

D.

Directly transfer the files to different Google Cloud Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
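The trade-off in this question hinges on resumability: HTTP(S) uploads to Cloud Storage can resume from the last committed byte, whereas the current FTP process restarts from byte zero on every dropped connection. A minimal sketch (illustrative only, not tied to any option) of what that difference costs on a flaky cellular link:

```python
# Compare bytes pushed over the wire for a resumable transfer vs. a
# restart-from-zero transfer when connections keep dropping.

def wire_bytes(total, attempts, resumable):
    """Return (bytes sent over the wire, completed?) for a transfer of
    `total` bytes, where each connection drops after carrying at most
    the corresponding entry of `attempts` bytes."""
    sent, committed = 0, 0
    for cap in attempts:
        start = committed if resumable else 0  # FTP-style restarts at byte 0
        n = min(cap, total - start)
        sent += n
        if start + n == total:
            return sent, True
        committed = start + n if resumable else 0
    return sent, False

# 100 MB file, connections that each drop after 40 MB:
resumable = wire_bytes(100, [40, 40, 40], True)    # (100, True)
restarting = wire_bytes(100, [40, 40, 40], False)  # (120, False)
```

On this example, restart-from-zero has already pushed 120 MB over the cellular link and still has no complete copy, while the resumable transfer finishes after sending exactly 100 MB.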

Question 5

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

Options:

A.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.

Push the telemetry data in real-time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.

C.

Push the telemetry data in real-time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Question 6

For this question, refer to the TerramEarth case study.

TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure. You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?

(The four candidate architectures A–D are shown as diagrams and are not reproduced here.)

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question 7

For this question, refer to the TerramEarth case study.

You analyzed TerramEarth's business requirement to reduce downtime, and found that they can achieve a majority of time saving by reducing customers' wait time for parts. You decided to focus on reduction of the 3 weeks aggregate reporting time. Which modifications to the company's processes should you recommend?

Options:

A.

Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B.

Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C.

Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D.

Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

Question 8

For this question refer to the TerramEarth case study.

Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

Options:

A.

Opex/capex allocation, LAN changes, capacity planning

B.

Capacity planning, TCO calculations, opex/capex allocation

C.

Capacity planning, utilization measurement, data center expansion

D.

Data Center expansion, TCO calculations, utilization measurement

Question 9

For this question, refer to the TerramEarth case study.

TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, roughly 40 TB per hour. How should you design the data ingestion?

Options:

A.

Vehicles write data directly to GCS.

B.

Vehicles write data directly to Google Cloud Pub/Sub.

C.

Vehicles stream data directly to Google BigQuery.

D.

Vehicles continue to write data using the existing system (FTP).
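As a sanity check on the volumes stated in the question (taking the figures exactly as given):

```python
# 20 million vehicles, each sending one 600-byte record per second.
vehicles = 20_000_000
record_bytes = 600

bytes_per_second = vehicles * record_bytes        # 12 GB/s sustained
tb_per_hour = bytes_per_second * 3600 / 10**12    # decimal terabytes

print(bytes_per_second)        # 12000000000
print(round(tb_per_hour, 1))   # 43.2 -- the "40 TB an hour" in the
                               # question is a round-number approximation
```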

Question 10

Your agricultural division is experimenting with fully autonomous vehicles.

You want your architecture to promote strong security during vehicle operation.

Which two architectures should you consider?

Choose 2 answers:

Options:

A.

Treat every microservice call between modules on the vehicle as untrusted.

B.

Require IPv6 for connectivity to ensure a secure address space.

C.

Use a trusted platform module (TPM) and verify firmware and binaries on boot.

D.

Use a functional programming language to isolate code execution cycles.

E.

Use multiple connectivity subsystems for redundancy.

F.

Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.

Question 11

For this question, refer to the TerramEarth case study.

Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?

Options:

A.

Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.

B.

Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.

C.

Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.

D.

Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.

Question 12

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

Options:

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.
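An illustrative sketch (not an answer key): deterministic tokenization is what lets a card vault flag duplicate cards without storing plaintext numbers, because the same PAN always maps to the same token. Here a local HMAC-SHA256 key stands in for key material that would really live in a managed KMS; tagging tokens with a key version is one way to support annual rotation. All names and key material below are hypothetical.

```python
import hashlib
import hmac

def tokenize(pan: str, key: bytes, key_version: str = "v1") -> str:
    """Deterministic token: identical PANs yield identical tokens,
    so duplicates are detectable, while the PAN itself is never stored."""
    digest = hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()
    return f"{key_version}:{digest}"

KEY = b"demo-only-key-material"  # hypothetical; never hard-code real keys

t1 = tokenize("4111111111111111", KEY)
t2 = tokenize("4111111111111111", KEY)  # same card -> same token
t3 = tokenize("5500005555555559", KEY)  # different card -> different token
```

Rotating the key yields a new `key_version` prefix, so old and new tokens can coexist while data is re-tokenized.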

Question 13

For this question, refer to the Cymbal Retail case study. Cymbal plans to migrate their existing on-premises systems to Google Cloud and implement AI-powered virtual agents to handle customer interactions. You need to provision the compute resources that can scale for the AI-powered virtual agents. What should you do?

Options:

A.

Use Cloud SQL to store the customer data and product catalog.

B.

Configure Cloud Build to call AI Applications (formerly Vertex AI Agent Builder).

C.

Deploy a Google Kubernetes Engine (GKE) cluster with autoscaling enabled.

D.

Create a single, large Compute Engine VM instance with a high CPU allocation.

Question 14

For this question, refer to the Cymbal Retail case study. Cymbal wants to migrate their product catalog management processes to Google Cloud. You need to ensure a smooth migration with proper change management to minimize disruption and risks to the business. You want to follow Google-recommended practices to automate product catalog enrichment, improve product discoverability, increase customer engagement, and minimize costs. What should you do?

Options:

A.

Design a migration plan to move all of Cymbal's data to Cloud Storage, and use Compute Engine for all business logic.

B.

Design a migration plan to move all of Cymbal's data to Cloud Storage, and use Cloud Run functions for all business logic.

C.

Design a migration plan, starting with a pilot project focusing on a specific product category, and gradually expand to other categories.

D.

Design a migration plan with a scheduled window to move all components at once. Perform extensive testing to ensure a successful migration.

Question 15

For this question, refer to the Cymbal Retail case study. Cymbal wants you to connect their on-premises systems to Google Cloud while maintaining secure communication between their on-premises and cloud environments. You want to follow Google's recommended approach to ensure the most secure and manageable solution. What should you do?

Options:

A.

Use a bastion host to provide secure access to Google Cloud resources from Cymbal's on-premises systems.

B.

Configure a static VPN connection using SSH tunnels to connect the on-premises systems to Google Cloud.

C.

Configure a Cloud VPN gateway and establish a VPN tunnel. Configure firewall rules to restrict access to specific resources and services based on IP addresses and ports.

D.

Use Google Cloud's VPC peering to connect Cymbal's on-premises network to Google Cloud.

Question 16

For this question, refer to the Cymbal Retail case study. Cymbal's generative AI models require high-performance storage for temporary files generated during model training and inference. These files are ephemeral and frequently accessed and modified. You need to select a storage solution that minimizes latency and cost and maximizes performance for generative AI workloads. What should you do?

Options:

A.

Use a Cloud Storage bucket in the same region as your virtual machines. Configure lifecycle policies to delete files after processing.

B.

Use Filestore to store temporary files.

C.

Use performance persistent disks.

D.

Use Local SSDs attached to the VMs running the generative AI models.

Question 17

For this question, refer to the Cymbal Retail case study. Cymbal wants to migrate its diverse database environment to Google Cloud while ensuring high availability and performance for online customers. The company also wants to efficiently store and access large product images. These images typically stay in the catalog for more than 90 days and are accessed less and less frequently. You need to select the appropriate Google Cloud services for each database. You also need to design a storage solution for the product images that optimizes cost and performance. What should you do?

Options:

A.

Migrate all databases to Spanner for consistency, and use Cloud Storage Standard for image storage.

B.

Migrate all databases to self-managed instances on Compute Engine, and use a persistent disk for image storage.

C.

Migrate MySQL and SQL Server to Spanner, Redis to Memorystore, and MongoDB to Firestore. Use Cloud Storage Standard for image storage, and move images to Cloud Storage Nearline storage when products become less popular.

D.

Migrate MySQL to Cloud SQL, SQL Server to Cloud SQL, Redis to Memorystore, and MongoDB to Firestore. Use Cloud Storage Standard for image storage, and move images to Cloud Storage Coldline storage when products become less popular.

Question 18

For this question, refer to the Cymbal Retail case study. Cymbal has a centralized project that supports large video files for Vertex AI model training. Standard storage costs have suddenly increased this month, and you need to determine why. What should you do?

Options:

A.

Investigate if the project owner disabled a soft-delete policy on the bucket holding the video files.

B.

Investigate if the project owner moved from dual-region storage to regional storage.

C.

Investigate if the project owner enabled a soft-delete policy on the bucket holding the video files.

D.

Investigate if the project owner moved from multi-region storage to regional storage.

Question 19

For this question, refer to the Cymbal Retail case study. Cymbal wants you to design a cloud-first data storage infrastructure for the product catalog modernization project. You want to ensure efficient data access and high availability for Cymbal's web application and virtual agents while minimizing operational costs. What should you do?

Options:

A.

Use AlloyDB for structured product data, and Cloud Storage for product images.

B.

Use Spanner for the structured product data, and Bigtable for product images.

C.

Use Filestore for the structured product data, and Cloud Storage for product images.

D.

Use Cloud Storage for structured product data, and BigQuery for product images.

Question 20

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.

Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Question 21

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win's business and technical requirements, what should you do?

Options:

A.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.

Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.

Enable default storage encryption before storing files in Cloud Storage.

C.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements.

Utilize Google’s default encryption at rest when storing files in Cloud Storage.

D.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Question 22

Refer to the Altostrat Media case study for the following solution regarding the performance analysis of their media processing pipeline.

Altostrat needs to analyze the performance of its media processing pipeline running on Java-based Cloud Run functions. You need to select the most effective tool for the task. What should you do?

Options:

A.

Query logs in Cloud Logging.

B.

Analyze the data via Cloud Profiler.

C.

Instrument the code to use Cloud Trace.

D.

Inspect data from Snapshot Debugger.

Question 23

Refer to the Altostrat Media case study for the following solution.

Altostrat is concerned about sophisticated, multi-vector Distributed Denial of Service (DDoS) attacks targeting various layers of their infrastructure. DDoS attacks could potentially disrupt video streaming and cause financial losses. You need to mitigate this risk. What should you do?

Options:

A.

Set up VPC Service Controls to restrict access to sensitive resources and prevent data exfiltration.

B.

Configure Cloud Next Generation Firewall (NGFW) with custom rules to filter malicious traffic at the network level.

C.

Deploy Google Cloud Armor with pre-configured and custom rules for L3/L4 and L7 protection.

D.

Activate Security Command Center to monitor security posture and detect potential threats.

Question 24

Refer to the Altostrat Media case study for the following solutions regarding cost optimization for batch processing and microservices testing strategies.

Altostrat is experiencing fluctuating computational demands for its batch processing jobs. These jobs are not time-critical and can tolerate occasional interruptions. You want to optimize cloud costs and address batch processing needs. What should you do?

Options:

A.

Configure reserved VM instances.

B.

Deploy spot VM instances.

C.

Set up standard VM instances.

D.

Use Cloud Run functions.

Question 25

Refer to the Altostrat Media case study for the following solution regarding API management and cost control.

Altostrat is using Apigee for API management and wants to ensure their APIs are protected from overuse and abuse. You need to implement an Apigee feature to control the total number of API calls for cost management. What should you do?

Options:

A.

Set up API key validation.

B.

Integrate OAuth 2.0 authorization.

C.

Configure Quota policies.

D.

Activate XML threat protection.

Question 26

Altostrat's development team is using a microservices architecture for their application. You need to select the most suitable testing approach to ensure that individual microservices function correctly in isolation. What should you do?

Options:

A.

Run unit testing.

B.

Use load testing.

C.

Perform end-to-end testing.

D.

Execute integration testing.

Question 27

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

Options:

A.

Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.

B.

Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

C.

Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.

D.

Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.
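For background on the serverless NEG mechanism that option B names: the usual wiring is one NEG per region pointing at that region's Cloud Run service, with every NEG attached to a single global backend service behind the load balancer. A sketch with hypothetical resource names (repeated once per region):

```shell
# Create a serverless NEG that fronts the Cloud Run service "api" in one region.
gcloud compute network-endpoint-groups create api-neg-us \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=api

# Attach that NEG to the global backend service used by the HTTP(S) load balancer.
gcloud compute backend-services add-backend api-backend \
    --global \
    --network-endpoint-group=api-neg-us \
    --network-endpoint-group-region=us-central1
```

The load balancer then routes each client to the nearest healthy region automatically.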

Question 28

Altostrat stores a large library of media content, including sensitive interviews and documentaries, in Cloud Storage. They are concerned about the confidentiality of this content and want to protect it from unauthorized access. You need to implement a Google-recommended solution that is easy to integrate and provides Altostrat with control and auditability of the encryption keys. What should you do?

Options:

A.

Configure Cloud Storage to use server-side encryption with Google-managed encryption keys. Create a bucket policy to restrict access to only authorized Google groups and required service accounts.

B.

Use Cloud Storage default encryption at rest. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.

C.

Implement client-side encryption before uploading it to Cloud Storage. Store the encryption keys in a HashiCorp Vault instance deployed on Google Kubernetes Engine (GKE). Implement fine-grained access control to sensitive Cloud Storage buckets using IAM roles.

D.

Use customer-managed encryption keys (CMEK) for all Cloud Storage buckets storing sensitive media content. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.

Question 29

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

Options:

A.

Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.

B.

Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.

C.

Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.

D.

Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

Question 30

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost.

What should you do?

Options:

A.

Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

B.

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

C.

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.

D.

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

Question 31

TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

Options:

A.

Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.

B.

Configure the Storage Transfer Service from Google Cloud to send the data from your data center to Cloud Storage.

C.

Make sure there are no other users consuming the 1 Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.

D.

Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage.
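A back-of-the-envelope calculation (assuming decimal units and a link fully dedicated to this transfer) shows why the one-month deadline matters here:

```python
# Time to move 1 PB over a 1 Gbps link at 100% utilization.
PETABYTE = 10**15          # bytes, decimal
LINK_BPS = 10**9           # 1 Gbps

seconds = PETABYTE * 8 / LINK_BPS
days = seconds / 86_400
print(round(days, 1))      # ~92.6 days -- well past the one-month window
```

Real-world utilization would be lower still, which is why an offline option such as a Transfer Appliance is worth weighing against the online transfer choices.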

Question 32

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

Options:

A.

Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.

B.

Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.

C.

Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.

D.

Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.

Question 33

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.

Which two actions should you take?

Options:

A.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and Action: “Delete”.

B.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and Action: “Set to Nearline”.

C.

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and Action: “Set to Coldline”.

D.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and Action: “Delete”.
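For reference, Cloud Storage lifecycle rules like the ones described in the options are expressed as a JSON policy on the bucket. The ages and storage classes below are placeholders for illustration, not an answer key:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 365}
      }
    ]
  }
}
```

A policy file of this shape can be applied with `gsutil lifecycle set policy.json gs://BUCKET`.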

Question 34

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

Options:

A.

Google Kubernetes Engine with an SSL Ingress

B.

Cloud IoT Core with public/private key pairs

C.

Compute Engine with project-wide SSH keys

D.

Compute Engine with specific SSH keys

Question 35

You are migrating a Linux-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do?

Options:

A.

Open a support case regarding the CVE and chat with the support engineer.

B.

Read the CVEs from the Google Cloud Status Dashboard to understand the impact.

C.

Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.

D.

Post a question regarding the CVE in Stack Overflow to get an explanation.

E.

Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.

Question 36

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?

Options:

A.

Work with your ISP to diagnose the problem.

B.

Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application.

C.

Roll back to an earlier known good release initially, then use Stackdriver Trace and logging to diagnose the problem in a development/test/staging environment.

D.

Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and logging to diagnose the problem.

Question 37

You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don’t run out of storage, maintain 75% CPU usage cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?

Options:

A.

1) Enable automatic storage increase for the instance.

2) Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage.

3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.

B.

1) Enable automatic storage increase for the instance.

2) Change the instance type to a 32-core machine type to keep CPU usage below 75%.

3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.

C.

1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space.

2) Deploy memcached to reduce CPU load.

3) Change the instance type to a 32-core machine type to reduce replication lag.

D.

1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space.

2) Deploy memcached to reduce CPU load.

3) Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.

Question 38

Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling. Which two compute products should you choose? Choose 2 answers.

Options:

A.

Compute Engine with containers

B.

Google Kubernetes Engine with containers

C.

Google App Engine Standard Environment

D.

Compute Engine with custom instance types

E.

Compute Engine with managed instance groups

Question 39

You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?

Options:

A.

Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.

B.

Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.

C.

Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.

D.

Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster.
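As background for the Deployment-plus-Service pattern referenced in the options, a minimal sketch (all names hypothetical): the Deployment pins the replica count, and a ClusterIP Service gives those replicas one stable in-cluster DNS name (here `checkout.default.svc.cluster.local`) regardless of how far the microservice scales.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3                      # specific replica count per microservice
  selector:
    matchLabels: {app: checkout}
  template:
    metadata:
      labels: {app: checkout}
    spec:
      containers:
      - name: checkout
        image: example/checkout:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  type: ClusterIP                  # internal to the cluster only
  selector: {app: checkout}
  ports:
  - port: 80
    targetPort: 8080
```

Other microservices simply call `http://checkout` (same namespace) and kube-dns resolves it to the Service's stable virtual IP.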

Question 40

Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?

Options:

A.

Grant the security team access to the logs in each Project.

B.

Configure Stackdriver Monitoring for all Projects, and export to BigQuery.

C.

Configure Stackdriver Monitoring for all Projects with the default retention policies.

D.

Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.

Question 41

Your team plans to use Vertex AI to develop and deploy machine learning models for various use cases for fraud detection, product recommendations, and customer churn prediction. You want to enhance the security posture of the Vertex AI and Workbench environment by restricting data exfiltration. What should you do?

Options:

A.

Create a service perimeter and include ml.googleapis.com and document.googleapis.com as protected services.

B.

Enable VPC Flow Logs to monitor network traffic to and from Vertex AI services and to identify suspicious activity.

C.

Create a service perimeter and include aiplatform.googleapis.com and notebooks.googleapis.com as protected services.

D.

Enable Private Google Access for the VPC network to allow Vertex AI services to access public Google services without traversing the public internet.
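For context on option C, a VPC Service Controls perimeter restricting the Vertex AI and Notebooks APIs might be created roughly as follows. This is a hedged sketch; the policy ID, perimeter name, title, and project number are placeholders:

```shell
# Sketch only: create a service perimeter protecting
# aiplatform.googleapis.com (Vertex AI) and notebooks.googleapis.com (Workbench).
# POLICY_ID and PROJECT_NUMBER are placeholders.
gcloud access-context-manager perimeters create vertex_perimeter \
    --policy=POLICY_ID \
    --title="Vertex AI perimeter" \
    --resources=projects/PROJECT_NUMBER \
    --restricted-services=aiplatform.googleapis.com,notebooks.googleapis.com
```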

Questions 42

You are using a GitHub repository for your application's source code. You want to set up an efficient and secure continuous deployment process to automatically build and deploy the application to Cloud Run whenever a pull request is merged. What should you do?

Options:

A.

Create a GitHub Enterprise trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build and deploy the application to Cloud Run. Save the deployment credential to Secret Manager.

B.

Create a workflow using GitHub Actions to build and deploy the application to Cloud Run once a pull request is merged. The workflow will use a service account key checked in with your source code for deployment permission.

C.

Create a GitHub webhook trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build a container image and save it in Artifact Registry. Use Config Sync to deploy the application to Cloud Run.

D.

Connect your repository using the Cloud Build GitHub app. Create a trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build and deploy the application to Cloud Run.
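For context on option D, a merge-triggered build is usually paired with a `cloudbuild.yaml` along the lines of the sketch below. This is an illustrative fragment only; the image path, service name, and region are hypothetical:

```yaml
# Hypothetical cloudbuild.yaml: build, push, and deploy to Cloud Run
# when the Cloud Build trigger fires on a merged pull request.
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/repo/app:$COMMIT_SHA', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'us-docker.pkg.dev/$PROJECT_ID/repo/app:$COMMIT_SHA']
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: ['run', 'deploy', 'app',
           '--image', 'us-docker.pkg.dev/$PROJECT_ID/repo/app:$COMMIT_SHA',
           '--region', 'us-central1']
images:
  - 'us-docker.pkg.dev/$PROJECT_ID/repo/app:$COMMIT_SHA'
```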

Questions 43

You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?

Options:

A.

Use a persistent disk for each instance.

B.

Use a regional persistent disk for each instance.

C.

Create a Cloud Filestore instance and mount it in each instance.

D.

Create a Cloud Storage bucket and mount it in each instance using gcsfuse.
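For context on option C, a Filestore instance is created once and then NFS-mounted on each Compute Engine instance. The commands below are a hedged sketch; the instance name, zone, tier, share name, network, and IP address are placeholders:

```shell
# Sketch: create a Filestore instance exposing a POSIX file share.
gcloud filestore instances create nfs-server \
    --zone=us-central1-a \
    --tier=BASIC_SSD \
    --file-share=name=vol1,capacity=2.5TB \
    --network=name=default

# On each Compute Engine instance (10.0.0.2 stands in for the Filestore IP):
sudo mount 10.0.0.2:/vol1 /mnt/filestore
```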

Questions 44

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

Options:

A.

Increase the Pub/Sub Total Timeout retry value.

B.

Move from a Pub/Sub subscriber pull model to a push model.

C.

Turn off Pub/Sub message batching.

D.

Create a backup Pub/Sub message queue.

Questions 45

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

Options:

A.

Use a private cluster with a private endpoint with master authorized networks configured.

B.

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.

Use a private cluster with a public endpoint with master authorized networks configured.

D.

Use a public cluster with master authorized networks enabled and firewall rules.

Questions 46

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Questions 47

For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR's network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flows from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G

C.

Add three new Cloud VPN connections

D.

Add a new Carrier Peering connection

Questions 48

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

Options:

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with front end instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
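For context on option A, this scenario maps to the `compute.vmExternalIpAccess` list constraint. A policy allowing external IPs only on named frontend instances might look like the sketch below; the project ID, zones, and instance names are placeholders:

```yaml
# Sketch of an organization policy for constraints/compute.vmExternalIpAccess:
# only the listed frontend instances may be assigned external IP addresses.
name: projects/PROJECT_ID/policies/compute.vmExternalIpAccess
spec:
  rules:
    - values:
        allowedValues:
          - projects/PROJECT_ID/zones/us-central1-a/instances/frontend-1
          - projects/PROJECT_ID/zones/us-central1-b/instances/frontend-2
```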

Questions 49

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

Options:

A.

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

Questions 50

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

Options:

A.

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.

Use Firebase Authentication for EHR's user facing applications.

D.

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.

Use GKE private clusters for all Kubernetes workloads.

Questions 51

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

Options:

A.

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Questions 52

Your development team has created a mobile game app. You want to test the new mobile app on Android and iOS devices with a variety of configurations. You need to ensure that testing is efficient and cost-effective. What should you do?

Options:

A.

Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.

B.

Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.

C.

Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on the containers, and test the mobile app.

D.

Upload your mobile app with different configurations to Firebase Hosting and test each configuration.

Questions 53

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

Options:

A.

Verify that the database is online.

B.

Verify that the project quota hasn't been exceeded.

C.

Verify that the new feature code did not introduce any performance bugs.

D.

Verify that the load-testing team is not running their tool against production.

Questions 54

You need to implement a network ingress for a new game that meets the defined business and technical requirements. Mountkirk Games wants each regional game instance to be located in multiple Google Cloud regions. What should you do?

Options:

A.

Configure a global load balancer connected to a managed instance group running Compute Engine instances.

B.

Configure kubemci with a global load balancer and Google Kubernetes Engine.

C.

Configure a global load balancer with Google Kubernetes Engine.

D.

Configure Ingress for Anthos with a global load balancer and Google Kubernetes Engine.

Questions 55

Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the user’s perspective. What should you do?

Options:

A.

Create CPU Utilization and Request Latency as service level indicators.

B.

Create GKE CPU Utilization and Memory Utilization as service level indicators.

C.

Create Request Latency and Error Rate as service level indicators.

D.

Create Server Uptime and Error Rate as service level indicators.

Questions 56

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

Options:

A.

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.

B.

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.

C.

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Questions 57

You need to optimize batch file transfers into Cloud Storage for Mountkirk Games’ new Google Cloud solution. The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract transform load (ETL) tool. What should you do?

Options:

A.

Use gsutil to batch move files in sequence.

B.

Use gsutil to batch copy the files in parallel.

C.

Use gsutil to extract the files as the first part of ETL.

D.

Use gsutil to load the files as the last part of ETL.
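For context on option B, a parallel copy in gsutil is a single flag. The command below is illustrative only; the local path and bucket name are placeholders:

```shell
# Sketch: -m runs the copy with parallel threads/processes, which is the
# usual way to move many batch files into Cloud Storage quickly.
gsutil -m cp batch-stats/*.csv gs://mountkirk-staging-bucket/
```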

Questions 58

Mountkirk Games wants to limit the physical location of resources to their operating Google Cloud regions. What should you do?

Options:

A.

Configure an organizational policy which constrains where resources can be deployed.

B.

Configure IAM conditions to limit what resources can be configured.

C.

Configure the quotas for resources in the regions not being used to 0.

D.

Configure a custom alert in Cloud Monitoring so you can disable resources as they are created in other regions.
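For context on option A, this scenario corresponds to the `gcp.resourceLocations` list constraint. The fragment below is a hedged sketch; the organization ID and location value groups are placeholders:

```yaml
# Sketch of an organization policy for constraints/gcp.resourceLocations:
# resources may only be created in the listed location groups.
name: organizations/ORG_ID/policies/gcp.resourceLocations
spec:
  rules:
    - values:
        allowedValues:
          - in:us-locations
          - in:eu-locations
```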

Questions 59

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

Options:

A.

Configure Workload Identity and service accounts to be used by the application platform.

B.

Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.

C.

Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

D.

Configure HashiCorp Vault on Compute Engine, and use customer-managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

Questions 60

For this question, refer to the Dress4Win case study.

As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in. Which configuration should Dress4Win use?

Options:

A.

Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.

B.

Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.

C.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.

D.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.

Questions 61

For this question, refer to the Dress4Win case study.

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center. How should they proceed?

Options:

A.

Create a cron script using gsutil to copy the files to a Coldline Storage bucket.

B.

Create a cron script using gsutil to copy the files to a Regional Storage bucket.

C.

Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.

D.

Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.

Questions 62

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

Options:

A.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

B.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

C.

They should add additional unit tests and production scale load tests on their cloud staging environment.

D.

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Questions 63

For this question, refer to the Dress4Win case study.

As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load. They want to ensure that:

• The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day

• Their administrators are notified automatically when their application reports errors.

• They can filter their aggregated logs down in order to debug one piece of the application across many hosts

Which Google Stackdriver features should they use?

Options:

A.

Logging, Alerts, Insights, Debug

B.

Monitoring, Trace, Debug, Logging

C.

Monitoring, Logging, Alerts, Error Reporting

D.

Monitoring, Logging, Debug, Error Reporting

Questions 64

For this question, refer to the Dress4Win case study.

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

Options:

A.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

B.

They should add additional unit tests and production scale load tests on their cloud staging environment.

C.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

D.

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Questions 65

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

Options:

A.

A single VPN tunnel, which limits throughput

B.

A tier of Google Cloud Storage that is not suited for this task

C.

A copy command that is not suited to operate over long distances

D.

Fewer virtual machines (VMs) in GCP than on-premises machines

E.

A separate storage layer outside the VMs, which is not suited for this task

F.

Complicated internet connectivity between the on-premises infrastructure and GCP

Questions 66

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

Options:

A.

Error rates for requests from Asia

B.

Latency difference between US and Asia

C.

Total visits, error rates, and latency from Asia

D.

Total visits and average latency for users in Asia

E.

The number of character sets present in the database

Questions 67

For this question, refer to the JencoMart case study.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? (Choose 3 answers.)

Options:

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Questions 68

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

Options:

A.

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.

Create a single G Suite account to manage users with each stage of each application in its own project.

D.

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Questions 69

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

Options:

A.

Cloud Spanner

B.

Google BigQuery

C.

Google Cloud SQL

D.

Google Cloud Datastore

Questions 70

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

Options:

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Questions 71

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

Options:

A.

Create network load balancers. Use preemptible Compute Engine instances.

B.

Create network load balancers. Use non-preemptible Compute Engine instances.

C.

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.

Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Questions 72

For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk’s technical requirement for storing game activity in a time series database service?

Options:

A.

Cloud Bigtable

B.

Cloud Spanner

C.

BigQuery

D.

Cloud Datastore

Exam Name: Google Certified Professional - Cloud Architect (GCP)
Last Update: Apr 7, 2026
Questions: 333