
Professional-Data-Engineer Exam Dumps - Google Cloud Certified Questions and Answers

Question # 24

You have several different unstructured data sources within your on-premises data center as well as in the cloud. The data is in various formats, such as Apache Parquet and CSV. You want to centralize this data in Cloud Storage. You need to set up an object sink for your data that allows you to use your own encryption keys. You want to use a GUI-based solution. What should you do?

Options:

A.

Use Cloud Data Fusion to move files into Cloud Storage.

B.

Use Storage Transfer Service to move files into Cloud Storage.

C.

Use Dataflow to move files into Cloud Storage.

D.

Use BigQuery Data Transfer Service to move files into BigQuery.
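
Whichever GUI-based tool is used to move the files, the Cloud Storage sink itself can be configured to encrypt incoming objects with a key you control. A minimal sketch with the Python client, assuming a customer-managed Cloud KMS key and hypothetical bucket and key names (customer-supplied keys would instead be passed per request):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("central-data-sink")  # hypothetical sink bucket

    # Objects written without an explicit key are encrypted with this CMEK.
    bucket.default_kms_key_name = (
        "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
    )
    bucket.patch()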

Question # 25

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values, but the property ‘date_released’ does not. A typical query would ask for all movies with actor= ordered by date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D
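
The answer options are not reproduced above, so for context here is a minimal sketch of the query shapes the question describes, using the google-cloud-datastore client (kind, property values, and the limit are illustrative). Each query needs a composite index on one multi-valued property plus date_released; indexes that combine several multi-valued properties are what cause exploding indexes.

    from google.cloud import datastore

    client = datastore.Client()

    # All movies with a given actor, ordered by release date.
    q1 = client.query(kind="Movie")
    q1.add_filter("actors", "=", "Tom Hanks")   # newer clients prefer PropertyFilter
    q1.order = ["date_released"]

    # All movies with a given tag, ordered by release date.
    q2 = client.query(kind="Movie")
    q2.add_filter("tags", "=", "Comedy")
    q2.order = ["date_released"]

    movies = list(q1.fetch(limit=20))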

Question # 26

You have a query that filters a BigQuery table using a WHERE clause on timestamp and ID columns. By using bq query --dry_run you learn that the query triggers a full scan of the table, even though the filters on the timestamp and ID columns select a tiny fraction of the overall data. You want to reduce the amount of data scanned by BigQuery with minimal changes to existing SQL queries. What should you do?

Options:

A.

Create a separate table for each ID.

B.

Use the LIMIT keyword to reduce the number of rows returned.

C.

Recreate the table with a partitioning column and clustering column.

D.

Use the bq query --maximum_bytes_billed flag to restrict the number of bytes billed.
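
For reference, the dry run the question mentions can also be issued from the BigQuery Python client, which is a convenient way to compare bytes scanned before and after recreating the table with a partitioning and clustering column (project, dataset, table, and column names below are assumptions):

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
        SELECT *
        FROM `my-project.my_dataset.events`
        WHERE event_ts >= TIMESTAMP('2024-01-01')
          AND id = 'abc123'
    """
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=job_config)
    print(f"Bytes that would be processed: {job.total_bytes_processed}")

On a table partitioned on the timestamp column and clustered on the ID column, the same WHERE clause prunes partitions and blocks, so the dry-run byte count drops without changing the query text.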

Question # 27

You are designing a data mesh on Google Cloud with multiple distinct data engineering teams building data products. The typical data curation design pattern consists of landing files in Cloud Storage, transforming raw data in Cloud Storage and BigQuery datasets, and storing the final curated data product in BigQuery datasets. You need to configure Dataplex to ensure that each team can access only the assets needed to build their data products. You also need to ensure that teams can easily share the curated data product. What should you do?

Options:

A.

1. Create a single Dataplex virtual lake and create a single zone to contain landing, raw, and curated data.

2. Provide each data engineering team access to the virtual lake.

B.

1. Create a single Dataplex virtual lake and create a single zone to contain landing, raw, and curated data.

2. Build separate assets for each data product within the zone.

3. Assign permissions to the data engineering teams at the zone level.

C.

1. Create a Dataplex virtual lake for each data product, and create a single zone to contain landing, raw, and curated data.

2. Provide the data engineering teams with full access to the virtual lake assigned to their data product.

D.

1. Create a Dataplex virtual lake for each data product, and create multiple zones for landing, raw, and curated data.

2. Provide the data engineering teams with full access to the virtual lake assigned to their data product.
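
Dataplex lake, zone, and asset permissions themselves are granted through Dataplex IAM; sharing the curated data product can additionally be done at the BigQuery dataset level. A minimal sketch of that dataset-level sharing with the Python client, using placeholder project, dataset, and group names:

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("my-project.curated_sales")  # hypothetical curated dataset

    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="READER",
            entity_type="groupByEmail",
            entity_id="data-consumers@example.com",
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])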

Question # 28

Your neural network model is taking days to train. You want to increase the training speed. What can you do?

Options:

A.

Subsample your test dataset.

B.

Subsample your training dataset.

C.

Increase the number of input features to your model.

D.

Increase the number of layers in your neural network.
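
Subsampling the training set is a quick way to cut per-epoch time. A minimal sketch with NumPy, using synthetic stand-in data and an arbitrary 20% fraction:

    import numpy as np

    # Synthetic stand-in for a large training set.
    X_train = np.random.rand(1_000_000, 20)
    y_train = np.random.randint(0, 2, size=1_000_000)

    rng = np.random.default_rng(seed=42)
    idx = rng.choice(len(X_train), size=int(0.2 * len(X_train)), replace=False)
    X_small, y_small = X_train[idx], y_train[idx]
    # model.fit(X_small, y_small)  # train on the smaller sample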

Question # 29

You need ads data to serve AI models and historical data for analytics. Longtail and outlier data points need to be identified. You want to cleanse the data in near-real time before running it through AI models. What should you do?

Options:

A.

Use BigQuery to ingest, prepare, and then analyze the data, and then run queries to create views

B.

Use Cloud Storage as a data warehouse, shell scripts for processing, and BigQuery to create views for desired datasets

C.

Use Dataflow to identify longtail and outlier data points programmatically, with BigQuery as a sink

D.

Use Cloud Composer to identify longtail and outlier data points, and then output a usable dataset to BigQuery
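
The Dataflow approach in option C could be sketched as a streaming Apache Beam pipeline that flags longtail/outlier records and writes them to BigQuery. The topic, table, schema, and fixed threshold below are illustrative assumptions, not values from the question:

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def is_outlier(record, threshold=1000.0):
        # Placeholder rule; a real pipeline might use rolling statistics instead.
        return record.get("ad_spend", 0.0) > threshold

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadAds" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/ads")
            | "Parse" >> beam.Map(json.loads)
            | "KeepOutliers" >> beam.Filter(is_outlier)
            | "WriteBQ" >> beam.io.WriteToBigQuery(
                "my-project:ads.outliers",
                schema="ad_id:STRING,ad_spend:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )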

Question # 30

You are building a streaming Dataflow pipeline that ingests noise level data from hundreds of sensors placed near construction sites across a city. The sensors measure noise level every ten seconds, and send that data to the pipeline when levels reach above 70 dBA. You need to detect the average noise level from a sensor when data is received for a duration of more than 30 minutes, but the window ends when no data has been received for 15 minutes. What should you do?

Options:

A.

Use session windows with a 30-minute gap duration.

B.

Use tumbling windows with a 15-minute window and a fifteen-minute .withAllowedLateness operator.

C.

Use session windows with a 15-minute gap duration.

D.

Use hopping windows with a 15-minute window, and a thirty-minute period.
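
For comparison, session windows with a gap duration close only after that much inactivity, which matches the "window ends when no data has been received" requirement. A minimal Apache Beam sketch with a 15-minute gap, assuming JSON elements with sensor_id and dba fields and a hypothetical Pub/Sub topic:

    import json

    import apache_beam as beam
    from apache_beam import window
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/noise")
            | "Parse" >> beam.Map(json.loads)
            | "KeyBySensor" >> beam.Map(lambda r: (r["sensor_id"], float(r["dba"])))
            # The session for a sensor closes after 15 minutes without data.
            | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=15 * 60))
            | "AvgNoise" >> beam.combiners.Mean.PerKey()
            | "Print" >> beam.Map(print)
        )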

Question # 31

You have 100 GB of data stored in a BigQuery table. This data is outdated and will only be accessed one or two times a year for analytics with SQL. For backup purposes, you want this data to be stored immutably for 3 years. You want to minimize storage costs. What should you do?

Options:

A.

1. Create a BigQuery table clone.

2. Query the clone when you need to perform analytics.

B.

1. Create a BigQuery table snapshot.

2. Restore the snapshot when you need to perform analytics.

C.

1. Perform a BigQuery export to a Cloud Storage bucket with archive storage class.

2. Enable versioning on the bucket.

3. Create a BigQuery external table on the exported files.

D.

1. Perform a BigQuery export to a Cloud Storage bucket with archive storage class.

2. Set a locked retention policy on the bucket.

3. Create a BigQuery external table on the exported files.
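
The export / locked-retention / external-table sequence in option D can be scripted with the Cloud Storage and BigQuery Python clients. Bucket, dataset, and table names are placeholders, and Avro is just one possible export format:

    from google.cloud import bigquery, storage

    storage_client = storage.Client()
    bq_client = bigquery.Client()

    # 1. Archive-class bucket with a locked 3-year retention policy.
    bucket = storage_client.bucket("my-archive-bucket")
    bucket.storage_class = "ARCHIVE"
    bucket = storage_client.create_bucket(bucket, location="US")
    bucket.retention_period = 3 * 365 * 24 * 60 * 60  # seconds
    bucket.patch()
    bucket.lock_retention_policy()  # irreversible: objects become immutable

    # 2. Export the BigQuery table to the bucket.
    extract_job = bq_client.extract_table(
        "my-project.my_dataset.old_table",
        "gs://my-archive-bucket/old_table/*.avro",
        job_config=bigquery.ExtractJobConfig(destination_format="AVRO"),
    )
    extract_job.result()

    # 3. External table over the exported files for occasional SQL access.
    external_config = bigquery.ExternalConfig("AVRO")
    external_config.source_uris = ["gs://my-archive-bucket/old_table/*.avro"]
    table = bigquery.Table("my-project.my_dataset.old_table_archive")
    table.external_data_configuration = external_config
    bq_client.create_table(table)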

Question # 32

You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?

Options:

A.

Convert all daily log tables into date-partitioned tables

B.

Convert the sharded tables into a single partitioned table

C.

Enable query caching so you can cache data from previous months

D.

Create separate views to cover each month, and query from these views
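
Consolidating the sharded tables, as in option B, can be done with a single wildcard query that writes into a date-partitioned destination table. Project, dataset, and column names are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()

    sql = """
        SELECT
          PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS log_date,
          *
        FROM `my-project.logs_dataset.LOGS_*`
    """
    job_config = bigquery.QueryJobConfig(
        destination="my-project.logs_dataset.logs_partitioned",
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
        time_partitioning=bigquery.TimePartitioning(field="log_date"),
    )
    client.query(sql, job_config=job_config).result()

Existing reports can then query the single partitioned table (filtering on log_date) instead of thousands of daily shards.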

Question # 33

You are building a real-time prediction engine that streams files, which may contain PII (personally identifiable information) data, into Cloud Storage and eventually into BigQuery. You want to ensure that the sensitive data is masked but still maintains referential integrity, because names and emails are often used as join keys. How should you use the Cloud Data Loss Prevention API (DLP API) to ensure that the PII data is not accessible by unauthorized individuals?

Options:

A.

Create a pseudonym by replacing the PII data with cryptographic tokens, and store the non-tokenized data in a locked-down bucket.

B.

Redact all PII data, and store a version of the unredacted data in a locked-down bucket

C.

Scan every table in BigQuery, and mask the data it finds that has PII

D.

Create a pseudonym by replacing PII data with a cryptographic format-preserving token
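
With the DLP API, referential integrity survives pseudonymization when the transformation is deterministic, because equal inputs always map to equal tokens. A minimal sketch using deterministic encryption via deidentify_content; the project, Cloud KMS key, and wrapped data key are placeholders, and format-preserving encryption is configured the same way with crypto_replace_ffx_fpe_config instead:

    import base64
    import os

    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    parent = "projects/my-project/locations/global"  # placeholder project

    # Placeholders: a Cloud KMS key and an AES key wrapped by that KMS key.
    kms_key_name = "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key"
    wrapped_key = base64.b64decode(os.environ["WRAPPED_KEY_B64"])

    deidentify_config = {
        "info_type_transformations": {
            "transformations": [{
                "primitive_transformation": {
                    "crypto_deterministic_config": {
                        "crypto_key": {
                            "kms_wrapped": {
                                "wrapped_key": wrapped_key,
                                "crypto_key_name": kms_key_name,
                            }
                        },
                        "surrogate_info_type": {"name": "EMAIL_TOKEN"},
                    }
                }
            }]
        }
    }
    inspect_config = {"info_types": [{"name": "EMAIL_ADDRESS"}]}
    item = {"value": "Contact alice.smith@example.com about the order."}

    response = dlp.deidentify_content(
        request={
            "parent": parent,
            "deidentify_config": deidentify_config,
            "inspect_config": inspect_config,
            "item": item,
        }
    )
    print(response.item.value)  # the same email always yields the same token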

Exam Name: Google Professional Data Engineer Exam
Last Update: Jun 15, 2025
Questions: 376