
DP-700 Exam Dumps - Microsoft Certified: Fabric Analytics Engineer Associate Questions and Answers

Question # 4

You have a Fabric workspace that contains a lakehouse and a semantic model named Model1.

You use a notebook named Notebook1 to ingest and transform data from an external data source.

You need to execute Notebook1 as part of a data pipeline named Pipeline1. The process must meet the following requirements:

• Run daily at 07:00 AM UTC.

• Attempt to retry Notebook1 twice if the notebook fails.

• After Notebook1 executes successfully, refresh Model1.

Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Options:

A. Set the Retry setting of the Notebook activity to 2.

B. Place the Semantic model refresh activity after the Notebook activity and link the activities by using an On completion condition.

C. Place the Semantic model refresh activity after the Notebook activity and link the activities by using the On success condition.

D. From the Schedule settings of Notebook1, set the time zone to UTC.

E. From the Schedule settings of Pipeline1, set the time zone to UTC.

F. Set the Retry setting of the Semantic model refresh activity to 2.

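The retry and linkage semantics in this question are configured in the pipeline designer rather than in code, but they can be illustrated with a short sketch. A minimal Python sketch of the same behaviour, using hypothetical run_notebook1 and refresh_model1 placeholders:

import time

def run_notebook1():
    # Placeholder for executing Notebook1 (hypothetical).
    print("Notebook1 executed")

def refresh_model1():
    # Placeholder for refreshing Model1 (hypothetical).
    print("Model1 refreshed")

def run_with_retries(task, retries=2, interval_seconds=30):
    # Mirrors the Notebook activity's Retry setting of 2:
    # one initial attempt plus up to two retries on failure.
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(interval_seconds)

# An On success link runs the refresh only when the notebook
# finished without error; an On completion link would run it
# even after the final retry failed.
run_with_retries(run_notebook1)
refresh_model1()

Note also that when a notebook runs as part of a pipeline, the daily 07:00 trigger and its time zone are set on the pipeline's schedule, not in the notebook's own Schedule settings.
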
Question # 5

You have a Fabric workspace named Workspace1 that contains the items shown in the following table.

For Model1, the Keep your Direct Lake data up to date option is disabled.

You need to configure the execution of the items to meet the following requirements:

• Notebook1 must execute every weekday at 8:00 AM.

• Notebook2 must execute when a file is saved to an Azure Blob Storage container.

• Model1 must refresh when Notebook1 has executed successfully.

How should you orchestrate each item? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question # 6

You have a Fabric workspace named Workspace1 that contains a warehouse named Warehouse1.

You plan to deploy Warehouse1 to a new workspace named Workspace2.

As part of the deployment process, you need to verify whether Warehouse1 contains invalid references. The solution must minimize development effort.

What should you use?

Options:

A. a database project

B. a deployment pipeline

C. a Python script

D. a T-SQL script

Question # 7

You have five Fabric workspaces.

You are monitoring the execution of items by using Monitoring hub.

You need to identify in which workspace a specific item runs.

Which column should you view in Monitoring hub?

Options:

A. Start time

B. Capacity

C. Activity name

D. Submitter

E. Item type

F. Job type

G. Location

Question # 8

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a KQL database that contains two tables named Stream and Reference. Stream contains streaming data in the following format.

Reference contains reference data in the following format.

Both tables contain millions of rows.

You have the following KQL queryset.

You need to reduce how long it takes to run the KQL queryset.

Solution: You move the filter to line 02.

Does this meet the goal?

Options:

A. Yes

B. No

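The queryset this series refers to is not reproduced above, so the Yes/No call cannot be made from this page alone. The principle being tested is operator placement: KQL evaluates a query's operators in pipeline order, so a where placed immediately after the table reference (line 02) prunes rows before later operators touch them. A rough analogue sketched in PySpark, with assumed table and column names (Id, Value):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
stream = spark.table("Stream")        # assumed registered table names
reference = spark.table("Reference")

# Filter applied late: the full multimillion-row tables are joined
# first and unwanted rows are discarded afterwards.
late = stream.join(reference, "Id").filter(F.col("Value") > 100)

# Filter applied early (the analogue of moving the KQL where to
# line 02): rows are pruned before the join processes them.
early = stream.filter(F.col("Value") > 100).join(reference, "Id")

(Spark's optimizer can often push a late filter down on its own; the sketch is only an analogy for the KQL ordering rule.)
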
Question # 9

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Fabric eventstream that loads data into a table named Bike_Location in a KQL database. The table contains the following columns:

BikepointID

Street

Neighbourhood

No_Bikes

No_Empty_Docks

Timestamp

You need to apply transformation and filter logic to prepare the data for consumption. The solution must return data for a neighbourhood named Sands End when No_Bikes is at least 15. The results must be ordered by No_Bikes in ascending order.

Solution: You use the following code segment:

Does this meet the goal?

Options:

A. Yes

B. No

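The code segment for this solution is likewise not reproduced above. For reference, the stated requirements (Neighbourhood equal to Sands End, No_Bikes of at least 15, results ordered by No_Bikes ascending) can be sketched in PySpark like this; the table and column names come from the question, but the sketch is illustrative and is not the missing KQL segment:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

result = (
    spark.table("Bike_Location")
         .filter((F.col("Neighbourhood") == "Sands End")
                 & (F.col("No_Bikes") >= 15))   # "at least 15" is inclusive
         .orderBy(F.col("No_Bikes").asc())      # ascending, as required
)
result.show()

When judging the actual segment, the two details to verify are the comparison operator (>= versus >) and the sort direction.
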
Question # 10

You have a Fabric workspace that contains a lakehouse named Lakehouse1. Data is ingested into Lakehouse1 as one flat table. The table contains the following columns.

You plan to load the data into a dimensional model and implement a star schema. From the original flat table, you create two tables named FactSales and DimProduct. You will track changes in DimProduct.

You need to prepare the data.

Which three columns should you include in the DimProduct table? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options:

A. Date

B. ProductName

C. ProductColor

D. TransactionID

E. SalesAmount

F. ProductID

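In a star schema, the dimension carries the descriptive attributes of the product, while measures, transaction identifiers, and dates stay on the fact table; the product key is kept on both sides so they can be joined. A minimal PySpark sketch of that split, assuming the ingested flat table is registered as SalesFlat (a hypothetical name):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
flat = spark.table("SalesFlat")   # hypothetical name for the flat table

# Dimension: the product key plus its descriptive attributes,
# deduplicated because the flat table repeats them per sale.
dim_product = flat.select("ProductID", "ProductName", "ProductColor").dropDuplicates()

# Fact: measures and transaction/date columns, with ProductID as
# the foreign key into DimProduct.
fact_sales = flat.select("TransactionID", "Date", "ProductID", "SalesAmount")

dim_product.write.mode("overwrite").saveAsTable("DimProduct")
fact_sales.write.mode("overwrite").saveAsTable("FactSales")
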
Question # 11

You need to populate the MAR1 data in the bronze layer.

Which two types of activities should you include in the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options:

A. ForEach

B. Copy data

C. WebHook

D. Stored procedure

Question # 12

You need to create the product dimension.

How should you complete the Apache Spark SQL code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

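The Spark SQL fragment and its answer choices for this item are not reproduced above. As general background only, a product dimension in a lakehouse is commonly built with CREATE TABLE ... AS SELECT over the deduplicated product attributes, often with a surrogate key added; every name below is an assumption, not the exam's hidden code:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical CTAS pattern: deduplicate the product attributes
# and derive a surrogate key for the dimension.
spark.sql("""
    CREATE TABLE IF NOT EXISTS DimProduct AS
    SELECT
        ROW_NUMBER() OVER (ORDER BY ProductID) AS ProductKey,
        ProductID,
        ProductName,
        ProductColor
    FROM (SELECT DISTINCT ProductID, ProductName, ProductColor FROM SalesFlat) AS p
""")
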
Question # 13

You need to ensure that usage of the data in the Amazon S3 bucket meets the technical requirements.

What should you do?

Options:

A. Create a workspace identity and enable high concurrency for the notebooks.

B. Create a shortcut and ensure that caching is disabled for the workspace.

C. Create a workspace identity and use the identity in a data pipeline.

D. Create a shortcut and ensure that caching is enabled for the workspace.

Exam Code: DP-700
Exam Name: Implementing Data Engineering Solutions Using Microsoft Fabric
Last Update: Jun 14, 2025
Questions: 104