
ML Data Scientist Databricks-Machine-Learning-Professional Syllabus: Exam Questions and Answers

Page: 2 / 4
Question 8

Which of the following statements describes streaming with Spark as a model deployment strategy?

Options:

A.

The inference of batch-processed records as soon as a trigger is hit

B.

The inference of all types of records in real-time

C.

The inference of batch-processed records as soon as a Spark job is run

D.

The inference of incrementally processed records as soon as a trigger is hit

E.

The inference of incrementally processed records as soon as a Spark job is run
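To see why "incrementally processed records scored when a trigger fires" describes streaming inference, here is a toy, plain-Python analogy (not the Spark Structured Streaming API): each trigger scores only the records that arrived since the previous trigger, rather than re-scoring the whole dataset.

```python
# Toy illustration (plain Python, not Spark): streaming inference processes
# records incrementally, scoring only new records each time a trigger fires.

def model_predict(x):
    # Stand-in for a trained model; a real pipeline would load one from MLflow.
    return x * 2

class MicroBatchScorer:
    def __init__(self):
        self.offset = 0  # how far into the source we have already processed

    def on_trigger(self, source):
        """Score only the records that arrived since the last trigger."""
        new_records = source[self.offset:]
        self.offset = len(source)
        return [model_predict(x) for x in new_records]

stream = [1, 2, 3]
scorer = MicroBatchScorer()
first = scorer.on_trigger(stream)    # scores 1, 2, 3
stream += [4, 5]
second = scorer.on_trigger(stream)   # scores only the newly arrived 4, 5
```

The key contrast with the batch options: a Spark batch job scores everything each run, while the trigger-driven incremental pattern above is what Structured Streaming provides.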

Question 9

Which of the following is a probable response to identifying drift in a machine learning application?

Options:

A.

None of these responses

B.

Retraining and deploying a model on more recent data

C.

All of these responses

D.

Rebuilding the machine learning application with a new label variable

E.

Sunsetting the machine learning application

Question 10

Which of the following describes the concept of MLflow Model flavors?

Options:

A.

A convention that deployment tools can use to wrap preprocessing logic into a Model

B.

A convention that MLflow Model Registry can use to version models

C.

A convention that MLflow Experiments can use to organize their Runs by project

D.

A convention that deployment tools can use to understand the model

E.

A convention that MLflow Model Registry can use to organize its Models by project
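Flavors are recorded in the MLmodel file that MLflow writes alongside each logged model; deployment tools read this file to understand how to load and serve the model. The fragment below is illustrative (the exact fields and versions vary by model):

```yaml
# Illustrative MLmodel file: each top-level key under "flavors" is one flavor.
# Deployment tools pick whichever flavor they know how to consume.
flavors:
  python_function:            # generic flavor: load and predict via pyfunc
    loader_module: mlflow.sklearn
    python_version: 3.10.6
  sklearn:                    # framework-specific flavor
    pickled_model: model.pkl
    sklearn_version: 1.2.2
```

A serving tool that only understands `python_function` can still deploy this model, which is the point of the convention.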

Question 11

A machine learning engineer has developed a model and registered it using the FeatureStoreClient fs. The model has model URI model_uri. The engineer now needs to perform batch inference on the customer-level Spark DataFrame spark_df, but it is missing a few of the static features that were used when training the model. The customer_id column is the primary key of both spark_df and the training set used when training and logging the model.

Which of the following code blocks can be used to compute predictions for spark_df when the missing feature values can be found in the Feature Store by searching for features by customer_id?

Options:

A.

df = fs.get_missing_features(spark_df, model_uri)

fs.score_model(model_uri, df)

B.

fs.score_model(model_uri, spark_df)

C.

df = fs.get_missing_features(spark_df, model_uri)

fs.score_batch(model_uri, df)

D.

df = fs.get_missing_features(spark_df)

fs.score_batch(model_uri, df)

E.

fs.score_batch(model_uri, spark_df)
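The behavior being tested here is that the Feature Store scoring call itself looks up any missing features by the primary key before running the model; no separate lookup step is needed. The sketch below is a toy, plain-Python analogy of that join-then-score behavior (not the Databricks Feature Store API):

```python
# Toy analogy (plain Python, not the Databricks Feature Store API):
# batch scoring joins missing features from the feature table on the
# primary key (customer_id), then runs the model on the completed rows.

feature_table = {  # feature table keyed by customer_id (the primary key)
    101: {"avg_spend": 50.0},
    102: {"avg_spend": 80.0},
}

def model_predict(row):
    # Stand-in model: needs both the passed-in and the looked-up features.
    return row["n_visits"] + row["avg_spend"]

def score_batch(df, feature_table):
    """Fill in missing features by primary key, then predict each row."""
    preds = []
    for row in df:
        full_row = {**feature_table[row["customer_id"]], **row}
        preds.append(model_predict(full_row))
    return preds

spark_df = [  # input rows are missing the static avg_spend feature
    {"customer_id": 101, "n_visits": 3},
    {"customer_id": 102, "n_visits": 1},
]
preds = score_batch(spark_df, feature_table)  # [53.0, 81.0]
```

Because the lookup happens inside the scoring call, passing the original DataFrame directly is sufficient.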

Exam Name: Databricks Certified Machine Learning Professional
Last Update: May 17, 2024
Questions: 60