You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) Data Science to create a
model and need some additional Python libraries for processing genome sequencing data. Which of
the following THREE statements are correct with respect to installing additional Python libraries to
process the data?
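A side-effect-free sketch of the key point behind this question: in a notebook session you install extra libraries with pip into the session's conda environment as a normal user (there is no root/sudo access, and installs do not persist unless the environment is published). The package name "biopython" is only an example; the command is built rather than executed so the sketch stays runnable anywhere.

```python
import sys

def pip_install_command(package):
    """Return the pip invocation you would run in a notebook terminal.

    Installs go into the active conda environment as the notebook user;
    they are not system-wide and do not survive unless the environment
    is published back to Object Storage.
    """
    return [sys.executable, "-m", "pip", "install", package]

cmd = pip_install_command("biopython")  # example genome-sequencing library
```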
You want to write a Python script to create a collection of different projects for your data science team. Which Oracle Cloud Infrastructure (OCI) Data Science Interface would you use?
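A minimal sketch of the scripted approach this question points at. The real entry points are `oci.data_science.DataScienceClient` and its `create_project` call taking a `CreateProjectDetails` payload; the in-memory `FakeDataScienceClient`, the compartment OCID, and the project names below are hypothetical stand-ins so the loop can run without OCI credentials.

```python
class FakeDataScienceClient:
    """Hypothetical stand-in for oci.data_science.DataScienceClient."""
    def __init__(self):
        self.created = []

    def create_project(self, details):
        # The real client returns a Response whose .data is the new Project.
        self.created.append(details)
        return details

def create_team_projects(client, compartment_id, names):
    """Create one project per name and return the request payloads."""
    payloads = []
    for name in names:
        # With the real SDK this dict would be a CreateProjectDetails object.
        details = {"compartment_id": compartment_id, "display_name": name}
        payloads.append(client.create_project(details))
    return payloads

client = FakeDataScienceClient()
projects = create_team_projects(client, "ocid1.compartment.oc1..example",
                                ["fraud-detection", "churn-model"])
```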
You are a data scientist designing an air traffic control model, and you choose to leverage Oracle
AutoML. You understand that the Oracle AutoML pipeline consists of multiple stages and
automatically operates in a certain sequence. What is the correct sequence for the Oracle AutoML
pipeline?
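A small sketch encoding the stage ordering this question probes; the ADS documentation describes the Oracle AutoML pipeline as running these four stages in sequence after preprocessing.

```python
AUTOML_STAGES = [
    "Algorithm Selection",    # pick the best candidate algorithm for the data
    "Adaptive Sampling",      # choose a sample size that preserves signal
    "Feature Selection",      # drop features that do not help the chosen algo
    "Hyperparameter Tuning",  # search parameters for the reduced feature set
]

def next_stage(current):
    """Return the stage that follows `current`, or None at the end."""
    i = AUTOML_STAGES.index(current)
    return AUTOML_STAGES[i + 1] if i + 1 < len(AUTOML_STAGES) else None
```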
Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from
reference libraries and index websites such as scikit-learn?
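A sketch assuming the ADS `DatasetBrowser` class (`ads.dataset.dataset_browser`), which exposes curated sources such as `DatasetBrowser.sklearn()` for scikit-learn's bundled datasets. The import is guarded because ADS is normally present only inside OCI notebook sessions.

```python
try:
    from ads.dataset.dataset_browser import DatasetBrowser
except ImportError:
    DatasetBrowser = None  # running outside an ADS environment

def open_sklearn_dataset(name):
    """Open a scikit-learn reference dataset through ADS, if available."""
    if DatasetBrowser is None:
        return None
    # DatasetBrowser.sklearn() browses scikit-learn's bundled datasets;
    # .open(name) loads one of them as an ADS dataset object.
    return DatasetBrowser.sklearn().open(name)

dataset = open_sklearn_dataset("iris")
```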
You realize that your model deployment is about to reach its utilization limit. What would you do to avoid the issue before requests start to fail?
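A sketch of the proactive fix this question is after: update the running deployment in place to scale it out before requests fail. The real call is the Data Science service's update-model-deployment operation; the `DeploymentConfig` dataclass and `scale_out` helper below are hypothetical stand-ins illustrating the shape of such an update.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DeploymentConfig:
    """Hypothetical stand-in for a model deployment's configuration."""
    instance_count: int
    load_balancer_bandwidth_mbps: int

def scale_out(cfg, extra_instances=1):
    """Return an updated config with more instances serving predictions."""
    return replace(cfg, instance_count=cfg.instance_count + extra_instances)

cfg = DeploymentConfig(instance_count=2, load_balancer_bandwidth_mbps=10)
scaled = scale_out(cfg)  # apply via the service's update operation
```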
You want to build a multistep machine learning workflow by using the Oracle Cloud
Infrastructure (OCI) Data Science Pipeline feature. How would you configure the conda environment
to run a pipeline step?
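A configuration sketch, assuming pipeline steps reuse the Data Science Jobs conda mechanism via environment variables set on the step. The slug below is an example service conda environment; substitute your own, or use `CONDA_ENV_TYPE=published` together with `CONDA_ENV_BUCKET`, `CONDA_ENV_NAMESPACE`, and `CONDA_ENV_OBJECT_NAME` for a published environment.

```shell
# Example step environment variables selecting a service conda environment.
export CONDA_ENV_TYPE=service
export CONDA_ENV_SLUG=generalml_p38_cpu_v1
```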
The Oracle AutoML pipeline automates hyperparameter tuning by training the model with different parameters in parallel. You have created an instance of Oracle AutoML as oracle_automl and now you want an output with all the different trials performed by Oracle AutoML. Which command gives you the results of all the trials?
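For illustration: on a fitted ADS AutoML instance, `print_trials()` is the call that tabulates every trial attempted. The `FauxAutoML` class below is a hypothetical stand-in (with made-up trial data) so the call shape can be shown outside an OCI notebook.

```python
class FauxAutoML:
    """Hypothetical stand-in for a fitted ads.automl AutoML instance."""
    def __init__(self, trials):
        self.trials = trials

    def print_trials(self):
        # Mirrors the real method's purpose: one row per trial attempted.
        for rank, trial in enumerate(self.trials, start=1):
            print(f"{rank}: {trial}")

oracle_automl = FauxAutoML([{"algo": "LGBMClassifier", "score": 0.91},
                            {"algo": "XGBClassifier", "score": 0.89}])
oracle_automl.print_trials()  # real usage: oracle_automl.print_trials()
```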
The Accelerated Data Science (ADS) model evaluation classes support different types of machine
learning modeling techniques. Which three types of modeling techniques are supported by ADS
Evaluators?
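A tiny sketch encoding the three techniques the ADS documentation lists for its evaluator classes; the helper name below is illustrative, not part of ADS.

```python
# Per the ADS docs, the evaluator classes cover these modeling techniques.
SUPPORTED_TECHNIQUES = [
    "binary classification",
    "multiclass classification",
    "regression",
]

def is_supported(technique):
    """Check whether ADS evaluators cover the given technique."""
    return technique.lower() in SUPPORTED_TECHNIQUES
```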
You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring basis in a production environment. This job will pick up sensitive data from an Object Storage bucket, train a model, and save it to the model catalog. How would you design the authentication mechanism for the job?
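A sketch of the pattern this question targets: a recurring production job should authenticate as itself via a resource principal (backed by a dynamic group and IAM policies) rather than ship user API keys. `oci.auth.signers.get_resource_principals_signer()` is the real SDK entry point; the import is guarded because the SDK and the resource principal token exist only inside OCI.

```python
try:
    import oci
except ImportError:
    oci = None  # OCI SDK not installed locally

def make_signer():
    """Return a resource-principal signer inside a job run, else None."""
    if oci is None:
        return None
    try:
        # Succeeds only when running inside an OCI resource (e.g. a job run)
        # whose dynamic group has been granted access by policy.
        return oci.auth.signers.get_resource_principals_signer()
    except Exception:
        return None  # not running inside an OCI resource

signer = make_signer()
```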
During a job run, you receive an error message that no space is left on your disk device. To solve the problem, you must increase the size of the job storage. What would be the most efficient way to do this with Data Science Jobs?
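A sketch of the idea behind this question: with Data Science Jobs, block storage size is part of the job's infrastructure configuration, so the efficient fix is to update that one value and rerun rather than rebuild anything. ADS exposes this through the fluent `DataScienceJob().with_block_storage_size(...)` builder; the `JobInfra` class below is a hypothetical stand-in mirroring that style.

```python
class JobInfra:
    """Hypothetical stand-in for an ADS DataScienceJob infrastructure builder."""
    def __init__(self):
        self.block_storage_size_gb = 50  # example starting value

    def with_block_storage_size(self, size_gb):
        # Fluent setter, as in ads.jobs.DataScienceJob.with_block_storage_size
        self.block_storage_size_gb = size_gb
        return self

infra = JobInfra().with_block_storage_size(200)
```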