Amazon AWS

AWS Certified Machine Learning Engineer – Associate

MLA-C01

The AWS Certified Machine Learning Engineer – Associate (MLA-C01) exam validates your skills in building, training, and deploying machine learning models on AWS. It is ideal for those looking to specialize in machine learning.


Questions 311–320 of 486

Q311

A company needs to preprocess large datasets for training. What AWS service is best suited for this task?

  • A Amazon Redshift
  • B AWS Batch
  • C AWS Glue
  • D Amazon QuickSight
Explanation AWS Glue provides serverless ETL (extract, transform, load) capabilities well suited to preprocessing large datasets. The other options serve different purposes: Redshift is a data warehouse, AWS Batch runs general-purpose batch compute jobs, and QuickSight is a business-intelligence visualization service.
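As a study aid, here is a hedged sketch of the request parameters you might pass to boto3's `glue.create_job` API to register a Spark-based ETL preprocessing job. The job name, role ARN, and S3 paths are placeholders, not real resources.

```python
# Sketch: parameters for registering a Glue ETL job via boto3's
# glue.create_job API. All names/ARNs below are placeholder assumptions.

def build_glue_job_request(job_name: str, role_arn: str, script_s3_uri: str) -> dict:
    """Assemble a create_job request for a Spark-based (glueetl) job."""
    return {
        "Name": job_name,
        "Role": role_arn,                     # IAM role that Glue assumes
        "Command": {
            "Name": "glueetl",                # Spark-based ETL job type
            "ScriptLocation": script_s3_uri,  # PySpark script stored in S3
            "PythonVersion": "3",
        },
        "GlueVersion": "4.0",
        "WorkerType": "G.1X",
        "NumberOfWorkers": 10,
    }

request = build_glue_job_request(
    "preprocess-training-data",
    "arn:aws:iam::123456789012:role/GlueServiceRole",
    "s3://example-bucket/scripts/preprocess.py",
)
# With credentials configured: boto3.client("glue").create_job(**request)
```

The actual preprocessing logic would live in the PySpark script referenced by `ScriptLocation`.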
Q312

What happens when you set the instance type to a more powerful option in SageMaker?

  • A Lower training accuracy
  • B Increased cost
  • C Reduced model complexity
  • D Faster training times
Explanation A more powerful instance always carries a higher hourly cost. It can also shorten training times, but the speedup depends on the workload and is not guaranteed, and the instance type has no direct effect on model complexity or accuracy.
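The cost trade-off is worth internalizing: total training cost is hourly rate times runtime, so a pricier instance can still come out cheaper if it finishes fast enough. The rates below are made-up placeholders, not real AWS pricing.

```python
# Illustrative only: hourly rates and runtimes are invented placeholders,
# not actual SageMaker pricing. Total cost = rate * hours, so a faster,
# more expensive instance is not automatically more expensive overall.

def training_cost(hourly_rate: float, hours: float) -> float:
    return hourly_rate * hours

small_cost = training_cost(hourly_rate=0.25, hours=8.0)  # slower, cheap per hour
large_cost = training_cost(hourly_rate=1.00, hours=2.5)  # faster, pricey per hour

# In this made-up scenario the large instance trains >3x faster
# but still costs slightly more in total.
```

Benchmarking a short training run on each candidate instance type is the usual way to ground this comparison in real numbers.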
Q313

Which AWS service automates the end-to-end ML workflow?

  • A SageMaker
  • B Lambda
  • C CloudFormation
  • D DynamoDB
Explanation SageMaker covers the end-to-end ML workflow (data preparation, training, tuning, and deployment), and SageMaker Pipelines can automate it; Lambda, CloudFormation, and DynamoDB are general-purpose services, not ML workflow tools.
Q314

A company needs to integrate predictive analytics into their applications. Which service should they use?

  • A Athena
  • B QuickSight
  • C SageMaker
  • D Glue
Explanation SageMaker hosts trained models behind real-time endpoints that applications can invoke, making it the natural choice for embedding predictive analytics; Athena, QuickSight, and Glue are query, visualization, and ETL services respectively.
Q315

You are configuring an IAM policy for accessing an S3 bucket. What should you include for secure access?

  • A Public access settings
  • B IAM role permissions
  • C Resource ARNs only
  • D Encrypted passwords
Explanation Granting access through IAM role permissions, scoped to the bucket's resource ARNs, provides secure, auditable access without long-lived credentials; public access settings and embedded passwords undermine security, and resource ARNs alone are not a complete policy.
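To make this concrete, here is a sketch of a least-privilege IAM policy document granting read-only access to a single S3 bucket. The bucket name is a placeholder; in practice you would attach a policy like this to an IAM role rather than distributing user credentials.

```python
import json

# Sketch: least-privilege read-only S3 policy. The bucket name
# "example-training-data" is a placeholder assumption. Note that
# ListBucket applies to the bucket ARN and GetObject to the objects
# under it, so both ARNs are needed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-training-data",
                "arn:aws:s3:::example-training-data/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching this document to a role (for example, a SageMaker execution role) gives workloads access without any stored secrets.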
Q316

Which service allows for serverless machine learning inference?

  • A AWS Lambda
  • B Amazon S3
  • C Amazon EC2
  • D Amazon RDS
Explanation AWS Lambda runs code in response to events without provisioning servers, making it ideal for serverless inference.
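The shape of a Lambda inference function can be sketched as below. The "model" here is a stand-in linear function purely for illustration; a real handler would load model artifacts (for example, from S3 or a Lambda layer) outside the handler so they are reused across warm invocations.

```python
import json

# Sketch of a serverless inference handler. WEIGHT and BIAS stand in for
# real model artifacts, which would normally be loaded once at module
# scope (cold start) and reused on warm invocations.
WEIGHT, BIAS = 2.0, 0.5  # placeholder "model" parameters

def lambda_handler(event, context):
    """Parse a JSON request body, run the toy model, return a prediction."""
    features = json.loads(event["body"])["x"]
    prediction = WEIGHT * features + BIAS
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```

Behind API Gateway or a Lambda function URL, this pattern gives pay-per-request inference with no servers to manage, best suited to lightweight models that fit Lambda's memory and cold-start constraints.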
Q317

A company needs to deploy a machine learning model that must scale with unpredictable traffic. Which AWS service is best suited for this requirement?

  • A AWS Batch
  • B Amazon SageMaker
  • C Amazon Kinesis
  • D AWS Snowball
Explanation SageMaker endpoints support automatic scaling, adding or removing instances in response to traffic, which suits unpredictable load; AWS Batch, Kinesis, and Snowball address batch compute, streaming ingestion, and offline data transfer instead.
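Endpoint scaling is configured through Application Auto Scaling. The sketch below assembles the settings for a target-tracking policy on an endpoint variant; the endpoint name, variant, and capacity limits are placeholders, and real calls would go through boto3's `application-autoscaling` client.

```python
# Sketch: Application Auto Scaling settings for a SageMaker endpoint
# variant. Names and capacity values are placeholder assumptions.

def build_scaling_config(endpoint_name: str, variant: str,
                         min_instances: int, max_instances: int) -> dict:
    resource_id = f"endpoint/{endpoint_name}/variant/{variant}"
    dimension = "sagemaker:variant:DesiredInstanceCount"
    return {
        # Passed to register_scalable_target(...)
        "target": {
            "ServiceNamespace": "sagemaker",
            "ResourceId": resource_id,
            "ScalableDimension": dimension,
            "MinCapacity": min_instances,
            "MaxCapacity": max_instances,
        },
        # Passed to put_scaling_policy(...)
        "policy": {
            "PolicyName": "invocations-target-tracking",
            "ServiceNamespace": "sagemaker",
            "ResourceId": resource_id,
            "ScalableDimension": dimension,
            "PolicyType": "TargetTrackingScaling",
            "TargetTrackingScalingPolicyConfiguration": {
                "TargetValue": 100.0,  # invocations per instance (assumption)
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
                },
            },
        },
    }

config = build_scaling_config("prod-endpoint", "AllTraffic", 1, 8)
```

With this in place the endpoint grows toward `MaxCapacity` as invocations per instance rise and shrinks back during quiet periods.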
Q318

What happens when an Amazon SageMaker model is deployed with a Multi-Model endpoint?

  • A Only one model is active.
  • B Multiple models can be invoked.
  • C No inference can occur.
  • D Models are always loaded in memory.
Explanation A multi-model endpoint serves many models from a single endpoint, loading them into memory on demand and evicting rarely used ones, which improves resource utilization; models are not all kept resident in memory at once.
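Invoking a multi-model endpoint differs from a regular one only in that the request names which model artifact to use. The sketch below builds the request parameters for `invoke_endpoint` with the `TargetModel` field; the endpoint and model names are placeholders.

```python
import json

# Sketch: request parameters for invoking one model on a SageMaker
# multi-model endpoint. TargetModel selects the artifact (relative to the
# endpoint's S3 model prefix) to load and run. Names are placeholders.

def build_invoke_request(endpoint_name: str, target_model: str, payload: dict) -> dict:
    return {
        "EndpointName": endpoint_name,
        "TargetModel": target_model,        # e.g. "churn-model.tar.gz"
        "ContentType": "application/json",
        "Body": json.dumps(payload),
    }

request = build_invoke_request("mme-endpoint", "churn-model.tar.gz", {"x": [1, 2, 3]})
# With credentials configured:
#   boto3.client("sagemaker-runtime").invoke_endpoint(**request)
```

Because loading is on demand, the first request to a cold model incurs extra latency while its artifact is fetched and loaded.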
Q319

Which AWS service is primarily used for deploying machine learning models?

  • A Amazon SageMaker
  • B Amazon RDS
  • C Amazon S3
  • D AWS Lambda
Explanation Amazon SageMaker is purpose-built for hosting and deploying ML models; RDS is a relational database, S3 is object storage, and Lambda, while usable for lightweight inference, is not a dedicated ML deployment service.
Q320

A company needs to run an ML operation that scales rapidly with unpredictable traffic. Which architecture pattern should they prioritize?

  • A Monolithic Architecture
  • B Serverless Architecture
  • C On-Premise Infrastructure
  • D Dedicated Server Clusters
Explanation Serverless architecture scales automatically with demand and requires no up-front capacity planning; monolithic designs, on-premise infrastructure, and dedicated server clusters all scale less flexibly under unpredictable traffic.