
AWS Certified Machine Learning Specialty Questions 2022 - Part 4

Mary Smith

Thu, 16 Apr 2026

1. Which of the following is true about AWS IoT Greengrass?

A) AWS IoT Greengrass does not support containerization of applications.
B) AWS IoT Greengrass only supports MQTT as the messaging protocol for communication between devices and the cloud.
C) AWS IoT Greengrass requires a constant internet connection for devices to operate.
D) AWS IoT Greengrass enables local execution of AWS Lambda functions, AWS IoT Device Shadow, and AWS IoT Device Gateway on IoT devices even when they are not connected to the cloud.



2. A company wants to run large-scale, batch computing workloads using AWS Batch. They want to ensure that their jobs are optimized for cost and performance. Which of the following options would best achieve their goals?

A) Use AWS Elastic Beanstalk to deploy the batch computing workloads. Configure the environment to use On-Demand Instances, and set up auto-scaling to add more instances when the workload increases.
B) Use AWS CloudFormation to create a custom compute environment with a fixed number of EC2 instances. Use On-Demand Instances for the compute environment, and configure the job definitions to use Spot Instances for the actual job execution.
C) Use AWS Batch managed compute environments to automatically provision and manage compute resources for the jobs. Use Spot Instances for both the compute environment and the actual job execution.
D) Use AWS Batch managed compute environments to automatically provision and manage compute resources for the jobs. Use On-Demand Instances for the compute environment, and configure the job definitions to use Spot Instances for the actual job execution.



3. Which of the following is not a feature of Amazon Fraud Detector?

A) Providing a real-time risk score for fraud detection.
B) Automatically ingesting data from various sources for analysis.
C) Providing a platform for virtual reality development.
D) Providing automated feedback for model improvement.
E) Allowing custom models to be integrated into the detection process.


4. What is the most appropriate AWS service for deploying and scaling a real-time machine learning model that is built using TensorFlow and requires low-latency inference, and why?

A) Amazon SageMaker because it offers a fully-managed service for building, training, and deploying machine learning models at scale, with support for TensorFlow and low-latency inference using the SageMaker Neo runtime.
B) AWS Elastic Beanstalk because it allows for easy deployment and management of web applications, including machine learning models, with automatic scaling and load balancing.
C) AWS Lambda because it enables the deployment of code without the need to provision or manage servers, and supports real-time inference using TensorFlow.
D) AWS Batch because it enables the processing of batch computing workloads, including machine learning model inference, with automatic scaling and resource optimization.
E) AWS Fargate because it provides on-demand, containerized compute capacity for running machine learning models, and can be automatically scaled to meet changing demand.


5. A company has millions of sensor data records that need to be preprocessed before training a machine learning model. Which AWS service can be used to preprocess this data?

A) Amazon Kinesis Data Firehose
B) AWS Lambda
C) AWS Glue
D) Amazon SageMaker
E) Amazon QuickSight


1. Right Answer: D
Explanation: AWS IoT Greengrass extends AWS to edge devices, enabling local execution of AWS Lambda functions, device shadows, and local messaging even when devices are disconnected from the cloud. It supports containerized applications and multiple messaging protocols, and it does not require a constant internet connection, which rules out options A, B, and C.
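As a concrete illustration, the handler below is the kind of Lambda function Greengrass can run locally on an edge device. The function name, event fields, and threshold are hypothetical, not from the question; the point is that the decision happens on the device with no cloud round trip.

```python
import json

def lambda_handler(event, context):
    """Evaluate a local sensor reading and return an alert payload.

    On a Greengrass core device this logic runs locally, so it keeps
    working even while the device is offline.
    """
    temperature = event.get("temperature", 0)
    # Decide locally; no connection to the cloud is required.
    status = "ALERT" if temperature > 80 else "OK"
    return {"status": status, "payload": json.dumps(event)}
```

For example, `lambda_handler({"temperature": 92}, None)` yields a payload with `"status": "ALERT"`.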

2. Right Answer: D
Explanation: Managed compute environments let AWS Batch provision, scale, and manage EC2 capacity automatically, so no fixed fleet (option B) or separate deployment service (option A) is needed. Running jobs on Spot Instances sharply reduces cost for interruptible batch workloads, while keeping On-Demand capacity available protects jobs from being starved when Spot capacity is reclaimed.
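A minimal sketch of the parameters such a managed compute environment might use, shaped like the request `boto3`'s `batch.create_compute_environment` accepts. All names, subnets, and ARNs below are placeholders; note that in AWS Batch the Spot-versus-On-Demand choice is actually configured on the compute environment (via `computeResources["type"]`), with the job queue deciding which environments jobs land on.

```python
# Placeholder request parameters for a MANAGED AWS Batch compute
# environment backed by On-Demand EC2 capacity. A second environment
# with computeResources["type"] = "SPOT" could be attached to the same
# job queue so that jobs prefer cheaper Spot capacity when available.
compute_environment = {
    "computeEnvironmentName": "batch-managed-ce",  # hypothetical name
    "type": "MANAGED",                             # AWS Batch provisions the instances
    "computeResources": {
        "type": "EC2",                             # On-Demand baseline capacity
        "minvCpus": 0,                             # scale to zero when idle
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],              # let Batch pick instance sizes
        "subnets": ["subnet-EXAMPLE"],             # placeholder
        "securityGroupIds": ["sg-EXAMPLE"],        # placeholder
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
}
```

This dictionary would be passed as keyword arguments to `batch.create_compute_environment(**compute_environment)` in a real account.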

3. Right Answer: C
Explanation: Amazon Fraud Detector provides real-time fraud risk scores, ingests data from various sources for analysis, supports integrating custom models, and offers feedback mechanisms for model improvement. It has nothing to do with virtual reality development.

4. Right Answer: A
Explanation: Amazon SageMaker is a fully managed service for building, training, and deploying machine learning models at scale. It natively supports TensorFlow and serves real-time, low-latency inference through managed endpoints; SageMaker Neo can additionally compile models for optimized inference on target hardware. Lambda, Batch, Elastic Beanstalk, and Fargate can host models but lack SageMaker's purpose-built tooling for low-latency ML inference.
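To make the deployment path concrete, here is a sketch of the configuration that would drive a real-time TensorFlow endpoint via the SageMaker Python SDK. It is not runnable against AWS without credentials, and the S3 path, role ARN, and instance type are placeholders chosen for illustration.

```python
# Placeholder configuration for deploying a TensorFlow model to a
# real-time SageMaker endpoint. All values are hypothetical.
deploy_config = {
    "model_data": "s3://my-bucket/model/model.tar.gz",        # placeholder artifact
    "role": "arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder IAM role
    "framework_version": "2.12",
    "initial_instance_count": 1,
    "instance_type": "ml.c5.xlarge",  # choose per latency/throughput needs
}

# With the sagemaker SDK, this configuration would drive roughly:
#   from sagemaker.tensorflow import TensorFlowModel
#   model = TensorFlowModel(model_data=deploy_config["model_data"],
#                           role=deploy_config["role"],
#                           framework_version=deploy_config["framework_version"])
#   predictor = model.deploy(
#       initial_instance_count=deploy_config["initial_instance_count"],
#       instance_type=deploy_config["instance_type"])
#   predictor.predict({"instances": [[1.0, 2.0]]})
```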

5. Right Answer: C
Explanation: AWS Glue is a serverless ETL (extract, transform, load) service well suited to cleaning, transforming, and preparing millions of records before model training. Kinesis Data Firehose delivers streaming data, Lambda is limited by execution time and memory for bulk preprocessing, SageMaker focuses on model building and deployment, and QuickSight is a visualization tool.
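An illustrative preprocessing step of the kind a Glue ETL job might apply to raw sensor records. In a real job this logic would run as PySpark through the `awsglue` libraries; it is shown here as plain Python for clarity, and the field names and unit conversion are hypothetical.

```python
def preprocess(records):
    """Drop malformed sensor readings and normalize temperature to Celsius.

    Mirrors a typical Glue transform: filter out bad rows, then map
    each surviving row to a cleaned schema ready for model training.
    """
    cleaned = []
    for rec in records:
        if rec.get("temp_f") is None:  # skip records missing the reading
            continue
        cleaned.append({
            "sensor_id": rec["sensor_id"],
            "temp_c": round((rec["temp_f"] - 32) * 5 / 9, 2),
        })
    return cleaned
```

For example, `preprocess([{"sensor_id": "a1", "temp_f": 212}, {"sensor_id": "a2", "temp_f": None}])` keeps only the first record, converted to `100.0` degrees Celsius.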
