
AWS Certified Big Data - Specialty Certification - Part 33

Mary Smith

Sun, 19 Apr 2026


1. Your company currently runs an order processing system in AWS. There are EC2 instances in place to take orders from the application and EC2 instances in an Auto Scaling group to process the orders. Which of the following additional components can be used to ensure that the processing EC2 instances scale correctly according to demand?

A) Use CloudWatch to monitor the capacity of the processing servers. Then use SNS notifications to scale the servers out.
B) Use CloudWatch to monitor the capacity of the processing servers, and then scale capacity accordingly.
C) Use an SQS queue to decouple the architecture. Scale the processing servers based on notifications sent by the SQS queue.
D) None
E) Use an SQS queue to decouple the architecture. Scale the processing servers based on the length of the queue.


2. Which of the following tools is used by Spark Streaming to consume data from Amazon Kinesis?

A) Amazon Kinesis Consumer Library
B) Amazon Kinesis Client Library
C) Amazon Kinesis Producer Library
D) Amazon Kinesis Firehose
E) None


3. Which of the following statements is wrong when it comes to the pull-based and push-based approaches for getting data into Splunk?

A) The push-based approach can be achieved by streaming data directly from AWS.
B) The pull-based approach uses dedicated pollers running in the Splunk Add-on for AWS to pull data from various AWS services such as Amazon CloudWatch or Amazon S3.
C) The pull-based approach has less maintenance overhead compared with the push-based approach.
D) The pull-based approach provides delivery guarantees with retries and checkpointing out of the box.
E) None


4. Which of the following is not a Lambda blueprint for data transformation with Kinesis Firehose?

A) CloudTrail log processing
B) None
C) Apache Log to JSON
D) CloudWatch Logs processing
E) Syslog to JSON


5. Which of the following does AWS Athena use to partition data? Please choose:

A) EMR
B) Hive
C) Logstash
D) None
E) d3.js


1. Right Answer: E
Explanation: An SQS queue decouples the order-taking tier from the order-processing tier, and the length of the queue (the ApproximateNumberOfMessagesVisible CloudWatch metric) is the standard signal for scaling the processing Auto Scaling group. SQS does not push notifications to trigger scaling, which rules out option C.
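As a rough illustration of why queue length works as a scaling signal, here is a minimal sketch of the "backlog per instance" arithmetic behind a queue-length scaling policy. The function and parameter names are hypothetical, not part of any AWS API:

```python
import math

def desired_processing_instances(queue_length, msgs_per_instance,
                                 min_instances=1, max_instances=10):
    """Derive a target Auto Scaling capacity from the SQS queue length.

    queue_length      -- e.g. ApproximateNumberOfMessagesVisible from CloudWatch
    msgs_per_instance -- backlog one processing instance can work through
    """
    if msgs_per_instance <= 0:
        raise ValueError("msgs_per_instance must be positive")
    desired = math.ceil(queue_length / msgs_per_instance)
    # Clamp to the Auto Scaling group's configured min/max size.
    return max(min_instances, min(max_instances, desired))
```

For example, a backlog of 450 messages with instances that each absorb 100 messages yields a target of 5 instances.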

2. Right Answer: B
Explanation: The Kinesis Client Library (KCL) allows you to consume and process data from a Kinesis data stream. This type of application is also referred to as a consumer.

3. Right Answer: C
Explanation: The push-based approach (for example, streaming into Splunk directly from AWS via Kinesis Firehose) requires less maintenance overhead than running dedicated pollers, so statement C has it backwards.

4. Right Answer: A
Explanation: Kinesis Firehose provides Lambda blueprints (for example, Apache Log to JSON, Syslog to JSON, and CloudWatch Logs processing) that can be used as starting points for a Lambda data-transformation function. There is no blueprint for processing CloudTrail logs.

5. Right Answer: B
Explanation: By partitioning your data, you can restrict the amount of data scanned by each query, which improves performance and reduces cost. Athena uses Hive for partitioning data. You can partition your data by any key. A common practice is to partition the data based on time, often leading to a multi-level partitioning scheme. For example, a customer who has data coming in every hour might decide to partition by year, month, day, and hour. Another customer, whose data comes from many different sources but is loaded only once per day, might partition by data source identifier and date.
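The time-based layout described above corresponds to Hive-style key=value prefixes in S3, which Athena can map to partitions. A small sketch of both schemes (bucket and key names are placeholders):

```python
from datetime import datetime

def hive_partition_prefix(base, ts):
    """Build a Hive-style multi-level partition prefix (year/month/day/hour),
    the key=value layout Athena recognizes for partitioned data."""
    return (f"{base}/year={ts.year:04d}/month={ts.month:02d}"
            f"/day={ts.day:02d}/hour={ts.hour:02d}/")

def source_date_prefix(base, source_id, ts):
    """Alternative scheme for a once-a-day feed from many sources:
    partition by data source identifier and date."""
    return f"{base}/source={source_id}/dt={ts:%Y-%m-%d}/"
```

A timestamp of 2026-04-19 07:00 would land under `.../year=2026/month=04/day=19/hour=07/` in the first scheme.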
