AWS S3 Exam Practice Questions:

1. What does Amazon S3 stand for?

    1. Simple Storage Solution.
    2. Storage Storage Storage (triple redundancy Storage).
    3. Storage Server Solution.
    4. Simple Storage Service

[showhide type=”q1″ more_text=”Answer is…” less_text=”Show less…”]
4. Simple Storage Service.  [/showhide]

 

2. What are characteristics of Amazon S3? Choose 2 answers

    1. Objects are directly accessible via a URL
    2. S3 should be used to host a relational database
    3. S3 allows you to store objects of virtually unlimited size
    4. S3 allows you to store virtually unlimited amounts of data
    5. S3 offers Provisioned IOPS

[showhide type=”q2″ more_text=”Answer is…” less_text=”Show less…”]
1. Objects are directly accessible via a URL.  &   4. S3 allows you to store virtually unlimited amounts of data  [/showhide]

 

3. You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

    1. Multiple Amazon EBS volumes with snapshots
    2. A single Amazon Glacier vault
    3. A single Amazon S3 bucket
    4. Multiple instance stores

[showhide type=”q3″ more_text=”Answer is…” less_text=”Show less…”]
3. A single Amazon S3 bucket.  [/showhide]

 

4. A user wants to upload a complete folder to AWS S3 using the S3 Management console. How can the user perform this activity?

    1. Just drag and drop the folder using the flash tool provided by S3
    2. Use the Enable Enhanced Folder option from the S3 console while uploading objects
    3. The user cannot upload the whole folder in one go with the S3 management console
    4. Use the Enable Enhanced Uploader option from the S3 console while uploading objects

[showhide type=”q4″ more_text=”Answer is…” less_text=”Show less…”]
1. Just drag and drop the folder using the flash tool provided by S3.  [/showhide]

 

5. A media company produces new video files on-premises every day with a total size of around 100GB after compression. All files have a size of 1-2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3am and 5am. Current upload takes almost 3 hours, although less than half of the available bandwidth is used. What step(s) would ensure that the file uploads are able to complete in the allotted time window?

    1. Increase your network bandwidth to provide faster throughput to S3
    2. Upload the files in parallel to S3 using multipart upload
    3. Pack all files into a single archive, upload it to S3, then extract the files in AWS
    4. Use AWS Import/Export to transfer the video files

[showhide type=”q5″ more_text=”Answer is…” less_text=”Show less…”]
2. Upload the files in parallel to S3 using multipart upload.  [/showhide]
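
Multipart upload splits each file into parts that upload in parallel, making fuller use of the available bandwidth. A minimal boto3 sketch using the managed transfer layer; the bucket, key, and file names are illustrative placeholders:

```python
# Sketch: parallel multipart upload via boto3's managed transfer layer.
# Bucket, key, and file names below are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # multipart for files over 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # 100 MB parts
    max_concurrency=10,                     # upload up to 10 parts at once
)

s3.upload_file(
    "video-2024-01-01.mp4",         # local file
    "media-nightly-uploads",        # bucket
    "videos/video-2024-01-01.mp4",  # key
    Config=config,
)
```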

 

6. A company is deploying a two-tier, highly available web application to AWS. Which service provides durable storage for static content while utilizing lower overall CPU resources for the web tier?

    1. Amazon EBS volume
    2. Amazon S3
    3. Amazon EC2 instance store
    4. Amazon RDS instance

[showhide type=”q6″ more_text=”Answer is…” less_text=”Show less…”]
2. Amazon S3.  [/showhide]

 

7. You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application?

    1. Enable enhanced networking
    2. Use Amazon S3 multipart upload
    3. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.
    4. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance

[showhide type=”q7″ more_text=”Answer is…” less_text=”Show less…”]
2. Use Amazon S3 multipart upload.  [/showhide]

 

8. When you put objects in Amazon S3, what is the indication that an object was successfully stored?

    1. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.
    2. A success code is inserted into the S3 object metadata.
    3. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
    4. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.

[showhide type=”q8″ more_text=”Answer is…” less_text=”Show less…”]
3. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.  [/showhide]
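
For a single-part PUT that does not use SSE-KMS, the ETag returned alongside the HTTP 200 is the hex MD5 digest of the payload, so the two can be checked together. A short boto3 sketch under that assumption; bucket and key names are placeholders:

```python
# Sketch: confirm a PUT by pairing the HTTP 200 with an MD5/ETag check.
# Valid for single-part uploads without SSE-KMS; names are placeholders.
import hashlib
import boto3

s3 = boto3.client("s3")
data = b"hello, durable world"

resp = s3.put_object(Bucket="my-example-bucket", Key="greeting.txt", Body=data)

assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200
assert resp["ETag"].strip('"') == hashlib.md5(data).hexdigest()
```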

 

9. You have private video content in S3 that you want to serve to subscribed users on the Internet. User IDs, credentials, and subscriptions are stored in an Amazon RDS database. Which configuration will allow you to securely serve private content to your users?

  1. Generate pre-signed URLs for each user as they request access to protected S3 content
  2. Create an IAM user for each subscribed user and assign the GetObject permission to each IAM user
  3. Create an S3 bucket policy that limits access to your private content to only your subscribed users’ credentials
  4. Create a CloudFront Origin Identity user for your subscribed users and assign the GetObject permission to this user

[showhide type=”q9″ more_text=”Answer is…” less_text=”Show less…”]
1. Generate pre-signed URLs for each user as they request access to protected S3 content.  [/showhide]
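
The same pre-signed URL mechanism underlies questions 10 and 13 below. A minimal boto3 sketch with placeholder bucket and key names; the signing credentials come from whatever role or user the application runs as:

```python
# Sketch: issue a time-limited pre-signed GET URL to a subscribed user.
# Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "private-video-content", "Key": "episodes/ep01.mp4"},
    ExpiresIn=3600,  # link expires after one hour
)
print(url)  # hand this URL only to an authenticated subscriber
```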

 

10. You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing a loss to your business. What is an effective method to mitigate this?

  1. Remove public read access and use signed URLs with expiry dates.
  2. Use CloudFront distributions for static content.
  3. Block the IPs of the offending websites in Security Groups.
  4. Store photos on an EBS volume of the web server.

[showhide type=”q10″ more_text=”Answer is…” less_text=”Show less…”]
1. Remove public read access and use signed URLs with expiry dates.  [/showhide]



11. You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?

  1. Use multi-part upload.
  2. Add a random prefix to the key names.
  3. Amazon S3 will automatically manage performance at this scale.
  4. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names

[showhide type=”q11″ more_text=”Answer is…” less_text=”Show less…”]
2. Add a random prefix to the key names.  [/showhide]
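
This answer reflects the exam-era guidance, when S3 partitioned its index by key prefix; AWS has since raised per-prefix throughput so random prefixes are no longer required. A sketch of the old technique, with a placeholder naming scheme:

```python
# Sketch: spread keys across index partitions by prepending a short hash
# (exam-era guidance; modern S3 no longer needs this).
import hashlib

def partitioned_key(filename: str) -> str:
    prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f"{prefix}/{filename}"

print(partitioned_key("2015-03-01-photo-0001.jpg"))
# e.g. "a59c/2015-03-01-photo-0001.jpg"
```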

 

12. What is the maximum number of S3 buckets available per AWS Account?

  1. 100 Per region
  2. There is no Limit
  3. 100 Per Account
  4. 500 Per Account
  5. 100 Per IAM User

[showhide type=”q12″ more_text=”Answer is…” less_text=”Show less…”]
3. 100 Per Account.  [/showhide]

 

13. Your customer needs to create an application to allow contractors to upload videos to Amazon Simple Storage Service (S3) so they can be transcoded into a different format. She creates AWS Identity and Access Management (IAM) users for her application developers, and in just one week, they have the application hosted on a fleet of Amazon Elastic Compute Cloud (EC2) instances. The attached IAM role, with the policy below, is assigned to the instances. As expected, a contractor who authenticates to the application is given a pre-signed URL that points to the location for video upload. However, contractors are reporting that they cannot upload their videos. Which of the following are valid reasons for this behavior? Choose 2 answers

    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Action": "s3:*", "Resource": "*" }
      ]
    }

  1. The IAM role does not explicitly grant permission to upload the object.
  2. The contractors’ accounts have not been granted “write” access to the S3 bucket.
  3. The application is not using valid security credentials to generate the pre-signed URL.
  4. The developers do not have access to upload objects to the S3 bucket.
  5. The S3 bucket still has the associated default permissions.
  6. The pre-signed URL has expired.

[showhide type=”q13″ more_text=”Answer is…” less_text=”Show less…”]
3. The application is not using valid security credentials to generate the pre-signed URL.  &   6. The pre-signed URL has expired.  [/showhide]

 

14. Which of the following are valid statements about Amazon S3? Choose 2 answers

  1. S3 provides read-after-write consistency for any type of PUT or DELETE.
  2. Consistency is not guaranteed for any type of PUT or DELETE.
  3. A successful response to a PUT request only occurs when a complete object is saved
  4. Partially saved objects are immediately readable with a GET after an overwrite PUT.
  5. S3 provides eventual consistency for overwrite PUTS and DELETES

[showhide type=”q14″ more_text=”Answer is…” less_text=”Show less…”]
3. A successful response to a PUT request only occurs when a complete object is saved.

5. S3 provides eventual consistency for overwrite PUTS and DELETES

[/showhide]

 

15. A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customer’s objects replicated?

  1. Single facility in eu-west-1 and a single facility in eu-central-1
  2. Single facility in eu-west-1 and a single facility in us-east-1
  3. Multiple facilities in eu-west-1
  4. A single facility in eu-west-1

[showhide type=”q15″ more_text=”Answer is…” less_text=”Show less…”]
3. Multiple facilities in eu-west-1.  [/showhide]

 

16. A user has an S3 object in the US Standard region with the content “color=red”. The user updates the object with the content “color=white”. If the user tries to read the value 1 minute after it was uploaded, what will S3 return?

  1. It will return “color=white”
  2. It will return “color=red”
  3. It will return an error saying that the object was not found
  4. It may return either “color=red” or “color=white”, i.e. either value

[showhide type=”q16″ more_text=”Answer is…” less_text=”Show less…”]
4. It may return either “color=red” or “color=white”, i.e. either value.  [/showhide]

 

17. What does RRS stand for when talking about S3?

    1. Redundancy Removal System
    2. Relational Rights Storage
    3. Regional Rights Standard
    4. Reduced Redundancy Storage

[showhide type=”q17″ more_text=”Answer is…” less_text=”Show less…”]
4. Reduced Redundancy Storage.  [/showhide]

 

18. What is the durability of S3 RRS?

    1. 99.99%
    2. 99.95%
    3. 99.995%
    4. 99.999999999%

[showhide type=”q18″ more_text=”Answer is…” less_text=”Show less…”]
1. 99.99%.  [/showhide]

 

19. What is the Reduced Redundancy option in Amazon S3?

    1. Less redundancy for a lower cost
    2. It doesn’t exist in Amazon S3, but in Amazon EBS.
    3. It allows you to destroy any copy of your files outside a specific jurisdiction.
    4. It doesn’t exist at all

[showhide type=”q19″ more_text=”Answer is…” less_text=”Show less…”]
1. Less redundancy for a lower cost.  [/showhide]

 

20. An application is generating a log file every 5 minutes. The log file is not critical but may be required only for verification in case of some major issue. The file should be accessible over the internet whenever required. Which of the below mentioned options is the best possible storage solution for it?

    1. AWS S3
    2. AWS Glacier
    3. AWS RDS
    4. AWS S3 RRS

[showhide type=”q20″ more_text=”Answer is…” less_text=”Show less…”]
4. AWS S3 RRS.  [/showhide]




21. A user has moved an object to Glacier using lifecycle rules. The user requests to restore the archive after 6 months. When the restore request is completed, the user accesses that archive. Which of the below mentioned statements is not true in this condition?

  1. The archive will be available as an object for the duration specified by the user during the restoration request
  2. The restored object’s storage class will be RRS
  3. The user can modify the restoration period only by issuing a new restore request with the updated period
  4. The user needs to pay storage for both RRS (restored) and Glacier (archive) rates

[showhide type=”q21″ more_text=”Answer is…” less_text=”Show less…”]
2. The restored object’s storage class will be RRS  [/showhide]
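
Such a restore request can be issued with boto3; the temporary copy is billed at RRS rates alongside the Glacier archive for the days requested. A sketch with placeholder names and window:

```python
# Sketch: restore a lifecycle-archived object from Glacier for 7 days.
# Bucket, key, and day count are placeholders.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="archive-bucket",
    Key="logs/2014/archive.log",
    RestoreRequest={"Days": 7},  # how long the restored copy stays readable
)
# To change the window, issue a new restore request with an updated Days value.
```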

 

22. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?

    1. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
    2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift
    3. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift
    4. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.

[showhide type=”q22″ more_text=”Answer is…” less_text=”Show less…”]
2. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot instances and Reserved Instances for Amazon EMR jobs. Use Reserved instances for Amazon Redshift.  [/showhide]

 

23. Which of the below mentioned options can be a good use case for storing content in AWS RRS?

    1. Storing mission critical data Files
    2. Storing infrequently used log files
    3. Storing a video file which is not reproducible
    4. Storing image thumbnails

[showhide type=”q23″ more_text=”Answer is…” less_text=”Show less…”]
4. Storing image thumbnails.  [/showhide]

 

24. A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS with a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?

    1. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 Instances and configure with auto-scaling and an Elastic Load Balancer.
    2. Model the environment using CloudFormation. Use an EC2 instance running Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the JPEGs and search index.
    3. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
    4. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images use an EC2 instance to serve the website and translate user queries into SQL.
    5. Use a CloudFront download distribution to serve the JPEGs to the end users and Install the current commercial search product, along with a Java Container for the website on EC2 instances and use Route53 with DNS round-robin.

[showhide type=”q24″ more_text=”Answer is…” less_text=”Show less…”]
3. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.  [/showhide]

 

25. A research scientist is planning for the one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize the costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until archived into an Amazon RDS Oracle instance. Which option will help save the most money while meeting requirements?

    1. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.
    2. Optimize by deploying a combination of on-demand, RI and spot-pricing models for the master, core and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier.
    3. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and on-demand for the task nodes.
    4. Deploy on-demand master, core and task nodes and store ingest and output files in Amazon S3 RRS

[showhide type=”q25″ more_text=”Answer is…” less_text=”Show less…”]
1. Store ingest and output files in Amazon S3. Deploy on-demand for the master and core nodes and spot for the task nodes.  [/showhide]

 

26. Which set of Amazon S3 features helps to prevent and recover from accidental data loss?

    1. Object lifecycle and service access logging
    2. Object versioning and Multi-factor authentication
    3. Access controls and server-side encryption
    4. Website hosting and Amazon S3 policies

[showhide type=”q26″ more_text=”Answer is…” less_text=”Show less…”]
2. Object versioning and Multi-factor authentication.  [/showhide]

 

27. You use S3 to store critical data for your company. Several users within your group currently have full permissions to your S3 buckets. You need to come up with a solution that does not impact your users and also protects against the accidental deletion of objects. Which two options will address this issue? Choose 2 answers

    1. Enable versioning on your S3 Buckets
    2. Configure your S3 Buckets with MFA delete
    3. Create a Bucket policy and only allow read only permissions to all users at the bucket level
    4. Enable object life cycle policies and configure the data older than 3 months to be archived in Glacier

[showhide type=”q27″ more_text=”Answer is…” less_text=”Show less…”]
1. Enable versioning on your S3 Buckets. & 2. Configure your S3 Buckets with MFA delete [/showhide]
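
Both safeguards can be applied with one call; keep in mind that MFA Delete can only be enabled by the root account using its MFA device. A boto3 sketch where the MFA serial and token are placeholders:

```python
# Sketch: enable versioning and MFA Delete together on a bucket.
# The MFA argument is "<device serial> <current token>"; values are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="critical-data-bucket",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```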

 

28. To protect S3 data from both accidental deletion and accidental overwriting, you should

  1. enable S3 versioning on the bucket
  2. access S3 data using only signed URLs
  3. disable S3 delete using an IAM bucket policy
  4. enable S3 Reduced Redundancy Storage
  5. enable Multi-Factor Authentication (MFA) protected access

[showhide type=”q28″ more_text=”Answer is…” less_text=”Show less…”]
1. enable S3 versioning on the bucket.  [/showhide]

 
29. A user has not enabled versioning on an S3 bucket. What will be the version ID of the object inside that bucket?

  1. 0
  2. There will be no version attached
  3. Null
  4. Blank

[showhide type=”q29″ more_text=”Answer is…” less_text=”Show less…”]
3. Null.  [/showhide]

 

30. A user is trying to find the state of an S3 bucket with respect to versioning. Which of the below mentioned states will AWS not return when queried?

  1. versioning-enabled
  2. versioning-suspended
  3. unversioned
  4. versioned

[showhide type=”q30″ more_text=”Answer is…” less_text=”Show less…”]
4. Versioned.  [/showhide]
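
For reference, the GetBucketVersioning API reports Enabled or Suspended, and returns no status at all for a bucket that has never had versioning turned on. A boto3 sketch with a placeholder bucket name:

```python
# Sketch: read a bucket's versioning state. A never-versioned bucket
# returns a response with no "Status" key at all.
import boto3

s3 = boto3.client("s3")

resp = s3.get_bucket_versioning(Bucket="some-bucket")
print(resp.get("Status", "unversioned (no Status returned)"))
```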




31. Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?

    1. Use SQS for passing job messages, use Cloud Watch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage
    2. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage
    3. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier
    4. Use SNS to pass job messages use Cloud Watch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 object to Glacier.

[showhide type=”q31″ more_text=”Answer is…” less_text=”Show less…”]
3. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.  [/showhide]

 

32. You have a proprietary data store on-premises that must be backed up daily by dumping the data store contents to a single compressed 50GB file and sending the file to AWS. Your SLAs state that any dump file backed up within the past 7 days can be retrieved within 2 hours. Your compliance department has stated that all data must be held indefinitely. The time required to restore the data store from a backup is approximately 1 hour. Your on-premises network connection is capable of sustaining 1 Gbps to AWS. Which backup method to AWS would be most cost-effective while still meeting all of your requirements?

  1. Send the daily backup files to Glacier immediately after being generated
  2. Transfer the daily backup files to an EBS volume in AWS and take daily snapshots of the volume
  3. Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier
  4. Host the backup files on a Storage Gateway with Gateway-Cached Volumes and take daily snapshots

[showhide type=”q32″ more_text=”Answer is…” less_text=”Show less…”]
3. Transfer the daily backup files to S3 and use appropriate bucket lifecycle policies to send to Glacier.  [/showhide]
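
A lifecycle rule keeps the last week of dumps in S3 for the 2-hour retrieval SLA and transitions older ones to Glacier for indefinite, cheaper retention. A boto3 sketch; bucket name, prefix, and day count are placeholders:

```python
# Sketch: transition backup dumps to Glacier once they age past the SLA window.
# Bucket, prefix, and day values are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="backup-dumps",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "dumps-to-glacier",
            "Filter": {"Prefix": "dumps/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)
```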

 

33. Which features can be used to restrict access to data in S3? Choose 2 answers

    1. Set an S3 ACL on the bucket or the object.
    2. Create a CloudFront distribution for the bucket.
    3. Set an S3 bucket policy.
    4. Enable IAM Identity Federation
    5. Use S3 Virtual Hosting

[showhide type=”q33″ more_text=”Answer is…” less_text=”Show less…”]
1. Set an S3 ACL on the bucket or the object.  &

3. Set an S3 bucket policy [/showhide]

 

34. Which method can be used to prevent an IP address block from accessing public objects in an S3 bucket?

    1. Create a bucket policy and apply it to the bucket
    2. Create a NACL and attach it to the VPC of the bucket
    3. Create an ACL and apply it to all objects in the bucket
    4. Modify the IAM policies of any users that would access the bucket

[showhide type=”q34″ more_text=”Answer is…” less_text=”Show less…”]
1. Create a bucket policy and apply it to the bucket.  [/showhide]
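
A deny statement conditioned on aws:SourceIp blocks the offending range without touching object ACLs or IAM. A sketch of such a policy applied with boto3; the bucket name and CIDR block are placeholders:

```python
# Sketch: deny all S3 actions on a bucket for one IP range.
# Bucket name and CIDR are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyBlockedRange",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::public-assets",
            "arn:aws:s3:::public-assets/*",
        ],
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="public-assets",
                                     Policy=json.dumps(policy))
```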

 

35. A user has granted read/write permission on his S3 bucket using an ACL. Which of the below mentioned options is a valid ID to grant permission to other AWS accounts (grantees) using an ACL?

    1. IAM User ID
    2. S3 Secure ID
    3. Access ID
    4. Canonical user ID

[showhide type=”q35″ more_text=”Answer is…” less_text=”Show less…”]
4. Canonical user ID.  [/showhide]

 

36. A root account owner has given full access to his S3 bucket to one of the IAM users using the bucket ACL. When the IAM user logs in to the S3 console, which actions can he perform?

    1. He can just view the content of the bucket
    2. He can do all the operations on the bucket
    3. It is not possible to give access to an IAM user using ACL
    4. The IAM user can perform all operations on the bucket using only API/SDK

[showhide type=”q36″ more_text=”Answer is…” less_text=”Show less…”]
3. It is not possible to give access to an IAM user using ACL.  [/showhide]

 

37. A root AWS account owner is trying to understand the various options to set permissions on AWS S3. Which of the below mentioned options is not a valid way to grant permissions for S3?

    1. User Access Policy
    2. S3 Object Policy
    3. S3 Bucket Policy
    4. S3 ACL

[showhide type=”q37″ more_text=”Answer is…” less_text=”Show less…”]
2. S3 Object Policy.  [/showhide]

 

38. A system admin is managing buckets, objects and folders with AWS S3. Which of the below mentioned statements is true and should be taken into consideration by the sysadmin?

    1. Folders support only ACL
    2. Both the object and bucket can have an Access Policy, but a folder cannot have a policy
    3. Folders can have a policy
    4. Both the object and bucket can have ACL but folders cannot have ACL

[showhide type=”q38″ more_text=”Answer is…” less_text=”Show less…”]
4. Both the object and bucket can have ACL but folders cannot have ACL.  [/showhide]

 

39. A user has created an S3 bucket which is not publicly accessible. The bucket contains thirty objects, which are also private. If the user wants to make the objects public, how can he configure this with minimal effort?

    1. User should select all objects from the console and apply a single policy to mark them public
    2. User can write a program which programmatically makes all objects public using S3 SDK
    3. Set the AWS bucket policy which marks all objects as public
    4. Make the bucket ACL as public so it will also mark all objects as public

[showhide type=”q39″ more_text=”Answer is…” less_text=”Show less…”]
3. Set the AWS bucket policy which marks all objects as public.  [/showhide]
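
One bucket policy granting anonymous s3:GetObject on every key makes all thirty objects public at once, with no per-object work. A boto3 sketch with a placeholder bucket name (on newer accounts, S3 Block Public Access must also allow this):

```python
# Sketch: a bucket policy that makes every object publicly readable.
# Bucket name is a placeholder.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadAll",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-public-bucket/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="my-public-bucket",
                                     Policy=json.dumps(policy))
```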

 

40. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers

    1. Set permissions on the object to public read during upload.
    2. Configure the bucket ACL to set all objects to public read.
    3. Configure the bucket policy to set all objects to public read.
    4. Use AWS Identity and Access Management roles to set the bucket to public read.
    5. Amazon S3 objects default to public read, so no action is needed.

[showhide type=”q40″ more_text=”Answer is…” less_text=”Show less…”]
1. Set permissions on the object to public read during upload &

3. Configure the bucket policy to set all objects to public read. [/showhide]
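
The per-object variant from answer 1 sets a canned ACL at upload time. A boto3 sketch with placeholder names, assuming the bucket still permits public ACLs:

```python
# Sketch: upload an asset with public-read set on the object itself.
# Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="static-assets",
    Key="css/site.css",
    Body=b"body { margin: 0; }",
    ACL="public-read",  # anonymous GETs succeed for this object
)
```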




41. Amazon S3 doesn’t automatically give a user who creates _____ permission to perform other actions on that bucket or object.

    1. a file
    2. a bucket or object
    3. a bucket or file
    4. an object or file

[showhide type=”q41″ more_text=”Answer is…” less_text=”Show less…”]
2. A bucket or object.  [/showhide]

 

42. A root account owner is trying to understand the S3 bucket ACL. Which of the below mentioned options cannot be used to grant ACL permissions on the object using a predefined group?

    1. Authenticated user group
    2. All users group
    3. Log Delivery Group
    4. Canonical user group

[showhide type=”q42″ more_text=”Answer is…” less_text=”Show less…”]
4. Canonical user group.  [/showhide]

 

43. A user is enabling logging on a particular bucket. Which of the below mentioned options may be best suited to allow access to the log bucket?

  1. Create an IAM policy and allow log access
  2. It is not possible to enable logging on the S3 bucket
  3. Create an IAM Role, which has access to the log bucket
  4. Provide ACL for the logging group

[showhide type=”q43″ more_text=”Answer is…” less_text=”Show less…”]
4. Provide ACL for the logging group.  [/showhide]
 
44. A user is trying to configure access with S3. Which of the following options is not possible to provide access to the S3 bucket / object?

  1. Define the policy for the IAM user
  2. Define the ACL for the object
  3. Define the policy for the object
  4. Define the policy for the bucket

[showhide type=”q44″ more_text=”Answer is…” less_text=”Show less…”]
3. Define the policy for the object.  [/showhide]

 

45. A user has access to the objects of an S3 bucket that he does not own. If he wants to make the objects of that bucket public, which of the below mentioned options may be the right fit for this action?

  1. Make the bucket public with full access
  2. Define the policy for the bucket
  3. Provide ACL on the object
  4. Create an IAM user with permission

[showhide type=”q45″ more_text=”Answer is…” less_text=”Show less…”]
3. Provide ACL on the object.  [/showhide]

 

46. A bucket owner has allowed another account’s IAM users to upload or access objects in his bucket. The IAM user of Account A is trying to access an object created by the IAM user of Account B. What will happen in this scenario?

  1. The bucket policy may not be created as S3 will give error due to conflict of Access Rights
  2. It is not possible to give permission to multiple IAM users
  3. AWS S3 will verify proper rights given by the owner of Account A, the bucket owner as well as by the IAM user B to the object
  4. It is not possible that the IAM user of one account accesses objects of the other IAM user

[showhide type=”q46″ more_text=”Answer is…” less_text=”Show less…”]
3. AWS S3 will verify proper rights given by the owner of Account A, the bucket owner as well as by the IAM user B to the object.  [/showhide]

 

47. A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers

    1. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys
    2. Use Amazon S3 server-side encryption with customer-provided keys
    3. Use Amazon S3 server-side encryption with EC2 key pair.
    4. Use Amazon S3 bucket policies to restrict access to the data at rest.
    5. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key
    6. Use SSL to encrypt the data while in transit to Amazon S3.

[showhide type=”q47″ more_text=”Answer is…” less_text=”Show less…”]
1. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys.

2. Use Amazon S3 server-side encryption with customer-provided keys

5. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key [/showhide]
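
The three valid methods map directly onto S3 request options. A boto3 sketch; bucket, keys, and key material are placeholders, and the client-side step is only indicated, since that encryption code is your own:

```python
# Sketch: the three at-rest encryption options from the answer.
# Names and key material are placeholders.
import os
import boto3

s3 = boto3.client("s3")
body = b"sensitive payload"

# 1. SSE-KMS: S3 encrypts server-side with a KMS-managed key.
s3.put_object(Bucket="secure-bucket", Key="kms-object", Body=body,
              ServerSideEncryption="aws:kms")

# 2. SSE-C: S3 encrypts server-side with a key you supply per request.
customer_key = os.urandom(32)  # 256-bit key managed entirely by you
s3.put_object(Bucket="secure-bucket", Key="ssec-object", Body=body,
              SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key)

# 3. Client-side: encrypt with your own master key before the upload,
# then PUT the ciphertext as an ordinary object (encryption code not shown).
```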

 

48. A user has enabled versioning on an S3 bucket. The user is using server side encryption for data at rest. If the user is supplying his own keys for encryption (SSE-C), which of the below mentioned statements is true?

    1. The user should use the same encryption key for all versions of the same object
    2. It is possible to have different encryption keys for different versions of the same object
    3. AWS S3 does not allow the user to upload his own keys for server side encryption
    4. The SSE-C does not work when versioning is enabled

[showhide type=”q48″ more_text=”Answer is…” less_text=”Show less…”]
2. It is possible to have different encryption keys for different versions of the same object.  [/showhide]

 

49. A storage admin wants to encrypt all the objects stored in S3 using server side encryption. The user does not want to use the AES 256 encryption key provided by S3. How can the user achieve this?

    1. The admin should upload his secret key to the AWS console and let S3 decrypt the objects
    2. The admin should use CLI or API to upload the encryption key to the S3 bucket. When making a call to the S3 API mention the encryption key URL in each request
    3. S3 does not support client supplied encryption keys for server side encryption
    4. The admin should send the keys and encryption algorithm with each API call

[showhide type=”q49″ more_text=”Answer is…” less_text=”Show less…”]
4. The admin should send the keys and encryption algorithm with each API call.  [/showhide]

 

50. A user has enabled versioning on an S3 bucket. The user is using server side encryption for data at rest. If the user is supplying his own keys for encryption (SSE-C), what is recommended to the user for the purpose of security?

    1. User should not use his own security key as it is not secure
    2. Configure S3 to rotate the user’s encryption key at regular intervals
    3. Configure S3 to store the user’s keys securely with SSL
    4. Keep rotating the encryption key manually at the client side

[showhide type=”q50″ more_text=”Answer is…” less_text=”Show less…”]
4. Keep rotating the encryption key manually at the client side.  [/showhide]




51. A system admin is planning to encrypt all objects being uploaded to S3 from an application. The system admin does not want to implement his own encryption algorithm; instead, he is planning to use server side encryption by supplying his own key (SSE-C). Which parameter is not required while making a call for SSE-C?

    1. x-amz-server-side-encryption-customer-key-AES-256
    2. x-amz-server-side-encryption-customer-key
    3. x-amz-server-side-encryption-customer-algorithm
    4. x-amz-server-side-encryption-customer-key-MD5

[showhide type=”q51″ more_text=”Answer is…” less_text=”Show less…”]
1. x-amz-server-side-encryption-customer-key-AES-256.  [/showhide]
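
The headers SSE-C actually requires are the customer algorithm, the customer key, and the key’s MD5; boto3 computes the MD5 header automatically when given the raw key, and the same key must be resent on every GET. A sketch with placeholder names:

```python
# Sketch: SSE-C round trip. boto3 sends x-amz-server-side-encryption-customer-
# algorithm/-key/-key-MD5; there is no "-key-AES-256" header (option 1).
import os
import boto3

s3 = boto3.client("s3")
key = os.urandom(32)  # the customer-held 256-bit key (placeholder)

s3.put_object(Bucket="sse-c-bucket", Key="doc.txt", Body=b"secret data",
              SSECustomerAlgorithm="AES256", SSECustomerKey=key)

obj = s3.get_object(Bucket="sse-c-bucket", Key="doc.txt",
                    SSECustomerAlgorithm="AES256", SSECustomerKey=key)
print(obj["Body"].read())
```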

 

 

52. You are designing a personal document-archiving solution for your global enterprise with thousands of employees. Each employee has potentially gigabytes of data to be backed up in this archiving solution. The solution will be exposed to the employees as an application, where they can just drag and drop their files to the archiving system. Employees can retrieve their archives through a web interface. The corporate network has high-bandwidth AWS Direct Connect connectivity to AWS. You have regulatory requirements that all data needs to be encrypted before being uploaded to the cloud. How do you implement this in a highly available and cost-efficient way?

  1. Manage encryption keys on-premises in an encrypted relational database. Set up an on-premises server with sufficient storage to temporarily store files, and then upload them to Amazon S3, providing a client-side master key.
  2. Manage encryption keys in a Hardware Security Module (HSM) appliance on-premises. Set up an on-premises server with sufficient storage to temporarily store, encrypt, and upload files directly into Amazon Glacier.
  3. Manage encryption keys in AWS Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier.
  4. Manage encryption keys in an AWS CloudHSM appliance. Encrypt files prior to uploading on the employee desktop, and then upload directly into Amazon Glacier.

[showhide type=”q52″ more_text=”Answer is…” less_text=”Show less…”]
3. Manage encryption keys in AWS Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier.  [/showhide]

 

53. A user has enabled server side encryption with S3. The user downloads the encrypted object from S3. How can the user decrypt it?

  1. S3 does not support server side encryption
  2. S3 provides a server side key to decrypt the object
  3. The user needs to decrypt the object using their own private key
  4. S3 manages encryption and decryption automatically

[showhide type=”q53″ more_text=”Answer is…” less_text=”Show less…”]
4. S3 manages encryption and decryption automatically.  [/showhide]

 

54. When uploading an object, what request header can be explicitly specified in a request to Amazon S3 to encrypt object data when saved on the server side?

  1. x-amz-storage-class
  2. Content-MD5
  3. x-amz-security-token
  4. x-amz-server-side-encryption

[showhide type=”q54″ more_text=”Answer is…” less_text=”Show less…”]
4. x-amz-server-side-encryption.  [/showhide]

 

55. A startup company hired you to help them build a mobile application that will ultimately store billions of images and videos in Amazon S3. The company is lean on funding and wants to minimize operational costs; however, they have an aggressive marketing plan and expect to double their current installation base every six months. Due to the nature of their business, they are expecting sudden and large increases in traffic to and from S3, and need to ensure that it can handle the performance needs of their application. What other information must you gather from this customer in order to determine whether S3 is the right option?

    1. You must know how many customers that company has today, because this is critical in understanding what their customer base will be in two years.
    2. You must find out total number of requests per second at peak usage.
    3. You must know the size of the individual objects being written to S3 in order to properly design the key namespace.
    4. In order to build the key namespace correctly, you must understand the total amount of storage needs for each S3 bucket.

[showhide type=”q55″ more_text=”Answer is…” less_text=”Show less…”]
2. You must find out total number of requests per second at peak usage.  [/showhide]

 

56. A document storage company is deploying their application to AWS and changing their business model to support both free tier and premium tier users. The premium tier users will be allowed to store up to 200GB of data and free tier customers will be allowed to store only 5GB. The customer expects that billions of files will be stored. All users need to be alerted when approaching 75 percent quota utilization and again at 90 percent quota use. To support the free tier and premium tier users, how should they architect their application?

    1. The company should utilize an Amazon Simple Workflow Service activity worker that updates the user’s data counter in Amazon DynamoDB. The activity worker will use Amazon Simple Email Service to send an email if the counter increases above the appropriate thresholds.
    2. The company should deploy an Amazon Relational Database Service database with a stored-objects table that has a row for each stored object along with the size of each object. The upload server will query the aggregate consumption of the user in question (by first determining the files stored by the user, and then querying the stored-objects table for the respective file sizes) and send an email via Amazon Simple Email Service if the thresholds are breached.
    3. The company should write both the content length and the username of the file’s owner as S3 metadata for the object. They should then create a file watcher to iterate over each object, aggregate the size for each user, and send a notification via Amazon Simple Queue Service to an emailing service if the storage threshold is exceeded.
    4. The company should create two separate Amazon Simple Storage Service buckets, one for data storage for free tier users and another for data storage for premium tier users. An Amazon Simple Workflow Service activity worker will query all objects for a given user based on the bucket the data is stored in and aggregate storage. The activity worker will notify the user via Amazon Simple Notification Service when necessary.

[showhide type=”q56″ more_text=”Answer is…” less_text=”Show less…”]
1. The company should utilize an Amazon Simple Workflow Service activity worker that updates the user’s data counter in Amazon DynamoDB. The activity worker will use Amazon Simple Email Service to send an email if the counter increases above the appropriate thresholds.  [/showhide]

 

57. Your company hosts a social media website for storing and sharing documents. The web application allows users to upload large files while resuming and pausing the upload as needed. Currently, files are uploaded to your PHP front end backed by Elastic Load Balancing and an Auto Scaling fleet of Amazon Elastic Compute Cloud (EC2) instances that scale upon average of bytes received (NetworkIn). After a file has been uploaded, it is copied to Amazon Simple Storage Service (S3). Amazon EC2 instances use an AWS Identity and Access Management (IAM) role that allows Amazon S3 uploads. Over the last six months, your user base and scale have increased significantly, forcing you to increase the Auto Scaling group’s Max parameter a few times. Your CFO is concerned about rising costs and has asked you to adjust the architecture where needed to better optimize costs. Which architecture change could you introduce to reduce costs and still keep your web application secure and scalable?

    1. Replace the Auto Scaling launch configuration to include c3.8xlarge instances; those instances can potentially yield a network throughput of 10 Gbps.
    2. Re-architect your ingest pattern: have the app authenticate against your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic to directly upload the file to Amazon S3 using the given credentials and S3 prefix.
    3. Re-architect your ingest pattern, and move your web application instances into a VPC public subnet. Attach a public IP address to each EC2 instance (using the Auto Scaling launch configuration settings). Use Amazon Route 53 round-robin record sets and HTTP health checks to DNS load balance the app requests; this approach will significantly reduce the cost by bypassing Elastic Load Balancing.
    4. Re-architect your ingest pattern: have the app authenticate against your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix.

[showhide type=”q57″ more_text=”Answer is…” less_text=”Show less…”]
4. Re-architect your ingest pattern: have the app authenticate against your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix.  [/showhide]

 

58. If an application is storing hourly log files from thousands of instances from a high traffic web site, which naming scheme would give optimal performance on S3?

  1. Sequential
  2. instanceID_log-HH-DD-MM-YYYY
  3. instanceID_log-YYYY-MM-DD-HH
  4. HH-DD-MM-YYYY-log_instanceID
  5. YYYY-MM-DD-HH-log_instanceID

[showhide type=”q58″ more_text=”Answer is…” less_text=”Show less…”]
4. HH-DD-MM-YYYY-log_instanceID.  [/showhide]