- [Morgan] Next up, another sample question. The topic this question relates to is: select high-performing and scalable storage solutions for a workload. The stem reads: A solutions architect has been given a large number of video files to upload to an Amazon S3 bucket. The file sizes are 100 to 500 MB. The solutions architect also wants to easily resume failed upload attempts. How should the solutions architect perform the uploads in the least amount of time?

This question has a few phrases that I'd like to call out. First is the fact that we have files that we know we are uploading to S3, so you know that is the service to focus in on. Then, the stem provides sizes for the files, 100 to 500 MB. I generally pay attention when numbers or specifics like this are provided, because it likely means the file size matters. The problem to solve is that the solutions architect needs to easily resume failed upload attempts. Finally, the question asks for the solution that will take the least amount of time.

Let's take a look at the responses. There is A, Split each file into 5-MB parts. Upload the individual parts and use S3 multipart upload to merge the parts into a complete object. B, Using the AWS CLI, copy the individual objects into the Amazon S3 bucket with the aws s3 cp command. C, From the Amazon S3 console, select the Amazon S3 bucket. Upload the S3 bucket and drag and drop items into the bucket. And D, Upload the files with SFTP and the AWS Transfer Family.

We will now pause to let you review the question and the responses. Revealing the key in three seconds, so pause now if you'd like more time. Three, two, one.

The key is B, Using the AWS CLI, copy individual objects into the Amazon S3 bucket with the aws s3 cp command. In general, when your object size reaches 100 MB, you should consider using multipart upload instead of uploading the object in a single operation. Using multipart upload can improve throughput, and it also allows for quick recovery when a network issue interrupts an upload, because only the failed parts need to be retried. The stem talked about being able to easily resume failed upload attempts, and multipart upload helps with this. Now, the reason you want to use the command line and the cp command is that the aws s3 commands automatically perform multipart uploads and downloads based on the file size. This means this option solves the problem, and it takes very little time because the command line already does the multipart upload piece for you.

Now, let's review the incorrect responses. First up is A, Split each file into 5-MB parts. Upload the individual parts normally and use S3 multipart upload to merge the parts into a complete object. This is incorrect because multipart upload for S3 is recommended for objects over 100 MB, so breaking the objects up into 5-MB parts is not necessary, and it would require you to reassemble the parts in S3 back into the original object, since the object was broken up before uploading. This would also take more time than using the cp command. For these reasons, this response is incorrect.

Next up is C, From the Amazon S3 console, select the Amazon S3 bucket. Upload the S3 bucket and drag and drop items into the bucket. This is incorrect because uploading files into S3 from the Management Console does not provide any protection from network problems, nor does it automatically perform multipart uploads. So this does not solve the problem presented in the stem.

Lastly, there is D, Upload the files with SFTP and the AWS Transfer Family.
This option is incorrect because, although you could upload the files through SFTP, it does not solve the problem of easily resuming failed uploads, and it also requires more work and time than using the command line and the copy command. If you'd like to see what the CLI's automatic multipart behavior looks like in code, there is a short sketch after this walkthrough. That is it for this one. See you soon.
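As a companion to the key, here is a minimal sketch of the same idea in Python with boto3, which, like the aws s3 commands, switches to multipart uploads automatically once a file crosses a configurable size threshold. The bucket name, object key, and local file path are placeholders, and the threshold, part size, and concurrency shown are illustrative choices rather than required values.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholders for illustration only.
BUCKET = "example-video-bucket"
KEY = "uploads/lecture-01.mp4"
LOCAL_PATH = "lecture-01.mp4"

# Files larger than multipart_threshold are uploaded as multipart uploads,
# split into parts of multipart_chunksize. Each part is its own request,
# so a network hiccup only affects that part instead of the whole file.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart at 100 MB
    multipart_chunksize=16 * 1024 * 1024,   # 16-MB parts
    max_concurrency=8,                      # upload parts in parallel
)

s3 = boto3.client("s3")
s3.upload_file(LOCAL_PATH, BUCKET, KEY, Config=config)
print(f"Uploaded {LOCAL_PATH} to s3://{BUCKET}/{KEY}")
```

On the command line, the equivalent is simply the aws s3 cp command from the key with no extra flags, because the CLI applies its own multipart threshold and part-size defaults automatically.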