
28 Essential AWS S3 CLI Command Examples to Manage Buckets and Objects

It’s easy to manage AWS S3 buckets and objects from the CLI. This tutorial explains the basics of how to handle S3 buckets and their objects using the aws s3 CLI with the following examples:

For quick reference, here are the commands. For details on how these commands work, read the rest of the tutorial.

# s3 make bucket (create bucket)
aws s3 mb s3://tgsbucket --region us-west-2

# s3 remove bucket
aws s3 rb s3://tgsbucket
aws s3 rb s3://tgsbucket --force

# s3 ls commands
aws s3 ls
aws s3 ls s3://tgsbucket
aws s3 ls s3://tgsbucket --recursive
aws s3 ls s3://tgsbucket --recursive --human-readable --summarize

# s3 cp commands
aws s3 cp getdata.php s3://tgsbucket
aws s3 cp /local/dir/data s3://tgsbucket --recursive
aws s3 cp s3://tgsbucket/getdata.php /local/dir/data
aws s3 cp s3://tgsbucket/ /local/dir/data --recursive
aws s3 cp s3://tgsbucket/init.xml s3://backup-bucket
aws s3 cp s3://tgsbucket s3://backup-bucket --recursive

# s3 mv commands
aws s3 mv source.json s3://tgsbucket
aws s3 mv s3://tgsbucket/getdata.php /home/project
aws s3 mv s3://tgsbucket/source.json s3://backup-bucket
aws s3 mv /local/dir/data s3://tgsbucket/data --recursive
aws s3 mv s3://tgsbucket s3://backup-bucket --recursive

# s3 rm commands
aws s3 rm s3://tgsbucket/queries.txt
aws s3 rm s3://tgsbucket --recursive

# s3 sync commands
aws s3 sync backup s3://tgsbucket
aws s3 sync s3://tgsbucket/backup /tmp/backup
aws s3 sync s3://tgsbucket s3://backup-bucket

# s3 bucket website
aws s3 website s3://tgsbucket/ --index-document index.html --error-document error.html

# s3 presign url (default 3600 seconds)
aws s3 presign s3://tgsbucket/dnsrecords.txt
aws s3 presign s3://tgsbucket/dnsrecords.txt --expires-in 60

1. Create New S3 Bucket

Use the mb option for this. mb stands for Make Bucket.

The following will create a new S3 bucket.

$ aws s3 mb s3://tgsbucket
make_bucket: tgsbucket

In the above example, the bucket is created in the us-east-1 region, as that is what is specified in the user’s config file, as shown below.

$ cat ~/.aws/config
[profile ramesh]
region = us-east-1

To set up your config file properly, use the aws configure command as explained here: 15 AWS Configure Command Examples to Manage Multiple Profiles for CLI
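
For reference, a typical aws configure session for the profile used above looks roughly like this (the key values below are just placeholders, not real credentials):

$ aws configure --profile ramesh
AWS Access Key ID [None]: AKIAxxxxxxxxEXAMPLE
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json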

If the bucket already exists and you own it, you’ll get the following error message.

$ aws s3 mb s3://tgsbucket
make_bucket failed: s3://tgsbucket An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.

If the bucket already exists but is owned by another user, you’ll get the following error message.

$ aws s3 mb s3://paloalto
make_bucket failed: s3://paloalto An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Under some scenarios, you may also get the following error message.

$ aws s3 mb s3://demo-bucket
make_bucket failed: s3://demo-bucket An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.

2. Create New S3 Bucket – Different Region

To create a bucket in a specific region (different from the one in your config file), use the --region option as shown below.

$ aws s3 mb s3://tgsbucket --region us-west-2
make_bucket: tgsbucket

3. Delete S3 Bucket (That is empty)

Use the rb option for this. rb stands for remove bucket.

The following deletes the given bucket.

$ aws s3 rb s3://tgsbucket
remove_bucket: tgsbucket

If the bucket you are trying to delete doesn’t exist, you’ll get the following error message.

$ aws s3 rb s3://tgsbucket1
remove_bucket failed: s3://tgsbucket1 An error occurred (NoSuchBucket) when calling the DeleteBucket operation: The specified bucket does not exist

4. Delete S3 Bucket (And all its objects)

If the bucket contains some objects, you’ll get the following error message:

$ aws s3 rb s3://tgsbucket
remove_bucket failed: s3://tgsbucket An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty

To delete a bucket along with all its objects, use the --force option as shown below.

$ aws s3 rb s3://tgsbucket --force
delete: s3://tgsbucket/demo/getdata.php
delete: s3://tgsbucket/ipallow.txt
delete: s3://tgsbucket/demo/servers.txt
delete: s3://tgsbucket/demo/
remove_bucket: tgsbucket

5. List All S3 Buckets

To view all the buckets owned by the user, execute the following ls command.

$ aws s3 ls
2019-02-06 11:38:55 tgsbucket
2018-12-18 18:02:27 etclinux
2018-12-08 18:05:15 readynas
..
..

In the above output, the timestamp is the date the bucket was created. The timezone is adjusted for display to your local machine’s timezone.

The following command is the same as the above:

aws s3 ls s3://
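
If you only need the bucket names (for example, to feed into a script), the lower-level s3api interface is an alternative. The following is a sketch using a JMESPath query; with --output text the names are printed on one line:

$ aws s3api list-buckets --query "Buckets[].Name" --output text
tgsbucket   etclinux   readynas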

6. List All Objects in a Bucket

The following command displays all objects and prefixes under the tgsbucket.

$ aws s3 ls s3://tgsbucket
PRE config/
PRE data/
2019-04-07 11:38:20 13 getdata.php
2019-04-07 11:38:20 2546 ipallow.php
2019-04-07 11:38:20 9 license.php
2019-04-07 11:38:20 3677 servers.txt

In the above output:

  • Inside the tgsbucket, there are two folders, config and data (indicated by PRE)
  • PRE stands for Prefix of an S3 object.
  • Inside the tgsbucket, we have 4 files at the / level
  • The timestamp is when the file was created
  • The 2nd column displays the size of the S3 object

Note: The above output doesn’t display the contents of the sub-folders config and data.

7. List All Objects in a Bucket Recursively

To display all the objects recursively, including the contents of the sub-folders, execute the following command.

$ aws s3 ls s3://tgsbucket --recursive
2019-04-07 11:38:19 2777 config/init.xml
2019-04-07 11:38:20 52 config/help.txt
2019-04-07 11:38:20 1758 data/database.txt
2019-04-07 11:38:20 13 getdata.php
2019-04-07 11:38:20 2546 ipallow.php
2019-04-07 11:38:20 9 license.php
2019-04-07 11:38:20 3677 servers.txt

Note: When you list all the files, notice how there is no PRE indicator in the 2nd column for the folders.

8. Total Size of All Objects in an S3 Bucket

You can determine the total size of all the files in your S3 bucket by using the combination of the following three options: recursive, human-readable, summarize.

Note: The following displays both the total file size in the S3 bucket and the total number of files in the S3 bucket.

$ aws s3 ls s3://tgsbucket --recursive --human-readable --summarize
2019-04-07 11:38:19 2.7 KiB config/init.xml
2019-04-07 11:38:20 52 Bytes config/help.txt
2019-04-07 11:38:20 1.7 KiB data/database.txt
2019-04-07 11:38:20 13 Bytes getdata.php
2019-04-07 11:38:20 2.5 KiB ipallow.php
2019-04-07 11:38:20 9 Bytes license.php
2019-04-07 11:38:20 3.6 KiB servers.txt

Total Objects: 7
Total Size: 10.6 KiB

In the above output:

  • The recursive option makes sure that it displays all the files in the S3 bucket, including sub-folders
  • human-readable displays the size of the file in a readable format. Possible values you’ll see in the 2nd column for the size are: Bytes/MiB/KiB/GiB/TiB/PiB/EiB
  • The summarize option makes sure to display the last two lines in the above output. These show the total number of objects in the S3 bucket and the total size of all those objects
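
If you prefer the raw totals in bytes (for example, for scripting), a rough alternative is to let the s3api interface compute them with a JMESPath expression. This is a sketch, and it assumes the bucket is not empty (sum() over an empty Contents list would fail):

$ aws s3api list-objects-v2 --bucket tgsbucket --query "[length(Contents[]), sum(Contents[].Size)]" --output text
7   10832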

9. Request Payer Listing

If a particular bucket is configured as a requester pays bucket, then when you access objects in that bucket, you acknowledge that you are responsible for the cost of that request access. In this case, the bucket owner doesn’t have to pay for the access.

To indicate this in your ls command, you’ll have to specify the --request-payer option as shown below.

$ aws s3 ls s3://tgsbucket --recursive --request-payer requester
2019-04-07 11:38:19 2777 config/init.xml
2019-04-07 11:38:20 52 config/help.txt
2019-04-07 11:38:20 1758 data/database.txt
2019-04-07 11:38:20 13 getdata.php
2019-04-07 11:38:20 2546 ipallow.php
2019-04-07 11:38:20 9 license.php
2019-04-07 11:38:20 3677 servers.txt

For a signed URL, be sure to include x-amz-request-payer=requester in the request.
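
For example, with a plain HTTP client you would pass that value as a header yourself. This is a hypothetical sketch; the URL is a placeholder for a presigned URL generated against the requester pays bucket:

$ curl -H "x-amz-request-payer: requester" -o dnsrecords.txt "<presigned-url>"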

10. Copy Local File to S3 Bucket

In the following example, we are copying the getdata.php file from the local machine to the S3 bucket.

$ aws s3 cp getdata.php s3://tgsbucket
upload: ./getdata.php to s3://tgsbucket/getdata.php

If you’d like to copy getdata.php to the S3 bucket with a different name, do the following.

$ aws s3 cp getdata.php s3://tgsbucket/getdata-new.php
upload: ./getdata.php to s3://tgsbucket/getdata-new.php

For the local file, you can also specify the full path as shown below.

$ aws s3 cp /home/project/getdata.php s3://tgsbucket
upload: ../../home/project/getdata.php to s3://tgsbucket/getdata.php

11. Copy Local Folder with All Files to S3 Bucket

In this example, we are copying all the files from the “data” folder under the /home/projects directory to the S3 bucket.

$ cd /home/projects

$ aws s3 cp data s3://tgsbucket --recursive
upload: data/parameters.txt to s3://tgsbucket/parameters.txt
upload: data/common.txt to s3://tgsbucket/common.txt
..

In the above example, notice that only the files from the local data/ folder are getting uploaded, not the folder “data” itself.

If you’d like to upload the data folder from local to the S3 bucket as the data folder, then specify the folder name after the bucket name as shown below.

$ aws s3 cp data s3://tgsbucket/data --recursive
upload: data/parameters.txt to s3://tgsbucket/data/parameters.txt
upload: data/common.txt to s3://tgsbucket/data/common.txt
..
..
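
The cp command (like sync) also accepts --exclude and --include filters, which helps when you only want part of a folder. The following sketch uploads only the .txt files from the data folder; the filters are applied in order, so the exclude-everything rule comes first:

$ aws s3 cp data s3://tgsbucket/data --recursive --exclude "*" --include "*.txt"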

12. Download a File from S3 Bucket

To download a specific file from an S3 bucket, do the following. The following copies getdata.php from the given S3 bucket to the current directory.

$ aws s3 cp s3://tgsbucket/getdata.php .
download: s3://tgsbucket/getdata.php to ./getdata.php

You can download the file to the local machine with a different name as shown below.

$ aws s3 cp s3://tgsbucket/getdata.php getdata-local.php
download: s3://tgsbucket/getdata.php to ./getdata-local.php

Download the file from the S3 bucket to a specific folder on the local machine as shown below. The following will download the getdata.php file to the /home/project folder on the local machine.

$ aws s3 cp s3://tgsbucket/getdata.php /home/project/
download: s3://tgsbucket/getdata.php to ../../home/project/getdata.php

13. Download All Files Recursively from an S3 Bucket (Using Copy)

The following will download all the files from the given bucket to the current directory on your machine.

$ aws s3 cp s3://tgsbucket/ . --recursive
download: s3://tgsbucket/getdata.php to ./getdata.php
download: s3://tgsbucket/config/init.xml to ./config/init.xml
..

If you want to download all the files from an S3 bucket to a specific folder locally, specify the full path of the local directory as shown below.

$ aws s3 cp s3://tgsbucket/ /home/projects/tgsbucket --recursive
download: s3://tgsbucket/getdata.php to ../../home/projects/tgsbucket/getdata.php
download: s3://tgsbucket/config/init.xml to ../../home/projects/tgsbucket/config/init.xml
..

In the above command, if the tgsbucket folder doesn’t exist under /home/projects, it will be created automatically.
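
If you’d like to preview what such a recursive copy would transfer before actually running it, add the --dryrun option, which prints the operations without executing them:

$ aws s3 cp s3://tgsbucket/ /home/projects/tgsbucket --recursive --dryrun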

14. Copy a File from One Bucket to Another Bucket

The following command will copy config/init.xml from tgsbucket to the backup-bucket as shown below.

$ aws s3 cp s3://tgsbucket/config/init.xml s3://backup-bucket
copy: s3://tgsbucket/config/init.xml to s3://backup-bucket/init.xml

In the above example, even though the init.xml file was under the config folder in the source bucket, it was copied to the top-level / of the backup-bucket at the destination.

If you want to keep the same folder name as the source in the destination along with the file, specify the folder name in the destination bucket as shown below.

$ aws s3 cp s3://tgsbucket/config/init.xml s3://backup-bucket/config
copy: s3://tgsbucket/config/init.xml to s3://backup-bucket/config/init.xml

If the destination bucket doesn’t exist, you’ll get the following error message.

$ aws s3 cp s3://tgsbucket/test.txt s3://backup-bucket-777
copy failed: s3://tgsbucket/test.txt to s3://backup-bucket-777/test.txt An error occurred (NoSuchBucket) when calling the CopyObject operation: The specified bucket does not exist

15. Copy All Files Recursively from One Bucket to Another

The following will copy all the files from the source bucket, including files under sub-folders, to the destination bucket.

$ aws s3 cp s3://tgsbucket s3://backup-bucket --recursive
copy: s3://tgsbucket/getdata.php to s3://backup-bucket/getdata.php
copy: s3://tgsbucket/config/init.xml to s3://backup-bucket/config/init.xml
..

16. Move a File from Local to S3 Bucket

When you move a file from the local machine to an S3 bucket, as you’d expect, the file is physically moved from the local machine to the S3 bucket.

$ ls -l source.json
-rw-r--r-- 1 ramesh sysadmin 1404 Apr 2 13:25 source.json

$ aws s3 mv source.json s3://tgsbucket
move: ./source.json to s3://tgsbucket/source.json

As you can see, the file doesn’t exist on the local machine after the move. It is only in the S3 bucket now.

$ ls -l source.json
ls: source.json: No such file or directory

17. Move a File from S3 Bucket to Local

The following is the reverse of the previous example. Here, the file will be moved from the S3 bucket to the local machine.

As you see below, the file currently exists in the S3 bucket.

$ aws s3 ls s3://tgsbucket/getdata.php
2019-04-06 06:24:29 1758 getdata.php

Move the file from the S3 bucket to the /home/project directory on the local machine.

$ aws s3 mv s3://tgsbucket/getdata.php /home/project
move: s3://tgsbucket/getdata.php to ../../../home/project/getdata.php

After the move, the file doesn’t exist in the S3 bucket anymore.

$ aws s3 ls s3://tgsbucket/getdata.php

18. Move a File from One S3 Bucket to Another S3 Bucket

Before the move, the file source.json is in tgsbucket.

$ aws s3 ls s3://tgsbucket/source.json
2019-04-06 06:51:39 1404 source.json

This file is not in the backup-bucket.

$ aws s3 ls s3://backup-bucket/source.json
$

Move the file from tgsbucket to backup-bucket.

$ aws s3 mv s3://tgsbucket/source.json s3://backup-bucket
move: s3://tgsbucket/source.json to s3://backup-bucket/source.json

Now, the file is only in the backup-bucket.

$ aws s3 ls s3://tgsbucket/source.json
$

$ aws s3 ls s3://backup-bucket/source.json
2019-04-06 06:56:00 1404 source.json

19. Move All Files from a Local Folder to S3 Bucket

In this example, the following files are under the data folder.

$ ls -1 data
dnsrecords.txt
parameters.txt
dev-setup.txt
error.txt

The following moves all the files in the data directory on the local machine to tgsbucket.

$ aws s3 mv data s3://tgsbucket/data --recursive
move: data/dnsrecords.txt to s3://tgsbucket/data/dnsrecords.txt
move: data/parameters.txt to s3://tgsbucket/data/parameters.txt
move: data/dev-setup.txt to s3://tgsbucket/data/dev-setup.txt
move: data/error.txt to s3://tgsbucket/data/error.txt

20. Move All Files from S3 Bucket to Local Folder

In this example, the localdata folder is currently empty.

$ ls -1 localdata
$

The following will move all the files in the S3 bucket under the data folder to the localdata folder on your local machine.

$ aws s3 mv s3://tgsbucket/data/ localdata --recursive
move: s3://tgsbucket/data/dnsrecords.txt to localdata/dnsrecords.txt
move: s3://tgsbucket/data/parameters.txt to localdata/parameters.txt
move: s3://tgsbucket/data/dev-setup.txt to localdata/dev-setup.txt
move: s3://tgsbucket/data/error.txt to localdata/error.txt

Here is the output after the above move.

$ aws s3 ls s3://tgsbucket/data/
$

$ ls -1 localdata
dnsrecords.txt
parameters.txt
dev-setup.txt
error.txt

21. Move All Files from One S3 Bucket to Another S3 Bucket

Use the recursive option to move all files from one bucket to another as shown below.

$ aws s3 mv s3://tgsbucket s3://backup-bucket --recursive
move: s3://tgsbucket/dev-setup.txt to s3://backup-bucket/dev-setup.txt
move: s3://tgsbucket/dnsrecords.txt to s3://backup-bucket/dnsrecords.txt
move: s3://tgsbucket/error.txt to s3://backup-bucket/error.txt
move: s3://tgsbucket/parameters.txt to s3://backup-bucket/parameters.txt

22. Delete a File from S3 Bucket

To delete a specific file from an S3 bucket, use the rm option as shown below. The following will delete the queries.txt file from the given S3 bucket.

$ aws s3 rm s3://tgsbucket/queries.txt
delete: s3://tgsbucket/queries.txt

23. Delete All Objects from S3 buckets

If you specify the rm option just with a bucket name, it doesn’t do anything. This won’t delete any files from the bucket.

aws s3 rm s3://tgsbucket

To delete all the files from an S3 bucket, use the --recursive option as shown below.

$ aws s3 rm s3://tgsbucket --recursive
delete: s3://tgsbucket/dnsrecords.txt
delete: s3://tgsbucket/common.txt
delete: s3://tgsbucket/parameters.txt
delete: s3://tgsbucket/config/init.xml
..
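
The rm command also honors --exclude and --include filters, so you can keep part of a bucket while deleting the rest. The following sketch removes everything except the objects under the config/ prefix:

$ aws s3 rm s3://tgsbucket --recursive --exclude "config/*"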

24. Sync Files from Laptop to S3 Bucket

When you use the sync command, it recursively copies only the new or updated files from the source directory to the destination.

The following will sync the files from the backup directory on the local machine to the tgsbucket.

$ aws s3 sync backup s3://tgsbucket
upload: backup/docker.sh to s3://tgsbucket/docker.sh
upload: backup/address.txt to s3://tgsbucket/address.txt
upload: backup/display.py to s3://tgsbucket/display.py
upload: backup/getdata.php to s3://tgsbucket/getdata.php

If you’d like to sync it to a subfolder called backup in the S3 bucket, then include the folder name in the S3 bucket path as shown below.

$ aws s3 sync backup s3://tgsbucket/backup
upload: backup/docker.sh to s3://tgsbucket/backup/docker.sh
upload: backup/address.txt to s3://tgsbucket/backup/address.txt
upload: backup/display.py to s3://tgsbucket/backup/display.py
upload: backup/getdata.php to s3://tgsbucket/backup/getdata.php

Once you have done the sync, if you run the command again immediately, it won’t do anything, as there are no new or updated files in the local backup directory.

$ aws s3 sync backup s3://tgsbucket/backup
$

Let us create a new file on the local machine for testing.

echo "New file" > backup/newfile.txt

Now when you execute the sync, it will sync only this new file to the S3 bucket.

$ aws s3 sync backup s3://tgsbucket/backup
upload: backup/newfile.txt to s3://tgsbucket/backup/newfile.txt
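
Note that sync never removes anything from the destination by default; it only adds or updates. If you also want files deleted locally to be removed from the bucket, pass the --delete flag. A sketch, shown with --dryrun first so you can review what would be removed before committing:

$ aws s3 sync backup s3://tgsbucket/backup --delete --dryrun
$ aws s3 sync backup s3://tgsbucket/backup --delete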

25. Sync Files from S3 Bucket to Local

This is the reverse of the previous example. Here, we are syncing the files from the S3 bucket to the local machine.

$ aws s3 sync s3://tgsbucket/backup /tmp/backup
download: s3://tgsbucket/backup/docker.sh to ../../tmp/backup/docker.sh
download: s3://tgsbucket/backup/display.py to ../../tmp/backup/display.py
download: s3://tgsbucket/backup/newfile.txt to ../../tmp/backup/newfile.txt
download: s3://tgsbucket/backup/getdata.php to ../../tmp/backup/getdata.php
download: s3://tgsbucket/backup/address.txt to ../../tmp/backup/address.txt

26. Sync Files from One S3 Bucket to Another S3 Bucket

The following example syncs the files from tgsbucket to backup-bucket.

$ aws s3 sync s3://tgsbucket s3://backup-bucket
copy: s3://tgsbucket/backup/newfile.txt to s3://backup-bucket/backup/newfile.txt
copy: s3://tgsbucket/backup/display.py to s3://backup-bucket/backup/display.py
copy: s3://tgsbucket/backup/docker.sh to s3://backup-bucket/backup/docker.sh
copy: s3://tgsbucket/backup/address.txt to s3://backup-bucket/backup/address.txt
copy: s3://tgsbucket/backup/getdata.php to s3://backup-bucket/backup/getdata.php

27. Set S3 Bucket as a Website

You can also make an S3 bucket host a static website as shown below. For this, you need to specify both the index and the error document.

aws s3 website s3://tgsbucket/ --index-document index.html --error-document error.html

This bucket is in the us-east-1 region. So, once you’ve done the above, you can access tgsbucket as a website using the following URL: http://tgsbucket.s3-website-us-east-1.amazonaws.com/

For this to work properly, make sure public access is enabled on this S3 bucket, as it acts as a website now.
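
One way to allow that public read access is to attach a bucket policy that permits s3:GetObject for everyone. This is a minimal sketch using the lower-level s3api command; the policy file name is just an example, and on newer accounts you may also need to relax the Block Public Access settings for the bucket first:

$ cat public-read-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::tgsbucket/*"
    }
  ]
}

$ aws s3api put-bucket-policy --bucket tgsbucket --policy file://public-read-policy.json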

28. Presign URL of S3 Object for Temporary Access

When you presign a URL for an S3 file, anyone who has this URL can retrieve the S3 file with an HTTP GET request.

For example, if you want to give someone temporary access to the dnsrecords.txt file, presign this specific S3 object as shown below.

$ aws s3 presign s3://tgsbucket/dnsrecords.txt
https://tgsbucket.s3.amazonaws.com/dnsrecords.txt?AWSAccessKeyId=AAAAAAAAAAAAAAAAAAAA&Expires=1111111111&Signature=ooooooooooo%2Babcdefghijlimmm%3A

The output of the above command is an HTTPS URL, which you can hand out to anyone who should be able to download the dnsrecords.txt file from your S3 bucket.

The above URL will be valid by default for 3600 seconds (1 hour).
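
Anyone holding the URL can fetch the object with any HTTP client during that window, without AWS credentials. For example (the URL below is a placeholder for the presigned URL returned above):

$ curl -o dnsrecords.txt "<presigned-url>"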

If you’d like to specify a shorter expiry time, use the --expires-in option. The following will create a presigned URL that is valid for only 1 minute.
--expires-in (integer) Number of seconds until the pre-signed URL expires. Default is 3600 seconds.

$ aws s3 presign s3://tgsbucket/dnsrecords.txt --expires-in 60
https://tgsbucket.s3.amazonaws.com/dnsrecords.txt?AWSAccessKeyId=AAAAAAAAAAAAAAAAAAAA&Expires=1111111111&Signature=ooooooooooo%2Babcdefghijlimmm%3A

If someone tries to access the URL after the expiry time, they’ll see the following AccessDenied message.


<Error>
  <Code>AccessDenied</Code>
  <Message>Request has expired</Message>
  <Expires>2019-04-07T11:38:12Z</Expires>
  <ServerTime>2019-04-07T11:38:21Z</ServerTime>
  <RequestId>1111111111111111</RequestId>
  <HostId>mmmmmmmmmm/ggggggggg</HostId>
</Error>
