For those who are new to the world of S3 buckets, this article will guide you through uploading a file to an S3 bucket from the Linux terminal and assigning permissions to it.
But before that, let's get a brief overview of what an S3 bucket is and the ways a file can be uploaded to it.
S3 is one of the many services provided by AWS; the name is an abbreviation of Simple Storage Service. A bucket is nothing but public cloud storage where you can keep your files, much like a folder on your PC, except that, unlike your PC, you can access it from anywhere. AWS provides several ways to upload a file to an S3 bucket, which are given below:
- Upload a file using drag and drop in the S3 console
- Upload a file using the upload button in the S3 console
- Upload a file using the AWS CLI in the terminal
Of the above-mentioned ways, we are going to look at the third one.
As a first step, you need to install the AWS CLI on your local machine. On Debian/Ubuntu, here is the command to install it:
sudo apt install awscli
Once the installation is done, configure the AWS CLI with this command:
aws configure
As soon as you run the above command, you will be prompted for the following details one by one in the terminal; enter each value and press Enter.
AWS Access Key ID: <AWS_ACCESS_KEY_ID>
AWS Secret Access Key: <AWS_SECRET_ACCESS_KEY>
Default region name: <S3_BUCKET_REGION>
Default output format: json
After you enter the relevant information, your AWS CLI is configured.
Now, on to uploading the file. The following command will help you achieve it:
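Under the hood, `aws configure` simply writes these values into two plain-text files in your home directory. The contents look roughly like this (the `default` profile name is what the CLI uses when no profile is specified; the region and output values below are examples):

```ini
; ~/.aws/credentials
[default]
aws_access_key_id = <AWS_ACCESS_KEY_ID>
aws_secret_access_key = <AWS_SECRET_ACCESS_KEY>

; ~/.aws/config
[default]
region = us-east-1
output = json
```

Knowing where these files live is handy if you ever need to edit credentials by hand or manage multiple profiles.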
aws s3 cp <path-to-file-from-local> s3://<S3_BUCKET_NAME>/<folder-name> --acl public-read
e.g. aws s3 cp /home/nehalk/Desktop/file.txt s3://<S3_BUCKET_NAME>/resources --acl public-read
And voila! Your file, file.txt, has been uploaded from the Desktop folder of your local machine to the resources folder in the S3 bucket.
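Because the file was uploaded with `--acl public-read`, it is reachable over plain HTTPS at a predictable URL. Here is a minimal sketch of building that URL, assuming a hypothetical bucket name and region (substitute your own values):

```shell
# Hypothetical values -- replace with your own bucket, region, and object key
BUCKET="my-example-bucket"
REGION="us-east-1"
KEY="resources/file.txt"

# Virtual-hosted-style URL for a public S3 object
URL="https://${BUCKET}.s3.${REGION}.amazonaws.com/${KEY}"
echo "$URL"
```

You can paste the printed URL into a browser, or fetch it with `curl`, to confirm the object is publicly readable.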
Another useful command grants read permission on a file that is already in the bucket, so that it can be downloaded.
aws s3api put-object-acl --bucket <S3_BUCKET_NAME> --key <path-to-file-on-s3-bucket> --acl public-read
This command gives public read permission to that object, and after that, the file can be downloaded.
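The command above can be sketched as a small script that builds the invocation from variables; the bucket and key below are hypothetical placeholders, and the composed command is echoed first so you can inspect it before actually running it:

```shell
# Hypothetical placeholders -- replace with your own bucket and object key
BUCKET="my-example-bucket"
KEY="resources/file.txt"

# Compose the put-object-acl invocation and echo it for review
CMD="aws s3api put-object-acl --bucket ${BUCKET} --key ${KEY} --acl public-read"
echo "$CMD"
# When you are happy with it, run the echoed line (requires valid credentials)
```

Parameterising the bucket and key like this makes it easy to reuse the same snippet across environments.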
In order to use the AWS CLI with an object storage service that provides an S3-compatible API, such as DigitalOcean Spaces, you must also configure a custom endpoint.
This can be done on the command line using the --endpoint option:
aws s3 ls --endpoint=https://nyc3.digitaloceanspaces.com
It is worth pointing out that there is currently no way to set a default endpoint; you have to specify it on every invocation. A common convenience is to define an alias:
alias awsdo='aws --endpoint=https://nyc3.digitaloceanspaces.com'