
S3 Buckets

🪿 S3

  • S3 stands for Simple Storage Service.
  • Think of it like an FTP server (loosely, FTP == S3).
  • Usage: hosting a static website, backups, logs, etc.

Buckets:

  • Buckets are the top-level containers.
  • Every bucket has a globally unique name.

Objects:

  • Objects store the actual data/files (e.g. source code, images, website content).

Access Control:

  • Bucket Policies (resource-based)
  • IAM Policies (identity-based)
  • Access Control Lists (legacy)
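For context, this is roughly what a resource-based bucket policy that makes every object world-readable looks like (the misconfiguration the anonymous techniques in this post look for). The bucket name `example-bucket` is hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

`"Principal": "*"` is the key part: it grants the `GetObject` action to anyone, including unauthenticated requests.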

Identify S3 Buckets

Using nslookup

# Resolve the hostname and grab the IP from the result
nslookup $Host

# Reverse-lookup that IP; an *.amazonaws.com answer points to AWS/S3
nslookup $Ip
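The hand-off between the two lookups can be scripted. A small sketch, where the `extract_ip` helper and the sample output are hypothetical and assume the common Linux `nslookup` output layout:

```shell
# Hypothetical helper: pull the answer IP out of nslookup output.
# The resolver's own "Address: ...#53" line comes first, so we keep
# the LAST Address entry, which belongs to the answer section.
extract_ip() {
  printf '%s\n' "$1" | awk '/^Address/ { ip = $2 } END { print ip }'
}

# Made-up output in the usual Linux nslookup layout, for the demo
out='Server:         192.168.1.1
Address:        192.168.1.1#53

Non-authoritative answer:
Name:   assets.example.com
Address: 52.92.181.107'

extract_ip "$out"   # prints the answer IP from the sample above
```

Feeding the extracted IP back into `nslookup` (a reverse lookup) often returns an `*.amazonaws.com` hostname, which is the tell that the target sits on AWS.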

Using curl

# Check the response headers
curl -I http://$Host
curl -I https://$Host
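The headers themselves give S3 away: buckets answer with `Server: AmazonS3` and `x-amz-*` headers. A hypothetical helper that checks a captured header block (the header names are the real ones S3 returns; the sample values are made up):

```shell
# Hypothetical check: does a block of response headers look like S3?
is_s3() {
  printf '%s\n' "$1" | grep -qiE '^(server: *amazons3|x-amz-)' \
    && echo yes || echo no
}

# Made-up headers in the shape S3 typically returns for a private bucket
headers='HTTP/1.1 403 Forbidden
x-amz-request-id: 6A1B2C3D4E5F
x-amz-id-2: abcdef123456
Content-Type: application/xml
Server: AmazonS3'

is_s3 "$headers"   # prints: yes
```

Note that even a `403 Forbidden` with these headers confirms the bucket exists; only the listing is denied.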

Tools

  • ip2cloud

    echo $IP | ip2cloud

  • ip2provider

    ./ip2provider.py $IP

List s3 Bucket Content (Without Authentication)

# Anonymous listing with --no-sign-request
aws s3 ls s3://$Host --no-sign-request

# Specify a region explicitly
aws s3 ls s3://$Host --no-sign-request --region us-east-1
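Bucket names are guessable, so the anonymous listing above is often paired with simple name permutations of the target domain. A tiny hypothetical sketch; the permutation rules here are illustrative, not exhaustive:

```shell
# Hypothetical generator: candidate bucket names derived from a domain,
# each meant to be tried with: aws s3 ls s3://<name> --no-sign-request
candidates() {
  d="$1"
  echo "$d"               # the exact domain
  echo "${d%%.*}"         # just the leftmost label
  echo "$d" | tr '.' '-'  # dots replaced with dashes
}

candidates files.example.com
```

For `files.example.com` this prints `files.example.com`, `files`, and `files-example-com`; real wordlists add prefixes/suffixes like `-backup`, `-dev`, and `-logs`.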

Copy and Download s3 Bucket Content (Without Authentication)

# Copy a specific file
aws s3 cp s3://$Host/File_Name File_Name --no-sign-request --region us-east-1

# Download all files
aws s3 sync s3://$Host/ Folder_Name --no-sign-request --region us-east-1

This post is licensed under CC BY 4.0 by the author.