
CloudFormation template to create an S3 bucket with tags and public access disabled

 


The CloudFormation template below can be used for the following tasks.
  • Create an S3 bucket.
  • Add tags to the S3 bucket.
  • Disable public access.
Resources:
  PrimaryBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'cyberkeeda-${Environment}-${AppName}-bucket'
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      Tags:
        - Key: Name
          Value: !Sub 'cyberkeeda-${Environment}-${AppName}'
        - Key: Environment
          Value: "Development"
        - Key: Creator
          Value: !Sub "${Creator}"
        - Key: AppName
          Value: !Sub "${AppName}"
        - Key: Unit
          Value: !Sub "${Unit}"
        - Key: Owner
          Value: admin@ck.com
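
The template snippet above resolves several values through !Sub, so the full template also needs a Parameters section declaring them. The block below is a minimal, assumed sketch (parameter names and defaults are illustrative, not taken from the original post):

Parameters:
  Environment:
    Type: String
    Default: dev
    Description: Environment name used in the bucket name and tags.
  AppName:
    Type: String
    Description: Application name used in the bucket name and tags.
  Creator:
    Type: String
    Description: Value for the Creator tag.
  Unit:
    Type: String
    Description: Value for the Unit tag.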


Ansible role to create S3 bucket and directories

 


An Ansible role can be defined as:

  • A collection of multiple playbooks organised into directories for different tasks and operations.
  • A way of maintaining playbooks in a structured and consistent manner.
  • A way of breaking lengthy playbooks into small plays.
  • Something that can be uploaded to Ansible Galaxy and reused like an Ansible library or module.

How can we create Ansible roles?
  • We can use the ansible-galaxy command to download existing roles published on https://galaxy.ansible.com/
  • We can use the ansible-galaxy command to create a new role.
  • When creating a new role, ansible-galaxy creates a directory named after the role, either under the default roles directory /etc/ansible/roles or in the current working directory, as shown below.
The below commands can be used as per need.
  • Ansible Galaxy command to list installed collections
$ ansible-galaxy collection list
  • Ansible Galaxy command to create a role under the default roles directory
$ ansible-galaxy init /etc/ansible/roles/my-role --offline
  • Ansible Galaxy command to create a role in the present working directory
$ ansible-galaxy init my-role
  • Ansible Galaxy command to install a collection from the Ansible Galaxy website
$ ansible-galaxy collection install amazon.aws

The Ansible role directory structure looks like the one below; let's take the example of the role created above, named my-role.
$ ansible-galaxy init my-role
- Role my-role was created successfully
$ tree my-role/
my-role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml

8 directories, 8 files

How can we use roles?
  • So far we have seen how to create a role and what its default directory structure looks like. Ansible roles can be used in three ways:
    • with the roles option: This is the classic way of using roles in a play.
    • tasks level with include_role: You can reuse roles dynamically anywhere in the tasks section of a play using include_role.
    • tasks level with import_role: You can reuse roles statically anywhere in the tasks section of a play using import_role.
Here we will focus on the classic way of using roles, that is, the roles option in a playbook (the task-level variants are sketched below for reference).
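
For completeness, here is a minimal, assumed sketch of the two task-level variants; the role name ansible_role is the one we create later in this post.

---
- hosts: localhost
  gather_facts: False

  tasks:
    - name: Reuse a role dynamically at task level
      include_role:
        name: ansible_role

    - name: Reuse a role statically at task level
      import_role:
        name: ansible_role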

So instead of going through the conventional examples of installing Apache or Nginx, I will share a real-world custom role that performs the following tasks.
  • Create multiple AWS S3 buckets across regions.
  • Create a directory structure within two of the buckets created above.
First, let's go through a playbook that can be used independently to perform the entire operation without creating an Ansible role.

Note: the amazon.aws Galaxy collection must be updated to a recent version in order to use the s3_bucket module.
$ ansible-galaxy collection install amazon.aws
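
The amazon.aws modules also expect the AWS SDK for Python on the control node; if it is not already installed there, something along these lines (an assumption about your environment, not part of the original post) should cover it.

$ pip install boto3 botocore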
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:

    - name: Read environment specific variables.
      include_vars:
          file: "ansible_role/vars/{{ Environment }}/main.yml"

    - name: Create static-ck application buckets in us-east-1 region.
      s3_bucket:
          name: "{{ item }}"
          state: present
          tags:
              Name: "{{ item }}"
              Environment: "{{ Environment }}"
              Owner: "{{ bucketTags[Environment]['Owner'] }}"
          region: us-east-1
          public_access:
              block_public_acls: true
              ignore_public_acls: true
              block_public_policy: true
              restrict_public_buckets: true
      with_items:
          - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
          - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
          - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

    - name: Create static-ck application buckets in us-east-2 region.
      s3_bucket:
          name: "{{ item }}"
          state: present
          tags:
              Name: "{{ item }}"
              Environment: "{{ Environment }}"
              Owner: "{{ bucketTags[Environment]['Owner'] }}"
          region: us-east-2
          public_access:
              block_public_acls: true
              ignore_public_acls: true
              block_public_policy: true
              restrict_public_buckets: true
      with_items:
          - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

    - name: Create empty directories to store build artifacts.
      aws_s3:
          bucket: "{{ item.bucket_name }}"
          object: "{{ item.artifact_dir }}"
          mode: create
      with_items:
          - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
          - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }

    - name: Create empty directories to deploy latest build.
      aws_s3:
          bucket: "{{ item.bucket_name }}"
          object: "{{ item.latest_dir }}"
          mode: create
      with_items:
          - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
          - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }

The above playbook can be triggered independently using the below command.
$ ansible-playbook  -vv --extra-vars "Environment=int" main.yml


The same deployment can be done using an Ansible role by following the steps below.
  • Create a new Ansible role named ansible_role
$ ansible-galaxy init ansible_role
  • Create a new root/entry playbook to initiate the deployment
$ touch root.yml
  • Include the lines below and use the roles option to call our role. Note that we have used the "roles" option to call our newly created role directory named ansible_role. While using the roles option, keep the following points in mind about the main.yml files:
    • When you use the roles option at the play level, for each role ‘x’:
    • If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
    • If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
    • If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
    • If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
    • If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
    • Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role 
  • Below is the directory structure we follow within our newly created role.
root.yml
|
ansible_role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── int
        └── main.yml

From the above directory layout, we have the below files and directories to create.
  • We have divided our tasks into two parts:
    • Create S3 buckets
    • Create directories within S3
    • The two tasks above are defined individually in two different files, named
      • create_s3_bucket.yml
      • create_bucket_directories.yml
    • Whereas ansible_role/tasks/main.yml is the entry point for these two tasks, which we import using the import_tasks option.
$ cat ansible_role/tasks/main.yml
---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml

This is how my other two task files look.

$ cat ansible_role/tasks/create_s3_bucket.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/v1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/v1/artifacts" }
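
The standalone playbook earlier also created the "latest" deployment directories; to keep the role equivalent, a matching task can be appended to the same file. The /v1/latest path below is only an assumed counterpart to the artifact path used above.

- name: Create empty directories to deploy latest build.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.latest_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/v1/latest" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/v1/latest" }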

  • We have added an additional directory named "int", short for the internal environment; following the same pattern, we can create more directories holding environment-specific files for other prod and non-prod environments too (see the sketch after the vars file below).
    • Within the file ansible_role/vars/int/main.yml we define key-value pairs that are used later while running our playbook.
$ cat ansible_role/vars/int/main.yml
---
# default variables
region: us-east-1
ProductName: ck
ProjectName: static-app
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2
regions:
  us-east-1:
    preferredMaintenanceWindow: "sat:06:00-sat:06:30"
  us-east-2:
    preferredMaintenanceWindow: "sat:05:00-sat:05:30"

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-cyberkeeda-bucket-01"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-cyberkeeda-bucket-01"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-cyberkeeda-bucket-01"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-cyberkeeda-bucket-01"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    CreatorID: "admin@cyberkeeda.com"
    Owner: "admin@cyberkeeda.com"

Once the above files are created and saved, we can run our playbook with the below ansible-playbook command.
$ ansible-playbook  -vv --extra-vars "Environment=int" root.yml

Below are the details of the parameters used with the ansible-playbook command.
  • -vv : verbose mode for debugging on STDOUT
  • --extra-vars : key-value pairs to be used within the playbook

Hope this blog post helps someone; please comment in case you have any difficulty following the steps.


Upload File from Local to S3 Bucket using CURL

 

Upload data to S3 bucket using CURL.

This guide, essentially a shell script, will help you upload files into S3 without installing the AWS SDK, Python Boto, or the AWS CLI.


The script's README section defines most of the usage; at a high level the script can:

  • Copy files from a specific directory that fall between a specific date range.
  • Copy specific files using a search filter.
  • Log every file copy to a log file with a timestamp.
  • Resume an interrupted upload without re-uploading files already copied.


#!/bin/bash
#
# Parameters
# $1 => Directory/Folder to search file.
# $2 => AWS Bucket subdirectories 
#       Example -- myAWSs3bucket/folderA/FolderB/FolderC
#                  1.) In case one wants to put files in folderA, use folderA as $2
#                  2.) In case one wants to put files in folderB, use folderA/folderB as $2
#                  3.) In case one wants to put files in folderC, use folderA/folderB/folderC as $2
# $3 => Existence of file from start date in format YYYYMMDD
#       Example --
#                  1.) 20210104 -> 4th January 2021
#                  2.) 20201212 -> 12th December 2020
# $4 => Existence of file up to end date in format YYYYMMDD
#       Example --
#                  1.) 20200322 -> 22nd March 2020
#                  2.) 20201212 -> 12th December 2020
# $5 => File Filter 
#       Example -- We need only specific files from a folder.
#                  1.) 20200122_data_settlement.txt --> Use $5 as *_data_settlement.txt
#                  2.) salesdata-20201215100610.txt --> Use $5 as salesdata-*
#      
# Task - Find files similar to 20200122_data_settlement.txt in location /usr/data/
#        File existence date range 20200322 (22nd March 2020) to 20210104 (4th January 2021)
#        Copy it to AWS S3 bucket's subfolder named as folderA 
#
#     
# Syntax -  ./copy_data_to_S3_via_Curl.sh <LocalFolderLocation> <S3BUCKET-DIRECTORY> <STARTDATE> <ENDDATE> <FILEFILTER>
#
# Usage
#
#        1.) With File Filter
#         ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104  '*data_settlement.txt'
#
#        2.) Without File Filter
#         ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104  
#        
#        3.) Resume a previously interrupted upload
#
#         ./copy_data_to_S3_via_Curl.sh 1 folderA
#
#
#  Flow 
#  1.) The script uses the find command to locate all files matching the parameters and writes them to the file "/tmp/file_inventory.txt"
#  2.) A for loop is then used to read the inventory entries and perform the S3 operations over the HTTPS API
#  3.) The script removes entries from the inventory file after each successful upload.
#  4.) The script writes the successful and failed upload status to the log file "/tmp/file_copy_status.log"
#  5.) In case we want to interrupt and upload the remaining files later, comment line no 62
#        62 find $1 -newermt $3 \! -newermt $4   -iname "$5" >> $inventory
#      To avoid confusion, run the script with the same parameters.
#
#
# Author: Jackuna
#

# Bucket Data
bucket="mys3bucket-data"
s3_access_key="AKgtusjksskXXXXTQTW"
s3_secret_key="KSKKSIS HSNKSLS+ydRQ3Ya37A5NUd1V7QvEwDUZR"

# Files
inventory="/tmp/file_inventory.txt"
logme="/tmp/file_copy_status.log"


if  [ $# == 2 ]; then
  echo "`date` -  Initiating left file upload from old inventory " >> $logme

elif [ $# -eq 5 ]; then
  truncate -s 0 $inventory
  find $1 -newermt $3 \! -newermt $4   -iname "$5" >> $inventory
  echo "`date` - Initiating all file that contains string $5 and found between $3 - $4  upload from new inventory " >> $logme

elif [ $# -eq 4 ]; then
  truncate -s 0 $inventory
  find $1 -newermt $3 \! -newermt $4  >> $inventory
  sed -i 1d $inventory
  echo "`date` - Initiating all file found between $3 - $4  upload from new inventory " >> $logme

else
  echo " Some or all arguments Missing from CLI"
  echo " Usage :  ./copy_data_to_S3_via_Curl.sh <LocalFolderLocation> <S3BUCKET-DIRECTORY> <STARTDATE> <ENDDATE> <FILEFILTER>"
  echo " Open Script README section"
  exit 1
fi

file_list=`cat $inventory`
total_file_count=`cat $inventory|wc -l`


count=0   # counter for successful uploads
for local_file_val in $file_list
do
        aws_folder=$2
        aws_file_name=`echo $local_file_val| rev| cut -d '/' -f1 | rev`
        aws_filepath="/${bucket}/$aws_folder/$aws_file_name"

        # metadata
        contentType="application/x-compressed-tar"
        dateValue=`date -R`
        signature_string="PUT\n\n${contentType}\n${dateValue}\n${aws_filepath}"
        signature_hash=`echo -en ${signature_string} | openssl sha1 -hmac ${s3_secret_key} -binary | base64`


        curl -X PUT -T "$local_file_val" \
    -H "Host: ${bucket}.s3.amazonaws.com" \
    -H "Date: ${dateValue}" \
    -H "Content-Type: ${contentType}" \
    -H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
        https://${bucket}.s3.amazonaws.com/$aws_folder/$aws_file_name

    if [ $? -gt 0 ]; then
            echo "`date` Upload Failed  $local_file_val to $bucket" >> $logme
    else
            echo "`date` Upload Success $local_file_val to $bucket" >> $logme
            count=$((count + 1))
            printf "\rCopy Status -  $count/$total_file_count - Completed "

            sleep 1
            sed -i "/\/$aws_file_name/d" $inventory
    fi

done;
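
As a quick usage check, you can kick off a run with the README's example values and watch the log from a second terminal; the paths and filter below are just those examples.

$ ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104 '*_data_settlement.txt'
$ tail -f /tmp/file_copy_status.log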

Feel free to comment.


AWS CloudFormation Script to Create Lambda Role with Inline Policy for S3 Operations.



Within this blog post we have a requirement to copy data from one bucket to another using a Lambda function; to accomplish this, Lambda needs an IAM role that allows it to act on other AWS services.

So we will use a CloudFormation script to create the below AWS resources.

  • An IAM role for the Lambda service.
  • An inline policy attached to the above role with the below access:
    • Access to two individual buckets.
    • Access to CloudWatch Logs to perform basic log operations.

If you are looking to use it, replace the values listed below with your own.
  • Bucket 1 name : mydemodests1
  • Bucket 2 name : mydemodests2
  • IAM Role name : LambaRoleforS3operation
  • Inline Policy name : LambaRoleforS3operation-InlinePolicy

AWSTemplateFormatVersion: 2010-09-09
Description: Lambda role creation for S3 Operation.

Resources:
  LambdaIAMRole:
    Type: 'AWS::IAM::Role'
    Properties:
      Description: "Lambda IAM Role"
      RoleName: LambaRoleforS3operation
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowLambdaServiceToAssumeRole
            Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: /service-role/
      Policies:
        - PolicyName: "LambaRoleforS3operation-InlinePolicy"
          PolicyDocument: {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": [
                        "logs:CreateLogGroup",
                        "logs:CreateLogStream",
                        "logs:PutLogEvents"
                    ],
                    "Resource": "arn:aws:logs:*:*:*"
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "s3:*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::mydemodests1/*"
                    ]
                },
                {
                    "Effect": "Allow",
                    "Action": [
                        "s3:*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::mydemodests2/*"
                    ]
                }
            ]
          }
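
Optionally (not part of the original template), an Outputs section can expose the role ARN so it is easy to pick up when configuring the Lambda function:

Outputs:
  LambdaRoleArn:
    Description: ARN of the Lambda execution role
    Value: !GetAtt LambdaIAMRole.Arn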


AWS S3 - Copy data from one bucket to another without storing credentials anywhere.


Within this post, we will cover.

  • How to automate copy or sync data/objects from one bucket to another.
  • How we can use an EC2 instance to copy data from one bucket to another.
  • We will leverage the power of AWS IAM Role and AWS S3 CLI to accomplish our requirement.
  • AWS CloudFormation script to create IAM role and Inline Policy.


So let's look at our lab setup; you can map it to your own requirement by replacing the variables.

  • We already have an EC2 instance within region ap-south-1 (Mumbai).
  • Since S3 bucket names are global, we will not be highlighting the bucket region here.
  • We have two different buckets, each with a file, within the same AWS account:
    • Bucket 1 name : cyberkeeda-bucket-a --> demo-file-A.txt
    • Bucket 2 name : cyberkeeda-bucket-b --> demo-file-B.txt
  • We will copy data from cyberkeeda-bucket-a to cyberkeeda-bucket-b by running AWS CLI commands from our EC2 instance.
  • The above task can be done with the AWS CLI from any host, but elsewhere one needs to store credentials via the aws configure command.
  • We will bypass the aws configure command by assigning an instance profile IAM role.
  • We will create an IAM role with an inline policy.
  • We will use a CloudFormation script to create the required role.

A few things we must know about IAM roles before proceeding further:

  • IAM Role : an IAM role is a set of permissions created to make AWS service requests, that is, requests to services like S3, EC2, Lambda, and so on.
  • IAM roles are not attached to any user or group; they are assumed by AWS services (EC2, Lambda) or applications.
  • Policy : a policy can be defined as a set of permissions allowed/denied to a role, user, or group.
  • Managed Policy : a policy created with reusability in mind; it is created once and can be attached to multiple users/services/roles.
  • Inline Policy : a policy created for a one-to-one mapping between the policy and an entity.

CloudFormation Script to create IAM Role and Inline Policy.


AWSTemplateFormatVersion: 2010-09-09
Description: >-
  CFN Script to create role and inline policy for ec2 instance.
  Will be used further to transfer data from Source bucket to Destination bucket.
  Author - Jackuna ( https://github.com/Jackuna)

Parameters:
  RoleName:
    Type: String
    Description: Provide Role Name that will be assumed by EC2. [a-z][a-z0-9]*
  InlinePolicyName:
    Type: String
    Description: Provide Inline Policy name, it will be attached to the above created role. [a-z][a-z0-9]*
  SourceBucketName:
    Type: String
    Description: Provide Source Bucket name [a-z][a-z0-9]*
  DestinationBucketName:
    Type: String
    Description: Provide Destination Bucket name [a-z][a-z0-9]*

Resources:
  RootRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub "${RoleName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: ["ec2.amazonaws.com"]
            Action: ['sts:AssumeRole']
      Policies:
        - PolicyName: !Sub ${InlinePolicyName}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                - s3:ListBucket
                - s3:PutObject
                - s3:GetObject
                Resource:
                - !Sub arn:aws:s3:::${SourceBucketName}/*
                - !Sub arn:aws:s3:::${SourceBucketName}
              - Effect: Allow
                Action:
                - s3:ListBucket
                - s3:PutObject
                - s3:GetObject
                Resource:
                - !Sub arn:aws:s3:::${DestinationBucketName}/*
                - !Sub arn:aws:s3:::${DestinationBucketName}
  RootInstanceProfile:
    Type: 'AWS::IAM::InstanceProfile'
    DependsOn:
      - RootRole
    Properties:
      Path: /
      InstanceProfileName: !Sub "${RoleName}"
      Roles:
      - !Ref RoleName

Outputs:
  RoleDetails:
    Description: Role Name
    Value: !Ref RootRole
  PolicyDetails:
    Description: Inline Policy Name
    Value: !Ref InlinePolicyName
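
If you prefer the CLI over the console steps below, the same stack can be created with a command along these lines; the stack name and parameter values are just this lab's examples, and --capabilities CAPABILITY_NAMED_IAM is required because the template creates a named IAM role.

$ aws cloudformation create-stack \
    --stack-name s3-copy-role-stack \
    --template-body file://iam_policy_role.yaml \
    --parameters \
        ParameterKey=RoleName,ParameterValue=s3_copy_data_between_buckets_role \
        ParameterKey=InlinePolicyName,ParameterValue=s3-copy-inline-policy \
        ParameterKey=SourceBucketName,ParameterValue=cyberkeeda-bucket-a \
        ParameterKey=DestinationBucketName,ParameterValue=cyberkeeda-bucket-b \
    --capabilities CAPABILITY_NAMED_IAM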


Steps to use the above CloudFormation script:
  • Copy the above content and save it into a file named iam_policy_role.yaml
  • Go to AWS Console --> Services --> CloudFormation --> Create Stack
  • Choose the options Template is ready and Upload a template file, then upload your saved template iam_policy_role.yaml --> Next

  • The next page will ask for the required parameters as input; we will fill them in as per our lab setup and requirement.
    • Stack Name : Name of the stack ( could be anything )
    • Source Bucket name : Name of the bucket we want to copy data from.
    • Destination Bucket name : Name of the bucket where we want to copy data to from our source bucket.
    • Role Name : Name of your IAM role ( could be anything )
    • Inline Policy : Name of your policy, which will allow list/get/put object permissions on the buckets ( could be anything )

  • Click Next --> Click Next again, tick the acknowledgement check box --> Then Create Stack.
  • The next screen shows the CloudFormation stack creation progress; wait and use the refresh button until the stack creation status says it's completed.

  • Once the stack status shows complete, click on the Outputs tab and verify the names of the created resources.
  • Now switch to the IAM console and search for the role created above.
  • Once verified, we can go to our EC2 instance, where we will attach the above created role to give it access to the S3 buckets.
  • AWS Console → EC2 → Search instance → yourInstanceName → Right Click → Instance Settings → Attach/Replace IAM Role → Choose the above created IAM role (s3_copy_data_between_buckets_role) --> Apply


Now we are ready to test, verify, and further automate it using a cron job (a crontab sketch is included at the end of this post).
  • Login to your EC2 instance.
  • Run the below commands to verify you have proper access to both S3 buckets.
List the contents of each bucket.

 aws s3 ls s3://cyberkeeda-bucket-a/
aws s3 ls s3://cyberkeeda-bucket-b/


You can see from the output of the above commands that each bucket lists its own file.

Copy file/content from one bucket to another.

  • Now we will try to copy the file named demo-file-A.txt from bucket cyberkeeda-bucket-a to cyberkeeda-bucket-b


 aws s3 cp s3://SOURCE-BUCKET-NAME/FILE-NAME s3://DESTINATION-BUCKET-NAME/FILE-NAME
aws s3 cp s3://cyberkeeda-bucket-a/demo-file-A.txt  s3://cyberkeeda-bucket-b/demo-file-A.txt
Sync all file/content from one bucket to another.

 aws s3 sync s3://SOURCE-BUCKET-NAME/ s3://DESTINATION-BUCKET-NAME/
aws s3 sync s3://cyberkeeda-bucket-a/  s3://cyberkeeda-bucket-b/
Sync all file/content from one bucket to another with ACL as bucket owner.

 aws s3 sync --acl bucket-owner-full-control s3://cyberkeeda-bucket-a/  s3://cyberkeeda-bucket-b/
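
To automate the sync mentioned earlier, a crontab entry on the EC2 instance can run it on a schedule. The daily 02:00 schedule, aws binary path, and log file below are assumptions, not part of the original post.

 # crontab entry: sync cyberkeeda-bucket-a to cyberkeeda-bucket-b every day at 02:00
 0 2 * * * /usr/bin/aws s3 sync s3://cyberkeeda-bucket-a/ s3://cyberkeeda-bucket-b/ >> /var/log/s3-bucket-sync.log 2>&1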

That's it for this post; we will cover how to do the same across AWS accounts in the next post.
Feel free to comment, if you face any issue implementing it.


