Showing posts with label Cloud Computing.

AWS CloudFormation template to create a CloudWatch Event rule to trigger an ECS Task

                             


CloudFormation template that will create the below resources:

  • IAM role for the ECS task and CloudWatch rule.
  • CloudWatch schedule rule (cron) to trigger the task definition.


Template

AWSTemplateFormatVersion: 2010-09-09
Description: | 
              1. IAM Role to be used by the ECS task and CloudWatch event rule.
              2. CloudWatch rule to trigger ECS tasks.
             
Parameters:
  ProductName:
    Description: Parent Product name.
    Type: String
    Default: cyberkeeda
  ProjectName:
    Description: Project Name
    Type: String
    Default: cyberkeeda-report
  Environment:
    Description: The equivalent CN name of the environment being worked on
    Type: String
    AllowedValues:
      - dev
      - uat
      - qa
  Region:
    Description: Ck Region specific parameter
    Type: String
    AllowedValues:
      - mum
      - hyd
  ECSClusterARN:
    Description: ECS Cluster ARN to schedule Task 
    Type: String
    Default: None
  CWEventRuleCron:
    Description: Cron Expression to schedule ECS task. 
    Type: String
    Default: "cron(0 9 * * ? *)"
  ECSTaskDefARN:
    Description: ARN for ECS Task definition
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - 
        Label:
          default: Project based details
        Parameters:
          - ProductName
          - ProjectName
          - Environment
          - Region
      - 
        Label:
          default: ECS details.
        Parameters:
          - ECSClusterARN
          - ECSTaskDefARN
          - CWEventRuleCron
      
Resources:
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-role"
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ 'ecs-tasks.amazonaws.com', 'events.amazonaws.com' ]
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
      Policies:
      - PolicyName: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-role-inlinePolicy"
        PolicyDocument: 
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                - ecs:RunTask
                Resource:
                - !Sub "${ECSTaskDefARN}:*"
              - Effect: Allow
                Action: iam:PassRole
                Resource:
                - "*"
                Condition:
                  StringLike:
                    iam:PassedToService: ecs-tasks.amazonaws.com
  TaskSchedule:
    Type: AWS::Events::Rule
    Properties:
      Description: Trigger Cyberkeeda Daily ECS task
      Name: !Sub  "${ProductName}-${Region}-${Environment}-${ProjectName}-daily-event-rule"
      ScheduleExpression: !Ref CWEventRuleCron
      State: ENABLED
      Targets:
        - Id: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-daily-event-rule-targetId"
          EcsParameters:
            LaunchType: EC2
            TaskDefinitionArn: !Ref ECSTaskDefARN
            TaskCount: 1
          RoleArn:
            Fn::GetAtt:
            - ExecutionRole
            - Arn
          Arn: !Ref ECSClusterARN
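
The default CWEventRuleCron value, cron(0 9 * * ? *), fires the task daily at 09:00 UTC. AWS schedule expressions carry six fields rather than the classic five; a small helper (an illustrative sketch, not part of the template) can decompose one:

```python
# Decompose an AWS schedule expression such as "cron(0 9 * * ? *)"
# into its six named fields. Illustrative helper only; AWS itself
# performs the real parsing and scheduling.
def parse_aws_cron(expression: str) -> dict:
    if not (expression.startswith("cron(") and expression.endswith(")")):
        raise ValueError("expected cron(<six fields>)")
    fields = expression[len("cron("):-1].split()
    if len(fields) != 6:
        raise ValueError("AWS cron expressions have exactly six fields")
    names = ["minutes", "hours", "day_of_month", "month", "day_of_week", "year"]
    return dict(zip(names, fields))

parsed = parse_aws_cron("cron(0 9 * * ? *)")
print(parsed["hours"])  # "9" -> the rule fires daily at 09:00 UTC
```

Note the sixth field (year) and the "?" placeholder for day-of-week, which plain Unix cron does not have.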

Let me know if you have any questions in the comment box.


AWS CloudFormation template to create an ECS Task definition

 



CloudFormation template that will create the below resources:

  • IAM role for ECS Task execution
  • ECS Task definition


Template

AWSTemplateFormatVersion: 2010-09-09
Description: | 
              ECS Task is responsible for fetching files from an SFTP location.
              1. IAM Role to be used by the ECS task and CloudWatch event rule.
              2. ECS task definition with container env variables; please note the credential needs to be created first within Parameter Store.
             
Parameters:
  ProductName:
    Description: Parent Product name.
    Type: String
    Default: cyberkeeda
  ProjectName:
    Description: Project Name
    Type: String
    Default: cyberkeeda-report
  Environment:
    Description: The equivalent CN name of the environment being worked on
    Type: String
    AllowedValues:
      - dev
      - uat
      - qa
  Region:
    Description: Ck Region specific parameter
    Type: String
    AllowedValues:
      - mum
      - hyd
  ECSTaskDefARN:
    Description: ARN for ECS Task definition
    Type: String
  SFTPHostFQDN:
    Description: Remote SFTP Host FQDN.
    Type: String
    Default: 123.111.11.1
  SFTPHostPort:
    Description: Remote SFTP Host Port.
    Type: String
    Default: 22
  SFTPUserName:
    Description: Remote SFTP Host username.
    Type: String
    Default: sftpadmin
  SFTPPasswordParameterStoreName:
    Description: Remote SFTP Host Parameter store name.
    Type: String
    Default: sftppass
  ContainerImageUrlwithTag:
    Description: Container Image URL with tag.
    Type: String
    Default: docker.io/jackuna/sftpnew
  ECSClusterARN:
    Description: ECS Cluster ARN to schedule Task 
    Type: String
    Default: arn:aws:ecs:ap-south-1:895678824142:cluster/sftp

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - 
        Label:
          default: CK Project Details
        Parameters:
          - ProductName
          - ProjectName
          - Environment
          - Region
      - 
        Label:
          default: Remote SFTP Server details used as Container Environment Variables.
        Parameters:
          - SFTPHostFQDN
          - SFTPHostPort
          - SFTPUserName
          - SFTPPasswordParameterStoreName
      
Resources:
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-role"
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ 'ecs-tasks.amazonaws.com', 'events.amazonaws.com' ]
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
      Policies:
      - PolicyName: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-role-inlinePolicy"
        PolicyDocument: 
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                - ssm:GetParameters
                Resource:
                - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${Environment}.sftp-password" 
              - Effect: Allow
                Action:
                - ecs:RunTask
                Resource:
                - !Sub "${ECSTaskDefARN}:*"
              - Effect: Allow
                Action: iam:PassRole
                Resource:
                - "*"
                Condition:
                  StringLike:
                    iam:PassedToService: ecs-tasks.amazonaws.com
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-ecs-task"
      Memory: 128
      NetworkMode: bridge 
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn : !Ref ExecutionRole
      ContainerDefinitions:
        - Name: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-container"
          Image: !Ref ContainerImageUrlwithTag
          Memory: 128
          Cpu: 0
          MountPoints: 
            - 
              SourceVolume: "ecs-logs"
              ContainerPath: "/var/log/ecs"
          Command: 
            - python
            - sftp_python.py
          WorkingDirectory: "/usr/local/aws-swa"
          Secrets:
            - 
              Name: SFTP_PASSWORD
              ValueFrom: !Sub ${Environment}.sftp-password
          Environment: 
            - 
              Name: APPLICATION_LOGS
              Value: !Sub  "/var/log/ecs/${ProductName}-${Region}-${Environment}-${ProjectName}-ecs-task.logs"
            - 
              Name: SFTP_HOST
              Value: !Ref SFTPHostFQDN
            - 
              Name: SFTP_PORT
              Value: !Ref SFTPHostPort
            - 
              Name: SFTP_USERNAME
              Value: !Ref SFTPUserName

      RequiresCompatibilities:
        - EC2
      Volumes: 
        - 
          Host: 
            SourcePath: "/var/log/ecs"
          Name: "ecs-logs"
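
Inside the container, the script referenced by Command (here sftp_python.py) sees the Environment entries, and the Secrets entry resolved from Parameter Store, as ordinary environment variables. A minimal sketch of that read pattern (variable names taken from the task definition above; the function name is illustrative):

```python
import os

# Read the connection settings injected by the ECS task definition.
# SFTP_PASSWORD arrives via the Secrets entry (SSM Parameter Store);
# the rest come from plain Environment entries.
def load_sftp_config() -> dict:
    return {
        "host": os.environ["SFTP_HOST"],
        "port": int(os.environ.get("SFTP_PORT", "22")),
        "username": os.environ["SFTP_USERNAME"],
        "password": os.environ["SFTP_PASSWORD"],
        "log_file": os.environ.get("APPLICATION_LOGS", "/var/log/ecs/task.logs"),
    }
```

The application code never stores the password; ECS injects it at task start, which is the point of using Secrets over a plain Environment entry.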

Let me know if you have any questions in the comment box.


AWS Managed Policy to Restrict IAM User to Access AWS Resource from Specific IP Address.

 




AWS Managed Policy

Within this blog post, we will cover how we can use an IAM managed policy to create an IAM user boundary that limits a user to the below operations.

  • AWS S3 limited access [Get, Put, List]
  • S3 access from only a single IP address.
Syntax Template

AWSTemplateFormatVersion: 2010-09-09
Description: CFN to create ManagedPolicy

Resources:
  IBDSReconUserBoundaryPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: A ManagedPolicy meant to restrict user based upon ingress IP.
      ManagedPolicyName: my_s3_user_boundary
      Path: /
      Users:
      - my_s3_user
      PolicyDocument:
            Version: '2012-10-17'
            Statement:
            - Effect: Allow
              Action:
              - s3:ListBucket
              - s3:GetBucketLocation
              Resource: arn:aws:s3:::my-randon-s3-bucket
            - Effect: Allow
              Action:
              - s3:PutObject
              - s3:PutObjectAcl
              Resource: arn:aws:s3:::my-randon-s3-bucket/bucketfiles/*
              Condition:
                IpAddressIfExists:
                  aws:SourceIp: 123.345.657.12
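
The IpAddressIfExists condition is evaluated by AWS at request time, but its effect (does the caller's source IP fall inside the allowed block?) can be sanity-checked locally with Python's ipaddress module. A sketch, using documentation-range addresses as stand-ins since the sample IP in the policy is only a placeholder:

```python
import ipaddress

# Locally mimic the IpAddress-style condition check: is a caller's
# source IP inside the allowed CIDR block? Sketch only; AWS performs
# the real evaluation. 203.0.113.0/24 and 198.51.100.7 are reserved
# documentation addresses used as stand-ins.
def ip_allowed(source_ip: str, allowed_cidr: str) -> bool:
    return ipaddress.ip_address(source_ip) in ipaddress.ip_network(allowed_cidr)

print(ip_allowed("203.0.113.12", "203.0.113.0/24"))  # True
print(ip_allowed("198.51.100.7", "203.0.113.0/24"))  # False
```

A single address in the policy behaves like a /32 block: only that exact IP matches.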


AWS WAF : Web Application Firewall to control access to a CloudFront public domain URL using IPSet rules.

 

AWS WAFv2

AWS Web Application Firewall (WAF) is one of the services that can be used to inspect, control, and manage web requests.
WAF uses one or more rules to allow, limit, or block requests, per the request statement provided within each rule.
AWS WAF and its corresponding rules can be attached to multiple AWS services, such as:
  • Amazon CloudFront distribution
  • Amazon API Gateway REST API
  • Application Load Balancer
In this blog post, we will cover how to restrict our AWS CloudFront distribution URL to a limited set of IP addresses only.

Lab Setup : We have a CloudFront distribution, with both HTTP and HTTPS access, mapped to the respective AWS S3 buckets. As we know, once we have created a CloudFront distribution, AWS provides a publicly accessible CloudFront domain URL through which we can access our S3 content; the URL looks similar to http://d3ocffr25e-somerandom.cloudfront.ne

We will use AWS WAF to block access to our CloudFront domain from every IP other than the ones we have whitelisted within our IP sets.

CloudFormation Template to create the below resources.
  • IP Set : AWS::WAFv2::IPSet
  • Web ACLv2 : AWS::WAFv2::WebACL
  • Custom Response Body : CustomResponseBodies
  • Rules : IPSetReferenceStatement
  • CloudWatch Metrics
  • Sample Request Dashboard

Note : After creation of the below resources, the Web ACL has to be associated with the CloudFront distribution from the CloudFront console.

Syntax Template

---
AWSTemplateFormatVersion: '2010-09-09'
Description: >
  This template will create a WAFv2 IPSet, WebACL with CloudFront scope, Web ACL rules, CloudWatch metrics and a
  Sample Request Dashboard with the last 3 hours of data.

Parameters:
  env:
    Description: The environment name being worked on
    Type: String
    AllowedValues:
      - prod
      - non-prod
  CloudFrontInfo:
    Description: Cloudfront Name.
    Type: String
  IPSetname:
    Description: The short name to identify ipsets.
    Type: String
  IPSetDescription:
    Description: Short description to identify ipsets.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      -
        Label:
          default: Environment
        Parameters:
          - env
      - 
        Label:
          default: IP Set Details
        Parameters:
          - IPSetname
          - IPSetDescription

Resources:
  SampleIPSet:
      Type: 'AWS::WAFv2::IPSet'
      Properties:
        Description: !Sub "${IPSetDescription}"
        Name: !Sub "${IPSetname}"
        Scope: CLOUDFRONT
        IPAddressVersion: IPV4
        Addresses:
          - 111.11.11.11/32 # Random-Pub-IP-1
          - 122.12.12.1/32   #  Random-Pub-IP-2
          - 133.225.192.0/18  #  Random-Pub-IP-3

  CDNAccessIPRestrictionWebACL:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACL"
      Scope: CLOUDFRONT
      DefaultAction:
        Block:
          CustomResponse:
            ResponseCode: 401
            CustomResponseBodyKey: Unauthorized
      Description: !Sub "To limit access of Cloudfront ${CloudFrontInfo} from known IP ranges only"
      Rules:
        - Name: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACL-Rule1"
          Priority: 0
          Statement:
            IPSetReferenceStatement:
              Arn: !GetAtt SampleIPSet.Arn
          Action:
            Allow: {}
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-IpLimitationRule"
      VisibilityConfig:
        SampledRequestsEnabled: true
        CloudWatchMetricsEnabled: true
        MetricName: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACLMetric"
      CustomResponseBodies:
        Unauthorized:
          ContentType: TEXT_PLAIN
          Content: Unauthorized !

We will go through each section of the CloudFormation template to understand it.

MetaData and Interface MetaData : follow the link to know more about it.

Going further into the Resources section, our WAF-relevant resources are defined.


AWS::WAFv2::IPSet
This part of the CFN resource is used to define our custom set of IP addresses, which will later be used while defining our WAF rules. Let's look at those properties.

SampleIPSet:
      Type: 'AWS::WAFv2::IPSet'
      Properties:
        Description: !Sub "${IPSetDescription}"
        Name: !Sub "${IPSetname}"
        Scope: CLOUDFRONT
        IPAddressVersion: IPV4
        Addresses:
          - 111.11.11.11/32 # Random-Pub-IP-1
          - 122.12.12.1/32   #  Random-Pub-IP-2
          - 133.225.192.0/18  #  Random-Pub-IP-3

Scope
Scope can be either REGIONAL or CLOUDFRONT.
As CloudFront (CDN) is a global service, it's treated as a separate scope.
Note : Please note, resources with this scope can be created within the us-east-1 region only.


Addresses
Used to define the set of IPs using valid CIDR notation.

Example :
          - 111.11.11.11/32   # /32 defines a single IP
          - 122.12.12.1/32    # /32 defines a single IP
          - 133.225.192.0/18  # /18 defines an entire subnet range
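
The prefix length controls how many addresses each entry covers; Python's ipaddress module can confirm this for the sample CIDRs above (a local sanity-check sketch, unrelated to the template itself):

```python
import ipaddress

# A /32 entry matches exactly one address; a /18 covers a whole range.
single = ipaddress.ip_network("111.11.11.11/32")
subnet = ipaddress.ip_network("133.225.192.0/18")
print(single.num_addresses)  # 1
print(subnet.num_addresses)  # 16384
```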


Custom Response Body : CustomResponseBodies

This part of the CFN resource is used to define our custom response, which can be of any supported type ( plain text, HTML, JSON ); here we have used plain text.


      CustomResponseBodies:
        Unauthorized:
          ContentType: TEXT_PLAIN
          Content: Unauthorized !



Rules

This part of the WAF ACL CFN resource is used to define custom rules ( Allow, Block ) based upon statements.


  • In our example, we are allowing the IP set with priority 0 ( highest priority ), and it's linked to the ARN of the same IP set we created above.
  • In addition to that, we are creating a CloudWatch metric for each allowed request.
  • And at last we are enabling Sampled Requests, which will show us the real-time calls/requests.

Rules:
        - Name: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACL-Rule1"
          Priority: 0
          Statement:
            IPSetReferenceStatement:
              Arn: !GetAtt SampleIPSet.Arn
          Action:
            Allow: {}
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-IpLimitationRule"


Web ACLv2 : AWS::WAFv2::WebACL
This part of the CFN resource is used to define the WAFv2 ACL, and it covers the below parts.
  • It has a scope of CLOUDFRONT.
  • The default action is to block everything, responding with HTTP status code 401 and the custom response body we created above.
  • It will create a CloudWatch metric for all blocked requests.
  • It will also create a panel for all real-time blocked requests/calls.

 CDNAccessIPRestrictionWebACL:
    Type: AWS::WAFv2::WebACL
    Properties:
      Name: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACL"
      Scope: CLOUDFRONT
      DefaultAction:
        Block:
          CustomResponse:
            ResponseCode: 401
            CustomResponseBodyKey: Unauthorized
      Description: !Sub "To limit access of Cloudfront ${CloudFrontInfo} from known IP ranges only"
      Rules:
        - Name: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACL-Rule1"
          Priority: 0
          Statement:
            IPSetReferenceStatement:
              Arn: !GetAtt SampleIPSet.Arn
          Action:
            Allow: {}
          VisibilityConfig:
            SampledRequestsEnabled: true
            CloudWatchMetricsEnabled: true
            MetricName: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-IpLimitationRule"
      VisibilityConfig:
        SampledRequestsEnabled: true
        CloudWatchMetricsEnabled: true
        MetricName: !Sub "myproject-${env}-cdn-${CloudFrontInfo}-WebACLMetric"

  • This part creates the AWS WAFv2 WebACL.



AWS S3 - Cross accounts copy data from one bucket to another.

Within this post, we will cover:

  • How to allow data copy between AWS cross-account S3 buckets.
  • How data from a bucket in one account can be copied to an S3 bucket in another AWS account.

Setup is exactly similar to our last blog post : Link 

We have two different buckets, and one file under each bucket, within different AWS accounts.
  • Bucket 1 name : cyberkeeda-bucket-account-a --> demo-file-A.txt
  • Bucket 2 name : cyberkeeda-bucket-account-b --> demo-file-B.txt


We will start with the bucket on Account B, modifying a few things to allow our source bucket's account owner to access our destination bucket.

We will assume we already have a bucket on Account B, with all public access to the bucket denied, so we need to modify/add the below changes within the destination bucket's Permissions tab.

All the below modifications are done at our destination account, B.
  • Modify Public Access Rights : S3 --> choose your destination bucket --> Permissions tab --> Click on Block Public Access --> Edit.
    • Uncheck : Block Public Access
    • Check : Block public access to buckets and objects granted through new access control lists (ACLs)
    • Check : Block public access to buckets and objects granted through any access control lists (ACLs)
    • Check : Block public access to buckets and objects granted through new public bucket or access point policies
    • Uncheck : Block public and cross-account access to buckets and objects through any public bucket or access point policies
  • In the above manner we are blocking every kind of public access except AWS cross-account access.
  • Add a bucket policy to allow read, write access to Account A:
    • S3 --> choose your destination bucket --> Permissions tab --> Click on Bucket Policy --> Add the below lines.
    • Replace the AWS account number with your source bucket owner's account number; here the source account is the Account-A number.
    • And replace the bucket with the destination bucket name; here our destination bucket name is cyberkeeda-bucket-account-b.
    • Update the variables Source Account number and Destination bucket name and save it.
{
    "Version": "2012-10-17",
    "Id": "Policy1586529665189",
    "Statement": [
        {
            "Sid": "SidtoAllowCrossAccountAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::387789623977:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::cyberkeeda-bucket-account-b",
                "arn:aws:s3:::cyberkeeda-bucket-account-b/*"
            ]
        }
    ]
}
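
If you prefer to generate this policy from code rather than pasting it into the console, the same document can be built and serialized programmatically. A sketch, using the sample account ID and bucket name from the policy above (applying it to the bucket would then use an API such as S3's put-bucket-policy; the function name here is illustrative):

```python
import json

# Build the cross-account bucket policy as a plain dict, parameterized
# on the source account ID and the destination bucket name.
def cross_account_policy(source_account_id: str, dest_bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "SidtoAllowCrossAccountAccess",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{source_account_id}:root"},
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{dest_bucket}",
                f"arn:aws:s3:::{dest_bucket}/*",
            ],
        }],
    }

policy_json = json.dumps(
    cross_account_policy("387789623977", "cyberkeeda-bucket-account-b"), indent=4)
```

Note that the Resource list needs both the bucket ARN (for list operations) and the /* object ARN (for object operations).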

We are done with all the required changes at the destination bucket in Account B; now let's move on and do the needful at Account A.

All the below changes are made at Account A ( Source Account ).

Link for the CloudFormation script : Link
Use the above CloudFormation script to create an instance-based IAM role, and replace the destination bucket with the bucket name from Account B.

  • Stack Name : Name of the stack ( could be anything )
  • Source Bucket name : Name of the bucket we want to copy data from, the Account A bucket (cyberkeeda-bucket-account-a)
  • Destination Bucket name : Name of the bucket we want to copy data to, the Account B bucket (cyberkeeda-bucket-account-b)
  • Role Name : Name of your IAM role ( could be anything )
  • Inline Policy : Name of your policy, which will allow list, get, put object permissions to the buckets ( could be anything )
  • Once the stack is created, follow the same process to attach the IAM role to the instance; after that we can use AWS CLI commands such as ls, cp, and sync.

Note
  1. This is really important to share: whenever we copy any data/object from a source S3 bucket to a destination bucket across accounts, use sync with --acl bucket-owner-full-control.
  2. This is mandatory; otherwise you can still copy, but the destination bucket owner will be unable to view/download any file/object uploaded from the source account.

Now use the below AWS CLI command to sync all files/content from one bucket to another, with the ACL set to bucket-owner-full-control.

 aws s3 sync --acl bucket-owner-full-control s3://cyberkeeda-bucket-account-a/  s3://cyberkeeda-bucket-account-b/

You can see a stream of data copying as STDOUT after the command is executed.




AWS S3 - Copy data from one bucket to another without storing credentials anywhere.


Within this post, we will cover:

  • How to automate copying or syncing data/objects from one bucket to another.
  • How we can use an EC2 instance to copy data from one bucket to another.
  • We will leverage the power of an AWS IAM role and the AWS S3 CLI to accomplish our requirement.
  • An AWS CloudFormation script to create the IAM role and inline policy.


So let's go through our lab setup; you can map it to your own requirement by replacing the variables.

  • We already have an EC2 instance within region ap-south-1 ( Mumbai ).
  • Since S3 is region independent, we will not be highlighting it here.
  • We have two different buckets, and one file under each bucket, within the same AWS account:
    • Bucket 1 name : cyberkeeda-bucket-a --> demo-file-A.txt
    • Bucket 2 name : cyberkeeda-bucket-b --> demo-file-B.txt
  • We will copy data from cyberkeeda-bucket-a to cyberkeeda-bucket-b by running AWS CLI commands from our EC2 instance.
  • The above task can be done using AWS CLI commands from any host, but the major difference is that one needs to store credentials by running the aws configure command.
  • We will bypass the aws configure command by assigning an instance-profile IAM role.
  • We will create an IAM role with an inline policy.
  • We will use a CloudFormation script to create the required role.

A few things we must know about IAM roles before proceeding further:

  • IAM Role : An IAM role is a set of permissions created to initiate various AWS service requests; an AWS service request means a request made to services like S3, EC2, Lambda, etc.
  • IAM roles are not attached to any user or group; they're assumed by other AWS services ( EC2, Lambda ) or applications.
  • Policy : A policy can be defined as a set of permissions allowed/denied to a role, user, or group.
  • Managed Policy : A policy created with reusability in mind; create one and map it to multiple users/services/roles.
  • Inline Policy : A policy created for a one-to-one mapping between policy and entity.

CloudFormation Script to create IAM Role and Inline Policy.


AWSTemplateFormatVersion: 2010-09-09
Description: |
  CFN Script to create role and inline policy for ec2 instance.
  Will be used further to transfer data from Source bucket to Destination bucket.
  Author - Jackuna ( https://github.com/Jackuna)

Parameters:
  RoleName:
    Type: String
    Description: Provide Role Name that will be assumed by EC2. [a-z][a-z0-9]*
  InlinePolicyName:
    Type: String
    Description: Provide Inline Policy name, it will be attached with the above created role. [a-z][a-z0-9]*
  SourceBucketName:
    Type: String
    Description: Provide Source Bucket name [a-z][a-z0-9]*
  DestinationBucketName:
    Type: String
    Description: Provide Destination Bucket name [a-z][a-z0-9]*

Resources:
  RootRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub "${RoleName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: ["ec2.amazonaws.com"]
            Action: ['sts:AssumeRole']
      Policies:
        - PolicyName: !Sub ${InlinePolicyName}
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                - s3:ListBucket
                - s3:PutObject
                - s3:GetObject
                Resource:
                - !Sub arn:aws:s3:::${SourceBucketName}/*
                - !Sub arn:aws:s3:::${SourceBucketName}
              - Effect: Allow
                Action:
                - s3:ListBucket
                - s3:PutObject
                - s3:GetObject
                Resource:
                - !Sub arn:aws:s3:::${DestinationBucketName}/*
                - !Sub arn:aws:s3:::${DestinationBucketName}
  RootInstanceProfile:
    Type: 'AWS::IAM::InstanceProfile'
    DependsOn:
      - RootRole
    Properties:
      Path: /
      InstanceProfileName: !Sub "${RoleName}"
      Roles:
      - !Ref RoleName

Outputs:
  RoleDetails:
    Description: Role Name
    Value: !Ref RootRole
  PolicyDetails:
    Description: Inline Policy Name
    Value: !Ref InlinePolicyName


Steps to use the above CloudFormation script:
  • Copy the above content, save it into a file, and name it iam_policy_role.yaml
  • Go to AWS Console --> Services --> CloudFormation --> Create Stack
  • Choose the options Template is ready and Upload a template file, upload your saved template iam_policy_role.yaml --> Next

  • The next page will ask you for the required parameters as input; we will fill it as per our lab setup and requirement.
    • Stack Name : Name of the stack ( could be anything )
    • Source Bucket name : Name of the bucket we want to copy data from.
    • Destination Bucket name : Name of the bucket where we want to copy data to from our source bucket.
    • Role Name : Name of your IAM role ( could be anything )
    • Inline Policy : Name of your policy, which will allow list, get, put object permissions to buckets ( could be anything )

  • Click Next --> Click Next again, then click on the check box to agree --> Then create the stack.
  • The next screen will show the CloudFormation stack creation progress; wait and use the refresh button until the stack status says it's completed.

  • Once the stack status reads completed, click on the Outputs tab and verify the names of your created resources.
  • Now move over to the IAM console and search for the above-created role.
  • Once verified, we can go to our EC2 instance, where we will attach the above-created role to give access to the S3 buckets.
  • AWS Console → EC2 → Search instance → yourInstanceName → Right Click → Instance Settings → Attach/Replace IAM Role → Choose the above-created IAM role (s3_copy_data_between_buckets_role) --> Apply


Now we are ready to test, verify, and further automate it using a cron job.
  • Log in to your EC2 instance.
  • Run the below commands to verify you have proper access to both S3 buckets.
List content within bucket.

 aws s3 ls s3://cyberkeeda-bucket-a/
aws s3 ls s3://cyberkeeda-bucket-b/


You can see the output of the above commands shows the files from the different buckets.

Copy a file from one bucket to another.

  • Now we will try to copy the file named demo-file-A.txt from bucket cyberkeeda-bucket-a to cyberkeeda-bucket-b


 aws s3 cp s3://SOURCE-BUCKET-NAME/FILE-NAME s3://DESTINATION-BUCKET-NAME/FILE-NAME
aws s3 cp s3://cyberkeeda-bucket-a/demo-file-A.txt  s3://cyberkeeda-bucket-b/demo-file-A.txt
Sync all file/content from one bucket to another.

 aws s3 sync s3://SOURCE-BUCKET-NAME/ s3://DESTINATION-BUCKET-NAME/
aws s3 sync s3://cyberkeeda-bucket-a/  s3://cyberkeeda-bucket-b/
Sync all file/content from one bucket to another with ACL as bucket owner.

 aws s3 sync --acl bucket-owner-full-control s3://cyberkeeda-bucket-a/  s3://cyberkeeda-bucket-b/

That's it for this post; we will cover how to do the same across accounts in the next post.
Feel free to comment if you face any issue implementing it.



Designed By Jackuna