
AWS S3 - Cross-account copy of data from one bucket to another.

Within this post, we will cover.

  • How to allow copying data between S3 buckets in different AWS accounts.
  • How data in a bucket owned by one account can be copied to an S3 bucket in another AWS account.

The setup is the same as in our last blog post : Link

We have two buckets, each containing one file, in two different AWS accounts.
  • Bucket 1 name : cyberkeeda-bucket-account-a --> demo-file-A.txt
  • Bucket 2 name : cyberkeeda-bucket-account-b --> demo-file-B.txt


We will start with the bucket in Account B and modify a few things so that the source bucket's account owner (Account A) can be given access to our destination bucket.

We will assume we already have a bucket in Account B with all public access blocked, so we need to make the following changes within the destination bucket's Permissions tab.

Below, all modifications are done on our destination account - Account B.
  • Modify Public Access Rights : S3 --> choose your destination bucket --> Permissions tab --> Click on Block Public Access --> Edit.
    • Uncheck : Block all public access
    • Check : Block public access to buckets and objects granted through new access control lists (ACLs)
    • Check : Block public access to buckets and objects granted through any access control lists (ACLs)
    • Check : Block public access to buckets and objects granted through new public bucket or access point policies
    • Uncheck : Block public and cross-account access to buckets and objects through any public bucket or access point policies
  • Configured this way, we block every form of public access except cross-account access from other AWS accounts.
  • Add a bucket policy to allow read and write access to Account A ( a CLI equivalent is sketched after the policy below ):
    • S3 --> choose your destination bucket --> Permissions tab --> Click on Bucket Policy --> Add the lines below.
    • Replace the AWS account number with your source bucket owner's account number; here the principal is the Account A number.
    • Replace the bucket with the destination bucket name; here our destination bucket name is cyberkeeda-bucket-account-b.
    • Update the source account number and destination bucket name values and save the policy.
{
    "Version": "2012-10-17",
    "Id": "Policy1586529665189",
    "Statement": [
        {
            "Sid": "SidtoAllowCrossAccountAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::387789623977:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::cyberkeeda-bucket-account-b",
                "arn:aws:s3:::cyberkeeda-bucket-account-b/*"
            ]
        }
    ]
}
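
If you prefer the AWS CLI over the console, the same Block Public Access settings and bucket policy can be applied from the command line. A minimal sketch, assuming the policy above is saved locally as cross-account-policy.json ( the file name is only illustrative ) and that both commands are run with Account B credentials:

 # Keep ACL-based and new-public-policy access blocked, but leave cross-account policy access open
 aws s3api put-public-access-block \
     --bucket cyberkeeda-bucket-account-b \
     --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=false

 # Attach the cross-account bucket policy shown above
 aws s3api put-bucket-policy \
     --bucket cyberkeeda-bucket-account-b \
     --policy file://cross-account-policy.json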

We are done with all the required changes on the destination bucket in Account B; now let's move on and make the required changes in Account A.

All of the changes below are made in Account A ( Source Account ).

Link for the CloudFormation script : Link
Use the above CloudFormation script to create the instance-based IAM role, replacing the destination bucket parameter with the bucket name from Account B.

  • Stack Name : Name of the stack ( could be anything )
  • Source Bucket name : Name of the bucket we want to copy data from, the Account A bucket (cyberkeeda-bucket-account-a)
  • Destination Bucket name : Name of the bucket we want to copy data to, the Account B bucket (cyberkeeda-bucket-account-b)
  • Role Name : Name of your IAM role ( could be anything )
  • Inline Policy : Name of your policy, which will allow list, get and put object permissions on the buckets ( could be anything )
  • Once the stack is created, follow the same process to attach the IAM role to the instance; after that we can use AWS CLI commands such as ls, cp and sync.

Note
  1. This is really important to keep in mind: whenever we copy any data/object from a source S3 bucket to a destination bucket across accounts, use sync with --acl bucket-owner-full-control.
  2. This is mandatory; without it the copy succeeds, but the destination bucket owner will be unable to view/download any file/object uploaded from the source account.

Now use the AWS CLI command below to sync all files/content from one bucket to another with the ACL granting the bucket owner full control.

 aws s3 sync --acl bucket-owner-full-control s3://cyberkeeda-bucket-account-a/  s3://cyberkeeda-bucket-account-b/

You will see a stream of copy operations on STDOUT after the command is executed.
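
A single object can be copied across accounts in the same way; the --acl flag matters just as much for cp as for sync. A minimal sketch, plus an optional check ( run with Account B credentials ) that the destination bucket owner really received full control of the copied object:

 # Copy one object cross-account, granting the destination bucket owner full control
 aws s3 cp --acl bucket-owner-full-control \
     s3://cyberkeeda-bucket-account-a/demo-file-A.txt \
     s3://cyberkeeda-bucket-account-b/demo-file-A.txt

 # From Account B: list the grants on the copied object
 aws s3api get-object-acl --bucket cyberkeeda-bucket-account-b --key demo-file-A.txt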




AWS S3 - Copy data from one bucket to another without storing credentials anywhere.


Within this post, we will cover.

  • How to automate copying or syncing data/objects from one bucket to another.
  • How we can use an EC2 instance to copy data from one bucket to another.
  • We will leverage AWS IAM roles and the AWS S3 CLI to accomplish our requirement.
  • An AWS CloudFormation script to create the IAM role and inline policy.


So let's look at our lab setup; you can map it to your own requirement by replacing the variables.

  • We already have an EC2 instance in region ap-south-1 ( Mumbai ).
  • S3 bucket names are global and the CLI resolves a bucket's region automatically, so we will not highlight regions here.
  • We have two buckets, each containing one file, within the same AWS account:
    • Bucket 1 name : cyberkeeda-bucket-a --> demo-file-A.txt
    • Bucket 2 name : cyberkeeda-bucket-b --> demo-file-B.txt
  • We will copy data from cyberkeeda-bucket-a to cyberkeeda-bucket-b by running AWS CLI commands from our EC2 instance.
  • The task above can be done with the AWS CLI from any host, but then one needs to store credentials via the aws configure command.
  • We will bypass aws configure by assigning an instance profile IAM role ( see the sketch after this list for how the CLI picks up the role's credentials ).
  • We will create an IAM role with an inline policy.
  • We will use a CloudFormation script to create the required role.
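
To see why no stored credentials are needed once the role is attached: the AWS CLI automatically fetches temporary credentials for the instance profile role from the instance metadata service. A minimal check you can run from the instance ( the sketch assumes IMDSv2, which requires a session token ):

 # Request an IMDSv2 session token, then list the role whose temporary credentials the instance exposes
 TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
     -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
 curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
     http://169.254.169.254/latest/meta-data/iam/security-credentials/

 # Confirm the CLI is using the assumed role rather than stored access keys
 aws sts get-caller-identity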

A few things we must know about IAM roles before proceeding further:

  • IAM Role : An IAM role is a set of permissions created to make AWS service requests; by service requests we mean requests to services such as S3, EC2, Lambda, and so on.
  • IAM roles are not attached to any user or group; they are assumed by other AWS services ( e.g. EC2, Lambda ) or by applications.
  • Policy : A policy can be defined as a set of permissions allowed/denied to a role, user or group.
  • Managed Policy : A policy created with reusability in mind; it is created once and can be attached to multiple users/services/roles.
  • Inline Policy : A policy created for a one-to-one mapping between the policy and a single entity.
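
To make the inline vs. managed distinction concrete, here is a minimal AWS CLI sketch ( the role and policy names are only illustrative ):

 # Inline policy: the policy document lives inside the role itself (one-to-one mapping)
 aws iam put-role-policy \
     --role-name demo-ec2-role \
     --policy-name demo-inline-policy \
     --policy-document file://s3-access-policy.json

 # Managed policy: created once, attachable to many roles/users/groups by ARN
 aws iam attach-role-policy \
     --role-name demo-ec2-role \
     --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess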

CloudFormation Script to create IAM Role and Inline Policy.


AWSTemplateFormatVersion: 2010-09-09
Description: >-
  CFN Script to create role and inline policy for ec2 instance.
  Will be used further to transfer data from Source bucket to Destination bucket.
  Author - Jackuna ( https://github.com/Jackuna )

Parameters:
  RoleName:
    Type: String
    Description: Provide Role Name that will be assumed by EC2. [a-z][a-z0-9]*
  InlinePolicyName:
    Type: String
    Description: Provide Inline Policy name, it will be attached with the above created role. [a-z][a-z0-9]*
  SourceBucketName:
    Type: String
    Description: Provide Source Bucket name [a-z][a-z0-9]*
  DestinationBucketName:
    Type: String
    Description: Provide Destination Bucket name [a-z][a-z0-9]*

Resources:
  RootRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub "${RoleName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: ["ec2.amazonaws.com"]
            Action: ['sts:AssumeRole']
      Policies:
        - PolicyName: !Sub "${InlinePolicyName}"
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                - s3:ListBucket
                - s3:PutObject
                - s3:GetObject
                Resource:
                - !Sub arn:aws:s3:::${SourceBucketName}/*
                - !Sub arn:aws:s3:::${SourceBucketName}
              - Effect: Allow
                Action:
                - s3:ListBucket
                - s3:PutObject
                - s3:GetObject
                Resource:
                - !Sub arn:aws:s3:::${DestinationBucketName}/*
                - !Sub arn:aws:s3:::${DestinationBucketName}
  RootInstanceProfile:
    Type: 'AWS::IAM::InstanceProfile'
    DependsOn:
      - RootRole
    Properties:
      Path: /
      InstanceProfileName: !Sub "${RoleName}"
      Roles:
      - !Ref RoleName

Outputs:
  RoleDetails:
    Description: Role Name
    Value: !Ref RootRole
  PolicyDetails:
    Description: Inline Policy Name
    Value: !Ref InlinePolicyName
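
Before using the template, you can optionally sanity-check the YAML with the AWS CLI ( assuming it is saved as iam_policy_role.yaml, as in the steps below ):

 aws cloudformation validate-template --template-body file://iam_policy_role.yaml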


Steps to use the above CloudFormation script:
  • Copy the above content, save it into a file and name it iam_policy_role.yaml
  • Go to AWS Console --> Services --> CloudFormation --> Create Stack
  • Choose the options : Template is ready and Upload a template file, then upload your saved template iam_policy_role.yaml --> Next

  • The next page will ask for the required parameters as input; we will fill them in as per our lab setup and requirement.
    • Stack Name : Name of the stack ( could be anything )
    • Source Bucket name : Name of the bucket we want to copy data from.
    • Destination Bucket name : Name of the bucket we want to copy data to from our source bucket.
    • Role Name : Name of your IAM role ( could be anything )
    • Inline Policy : Name of your policy, which will allow list, get and put object permissions on the buckets ( could be anything )

  • Click Next --> Click Next again, tick the acknowledgement check box --> Then Create Stack.
  • The next screen shows the CloudFormation stack creation progress; wait and use the refresh button until the stack status says it is complete.

  • Once the stack status is complete, click on the Outputs tab and verify the names of your created resources.
  • Now navigate to the IAM console and search for the role created above.
  • Once verified, we can go to our EC2 instance, where we will attach the role created above to grant access to the S3 buckets.
  • AWS Console → EC2 → Search instance → your instance name → Right Click → Instance Settings → Attach/Replace IAM Role → Choose the above created IAM role (s3_copy_data_between_buckets_role) --> Apply
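
If you prefer the CLI over the console, the stack creation and role attachment above can be done with two commands. A minimal sketch, assuming the parameter values from our lab setup ( the stack name, inline policy name and instance ID are only illustrative ):

 # Create the stack; CAPABILITY_NAMED_IAM is required because the template names the IAM role
 aws cloudformation create-stack \
     --stack-name s3-copy-role-stack \
     --template-body file://iam_policy_role.yaml \
     --parameters ParameterKey=RoleName,ParameterValue=s3_copy_data_between_buckets_role \
                  ParameterKey=InlinePolicyName,ParameterValue=s3-copy-inline-policy \
                  ParameterKey=SourceBucketName,ParameterValue=cyberkeeda-bucket-a \
                  ParameterKey=DestinationBucketName,ParameterValue=cyberkeeda-bucket-b \
     --capabilities CAPABILITY_NAMED_IAM

 # Attach the created instance profile to the EC2 instance
 aws ec2 associate-iam-instance-profile \
     --instance-id i-0123456789abcdef0 \
     --iam-instance-profile Name=s3_copy_data_between_buckets_role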


Now we are ready to test and verify, and later automate it using a cron job.
  • Log in to your EC2 instance.
  • Run the commands below to verify that you have proper access to both S3 buckets.
List the content within each bucket.

 aws s3 ls s3://cyberkeeda-bucket-a/
aws s3 ls s3://cyberkeeda-bucket-b/


The output of the commands above lists the files present in each bucket.

Copy file/content from one bucket to another.

  • Now we will try to copy the file demo-file-A.txt from bucket cyberkeeda-bucket-a to cyberkeeda-bucket-b


 aws s3 cp s3://SOURCE-BUCKET-NAME/FILE-NAME s3://DESTINATION-BUCKET-NAME/FILE-NAME
aws s3 cp s3://cyberkeeda-bucket-a/demo-file-A.txt  s3://cyberkeeda-bucket-b/demo-file-A.txt
Sync all file/content from one bucket to another.

 aws s3 sync s3://SOURCE-BUCKET-NAME/ s3://DESTINATION-BUCKET-NAME/
aws s3 sync s3://cyberkeeda-bucket-a/  s3://cyberkeeda-bucket-b/
Sync all file/content from one bucket to another with ACL as bucket owner.

 aws s3 sync --acl bucket-owner-full-control s3://cyberkeeda-bucket-a/  s3://cyberkeeda-bucket-b/
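
As mentioned above, the sync can be automated with a cron job on the instance. A minimal sketch ( add the entry via crontab -e; the hourly schedule, aws binary path and log file path are only illustrative ):

 # Run the cross-bucket sync every hour and append its output to a log file
 0 * * * * /usr/bin/aws s3 sync --acl bucket-owner-full-control s3://cyberkeeda-bucket-a/ s3://cyberkeeda-bucket-b/ >> /var/log/s3-bucket-sync.log 2>&1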

That's it for this post; we will cover how to do the same across AWS accounts in the next post.
Feel free to comment if you face any issue implementing it.




AWS - How to extend a Windows drive volume from its existing size to a larger size

How to extend an EBS volume attached to a Windows Server


To be honest, I'm not a Windows guy; even for small things I need to Google, and even when I have already done a task on a Windows server, the next time I am asked to do the same I tend to have forgotten it, simply because of how infrequently I get Windows-related work.

So why not draft it as a blog post; let's see how to do it.
In this blog post I will cover:
  • How to extend the Windows root EBS device volume.
  • How to extend an additionally attached EBS volume.

Lab Setup details:
  1. We already have an EC2 instance with Windows Server installed on it.
  2. We already have a root volume ( Disk 0 ) attached, of size 30 GB.
  3. We have made 2 additional disk partitions, the D and E drives.
  4. We already have an additional EBS volume ( Disk 1 ) mounted, with the partition named DATA.
  5. We are assuming no unallocated space is present.

How to extend the Windows root EBS device volume.

Final goal : We will add 3 GB of additional disk space to our root EBS volume ( /dev/sda1 ) and extend our D drive partition from 5 GB to 8 GB.

  • Go to the AWS Console and select your desired Windows Server EC2 instance.
  • Under Description, find the block device (/dev/sda1), click it, and from the popup window note the EBS volume ID, then select it.
  • It will redirect you to the EBS Volumes window; confirm the EBS volume ID we noted in the step above and confirm its existing size too.


  • Once confirmed, we are ready to modify the volume size from 30 GB to 33 GB ( a CLI sketch for this volume modification follows these steps ).
  • Select the volume, right-click on it, choose Modify Volume and change it from 30 to 33, as we want to increase it by 3 GB.
  • Confirm and submit, then watch the state until it becomes available again after optimizing.
  • Once completed, we can log in to our Windows EC2 instance and follow the next steps.
  • Open Run --> Paste "diskmgmt.msc" --> Action --> Refresh Disk
  • A new Unallocated space of size 3 GB can be found.
  • Now we are ready to extend our D: drive from 5 GB to 8 GB.

  • Right-click on the D: volume --> Extend Volume --> Next --> the 3 GB of space must appear in the Selected panel --> Finish
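
The volume modification done through the console above can also be performed with the AWS CLI. A minimal sketch ( the volume ID is only illustrative; use the one noted from your instance ):

 # Grow the EBS volume from 30 GB to 33 GB
 aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 33

 # Watch the modification progress until the state reports completed (after optimizing)
 aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0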

We can perform the same steps with our existing additionally attached disk volumes; just identify your EBS volume ID and follow the same procedure.
