
Most used AWS S3 Bucket Policies.

 


Bucket policies are one of the key elements when we talk about security and compliance while using AWS S3 buckets to host our static content.

In this post, below are code snippets of the most commonly used bucket policy documents.


Policy 1 : Enable Public Read access to Bucket objects.

Turning OFF the Block Public Access settings from the S3 bucket's Permissions tab is not sufficient to enable public read access; additionally, you need to add the below bucket policy statement to enable it.


{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"EnablePublicRead",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::ck-public-demo-bucket/*"]
    }
  ]
}
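
To attach a policy document like the one above to the bucket, the AWS CLI can be used; the local file name policy.json is only an assumption for where the JSON is saved.

# Attach the policy saved locally as policy.json, then read it back to verify.
$ aws s3api put-bucket-policy --bucket ck-public-demo-bucket --policy file://policy.json
$ aws s3api get-bucket-policy --bucket ck-public-demo-bucket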


Policy 2 : Allow only HTTPS Connections.

AWS S3 accepts both HTTP and HTTPS requests by default. To force clients to connect only over HTTPS, use the bucket policy document below, which explicitly denies any request made over an insecure (non-TLS) connection.

{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::ck-public-demo-bucket",
        "arn:aws:s3:::ck-public-demo-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
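
A quick way to confirm the behaviour is to request an object over plain HTTP and then over HTTPS; assuming the public-read policy from Policy 1 is also attached and the object key index.html exists, the HTTP request is denied while the HTTPS request succeeds.

# Plain HTTP is rejected with 403 AccessDenied once the deny statement is in place.
$ curl -I http://ck-public-demo-bucket.s3.amazonaws.com/index.html
# HTTPS goes through.
$ curl -I https://ck-public-demo-bucket.s3.amazonaws.com/index.html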


Policy 3 : Allow access only from a specific IP address or range of IP addresses.


{
  "Version": "2012-10-17",
  "Id": "AllowOnlyIpS3Policy",
  "Statement": [
    {
      "Sid": "AllowOnlyIp",
"Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": [ "arn:aws:s3:::ck-public-demo-bucket", "arn:aws:s3:::ck-public-demo-bucket/*" ], "Condition": { "NotIpAddress": {"aws:SourceIp": "12.345.67.89/32"} } } ] }

Policy 4 : Cross-Account Bucket Access Policy.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::REPLACE-WITH-YOUR-AWS-CROSS-ACCOUNT-NUMBER:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::ck-public-demo-bucket/*"
            ]
        }
    ]
}

Note : When copying content from the cross account, add the additional ACL bucket-owner-full-control so that the bucket owner gets full control of the uploaded objects.

aws s3 cp demo-file.txt s3://ck-public-demo-bucket/ --acl bucket-owner-full-control

Will keep on adding more..

Jenkins Pipeline to create CloudFront Distribution using S3 as Origin

 


 Within this post, we will cover 

  • Complete Jenkins scripted pipeline.
  • Complete IaC (Infrastructure as Code) to deploy the AWS services and their respective configuration.
  • Jenkins integration with GitHub as the code repository.
  • Jenkins integration with Docker to make the deployment platform independent.
  • Jenkins integration with Ansible to call AWS CloudFormation templates.
  • Using Ansible roles to fill the gaps of AWS CloudFormation; in this blog post and lab environment I'm using them to avoid AWS CloudFormation StackSets and Custom Resources.

Flow diagram explaining the automation.

Explanation of above flow diagram.

Once the Jenkins job is triggered with the appropriate input variables:
  1. It starts by fetching the source code from the git repository, which contains:
    1. Source code for the application ( HTML, CSS, JS )
    2. IaC code to support infrastructure deployment.
      • Ansible role and playbooks.
      • CloudFormation templates.
      • Jenkinsfile, which has the scripted pipeline defined.
  2. Once the source code is downloaded, it looks for the Jenkins pipeline file named Jenkinsfile.
  3. Once the Jenkinsfile is executed, it initiates the pipeline in the below stages.
    1. Stage Checkout : It looks at the deployment type ( normal build or rollback ) and, based upon it, checks out the respective git branch or tag.
    2. Stage Build : To keep the pipeline platform independent and reusable, instead of directly triggering jobs on the Jenkins node via bash or PowerShell commands, we use Docker containers to run our CLI commands.
      • Here we use Ansible playbooks to create the infrastructure, so in this step we build an Ansible Docker image from the Dockerfile.
    3. Stage Deploy : Once our prerequisite ( the Ansible Docker image ) is ready, we run the Ansible container and trigger the ansible-playbook command on the fly with the appropriate environment variables and extra variables.
      • The Ansible playbook ( root.yml ) is executed, which has the role defined under it by the name ansible_role.
      • I have removed unused default directories ( meta, defaults, handlers, tests etc. ) as these are not needed for our requirement.
      • The Ansible role has three task playbook files with the below operations.
        • Create S3 buckets : uses the amazon.aws.s3_bucket module to create S3 buckets with tags and restricted public access.
        • Create empty directories within the above created S3 buckets : uses the amazon.aws.aws_s3 module to create bucket objects.
        • Create CloudFront distribution : uses the amazon.aws.cloudformation module to create the CloudFront distribution via a CloudFormation template.

Jenkins file used in this lab.
def ENVT = env.ENVIRONMENT
def VERSION = env.VERSION
def JOBTYPE = env.JOBTYPE
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY


node('master'){
  try {

    stage('checkout'){

        if ( "${VERSION}" == 'default') {
            checkout scm
            } 
        else {
            checkout scm
            sh "git checkout $VERSION"
            }
        }
    		
    stage('build'){
                  sh "ls -ltr"
                   echo "Building docker image via dockerfile..."
                   sh "docker build -t ck-pwdgen-app/ansible:2.10-$BUILD_ID ."
                  }
    stage('deploy'){
                    echo "Infrastructure deployment started...."
                    wrap([$class: "MaskPasswordsBuildWrapper",
                          varPasswordPairs: [[password: ACCESS_KEY, var: ACCESS_KEY], [password: KEY_ID, var: KEY_ID] ]]) {
                    sh "docker run \
                        -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
                        -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
                        -e AWS_DEFAULT_REGION='us-west-1' \
                        ck-pwdgen-app/ansible:2.10-$BUILD_ID ansible-playbook -vvv --extra-vars 'Environment=${ENVT}' root.yml"
                      }
                    } 
            }

  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
    } 
  finally {
    deleteDir()
        if ( "${JOBTYPE}" == 'build-deploy') {
          
            sh 'docker rmi -f ck-pwdgen-app/ansible:2.10-$BUILD_ID  && echo "ck-pwdgen-app/ansible:2.10-$BUILD_ID local image deleted."'
       }
  }
}
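
If the pipeline job is parameterized with ENVIRONMENT, VERSION and JOBTYPE, it can also be triggered remotely through Jenkins' buildWithParameters endpoint; the Jenkins URL, job name and credentials below are placeholders for your own setup, and depending on your Jenkins security configuration a CSRF crumb may also be required.

# Trigger the parameterized pipeline remotely (URL, job name and credentials are placeholders).
$ curl -X POST "https://jenkins.example.com/job/ck-pwdgen-app-deploy/buildWithParameters" \
      --user "jenkins-user:api-token" \
      --data "ENVIRONMENT=int" \
      --data "VERSION=default" \
      --data "JOBTYPE=build-deploy"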

Jenkins Pipeline job will look something like below.


 

Dockerfile used to create Ansible Image
FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli && ansible-galaxy collection install amazon.aws


ADD root.yml /usr/local/ansible/
COPY ansible_role /usr/local/ansible/ansible_role

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]
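
Before wiring the image into the pipeline, it can be built and sanity-checked locally; the tag used here is only an example, and the build context is assumed to contain the Dockerfile along with root.yml and the ansible_role/ directory referenced above.

# Build the image and print the bundled Ansible version (runs the default CMD).
$ docker build -t ck-pwdgen-app/ansible:2.10-local .
$ docker run --rm ck-pwdgen-app/ansible:2.10-local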

Ansible Role directory structure and its respective file contents.
root.yml
|
ansible_role/
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_cloudfront_dist.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
└── vars
    └── int
        └── main.yml

3 directories, 6 files

Ansible entry playbook file ( root.yml ); we initiate the Ansible tasks using the role defined in the below file.

$ cat root.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role

Ansible Roles Variable file content.

$ cat ansible_role/vars/int/main.yml 

---
# default variables
region: us-east-1
ProductName: ck
ProjectName: pwdgen
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-bucket"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-bucket"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-bucket"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-bucket"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    Owner: "admin@cyberkeeda.com"

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/main.yml 

---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml
- import_tasks: create_cloudfront_dist.yml

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_s3_bucket.yml

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"



Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }


- name: Create empty directories to deploy latest build.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.latest_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }
CloudFormation template used by create_cloudfront_dist.yml.
The create_cloudfront_dist.yml task deploys the CloudFront distribution through the amazon.aws.cloudformation module, using the CloudFormation template below; a sketch of the task file itself follows the template.
AWSTemplateFormatVersion: '2010-09-09'

Description: 'CF Template to setup infra for static password generator application'

Parameters:
    Environment:
      Description:    Please specify the target environment.
      Type:           String
      Default:        "int"
      AllowedValues:
        - int
        - pre-prod
        - prod
    AppName:
      Description:  Application name.
      Type:         String
      Default:      "pwdgen"

    AlternateDomainNames:
      Description:    CNAMEs (alternate domain names)
      Type:           String
      Default:        "jackuna.github.io"

    IPV6Enabled:
      Description:    Should CloudFront respond to IPv6 DNS requests with an IPv6 address for your distribution.
      Type:           String
      Default:        true
      AllowedValues:
        - true
        - false

    OriginProtocolPolicy:
      Description:    CloudFront Origin Protocol Policy to apply to your origin.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - http-only
        - match-viewer
        - https-only

    Compress:
      Description:    Whether CloudFront should automatically compress certain files for this cache behavior.
      Type:           String
      Default:        "true"
      AllowedValues:
        - true
        - false

    DefaultTTL:
      Description:    The default time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 86400 seconds (one day).
      Type:           String
      Default:        "540.0"

    MaxTTL:
      Description:    The maximum time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 31536000 seconds (one year).
      Type:           String
      Default:        "600.0"

    MinTTL:
      Description:    The minimum amount of time that you want objects to stay in the cache before CloudFront queries your origin to see whether the object has been updated.
      Type:           String
      Default:        "1.0"

    SmoothStreaming:
      Description:    Indicates whether to use the origin that is associated with this cache behavior to distribute media files in the Microsoft Smooth Streaming format.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false
    QueryString:
      Description:    Indicates whether you want CloudFront to forward query strings to the origin that is associated with this cache behavior.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false

    ForwardCookies:
      Description:    Forwards specified cookies to the origin of the cache behavior.
      Type:           String
      Default:        "none"
      AllowedValues:
        - all
        - whitelist
        - none

    ViewerProtocolPolicy:
      Description:    The protocol that users can use to access the files in the origin that you specified in the TargetOriginId property when the default cache behavior is applied to a request.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - redirect-to-https
        - allow-all
        - https-only

    PriceClass:
      Description:    The price class that corresponds with the maximum price that you want to pay for CloudFront service. If you specify PriceClass_All, CloudFront responds to requests for your objects from all CloudFront edge locations.
      Type:           String
      Default:        "PriceClass_100"
      AllowedValues:
        - PriceClass_All
        - PriceClass_100
        - PriceClass_200

    SslSupportMethod:
      Description:    Specifies how CloudFront serves HTTPS requests.
      Type:           String
      Default:        "sni-only"
      AllowedValues:
        - sni-only
        - vip

    MinimumProtocolVersion:
      Description:    The minimum version of the SSL protocol that you want CloudFront to use for HTTPS connections.
      Type:           String
      Default:        "TLSv1.2_2021"
      AllowedValues:
        - TLSv1.2_2021
        - TLSv1.2_2019
        - TLSv1.1_2018

    OriginKeepaliveTimeout:
      Description:    You can create a custom keep-alive timeout. All timeout units are in seconds. The default keep-alive timeout is 5 seconds, but you can configure custom timeout lengths. The minimum timeout length is 1 second; the maximum is 60 seconds.
      Type:           String
      Default:        "60"

    OriginReadTimeout:
      Description:    You can create a custom origin read timeout. All timeout units are in seconds. The default origin read timeout is 30 seconds, but you can configure custom timeout lengths. The minimum timeout length is 4 seconds; the maximum is 60 seconds.
      Type:           String
      Default:        "30"


    BucketVersioning:
      Description:    The versioning state of an Amazon S3 bucket. If you enable versioning, you must suspend versioning to disable it.
      Type:           String
      Default:        "Suspended"
      AllowedValues:
        - Enabled
        - Suspended

Resources:
  # Bucket Policy for primary and secondary buckets.
  PrimaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-primary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-primary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt PrimaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId
  SecondaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-secondary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-secondary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt SecondaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId

  # Cloud Front OAI
  PrimaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-primary'
  SecondaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-secondary'

  # Cloudfront Cache Policy
  CDNCachePolicy:
    Type: AWS::CloudFront::CachePolicy
    Properties: 
      CachePolicyConfig: 
        Comment: 'Max TTL 600 to validate frequent changes'
        DefaultTTL: !Ref DefaultTTL
        MaxTTL: !Ref MaxTTL
        MinTTL: !Ref MinTTL
        Name: !Sub 'ck-${Environment}-${AppName}-cache-policy'
        ParametersInCacheKeyAndForwardedToOrigin: 
            CookiesConfig: 
                CookieBehavior: none
            EnableAcceptEncodingBrotli: True
            EnableAcceptEncodingGzip: True
            HeadersConfig: 
                HeaderBehavior: none
            QueryStringsConfig: 
                QueryStringBehavior: none

  # CLOUDFRONT DISTRIBUTION
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    DependsOn:
    - CDNCachePolicy
    Properties:
      DistributionConfig:
        Comment: 'Cyberkeeda Password Generator application'
        Enabled: true
        HttpVersion: http2
        IPV6Enabled: true
        DefaultRootObject: version.json
        Origins:
        - DomainName: !Sub 'ck-${Environment}-${AppName}-primary-bucket.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-primary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${PrimaryBucketCloudFrontOriginAccessIdentity}'
        - DomainName: !Sub 'ck-${Environment}-${AppName}-secondary-bucket.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${SecondaryBucketCloudFrontOriginAccessIdentity}'
        OriginGroups:
          Quantity: 1
          Items: 
          - Id: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
            FailoverCriteria: 
              StatusCodes: 
                Items: 
                - 500
                - 502
                - 503
                - 504
                - 403
                - 404
                Quantity: 6
            Members:
              Quantity: 2
              Items: 
              - OriginId: !Sub 'ck-${Environment}-${AppName}-primary-origin'
              - OriginId: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
        CacheBehaviors:
          - CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
            PathPattern:  '*'
            ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
            TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
        DefaultCacheBehavior:
          AllowedMethods:
            - GET
            - HEAD
          TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
          ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
          CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
Outputs:
  CDNCloudfrontURL:
    Description: CloudFront CDN Url.
    Value: !GetAtt  'CloudFrontDistribution.DomainName'
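
The listing above is the CloudFormation template itself; the create_cloudfront_dist.yml task that deploys it is not shown, so below is only a minimal sketch of what it could look like, assuming the template is saved inside the role as ansible_role/files/cloudfront.yml and the amazon.aws.cloudformation module is used (the stack name and parameter mapping are illustrative).

# Sketch only: write an assumed create_cloudfront_dist.yml task file (paths and names are assumptions).
$ cat > ansible_role/tasks/create_cloudfront_dist.yml <<'EOF'
---
- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create CloudFront distribution via CloudFormation.
  amazon.aws.cloudformation:
      stack_name: "ck-{{ Environment }}-{{ ProjectName }}-cdn-stack"
      state: present
      region: "{{ PrimaryRegion }}"
      template: "ansible_role/files/cloudfront.yml"
      template_parameters:
          Environment: "{{ Environment }}"
          AppName: "{{ ProjectName }}"
EOF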


Once the above files and their respective contents are committed to the source code repository, we can use the Jenkins pipeline job to create the AWS services.

If we break down the blog post, it can be used for other technical references too, such as:
  • Jenkins scripted pipeline using parameters.
  • How to hash/mask passwords and sensitive environment variables.
  • Leveraging the power of Docker to keep the code uniform across environments and platforms.
    • If you notice, we could easily install the Ansible packages on the build machine and run the playbook directly, but this way we are not touching any third-party application on our build machine.
    • Even once our task is done, we remove the container and image.
  • How to build a Docker image from a Dockerfile using Jenkins.
  • Dockerfile to build an Ansible image.
  • Real-world example of Ansible roles.
  • Ansible to create S3 buckets with tags.
  • How to disable S3 bucket public access using Ansible.
  • How to create S3 bucket directories and objects using Ansible.
  • How to use Ansible to create a CloudFormation stack using parameters.
  • CloudFormation template to create the below resources.
    • S3 Bucket Policy.
    • CloudFront Origin Access Identity.
    • CloudFront Cache Policy.
    • CloudFront Distribution with an Origin Group and S3 as an origin.

Hope this blog post helps you in some use case.

There might well be errors and areas of improvement within this blog post, or better ways to handle such a deployment; please share your valuable comments.


CloudFormation template to create an S3 bucket with tags and disabled public access

 


The below CloudFormation template can be used for the following tasks.
  • Create S3 bucket.
  • Add tags to S3 bucket.
  • Disable public access.
Resources:
  # S3 Bucket
  PrimaryBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'cyberkeeda-${Environment}-${AppName}-bucket'
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      Tags:
        - Key: Name
          Value: !Sub 'cyberkeeda-${Environment}-${AppName}'
        - Key: Environment
          Value: "Development"
        - Key: Creator
          Value: !Sub "${Creator}"
        - Key: Appname
          Value: !Sub "${Appname}"
        - Key: Unit
          Value: !Sub "${Unit}"
        - Key: Owner
          Value: admin@ck.com
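
Assuming the snippet above is saved as part of a complete template (for example s3-bucket.yml) with the referenced parameters (Environment, AppName, Creator, Appname, Unit) declared in a Parameters section, it can be deployed with the AWS CLI; the stack name and parameter values below are only examples.

# Deploy the template (file name, stack name and parameter values are examples).
$ aws cloudformation deploy \
      --template-file s3-bucket.yml \
      --stack-name cyberkeeda-int-pwdgen-s3 \
      --parameter-overrides Environment=int AppName=pwdgen Creator=admin Appname=pwdgen Unit=devops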


Ansible role to create S3 bucket and directories

 


Ansible roles can be defined as:

  • A collection of multiple playbooks within directories for several tasks and operations.
  • A way of maintaining playbooks in a structured and consistent manner.
  • A way of breaking lengthy playbooks into small plays.
  • Roles can be uploaded to Ansible Galaxy, from where they can be reused like an Ansible library or module.

How can we create Ansible roles?
  • We can use the ansible-galaxy command to download existing roles uploaded on the website https://galaxy.ansible.com/
  • We can use the ansible-galaxy command to create a new role.
  • While creating a new role, ansible-galaxy creates it in the default directory /etc/ansible/roles, followed by the name of the role.
Below commands can be used as per need.
  • Ansible galaxy command to check installed roles.
$ ansible-galaxy collection list
  • Ansible galaxy command to create role in default directory
$ ansible-galaxy init /etc/ansible/roles/my-role --offline
  • Ansible galaxy command to create role in present working directory
$ ansible-galaxy init my-role
  • Ansible galaxy command to install roles from ansible galaxy website collections
$ ansible-galaxy collection install amazon.aws

The Ansible role directory structure looks like below; we can take the example of the above created Ansible role named my-role.
$ ansible-galaxy init my-role
- Role my-role was created successfully
$ tree my-role/
my-role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml

8 directories, 8 files

How can we use roles?
  • Till now, we have seen how to create a role and what its default directory structure looks like. Ansible roles can be used in three ways:
    • with the roles option: This is the classic way of using roles in a play.
    • tasks level with include_role: you can reuse roles dynamically anywhere in the tasks section of a play using include_role.
    • tasks level with import_role: you can reuse roles statically anywhere in the tasks section of a play using import_role.
Here we will look more at the classic way of using roles, that is, by using the roles option in a playbook.

So instead of going through the conventional example of installing apache or nginx, I will share a real-world custom role that performs the following tasks/operations.
  • Create multiple AWS S3 buckets by regions.
  • Create directory structure within two of above created bucket.
First let's go through the playbook that can be used independently to do the entire operation without creating Ansible roles.

Note: the amazon.aws galaxy collection must be updated to a recent version in order to use the s3_bucket module.
$ ansible-galaxy collection install amazon.aws
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:

    - name: Read environment specific variables.
      include_vars:
          file: "ansible_role/vars/{{ Environment }}/main.yml"

    - name: Create static-ck application buckets in us-east-1 region.
      s3_bucket:
          name: "{{ item }}"
          state: present
          tags:
              Name: "{{ item }}"
              Environment: "{{ Environment }}"
              Owner: "{{ bucketTags[Environment]['Owner'] }}"
          region: us-east-1
          public_access:
              block_public_acls: true
              ignore_public_acls: true
              block_public_policy: true
              restrict_public_buckets: true
      with_items:
          - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
          - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
          - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

    - name: Create static-ck application buckets in us-east-2 region.
      s3_bucket:
          name: "{{ item }}"
          state: present
          tags:
              Name: "{{ item }}"
              Environment: "{{ Environment }}"
              Owner: "{{ bucketTags[Environment]['Owner'] }}"
          region: us-east-2
          public_access:
              block_public_acls: true
              ignore_public_acls: true
              block_public_policy: true
              restrict_public_buckets: true
      with_items:
          - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

    - name: Create empty directories to store build artifacts.
      aws_s3:
          bucket: "{{ item.bucket_name }}"
          object: "{{ item.artifact_dir }}"
          mode: create
      with_items:
          - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
          - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }

    - name: Create empty directories to deploy latest build.
      aws_s3:
          bucket: "{{ item.bucket_name }}"
          object: "{{ item.latest_dir }}"
          mode: create
      with_items:
          - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
          - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }

The above playbook can be triggered independently using the below command.
$ ansible-playbook  -vv --extra-vars "Environment=int" main.yml


The same deployment can be done using Ansible roles by following the below steps.
  • Create a new ansible role by name ansible_role
$ ansible-galaxy init ansible_role
  • Create a new root/entry playbook to initiate deployment
$ touch root.yml
  • Include the below lines and use the roles option to call our role. Please note that we have used the "roles" option to call our newly created role directory named ansible_role; while using the roles option, make a note of the below points about the main.yml files.
    • When you use the roles option at the play level, for each role ‘x’:
    • If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
    • If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
    • If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
    • If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
    • If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
    • Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role 
  • Below is the directory structure we follow within our newly created role.
root.yml
|
ansible_role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── int
        └── main.yml

So from the above directory layout, we have the below files and directories to create.
  • We have divided our tasks into two parts:
    • Create S3 buckets
    • Create directories within S3
    • The above two tasks are defined individually in two different files named:
      • create_s3_bucket.yml
      • create_bucket_directories.yml
    • Whereas ansible_role/tasks/main.yml is the entry point for these two tasks, which we import using the import_tasks option.
$ cat ansible_role/tasks/main.yml
---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml

This is how my other two task files look.

$ cat ansible_role/tasks/create_s3_bucket.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/v1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/v1/artifacts" }

  • We have added an additional directory named "int", which is short for the internal environment; following the same pattern, we can create more directories for other environment-specific files for prod and non-prod environments too.
    • Within the file ansible_role/vars/int/main.yml we define key-value pairs that are used later while running our playbook.
$ cat ansible_role/vars/int/main.yml
---
# default variables
region: us-east-1
ProductName: ck
ProjectName: static-app
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2
regions:
  us-east-1:
    preferredMaintenanceWindow: "sat:06:00-sat:06:30"
  us-east-2:
    preferredMaintenanceWindow: "sat:05:00-sat:05:30"

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-cyberkeeda-bucket-01"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-cyberkeeda-bucket-01"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-cyberkeeda-bucket-01"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-cyberkeeda-bucket-01"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    CreatorID: "admin@cyberkeeda.com"
    Owner: "admin@cyberkeeda.com"

Once the above templates are created and saved, we can run our playbook with the below ansible-playbook command.
$ ansible-playbook  -vv --extra-vars "Environment=int" root.yml

Below are the details of the parameters used along with the ansible-playbook command.
  • -vv : Verbose mode for debugging in STDOUT
  • --extra-vars : Key-value pair to be used within the playbook

Hope this blog post helps you in some way; please comment in case you have any difficulties following the steps.


AWS S3 Bucket Policy to grant access to other AWS account

 



AWS Bucket Policy to be used for the below requirements.

  • Grant access of the S3 bucket to another AWS account.
  • Restrict access to listing and downloading objects from it, nothing more, nothing less.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Bucket Read access from below AWS accounts",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:root",
          "arn:aws:iam::121314151617:root",
          "arn:aws:iam::181912021222:root"
        ]
      },
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::cyberkeeda-limited-access-bucket",
        "arn:aws:s3:::cyberkeeda-limited-access-bucket/*"
      ]
    }
  ]
}


Hope this snippet helps you!

AWS Managed Policy to Restrict IAM User to Access AWS Resource from Specific IP Address.

 




AWS Managed Policy

Within this blog post, we will cover
how we can use an IAM Managed Policy to create an IAM user boundary which limits a user to the below operations.

  • AWS S3 Limited Access [Get, Put, List]
  • S3 Access with only single IP Address.
Syntax Template

AWSTemplateFormatVersion: 2010-09-09
Description: CFN to create ManagedPolicy

Resources:
  IBDSReconUserBoundaryPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: A ManagedPolicy meant to restrict user based upon ingress IP.
      ManagedPolicyName: my_s3_user_boundary
      Path: /
      Users:
      - my_s3_user
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Action:
          - s3:ListBucket
          - s3:GetBucketLocation
          Resource: arn:aws:s3:::my-randon-s3-bucket
        - Effect: Allow
          Action:
          - s3:PutObject
          - s3:PutObjectAcl
          Resource: arn:aws:s3:::my-randon-s3-bucket/bucketfiles/*
          Condition:
            IpAddressIfExists:
              aws:SourceIp: 123.345.657.12
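
The template can be validated and deployed with the AWS CLI; the file and stack names below are only examples. Because it creates a named IAM resource, the deploy needs the CAPABILITY_NAMED_IAM capability.

# Validate and deploy the managed policy template (file and stack names are examples).
$ aws cloudformation validate-template --template-body file://managed-policy.yml
$ aws cloudformation deploy \
      --template-file managed-policy.yml \
      --stack-name my-s3-user-boundary \
      --capabilities CAPABILITY_NAMED_IAM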


Upload File from Local to S3 Bucket using CURL

 

Upload data to S3 bucket using CURL.

This guide, aka shell script, will help you upload files into S3 without installing the AWS SDK, Python Boto, or the AWS CLI.


The script's README section has most of the usage defined; at a high level, the script can do the following.

  • Copy files from a specific directory between a specific range of dates.
  • Copy specific files with a search filter.
  • Script logs every file copy into a log file with a timestamp.
  • An interrupted upload can be re-initiated without re-uploading files that already succeeded.


#!/bin/bash
#
# Parameters
# $1 => Directory/Folder to search file.
# $2 => AWS Bucket subdirectories 
#       Example -- myAWSs3bucket/folderA/FolderB/FolderC
#              1.) In case one want to put files in folderA, use folderA as $2
#                  2.) In case one want to put files in folderB, use folderA/folderB as $2
#                  3.) In case one want to put files in folderC, use folderA/folderB/folderC as $2
# $3 => Existence of file from start date in format YYYYMMDD 
#       Example --
#                  1.) 20210104 -> 4th January 2021
#                  2.) 20201212 -> 12th December 2020
# $4 => Existence of file upto end date in format YYYYMMDD
#       Example --
#                  1.) 20200322 -> 22nd March 2020
#                  2.) 20201212 -> 12th December 2020
# $5 => File Filter 
#       Example -- We need only specific files from a folder.
#                  1.) 20200122_data_settlement.txt --> Use $5 as *_data_settlement.txt
#                  2.) salesdata-20201215100610.txt --> Use $5 as salesdata-*
#      
# Task - Find similar 20200122_data_settlement.txt on location /usr/data/
#        File existence date range 20200322 (22nd March 2020) to 20210104 (4th January 2021)
#        Copy it to AWS S3 bucket's subfolder named as folderA 
#
#     
# Syntax -  ./copy_data_to_S3_via_Curl.sh <LocalFolderLocation> <S3BUCKET-DIRECTORY> <STARTDATE> <ENDDATE> <FILEFILTER>
#
# Usage
#
#        1.) With File Filter
#         ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104  '*data_settlement.txt'
#
#        2.) Without File Filter
#         ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104  
#        
#    3.) Reinitiate left upload
#
#         ./copy_data_to_S3_via_Curl.sh 1 folderA
#
#
#  Flow 
#  1.) Script uses the find command to find all the files matching the parameters and writes them to a file "/tmp/file_inventory.txt"
#  2.) A for loop is then used to read the file entries and do S3 operations using the HTTPS API
#  3.) Script keeps removing the entries from the inventory file after a successful upload.
#  4.) Script writes the successful and failed upload status within the log file "/tmp/file_copy_status.log"
#  5.) In case we want to interrupt and upload the remaining files later, comment line no 62
#        62 find $1 -newermt $3 \! -newermt $4   -iname "$5" >> $inventory
#      To avoid confusion, run the script with the same parameters.
#
#
# Author: Jackuna
#

# Bucket Data
bucket="mys3bucket-data"
s3_access_key="AKgtusjksskXXXXTQTW"
s3_secret_key="KSKKSIS HSNKSLS+ydRQ3Ya37A5NUd1V7QvEwDUZR"

# Files
inventory="/tmp/file_inventory.txt"
logme="/tmp/file_copy_status.log"


if  [ $# == 2 ]; then
  echo "`date` -  Initiating left file upload from old inventory " >> $logme

elif [ $# -eq 5 ]; then
  truncate -s 0 $inventory
  find $1 -newermt $3 \! -newermt $4   -iname "$5" >> $inventory
  echo "`date` - Initiating all file that contains string $5 and found between $3 - $4  upload from new inventory " >> $logme

elif [ $# -eq 4 ]; then
  truncate -s 0 $inventory
  find $1 -newermt $3 \! -newermt $4  >> $inventory
  sed -i 1d $inventory
  echo "`date` - Initiating all file found between $3 - $4  upload from new inventory " >> $logme

else
  echo " Some or all arguments Missing from CLI"
  echo " Usage :  ./copy_data_to_S3_via_Curl.sh <LocalFolderLocation> <S3BUCKET-DIRECTORY> <STARTDATE> <ENDDATE> <FILEFILTER>"
  echo " Open Script README section"
  exit 1
fi

file_list=`cat $inventory`
total_file_count=`cat $inventory|wc -l`


for local_file_val in $file_list; do
        aws_folder=$2
        aws_file_name=`echo $local_file_val| rev| cut -d '/' -f1 | rev`
        aws_filepath="/${bucket}/$aws_folder/$aws_file_name"

        # metadata
        contentType="application/x-compressed-tar"
        dateValue=`date -R`
        signature_string="PUT\n\n${contentType}\n${dateValue}\n${aws_filepath}"
        signature_hash=`echo -en ${signature_string} | openssl sha1 -hmac ${s3_secret_key} -binary | base64`


        curl -X PUT -T "$local_file_val" \
    -H "Host: ${bucket}.s3.amazonaws.com" \
    -H "Date: ${dateValue}" \
    -H "Content-Type: ${contentType}" \
    -H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
        https://${bucket}.s3.amazonaws.com/$aws_folder/$aws_file_name

    if [ $? -gt 0 ]; then
            echo "`date` Upload Failed  $local_file_val to $bucket" >> $logme
    else
            echo "`date` Upload Success $local_file_val to $bucket" >> $logme
            count=$((count + 1))
            printf "\rCopy Status -  $count/$total_file_count - Completed "

            sleep 1
            sed -i "/\/$aws_file_name/d" $inventory
    fi

done;

Feel free to comment.

Read more ...