
Jenkins Pipeline for Continuous Integration of AWS Lambda Function with GitHub repository

 


AWS Lambda is awesome, and trust me, if you are working on AWS, sooner or later you will have to deal with AWS Lambda.

Because Lambda is a managed (PaaS-like) service, we can't ignore how Lambda code is deployed and tested, which more or less happens through the Lambda Function console.

Of course, there are ways to write, test, and deploy code directly from an IDE, but keep in mind you still need an access key and secret access key.

So what about the code base? How will we track the code changes made in the Lambda Function itself?

In this post, we will cover the challenges Lambda poses for CI/CD and one of my proposed solutions to address part of them.

Let's look at some of the challenges and their probable solutions.

  • Lambda Deployment : We can use Terraform or CloudFormation for this, so what's the challenge?
    • CloudFormation :
      • We can use the inline method to put our Lambda code under the ZipFile code block, but third-party modules like pandas can't be bundled that way (see the snippet below).
      • One can still package those third-party modules together with the code, but the package then needs to be uploaded to an S3 bucket, and we need a way of handling changes before using it.
  • Lambda Function Code base :
    • We still need snapshots of our Lambda Function code to track daily changes and to use later in a deployment pipeline.
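For context, the inline approach looks roughly like the minimal sketch below (the function name, role ARN, and handler are placeholders, not taken from this lab); only the code written inside the ZipFile block is deployed, so pip-installed modules such as pandas can't be included:

Resources:
  InlineLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: ck-demo-inline-function                      # placeholder name
      Runtime: python3.7
      Handler: index.handler
      Role: arn:aws:iam::111111111111:role/ck-demo-lambda-role   # placeholder role ARN
      Code:
        ZipFile: |
          # Only the code written inline here gets deployed; there is no way to
          # pip install third-party modules such as pandas into a ZipFile block.
          import json

          def handler(event, context):
              return {"statusCode": 200, "body": json.dumps("Hello from inline Lambda")}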

There are more challenges with Lambda Functions, but in this blog post we will cover the basics of CI/CD, that is, replicating our Lambda Function code from the console to a GitHub repository.

  • The moment we talk about CI/CD, most pipelines use git to fetch the source code and then use it for further stages like checkout, build, release, deploy, and test.
  • Here the case is somewhat different: due to the managed nature of the service, we have to test our code's functionality in the Lambda console first, and only then push it to a repository to preserve the source code.
  • Yes, AWS SAM is another option for testing Lambda Function code in a local environment, but not when the Lambda is hosted in a VPC and uses other services to communicate.

Below is my proposed solution to achieve this.



Prerequisites.
  • An IAM access key and secret, or an IAM role attached to the EC2 instance from which the Jenkins job is triggered.
  • A GitHub Personal Access Token.
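The pipeline below reads the Git user, token, and AWS keys from environment variables, which typically come from job parameters. A minimal sketch of such a parameters block, assuming a parameterized scripted pipeline (the parameter names must match what the pipeline reads; the values here are only illustrative):

properties([
    parameters([
        string(name: 'GIT_USERNAME', defaultValue: '', description: 'GitHub user used for the push'),
        password(name: 'GIT_PASSWORD', defaultValue: '', description: 'GitHub Personal Access Token'),
        password(name: 'AWS_ACCESS_KEY', defaultValue: '', description: 'IAM access key id'),
        password(name: 'AWS_SECRET_ACCESS_KEY', defaultValue: '', description: 'IAM secret access key'),
        choice(name: 'LOG_TYPE', choices: ['v', 'vv', 'vvv'], description: 'ansible-playbook verbosity flag')
    ])
])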
Here is the flow...
  1. I assume the developer initially tests the code's functionality in the Lambda console. Once the developer is okay with the Lambda Function code, we move to the next step.
  2. The sysadmin/developer can check the code in directly from the Lambda Function to the GitHub repository using a Jenkins job.
  3. The Jenkins job has a scripted pipeline attached to it, so it goes through the stages below.
    • Stage : Check out code to the appropriate branch.
    • Stage : Build a Docker image for Ansible from the Dockerfile.
    • Stage : Run an Ansible container from the image created above and run the ansible-playbook command to execute the Ansible role and its tasks.
      1. Task 1 - 
        • Download the Lambda code from the Lambda console using a Python script based on the boto3 module.
        • Unzip the downloaded code into a specific directory so that changes can be tracked per file; changes inside a zip file can't be tracked.
      2. Task 2 - 
        • Clone the existing repository from git and replace the existing Lambda source code with the newer code downloaded in the step above.
        • Git add, commit, and push it to the git repository.

Here is the lab setup.

Our Lambda Function in the console is named "ck-uat-Lambda-Authorizer".


And its code looks like the below in the console.


GitHub repository where I want to publish my code.

Repo Snapshot.


Directory Layout for the same...



Our final intention is to dump our Lambda Function code under the src directory, that is, lambda_folder/src.

So, following the flow stated earlier in the post, here is the code.

Jenkins Scripted Pipeline code.

Note: Do mask the additional secrets so they don't appear in plain text.
def gituser = env.GIT_USERNAME
def gituserpass = env.GIT_PASSWORD
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY
def DEBUG_MODE = env.LOG_TYPE

node('master'){
  
  try {

    stage('Git Checkout'){
            checkout scm
            sh "git checkout lambda_deployer"
        }

     stage('build'){
                  sh "ls -ltr"
                   echo "Building docker image via dockerfile..."
                   sh "docker build  -t ansible:2.10-$BUILD_ID ."
                  }

     stage('deploy'){
                    echo "Infrastructure deployment started...."
                    wrap([$class: "MaskPasswordsBuildWrapper",
                          varPasswordPairs: [[password: gituserpass, var: 'GIT_PASSWORD'] ]]) {
                    sh "docker run --rm \
                        -e gituser=$gituser \
                        -e gituserpass=$gituserpass \
                        -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
                        -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
                        -e AWS_DEFAULT_REGION='ap-south-1' \
                        ansible:2.10-$BUILD_ID ansible-playbook -$DEBUG_MODE  --extra-vars 'env=dev1 git_username=${gituser} token=${gituserpass}' lambda_folder/root_lambda_project.yml"
                      }
                    }          
      }


            
  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
    } 
  finally {
    deleteDir()
       
            sh 'docker rmi -f ansible:2.10-$BUILD_ID  && echo "ansible:2.10-$BUILD_ID local image deleted."'
  }
}

The build pipeline should look something like the below in the Jenkins console.



The Jenkins Build stage builds a Docker image from the Dockerfile; here is the Dockerfile source code.

FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli

RUN rm -rf /usr/local/ansible/

COPY lambda_folder /usr/local/ansible/lambda_folder

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]

Once the Docker image is created, the next step is to run a Docker container from the Ansible image created above.

Here are the Ansible role and its respective tasks.

Ansible Root Playbook YAML -- root_lambda_project.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - role

Ansible Variable file under roles -- lambda_folder/role/vars/dev1/main.yml

---
region: us-east-1
function_name: ck-uat-LambdaAuthorizer
git_repo_name: aws-swa
git_repo_branch: lambda_deployer

Python script, called from one of the Ansible tasks, to download the Lambda Function code.

Note : It's an edited version of existing code from Stack Overflow.
"""
    Script to download individual Lambda Function and dump code in specified directory
"""
import os
import sys
from urllib.request import urlopen
import zipfile
from io import BytesIO

import boto3


def get_lambda_functions_code_url(fn_name):

    client = boto3.client("lambda")
    functions_code_url = []
    fn_code = client.get_function(FunctionName=fn_name)["Code"]
    fn_code["FunctionName"] = fn_name
    functions_code_url.append(fn_code)
    return functions_code_url


def download_lambda_function_code(fn_name, fn_code_link, dir_path):

    function_path = os.path.join(dir_path, fn_name)
    if not os.path.exists(function_path):
        os.mkdir(function_path)
    with urlopen(fn_code_link) as lambda_extract:
        with zipfile.ZipFile(BytesIO(lambda_extract.read())) as zfile:
            zfile.extractall(function_path)


if __name__ == "__main__":
    inp = sys.argv[1:]
    print("Destination folder {}".format(inp))
    if inp and os.path.exists(inp[0]):
        dest = os.path.abspath(inp[0])
        fc = get_lambda_functions_code_url(sys.argv[2])
        for i, f in enumerate(fc):
            print("Downloading Lambda function {}".format(f["FunctionName"]))
            download_lambda_function_code(f["FunctionName"], f["Location"], dest)
    else:
        print("Destination folder doesn't exist")


Ansible Task 1 : lambda_folder/role/tasks/download_lambda_code.yml

---

- name: Read Variables
  include_vars:
    file: "role/vars/{{ env }}/main.yml"

- name: Download Lambda Function using Python script..
  command:
    argv:
      - python3 
      - role/files/download_lambda.py 
      - src
      - "{{ function_name }}"

Ansible Task 2 : lambda_folder/role/tasks/update_repository.yml

---
- name: Git clone source repository..
  command:
    argv:
      - git 
      - clone 
      - https://{{ git_username }}:{{ token }}@github.com/Jackuna/{{ git_repo_name }}.git 
      - -b 
      - "{{ git_repo_branch }}"

- name: Copy Lambda Function source code into the repo..
  command: >
    cp -r src {{ git_repo_name }}/lambda_folder

- name: Git add recent changes..
  command: >
    git add --all lambda_folder/src
  args:
    chdir: "{{ git_repo_name }}"

- name: Git Config username..
  command: >
    git config user.name {{ git_username }}
  args:
    chdir: "{{ git_repo_name }}"

- name: Git Config email..
  command: >
    git config user.email {{ git_username }}@cyberkeeda.com 
  args:
    chdir: "{{ git_repo_name }}"

- name: Git commit recent changes..
  command: >
    git commit -m "Updated Latest code"
  args:
    chdir: "{{ git_repo_name }}"

- name: Git push recent changes..
  command:
    argv:
      - git 
      - push 
      - https://{{ git_username }}:{{ token }}@github.com/Jackuna/{{ git_repo_name }}.git 
      - -u 
      - "{{ git_repo_branch }}"
  args:
    chdir: "{{ git_repo_name }}"
  register: git_push_output  
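A caveat with the commit task above: git commit exits non-zero when there is nothing new to commit, which would fail the playbook run even though nothing went wrong. One possible tweak to the commit task (same file, extra error handling only) could look like this:

- name: Git commit recent changes..
  command: >
    git commit -m "Updated Latest code"
  args:
    chdir: "{{ git_repo_name }}"
  register: git_commit_output
  # Treat "nothing to commit" as a harmless no-op instead of a task failure.
  failed_when:
    - git_commit_output.rc != 0
    - "'nothing to commit' not in git_commit_output.stdout"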

That's all you need. In case of hurdles or issues, do comment!

Jenkins Pipeline to create CloudFront Distribution using S3 as Origin

 


Within this post, we will cover:

  • A complete Jenkins scripted pipeline.
  • Complete IaC (Infrastructure as Code) to deploy AWS services and their respective configuration.
  • Jenkins integration with GitHub as the code repository.
  • Jenkins integration with Docker to make the deployment platform independent.
  • Jenkins integration with Ansible to call AWS CloudFormation templates.
  • Using Ansible roles to fill the gaps in AWS CloudFormation; in this blog post and lab environment I'm using them to avoid AWS CloudFormation StackSets and custom resources.

Flow diagram explaining the automation.

Explanation of the above flow diagram.

Once the Jenkins job is triggered with the appropriate input variables:
  1. It starts by fetching the source code from the git repository, which contains:
    1. Source code for the application ( HTML, CSS, JS )
    2. IaC code to support infrastructure deployment.
      • Ansible role and playbooks.
      • CloudFormation templates.
      • Jenkinsfile, which has the scripted pipeline defined.
  2. Once the source code is downloaded, it looks for the Jenkins pipeline file named Jenkinsfile.
  3. Once the Jenkinsfile is executed, it initiates the pipeline with the stages below.
    1. Stage Checkout : It looks at the deployment type, normal build or rollback, and based on that checks out the respective git branch or tag.
    2. Stage Build : To make the pipeline platform independent and reusable, instead of directly triggering commands on the Jenkins node via bash or PowerShell, we use Docker containers to run our CLI commands.
      • Here we use Ansible playbooks to create infrastructure, so in this step we build an Ansible Docker image from the Dockerfile.
    3. Stage Deploy : Once our prerequisite ( the Ansible Docker image ) is ready, we run the Ansible container and trigger the ansible-playbook command on the fly with the appropriate environment variables and extra vars.
      • The Ansible playbook ( root.yml ) is executed, which has the role ansible_role defined under it.
      • I have removed unused default directories ( meta, defaults, handlers, tests, etc. ) as they are not needed for our requirement.
      • The Ansible role has three task files with the operations below.
        • Create S3 buckets : It uses Ansible's amazon.aws.s3_bucket module to create S3 buckets with tags and restricted public access.
        • Create empty directories within the S3 buckets created above : It uses Ansible's amazon.aws.aws_s3 module to create bucket objects.
        • Create the CloudFront distribution : It uses Ansible's amazon.aws.cloudformation module to create the CloudFront distribution via a CloudFormation template.

Jenkins file used in this lab.
def ENVT = env.ENVIRONMENT
def VERSION = env.VERSION
def JOBTYPE = env.JOBTYPE
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY


node('master'){
  try {

    stage('checkout'){

        if ( "${VERSION}" == 'default') {
            checkout scm
            } 
        else {
            checkout scm
            sh "git checkout $VERSION"
            }
        }
    		
    stage('build'){
                  sh "ls -ltr"
                   echo "Building docker image via dockerfile..."
                   sh "docker build -t ck-pwdgen-app/ansible:2.10-$BUILD_ID ."
                  }
    stage('deploy'){
                    echo "Infrastructure deployment started...."
                    wrap([$class: "MaskPasswordsBuildWrapper",
                          varPasswordPairs: [[password: ACCESS_KEY, var: 'AWS_ACCESS_KEY'], [password: KEY_ID, var: 'AWS_SECRET_ACCESS_KEY'] ]]) {
                    sh "docker run --rm \
                        -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
                        -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
                        -e AWS_DEFAULT_REGION='us-west-1' \
                        ck-pwdgen-app/ansible:2.10-$BUILD_ID ansible-playbook -vvv --extra-vars 'Environment=${ENVT}' root.yml"
                      }
                    } 
            }

  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
    } 
  finally {
    deleteDir()
        if ( "${JOBTYPE}" == 'build-deploy') {
          
            sh 'docker rmi -f ck-pwdgen-app/ansible:2.10-$BUILD_ID  && echo "ck-pwdgen-app/ansible:2.10-$BUILD_ID local image deleted."'
       }
  }
}

Jenkins Pipeline job will look something like below.


 

Dockerfile used to create Ansible Image
FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli && ansible-galaxy collection install amazon.aws


ADD root.yml /usr/local/ansible/
COPY ansible_role /usr/local/ansible/ansible_role

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]

Ansible role directory structure and its respective file contents.
root.yml
|
ansible_role/
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_cloudfront_dist.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
└── vars
    └── int
        └── main.yml

3 directories, 6 files

Ansible entry playbook file ( root.yml ); we initiate the Ansible tasks using the role defined in the file below.

$ cat root.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role

Ansible Roles Variable file content.

$ cat ansible_role/vars/int/main.yml 

---
# default variables
region: us-east-1
ProductName: ck
ProjectName: pwdgen
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-bucket"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-bucket"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-bucket"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-bucket"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    Owner: "admin@cyberkeeda.com"

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/main.yml 

---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml
- import_tasks: create_cloudfront_dist.yml

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_s3_bucket.yml

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"



Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }


- name: Create empty directories to deploy latest build.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.latest_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_cloudfront_dist.yml

Note: the listing below is the CloudFormation template used by this task; a sketch of what the task file itself might look like follows the template.
AWSTemplateFormatVersion: '2010-09-09'

Description: 'CF Template to setup infra for static password generator application'

Parameters:
    Environment:
      Description:    Please specify the target environment.
      Type:           String
      Default:        "int"
      AllowedValues:
        - int
        - pre-prod
        - prod
    AppName:
      Description:  Application name.
      Type:         String
      Default:      "pwdgen"

    AlternateDomainNames:
      Description:    CNAMEs (alternate domain names)
      Type:           String
      Default:        "jackuna.github.io"

    IPV6Enabled:
      Description:    Should CloudFront to respond to IPv6 DNS requests with an IPv6 address for your distribution.
      Type:           String
      Default:        true
      AllowedValues:
        - true
        - false

    OriginProtocolPolicy:
      Description:    CloudFront Origin Protocol Policy to apply to your origin.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - http-only
        - match-viewer
        - https-only

    Compress:
      Description:    CloudFront Origin Protocol Policy to apply to your origin.
      Type:           String
      Default:        "true"
      AllowedValues:
        - true
        - false

    DefaultTTL:
      Description:    The default time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 86400 seconds (one day).
      Type:           String
      Default:        "540.0"

    MaxTTL:
      Description:    The maximum time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 31536000 seconds (one year).
      Type:           String
      Default:        "600.0"

    MinTTL:
      Description:    The minimum amount of time that you want objects to stay in the cache before CloudFront queries your origin to see whether the object has been updated.
      Type:           String
      Default:        "1.0"

    SmoothStreaming:
      Description:    Indicates whether to use the origin that is associated with this cache behavior to distribute media files in the Microsoft Smooth Streaming format.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false
    QueryString:
      Description:    Indicates whether you want CloudFront to forward query strings to the origin that is associated with this cache behavior.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false

    ForwardCookies:
      Description:    Forwards specified cookies to the origin of the cache behavior.
      Type:           String
      Default:        "none"
      AllowedValues:
        - all
        - whitelist
        - none

    ViewerProtocolPolicy:
      Description:    The protocol that users can use to access the files in the origin that you specified in the TargetOriginId property when the default cache behavior is applied to a request.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - redirect-to-https
        - allow-all
        - https-only

    PriceClass:
      Description:    The price class that corresponds with the maximum price that you want to pay for CloudFront service. If you specify PriceClass_All, CloudFront responds to requests for your objects from all CloudFront edge locations.
      Type:           String
      Default:        "PriceClass_100"
      AllowedValues:
        - PriceClass_All
        - PriceClass_100
        - PriceClass_200

    SslSupportMethod:
      Description:    Specifies how CloudFront serves HTTPS requests.
      Type:           String
      Default:        "sni-only"
      AllowedValues:
        - sni-only
        - vip

    MinimumProtocolVersion:
      Description:    The minimum version of the SSL protocol that you want CloudFront to use for HTTPS connections.
      Type:           String
      Default:        "TLSv1.2_2021"
      AllowedValues:
        - TLSv1.2_2021
        - TLSv1.2_2019
        - TLSv1.1_2018

    OriginKeepaliveTimeout:
      Description:    You can create a custom keep-alive timeout. All timeout units are in seconds. The default keep-alive timeout is 5 seconds, but you can configure custom timeout lengths. The minimum timeout length is 1 second; the maximum is 60 seconds.
      Type:           String
      Default:        "60"

    OriginReadTimeout:
      Description:    You can create a custom origin read timeout. All timeout units are in seconds. The default origin read timeout is 30 seconds, but you can configure custom timeout lengths. The minimum timeout length is 4 seconds; the maximum is 60 seconds.
      Type:           String
      Default:        "30"


    BucketVersioning:
      Description:    The versioning state of an Amazon S3 bucket. If you enable versioning, you must suspend versioning to disable it.
      Type:           String
      Default:        "Suspended"
      AllowedValues:
        - Enabled
        - Suspended

Resources:
  # Bucket Policy for primary and secondary buckets.
  PrimaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-primary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-primary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt PrimaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId
  SecondaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-secondary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-secondary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt SecondaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId

  # Cloud Front OAI
  PrimaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-primary'
  SecondaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-secondary'

  # Cloudfront Cache Policy
  CDNCachePolicy:
    Type: AWS::CloudFront::CachePolicy
    Properties: 
      CachePolicyConfig: 
        Comment: 'Max TTL 600 to validate frequent changes'
        DefaultTTL: !Ref DefaultTTL
        MaxTTL: !Ref MaxTTL
        MinTTL: !Ref MinTTL
        Name: !Sub 'ck-${Environment}-${AppName}-cache-policy'
        ParametersInCacheKeyAndForwardedToOrigin: 
            CookiesConfig: 
                CookieBehavior: none
            EnableAcceptEncodingBrotli: True
            EnableAcceptEncodingGzip: True
            HeadersConfig: 
                HeaderBehavior: none
            QueryStringsConfig: 
                QueryStringBehavior: none

  # CLOUDFRONT DISTRIBUTION
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    DependsOn:
    - CDNCachePolicy
    Properties:
      DistributionConfig:
        Comment: 'Cyberkeeda Password Generator application'
        Enabled: true
        HttpVersion: http2
        IPV6Enabled: true
        DefaultRootObject: version.json
        Origins:
        - DomainName: !Sub 'ck-${Environment}-${AppName}-primary-bucket.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-primary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${PrimaryBucketCloudFrontOriginAccessIdentity}'
        - DomainName: !Sub 'ck-${Environment}-${AppName}-secondary-bucket.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${SecondaryBucketCloudFrontOriginAccessIdentity}'
        OriginGroups:
          Quantity: 1
          Items: 
          - Id: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
            FailoverCriteria: 
              StatusCodes: 
                Items: 
                - 500
                - 502
                - 503
                - 504
                - 403
                - 404
                Quantity: 6
            Members:
              Quantity: 2
              Items: 
              - OriginId: !Sub 'ck-${Environment}-${AppName}-primary-origin'
              - OriginId: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
        CacheBehaviors:
          - CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
            PathPattern:  '*'
            ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
            TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
        DefaultCacheBehavior:
          AllowedMethods:
            - GET
            - HEAD
          TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
          ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
          CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
Outputs:
  CDNCloudfrontURL:
    Description: CloudFront CDN Url.
    Value: !GetAtt  'CloudFrontDistribution.DomainName'
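
The task file itself is not shown above, so here is a minimal sketch of how ansible_role/tasks/create_cloudfront_dist.yml could call the template with Ansible's amazon.aws.cloudformation module; the files/cloudfront_dist.yml path and the stack name are assumptions for illustration, not taken from the post.

---
- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create or update the CloudFront distribution stack from the CloudFormation template.
  amazon.aws.cloudformation:
      stack_name: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-cdn"   # illustrative stack name
      state: present
      region: "{{ PrimaryRegion }}"
      template: "ansible_role/files/cloudfront_dist.yml"   # assumed location of the template shown above
      template_parameters:
          Environment: "{{ Environment }}"
          AppName: "{{ ProjectName }}"
  register: cdn_stack_output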


Once the above files and their respective contents are committed to the source code repository, we can use the Jenkins pipeline job to create the AWS services.

If we break down the blog post, it can be used as a reference for other technical topics too, such as:
  • Jenkins scripted pipelines using parameters.
  • How to mask passwords and sensitive environment variables.
  • Leveraging the power of Docker to keep the tooling uniform across environments and platforms.
    • If you notice, we could easily install the Ansible packages on the build machine and run the playbook directly, but this way we don't touch any third-party application on the build machine.
    • And once our task is done, we clean up the container and the local image.
  • How to build a Docker image from a Dockerfile using Jenkins.
  • A Dockerfile to build an Ansible image.
  • A real-world example of Ansible roles.
  • Using Ansible to create S3 buckets with tags.
  • How to block S3 bucket public access using Ansible.
  • How to create S3 bucket directories and objects using Ansible.
  • How to use Ansible to create a CloudFormation stack with parameters.
  • A CloudFormation template to create the resources below.
    • S3 bucket policies.
    • CloudFront Origin Access Identities.
    • A CloudFront cache policy.
    • A CloudFront distribution with an origin group and S3 as the origin.

Hope this blog post helps you in some use case.

There may well be errors, areas for improvement, or better ways to handle such a deployment; please share your valuable comments.


Python Script to create new JIRA tickets using JIRA API

 


Python script with the below JIRA operations:

  • Fetch JIRA ticket details using the JIRA API.
  • Create a JIRA ticket using the JIRA API.


Information to collect before using the script.
  • Have your organization's JIRA URL handy.
    • It can be retrieved from any JIRA ticket URL; the FQDN in that URL is the JIRA URL dedicated to your organization.
  • Know your JIRA project/space name.
    • Log in to any of your JIRA tickets.
    • From the top navigation panel, select Projects and check the name associated with the project; the project key is a single word without spaces.
  • Know the JIRA fields/mandatory fields within your ticket before you create a ticket via the API.
    • We will see how to fetch these details with our Python script; the get_jira_details method can be used to inspect the fields.

The script has one class and two methods; we will see how and when to use them one by one.
  • JiraHandler ( Class )
  • get_jira_details ( Method ) - Can be used to fetch JIRA ticket details.
  • create_jira_cr_ticket ( Method ) - Can be used to create a new JIRA ticket.
Note : 
  • For simplicity, I have used the basic authentication method to authenticate to the JIRA server; for a little obscurity, instead of keeping the password in plain text, it is base64 encoded (keep in mind base64 is encoding, not encryption). A short helper for generating the encoded string is shown after this list.
  • You need to be ready with the payload data JSON file before you go ahead and create a new JIRA ticket; this file can have any name, but the content must be valid JSON.
        payload_data.json
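
For reference, a quick way to generate the encoded value the script expects in its securestring field (run this once locally and paste the printed string into the script); again, this is simple encoding, not encryption:

import base64

# Encode the JIRA password / API token once and paste the output into the script.
encoded = base64.b64encode(bytes("your-jira-password", "utf-8")).decode("utf-8")
print(encoded)

# The script later reverses it with:
# base64.b64decode(encoded).decode("utf-8")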
 

Script.

import requests
import json
import base64
from requests.auth import HTTPBasicAuth

# To encode/decode your credentials with the base64 module:
# To encode --> base64.b64encode(bytes("random", "utf-8"))
# To decode --> base64.b64decode("cmFuZG9t").decode("utf-8")

class JiraHandler:
      
    def __init__(self, username):
        print('Loading instance variables...')
        self.username = username
        self.securestring = base64.b64decode("replaceItwithYourCredential").decode("utf-8")
        self.url = "https://jira.yourownjiradomain.com/rest/api/2/issue/"

    def get_jira_details(self,jira_ticket):

        try :
            
            auth = HTTPBasicAuth(self.username, self.securestring)
            get_details_url = self.url + jira_ticket

            headers = {
               "Accept": "application/json"
            }
            
            print("Retrieving", jira_ticket, "details...")
            response = requests.request(
               "GET",
               get_details_url,
               headers=headers,
               auth=auth
            )
            print(json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")))
            
        except Exception as e:
            return e



    def create_jira_cr_ticket(self, filename):
        get_details_url = self.url

        auth = HTTPBasicAuth(self.username, self.securestring)

        headers = {
           "Accept": "application/json",
           "Content-Type": "application/json"
        }
        try:
            with open(filename, "r") as re_read_file:
                payload = re_read_file.read()
                print("Creating new JIRA...")
                response = requests.request(
                   "POST",
                   get_details_url,
                   data=payload,
                   headers=headers,
                   auth=auth
                )
                print(json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")))
        except Exception as filenotfound:
            print("Can't load file..", filenotfound)



         


How to use it:
  • Load the instance variables by instantiating the class, providing your JIRA username as input.
# Initiate Class and load instance variable.
d = JiraHandler("your_JIRA_Username")
  • Call the get_jira_details method with a JIRA ticket as input; for example, to get details about the ticket CKPROJ-6162, we call the method as described below.
# Call below method to get JIRA ticket details.
d.get_jira_details("CKPROJ-6162")
  • Before calling the create_jira_cr_ticket method, dump the JSON content into a JSON file; for instance, I have created a file named payload_data.json and it looks like the below.
    • Wondering how to work out the fields? Use the get_jira_details method; it will give you an idea of the fields used within your project's JIRA tickets.
{
        "fields": {
           "project":
           {
              "key": "CKPROJ"
           },
           "summary": "My Dummy Ticket created by REST API",
           "description": "Dummy CR for REST API Test",
           "confluence": "Dummy Confluence Page Link",
           "verification_steps": "Verification Plan",
           "issuetype": {
              "name": "Change Request"
               }
           }
        }
Once you are ready with the payload data, save it in a JSON file and call the method with the payload_data.json file as input; the script will print the created JIRA ticket details as output.
# Call below method to create new JIRA ticket
d.create_jira_cr_ticket("C:\\Users\\cyberkeeda\\payload_data.json")

Hope this script helps in some way; for any help, please comment.



YAML Cheat Sheet

 


YAML (or YML) has become a must-know data serialization language for system administrators. It is widely used in configuration management, containerization platforms, orchestration applications, Infrastructure as Code, and more, and because of its human-readable format it is widely accepted and used.

YAML usage in some widely used sysadmin tools.
  • Ansible
  • Docker
  • Kubernetes
  • AWS CloudFormation
  • Terraform
Here in this blog post, I have kept some YAML cheat sheets, which can help you craft your own YAML files or read existing ones.

Key Value

A variety of data is structured in the form of key-value pairs.

Key - items written on the left side ( Fruit, Drink, Vegetable )
Value - items written on the right side ( Orange, Juice, Spinach )
Important: there must be a space after the colon ( : )


YAML Key Value Pair
Fruit: Orange
Drink: Juice
Vegetable: Spinach


List or Array

A representation of smartphone OS types and brands as lists in YAML format.


YAML List/Array
Smartphones:
  - IOS
  - Android
Brands:
  - Apple
  - Nokia
  - Micromax
  - Vivo


Dictionary

A representation of an employee, a 26-year-old male named Jignesh working as a web developer, as a dictionary in YAML format.


YAML Dictionary/Map
Employee:
  Name: Jignesh
  Sex: Male
  Age: 26
  Title: Web Developer


Dictionary inside List

A representation of lists of Fruits ( Mango, Guava, Banana ) and Vegetables, with each item's specification ( Calories, Fat, Carbs ) mapped as a dictionary.


YAML Dictionary inside List
Fruits:
  - Mango:
      Calories: 95
      Fat: 0.3
      Carbs: 25
  - Guava:
      Calories: 105
      Fat: 0.4
      Carbs: 27
  - Banana:
      Calories: 45
      Fat: 0.1
      Carbs: 11
Vegetables:
  - Onion:
      Calories: 25
      Fat: 0.1
      Carbs: 6
  - Potato:
      Calories: 22
      Fat: 0.2
      Carbs: 4.8
  - Ginger:
      Calories: 8
      Fat: 0.1
      Carbs: 1.9


List inside Dictionary

A representation of an employee record as a dictionary, where employee Shail, a 28-year-old male web developer, has been assigned two projects ( Login Form and Logout Form ), represented as a list inside the dictionary.

YAML List inside Dictionary
Employee:
  Name: Shail
  Sex: Male
  Age: 28
  Title: Web Developer
  Projects:
    - Login Form
    - Logout Form

For sure there are many more items to add to the YAML cheat list, but we will begin with these.


AWS - How to extend a Windows drive volume from its existing size to a larger size

How to extend an EBS volume attached to a Windows Server


To be honest, I'm not a Windows guy; even for small things I need to Google, and even if I have already done a task on Windows Server, by the next time I'm asked to do it I've usually forgotten, given how rarely Windows work comes my way.

So why not draft a blog post? Let's see how to do it.
In this blog post I will cover:
  • How to extend the Windows root EBS volume.
  • How to extend an additional attached EBS volume.

Lab Setup details:
  1. We already have an EC2 instance with Windows Server installed on it.
  2. We already have a root volume ( Disk 0 ) attached, of size 30 GB.
  3. We have made two additional disk partitions, as the D and E drives.
  4. We already have an additional EBS volume ( Disk 1 ) mounted, with the partition named DATA.
  5. We assume no unallocated space is present.
How to extend windows root EBS device volume.

Final goal : We will add 3 GB of additional disk space to our root EBS volume (/dev/sda1) and extend our D: drive partition from 5 GB to 8 GB.

  • Go to the AWS console and select your desired Windows Server EC2 instance.
  • Under Description, find the block device (/dev/sda1), click it, and from the popup window note the EBS volume ID and select it.
  • It redirects to the EBS Volumes window; confirm the EBS volume ID noted in the step above, and confirm the existing size too.


  • Once confirmed, we are ready to modify the volume size, from 30 GB to 33 GB.
  • Select the volume, right-click it, choose Modify Volume, and change it from 30 to 33, as we want to increase it by 3 GB.
  • Confirm and submit, then watch the state until it changes from optimizing to available.
  • Once completed, we can log in to our Windows EC2 instance and follow the next steps.
  • Open Run --> paste "diskmgmt.msc" --> Action --> Refresh Disk.
  • A new unallocated space of 3 GB will appear.
  • Now we are ready to extend our D: drive from 5 GB to 8 GB.

  • Right-click the D: volume --> Extend Volume --> Next --> the 3 GB of space should appear in the Selected panel --> Finish.
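
If you prefer the command line over the console, the same resize can be scripted. A rough sketch, assuming a hypothetical volume ID and the D: drive; the first command needs AWS credentials, the remaining ones run inside the instance in PowerShell:

# Hypothetical volume ID -- replace with the EBS volume ID noted from the console.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 33

# Inside the Windows instance (PowerShell), rescan storage and extend D: into the new space.
Update-HostStorageCache
Resize-Partition -DriveLetter D -Size (Get-PartitionSupportedSize -DriveLetter D).SizeMax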

We can do the same for the additional attached disk volumes; just identify the EBS volume ID and follow the same procedure.
