Showing posts with label AWS Cloudformation.

Fix : AWS SAM IAM Error : arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31

 



Stack trace :

User: arn:aws:sts::455734o955:assumed-role/xx-xx-xx-app-role/i-xxxxxxxxxx is not authorized to perform: cloudformation:CreateChangeSet on resource: arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31 because no identity-based policy allows the cloudformation:CreateChangeSet action.


Fix.

Add the statement below to your IAM policy document.

Note :  cloudformation:* is strictly discouraged; fine-tune your permissions to only what you need. For this particular error, cloudformation:CreateChangeSet on the transform ARN is sufficient (a scoped-down example follows the snippet).

        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": [
                "arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31"
            ]
        }
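
If you prefer to grant only what this error asks for, here is a minimal sketch that attaches a scoped-down inline policy with the AWS CLI. The policy name is illustrative and the role name is taken from the stack trace above; adjust both to your setup.

$ aws iam put-role-policy \
    --role-name xx-xx-xx-app-role \
    --policy-name AllowSamTransformChangeSet \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "AllowSamTransform",
        "Effect": "Allow",
        "Action": "cloudformation:CreateChangeSet",
        "Resource": "arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31"
      }]
    }'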




Continuous Integration of Lambda Function using AWS SAM

 


AWS Lambda is awesome, and trust me, if you are working on AWS you will sooner or later have to deal with it.

In this blog post, we will cover the below use cases.

  • What is AWS SAM
  • How to create Lambda Function using AWS SAM
  • How to delete Lambda Function created using AWS SAM
  • How to integrate AWS SAM with Docker.
  • How to create a continuous integration pipeline with Jenkins, GitHub, Docker and SAM.


What is AWS SAM?

You can find detailed information in the official documentation; here is the AWS link for the same.

I would like to share my own short understanding, which should give you some idea of AWS SAM and its respective components.

  • AWS SAM is focused on building applications using AWS PaaS services such as API Gateway, AWS Lambda, SNS, SES, etc.
  • SAM templates are quite similar to CloudFormation templates, so anyone familiar with CloudFormation can easily adapt to SAM templates too.
  • SAM shortens the code compared to plain CloudFormation for serverless service deployments.
  • Behind the scenes, SAM creates CloudFormation stacks to deploy the AWS services; the transform adds the boilerplate for you, so you write less template code.
  • To use SAM you need to install an additional CLI (the SAM CLI), which is not bundled with the AWS CLI; a minimal install sketch follows below.
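
A minimal install sketch, assuming Python 3 and pip are available on your machine (AWS also ships platform-specific installers; check the official docs for those):

$ pip install aws-sam-cli
$ sam --version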
How to create a Lambda Function using SAM ?

Before you jump straight into it, first get to know the important files and directories involved.

  • samconfig.toml : Configuration file used by the SAM CLI commands ( build, deploy, etc. )
  • template.yaml : SAM template, similar to a CloudFormation template, used to define Parameters, Resources, Metadata, Outputs, Mappings etc.
  • events : Directory to store events for testing our Lambda code, using an event.json file.
  • tests : Directory that contains the unit test files.
Lab setup details -
    - We will be deploying a Lambda Function with the Python 3.7 runtime.
    - The name of our SAM application is lambda-deployer_with_sam
    - Our Lambda Function's basic task is to check the status of a port, i.e. open or closed; this is how it looks in the console.
    - Our files and templates follow the CICD approach, so we keep configuration for multiple environments ( default, dev, uat )



Steps.
  • Install the AWS SAM CLI first (see the install sketch above).
  • Most tutorials will walk you through sam init, sam build and so on; this post is a little pre-baked, as we use existing templates instead.
  • Create a new directory to use for this project
$ mkdir lambda-deployer_with_sam
  • Create the files following the directory structure layout below (a shell sketch to create them comes right after the tree).
lambda-deployer_with_sam/
├── events
│   └── event.json
├── samconfig.toml
├── src
└── template.yaml

2 directories, 3 files
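
A quick way to create the skeleton shown above (the file contents are added in the next steps):

$ cd lambda-deployer_with_sam
$ mkdir events src
$ touch events/event.json samconfig.toml template.yaml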

Here is the basic content to put in the respective files.

Contents of samconfig.toml
version = 0.1
[default]
[default.deploy]
[default.deploy.parameters]
stack_name = "default-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []

[dev]
[dev.deploy]
[dev.deploy.parameters]
stack_name = "dev-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "dev-sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []
parameter_overrides = "Environment=\"dev\""

[uat]
[uat.deploy]
[uat.deploy.parameters]
stack_name = "uat-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "uat-sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []
parameter_overrides = "Environment=\"uat\""


Let's understand the content of the SAM configuration file, samconfig.toml.
  • This file is used later during deployment of the Lambda function and its related resources.
  • This file can be used to categorize environment-specific parameters.
  • The section headers [default], [dev], [uat] define the names of the environments.
  • The lines under each [<env>.deploy.parameters] section provide the environment-specific parameters.
  • parameter_overrides is used to override the default parameters declared in the template.yaml file, which is the SAM equivalent of a CloudFormation template. The environment section to use is then picked at deploy time (see the example below).
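
For example, to deploy using the [uat] section shown above instead of [default]:

$ sam deploy --config-env uat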
Contents of template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.

Parameters:
  Environment:
    Description:    Please specify the target environment.
    Type:           String
    Default:        "dev"
    AllowedValues:
      - dev
      - uat
  AppName:
    Description:  Application name.
    Type:         String
    Default:      "find-port-status"

Mappings:
  EnvironmentMap:
    dev:
     IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'
    uat:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'
    stg:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'

Resources:
  LambdabySam:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: !Sub 'ck-${Environment}-${AppName}'
      Handler: lambda_function.lambda_handler
      Runtime: python3.7
      CodeUri: src/
      Description: 'Lambda Created by SAM template'
      MemorySize: 128
      Timeout: 3
      Role: !FindInMap [EnvironmentMap, !Ref Environment, IAMRole]
      VpcConfig:
        SecurityGroupIds:
          - sg-a0f856da
        SubnetIds:
          - subnet-e9c898a5
          - subnet-bdbb59d6
      Environment:
        Variables:
          Name: !Sub 'ck-${Environment}-${AppName}'
          Owner: CyberkeedaAdmin
      Tags:
        Name: !Sub 'ck-${Environment}-${AppName}'
        Owner: CyberkeedaAdmin


Now, the last step is to put our Lambda code into the src directory.

$ touch src/lambda_function.py 

Contents of src/lambda_function.py
import json
import socket
import time


def isOpen(ip, port):
   # Attempt a TCP connection to check whether the given port is open.
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.settimeout(1)
   try:
      s.connect((ip, int(port)))
      return True
   except socket.error:
      time.sleep(1)
      return False
   finally:
      s.close()
      
def lambda_handler(event, context):
    
    if isOpen('142.250.195.196',443):
        code = 200
    else:
        code = 500


    return {
        'statusCode': code,
        'body': json.dumps("Port status")
    }

Now that we have everything in place, let's deploy our Lambda code using SAM.

Initiate SAM build with respective environment, defined in samconfig.toml
$ sam build --config-env dev

The output will look something like below.
Building codeuri: /home/kunal/aws_sam_work/lambda-deployer_with_sam/src runtime: python3.7 metadata: {} architecture: x86_64 functions: ['LambdabySam']
requirements.txt file not found. Continuing the build without dependencies.
Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided

Now we have a build ready to be deployed; let's initiate sam deploy.
$ sam deploy --config-env dev
The output will look something like below.
Uploading to dev-sam-lambda-stack/dccfd91235d686ff0c5dcab3c4d44652  400 / 400  (100.00%)

        Deploying with following values
        ===============================
        Stack name                   : dev-lambda-deployer-with-sam-Stack
        Region                       : ap-south-1
        Confirm changeset            : False
        Disable rollback             : True
        Deployment s3 bucket         : lambda-deployer-sam-bucket
        Capabilities                 : ["CAPABILITY_IAM"]
        Parameter overrides          : {"Environment": "dev"}
        Signing Profiles             : {}

Initiating deployment
=====================
Uploading to dev-sam-lambda-stack/b6c26b6d535bf3b43f5b0bb71a88daa1.template  1627 / 1627  (100.00%)

Waiting for changeset to be created..

CloudFormation stack changeset
---------------------------------------------------------------------------------------------------------------------
Operation                     LogicalResourceId             ResourceType                  Replacement                 
---------------------------------------------------------------------------------------------------------------------
+ Add                         LambdabySam                   AWS::Lambda::Function         N/A                         
---------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:ap-south-1:897248824142:changeSet/samcli-deploy1646210098/97de1b9e-ed08-45fe-8e65-fb0c0928e8f7


2022-03-02 14:05:09 - Waiting for stack create/update to complete

CloudFormation events from stack operations
---------------------------------------------------------------------------------------------------------------------
ResourceStatus                ResourceType                  LogicalResourceId             ResourceStatusReason        
---------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS            AWS::Lambda::Function         LambdabySam                   -                           
CREATE_IN_PROGRESS            AWS::Lambda::Function         LambdabySam                   Resource creation Initiated 
CREATE_COMPLETE               AWS::Lambda::Function         LambdabySam                   -                           
CREATE_COMPLETE               AWS::CloudFormation::Stack    dev-lambda-deployer-with-     -                           
                                                            sam-Stack                                                 
---------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack - dev-lambda-deployer-with-sam-Stack in ap-south-1

This step creates the required services and their respective configuration; confirm the same from the Lambda console, where it looks like this.




Please note: every time we make changes to the lambda_function.py file, we need to re-build and re-deploy (see the sketch below).
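
A typical edit, rebuild and redeploy loop after changing lambda_function.py:

$ sam build --config-env dev && sam deploy --config-env dev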

That's all for this post; in upcoming posts we will cover the topics below (a minimal sam delete sketch follows the list for reference).
  • How to delete Lambda Function created using AWS SAM
  • How to integrate AWS SAM with Docker.
  • How to create a continuous integration pipeline with Jenkins, GitHub, Docker and SAM.
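
Until then, as a quick reference, a stack created this way can usually be removed with sam delete. A hedged sketch, assuming a recent SAM CLI version that includes the command:

$ sam delete --stack-name dev-lambda-deployer-with-sam-Stack --region ap-south-1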



Jenkins Pipeline for Continuous Integration of AWS Lambda Function with GitHub repository

 


AWS Lambda is awesome, and trust me, if you are working on AWS you will sooner or later have to deal with it.

Being a PaaS service, Lambda's deployment and testing workflows are, more often than not, driven through the Lambda Function console.

Of course there are ways to write, test, and deploy code directly through IDEs, but keep in mind you still need an Access Key and Access Secret.

So what about the code base? How will we track the code changes made in the Lambda Function itself?

In this post, we will cover the challenges of doing CICD with Lambda and one of my proposed solutions to address part of them.

Let's look at some of the challenges and their probable solutions.

  • Lambda Deployment : We can use Terraform and CloudFormation for this, so what's the challenge ?
    • CloudFormation :
      • We can inline our Lambda code under the ZipFile code block, but for third-party modules like pandas the inline code block won't work.
      • One can still package those third-party modules and the code together, but then one needs to upload the package to an S3 bucket and think of a way to handle changes before using it.
  • Lambda Function code base :
    • We still need snapshots of our Lambda function code to track daily changes and to use later in the deployment pipeline.

There are more challenges with Lambda, but in this blog post we will cover a basic piece of CICD: replicating our Lambda Function code from the console to a GitHub repository.

  • The moment we talk about CICD, most pipelines use git to fetch the source code and then use it for further steps like checkout, build, release, deploy and test.
  • Here the case is somewhat different: due to the nature of PaaS, we have to test our code's functionality in the Lambda console first, and only then push it to the repository to save our source code.
  • Yes, AWS SAM is another option for testing Lambda Function code in a local environment, but not when the Lambda is hosted in a VPC and talks to other services.

Below is one of my proposed solutions to achieve this.



Prerequisites.
  • IAM Access Key Secrets or IAM Role attached to EC2 instance, from where the Jenkins job is triggered.
  • GitHub Personal Access Token
Here is the flow...
  1. I assume the developer initially tests the code's functionality on the Lambda console; once the developer is okay with the Lambda Function code, we move to the next step.
  2. The SysAdmin/Developer can check in the code directly from the Lambda Function to the GitHub repository using a Jenkins job.
  3. The Jenkins job has a scripted pipeline attached to it, so it goes through the stages below.
    • Stage : Check out code to the appropriate branch.
    • Stage : Build a Docker image from the Dockerfile for Ansible.
    • Stage : Run an Ansible container from the above image and run the ansible-playbook command to execute the Ansible role and its related tasks.
      1. Task 1 - 
        • Download Lambda Code from Lambda Console using Python Script, which is using boto3 module.
        • Unzip the downloaded code into specific directory to track the changes as a file, else changes in zip file can't be tracked.
      2. Task 2 - 
        • Clone existing repository from git, replace the existing lambda source code with the newer one downloaded in above step.
        •  Git add, commit and push it into git repository.

Here is the lab setup.

Our Lambda Function in the console is named "ck-uat-LambdaAuthorizer".


And its code looks something like below in the console.


GitHub repository where I want to publish my code.

Repo Snapshot.


Directory Layout for the same...



Our final intention is to dump our Lambda function code under the src directory, that is lambda_folder/src.

So, following the flow stated earlier in the post, here is the code.

Jenkins Scripted Pipeline code.

Note: Do mask the additional secrets so they do not appear in plain text.
def gituser = env.GIT_USERNAME
def gituserpass = env.GIT_PASSWORD
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY
def DEBUG_MODE = env.LOG_TYPE

node('master'){
  
  try {

    stage('Git Checkout'){
            checkout scm
            sh "git checkout lambda_deployer"
        }

     stage('build'){
                  sh "ls -ltr"
                   echo "Building docker image via dockerfile..."
                   sh "docker build  -t ansible:2.10-$BUILD_ID ."
                  }

     stage('deploy'){
                    echo "Infrastructure deployment started...."
                    wrap([$class: "MaskPasswordsBuildWrapper",
                          varPasswordPairs: [[password: gituserpass, var: gituserpass] ]]) {
                    sh "docker run --rm \
                        -e gituser=$gituser \
                        -e gituserpass=$gituserpass \
                        -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
                        -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
                        -e AWS_DEFAULT_REGION='ap-south-1' \
                        ansible:2.10-$BUILD_ID ansible-playbook -$DEBUG_MODE  --extra-vars 'env=dev1 git_username=${gituser} token=${gituserpass}' lambda_folder/root_lambda_project.yml"
                      }
                    }          
      }


            
  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
    } 
  finally {
    deleteDir()
       
            sh 'docker rmi -f ansible:2.10-$BUILD_ID  && echo "ansible:2.10-$BUILD_ID local image deleted."'
  }
}

The build pipeline should look something like below in the Jenkins console.



One of the Jenkins stages, Build, builds a Docker image from a Dockerfile; here is the Dockerfile source (a local build sketch follows it).

FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli

RUN rm -rf /usr/local/ansible/

COPY lambda_folder /usr/local/ansible/lambda_folder

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]
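
Optionally, you can build and smoke-test the image locally before wiring it into Jenkins; the tag below is illustrative (the pipeline uses ansible:2.10-$BUILD_ID):

$ docker build -t ansible:2.10-local .
$ docker run --rm ansible:2.10-local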

Once the Docker image is created, the next step is to run a Docker container from this Ansible image.

Here is the Ansible role and its respective tasks.

Ansible Root Playbook YAML -- root_lambda_project.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - role

Ansible Variable file under roles -- lambda_folder/role/vars/dev1/main.yml

---
region: us-east-1
function_name: ck-uat-LambdaAuthorizer
git_repo_name: aws-swa
git_repo_branch: lambda_deployer

Python script that is called by one of the Ansible tasks to download the Lambda Function code (a usage example follows the script).

Note : It's an edited version of existing code from Stack Overflow.
"""
    Script to download individual Lambda Function and dump code in specified directory
"""
import os
import sys
from urllib.request import urlopen
import zipfile
from io import BytesIO

import boto3


def get_lambda_functions_code_url(fn_name):

    client = boto3.client("lambda")
    functions_code_url = []
    fn_code = client.get_function(FunctionName=fn_name)["Code"]
    fn_code["FunctionName"] = fn_name
    functions_code_url.append(fn_code)
    return functions_code_url


def download_lambda_function_code(fn_name, fn_code_link, dir_path):

    function_path = os.path.join(dir_path, fn_name)
    if not os.path.exists(function_path):
        os.mkdir(function_path)
    with urlopen(fn_code_link) as lambda_extract:
        with zipfile.ZipFile(BytesIO(lambda_extract.read())) as zfile:
            zfile.extractall(function_path)


if __name__ == "__main__":
    inp = sys.argv[1:]
    print("Destination folder {}".format(inp))
    if inp and os.path.exists(inp[0]):
        dest = os.path.abspath(inp[0])
        fc = get_lambda_functions_code_url(sys.argv[2])
        for i, f in enumerate(fc):
            print("Downloading Lambda function {}".format(f["FunctionName"]))
            download_lambda_function_code(f["FunctionName"], f["Location"], dest)
    else:
        print("Destination folder doesn't exist")
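
For reference, this is how the script is invoked (it mirrors the Ansible task below): the first argument is the destination directory and the second is the function name.

$ python3 role/files/download_lambda.py src ck-uat-LambdaAuthorizer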


Ansible Task 1 : lambda_folder/role/tasks/download_lambda_code.yml

---

- name: Read Variables
  include_vars:
    file: "role/vars/{{ env }}/main.yml"

- name: Download Lambda Function using Python script..
  command:
    argv:
      - python3 
      - role/files/download_lambda.py 
      - src
      - "{{ function_name }}"

Ansible Task 2 : lambda_folder/role/tasks/update_repository.yml

---
- name: Git clone source repository..
  command:
    argv:
      - git 
      - clone 
      - https://{{ git_username }}:{{ token }}@github.com/Jackuna/{{ git_repo_name }}.git 
      - -b 
      - "{{ git_repo_branch }}"

- name: Copy Lambda function source code into repo..
  command: >
    cp -r src {{ git_repo_name }}/lambda_folder

- name: Git add recent changes..
  command: >
    git add --all lambda_folder/src
  args:
    chdir: "{{ git_repo_name }}"

- name: Git Config username..
  command: >
    git config user.name {{ git_username }}
  args:
    chdir: "{{ git_repo_name }}"

- name: Git Config email..
  command: >
    git config user.email {{ git_username }}@cyberkeeda.com 
  args:
    chdir: "{{ git_repo_name }}"  
- name: Git commit recent changes..
  command: >
    git commit -m "Updated Latest code"
  args:
    chdir: "{{ git_repo_name }}"

- name: Git push recent changes..
  command:
    argv:
      - git 
      - push 
      - https://{{ git_username }}:{{ token }}@github.com/Jackuna/{{ git_repo_name }}.git 
      - -u 
      - "{{ git_repo_branch }}"
  args:
    chdir: "{{ git_repo_name }}"
  register: git_push_output  

That's all you need. In case of hurdles or issues, do comment!

Jenkins Pipeline to create CloudFront Distribution using S3 as Origin

 


 Within this post, we will cover 

  • Complete Jenkins scripted pipeline.
  • Complete IaC (Infrastructure as Code) to deploy AWS services and their respective configuration.
  • Jenkins integration with GitHub as the code repository.
  • Jenkins integration with Docker to make the deployment platform independent.
  • Jenkins integration with Ansible to call AWS CloudFormation scripts.
  • Using Ansible roles to fill the gaps of AWS CloudFormation; in this blog post and lab environment I'm using them to avoid AWS CloudFormation StackSets and Custom Resources.

Flow diagram explaining the automation.

Explanation of above flow diagram.

Once the Jenkins job is triggered with the appropriate input variables:
  1. It starts by fetching the source code from the git repository, which contains:
    1. Source code for the application ( HTML, CSS, JS )
    2. IaC code to support infrastructure deployment.
      • Ansible role and playbooks.
      • CloudFormation templates.
      • Jenkinsfile, which has the scripted pipeline defined.
  2. Once the source code is downloaded, it will look for the Jenkins pipeline file, named Jenkinsfile.
  3. Once the Jenkinsfile is executed, it will run the pipeline in the stages below.
    1. Stage Checkout : It looks at the deployment type, normal build or rollback, and based on that it checks out the respective git branch or tag.
    2. Stage Build : To make the pipeline platform independent and reusable, instead of directly triggering jobs on the Jenkins node via bash or PowerShell commands, we use Docker containers to run our CLI commands.
      • Here we use Ansible playbooks to create the infrastructure, so in this step we build an Ansible Docker image from a Dockerfile.
    3. Stage Deploy : Once our prerequisite is ready ( the Ansible Docker image ), we run an Ansible container and trigger the ansible-playbook command on the fly with the appropriate environment variables and variables.
      • The Ansible playbook ( root.yml ) is executed, which has the role defined under it by the name ansible_role.
      • I have removed unused default role directories ( meta, defaults, handlers, tests etc. ) as these are not needed for our requirement.
      • The Ansible role has three task playbook files with the operations below.
        • Create S3 buckets : It uses Ansible's amazon.aws.s3_bucket module to create S3 buckets with tags and restricted public access.
        • Create empty directories within the above S3 buckets : It uses Ansible's amazon.aws.aws_s3 module to create bucket objects.
        • Create CloudFront distribution : It uses Ansible's amazon.aws.cloudformation module to create the CloudFront distribution via a CloudFormation template.

Jenkins file used in this lab.
def ENVT = env.ENVIRONMENT
def VERSION = env.VERSION
def JOBTYPE = env.JOBTYPE
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY


node('master'){
  try {

    stage('checkout'){

        if ( "${VERSION}" == 'default') {
            checkout scm
            } 
        else {
            checkout scm
            sh "git checkout $VERSION"
            }
        }
    		
    stage('build'){
                  sh "ls -ltr"
                   echo "Building docker image via dockerfile..."
                   sh "docker build -t ck-pwdgen-app/ansible:2.10-$BUILD_ID ."
                  }
    stage('deploy'){
                    echo "Infrastructure deployment started...."
                    wrap([$class: "MaskPasswordsBuildWrapper",
                          varPasswordPairs: [[password: ACCESS_KEY, var: ACCESS_KEY], [password: KEY_ID, var: KEY_ID] ]]) {
                    sh "docker run \
                        -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
                        -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
                        -e AWS_DEFAULT_REGION='us-west-1' \
                        ck-pwdgen-app/ansible:2.10-$BUILD_ID ansible-playbook -vvv --extra-vars 'Environment=${ENVT}' root.yml"
                      }
                    } 
            }

  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
    } 
  finally {
    deleteDir()
        if ( "${JOBTYPE}" == 'build-deploy') {
          
            sh 'docker rmi -f ck-pwdgen-app/ansible:2.10-$BUILD_ID  && echo "ck-pwdgen-app/ansible:2.10-$BUILD_ID local image deleted."'
       }
  }
}

Jenkins Pipeline job will look something like below.


 

Dockerfile used to create Ansible Image
FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli && ansible-galaxy collection install amazon.aws


ADD root.yml /usr/local/ansible/
COPY ansible_role /usr/local/ansible/ansible_role

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]

Ansible role directory structure and its respective file contents.
root.yml
|
ansible_role/
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_cloudfront_dist.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
└── vars
    └── int
        └── main.yml

3 directories, 6 files

Ansible entry playbook file ( root.yml ); we initiate the Ansible tasks using the role defined in the file below.

$ cat root.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role

Ansible Roles Variable file content.

$ cat ansible_role/vars/int/main.yml 

---
# default variables
region: us-east-1
ProductName: ck
ProjectName: pwdgen
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-bucket"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-bucket"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-bucket"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-bucket"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    Owner: "admin@cyberkeeda.com"

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/main.yml 

---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml
- import_tasks: create_cloudfront_dist.yml

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_s3_bucket.yml

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"



Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }


- name: Create empty directories to deploy latest build.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.latest_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }

The last task, create_cloudfront_dist.yml, uses the amazon.aws.cloudformation module to launch the CloudFormation template below, which ships alongside the role in the repository.
AWSTemplateFormatVersion: '2010-09-09'

Description: 'CF Template to setup infra for static password generator application'

Parameters:
    Environment:
      Description:    Please specify the target environment.
      Type:           String
      Default:        "int"
      AllowedValues:
        - int
        - pre-prod
        - prod
    AppName:
      Description:  Application name.
      Type:         String
      Default:      "pwdgen"

    AlternateDomainNames:
      Description:    CNAMEs (alternate domain names)
      Type:           String
      Default:        "jackuna.github.io"

    IPV6Enabled:
      Description:    Should CloudFront to respond to IPv6 DNS requests with an IPv6 address for your distribution.
      Type:           String
      Default:        true
      AllowedValues:
        - true
        - false

    OriginProtocolPolicy:
      Description:    CloudFront Origin Protocol Policy to apply to your origin.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - http-only
        - match-viewer
        - https-only

    Compress:
      Description:    CloudFront Origin Protocol Policy to apply to your origin.
      Type:           String
      Default:        "true"
      AllowedValues:
        - true
        - false

    DefaultTTL:
      Description:    The default time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 86400 seconds (one day).
      Type:           String
      Default:        "540.0"

    MaxTTL:
      Description:    The maximum time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 31536000 seconds (one year).
      Type:           String
      Default:        "600.0"

    MinTTL:
      Description:    The minimum amount of time that you want objects to stay in the cache before CloudFront queries your origin to see whether the object has been updated.
      Type:           String
      Default:        "1.0"

    SmoothStreaming:
      Description:    Indicates whether to use the origin that is associated with this cache behavior to distribute media files in the Microsoft Smooth Streaming format.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false
    QueryString:
      Description:    Indicates whether you want CloudFront to forward query strings to the origin that is associated with this cache behavior.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false

    ForwardCookies:
      Description:    Forwards specified cookies to the origin of the cache behavior.
      Type:           String
      Default:        "none"
      AllowedValues:
        - all
        - whitelist
        - none

    ViewerProtocolPolicy:
      Description:    The protocol that users can use to access the files in the origin that you specified in the TargetOriginId property when the default cache behavior is applied to a request.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - redirect-to-https
        - allow-all
        - https-only

    PriceClass:
      Description:    The price class that corresponds with the maximum price that you want to pay for CloudFront service. If you specify PriceClass_All, CloudFront responds to requests for your objects from all CloudFront edge locations.
      Type:           String
      Default:        "PriceClass_100"
      AllowedValues:
        - PriceClass_All
        - PriceClass_100
        - PriceClass_200

    SslSupportMethod:
      Description:    Specifies how CloudFront serves HTTPS requests.
      Type:           String
      Default:        "sni-only"
      AllowedValues:
        - sni-only
        - vip

    MinimumProtocolVersion:
      Description:    The minimum version of the SSL protocol that you want CloudFront to use for HTTPS connections.
      Type:           String
      Default:        "TLSv1.2_2021"
      AllowedValues:
        - TLSv1.2_2021
        - TLSv1.2_2019
        - TLSv1.1_2018

    OriginKeepaliveTimeout:
      Description:    You can create a custom keep-alive timeout. All timeout units are in seconds. The default keep-alive timeout is 5 seconds, but you can configure custom timeout lengths. The minimum timeout length is 1 second; the maximum is 60 seconds.
      Type:           String
      Default:        "60"

    OriginReadTimeout:
      Description:    You can create a custom origin read timeout. All timeout units are in seconds. The default origin read timeout is 30 seconds, but you can configure custom timeout lengths. The minimum timeout length is 4 seconds; the maximum is 60 seconds.
      Type:           String
      Default:        "30"


    BucketVersioning:
      Description:    The versioning state of an Amazon S3 bucket. If you enable versioning, you must suspend versioning to disable it.
      Type:           String
      Default:        "Suspended"
      AllowedValues:
        - Enabled
        - Suspended

Resources:
  # Bucket Policy for primary and secondary buckets.
  PrimaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-primary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-primary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt PrimaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId
  SecondaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-secondary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-secondary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt SecondaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId

  # Cloud Front OAI
  PrimaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-primary'
  SecondaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-secondary'

  # Cloudfront Cache Policy
  CDNCachePolicy:
    Type: AWS::CloudFront::CachePolicy
    Properties: 
      CachePolicyConfig: 
        Comment: 'Max TTL 600 to validate frequent changes'
        DefaultTTL: !Ref DefaultTTL
        MaxTTL: !Ref MaxTTL
        MinTTL: !Ref MinTTL
        Name: !Sub 'ck-${Environment}-${AppName}-cache-policy'
        ParametersInCacheKeyAndForwardedToOrigin: 
            CookiesConfig: 
                CookieBehavior: none
            EnableAcceptEncodingBrotli: True
            EnableAcceptEncodingGzip: True
            HeadersConfig: 
                HeaderBehavior: none
            QueryStringsConfig: 
                QueryStringBehavior: none

  # CLOUDFRONT DISTRIBUTION
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    DependsOn:
    - CDNCachePolicy
    Properties:
      DistributionConfig:
        Comment: 'Cyberkeeda Password Generator application'
        Enabled: true
        HttpVersion: http2
        IPV6Enabled: true
        DefaultRootObject: version.json
        Origins:
        - DomainName: !Sub 'ck-${Environment}-${AppName}-primary-bucket.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-primary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${PrimaryBucketCloudFrontOriginAccessIdentity}'
        - DomainName: !Sub 'ck-${Environment}-${AppName}-secondary-bucket.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${SecondaryBucketCloudFrontOriginAccessIdentity}'
        OriginGroups:
          Quantity: 1
          Items: 
          - Id: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
            FailoverCriteria: 
              StatusCodes: 
                Items: 
                - 500
                - 502
                - 503
                - 504
                - 403
                - 404
                Quantity: 6
            Members:
              Quantity: 2
              Items: 
              - OriginId: !Sub 'ck-${Environment}-${AppName}-primary-origin'
              - OriginId: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
        CacheBehaviors:
          - CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
            PathPattern:  '*'
            ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
            TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
        DefaultCacheBehavior:
          AllowedMethods:
            - GET
            - HEAD
          TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
          ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
          CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
Outputs:
  CDNCloudfrontURL:
    Description: CloudFront CDN Url.
    Value: !GetAtt  'CloudFrontDistribution.DomainName'


Once the above files and their respective contents are committed to the source code repository, we can create the AWS services using the Jenkins pipeline job.
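
If you want to sanity-check the CloudFormation template outside the pipeline, here is a hedged sketch using the AWS CLI directly; the template file name, stack name and parameter values are illustrative:

$ aws cloudformation validate-template --template-body file://cloudfront_dist.yml
$ aws cloudformation deploy \
    --template-file cloudfront_dist.yml \
    --stack-name ck-int-pwdgen-cdn-stack \
    --parameter-overrides Environment=int AppName=pwdgen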

If we break down the blog post, it can also serve as a reference for other technical topics, such as:
  • Jenkins scripted pipeline using parameters.
  • How to mask passwords and sensitive environment variables.
  • Leveraging the power of Docker to keep the tooling uniform across environments and platforms.
    • If you notice, we could easily install the Ansible packages on the build machine and run the playbook directly, but we are not touching any third-party application on our build machine.
    • And once our task is done, we remove the container.
  • How to build a Docker image from a Dockerfile using Jenkins.
  • Docker file to build ansible image.
  • Real world example of Ansible Roles.
  • Ansible to create S3 buckets with tags.
  • How to disable s3 bucket public access using ansible.
  • How to create s3 bucket directories and objects using Ansible.
  • How to use Ansible to create CloudFormation stack using parameters.
  • CloudFormation template to create below resources.
    • S3 Bucket Policy
    • CloudFront Origin Access Identity.
    • CloudFront Cache Policy.
    • CloudFront Distribution with an Origin Group and S3 as an origin.

Hope this blog post helps you in some use case.

There may well be errors, areas for improvement, or better ways to handle such a deployment; please share your valuable comments.


Cloudformation template to create S3 bucket with Tags and disable public access

 


The below CloudFormation template can be used for the following tasks.
  • Create an S3 bucket.
  • Add tags to the S3 bucket.
  • Disable public access.
Resources:
  # S3 Bucket
  PrimaryBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'cyberkeeda-${Environment}-${AppName}-bucket'
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      Tags:
        - Key: Name
          Value: !Sub 'cyberkeeda-${Environment}-${AppName}'
        - Key: Environment
          Value: "Development"
        - Key: Creator
          Value: !Sub "${Creator}"
        - Key: Appname
          Value: !Sub "${Appname}"
        - Key: Unit
          Value: !Sub "${Unit}"
        - Key: Owner
          Value: admin@ck.com
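
A hedged deployment sketch, assuming the snippet is wrapped in a full template that declares the referenced parameters (Environment, AppName, Creator, Appname, Unit); the file name, stack name and values are illustrative:

$ aws cloudformation deploy \
    --template-file s3_bucket.yml \
    --stack-name cyberkeeda-dev-demo-bucket-stack \
    --parameter-overrides Environment=dev AppName=demo Creator=admin Appname=demo Unit=devops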


AWS Cloudformation template to create Cloudwatch Event rule to trigger ECS Task

                             


CloudFormation template that will create the below resources.

  • IAM role for the ECS task and CloudWatch rule.
  • CloudWatch schedule rule ( cron ) to trigger the task definition.


Template

AWSTemplateFormatVersion: 2010-09-09
Description: | 
              1. IAM Role to be used by ECS task and cloudwatch event rule.
              2. CloudWatch Rule to trigger ecs tasks.
             
Parameters:
  ProductName:
    Description: Parent Product name.
    Type: String
    Default: cyberkeeda
  ProjectName:
    Description: Project Name
    Type: String
    Default: cyberkeeda-report
  Environment:
    Description: The equivalent CN name of the environment being worked on
    Type: String
    AllowedValues:
      - dev
      - uat
      - qa
  Region:
    Description: Ck Region specific parameter
    Type: String
    AllowedValues:
      - mum
      - hyd
  ECSClusterARN:
    Description: ECS Cluster ARN to schedule Task 
    Type: String
    Default: None
  CWEventRuleCron:
    Description: Cron Expression to schedule ECS task. 
    Type: String
    Default: "cron(0 9 * * ? *)"
  ECSTaskDefARN:
    Description: ARN for ECS Task definition
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - 
        Label:
          default: Project based details
        Parameters:
          - ProductName
          - ProjectName
          - Environment
          - Region
      - 
        Label:
          default: ECS details.
        Parameters:
          - ECSClusterARN
          - ECSTaskDefARN
          - CWEventRuleCron
      
Resources:
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-role"
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [ 'ecs-tasks.amazonaws.com', 'events.amazonaws.com' ]
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
      Policies:
      - PolicyName: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-role-inlinePolicy"
        PolicyDocument: 
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                - ecs:RunTask
                Resource:
                - !Sub "${ECSTaskDefARN}:*"
              - Effect: Allow
                Action: iam:PassRole
                Resource:
                - "*"
                Condition:
                  StringLike:
                    iam:PassedToService: ecs-tasks.amazonaws.com
  TaskSchedule:
    Type: AWS::Events::Rule
    Properties:
      Description: Trigger Cyberkeeda Daily ECS task
      Name: !Sub  "${ProductName}-${Region}-${Environment}-${ProjectName}-daily-event-rule"
      ScheduleExpression: !Ref CWEventRuleCron
      State: ENABLED
      Targets:
        - Id: !Sub "${ProductName}-${Region}-${Environment}-${ProjectName}-daily-event-rule-targetId"
          EcsParameters:
            LaunchType: EC2
            TaskDefinitionArn: !Ref ECSTaskDefARN
            TaskCount: 1
          RoleArn:
            Fn::GetAtt:
            - ExecutionRole
            - Arn
          Arn: !Ref ECSClusterARN
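
A hedged deployment sketch for this template; the file name, stack name and ARNs are illustrative placeholders, and CAPABILITY_NAMED_IAM is needed because the template creates a named IAM role:

$ aws cloudformation deploy \
    --template-file ecs_schedule_rule.yml \
    --stack-name cyberkeeda-mum-dev-report-schedule-stack \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides \
        Environment=dev Region=mum \
        ECSClusterARN=arn:aws:ecs:ap-south-1:123456789012:cluster/demo-cluster \
        ECSTaskDefARN=arn:aws:ecs:ap-south-1:123456789012:task-definition/demo-task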

Let me know about any questions in the comment box.
