
Jenkins Pipeline for Continuous Integration of AWS Lambda Function with GitHub repository

 


AWS Lambda is awesome, and trust me, if you are working on AWS, sooner or later you will have to deal with it.

Because Lambda is a managed, PaaS-style service, we can't ignore how Lambda code is deployed and tested, which more or less happens through the Lambda Function console.

Of course there are ways to write, test, and deploy code directly through IDEs, but keep in mind you still need an Access Key ID and Secret Access Key.

So what about the code base? How will we track the code changes made in the Lambda Function itself?

In this post, we will cover the challenges of building CI/CD around Lambda and one of my proposed solutions to address part of it.

Let's look at some of the challenges and their probable solutions.

  • Lambda Deployment : We can use Terraform or CloudFormation for this, so what is the challenge?
    • CloudFormation :
      • We can put our Lambda code inline under the ZipFile block of the Code property (see the snippet just after this list), but third-party modules such as pandas cannot be bundled that way inside a CloudFormation template.
      • One can still package those third-party modules and the code together, but then the archive has to be uploaded to an S3 bucket, and we need a way of handling changes to it before every deployment.
  • Lambda Function Code base :
    • We still need snapshots of our Lambda Function code to track day-to-day changes and to use later in a deployment pipeline.
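
For reference, the inline approach looks roughly like the snippet below. The logical IDs and the IAM role reference are placeholders of my own, and only the code written inside ZipFile gets deployed, with no way to add pip packages such as pandas.

# Minimal CloudFormation sketch of an inline Lambda definition.
# Logical IDs and the IAM role are placeholders / assumptions.
MyInlineLambda:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: ck-demo-inline-lambda      # placeholder name
    Runtime: python3.7
    Handler: index.lambda_handler
    Role: !GetAtt MyLambdaRole.Arn           # assumes a role defined elsewhere
    Code:
      ZipFile: |
        def lambda_handler(event, context):
            return {"statusCode": 200, "body": "hello from inline code"}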

There are a few more challenges with Lambda Functions, but in this blog post we will cover the basics of CI/CD, that is, replicating our Lambda Function code from the console to a GitHub repository.

  • The moment we talk about CI/CD, most pipelines use git to fetch the source code and then run it through further stages such as checkout, build, release, deploy, and test.
  • Here the case is somewhat different: due to Lambda's managed, PaaS-like nature, we have to test our code's functionality in the Lambda console first, and only then push it to a repository to preserve the source code.
  • Yes, AWS SAM is another option for testing Lambda Function code in a local environment, but it does not help much when the Lambda is hosted in a VPC and talks to other services.

Below is one of my proposed solutions to achieve this.



Prerequisites.
  • An IAM Access Key and Secret, or an IAM Role attached to the EC2 instance from where the Jenkins job is triggered.
  • A GitHub Personal Access Token.
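
The pipeline below reads these secrets from environment variables (GIT_USERNAME, GIT_PASSWORD, AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY), plus LOG_TYPE for the ansible-playbook verbosity. How they get there is up to you (job parameters, injected environment variables, or credential bindings); purely as a sketch, and assuming the Credentials Binding plugin with hypothetical credential IDs, a wrapper like this could populate them.

// Sketch only: bind hypothetical Jenkins credentials to the variables the
// pipeline below expects. The credentialsId values are placeholders.
withCredentials([
    usernamePassword(credentialsId: 'github-pat',
                     usernameVariable: 'GIT_USERNAME',
                     passwordVariable: 'GIT_PASSWORD'),
    usernamePassword(credentialsId: 'aws-ci-user',
                     usernameVariable: 'AWS_ACCESS_KEY',
                     passwordVariable: 'AWS_SECRET_ACCESS_KEY')
]) {
    // run the node/stages shown later in this post inside this block
}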
Here is the flow...
  1. I assume the developer initially tests his/her code's functionality in the Lambda console. Once the developer is okay with the Lambda Function code, we move to the next step.
  2. The SysAdmin/Developer can check the code in directly from the Lambda Function to the GitHub repository using a Jenkins job.
  3. The Jenkins job has a scripted pipeline attached to it, which goes through the stages below.
    • Stage : Check out the code to the appropriate branch.
    • Stage : Build a Docker image for Ansible from the Dockerfile.
    • Stage : Run an Ansible container from the image created above and run the ansible-playbook command to execute the Ansible role and its associated tasks.
      1. Task 1 -
        • Download the Lambda code from the Lambda console using a Python script based on the boto3 module.
        • Unzip the downloaded code into a specific directory so that changes can be tracked file by file; changes inside a zip file can't be tracked.
      2. Task 2 -
        • Clone the existing repository from git and replace the existing Lambda source code with the newer one downloaded in the step above.
        • Git add, commit, and push it to the git repository.

Here is the lab setup.

Our Lambda Function in the console is named "ck-uat-Lambda-Authorizer"


And its code looks something like the below in the console.


GitHub repository where I want to publish my code.

Repo Snapshot.


Directory Layout for the same...



Our final intention is to dump our Lambda Function code under the src directory, that is, lambda_folder/src.

So, following the flow stated earlier in the post, I will paste the code below.

Jenkins Scripted Pipeline code.

Note: Do mask the additional secrets so that they do not appear in plain text.
def gituser = env.GIT_USERNAME
def gituserpass = env.GIT_PASSWORD
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY
def DEBUG_MODE = env.LOG_TYPE

node('master'){

  try {

    stage('Git Checkout'){
      // Check out the repository and switch to the branch holding the pipeline assets
      checkout scm
      sh "git checkout lambda_deployer"
    }

    stage('build'){
      // Build the Ansible Docker image from the Dockerfile in the workspace
      sh "ls -ltr"
      echo "Building docker image via dockerfile..."
      sh "docker build -t ansible:2.10-$BUILD_ID ."
    }

    stage('deploy'){
      echo "Infrastructure deployment started...."
      // Mask the git token so it does not show up in the console output
      wrap([$class: "MaskPasswordsBuildWrapper",
            varPasswordPairs: [[password: gituserpass, var: gituserpass]]]) {
        sh "docker run --rm \
            -e gituser=$gituser \
            -e gituserpass=$gituserpass \
            -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
            -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
            -e AWS_DEFAULT_REGION='ap-south-1' \
            ansible:2.10-$BUILD_ID ansible-playbook -$DEBUG_MODE --extra-vars 'env=dev1 git_username=${gituser} token=${gituserpass}' lambda_folder/root_lambda_project.yml"
      }
    }
  }
  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
  }
  finally {
    // Always clean the workspace and remove the locally built image
    deleteDir()
    sh 'docker rmi -f ansible:2.10-$BUILD_ID && echo "ansible:2.10-$BUILD_ID local image deleted."'
  }
}

The build pipeline should look something like the below in the Jenkins console.



One of the Jenkins stages, build, builds a Docker image from the Dockerfile; here is the Dockerfile source code.

FROM python:3.7

# Install Ansible, boto3 and the AWS CLI inside the image
RUN python3 -m pip install ansible==2.10 boto3 awscli

RUN rm -rf /usr/local/ansible/

# Copy the playbook, role and scripts into the image
COPY lambda_folder /usr/local/ansible/lambda_folder

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]

Once the Docker image is created, the next step is to run a Docker container from the Ansible image created above.

Here is the Ansible role and its respective tasks.

Ansible Root Playbook YAML -- root_lambda_project.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - role

Ansible Variable file under roles -- lambda_folder/role/vars/dev1/main.yml

---
region: us-east-1
function_name: ck-uat-LambdaAuthorizer
git_repo_name: aws-swa
git_repo_branch: lambda_deployer

Python script that will be called by one of the Ansible tasks to download the Lambda Function code.

Note : It's an edited version of an existing snippet from Stack Overflow.
"""
    Script to download individual Lambda Function and dump code in specified directory
"""
import os
import sys
from urllib.request import urlopen
import zipfile
from io import BytesIO

import boto3


def get_lambda_functions_code_url(fn_name):

    client = boto3.client("lambda")
    functions_code_url = []
    fn_code = client.get_function(FunctionName=fn_name)["Code"]
    fn_code["FunctionName"] = fn_name
    functions_code_url.append(fn_code)
    return functions_code_url


def download_lambda_function_code(fn_name, fn_code_link, dir_path):

    function_path = os.path.join(dir_path, fn_name)
    if not os.path.exists(function_path):
        os.mkdir(function_path)
    with urlopen(fn_code_link) as lambda_extract:
        with zipfile.ZipFile(BytesIO(lambda_extract.read())) as zfile:
            zfile.extractall(function_path)


if __name__ == "__main__":
    inp = sys.argv[1:]
    print("Destination folder {}".format(inp))
    if inp and os.path.exists(inp[0]):
        dest = os.path.abspath(inp[0])
        fc = get_lambda_functions_code_url(sys.argv[2])
        for i, f in enumerate(fc):
            print("Downloading Lambda function {}".format(f["FunctionName"]))
            download_lambda_function_code(f["FunctionName"], f["Location"], dest)
    else:
        print("Destination folder doesn't exist")


Ansible Task 1 : lambda_folder/role/tasks/download_lambda_code.yml

---

- name: Read Variables
  include_vars:
    file: "role/vars/{{ env }}/main.yml"

- name: Download Lambda Function using Python script..
  command:
    argv:
      - python3 
      - role/files/download_lambda.py 
      - src
      - "{{ function_name }}"

Ansible Task 2 : lambda_folder/role/tasks/update_repository.yml

---
- name: Git clone source repository..
  command:
    argv:
      - git 
      - clone 
      - https://{{ git_username }}:{{ token }}@github.com/Jackuna/{{ git_repo_name }}.git 
      - -b 
      - "{{ git_repo_branch }}"

- name: Git add Lambda function source code to repo..
  command: >
    cp -r src {{ git_repo_name }}/lambda_folder

- name: Git add recent changes..
  command: >
    git add --all lambda_folder/src
  args:
    chdir: "{{ git_repo_name }}"

- name: Git Config username..
  command: >
    git config user.name {{ git_username }}
  args:
    chdir: "{{ git_repo_name }}"

- name: Git Config email..
  command: >
    git config user.email {{ git_username }}@cyberkeeda.com 
  args:
    chdir: "{{ git_repo_name }}"  
- name: Git commit recent changes..
  command: >
    git commit -m "Updated Latest code"
  args:
    chdir: "{{ git_repo_name }}"

- name: Git push recent changes..
  command:
    argv:
      - git 
      - push 
      - https://{{ git_username }}:{{ token }}@github.com/Jackuna/{{ git_repo_name }}.git 
      - -u 
      - "{{ git_repo_branch }}"
  args:
    chdir: "{{ git_repo_name }}"
  register: git_push_output  
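
The role's entry point, lambda_folder/role/tasks/main.yml, is not shown above; following the same import_tasks pattern used later in this post, a minimal sketch of it would simply chain the two task files in order:

---
# lambda_folder/role/tasks/main.yml (sketch) -- run the two task files in order
- import_tasks: download_lambda_code.yml
- import_tasks: update_repository.yml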

That's all you need. In case of hurdles or issues, do comment!

Ansible role to create S3 bucket and directories

 


Ansible roles can be defined as:

  • A collection of multiple playbooks within directories, for several tasks and operations.
  • A way of maintaining playbooks in a structured and uniform manner.
  • A way of breaking lengthy playbooks into small plays.
  • Roles can be uploaded to Ansible Galaxy and reused like an Ansible library or module.

How can we create Ansible roles?
  • We can use the ansible-galaxy command to download existing roles uploaded to the website https://galaxy.ansible.com/
  • We can use the ansible-galaxy command to create a new role.
  • While creating a new role, ansible-galaxy creates it under the default directory /etc/ansible/roles, followed by the name of the role.
The commands below can be used as per need.
  • Ansible galaxy command to list installed collections
$ ansible-galaxy collection list
  • Ansible galaxy command to create a role in the default directory
$ ansible-galaxy init /etc/ansible/roles/my-role --offline
  • Ansible galaxy command to create a role in the present working directory
$ ansible-galaxy init my-role
  • Ansible galaxy command to install collections from the Ansible Galaxy website
$ ansible-galaxy collection install amazon.aws

The Ansible role directory structure looks like the below; let's take the example of the role we created above, my-role.
$ ansible-galaxy init my-role
- Role my-role was created successfully
$ tree my-role/
my-role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml

8 directories, 8 files

How can we use roles
  • So far we have seen how to create a role and what its default directory structure looks like. Ansible roles can be used in three ways (a short sketch of the two task-level options follows this list):
    • with the roles option: this is the classic way of using roles in a play.
    • at the tasks level with include_role: you can reuse roles dynamically anywhere in the tasks section of a play using include_role.
    • at the tasks level with import_role: you can reuse roles statically anywhere in the tasks section of a play using import_role.
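Purely as an illustration, here is a minimal sketch of the two task-level variants, reusing the my-role example created earlier:

---
# Sketch only: task-level usage of a role, assuming the my-role example above
- hosts: localhost
  gather_facts: False
  tasks:
    - name: Reuse the role dynamically
      include_role:
        name: my-role

    - name: Reuse the role statically
      import_role:
        name: my-role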
In this post we will focus on the classic way of using roles, that is, the roles option in a playbook.

So instead of going through the conventional example of installing apache or nginx, I will share a real-time custom role that performs the following tasks/operations:
  • Create multiple AWS S3 buckets, by region.
  • Create a directory structure within two of the buckets created above.
First, let's go through the playbook that can be used independently to do the entire operation without creating an Ansible role.

Note: the amazon.aws galaxy collection must be updated to a recent version in order to use the s3_bucket module.
$ ansible-galaxy collection install amazon.aws
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:

    - name: Read environment specific variables.
      include_vars:
        file: "ansible_role/vars/{{ Environment }}/main.yml"

    - name: Create static-ck application buckets in us-east-1 region.
      s3_bucket:
        name: "{{ item }}"
        state: present
        tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
        region: us-east-1
        public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
      with_items:
        - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
        - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
        - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

    - name: Create static-ck application buckets in us-east-2 region.
      s3_bucket:
        name: "{{ item }}"
        state: present
        tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
        region: us-east-2
        public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
      with_items:
        - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

    - name: Create empty directories to store build artifacts.
      aws_s3:
        bucket: "{{ item.bucket_name }}"
        object: "{{ item.artifact_dir }}"
        mode: create
      with_items:
        - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifcats" }
        - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifcats" }

    - name: Create empty directories to deploy latest build.
      aws_s3:
        bucket: "{{ item.bucket_name }}"
        object: "{{ item.latest_dir }}"
        mode: create
      with_items:
        - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
        - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }

The above playbook can be run independently using the command below.
$ ansible-playbook  -vv --extra-vars "Environment=int" main.yml


The same deployment can be done using Ansible roles by following the steps below.
  • Create a new Ansible role by the name ansible_role
$ ansible-galaxy init ansible_role
  • Create a new root/entry playbook to initiate the deployment
$ touch root.yml
  • Add the lines below, which use the "roles" option to call our newly created role directory ansible_role. While using the roles option, keep the following points about the main.yml files in mind.
    • When you use the roles option at the play level, for each role ‘x’:
      • If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
      • If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
      • If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
      • If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
      • If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
      • Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role 
  • Below is the directory structure we follow within our newly created role.
root.yml
|
ansible_role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── int
        └── main.yml

So from the above directory layout, we have the following files and directories to create.
  • We have divided our work into two tasks:
    • Create S3 buckets
    • Create directories within S3
    • These two tasks are defined individually in two different files named
      • create_s3_bucket.yml
      • create_bucket_directories.yml
    • Whereas ansible_role/tasks/main.yml is the entry point for these two tasks, which we import using the import_tasks option.
$ cat ansible_role/tasks/main.yml
---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml

This is how my other two task files look.

$ cat ansible_role/tasks/create_s3_bucket.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/v1/artifcats" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/v1/artifcats" }

  • We have added an additional directory named "int", short for the Internal environment; following the same pattern, we can create more directories for other environment-specific files, for prod and non-prod environments too.
    • Within the file ansible_role/vars/int/main.yml we define key-value pairs that are used later while running our playbook.
$ cat ansible_role/vars/int/main.yml
---
# default variables
region: us-east-1
ProductName: ck
ProjectName: static-app
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2
regions:
  us-east-1:
    preferredMaintenanceWindow: "sat:06:00-sat:06:30"
  us-east-2:
    preferredMaintenanceWindow: "sat:05:00-sat:05:30"

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-cyberkeeda-bucket-01"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-cyberkeeda-bucket-01"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-cyberkeeda-bucket-01"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-cyberkeeda-bucket-01"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    CreatorID: "admin@cyberkeeda.com"
    Owner: "admin@cyberkeeda.com"

Once the above templates are created and saved, we can run our playbook with the ansible-playbook command below.
$ ansible-playbook  -vv --extra-vars "Environment=int" root.yml

Below are the details of the parameters used with the ansible-playbook command.
  • -vv : Verbose mode, for debugging output on STDOUT
  • --extra-vars : Key-value pairs to be used within the playbook

Hope this blog post helps someone in some way; please comment in case you have any difficulties following the steps.
