
What are Terraform providers and how to use them.

 

Within this post, we will cover 

  • What are Terraform providers
  • Documentation link for providers
  • How to choose providers
  • How to define providers within your Terraform file
  • Provider versions
    • How to find Terraform provider versions
    • How to explicitly mention a provider version in your Terraform file

What are Terraform providers ?

Terraform supports a large number of providers.
When we say providers, we basically mean the Terraform-supported binaries and plugins for individual platforms, for example AWS, Azure, GCP, etc.
To be specific, the Terraform documentation categorizes providers into multiple groups, mainly:
  • Major Clouds
    • AWS
    • GCP
    • Azure
    • OCI
    • Digital Ocean
    • VMware
  • Clouds
    • Other Cloud providers.
  • Infrastructure Software.
  • Network
  • VCS
  • Monitor and System Management
  • Database
  • Community.
Documentation link for providers.

How to choose providers ?
Before you start writing your first Terraform file, you must choose the appropriate provider to provision the
desired infrastructure.
For example, if you want to create a VPC subnet in AWS, you must choose the AWS provider and define it within your Terraform script.

Navigate to the official link to learn more about supported providers : Link
Please note, there are labels that differentiate provider authors and owners.
  • Official
    • Officially maintained, supported, and tested by HashiCorp.
    • Note : they can be installed directly by executing the terraform init command.
  • Verified
    • Verified providers are reviewed by HashiCorp and actively maintained by contributors; the badge appears after verification by HashiCorp.
    • Note : they can't be installed directly by executing the terraform init command.
  • Community
    • Third-party plugins and modules, not actively maintained.
    • Note : they can't be installed directly by executing the terraform init command.
How to define providers within your terraform file ?
  • Create an empty file within your IDE and give it an extension of .tf
$ touch create_new_ec2_instance.tf
  • The next step is to choose the provider definition format from the official Terraform documentation.
    • Navigate to the official provider Link
    • Select your provider as per your requirement.
      • For example, I need to create an EC2 instance, hence I must select AWS as the provider.
      • In case I want to create an Azure Blob container, I must select Azure as my provider.
  • Once the provider is selected, switch to the Documentation tab in the navigation bar.

  • Within the documentation, scroll to the Example Usage section and look at how the provider block has been defined.
    • Please note, before you define providers and start executing Terraform, you must have the authentication mechanism ready; it's obvious that to provision any infrastructure on any public cloud, you must be authenticated first.
    • Every provider has a different way of authenticating.
    • Keeping credentials hardcoded in a file is discouraged; one workaround is to define environment variables and read them at runtime.
provider "aws" {
  region     = "us-east-1"
  access_key = "AKIXXXXXXXXHB5PO7T6G"
  secret_key = "UdB1/aXJ9QgbQUSBS8BS9NWdrjr3wRbjE7hKddTD"
}
In case we want to use the export method, we can export the access key and secret as environment variables before running Terraform commands.

provider "aws" {}
$ export AWS_ACCESS_KEY_ID="myaccesskey"
$ export AWS_SECRET_ACCESS_KEY="myaccesssecret"
$ export AWS_DEFAULT_REGION="us-east-1"
$ terraform plan
  • The below snippet defines the Azure provider.
    • Azure authentication can be done using multiple methods, such as Azure CLI authentication, a service principal, and others.
# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}


Provider Versions.


A provider sits between the Terraform binary and the infrastructure being provisioned; providers are sets of plugins that invoke APIs to create the infrastructure requested in the Terraform file.

In the diagram above, we will create an EC2 resource from a Terraform file named create_ec2.tf

provider "aws" {
  region     = "us-east-1"
  access_key = "AKIA5BMYACCESSKEY"
  secret_key = "UdB1/MYACCESSSECRETIWIW7EH303"
}

resource "aws_instance" "my-ec2-instance" {
  ami           = "ami-08e4e35cccc6189f4" # us-west-1
  instance_type = "t2.micro"

  tags = {
    Name = "my-ec2"
  }
}
  • The provider used here is AWS.
    • Please note, under the provider section we nowhere mentioned the version of the AWS provider.
    • If the provider version is not explicitly mentioned, Terraform will download the latest version available during the terraform init command.
How to find provider versions ?
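The versions in use can also be inspected from the Terraform CLI; a minimal sketch, assuming an already-initialized working directory:

```shell
# Show the Terraform binary version along with the installed provider versions
terraform version

# List the providers required by the current configuration
terraform providers
```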




How to define provider version explicitly in terraform file ?
  • This is very useful, as pinning versions is the ideal way of using providers in a production environment, to avoid any adverse effect of a new release on our existing infrastructure.
  • Below is how we can define the provider version in a Terraform file.
provider "aws" {
  region  = "us-east-1"
  version = "3.70.0"
}
  • We can also use version constraint operators, just like in any other language: equal to, less than or equal, greater than or equal, and so on.
    • version = "3.70.0"
    • version = "<= 3.70.0"
    • version = ">= 3.70.0"
    • version = "~> 3.70" (pessimistic constraint: allows newer 3.x releases but not 4.0)

As of Terraform 0.13+, the provider version and source can be declared like below.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "3.73.0"
    }
  }
}


provider "aws" {
  region     = "us-east-1"
  access_key = "XXXXXXXXXXXXXXXXXXXX"
  secret_key = "UdB1/YYYYYYYYYYYYYYYYYYYYYYYYYYY"
}
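With the required_providers block in place, running terraform init downloads the pinned provider version (and, on Terraform 0.14+, records it in a .terraform.lock.hcl file); a sketch:

```shell
# Download the pinned hashicorp/aws provider into .terraform/
terraform init

# Later, to move to a newer version still allowed by the constraint:
terraform init -upgrade
```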



This is all about providers in this post; there is still more to explore and apply, and I will keep this thread updated.

Hope this document helps you in some way !

Read more ...

Jenkins Pipeline to create CloudFront Distribution using S3 as Origin

 


 Within this post, we will cover 

  • Complete Jenkins scripted pipeline.
  • Complete IaC ( Infrastructure as Code ) to deploy AWS services and their respective configuration.
  • Jenkins integration with GitHub as the code repository.
  • Jenkins integration with Docker to make the deployment platform independent.
  • Jenkins integration with Ansible to call AWS CloudFormation scripts.
  • Using Ansible roles to fill the gaps of AWS CloudFormation; basically, in this blog post and lab environment I'm using them to bypass the usage of AWS CloudFormation StackSets and custom resources.

Flow diagram explaining the automation.

Explanation of the above flow diagram.

Once the Jenkins job is triggered with appropriate input variables:
  1. It starts with fetching the source code from the git repository, which contains:
    1. Source Code for applications ( HTML, CSS, JS )
    2. IAAC code to support infrastructures deployment.
      • Ansible Role, playbooks.
      • CloudFormation templates.
      • Jenkins File, which has scripted pipeline defined.
  2. Once the source code is downloaded, Jenkins will look for a pipeline file named Jenkinsfile.
  3. Once the Jenkins file is executed, it will run the pipeline in the below stages.
    1. Stage Checkout : it looks at the deployment type, a normal build or a rollback, and based upon it checks out the respective git branch or tag.
    2. Stage Build : to make the pipeline platform independent and reusable, instead of directly triggering jobs on the Jenkins node via bash or PowerShell commands, we will use Docker containers to run our CLI commands.
      • Here we will use Ansible playbooks to create infrastructure, thus in this step we build an Ansible Docker image from a Dockerfile.
    3. Stage Deploy : once our prerequisites are ready ( the Ansible Docker image ), we run the Ansible container and trigger the ansible-playbook command on the fly with the appropriate environment variables.
      • The Ansible playbook ( root.yml ) is executed, which has the role defined under it by the name ansible_role.
      • I have removed unused default directories ( meta, defaults, handlers, tests, etc. ) as these are not needed for our requirement.
      • The Ansible role has three task playbook files with the below operations.
        • Create S3 buckets : uses the amazon.aws.s3_bucket module to create S3 buckets with tags and restricted public access.
        • Create empty directories within the above created S3 buckets : uses the amazon.aws.aws_s3 module to create bucket objects.
        • Create CloudFront distribution : uses the amazon.aws.cloudformation module to create the CloudFront distribution via a CloudFormation template.

Jenkins file used in this lab.
def ENVT = env.ENVIRONMENT
def VERSION = env.VERSION
def JOBTYPE = env.JOBTYPE
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY


node('master'){
  try {

    stage('checkout'){
      if ( "${VERSION}" == 'default') {
        checkout scm
      }
      else {
        checkout scm
        sh "git checkout $VERSION"
      }
    }

    stage('build'){
      sh "ls -ltr"
      echo "Building docker image via dockerfile..."
      sh "docker build -t ck-pwdgen-app/ansible:2.10-$BUILD_ID ."
    }

    stage('deploy'){
      echo "Infrastructure deployment started...."
      wrap([$class: "MaskPasswordsBuildWrapper",
            varPasswordPairs: [[password: ACCESS_KEY, var: ACCESS_KEY], [password: KEY_ID, var: KEY_ID]]]) {
        sh "docker run \
            -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
            -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
            -e AWS_DEFAULT_REGION='us-west-1' \
            ck-pwdgen-app/ansible:2.10-$BUILD_ID ansible-playbook -vvv --extra-vars 'Environment=${ENVT}' root.yml"
      }
    }
  }

  catch (e){
    echo "Error occurred - " + e.toString()
    throw e
  }
  finally {
    deleteDir()
    if ( "${JOBTYPE}" == 'build-deploy') {
      sh 'docker rmi -f ck-pwdgen-app/ansible:2.10-$BUILD_ID && echo "ck-pwdgen-app/ansible:2.10-$BUILD_ID local image deleted."'
    }
  }
}

Jenkins Pipeline job will look something like below.


 

Dockerfile used to create Ansible Image
FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli && ansible-galaxy collection install amazon.aws


ADD root.yml /usr/local/ansible/
COPY ansible_role /usr/local/ansible/ansible_role

WORKDIR /usr/local/ansible/

CMD ["ansible-playbook", "--version"]
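The image can be built and smoke-tested locally before wiring it into the pipeline; a sketch using a local tag instead of the Jenkins $BUILD_ID:

```shell
# Build the Ansible image from the Dockerfile in the current directory
docker build -t ck-pwdgen-app/ansible:2.10-local .

# The default CMD just prints the Ansible version, a quick sanity check
docker run --rm ck-pwdgen-app/ansible:2.10-local
```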

Ansible Role directory structure and it's respective file contents.
root.yml
|
ansible_role/
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_cloudfront_dist.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
└── vars
    └── int
        └── main.yml

3 directories, 6 files

Ansible entry playbook file ( root.yml ); we will initiate the Ansible tasks using the role defined in the below file.

$ cat root.yml

---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role

Ansible Roles Variable file content.

$ cat ansible_role/vars/int/main.yml 

---
# default variables
region: us-east-1
ProductName: ck
ProjectName: pwdgen
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-bucket"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-bucket"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-bucket"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-bucket"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    Owner: "admin@cyberkeeda.com"

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/main.yml 

---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml
- import_tasks: create_cloudfront_dist.yml

Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_s3_bucket.yml

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"



Ansible Role Tasks file contents.
$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }


- name: Create empty directories to deploy latest build.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.latest_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }
CloudFormation template used by the create_cloudfront_dist.yml task to create the CloudFront distribution.
AWSTemplateFormatVersion: '2010-09-09'

Description: 'CF Template to setup infra for static password generator application'

Parameters:
    Environment:
      Description:    Please specify the target environment.
      Type:           String
      Default:        "int"
      AllowedValues:
        - int
        - pre-prod
        - prod
    AppName:
      Description:  Application name.
      Type:         String
      Default:      "pwdgen"

    AlternateDomainNames:
      Description:    CNAMEs (alternate domain names)
      Type:           String
      Default:        "jackuna.github.io"

    IPV6Enabled:
      Description:    Whether CloudFront should respond to IPv6 DNS requests with an IPv6 address for your distribution.
      Type:           String
      Default:        true
      AllowedValues:
        - true
        - false

    OriginProtocolPolicy:
      Description:    CloudFront Origin Protocol Policy to apply to your origin.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - http-only
        - match-viewer
        - https-only

    Compress:
      Description:    Whether CloudFront should automatically compress certain files.
      Type:           String
      Default:        "true"
      AllowedValues:
        - true
        - false

    DefaultTTL:
      Description:    The default time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 86400 seconds (one day).
      Type:           String
      Default:        "540.0"

    MaxTTL:
      Description:    The maximum time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 31536000 seconds (one year).
      Type:           String
      Default:        "600.0"

    MinTTL:
      Description:    The minimum amount of time that you want objects to stay in the cache before CloudFront queries your origin to see whether the object has been updated.
      Type:           String
      Default:        "1.0"

    SmoothStreaming:
      Description:    Indicates whether to use the origin that is associated with this cache behavior to distribute media files in the Microsoft Smooth Streaming format.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false
    QueryString:
      Description:    Indicates whether you want CloudFront to forward query strings to the origin that is associated with this cache behavior.
      Type:           String
      Default:        "false"
      AllowedValues:
        - true
        - false

    ForwardCookies:
      Description:    Forwards specified cookies to the origin of the cache behavior.
      Type:           String
      Default:        "none"
      AllowedValues:
        - all
        - whitelist
        - none

    ViewerProtocolPolicy:
      Description:    The protocol that users can use to access the files in the origin that you specified in the TargetOriginId property when the default cache behavior is applied to a request.
      Type:           String
      Default:        "https-only"
      AllowedValues:
        - redirect-to-https
        - allow-all
        - https-only

    PriceClass:
      Description:    The price class that corresponds with the maximum price that you want to pay for CloudFront service. If you specify PriceClass_All, CloudFront responds to requests for your objects from all CloudFront edge locations.
      Type:           String
      Default:        "PriceClass_100"
      AllowedValues:
        - PriceClass_All
        - PriceClass_100
        - PriceClass_200

    SslSupportMethod:
      Description:    Specifies how CloudFront serves HTTPS requests.
      Type:           String
      Default:        "sni-only"
      AllowedValues:
        - sni-only
        - vip

    MinimumProtocolVersion:
      Description:    The minimum version of the SSL protocol that you want CloudFront to use for HTTPS connections.
      Type:           String
      Default:        "TLSv1.2_2021"
      AllowedValues:
        - TLSv1.2_2021
        - TLSv1.2_2019
        - TLSv1.1_2018

    OriginKeepaliveTimeout:
      Description:    You can create a custom keep-alive timeout. All timeout units are in seconds. The default keep-alive timeout is 5 seconds, but you can configure custom timeout lengths. The minimum timeout length is 1 second; the maximum is 60 seconds.
      Type:           String
      Default:        "60"

    OriginReadTimeout:
      Description:    You can create a custom origin read timeout. All timeout units are in seconds. The default origin read timeout is 30 seconds, but you can configure custom timeout lengths. The minimum timeout length is 4 seconds; the maximum is 60 seconds.
      Type:           String
      Default:        "30"


    BucketVersioning:
      Description:    The versioning state of an Amazon S3 bucket. If you enable versioning, you must suspend versioning to disable it.
      Type:           String
      Default:        "Suspended"
      AllowedValues:
        - Enabled
        - Suspended

Resources:
  # Bucket Policy for primary and secondary buckets.
  PrimaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-primary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-primary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt PrimaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId
  SecondaryBucketReadPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket: !Sub 'ck-${Environment}-${AppName}-secondary-bucket'
        PolicyDocument:
          Statement:
          - Action: 
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-secondary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt SecondaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId

  # Cloud Front OAI
  PrimaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-primary'
  SecondaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-secondary'

  # Cloudfront Cache Policy
  CDNCachePolicy:
    Type: AWS::CloudFront::CachePolicy
    Properties: 
      CachePolicyConfig: 
        Comment: 'Max TTL 600 to validate frequent changes'
        DefaultTTL: !Ref DefaultTTL
        MaxTTL: !Ref MaxTTL
        MinTTL: !Ref MinTTL
        Name: !Sub 'ck-${Environment}-${AppName}-cache-policy'
        ParametersInCacheKeyAndForwardedToOrigin: 
            CookiesConfig: 
                CookieBehavior: none
            EnableAcceptEncodingBrotli: True
            EnableAcceptEncodingGzip: True
            HeadersConfig: 
                HeaderBehavior: none
            QueryStringsConfig: 
                QueryStringBehavior: none

  # CLOUDFRONT DISTRIBUTION
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    DependsOn:
    - CDNCachePolicy
    Properties:
      DistributionConfig:
        Comment: 'Cyberkeeda Password Generator application'
        Enabled: true
        HttpVersion: http2
        IPV6Enabled: true
        DefaultRootObject: version.json
        Origins:
        - DomainName: !Sub 'ck-${Environment}-${AppName}-primary.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-primary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${PrimaryBucketCloudFrontOriginAccessIdentity}'
        - DomainName: !Sub 'ck-${Environment}-${AppName}-secondary.s3.amazonaws.com'
          Id: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
          OriginPath: "/v1/latest"
          ConnectionAttempts: 1
          ConnectionTimeout: 2
          S3OriginConfig:
            OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${SecondaryBucketCloudFrontOriginAccessIdentity}'
        OriginGroups:
          Quantity: 1
          Items: 
          - Id: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
            FailoverCriteria: 
              StatusCodes: 
                Items: 
                - 500
                - 502
                - 503
                - 504
                - 403
                - 404
                Quantity: 6
            Members:
              Quantity: 2
              Items: 
              - OriginId: !Sub 'ck-${Environment}-${AppName}-primary-origin'
              - OriginId: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
        CacheBehaviors:
          - CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
            PathPattern:  '*'
            ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
            TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
        DefaultCacheBehavior:
          AllowedMethods:
            - GET
            - HEAD
          TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
          ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
          CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
Outputs:
  CDNCloudfrontURL:
    Description: CloudFront CDN Url.
    Value: !GetAtt  'CloudFrontDistribution.DomainName'


Once the above files and their respective contents are pushed to the source code repository, we can use them to create AWS services using the Jenkins pipeline job.

If we break down the blog post, it can be used for other technical references too, such as:
  • Jenkins scripted pipeline using parameters.
  • How to mask passwords and sensitive environment variables.
  • Leveraging the power of Docker to make builds uniform across environments and platforms.
    • If you notice, we could easily install the Ansible packages on the build machine and run the playbook directly, but this way we avoid touching any third-party application on the build machine.
    • And once our task is done, we remove the container.
  • How to build docker image from docker file using jenkins.
  • Docker file to build ansible image.
  • Real world example of Ansible Roles.
  • Ansible to create S3 buckets with tags.
  • How to disable s3 bucket public access using ansible.
  • How to create s3 bucket directories and objects using Ansible.
  • How to use Ansible to create CloudFormation stack using parameters.
  • CloudFormation template to create below resources.
    • S3 Bucket Policy
    • CloudFront Origin Access Identity.
    • CloudFront Cache Policy.
    • CloudFront Distribution with Origin Group and S3 as an Origin.

Hope this blog post helps you in some use case.

There are definitely errors and areas of improvement within this blog post, or better ways to handle such a deployment; please share your valuable comments.

Read more ...

Cloudformation template to create S3 bucket with Tags and disable public access

 


Below CloudFormation template can be used for the following tasks.
  • Create S3 bucket.
  • Add tags to S3 bucket.
  • Disable public access.
Resources:
  # S3 Bucket
  PrimaryBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'cyberkeeda-${Environment}-${AppName}-bucket'
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      Tags:
        - Key: Name
          Value: !Sub 'cyberkeeda-${Environment}-${AppName}'
        - Key: Environment
          Value: "Development"
        - Key: Creator
          Value: !Sub "${Creator}"
        - Key: Appname
          Value: !Sub "${Appname}"
        - Key: Unit
          Value: !Sub "${Unit}"
        - Key: Owner
          Value: admin@ck.com

Read more ...

Ansible role to create S3 bucket and directories

 


 Ansible roles can be defined as:

  • A collection of multiple playbooks within directories for several tasks and operations.
  • A way of maintaining playbooks in a structured and identical manner.
  • A way of breaking lengthy playbooks into small plays.
  • Roles can be uploaded to Ansible Galaxy, which can then be reused as an Ansible library or module.

How can we create Ansible roles?
  • We can use the ansible-galaxy command to download existing roles uploaded to https://galaxy.ansible.com/
  • We can use the ansible-galaxy command to create a new role.
  • While creating a new role, ansible-galaxy creates roles in the default directory /etc/ansible/roles, followed by the name of the role.
Below commands can be used as per need.
  • Ansible galaxy command to check installed collections.
$ ansible-galaxy collection list
  • Ansible galaxy command to create role in default directory
$ ansible-galaxy init /etc/ansible/roles/my-role --offline
  • Ansible galaxy command to create role in present working directory
$ ansible-galaxy init my-role
  • Ansible galaxy command to install collections from the Ansible Galaxy website.
$ ansible-galaxy collection install amazon.aws

The Ansible role directory structure looks like below; we can take the example of the above created role named my-role.
$ ansible-galaxy init my-role
- Role my-role was created successfully
$ tree my-role/
my-role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml

8 directories, 8 files

How can we use roles
  • Till now, we have seen how to create a role and what its default directory structure looks like. Ansible roles can be used in three ways:
    • with the roles option: this is the classic way of using roles in a play.
    • tasks level with include_role: you can reuse roles dynamically anywhere in the tasks section of a play using include_role.
    • tasks level with import_role: you can reuse roles statically anywhere in the tasks section of a play using import_role.
Here we will learn more about the classic way of using roles, that is, the roles option in a playbook.

So instead of going through the conventional method of installing apache or nginx, I will share a real-world custom role that performs the following tasks/operations:
  • Create multiple AWS S3 buckets by regions.
  • Create directory structure within two of above created bucket.
First let's go through the playbook, which can be used independently to do the entire operation without creating Ansible roles.

Note: the amazon.aws galaxy collection must be updated to a recent version in order to use the s3_bucket module.
$ ansible-galaxy collection install amazon.aws
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:

    - name: Read environment specific variables.
      include_vars:
          file: "ansible_role/vars/{{ Environment }}/main.yml"

    - name: Create static-ck application buckets in us-east-1 region.
      s3_bucket:
          name: "{{ item }}"
          state: present
          tags:
              Name: "{{ item }}"
              Environment: "{{ Environment }}"
              Owner: "{{ bucketTags[Environment]['Owner'] }}"
          region: us-east-1
          public_access:
              block_public_acls: true
              ignore_public_acls: true
              block_public_policy: true
              restrict_public_buckets: true
      with_items:
          - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
          - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
          - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

    - name: Create static-ck application buckets in us-east-2 region.
      s3_bucket:
          name: "{{ item }}"
          state: present
          tags:
              Name: "{{ item }}"
              Environment: "{{ Environment }}"
              Owner: "{{ bucketTags[Environment]['Owner'] }}"
          region: us-east-2
          public_access:
              block_public_acls: true
              ignore_public_acls: true
              block_public_policy: true
              restrict_public_buckets: true
      with_items:
          - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"


    - name: Create empty directories to store build artifacts.
      aws_s3:
          bucket: "{{ item.bucket_name }}"
          object: "{{ item.artifact_dir }}"
          mode: create
      with_items:
          - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/app1/artifacts" }
          - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }


    - name: Create empty directories to deploy latest build.
      aws_s3:
          bucket: "{{ item.bucket_name }}"
          object: "{{ item.latest_dir }}"
          mode: create
      with_items:
          - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", latest_dir: "/app1/latest" }
          - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }

The above playbook can be run independently using the command below.
$ ansible-playbook  -vv --extra-vars "Environment=int" main.yml


The same deployment can be done using an Ansible role by following the steps below.
  • Create a new Ansible role named ansible_role
$ ansible-galaxy init ansible_role
  • Create a new root/entry playbook to initiate deployment
$ touch root.yml
  • Add the lines below, using the "roles" option to call our newly created role directory named ansible_role. When using the roles option, keep the following points in mind about the main.yml files.
    • When you use the roles option at the play level, for each role ‘x’:
    • If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
    • If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
    • If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
    • If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
    • If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
    • Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
---
- hosts: localhost
  connection: local
  gather_facts: False

  roles:
   - ansible_role 
  • Below is the directory structure we follow within our newly created role.
root.yml
|
ansible_role/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── int
        └── main.yml

From the directory layout above, we have the following files and directories to create.
  • We have divided our work into two tasks:
    • Create the S3 buckets.
    • Create directories within the S3 buckets.
    • These two tasks are defined individually in two separate files:
      • create_s3_bucket.yml
      • create_bucket_directories.yml
    • ansible_role/tasks/main.yml is the entry point for these two tasks, which it pulls in using the import_tasks option.
$ cat ansible_role/tasks/main.yml
---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml
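Note that import_tasks is static: the listed files are merged into the play at parse time. If the task file name ever needs to be chosen at runtime (for example, from a variable), include_tasks is the dynamic alternative; a hedged sketch with a made-up variable name:

```yaml
# Hypothetical variant of tasks/main.yml -- include_tasks resolves the
# file name at runtime, so a variable can select which task file runs.
---
- include_tasks: "create_{{ resource_type }}.yml"
```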

This is how my other two task files look.

$ cat ansible_role/tasks/create_s3_bucket.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-1
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['PrimarBucketName'] }}"
      - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
      - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  s3_bucket:
      name: "{{ item }}"
      state: present
      tags:
          Name: "{{ item }}"
          Environment: "{{ Environment }}"
          Owner: "{{ bucketTags[Environment]['Owner'] }}"
      region: us-east-2
      public_access:
          block_public_acls: true
          ignore_public_acls: true
          block_public_policy: true
          restrict_public_buckets: true
  with_items:
      - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"

$ cat ansible_role/tasks/create_bucket_directories.yml
---

- name: Read environment specific variables.
  include_vars:
      file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  aws_s3:
      bucket: "{{ item.bucket_name }}"
      object: "{{ item.artifact_dir }}"
      mode: create
  with_items:
      - { bucket_name: "{{ bucketCfg[Environment]['PrimarBucketName'] }}", artifact_dir: "/v1/artifacts" }
      - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/v1/artifacts" }

  • We have added an additional directory named "int", short for the internal environment. In the same way, we can create more directories holding environment-specific files for prod and non-prod environments too.
    • Within the file ansible_role/vars/int/main.yml we define key-value pairs that are used later while running our playbook.
$ cat ansible_role/vars/int/main.yml
---
# default variables
region: us-east-1
ProductName: ck
ProjectName: static-app
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2
regions:
  us-east-1:
    preferredMaintenanceWindow: "sat:06:00-sat:06:30"
  us-east-2:
    preferredMaintenanceWindow: "sat:05:00-sat:05:30"

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimarBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary-cyberkeeda-bucket-01"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary-cyberkeeda-bucket-01"
    CDNLogBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-cdn-logs-cyberkeeda-bucket-01"
    DevopsBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-devops-cyberkeeda-bucket-01"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{SecondaryRegion}}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"
bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{Environment}}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    CreatorID: "admin@cyberkeeda.com"
    Owner: "admin@cyberkeeda.com"
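Lookups such as bucketCfg[Environment]['PrimarBucketName'] are plain nested dictionary indexing once Jinja2 has interpolated the templates. A minimal Python sketch of how the int values above resolve (the templates are hand-expanded here rather than rendered by Ansible):

```python
# Mimic the variable resolution Ansible performs for Environment=int.
ProductName = "ck"
ProjectName = "static-app"
Environment = "int"

# Hand-expanded equivalent of the bucketCfg entries in vars/int/main.yml.
bucketCfg = {
    "int": {
        "PrimarBucketName": f"{ProductName}-{Environment}-{ProjectName}-primary-cyberkeeda-bucket-01",
        "SecondaryBucketName": f"{ProductName}-{Environment}-{ProjectName}-secondary-cyberkeeda-bucket-01",
    }
}

# The same expression a task uses inside with_items:
primary = bucketCfg[Environment]["PrimarBucketName"]
print(primary)  # ck-int-static-app-primary-cyberkeeda-bucket-01
```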

Once the above templates are created and saved, we can run our playbook with the ansible-playbook command below.
$ ansible-playbook  -vv --extra-vars "Environment=int" root.yml

Below are the details of the parameters used with the ansible-playbook command.
  • -vv : Verbose mode, for debugging output on STDOUT
  • --extra-vars : Key-value pairs to be used within the playbook

Hope this blog post helps someone; please comment if you have any difficulty following the steps.
