Within this post, we will cover:
- A complete Jenkins scripted pipeline.
- Complete IaC (Infrastructure as Code) to deploy the AWS services and their respective configuration.
- Jenkins integration with GitHub as the code repository.
- Jenkins integration with Docker to make the deployment platform independent.
- Jenkins integration with Ansible to call AWS CloudFormation templates.
- Using Ansible roles to fill the gaps in AWS CloudFormation; in this blog post and lab environment I'm using them to avoid AWS CloudFormation StackSets and Custom Resources.
Flow diagram explaining the automation.
Explanation of the above flow diagram.
Once the Jenkins job is triggered with the appropriate input parameters:
- It starts by fetching the source code from the Git repository, which contains:
  - Source code for the application (HTML, CSS, JS).
  - IaC code to support the infrastructure deployment:
    - Ansible role and playbooks.
    - CloudFormation templates.
  - A Jenkinsfile, which has the scripted pipeline defined.
- Once the source code is downloaded, Jenkins looks for the pipeline file named Jenkinsfile.
- Once the Jenkinsfile is executed, it initiates the pipeline in the below stages.
- Stage Checkout: It looks at the deployment type (a normal build or a rollback) and, based on that, checks out the respective Git branch or tag.
- Stage Build: To keep the pipeline platform independent and reusable, instead of triggering jobs directly on the Jenkins node via bash or PowerShell commands, we use Docker containers to run our CLI commands.
  - Since we use Ansible playbooks to create the infrastructure, in this step we build an Ansible Docker image from a Dockerfile.
- Stage Deploy: Once our prerequisite (the Ansible Docker image) is ready, we run the Ansible container and trigger the ansible-playbook command on the fly with the appropriate environment variables and extra variables.
  - The Ansible playbook (root.yml) is executed; it has the role named ansible_role defined under it.
  - I have removed the unused default directories (meta, defaults, handlers, tests, etc.), as they are not needed for this requirement.
  - The Ansible role has three task files with the below operations:
    - Create S3 buckets: uses the amazon.aws.s3_bucket module to create the S3 buckets with tags and restricted public access.
    - Create empty directories within the above S3 buckets: uses the amazon.aws.aws_s3 module to create the bucket objects.
    - Create the CloudFront distribution: uses the amazon.aws.cloudformation module to create the CloudFront distribution via a CloudFormation template.
Jenkinsfile used in this lab.
def ENVT = env.ENVIRONMENT
def VERSION = env.VERSION
def JOBTYPE = env.JOBTYPE
def ACCESS_KEY = env.AWS_ACCESS_KEY
def KEY_ID = env.AWS_SECRET_ACCESS_KEY
node('master') {
    try {
        stage('checkout') {
            if ( "${VERSION}" == 'default' ) {
                checkout scm
            }
            else {
                checkout scm
                sh "git checkout $VERSION"
            }
        }
        stage('build') {
            sh "ls -ltr"
            echo "Building docker image via dockerfile..."
            sh "docker build -t ck-pwdgen-app/ansible:2.10-$BUILD_ID ."
        }
        stage('deploy') {
            echo "Infrastructure deployment started...."
            wrap([$class: "MaskPasswordsBuildWrapper",
                  varPasswordPairs: [[password: ACCESS_KEY, var: 'ACCESS_KEY'], [password: KEY_ID, var: 'KEY_ID']]]) {
                sh "docker run --rm \
                    -e AWS_ACCESS_KEY_ID=$ACCESS_KEY \
                    -e AWS_SECRET_ACCESS_KEY=$KEY_ID \
                    -e AWS_DEFAULT_REGION='us-west-1' \
                    ck-pwdgen-app/ansible:2.10-$BUILD_ID ansible-playbook -vvv --extra-vars 'Environment=${ENVT}' root.yml"
            }
        }
    }
    catch (e) {
        echo "Error occurred - " + e.toString()
        throw e
    }
    finally {
        deleteDir()
        if ( "${JOBTYPE}" == 'build-deploy' ) {
            sh 'docker rmi -f ck-pwdgen-app/ansible:2.10-$BUILD_ID && echo "ck-pwdgen-app/ansible:2.10-$BUILD_ID local image deleted."'
        }
    }
}
The Jenkins pipeline job will look something like below.
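As a reference for the job configuration, below is a minimal sketch of the string/choice parameters the above Jenkinsfile expects. The parameter names come from the environment variables it reads; the choices and defaults are illustrative assumptions, so adjust them to your setup (you can define them in the job UI or via a properties block at the top of the Jenkinsfile).

// Hypothetical parameter definitions for the scripted pipeline; names match the env
// variables read above, defaults and choice values are assumptions.
properties([
    parameters([
        choice(name: 'ENVIRONMENT', choices: ['int', 'pre-prod', 'prod'], description: 'Target environment'),
        string(name: 'VERSION', defaultValue: 'default', description: 'Git tag/branch to deploy, or "default" for the current branch'),
        choice(name: 'JOBTYPE', choices: ['build-deploy', 'rollback'], description: 'Deployment type'),
        password(name: 'AWS_ACCESS_KEY', defaultValue: '', description: 'AWS access key id'),
        password(name: 'AWS_SECRET_ACCESS_KEY', defaultValue: '', description: 'AWS secret access key')
    ])
])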
Dockerfile used to create the Ansible image.
FROM python:3.7
RUN python3 -m pip install ansible==2.10 boto3 awscli && ansible-galaxy collection install amazon.aws
ADD root.yml /usr/local/ansible/
COPY ansible_role /usr/local/ansible/ansible_role
WORKDIR /usr/local/ansible/
CMD ["ansible-playbook", "--version"]
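To sanity-check the image outside Jenkins, it can be built and run locally; the tag below is only an example.

$ docker build -t ck-pwdgen-app/ansible:2.10-local .
$ docker run --rm ck-pwdgen-app/ansible:2.10-local

The second command runs the image's default CMD and simply prints the ansible-playbook version, confirming the image was built correctly.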
Ansible role directory structure and its respective file contents.
root.yml
|
ansible_role/
├── README.md
├── tasks
│   ├── create_bucket_directories.yml
│   ├── create_cloudfront_dist.yml
│   ├── create_s3_bucket.yml
│   └── main.yml
└── vars
    └── int
        └── main.yml

3 directories, 6 files
Ansible entry playbook file (root.yml): we initiate the Ansible tasks using the role defined in the below file.
$ cat root.yml
---
- hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - ansible_role
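If you want to test the playbook outside the container (assuming Ansible 2.10, the amazon.aws collection and AWS credentials are available in your local shell), the same command the deploy stage runs inside Docker can be executed directly:

$ ansible-playbook -vvv --extra-vars 'Environment=int' root.yml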
Ansible role variables file content.
$ cat ansible_role/vars/int/main.yml
---
# default variables
region: us-east-1
ProductName: ck
ProjectName: pwdgen
Environment: int
PrimaryRegion: us-east-1
SecondaryRegion: us-east-2

bucketCfg:
  int:
    Environment: "{{ Environment }}"
    PrimaryBucketName: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-primary-bucket"
    SecondaryBucketName: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-secondary-bucket"
    CDNLogBucketName: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-cdn-logs-bucket"
    DevopsBucketName: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-devops-bucket"
    PrimaryBucketRegion: "{{ PrimaryRegion }}"
    SecondaryBucketRegion: "{{ SecondaryRegion }}"
    DevopsBucketRegion: "{{ PrimaryRegion }}"

bucketTags:
  int:
    PrimaryBucketName: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-primary"
    SecondaryBucketName: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-secondary"
    Environment: "{{ Environment }}"
    Owner: "admin@cyberkeeda.com"
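For example, with Environment=int the bucketCfg templates above resolve to bucket names like the below.

ck-int-pwdgen-primary-bucket
ck-int-pwdgen-secondary-bucket
ck-int-pwdgen-cdn-logs-bucket
ck-int-pwdgen-devops-bucket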
Ansible role main tasks file content.
$ cat ansible_role/tasks/main.yml
---
- import_tasks: create_s3_bucket.yml
- import_tasks: create_bucket_directories.yml
- import_tasks: create_cloudfront_dist.yml
Ansible role task file to create the S3 buckets.
$ cat ansible_role/tasks/create_s3_bucket.yml
- name: Read environment specific variables.
  include_vars:
    file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create static-ck application buckets in us-east-1 region.
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    tags:
      Name: "{{ item }}"
      Environment: "{{ Environment }}"
      Owner: "{{ bucketTags[Environment]['Owner'] }}"
    region: us-east-1
    public_access:
      block_public_acls: true
      ignore_public_acls: true
      block_public_policy: true
      restrict_public_buckets: true
  with_items:
    - "{{ bucketCfg[Environment]['PrimaryBucketName'] }}"
    - "{{ bucketCfg[Environment]['DevopsBucketName'] }}"
    - "{{ bucketCfg[Environment]['CDNLogBucketName'] }}"

- name: Create static-ck application buckets in us-east-2 region.
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    tags:
      Name: "{{ item }}"
      Environment: "{{ Environment }}"
      Owner: "{{ bucketTags[Environment]['Owner'] }}"
    region: us-east-2
    public_access:
      block_public_acls: true
      ignore_public_acls: true
      block_public_policy: true
      restrict_public_buckets: true
  with_items:
    - "{{ bucketCfg[Environment]['SecondaryBucketName'] }}"
Ansible role task file to create the bucket directories.
$ cat ansible_role/tasks/create_bucket_directories.yml
---
- name: Read environment specific variables.
  include_vars:
    file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create empty directories to store build artifacts.
  amazon.aws.aws_s3:
    bucket: "{{ item.bucket_name }}"
    object: "{{ item.artifact_dir }}"
    mode: create
  with_items:
    - { bucket_name: "{{ bucketCfg[Environment]['PrimaryBucketName'] }}", artifact_dir: "/app1/artifacts" }
    - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", artifact_dir: "/app1/artifacts" }

- name: Create empty directories to deploy latest build.
  amazon.aws.aws_s3:
    bucket: "{{ item.bucket_name }}"
    object: "{{ item.latest_dir }}"
    mode: create
  with_items:
    - { bucket_name: "{{ bucketCfg[Environment]['PrimaryBucketName'] }}", latest_dir: "/app1/latest" }
    - { bucket_name: "{{ bucketCfg[Environment]['SecondaryBucketName'] }}", latest_dir: "/app1/latest" }
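Below is a minimal sketch of what ansible_role/tasks/create_cloudfront_dist.yml could look like. It assumes the CloudFormation template shown in the next section is saved in the repository as cloudfront_distribution.yml; the stack name, template path and tags used here are assumptions, so adapt them to your repository layout.

---
- name: Read environment specific variables.
  include_vars:
    file: "ansible_role/vars/{{ Environment }}/main.yml"

- name: Create CloudFront distribution via CloudFormation stack.
  amazon.aws.cloudformation:
    # Assumed stack naming convention; change it to whatever suits your setup.
    stack_name: "{{ ProductName }}-{{ Environment }}-{{ ProjectName }}-cdn-stack"
    state: present
    region: "{{ PrimaryRegion }}"
    # Assumed location of the CloudFormation template shown below.
    template: "cloudfront_distribution.yml"
    template_parameters:
      Environment: "{{ Environment }}"
      AppName: "{{ ProjectName }}"
    tags:
      Environment: "{{ Environment }}"
      Owner: "{{ bucketTags[Environment]['Owner'] }}"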
CloudFormation template used to create the CloudFront distribution and its supporting resources (referenced by the task above).
AWSTemplateFormatVersion: '2010-09-09'
Description: 'CF Template to setup infra for static password generator application'

Parameters:
  Environment:
    Description: Please specify the target environment.
    Type: String
    Default: "int"
    AllowedValues:
      - int
      - pre-prod
      - prod
  AppName:
    Description: Application name.
    Type: String
    Default: "pwdgen"
  AlternateDomainNames:
    Description: CNAMEs (alternate domain names)
    Type: String
    Default: "jackuna.github.io"
  IPV6Enabled:
    Description: Should CloudFront respond to IPv6 DNS requests with an IPv6 address for your distribution.
    Type: String
    Default: true
    AllowedValues:
      - true
      - false
  OriginProtocolPolicy:
    Description: CloudFront Origin Protocol Policy to apply to your origin.
    Type: String
    Default: "https-only"
    AllowedValues:
      - http-only
      - match-viewer
      - https-only
  Compress:
    Description: Whether you want CloudFront to automatically compress certain files.
    Type: String
    Default: "true"
    AllowedValues:
      - true
      - false
  DefaultTTL:
    Description: The default time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 86400 seconds (one day).
    Type: String
    Default: "540"
  MaxTTL:
    Description: The maximum time in seconds that objects stay in CloudFront caches before CloudFront forwards another request to your custom origin. By default, AWS CloudFormation specifies 31536000 seconds (one year).
    Type: String
    Default: "600"
  MinTTL:
    Description: The minimum amount of time that you want objects to stay in the cache before CloudFront queries your origin to see whether the object has been updated.
    Type: String
    Default: "1"
  SmoothStreaming:
    Description: Indicates whether to use the origin that is associated with this cache behavior to distribute media files in the Microsoft Smooth Streaming format.
    Type: String
    Default: "false"
    AllowedValues:
      - true
      - false
  QueryString:
    Description: Indicates whether you want CloudFront to forward query strings to the origin that is associated with this cache behavior.
    Type: String
    Default: "false"
    AllowedValues:
      - true
      - false
  ForwardCookies:
    Description: Forwards specified cookies to the origin of the cache behavior.
    Type: String
    Default: "none"
    AllowedValues:
      - all
      - whitelist
      - none
  ViewerProtocolPolicy:
    Description: The protocol that users can use to access the files in the origin that you specified in the TargetOriginId property when the default cache behavior is applied to a request.
    Type: String
    Default: "https-only"
    AllowedValues:
      - redirect-to-https
      - allow-all
      - https-only
  PriceClass:
    Description: The price class that corresponds with the maximum price that you want to pay for CloudFront service. If you specify PriceClass_All, CloudFront responds to requests for your objects from all CloudFront edge locations.
    Type: String
    Default: "PriceClass_100"
    AllowedValues:
      - PriceClass_All
      - PriceClass_100
      - PriceClass_200
  SslSupportMethod:
    Description: Specifies how CloudFront serves HTTPS requests.
    Type: String
    Default: "sni-only"
    AllowedValues:
      - sni-only
      - vip
  MinimumProtocolVersion:
    Description: The minimum version of the SSL protocol that you want CloudFront to use for HTTPS connections.
    Type: String
    Default: "TLSv1.2_2021"
    AllowedValues:
      - TLSv1.2_2021
      - TLSv1.2_2019
      - TLSv1.1_2018
  OriginKeepaliveTimeout:
    Description: You can create a custom keep-alive timeout. All timeout units are in seconds. The default keep-alive timeout is 5 seconds, but you can configure custom timeout lengths. The minimum timeout length is 1 second; the maximum is 60 seconds.
    Type: String
    Default: "60"
  OriginReadTimeout:
    Description: You can create a custom origin read timeout. All timeout units are in seconds. The default origin read timeout is 30 seconds, but you can configure custom timeout lengths. The minimum timeout length is 4 seconds; the maximum is 60 seconds.
    Type: String
    Default: "30"
  BucketVersioning:
    Description: The versioning state of an Amazon S3 bucket. If you enable versioning, you must suspend versioning to disable it.
    Type: String
    Default: "Suspended"
    AllowedValues:
      - Enabled
      - Suspended

Resources:
  # Bucket policies for the primary and secondary buckets.
  PrimaryBucketReadPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Sub 'ck-${Environment}-${AppName}-primary-bucket'
      PolicyDocument:
        Statement:
          - Action:
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-primary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt PrimaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId
  SecondaryBucketReadPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Sub 'ck-${Environment}-${AppName}-secondary-bucket'
      PolicyDocument:
        Statement:
          - Action:
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub 'arn:aws:s3:::ck-${Environment}-${AppName}-secondary-bucket/*'
            Principal:
              CanonicalUser: !GetAtt SecondaryBucketCloudFrontOriginAccessIdentity.S3CanonicalUserId

  # CloudFront Origin Access Identities
  PrimaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-primary'
  SecondaryBucketCloudFrontOriginAccessIdentity:
    Type: 'AWS::CloudFront::CloudFrontOriginAccessIdentity'
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: !Sub 'ck-${Environment}-${AppName}-secondary'

  # CloudFront cache policy
  CDNCachePolicy:
    Type: AWS::CloudFront::CachePolicy
    Properties:
      CachePolicyConfig:
        Comment: 'Max TTL 600 to validate frequent changes'
        DefaultTTL: !Ref DefaultTTL
        MaxTTL: !Ref MaxTTL
        MinTTL: !Ref MinTTL
        Name: !Sub 'ck-${Environment}-${AppName}-cache-policy'
        ParametersInCacheKeyAndForwardedToOrigin:
          CookiesConfig:
            CookieBehavior: none
          EnableAcceptEncodingBrotli: True
          EnableAcceptEncodingGzip: True
          HeadersConfig:
            HeaderBehavior: none
          QueryStringsConfig:
            QueryStringBehavior: none

  # CloudFront distribution
  CloudFrontDistribution:
    Type: 'AWS::CloudFront::Distribution'
    DependsOn:
      - CDNCachePolicy
    Properties:
      DistributionConfig:
        Comment: 'Cyberkeeda Password Generator application'
        Enabled: true
        HttpVersion: http2
        IPV6Enabled: true
        DefaultRootObject: version.json
        Origins:
          - DomainName: !Sub 'ck-${Environment}-${AppName}-primary-bucket.s3.amazonaws.com'
            Id: !Sub 'ck-${Environment}-${AppName}-primary-origin'
            OriginPath: "/v1/latest"
            ConnectionAttempts: 1
            ConnectionTimeout: 2
            S3OriginConfig:
              OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${PrimaryBucketCloudFrontOriginAccessIdentity}'
          - DomainName: !Sub 'ck-${Environment}-${AppName}-secondary-bucket.s3.amazonaws.com'
            Id: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
            OriginPath: "/v1/latest"
            ConnectionAttempts: 1
            ConnectionTimeout: 2
            S3OriginConfig:
              OriginAccessIdentity: !Sub 'origin-access-identity/cloudfront/${SecondaryBucketCloudFrontOriginAccessIdentity}'
        OriginGroups:
          Quantity: 1
          Items:
            - Id: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
              FailoverCriteria:
                StatusCodes:
                  Items:
                    - 500
                    - 502
                    - 503
                    - 504
                    - 403
                    - 404
                  Quantity: 6
              Members:
                Quantity: 2
                Items:
                  - OriginId: !Sub 'ck-${Environment}-${AppName}-primary-origin'
                  - OriginId: !Sub 'ck-${Environment}-${AppName}-secondary-origin'
        CacheBehaviors:
          - CachePolicyId: !GetAtt 'CDNCachePolicy.Id'
            PathPattern: '*'
            ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
            TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
        DefaultCacheBehavior:
          AllowedMethods:
            - GET
            - HEAD
          TargetOriginId: !Sub 'ck-${Environment}-${AppName}-cdn-origin-group'
          ViewerProtocolPolicy: !Ref 'ViewerProtocolPolicy'
          CachePolicyId: !GetAtt 'CDNCachePolicy.Id'

Outputs:
  CDNCloudfrontURL:
    Description: CloudFront CDN Url.
    Value: !GetAtt 'CloudFrontDistribution.DomainName'
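Once the stack is created, the distribution URL exposed under Outputs can be fetched with the AWS CLI, for example (replace the stack name with the one used by your playbook):

$ aws cloudformation describe-stacks --stack-name <stack-name> --query "Stacks[0].Outputs[?OutputKey=='CDNCloudfrontURL'].OutputValue" --output text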
Once the above files and their contents are committed to the source code repository, we can create the AWS services using the Jenkins pipeline job.
If we break down the blog post, it can also be used as a technical reference for topics such as:
- Jenkins scripted pipeline using parameters.
- How to mask passwords and sensitive environment variables in Jenkins.
- Leveraging Docker to keep the code uniform across environments and platforms.
  - Note that we could easily install the Ansible packages on the build machine and run the playbook directly, but this way we don't install any third-party application on the build machine.
  - And once the task is done, the container is discarded and the local image is removed.
- How to build a Docker image from a Dockerfile using Jenkins.
- A Dockerfile to build an Ansible image.
- A real world example of Ansible roles.
- Using Ansible to create S3 buckets with tags.
- How to block S3 bucket public access using Ansible.
- How to create S3 bucket directories and objects using Ansible.
- How to use Ansible to create a CloudFormation stack with parameters.
- A CloudFormation template to create the below resources:
  - S3 bucket policy.
  - CloudFront Origin Access Identity.
  - CloudFront cache policy.
  - CloudFront distribution with an origin group and S3 as an origin.
Hope this blog post helps you in some use case.
There are surely errors, areas of improvement, or better ways to handle such a deployment, so please share your valuable comments.