
Most used AWS S3 Bucket Policies.

 


Bucket policies are one of the key elements when we talk about security and compliance while using AWS S3 buckets to host our static content.

In this post, below are code snippets of the most used bucket policy documents.


Policy 1 : Enable Public Read access to Bucket objects.

Turning OFF the public access block from the S3 bucket's Permissions tab is not sufficient to enable public read access; additionally, you need to add the below bucket policy statement.


{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"EnablePublicRead",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::ck-public-demo-bucket/*"]
    }
  ]
}
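To apply this policy from the CLI instead of the console, one option (a sketch, assuming the JSON above is saved locally as public-read-policy.json) is:

aws s3api put-bucket-policy \
    --bucket ck-public-demo-bucket \
    --policy file://public-read-policy.json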


Policy 2 : Allow only HTTPS Connections.

AWS S3 accepts both HTTP and HTTPS requests by default. The bucket policy below grants public read access only when the request is made over HTTPS (aws:SecureTransport is true); note that the AWS-recommended pattern for strictly blocking insecure transport is an explicit Deny statement with aws:SecureTransport set to "false".

{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlySSLRequests",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::ck-public-demo-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "true"
        }
      },
      "Principal": "*"
    }
  ]
}
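A quick way to check the behaviour (a sketch, assuming the bucket ck-public-demo-bucket, an existing object demo-file.txt, and that the statement above is the only public grant): an anonymous HTTPS request should succeed, while the same request over plain HTTP is refused.

# HTTPS request matches the Allow condition (aws:SecureTransport = true) -> expect 200
curl -I https://ck-public-demo-bucket.s3.amazonaws.com/demo-file.txt

# Plain HTTP request does not match the condition -> expect 403 Access Denied
curl -I http://ck-public-demo-bucket.s3.amazonaws.com/demo-file.txt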


Policy 3 : Allow access only from a specific IP address or range.


{
  "Version": "2012-10-17",
  "Id": "AllowOnlyIpS3Policy",
  "Statement": [
    {
      "Sid": "AllowOnlyIp",
"Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": [ "arn:aws:s3:::ck-public-demo-bucket", "arn:aws:s3:::ck-public-demo-bucket/*" ], "Condition": { "NotIpAddress": {"aws:SourceIp": "12.345.67.89/32"} } } ] }

Policy 4 : Cross Account Bucket access Policy.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::REPLACE-WITH-YOUR-AWS-CROSS-ACCOUNT-NUMBER:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::ck-public-demo-bucket/*"
            ]
        }
    ]
}

Note :  When copying content from the cross account, add the additional ACL bucket-owner-full-control so the bucket owner retains full control of the objects.

aws s3 cp demo-file.txt s3://ck-public-demo-bucket/ --acl bucket-owner-full-control
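To confirm the bucket owner actually received full control of the uploaded object, its ACL can be inspected from the bucket-owner account (a sketch, assuming the demo-file.txt key from the command above):

aws s3api get-object-acl \
    --bucket ck-public-demo-bucket \
    --key demo-file.txt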

Will keep on adding more..

Fix : AWS SAM IAM Error : arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31

 



Stack trace :

User: arn:aws:sts::455734o955:assumed-role/xx-xx-xx-app-role/i-xxxxxxxxxx is not authorized to perform: cloudformation:CreateChangeSet on resource: arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31 because no identity-based policy allows the cloudformation:CreateChangeSet action.


Fix.

Add the below statement to your role's IAM policy document.

Note :  cloudformation:* is strongly discouraged; fine-tune your action permissions as needed.

        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": [
                "arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31"
            ]
        }
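After attaching the statement to the role, you can double-check which identity the instance is actually using and then re-run the deployment (a sketch):

# Confirm the assumed role the SAM CLI will use
aws sts get-caller-identity

# Re-run the failing deployment
sam deploy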




Continuous Integration of Lambda Function using AWS SAM

 


AWS Lambda is awesome, and trust me, if you are working on AWS, sooner or later you will have to deal with it.

In this blog post, we will cover the below use cases.

  • What is AWS SAM
  • How to create Lambda Function using AWS SAM
  • How to delete Lambda Function created using AWS SAM
  • How to integrate AWS SAM with Docker.
  • How to create a continuous integration pipeline with Jenkins, GitHub, Docker and SAM.


What is AWS SAM.

You can find detailed information in the official AWS documentation.

I would like to share my short understanding with you, which should give you some idea of AWS SAM and its respective components.

  • AWS SAM (Serverless Application Model) is focused on creating applications using AWS serverless services such as API Gateway, AWS Lambda, SNS, SES, etc.
  • SAM templates are quite similar to CloudFormation templates, so anyone who knows CloudFormation can easily adopt SAM templates too.
  • SAM shortens the template code compared to plain CloudFormation for serverless deployments.
  • Behind the scenes, SAM creates CloudFormation stacks to deploy the AWS services; it expands your template and adds the boilerplate you would otherwise have to write yourself.
  • In order to use SAM, you need to download an additional package, the SAM CLI, which is not bundled with the AWS CLI (a quick check is shown below).
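
If you are unsure whether the SAM CLI is already installed, a quick check (a sketch) is to print its version:

sam --version
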
How to create a Lambda Function using SAM?

Before you jump straight into it, first get familiar with the key files and directories.

  • samconfig.toml : Configuration file used by the SAM commands (init, test, build, validate, etc.)
  • template.yml : SAM template, similar to a CloudFormation template, used to define Parameters, Resources, Metadata, Outputs, Mappings, etc.
  • events : Directory to store events for testing our Lambda code, via an event.json file.
  • tests : Directory that contains the unit test files.
Lab setup details -
    - We will be deploying a Lambda Function with Python 3.7 runtime.
    - Name of our SAM application is lambda-deployer_with_sam
    - Our Lambda Function's basic task is to check the status of a port, i.e. Open or Closed
    - Our files and templates follow the CI/CD approach, so we have kept configuration for
        multiple environments, that is ( default, dev, uat )
    



Steps.
  • Install AWS SAM CLI first.
  • Most tutorials you might have gone through will ask you to run sam init, sam build and so on; this blog post is a slightly pre-baked one, as it uses existing templates instead.
  • Create a new directory to use in this project
$ mkdir lambda-deployer_with_sam
  • Create new files following the below directory structure layout
lambda-deployer_with_sam/
├── events
│   └── event.json
├── samconfig.toml
├── src
└── template.yaml

2 directories, 3 files
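One way to create that layout in a single shot (a sketch; the file contents are added in the next steps):

cd lambda-deployer_with_sam
mkdir -p events src
touch events/event.json samconfig.toml template.yaml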

Here is the basic content to post in respective files.

Contents of samconfig.toml
version = 0.1
[default]
[default.deploy]
[default.deploy.parameters]
stack_name = "default-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []

[dev]
[dev.deploy]
[dev.deploy.parameters]
stack_name = "dev-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "dev-sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []
parameter_overrides = "Environment=\"dev\""

[uat]
[uat.deploy]
[uat.deploy.parameters]
stack_name = "uat-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "uat-sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []
parameter_overrides = "Environment=\"uat\""


Let's understand the content of the SAM configuration file, samconfig.toml.
  • This file will be used later during the deployment of the Lambda function and its respective resources.
  • This file can be used to categorize environment-specific parameters.
  • The table headers [default], [dev] and [uat] define the names of the environments.
  • The lines following a header such as [uat.deploy.parameters] provide the environment-specific parameters.
  • parameter_overrides is used to override the default parameters declared in the template.yml file, which is the equivalent of a CloudFormation template.
Contents of template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.

Parameters:
  Environment:
    Description:    Please specify the target environment.
    Type:           String
    Default:        "dev"
    AllowedValues:
      - dev
      - uat
  AppName:
    Description:  Application name.
    Type:         String
    Default:      "find-port-status"

Mappings:
  EnvironmentMap:
    dev:
     IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'
    uat:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'
    stg:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'

Resources:
  LambdabySam:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: !Sub 'ck-${Environment}-${AppName}'
      Handler: lambda_function.lambda_handler
      Runtime: python3.7
      CodeUri: src/
      Description: 'Lambda Created by SAM template'
      MemorySize: 128
      Timeout: 3
      Role: !FindInMap [EnvironmentMap, !Ref Environment, IAMRole]
      VpcConfig:
        SecurityGroupIds:
          - sg-a0f856da
        SubnetIds:
          - subnet-e9c898a5
          - subnet-bdbb59d6
      Environment:
        Variables:
          Name: !Sub 'ck-${Environment}-${AppName}'
          Owner: CyberkeedaAdmin
      Tags:
        Name: !Sub 'ck-${Environment}-${AppName}'
        Owner: CyberkeedaAdmin
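
Optionally, you can sanity-check the template before building (a sketch; sam validate checks the template against the SAM specification and needs AWS credentials configured):

sam validate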


Now, our last step is to put our Lambda code into the src directory.

$ touch src/lambda_function.py 

Contents of src/lambda_function.py
import json
import socket
import time


def isOpen(ip, port):
   # Open a TCP socket with a 1 second timeout to check whether the port answers
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   s.settimeout(1)
   try:
      s.connect((ip, int(port)))
      return True
   except socket.error:
      time.sleep(1)
      return False
   finally:
      s.close()
      
def lambda_handler(event, context):
    
    if isOpen('142.250.195.196',443):
        code = 200
    else:
        code = 500


    return {
        'statusCode': code,
        'body': json.dumps("Port status")
    }
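If Docker is available, the handler can also be tested locally before deploying (a sketch; the logical ID LambdabySam comes from template.yaml, and events/event.json should contain a valid JSON event, even just {}):

sam local invoke LambdabySam --event events/event.json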

Now that we have everything in place, let's deploy our Lambda code using SAM.

Initiate SAM build with respective environment, defined in samconfig.toml
$ sam build --config-env dev

The output will look something like below.
Building codeuri: /home/kunal/aws_sam_work/lambda-deployer_with_sam/src runtime: python3.7 metadata: {} architecture: x86_64 functions: ['LambdabySam']
requirements.txt file not found. Continuing the build without dependencies.
Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided

Now we have a build ready to be deployed; let's initiate sam deploy.
$ sam deploy --config-env dev
The output will look something like below.
Uploading to dev-sam-lambda-stack/dccfd91235d686ff0c5dcab3c4d44652  400 / 400  (100.00%)

        Deploying with following values
        ===============================
        Stack name                   : dev-lambda-deployer-with-sam-Stack
        Region                       : ap-south-1
        Confirm changeset            : False
        Disable rollback             : True
        Deployment s3 bucket         : 9-bucket
        Capabilities                 : ["CAPABILITY_IAM"]
        Parameter overrides          : {"Environment": "dev"}
        Signing Profiles             : {}

Initiating deployment
=====================
Uploading to dev-sam-lambda-stack/b6c26b6d535bf3b43f5b0bb71a88daa1.template  1627 / 1627  (100.00%)

Waiting for changeset to be created..

CloudFormation stack changeset
---------------------------------------------------------------------------------------------------------------------
Operation                     LogicalResourceId             ResourceType                  Replacement                 
---------------------------------------------------------------------------------------------------------------------
+ Add                         LambdabySam                   AWS::Lambda::Function         N/A                         
---------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:ap-south-1:897248824142:changeSet/samcli-deploy1646210098/97de1b9e-ed08-45fe-8e65-fb0c0928e8f7


2022-03-02 14:05:09 - Waiting for stack create/update to complete

CloudFormation events from stack operations
---------------------------------------------------------------------------------------------------------------------
ResourceStatus                ResourceType                  LogicalResourceId             ResourceStatusReason        
---------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS            AWS::Lambda::Function         LambdabySam                   -                           
CREATE_IN_PROGRESS            AWS::Lambda::Function         LambdabySam                   Resource creation Initiated 
CREATE_COMPLETE               AWS::Lambda::Function         LambdabySam                   -                           
CREATE_COMPLETE               AWS::CloudFormation::Stack    dev-lambda-deployer-with-     -                           
                                                            sam-Stack                                                 
---------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack - dev-lambda-deployer-with-sam-Stack in ap-south-1

This step will create the required services and their respective configuration; you can confirm the same from the Lambda console, as shown below.




Please note, every time we make any changes to the lambda_function.py file, we need to re-build and re-deploy, as shown below.
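One small convenience is to chain the two commands, so a code change is rebuilt and redeployed in one go (a sketch, using the dev environment from samconfig.toml):

sam build --config-env dev && sam deploy --config-env dev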

That's it for this post; we will cover the below topics in upcoming posts.
  • How to delete Lambda Function created using AWS SAM
  • How to integrate AWS SAM with Docker.
  • How to create a continuous integration pipeline with Jenkins, GitHub, Docker and SAM.



AWS IAM Policy to Allow All Operations except IAM

 





The below policy template can be used to grant a user, or a role via an attached policy, the following set of permissions.

  • Allow all services.
  • Allow all resources.
  • Allow all actions linked to every resource.
  • Deny all IAM operations and actions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Deny",
            "Action": "iam:*",
            "Resource": "*"
        }
    ]
}
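To turn the JSON above into a managed policy and attach it via the CLI (a sketch; the file name, policy name and user name are placeholders):

# Create a managed policy from the document above
aws iam create-policy \
    --policy-name AllowAllExceptIAM \
    --policy-document file://allow-all-except-iam.json

# Attach it to a user (use attach-role-policy for a role)
aws iam attach-user-policy \
    --user-name some-user \
    --policy-arn arn:aws:iam::<ACCOUNT-ID>:policy/AllowAllExceptIAM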


I have spent some time exploring this little template; hope it finds you via Google and saves you some time.



AWS S3 Bucket Policy to grant access to other AWS account

 



AWS Bucket Policy to be used for the below requirements.

  • Grant access of S3 Bucket to other AWS account.
  • Restrict access to listing and downloading objects from it, nothing more, nothing extra.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Bucket Read access from below AWS accounts",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:root",
          "arn:aws:iam::121314151617:root",
          "arn:aws:iam::181912021222:root"
        ]
      },
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::cyberkeeda-limited-access-bucket",
        "arn:aws:s3:::cyberkeeda-limited-access-bucket/*"
      ]
    }
  ]
}


Hope this snippet helps you!

How to enable password based ssh authentication in ec2 instance

 



EC2 Linux SSH Authentication.

By default, the preferred way of accessing any EC2 Linux instance is key-based authentication.
Here in this blog post, we will cover 
  • How to enable basic authentication, that is, password-based authentication, on an EC2 instance.
  • How to enable root login to an EC2 instance.
I will keep updating the post with my learnings as they are used in practical scenarios.

Let's go through it :)

How to enable root login on a Linux EC2 instance.
  • Login to the EC2 Linux instance using its private key.
  • Sudo to root
  • Change the password for root
  • Permit root login in the sshd_config file
  • Restart the sshd service

Syntax

[ec2-user@ip-10-0-1-116 ~]$ sudo su

Change the root password using the below command.


[root@ip-10-0-1-116 ec2-user]# passwd root

Permit root login by un-commenting the below line in sshd_config


[root@ip-10-0-1-116 ec2-user]# vi /etc/ssh/sshd_config

From


# PermitRootLogin yes

To

PermitRootLogin yes
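
As listed in the steps above, restart the sshd service so the root-login change takes effect:

[root@ip-10-0-1-116 ec2-user]# service sshd restart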


How to enable password based authentication for an SSH user.
  • Login to the EC2 Linux instance using its private key.
  • Sudo to root
  • Set a password for the login user (for example, passwd ec2-user)
  • Enable password authentication in the sshd_config file
  • Restart the sshd service

Syntax

[ec2-user@ip-10-0-1-116 ~]$ sudo su

Enable password authentication by un-commenting the below line in sshd_config


[root@ip-10-0-1-116 ec2-user]# vi /etc/ssh/sshd_config

From


# PasswordAuthentication yes

To

PasswordAuthentication yes

Restart SSHD service

service sshd restart


Login and check !

Upload File from Local to S3 Bucket using CURL

 

Upload data to S3 bucket using CURL.

This guide, aka shell script, will help you upload files into S3 without installing the AWS SDK, Python Boto, or the AWS CLI.


The script's README section defines most of the usage; at a high level, the script can:

  • Copy files from a specific directory that fall between a specific date range.
  • Copy specific files using a search filter.
  • Log every file copy into a log file with a timestamp.
  • Re-initiate an interrupted upload without re-copying files that already succeeded.


#!/bin/bash
#
# Parameters
# $1 => Directory/Folder to search file.
# $2 => AWS Bucket subdirectories 
#       Example -- myAWSs3bucket/folderA/FolderB/FolderC
#              1.) In case one want to put files in folderA, use folderA as $2
#                  2.) In case one want to put files in folderB, use folderA/folderB as $2
#                  3.) In case one want to put files in folderC, use folderA/folderB/folderC as $2
# $3 => Existence of file from Start date in format YYYYMMDD 
#       Example --
#                  1.) 20210104 -> 4th January 2021
#                  2.) 20201212 -> 12th December 2020
# $4 => Existence of file up to end date in format YYYYMMDD
#       Example --
#                  1.) 20200322 -> 22nd March 2020
#                  2.) 20201212 -> 12th December 2020
# $5 => File Filter 
#       Example -- We need only specific files from a folder.
#                  1.) 20200122_data_settlement.txt --> Use $5 as *_data_settlement.txt
#                  2.) salesdata-20201215100610.txt --> Use $5 as salesdata-*
#      
# Task - Find similar 20200122_data_settlement.txt on location /usr/data/
#        File existence date range 20200322 (22nd March 2020) to 20210104 (4th January 2021)
#        Copy it to AWS S3 bucket's subfolder named as folderA 
#
#     
# Syntax -  ./copy_data_to_S3_via_Curl.sh <LocalFolderLocation> <S3BUCKET-DIRECTORY> <STARTDATE> <ENDDATE> <FILEFILTER>
#
# Usage
#
#        1.) With File Filter
#         ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104  '*data_settlement.txt'
#
#        2.) Without File Filter
#         ./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104  
#        
#    3.) Reinitiate left upload
#
#         ./copy_data_to_S3_via_Curl.sh 1 folderA
#
#
#  Flow 
#  1.) Script use find command to find all the files with parameters and write it to a file "/tmp/file_inventory.txt"
#  2.) For Loop is being used further to read file inputs and do S3 operations using HTTPS API
#  3.) Script keeps removing the entries from inventory file after a successful upload.
#  4.) Script writes the successful and failed upload status within log file "/tmp/file_copy_status.log"
#  5.) In case we want to interrupt and upload the remaining files later, comment line no 62
#        62 find $1 -newermt $3 \! -newermt $4   -iname "$5" >> $inventory
#      To avoid confusion, run the script with the same parameters.
#
#
# Author: Jackuna
#

# Bucket Data
bucket="mys3bucket-data"
s3_access_key="AKgtusjksskXXXXTQTW"
s3_secret_key="KSKKSIS HSNKSLS+ydRQ3Ya37A5NUd1V7QvEwDUZR"

# Files
inventory="/tmp/file_inventory.txt"
logme="/tmp/file_copy_status.log"


if  [ $# == 2 ]; then
  echo "`date` -  Initiating left file upload from old inventory " >> $logme

elif [ $# -eq 5 ]; then
  truncate -s 0 $inventory
  find $1 -newermt $3 \! -newermt $4   -iname "$5" >> $inventory
  echo "`date` - Initiating all file that contains string $5 and found between $3 - $4  upload from new inventory " >> $logme

elif [ $# -eq 4 ]; then
  truncate -s 0 $inventory
  find $1 -newermt $3 \! -newermt $4  >> $inventory
  sed -i 1d $inventory
  echo "`date` - Initiating all file found between $3 - $4  upload from new inventory " >> $logme

else
  echo " Some or all arguments Missing from CLI"
  echo " Usage :  ./copy_data_to_S3_via_Curl.sh <LocalFolderLocation> <S3BUCKET-DIRECTORY> <STARTDATE> <ENDDATE> <FILEFILTER>"
  echo " Open Script README section"
  exit 1
fi

file_list=`cat $inventory`
total_file_count=`cat $inventory|wc -l`


for local_file_val in $file_list; do
        aws_folder=$2
        aws_file_name=`echo $local_file_val| rev| cut -d '/' -f1 | rev`
        aws_filepath="/${bucket}/$aws_folder/$aws_file_name"

        # metadata
        contentType="application/x-compressed-tar"
        dateValue=`date -R`
        signature_string="PUT\n\n${contentType}\n${dateValue}\n${aws_filepath}"
        signature_hash=`echo -en ${signature_string} | openssl sha1 -hmac ${s3_secret_key} -binary | base64`


        curl -X PUT -T "$local_file_val" \
    -H "Host: ${bucket}.s3.amazonaws.com" \
    -H "Date: ${dateValue}" \
    -H "Content-Type: ${contentType}" \
    -H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
        https://${bucket}.s3.amazonaws.com/$aws_folder/$aws_file_name

    if [ $? -gt 0 ]; then
            echo "`date` Upload Failed  $local_file_val to $bucket" >> $logme
    else
            echo "`date` Upload Success $local_file_val to $bucket" >> $logme
            count=$((count + 1))
            printf "\rCopy Status -  $count/$total_file_count - Completed "

            sleep 1
            sed -i "/\/$aws_file_name/d" $inventory
    fi

done;
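A quick end-to-end run, reusing the parameters documented in the README above (a sketch; the log path comes from the script's logme variable):

# Make the script executable and run it with a file filter
chmod +x copy_data_to_S3_via_Curl.sh
./copy_data_to_S3_via_Curl.sh /usr/data folderA 20200322 20210104 '*_data_settlement.txt'

# Watch the upload status log written by the script
tail -f /tmp/file_copy_status.log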

Feel free to comment.


AWS CloudFormation Interface Metadata



AWS CloudFormation Metadata
The Metadata section defines details about the CloudFormation template.

Syntax Template

Metadata:
  Instances:
    Description"Instances details within dev environment"
  Applications:
    Description"Application details for dev environment"

There are three types of AWS CloudFormation specific Metadata keys.
  •   AWS::CloudFormation::Designer
It's auto-generated while dragging and dropping resources onto the canvas within the CloudFormation Designer section.
  •   AWS::CloudFormation::Interface
It's used for grouping and labeling parameters as per requirement.
  •   AWS::CloudFormation::Init
One of the most important sections from an automation perspective; it's used for application installation and configuration on our AWS EC2 instances.

So within this blog post, we will cover 
  • What is Interface Metadata.
  • Why and How it's used
  • How to customize the order of defined parameters.
  • How to group parameters and give them an identifying label.
  • How to define labels for the parameters.

AWS::CloudFormation::Interface

During stack creation, you might have noticed that our defined parameters appear in alphabetical order by their logical IDs; this has nothing to do with the order in which we define them in our CFN template.

Interface Metadata Syntax
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - ParameterGroup
    ParameterLabels:
      - ParameterLabel

For this post, in order to understand more about it we will use the below CFN template as an example.

CFN Template

AWSTemplateFormatVersion: 2010-09-09
Description: |
  CFN Script to demonstrate Cloudformation Interface Metadata Usage
  Interface Metadata : AWS::CloudFormation::Interface
  Interface Metadata can be used to
           Group Parameters and it's order.
           Labels for Parameter for user friendly description input.

Parameters:
  EnvironmentName:
    Description: Select the environment.
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prd
  EC2KeyName:
    Description: Select your Key name from List.
    Type: AWS::EC2::KeyPair::KeyName
  EC2SecurityGroup:
    Description: Select your security group from List.
    Type: AWS::EC2::SecurityGroup::Id
  EC2Subnet:
    Description: Select your Subnet from List.
    Type: AWS::EC2::Subnet::Id
  EC2InstanceType:
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - t2.small
  ApplicationType:
    Type: String
    Default: web
    AllowedValues:
      - web
      - app
  MonitoringStatus:
    Type: String
    Default: enabled
    AllowedValues:
      - enabled
      - disabled
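Before importing the template into the console, you can optionally confirm it is syntactically valid from the CLI (a sketch, assuming the template is saved locally as interface-demo.yaml):

aws cloudformation validate-template \
    --template-body file://interface-demo.yaml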

When we import the above template to create or update a stack, the console will appear something like below.



From the above we can see that, irrespective of the order in which parameters are arranged within the CFN script, the console re-orders them alphabetically by the parameters' logical IDs.

What if we want to group parameters and label them with a separate identifier? Interface metadata can help us accomplish that goal.

What are we going to do with the above template ?
    Well, we have two sections: one relates to the EC2 configuration data and the other is more specific to the application configuration, so we will define our parameter groups and labels accordingly.

CFN Interface Metadata with Custom Parameter Group

AWSTemplateFormatVersion: 2010-09-09
Description: |
  CFN Script to demonstrate Cloudformation Interface Metadata Usage
  Interface Metadata : AWS::CloudFormation::Interface
  Interface Metadata can be used to
           Group Parameters and it's order.
           Labels for Parameter for user friendly description input.
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default"EC2 Configuration"
        Parameters:
          - EC2InstanceType
          - EC2KeyName
          - EC2SecurityGroup
          - EC2Subnet
      - Label:
          default"Application Configuration"
        Parameters:
          - EnvironmentName
          - ApplicationType
          - MonitoringStatus

Once the above file is imported, here is how the console screen looks.
We have two parameter groups, and our parameters are ordered the way we defined them under these groups.
  • EC2 Configuration
  • Application Configuration.

Now, we will add some custom labels to our parameters to make them more user-friendly and informative, to avoid confusion.

Within our base template, we will annotate the below Parameter Logical Ids with our own labels.
  • EnvironmentName
    • Be careful while choosing your deployment environment
  • EC2KeyName
    • Ensure you have the keys before selecting
  • MonitoringStatus
    • It enables cloudwatch logs for application data

CFN Template to demonstrate Custom Labels for Parameters

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default"EC2 Configuration"
        Parameters:
          - EC2InstanceType
          - EC2KeyName
          - EC2SecurityGroup
          - EC2Subnet
      - Label:
          default"Application Configuration"
        Parameters:
          - EnvironmentName
          - ApplicationType
          - MonitoringStatus
    ParameterLabels:
      EnvironmentName:
        default: "Be careful while choosing your deployment environment"
      EC2KeyName:
        default"Ensure you have the keys before selecting"
      MonitoringStatus:
        default"It enables cloudwatch logs for application data"

Once imported, this is the output in cloudformation console.



From the above, we can note a few things about using a parameter label configuration.

  • ParameterLabels substitutes the logical ID shown for your parameter with the text we declared as default within our ParameterLabels section.
  • Only the logical ID gets substituted; the description mentioned under each individual parameter section persists.

As an AWS CFN best practice, we should use Interface metadata for large stack creation; it makes things quite a bit easier for the end user.

Feel free to comment.  
