
IPV4 - Classes Range - Pictorial Representation


BOGON IPs - Please note that bogon IPs are reserved for private networks, and therefore ISPs must not announce these ranges as public IP space.

Class A Bogon IP range.
  • CIDR -
  • Host bits - 32-8 = 24
    • 2^24 = 16777216 addresses
Class B Bogon IP range.
  • CIDR -
  • Host bits - 32-12 = 20
    • 2^20 = 1048576 addresses
Class C Bogon IP range.
  • CIDR -
  • Host bits - 32-16 = 16
    • 2^16 = 65536 addresses

Read more ...

Find IP Address Information Cheat Sheet by Example.


Read more ...

How to scan IP addresses details on your network using NMAP


Using Linux is fun: think of a requirement, and a wide number of open-source tools will give wings to your idea. No hurdles, just go for your goal, and they will all support you.

I would like to share what made me search the internet and write this blog post.

Within my lab environment, it is a frequent task to configure or update the IP configuration of other virtual machines. To handle this, I have already written an Ansible role that configures a static IP address on a host that currently has a DHCP-assigned address.

Still, there is some information I need to give Ansible before running the playbook: I have to manually look for free IPs in my current network.

So I was curious how to scan my network for used and free IP addresses; after some searching I found that my friendly network troubleshooting tool, NMAP, gives exactly this insight.

Let's see what command can be used to find those details.

Use the one-liner below to search for used IPs within your network; replace the example range with your own network/CIDR.

$ nmap -sP


Starting Nmap 6.40 ( http://nmap.org ) at 2022-06-16 17:10 IST
Nmap scan report for
Host is up (0.0078s latency).
Nmap scan report for
Host is up (0.0050s latency).
Nmap scan report for
Host is up (0.0043s latency).
Nmap scan report for
Host is up (0.0015s latency).
Nmap done: 256 IP addresses (4 hosts up) scanned in 2.59 seconds

Now let's scan the same network again, and this time list the listening ports along with each host:

$ sudo nmap -sT


Starting Nmap 6.40 ( http://nmap.org ) at 2022-06-16 17:17 IST
Nmap scan report for
Host is up (0.0061s latency).
Not shown: 992 filtered ports
80/tcp   open   http
443/tcp  open   https
1900/tcp open   upnp
2869/tcp closed icslap
7443/tcp open   oracleas-https
8080/tcp open   http-proxy
8200/tcp closed trivnet1
8443/tcp open   https-alt
MAC Address: AA:HA:IC:PF:P3:C1 (Unknown)

Nmap scan report for
Host is up (0.0083s latency).
Not shown: 998 closed ports
80/tcp  open  http
554/tcp open  rtsp
MAC Address: 14:07:o8:g5:7E:99 (Private)

Nmap scan report for
Host is up (0.0051s latency).
Not shown: 998 closed ports
22/tcp open  ssh
80/tcp open  http
MAC Address: 08:76:20:00:75:D5 (Cadmus Computer Systems)

Nmap scan report for
Host is up (0.0057s latency).
Not shown: 999 filtered ports
135/tcp open  msrpc
MAC Address: F0:76:30:60:8E:21 (Unknown)

Nmap scan report for
Host is up (0.0018s latency).
Not shown: 997 closed ports
22/tcp   open  ssh
8000/tcp open  http-alt
8080/tcp open  http-proxy

Nmap done: 256 IP addresses (5 hosts up) scanned in 7.84 seconds

If you need additional details, such as the host OS, run the scan again with the -O flag:

$ sudo nmap -sT -O


Nmap scan report for
Host is up (0.00026s latency).
Not shown: 997 closed ports
22/tcp   open  ssh
8000/tcp open  http-alt
8080/tcp open  http-proxy
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.7 - 3.9
Network Distance: 0 hops

Hope this post helps you in some way!
Read more ...

How to remove last character from the last line of a file using SED


This could be a very relatable hack, as we all deal with JSON objects nowadays, and while automating with bash (shell) scripts we may need to parse our JSON data.

Okay, so here is the data I have:

$ cat account_address.txt

"59598532c58EBeB13A70a37159F0C3AB2e0aB623": { "balance": "10000" },
"A281753296De2A35c2Ae6D613b317b71F76F6aE2": { "balance": "10000" },
"2eAc363b2ffAfbc9b5dE9E2004057a778313d4Ac": { "balance": "10000" },
"3FD7893E53D35A93A240Be3B4112A24746F8d858": { "balance": "10000" },
"dfd46B5F7B194133C48562d84A970358E13d64f7": { "balance": "10000" },
"8F3D701F3963d41935C4D2FeeFb3E072FBc613Ee": { "balance": "10000" },
And here is the data I need:
$ cat account_address.txt

"59598532c58EBeB13A70a37159F0C3AB2e0aB623": { "balance": "10000" },
"A281753296De2A35c2Ae6D613b317b71F76F6aE2": { "balance": "10000" },
"2eAc363b2ffAfbc9b5dE9E2004057a778313d4Ac": { "balance": "10000" },
"3FD7893E53D35A93A240Be3B4112A24746F8d858": { "balance": "10000" },
"dfd46B5F7B194133C48562d84A970358E13d64f7": { "balance": "10000" },
"8F3D701F3963d41935C4D2FeeFb3E072FBc613Ee": { "balance": "10000" }

Using a SED one-liner, we can do this:

$ sed '$ s/.$//' account_address.txt
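Here, `$` is a sed address meaning "last line only", and `s/.$//` substitutes the final character of that line with nothing. To modify the file in place rather than printing to stdout, GNU sed's -i flag can be used; a demo on sample data:

```shell
# Create sample data, then strip the trailing comma from the last line
# in place (GNU sed; BSD/macOS sed needs -i '' instead).
printf '"a": { "balance": "10000" },\n"b": { "balance": "10000" },\n' > account_address.txt
sed -i '$ s/.$//' account_address.txt
cat account_address.txt
```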

That's it !

Read more ...

How to restrict AWS S3 Content to be accessed by CloudFront distribution only.


CloudFront is one of the popular AWS services; it provides a caching mechanism for static content such as HTML, CSS, images and media files, serving it very fast via its globally available CDN network of POP sites.

In this blog post, we will know 

  • How to create a basic CloudFront distribution using S3 as Origin.
  • How can we create a CloudFront distribution using S3 as Origin without making the Content of Origin(s3 Objects) public.
  • What, why and how of CloudFront OAI (Origin Access Identity).

Here in this scenario, we will use an S3 bucket as the origin for our CloudFront distribution.

We will first understand the problem, and then see how an Origin Access Identity can be used to address it.

So we have quickly created an S3 bucket and a CloudFront distribution using default settings, with the details below.

  • S3 bucket name - s3-web-bucket
  • Bucket Permissions - Block all Public Access
  • CloudFront distribution default object - index.html
  • CloudFront Origin - s3-web-bucket

Now, quickly upload an index.html file to the root of the S3 bucket, as s3-web-bucket/index.html.

We are done with the configuration; let's access the CloudFront distribution and verify that everything works.

$ curl -I https://<distribution-domain>/

HTTP/2 403
content-type: application/xml
date: Thu, 14 Jul 2022 07:28:37 GMT
server: AmazonS3
x-cache: Error from cloudfront
via: 1.1 (CloudFront)
x-amz-cf-pop: MRS52-P4
x-amz-cf-id: BbAsVxxWfW9v3m1PD2uBHqRIj_7-J5U3fUzhhFiQQhbJj8a7lQlCvw==
We encountered a 403 error, why?
Ans : This is expected, as we have kept the bucket permission level at Block All Public Access.

Okay, then let's modify the bucket permissions and allow public access; for this, follow the two steps below.

  • Enable Public Access from Console by unchecking the Check box "Block all public access" and Save it.

  • Append the bucket policy JSON statement below to make all objects inside the bucket public; replace s3-web-bucket with your own bucket name.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::s3-web-bucket/*"
            }
        ]
    }

  • Save it, and the bucket's permission section will show a red warning signifying that the bucket is publicly accessible.

Done. Now let's try again to access the website (index.html) via our CloudFront distribution.

$ curl -I https://<distribution-domain>/

HTTP/2 200
content-type: text/html
content-length: 557
date: Thu, 14 Jul 2022 07:47:58 GMT
last-modified: Wed, 13 Jul 2022 18:50:58 GMT
etag: "c255abee97060a02ae7b79db49ed7ec1"
accept-ranges: bytes
server: AmazonS3
x-cache: Miss from cloudfront
via: 1.1 (CloudFront)
x-amz-cf-pop: MRS52-C2
x-amz-cf-id: Bhf_5IjA0sifp7jON4dpzZdjpCZCQTF5L7c5oenUbjc1vZzvL6ZUWA==

Good, we are able to access our webpage, and our static content will now be served from the CDN network. But wait, let's also try to access the object (index.html) via the bucket's S3 URL.

$ curl -I https://<bucket-s3-url>/index.html

HTTP/1.1 200 OK
x-amz-id-2: OgLcIIYScHdVok2puZb09ccCjU5K9xNxOL6D1sVj/nBf6hm93vCjQQSpm3fxo4tXpdjUa3u2TS0=
x-amz-request-id: 588WXNR2BH9F37R9
Date: Thu, 14 Jul 2022 07:50:42 GMT
Last-Modified: Wed, 13 Jul 2022 18:50:58 GMT
ETag: "c255abee97060a02ae7b79db49ed7ec1"
Accept-Ranges: bytes
Content-Type: text/html
Server: AmazonS3
Content-Length: 557

Here is the loophole: S3 bucket URLs and their object paths follow predictable naming standards, so they are easy to guess once the bucket name is known.

Users, developers and attackers can therefore bypass the CloudFront URL and fetch objects directly from the S3 URLs. You may wonder what the issue is, since the objects are public-read anyway.

To answer that, here are some points on why serving content via the CloudFront URL is preferable:

  • CloudFront URLs give you better performance.
  • CloudFront URL can provide Authentication mechanism.
  • CloudFront URL gives additional possibilities to trigger CloudFront Function, which can be used for custom solutions.
  • Sometimes the content of a website/API is designed to be served via CloudFront only; accessing it directly from S3 yields only a portion of that content.
These are a few of the reasons; there are many more arguments for disabling public access to your S3 buckets.
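The fix the title points at is CloudFront's Origin Access Identity (OAI): keep the bucket's public access blocked and allow reads only from the distribution's identity. A rough sketch of such a bucket policy (the OAI ID here is a placeholder, and s3-web-bucket is our example bucket):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI-ID>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::s3-web-bucket/*"
        }
    ]
}
```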

Read more ...

CloudFront : How to host multiple buckets from single CloudFront domain

If you follow this blog, most posts come from cloud tasks assigned to me as requirements; you can consider this one an industry-standard requirement as well.

In this blog post, we will see how to achieve the scenario above, that is, one CloudFront domain serving multiple S3 buckets as origins.

Let's follow the steps below.

Create 3 different S3 buckets as per above architecture diagram.

As per the architecture diagram, create the respective directories to match the URI paths, that is:

  • https://<distribution-domain>/web1 --> s3-web1-bucket --> create a web1 directory inside s3-web1-bucket/
  • https://<distribution-domain>/web2 --> s3-web2-bucket --> create a web2 directory inside s3-web2-bucket/

Dump in 3 individual index.html files, each acting as an identifier of the bucket it is served from.

  • index.html path for s3-web-bucket -- s3-web-bucket/index.html
  • index.html path for s3-web1-bucket -- s3-web1-bucket/web1/index.html
  • index.html path for s3-web2-bucket -- s3-web2-bucket/web2/index.html

This is how my three different index.html files look.

We are set on the bucket side; let's jump to CloudFront and create a basic distribution with one of the S3 buckets as origin. Here we have chosen s3-web-bucket as the origin, leaving the other settings at their defaults.

Note : Set the default root object to index.html, else we would have to append index.html manually after every trailing /.

Now comes the fun part: our CloudFront URL is in an active state, and per our architecture this is what we expect overall.

Create Origins for S3 buckets.

Let's add the two remaining S3 buckets as additional origins.

Origin Configuration for S3 bucket "s3-web1-bucket"

Origin Configuration for S3 bucket "s3-web2-bucket"

Create Behaviors for the above origins.

So far we have added all the S3 buckets as origins; now let's create the behaviors, that is, path (URI) based routing.

Behavior 1 - /web1 routes to s3-web1-bucket

Behavior 2 - /web2 routes to s3-web2-bucket

Overall, the Behaviors tab should look as below.
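In infrastructure-as-code terms, the two behaviors reduce to path patterns mapped to origin IDs, roughly like the CloudFormation fragment below (a sketch, not a complete distribution config; the origin IDs are assumed to match the bucket origins added earlier):

```yaml
CacheBehaviors:
  - PathPattern: /web1/*
    TargetOriginId: s3-web1-bucket
    ViewerProtocolPolicy: redirect-to-https
  - PathPattern: /web2/*
    TargetOriginId: s3-web2-bucket
    ViewerProtocolPolicy: redirect-to-https
```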

That's it !
Let's Open Browser and test the urls one by one.

Hope this helps you in some way!
Read more ...

Kubernetes Inter-pod communication within a cluster.


In this post, we will look at the ways our pods can be configured to communicate with each other within the same Kubernetes cluster.

To understand this, we have created a lab scenario with two pods running inside the same cluster.

We will focus on two namespaces:
  • default
  • web-apps
Let's see which pods are running in both namespaces.
  • Pods running under default namespace.

  • Pods running under web-apps namespace.

What's the application? We have an application pod named "genache-cli-deployment" running in the default namespace. Within this lab environment we will see how we can establish communication from microservices like my-shell and webapp-shell to the genache-cli pod.

Here are the different ways..

Using Pod's IP.

Every pod gets an IP from the defined CIDR range, which can be used for direct pod-to-pod communication, irrespective of namespace.
Thus a simple pattern of http://<pod-ip-address>:<container-port-number>

So as per our lab environment, we will try to establish a connection to the genache-cli pod on port 8545, using curl against its pod IP.

> kubectl get pods -o wide

NAME                                     READY   STATUS    RESTARTS         AGE   IP           NODE             NOMINATED NODE   READINESS GATES
genache-cli-deployment-8f48b88fb-dqnkx   1/1     Running   20 (2d10h ago)   30d   docker-desktop   <none>           <none>
my-shell                                 1/1     Running   0                37m   docker-desktop   <none>           <none>
Output from webapp-shell, running in the web-apps namespace:
root@webapp-shell:/# curl http://<pod-ip>:8545
400 Bad Request

Output from my-shell, running in the default namespace:

root@my-shell:/# curl http://<pod-ip>:8545
400 Bad Request

In both cases the 400 Bad Request response comes from the ganache service itself, which confirms that the pod-to-pod TCP connection succeeded.

Read more ...

Selenium XPATH Cheat Sheet


XPATH that contains Partial Text.

Example to Consider.

<span class="style-scope ytd-grid-video-renderer">1 day ago</span>

<span class="style-scope ytd-grid-video-renderer">84K views</span>

<span class="style-scope ytd-grid-video-renderer">1 hour ago</span>

<span class="style-scope ytd-grid-video-renderer">8 hour ago</span>

We need to grep elements whose text contains the partial word "hour".

Code Snippet.

 //*[contains(text(), "hour")]

To grep elements whose attribute value contains partial text (here, the id attribute):

 //*[contains(@id, "title")]
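Note that XPath's contains() does plain substring matching. As a plain-Python stand-in (not a Selenium API), filtering the sample span texts above the way //*[contains(text(), "hour")] would:

```python
# contains() is substring matching: "hour" matches "1 hour ago" and
# "8 hour ago" but not "1 day ago" or "84K views".
span_texts = ["1 day ago", "84K views", "1 hour ago", "8 hour ago"]
matched = [t for t in span_texts if "hour" in t]
print(matched)  # ['1 hour ago', '8 hour ago']
```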

Read more ...

Fix : AWS SAM IAM Error : arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31


Stack trace :

User: arn:aws:sts::455734o955:assumed-role/xx-xx-xx-app-role/i-xxxxxxxxxx is not authorized to perform: cloudformation:CreateChangeSet on resource: arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31 because no identity-based policy allows the cloudformation:CreateChangeSet action.


Add the below resource within your JSON policy statement.

Note :  cloudformation:* is strictly discouraged, fine tune your access permissions.

        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": [
                "arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31"
            ]
        }
Read more ...

Continuous Integration of Lambda Function using AWS SAM


AWS Lambda is awesome, and trust me, if you are working on AWS, sooner or later you will have to deal with it.

In this blog post, we will cover the below use cases.

  • What is AWS SAM
  • How to create Lambda Function using AWS SAM
  • How to delete Lambda Function created using AWS SAM
  • How to integrate AWS SAM with Docker.
  • How to create a continuous integration pipeline with Jenkins, GitHub, Docker and SAM.

What is AWS SAM.

One can find hugely detailed information in the official AWS documentation.

I would like to share my own short understanding, to give you an idea of AWS SAM and its respective components.

  • AWS SAM is focused on creating applications using AWS PaaS services such as API Gateway, AWS Lambda, SNS, SES etc.
  • SAM templates are similar to CloudFormation templates, so anyone who knows CloudFormation can easily adapt to SAM templates.
  • SAM shortens the code compared to CloudFormation for serverless service deployments.
  • Behind the scenes, SAM creates CloudFormation stacks to deploy the AWS services; it writes part of the code the user would otherwise have to author.
  • To use SAM, one needs to download an additional binary/package; it is not bundled with the AWS CLI.
How to create Lambda Function using SAM ?

Before you jump straight in, first get to know the essential files and directories:

  • samconfig.toml : Configuration file that will be used during the SAM commands ( init, test, build, validate etc)
  • template.yml : SAM template, similar to CloudFormation template to define Parameter, Resource, Metadata, Output, Mapping etc.
  • events : Directory to store events for testing our Lambda code (the event.json file).
  • tests : Directory that contains the unit test files.
Lab setup details -
    - We will deploy a Lambda Function with the Python 3.7 runtime.
    - The name of our SAM application is lambda-deployer_with_sam.
    - This is how our Lambda Function looks in the console; its basic task is to check the status of a port, i.e. open or closed.
    - Our files and templates follow a CI/CD approach, so we keep configuration for multiple environments (default, dev, uat).

  • Install AWS SAM CLI first.
  • Most tutorials you may have gone through start with sam init, sam build and so on; this post is a little pre-baked, using existing templates.
  • Create a new directory to use in this project
$ mkdir lambda-deployer_with_sam
  • Create new files following the directory structure layout below
├── events
│   └── event.json
├── samconfig.toml
├── src
└── template.yaml

2 directories, 3 files

Here is the basic content to post in respective files.

Contents of samconfig.toml
version = 0.1

[default.deploy.parameters]
stack_name = "default-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []

[dev.deploy.parameters]
stack_name = "dev-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "dev-sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []
parameter_overrides = "Environment=\"dev\""

[uat.deploy.parameters]
stack_name = "uat-lambda-deployer-with-sam-Stack"
s3_bucket = "lambda-deployer-sam-bucket"
s3_prefix = "uat-sam-lambda-stack"
region = "ap-south-1"
capabilities = "CAPABILITY_IAM"
disable_rollback = true
image_repositories = []
parameter_overrides = "Environment=\"uat\""

Let's understand the contents of the SAM configuration file, samconfig.toml.
  • This file is used later during the deployment of the Lambda and its respective resources.
  • It categorizes environment-specific parameters.
  • The bracketed table headers name the environments: default, dev and uat.
  • The lines under each header, e.g. [uat.deploy.parameters], provide that environment's specific parameters.
  • parameter_overrides is used to override the default parameters declared in template.yml, which is equivalent to a CloudFormation template.
Contents of template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.

Parameters:
  Environment:
    Description: Please specify the target environment.
    Type:        String
    Default:     "dev"
    AllowedValues:
      - dev
      - uat
  AppName:
    Description: Application name.
    Type:        String
    Default:     "find-port-status"

Mappings:
  EnvironmentMap:
    default:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'
    dev:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'
    uat:
      IAMRole: 'arn:aws:iam::897248824142:role/service-role/vpclambda-role-27w9b8uq'

Resources:
  LambdabySam:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: !Sub 'ck-${Environment}-${AppName}'
      Handler: lambda_function.lambda_handler
      Runtime: python3.7
      CodeUri: src/
      Description: 'Lambda Created by SAM template'
      MemorySize: 128
      Timeout: 3
      Role: !FindInMap [EnvironmentMap, !Ref Environment, IAMRole]
      VpcConfig:
        SecurityGroupIds:
          - sg-a0f856da
        SubnetIds:
          - subnet-e9c898a5
          - subnet-bdbb59d6
      Tags:
        Name: !Sub 'ck-${Environment}-${AppName}'
        Owner: CyberkeedaAdmin

Now, our last step is to put our Lambda code into the src directory; the handler lambda_function.lambda_handler expects the file to be named lambda_function.py.

$ touch src/lambda_function.py

Contents of src/lambda_function.py
import json
import socket

def isOpen(ip, port):
    # Attempt a TCP connection; True if the port accepts, False otherwise.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect((ip, int(port)))
        return True
    except OSError:
        return False
    finally:
        s.close()

def lambda_handler(event, context):
    # Target host was redacted in the original post; substitute your own.
    if isOpen('', 443):
        code = 200
    else:
        code = 500

    return {
        'statusCode': code,
        'body': json.dumps("Port status")
    }
Now we have everything in place; let's deploy our Lambda code using SAM.

Initiate SAM build with respective environment, defined in samconfig.toml
$ sam build --config-env dev

The output will look something like below.
Building codeuri: /home/kunal/aws_sam_work/lambda-deployer_with_sam/src runtime: python3.7 metadata: {} architecture: x86_64 functions: ['LambdabySam']
requirements.txt file not found. Continuing the build without dependencies.
Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Commands you can use next
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided

Now we have a build ready to be deployed; let's initiate sam deploy.
$ sam deploy --config-env dev
The output will look something like below.
Uploading to dev-sam-lambda-stack/dccfd91235d686ff0c5dcab3c4d44652  400 / 400  (100.00%)

        Deploying with following values
        Stack name                   : dev-lambda-deployer-with-sam-Stack
        Region                       : ap-south-1
        Confirm changeset            : False
        Disable rollback             : True
        Deployment s3 bucket         : 9-bucket
        Capabilities                 : ["CAPABILITY_IAM"]
        Parameter overrides          : {"Environment": "dev"}
        Signing Profiles             : {}

Initiating deployment
Uploading to dev-sam-lambda-stack/b6c26b6d535bf3b43f5b0bb71a88daa1.template  1627 / 1627  (100.00%)

Waiting for changeset to be created..

CloudFormation stack changeset
Operation                     LogicalResourceId             ResourceType                  Replacement                 
+ Add                         LambdabySam                   AWS::Lambda::Function         N/A                         

Changeset created successfully. arn:aws:cloudformation:ap-south-1:897248824142:changeSet/samcli-deploy1646210098/97de1b9e-ed08-45fe-8e65-fb0c0928e8f7

2022-03-02 14:05:09 - Waiting for stack create/update to complete

CloudFormation events from stack operations
ResourceStatus                ResourceType                  LogicalResourceId             ResourceStatusReason        
CREATE_IN_PROGRESS            AWS::Lambda::Function         LambdabySam                   -                           
CREATE_IN_PROGRESS            AWS::Lambda::Function         LambdabySam                   Resource creation Initiated 
CREATE_COMPLETE               AWS::Lambda::Function         LambdabySam                   -                           
CREATE_COMPLETE               AWS::CloudFormation::Stack    dev-lambda-deployer-with-     -                           

Successfully created/updated stack - dev-lambda-deployer-with-sam-Stack in ap-south-1

This step creates the required services and their respective configuration; confirm the same from the Lambda console, this is how it looks.

Please note: every time we make changes to the code or template, we need to re-build and re-deploy.

That's it for this post; in later posts we will cover the following:
  • How to delete Lambda Function created using AWS SAM
  • How to integrate AWS SAM with Docker.
  • How to create a continuous integration pipeline with Jenkins, GitHub, Docker and SAM.

Read more ...