
AWS - How to extend a Windows drive volume from its existing size to a larger size

How to extend an EBS volume attached to a Windows server

To be honest, I'm not a Windows guy; even for small things I need to Google, and even when I have already done a task on a Windows server, I tend to forget it by the next time I'm asked, given how infrequently I work with Windows.

So why not draft a blog post and document how it's done?
In this blog post I will cover:
  • How to extend windows root EBS device volume.
  • How to extend an additional attached EBS volume.

Lab setup details:
  1. We already have an EC2 instance with Windows Server installed on it.
  2. We already have a root volume ( Disk 0 ) attached, of size 30 GB.
  3. We have made 2 additional disk partitions, as D: and E: drives.
  4. We already have an additional EBS volume ( Disk 1 ) mounted with the partition name DATA.
  5. We are assuming no unallocated space is present.
How to extend the Windows root EBS device volume.

Final goal : We will add 3 GB of additional disk space to our root EBS volume ( /dev/sda1 ) and extend our D: drive partition from 5 GB to 8 GB.

  • Go to the AWS Console and select your desired Windows Server EC2 instance.
  • Under Description, find the block device (/dev/sda1), click it, and from the popup window note the EBS volume ID, then select it.
  • It will redirect you to the EBS Volumes window; confirm the EBS volume ID that we noted in the step above, and confirm the existing size too.

  • Once confirmed, we are ready to modify the volume size, from 30 GB to 33 GB.
  • Select the volume, right-click on it, choose Modify Volume, and change the size from 30 to 33, as we want to increase it by 3 GB.
  • Confirm and submit, then watch the state until it changes from "optimizing" to "available".
  • Once completed, we can log in to our Windows EC2 instance and follow the next steps.
  • Open Run --> Paste "diskmgmt.msc" --> Action --> Refresh Disk
  • A new unallocated space of size 3 GB can be found.
  • Now we are ready to extend our D: drive from 5 GB to 8 GB.
  • Right-click on the D: volume --> Extend Volume --> Next --> the 3 GB volume must be there within the next Selected panel --> Finish
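The console steps for growing the volume can also be scripted with the AWS CLI; here is a minimal sketch, assuming the CLI is configured on your machine and using a hypothetical volume ID (replace it with the one you noted from the console):

```shell
# Hypothetical volume ID; replace with the one noted from the console.
VOLUME_ID="vol-0123456789abcdef0"
NEW_SIZE_GB=33

# Grow the EBS volume from 30 GB to 33 GB.
aws ec2 modify-volume --volume-id "$VOLUME_ID" --size "$NEW_SIZE_GB"

# Poll the modification state until it reports optimizing/completed.
aws ec2 describe-volumes-modifications --volume-ids "$VOLUME_ID" \
  --query 'VolumesModifications[0].ModificationState'
```

The new capacity still shows up as unallocated space inside Windows, so the Disk Management refresh and Extend Volume steps are needed either way.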

We can perform the same steps with our existing additional attached disk volumes; just identify your EBS volume ID and follow the same procedure.


How to Upload your Django bundled package to PyPI using twine

Within our last Django post, I demonstrated how we can bundle/package our app into tar.gz format and how we can install it locally.

In case you are not aware, go through the post link.
Within this blog post, we will cover how we can upload our bundled Django app package to PyPI.
  • Once you have successfully created a PyPI account, let's install an additional Python package named twine; open a command prompt/terminal and run the below command.
# pip install twine
  • Toggle down to your bundle's home directory, where you have your bundled tar.gz package under the dist folder, and run the following command.
# twine check dist/*
Note : PyPI uses your README.rst to build the description and other documentation sections of your package's page.
Most of the errors twine reports come from your README.rst file, when it fails to meet the formatting requirements for upload.

In case it gives an error, fix the error, build a new bundled package, and run "twine check" again to verify whether it has been fixed.

In order to recreate the bundled package, delete the old directories (dist and egg-info) and rerun the previous command to rebundle it.

# python setup.py sdist
Once all errors are fixed, we are ready to upload to the PyPI website; use the below command to upload your package.
# twine upload dist/*
It will ask for your credentials; use your newly created PyPI account credentials to upload the package under your username.
Upon successful upload, we can check our PyPI account; the package will be uploaded with the formatted description from our README.rst file.

Our django-filesnow page looks like the screenshot below.
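As a side note, if you want to rehearse a release before it lands on the real index, twine can also target TestPyPI; a small sketch, assuming you have created a separate account on test.pypi.org:

```shell
# TestPyPI endpoint (a separate account from pypi.org is required).
REPO_URL="https://test.pypi.org/legacy/"

# Upload the bundles to the test index and review the rendered page there.
twine upload --repository-url "$REPO_URL" dist/*
```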


How to make your Python Django App Reusable

Django is cool, and its features are even cooler than those of many other Python libraries and frameworks.

One of its best features is making your Django app easily distributable for reuse.

Within this blog post, we will cover:
  • How we can build a Django app into a Python package such as xyz.tar.gz.
  • How we can upload our created Django app package to the Python Package Index, simply known as PyPI.
  • How we can install our created package locally via the tar.gz file.
  • How we can install it using the standard pip command, as "pip install your_py_package".
  • We assume you already have a running Django project, and beneath it an application which you want to package and make distributable.
  • Here in this blog post we already have a Django project named docdocGo, and within it we have created a Django application named FilesNow; we will cover everything taking it as an example.
  • FilesNow is a Django application that can be used to download contents from an AWS S3 bucket into its temporary directory and serve them as presentable media for a user to view or download; it deletes the files after a fixed interval.
  • FilesNow on GitHub link
Here is what our Django project and its directory structure look like.


Requirements:
  • Python 3+
  • Django 2.2+
  • PIP
  • Twine ( pip install twine )

Let's proceed and see how to package our filesnow Django app.
  1. Create a new empty directory outside our Django project and name it something relevant to your app; here we will name it django-filesnow.
  2. Copy the entire application directory and paste it under the newly created directory.
  3. Change into the django-filesnow directory, and let's move ahead and create a few required files within it.
  • Create a file django-filesnow/README.rst with the below content, and do replace it with your own.
Content of django-filesnow/README.rst

FilesNow is a Django app to download documents and images
from AWS S3 and serve them as temporary static content to customers.

FilesNow is a way to serve AWS S3 documents/media files
without giving access to your S3 buckets.

FilesNow itself cleans up its downloaded presentable
files, thus maintaining a healthy file system.

AWS Boto3 framework : pip install boto3
Configure AWS credentials using command : aws configure

Quick start

1. Add "filesnow" to your INSTALLED_APPS setting like this::


2. Include the polls URLconf in your project like this::

    path('filesnow/', include('filesnow.urls'))

3. Start the development server ``python runserver``

4. Visit and explore it.
  • Create a license file django-filesnow/LICENSE; choose a license as per your requirements ( GNU, BSD, MIT, etc. ). The Choose a License website can help guide you about your required license and its content. I have used the MIT license, and you can find its content within my GitHub repo.
  • Create two setup files within the same directory, named django-filesnow/setup.cfg and django-filesnow/setup.py, with the below content, and do replace it with your own.
Content of django-filesnow/setup.cfg

[metadata]
name = django-filesnow
version = 0.1
description = A Django app to download cloud contents.
long_description = file: README.rst
url =
author = Jackuna
author_email =
license = MIT License
classifiers =
    Environment :: Web Environment
    Framework :: Django
    Framework :: Django :: 2.2
    Intended Audience :: Developers
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent
    Programming Language :: Python
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3 :: Only
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7
    Programming Language :: Python :: 3.8
    Topic :: Internet :: WWW/HTTP
    Topic :: Internet :: WWW/HTTP :: Dynamic Content

[options]
include_package_data = true
packages = find:
Content of django-filesnow/setup.py
from setuptools import setup

setup()

  • Only Python modules and packages are included in the package by default. To include additional files, we'll need to create a MANIFEST.in file. To include the templates, the README.rst and our LICENSE file, create a file django-filesnow/MANIFEST.in with the following contents:
Content of django-filesnow/MANIFEST.in
include LICENSE
include README.rst
recursive-include filesnow/static *
recursive-include filesnow/templates *
So now we are all done with the creation of the required files to package your app. Let's change again into the parent directory ( django-filesnow ), open a terminal/command prompt, and run the below command to build our package.
# python setup.py sdist
Once the command ends successfully, an additional directory named "dist" will be created; it contains our bundled and packaged Django app, in our case django-filesnow-0.1.tar.gz.
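To confirm that the LICENSE, README.rst, and template files actually made it into the bundle, you can list the archive contents; a quick sketch, using the package name from this post:

```shell
# Path of the sdist produced by the build step above.
PKG="dist/django-filesnow-0.1.tar.gz"

# Lists every file packed into the sdist; LICENSE, README.rst and the
# filesnow/templates tree should all appear.
tar tzf "$PKG"
```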

Thus we have packaged our Django app as django-filesnow-0.1.tar.gz; we can distribute it as per our requirements, for example uploading it to repositories like GitHub, sending it by email, or uploading it to any forum or website.

Let's see how to install our packaged filesnow Django app, django-filesnow-0.1.tar.gz; use the below command to install it.
# python -m pip install --user django-filesnow-0.1.tar.gz
The above command will install it; on Windows, we can locate the installed application, named "filesnow", under your Python installation's site-packages directory.


Please note:
The installed paths shown above may vary, depending upon your system and package.
The package will be installed under the name filesnow only, not django-filesnow.
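Once installed, a quick sanity check makes the two-names point concrete (both names below are the ones used in this post):

```shell
DIST_NAME="django-filesnow"   # the name pip knows the package by
MODULE_NAME="filesnow"        # the name you import / add to INSTALLED_APPS

python -m pip show "$DIST_NAME"    # prints version, location and metadata
python -c "import $MODULE_NAME"    # succeeds silently if importable
```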
Within the next post, we will cover the remaining topics!


How to install and configure SignalFX Smart Agent on Windows Server

SignalFX Smart Agent.

SignalFx ships with the SFX Smart Agent, which is one of the essentials for monitoring IT infrastructure (hosts); using it we can monitor the below infrastructure resources:
  • Memory
  • CPU
  • Disk
  • Network
  • Disk IO
  • Network IO
The SignalFx official documents suggest installing it via a PowerShell script, which can be found within its Setup tab.

Within this blog post, we will cover how to install and configure SFX Smart Agent on Windows server from packaged ZIP file.

Assumptions and Requirements:
  • We already have an account on SignalFx with the required licenses.
  • We will be using Windows Server 2012 in our lab setup as a host.
  • I will install the agent here from the packaged ZIP.
  • The placeholder strings within this post must be replaced by your own values.
So before we proceed, we need to gather three important and mandatory inputs:
  • signalFxAccessToken
  • ingestUrl
  • apiUrl
All three can be extracted from the Setup tab; let's go through it step by step.
  1. Login to your Signal FX Account.
  2. On the header navbar, click on integrations.
  3. Under essential services, click on SignalFX Smart Agent.
  4. Toggle to SETUP tab.
  5. Scroll down to the Windows Section.
  6. Copy the content on Windows setup and paste it into Notepad.
  7. Look for the strings (apiUrl, ingestUrl, signalFxAccessToken) within the pasted text in Notepad; extract them and keep them handy for use further on in our configuration file.


The SignalFx Smart Agent has mandatory requirements, so before we proceed further to install the agent on our Windows host, make sure the below is installed within the host.
  1. .NET Framework 3.5 or higher.
Now let's move forward to setup.

  • Download the latest SFX Smart Agent for Windows from its GitHub releases page.
  • Extract and copy the contents to any directory of your choice within your host; I am copying it into C:\Program Files.
  • Toggle into the "etc" directory within the extracted SignalFxAgent directory; as per my setup, the location is C:\Program Files\SignalFxAgent\etc\signalfx.
  • Configure the agent.yaml file.
Below is a sample configuration for agent.yaml; replace the placeholder values with the ones you extracted in the previous steps.

# *Required* The access token for the org that you wish to send metrics to.
signalFxAccessToken: 'myRandonTokenGivenBySignalFx'
ingestUrl: ''
apiUrl: ''
intervalSeconds: 10

logging:
  # Valid values are 'debug', 'info', 'warning', and 'error'
  level: info

# observers are what discover running services in the environment
observers:
  - type: host

monitors:
  - {"#from": 'C:\Program Files\SignalFxAgent\etc\*.yaml', flatten: true, optional: true}
  - type: host-metadata
  - type: processlist
  - type: cpu
  - type: disk-io
  - type: filesystems
  - type: memory
  - type: net-io
  - type: vmem

enableBuiltInFiltering: true
  • So we have made the required changes within agent.yaml; now save and exit, and we are done with the config file setup.
  • Now let's install it as a Windows service; run the below command to install it.
Toggle again to the "SignalFxAgent" directory (mine is "C:\Program Files\SignalFxAgent\") and run the install command; replace the paths with your own.

PS C:\> cd "C:\Program Files\SignalFxAgent\"

 PS C:\Program Files\SignalFxAgent> bin\signalfx-agent.exe -service "install" -logEvents -config "C:\Program Files\SignalFxAgent\etc\signalfx\agent.yaml"
This will create the SignalFx Smart Agent as a Windows service; we can stop and start it from there as per our need.

In case you are willing to start the service from the command prompt, below is the command.
 PS C:\Program Files\SignalFxAgent> bin\signalfx-agent.exe -service "start"
Upon successful setup, we can find our configured host under the SignalFx Infrastructure navbar, as below.


Logstash with AWS Elasticsearch Service

Logstash with AWS Elasticsearch Service.

Data can be shipped and ingested into an AWS Elasticsearch domain in multiple ways:
  • Using a Kinesis stream to ingest logs to AWS Elasticsearch.
  • Using a Kinesis Firehose stream to ingest logs to AWS Elasticsearch.
  • Using a Filebeat and Logstash combination to ingest logs to AWS Elasticsearch.
In this blog post we will cover how to send our logs/data from an EC2 instance to our AWS managed Elasticsearch domain using Logstash.

Assumptions and Requirements:
  1. We already have an Elasticsearch domain created within the AWS Elasticsearch service.
  2. A user with an IAM role configured that has AmazonESFullAccess; this could be more granular, but for now we are assuming full access to the Elasticsearch service.
  3. The user must have programmatic access configured, i.e. must have an Access Key ID and AWS Secret Access Key.
  4. An EC2 instance with the above IAM role attached, and with an appropriate security group configured to connect to the Elasticsearch endpoint; the snapshot below will guide you about the Elasticsearch endpoint.
  5. I will not explain the Logstash pipeline ( input, filter, output ); input and filter remain the same, but we will learn here what to define in the output section to ingest data into the Elasticsearch domain.

Installation and Configuration.

Let's proceed with the installation first; we will install two components here.
  • Logstash
  • logstash-output-amazon_es plugin

Logstash can be installed directly from apt/yum or from a binary; click the official link for its guidelines, or you can follow our previous post for a complete ELK stack installation.

The logstash-output-amazon_es plugin is mandatory, as without it we can't ingest data into our AWS Elasticsearch domain.
Please note, Logstash must be installed before the logstash-output-amazon_es plugin.

So toggle down to the command prompt and run the below command; please locate your Logstash bin directory before running it. For Amazon Linux, below is the default path.
# /usr/share/logstash/bin/logstash-plugin install logstash-output-amazon_es
You will get a success message upon a successful installation.
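Besides the success message, you can also confirm the plugin is registered; a quick check, assuming the default Amazon Linux path used above:

```shell
# Default Logstash bin directory on Amazon Linux; adjust for your install.
LOGSTASH_BIN="/usr/share/logstash/bin"

# The plugin name should appear in the installed-plugins listing.
"$LOGSTASH_BIN/logstash-plugin" list | grep amazon_es
```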

Now let's put the below lines within the output section of your Logstash pipeline configuration.

Replace the placeholder parameters with your own.
output {
        stdout { codec => rubydebug }
        amazon_es {
                hosts => [""]
                region => "us-west-2"
                aws_access_key_id => 'AjkjfjkNAPE7IHGZDDZ'
                aws_secret_access_key => '3yuefiuqeoixPRyho837WYwo0eicBVZ'
                index => "your-ownelasticsearch-index-name"
        }
}

Once inserted and configured, restart the Logstash service for the changes to take effect.
Verify the same within the Logstash logs, the Kibana dashboard, or even in the ES domain's indices section.

Overall, my entire Logstash pipeline, defined in the file logstash.conf within the directory /etc/logstash/conf.d/, looks like the below; maybe someone can take it as a reference.

Note : My demo.log contains logs generated by a Spring Boot app.
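One way to verify from the shell is to query the domain's cat API; a sketch with a hypothetical endpoint (replace it with your own ES domain endpoint, and note this assumes the domain's access policy permits your IP or IAM identity):

```shell
# Hypothetical endpoint; copy the real one from the ES domain overview page.
ES_ENDPOINT="search-mydomain-abc123xyz.us-west-2.es.amazonaws.com"

# List all indices with doc counts and sizes; your new index should appear.
curl -s "https://${ES_ENDPOINT}/_cat/indices?v"
```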
input {
  file {
    path => "/tmp/demo.log*"
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
    }
  }
}

filter {
    grok {
          match => {
            "message" => [
                  "%{TIMESTAMP_ISO8601:timestamp}*%{LOGLEVEL:level}*--- *\[%{DATA:thread}] %{JAVACLASS:class} *:%{GREEDYDATA:json_data}"
            ]
          }
    }
}

filter {
      json {
        source => "json_data"
      }
}

output {
        stdout { codec => rubydebug }
        amazon_es {
                hosts => [""]
                region => "us-west-2"
                aws_access_key_id => 'AKs3IAuoisoosoweIHGZDDZ'
                aws_secret_access_key => '3d0w8bwuywbwi6IxPRyho837WYwo0eicBVZ'
                index => "your-ownelasticsearch-index-name"
        }
}

Thanks; do comment, and I will be happy to help you.


Logstash installation error fix : Unable to install system startup script for Logstash.

If you are also facing challenges while installing Logstash version 6 or 7, with the below dozen error strings, you are at the right place; let's fix it.

Using provided startup.options file: /etc/logstash/startup.options
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/jruby/Main : Unsupported major.minor version 52.0
        at java.lang.ClassLoader.findBootstrapClass(Native Method)
        at java.lang.ClassLoader.findBootstrapClassOrNull(
        at java.lang.ClassLoader.loadClass(
        at java.lang.ClassLoader.loadClass(
        at sun.misc.Launcher$AppClassLoader.loadClass(
        at java.lang.ClassLoader.loadClass(
        at sun.launcher.LauncherHelper.checkAndLoadMain(
Unable to install system startup script for Logstash.
chmod: cannot access ‘/etc/default/logstash’: No such file or directory
warning: %post(logstash-1:7.5.2-1.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 1:logstash-7.5.2-1.noarch
  Verifying  : 1:logstash-7.5.2-1.noarch    

This error is mainly due to the existence of an unsupported version of Java,
or there may be two versions of Java installed within your system.
Logstash version 6+ has a dependency on Java 8+; let's see what is within our system and which default version is picked up by the CLI.

Run the below command to check java version.
# java -version

java version "1.7.0_231"
OpenJDK Runtime Environment (amzn- u231-b01)
OpenJDK 64-Bit Server VM (build 24.231-b01, mixed mode)
Verify that it is version 8 or newer (Java 8 reports itself as "1.8"); if not, uninstall the older version and install Java 8.

# yum remove java-1.7.0-openjdk

# yum install java-1.8.0-openjdk
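If the old JDK must stay installed for another application, an alternative to removing it is switching the system default; on Amazon Linux / RHEL-family systems, a sketch:

```shell
JAVA_PKG="java-1.8.0-openjdk"   # the JDK we want as the system default

# Interactively pick the java-1.8.0 entry as the default java.
alternatives --config java

# Confirm the default now reports 1.8.x or newer.
java -version
```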
Verify the default version again, then uninstall and reinstall Logstash,
and finally run the below command.
# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv
