
Logstash with AWS Elasticsearch Service

Data can be shipped and ingested into an AWS Elasticsearch domain in multiple ways.
  • Using a Kinesis Stream to ingest logs into AWS Elasticsearch.
  • Using a Kinesis Firehose delivery stream to ingest logs into AWS Elasticsearch.
  • Using a Filebeat and Logstash combination to ingest logs into AWS Elasticsearch.
In this blog post we will cover how to send logs/data from an EC2 instance to our AWS managed Elasticsearch domain using Logstash.

Assumptions and Requirements:
  1. We already have an Elasticsearch domain created within the AWS Elasticsearch service.
  2. A user with an IAM role that has AmazonESFullAccess; this could be made more granular, but for now we assume full access to the Elasticsearch service.
  3. The user must have programmatic access configured, i.e. an AWS Access Key ID and AWS Secret Access Key.
  4. An EC2 instance with the above IAM role attached and an appropriate security group configured to connect to the Elasticsearch endpoint; the snapshot below will guide you to the Elasticsearch endpoint.
  5. I will not explain the Logstash pipeline ( input, filter, output ) here; input and filter remain the same, and we will learn what to define in the output section to ingest data into the Elasticsearch domain.

Installation and Configuration.

Let's proceed with the installation first; we will install two components here.
  • Logstash
  • Logstash-output-amazon_es plugin

Logstash can be installed directly from apt/yum or from a binary; click the official link for its guidelines, or you can follow our previous post for a complete ELK stack installation.

The logstash-output-amazon_es plugin is mandatory to install, as without it we can't ingest data into our AWS Elasticsearch domain.
Please note, Logstash must be installed before the logstash-output-amazon_es plugin.

So toggle down to the command prompt and run the command below; please locate your Logstash bin directory before running it. For Amazon Linux, the default path is shown below.
# /usr/share/logstash/bin/logstash-plugin install logstash-output-amazon_es
You will get a success message upon a successful installation.
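To double-check, you can list the installed plugins and grep for the new one; assuming the same default bin path, it looks like below.
# /usr/share/logstash/bin/logstash-plugin list | grep amazon_es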

Now let's put the lines below within the output section of your Logstash pipeline configuration.

Replace the placeholder values with your own parameters.
output {
        stdout { codec => rubydebug }
        amazon_es {
                hosts => [""]
                region => "us-west-2"
                aws_access_key_id => 'AjkjfjkNAPE7IHGZDDZ'
                aws_secret_access_key => '3yuefiuqeoixPRyho837WYwo0eicBVZ'
                index => "your-ownelasticsearch-index-name"
        }
}

Once inserted and configured, restart the Logstash service to apply the changes.
Verify the ingestion within the Logstash logs, the Kibana dashboard, or the ES domain's Indices section.
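For a quick check from the shell, assuming your domain endpoint is reachable from the instance, you can list the indices on the domain ( <your-es-domain-endpoint> is a placeholder here ).
# curl -XGET 'https://<your-es-domain-endpoint>/_cat/indices?v'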

Overall, my entire Logstash pipeline, defined in the file logstash.conf within the directory /etc/logstash/conf.d/, looks like the one below; maybe someone can take it as a reference.

Note : My demo.log contains logs generated by a Spring Boot app.
input {
  file {
    path => "/tmp/demo.log*"
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
    }
  }
}

filter {
    grok {
          match => {
            "message" => [
                  "%{TIMESTAMP_ISO8601:timestamp}\s*%{LOGLEVEL:level}\s*--- *\[%{DATA:thread}\] %{JAVACLASS:class} *:%{GREEDYDATA:json_data}"
            ]
          }
    }
}

filter {
      json {
        source => "json_data"
      }
}

output {
        stdout { codec => rubydebug }
        amazon_es {
                hosts => [""]
                region => "us-west-2"
                aws_access_key_id => 'AKs3IAuoisoosoweIHGZDDZ'
                aws_secret_access_key => '3d0w8bwuywbwi6IxPRyho837WYwo0eicBVZ'
                index => "your-ownelasticsearch-index-name"
        }
}
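For reference, a hypothetical Spring Boot log line in the shape this pipeline expects would look like below; the class name and JSON payload are made up for illustration, and the trailing JSON is what the json filter parses out of json_data.
2020-01-15 10:23:45.123  INFO --- [main] com.example.DemoApp : {"event":"login","user":"demo"}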

Thanks, do comment; I will be happy to help you.


Logstash Installation Error Fix : Unable to install system startup script for Logstash.

If you are also facing challenges while installing Logstash version 6 or 7 with the bunch of error strings below, you are at the right place; let's fix it.

Using provided startup.options file: /etc/logstash/startup.options
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/jruby/Main : Unsupported major.minor version 52.0
        at java.lang.ClassLoader.findBootstrapClass(Native Method)
        at java.lang.ClassLoader.findBootstrapClassOrNull(...)
        at java.lang.ClassLoader.loadClass(...)
        at sun.misc.Launcher$AppClassLoader.loadClass(...)
        at sun.launcher.LauncherHelper.checkAndLoadMain(...)
Unable to install system startup script for Logstash.
chmod: cannot access '/etc/default/logstash': No such file or directory
warning: %post(logstash-1:7.5.2-1.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 1:logstash-7.5.2-1.noarch
  Verifying  : 1:logstash-7.5.2-1.noarch

This error is mainly due to an unsupported version of Java being present,
or there may be two versions of Java installed on your system.
Logstash version 6+ depends on Java 8+, so let's find out what is on our system and which default version is picked up by the CLI.

Run the command below to check the Java version.
# java -version

java version "1.7.0_231"
OpenJDK Runtime Environment (amzn- u231-b01)
OpenJDK 64-Bit Server VM (build 24.231-b01, mixed mode)
Verify that it is version 8 or greater; if not, uninstall the older version and install Java 8 as shown below.

# yum remove java-1.7.0-openjdk

# yum install java-1.8.0-openjdk
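Alternatively, if you would rather keep both versions installed, the default can be switched with the alternatives tool instead of removing Java 1.7.
# alternatives --config java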
Verify the default version again, then uninstall and reinstall Logstash,
and finally run the command below.
# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv
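Assuming the startup script installs cleanly this time, the service can be managed the usual SysV way.
# service logstash start
# service logstash status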


ELK Stack - LogStash and Filebeat with SSL

ELK Stack all together can manage and parse huge amounts of log data, which can be used further for analytics, troubleshooting, central monitoring, and alerting purposes using its efficient GUI.

In this tutorial we will see how to use SSL while transferring data between the Beats client and the Logstash log aggregator; you can follow the entire setup of the ELK Stack published in my previous post, "How to install ELK Stack".

We will cover only the additional setup required for SSL on Logstash and Filebeat. Let's begin with the Logstash server.

Connect to the Logstash server and change to the Logstash root directory.
Create an ssl directory within it.
$ cd /etc/logstash/

$ sudo mkdir ssl

Now we will generate the SSL certificate to use further; run the command below.
* Replace demo-elk-server with the FQDN of the host where Logstash is installed.
$ sudo openssl req -subj '/CN=demo-elk-server/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt
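Optionally, you can sanity-check the generated certificate by printing its subject and validity dates.
$ sudo openssl x509 -in ssl/logstash-forwarder.crt -noout -subject -dates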
Edit the Filebeat input configuration file that was created to receive incoming logs from the Filebeat agents installed on clients.

My config file is named filebeat-input.conf and is placed within the directory /etc/logstash/conf.d/.

Add the additional SSL key paths within the config and save the file.
vim /etc/logstash/conf.d/filebeat-input.conf
input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}
We have to restart the Logstash service to apply the changes.
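Assuming a systemd based installation, that would be:
$ sudo systemctl restart logstash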
And now we are done with the Logstash part; let's move on to the clients for the Filebeat SSL configuration.

Let's edit the filebeat.yml file and append the additional SSL lines, along with the server certificate path, then save it.

vim /etc/filebeat/filebeat.yml
  # The Logstash hosts
  hosts: ["elk-server:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
Now we have to copy the certificate "logstash-forwarder.crt" from the Logstash server and place it in the directory /etc/filebeat/.

Either SCP the file, or create a new file named logstash-forwarder.crt and copy-paste the content of the cert file into it within the Filebeat client's configuration folder.
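A sample scp run from the client could look like below; the user name is a placeholder, and demo-elk-server is the Logstash host used earlier.
$ scp user@demo-elk-server:/etc/logstash/ssl/logstash-forwarder.crt /etc/filebeat/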

We have to restart the Filebeat service to reflect the changes.
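Again assuming systemd, that would be:
$ sudo systemctl restart filebeat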

So we are done with the SSL config; in case you face any difficulties, do comment on the post.

How to install E-Elasticsearch L-Logstash K-Kibana Stack on Ubuntu Linux


ELK Stack is one of the most popular open-source log management applications.

It is a collection of open-source products including Elasticsearch, Logstash, and Kibana.

All these 3 products are developed, managed, and maintained by an organization named Elastic.

ELK Stack all together can manage and parse huge amounts of log data, which can be used further for analytics, troubleshooting, central monitoring, and alerting purposes using its efficient GUI.

  • Elasticsearch is a JSON-based search and analytics engine intended for horizontal scalability and easier management.
  • Logstash is a server-side data processing pipeline that can collect data from several sources concurrently, transform it, and then send the data to your desired stash.
  • Kibana is used to visualize your data and navigate the Elastic Stack. 

I think we now have some idea of the components that will be used to build the entire stack.
Let's know how to install, configure and use it on Ubuntu.

Things to note
  • For now we will be installing Elasticsearch and Kibana on the same server.
  • To forward logs, we will install the Filebeat agent on one of the Linux servers.
  • We will forward syslogs in this demo.


  •  Install Java
OpenJDK 8 is available in the default Ubuntu APT repositories; simply install Java 8 on an Ubuntu system using the commands below.
$ sudo apt update
$ sudo apt install openjdk-8-jdk openjdk-8-jre
Check Version.
$ java -version
openjdk version "1.8.0_232"
OpenJDK Runtime Environment (build 1.8.0_232-8u232-b09-0ubuntu1~18.04.1-b09)
OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)
In case we need to set Java's home directory, let's first determine where Java was placed after installation; then we will set the environment variable accordingly.
$ sudo update-alternatives --config java
The above command will help us find the Java path; mine looks like the one below, and I will use it to set my JAVA_HOME.

There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java Nothing to configure.

Though Java is accessible from  /usr/bin/java , in case you still need to set the Java home directory, follow the instructions below. Note that JAVA_HOME should point to the installation directory, not the java binary itself.
$ sudo vim /etc/environment

Paste the determined path, trimmed of the trailing /bin/java, as  JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre"  at the end of the file.

$ source /etc/environment
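You can verify that the variable is now set in your shell.
$ echo $JAVA_HOME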
Log out and log back in to reflect the changes.
  •  Install and Configure ElasticSearch 
We will start the installation by importing the Elasticsearch PGP key and adding the Elastic repository, then execute the commands below sequentially so that Elasticsearch and Kibana can be installed through apt-get.
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

$ sudo apt-get install apt-transport-https

$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

$ sudo apt-get update && sudo apt-get install elasticsearch
Now let's modify the Elasticsearch config file and make some important changes before we start our Elasticsearch engine.
$ sudo vim /etc/elasticsearch/elasticsearch.yml

Uncomment "network.host" and "http.port" so that the config looks like below.
network.host: localhost
http.port: 9200
Save the file and start Elasticsearch.
$ sudo systemctl start elasticsearch
In case if you want to enable it during boot.
$ sudo systemctl enable elasticsearch
Confirm it's working using the curl command below.
$ curl -X GET "localhost:9200"
Output will look something like below.
{
  "name" : "ubuntu",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "IoQ9BAgsS2yGxir-C6tf1w",
  "version" : {
    "number" : "7.5.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "3ae9ac9a93c95bd0cdc054951cf95d88e1e18d96",
    "build_date" : "2019-12-16T22:57:37.835892Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

So we are done with the Elasticsearch installation; let's proceed to install our Kibana dashboard.

  •  Installation and configuration of Kibana Dashboard.
It's always recommended to install Kibana after Elasticsearch. We have already added the Elastic repository, which contains Kibana too, so we will use apt to install it.
$ sudo apt install kibana
Uncomment the following lines in /etc/kibana/kibana.yml to proceed further.
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
So we are good to start the kibana service too
$ sudo systemctl start kibana
In case you want to enable it during startup/boot.
$ sudo systemctl enable kibana
  •  Installation and configuration of Logstash.
Logstash in general serves to segregate multiple logs, and it can transform the data before sending it to Elasticsearch.

Let's install and configure it to collect logs from our Filebeat agent and then send them to Elasticsearch.

We can install it using the apt command below.
$ sudo apt install logstash
Now, let's configure it; we will start by creating a few files within Logstash's conf.d directory.
We will start by creating the Filebeat input config file.
$ cd /etc/logstash/conf.d/

$ sudo vim filebeat-input.conf
Append the below lines within the file and save it.

input {
  beats {
    port => 5443
    type => syslog
  }
}
Now create a new file named syslog-filter.conf, add the contents below, and save the file.
This file is responsible for filtering and parsing the logs to make them suitable for ingestion into the Elasticsearch document format.
$ cd /etc/logstash/conf.d/

$ sudo vim syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
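For reference, a typical syslog line that this grok pattern parses looks like below; the hostname and program are illustrative.
Jan 15 10:23:45 demo-client sshd[1234]: Accepted password for ubuntu from 10.0.0.5 port 52312 ssh2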
Create another config file for Elasticsearch, which will be responsible for ingesting data from Logstash into Elasticsearch.
$ cd /etc/logstash/conf.d/

$ sudo vim output-elasticsearch.conf
Insert the below lines and save it.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
So we are done with the Logstash configuration too; let's start the Logstash service.
$ sudo systemctl start logstash
In case you want to enable it during startup/boot.
$ sudo systemctl enable logstash
  •  Installation and configuration of Filebeat Agent on Client.
Elastic Stack uses lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. 

Each Beat has been developed to serve a specific purpose; some of them are listed below.
  • Filebeat: It collects and ships log files.
  • Metricbeat: It collects metrics from your systems and services.
  • Packetbeat: It collects and analyzes network data.
  • Winlogbeat: It collects Windows event logs.
  • Auditbeat: It collects Linux audit framework data and monitors file integrity.
  • Heartbeat: It monitors services for their availability with active probing.
In our current lab setup, we will use the most widely used Beat, Filebeat, to parse and ship our log file to Logstash; thereafter it will be forwarded to Elasticsearch, and the data can later be analyzed using Kibana.

We can install it using the apt command below.
$ sudo apt install filebeat
Let's modify its configuration file ( /etc/filebeat/filebeat.yml ) as per our requirements; find the line below and set it to "true".
enabled: true
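For context, that flag lives under the filebeat.inputs section of filebeat.yml; a minimal sketch, assuming we ship the default syslog file, looks like below.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog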
Now, as we will be sending logs to Elasticsearch via Logstash and not directly to Elasticsearch, we will disable the output section meant for Elasticsearch by commenting out the lines below.
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
Now we will enable the Logstash output section by uncommenting the lines below.
Here elk-server is the hostname of the server where Logstash is installed; replace it with your Logstash server's IP/hostname, and note that 5443 matches the Beats input port we configured earlier.
  # The Logstash hosts
  hosts: ["elk-server:5443"]
Save and exit; let's start the Filebeat service, and we are ready to ship our logs to the Elasticsearch server via Logstash. Subsequently, we can search our logs in the Kibana dashboards.
$ sudo systemctl start filebeat
In case you want to enable it during startup/boot.

$ sudo systemctl enable filebeat
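Once Filebeat is running, you can confirm on the ELK server that data has reached Elasticsearch by listing the Filebeat indices.
$ curl -XGET "localhost:9200/_cat/indices/filebeat-*?v"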
Let's explore our Kibana dashboard, and we will begin by creating our index pattern on it.
Open your browser and open the Kibana server IP with port (5601) as shown below.
http://<kibana host ip>:5601

Click on "Explore my Own"

Click on Discover ( Left Panel ), then Create Index.

Within the index pattern, put the string filebeat-* and click on Next step.

On the next window of Step 2, select or type @timestamp and we are done.

Let's discover the data ingested into our newly created index; click on Discover again and we can see our data there.
