
Kibana Fix : FORBIDDEN/12/index read-only / allow delete (api)


If you are facing challenges while deleting an old index from Kibana, you are at the right place.
You might have been greeted by the same error shown in the screenshot.

Don't worry, let's see how to fix it.

Error Cause : One or more nodes in your cluster has crossed the high disk watermark, which means more than 90% of its disk is full. When that happens, Elasticsearch tries to move shards away from the node to free up space, but only if it can find another node with enough space. Once a node crosses the flood-stage watermark (95% by default), Elasticsearch marks the indices on it read-only, which produces the FORBIDDEN/12 error above.
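Before applying the fix, it helps to confirm which node is low on disk. A quick check, assuming Elasticsearch is listening on the default localhost:9200:
# curl -s "http://localhost:9200/_cat/allocation?v"
The disk.percent column shows per-node disk usage; any node at or above the flood-stage watermark will have triggered this read-only block.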


The permanent fix is to add more disk space, either on each node or by adding more nodes to the cluster so Elasticsearch can spread the load. The two commands below are a temporary workaround to unblock the indices.

Log in to your Elasticsearch server node.
Open a terminal and use the below two curl commands to fix it.
# curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
You will get an output like below.
 {"acknowledged":true,"persistent":{},"transient":{"cluster":{"routing":{"allocation":{"disk":{"threshold_enabled":"false"}}}}}}    
 Now run the second command.

# curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
You will get an output like below.
 '{"acknowledged":true}'

Let me know if the above worked for you.

How to install E-Elasticsearch L-Logstash K-Kibana Stack on Ubuntu Linux



ELK Stack is one of the most popular open-source log management applications.

It is a collection of open-source products: Elasticsearch, Logstash, and Kibana.

All three products are developed, managed, and maintained by an organization named Elastic.


Together, the ELK Stack can manage and parse huge amounts of log data, which can then be used for analytics, troubleshooting, central monitoring, and alerting through its efficient GUI.

  • Elasticsearch is a JSON-based search and analytics engine designed for horizontal scalability and easy management.
  • Logstash is a server-side data processing pipeline that can collect data from several sources concurrently, transform it, and then send it to your desired stash.
  • Kibana is used to visualize your data and navigate the Elastic Stack.

Now that we have some idea of the components that make up the stack,
let's see how to install, configure, and use it on Ubuntu.

Things to note
  • For now, we will install Elasticsearch, Logstash, and Kibana on the same server.
  • To forward logs, we will install the Filebeat agent on a separate Linux server.
  • In this demo, we will forward syslogs.

Installation.

  •  Install Java
OpenJDK 8 is available in the default Ubuntu APT repositories; simply install Java 8 on an Ubuntu system using the below commands.
$ sudo apt update
$ sudo apt install openjdk-8-jdk openjdk-8-jre
Check Version.
$ java -version
openjdk version "1.8.0_232"
OpenJDK Runtime Environment (build 1.8.0_232-8u232-b09-0ubuntu1~18.04.1-b09)
OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)
In case we need to set Java's home directory, let's first determine where Java is placed after installation, and then set the environment variable accordingly.
$ sudo update-alternatives --config java
The above command helps us find the Java path; mine looks like the one below, and I will use it to set my JAVA_HOME.

There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java Nothing to configure.

Though Java is accessible from  /usr/bin/java , in case you still need to set the Java home directory, follow the below instructions.
$ sudo vim /etc/environment

Paste the installation directory (the path determined above minus the trailing /jre/bin/java) as  JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64" at the end of the file. Note that JAVA_HOME should point to the installation directory, not to the java binary itself.

$ source /etc/environment
Logout - Login to reflect the changes.
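To confirm the variable took effect after logging back in, a quick check (the path is the one determined above; yours may differ):
$ echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk-amd64
$ $JAVA_HOME/bin/java -version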
  •  Install and Configure Elasticsearch
We will start by importing the Elasticsearch PGP key and adding the Elastic APT repository; execute the below commands sequentially so that both Elasticsearch and Kibana can be installed through apt-get.
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

$ sudo apt-get install apt-transport-https

$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

$ sudo apt-get update && sudo apt-get install elasticsearch
Now let's modify the Elasticsearch config file and make some important changes before we start our Elasticsearch engine.
$ sudo vim /etc/elasticsearch/elasticsearch.yml

Uncomment “network.host” and “http.port” so the config looks like below.

 network.host: localhost
 http.port: 9200
Save the file and start Elasticsearch.
$ sudo systemctl start elasticsearch
In case you want to enable it during boot.
$ sudo systemctl enable elasticsearch
Confirm it's working using the below curl command.
$ curl -X GET "localhost:9200"
The output will look something like below.
{
  "name" : "ubuntu",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "IoQ9BAgsS2yGxir-C6tf1w",
  "version" : {
    "number" : "7.5.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "3ae9ac9a93c95bd0cdc054951cf95d88e1e18d96",
    "build_date" : "2019-12-16T22:57:37.835892Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
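Optionally, check the cluster health as well; on a single-node lab setup a yellow status is normal, since replica shards have no second node to live on:
$ curl -X GET "localhost:9200/_cluster/health?pretty"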

So we are done with the Elasticsearch installation; let's proceed to install our Kibana dashboard.

  •  Installation and configuration of Kibana Dashboard.
It's recommended to install Kibana after Elasticsearch. We have already added the Elastic repository, which contains Kibana too, so we will use apt to install it.
$ sudo apt install kibana
Open the Kibana config file and uncomment the following lines to proceed further.
$ sudo vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
   
Now we are good to start the Kibana service too.
$ sudo systemctl start kibana
In case you want to enable it during startup/boot.
$ sudo systemctl enable kibana
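Kibana can take a minute or two to come up. To verify it is responding (assuming the localhost defaults above), check the service and hit the web port; an HTTP 200, or a 302 redirect, means it is up:
$ sudo systemctl status kibana
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601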
  •  Installation and configuration of Logstash.
Logstash's purpose, in general, is to segregate multiple log streams, and it can transform them before they are sent to Elasticsearch.

Let's install and configure it to collect logs from our Filebeat agent and then send them to Elasticsearch.

We can install it using the below apt command.
$ sudo apt install logstash
Now let's configure it. We will start by creating a few files within Logstash's conf.d directory, beginning with the Filebeat input config file.
$ cd /etc/logstash/conf.d/

$ sudo vim filebeat-input.conf
Append the below lines within the file and save it.

input {
  beats {
    port => 5443
    type => syslog
  }
}
Now create a new file named syslog-filter.conf, add the below contents within the file, and save it.
This file is responsible for filtering and parsing the logs to make them suitable for ingestion into Elasticsearch's document format.
$ cd /etc/logstash/conf.d/

$ sudo vim syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Create another config file for the Elasticsearch output, which is responsible for sending the data from Logstash to Elasticsearch.
$ cd /etc/logstash/conf.d/

$ sudo vim output-elasticsearch.conf
Insert the below lines and save it.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
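Before starting the service, it is worth validating the three config files together. Logstash ships with a config-test flag; the binary path below is the default for the deb package:
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
It should report that the configuration is OK.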
So we are done with the Logstash configuration too; let's start the Logstash service.
$ sudo systemctl start logstash
In case you want to enable it during startup/boot.
$ sudo systemctl enable logstash
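Once Logstash is running, the beats input from filebeat-input.conf should be listening on port 5443. A quick way to confirm:
$ sudo ss -tlnp | grep 5443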
  •  Installation and configuration of Filebeat Agent on Client.
Elastic Stack uses lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. 

Each Beat has been developed to serve a specific purpose; some of them are listed below.
  • Filebeat: It collects and ships log files.
  • Metricbeat: It collects metrics from your systems and services.
  • Packetbeat: It collects and analyzes network data.
  • Winlogbeat: It collects Windows event logs.
  • Auditbeat: It collects Linux audit framework data and monitors file integrity.
  • Heartbeat: It monitors services for their availability with active probing.
In our current lab setup, we will use the most widely used Beat, Filebeat, to ship our log file to Logstash, from where it will be forwarded to Elasticsearch; the data can later be analyzed using Kibana.

We can install it using the below apt command.
$ sudo apt install filebeat
Let's modify its configuration file (/etc/filebeat/filebeat.yml) as per our requirements; find the below line under the filebeat.inputs section and set it to "true".
enabled: true
Since we will be sending logs to Elasticsearch via Logstash rather than directly, disable the Elasticsearch output section by commenting out the below lines.
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
Now we will enable the Logstash output section by uncommenting the below lines.
Since Logstash runs on the ELK server we built above, point Filebeat at that server's hostname or IP (elk-server in this demo) and the port we configured in the Logstash beats input (5443).
output.logstash:
  # The Logstash hosts
  hosts: ["elk-server:5443"]
Save and exit. Let's start the Filebeat service; we are then ready to ship our logs to the Elasticsearch server via Logstash, after which we can search them in the Kibana dashboards.
$ sudo systemctl start filebeat
In case you want to enable it during startup/boot.

$ sudo systemctl enable filebeat
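Filebeat ships with built-in self-tests that are handy at this point; the second command actually attempts a connection to the Logstash endpoint we configured:
$ sudo filebeat test config
$ sudo filebeat test output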
Let's explore our kibana dashboard, and we will begin with creating our indexes on it.
Open your browser and go to the Kibana server IP on port 5601, as shown below.
http://<kibana host ip>:5601


Click on "Explore my Own"

Click on Discover ( Left Panel )  then  Create Index.

Within index pattern put a string filebeat-* and click on Next Step

On next window of Step 2 , select or type @timestamp  and we are done.

Let's discover our data ingested within our newly created index, click on Discover again and we can see our data there.
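If the index pattern shows no data, you can double-check that Filebeat events actually reached Elasticsearch by querying the index directly on the ELK server (assuming the default local endpoint):
$ curl -s "localhost:9200/_cat/indices/filebeat-*?v"
$ curl -s "localhost:9200/filebeat-*/_search?size=1&pretty"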




