Logstash with AWS Elasticsearch Service.
Data can be shipped and ingested into an AWS Elasticsearch domain in multiple ways:
- Using a Kinesis Stream to ingest logs into AWS Elasticsearch.
- Using a Kinesis Firehose stream to ingest logs into AWS Elasticsearch.
- Using a Filebeat and Logstash combination to ingest logs into AWS Elasticsearch.
In this blog post we will cover how to send logs/data from an EC2 instance to an AWS managed Elasticsearch domain using Logstash.
Assumptions and Requirements:
- We already have an Elasticsearch domain created within the AWS Elasticsearch service.
- A user with an IAM role configured that has AmazonESFullAccess. The access could be more granular, but for now we assume full access to the Elasticsearch service.
- The user must have programmatic access configured, i.e. an Access Key ID and an AWS Secret Access Key.
- An EC2 instance that can have the above IAM role attached and has an appropriate security group configured to reach the Elasticsearch endpoint. The endpoint can be copied from the domain overview in the AWS console; a quick connectivity check is sketched just after this list.
- I will not explain the Logstash pipeline (input, filter, output) in detail; input and filter remain the same as usual, and we will focus on what to define in the output section to ingest data into the Elasticsearch domain.
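As a quick sanity check from the EC2 instance, you can curl the domain endpoint. The endpoint below is the demo endpoint used later in this post; replace it with your own, and note that the request only succeeds if the domain's access policy allows it:
# curl -s https://search-myekkdomain-rcridsesoz23h6svyyyju4pnmy.us-west-2.es.amazonaws.com   # replace with your own endpoint
A JSON response (even an authorization error) means the security group and network path are fine; a timeout usually points to the security group or VPC configuration.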
Installation and Configuration.
Let's proceed with the installation first. We will install two components here.
- Logstash
- Logstash-output-amazon_es plugin
Logstash can be installed directly from apt/yum or from a binary; click the official link for its guidelines, or you can follow our previous post for a complete ELK stack installation.
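For reference, here is a minimal sketch of a yum-based install on Amazon Linux, assuming the 7.x package line from the official Elastic repository (adjust the version to match your stack):
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# cat <<'EOF' > /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
# yum install -y logstash   # 7.x line assumed; adjust to your version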
The logstash-output-amazon_es plugin is mandatory to install, as without it we can't ingest data into our AWS Elasticsearch domain.
Please note, Logstash must be installed first in order to install the logstash-output-amazon_es plugin.
So head over to the command prompt and run the command below. Please locate your Logstash bin directory before running it; for Amazon Linux the default path is shown below.
# /usr/share/logstash/bin/logstash-plugin install logstash-output-amazon_es
You will get a success message upon a successful installation.
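You can also confirm the plugin was picked up by listing the installed plugins from the same bin directory:
# /usr/share/logstash/bin/logstash-plugin list | grep amazon_es
The plugin name logstash-output-amazon_es should show up in the listing.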
Now let's put the lines below within the output section of your Logstash pipeline configuration.
Replace the host, region, credentials, and index values with your own parameters.
output {
  stdout {
    codec => rubydebug
  }
  amazon_es {
    hosts => ["search-myekkdomain-rcridsesoz23h6svyyyju4pnmy.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    aws_access_key_id => 'AjkjfjkNAPE7IHGZDDZ'
    aws_secret_access_key => '3yuefiuqeoixPRyho837WYwo0eicBVZ'
    index => "your-ownelasticsearch-index-name"
  }
}
Once inserted and configured, restart the Logstash service for the changes to take effect.
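As a quick sketch, assuming the pipeline file is /etc/logstash/conf.d/logstash.conf (as in my setup below) and Logstash runs as a systemd service, you can test the configuration before restarting:
# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf   # syntax check only
# systemctl restart logstash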
Verify the ingestion in the Logstash logs, the Kibana dashboard, or the Indices section of the ES domain.
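For example, tail the Logstash log (default path for a package install) and, if your domain's access policy permits plain HTTP requests, list the indices on the demo endpoint used in this post:
# tail -f /var/log/logstash/logstash-plain.log
# curl -s "https://search-myekkdomain-rcridsesoz23h6svyyyju4pnmy.us-west-2.es.amazonaws.com/_cat/indices?v"
Your index name should appear in the list with a growing document count.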
Overall, my entire Logstash pipeline, defined in the file logstash.conf within the directory /etc/logstash/conf.d/, looks like the one below; maybe someone can take it as a reference.
Note: My demo.log contains logs generated by a Spring Boot app.
input {
  file {
    path => "/tmp/demo.log*"
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601:timestamp}*%{LOGLEVEL:level}*--- *\[%{DATA:thread}] %{JAVACLASS:class} *:%{GREEDYDATA:json_data}"
      ]
    }
  }
}
filter {
  json {
    source => "json_data"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  amazon_es {
    hosts => ["search-myekkdomain-rcridsesoz23h6svyyyju4pnmy.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    aws_access_key_id => 'AKs3IAuoisoosoweIHGZDDZ'
    aws_secret_access_key => '3d0w8bwuywbwi6IxPRyho837WYwo0eicBVZ'
    index => "your-ownelasticsearch-index-name"
  }
}
Thanks! Do comment, and I will be happy to help you.