
Logstash with AWS Elasticsearch Service


Data can be shipped and ingested into an AWS Elasticsearch domain in several ways.
  • Using a Kinesis Data Stream to ingest logs into AWS Elasticsearch.
  • Using a Kinesis Data Firehose stream to ingest logs into AWS Elasticsearch.
  • Using a Filebeat and Logstash combination to ingest logs into AWS Elasticsearch.
In this blog post we will cover how to send our logs/data from an EC2 instance to our AWS managed Elasticsearch domain using Logstash.

Assumptions and Requirements:
  1. We already have an Elasticsearch domain created within the AWS Elasticsearch service.
  2. A user with an IAM role that has AmazonESFullAccess attached; this could be made more granular, but for now we assume full access to the Elasticsearch service.
  3. The user must have programmatic access configured, i.e. an Access Key ID and an AWS Secret Access Key.
  4. An EC2 instance with the above IAM role attached and an appropriate security group configured to connect to the Elasticsearch endpoint; the CLI sketch right after this list shows one way to look up the endpoint.
  5. I will not explain the full Logstash pipeline (input, filter, output) here; input and filter remain the same, and we will focus on what to define in the output section to ingest data into the Elasticsearch domain.
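If you are unsure of the endpoint, a minimal AWS CLI sketch to look it up is below; my-es-domain is a placeholder for your own domain name, and for a VPC domain the value appears under DomainStatus.Endpoints instead.
# aws es describe-elasticsearch-domain --domain-name my-es-domain --query 'DomainStatus.Endpoint' --output text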





Installation and Configuration.

Let's proceed with the installation first; we will install two components here.
  • Logstash 
  • Logstash-output-amazon_es plugin

Logstash can be installed directly from apt/yum or from a binary; click the official link for its guidelines, or you can follow our previous post for a complete ELK stack installation.

The logstash-output-amazon_es plugin is mandatory, as without it we cannot ingest data into our AWS Elasticsearch domain.
Please note, Logstash must be installed before installing the logstash-output-amazon_es plugin.

So toggle down to the command prompt and run the command below; locate your Logstash bin directory first. For Amazon Linux, the default path is shown below.
# /usr/share/logstash/bin/logstash-plugin install logstash-output-amazon_es
You will get a success message upon a successful installation.
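As a quick sanity check, you can also list the installed plugins and confirm the amazon_es output is present; this assumes the same default Logstash path as above.
# /usr/share/logstash/bin/logstash-plugin list | grep amazon_es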

Now let's put the lines below within the output section of your Logstash pipeline configuration.

Replace the host, region, credentials, and index values with your own parameters.
output {
        stdout {codec => rubydebug
        }
        amazon_es {
                hosts => ["search-myekkdomain-rcridsesoz23h6svyyyju4pnmy.us-west-2.es.amazonaws.com"]
                region => "us-west-2"
                aws_access_key_id => 'AjkjfjkNAPE7IHGZDDZ'
                aws_secret_access_key => '3yuefiuqeoixPRyho837WYwo0eicBVZ'
                index => "your-ownelasticsearch-index-name"
    }
}

Once inserted and configured, restart the Logstash service to reflect the changes.
Verify the ingestion in the Logstash logs, the Kibana dashboard, or the Indices section of the ES domain.
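A minimal verification sketch is below; it assumes the default settings path /etc/logstash, and the endpoint and index name are placeholders that should match your output section. The curl check only works if your domain access policy allows unsigned requests from the instance; otherwise use the Indices tab in the AWS console.
# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
# systemctl restart logstash
# curl -s "https://your-es-domain-endpoint/_cat/indices?v" | grep your-ownelasticsearch-index-name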

Overall, my entire Logstash pipeline, defined in the file logstash.conf within the directory /etc/logstash/conf.d/, looks like the one below; maybe someone can take it as a reference.

Note : My demo.log contains logs generated by a Spring Boot app.
input {
  file {
    path => "/tmp/demo.log*"
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
    }
  }
}

filter {

    grok {
          match => {
            "message" => [
                  "%{TIMESTAMP_ISO8601:timestamp}*%{LOGLEVEL:level}*--- *\[%{DATA:thread}] %{JAVACLASS:class} *:%{GREEDYDATA:json_data}"
                  ]
         }
     }
}

filter {
      json {
        source => "json_data"
      }
 }
output {
        stdout {codec => rubydebug
        }
        amazon_es {
                hosts => ["search-myekkdomain-rcridsesoz23h6svyyyju4pnmy.us-west-2.es.amazonaws.com"]
                region => "us-west-2"
                aws_access_key_id => 'AKs3IAuoisoosoweIHGZDDZ'
                aws_secret_access_key => '3d0w8bwuywbwi6IxPRyho837WYwo0eicBVZ'
                index => "your-ownelasticsearch-index-name"
    }
}


Thanks, do comment; I will be happy to help you.



Logstash Installation Error Fix : Unable to install system startup script for Logstash.


If you are also facing challenges while installing Logstash version 6 or 7 and seeing the bunch of
error strings below, you are at the right place; let's fix it.

Using provided startup.options file: /etc/logstash/startup.options
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/jruby/Main : Unsupported major.minor version 52.0
        at java.lang.ClassLoader.findBootstrapClass(Native Method)
        at java.lang.ClassLoader.findBootstrapClassOrNull(ClassLoader.java:1078)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:417)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:323)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:363)
        at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
Unable to install system startup script for Logstash.
chmod: cannot access ‘/etc/default/logstash’: No such file or directory
warning: %post(logstash-1:7.5.2-1.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 1:logstash-7.5.2-1.noarch
  Verifying  : 1:logstash-7.5.2-1.noarch    

This error is mainly due to the presence of an unsupported Java version,
or there may be two versions of Java installed on your system.
Logstash version 6+ depends on Java 8+, so let's find out what is on our system and which default version is picked up by the CLI.

Run the below command to check java version.
# java -version

java version "1.7.0_231"
OpenJDK Runtime Environment (amzn-2.6.19.1.80.amzn1-x86_64 u231-b01)
OpenJDK 64-Bit Server VM (build 24.231-b01, mixed mode)
Verify that the version is 8 or higher; if not, uninstall the older version and install Java 8.

# yum remove java-1.7.0-openjdk

# yum install java-1.8.0-openjdk
Verify the default version again (a quick check is sketched below), then uninstall and reinstall Logstash.
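A quick sketch of checking and switching the default Java on Amazon Linux is below; the exact menu entries depend on the packages installed on your system.
# alternatives --config java
# java -version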
Finally, run the command below to install the startup script.
# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv




Kibana Fix : FORBIDDEN/12/index read-only / allow delete (api)


If you are also facing challenges while deleting an old index from Kibana, you are at the right place.
You too might be greeted by the same error.

Don't worry, let's see how to fix it.

Error Cause : One or more nodes in your cluster have passed the high disk watermark, which means more than 90% of the disk is full. When that happens, Elasticsearch will try to move shards away from the node to free up space, but only if it can find another node with enough space.


You need to add more disk space, either on each node or by adding more nodes to the cluster to let Elasticsearch spread the load. 
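To confirm which node crossed the watermark, the read-only checks below can help; they assume Elasticsearch is reachable on localhost:9200, as in the commands that follow.
# curl -XGET 'http://localhost:9200/_cat/allocation?v'
# curl -XGET 'http://localhost:9200/_cluster/health?pretty'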

Toggle down to your Elasticsearch server node.
Open a command prompt and use the two curl commands below to fix it.
# curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
You will get an output like below.
 {"acknowledged":true,"persistent":{},"transient":{"cluster":{"routing":{"allocation":{"disk":{"threshold_enabled":"false"}}}}}}    
 Now run the second command.

# curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
You will get an output like below.
 '{"acknowledged":true}'

Let me know if the above worked for you or not.

AWS Elasticsearch Service with Kinesis Data Stream and Kinesis Data Firehose


EKK -- ElasticSearch Kinesis Kibana


The EKK stack is a collective approach of using end-to-end AWS services to run Elasticsearch.

We will replace all the open-source products of a normal ELK stack with AWS services.

The EKK stack can manage and parse huge amounts of log data, which can then be used for analytics, troubleshooting, central monitoring, and alerting through its efficient GUI. By using AWS services to deploy the entire Elasticsearch setup, we also avoid the burden of managing infrastructure availability and scalability ourselves.

Below is the architecture we will be using, listing the AWS services used in place of the usual ELK stack components.


EKK Stack Component

    • Elasticsearch and Kibana will be replaced by Amazon Elasticsearch Service, which includes the Kibana dashboard too.
    • Logstash will be replaced by Kinesis Data Streams and Kinesis Data Firehose.
    • The Logstash client agent (Filebeat) will be replaced by the Kinesis Agent.
    You can have a look at one of my previous posts, "How to install ELK Stack", to get an overview of how the ELK stack works together with its components.

    I have created a very basic AWS CloudFormation template; I will explain it section by section, and the complete template that can be used is at the end. Let's start with the Parameters section.

    Parameters :
    # Author : Jackuna (https://github.com/Jackuna)
    # Website : www.cyberkeeda.com
    AWSTemplateFormatVersion: 2010-09-09
    Description: CloudFormation Stack to Create an AWS Managed Elastic Service using Kinesis Streaming Services.
    
    Parameters:
      LogBucketName:
        Type: String
        Description: Name of Amazon S3 bucket for log [a-z][a-z0-9]*
    
      KinesisStreamName:
        Type: String
        Description: Name of Kinesis Stream Name for log [a-z][a-z0-9]*
    
      ElasticsearchDomainName:
        Type: String
        Description: Name of Elasticsearch domain for log [a-z][a-z0-9]*
    
      ElasticsearchIndexName:
        Type: String
        Description: Name of Elasticsearch index from Kinesis Firehose [a-z][a-z0-9]*
        
      FirehoseName:
        Type: String
        Description: DeliveryStream for ES and S3 [a-z][a-z0-9]*
    Here are the parameters explained:
    • LogBucketName: The name of the S3 bucket that will be used to keep failed records and logs while ingesting data into the Elasticsearch domain from the Amazon Kinesis Firehose stream.
    • ElasticsearchDomainName: Creating an AWS Elasticsearch service starts with creating a domain, so that multiple Elasticsearch services can be identified and managed as separate domains.
    • ElasticsearchIndexName: The name of the index; it will be used later while configuring indexes on the Kibana dashboard.
    • KinesisStreamName: The name of the Kinesis data stream.
    • FirehoseName: The name of the Kinesis Firehose delivery stream.

    Resources:

    We will look into each resource one by one, and at the end I will paste the entire Resources section.

    KinesisDomainCreation
    Resources: 
      KinesisDomainCreation:
        Type: "AWS::Kinesis::Stream"
        Properties:
          Name: !Sub "${KinesisStreamName}"
          ShardCount: 5
    Here are the resources explained for "KinesisDomainCreation":
    • Type: "AWS::Kinesis::Stream" : Creates a Kinesis stream that captures and transports data records emitted from data sources.
    • Name: !Sub "${KinesisStreamName}" : The Kinesis data stream name, which will be replaced by our parameter "KinesisStreamName".
    • ShardCount: 5 : The number of shards that the stream uses. For greater provisioned throughput, increase the number of shards. A quick CLI test of the stream is sketched right after this list.
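    Once the stack is up, a quick way to confirm the data stream accepts records is a test put from the AWS CLI; this is only a sketch, the stream name is a placeholder, and your credentials must allow kinesis:PutRecord (with AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out, since --data is treated as base64 by default).
    $ aws kinesis put-record --stream-name your-kinesis-stream-name --partition-key test --data "hello from the cli"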
    ElasticsearchDomain
    This resource section is responsible for the Elasticsearch domain configuration, along with the underlying instances used for Elasticsearch.
    ElasticsearchDomain:
        Type: AWS::Elasticsearch::Domain
        Properties:
          DomainName: !Sub "${ElasticsearchDomainName}"
          ElasticsearchVersion: '6.8'
          ElasticsearchClusterConfig:
            InstanceCount: '1'
            InstanceType: t2.small.elasticsearch
          EBSOptions:
            EBSEnabled: 'true'
            Iops: 0
            VolumeSize: 10
            VolumeType: gp2
          SnapshotOptions:
            AutomatedSnapshotStartHour: '0'
          AccessPolicies:
            Version: 2012-10-17
            Statement:
            - Effect: Allow
              Principal:
                AWS: '*' # Need to be replaced with appropriate value
              Action: es:*
              Resource: '*' # Need to be replaced with appropriate value
              #Resource: !Sub "arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/*"
          AdvancedOptions:
            rest.action.multi.allow_explicit_index: 'true'
    Here are the resources explained for "ElasticsearchDomain":
    • Type: AWS::Elasticsearch::Domain : The AWS::Elasticsearch::Domain resource creates an Amazon Elasticsearch Service (Amazon ES) domain that encapsulates the Amazon ES engine instances.
    • DomainName: !Sub "${ElasticsearchDomainName}" : The Elasticsearch domain name, which will be replaced by our parameter "ElasticsearchDomainName".
    • ElasticsearchVersion: '6.8' : The Elasticsearch version.
    • ElasticsearchClusterConfig : This section contains the instance properties that will be used to create the Elasticsearch cluster.
    • EBSOptions : The volume type and its properties are defined within this section.
    • SnapshotOptions : Snapshot properties for the underlying Elasticsearch instances.
    • AccessPolicies : The access policies defined for the domain.

    ESDeliverystream
    This resource section creates the Amazon Kinesis Data Firehose delivery stream and configures it to send data to the Elasticsearch domain created above.
    ESDeliverystream:
        Type: AWS::KinesisFirehose::DeliveryStream
        DependsOn:
          - ElasticsearchDomain
          - DeliveryRole
          - DeliveryPolicy
        Properties:
          DeliveryStreamName: !Sub "${FirehoseName}"
          DeliveryStreamType: KinesisStreamAsSource
          KinesisStreamSourceConfiguration:
            KinesisStreamARN: !GetAtt KinesisDomainCreation.Arn
            RoleARN: !GetAtt DeliveryRole.Arn
          ElasticsearchDestinationConfiguration:
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CloudWatchLoggingOptions: 
                Enabled: false
            DomainARN: !GetAtt ElasticsearchDomain.DomainArn
            IndexName: !Sub "${ElasticsearchIndexName}" # Elasticsearch index names must be lowercase
            IndexRotationPeriod: "NoRotation" # NoRotation, OneHour, OneDay, OneWeek, or OneMonth.
            TypeName: "fromFirehose"
            RetryOptions:
              DurationInSeconds: 60
            RoleARN: !GetAtt DeliveryRole.Arn
            S3BackupMode: FailedDocumentsOnly
            S3Configuration:
              BucketARN: !Sub "arn:aws:s3:::${LogBucketName}"
              BufferingHints:
                IntervalInSeconds: 60
                SizeInMBs: 1
              CompressionFormat: "UNCOMPRESSED"
              RoleARN: !GetAtt DeliveryRole.Arn 
              CloudWatchLoggingOptions: 
                Enabled: true
                LogGroupName: "deliverystream"
                LogStreamName: "s3Backup"
    Here are the resources explained for "ESDeliverystream":
    • Type: AWS::KinesisFirehose::DeliveryStream : The AWS::KinesisFirehose::DeliveryStream resource creates an Amazon Kinesis Data Firehose delivery stream that delivers real-time streaming data to an Amazon Elasticsearch Service (Amazon ES) destination. Within the "Properties" section we define the Firehose delivery stream name and the stream source type, which is a Kinesis data stream.
    • DependsOn : A standard AWS CloudFormation attribute that ensures the listed resources are created before the current resource; here it ensures that the Elasticsearch domain and the IAM role and policy exist before the delivery stream is created.
    • ElasticsearchDestinationConfiguration : This section defines the delivery of Firehose data to the Elasticsearch domain created above.
    DeliveryRole and DeliveryPolicy
    This resource section creates the IAM role and policy required to read and write data across the AWS resources involved.
    DeliveryRole:
        Type: 'AWS::IAM::Role'
        Properties:
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: 'sts:AssumeRole'
                Principal:
                  Service:
                    - 'firehose.amazonaws.com'
                Condition:
                  StringEquals:
                    'sts:ExternalId' : !Ref 'AWS::AccountId'
          RoleName: "DeliveryRole"
    
      DeliveryPolicy:
        Type: 'AWS::IAM::Policy'
        Properties:
          PolicyName: "DeliveryPolicy"
          Roles:
            - !Ref "DeliveryRole"
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 's3:AbortMultipartUpload'
                  - 's3:GetBucketLocation'
                  - 's3:GetObject'
                  - 's3:ListBucket'
                  - 's3:ListBucketMultipartUploads'
                  - 's3:PutObject'
                  - 's3:PutObjectAcl'
                Resource:
                  - !Sub 'arn:aws:s3:::${LogBucketName}'
                  - !Sub 'arn:aws:s3:::${LogBucketName}/*'
              - Effect: Allow
                Action:
                  - 'es:DescribeElasticsearchDomain'
                  - 'es:DescribeElasticsearchDomains'
                  - 'es:DescribeElasticsearchDomainConfig'
                  - 'es:ESHttpPost'
                  - 'es:ESHttpPut'
                Resource:
                  - !Sub "arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}"
                  - !Sub "arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/*"
              - Effect: Allow
                Action:
                  - 'es:ESHttpGet'
                Resource:
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_all/_settings'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_cluster/stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/${ElasticsearchIndexName}*/_mapping/superstore'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_nodes'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_nodes/stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_nodes/*/stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/${ElasticsearchIndexName}*/_stats'
              - Effect: Allow
                Action:
                  - 'logs:PutLogEvents'
                Resource:
                  - !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/kinesisfirehose/:log-stream:*'
              - Effect: Allow
                Action:
                  - 'kinesis:DescribeStream'
                  - 'kinesis:GetShardIterator'
                  - 'kinesis:GetRecords'
                Resource: !Sub 'arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/%FIREHOSE_STREAM_NAME%'
              - Effect: Allow
                Action:
                  - 'kinesis:DescribeStream'
                  - 'kinesis:GetShardIterator'
                  - 'kinesis:GetRecords'
                  - 'kinesis:CreateStream'
                Resource: !Sub 'arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${KinesisStreamName}'
    
    LogBucket:

    This resource section creates the S3 bucket meant to keep failed records and logs.
    LogBucket:
        Type: 'AWS::S3::Bucket'
        Properties:
          BucketName: !Ref "LogBucketName"
          AccessControl: Private

    Once the stack is created, we need the Kinesis Agent installed on the clients that will ship logs to the AWS Kinesis data stream.

    Installation and Configuration of Kinesis Agent:

    We are using Amazon Linux here as the client to ship log data; install the agent using the command below.
    $ sudo yum install -y aws-kinesis-agent
    For RedHat/CentOS:
    $ sudo yum install -y https://s3.amazonaws.com/streaming-data-agent/aws-kinesis-agent-latest.amzn1.noarch.rpm
    Open the Kinesis Agent config file (/etc/aws-kinesis/agent.json) and edit it as per your requirement; a basic configuration is below. Since our Firehose delivery stream uses the Kinesis data stream as its source, the agent should write to the data stream using the kinesisStream key.
    {
       "flows": [
            {
                "filePattern": "/tmp/your_app.log*",
                "kinesisStream": "your-kinesis-stream-name"
            }
       ]
    }
    For more detailed configuration options, please visit the official AWS documentation.

    Save and start the agent.
    $ sudo service aws-kinesis-agent start
    There are multiple ways of preprocessing logs in the Kinesis Agent; do look into the official documentation linked above and use the one that suits your logs.
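    To verify that the agent is actually parsing and shipping records, you can tail its log file; the path below is the default for the Amazon Linux package, and the log periodically reports how many records were parsed and sent.
    $ tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log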

    Complete AWS CloudFormation Script.
    # Author : Jackuna (https://github.com/Jackuna)
    # Website : www.cyberkeeda.com
    AWSTemplateFormatVersion: 2010-09-09
    Description: CloudFormation Stack to Create an AWS Managed Elastic Service using Kinesis Streaming Services.
    
    Parameters:
      LogBucketName:
        Type: String
        Description: Name of Amazon S3 bucket for log [a-z][a-z0-9]*
    
      KinesisStreamName:
        Type: String
        Description: Name of Kinesis Stream Name for log [a-z][a-z0-9]*
    
      ElasticsearchDomainName:
        Type: String
        Description: Name of Elasticsearch domain for log [a-z][a-z0-9]*
    
      ElasticsearchIndexName:
        Type: String
        Description: Name of Elasticsearch index from Kinesis Firehose [a-z][a-z0-9]*
        
      FirehoseName:
        Type: String
        Description: DeliveryStream for ES and S3 [a-z][a-z0-9]*
    
    Resources: 
      KinesisDomainCreation:
        Type: "AWS::Kinesis::Stream"
        Properties:
          Name: !Sub "${KinesisStreamName}"
          ShardCount: 5
    
      ElasticsearchDomain:
        Type: AWS::Elasticsearch::Domain
        Properties:
          DomainName: !Sub "${ElasticsearchDomainName}"
          ElasticsearchVersion: '6.8'
          ElasticsearchClusterConfig:
            InstanceCount: '1'
            InstanceType: t2.small.elasticsearch
          EBSOptions:
            EBSEnabled: 'true'
            Iops: 0
            VolumeSize: 10
            VolumeType: gp2
          SnapshotOptions:
            AutomatedSnapshotStartHour: '0'
          AccessPolicies:
            Version: 2012-10-17
            Statement:
            - Effect: Allow
              Principal:
                AWS: '*' # Need to be replaced with appropriate value
              Action: es:*
              Resource: '*' # Need to be replaced with appropriate value
              #Resource: !Sub "arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/*"
          AdvancedOptions:
            rest.action.multi.allow_explicit_index: 'true'
    
      ESDeliverystream:
        Type: AWS::KinesisFirehose::DeliveryStream
        DependsOn:
          - ElasticsearchDomain
          - DeliveryRole
          - DeliveryPolicy
        Properties:
          DeliveryStreamName: !Sub "${FirehoseName}"
          DeliveryStreamType: KinesisStreamAsSource
          KinesisStreamSourceConfiguration:
            KinesisStreamARN: !GetAtt KinesisDomainCreation.Arn
            RoleARN: !GetAtt DeliveryRole.Arn
          ElasticsearchDestinationConfiguration:
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CloudWatchLoggingOptions: 
                Enabled: false
            DomainARN: !GetAtt ElasticsearchDomain.DomainArn
            IndexName: !Sub "${ElasticsearchIndexName}" # Elasticsearch index names must be lowercase
            IndexRotationPeriod: "NoRotation" # NoRotation, OneHour, OneDay, OneWeek, or OneMonth.
            TypeName: "fromFirehose"
            RetryOptions:
              DurationInSeconds: 60
            RoleARN: !GetAtt DeliveryRole.Arn
            S3BackupMode: FailedDocumentsOnly
            S3Configuration:
              BucketARN: !Sub "arn:aws:s3:::${LogBucketName}"
              BufferingHints:
                IntervalInSeconds: 60
                SizeInMBs: 1
              CompressionFormat: "UNCOMPRESSED"
              RoleARN: !GetAtt DeliveryRole.Arn 
              CloudWatchLoggingOptions: 
                Enabled: true
                LogGroupName: "deliverystream"
                LogStreamName: "s3Backup"
    
      DeliveryRole:
        Type: 'AWS::IAM::Role'
        Properties:
          AssumeRolePolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: 'sts:AssumeRole'
                Principal:
                  Service:
                    - 'firehose.amazonaws.com'
                Condition:
                  StringEquals:
                    'sts:ExternalId' : !Ref 'AWS::AccountId'
          RoleName: "DeliveryRole"
    
      DeliveryPolicy:
        Type: 'AWS::IAM::Policy'
        Properties:
          PolicyName: "DeliveryPolicy"
          Roles:
            - !Ref "DeliveryRole"
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 's3:AbortMultipartUpload'
                  - 's3:GetBucketLocation'
                  - 's3:GetObject'
                  - 's3:ListBucket'
                  - 's3:ListBucketMultipartUploads'
                  - 's3:PutObject'
                  - 's3:PutObjectAcl'
                Resource:
                  - !Sub 'arn:aws:s3:::${LogBucketName}'
                  - !Sub 'arn:aws:s3:::${LogBucketName}/*'
              - Effect: Allow
                Action:
                  - 'es:DescribeElasticsearchDomain'
                  - 'es:DescribeElasticsearchDomains'
                  - 'es:DescribeElasticsearchDomainConfig'
                  - 'es:ESHttpPost'
                  - 'es:ESHttpPut'
                Resource:
                  - !Sub "arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}"
                  - !Sub "arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/*"
              - Effect: Allow
                Action:
                  - 'es:ESHttpGet'
                Resource:
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_all/_settings'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_cluster/stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/${ElasticsearchIndexName}*/_mapping/superstore'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_nodes'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_nodes/stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_nodes/*/stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/_stats'
                  - !Sub 'arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}/${ElasticsearchIndexName}*/_stats'
              - Effect: Allow
                Action:
                  - 'logs:PutLogEvents'
                Resource:
                  - !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/kinesisfirehose/:log-stream:*'
              - Effect: Allow
                Action:
                  - 'kinesis:DescribeStream'
                  - 'kinesis:GetShardIterator'
                  - 'kinesis:GetRecords'
                Resource: !Sub 'arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/%FIREHOSE_STREAM_NAME%'
              - Effect: Allow
                Action:
                  - 'kinesis:DescribeStream'
                  - 'kinesis:GetShardIterator'
                  - 'kinesis:GetRecords'
                  - 'kinesis:CreateStream'
                Resource: !Sub 'arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:stream/${KinesisStreamName}'
    
      LogBucket:
        Type: 'AWS::S3::Bucket'
        Properties:
          BucketName: !Ref "LogBucketName"
          AccessControl: Private
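
    To launch the stack from the CLI, a hedged sketch is below; the template file name ekk-stack.yml, the stack name, and all parameter values are placeholders (the S3 bucket name must be globally unique), and CAPABILITY_NAMED_IAM is required because the template sets an explicit IAM RoleName.
    $ aws cloudformation deploy --template-file ekk-stack.yml --stack-name ekk-demo-stack \
        --parameter-overrides LogBucketName=my-ekk-log-bucket KinesisStreamName=my-ekk-stream \
        ElasticsearchDomainName=myekkdomain ElasticsearchIndexName=demologs FirehoseName=my-ekk-firehose \
        --capabilities CAPABILITY_NAMED_IAM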
    
    
    Do comment; I will be happy to help.


    ELK Stack - LogStash and Filebeat with SSL

    The ELK stack can manage and parse huge amounts of log data, which can then be used for analytics, troubleshooting, central monitoring, and alerting through its efficient GUI.

    In this tutorial we will see how to use SSL while transferring data between the Beats client and the Logstash log aggregator; you can follow the entire ELK stack setup in my previous post, "How to install ELK Stack".

    We will cover only the additional setup required for SSL on Logstash and Filebeat; let's begin with the Logstash server.

    Connect to the Logstash server and toggle to the Logstash root directory.
    Create an ssl directory within it (lowercase, since the certificate paths below reference /etc/logstash/ssl).
    $ cd /etc/logstash/

    $ sudo mkdir ssl

    Now we will generate a self-signed SSL certificate to use further; run the command below from /etc/logstash.
    * Replace demo-elk-server with the FQDN of the host where Logstash is installed.
    $ sudo openssl req -subj '/CN=demo-elk-server/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt
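    You can optionally inspect the generated certificate to confirm the CN and validity period; a minimal check is sketched below.
    $ sudo openssl x509 -in /etc/logstash/ssl/logstash-forwarder.crt -noout -subject -dates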
    
    Edit the Filebeat input configuration file that was created to receive incoming logs from the Filebeat agents installed on the clients.

    My config file is named filebeat-input.conf and is placed within the directory /etc/logstash/conf.d/.

    Add the SSL certificate and key paths to the config and save the file.
    vim /etc/logstash/conf.d/filebeat-input.conf
    input {
      beats {
        port => 5443
        type => syslog
        ssl => true
        ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
        ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
      }
    }
    We have to restart the Logstash service to reflect the changes.
    That completes the Logstash part; now let's move on to the clients for the Filebeat SSL configuration.

    Let's edit the filebeat.yml file, append the additional SSL line along with the server certificate path, and save it.

    vim /etc/filebeat/filebeat.yml
    output.logstash:
      # The Logstash hosts
      hosts: ["elk-server:5443"]
      ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
    Now we have to copy the certificate logstash-forwarder.crt from the Logstash server and place it in the directory /etc/filebeat/.

    Either SCP the file (a sketch follows below), or create a new file named logstash-forwarder.crt within the Filebeat client configuration folder and copy-paste the contents of the cert file into it.
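    A hedged scp sketch is below; the user name and host are placeholders for your own values, and it assumes the client can reach the Logstash server over SSH.
    $ sudo scp user@demo-elk-server:/etc/logstash/ssl/logstash-forwarder.crt /etc/filebeat/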

    We have to restart the Filebeat service on the client to reflect the changes.
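    Filebeat's built-in connectivity test can confirm the TLS connection to Logstash before you rely on it; this is a sketch assuming Filebeat 6+ where the test subcommand is available.
    $ sudo filebeat test config
    $ sudo filebeat test output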

    So we are done with the SSL config; in case you face any difficulties, do comment on the post.

    How to install E-Elasticsearch L-Logstash K-Kibana Stack on Ubuntu Linux

    ELK


    ELK Stack is one of the most popular open-source log management applications.

    It is a collection of open-source products including Elasticsearch, Logstash, and Kibana. 

    All these three products are developed, managed and maintained by an organization named Elastic.


    The ELK stack can manage and parse huge amounts of log data, which can then be used for analytics, troubleshooting, central monitoring, and alerting through its efficient GUI.

    • Elasticsearch is a JSON-based search and analytics engine intended for horizontal scalability and easier management.
    • Logstash is a server-side data processing pipeline that can collect data from several sources concurrently, transform it, and then send it to your desired stash.
    • Kibana is used to visualize your data and navigate the Elastic Stack.

    I think we now have some idea of the components that will be used to build the entire stack.
    Let's see how to install, configure, and use it on Ubuntu.

    Things to note
    • For now we will be installing Elasticsearch and Kibana on the same server.
    • To forward logs, we will install the Filebeat agent on one of the Linux servers.
    • We will forward syslogs in this demo.

    Installation.

    •  Install Java
    OpenJDK 8 is available in the default Ubuntu APT repositories; simply install Java 8 on an Ubuntu system using the commands below.
    $ sudo apt update
    $ sudo apt install openjdk-8-jdk openjdk-8-jre
    Check Version.
    $ java -version
    openjdk version "1.8.0_232"
    OpenJDK Runtime Environment (build 1.8.0_232-8u232-b09-0ubuntu1~18.04.1-b09)
    OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)
    In case we need to set Java's home directory, let's first determine where Java is placed after installation, then we will set the environment variable accordingly.
    $ sudo update-alternatives --config java
    The above command will help us find the Java path; mine looks like the one below, and I will use it to set my JAVA_HOME.

    There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java Nothing to configure.

    Though Java is accessible from /usr/bin/java, in case you still need to set the Java home directory, follow the instructions below.
    $ sudo vim /etc/environment
    
    Paste the determined path (without the trailing /bin/java) as JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/jre" at the end of the file; JAVA_HOME should point to the JRE/JDK directory, not the java binary itself.
    
    $ source /etc/environment
    Logout - Login to reflect the changes.
    •  Install and Configure ElasticSearch 
    We will start the installation by importing the Elasticsearch PGP key and adding the Elastic repository; execute the commands below sequentially so that Elasticsearch and Kibana can be installed through apt-get.
    $ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    
    $ sudo apt-get install apt-transport-https
    
    $ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
    
    $ sudo apt-get update && sudo apt-get install elasticsearch
    
    Now let's modify the Elasticsearch config file and make some important changes before we start our Elasticsearch engine.
    $ sudo vim /etc/elasticsearch/elasticsearch.yml
    
    Uncomment "network.host" and "http.port" so that the config looks like the snippet below.
    
    
     network.host: localhost
     http.port: 9200
    
    Save the file and start Elasticsearch.
    $ sudo systemctl start elasticsearch
    In case you want to enable it during boot:
    $ sudo systemctl enable elasticsearch
    Confirm it's working using the curl command below.
    $ curl -X GET "localhost:9200"
    The output will look something like this:
    {
      "name" : "ubuntu",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "IoQ9BAgsS2yGxir-C6tf1w",
      "version" : {
        "number" : "7.5.1",
        "build_flavor" : "default",
        "build_type" : "deb",
        "build_hash" : "3ae9ac9a93c95bd0cdc054951cf95d88e1e18d96",
        "build_date" : "2019-12-16T22:57:37.835892Z",
        "build_snapshot" : false,
        "lucene_version" : "8.3.0",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
    
    
    So we are done with the Elasticsearch installation; let's proceed to install our Kibana dashboard.

    •  Installation and configuration of Kibana Dashboard.
    It's recommended to install Kibana after Elasticsearch. We have already added the Elastic repository, which contains Kibana too, so we will use apt to install it.
    $ sudo apt install kibana
    Uncomment the following lines in /etc/kibana/kibana.yml to proceed further.
    server.port: 5601
    server.host: "localhost"
    elasticsearch.hosts: ["http://localhost:9200"]
       
    So we are good to start the Kibana service too.
    $ sudo systemctl start kibana
    In case you want to enable it during startup/boot.
    $ sudo systemctl enable kibana
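    To confirm Kibana is up before opening the browser, you can query its status API from the same host; a minimal check is sketched below, and Kibana can take a minute after startup before it responds.
    $ curl -s http://localhost:5601/api/status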
    •  Installation and configuration of Logstash.
    Logstash's general purpose is to segregate multiple logs, and it can be used for transformation before the data is sent to Elasticsearch.

    Let's install and configure it to collect logs from our Filebeat agent and then send them to Elasticsearch.

    We can install it using the apt command below.
    $ sudo apt install logstash
    Now let's configure it; we will start by creating a few files within Logstash's conf.d directory.
    We will begin with the Filebeat input config file.
    $ cd /etc/logstash/conf.d/
    
    $ sudo vim filebeat-input.conf
    
    Append the below lines within the file and save it.

    input {
      beats {
        port => 5443
        type => syslog
      }
    }
    Now create a new file named syslog-filter.conf and add the contents below to it, then save it.
    This file is responsible for filtering and parsing the logs to make them suitable for ingestion into the Elasticsearch document format.
    $ cd /etc/logstash/conf.d/
    
    $ sudo vim syslog-filter.conf
    
    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
    
    Create another config file for the Elasticsearch output, which will be responsible for ingesting data from Logstash into Elasticsearch.
    $ cd /etc/logstash/conf.d/
    
    $ sudo vim output-elasticsearch.conf
    
    Insert the below lines and save it.
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
    So we are done with the Logstash configuration too; let's start the Logstash service.
    $ sudo systemctl start logstash
    In case you want to enable it during startup/boot.
    $ sudo systemctl enable logstash
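    Once Logstash is running, you can confirm that the Beats input is listening on port 5443 (the port defined in filebeat-input.conf above); a quick check is sketched below, keeping in mind Logstash may take a minute to open the port after startup.
    $ sudo ss -tlnp | grep 5443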
    •  Installation and configuration of Filebeat Agent on Client.
    Elastic Stack uses lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. 

    Each Beat has been developed to serve a specific purpose; some of them are listed below.
    • Filebeat: It collects and ships log files.
    • Metricbeat: It collects metrics from your systems and services.
    • Packetbeat: It collects and analyzes network data.
    • Winlogbeat: It collects Windows event logs.
    • Auditbeat: It collects Linux audit framework data and monitors file integrity.
    • Heartbeat: It monitors services for their availability with active probing.
    In our current lab setup, we will use the most widely used Beat, Filebeat, to parse and ship our log file to Logstash; from there it will be forwarded to Elasticsearch, and the data can later be analyzed using Kibana.

    We can install it using the apt command below.
    $ sudo apt install filebeat
    Let's modify its configuration file (/etc/filebeat/filebeat.yml) as per our requirements; find the line below and set it to "true".
    enabled: true
    Now, as we will be sending logs to Elasticsearch via Logstash and not directly to Elasticsearch, we disable the output section meant for Elasticsearch by commenting out the lines below.
    #output.elasticsearch:
      # Array of hosts to connect to.
      # hosts: ["localhost:9200"]
    Now we will enable the Logstash output section by uncommenting the lines below.
    Since Logstash and Elasticsearch are both installed on the same host, the hosts entry points to that ELK server; replace "elk-server" with your Logstash server's IP or hostname.
    output.logstash:
      # The Logstash hosts
      hosts: ["elk-server:5443"]
    Save and exit, then start the Filebeat service; we are now ready to ship our logs to the Elasticsearch server via Logstash, after which we can search them in the Kibana dashboards.
    $ sudo systemctl start filebeat
    In case you want to enable it during startup/boot.

    $ sudo systemctl enable filebeat
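    Back on the ELK server, a quick way to confirm documents are arriving is to list the Elasticsearch indices and look for a filebeat-* index; this assumes the index pattern configured in output-elasticsearch.conf above.
    $ curl -s 'http://localhost:9200/_cat/indices?v' | grep filebeat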
    Let's explore our Kibana dashboard; we will begin by creating our index pattern on it.
    Open your browser and navigate to the Kibana server IP on port 5601, as shown below.
    http://<kibana host ip>:5601


    Click on "Explore my Own"

    Click on Discover (left panel), then Create Index Pattern.

    Within the index pattern field, put the string filebeat-* and click Next Step.

    In the next window (Step 2), select or type @timestamp, and we are done.

    Let's discover the data ingested into our newly created index: click on Discover again and we can see our data there.




