
AWS - How to extend a Windows drive volume from its existing size to a larger size

How to extend an EBS volume attached to a Windows Server


To be honest with you, I'm not a Windows guy; even for small things I need to Google, and even when I have already done a task on a Windows server, by the time I'm asked to do it again I have usually forgotten it, simply because of how infrequently I get Windows work.

So why not draft a blog post and learn how to do it once and for all?
In this blog post I will cover:
  • How to extend the Windows root EBS device volume.
  • How to extend an additional attached EBS volume.

Lab setup details:
  1. We already have an EC2 instance with Windows Server installed on it.
  2. We already have a root volume (Disk 0) attached, of size 30 GB.
  3. We have made two additional disk partitions, as D and E drives.
  4. We already have an additional EBS volume (Disk 1) mounted with the partition name DATA.
  5. We are assuming no unallocated space is present.
How to extend the Windows root EBS device volume

Final goal: We will add 3 GB of additional disk space to our root EBS volume (/dev/sda1) and extend our D: drive partition from 5 GB to 8 GB.

  • Go to the AWS Console and select your desired Windows Server EC2 instance.
  • Under Description, find the block device (/dev/sda1), click it, note the EBS volume ID from the popup window, and select it.
  • It will redirect you to the EBS Volumes window; confirm that the volume ID matches the one noted in the step above, and confirm the existing size too.


  • Once confirmed, we are ready to modify the volume size from 30 GB to 33 GB.
  • Select the volume, right-click on it, choose Modify Volume, and change the size from 30 to 33, as we want to increase it by 3 GB. (The same resize can also be requested from the AWS CLI, as sketched after this list.)
  • Confirm and submit, then watch the state until it becomes available again after optimizing.
  • Once completed, we can log in to our Windows EC2 instance and follow the next steps.
  • Open Run --> type "diskmgmt.msc" --> Action --> Rescan Disks
  • A new Unallocated space of 3 GB can be found.
  • Now we are ready to extend our D: drive from 5 GB to 8 GB.
  • Right-click the D: volume --> Extend Volume --> Next --> the 3 GB must appear in the Selected panel --> Finish (a diskpart equivalent is sketched at the end of this post)
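For those who prefer a terminal over the console, the resize in the steps above can also be requested and watched with the AWS CLI. This is a minimal sketch; vol-0abc1234example is a placeholder for the volume ID you noted earlier, and it assumes the CLI is configured with permissions to modify volumes.

# request the resize from 30 GB to 33 GB
aws ec2 modify-volume --volume-id vol-0abc1234example --size 33

# poll until the modification state moves through optimizing to completed
aws ec2 describe-volumes-modifications --volume-ids vol-0abc1234example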

We can perform the same steps with our existing additional attached disk volumes; just identify the EBS volume ID and follow the same procedure.
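If you would rather not click through Disk Management, the Extend Volume steps above can also be done with diskpart from an elevated command prompt. This is only a sketch; the volume number 2 here is hypothetical, so pick the right number from the list volume output before extending.

diskpart
rem pick up the newly added unallocated space
rescan
rem find the volume number of the D: drive
list volume
select volume 2
rem grow the volume into the adjacent unallocated space
extend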


AWS Kinesis Agent configuration to process and parse multi-line logs


Within the Kinesis Agent configuration, in order to preprocess data/logs before they are sent to a Kinesis stream or directly to Firehose, we can use its dataProcessingOptions configuration settings.

Below are the three configuration options available for now.
  • SINGLELINE
  • CSVTOJSON
  • LOGTOJSON
The Kinesis agent already understands many standard data/log formats, such as Apache logs, which need no preprocessing; but in many cases, like Wildfly logs, custom data, or stack traces that are not predefined, we need to use the Kinesis Agent's dataProcessingOptions to parse them into a JSON value.

So here we will use the "SINGLELINE" and "CSVTOJSON" options to parse our custom logs.

Here is what our sample log looks like; it contains three complete log entries, and the third one spans multiple lines.
14:36:21,753 | INFO  --- xyzignorelogs-1842 | orrelationIdGeneratorInterceptor | 1135 - com.xyz.ppp.def-orchestration-api-impl-bundle-v1 - 0.0.2211 | Execution started in CorrelationIdGeneratorInterceptor : somerandomnumber09897w9w7w
14:36:21,753 | INFO  --- xyzignorelogs-1842 | LogInInterceptor                 | 1135 - com.xyz.ppp.def-orchestration-api-impl-bundle-v1 - 0.0.2211 | Execution started in LogInInterceptor : somerandomnumber09897w9w7w
14:36:21,759 | INFO  ---  xyzignorelogs-1842 | Util                             | 1134 - com.xyz.ppp.def-orchestration- 0.0.2211 | {
  "consumer-workflow-DATA" : {
    "correlationId" : "somerandomnumber09897w9w7w",
    "flowName" : "carate",
    "Url" : "www.cyberkeeda.com",
    "Request/Response Type" : "tracecustomer",
    "API request" : {"fromAddress": "LA", "country" : "US", "DOB": "19-12-1977"}
  }
}
Before we do anything, we will tell the agent to read our logs as multi-line records, and for this we need to define the start of each record with a regex pattern.

The Kinesis agent will only begin a new record when it finds that pattern again.
From the logs above we can clearly see that every new log entry starts with a time string like these:
14:36:21,753 | INFO  
14:36:21,753 | INFO
14:36:21,759 | INFO 
    
So, to process the above log file into JSON format before it is sent to the stream, we will use the configuration below.
{
    "flows": [
        {
            "filePattern": "/tmp/app.log*",
            "kinesisStream": "myapplogstream",
            "multiLineStartPattern": "^[0-9]{2}-[0-9]{2}-[0-9]{2}",
            "dataProcessingOptions": [
                {
                    "optionName": "SINGLELINE"
                },
                {
                    "optionName": "CSVTOJSON",
                    "customFieldNames": [ "timeframe", "message" ],
                    "delimiter": "---"
                }
            ]

        }
    ]
}
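On an Amazon Linux host the agent reads this flow configuration from /etc/aws-kinesis/agent.json and must be restarted to pick up changes. A minimal sketch, assuming the agent was installed from the aws-kinesis-agent package:

# edit the agent configuration and paste the flow above
sudo vi /etc/aws-kinesis/agent.json

# restart the agent so the new flow takes effect
sudo service aws-kinesis-agent restart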

    
We have defined the regex "^[0-9]{2}:[0-9]{2}:[0-9]{2}" to match those timestamps and let the Kinesis agent know where each multi-line record starts.
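A quick way to sanity-check the pattern before starting the agent is to grep for it; every matching line is a point where the agent will begin a new record. This assumes the log sits at /tmp/app.log, as in the flow above.

grep -E '^[0-9]{2}:[0-9]{2}:[0-9]{2}' /tmp/app.log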

Further, SINGLELINE collapses the whole multi-line record into a single line, and CSVTOJSON then divides that line based on a delimiter.
Here we are using "---", so overall we are dividing the line into two parts, which are then separated by a comma (',') to make it a CSV value.
"customFieldNames": [ "timeframe", "message" ],
Since we are breaking the line into a CSV value with only two fields, we name those fields "timeframe" and "message".
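To see roughly what this SINGLELINE plus CSVTOJSON combination produces, here is a simulation of the delimiter split with awk, using a shortened sample line. This only illustrates the splitting logic; it is not the agent's actual code path.

echo '14:36:21,753 | INFO  --- xyzignorelogs-1842 | LogInInterceptor | Execution started' | \
  awk -F'---' '{printf "{\"timeframe\":\"%s\",\"message\":\"%s\"}\n", $1, $2}'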

Thus the final processed records sent to the stream will look like the ones below; note that in the third record the embedded quotes of the JSON payload are escaped so the result stays valid JSON.

{"timeframe": "14:36:21,753 | INFO", "message": "xyzignorelogs-1842 | orrelationIdGeneratorInterceptor | 1135 - com.xyz.ppp.def-orchestration-api-impl-bundle-v1 - 0.0.2211 | Execution started in CorrelationIdGeneratorInterceptor : somerandomnumber09897w9w7w"}

{"timeframe": "14:36:21,753 | INFO", "message": "xyzignorelogs-1842 | LogInInterceptor | 1135 - com.xyz.ppp.def-orchestration-api-impl-bundle-v1 - 0.0.2211 | Execution started in LogInInterceptor : somerandomnumber09897w9w7w"}

{"timeframe": "14:36:21,759 | INFO", "message": "xyzignorelogs-1842 | Util | 1134 - com.xyz.ppp.def-orchestration- 0.0.2211 | { \"consumer-workflow-DATA\" : { \"correlationId\" : \"somerandomnumber09897w9w7w\", \"flowName\" : \"carate\", \"Url\" : \"www.cyberkeeda.com\", \"Request/Response Type\" : \"tracecustomer\", \"API request\" : {\"fromAddress\": \"LA\", \"country\" : \"US\", \"DOB\": \"19-12-1977\"} } }"}

    
Do let me know whether it works for you or not.


Fix - Amazon recovery mount: XFS filesystem has duplicate UUID problem (XFS: Filesystem sdc1 has duplicate UUID - can't mount)


While mounting my recovery volume on Amazon Web Services, I ran into this error, even though I was using all the right commands to mount the XFS filesystem directory.

Finally, after checking the dmesg log, I worked out that the volume carries the same UUID as the original; it is a backup clone, after all.
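The quickest way to spot the problem is to filter dmesg for XFS messages; the duplicate-UUID line from the title of this post shows up there:

# dmesg | grep -i xfs
XFS: Filesystem sdc1 has duplicate UUID - can't mount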

To fix this, I googled the command to mount an XFS filesystem while ignoring its UUID:

# mount -o nouuid /dev/sdc1 /mnt/recovery
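Note that nouuid is only a per-mount workaround. If the clone is going to stay attached, you can instead give it a fresh UUID once with xfs_admin (part of xfsprogs) so it mounts normally afterwards; make sure the filesystem is unmounted first.

# umount /mnt/recovery
# xfs_admin -U generate /dev/sdc1
# mount /dev/sdc1 /mnt/recovery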