Unify log aggregation and analytics across compute platforms


Our customers want to ensure that their end users have the best experience running their applications on AWS. To make this happen, you need to monitor and fix software problems as quickly as possible. Doing this gets challenging with the growing volume of data that needs to be quickly detected, analyzed, and stored. In this post, we walk you through an automated process to aggregate and monitor logging-application data in near-real time, so you can remediate application issues faster.

This post shows how to unify and centralize logs across different computing platforms. With this solution, you can unify logs from Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Kinesis Data Firehose, and AWS Lambda using agents, log routers, and extensions. We use Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) with OpenSearch Dashboards to visualize and analyze the logs collected across the different computing platforms and get application insights. You can deploy the solution using the AWS Cloud Development Kit (AWS CDK) scripts provided as part of the solution.

Customer benefits

A unified aggregated log system provides the following benefits:

  • A single point of access to all the logs across different computing platforms
  • Help defining and standardizing the transformations of logs before they get delivered to downstream systems like Amazon Simple Storage Service (Amazon S3), Amazon OpenSearch Service, Amazon Redshift, and other services
  • The ability to use Amazon OpenSearch Service to quickly index logs, and OpenSearch Dashboards to search and visualize logs from routers, applications, and other devices

Solution overview

In this post, we use the following services to demonstrate log aggregation across different compute platforms:

  • Amazon EC2 – A web service that provides secure, resizable compute capacity in the cloud. It's designed to make web-scale cloud computing easier for developers.
  • Amazon ECS – A web service that makes it easy to run, scale, and manage Docker containers on AWS, designed to make the Docker experience easier for developers.
  • Amazon EKS – A managed service that makes it easy to run Kubernetes on AWS without needing to operate your own Kubernetes control plane.
  • Kinesis Data Firehose – A fully managed service that makes it easy to stream data to Amazon S3, Amazon Redshift, or Amazon OpenSearch Service.
  • Lambda – A compute service that lets you run code without provisioning or managing servers. It's designed to make web-scale cloud computing easier for developers.
  • Amazon OpenSearch Service – A fully managed service that makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more.

The following diagram shows the architecture of our solution.

The architecture uses various log aggregation tools, such as log agents, log routers, and Lambda extensions, to collect logs from multiple compute platforms and deliver them to Kinesis Data Firehose. Kinesis Data Firehose streams the logs to Amazon OpenSearch Service. Log records that fail to get persisted in Amazon OpenSearch Service are written to Amazon S3. To scale this architecture, each of these compute platforms streams its logs to a different Firehose delivery stream, added as a separate index and rotated every 24 hours.

The following sections demonstrate how the solution is implemented on each of these computing platforms.

Amazon EC2

The Kinesis agent collects and streams logs from the applications running on EC2 instances to Kinesis Data Firehose. The agent is a standalone Java software application that offers an easy way to collect and send data to Kinesis Data Firehose. The agent continuously monitors files and sends logs to the Firehose delivery stream.

BDB-1742-Ec2

The AWS CDK script provided as part of this solution deploys a simple PHP application that generates logs under the /etc/httpd/logs directory on the EC2 instance. The Kinesis agent is configured via /etc/aws-kinesis/agent.json to collect data from access_logs and error_logs, and stream them periodically to Kinesis Data Firehose (ec2-logs-delivery-stream).
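As a sketch of what that configuration could look like (the endpoint Region and file patterns are illustrative, not necessarily what the CDK script generates), an agent.json that tails both log files into the delivery stream might be:

```json
{
  "firehose.endpoint": "firehose.us-east-1.amazonaws.com",
  "flows": [
    {
      "filePattern": "/etc/httpd/logs/access_log*",
      "deliveryStream": "ec2-logs-delivery-stream"
    },
    {
      "filePattern": "/etc/httpd/logs/error_log*",
      "deliveryStream": "ec2-logs-delivery-stream"
    }
  ]
}
```

Each entry in flows maps a file pattern to a delivery stream; the agent checkpoints its position so restarts don't resend data.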

Because Amazon OpenSearch Service expects data in JSON format, you can add a call to a Lambda function that transforms the log data to JSON format within Kinesis Data Firehose before streaming it to Amazon OpenSearch Service. The following is a sample input for the data transformer:

46.99.153.40 - - [29/Jul/2021:15:32:33 +0000] "GET / HTTP/1.1" 200 173 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"

The following is our output:

{
    "logs": "46.99.153.40 - - [29/Jul/2021:15:32:33 +0000] \"GET / HTTP/1.1\" 200 173 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36\""
}

We can enhance the Lambda function to extract the timestamp, HTTP, and browser information from the log data, and store them as separate attributes in the JSON document.
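A minimal sketch of such an enhanced transformer (the field names are illustrative, not the solution's actual schema) could parse the Apache access-log line into separate JSON attributes, falling back to the raw line when parsing fails:

```python
import json
import re

# Illustrative parser for Apache combined-log-format lines.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def transform(line):
    """Return a JSON-ready dict; fall back to the raw line if parsing fails."""
    match = LOG_PATTERN.match(line)
    if not match:
        return {"logs": line}
    doc = match.groupdict()
    doc["status"] = int(doc["status"])
    return doc

sample = ('46.99.153.40 - - [29/Jul/2021:15:32:33 +0000] "GET / HTTP/1.1" '
          '200 173 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64)"')
print(json.dumps(transform(sample), indent=2))
```

With the fields split out like this, OpenSearch can index status codes and user agents individually instead of treating the whole line as one string.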

Amazon ECS

In the case of Amazon ECS, we use FireLens to send logs directly to Kinesis Data Firehose. FireLens is a container log router for Amazon ECS and AWS Fargate that gives you the extensibility to use the breadth of services at AWS or partner solutions for log analytics and storage.

BDB-1742-ECS

The architecture hosts FireLens as a sidecar, which collects logs from the main container running an httpd application and sends them to Kinesis Data Firehose, which streams them to Amazon OpenSearch Service. The AWS CDK script provided as part of this solution deploys an httpd container hosted behind an Application Load Balancer. The httpd logs are pushed to Kinesis Data Firehose (ecs-logs-delivery-stream) through the FireLens log router.
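As a sketch (the Region is illustrative, and the exact options the CDK script sets may differ), the application container in the ECS task definition would carry a log configuration along these lines, pointing the awsfirelens log driver at Fluent Bit's firehose output plugin:

```json
"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "firehose",
        "region": "us-east-1",
        "delivery_stream": "ecs-logs-delivery-stream"
    }
}
```

The FireLens sidecar reads these options and forwards the container's stdout/stderr to the named delivery stream.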

Amazon EKS

With the recent announcement of Fluent Bit support for Amazon EKS, you no longer need to run a sidecar to route container logs from Amazon EKS pods running on Fargate. With the new built-in logging support, you can select a destination of your choice to send the records to. Amazon EKS on Fargate uses a version of Fluent Bit for AWS, an upstream conformant distribution of Fluent Bit managed by AWS.

BDB-1742-EKS

The AWS CDK script provided as part of this solution deploys an NGINX container hosted behind an internal Application Load Balancer. The NGINX container logs are pushed to Kinesis Data Firehose (eks-logs-delivery-stream) through the Fluent Bit plugin.
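For reference, built-in Fargate logging is configured through a ConfigMap named aws-logging in the aws-observability namespace. A sketch pointing Fluent Bit's kinesis_firehose output at this solution's delivery stream (the Region is illustrative) might look like:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name kinesis_firehose
        Match *
        region us-east-1
        delivery_stream eks-logs-delivery-stream
```

Fargate picks this ConfigMap up automatically for pods in profiles with logging enabled, so no per-pod sidecar is needed.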

Lambda

For Lambda functions, you can send logs directly to Kinesis Data Firehose using the Lambda extension. You can deny the function permission to write the records to Amazon CloudWatch.

BDB-1742-Lambda

After deployment, the workflow is as follows:

  1. On startup, the extension subscribes to receive logs for the platform and function events. A local HTTP server is started inside the external extension, which receives the logs.
  2. The extension buffers the log events in a synchronized queue and writes them to Kinesis Data Firehose via PUT records.
  3. The logs are sent to downstream systems.
  4. The logs are sent to Amazon OpenSearch Service.

The Firehose delivery stream name gets specified as an environment variable (AWS_KINESIS_STREAM_NAME).
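The buffering step can be sketched as follows (names like drain_batch are hypothetical, and the real extension is not necessarily written in Python): events land on a thread-safe queue, and a writer drains them into payloads shaped like Firehose PutRecordBatch input.

```python
import json
import os
import queue

# Stream name comes from the environment, as in the solution; the default
# here is only a placeholder for local experimentation.
STREAM_NAME = os.environ.get("AWS_KINESIS_STREAM_NAME",
                             "lambda-logs-delivery-stream")
events = queue.Queue()  # synchronized queue of buffered log events

def drain_batch(max_records=500):
    """Build one PutRecordBatch-style payload from the queued events."""
    records = []
    while len(records) < max_records:
        try:
            event = events.get_nowait()
        except queue.Empty:
            break
        # Newline-delimited JSON keeps records separable downstream.
        records.append({"Data": (json.dumps(event) + "\n").encode()})
    return {"DeliveryStreamName": STREAM_NAME, "Records": records}

events.put({"time": "2021-07-29T19:54:09.096Z", "type": "function",
            "record": "INFO value1 = value1"})
batch = drain_batch()
print(len(batch["Records"]))
```

In the real extension, the resulting payload would be handed to the Firehose PutRecordBatch API; the queue decouples the HTTP log receiver from the network writes.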

For this solution, because we're only focusing on collecting the run logs of the Lambda function, the data transformer of the Kinesis Data Firehose delivery stream keeps only the records of type function ("type":"function") before sending them to Amazon OpenSearch Service.

The following is a sample input for the data transformer:

[
   {
      "time":"2021-07-29T19:54:08.949Z",
      "type":"platform.start",
      "record":{
         "requestId":"024ae572-72c7-44e0-90f5-3f002a1df3f2",
         "version":"$LATEST"
      }
   },
   {
      "time":"2021-07-29T19:54:09.094Z",
      "type":"platform.logsSubscription",
      "record":{
         "name":"kinesisfirehose-logs-extension-demo",
         "state":"Subscribed",
         "types":[
            "platform",
            "function"
         ]
      }
   },
   {
      "time":"2021-07-29T19:54:09.096Z",
      "type":"function",
      "record":"2021-07-29T19:54:09.094Z\tundefined\tINFO\tLoading function\n"
   },
   {
      "time":"2021-07-29T19:54:09.096Z",
      "type":"platform.extension",
      "record":{
         "name":"kinesisfirehose-logs-extension-demo",
         "state":"Ready",
         "events":[
            "INVOKE",
            "SHUTDOWN"
         ]
      }
   },
   {
      "time":"2021-07-29T19:54:09.097Z",
      "type":"function",
      "record":"2021-07-29T19:54:09.097Z\t024ae572-72c7-44e0-90f5-3f002a1df3f2\tINFO\tvalue1 = value1\n"
   },
   {
      "time":"2021-07-29T19:54:09.098Z",
      "type":"platform.runtimeDone",
      "record":{
         "requestId":"024ae572-72c7-44e0-90f5-3f002a1df3f2",
         "status":"success"
      }
   }
]
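The filtering step can be sketched as follows (a minimal illustration, not the solution's actual transformer code): given a batch like the sample above, keep only the entries whose "type" is "function" and drop the platform events.

```python
import json

def filter_function_logs(batch):
    """Keep only Lambda run-log entries, dropping platform events."""
    return [entry for entry in batch if entry.get("type") == "function"]

# Abbreviated version of the sample input above.
batch = [
    {"time": "2021-07-29T19:54:08.949Z", "type": "platform.start",
     "record": {"requestId": "024ae572-72c7-44e0-90f5-3f002a1df3f2"}},
    {"time": "2021-07-29T19:54:09.096Z", "type": "function",
     "record": "2021-07-29T19:54:09.094Z\tundefined\tINFO\tLoading function\n"},
    {"time": "2021-07-29T19:54:09.098Z", "type": "platform.runtimeDone",
     "record": {"status": "success"}},
]
print(json.dumps(filter_function_logs(batch), indent=2))
```

Only the middle entry survives, so Amazon OpenSearch Service indexes just the function's own output.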

Prerequisites

To implement this solution, you need the following prerequisites:

Build the code

Check out the AWS CDK code by running the following commands:

mkdir unified-logs && cd unified-logs
git clone https://github.com/aws-samples/unified-log-aggregation-and-analytics .

Build the Lambda extension by running the following commands:

cd lib/computes/lambda/extensions
chmod +x extension.sh
./extension.sh
cd ../../../../

Make sure to replace the default AWS Region specified under the value of the firehose.endpoint attribute inside lib/computes/ec2/ec2-startup.sh.

Build the code by running the following command:

yarn install && npm run build

Deploy the code

If you're running AWS CDK for the first time, run the following command to bootstrap the AWS CDK environment (provide your AWS account ID and AWS Region):

cdk bootstrap \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
    aws://<AWS Account Id>/<AWS_REGION>

You only need to bootstrap the AWS CDK one time (skip this step if you have already done this).

Run the following command to deploy the code:

cdk deploy --require-approval never

You get the following output:

 ✅  CdkUnifiedLogStack

Outputs:
CdkUnifiedLogStack.ec2ipaddress = xx.xx.xx.xx
CdkUnifiedLogStack.ecsloadbalancerurl = CdkUn-ecsse-PY4D8DVQLK5H-xxxxx.us-east-1.elb.amazonaws.com
CdkUnifiedLogStack.ecsserviceLoadBalancerDNS570CB744 = CdkUn-ecsse-PY4D8DVQLK5H-xxxx.us-east-1.elb.amazonaws.com
CdkUnifiedLogStack.ecsserviceServiceURL88A7B1EE = http://CdkUn-ecsse-PY4D8DVQLK5H-xxxx.us-east-1.elb.amazonaws.com
CdkUnifiedLogStack.eksclusterClusterNameCE21A0DB = ekscluster92983EFB-d29892f99efc4419bc08534a3d253160
CdkUnifiedLogStack.eksclusterConfigCommand515C0544 = aws eks update-kubeconfig --name ekscluster92983EFB-d29892f99efc4419bc08534a3d253160 --region us-east-1 --role-arn arn:aws:iam::xxx:role/CdkUnifiedLogStack-clustermasterroleCD184EDB-12U2TZHS28DW4
CdkUnifiedLogStack.eksclusterGetTokenCommand3C33A2A5 = aws eks get-token --cluster-name ekscluster92983EFB-d29892f99efc4419bc08534a3d253160 --region us-east-1 --role-arn arn:aws:iam::xxx:role/CdkUnifiedLogStack-clustermasterroleCD184EDB-12U2TZHS28DW4
CdkUnifiedLogStack.elasticdomainarn = arn:aws:es:us-east-1:xxx:domain/cdkunif-elasti-rkiuv6bc52rp
CdkUnifiedLogStack.s3bucketname = cdkunifiedlogstack-logsfailederrcapturebucket0bcc-xxxxx
CdkUnifiedLogStack.samplelambdafunction = CdkUnifiedLogStack-LambdatransformerfunctionFA3659-c8u392491FrW

Stack ARN:
arn:aws:cloudformation:us-east-1:xxxx:stack/CdkUnifiedLogStack/6d53ef40-efd2-11eb-9a9d-1230a5204572

AWS CDK takes care of building the required infrastructure, deploying the sample application, and collecting logs from the different sources into Amazon OpenSearch Service.

The following is some of the key information about the stack:

  • ec2ipaddress – The public IP address of the EC2 instance, deployed with the sample PHP application
  • ecsloadbalancerurl – The URL of the Amazon ECS Load Balancer, deployed with the httpd application
  • eksclusterClusterNameCE21A0DB – The Amazon EKS cluster name, deployed with the NGINX application
  • samplelambdafunction – The sample Lambda function using the Lambda extension to send logs to Kinesis Data Firehose
  • opensearch-domain-arn – The ARN of the Amazon OpenSearch Service domain

Generate logs

To visualize the logs, you first need to generate some sample logs.

  1. To generate Lambda logs, invoke the function using the following AWS CLI command (run it a few times):
aws lambda invoke \
--function-name "<<samplelambdafunction>>" \
--payload '{"payload": "hello"}' /tmp/invoke-result \
--cli-binary-format raw-in-base64-out \
--log-type Tail

Make sure to replace samplelambdafunction with the actual Lambda function name. The file path needs to be updated based on the underlying operating system.

The function should return "StatusCode": 200, with the following output:

{
    "StatusCode": 200,
    "LogResult": "<<Encoded>>",
    "ExecutedVersion": "$LATEST"
}

  2. Run the following command a few times to generate Amazon EC2 logs:
curl http://ec2ipaddress:80

Make sure to replace ec2ipaddress with the public IP address of the EC2 instance.

  3. Run the following command a few times to generate Amazon ECS logs:
curl http://ecsloadbalancerurl:80

Make sure to replace ecsloadbalancerurl with the URL of the Application Load Balancer.

We deployed the NGINX application with an internal load balancer, so the load balancer hits the health checkpoint of the application, which is sufficient to generate the Amazon EKS access logs.

Visualize the logs

To visualize the logs, complete the following steps:

  1. On the Amazon OpenSearch Service console, choose the link provided for the OpenSearch Dashboards URL.
  2. Configure access to the OpenSearch Dashboards.
  3. In OpenSearch Dashboards, on the Discover menu, start creating a new index pattern for each compute log.

We can see separate indexes for each compute log partitioned by date, as in the following screenshot.

BDB-1742-create-index

The following screenshot shows the process to create index patterns for Amazon EC2 logs.

BDB-1742-ec2

After you create the index patterns, we can start analyzing the logs using the Discover menu under OpenSearch Dashboards in the navigation pane. This tool provides a single searchable and unified interface for all the records from the various compute platforms. We can switch between different logs using the Change index pattern submenu.

BDB-1742-unified

Clean up

Run the following command from the root directory to delete the stack:

cdk destroy

Conclusion

In this post, we showed how to unify and centralize logs across different compute platforms using Kinesis Data Firehose and Amazon OpenSearch Service. This approach allows you to analyze logs quickly and identify the root cause of failures, using a single platform rather than a different platform for each service.

If you have feedback about this post, submit your comments in the comments section.

About the authors

Hari Ohm Prasath is a Senior Modernization Architect at AWS, helping customers with their modernization journey to become cloud native. Hari loves to code and actively contributes to open source projects. You can find him on Medium, GitHub & Twitter @hariohmprasath.

Ballu Singh is a Principal Solutions Architect at AWS. He lives in the San Francisco Bay Area and helps customers architect and optimize applications on AWS. In his spare time, he enjoys reading and spending time with his family.
