ElastAlert as a Docker container

How to use ElastAlert as a Docker container

If you are building a custom system, then logging is one of the most important aspects of that system.

If your system does not allow external logging services such as New Relic, Papertrail, or Logentries, or if you do not want to outsource logging to a third party for your own reasons, then please continue reading; but first, please see the intro about logging.

One solution for an internal logging cluster is the modern Elastic Stack, also known as ELK. What does ELK stand for? It is an acronym of the products used in that system: Elasticsearch / Logstash / Kibana.

  • Elasticsearch is the database that is used to keep all of our data (the company behind it is now known simply as Elastic)

  • Logstash is the log aggregator. That is admittedly an oversimplification of how our logs get imported into the database: a full pipeline normally has two components, a log shipper on each host (now known as Beats) and Logstash as the log ingestor. For this document let's keep it simple; after all, we are not interested in that part of the system

  • Kibana is the visualization tool that we use to analyze/view our logging data.

Let's not say more about this, because the field is changing fast and the details differ from setup to setup.

If you follow Elastic (the company), you will have seen the latest release, Elastic Stack 5. This release includes a new feature, the Watcher plugin (more on that in a future article). If your cluster is a bit older, or you have not activated X-Pack, then you are missing Watcher and will be looking for alternatives. One of those alternatives is ElastAlert, developed by Yelp.

In my opinion this is a nice tool for alerting; it is my personal choice, and it is also what we use at PeoplePerHour. For that reason I have created an ElastAlert Docker container that we run in our dockerized cluster.

You can find the project on GitHub, along with instructions on how to use it.

The Docker container makes a few assumptions.

For local development

You host the alerts in a folder bind-mounted into the container, so that when the container starts it loads any alerts it finds under /opt/rules/. So, for example, say you are writing the rules in a directory structure like the one below

|- rules/
|--rules/example.yaml
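
For reference, here is a minimal sketch of what such an example.yaml could contain, using ElastAlert's frequency rule type. The index pattern, the filtered field, and the email address are assumptions you would replace with your own:

name: Too many application errors
type: frequency
index: logstash-*          # assumption: adjust to your index pattern
num_events: 50
timeframe:
  hours: 1
filter:
- term:
    level: "error"         # assumption: adjust to your log level field
alert:
- "email"
email:
- "ops@example.com"        # hypothetical recipient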

Then you can test the example.yaml rule with the following commands

docker run --rm -it -v $(pwd)/rules:/opt/rules \
        -e ELASTICSEARCH_HOST=xxx.xxx.xxx.xxx \
        peopleperhour/elastalert bash
# inside the container: render the config template with your environment
$ mv /opt/config/config.yaml /opt/config/config.tpl.yaml
$ envsubst < /opt/config/config.tpl.yaml > /opt/config/config.yaml

Basically, what we do is start the ElastAlert container without running the elastalert command itself, which would try to start in, let's say, production mode; instead, we get a shell inside the container.

We also need to substitute the ELASTICSEARCH_HOST environment variable into our configuration file, which is what the envsubst step above does.
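To make that concrete, here is a sketch of what the config template might look like before envsubst runs. The exact keys shipped in the image may differ; these are standard ElastAlert configuration options:

rules_folder: /opt/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: ${ELASTICSEARCH_HOST}   # placeholder filled in by envsubst
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
  days: 2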

Once we do that, we can navigate to the rules directory /opt/rules and test our specific alert with the elastalert-test-rule command.

Example usage of testing rules

$ elastalert-test-rule \
  --config /opt/config/config.yaml \
  --days 1 \
  example.yaml;
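
elastalert-test-rule has a few other useful flags as well; for example, --schema-only validates the rule file without querying Elasticsearch at all, and --count-only just counts the matching documents:

$ elastalert-test-rule --config /opt/config/config.yaml --schema-only example.yaml
$ elastalert-test-rule --config /opt/config/config.yaml --count-only --days 1 example.yaml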

We have to pass the ElastAlert configuration so that the test is bootstrapped with the same configuration as the one the live process will run with.

For production usage

If you want to start the container in production mode, then you have two main options.

The first option is to bind-mount the rules you want to apply into the /opt/rules directory; the container will pick them up from there, as in the sketch below.
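As a sketch (the host path is an assumption, and this relies on the image's default command starting ElastAlert, as the local-development section suggests):

docker run -d --name elastalert \
        -v /srv/elastalert/rules:/opt/rules \
        -e ELASTICSEARCH_HOST=xxx.xxx.xxx.xxx \
        peopleperhour/elastalert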

The second option, if you are in an AWS-hosted environment, is to save your rules in an S3 bucket and configure the container to fetch them from that bucket at startup and save them into /opt/rules/; from there it continues as normal.

If you store the rules in S3, then you have to create an AWS IAM user with access only to that S3 bucket (for security, create a new IAM user with a strict policy) and pass the environment variables below; a full example run follows the list.

  • S3_BUCKET: the bucket that holds the rules
  • AWS_ACCESS_KEY_ID: the IAM access key
  • AWS_SECRET_ACCESS_KEY: the IAM secret key
  • AWS_DEFAULT_REGION: the region where the bucket lives
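
Putting it together, a production run against S3 might look like the sketch below; the bucket name and region are hypothetical, and the key values are placeholders for your IAM credentials:

docker run -d --name elastalert \
        -e ELASTICSEARCH_HOST=xxx.xxx.xxx.xxx \
        -e S3_BUCKET=my-elastalert-rules \
        -e AWS_ACCESS_KEY_ID=<IAM key> \
        -e AWS_SECRET_ACCESS_KEY=<IAM secret> \
        -e AWS_DEFAULT_REGION=eu-west-1 \
        peopleperhour/elastalert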

Alexandros Sapranidis

Software engineer, keen on wearing many hats, currently Senior Software Engineer @ Elastic Cloud

Athens, Greece http://sapranidis.gr