Docker logs in Kibana Dashboard

Simplified guide to logging Docker to Elasticsearch in 2019 (With syslog-ng)

(Last Updated On: 07/12/2019)

This simplified guide to logging Docker to Elasticsearch shows you how to send container logs into Elastic. Although there are many tutorials on logging Docker to Elasticsearch, this one is different from the rest because it uses syslog-ng. You can then visualize the logs on a nice dashboard in Kibana. And you can download it all at the end of the post!

Update: I moved the chapters about parsing and visualizing NGINX / Apache access logs in Kibana into a dedicated post / github repo.

Update 2: This post has been refactored and simplified to be compatible with Elasticsearch ECS and make it easier to implement. Compatible with Elasticsearch 7.x.

Why store logs of containers?

I have a Docker environment on my Linux home server. It provides different services I use and sandboxes to try out new things. I already have a central log server to store logs from different sources. Logs of Docker containers are yet to be added.
Although Docker acts like a machine within the machine, it is still possible to collect and store the containers' logs in the same central log server. They will be handy both for troubleshooting and analysis.

Choosing the right Docker log driver

The format of the logs can differ from container to container. Fortunately, the Docker daemon provides multiple drivers to collect and forward the logs to other systems regardless of their format.

Depending on which log driver we use – Docker supports many – it can even attach metadata to the logs.

The most trivial driver is ‘syslog’, but the documentation mentions an important disadvantage:

“Needs to be set up as highly available (HA) or else there can be issues on container start if it’s not available”

Although this is the only driver that supports TLS transport, it is quite overkill to set up an HA syslog infrastructure solely to collect container logs. I think journald is a better alternative to syslog.

I chose the journald driver over syslog because:

  1. All mainstream Linux operating systems ship journald, and it works out of the box.
  2. The service runs on the host and collects system logs even when no container is running.
  3. It is capable of collecting Docker daemon logs as well, not just the logs of containers.
  4. The docker logs feature can be used in parallel with journald.
  5. Regardless of what the documentation mentions, it also supports logging tags.
  6. The only downside is that journals are a binary format and processing them takes extra effort. Luckily for us, syslog-ng can decode them, and it can also relay the log messages via TLS if you still need that feature.
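If you would rather make journald the default for every container instead of setting it per service, Docker's daemon configuration supports that too. A minimal sketch of /etc/docker/daemon.json (restart the Docker daemon afterwards):

```json
{
  "log-driver": "journald"
}
```

Per-service Compose options, as shown later in this post, still override this daemon-wide default.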

Orchestrating containers with docker-compose

I use docker-compose for container orchestration because it is much simpler to use in smaller server environments than Kubernetes. Plus, you can use Docker Swarm if you like.

The following chart gives you an overview of a single-host system serving Nextcloud sites. From our perspective, it is just PHP services and HTTP servers that are reachable from the Internet through a reverse proxy.

Docker to Elastic logging architecture with syslog-ng

Every Docker container sends its log messages to journald.
Syslog-ng reads the journals and sends the processed messages on to Elasticsearch, which runs in the same Docker environment.

The logging daemon stores the logs on the local filesystem as well, so in case Elastic goes down, no logs will be lost.
The logs on the host are in the same format as they are in Elastic; as a result, you can reuse them anytime.

By default, journald does not persist journals across system reboots, or only keeps them for a limited time.

Elastic works for me as a platform to visualize and analyze logs. I do not mean it as long-term storage, although it can work like that. If I need more server resources, I simply delete Elastic and bootstrap it from the local logs whenever I need it again.

Configuring Docker daemon to store logs of containers in journald

In my example I use this docker-compose.yml file for Nextcloud. You can swap Nextcloud for any service you may already have in your containers. The basics are the same.

Explaining logging related options

To log Docker to Elasticsearch, we first need to use the journald driver to collect the logs of the containers. The following Compose config does exactly that.

            logging:
                driver: journald
                options:
                    labels: "application"
                    tag: "nginx|{{.ImageName}}|{{.DaemonName}}"
            labels:
                application: "reverse_proxy"

The logging→options→labels:application key-value pair maps to labels→application. Effectively it adds the “application:reverse_proxy” label to the container called “proxy”.

Defining the tag logging→options→tag:”nginx|{{.ImageName}}|{{.DaemonName}}” is very important. Syslog-ng will parse this field to extract container metadata fields, such as container.image.tag, according to ECS.
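The tag syslog-ng receives looks like nginx|nginx:1.16|docker. The split it performs can be sketched in Python (the helper name is mine, and the field names other than container.image.tag are my ECS-style guesses):

```python
# Split a Docker journald tag of the form "<name>|<image>|<daemon>" into
# ECS-style container fields, mirroring what the syslog-ng parser does.
def parse_docker_tag(tag):
    name, image, daemon = tag.split("|", 2)
    # "nginx:1.16" -> image name "nginx", image tag "1.16"
    image_name, _, image_tag = image.partition(":")
    return {
        "": name,
        "": image_name,
        "container.image.tag": image_tag or "latest",
        "container.runtime": daemon,
    }

fields = parse_docker_tag("nginx|nginx:1.16|docker")
print(fields["container.image.tag"])  # 1.16
```

Images referenced without an explicit tag, such as php-7.2-fpm in the examples below, fall back to "latest" here.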

See the effect without “tag” being specified:

Nov 02 11:39:29 microchuck dockerd[1982]: - - [02/Nov/2018:11:39:29 +0100] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" "-"
Nov 02 11:52:23 microchuck dockerd[1982]: - - [02/Nov/2018:11:52:23 +0100] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.7 (KHTML, like Gecko) Version/9.1.2 Safari/601.7.7" "-" 

And with “tag” specified.

Feb 04 00:20:39 microchuck nginx|nginx:1.16|docker[26858]: - - [04/Feb/2019:00:20:39 +0100] "GET /.well-known/security.txt HTTP/1.1" 404 169 "-" "-" "-"
Feb 04 00:20:52 microchuck nginx|nginx:1.16|docker[26858]: - - [04/Feb/2019:00:20:52 +0100] "GET /favicon.ico HTTP/1.1" 404 169 "-" "python-requests/2.10.0" "-" 

Explaining network related options

I need a custom network setup to assign a fixed IP address to Elastic on a bridged interface. Syslog-ng needs this to address it from outside of the Docker network.

        ports:
            - "9200:9200"

        driver: bridge
        ipam:
            config:
                - subnet:

With the help of these examples you should be able to make the Docker daemon log into journald.

Managing container log messages by syslog-ng

Some metadata values, like the label and the tag, were already set by Docker Compose. The next step is to read the containers' logs from journald and parse metadata about Docker images and containers.

Collect and store container logs locally

I provide a syslog-ng configuration file, similar to those provided in setting up a network source to collect OpenWRT logs or getting GeoIP metadata sent into Elasticsearch.

The basic configuration only stores container logs locally. It will be incrementally extended in the next chapters to reach its full potential.

filter f_dockerd { "${.journald._COMM}" eq "dockerd" };

parser p_docker_metadata {
    map-value-pairs(
        pair("", "${.journald.CONTAINER_ID}")   # keyword
        pair("", "${.journald.CONTAINER_NAME}") # keyword
    );
};

destination d_docker_file {
    file("/var/log/docker/${S_YEAR}.${S_MONTH}.${S_DAY}/${.journald.CONTAINER_NAME}.log");
};

log {
    # On openSUSE the default source is called 'src' while on most other distributions like Debian it is called 's_src'
    source(src);
    filter(f_dockerd);
    parser(p_docker_metadata);
    destination(d_docker_file);
};

The main building blocks are the same: source drivers, filters, destination drivers, and log paths which wire them together. You may notice the source driver is not actually defined here but only referenced as source(src) from the main syslog-ng.conf file.

Notice: On Debian-based systems the default source is usually called “s_src”. On openSUSE it is simply “src”, which is what I use here.

A filter called f_dockerd selects the logs created by the Docker daemon. Syslog-ng can use journald's COMM variable for this filtering.

A new parser, p_docker_metadata, takes care of parsing and converting key-value pairs into fields accepted by Elasticsearch ECS.

The file destination driver uses a template for file names and directories. It is a nice feature which helps you structure logs by sorting them by date and container name.
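A quick Python sketch of how such a template expands (the macro names come from syslog-ng; the resulting path matches the tree output shown later in this post):

```python
from datetime import date

# Mimic the expansion of a syslog-ng file() template such as
# /var/log/docker/${S_YEAR}.${S_MONTH}.${S_DAY}/${.journald.CONTAINER_NAME}.log
def docker_log_path(container_name, day):
    return "/var/log/docker/{:%Y.%m.%d}/{}.log".format(day, container_name)

print(docker_log_path("containerservices_proxy_1", date(2019, 2, 4)))
# /var/log/docker/2019.02.04/containerservices_proxy_1.log
```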

Save the config as “/etc/syslog-ng/conf.d/docker-journal-elastic.conf”, then reload the syslog-ng service.

user@microchuck:~> sudo systemctl reload syslog-ng

Check the results either by looking into the journals.

user@microchuck:~> sudo journalctl -u docker --since="1 min ago"
-- Logs begin at Sun 2018-07-01 14:51:52 CEST, end at Mon 2019-02-04 13:25:12 CET. --
Feb 04 13:24:42 microchuck php|php-7.2-fpm|docker[1980]: -  04/Feb/2019:13:24:42 +0100 "GET /apps/notifications/api/v2/notifications" 200
Feb 04 13:24:42 microchuck nginx|nginx:1.16|docker[1980]: forwarded for - - [04/Feb/2019:13:24:42 +0100]  "GET /oc/ocs/v2.php/apps/notifications/api/v2/notifications HTTP/1.0" 200 74 "-" "Mozilla/5.0 (Ma>
Feb 04 13:24:42 microchuck nginx|nginx:1.16|docker[1980]: - - [04/Feb/2019:13:24:42 +0100] "GET /oc/ocs/v2.php/apps/notifications/api/v2/notifications HTTP/1.1" 200 74 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.>
Feb 04 13:25:12 microchuck php|php-7.2-fpm|docker[1980]: -  04/Feb/2019:13:25:12 +0100 "GET /apps/notifications/api/v2/notifications" 200

Or by checking the directory /var/log/docker/.

user@microchuck:~> sudo tree -L 2 /var/log/docker
├── 2019.02.03
│   ├── containerservices_proxy_1.log
│   ├── containerservices_nextcloud-www_1.log
│   └── containerservices_nextcloud-php_1.log
└── 2019.02.04
    ├── containerservices_proxy_1.log
    ├── containerservices_nextcloud-www_1.log
    └── containerservices_nextcloud-php_1.log

Transfer log messages into Elasticsearch

Syslog-ng already collects and stores the containers' messages on the local filesystem, both in volatile journals and persistent text files.
Let's extend the previous config file by adding a new destination driver for Elastic and connecting it into the existing log path, so the same logs are also sent to Elastic.

Check the differences against the previous config to see what has changed.
If you have any problems using this output, don't worry: I will post a full downloadable example at the end of this post.

user@microchuck:~> diff -u docker-journal.conf docker-journal-elastic.conf
--- docker-journal.conf 2019-07-12 13:55:46.799777984 +0200
+++ docker-journal-elastic.conf 2019-07-12 13:48:13.936519051 +0200
@@ -1,3 +1,5 @@
+@define elastic_host ""
 filter f_dockerd {"${.journald._COMM}" eq "dockerd"};
 parser p_docker_metadata {
@@ -25,10 +27,26 @@
+destination d_elastic_docker {
+    http(url("http://`elastic_host`/_bulk")
+        method("POST")
+        batch-lines(3)
+        workers(4)
+        headers("Content-Type: application/x-ndjson")
+        body-suffix("\n")
+        body(
+            "{ \"index\":{ \"_index\": \"docker-containers-${S_YEAR}-${S_MONTH}-${S_DAY}\"} }\n$(format-json --pair$HOST --pair host.ip=$SOURCEIP --pair @timestamp=$ISODATE --pair message=$MESSAGE --pair tags=container-services --pair ecs.version=1.0.0 --pair${PROGRAM} --key container.*)\n"
+        )
+        persist-name("docker")
+        log-fifo-size(20000)
+    );
 log {
     # On openSUSE the default source is called 'src' while on most other distributions like Debian it is called 's_src'
+    destination(d_elastic_docker);

You only need to adjust the address of your Elastic instance in the elastic_host variable.
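The body() template in the diff produces Elasticsearch _bulk framing: one action line naming the index, then the document itself, each newline-terminated. The same payload can be sketched in Python (the values and the field are illustrative guesses, not the exact output):

```python
import json
from datetime import datetime

# Build one action/document pair for the Elasticsearch _bulk API,
# mirroring the ndjson body the syslog-ng http() destination emits.
def bulk_entry(message, host, when):
    action = {"index": {"_index": "docker-containers-{:%Y-%m-%d}".format(when)}}
    document = {
        "@timestamp": when.isoformat(),
        "": host,
        "message": message,
        "tags": "container-services",
        "ecs.version": "1.0.0",
    }
    return json.dumps(action) + "\n" + json.dumps(document) + "\n"

payload = bulk_entry("GET / HTTP/1.1 200", "microchuck", datetime(2019, 2, 4, 13, 24, 42))
print(payload, end="")
```

This is why the destination sets the Content-Type header to application/x-ndjson and a body-suffix of "\n": _bulk expects newline-delimited JSON.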

You are ready to reload syslog-ng for the changes to take effect. After a couple of logs have reached Elastic, make sure you create index patterns for the documents; this is mandatory to be able to discover them. Remember, the GitHub repo already contains an appropriate index template mapping.
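Conceptually, such an index template just maps the extracted container fields to keyword type for any index matching the pattern. A hypothetical, minimal version (the real one lives in the GitHub repo accompanying this post):

```python
import json

# A purely illustrative legacy index template (PUT _template/docker-containers)
# for Elasticsearch 7.x, mapping container metadata to keyword fields.
index_template = {
    "index_patterns": ["docker-containers-*"],
    "mappings": {
        "properties": {
            "@timestamp": {"type": "date"},
            "container": {
                "properties": {
                    "name": {"type": "keyword"},
                    "image": {"properties": {"tag": {"type": "keyword"}}},
                }
            },
        }
    },
}

print(json.dumps(index_template, indent=2))
```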

Discovering container logs in Kibana

If you succeeded in following the steps, you will have an index pattern called “docker-containers*”. Try browsing the log messages in the Kibana→Discover menu.

Notice that the ECS fields for containers and hosts are already set. The following screenshot shows an example from my system. I removed the RFC5424 fields as they were not useful.

Docker Log Discovery in Kibana

Creating Docker visualizations in Kibana

There are many visualization types, like Data Table, Vertical Bar, Pie Chart and Line. I created short videos about how you can make use of them.

Creating a stacked Vertical Bar visualization

This chart is useful to show how the amount of logs per app changes over time. You can use this type of visualization for other attributes too.

Creating a Line visualization for getting log trends

I use this type of visualization to see trends in how the number of logs changes over time. You can put a Data Table next to it in a Dashboard to show more insights.

Creating a Data Table visualization to show the amount of logs per container

The Data Table visualization is usually more readable than Vertical Bars, for instance when the texts on the X-Axis are too long.
The video below was made for HTTP Agents. Just replace the Field of the Aggregation with “”.

Creating a Pie Chart visualization to show the amount of logs per app

Pie Charts are pretty good for counting amounts of data when your data set has limited variability, for example when there are only a dozen app names and not hundreds.
The video below was made for HTTP Response codes. You should be able to adjust the Field of the Aggregation to “”.

Creating dashboard from visualizations in Kibana

You should add the recently created visualizations to a Dashboard.
Check the video below to see how you can do that.
Although the video presents NGINX logs, the method is the same for Docker logs.

Final words

I really liked this project. I learned many new things and the results are simply amazing. My plan is to use this system and improve it as I go.

Configurations for syslog-ng and docker-compose, plus the visualizations and dashboard for Kibana: as promised, you can download them all from this GitHub repository. Feel free to use them.


If this post was helpful for you, then please share it. Most importantly, should you have anything to add, please leave a comment below. I will appreciate it.

Thank you.
