Visualizing NGINX access logs in Kibana

(Last Updated On: 02/15/2019)

We already have a central log server where we can collect the logs of Docker containers. It is very common to run web servers in containerized environments. In this tutorial I show you how you can parse NGINX or Apache access logs with syslog-ng, and how visualizing NGINX access logs in Kibana can be achieved.

NGINX Dashboard in Kibana

Parse NGINX/Apache access logs to provide insights about HTTP usage

There is a specific parser in syslog-ng which can further process the logs of NGINX or Apache web servers. Using it gives you more insight into the HTTP usage of your web servers.

Let’s place the syslog-ng configuration from the example below in /etc/syslog-ng/conf.d/docker-journal-elastic.conf. You may need to overwrite the existing config in case you followed my previous tutorial about logging Docker to Elasticsearch.

filter f_dockerd { "${.journald._COMM}" eq "dockerd" };

parser nginx_parser {
    apache-accesslog-parser(
        prefix("nginx.")
    );
};

parser p_geoip2_nginx { geoip2( "${nginx.clientip}", prefix( "geoip2." ) database( "/etc/syslog-ng/GeoLite2-City.mmdb" ) ); };

destination d_docker_file {
    file(
        "/var/log/docker/$S_YEAR.$S_MONTH.$S_DAY/${.journald.CONTAINER_NAME}.json"
        template("$(format-json --rekey .journald.* --shift-levels 2 --scope rfc5424 --key HOST --key ISODATE --key nginx.* --key geoip2.* --key .journald.APPLICATION --key .journald.CONTAINER_*)\n")
        create-dirs(yes)
    );
};

destination d_elastic_docker {
    http(url("http://172.20.0.40:9200/docker-containers/test/_bulk")
        method("POST")
        flush-lines(3)
        workers(4)
        headers("Content-Type: application/x-ndjson")
        body-suffix("\n")
        body("{ \"index\":{} }\n$(format-json --rekey .journald.* --shift-levels 2 --scope rfc5424 --key HOST --key ISODATE --key nginx.* --key geoip2.* --key .journald.APPLICATION --key .journald.CONTAINER_*)\n")
    );
};

log {
    # On openSUSE the default source is called 'src' while on most other distributions like Debian it is called 's_src'
    source(src);
    filter(f_dockerd);
    if ( "${PROGRAM}" eq "nginx" ) {
        parser(nginx_parser);
        parser(p_geoip2_nginx);
        rewrite(r_geoip2);
    };
    destination(d_docker_file);
    destination(d_elastic_docker);
};

I have hard-coded values like the Elasticsearch IP address, port, index and type in the config. Do not forget to adjust the values to suit your needs.

I added two new parsers. The first one is called “apache-accesslog-parser” and comes with syslog-ng. It works on logs complying either with the Common Log Format (the Apache default) or the Combined Log Format (the NGINX default).

The parser extracts the following fields from the messages: clientip, ident, auth, timestamp, rawrequest, response, bytes, referrer, and agent. The rawrequest field is further segmented into the verb, request, and httpversion fields. The syslog-ng OSE apache-accesslog-parser() parser uses the same naming convention as Logstash.
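To make the field names concrete, here is a hypothetical Combined Log Format line (the IP address, user, URLs and user agent are made-up documentation values) and the name-value pairs the parser would extract from it, with the prefix("nginx.") from the config applied:

```
203.0.113.7 - frank [15/Feb/2019:10:11:12 +0000] "GET /index.html HTTP/1.1" 200 612 "https://example.com/" "Mozilla/5.0"

nginx.clientip    = 203.0.113.7
nginx.ident      = -
nginx.auth       = frank
nginx.timestamp  = 15/Feb/2019:10:11:12 +0000
nginx.rawrequest = GET /index.html HTTP/1.1
nginx.verb       = GET
nginx.request    = /index.html
nginx.httpversion = 1.1
nginx.response   = 200
nginx.bytes      = 612
nginx.referrer   = https://example.com/
nginx.agent      = Mozilla/5.0
```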

The second parser is the same GeoIP parser I already used to enrich Fail2ban logs, just with a different input value. The rewrite rule r_geoip2 also comes from that Fail2ban tutorial; it adds the longitude and latitude data.
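In case you did not follow the Fail2ban tutorial, the r_geoip2 rewrite rule has to be defined in the config as well. A minimal sketch, based on the geoip2 example in the syslog-ng documentation, which combines latitude and longitude into the single geoip2.location2 field:

```
rewrite r_geoip2 {
    set(
        "${geoip2.location.latitude},${geoip2.location.longitude}",
        value("geoip2.location2"),
        # Only set the field when GeoIP actually returned coordinates
        condition(not "${geoip2.location.latitude}" == "")
    );
};
```

The combined "latitude,longitude" string is what Elasticsearch expects for a field mapped as geo_point, which is exactly how geoip2.location2 is mapped in the index below.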

The JSON template also contains rekey and shift-levels directives, simply to cut the “.journald.” prefix off the macros. Those options are specific to syslog-ng.

Please do not reload syslog-ng yet. We need to create an index and an explicit data type mapping for some attributes before Elasticsearch accepts the logs.

Creating index and data type mapping in Elasticsearch

Most of the fields that are put into the indexes are mapped as keyword. This is good in most cases. For visualizing NGINX access logs in Kibana, however, we need explicit data type mappings for some fields.

To create them, follow the steps I described in setting data type mappings for Fail2ban, or just go to Dev Tools→Console in Kibana and issue the API command below.

 
PUT /docker-containers
{
   "mappings" : {
      "test" : {
         "properties" : {
            "geoip2" : {
               "properties" : {
                  "location2" : {
                     "type" : "geo_point"
                  }
               }
            },
            "nginx" : {
               "properties" : {
                  "response" : {
                     "type" : "integer"
                  },
                  "bytes" : {
                     "type" : "integer"
                  },
                  "clientip" : {
                     "type" : "ip"
                  }
               }
            }
         }
      }
   }
}
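If you prefer the command line over the Kibana console, the same mapping can be created with curl. The host, port and index name below are the ones hard-coded in the syslog-ng destination, so adjust them if yours differ; save the JSON body above as mapping.json first:

```
curl -X PUT "http://172.20.0.40:9200/docker-containers" \
     -H 'Content-Type: application/json' \
     -d @mapping.json
```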

Now it is time to reload syslog-ng (for example with systemctl reload syslog-ng). After a couple of logs have reached Elasticsearch, make sure you create an index pattern so Kibana also picks up the rest of the fields that were not described in the explicit mapping.

Discovering access logs in Kibana

If you succeeded in following the steps, you will have an index pattern called “docker-containers*”. Try browsing the log messages in the Kibana→Discover menu.

The following screenshot shows an example from my system. The most interesting metadata fields are visible on the left: APPLICATION, CONTAINER_NAME, CONTAINER_TAG and of course the standard fields from RFC 5424.

NGINX Discovery in Kibana

Scroll down a bit and you will see the added GeoIP and NGINX metadata as well.

NGINX Discovery GeoIP in Kibana

This is already useful, but we are going to create more interesting visualizations in the next section.

Creating NGINX and Docker visualizations in Kibana

Visualizing NGINX access logs in Kibana can be done with visualizations like Data Table, Vertical Bar, Pie Chart and Coordinate Map. I created short videos showing how you can make use of them.

Creating a Vertical Bar visualization for NGINX average bytes

This chart is useful for showing how the traffic volume changes over time. You can use the same type of visualization for HTTP response codes; a Pie Chart also works nicely for those.

It can also show you the amount of logs each container produces.

Creating a Coordinate Map visualization for GeoIP data of HTTP clients

I used this type of visualization for Fail2ban logs before. You can use it to see where your HTTP clients are coming from. The free GeoIP database is accurate enough to narrow clients down to counties or bigger areas.

Creating a Data Table visualization to show TOP 10 HTTP User Agents

The Data Table visualization is usually more readable than Vertical Bars. For instance, when the labels on the X-axis are too long, I prefer the Data Table.

Creating a Pie Chart visualization to show TOP 10 HTTP Response codes

Pie Charts are pretty good for counting data when your data set has limited variability, for example when there are only a dozen container names and not hundreds.

Creating dashboard from visualizations in Kibana

Visualizing NGINX access logs in Kibana is not complete yet. You should create a new Dashboard and add the recently created visualizations to it. It will look really nice.
Check the video below to see how you can do that.

P.S.

Originally this content was embedded in the Simplified guide to logging Docker to Elasticsearch in 2019 (With syslog-ng). However, I realized that it makes sense to have dedicated tutorials for each subject.

The configurations for syslog-ng and docker-compose, as well as the visualizations and dashboard for Kibana, can all be downloaded from this GitHub repository.

Again, should you have any comments, feel free to leave them below. I will also highly appreciate it if you share this post.

Thank you!

