Visualizing Fail2ban logs in Kibana

(Last Updated On: 02/15/2019)

In the last post I wrote about how you can enrich Fail2ban logs with GeoIP metadata and with other data parsed from the logs. This time I will show you how you can use syslog-ng to send them into Elasticsearch and how visualizing Fail2ban logs in Kibana can show you where the failed login attempts are coming from.

Fail2ban visualization in Kibana Coordinate map

There might be other ways to send Fail2ban's logs into Elasticsearch. This guide uses syslog-ng to transform the logs into a JSON payload enriched with GeoIP metadata. It can also parse further data from the logs and store it in Elasticsearch.

Create index mapping for Fail2ban in Elasticsearch

Elasticsearch is able to create datatype mappings automatically. It usually implicitly assigns the keyword data type. However, the Coordinate Map visualization relies on a data type called geo_point, which must contain latitude and longitude data, and this data type must be set explicitly. GeoIP provides this data for us. I will also add mappings for the ip and pid values parsed from the logs (jail will also be stored, but the default datatype works for me).

The following PUT Mapping API command creates the index called fail2ban_geoip and applies the mapping below to it.

PUT /fail2ban_geoip
{  
   "mappings" : {
      "test" : {
         "properties" : {
            "geoip2" : {
               "properties" : {
                  "location2" : {
                     "type" : "geo_point"
                  }
               }
            },
            "ip": {
              "type": "ip"
            },
            "pid": {
              "type": "integer"
            }
         }
      }
   }
}

You can use the built-in Console of Kibana that you can find under Dev Tools to apply this command. See the screenshot below.

Kibana Dev Tools PUT mapping API

The mapping must be done before the index is populated with documents. (The Reindex API exists, but it usually requires transferring data from one index to another.)
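If you prefer to create the index from a script instead of the Kibana Console, the same request can be issued over HTTP. Below is a minimal Python sketch: the mapping is the one shown above, and the Elasticsearch address (172.20.0.40:9200) is the one used later in this post, so adjust it to your setup.

```python
import json
import urllib.request

# The same mapping as in the Console snippet above. Index name
# "fail2ban_geoip" and type "test" must match the bulk URL used later.
mapping = {
    "mappings": {
        "test": {
            "properties": {
                "geoip2": {"properties": {"location2": {"type": "geo_point"}}},
                "ip": {"type": "ip"},
                "pid": {"type": "integer"},
            }
        }
    }
}

# Build the PUT request for the index creation endpoint.
req = urllib.request.Request(
    url="http://172.20.0.40:9200/fail2ban_geoip",
    data=json.dumps(mapping).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # uncomment to actually create the index
```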

Send Fail2ban logs into Elasticsearch

I will use the same configuration snippet called fail2ban-geoip.conf from the post creating a central syslog server as a basis and extend it by adding an Elasticsearch destination. The example below only shows the difference against the previous version. You can find the whole configuration on GitHub in the branch fail2ban-geoip-elasticsearch.

diff --git a/syslog-ng/conf.d/fail2ban-geoip.conf b/syslog-ng/conf.d/fail2ban-geoip.conf
index 01c46cf..518fab0 100644
--- a/syslog-ng/conf.d/fail2ban-geoip.conf
+++ b/syslog-ng/conf.d/fail2ban-geoip.conf
@@ -28,6 +28,18 @@ destination d_geoip2_file {
     file("/var/log/geoip2-test.log" template("${ISODATE} -> $(format-json --key ip --key jail --key pid --key HOST --key ISODATE --key geoip2.*)\n"));
 };
 
+destination d_elastic_geoip {
+    http(url("http://172.20.0.40:9200/fail2ban_geoip/test/_bulk")
+        method("POST")
+        flush-lines(3)
+        workers(4)
+        headers("Content-Type: application/x-ndjson")
+        body-suffix("\n")
+        body("{ \"index\":{} }\n$(format-json --key ip --key pid --key jail --key HOST --key ISODATE --key geoip2.*)\n")
+    );
+};
+
+
 log {
     # On openSUSE the default source is called 'src' while on most other distributions like Debian it is called 's_src'
     source(src);
@@ -37,4 +49,5 @@ log {
     parser(p_geoip2);
     rewrite(r_geoip2);
     destination(d_geoip2_file);
+    destination(d_elastic_geoip);
 };

The only change is the new destination driver called d_elastic_geoip. It represents Elasticsearch's HTTP Bulk API. Please note that the index name (fail2ban_geoip) and type (test) are hard-coded in the URL. You may want to change them.
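To make the body() template above more tangible, the following Python sketch builds the same NDJSON bulk payload for a single event. The field values are made up; real documents carry whatever PatternDB and GeoIP produced.

```python
import json

# A hypothetical event, shaped like the keys the format-json template
# emits (ip, pid, jail, HOST, ISODATE plus geoip2.* keys).
event = {
    "ip": "203.0.113.7",
    "pid": 1234,
    "jail": "sshd",
    "HOST": "gateway",
    "ISODATE": "2019-02-15T10:21:33+01:00",
    "geoip2": {"location2": {"lat": 47.4984, "lon": 19.0404}},
}

# Each document in a bulk body is an action line followed by the document
# itself, newline-separated and newline-terminated -- exactly what the
# body() template of d_elastic_geoip produces per message.
bulk_body = '{ "index":{} }\n' + json.dumps(event) + "\n"

lines = bulk_body.splitlines()
# lines[0] is the action metadata, lines[1] the document itself
```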

You should reload syslog-ng for the changes to take effect. After the reload please wait until a new Ban or Found event occurs in Fail2ban.

Create index patterns in Kibana

To be able to Discover all the details of the logs, we must instruct Kibana to create an index pattern. This is where the automatic mapping I described earlier comes into play.
You might need to wait until at least one document arrives that contains the whole attribute set of the JSON payload, so that the index pattern can be complete. Remember, I only provided mappings for three attributes while there are still dozens.

Note: In case you already have logs because you followed my previous guide, you can put those logs manually into Elasticsearch. I had the same case, but my log format was different: the original guide used DATE instead of ISODATE and did not include it in the JSON payload. I created a script to convert the logs into a format that can be ingested by the bulk insert API of Elasticsearch. Feel free to adjust it to your needs.
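A conversion of that kind can be sketched in a few lines of Python. The input format ("&lt;timestamp&gt; -> &lt;json&gt;") is an assumption based on the file() template shown earlier; adjust the separator if your log lines differ, and note that my actual script may look different.

```python
import json

def log_to_bulk(lines):
    """Convert '<timestamp> -> <json>' log lines into an NDJSON bulk body."""
    out = []
    for line in lines:
        timestamp, _, payload = line.partition(" -> ")
        doc = json.loads(payload)
        doc.setdefault("ISODATE", timestamp)  # re-attach the timestamp
        out.append('{ "index":{} }')          # bulk action line
        out.append(json.dumps(doc))           # the document itself
    return "\n".join(out) + "\n"              # bulk bodies end with a newline

sample = ['2019-02-15T10:21:33+01:00 -> {"ip": "203.0.113.7", "jail": "sshd"}']
bulk = log_to_bulk(sample)
# POST `bulk` to /fail2ban_geoip/test/_bulk with
# Content-Type: application/x-ndjson
```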

To create an index pattern in Kibana, go to Management → Kibana → Index patterns → Create index pattern.

  1. Type in the index name fail2ban_geoip and click Next step.
     Kibana create index pattern - step 1
  2. On the next screen you should select a Time Filter. Select ISODATE from the drop-down menu. When selected, click on Create index pattern.
     Kibana create index pattern - step 2
  3. If you succeeded, you should be able to use Discover to browse the logs you already have.
     Fail2ban logs Discovery in Kibana - step 1

Visualizing Fail2ban logs in Kibana

In the Discover menu, you can scroll down and see all the available attributes. They are provided by syslog-ng via GeoIP and PatternDB. If you find geoip2.location2, you will notice that its icon is different, as are the icons of ip and pid. They represent different data types.

Clicking on geoip2.location2 reveals a Visualize button; let's click on that.

Fail2ban logs Discovery in Kibana - step 2

Voilà, the world map we were aiming for.

Fail2ban visualization in Kibana Coordinate map with options

You can even customize the visualization. I hope you like it.

If you liked this content, please share it on any platform you like; I would highly appreciate it. Should you have anything to add, please leave a comment below.

Would you like to read more about this topic? You should check the tutorial about how you can create a central syslog server. In case you already have one, you can extend it by using a simplified guide to logging Docker to Elasticsearch as well.
