HOWTO collect logs from Logstash
Why do that
This HOWTO shows how to verify that no logs are lost between an external component and the Punchplatform. As an example, we will compare the number of logs sent by Logstash with the number received by the Punchplatform.
Prerequisites
You need:
- punchplatform-standalone
- Logstash (5.6.2)
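If Logstash is not installed yet, a plain archive installation is enough for this test. A minimal sketch, assuming you fetch the 5.6.2 tarball from the Elastic artifacts site and that wget is available:

```
# Download and unpack Logstash 5.6.2 (URL assumed from the Elastic artifacts layout).
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.tar.gz
tar -xzf logstash-5.6.2.tar.gz
cd logstash-5.6.2
```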
What to do
Configure Logstash
Create a new pipeline with a lumberjack output (first-pipeline.conf):
```
input {
    tcp {
        port => 9901
    }
}
output {
    lumberjack {
        hosts => "localhost"
        port => 29901
        ssl_certificate => "/home/user/keys/logstash/logstash.crt"
    }
}
```
See also HOWTO connect lumberjack output of Logstash and lumberjack spout of Storm
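The lumberjack output above and the lumberjack spout configured below both point to the same certificate (logstash.crt), and the spout additionally needs a private key. If you do not have this material yet, here is a hypothetical self-signed sketch with OpenSSL; the paths, validity period, subject and the PKCS#8 conversion (suggested by the .key8 extension) are assumptions, not taken from the referenced HOWTO:

```
# Generate a private key and a self-signed certificate for the lumberjack link
# (example values; adapt them to your environment).
mkdir -p /home/user/keys/logstash
cd /home/user/keys/logstash
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout punchplatform.key -out logstash.crt \
  -days 365 -subj "/CN=localhost"

# Convert the key to PKCS#8, the format suggested by the .key8 extension.
openssl pkcs8 -topk8 -nocrypt -in punchplatform.key -out punchplatform.key8
```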
Configure Punchplatform
Create a new topology with the following configuration (lmr_in_topology.json):
```
{
    "tenant" : "mytenant",
    "channel" : "test",
    "name" : "lmr_in",
    "meta" : {
        "tenant" : "mytenant"
    },
    "spouts" : [
        {
            "type" : "lumberjack_spout",
            "spout_settings" : {
                "listen" : {
                    "host" : "0.0.0.0",
                    "port" : 29901,
                    "compression" : false,
                    "ssl" : true,
                    "ssl_private_key" : "/home/user/keys/logstash/punchplatform.key8",
                    "ssl_certificate" : "/home/user/keys/logstash/logstash.crt"
                },
                "self_monitoring.activation" : false
            },
            "storm_settings" : {
                "executors": 1,
                "component" : "syslog_spout_lumberjack",
                "publish" : [
                    {
                        "stream" : "logs",
                        "fields" : ["line"]
                    }
                ]
            }
        }
    ],
    "bolts" : [
        {
            "type" : "distinct_log_counter_bolt",
            "bolt_settings" : { },
            "storm_settings" : {
                "executors": 1,
                "component" : "distinct_log_counter",
                "subscribe" : [
                    {
                        "component" : "syslog_spout_lumberjack",
                        "stream" : "logs",
                        "grouping": "localOrShuffle"
                    }
                ],
                "publish" : [
                    {
                        "stream" : "logs",
                        "fields" : ["seq"]
                    }
                ]
            }
        },
        {
            "type": "elasticsearch_bolt",
            "bolt_settings": {
                "cluster_id": "es_search",
                "per_stream_settings" : [
                    {
                        "stream" : "logs",
                        "index" : { "type" : "daily", "prefix" : "events-%{tenant}-" },
                        "document_json_field" : "log",
                        "document_id_field" : "local_uuid"
                    }
                ]
            },
            "storm_settings": {
                "executors" : 1,
                "component": "elasticsearch_bolt",
                "subscribe": [
                    {
                        "component": "distinct_log_counter",
                        "stream": "logs",
                        "grouping" : "localOrShuffle"
                    }
                ]
            }
        }
    ],
    ...
}
```
Configure Injector
In order to inject logs into Logstash, we can use the Punchplatform log injector (punchplatform-log-injector.sh). Define a configuration as follows (sequence_injector.json):
```
{
    # Set here where you want to send your samples. It must be the input
    # point of your topology
    "destination" : { "proto" : "tcp", "host" : "127.0.0.1", "port" : 9901 },

    "load" : {
        "stats_publish_interval" : "2s",
        "message_throughput" : 59
    },

    # In this section you define what you inject
    "message" : {
        "payloads" : [
            "seq=%{counter}"
        ],
        "fields" : {
            "counter" : {
                "type" : "counter",
                "min" : 1
            }
        }
    }
}
```
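With this configuration, each injected message is a single line carrying an increasing counter, so the 300000 payloads sent in the Execute step below look like this:

```
seq=1
seq=2
seq=3
...
```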
Execute
Launch the Logstash pipeline:
```
bin/logstash -f first-pipeline.conf --config.reload.automatic
```
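At this point you can optionally check that the Logstash TCP input accepts connections. A quick sanity check, assuming netcat (nc) is available on the machine:

```
# Send one test line to the Logstash TCP input declared above (port 9901).
echo "seq=0" | nc localhost 9901
```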
Launch the topology:
```
punchplatform-topology.sh start-foreground -m local -topology /home/user/punchplatform-standalone-3.3.6-SNAPSHOT/conf/tenants/mytenant/channels/test/lmr_in_topology.json
```
Launch the injector:
```
punchplatform-log-injector.sh -c ./resources/injector/mytenant/sequence_injector.json -n 300000 -t 5000
```
See the logs directly in Kibana, or change the log level in logback-topology.xml to follow them in the topology output.
Example of results
In the topology logs:
```
[INFO] message="size of tuple map" size=0
[INFO] message="size of tuple map" size=0
[INFO] message="size of tuple map" size=17316
[ERROR] message="lumberjack peer stopped reading its acks" channel=[id: 0x214338bf, L:/127.0.0.1:29901 - R:/127.0.0.1:38072]
[ERROR] message="closed channel" channel=[id: 0x214338bf, L:/127.0.0.1:29901 ! R:/127.0.0.1:38072]
[ERROR] message="lumberjack peer stopped reading its acks" channel=[id: 0xe350d992, L:/127.0.0.1:29901 - R:/127.0.0.1:41178]
[ERROR] message="closed channel" channel=[id: 0xe350d992, L:/127.0.0.1:29901 ! R:/127.0.0.1:41178]
[INFO] message="size of tuple map" size=166839
[ERROR] message="lumberjack peer stopped reading its acks" channel=[id: 0x883a19f9, L:/127.0.0.1:29901 - R:/127.0.0.1:41316]
[ERROR] message="closed channel" channel=[id: 0x883a19f9, L:/127.0.0.1:29901 ! R:/127.0.0.1:41316]
[ERROR] message="lumberjack peer stopped reading its acks" channel=[id: 0x6b7301d3, L:/127.0.0.1:29901 - R:/127.0.0.1:41486]
[ERROR] message="closed channel" channel=[id: 0x6b7301d3, L:/127.0.0.1:29901 ! R:/127.0.0.1:41486]
[INFO] message="size of tuple map" size=300000
[INFO] message="size of tuple map" size=300000
```
In Kibana:
```
{
    "_index": "events-mytenant-2017.10.12",
    "_type": "log",
    "_id": "AV8QZz-GBPkFG95pVBHv",
    "_score": null,
    "_source": {
        "message": "size of tuple map",
        "size": 300000,
        "ts": "2017-10-12T13:45:22.821+02:00"
    },
    "fields": {
        "ts": [
            1507808722821
        ]
    },
    "sort": [
        1507808722821
    ]
}
```
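If you prefer the command line to Kibana, the same result can be checked straight against Elasticsearch. A minimal sketch, assuming the standalone Elasticsearch cluster listens on localhost:9200:

```
# Look for the counter document reporting the final size of 300000.
curl -s 'http://localhost:9200/events-mytenant-*/_search?q=size:300000&pretty'
```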
Well done! The count reaches 300000: all the logs arrived, even though we had disconnection problems along the way!