
Troubleshooting no logs in Kibana

Why do this

Use this guide when expected documents do not show up in Kibana, although no error is reported at query time.

Most frequent reasons

There are many reasons why an Elasticsearch query can return 0 documents:

  • Some part of the source channels is stopped or not working
  • No events are arriving at the entry point of the source channels
  • Incoming events have an altered format/content that prevents normal parsing by the punchlets, or normal indexing by Elasticsearch
  • The query targets a time period in which no documents are present yet, due to some lag/backlog in the processing chain
  • The query targets a time period in which no documents are present yet, due to a time/timezone difference between the user's web browser and the Elasticsearch server
  • The query targets an index pattern that does not match the index names, or that matches an alias which is not up to date (i.e. does not contain all the actual Elasticsearch indices)

Investigation guidelines

1 - Reproduce the problem and get needed data

  • What is the query (index pattern used, time scope, active filters)? This may imply editing the dashboard and its visualizations to understand the query settings
  • Reproduce the problem in the "Discover" tab, with a manual selection of index pattern, time scope and filters
  • Check whether the same query retrieves data when applied to older indices (e.g. last week's)
  • Go to Kibana Management -> Index Patterns -> YourQueryIndexPattern and note down the time filter field name in the right pane (under the index pattern name)
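The old-indices check can also be done directly against Elasticsearch with a count query. A minimal sketch, assuming a mytenant-events-* index pattern and an @timestamp time filter field (both are placeholders to adapt):

```shell
# Count documents of the last 7 days in the indices matched by the pattern.
# The index pattern and the @timestamp field name are assumptions -- use
# the values noted from your Kibana index pattern.
curl -s "<es_cluster_url>:9200/mytenant-events-*/_count" \
  -H 'content-type: application/json' \
  -d '{"query":{"range":{"@timestamp":{"gte":"now-7d/d","lte":"now"}}}}'
```

A count of 0 here, with the same pattern and time scope as the Kibana query, rules out a Kibana-side problem.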

2 - Is Elasticsearch fully healthy?

        curl <es_cluster_url>:9200/_cat/health
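If the one-line _cat output is hard to read, the JSON _cluster/health endpoint returns named fields that are easy to extract with jq. A minimal sketch (the sample payload below is illustrative, not real cluster output):

```shell
# Against a real cluster you would run:
#   curl -s <es_cluster_url>:9200/_cluster/health | jq -r .status
# Offline illustration with a sample health payload:
echo '{"cluster_name":"es_demo","status":"yellow","number_of_nodes":1}' \
  | jq -r .status
# prints: yellow
```

A 'red' status means at least one primary shard is unassigned and queries on the affected indices can silently miss data.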

3 - Are the expected indices present?

  • Are indices matching the index pattern present? Do they contain documents?

        curl <es_cluster_url>:9200/_cat/indices?v

    ==> Check if all indices are green/yellow. Otherwise investigate using the Elasticsearch documentation (maybe some Elasticsearch nodes are down that contain a shard of this index, and the replicas of this shard)

    ==> Check if an index is missing. In this case the issue may lie before Elasticsearch indexing ==> check the channels status and metrics, Storm failures, exceptions and logs, and check the health of the PunchPlatform through the Kibana platform monitoring dashboard (or the punchplatform and punchplatform-monitoring-current indices).

    ==> Check if indices with 'errors' in their name have been created, because production channels are configured to index their errors

  • Are the indices in the required alias (if an alias is used by the Kibana index pattern)?

        curl <es_cluster_url>:9200/_cat/aliases?v

    ==> If not:

    • fix your Elasticsearch index template and push it to your cluster so that the next days' indices are automatically included in the expected alias

      . Elasticsearch index templates are often stored in $PUNCHPLATFORM_CONF_DIR/resources/elasticsearch/templates

      . to see the existing templates in Elasticsearch:

      curl <es_cluster_url>:9200/_template | jq keys

      . to push a template, use the _template REST API of Elasticsearch:

      curl -H 'content-type: application/json' -XPUT <es_cluster_url>:9200/_template/template_name -d @templateFile

    • fix the existing aliases using the Elasticsearch alias API:

          curl -XPOST <es_cluster_url>:9200/_aliases -H 'content-type: application/json' -d '{"actions":[{"add":{"index":"some_index_name", "alias":"the_alias_used_from_kibana_index_pattern"}}]}'
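After fixing the template or the aliases, it is worth verifying that the alias now resolves to the expected indices. A sketch (the alias name is a placeholder):

```shell
# List the indices behind a given alias; the alias name is a placeholder
# to replace with the one used by your Kibana index pattern.
curl -s "<es_cluster_url>:9200/_cat/aliases/the_alias_used_from_kibana_index_pattern?v"
```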

4 - What kind of documents are present in the Elasticsearch index?

        curl <es_cluster_url>:9200/<last_index_name>/_search?size=1 | jq .

==> Your documents may lack the timestamp field that the index pattern is configured to require (maybe a parsing or source-log problem), or this timestamp may contain a date outside the time scope of the query
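A quick way to test the missing-timestamp hypothesis is to count the documents that lack the time filter field. The @timestamp field name below is an assumption; use the one noted from the Kibana index pattern:

```shell
# Count documents that have no @timestamp field at all; a non-zero
# count points to a parsing or source-log problem.
curl -s "<es_cluster_url>:9200/<last_index_name>/_count" \
  -H 'content-type: application/json' \
  -d '{"query":{"bool":{"must_not":{"exists":{"field":"@timestamp"}}}}}'
```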

==> Your documents may be 'error documents' indicating an indexing or parsing problem. In this case:

  • Identify the faulty field and its associated value in the error document in Elasticsearch
  • Check whether the parsing is correct (i.e. the field value is correct for the raw message received at the entry point of the system, which is also stored in the error document)
  • If the parsing is correct but Elasticsearch rejected the field as not matching the expected type, then update, document and apply the type mapping in the Elasticsearch index template
  • If the parsing is incorrect, then have the faulty parser extended/fixed to process this specific log (provide the raw log and the exception message to the person in charge of developing the fix)
  • If the log is not consistent with the expected format, then request that the source device be reconfigured
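To start this investigation, one error document is usually enough, since it embeds both the exception message and the raw input log. A sketch, assuming the error indices contain 'errors' in their name:

```shell
# Fetch one error document; its _source usually carries the raw log
# and the exception message needed to reproduce and fix the problem.
curl -s "<es_cluster_url>:9200/*errors*/_search?size=1" | jq '.hits.hits[0]._source'
```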