For end-users: your Kibana performance is theoretically unlimited. However, when you process billions of events and perform heavy aggregations, the resulting query complexity is often far too high and puts the database at risk. For that reason, a production PunchPlatform Elasticsearch comes with protections to reject overly complex queries. In these cases, on-the-fly enrichments, pre-computed aggregations, or reduced time frames are often the best way to cope with this limit. Watch out then: with great power comes great responsibility...
The visualisation is empty¶
This can have two origins:
- Either no document matches your filter, or
- The indexation/mapping of the requested field is incorrect.
For the first case, go back to the Discover tab and check whether any document matches your search.
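As a minimal sketch of what Discover issues under the hood, the following builds the kind of filtered query body you could run against Elasticsearch yourself (for instance with a `_count` request) to confirm whether any document matches. The field and value names are purely illustrative.

```python
def build_count_query(field, value, time_from, time_to):
    """Build an Elasticsearch query body combining a term filter
    (hypothetical field/value) with the Discover time range."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {field: value}},
                    {"range": {"@timestamp": {"gte": time_from, "lt": time_to}}},
                ]
            }
        }
    }

query = build_count_query("target.host.name", "myserver", "now-1h", "now")
```

If a `_count` request with this body returns zero hits, the empty visualisation is simply a matter of no matching documents, not of mapping.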
For the second, check that your field type matches the search you want to perform. For instance, running a range query over a field that is not mapped with a comparable type is nonsense. If Elasticsearch cannot make sense of your request, it will yield strange results.
If this looks correct, check the mapping (the Elasticsearch term for the binding between a field and a type: mainly string, integer, ip, geoloc and date) by going to the Discover tab, opening a document and looking for your field.
*(In Discover, each field is displayed with an icon indicating its type: geoip, date, host, string.)*
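You can also inspect the mapping directly from the Elasticsearch mapping API response (a `GET <index>/_mapping` call). The helper below is a small sketch, not a PunchPlatform utility: it walks a mapping body and returns the declared type of a possibly dotted field name.

```python
def field_type(mapping, field):
    """Return the declared type of a (possibly dotted) field in an
    Elasticsearch mapping body, or None if the field is absent."""
    node = mapping.get("properties", {})
    parts = field.split(".")
    for part in parts[:-1]:
        # descend into nested object properties
        node = node.get(part, {}).get("properties", {})
    return node.get(parts[-1], {}).get("type")

# Illustrative mapping body, as returned under <index>/mappings:
mapping = {
    "properties": {
        "target": {"properties": {"ip": {"type": "ip"}}},
        "message": {"type": "text"},
    }
}
```

Here `field_type(mapping, "target.ip")` returns `"ip"`, which should match the icon shown in Discover; a mismatch points at an indexation problem.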
If you think there is a problem in a field mapping, see the incorrect-mapping section, or contact the support: we could have made some mistakes!
The mapping is failing¶
Some mapping issues can be encountered while indexing the data. Kibana signals this behavior by flagging the conflicting field with a warning sign:
First, Kibana's mapping might be outdated. To force an update, go to the index pattern management page, ensure that the right pattern is selected, and then click on the orange refresh button next to its name:
If the problem still occurs when you go back to the previous section, contact the support.
The discover request is slow¶
This troubleshooting entry assumes you have read the warning at the beginning of the Troubleshooting section.
- Your time range may be too wide. Requests are processed day by day by the Elasticsearch database, which opens each associated index file. Try reducing your query to a smaller time range if possible, and check whether the problem persists. If the request takes too long for a proxy server sitting in between, split it into sub-requests covering smaller time ranges.
- The data indexation could also be at fault. To check it, reduce your request to a subset of filters until you determine which one is slow. Then try other small filters alone on this targeted field. If the request is still slow, contact the support about a possible indexation problem.
- See the Best Practices section to refine your search and avoid database overload.
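Since indices are organised per day, splitting a wide query into day-sized sub-requests is straightforward. The sketch below (names and the split strategy are our own, not a PunchPlatform API) cuts a time range into daily chunks you can query one at a time.

```python
from datetime import datetime, timedelta

def daily_subranges(start, end):
    """Split [start, end) into day-sized chunks, so each sub-request
    touches roughly one daily index at a time."""
    chunks = []
    current = start
    while current < end:
        nxt = min(current + timedelta(days=1), end)
        chunks.append((current, nxt))
        current = nxt
    return chunks

# A 2.5-day query becomes three smaller sub-requests:
chunks = daily_subranges(datetime(2020, 1, 1), datetime(2020, 1, 3, 12))
```

Running the sub-requests sequentially also keeps each individual response small enough for any intermediate proxy.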
The request is too big¶
First, please note that Kibana query length must not exceed 25,000 characters. This is not an Elastic or PunchPlatform limitation, but an HTTP RFC recommendation.
If you need to run such a query, first check whether you can factorize your request. Are all your host IPs part of a subnet? Are all the ports above a certain value? Also, if your target.host.name is myserver.my.datacenter.external.my.compan.com, can you match it by filtering on a shorter pattern only? Beyond the simplification, you will also gain query performance.
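For the subnet case, a quick check with Python's standard `ipaddress` module tells you whether a long OR-list of IP filters can be collapsed into a single CIDR filter. The function name is illustrative.

```python
import ipaddress

def covered_by_subnet(ips, cidr):
    """True if every IP in an OR-list falls inside the given subnet,
    meaning the whole list can be replaced by one CIDR filter."""
    network = ipaddress.ip_network(cidr)
    return all(ipaddress.ip_address(ip) in network for ip in ips)

# A list of individual term filters...
ips = ["192.168.1.10", "192.168.1.254"]
# ...that a single filter on 192.168.1.0/24 would cover:
covered_by_subnet(ips, "192.168.1.0/24")
```

When the check passes, a single range or CIDR filter replaces dozens of OR clauses, shrinking the query well below the length limit.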
If you are matching a lot of values (e.g. target.uri.url="punchplatform.com" OR target.uri.url="*test.com" OR ...), for instance to match IOCs, think about the aim of this request. Are these URLs suspect? If you can, move these enrichments into the PunchPlatform parsing section.
You may encounter these kinds of errors:
Do not hesitate to contact the support with all the information (time, payload, which tab you were in, what you were intending to do, etc.) if you suspect a platform error.