By default, Elasticsearch is tuned for the best trade-off between write performance and query performance for the majority of use cases. In this blog post we cover some parameters that can be configured to improve query-time aggregation performance, with some of these improvements coming at the expense of write performance.
Note that this blog post does not present anything that is not already documented elsewhere. The goal is to pull together relevant information into a short, digestible post that provides a few pointers on how to improve slow Elasticsearch aggregations.
In addition to this blog, read the following official Elasticsearch documentation for more details on tuning an Elasticsearch cluster:
A description of the refresh interval
In this section we give an overview of the refresh interval, which is useful for understanding subsequent sections of this blog.
As documents are inserted into Elasticsearch, they are first written into a buffer and then periodically flushed from that buffer into segments when a refresh occurs. By default, refreshes occur every second; this period is known as the refresh_interval. Newly inserted documents are not searchable until they have been flushed into segments.
In the background, small segments are merged into larger segments, those larger segments are merged into even larger segments, and so on. Frequent refreshes therefore create many small segments and force Elasticsearch to do more background merging than it would with less frequent refreshes, which produce larger segments to start with.
While frequent refreshes are necessary if near-real-time search functionality is required for newly inserted data, they may not be necessary in other cases. If an application can wait longer for recent data to appear in search results, then the refresh interval should be increased to improve the efficiency of data ingestion, which in turn should free up resources to help query performance.
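As a sketch, the refresh interval can be increased on an existing index with the index settings API (the index name my-index is hypothetical; choose an interval that matches how fresh your search results need to be):

```json
PUT /my-index/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
```

Setting refresh_interval to -1 disables periodic refreshes entirely, which can be useful during bulk loads, but remember to restore a normal interval afterwards or new documents will not become searchable.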
Enable eager global ordinals to improve the performance of high-cardinality terms aggregations
The performance of terms aggregations on high-cardinality fields (fields with thousands or millions of unique values) may be slow and unpredictable, in part because, by default, global ordinals are built lazily by the first aggregation that runs after a refresh, as described here.
Building global ordinals eagerly should speed up such aggregations and make response times more consistent, because the related data structure is created when segments are refreshed rather than by the first query after each refresh.
Note that if global ordinals are eagerly built, this will impact write performance because new global ordinals will be created on every refresh. To minimize the additional workload caused by frequently building global ordinals due to frequent refreshes, increase the refresh interval.
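A minimal sketch of enabling eager global ordinals on a keyword field in the mapping (the index name my-index and field name user_id are hypothetical):

```json
PUT /my-index
{
  "mappings": {
    "properties": {
      "user_id": {
        "type": "keyword",
        "eager_global_ordinals": true
      }
    }
  }
}
```

The eager_global_ordinals mapping parameter can also be updated on an existing field via the update mapping API, so it can be toggled off again if the write-time cost proves too high.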
Pre-sort indexes at insertion time
Index sorting can be used to pre-sort indices at insertion time as opposed to at query time, which should improve the performance of range queries and sort operations. Note that this will increase the cost of indexing documents into Elasticsearch.
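Index sorting must be configured when the index is created. A sketch of an index sorted by a date field in descending order (the index name my-index and field name timestamp are hypothetical):

```json
PUT /my-index
{
  "settings": {
    "index": {
      "sort.field": "timestamp",
      "sort.order": "desc"
    }
  },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}
```

This is particularly useful for time-series data, where queries frequently sort on the timestamp and can terminate early once enough sorted documents have been collected.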
Take advantage of the node query cache (cache filter results)
The node query cache can be used to efficiently cache the results of filter operations. This is effective when the same filter is executed multiple times, but changing even a single value within the filter means that a new filter result must be computed.
For example, queries that use "now" within the filter context cannot be cached, because the value of now is constantly changing. Such requests can be made cacheable by using date math rounding to round now to the nearest minute, hour, etc., so that the filter result can be cached and re-used by queries that arrive within the same rounded interval.
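As a sketch, a range filter that rounds now down to the hour with date math, making the filter identical (and therefore cacheable) for all queries issued within the same hour (the index name my-index and field name timestamp are hypothetical):

```json
GET /my-index/_search
{
  "query": {
    "bool": {
      "filter": {
        "range": {
          "timestamp": {
            "gte": "now-1h/h",
            "lte": "now/h"
          }
        }
      }
    }
  }
}
```

The trailing /h rounds the timestamp down to the start of the hour; /m and /d round to the minute and day respectively. The trade-off is that results may lag by up to one rounding interval.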
General Elasticsearch settings
In this section, we present general guidance on areas that can impact Elasticsearch performance.
The JVM heap size should generally be set to a maximum of 30GB. This blog gives a good description of why setting this value higher than 30GB is generally undesirable.
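As a sketch, the heap size is set in the jvm.options file (or a file under jvm.options.d/). The minimum and maximum heap sizes should be set to the same value so the heap is fully allocated at startup:

```
# Set both the minimum (-Xms) and maximum (-Xmx) heap size to the same value,
# keeping it below the compressed-object-pointers threshold (~30GB)
-Xms30g
-Xmx30g
```

Staying below roughly 30GB keeps compressed ordinary object pointers (compressed oops) enabled, which is the reason larger heaps are generally undesirable.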
Swapping is bad for performance and should be disabled.
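One way to prevent the operating system from swapping out Elasticsearch memory, if disabling swap entirely at the OS level is not an option, is to enable memory locking in elasticsearch.yml:

```
# elasticsearch.yml: ask the JVM to lock its memory so it cannot be swapped out
bootstrap.memory_lock: true
```

Note that this requires the elasticsearch user to have permission to lock memory (for example via memlock limits), otherwise the node may fail its bootstrap checks.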
Number of shards and shard size
Too many shards will cause performance problems. This article gives rules of thumb about how many shards can be supported per GB of Heap and what target shard sizes should be.
If spinning disks are used, then the maximum thread count of the merge scheduler should be set to 1, since concurrent merges perform poorly on non-SSD storage.
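A sketch of applying this setting with the index settings API (the index name my-index is hypothetical):

```json
PUT /my-index/_settings
{
  "index": {
    "merge.scheduler.max_thread_count": 1
  }
}
```

On SSDs the default value, which scales with the number of available processors, is usually appropriate and should be left unchanged.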
In this blog post, we have covered a few strategies for improving the aggregation performance of your Elasticsearch cluster.