Below is an Elasticsearch configuration that has been tuned.
/etc/elasticsearch/elasticsearch.yml
cluster.name: estic12
node.name: "pcnode1"
node.master: true
node.data: false
path.data: /spare3
path.data sets the location of the data files of each index/shard allocated on the node, and it can hold multiple locations.
Note that there are no extra copies of the same data across those locations; in that respect it is similar to RAID 0. Though simple, it is a good option for people who don't want to manage RAID themselves.
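Here is how the multi-location form is configured, as a minimal sketch assuming a second hypothetical mount point /spare4 alongside the /spare3 used above:

path.data: /spare3,/spare4  # shards are spread across both locations, RAID-0 style, with no extra copies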
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["tic12-a42.trendmicro.com", "tic12.trendmicro.com", "tic12-a45.trendmicro.com"]
Multicast discovery is disabled, so nodes discover each other through this explicit unicast host list.
threadpool.bulk.type: fixed
threadpool.bulk.size: 100
threadpool.bulk.queue_size: 320
For bulk operations, the pool defaults to type fixed with a size equal to the number of available processors and a queue_size of 50.
The fixed thread pool holds a fixed number of threads to handle requests, with an optionally bounded queue for pending requests that have no thread to service them; once that queue is full, further requests are rejected.
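Annotated, the bulk pool settings above read as follows (a sketch; the numbers are the ones chosen above, not the defaults):

threadpool.bulk.type: fixed         # fixed number of worker threads
threadpool.bulk.size: 100           # 100 threads service bulk requests concurrently
threadpool.bulk.queue_size: 320     # up to 320 pending bulk requests may wait in the queue; beyond that they are rejected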
discovery.zen.minimum_master_nodes: 2 # (number of master-eligible nodes / 2) + 1
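As a quick check of that formula with integer division, assuming the three unicast hosts listed above are the cluster's master-eligible nodes:

# 3 master-eligible nodes: (3 / 2) + 1 = 1 + 1 = 2
# requiring a quorum of 2 masters prevents a split brain when the cluster partitions
discovery.zen.minimum_master_nodes: 2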
indices.memory.index_buffer_size: 30%
The indexing buffer setting controls how much memory is allocated for the indexing process.
It accepts either a percentage or a byte size value and defaults to 10%, meaning that 10% of the total memory allocated to a node will be used as the indexing buffer size.
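When a percentage is used, the same module also accepts optional absolute bounds; a sketch with illustrative values, not recommendations:

indices.memory.index_buffer_size: 30%
indices.memory.min_index_buffer_size: 96mb   # floor applied to the computed percentage (default 48mb)
indices.memory.max_index_buffer_size: 4gb    # optional ceiling, unbounded by default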
index.translog.flush_threshold_ops: 50000
Each shard has a transaction log (write-ahead log) associated with it. It guarantees that when an index or delete operation occurs, it is applied atomically, without "committing" the internal Lucene index for each request.
flush_threshold_ops controls after how many operations a flush is performed.
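The operation count is only one of the flush triggers; the translog module in the same Elasticsearch generation also flushes on size and on elapsed time, whichever threshold is hit first (the size and period values below are illustrative):

index.translog.flush_threshold_ops: 50000    # flush after this many operations
index.translog.flush_threshold_size: 512mb   # or once the translog grows to this size
index.translog.flush_threshold_period: 30m   # or after this much time since the last flush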
index.refresh_interval: 30s
The async refresh interval of a shard: how often to perform a refresh operation, which makes recent changes to the index visible to search.
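A common companion to lengthening the interval is to disable refresh entirely during a heavy bulk load and restore it afterwards; a minimal sketch:

index.refresh_interval: -1   # disables periodic refresh; set back to 30s (or call the refresh API) when the load finishes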