May 1
The number of open files does not depend on the number of documents.
A shard does not come for free. Each shard can take around ~150 open file descriptors (sockets, segment files), and up to 400-500 if it is actively being indexed.
So take care with the number of shards: if you have 5 shards per index and 2000 indices per node, you would have to prepare for 10k shards * 150, i.e. roughly 1.5 million open file descriptors. That is a challenge on a single RHEL 7 system, which provides 131072 file descriptors by default (cat /proc/sys/fs/file-max), so you would have to raise the system limits even though that default is already very high.
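As a back-of-the-envelope check, a sketch like the following (using the rough per-shard numbers above, and an assumed fraction of shards being actively indexed) compares the expected descriptor demand against the kernel limit:

# Rough estimate of file descriptor demand on one node, based on the
# numbers quoted above: ~150 descriptors per idle shard, up to ~500
# while a shard is actively being indexed (both are rough assumptions).

FDS_PER_IDLE_SHARD = 150
FDS_PER_ACTIVE_SHARD = 500

def estimated_fds(indices_per_node, shards_per_index, active_fraction=0.1):
    """Back-of-the-envelope descriptor count for one node."""
    shards = indices_per_node * shards_per_index
    active = int(shards * active_fraction)   # assumed share of shards indexing
    idle = shards - active
    return idle * FDS_PER_IDLE_SHARD + active * FDS_PER_ACTIVE_SHARD

def system_file_max(path="/proc/sys/fs/file-max"):
    """Read the kernel-wide open file limit (the post cites 131072 as the RHEL 7 default)."""
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    need = estimated_fds(indices_per_node=2000, shards_per_index=5)
    have = system_file_max()
    print(f"estimated descriptors needed: {need}, system file-max: {have}")
    if need > have:
        print("raise fs.file-max (and the per-process nofile limit) or reduce shard count")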
If you are limited to only 2 nodes, I recommend using fewer shards and redesigning the application for fewer indices (or even a single index). You can look at shard routing and index aliasing to see if that helps.
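As a rough illustration of the single-index approach, the sketch below uses the Elasticsearch _aliases API to give each customer a filtered alias with a routing value, so all of that customer's documents land on one shard of a shared index. The node URL, the index name shared_index, the alias naming scheme, and the customer_id field are hypothetical placeholders, not something from the original application.

# Minimal sketch: one shared index plus a filtered, routed alias per customer,
# instead of one index per customer.
import requests

ES = "http://localhost:9200"   # assumed local node

def add_customer_alias(customer_id):
    action = {
        "actions": [
            {
                "add": {
                    "index": "shared_index",             # one big index instead of thousands
                    "alias": f"customer_{customer_id}",  # per-customer view of that index
                    "routing": str(customer_id),         # pin reads/writes to one shard
                    "filter": {"term": {"customer_id": customer_id}},
                }
            }
        ]
    }
    r = requests.post(f"{ES}/_aliases", json=action)
    r.raise_for_status()
    return r.json()

The application can then index and search against customer_<id> much as it did against a dedicated index before, while the cluster only has to keep the shards of the single shared index open.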
Jörg