Why Elasticsearch fails to start or crashes (updated from time to time)
Note: the Elasticsearch logs are under <install directory>/isa/logs/elasticsearch/.
1. The log contains the following:
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
Cause: the process cannot open enough files; the user's maximum open-file limit is too low.
Solution: as root, edit this file: vi /etc/security/limits.conf
Add the following two lines at the end (isearch is the user the system was installed under):
isearch - nproc 10240
isearch - nofile 65536
After saving the file, log in again with su - isearch and verify that the new limits are in effect:
[isearch@isearch181:~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 46663
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10240
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Finally, restart Elasticsearch.
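Instead of reading the full ulimit -a dump, you can also check just the two limits that matter here. A minimal sketch; run it as the service user (e.g. after su - isearch):

```shell
# Print only the two limits raised in /etc/security/limits.conf:
# open files (nofile) and max user processes (nproc).
echo "open files (nofile):        $(ulimit -n)"
echo "max user processes (nproc): $(ulimit -u)"
```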
2. The log contains the following:
max number of threads [1024] for user [es] likely too low, increase to at least [2048]
Cause: the process cannot create enough threads; the user's maximum thread limit is too low.
Solution: same as for problem 1 (the nproc line in /etc/security/limits.conf covers this).
3. The log contains the following:
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Cause: the maximum number of virtual memory areas is too low.
Solution: as root, edit this file: vi /etc/sysctl.conf
Add the following setting:
vm.max_map_count=655360
Then apply it with:
sysctl -p
Finally, restart Elasticsearch.
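After sysctl -p you can confirm the kernel actually picked up the new value (a quick sanity check on Linux, not part of the original steps):

```shell
# Read the live kernel value; after the change above it should be 655360,
# which is comfortably above the 262144 minimum from the error message.
cat /proc/sys/vm/max_map_count
```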
4. The Elasticsearch license has expired
If the log contains the following during startup, the license has expired:
[2018-07-26 18:50:33,012][ERROR][license.plugin.core ] [localhost]
#
# LICENSE EXPIRED ON [Thursday, October 06, 2016]. IF YOU HAVE A NEW LICENSE, PLEASE
# UPDATE IT. OTHERWISE, PLEASE REACH OUT TO YOUR SUPPORT CONTACT.
#
# COMMERCIAL PLUGINS OPERATING WITH REDUCED FUNCTIONALITY
# - graph
# - Graph explore APIs are disabled
If it has expired, you need to import a new Elasticsearch license. I provide one for download here: click to download
The download contains two versions of the license, one for Elasticsearch 2.0 and one for Elasticsearch 5.x.
Import it with:
curl -XPUT 'http://192.168.0.130:9200/_license' -d @/test/elasticsearch/es_license_2018-07-28.json
Check it with:
curl -XGET 'http://192.168.0.130:9200/_license'
5. Elasticsearch's maximum result window setting
If the following appears in the Elasticsearch log, a search has exceeded Elasticsearch's default maximum result window of 10000:
RemoteTransportException[[localhost][192.168.0.221:9300][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Result window is too large, from + size must be less than or equal to: [10000] but was [10100]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter.];
Caused by: QueryPhaseExecutionException[Result window is too large, from + size must be less than or equal to: [10000] but was [10100]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter.]
at org.elasticsearch.search.internal.DefaultSearchContext.preProcess(DefaultSearchContext.java:212)
at org.elasticsearch.search.query.QueryPhase.preProcess(QueryPhase.java:103)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:676)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:620)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:371)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Solution:
In the configuration file /test/isa/conf/elasticsearch/elasticsearch.yml, add the following on the last line:
index.max_result_window: 10000000
Save the file and restart Elasticsearch.
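As the error message notes, index.max_result_window is an index-level setting, so it can also be raised at runtime through the index settings API without a restart. The sketch below only prints the command rather than executing it; the host comes from this article's earlier examples and the index name is a placeholder:

```shell
ES=http://192.168.0.130:9200   # placeholder host from this article - adjust
INDEX=my_index                 # hypothetical index name - adjust
CMD="curl -XPUT $ES/$INDEX/_settings -d '{\"index.max_result_window\": 10000000}'"
# Printed rather than executed, so the command can be reviewed first:
echo "$CMD"
```

For large exports, the scroll API the error message points to is usually a better fit than raising this limit.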
6. The disk holding the Elasticsearch data has reached 90% usage
[2018-11-12T15:01:29,365][WARN ][o.e.c.r.a.DiskThresholdMonitor] [uebaserver] high disk watermark [90%] exceeded on [IrOiSNTtS-WDwYzppf-V5Q][uebaserver][/isearch/elasticsearch/data/nodes/0] free: 16mb[0.1%], shards will be relocated away from this node
[2018-11-12T15:01:00,432][WARN ][o.e.i.e.Engine ] [uebaserver] [.sys-index][3] failed engine [lucene commit failed] java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_91]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_91]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_91]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_91]
at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:419) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_91]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_91]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_91]
at org.apache.lucene.store.OutputStreamIndexOutput.getChecksum(OutputStreamIndexOutput.java:80) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
at org.apache.lucene.codecs.CodecUtil.writeCRC(CodecUtil.java:548) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:393) ~[lucene-core-6.6.1.jar:6.6.1 9aa465a89b64ff2dabe7b4d50c472de32c298683 - varunthacker - 2017-08-29 21:54:39]
at org.apache.lucene.codecs.lucene54.Lucene54DocValuesConsumer.close(Lucene54DocValuesConsumer.java:761)
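When this happens, first confirm how full the data disk actually is. A minimal check; the data path below comes from the log above (adjust to yours), and the curl line is commented out because it needs a live cluster and the host is a placeholder:

```shell
# Show disk usage for the Elasticsearch data path
# (falls back to listing all mounts if the path does not exist here).
df -h /isearch/elasticsearch/data 2>/dev/null || df -h
# Per-node disk usage as the cluster itself sees it (uncomment, adjust host):
# curl 'http://192.168.0.130:9200/_cat/allocation?v'
```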
Solution: free up space on the data disk (or move the data path to a larger disk), then restart Elasticsearch once usage is back below the watermark.
7. Queries against Elasticsearch report "Data too large"
Keyword: Data too large
In elasticsearch.yml, configure indices.breaker.fielddata.limit; the default is 60% (of the JVM heap). Adjust it to fit your situation and restart the cluster after the change.
You can also configure indices.fielddata.cache.size to evict old fielddata entries so that new data can be loaded, which avoids queries failing to see newly inserted data.
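A sketch of how the two settings might look in elasticsearch.yml; the percentages are illustrative, not recommendations from this article:

```yaml
# elasticsearch.yml - illustrative values, tune for your heap and workload
# Circuit breaker: maximum share of the heap fielddata may occupy.
indices.breaker.fielddata.limit: 60%
# Evict old fielddata once the cache grows past this size,
# so newly inserted data can still be loaded.
indices.fielddata.cache.size: 40%
```

Note that cache.size should stay below breaker.limit, otherwise the breaker trips before the cache ever evicts anything.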