Elasticsearch unassigned shards: CircuitBreakingException [parent] Data too large

ht4b089n asked on 2021-06-14 in ElasticSearch

I received an alert saying that Elasticsearch has 2 unassigned shards. I made the API call below to gather more details.

curl -s http://localhost:9200/_cluster/allocation/explain | python -m json.tool

The output is below:

"allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
    "can_allocate": "no",
    "current_state": "unassigned",
    "index": "docs_0_1603929645264",
    "node_allocation_decisions": [
        {
            "deciders": [
                {
                    "decider": "max_retry",
                    "decision": "NO",
                    "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-30T06:10:16.305Z], failed_attempts[5], delayed=false, details[failed shard on node [o_9jyrmOSca9T12J4bY0Nw]: failed recovery, failure RecoveryFailedException[[docs_0_1603929645264][0]: Recovery failed from {elasticsearch-data-1}{fIaSuZsNTwODgZnt90f7kQ}{Qxl9iPacQVS-tN_t4YJqrw}{IP1}{IP:9300} into {elasticsearch-data-0}{o_9jyrmOSca9T12J4bY0Nw}{1w5mgwy0RYqBQ9c-qA_6Hw}{IP}{IP:9300}]; nested: RemoteTransportException[[elasticsearch-data-1][IP:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] phase1 failed]; nested: RecoverFilesRecoveryException[Failed to transfer [129] files with total size of [4.4gb]]; nested: RemoteTransportException[[elasticsearch-data-0][IP2:9300][internal:index/shard/recovery/file_chunk]]; nested: 
CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [1972835086/1.8gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1972833976/1.8gb], new bytes reserved: [1110/1kb]]; ], allocation_status[no_attempt]]]"
                }
            ],
            "node_decision": "no",
            "node_id": "1XEXS92jTK-asdfasdfasdf",
            "node_name": "elasticsearch-data-2",
            "transport_address": "IP1:9300"
        },
        {
            "deciders": [
                {
                    "decider": "max_retry",
                    "decision": "NO",
                    "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-30T06:10:16.305Z], failed_attempts[5], delayed=false, details[failed shard on node [o_9jyrmOSca9T12J4bY0Nw]: failed recovery, failure RecoveryFailedException[[docs_0_1603929645264][0]: Recovery failed from {elasticsearch-data-1}{fIaSuZsNTwODgZnt90f7kQ}{Qxl9iPacQVS-tN_t4YJqrw}{IP1}{IP1:9300} into {elasticsearch-data-0}{o_9jyrmOSca9T12J4bY0Nw}{1w5mgwy0RYqBQ9c-qA_6Hw}{IP2}{IP2:9300}]; nested: RemoteTransportException[[elasticsearch-data-1][IP1:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] phase1 failed]; nested: RecoverFilesRecoveryException[Failed to transfer [129] files with total size of [4.4gb]]; nested: RemoteTransportException[[elasticsearch-data-0][IP2:9300][internal:index/shard/recovery/file_chunk]]; nested: 
CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [1972835086/1.8gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1972833976/1.8gb], new bytes reserved: [1110/1kb]]; ], allocation_status[no_attempt]]]"
                },
                {
                    "decider": "same_shard",
                    "decision": "NO",
                    "explanation": "the shard cannot be allocated to the same node on which a copy of the shard already exists [[docs_0_1603929645264][0], node[fIaSuZsNTwODgZnt90f7kQ], [P], s[STARTED], a[id=stHnyqjLQ7OwFbaqs5vWqA]]"
                }
            ],
            "node_decision": "no",
            "node_id": "fIaSuZsNTwODgZnt90f7kQ",
            "node_name": "elasticsearch-data-1",
            "transport_address": "IP1:9300"
        },
        {
            "deciders": [
                {
                    "decider": "max_retry",
                    "decision": "NO",
                    "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-30T06:10:16.305Z], failed_attempts[5], delayed=false, details[failed shard on node [o_9jyrmOSca9T12J4bY0Nw]: failed recovery, failure RecoveryFailedException[[docs_0_1603929645264][0]: Recovery failed from {elasticsearch-data-1}{fIaSuZsNTwODgZnt90f7kQ}{Qxl9iPacQVS-tN_t4YJqrw}{IP1}{IP1:9300} into {elasticsearch-data-0}{o_9jyrmOSca9T12J4bY0Nw}{1w5mgwy0RYqBQ9c-qA_6Hw}{Ip2}{IP2:9300}]; nested: RemoteTransportException[[elasticsearch-data-1][IP1:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] phase1 failed]; nested: RecoverFilesRecoveryException[Failed to transfer [129] files with total size of [4.4gb]]; nested: RemoteTransportException[[elasticsearch-data-0][IP2:9300][internal:index/shard/recovery/file_chunk]]; nested: 
CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [1972835086/1.8gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1972833976/1.8gb], new bytes reserved: [1110/1kb]]; ], allocation_status[no_attempt]]]"
                }
            ],
            "node_decision": "no",
            "node_id": "o_9jyrmOSca9T12J4bY0Nw",
            "node_name": "elasticsearch-data-0",
            "transport_address": "IP2:9300"
        }
    ],
    "primary": false,
    "shard": 0,
    "unassigned_info": {
        "at": "2020-10-30T06:10:16.305Z",
        "details": "failed shard on node [o_9jyrmOSca9T12J4bY0Nw]: failed recovery, failure RecoveryFailedException[[docs_0_1603929645264][0]: Recovery failed from {elasticsearch-data-1}{fIaSuZsNTwODgZnt90f7kQ}{Qxl9iPacQVS-tN_t4YJqrw}{IP1}{IP1:9300} into {elasticsearch-data-0}{o_9jyrmOSca9T12J4bY0Nw}{1w5mgwy0RYqBQ9c-qA_6Hw}{IP2}{IP2:9300}]; nested: RemoteTransportException[[elasticsearch-data-1][IP1:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] phase1 failed]; nested: RecoverFilesRecoveryException[Failed to transfer [129] files with total size of [4.4gb]]; nested: RemoteTransportException[[elasticsearch-data-0][IP2:9300][internal:index/shard/recovery/file_chunk]]; nested: 
CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [1972835086/1.8gb], which is larger than the limit of [1972122419/1.8gb], real usage: [1972833976/1.8gb], new bytes reserved: [1110/1kb]]; ",
        "failed_allocation_attempts": 5,
        "last_allocation_status": "no_attempt",
        "reason": "ALLOCATION_FAILED"
    }
}

I checked the circuit breaker configuration:

curl -X GET "localhost:9200/_nodes/stats/breaker?pretty"

I can see that the parent breaker limit size on the 3 nodes (elasticsearch-data-0, elasticsearch-data-1 and elasticsearch-data-2) looks like this:

"parent" : {
          "limit_size_in_bytes" : 1972122419,
          "limit_size" : "1.8gb",
          "estimated_size_in_bytes" : 1648057776,
          "estimated_size" : "1.5gb",
          "overhead" : 1.0,
          "tripped" : 139
        }

I referred to this answer https://stackoverflow.com/a/61954408 and planned to increase either the circuit breaker memory percentage or the overall JVM heap.
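For the breaker percentage part of that plan, indices.breaker.total.limit is a dynamic setting, so (a minimal sketch, assuming the node answers on localhost:9200 and treating 98% purely as an illustrative value) it could be changed via the cluster settings API without a restart:

# Illustrative only: raise the parent breaker limit above the default 95% of heap.
# "transient" lasts until the next full cluster restart; "persistent" would survive it.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '
{
  "transient": {
    "indices.breaker.total.limit": "98%"
  }
}'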
This is a k8s environment, and the Elasticsearch data nodes are deployed as a StatefulSet with 3 replicas. When I describe the StatefulSet, I can see the env variables defined below.

Containers:
   elasticsearch:
    Image:      custom/elasticsearch-oss-s3:7.0.0
    Port:       9300/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     10500m
      memory:  21Gi
    Requests:
      cpu:      10
      memory:   20Gi
    Environment:
      DISCOVERY_SERVICE:     elasticsearch-discovery
      NODE_MASTER:           false
      PROCESSORS:            11 (limits.cpu)
      ES_JAVA_OPTS:          -Djava.net.preferIPv4Stack=true -Xms2048m -Xmx2048m

So the heap size appears to be 2048m.
I logged into one of the elasticsearch data pods, and under the elasticsearch config directory I see the files below.

elasticsearch.keystore  elasticsearch.yml  jvm.options  log4j2.properties  repository-s3

elasticsearch.yml does not have any heap configuration. It only has things like the master node names, etc.
Below is the jvm.options file:


## JVM configuration

# Xms represents the initial size of total heap space

# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

## GC configuration

-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## DNS cache policy

-Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10

# pre-touch memory pages used by the JVM during initialization

-XX:+AlwaysPreTouch

## basic

# explicitly set the stack size

-Xss1m

# set to headless, just in case

-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)

-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one

-Djna.nosys=true

# turn off a JDK optimization that throws away stack traces for common

# exceptions because stack traces are important for debugging

-XX:-OmitStackTraceInFastThrow

# flags to configure Netty

-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2

-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true

-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails

# heap dumps are created in the working directory of the JVM

-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and

# has sufficient space

-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs

-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging

8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging

9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m

# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise

# time/date parsing will break in an incompatible way for some date patterns and locals

9-:-Djava.locale.providers=COMPAT

From the above, the total heap size appears to be 1g.
But from the env variables defined in the StatefulSet for this pod, it appears to be 2048m.
Which one is correct?
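To check which value is actually in effect (a minimal sketch, assuming the node is reachable on localhost:9200 like the calls above), the node stats API reports the heap the JVM is really running with:

# heap_max_in_bytes reports the heap limit the JVM actually started with,
# regardless of whether jvm.options or ES_JAVA_OPTS set the -Xmx that won.
curl -s "localhost:9200/_nodes/stats/jvm" | python -m json.tool | grep heap_max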
Now, from the link below:
Circuit breaker settings | Elasticsearch
The parent-level circuit breaker can be configured with the following settings:
indices.breaker.total.use_real_memory (Static) Determines whether the parent breaker should take real memory usage into account (true) or only consider the amount reserved by child circuit breakers (false). Defaults to true.
indices.breaker.total.limit (Dynamic) Starting limit for the overall parent breaker. Defaults to 70% of JVM heap if indices.breaker.total.use_real_memory is false. If indices.breaker.total.use_real_memory is true, defaults to 95% of JVM heap.
But the limit shown in the error and in the breaker stats I queried is 1972122419 bytes (1.8gb), which doesn't look like 95% of either 2048m or 1g.
Now, how can I increase the heap or the parent breaker's memory limit to get rid of this error?


gev0vcfq 1#

There are two things here: the shard allocation exception and the circuit breaker exception (which looks like a nested exception).
First, re-trigger the allocation in your cluster with the command below, since all the previous retries have failed; if you look closely, the same command is suggested in your exception message. Refer to this related GitHub issue comment for more information on the command.
curl -X POST ":9200/_cluster/reroute?retry_failed=true"
If that still doesn't work, you will have to fix the parent circuit breaker exception: use the http://localhost:9200/_nodes/stats API to find out the exact heap of your ES nodes and increase it accordingly.
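If the heap does turn out to be too small, it has to be raised through the StatefulSet, since ES_JAVA_OPTS is what sets -Xms/-Xmx in this deployment. A minimal sketch, assuming the StatefulSet is named elasticsearch-data (inferred from the pod names, so treat it as hypothetical) and that 4g is an example value that still fits comfortably inside the 21Gi container memory limit:

# Hypothetical values: the StatefulSet name is inferred from the pod names,
# and 4g is only an example; keep -Xms equal to -Xmx.
kubectl set env statefulset/elasticsearch-data \
  ES_JAVA_OPTS="-Djava.net.preferIPv4Stack=true -Xms4g -Xmx4g"

With the default RollingUpdate strategy this rolls the data pods one by one; once they are back up, the reroute?retry_failed=true call can be repeated.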
