Elasticsearch keeps initializing a specific shard on different data nodes

Asked by vs91vp4v on 2021-06-10 in Elasticsearch

I received an ElasticsearchStatus alert saying the cluster status is yellow. Running the cluster health API, I see the following:
curl -X GET http://localhost:9200/_cluster/health/

{"cluster_name":"my-elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":8,"number_of_data_nodes":3,"active_primary_shards":220,"active_shards":438,"relocating_shards":0,"initializing_shards":2,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":99.54545454545455}

initializing_shards is 2, so I dug further and ran the call below:
curl -X GET http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep INIT

graph_vertex_24_18549 0 r INITIALIZING ALLOCATION_FAILED
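
(curl's progress meter was interleaved with this output when it was captured; the -s flag suppresses the meter so only the shard line is printed:)

curl -s -X GET http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep INIT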

curl -X GET http://localhost:9200/_cat/shards/graph_vertex_24_18549

graph_vertex_24_18549 0 p STARTED      8373375 8.4gb IP1   elasticsearch-data-1
graph_vertex_24_18549 0 r INITIALIZING               IP2 elasticsearch-data-2

Re-running the same command a few minutes later shows that it is now initializing on elasticsearch-data-0 instead. See below:

graph_vertex_24_18549 0 p STARTED      8373375 8.4gb IP1   elasticsearch-data-1
graph_vertex_24_18549 0 r INITIALIZING               IP0   elasticsearch-data-0

And if I run it again a few minutes after that, I can see it initializing on elasticsearch-data-2 once more. But it never reaches STARTED.
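
(That bouncing is presumably the allocator retrying the failed replica on one node after another; by default Elasticsearch gives up after index.allocation.max_retries failed attempts, which is 5 unless overridden. The effective value can be read back with the settings API, for example:)

curl -X GET 'http://localhost:9200/graph_vertex_24_18549/_settings?include_defaults=true&filter_path=*.defaults.index.allocation.max_retries&pretty'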
curl -X GET http://localhost:9200/_cat/allocation?v

shards disk.indices disk.used disk.avail disk.total disk.percent host          ip            node
   147      162.2gb   183.8gb    308.1gb      492gb           37 IP1 IP1 elasticsearch-data-2
   146      217.3gb   234.2gb    257.7gb      492gb           47 IP2   IP2   elasticsearch-data-1
   147      216.6gb   231.2gb    260.7gb      492gb           47 IP3  IP3  elasticsearch-data-0

curl -X GET http://localhost:9200/_cat/nodes?v

ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
IP1            7          77  20    4.17    4.57     4.88 mi        -      elasticsearch-master-2
IP2          72          59   7    2.59    2.38     2.19 i         -      elasticsearch-5f4bd5b88f-4lvxz
IP3           57          49   3    0.75    1.13     1.09 di        -      elasticsearch-data-2
IP4           63          57  21    2.69    3.58     4.11 di        -      elasticsearch-data-0
IP5            5          59   7    2.59    2.38     2.19 mi        -      elasticsearch-master-0
IP6            69          53  13    4.67    4.60     4.66 di        -      elasticsearch-data-1
IP7           8          70  14    2.86    3.20     3.09 mi        *      elasticsearch-master-1
IP8           30          77  20    4.17    4.57     4.88 i         -      elasticsearch-5f4bd5b88f-wnrl4
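
(Since the heap was recently bumped to 4 GB, it may be worth confirming the new size actually took effect on every data node; _cat/nodes can also report absolute heap figures, e.g.:)

curl -X GET 'http://localhost:9200/_cat/nodes?v&h=name,node.role,heap.current,heap.max,heap.percent'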

curl -s -X GET http://localhost:9200/_cluster/allocation/explain -d '{"index":"graph_vertex_24_18549","shard":0,"primary":false}' -H 'Content-Type: application/json'

{"index":"graph_vertex_24_18549","shard":0,"primary":false,"current_state":"initializing","unassigned_info":{"reason":"ALLOCATION_FAILED","at":"2020-11-04T08:21:45.756Z","failed_allocation_attempts":1,"details":"failed shard on node [1XEXS92jTK-wwanNgQrxsA]: failed to perform indices:data/write/bulk[s] on replica [graph_vertex_24_18549][0], node[1XEXS92jTK-wwanNgQrxsA], [R], s[STARTED], a[id=RnTOlfQuQkOumVuw_NeuTw], failure RemoteTransportException[[elasticsearch-data-2][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [4322682690/4gb], which is larger than the limit of [4005632409/3.7gb], real usage: [3646987112/3.3gb], new bytes reserved: [675695578/644.3mb]]; ","last_allocation_status":"no_attempt"},"current_node":{"id":"o_9jyrmOSca9T12J4bY0Nw","name":"elasticsearch-data-0","transport_address":"IP:9300"},"explanation":"the shard is in the process of initializing on node [elasticsearch-data-0], wait until initialization has completed"}

The thing is, I was previously getting alerts about unassigned shards caused by this same exception: "CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [4322682690/4gb], which is larger than the limit of [4005632409/3.7gb]".
But back then the heap was only 2 GB; I have since increased it to 4 GB. Now I am seeing the same error again, only this time against initializing shards rather than unassigned shards.
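
(For context: the parent circuit breaker limit is a fixed percentage of the JVM heap, which is why a 4 GB heap still trips at roughly 3.7 GB on large transport requests. Assuming this is a containerized deployment, as the pod-style node names suggest, one sketch of a fix is giving the data nodes a larger heap via the standard ES_JAVA_OPTS variable; the 8g value is purely illustrative and must fit within the node's available RAM:)

ES_JAVA_OPTS="-Xms8g -Xmx8g"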
How can I remediate this?
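
(One note for anyone who lands here: once the underlying memory pressure is resolved, allocations that previously failed can be retried explicitly; retry_failed is part of the standard cluster reroute API:)

curl -X POST http://localhost:9200/_cluster/reroute?retry_failed=true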

No answers yet!
