I have run into a problem with an Elasticsearch aggregation. We are querying Elasticsearch from Java using the RestHighLevelClient.
The exception is:

ElasticsearchStatusException[Elasticsearch exception [type=search_phase_execution_exception, reason=]]; nested: ElasticsearchException[Elasticsearch exception [type=too_many_buckets_exception, reason=Trying to create too many buckets. Must be less than or equal to: [20000] but was [20001]. This limit can be set by changing the [search.max_buckets] cluster level setting.]];
I have already changed search.max_buckets with a PUT request, but I am still facing the issue:
PUT /_cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000
  }
}
As per our requirement, we first have to aggregate the data by day, then by hour, then by rule ID. The aggregation is nested like this:
Day {
    1:00 [
        {
            ruleId : 1,
            count : 20
        },
        {
            ruleId : 2,
            count : 25
        }
    ],
    2:00 [
        {
            ruleId : 1,
            count : 20
        },
        {
            ruleId : 2,
            count : 25
        }
    ]
}
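The nested day → hour → ruleId pipeline multiplies bucket counts, which is how the 20000 limit gets exceeded even with modest data (every parent bucket counts toward search.max_buckets as well). A minimal sketch of the arithmetic, assuming a hypothetical 14-day window and 60 distinct rule IDs per hour; the real numbers depend on your data:

```java
public class BucketMath {

    // Total buckets produced by a day -> hour -> terms pipeline.
    // Parent buckets (days, hours) count toward the limit too,
    // not just the leaf terms buckets.
    static long totalBuckets(int days, int hoursPerDay, int termsPerHour) {
        long dayBuckets = days;
        long hourBuckets = dayBuckets * hoursPerDay;
        long termsBuckets = hourBuckets * (long) termsPerHour;
        return dayBuckets + hourBuckets + termsBuckets;
    }

    public static void main(String[] args) {
        // 14 days * 24 hours * 60 rules = 20160 leaf buckets alone,
        // already over the 20000 search.max_buckets default.
        System.out.println(BucketMath.totalBuckets(14, 24, 60));
    }
}
```

With the terms aggregation sized at 10000 (as in the code below), the theoretical ceiling is far higher still, so raising the limit only postpones the error.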
My code is:
final List<DTO> violationCaseMgmtDtos = new ArrayList<>();
try {
    RangeQueryBuilder queryBuilders =
            (end_timestmp > 0 ? customTimeRangeQueryBuilder(start_timestmp, end_timestmp, generationTime)
                    : daysTimeRangeQueryBuilder(14, generationTime));
    BoolQueryBuilder boolQuery = new BoolQueryBuilder();
    boolQuery.must(queryBuilders);
    boolQuery.must(QueryBuilders.matchQuery("pvGroupBy", true));
    boolQuery.must(QueryBuilders.matchQuery("pvInformation", false));
    TopHitsAggregationBuilder topHitsAggregationBuilder =
            AggregationBuilders.topHits("topHits").docValueField(policyId).sort(generationTime, SortOrder.DESC);
    TermsAggregationBuilder termsAggregation = AggregationBuilders.terms("distinct").field(policyId).size(10000)
            .subAggregation(topHitsAggregationBuilder);
    DateHistogramAggregationBuilder timeHistogramAggregationBuilder =
            AggregationBuilders.dateHistogram("by_hour").field("eventDateTime")
                    .fixedInterval(DateHistogramInterval.HOUR).subAggregation(termsAggregation);
    DateHistogramAggregationBuilder dateHistogramAggregationBuilder =
            AggregationBuilders.dateHistogram("by_day").field("eventDateTime")
                    .fixedInterval(DateHistogramInterval.DAY).subAggregation(timeHistogramAggregationBuilder);
    SearchRequest searchRequest = new SearchRequest(violationDataModel);
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    searchSourceBuilder.aggregation(dateHistogramAggregationBuilder);
    searchSourceBuilder.query(boolQuery);
    searchSourceBuilder.from(offset);
    searchSourceBuilder.size(10000);
    searchRequest.source(searchSourceBuilder);
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    ParsedDateHistogram parsedDateHistogram = searchResponse.getAggregations().get("by_day");
    parsedDateHistogram.getBuckets().parallelStream().forEach(dayBucket -> {
        ParsedDateHistogram hourBasedData = dayBucket.getAggregations().get("by_hour");
        hourBasedData.getBuckets().parallelStream().forEach(hourBucket -> {
            String dateTime = hourBucket.getKeyAsString();
            ParsedLongTerms distinctPolicys = hourBucket.getAggregations().get("distinct");
            distinctPolicys.getBuckets().parallelStream().forEach(policyBucket -> {
                DTO violationCaseManagementDTO = new DTO();
                violationCaseManagementDTO.setDataAggregated(true);
                violationCaseManagementDTO.setEventDateTime(dateTime);
                violationCaseManagementDTO.setRuleId(Long.valueOf(policyBucket.getKey().toString()));
                ParsedTopHits parsedTopHits = policyBucket.getAggregations().get("topHits");
                SearchHit[] searchHits = parsedTopHits.getHits().getHits();
                SearchHit searchHit = searchHits[0];
                String source = searchHit.getSourceAsString();
                ViolationDataModel violationModel = null;
                try {
                    violationModel = objectMapper.readValue(source, ViolationDataModel.class);
                } catch (Exception e) {
                    e.printStackTrace();
                }
                violationCaseManagementDTO.setRuleName(violationModel.getRuleName());
                violationCaseManagementDTO.setGenerationTime(violationModel.getGenerationTime());
                violationCaseManagementDTO.setPriority(violationModel.getPriority());
                violationCaseManagementDTO.setStatus(violationModel.getViolationStatus());
                violationCaseManagementDTO.setViolationId(violationModel.getId());
                violationCaseManagementDTO.setEntity(violationModel.getViolator());
                violationCaseManagementDTO.setViolationType(violationModel.getViolationEntityType());
                violationCaseManagementDTO.setIndicatorsOfAttack(
                        (int) (policyBucket.getDocCount() * violationModel.getNoOfViolatedEvents()));
                violationCaseMgmtDtos.add(violationCaseManagementDTO);
            });
        });
    });
    List<DTO> realtimeViolation = findViolationWithoutGrouping(start_timestmp, end_timestmp, offset, size);
    realtimeViolation.stream().forEach(action -> violationCaseMgmtDtos.add(action));
} catch (Exception e) {
    e.printStackTrace();
}
if (Objects.nonNull(violationCaseMgmtDtos) && violationCaseMgmtDtos.size() > 0) {
    return violationCaseMgmtDtos.stream()
            .filter(violationDto -> Objects.nonNull(violationDto))
            .sorted((d1, d2) -> d2.getEventDateTime().compareTo(d1.getEventDateTime()))
            .collect(Collectors.toList());
}
return violationCaseMgmtDtos;
}
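A separate hazard worth noting: the code above adds to a plain ArrayList from inside parallelStream().forEach, which is not thread-safe and can silently drop elements or throw. A minimal sketch of one safer pattern, using Collections.synchronizedList (a sequential stream or a collector would also work):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.stream.IntStream;

public class SafeCollect {

    // Collects n integers from a parallel stream into a synchronized list,
    // so concurrent add() calls cannot corrupt the backing array.
    static List<Integer> collectSafely(int n) {
        List<Integer> out = Collections.synchronizedList(new ArrayList<>());
        IntStream.range(0, n).parallel().forEach(out::add);
        return out;
    }

    public static void main(String[] args) {
        // With a plain ArrayList this count would occasionally come up short.
        System.out.println(SafeCollect.collectSafely(10_000).size());
    }
}
```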
Please help me resolve this issue.
1 Answer
If you are on ES version 7.x, you can add a terminate_after clause to your query to limit the number of buckets the data is split into. This usually happens when the data you are aggregating over has very high cardinality. If your data contains text, it is better to aggregate on the .keyword sub-field (assuming you are using the default mappings).
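To make the suggestion concrete: terminate_after sits at the top level of the search body, and a text field is aggregated through its .keyword sub-field. A hedged sketch of such a request; the index name violation_data and the field ruleName are placeholders for illustration, not taken from the question's mapping:

```json
GET /violation_data/_search
{
  "size": 0,
  "terminate_after": 100000,
  "aggs": {
    "by_day": {
      "date_histogram": { "field": "eventDateTime", "fixed_interval": "1d" },
      "aggs": {
        "distinct": {
          "terms": { "field": "ruleName.keyword", "size": 100 }
        }
      }
    }
  }
}
```

In the Java high-level client the equivalent is searchSourceBuilder.terminateAfter(100000), and .field("ruleName.keyword") on the terms aggregation builder. Note that terminate_after caps the number of documents collected per shard, so results become approximate.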