This article compiles code examples for the Java class org.elasticsearch.client.Client, demonstrating how the class is used in practice. The examples come mainly from GitHub, Stack Overflow, Maven, and similar platforms, extracted from selected open-source projects, so they should serve as useful references. Details of the Client class:
Package path: org.elasticsearch.client.Client
Class name: Client
A client provides a one-stop interface for performing actions/operations against the cluster.
All operations performed are asynchronous by nature. Each action/operation has two flavors: the first simply returns an org.elasticsearch.action.ActionFuture, while the second accepts an org.elasticsearch.action.ActionListener.
A client can either be retrieved from a started org.elasticsearch.node.Node, or connected remotely to one or more nodes using org.elasticsearch.client.transport.TransportClient.
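The two invocation flavors described above can be sketched without a live cluster using a stand-in async API. The `ActionListener` interface below is a simplified mock of org.elasticsearch.action.ActionListener (not the real class), and the `searchAsync` methods are hypothetical; the point is the pattern: one overload returns a Future the caller blocks on, the other accepts a callback.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TwoFlavorsDemo {

    // Simplified stand-in for org.elasticsearch.action.ActionListener
    interface ActionListener<T> {
        void onResponse(T response);
        void onFailure(Throwable e);
    }

    static final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Flavor 1: return a Future; the caller blocks, like ActionFuture.actionGet()
    static Future<String> searchAsync(String query) {
        return pool.submit(() -> "hits for " + query);
    }

    // Flavor 2: accept a listener that is invoked when the result is ready
    static void searchAsync(String query, ActionListener<String> listener) {
        pool.submit(() -> {
            try {
                listener.onResponse("hits for " + query);
            } catch (Throwable t) {
                listener.onFailure(t);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        // Future flavor: block for the result
        System.out.println(searchAsync("q1").get());

        // Listener flavor: react in a callback
        CountDownLatch latch = new CountDownLatch(1);
        searchAsync("q2", new ActionListener<String>() {
            public void onResponse(String r) { System.out.println(r); latch.countDown(); }
            public void onFailure(Throwable e) { e.printStackTrace(); latch.countDown(); }
        });
        latch.await();
        pool.shutdown();
    }
}
```

With the real client the shapes are the same: `client.prepareSearch(...).execute().actionGet()` for the blocking flavor, and `client.prepareSearch(...).execute(listener)` for the callback flavor.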
Code example source: loklak/loklak_server

public void setMapping(String indexName, File json) {
    try {
        this.elasticsearchClient.admin().indices().preparePutMapping(indexName)
            .setSource(new String(Files.readAllBytes(json.toPath()), StandardCharsets.UTF_8))
            .setUpdateAllTypes(true)
            .setType("_default_")
            .execute()
            .actionGet();
    } catch (Throwable e) {
        DAO.severe(e);
    }
}
Code example source: Netflix/conductor

@Override
public List<String> searchArchivableWorkflows(String indexName, long archiveTtlDays) {
    QueryBuilder q = QueryBuilders.boolQuery()
        .should(QueryBuilders.termQuery("status", "COMPLETED"))
        .should(QueryBuilders.termQuery("status", "FAILED"))
        .mustNot(QueryBuilders.existsQuery("archived"))
        .minimumShouldMatch(1);
    SearchRequestBuilder s = elasticSearchClient.prepareSearch(indexName)
        .setTypes("workflow")
        .setQuery(q)
        .addSort("endTime", SortOrder.ASC)
        .setSize(archiveSearchBatchSize);
    SearchResponse response = s.execute().actionGet();
    SearchHits hits = response.getHits();
    logger.info("Archive search totalHits - {}", hits.getTotalHits());
    return Arrays.stream(hits.getHits())
        .map(hit -> hit.getId())
        .collect(Collectors.toCollection(LinkedList::new));
}
Code example source: apache/flume

@VisibleForTesting
IndexRequestBuilder prepareIndex(Client client) {
    return client.prepareIndex();
}
Code example source: apache/nifi

@Override
public void process(final InputStream in) throws IOException {
    String json = IOUtils.toString(in, charset)
        .replace("\r\n", " ").replace('\n', ' ').replace('\r', ' ');
    if (indexOp.equalsIgnoreCase("index")) {
        bulk.add(esClient.get().prepareIndex(index, docType, id)
            .setSource(json.getBytes(charset)));
    } else if (indexOp.equalsIgnoreCase("upsert")) {
        bulk.add(esClient.get().prepareUpdate(index, docType, id)
            .setDoc(json.getBytes(charset))
            .setDocAsUpsert(true));
    } else if (indexOp.equalsIgnoreCase("update")) {
        bulk.add(esClient.get().prepareUpdate(index, docType, id)
            .setDoc(json.getBytes(charset)));
    } else {
        throw new IOException("Index operation: " + indexOp + " not supported.");
    }
}
Code example source: loklak/loklak_server

public Map<String, Object> query(final String indexName, final String fieldKey, final String fieldValue) {
    if (fieldKey == null || fieldValue.length() == 0) return null;
    // prepare request
    BoolQueryBuilder query = QueryBuilders.boolQuery();
    query.filter(QueryBuilders.constantScoreQuery(QueryBuilders.termQuery(fieldKey, fieldValue)));
    SearchRequestBuilder request = elasticsearchClient.prepareSearch(indexName)
        .setSearchType(SearchType.QUERY_THEN_FETCH)
        .setQuery(query)
        .setFrom(0)
        .setSize(1)
        .setTerminateAfter(1);
    // get response
    SearchResponse response = request.execute().actionGet();
    // evaluate search result
    SearchHit[] hits = response.getHits().getHits();
    if (hits.length == 0) return null;
    assert hits.length == 1;
    Map<String, Object> map = hits[0].getSource();
    return map;
}
Code example source: Netflix/conductor

@Override
public List<String> searchRecentRunningWorkflows(int lastModifiedHoursAgoFrom,
                                                 int lastModifiedHoursAgoTo) {
    DateTime dateTime = new DateTime();
    QueryBuilder q = QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("updateTime")
            .gt(dateTime.minusHours(lastModifiedHoursAgoFrom)))
        .must(QueryBuilders.rangeQuery("updateTime")
            .lt(dateTime.minusHours(lastModifiedHoursAgoTo)))
        .must(QueryBuilders.termQuery("status", "RUNNING"));
    SearchRequestBuilder s = elasticSearchClient.prepareSearch(indexName)
        .setTypes("workflow")
        .setQuery(q)
        .setSize(5000)
        .addSort("updateTime", SortOrder.ASC);
    SearchResponse response = s.execute().actionGet();
    return StreamSupport.stream(response.getHits().spliterator(), false)
        .map(hit -> hit.getId())
        .collect(Collectors.toCollection(LinkedList::new));
}
Code example source: thinkaurelius/titan

@Override
public List<String> query(IndexQuery query, KeyInformation.IndexRetriever informations, BaseTransaction tx) throws BackendException {
    SearchRequestBuilder srb = client.prepareSearch(indexName);
    srb.setTypes(query.getStore());
    srb.setQuery(QueryBuilders.matchAllQuery());
    srb.setPostFilter(getFilter(query.getCondition(), informations.get(query.getStore())));
    if (!query.getOrder().isEmpty()) {
        List<IndexQuery.OrderEntry> orders = query.getOrder();
        // ... (sort handling truncated in the source snippet)
    }
    SearchResponse response = srb.execute().actionGet();
    log.debug("Executed query [{}] in {} ms", query.getCondition(), response.getTookInMillis());
    SearchHits hits = response.getHits();
    if (!query.hasLimit() && hits.totalHits() >= maxResultsSize)
        log.warn("Query result set truncated to first [{}] elements for query: {}", maxResultsSize, query);
    List<String> result = new ArrayList<String>(hits.hits().length);
    for (SearchHit hit : hits) {
        result.add(hit.id());
    }
    return result;
}
Code example source: loklak/loklak_server

private Query(final String indexName, QueryBuilder queryBuilder, String order_field, int resultCount) {
    //TODO: sort data using order_field
    // prepare request
    SearchRequestBuilder request = elasticsearchClient.prepareSearch(indexName)
        .setSearchType(SearchType.QUERY_THEN_FETCH)
        .setQuery(queryBuilder)
        .setFrom(0)
        .setSize(resultCount);
    request.clearRescorers();
    // get response
    SearchResponse response = request.execute().actionGet();
    hitCount = (int) response.getHits().getTotalHits();
    // evaluate search result
    SearchHit[] hits = response.getHits().getHits();
    this.result = new ArrayList<Map<String, Object>>(hitCount);
    for (SearchHit hit : hits) {
        Map<String, Object> map = hit.getSource();
        this.result.add(map);
    }
}
Code example source: loklak/loklak_server

public long count(final String index, final String histogram_timefield, final long millis) {
    try {
        SearchResponse response = elasticsearchClient.prepareSearch(index)
            .setSize(0)
            .setQuery(millis <= 0
                ? QueryBuilders.constantScoreQuery(QueryBuilders.matchAllQuery())
                : QueryBuilders.rangeQuery(histogram_timefield).from(new Date(System.currentTimeMillis() - millis)))
            .execute()
            .actionGet();
        return response.getHits().getTotalHits();
    } catch (Throwable e) {
        DAO.severe(e);
        return 0;
    }
}
Code example source: thinkaurelius/titan

@Override
public Iterable<RawQuery.Result<String>> query(RawQuery query, KeyInformation.IndexRetriever informations, BaseTransaction tx) throws BackendException {
    SearchRequestBuilder srb = client.prepareSearch(indexName);
    srb.setTypes(query.getStore());
    srb.setQuery(QueryBuilders.queryStringQuery(query.getQuery()));
    srb.setFrom(query.getOffset());
    if (query.hasLimit()) srb.setSize(query.getLimit());
    else srb.setSize(maxResultsSize);
    srb.setNoFields();
    //srb.setExplain(true);
    SearchResponse response = srb.execute().actionGet();
    log.debug("Executed query [{}] in {} ms", query.getQuery(), response.getTookInMillis());
    SearchHits hits = response.getHits();
    if (!query.hasLimit() && hits.totalHits() >= maxResultsSize)
        log.warn("Query result set truncated to first [{}] elements for query: {}", maxResultsSize, query);
    List<RawQuery.Result<String>> result = new ArrayList<RawQuery.Result<String>>(hits.hits().length);
    for (SearchHit hit : hits) {
        result.add(new RawQuery.Result<String>(hit.id(), hit.getScore()));
    }
    return result;
}
Code example source: SonarSource/sonarqube

/**
 * Get all the indexed documents (no paginated results). Results are not sorted.
 */
public List<SearchHit> getDocuments(IndexType indexType) {
    SearchRequestBuilder req = SHARED_NODE.client().prepareSearch(indexType.getIndex()).setTypes(indexType.getType()).setQuery(matchAllQuery());
    EsUtils.optimizeScrollRequest(req);
    req.setScroll(new TimeValue(60000))
        .setSize(100);
    SearchResponse response = req.get();
    List<SearchHit> result = newArrayList();
    while (true) {
        Iterables.addAll(result, response.getHits());
        response = SHARED_NODE.client().prepareSearchScroll(response.getScrollId()).setScroll(new TimeValue(600000)).execute().actionGet();
        // Break condition: no hits are returned
        if (response.getHits().getHits().length == 0) {
            break;
        }
    }
    return result;
}
Code example source: loklak/loklak_server

/**
 * Get the number of documents in the search index for a given search query.
 *
 * @param q the query
 * @return the count of all documents in the index which match the query
 */
private long count(final QueryBuilder q, final String indexName) {
    SearchResponse response =
        elasticsearchClient.prepareSearch(indexName).setQuery(q).setSize(0).execute().actionGet();
    return response.getHits().getTotalHits();
}
Code example source: loklak/loklak_server

/**
 * Get a search response for a predefined aggregation task on a specific index.
 * @param index Name of the ES index
 * @param aggr Pre-configured AggregationBuilder object
 * @return SearchResponse containing the aggregation results
 */
private SearchResponse getAggregationResponse(String index, @SuppressWarnings("rawtypes") AggregationBuilder aggr) {
    return this.elasticsearchClient.prepareSearch(index)
        .setSearchType(SearchType.QUERY_THEN_FETCH)
        .setQuery(QueryBuilders.matchAllQuery())
        .setFrom(0)
        .setSize(0)
        .addAggregation(aggr)
        .execute().actionGet();
}
Code example source: Netflix/conductor

@Override
public List<EventExecution> getEventExecutions(String event) {
    try {
        Expression expression = Expression.fromString("event='" + event + "'");
        QueryBuilder queryBuilder = expression.getFilterBuilder();
        BoolQueryBuilder filterQuery = QueryBuilders.boolQuery().must(queryBuilder);
        QueryStringQueryBuilder stringQuery = QueryBuilders.queryStringQuery("*");
        BoolQueryBuilder fq = QueryBuilders.boolQuery().must(stringQuery).must(filterQuery);
        final SearchRequestBuilder srb = elasticSearchClient.prepareSearch(logIndexPrefix + "*")
            .setQuery(fq).setTypes(EVENT_DOC_TYPE)
            .addSort(SortBuilders.fieldSort("created")
                .order(SortOrder.ASC));
        return mapEventExecutionsResponse(srb.execute().actionGet());
    } catch (Exception e) {
        logger.error("Failed to get executions for event: {}", event, e);
        throw new ApplicationException(Code.BACKEND_ERROR, e.getMessage(), e);
    }
}
Code example source: richardwilly98/elasticsearch-river-mongodb

private void deleteBulkRequest(String objectId, String index, String type, String routing, String parent) {
    if (logger.isTraceEnabled()) {
        logger.trace("bulkDeleteRequest - objectId: {} - index: {} - type: {} - routing: {} - parent: {}", objectId, index, type,
            routing, parent);
    }
    if (definition.getParentTypes() != null && definition.getParentTypes().contains(type)) {
        QueryBuilder builder = QueryBuilders.hasParentQuery(type, QueryBuilders.termQuery(MongoDBRiver.MONGODB_ID_FIELD, objectId));
        SearchResponse response = esClient.prepareSearch(index).setQuery(builder).setRouting(routing)
            .addField(MongoDBRiver.MONGODB_ID_FIELD).execute().actionGet();
        for (SearchHit hit : response.getHits().getHits()) {
            getBulkProcessor(index, hit.getType()).deleteBulkRequest(hit.getId(), routing, objectId);
        }
    }
    getBulkProcessor(index, type).deleteBulkRequest(objectId, routing, parent);
}
Code example source: thinkaurelius/titan

/**
 * If ES already contains this instance's target index, then do nothing.
 * Otherwise, create the index, then wait {@link #CREATE_SLEEP}.
 * <p>
 * The {@code client} field must point to a live, connected client.
 * The {@code indexName} field must be non-null and point to the name
 * of the index to check for existence or create.
 *
 * @param config the config for this ElasticSearchIndex
 * @throws java.lang.IllegalArgumentException if the index could not be created
 */
private void checkForOrCreateIndex(Configuration config) {
    Preconditions.checkState(null != client);
    // Create the index if it does not already exist
    IndicesExistsResponse response = client.admin().indices().exists(new IndicesExistsRequest(indexName)).actionGet();
    if (!response.isExists()) {
        ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
        ElasticSearchSetup.applySettingsFromTitanConf(settings, config, ES_CREATE_EXTRAS_NS);
        CreateIndexResponse create = client.admin().indices().prepareCreate(indexName)
            .setSettings(settings.build()).execute().actionGet();
        try {
            final long sleep = config.get(CREATE_SLEEP);
            log.debug("Sleeping {} ms after {} index creation returned from actionGet()", sleep, indexName);
            Thread.sleep(sleep);
        } catch (InterruptedException e) {
            throw new TitanException("Interrupted while waiting for index to settle in", e);
        }
        if (!create.isAcknowledged()) throw new IllegalArgumentException("Could not create index: " + indexName);
    }
}
Code example source: Netflix/conductor

private void addIndex(String indexName) {
    try {
        elasticSearchClient.admin()
            .indices()
            .prepareGetIndex()
            .addIndices(indexName)
            .execute()
            .actionGet();
    } catch (IndexNotFoundException infe) {
        try {
            elasticSearchClient.admin()
                .indices()
                .prepareCreate(indexName)
                .execute()
                .actionGet();
        } catch (ResourceAlreadyExistsException done) {
            // no-op
        }
    }
}
Code example source: loklak/loklak_server

public void createIndexIfNotExists(String indexName, final int shards, final int replicas) {
    // create the index if it does not exist yet
    if (!this.elasticsearchClient.admin().indices().prepareExists(indexName).execute().actionGet().isExists()) {
        Settings.Builder settings = Settings.builder()
            .put("number_of_shards", shards)
            .put("number_of_replicas", replicas);
        this.elasticsearchClient.admin().indices().prepareCreate(indexName)
            .setSettings(settings)
            .setUpdateAllTypes(true)
            .execute().actionGet();
    } else {
        //LOGGER.debug("Index with name {} already exists", indexName);
    }
}
Code example source: codelibs/elasticsearch-reindexing

private void deleteIndexType(final String fromIndex, final String fromType) {
    final SearchRequestBuilder builder = client.prepareSearch(fromIndex).setTypes(fromType).setScroll("1m");
    SearchResponse searchResponse = builder.get();
    SearchHit[] hits = searchResponse.getHits().getHits();
    for (SearchHit hit : hits) {
        client.prepareDelete(hit.index(), hit.type(), hit.id()).get();
    }
    while (hits.length != 0) {
        searchResponse = client.prepareSearchScroll(searchResponse.getScrollId()).setScroll("1m").get();
        hits = searchResponse.getHits().getHits();
        for (SearchHit hit : hits) {
            client.prepareDelete(hit.index(), hit.type(), hit.id()).get();
        }
    }
}
Code example source: richardwilly98/elasticsearch-river-mongodb

public static long getIndexCount(Client client, MongoDBRiverDefinition definition) {
    if (client.admin().indices().prepareExists(definition.getIndexName()).get().isExists()) {
        if (definition.isImportAllCollections()) {
            return client.prepareCount(definition.getIndexName()).execute().actionGet().getCount();
        } else {
            if (client.admin().indices().prepareTypesExists(definition.getIndexName()).setTypes(definition.getTypeName()).get()
                    .isExists()) {
                return client.prepareCount(definition.getIndexName()).setTypes(definition.getTypeName()).get().getCount();
            }
        }
    }
    return 0;
}