This article collects code examples of the Java method org.elasticsearch.common.settings.Settings.getAsInt(), illustrating how Settings.getAsInt() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Settings.getAsInt():

Package: org.elasticsearch.common.settings.Settings
Class: Settings
Method: getAsInt
Description: Returns the setting value (as int) associated with the setting key. If it does not exist, returns the default value provided.
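Before the real-world examples, the lookup-with-default behavior can be sketched in plain Java. This is an illustrative stand-in, not the Elasticsearch implementation: the class name SettingsSketch is hypothetical, and it only assumes that settings are stored as strings keyed by name, which is how Settings exposes them.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a minimal stand-in for Settings.getAsInt().
public class SettingsSketch {
    private final Map<String, String> map = new HashMap<>();

    public SettingsSketch put(String key, String value) {
        map.put(key, value);
        return this;
    }

    // Parses the stored string as an Integer; falls back to the
    // provided default when the key is absent.
    public Integer getAsInt(String key, Integer defaultValue) {
        String value = map.get(key);
        return value == null ? defaultValue : Integer.parseInt(value);
    }

    public static void main(String[] args) {
        SettingsSketch settings = new SettingsSketch()
                .put("searchguard.cache.ttl_minutes", "30");
        // present key: the parsed value wins over the default
        System.out.println(settings.getAsInt("searchguard.cache.ttl_minutes", 60)); // 30
        // absent key: the default is returned
        System.out.println(settings.getAsInt("index.number_of_shards", 5)); // 5
    }
}
```

Note that the return type is Integer, not int: passing null as the default (as some examples below do) means an absent key yields null, which throws a NullPointerException if the caller unboxes it to int.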
Code example source: floragunncom/search-guard

public BackendRegistry(final Settings settings, final Path configPath, final AdminDNs adminDns,
        final XFFResolver xffResolver, final InternalAuthenticationBackend iab, final AuditLog auditLog, final ThreadPool threadPool) {
    this.adminDns = adminDns;
    this.esSettings = settings;
    this.configPath = configPath;
    this.xffResolver = xffResolver;
    this.iab = iab;
    this.auditLog = auditLog;
    this.threadPool = threadPool;
    this.userInjector = new UserInjector(settings, threadPool, auditLog, xffResolver);
    authImplMap.put("intern_c", InternalAuthenticationBackend.class.getName());
    authImplMap.put("intern_z", NoOpAuthorizationBackend.class.getName());
    authImplMap.put("internal_c", InternalAuthenticationBackend.class.getName());
    authImplMap.put("internal_z", NoOpAuthorizationBackend.class.getName());
    authImplMap.put("noop_c", NoOpAuthenticationBackend.class.getName());
    authImplMap.put("noop_z", NoOpAuthorizationBackend.class.getName());
    authImplMap.put("ldap_c", "com.floragunn.dlic.auth.ldap.backend.LDAPAuthenticationBackend");
    authImplMap.put("ldap_z", "com.floragunn.dlic.auth.ldap.backend.LDAPAuthorizationBackend");
    authImplMap.put("basic_h", HTTPBasicAuthenticator.class.getName());
    authImplMap.put("proxy_h", HTTPProxyAuthenticator.class.getName());
    authImplMap.put("clientcert_h", HTTPClientCertAuthenticator.class.getName());
    authImplMap.put("kerberos_h", "com.floragunn.dlic.auth.http.kerberos.HTTPSpnegoAuthenticator");
    authImplMap.put("jwt_h", "com.floragunn.dlic.auth.http.jwt.HTTPJwtAuthenticator");
    authImplMap.put("openid_h", "com.floragunn.dlic.auth.http.jwt.keybyoidc.HTTPJwtKeyByOpenIdConnectAuthenticator");
    authImplMap.put("saml_h", "com.floragunn.dlic.auth.http.saml.HTTPSamlAuthenticator");
    this.ttlInMin = settings.getAsInt(ConfigConstants.SEARCHGUARD_CACHE_TTL_MINUTES, 60);
    createCaches();
}
Code example source: floragunncom/search-guard
ads.getAsBoolean("http_authenticator.challenge", true), ads.getAsInt("order", 0));
Code example source: org.elasticsearch/elasticsearch

/**
 * Returns the number of shards.
 *
 * @return the provided value or -1 if it has not been set.
 */
public int numberOfShards() {
    return settings.getAsInt(SETTING_NUMBER_OF_SHARDS, -1);
}
Code example source: org.elasticsearch/elasticsearch

/**
 * Returns the routing partition size.
 *
 * @return the provided value or -1 if it has not been set.
 */
public int routingPartitionSize() {
    return settings.getAsInt(SETTING_ROUTING_PARTITION_SIZE, -1);
}
Code example source: org.elasticsearch/elasticsearch

/**
 * Returns the number of replicas.
 *
 * @return the provided value or -1 if it has not been set.
 */
public int numberOfReplicas() {
    return settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, -1);
}
Code example source: org.elasticsearch/elasticsearch

public StandardTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
    super(indexSettings, name, settings);
    maxTokenLength = settings.getAsInt("max_token_length", StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH);
}
Code example source: org.elasticsearch/elasticsearch

/**
 * Returns the number of replicas this index has.
 * Note the null default: getAsInt returns Integer, so if the setting is
 * absent the int return type unboxes null and throws a NullPointerException.
 */
public int getNumberOfReplicas() { return settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null); }
Code example source: medcl/elasticsearch-analysis-pinyin

public PinyinConfig(Settings settings) {
    this.keepFirstLetter = settings.getAsBoolean("keep_first_letter", true);
    this.keepSeparateFirstLetter = settings.getAsBoolean("keep_separate_first_letter", false);
    this.keepFullPinyin = settings.getAsBoolean("keep_full_pinyin", true);
    this.keepJoinedFullPinyin = settings.getAsBoolean("keep_joined_full_pinyin", false);
    this.keepNoneChinese = settings.getAsBoolean("keep_none_chinese", true);
    this.keepNoneChineseTogether = settings.getAsBoolean("keep_none_chinese_together", true);
    this.noneChinesePinyinTokenize = settings.getAsBoolean("none_chinese_pinyin_tokenize", true);
    this.keepOriginal = settings.getAsBoolean("keep_original", false);
    this.LimitFirstLetterLength = settings.getAsInt("limit_first_letter_length", 16);
    this.lowercase = settings.getAsBoolean("lowercase", true);
    this.trimWhitespace = settings.getAsBoolean("trim_whitespace", true);
    this.keepNoneChineseInFirstLetter = settings.getAsBoolean("keep_none_chinese_in_first_letter", true);
    this.keepNoneChineseInJoinedFullPinyin = settings.getAsBoolean("keep_none_chinese_in_joined_full_pinyin", false);
    this.removeDuplicateTerm = settings.getAsBoolean("remove_duplicated_term", false);
    this.fixedPinyinOffset = settings.getAsBoolean("fixed_pinyin_offset", false);
    this.ignorePinyinOffset = settings.getAsBoolean("ignore_pinyin_offset", true);
}
Code example source: org.elasticsearch/elasticsearch

public StandardAnalyzerProvider(IndexSettings indexSettings, Environment env, String name, Settings settings) {
    super(indexSettings, name, settings);
    final CharArraySet defaultStopwords = CharArraySet.EMPTY_SET;
    CharArraySet stopWords = Analysis.parseStopWords(env, indexSettings.getIndexVersionCreated(), settings, defaultStopwords);
    int maxTokenLength = settings.getAsInt("max_token_length", StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH);
    standardAnalyzer = new StandardAnalyzer(stopWords);
    standardAnalyzer.setVersion(version);
    standardAnalyzer.setMaxTokenLength(maxTokenLength);
}
Code example source: org.elasticsearch/elasticsearch
positionIncrementGap = analyzerSettings.getAsInt("position_increment_gap", positionIncrementGap);
int offsetGap = analyzerSettings.getAsInt("offset_gap", -1);
Code example source: org.elasticsearch/elasticsearch

public ShingleTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
    super(indexSettings, name, settings);
    int maxAllowedShingleDiff = indexSettings.getMaxShingleDiff();
    Integer maxShingleSize = settings.getAsInt("max_shingle_size", ShingleFilter.DEFAULT_MAX_SHINGLE_SIZE);
    Integer minShingleSize = settings.getAsInt("min_shingle_size", ShingleFilter.DEFAULT_MIN_SHINGLE_SIZE);
    Boolean outputUnigrams = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(),
            "output_unigrams", true, deprecationLogger);
    Boolean outputUnigramsIfNoShingles = settings.getAsBooleanLenientForPreEs6Indices(indexSettings.getIndexVersionCreated(),
            "output_unigrams_if_no_shingles", false, deprecationLogger);
    int shingleDiff = maxShingleSize - minShingleSize + (outputUnigrams ? 1 : 0);
    if (shingleDiff > maxAllowedShingleDiff) {
        deprecationLogger.deprecated("Deprecated big difference between maxShingleSize and minShingleSize in Shingle TokenFilter,"
                + "expected difference must be less than or equal to: [" + maxAllowedShingleDiff + "]");
    }
    String tokenSeparator = settings.get("token_separator", ShingleFilter.DEFAULT_TOKEN_SEPARATOR);
    String fillerToken = settings.get("filler_token", ShingleFilter.DEFAULT_FILLER_TOKEN);
    factory = new Factory("shingle", minShingleSize, maxShingleSize,
            outputUnigrams, outputUnigramsIfNoShingles, tokenSeparator, fillerToken);
}
Code example source: org.elasticsearch/elasticsearch

int dummyShards = request.settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS,
        dummyPartitionSize == 1 ? 1 : dummyPartitionSize + 1);
Code example source: com.strapdata.elasticsearch/elasticsearch

/**
 * Returns the routing partition size.
 *
 * @return the provided value or -1 if it has not been set.
 */
public int routingPartitionSize() {
    return settings.getAsInt(SETTING_ROUTING_PARTITION_SIZE, -1);
}
Code example source: org.elasticsearch/elasticsearch

Integer maybeNumberOfShards = settings.getAsInt(SETTING_NUMBER_OF_SHARDS, null);
if (maybeNumberOfShards == null) {
    throw new IllegalArgumentException("must specify numberOfShards for index [" + index + "]");
}
Integer maybeNumberOfReplicas = settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, null);
if (maybeNumberOfReplicas == null) {
    throw new IllegalArgumentException("must specify numberOfReplicas for index [" + index + "]");
}
Code example source: org.elasticsearch/elasticsearch
int updatedNumberOfReplicas = openSettings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, -1);
if (updatedNumberOfReplicas != -1 && preserveExisting == false) {
Code example source: org.elasticsearch/elasticsearch

} else {
    recoverAfterMasterNodes = settings.getAsInt("discovery.zen.minimum_master_nodes", -1);
Code example source: org.elasticsearch/elasticsearch
+ "if you wish to continue using the default of [5] shards, "
+ "you must manage this on the create index request or with an index template");
indexSettingsBuilder.put(SETTING_NUMBER_OF_SHARDS, settings.getAsInt(SETTING_NUMBER_OF_SHARDS, 5));
indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, 1));
Code example source: com.strapdata.elasticsearch/elasticsearch

public EdgeNGramTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
    super(indexSettings, name, settings);
    this.minGram = settings.getAsInt("min_gram", NGramTokenizer.DEFAULT_MIN_NGRAM_SIZE);
    this.maxGram = settings.getAsInt("max_gram", NGramTokenizer.DEFAULT_MAX_NGRAM_SIZE);
    this.matcher = parseTokenChars(settings.getAsArray("token_chars"));
}
Code example source: com.strapdata.elasticsearch/elasticsearch

public NGramTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
    super(indexSettings, name, settings);
    this.minGram = settings.getAsInt("min_gram", NGramTokenizer.DEFAULT_MIN_NGRAM_SIZE);
    this.maxGram = settings.getAsInt("max_gram", NGramTokenizer.DEFAULT_MAX_NGRAM_SIZE);
    this.matcher = parseTokenChars(settings.getAsArray("token_chars"));
}
Code example source: org.elasticsearch/elasticsearch
nodeName = Node.NODE_NAME_SETTING.get(settings);
this.indexMetaData = indexMetaData;
numberOfShards = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);