Usage and code examples of java.util.LinkedHashSet.add()

x33g5p2x · reposted 2022-01-17 · category: Other

This article collects code examples of the java.util.LinkedHashSet.add() method, showing how it is used in practice. The examples come from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of LinkedHashSet.add() are as follows:
Package path: java.util.LinkedHashSet
Class: LinkedHashSet
Method: add

About LinkedHashSet.add

add(E e) inserts the specified element if it is not already present and returns true; if the set already contains the element, the call leaves the set unchanged and returns false. LinkedHashSet inherits this contract from HashSet, but additionally maintains a linked list across its entries, so iteration yields elements in insertion order; re-inserting an existing element does not change that order.
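Before the project examples, here is a minimal self-contained sketch of the basic behavior (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class LinkedHashSetAddDemo {

    // Adds a few elements, including a duplicate, and returns the iteration order.
    static List<String> insertionOrderAfterAdds() {
        LinkedHashSet<String> set = new LinkedHashSet<>();
        boolean added = set.add("banana");  // true: newly inserted
        boolean dup = set.add("banana");    // false: already present, set unchanged
        set.add("apple");
        set.add("cherry");
        if (!added || dup) {
            throw new AssertionError("unexpected add() results");
        }
        return new ArrayList<>(set);        // insertion order, not sorted order
    }

    public static void main(String[] args) {
        System.out.println(insertionOrderAfterAdds()); // prints [banana, apple, cherry]
    }
}
```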

Code examples

Code example source: apache/flink

/**
 * Registers the given type with the serialization stack. If the type is eventually
 * serialized as a POJO, then the type is registered with the POJO serializer. If the
 * type ends up being serialized with Kryo, then it will be registered at Kryo to make
 * sure that only tags are written.
 *
 * @param type The class of the type to register.
 */
public void registerPojoType(Class<?> type) {
  if (type == null) {
    throw new NullPointerException("Cannot register null type class.");
  }
  if (!registeredPojoTypes.contains(type)) {
    registeredPojoTypes.add(type);
  }
}

Code example source: hankcs/HanLP

@Override
public Set<String> keySet()
{
  HashSet<String> stringSet = mdag.getAllStrings();
  LinkedHashSet<String> keySet = new LinkedHashSet<String>();
  Iterator<String> iterator = stringSet.iterator();
  while (iterator.hasNext())
  {
    String key = iterator.next();
    keySet.add(key.substring(0, key.length() - 3));
  }
  return keySet;
}

Code example source: apache/flink

/**
 * Returns the registered Kryo types.
 */
public LinkedHashSet<Class<?>> getRegisteredKryoTypes() {
  if (isForceKryoEnabled()) {
    // if we force kryo, we must also return all the types that
    // were previously only registered as POJO
    LinkedHashSet<Class<?>> result = new LinkedHashSet<>();
    result.addAll(registeredKryoTypes);
    for(Class<?> t : registeredPojoTypes) {
      if (!result.contains(t)) {
        result.add(t);
      }
    }
    return result;
  } else {
    return registeredKryoTypes;
  }
}

Code example source: apache/ignite

// fragment; surrounding CREATE TABLE parsing code elided
if (constraints.size() > 1)
  throw new IgniteSQLException("Too many constraints - only PRIMARY KEY is supported for CREATE TABLE",
    IgniteQueryErrorCode.UNSUPPORTED_OPERATION);
// ...
pkCols.add(gridCol.columnName());
int keyColsNum = pkCols.size();
int valColsNum = cols.size() - keyColsNum;
// ...
LinkedHashSet<String> pkCols0 = res.primaryKeyColumns();
if (!F.isEmpty(pkCols0) && pkCols0.size() == 1 && wrapKey0)
  res.affinityKey(pkCols0.iterator().next());

Code example source: apache/ignite

// fragment; address-probing code around these lines elided
Set<InetAddress> allInetAddrs = U.newHashSet(addrs.size());
// ...
if (reachableInetAddrs.size() < allInetAddrs.size()) {
  LinkedHashSet<InetSocketAddress> addrs0 = U.newLinkedHashSet(addrs.size());
  List<InetSocketAddress> unreachableInetAddr = new ArrayList<>(allInetAddrs.size() - reachableInetAddrs.size());
  // ...
      addrs0.add(addr);
    else
      unreachableInetAddr.add(addr);
  // ...

Code example source: hankcs/HanLP

// fragment; dictionary serialization code, surrounding lines elided
for (Map.Entry<String, CoreDictionary.Attribute> entry : map.entrySet())
    attributeList.add(entry.getValue());
// ...
customNatureCollector.add(Nature.values()[i]);
// ...
out.writeInt(attributeList.size());
for (CoreDictionary.Attribute attribute : attributeList)
    // ...

Code example source: google/guava

// fragment; surrounding lines elided
candidateClasses.add(cls);
// ...
result.add(candidate);

Code example source: apache/incubator-druid

public ListenableFuture<List<DataSegmentChangeRequestAndStatus>> processBatch(List<DataSegmentChangeRequest> changeRequests)
{
 boolean isAnyRequestDone = false;
 Map<DataSegmentChangeRequest, AtomicReference<Status>> statuses = Maps.newHashMapWithExpectedSize(changeRequests.size());
 for (DataSegmentChangeRequest cr : changeRequests) {
  AtomicReference<Status> status = processRequest(cr);
  if (status.get().getState() != Status.STATE.PENDING) {
   isAnyRequestDone = true;
  }
  statuses.put(cr, status);
 }
 CustomSettableFuture future = new CustomSettableFuture(waitingFutures, statuses);
 if (isAnyRequestDone) {
  future.resolve();
 } else {
  synchronized (waitingFutures) {
   waitingFutures.add(future);
  }
 }
 return future;
}

Code example source: org.codehaus.plexus/plexus-container-default

// fragment; circular-dependency check, surrounding lines elided
if ( stack.contains( descriptor ) )
{
    // ...
    circularity.subList( circularity.indexOf( descriptor ), circularity.size() );
    circularity.add( descriptor );
}
stack.add( descriptor );
try
// ...

Code example source: apache/incubator-gobblin

private boolean addRecordAndEvictIfNecessary(GlobalMetadata recordToAdd) {
 // First remove the element from the HashSet if it's already in there to reset
 // the 'LRU' piece; then add it back in
 boolean isNew = !metadataRecords.remove(recordToAdd);
 metadataRecords.add(recordToAdd);
 // Now remove the first element (which should be the oldest) from the list
 // if we've exceeded the cache size
 if (cacheSize != -1 && metadataRecords.size() > cacheSize) {
  Iterator<GlobalMetadata> recordIt = metadataRecords.iterator();
  recordIt.next(); // Remove the oldest element - don't care what it is
  recordIt.remove();
 }
 return isNew;
}
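The remove-then-add trick in the snippet above works because add() alone would not refresh an existing element's position in the insertion order; the element has to be removed first so that re-adding it moves it to the tail. A standalone sketch of the same pattern (class and method names are illustrative, not from gobblin):

```java
import java.util.Iterator;
import java.util.LinkedHashSet;

public class LruSetSketch {
    private final LinkedHashSet<String> entries = new LinkedHashSet<>();
    private final int capacity;

    public LruSetSketch(int capacity) {
        this.capacity = capacity;
    }

    // Returns true if the record was not seen before; refreshes its recency either way.
    public boolean touch(String record) {
        // add() alone would keep an existing record's old position, so remove it
        // first; re-adding then places it at the tail of the insertion order
        boolean isNew = !entries.remove(record);
        entries.add(record);
        if (entries.size() > capacity) {
            // the head of the iteration order is the least recently touched record
            Iterator<String> it = entries.iterator();
            it.next();
            it.remove();
        }
        return isNew;
    }

    public LinkedHashSet<String> snapshot() {
        return new LinkedHashSet<>(entries);
    }
}
```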

Code example source: apache/hive

// fragment; shared-work optimizer code, unrelated lines elided
if (currentOp1.getParentOperators().size() !=
    currentOp2.getParentOperators().size()) {
  // ...
}
if (currentOp1.getParentOperators().size() > 1) {
  List<Operator<?>> discardableOpsForCurrentOp = new ArrayList<>();
  int idx = 0;
  // ...
}
retainableOps.add(equalOp1);
discardableOps.add(equalOp2);
if (equalOp1 instanceof MapJoinOperator) {
  MapJoinOperator mop = (MapJoinOperator) equalOp1;
  // ...
}
if (op instanceof MapJoinOperator && !retainableOps.contains(op)) {
  MapJoinOperator mop = (MapJoinOperator) op;
  dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
  // ...
}
if (op instanceof MapJoinOperator && !discardableOps.contains(op)) {
  MapJoinOperator mop = (MapJoinOperator) op;
  dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
  // ...
}

Code example source: nutzam/nutz

public String[] getNamesByAnnotation(Class<? extends Annotation> klass, IocContext context) {
  List<String> names = new ArrayList<String>(loader.getNamesByAnnotation(createLoading(), klass));
  IocContext cntx;
  if (null == context || context == this.context)
    cntx = this.context;
  else
    cntx = new ComboContext(context, this.context);
  for (String name : cntx.names()) {
    ObjectProxy op = cntx.fetch(name);
    if (op.getObj() != null && op.getObj().getClass().getAnnotation(klass) != null) // original read klass.getAnnotation(klass), an apparent bug
      names.add(name);
  }
  LinkedHashSet<String> re = new LinkedHashSet<String>();
  for (String name : names) {
    if (Strings.isBlank(name) || "null".equals(name))
      continue;
    re.add(name);
  }
  return re.toArray(new String[re.size()]);
}

Code example source: crashub/crash

// fragment; table-rendering loop, surrounding lines elided
Map<?, ?> row = stream.next();
// ...
bilto.add(String.valueOf(entry.getKey()));
// ...
if (table.getRows().size() > 0) {
  renderers.add(table.renderer());
}
// ...
if (table.getRows().size() > 0) {
  renderers.add(table.renderer());
}

Code example source: ahmetaa/zemberek-nlp

public static void countModelTokens() throws IOException {
 Path path = Paths.get("/home/aaa/projects/morfessor/model.txt");
 List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
 System.out.println(lines.size());
 LinkedHashSet<String> tokens = new LinkedHashSet<>();
 for (String s : lines) {
  if (s.startsWith("#")) {
   continue;
  }
  List<String> strings = Splitter.on(" ").trimResults().omitEmptyStrings().splitToList(s);
  String root = strings.get(1);
  List<String> endingTokens = strings.subList(2, strings.size()).stream()
    .filter(k -> !k.equals("+")).collect(Collectors.toList());
  String ending = String.join("", endingTokens);
  tokens.add(root);
  if (ending.length() > 0) {
   tokens.add(ending);
  }
 }
 System.out.println(tokens.size());
}

Code example source: espertechinc/esper

public boolean filter(ContextPartitionIdentifier contextPartitionIdentifier) {
  ContextPartitionIdentifierPartitioned id = (ContextPartitionIdentifierPartitioned) contextPartitionIdentifier;
  if (match == null && cpids.contains(id.getContextPartitionId())) {
    throw new RuntimeException("Already exists context id: " + id.getContextPartitionId());
  }
  cpids.add(id.getContextPartitionId());
  contexts.add(id.getKeys());
  return Arrays.equals(id.getKeys(), match);
}

Code example source: org.apache.maven/maven-project

private List collectRestoredListOfPatterns( List patterns,
                          List originalPatterns,
                          List originalInterpolatedPatterns )
{
  LinkedHashSet collectedPatterns = new LinkedHashSet();
  collectedPatterns.addAll( originalPatterns );
  for ( Iterator it = patterns.iterator(); it.hasNext(); )
  {
    String pattern = (String) it.next();
    if ( !originalInterpolatedPatterns.contains( pattern ) )
    {
      collectedPatterns.add( pattern );
    }
  }
  return collectedPatterns.isEmpty() ? Collections.EMPTY_LIST
          : new ArrayList( collectedPatterns );
}
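Several of these examples (the Maven one above, the Jersey one below) use the same order-preserving merge idiom: seed a LinkedHashSet with one collection, then add items from another so that duplicates are skipped and first-seen order is kept. A minimal generic sketch (names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class OrderedMerge {

    // Merges two lists, keeping first-occurrence order and dropping duplicates.
    static List<String> mergeKeepingOrder(List<String> primary, List<String> extra) {
        LinkedHashSet<String> merged = new LinkedHashSet<>(primary);
        for (String item : extra) {
            merged.add(item); // no-op for duplicates; first-seen position is kept
        }
        return new ArrayList<>(merged);
    }
}
```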

Code example source: ebean-orm/ebean

// fragment; surrounding property-parsing loop elided
LinkedHashSet<String> propertySet = new LinkedHashSet<>(res.size() * 2);
// ...
propertySet.add(temp);
count++;
// ...
if (propertySet.contains("*")) {
  // ...
}

Code example source: cmusphinx/sphinx4

// fragment; FST traversal code, surrounding lines elided
queue.add(reversed.getStart());
// ...
State q = queue.iterator().next();
queue.remove(q);
// ...
if (!queue.contains(nextState)) {
  queue.add(nextState);
}

Code example source: apache/hive

// fragment; correlation-optimizer code, surrounding lines elided
LinkedHashSet<ReduceSinkOperator> thisBottomReduceSinkOperators =
  exploitJobFlowCorrelation(rsop, corrCtx, correlation);
if (thisBottomReduceSinkOperators.size() == 0) {
  thisBottomReduceSinkOperators.add(rsop);
}

Code example source: jersey/jersey

/**
 * Get collection of all {@link ServiceHolder}s bound for providers (custom and default) registered for the given service
 * provider contract in the underlying {@link InjectionManager injection manager} container.
 *
 * @param <T>             service provider contract Java type.
 * @param injectionManager underlying injection manager.
 * @param contract        service provider contract.
 * @return set of all available service provider instances for the contract
 */
public static <T> Collection<ServiceHolder<T>> getAllServiceHolders(InjectionManager injectionManager, Class<T> contract) {
  List<ServiceHolder<T>> providers = getServiceHolders(injectionManager,
                             contract,
                             Comparator.comparingInt(Providers::getPriority),
                             CustomAnnotationLiteral.INSTANCE);
  providers.addAll(getServiceHolders(injectionManager, contract));
  LinkedHashSet<ServiceHolder<T>> providersSet = new LinkedHashSet<>();
  for (ServiceHolder<T> provider : providers) {
    if (!providersSet.contains(provider)) {
      providersSet.add(provider);
    }
  }
  return providersSet;
}
