Usage of the com.github.benmanes.caffeine.cache.Caffeine.writer() method, with code examples


This article collects code examples of the com.github.benmanes.caffeine.cache.Caffeine.writer() method in Java and shows how Caffeine.writer() is used in practice. The examples are drawn from selected open-source projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of the Caffeine.writer() method are as follows:
Package path: com.github.benmanes.caffeine.cache.Caffeine
Class name: Caffeine
Method name: writer

About Caffeine.writer

Specifies a writer instance that caches should notify each time an entry is explicitly created or modified, or removed for any RemovalCause. The writer is not notified when an entry is loaded or computed. Each cache created by this builder will invoke this writer as part of the atomic operation that modifies the cache.

Warning: after invoking this method, do not continue to use this cache builder reference; instead use the reference this method returns. At runtime, these point to the same instance, but only the returned reference has the correct generic type information so as to ensure type safety. For best results, use the standard method-chaining idiom illustrated in the Caffeine class documentation, configuring a builder and building your cache in a single statement. Failure to heed this advice can result in a ClassCastException being thrown by a cache operation at some undefined point in the future.

Warning: any exception thrown by the writer will be propagated to the Cache user.

This feature cannot be used in conjunction with #weakKeys() or #buildAsync.
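
To make the description concrete, below is a minimal write-through sketch that is not taken from any of the projects quoted below; the backingStore map, the String key/value types, and the WriteThroughExample class name are illustrative assumptions, and the snippet assumes the Caffeine 2.x API, where CacheWriter and Caffeine.writer() are available. The writer mirrors explicit puts and removals into a plain ConcurrentHashMap, while loads and computations bypass it.

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.CacheWriter;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WriteThroughExample {
 public static void main(String[] args) {
  // Hypothetical external store used only for this illustration.
  Map<String, String> backingStore = new ConcurrentHashMap<>();

  Cache<String, String> cache = Caffeine.newBuilder()
    .maximumSize(100)
    // The writer runs as part of the atomic operation that modifies the cache,
    // so any exception it throws propagates to the caller of put/invalidate.
    .writer(new CacheWriter<String, String>() {
     @Override public void write(String key, String value) {
      // Invoked on explicit creation or modification (e.g. put),
      // but not when an entry is loaded or computed.
      backingStore.put(key, value);
     }
     @Override public void delete(String key, String value, RemovalCause cause) {
      // Invoked on removal for any RemovalCause (explicit, size, expiry, ...).
      backingStore.remove(key);
     }
    })
    .build();

  cache.put("a", "1");   // triggers write("a", "1")
  cache.invalidate("a"); // triggers delete("a", "1", RemovalCause.EXPLICIT)
  System.out.println(backingStore); // prints {} because the delete mirrored the invalidation
 }
}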

Code examples

Code example source: ben-manes/caffeine

@Test(expectedExceptions = IllegalStateException.class)
public void writer_twice() {
 Caffeine.newBuilder().writer(writer).writer(writer);
}

Code example source: ben-manes/caffeine

@Test(expectedExceptions = NullPointerException.class)
public void writer_null() {
 Caffeine.newBuilder().writer(null);
}

Code example source: ben-manes/caffeine

@Test(expectedExceptions = IllegalStateException.class)
public void async_writer() {
 Caffeine.newBuilder().writer(writer).buildAsync(loader);
}

Code example source: ben-manes/caffeine

@Test(expectedExceptions = IllegalStateException.class)
public void weakKeys_writer() {
 Caffeine.newBuilder().weakKeys().writer(writer);
}

Code example source: ben-manes/caffeine

@Test(expectedExceptions = IllegalStateException.class)
public void writer_weakKeys() {
 Caffeine.newBuilder().writer(writer).weakKeys();
}

Code example source: ben-manes/caffeine

@Test
public void writer() {
 Caffeine<?, ?> builder = Caffeine.newBuilder().writer(writer);
 assertThat(builder.getCacheWriter(), is(writer));
 builder.build();
}

Code example source: ben-manes/caffeine

@Test
public void rescheduleDrainBuffers() {
 AtomicBoolean evicting = new AtomicBoolean();
 AtomicBoolean done = new AtomicBoolean();
 CacheWriter<Integer, Integer> writer = new CacheWriter<Integer, Integer>() {
  @Override public void write(Integer key, Integer value) {}
  @Override public void delete(Integer key, Integer value, RemovalCause cause) {
   evicting.set(true);
   await().untilTrue(done);
  }
 };
 BoundedLocalCache<Integer, Integer> map = asBoundedLocalCache(
   Caffeine.newBuilder().writer(writer).maximumSize(0L).build());
 map.put(1, 1);
 await().untilTrue(evicting);
 map.put(2, 2);
 assertThat(map.drainStatus, is(PROCESSING_TO_REQUIRED));
 done.set(true);
 await().until(() -> map.drainStatus, is(IDLE));
}

Code example source: ben-manes/caffeine

.executor(Runnable::run)
  .maximumSize(100)
  .writer(writer)
  .build();
cache.put(key, oldValue);

Code example source: ben-manes/caffeine

.executor(Runnable::run)
  .maximumSize(100)
  .writer(writer)
  .build();
BoundedLocalCache<Integer, Integer> localCache = asBoundedLocalCache(cache);

Code example source: ben-manes/caffeine

/** Creates a configured cache. */
public CacheProxy<K, V> build() {
 boolean evicts = false;
 evicts |= configureMaximumSize();
 evicts |= configureMaximumWeight();
 evicts |= configureExpireAfterWrite();
 evicts |= configureExpireAfterAccess();
 evicts |= configureExpireVariably();
 JCacheEvictionListener<K, V> evictionListener = null;
 if (evicts) {
  evictionListener = new JCacheEvictionListener<>(dispatcher, statistics);
  caffeine.writer(evictionListener);
 }
 CacheProxy<K, V> cache;
 if (isReadThrough()) {
  configureRefreshAfterWrite();
  cache = newLoadingCacheProxy();
 } else {
  cache = newCacheProxy();
 }
 if (evictionListener != null) {
  evictionListener.setCache(cache);
 }
 return cache;
}

Code example source: ben-manes/caffeine

builder.writer((CacheWriter<Object, Object>) writer);

Code example source: apache/metron

.expireAfterAccess(profileTimeToLiveMillis, TimeUnit.MILLISECONDS)
    .ticker(ticker)
    .writer(new ActiveCacheWriter());
if (LOG.isDebugEnabled()) {
 activeCacheBuilder.recordStats();
}

    .expireAfterWrite(profileTimeToLiveMillis, TimeUnit.MILLISECONDS)
    .ticker(ticker)
    .writer(new ExpiredCacheWriter());
if (LOG.isDebugEnabled()) {
 expiredCacheBuilder.recordStats();
}

Code example source: ben-manes/caffeine

builder.writer(context.cacheWriter());

Code example source: com.github.ben-manes.caffeine/caffeine

builder.writer((CacheWriter<Object, Object>) writer);

Code example source: com.wavefront/proxy

.maximumSize(cacheSize)
.ticker(ticker == null ? Ticker.systemTicker() : ticker)
.writer(new CacheWriter<HistogramKey, AgentDigest>() {
 @Override
 public void write(@Nonnull HistogramKey key, @Nonnull AgentDigest value) {

Code example source: wavefrontHQ/java

.maximumSize(cacheSize)
.ticker(ticker == null ? Ticker.systemTicker() : ticker)
.writer(new CacheWriter<HistogramKey, AgentDigest>() {
 @Override
 public void write(@Nonnull HistogramKey key, @Nonnull AgentDigest value) {

Code example source: CorfuDB/CorfuDB

return Caffeine.newBuilder()
    .recordStats()
    .writer(new CacheWriter<String, Object>() {
      @Override
      public synchronized void write(@Nonnull String key, @Nonnull Object value) {

Code example source: com.github.ben-manes.caffeine/jcache

/** Creates a configured cache. */
public CacheProxy<K, V> build() {
 boolean evicts = false;
 evicts |= configureMaximumSize();
 evicts |= configureMaximumWeight();
 evicts |= configureExpireAfterWrite();
 evicts |= configureExpireAfterAccess();
 evicts |= configureExpireVariably();
 JCacheEvictionListener<K, V> evictionListener = null;
 if (evicts) {
  evictionListener = new JCacheEvictionListener<>(dispatcher, statistics);
  caffeine.writer(evictionListener);
 }
 CacheProxy<K, V> cache;
 if (isReadThrough()) {
  configureRefreshAfterWrite();
  cache = newLoadingCacheProxy();
 } else {
  cache = newCacheProxy();
 }
 if (evictionListener != null) {
  evictionListener.setCache(cache);
 }
 return cache;
}

Code example source: CorfuDB/CorfuDB

/**
 * Returns a new LogUnitServer.
 * @param serverContext context object providing settings and objects
 */
public LogUnitServer(ServerContext serverContext) {
  this.serverContext = serverContext;
  this.config = LogUnitServerConfig.parse(serverContext.getServerConfig());
  if (config.isMemoryMode()) {
    log.warn("Log unit opened in-memory mode (Maximum size={}). "
        + "This should be run for testing purposes only. "
        + "If you exceed the maximum size of the unit, old entries will be "
        + "AUTOMATICALLY trimmed. "
        + "The unit WILL LOSE ALL DATA if it exits.", Utils
        .convertToByteStringRepresentation(config.getMaxCacheSize()));
    streamLog = new InMemoryStreamLog();
  } else {
    streamLog = new StreamLogFiles(serverContext, config.isNoVerify());
  }
  batchWriter = new BatchWriter<>(streamLog, serverContext.getServerEpoch(), !config.isNoSync());
  dataCache = Caffeine.newBuilder()
      .<Long, ILogData>weigher((k, v) -> ((LogData) v).getData() == null ? 1 : ((LogData) v).getData().length)
      .maximumWeight(config.getMaxCacheSize())
      .removalListener(this::handleEviction)
      .writer(batchWriter)
      .build(this::handleRetrieval);
  logCleaner = new StreamLogCompaction(streamLog, 10, 45, TimeUnit.MINUTES, ServerContext.SHUTDOWN_TIMER);
}
