This article collects code examples of the Java method org.skife.jdbi.v2.Query.setFetchSize and shows how Query.setFetchSize is used in practice. The examples come mainly from platforms such as GitHub, Stack Overflow, and Maven, extracted from selected open-source projects, and should serve as useful references. Details of the Query.setFetchSize method are as follows:
Package path: org.skife.jdbi.v2.Query
Class name: Query
Method name: setFetchSize
Specify the fetch size for the query. This should cause the results to be fetched from the underlying RDBMS in groups of rows equal to the number passed. This is useful for doing chunked streaming of results when exhausting memory could be a problem.
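Before the collected snippets, here is a minimal, self-contained sketch of the typical usage pattern: set a fetch size on the query and consume it through iterator() so rows are streamed in chunks rather than materialized all at once. The H2 in-memory database, the example table, and the fetch size of 100 are illustrative assumptions, not taken from any of the projects below.

import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;
import org.skife.jdbi.v2.Query;
import org.skife.jdbi.v2.ResultIterator;

import java.util.Map;

public class FetchSizeSketch {
    public static void main(String[] args) {
        // Assumes an H2 in-memory database on the classpath; any JDBC URL works here.
        DBI dbi = new DBI("jdbc:h2:mem:example");
        Handle handle = dbi.open();
        try {
            handle.execute("create table something (id int primary key, name varchar(50))");
            handle.execute("insert into something (id, name) values (1, 'Alice'), (2, 'Bob')");

            Query<Map<String, Object>> query = handle
                    .createQuery("select id, name from something order by id")
                    .setFetchSize(100); // hint: fetch rows from the RDBMS in groups of 100

            // Iterate instead of calling list() so rows can be streamed
            // rather than collected into memory all at once.
            ResultIterator<Map<String, Object>> rows = query.iterator();
            try {
                while (rows.hasNext()) {
                    System.out.println(rows.next());
                }
            } finally {
                rows.close();
            }
        } finally {
            handle.close();
        }
    }
}

Note that iterating (rather than calling list()) is what keeps memory bounded; list() would still collect every row into a single in-memory List even with a small fetch size.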
Code example source: origin: apache/incubator-druid
@Override
public Iterable<Map.Entry<String, String>> fetchAll()
{
  return inReadOnlyTransaction((handle, status) -> {
    return handle.createQuery(fetchAllQuery)
                 .setFetchSize(streamingFetchSize)
                 .map(new KeyValueResultSetMapper(keyColumn, valueColumn))
                 .list();
  });
}
Code example source: origin: apache/hive
/**
 * @param connector SQL connector to metadata
 * @param metadataStorageTablesConfig Tables configuration
 * @param dataSource Name of data source
 *
 * @return List of all data segments part of the given data source
 */
static List<DataSegment> getDataSegmentList(final SQLMetadataConnector connector,
    final MetadataStorageTablesConfig metadataStorageTablesConfig,
    final String dataSource) {
  return connector.retryTransaction((handle, status) -> handle.createQuery(String.format(
      "SELECT payload FROM %s WHERE dataSource = :dataSource",
      metadataStorageTablesConfig.getSegmentsTable()))
      .setFetchSize(getStreamingFetchSize(connector))
      .bind("dataSource", dataSource)
      .map(ByteArrayMapper.FIRST)
      .fold(new ArrayList<>(), (Folder3<List<DataSegment>, byte[]>) (accumulator, payload, control, ctx) -> {
        try {
          final DataSegment segment = DATA_SEGMENT_INTERNER.intern(JSON_MAPPER.readValue(payload, DataSegment.class));
          accumulator.add(segment);
          return accumulator;
        } catch (Exception e) {
          throw new SQLException(e.toString());
        }
      }), 3, DEFAULT_MAX_TRIES);
}
Code example source: origin: apache/incubator-druid
.setFetchSize(connector.getStreamingFetchSize())
.bind("dataSource", dataSource)
.bind("start", interval.getStart().toString())
Code example source: origin: apache/incubator-druid
.setFetchSize(connector.getStreamingFetchSize())
.setMaxRows(limit)
.bind("dataSource", dataSource)
Code example source: origin: apache/incubator-druid
.setFetchSize(connector.getStreamingFetchSize())
.bind("dataSource", dataSource)
.map(ByteArrayMapper.FIRST)
Code example source: origin: com.ning.billing/killbill-osgi-bundles-analytics
public void apply(SQLStatement q) throws SQLException
{
    assert q instanceof Query;
    ((Query) q).setFetchSize(va);
}
};
Code example source: origin: org.kill-bill.commons/killbill-jdbi
@Override
public void apply(SQLStatement q) throws SQLException
{
    assert q instanceof Query;
    ((Query) q).setFetchSize(va);
}
};
Code example source: origin: com.ning.billing/killbill-osgi-bundles-analytics
public void apply(SQLStatement q) throws SQLException
{
    assert q instanceof Query;
    ((Query) q).setFetchSize(fs.value());
}
};
Code example source: origin: org.kill-bill.commons/killbill-jdbi
@Override
public void apply(SQLStatement q) throws SQLException
{
    assert q instanceof Query;
    ((Query) q).setFetchSize(fs.value());
}
};
Code example source: origin: org.kill-bill.commons/killbill-jdbi
@Override
public void apply(SQLStatement q) throws SQLException
{
    assert q instanceof Query;
    ((Query) q).setFetchSize(fs.value());
}
};
Code example source: origin: com.ning.billing/killbill-osgi-bundles-analytics
public void apply(SQLStatement q) throws SQLException
{
    assert q instanceof Query;
    ((Query) q).setFetchSize(fs.value());
}
};
Code example source: origin: io.druid/druid-server
getSegmentsTable()
))
.setFetchSize(connector.getStreamingFetchSize())
.bind("dataSource", ds)
.map(ByteArrayMapper.FIRST)
Code example source: origin: io.druid/druid-server
.setFetchSize(connector.getStreamingFetchSize())
.setMaxRows(limit)
.bind("dataSource", dataSource)
Code example source: origin: org.apache.druid/druid-server
.setFetchSize(connector.getStreamingFetchSize())
.bind("dataSource", dataSource)
.bind("start", interval.getStart().toString())
Code example source: origin: org.apache.druid/druid-server
.setFetchSize(connector.getStreamingFetchSize())
.setMaxRows(limit)
.bind("dataSource", dataSource)
Code example source: origin: io.druid/druid-server
.setFetchSize(connector.getStreamingFetchSize())
.bind("dataSource", dataSource)
.bind("start", interval.getStart().toString())
Code example source: origin: com.ning.billing/killbill-meter
@Override
public Void withHandle(final Handle handle) throws Exception {
    // MySQL needs special setup to make it stream the results. See:
    // http://javaquirks.blogspot.com/2007/12/mysql-streaming-result-set.html
    // http://stackoverflow.com/questions/2447324/streaming-large-result-sets-with-mysql
    final Query<Map<String, Object>> query = handle.createQuery("getStreamingAggregationCandidates")
            .setFetchSize(Integer.MIN_VALUE)
            .bind("aggregationLevel", aggregationLevel)
            .bind("tenantRecordId", createCallContext().getTenantRecordId());
    query.setStatementLocator(new StringTemplate3StatementLocator(TimelineAggregatorSqlDao.class));
    ResultIterator<TimelineChunk> iterator = null;
    try {
        iterator = query
                .map(timelineChunkMapper)
                .iterator();
        while (iterator.hasNext()) {
            aggregationConsumer.processTimelineChunk(iterator.next());
        }
    } catch (Exception e) {
        log.error(String.format("Exception during aggregation of level %d", aggregationLevel), e);
    } finally {
        if (iterator != null) {
            iterator.close();
        }
    }
    return null;
}
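As a companion to the snippet above, the following is a minimal sketch (not taken from the killbill-meter project) of the MySQL streaming pattern its comments refer to: with MySQL Connector/J, a fetch size of Integer.MIN_VALUE asks the driver to stream rows one at a time instead of buffering the whole result set. The connection URL, credentials, and table name are placeholders.

import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;
import org.skife.jdbi.v2.ResultIterator;

import java.util.Map;

public class MySqlStreamingSketch {
    public static void main(String[] args) {
        // Placeholder MySQL URL and credentials; requires the MySQL Connector/J driver.
        DBI dbi = new DBI("jdbc:mysql://localhost:3306/example", "user", "password");
        Handle handle = dbi.open();
        try {
            // Integer.MIN_VALUE is MySQL-specific: it tells Connector/J to stream rows
            // instead of loading the entire result set into memory.
            ResultIterator<Map<String, Object>> rows = handle
                    .createQuery("select id, payload from big_table")
                    .setFetchSize(Integer.MIN_VALUE)
                    .iterator();
            try {
                while (rows.hasNext()) {
                    process(rows.next());
                }
            } finally {
                rows.close();
            }
        } finally {
            handle.close();
        }
    }

    private static void process(Map<String, Object> row) {
        System.out.println(row);
    }
}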
Code example source: origin: org.apache.druid/druid-server
.setFetchSize(connector.getStreamingFetchSize())
.bind("dataSource", ds)
.map(ByteArrayMapper.FIRST)
Code example source: origin: org.jdbi/jdbi
@Test
public void testFetchSize() throws Exception
{
    h.createScript("default-data").execute();
    Query<Something> q = h.createQuery("select id, name from something order by id").map(Something.class);
    q.setFetchSize(1);
    ResultIterator<Something> r = q.iterator();
    assertTrue(r.hasNext());
    r.next();
    assertTrue(r.hasNext());
    r.next();
    assertFalse(r.hasNext());
}
Code example source: origin: org.kill-bill.commons/killbill-jdbi
@Test
public void testFetchSize() throws Exception
{
    h.createScript("default-data").execute();
    Query<Something> q = h.createQuery("select id, name from something order by id").map(Something.class);
    q.setFetchSize(1);
    ResultIterator<Something> r = q.iterator();
    assertTrue(r.hasNext());
    r.next();
    assertTrue(r.hasNext());
    r.next();
    assertFalse(r.hasNext());
}