

Java FluentIterable.iterator Method Code Examples

This article collects typical usage examples of the Java method com.google.common.collect.FluentIterable.iterator. If you are unsure what FluentIterable.iterator does, how to call it, or where to find working examples, the curated snippets below should help. You can also explore further usage examples of the enclosing class, com.google.common.collect.FluentIterable.


Two code examples of the FluentIterable.iterator method are shown below, ordered by popularity by default.
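
Before turning to the dremio-oss examples, here is a minimal self-contained sketch of the pattern both of them rely on (the class name and sample data are illustrative, not taken from the examples): build a FluentIterable with transform, then hand its iterator() to a consumer. Guava's transform is lazy, so the Function runs only as the iterator is advanced.

import com.google.common.base.Function;
import com.google.common.collect.FluentIterable;

import java.util.Arrays;
import java.util.Iterator;

public class FluentIterableIteratorDemo {
  public static void main(String[] args) {
    // A lazily transformed view over a source collection.
    FluentIterable<String> upper = FluentIterable
        .from(Arrays.asList("alpha", "beta", "gamma"))
        .transform(new Function<String, String>() {
          @Override
          public String apply(String input) {
            // Runs only while the iterator below is being consumed.
            return input.toUpperCase();
          }
        });

    // iterator() exposes the lazy view to code that expects a plain Iterator.
    Iterator<String> it = upper.iterator();
    while (it.hasNext()) {
      System.out.println(it.next()); // prints ALPHA, BETA, GAMMA
    }
  }
}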

Example 1: create

import com.google.common.collect.FluentIterable; // import the package/class this method depends on
@Override
public ProducerOperator create(FragmentExecutionContext fragmentExecContext, final OperatorContext context, EasySubScan config) throws ExecutionSetupException {
  final FileSystemStoragePlugin2 registry = (FileSystemStoragePlugin2) fragmentExecContext.getStoragePlugin(config.getPluginId());
  final FileSystemPlugin fsPlugin = registry.getFsPlugin();

  final FileSystemWrapper fs = registry.getFs();
  final FormatPluginConfig formatConfig = PhysicalDatasetUtils.toFormatPlugin(config.getFileConfig(), Collections.<String>emptyList());
  final EasyFormatPlugin<?> formatPlugin = (EasyFormatPlugin<?>)fsPlugin.getFormatPlugin(formatConfig);

  //final ImplicitFilesystemColumnFinder explorer = new ImplicitFilesystemColumnFinder(context.getOptions(), fs, config.getColumns());

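  // Wrap each DatasetSplit in a SplitAndExtended; the transform is lazy and is materialized into a list just below.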
  FluentIterable<SplitAndExtended> unorderedWork = FluentIterable.from(config.getSplits())
    .transform(new Function<DatasetSplit, SplitAndExtended>() {
      @Override
      public SplitAndExtended apply(DatasetSplit split) {
        return new SplitAndExtended(split);
      }
    });

  final boolean sortReaders = context.getOptions().getOption(ExecConstants.SORT_FILE_BLOCKS);
  final List<SplitAndExtended> workList = sortReaders ? unorderedWork.toSortedList(SPLIT_COMPARATOR) : unorderedWork.toList();
  final boolean selectAllColumns = selectsAllColumns(config.getSchema(), config.getColumns());
  final CompositeReaderConfig readerConfig = CompositeReaderConfig.getCompound(config.getSchema(), config.getColumns(), config.getPartitionColumns());
  final List<SchemaPath> innerFields = selectAllColumns ? ImmutableList.of(ColumnUtils.STAR_COLUMN) : readerConfig.getInnerColumns();

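  // Build the RecordReaders lazily: each reader is created only when ScanOperator advances the iterator passed below.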
  FluentIterable<RecordReader> readers = FluentIterable.from(workList).transform(new Function<SplitAndExtended, RecordReader>() {
    @Override
    public RecordReader apply(SplitAndExtended input) {
      try {
        RecordReader inner = formatPlugin.getRecordReader(context, fs, input.getExtended(), innerFields);
        return readerConfig.wrapIfNecessary(context.getAllocator(), inner, input.getSplit());
      } catch (ExecutionSetupException e) {
        throw new RuntimeException(e);
      }
    }});

  return new ScanOperator(fragmentExecContext.getSchemaUpdater(), config, context, readers.iterator());
}
 
Developer: dremio, Project: dremio-oss, Lines: 39, Source: EasyScanOperatorCreator.java
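
Note how the checked ExecutionSetupException is rewrapped as a RuntimeException: Guava's Function.apply cannot declare checked exceptions. And because transform is lazy, a failure in getRecordReader surfaces while ScanOperator consumes readers.iterator(), not inside create() itself.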

Example 2: create

import com.google.common.collect.FluentIterable; // import the package/class this method depends on
@Override
public ProducerOperator create(FragmentExecutionContext fragmentExecContext, final OperatorContext context, final ParquetSubScan config) throws ExecutionSetupException {
  final FileSystemStoragePlugin2 registry = (FileSystemStoragePlugin2) fragmentExecContext.getStoragePlugin(config.getPluginId());
  final FileSystemPlugin fsPlugin = registry.getFsPlugin();

  final FileSystemWrapper fs = registry.getFs();

  final Configuration conf = fsPlugin.getFsConf();
  conf.setBoolean(ENABLE_BYTES_READ_COUNTER, false);
  conf.setBoolean(ENABLE_BYTES_TOTAL_COUNTER, false);
  conf.setBoolean(ENABLE_TIME_READ_COUNTER, false);

  final Stopwatch watch = Stopwatch.createStarted();

  boolean isAccelerator = config.getPluginId().getName().equals("__accelerator");

  final ParquetReaderFactory readerFactory = UnifiedParquetReader.getReaderFactory(context.getConfig());

  // TODO (AH) Fix implicit columns with mod time and global dictionaries
  final ImplicitFilesystemColumnFinder finder = new ImplicitFilesystemColumnFinder(context.getOptions(), fs, config.getColumns(), isAccelerator);
  // load global dictionaries, globalDictionaries must be closed by the last reader
  final GlobalDictionaries globalDictionaries = GlobalDictionaries.create(context, fs, config.getGlobalDictionaryEncodedColumns());
  final boolean vectorize = context.getOptions().getOption(ExecConstants.PARQUET_READER_VECTORIZE);
  final boolean autoCorrectCorruptDates = ((ParquetFileConfig)FileFormat.getForFile(config.getFormatSettings())).getAutoCorrectCorruptDates();
  final boolean readInt96AsTimeStamp = context.getOptions().getOption(ExecConstants.PARQUET_READER_INT96_AS_TIMESTAMP).bool_val;
  final boolean enableDetailedTracing = context.getOptions().getOption(ExecConstants.ENABLED_PARQUET_TRACING);
  final CodecFactory codec = CodecFactory.createDirectCodecFactory(fs.getConf(), new ParquetDirectByteBufferAllocator(context.getAllocator()), 0);

  final Map<String, GlobalDictionaryFieldInfo> globalDictionaryEncodedColumns = Maps.newHashMap();

  if (globalDictionaries != null) {
    for (GlobalDictionaryFieldInfo fieldInfo : config.getGlobalDictionaryEncodedColumns()) {
      globalDictionaryEncodedColumns.put(fieldInfo.getFieldName(), fieldInfo);
    }
  }

  final CompositeReaderConfig readerConfig = CompositeReaderConfig.getCompound(config.getSchema(), config.getColumns(), config.getPartitionColumns());
  final List<ParquetDatasetSplit> sortedSplits = Lists.newArrayList();
  final SingletonParquetFooterCache footerCache = new SingletonParquetFooterCache();

  for (DatasetSplit split : config.getSplits()) {
    sortedSplits.add(new ParquetDatasetSplit(split));
  }
  Collections.sort(sortedSplits);

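  // As in example 1, readers are built lazily; each split's Parquet footer is read only when the iterator reaches it.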
  FluentIterable<RecordReader> readers = FluentIterable.from(sortedSplits).transform(new Function<ParquetDatasetSplit, RecordReader>() {
    @Override
    public RecordReader apply(ParquetDatasetSplit split) {
      final UnifiedParquetReader inner = new UnifiedParquetReader(
        context,
        readerFactory,
        finder.getRealFields(),
        config.getColumns(),
        globalDictionaryEncodedColumns,
        config.getConditions(),
        split.getSplitXAttr(),
        fs,
        footerCache.getFooter(fs, new Path(split.getSplitXAttr().getPath())),
        globalDictionaries,
        codec,
        autoCorrectCorruptDates,
        readInt96AsTimeStamp,
        vectorize,
        enableDetailedTracing
      );
      return readerConfig.wrapIfNecessary(context.getAllocator(), inner, split.getDatasetSplit());
    }
  });

  final ScanOperator scan = new ScanOperator(fragmentExecContext.getSchemaUpdater(), config, context, readers.iterator(), globalDictionaries);
  logger.debug("Took {} ms to create Parquet Scan SqlOperatorImpl.", watch.elapsed(TimeUnit.MILLISECONDS));
  return scan;
}
 
Developer: dremio, Project: dremio-oss, Lines: 74, Source: ParquetOperatorCreator.java
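
As in example 1, readers.iterator() defers all reader construction: footers are fetched from footerCache and UnifiedParquetReader instances are built split by split as ScanOperator advances the iterator. Per the comment in the code, the shared globalDictionaries must then be closed by the last reader.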

