

Java FluentIterable.iterator Method Code Examples

This article collects typical usage examples of the Java method com.google.common.collect.FluentIterable.iterator. If you are wondering what FluentIterable.iterator does, how to call it, or what real-world uses look like, the curated code examples below should help. You can also explore further usage examples of the enclosing class, com.google.common.collect.FluentIterable.


The two code examples of FluentIterable.iterator shown below are drawn from open-source projects and are sorted by popularity by default.
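
Before the full examples, here is a minimal, self-contained sketch of the pattern both examples rely on: wrap a collection with FluentIterable.from, declare a lazy per-element transformation, and consume the result through iterator(). The class name and sample data are invented for illustration; only the Guava calls mirror the real examples.

import com.google.common.base.Function;
import com.google.common.collect.FluentIterable;
import com.google.common.collect.ImmutableList;

import java.util.Iterator;

public class FluentIterableIteratorSketch {
  public static void main(String[] args) {
    // Wrap a plain collection and declare a lazy element-by-element transformation.
    FluentIterable<String> names = FluentIterable
        .from(ImmutableList.of(1, 2, 3))
        .transform(new Function<Integer, String>() {
          @Override
          public String apply(Integer n) {
            return "item-" + n; // applied once per element, during iteration
          }
        });

    // iterator() is what the examples below hand to ScanOperator:
    // elements are transformed on demand as the iterator is consumed.
    Iterator<String> it = names.iterator();
    while (it.hasNext()) {
      System.out.println(it.next()); // item-1, item-2, item-3
    }
  }
}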

Example 1: create

import com.google.common.collect.FluentIterable; // import the package/class this method depends on
@Override
public ProducerOperator create(FragmentExecutionContext fragmentExecContext, final OperatorContext context, EasySubScan config) throws ExecutionSetupException {
  final FileSystemStoragePlugin2 registry = (FileSystemStoragePlugin2) fragmentExecContext.getStoragePlugin(config.getPluginId());
  final FileSystemPlugin fsPlugin = registry.getFsPlugin();

  final FileSystemWrapper fs = registry.getFs();
  final FormatPluginConfig formatConfig = PhysicalDatasetUtils.toFormatPlugin(config.getFileConfig(), Collections.<String>emptyList());
  final EasyFormatPlugin<?> formatPlugin = (EasyFormatPlugin<?>)fsPlugin.getFormatPlugin(formatConfig);

  //final ImplicitFilesystemColumnFinder explorer = new ImplicitFilesystemColumnFinder(context.getOptions(), fs, config.getColumns());

  FluentIterable<SplitAndExtended> unorderedWork = FluentIterable.from(config.getSplits())
    .transform(new Function<DatasetSplit, SplitAndExtended>() {
      @Override
      public SplitAndExtended apply(DatasetSplit split) {
        return new SplitAndExtended(split);
      }
    });

  final boolean sortReaders = context.getOptions().getOption(ExecConstants.SORT_FILE_BLOCKS);
  final List<SplitAndExtended> workList = sortReaders ? unorderedWork.toSortedList(SPLIT_COMPARATOR) : unorderedWork.toList();
  final boolean selectAllColumns = selectsAllColumns(config.getSchema(), config.getColumns());
  final CompositeReaderConfig readerConfig = CompositeReaderConfig.getCompound(config.getSchema(), config.getColumns(), config.getPartitionColumns());
  final List<SchemaPath> innerFields = selectAllColumns ? ImmutableList.of(ColumnUtils.STAR_COLUMN) : readerConfig.getInnerColumns();

  FluentIterable<RecordReader> readers = FluentIterable.from(workList).transform(new Function<SplitAndExtended, RecordReader>() {
    @Override
    public RecordReader apply(SplitAndExtended input) {
      try {
        RecordReader inner = formatPlugin.getRecordReader(context, fs, input.getExtended(), innerFields);
        return readerConfig.wrapIfNecessary(context.getAllocator(), inner, input.getSplit());
      } catch (ExecutionSetupException e) {
        throw new RuntimeException(e);
      }
    }});

  return new ScanOperator(fragmentExecContext.getSchemaUpdater(), config, context, readers.iterator());
}
 
Developer ID: dremio, Project: dremio-oss, Lines of code: 39, Source: EasyScanOperatorCreator.java
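
Note that FluentIterable.transform in Example 1 is lazy: no RecordReader is constructed until ScanOperator actually advances the iterator returned by readers.iterator(), which is also why the checked ExecutionSetupException must be wrapped in a RuntimeException inside apply. A minimal sketch (class name and data invented for illustration) demonstrating the laziness:

import com.google.common.base.Function;
import com.google.common.collect.FluentIterable;
import com.google.common.collect.ImmutableList;

import java.util.Iterator;

public class LazyTransformSketch {
  public static void main(String[] args) {
    FluentIterable<String> lazy = FluentIterable
        .from(ImmutableList.of("a", "b"))
        .transform(new Function<String, String>() {
          @Override
          public String apply(String s) {
            System.out.println("transforming " + s); // runs only as elements are pulled
            return s.toUpperCase();
          }
        });

    Iterator<String> it = lazy.iterator(); // no transformation has happened yet
    System.out.println(it.next());         // prints "transforming a", then "A"
  }
}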

Example 2: create

import com.google.common.collect.FluentIterable; // import the package/class this method depends on
@Override
public ProducerOperator create(FragmentExecutionContext fragmentExecContext, final OperatorContext context, final ParquetSubScan config) throws ExecutionSetupException {
  final FileSystemStoragePlugin2 registry = (FileSystemStoragePlugin2) fragmentExecContext.getStoragePlugin(config.getPluginId());
  final FileSystemPlugin fsPlugin = registry.getFsPlugin();

  final FileSystemWrapper fs = registry.getFs();

  final Configuration conf = fsPlugin.getFsConf();
  conf.setBoolean(ENABLE_BYTES_READ_COUNTER, false);
  conf.setBoolean(ENABLE_BYTES_TOTAL_COUNTER, false);
  conf.setBoolean(ENABLE_TIME_READ_COUNTER, false);

  final Stopwatch watch = Stopwatch.createStarted();

  boolean isAccelerator = config.getPluginId().getName().equals("__accelerator");

  final ParquetReaderFactory readerFactory = UnifiedParquetReader.getReaderFactory(context.getConfig());

  // TODO (AH) Fix implicit columns with mod time and global dictionaries
  final ImplicitFilesystemColumnFinder finder = new ImplicitFilesystemColumnFinder(context.getOptions(), fs, config.getColumns(), isAccelerator);
  // load global dictionaries, globalDictionaries must be closed by the last reader
  final GlobalDictionaries globalDictionaries = GlobalDictionaries.create(context, fs, config.getGlobalDictionaryEncodedColumns());
  final boolean vectorize = context.getOptions().getOption(ExecConstants.PARQUET_READER_VECTORIZE);
  final boolean autoCorrectCorruptDates = ((ParquetFileConfig)FileFormat.getForFile(config.getFormatSettings())).getAutoCorrectCorruptDates();
  final boolean readInt96AsTimeStamp = context.getOptions().getOption(ExecConstants.PARQUET_READER_INT96_AS_TIMESTAMP).bool_val;
  final boolean enableDetailedTracing = context.getOptions().getOption(ExecConstants.ENABLED_PARQUET_TRACING);
  final CodecFactory codec = CodecFactory.createDirectCodecFactory(fs.getConf(), new ParquetDirectByteBufferAllocator(context.getAllocator()), 0);

  final Map<String, GlobalDictionaryFieldInfo> globalDictionaryEncodedColumns = Maps.newHashMap();

  if (globalDictionaries != null) {
    for (GlobalDictionaryFieldInfo fieldInfo : config.getGlobalDictionaryEncodedColumns()) {
      globalDictionaryEncodedColumns.put(fieldInfo.getFieldName(), fieldInfo);
    }
  }

  final CompositeReaderConfig readerConfig = CompositeReaderConfig.getCompound(config.getSchema(), config.getColumns(), config.getPartitionColumns());
  final List<ParquetDatasetSplit> sortedSplits = Lists.newArrayList();
  final SingletonParquetFooterCache footerCache = new SingletonParquetFooterCache();

  for (DatasetSplit split : config.getSplits()) {
    sortedSplits.add(new ParquetDatasetSplit(split));
  }
  Collections.sort(sortedSplits);

  FluentIterable<RecordReader> readers = FluentIterable.from(sortedSplits).transform(new Function<ParquetDatasetSplit, RecordReader>() {
    @Override
    public RecordReader apply(ParquetDatasetSplit split) {
      final UnifiedParquetReader inner = new UnifiedParquetReader(
        context,
        readerFactory,
        finder.getRealFields(),
        config.getColumns(),
        globalDictionaryEncodedColumns,
        config.getConditions(),
        split.getSplitXAttr(),
        fs,
        footerCache.getFooter(fs, new Path(split.getSplitXAttr().getPath())),
        globalDictionaries,
        codec,
        autoCorrectCorruptDates,
        readInt96AsTimeStamp,
        vectorize,
        enableDetailedTracing
      );
      return readerConfig.wrapIfNecessary(context.getAllocator(), inner, split.getDatasetSplit());
    }
  });

  final ScanOperator scan = new ScanOperator(fragmentExecContext.getSchemaUpdater(), config, context, readers.iterator(), globalDictionaries);
  logger.debug("Took {} ms to create Parquet Scan SqlOperatorImpl.", watch.elapsed(TimeUnit.MILLISECONDS));
  return scan;
}
 
Developer ID: dremio, Project: dremio-oss, Lines of code: 74, Source: ParquetOperatorCreator.java
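
Compared with Example 1, which chooses between toSortedList and toList based on the SORT_FILE_BLOCKS option, Example 2 sorts its ParquetDatasetSplit list eagerly with Collections.sort before wrapping it in a FluentIterable. In both cases, readers.iterator() hands the ScanOperator a lazily transformed iterator, so each UnifiedParquetReader is built only when the scan reaches it.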

