

Java JavaConversions.asJavaCollection Method Code Examples

This article collects typical usage examples of the `scala.collection.JavaConversions.asJavaCollection` method in Java. If you are wondering what `JavaConversions.asJavaCollection` does or how to call it from Java code, the curated examples below should help. You can also explore further usage examples of `scala.collection.JavaConversions`, the class this method belongs to.


The following shows 9 code examples of the `JavaConversions.asJavaCollection` method, sorted by popularity by default.
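Before diving into the project snippets, here is a minimal, self-contained sketch of the shared pattern (a hypothetical example, not taken from the projects below, assuming a Scala 2.x `scala-library` on the classpath): convert a Scala collection with `asJavaCollection` so it can be consumed with Java's enhanced `for` loop.

```java
import java.util.Arrays;
import java.util.Collection;

import scala.collection.JavaConversions;

public class AsJavaCollectionDemo {
    public static void main(String[] args) {
        // Build a Scala immutable List from a Java list (asScalaBuffer is the
        // reverse conversion, also used in Example 2 below), then view it back
        // as a java.util.Collection. asJavaCollection returns a wrapper view
        // over the Scala collection, not a copy.
        scala.collection.immutable.List<String> scalaList =
                JavaConversions.asScalaBuffer(Arrays.asList("a", "b")).toList();
        Collection<String> javaView = JavaConversions.asJavaCollection(scalaList);
        for (String s : javaView) {
            System.out.println(s);
        }
    }
}
```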

Example 1: getInputFields

import scala.collection.JavaConversions; // import the class this method depends on
private List<Object> getInputFields(IndexedRecord inputRecord, String columnName) {
    // Adapt non-avpath syntax to avpath.
    // TODO: This should probably not be automatic, use the actual syntax.
    if (!columnName.startsWith("."))
        columnName = "." + columnName;
    Try<scala.collection.immutable.List<Evaluator.Ctx>> result = wandou.avpath.package$.MODULE$.select(inputRecord,
            columnName);
    List<Object> values = new ArrayList<Object>();
    if (result.isSuccess()) {
        for (Evaluator.Ctx ctx : JavaConversions.asJavaCollection(result.get())) {
            values.add(ctx.value());
        }
    } else {
        // Evaluating the expression failed, and we can handle the exception.
        Throwable t = result.failed().get();
        throw ProcessingErrorCode.createAvpathSyntaxError(t, columnName, -1);
    }
    return values;
}
 
Developer: Talend, Project: components, Lines: 20, Source: FilterRowDoFn.java

Example 2: testAnonClassTokenize

import scala.collection.JavaConversions; // import the class this method depends on
public void testAnonClassTokenize() {
  myFixture.configureByFiles("anonClassTokens.java");
  Eddy eddy = makeEddy();
  Token[] toks = {
    // List<X> x = new <caret>List<X>() {
    new Tokens.IdentTok("List"),
    Tokens.LtTok$.MODULE$,
    new Tokens.IdentTok("X"),
    Tokens.GtTok$.MODULE$,
    new Tokens.IdentTok("x"),
    Tokens.EqTok$.MODULE$,
    Tokens.NewTok$.MODULE$,
    new Tokens.IdentTok("List"),
    Tokens.LtTok$.MODULE$,
    new Tokens.IdentTok("X"),
    Tokens.GtTok$.MODULE$,
    Tokens.LParenTok$.MODULE$,
    Tokens.RParenTok$.MODULE$,
    new Tokenizer.AtomicAnonBodyTok(PsiTreeUtil.findChildrenOfType(myFixture.getFile(), PsiAnonymousClass.class).iterator().next())
  };
  List<Token> wanted = new ArrayList<Token>();
  Collections.addAll(wanted, toks);

  List<Token> tokens = new ArrayList<Token>();
  try {
    for (final Loc<Token> tok : JavaConversions.asJavaCollection(Tokens.prepare(JavaConversions.asScalaBuffer(eddy.input().input).toList())))
      tokens.add(tok.x());
  } catch (Eddy.Skip s) {
    throw new AssertionError();
  }
  assertEquals(wanted, tokens);
}
 
Developer: eddysystems, Project: eddy, Lines: 33, Source: Tests.java

Example 3: listGood

import scala.collection.JavaConversions; // import the class this method depends on
static public <A> Scored<A> listGood(final List<Alt<A>> xs) {
  final int n = xs.size();
  switch (n) {
    case 0:
      return (Scored<A>)Empty$.MODULE$;
    case 1:
      final Alt<A> x = xs.head();
      return new Best<A>(x.dp(),x.x(),(Scored<A>)Empty$.MODULE$);
    default:
      final PriorityQueue<Alt<A>> pq = new PriorityQueue<Alt<A>>(JavaConversions.asJavaCollection(xs));
      final Alt<A> bestA = pq.poll();
      return new Best<A>(bestA.dp(), bestA.x(), new Extractor<A>(new MultipleAltState<A>(pq)));
  }
}
 
Developer: eddysystems, Project: eddy, Lines: 15, Source: JavaScores.java

Example 4: process

import scala.collection.JavaConversions; // import the class this method depends on
@Override
public void process(
    VariantContextWritable input, Emitter<Pair<Variant, Collection<Genotype>>> emitter) {
  VariantContext bvc = input.get();
  List<org.bdgenomics.adam.models.VariantContext> avcList =
      JavaConversions.seqAsJavaList(vcc.convert(bvc));
  for (org.bdgenomics.adam.models.VariantContext avc : avcList) {
    Variant variant = avc.variant().variant();
    Collection<Genotype> genotypes = JavaConversions.asJavaCollection(avc.genotypes());
    emitter.emit(Pair.of(variant, genotypes));
  }
}
 
Developer: cloudera, Project: quince, Lines: 13, Source: VCFToADAMVariantFn.java

Example 5: getBrokers

import scala.collection.JavaConversions; // import the class this method depends on
/**
 * Returns the bootstrap broker nodes
 * @return the bootstrap broker nodes
 */
public Collection<Node> getBrokers() {
	if(adminClient==null) throw new IllegalStateException("Admin client not created");
	return JavaConversions.asJavaCollection(adminClient.bootstrapBrokers());		
}
 
Developer: nickman, Project: HeliosStreams, Lines: 9, Source: KafkaAdminClient.java

Example 6: getAllConsumerGroups

import scala.collection.JavaConversions; // import the class this method depends on
/**
 * Returns the meta data on all consumer groups
 * @return the meta data on all consumer groups
 */
public Collection<GroupOverview> getAllConsumerGroups() {
	if(adminClient==null) throw new IllegalStateException("Admin client not created");
	return JavaConversions.asJavaCollection(adminClient.listAllConsumerGroupsFlattened());				
}
 
Developer: nickman, Project: HeliosStreams, Lines: 9, Source: KafkaAdminClient.java

Example 7: getTopics

import scala.collection.JavaConversions; // import the class this method depends on
@Override
public Set<String> getTopics() {
    return new TreeSet<String>(JavaConversions.asJavaCollection(zkUtils.getAllTopics()));
}
 
Developer: craftsmenlabs, Project: kafka-admin-rest-api, Lines: 5, Source: TopicServiceImpl.java

Example 8: clearStreamsFromPreviousRun

import scala.collection.JavaConversions; // import the class this method depends on
/**
 * This is a best-effort approach to clear the internal streams from previous run, including intermediate streams,
 * checkpoint stream and changelog streams.
 * For batch processing, we always clean up the previous internal streams and create a new set for each run.
 * @param prevConfig config of the previous run
 */
public void clearStreamsFromPreviousRun(Config prevConfig) {
  try {
    ApplicationConfig appConfig = new ApplicationConfig(prevConfig);
    LOGGER.info("run.id from previous run is {}", appConfig.getRunId());

    StreamConfig streamConfig = new StreamConfig(prevConfig);

    // Find all intermediate streams and clean up
    Set<StreamSpec> intStreams = JavaConversions.asJavaCollection(streamConfig.getStreamIds()).stream()
        .filter(streamConfig::getIsIntermediate)
        .map(id -> new StreamSpec(id, streamConfig.getPhysicalName(id), streamConfig.getSystem(id)))
        .collect(Collectors.toSet());
    intStreams.forEach(stream -> {
        LOGGER.info("Clear intermediate stream {} in system {}", stream.getPhysicalName(), stream.getSystemName());
        sysAdmins.get(stream.getSystemName()).clearStream(stream);
      });

    // Find checkpoint stream and clean up
    TaskConfig taskConfig = new TaskConfig(prevConfig);
    String checkpointManagerFactoryClass = taskConfig.getCheckpointManagerFactory().getOrElse(defaultValue(null));
    if (checkpointManagerFactoryClass != null) {
      CheckpointManager checkpointManager = ((CheckpointManagerFactory) Util.getObj(checkpointManagerFactoryClass))
          .getCheckpointManager(prevConfig, new MetricsRegistryMap());
      checkpointManager.clearCheckpoints();
    }

    // Find changelog streams and remove them
    StorageConfig storageConfig = new StorageConfig(prevConfig);
    for (String store : JavaConversions.asJavaCollection(storageConfig.getStoreNames())) {
      String changelog = storageConfig.getChangelogStream(store).getOrElse(defaultValue(null));
      if (changelog != null) {
        LOGGER.info("Clear store {} changelog {}", store, changelog);
        SystemStream systemStream = Util.getSystemStreamFromNames(changelog);
        StreamSpec spec = StreamSpec.createChangeLogStreamSpec(systemStream.getStream(), systemStream.getSystem(), 1);
        sysAdmins.get(spec.getSystemName()).clearStream(spec);
      }
    }
  } catch (Exception e) {
    // For batch, we always create a new set of internal streams (checkpoint, changelog and intermediate) with unique
    // id. So if clearStream doesn't work, it won't affect the correctness of the results.
    // We log a warning here and rely on retention to clean up the streams later.
    LOGGER.warn("Fail to clear internal streams from previous run. Please clean up manually.", e);
  }
}
 
Developer: apache, Project: samza, Lines: 51, Source: StreamManager.java

Example 9: getConsumers

import scala.collection.JavaConversions; // import the class this method depends on
/**
 * Returns the meta data on all consumers in the passed consumer group
 * @param consumerGroup The consumer group name
 * @return the meta data on all consumers in the passed consumer group
 */
public Collection<ConsumerSummary> getConsumers(final String consumerGroup) {
	if(adminClient==null) throw new IllegalStateException("Admin client not created");
	return JavaConversions.asJavaCollection(adminClient.describeConsumerGroup(consumerGroup));				
}
 
Developer: nickman, Project: HeliosStreams, Lines: 10, Source: KafkaAdminClient.java
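One caveat worth noting: `scala.collection.JavaConversions` was deprecated in Scala 2.12 and later removed, so the examples above reflect older codebases. For new code against Scala 2.12, the explicit equivalent lives on `scala.collection.JavaConverters`. Below is a minimal sketch (a hypothetical example assuming scala-library 2.12 on the classpath, not taken from the projects above):

```java
import java.util.Arrays;
import java.util.Collection;

import scala.collection.JavaConverters;

public class JavaConvertersDemo {
    public static void main(String[] args) {
        scala.collection.immutable.List<String> scalaList =
                JavaConverters.asScalaBuffer(Arrays.asList("a", "b")).toList();
        // JavaConverters.asJavaCollection is the explicit, non-deprecated
        // counterpart of JavaConversions.asJavaCollection.
        Collection<String> javaView = JavaConverters.asJavaCollection(scalaList);
        System.out.println(javaView.size());
    }
}
```

The call sites are drop-in compatible from Java, so migrating the snippets above is usually just a matter of swapping the imported class.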


Note: The `scala.collection.JavaConversions.asJavaCollection` examples in this article were compiled from open-source code and documentation platforms such as GitHub and MSDocs, with snippets selected from projects contributed by various developers. Copyright of the source code remains with the original authors; please consult each project's License before distributing or using it. Do not reproduce without permission.