

Java JavaConversions.asJavaCollection Method Code Examples

This article collects typical usage examples of the Java method scala.collection.JavaConversions.asJavaCollection. If you are wondering what this method does, how to call it, or what real-world uses look like, the curated code examples below may help. You can also explore further usage examples of its containing class, scala.collection.JavaConversions.


The sections below show 9 code examples of JavaConversions.asJavaCollection, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
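Before the project examples, here is a minimal, self-contained sketch of the basic round trip: build a Scala `List` from a Java list via `asScalaBuffer` (as Example 2 below also does), then expose it back to Java code as a `java.util.Collection` view with `asJavaCollection`. This assumes a Scala 2.10–2.12 `scala-library` on the classpath; note that `JavaConversions` is deprecated since Scala 2.12 in favor of `JavaConverters`.

```java
import java.util.Arrays;
import java.util.Collection;

import scala.collection.JavaConversions;

public class AsJavaCollectionDemo {
    public static void main(String[] args) {
        // Java list -> mutable Scala Buffer -> immutable Scala List.
        scala.collection.immutable.List<String> scalaList =
                JavaConversions.asScalaBuffer(Arrays.asList("a", "b", "c")).toList();

        // Wrap the Scala Iterable as a read-only java.util.Collection view;
        // no copying is performed, the view delegates to the Scala collection.
        Collection<String> view = JavaConversions.asJavaCollection(scalaList);

        for (String s : view) {
            System.out.println(s);
        }
    }
}
```

The returned collection is a live wrapper, not a copy, so it is cheap to create but reflects the underlying Scala collection's immutability: mutating operations such as `add` throw `UnsupportedOperationException`.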

Example 1: getInputFields

import scala.collection.JavaConversions; // import the package/class this method depends on
private List<Object> getInputFields(IndexedRecord inputRecord, String columnName) {
    // Adapt non-avpath syntax to avpath.
    // TODO: This should probably not be automatic, use the actual syntax.
    if (!columnName.startsWith("."))
        columnName = "." + columnName;
    Try<scala.collection.immutable.List<Evaluator.Ctx>> result = wandou.avpath.package$.MODULE$.select(inputRecord,
            columnName);
    List<Object> values = new ArrayList<Object>();
    if (result.isSuccess()) {
        for (Evaluator.Ctx ctx : JavaConversions.asJavaCollection(result.get())) {
            values.add(ctx.value());
        }
    } else {
        // Evaluating the expression failed, and we can handle the exception.
        Throwable t = result.failed().get();
        throw ProcessingErrorCode.createAvpathSyntaxError(t, columnName, -1);
    }
    return values;
}
 
Developer ID: Talend, Project: components, Lines of code: 20, Source: FilterRowDoFn.java

Example 2: testAnonClassTokenize

import scala.collection.JavaConversions; // import the package/class this method depends on
public void testAnonClassTokenize() {
  myFixture.configureByFiles("anonClassTokens.java");
  Eddy eddy = makeEddy();
  Token[] toks = {
    // List<X> x = new <caret>List<X>() {
    new Tokens.IdentTok("List"),
    Tokens.LtTok$.MODULE$,
    new Tokens.IdentTok("X"),
    Tokens.GtTok$.MODULE$,
    new Tokens.IdentTok("x"),
    Tokens.EqTok$.MODULE$,
    Tokens.NewTok$.MODULE$,
    new Tokens.IdentTok("List"),
    Tokens.LtTok$.MODULE$,
    new Tokens.IdentTok("X"),
    Tokens.GtTok$.MODULE$,
    Tokens.LParenTok$.MODULE$,
    Tokens.RParenTok$.MODULE$,
    new Tokenizer.AtomicAnonBodyTok(PsiTreeUtil.findChildrenOfType(myFixture.getFile(), PsiAnonymousClass.class).iterator().next())
  };
  List<Token> wanted = new ArrayList<Token>();
  Collections.addAll(wanted, toks);

  List<Token> tokens = new ArrayList<Token>();
  try {
    for (final Loc<Token> tok : JavaConversions.asJavaCollection(Tokens.prepare(JavaConversions.asScalaBuffer(eddy.input().input).toList())))
      tokens.add(tok.x());
  } catch (Eddy.Skip s) {
    throw new AssertionError();
  }
  assertEquals(wanted, tokens);
}
 
Developer ID: eddysystems, Project: eddy, Lines of code: 33, Source: Tests.java

Example 3: listGood

import scala.collection.JavaConversions; // import the package/class this method depends on
static public <A> Scored<A> listGood(final List<Alt<A>> xs) {
  final int n = xs.size();
  switch (n) {
    case 0:
      return (Scored<A>)Empty$.MODULE$;
    case 1:
      final Alt<A> x = xs.head();
      return new Best<A>(x.dp(),x.x(),(Scored<A>)Empty$.MODULE$);
    default:
      final PriorityQueue<Alt<A>> pq = new PriorityQueue<Alt<A>>(JavaConversions.asJavaCollection(xs));
      final Alt<A> bestA = pq.poll();
      return new Best<A>(bestA.dp(), bestA.x(), new Extractor<A>(new MultipleAltState<A>(pq)));
  }
}
 
Developer ID: eddysystems, Project: eddy, Lines of code: 15, Source: JavaScores.java

Example 4: process

import scala.collection.JavaConversions; // import the package/class this method depends on
@Override
public void process(
    VariantContextWritable input, Emitter<Pair<Variant, Collection<Genotype>>> emitter) {
  VariantContext bvc = input.get();
  List<org.bdgenomics.adam.models.VariantContext> avcList =
      JavaConversions.seqAsJavaList(vcc.convert(bvc));
  for (org.bdgenomics.adam.models.VariantContext avc : avcList) {
    Variant variant = avc.variant().variant();
    Collection<Genotype> genotypes = JavaConversions.asJavaCollection(avc.genotypes());
    emitter.emit(Pair.of(variant, genotypes));
  }
}
 
Developer ID: cloudera, Project: quince, Lines of code: 13, Source: VCFToADAMVariantFn.java

Example 5: getBrokers

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * Returns the bootstrap broker nodes
 * @return the bootstrap broker nodes
 */
public Collection<Node> getBrokers() {
	if(adminClient==null) throw new IllegalStateException("Admin client not created");
	return JavaConversions.asJavaCollection(adminClient.bootstrapBrokers());		
}
 
Developer ID: nickman, Project: HeliosStreams, Lines of code: 9, Source: KafkaAdminClient.java

Example 6: getAllConsumerGroups

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * Returns the meta data on all consumer groups
 * @return the meta data on all consumer groups
 */
public Collection<GroupOverview> getAllConsumerGroups() {
	if(adminClient==null) throw new IllegalStateException("Admin client not created");
	return JavaConversions.asJavaCollection(adminClient.listAllConsumerGroupsFlattened());				
}
 
Developer ID: nickman, Project: HeliosStreams, Lines of code: 9, Source: KafkaAdminClient.java

Example 7: getTopics

import scala.collection.JavaConversions; // import the package/class this method depends on
@Override
public Set<String> getTopics() {
    return new TreeSet<String>(JavaConversions.asJavaCollection(zkUtils.getAllTopics()));
}
 
Developer ID: craftsmenlabs, Project: kafka-admin-rest-api, Lines of code: 5, Source: TopicServiceImpl.java

Example 8: clearStreamsFromPreviousRun

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * This is a best-effort approach to clear the internal streams from previous run, including intermediate streams,
 * checkpoint stream and changelog streams.
 * For batch processing, we always clean up the previous internal streams and create a new set for each run.
 * @param prevConfig config of the previous run
 */
public void clearStreamsFromPreviousRun(Config prevConfig) {
  try {
    ApplicationConfig appConfig = new ApplicationConfig(prevConfig);
    LOGGER.info("run.id from previous run is {}", appConfig.getRunId());

    StreamConfig streamConfig = new StreamConfig(prevConfig);

    //Find all intermediate streams and clean up
    Set<StreamSpec> intStreams = JavaConversions.asJavaCollection(streamConfig.getStreamIds()).stream()
        .filter(streamConfig::getIsIntermediate)
        .map(id -> new StreamSpec(id, streamConfig.getPhysicalName(id), streamConfig.getSystem(id)))
        .collect(Collectors.toSet());
    intStreams.forEach(stream -> {
        LOGGER.info("Clear intermediate stream {} in system {}", stream.getPhysicalName(), stream.getSystemName());
        sysAdmins.get(stream.getSystemName()).clearStream(stream);
      });

    //Find checkpoint stream and clean up
    TaskConfig taskConfig = new TaskConfig(prevConfig);
    String checkpointManagerFactoryClass = taskConfig.getCheckpointManagerFactory().getOrElse(defaultValue(null));
    if (checkpointManagerFactoryClass != null) {
      CheckpointManager checkpointManager = ((CheckpointManagerFactory) Util.getObj(checkpointManagerFactoryClass))
          .getCheckpointManager(prevConfig, new MetricsRegistryMap());
      checkpointManager.clearCheckpoints();
    }

    //Find changelog streams and remove them
    StorageConfig storageConfig = new StorageConfig(prevConfig);
    for (String store : JavaConversions.asJavaCollection(storageConfig.getStoreNames())) {
      String changelog = storageConfig.getChangelogStream(store).getOrElse(defaultValue(null));
      if (changelog != null) {
        LOGGER.info("Clear store {} changelog {}", store, changelog);
        SystemStream systemStream = Util.getSystemStreamFromNames(changelog);
        StreamSpec spec = StreamSpec.createChangeLogStreamSpec(systemStream.getStream(), systemStream.getSystem(), 1);
        sysAdmins.get(spec.getSystemName()).clearStream(spec);
      }
    }
  } catch (Exception e) {
    // For batch, we always create a new set of internal streams (checkpoint, changelog and intermediate) with unique
    // id. So if clearStream doesn't work, it won't affect the correctness of the results.
    // We log a warning here and rely on retention to clean up the streams later.
    LOGGER.warn("Fail to clear internal streams from previous run. Please clean up manually.", e);
  }
}
 
Developer ID: apache, Project: samza, Lines of code: 51, Source: StreamManager.java

Example 9: getConsumers

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * Returns the meta data on all consumers in the passed consumer group
 * @param consumerGroup The consumer group name
 * @return the meta data on all consumers in the passed consumer group
 */
public Collection<ConsumerSummary> getConsumers(final String consumerGroup) {
	if(adminClient==null) throw new IllegalStateException("Admin client not created");
	return JavaConversions.asJavaCollection(adminClient.describeConsumerGroup(consumerGroup));				
}
 
Developer ID: nickman, Project: HeliosStreams, Lines of code: 10, Source: KafkaAdminClient.java


Note: The scala.collection.JavaConversions.asJavaCollection examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. For distribution and use, please refer to each project's license. Do not redistribute without permission.