

Java JavaConversions.asScalaBuffer Method Code Examples

This article collects typical usage examples of the scala.collection.JavaConversions.asScalaBuffer method in Java. If you are unsure what JavaConversions.asScalaBuffer does, how to call it, or what real-world usage looks like, the curated code examples below should help. You can also explore further usage examples of scala.collection.JavaConversions.


Below are 15 code examples of the JavaConversions.asScalaBuffer method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Java code examples.
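Before the examples, a minimal self-contained sketch (assuming scala-library is on the classpath; the class name here is illustrative) shows the key property of asScalaBuffer: it returns a live wrapper around the Java list rather than a copy, so changes to the underlying list are visible through the returned Buffer. Note also that JavaConversions has been deprecated since Scala 2.12; scala.collection.JavaConverters (or scala.jdk.CollectionConverters in Scala 2.13+) is the recommended replacement.

```java
import java.util.ArrayList;
import java.util.List;

import scala.collection.JavaConversions;
import scala.collection.mutable.Buffer;

public class AsScalaBufferDemo {
    public static void main(String[] args) {
        List<String> javaList = new ArrayList<>();
        javaList.add("one");

        // asScalaBuffer wraps the Java list; it does not copy it
        Buffer<String> buffer = JavaConversions.asScalaBuffer(javaList);

        // mutating the underlying Java list is visible through the Scala Buffer
        javaList.add("two");
        System.out.println(buffer.size()); // prints 2
    }
}
```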

Example 1: RFileReaderRDD

import scala.collection.JavaConversions; // import the package/class the method depends on
public RFileReaderRDD(final SparkContext sparkContext,
                      final String instanceName,
                      final String zookeepers,
                      final String user,
                      final String password,
                      final String tableName,
                      final Set<String> auths,
                      final byte[] serialisedConfiguration) {
    super(sparkContext, JavaConversions.asScalaBuffer(new ArrayList<>()),
            ClassTag$.MODULE$.apply(Map.Entry.class));
    this.instanceName = instanceName;
    this.zookeepers = zookeepers;
    this.user = user;
    this.password = password;
    this.tableName = tableName;
    this.auths = auths;
    this.serialisedConfiguration = serialisedConfiguration;
}
 
Developer ID: gchq, Project: Gaffer, Lines: 19, Source: RFileReaderRDD.java

Example 2: getAllPartitionIds

import scala.collection.JavaConversions; // import the package/class the method depends on
/**
 * Returns the list of partition ids for the given topic.
 * @param topic the topic name
 * @return the partition ids of the topic
 */
public static List<Integer> getAllPartitionIds(String topic) {
	List<String> list = new ArrayList<>();
	list.add(topic);
	Buffer<String> buffer = JavaConversions.asScalaBuffer(list);

	Map<String, Seq<Object>> topicPartMap = JavaConversions.asJavaMap(ZkUtils.getPartitionsForTopics(getZkClient(), buffer));
	List<Object> javaList = JavaConversions.asJavaList(topicPartMap.get(topic));
	
	List<Integer> retList = new ArrayList<Integer>();
	for (Object obj : javaList) {
		retList.add((Integer)obj);
	}
	
	return retList;
}
 
Developer ID: linzhaoming, Project: easyframe-msg, Lines: 21, Source: AdminUtil.java

Example 3: writeReports

import scala.collection.JavaConversions; // import the package/class the method depends on
private void writeReports( Coverage coverage, List<File> sourceRoots, File coberturaXmlOutputDirectory,
                           File scoverageXmlOutputDirectory, File scoverageHtmlOutputDirectory )
{
    Seq<File> sourceRootsAsScalaSeq = JavaConversions.asScalaBuffer( sourceRoots );

    new CoberturaXmlWriter( sourceRootsAsScalaSeq, coberturaXmlOutputDirectory ).write( coverage );
    getLog().info( String.format( "Written Cobertura XML report [%s]",
                                  new File( coberturaXmlOutputDirectory, "cobertura.xml" ).getAbsolutePath() ) );

    new ScoverageXmlWriter( sourceRootsAsScalaSeq, scoverageXmlOutputDirectory, false ).write( coverage );
    getLog().info( String.format( "Written XML coverage report [%s]",
                                  new File( scoverageXmlOutputDirectory, "scoverage.xml" ).getAbsolutePath() ) );

    new ScoverageHtmlWriter( sourceRootsAsScalaSeq, scoverageHtmlOutputDirectory, Option.<String>apply( encoding ) ).write( coverage );
    getLog().info( String.format( "Written HTML coverage report [%s]",
                                  new File( scoverageHtmlOutputDirectory, "index.html" ).getAbsolutePath() ) );

    getLog().info( String.format( "Statement coverage.: %s%%", coverage.statementCoverageFormatted() ) );
    getLog().info( String.format( "Branch coverage....: %s%%", coverage.branchCoverageFormatted() ) );
}
 
Developer ID: scoverage, Project: scoverage-maven-plugin, Lines: 21, Source: SCoverageReportMojo.java

Example 4: main

import scala.collection.JavaConversions; // import the package/class the method depends on
/**
 * @param args
 */
public static void main(String[] args) {
  List<String> javaList = new ArrayList<>();
  javaList.add("one");
  javaList.add("two");
  javaList.add("three");

  System.out.println(javaList); // prints [one, two, three]

  scala.collection.Seq<String> s = JavaConversions
      .asScalaBuffer(javaList);
  System.out.println(s); // prints Buffer(one, two, three)
}
 
Developer ID: jgperrin, Project: net.jgp.labs.informix2spark, Lines: 16, Source: ScalaSeq.java

Example 5: CassandraConfiguration

import scala.collection.JavaConversions; // import the package/class the method depends on
protected CassandraConfiguration() {
	try {
		seed = JavaConversions.asScalaBuffer(Arrays.asList(InetAddress.getByName("localhost")));
	} catch (UnknownHostException e) {
		// TODO Auto-generated catch block
		e.printStackTrace();
	}

	CassandraCluster cc = new CassandraCluster(seed, 9042, null, 8000, 120000, 1000, 6000,
			new ProtocolOptions().getCompression().LZ4, ConsistencyLevel.ONE);
	session = cc.session();
}
 
Developer ID: boontadata, Project: boontadata-streams, Lines: 16, Source: CassandraConfiguration.java

Example 6: convert

import scala.collection.JavaConversions; // import the package/class the method depends on
@Override
public RDD<Tuple> convert(List<RDD<Tuple>> predecessors,
        POUnion physicalOperator) throws IOException {
    SparkUtil.assertPredecessorSizeGreaterThan(predecessors,
            physicalOperator, 0);
    UnionRDD<Tuple> unionRDD = new UnionRDD<Tuple>(sc,
            JavaConversions.asScalaBuffer(predecessors),
            SparkUtil.getManifest(Tuple.class));
    return unionRDD;
}
 
Developer ID: sigmoidanalytics, Project: spork, Lines: 11, Source: UnionConverter.java

Example 7: toSeq

import scala.collection.JavaConversions; // import the package/class the method depends on
public static <E> Seq<E> toSeq(List<E> list) {
	return JavaConversions.asScalaBuffer(list);
}
 
Developer ID: jeremyore, Project: spark-pmml-import, Lines: 5, Source: ScalaUtil.java

Example 8: collectParameters

import scala.collection.JavaConversions; // import the package/class the method depends on
private ParameterValues collectParameters(ParameterizedRug rug, ParameterValues arguments) {
    Collection<Parameter> parameters = asJavaCollection(rug.parameters());
    if (CommandLineOptions.hasOption("interactive") && !parameters.isEmpty()) {

        LineReader reader = ShellUtils.lineReader(ShellUtils.INTERACTIVE_HISTORY,
                Optional.empty());

        List<ParameterValue> newValues = new ArrayList<>();

        log.newline();
        log.info(Style.cyan(Constants.DIVIDER) + " "
                + Style.bold("Please specify parameter values"));
        log.info(Constants.LEFT_PADDING
                + "Press 'Enter' to accept default or provided values. '*' indicates required parameters.");

        for (Parameter parameter : parameters) {
            log.newline();

            ParameterValue pv = JavaConversions.mapAsJavaMap(arguments.parameterValueMap())
                    .get(parameter.getName());
            String defaultValue = (pv != null ? pv.getValue().toString()
                    : parameter.getDefaultValue());

            String description = org.apache.commons.lang3.StringUtils
                    .capitalize(parameter.getDescription());
            log.info("  " + WordUtils.wrap(description, Constants.WRAP_LENGTH, "\n  ", false));

            pv = readParameter(reader, parameter, defaultValue);

            boolean firstAttempt = true;
            while (isInvalid(rug, pv)
                    || ((pv.getValue() == null || pv.getValue().toString().length() == 0)
                            && parameter.isRequired())) {
                log.info(Style.red("  Provided value '%s' is not valid", pv.getValue()));
                if (firstAttempt) {
                    log.newline();
                    log.info("  pattern: %s, min length: %s, max length: %s",
                            parameter.getPattern(),
                            (parameter.getMinLength() >= 0 ? parameter.getMinLength()
                                    : "not defined"),
                            (parameter.getMaxLength() >= 0 ? parameter.getMaxLength()
                                    : "not defined"));
                    firstAttempt = false;
                }

                pv = readParameter(reader, parameter, defaultValue);
            }

            // add the new and validated parameter to project operations arguments
            newValues.add(pv);
        }
        arguments = new SimpleParameterValues(JavaConversions.asScalaBuffer(newValues));
        log.newline();

        ShellUtils.shutdown(reader);
    }
    return arguments;
}
 
Developer ID: atomist-attic, Project: rug-cli, Lines: 59, Source: AbstractParameterizedCommand.java

Example 9: toScalaBuffer

import scala.collection.JavaConversions; // import the package/class the method depends on
/**
 * Converting from Java to a Scala mutable collection.
 */
scala.collection.mutable.Buffer<String> toScalaBuffer(java.util.List<String> strings) {
  return JavaConversions.asScalaBuffer(strings);
}
 
Developer ID: travisbrown, Project: scala-java-interop, Lines: 7, Source: UsingScala.java

Example 10: getTypesDef

import scala.collection.JavaConversions; // import the package/class the method depends on
public static TypesDef getTypesDef(ImmutableList<EnumTypeDefinition> enums,
                                   ImmutableList<StructTypeDefinition> structs, ImmutableList<HierarchicalTypeDefinition<TraitType>> traits,
                                   ImmutableList<HierarchicalTypeDefinition<ClassType>> classes) {
    return new TypesDef(JavaConversions.asScalaBuffer(enums), JavaConversions.asScalaBuffer(structs),
            JavaConversions.asScalaBuffer(traits), JavaConversions.asScalaBuffer(classes));
}
 
Developer ID: apache, Project: incubator-atlas, Lines: 7, Source: TypesUtil.java

Example 11: testKafkaTransport

import scala.collection.JavaConversions; // import the package/class the method depends on
@Test
public void testKafkaTransport() throws Exception {

  String topic = "zipkin";
  // Kafka setup
  EmbeddedZookeeper zkServer = new EmbeddedZookeeper(TestZKUtils.zookeeperConnect());
  ZkClient zkClient = new ZkClient(zkServer.connectString(), 30000, 30000, ZKStringSerializer$.MODULE$);
  Properties props = TestUtils.createBrokerConfig(0, TestUtils.choosePort(), false);
  KafkaConfig config = new KafkaConfig(props);
  KafkaServer kafkaServer = TestUtils.createServer(config, new MockTime());

  Buffer<KafkaServer> servers = JavaConversions.asScalaBuffer(Collections.singletonList(kafkaServer));
  TestUtils.createTopic(zkClient, topic, 1, 1, servers, new Properties());
  zkClient.close();
  TestUtils.waitUntilMetadataIsPropagated(servers, topic, 0, 5000);

  // HTrace
  HTraceConfiguration hTraceConfiguration = HTraceConfiguration.fromKeyValuePairs(
      "sampler.classes", "AlwaysSampler",
      "span.receiver.classes", ZipkinSpanReceiver.class.getName(),
      "zipkin.kafka.metadata.broker.list", config.advertisedHostName() + ":" + config.advertisedPort(),
      "zipkin.kafka.topic", topic,
      ZipkinSpanReceiver.TRANSPORT_CLASS_KEY, KafkaTransport.class.getName()
  );

  final Tracer tracer = new Tracer.Builder("test-tracer")
      .tracerPool(new TracerPool("test-tracer-pool"))
      .conf(hTraceConfiguration)
      .build();

  String scopeName = "test-kafka-transport-scope";
  TraceScope traceScope = tracer.newScope(scopeName);
  traceScope.close();
  tracer.close();

  // Kafka consumer
  Properties consumerProps = new Properties();
  consumerProps.put("zookeeper.connect", props.getProperty("zookeeper.connect"));
  consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "testing.group");
  consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "smallest");
  ConsumerConnector connector =
      kafka.consumer.Consumer.createJavaConsumerConnector(new kafka.consumer.ConsumerConfig(consumerProps));
  Map<String, Integer> topicCountMap = new HashMap<>();
  topicCountMap.put(topic, 1);
  Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topicCountMap);
  ConsumerIterator<byte[], byte[]> it = streams.get(topic).get(0).iterator();

  // Test
  Assert.assertTrue("We should have one message in Kafka", it.hasNext());
  Span span = new Span();
  new TDeserializer(new TBinaryProtocol.Factory()).deserialize(span, it.next().message());
  Assert.assertEquals("The span name should match our scope description", span.getName(), scopeName);

  kafkaServer.shutdown();

}
 
Developer ID: apache, Project: incubator-htrace, Lines: 57, Source: ITZipkinReceiver.java

Example 12: pushToStream

import scala.collection.JavaConversions; // import the package/class the method depends on
public void pushToStream(String message) {

    int streamNo = (int) this.nextStream.incrementAndGet() % this.queues.size();

    AtomicLong offset = this.offsets.get(streamNo);
    BlockingQueue<FetchedDataChunk> queue = this.queues.get(streamNo);

    AtomicLong thisOffset = new AtomicLong(offset.incrementAndGet());

    List<Message> seq = Lists.newArrayList();
    seq.add(new Message(message.getBytes(Charsets.UTF_8)));
    ByteBufferMessageSet messageSet = new ByteBufferMessageSet(NoCompressionCodec$.MODULE$, offset, JavaConversions.asScalaBuffer(seq));

    FetchedDataChunk chunk = new FetchedDataChunk(messageSet,
        new PartitionTopicInfo("topic", streamNo, queue, thisOffset, thisOffset, new AtomicInteger(1), "clientId"),
        thisOffset.get());

    queue.add(chunk);
  }
 
Developer ID: apache, Project: incubator-gobblin, Lines: 20, Source: MockKafkaStream.java

Example 13: toScalaSeq

import scala.collection.JavaConversions; // import the package/class the method depends on
public static <T> Seq<T> toScalaSeq(List<T> list) {
    return JavaConversions.asScalaBuffer(list);
}
 
Developer ID: sigmoidanalytics, Project: spork-streaming, Lines: 4, Source: SparkUtil.java

Example 14: sendPreparedStatement

import scala.collection.JavaConversions; // import the package/class the method depends on
public ComposableFuture<QueryResult> sendPreparedStatement(final String query, final List<Object> values) {
  final Buffer<Object> scalaValues = JavaConversions.asScalaBuffer(values);
  return ScalaFutureHelper.from(() -> conn.sendPreparedStatement(query, scalaValues));
}
 
Developer ID: outbrain, Project: ob1k, Lines: 5, Source: MySqlAsyncConnection.java

Example 15: sendPreparedStatement

import scala.collection.JavaConversions; // import the package/class the method depends on
@Override
public ComposableFuture<QueryResult> sendPreparedStatement(final String query, final List<Object> values) {
  final Buffer<Object> scalaValues = JavaConversions.asScalaBuffer(values);
  return ScalaFutureHelper.from(() -> _pool.sendPreparedStatement(query, scalaValues));
}
 
Developer ID: outbrain, Project: ob1k, Lines: 6, Source: MySqlConnectionPool.java


Note: The scala.collection.JavaConversions.asScalaBuffer examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright remains with the original authors. Please consult each project's License before distributing or using the code, and do not reproduce this article without permission.