

Java JavaConversions.asScalaBuffer Method Code Examples

This article collects typical usage examples of the Java method scala.collection.JavaConversions.asScalaBuffer. If you are wondering what JavaConversions.asScalaBuffer does, how to call it, or what real-world uses look like, the curated code examples below should help. You can also explore further usage examples of scala.collection.JavaConversions, the class this method belongs to.


The 15 code examples below show JavaConversions.asScalaBuffer in use, ordered by popularity.
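Before the examples, a minimal sketch of the method's key property (assuming a Scala 2.11 or 2.12 `scala-library` on the classpath; `JavaConversions` is deprecated in 2.12 and removed in 2.13): `asScalaBuffer` wraps a `java.util.List` as a `scala.collection.mutable.Buffer` view rather than copying it, so changes to the underlying Java list are visible through the Scala buffer.

```java
import java.util.ArrayList;
import java.util.List;

import scala.collection.JavaConversions;
import scala.collection.mutable.Buffer;

public class AsScalaBufferView {
    public static void main(String[] args) {
        List<String> javaList = new ArrayList<>();
        javaList.add("one");

        // asScalaBuffer returns a wrapper (a live view), not a copy
        Buffer<String> buffer = JavaConversions.asScalaBuffer(javaList);
        System.out.println(buffer.size()); // 1

        // Mutating the original Java list is visible through the Scala Buffer
        javaList.add("two");
        System.out.println(buffer.size()); // 2
    }
}
```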

Example 1: RFileReaderRDD

import scala.collection.JavaConversions; // import the package/class this method depends on
public RFileReaderRDD(final SparkContext sparkContext,
                      final String instanceName,
                      final String zookeepers,
                      final String user,
                      final String password,
                      final String tableName,
                      final Set<String> auths,
                      final byte[] serialisedConfiguration) {
    super(sparkContext, JavaConversions.asScalaBuffer(new ArrayList<>()),
            ClassTag$.MODULE$.apply(Map.Entry.class));
    this.instanceName = instanceName;
    this.zookeepers = zookeepers;
    this.user = user;
    this.password = password;
    this.tableName = tableName;
    this.auths = auths;
    this.serialisedConfiguration = serialisedConfiguration;
}
 
Developer: gchq | Project: Gaffer | Source: RFileReaderRDD.java (19 lines)

Example 2: getAllPartitionIds

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * Get the list of partition ids for the given topic.
 * @param topic the topic name
 * @return the partition ids of the topic
 */
public static List<Integer> getAllPartitionIds(String topic) {
	List<String> list = new ArrayList<>();
	list.add(topic);
	Buffer<String> buffer = JavaConversions.asScalaBuffer(list);

	Map<String, Seq<Object>> topicPartMap = JavaConversions.asJavaMap(ZkUtils.getPartitionsForTopics(getZkClient(), buffer));
	List<Object> javaList = JavaConversions.asJavaList(topicPartMap.get(topic));
	
	List<Integer> retList = new ArrayList<Integer>();
	for (Object obj : javaList) {
		retList.add((Integer)obj);
	}
	
	return retList;
}
 
Developer: linzhaoming | Project: easyframe-msg | Source: AdminUtil.java (21 lines)

Example 3: writeReports

import scala.collection.JavaConversions; // import the package/class this method depends on
private void writeReports( Coverage coverage, List<File> sourceRoots, File coberturaXmlOutputDirectory,
                           File scoverageXmlOutputDirectory, File scoverageHtmlOutputDirectory )
{
    Seq<File> sourceRootsAsScalaSeq = JavaConversions.asScalaBuffer( sourceRoots );

    new CoberturaXmlWriter( sourceRootsAsScalaSeq, coberturaXmlOutputDirectory ).write( coverage );
    getLog().info( String.format( "Written Cobertura XML report [%s]",
                                  new File( coberturaXmlOutputDirectory, "cobertura.xml" ).getAbsolutePath() ) );

    new ScoverageXmlWriter( sourceRootsAsScalaSeq, scoverageXmlOutputDirectory, false ).write( coverage );
    getLog().info( String.format( "Written XML coverage report [%s]",
                                  new File( scoverageXmlOutputDirectory, "scoverage.xml" ).getAbsolutePath() ) );

    new ScoverageHtmlWriter( sourceRootsAsScalaSeq, scoverageHtmlOutputDirectory, Option.<String>apply( encoding ) ).write( coverage );
    getLog().info( String.format( "Written HTML coverage report [%s]",
                                  new File( scoverageHtmlOutputDirectory, "index.html" ).getAbsolutePath() ) );

    getLog().info( String.format( "Statement coverage.: %s%%", coverage.statementCoverageFormatted() ) );
    getLog().info( String.format( "Branch coverage....: %s%%", coverage.branchCoverageFormatted() ) );
}
 
Developer: scoverage | Project: scoverage-maven-plugin | Source: SCoverageReportMojo.java (21 lines)

Example 4: main

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * @param args command-line arguments (unused)
 */
public static void main(String[] args) {
  List<String> javaList = new ArrayList<>();
  javaList.add("one");
  javaList.add("two");
  javaList.add("three");

  System.out.println(javaList); // prints [one, two, three]

  scala.collection.Seq<String> s = JavaConversions
      .asScalaBuffer(javaList);
  System.out.println(s); // prints Buffer(one, two, three)
}
 
Developer: jgperrin | Project: net.jgp.labs.informix2spark | Source: ScalaSeq.java (16 lines)

Example 5: CassandraConfiguration

import scala.collection.JavaConversions; // import the package/class this method depends on
protected CassandraConfiguration() {
	try {
		seed = JavaConversions.asScalaBuffer(Arrays.asList(InetAddress.getByName("localhost")));
	} catch (UnknownHostException e) {
		// "localhost" should always resolve; if it does not, report the failure
		e.printStackTrace();
	}

	CassandraCluster cc = new CassandraCluster(seed, 9042, null, 8000, 120000, 1000, 6000,
			new ProtocolOptions().getCompression().LZ4, ConsistencyLevel.ONE);
	session = cc.session();
}
 
Developer: boontadata | Project: boontadata-streams | Source: CassandraConfiguration.java (16 lines)

Example 6: convert

import scala.collection.JavaConversions; // import the package/class this method depends on
@Override
public RDD<Tuple> convert(List<RDD<Tuple>> predecessors,
        POUnion physicalOperator) throws IOException {
    SparkUtil.assertPredecessorSizeGreaterThan(predecessors,
            physicalOperator, 0);
    UnionRDD<Tuple> unionRDD = new UnionRDD<Tuple>(sc,
            JavaConversions.asScalaBuffer(predecessors),
            SparkUtil.getManifest(Tuple.class));
    return unionRDD;
}
 
Developer: sigmoidanalytics | Project: spork | Source: UnionConverter.java (11 lines)

Example 7: toSeq

import scala.collection.JavaConversions; // import the package/class this method depends on
static
public <E> Seq<E> toSeq(List<E> list){
	return JavaConversions.asScalaBuffer(list);
}
 
Developer: jeremyore | Project: spark-pmml-import | Source: ScalaUtil.java (5 lines)

Example 8: collectParameters

import scala.collection.JavaConversions; // import the package/class this method depends on
private ParameterValues collectParameters(ParameterizedRug rug, ParameterValues arguments) {
    Collection<Parameter> parameters = asJavaCollection(rug.parameters());
    if (CommandLineOptions.hasOption("interactive") && !parameters.isEmpty()) {

        LineReader reader = ShellUtils.lineReader(ShellUtils.INTERACTIVE_HISTORY,
                Optional.empty());

        List<ParameterValue> newValues = new ArrayList<>();

        log.newline();
        log.info(Style.cyan(Constants.DIVIDER) + " "
                + Style.bold("Please specify parameter values"));
        log.info(Constants.LEFT_PADDING
                + "Press 'Enter' to accept default or provided values. '*' indicates required parameters.");

        for (Parameter parameter : parameters) {
            log.newline();

            ParameterValue pv = JavaConversions.mapAsJavaMap(arguments.parameterValueMap())
                    .get(parameter.getName());
            String defaultValue = (pv != null ? pv.getValue().toString()
                    : parameter.getDefaultValue());

            String description = org.apache.commons.lang3.StringUtils
                    .capitalize(parameter.getDescription());
            log.info("  " + WordUtils.wrap(description, Constants.WRAP_LENGTH, "\n  ", false));

            pv = readParameter(reader, parameter, defaultValue);

            boolean firstAttempt = true;
            while (isInvalid(rug, pv)
                    || ((pv.getValue() == null || pv.getValue().toString().length() == 0)
                            && parameter.isRequired())) {
                log.info(Style.red("  Provided value '%s' is not valid", pv.getValue()));
                if (firstAttempt) {
                    log.newline();
                    log.info("  pattern: %s, min length: %s, max length: %s",
                            parameter.getPattern(),
                            (parameter.getMinLength() >= 0 ? parameter.getMinLength()
                                    : "not defined"),
                            (parameter.getMaxLength() >= 0 ? parameter.getMaxLength()
                                    : "not defined"));
                    firstAttempt = false;
                }

                pv = readParameter(reader, parameter, defaultValue);
            }

            // add the new and validated parameter to project operations arguments
            newValues.add(pv);
        }
        arguments = new SimpleParameterValues(JavaConversions.asScalaBuffer(newValues));
        log.newline();

        ShellUtils.shutdown(reader);
    }
    return arguments;
}
 
Developer: atomist-attic | Project: rug-cli | Source: AbstractParameterizedCommand.java (59 lines)

Example 9: toScalaBuffer

import scala.collection.JavaConversions; // import the package/class this method depends on
/**
 * Converting from Java to a Scala mutable collection.
 */
scala.collection.mutable.Buffer<String> toScalaBuffer(java.util.List<String> strings) {
  return JavaConversions.asScalaBuffer(strings);
}
 
Developer: travisbrown | Project: scala-java-interop | Source: UsingScala.java (7 lines)
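Since the `JavaConversions` API used throughout these examples is deprecated as of Scala 2.12, here is a sketch of the same Java-to-Scala conversion via the explicit `JavaConverters` API that replaced it (assuming Scala 2.11/2.12 `scala-library` on the classpath):

```java
import java.util.Arrays;
import java.util.List;

import scala.collection.JavaConverters;
import scala.collection.mutable.Buffer;

public class UsingJavaConverters {
    public static void main(String[] args) {
        List<String> strings = Arrays.asList("one", "two", "three");

        // Explicit decorator-style conversion, recommended over the
        // implicit conversions in scala.collection.JavaConversions
        Buffer<String> buffer = JavaConverters.asScalaBufferConverter(strings).asScala();
        System.out.println(buffer.size()); // 3
    }
}
```

In Scala 2.13 and later, `scala.jdk.CollectionConverters` supersedes both of these APIs.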

Example 10: getTypesDef

import scala.collection.JavaConversions; // import the package/class this method depends on
public static TypesDef getTypesDef(ImmutableList<EnumTypeDefinition> enums,
                                   ImmutableList<StructTypeDefinition> structs, ImmutableList<HierarchicalTypeDefinition<TraitType>> traits,
                                   ImmutableList<HierarchicalTypeDefinition<ClassType>> classes) {
    return new TypesDef(JavaConversions.asScalaBuffer(enums), JavaConversions.asScalaBuffer(structs),
            JavaConversions.asScalaBuffer(traits), JavaConversions.asScalaBuffer(classes));
}
 
Developer: apache | Project: incubator-atlas | Source: TypesUtil.java (7 lines)

Example 11: testKafkaTransport

import scala.collection.JavaConversions; // import the package/class this method depends on
@Test
public void testKafkaTransport() throws Exception {

  String topic = "zipkin";
  // Kafka setup
  EmbeddedZookeeper zkServer = new EmbeddedZookeeper(TestZKUtils.zookeeperConnect());
  ZkClient zkClient = new ZkClient(zkServer.connectString(), 30000, 30000, ZKStringSerializer$.MODULE$);
  Properties props = TestUtils.createBrokerConfig(0, TestUtils.choosePort(), false);
  KafkaConfig config = new KafkaConfig(props);
  KafkaServer kafkaServer = TestUtils.createServer(config, new MockTime());

  Buffer<KafkaServer> servers = JavaConversions.asScalaBuffer(Collections.singletonList(kafkaServer));
  TestUtils.createTopic(zkClient, topic, 1, 1, servers, new Properties());
  zkClient.close();
  TestUtils.waitUntilMetadataIsPropagated(servers, topic, 0, 5000);

  // HTrace
  HTraceConfiguration hTraceConfiguration = HTraceConfiguration.fromKeyValuePairs(
      "sampler.classes", "AlwaysSampler",
      "span.receiver.classes", ZipkinSpanReceiver.class.getName(),
      "zipkin.kafka.metadata.broker.list", config.advertisedHostName() + ":" + config.advertisedPort(),
      "zipkin.kafka.topic", topic,
      ZipkinSpanReceiver.TRANSPORT_CLASS_KEY, KafkaTransport.class.getName()
  );

  final Tracer tracer = new Tracer.Builder("test-tracer")
      .tracerPool(new TracerPool("test-tracer-pool"))
      .conf(hTraceConfiguration)
      .build();

  String scopeName = "test-kafka-transport-scope";
  TraceScope traceScope = tracer.newScope(scopeName);
  traceScope.close();
  tracer.close();

  // Kafka consumer
  Properties consumerProps = new Properties();
  consumerProps.put("zookeeper.connect", props.getProperty("zookeeper.connect"));
  consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "testing.group");
  consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "smallest");
  ConsumerConnector connector =
      kafka.consumer.Consumer.createJavaConsumerConnector(new kafka.consumer.ConsumerConfig(consumerProps));
  Map<String, Integer> topicCountMap = new HashMap<>();
  topicCountMap.put(topic, 1);
  Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topicCountMap);
  ConsumerIterator<byte[], byte[]> it = streams.get(topic).get(0).iterator();

  // Test
  Assert.assertTrue("We should have one message in Kafka", it.hasNext());
  Span span = new Span();
  new TDeserializer(new TBinaryProtocol.Factory()).deserialize(span, it.next().message());
  Assert.assertEquals("The span name should match our scope description", span.getName(), scopeName);

  kafkaServer.shutdown();

}
 
Developer: apache | Project: incubator-htrace | Source: ITZipkinReceiver.java (57 lines)

Example 12: pushToStream

import scala.collection.JavaConversions; // import the package/class this method depends on
public void pushToStream(String message) {

    int streamNo = (int) this.nextStream.incrementAndGet() % this.queues.size();

    AtomicLong offset = this.offsets.get(streamNo);
    BlockingQueue<FetchedDataChunk> queue = this.queues.get(streamNo);

    AtomicLong thisOffset = new AtomicLong(offset.incrementAndGet());

    List<Message> seq = Lists.newArrayList();
    seq.add(new Message(message.getBytes(Charsets.UTF_8)));
    ByteBufferMessageSet messageSet = new ByteBufferMessageSet(NoCompressionCodec$.MODULE$, offset, JavaConversions.asScalaBuffer(seq));

    FetchedDataChunk chunk = new FetchedDataChunk(messageSet,
        new PartitionTopicInfo("topic", streamNo, queue, thisOffset, thisOffset, new AtomicInteger(1), "clientId"),
        thisOffset.get());

    queue.add(chunk);
  }
 
Developer: apache | Project: incubator-gobblin | Source: MockKafkaStream.java (20 lines)

Example 13: toScalaSeq

import scala.collection.JavaConversions; // import the package/class this method depends on
public static <T> Seq<T> toScalaSeq(List<T> list) {
    return JavaConversions.asScalaBuffer(list);
}
 
Developer: sigmoidanalytics | Project: spork-streaming | Source: SparkUtil.java (4 lines)

Example 14: sendPreparedStatement

import scala.collection.JavaConversions; // import the package/class this method depends on
public ComposableFuture<QueryResult> sendPreparedStatement(final String query, final List<Object> values) {
  final Buffer<Object> scalaValues = JavaConversions.asScalaBuffer(values);
  return ScalaFutureHelper.from(() -> conn.sendPreparedStatement(query, scalaValues));
}
 
Developer: outbrain | Project: ob1k | Source: MySqlAsyncConnection.java (5 lines)

Example 15: sendPreparedStatement

import scala.collection.JavaConversions; // import the package/class this method depends on
@Override
public ComposableFuture<QueryResult> sendPreparedStatement(final String query, final List<Object> values) {
  final Buffer<Object> scalaValues = JavaConversions.asScalaBuffer(values);
  return ScalaFutureHelper.from(() -> _pool.sendPreparedStatement(query, scalaValues));
}
 
Developer: outbrain | Project: ob1k | Source: MySqlConnectionPool.java (6 lines)


Note: the scala.collection.JavaConversions.asScalaBuffer examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers; copyright remains with the original authors, and use or redistribution is subject to each project's license. Do not reproduce without permission.