

Java JavaConversions Class Code Examples

This article collects typical usage examples of the Java class scala.collection.JavaConversions. If you are wondering what JavaConversions does, how to use it, or want to see it in real code, the curated class examples here should help.


The JavaConversions class belongs to the scala.collection package. Fifteen code examples of the class are shown below, ordered by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
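Before diving into the project code below, here is a minimal self-contained sketch of the two directions the class covers — wrapping a Java collection as a Scala one, and viewing a Scala collection as a Java one (class and variable names are illustrative; assumes Scala 2.11/2.12 on the classpath):

import java.util.Arrays;
import java.util.List;
import scala.collection.JavaConversions;
import scala.collection.Seq;

public class JavaConversionsDemo {
    public static void main(String[] args) {
        // Java -> Scala: wrap a java.util.List as a Scala Buffer, then freeze it into an immutable List
        List<String> javaList = Arrays.asList("a", "b", "c");
        Seq<String> scalaSeq = JavaConversions.asScalaBuffer(javaList).toList();

        // Scala -> Java: view the Scala Seq as a java.util.List again
        List<String> roundTrip = JavaConversions.seqAsJavaList(scalaSeq);
        System.out.println(roundTrip); // prints [a, b, c]
    }
}

Note that JavaConversions has been deprecated since Scala 2.12 (and removed in 2.13) in favor of the explicit scala.collection.JavaConverters API; the examples below reflect the older but still widely seen style.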

Example 1: getBrokerMetadataByAddress

import scala.collection.JavaConversions; // import the required package/class
/**
 * Get Kafka broker metadata for a specific address
 *
 * @param kafkaBrokers    list of registered Kafka brokers
 * @param kfBrokerAddress address to look for
 * @return Kafka broker metadata
 */
private KFBrokerMetadata getBrokerMetadataByAddress(final List<Broker> kafkaBrokers,
                                                    final InetSocketAddress kfBrokerAddress) {

    KFBrokerMetadata brokerMetadata = new KFBrokerMetadata();

    kafkaBrokers.forEach(broker -> {
        JavaConversions.mapAsJavaMap(broker.endPoints())
                .forEach((protocol, endpoint) -> {
                    if (endpoint.host().equals(kfBrokerAddress.getHostName())
                            && endpoint.port() == kfBrokerAddress.getPort()) {
                        brokerMetadata.setBrokerId(broker.id());
                        brokerMetadata.setHost(endpoint.host());
                        brokerMetadata.setPort(endpoint.port());
                        brokerMetadata.setConnectionString(endpoint.connectionString());
                        brokerMetadata.setSecurityProtocol(protocol.name);
                    }
                });
    });
    return brokerMetadata;
}
 
Developer ID: mcafee, Project: management-sdk-for-kafka, Lines of code: 28, Source file: KFBrokerWatcher.java

Example 2: processTopic

import scala.collection.JavaConversions; // import the required package/class
public List<OffsetInfo> processTopic(String group, String topic) throws Exception {
	List<String> partitionIds = null;
	try {
		partitionIds = JavaConversions.seqAsJavaList(ZKUtils.getZKUtilsFromKafka()
				.getChildren(ZkUtils.BrokerTopicsPath() + "/" + topic + "/partitions"));
	} catch (Exception e) {
		if (e instanceof NoNodeException) {
			LOG.warn("Topic >" + topic + "< does not exist!", e);
			return null;
		}
		// rethrow anything other than a missing ZooKeeper node instead of swallowing it
		throw e;
	}
	List<OffsetInfo> offsetInfos = new ArrayList<OffsetInfo>();
	OffsetInfo offsetInfo = null;
	if (partitionIds == null) {
		// TODO: topic exists under the consumer node but not under the topics node?!
		return null;
	}

	for (String partitionId : partitionIds) {
		offsetInfo = processPartition(group, topic, partitionId);
		if (offsetInfo != null) {
			offsetInfos.add(offsetInfo);
		}
	}
	return offsetInfos;
}
 
Developer ID: chickling, Project: kmanager, Lines of code: 27, Source file: OffsetGetter.java

Example 3: startAdminHttpService

import scala.collection.JavaConversions; // import the required package/class
public void startAdminHttpService() {
  try {
    Properties properties = new Properties();
    properties.load(this.getClass().getResource("build.properties").openStream());
    LOG.info("build.properties build_revision: {}",
        properties.getProperty("build_revision", "unknown"));
  } catch (Throwable t) {
    LOG.warn("Failed to load properties from build.properties", t);
  }
  Duration[] defaultLatchIntervals = {Duration.apply(1, TimeUnit.MINUTES)};
  Iterator<Duration> durationIterator = Arrays.asList(defaultLatchIntervals).iterator();
  @SuppressWarnings("deprecation")
  AdminServiceFactory adminServiceFactory = new AdminServiceFactory(
      this.port,
      20,
      List$.MODULE$.empty(),
      Option.empty(),
      List$.MODULE$.empty(),
      Map$.MODULE$.empty(),
      JavaConversions.asScalaIterator(durationIterator).toList());
  RuntimeEnvironment runtimeEnvironment = new RuntimeEnvironment(this);
  AdminHttpService service = adminServiceFactory.apply(runtimeEnvironment);
  for (Map.Entry<String, CustomHttpHandler> entry : this.customHttpHandlerMap.entrySet()) {
    service.httpServer().createContext(entry.getKey(), entry.getValue());
  }
}
 
Developer ID: pinterest, Project: doctorkafka, Lines of code: 27, Source file: OstrichAdminService.java
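Much of the boilerplate in the snippet above is conjuring empty Scala collections and Options from Java. A standalone sketch of just those idioms (class name is illustrative; assumes Scala 2.11/2.12 on the classpath):

import java.util.Arrays;
import scala.Option;
import scala.collection.JavaConversions;
import scala.collection.immutable.List$;
import scala.collection.immutable.Map$;

public class ScalaEmptiesSketch {
    public static void main(String[] args) {
        // empty immutable Scala collections via their companion objects
        scala.collection.immutable.List<String> emptyList = List$.MODULE$.empty();
        scala.collection.immutable.Map<String, String> emptyMap = Map$.MODULE$.empty();
        Option<String> none = Option.empty();

        // a Java iterator converted and materialized as a Scala List,
        // as done for the latch intervals above
        scala.collection.immutable.List<Integer> intervals =
                JavaConversions.asScalaIterator(Arrays.asList(1, 2, 3).iterator()).toList();

        System.out.println(emptyList + " " + emptyMap + " " + none + " " + intervals);
    }
}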

Example 4: getOffsets

import scala.collection.JavaConversions; // import the required package/class
/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param groupID consumer group to get offsets for
 * @param topic topic to get offsets for
 * @return mapping of (topic and) partition to offset
 */
public static Map<Pair<String,Integer>,Long> getOffsets(String zkServers,
                                                        String groupID,
                                                        String topic) {
  ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupID, topic);
  Map<Pair<String,Integer>,Long> offsets = new HashMap<>();
  ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
  try {
    List<?> partitions = JavaConversions.seqAsJavaList(
        zkUtils.getPartitionsForTopics(
          JavaConversions.asScalaBuffer(Collections.singletonList(topic))).head()._2());
    partitions.forEach(partition -> {
      String partitionOffsetPath = topicDirs.consumerOffsetDir() + "/" + partition;
      Option<String> maybeOffset = zkUtils.readDataMaybeNull(partitionOffsetPath)._1();
      Long offset = maybeOffset.isDefined() ? Long.valueOf(maybeOffset.get()) : null;
      offsets.put(new Pair<>(topic, Integer.valueOf(partition.toString())), offset);
    });
  } finally {
    zkUtils.close();
  }
  return offsets;
}
 
Developer ID: oncewang, Project: oryx2, Lines of code: 28, Source file: KafkaUtils.java
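A hypothetical call site for the helper above — the class name KafkaUtils matches the source file, but the connection string, group, and topic are illustrative, and the imports for Pair follow the enclosing project:

// sketch only: assumes this fragment lives alongside the getOffsets method above
Map<Pair<String,Integer>,Long> offsets =
    KafkaUtils.getOffsets("zk1:2181,zk2:2181,zk3:2181", "my-consumer-group", "events");
offsets.forEach((topicPartition, offset) ->
    System.out.println(topicPartition + " -> "
        + (offset == null ? "no offset committed" : offset)));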

Example 5: write

import scala.collection.JavaConversions; // import the required package/class
@Override
public void write(final Kryo kryo, final Output output, final WrappedArray<T> iterable) {
    output.writeVarInt(iterable.size(), true);
    JavaConversions.asJavaCollection(iterable).forEach(t -> {
        kryo.writeClassAndObject(output, t);
        output.flush();
    });
}
 
Developer ID: PKUSilvester, Project: LiteGraph, Lines of code: 9, Source file: WrappedArraySerializer.java

Example 6: createRelation

import scala.collection.JavaConversions; // import the required package/class
@Override
public SparkRDF4JSparqlRelation createRelation(SQLContext sqlContext,
		scala.collection.immutable.Map<String, String> scalaParameters, StructType schema) {
	Map<String, String> parameters = JavaConversions.asJavaMap(scalaParameters);
	String service = Optional.ofNullable(parameters.get("service")).orElseThrow(() -> new RuntimeException(
			"Spark RDF4J Sparql requires a SPARQL 'service' to be specified in the parameters"));
	String query = Optional.ofNullable(parameters.get("query")).orElseThrow(() -> new RuntimeException(
			"Spark RDF4J Sparql requires a 'query' to be specified in the parameters"));

	try {
		ParsedQuery parsedQuery = QueryParserUtil.parseQuery(QueryLanguage.SPARQL, query, null);
		if(!(parsedQuery instanceof ParsedTupleQuery)) {
			throw new RuntimeException("Spark RDF4J can only be used with Tuple (Select) queries right now.");
		}
		return new SparkRDF4JSparqlRelation(service, parsedQuery, schema, sqlContext);
	} catch (MalformedQueryException e) {
		throw new RuntimeException("Query was not valid SPARQL", e);
	}

}
 
Developer ID: ansell, Project: spark-rdf4j, Lines of code: 21, Source file: SparkRDF4JDefaultSource.java

Example 7: derive

import scala.collection.JavaConversions; // import the required package/class
@Override
public Dataset<Row> derive(Map<String, Dataset<Row>> dependencies) throws Exception {

  Dataset<Row> compare, with;

  if (!dependencies.containsKey(compareDataset)) {
    throw new RuntimeException("Designated comparison target dataset is not a dependency: " + compareDataset);
  } else {
    compare = dependencies.get(compareDataset);
  }

  if (!dependencies.containsKey(withDataset)) {
    throw new RuntimeException("Designated comparison reference dataset is not a dependency: " + withDataset);
  } else {
    with = dependencies.get(withDataset);
  }

  return compare.join(with, JavaConversions.asScalaBuffer(fields).toList(), "leftanti");

}
 
Developer ID: cloudera-labs, Project: envelope, Lines of code: 21, Source file: ExcludeDeriver.java
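The load-bearing conversion here is JavaConversions.asScalaBuffer(fields).toList(), which turns a Java List<String> of join columns into the scala.collection.Seq<String> that Dataset.join expects. A minimal standalone sketch of the same left-anti pattern (class and method names are illustrative):

import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import scala.collection.JavaConversions;

public class LeftAntiSketch {
    // keeps only the rows of 'compare' whose join-key values never appear in 'with'
    public static Dataset<Row> exclude(Dataset<Row> compare, Dataset<Row> with, List<String> fields) {
        return compare.join(with, JavaConversions.asScalaBuffer(fields).toList(), "leftanti");
    }
}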

Example 8: main

import scala.collection.JavaConversions; // import the required package/class
public static void main(String[] args) {

		if(args.length > 0){
			NUM_KIDS = Integer.parseInt(args[0]);
		}
		if(args.length > 1){
			DELAY = Long.parseLong(args[1]);
		}
		if(args.length > 2){
			DB_HOST = args[2];
		}
		
		ActorRef listener = system.actorOf(Props.create(HttpActor.class), "httpActor"); 
		
		InetSocketAddress endpoint = new InetSocketAddress(3000);
		int backlog = 100;
		List<Inet.SocketOption> options = JavaConversions.asScalaBuffer(new ArrayList<Inet.SocketOption>()).toList();
		Option<ServerSettings> settings = scala.Option.empty();
		ServerSSLEngineProvider sslEngineProvider = null;
		Bind bind = new Http.Bind(listener, endpoint, backlog, options, settings, sslEngineProvider);
		IO.apply(spray.can.Http$.MODULE$, system).tell(bind, ActorRef.noSender());
		
		system.scheduler().schedule(new FiniteDuration(5, TimeUnit.SECONDS), new FiniteDuration(5, TimeUnit.SECONDS), ()->{
			System.out.println(new Date() + " - numSales=" + numSales.get());
		}, system.dispatcher());
	}
 
Developer ID: maxant, Project: akkaTrader, Lines of code: 27, Source file: Main.java

Example 9: select

import scala.collection.JavaConversions; // import the required package/class
@Override
public Routee select(Object message, IndexedSeq<Routee> routees) {

	//find which product ID is relevant here
	String productId = null;
	if(message instanceof PurchaseOrder){
		productId = ((PurchaseOrder) message).getProductId();
	}else if(message instanceof SalesOrder){
		productId = ((SalesOrder) message).getProductId();
	}
	ActorRef actorHandlingProduct = kids.get(productId);

	// now go find the routee for the relevant actor
	for(Routee r : JavaConversions.asJavaIterable(routees)){
		ActorRef a = ((ActorRefRoutee) r).ref(); // cast is safe, since in this program all routees are by definition ActorRefRoutees
		if(a.equals(actorHandlingProduct)){
			return r;
		}
	}
	
	return akka.routing.NoRoutee$.MODULE$; //none found, return NoRoutee
}
 
Developer ID: maxant, Project: akkaTrader, Lines of code: 23, Source file: Main.java

Example 10: RFileReaderRDD

import scala.collection.JavaConversions; // import the required package/class
public RFileReaderRDD(final SparkContext sparkContext,
                      final String instanceName,
                      final String zookeepers,
                      final String user,
                      final String password,
                      final String tableName,
                      final Set<String> auths,
                      final byte[] serialisedConfiguration) {
    super(sparkContext, JavaConversions.asScalaBuffer(new ArrayList<>()),
            ClassTag$.MODULE$.apply(Map.Entry.class));
    this.instanceName = instanceName;
    this.zookeepers = zookeepers;
    this.user = user;
    this.password = password;
    this.tableName = tableName;
    this.auths = auths;
    this.serialisedConfiguration = serialisedConfiguration;
}
 
Developer ID: gchq, Project: Gaffer, Lines of code: 19, Source file: RFileReaderRDD.java

Example 11: move

import scala.collection.JavaConversions; // import the required package/class
private void move(EntryTree tree,
		TreeNode[] srcs,
		TreeNode tgt,
		boolean movingLeaf,
		Set<TraitThypeLike> desiredTraits) {

	if(movingLeaf && !desiredTraits.isEmpty()) {
		EntryData tgtEd = EntryData.of(tgt);
        tgtEd.insertTraits(JavaConversions.iterableAsScalaIterable(
                desiredTraits));
	}

	for(TreeNode src: srcs) {
		tree.move(src, tgt);
        EntryData srcEd = EntryData.of(src);
		if(!movingLeaf && !desiredTraits.isEmpty()) {
            srcEd.insertTraits(JavaConversions.iterableAsScalaIterable(
                    desiredTraits));
		}
		srcEd.markDirty();
	}
}
 
Developer ID: insweat, Project: hssd, Lines of code: 23, Source file: HSSDEditorMoveEntry.java

Example 12: in

import scala.collection.JavaConversions; // import the required package/class
/**
 * Pass a Scala Seq of inputs to the script. The inputs are either two-value
 * or three-value tuples, where the first value is the variable name, the
 * second value is the variable value, and the third optional value is the
 * metadata.
 *
 * @param inputs
 *            Scala Seq of inputs (parameters ($) and variables).
 * @return {@code this} Script object to allow chaining of methods
 */
public Script in(scala.collection.Seq<Object> inputs) {
	List<Object> list = JavaConversions.seqAsJavaList(inputs);
	for (Object obj : list) {
		if (obj instanceof Tuple3) {
			@SuppressWarnings("unchecked")
			Tuple3<String, Object, MatrixMetadata> t3 = (Tuple3<String, Object, MatrixMetadata>) obj;
			in(t3._1(), t3._2(), t3._3());
		} else if (obj instanceof Tuple2) {
			@SuppressWarnings("unchecked")
			Tuple2<String, Object> t2 = (Tuple2<String, Object>) obj;
			in(t2._1(), t2._2());
		} else {
			throw new MLContextException("Only Tuples of 2 or 3 values are permitted");
		}
	}
	return this;
}
 
Developer ID: apache, Project: systemml, Lines of code: 28, Source file: Script.java
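Going the other way — building the scala.collection.Seq<Object> that in(...) expects from plain Java — can look like the following sketch (variable names and values are illustrative):

import java.util.Arrays;
import java.util.List;
import scala.Tuple2;
import scala.collection.JavaConversions;
import scala.collection.Seq;

public class ScriptInputsSketch {
    public static Seq<Object> inputs() {
        List<Object> javaInputs = Arrays.<Object>asList(
                new Tuple2<>("$X", 5),          // a parameter
                new Tuple2<>("inMatrix", 42));  // a variable (value illustrative)
        // a mutable Buffer is itself a scala.collection.Seq, so no further conversion is needed
        return JavaConversions.asScalaBuffer(javaInputs);
    }
}

With that helper, the call is simply script.in(ScriptInputsSketch.inputs()).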

Example 13: matrixObjectToRDDStringIJV

import scala.collection.JavaConversions; // import the required package/class
/**
 * Convert a {@code MatrixObject} to a {@code RDD<String>} in IJV format.
 *
 * @param matrixObject
 *            the {@code MatrixObject}
 * @return the {@code MatrixObject} converted to a {@code RDD<String>}
 */
public static RDD<String> matrixObjectToRDDStringIJV(MatrixObject matrixObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = matrixObjectToListStringIJV(matrixObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer ID: apache, Project: systemml, Lines of code: 25, Source file: MLContextConversionUtil.java
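The final line above is the essential pattern: parallelize on the Scala SparkContext takes a scala.collection.Seq plus a ClassTag, neither of which Java supplies implicitly. A condensed standalone sketch (class and method names are illustrative):

import java.util.List;
import org.apache.spark.SparkContext;
import org.apache.spark.rdd.RDD;
import scala.collection.JavaConversions;
import scala.reflect.ClassTag;

public class ParallelizeSketch {
    public static RDD<String> toScalaRDD(SparkContext sc, List<String> lines) {
        // materialize the ClassTag that Scala would normally pass implicitly
        ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
        // asScalaBuffer wraps the Java list as a scala.collection.Seq view, without copying
        return sc.parallelize(JavaConversions.asScalaBuffer(lines), sc.defaultParallelism(), tag);
    }
}

Examples 14 and 15 below follow the identical pattern for FrameObject and CSV output respectively.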

Example 14: frameObjectToRDDStringIJV

import scala.collection.JavaConversions; // import the required package/class
/**
 * Convert a {@code FrameObject} to a {@code RDD<String>} in IJV format.
 *
 * @param frameObject
 *            the {@code FrameObject}
 * @return the {@code FrameObject} converted to a {@code RDD<String>}
 */
public static RDD<String> frameObjectToRDDStringIJV(FrameObject frameObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = frameObjectToListStringIJV(frameObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer ID: apache, Project: systemml, Lines of code: 25, Source file: MLContextConversionUtil.java

Example 15: matrixObjectToRDDStringCSV

import scala.collection.JavaConversions; // import the required package/class
/**
 * Convert a {@code MatrixObject} to a {@code RDD<String>} in CSV format.
 *
 * @param matrixObject
 *            the {@code MatrixObject}
 * @return the {@code MatrixObject} converted to a {@code RDD<String>}
 */
public static RDD<String> matrixObjectToRDDStringCSV(MatrixObject matrixObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = matrixObjectToListStringCSV(matrixObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer ID: apache, Project: systemml, Lines of code: 25, Source file: MLContextConversionUtil.java


Note: The scala.collection.JavaConversions class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors, and distribution and use should follow the corresponding project's License. Do not reproduce without permission.