

Java JavaConversions Class Code Examples

This article collects and summarizes typical usage examples of the scala.collection.JavaConversions class in Java. If you are wondering what the JavaConversions class does, how to use it, or where to find usage examples, the curated code examples below may help.


The JavaConversions class belongs to the scala.collection package. A total of 15 code examples of the class are shown below, sorted by popularity by default.
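
Before diving into the examples, here is a minimal, self-contained sketch of the two directions JavaConversions covers: viewing a Java collection as a Scala one and vice versa. The class name and values below are purely illustrative; note also that scala.collection.JavaConversions has been deprecated since Scala 2.12 in favor of scala.collection.JavaConverters.

import java.util.Arrays;
import java.util.List;
import scala.collection.JavaConversions;
import scala.collection.Seq;

public class JavaConversionsDemo {
    public static void main(String[] args) {
        // Java -> Scala: view a java.util.List as a Scala Buffer (a Buffer is a Seq)
        List<String> javaList = Arrays.asList("a", "b", "c");
        Seq<String> scalaSeq = JavaConversions.asScalaBuffer(javaList);

        // Scala -> Java: view the Scala Seq as a java.util.List again
        List<String> backToJava = JavaConversions.seqAsJavaList(scalaSeq);
        System.out.println(backToJava); // prints [a, b, c]
    }
}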

Example 1: getBrokerMetadataByAddress

import scala.collection.JavaConversions; // import the required package/class
/**
 * Get Kafka broker metadata for a specific address
 *
 * @param kafkaBrokers    list of registered Kafka brokers
 * @param kfBrokerAddress address to look for
 * @return Kafka broker metadata
 */
private KFBrokerMetadata getBrokerMetadataByAddress(final List<Broker> kafkaBrokers,
                                                    final InetSocketAddress kfBrokerAddress) {

    KFBrokerMetadata brokerMetadata = new KFBrokerMetadata();

    kafkaBrokers.forEach(broker -> {
        JavaConversions.mapAsJavaMap(broker.endPoints())
                .forEach((protocol, endpoint) -> {
                    if (endpoint.host().equals(kfBrokerAddress.getHostName())
                            && endpoint.port() == kfBrokerAddress.getPort()) {
                        brokerMetadata.setBrokerId(broker.id());
                        brokerMetadata.setHost(endpoint.host());
                        brokerMetadata.setPort(endpoint.port());
                        brokerMetadata.setConnectionString(endpoint.connectionString());
                        brokerMetadata.setSecurityProtocol(protocol.name);
                    }
                });
    });
    return brokerMetadata;
}
 
Developer: mcafee, Project: management-sdk-for-kafka, Lines: 28, Source: KFBrokerWatcher.java

Example 2: processTopic

import scala.collection.JavaConversions; // import the required package/class
public List<OffsetInfo> processTopic(String group, String topic) throws Exception {
	List<String> partitionIds = null;
	try {
		partitionIds = JavaConversions.seqAsJavaList(ZKUtils.getZKUtilsFromKafka()
				.getChildren(ZkUtils.BrokerTopicsPath() + "/" + topic + "/partitions"));
	} catch (Exception e) {
		if (e instanceof NoNodeException) {
			LOG.warn("Is topic >" + topic + "< exists!", e);
			return null;
		}
	}
	List<OffsetInfo> offsetInfos = new ArrayList<OffsetInfo>();
	OffsetInfo offsetInfo = null;
	if (partitionIds == null) {
		// TODO: the topic exists under the consumer node but not under the topics node?!
		return null;
	}

	for (String partitionId : partitionIds) {
		offsetInfo = processPartition(group, topic, partitionId);
		if (offsetInfo != null) {
			offsetInfos.add(offsetInfo);
		}
	}
	return offsetInfos;
}
 
Developer: chickling, Project: kmanager, Lines: 27, Source: OffsetGetter.java

Example 3: startAdminHttpService

import scala.collection.JavaConversions; // import the required package/class
public void startAdminHttpService() {
  try {
    Properties properties = new Properties();
    properties.load(this.getClass().getResource("build.properties").openStream());
    LOG.info("build.properties build_revision: {}",
        properties.getProperty("build_revision", "unknown"));
  } catch (Throwable t) {
    LOG.warn("Failed to load properties from build.properties", t);
  }
  Duration[] defaultLatchIntervals = {Duration.apply(1, TimeUnit.MINUTES)};
  Iterator<Duration> durationIterator = Arrays.asList(defaultLatchIntervals).iterator();
  @SuppressWarnings("deprecation")
  AdminServiceFactory adminServiceFactory = new AdminServiceFactory(
      this.port,
      20,
      List$.MODULE$.empty(),
      Option.empty(),
      List$.MODULE$.empty(),
      Map$.MODULE$.empty(),
      JavaConversions.asScalaIterator(durationIterator).toList());
  RuntimeEnvironment runtimeEnvironment = new RuntimeEnvironment(this);
  AdminHttpService service = adminServiceFactory.apply(runtimeEnvironment);
  for (Map.Entry<String, CustomHttpHandler> entry : this.customHttpHandlerMap.entrySet()) {
    service.httpServer().createContext(entry.getKey(), entry.getValue());
  }
}
 
Developer: pinterest, Project: doctorkafka, Lines: 27, Source: OstrichAdminService.java
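
A side note on the List$.MODULE$.empty() and Map$.MODULE$.empty() calls above: a Scala companion object compiles to a class whose singleton instance is exposed to Java as a static MODULE$ field, so List$.MODULE$ is simply the Java spelling of Scala's List companion object. The same pattern appears as Http$.MODULE$, ClassTag$.MODULE$, and NoRoutee$.MODULE$ in later examples.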

Example 4: getOffsets

import scala.collection.JavaConversions; // import the required package/class
/**
 * @param zkServers Zookeeper server string: host1:port1[,host2:port2,...]
 * @param groupID consumer group to get offsets for
 * @param topic topic to get offsets for
 * @return mapping of (topic and) partition to offset
 */
public static Map<Pair<String,Integer>,Long> getOffsets(String zkServers,
                                                        String groupID,
                                                        String topic) {
  ZKGroupTopicDirs topicDirs = new ZKGroupTopicDirs(groupID, topic);
  Map<Pair<String,Integer>,Long> offsets = new HashMap<>();
  ZkUtils zkUtils = ZkUtils.apply(zkServers, ZK_TIMEOUT_MSEC, ZK_TIMEOUT_MSEC, false);
  try {
    List<?> partitions = JavaConversions.seqAsJavaList(
        zkUtils.getPartitionsForTopics(
          JavaConversions.asScalaBuffer(Collections.singletonList(topic))).head()._2());
    partitions.forEach(partition -> {
      String partitionOffsetPath = topicDirs.consumerOffsetDir() + "/" + partition;
      Option<String> maybeOffset = zkUtils.readDataMaybeNull(partitionOffsetPath)._1();
      Long offset = maybeOffset.isDefined() ? Long.valueOf(maybeOffset.get()) : null;
      offsets.put(new Pair<>(topic, Integer.valueOf(partition.toString())), offset);
    });
  } finally {
    zkUtils.close();
  }
  return offsets;
}
 
Developer: oncewang, Project: oryx2, Lines: 28, Source: KafkaUtils.java
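
For orientation, a hypothetical call to the helper above could look like the following (the ZooKeeper connection string, consumer group, and topic are made-up values):

Map<Pair<String,Integer>,Long> offsets =
    KafkaUtils.getOffsets("zk1:2181,zk2:2181", "my-consumer-group", "events");
offsets.forEach((topicAndPartition, offset) ->
    System.out.println(topicAndPartition + " -> " + offset));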

Example 5: write

import scala.collection.JavaConversions; // import the required package/class
@Override
public void write(final Kryo kryo, final Output output, final WrappedArray<T> iterable) {
    output.writeVarInt(iterable.size(), true);
    JavaConversions.asJavaCollection(iterable).forEach(t -> {
        kryo.writeClassAndObject(output, t);
        output.flush();
    });
}
 
Developer: PKUSilvester, Project: LiteGraph, Lines: 9, Source: WrappedArraySerializer.java
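
The project snippet shows only the write side. A minimal sketch of a matching read side, assuming the same wire layout (a var-int length followed by the elements) and Scala's WrappedArray.make factory, might look like this:

@Override
@SuppressWarnings("unchecked")
public WrappedArray<T> read(final Kryo kryo, final Input input, final Class<WrappedArray<T>> clazz) {
    final int size = input.readVarInt(true);          // the length written first by write()
    final Object[] elements = new Object[size];
    for (int i = 0; i < size; i++) {
        elements[i] = kryo.readClassAndObject(input); // one element per slot
    }
    return (WrappedArray<T>) WrappedArray.make(elements);
}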

Example 6: createRelation

import scala.collection.JavaConversions; // import the required package/class
@Override
public SparkRDF4JSparqlRelation createRelation(SQLContext sqlContext,
		scala.collection.immutable.Map<String, String> scalaParameters, StructType schema) {
	Map<String, String> parameters = JavaConversions.asJavaMap(scalaParameters);
	String service = Optional.ofNullable(parameters.get("service")).orElseThrow(() -> new RuntimeException(
			"Spark RDF4J Sparql requires a SPARQL 'service' to be specified in the parameters"));
	String query = Optional.ofNullable(parameters.get("query")).orElseThrow(() -> new RuntimeException(
			"Spark RDF4J Sparql requires a 'query' to be specified in the parameters"));

	try {
		ParsedQuery parsedQuery = QueryParserUtil.parseQuery(QueryLanguage.SPARQL, query, null);
		if(!(parsedQuery instanceof ParsedTupleQuery)) {
			throw new RuntimeException("Spark RDF4J can only be used with Tuple (Select) queries right now.");
		}
		return new SparkRDF4JSparqlRelation(service, parsedQuery, schema, sqlContext);
	} catch (MalformedQueryException e) {
		throw new RuntimeException("Query was not valid SPARQL", e);
	}

}
 
Developer: ansell, Project: spark-rdf4j, Lines: 21, Source: SparkRDF4JDefaultSource.java

Example 7: derive

import scala.collection.JavaConversions; // import the required package/class
@Override
public Dataset<Row> derive(Map<String, Dataset<Row>> dependencies) throws Exception {

  Dataset<Row> compare, with;

  if (!dependencies.containsKey(compareDataset)) {
    throw new RuntimeException("Designated comparison target dataset is not a dependency: " + compareDataset);
  } else {
    compare = dependencies.get(compareDataset);
  }

  if (!dependencies.containsKey(withDataset)) {
    throw new RuntimeException("Designated comparison reference dataset is not a dependency: " + withDataset);
  } else {
    with = dependencies.get(withDataset);
  }

  return compare.join(with, JavaConversions.asScalaBuffer(fields).toList(), "leftanti");

}
 
Developer: cloudera-labs, Project: envelope, Lines: 21, Source: ExcludeDeriver.java
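
For reference, the "leftanti" join type used above keeps only the rows of the left dataset (compare) that have no matching row in the right dataset (with) on the given fields, which is exactly the exclusion semantics this deriver is meant to implement.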

Example 8: main

import scala.collection.JavaConversions; // import the required package/class
public static void main(String[] args) {

		if(args.length > 0){
			NUM_KIDS = Integer.parseInt(args[0]);
		}
		if(args.length > 1){
			DELAY = Long.parseLong(args[1]);
		}
		if(args.length > 2){
			DB_HOST = args[2];
		}
		
		ActorRef listener = system.actorOf(Props.create(HttpActor.class), "httpActor"); 
		
		InetSocketAddress endpoint = new InetSocketAddress(3000);
		int backlog = 100;
		List<Inet.SocketOption> options = JavaConversions.asScalaBuffer(new ArrayList<Inet.SocketOption>()).toList();
		Option<ServerSettings> settings = scala.Option.empty();
		ServerSSLEngineProvider sslEngineProvider = null;
		Bind bind = new Http.Bind(listener, endpoint, backlog, options, settings, sslEngineProvider);
		IO.apply(spray.can.Http$.MODULE$, system).tell(bind, ActorRef.noSender());
		
		system.scheduler().schedule(new FiniteDuration(5, TimeUnit.SECONDS), new FiniteDuration(5, TimeUnit.SECONDS), ()->{
			System.out.println(new Date() + " - numSales=" + numSales.get());
		}, system.dispatcher());
	}
 
Developer: maxant, Project: akkaTrader, Lines: 27, Source: Main.java

Example 9: select

import scala.collection.JavaConversions; // import the required package/class
@Override
public Routee select(Object message, IndexedSeq<Routee> routees) {

	//find which product ID is relevant here
	String productId = null;
	if(message instanceof PurchaseOrder){
		productId = ((PurchaseOrder) message).getProductId();
	}else if(message instanceof SalesOrder){
		productId = ((SalesOrder) message).getProductId();
	}
	ActorRef actorHandlingProduct = kids.get(productId);

	//now find the routee for the relevant actor
	for(Routee r : JavaConversions.asJavaIterable(routees)){
		ActorRef a = ((ActorRefRoutee) r).ref(); //cast ok, since in this program all routees are by definition ActorRefRoutees
		if(a.equals(actorHandlingProduct)){
			return r;
		}
	}
	
	return akka.routing.NoRoutee$.MODULE$; //none found, return NoRoutee
}
 
Developer: maxant, Project: akkaTrader, Lines: 23, Source: Main.java

Example 10: RFileReaderRDD

import scala.collection.JavaConversions; // import the required package/class
public RFileReaderRDD(final SparkContext sparkContext,
                      final String instanceName,
                      final String zookeepers,
                      final String user,
                      final String password,
                      final String tableName,
                      final Set<String> auths,
                      final byte[] serialisedConfiguration) {
    super(sparkContext, JavaConversions.asScalaBuffer(new ArrayList<>()),
            ClassTag$.MODULE$.apply(Map.Entry.class));
    this.instanceName = instanceName;
    this.zookeepers = zookeepers;
    this.user = user;
    this.password = password;
    this.tableName = tableName;
    this.auths = auths;
    this.serialisedConfiguration = serialisedConfiguration;
}
 
Developer: gchq, Project: Gaffer, Lines: 19, Source: RFileReaderRDD.java

Example 11: move

import scala.collection.JavaConversions; // import the required package/class
private void move(EntryTree tree,
		TreeNode[] srcs,
		TreeNode tgt,
		boolean movingLeaf,
		Set<TraitThypeLike> desiredTraits) {

	if(movingLeaf && !desiredTraits.isEmpty()) {
		EntryData tgtEd = EntryData.of(tgt);
        tgtEd.insertTraits(JavaConversions.iterableAsScalaIterable(
                desiredTraits));
	}

	for(TreeNode src: srcs) {
		tree.move(src, tgt);
        EntryData srcEd = EntryData.of(src);
		if(!movingLeaf && !desiredTraits.isEmpty()) {
            srcEd.insertTraits(JavaConversions.iterableAsScalaIterable(
                    desiredTraits));
		}
		srcEd.markDirty();
	}
}
 
Developer: insweat, Project: hssd, Lines: 23, Source: HSSDEditorMoveEntry.java

Example 12: in

import scala.collection.JavaConversions; // import the required package/class
/**
 * Pass a Scala Seq of inputs to the script. The inputs are either two-value
 * or three-value tuples, where the first value is the variable name, the
 * second value is the variable value, and the third optional value is the
 * metadata.
 *
 * @param inputs
 *            Scala Seq of inputs (parameters ($) and variables).
 * @return {@code this} Script object to allow chaining of methods
 */
public Script in(scala.collection.Seq<Object> inputs) {
	List<Object> list = JavaConversions.seqAsJavaList(inputs);
	for (Object obj : list) {
		if (obj instanceof Tuple3) {
			@SuppressWarnings("unchecked")
			Tuple3<String, Object, MatrixMetadata> t3 = (Tuple3<String, Object, MatrixMetadata>) obj;
			in(t3._1(), t3._2(), t3._3());
		} else if (obj instanceof Tuple2) {
			@SuppressWarnings("unchecked")
			Tuple2<String, Object> t2 = (Tuple2<String, Object>) obj;
			in(t2._1(), t2._2());
		} else {
			throw new MLContextException("Only Tuples of 2 or 3 values are permitted");
		}
	}
	return this;
}
 
Developer: apache, Project: systemml, Lines: 28, Source: Script.java
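
A usage sketch from the Java side, building the Scala Seq with JavaConversions itself (the script instance and the bindings are invented for illustration):

List<Object> inputs = Arrays.asList(
        new Tuple2<>("$X", 10),          // a parameter binding
        new Tuple2<>("msg", "hello"));   // a variable binding
script.in(JavaConversions.asScalaBuffer(inputs).toSeq());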

Example 13: matrixObjectToRDDStringIJV

import scala.collection.JavaConversions; // import the required package/class
/**
 * Convert a {@code MatrixObject} to a {@code RDD<String>} in IJV format.
 *
 * @param matrixObject
 *            the {@code MatrixObject}
 * @return the {@code MatrixObject} converted to a {@code RDD<String>}
 */
public static RDD<String> matrixObjectToRDDStringIJV(MatrixObject matrixObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = matrixObjectToListStringIJV(matrixObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 25, Source: MLContextConversionUtil.java

Example 14: frameObjectToRDDStringIJV

import scala.collection.JavaConversions; // import the required package/class
/**
 * Convert a {@code FrameObject} to a {@code RDD<String>} in IJV format.
 *
 * @param frameObject
 *            the {@code FrameObject}
 * @return the {@code FrameObject} converted to a {@code RDD<String>}
 */
public static RDD<String> frameObjectToRDDStringIJV(FrameObject frameObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = frameObjectToListStringIJV(frameObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 25, Source: MLContextConversionUtil.java

Example 15: matrixObjectToRDDStringCSV

import scala.collection.JavaConversions; // import the required package/class
/**
 * Convert a {@code MatrixObject} to a {@code RDD<String>} in CSV format.
 *
 * @param matrixObject
 *            the {@code MatrixObject}
 * @return the {@code MatrixObject} converted to a {@code RDD<String>}
 */
public static RDD<String> matrixObjectToRDDStringCSV(MatrixObject matrixObject) {

	// NOTE: The following works when called from Java but does not
	// currently work when called from Spark Shell (when you call
	// collect() on the RDD<String>).
	//
	// JavaRDD<String> javaRDD = jsc.parallelize(list);
	// RDD<String> rdd = JavaRDD.toRDD(javaRDD);
	//
	// Therefore, we call parallelize() on the SparkContext rather than
	// the JavaSparkContext to produce the RDD<String> for Scala.

	List<String> list = matrixObjectToListStringCSV(matrixObject);

	ClassTag<String> tag = scala.reflect.ClassTag$.MODULE$.apply(String.class);
	return sc().parallelize(JavaConversions.asScalaBuffer(list), sc().defaultParallelism(), tag);
}
 
Developer: apache, Project: systemml, Lines: 25, Source: MLContextConversionUtil.java


Note: The scala.collection.JavaConversions class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by many developers, and the copyright of the source code belongs to the original authors. Please consult the corresponding project's License before redistributing or using the code, and do not reproduce this article without permission.