

Java JavaStreamingContext.receiverStream Method Code Examples

This article collects typical usage examples of the Java method org.apache.spark.streaming.api.java.JavaStreamingContext.receiverStream. If you are wondering what JavaStreamingContext.receiverStream does, or how to use it in practice, the curated examples below should help. You can also explore further usage examples of org.apache.spark.streaming.api.java.JavaStreamingContext.


Six code examples of JavaStreamingContext.receiverStream are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
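Before the examples: receiverStream(receiver) takes a user-defined subclass of org.apache.spark.streaming.receiver.Receiver and returns a JavaReceiverInputDStream fed by it. The sketch below is a stdlib-only analogue of that Receiver contract, so it can run without Spark on the classpath; the class and method names mirror Spark's callbacks but are illustrative, not Spark's actual API. The shape is what matters: onStart() must return quickly and spawn its own thread, which pushes each incoming record to the framework via store(); onStop() shuts that thread down.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Stdlib-only analogue of Spark's Receiver contract (illustrative names):
// onStart() spawns a worker thread that pushes records via store();
// onStop() signals the worker and waits for it to finish.
class MiniReceiver {
    private static final String SENTINEL = "<stop>";   // shutdown marker for the demo
    private final BlockingQueue<String> source;        // stands in for a socket/MQ connection
    private final List<String> stored = new ArrayList<>();
    private Thread worker;

    MiniReceiver(BlockingQueue<String> source) {
        this.source = source;
    }

    // Analogue of Receiver.store(record): hand one record to the framework.
    void store(String record) {
        synchronized (stored) {
            stored.add(record);
        }
    }

    // Analogue of Receiver.onStart(): must not block, so the receive loop
    // runs on its own thread.
    void onStart() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String rec = source.take();        // blocking receive
                    if (SENTINEL.equals(rec)) break;   // shutdown signal
                    store(rec);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
    }

    // Analogue of Receiver.onStop(): unblock the worker and wait for it.
    void onStop() throws InterruptedException {
        source.put(SENTINEL);
        worker.join();
    }

    List<String> snapshot() {
        synchronized (stored) {
            return new ArrayList<>(stored);
        }
    }
}

public class MiniReceiverDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> wire = new LinkedBlockingQueue<>();
        MiniReceiver receiver = new MiniReceiver(wire);
        receiver.onStart();
        wire.put("a");
        wire.put("b");
        receiver.onStop();                        // FIFO queue: "a", "b" are stored before the sentinel
        System.out.println(receiver.snapshot());  // [a, b]
    }
}
```

In real Spark code the equivalent class extends Receiver&lt;String&gt;, and receiverStream(receiver) wires exactly this pair of callbacks into a DStream, as the examples below demonstrate.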

Example 1: getEventReceiverStream

import org.apache.spark.streaming.api.java.JavaStreamingContext; // import the class the method belongs to
@Override
public <T extends Event> JavaDStream<T> getEventReceiverStream(JavaStreamingContext sc,
    Class<T> eventType) {
  UserFilteredMessagingServiceReceiver<T> messagingReceiver =
      new UserFilteredMessagingServiceReceiver<T>(bundle.getModuleId(),
          PlatformClientFactory.getInstance().getUsedHost(), eventType);

  JavaDStream<T> stream = sc.receiverStream(messagingReceiver);

  return stream;
}
 
Developer: Telecooperation, Project: assistance-platform-server, Lines: 12, Source: SparkService.java

Example 2: main

import org.apache.spark.streaming.api.java.JavaStreamingContext; // import the class the method belongs to
public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("pulsar-spark");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

    ClientConfiguration clientConf = new ClientConfiguration();
    ConsumerConfiguration consConf = new ConsumerConfiguration();
    String url = "pulsar://localhost:6650/";
    String topic = "persistent://sample/standalone/ns1/topic1";
    String subs = "sub1";

    JavaReceiverInputDStream<byte[]> msgs = jssc
            .receiverStream(new SparkStreamingPulsarReceiver(clientConf, consConf, url, topic, subs));

    JavaDStream<Integer> isContainingPulsar = msgs.flatMap(new FlatMapFunction<byte[], Integer>() {
        @Override
        public Iterator<Integer> call(byte[] msg) {
            return Arrays.asList(((new String(msg)).indexOf("Pulsar") != -1) ? 1 : 0).iterator();
        }
    });

    JavaDStream<Integer> numOfPulsar = isContainingPulsar.reduce(new Function2<Integer, Integer, Integer>() {
        @Override
        public Integer call(Integer i1, Integer i2) {
            return i1 + i2;
        }
    });

    numOfPulsar.print();

    jssc.start();
    jssc.awaitTermination();
}
 
Developer: apache, Project: incubator-pulsar, Lines: 33, Source: SparkStreamingPulsarReceiverExample.java

Example 3: start

import org.apache.spark.streaming.api.java.JavaStreamingContext; // import the class the method belongs to
public void start() {
    final JavaStreamingContext context = new JavaStreamingContext(conf, checkpointInterval);

    // for graceful shutdown of the application ...
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            System.out.println("Shutting down streaming app...");
            context.stop(true, true);
            System.out.println("Shutdown of streaming app complete.");
        }
    });

    JKinesisReceiver receiver = new JKinesisReceiver(appName, streamName,
                                                     endpointUrl, regionName,
                                                     checkpointInterval,
                                                     InitialPositionInStream.LATEST);

    JavaDStream<String> dstream = context.receiverStream(receiver);

    JavaDStream<EventRecord> recs = dstream.map(new EventRecordMapFunc());

    recs.print();

    // persist the DStream to Cassandra
    javaFunctions(recs)
        .writerBuilder("canary", "eventrecord", mapToRow(EventRecord.class))
        .saveToCassandra();
    System.out.println("Start Spark Stream Processing...");

    context.start();
    context.awaitTermination();

}
 
Developer: lenards, Project: spark-cstar-canaries, Lines: 37, Source: Consumer.java

Example 4: main

import org.apache.spark.streaming.api.java.JavaStreamingContext; // import the class the method belongs to
public static void main(String[] args) throws Exception {
    if (args.length < 2) {
      System.err.println("Usage: JavaCustomReceiver <hostname> <port>");
      System.exit(1);
    }

    // https://github.com/apache/spark/blob/39e2bad6a866d27c3ca594d15e574a1da3ee84cc/examples/src/main/scala/org/apache/spark/examples/streaming/StreamingExamples.scala
    boolean log4jInitialized = Logger.getRootLogger().getAllAppenders().hasMoreElements();
    if (!log4jInitialized) {
      // Spark's default logging has already initialized log4j; lower the level
      // to WARN. To override further, add a custom log4j.properties to the classpath.
      Logger.getRootLogger().setLevel(Level.WARN);
    }

    // Create the context with a 5 second batch size
    SparkConf sparkConf = new SparkConf().setAppName("JavaCustomReceiver").setMaster("local[*]").set("spark.driver.host", "localhost"); // https://issues.apache.org/jira/browse/
    JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(5000));

    // Create an input stream with the custom receiver on the target ip:port and
    // sum the counts per key in the input stream of (String, Long) tuples
    final JavaReceiverInputDStream<Tuple2<String, Long>> receiverStream =
        ssc.receiverStream(new JavaCustomReceiver(args[0], Integer.parseInt(args[1])));

    final JavaPairDStream<String, Long> keyValues = receiverStream.mapToPair(
        new PairFunction<Tuple2<String, Long>, String, Long>() {
          @Override
          public Tuple2<String, Long> call(Tuple2<String, Long> tuple) throws Exception {
            return tuple;
          }
        });

    JavaPairDStream<String, Long> byKeys = keyValues.reduceByKey((a, b) -> a + b);
    byKeys.print();

    ssc.start();
    ssc.awaitTermination();
  }
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 61, Source: JavaCustomReceiver.java

Example 5: createStream

import org.apache.spark.streaming.api.java.JavaStreamingContext; // import the class the method belongs to
public static <TYPE> JavaReceiverInputDStream<TYPE> createStream(JavaStreamingContext jssc,
                                                                 String topic, String topicRegType, Class<TYPE> topicType,
                                                                 StorageLevel storageLevel) {
    return jssc.receiverStream(new VortexReceiver<>(storageLevel, topic, topicRegType, topicType));
}
 
Developer: ADLINK-IST, Project: vortex-spark, Lines: 6, Source: VortexUtils.java

Example 6: asStreamOf

import org.apache.spark.streaming.api.java.JavaStreamingContext; // import the class the method belongs to
/**
 * @param ssc the (Java-based) Spark Streaming Context
 * @return a Spark stream, belonging to the provided context, that will collect NATS messages
 */
public JavaReceiverInputDStream<R> asStreamOf(JavaStreamingContext ssc) {
	return ssc.receiverStream(this);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 8, Source: StandardNatsToSparkConnectorImpl.java


Note: The org.apache.spark.streaming.api.java.JavaStreamingContext.receiverStream examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their authors; copyright remains with the original authors, and use or redistribution is subject to each project's license. Do not reproduce without permission.