Java SparkRunner Class Code Examples

This article collects typical usage examples of the Java class org.apache.beam.runners.spark.SparkRunner. If you are wondering what the SparkRunner class does, how to use it, or where to find usage examples, the curated snippets below should help.


The SparkRunner class belongs to the org.apache.beam.runners.spark package. Twelve code examples of the class are shown below, sorted by popularity by default.

Example 1: rejectStateAndTimers

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
/**
 * Rejects a {@link DoFn} that uses state or timers.
 *
 * @param doFn the {@link DoFn} to possibly reject.
 */
public static void rejectStateAndTimers(DoFn<?, ?> doFn) {
  DoFnSignature signature = DoFnSignatures.getSignature(doFn.getClass());

  if (signature.stateDeclarations().size() > 0) {
    throw new UnsupportedOperationException(
        String.format(
            "Found %s annotations on %s, but %s cannot yet be used with state in the %s.",
            DoFn.StateId.class.getSimpleName(),
            doFn.getClass().getName(),
            DoFn.class.getSimpleName(),
            SparkRunner.class.getSimpleName()));
  }

  if (signature.timerDeclarations().size() > 0) {
    throw new UnsupportedOperationException(
        String.format(
            "Found %s annotations on %s, but %s cannot yet be used with timers in the %s.",
            DoFn.TimerId.class.getSimpleName(),
            doFn.getClass().getName(),
            DoFn.class.getSimpleName(),
            SparkRunner.class.getSimpleName()));
  }
}
 
Developer ID: apache, Project: beam, Lines: 29, Source: TranslationUtils.java
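
For contrast, here is a minimal sketch of a DoFn that this check rejects. The StatefulCountFn name is ours, and the sketch assumes a Beam 2.x SDK where state lives in org.apache.beam.sdk.state; passing such a DoFn to rejectStateAndTimers should throw UnsupportedOperationException.

import org.apache.beam.sdk.coders.VarIntCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Hypothetical stateful DoFn: the @StateId declaration is exactly what
// rejectStateAndTimers finds through DoFnSignatures.
class StatefulCountFn extends DoFn<KV<String, Integer>, Integer> {
  @StateId("count")
  private final StateSpec<ValueState<Integer>> countSpec =
      StateSpecs.value(VarIntCoder.of());

  @ProcessElement
  public void processElement(
      ProcessContext c, @StateId("count") ValueState<Integer> count) {
    Integer current = count.read();
    int next = (current == null ? 0 : current) + 1;
    count.write(next);
    c.output(next);
  }
}

// Expected to throw UnsupportedOperationException:
// TranslationUtils.rejectStateAndTimers(new StatefulCountFn());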

Example 2: testTrackSingle

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
@Test
public void testTrackSingle() {
  options.setRunner(SparkRunner.class);
  JavaSparkContext jsc = SparkContextFactory.getSparkContext(options);
  JavaStreamingContext jssc = new JavaStreamingContext(jsc,
      new org.apache.spark.streaming.Duration(options.getBatchIntervalMillis()));

  Pipeline p = Pipeline.create(options);

  CreateStream<Integer> emptyStream =
      CreateStream.of(
          VarIntCoder.of(),
          Duration.millis(options.getBatchIntervalMillis())).emptyBatch();

  p.apply(emptyStream).apply(ParDo.of(new PassthroughFn<>()));

  p.traverseTopologically(new StreamingSourceTracker(jssc, p, ParDo.MultiOutput.class, 0));
  assertThat(StreamingSourceTracker.numAssertions, equalTo(1));
}
 
Developer ID: apache, Project: beam, Lines: 20, Source: TrackStreamingSourcesTest.java

Example 3: rejectSplittable

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
public static void rejectSplittable(DoFn<?, ?> doFn) {
  DoFnSignature signature = DoFnSignatures.getSignature(doFn.getClass());

  if (signature.processElement().isSplittable()) {
    throw new UnsupportedOperationException(
        String.format(
            "%s does not support splittable DoFn: %s", SparkRunner.class.getSimpleName(), doFn));
  }
}
 
Developer ID: apache, Project: beam, Lines: 10, Source: TranslationUtils.java
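
To see what the signature check matches, here is a minimal sketch of a splittable DoFn. It follows the older Beam 2.x SDF surface (OffsetRange, OffsetRangeTracker), which has changed in later Beam versions, and the SplittableCountFn name is ours; rejectSplittable should throw for it.

import org.apache.beam.sdk.io.range.OffsetRange;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.splittabledofn.OffsetRangeTracker;

// Hypothetical splittable DoFn: the tracker parameter is what marks
// processElement() as splittable in its DoFnSignature.
class SplittableCountFn extends DoFn<Integer, Integer> {
  @ProcessElement
  public void processElement(ProcessContext c, OffsetRangeTracker tracker) {
    for (long i = tracker.currentRestriction().getFrom(); tracker.tryClaim(i); i++) {
      c.output((int) i);
    }
  }

  @GetInitialRestriction
  public OffsetRange getInitialRestriction(Integer element) {
    return new OffsetRange(0, element);
  }
}

// Expected to throw UnsupportedOperationException:
// TranslationUtils.rejectSplittable(new SplittableCountFn());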

Example 4: call

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
@Override
public JavaStreamingContext call() throws Exception {
  LOG.info("Creating a new Spark Streaming Context");
  // validate unbounded read properties.
  checkArgument(
      options.getMinReadTimeMillis() < options.getBatchIntervalMillis(),
      "Minimum read time has to be less than batch time.");
  checkArgument(
      options.getReadTimePercentage() > 0 && options.getReadTimePercentage() < 1,
      "Read time percentage is bound to (0, 1).");

  SparkPipelineTranslator translator =
      new StreamingTransformTranslator.Translator(new TransformTranslator.Translator());
  Duration batchDuration = new Duration(options.getBatchIntervalMillis());
  LOG.info("Setting Spark streaming batchDuration to {} msec", batchDuration.milliseconds());

  JavaSparkContext jsc = SparkContextFactory.getSparkContext(options);
  JavaStreamingContext jssc = new JavaStreamingContext(jsc, batchDuration);

  // We must first init accumulators since translators expect them to be instantiated.
  SparkRunner.initAccumulators(options, jsc);

  EvaluationContext ctxt = new EvaluationContext(jsc, pipeline, options, jssc);
  // update cache candidates
  SparkRunner.updateCacheCandidates(pipeline, translator, ctxt);
  pipeline.traverseTopologically(new SparkRunner.Evaluator(translator, ctxt));
  ctxt.computeOutputs();

  checkpoint(jssc, checkpointDir);

  return jssc;
}
 
Developer ID: apache, Project: beam, Lines: 33, Source: SparkRunnerStreamingContextFactory.java
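
The factory's call() builds a fresh context; the point of pairing it with a checkpoint directory is that Spark can instead recover an existing context on restart. Below is a minimal sketch, assuming Spark 2.x and that the factory implements Spark's Function0<JavaStreamingContext> (as its call() signature suggests); the StreamingLauncherSketch wrapper is ours.

import org.apache.spark.api.java.function.Function0;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

class StreamingLauncherSketch {
  // Recover the context from the checkpoint directory when one exists,
  // otherwise invoke the factory's call() above to build a fresh one.
  static void run(String checkpointDir, Function0<JavaStreamingContext> factory)
      throws InterruptedException {
    JavaStreamingContext jssc = JavaStreamingContext.getOrCreate(checkpointDir, factory);
    jssc.start();
    jssc.awaitTermination();
  }
}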

Example 5: testTrackFlattened

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
@Test
public void testTrackFlattened() {
  options.setRunner(SparkRunner.class);
  JavaSparkContext jsc = SparkContextFactory.getSparkContext(options);
  JavaStreamingContext jssc = new JavaStreamingContext(jsc,
      new org.apache.spark.streaming.Duration(options.getBatchIntervalMillis()));

  Pipeline p = Pipeline.create(options);

  CreateStream<Integer> queueStream1 =
      CreateStream.of(
          VarIntCoder.of(),
          Duration.millis(options.getBatchIntervalMillis())).emptyBatch();
  CreateStream<Integer> queueStream2 =
      CreateStream.of(
          VarIntCoder.of(),
          Duration.millis(options.getBatchIntervalMillis())).emptyBatch();

  PCollection<Integer> pcol1 = p.apply(queueStream1);
  PCollection<Integer> pcol2 = p.apply(queueStream2);
  PCollection<Integer> flattened =
      PCollectionList.of(pcol1).and(pcol2).apply(Flatten.<Integer>pCollections());
  flattened.apply(ParDo.of(new PassthroughFn<>()));

  p.traverseTopologically(new StreamingSourceTracker(jssc, p, ParDo.MultiOutput.class, 0, 1));
  assertThat(StreamingSourceTracker.numAssertions, equalTo(1));
}
 
Developer ID: apache, Project: beam, Lines: 28, Source: TrackStreamingSourcesTest.java

Example 6: StreamingSourceTracker

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
private StreamingSourceTracker(
    JavaStreamingContext jssc,
    Pipeline pipeline,
    Class<? extends PTransform> transformClassToAssert,
    Integer... expected) {
  this.ctxt = new EvaluationContext(jssc.sparkContext(), pipeline, options, jssc);
  this.evaluator = new SparkRunner.Evaluator(
      new StreamingTransformTranslator.Translator(new TransformTranslator.Translator()), ctxt);
  this.transformClassToAssert = transformClassToAssert;
  this.expected = expected;
}
 
Developer ID: apache, Project: beam, Lines: 12, Source: TrackStreamingSourcesTest.java

Example 7: createSparkRunnerPipeline

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
private Pipeline createSparkRunnerPipeline() {
    PipelineOptions o = PipelineOptionsFactory.create();
    SparkContextOptions options = o.as(SparkContextOptions.class);
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "PubSubInput");
    options.setProvidedSparkContext(jsc);
    options.setUsesProvidedSparkContext(true);
    options.setRunner(SparkRunner.class);
    runtimeContainer = new BeamJobRuntimeContainer(options);
    return Pipeline.create(options);
}
 
Developer ID: Talend, Project: components, Lines: 11, Source: PubSubInputRuntimeTestIT.java
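
Examples 7, 8, 11, and 12 all follow this same provided-context pattern, which lets several integration tests share one JavaSparkContext instead of letting the runner create its own. A minimal end-to-end sketch under the same assumptions (local[2] master; the ProvidedContextSketch class and its element values are ours):

import java.util.Arrays;
import org.apache.beam.runners.spark.SparkContextOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.spark.api.java.JavaSparkContext;

class ProvidedContextSketch {
  public static void main(String[] args) {
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "ProvidedContextSketch");
    SparkContextOptions options =
        PipelineOptionsFactory.create().as(SparkContextOptions.class);
    options.setProvidedSparkContext(jsc);
    options.setUsesProvidedSparkContext(true);
    options.setRunner(SparkRunner.class);

    Pipeline p = Pipeline.create(options);
    p.apply(Create.of(Arrays.asList(1, 2, 3)));
    // Runs on the provided context, which stays open for reuse afterwards.
    p.run().waitUntilFinish();
    jsc.stop();
  }
}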

Example 8: createSparkRunnerPipeline

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
private Pipeline createSparkRunnerPipeline() {
    JavaSparkContext jsc = new JavaSparkContext("local[2]", this.getClass().getName());
    PipelineOptions o = PipelineOptionsFactory.create();
    SparkContextOptions options = o.as(SparkContextOptions.class);
    options.setProvidedSparkContext(jsc);
    options.setUsesProvidedSparkContext(true);
    options.setRunner(SparkRunner.class);
    runtimeContainer = new BeamJobRuntimeContainer(options);
    return Pipeline.create(options);
}
 
Developer ID: Talend, Project: components, Lines: 11, Source: PubSubOutputRuntimeTestIT.java

Example 9: setupLazyAvroCoder

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
@Before
public void setupLazyAvroCoder() {
    options = PipelineOptionsFactory.as(SparkPipelineOptions.class);
    options.setRunner(SparkRunner.class);
    options.setSparkMaster("local");
    options.setStreaming(false);
    pWrite = Pipeline.create(options);
    pRead = Pipeline.create(options);

}
 
Developer ID: Talend, Project: components, Lines: 11, Source: S3SparkRuntimeTestIT.java

Example 10: getOptions

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
/**
 * @return the options used to create this pipeline. These can be set or changed before the Pipeline is created.
 */
public SparkContextOptions getOptions() {
    if (options == null) {
        options = PipelineOptionsFactory.as(SparkContextOptions.class);
        options.setRunner(SparkRunner.class);
    }
    return options;
}
 
Developer ID: Talend, Project: components, Lines: 11, Source: SparkIntegrationTestResource.java
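
Because getOptions() creates and caches the options lazily, a test can adjust them before any pipeline is built from them. A short hypothetical usage sketch (the ResourceUsageSketch wrapper is ours; SparkIntegrationTestResource is the Talend class from the example above, and setStreaming comes from the SparkPipelineOptions interface that SparkContextOptions extends):

import org.apache.beam.runners.spark.SparkContextOptions;
import org.apache.beam.sdk.Pipeline;

class ResourceUsageSketch {
  // Tweak the lazily created options, then build a pipeline from them.
  static Pipeline pipelineFrom(SparkIntegrationTestResource resource) {
    SparkContextOptions options = resource.getOptions();
    options.setStreaming(false); // inherited from SparkPipelineOptions
    return Pipeline.create(options);
  }
}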

Example 11: createSparkRunnerPipeline

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
private Pipeline createSparkRunnerPipeline() {
    PipelineOptions o = PipelineOptionsFactory.create();
    SparkContextOptions options = o.as(SparkContextOptions.class);
    options.setProvidedSparkContext(jsc);
    options.setUsesProvidedSparkContext(true);
    options.setRunner(SparkRunner.class);
    runtimeContainer = new BeamJobRuntimeContainer(options);
    return Pipeline.create(options);
}
 
Developer ID: Talend, Project: components, Lines: 10, Source: BigQueryBeamRuntimeTestIT.java

Example 12: createSparkRunnerPipeline

import org.apache.beam.runners.spark.SparkRunner; // import the required package/class
private Pipeline createSparkRunnerPipeline() {
    PipelineOptions o = PipelineOptionsFactory.create();
    SparkContextOptions options = o.as(SparkContextOptions.class);

    SparkConf conf = new SparkConf();
    conf.setAppName("KinesisInput");
    conf.setMaster("local[2]");
    conf.set("spark.driver.allowMultipleContexts", "true");
    JavaSparkContext jsc = new JavaSparkContext(new SparkContext(conf));
    options.setProvidedSparkContext(jsc);
    options.setUsesProvidedSparkContext(true);
    options.setRunner(SparkRunner.class);

    return Pipeline.create(options);
}
 
Developer ID: Talend, Project: components, Lines: 16, Source: KinesisInputRuntimeTestIT.java


Note: The org.apache.beam.runners.spark.SparkRunner class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright belongs to the original authors. For distribution and use, refer to the corresponding project's License; do not reproduce without permission.