

Java StorageLevel Class Code Examples

This article collects typical usage examples of the org.apache.spark.storage.StorageLevel class in Java. If you have been wondering what the StorageLevel class is for and how to use it in practice, the curated examples below should help.


The StorageLevel class belongs to the org.apache.spark.storage package. Fifteen code examples of the class are shown below, sorted by popularity.
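Before the examples, here is a minimal, self-contained sketch written for this overview (not taken from any of the projects below; the class name StorageLevelDemo and the local master URL are illustrative placeholders). It shows the two ways StorageLevel appears in the snippets that follow: passing a level constant such as MEMORY_AND_DISK_SER() to persist(), and resolving a level by name with StorageLevel.fromString():

import java.util.Arrays;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class StorageLevelDemo {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "storage-level-demo");
        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

        // persist() accepts any StorageLevel constant; cache() is shorthand for MEMORY_ONLY.
        numbers.persist(StorageLevel.MEMORY_AND_DISK_SER());
        System.out.println(numbers.count());

        // fromString() resolves a level from its name, e.g. when read from configuration.
        StorageLevel fromConfig = StorageLevel.fromString("MEMORY_ONLY_SER");
        System.out.println(fromConfig.description());

        numbers.unpersist();
        sc.stop();
    }
}

Note that the Java API exposes the levels as static factory methods (StorageLevel.MEMORY_ONLY(), StorageLevel.MEMORY_AND_DISK_SER(), and so on) because they are defined on a Scala companion object; several of the examples below rely on exactly this pattern.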

Example 1: testValidTwitchStreamBuilder

import org.apache.spark.storage.StorageLevel; // import the required package/class
/**
 * Test that the flow works correctly
 */
@Test
public void testValidTwitchStreamBuilder() {
    Set<String> gamesList = new HashSet<>();
    gamesList.add("League+of+Legends");
    Set<String> channelsList = new HashSet<>();
    channelsList.add("#TSM_Dyrus");

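    // Building the stream exercises the whole builder flow; an invalid configuration would throw here.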
    JavaReceiverInputDStream<Message> stream = new TwitchStreamBuilder()
            .setGames(gamesList)
            .setChannels(channelsList)
            .setLanguage("es")
            .setStorageLevel(StorageLevel.MEMORY_AND_DISK_SER_2())
            .setSchedulingInterval(300)
            .build(jssc);
}
 
Developer: agapic, Project: Twitch-Streamer, Lines: 19, Source: JavaTwitchStreamBuilderTest.java

Example 2: processAndRegisterTempTable

import org.apache.spark.storage.StorageLevel; // import the required package/class
private void processAndRegisterTempTable(Dataset<Row> df, RootStatement rootStatement, String tableAlias, String logText, boolean debug) {
    boolean dfPersisted = false;
    long tableReferenceCount = rootStatement.getTableReferenceCount().getCount(tableAlias);

    if (tableReferenceCount > 1) {
        df = df.persist(StorageLevel.MEMORY_AND_DISK_SER());
        dfPersisted = true;
        logger.info(String.format("Persist table %s because it is referenced %s times", tableAlias, tableReferenceCount));
    } else {
        logger.info(String.format("Do not persist table %s because it is referenced %s times", tableAlias, tableReferenceCount));
    }

    df.createOrReplaceTempView(tableAlias);
    logger.info(String.format("Registered temp view %s for query: %s", tableAlias, logText));

    if (debug) {
        if (!dfPersisted) {
            df = df.persist(StorageLevel.MEMORY_AND_DISK_SER());
        }

        // TODO save debug info/data
    }
}
 
Developer: uber, Project: uberscriptquery, Lines: 24, Source: QueryEngine.java

Example 3: readAndConvertFeatureRDD

import org.apache.spark.storage.StorageLevel; // import the required package/class
private static RDD<Tuple2<Object,double[]>> readAndConvertFeatureRDD(
    JavaPairRDD<String,float[]> javaRDD,
    Broadcast<Map<String,Integer>> bIdToIndex) {

  RDD<Tuple2<Integer,double[]>> scalaRDD = javaRDD.mapToPair(t ->
      new Tuple2<>(bIdToIndex.value().get(t._1()), t._2())
  ).mapValues(f -> {
      double[] d = new double[f.length];
      for (int i = 0; i < d.length; i++) {
        d[i] = f[i];
      }
      return d;
    }
  ).rdd();

  // This mimics the persistence level established by the ALS training methods
  scalaRDD.persist(StorageLevel.MEMORY_AND_DISK());

  @SuppressWarnings("unchecked")
  RDD<Tuple2<Object,double[]>> objKeyRDD = (RDD<Tuple2<Object,double[]>>) (RDD<?>) scalaRDD;
  return objKeyRDD;
}
 
Developer: oncewang, Project: oryx2, Lines: 23, Source: ALSUpdate.java

Example 4: SparkStreamingPulsarReceiver

import org.apache.spark.storage.StorageLevel; // import the required package/class
public SparkStreamingPulsarReceiver(StorageLevel storageLevel, ClientConfiguration clientConfiguration,
        ConsumerConfiguration consumerConfiguration, String url, String topic, String subscription) {
    super(storageLevel);
    checkNotNull(clientConfiguration, "ClientConfiguration must not be null");
    checkNotNull(consumerConfiguration, "ConsumerConfiguration must not be null");
    this.clientConfiguration = clientConfiguration;
    this.url = url;
    this.topic = topic;
    this.subscription = subscription;
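    // An ack timeout of 0 means none was configured; fall back to 60 seconds.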
    if (consumerConfiguration.getAckTimeoutMillis() == 0) {
        consumerConfiguration.setAckTimeout(60, TimeUnit.SECONDS);
    }
    consumerConfiguration.setMessageListener((MessageListener & Serializable) (consumer, msg) -> {
        try {
            store(msg.getData());
            consumer.acknowledgeAsync(msg);
        } catch (Exception e) {
            log.error("Failed to store a message: {}", e.getMessage());
        }
    });
    this.consumerConfiguration = consumerConfiguration;
}
 
Developer: apache, Project: incubator-pulsar, Lines: 23, Source: SparkStreamingPulsarReceiver.java

Example 5: writeGraphRDD

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Override
public void writeGraphRDD(final Configuration configuration, final JavaPairRDD<Object, VertexWritable> graphRDD) {
    if (!configuration.getBoolean(Constants.GREMLIN_SPARK_PERSIST_CONTEXT, false))
        LOGGER.warn("The SparkContext should be persisted in order for the RDD to persist across jobs. To do so, set " + Constants.GREMLIN_SPARK_PERSIST_CONTEXT + " to true");
    if (!configuration.containsKey(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION))
        throw new IllegalArgumentException("There is no provided " + Constants.GREMLIN_HADOOP_OUTPUT_LOCATION + " to write the persisted RDD to");
    SparkContextStorage.open(configuration).rm(configuration.getString(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION));  // this might be problematic because it unpersists the job RDD
    // determine which storage level to persist the RDD with; MEMORY_ONLY (the cache() default) is used when none is configured
    final StorageLevel storageLevel = StorageLevel.fromString(configuration.getString(Constants.GREMLIN_SPARK_PERSIST_STORAGE_LEVEL, "MEMORY_ONLY"));
    if (!configuration.getBoolean(Constants.GREMLIN_HADOOP_GRAPH_WRITER_HAS_EDGES, true))
        graphRDD.mapValues(vertex -> {
            vertex.get().dropEdges(Direction.BOTH);
            return vertex;
        }).setName(Constants.getGraphLocation(configuration.getString(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION))).persist(storageLevel);
    else
        graphRDD.setName(Constants.getGraphLocation(configuration.getString(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION))).persist(storageLevel);
    Spark.refresh(); // must happen promptly so the Spark GC doesn't clear out the RDD
}
 
Developer: PKUSilvester, Project: LiteGraph, Lines: 19, Source: PersistedOutputRDD.java

Example 6: cache

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public void cache(String storageLevel, Coder<?> coder) {
  // we "force" MEMORY storage level in streaming
  if (!StorageLevel.fromString(storageLevel).equals(StorageLevel.MEMORY_ONLY_SER())) {
    LOG.warn("Provided StorageLevel: {} is ignored for streams, using the default level: {}",
        storageLevel,
        StorageLevel.MEMORY_ONLY_SER());
  }
  // Caching can trigger serialization, so we need to encode to bytes;
  // more details in https://issues.apache.org/jira/browse/BEAM-2669
  Coder<WindowedValue<T>> wc = (Coder<WindowedValue<T>>) coder;
  this.dStream = dStream.map(CoderHelpers.toByteFunction(wc))
      .cache()
      .map(CoderHelpers.fromByteFunction(wc));
}
 
Developer: apache, Project: beam, Lines: 18, Source: UnboundedDataset.java

Example 7: cache

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Override
@SuppressWarnings("unchecked")
public void cache(String storageLevel, Coder<?> coder) {
  StorageLevel level = StorageLevel.fromString(storageLevel);
  if (TranslationUtils.avoidRddSerialization(level)) {
    // if the level is memory-only, skip the extra overhead of converting to bytes
    this.rdd = getRDD().persist(level);
  } else {
    // Caching can trigger serialization, so we need to encode to bytes;
    // more details in https://issues.apache.org/jira/browse/BEAM-2669
    Coder<WindowedValue<T>> windowedValueCoder = (Coder<WindowedValue<T>>) coder;
    this.rdd = getRDD().map(CoderHelpers.toByteFunction(windowedValueCoder))
        .persist(level)
        .map(CoderHelpers.fromByteFunction(windowedValueCoder));
  }
}
 
Developer: apache, Project: beam, Lines: 17, Source: BoundedDataset.java

Example 8: initWatermarks

import org.apache.spark.storage.StorageLevel; // import the required package/class
private static Map<Integer, SparkWatermarks> initWatermarks(final BlockManager blockManager) {

  final Map<Integer, SparkWatermarks> watermarks = fetchSparkWatermarks(blockManager);

  if (watermarks == null) {
    final HashMap<Integer, SparkWatermarks> empty = Maps.newHashMap();
    blockManager.putSingle(
        WATERMARKS_BLOCK_ID,
        empty,
        StorageLevel.MEMORY_ONLY(),
        true,
        WATERMARKS_TAG);
    return empty;
  } else {
    return watermarks;
  }
}
 
Developer: apache, Project: beam, Lines: 18, Source: GlobalWatermarkHolder.java

Example 9: testNatsStreamingToSparkConnectorImpl_Serialization

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test
// @See https://github.com/Logimethods/nats-connector-spark/pull/3
// @See https://github.com/nats-io/java-nats-streaming/issues/51
public void testNatsStreamingToSparkConnectorImpl_Serialization() throws IOException, ClassNotFoundException {
	SubscriptionOptions.Builder optsBuilder = new SubscriptionOptions.Builder().durableName(DURABLE_NAME);
	final NatsStreamingToSparkConnectorImpl<String> connector = 
			NatsToSparkConnector
			.receiveFromNatsStreaming(String.class, StorageLevel.MEMORY_ONLY(), "clusterID") 
			.withSubscriptionOptionsBuilder(optsBuilder)
			.deliverAllAvailable() 
			.withNatsURL("NATS_URL") 
			.withSubjects("DEFAULT_SUBJECT");

	@SuppressWarnings("unchecked")
	final NatsStreamingToSparkConnectorImpl<String> newConnector = (NatsStreamingToSparkConnectorImpl<String>) SerializationUtils.clone(connector);
	
	assertEquals(DURABLE_NAME, newConnector.getSubscriptionOptions().getDurableName());
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 19, Source: NatsStreamingToSparkConnectorTest.java

Example 10: testNatsToSparkConnectorWithAdditionalPropertiesAndSubjects

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test(timeout=6000)
public void testNatsToSparkConnectorWithAdditionalPropertiesAndSubjects() throws InterruptedException {
	
	JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(200));

	final Properties properties = new Properties();
	properties.setProperty(PROP_URL, NATS_SERVER_URL);
	final JavaReceiverInputDStream<String> messages =  
			NatsToSparkConnector
				.receiveFromNats(String.class, StorageLevel.MEMORY_ONLY())
				.withProperties(properties)
				.withSubjects(DEFAULT_SUBJECT)
				.asStreamOf(ssc);

	validateTheReceptionOfMessages(ssc, messages);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 17, Source: StandardNatsToSparkConnectorTest.java

Example 11: testNatsToSparkConnectorWithAdditionalPropertiesAndMultipleSubjects

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test(timeout=6000)
public void testNatsToSparkConnectorWithAdditionalPropertiesAndMultipleSubjects() throws InterruptedException {
	
	JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(200));

	final Properties properties = new Properties();
	final JavaReceiverInputDStream<String> messages = 
			NatsToSparkConnector
				.receiveFromNats(String.class, StorageLevel.MEMORY_ONLY())
				.withNatsURL(NATS_SERVER_URL)
				.withProperties(properties)
				.withSubjects(DEFAULT_SUBJECT, "EXTRA_SUBJECT")
				.asStreamOf(ssc);

	validateTheReceptionOfMessages(ssc, messages);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 17, Source: StandardNatsToSparkConnectorTest.java

Example 12: testNatsToSparkConnectorWithAdditionalProperties

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test(timeout=6000)
public void testNatsToSparkConnectorWithAdditionalProperties() throws InterruptedException {
	
	JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(200));

	final Properties properties = new Properties();
	properties.setProperty(PROP_SUBJECTS, "sub1,"+DEFAULT_SUBJECT+" , sub2");
	properties.setProperty(PROP_URL, NATS_SERVER_URL);
	final JavaReceiverInputDStream<String> messages = 
			NatsToSparkConnector
				.receiveFromNats(String.class, StorageLevel.MEMORY_ONLY())
				.withProperties(properties)
				.asStreamOf(ssc);

	validateTheReceptionOfMessages(ssc, messages);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 17, Source: StandardNatsToSparkConnectorTest.java

Example 13: testNatsToSparkConnectorWithAdditionalPropertiesAndSubjects

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test(timeout=6000)
public void testNatsToSparkConnectorWithAdditionalPropertiesAndSubjects() throws InterruptedException {
	
	JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(200));

	final Properties properties = new Properties();
	properties.setProperty(PROP_URL, NATS_SERVER_URL);

	final JavaPairDStream<String, String> messages = 
			NatsToSparkConnector
				.receiveFromNats(String.class, StorageLevel.MEMORY_ONLY())
				.withProperties(properties)
				.withSubjects(DEFAULT_SUBJECT)
				.asStreamOfKeyValue(ssc);

	validateTheReceptionOfMessages(ssc, messages);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 18, Source: StandardNatsToSparkKeyValueConnectorTest.java

Example 14: testNatsToSparkConnectorWithAdditionalPropertiesAndMultipleSubjects

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test(timeout=6000)
public void testNatsToSparkConnectorWithAdditionalPropertiesAndMultipleSubjects() throws InterruptedException {
	
	JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(200));

	final Properties properties = new Properties();
	final JavaPairDStream<String, String> messages = 
			NatsToSparkConnector
				.receiveFromNats(String.class, StorageLevel.MEMORY_ONLY())
				.withNatsURL(NATS_SERVER_URL)
				.withProperties(properties)
				.withSubjects(DEFAULT_SUBJECT, "EXTRA_SUBJECT")
				.asStreamOfKeyValue(ssc);

	validateTheReceptionOfMessages(ssc, messages);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 17, Source: StandardNatsToSparkKeyValueConnectorTest.java

Example 15: testNatsToSparkConnectorWithAdditionalProperties

import org.apache.spark.storage.StorageLevel; // import the required package/class
@Test(timeout=6000)
public void testNatsToSparkConnectorWithAdditionalProperties() throws InterruptedException {
	
	JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(200));

	final Properties properties = new Properties();
	properties.setProperty(PROP_SUBJECTS, "sub1,"+DEFAULT_SUBJECT+" , sub2");
	properties.setProperty(PROP_URL, NATS_SERVER_URL);
	final JavaPairDStream<String, String> messages = 
			NatsToSparkConnector
				.receiveFromNats(String.class, StorageLevel.MEMORY_ONLY())
				.withProperties(properties)
				.asStreamOfKeyValue(ssc);

	validateTheReceptionOfMessages(ssc, messages);
}
 
Developer: Logimethods, Project: nats-connector-spark, Lines: 17, Source: StandardNatsToSparkKeyValueConnectorTest.java


Note: The org.apache.spark.storage.StorageLevel class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Refer to each project's license before distributing or using the code, and do not republish without permission.