

Java StateBackend Class Code Examples

This article collects typical usage examples of the Java class org.apache.flink.runtime.state.StateBackend. If you are wondering what StateBackend is used for, how to use it, or what code that uses it looks like in practice, the curated class examples below should help.


The StateBackend class belongs to the org.apache.flink.runtime.state package. The rest of this article presents 15 code examples for the class, sorted by popularity by default.
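As background for the examples that follow: application code normally selects a StateBackend on the StreamExecutionEnvironment, and the runtime classes shown below (StreamConfig, StateBackendLoader, CheckpointCoordinator, and so on) then serialize, load, and use that backend. The following is only a minimal sketch of that application-side setup, assuming a Flink 1.5-era API; the class name, checkpoint path, and sample data are illustrative.

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendSetupSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// checkpoint every 10 seconds so the configured backend is actually exercised
		env.enableCheckpointing(10_000L);

		// register a file-system state backend; the path is illustrative only
		env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints"));

		env.fromElements(1L, 2L, 3L).print();

		env.execute("state backend setup sketch");
	}
}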

Example 1: JobCheckpointingSettings

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
public JobCheckpointingSettings(
		List<JobVertexID> verticesToTrigger,
		List<JobVertexID> verticesToAcknowledge,
		List<JobVertexID> verticesToConfirm,
		CheckpointCoordinatorConfiguration checkpointCoordinatorConfiguration,
		@Nullable SerializedValue<StateBackend> defaultStateBackend,
		@Nullable SerializedValue<MasterTriggerRestoreHook.Factory[]> masterHooks) {


	this.verticesToTrigger = requireNonNull(verticesToTrigger);
	this.verticesToAcknowledge = requireNonNull(verticesToAcknowledge);
	this.verticesToConfirm = requireNonNull(verticesToConfirm);
	this.checkpointCoordinatorConfiguration = Preconditions.checkNotNull(checkpointCoordinatorConfiguration);
	this.defaultStateBackend = defaultStateBackend;
	this.masterHooks = masterHooks;
}
 
Developer: axbaretto, Project: flink, Lines: 17, Source: JobCheckpointingSettings.java

Example 2: snapshotOperatorState

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
@Override
public StreamTaskState snapshotOperatorState(long checkpointId, long timestamp) throws Exception {
    StreamTaskState taskState = super.snapshotOperatorState(checkpointId, timestamp);

    // we write the panes with the key/value maps into the stream
    StateBackend.CheckpointStateOutputView out = getStateBackend().createCheckpointStateOutputView(checkpointId, timestamp);

    int numKeys = windows.size();
    out.writeInt(numKeys);

    for (Map.Entry<K, Map<Long, ContextPair>> keyWindows : windows.entrySet()) {
        int numWindows = keyWindows.getValue().size();
        out.writeInt(numWindows);
        for (ContextPair context : keyWindows.getValue().values()) {
            context.writeToState(out);
        }
    }

    taskState.setOperatorState(out.closeAndGetHandle());
    return taskState;
}
 
Developer: wangyangjun, Project: StreamBench, Lines: 22, Source: CoGroupOperator.java

Example 3: RocksDBStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
/**
 * Private constructor that creates a re-configured copy of the state backend.
 *
 * @param original The state backend to re-configure.
 * @param config The configuration.
 */
private RocksDBStateBackend(RocksDBStateBackend original, Configuration config) {
	// reconfigure the state backend backing the streams
	final StateBackend originalStreamBackend = original.checkpointStreamBackend;
	this.checkpointStreamBackend = originalStreamBackend instanceof ConfigurableStateBackend ?
			((ConfigurableStateBackend) originalStreamBackend).configure(config) :
			originalStreamBackend;

	// configure incremental checkpoints
	if (original.enableIncrementalCheckpointing != null) {
		this.enableIncrementalCheckpointing = original.enableIncrementalCheckpointing;
	}
	else {
		this.enableIncrementalCheckpointing =
				config.getBoolean(CheckpointingOptions.INCREMENTAL_CHECKPOINTS);
	}

	// configure local directories
	if (original.localRocksDbDirectories != null) {
		this.localRocksDbDirectories = original.localRocksDbDirectories;
	}
	else {
		final String rocksdbLocalPaths = config.getString(CheckpointingOptions.ROCKSDB_LOCAL_DIRECTORIES);
		if (rocksdbLocalPaths != null) {
			String[] directories = rocksdbLocalPaths.split(",|" + File.pathSeparator);

			try {
				setDbStoragePaths(directories);
			}
			catch (IllegalArgumentException e) {
				throw new IllegalConfigurationException("Invalid configuration for RocksDB state " +
						"backend's local storage directories: " + e.getMessage(), e);
			}
		}
	}

	// copy remaining settings
	this.predefinedOptions = original.predefinedOptions;
	this.optionsFactory = original.optionsFactory;
}
 
Developer: axbaretto, Project: flink, Lines: 46, Source: RocksDBStateBackend.java

Example 4: testLoadFileSystemStateBackendMixed

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
/**
 * Validates taking the application-defined file system state backend and enriching it with
 * additional parameters from the cluster configuration, giving precedence to application-defined
 * parameters over configuration-defined parameters.
 */
@Test
public void testLoadFileSystemStateBackendMixed() throws Exception {
	final String appCheckpointDir = new Path(tmp.newFolder().toURI()).toString();
	final String checkpointDir = new Path(tmp.newFolder().toURI()).toString();
	final String savepointDir = new Path(tmp.newFolder().toURI()).toString();

	final String localDir1 = tmp.newFolder().getAbsolutePath();
	final String localDir2 = tmp.newFolder().getAbsolutePath();
	final String localDir3 = tmp.newFolder().getAbsolutePath();
	final String localDir4 = tmp.newFolder().getAbsolutePath();

	final boolean incremental = !CheckpointingOptions.INCREMENTAL_CHECKPOINTS.defaultValue();

	final Path expectedCheckpointsPath = new Path(appCheckpointDir);
	final Path expectedSavepointsPath = new Path(savepointDir);

	final RocksDBStateBackend backend = new RocksDBStateBackend(appCheckpointDir, incremental);
	backend.setDbStoragePaths(localDir1, localDir2);

	final Configuration config = new Configuration();
	config.setString(backendKey, "jobmanager"); // this should not be picked up
	config.setString(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir); // this should not be picked up
	config.setString(CheckpointingOptions.SAVEPOINT_DIRECTORY, savepointDir);
	config.setBoolean(CheckpointingOptions.INCREMENTAL_CHECKPOINTS, !incremental);  // this should not be picked up
	config.setString(CheckpointingOptions.ROCKSDB_LOCAL_DIRECTORIES, localDir3 + ":" + localDir4);  // this should not be picked up

	final StateBackend loadedBackend =
			StateBackendLoader.fromApplicationOrConfigOrDefault(backend, config, cl, null);
	assertTrue(loadedBackend instanceof RocksDBStateBackend);

	final RocksDBStateBackend loadedRocks = (RocksDBStateBackend) loadedBackend;

	assertEquals(incremental, loadedRocks.isIncrementalCheckpointsEnabled());
	checkPaths(loadedRocks.getDbStoragePaths(), localDir1, localDir2);

	AbstractFileStateBackend fsBackend = (AbstractFileStateBackend) loadedRocks.getCheckpointBackend();
	assertEquals(expectedCheckpointsPath, fsBackend.getCheckpointPath());
	assertEquals(expectedSavepointsPath, fsBackend.getSavepointPath());
}
 
Developer: axbaretto, Project: flink, Lines: 45, Source: RocksDBStateBackendFactoryTest.java

Example 5: JobSnapshottingSettings

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
public JobSnapshottingSettings(
		List<JobVertexID> verticesToTrigger,
		List<JobVertexID> verticesToAcknowledge,
		List<JobVertexID> verticesToConfirm,
		long checkpointInterval,
		long checkpointTimeout,
		long minPauseBetweenCheckpoints,
		int maxConcurrentCheckpoints,
		ExternalizedCheckpointSettings externalizedCheckpointSettings,
		@Nullable StateBackend defaultStateBackend,
		boolean isExactlyOnce) {

	// sanity checks
	if (checkpointInterval < 1 || checkpointTimeout < 1 ||
			minPauseBetweenCheckpoints < 0 || maxConcurrentCheckpoints < 1) {
		throw new IllegalArgumentException();
	}
	
	this.verticesToTrigger = requireNonNull(verticesToTrigger);
	this.verticesToAcknowledge = requireNonNull(verticesToAcknowledge);
	this.verticesToConfirm = requireNonNull(verticesToConfirm);
	this.checkpointInterval = checkpointInterval;
	this.checkpointTimeout = checkpointTimeout;
	this.minPauseBetweenCheckpoints = minPauseBetweenCheckpoints;
	this.maxConcurrentCheckpoints = maxConcurrentCheckpoints;
	this.externalizedCheckpointSettings = requireNonNull(externalizedCheckpointSettings);
	this.defaultStateBackend = defaultStateBackend;
	this.isExactlyOnce = isExactlyOnce;
}
 
Developer: axbaretto, Project: flink, Lines: 30, Source: JobSnapshottingSettings.java

Example 6: setStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
public void setStateBackend(StateBackend backend) {
	if (backend != null) {
		try {
			InstantiationUtil.writeObjectToConfig(backend, this.config, STATE_BACKEND);
		} catch (Exception e) {
			throw new StreamTaskException("Could not serialize stateHandle provider.", e);
		}
	}
}
 
Developer: axbaretto, Project: flink, Lines: 10, Source: StreamConfig.java

Example 7: getStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
public StateBackend getStateBackend(ClassLoader cl) {
	try {
		return InstantiationUtil.readObjectFromConfig(this.config, STATE_BACKEND, cl);
	} catch (Exception e) {
		throw new StreamTaskException("Could not instantiate statehandle provider.", e);
	}
}
 
Developer: axbaretto, Project: flink, Lines: 8, Source: StreamConfig.java
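Examples 6 and 7 are the two halves of one round trip: setStateBackend Java-serializes the backend into the StreamConfig under an internal config key, and getStateBackend deserializes it again with the user-code class loader. Below is a self-contained sketch of that round trip using the same InstantiationUtil helpers; the key "state.backend.sketch" and the MemoryStateBackend instance are illustrative stand-ins, not the real STATE_BACKEND constant.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.StateBackend;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.util.InstantiationUtil;

public class StateBackendConfigRoundTripSketch {
	public static void main(String[] args) throws Exception {
		Configuration config = new Configuration();

		// write side: serialize the backend into the configuration, as setStateBackend does
		StateBackend original = new MemoryStateBackend();
		InstantiationUtil.writeObjectToConfig(original, config, "state.backend.sketch");

		// read side: deserialize it with a class loader, as getStateBackend(ClassLoader) does
		StateBackend restored = InstantiationUtil.readObjectFromConfig(
				config, "state.backend.sketch", StateBackendConfigRoundTripSketch.class.getClassLoader());

		System.out.println("restored backend: " + restored.getClass().getName());
	}
}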

Example 8: createStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
private StateBackend createStateBackend() throws Exception {
	final StateBackend fromApplication = configuration.getStateBackend(getUserCodeClassLoader());

	return StateBackendLoader.fromApplicationOrConfigOrDefault(
			fromApplication,
			getEnvironment().getTaskManagerInfo().getConfiguration(),
			getUserCodeClassLoader(),
			LOG);
}
 
Developer: axbaretto, Project: flink, Lines: 10, Source: StreamTask.java

Example 9: runOperator

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
private OperatorStateHandles runOperator(
		Configuration taskConfiguration,
		ExecutionConfig executionConfig,
		OneInputStreamOperator<Long, Long> operator,
		KeySelector<Long, Long> keySelector,
		boolean isKeyedState,
		StateBackend stateBackend,
		ClassLoader classLoader,
		OperatorStateHandles operatorStateHandles,
		Iterable<Long> input) throws Exception {

	try (final MockEnvironment environment = new MockEnvironment(
			"test task",
			32 * 1024,
			new MockInputSplitProvider(),
			256,
			taskConfiguration,
			executionConfig,
			16,
			1,
			0,
			classLoader)) {

		OneInputStreamOperatorTestHarness<Long, Long> harness;

		if (isKeyedState) {
			harness = new KeyedOneInputStreamOperatorTestHarness<>(
				operator,
				keySelector,
				BasicTypeInfo.LONG_TYPE_INFO,
				environment);
		} else {
			harness = new OneInputStreamOperatorTestHarness<>(operator, LongSerializer.INSTANCE, environment);
		}

		harness.setStateBackend(stateBackend);

		harness.setup();
		harness.initializeState(operatorStateHandles);
		harness.open();

		long timestamp = 0L;

		for (Long value : input) {
			harness.processElement(value, timestamp++);
		}

		long checkpointId = 1L;
		long checkpointTimestamp = timestamp + 1L;

		OperatorStateHandles stateHandles = harness.snapshot(checkpointId, checkpointTimestamp);

		harness.close();

		return stateHandles;
	}
}
 
Developer: axbaretto, Project: flink, Lines: 58, Source: PojoSerializerUpgradeTest.java

Example 10: testLoadFileSystemStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
/**
 * Validates loading a file system state backend with additional parameters from the cluster configuration.
 */
@Test
public void testLoadFileSystemStateBackend() throws Exception {
	final String checkpointDir = new Path(tmp.newFolder().toURI()).toString();
	final String savepointDir = new Path(tmp.newFolder().toURI()).toString();
	final String localDir1 = tmp.newFolder().getAbsolutePath();
	final String localDir2 = tmp.newFolder().getAbsolutePath();
	final String localDirs = localDir1 + File.pathSeparator + localDir2;
	final boolean incremental = !CheckpointingOptions.INCREMENTAL_CHECKPOINTS.defaultValue();

	final Path expectedCheckpointsPath = new Path(checkpointDir);
	final Path expectedSavepointsPath = new Path(savepointDir);

	// we configure with the explicit string (rather than AbstractStateBackend#X_STATE_BACKEND_NAME)
	// to guard against config-breaking changes of the name
	final Configuration config1 = new Configuration();
	config1.setString(backendKey, "rocksdb");
	config1.setString(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir);
	config1.setString(CheckpointingOptions.SAVEPOINT_DIRECTORY, savepointDir);
	config1.setString(CheckpointingOptions.ROCKSDB_LOCAL_DIRECTORIES, localDirs);
	config1.setBoolean(CheckpointingOptions.INCREMENTAL_CHECKPOINTS, incremental);

	final Configuration config2 = new Configuration();
	config2.setString(backendKey, RocksDBStateBackendFactory.class.getName());
	config2.setString(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir);
	config2.setString(CheckpointingOptions.SAVEPOINT_DIRECTORY, savepointDir);
	config2.setString(CheckpointingOptions.ROCKSDB_LOCAL_DIRECTORIES, localDirs);
	config2.setBoolean(CheckpointingOptions.INCREMENTAL_CHECKPOINTS, incremental);

	StateBackend backend1 = StateBackendLoader.loadStateBackendFromConfig(config1, cl, null);
	StateBackend backend2 = StateBackendLoader.loadStateBackendFromConfig(config2, cl, null);

	assertTrue(backend1 instanceof RocksDBStateBackend);
	assertTrue(backend2 instanceof RocksDBStateBackend);

	RocksDBStateBackend fs1 = (RocksDBStateBackend) backend1;
	RocksDBStateBackend fs2 = (RocksDBStateBackend) backend2;

	AbstractFileStateBackend fs1back = (AbstractFileStateBackend) fs1.getCheckpointBackend();
	AbstractFileStateBackend fs2back = (AbstractFileStateBackend) fs2.getCheckpointBackend();

	assertEquals(expectedCheckpointsPath, fs1back.getCheckpointPath());
	assertEquals(expectedCheckpointsPath, fs2back.getCheckpointPath());
	assertEquals(expectedSavepointsPath, fs1back.getSavepointPath());
	assertEquals(expectedSavepointsPath, fs2back.getSavepointPath());
	assertEquals(incremental, fs1.isIncrementalCheckpointsEnabled());
	assertEquals(incremental, fs2.isIncrementalCheckpointsEnabled());
	checkPaths(fs1.getDbStoragePaths(), localDir1, localDir2);
	checkPaths(fs2.getDbStoragePaths(), localDir1, localDir2);
}
 
Developer: axbaretto, Project: flink, Lines: 53, Source: RocksDBStateBackendFactoryTest.java

Example 11: CheckpointCoordinator

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
public CheckpointCoordinator(
		JobID job,
		long baseInterval,
		long checkpointTimeout,
		long minPauseBetweenCheckpoints,
		int maxConcurrentCheckpointAttempts,
		CheckpointRetentionPolicy retentionPolicy,
		ExecutionVertex[] tasksToTrigger,
		ExecutionVertex[] tasksToWaitFor,
		ExecutionVertex[] tasksToCommitTo,
		CheckpointIDCounter checkpointIDCounter,
		CompletedCheckpointStore completedCheckpointStore,
		StateBackend checkpointStateBackend,
		Executor executor,
		SharedStateRegistryFactory sharedStateRegistryFactory) {

	// sanity checks
	checkNotNull(checkpointStateBackend);
	checkArgument(baseInterval > 0, "Checkpoint base interval must be larger than zero");
	checkArgument(checkpointTimeout >= 1, "Checkpoint timeout must be larger than zero");
	checkArgument(minPauseBetweenCheckpoints >= 0, "minPauseBetweenCheckpoints must be >= 0");
	checkArgument(maxConcurrentCheckpointAttempts >= 1, "maxConcurrentCheckpointAttempts must be >= 1");

	// max "in between duration" can be one year - this is to prevent numeric overflows
	if (minPauseBetweenCheckpoints > 365L * 24 * 60 * 60 * 1_000) {
		minPauseBetweenCheckpoints = 365L * 24 * 60 * 60 * 1_000;
	}

	// it does not make sense to schedule checkpoints more often than the desired
	// time between checkpoints
	if (baseInterval < minPauseBetweenCheckpoints) {
		baseInterval = minPauseBetweenCheckpoints;
	}

	this.job = checkNotNull(job);
	this.baseInterval = baseInterval;
	this.checkpointTimeout = checkpointTimeout;
	this.minPauseBetweenCheckpointsNanos = minPauseBetweenCheckpoints * 1_000_000;
	this.maxConcurrentCheckpointAttempts = maxConcurrentCheckpointAttempts;
	this.tasksToTrigger = checkNotNull(tasksToTrigger);
	this.tasksToWaitFor = checkNotNull(tasksToWaitFor);
	this.tasksToCommitTo = checkNotNull(tasksToCommitTo);
	this.pendingCheckpoints = new LinkedHashMap<>();
	this.checkpointIdCounter = checkNotNull(checkpointIDCounter);
	this.completedCheckpointStore = checkNotNull(completedCheckpointStore);
	this.executor = checkNotNull(executor);
	this.sharedStateRegistryFactory = checkNotNull(sharedStateRegistryFactory);
	this.sharedStateRegistry = sharedStateRegistryFactory.create(executor);

	this.recentPendingCheckpoints = new ArrayDeque<>(NUM_GHOST_CHECKPOINT_IDS);
	this.masterHooks = new HashMap<>();

	this.timer = new ScheduledThreadPoolExecutor(1,
			new DispatcherThreadFactory(Thread.currentThread().getThreadGroup(), "Checkpoint Timer"));

	// make sure the timer internally cleans up and does not hold onto stale scheduled tasks
	this.timer.setRemoveOnCancelPolicy(true);
	this.timer.setContinueExistingPeriodicTasksAfterShutdownPolicy(false);
	this.timer.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);

	this.checkpointProperties = CheckpointProperties.forCheckpoint(retentionPolicy);

	try {
		this.checkpointStorage = checkpointStateBackend.createCheckpointStorage(job);

		// Make sure the checkpoint ID enumerator is running. Possibly
		// issues a blocking call to ZooKeeper.
		checkpointIDCounter.start();
	} catch (Throwable t) {
		throw new RuntimeException("Failed to start checkpoint ID counter: " + t.getMessage(), t);
	}
}
 
Developer: axbaretto, Project: flink, Lines: 73, Source: CheckpointCoordinator.java

Example 12: getDefaultStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
@Nullable
public StateBackend getDefaultStateBackend() {
	return defaultStateBackend;
}
 
Developer: axbaretto, Project: flink, Lines: 5, Source: JobSnapshottingSettings.java

Example 13: getDefaultStateBackend

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
@Nullable
public SerializedValue<StateBackend> getDefaultStateBackend() {
	return defaultStateBackend;
}
 
Developer: axbaretto, Project: flink, Lines: 5, Source: JobCheckpointingSettings.java

Example 14: enableCheckpointing

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
public void enableCheckpointing(
		long interval,
		long checkpointTimeout,
		long minPauseBetweenCheckpoints,
		int maxConcurrentCheckpoints,
		CheckpointRetentionPolicy retentionPolicy,
		List<ExecutionJobVertex> verticesToTrigger,
		List<ExecutionJobVertex> verticesToWaitFor,
		List<ExecutionJobVertex> verticesToCommitTo,
		List<MasterTriggerRestoreHook<?>> masterHooks,
		CheckpointIDCounter checkpointIDCounter,
		CompletedCheckpointStore checkpointStore,
		StateBackend checkpointStateBackend,
		CheckpointStatsTracker statsTracker) {

	// simple sanity checks
	checkArgument(interval >= 10, "checkpoint interval must not be below 10ms");
	checkArgument(checkpointTimeout >= 10, "checkpoint timeout must not be below 10ms");

	checkState(state == JobStatus.CREATED, "Job must be in CREATED state");
	checkState(checkpointCoordinator == null, "checkpointing already enabled");

	ExecutionVertex[] tasksToTrigger = collectExecutionVertices(verticesToTrigger);
	ExecutionVertex[] tasksToWaitFor = collectExecutionVertices(verticesToWaitFor);
	ExecutionVertex[] tasksToCommitTo = collectExecutionVertices(verticesToCommitTo);

	checkpointStatsTracker = checkNotNull(statsTracker, "CheckpointStatsTracker");

	// create the coordinator that triggers and commits checkpoints and holds the state
	checkpointCoordinator = new CheckpointCoordinator(
		jobInformation.getJobId(),
		interval,
		checkpointTimeout,
		minPauseBetweenCheckpoints,
		maxConcurrentCheckpoints,
		retentionPolicy,
		tasksToTrigger,
		tasksToWaitFor,
		tasksToCommitTo,
		checkpointIDCounter,
		checkpointStore,
		checkpointStateBackend,
		ioExecutor,
		SharedStateRegistry.DEFAULT_FACTORY);

	// register the master hooks on the checkpoint coordinator
	for (MasterTriggerRestoreHook<?> hook : masterHooks) {
		if (!checkpointCoordinator.addMasterHook(hook)) {
			LOG.warn("Trying to register multiple checkpoint hooks with the name: {}", hook.getIdentifier());
		}
	}

	checkpointCoordinator.setCheckpointStatsTracker(checkpointStatsTracker);

	// an interval of Long.MAX_VALUE indicates that periodic checkpointing is disabled;
	// the CheckpointActivatorDeactivator should only be created if the interval is not the max value
	if (interval != Long.MAX_VALUE) {
		// the periodic checkpoint scheduler is activated and deactivated as a result of
		// job status changes (running -> on, all other states -> off)
		registerJobStatusListener(checkpointCoordinator.createActivatorDeactivator());
	}
}
 
Developer: axbaretto, Project: flink, Lines: 63, Source: ExecutionGraph.java

Example 15: testDeserializationOfUserCodeWithUserClassLoader

import org.apache.flink.runtime.state.StateBackend; // import the required package/class
@Test
public void testDeserializationOfUserCodeWithUserClassLoader() throws Exception {
	final ClassLoader classLoader = new URLClassLoader(new URL[0], getClass().getClassLoader());
	final Serializable outOfClassPath = CommonTestUtils.createObjectForClassNotInClassPath(classLoader);

	final MasterTriggerRestoreHook.Factory[] hooks = {
			new TestFactory(outOfClassPath) };
	final SerializedValue<MasterTriggerRestoreHook.Factory[]> serHooks = new SerializedValue<>(hooks);

	final JobCheckpointingSettings checkpointingSettings = new JobCheckpointingSettings(
			Collections.<JobVertexID>emptyList(),
			Collections.<JobVertexID>emptyList(),
			Collections.<JobVertexID>emptyList(),
			new CheckpointCoordinatorConfiguration(
				1000L,
				10000L,
				0L,
				1,
				CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION,
				true),
			new SerializedValue<StateBackend>(new CustomStateBackend(outOfClassPath)),
			serHooks);

	final JobGraph jobGraph = new JobGraph(new JobID(), "test job");
	jobGraph.setSnapshotSettings(checkpointingSettings);

	// to serialize/deserialize the job graph to see if the behavior is correct under
	// distributed execution
	final JobGraph copy = CommonTestUtils.createCopySerializable(jobGraph);

	final ExecutionGraph eg = ExecutionGraphBuilder.buildGraph(
		null,
		copy,
		new Configuration(),
		TestingUtils.defaultExecutor(),
		TestingUtils.defaultExecutor(),
		mock(SlotProvider.class),
		classLoader,
		new StandaloneCheckpointRecoveryFactory(),
		Time.seconds(10),
		new NoRestartStrategy(),
		new UnregisteredMetricsGroup(),
		10,
		VoidBlobWriter.getInstance(),
		log);

	assertEquals(1, eg.getCheckpointCoordinator().getNumberOfRegisteredMasterHooks());
	assertTrue(jobGraph.getCheckpointingSettings().getDefaultStateBackend().deserializeValue(classLoader) instanceof CustomStateBackend);
}
 
Developer: axbaretto, Project: flink, Lines: 50, Source: CheckpointSettingsSerializableTest.java


Note: the org.apache.flink.runtime.state.StateBackend class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their authors; copyright of the source code remains with the original authors, and distribution or use should follow the license of the corresponding project. Do not reproduce without permission.