

Java Configuration Class Code Examples

This article collects typical usage examples of the Java class org.apache.flink.configuration.Configuration. If you are unsure how to use the Configuration class, or are looking for concrete, working examples, the curated code samples below should help.


The Configuration class belongs to the org.apache.flink.configuration package. The sections below present 15 code examples of the class, ordered by popularity.
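Before diving into the examples: Flink's Configuration is, at its core, a typed key-value store — setters record a value under a string key, and getters return a typed value or a caller-supplied default. The stand-in below (a hypothetical MiniConfig class using only the JDK, not the Flink API) sketches that pattern for readers who want to see the idea in isolation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in illustrating the typed key/value pattern of
// org.apache.flink.configuration.Configuration -- not the Flink class itself.
public class MiniConfig {
    private final Map<String, String> data = new HashMap<>();

    public void setString(String key, String value) {
        data.put(key, value);
    }

    public String getString(String key, String defaultValue) {
        return data.getOrDefault(key, defaultValue);
    }

    public void setInteger(String key, int value) {
        data.put(key, Integer.toString(value));
    }

    public int getInteger(String key, int defaultValue) {
        String v = data.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }
}
```

The real Flink class adds typed ConfigOption keys, serialization, and cloning on top of this basic shape.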

Example 1: open

import org.apache.flink.configuration.Configuration; // required package/class import
@Override
public void open(Configuration config) {
  ValueStateDescriptor<AbstractStatisticsWrapper<AisMessage>> descriptor =
      new ValueStateDescriptor<AbstractStatisticsWrapper<AisMessage>>("trajectoryStatistics",
          TypeInformation.of(new TypeHint<AbstractStatisticsWrapper<AisMessage>>() {}));

  statisticsOfTrajectory = getRuntimeContext().getState(descriptor);

}
 
Developer: ehabqadah, Project: in-situ-processing-datAcron, Lines: 10, Source: AisStreamEnricher.java

Example 2: cleanUp

import org.apache.flink.configuration.Configuration; // required package/class import
@AfterClass
public static void cleanUp() throws IOException {
	if (!skipTest) {
		// initialize configuration with valid credentials
		final Configuration conf = new Configuration();
		conf.setString("s3.access.key", ACCESS_KEY);
		conf.setString("s3.secret.key", SECRET_KEY);
		FileSystem.initialize(conf);

		final Path directory = new Path("s3://" + BUCKET + '/' + TEST_DATA_DIR);
		final FileSystem fs = directory.getFileSystem();

		// clean up
		fs.delete(directory, true);

		// now directory must be gone
		assertFalse(fs.exists(directory));

		// reset configuration
		FileSystem.initialize(new Configuration());
	}
}
 
Developer: axbaretto, Project: flink, Lines: 23, Source: HadoopS3FileSystemITCase.java

Example 3: deploy

import org.apache.flink.configuration.Configuration; // required package/class import
@Override
public YarnClusterClient deploy() {
  ApplicationSubmissionContext context = Records.newRecord(ApplicationSubmissionContext.class);
  context.setApplicationId(job.yarnAppId());
  ApplicationReport report;
  try {
    report = startAppMaster(context);

    Configuration conf = getFlinkConfiguration();
    conf.setString(JobManagerOptions.ADDRESS.key(), report.getHost());
    conf.setInteger(JobManagerOptions.PORT.key(), report.getRpcPort());

    return createYarnClusterClient(this, yarnClient, report, conf, false);
  } catch (Exception e) {
    throw new RuntimeException(e);
  }
}
 
Developer: uber, Project: AthenaX, Lines: 18, Source: AthenaXYarnClusterDescriptor.java

Example 4: testDeployerWithIsolatedConfiguration

import org.apache.flink.configuration.Configuration; // required package/class import
@Test
public void testDeployerWithIsolatedConfiguration() throws Exception {
  YarnClusterConfiguration clusterConf = mock(YarnClusterConfiguration.class);
  doReturn(new YarnConfiguration()).when(clusterConf).conf();
  ScheduledExecutorService executor = mock(ScheduledExecutorService.class);
  Configuration flinkConf = new Configuration();
  YarnClient client = mock(YarnClient.class);
  JobDeployer deploy = new JobDeployer(clusterConf, client, executor, flinkConf);
  AthenaXYarnClusterDescriptor desc = mock(AthenaXYarnClusterDescriptor.class);

  YarnClusterClient clusterClient = mock(YarnClusterClient.class);
  doReturn(clusterClient).when(desc).deploy();

  ActorGateway actorGateway = mock(ActorGateway.class);
  doReturn(actorGateway).when(clusterClient).getJobManagerGateway();
  doReturn(Future$.MODULE$.successful(null)).when(actorGateway).ask(any(), any());

  JobGraph jobGraph = mock(JobGraph.class);
  doReturn(JobID.generate()).when(jobGraph).getJobID();
  deploy.start(desc, jobGraph);

  ArgumentCaptor<Configuration> args = ArgumentCaptor.forClass(Configuration.class);
  verify(desc).setFlinkConfiguration(args.capture());
  assertNotSame(flinkConf, args.getValue());
}
 
Developer: uber, Project: AthenaX, Lines: 26, Source: JobDeployerTest.java
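The key assertion in Example 4 is assertNotSame: the deployer must hand the cluster its own copy of the configuration so that later mutations by the caller cannot leak in. The sketch below shows that defensive-copy idea with plain JDK maps standing in for Flink's Configuration (the IsolatedDeploy class is hypothetical, for illustration only).

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the isolation property JobDeployerTest checks:
// the deployer copies the caller's configuration instead of storing its reference.
public class IsolatedDeploy {
    public static Map<String, String> isolate(Map<String, String> callerConf) {
        // Copy the entries; never keep the caller's map reference.
        return new HashMap<>(callerConf);
    }
}
```

With this pattern, changes the caller makes after deployment have no effect on the configuration the cluster received.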

Example 5: open

import org.apache.flink.configuration.Configuration; // required package/class import
@Override
public void open(Configuration parameters) throws Exception {
    super.open(parameters);

    state = new State<>();

    processingTimeService =
            ((StreamingRuntimeContext) getRuntimeContext()).getProcessingTimeService();

    long currentProcessingTime = processingTimeService.getCurrentProcessingTime();

    processingTimeService.registerTimer(currentProcessingTime + inactiveBucketCheckInterval, this);

    this.clock = new Clock() {
        @Override
        public long currentTimeMillis() {
            return processingTimeService.getCurrentProcessingTime();
        }
    };
}
 
Developer: breakEval13, Project: rocketmq-flink-plugin, Lines: 21, Source: TODBucketingSink.java

Example 6: initFileSystem

import org.apache.flink.configuration.Configuration; // required package/class import
/**
 * Create a file system with the user-defined {@code HDFS} configuration.
 *
 * @throws IOException
 */
private void initFileSystem() throws IOException {
    if (fs != null) {
        return;
    }
    org.apache.hadoop.conf.Configuration hadoopConf = HadoopFileSystem.getHadoopConfiguration();
    if (fsConfig != null) {
        String disableCacheName = String.format("fs.%s.impl.disable.cache", new Path(basePath).toUri().getScheme());
        hadoopConf.setBoolean(disableCacheName, true);
        for (String key : fsConfig.keySet()) {
            hadoopConf.set(key, fsConfig.getString(key, null));
        }
    }

    fs = new Path(basePath).getFileSystem(hadoopConf);
}
 
Developer: breakEval13, Project: rocketmq-flink-plugin, Lines: 21, Source: TODBucketingSink.java

Example 7: getSink

import org.apache.flink.configuration.Configuration; // required package/class import
/**
 * Generic sink that writes a topic's messages to HDFS.
 * @param topic the topic name
 * @return the configured HDFS sink
 */
public static SinkFunction<String> getSink(String topic) {
    Configuration configuration = new Configuration();
    // TODO: add Hadoop configuration entries
    configuration.setString("dfs.namenode.name.dir", "file:///home/hadmin/data/hadoop/hdfs/name");
    configuration.setString("dfs.nameservices", "ns");
    configuration.setString("dfs.ha.namenodes.ns", "nn1,nn2");
    configuration.setString("dfs.namenode.rpc-address.ns.nn1", "10.11.0.193:9000");
    configuration.setString("dfs.namenode.rpc-address.ns.nn2", "10.11.0.194:9000");
    configuration.setString("dfs.namenode.shared.edits.dir", "qjournal://10.11.0.193:8485;10.11.0.194:8485;10.11.0.195:8485/ns");
    configuration.setString("hadoop.tmp.dir", "/home/hadmin/data/hadoop/tmp");
    configuration.setString("fs.defaultFS", "hdfs://ns");
    configuration.setString("dfs.journalnode.edits.dir", "/home/hadmin/data/hadoop/journal");
    configuration.setString("ha.zookeeper.quorum", "10.11.0.193:2181,10.11.0.194:2181,10.11.0.195:2181");
    configuration.setString("mapreduce.input.fileinputformat.split.minsize", "10");

    TODBucketingSink<String> sink = new TODBucketingSink<>("/xml/" + topic + "/");
    sink.setBucketer(new DateTimeBucketer<>("yyyy/MM/dd/HH"));
    sink.setWriter(new StringWriter<>());
    sink.setPendingPrefix("source");
    sink.setPendingSuffix(".txt");
    sink.setFSConfig(configuration);
    // set the async flush timeout for the Flink bucketing sink
    sink.setAsyncTimeout(60000L);
    return sink;
}
 
Developer: breakEval13, Project: rocketmq-flink-plugin, Lines: 31, Source: HdfsSink.java

Example 8: open

import org.apache.flink.configuration.Configuration; // required package/class import
/**
 * Overrides {@code open} from the RichFunction family.
 * Retrieves the state and registers it as queryable.
 */
@Override
@SuppressWarnings("unchecked")
public void open(Configuration config) {
    ValueStateDescriptor descriptor =
            new ValueStateDescriptor(
                    "consumption",
                    LampConsumption.class); // the state's type; no default value is set
    // register as queryable before the state is created -- registering
    // afterwards has no effect
    descriptor.setQueryable("consumption-list-api");
    this.consumption = getRuntimeContext().getState(descriptor);
}
 
Developer: ProjectEmber, Project: project-ember, Lines: 15, Source: EmberConsumptionMean.java

Example 9: setup

import org.apache.flink.configuration.Configuration; // required package/class import
public static void setup(int proxyPortRangeStart, int serverPortRangeStart) {
	try {
		Configuration config = new Configuration();
		config.setLong(TaskManagerOptions.MANAGED_MEMORY_SIZE, 4L);
		config.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, NUM_TMS);
		config.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, NUM_SLOTS_PER_TM);
		config.setInteger(QueryableStateOptions.CLIENT_NETWORK_THREADS, 1);
		config.setBoolean(QueryableStateOptions.SERVER_ENABLE, true);
		config.setInteger(QueryableStateOptions.SERVER_NETWORK_THREADS, 1);
		config.setString(QueryableStateOptions.PROXY_PORT_RANGE, proxyPortRangeStart + "-" + (proxyPortRangeStart + NUM_TMS));
		config.setString(QueryableStateOptions.SERVER_PORT_RANGE, serverPortRangeStart + "-" + (serverPortRangeStart + NUM_TMS));

		cluster = new TestingCluster(config, false);
		cluster.start(true);

		client = new QueryableStateClient("localhost", proxyPortRangeStart);

		// verify that we are not in HA mode
		Assert.assertTrue(cluster.haMode() == HighAvailabilityMode.NONE);

	} catch (Exception e) {
		e.printStackTrace();
		fail(e.getMessage());
	}
}
 
Developer: axbaretto, Project: flink, Lines: 26, Source: NonHAAbstractQueryableStateTestBase.java
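Example 9 builds the proxy and server port ranges as strings of the form "start-end", sized so each of the NUM_TMS task managers can claim a port. A minimal sketch of that string construction (PortRanges and its NUM_TMS constant are stand-ins, not Flink names):

```java
// Hypothetical helper mirroring the port-range strings built in setup():
// Flink accepts queryable-state port ranges written as "start-end".
public class PortRanges {
    static final int NUM_TMS = 2; // assumed cluster size, as in the test base

    static String range(int start) {
        return start + "-" + (start + NUM_TMS);
    }
}
```

Passing a range rather than a single port lets each task manager's server bind the next free port in the interval.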

Example 10: createHDFS

import org.apache.flink.configuration.Configuration; // required package/class import
@Before
public void createHDFS() {
	try {
		baseDir = new File("./target/hdfs/hdfsTesting").getAbsoluteFile();
		FileUtil.fullyDelete(baseDir);

		org.apache.hadoop.conf.Configuration hdConf = new org.apache.hadoop.conf.Configuration();
		hdConf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
		hdConf.set("dfs.block.size", String.valueOf(1048576)); // this is the minimum we can set.

		MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(hdConf);
		hdfsCluster = builder.build();

		hdfsURI = "hdfs://" + hdfsCluster.getURI().getHost() + ":" + hdfsCluster.getNameNodePort() + "/";
		hdfs = new org.apache.hadoop.fs.Path(hdfsURI).getFileSystem(hdConf);

	} catch (Throwable e) {
		e.printStackTrace();
		Assert.fail("Test failed " + e.getMessage());
	}
}
 
Developer: axbaretto, Project: flink, Lines: 22, Source: ContinuousFileProcessingITCase.java

Example 11: testPrintSinkStdErr

import org.apache.flink.configuration.Configuration; // required package/class import
@Test
public void testPrintSinkStdErr() throws Exception {
	ByteArrayOutputStream baos = new ByteArrayOutputStream();
	PrintStream stream = new PrintStream(baos);
	System.setErr(stream); // capture stderr, since the sink prints to System.err

	final StreamingRuntimeContext ctx = Mockito.mock(StreamingRuntimeContext.class);

	PrintSinkFunction<String> printSink = new PrintSinkFunction<>();
	printSink.setRuntimeContext(ctx);
	try {
		printSink.open(new Configuration());
	} catch (Exception e) {
		Assert.fail();
	}
	printSink.setTargetToStandardErr();
	printSink.invoke("hello world!", SinkContextUtil.forTimestamp(0));

	assertEquals("Print to System.err", printSink.toString());
	assertEquals("hello world!" + line, baos.toString());

	printSink.close();
	stream.close();
}
 
Developer: axbaretto, Project: flink, Lines: 25, Source: PrintSinkFunctionTest.java

Example 12: testOperatorNameTruncation

import org.apache.flink.configuration.Configuration; // required package/class import
@Test
public void testOperatorNameTruncation() {
	Configuration cfg = new Configuration();
	cfg.setString(MetricOptions.SCOPE_NAMING_OPERATOR, ScopeFormat.SCOPE_OPERATOR_NAME);
	MetricRegistryImpl registry = new MetricRegistryImpl(MetricRegistryConfiguration.fromConfiguration(cfg));
	TaskManagerMetricGroup tm = new TaskManagerMetricGroup(registry, "host", "id");
	TaskManagerJobMetricGroup job = new TaskManagerJobMetricGroup(registry, tm, new JobID(), "jobname");
	TaskMetricGroup taskMetricGroup = new TaskMetricGroup(registry, job, new JobVertexID(), new AbstractID(), "task", 0, 0);

	String originalName = new String(new char[100]).replace("\0", "-");
	OperatorMetricGroup operatorMetricGroup = taskMetricGroup.addOperator(originalName);

	String storedName = operatorMetricGroup.getScopeComponents()[0];
	Assert.assertEquals(TaskMetricGroup.METRICS_OPERATOR_NAME_MAX_LENGTH, storedName.length());
	Assert.assertEquals(originalName.substring(0, TaskMetricGroup.METRICS_OPERATOR_NAME_MAX_LENGTH), storedName);
}
 
Developer: axbaretto, Project: flink, Lines: 17, Source: TaskMetricGroupTest.java
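The truncation Example 12 verifies is simple: operator names longer than a maximum are cut to that length, shorter names pass through unchanged. The sketch below reproduces that logic; 80 is an assumed value for METRICS_OPERATOR_NAME_MAX_LENGTH, chosen for illustration rather than taken from the Flink source.

```java
// Hypothetical illustration of the operator-name truncation the test checks.
public class NameTruncation {
    static final int MAX_LENGTH = 80; // assumed cap, stand-in for Flink's constant

    static String truncate(String name) {
        // Keep short names intact; cut long ones to the first MAX_LENGTH chars.
        return name.length() <= MAX_LENGTH ? name : name.substring(0, MAX_LENGTH);
    }
}
```

Note the test's trick for building a long name: new String(new char[100]).replace("\0", "-") yields a string of 100 dashes.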

Example 13: hasNewNetworkBufConfMixed

import org.apache.flink.configuration.Configuration; // required package/class import
/**
 * Verifies that {@link TaskManagerServicesConfiguration#hasNewNetworkBufConf(Configuration)}
 * returns the correct result for mixed old/new configurations.
 */
@SuppressWarnings("deprecation")
@Test
public void hasNewNetworkBufConfMixed() throws Exception {
	Configuration config = new Configuration();
	assertTrue(TaskManagerServicesConfiguration.hasNewNetworkBufConf(config));

	config.setInteger(TaskManagerOptions.NETWORK_NUM_BUFFERS, 1);
	assertFalse(TaskManagerServicesConfiguration.hasNewNetworkBufConf(config));

	// old + 1 new parameter = new:
	Configuration config1 = config.clone();
	config1.setFloat(TaskManagerOptions.NETWORK_BUFFERS_MEMORY_FRACTION, 0.1f);
	assertTrue(TaskManagerServicesConfiguration.hasNewNetworkBufConf(config1));

	config1 = config.clone();
	config1.setLong(TaskManagerOptions.NETWORK_BUFFERS_MEMORY_MIN, 1024);
	assertTrue(TaskManagerServicesConfiguration.hasNewNetworkBufConf(config1));

	config1 = config.clone();
	config1.setLong(TaskManagerOptions.NETWORK_BUFFERS_MEMORY_MAX, 1024);
	assertTrue(TaskManagerServicesConfiguration.hasNewNetworkBufConf(config1));
}
 
Developer: axbaretto, Project: flink, Lines: 27, Source: TaskManagerServicesConfigurationTest.java

Example 14: MesosFlinkResourceManager

import org.apache.flink.configuration.Configuration; // required package/class import
public MesosFlinkResourceManager(
	Configuration flinkConfig,
	MesosConfiguration mesosConfig,
	MesosWorkerStore workerStore,
	LeaderRetrievalService leaderRetrievalService,
	MesosTaskManagerParameters taskManagerParameters,
	ContainerSpecification taskManagerContainerSpec,
	MesosArtifactResolver artifactResolver,
	int maxFailedTasks,
	int numInitialTaskManagers) {

	super(numInitialTaskManagers, flinkConfig, leaderRetrievalService);

	this.mesosConfig = requireNonNull(mesosConfig);

	this.workerStore = requireNonNull(workerStore);
	this.artifactResolver = requireNonNull(artifactResolver);

	this.taskManagerParameters = requireNonNull(taskManagerParameters);
	this.taskManagerContainerSpec = requireNonNull(taskManagerContainerSpec);
	this.maxFailedTasks = maxFailedTasks;

	this.workersInNew = new HashMap<>();
	this.workersInLaunch = new HashMap<>();
	this.workersBeingReturned = new HashMap<>();
}
 
Developer: axbaretto, Project: flink, Lines: 27, Source: MesosFlinkResourceManager.java

Example 15: testConfigureMemoryStateBackendMixed

import org.apache.flink.configuration.Configuration; // required package/class import
/**
 * Validates taking the application-defined memory state backend and adding additional
 * parameters from the cluster configuration, but giving precedence to application-defined
 * parameters over configuration-defined parameters.
 */
@Test
public void testConfigureMemoryStateBackendMixed() throws Exception {
	final String appCheckpointDir = new Path(tmp.newFolder().toURI()).toString();
	final String checkpointDir = new Path(tmp.newFolder().toURI()).toString();
	final String savepointDir = new Path(tmp.newFolder().toURI()).toString();

	final Path expectedCheckpointPath = new Path(appCheckpointDir);
	final Path expectedSavepointPath = new Path(savepointDir);

	final MemoryStateBackend backend = new MemoryStateBackend(appCheckpointDir, null);

	final Configuration config = new Configuration();
	config.setString(backendKey, "filesystem"); // check that this is not accidentally picked up
	config.setString(CheckpointingOptions.CHECKPOINTS_DIRECTORY, checkpointDir); // this parameter should not be picked up
	config.setString(CheckpointingOptions.SAVEPOINT_DIRECTORY, savepointDir);

	StateBackend loadedBackend = StateBackendLoader.fromApplicationOrConfigOrDefault(backend, config, cl, null);
	assertTrue(loadedBackend instanceof MemoryStateBackend);

	final MemoryStateBackend memBackend = (MemoryStateBackend) loadedBackend;
	assertEquals(expectedCheckpointPath, memBackend.getCheckpointPath());
	assertEquals(expectedSavepointPath, memBackend.getSavepointPath());
}
 
Developer: axbaretto, Project: flink, Lines: 29, Source: StateBackendLoadingTest.java
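Example 15 validates a precedence rule: an application-defined setting wins, the cluster configuration only fills gaps, and a built-in default applies last. That resolution order can be sketched in isolation (the Precedence class and its string arguments are illustrative stand-ins, not Flink APIs):

```java
// Hypothetical illustration of the resolution order the test validates:
// application value > cluster configuration value > built-in default.
public class Precedence {
    static String resolve(String appValue, String configValue, String defaultValue) {
        if (appValue != null) {
            return appValue;       // application-defined parameter wins
        }
        if (configValue != null) {
            return configValue;    // then the cluster configuration
        }
        return defaultValue;       // finally the built-in default
    }
}
```

In the test above, the checkpoint directory resolves to the application's appCheckpointDir while the savepoint directory, unset by the application, falls through to the configured savepointDir.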


Note: the org.apache.flink.configuration.Configuration examples above were compiled from open-source projects hosted on GitHub and similar platforms. Copyright of each snippet remains with its original author; consult the corresponding project's license before reusing the code, and do not redistribute without permission.