

Java StreamId Class Code Examples

This article compiles typical usage examples of the Java class io.pravega.connectors.flink.util.StreamId. If you are wondering what StreamId does, how to use it, or what real-world usage looks like, the curated examples below should help.


The StreamId class belongs to the io.pravega.connectors.flink.util package. Fourteen code examples of the class are shown below, sorted by popularity by default.
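As a quick orientation, here is a minimal sketch of how StreamId is typically constructed and consumed, assembled from the usage patterns in the examples below. The scope and stream names are placeholders, and the import path for FlinkPravegaParams is an assumption (same util package as StreamId), not taken from any one project:

import io.pravega.connectors.flink.util.FlinkPravegaParams; // assumed path, same util package
import io.pravega.connectors.flink.util.StreamId;
import org.apache.flink.api.java.utils.ParameterTool;

// A StreamId pairs a Pravega scope with a stream name (placeholder values).
StreamId streamId = new StreamId("examples", "turbineHeatTest");
String scope = streamId.getScope(); // "examples"
String name = streamId.getName();   // "turbineHeatTest"

// A StreamId can also be resolved from command-line parameters, with a
// "scope/stream" default, as Example 4 does:
FlinkPravegaParams helper = new FlinkPravegaParams(ParameterTool.fromArgs(args));
StreamId fromParam = helper.createStreamFromParam("input", "examples/turbineHeatTest");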

Example 1: publishUsingFlinkConnector

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
private void publishUsingFlinkConnector(AppConfiguration appConfiguration) throws Exception {
	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

	StreamId streamId = getStreamId();
	FlinkPravegaWriter<Event> writer = pravega.newWriter(streamId, Event.class, new EventRouter());

	if(appConfiguration.getProducer().isControlledEnv()) {
		if(!(env instanceof LocalStreamEnvironment)) {
			throw new Exception("Use a local Flink environment or set controlledEnv to false in app.json.");
		}
		//setting this to a single instance since the controlled run allows user input to trigger error events
		env.setParallelism(1);
		long latency = appConfiguration.getProducer().getLatencyInMilliSec();
		int capacity = appConfiguration.getProducer().getCapacity();
		ControlledSourceContextProducer controlledSourceContextProducer = new ControlledSourceContextProducer(capacity, latency);
		env.addSource(controlledSourceContextProducer).name("EventSource").addSink(writer).name("Pravega-" + streamId.getName());
	} else {
		SourceContextProducer sourceContextProducer = new SourceContextProducer(appConfiguration);
		env.addSource(sourceContextProducer).name("EventSource").addSink(writer).name("Pravega-" + streamId.getName());
	}

	env.execute(appConfiguration.getName()+"-producer");
}
 
Project: pravega/nautilus-samples · Source: PravegaEventPublisher.java

Example 2: FlinkPravegaTableSource

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
/**
 * Creates a Pravega {@link StreamTableSource}.
 *
 * <p>The {@code deserializationSchemaFactory} supplies a {@link DeserializationSchema}
 * based on the result type information.
 *
 * @param controllerURI                The Pravega controller endpoint address.
 * @param stream                       The stream to read events from.
 * @param startTime                    The start time from which to read events.
 * @param deserializationSchemaFactory A factory for the deserialization schema to use for stream events.
 * @param typeInfo                     The type information describing the result type.
 */
public FlinkPravegaTableSource(
        final URI controllerURI,
        final StreamId stream,
        final long startTime,
        Function<TypeInformation<Row>, DeserializationSchema<Row>> deserializationSchemaFactory,
        TypeInformation<Row> typeInfo) {
    this.controllerURI = controllerURI;
    this.stream = stream;
    this.startTime = startTime;
    checkNotNull(deserializationSchemaFactory, "Deserialization schema factory");
    this.typeInfo = checkNotNull(typeInfo, "Type information");
    this.deserializationSchema = deserializationSchemaFactory.apply(typeInfo);
}
 
Project: pravega/flink-connectors · Source: FlinkPravegaTableSource.java
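A condensed usage sketch of this constructor, mirroring Example 9 below (the controller URI and stream are placeholders; typeInfo and tableEnv are assumed to already be in scope):

FlinkPravegaTableSource source = new FlinkPravegaTableSource(
        URI.create("tcp://localhost:9090"),  // placeholder controller endpoint
        new StreamId("myScope", "myStream"), // placeholder stream to read from
        0L,                                  // start time: read from the beginning
        JsonRowDeserializationSchema::new,   // TypeInformation<Row> -> DeserializationSchema<Row>
        typeInfo);                           // TypeInformation<Row> describing the result rows
tableEnv.registerTableSource("samples", source);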

Example 3: publishUsingFlinkConnector

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
private void publishUsingFlinkConnector(AppConfiguration appConfiguration) throws Exception {

		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		StreamId streamId = getStreamId();
		FlinkPravegaWriter<Event> writer = pravega.newWriter(streamId, Event.class, new EventRouter());

		int parallelism = appConfiguration.getPipeline().getParallelism();

		if(appConfiguration.getProducer().isControlledEnv()) {
			if(!(env instanceof LocalStreamEnvironment)) {
				throw new Exception("Use a local Flink environment or set controlledEnv to false in app.json.");
			}
			//setting this to a single instance since the controlled run allows user input to trigger error events
			env.setParallelism(1);
			long latency = appConfiguration.getProducer().getLatencyInMilliSec();
			int capacity = appConfiguration.getProducer().getCapacity();
			ControlledSourceContextProducer controlledSourceContextProducer = new ControlledSourceContextProducer(capacity, latency);
			env.addSource(controlledSourceContextProducer).name("EventSource").addSink(writer).name("Pravega-" + streamId.getName());
		} else {
			env.setParallelism(parallelism);
			SourceContextProducer sourceContextProducer = new SourceContextProducer(appConfiguration);
			env.addSource(sourceContextProducer).name("EventSource").addSink(writer).name("Pravega-" + streamId.getName());
		}

		env.execute(appConfiguration.getName()+"-producer");

	}
 
Project: pravega/pravega-samples · Source: PravegaEventPublisher.java

Example 4: main

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public static void main(String[] args) throws Exception {
    ParameterTool params = ParameterTool.fromArgs(args);
    FlinkPravegaParams helper = new FlinkPravegaParams(params);
    StreamId stream = helper.createStreamFromParam("input", "examples/turbineHeatTest");

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    // 1. read and decode the sensor events from a Pravega stream
    long startTime = params.getLong("start", 0L);
    FlinkPravegaReader<String> reader = helper.newReader(stream, startTime, String.class);
    DataStream<SensorEvent> events = env.addSource(reader, "input").map(new SensorMapper()).name("events");

    // 2. extract timestamp information to support 'event-time' processing
    SingleOutputStreamOperator<SensorEvent> timestamped = events.assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<SensorEvent>(Time.seconds(10)) {
        @Override
        public long extractTimestamp(SensorEvent element) {
            return element.getTimestamp();
        }
    });
    timestamped.print();

    // 3. summarize the temperature data for each sensor
    SingleOutputStreamOperator<SensorAggregate> summaries = timestamped
            .keyBy("sensorId")
            .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(8)))
            .fold(null, new SensorAggregator()).name("summaries");

    // 4. save to HDFS and print to stdout.  Refer to the TaskManager's 'Stdout' view in the Flink UI.
    summaries.print().name("stdout");
    if (params.has("output")) {
        summaries.writeAsCsv(params.getRequired("output"), FileSystem.WriteMode.OVERWRITE);
    }

    env.execute("TurbineHeatProcessor_" + stream);
}
 
Project: pravega/pravega-samples · Source: TurbineHeatProcessor.java

Example 5: publishData

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public void publishData(final StreamId streamId, final int numElements) {
	final EventStreamWriter<Integer> eventWriter = createWriter(streamId.getName(), streamId.getScope());
	for (int i=1; i<=numElements; i++) {
		eventWriter.writeEvent(i);
	}
	eventWriter.close();
}
 
Project: pravega/nautilus-samples · Source: StreamUtils.java

Example 6: newExactlyOnceWriter

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public <T extends Serializable> FlinkPravegaWriter<T> newExactlyOnceWriter(final StreamId stream,
																		   final SerializationSchema<T> serializationSchema,
																		   final PravegaEventRouter<T> router) {
	FlinkPravegaWriter<T> writer = new FlinkPravegaWriter<>(getControllerUri(), stream.getScope(), stream.getName(), serializationSchema, router);
	writer.setPravegaWriterMode(PravegaWriterMode.EXACTLY_ONCE);
	return writer;
}
 
Project: pravega/nautilus-samples · Source: StreamUtils.java

Example 7: exactlyOnceWriteSimulator

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public void exactlyOnceWriteSimulator(final StreamId outStreamId, final StreamUtils streamUtils, int numElements) throws Exception {

		final int checkpointInterval = 100;

		final int restartAttempts = 1;
		final long delayBetweenAttempts = 0L;

		//30 sec timeout for all
		final long txTimeout = 30 * 1000;
		final long txTimeoutMax = 30 * 1000;
		final long txTimeoutGracePeriod = 30 * 1000;

		final String jobName = "ExactlyOnceSimulator";

		final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(parallelism);

		env.enableCheckpointing(checkpointInterval);
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(restartAttempts, delayBetweenAttempts));

		// Pravega Writer
		FlinkPravegaWriter<Integer> pravegaExactlyOnceWriter = streamUtils.newExactlyOnceWriter(outStreamId,
				Integer.class, new IdentityRouter<>());

		env
				.addSource(new IntegerCounterSourceGenerator(numElements))
				.map(new FailingIdentityMapper<>(numElements / parallelism / 2))
				.rebalance()
				.addSink(pravegaExactlyOnceWriter);

		env.execute(jobName);
	}
 
Project: pravega/nautilus-samples · Source: EventCounterApp.java

Example 8: standardReadWriteSimulator

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public void standardReadWriteSimulator(final StreamId inStreamId, final StreamId outStreamId, final StreamUtils streamUtils, int numElements) throws Exception {

		final int checkpointInterval = 100;
		final int taskFailureRestartAttempts = 1;
		final long delayBetweenRestartAttempts = 0L;
		final long startTime = 0L;
		final String jobName = "standardReadWriteSimulator";

		final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(parallelism);
		env.enableCheckpointing(checkpointInterval);
		env.setRestartStrategy(RestartStrategies.fixedDelayRestart(taskFailureRestartAttempts, delayBetweenRestartAttempts));

		// the Pravega reader
		final FlinkPravegaReader<Integer> pravegaSource = streamUtils.getFlinkPravegaParams().newReader(inStreamId, startTime, Integer.class);

		// Pravega Writer
		FlinkPravegaWriter<Integer> pravegaWriter = streamUtils.getFlinkPravegaParams().newWriter(outStreamId, Integer.class, new IdentityRouter<>());
		pravegaWriter.setPravegaWriterMode(PravegaWriterMode.ATLEAST_ONCE);

		DataStream<Integer> stream = env.addSource(pravegaSource).map(new IdentityMapper<>());

		stream.addSink(pravegaWriter);

		stream.addSink(new IntSequenceExactlyOnceValidator(numElements));

		env.execute(jobName);

	}
 
Project: pravega/nautilus-samples · Source: EventCounterApp.java

Example 9: testEndToEnd

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
/**
 * Tests the end-to-end functionality of table source & sink.
 *
 * <p>This test uses the {@link FlinkPravegaTableSink} to emit an in-memory table
 * containing sample data as a Pravega stream of 'append' events (i.e. as a changelog).
 * The test then uses the {@link FlinkPravegaTableSource} to absorb the changelog as a new table.
 *
 * <p>Flink's ability to convert POJOs (e.g. {@link SampleRecord}) to/from table rows is also demonstrated.
 *
 * <p>Because the source is unbounded, the test must throw an exception to deliberately terminate the job.
 *
 * @throws Exception on exception
 */
@Test
public void testEndToEnd() throws Exception {

    // create a Pravega stream for test purposes
    StreamId stream = new StreamId(setupUtils.getScope(), "FlinkTableITCase.testEndToEnd");
    this.setupUtils.createTestStream(stream.getName(), 1);

    // create a Flink Table environment
    StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment().setParallelism(1);
    StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

    // define a table of sample data from a collection of POJOs.  Schema:
    // root
    //  |-- category: String
    //  |-- value: Integer
    Table table = tableEnv.fromDataStream(env.fromCollection(SAMPLES));

    // write the table to a Pravega stream (using the 'category' column as a routing key)
    FlinkPravegaTableSink sink = new FlinkPravegaTableSink(
            this.setupUtils.getControllerUri(), stream, JsonRowSerializationSchema::new, "category");
    table.writeToSink(sink);

    // register the Pravega stream as a table called 'samples'
    FlinkPravegaTableSource source = new FlinkPravegaTableSource(
            this.setupUtils.getControllerUri(), stream, 0, JsonRowDeserializationSchema::new, SAMPLE_SCHEMA);
    tableEnv.registerTableSource("samples", source);

    // select some sample data from the Pravega-backed table, as a view
    Table view = tableEnv.sql("SELECT * FROM samples WHERE category IN ('A','B')");

    // write the view to a test sink that verifies the data for test purposes
    tableEnv.toAppendStream(view, SampleRecord.class).addSink(new TestSink(SAMPLES));

    // execute the topology
    try {
        env.execute();
        Assert.fail("expected an exception");
    } catch (JobExecutionException e) {
        // we expect the job to fail because the test sink throws a deliberate exception.
        Assert.assertTrue(e.getCause() instanceof TestCompletionException);
    }
}
 
Project: pravega/flink-connectors · Source: FlinkTableITCase.java

Example 10: getStreamId

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public StreamId getStreamId() {
	return pravega.getStreamFromParam(STREAM_PARAMETER, DEFAULT_STREAM);
}
 
Project: pravega/pravega-samples · Source: AbstractPipeline.java

Example 11: run

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public void run() {
	StreamId streamId = getStreamId();
	pravega.createStream(streamId);
	LOG.info("Succesfully created stream: {}", streamId);
}
 
Project: pravega/pravega-samples · Source: StreamCreator.java

Example 12: createStream

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public StreamId createStream(final String streamParamName) {
	final String defaultStreamName = RandomStringUtils.randomAlphabetic(20);
	StreamId streamId = flinkPravegaParams.createStreamFromParam(streamParamName, scope + "/" + defaultStreamName);
	log.info("Created stream: {} with scope: {}", streamId.getName(), streamId.getScope());
	return streamId;
}
 
Project: pravega/nautilus-samples · Source: StreamUtils.java

Example 13: exactlyOnceReadWriteSimulator

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
public void exactlyOnceReadWriteSimulator(final StreamId inStreamId, final StreamId outStreamId,
										  final StreamUtils streamUtils, int numElements,
										  boolean generateData, boolean throttled) throws Exception {

	final int blockAtNum = numElements/2;
	final int sleepPerElement = 1;

	final int checkpointInterval = 100;
	final int taskFailureRestartAttempts = 3;
	final long delayBetweenRestartAttempts = 0L;
	final long startTime = 0L;
	final String jobName = "exactlyOnceReadWriteSimulator";

	//30 sec timeout for all
	final long txTimeout = 30 * 1000;
	final long txTimeoutMax = 30 * 1000;
	final long txTimeoutGracePeriod = 30 * 1000;

	EventStreamWriter<Integer> eventWriter;
	ThrottledIntegerWriter producer = null;

	final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(parallelism);
	env.enableCheckpointing(checkpointInterval);
	env.setRestartStrategy(RestartStrategies.fixedDelayRestart(taskFailureRestartAttempts, delayBetweenRestartAttempts));

	// we currently need this to work around the case where tasks are started too late, a checkpoint was already triggered, and some tasks
	// never see the checkpoint event
	env.getCheckpointConfig().setCheckpointTimeout(2000);

	// the Pravega reader
	final FlinkPravegaReader<Integer> pravegaSource = streamUtils.getFlinkPravegaParams().newReader(inStreamId, startTime, Integer.class);

	// Pravega Writer
	FlinkPravegaWriter<Integer> pravegaExactlyOnceWriter = streamUtils.newExactlyOnceWriter(outStreamId,
			Integer.class, new IdentityRouter<>());

	DataStream<Integer> stream =
	env.addSource(pravegaSource)
			.map(new FailingIdentityMapper<>(numElements * 2 / 3))
			.setParallelism(1)

			.map(new NotifyingMapper<>())
			.setParallelism(1);

			stream.addSink(pravegaExactlyOnceWriter)
			.setParallelism(1);

			stream.addSink(new IntSequenceExactlyOnceValidator(numElements))
			.setParallelism(1);

	if (generateData) {
		eventWriter = streamUtils.createWriter(inStreamId.getName(), inStreamId.getScope());
		producer = new ThrottledIntegerWriter(eventWriter, numElements, blockAtNum, sleepPerElement, false);
		producer.start();
		if (throttled) {
			ThrottledIntegerWriter finalProducer = producer;
			TO_CALL_ON_COMPLETION.set(() -> finalProducer.unThrottle());
		}
	}

	try {
		env.execute(jobName);
	} catch (Exception e) {
		if (!(ExceptionUtils.getRootCause(e) instanceof IntSequenceExactlyOnceValidator.SuccessException)) {
			throw e;
		}
	}

	if (generateData && producer != null) producer.sync();

}
 
Project: pravega/nautilus-samples · Source: EventCounterApp.java

Example 14: FlinkPravegaTableSink

import io.pravega.connectors.flink.util.StreamId; // import the required package/class
/**
 * Creates a Pravega {@link AppendStreamTableSink}.
 *
 * <p>The {@code serializationSchemaFactory} supplies a {@link SerializationSchema}
 * based on the output field names.
 *
 * <p>Each row is written to a Pravega stream with a routing key based on the {@code routingKeyFieldName}.
 * The specified field must of type {@code STRING}.
 *
 * @param controllerURI                The pravega controller endpoint address.
 * @param stream                       The stream to write events to.
 * @param serializationSchemaFactory   A factory for the serialization schema to use for stream events.
 * @param routingKeyFieldName          The field name to use as a Pravega event routing key.
 */
public FlinkPravegaTableSink(
        URI controllerURI,
        StreamId stream,
        Function<String[], SerializationSchema<Row>> serializationSchemaFactory,
        String routingKeyFieldName) {
    this.controllerURI = controllerURI;
    this.stream = stream;
    this.serializationSchemaFactory = serializationSchemaFactory;
    this.routingKeyFieldName = routingKeyFieldName;
}
 
Project: pravega/flink-connectors · Source: FlinkPravegaTableSink.java
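A condensed usage sketch of this constructor, mirroring Example 9 above (the controller URI and stream are placeholders; table is assumed to be an existing Table):

FlinkPravegaTableSink sink = new FlinkPravegaTableSink(
        URI.create("tcp://localhost:9090"),  // placeholder controller endpoint
        new StreamId("myScope", "myStream"), // placeholder stream to write to
        JsonRowSerializationSchema::new,     // String[] field names -> SerializationSchema<Row>
        "category");                         // routing-key column; must be of type STRING
table.writeToSink(sink);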


Note: The io.pravega.connectors.flink.util.StreamId class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, who retain copyright in the source code; consult each project's license before distributing or using it. Do not reproduce this article without permission.