

Java KeyedSerializationSchema Class Code Examples

This article collects typical usage examples of the Java class org.apache.flink.streaming.util.serialization.KeyedSerializationSchema. If you have been wondering what KeyedSerializationSchema does, how to use it, or where to find concrete examples, the curated class examples below should help.


The KeyedSerializationSchema class belongs to the org.apache.flink.streaming.util.serialization package. The sections below present 15 code examples of the class, sorted by popularity by default.
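
Before the examples, here is a minimal sketch of what implementing the interface looks like. KeyedSerializationSchema declares three methods: serializeKey and serializeValue turn each element into the Kafka record's key and value bytes, and getTargetTopic can route an element to a per-record topic (returning null falls back to the producer's default topic). The class below is illustrative only; its name and its Tuple2<String, String> element type are not taken from the examples that follow.

import java.nio.charset.StandardCharsets;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

/**
 * Illustrative schema that writes Tuple2<String, String> records as UTF-8 key/value pairs.
 */
public class StringPairSerializationSchema implements KeyedSerializationSchema<Tuple2<String, String>> {

	@Override
	public byte[] serializeKey(Tuple2<String, String> element) {
		// a null key is legal; Kafka accepts unkeyed records
		return element.f0 == null ? null : element.f0.getBytes(StandardCharsets.UTF_8);
	}

	@Override
	public byte[] serializeValue(Tuple2<String, String> element) {
		return element.f1.getBytes(StandardCharsets.UTF_8);
	}

	@Override
	public String getTargetTopic(Tuple2<String, String> element) {
		// null means: use the producer's configured default topic
		return null;
	}
}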

Example 1: DummyFlinkKafkaProducer

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@SuppressWarnings("unchecked")
DummyFlinkKafkaProducer(Properties producerConfig, KeyedSerializationSchema<T> schema, FlinkKafkaPartitioner partitioner) {

	super(DUMMY_TOPIC, schema, producerConfig, partitioner);

	this.mockProducer = mock(KafkaProducer.class);
	when(mockProducer.send(any(ProducerRecord.class), any(Callback.class))).thenAnswer(new Answer<Object>() {
		@Override
		public Object answer(InvocationOnMock invocationOnMock) throws Throwable {
			pendingCallbacks.add(invocationOnMock.getArgumentAt(1, Callback.class));
			return null;
		}
	});

	this.pendingCallbacks = new ArrayList<>();
	this.flushLatch = new MultiShotLatch();
}
 
Developer ID: axbaretto, Project: flink, Lines: 18, Source: FlinkKafkaProducerBaseTest.java

Example 2: getProducerSink

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> StreamSink<T> getProducerSink(
		String topic,
		KeyedSerializationSchema<T> serSchema,
		Properties props,
		FlinkKafkaPartitioner<T> partitioner) {
	FlinkKafkaProducer09<T> prod = new FlinkKafkaProducer09<>(topic, serSchema, props, partitioner);
	prod.setFlushOnCheckpoint(true);
	return new StreamSink<>(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: KafkaTestEnvironmentImpl.java

Example 3: getProducerSink

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> StreamSink<T> getProducerSink(
		String topic,
		KeyedSerializationSchema<T> serSchema,
		Properties props,
		FlinkKafkaPartitioner<T> partitioner) {
	FlinkKafkaProducer08<T> prod = new FlinkKafkaProducer08<>(
			topic,
			serSchema,
			props,
			partitioner);
	prod.setFlushOnCheckpoint(true);
	return new StreamSink<>(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 15, Source: KafkaTestEnvironmentImpl.java

Example 4: getProducerSink

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> StreamSink<T> getProducerSink(String topic, KeyedSerializationSchema<T> serSchema, Properties props, FlinkKafkaPartitioner<T> partitioner) {
	return new StreamSink<>(new FlinkKafkaProducer011<>(
		topic,
		serSchema,
		props,
		Optional.ofNullable(partitioner),
		producerSemantic,
		FlinkKafkaProducer011.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE));
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: KafkaTestEnvironmentImpl.java

Example 5: produceIntoKafka

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> DataStreamSink<T> produceIntoKafka(DataStream<T> stream, String topic, KeyedSerializationSchema<T> serSchema, Properties props, FlinkKafkaPartitioner<T> partitioner) {
	return stream.addSink(new FlinkKafkaProducer011<>(
		topic,
		serSchema,
		props,
		Optional.ofNullable(partitioner),
		producerSemantic,
		FlinkKafkaProducer011.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE));
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: KafkaTestEnvironmentImpl.java

Example 6: writeToKafkaWithTimestamps

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> DataStreamSink<T> writeToKafkaWithTimestamps(DataStream<T> stream, String topic, KeyedSerializationSchema<T> serSchema, Properties props) {
	FlinkKafkaProducer011<T> prod = new FlinkKafkaProducer011<>(
		topic, serSchema, props, Optional.of(new FlinkFixedPartitioner<>()), producerSemantic, FlinkKafkaProducer011.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE);

	prod.setWriteTimestampToKafka(true);

	return stream.addSink(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 10, Source: KafkaTestEnvironmentImpl.java

Example 7: writeToKafkaWithTimestamps

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> DataStreamSink<T> writeToKafkaWithTimestamps(DataStream<T> stream, String topic, KeyedSerializationSchema<T> serSchema, Properties props) {
	FlinkKafkaProducer010<T> prod = new FlinkKafkaProducer010<>(topic, serSchema, props);
	prod.setFlushOnCheckpoint(true);
	prod.setWriteTimestampToKafka(true);
	return stream.addSink(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 8, Source: KafkaTestEnvironmentImpl.java

Example 8: FlinkKafkaProducerBase

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
/**
 * The main constructor for creating a FlinkKafkaProducer.
 *
 * @param defaultTopicId The default topic to write data to
 * @param serializationSchema A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
 * @param producerConfig Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
 * @param customPartitioner A serializable partitioner for assigning messages to Kafka partitions. Passing null will use Kafka's partitioner.
 */
public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner) {
	requireNonNull(defaultTopicId, "TopicID not set");
	requireNonNull(serializationSchema, "serializationSchema not set");
	requireNonNull(producerConfig, "producerConfig not set");
	ClosureCleaner.clean(customPartitioner, true);
	ClosureCleaner.ensureSerializable(serializationSchema);

	this.defaultTopicId = defaultTopicId;
	this.schema = serializationSchema;
	this.producerConfig = producerConfig;
	this.flinkKafkaPartitioner = customPartitioner;

	// set the producer configuration properties for kafka record key value serializers.
	if (!producerConfig.containsKey(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG)) {
		this.producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getCanonicalName());
	} else {
		LOG.warn("Overwriting the '{}' is not recommended", ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG);
	}

	if (!producerConfig.containsKey(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG)) {
		this.producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getCanonicalName());
	} else {
		LOG.warn("Overwriting the '{}' is not recommended", ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG);
	}

	// eagerly ensure that bootstrap servers are set.
	if (!this.producerConfig.containsKey(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
		throw new IllegalArgumentException(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG + " must be supplied in the producer config properties.");
	}

	this.topicPartitionsMap = new HashMap<>();
}
 
Developer ID: axbaretto, Project: flink, Lines: 41, Source: FlinkKafkaProducerBase.java
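
As a hedged usage sketch of the constructor contract documented above: 'bootstrap.servers' is the only eagerly-checked property, and the key/value serializers default to ByteArraySerializer as the code shows. The broker address and topic name below are placeholders, and StringPairSerializationSchema is the hypothetical schema sketched at the top of this article.

public static DataStreamSink<Tuple2<String, String>> writePairs(DataStream<Tuple2<String, String>> stream) {
	Properties producerConfig = new Properties();
	// the only property the base class requires eagerly; placeholder broker address
	producerConfig.setProperty("bootstrap.servers", "localhost:9092");

	FlinkKafkaProducer010<Tuple2<String, String>> producer = new FlinkKafkaProducer010<>(
			"output-topic",                      // default topic (placeholder)
			new StringPairSerializationSchema(), // hypothetical schema sketched above
			producerConfig);
	return stream.addSink(producer);
}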

Example 9: runKeyValueTest

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
public void runKeyValueTest() throws Exception {
	final String topic = "keyvaluetest";
	createTestTopic(topic, 1, 1);
	final int elementCount = 5000;

	// ----------- Write some data into Kafka -------------------

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(1);
	env.setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();

	DataStream<Tuple2<Long, PojoValue>> kvStream = env.addSource(new SourceFunction<Tuple2<Long, PojoValue>>() {
		@Override
		public void run(SourceContext<Tuple2<Long, PojoValue>> ctx) throws Exception {
			Random rnd = new Random(1337);
			for (long i = 0; i < elementCount; i++) {
				PojoValue pojo = new PojoValue();
				pojo.when = new Date(rnd.nextLong());
				pojo.lon = rnd.nextLong();
				pojo.lat = i;
				// make every second key null to ensure proper "null" serialization
				Long key = (i % 2 == 0) ? null : i;
				ctx.collect(new Tuple2<>(key, pojo));
			}
		}

		@Override
		public void cancel() {
		}
	});

	KeyedSerializationSchema<Tuple2<Long, PojoValue>> schema = new TypeInformationKeyValueSerializationSchema<>(Long.class, PojoValue.class, env.getConfig());
	Properties producerProperties = FlinkKafkaProducerBase.getPropertiesFromBrokerList(brokerConnectionStrings);
	producerProperties.setProperty("retries", "3");
	kafkaServer.produceIntoKafka(kvStream, topic, schema, producerProperties, null);
	env.execute("Write KV to Kafka");

	// ----------- Read the data again -------------------

	env = StreamExecutionEnvironment.getExecutionEnvironment();
	env.setParallelism(1);
	env.setRestartStrategy(RestartStrategies.noRestart());
	env.getConfig().disableSysoutLogging();

	KeyedDeserializationSchema<Tuple2<Long, PojoValue>> readSchema = new TypeInformationKeyValueSerializationSchema<>(Long.class, PojoValue.class, env.getConfig());

	Properties props = new Properties();
	props.putAll(standardProps);
	props.putAll(secureProps);
	DataStream<Tuple2<Long, PojoValue>> fromKafka = env.addSource(kafkaServer.getConsumer(topic, readSchema, props));
	fromKafka.flatMap(new RichFlatMapFunction<Tuple2<Long, PojoValue>, Object>() {
		long counter = 0;
		@Override
		public void flatMap(Tuple2<Long, PojoValue> value, Collector<Object> out) throws Exception {
			// the elements should be in order.
			Assert.assertTrue("Wrong value " + value.f1.lat, value.f1.lat == counter);
			if (value.f1.lat % 2 == 0) {
				assertNull("key was not null", value.f0);
			} else {
				Assert.assertTrue("Wrong value " + value.f0, value.f0 == counter);
			}
			counter++;
			if (counter == elementCount) {
				// we got the right number of elements
				throw new SuccessException();
			}
		}
	});

	tryExecute(env, "Read KV from Kafka");

	deleteTestTopic(topic);
}
 
Developer ID: axbaretto, Project: flink, Lines: 75, Source: KafkaConsumerTestBase.java
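
A detail worth noting in this test: the same TypeInformationKeyValueSerializationSchema type serves both the write path and the read path, because it implements both KeyedSerializationSchema and KeyedDeserializationSchema for Tuple2 elements. A minimal sketch of that dual role (types as in the test; the variable names are illustrative):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
TypeInformationKeyValueSerializationSchema<Long, PojoValue> kvSchema =
		new TypeInformationKeyValueSerializationSchema<>(Long.class, PojoValue.class, env.getConfig());

// one instance satisfies both sides of the pipeline
KeyedSerializationSchema<Tuple2<Long, PojoValue>> writeSide = kvSchema;
KeyedDeserializationSchema<Tuple2<Long, PojoValue>> readSide = kvSchema;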

Example 10: getProducerSink

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> StreamSink<T> getProducerSink(
		String topic,
		KeyedSerializationSchema<T> serSchema,
		Properties props,
		KafkaPartitioner<T> partitioner) {
	FlinkKafkaProducer09<T> prod = new FlinkKafkaProducer09<>(topic, serSchema, props, partitioner);
	prod.setFlushOnCheckpoint(true);
	return new StreamSink<>(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 11, Source: KafkaTestEnvironmentImpl.java

Example 11: getProducerSink

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> StreamSink<T> getProducerSink(
		String topic,
		KeyedSerializationSchema<T> serSchema,
		Properties props,
		KafkaPartitioner<T> partitioner) {
	FlinkKafkaProducer08<T> prod = new FlinkKafkaProducer08<>(
			topic,
			serSchema,
			props,
			partitioner);
	prod.setFlushOnCheckpoint(true);
	return new StreamSink<>(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 15, Source: KafkaTestEnvironmentImpl.java

Example 12: FlinkKafkaProducerBase

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
/**
 * The main constructor for creating a FlinkKafkaProducer.
 *
 * @param defaultTopicId The default topic to write data to
 * @param serializationSchema A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
 * @param producerConfig Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
 * @param customPartitioner A serializable partitioner for assigning messages to Kafka partitions. Passing null will use Kafka's partitioner.
 */
public FlinkKafkaProducerBase(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, KafkaPartitioner<IN> customPartitioner) {
	requireNonNull(defaultTopicId, "TopicID not set");
	requireNonNull(serializationSchema, "serializationSchema not set");
	requireNonNull(producerConfig, "producerConfig not set");
	ClosureCleaner.clean(customPartitioner, true);
	ClosureCleaner.ensureSerializable(serializationSchema);

	this.defaultTopicId = defaultTopicId;
	this.schema = serializationSchema;
	this.producerConfig = producerConfig;

	// set the producer configuration properties for kafka record key value serializers.
	if (!producerConfig.containsKey(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG)) {
		this.producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getCanonicalName());
	} else {
		LOG.warn("Overwriting the '{}' is not recommended", ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG);
	}

	if (!producerConfig.containsKey(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG)) {
		this.producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getCanonicalName());
	} else {
		LOG.warn("Overwriting the '{}' is not recommended", ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG);
	}

	// eagerly ensure that bootstrap servers are set.
	if (!this.producerConfig.containsKey(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG)) {
		throw new IllegalArgumentException(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG + " must be supplied in the producer config properties.");
	}

	this.partitioner = customPartitioner;
}
 
Developer ID: axbaretto, Project: flink, Lines: 40, Source: FlinkKafkaProducerBase.java

Example 13: produceIntoKafka

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> DataStreamSink<T> produceIntoKafka(DataStream<T> stream, String topic, KeyedSerializationSchema<T> serSchema, Properties props, FlinkKafkaPartitioner<T> partitioner) {
	FlinkKafkaProducer09<T> prod = new FlinkKafkaProducer09<>(topic, serSchema, props, partitioner);
	prod.setFlushOnCheckpoint(true);
	return stream.addSink(prod);
}
 
Developer ID: axbaretto, Project: flink, Lines: 7, Source: KafkaTestEnvironmentImpl.java

Example 14: writeToKafkaWithTimestamps

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
@Override
public <T> DataStreamSink<T> writeToKafkaWithTimestamps(DataStream<T> stream, String topic, KeyedSerializationSchema<T> serSchema, Properties props) {
	throw new UnsupportedOperationException();
}
 
Developer ID: axbaretto, Project: flink, Lines: 5, Source: KafkaTestEnvironmentImpl.java

Example 15: FlinkKafkaProducer

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema; // import the required package/class
/**
 * @deprecated Use {@link FlinkKafkaProducer08#FlinkKafkaProducer08(String, String, KeyedSerializationSchema)}
 */
@Deprecated
public FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema) {
	super(topicId, serializationSchema, getPropertiesFromBrokerList(brokerList), (FlinkKafkaPartitioner<IN>) null);
}
 
Developer ID: axbaretto, Project: flink, Lines: 8, Source: FlinkKafkaProducer.java
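
For completeness, a hedged sketch of the replacement the deprecation note points to: calling the FlinkKafkaProducer08 constructor directly. The broker list and topic name are placeholders, and a plain SimpleStringSchema is adapted via KeyedSerializationSchemaWrapper to match the KeyedSerializationSchema parameter.

FlinkKafkaProducer08<String> producer = new FlinkKafkaProducer08<>(
		"localhost:9092",   // brokerList (placeholder)
		"my-topic",         // topicId (placeholder)
		new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()));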


Note: The org.apache.flink.streaming.util.serialization.KeyedSerializationSchema class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many programmers; copyright in the source code remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.