

Java KeyedMessage Class Code Examples

This article collects typical usage examples of the Java class kafka.producer.KeyedMessage. If you are unsure what the KeyedMessage class does or how to use it, the curated examples below should help.


The KeyedMessage class belongs to the kafka.producer package. Fifteen code examples are shown below, sorted by popularity by default. Note that kafka.producer.KeyedMessage is part of the legacy Scala producer API, which was removed in Kafka 2.0; new applications should use org.apache.kafka.clients.producer.ProducerRecord instead.
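All 15 examples revolve around the same two constructor shapes. The stand-in class below is a minimal, hypothetical sketch that only mirrors the (topic, message) and (topic, key, message) overloads for illustration; it is NOT the real kafka.producer.KeyedMessage.

```java
// Minimal stand-in mirroring the two KeyedMessage constructor shapes used in the
// examples below. NOT the real kafka.producer.KeyedMessage -- illustration only.
public class Main {
    static class KeyedMessage<K, V> {
        final String topic;
        final K key;       // null when the (topic, message) overload is used
        final V message;

        // Unkeyed form: the legacy producer picks a partition without key hashing.
        KeyedMessage(String topic, V message) {
            this(topic, null, message);
        }

        // Keyed form: the partitioner hashes the key to choose a partition.
        KeyedMessage(String topic, K key, V message) {
            this.topic = topic;
            this.key = key;
            this.message = message;
        }

        boolean hasKey() {
            return key != null;
        }
    }

    public static void main(String[] args) {
        KeyedMessage<String, String> unkeyed = new KeyedMessage<>("TestTopic", "payload");
        KeyedMessage<String, String> keyed = new KeyedMessage<>("TestTopic", "k1", "payload");
        System.out.println(unkeyed.hasKey() + " " + keyed.hasKey()); // prints "false true"
    }
}
```

In the examples that follow, keyed sends (examples 1, 4, 5, 8, 9, 11) give per-key partition affinity, while unkeyed sends (examples 6, 7, 10, 12-15) leave partition choice to the producer.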

Example 1: produce

import kafka.producer.KeyedMessage; // import the required package/class
void produce() {
	// Read the CSV file line by line and send each line as a keyed message;
	// try-with-resources ensures the reader is closed even if a send fails.
	try (BufferedReader reader = new BufferedReader(new FileReader(new File(fileName)))) {
		String line;
		int messageNum = 1;
		while ((line = reader.readLine()) != null) {
			String key = String.valueOf(messageNum);
			producer.send(new KeyedMessage<String, String>(TOPIC, key, line));
			System.out.println(line);
			messageNum++;
		}
	} catch (Exception e) {
		e.printStackTrace();
	}
}
 
Developer ID: thulab, Project: iotdb-jdbc, Lines: 20, Source: KafkaProducer.java

Example 2: configure

import kafka.producer.KeyedMessage; // import the required package/class
/**
 * Configure the class based on the properties for the user exit.
 * @param properties the properties we want to configure.
 */
public void configure(PropertyManagement properties) {
    batchSize = KafkaProperties.getKafkaBatchSize();
    topic = KafkaProperties.getKafkaTopic();
    if (topic.equals(KafkaProperties.DEFAULT_TOPIC)) {
        LOG.warn("The Property 'topic' is not set. " + "Using the default topic name: " +
                 KafkaProperties.DEFAULT_TOPIC);
    }
    messageList = new ArrayList<KeyedMessage<Object, Object>>(batchSize);
    
    kafkaProps = KafkaProperties.getKafkaBDGlueProperties();

    messageHelper.setTopic(topic);
    
    logConfiguration();
}
 
Developer ID: oracle, Project: bdglue, Lines: 20, Source: KafkaHelper.java

Example 3: process

import kafka.producer.KeyedMessage; // import the required package/class
/**
 * Process the received BDGlue event. Assumes the data is already formatted
 * in the event body.
 * @param event the BDGlue event we want to process.
 */
public void process(EventData event) {
    String messageKey = null;
    String messageTopic = null;

    //byte[] eventBody = (byte[])event.eventBody();
    Object eventBody = event.eventBody();

    messageTopic = messageHelper.getTopic(event);
    messageKey = messageHelper.getMessageKey(event);
    
    if (LOG.isDebugEnabled()) {
      LOG.debug("{Event} " + messageTopic + " : " + messageKey + " : " + eventBody );
    }

    // create a message and add to buffer
    
    KeyedMessage<Object, Object> data = new KeyedMessage<Object, Object>(messageTopic, messageKey, eventBody);
    
    messageList.add(data);
}
 
Developer ID: oracle, Project: bdglue, Lines: 26, Source: KafkaHelper.java

Example 4: updateState

import kafka.producer.KeyedMessage; // import the required package/class
public void updateState(List<TridentTuple> tuples, TridentCollector collector) {
    String topic = null;
    for (TridentTuple tuple : tuples) {
        try {
            topic = topicSelector.getTopic(tuple);

            if(topic != null) {
                producer.send(new KeyedMessage(topic, mapper.getKeyFromTuple(tuple),
                        mapper.getMessageFromTuple(tuple)));
            } else {
                LOG.warn("skipping key = " + mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
            }
        } catch (Exception ex) {
            String errorMsg = "Could not send message with key = " + mapper.getKeyFromTuple(tuple)
                    + " to topic = " + topic;
            LOG.warn(errorMsg, ex);
            throw new FailedException(errorMsg, ex);
        }
    }
}
 
Developer ID: redBorder, Project: rb-bi, Lines: 21, Source: TridentKafkaState.java

Example 5: execute

import kafka.producer.KeyedMessage; // import the required package/class
@Override
public void execute(Tuple input) {
    K key = null;
    V message = null;
    String topic = null;
    try {
        key = mapper.getKeyFromTuple(input);
        message = mapper.getMessageFromTuple(input);
        topic = topicSelector.getTopic(input);
        if(topic != null ) {
            producer.send(new KeyedMessage<K, V>(topic, key, message));
        } else {
            LOG.warn("skipping key = " + key + ", topic selector returned null.");
        }
    } catch (Exception ex) {
        LOG.error("Could not send message with key = " + key
                + " and value = " + message + " to topic = " + topic, ex);
    } finally {
        collector.ack(input);
    }
}
 
Developer ID: redBorder, Project: rb-bi, Lines: 22, Source: KafkaBolt.java

Example 6: send

import kafka.producer.KeyedMessage; // import the required package/class
public void send(COL_RDBMS event) throws Exception {
	EncoderFactory avroEncoderFactory = EncoderFactory.get();
	SpecificDatumWriter<COL_RDBMS> avroEventWriter = new SpecificDatumWriter<COL_RDBMS>(COL_RDBMS.SCHEMA$);
	
	ByteArrayOutputStream stream = new ByteArrayOutputStream();
	BinaryEncoder binaryEncoder = avroEncoderFactory.binaryEncoder(stream,null);

	try {
		avroEventWriter.write(event, binaryEncoder);
		binaryEncoder.flush();
	} catch (IOException e) {
		e.printStackTrace();
		throw e;
	}
	IOUtils.closeQuietly(stream);

	KeyedMessage<String, byte[]> data = new KeyedMessage<String, byte[]>(
			TOPIC, stream.toByteArray());

	producer.send(data);
}
 
Developer ID: iotoasis, Project: SDA, Lines: 22, Source: AvroRdbmsDeviceInfoPublish.java

Example 7: send

import kafka.producer.KeyedMessage; // import the required package/class
public void send(COL_ONEM2M event) throws Exception {
	EncoderFactory avroEncoderFactory = EncoderFactory.get();
	SpecificDatumWriter<COL_ONEM2M> avroEventWriter = new SpecificDatumWriter<COL_ONEM2M>(COL_ONEM2M.SCHEMA$);
	
	ByteArrayOutputStream stream = new ByteArrayOutputStream();
	BinaryEncoder binaryEncoder = avroEncoderFactory.binaryEncoder(stream,null);

	try {
		avroEventWriter.write(event, binaryEncoder);
		binaryEncoder.flush();
	} catch (IOException e) {
		e.printStackTrace();
		throw e;
	}
	IOUtils.closeQuietly(stream);

	KeyedMessage<String, byte[]> data = new KeyedMessage<String, byte[]>(
			TOPIC, stream.toByteArray());

	producer.send(data);
}
 
Developer ID: iotoasis, Project: SDA, Lines: 22, Source: AvroOneM2MDataPublish.java

Example 8: main

import kafka.producer.KeyedMessage; // import the required package/class
public static void main(String[] args) {
    Properties props = new Properties();
    props.put("metadata.broker.list", "127.0.0.1:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("key.serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks","-1");

    Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));

    int messageNo = 100;
    final int COUNT = 1000;
    while (messageNo < COUNT) {
        String key = String.valueOf(messageNo);
        String data = "hello kafka message " + key;
        producer.send(new KeyedMessage<String, String>("TestTopic", key ,data));
        System.out.println(data);
        messageNo ++;
    }
    producer.close(); // release the producer's network resources when done
}
 
Developer ID: javahongxi, Project: whatsmars, Lines: 20, Source: KafkaProducer.java

Example 9: sendMessages

import kafka.producer.KeyedMessage; // import the required package/class
/**
 * Send a set of messages. Each must have a "key" string value.
 * 
 * @param topic the topic to publish to
 * @param msgs the messages to send
 * @throws FailedToSendMessageException
 * @throws JSONException
 */
@Override
public void sendMessages(String topic, List<? extends message> msgs)
		throws IOException, FailedToSendMessageException {
	log.info("sending " + msgs.size() + " events to [" + topic + "]");

	final List<KeyedMessage<String, String>> kms = new ArrayList<KeyedMessage<String, String>>(msgs.size());
	for (message o : msgs) {
		final KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, o.getKey(), o.toString());
		kms.add(data);
	}
	try {
		fProducer.send(kms);

	} catch (FailedToSendMessageException excp) {
		log.error("Failed to send message(s) to topic [" + topic + "].", excp);
		throw new FailedToSendMessageException(excp.getMessage(), excp);
	}
}
 
Developer ID: att, Project: dmaap-framework, Lines: 27, Source: KafkaPublisher.java

Example 10: sample

import kafka.producer.KeyedMessage; // import the required package/class
@Override
public SampleResult sample(Entry entry) {
	SampleResult result = new SampleResult();
	result.setSampleLabel(getName());
	try {
		result.sampleStart();
		Producer<String, String> producer = getProducer();
		KeyedMessage<String, String> msg = new KeyedMessage<String, String>(getTopic(), getMessage());
		producer.send(msg);
		result.sampleEnd(); 
		result.setSuccessful(true);
		result.setResponseCodeOK();
	} catch (Exception e) {
		result.sampleEnd(); // stop stopwatch
		result.setSuccessful(false);
		result.setResponseMessage("Exception: " + e);
		// get stack trace as a String to return as document data
		java.io.StringWriter stringWriter = new java.io.StringWriter();
		e.printStackTrace(new java.io.PrintWriter(stringWriter));
		result.setResponseData(stringWriter.toString(), null);
		result.setDataType(org.apache.jmeter.samplers.SampleResult.TEXT);
		result.setResponseCode("FAILED");
	}
	return result;
}
 
Developer ID: XMeterSaaSService, Project: kafka_jmeter, Lines: 26, Source: KafkaSampler.java

Example 11: main

import kafka.producer.KeyedMessage; // import the required package/class
public static void main(String[] args) {
	String brokers = "localhost:9092";
	Producer<String, String> producer = KafkaProducer.getInstance(brokers).getProducer();

	KafkaDataProducer instance = new KafkaDataProducer();

	String topic = "test-topic";

	for (int i = 0; i < 100; i++) {
		String message = instance.get(i);
		KeyedMessage<String, String> keyedMessage = new KeyedMessage<String, String>(topic, "device001", message);
		producer.send(keyedMessage);
		System.out.println("message[" + (i + 1) + "] is sent.");
		try {
			Thread.sleep(1000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
}
 
Developer ID: osswangxining, Project: another-rule-based-analytics-on-spark, Lines: 21, Source: KafkaDataProducer.java

Example 12: messageWithSingleThriftSpan

import kafka.producer.KeyedMessage; // import the required package/class
/** Ensures legacy encoding works: a single TBinaryProtocol encoded span */
@Test
public void messageWithSingleThriftSpan() throws Exception {
  Builder builder = builder("single_span");

  byte[] bytes = Codec.THRIFT.writeSpan(TRACE.get(0));
  producer.send(new KeyedMessage<>(builder.topic, bytes));

  try (KafkaCollector collector = newKafkaTransport(builder, consumer)) {
    assertThat(recvdSpans.take()).containsExactly(TRACE.get(0));
  }

  assertThat(kafkaMetrics.messages()).isEqualTo(1);
  assertThat(kafkaMetrics.bytes()).isEqualTo(bytes.length);
  assertThat(kafkaMetrics.spans()).isEqualTo(1);
}
 
Developer ID: liaominghua, Project: zipkin, Lines: 17, Source: KafkaCollectorTest.java

Example 13: messageWithMultipleSpans_thrift

import kafka.producer.KeyedMessage; // import the required package/class
/** Ensures list encoding works: a TBinaryProtocol encoded list of spans */
@Test
public void messageWithMultipleSpans_thrift() throws Exception {
  Builder builder = builder("multiple_spans_thrift");

  byte[] bytes = Codec.THRIFT.writeSpans(TRACE);
  producer.send(new KeyedMessage<>(builder.topic, bytes));

  try (KafkaCollector collector = newKafkaTransport(builder, consumer)) {
    assertThat(recvdSpans.take()).containsExactlyElementsOf(TRACE);
  }

  assertThat(kafkaMetrics.messages()).isEqualTo(1);
  assertThat(kafkaMetrics.bytes()).isEqualTo(bytes.length);
  assertThat(kafkaMetrics.spans()).isEqualTo(TestObjects.TRACE.size());
}
 
Developer ID: liaominghua, Project: zipkin, Lines: 17, Source: KafkaCollectorTest.java

Example 14: messageWithMultipleSpans_json

import kafka.producer.KeyedMessage; // import the required package/class
/** Ensures list encoding works: a json encoded list of spans */
@Test
public void messageWithMultipleSpans_json() throws Exception {
  Builder builder = builder("multiple_spans_json");

  byte[] bytes = Codec.JSON.writeSpans(TRACE);
  producer.send(new KeyedMessage<>(builder.topic, bytes));

  try (KafkaCollector collector = newKafkaTransport(builder, consumer)) {
    assertThat(recvdSpans.take()).containsExactlyElementsOf(TRACE);
  }

  assertThat(kafkaMetrics.messages()).isEqualTo(1);
  assertThat(kafkaMetrics.bytes()).isEqualTo(bytes.length);
  assertThat(kafkaMetrics.spans()).isEqualTo(TestObjects.TRACE.size());
}
 
Developer ID: liaominghua, Project: zipkin, Lines: 17, Source: KafkaCollectorTest.java

Example 15: skipsMalformedData

import kafka.producer.KeyedMessage; // import the required package/class
/** Ensures malformed spans don't hang the collector */
@Test
public void skipsMalformedData() throws Exception {
  Builder builder = builder("decoder_exception");

  producer.send(new KeyedMessage<>(builder.topic, Codec.THRIFT.writeSpans(TRACE)));
  producer.send(new KeyedMessage<>(builder.topic, new byte[0]));
  producer.send(new KeyedMessage<>(builder.topic, "[\"='".getBytes())); // screwed up json
  producer.send(new KeyedMessage<>(builder.topic, "malformed".getBytes()));
  producer.send(new KeyedMessage<>(builder.topic, Codec.THRIFT.writeSpans(TRACE)));

  try (KafkaCollector collector = newKafkaTransport(builder, consumer)) {
    assertThat(recvdSpans.take()).containsExactlyElementsOf(TRACE);
    // the only way we could read this, is if the malformed spans were skipped.
    assertThat(recvdSpans.take()).containsExactlyElementsOf(TRACE);
  }

  assertThat(kafkaMetrics.messagesDropped()).isEqualTo(3);
}
 
Developer ID: liaominghua, Project: zipkin, Lines: 20, Source: KafkaCollectorTest.java


Note: The kafka.producer.KeyedMessage class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use should follow each project's license. Do not reproduce without permission.