

Java ConsumerRecord.topic Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.clients.consumer.ConsumerRecord.topic. If you are unsure what ConsumerRecord.topic does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage examples of org.apache.kafka.clients.consumer.ConsumerRecord itself.


The following presents 15 code examples of the ConsumerRecord.topic method, sorted by popularity.
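Before diving into the examples, here is a minimal, self-contained sketch of where ConsumerRecord.topic typically appears: a plain poll loop. This is an illustrative sketch, not code from any of the projects below; the broker address localhost:9092, the group id topic-demo, and the topic name demo-topic are placeholder assumptions, and poll(Duration) assumes kafka-clients 2.0 or later (several examples below come from older client versions).

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicRoutingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "topic-demo");              // placeholder group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // topic() returns the name of the topic this record was read from;
                // when one consumer subscribes to several topics, it is the natural
                // routing key, which is exactly how most of the examples below use it.
                System.out.printf("topic=%s partition=%d offset=%d%n",
                        record.topic(), record.partition(), record.offset());
            }
        }
    }
}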

Example 1: onMessage

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public void onMessage(ConsumerRecord<K, V> data, Acknowledgment acknowledgment) {
    logger.info(cachingDateFormatter.format(System.currentTimeMillis()) + "-" + data.toString());
    // route by topic
    String topic = data.topic();

    MessageHandler<K, V> messageHandler = messageHandlers.get(topic);
    if (null == messageHandler) {
        // TODO: handle the case where no MessageHandler is registered for this topic
        throw new RuntimeException("no MessageHandler instance found for topic: " + topic);
    }
    // resolve the handler's message payload type from its generic interface at runtime
    Type messageType = ((ParameterizedType) messageHandler.getClass().getGenericInterfaces()[0]).getActualTypeArguments()[0];
    // create MessageChannel , MessageBuilder
    messageChannel = new KafkaMessageChannel(acknowledgment);
    messageChannel.putMessage(data);
    Message message = MessageBuilder.build(messageType, messageChannel).createMessage(data.key());
    messageHandler.handler(message);
}
 
Developer ID: ailang323, Project: tankms, Lines: 20, Source: KafkaMessageListenerAdapter.java

Example 2: fromKafka

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
static Message<? extends com.google.protobuf.Message> fromKafka(com.google.protobuf.Message protoMessage, Envelope envelope, ConsumerRecord<String, byte[]> record) {
    boolean wasReceived = true;

    Topic topic = new Topic(record.topic());
    String partitioningKey = record.key();
    int partitionId = record.partition();
    long offset = record.offset();

    String messageId = envelope.getMessageId();
    String correlationId = envelope.getCorrelationId();

    MessageType type = MessageType.of(protoMessage);

    String requestCorrelationId = envelope.getRequestCorrelationId();
    Topic replyTo = new Topic(envelope.getReplyTo());

    Metadata meta = new Metadata(wasReceived, topic, partitioningKey, partitionId, offset, messageId, correlationId, requestCorrelationId, replyTo, type);
    return new Message<>(protoMessage, meta);
}
 
Developer ID: Sixt, Project: ja-micro, Lines: 20, Source: Messages.java

Example 3: onMessageConsumed

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
private synchronized void onMessageConsumed(ConsumerRecord<String, String> record) {
    log.info(String.format("Consumed message: [%s]", record));

    TopicPartition topicPartition = new TopicPartition(record.topic(), record.partition());

    if (!messagesByTopicPartition.containsKey(topicPartition)) {
        messagesByTopicPartition.put(topicPartition, new VersionedMessages(Lists.newLinkedList()));
    }

    VersionedMessages versionedMessages = messagesByTopicPartition.get(topicPartition);
    LinkedList<ConsumerRecord<String, String>> messages = versionedMessages.messages;
    messages.addFirst(record);

    if (messages.size() > maxTopicMessagesCount) {
        messages.removeLast();
    }

    versionedMessages.version.incrementAndGet();
}
 
Developer ID: enthusiast94, Project: kafka-visualizer, Lines: 20, Source: KafkaTopicsDataTracker.java

Example 4: call

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public Boolean call() {

    logger.debug("Number of records received : {}", records.count());
    try {
        for (final ConsumerRecord<String, Serializable> record : records) {
            TopicPartition tp = new TopicPartition(record.topic(), record.partition());
            logger.info("Record received topicPartition : {}, offset : {}", tp,
                record.offset());
            partitionToUncommittedOffsetMap.put(tp, record.offset());

            processConsumerRecords(record);
        }
    } catch (Exception e) {
        logger.error("Error while consuming", e);
    }
    return true;
}
 
Developer ID: warlock-china, Project: azeroth, Lines: 19, Source: NewApiTopicConsumer.java

Example 5: pollAndCommitTransactionsBatch

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
private void pollAndCommitTransactionsBatch() {
    ConsumerRecords<ByteBuffer, ByteBuffer> records = consumer.poll(POLL_TIMEOUT);
    List<TransactionScope> scopes = new ArrayList<>(records.count());
    for (ConsumerRecord<ByteBuffer, ByteBuffer> record : records) {
        TransactionScope transactionScope = serializer.deserialize(record.key());
        if (transactionScope.getScope().isEmpty()) {
            LOGGER.warn("[R] {} polled empty transaction {}", readerId, transactionScope.getTransactionId());
        }
        TopicPartition topicPartition = new TopicPartition(record.topic(), record.partition());
        buffer.put(transactionScope.getTransactionId(),
                new TransactionData(transactionScope, record.value(), topicPartition, record.offset()));
        scopes.add(transactionScope);
        committedOffsetMap.computeIfAbsent(topicPartition, COMMITTED_OFFSET).notifyRead(record.offset());
    }
    if (!scopes.isEmpty()) {
        scopes.sort(SCOPE_COMPARATOR);
        LOGGER.trace("[R] {} polled {}", readerId, scopes);
    }
    approveAndCommitTransactionsBatch(scopes);
}
 
Developer ID: epam, Project: Lagerta, Lines: 21, Source: Reader.java

Example 6: convertMessages

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
private void convertMessages(ConsumerRecords<byte[], byte[]> msgs) {
    for (ConsumerRecord<byte[], byte[]> msg : msgs) {
        log.trace("Consuming message with key {}, value {}", msg.key(), msg.value());
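        // the converter receives the topic name so it can resolve per-topic schemas
        // (the Kafka Connect Converter.toConnectData(String topic, byte[] value) contract)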
        SchemaAndValue keyAndSchema = keyConverter.toConnectData(msg.topic(), msg.key());
        SchemaAndValue valueAndSchema = valueConverter.toConnectData(msg.topic(), msg.value());
        SinkRecord record = new SinkRecord(msg.topic(), msg.partition(),
                keyAndSchema.schema(), keyAndSchema.value(),
                valueAndSchema.schema(), valueAndSchema.value(),
                msg.offset(),
                ConnectUtils.checkAndConvertTimestamp(msg.timestamp()),
                msg.timestampType());
        record = transformationChain.apply(record);
        if (record != null) {
            messageBatch.add(record);
        }
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: WorkerSinkTask.java

Example 7: decodePayload

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
public static <K, V> Payload<K, V> decodePayload(Deserializer<V> valueDeserializer, ConsumerRecord<K, byte[]> originConsumerRecord) {
    TracingHeader tracingHeader = null;
    ConsumerRecord<K, V> dataRecord = null;
    boolean sampled = false;
    byte[] data = originConsumerRecord.value();
    byte[] vData = null;
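    // Assumed wire format, inferred from the checks below:
    //   [short magic][short tracingHeaderLength][tracing header bytes][payload]
    // where HEADER_LENGTH covers the two leading shorts.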
    if (data.length <= HEADER_LENGTH) {
        vData = data;
    } else {
        ByteBuffer byteBuf = ByteBuffer.wrap(data);
        short magic = byteBuf.getShort(0);
        short tpLen = byteBuf.getShort(2);
        if (magic == MAGIC && tpLen == TracingHeader.LENGTH) {
            byte[] tpBytes = new byte[tpLen];
            System.arraycopy(byteBuf.array(), HEADER_LENGTH, tpBytes, 0, tpLen);
            tracingHeader = TracingHeader.fromBytes(tpBytes);
            sampled = true;
            int dataOffset = tpLen + HEADER_LENGTH;
            vData = new byte[byteBuf.array().length - dataOffset];
            System.arraycopy(byteBuf.array(), dataOffset, vData, 0, vData.length);
        } else {
            vData = data;
        }
    }
    dataRecord = new ConsumerRecord<>(originConsumerRecord.topic(),
            originConsumerRecord.partition(), originConsumerRecord.offset(),
            originConsumerRecord.key(), valueDeserializer.deserialize(originConsumerRecord.topic(), vData));
    return new Payload<>(tracingHeader, dataRecord, sampled);
}
 
Developer ID: YanXs, Project: nighthawk, Lines: 30, Source: PayloadCodec.java

Example 8: transform

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public boolean transform(ConsumerRecord<String, String> change, Row row)
    throws BiremeException {
  JsonParser jsonParser = new JsonParser();
  JsonObject value = (JsonObject) jsonParser.parse(change.value());

  if (!value.has("payload") || value.get("payload").isJsonNull()) {
    return false;
  }

  JsonObject payLoad = value.getAsJsonObject("payload");
  DebeziumRecord record = new DebeziumRecord(change.topic(), payLoad);

  Table table = cxt.tablesInfo.get(getMappedTableName(record));

  row.type = record.type;
  row.produceTime = record.produceTime;
  row.originTable = getOriginTableName(record);
  row.mappedTable = getMappedTableName(record);
  row.keys = formatColumns(record, table, table.keyNames, false);

  if (row.type != RowType.DELETE) {
    row.tuple = formatColumns(record, table, table.columnName, false);
  }

  return true;
}
 
Developer ID: HashDataInc, Project: bireme, Lines: 28, Source: DebeziumPipeLine.java

Example 9: TransactionWrapper

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
public TransactionWrapper(ConsumerRecord<ByteBuffer, ByteBuffer> record, TransactionMetadata deserializedMetadata) {
    GridArgumentCheck.notNull(deserializedMetadata, "metadata cannot be null");
    this.value = record.value();
    this.key = record.key();
    this.topicPartition = new TopicPartition(record.topic(), record.partition());
    this.offset = record.offset();
    this.deserializedMetadata = deserializedMetadata;
}
 
Developer ID: epam, Project: Lagerta, Lines: 9, Source: TransactionWrapper.java

Example 10: onCompletion

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public void onCompletion(Throwable error, ConsumerRecord<String, String> record) {
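    // group consumed records by their source topic-partition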
    TopicPartition partition = new TopicPartition(record.topic(), record.partition());
    List<ConsumerRecord<String, String>> records = consumedRecords.get(partition);
    if (records == null) {
        records = new ArrayList<>();
        consumedRecords.put(partition, records);
    }
    records.add(record);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 11, Source: KafkaBasedLogTest.java

Example 11: update

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@SuppressWarnings("unchecked")
@Override
public void update(final ConsumerRecord<byte[], byte[]> record) {
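    // look up the source node and deserializer registered for this record's topic,
    // process the deserialized record, then track the next offset to checkpoint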
    final SourceNodeAndDeserializer sourceNodeAndDeserializer = deserializers.get(record.topic());
    final ConsumerRecord<Object, Object> deserialized = sourceNodeAndDeserializer.deserializer.deserialize(record);
    final ProcessorRecordContext recordContext =
            new ProcessorRecordContext(deserialized.timestamp(),
                                       deserialized.offset(),
                                       deserialized.partition(),
                                       deserialized.topic());
    processorContext.setRecordContext(recordContext);
    processorContext.setCurrentNode(sourceNodeAndDeserializer.sourceNode);
    sourceNodeAndDeserializer.sourceNode.process(deserialized.key(), deserialized.value());
    offsets.put(new TopicPartition(record.topic(), record.partition()), deserialized.offset() + 1);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 16, Source: GlobalStateUpdateTask.java

Example 12: update

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public void update(final ConsumerRecord<byte[], byte[]> record) {
    final TopicPartition tp = new TopicPartition(record.topic(), record.partition());
    if (!updatedPartitions.containsKey(tp)) {
        updatedPartitions.put(tp, 0);
    }
    updatedPartitions.put(tp, updatedPartitions.get(tp) + 1);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 9, Source: StateConsumerTest.java

Example 13: addRecord

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
private static void addRecord(final ConsumerRecord<byte[], byte[]> record,
                              final Map<String, Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>>> recordPerTopicPerPartition) {
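    // only the four topics used by the EOS test driver are expected; anything else fails the test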

    final String topic = record.topic();
    final TopicPartition partition = new TopicPartition(topic, record.partition());

    if ("data".equals(topic)
        || "echo".equals(topic)
        || "min".equals(topic)
        || "sum".equals(topic)) {

        Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> topicRecordsPerPartition
            = recordPerTopicPerPartition.get(topic);

        if (topicRecordsPerPartition == null) {
            topicRecordsPerPartition = new HashMap<>();
            recordPerTopicPerPartition.put(topic, topicRecordsPerPartition);
        }

        List<ConsumerRecord<byte[], byte[]>> records = topicRecordsPerPartition.get(partition);
        if (records == null) {
            records = new ArrayList<>();
            topicRecordsPerPartition.put(partition, records);
        }
        records.add(record);
    } else {
        throw new RuntimeException("FAIL: received data from unexpected topic: " + record);
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 30, Source: EosTestDriver.java

Example 14: process

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
/**
 * Handles a MySQL full data pull request.
 */
@Override
public void process(final ConsumerRecord<String, byte[]> consumerRecord, Object... args) {
    try {
        List<MessageEntry> msgEntryLst = parser.getEntry(consumerRecord.value());
        if (msgEntryLst.isEmpty()) {
            return;
        }
        EntryHeader entryHeader = msgEntryLst.get(0).getEntryHeader();
        EventType operType = entryHeader.getOperType();

        // TODO: for now, discard update/delete messages on full-pull tables
        if (operType != EventType.INSERT) {
            listener.reduceFlowSize(consumerRecord.serializedValueSize());
            consumerListener.syncOffset(consumerRecord);
            return;
        }

        // check whether this message has already been processed
        String msgPos = entryHeader.getPos();
        Object processed = cache.getIfPresent(msgPos);
        if (processed != null) {
            logger.info("Data has been processed, the data position is [{}]", msgPos);
            listener.reduceFlowSize(consumerRecord.serializedValueSize());
            consumerListener.syncOffset(consumerRecord);
            return;
        }

        logger.info("Received FULL DATA PULL REQUEST message");
        ControlMessage message = Convertor.mysqlFullPullMessage(msgEntryLst.get(0), listener.getListenerId(), consumerRecord);
        String schemaName = getStringValue("SCHEMA_NAME", message);
        String tableName = getStringValue("TABLE_NAME", message);
        DataTable table = ThreadLocalCache.get(Constants.CacheNames.DATA_TABLES, Utils.buildDataTableCacheKey(schemaName, tableName));
        if (table == null) {
            logger.warn("Table {}.{} is not supported, please configure it in the dbus database.", schemaName, tableName);
            return;
        }

        for (String controlTopic : controlTopics) {
            String json = message.toJSONString();
            ProducerRecord<String, byte[]> producerRecord = new ProducerRecord<>(controlTopic, message.getType(), json.getBytes());

            Future<RecordMetadata> future = listener.sendRecord(producerRecord);
            future.get();

            logger.info("write initial load request message to kafka: {}", json);
        }

        // pause consumption of this topic partition
        TopicPartition tp = new TopicPartition(consumerRecord.topic(), consumerRecord.partition());
        consumerListener.pauseTopic(tp, consumerRecord.offset(), message);

        // emit a FULL_DATA_PULL_REQ notification to the bolt
        EmitData emitData = new EmitData();
        emitData.add(EmitData.DB_SCHEMA, schemaName);
        emitData.add(EmitData.DATA_TABLE, tableName);

        List<Object> values = new Values(emitData, Command.FULL_DATA_PULL_REQ);
        listener.emitData(values, consumerRecord);

        consumerListener.syncOffset(consumerRecord);
        cache.put(msgPos, msgEntryLst.get(0).toString());
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
 
Developer ID: BriData, Project: DBus, Lines: 68, Source: MysqlInitialLoadProcessor.java

Example 15: execute

import org.apache.kafka.clients.consumer.ConsumerRecord; // import the package/class this method depends on
@Override
public void execute(Tuple input) {
    ConsumerRecord<String, byte[]> record = (ConsumerRecord<String, byte[]>) input.getValueByField("record");

    long kafkaOffset = record.offset();
    String fromTopic = record.topic();

    FullyOffset currentOffset = new FullyOffset(0, 0, 0);

    try {
        // handle data from the ctrl (control) topic
        if (fromTopic.equalsIgnoreCase(dsInfo.getCtrlTopic())) {
            processControlCommand(record, input);
            return;
        }

        // a. pre-process the message data
        processor.preProcess(record);

        // b. read one partition at a time
        List<DispatcherPackage> list;
        int partitionOffset = 0;

        do {
            partitionOffset++;
            list = processor.getNextList();
            if (list == null) {
                break;
            }

            // sub-packages split by schema
            int subOffset = 1;
            for (DispatcherPackage subPackage : list) {
                currentOffset = new FullyOffset(kafkaOffset, partitionOffset, subOffset);

                // 1. extract the data
                String key = subPackage.getKey();
                byte[] content = subPackage.getContent();
                int msgCount = subPackage.getMsgCount();
                String schemaName = subPackage.getSchemaName();
                String toTopic = subPackage.getToTopic();

                ContinuousFullyOffset continuousOffset = getSchemaFullyOffset(schemaName);
                continuousOffset.setProcessingOffset(currentOffset);
                if (key == null) {
                    // 2. build the data message key, recording the previous offset (mainly for log-based troubleshooting)
                    subPackage.setKey(continuousOffset.toString());
                    key = subPackage.getKey();
                }

                logger.debug(String.format("  currentOffset=%s, from_topic: %s, (to_topic:%s, schemaName=%s), Key=%s, msg_count=%d",
                        currentOffset.toString(), fromTopic, toTopic, schemaName, key, msgCount));
                this.collector.emit(input, new Values(subPackage, currentOffset));

                continuousOffset.setProcessedOffset(currentOffset);

                subOffset++;
            }
        } while (true);

        this.collector.ack(input);

    } catch (Exception ex) {
        // Print something in the log
        logger.error(String.format("FAIL! Dispatcher bolt fails at offset (%s).", currentOffset.toString()));
        // Call fail
        this.collector.fail(input);

        collector.reportError(ex);
        throw new RuntimeException(ex);
    }
}
 
Developer ID: BriData, Project: DBus, Lines: 74, Source: DispatcherBout.java


Note: The org.apache.kafka.clients.consumer.ConsumerRecord.topic method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright belongs to the original authors, and distribution and use are subject to the license of the corresponding project. Do not reproduce without permission.