

Java SimpleConsumer.clientId Method Code Examples

This article collects typical usage examples of the kafka.javaapi.consumer.SimpleConsumer.clientId method in Java. If you are wondering how SimpleConsumer.clientId is used in practice, the curated code examples below may help. You can also explore further usage examples of the enclosing class, kafka.javaapi.consumer.SimpleConsumer.


Seven code examples of the SimpleConsumer.clientId method are shown below, sorted by popularity by default.
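
For orientation before the collected examples: clientId() simply returns the client-id string that was passed to the SimpleConsumer constructor, and each example below forwards it when building an OffsetRequest so the broker can attribute the request to a logical client. The following minimal sketch illustrates this; the broker address (localhost:9092), timeout, buffer size, and class name are placeholder assumptions, not taken from any of the projects below.

import kafka.javaapi.consumer.SimpleConsumer;

public class ClientIdSketch {
    public static void main(String[] args) {
        // Constructor arguments: host, port, socket timeout (ms), receive buffer size (bytes), clientId
        SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "offset-lookup-client");
        try {
            // clientId() returns the id supplied at construction time; the examples below
            // pass it into kafka.javaapi.OffsetRequest so each request carries the same client id.
            System.out.println("clientId = " + consumer.clientId());
        } finally {
            consumer.close();
        }
    }
}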

Example 1: getLatestOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
private static long getLatestOffset(SimpleConsumer consumer, TopicAndPartition topicAndPartition) {
  Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
  requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
  kafka.javaapi.OffsetRequest request =
      new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());
  OffsetResponse response = consumer.getOffsetsBefore(request);

  if (response.hasError()) {
    logger.warn("Failed to fetch offset for {} due to {}", topicAndPartition,
        response.errorCode(topicAndPartition.topic(), topicAndPartition.partition()));
    return -1;
  }

  long[] offsets = response.offsets(topicAndPartition.topic(), topicAndPartition.partition());
  return offsets[0];
}
 
Developer: uber, Project: chaperone, Lines: 17, Source: KafkaMonitor.java

Example 2: getOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
public long getOffset(String topic, int partition, long startOffsetTime) {
    SimpleConsumer simpleConsumer = findLeaderConsumer(partition);

    if (simpleConsumer == null) {
        LOG.error("Consumer is null, cannot get offset from partition: " + partition);
        return -1;
    }

    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(startOffsetTime, 1));
    OffsetRequest request = new OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), simpleConsumer.clientId());

    long[] offsets = simpleConsumer.getOffsetsBefore(request).offsets(topic, partition);
    if (offsets.length > 0) {
        return offsets[0];
    } else {
        return NO_OFFSET;
    }
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 21, Source: KafkaConsumer.java

Example 3: findAllOffsets

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
private static long[] findAllOffsets(SimpleConsumer consumer, String topicName, int partitionId)
{
    TopicAndPartition topicAndPartition = new TopicAndPartition(topicName, partitionId);

    // The API implies that this will always return all of the offsets. So it seems a partition can not have
    // more than Integer.MAX_VALUE-1 segments.
    //
    // This also assumes that the lowest value returned will be the first segment available. So if segments have been dropped off, this value
    // should not be 0.
    PartitionOffsetRequestInfo partitionOffsetRequestInfo = new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), Integer.MAX_VALUE);
    OffsetRequest offsetRequest = new OffsetRequest(ImmutableMap.of(topicAndPartition, partitionOffsetRequestInfo), kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());
    OffsetResponse offsetResponse = consumer.getOffsetsBefore(offsetRequest);

    if (offsetResponse.hasError()) {
        short errorCode = offsetResponse.errorCode(topicName, partitionId);
        log.warn("Offset response has error: %d", errorCode);
        throw new PrestoException(KAFKA_SPLIT_ERROR, "could not fetch data from Kafka, error code is '" + errorCode + "'");
    }

    return offsetResponse.offsets(topicName, partitionId);
}
 
Developer: y-lan, Project: presto, Lines: 22, Source: KafkaSplitManager.java

Example 4: findAllOffsets

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
private static long[] findAllOffsets(SimpleConsumer consumer, String topicName, int partitionId) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topicName, partitionId);

    // The API implies that this will always return all of the offsets. So it seems a partition can not have
    // more than Integer.MAX_VALUE-1 segments.
    //
    // This also assumes that the lowest value returned will be the first segment available. So if segments have been dropped off, this value
    // should not be 0.
    PartitionOffsetRequestInfo partitionOffsetRequestInfo = new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 10000);
    OffsetRequest offsetRequest = new OffsetRequest(ImmutableMap.of(topicAndPartition, partitionOffsetRequestInfo), kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());
    OffsetResponse offsetResponse = consumer.getOffsetsBefore(offsetRequest);

    if (offsetResponse.hasError()) {
        short errorCode = offsetResponse.errorCode(topicName, partitionId);
        LOGGER.warn(format("Offset response has error: %d", errorCode));
        throw new RakamException("could not fetch data from Kafka, error code is '" + errorCode + "'", HttpResponseStatus.INTERNAL_SERVER_ERROR);
    }

    long[] offsets = offsetResponse.offsets(topicName, partitionId);

    return offsets;
}
 
Developer: rakam-io, Project: rakam, Lines: 23, Source: KafkaOffsetManager.java

Example 5: getOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
public static long getOffset(SimpleConsumer consumer, String topic, int partition, long startOffsetTime) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(startOffsetTime, 1));
    OffsetRequest request = new OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());

    long[] offsets = consumer.getOffsetsBefore(request).offsets(topic, partition);
    if (offsets.length > 0) {
        return offsets[0];
    } else {
        return NO_OFFSET;
    }
}
 
Developer: redBorder, Project: rb-bi, Lines: 15, Source: KafkaUtils.java

Example 6: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
/**
 * Retrieves the last offset before the given timestamp for a given topic partition.
 *
 * @return The last offset before the given timestamp or {@code 0} if failed to do so.
 */
private long getLastOffset(TopicPartition topicPart, long timestamp) {
  BrokerInfo brokerInfo = brokerService.getLeader(topicPart.getTopic(), topicPart.getPartition());
  SimpleConsumer consumer = brokerInfo == null ? null : consumers.getUnchecked(brokerInfo);

  // If no broker, treat it as failure attempt.
  if (consumer == null) {
    LOG.warn("Failed to talk to any broker. Default offset to 0 for {}", topicPart);
    return 0L;
  }

  // Fire offset request
  OffsetRequest request = new OffsetRequest(ImmutableMap.of(
    new TopicAndPartition(topicPart.getTopic(), topicPart.getPartition()),
    new PartitionOffsetRequestInfo(timestamp, 1)
  ), kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());

  OffsetResponse response = consumer.getOffsetsBefore(request);

  // Retrieve offsets from response
  long[] offsets = response.hasError() ? null : response.offsets(topicPart.getTopic(), topicPart.getPartition());
  if (offsets == null || offsets.length <= 0) {
    short errorCode = response.errorCode(topicPart.getTopic(), topicPart.getPartition());

    // If the topic partition doesn't exist, use offset 0 without logging an error.
    if (errorCode != ErrorMapping.UnknownTopicOrPartitionCode()) {
      consumers.refresh(brokerInfo);
      LOG.warn("Failed to fetch offset for {} with timestamp {}. Error: {}. Default offset to 0.",
               topicPart, timestamp, errorCode);
    }
    return 0L;
  }

  LOG.debug("Offset {} fetched for {} with timestamp {}.", offsets[0], topicPart, timestamp);
  return offsets[0];
}
 
Developer: apache, Project: twill, Lines: 41, Source: SimpleKafkaConsumer.java

Example 7: getOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class this method depends on
private static long getOffset(SimpleConsumer simpleConsumer, String topic, int partition, long startOffsetTime) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(startOffsetTime, 1));
    OffsetRequest request = new OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), simpleConsumer.clientId());

    long[] offsets = simpleConsumer.getOffsetsBefore(request).offsets(topic, partition);
    if (offsets.length > 0) {
        return offsets[0];
    } else {
        return NO_OFFSET;
    }
}
 
Developer: linzhaoming, Project: easyframe-msg, Lines: 14, Source: SimpleKafkaHelper.java


Note: The kafka.javaapi.consumer.SimpleConsumer.clientId examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with the original authors. Please refer to each project's license before redistributing or using the code. Do not repost without permission.