

Java SimpleConsumer.getOffsetsBefore Method Code Examples

This article collects typical code examples of the Java method kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore. If you are wondering how SimpleConsumer.getOffsetsBefore is used in practice, what it is for, or what real-world calls look like, the curated examples below may help. You can also explore further usage examples of the enclosing class, kafka.javaapi.consumer.SimpleConsumer.


The following 15 code examples of SimpleConsumer.getOffsetsBefore are sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
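
Before the project-specific examples, here is a minimal, self-contained sketch of the typical call pattern. The broker address localhost:9092, the topic my-topic, partition 0, and the client id offset-lookup are placeholder assumptions for illustration, not values taken from the examples below: build a PartitionOffsetRequestInfo keyed by a TopicAndPartition, wrap it in a kafka.javaapi.OffsetRequest, call getOffsetsBefore, and read the first returned offset.

import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetRequest;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetLookup {
  public static void main(String[] args) {
    // Placeholder broker, topic, partition, and client id -- adjust for your cluster.
    SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "offset-lookup");
    try {
      TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
      Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
          new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
      // LatestTime() (-1) asks for the log-end offset; EarliestTime() (-2) would ask for the oldest retained offset.
      requestInfo.put(tp, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
      OffsetRequest request = new OffsetRequest(
          requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "offset-lookup");
      OffsetResponse response = consumer.getOffsetsBefore(request);
      if (response.hasError()) {
        System.out.println("Offset request failed, error code: " + response.errorCode("my-topic", 0));
      } else {
        System.out.println("Latest offset: " + response.offsets("my-topic", 0)[0]);
      }
    } finally {
      consumer.close();
    }
  }
}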

Example 1: getOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private static OffsetInfo getOffset(String topic, PartitionMetadata partition) {
  Broker broker = partition.leader();

  SimpleConsumer consumer = new SimpleConsumer(broker.host(), broker.port(), 10000, 1000000,
                                               "com.rekko.newrelic.storm.kafka");
  try {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition.partitionId());
    // -1 asks for the latest offset; 1 limits the response to a single offset
    PartitionOffsetRequestInfo request = new PartitionOffsetRequestInfo(-1, 1);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> map =
        new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    map.put(topicAndPartition, request);
    OffsetRequest req = new OffsetRequest(map, (short) 0, "com.rekko.newrelic.storm.kafka");
    OffsetResponse resp = consumer.getOffsetsBefore(req);
    OffsetInfo offset = new OffsetInfo();
    offset.offset = resp.offsets(topic, partition.partitionId())[0];
    return offset;
  } finally {
    consumer.close();
  }
}
 
Developer ID: ghais, Project: newrelic_storm_kafka, Lines: 24, Source: Kafka.java

Example 2: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
public static long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime,
        String clientName) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    OffsetRequest request = new OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);
    if (response.hasError()) {
        System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic,
                partition));
        return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    return offsets[0];
}
 
Developer ID: wngn123, Project: wngn-jms-kafka, Lines: 17, Source: SimpleConsumerExample.java

Example 3: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                  long whichTime) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo,
        kafka.api.OffsetRequest.CurrentVersion(), CLIENT_ID);
    OffsetResponse response = consumer.getOffsetsBefore(request);

    if (response.hasError()) {
        System.out.println("Error fetching data Offset Data the Broker. Reason: "
                           + response.errorCode(topic, partition));
        return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    return offsets[0];
}
 
Developer ID: warlock-china, Project: azeroth, Lines: 18, Source: ZkConsumerCommand.java

Example 4: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
/**
 * Defines where to start reading data from
 * Helpers Available:
 * kafka.api.OffsetRequest.EarliestTime() => finds the beginning of the data in the logs and starts streaming
 * from there
 * kafka.api.OffsetRequest.LatestTime()   => will only stream new messages
 *
 * @param consumer
 * @param topic
 * @param partition
 * @param whichTime
 * @param clientName
 * @return
 */
public static long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime, String clientName) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);

    if (response.hasError()) {
        System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition));
        return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    return offsets[0];
}
 
Developer ID: bingoohuang, Project: javacode-demo, Lines: 30, Source: SimpleExample.java

Example 5: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private static long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime) {
	TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
	Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
	requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
	kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo,
			kafka.api.OffsetRequest.CurrentVersion(), CLIENT_ID);
	OffsetResponse response = consumer.getOffsetsBefore(request);

	if (response.hasError()) {
		System.out.println(
				"Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition));
		return 0;
	}
	long[] offsets = response.offsets(topic, partition);
	return offsets[0];
}
 
Developer ID: vakinge, Project: jeesuite-libs, Lines: 17, Source: ZkConsumerCommand.java

Example 6: getLatestOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private static long getLatestOffset(SimpleConsumer consumer, TopicAndPartition topicAndPartition) {
  Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
  requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
  kafka.javaapi.OffsetRequest request =
      new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());
  OffsetResponse response = consumer.getOffsetsBefore(request);

  if (response.hasError()) {
    logger.warn("Failed to fetch offset for {} due to {}", topicAndPartition,
        response.errorCode(topicAndPartition.topic(), topicAndPartition.partition()));
    return -1;
  }

  long[] offsets = response.offsets(topicAndPartition.topic(), topicAndPartition.partition());
  return offsets[0];
}
 
Developer ID: uber, Project: chaperone, Lines: 17, Source: KafkaMonitor.java

Example 7: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
/**
 * @param consumer
 * @param topic
 * @param partition
 * @param whichTime
 * @param clientName
 * @return 0 if consumer is null at this time
 */
public static long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime, String clientName)
{
  if (consumer == null) {
    return 0;
  }
  TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
  Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
  requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
  OffsetRequest request = new OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
  OffsetResponse response = consumer.getOffsetsBefore(request);

  if (response.hasError()) {
    logger.error("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition));
    return 0;
  }
  long[] offsets = response.offsets(topic, partition);
  return offsets[0];
}
 
Developer ID: apache, Project: apex-malhar, Lines: 27, Source: KafkaMetadataUtil.java

Example 8: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
public static long getLastOffset(SimpleConsumer consumer, String topic,
		int partition, long whichTime, String clientName) {
	TopicAndPartition topicAndPartition = new TopicAndPartition(topic,
			partition);
	Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
	requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(
			whichTime, 1));
	kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
			requestInfo, kafka.api.OffsetRequest.CurrentVersion(),
			clientName);
	OffsetResponse response = consumer.getOffsetsBefore(request);

	if (response.hasError()) {
		System.out
				.println("Error fetching data Offset Data the Broker. Reason: "
						+ response.errorCode(topic, partition));
		return 0;
	}
	long[] offsets = response.offsets(topic, partition);
	return offsets[0];
}
 
Developer ID: vincenzo-gulisano, Project: Bes, Lines: 22, Source: KafkaReceiver.java

Example 9: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
public static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                 long whichTime, String clientName) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);

    if (response.hasError()) {
        System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition) );
        return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    return offsets[0];
}
 
Developer ID: aakash7864, Project: Simple-Kafka, Lines: 17, Source: SimpleExample.java

Example 10: findAllOffsets

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private static long[] findAllOffsets(SimpleConsumer consumer, String topicName, int partitionId)
{
    TopicAndPartition topicAndPartition = new TopicAndPartition(topicName, partitionId);

    // The API implies that this will always return all of the offsets. So it seems a partition can not have
    // more than Integer.MAX_VALUE-1 segments.
    //
    // This also assumes that the lowest value returned will be the first segment available. So if segments have been dropped off, this value
    // should not be 0.
    PartitionOffsetRequestInfo partitionOffsetRequestInfo = new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), Integer.MAX_VALUE);
    OffsetRequest offsetRequest = new OffsetRequest(ImmutableMap.of(topicAndPartition, partitionOffsetRequestInfo), kafka.api.OffsetRequest.CurrentVersion(), consumer.clientId());
    OffsetResponse offsetResponse = consumer.getOffsetsBefore(offsetRequest);

    if (offsetResponse.hasError()) {
        short errorCode = offsetResponse.errorCode(topicName, partitionId);
        log.warn("Offset response has error: %d", errorCode);
        throw new PrestoException(KAFKA_SPLIT_ERROR, "could not fetch data from Kafka, error code is '" + errorCode + "'");
    }

    return offsetResponse.offsets(topicName, partitionId);
}
 
Developer ID: y-lan, Project: presto, Lines: 22, Source: KafkaSplitManager.java

Example 11: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                 long whichTime, String clientName) throws StageException {
  try {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
      requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);

    if (response.hasError()) {
      LOG.error(KafkaErrors.KAFKA_22.getMessage(), consumer.host() + ":" + consumer.port(),
        response.errorCode(topic, partition));
      return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    return offsets[0];
  } catch (Exception e) {
    LOG.error(KafkaErrors.KAFKA_30.getMessage(), e.toString(), e);
    throw new StageException(KafkaErrors.KAFKA_30, e.toString(), e);
  }
}
 
Developer ID: streamsets, Project: datacollector, Lines: 23, Source: KafkaLowLevelConsumer08.java

Example 12: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
private long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime,
        String clientName) {
    log.debug("Getting latest offset...");
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));

    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo,
            kafka.api.OffsetRequest.CurrentVersion(),
            clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);

    if (response.hasError()) {
        log.error(String.format("Error fetching data Offset Data from the Broker. Reason [%d]", response
                .errorCode(topic, partition)));
        return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    log.debug(String.format("Latest offset [%d]", offsets[0]));

    return offsets[0];
}
 
Developer ID: ogidogi, Project: laughing-octo-sansa, Lines: 24, Source: HBaseExample.java

Example 13: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
public static long getLastOffset( SimpleConsumer consumer, String topic, int partition, long whichTime, String clientName ) {
    TopicAndPartition topicAndPartition = new TopicAndPartition( topic, partition );
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put( topicAndPartition, new PartitionOffsetRequestInfo( whichTime, 1 ) );
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest( requestInfo,
            kafka.api.OffsetRequest.CurrentVersion(), clientName );
    OffsetResponse response = consumer.getOffsetsBefore( request );

    if ( response.hasError() ) {
        System.out.println( "Error fetching data Offset Data the Broker. Reason: "
                + response.errorCode( topic, partition ) );
        return 0;
    }
    long[] offsets = response.offsets( topic, partition );
    return offsets[0];
}
 
Developer ID: krux, Project: java-kafka-client-libs, Lines: 17, Source: KafkaLowLevelConsumer.java

Example 14: getEarliestOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
@Override
public long getEarliestOffset() {
  if (this.earliestOffset == -2 && uri != null) {
    // TODO : Make the hardcoded parameters configurable
    SimpleConsumer consumer = new SimpleConsumer(uri.getHost(), uri.getPort(), 60000,
        1024 * 1024, "hadoop-etl");
    Map<TopicAndPartition, PartitionOffsetRequestInfo> offsetInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    offsetInfo.put(new TopicAndPartition(topic, partition), new PartitionOffsetRequestInfo(
        kafka.api.OffsetRequest.EarliestTime(), 1));
    OffsetResponse response = consumer
        .getOffsetsBefore(new OffsetRequest(offsetInfo, kafka.api.OffsetRequest
            .CurrentVersion(), "hadoop-etl"));
    long[] endOffset = response.offsets(topic, partition);
    consumer.close();
    this.earliestOffset = endOffset[0];
    return endOffset[0];
  } else {
    return this.earliestOffset;
  }
}
 
Developer ID: HiveKa, Project: HiveKa, Lines: 21, Source: KafkaRequest.java

Example 15: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the package/class the method depends on
@Override
public long getLastOffset(long time) {
  SimpleConsumer consumer = new SimpleConsumer(uri.getHost(), uri.getPort(), 60000,
      1024 * 1024, "hadoop-etl");
  Map<TopicAndPartition, PartitionOffsetRequestInfo> offsetInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
  offsetInfo.put(new TopicAndPartition(topic, partition), new PartitionOffsetRequestInfo(
      time, 1));
  OffsetResponse response = consumer.getOffsetsBefore(new OffsetRequest(offsetInfo,
      kafka.api.OffsetRequest.CurrentVersion(),"hadoop-etl"));
  long[] endOffset = response.offsets(topic, partition);
  consumer.close();
  if(endOffset.length == 0)
  {
    log.info("The exception is thrown because the latest offset retunred zero for topic : " + topic + " and partition " + partition);
  }
  this.latestOffset = endOffset[0];
  return endOffset[0];
}
 
Developer ID: HiveKa, Project: HiveKa, Lines: 19, Source: KafkaRequest.java


Note: The kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are selected from open-source projects contributed by their respective developers, and the source code copyright belongs to the original authors. Please consult each project's license before distributing or using the code; do not reproduce this article without permission.