

Java SimpleConsumer.close Method Code Examples

This article collects typical usage examples of the kafka.javaapi.consumer.SimpleConsumer.close method in Java. If you are wondering what SimpleConsumer.close does, how to call it, or where it is used in practice, the curated examples below should help. You can also explore further usage examples of the enclosing class, kafka.javaapi.consumer.SimpleConsumer.


The following presents 15 code examples of the SimpleConsumer.close method, ordered by popularity by default.
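Every example below follows the same lifecycle: construct a SimpleConsumer, issue requests against a broker, and call close() in a finally block so the underlying socket is released even when a request fails. Here is a minimal sketch of that pattern; the broker address, timeouts, buffer size, topic, and client id are illustrative assumptions rather than values taken from any of the projects below.

import java.util.Collections;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class SimpleConsumerCloseSketch {
  public static void main(String[] args) {
    // Illustrative connection settings; replace with a reachable broker.
    SimpleConsumer consumer = new SimpleConsumer("localhost", 9092,
        10000 /* soTimeout ms */, 64 * 1024 /* bufferSize */, "example-client");
    try {
      TopicMetadataResponse response =
          consumer.send(new TopicMetadataRequest(Collections.singletonList("my-topic")));
      System.out.println("Topic metadata: " + response.topicsMetadata());
    } finally {
      // Release the broker connection even if send() throws.
      consumer.close();
    }
  }
}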

Example 1: fetchTopicMetadataFromBroker

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
private List<TopicMetadata> fetchTopicMetadataFromBroker(String broker, String... selectedTopics) {
  LOG.info(String.format("Fetching topic metadata from broker %s", broker));
  SimpleConsumer consumer = null;
  try {
    consumer = getSimpleConsumer(broker);
    for (int i = 0; i < NUM_TRIES_FETCH_TOPIC; i++) {
      try {
        return consumer.send(new TopicMetadataRequest(Arrays.asList(selectedTopics))).topicsMetadata();
      } catch (Exception e) {
        LOG.warn(String.format("Fetching topic metadata from broker %s has failed %d times.", broker, i + 1), e);
        try {
          Thread.sleep((long) ((i + Math.random()) * 1000));
        } catch (InterruptedException e2) {
          LOG.warn("Caught InterruptedException: " + e2);
        }
      }
    }
  } finally {
    if (consumer != null) {
      consumer.close();
    }
  }
  return null;
}
 
Developer ID: Hanmourang, Project: Gobblin, Lines of code: 25, Source: KafkaWrapper.java

Example 2: getOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
private static OffsetInfo getOffset(String topic, PartitionMetadata partition) {
  Broker broker = partition.leader();

  SimpleConsumer consumer = new SimpleConsumer(broker.host(), broker.port(), 10000, 1000000,
                                               "com.rekko.newrelic.storm.kafka");
  try {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition.partitionId());
    // -1 requests the latest offset (kafka.api.OffsetRequest.LatestTime()).
    PartitionOffsetRequestInfo request = new PartitionOffsetRequestInfo(-1, 1);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> map =
        new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    map.put(topicAndPartition, request);
    OffsetRequest req = new OffsetRequest(map, (short) 0, "com.rekko.newrelic.storm.kafka");
    OffsetResponse resp = consumer.getOffsetsBefore(req);
    OffsetInfo offset = new OffsetInfo();
    offset.offset = resp.offsets(topic, partition.partitionId())[0];
    return offset;
  } finally {
    consumer.close();
  }
}
 
Developer ID: ghais, Project: newrelic_storm_kafka, Lines of code: 24, Source: Kafka.java

Example 3: getTopicMetadataFromBroker

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
private List<TopicMetadata> getTopicMetadataFromBroker(List<KafkaNode> bootstrapNodes, String topicName) throws NoBrokerAvailableException {
    Exception lastException = null;
    for (KafkaNode bootstrapNode : bootstrapNodes) {
        SimpleConsumer consumer = null;

        try {
            consumer = createConsumer(bootstrapNode);
            final TopicMetadataRequest request = new TopicMetadataRequest(Collections.singletonList(topicName));
            final TopicMetadataResponse response = consumer.send(request);

            return response.topicsMetadata();
        } catch (Exception e) {
            lastException = e;
        } finally {
            if (consumer != null) {
                consumer.close();
            }
        }
    }

    final String message = String.format("No broker available for topic '%s' with servers '%s'", topicName, Arrays.toString(bootstrapNodes.toArray()));
    throw new NoBrokerAvailableException(message, lastException);
}
 
Developer ID: researchgate, Project: kafka-metamorph, Lines of code: 24, Source: Kafka08PartitionConsumer.java

Example 4: close

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
@Override
public void close() throws IOException {
  int numOfConsumersNotClosed = 0;

  for (SimpleConsumer consumer : this.activeConsumers.values()) {
    if (consumer != null) {
      try {
        consumer.close();
      } catch (Exception e) {
        LOG.warn(String.format("Failed to close Kafka Consumer %s:%d", consumer.host(), consumer.port()), e);
        numOfConsumersNotClosed++;
      }
    }
  }
  this.activeConsumers.clear();
  if (numOfConsumersNotClosed > 0) {
    throw new IOException(numOfConsumersNotClosed + " consumer(s) failed to close.");
  }
}
 
Developer ID: Hanmourang, Project: Gobblin, Lines of code: 20, Source: KafkaWrapper.java

Example 5: readMessages

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
public List<byte[]> readMessages(String topic) {
  SimpleConsumer consumer = new SimpleConsumer("localhost", 6667, 100000, 64 * 1024, "consumer");
  FetchRequest req = new FetchRequestBuilder()
          .clientId("consumer")
          .addFetch(topic, 0, 0, 100000)
          .build();
  List<byte[]> messages = new ArrayList<>();
  try {
    FetchResponse fetchResponse = consumer.fetch(req);
    Iterator<MessageAndOffset> results = fetchResponse.messageSet(topic, 0).iterator();
    while (results.hasNext()) {
      ByteBuffer payload = results.next().message().payload();
      byte[] bytes = new byte[payload.limit()];
      payload.get(bytes);
      messages.add(bytes);
    }
  } finally {
    // Close in finally so the connection is released even if the fetch fails.
    consumer.close();
  }
  return messages;
}
 
Developer ID: apache, Project: metron, Lines of code: 19, Source: KafkaComponent.java

Example 6: getEarliestOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
@Override
public long getEarliestOffset() {
  if (this.earliestOffset == -2 && uri != null) {
    // TODO: Make the hardcoded parameters configurable
    SimpleConsumer consumer = new SimpleConsumer(uri.getHost(), uri.getPort(), 60000,
        1024 * 1024, "hadoop-etl");
    Map<TopicAndPartition, PartitionOffsetRequestInfo> offsetInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    offsetInfo.put(new TopicAndPartition(topic, partition), new PartitionOffsetRequestInfo(
        kafka.api.OffsetRequest.EarliestTime(), 1));
    OffsetResponse response = consumer
        .getOffsetsBefore(new OffsetRequest(offsetInfo, kafka.api.OffsetRequest
            .CurrentVersion(), "hadoop-etl"));
    long[] endOffset = response.offsets(topic, partition);
    consumer.close();
    this.earliestOffset = endOffset[0];
    return endOffset[0];
  } else {
    return this.earliestOffset;
  }
}
 
Developer ID: HiveKa, Project: HiveKa, Lines of code: 21, Source: KafkaRequest.java

Example 7: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
@Override
public long getLastOffset(long time) {
  SimpleConsumer consumer = new SimpleConsumer(uri.getHost(), uri.getPort(), 60000,
      1024 * 1024, "hadoop-etl");
  Map<TopicAndPartition, PartitionOffsetRequestInfo> offsetInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
  offsetInfo.put(new TopicAndPartition(topic, partition), new PartitionOffsetRequestInfo(
      time, 1));
  OffsetResponse response = consumer.getOffsetsBefore(new OffsetRequest(offsetInfo,
      kafka.api.OffsetRequest.CurrentVersion(),"hadoop-etl"));
  long[] endOffset = response.offsets(topic, partition);
  consumer.close();
  if (endOffset.length == 0) {
    log.info("An exception will be thrown because the offset request returned zero results for topic : "
        + topic + " and partition " + partition);
  }
  this.latestOffset = endOffset[0];
  return endOffset[0];
}
 
Developer ID: HiveKa, Project: HiveKa, Lines of code: 19, Source: KafkaRequest.java

Example 8: fetchTopicMetadataFromBroker

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
private List<TopicMetadata> fetchTopicMetadataFromBroker(String broker, String... selectedTopics) {
  LOG.info(String.format("Fetching topic metadata from broker %s", broker));
  SimpleConsumer consumer = null;
  try {
    consumer = getSimpleConsumer(broker);
    for (int i = 0; i < this.fetchTopicRetries; i++) {
      try {
        return consumer.send(new TopicMetadataRequest(Arrays.asList(selectedTopics))).topicsMetadata();
      } catch (Exception e) {
        LOG.warn(String.format("Fetching topic metadata from broker %s has failed %d times.", broker, i + 1), e);
        try {
          Thread.sleep((long) ((i + Math.random()) * 1000));
        } catch (InterruptedException e2) {
          LOG.warn("Caught InterruptedException: " + e2);
        }
      }
    }
  } finally {
    if (consumer != null) {
      consumer.close();
    }
  }
  return null;
}
 
Developer ID: apache, Project: incubator-gobblin, Lines of code: 25, Source: KafkaWrapper.java

Example 9: fetchTopicMetadataFromBroker

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
private List<TopicMetadata> fetchTopicMetadataFromBroker(String broker, String... selectedTopics) {
  log.info(String.format("Fetching topic metadata from broker %s", broker));
  SimpleConsumer consumer = null;
  try {
    consumer = getSimpleConsumer(broker);
    for (int i = 0; i < this.fetchTopicRetries; i++) {
      try {
        return consumer.send(new TopicMetadataRequest(Arrays.asList(selectedTopics))).topicsMetadata();
      } catch (Exception e) {
        log.warn(String.format("Fetching topic metadata from broker %s has failed %d times.", broker, i + 1), e);
        try {
          Thread.sleep((long) ((i + Math.random()) * 1000));
        } catch (InterruptedException e2) {
          log.warn("Caught InterruptedException: " + e2);
        }
      }
    }
  } finally {
    if (consumer != null) {
      consumer.close();
    }
  }
  return null;
}
 
Developer ID: apache, Project: incubator-gobblin, Lines of code: 25, Source: Kafka08ConsumerClient.java

Example 10: close

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
@Override
public void close() throws IOException {
  int numOfConsumersNotClosed = 0;

  for (SimpleConsumer consumer : this.activeConsumers.values()) {
    if (consumer != null) {
      try {
        consumer.close();
      } catch (Exception e) {
        log.warn(String.format("Failed to close Kafka Consumer %s:%d", consumer.host(), consumer.port()), e);
        numOfConsumersNotClosed++;
      }
    }
  }
  this.activeConsumers.clear();
  if (numOfConsumersNotClosed > 0) {
    throw new IOException(numOfConsumersNotClosed + " consumer(s) failed to close.");
  }
}
 
Developer ID: apache, Project: incubator-gobblin, Lines of code: 20, Source: Kafka08ConsumerClient.java

Example 11: getNumPartitions

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
public int getNumPartitions(String topic) {
    SimpleConsumer consumer = null;
    try {
        consumer = createConsumer(
            mConfig.getKafkaSeedBrokerHost(),
            mConfig.getKafkaSeedBrokerPort(),
            "partitionLookup");
        List<String> topics = new ArrayList<String>();
        topics.add(topic);
        TopicMetadataRequest request = new TopicMetadataRequest(topics);
        TopicMetadataResponse response = consumer.send(request);
        if (response.topicsMetadata().size() != 1) {
            throw new RuntimeException("Expected one metadata for topic " + topic + " found " +
                response.topicsMetadata().size());
        }
        TopicMetadata topicMetadata = response.topicsMetadata().get(0);
        return topicMetadata.partitionsMetadata().size();
    } finally {
        if (consumer != null) {
            consumer.close();
        }
    }
}
 
Developer ID: pinterest, Project: secor, Lines of code: 24, Source: KafkaClient.java

Example 12: getCommittedMessage

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
public Message getCommittedMessage(TopicPartition topicPartition) throws Exception {
    SimpleConsumer consumer = null;
    try {
        long committedOffset = mZookeeperConnector.getCommittedOffsetCount(topicPartition) - 1;
        if (committedOffset < 0) {
            return null;
        }
        consumer = createConsumer(topicPartition);
        return getMessage(topicPartition, committedOffset, consumer);
    } catch (MessageDoesNotExistException e) {
        // If a MessageDoesNotExistException is raised, the message at the
        // last committed offset does not exist in Kafka. This is usually due
        // to the message being compacted away by the Kafka log compaction
        // process.
        //
        // That is not an exceptional situation - in fact it can be normal if
        // the topic being consumed by Secor has a low volume. So in that
        // case, simply return null.
        LOG.warn("no committed message for topic {} partition {}", topicPartition.getTopic(), topicPartition.getPartition());
        return null;
    } finally {
        if (consumer != null) {
            consumer.close();
        }
    }
}
 
Developer ID: pinterest, Project: secor, Lines of code: 27, Source: KafkaClient.java
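The getMessage helper called in Example 12 is part of Secor's KafkaClient and is not reproduced on this page. For orientation, here is a hedged sketch of what fetching the single message at a given offset looks like with the SimpleConsumer API; the method name, client id, and 100 KB fetch size are illustrative assumptions, and the sketch returns the raw payload bytes rather than Secor's project-specific Message type.

import java.nio.ByteBuffer;
import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

// Hypothetical helper: fetch the payload of the message stored at `offset`,
// assuming `consumer` is already connected to the partition leader.
private static byte[] fetchPayloadAt(SimpleConsumer consumer, String topic, int partition, long offset) {
    FetchRequest request = new FetchRequestBuilder()
            .clientId("example-client")
            .addFetch(topic, partition, offset, 100000)
            .build();
    FetchResponse response = consumer.fetch(request);
    for (MessageAndOffset messageAndOffset : response.messageSet(topic, partition)) {
        if (messageAndOffset.offset() == offset) {
            ByteBuffer payload = messageAndOffset.message().payload();
            byte[] bytes = new byte[payload.limit()];
            payload.get(bytes);
            return bytes;
        }
    }
    return null; // offset not present, e.g. compacted away by log compaction
}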

Example 13: closeSimpleConsumer

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
/**
 * Close the given consumer and release its resources.
 *
 * @param consumer the consumer to close; may be null
 */
private static void closeSimpleConsumer(SimpleConsumer consumer) {
    if (consumer != null) {
        try {
            consumer.close();
        } catch (Exception e) {
            // Ignore: closing is best-effort and there is nothing useful to do here.
        }
    }
}
 
Developer ID: wngn123, Project: wngn-jms-kafka, Lines of code: 14, Source: JavaKafkaSimpleConsumerAPI.java

Example 14: getTopicPartitionLogSize

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
/**
 * Fetch the log size (latest offset) of the given topic and partition.
 * The getLastOffset helper follows the same offset-request pattern as Example 15 below.
 *
 * @param stat the topic/partition descriptor in which the log size is recorded
 */
public void getTopicPartitionLogSize(TopicPartitionInfo stat) {
    BrokerEndPoint leader = findLeader(stat.getTopic(), stat.getPartition()).leader();
    SimpleConsumer consumer = getConsumerClient(leader.host(), leader.port());

    try {
        long logsize = getLastOffset(consumer, stat.getTopic(), stat.getPartition(),
            kafka.api.OffsetRequest.LatestTime());
        stat.setLogSize(logsize);
    } finally {
        consumer.close();
    }
}
 
Developer ID: warlock-china, Project: azeroth, Lines of code: 17, Source: ZkConsumerCommand.java

Example 15: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the dependent package/class
public static long getLastOffset(String leadBrokerHost, int port, String topic, 
        int partition, long whichTime) {
    String clientName = "Client_" + topic + "_" + partition;
    SimpleConsumer consumer = new SimpleConsumer(leadBrokerHost, port, 
            soTimeoutMS, bufferSize, clientName);
    
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = 
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo,
            kafka.api.OffsetRequest.CurrentVersion(), clientName);
    try {
        OffsetResponse response = consumer.getOffsetsBefore(request);
        if (response.hasError()) {
            LOG.warn("Error fetching offset data from the broker. Reason: "
                    + response.errorCode(topic, partition));
            return 0;
        }
        long[] offsets = response.offsets(topic, partition);
        return offsets.length > 0 ? offsets[0] : -1;
    } finally {
        // Close unconditionally; the constructor never returns null, and the
        // early error return above must not leak the connection.
        try {
            consumer.close();
        } catch (Exception e) {
            LOG.error("SimpleConsumer.close() error due to ", e);
        }
    }
}
 
Developer ID: jretty-org, Project: kafka-xclient, Lines of code: 32, Source: KafkaOffsetTools.java
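A call site for this helper might look like the following; the broker host, port, and topic are placeholder assumptions.

// Hypothetical usage: ask the partition leader for its current latest offset.
long latest = KafkaOffsetTools.getLastOffset(
        "broker1.example.com", 9092, "my-topic", 0, kafka.api.OffsetRequest.LatestTime());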


Note: The kafka.javaapi.consumer.SimpleConsumer.close method examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are selected from open-source projects contributed by their respective developers, and copyright remains with the original authors; refer to each project's license before distributing or using the code, and do not republish without permission.