Java SimpleConsumer.send Method Code Examples

This article collects typical usages of the Java method kafka.javaapi.consumer.SimpleConsumer.send. If you are wondering what SimpleConsumer.send does, how to use it, or where to find examples of it, the curated code examples below should help. You can also explore the other usage examples for the enclosing class, kafka.javaapi.consumer.SimpleConsumer.


The following presents 14 code examples of SimpleConsumer.send, sorted by popularity by default.
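All 14 examples below share the same basic pattern: open a SimpleConsumer against a broker, call send() with a TopicMetadataRequest, and iterate the returned metadata. As orientation before the examples, here is a minimal sketch of that pattern. It is a sketch, not a runnable program: it assumes the Kafka 0.8.x client jar is on the classpath and a broker is reachable; "BROKER_HOST", the port, and the topic name "my-topic" are placeholders, not values from any example below.

```java
import java.util.Collections;

import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class SendSketch {
    public static void main(String[] args) {
        // Placeholder broker coordinates; replace with a real 0.8.x broker.
        SimpleConsumer consumer = new SimpleConsumer(
                "BROKER_HOST", 9092,
                10000,            // socket timeout (ms)
                64 * 1024,        // receive buffer size (bytes)
                "metadataLookup"  // client id
        );
        try {
            // send() issues a blocking TopicMetadataRequest over the consumer's socket.
            TopicMetadataRequest request =
                    new TopicMetadataRequest(Collections.singletonList("my-topic"));
            TopicMetadataResponse response = consumer.send(request);
            for (TopicMetadata metadata : response.topicsMetadata()) {
                System.out.println(metadata.topic() + " has "
                        + metadata.partitionsMetadata().size() + " partition(s)");
            }
        } finally {
            consumer.close(); // always release the broker connection
        }
    }
}
```

The examples below vary this pattern to look up partition leaders, count partitions, check topic existence, and fetch offsets.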

Example 1: getPartitionMetadata

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public static PartitionMetadata getPartitionMetadata(final SimpleConsumer consumer, final List<String> topics, final int partitionId) {
    try {
        TopicMetadataRequest req = new TopicMetadataRequest(topics);
        TopicMetadataResponse resp = consumer.send(req);

        List<TopicMetadata> topicMetadataList = resp.topicsMetadata();

        for (TopicMetadata metaData : topicMetadataList) {
            for (PartitionMetadata part : metaData.partitionsMetadata()) {
                if (part.partitionId() == partitionId) {
                    return part;
                }
            }
        }
    } catch (Exception e) {
        LOG.warn("Unable to fetch partition meta data from host[{}:{}] [{}:{}]", consumer.host(), consumer.port(), topics, partitionId, e);
    }

    return null;
}
 
Author: jeoffreylim | Project: maelstrom | Lines: 21 | Source file: KafkaMetaData.java

Example 2: updateLeaderMap

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
private void updateLeaderMap() {
  for (String broker : brokerList) {
    try {
      SimpleConsumer consumer = getSimpleConsumer(broker);
      TopicMetadataRequest req = new TopicMetadataRequest(auditTopics);
      kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);
      List<TopicMetadata> metaData = resp.topicsMetadata();

      for (TopicMetadata tmd : metaData) {
        for (PartitionMetadata pmd : tmd.partitionsMetadata()) {
          TopicAndPartition topicAndPartition = new TopicAndPartition(tmd.topic(), pmd.partitionId());
          partitionLeader.put(topicAndPartition, getHostPort(pmd.leader()));
        }
      }
    } catch (Exception e) {
      logger.warn("Got exception to get metadata from broker=" + broker, e);
    }
  }
}
 
Author: uber | Project: chaperone | Lines: 20 | Source file: KafkaMonitor.java

Example 3: getTopicMetadataFromBroker

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
private List<TopicMetadata> getTopicMetadataFromBroker(List<KafkaNode> bootstrapNodes, String topicName) throws NoBrokerAvailableException {
    Exception lastException = null;
    for (KafkaNode bootstrapNode : bootstrapNodes) {
        SimpleConsumer consumer = null;

        try {
            consumer = createConsumer(bootstrapNode);
            final TopicMetadataRequest request = new TopicMetadataRequest(Collections.singletonList(topicName));
            final TopicMetadataResponse response = consumer.send(request);

            return response.topicsMetadata();
        } catch (Exception e) {
            lastException = e;
        } finally {
            if (consumer != null) {
                consumer.close();
            }
        }
    }

    final String message = String.format("No broker available for topic '%s' with servers '%s'", topicName, Arrays.toString(bootstrapNodes.toArray()));
    throw new NoBrokerAvailableException(message, lastException);
}
 
Author: researchgate | Project: kafka-metamorph | Lines: 24 | Source file: Kafka08PartitionConsumer.java

Example 4: doesTopicExist

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public boolean doesTopicExist(String topic) {
    log.debug("Does Topic {} exist?", topic);
    SimpleConsumer consumer = new SimpleConsumer(host, port, soTimeout, bufferSize, clientId);
    try {
        // An empty topic list asks the broker for metadata on all topics.
        List<String> topics = new ArrayList<>();
        TopicMetadataRequest request = new TopicMetadataRequest(topics);
        TopicMetadataResponse response = consumer.send(request);

        for (TopicMetadata item : response.topicsMetadata()) {
            if (item.topic().equals(topic)) {
                log.debug("Found Topic {}.", topic);
                return true;
            }
        }
        log.debug("Did not find Topic {}.", topic);
        return false;
    } finally {
        consumer.close(); // release the broker connection
    }
}
 
Author: javabilities | Project: producer | Lines: 18 | Source file: MessageService.java

Example 5: getPartitions

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public List<TopicPartitionLeader> getPartitions(SimpleConsumer consumer, String topic) {
    List<TopicPartitionLeader> partitions = new ArrayList<TopicPartitionLeader>();
    TopicMetadataRequest topicMetadataRequest = new TopicMetadataRequest(Collections.singletonList(topic));
    TopicMetadataResponse topicMetadataResponse = consumer.send(topicMetadataRequest);
    List<TopicMetadata> topicMetadataList = topicMetadataResponse.topicsMetadata();
    for (TopicMetadata topicMetadata : topicMetadataList) {
        List<PartitionMetadata> partitionMetadataList = topicMetadata.partitionsMetadata();
        for (PartitionMetadata partitionMetadata : partitionMetadataList) {
            if (partitionMetadata.leader() != null) {
                String partitionLeaderHost = partitionMetadata.leader().host();
                int partitionLeaderPort = partitionMetadata.leader().port();
                int partitionId = partitionMetadata.partitionId();
                TopicPartitionLeader topicPartitionLeader = new TopicPartitionLeader(topic, partitionId, partitionLeaderHost, partitionLeaderPort);
                partitions.add(topicPartitionLeader);
            }
        }
    }
    return partitions;
}
 
Author: Symantec | Project: kafka-monitoring-tool | Lines: 20 | Source file: KafkaConsumerOffsetUtil.java

Example 6: getNumPartitions

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public int getNumPartitions(String topic) {
    SimpleConsumer consumer = null;
    try {
        consumer = createConsumer(
            mConfig.getKafkaSeedBrokerHost(),
            mConfig.getKafkaSeedBrokerPort(),
            "partitionLookup");
        List<String> topics = new ArrayList<String>();
        topics.add(topic);
        TopicMetadataRequest request = new TopicMetadataRequest(topics);
        TopicMetadataResponse response = consumer.send(request);
        if (response.topicsMetadata().size() != 1) {
            throw new RuntimeException("Expected one metadata for topic " + topic + " found " +
                response.topicsMetadata().size());
        }
        TopicMetadata topicMetadata = response.topicsMetadata().get(0);
        return topicMetadata.partitionsMetadata().size();
    } finally {
        if (consumer != null) {
            consumer.close();
        }
    }
}
 
Author: pinterest | Project: secor | Lines: 24 | Source file: KafkaClient.java

Example 7: getTopicMetaData

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
/** Returns TopicMetadata for the given list of topics.
 * 
 * @param topics topic names to look up
 * @return metadata for each topic */
public static List<TopicMetadata> getTopicMetaData(List<String> topics) {
	SimpleConsumer simpleConsumer = SimpleKafkaHelper.getDefaultSimpleConsumer();
	TopicMetadataRequest metaDataRequest = new TopicMetadataRequest(topics);
	TopicMetadataResponse resp = simpleConsumer.send(metaDataRequest);
	List<TopicMetadata> metadatas = resp.topicsMetadata();

	return metadatas;
}
 
Author: linzhaoming | Project: easyframe-msg | Lines: 13 | Source file: AdminUtil.java

Example 8: listTopics

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public List<String> listTopics() {
    log.debug("List Topics");
    SimpleConsumer consumer = new SimpleConsumer(host, port, soTimeout, bufferSize, clientId);
    try {
        // An empty topic list asks the broker for metadata on all topics.
        TopicMetadataRequest request = new TopicMetadataRequest(new ArrayList<String>());
        TopicMetadataResponse response = consumer.send(request);

        List<String> topics = new ArrayList<>();
        for (TopicMetadata item : response.topicsMetadata()) {
            topics.add(item.topic());
        }

        log.debug("Found {} Topics", topics.size());
        return topics;
    } finally {
        consumer.close(); // release the broker connection
    }
}
 
Author: javabilities | Project: producer | Lines: 16 | Source file: MessageService.java

Example 9: getLastOffset

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                           long whichTime, String clientName) {
    long lastOffset = 0;
    try {
        List<String> topics = Collections.singletonList(topic);
        TopicMetadataRequest req = new TopicMetadataRequest(topics);
        kafka.javaapi.TopicMetadataResponse topicMetadataResponse = consumer.send(req);
        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
        // Offset requests must go to the partition leader, so re-resolve the consumer.
        for (TopicMetadata topicMetadata : topicMetadataResponse.topicsMetadata()) {
            for (PartitionMetadata partitionMetadata : topicMetadata.partitionsMetadata()) {
                if (partitionMetadata.partitionId() == partition && partitionMetadata.leader() != null) {
                    consumer = getConsumer(partitionMetadata.leader().host(),
                        partitionMetadata.leader().port(), clientName);
                    break;
                }
            }
        }
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);
        if (response.hasError()) {
            LOG.error("Error fetching Offset Data from the Broker. Reason: " + response.errorCode(topic, partition));
            return 0;
        }
        long[] offsets = response.offsets(topic, partition);
        lastOffset = offsets[0];
    } catch (Exception e) {
        LOG.error("Error while collecting the log Size for topic: " + topic + ", and partition: " + partition, e);
    }
    return lastOffset;
}
 
Author: Symantec | Project: kafka-monitoring-tool | Lines: 33 | Source file: KafkaConsumerOffsetUtil.java

Example 10: getTopicOffsets

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
private Map<String, Long> getTopicOffsets(List<String> topics) {
    ArrayList<HostAndPort> nodes = new ArrayList<>(config.getNodes());
    Collections.shuffle(nodes);

    SimpleConsumer simpleConsumer = consumerManager.getConsumer(nodes.get(0));
    TopicMetadataRequest topicMetadataRequest = new TopicMetadataRequest(topics);
    TopicMetadataResponse topicMetadataResponse = simpleConsumer.send(topicMetadataRequest);

    ImmutableMap.Builder<String, Long> builder = ImmutableMap.builder();

    for (TopicMetadata metadata : topicMetadataResponse.topicsMetadata()) {
        for (PartitionMetadata part : metadata.partitionsMetadata()) {
            LOGGER.debug(format("Adding Partition %s/%s", metadata.topic(), part.partitionId()));
            Broker leader = part.leader();
            if (leader == null) { // Leader election going on...
                LOGGER.warn(format("No leader for partition %s/%s found!", metadata.topic(), part.partitionId()));
            } else {
                HostAndPort leaderHost = HostAndPort.fromParts(leader.host(), leader.port());
                SimpleConsumer leaderConsumer = consumerManager.getConsumer(leaderHost);

                long offset = findAllOffsets(leaderConsumer, metadata.topic(), part.partitionId())[0];
                builder.put(metadata.topic(), offset);
            }
        }
    }

    return builder.build();
}
 
Author: rakam-io | Project: rakam | Lines: 29 | Source file: KafkaOffsetManager.java

Example 11: findLeader

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
private HostAndPort findLeader(TopicPartition topicPartition) {
    SimpleConsumer consumer = null;
    try {
        LOG.debug("looking up leader for topic {} partition {}", topicPartition.getTopic(), topicPartition.getPartition());
        consumer = createConsumer(
            mConfig.getKafkaSeedBrokerHost(),
            mConfig.getKafkaSeedBrokerPort(),
            "leaderLookup");
        List<String> topics = new ArrayList<String>();
        topics.add(topicPartition.getTopic());
        TopicMetadataRequest request = new TopicMetadataRequest(topics);
        TopicMetadataResponse response = consumer.send(request);

        List<TopicMetadata> metaData = response.topicsMetadata();
        for (TopicMetadata item : metaData) {
            for (PartitionMetadata part : item.partitionsMetadata()) {
                if (part.partitionId() == topicPartition.getPartition()) {
                    return HostAndPort.fromParts(part.leader().host(), part.leader().port());
                }
            }
        }
    } finally {
        if (consumer != null) {
            consumer.close();
        }
    }
    return null;
}
 
Author: pinterest | Project: secor | Lines: 29 | Source file: KafkaClient.java

Example 12: getMetaData

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
private void getMetaData(String topic) {
  LOG.info("inside getMetaData"); //xxx
  LOG.info("seedBrokerList" + this.brokerList); //xxx

  for (HostPort seed: brokerList) {
    SimpleConsumer consumer = new SimpleConsumer(
        seed.getHost(),
        seed.getPort(),
        10000,   // timeout
        64*1024, // bufferSize
        "metaLookup"  // clientId
        );
    List <String> topicList = Collections.singletonList(topic);

    TopicMetadataRequest req = new TopicMetadataRequest(topicList);
    kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);
    List<TopicMetadata> metaDataList = resp.topicsMetadata();
    LOG.info("metaDataList: " + metaDataList); //xxxx

    for (TopicMetadata m: metaDataList) {
      LOG.info("inside the metadatalist loop"); //xxx
      LOG.info("m partitionsMetadata: " + m.partitionsMetadata()); //xxx
      for (PartitionMetadata part : m.partitionsMetadata()) {
        LOG.info("inside the partitionmetadata loop"); //xxx
        storeMetadata(topic, part);
      }
    }
  }
}
 
Author: DemandCube | Project: Scribengin | Lines: 30 | Source file: ScribenginAM.java

Example 13: getSplits

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
@Override
public ConnectorSplitSource getSplits(ConnectorTransactionHandle transaction, ConnectorSession session, ConnectorTableLayoutHandle layout)
{
    KafkaTableHandle kafkaTableHandle = convertLayout(layout).getTable();

    SimpleConsumer simpleConsumer = consumerManager.getConsumer(selectRandom(nodes));

    TopicMetadataRequest topicMetadataRequest = new TopicMetadataRequest(ImmutableList.of(kafkaTableHandle.getTopicName()));
    TopicMetadataResponse topicMetadataResponse = simpleConsumer.send(topicMetadataRequest);

    ImmutableList.Builder<ConnectorSplit> splits = ImmutableList.builder();

    for (TopicMetadata metadata : topicMetadataResponse.topicsMetadata()) {
        for (PartitionMetadata part : metadata.partitionsMetadata()) {
            log.debug("Adding Partition %s/%s", metadata.topic(), part.partitionId());

            Broker leader = part.leader();
            if (leader == null) { // Leader election going on...
                log.warn("No leader for partition %s/%s found!", metadata.topic(), part.partitionId());
                continue;
            }

            HostAddress partitionLeader = HostAddress.fromParts(leader.host(), leader.port());

            SimpleConsumer leaderConsumer = consumerManager.getConsumer(partitionLeader);
            // Kafka contains a reverse list of "end - start" pairs for the splits

            List<HostAddress> partitionNodes = ImmutableList.copyOf(Lists.transform(part.isr(), KafkaSplitManager::brokerToHostAddress));

            long[] offsets = findAllOffsets(leaderConsumer,  metadata.topic(), part.partitionId());

            for (int i = offsets.length - 1; i > 0; i--) {
                KafkaSplit split = new KafkaSplit(
                        connectorId,
                        metadata.topic(),
                        kafkaTableHandle.getKeyDataFormat(),
                        kafkaTableHandle.getMessageDataFormat(),
                        part.partitionId(),
                        offsets[i],
                        offsets[i - 1],
                        partitionNodes);
                splits.add(split);
            }
        }
    }

    return new FixedSplitSource(connectorId, splits.build());
}
 
Author: y-lan | Project: presto | Lines: 49 | Source file: KafkaSplitManager.java

Example 14: printLeader

import kafka.javaapi.consumer.SimpleConsumer; // import the required package/class
public void printLeader() {
    SimpleConsumer consumer = new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "leaderLookup");

    List<String> topics = Collections.singletonList("mjtopic");
    TopicMetadataRequest req = new TopicMetadataRequest(topics);
    kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);
    List<TopicMetadata> metaData = resp.topicsMetadata();

    int[] leaders = new int[12];
    for (TopicMetadata item : metaData) {
        for (PartitionMetadata part : item.partitionsMetadata()) {
            leaders[part.partitionId()] = part.leader().id();
        }
    }

    for (int j = 0; j < 12; j++) {
        System.out.println("Leader for partition " + j + " " + leaders[j]);
    }
}
 
Author: mdkhanga | Project: my-blog-code | Lines: 32 | Source file: PartitionLeader.java


Note: The kafka.javaapi.consumer.SimpleConsumer.send examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. For distribution and use, refer to each project's license; do not republish without permission.