

Java Broker.port Method Code Examples

This article collects typical usage examples of the kafka.cluster.Broker.port method in Java. If you are wondering how Broker.port works, how to call it, or what real uses of it look like, the curated examples here may help. You can also explore further usage examples of the enclosing class, kafka.cluster.Broker.


Seven code examples of the Broker.port method are shown below, sorted by popularity.
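
Broker.port() returns the TCP port a Kafka 0.8-era broker listens on; it is almost always paired with Broker.host() to form the endpoint a client connects to. A minimal sketch of that pattern, assuming a kafka.javaapi.PartitionMetadata obtained from a metadata request elsewhere (the method name and client id are hypothetical, not from the examples below):

import kafka.cluster.Broker;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.consumer.SimpleConsumer;

// Minimal sketch: connect a SimpleConsumer to a partition's leader.
private static SimpleConsumer connectToLeader(PartitionMetadata metadata) {
    Broker leader = metadata.leader(); // may be null while a leader election is in progress
    // host() and port() together identify the broker endpoint.
    return new SimpleConsumer(leader.host(), leader.port(),
            10000 /* socket timeout ms */, 64 * 1024 /* buffer bytes */, "example-client");
}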

Example 1: getOffset

import kafka.cluster.Broker; // import the package/class this method depends on
private static OffsetInfo getOffset(String topic, PartitionMetadata partition) {
  Broker broker = partition.leader();

  SimpleConsumer consumer = new SimpleConsumer(broker.host(), broker.port(), 10000, 1000000,
                                               "com.rekko.newrelic.storm.kafka");
  try {
    TopicAndPartition topicAndPartition =
        new TopicAndPartition(topic, partition.partitionId());
    // -1 (latest time) requests the newest offset; maxNumOffsets = 1
    PartitionOffsetRequestInfo request = new PartitionOffsetRequestInfo(-1, 1);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> map =
        new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    map.put(topicAndPartition, request);
    OffsetRequest req = new OffsetRequest(map, (short) 0, "com.rekko.newrelic.storm.kafka");
    OffsetResponse resp = consumer.getOffsetsBefore(req);
    OffsetInfo offset = new OffsetInfo();
    offset.offset = resp.offsets(topic, partition.partitionId())[0];
    return offset;
  } finally {
    consumer.close();
  }
}
 
Developer ID: ghais, Project: newrelic_storm_kafka, Lines: 24, Source: Kafka.java
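
For context, a hypothetical caller of getOffset, showing where the PartitionMetadata argument typically comes from via the kafka.javaapi metadata classes (TopicMetadataRequest, TopicMetadataResponse, TopicMetadata); the topic name and broker address are placeholders:

// Hypothetical usage: fetch topic metadata, then ask each partition for its latest offset.
SimpleConsumer metadataConsumer =
        new SimpleConsumer("broker1.example.com", 9092, 10000, 64 * 1024, "metadata-lookup");
try {
    TopicMetadataRequest request = new TopicMetadataRequest(Collections.singletonList("my-topic"));
    TopicMetadataResponse response = metadataConsumer.send(request);
    for (TopicMetadata topicMetadata : response.topicsMetadata()) {
        for (PartitionMetadata partition : topicMetadata.partitionsMetadata()) {
            OffsetInfo offset = getOffset("my-topic", partition);
        }
    }
} finally {
    metadataConsumer.close();
}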

Example 2: findNewLeader

import kafka.cluster.Broker; // import the package/class this method depends on
private Broker findNewLeader(Broker oldLeader) throws InterruptedException {
    long retryCnt = 0;
    while (true) {
        PartitionMetadata metadata = findLeader();
        logger.debug("findNewLeader - meta leader {}, previous leader {}", metadata, oldLeader);
        if (metadata != null && metadata.leader() != null && (oldLeader == null ||
                (!(oldLeader.host().equalsIgnoreCase(metadata.leader().host()) &&
                  (oldLeader.port() == metadata.leader().port())) || retryCnt != 0))) {
            // first time through, if the leader hasn't changed, give ZooKeeper a second to recover;
            // second time, assume the broker recovered before failover, or it was a non-broker issue
            logger.info("findNewLeader - using new leader {} from meta data, previous leader {}", metadata.leader(), oldLeader);
            return metadata.leader();
        }
        //TODO: backoff retry
        Thread.sleep(1000L);
        retryCnt++;
        // if could not find the leader for current replicaBrokers, let's try to find one via allBrokers
        if (retryCnt >= 3 && (retryCnt - 3) % 5 == 0) {
            logger.warn("can nof find leader for {} - {} after {} retries", topic, partitionId, retryCnt);
            replicaBrokers.clear();
            replicaBrokers.addAll(allBrokers);
        }
    }
}
 
Developer ID: lyogavin, Project: Pistachio, Lines: 25, Source: KafkaSimpleConsumer.java

Example 3: isSameBroker

import kafka.cluster.Broker; // import the package/class this method depends on
public static boolean isSameBroker(final Broker lastBroker, final Broker checkBroker) {
    if (lastBroker == null || checkBroker == null)
        return false;

    return lastBroker.host().equals(checkBroker.host()) && lastBroker.port() == checkBroker.port();
}
 
Developer ID: jeoffreylim, Project: maelstrom, Lines: 7, Source: LeaderBrokerChecker.java
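
A hypothetical use of isSameBroker, detecting a leader change between two metadata refreshes; the variable names are placeholders:

// Hypothetical usage: compare the cached leader against a freshly fetched one.
Broker lastLeader = cachedMetadata.leader();
Broker currentLeader = refreshedMetadata.leader();
if (!LeaderBrokerChecker.isSameBroker(lastLeader, currentLeader)) {
    // Leader moved (or one side is unknown): reconnect the consumer
    // to currentLeader.host() and currentLeader.port().
}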

Example 4: getPartitionLeader

import kafka.cluster.Broker; // import the package/class this method depends on
private KafkaNode getPartitionLeader(List<KafkaNode> bootstrapNodes, String topicName, int partitionId) throws NoBrokerAvailableException, PartitionNotAvailableException {
    PartitionMetadata partitionMetadata = getMetadataForPartition(bootstrapNodes, topicName, partitionId);
    Broker leader = partitionMetadata.leader();
    return new KafkaNode(leader.host(), leader.port());
}
 
Developer ID: researchgate, Project: kafka-metamorph, Lines: 6, Source: Kafka08PartitionConsumer.java

Example 5: KafkaSplitSource

import kafka.cluster.Broker; // import the package/class this method depends on
KafkaSplitSource(String connectorId, Table table,
        Iterable<Partition> hivePartitions,
        KafkaClientConfig kafkaConfig)
{
    this.connectorId = connectorId;
    this.fetchedIndex = 0;
    this.computedSplits = new ArrayList<Split>();
    String zookeeper = kafkaConfig.getZookeeper();
    int zkSessionTimeout = kafkaConfig.getZookeeperSessionTimeout();
    int zkConnectionTimeout = kafkaConfig.getZookeeperConnectTimeout();

    Map<String, String> tblProps = table.getParameters();
    String tableTopic = tblProps.get(KafkaTableProperties.kafkaTopicName);

    long splitRange = getDefault(tblProps, KafkaTableProperties.kafkaSplitRange, 60 * 60 * 1000);
    long scanRange = getDefault(tblProps, KafkaTableProperties.kafkaJobRange, 24 * 60 * 60 * 1000);
    int sampleRate = (int) getDefault(tblProps, KafkaTableProperties.kafkaTableSampleRate, 100);

    ZkClient zkclient = new ZkClient(zookeeper, zkSessionTimeout,
            zkConnectionTimeout, new ZKStringSerializer());

    TopicMetadata metadata = AdminUtils.fetchTopicMetadataFromZk(tableTopic, zkclient);
    List<PartitionMetadata> mds = scala.collection.JavaConversions.asJavaList(metadata.partitionsMetadata());

    List<long[]> offsetList = null;
    // if the table is partitioned, look at each partition and
    // determine the data to look at.
    List<FieldSchema> partCols = table.getPartitionKeys();
    if (partCols != null && partCols.size() > 0)
    {
        offsetList = generateTsOffsetsFromPartitions(hivePartitions, tblProps, splitRange, partCols);
    } else
    {
        // we will set the table property so that all the queries hit here.
        offsetList = generateTsOffsetsNoPartitions(scanRange, mds.size());
    }

    for (PartitionMetadata md : mds)
    {
        Broker broker = md.leader().get();
        for (long[] offsets : offsetList)
        {
            long startTs = offsets[0];
            long endTs = offsets[1];
            KafkaSplit split = new KafkaSplit(connectorId,
                    tableTopic, md.partitionId(),
                    broker.host(), broker.port(),
                    sampleRate,
                    startTs, endTs, zookeeper,
                    zkSessionTimeout, zkConnectionTimeout);
            this.computedSplits.add(split);
        }
    }
}
 
Developer ID: dropbox, Project: presto-kafka-connector, Lines: 55, Source: KafkaSplitSourceProvider.java

Example 6: brokerToNode

import kafka.cluster.Broker; // import the package/class this method depends on
/**
 * Turn a broker instance into a node instance.
 *
 * @param broker broker instance
 * @return Node representing the given broker
 */
private static Node brokerToNode(Broker broker) {
	return new Node(broker.id(), broker.host(), broker.port());
}
 
Developer ID: axbaretto, Project: flink, Lines: 10, Source: Kafka08PartitionDiscoverer.java
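
A hypothetical use of brokerToNode, collecting one Node (org.apache.kafka.common.Node) per partition leader; the surrounding loop is a sketch, not Flink's actual discoverer code:

// Hypothetical usage: convert each partition's leader broker into a Node.
List<Node> leaderNodes = new ArrayList<>();
for (PartitionMetadata partition : topicMetadata.partitionsMetadata()) {
    Broker leader = partition.leader();
    if (leader != null) {
        leaderNodes.add(brokerToNode(leader));
    }
}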

Example 7: brokerToNode

import kafka.cluster.Broker; // import the package/class this method depends on
/**
 * Turn a broker instance into a node instance.
 *
 * @param broker broker instance
 * @return Node representing the given broker
 */
private static Node brokerToNode(Broker broker) {
	return new Node(broker.id(), broker.host(), broker.port());
}
 
Developer ID: axbaretto, Project: flink, Lines: 9, Source: FlinkKafkaConsumer08.java


Note: The kafka.cluster.Broker.port method examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets are drawn from open-source projects contributed by their original authors, who retain copyright; consult each project's license before distributing or using the code, and do not reproduce this article without permission.