

Java Broker.host Method Code Examples

This article collects typical usage examples of the Java method kafka.cluster.Broker.host. If you are wondering how Broker.host works in practice or how to call it, the curated examples below may help. You can also explore further usage examples of the enclosing class, kafka.cluster.Broker.


Five code examples of the Broker.host method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.

Example 1: getOffset

import kafka.cluster.Broker; // import the package/class this method depends on
private static OffsetInfo getOffset(String topic, PartitionMetadata partition) {
  Broker broker = partition.leader();

  // Connect directly to the partition leader using its host and port.
  SimpleConsumer consumer = new SimpleConsumer(broker.host(), broker.port(), 10000, 1000000,
                                               "com.rekko.newrelic.storm.kafka");
  try {
    TopicAndPartition topicAndPartition =
        new TopicAndPartition(topic, partition.partitionId());
    // -1 asks for the latest offset; 1 caps the response at a single offset.
    PartitionOffsetRequestInfo request = new PartitionOffsetRequestInfo(-1, 1);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> map =
        new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    map.put(topicAndPartition, request);
    OffsetRequest req = new OffsetRequest(map, (short) 0, "com.rekko.newrelic.storm.kafka");
    OffsetResponse resp = consumer.getOffsetsBefore(req);
    OffsetInfo offset = new OffsetInfo();
    offset.offset = resp.offsets(topic, partition.partitionId())[0];
    return offset;
  } finally {
    consumer.close();
  }
}
 
Developer: ghais, Project: newrelic_storm_kafka, Lines: 24, Source: Kafka.java

Example 2: getPartitionLeader

import kafka.cluster.Broker; // import the package/class this method depends on
private KafkaNode getPartitionLeader(List<KafkaNode> bootstrapNodes, String topicName, int partitionId) throws NoBrokerAvailableException, PartitionNotAvailableException {
    PartitionMetadata partitionMetadata = getMetadataForPartition(bootstrapNodes, topicName, partitionId);
    Broker leader = partitionMetadata.leader();
    return new KafkaNode(leader.host(), leader.port());
}
 
Developer: researchgate, Project: kafka-metamorph, Lines: 6, Source: Kafka08PartitionConsumer.java

Example 3: KafkaSplitSource

import kafka.cluster.Broker; // import the package/class this method depends on
KafkaSplitSource(String connectorId, Table table,
        Iterable<Partition> hivePartitions,
        KafkaClientConfig kafkaConfig)
{
    this.connectorId = connectorId;
    this.fetchedIndex = 0;
    this.computedSplits = new ArrayList<Split>();
    String zookeeper = kafkaConfig.getZookeeper();
    int zkSessionTimeout = kafkaConfig.getZookeeperSessionTimeout();
    int zkConnectionTimeout = kafkaConfig.getZookeeperConnectTimeout();

    Map<String, String> tblProps = table.getParameters();
    String tableTopic = tblProps.get(KafkaTableProperties.kafkaTopicName);

    long splitRange = getDefault(tblProps, KafkaTableProperties.kafkaSplitRange, 60 * 60 * 1000);
    long scanRange = getDefault(tblProps, KafkaTableProperties.kafkaJobRange, 24 * 60 * 60 * 1000);
    int sampleRate = (int) getDefault(tblProps, KafkaTableProperties.kafkaTableSampleRate, 100);

    ZkClient zkclient = new ZkClient(zookeeper, zkSessionTimeout,
            zkConnectionTimeout, new ZKStringSerializer());

    TopicMetadata metadata = AdminUtils.fetchTopicMetadataFromZk(tableTopic, zkclient);
    List<PartitionMetadata> mds = scala.collection.JavaConversions.asJavaList(metadata.partitionsMetadata());

    List<long[]> offsetList = null;
    // if the table is partitioned, look at each partition and
    // determine the data to look at.
    List<FieldSchema> partCols = table.getPartitionKeys();
    if (partCols != null && partCols.size() > 0)
    {
        offsetList = generateTsOffsetsFromPartitions(hivePartitions, tblProps, splitRange, partCols);
    } else
    {
        // we will set the table property so that all the queries hit here.
        offsetList = generateTsOffsetsNoPartitions(scanRange, mds.size());
    }

    for (PartitionMetadata md : mds)
    {
        Broker broker = md.leader().get();
        for (long[] offsets : offsetList)
        {
            long startTs = offsets[0];
            long endTs = offsets[1];
            KafkaSplit split = new KafkaSplit(connectorId,
                    tableTopic, md.partitionId(),
                    broker.host(), broker.port(),
                    sampleRate,
                    startTs, endTs, zookeeper,
                    zkSessionTimeout, zkConnectionTimeout);
            this.computedSplits.add(split);
        }
    }
}
 
Developer: dropbox, Project: presto-kafka-connector, Lines: 55, Source: KafkaSplitSourceProvider.java

Example 4: brokerToNode

import kafka.cluster.Broker; // import the package/class this method depends on
/**
 * Turn a broker instance into a node instance.
 *
 * @param broker broker instance
 * @return Node representing the given broker
 */
private static Node brokerToNode(Broker broker) {
	return new Node(broker.id(), broker.host(), broker.port());
}
 
Developer: axbaretto, Project: flink, Lines: 10, Source: Kafka08PartitionDiscoverer.java

Example 5: brokerToNode

import kafka.cluster.Broker; // import the package/class this method depends on
/**
 * Turn a broker instance into a node instance.
 *
 * @param broker broker instance
 * @return Node representing the given broker
 */
private static Node brokerToNode(Broker broker) {
	return new Node(broker.id(), broker.host(), broker.port());
}
 
Developer: axbaretto, Project: flink, Lines: 9, Source: FlinkKafkaConsumer08.java
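The brokerToNode conversion in Examples 4 and 5 is simple enough to sketch without the Kafka jars. The snippet below uses hypothetical BrokerInfo and NodeInfo records as stand-ins for kafka.cluster.Broker and org.apache.kafka.common.Node, so it compiles and runs on its own and illustrates the same field-by-field mapping of id, host, and port:

```java
public class BrokerToNodeSketch {
    // Hypothetical stand-in for kafka.cluster.Broker (id/host/port accessors).
    public record BrokerInfo(int id, String host, int port) {}

    // Hypothetical stand-in for org.apache.kafka.common.Node.
    public record NodeInfo(int id, String host, int port) {}

    // Same shape as the brokerToNode helpers in Examples 4 and 5:
    // copy id, host, and port from the broker into a node object.
    public static NodeInfo brokerToNode(BrokerInfo broker) {
        return new NodeInfo(broker.id(), broker.host(), broker.port());
    }

    public static void main(String[] args) {
        BrokerInfo broker = new BrokerInfo(0, "kafka-1.example.com", 9092);
        NodeInfo node = brokerToNode(broker);
        System.out.println(node.host() + ":" + node.port());
    }
}
```

In the real Flink code the resulting Node objects feed partition discovery, so preserving the broker id alongside host and port matters; this sketch keeps that detail.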


Note: The kafka.cluster.Broker.host method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.