

Java Broker Class Code Examples

This article collects typical usage examples of the Java class kafka.cluster.Broker. If you are wondering what the Broker class does, how to use it, or are looking for working examples, the curated code examples below may help.


The Broker class belongs to the kafka.cluster package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.

Example 1: getBrokerMetadataByAddress

import kafka.cluster.Broker; // import the required package/class
/**
 * Get Kafka broker metadata for a specific address
 *
 * @param kafkaBrokers    list of registered Kafka brokers
 * @param kfBrokerAddress address to look for
 * @return Kafka broker metadata
 */
private KFBrokerMetadata getBrokerMetadataByAddress(final List<Broker> kafkaBrokers,
                                                    final InetSocketAddress kfBrokerAddress) {

    KFBrokerMetadata brokerMetadata = new KFBrokerMetadata();

    kafkaBrokers.forEach(broker -> {
        JavaConversions.mapAsJavaMap(broker.endPoints())
                .forEach((protocol, endpoint) -> {
                    if (endpoint.host().equals(kfBrokerAddress.getHostName())
                            && endpoint.port() == kfBrokerAddress.getPort()) {
                        brokerMetadata.setBrokerId(broker.id());
                        brokerMetadata.setHost(endpoint.host());
                        brokerMetadata.setPort(endpoint.port());
                        brokerMetadata.setConnectionString(endpoint.connectionString());
                        brokerMetadata.setSecurityProtocol(protocol.name);
                    }
                });
    });
    return brokerMetadata;
}
 
Developer: mcafee, Project: management-sdk-for-kafka, Lines: 28, Source: KFBrokerWatcher.java

Example 2: getOffset

import kafka.cluster.Broker; // import the required package/class
private static OffsetInfo getOffset(String topic, PartitionMetadata partition) {
  Broker broker = partition.leader();

  SimpleConsumer consumer = new SimpleConsumer(broker.host(), broker.port(), 10000, 1000000,
                                               "com.rekko.newrelic.storm.kafka");
  try {
    TopicAndPartition topicAndPartition =
        new TopicAndPartition(topic, partition.partitionId());
    PartitionOffsetRequestInfo request = new PartitionOffsetRequestInfo(-1, 1);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> map =
        new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    map.put(topicAndPartition, request);
    OffsetRequest req = new OffsetRequest(map, (short) 0, "com.rekko.newrelic.storm.kafka");
    OffsetResponse resp = consumer.getOffsetsBefore(req);
    OffsetInfo offset = new OffsetInfo();
    offset.offset = resp.offsets(topic, partition.partitionId())[0];
    return offset;
  } finally {
    consumer.close();
  }
}
 
Developer: ghais, Project: newrelic_storm_kafka, Lines: 24, Source: Kafka.java

Example 3: reassignPartitions

import kafka.cluster.Broker; // import the required package/class
private static void reassignPartitions(ZkUtils zkUtils, Collection<Broker> brokers, String topic, int partitionCount, int replicationFactor) {
  scala.collection.mutable.ArrayBuffer<BrokerMetadata> brokersMetadata = new scala.collection.mutable.ArrayBuffer<>(brokers.size());
  for (Broker broker : brokers) {
    brokersMetadata.$plus$eq(new BrokerMetadata(broker.id(), broker.rack()));
  }
  scala.collection.Map<Object, Seq<Object>> newAssignment =
      AdminUtils.assignReplicasToBrokers(brokersMetadata, partitionCount, replicationFactor, 0, 0);

  scala.collection.mutable.ArrayBuffer<String> topicList = new scala.collection.mutable.ArrayBuffer<>();
  topicList.$plus$eq(topic);
  scala.collection.Map<Object, scala.collection.Seq<Object>> currentAssignment = zkUtils.getPartitionAssignmentForTopics(topicList).apply(topic);
  String currentAssignmentJson = formatAsReassignmentJson(topic, currentAssignment);
  String newAssignmentJson = formatAsReassignmentJson(topic, newAssignment);

  LOG.info("Reassign partitions for topic " + topic);
  LOG.info("Current partition replica assignment " + currentAssignmentJson);
  LOG.info("New partition replica assignment " + newAssignmentJson);
  zkUtils.createPersistentPath(ZkUtils.ReassignPartitionsPath(), newAssignmentJson, zkUtils.DefaultAcls());
}
 
Developer: linkedin, Project: kafka-monitor, Lines: 20, Source: MultiClusterTopicManagementService.java

Example 4: getRegularKafkaOffsetMonitors

import kafka.cluster.Broker; // import the required package/class
public List<KafkaOffsetMonitor> getRegularKafkaOffsetMonitors() throws Exception {
    List<KafkaConsumerGroupMetadata> kafkaConsumerGroupMetadataList = zkClient.getActiveRegularConsumersAndTopics();
    List<KafkaOffsetMonitor> kafkaOffsetMonitors = new ArrayList<KafkaOffsetMonitor>();
    List<Broker> kafkaBrokers = getAllBrokers();
    SimpleConsumer consumer = getConsumer(kafkaBrokers.get(1).host(), kafkaBrokers.get(1).port(), clientName);
    for (KafkaConsumerGroupMetadata kafkaConsumerGroupMetadata : kafkaConsumerGroupMetadataList) {
        List<TopicPartitionLeader> partitions = getPartitions(consumer, kafkaConsumerGroupMetadata.getTopic());
        for (TopicPartitionLeader partition : partitions) {
            consumer = getConsumer(partition.getLeaderHost(), partition.getLeaderPort(), clientName);
            long kafkaTopicOffset = getLastOffset(consumer, kafkaConsumerGroupMetadata.getTopic(), partition.getPartitionId(), -1, clientName);
            long consumerOffset = 0;
            if (kafkaConsumerGroupMetadata.getPartitionOffsetMap().get(Integer.toString(partition.getPartitionId())) != null) {
                consumerOffset = kafkaConsumerGroupMetadata.getPartitionOffsetMap().get(Integer.toString(partition.getPartitionId()));
            }
            long lag = kafkaTopicOffset - consumerOffset;
            KafkaOffsetMonitor kafkaOffsetMonitor = new KafkaOffsetMonitor(kafkaConsumerGroupMetadata.getConsumerGroup(), kafkaConsumerGroupMetadata.getTopic(), partition.getPartitionId(), kafkaTopicOffset, consumerOffset, lag);
            kafkaOffsetMonitors.add(kafkaOffsetMonitor);
        }
    }
    return kafkaOffsetMonitors;
}
 
Developer: Symantec, Project: kafka-monitoring-tool, Lines: 22, Source: KafkaConsumerOffsetUtil.java
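Example 4 computes consumer lag as the latest topic offset minus the consumer's committed offset, treating a partition with no commit as offset 0. A minimal, self-contained sketch of that arithmetic (the class name `LagCalculator` and the string-keyed offset map layout mirror the example but are illustrative, not part of the monitoring tool):

```java
import java.util.HashMap;
import java.util.Map;

public class LagCalculator {

    /**
     * Lag for one partition: latest topic offset minus the consumer's
     * committed offset, defaulting a missing commit to offset 0,
     * as in getRegularKafkaOffsetMonitors above.
     */
    public static long computeLag(long topicOffset,
                                  Map<String, Long> partitionOffsetMap,
                                  int partitionId) {
        Long committed = partitionOffsetMap.get(Integer.toString(partitionId));
        long consumerOffset = (committed != null) ? committed : 0L;
        return topicOffset - consumerOffset;
    }

    public static void main(String[] args) {
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("0", 150L);
        System.out.println(computeLag(200L, offsets, 0)); // committed partition -> 50
        System.out.println(computeLag(200L, offsets, 1)); // no commit yet -> 200
    }
}
```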

Example 5: findNewLeader

import kafka.cluster.Broker; // import the required package/class
private Broker findNewLeader(Broker oldLeader) throws InterruptedException {
    long retryCnt = 0;
    while (true) {
        PartitionMetadata metadata = findLeader();
        logger.debug("findNewLeader - meta leader {}, previous leader {}", metadata, oldLeader);
        if (metadata != null && metadata.leader() != null && (oldLeader == null ||
                (!(oldLeader.host().equalsIgnoreCase(metadata.leader().host()) &&
                  (oldLeader.port() == metadata.leader().port())) || retryCnt != 0))) {
            // first time through if the leader hasn't changed give ZooKeeper a second to recover
            // second time, assume the broker did recover before failover, or it was a non-Broker issue
            logger.info("findNewLeader - using new leader {} from meta data, previous leader {}", metadata.leader(), oldLeader);
            return metadata.leader();
        }
        //TODO: backoff retry
        Thread.sleep(1000L);
        retryCnt ++;
        // if could not find the leader for current replicaBrokers, let's try to find one via allBrokers
        if (retryCnt >= 3 && (retryCnt - 3) % 5 == 0) {
            logger.warn("cannot find leader for {} - {} after {} retries", topic, partitionId, retryCnt);
            replicaBrokers.clear();
            replicaBrokers.addAll(allBrokers);
        }
    }
}
 
Developer: lyogavin, Project: Pistachio, Lines: 25, Source: KafkaSimpleConsumer.java
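Example 5 widens its leader search from the replica brokers to all brokers once the retry count reaches 3, and again every 5 retries thereafter. The gating condition `retryCnt >= 3 && (retryCnt - 3) % 5 == 0` can be isolated and tested on its own (the class name `RetryPolicy` is illustrative; only the condition itself comes from the example):

```java
public class RetryPolicy {

    /**
     * Mirrors the gating in findNewLeader: fall back from the replica
     * broker set to the full broker set on retry 3, 8, 13, ...
     */
    public static boolean shouldWidenSearch(long retryCnt) {
        return retryCnt >= 3 && (retryCnt - 3) % 5 == 0;
    }

    public static void main(String[] args) {
        for (long i = 0; i <= 13; i++) {
            if (shouldWidenSearch(i)) {
                System.out.println("retry " + i + ": widen to all brokers");
            }
        }
    }
}
```

Running the loop prints the widening points at retries 3, 8, and 13, which is the cadence the original code relies on to escape a stale replica list.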

Example 6: ConsumerFetcherThread

import kafka.cluster.Broker; // import the required package/class
public ConsumerFetcherThread(String name, ConsumerConfig config, Broker sourceBroker, Map<TopicAndPartition, PartitionTopicInfo> partitionMap, ConsumerFetcherManager consumerFetcherManager) {
    super(/*name =*/ name,
           /* clientId = */config.clientId + "-" + name,
            /*sourceBroker =*/ sourceBroker,
           /* socketTimeout =*/ config.socketTimeoutMs,
           /* socketBufferSize = */config.socketReceiveBufferBytes,
            /*fetchSize =*/ config.fetchMessageMaxBytes,
           /* fetcherBrokerId =*/ Requests.OrdinaryConsumerId,
           /* maxWait = */config.fetchWaitMaxMs,
            /*minBytes = */config.fetchMinBytes,
           /* isInterruptible =*/ true);
    this.name = name;
    this.config = config;
    this.sourceBroker = sourceBroker;
    this.partitionMap = partitionMap;
    this.consumerFetcherManager = consumerFetcherManager;
}
 
Developer: bingoohuang, Project: buka, Lines: 18, Source: ConsumerFetcherThread.java

Example 7: readFrom

import kafka.cluster.Broker; // import the required package/class
public static TopicMetadataResponse readFrom(final ByteBuffer buffer) {
    int correlationId = buffer.getInt();
    int brokerCount = buffer.getInt();
    List<Broker> brokers = Utils.flatList(0, brokerCount, new Function1<Integer, Broker>() {
        @Override
        public Broker apply(Integer index) {
            return Brokers.readFrom(buffer);
        }
    });

    final Map<Integer, Broker> brokerMap = Utils.map(brokers, new Function1<Broker, Tuple2<Integer, Broker>>() {
        @Override
        public Tuple2<Integer, Broker> apply(Broker broker) {
            return Tuple2.make(broker.id, broker);
        }
    });
    int topicCount = buffer.getInt();
    List<TopicMetadata> topicsMetadata = Utils.flatList(0, topicCount, new Function1<Integer, TopicMetadata>() {
        @Override
        public TopicMetadata apply(Integer arg) {
            return TopicMetadata.readFrom(buffer, brokerMap);
        }
    });

    return new TopicMetadataResponse(topicsMetadata, correlationId);
}
 
Developer: bingoohuang, Project: buka, Lines: 27, Source: TopicMetadataResponse.java
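Example 7 parses a count-prefixed list twice: it reads an int count from the ByteBuffer, then reads that many elements (brokers, then topic metadata). The pattern, stripped of the Kafka types, looks like this (the class name `CountPrefixedList` and int payload are illustrative stand-ins for the broker entries):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class CountPrefixedList {

    /**
     * Read an int count, then that many int elements, the same shape
     * readFrom uses for the broker and topic lists in the response.
     */
    public static List<Integer> readInts(ByteBuffer buffer) {
        int count = buffer.getInt();
        List<Integer> result = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            result.add(buffer.getInt());
        }
        return result;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putInt(3).putInt(10).putInt(20).putInt(30);
        buf.flip(); // switch from writing to reading
        System.out.println(readInts(buf)); // [10, 20, 30]
    }
}
```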

Example 8: writeTo

import kafka.cluster.Broker; // import the required package/class
@Override
public void writeTo(final ByteBuffer buffer) {
    buffer.putShort(versionId);
    buffer.putInt(correlationId);
    writeShortString(buffer, clientId);
    buffer.putInt(controllerId);
    buffer.putInt(controllerEpoch);
    buffer.putInt(partitionStateInfos.size());
    Utils.foreach(partitionStateInfos, new Callable2<TopicAndPartition, PartitionStateInfo>() {
        @Override
        public void apply(TopicAndPartition key, PartitionStateInfo value) {
            writeShortString(buffer, key.topic);
            buffer.putInt(key.partition);
            value.writeTo(buffer);
        }
    });

    buffer.putInt(aliveBrokers.size());

    Utils.foreach(aliveBrokers, new Callable1<Broker>() {
        @Override
        public void apply(Broker broker) {
            broker.writeTo(buffer);
        }
    });
}
 
Developer: bingoohuang, Project: buka, Lines: 27, Source: UpdateMetadataRequest.java

Example 9: getAllBrokersInCluster

import kafka.cluster.Broker; // import the required package/class
public static List<Broker> getAllBrokersInCluster(ZkClient zkClient) {
    List<String> brokerIds = ZkUtils.getChildrenParentMayNotExist(zkClient, ZkUtils.BrokerIdsPath);

    List<Broker> brokers = Lists.newArrayList();
    if (brokerIds == null) return brokers;

    Collections.sort(brokerIds);

    for (String brokerId : brokerIds) {
        int brokerInt = Integer.parseInt(brokerId);
        Broker brokerInfo = getBrokerInfo(zkClient, brokerInt);
        if (brokerInfo != null) brokers.add(brokerInfo);
    }

    return brokers;
}
 
Developer: bingoohuang, Project: buka, Lines: 17, Source: ZkUtils.java

Example 10: registerBrokerInZk

import kafka.cluster.Broker; // import the required package/class
public static void registerBrokerInZk(ZkClient zkClient, int id, String host, int port, int timeout, int jmxPort) {
    String brokerIdPath = ZkUtils.BrokerIdsPath + "/" + id;
    String timestamp = SystemTime.instance.milliseconds() + "";
    String brokerInfo = Json.encode(ImmutableMap.of("version", 1, "host", host, "port", port, "jmx_port", jmxPort, "timestamp", timestamp));
    Broker expectedBroker = new Broker(id, host, port);

    try {
        createEphemeralPathExpectConflictHandleZKBug(zkClient, brokerIdPath, brokerInfo, expectedBroker,
                new Function2<String, Object, Boolean>() {
                    @Override
                    public Boolean apply(String brokerString, Object broker) {
                        return Brokers.createBroker(((Broker) broker).id, brokerString).equals(broker);
                    }
                }, timeout);

    } catch (ZkNodeExistsException e) {
        throw new RuntimeException("A broker is already registered on the path " + brokerIdPath
                + ". This probably " + "indicates that you either have configured a brokerid that is already in use, or "
                + "else you have shutdown this broker and restarted it faster than the zookeeper "
                + "timeout so it appears to be re-registering.");
    }
    logger.info("Registered broker {} at path {} with address {}:{}.", id, brokerIdPath, host, port);
}
 
Developer: bingoohuang, Project: buka, Lines: 24, Source: ZkUtils.java

Example 11: run

import kafka.cluster.Broker; // import the required package/class
public void run() {
  try {
    while (!exit) {
      KafkaTool kafkaTool = new KafkaTool(topic, cluster.getZKConnect());
      kafkaTool.connect();
      TopicMetadata topicMeta = kafkaTool.findTopicMetadata(topic);
      PartitionMetadata partitionMeta = findPartition(topicMeta, partition);
      Broker partitionLeader = partitionMeta.leader();
      Server kafkaServer = cluster.findKafkaServerByPort(partitionLeader.port());
      System.out.println("Shutdown kafka server " + kafkaServer.getPort());
      kafkaServer.shutdown();
      failureCount++;
      Thread.sleep(sleepBeforeRestart);
      kafkaServer.start();
      kafkaTool.close();
      Thread.sleep(10000); // wait to make sure that the kafka server has started
    }
  } catch (Exception e) {
    e.printStackTrace();
  }
  synchronized (this) {
    notify();
  }
}
 
Developer: DemandCube, Project: Scribengin, Lines: 25, Source: AckKafkaWriterTestRunner.java

Example 12: info

import kafka.cluster.Broker; // import the required package/class
private void info(List<PartitionMetadata> holder) {
  String[] header = { 
      "Partition Id", "Leader", "Replicas"
  };
  TabularFormater formater = new TabularFormater(header);
  formater.setTitle("Partitions");
  for(PartitionMetadata sel : holder) {
    StringBuilder replicas = new StringBuilder();
    for(Broker broker : sel.replicas()) {
      if(replicas.length() > 0) replicas.append(",");
      replicas.append(broker.port());
    }
    formater.addRow(sel.partitionId(), sel.leader().port(), replicas.toString());
  }
  System.out.println(formater.getFormatText());
}
 
Developer: DemandCube, Project: Scribengin, Lines: 17, Source: KafkaProducerPartitionLeaderChangeBugUnitTest.java

Example 13: getRegisteredKafkaBrokers

import kafka.cluster.Broker; // import the required package/class
/**
 * Get a list of registered Kafka brokers.
 * If the connection fails or any other exception is thrown, an empty list is returned
 *
 * @return list of registered Kafka brokers
 */
private List<Broker> getRegisteredKafkaBrokers() {
    try (ClusterConnection cnx = new ClusterConnection(zkConnectionString,
            PropertyNames.ZK_CONNECTION_TIMEOUT_MS.getDefaultValue(),
            String.valueOf(zkSessionTimeout))) {

        final ClusterTools clusterTools = new ClusterTools();
        return clusterTools.getKafkaBrokers(cnx.getConnection());
    } catch (Exception e) {
        // connection failure: swallow and fall through to the empty list
    }
    return new ArrayList<>();
}
 
Developer: mcafee, Project: management-sdk-for-kafka, Lines: 18, Source: KFBrokerWatcher.java

Example 14: fetchAllBrokers

import kafka.cluster.Broker; // import the required package/class
public List<BrokerInfo> fetchAllBrokers() {
    List<BrokerInfo> result = new ArrayList<>();
    Seq<Broker> brokers = zkUtils.getAllBrokersInCluster();
    Iterator<Broker> iterator = brokers.toList().iterator();
    while (iterator.hasNext()) {
        Broker broker = iterator.next();
        Node node = broker.getNode(SecurityProtocol.PLAINTEXT);
        result.add(new BrokerInfo(node.idString(), node.host(), node.port()));
    }
    return result;
}
 
Developer: warlock-china, Project: azeroth, Lines: 12, Source: ZkConsumerCommand.java

Example 15: getClusterViz

import kafka.cluster.Broker; // import the required package/class
public Node getClusterViz() {
	Node rootNode = new Node("KafkaCluster");
	List<Node> childNodes = new ArrayList<Node>();
	List<Broker> brokers = JavaConversions.seqAsJavaList(ZKUtils.getZKUtilsFromKafka().getAllBrokersInCluster());
	brokers.forEach(broker -> {
		List<EndPoint> endPoints = JavaConversions.seqAsJavaList(broker.endPoints().seq());
		childNodes.add(new Node(broker.id() + ":" + endPoints.get(0).host() + ":" + endPoints.get(0).port(), null));
	});
	rootNode.setChildren(childNodes);
	return rootNode;
}
 
Developer: chickling, Project: kmanager, Lines: 12, Source: OffsetGetter.java


Note: The kafka.cluster.Broker class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by various developers, and copyright remains with the original authors. Please consult each project's license before distributing or using the code; do not reproduce without permission.