

Java KafkaServer Class Code Examples

This article collects typical usage examples of the Java class kafka.server.KafkaServer. If you are wondering what the KafkaServer class is for, or how to use it in practice, the curated class examples below should help.


The KafkaServer class belongs to the kafka.server package. A total of 15 code examples of the KafkaServer class are shown below, sorted by popularity by default.

Example 1: initialize

import kafka.server.KafkaServer; // import the required package/class
public void initialize() {
    if (initialized) {
        throw new IllegalStateException("Context has been already initialized");
    }
    zkServer = new EmbeddedZookeeper(TestZKUtils.zookeeperConnect());
    zkClient = new ZkClient(zkServer.connectString(), 10000, 10000, ZKStringSerializer$.MODULE$);

    port = TestUtils.choosePort();

    KafkaConfig config = new KafkaConfig(TestUtils.createBrokerConfig(brokerId, port, true));
    Time mock = new MockTime();

    kafkaServer = new KafkaServer(config, mock);
    kafkaServer.startup();

    initialized = true;
}
 
Developer: researchgate, Project: kafka-metamorph, Lines: 18, Source: Kafka08TestContext.java

Example 2: underreplicatedTopicsCanBeCreatedAndVerified

import kafka.server.KafkaServer; // import the required package/class
@Test
public void underreplicatedTopicsCanBeCreatedAndVerified() {
  // Given
  KafkaUtilities kUtil = new KafkaUtilities();
  EmbeddedKafkaCluster cluster = new EmbeddedKafkaCluster();
  int numBrokers = 1;
  int partitions = numBrokers + 1;
  int replication = numBrokers + 1;
  cluster.startCluster(numBrokers);
  KafkaServer broker = cluster.getBroker(0);
  KafkaZkClient zkClient = broker.zkClient();

  // When/Then
  for (String topic : exampleTopics) {
    assertThat(kUtil.createAndVerifyTopic(zkClient, topic, partitions, replication, oneYearRetention)).isTrue();
    // Only one broker is up, so the actual number of replicas will be only 1.
    assertThat(kUtil.verifySupportTopic(zkClient, topic, partitions, replication)).isEqualTo(KafkaUtilities.VerifyTopicState.Less);
  }
  assertThat(kUtil.getNumTopics(zkClient)).isEqualTo(exampleTopics.length);

  // Cleanup
  cluster.stopCluster();
  zkClient.close();
}
 
Developer: confluentinc, Project: support-metrics-common, Lines: 25, Source: KafkaUtilitiesTest.java

Example 3: startup

import kafka.server.KafkaServer; // import the required package/class
public void startup() {
    for (int i = 0; i < ports.size(); i++) {
        Integer port = ports.get(i);
        File logDir = TestUtils.constructTempDir("kafka-local");

        Properties properties = new Properties();
        properties.putAll(baseProperties);
        properties.setProperty("zookeeper.connect", zkConnection);
        properties.setProperty("broker.id", String.valueOf(i + 1));
        properties.setProperty("host.name", "localhost");
        properties.setProperty("port", Integer.toString(port));
        properties.setProperty("log.dir", logDir.getAbsolutePath());
        properties.setProperty("log.flush.interval.messages", String.valueOf(1));

        KafkaServer broker = startBroker(properties);

        brokers.add(broker);
        logDirs.add(logDir);
    }
}
 
Developer: wngn123, Project: wngn-jms-kafka, Lines: 21, Source: EmbeddedKafkaCluster.java
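The startup() loop above builds each broker's configuration as a plain java.util.Properties object before handing it to startBroker. That assembly step can be exercised in isolation, with no Kafka dependency at all. The sketch below does exactly that; the class and method names are hypothetical, while the property keys and value conventions (1-based broker ids, string-typed values, flush-per-message for test determinism) are copied from the example:

```java
import java.util.Properties;

public class BrokerProps {

    // Build the per-broker Properties map used by the startup() examples above.
    static Properties brokerProperties(int brokerIndex, int port,
                                       String zkConnect, String logDir) {
        Properties p = new Properties();
        p.setProperty("zookeeper.connect", zkConnect);
        // Broker ids start at 1 in the examples (i + 1).
        p.setProperty("broker.id", String.valueOf(brokerIndex + 1));
        p.setProperty("host.name", "localhost");
        p.setProperty("port", Integer.toString(port));
        p.setProperty("log.dir", logDir);
        // Flush after every message so tests see data on disk immediately.
        p.setProperty("log.flush.interval.messages", "1");
        return p;
    }

    public static void main(String[] args) {
        Properties p = brokerProperties(0, 9092, "localhost:2181", "/tmp/kafka-local");
        System.out.println(p.getProperty("broker.id")); // prints "1"
        System.out.println(p.getProperty("port"));      // prints "9092"
    }
}
```

Note that all values go in via setProperty as strings; mixing put with non-String values would make getProperty return null for those keys.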

Example 4: waitUntilMetadataIsPropagated

import kafka.server.KafkaServer; // import the required package/class
public static void waitUntilMetadataIsPropagated(final List<KafkaServer> servers,
                                                 final String topic,
                                                 final int partition,
                                                 final long timeout) throws InterruptedException {
    TestUtils.waitForCondition(new TestCondition() {
        @Override
        public boolean conditionMet() {
            for (final KafkaServer server : servers) {
                final MetadataCache metadataCache = server.apis().metadataCache();
                final Option<PartitionStateInfo> partitionInfo =
                        metadataCache.getPartitionInfo(topic, partition);
                if (partitionInfo.isEmpty()) {
                    return false;
                }
                final PartitionStateInfo partitionStateInfo = partitionInfo.get();
                if (!Request.isValidBrokerId(partitionStateInfo.leaderIsrAndControllerEpoch().leaderAndIsr().leader())) {
                    return false;
                }
            }
            return true;
        }
    }, timeout, "metadata for topic=" + topic + " partition=" + partition + " not propagated to all brokers");

}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: IntegrationTestUtils.java

Example 5: createKafkaServer

import kafka.server.KafkaServer; // import the required package/class
private KafkaServer createKafkaServer(KafkaConfig kafkaConfig) {
  return new KafkaServer(kafkaConfig, new Time() {

    @Override
    public long milliseconds() {
      return System.currentTimeMillis();
    }

    @Override
    public long nanoseconds() {
      return System.nanoTime();
    }

    @Override
    public void sleep(long ms) {
      try {
        Thread.sleep(ms);
      } catch (InterruptedException e) {
        Thread.interrupted();
      }
    }
  });
}
 
Developer: apache, Project: twill, Lines: 24, Source: EmbeddedKafkaServer.java
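One detail worth noting in the sleep() override above: catching InterruptedException and then calling Thread.interrupted() clears the thread's interrupt flag, so callers further up the stack never see the interruption. A variant that preserves the interrupt status (a common recommendation, not part of the original snippet; the class and method names here are hypothetical) looks like this:

```java
public class InterruptDemo {

    // Sleep helper that re-asserts the interrupt flag instead of clearing it,
    // so callers can still observe that an interrupt occurred.
    static void sleepRestoringInterrupt(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag for callers
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();       // simulate a pending interrupt
        sleepRestoringInterrupt(5);               // sleep throws immediately; flag is restored
        System.out.println(Thread.interrupted()); // prints "true"
    }
}
```

Thread.sleep throws immediately when the flag is already set (clearing it in the process), so the catch block's re-interrupt is what keeps the status visible afterwards.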

Example 6: startKafka

import kafka.server.KafkaServer; // import the required package/class
private void startKafka() throws Exception
{
  FileUtils.deleteDirectory(new File(kafkaTmpDir));

  Properties props = new Properties();
  props.setProperty("zookeeper.session.timeout.ms", "100000");
  props.put("advertised.host.name", "localhost");
  props.put("port", 11111);
  // props.put("broker.id", "0");
  props.put("log.dir", kafkaTmpDir);
  props.put("enable.zookeeper", "true");
  props.put("zookeeper.connect", zookeeperLocalCluster.getConnectString());
  KafkaConfig kafkaConfig = KafkaConfig.fromProps(props);
  kafkaLocalBroker = new KafkaServer(kafkaConfig, new SystemTime(), scala.Option.apply("kafkaThread"));
  kafkaLocalBroker.startup();

  zkClient = new ZkClient(zookeeperLocalCluster.getConnectString(), 60000, 60000, ZKStringSerializer$.MODULE$);
  ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperLocalCluster.getConnectString()), false);
  // ZkUtils zkUtils = ZkUtils.apply(zookeeperLocalCluster.getConnectString(), 60000, 60000, false);
  AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties());
}
 
Developer: apache, Project: incubator-pirk, Lines: 22, Source: KafkaStormIntegrationTest.java

Example 7: startup

import kafka.server.KafkaServer; // import the required package/class
public void startup() {
    for (int i = 0; i < ports.size(); i++) {
        Integer port = ports.get(i);
        File logDir = TestUtils.constructTempDir("kafka-local");

        Properties properties = new Properties();
        properties.putAll(baseProperties);
        properties.setProperty("zookeeper.connect", zkConnection);
        properties.setProperty("broker.id", String.valueOf(i + 1));
        properties.setProperty("host.name", "localhost");
        properties.setProperty("port", Integer.toString(port));
        properties.setProperty("log.dir", logDir.getAbsolutePath());
        properties.setProperty("num.partitions", String.valueOf(1));
        properties.setProperty("auto.create.topics.enable", String.valueOf(Boolean.TRUE));
        System.out.println("EmbeddedKafkaCluster: local directory: " + logDir.getAbsolutePath());
        properties.setProperty("log.flush.interval.messages", String.valueOf(1));

        KafkaServer broker = startBroker(properties);

        brokers.add(broker);
        logDirs.add(logDir);
    }
}
 
Developer: HydAu, Project: Camel, Lines: 24, Source: EmbeddedKafkaCluster.java

Example 8: waitUntilMetadataIsPropagated

import kafka.server.KafkaServer; // import the required package/class
public static void waitUntilMetadataIsPropagated(final List<KafkaServer> servers,
                                                 final String topic,
                                                 final int partition,
                                                 final long timeout) throws InterruptedException {
  TestUtils.waitForCondition(new TestCondition() {
    @Override
    public boolean conditionMet() {
      for (final KafkaServer server : servers) {
        final MetadataCache metadataCache = server.apis().metadataCache();
        final Option<UpdateMetadataRequest.PartitionState> partitionInfo =
                metadataCache.getPartitionInfo(topic, partition);
        if (partitionInfo.isEmpty()) {
          return false;
        }
        final UpdateMetadataRequest.PartitionState metadataPartitionState = partitionInfo.get();
        if (!Request.isValidBrokerId(metadataPartitionState.basePartitionState.leader)) {
          return false;
        }
      }
      return true;
    }
  }, timeout, "metadata for topic=" + topic + " partition=" + partition + " not propagated to all brokers");

}
 
Developer: confluentinc, Project: ksql, Lines: 25, Source: IntegrationTestUtils.java

Example 9: publish

import kafka.server.KafkaServer; // import the required package/class
public void publish(String topic, List<String> messages)
{
  Properties producerProps = new Properties();
  producerProps.setProperty("bootstrap.servers", BROKERHOST + ":" + BROKERPORT);
  producerProps.setProperty("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
  producerProps.setProperty("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

  try (KafkaProducer<Integer, byte[]> producer = new KafkaProducer<>(producerProps)) {
    for (String message : messages) {
      ProducerRecord<Integer, byte[]> data = new ProducerRecord<>(topic, message.getBytes(StandardCharsets.UTF_8));
      producer.send(data);
    }
  }

  List<KafkaServer> servers = new ArrayList<KafkaServer>();
  servers.add(kafkaServer);
  TestUtils.waitUntilMetadataIsPropagated(scala.collection.JavaConversions.asScalaBuffer(servers), topic, 0, 30000);
}
 
Developer: apache, Project: apex-malhar, Lines: 19, Source: EmbeddedKafka.java

Example 10: startup

import kafka.server.KafkaServer; // import the required package/class
public void startup() {
	for (int i = 0; i < ports.size(); i++) {
		Integer port = ports.get(i);
		File logDir = TestUtils.constructTempDir("kafka-local");

		Properties properties = new Properties();
		properties.putAll(baseProperties);
		properties.setProperty("zookeeper.connect", zkConnection);
		properties.setProperty("broker.id", String.valueOf(i + 1));
		properties.setProperty("host.name", "localhost");
		properties.setProperty("port", Integer.toString(port));
		properties.setProperty("log.dir", logDir.getAbsolutePath());
		properties.setProperty("num.partitions",  String.valueOf(1));
		properties.setProperty("auto.create.topics.enable",  String.valueOf(Boolean.TRUE));
		System.out.println("EmbeddedKafkaCluster: local directory: " + logDir.getAbsolutePath());
		properties.setProperty("log.flush.interval.messages", String.valueOf(1));

		KafkaServer broker = startBroker(properties);

		brokers.add(broker);
		logDirs.add(logDir);
	}
}
 
Developer: robgmills, Project: zero-downtime-soa, Lines: 24, Source: EmbeddedKafkaCluster.java

Example 11: MetricsReporter

import kafka.server.KafkaServer; // import the required package/class
/**
 * @param server The Kafka server.
 * @param kafkaSupportConfig The properties this server was created from, plus extra Proactive
 *     Support (PS) ones
 *     Note that Kafka does not understand PS properties,
 *     hence server->KafkaConfig() does not contain any of them, necessitating
 *     passing this extra argument to the API.
 * @param serverRuntime The Java runtime of the server that is being monitored.
 * @param kafkaUtilities An instance of {@link KafkaUtilities} that will be used to perform
 *     e.g. Kafka topic management if needed.
 */
public MetricsReporter(
    KafkaServer server,
    KafkaSupportConfig kafkaSupportConfig,
    Runtime serverRuntime,
    KafkaUtilities kafkaUtilities
) {
  super(kafkaSupportConfig, kafkaUtilities, null, true);
  this.server = server;
  this.serverRuntime = serverRuntime;
  this.kafkaSupportConfig = kafkaSupportConfig;
  this.zkClientProvider = new KafkaServerZkClientProvider(server);
  Objects.requireNonNull(server, "Kafka Server can't be null");
  Objects.requireNonNull(serverRuntime, "serverRuntime can't be null");

}
 
Developer: confluentinc, Project: support-metrics-client, Lines: 27, Source: MetricsReporter.java

Example 12: underreplicatedTopicsCanBeRecreatedAndVerified

import kafka.server.KafkaServer; // import the required package/class
@Test
public void underreplicatedTopicsCanBeRecreatedAndVerified() {
  // Given
  KafkaUtilities kUtil = new KafkaUtilities();
  EmbeddedKafkaCluster cluster = new EmbeddedKafkaCluster();
  int numBrokers = 1;
  int partitions = numBrokers + 1;
  int replication = numBrokers + 1;
  cluster.startCluster(numBrokers);
  KafkaServer broker = cluster.getBroker(0);
  KafkaZkClient zkClient = broker.zkClient();

  // When/Then
  for (String topic : exampleTopics) {
    assertThat(kUtil.createAndVerifyTopic(zkClient, topic, partitions, replication, oneYearRetention)).isTrue();
    assertThat(kUtil.createAndVerifyTopic(zkClient, topic, partitions, replication, oneYearRetention)).isTrue();
    assertThat(kUtil.verifySupportTopic(zkClient, topic, partitions, replication)).isEqualTo(KafkaUtilities.VerifyTopicState.Less);
  }
  assertThat(kUtil.getNumTopics(zkClient)).isEqualTo(exampleTopics.length);

  // Cleanup
  cluster.stopCluster();
}
 
Developer: confluentinc, Project: support-metrics-common, Lines: 24, Source: KafkaUtilitiesTest.java

Example 13: replicatedTopicsCanBeCreatedAndVerified

import kafka.server.KafkaServer; // import the required package/class
@Test
public void replicatedTopicsCanBeCreatedAndVerified() {
  // Given
  KafkaUtilities kUtil = new KafkaUtilities();
  EmbeddedKafkaCluster cluster = new EmbeddedKafkaCluster();
  int numBrokers = 3;
  cluster.startCluster(numBrokers);
  KafkaServer broker = cluster.getBroker(0);
  KafkaZkClient zkClient = broker.zkClient();
  Random random = new Random();
  int replication = numBrokers;

  // When/Then
  for (String topic : exampleTopics) {
    int morePartitionsThanBrokers = random.nextInt(10) + numBrokers + 1;
    assertThat(kUtil.createAndVerifyTopic(zkClient, topic, morePartitionsThanBrokers, replication, oneYearRetention)).isTrue();
    assertThat(kUtil.verifySupportTopic(zkClient, topic, morePartitionsThanBrokers, replication)).isEqualTo(KafkaUtilities.VerifyTopicState.Exactly);
  }

  // Cleanup
  cluster.stopCluster();
}
 
Developer: confluentinc, Project: support-metrics-common, Lines: 23, Source: KafkaUtilitiesTest.java

Example 14: stop

import kafka.server.KafkaServer; // import the required package/class
@AfterClass
public static void stop() {
    for (KafkaServer kafkaServer : kafkaServers) {
        kafkaServer.shutdown();
    }

    kafkaServers.clear();
    zkClient.close();
    zkServer.shutdown();

    Utils.delete(zkData);
    Utils.delete(zkLogs);
}
 
Developer: milenkovicm, Project: netty-kafka-producer, Lines: 14, Source: AbstractMultiBrokerTest.java

Example 15: before

import kafka.server.KafkaServer; // import the required package/class
@Override
protected void before() throws Throwable {
    logDirectory = tempDir(perTest("kafka-log"));
    Properties properties = brokerDefinition.getProperties();
    properties.setProperty(KafkaConfig.LogDirProp(), logDirectory.getCanonicalPath());
    kafkaServer = new KafkaServer(new KafkaConfig(properties),
            SystemTime$.MODULE$, Some$.MODULE$.apply("kafkaServer"));
    kafkaServer.startup();

    List<TopicDefinition> topicDefinitions = brokerDefinition.getTopicDefinitions();
    if (!topicDefinitions.isEmpty()) {
        ZkUtils zkUtils = ZkUtils.apply(brokerDefinition.getZookeeperConnect(), 30000, 30000,
                JaasUtils.isZkSecurityEnabled());
        for (TopicDefinition topicDefinition : topicDefinitions) {
            String name = topicDefinition.getName();
            log.info("Creating topic {}", name);
            AdminUtils.createTopic(zkUtils,
                    name,
                    topicDefinition.getPartitions(),
                    topicDefinition.getReplicationFactor(),
                    topicDefinition.getProperties());
        }
    }
}
 
Developer: jkorab, Project: ameliant-tools, Lines: 25, Source: EmbeddedKafkaBroker.java


Note: the kafka.server.KafkaServer class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers, and copyright in the source code remains with the original authors. Please consult each project's License before distributing or using the code, and do not republish this material without permission.