

Java ConsumerConfig Class Code Examples

This article collects typical usage examples of the Java class kafka.consumer.ConsumerConfig. If you are wondering what the ConsumerConfig class is for, or how and where to use it, the curated class examples below may help.


The ConsumerConfig class belongs to the kafka.consumer package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
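All fifteen examples follow the same basic pattern of the old (pre-0.9) high-level consumer API: build a Properties object, wrap it in a ConsumerConfig, create a ConsumerConnector, and iterate over a KafkaStream. The following minimal sketch shows that pattern end to end; the ZooKeeper address, group id, and topic name are placeholder values, not taken from any of the projects below.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class MinimalConsumerSketch {
    public static void main(String[] args) {
        // Placeholder connection settings; adjust to your environment.
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "example-group");
        props.put("auto.commit.interval.ms", "1000");

        // ConsumerConfig validates the properties and configures the connector.
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Request one stream for the topic and iterate over incoming messages.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(Collections.singletonMap("example-topic", 1));
        ConsumerIterator<byte[], byte[]> it =
            streams.get("example-topic").get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}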

Example 1: OldApiTopicConsumer

import kafka.consumer.ConsumerConfig; // import the required package/class
/**
 * Creates a consumer based on the old high-level consumer API.
 *
 * @param context the consumer context carrying the configuration, message handlers, and thread-pool settings
 */
@SuppressWarnings("unchecked")
public OldApiTopicConsumer(ConsumerContext context) {

    this.consumerContext = context;
    try {
        Class<?> deserializerClass = Class
            .forName(context.getProperties().getProperty("value.deserializer"));
        deserializer = (Deserializer<Object>) deserializerClass.newInstance();
    } catch (Exception e) {
        logger.error("Failed to instantiate the configured value.deserializer", e);
    }
    this.connector = kafka.consumer.Consumer
        .createJavaConsumerConnector(new ConsumerConfig(context.getProperties()));

    int poolSize = consumerContext.getMessageHandlers().size();
    this.fetchExecutor = new StandardThreadExecutor(poolSize, poolSize, 0, TimeUnit.SECONDS,
        poolSize, new StandardThreadFactory("KafkaFetcher"));

    this.defaultProcessExecutor = new StandardThreadExecutor(1, context.getMaxProcessThreads(),
        30, TimeUnit.SECONDS, context.getMaxProcessThreads(),
        new StandardThreadFactory("KafkaProcessor"), new PoolFullRunsPolicy());

    logger.info(
        "Kafka Consumer ThreadPool initialized, fetchPool size:{}, defaultProcessPool size:{}",
        poolSize, context.getMaxProcessThreads());
}
 
Developer ID: warlock-china, Project: azeroth, Lines: 32, Source: OldApiTopicConsumer.java

Example 2: KafkaDataProvider

import kafka.consumer.ConsumerConfig; // import the required package/class
public KafkaDataProvider(String zookeeper, String topic, String groupId) {
  super(MessageAndMetadata.class);
  Properties props = new Properties();
  props.put("zookeeper.connect", zookeeper);
  props.put("group.id", groupId);
  props.put("zookeeper.session.timeout.ms", "30000");
  props.put("auto.commit.interval.ms", "1000");
  props.put("fetch.message.max.bytes", "4194304");
  consumer = kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
  Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
  topicCountMap.put(topic, 1);
  Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
  KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);

  iter = stream.iterator();
}
 
Developer ID: XiaoMi, Project: linden, Lines: 17, Source: KafkaDataProvider.java

Example 3: createConsumerConfig

import kafka.consumer.ConsumerConfig; // import the required package/class
/**
 * Builds the kafka-consumer configuration.
 *
 * @param zookeeper zookeeper address with port
 * @param groupId   kafka-consumer consumer group
 * @return a ConsumerConfig instance
 */
private static ConsumerConfig createConsumerConfig(String zookeeper, String groupId) {
    Properties props = new Properties();
    props.put("zookeeper.connect", zookeeper);
    props.put("group.id", groupId);
    props.put("zookeeper.session.timeout.ms", "400");
    props.put("zookeeper.sync.time.ms", "200");
    props.put("auto.commit.interval.ms", "1000");
    return new ConsumerConfig(props);
}
 
Developer ID: zhai3516, Project: storm-demos, Lines: 23, Source: KafkaDataSpout.java

Example 4: createConsumerConfig

import kafka.consumer.ConsumerConfig; // import the required package/class
private ConsumerConfig createConsumerConfig(String groupId,
		String consumerId) {
	final Properties props = new Properties();
	props.put("zookeeper.connect", fZooKeeper);
	props.put("group.id", groupId);
	props.put("consumer.id", consumerId);
	//props.put("auto.commit.enable", "false");
	// additional settings: start with our defaults, then pull in configured
	// overrides
	props.putAll(KafkaInternalDefaults);
	for (String key : KafkaConsumerKeys) {
		transferSettingIfProvided(props, key, "kafka");
	}

	return new ConsumerConfig(props);
}
 
Developer ID: att, Project: dmaap-framework, Lines: 17, Source: DMaaPKafkaConsumerFactory.java

Example 5: open

import kafka.consumer.ConsumerConfig; // import the required package/class
public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
    _collector = spoutOutputCollector;
    Properties props = new Properties();
    props.put("zookeeper.connect", conf.get(OSMIngest.ZOOKEEPERS));
    props.put("group.id", groupId);
    props.put("zookeeper.sync.time.ms", "200");
    props.put("auto.commit.interval.ms", "1000");
    ConsumerConfig consumerConfig = new ConsumerConfig(props);
    ConsumerConnector consumer = Consumer.createJavaConsumerConnector(consumerConfig);
    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    topicCountMap.put(topic, 1);
    Map<String, List<KafkaStream<String, String>>> consumerMap = consumer.createMessageStreams(topicCountMap, new StringDecoder(new VerifiableProperties()), new StringDecoder(new VerifiableProperties()));
    List<KafkaStream<String, String>> streams = consumerMap.get(topic);
    if (streams.size() != 1) {
        // fail fast instead of hitting the NullPointerException the original code allowed
        throw new IllegalStateException("Streams should be of size 1, but got " + streams.size());
    }
    kafkaIterator = streams.get(0).iterator();
}
 
Developer ID: geomesa, Project: geomesa-tutorials, Lines: 22, Source: OSMKafkaSpout.java

Example 6: readTopicToList

import kafka.consumer.ConsumerConfig; // import the required package/class
/**
 * Read topic to list, only using Kafka code.
 */
private static List<MessageAndMetadata<byte[], byte[]>> readTopicToList(String topicName, ConsumerConfig config, final int stopAfter) {
	ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(config);
	// we request only one stream per consumer instance. Kafka will make sure that each consumer group
	// will see each message only once.
	Map<String,Integer> topicCountMap = Collections.singletonMap(topicName, 1);
	Map<String, List<KafkaStream<byte[], byte[]>>> streams = consumerConnector.createMessageStreams(topicCountMap);
	if (streams.size() != 1) {
		throw new RuntimeException("Expected only one message stream but got "+streams.size());
	}
	List<KafkaStream<byte[], byte[]>> kafkaStreams = streams.get(topicName);
	if (kafkaStreams == null) {
		throw new RuntimeException("Requested stream not available. Available streams: "+streams.toString());
	}
	if (kafkaStreams.size() != 1) {
		throw new RuntimeException("Requested 1 stream from Kafka, bot got "+kafkaStreams.size()+" streams");
	}
	LOG.info("Opening Consumer instance for topic '{}' on group '{}'", topicName, config.groupId());
	ConsumerIterator<byte[], byte[]> iteratorToRead = kafkaStreams.get(0).iterator();

	List<MessageAndMetadata<byte[], byte[]>> result = new ArrayList<>();
	int read = 0;
	while(iteratorToRead.hasNext()) {
		read++;
		result.add(iteratorToRead.next());
		if (read == stopAfter) {
			LOG.info("Read "+read+" elements");
			return result;
		}
	}
	return result;
}
 
Developer ID: axbaretto, Project: flink, Lines: 35, Source: KafkaConsumerTestBase.java

Example 7: prepare

import kafka.consumer.ConsumerConfig; // import the required package/class
@SuppressWarnings("unchecked")
public void prepare() {
	Properties props = geneConsumerProp();
	
	for(String topicName : topic.keySet()){
		ConsumerConnector consumer = kafka.consumer.Consumer
				.createJavaConsumerConnector(new ConsumerConfig(props));
		
		consumerConnMap.put(topicName, consumer);
	}
	if(distributed!=null){
		try {
			logger.warn("zkDistributed is start...");
			zkDistributed = ZkDistributed.getSingleZkDistributed(distributed);
			zkDistributed.zkRegistration();
		} catch (Exception e) {
			logger.error("zkRegistration fail:{}",ExceptionUtil.getErrorMessage(e));
		}
	}
}
 
Developer ID: DTStack, Project: jlogstash-input-plugin, Lines: 22, Source: KafkaDistributed.java

Example 8: reconnConsumer

import kafka.consumer.ConsumerConfig; // import the required package/class
public void reconnConsumer(String topicName){
		
		// stop the connector associated with this topic
		ConsumerConnector consumerConn = consumerConnMap.get(topicName);
		consumerConn.commitOffsets(true);
		consumerConn.shutdown();
		consumerConnMap.remove(topicName);
		
		// stop the stream-consuming threads for this topic
		ExecutorService es = executorMap.get(topicName);
		es.shutdownNow();
		executorMap.remove(topicName);

		Properties prop = geneConsumerProp();
		ConsumerConnector newConsumerConn = kafka.consumer.Consumer
				.createJavaConsumerConnector(new ConsumerConfig(prop));
		consumerConnMap.put(topicName, newConsumerConn);

		addNewConsumer(topicName, topic.get(topicName));
}
 
Developer ID: DTStack, Project: jlogstash-input-plugin, Lines: 21, Source: KafkaDistributed.java

Example 9: reconnConsumer

import kafka.consumer.ConsumerConfig; // import the required package/class
public void reconnConsumer(String topicName){
	
	// stop the connector associated with this topic
	ConsumerConnector consumerConn = consumerConnMap.get(topicName);
	consumerConn.commitOffsets(true);
	consumerConn.shutdown();
	consumerConnMap.remove(topicName);
	
	// stop the stream-consuming threads for this topic
	ExecutorService es = executorMap.get(topicName);
	es.shutdownNow();	
	executorMap.remove(topicName);
	
	Properties prop = geneConsumerProp();
	ConsumerConnector newConsumerConn = kafka.consumer.Consumer
			.createJavaConsumerConnector(new ConsumerConfig(prop));
	consumerConnMap.put(topicName, newConsumerConn);
	
	addNewConsumer(topicName, topic.get(topicName));
}
 
Developer ID: DTStack, Project: jlogstash-input-plugin, Lines: 21, Source: Kafka.java

Example 10: KafkaConsumerConnector

import kafka.consumer.ConsumerConfig; // import the required package/class
public KafkaConsumerConnector(String zk, String groupName) {
    //Get group id which should be unique for table so as to keep offsets clean for multiple runs.
    String groupId = "voltdb-" + groupName;
    //TODO: Should get this from properties file or something as override?
    Properties props = new Properties();
    props.put("zookeeper.connect", zk);
    props.put("group.id", groupId);
    props.put("zookeeper.session.timeout.ms", "400");
    props.put("zookeeper.sync.time.ms", "200");
    props.put("auto.commit.interval.ms", "1000");
    props.put("auto.commit.enable", "true");
    props.put("auto.offset.reset", "smallest");
    props.put("rebalance.backoff.ms", "10000");

    m_consumerConfig = new ConsumerConfig(props);

    m_consumer = kafka.consumer.Consumer.createJavaConsumerConnector(m_consumerConfig);
}
 
Developer ID: anhnv-3991, Project: VoltDB, Lines: 19, Source: KafkaLoader.java

Example 11: startConsumers

import kafka.consumer.ConsumerConfig; // import the required package/class
@Override
public CompletionService<Histogram> startConsumers() {
    final ConsumerConfig consumerConfig = new ConsumerConfig(props);

    consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);

    // Create message streams
    final Map<String, Integer> topicMap = new HashMap<>();
    topicMap.put(topic, numThreads);

    final Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumerConnector.createMessageStreams(topicMap);
    final List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

    // Pass each stream to a consumer that will read from the stream in its own thread.
    for (final KafkaStream<byte[], byte[]> stream : streams) {
        executorCompletionService.submit(new BlockingKafkaMessageConsumer(stream));
    }

    return executorCompletionService;
}
 
Developer ID: eHarmony, Project: benchmarkio, Lines: 21, Source: BlockingKafkaMessageConsumerCoordinator.java

Example 12: initialize

import kafka.consumer.ConsumerConfig; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void initialize()
    throws StreamingException
{
    ConsumerConfig consumerConfig = new ConsumerConfig(kafkaProperties);
    consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);

    Map<String, Integer> topicCountMap = Maps.newHashMap();
    topicCountMap.put(topic, TOPIC_COUNT);

    Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
        consumerConnector.createMessageStreams(topicCountMap);
    KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
    consumerIterator = stream.iterator();
}
 
Developer ID: HuaweiBigData, Project: StreamCQL, Lines: 19, Source: KafkaSourceOp.java

Example 13: main

import kafka.consumer.ConsumerConfig; // import the required package/class
public static void main(String[] args) throws Exception {
    if (id == null) throw new IllegalStateException("Undefined HC_ID");
    if (zk == null) throw new IllegalStateException("Undefined HC_ZK");

    out.println("Starting " + HttpClient.class.getSimpleName());
    out.println("Using zk:" + zk + ", id:" + id);

    Properties props = new Properties();
    props.put("zookeeper.connect", zk);
    props.put("group.id", id);
    props.put("zookeeper.session.timeout.ms", "400");
    props.put("zookeeper.sync.time.ms", "200");

    ConsumerConnector consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    KafkaStream<byte[],byte[]> stream = consumer.createMessageStreams(Collections.singletonMap(id, 1)).get(id).get(0);

    consume(consumer, stream);
}
 
Developer ID: stealthly, Project: punxsutawney, Lines: 19, Source: HttpClient.java

Example 14: createDefaultConsumerConfig

import kafka.consumer.ConsumerConfig; // import the required package/class
/**
 * Creates default consumer config.
 *
 * @param zooKeeper ZooKeeper address &lt;server:port&gt;.
 * @param grpId Group Id for kafka subscriber.
 * @return Kafka consumer configuration.
 */
private ConsumerConfig createDefaultConsumerConfig(String zooKeeper, String grpId) {
    A.notNull(zooKeeper, "zookeeper");
    A.notNull(grpId, "groupId");

    Properties props = new Properties();

    props.put("zookeeper.connect", zooKeeper);
    props.put("group.id", grpId);
    props.put("zookeeper.session.timeout.ms", "400");
    props.put("zookeeper.sync.time.ms", "200");
    props.put("auto.commit.interval.ms", "1000");
    props.put("auto.offset.reset", "smallest");

    return new ConsumerConfig(props);
}
 
Developer ID: apache, Project: ignite, Lines: 23, Source: KafkaIgniteStreamerSelfTest.java

Example 15: createKafkaStream

import kafka.consumer.ConsumerConfig; // import the required package/class
public List<KafkaStream<byte[], byte[]>> createKafkaStream(
    String zookeeperConnectString,
    String topic,
    int partitions
) {
  //create consumer
  Properties consumerProps = new Properties();
  consumerProps.put("zookeeper.connect", zookeeperConnectString);
  consumerProps.put("group.id", "testClient");
  consumerProps.put("zookeeper.session.timeout.ms", "6000");
  consumerProps.put("zookeeper.sync.time.ms", "200");
  consumerProps.put("auto.commit.interval.ms", "1000");
  consumerProps.put("consumer.timeout.ms", "500");
  ConsumerConfig consumerConfig = new ConsumerConfig(consumerProps);
  ConsumerConnector consumer = Consumer.createJavaConsumerConnector(consumerConfig);
  Map<String, Integer> topicCountMap = new HashMap<>();
  topicCountMap.put(topic, partitions);
  Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
  return consumerMap.get(topic);
}
 
Developer ID: streamsets, Project: datacollector, Lines: 21, Source: SdcKafkaTestUtil.java
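Because Example 15 sets consumer.timeout.ms to 500, reading from the returned streams throws ConsumerTimeoutException once no message arrives within that window. The following sketch shows how a caller might drain those streams; it assumes createKafkaStream above is in scope, and the ZooKeeper address and topic name are placeholders.

import java.util.ArrayList;
import java.util.List;

import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;

List<KafkaStream<byte[], byte[]>> streams =
    createKafkaStream("localhost:2181", "test-topic", 1);
List<String> messages = new ArrayList<>();
ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
try {
    // hasNext() blocks until a message arrives or consumer.timeout.ms elapses
    while (it.hasNext()) {
        messages.add(new String(it.next().message()));
    }
} catch (ConsumerTimeoutException e) {
    // no message within 500 ms; stop polling and use what was collected
}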


Note: The kafka.consumer.ConsumerConfig class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by various developers, and copyright of the source code remains with the original authors. For distribution and use, please refer to each project's license. Do not reproduce without permission.