

Java Producer Class Code Examples

This article collects typical usage examples of the Java class kafka.javaapi.producer.Producer. If you are wondering what the Producer class does, or how to use it in your own code, the curated examples below may help.


The Producer class belongs to the kafka.javaapi.producer package. Fifteen code examples of the class are shown below, sorted by popularity by default.

Example 1: go

import kafka.javaapi.producer.Producer; // import the required package/class
/**
 * Read the config file, create a thread pool, and run the tasks.
 */
public void go() {
    Constant constant = new Constant();
    kafkaProperties kafkaProperties = new kafkaProperties();
    ProducerConfig config = new ProducerConfig(kafkaProperties.properties());

    ExecutorService executorService = Executors.newFixedThreadPool(Integer.parseInt(constant.THREAD_POOL_SIZE));

    String topic = constant.TOPIC_NAME;
    Task[] tasks = new Task[Integer.parseInt(constant.THREAD_NUM)];
    String[] folders = constant.FILE_FOLDERS.split(";");
    int batchSize = Integer.parseInt(constant.BATCH_SIZE);
    CopyOnWriteArrayList<String> fileList = addFiles(folders);

    for (int i = 0; i < tasks.length; ++i) {
        tasks[i] = new Task(i, topic, new Producer<String, String>(config), fileList, batchSize);
    }

    for (Task task : tasks) {
        executorService.execute(task);
    }
    executorService.shutdown();
}
 
Developer ID: Transwarp-DE, Project: Transwarp-Sample-Code, Lines: 26, Source file: kafkaProducer.java

Example 2: produceMessages

import kafka.javaapi.producer.Producer; // import the required package/class
public static void produceMessages(String brokerList, String topic, int msgCount, String msgPayload) throws JSONException, IOException {
    
    // Add Producer properties and create the Producer
    ProducerConfig config = new ProducerConfig(setKafkaBrokerProps(brokerList));
    Producer<String, String> producer = new Producer<String, String>(config);

    LOG.info("KAFKA: Preparing To Send " + msgCount + " Events.");
    for (int i=0; i<msgCount; i++){

        // Create the JSON object
        JSONObject obj = new JSONObject();
        obj.put("id", String.valueOf(i));
        obj.put("msg", msgPayload);
        obj.put("dt", GenerateRandomDay.genRandomDay());
        String payload = obj.toString();

        KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, null, payload);
        producer.send(data);
        LOG.info("Sent message: " + data.toString());
    }
    LOG.info("KAFKA: Sent " + msgCount + " Events.");

    // Stop the producer
    producer.close();
}
 
Developer ID: sakserv, Project: storm-topology-examples, Lines: 26, Source file: KafkaProducerTest.java
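Example 2 builds a small JSON payload per message with org.json's JSONObject. The same payload shape can be sketched without that dependency using String.format; the helper name and the fixed field order below are illustrative assumptions, not part of the original project:

```java
import java.time.LocalDate;

public class PayloadSketch {
    // Hypothetical helper mirroring Example 2's payload fields: id, msg, dt.
    static String buildPayload(int id, String msgPayload, LocalDate dt) {
        return String.format("{\"id\":\"%d\",\"msg\":\"%s\",\"dt\":\"%s\"}", id, msgPayload, dt);
    }

    public static void main(String[] args) {
        // Same structure the example hands to KeyedMessage as its payload string.
        System.out.println(buildPayload(0, "hello", LocalDate.of(2020, 1, 1)));
        // prints {"id":"0","msg":"hello","dt":"2020-01-01"}
    }
}
```

Note this sketch does not escape quotes inside msgPayload; for arbitrary input a real JSON library, as in the example, is the safer choice.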

Example 3: KafkaProducer

import kafka.javaapi.producer.Producer; // import the required package/class
public KafkaProducer(){
    Properties props = new Properties();
    // Kafka broker list (host:port)
    props.put("metadata.broker.list", "192.168.1.116:9092");

    // serializer class for message values
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    // serializer class for message keys
    props.put("key.serializer.class", "kafka.serializer.StringEncoder");

    //request.required.acks
    //0, which means that the producer never waits for an acknowledgement from the broker (the same behavior as 0.7). This option provides the lowest latency but the weakest durability guarantees (some data will be lost when a server fails).
    //1, which means that the producer gets an acknowledgement after the leader replica has received the data. This option provides better durability as the client waits until the server acknowledges the request as successful (only messages that were written to the now-dead leader but not yet replicated will be lost).
    //-1, which means that the producer gets an acknowledgement after all in-sync replicas have received the data. This option provides the best durability, we guarantee that no messages will be lost as long as at least one in sync replica remains.
    props.put("request.required.acks","-1");

    producer = new Producer<String, String>(new ProducerConfig(props));
}
 
Developer ID: unrealinux, Project: DataProcessPlatformKafkaJavaSDK, Lines: 19, Source file: KafkaProducer.java
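The property block in Example 3 can be factored into a small helper using only java.util.Properties from the JDK. This is a sketch, not code from the project; the helper name is invented and the broker address is a caller-supplied placeholder:

```java
import java.util.Properties;

public class ProducerProps {
    // Build the legacy-producer config used in Example 3 for a given broker list.
    static Properties syncStringProducerProps(String brokerList) {
        Properties props = new Properties();
        props.put("metadata.broker.list", brokerList);
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // -1: wait for all in-sync replicas to acknowledge (strongest durability)
        props.put("request.required.acks", "-1");
        return props;
    }

    public static void main(String[] args) {
        Properties p = syncStringProducerProps("192.168.1.116:9092");
        System.out.println(p.getProperty("request.required.acks"));  // prints -1
    }
}
```

With the Kafka 0.8 client on the classpath, the result would be passed to `new ProducerConfig(props)` exactly as the example does.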

Example 4: connect

import kafka.javaapi.producer.Producer; // import the required package/class
private void connect(String serverURI, String clientId, String zkConnect) throws MqttException {
	
	mqtt = new MqttAsyncClient(serverURI, clientId);
	mqtt.setCallback(this);
	IMqttToken token = mqtt.connect();
	Properties props = new Properties();
	
	//Updated based on Kafka v0.8.1.1
	props.put("metadata.broker.list", "localhost:9092");
	props.put("serializer.class", "kafka.serializer.StringEncoder");
	props.put("partitioner.class", "example.producer.SimplePartitioner");
	props.put("request.required.acks", "1");
	
	ProducerConfig config = new ProducerConfig(props);
	kafkaProducer = new Producer<String, String>(config);
	token.waitForCompletion();
	logger.info("Connected to MQTT and Kafka");
}
 
Developer ID: DhruvKalaria, Project: MQTTKafkaBridge, Lines: 19, Source file: Bridge.java

Example 5: sendMulitThread

import kafka.javaapi.producer.Producer; // import the required package/class
public static void sendMulitThread() {
	Producer<String, String> producer = buildSyncProducer();
	Random random = new Random();
	List<Thread> produceThreads = IntStream.range(0, 20).mapToObj(i -> {
		return new Thread(() -> {
			final String threadName = Thread.currentThread().getName();
			for(int j = 0; j < 10000; j++) {
				sendMessage(producer, Constants.TOPIC_NAME, random.nextInt(10000) + "", threadName + " message " + j);
			}
		});
	}).peek(Thread::start).collect(toList());
	
	produceThreads.stream().forEach(t -> {
		try {
			t.join();
		} catch (Exception e) {
			e.printStackTrace();
		}
	});
	
	producer.close();
}
 
Developer ID: walle-liao, Project: jaf-examples, Lines: 23, Source file: ProducerDemo.java
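The fan-out/join skeleton of sendMulitThread can be run standalone if the Kafka sends are replaced by an AtomicLong counter. The sketch below keeps the same IntStream/peek(start)/join structure; the class name, counter, and the reduced thread and message counts are illustrative assumptions:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class FanOutJoin {
    public static long run(int threads, int messagesPerThread) throws InterruptedException {
        AtomicLong sent = new AtomicLong();  // stands in for producer.send(...)
        List<Thread> workers = IntStream.range(0, threads)
                .mapToObj(i -> new Thread(() -> {
                    for (int j = 0; j < messagesPerThread; j++) {
                        sent.incrementAndGet();  // "send" one message
                    }
                }))
                .peek(Thread::start)           // start each thread as it is created
                .collect(Collectors.toList());
        for (Thread t : workers) {
            t.join();  // wait for every sender before closing the shared producer
        }
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 1000));  // prints 4000
    }
}
```

Joining all worker threads before `producer.close()` is the essential part of the original: closing the shared producer while sends are still in flight would fail them.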

Example 6: buildAsyncProducer

import kafka.javaapi.producer.Producer; // import the required package/class
private static Producer<String, String> buildAsyncProducer() {
	Properties props = new Properties();
	props.put("metadata.broker.list", Constants.BROKER_LIST);
	props.put("serializer.class", StringEncoder.class.getName());
	props.put("partitioner.class", HashPartitioner.class.getName());
	props.put("request.required.acks", "-1");
	props.put("producer.type", "async");  // use async mode
	props.put("batch.num.messages", "3");  // note: messages are sent in batches of 3
	props.put("queue.buffering.max.ms", "10000000");  // fixed: the original "queue.buffer.max.ms" is not a valid 0.8 producer config key
	props.put("queue.buffering.max.messages", "1000000");
	props.put("queue.enqueue.timeout.ms", "20000000");
	
	ProducerConfig config = new ProducerConfig(props);
	Producer<String, String> produce = new Producer<>(config);
	return produce;
}
 
Developer ID: walle-liao, Project: jaf-examples, Lines: 17, Source file: ProducerDemo.java

Example 7: prepare

import kafka.javaapi.producer.Producer; // import the required package/class
@Override
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    //for backward compatibility.
    if(mapper == null) {
        this.mapper = new FieldNameBasedTupleToKafkaMapper<K,V>();
    }

    //for backward compatibility.
    if(topicSelector == null) {
        this.topicSelector = new DefaultTopicSelector((String) stormConf.get(TOPIC));
    }

    Map configMap = (Map) stormConf.get(KAFKA_BROKER_PROPERTIES);
    Properties properties = new Properties();
    properties.putAll(configMap);
    ProducerConfig config = new ProducerConfig(properties);
    producer = new Producer<K, V>(config);
    this.collector = collector;
}
 
Developer ID: redBorder, Project: rb-bi, Lines: 20, Source file: KafkaBolt.java

Example 8: main

import kafka.javaapi.producer.Producer; // import the required package/class
public static void main(String[] args) {
    Properties props = new Properties();
    props.put("metadata.broker.list", "127.0.0.1:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("key.serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks","-1");

    Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));

    int messageNo = 100;
    final int COUNT = 1000;
    while (messageNo < COUNT) {
        String key = String.valueOf(messageNo);
        String data = "hello kafka message " + key;
        producer.send(new KeyedMessage<String, String>("TestTopic", key ,data));
        System.out.println(data);
        messageNo ++;
    }
}
 
Developer ID: javahongxi, Project: whatsmars, Lines: 20, Source file: KafkaProducer.java

Example 9: execute

import kafka.javaapi.producer.Producer; // import the required package/class
public void execute(JavaPairRDD<String, byte[]> inputMessage) {
    JavaPairRDD<String, byte[]> partitionedRDD;
    if (config.getLocalMode())
        partitionedRDD = inputMessage;
    else {
        // Helps scale beyond number of input partitions in kafka
        partitionedRDD = inputMessage.repartition(config.getRepartitionCount());

    }

    partitionedRDD.foreachPartition(prdd -> {
        // You can choose binary or string encoder
        Producer validProducer = ConnectionManager.getKafkaSingletonConnectionWithBinaryEncoder(config);
        prdd.forEachRemaining(records -> {
            byte[] msg = records._2();
            try {
                // TODO: Add your logic here to process data
                // As default we are just publishing back to another kafka topic
                logger.info("Processing event=" + new String(msg));
                publishMessagesToKafka(validProducer, msg);
            } catch (Exception e){
                logger.error("Error processing message: " + new String(msg), e);  // decode the byte[] so the log is readable
            }
        });
    });
}
 
Developer ID: ameyamk, Project: spark-streaming-direct-kafka, Lines: 27, Source file: ProcessStreamingData.java

Example 10: KafkaPublisher

import kafka.javaapi.producer.Producer; // import the required package/class
/**
 * Constructor: initializes the publisher from the given settings.
 * 
 * @param settings
 * @throws rrNvReadable.missingReqdSetting
 */
public KafkaPublisher(@Qualifier("propertyReader") rrNvReadable settings) throws rrNvReadable.missingReqdSetting {
	//fSettings = settings;

	final Properties props = new Properties();
	/*transferSetting(fSettings, props, "metadata.broker.list", "localhost:9092");
	transferSetting(fSettings, props, "request.required.acks", "1");
	transferSetting(fSettings, props, "message.send.max.retries", "5");
	transferSetting(fSettings, props, "retry.backoff.ms", "150"); */
	String kafkaConnUrl= com.att.ajsc.filemonitor.AJSCPropertiesMap.getProperty(CambriaConstants.msgRtr_prop,"kafka.metadata.broker.list"); 
	System.out.println("kafkaConnUrl:- "+kafkaConnUrl);
	if (null == kafkaConnUrl) {
		kafkaConnUrl = "localhost:9092";
	}
	transferSetting( props, "metadata.broker.list", kafkaConnUrl);
	transferSetting( props, "request.required.acks", "1");
	transferSetting( props, "message.send.max.retries", "5");
	transferSetting(props, "retry.backoff.ms", "150"); 

	props.put("serializer.class", "kafka.serializer.StringEncoder");

	fConfig = new ProducerConfig(props);
	fProducer = new Producer<String, String>(fConfig);
}
 
Developer ID: att, Project: dmaap-framework, Lines: 31, Source file: KafkaPublisher.java

Example 11: sample

import kafka.javaapi.producer.Producer; // import the required package/class
@Override
public SampleResult sample(Entry entry) {
	SampleResult result = new SampleResult();
	result.setSampleLabel(getName());
	try {
		result.sampleStart();
		Producer<String, String> producer = getProducer();
		KeyedMessage<String, String> msg = new KeyedMessage<String, String>(getTopic(), getMessage());
		producer.send(msg);
		result.sampleEnd(); 
		result.setSuccessful(true);
		result.setResponseCodeOK();
	} catch (Exception e) {
		result.sampleEnd(); // stop stopwatch
		result.setSuccessful(false);
		result.setResponseMessage("Exception: " + e);
		// get stack trace as a String to return as document data
		java.io.StringWriter stringWriter = new java.io.StringWriter();
		e.printStackTrace(new java.io.PrintWriter(stringWriter));
		result.setResponseData(stringWriter.toString(), null);
		result.setDataType(org.apache.jmeter.samplers.SampleResult.TEXT);
		result.setResponseCode("FAILED");
	}
	return result;
}
 
Developer ID: XMeterSaaSService, Project: kafka_jmeter, Lines: 26, Source file: KafkaSampler.java
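The catch block in Example 11 uses the standard StringWriter/PrintWriter idiom to capture a stack trace as a String for the sample's response data. Isolated as a runnable sketch (the class and method names are invented for illustration):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceText {
    // Render a Throwable's stack trace as a String, as the JMeter sampler does.
    static String stackTraceOf(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        String trace = stackTraceOf(new IllegalStateException("broker unreachable"));
        // The first line is the exception class and message; "at ..." frames follow.
        System.out.println(trace.startsWith("java.lang.IllegalStateException: broker unreachable"));
        // prints true
    }
}
```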

Example 12: main

import kafka.javaapi.producer.Producer; // import the required package/class
public static void main(String[] args) {
	String brokers = "localhost:9092";
	Producer<String, String> producer = KafkaProducer.getInstance(brokers).getProducer();

	KafkaDataProducer instance = new KafkaDataProducer();

	String topic = "test-topic";

	for (int i = 0; i < 100; i++) {
		String message = instance.get(i);
		KeyedMessage<String, String> keyedMessage = new KeyedMessage<String, String>(topic, "device001", message);
		producer.send(keyedMessage);
		System.out.println("message[" + (i + 1) + "] is sent.");
		try {
			Thread.sleep(1000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
}
 
Developer ID: osswangxining, Project: another-rule-based-analytics-on-spark, Lines: 21, Source file: KafkaDataProducer.java

Example 13: getInstance

import kafka.javaapi.producer.Producer; // import the required package/class
public static KafkaProducer getInstance(String brokerList) {
	long threadId = Thread.currentThread().getId();
	Producer<String, String> producer = _pool.get(threadId);
	System.out.println("producer:" + producer + ", thread:" + threadId);

	if (producer == null) {

		Preconditions.checkArgument(StringUtils.isNotBlank(brokerList), "kafka brokerList is blank...");

		// set properties
		Properties properties = new Properties();
		properties.put(METADATA_BROKER_LIST_KEY, brokerList);
		properties.put(SERIALIZER_CLASS_KEY, SERIALIZER_CLASS_VALUE);
		properties.put("kafka.message.CompressionCodec", "1");
		properties.put("client.id", "streaming-kafka-output");
		ProducerConfig producerConfig = new ProducerConfig(properties);

		producer = new Producer<String, String>(producerConfig);

		_pool.put(threadId, producer);
	}

	return instance;
}
 
Developer ID: osswangxining, Project: another-rule-based-analytics-on-spark, Lines: 25, Source file: KafkaProducer.java
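Example 13 caches one producer per thread ID with a manual get/null-check/put sequence, which can race if two threads share the map concurrently. The same pattern can be sketched with ConcurrentHashMap.computeIfAbsent, which creates the entry atomically; here the producer is replaced by a generic factory-built object so the sketch runs without Kafka (class and method names are assumptions):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

public class PerThreadPool<T> {
    private final ConcurrentMap<Long, T> pool = new ConcurrentHashMap<>();
    private final Supplier<T> factory;

    public PerThreadPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // One cached instance per calling thread, created atomically on first use.
    public T forCurrentThread() {
        return pool.computeIfAbsent(Thread.currentThread().getId(), id -> factory.get());
    }

    public static void main(String[] args) {
        PerThreadPool<StringBuilder> p = new PerThreadPool<>(StringBuilder::new);
        // Repeated calls from the same thread return the same cached instance.
        System.out.println(p.forCurrentThread() == p.forCurrentThread());  // prints true
    }
}
```

In the original, the factory body would build the ProducerConfig and `new Producer<String, String>(producerConfig)` exactly as the example does.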

Example 14: send

import kafka.javaapi.producer.Producer; // import the required package/class
@Override
public ListenableFuture<Integer> send() {
  try {
    int size = messages.size();
    Producer<Integer, ByteBuffer> kafkaProducer = producer.get();
    if (kafkaProducer == null) {
      return Futures.immediateFailedFuture(new IllegalStateException("No kafka producer available."));
    }
    kafkaProducer.send(messages);
    return Futures.immediateFuture(size);
  } catch (Exception e) {
    return Futures.immediateFailedFuture(e);
  } finally {
    messages.clear();
  }
}
 
Developer ID: apache, Project: twill, Lines: 17, Source file: SimpleKafkaPublisher.java

Example 15: main

import kafka.javaapi.producer.Producer; // import the required package/class
public static void main(String[] args) {
    Properties props = new Properties();
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("metadata.broker.list", "localhost:9092");

    Producer<String,String> producer = new Producer<String, String>(new ProducerConfig(props));

    int number = 1;
    for(; number < MESSAGES_NUMBER; number++)
    {
        String messageStr =
                String.format("{\"message\": %d, \"uid\":\"%s\"}",
                        number, uId.get(rand.nextInt(uNum)));

        producer.send(new KeyedMessage<String, String>(SparkStreamingConsumer.KAFKA_TOPIC,
                null, messageStr));
        if (number % 10000 == 0)
            System.out.println("Messages pushed: " + number);
    }
    System.out.println("Messages pushed: " + number);
}
 
Developer ID: rssdev10, Project: spark-kafka-streaming, Lines: 22, Source file: KafkaDataProducer.java


Note: The kafka.javaapi.producer.Producer class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by their authors, and copyright remains with the original authors; consult each project's license before using or redistributing the code. Do not reproduce this article without permission.