

Java TridentKafkaUpdater Class Code Examples

This article collects typical usage examples of the Java class storm.kafka.trident.TridentKafkaUpdater. If you are wondering what TridentKafkaUpdater does, how to use it, or are looking for working examples of the class, the curated code samples below should help.


The TridentKafkaUpdater class belongs to the storm.kafka.trident package. Two code examples of the class are shown below, ordered by popularity.
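Before the full examples, the core wiring pattern is worth stating on its own: build a TridentKafkaStateFactory that knows the target topic and how to map tuple fields to a Kafka key/message pair, then hand it to partitionPersist together with a TridentKafkaUpdater, which forwards each batch of tuples to the Kafka-writing state. The sketch below is a minimal illustration of that pattern only; the spout, the stream's field names ("key", "message"), and the topic name "output-topic" are placeholder assumptions, not part of the examples that follow.

```java
import backtype.storm.tuple.Fields;
import storm.kafka.trident.TridentKafkaStateFactory;
import storm.kafka.trident.TridentKafkaUpdater;
import storm.kafka.trident.mapper.FieldNameBasedTupleToKafkaMapper;
import storm.kafka.trident.selector.DefaultTopicSelector;
import storm.trident.Stream;
import storm.trident.TridentTopology;

public class TridentKafkaUpdaterSketch {

    // someSpout stands in for any batch spout emitting "key"/"message" tuples (assumed).
    static StormTopology build(IBatchSpout someSpout) {
        TridentTopology topology = new TridentTopology();
        Stream stream = topology.newStream("mySpout", someSpout);

        // Factory describing WHERE to write (topic) and WHAT to write (field -> key/message).
        TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory()
                .withKafkaTopicSelector(new DefaultTopicSelector("output-topic"))
                .withTridentTupleToKafkaMapper(
                        new FieldNameBasedTupleToKafkaMapper<String, String>("key", "message"));

        // TridentKafkaUpdater passes each partition's tuples to the Kafka state;
        // no output fields are emitted downstream, hence the empty Fields().
        stream.partitionPersist(stateFactory, new Fields("key", "message"),
                new TridentKafkaUpdater(), new Fields());

        return topology.build();
    }
}
```

This is the same three-step shape both examples below follow: configure the factory, pick the input fields, attach the updater.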

Example 1: buildTopology

import storm.kafka.trident.TridentKafkaUpdater; // import the required package/class
@Override
public StormTopology buildTopology(Config topologyConf) throws Exception {
    IBatchSpout wordSpout = new FileBasedBatchSpout("words.txt", new Fields("word"), 10);

    TridentTopology topology = new TridentTopology();

    Stream wordsStream = topology.newStream("someWords", wordSpout);

    TridentKafkaStateFactory stateFactory = TridentConnectorUtil.getTridentKafkaStateFactory(TOPIC_NAME, kafkaBrokerlist, "word", "word", topologyConf);
    wordsStream.partitionPersist(stateFactory, new Fields("word"), new TridentKafkaUpdater(), new Fields()).parallelismHint(1);

    JmsStateFactory jmsStateFactory = TridentConnectorUtil.getJmsStateFactory(jmsConnectionString, JMS_QUEUE_NAME);
    wordsStream.partitionPersist(jmsStateFactory, new Fields("word"), new JmsUpdater(), new Fields()).parallelismHint(1);

    Stream kafkaStream = topology.newStream("kafkaTridentSpout",  TridentConnectorUtil.getTridentKafkaEmitter(zkConnString, TOPIC_NAME, topologyConf)).parallelismHint(1);
    Stream jmsStream = topology.newStream("jmsTridentSpout",  TridentConnectorUtil.getTridentJmsSpouts(jmsConnectionString, JMS_QUEUE_NAME, topologyConf, "words")).parallelismHint(1);

    kafkaStream = kafkaStream.global().each(new Fields("str"), new TridentWordCount(), new Fields("word","count")).parallelismHint(1);
    jmsStream = jmsStream.global().each(new Fields("words"), new TridentWordCount(), new Fields("word","count")).parallelismHint(1);

    HBaseStateFactory hBaseStateFactory = TridentConnectorUtil.getTridentHbaseFactory(hbaseUrl, TABLE_NAME, "word", COLUMN_FAMILY, Lists.newArrayList("word"),
            Lists.newArrayList("count"), topologyConf);
    TridentState tridentState = jmsStream.global().partitionPersist(hBaseStateFactory, new Fields("word", "count"), new HBaseUpdater(), new Fields()).parallelismHint(1);

    HdfsStateFactory tridentHdfsFactory = TridentConnectorUtil.getTridentHdfsFactory(hdfsUrl, HDFS_SRC_DIR, HDFS_ROTATION_DIR, "word", "count");
    kafkaStream.global().partitionPersist(tridentHdfsFactory, new Fields("word", "count"), new HdfsUpdater(), new Fields()).parallelismHint(1);

    CassandraStateFactory cassandraStateFactory = TridentConnectorUtil.getCassandraStateFactory(cassandraConnString, KEY_SPACE_NAME, "word", COLUMN_FAMILY, topologyConf);
    Map<String, Class> fieldToTypeMap = Maps.newHashMap();
    fieldToTypeMap.put("word", String.class);
    fieldToTypeMap.put("count", Long.class);
    SimpleCassandraTridentTupleMapper mapper = new SimpleCassandraTridentTupleMapper(KEY_SPACE_NAME, COLUMN_FAMILY, "word",fieldToTypeMap);
    kafkaStream.global().partitionPersist(cassandraStateFactory, new Fields("word", "count"),
            new CassandraUpdater(mapper), new Fields()).parallelismHint(1);
    return topology.build();
}
 
Author: Parth-Brahmbhatt, Project: storm-smoke-test, Lines: 37, Source: WordCountTridentSmokeTest.java

Example 2: main

import storm.kafka.trident.TridentKafkaUpdater; // import the required package/class
public static void main(String... args) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {

    // start building the topology
    TridentTopology topology = new TridentTopology();

    // Kafka as an opaque Trident spout
    OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpoutBuilder(Conf.zookeeper, Conf.inputTopic).build();
    Stream stream = topology.newStream(kafkaSpout, spout);

    // map transaction messages to (person, amount) pairs
    Stream atomicTransactions = stream.each(strF, Functions.mapToPersonAmount, personAmountF);

    // print each tuple for debugging
    atomicTransactions.each(personAmountF, Functions.printlnFunction, emptyF);

    // aggregate transactions per person and map the results to Kafka messages
    Stream transactionsGroupped = atomicTransactions.groupBy(personF)
            .persistentAggregate(new MemoryMapState.Factory(), amountF, new Sum(), sumF).newValuesStream()
            .each(personSumF, Functions.mapToKafkaMessage, keyMessageF);

    // Kafka as a sink -- producing to outputTopic
    TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory()
            .withKafkaTopicSelector(new DefaultTopicSelector(Conf.outputTopic))
            .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper<String, String>(key, message));
    transactionsGroupped.partitionPersist(stateFactory, keyMessageF, new TridentKafkaUpdater(), emptyF);

    // submit the topology to a local cluster
    new LocalCluster().submitTopology(kafkaAccountsTopology, topologyConfig(), topology.build());

    // wait a while, then start a Kafka producer
    Sleep.seconds(5);
    KafkaProduceExample.start(20);
}
 
Author: dzikowski, Project: simple-kafka-storm-java, Lines: 35, Source: KafkaStormTridentExample.java


Note: The storm.kafka.trident.TridentKafkaUpdater examples in this article were collected by 纯净天空 from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of the code snippets remains with the original authors; consult each project's license before redistributing or using them, and do not republish without permission.