

Java Stream.each Method Code Examples

This article collects typical usage examples of the Java method storm.trident.Stream.each. If you are wondering what Stream.each does, how to use it, or what real-world usage looks like, the curated examples below should help. You can also explore further usage examples of the containing class, storm.trident.Stream.


The following presents 5 code examples of the Stream.each method, sorted by popularity by default.
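For orientation before the examples: each has two main overloads. each(inputFields, Function, functionFields) applies a function to every tuple and appends the emitted fields to the input fields; each(inputFields, Filter) drops tuples the filter rejects. A minimal sketch of the Function form (the ToUpperCase class and field names here are illustrative, not taken from the examples below):

import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

// Illustrative Function: reads the "word" input field and appends an "upper" field.
public class ToUpperCase extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        collector.emit(new Values(tuple.getString(0).toUpperCase()));
    }
}

// Usage: tuples in the resulting stream carry both "word" and "upper".
// stream.each(new Fields("word"), new ToUpperCase(), new Fields("upper"));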

Example 1: main

import storm.trident.Stream; // import the package/class the method depends on
public static void main(String... args) throws AlreadyAliveException, InvalidTopologyException, AuthorizationException {

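		// NOTE: kafkaSpout, strF, personAmountF, personF, amountF, sumF,
		// personSumF, keyMessageF, emptyF, key, message and kafkaAccountsTopology
		// are String/Fields constants defined elsewhere in the original class.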
		// starting to build topology
		TridentTopology topology = new TridentTopology();

		// Kafka as an opaque trident spout
		OpaqueTridentKafkaSpout spout = new OpaqueTridentKafkaSpoutBuilder(Conf.zookeeper, Conf.inputTopic).build();
		Stream stream = topology.newStream(kafkaSpout, spout);

		// mapping transaction messages to pairs: (person,amount)
		Stream atomicTransactions = stream.each(strF, Functions.mapToPersonAmount, personAmountF);

		// function to print each tuple (for debugging)
		atomicTransactions.each(personAmountF, Functions.printlnFunction, emptyF);

		// aggregating transactions and mapping to Kafka messages
		Stream transactionsGroupped = atomicTransactions.groupBy(personF)
				.persistentAggregate(new MemoryMapState.Factory(), amountF, new Sum(), sumF).newValuesStream()
				.each(personSumF, Functions.mapToKafkaMessage, keyMessageF);

		// Kafka as a sink -- producing to outputTopic
		TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory() //
				.withKafkaTopicSelector(new DefaultTopicSelector(Conf.outputTopic)) //
				.withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper<String, String>(key, message));
		transactionsGroupped.partitionPersist(stateFactory, keyMessageF, new TridentKafkaUpdater(), emptyF);

		// submitting topology to local cluster
		new LocalCluster().submitTopology(kafkaAccountsTopology, topologyConfig(), topology.build());

		// waiting a while, then running Kafka producer
		Sleep.seconds(5);
		KafkaProduceExample.start(20);

	}
 
Author: dzikowski | Project: simple-kafka-storm-java | Lines: 35 | Source: KafkaStormTridentExample.java
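The Functions.mapToPersonAmount and Functions.printlnFunction values above are static Function instances defined elsewhere in the project. A plausible sketch of the mapping step, assuming the raw message is a comma-separated "person,amount" string (the parsing logic is an assumption, not the project's actual code):

import backtype.storm.tuple.Values;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

// Assumed input: a raw string field such as "alice,42"; emits (person, amount).
public class MapToPersonAmount extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        String[] parts = tuple.getString(0).split(",");
        collector.emit(new Values(parts[0], Integer.parseInt(parts[1])));
    }
}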

Example 2: buildTopology

import storm.trident.Stream; // import the package/class the method depends on
public static StormTopology buildTopology(String hbaseRoot){
    Fields fields = new Fields("word", "count");
    FixedBatchSpout spout = new FixedBatchSpout(fields, 4,
            new Values("storm", 1),
            new Values("trident", 1),
            new Values("needs", 1),
            new Values("javadoc", 1)
    );
    spout.setCycle(true);

    TridentHBaseMapper tridentHBaseMapper = new SimpleTridentHBaseMapper()
            .withColumnFamily("cf")
            .withColumnFields(new Fields("word"))
            .withCounterFields(new Fields("count"))
            .withRowKeyField("word");

    HBaseValueMapper rowToStormValueMapper = new WordCountValueMapper();

    HBaseProjectionCriteria projectionCriteria = new HBaseProjectionCriteria();
    projectionCriteria.addColumn(new HBaseProjectionCriteria.ColumnMetaData("cf", "count"));

    HBaseState.Options options = new HBaseState.Options()
            .withConfigKey(hbaseRoot)
            .withDurability(Durability.SYNC_WAL)
            .withMapper(tridentHBaseMapper)
            .withProjectionCriteria(projectionCriteria)
            .withRowToStormValueMapper(rowToStormValueMapper)
            .withTableName("WordCount");

    StateFactory factory = new HBaseStateFactory(options);

    TridentTopology topology = new TridentTopology();
    Stream stream = topology.newStream("spout1", spout);

    stream.partitionPersist(factory, fields, new HBaseUpdater(), new Fields());

    TridentState state = topology.newStaticState(factory);
    stream = stream.stateQuery(state, new Fields("word"), new HBaseQuery(), new Fields("columnName","columnValue"));
    stream.each(new Fields("word","columnValue"), new PrintFunction(), new Fields());
    return topology.build();
}
 
Author: mengzhiyi | Project: storm-hbase-1.0.x | Lines: 42 | Source: WordCountTrident.java
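PrintFunction above is the project's own debugging helper. A minimal sketch of such a function, assuming it only prints and emits nothing (which is why the declared output fields are empty):

import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

// Prints each tuple; emits nothing, matching the empty output Fields above.
public class PrintFunction extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        System.out.println(tuple);
    }
}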

Example 3: buildTopology

import storm.trident.Stream; // import the package/class the method depends on
private static StormTopology buildTopology(LocalDRPC drpc) {
	TwitterDeveloperAccount twitterDeveloperAccount = new TwitterDeveloperAccount(
			TwitterDebugAuthenticationData.ACCESS_TOKEN, TwitterDebugAuthenticationData.ACCESS_TOKEN_SECRET,
			TwitterDebugAuthenticationData.API_KEY, TwitterDebugAuthenticationData.API_SECRET);
	TridentTopology topology = new TridentTopology();

	SmashBrosTweetsSpout smashBrosTweetsSpout = new SmashBrosTweetsSpout(twitterDeveloperAccount);

	Stream tweetsStream = topology.newStream("smashbros-tweets-spout", smashBrosTweetsSpout);

	// TridentState persistedTweets = tweetsStream.partitionPersist(
	//         new SmashBrosTweetsDatabaseState.Factory(), new Fields("tweet"),
	//         new BaseStateUpdater<SmashBrosTweetsDatabaseState>() {
	//             private static final long serialVersionUID = -2160953537837069611L;
	//
	//             @Override
	//             public void updateState(SmashBrosTweetsDatabaseState state,
	//                     List<TridentTuple> tuples, TridentCollector collector) {
	//                 List<Object> tweetIds = new ArrayList<Object>();
	//                 List<Object> tweets = new ArrayList<Object>();
	//                 for (TridentTuple t : tuples) {
	//                     tweetIds.add(((Tweet) t.get(0)).getId());
	//                     tweets.add(t.get(0));
	//                 }
	//                 state.multiUpdate(tweetIds, tweets);
	//             }
	//         });

	Stream tweetsTextStream = tweetsStream.each(new Fields("tweet"), new TweetTextExtractor(), new Fields(
			"tweet-text"));

	TridentState wordCounts = tweetsTextStream
			.each(new Fields("tweet-text"), new TweetWordsFilterAndSplit(), new Fields("word")) //
			.groupBy(new Fields("word")) //
			.persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count")) //
			.parallelismHint(6);

	TridentState charactersRank = tweetsTextStream
			.each(new Fields("tweet-text"), new CharactersReferencesIdentifier(), new Fields("charRef")) //
			.groupBy(new Fields("charRef")) //
			.persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count")) //
			.parallelismHint(6);

	// wordCounts.newValuesStream().each(new Fields("count"), new Debug());
	// charactersRank.newValuesStream().each(new Fields("count"), new Debug());

	return topology.build();
}
 
Author: danielgimenes | Project: SmashBrosTwitterAnalytics | Lines: 52 | Source: SmashBrosTwitterTopology.java
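TweetTextExtractor above maps a raw "tweet" object to a plain-text "tweet-text" field. A plausible sketch, assuming the project's Tweet type exposes a getText() accessor (both the class body and that accessor are assumptions):

import backtype.storm.tuple.Values;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

// Assumed: the "tweet" field holds a Tweet object with a getText() method.
public class TweetTextExtractor extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        Tweet tweet = (Tweet) tuple.get(0);
        collector.emit(new Values(tweet.getText()));
    }
}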

Example 4: buildTopology

import storm.trident.Stream; // import the package/class the method depends on
public static StormTopology buildTopology(Config conf) {			
	TridentTopology topology = new TridentTopology();
	Stream stream = null;
	
	List<String> fieldsWebLog = new ArrayList<String>();
	fieldsWebLog.add("host");
	fieldsWebLog.add("log");
	fieldsWebLog.add("user");
	fieldsWebLog.add("datetime");
	fieldsWebLog.add("request");
	fieldsWebLog.add("status");
	fieldsWebLog.add("size");
	fieldsWebLog.add("referer");
	fieldsWebLog.add("userAgent");
	fieldsWebLog.add("session");
	fieldsWebLog.add("responseTime");
	fieldsWebLog.add("timestamp");
	fieldsWebLog.add("json");
			
	SimpleFileStringSpout spout = new SimpleFileStringSpout("data/webserverlogs.json", "rawLogs");
	spout.setCycle(true);

	stream = topology.newStream("spout", spout);
	stream = stream.each(new Fields("rawLogs"), new WebServerLog2Json(), new Fields(fieldsWebLog));
	stream = stream.each(new Fields(fieldsWebLog), new WebServerLogFilter());
	
	stream.each(new Fields("request", "datetime"), new DatePartition(), new Fields("cq", "cf"))
			.groupBy(new Fields("request", "cq", "cf"))
			.persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"))					
			.newValuesStream()
			.each(new Fields("request", "cq", "cf", "count"), new LogFilter());
	
	stream.each(new Fields("user", "datetime"), new DatePartition(), new Fields("cq", "cf"))
			.groupBy(new Fields("user", "cq", "cf"))
			.persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"))					
			.newValuesStream()
			.each(new Fields("user", "cq", "cf", "count"), new LogFilter());
 
	if (Constant.YES.equals(conf.get(Conf.PROP_OPENTSDB_USE))) {
		LOG.info("OpenTSDB: " + conf.get(Conf.PROP_OPENTSDB_USE));
		stream.groupBy(new Fields(fieldsWebLog)).aggregate(new Fields(fieldsWebLog), new WebServerLog2TSDB(), new Fields("count"))			
		.each(new Fields("request", "count"), new LogFilter());
	}
	
	if (Constant.YES.equals(conf.get(Conf.PROP_HDFS_USE))) {
		LOG.info("HDFS: " + conf.get(Conf.PROP_HDFS_USE));
		stream.each(new Fields(fieldsWebLog), new HDFSPersistence(), new Fields("result"))
		.each(new Fields("result"), new LogFilter());
	}
	
	return topology.build();				
}
 
Author: Produban | Project: openbus | Lines: 53 | Source: OpenbusProcessorFileTopology.java
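DatePartition above emits a column qualifier ("cq") and column family ("cf") for each tuple so that counts can be grouped per time bucket. A rough sketch of the idea, assuming a "yyyy-MM-dd HH:mm:ss" datetime format and hour/date buckets (the real project's bucketing may differ):

import backtype.storm.tuple.Values;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.tuple.TridentTuple;

// Assumed bucketing: hour of day as the column qualifier, date as the column family.
public class DatePartition extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        String datetime = tuple.getString(1);   // second input field, e.g. "2015-06-01 17:03:21"
        String cf = datetime.substring(0, 10);  // "2015-06-01"
        String cq = datetime.substring(11, 13); // "17"
        collector.emit(new Values(cq, cf));
    }
}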

Example 5: buildTopology

import storm.trident.Stream; // import the package/class the method depends on
public static StormTopology buildTopology() {
	TridentTopology topology = new TridentTopology();
	Spout spout1 = new Spout();

	// "faltu" is only the stream's name; it is not referenced anywhere else.
	Stream inputStream = topology.newStream("faltu", spout1);

	inputStream.each(new Fields("myTuple"), new Function(), new Fields());

	return topology.build();
}
 
Author: BinitaBharati | Project: storm-trident-example | Lines: 14 | Source: ExampleTopology.java
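A note on this last example: storm.trident.operation.Function is an interface in Trident, so the new Function() instantiated here must be the project's own class implementing it. Since the output Fields are empty, the function adds nothing to the tuples and presumably exists only for side effects, much like the print helpers in the earlier examples.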


Note: The storm.trident.Stream.each examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their original authors, who retain all copyright; consult each project's License before distributing or reusing the code, and do not republish without permission.