This article collects typical usage examples of the Java method org.apache.hadoop.util.bloom.BloomFilter.add. If you are wondering exactly how BloomFilter.add is used, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.util.bloom.BloomFilter.
Three code examples of the BloomFilter.add method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
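Before turning to the examples, here is a minimal, self-contained sketch of the add/membershipTest round trip. It is not taken from any of the examples below; the vector size (1024), hash count (5), and hash type (Hash.MURMUR_HASH) are illustrative values only.

import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

public class BloomFilterAddSketch {
    public static void main(String[] args) {
        // Illustrative sizing: a 1024-bit vector probed by 5 murmur hash functions.
        BloomFilter filter = new BloomFilter(1024, 5, Hash.MURMUR_HASH);

        // add() sets the bit positions selected by hashing the key.
        filter.add(new Key("alice".getBytes()));
        filter.add(new Key("bob".getBytes()));

        // membershipTest() returns true for every added key and, with some
        // probability, for keys that were never added (false positives).
        System.out.println(filter.membershipTest(new Key("alice".getBytes()))); // true
        System.out.println(filter.membershipTest(new Key("carol".getBytes()))); // most likely false
    }
}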
Example 1: exec
import org.apache.hadoop.util.bloom.BloomFilter; // import the package/class this method depends on
@Override
public Tuple exec(Tuple input) throws IOException {
    if (input == null || input.size() == 0) return null;
    // Strip off the initial level of bag
    DataBag values = (DataBag)input.get(0);
    Iterator<Tuple> it = values.iterator();
    Tuple t = it.next();
    // If the input tuple has only one field, then we'll extract
    // that field and serialize it into a key. If it has multiple
    // fields, we'll serialize the whole tuple.
    byte[] b;
    if (t.size() == 1) b = DataType.toBytes(t.get(0));
    else b = DataType.toBytes(t, DataType.TUPLE);
    Key k = new Key(b);
    filter = new BloomFilter(vSize, numHash, hType);
    filter.add(k);
    return TupleFactory.getInstance().newTuple(bloomOut());
}
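Example 1 builds a fresh BloomFilter for each input, adds a single key, and hands the serialized bits back through bloomOut(); the per-input partial filters are presumably merged elsewhere in the UDF pipeline. A minimal sketch of that merge step, assuming every partial filter was constructed with the same vector size, hash count, and hash type (combine and partials are illustrative names, not part of the original code):

import java.util.List;
import org.apache.hadoop.util.bloom.BloomFilter;

static BloomFilter combine(List<BloomFilter> partials, int vSize, int numHash, int hType) {
    BloomFilter merged = new BloomFilter(vSize, numHash, hType);
    for (BloomFilter partial : partials) {
        merged.or(partial); // bitwise OR unions the membership of each partial filter
    }
    return merged;
}

The OR is only safe because all filters share identical parameters; mixing vector sizes or hash types would corrupt the result.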
Example 2: addToBloomFilter
import org.apache.hadoop.util.bloom.BloomFilter; // import the package/class this method depends on
private void addToBloomFilter(final Object vertex, final BloomFilter filter) throws RetrieverException {
    try {
        filter.add(new org.apache.hadoop.util.bloom.Key(elementConverter.serialiseVertex(vertex)));
    } catch (final AccumuloElementConversionException e) {
        throw new RetrieverException("Failed to add identifier to the bloom key", e);
    }
}
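Example 2 adds a serialised vertex to a filter supplied by the caller. The counterpart on the query side is a membership test against the same serialisation; a minimal sketch, assuming the same elementConverter is available (maybeContains is a hypothetical helper name, not from the original class):

private boolean maybeContains(final Object vertex, final BloomFilter filter) throws RetrieverException {
    try {
        // A negative result is definitive; a positive result may be a false positive.
        return filter.membershipTest(new org.apache.hadoop.util.bloom.Key(elementConverter.serialiseVertex(vertex)));
    } catch (final AccumuloElementConversionException e) {
        throw new RetrieverException("Failed to test identifier against the bloom filter", e);
    }
}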
Example 3: run
import org.apache.hadoop.util.bloom.BloomFilter; // import the package/class this method depends on
@Override
public int run(String[] args) throws Exception {
    if (args.length != 4) {
        System.err.println("Usage: Trainer <totrain> <nummembers> <falseposrate> <bfoutfile>");
        return 1;
    }

    // Parse command line arguments
    Path inputFile = new Path(args[0]);
    int numMembers = Integer.parseInt(args[1]);
    float falsePosRate = Float.parseFloat(args[2]);
    Path bfFile = new Path(args[3]);

    // Create a new Jedis client connected to localhost on port 6379
    jedis = new Jedis("localhost", 6379);
    // Clear out any previous contents of REDIS_SET_KEY
    jedis.del(REDIS_SET_KEY);

    // Create a new Bloom filter sized for the expected membership and false positive rate
    BloomFilter filter = createBloomFilter(numMembers, falsePosRate);

    // Open the input file for reading from HDFS
    FileSystem fs = FileSystem.get(getConf());
    String line = null;
    int numRecords = 0;
    BufferedReader rdr = new BufferedReader(new InputStreamReader(fs.open(inputFile)));
    while ((line = rdr.readLine()) != null) {
        // Skip empty lines
        if (!line.isEmpty()) {
            // Add the line to the Bloom filter
            filter.add(new Key(line.getBytes()));
            // Also add the line to the Redis set via the Jedis client's sadd method
            jedis.sadd(REDIS_SET_KEY, line);
            // Count the record
            ++numRecords;
        }
    }

    // Close the reader and disconnect the Jedis client
    rdr.close();
    jedis.disconnect();

    System.out.println("Trained Bloom filter with " + numRecords + " entries.");
    System.out.println("Serializing Bloom filter to HDFS at " + bfFile);

    // Create a new FSDataOutputStream using the FileSystem
    FSDataOutputStream strm = fs.create(bfFile);
    // Serialize the Bloom filter to the stream
    filter.write(strm);
    // Flush and close the stream
    strm.flush();
    strm.close();

    System.out.println("Done training Bloom filter.");
    return 0;
}
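The createBloomFilter helper called in Example 3 is not shown on this page. A plausible sketch, assuming it applies the standard optimal-sizing formulas for n expected members and false positive rate p (the formulas are an assumption, not necessarily what the original project used):

import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.hash.Hash;

static BloomFilter createBloomFilter(int numMembers, float falsePosRate) {
    // Optimal bit-vector size: m = -n * ln(p) / (ln 2)^2
    int vectorSize = (int) Math.ceil(-numMembers * Math.log(falsePosRate) / Math.pow(Math.log(2), 2));
    // Optimal hash count: k = (m / n) * ln 2, at least 1
    int nbHash = Math.max(1, (int) Math.round((double) vectorSize / numMembers * Math.log(2)));
    return new BloomFilter(vectorSize, nbHash, Hash.MURMUR_HASH);
}

On the read side, because BloomFilter implements Writable, the file written with filter.write(strm) can be restored with readFields() on an FSDataInputStream and then queried with membershipTest().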