

Java ModifiableHadoopConfiguration.of Method Code Examples

This article collects typical usage examples of the Java method com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration.of. If you are wondering what ModifiableHadoopConfiguration.of does, how to call it, or where to find usage examples, the curated snippets below may help. You can also explore other usage examples of the enclosing class, com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration.


The following 15 code examples of ModifiableHadoopConfiguration.of are shown below, sorted by popularity.

Example 1: testRecordReader

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public void testRecordReader() throws Exception {
    final Configuration conf = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(conf);
    faunusConf.getInputConf(ROOT_NS).set(SCRIPT_FILE, ScriptRecordReaderTest.class.getResource("ScriptInput.groovy").getFile());
    ScriptRecordReader reader = new ScriptRecordReader(VertexQueryFilter.create(new EmptyConfiguration()), HadoopCompatLoader.getCompat().newTask(conf, new TaskAttemptID()));
    reader.initialize(new FileSplit(new Path(ScriptRecordReaderTest.class.getResource("graph-of-the-gods.id").toURI()), 0, Long.MAX_VALUE, new String[]{}),
            HadoopCompatLoader.getCompat().newTask(conf, new TaskAttemptID()));
    int counter = 0;
    while (reader.nextKeyValue()) {
        assertEquals(reader.getCurrentKey(), NullWritable.get());
        FaunusVertex vertex = reader.getCurrentValue();
        long id = vertex.getLongId();
        assertEquals(id, counter++);
        assertEquals(vertex.getPropertyKeys().size(), 0);
        assertEquals(count(vertex.getEdges(Direction.IN)), 0);
        if (id == 1 || id == 2 || id == 3 || id == 7 || id == 11) {
            assertTrue(count(vertex.getEdges(Direction.OUT)) > 0);
        } else {
            assertEquals(count(vertex.getEdges(Direction.OUT)), 0);
        }
    }
    assertEquals(counter, 12);
    reader.close();
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 25 | Source: ScriptRecordReaderTest.java

Example 2: testRecordWriter

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public void testRecordWriter() throws Exception {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream stream = new DataOutputStream(new PrintStream(baos));
    Configuration conf = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(conf);
    faunusConf.getOutputConf(ROOT_NS).set(SCRIPT_FILE, ScriptRecordWriterTest.class.getResource("ScriptOutput.groovy").getFile());
    ScriptRecordWriter writer = new ScriptRecordWriter(stream, conf);
    Map<Long, FaunusVertex> graph = generateGraph(ExampleGraph.TINKERGRAPH);
    for (FaunusVertex vertex : graph.values()) {
        writer.write(NullWritable.get(), vertex);
    }
    String output = baos.toString();
    String[] rows = output.split("\n");
    int vertices = 0;
    for (String row : rows) {
        vertices++;
        assertTrue(row.contains(":"));
        if (row.startsWith("2") || row.startsWith("3") || row.startsWith("5"))
            assertEquals(row.length(), 3);
        else
            assertTrue(row.length() > 3);
    }
    assertEquals(vertices, graph.size());

}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 26 | Source: ScriptRecordWriterTest.java

Example 3: runVertexScanJob

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public static ScanMetrics runVertexScanJob(VertexScanJob vertexScanJob, Configuration conf, String confRootField,
                                     org.apache.hadoop.conf.Configuration hadoopConf,
                                     Class<? extends InputFormat> inputFormat)
        throws IOException, InterruptedException, ClassNotFoundException {

    ModifiableHadoopConfiguration scanConf =
            ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);

    tryToLoadClassByName(vertexScanJob);

    // Set the VertexScanJob class
    scanConf.set(TitanHadoopConfiguration.SCAN_JOB_CLASS, vertexScanJob.getClass().getName());

    String jobName = HadoopScanMapper.class.getSimpleName() + "[" + vertexScanJob + "]";

    return runJob(conf, confRootField, hadoopConf, inputFormat, jobName, HadoopVertexScanMapper.class);
}
 
Developer: graben1437 | Project: titan1.0.1.kafka | Lines: 18 | Source: HadoopScanRunner.java

Example 4: testRecordReaderWithVertexQueryFilterDirection

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public void testRecordReaderWithVertexQueryFilterDirection() throws Exception {
    Configuration config = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(config);
    faunusConf.set(TitanHadoopConfiguration.INPUT_VERTEX_QUERY_FILTER, "v.query().direction(OUT)");
    GraphSONRecordReader reader = new GraphSONRecordReader(VertexQueryFilter.create(config));
    reader.initialize(new FileSplit(new Path(GraphSONRecordReaderTest.class.getResource("graph-of-the-gods.json").toURI()), 0, Long.MAX_VALUE, new String[]{}),
            HadoopCompatLoader.getCompat().newTask(new Configuration(), new TaskAttemptID()));
    int counter = 0;
    while (reader.nextKeyValue()) {
        counter++;
        assertEquals(reader.getCurrentKey(), NullWritable.get());
        FaunusVertex vertex = reader.getCurrentValue();
        assertEquals(Iterables.size(vertex.getEdges(Direction.IN)), 0);
    }
    assertEquals(counter, 12);
    reader.close();
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 18 | Source: GraphSONRecordReaderTest.java

Example 5: getDataOuputStream

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public DataOutputStream getDataOuputStream(final TaskAttemptContext job) throws IOException, InterruptedException {
    org.apache.hadoop.conf.Configuration hadoopConf = DEFAULT_COMPAT.getContextConfiguration(job);
    this.faunusConf = ModifiableHadoopConfiguration.of(hadoopConf);
    boolean isCompressed = getCompressOutput(job);
    CompressionCodec codec = null;
    String extension = "";
    if (isCompressed) {
        final Class<? extends CompressionCodec> codecClass = getOutputCompressorClass(job, DefaultCodec.class);
        codec = ReflectionUtils.newInstance(codecClass, hadoopConf);
        extension = codec.getDefaultExtension();
    }
    final Path file = super.getDefaultWorkFile(job, extension);
    final FileSystem fs = file.getFileSystem(hadoopConf);
    if (!isCompressed) {
        return new DataOutputStream(fs.create(file, false));
    } else {
        return new DataOutputStream(codec.createOutputStream(fs.create(file, false)));
    }
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 20 | Source: HadoopFileOutputFormat.java

Example 6: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
public void setup(final Mapper.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));

    if (!faunusConf.get(PIPELINE_TRACK_PATHS))
        throw new IllegalStateException(LinkMapReduce.class.getSimpleName() + " requires that paths be enabled");

    step = faunusConf.get(LINK_STEP);
    direction = faunusConf.get(LINK_DIRECTION);
    label = faunusConf.get(LINK_LABEL);
    mergeDuplicates = faunusConf.get(LINK_MERGE_DUPLICATES);
    mergeWeightKey = faunusConf.get(LINK_MERGE_WEIGHT_KEY);
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 14 | Source: LinkMapReduce.java

Example 7: setConf

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
public void setConf(org.apache.hadoop.conf.Configuration hadoopConf) {
    this.hadoopConf = hadoopConf;
    this.configuration = ModifiableHadoopConfiguration.of(hadoopConf);

    boolean trackPaths = this.configuration.get(PIPELINE_TRACK_PATHS);
    if (trackPaths) {
        this.tracker = new Tracker((this instanceof FaunusVertex) ?
                new FaunusVertex.MicroVertex(this.id) :
                new StandardFaunusEdge.MicroEdge(this.id));
    }
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 13 | Source: FaunusPathElement.java

Example 8: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
public void setup(final Mapper.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getJobContextConfiguration(context));
    direction = faunusConf.get(VERTICES_EDGES_DIRECTION);
    labels = faunusConf.get(VERTICES_EDGES_LABELS);
    trackPaths = faunusConf.get(PIPELINE_TRACK_PATHS);
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 8 | Source: VerticesEdgesMapReduce.java

Example 9: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);
    org.apache.hadoop.conf.Configuration hadoopConf = DEFAULT_COMPAT.getContextConfiguration(context);
    ModifiableHadoopConfiguration scanConf = ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);
    job = getJob(scanConf);
    metrics = new HadoopContextScanMetrics(context);
    Configuration graphConf = getTitanConfiguration(context);
    finishSetup(scanConf, graphConf);
}
 
Developer: graben1437 | Project: titan1.0.1.kafka | Lines: 11 | Source: HadoopScanMapper.java

Example 10: getConf

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
private static ModifiableHadoopConfiguration getConf(String path) throws IOException {
    Properties properties = new Properties();
    Configuration configuration = new Configuration();
    properties.load(new FileInputStream(path));
    for (Map.Entry<Object, Object> entry : properties.entrySet()) {
        configuration.set(entry.getKey().toString(), entry.getValue().toString());
    }
    return ModifiableHadoopConfiguration.of(configuration);
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 10 | Source: LoaderScriptChecker.java

Example 11: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
public void setup(final Mapper.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));
    graph = TitanGraphOutputMapReduce.generateGraph(faunusConf);
    trackState = DEFAULT_COMPAT.getContextConfiguration(context).getBoolean(Tokens.TITAN_HADOOP_PIPELINE_TRACK_STATE, false);

    // Check whether a script is defined in the config
    if (faunusConf.has(OUTPUT_LOADER_SCRIPT_FILE)) {
        Path scriptPath = new Path(faunusConf.get(OUTPUT_LOADER_SCRIPT_FILE));
        FileSystem scriptFS = FileSystem.get(DEFAULT_COMPAT.getJobContextConfiguration(context));
        loaderScript = new LoaderScriptWrapper(scriptFS, scriptPath);
    }
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 14 | Source: TitanGraphOutputMapReduce.java

Example 12: cassandraRepair

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public static void cassandraRepair(Properties titanProperties, String indexName, String indexType, String partitioner) throws Exception {
    Configuration hadoopConfig = new Configuration();
    ConfigHelper.setInputPartitioner(hadoopConfig, partitioner);
    ModifiableHadoopConfiguration titanConf = ModifiableHadoopConfiguration.of(hadoopConfig);
    titanConf.set(TitanHadoopConfiguration.INPUT_FORMAT, TitanCassandraInputFormat.class.getCanonicalName());

    setCommonRepairOptions(titanConf, indexName, indexType);
    copyPropertiesToInputAndOutputConf(hadoopConfig, titanProperties);

    HadoopGraph hg = new HadoopGraph(hadoopConfig);
    repairIndex(hg);

}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 14 | Source: TitanIndexRepair.java

Example 13: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
public void setup(final Mapper.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));
    step = faunusConf.get(BACK_FILTER_STEP);
    String configuredClassname = faunusConf.get(BACK_FILTER_CLASS);
    isVertex = Vertex.class.getCanonicalName().equals(configuredClassname);
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 8 | Source: BackFilterMapReduce.java

Example 14: initialize

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
@Override
public void initialize(final InputSplit genericSplit, final TaskAttemptContext context) throws IOException {
    lineRecordReader.initialize(genericSplit, context);
    org.apache.hadoop.conf.Configuration c = DEFAULT_COMPAT.getContextConfiguration(context);
    Configuration configuration = ModifiableHadoopConfiguration.of(c);
    graphsonUtil = new HadoopGraphSONUtility(configuration);
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 8 | Source: GraphSONRecordReader.java

Example 15: testVertexQueryConstruction

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import required by this example
public void testVertexQueryConstruction() {
    Configuration config = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(config);
    faunusConf.set(TitanHadoopConfiguration.INPUT_VERTEX_QUERY_FILTER, "v.query().limit(0).direction(IN).labels('knows')");
    VertexQueryFilter query = VertexQueryFilter.create(config);
    assertTrue(query.doesFilter());
    assertEquals(query.limit, 0);
    assertEquals(query.hasContainers.size(), 0);
    assertEquals(query.direction, Direction.IN);
    assertEquals(query.labels.length, 1);
    assertEquals(query.labels[0], "knows");
}
 
Developer: graben1437 | Project: titan0.5.4-hbase1.1.1-custom | Lines: 13 | Source: VertexQueryFilterTest.java


Note: The com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration.of examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors. Refer to each project's license before distribution or use, and do not reproduce without permission.