

Java ModifiableHadoopConfiguration Class Code Examples

This article collects typical usage examples of the Java class com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration. If you have been wondering what ModifiableHadoopConfiguration does, how to use it, or what working code that uses it looks like, the curated examples below should help.


The ModifiableHadoopConfiguration class belongs to the com.thinkaurelius.titan.hadoop.config package. Fifteen code examples are shown below, sorted by popularity by default.
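Across all fifteen examples one pattern repeats: ModifiableHadoopConfiguration wraps a raw org.apache.hadoop.conf.Configuration (via `new ModifiableHadoopConfiguration()` or `ModifiableHadoopConfiguration.of(hadoopConf)`) and exposes typed, namespace-scoped views through methods such as getInputConf, getOutputConf, get, and set. The dependency-free sketch below illustrates that wrapper pattern only; the class and method names here are hypothetical analogues, not Titan's actual API.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal, dependency-free analogue of the namespace-scoped config
 * wrapper pattern that ModifiableHadoopConfiguration implements.
 * All names here are illustrative, not Titan's actual API.
 */
public class NamespacedConfig {
    private final Map<String, String> backing; // shared flat key/value store
    private final String prefix;               // namespace prefix of this view

    public NamespacedConfig(Map<String, String> backing, String prefix) {
        this.backing = backing;
        this.prefix = prefix;
    }

    // Scope this view to a sub-namespace, e.g. "input" or "output".
    public NamespacedConfig getSubConf(String ns) {
        return new NamespacedConfig(backing, prefix + ns + ".");
    }

    public void set(String key, String value) {
        backing.put(prefix + key, value);
    }

    public String get(String key) {
        return backing.get(prefix + key);
    }

    public static void main(String[] args) {
        Map<String, String> hadoopConf = new HashMap<>();
        NamespacedConfig conf = new NamespacedConfig(hadoopConf, "titan.hadoop.");
        // Mirrors faunusConf.getInputConf(ROOT_NS).set(RDF_FORMAT, ...)
        conf.getSubConf("input").set("rdf-format", "n-triples");
        System.out.println(hadoopConf.get("titan.hadoop.input.rdf-format"));
        // prints: n-triples
    }
}
```

Every sub-view writes through to the same flat map, which is why the examples can hand the underlying Hadoop Configuration to a job while still reading it through typed namespace views.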

Example 1: testRecordReader

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testRecordReader() throws Exception {
    ModifiableHadoopConfiguration faunusConf = new ModifiableHadoopConfiguration();
    faunusConf.getInputConf(RDFConfig.ROOT_NS).set(RDFConfig.RDF_FORMAT, RDFConfig.Syntax.N_TRIPLES);
    RDFRecordReader reader = new RDFRecordReader(faunusConf);
    reader.initialize(new FileSplit(new Path(RDFRecordReaderTest.class.getResource("graph-example-1.ntriple").toURI()), 0, Long.MAX_VALUE, new String[]{}),
            HadoopCompatLoader.getCompat().newTask(faunusConf.getHadoopConfiguration(), new TaskAttemptID()));
    int counter = 0;
    while (reader.nextKeyValue()) {
        assertEquals(reader.getCurrentKey(), NullWritable.get());
        // reader.getCurrentValue();
        counter++;

    }
    assertEquals(counter, 18 * 3);
    reader.close();
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 17 · Source: RDFRecordReaderTest.java

Example 2: runScanJob

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public static ScanMetrics runScanJob(ScanJob scanJob, Configuration conf, String confRootField,
                                 org.apache.hadoop.conf.Configuration hadoopConf,
                                 Class<? extends InputFormat> inputFormat)
        throws IOException, InterruptedException, ClassNotFoundException {

    ModifiableHadoopConfiguration scanConf =
            ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);

    tryToLoadClassByName(scanJob);

    // Set the ScanJob class
    scanConf.set(TitanHadoopConfiguration.SCAN_JOB_CLASS, scanJob.getClass().getName());

    String jobName = HadoopScanMapper.class.getSimpleName() + "[" + scanJob + "]";

    return runJob(conf, confRootField, hadoopConf, inputFormat, jobName, HadoopScanMapper.class);
}
 
Developer: graben1437 · Project: titan1withtp3.1 · Lines: 18 · Source: HadoopScanRunner.java

Example 3: runVertexScanJob

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public static ScanMetrics runVertexScanJob(VertexScanJob vertexScanJob, Configuration conf, String confRootField,
                                     org.apache.hadoop.conf.Configuration hadoopConf,
                                     Class<? extends InputFormat> inputFormat)
        throws IOException, InterruptedException, ClassNotFoundException {

    ModifiableHadoopConfiguration scanConf =
            ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);

    tryToLoadClassByName(vertexScanJob);

    // Set the VertexScanJob class
    scanConf.set(TitanHadoopConfiguration.SCAN_JOB_CLASS, vertexScanJob.getClass().getName());

    String jobName = HadoopScanMapper.class.getSimpleName() + "[" + vertexScanJob + "]";

    return runJob(conf, confRootField, hadoopConf, inputFormat, jobName, HadoopVertexScanMapper.class);
}
 
Developer: graben1437 · Project: titan1withtp3.1 · Lines: 18 · Source: HadoopScanRunner.java

Example 4: getDataOuputStream

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public DataOutputStream getDataOuputStream(final TaskAttemptContext job) throws IOException, InterruptedException {
    org.apache.hadoop.conf.Configuration hadoopConf = DEFAULT_COMPAT.getContextConfiguration(job);
    this.faunusConf = ModifiableHadoopConfiguration.of(hadoopConf);
    boolean isCompressed = getCompressOutput(job);
    CompressionCodec codec = null;
    String extension = "";
    if (isCompressed) {
        final Class<? extends CompressionCodec> codecClass = getOutputCompressorClass(job, DefaultCodec.class);
        codec = ReflectionUtils.newInstance(codecClass, hadoopConf);
        extension = codec.getDefaultExtension();
    }
    final Path file = super.getDefaultWorkFile(job, extension);
    final FileSystem fs = file.getFileSystem(hadoopConf);
    if (!isCompressed) {
        return new DataOutputStream(fs.create(file, false));
    } else {
        return new DataOutputStream(codec.createOutputStream(fs.create(file, false)));
    }
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 20 · Source: HadoopFileOutputFormat.java

Example 5: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);

    // Catch any exceptions, log a warning, and allow the subclass to continue even if schema loading failed
    try {
        ModifiableHadoopConfiguration faunusConf =
                ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));

        if (faunusConf.get(TitanHadoopConfiguration.OUTPUT_TITAN_TYPE_CHECKING)) {
            TitanGraph g = TitanFactory.open(faunusConf.getOutputConf());
            FaunusSchemaManager.getTypeManager(null).setSchemaProvider(new SchemaContainer(g));
            log.info("Loaded schema associated with {}", g);
        } else {
            log.debug("Titan schema checking is disabled");
        }
    } catch (Throwable t) {
        log.warn("Unable to load Titan schema", t);
    }
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 21 · Source: TitanSchemaAwareMapper.java

Example 6: testPropertySortingOnText

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testPropertySortingOnText() throws Exception {
    Configuration config = ValueGroupCountMapReduce.createConfiguration(Vertex.class, "age", Text.class);
    this.mapReduceDriver.withConfiguration(config);

    Map<Long, FaunusVertex> vertices = new HashMap<Long, FaunusVertex>();
    for (long i = 0; i < 15; i++) {
        FaunusVertex v = new FaunusVertex(new ModifiableHadoopConfiguration(), i);
        v.setProperty("age", i);
        vertices.put(i, v);
        v.startPath();
    }
    final List<Pair<Text, LongWritable>> results = runWithGraphNoIndex(vertices, mapReduceDriver);
    final List<String> sortedText = new ArrayList<String>();
    sortedText.addAll(Arrays.asList("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14"));
    Collections.sort(sortedText);
    //System.out.print(sortedText);
    for (int i = 0; i < results.size(); i++) {
        assertEquals(results.get(i).getSecond().get(), 1L);
        assertEquals(results.get(i).getFirst().toString(), sortedText.get(i));
    }
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 22 · Source: ValueGroupCountMapReduceTest.java

Example 7: testMultiLineParse

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testMultiLineParse() throws Exception {
    FaunusSchemaManager.getTypeManager(null).clear();
    ModifiableHadoopConfiguration faunusConf = new ModifiableHadoopConfiguration();
    faunusConf.getInputConf(ROOT_NS).set(RDF_USE_LOCALNAME, true);
    faunusConf.getInputConf(ROOT_NS).set(RDF_LITERAL_AS_PROPERTY, true);
    faunusConf.getInputConf(ROOT_NS).set(RDF_FORMAT, Syntax.N_TRIPLES);
    RDFBlueprintsHandler handler = new RDFBlueprintsHandler(faunusConf);

    handler.parse("<http://tinkerpop.com#josh> <http://tinkerpop.com#age> \"32\"^^<http://www.w3.org/2001/XMLSchema#int> .");
    handler.parse("<http://tinkerpop.com#josh> <http://tinkerpop.com#knows> <http://tinkerpop.com#marko> .");

    FaunusVertex josh = (FaunusVertex) handler.next();
    assertEquals(josh.getProperty("age"), 32);
    assertEquals(josh.getProperty("name"), "josh");
    assertEquals(josh.getPropertyKeys().size(), 3);
    josh = (FaunusVertex) handler.next();
    assertEquals(josh.getProperty("name"), "josh");
    assertEquals(josh.getPropertyKeys().size(), 2);
    FaunusVertex marko = (FaunusVertex) handler.next();
    assertEquals(marko.getProperty("name"), "marko");
    assertEquals(marko.getPropertyKeys().size(), 2);
    StandardFaunusEdge knows = (StandardFaunusEdge) handler.next();
    assertEquals(knows.getLabel(), "knows");
    assertEquals(knows.getPropertyKeys().size(), 1);
    assertFalse(handler.hasNext());
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 27 · Source: RDFBlueprintsHandlerTest.java
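Example 7 feeds raw N-Triples lines to RDFBlueprintsHandler, which maps subjects and URI objects to vertices, predicates to edge labels or property keys, and (with RDF_USE_LOCALNAME) shortens http://tinkerpop.com#josh to "josh". As a rough illustration of the first step only — splitting one triple line into subject, predicate, and object — the toy sketch below handles just the simple URI/literal forms seen in this test; a real parser such as the Rio library that Faunus builds on covers far more of the grammar.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Toy N-Triples splitter: handles <uri> <uri> (<uri> | "lit"[^^<type>]) . only. */
public class TripleSplitter {
    private static final Pattern TRIPLE = Pattern.compile(
        "<([^>]+)>\\s+<([^>]+)>\\s+(?:<([^>]+)>|\"([^\"]*)\"(?:\\^\\^<[^>]+>)?)\\s*\\.");

    /** Returns {subject, predicate, object} or null if the line does not match. */
    public static String[] split(String line) {
        Matcher m = TRIPLE.matcher(line.trim());
        if (!m.matches()) return null;
        String obj = m.group(3) != null ? m.group(3) : m.group(4);
        return new String[] { m.group(1), m.group(2), obj };
    }

    /** Local-name extraction, mimicking the RDF_USE_LOCALNAME option. */
    public static String localName(String uri) {
        int i = Math.max(uri.lastIndexOf('#'), uri.lastIndexOf('/'));
        return i >= 0 ? uri.substring(i + 1) : uri;
    }

    public static void main(String[] args) {
        String[] t = split(
            "<http://tinkerpop.com#josh> <http://tinkerpop.com#age> \"32\"^^<http://www.w3.org/2001/XMLSchema#int> .");
        System.out.println(localName(t[0]) + " " + localName(t[1]) + " " + t[2]);
        // prints: josh age 32
    }
}
```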

Example 8: generateGraph

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public static TitanGraph generateGraph(final ModifiableHadoopConfiguration titanConf) {
        final Class<? extends OutputFormat> format = titanConf.getClass(OUTPUT_FORMAT, OutputFormat.class, OutputFormat.class);
        if (TitanOutputFormat.class.isAssignableFrom(format)) {
            ModifiableConfiguration mc = titanConf.getOutputConf();
            boolean present = mc.has(AbstractCassandraStoreManager.CASSANDRA_KEYSPACE);
            LOGGER.trace("Keyspace in_config=" + present + " value=" + mc.get(AbstractCassandraStoreManager.CASSANDRA_KEYSPACE));
            TitanGraph g = TitanFactory.open(mc);

//            final boolean checkTypes = titanConf.get(TitanHadoopConfiguration.OUTPUT_TITAN_TYPE_CHECKING);
//
//            if (checkTypes) {
//                FaunusSchemaManager.getTypeManager(null).setSchemaProvider(new SchemaContainer(g));
//            }

            return g;
        } else {
            throw new RuntimeException("The provided graph output format is not a supported TitanOutputFormat: " + format.getName());
        }
    }
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 20 · Source: TitanGraphOutputMapReduce.java

Example 9: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
@Override
public void setup(
        final Mapper<NullWritable, FaunusVertex, NullWritable, NullWritable>.Context context) throws IOException {
    Configuration hadoopConf = DEFAULT_COMPAT.getContextConfiguration(context);
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(hadoopConf);
    BasicConfiguration titanConf = faunusConf.getOutputConf();
    indexName = faunusConf.get(TitanHadoopConfiguration.INDEX_NAME);
    indexType = faunusConf.get(TitanHadoopConfiguration.INDEX_TYPE);

    try {
        Preconditions.checkNotNull(indexName, "Need to provide at least an index name for re-index job");
        log.info("Read index information: name={} type={}", indexName, indexType);
        graph = (StandardTitanGraph)TitanFactory.open(titanConf);
        SchemaContainer schema = new SchemaContainer(graph);
        FaunusSchemaManager typeManager = FaunusSchemaManager.getTypeManager(titanConf);
        typeManager.setSchemaProvider(schema);
        log.info("Opened graph {}", graph);
        mgmt = (ManagementSystem) graph.getManagementSystem();
        validateIndexStatus();
    } catch (final Exception e) {
        if (null != mgmt && mgmt.isOpen())
            mgmt.rollback();
        DEFAULT_COMPAT.incrementContextCounter(context, Counters.FAILED_TRANSACTIONS, 1L);
        throw new IOException(e.getMessage(), e);
    }
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 27 · Source: TitanIndexRepairMapper.java

Example 10: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
@Override
public void setup(final Reducer.Context context) throws IOException, InterruptedException {
    faunusConf = ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));

    if (!faunusConf.has(LINK_DIRECTION)) {
        Iterator<Entry<String, String>> it = DEFAULT_COMPAT.getContextConfiguration(context).iterator();
        log.error("Broken configuration missing {}", LINK_DIRECTION);
        log.error("---- Start config dump ----");
        while (it.hasNext()) {
            Entry<String,String> ent = it.next();
            log.error("k:{} -> v:{}", ent.getKey(), ent.getValue());
        }
        log.error("---- End config dump   ----");
        throw new NullPointerException();
    }
    direction = faunusConf.get(LINK_DIRECTION).opposite();
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 18 · Source: LinkMapReduce.java
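Example 10 shows a useful fail-fast pattern: when a required key (LINK_DIRECTION) is missing, dump every configuration entry to the log before throwing, so a broken job config can be diagnosed from the task logs. The dependency-free sketch below restates that pattern with plain maps; names are illustrative, and where the original throws a bare NullPointerException this sketch throws a more descriptive exception instead.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Fail-fast required-key check, modeled on Example 10's setup():
 * if the key is absent, dump all entries for diagnosis, then throw.
 * Names are illustrative, not Titan's actual API.
 */
public class RequiredKeyCheck {
    public static String require(Map<String, String> conf, String key) {
        if (!conf.containsKey(key)) {
            System.err.println("Broken configuration missing " + key);
            System.err.println("---- Start config dump ----");
            for (Map.Entry<String, String> e : conf.entrySet())
                System.err.println("k:" + e.getKey() + " -> v:" + e.getValue());
            System.err.println("---- End config dump   ----");
            throw new IllegalStateException("missing required key: " + key);
        }
        return conf.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("link.direction", "OUT");
        System.out.println(require(conf, "link.direction"));
        // prints: OUT
    }
}
```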

Example 11: testPropertySortingOnInteger

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testPropertySortingOnInteger() throws Exception {
    Configuration config = ValueGroupCountMapReduce.createConfiguration(Vertex.class, "age", IntWritable.class);
    this.mapReduceDriver.withConfiguration(config);

    Map<Long, FaunusVertex> vertices = new HashMap<Long, FaunusVertex>();
    for (long i = 0; i < 15; i++) {
        FaunusVertex v = new FaunusVertex(new ModifiableHadoopConfiguration(), i);
        v.setProperty("age", i);
        vertices.put(i, v);
        v.startPath();
    }
    final List<Pair<IntWritable, LongWritable>> results = runWithGraphNoIndex(vertices, mapReduceDriver);
    for (int i = 0; i < results.size(); i++) {
        assertEquals(results.get(i).getSecond().get(), 1L);
        assertEquals(results.get(i).getFirst().get(), i);
    }
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 18 · Source: ValueGroupCountMapReduceTest.java

Example 12: setup

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
@Override
public void setup(final Mapper.Context context) throws IOException, InterruptedException {
    Configuration hc = DEFAULT_COMPAT.getContextConfiguration(context);
    ModifiableHadoopConfiguration titanConf = ModifiableHadoopConfiguration.of(hc);
    try {
        this.mapSpillOver = titanConf.get(PIPELINE_MAP_SPILL_OVER);
        final String keyClosureString = hc.get(KEY_CLOSURE, null);
        if (null == keyClosureString)
            this.keyClosure = null;
        else
            this.keyClosure = (Closure) engine.eval(keyClosureString);

        final String valueClosureString = hc.get(VALUE_CLOSURE, null);
        if (null == valueClosureString)
            this.valueClosure = null;
        else
            this.valueClosure = (Closure) engine.eval(valueClosureString);

    } catch (final ScriptException e) {
        throw new IOException(e.getMessage(), e);
    }
    this.isVertex = hc.getClass(CLASS, Element.class, Element.class).equals(Vertex.class);
    this.map = new CounterMap<Object>();
    this.outputs = new SafeMapperOutputs(context);
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 26 · Source: GroupCountMapReduce.java

Example 13: testBasicSerialization

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testBasicSerialization() throws IOException {
    FaunusVertex vertex1 = new FaunusVertex(new ModifiableHadoopConfiguration(), 10);
    FaunusVertex vertex2 = new FaunusVertex(new ModifiableHadoopConfiguration(), Long.MAX_VALUE);

    ByteArrayOutputStream bytes1 = new ByteArrayOutputStream();
    vertex1.write(new DataOutputStream(bytes1));
    assertEquals(bytes1.size(), 8);
    // 1 long id + 1 variable int paths + 1 short properties +  2 vinteger edge types (2)
    // ? + 1 + 1 + 2 + 2 + 2 = 11 bytes + 1 byte long id

    ByteArrayOutputStream bytes2 = new ByteArrayOutputStream();
    vertex2.write(new DataOutputStream(bytes2));
    assertEquals(bytes2.size(), 24);
    // 1 long id + 1 int paths + 1 short properties + 2 vinteger edge types (2)
    // ? + 1 + 1 + 2 + 2 + 2 = 11 bytes + 9 byte long id

    final Long id1 = WritableUtils.readVLong(new DataInputStream(new ByteArrayInputStream(bytes1.toByteArray())));
    final Long id2 = WritableUtils.readVLong(new DataInputStream(new ByteArrayInputStream(bytes2.toByteArray())));

    assertEquals(id1, new Long(10l));
    assertEquals(id2, new Long(Long.MAX_VALUE));
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 23 · Source: FaunusElementTest.java
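The size comments in Example 13 ("1 byte long id" for vertex1 versus "9 byte long id" for vertex2) follow from Hadoop's zero-compressed VLong encoding used by WritableUtils.writeVLong: a value in [-112, 127] fits in a single byte, while anything larger takes one length byte plus up to eight data bytes, so id 10 encodes in 1 byte and Long.MAX_VALUE in 9. Below is a self-contained re-implementation of just the size calculation, written to mirror my reading of WritableUtils.getVIntSize, so it should be treated as a sketch rather than the canonical source.

```java
/**
 * Size in bytes of a value under Hadoop's zero-compressed VLong
 * encoding, mirroring org.apache.hadoop.io.WritableUtils.getVIntSize.
 */
public class VLongSize {
    public static int size(long i) {
        if (i >= -112 && i <= 127) return 1;   // single-byte fast path
        if (i < 0) i = ~i;                     // encode magnitude of negatives
        int dataBits = 64 - Long.numberOfLeadingZeros(i);
        return 1 + (dataBits + 7) / 8;         // length byte + data bytes
    }

    public static void main(String[] args) {
        System.out.println(size(10));             // prints: 1
        System.out.println(size(Long.MAX_VALUE)); // prints: 9
    }
}
```

This is also why the test reads back the ids with WritableUtils.readVLong: the first field of the serialized vertex is exactly this variable-length id.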

Example 14: testRecordWriter

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testRecordWriter() throws Exception {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream stream = new DataOutputStream(new PrintStream(baos));
    Configuration conf = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(conf);
    faunusConf.getOutputConf(ROOT_NS).set(SCRIPT_FILE, ScriptRecordWriterTest.class.getResource("ScriptOutput.groovy").getFile());
    ScriptRecordWriter writer = new ScriptRecordWriter(stream, conf);
    Map<Long, FaunusVertex> graph = generateGraph(ExampleGraph.TINKERGRAPH);
    for (FaunusVertex vertex : graph.values()) {
        writer.write(NullWritable.get(), vertex);
    }
    String output = baos.toString();
    String[] rows = output.split("\n");
    int vertices = 0;
    for (String row : rows) {
        vertices++;
        assertTrue(row.contains(":"));
        if (row.startsWith("2") || row.startsWith("3") || row.startsWith("5"))
            assertEquals(row.length(), 3);
        else
            assertTrue(row.length() > 3);
    }
    assertEquals(vertices, graph.size());

}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 26 · Source: ScriptRecordWriterTest.java

Example 15: testRecordReader

import com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration; // import the required package/class
public void testRecordReader() throws Exception {
    final Configuration conf = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(conf);
    faunusConf.getInputConf(ROOT_NS).set(SCRIPT_FILE, ScriptRecordReaderTest.class.getResource("ScriptInput.groovy").getFile());
    ScriptRecordReader reader = new ScriptRecordReader(VertexQueryFilter.create(new EmptyConfiguration()), HadoopCompatLoader.getCompat().newTask(conf, new TaskAttemptID()));
    reader.initialize(new FileSplit(new Path(ScriptRecordReaderTest.class.getResource("graph-of-the-gods.id").toURI()), 0, Long.MAX_VALUE, new String[]{}),
            HadoopCompatLoader.getCompat().newTask(conf, new TaskAttemptID()));
    int counter = 0;
    while (reader.nextKeyValue()) {
        assertEquals(reader.getCurrentKey(), NullWritable.get());
        FaunusVertex vertex = reader.getCurrentValue();
        long id = vertex.getLongId();
        assertEquals(id, counter++);
        assertEquals(vertex.getPropertyKeys().size(), 0);
        assertEquals(count(vertex.getEdges(Direction.IN)), 0);
        if (id == 1 || id == 2 || id == 3 || id == 7 || id == 11) {
            assertTrue(count(vertex.getEdges(Direction.OUT)) > 0);
        } else {
            assertEquals(count(vertex.getEdges(Direction.OUT)), 0);
        }
    }
    assertEquals(counter, 12);
    reader.close();
}
 
Developer: graben1437 · Project: titan0.5.4-hbase1.1.1-custom · Lines: 25 · Source: ScriptRecordReaderTest.java


Note: the com.thinkaurelius.titan.hadoop.config.ModifiableHadoopConfiguration examples in this article were compiled by 纯净天空 from open-source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets were selected from open-source projects and remain the copyright of their original authors; consult each project's License before redistributing or reusing them. Do not reproduce this article without permission.