

Java TitanHadoopConfiguration Class Code Examples

This article collects typical usage examples of the Java class com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration. If you are wondering what the TitanHadoopConfiguration class is for, how to use it, or what real-world usage looks like, the curated class code examples below may help.


The TitanHadoopConfiguration class belongs to the com.thinkaurelius.titan.hadoop.config package. A total of 15 code examples of the class are shown below, sorted by popularity by default.

Example 1: copyInputKeys

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
private static void copyInputKeys(org.apache.hadoop.conf.Configuration hadoopConf, org.apache.commons.configuration.Configuration source) {
    // Copy IndexUpdateJob settings into the hadoop-backed cfg
    Iterator<String> keyIter = source.getKeys();
    while (keyIter.hasNext()) {
        String key = keyIter.next();
        ConfigElement.PathIdentifier pid;
        try {
            pid = ConfigElement.parse(ROOT_NS, key);
        } catch (RuntimeException e) {
            log.debug("[inputkeys] Skipping {}", key, e);
            continue;
        }

        if (!pid.element.isOption())
            continue;

        String k = ConfigElement.getPath(TitanHadoopConfiguration.GRAPH_CONFIG_KEYS, true) + "." + key;
        String v = source.getProperty(key).toString();

        hadoopConf.set(k, v);
        log.debug("[inputkeys] Set {}={}", k, v);
    }
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 24, Source: MapReduceIndexManagement.java
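
For context, here is a minimal sketch of the key-prefixing idiom used in Example 1, reusing the same imports. The option name and value are hypothetical placeholders, not taken from the example:

// Hypothetical illustration: a Titan option such as "storage.backend" lands in
// the Hadoop Configuration under the GRAPH_CONFIG_KEYS path prefix.
org.apache.hadoop.conf.Configuration hadoopConf = new org.apache.hadoop.conf.Configuration();
String prefix = ConfigElement.getPath(TitanHadoopConfiguration.GRAPH_CONFIG_KEYS, true) + ".";
hadoopConf.set(prefix + "storage.backend", "cassandrathrift"); // placeholder option/value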

Example 2: setConf

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
@Override
public void setConf(final Configuration config) {
    super.setConf(config);

    // Copy some Titan configuration keys to the Hadoop Configuration keys used by Cassandra's ColumnFamilyInputFormat
    ConfigHelper.setInputInitialAddress(config, titanConf.get(GraphDatabaseConfiguration.STORAGE_HOSTS)[0]);
    if (titanConf.has(GraphDatabaseConfiguration.STORAGE_PORT))
        ConfigHelper.setInputRpcPort(config, String.valueOf(titanConf.get(GraphDatabaseConfiguration.STORAGE_PORT)));
    if (titanConf.has(GraphDatabaseConfiguration.AUTH_USERNAME))
        ConfigHelper.setInputKeyspaceUserName(config, titanConf.get(GraphDatabaseConfiguration.AUTH_USERNAME));
    if (titanConf.has(GraphDatabaseConfiguration.AUTH_PASSWORD))
        ConfigHelper.setInputKeyspacePassword(config, titanConf.get(GraphDatabaseConfiguration.AUTH_PASSWORD));

    // Copy keyspace, force the CF setting to edgestore, honor widerows when set
    final boolean wideRows = config.getBoolean(INPUT_WIDEROWS_CONFIG, false);
    // Use the setInputColumnFamily overload that includes a widerows argument; using the overload without this argument forces it false
    ConfigHelper.setInputColumnFamily(config, titanConf.get(AbstractCassandraStoreManager.CASSANDRA_KEYSPACE),
            mrConf.get(TitanHadoopConfiguration.COLUMN_FAMILY_NAME), wideRows);
    log.debug("Set keyspace: {}", titanConf.get(AbstractCassandraStoreManager.CASSANDRA_KEYSPACE));

    // Set the column slice bounds via Faunus's vertex query filter
    final SlicePredicate predicate = new SlicePredicate();
    final int rangeBatchSize = config.getInt(RANGE_BATCH_SIZE_CONFIG, Integer.MAX_VALUE);
    predicate.setSlice_range(getSliceRange(TitanHadoopSetupCommon.DEFAULT_SLICE_QUERY, rangeBatchSize)); // TODO stop slicing the whole row
    ConfigHelper.setInputSlicePredicate(config, predicate);
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 27, Source: CassandraBinaryInputFormat.java

Example 3: runScanJob

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
public static ScanMetrics runScanJob(ScanJob scanJob, Configuration conf, String confRootField,
                                 org.apache.hadoop.conf.Configuration hadoopConf,
                                 Class<? extends InputFormat> inputFormat)
        throws IOException, InterruptedException, ClassNotFoundException {

    ModifiableHadoopConfiguration scanConf =
            ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);

    tryToLoadClassByName(scanJob);

    // Set the ScanJob class
    scanConf.set(TitanHadoopConfiguration.SCAN_JOB_CLASS, scanJob.getClass().getName());

    String jobName = HadoopScanMapper.class.getSimpleName() + "[" + scanJob + "]";

    return runJob(conf, confRootField, hadoopConf, inputFormat, jobName, HadoopScanMapper.class);
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 18, Source: HadoopScanRunner.java

Example 4: runVertexScanJob

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
public static ScanMetrics runVertexScanJob(VertexScanJob vertexScanJob, Configuration conf, String confRootField,
                                     org.apache.hadoop.conf.Configuration hadoopConf,
                                     Class<? extends InputFormat> inputFormat)
        throws IOException, InterruptedException, ClassNotFoundException {

    ModifiableHadoopConfiguration scanConf =
            ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);

    tryToLoadClassByName(vertexScanJob);

    // Set the VertexScanJob class
    scanConf.set(TitanHadoopConfiguration.SCAN_JOB_CLASS, vertexScanJob.getClass().getName());

    String jobName = HadoopScanMapper.class.getSimpleName() + "[" + vertexScanJob + "]";

    return runJob(conf, confRootField, hadoopConf, inputFormat, jobName, HadoopVertexScanMapper.class);
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 18, Source: HadoopScanRunner.java

Example 5: run

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
public ScanMetrics run() throws InterruptedException, IOException, ClassNotFoundException {

    org.apache.hadoop.conf.Configuration hadoopConf = null != baseHadoopConf ?
            baseHadoopConf : new org.apache.hadoop.conf.Configuration();

    if (null != titanConf) {
        String prefix = ConfigElement.getPath(TitanHadoopConfiguration.GRAPH_CONFIG_KEYS, true) + ".";
        for (String k : titanConf.getKeys("")) {
            hadoopConf.set(prefix + k, titanConf.get(k, Object.class).toString());
            log.debug("Set: {}={}", prefix + k, titanConf.get(k, Object.class).toString());
        }
    }
    Preconditions.checkNotNull(hadoopConf);

    if (null != scanJob) {
        return HadoopScanRunner.runScanJob(scanJob, scanJobConf, scanJobConfRoot, hadoopConf, HBaseBinaryInputFormat.class);
    } else {
        return HadoopScanRunner.runVertexScanJob(vertexScanJob, scanJobConf, scanJobConfRoot, hadoopConf, HBaseBinaryInputFormat.class);
    }
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 21, Source: HBaseHadoopScanRunner.java

Example 6: setup

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);

    // Catch any exceptions, log a warning, and allow the subclass to continue even if schema loading failed
    try {
        ModifiableHadoopConfiguration faunusConf =
                ModifiableHadoopConfiguration.of(DEFAULT_COMPAT.getContextConfiguration(context));

        if (faunusConf.get(TitanHadoopConfiguration.OUTPUT_TITAN_TYPE_CHECKING)) {
            TitanGraph g = TitanFactory.open(faunusConf.getOutputConf());
            FaunusSchemaManager.getTypeManager(null).setSchemaProvider(new SchemaContainer(g));
            log.info("Loaded schema associated with {}", g);
        } else {
            log.debug("Titan schema checking is disabled");
        }
    } catch (Throwable t) {
        log.warn("Unable to load Titan schema", t);
    }
}
 
Developer: graben1437, Project: titan0.5.4-hbase1.1.1-custom, Lines: 21, Source: TitanSchemaAwareMapper.java

Example 7: setup

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
@Override
public void setup(
        final Mapper<NullWritable, FaunusVertex, NullWritable, NullWritable>.Context context) throws IOException {
    Configuration hadoopConf = DEFAULT_COMPAT.getContextConfiguration(context);
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(hadoopConf);
    BasicConfiguration titanConf = faunusConf.getOutputConf();
    indexName = faunusConf.get(TitanHadoopConfiguration.INDEX_NAME);
    indexType = faunusConf.get(TitanHadoopConfiguration.INDEX_TYPE);

    try {
        Preconditions.checkNotNull(indexName, "Need to provide at least an index name for re-index job");
        log.info("Read index information: name={} type={}", indexName, indexType);
        graph = (StandardTitanGraph)TitanFactory.open(titanConf);
        SchemaContainer schema = new SchemaContainer(graph);
        FaunusSchemaManager typeManager = FaunusSchemaManager.getTypeManager(titanConf);
        typeManager.setSchemaProvider(schema);
        log.info("Opened graph {}", graph);
        mgmt = (ManagementSystem) graph.getManagementSystem();
        validateIndexStatus();
    } catch (final Exception e) {
        if (null != mgmt && mgmt.isOpen())
            mgmt.rollback();
        DEFAULT_COMPAT.incrementContextCounter(context, Counters.FAILED_TRANSACTIONS, 1L);
        throw new IOException(e.getMessage(), e);
    }
}
 
Developer: graben1437, Project: titan0.5.4-hbase1.1.1-custom, Lines: 27, Source: TitanIndexRepairMapper.java
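
For reference, a hedged sketch of how the two options read back in Example 7 might be set on the Faunus configuration before the repair job runs. The index name and type values are hypothetical, and the exact semantics of INDEX_TYPE depend on the Titan version:

// Hypothetical values; the mapper in Example 7 reads these via faunusConf.get(...).
ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(hadoopConf);
faunusConf.set(TitanHadoopConfiguration.INDEX_NAME, "byName");   // name of the index to repair (placeholder)
faunusConf.set(TitanHadoopConfiguration.INDEX_TYPE, "theType");  // optional type qualifier (placeholder)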

Example 8: testRecordReaderWithVertexQueryFilterDirection

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
public void testRecordReaderWithVertexQueryFilterDirection() throws Exception {
    Configuration config = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(config);
    faunusConf.set(TitanHadoopConfiguration.INPUT_VERTEX_QUERY_FILTER, "v.query().direction(OUT)");
    GraphSONRecordReader reader = new GraphSONRecordReader(VertexQueryFilter.create(config));
    reader.initialize(new FileSplit(new Path(GraphSONRecordReaderTest.class.getResource("graph-of-the-gods.json").toURI()), 0, Long.MAX_VALUE, new String[]{}),
            HadoopCompatLoader.getCompat().newTask(new Configuration(), new TaskAttemptID()));
    int counter = 0;
    while (reader.nextKeyValue()) {
        counter++;
        assertEquals(reader.getCurrentKey(), NullWritable.get());
        FaunusVertex vertex = reader.getCurrentValue();
        assertEquals(Iterables.size(vertex.getEdges(Direction.IN)), 0);
    }
    assertEquals(counter, 12);
    reader.close();
}
 
Developer: graben1437, Project: titan0.5.4-hbase1.1.1-custom, Lines: 18, Source: GraphSONRecordReaderTest.java

Example 9: testRecordReaderWithVertexQueryFilterLimit

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
public void testRecordReaderWithVertexQueryFilterLimit() throws Exception {
    Configuration config = new Configuration();
    ModifiableHadoopConfiguration faunusConf = ModifiableHadoopConfiguration.of(config);
    faunusConf.set(TitanHadoopConfiguration.INPUT_VERTEX_QUERY_FILTER, "v.query().limit(0)");
    GraphSONRecordReader reader = new GraphSONRecordReader(VertexQueryFilter.create(config));
    reader.initialize(new FileSplit(new Path(GraphSONRecordReaderTest.class.getResource("graph-of-the-gods.json").toURI()), 0, Long.MAX_VALUE, new String[]{}),
            HadoopCompatLoader.getCompat().newTask(new Configuration(), new TaskAttemptID()));
    int counter = 0;
    while (reader.nextKeyValue()) {
        counter++;
        assertEquals(reader.getCurrentKey(), NullWritable.get());
        FaunusVertex vertex = reader.getCurrentValue();
        assertEquals(Iterables.size(vertex.getEdges(Direction.BOTH)), 0);
    }
    assertEquals(counter, 12);
    reader.close();
}
 
Developer: graben1437, Project: titan0.5.4-hbase1.1.1-custom, Lines: 18, Source: GraphSONRecordReaderTest.java

Example 10: copyIndexJobKeys

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
private static void copyIndexJobKeys(org.apache.hadoop.conf.Configuration hadoopConf, String indexName, String relationType) {
    hadoopConf.set(ConfigElement.getPath(TitanHadoopConfiguration.SCAN_JOB_CONFIG_KEYS, true) + "." +
                    ConfigElement.getPath(IndexUpdateJob.INDEX_NAME), indexName);

    hadoopConf.set(ConfigElement.getPath(TitanHadoopConfiguration.SCAN_JOB_CONFIG_KEYS, true) + "." +
            ConfigElement.getPath(IndexUpdateJob.INDEX_RELATION_TYPE), relationType);

    hadoopConf.set(ConfigElement.getPath(TitanHadoopConfiguration.SCAN_JOB_CONFIG_KEYS, true) + "." +
            ConfigElement.getPath(GraphDatabaseConfiguration.JOB_START_TIME),
            String.valueOf(System.currentTimeMillis()));
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 12, Source: MapReduceIndexManagement.java

Example 11: setConf

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
@Override
public void setConf(final Configuration config) {
    HadoopPoolsConfigurable.super.setConf(config);
    this.mrConf = ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, config);
    this.hadoopConf = config;
    this.titanConf = mrConf.getTitanGraphConf();
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 8, Source: AbstractBinaryInputFormat.java

Example 12: copyPropertiesToInputAndOutputConf

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
private static void copyPropertiesToInputAndOutputConf(Configuration sink, Properties source) {
    final String prefix = ConfigElement.getPath(TitanHadoopConfiguration.GRAPH_CONFIG_KEYS, true) + ".";
    for (Map.Entry<Object, Object> e : source.entrySet()) {
        String k;
        String v = e.getValue().toString();
        k = prefix + e.getKey().toString();
        sink.set(k, v);
        log.info("Set {}={}", k, v);
    }
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 11, Source: MapReduceIndexJobs.java

Example 13: setup

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    super.setup(context);
    org.apache.hadoop.conf.Configuration hadoopConf = DEFAULT_COMPAT.getContextConfiguration(context);
    ModifiableHadoopConfiguration scanConf = ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);
    job = getJob(scanConf);
    metrics = new HadoopContextScanMetrics(context);
    Configuration graphConf = getTitanConfiguration(context);
    finishSetup(scanConf, graphConf);
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 11, Source: HadoopScanMapper.java

Example 14: getJobConfiguration

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
static Configuration getJobConfiguration(ModifiableHadoopConfiguration scanConf) {
    if (!scanConf.has(TitanHadoopConfiguration.SCAN_JOB_CONFIG_ROOT)) {
        log.debug("No job configuration root provided");
        return null;
    }
    ConfigNamespace jobRoot = getJobRoot(scanConf.get(TitanHadoopConfiguration.SCAN_JOB_CONFIG_ROOT));
    return ModifiableHadoopConfiguration.prefixView(jobRoot, TitanHadoopConfiguration.SCAN_JOB_CONFIG_KEYS,
            scanConf);
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 10, Source: HadoopScanMapper.java

Example 15: runJob

import com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration; // import the required package/class
/**
 * Run a ScanJob on Hadoop MapReduce.
 * <p>
 * The {@code confRootField} parameter must be a string in the format
 * {@code package.package...class#fieldname}, where {@code fieldname} is the
 * name of a public static field on the class specified by the portion of the
 * string before the {@code #}.  The {@code #} itself is just a separator and
 * is discarded.
 * <p>
 * When a MapReduce task process prepares to execute the {@code ScanJob}, it will
 * read the public static field named by {@code confRootField} and cast it to a
 * {@link ConfigNamespace}.  This namespace object becomes the root of a
 * {@link Configuration} that is instantiated, populated with the key-value pairs
 * from the {@code conf} parameter, and then passed into the {@code ScanJob}.
 * <p>
 * This method blocks until the ScanJob completes, then returns the metrics
 * generated by the job during its execution.  It does not time out.
 *
 * @param conf configuration settings for the ScanJob
 * @param confRootField the root of the ScanJob's configuration
 * @param hadoopConf the Configuration passed to the MapReduce Job
 * @param inputFormat the {@code InputFormat<StaticBuffer, Iterable<Entry>>}
 *        that reads (row, columns) pairs out of a Titan edgestore
 * @return metrics generated by the ScanJob
 * @throws IOException if the job fails for any reason
 * @throws ClassNotFoundException if {@code scanJob.getClass()} cannot be loaded
 *         by name or if Hadoop MapReduce's internal job-submission-related reflection fails
 * @throws InterruptedException if interrupted while waiting for the Hadoop
 *         MapReduce job to complete
 */
public static ScanMetrics runJob(Configuration conf, String confRootField,
                                 org.apache.hadoop.conf.Configuration hadoopConf,
                                 Class<? extends InputFormat> inputFormat, String jobName,
                                 Class<? extends Mapper> mapperClass)
        throws IOException, InterruptedException, ClassNotFoundException {

    Preconditions.checkArgument(null != hadoopConf);
    Preconditions.checkArgument(null != inputFormat);

    if (null != conf) {
        Preconditions.checkArgument(null != confRootField,
                "Configuration root field must be provided when configuration instance is provided");
    }

    ModifiableHadoopConfiguration scanConf =
            ModifiableHadoopConfiguration.of(TitanHadoopConfiguration.MAPRED_NS, hadoopConf);

    if (null != confRootField) {
        // Set the scanjob configuration root
        scanConf.set(TitanHadoopConfiguration.SCAN_JOB_CONFIG_ROOT, confRootField);

        // Instantiate scanjob configuration root
        ConfigNamespace confRoot = HadoopScanMapper.getJobRoot(confRootField);

        // Create writable view of scanjob configuration atop the Hadoop Configuration instance, where all keys are prefixed with SCAN_JOB_CONFIG_KEYS
        ModifiableConfiguration hadoopJobConf = ModifiableHadoopConfiguration.prefixView(confRoot,
                TitanHadoopConfiguration.SCAN_JOB_CONFIG_KEYS, scanConf);

        // Copy scanjob settings from the Titan Configuration instance to the Hadoop Configuration instance
        Map<String, Object> jobConfMap = conf.getSubset(confRoot);
        for (Map.Entry<String, Object> jobConfEntry : jobConfMap.entrySet()) {
            hadoopJobConf.set((ConfigOption) ConfigElement.parse(confRoot, jobConfEntry.getKey()).element, jobConfEntry.getValue());
        }
    }

    return runJob(scanConf.getHadoopConfiguration(), inputFormat, jobName, mapperClass);
}
 
Developer: graben1437, Project: titan1withtp3.1, Lines: 68, Source: HadoopScanRunner.java
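
As a hedged illustration of the confRootField format described in the javadoc above: the string names a public static ConfigNamespace field in package...class#fieldname form. The class, field, and constructor arguments below are hypothetical, for illustration only:

package com.example.scan; // hypothetical package

import com.thinkaurelius.titan.diskstorage.configuration.ConfigNamespace;

public class MyScanJobConf {
    // A public static field that can serve as a ScanJob configuration root.
    // (Constructor arguments are a sketch; check the ConfigNamespace API of
    // your Titan version for the exact signature.)
    public static final ConfigNamespace MY_SCAN_NS =
            new ConfigNamespace(null, "my-scan-job", "Options for a custom ScanJob");
}

// The corresponding confRootField string, in package...class#fieldname form:
//   "com.example.scan.MyScanJobConf#MY_SCAN_NS"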


Note: The com.thinkaurelius.titan.hadoop.config.TitanHadoopConfiguration class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. For distribution and use, refer to the corresponding project's license. Do not reproduce without permission.