

Java TableMapReduceUtil.initCredentialsForCluster Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentialsForCluster. If you are wondering what TableMapReduceUtil.initCredentialsForCluster does, or how and when to call it, the selected code examples below may help. You can also look further into other usage examples of the enclosing class, org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.


Three code examples of the TableMapReduceUtil.initCredentialsForCluster method are shown below, sorted by popularity by default.
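Before the full examples, here is a minimal, self-contained sketch of how the method is typically called when setting up a MapReduce job that must authenticate against a second (peer) HBase cluster. The cluster key below is a made-up placeholder, and the String overload shown matches the HBase versions used in Examples 2 and 3; Example 1 uses the Configuration-based overload instead.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class InitCredentialsForClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "init-credentials-for-cluster-sketch");

    // Delegation token for the local cluster the job reads from.
    TableMapReduceUtil.initCredentials(job);

    // Hypothetical peer cluster key: ZooKeeper quorum, client port, znode parent.
    String peerClusterKey = "peer-zk1,peer-zk2,peer-zk3:2181:/hbase";

    // Obtain an authentication token from the (possibly secure) peer cluster
    // and add it to the job's credentials so tasks can talk to that cluster.
    TableMapReduceUtil.initCredentialsForCluster(job, peerClusterKey);
  }
}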

Example 1: createSubmittableJob

import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil; // import the class this method belongs to
/**
 * Sets up the actual job.
 *
 * @param conf  The current configuration.
 * @param args  The command line parameters.
 * @return The newly created job.
 * @throws java.io.IOException When setting up the job fails.
 */
public static Job createSubmittableJob(Configuration conf, String[] args)
throws IOException {
  if (!doCommandLine(args)) {
    return null;
  }
  if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY,
      HConstants.REPLICATION_ENABLE_DEFAULT)) {
    throw new IOException("Replication needs to be enabled to verify it.");
  }
  conf.set(NAME+".peerId", peerId);
  conf.set(NAME+".tableName", tableName);
  conf.setLong(NAME+".startTime", startTime);
  conf.setLong(NAME+".endTime", endTime);
  if (families != null) {
    conf.set(NAME+".families", families);
  }

  Pair<ReplicationPeerConfig, Configuration> peerConfigPair = getPeerQuorumConfig(conf);
  ReplicationPeerConfig peerConfig = peerConfigPair.getFirst();
  String peerQuorumAddress = peerConfig.getClusterKey();
  LOG.info("Peer Quorum Address: " + peerQuorumAddress + ", Peer Configuration: " +
      peerConfig.getConfiguration());
  conf.set(NAME + ".peerQuorumAddress", peerQuorumAddress);
  HBaseConfiguration.setWithPrefix(conf, PEER_CONFIG_PREFIX,
      peerConfig.getConfiguration().entrySet());

  conf.setInt(NAME + ".versions", versions);
  LOG.info("Number of version: " + versions);

  Job job = new Job(conf, NAME + "_" + tableName);
  job.setJarByClass(VerifyReplication.class);

  Scan scan = new Scan();
  scan.setTimeRange(startTime, endTime);
  if (versions >= 0) {
    scan.setMaxVersions(versions);
    LOG.info("Number of versions set to " + versions);
  }
  if(families != null) {
    String[] fams = families.split(",");
    for(String fam : fams) {
      scan.addFamily(Bytes.toBytes(fam));
    }
  }
  TableMapReduceUtil.initTableMapperJob(tableName, scan,
      Verifier.class, null, null, job);

  Configuration peerClusterConf = peerConfigPair.getSecond();
  // Obtain the auth token from peer cluster
  TableMapReduceUtil.initCredentialsForCluster(job, peerClusterConf);

  job.setOutputFormatClass(NullOutputFormat.class);
  job.setNumReduceTasks(0);
  return job;
}
 
Developer: fengchen8086, Project: ditb, Lines: 64, Source: VerifyReplication.java
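All three examples only build the Job; a minimal driver sketch for submitting it is shown below. The actual VerifyReplication entry point wraps this in Hadoop's Tool/ToolRunner machinery, so treat this as an illustration rather than the upstream code.

// Driver sketch: build the job with the method above and block until it finishes.
public static void main(String[] args) throws Exception {
  Configuration conf = HBaseConfiguration.create();
  Job job = createSubmittableJob(conf, args);
  if (job != null) {
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}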

Example 2: createSubmittableJob

import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil; // import the class this method belongs to
/**
 * Sets up the actual job.
 *
 * @param conf  The current configuration.
 * @param args  The command line parameters.
 * @return The newly created job.
 * @throws java.io.IOException When setting up the job fails.
 */
public static Job createSubmittableJob(Configuration conf, String[] args)
throws IOException {
  if (!doCommandLine(args)) {
    return null;
  }
  if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY,
      HConstants.REPLICATION_ENABLE_DEFAULT)) {
    throw new IOException("Replication needs to be enabled to verify it.");
  }
  conf.set(NAME+".peerId", peerId);
  conf.set(NAME+".tableName", tableName);
  conf.setLong(NAME+".startTime", startTime);
  conf.setLong(NAME+".endTime", endTime);
  if (families != null) {
    conf.set(NAME+".families", families);
  }

  String peerQuorumAddress = getPeerQuorumAddress(conf);
  conf.set(NAME + ".peerQuorumAddress", peerQuorumAddress);
  LOG.info("Peer Quorum Address: " + peerQuorumAddress);

  Job job = new Job(conf, NAME + "_" + tableName);
  job.setJarByClass(VerifyReplication.class);

  Scan scan = new Scan();
  scan.setTimeRange(startTime, endTime);
  if (versions >= 0) {
    scan.setMaxVersions(versions);
  }
  if(families != null) {
    String[] fams = families.split(",");
    for(String fam : fams) {
      scan.addFamily(Bytes.toBytes(fam));
    }
  }
  TableMapReduceUtil.initTableMapperJob(tableName, scan,
      Verifier.class, null, null, job);

  // Obtain the auth token from peer cluster
  TableMapReduceUtil.initCredentialsForCluster(job, peerQuorumAddress);

  job.setOutputFormatClass(NullOutputFormat.class);
  job.setNumReduceTasks(0);
  return job;
}
 
Developer: grokcoder, Project: pbase, Lines: 54, Source: VerifyReplication.java
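Unlike Example 1, this variant hands the peer cluster key (the ZooKeeper quorum address string) straight to initCredentialsForCluster. A rough sketch of the equivalent two-step form follows, assuming an HBase version that provides HBaseConfiguration.createClusterConf (that helper name is an assumption here, not taken from the example above):

// Sketch only: resolve a cluster key such as "zk1,zk2,zk3:2181:/hbase" into a
// peer Configuration and pass it to the Configuration-based overload, which is
// roughly what Example 1 achieves via getPeerQuorumConfig.
static void initPeerCredentials(Job job, String peerQuorumAddress) throws IOException {
  Configuration peerConf =
      HBaseConfiguration.createClusterConf(job.getConfiguration(), peerQuorumAddress);
  TableMapReduceUtil.initCredentialsForCluster(job, peerConf);
}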

Example 3: createSubmittableJob

import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil; // import the class this method belongs to
/**
 * Sets up the actual job.
 *
 * @param conf  The current configuration.
 * @param args  The command line parameters.
 * @return The newly created job.
 * @throws java.io.IOException When setting up the job fails.
 */
public static Job createSubmittableJob(Configuration conf, String[] args)
throws IOException {
  if (!doCommandLine(args)) {
    return null;
  }
  if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY,
      HConstants.REPLICATION_ENABLE_DEFAULT)) {
    throw new IOException("Replication needs to be enabled to verify it.");
  }
  conf.set(NAME+".peerId", peerId);
  conf.set(NAME+".tableName", tableName);
  conf.setLong(NAME+".startTime", startTime);
  conf.setLong(NAME+".endTime", endTime);
  if (families != null) {
    conf.set(NAME+".families", families);
  }

  String peerQuorumAddress = getPeerQuorumAddress(conf);
  conf.set(NAME + ".peerQuorumAddress", peerQuorumAddress);
  LOG.info("Peer Quorum Address: " + peerQuorumAddress);

  Job job = new Job(conf, NAME + "_" + tableName);
  job.setJarByClass(VerifyReplication.class);

  Scan scan = new Scan();
  scan.setTimeRange(startTime, endTime);
  if(families != null) {
    String[] fams = families.split(",");
    for(String fam : fams) {
      scan.addFamily(Bytes.toBytes(fam));
    }
  }
  TableMapReduceUtil.initTableMapperJob(tableName, scan,
      Verifier.class, null, null, job);

  // Obtain the auth token from peer cluster
  TableMapReduceUtil.initCredentialsForCluster(job, peerQuorumAddress);

  job.setOutputFormatClass(NullOutputFormat.class);
  job.setNumReduceTasks(0);
  return job;
}
 
Developer: tenggyut, Project: HIndex, Lines: 51, Source: VerifyReplication.java


Note: the org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentialsForCluster examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective developers, and the copyright of the source code remains with the original authors. Please consult the corresponding project's license before distributing or using the code, and do not republish this article without permission.