

Java HConnectionManager.execute Method Code Examples

This article collects typical code examples of the Java method org.apache.hadoop.hbase.client.HConnectionManager.execute. If you have been wondering what HConnectionManager.execute is for, how to call it, or what real usage looks like, the curated examples below may help. You can also explore further usage examples of its enclosing class, org.apache.hadoop.hbase.client.HConnectionManager.


The following presents 12 code examples of HConnectionManager.execute, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
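All twelve examples share one pattern: HConnectionManager.execute accepts an HConnectable<T> callback, opens (or reuses) an HConnection that lives only for the duration of the call, invokes the callback's connect method, releases the connection, and returns the callback's result. The sketch below is a minimal, self-contained illustration of that pattern using the isMasterRunning check that several examples share. It assumes an HBase version (such as the ditb or HIndex projects below) where HConnectable is a top-level class in org.apache.hadoop.hbase.client; in older 0.94-era builds it is a nested class of HConnectionManager.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnectable;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class ExecuteSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // execute() manages the connection's lifecycle around the callback
    // and passes the callback's return value through to the caller.
    boolean masterRunning = HConnectionManager.execute(
        new HConnectable<Boolean>(conf) {
          @Override
          public Boolean connect(HConnection connection) throws IOException {
            return connection.isMasterRunning();
          }
        });
    System.out.println("HBase master running: " + masterRunning);
  }
}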

Example 1: loadDisabledTables

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Load the list of disabled tables in ZK into local set.
 * @throws ZooKeeperConnectionException
 * @throws IOException
 */
private void loadDisabledTables()
throws ZooKeeperConnectionException, IOException {
  HConnectionManager.execute(new HConnectable<Void>(getConf()) {
    @Override
    public Void connect(HConnection connection) throws IOException {
      ZooKeeperWatcher zkw = createZooKeeperWatcher();
      try {
        for (TableName tableName :
            ZKTableStateClientSideReader.getDisabledOrDisablingTables(zkw)) {
          disabledTables.add(tableName);
        }
      } catch (KeeperException ke) {
        throw new IOException(ke);
      } catch (InterruptedException e) {
        throw new InterruptedIOException();
      } finally {
        zkw.close();
      }
      return null;
    }
  });
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 28, Source: HBaseFsck.java

Example 2: loadDisabledTables

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Load the list of disabled tables in ZK into local set.
 * @throws ZooKeeperConnectionException
 * @throws IOException
 */
private void loadDisabledTables()
throws ZooKeeperConnectionException, IOException {
  HConnectionManager.execute(new HConnectable<Void>(getConf()) {
    @Override
    public Void connect(HConnection connection) throws IOException {
      ZooKeeperWatcher zkw = connection.getZooKeeperWatcher();
      try {
        for (String tableName : ZKTableReadOnly.getDisabledOrDisablingTables(zkw)) {
          disabledTables.add(Bytes.toBytes(tableName));
        }
      } catch (KeeperException ke) {
        throw new IOException(ke);
      }
      return null;
    }
  });
}
 
Developer ID: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 23, Source: HBaseFsck.java

Example 3: loadDisabledTables

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Load the list of disabled tables in ZK into local set.
 * @throws ZooKeeperConnectionException
 * @throws IOException
 */
private void loadDisabledTables()
throws ZooKeeperConnectionException, IOException {
  HConnectionManager.execute(new HConnectable<Void>(getConf()) {
    @Override
    public Void connect(HConnection connection) throws IOException {
      ZooKeeperWatcher zkw = createZooKeeperWatcher();
      try {
        for (TableName tableName :
            ZKTableReadOnly.getDisabledOrDisablingTables(zkw)) {
          disabledTables.add(tableName);
        }
      } catch (KeeperException ke) {
        throw new IOException(ke);
      } finally {
        zkw.close();
      }
      return null;
    }
  });
}
 
Developer ID: tenggyut, Project: HIndex, Lines: 26, Source: HBaseFsck.java

Example 4: loadDisabledTables

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Load the list of disabled tables in ZK into local set.
 * @throws ZooKeeperConnectionException
 * @throws IOException
 */
private void loadDisabledTables()
throws ZooKeeperConnectionException, IOException {
  HConnectionManager.execute(new HConnectable<Void>(conf) {
    @Override
    public Void connect(HConnection connection) throws IOException {
      ZooKeeperWatcher zkw = connection.getZooKeeperWatcher();
      try {
        for (String tableName : ZKTable.getDisabledOrDisablingTables(zkw)) {
          disabledTables.add(Bytes.toBytes(tableName));
        }
      } catch (KeeperException ke) {
        throw new IOException(ke);
      }
      return null;
    }
  });
}
 
Developer ID: lifeng5042, Project: RStore, Lines: 23, Source: HBaseFsck.java

Example 5: merge

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Scans the table and merges two adjacent regions if they are small. This
 * only happens when a lot of rows are deleted.
 *
 * When merging the hbase:meta region, the HBase instance must be offline.
 * When merging a normal table, the HBase instance must be online, but the
 * table must be disabled.
 *
 * @param conf        - configuration object for HBase
 * @param fs          - FileSystem where regions reside
 * @param tableName   - Table to be compacted
 * @param testMasterRunning True if we are to verify master is down before
 * running merge
 * @throws IOException
 */
public static void merge(Configuration conf, FileSystem fs,
  final TableName tableName, final boolean testMasterRunning)
throws IOException {
  boolean masterIsRunning = false;
  if (testMasterRunning) {
    masterIsRunning = HConnectionManager
        .execute(new HConnectable<Boolean>(conf) {
          @Override
          public Boolean connect(HConnection connection) throws IOException {
            return connection.isMasterRunning();
          }
        });
  }
  if (tableName.equals(TableName.META_TABLE_NAME)) {
    if (masterIsRunning) {
      throw new IllegalStateException(
          "Can not compact hbase:meta table if instance is on-line");
    }
    // TODO reenable new OfflineMerger(conf, fs).process();
  } else {
    if(!masterIsRunning) {
      throw new IllegalStateException(
          "HBase instance must be running to merge a normal table");
    }
    Admin admin = new HBaseAdmin(conf);
    try {
      if (!admin.isTableDisabled(tableName)) {
        throw new TableNotDisabledException(tableName);
      }
    } finally {
      admin.close();
    }
    new OnlineMerger(conf, fs, tableName).process();
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 51, Source: HMerge.java
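To place Example 5 in context, here is a hedged usage sketch of the merge method shown above. The table name is hypothetical, and per the Javadoc a normal table must already be disabled before merging; the HMerge package path is assumed from upstream HBase (org.apache.hadoop.hbase.util).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.HMerge;

public class MergeUsage {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // Passing true asks merge() to verify the master is running,
    // which is required when merging a normal (non-meta) table.
    HMerge.merge(conf, fs, TableName.valueOf("my_table"), true);
  }
}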

Example 6: merge

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Scans the table and merges two adjacent regions if they are small. This
 * only happens when a lot of rows are deleted.
 *
 * When merging the META region, the HBase instance must be offline.
 * When merging a normal table, the HBase instance must be online, but the
 * table must be disabled.
 *
 * @param conf        - configuration object for HBase
 * @param fs          - FileSystem where regions reside
 * @param tableName   - Table to be compacted
 * @param testMasterRunning True if we are to verify master is down before
 * running merge
 * @throws IOException
 */
public static void merge(Configuration conf, FileSystem fs,
  final byte [] tableName, final boolean testMasterRunning)
throws IOException {
  boolean masterIsRunning = false;
  if (testMasterRunning) {
    masterIsRunning = HConnectionManager
        .execute(new HConnectable<Boolean>(conf) {
          @Override
          public Boolean connect(HConnection connection) throws IOException {
            return connection.isMasterRunning();
          }
        });
  }
  if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
    if (masterIsRunning) {
      throw new IllegalStateException(
          "Can not compact META table if instance is on-line");
    }
    new OfflineMerger(conf, fs).process();
  } else {
    if(!masterIsRunning) {
      throw new IllegalStateException(
          "HBase instance must be running to merge a normal table");
    }
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.isTableDisabled(tableName)) {
      throw new TableNotDisabledException(tableName);
    }
    new OnlineMerger(conf, fs, tableName).process();
  }
}
 
Developer ID: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 47, Source: HMerge.java

Example 7: merge

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Scans the table and merges two adjacent regions if they are small. This
 * only happens when a lot of rows are deleted.
 *
 * When merging the hbase:meta region, the HBase instance must be offline.
 * When merging a normal table, the HBase instance must be online, but the
 * table must be disabled.
 *
 * @param conf        - configuration object for HBase
 * @param fs          - FileSystem where regions reside
 * @param tableName   - Table to be compacted
 * @param testMasterRunning True if we are to verify master is down before
 * running merge
 * @throws IOException
 */
public static void merge(Configuration conf, FileSystem fs,
  final TableName tableName, final boolean testMasterRunning)
throws IOException {
  boolean masterIsRunning = false;
  if (testMasterRunning) {
    masterIsRunning = HConnectionManager
        .execute(new HConnectable<Boolean>(conf) {
          @Override
          public Boolean connect(HConnection connection) throws IOException {
            return connection.isMasterRunning();
          }
        });
  }
  if (tableName.equals(TableName.META_TABLE_NAME)) {
    if (masterIsRunning) {
      throw new IllegalStateException(
          "Can not compact hbase:meta table if instance is on-line");
    }
    // TODO reenable new OfflineMerger(conf, fs).process();
  } else {
    if(!masterIsRunning) {
      throw new IllegalStateException(
          "HBase instance must be running to merge a normal table");
    }
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.isTableDisabled(tableName)) {
      throw new TableNotDisabledException(tableName);
    }
    new OnlineMerger(conf, fs, tableName).process();
  }
}
 
Developer ID: tenggyut, Project: HIndex, Lines: 47, Source: HMerge.java

Example 8: merge

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Scans the table and merges two adjacent regions if they are small. This
 * only happens when a lot of rows are deleted.
 *
 * When merging the hbase:meta region, the HBase instance must be offline.
 * When merging a normal table, the HBase instance must be online, but the
 * table must be disabled.
 *
 * @param conf        - configuration object for HBase
 * @param fs          - FileSystem where regions reside
 * @param tableName   - Table to be compacted
 * @param testMasterRunning True if we are to verify master is down before
 * running merge
 * @throws IOException
 */
public static void merge(Configuration conf, FileSystem fs,
  final TableName tableName, final boolean testMasterRunning)
throws IOException {
  boolean masterIsRunning = false;
  if (testMasterRunning) {
    masterIsRunning = HConnectionManager
        .execute(new HConnectable<Boolean>(conf) {
          @Override
          public Boolean connect(HConnection connection) throws IOException {
            return connection.isMasterRunning();
          }
        });
  }
  if (tableName.equals(TableName.META_TABLE_NAME)) {
    if (masterIsRunning) {
      throw new IllegalStateException(
          "Can not compact hbase:meta table if instance is on-line");
    }
    // TODO reenable new OfflineMerger(conf, fs).process();
  } else {
    if(!masterIsRunning) {
      throw new IllegalStateException(
          "HBase instance must be running to merge a normal table");
    }
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      if (!admin.isTableDisabled(tableName)) {
        throw new TableNotDisabledException(tableName);
      }
    } finally {
      admin.close();
    }
    new OnlineMerger(conf, fs, tableName).process();
  }
}
 
Developer ID: shenli-uiuc, Project: PyroDB, Lines: 51, Source: HMerge.java

Example 9: map

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Map method that compares every scanned row with the equivalent from
 * a distant cluster.
 * @param row  The current table row key.
 * @param value  The columns.
 * @param context  The current context.
 * @throws IOException When something is broken with the data.
 */
@Override
public void map(ImmutableBytesWritable row, final Result value,
                Context context)
    throws IOException {
  if (replicatedScanner == null) {
    Configuration conf = context.getConfiguration();
    final Scan scan = new Scan();
    scan.setCaching(conf.getInt(TableInputFormat.SCAN_CACHEDROWS, 1));
    long startTime = conf.getLong(NAME + ".startTime", 0);
    long endTime = conf.getLong(NAME + ".endTime", Long.MAX_VALUE);
    String families = conf.get(NAME + ".families", null);
    if(families != null) {
      String[] fams = families.split(",");
      for(String fam : fams) {
        scan.addFamily(Bytes.toBytes(fam));
      }
    }
    scan.setTimeRange(startTime, endTime);
    int versions = conf.getInt(NAME+".versions", -1);
    LOG.info("Setting number of version inside map as: " + versions);
    if (versions >= 0) {
      scan.setMaxVersions(versions);
    }

    final TableSplit tableSplit = (TableSplit)(context.getInputSplit());
    HConnectionManager.execute(new HConnectable<Void>(conf) {
      @Override
      public Void connect(HConnection conn) throws IOException {
        String zkClusterKey = conf.get(NAME + ".peerQuorumAddress");
        Configuration peerConf = HBaseConfiguration.createClusterConf(conf,
            zkClusterKey, PEER_CONFIG_PREFIX);

        TableName tableName = TableName.valueOf(conf.get(NAME + ".tableName"));
        replicatedTable = new HTable(peerConf, tableName);
        scan.setStartRow(value.getRow());
        scan.setStopRow(tableSplit.getEndRow());
        replicatedScanner = replicatedTable.getScanner(scan);
        return null;
      }
    });
    currentCompareRowInPeerTable = replicatedScanner.next();
  }
  while (true) {
    if (currentCompareRowInPeerTable == null) {
      // reach the region end of peer table, row only in source table
      logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value);
      break;
    }
    int rowCmpRet = Bytes.compareTo(value.getRow(), currentCompareRowInPeerTable.getRow());
    if (rowCmpRet == 0) {
      // rowkey is same, need to compare the content of the row
      try {
        Result.compareResults(value, currentCompareRowInPeerTable);
        context.getCounter(Counters.GOODROWS).increment(1);
      } catch (Exception e) {
        logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value);
        LOG.error("Exception while comparing row : " + e);
      }
      currentCompareRowInPeerTable = replicatedScanner.next();
      break;
    } else if (rowCmpRet < 0) {
      // row only exists in source table
      logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value);
      break;
    } else {
      // row only exists in peer table
      logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_PEER_TABLE_ROWS,
        currentCompareRowInPeerTable);
      currentCompareRowInPeerTable = replicatedScanner.next();
    }
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 81, Source: VerifyReplication.java

Example 10: createSubmittableJob

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Sets up the actual job.
 *
 * @param conf  The current configuration.
 * @param args  The command line parameters.
 * @return The newly created job.
 * @throws java.io.IOException When setting up the job fails.
 */
public static Job createSubmittableJob(Configuration conf, String[] args)
throws IOException {
  if (!doCommandLine(args)) {
    return null;
  }
  if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY, false)) {
    throw new IOException("Replication needs to be enabled to verify it.");
  }
  HConnectionManager.execute(new HConnectable<Void>(conf) {
    @Override
    public Void connect(HConnection conn) throws IOException {
      try {
        ReplicationZookeeper zk = new ReplicationZookeeper(conn, conf,
            conn.getZooKeeperWatcher());
        // Just verifying it we can connect
        ReplicationPeer peer = zk.getPeer(peerId);
        if (peer == null) {
          throw new IOException("Couldn't get access to the slave cluster," +
              "please see the log");
        }
      } catch (KeeperException ex) {
        throw new IOException("Couldn't get access to the slave cluster" +
            " because: ", ex);
      }
      return null;
    }
  });
  conf.set(NAME+".peerId", peerId);
  conf.set(NAME+".tableName", tableName);
  conf.setLong(NAME+".startTime", startTime);
  conf.setLong(NAME+".endTime", endTime);
  if (families != null) {
    conf.set(NAME+".families", families);
  }
  Job job = new Job(conf, NAME + "_" + tableName);
  job.setJarByClass(VerifyReplication.class);

  Scan scan = new Scan();
  if (startTime != 0) {
    scan.setTimeRange(startTime,
        endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
  }
  if(families != null) {
    String[] fams = families.split(",");
    for(String fam : fams) {
      scan.addFamily(Bytes.toBytes(fam));
    }
  }
  TableMapReduceUtil.initTableMapperJob(tableName, scan,
      Verifier.class, null, null, job);
  job.setOutputFormatClass(NullOutputFormat.class);
  job.setNumReduceTasks(0);
  return job;
}
 
Developer ID: fengchen8086, Project: LCIndex-HBase-0.94.16, Lines: 63, Source: VerifyReplication.java
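A hedged driver sketch for the job factory in Example 10 follows. The peer id and table name are hypothetical positional arguments (doCommandLine parses them), replication must be enabled in the configuration or createSubmittableJob throws an IOException, and the fully qualified class name is assumed from upstream HBase (org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication;
import org.apache.hadoop.mapreduce.Job;

public class VerifyReplicationDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Positional arguments: <peerId> <tableName> (values are hypothetical).
    Job job = VerifyReplication.createSubmittableJob(conf,
        new String[] { "1", "my_table" });
    if (job != null) {
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }
}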

Example 11: map

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Map method that compares every scanned row with the equivalent from
 * a distant cluster.
 * @param row  The current table row key.
 * @param value  The columns.
 * @param context  The current context.
 * @throws IOException When something is broken with the data.
 */
@Override
public void map(ImmutableBytesWritable row, final Result value,
                Context context)
    throws IOException {
  if (replicatedScanner == null) {
    Configuration conf = context.getConfiguration();
    final Scan scan = new Scan();
    scan.setCaching(conf.getInt(TableInputFormat.SCAN_CACHEDROWS, 1));
    long startTime = conf.getLong(NAME + ".startTime", 0);
    long endTime = conf.getLong(NAME + ".endTime", Long.MAX_VALUE);
    String families = conf.get(NAME + ".families", null);
    if(families != null) {
      String[] fams = families.split(",");
      for(String fam : fams) {
        scan.addFamily(Bytes.toBytes(fam));
      }
    }
    scan.setTimeRange(startTime, endTime);
    if (versions >= 0) {
      scan.setMaxVersions(versions);
    }

    final TableSplit tableSplit = (TableSplit)(context.getInputSplit());
    HConnectionManager.execute(new HConnectable<Void>(conf) {
      @Override
      public Void connect(HConnection conn) throws IOException {
        String zkClusterKey = conf.get(NAME + ".peerQuorumAddress");
        Configuration peerConf = HBaseConfiguration.create(conf);
        ZKUtil.applyClusterKeyToConf(peerConf, zkClusterKey);

        TableName tableName = TableName.valueOf(conf.get(NAME + ".tableName"));
        // TODO: This HTable doesn't get closed. Fix!
        Table replicatedTable = new HTable(peerConf, tableName);
        scan.setStartRow(value.getRow());
        scan.setStopRow(tableSplit.getEndRow());
        replicatedScanner = replicatedTable.getScanner(scan);
        return null;
      }
    });
    currentCompareRowInPeerTable = replicatedScanner.next();
  }
  while (true) {
    if (currentCompareRowInPeerTable == null) {
      // reach the region end of peer table, row only in source table
      logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value);
      break;
    }
    int rowCmpRet = Bytes.compareTo(value.getRow(), currentCompareRowInPeerTable.getRow());
    if (rowCmpRet == 0) {
      // rowkey is same, need to compare the content of the row
      try {
        Result.compareResults(value, currentCompareRowInPeerTable);
        context.getCounter(Counters.GOODROWS).increment(1);
      } catch (Exception e) {
        logFailRowAndIncreaseCounter(context, Counters.CONTENT_DIFFERENT_ROWS, value);
      }
      currentCompareRowInPeerTable = replicatedScanner.next();
      break;
    } else if (rowCmpRet < 0) {
      // row only exists in source table
      logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_SOURCE_TABLE_ROWS, value);
      break;
    } else {
      // row only exists in peer table
      logFailRowAndIncreaseCounter(context, Counters.ONLY_IN_PEER_TABLE_ROWS,
        currentCompareRowInPeerTable);
      currentCompareRowInPeerTable = replicatedScanner.next();
    }
  }
}
 
Developer ID: grokcoder, Project: pbase, Lines: 79, Source: VerifyReplication.java

Example 12: map

import org.apache.hadoop.hbase.client.HConnectionManager; // import the package/class this method depends on
/**
 * Map method that compares every scanned row with the equivalent from
 * a distant cluster.
 * @param row  The current table row key.
 * @param value  The columns.
 * @param context  The current context.
 * @throws IOException When something is broken with the data.
 */
@Override
public void map(ImmutableBytesWritable row, final Result value,
                Context context)
    throws IOException {
  if (replicatedScanner == null) {
    Configuration conf = context.getConfiguration();
    final Scan scan = new Scan();
    scan.setCaching(conf.getInt(TableInputFormat.SCAN_CACHEDROWS, 1));
    long startTime = conf.getLong(NAME + ".startTime", 0);
    long endTime = conf.getLong(NAME + ".endTime", Long.MAX_VALUE);
    String families = conf.get(NAME + ".families", null);
    if(families != null) {
      String[] fams = families.split(",");
      for(String fam : fams) {
        scan.addFamily(Bytes.toBytes(fam));
      }
    }
    scan.setTimeRange(startTime, endTime);
    HConnectionManager.execute(new HConnectable<Void>(conf) {
      @Override
      public Void connect(HConnection conn) throws IOException {
        String zkClusterKey = conf.get(NAME + ".peerQuorumAddress");
        Configuration peerConf = HBaseConfiguration.create(conf);
        ZKUtil.applyClusterKeyToConf(peerConf, zkClusterKey);

        HTable replicatedTable = new HTable(peerConf, conf.get(NAME + ".tableName"));
        scan.setStartRow(value.getRow());
        replicatedScanner = replicatedTable.getScanner(scan);
        return null;
      }
    });
  }
  Result res = replicatedScanner.next();
  try {
    Result.compareResults(value, res);
    context.getCounter(Counters.GOODROWS).increment(1);
  } catch (Exception e) {
    LOG.warn("Bad row", e);
    context.getCounter(Counters.BADROWS).increment(1);
  }
}
 
Developer ID: tenggyut, Project: HIndex, Lines: 50, Source: VerifyReplication.java


Note: The org.apache.hadoop.hbase.client.HConnectionManager.execute method examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets were selected from open-source projects contributed by their respective authors; copyright of the source code remains with the original authors, and distribution and use are subject to each project's license. Do not reproduce without permission.