

Java HColumnDescriptor.setCompressionType Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hbase.HColumnDescriptor.setCompressionType. If you are unsure what HColumnDescriptor.setCompressionType does, how to call it, or what it looks like in real code, the selected examples below should help. You can also browse further usage examples of the enclosing class, org.apache.hadoop.hbase.HColumnDescriptor.


The following shows 5 code examples of the HColumnDescriptor.setCompressionType method, sorted by popularity by default.
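
Before the collected examples, here is a minimal, self-contained sketch of the most common use of setCompressionType: enabling GZ compression on a column family when creating a table. The table name "demo_table" and family name "cf" are placeholders chosen for illustration and do not come from any of the projects below.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.compress.Compression;

public class CreateCompressedTable {
  public static void main(String[] args) throws Exception {
    // Open a connection using the cluster configuration found on the classpath
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("demo_table"));
      HColumnDescriptor family = new HColumnDescriptor("cf");
      family.setCompressionType(Compression.Algorithm.GZ); // HFiles written for this family are GZ-compressed
      tableDescriptor.addFamily(family);
      admin.createTable(tableDescriptor);
    }
  }
}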

Example 1: addColumn

import org.apache.hadoop.hbase.HColumnDescriptor; // import the required package/class
/**
 * Add a column family to an existing table.
 *
 * @param tableName  table name
 * @param familyName column family name
 */
public void addColumn(String tableName, String familyName) {
    HBaseConfiguration hBaseConfiguration = new HBaseConfiguration();
    Admin admin = hBaseConfiguration.admin();
    TableName tb = TableName.valueOf(tableName);
    try {
        if (admin.tableExists(tb)) {
            HColumnDescriptor columnDescriptor = new HColumnDescriptor(familyName);

            columnDescriptor.setMaxVersions(1); // maximum number of versions to keep for this family
            columnDescriptor.setCompressionType(Compression.Algorithm.GZ); // compression algorithm used on flush
            columnDescriptor.setCompactionCompressionType(Compression.Algorithm.GZ); // compression algorithm used on compaction

            admin.addColumn(tb, columnDescriptor);
        } else {
            log.info("表名【" + tableName + "】不存在");
        }
    } catch (IOException e) {
        log.error(e);
    } finally {
        hBaseConfiguration.close();
    }
}
 
Developer ID: mumuhadoop, Project: mumu-hbase, Lines: 29, Source: HBaseTableOperation.java
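
The HBaseConfiguration used above, with its admin() and close() methods, appears to be a helper class from the mumu-hbase project rather than the stock org.apache.hadoop.hbase.HBaseConfiguration. For reference, a rough sketch of the same operation written directly against the standard HBase 1.x client API (connection handling spelled out, method and parameter names kept from the example) could look like this:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.compress.Compression;

public class AddCompressedFamily {
  /** Adds a GZ-compressed column family to an existing table. */
  public void addColumn(String tableName, String familyName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      TableName tb = TableName.valueOf(tableName);
      if (admin.tableExists(tb)) {
        HColumnDescriptor columnDescriptor = new HColumnDescriptor(familyName);
        columnDescriptor.setMaxVersions(1);
        columnDescriptor.setCompressionType(Compression.Algorithm.GZ);           // flush compression
        columnDescriptor.setCompactionCompressionType(Compression.Algorithm.GZ); // compaction compression
        // addColumn(TableName, HColumnDescriptor) is the HBase 1.x signature;
        // HBase 2.x replaces it with addColumnFamily(TableName, ColumnFamilyDescriptor)
        admin.addColumn(tb, columnDescriptor);
      }
    }
  }
}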

Example 2: testCreateWriter

import org.apache.hadoop.hbase.HColumnDescriptor; // import the required package/class
/**
 * Verify that compression and data block encoding are respected by the
 * Store.createWriterInTmp() method, used on store flush.
 */
@Test
public void testCreateWriter() throws Exception {
  Configuration conf = HBaseConfiguration.create();
  FileSystem fs = FileSystem.get(conf);

  HColumnDescriptor hcd = new HColumnDescriptor(family);
  hcd.setCompressionType(Compression.Algorithm.GZ);
  hcd.setDataBlockEncoding(DataBlockEncoding.DIFF);
  init(name.getMethodName(), conf, hcd);

  // Test createWriterInTmp()
  StoreFile.Writer writer = store.createWriterInTmp(4, hcd.getCompression(), false, true, false);
  Path path = writer.getPath();
  writer.append(new KeyValue(row, family, qf1, Bytes.toBytes(1)));
  writer.append(new KeyValue(row, family, qf2, Bytes.toBytes(2)));
  writer.append(new KeyValue(row2, family, qf1, Bytes.toBytes(3)));
  writer.append(new KeyValue(row2, family, qf2, Bytes.toBytes(4)));
  writer.close();

  // Verify that compression and encoding settings are respected
  HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf), conf);
  Assert.assertEquals(hcd.getCompressionType(), reader.getCompressionAlgorithm());
  Assert.assertEquals(hcd.getDataBlockEncoding(), reader.getDataBlockEncoding());
  reader.close();
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 30, Source: TestStore.java

Example 3: create

import org.apache.hadoop.hbase.HColumnDescriptor; // import the required package/class
private static void create(Admin admin, TableName tableName, byte[]... families)
    throws IOException {
  HTableDescriptor desc = new HTableDescriptor(tableName);
  for (byte[] family : families) {
    HColumnDescriptor colDesc = new HColumnDescriptor(family);
    colDesc.setMaxVersions(1);
    colDesc.setCompressionType(Algorithm.GZ);
    desc.addFamily(colDesc);
  }
  try {
    admin.createTable(desc);
  } catch (TableExistsException tee) {
    /* Ignore */
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 16, Source: TestSCVFWithMiniCluster.java

Example 4: getDefaultColumnDescriptor

import org.apache.hadoop.hbase.HColumnDescriptor; // import the required package/class
public static HColumnDescriptor getDefaultColumnDescriptor(byte[] family) {
  HColumnDescriptor colDesc = new HColumnDescriptor(family);
  //    colDesc.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
  colDesc.setDataBlockEncoding(DataBlockEncoding.NONE);
  colDesc.setCompressionType(Compression.Algorithm.NONE);
  return colDesc;
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 8, Source: IndexTableRelation.java

Example 5: perform

import org.apache.hadoop.hbase.HColumnDescriptor; // import the required package/class
@Override
public void perform() throws Exception {
  HTableDescriptor tableDescriptor = admin.getTableDescriptor(tableName);
  HColumnDescriptor[] columnDescriptors = tableDescriptor.getColumnFamilies();

  if (columnDescriptors == null || columnDescriptors.length == 0) {
    return;
  }

  // Possible compression algorithms. If an algorithm is not supported,
  // modifyTable will fail, so there is no harm.
  Algorithm[] possibleAlgos = Algorithm.values();

  // Since not every compression algorithm is supported,
  // let's use the same algorithm for all column families.

  // If an unsupported compression algorithm is chosen, pick a different one.
  // This is to work around the issue that modifyTable() does not throw remote
  // exception.
  Algorithm algo;
  do {
    algo = possibleAlgos[random.nextInt(possibleAlgos.length)];

    try {
      Compressor c = algo.getCompressor();

      // call returnCompressor() to release the Compressor
      algo.returnCompressor(c);
      break;
    } catch (Throwable t) {
      LOG.info("Performing action: Changing compression algorithms to " + algo +
              " is not supported, pick another one");
    }
  } while (true);

  LOG.debug("Performing action: Changing compression algorithms on "
    + tableName.getNameAsString() + " to " + algo);
  for (HColumnDescriptor descriptor : columnDescriptors) {
    if (random.nextBoolean()) {
      descriptor.setCompactionCompressionType(algo);
    } else {
      descriptor.setCompressionType(algo);
    }
  }

  // Don't try the modify if we're stopping
  if (context.isStopping()) {
    return;
  }

  admin.modifyTable(tableName, tableDescriptor);
}
 
Developer ID: fengchen8086, Project: ditb, Lines: 53, Source: ChangeCompressionAction.java


Note: The org.apache.hadoop.hbase.HColumnDescriptor.setCompressionType examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors. When redistributing or using the code, follow the license of the corresponding project; do not republish without permission.