

Java Table.getPartitionKeys Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hive.ql.metadata.Table.getPartitionKeys. If you are unsure what Table.getPartitionKeys does or how to call it, the curated code examples below may help. You can also explore further usage examples of the containing class, org.apache.hadoop.hive.ql.metadata.Table.


Three code examples of the Table.getPartitionKeys method are shown below, sorted by popularity by default.

Example 1: createPtnKeyValueMap

import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.metadata.Table; // import the class the method depends on
static Map<String, String> createPtnKeyValueMap(Table table, Partition ptn)
  throws IOException {
  List<String> values = ptn.getValues();
  if (values.size() != table.getPartitionKeys().size()) {
    throw new IOException(
        "Partition values in partition inconsistent with table definition, table "
            + table.getTableName() + " has "
            + table.getPartitionKeys().size()
            + " partition keys, partition has " + values.size()
            + " partition values");
  }

  Map<String, String> ptnKeyValues = new HashMap<String, String>();

  int i = 0;
  for (FieldSchema schema : table.getPartitionKeys()) {
    // CONCERN : the way this mapping goes, the order *needs* to be
    // preserved for table.getPartitionKeys() and ptn.getValues()
    ptnKeyValues.put(schema.getName().toLowerCase(), values.get(i));
    i++;
  }

  return ptnKeyValues;
}
 
Developer: cloudera | Project: RecordServiceClient | Lines of code: 25 | Source: InternalUtil.java
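The CONCERN comment in Example 1 notes that the mapping only works if the order of table.getPartitionKeys() matches the order of ptn.getValues(). That ordering requirement can be illustrated without a Hive installation. The sketch below is a minimal, hypothetical simplification (class and method names are mine, not Hive's) that pairs key names with values by index and uses a LinkedHashMap so the resulting map also preserves key order:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionKeyMapping {

    // Pair partition key names with values by position; both lists
    // must be in the same order, mirroring the constraint noted in
    // Example 1's CONCERN comment.
    static Map<String, String> zipToMap(List<String> keys, List<String> values) {
        if (keys.size() != values.size()) {
            throw new IllegalArgumentException(
                "Expected " + keys.size() + " partition values, got " + values.size());
        }
        // LinkedHashMap preserves insertion order, so iteration over the
        // result follows the partition-key order of the table definition.
        Map<String, String> map = new LinkedHashMap<>();
        for (int i = 0; i < keys.size(); i++) {
            map.put(keys.get(i).toLowerCase(), values.get(i));
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> m = zipToMap(
            Arrays.asList("Year", "Month"),
            Arrays.asList("2024", "01"));
        System.out.println(m); // prints {year=2024, month=01}
    }
}
```

As in Example 1, key names are lower-cased before insertion, so lookups should use lower-case names as well.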

Example 2: testCreateTable

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.ql.metadata.Table; // import the class the method depends on
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
@Test
public void testCreateTable() throws Exception {
  prepareData(TOPIC, PARTITION);
  Partitioner partitioner = HiveTestUtils.getPartitioner();

  Schema schema = createSchema();
  hive.createTable(hiveDatabase, TOPIC, schema, partitioner);
  String location = "partition=" + String.valueOf(PARTITION);
  hiveMetaStore.addPartition(hiveDatabase, TOPIC, location);

  List<String> expectedColumnNames = new ArrayList<>();
  for (Field field: schema.fields()) {
    expectedColumnNames.add(field.name());
  }

  Table table = hiveMetaStore.getTable(hiveDatabase, TOPIC);
  List<String> actualColumnNames = new ArrayList<>();
  for (FieldSchema column: table.getSd().getCols()) {
    actualColumnNames.add(column.getName());
  }

  assertEquals(expectedColumnNames, actualColumnNames);
  List<FieldSchema> partitionCols = table.getPartitionKeys();
  assertEquals(1, partitionCols.size());
  assertEquals("partition", partitionCols.get(0).getName());

  String[] expectedResult = {"true", "12", "12", "12.2", "12.2", "12"};
  String result = HiveTestUtils.runHive(hiveExec, "SELECT * FROM " + TOPIC);
  String[] rows = result.split("\n");
  // Only 6 of the 7 records should have been delivered due to flush_size = 3
  assertEquals(6, rows.length);
  for (String row: rows) {
    String[] parts = HiveTestUtils.parseOutput(row);
    for (int j = 0; j < expectedResult.length; ++j) {
      assertEquals(expectedResult[j], parts[j]);
    }
  }
}
 
Developer: jiangxiluning | Project: kafka-connect-hdfs | Lines of code: 39 | Source: AvroHiveUtilTest.java

Example 3: getPartitionKeys

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.ql.metadata.Table; // import the class the method depends on
import org.apache.hadoop.mapreduce.Job;
@Override
public String[] getPartitionKeys(String location, Job job)
  throws IOException {
  Table table = phutil.getTable(location,
    hcatServerUri != null ? hcatServerUri : PigHCatUtil.getHCatServerUri(job),
    PigHCatUtil.getHCatServerPrincipal(job),
    job);   // Pass job to initialize metastore conf overrides
  List<FieldSchema> tablePartitionKeys = table.getPartitionKeys();
  String[] partitionKeys = new String[tablePartitionKeys.size()];
  for (int i = 0; i < tablePartitionKeys.size(); i++) {
    partitionKeys[i] = tablePartitionKeys.get(i).getName();
  }
  return partitionKeys;
}
 
Developer: cloudera | Project: RecordServiceClient | Lines of code: 15 | Source: HCatRSLoader.java
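The name-extraction loop in Example 3 (copying FieldSchema names into a String array) can also be written with the Stream API. A minimal sketch, assuming only the JDK; the Field record below is a hypothetical stand-in for Hive's FieldSchema, not the real class:

```java
import java.util.List;

public class PartitionKeyNames {

    // Hypothetical stand-in for Hive's FieldSchema (name + type only).
    record Field(String name, String type) {}

    // Equivalent of Example 3's loop: collect just the key names,
    // preserving the order of the partition-key list.
    static String[] partitionKeyNames(List<Field> partitionKeys) {
        return partitionKeys.stream()
            .map(Field::name)
            .toArray(String[]::new);
    }

    public static void main(String[] args) {
        List<Field> keys = List.of(
            new Field("year", "string"),
            new Field("month", "string"));
        System.out.println(String.join(",", partitionKeyNames(keys))); // prints year,month
    }
}
```

With the real Hive class, the mapping step would be `map(FieldSchema::getName)` instead; the surrounding logic is unchanged.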


Note: The org.apache.hadoop.hive.ql.metadata.Table.getPartitionKeys examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by the community; copyright of the source code belongs to the original authors. Please consult each project's license before distributing or using the code, and do not reproduce this article without permission.