

Java HiveStorageHandler Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hive.ql.metadata.HiveStorageHandler. If you are wondering what HiveStorageHandler is for, or how to use it in practice, the curated class code examples below may help.


The HiveStorageHandler class belongs to the org.apache.hadoop.hive.ql.metadata package. Five code examples of the class are shown below, sorted by popularity by default.

Example 1: getInputFormatClass

import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler; // import the required package/class
/**
 * Utility method that gets the table or partition {@link InputFormat} class. It first
 * tries to get the class name from the given StorageDescriptor object. If that is not set,
 * it falls back to the StorageHandler class specified in the table properties. If neither
 * is found, an exception is thrown.
 * @param job {@link JobConf} instance, needed in case the table is a StorageHandler-based table.
 * @param sd {@link StorageDescriptor} instance of the partition currently being read, or of the table (for non-partitioned tables).
 * @param table Table object
 * @throws Exception
 */
public static Class<? extends InputFormat<?, ?>> getInputFormatClass(final JobConf job, final StorageDescriptor sd,
    final Table table) throws Exception {
  final String inputFormatName = sd.getInputFormat();
  if (Strings.isNullOrEmpty(inputFormatName)) {
    final String storageHandlerClass = table.getParameters().get(META_TABLE_STORAGE);
    if (Strings.isNullOrEmpty(storageHandlerClass)) {
      throw new ExecutionSetupException("Unable to get Hive table InputFormat class. There is neither " +
          "InputFormat class explicitly specified nor StorageHandler class");
    }
    final HiveStorageHandler storageHandler = HiveUtils.getStorageHandler(job, storageHandlerClass);
    return (Class<? extends InputFormat<?, ?>>) storageHandler.getInputFormatClass();
  } else {
    return (Class<? extends InputFormat<?, ?>>) Class.forName(inputFormatName);
  }
}
 
Developer ID: axbaretto, Project: drill, Lines of code: 25, Source file: HiveUtilities.java
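For context, here is a minimal sketch of how a caller might use this utility. It assumes the Table has already been fetched from the metastore (that lookup is out of scope here); the ReflectionUtils-based instantiation is one common way to construct the resolved InputFormat, and the wrapper class and method names are illustrative, not part of the original example.

import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.ReflectionUtils;

public class InputFormatResolutionSketch {
  // Resolve and instantiate the InputFormat for a table that was already
  // fetched from the metastore. ReflectionUtils.newInstance also pushes the
  // JobConf into Configurable input formats.
  static InputFormat<?, ?> resolveInputFormat(JobConf job, Table table) throws Exception {
    StorageDescriptor sd = table.getSd();
    Class<? extends InputFormat<?, ?>> clazz =
        HiveUtilities.getInputFormatClass(job, sd, table);
    return ReflectionUtils.newInstance(clazz, job);
  }
}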

Example 2: extractPartInfo

import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler; // import the required package/class
private static PartInfo extractPartInfo(HCatSchema schema, StorageDescriptor sd,
                    Map<String, String> parameters, Configuration conf,
                    InputJobInfo inputJobInfo) throws IOException {

  StorerInfo storerInfo = InternalUtil.extractStorerInfo(sd, parameters);

  Properties hcatProperties = new Properties();
  HiveStorageHandler storageHandler = HCatUtil.getStorageHandler(conf, storerInfo);

  // copy the properties from storageHandler to jobProperties
  Map<String, String> jobProperties =
      HCatRSUtil.getInputJobProperties(storageHandler, inputJobInfo);

  for (String key : parameters.keySet()) {
    hcatProperties.put(key, parameters.get(key));
  }
  // FIXME
  // Bloating partinfo with inputJobInfo is not good
  return new PartInfo(schema, storageHandler, sd.getLocation(),
    hcatProperties, jobProperties, inputJobInfo.getTableInfo());
}
 
Developer ID: cloudera, Project: RecordServiceClient, Lines of code: 22, Source file: InitializeInput.java
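As a hedged illustration of how such a PartInfo is typically consumed downstream, the sketch below copies the captured per-partition job properties into the Configuration that will be handed to the underlying InputFormat. It assumes a getJobProperties() accessor on PartInfo, which the class shown in Example 4 would plausibly expose; the helper class and method are hypothetical.

import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class PartInfoConsumerSketch {
  // Apply the storage-handler job properties captured in a PartInfo (assumed
  // accessor: getJobProperties()) to the job Configuration before reading.
  static void applyJobProperties(PartInfo partInfo, Configuration conf) {
    Map<String, String> jobProperties = partInfo.getJobProperties();
    if (jobProperties != null) {
      for (Map.Entry<String, String> e : jobProperties.entrySet()) {
        conf.set(e.getKey(), e.getValue());
      }
    }
  }
}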

Example 3: createMetaStoreClient

import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler; // import the required package/class
private IMetaStoreClient createMetaStoreClient() throws MetaException {
  HiveMetaHookLoader hookLoader = new HiveMetaHookLoader() {
    @Override
    public HiveMetaHook getHook(Table tbl) throws MetaException {
      if (tbl == null) {
        return null;
      }

      try {
        HiveStorageHandler storageHandler =
            HiveUtils.getStorageHandler(hiveConf, tbl.getParameters().get(META_TABLE_STORAGE));
        return storageHandler == null ? null : storageHandler.getMetaHook();
      } catch (HiveException e) {
        LOG.error(e.toString());
        throw new MetaException("Failed to get storage handler: " + e);
      }
    }
  };

  return RetryingMetaStoreClient.getProxy(hiveConf, hookLoader, HiveMetaStoreClient.class.getName());
}
 
Developer ID: apache, Project: incubator-gobblin, Lines of code: 22, Source file: HiveMetaStoreClientFactory.java
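A minimal usage sketch for the client this factory produces follows; the database and table names are placeholders, and error handling is reduced to a try/finally so the connection is always released.

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public class MetaStoreClientUsageSketch {
  // Fetch a table through the hook-aware client; "default" and "my_table"
  // are placeholder names for illustration only.
  static void printTableLocation(IMetaStoreClient client) throws Exception {
    try {
      Table table = client.getTable("default", "my_table");
      System.out.println(table.getSd().getLocation());
    } finally {
      client.close();
    }
  }
}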

Example 4: PartInfo

import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler; // import the required package/class
/**
 * Instantiates a new hcat partition info.
 * @param partitionSchema the partition schema
 * @param storageHandler the storage handler
 * @param location the location
 * @param hcatProperties the hcat-specific properties of the partition
 * @param jobProperties the job properties
 * @param tableInfo the table information
 */
public PartInfo(HCatSchema partitionSchema, HiveStorageHandler storageHandler,
        String location, Properties hcatProperties,
        Map<String, String> jobProperties, HCatTableInfo tableInfo) {
  this.partitionSchema = partitionSchema;
  this.location = location;
  this.hcatProperties = hcatProperties;
  this.jobProperties = jobProperties;
  this.tableInfo = tableInfo;

  this.storageHandlerClassName = storageHandler.getClass().getName();
  this.inputFormatClassName = storageHandler.getInputFormatClass().getName();
  this.serdeClassName = storageHandler.getSerDeClass().getName();
  this.outputFormatClassName = storageHandler.getOutputFormatClass().getName();
}
 
Developer ID: cloudera, Project: RecordServiceClient, Lines of code: 24, Source file: PartInfo.java
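Note the design choice here: the constructor stores the handler, input format, SerDe, and output format as class-name strings rather than live objects, which keeps PartInfo cheap to serialize into the job configuration. A minimal sketch of re-resolving the handler on the task side from that stored name, using Hive's own HiveUtils, might look like this (the wrapper class and method are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler;
import org.apache.hadoop.hive.ql.metadata.HiveUtils;

public class StorageHandlerRehydrateSketch {
  // Re-instantiate the storage handler from the class name that PartInfo
  // captured at planning time; HiveUtils.getStorageHandler returns null
  // when the class name is null.
  static HiveStorageHandler rehydrate(Configuration conf, String storageHandlerClassName)
      throws Exception {
    return HiveUtils.getStorageHandler(conf, storageHandlerClassName);
  }
}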

Example 5: getInputJobProperties

import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler; // import the required package/class
public static Map<String, String> getInputJobProperties(
    HiveStorageHandler storageHandler, InputJobInfo inputJobInfo) {
  Properties props = inputJobInfo.getTableInfo().getStorerInfo().getProperties();
  props.put(serdeConstants.SERIALIZATION_LIB, storageHandler.getSerDeClass().getName());
  TableDesc tableDesc = new TableDesc(storageHandler.getInputFormatClass(),
          storageHandler.getOutputFormatClass(), props);
  if (tableDesc.getJobProperties() == null) {
    tableDesc.setJobProperties(new HashMap<String, String>());
  }

  Properties mytableProperties = tableDesc.getProperties();
  mytableProperties.setProperty(
      org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_NAME,
      inputJobInfo.getDatabaseName() + "." + inputJobInfo.getTableName());

  Map<String, String> jobProperties = new HashMap<String, String>();
  try {
    tableDesc.getJobProperties().put(
            HCatConstants.HCAT_KEY_JOB_INFO,
            HCatRSUtil.serialize(inputJobInfo));

    storageHandler.configureInputJobProperties(tableDesc, jobProperties);
  } catch (IOException e) {
    throw new IllegalStateException(
            "Failed to configure StorageHandler", e);
  }

  return jobProperties;
}
 
Developer ID: cloudera, Project: RecordServiceClient, Lines of code: 30, Source file: HCatRSUtil.java
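To close the loop, here is a hedged sketch of how a job-setup path might merge the returned properties into the job Configuration. InputJobInfo and HCatRSUtil come from the same RecordServiceClient project as the example above; the wrapper class and method are illustrative.

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler;

public class InputJobPropertiesSketch {
  // Merge the handler-provided input job properties into the Configuration
  // so the underlying InputFormat sees them at split and record-reader time.
  static void configure(Configuration conf, HiveStorageHandler storageHandler,
      InputJobInfo inputJobInfo) {
    Map<String, String> jobProperties =
        HCatRSUtil.getInputJobProperties(storageHandler, inputJobInfo);
    for (Map.Entry<String, String> e : jobProperties.entrySet()) {
      conf.set(e.getKey(), e.getValue());
    }
  }
}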


Note: The org.apache.hadoop.hive.ql.metadata.HiveStorageHandler class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors, and you should consult the License of the corresponding project before using or distributing it. Do not reproduce this article without permission.