Java HCatException Class Code Examples

This article collects typical usage examples of the Java class org.apache.hive.hcatalog.common.HCatException. If you are wondering what the HCatException class is for, how to use it, or what real-world usage looks like, the curated examples below should help.


The HCatException class belongs to the org.apache.hive.hcatalog.common package. Fourteen code examples are shown below, ordered by popularity.
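Before diving in, one orientation note: HCatException is HCatalog's checked exception type and extends java.io.IOException, so every HCatalog call that can fail must either declare it or wrap it. A minimal, self-contained sketch (the message text is purely illustrative):

import org.apache.hive.hcatalog.common.HCatException;

public class HCatExceptionDemo {
  public static void main(String[] args) {
    try {
      // simulate a failing HCatalog call
      throw new HCatException("simulated HCatalog failure");
    } catch (HCatException e) {
      // HCatException is checked, so callers must handle or rethrow it
      System.err.println("Caught: " + e.getMessage());
    }
  }
}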

Example 1: flush

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
private void flush() throws HCatException {
  if (hCatRecordsBatch.isEmpty()) {
    return;
  }
  try {
    slaveWriter.write(hCatRecordsBatch.iterator());
    masterWriter.commit(writerContext);
  } catch (HCatException e) {
    LOG.error("Exception in flush - write/commit data to Hive", e);
    //abort on exception
    masterWriter.abort(writerContext);
    throw e;
  } finally {
    hCatRecordsBatch.clear();
  }
}
 
Developer: apache, Project: beam, Lines: 17, Source: HCatalogIO.java
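Note the error-handling design here: on any HCatException the master writer aborts the whole write via the shared WriterContext, and the batch is cleared in the finally block either way, so a failed batch is not retried on the next flush.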

Example 2: asFlinkTuples

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
/**
 * Specifies that the InputFormat returns Flink tuples instead of
 * {@link org.apache.hive.hcatalog.data.HCatRecord}.
 *
 * <p>Note: Flink tuples might only support a limited number of fields (depending on the API).
 *
 * @return This InputFormat.
 * @throws org.apache.hive.hcatalog.common.HCatException
 */
public HCatInputFormatBase<T> asFlinkTuples() throws HCatException {

	// build type information
	int numFields = outputSchema.getFields().size();
	if (numFields > this.getMaxFlinkTupleSize()) {
		throw new IllegalArgumentException("Only up to " + this.getMaxFlinkTupleSize() +
				" fields can be returned as Flink tuples.");
	}

	TypeInformation[] fieldTypes = new TypeInformation[numFields];
	fieldNames = new String[numFields];
	for (String fieldName : outputSchema.getFieldNames()) {
		HCatFieldSchema field = outputSchema.get(fieldName);

		int fieldPos = outputSchema.getPosition(fieldName);
		TypeInformation fieldType = getFieldType(field);

		fieldTypes[fieldPos] = fieldType;
		fieldNames[fieldPos] = fieldName;

	}
	this.resultType = new TupleTypeInfo(fieldTypes);

	return this;
}
 
Developer: axbaretto, Project: flink, Lines: 35, Source: HCatInputFormatBase.java
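A usage sketch for the method above, assuming Flink's batch DataSet API; the database/table names ("mydb", "mytable"), the projected columns, and the tuple arity are illustrative assumptions, not part of the original example:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hcatalog.java.HCatInputFormat;

public class HCatToTuplesDemo {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    // project two columns and return them as Tuple2 instead of HCatRecord
    DataSet<Tuple2<Integer, String>> rows = env.createInput(
        new HCatInputFormat<Tuple2<Integer, String>>("mydb", "mytable")
            .getFields("id", "name")
            .asFlinkTuples());
    rows.print();
  }
}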

Example 3: insert

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
private void insert(Map<String, String> partitionSpec, Iterable<HCatRecord> rows) {
  WriteEntity entity = new WriteEntity.Builder()
      .withDatabase(databaseName)
      .withTable(tableName)
      .withPartition(partitionSpec)
      .build();

  try {
    HCatWriter master = DataTransferFactory.getHCatWriter(entity, config);
    WriterContext context = master.prepareWrite();
    HCatWriter writer = DataTransferFactory.getHCatWriter(context);
    writer.write(rows.iterator());
    master.commit(context);
  } catch (HCatException e) {
    throw new RuntimeException("An error occurred while inserting data to " + databaseName + "." + tableName, e);
  }
}
 
Developer: klarna, Project: HiveRunner, Lines: 18, Source: TableDataInserter.java
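For reference, a hypothetical way to build the rows argument; the two-column layout and values are assumptions for illustration only:

import java.util.ArrayList;
import java.util.List;
import org.apache.hive.hcatalog.data.DefaultHCatRecord;
import org.apache.hive.hcatalog.data.HCatRecord;

List<HCatRecord> rows = new ArrayList<>();
DefaultHCatRecord row = new DefaultHCatRecord(2); // record with two fields
row.set(0, 42);      // first column, set by position
row.set(1, "hello"); // second column
rows.add(row);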

Example 4: getReaderContext

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
private ReaderContext getReaderContext(long desiredSplitCount) throws HCatException {
  ReadEntity entity =
      new ReadEntity.Builder()
          .withDatabase(spec.getDatabase())
          .withTable(spec.getTable())
          .withFilter(spec.getFilter())
          .build();
  // pass the 'desired' split count as a hint to the API
  Map<String, String> configProps = new HashMap<>(spec.getConfigProperties());
  configProps.put(
      HCatConstants.HCAT_DESIRED_PARTITION_NUM_SPLITS, String.valueOf(desiredSplitCount));
  return DataTransferFactory.getHCatReader(entity, configProps).prepareRead();
}
 
Developer: apache, Project: beam, Lines: 14, Source: HCatalogIO.java
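A sketch of consuming the returned ReaderContext; since the desired split count is only a hint, iterate over the actual numSplits():

import java.util.Iterator;
import org.apache.hive.hcatalog.data.HCatRecord;
import org.apache.hive.hcatalog.data.transfer.DataTransferFactory;
import org.apache.hive.hcatalog.data.transfer.ReaderContext;

ReaderContext context = getReaderContext(4); // hint: roughly 4 splits
for (int split = 0; split < context.numSplits(); split++) {
  Iterator<HCatRecord> records = DataTransferFactory.getHCatReader(context, split).read();
  while (records.hasNext()) {
    HCatRecord record = records.next();
    // process record ...
  }
}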

Example 5: start

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
@Override
public boolean start() throws HCatException {
  HCatReader reader =
      DataTransferFactory.getHCatReader(source.spec.getContext(), source.spec.getSplitId());
  hcatIterator = reader.read();
  return advance();
}
 
Developer: apache, Project: beam, Lines: 8, Source: HCatalogIO.java
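This start() implements Beam's BoundedSource.BoundedReader contract: it opens an HCatReader for this reader's split and immediately delegates to advance() (not shown here) to position the iterator on the first record.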

Example 6: initiateWrite

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
@Setup
public void initiateWrite() throws HCatException {
  WriteEntity entity =
      new WriteEntity.Builder()
          .withDatabase(spec.getDatabase())
          .withTable(spec.getTable())
          .withPartition(spec.getPartition())
          .build();
  masterWriter = DataTransferFactory.getHCatWriter(entity, spec.getConfigProperties());
  writerContext = masterWriter.prepareWrite();
  slaveWriter = DataTransferFactory.getHCatWriter(writerContext);
}
 
Developer: apache, Project: beam, Lines: 13, Source: HCatalogIO.java
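This @Setup method wires up the two-writer pattern that Example 1's flush() relies on: the master writer produces the WriterContext that is later committed or aborted, while the slave writer built from that context performs the actual record writes.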

Example 7: processElement

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
@ProcessElement
public void processElement(ProcessContext ctx) throws HCatException {
  hCatRecordsBatch.add(ctx.element());
  if (hCatRecordsBatch.size() >= spec.getBatchSize()) {
    flush();
  }
}
 
Developer: apache, Project: beam, Lines: 8, Source: HCatalogIO.java
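Together with flush() (Example 1) and finishBundle() (Example 11), this gives size-triggered batching: records accumulate until the configured batch size is reached, and any final partial batch is flushed when the bundle ends.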

Example 8: validateHcatFieldFollowsPigRules

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
private static void validateHcatFieldFollowsPigRules(HCatFieldSchema hcatField)
    throws PigException {
  try {
    Type hType = hcatField.getType();
    switch (hType) {
    case BOOLEAN:
      if (!pigHasBooleanSupport) {
        throw new PigException("Incompatible type found in HCat table schema: "
            + hcatField, PigHCatUtil.PIG_EXCEPTION_CODE);
      }
      break;
    case ARRAY:
      validateHCatSchemaFollowsPigRules(hcatField.getArrayElementSchema());
      break;
    case STRUCT:
      validateHCatSchemaFollowsPigRules(hcatField.getStructSubSchema());
      break;
    case MAP:
      // map keys must be strings; non-string keys are converted to String
      if (hcatField.getMapKeyType() != Type.STRING) {
        LOG.info("Converting non-String key of map " + hcatField.getName() + " from "
          + hcatField.getMapKeyType() + " to String.");
      }
      validateHCatSchemaFollowsPigRules(hcatField.getMapValueSchema());
      break;
    }
  } catch (HCatException e) {
    throw new PigException("Incompatible type found in hcat table schema: " + hcatField,
        PigHCatUtil.PIG_EXCEPTION_CODE, e);
  }
}
 
Developer: cloudera, Project: RecordServiceClient, Lines: 32, Source: PigHCatUtil.java

Example 9: get

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
private Object get(String name) {
  checkColumn(name);
  try {
    return row.get(name, schema);
  } catch (HCatException e) {
    throw new RuntimeException("Error getting value for " + name, e);
  }
}
 
Developer: klarna, Project: HiveRunner, Lines: 9, Source: TableDataBuilder.java

Example 10: column

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
private static HCatFieldSchema column(String name) {
  try {
    return new HCatFieldSchema(name, STRING, null);
  } catch (HCatException e) {
    throw new RuntimeException(e);
  }
}
 
Developer: klarna, Project: HiveRunner, Lines: 8, Source: TableDataBuilderTest.java
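Fields created this way can be assembled into a complete schema. A minimal sketch, with column names assumed for illustration:

import java.util.ArrayList;
import java.util.List;
import org.apache.hive.hcatalog.data.schema.HCatFieldSchema;
import org.apache.hive.hcatalog.data.schema.HCatSchema;

List<HCatFieldSchema> fields = new ArrayList<>();
fields.add(column("id"));
fields.add(column("name"));
HCatSchema schema = new HCatSchema(fields); // positional schema over both columns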

Example 11: finishBundle

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
@FinishBundle
public void finishBundle() throws HCatException {
  flush();
}
 
Developer: apache, Project: beam, Lines: 5, Source: HCatalogIO.java

Example 12: getReaderContext

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
/** Returns a ReaderContext instance for the passed datastore config params. */
static ReaderContext getReaderContext(Map<String, String> config) throws HCatException {
  return DataTransferFactory.getHCatReader(READ_ENTITY, config).prepareRead();
}
 
Developer: apache, Project: beam, Lines: 5, Source: HCatalogIOTestUtils.java

Example 13: getWriterContext

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
/** Returns a WriterContext instance for the passed datastore config params. */
private static WriterContext getWriterContext(Map<String, String> config) throws HCatException {
  return DataTransferFactory.getHCatWriter(WRITE_ENTITY, config).prepareWrite();
}
 
Developer: apache, Project: beam, Lines: 5, Source: HCatalogIOTestUtils.java

Example 14: writeRecords

import org.apache.hive.hcatalog.common.HCatException; // import the required package/class
/** Writes records to the table using the passed WriterContext. */
private static void writeRecords(WriterContext context) throws HCatException {
  DataTransferFactory.getHCatWriter(context).write(getHCatRecords(TEST_RECORDS_COUNT).iterator());
}
 
Developer: apache, Project: beam, Lines: 5, Source: HCatalogIOTestUtils.java
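Taken together, Examples 12-14 sketch a test round-trip: prepare a WriterContext, write generated records through a context-based writer, and read them back through a ReaderContext. Note that writeRecords only writes; a commit on the master writer against the same WriterContext (as in Example 1) is still needed to finalize the data.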


Note: the org.apache.hive.hcatalog.common.HCatException class examples in this article were compiled by 纯净天空 from open-source code hosted on GitHub, MSDocs, and similar platforms. The snippets come from open-source projects contributed by their respective developers; copyright remains with the original authors, and use or redistribution should follow each project's license. Do not reproduce this article without permission.