

Java TypeInfoFactory.getStructTypeInfo Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getStructTypeInfo. If you are wondering what TypeInfoFactory.getStructTypeInfo does or how to use it in practice, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.


The following 15 code examples of TypeInfoFactory.getStructTypeInfo are shown below, ordered by popularity by default.
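Before diving into the project examples, here is a minimal sketch of the basic call pattern they all share: build two parallel lists, one of field names and one of TypeInfos, then pass both to getStructTypeInfo. It assumes the hive-serde library (org.apache.hadoop.hive:hive-serde) is on the classpath; the class name and field names are illustrative only.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo;
import org.apacheache.hadoop.hive.serde2.typeinfo.TypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;

public class StructTypeInfoDemo {
    public static void main(String[] args) {
        // Parallel lists: field names and their corresponding TypeInfos.
        List<String> names = Arrays.asList("id", "name");
        List<TypeInfo> types = Arrays.asList(
                (TypeInfo) TypeInfoFactory.intTypeInfo,
                (TypeInfo) TypeInfoFactory.stringTypeInfo);

        // getStructTypeInfo returns a (cached) TypeInfo; callers that need
        // struct-specific accessors cast it to StructTypeInfo.
        StructTypeInfo structType =
                (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(names, types);

        // Prints the Hive type string, e.g. struct<id:int,name:string>
        System.out.println(structType.getTypeName());
    }
}
```

Note that the factory caches instances, so repeated calls with equal name/type lists return the same object; this is why the examples below freely call it during every SerDe initialization.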

Example 1: newStructTypeInfo

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
static StructTypeInfo newStructTypeInfo(Fields fields) {
  List<String> names = new ArrayList<>();
  List<TypeInfo> typeInfos = new ArrayList<>();

  for (int i = 0; i < fields.size(); i++) {
    String name = fields.get(i).toString();
    if (ROW_ID_NAME.equals(name)) {
      if (!fields.getTypeClass(i).equals(RecordIdentifier.class)) {
        throw new IllegalArgumentException(ROW_ID_NAME + " column is not of type "
            + RecordIdentifier.class.getSimpleName() + ". Found type: " + fields.getTypeClass(i));
      }
      continue;
    }
    names.add(name.toLowerCase());
    Class<?> type = fields.getTypeClass(i);
    if (type == null) {
      throw new IllegalArgumentException("Missing type information for field: " + name);
    }

    TypeInfo typeInfo = getTypeInfoFromClass(type);
    typeInfos.add(typeInfo);
  }

  return (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(names, typeInfos);
}
 
Developer: HotelsDotCom | Project: corc | Lines: 26 | Source: SchemaFactory.java

Example 2: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
/**
 * An initialization function used to gather information about the table.
 * Typically, a SerDe implementation will be interested in the list of
 * column names and their types. That information will be used to help perform
 * actual serialization and deserialization of data.
 */
@Override
public void initialize(Configuration conf, Properties tbl)
        throws SerDeException {
    // Get a list of the table's column names.
    String colNamesStr = tbl.getProperty(serdeConstants.LIST_COLUMNS);
    colNames = Arrays.asList(colNamesStr.split(","));

    // Get a list of TypeInfos for the columns. This list lines up with
    // the list of column names.
    String colTypesStr = tbl.getProperty(serdeConstants.LIST_COLUMN_TYPES);
    List<TypeInfo> colTypes =
            TypeInfoUtils.getTypeInfosFromTypeString(colTypesStr);

    rowTypeInfo =
            (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(colNames, colTypes);
    rowOI =
            TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(rowTypeInfo);
}
 
Developer: scaleoutsoftware | Project: hServer | Lines: 25 | Source: JsonSerDe.java

Example 3: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
/**
 * An initialization function used to gather information about the table.
 * Typically, a SerDe implementation will be interested in the list of
 * column names and their types. That information will be used to help perform
 * actual serialization and deserialization of data.
 */
@Override
public void initialize(Configuration conf, Properties tbl)
		throws SerDeException {
	// Get a list of the table's column names.
	String colNamesStr = tbl.getProperty(serdeConstants.LIST_COLUMNS);
	colNames = Arrays.asList(colNamesStr.split(","));

	// Get a list of TypeInfos for the columns. This list lines up with
	// the list of column names.
	String colTypesStr = tbl.getProperty(serdeConstants.LIST_COLUMN_TYPES);
	List<TypeInfo> colTypes =
			TypeInfoUtils.getTypeInfosFromTypeString(colTypesStr);

	rowTypeInfo =
			(StructTypeInfo) TypeInfoFactory.getStructTypeInfo(colNames, colTypes);
	rowOI =
			TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(rowTypeInfo);
}
 
Developer: micmiu | Project: bigdata-tutorial | Lines: 25 | Source: JSONCDHSerDe.java

Example 4: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
@Override
public void initialize(final Configuration conf, final Properties tbl)
		throws SerDeException {
	log.debug("conf="+conf);
	log.debug("tblProperties="+tbl);
	final String facetType = tbl.getProperty(ConfigurationUtil.SOLR_FACET_MAPPING);
	final String columnString = tbl.getProperty(ConfigurationUtil.SOLR_COLUMN_MAPPING);
	if (StringUtils.isBlank(facetType)) {
		if (StringUtils.isBlank(columnString)) {
			throw new SerDeException("No facet mapping found, and no " + ConfigurationUtil.SOLR_COLUMN_MAPPING + " provided");
		}
		final String[] columnNamesArray = ConfigurationUtil.getAllColumns(columnString);
		colNames = Arrays.asList(columnNamesArray);
		log.debug(ConfigurationUtil.SOLR_COLUMN_MAPPING+" = " + colNames);
		row = new ArrayList<Object>(columnNamesArray.length);
	} else {
		row = new ArrayList<Object>(2);
		colNames = Arrays.asList(StringUtils.split(tbl.getProperty(Constants.LIST_COLUMNS),","));
	}
	
	colTypes = TypeInfoUtils.getTypeInfosFromTypeString(tbl.getProperty(Constants.LIST_COLUMN_TYPES));
	rowTypeInfo = (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(colNames, colTypes);
	rowOI = TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(rowTypeInfo);
	log.debug("colNames="+colNames+" rowIO="+rowOI);
}
 
Developer: vroyer | Project: hive-solr-search | Lines: 26 | Source: SolrSerDe.java

Example 5: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
/**
 * An initialization function used to gather information about the table.
 * Typically, a SerDe implementation will be interested in the list of
 * column names and their types. That information will be used to help
 * perform actual serialization and deserialization of data.
 */
@Override
public void initialize(final Configuration conf, final Properties tbl)
		throws SerDeException {
	// Get a list of the table's column names.
	final String colNamesStr = tbl.getProperty(serdeConstants.LIST_COLUMNS);
	// Jai...change column names to lower case.
	colNames = Arrays.asList(colNamesStr.toLowerCase().split(","));
	// Get a list of TypeInfos for the columns. This list lines up with
	// the list of column names.
	final String colTypesStr = tbl
			.getProperty(serdeConstants.LIST_COLUMN_TYPES);
	final List<TypeInfo> colTypes = TypeInfoUtils
			.getTypeInfosFromTypeString(colTypesStr);
	rowTypeInfo = (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(
			colNames, colTypes);
	rowOI = TypeInfoUtils
			.getStandardJavaObjectInspectorFromTypeInfo(rowTypeInfo);
}
 
Developer: jaibeermalik | Project: searchanalytics-bigdata | Lines: 25 | Source: JSONSerDe.java

Example 6: getTypeInfo

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
private TypeInfo getTypeInfo(String fieldType) {
  if (fieldType.equals(TEXT) || fieldType.equals(STRING) || fieldType.equals(STORED)) {
    return TypeInfoFactory.stringTypeInfo;
  } else if (fieldType.equals(LONG)) {
    return TypeInfoFactory.longTypeInfo;
  } else if (fieldType.equals(INT)) {
    return TypeInfoFactory.intTypeInfo;
  } else if (fieldType.equals(FLOAT)) {
    return TypeInfoFactory.floatTypeInfo;
  } else if (fieldType.equals(DOUBLE)) {
    return TypeInfoFactory.doubleTypeInfo;
  } else if (fieldType.equals(DATE)) {
    return TypeInfoFactory.dateTypeInfo;
  } else if (fieldType.equals(GEO_POINTVECTOR) || fieldType.equals(GEO_RECURSIVEPREFIX)
      || fieldType.equals(GEO_TERMPREFIX)) {
    List<TypeInfo> typeInfos = Arrays.asList((TypeInfo) TypeInfoFactory.floatTypeInfo,
        (TypeInfo) TypeInfoFactory.floatTypeInfo);
    return TypeInfoFactory.getStructTypeInfo(Arrays.asList(LATITUDE, LONGITUDE), typeInfos);
  }
  // Return string for anything that is not a built in type.
  return TypeInfoFactory.stringTypeInfo;
}
 
Developer: apache | Project: incubator-blur | Lines: 23 | Source: BlurObjectInspectorGenerator.java

Example 7: convertStruct

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
public static TypeInfo convertStruct(Schema schema) {
  final List<Field> fields = schema.fields();
  final List<String> names = new ArrayList<>(fields.size());
  final List<TypeInfo> types = new ArrayList<>(fields.size());
  for (Field field : fields) {
    names.add(field.name());
    types.add(convert(field.schema()));
  }
  return TypeInfoFactory.getStructTypeInfo(names, types);
}
 
Developer: jiangxiluning | Project: kafka-connect-hdfs | Lines: 11 | Source: HiveSchemaConverter.java

Example 8: getTypeSupportData

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
/**
   * Data provider for various type tests.
   *
   * @return the required arguments for the test
   * {@link MonarchPredicateHandlerTest#testIsMonarchTypeSupported(TypeInfo, boolean)}
   */
  @DataProvider
  public static Object[][] getTypeSupportData() {
    return new Object[][]{
      {TypeInfoFactory.intTypeInfo, true},
      {TypeInfoFactory.binaryTypeInfo, true},
      {TypeInfoFactory.longTypeInfo, true},
      {TypeInfoFactory.floatTypeInfo, true},
//      {TypeInfoFactory.unknownTypeInfo, false},
      {TypeInfoFactory.getDecimalTypeInfo(20, 10), true},
      {TypeInfoFactory.getCharTypeInfo(200), true},
      {TypeInfoFactory.getStructTypeInfo(Arrays.asList("c1", "c2"),
        Arrays.asList(TypeInfoFactory.floatTypeInfo, TypeInfoFactory.getUnionTypeInfo(
          Collections.singletonList(TypeInfoFactory.longTypeInfo)))), true},
      {TypeInfoFactory.getStructTypeInfo(Arrays.asList("c1", "c2"),
        Arrays.asList(TypeInfoFactory.dateTypeInfo, TypeInfoFactory.decimalTypeInfo)), true},
      {TypeInfoFactory.getMapTypeInfo(TypeInfoFactory.intTypeInfo,
        TypeInfoFactory.timestampTypeInfo), true},
      {TypeInfoFactory.getMapTypeInfo(TypeInfoFactory.doubleTypeInfo,
        TypeInfoFactory.getCharTypeInfo(100)), true},
      {TypeInfoFactory.getMapTypeInfo(TypeInfoFactory.doubleTypeInfo,
        TypeInfoFactory.getVarcharTypeInfo(100)), true},
      {TypeInfoFactory.getListTypeInfo(
        TypeInfoFactory.getListTypeInfo(TypeInfoFactory.shortTypeInfo)), true},
      {TypeInfoFactory.getStructTypeInfo(Arrays.asList("c1", "c2"),
        Arrays.asList(TypeInfoFactory.floatTypeInfo, TypeInfoFactory.getUnionTypeInfo(
          Arrays.asList(TypeInfoFactory.decimalTypeInfo,
            TypeInfoFactory.getListTypeInfo(TypeInfoFactory.shortTypeInfo))))), true},
      {TypeInfoFactory.getVarcharTypeInfo(200), true},
    };
  }
 
Developer: ampool | Project: monarch | Lines: 37 | Source: MonarchPredicateHandlerTest.java

Example 9: getObjectInspector

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
private ObjectInspector getObjectInspector() {
  final List<ColumnConverterDescriptor> columnConverters =
      converterDescriptor.getColumnConverters();
  List<String> names = new ArrayList<>();
  List<TypeInfo> typeInfos = new ArrayList<>();
  for (int i = 0; i < columnConverters.size(); i++) {
    final String columnName = columnConverters.get(i).getColumnName();
    names.add(i, columnName);
    final TypeInfo typeInfo = (TypeInfo) columnConverters.get(i).getTypeDescriptor();
    typeInfos.add(i, typeInfo);
  }
  TypeInfo rowTypeInfo = TypeInfoFactory.getStructTypeInfo(names, typeInfos);
  ObjectInspector inspector = OrcStruct.createObjectInspector(rowTypeInfo);
  return inspector;
}
 
Developer: ampool | Project: monarch | Lines: 16 | Source: OrcWriterWrapper.java

Example 10: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
public void initialize(String columnNameProperty, String columnTypeProperty) {
  List<String> columnNames = Arrays.asList(columnNameProperty.split(","));
  List<TypeInfo> columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
  StructTypeInfo rowTypeInfo =
      (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(columnNames, columnTypes);
  rowOI = TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(rowTypeInfo);
}
 
Developer: litao-buptsse | Project: flume-hive-batch-sink | Lines: 8 | Source: TextDeserializer.java

Example 11: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
@Override
public void initialize(Configuration conf, Properties tbl) throws SerDeException {
    String columnNameProperty = tbl.getProperty(IOConstants.COLUMNS);
    String columnTypeProperty = tbl.getProperty(IOConstants.COLUMNS_TYPES);

    if (Strings.isEmpty(columnNameProperty)) {
        columnNames = new ArrayList<String>();
    } else {
        columnNames = Arrays.asList(columnNameProperty.split(","));
    }
    if (Strings.isEmpty(columnTypeProperty)) {
        columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(StringUtils.repeat("string", ":", columnNames.size()));
    } else {
        columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
    }
    if (columnNames.size() != columnTypes.size()) {
        throw new IllegalArgumentException("IndexRHiveSerde initialization failed. Number of column " +
                "name and column type differs. columnNames = " + columnNames + ", columnTypes = " +
                columnTypes);
    }

    TypeInfo rowTypeInfo = TypeInfoFactory.getStructTypeInfo(columnNames, columnTypes);
    this.objInspector = new ArrayWritableObjectInspector((StructTypeInfo) rowTypeInfo);

    stats = new SerDeStats();
    serdeSize = 0;
}
 
Developer: shunfei | Project: indexr | Lines: 28 | Source: IndexRSerde.java

Example 12: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
@Override
public void initialize(Configuration conf, Properties tblProperties) throws SerDeException {
  colNames = Arrays.asList(tblProperties.getProperty(Constants.LIST_COLUMNS).split(","));
  colTypes = TypeInfoUtils.getTypeInfosFromTypeString(tblProperties.getProperty(Constants.LIST_COLUMN_TYPES));
  typeInfo = (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(colNames, colTypes);
  inspector = TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(typeInfo);
  row = new ArrayList<>();
  enableFieldMapping = Boolean.valueOf(tblProperties.getProperty(ENABLE_FIELD_MAPPING, "false"));
}
 
Developer: lucidworks | Project: hive-solr | Lines: 10 | Source: LWSerDe.java

Example 13: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
@Override
public void initialize(Configuration conf, Properties tblProperties) throws SerDeException {

  colNames = Arrays.asList(tblProperties.getProperty(Constants.LIST_COLUMNS).split(","));
  colTypes = TypeInfoUtils
      .getTypeInfosFromTypeString(tblProperties.getProperty(Constants.LIST_COLUMN_TYPES));
  typeInfo = (StructTypeInfo) TypeInfoFactory.getStructTypeInfo(colNames, colTypes);
  inspector = TypeInfoUtils.getStandardJavaObjectInspectorFromTypeInfo(typeInfo);
  row = new ArrayList<>();
}
 
Developer: lucidworks | Project: hive-solr | Lines: 11 | Source: LWSerDe.java

Example 14: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
/**
 * Initializes the RecordServiceSerde based on the table schema.
 */
@Override
public final void initialize(final Configuration conf, final Properties tbl)
    throws SerDeException {
  final List<TypeInfo> columnTypes;
  final String columnNameProperty = tbl.getProperty(IOConstants.COLUMNS);
  final String columnTypeProperty = tbl.getProperty(IOConstants.COLUMNS_TYPES);

  if (columnNameProperty.length() == 0) {
    columnNames_ = new ArrayList<String>();
  } else {
    columnNames_ = Arrays.asList(columnNameProperty.split(","));
  }

  if (columnTypeProperty.length() == 0) {
    columnTypes = new ArrayList<TypeInfo>();
  } else {
    columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
  }

  if (columnNames_.size() != columnTypes.size()) {
    throw new IllegalArgumentException("Initialization failed. Number of column " +
      "names and column types differs. columnNames = " + columnNames_ +
      ", columnTypes = " + columnTypes);
  }

  // Initialize the ObjectInspector based on the column type information in the table.
  final TypeInfo rowTypeInfo =
      TypeInfoFactory.getStructTypeInfo(columnNames_, columnTypes);
  objInspector_ = new RecordServiceObjectInspector((StructTypeInfo) rowTypeInfo);
}
 
Developer: cloudera | Project: RecordServiceClient | Lines: 34 | Source: RecordServiceSerDe.java

Example 15: initialize

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; // required import for this method
@Override
public final void initialize(final Configuration conf, final Properties tbl) throws SerDeException {

  final TypeInfo rowTypeInfo;
  final List<String> columnNames;
  final List<TypeInfo> columnTypes;
  // Get column names and sort order
  final String columnNameProperty = tbl.getProperty(IOConstants.COLUMNS);
  final String columnTypeProperty = tbl.getProperty(IOConstants.COLUMNS_TYPES);

  if (columnNameProperty.length() == 0) {
    columnNames = new ArrayList<String>();
  } else {
    columnNames = Arrays.asList(columnNameProperty.split(","));
  }
  if (columnTypeProperty.length() == 0) {
    columnTypes = new ArrayList<TypeInfo>();
  } else {
    columnTypes = TypeInfoUtils.getTypeInfosFromTypeString(columnTypeProperty);
  }
  if (columnNames.size() != columnTypes.size()) {
    throw new IllegalArgumentException("ParquetHiveSerde initialization failed. Number of column " +
      "name and column type differs. columnNames = " + columnNames + ", columnTypes = " +
      columnTypes);
  }
  // Create row related objects
  rowTypeInfo = TypeInfoFactory.getStructTypeInfo(columnNames, columnTypes);
  this.objInspector = new ArrayWritableObjectInspector((StructTypeInfo) rowTypeInfo);

  // Stats part
  stats = new SerDeStats();
  serializedSize = 0;
  deserializedSize = 0;
  status = LAST_OPERATION.UNKNOWN;
}
 
Developer: apache | Project: parquet-mr | Lines: 36 | Source: ParquetHiveSerDe.java


Note: the org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getStructTypeInfo examples in this article were collected from open-source projects hosted on GitHub and similar platforms; copyright of each code snippet remains with its original authors. Please consult the license of the corresponding project before redistributing or reusing any of the code.