

Java MetaException Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hive.metastore.api.MetaException. If you are wondering what the MetaException class is for, how it is used in practice, or are simply looking for working examples, the curated code samples below may help.


The MetaException class belongs to the org.apache.hadoop.hive.metastore.api package. The sections below present 15 code examples that use the MetaException class, sorted by popularity by default.
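Before the examples, here is a minimal sketch of where MetaException most commonly appears: it is a checked exception declared by Hive metastore client operations, including the HiveMetaStoreClient constructor. This sketch is illustrative only; the class name MetaExceptionSketch and the metastore URI are placeholders and are not taken from any of the projects below.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.MetaException;

public class MetaExceptionSketch {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Placeholder URI; point this at a real Hive metastore Thrift endpoint.
    conf.set("hive.metastore.uris", "thrift://localhost:9083");
    try {
      // The HiveMetaStoreClient constructor declares MetaException for
      // connection and configuration failures.
      HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
      client.close();
    } catch (MetaException e) {
      // Typical handling: wrap the checked exception, as several examples below do.
      throw new RuntimeException("Unable to connect to the Hive metastore", e);
    }
  }
}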

Example 1: PartitionedTablePathResolver

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
PartitionedTablePathResolver(IMetaStoreClient metastore, Table table)
    throws NoSuchObjectException, MetaException, TException {
  this.metastore = metastore;
  this.table = table;
  LOG.debug("Table '{}' is partitioned", Warehouse.getQualifiedName(table));
  tableBaseLocation = locationAsPath(table);
  List<Partition> onePartition = metastore.listPartitions(table.getDbName(), table.getTableName(), (short) 1);
  if (onePartition.isEmpty()) {
    LOG.warn("Table '{}' has no partitions, perhaps you can simply delete: {}.", Warehouse.getQualifiedName(table),
        tableBaseLocation);
    throw new ConfigurationException();
  }
  Path partitionLocation = locationAsPath(onePartition.get(0));
  // The depth difference between a partition path and the table base path equals the
  // number of partition keys; repeat "*" that many times to build a glob such as "*/*".
  int branches = partitionLocation.depth() - tableBaseLocation.depth();
  String globSuffix = StringUtils.repeat("*", "/", branches);
  globPath = new Path(tableBaseLocation, globSuffix);
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 18, Source file: PartitionedTablePathResolver.java

Example 2: HiveMetaStore

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
public HiveMetaStore(Configuration conf, HdfsSinkConnectorConfig connectorConfig) throws HiveMetaStoreException {
  HiveConf hiveConf = new HiveConf(conf, HiveConf.class);
  String hiveConfDir = connectorConfig.getString(HdfsSinkConnectorConfig.HIVE_CONF_DIR_CONFIG);
  String hiveMetaStoreURIs = connectorConfig.getString(HdfsSinkConnectorConfig.HIVE_METASTORE_URIS_CONFIG);
  if (hiveMetaStoreURIs.isEmpty()) {
    log.warn("hive.metastore.uris empty, an embedded Hive metastore will be "
             + "created in the directory the connector is started. "
             + "You need to start Hive in that specific directory to query the data.");
  }
  if (!hiveConfDir.equals("")) {
    String hiveSitePath = hiveConfDir + "/hive-site.xml";
    File hiveSite = new File(hiveSitePath);
    if (!hiveSite.exists()) {
      log.warn("hive-site.xml does not exist in provided Hive configuration directory {}.", hiveConf);
    }
    hiveConf.addResource(new Path(hiveSitePath));
  }
  hiveConf.set("hive.metastore.uris", hiveMetaStoreURIs);
  try {
    client = HCatUtil.getHiveMetastoreClient(hiveConf);
  } catch (IOException | MetaException e) {
    throw new HiveMetaStoreException(e);
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 25, Source file: HiveMetaStore.java

Example 3: getMetastorePaths

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Override
public Set<Path> getMetastorePaths(short batchSize, int expectedPathCount)
  throws NoSuchObjectException, MetaException, TException {
  Set<Path> metastorePaths = new HashSet<>(expectedPathCount);
  PartitionIterator partitionIterator = new PartitionIterator(metastore, table, batchSize);
  while (partitionIterator.hasNext()) {
    Partition partition = partitionIterator.next();
    Path location = PathUtils.normalise(locationAsPath(partition));
    if (!location.toString().toLowerCase().startsWith(tableBaseLocation.toString().toLowerCase())) {
      LOG.error("Check your configuration: '{}' does not appear to be part of '{}'.", location, tableBaseLocation);
      throw new ConfigurationException();
    }
    metastorePaths.add(location);
  }
  return metastorePaths;
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 17, Source file: PartitionedTablePathResolver.java

Example 4: injectMocks

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Before
public void injectMocks() throws NoSuchObjectException, MetaException, TException {
  when(table.getSd()).thenReturn(tableSd);
  when(table.getDbName()).thenReturn(DATABASE_NAME);
  when(table.getTableName()).thenReturn(TABLE_NAME);
  when(table.getPartitionKeys()).thenReturn(Arrays.asList(new FieldSchema("name", "string", "comments")));
  when(tableSd.getLocation()).thenReturn(PARTITION_TABLE_BASE);
  when(partition1.getSd()).thenReturn(partitionSd1);
  when(partitionSd1.getLocation()).thenReturn(PARTITION_LOCATION_1);
  when(partition2.getSd()).thenReturn(partitionSd2);
  when(partitionSd2.getLocation()).thenReturn(PARTITION_LOCATION_2);
  when(metastore.listPartitionNames(DATABASE_NAME, TABLE_NAME, (short) -1))
      .thenReturn(Arrays.asList(PARTITION_NAME_1, PARTITION_NAME_2));
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 15, Source file: PartitionedTablePathResolverTest.java

Example 5: newInstance

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Override
public CloseableMetaStoreClient newInstance(HiveConf conf, String name) {
  LOG.debug("Connecting to '{}' metastore at '{}'", name, conf.getVar(ConfVars.METASTOREURIS));
  try {
    return CloseableMetaStoreClientFactory
        .newInstance(RetryingMetaStoreClient.getProxy(conf, new HiveMetaHookLoader() {
          @Override
          public HiveMetaHook getHook(Table tbl) throws MetaException {
            return null;
          }
        }, HiveMetaStoreClient.class.getName()));
  } catch (MetaException | RuntimeException e) {
    String message = String.format("Unable to connect to '%s' metastore at '%s'", name,
        conf.getVar(ConfVars.METASTOREURIS));
    throw new MetaStoreClientException(message, e);
  }
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 18, Source file: ThriftMetaStoreClientFactory.java

Example 6: get_partitions_ps_with_auth

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Test
public void get_partitions_ps_with_auth() throws NoSuchObjectException, MetaException, TException {
  List<Partition> partitions = Lists.newArrayList();
  List<Partition> outbound = Lists.newArrayList();
  List<String> partVals = Lists.newArrayList();
  List<String> groupNames = new ArrayList<>();

  when(primaryMapping.transformInboundDatabaseName(DB_P)).thenReturn("inbound");
  when(primaryClient.get_partitions_ps_with_auth("inbound", "table", partVals, (short) 10, "user", groupNames))
      .thenReturn(partitions);
  when(primaryMapping.transformOutboundPartitions(partitions)).thenReturn(outbound);
  List<Partition> result = handler.get_partitions_ps_with_auth(DB_P, "table", partVals, (short) 10, "user",
      groupNames);
  assertThat(result, is(outbound));
  verify(primaryMapping, never()).checkWritePermissions(DB_P);
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 17, Source file: FederatedHMSHandlerTest.java

Example 7: NonCloseableHiveClientWithCaching

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
private NonCloseableHiveClientWithCaching(final HiveConf hiveConf,
    final Map<String, String> hiveConfigOverride) throws MetaException {
  super(hiveConf, hiveConfigOverride);

  databases = CacheBuilder //
      .newBuilder() //
      .expireAfterAccess(1, TimeUnit.MINUTES) //
      .build(new DatabaseLoader());

  tableNameLoader = CacheBuilder //
      .newBuilder() //
      .expireAfterAccess(1, TimeUnit.MINUTES) //
      .build(new TableNameLoader());

  tableLoaders = CacheBuilder //
      .newBuilder() //
      .expireAfterAccess(4, TimeUnit.HOURS) //
      .maximumSize(20) //
      .build(new TableLoaderLoader());
}
 
Developer: skhalifa, Project: QDrill, Lines: 21, Source file: DrillHiveMetaStoreClient.java

Example 8: HiveSchemaFactory

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
public HiveSchemaFactory(HiveStoragePlugin plugin, String name, Map<String, String> hiveConfigOverride) throws ExecutionSetupException {
  this.schemaName = name;
  this.plugin = plugin;

  this.hiveConfigOverride = hiveConfigOverride;
  hiveConf = new HiveConf();
  if (hiveConfigOverride != null) {
    for (Map.Entry<String, String> entry : hiveConfigOverride.entrySet()) {
      final String property = entry.getKey();
      final String value = entry.getValue();
      hiveConf.set(property, value);
      logger.trace("HiveConfig Override {}={}", property, value);
    }
  }

  isHS2DoAsSet = hiveConf.getBoolVar(ConfVars.HIVE_SERVER2_ENABLE_DOAS);
  isDrillImpersonationEnabled = plugin.getContext().getConfig().getBoolean(ExecConstants.IMPERSONATION_ENABLED);

  try {
    processUserMetastoreClient =
        DrillHiveMetaStoreClient.createNonCloseableClientWithCaching(hiveConf, hiveConfigOverride);
  } catch (MetaException e) {
    throw new ExecutionSetupException("Failure setting up Hive metastore client.", e);
  }
}
 
Developer: skhalifa, Project: QDrill, Lines: 26, Source file: HiveSchemaFactory.java

Example 9: commitDropTable

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
public void commitDropTable(Table table, boolean deleteData) throws MetaException {
  boolean isExternal = isExternalTable(table);
  String tableName = getMonarchTableName(table);
  try {
    Map<String, String> parameters = table.getParameters();
    String tableType = parameters.getOrDefault(MonarchUtils.MONARCH_TABLE_TYPE, MonarchUtils.DEFAULT_TABLE_TYPE);
    if (tableType.equalsIgnoreCase(MonarchUtils.DEFAULT_TABLE_TYPE)) {
      MonarchUtils.destroyFTable(tableName, table.getParameters(), isExternal, deleteData);
    } else {
      MonarchUtils.destroyTable(tableName, table.getParameters(), isExternal, deleteData);
    }
  } catch (Exception se) {
    throw new MetaException(se.getMessage());
  }
}
 
Developer: ampool, Project: monarch, Lines: 17, Source file: MonarchStorageHandler.java

Example 10: getProcessor

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Override
public TProcessor getProcessor(TTransport transport) {
  try {
    CloseableIHMSHandler baseHandler = federatedHMSHandlerFactory.create();
    IHMSHandler handler = newRetryingHMSHandler(ExceptionWrappingHMSHandler.newProxyInstance(baseHandler), hiveConf,
        false);
    transportMonitor.monitor(transport, baseHandler);
    return new TSetIpAddressProcessor<>(handler);
  } catch (MetaException | ReflectiveOperationException | RuntimeException e) {
    throw new RuntimeException("Error creating TProcessor", e);
  }
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 13, Source file: TSetIpAddressProcessorFactory.java

Example 11: get_partitions_with_auth

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Test
public void get_partitions_with_auth() throws NoSuchObjectException, MetaException, TException {
  List<Partition> partitions = Lists.newArrayList();
  List<Partition> outbound = Lists.newArrayList();
  List<String> groupNames = new ArrayList<>();
  when(primaryMapping.transformInboundDatabaseName(DB_P)).thenReturn("inbound");
  when(primaryClient.get_partitions_with_auth("inbound", "table", (short) 10, "user", groupNames))
      .thenReturn(partitions);
  when(primaryMapping.transformOutboundPartitions(partitions)).thenReturn(outbound);
  List<Partition> result = handler.get_partitions_with_auth(DB_P, "table", (short) 10, "user", groupNames);
  assertThat(result, is(outbound));
  verify(primaryMapping, never()).checkWritePermissions(DB_P);
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 14, Source file: FederatedHMSHandlerTest.java

Example 12: get_databasNotAllowedException

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Test
public void get_databasNotAllowedException() throws Exception {
  expectedException.expect(MetaException.class);
  IHMSHandler handler = ExceptionWrappingHMSHandler.newProxyInstance(baseHandler);
  when(baseHandler.get_database("bdp")).thenThrow(new NotAllowedException("waggle waggle!"));
  handler.get_database("bdp");
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 8, Source file: ExceptionWrappingHMSHandlerTest.java

Example 13: drop_database

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Test
public void drop_database() throws NoSuchObjectException, InvalidOperationException, MetaException, TException {
  when(primaryMapping.transformInboundDatabaseName(DB_P)).thenReturn("inbound");
  handler.drop_database(DB_P, false, false);
  verify(primaryMapping).checkWritePermissions(DB_P);
  verify(primaryClient).drop_database("inbound", false, false);
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 8, Source file: FederatedHMSHandlerTest.java

Example 14: get_databases

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Test
public void get_databases() throws MetaException, TException {
  PanopticOperationHandler panopticHandler = Mockito.mock(PanopticOperationHandler.class);
  when(databaseMappingService.getPanopticOperationHandler()).thenReturn(panopticHandler);
  String pattern = "*";
  when(panopticHandler.getAllDatabases(pattern)).thenReturn(Lists.newArrayList(DB_P, DB_S));
  List<String> result = handler.get_databases(pattern);
  assertThat(result.size(), is(2));
  assertThat(result, contains(DB_P, DB_S));
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 11, Source file: FederatedHMSHandlerTest.java

Example 15: get_all_databases

import org.apache.hadoop.hive.metastore.api.MetaException; // import the required package/class
@Test
public void get_all_databases() throws MetaException, TException {
  PanopticOperationHandler panopticHandler = Mockito.mock(PanopticOperationHandler.class);
  when(databaseMappingService.getPanopticOperationHandler()).thenReturn(panopticHandler);
  when(panopticHandler.getAllDatabases()).thenReturn(Lists.newArrayList(DB_P, DB_S));
  List<String> result = handler.get_all_databases();
  assertThat(result.size(), is(2));
  assertThat(result, contains(DB_P, DB_S));
}
 
Developer: HotelsDotCom, Project: waggle-dance, Lines: 10, Source file: FederatedHMSHandlerTest.java


Note: The org.apache.hadoop.hive.metastore.api.MetaException examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective developers; copyright of the source code remains with the original authors, and redistribution and use should follow the license of the corresponding project. Please do not republish without permission.