

Java BigQuery Class Code Examples

This article collects typical usage examples of the Java class com.google.cloud.bigquery.BigQuery. If you have been wondering what the BigQuery class is for and how to use it in practice, the curated examples below should help.


The BigQuery class belongs to the com.google.cloud.bigquery package. Fifteen code examples of the class are shown below, ordered by popularity by default.

Example 1: testPutWhenPartitioningOnMessageTimeWhenNoTimestampType

import com.google.cloud.bigquery.BigQuery; // import the required package/class
@Test(expected = ConnectException.class)
public void testPutWhenPartitioningOnMessageTimeWhenNoTimestampType() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_MESSAGE_TIME_PARTITIONING_CONFIG, "true");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic, "value", "message text", TimestampType.NO_TIMESTAMP_TYPE, null)));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 23, Source: BigQuerySinkTaskTest.java
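A note on why this test expects a ConnectException: with message-time partitioning enabled, each record's timestamp is used to pick a daily partition, so a record whose timestamp type is NO_TIMESTAMP_TYPE cannot be routed. A minimal, hypothetical sketch of that decorator logic (the class and method names here are illustrative, not the connector's actual API):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class PartitionDecorator {
  // When partitioning on message time, the writer targets a daily partition by
  // appending a "$yyyyMMdd" decorator derived from the record's timestamp.
  // A record with no timestamp cannot be routed, hence the failure above.
  static String decoratedTable(String table, Long timestampMillis) {
    if (timestampMillis == null) {
      throw new IllegalArgumentException(
          "Record has no timestamp; cannot partition by message time");
    }
    String day = DateTimeFormatter.ofPattern("yyyyMMdd")
        .withZone(ZoneOffset.UTC)
        .format(Instant.ofEpochMilli(timestampMillis));
    return table + "$" + day;
  }

  public static void main(String[] args) {
    System.out.println(decoratedTable("scratch.test_topic", 0L)); // prints scratch.test_topic$19700101
  }
}
```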

Example 2: connect

import com.google.cloud.bigquery.BigQuery; // import the required package/class
/**
 * Returns a default {@link BigQuery} instance for the specified project with credentials provided
 * in the specified file, which can then be used for creating, updating, and inserting into tables
 * from specific datasets.
 *
 * @param projectName The name of the BigQuery project to work with
 * @param keyFilename The name of a file containing a JSON key that can be used to provide
 *                    credentials to BigQuery, or null if no authentication should be performed.
 * @return The resulting BigQuery object.
 */
public BigQuery connect(String projectName, String keyFilename) {
  if (keyFilename == null) {
    return connect(projectName);
  }

  logger.debug("Attempting to open file {} for service account json key", keyFilename);
  try (InputStream credentialsStream = new FileInputStream(keyFilename)) {
    logger.debug("Attempting to authenticate with BigQuery using provided json key");
    return new BigQueryOptions.DefaultBigqueryFactory().create(
        BigQueryOptions.newBuilder()
        .setProjectId(projectName)
        .setCredentials(GoogleCredentials.fromStream(credentialsStream))
        .build()
    );
  } catch (IOException err) {
    throw new BigQueryConnectException("Failed to access json key file", err);
  }
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 29, Source: BigQueryHelper.java

Example 3: testNonAutoCreateTablesFailure

import com.google.cloud.bigquery.BigQuery; // import the required package/class
@Test(expected = BigQueryConnectException.class)
public void testNonAutoCreateTablesFailure() {
  final String dataset = "scratch";
  final String existingTableTopic = "topic-with-existing-table";
  final String nonExistingTableTopic = "topic-without-existing-table";
  final TableId existingTable = TableId.of(dataset, "topic_with_existing_table");
  final TableId nonExistingTable = TableId.of(dataset, "topic_without_existing_table");

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConnectorConfig.TABLE_CREATE_CONFIG, "false");
  properties.put(BigQuerySinkConfig.SANITIZE_TOPICS_CONFIG, "true");
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));
  properties.put(
      BigQuerySinkConfig.TOPICS_CONFIG,
      String.format("%s, %s", existingTableTopic, nonExistingTableTopic)
  );

  BigQuery bigQuery = mock(BigQuery.class);
  Table fakeTable = mock(Table.class);
  when(bigQuery.getTable(existingTable)).thenReturn(fakeTable);
  when(bigQuery.getTable(nonExistingTable)).thenReturn(null);

  BigQuerySinkConnector testConnector = new BigQuerySinkConnector(bigQuery);
  testConnector.start(properties);
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 26, Source: BigQuerySinkConnectorTest.java
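The test above relies on SANITIZE_TOPICS_CONFIG: topic names such as topic-with-existing-table contain characters that are illegal in BigQuery table names, so they are sanitized to topic_with_existing_table. A rough, hypothetical sketch of that kind of sanitization (not the connector's actual implementation):

```java
public class TopicSanitizer {
  // BigQuery table names may contain only letters, digits, and underscores,
  // so every other character is replaced with an underscore.
  static String sanitize(String topic) {
    return topic.replaceAll("[^a-zA-Z0-9_]", "_");
  }

  public static void main(String[] args) {
    System.out.println(sanitize("topic-with-existing-table")); // prints topic_with_existing_table
  }
}
```

Note that this kind of mapping is lossy: two distinct topics such as a-b and a_b collapse to the same table name, which is worth keeping in mind when enabling sanitization.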

Example 4: testSimplePut

import com.google.cloud.bigquery.BigQuery; // import the required package/class
@Test
public void testSimplePut() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());
  verify(bigQuery, times(1)).insertAll(any(InsertAllRequest.class));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 24, Source: BigQuerySinkTaskTest.java
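Several of these tests set DATASETS_CONFIG to ".*=scratch", a "regex=dataset" mapping that routes every topic to the scratch dataset. A hypothetical sketch of how such a mapping could resolve a topic to its dataset (illustrative only, not the connector's actual resolution code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DatasetMapper {
  // Walks the "<regex>=<dataset>" entries in order and returns the dataset of
  // the first pattern that matches the full topic name.
  static String datasetFor(String topic, Map<String, String> patternToDataset) {
    for (Map.Entry<String, String> entry : patternToDataset.entrySet()) {
      if (topic.matches(entry.getKey())) {
        return entry.getValue();
      }
    }
    throw new IllegalArgumentException("No dataset mapping for topic " + topic);
  }

  public static void main(String[] args) {
    Map<String, String> mapping = new LinkedHashMap<>();
    mapping.put(".*", "scratch"); // the ".*=scratch" config used in the tests
    System.out.println(datasetFor("test-topic", mapping)); // prints scratch
  }
}
```

A LinkedHashMap is used so that when several patterns could match, the first configured entry wins deterministically.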

Example 5: testSimplePutWhenSchemaRetrieverIsNotNull

import com.google.cloud.bigquery.BigQuery; // import the required package/class
@Test
public void testSimplePutWhenSchemaRetrieverIsNotNull() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  SchemaRetriever schemaRetriever = mock(SchemaRetriever.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, schemaRetriever);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());
  verify(bigQuery, times(1)).insertAll(any(InsertAllRequest.class));
  verify(schemaRetriever, times(1)).setLastSeenSchema(any(TableId.class), any(String.class), any(Schema.class));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 27, Source: BigQuerySinkTaskTest.java

Example 6: testEmptyRecordPut

import com.google.cloud.bigquery.BigQuery; // import the required package/class
@Test
public void testEmptyRecordPut() {
  final String topic = "test_topic";
  final Schema simpleSchema = SchemaBuilder
      .struct()
      .field("aField", Schema.STRING_SCHEMA)
      .build();

  Map<String, String> properties = propertiesFactory.getProperties();
  BigQuery bigQuery = mock(BigQuery.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.start(properties);

  SinkRecord emptyRecord = spoofSinkRecord(topic, simpleSchema, null);

  testTask.put(Collections.singletonList(emptyRecord));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 19, Source: BigQuerySinkTaskTest.java

Example 7: explicit

import com.google.cloud.bigquery.BigQuery; // import the required package/class
public static void explicit() throws IOException {
  // Load credentials from JSON key file. If you can't set the GOOGLE_APPLICATION_CREDENTIALS
  // environment variable, you can explicitly load the credentials file to construct the
  // credentials.
  GoogleCredentials credentials;
  File credentialsPath = new File("service_account.json");  // TODO: update to your key path.
  try (FileInputStream serviceAccountStream = new FileInputStream(credentialsPath)) {
    credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);
  }

  // Instantiate a client.
  BigQuery bigquery =
      BigQueryOptions.newBuilder().setCredentials(credentials).build().getService();

  // Use the client.
  System.out.println("Datasets:");
  for (Dataset dataset : bigquery.listDatasets().iterateAll()) {
    System.out.printf("%s%n", dataset.getDatasetId().getDataset());
  }
}
 
Developer: GoogleCloudPlatform, Project: java-docs-samples, Lines: 21, Source: AuthSnippets.java

Example 8: main

import com.google.cloud.bigquery.BigQuery; // import the required package/class
public static void main(String... args) throws Exception {
  // Instantiate a client. If you don't specify credentials when constructing a client, the
  // client library will look for credentials in the environment, such as the
  // GOOGLE_APPLICATION_CREDENTIALS environment variable.
  BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

  // The name for the new dataset
  String datasetName = "my_new_dataset";

  // Prepares a new dataset
  Dataset dataset = null;
  DatasetInfo datasetInfo = DatasetInfo.newBuilder(datasetName).build();

  // Creates the dataset
  dataset = bigquery.create(datasetInfo);

  System.out.printf("Dataset %s created.%n", dataset.getDatasetId().getDataset());
}
 
Developer: GoogleCloudPlatform, Project: java-docs-samples, Lines: 19, Source: QuickstartSample.java

Example 9: loadSalesOrder

import com.google.cloud.bigquery.BigQuery; // import the required package/class
public static void loadSalesOrder(String jsonFileFullPath, BigQuery bigquery,
    BigQuerySnippets snippets, String datasetName, String tableName) {
  try {
    snippets.writeFileToTable(datasetName, tableName, getPath(jsonFileFullPath), FormatOptions.json());
    System.out.println("A json.gz file was loaded into BigQuery: " + jsonFileFullPath);
  } catch (IOException ioex) {
    System.out.println(ioex.getMessage());
  } catch (InterruptedException interEx) {
    // Restore the interrupt flag so callers can observe the interruption.
    Thread.currentThread().interrupt();
  } catch (TimeoutException toEx) {
    System.out.println("Timed out loading " + jsonFileFullPath);
  }
}
 
Developer: michael-hll, Project: BigQueryStudy, Lines: 12, Source: App.java

Example 10: getBigQuery

import com.google.cloud.bigquery.BigQuery; // import the required package/class
private BigQuery getBigQuery() {
  if (testBigQuery != null) {
    return testBigQuery;
  }
  String projectName = config.getString(config.PROJECT_CONFIG);
  String keyFilename = config.getString(config.KEYFILE_CONFIG);
  return new BigQueryHelper().connect(projectName, keyFilename);
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 9, Source: BigQuerySinkTask.java

Example 11: getBigQueryWriter

import com.google.cloud.bigquery.BigQuery; // import the required package/class
private BigQueryWriter getBigQueryWriter() {
  boolean updateSchemas = config.getBoolean(config.SCHEMA_UPDATE_CONFIG);
  int retry = config.getInt(config.BIGQUERY_RETRY_CONFIG);
  long retryWait = config.getLong(config.BIGQUERY_RETRY_WAIT_CONFIG);
  BigQuery bigQuery = getBigQuery();
  if (updateSchemas) {
    return new AdaptiveBigQueryWriter(bigQuery,
                                      getSchemaManager(bigQuery),
                                      retry,
                                      retryWait);
  } else {
    return new SimpleBigQueryWriter(bigQuery, retry, retryWait);
  }
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 15, Source: BigQuerySinkTask.java
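The retry and retryWait values read above imply a simple loop around each write: retry up to `retry` times on transient (500/503) failures, sleeping `retryWait` milliseconds between attempts, and rethrow once attempts are exhausted. A hypothetical sketch of such a loop (names are illustrative, not the writer classes' actual API):

```java
public class RetrySketch {
  interface Write {
    void attempt() throws Exception;
  }

  // Try the write once, then up to `retry` more times on failure, waiting
  // `retryWait` ms between attempts; rethrow the last error when exhausted.
  static void writeWithRetries(Write write, int retry, long retryWait) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        write.attempt();
        return;
      } catch (Exception e) {
        if (attempt >= retry) {
          throw e;
        }
        Thread.sleep(retryWait);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    final int[] calls = {0};
    // Simulate a write that fails twice with a transient error, then succeeds.
    writeWithRetries(() -> {
      calls[0]++;
      if (calls[0] < 3) {
        throw new RuntimeException("transient 503");
      }
    }, 5, 10);
    System.out.println("succeeded after " + calls[0] + " attempts"); // prints succeeded after 3 attempts
  }
}
```

A fixed wait is the simplest policy; an exponential backoff with jitter is the usual refinement when many tasks retry against the same backend.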

Example 12: getSchemaManager

import com.google.cloud.bigquery.BigQuery; // import the required package/class
private SchemaManager getSchemaManager(BigQuery bigQuery) {
  if (testSchemaManager != null) {
    return testSchemaManager;
  }
  SchemaRetriever schemaRetriever = config.getSchemaRetriever();
  SchemaConverter<com.google.cloud.bigquery.Schema> schemaConverter = config.getSchemaConverter();
  return new SchemaManager(schemaRetriever, schemaConverter, bigQuery);
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 9, Source: BigQuerySinkConnector.java

Example 13: ensureExistingTables

import com.google.cloud.bigquery.BigQuery; // import the required package/class
private void ensureExistingTables(
    BigQuery bigQuery,
    SchemaManager schemaManager,
    Map<String, TableId> topicsToTableIds) {
  for (Map.Entry<String, TableId> topicToTableId : topicsToTableIds.entrySet()) {
    String topic = topicToTableId.getKey();
    TableId tableId = topicToTableId.getValue();
    if (bigQuery.getTable(tableId) == null) {
      logger.info("Table {} does not exist; attempting to create", tableId);
      schemaManager.createTable(tableId, topic);
    }
  }
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 14, Source: BigQuerySinkConnector.java

Example 14: SchemaManager

import com.google.cloud.bigquery.BigQuery; // import the required package/class
/**
 * @param schemaRetriever Used to determine the Kafka Connect Schema that should be used for a
 *                        given table.
 * @param schemaConverter Used to convert Kafka Connect Schemas into BigQuery format.
 * @param bigQuery Used to communicate create/update requests to BigQuery.
 */
public SchemaManager(
    SchemaRetriever schemaRetriever,
    SchemaConverter<com.google.cloud.bigquery.Schema> schemaConverter,
    BigQuery bigQuery) {
  this.schemaRetriever = schemaRetriever;
  this.schemaConverter = schemaConverter;
  this.bigQuery = bigQuery;
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 15, Source: SchemaManager.java

Example 15: AdaptiveBigQueryWriter

import com.google.cloud.bigquery.BigQuery; // import the required package/class
/**
 * @param bigQuery Used to send write requests to BigQuery.
 * @param schemaManager Used to update BigQuery tables.
 * @param retry How many retries to make in the event of a 500/503 error.
 * @param retryWait How long to wait in between retries.
 */
public AdaptiveBigQueryWriter(BigQuery bigQuery,
                              SchemaManager schemaManager,
                              int retry,
                              long retryWait) {
  super(retry, retryWait);
  this.bigQuery = bigQuery;
  this.schemaManager = schemaManager;
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 15, Source: AdaptiveBigQueryWriter.java


Note: The com.google.cloud.bigquery.BigQuery examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors. Consult each project's license before redistributing or using the code, and do not reproduce this article without permission.