

Java SinkTaskContext Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.connect.sink.SinkTaskContext. If you are wondering what the SinkTaskContext class is for, how to use it, or what real-world code that uses it looks like, the curated examples below should help.


SinkTaskContext belongs to the org.apache.kafka.connect.sink package. Fifteen code examples of the class are shown below, sorted by popularity by default.
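
Before diving into the examples, here is a minimal sketch of where SinkTaskContext fits: the Connect framework passes it to a task via SinkTask.initialize(), after which it is available through the inherited context field. The task class and its behavior below are illustrative assumptions, not taken from the projects featured in the examples.

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Illustrative sketch only: a trivial sink task that touches the most
// commonly used SinkTaskContext methods (assignment, timeout).
public class ExampleSinkTask extends SinkTask {
  @Override
  public String version() {
    return "0.0.1"; // hypothetical version string
  }

  @Override
  public void start(Map<String, String> props) {
    // Parse configuration here; the context field is already set.
  }

  @Override
  public void open(Collection<TopicPartition> partitions) {
    // context.assignment() now reflects the partitions passed in here.
    for (TopicPartition tp : context.assignment()) {
      System.out.printf("assigned %s-%d%n", tp.topic(), tp.partition());
    }
  }

  @Override
  public void put(Collection<SinkRecord> records) {
    try {
      // Write records to the external system here.
    } catch (RuntimeException e) {
      // Ask the framework to wait 5 seconds before retrying this batch.
      context.timeout(5_000L);
      throw new RetriableException(e);
    }
  }

  @Override
  public void stop() {
  }
}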

Example 1: testPutWhenPartitioningOnMessageTimeWhenNoTimestampType

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test(expected = ConnectException.class)
public void testPutWhenPartitioningOnMessageTimeWhenNoTimestampType() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_MESSAGE_TIME_PARTITIONING_CONFIG, "true");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic, "value", "message text", TimestampType.NO_TIMESTAMP_TYPE, null)));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 23, Source: BigQuerySinkTaskTest.java
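
These tests lean on a spoofSinkRecord(...) helper that this page does not reproduce. A hypothetical reconstruction, based only on how the calls above use it (the real helper lives in BigQuerySinkTaskTest and may differ):

import org.apache.kafka.common.record.TimestampType;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;

// Hypothetical stand-in for the helper used by these tests; not the
// actual implementation from BigQuerySinkTaskTest.
static SinkRecord spoofSinkRecord(String topic, String field, String value,
                                  TimestampType timestampType, Long timestamp) {
  Schema valueSchema = SchemaBuilder.struct().field(field, Schema.STRING_SCHEMA).build();
  Struct valueStruct = new Struct(valueSchema).put(field, value);
  return new SinkRecord(topic, 0, null, null, valueSchema, valueStruct,
                        0, timestamp, timestampType);
}

The one-argument spoofSinkRecord(topic) seen in later examples would then be an overload that fills in default field, value, and timestamp arguments.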

Example 2: test

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void test() throws InterruptedException {
    Map<String, String> sinkProperties = new HashMap<>();
    FluentdSinkTask task = new FluentdSinkTask();
    task.initialize(PowerMock.createMock(SinkTaskContext.class));
    //sinkProperties.put(FluentdSinkConnectorConfig.FLUENTD_CLIENT_MAX_BUFFER_BYTES, "100000");
    task.start(sinkProperties);
    final String topic = "testtopic";
    final String value = "{\"message\":\"This is a test message\"}";
    SinkRecord sinkRecord = new SinkRecord(
            topic,
            1,
            Schema.STRING_SCHEMA,
            topic,
            null,
            value,
            0,
            System.currentTimeMillis(),
            TimestampType.NO_TIMESTAMP_TYPE
    );
    task.put(Collections.singleton(sinkRecord));
    TimeUnit.SECONDS.sleep(1);
    // `queue` is a fixture defined elsewhere in this test class, presumably
    // populated by the stub Fluentd server that receives the task's output.
    EventEntry eventEntry = queue.poll();
    Assert.assertNotNull(eventEntry);
    Assert.assertEquals(value, eventEntry.getRecord().asMapValue().toJson());
}
 
Developer: fluent, Project: kafka-connect-fluentd, Lines: 27, Source: FluentdSinkTaskTest.java

Example 3: S3SinkTask

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
S3SinkTask(S3SinkConnectorConfig connectorConfig, SinkTaskContext context, S3Storage storage,
           Partitioner<FieldSchema> partitioner, Format<S3SinkConnectorConfig, String> format,
           Time time) throws Exception {
  this.assignment = new HashSet<>();
  this.topicPartitionWriters = new HashMap<>();
  this.connectorConfig = connectorConfig;
  this.context = context;
  this.storage = storage;
  this.partitioner = partitioner;
  this.format = format;
  this.time = time;

  url = connectorConfig.getString(StorageCommonConfig.STORE_URL_CONFIG);
  writerProvider = this.format.getRecordWriterProvider();

  open(context.assignment());
  log.info("Started S3 connector task with assigned partitions {}", assignment);
}
 
Developer: confluentinc, Project: kafka-connect-storage-cloud, Lines: 19, Source: S3SinkTask.java
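
A note on the pattern above: the constructor seeds its per-partition writers from context.assignment(), the set of topic partitions the framework has handed this task. In isolation, and with hypothetical names (StringBuilder standing in for a real RecordWriter), that pattern looks roughly like this:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkTaskContext;

// Hypothetical sketch: one writer per assigned partition, the shape used
// by storage connectors such as the S3 and HDFS sinks.
class PartitionWriterRegistry {
  // StringBuilder stands in for a real RecordWriter implementation.
  private final Map<TopicPartition, StringBuilder> writers = new HashMap<>();

  void open(SinkTaskContext context) {
    for (TopicPartition tp : context.assignment()) {
      writers.computeIfAbsent(tp, ignored -> new StringBuilder());
    }
  }

  void close() {
    writers.clear();
  }
}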

Example 4: testSimplePut

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testSimplePut() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());
  verify(bigQuery, times(1)).insertAll(any(InsertAllRequest.class));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 24, Source: BigQuerySinkTaskTest.java

Example 5: testSimplePutWhenSchemaRetrieverIsNotNull

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testSimplePutWhenSchemaRetrieverIsNotNull() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  SchemaRetriever schemaRetriever = mock(SchemaRetriever.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, schemaRetriever);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());
  verify(bigQuery, times(1)).insertAll(any(InsertAllRequest.class));
  verify(schemaRetriever, times(1)).setLastSeenSchema(any(TableId.class), any(String.class), any(Schema.class));
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 27, Source: BigQuerySinkTaskTest.java

Example 6: TopicPartitionWriter

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
public TopicPartitionWriter(
    TopicPartition tp,
    Storage storage,
    RecordWriterProvider writerProvider,
    Partitioner partitioner,
    HdfsSinkConnectorConfig connectorConfig,
    SinkTaskContext context,
    AvroData avroData) {
  this(tp, storage, writerProvider, partitioner, connectorConfig, context, avroData, null, null, null, null, null);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 11, Source: TopicPartitionWriter.java

Example 7: TopicPartitionWriter

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
public TopicPartitionWriter(TopicPartition tp,
                            S3Storage storage,
                            RecordWriterProvider<S3SinkConnectorConfig> writerProvider,
                            Partitioner<FieldSchema> partitioner,
                            S3SinkConnectorConfig connectorConfig,
                            SinkTaskContext context) {
  this(tp, writerProvider, partitioner, connectorConfig, context, SYSTEM_TIME);
}
 
Developer: confluentinc, Project: kafka-connect-storage-cloud, Lines: 9, Source: TopicPartitionWriter.java

Example 8: testPutWhenPartitioningOnMessageTime

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testPutWhenPartitioningOnMessageTime() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_MESSAGE_TIME_PARTITIONING_CONFIG, "true");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic, "value", "message text", TimestampType.CREATE_TIME, 1509007584334L)));
  testTask.flush(Collections.emptyMap());
  ArgumentCaptor<InsertAllRequest> argument = ArgumentCaptor.forClass(InsertAllRequest.class);

  verify(bigQuery, times(1)).insertAll(argument.capture());
  assertEquals("test-topic$20171026", argument.getValue().getTable().getTable());
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 28, Source: BigQuerySinkTaskTest.java
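
The expected table name encodes BigQuery's partition decorator syntax: the record's CREATE_TIME of 1509007584334 ms falls on 2017-10-26 UTC, hence the $20171026 suffix. A quick standalone check of that date (this snippet is not part of the test class):

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class PartitionSuffixCheck {
  public static void main(String[] args) {
    // Derive the yyyyMMdd partition suffix from the record timestamp.
    String suffix = DateTimeFormatter.ofPattern("yyyyMMdd")
        .withZone(ZoneOffset.UTC)
        .format(Instant.ofEpochMilli(1509007584334L));
    System.out.println(suffix); // prints 20171026, matching "test-topic$20171026"
  }
}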

Example 9: testBufferClearOnFlushError

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testBufferClearOnFlushError() {
  final String dataset = "scratch";
  final String topic = "test_topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));

  BigQuery bigQuery = mock(BigQuery.class);
  when(bigQuery.insertAll(any(InsertAllRequest.class)))
      .thenThrow(new RuntimeException("This is a test"));

  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  try {
    testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
    testTask.flush(Collections.emptyMap());
    fail("An exception should have been thrown by now");
  } catch (BigQueryConnectException err) {
    // The failed flush should have cleared the buffer, so this second
    // flush must not trigger another insertAll call.
    testTask.flush(Collections.emptyMap());
    verify(bigQuery, times(1)).insertAll(any(InsertAllRequest.class));
  }
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 28, Source: BigQuerySinkTaskTest.java

Example 10: testEmptyFlush

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testEmptyFlush() {
  Map<String, String> properties = propertiesFactory.getProperties();
  BigQuery bigQuery = mock(BigQuery.class);

  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.flush(Collections.emptyMap());
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 13, Source: BigQuerySinkTaskTest.java

Example 11: testBigQuery5XXRetry

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testBigQuery5XXRetry() {
  final String topic = "test_topic";
  final String dataset = "scratch";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_RETRY_CONFIG, "3");
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_RETRY_WAIT_CONFIG, "2000");
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));

  BigQuery bigQuery = mock(BigQuery.class);

  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);
  when(bigQuery.insertAll(anyObject()))
      .thenThrow(new BigQueryException(500, "mock 500"))
      .thenThrow(new BigQueryException(502, "mock 502"))
      .thenThrow(new BigQueryException(503, "mock 503"))
      .thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);
  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());

  verify(bigQuery, times(4)).insertAll(anyObject()); // one initial attempt plus three retries
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 32, Source: BigQuerySinkTaskTest.java

Example 12: testBigQuery403Retry

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test
public void testBigQuery403Retry() {
  final String topic = "test_topic";
  final String dataset = "scratch";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_RETRY_CONFIG, "2");
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_RETRY_WAIT_CONFIG, "2000");
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));

  BigQuery bigQuery = mock(BigQuery.class);

  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);
  BigQueryError quotaExceededError = new BigQueryError("quotaExceeded", null, null);
  BigQueryError rateLimitExceededError = new BigQueryError("rateLimitExceeded", null, null);
  when(bigQuery.insertAll(anyObject()))
      .thenThrow(new BigQueryException(403, "mock quota exceeded", quotaExceededError))
      .thenThrow(new BigQueryException(403, "mock rate limit exceeded", rateLimitExceededError))
      .thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);
  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());

  verify(bigQuery, times(3)).insertAll(anyObject()); // one initial attempt plus two retries
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 33, Source: BigQuerySinkTaskTest.java

Example 13: testBigQueryRetryExceeded

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test(expected = BigQueryConnectException.class)
public void testBigQueryRetryExceeded() {
  final String topic = "test_topic";
  final String dataset = "scratch";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_RETRY_CONFIG, "1");
  properties.put(BigQuerySinkTaskConfig.BIGQUERY_RETRY_WAIT_CONFIG, "2000");
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));

  BigQuery bigQuery = mock(BigQuery.class);

  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);
  BigQueryError quotaExceededError = new BigQueryError("quotaExceeded", null, null);
  when(bigQuery.insertAll(anyObject()))
    .thenThrow(new BigQueryException(403, "mock quota exceeded", quotaExceededError));
  when(insertAllResponse.hasErrors()).thenReturn(false); // never reached: insertAll above always throws

  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);
  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 28, Source: BigQuerySinkTaskTest.java

Example 14: testInterruptedException

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
@Test(expected = ConnectException.class)
public void testInterruptedException() {
  final String dataset = "scratch";
  final String topic = "test_topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));

  BigQuery bigQuery  = mock(BigQuery.class);
  InsertAllResponse fakeResponse = mock(InsertAllResponse.class);
  when(fakeResponse.hasErrors()).thenReturn(false);
  when(fakeResponse.getInsertErrors()).thenReturn(Collections.emptyMap());
  when(bigQuery.insertAll(any(InsertAllRequest.class))).thenReturn(fakeResponse);

  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, null);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  Thread.currentThread().interrupt();
  testTask.flush(Collections.emptyMap());
}
 
Developer: wepay, Project: kafka-connect-bigquery, Lines: 28, Source: BigQuerySinkTaskTest.java

Example 15: createWriter

import org.apache.kafka.connect.sink.SinkTaskContext; // import the required package/class
private DataWriter createWriter(SinkTaskContext context, AvroData avroData) {
  return new DataWriter(connectorConfig, context, avroData);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 4, Source: AvroHiveUtilTest.java
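
One SinkTaskContext capability that none of the examples above exercise is offset rewinding. As a hedged sketch (the helper class and method are hypothetical), a task can ask the framework to reset the consumer's position before the next batch is delivered:

import java.util.Collections;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkTaskContext;

// Hypothetical helper: seek the consumer for one partition, e.g. after
// recovering the last committed offset from the target system.
final class OffsetRewind {
  static void rewind(SinkTaskContext context, String topic, int partition, long offset) {
    TopicPartition tp = new TopicPartition(topic, partition);
    // offset() requests that the framework reset the consumer position
    // before the next batch is delivered to put().
    context.offset(Collections.singletonMap(tp, offset));
  }
}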


Note: The org.apache.kafka.connect.sink.SinkTaskContext examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their original authors, who retain copyright over the source code; consult each project's License before distributing or using it. Do not reproduce without permission.