

Java SinkTask Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.connect.sink.SinkTask. If you are wondering what the SinkTask class is for, how it is used, or what real-world code that uses it looks like, the selected examples below should help.


The SinkTask class belongs to the org.apache.kafka.connect.sink package. Six code examples involving the class are shown below, ordered by popularity.
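Before the examples, here is a minimal sketch of what a custom SinkTask subclass can look like. The class name LoggingSinkTask and its version string are hypothetical and are only meant to illustrate the methods that the framework code in the examples below configures and drives.

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// A minimal, illustrative SinkTask implementation (hypothetical; not taken from the examples below).
public class LoggingSinkTask extends SinkTask {

    @Override
    public String version() {
        return "0.0.1"; // placeholder version string
    }

    @Override
    public void start(Map<String, String> props) {
        // Receives the per-task configuration assembled by the framework,
        // including the "topics" entry keyed by SinkTask.TOPICS_CONFIG (see Example 1).
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        // Deliver each polled record to the target system; here we simply print it.
        for (SinkRecord record : records) {
            System.out.printf("topic=%s partition=%d offset=%d value=%s%n",
                    record.topic(), record.kafkaPartition(), record.kafkaOffset(), record.value());
        }
    }

    @Override
    public void stop() {
        // Release any resources opened in start().
    }
}

The Connect worker instantiates such a task, passes it the configuration shown in Examples 1, 4 and 5, and wraps it in a WorkerSinkTask (Examples 2 and 3) that handles topic subscription, record delivery and offset commits.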

Example 1: connectorTaskConfigs

import org.apache.kafka.connect.sink.SinkTask; // import the SinkTask class
/**
 * Get a list of updated task properties for the tasks of this connector.
 *
 * @param connName the connector name.
 * @param maxTasks the maximum number of tasks.
 * @param sinkTopics a list of sink topics.
 * @return a list of updated tasks properties.
 */
public List<Map<String, String>> connectorTaskConfigs(String connName, int maxTasks, List<String> sinkTopics) {
    log.trace("Reconfiguring connector tasks for {}", connName);

    WorkerConnector workerConnector = connectors.get(connName);
    if (workerConnector == null)
        throw new ConnectException("Connector " + connName + " not found in this worker.");

    Connector connector = workerConnector.connector();
    List<Map<String, String>> result = new ArrayList<>();
    ClassLoader savedLoader = plugins.currentThreadLoader();
    try {
        savedLoader = plugins.compareAndSwapLoaders(connector);
        String taskClassName = connector.taskClass().getName();
        for (Map<String, String> taskProps : connector.taskConfigs(maxTasks)) {
            // Ensure we don't modify the connector's copy of the config
            Map<String, String> taskConfig = new HashMap<>(taskProps);
            taskConfig.put(TaskConfig.TASK_CLASS_CONFIG, taskClassName);
            if (sinkTopics != null) {
                taskConfig.put(SinkTask.TOPICS_CONFIG, Utils.join(sinkTopics, ","));
            }
            result.add(taskConfig);
        }
    } finally {
        Plugins.compareAndSwapLoaders(savedLoader);
    }

    return result;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 37 | Source: Worker.java

Example 2: WorkerSinkTask

import org.apache.kafka.connect.sink.SinkTask; // import the SinkTask class
public WorkerSinkTask(ConnectorTaskId id,
                      SinkTask task,
                      TaskStatus.Listener statusListener,
                      TargetState initialState,
                      WorkerConfig workerConfig,
                      Converter keyConverter,
                      Converter valueConverter,
                      TransformationChain<SinkRecord> transformationChain,
                      ClassLoader loader,
                      Time time) {
    super(id, statusListener, initialState, loader);

    this.workerConfig = workerConfig;
    this.task = task;
    this.keyConverter = keyConverter;
    this.valueConverter = valueConverter;
    this.transformationChain = transformationChain;
    this.time = time;
    this.messageBatch = new ArrayList<>();
    this.currentOffsets = new HashMap<>();
    this.pausedForRedelivery = false;
    this.rebalanceException = null;
    this.nextCommit = time.milliseconds() +
            workerConfig.getLong(WorkerConfig.OFFSET_COMMIT_INTERVAL_MS_CONFIG);
    this.committing = false;
    this.commitSeqno = 0;
    this.commitStarted = -1;
    this.commitFailures = 0;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 30 | Source: WorkerSinkTask.java

Example 3: initializeAndStart

import org.apache.kafka.connect.sink.SinkTask; // import the SinkTask class
/**
 * Initializes and starts the SinkTask.
 */
protected void initializeAndStart() {
    log.debug("Initializing task {} ", id);
    String topicsStr = taskConfig.get(SinkTask.TOPICS_CONFIG);
    if (topicsStr == null || topicsStr.isEmpty())
        throw new ConnectException("Sink tasks require a list of topics.");
    String[] topics = topicsStr.split(",");
    log.debug("Task {} subscribing to topics {}", id, topics);
    consumer.subscribe(Arrays.asList(topics), new HandleRebalance());
    task.initialize(context);
    task.start(taskConfig);
    log.info("Sink task {} finished initialization and start", this);
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 16 | Source: WorkerSinkTask.java

Example 4: connectorConfig

import org.apache.kafka.connect.sink.SinkTask; // import the SinkTask class
private static Map<String, String> connectorConfig(SourceSink sourceSink) {
    Map<String, String> props = new HashMap<>();
    props.put(ConnectorConfig.NAME_CONFIG, CONNECTOR_NAME);
    Class<? extends Connector> connectorClass = sourceSink == SourceSink.SINK ? BogusSinkConnector.class : BogusSourceConnector.class;
    props.put(ConnectorConfig.CONNECTOR_CLASS_CONFIG, connectorClass.getName());
    props.put(ConnectorConfig.TASKS_MAX_CONFIG, "1");
    if (sourceSink == SourceSink.SINK)
        props.put(SinkTask.TOPICS_CONFIG, TOPICS_LIST_STR);
    return props;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 11 | Source: StandaloneHerderTest.java

Example 5: taskConfig

import org.apache.kafka.connect.sink.SinkTask; // import the SinkTask class
private static Map<String, String> taskConfig(SourceSink sourceSink) {
    HashMap<String, String> generatedTaskProps = new HashMap<>();
    // Connectors can add any settings, so these are arbitrary
    generatedTaskProps.put("foo", "bar");
    Class<? extends Task> taskClass = sourceSink == SourceSink.SINK ? BogusSinkTask.class : BogusSourceTask.class;
    generatedTaskProps.put(TaskConfig.TASK_CLASS_CONFIG, taskClass.getName());
    if (sourceSink == SourceSink.SINK)
        generatedTaskProps.put(SinkTask.TOPICS_CONFIG, TOPICS_LIST_STR);
    return generatedTaskProps;
}
 
Developer: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 11 | Source: StandaloneHerderTest.java

Example 6: testTaskType

import org.apache.kafka.connect.sink.SinkTask; // import the SinkTask class
@Test
public void testTaskType() throws Exception {
  setUp();
  replayAll();
  task = new S3SinkTask();
  // Assert the check so the test actually verifies the task type
  // (assumes a static import of org.junit.Assert.assertTrue).
  assertTrue(SinkTask.class.isAssignableFrom(task.getClass()));
}
 
Developer: confluentinc | Project: kafka-connect-storage-cloud | Lines: 8 | Source: S3SinkTaskTest.java


Note: The org.apache.kafka.connect.sink.SinkTask examples in this article were compiled by 纯净天空 from open source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open source projects contributed by their authors; copyright remains with the original authors, and any use or redistribution must follow the corresponding project's license. Do not republish without permission.