

Java SinkRecord.value Method Code Examples

This article collects typical usage examples of the Java method org.apache.kafka.connect.sink.SinkRecord.value. If you are wondering what SinkRecord.value does, how to call it, or what real-world uses look like, the curated examples below should help. You can also explore further usage examples of the containing class, org.apache.kafka.connect.sink.SinkRecord.


The following presents 10 code examples of the SinkRecord.value method, collected from open-source projects and sorted by popularity by default.
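Before the curated examples, here is a minimal, self-contained sketch of where SinkRecord.value() typically appears: inside the put() loop of a SinkTask. The ConsoleSinkTask class and its stdout destination are illustrative assumptions for this article, not code from any of the projects below.

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Hypothetical sink task: prints each delivered record's value to stdout.
public class ConsoleSinkTask extends SinkTask {

    @Override
    public void start(Map<String, String> props) {
        // no configuration needed for this sketch
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            Object value = record.value(); // may be null (tombstone record)
            System.out.printf("topic=%s partition=%d offset=%d value=%s%n",
                    record.topic(), record.kafkaPartition(), record.kafkaOffset(), value);
        }
    }

    @Override
    public void stop() {
        // nothing to release
    }

    @Override
    public String version() {
        return "0.0.1"; // placeholder version for this sketch
    }
}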

Example 1: project
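This helper from kafka-connect-hdfs reads the record's value and value schema and, when the compatibility mode is BACKWARD, FORWARD, or FULL, projects the value onto the connector's current schema and rebuilds the SinkRecord; otherwise it returns the record unchanged.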

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
public static SinkRecord project(SinkRecord record, Schema currentSchema, Compatibility compatibility) {
  switch (compatibility) {
    case BACKWARD:
    case FULL:
    case FORWARD:
      Schema sourceSchema = record.valueSchema();
      Object value = record.value();
      if (sourceSchema == currentSchema || sourceSchema.equals(currentSchema)) {
        return record;
      }
      Object projected = SchemaProjector.project(sourceSchema, value, currentSchema);
      return new SinkRecord(record.topic(), record.kafkaPartition(), record.keySchema(),
                            record.key(), currentSchema, projected, record.kafkaOffset());
    default:
      return record;
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 18, Source: SchemaUtils.java

Example 2: schemaless
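This Kafka unit test calls SinkRecord.value() on a schemaless record (a plain Map) after applying the ReplaceField transformation, verifying that the blacklisted field is dropped and the renamed fields appear under their new names.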

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
@Test
public void schemaless() {
    final ReplaceField<SinkRecord> xform = new ReplaceField.Value<>();

    final Map<String, String> props = new HashMap<>();
    props.put("blacklist", "dont");
    props.put("renames", "abc:xyz,foo:bar");

    xform.configure(props);

    final Map<String, Object> value = new HashMap<>();
    value.put("dont", "whatever");
    value.put("abc", 42);
    value.put("foo", true);
    value.put("etc", "etc");

    final SinkRecord record = new SinkRecord("test", 0, null, null, null, value, 0);
    final SinkRecord transformedRecord = xform.apply(record);

    final Map updatedValue = (Map) transformedRecord.value();
    assertEquals(3, updatedValue.size());
    assertEquals(42, updatedValue.get("xyz"));
    assertEquals(true, updatedValue.get("bar"));
    assertEquals("etc", updatedValue.get("etc"));
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 26, Source: ReplaceFieldTest.java

Example 3: convert
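This converter from kafka-connect-fluentd wraps the record's value in a Fluentd event record, tagging it with the topic name and attaching the record timestamp as either epoch seconds or a Fluentd EventTime.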

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
public FluentdEventRecord convert(SinkRecord sinkRecord) {
    logger.debug("SinkRecord: {}", sinkRecord);

    if (sinkRecord.value() == null) {
        // skip tombstone records (null value) to avoid a NullPointerException below
        return null;
    }

    FluentdEventRecord eventRecord = getRecordConverter(sinkRecord.valueSchema(), sinkRecord.value())
            .convert(sinkRecord.valueSchema(), sinkRecord.value());
    eventRecord.setTag(sinkRecord.topic());

    if (config.getFluentdClientTimestampInteger()) {
        eventRecord.setTimestamp(sinkRecord.timestamp() / 1000); // epoch millis -> seconds
    } else {
        eventRecord.setEventTime(EventTime.fromEpochMilli(sinkRecord.timestamp()));
    }

    return eventRecord;
}
 
Developer: fluent, Project: kafka-connect-fluentd, Lines: 19, Source: SinkRecordConverter.java

Example 4: convertRecord
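This method from jkes serializes the record's value to JSON (optionally projecting it through a pre-processed schema first) and parses the payload into a DeleteEvent, producing a versioned DeletableRecord keyed by index, type, and id.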

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
public static DeletableRecord convertRecord(SinkRecord record, boolean ignoreSchema, String versionType) {
  final Schema schema;
  final Object value;
  if (!ignoreSchema) {
    schema = preProcessSchema(record.valueSchema());
    value = preProcessValue(record.value(), record.valueSchema(), schema);
  } else {
    schema = record.valueSchema();
    value = record.value();
  }

  final String payload = new String(JSON_CONVERTER.fromConnectData(record.topic(), schema, value), StandardCharsets.UTF_8);

  if (StringUtils.isNotBlank(payload)) {
    DeleteEvent deleteEvent = GSON.fromJson(payload, DeleteEvent.class);
    return new DeletableRecord(new Key(deleteEvent.getIndex(), deleteEvent.getType(), deleteEvent.getId()), deleteEvent.getVersion(), versionType);
  } else {
    return null;
  }

}
 
Developer: chaokunyang, Project: jkes, Lines: 22, Source: DataConverter.java

Example 5: put
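This bulker from kafka-connect-swift renders the record's key and value as a single CSV line (key, then value, newline-terminated) and appends the encoded bytes to an internal buffer.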

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
public void put(SinkRecord record) {
  try {
    ByteArrayOutputStream resultStream = new ByteArrayOutputStream();
    Writer writer = new OutputStreamWriter(resultStream);

    Object key = record.key();
    if (key != null) {
      writer.write(key.toString());
      writer.write(',');
    }

    Object value = record.value();
    if (value != null) {
      writer.write(value.toString());
    }

    writer.write('\n');
    writer.close();

    this.buffer.put(resultStream.toByteArray());
  } catch (IOException exception) {
    //TODO: check exception
    throw new RuntimeException(exception);
  }
}
 
Developer: yuuzi41, Project: kafka-connect-swift, Lines: 26, Source: KeyValueCsvSinkRecordBulker.java

Example 6: encodePartition
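This hourly partitioner from kafka-connect-hdfs pulls a timestamp out of a configured field of the record's Struct value, accepting integer types or a numeric string, and encodes it as a time-bucketed partition path.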

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
@Override
public String encodePartition(SinkRecord sinkRecord) {
    Object value = sinkRecord.value();
    Schema valueSchema = sinkRecord.valueSchema();
    long timestamp;
    if (value instanceof Struct) {
        Struct struct = (Struct) value;
        Object partitionKey = struct.get(fieldName);
        Schema.Type type = valueSchema.field(fieldName).schema().type();
        switch (type) {
            case INT8:
            case INT16:
            case INT32:
            case INT64:
                timestamp = ((Number) partitionKey).longValue();
                break;
            case STRING:
                String timestampStr = (String) partitionKey;
                timestamp = Long.parseLong(timestampStr);
                break;
            default:
                log.error("Type {} is not supported as a partition key.", type.getName());
                throw new PartitionException("Error encoding partition.");
        }
    } else {
        log.error("Value is not Struct type.");
        throw new PartitionException("Error encoding partition.");
    }

    DateTime bucket = new DateTime(getPartition(partitionDurationMs, timestamp, formatter.getZone()));
    return bucket.toString(formatter);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 33, Source: FieldHourlyPartitioner.java

Example 7: encodePartition
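This field partitioner from kafka-connect-hdfs encodes a partition path of the form fieldName=value from a configured field of the record's Struct value, supporting integer, string, and boolean field types.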

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
@Override
public String encodePartition(SinkRecord sinkRecord) {
  Object value = sinkRecord.value();
  Schema valueSchema = sinkRecord.valueSchema();
  if (value instanceof Struct) {
    Struct struct = (Struct) value;
    Object partitionKey = struct.get(fieldName);
    Type type = valueSchema.field(fieldName).schema().type();
    switch (type) {
      case INT8:
      case INT16:
      case INT32:
      case INT64:
        Number record = (Number) partitionKey;
        return fieldName + "=" + record.toString();
      case STRING:
        return fieldName + "=" + (String) partitionKey;
      case BOOLEAN:
        boolean booleanRecord = (boolean) partitionKey;
        return fieldName + "=" + Boolean.toString(booleanRecord);
      default:
        log.error("Type {} is not supported as a partition key.", type.getName());
        throw new PartitionException("Error encoding partition.");
    }
  } else {
    log.error("Value is not Struct type.");
    throw new PartitionException("Error encoding partition.");
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 30, Source: FieldPartitioner.java

Example 8: put
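This sink task from kafka-connect-rabbitmq requires each record's value to be a raw byte[] and publishes it to a RabbitMQ exchange, wrapping I/O failures in a RetriableException so the framework can retry.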

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
@Override
public void put(Collection<SinkRecord> sinkRecords) {
  for (SinkRecord record : sinkRecords) {
    log.trace("current sinkRecord value: {}", record.value());
    if (!(record.value() instanceof byte[])) {
      throw new ConnectException("the value of the record has an invalid type (must be of type byte[])");
    }
    try {
      channel.basicPublish(this.config.exchange, this.config.routingKey, null, (byte[]) record.value());
    } catch (IOException e) {
      log.error("There was an error while publishing the outgoing message to RabbitMQ");
      throw new RetriableException(e);
    }
  }
}
 
Developer: jcustenborder, Project: kafka-connect-rabbitmq, Lines: 16, Source: RabbitMQSinkTask.java

Example 9: withSchema
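This Kafka unit test applies ReplaceField to a record whose value is a Struct with a schema, verifying that only the whitelisted fields survive and that they are accessible under their renamed keys.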

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
@Test
public void withSchema() {
    final ReplaceField<SinkRecord> xform = new ReplaceField.Value<>();

    final Map<String, String> props = new HashMap<>();
    props.put("whitelist", "abc,foo");
    props.put("renames", "abc:xyz,foo:bar");

    xform.configure(props);

    final Schema schema = SchemaBuilder.struct()
            .field("dont", Schema.STRING_SCHEMA)
            .field("abc", Schema.INT32_SCHEMA)
            .field("foo", Schema.BOOLEAN_SCHEMA)
            .field("etc", Schema.STRING_SCHEMA)
            .build();

    final Struct value = new Struct(schema);
    value.put("dont", "whatever");
    value.put("abc", 42);
    value.put("foo", true);
    value.put("etc", "etc");

    final SinkRecord record = new SinkRecord("test", 0, null, null, schema, value, 0);
    final SinkRecord transformedRecord = xform.apply(record);

    final Struct updatedValue = (Struct) transformedRecord.value();

    assertEquals(2, updatedValue.schema().fields().size());
    assertEquals(new Integer(42), updatedValue.getInt32("xyz"));
    assertEquals(true, updatedValue.getBoolean("bar"));
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 33, Source: ReplaceFieldTest.java

Example 10: batchWrite
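This writer from maxwell-sink treats each record's key and value as JSON strings, parses the value into a RowMap, filters and redirects it, assembles a SQL statement per matching record, and flushes the accumulated statements in one batch.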

import org.apache.kafka.connect.sink.SinkRecord; // import the package/class this method depends on
public void batchWrite(final Collection<SinkRecord> records) throws SQLException {
    if (records == null || records.isEmpty()) {
        return;
    }
    List<String> sqlList = new ArrayList<>();
    for (SinkRecord record : records) {
        String topic = record.topic();
        /** What does the key look like when the table has no primary key? **/
        String key = (String) record.key();
        String val = (String) record.value();
        log.info("===>>>topic:{},partition:{},offset:{},\n===>>>key:{},value:{}", topic, record.kafkaPartition(), record.kafkaOffset(), record.key(), record.value());

        RowMapPK rowMapPK = getRowMapPK(key);
        RowMap rowMap = JSON.parseObject(val, RowMap.class);

        /** Data filtering **/
        if (filter.match(rowMap)) {
            // write the old and new row data to the designated file
            ExportRowMap exportRowMap = new ExportRowMap(rowMapPK, rowMap);
            datalog.info(exportRowMap.toString());

            rowMap = dbRedirector.redirectDb(topic, rowMap);
            log.info("===>>>Assembler RowMap:{}", rowMap.toString());
            String sql = assembler.getSql(rowMapPK, rowMap);
            log.info("===>>>Assembler GET SQL:{}", sql);
            if (StringUtils.isNotEmpty(sql)) {
                sqlList.add(sql);
            }
        }
    }

    flush(sqlList);
}
 
Developer: songxin1990, Project: maxwell-sink, Lines: 34, Source: MySqlDbWriter.java


Note: The org.apache.kafka.connect.sink.SinkRecord.value method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and any distribution or use should follow the corresponding project's License. Do not reproduce without permission.