

Java Partitioner.generatePartitionedPath Method Code Examples

This article collects typical usage examples of the Java method io.confluent.connect.hdfs.partitioner.Partitioner.generatePartitionedPath. If you are wondering what Partitioner.generatePartitionedPath does, how to call it, or where to find usage examples, the curated code samples below should help. You can also explore further usage examples of the enclosing class, io.confluent.connect.hdfs.partitioner.Partitioner.


The following presents 5 code examples of the Partitioner.generatePartitionedPath method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
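Before turning to the examples, here is a minimal, self-contained sketch of the typical call sequence. It assumes the DefaultPartitioner shipped with kafka-connect-hdfs, whose generatePartitionedPath is understood (from the upstream sources) to join the topic and the encoded partition with a slash; the sketch is illustrative and is not taken from any of the examples below.

import java.util.HashMap;

import io.confluent.connect.hdfs.partitioner.DefaultPartitioner;
import io.confluent.connect.hdfs.partitioner.Partitioner;

public class GeneratePartitionedPathSketch {
  public static void main(String[] args) {
    // DefaultPartitioner needs no configuration properties.
    Partitioner partitioner = new DefaultPartitioner();
    partitioner.configure(new HashMap<String, Object>());

    // In a running sink task the encoded partition normally comes from
    // partitioner.encodePartition(sinkRecord); it is hard-coded here.
    String encodedPartition = "partition=0";

    // For DefaultPartitioner this is expected to print "my-topic/partition=0".
    String directory = partitioner.generatePartitionedPath("my-topic", encodedPartition);
    System.out.println(directory);
  }
}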

Example 1: testWriteRecordNonZeroInitialOffset

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the class that declares the method
@Test
public void testWriteRecordNonZeroInitialOffset() throws Exception {
  DataWriter hdfsWriter = new DataWriter(connectorConfig, context, avroData);
  Partitioner partitioner = hdfsWriter.getPartitioner();
  hdfsWriter.recover(TOPIC_PARTITION);

  String key = "key";
  Schema schema = createSchema();
  Struct record = createRecord(schema);

  Collection<SinkRecord> sinkRecords = new ArrayList<>();
  for (long offset = 3; offset < 10; offset++) {
    SinkRecord sinkRecord =
        new SinkRecord(TOPIC, PARTITION, Schema.STRING_SCHEMA, key, schema, record, offset);
    sinkRecords.add(sinkRecord);
  }

  hdfsWriter.write(sinkRecords);
  hdfsWriter.close(assignment);
  hdfsWriter.stop();

  String directory = partitioner.generatePartitionedPath(TOPIC, "partition=" + String.valueOf(PARTITION));

  // Last file (offset 9) doesn't satisfy size requirement and gets discarded on close
  long[] validOffsets = {2, 5, 8};
  for (int i = 1; i < validOffsets.length; i++) {
    long startOffset = validOffsets[i - 1] + 1;
    long endOffset = validOffsets[i];
    Path path = new Path(FileUtils.committedFileName(url, topicsDir, directory,
                                                     TOPIC_PARTITION, startOffset, endOffset,
                                                     extension, ZERO_PAD_FMT));
    Collection<Object> records = schemaFileReader.readData(conf, path);
    long size = endOffset - startOffset + 1;
    assertEquals(size, records.size());
    for (Object avroRecord : records) {
      assertEquals(avroData.fromConnectData(schema, record), avroRecord);
    }
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 40, Source: DataWriterAvroTest.java
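For context, FileUtils.committedFileName in the test above assembles the full HDFS path of a committed data file. Based on the upstream kafka-connect-hdfs naming scheme (an assumption, since the example does not show it), the resulting paths follow this pattern:

// <url>/<topicsDir>/<directory>/<topic>+<kafkaPartition>+<startOffset>+<endOffset><extension>
// e.g. hdfs://localhost:9000/topics/topic/partition=12/topic+12+0000000003+0000000005.avro
// Offsets are zero-padded according to ZERO_PAD_FMT (typically "%010d").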

Example 2: testWriteRecordFieldPartitioner

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the class that declares the method
@Test
public void testWriteRecordFieldPartitioner() throws Exception {
  Map<String, Object> config = createConfig();
  Partitioner partitioner = new FieldPartitioner();
  partitioner.configure(config);

  String partitionField = (String) config.get(HdfsSinkConnectorConfig.PARTITION_FIELD_NAME_CONFIG);

  TopicPartitionWriter topicPartitionWriter = new TopicPartitionWriter(
      TOPIC_PARTITION, storage, writerProvider, partitioner, connectorConfig, context, avroData);

  String key = "key";
  Schema schema = createSchema();
  Struct[] records = createRecords(schema);

  Collection<SinkRecord> sinkRecords = createSinkRecords(records, key, schema);

  for (SinkRecord record : sinkRecords) {
    topicPartitionWriter.buffer(record);
  }

  topicPartitionWriter.recover();
  topicPartitionWriter.write();
  topicPartitionWriter.close();


  String directory1 = partitioner.generatePartitionedPath(TOPIC, partitionField + "=" + String.valueOf(16));
  String directory2 = partitioner.generatePartitionedPath(TOPIC, partitionField + "=" + String.valueOf(17));
  String directory3 = partitioner.generatePartitionedPath(TOPIC, partitionField + "=" + String.valueOf(18));

  Set<Path> expectedFiles = new HashSet<>();
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory1, TOPIC_PARTITION, 0, 2, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory2, TOPIC_PARTITION, 3, 5, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory3, TOPIC_PARTITION, 6, 8, extension, ZERO_PAD_FMT)));

  verify(expectedFiles, records, schema);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 38, Source: TopicPartitionWriterTest.java
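Assuming FieldPartitioner.generatePartitionedPath joins the topic and the encoded partition with a slash, as in the upstream sources, the three directory strings above come out as sketched below. The field name depends on the PARTITION_FIELD_NAME_CONFIG value returned by createConfig(), and TOPIC is assumed to be "topic", so these values are illustrative only:

// directory1 -> "topic/int=16"
// directory2 -> "topic/int=17"
// directory3 -> "topic/int=18"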

Example 3: testWriteRecordTimeBasedPartition

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the class that declares the method
@Test
public void testWriteRecordTimeBasedPartition() throws Exception {
  Map<String, Object> config = createConfig();
  Partitioner partitioner = new TimeBasedPartitioner();
  partitioner.configure(config);

  TopicPartitionWriter topicPartitionWriter = new TopicPartitionWriter(
      TOPIC_PARTITION, storage, writerProvider, partitioner, connectorConfig, context, avroData);

  String key = "key";
  Schema schema = createSchema();
  Struct[] records = createRecords(schema);

  Collection<SinkRecord> sinkRecords = createSinkRecords(records, key, schema);

  for (SinkRecord record : sinkRecords) {
    topicPartitionWriter.buffer(record);
  }

  topicPartitionWriter.recover();
  topicPartitionWriter.write();
  topicPartitionWriter.close();


  long partitionDurationMs = (Long) config.get(HdfsSinkConnectorConfig.PARTITION_DURATION_MS_CONFIG);
  String pathFormat = (String) config.get(HdfsSinkConnectorConfig.PATH_FORMAT_CONFIG);
  String timeZoneString = (String) config.get(HdfsSinkConnectorConfig.TIMEZONE_CONFIG);
  long timestamp = System.currentTimeMillis();

  String encodedPartition = TimeUtils.encodeTimestamp(partitionDurationMs, pathFormat, timeZoneString, timestamp);

  String directory = partitioner.generatePartitionedPath(TOPIC, encodedPartition);

  Set<Path> expectedFiles = new HashSet<>();
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, 0, 2, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, 3, 5, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, 6, 8, extension, ZERO_PAD_FMT)));

  verify(expectedFiles, records, schema);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 41, Source: TopicPartitionWriterTest.java
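With TimeBasedPartitioner, the encoded partition is a hierarchical timestamp string derived from PATH_FORMAT_CONFIG. Assuming a path format along the lines of 'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH (an assumption; the actual value comes from createConfig()), the resulting directory resembles:

// directory -> "topic/year=2024/month=01/day=15/hour=09"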

Example 4: testWriteRecord

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the class that declares the method
@Test
public void testWriteRecord() throws Exception {
  DataWriter hdfsWriter = new DataWriter(connectorConfig, context, avroData);
  Partitioner partitioner = hdfsWriter.getPartitioner();
  hdfsWriter.recover(TOPIC_PARTITION);

  String key = "key";
  Schema schema = createSchema();
  Struct record = createRecord(schema);

  Collection<SinkRecord> sinkRecords = new ArrayList<>();
  for (long offset = 0; offset < 7; offset++) {
    SinkRecord sinkRecord =
        new SinkRecord(TOPIC, PARTITION, Schema.STRING_SCHEMA, key, schema, record, offset);

    sinkRecords.add(sinkRecord);
  }
  hdfsWriter.write(sinkRecords);
  hdfsWriter.close(assignment);
  hdfsWriter.stop();

  String encodedPartition = "partition=" + String.valueOf(PARTITION);
  String directory = partitioner.generatePartitionedPath(TOPIC, encodedPartition);

  // Last file (offset 6) doesn't satisfy size requirement and gets discarded on close
  long[] validOffsets = {-1, 2, 5};
  for (int i = 1; i < validOffsets.length; i++) {
    long startOffset = validOffsets[i - 1] + 1;
    long endOffset = validOffsets[i];
    Path path = new Path(
        FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, startOffset,
                                    endOffset, extension, ZERO_PAD_FMT));
    Collection<Object> records = schemaFileReader.readData(conf, path);
    long size = endOffset - startOffset + 1;
    assertEquals(size, records.size());
    for (Object avroRecord : records) {
      assertEquals(avroData.fromConnectData(schema, record), avroRecord);
    }
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 41, Source: DataWriterParquetTest.java

Example 5: testWriteRecord

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the class that declares the method
@Test
public void testWriteRecord() throws Exception {
  DataWriter hdfsWriter = new DataWriter(connectorConfig, context, avroData);
  Partitioner partitioner = hdfsWriter.getPartitioner();
  hdfsWriter.recover(TOPIC_PARTITION);

  String key = "key";
  Schema schema = createSchema();
  Struct record = createRecord(schema);

  Collection<SinkRecord> sinkRecords = new ArrayList<>();
  for (long offset = 0; offset < 7; offset++) {
    SinkRecord sinkRecord =
        new SinkRecord(TOPIC, PARTITION, Schema.STRING_SCHEMA, key, schema, record, offset);

    sinkRecords.add(sinkRecord);
  }
  hdfsWriter.write(sinkRecords);
  hdfsWriter.close(assignment);
  hdfsWriter.stop();

  String encodedPartition = "partition=" + String.valueOf(PARTITION);
  String directory = partitioner.generatePartitionedPath(TOPIC, encodedPartition);

  // Last file (offset 6) doesn't satisfy size requirement and gets discarded on close
  long[] validOffsets = {-1, 2, 5};
  for (int i = 1; i < validOffsets.length; i++) {
    long startOffset = validOffsets[i - 1] + 1;
    long endOffset = validOffsets[i];
    Path path = new Path(
        FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, startOffset,
                                    endOffset, extension, ZERO_PAD_FMT));
    Collection<Object> records = schemaFileReader.readData(conf, path);
    long size = endOffset - startOffset + 1;
    assertEquals(size, records.size());
    for (Object avroRecord : records) {
      assertEquals(avroData.fromConnectData(schema, record), avroRecord);
    }
  }
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 42, Source: DataWriterAvroTest.java


Note: The io.confluent.connect.hdfs.partitioner.Partitioner.generatePartitionedPath method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors. Please consult each project's license before redistributing or using the code. Do not reproduce without permission.