

Java Partitioner.configure Method Code Examples

This article collects typical usage examples of the Java method io.confluent.connect.hdfs.partitioner.Partitioner.configure. If you have been wondering what Partitioner.configure does, how it is used, or where to find examples of it, the curated code examples below may help. You can also explore further usage examples of the enclosing class, io.confluent.connect.hdfs.partitioner.Partitioner.


The following shows 5 code examples of Partitioner.configure, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.

Example 1: createPartitioner

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the package/class the method depends on
private Partitioner createPartitioner(HdfsSinkConnectorConfig config)
    throws ClassNotFoundException, IllegalAccessException, InstantiationException {

  // Resolve the partitioner implementation named in the connector configuration.
  @SuppressWarnings("unchecked")
  Class<? extends Partitioner> partitionerClass = (Class<? extends Partitioner>)
      Class.forName(config.getString(HdfsSinkConnectorConfig.PARTITIONER_CLASS_CONFIG));

  // Instantiate it reflectively and hand it the configuration as a plain map.
  Map<String, Object> map = copyConfig(config);
  Partitioner partitioner = partitionerClass.newInstance();
  partitioner.configure(map);
  return partitioner;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 13, Source: DataWriter.java
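
The copyConfig helper referenced above is not shown on this page. A minimal sketch of what it could look like, assuming it simply copies the parsed connector settings into a mutable map (the actual helper in kafka-connect-hdfs may differ):

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch -- the real copyConfig is defined elsewhere in DataWriter.java.
private Map<String, Object> copyConfig(HdfsSinkConnectorConfig config) {
  // AbstractConfig.values() exposes the parsed configuration as a map;
  // copying it lets the partitioner keep its own independent view.
  return new HashMap<>(config.values());
}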

Example 2: testWriteRecordDefaultWithPadding

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the package/class the method depends on
@Test
public void testWriteRecordDefaultWithPadding() throws Exception {
  Partitioner partitioner = new DefaultPartitioner();
  partitioner.configure(Collections.<String, Object>emptyMap());
  connectorProps.put(HdfsSinkConnectorConfig.FILENAME_OFFSET_ZERO_PAD_WIDTH_CONFIG, "2");
  configureConnector();
  TopicPartitionWriter topicPartitionWriter = new TopicPartitionWriter(
      TOPIC_PARTITION, storage, writerProvider, partitioner, connectorConfig, context, avroData);

  String key = "key";
  Schema schema = createSchema();
  Struct[] records = createRecords(schema);

  Collection<SinkRecord> sinkRecords = createSinkRecords(records, key, schema);

  for (SinkRecord record : sinkRecords) {
    topicPartitionWriter.buffer(record);
  }

  topicPartitionWriter.recover();
  topicPartitionWriter.write();
  topicPartitionWriter.close();

  Set<Path> expectedFiles = new HashSet<>();
  expectedFiles.add(new Path(url + "/" + topicsDir + "/" + TOPIC + "/partition=" + PARTITION +
                             "/" + TOPIC + "+" + PARTITION + "+00+02" + extension));
  expectedFiles.add(new Path(url + "/" + topicsDir + "/" + TOPIC + "/partition=" + PARTITION +
                             "/" + TOPIC + "+" + PARTITION + "+03+05" + extension));
  expectedFiles.add(new Path(url + "/" + topicsDir + "/" + TOPIC + "/partition=" + PARTITION +
                             "/" + TOPIC + "+" + PARTITION + "+06+08" + extension));
  verify(expectedFiles, records, schema);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 33, Source: TopicPartitionWriterTest.java
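
The expected file names above embed zero-padded start and end offsets (00, 02, and so on) because FILENAME_OFFSET_ZERO_PAD_WIDTH_CONFIG was set to 2. Below is a standalone illustration of that naming arithmetic; the topic, partition, and extension values are placeholders, not the test's actual constants:

import java.util.Locale;

public class OffsetPaddingDemo {
  public static void main(String[] args) {
    int padWidth = 2;                        // FILENAME_OFFSET_ZERO_PAD_WIDTH_CONFIG
    String fmt = "%0" + padWidth + "d";      // becomes "%02d"
    String topic = "test-topic";             // placeholder
    int partition = 12;                      // placeholder
    String extension = ".avro";              // placeholder

    // Mirrors the "<topic>+<partition>+<start>+<end><extension>" shape asserted above.
    String fileName = topic + "+" + partition
        + "+" + String.format(Locale.ROOT, fmt, 0)   // start offset -> "00"
        + "+" + String.format(Locale.ROOT, fmt, 2)   // end offset   -> "02"
        + extension;
    System.out.println(fileName);            // prints: test-topic+12+00+02.avro
  }
}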

Example 3: testWriteRecordFieldPartitioner

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the package/class the method depends on
@Test
public void testWriteRecordFieldPartitioner() throws Exception {
  Map<String, Object> config = createConfig();
  Partitioner partitioner = new FieldPartitioner();
  partitioner.configure(config);

  String partitionField = (String) config.get(HdfsSinkConnectorConfig.PARTITION_FIELD_NAME_CONFIG);

  TopicPartitionWriter topicPartitionWriter = new TopicPartitionWriter(
      TOPIC_PARTITION, storage, writerProvider, partitioner, connectorConfig, context, avroData);

  String key = "key";
  Schema schema = createSchema();
  Struct[] records = createRecords(schema);

  Collection<SinkRecord> sinkRecords = createSinkRecords(records, key, schema);

  for (SinkRecord record : sinkRecords) {
    topicPartitionWriter.buffer(record);
  }

  topicPartitionWriter.recover();
  topicPartitionWriter.write();
  topicPartitionWriter.close();


  String directory1 = partitioner.generatePartitionedPath(TOPIC, partitionField + "=" + String.valueOf(16));
  String directory2 = partitioner.generatePartitionedPath(TOPIC, partitionField + "=" + String.valueOf(17));
  String directory3 = partitioner.generatePartitionedPath(TOPIC, partitionField + "=" + String.valueOf(18));

  Set<Path> expectedFiles = new HashSet<>();
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory1, TOPIC_PARTITION, 0, 2, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory2, TOPIC_PARTITION, 3, 5, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory3, TOPIC_PARTITION, 6, 8, extension, ZERO_PAD_FMT)));

  verify(expectedFiles, records, schema);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 38, Source: TopicPartitionWriterTest.java
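
For comparison, configuring a FieldPartitioner by hand follows the same configure-then-use pattern. The field name "int" below is an assumption, since the test's createConfig() helper is not shown on this page:

import io.confluent.connect.hdfs.partitioner.FieldPartitioner;
import io.confluent.connect.hdfs.partitioner.Partitioner;

Map<String, Object> config = new HashMap<>();
// Assumed field name; createConfig() in the test supplies the real value.
config.put(HdfsSinkConnectorConfig.PARTITION_FIELD_NAME_CONFIG, "int");

Partitioner partitioner = new FieldPartitioner();
partitioner.configure(config);

// Directories come out as "<topic>/<field>=<value>", matching directory1..3 above.
String dir = partitioner.generatePartitionedPath("test-topic", "int=16");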

Example 4: testWriteRecordTimeBasedPartition

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the package/class the method depends on
@Test
public void testWriteRecordTimeBasedPartition() throws Exception {
  Map<String, Object> config = createConfig();
  Partitioner partitioner = new TimeBasedPartitioner();
  partitioner.configure(config);

  TopicPartitionWriter topicPartitionWriter = new TopicPartitionWriter(
      TOPIC_PARTITION, storage, writerProvider, partitioner, connectorConfig, context, avroData);

  String key = "key";
  Schema schema = createSchema();
  Struct[] records = createRecords(schema);

  Collection<SinkRecord> sinkRecords = createSinkRecords(records, key, schema);

  for (SinkRecord record : sinkRecords) {
    topicPartitionWriter.buffer(record);
  }

  topicPartitionWriter.recover();
  topicPartitionWriter.write();
  topicPartitionWriter.close();


  long partitionDurationMs = (Long) config.get(HdfsSinkConnectorConfig.PARTITION_DURATION_MS_CONFIG);
  String pathFormat = (String) config.get(HdfsSinkConnectorConfig.PATH_FORMAT_CONFIG);
  String timeZoneString = (String) config.get(HdfsSinkConnectorConfig.TIMEZONE_CONFIG);
  long timestamp = System.currentTimeMillis();

  String encodedPartition = TimeUtils.encodeTimestamp(partitionDurationMs, pathFormat, timeZoneString, timestamp);

  String directory = partitioner.generatePartitionedPath(TOPIC, encodedPartition);

  Set<Path> expectedFiles = new HashSet<>();
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, 0, 2, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, 3, 5, extension, ZERO_PAD_FMT)));
  expectedFiles.add(new Path(FileUtils.committedFileName(url, topicsDir, directory, TOPIC_PARTITION, 6, 8, extension, ZERO_PAD_FMT)));

  verify(expectedFiles, records, schema);
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 41, Source: TopicPartitionWriterTest.java
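
A TimeBasedPartitioner needs more settings than the other partitioners. Here is a hand-built sketch of the configuration the test reads back; the concrete values, and the locale key, are assumptions since createConfig() is not shown:

import io.confluent.connect.hdfs.partitioner.Partitioner;
import io.confluent.connect.hdfs.partitioner.TimeBasedPartitioner;
import java.util.concurrent.TimeUnit;

Map<String, Object> config = new HashMap<>();
config.put(HdfsSinkConnectorConfig.PARTITION_DURATION_MS_CONFIG,
    TimeUnit.HOURS.toMillis(1));                                   // assumed duration
config.put(HdfsSinkConnectorConfig.PATH_FORMAT_CONFIG,
    "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH/");                 // assumed Joda-time path format
config.put(HdfsSinkConnectorConfig.LOCALE_CONFIG, "en");           // assumption: a locale is also required
config.put(HdfsSinkConnectorConfig.TIMEZONE_CONFIG, "UTC");        // assumed timezone

Partitioner partitioner = new TimeBasedPartitioner();
partitioner.configure(config);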

Example 5: getPartitioner

import io.confluent.connect.hdfs.partitioner.Partitioner; // import the package/class the method depends on
public static Partitioner getPartitioner() {
  Partitioner partitioner = new DefaultPartitioner();
  partitioner.configure(new HashMap<String, Object>());
  return partitioner;
}
 
Developer: jiangxiluning, Project: kafka-connect-hdfs, Lines: 6, Source: HiveTestUtils.java
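
A caller of this test utility simply passes the result wherever a Partitioner is expected, for instance (hypothetical usage, reusing the TopicPartitionWriter constructor shape from the tests above):

Partitioner partitioner = HiveTestUtils.getPartitioner();
TopicPartitionWriter writer = new TopicPartitionWriter(
    TOPIC_PARTITION, storage, writerProvider, partitioner, connectorConfig, context, avroData);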


Note: The io.confluent.connect.hdfs.partitioner.Partitioner.configure method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright of the source code remains with the original authors. Please consult the corresponding project's license before distributing or using the code, and do not reproduce this article without permission.