Java SinkCounter Class Code Examples

This article collects typical usage examples of the Java class org.apache.flume.instrumentation.SinkCounter. If you are wondering what the SinkCounter class does, how to use it, or what real-world usage looks like, the curated code examples below should help.


The SinkCounter class belongs to the org.apache.flume.instrumentation package. Fifteen code examples of the class are shown below, sorted by popularity by default.
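Before the examples, here is a minimal sketch of how a custom Flume sink typically wires a SinkCounter into its lifecycle and process() loop. The class name CountingSink and the event-delivery placeholder are illustrative assumptions, not code from any example below; the SinkCounter calls themselves match the API exercised by the examples.

import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.instrumentation.SinkCounter;
import org.apache.flume.sink.AbstractSink;

public class CountingSink extends AbstractSink implements Configurable {
  private SinkCounter sinkCounter;

  @Override
  public void configure(Context context) {
    // One counter per named sink; the name appears in the metrics report
    if (sinkCounter == null) {
      sinkCounter = new SinkCounter(getName());
    }
  }

  @Override
  public synchronized void start() {
    sinkCounter.start(); // registers the counter group for monitoring
    super.start();
  }

  @Override
  public Status process() {
    Status status = Status.READY;
    Channel channel = getChannel();
    Transaction txn = channel.getTransaction();
    txn.begin();
    try {
      Event event = channel.take();
      if (event == null) {
        sinkCounter.incrementBatchEmptyCount(); // nothing to drain this round
        status = Status.BACKOFF;
      } else {
        sinkCounter.incrementEventDrainAttemptCount();
        // ... deliver the event to the downstream system here ...
        sinkCounter.incrementEventDrainSuccessCount();
      }
      txn.commit();
    } catch (Throwable t) {
      txn.rollback();
      status = Status.BACKOFF;
    } finally {
      txn.close();
    }
    return status;
  }

  @Override
  public synchronized void stop() {
    sinkCounter.stop();
    super.stop();
  }
}

The attempt/success split is what lets tests such as Example 9 assert on getEventDrainAttemptCount() and getEventDrainSuccessCount() independently.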

Example 1: testEventCountingRoller

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testEventCountingRoller() throws IOException, InterruptedException {
  int maxEvents = 100;
  MockHDFSWriter hdfsWriter = new MockHDFSWriter();
  BucketWriter bucketWriter = new BucketWriter(
      0, 0, maxEvents, 0, ctx, "/tmp", "file", "", ".tmp", null, null,
      SequenceFile.CompressionType.NONE, hdfsWriter, timedRollerPool, proxy,
      new SinkCounter("test-bucket-writer-" + System.currentTimeMillis()), 0, null, null, 30000,
      Executors.newSingleThreadExecutor(), 0, 0);

  Event e = EventBuilder.withBody("foo", Charsets.UTF_8);
  for (int i = 0; i < 1000; i++) {
    bucketWriter.append(e);
  }

  logger.info("Number of events written: {}", hdfsWriter.getEventsWritten());
  logger.info("Number of bytes written: {}", hdfsWriter.getBytesWritten());
  logger.info("Number of files opened: {}", hdfsWriter.getFilesOpened());

  Assert.assertEquals("events written", 1000, hdfsWriter.getEventsWritten());
  Assert.assertEquals("bytes written", 3000, hdfsWriter.getBytesWritten());
  Assert.assertEquals("files opened", 10, hdfsWriter.getFilesOpened());
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 24, Source: TestBucketWriter.java

Example 2: testSizeRoller

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testSizeRoller() throws IOException, InterruptedException {
  int maxBytes = 300;
  MockHDFSWriter hdfsWriter = new MockHDFSWriter();
  BucketWriter bucketWriter = new BucketWriter(
      0, maxBytes, 0, 0, ctx, "/tmp", "file", "", ".tmp", null, null,
      SequenceFile.CompressionType.NONE, hdfsWriter, timedRollerPool, proxy,
      new SinkCounter("test-bucket-writer-" + System.currentTimeMillis()), 0, null, null, 30000,
      Executors.newSingleThreadExecutor(), 0, 0);

  Event e = EventBuilder.withBody("foo", Charsets.UTF_8);
  for (int i = 0; i < 1000; i++) {
    bucketWriter.append(e);
  }

  logger.info("Number of events written: {}", hdfsWriter.getEventsWritten());
  logger.info("Number of bytes written: {}", hdfsWriter.getBytesWritten());
  logger.info("Number of files opened: {}", hdfsWriter.getFilesOpened());

  Assert.assertEquals("events written", 1000, hdfsWriter.getEventsWritten());
  Assert.assertEquals("bytes written", 3000, hdfsWriter.getBytesWritten());
  Assert.assertEquals("files opened", 10, hdfsWriter.getFilesOpened());
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 24, Source: TestBucketWriter.java

Example 3: testInUsePrefix

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testInUsePrefix() throws IOException, InterruptedException {
  final int ROLL_INTERVAL = 1000; // seconds. Make sure it doesn't change in course of test
  final String PREFIX = "BRNO_IS_CITY_IN_CZECH_REPUBLIC";

  MockHDFSWriter hdfsWriter = new MockHDFSWriter();
  HDFSTextSerializer formatter = new HDFSTextSerializer();
  BucketWriter bucketWriter = new BucketWriter(
      ROLL_INTERVAL, 0, 0, 0, ctx, "/tmp", "file", PREFIX, ".tmp", null, null,
      SequenceFile.CompressionType.NONE, hdfsWriter, timedRollerPool, proxy,
      new SinkCounter("test-bucket-writer-" + System.currentTimeMillis()), 0, null, null, 30000,
      Executors.newSingleThreadExecutor(), 0, 0);

  Event e = EventBuilder.withBody("foo", Charsets.UTF_8);
  bucketWriter.append(e);

  Assert.assertTrue("Incorrect in use prefix", hdfsWriter.getOpenedFilePath().contains(PREFIX));
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 19, Source: TestBucketWriter.java

Example 4: testInUseSuffix

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testInUseSuffix() throws IOException, InterruptedException {
  final int ROLL_INTERVAL = 1000; // seconds. Make sure it doesn't change in course of test
  final String SUFFIX = "WELCOME_TO_THE_HELLMOUNTH";

  MockHDFSWriter hdfsWriter = new MockHDFSWriter();
  HDFSTextSerializer serializer = new HDFSTextSerializer();
  BucketWriter bucketWriter = new BucketWriter(
      ROLL_INTERVAL, 0, 0, 0, ctx, "/tmp", "file", "", SUFFIX, null, null,
      SequenceFile.CompressionType.NONE, hdfsWriter, timedRollerPool, proxy,
      new SinkCounter("test-bucket-writer-" + System.currentTimeMillis()), 0, null, null, 30000,
      Executors.newSingleThreadExecutor(), 0, 0);

  Event e = EventBuilder.withBody("foo", Charsets.UTF_8);
  bucketWriter.append(e);

  Assert.assertTrue("Incorrect in use suffix", hdfsWriter.getOpenedFilePath().contains(SUFFIX));
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 19, Source: TestBucketWriter.java

Example 5: testCallbackOnClose

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testCallbackOnClose() throws IOException, InterruptedException {
  final int ROLL_INTERVAL = 1000; // seconds. Make sure it doesn't change in course of test
  final String SUFFIX = "WELCOME_TO_THE_EREBOR";
  final AtomicBoolean callbackCalled = new AtomicBoolean(false);

  MockHDFSWriter hdfsWriter = new MockHDFSWriter();
  BucketWriter bucketWriter = new BucketWriter(
      ROLL_INTERVAL, 0, 0, 0, ctx, "/tmp", "file", "", SUFFIX, null, null,
      SequenceFile.CompressionType.NONE, hdfsWriter, timedRollerPool, proxy,
      new SinkCounter("test-bucket-writer-" + System.currentTimeMillis()), 0,
      new HDFSEventSink.WriterCallback() {
        @Override
        public void run(String filePath) {
          callbackCalled.set(true);
        }
      }, "blah", 30000, Executors.newSingleThreadExecutor(), 0, 0);

  Event e = EventBuilder.withBody("foo", Charsets.UTF_8);
  bucketWriter.append(e);
  bucketWriter.close(true);

  Assert.assertTrue(callbackCalled.get());
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 25, Source: TestBucketWriter.java

Example 6: DatahubWriter

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
public DatahubWriter(Configure configure, SinkCounter sinkCounter) {
    this.configure = configure;
    this.sinkCounter = sinkCounter;

    DatahubConfiguration datahubConfiguration = new DatahubConfiguration(
        new AliyunAccount(configure.getDatahubAccessId(), configure.getDatahubAccessKey()),
        configure.getDatahubEndPoint());
    datahubConfiguration.setUserAgent("datahub-flume-plugin-2.0.0");

    Project project = Project.Builder.build(configure.getDatahubProject(), datahubConfiguration);
    if (!project.listTopic().contains(configure.getDatahubTopic().toLowerCase())) {
        throw new RuntimeException("Can not find datahub topic[" + configure.getDatahubTopic() + "]");
    }
    topic = project.getTopic(configure.getDatahubTopic());
    if (topic == null) {
        throw new RuntimeException("Can not find datahub topic[" + configure.getDatahubTopic() + "]");
    }

    if (topic.getShardCount() == 0) {
        throw new RuntimeException("Topic[" + topic.getTopicName() + "] has not active shard");
    }

    // Initialize the record builder
    recordBuilder = new RecordBuilder(configure, topic);
    logger.info("Init RecordBuilder success");
}
 
Developer ID: aliyun, Project: aliyun-maxcompute-data-collectors, Lines: 27, Source: DatahubWriter.java

Example 7: BucketWriterLoader

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
public BucketWriterLoader(long rollInterval,
                          long rollSize,
                          long rollCount,
                          long batchSize,
                          long defaultBlockSize,
                          Context context,
                          String filePrefix,
                          ScheduledThreadPoolExecutor timedRollerPool,
                          UserGroupInformation proxyTicket,
                          SinkCounter sinkCounter) {
  this.rollInterval = rollInterval;
  this.rollSize = rollSize;
  this.rollCount = rollCount;
  this.batchSize = batchSize;
  this.defaultBlockSize = defaultBlockSize;
  this.context = context;
  this.filePrefix = filePrefix;
  this.timedRollerPool = timedRollerPool;
  this.proxyTicket = proxyTicket;
  this.sinkCounter = sinkCounter;
}
 
Developer ID: kaaproject, Project: kaa, Lines: 22, Source: KaaHdfsSink.java

Example 8: BucketWriter

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
BucketWriter(long rollInterval, long rollSize, long rollCount, long batchSize,
             long defaultBlockSize,
             Context context, String filePath, HDFSWriter writer,
             ScheduledThreadPoolExecutor timedRollerPool, UserGroupInformation user,
             SinkCounter sinkCounter) {
  this.rollInterval = rollInterval;
  this.rollSize = rollSize;
  this.rollCount = rollCount;
  this.batchSize = batchSize;
  this.defaultBlockSize = defaultBlockSize;
  this.filePath = filePath;
  this.writer = writer;
  this.timedRollerPool = timedRollerPool;
  this.user = user;
  this.sinkCounter = sinkCounter;

  isOpen = false;

  writer.configure(context);
}
 
Developer ID: kaaproject, Project: kaa, Lines: 21, Source: BucketWriter.java

Example 9: testSingleWriterUseHeaders

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testSingleWriterUseHeaders()
        throws Exception {
  String[] colNames = {COL1, COL2};
  String PART1_NAME = "country";
  String PART2_NAME = "hour";
  String[] partNames = {PART1_NAME, PART2_NAME};
  List<String> partitionVals = null;
  String PART1_VALUE = "%{" + PART1_NAME + "}";
  String PART2_VALUE = "%y-%m-%d-%k";
  partitionVals = new ArrayList<String>(2);
  partitionVals.add(PART1_VALUE);
  partitionVals.add(PART2_VALUE);

  String tblName = "hourlydata";
  TestUtil.dropDB(conf, dbName2);
  String dbLocation = dbFolder.newFolder(dbName2).getCanonicalPath() + ".db";
  dbLocation = dbLocation.replaceAll("\\\\","/"); // for windows paths
  TestUtil.createDbAndTable(driver, dbName2, tblName, partitionVals, colNames,
          colTypes, partNames, dbLocation);

  int totalRecords = 4;
  int batchSize = 2;
  int batchCount = totalRecords / batchSize;

  Context context = new Context();
  context.put("hive.metastore",metaStoreURI);
  context.put("hive.database",dbName2);
  context.put("hive.table",tblName);
  context.put("hive.partition", PART1_VALUE + "," + PART2_VALUE);
  context.put("autoCreatePartitions","true");
  context.put("useLocalTimeStamp", "false");
  context.put("batchSize","" + batchSize);
  context.put("serializer", HiveDelimitedTextSerializer.ALIAS);
  context.put("serializer.fieldnames", COL1 + ",," + COL2 + ",");
  context.put("heartBeatInterval", "0");

  Channel channel = startSink(sink, context);

  Calendar eventDate = Calendar.getInstance();
  List<String> bodies = Lists.newArrayList();

  // push events in two batches, two per batch; each batch is a different hour
  Transaction txn = channel.getTransaction();
  txn.begin();
  for (int j = 1; j <= totalRecords; j++) {
    Event event = new SimpleEvent();
    String body = j + ",blah,This is a log message,other stuff";
    event.setBody(body.getBytes());
    eventDate.clear();
    eventDate.set(2014, 03, 03, j % batchCount, 1); // yy mm dd hh mm
    event.getHeaders().put( "timestamp",
            String.valueOf(eventDate.getTimeInMillis()) );
    event.getHeaders().put( PART1_NAME, "Asia" );
    bodies.add(body);
    channel.put(event);
  }
  // execute sink to process the events
  txn.commit();
  txn.close();

  checkRecordCountInTable(0, dbName2, tblName);
  for (int i = 0; i < batchCount ; i++) {
    sink.process();
  }
  checkRecordCountInTable(totalRecords, dbName2, tblName);
  sink.stop();

  // verify counters
  SinkCounter counter = sink.getCounter();
  Assert.assertEquals(2, counter.getConnectionCreatedCount());
  Assert.assertEquals(2, counter.getConnectionClosedCount());
  Assert.assertEquals(2, counter.getBatchCompleteCount());
  Assert.assertEquals(0, counter.getBatchEmptyCount());
  Assert.assertEquals(0, counter.getConnectionFailedCount() );
  Assert.assertEquals(4, counter.getEventDrainAttemptCount());
  Assert.assertEquals(4, counter.getEventDrainSuccessCount() );

}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 80, Source: TestHiveSink.java

Example 10: testInstantiate

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testInstantiate() throws Exception {
  HiveEndPoint endPoint = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals);
  SinkCounter sinkCounter = new SinkCounter(this.getClass().getName());
  HiveWriter writer = new HiveWriter(endPoint, 10, true, timeout, callTimeoutPool, "flumetest",
                                     serializer, sinkCounter);

  writer.close();
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 10, Source: TestHiveWriter.java

Example 11: testWriteBasic

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testWriteBasic() throws Exception {
  HiveEndPoint endPoint = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals);
  SinkCounter sinkCounter = new SinkCounter(this.getClass().getName());
  HiveWriter writer = new HiveWriter(endPoint, 10, true, timeout, callTimeoutPool, "flumetest",
                                     serializer, sinkCounter);

  writeEvents(writer,3);
  writer.flush(false);
  writer.close();
  checkRecordCountInTable(3);
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 13, Source: TestHiveWriter.java

Example 12: testWriteMultiFlush

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testWriteMultiFlush() throws Exception {
  HiveEndPoint endPoint = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals);
  SinkCounter sinkCounter = new SinkCounter(this.getClass().getName());

  HiveWriter writer = new HiveWriter(endPoint, 10, true, timeout, callTimeoutPool, "flumetest",
                                     serializer, sinkCounter);

  checkRecordCountInTable(0);
  SimpleEvent event = new SimpleEvent();

  String REC1 = "1,xyz,Hello world,abc";
  event.setBody(REC1.getBytes());
  writer.write(event);
  checkRecordCountInTable(0);
  writer.flush(true);
  checkRecordCountInTable(1);

  String REC2 = "2,xyz,Hello world,abc";
  event.setBody(REC2.getBytes());
  writer.write(event);
  checkRecordCountInTable(1);
  writer.flush(true);
  checkRecordCountInTable(2);

  String REC3 = "3,xyz,Hello world,abc";
  event.setBody(REC3.getBytes());
  writer.write(event);
  writer.flush(true);
  checkRecordCountInTable(3);
  writer.close();

  checkRecordCountInTable(3);
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 35, Source: TestHiveWriter.java

Example 13: testInOrderWrite

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
/**
 * Sets up the input fields in the same order as the table columns,
 * and sets the serde separator to match the input field separator.
 * @throws Exception
 */
@Test
public void testInOrderWrite() throws Exception {
  HiveEndPoint endPoint = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals);
  SinkCounter sinkCounter = new SinkCounter(this.getClass().getName());
  int timeout = 5000; // msec

  HiveDelimitedTextSerializer serializer2 = new HiveDelimitedTextSerializer();
  Context ctx = new Context();
  ctx.put("serializer.fieldnames", COL1 + "," + COL2);
  ctx.put("serializer.serdeSeparator", ",");
  serializer2.configure(ctx);


  HiveWriter writer = new HiveWriter(endPoint, 10, true, timeout, callTimeoutPool,
          "flumetest", serializer2, sinkCounter);

  SimpleEvent event = new SimpleEvent();
  event.setBody("1,Hello world 1".getBytes());
  writer.write(event);
  event.setBody("2,Hello world 2".getBytes());
  writer.write(event);
  event.setBody("3,Hello world 3".getBytes());
  writer.write(event);
  writer.flush(false);
  writer.close();
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 32, Source: TestHiveWriter.java

Example 14: testSecondWriterBeforeFirstCommits

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testSecondWriterBeforeFirstCommits() throws Exception {
  // here we open a new writer while the first is still writing (not committed)
  HiveEndPoint endPoint1 = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals);
  ArrayList<String> partVals2 = new ArrayList<String>(2);
  partVals2.add(PART1_VALUE);
  partVals2.add("Nepal");
  HiveEndPoint endPoint2 = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals2);

  SinkCounter sinkCounter1 = new SinkCounter(this.getClass().getName());
  SinkCounter sinkCounter2 = new SinkCounter(this.getClass().getName());

  HiveWriter writer1 = new HiveWriter(endPoint1, 10, true, timeout, callTimeoutPool, "flumetest",
                                      serializer, sinkCounter1);

  writeEvents(writer1, 3);

  HiveWriter writer2 = new HiveWriter(endPoint2, 10, true, timeout, callTimeoutPool, "flumetest",
                                      serializer, sinkCounter2);
  writeEvents(writer2, 3);
  writer2.flush(false); // commit

  writer1.flush(false); // commit
  writer1.close();

  writer2.close();
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 28, Source: TestHiveWriter.java

Example 15: testSecondWriterAfterFirstCommits

import org.apache.flume.instrumentation.SinkCounter; // import the required package/class
@Test
public void testSecondWriterAfterFirstCommits() throws Exception {
  // here we open a new writer after the first writer has committed one txn
  HiveEndPoint endPoint1 = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals);
  ArrayList<String> partVals2 = new ArrayList<String>(2);
  partVals2.add(PART1_VALUE);
  partVals2.add("Nepal");
  HiveEndPoint endPoint2 = new HiveEndPoint(metaStoreURI, dbName, tblName, partVals2);

  SinkCounter sinkCounter1 = new SinkCounter(this.getClass().getName());
  SinkCounter sinkCounter2 = new SinkCounter(this.getClass().getName());

  HiveWriter writer1 = new HiveWriter(endPoint1, 10, true, timeout, callTimeoutPool, "flumetest",
                                      serializer, sinkCounter1);

  writeEvents(writer1, 3);

  writer1.flush(false); // commit


  HiveWriter writer2 = new HiveWriter(endPoint2, 10, true, timeout, callTimeoutPool, "flumetest",
                                      serializer, sinkCounter2);
  writeEvents(writer2, 3);
  writer2.flush(false); // commit


  writer1.close();
  writer2.close();
}
 
Developer ID: moueimei, Project: flume-release-1.7.0, Lines: 30, Source: TestHiveWriter.java


Note: The org.apache.flume.instrumentation.SinkCounter class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their authors, and copyright remains with the original authors; consult the corresponding project's License before distributing or reusing the code. Do not reproduce without permission.