

Java Sampler Class Code Examples

This article collects typical usage examples of org.apache.htrace.Sampler in Java. If you are wondering what the Sampler class is for, how to use it, or what real code that uses it looks like, the selected examples below may help.


The Sampler class belongs to the org.apache.htrace package. A total of 15 code examples of the Sampler class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
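All of the examples below share the same basic pattern: a Sampler (Sampler.ALWAYS, Sampler.NEVER, or a ProbabilitySampler) is passed to Trace.startSpan(), and the returned TraceScope is closed once the traced work finishes. The following is a minimal sketch of that pattern, assuming the HTrace 3.x API used in these projects; the class name SamplerSketch and method name tracedWork are hypothetical, chosen only for illustration.

import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class SamplerSketch {
  // Start a span that is always sampled, run the traced work, and close the
  // scope in a finally block so the span is reported even if the work throws.
  static void tracedWork() {
    TraceScope scope = Trace.startSpan("SamplerSketch#tracedWork", Sampler.ALWAYS);
    try {
      // ... the work to be traced goes here ...
    } finally {
      scope.close();
    }
  }
}

Sampler.NEVER disables span creation at a call site, while a ProbabilitySampler (see Example 3) samples only a configured fraction of requests.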

Example 1: read

import org.apache.htrace.Sampler; // import the required package/class
@Override
public int read(ByteBuffer buf) throws IOException {
  if (curDataSlice == null || (curDataSlice.remaining() == 0 && bytesNeededToFinish > 0)) {
    TraceScope scope = Trace.startSpan(
        "RemoteBlockReader2#readNextPacket(" + blockId + ")", Sampler.NEVER);
    try {
      readNextPacket();
    } finally {
      scope.close();
    }
  }
  if (curDataSlice.remaining() == 0) {
    // we're at EOF now
    return -1;
  }

  int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
  ByteBuffer writeSlice = curDataSlice.duplicate();
  writeSlice.limit(writeSlice.position() + nRead);
  buf.put(writeSlice);
  curDataSlice.position(writeSlice.position());

  return nRead;
}
 
Contributor: naver, Project: hadoop, Lines: 25, Source: RemoteBlockReader2.java

Example 2: fillBuffer

import org.apache.htrace.Sampler; // import the required package/class
/**
 * Reads bytes into a buffer until EOF or the buffer's limit is reached
 */
private int fillBuffer(FileInputStream stream, ByteBuffer buf)
    throws IOException {
  TraceScope scope = Trace.startSpan("BlockReaderLocalLegacy#fillBuffer(" +
      blockId + ")", Sampler.NEVER);
  try {
    int bytesRead = stream.getChannel().read(buf);
    if (bytesRead < 0) {
      //EOF
      return bytesRead;
    }
    while (buf.remaining() > 0) {
      int n = stream.getChannel().read(buf);
      if (n < 0) {
        //EOF
        return bytesRead;
      }
      bytesRead += n;
    }
    return bytesRead;
  } finally {
    scope.close();
  }
}
 
Contributor: naver, Project: hadoop, Lines: 27, Source: BlockReaderLocalLegacy.java

Example 3: Test

import org.apache.htrace.Sampler; // import the required package/class
/**
 * Note that all subclasses of this class must provide a public constructor
 * that has the exact same list of arguments.
 */
Test(final Connection con, final TestOptions options, final Status status) {
  this.connection = con;
  this.conf = con == null ? HBaseConfiguration.create() : this.connection.getConfiguration();
  this.opts = options;
  this.status = status;
  this.testName = this.getClass().getSimpleName();
  receiverHost = SpanReceiverHost.getInstance(conf);
  if (options.traceRate >= 1.0) {
    this.traceSampler = Sampler.ALWAYS;
  } else if (options.traceRate > 0.0) {
    conf.setDouble("hbase.sampler.fraction", options.traceRate);
    this.traceSampler = new ProbabilitySampler(new HBaseHTraceConfiguration(conf));
  } else {
    this.traceSampler = Sampler.NEVER;
  }
  everyN = (int) (opts.totalRows / (opts.totalRows * opts.sampleRate));
  if (options.isValueZipf()) {
    this.zipf = new RandomDistribution.Zipf(this.rand, 1, options.getValueSize(), 1.1);
  }
  LOG.info("Sampling 1 every " + everyN + " out of " + opts.perClientRunRows + " total rows.");
}
 
Contributor: fengchen8086, Project: ditb, Lines: 26, Source: PerformanceEvaluation.java

Example 4: createEntries

import org.apache.htrace.Sampler; // import the required package/class
private void createEntries(Opts opts) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {

    // Trace the write operation. Note that unless you flush the BatchWriter, you will not capture
    // the write operation, since it occurs asynchronously. You can optionally create additional Spans
    // within a given Trace, as seen below around the flush.
    TraceScope scope = Trace.startSpan("Client Write", Sampler.ALWAYS);

    System.out.println("TraceID: " + Long.toHexString(scope.getSpan().getTraceId()));
    BatchWriter batchWriter = opts.getConnector().createBatchWriter(opts.getTableName(), new BatchWriterConfig());

    Mutation m = new Mutation("row");
    m.put("cf", "cq", "value");

    batchWriter.addMutation(m);
    // You can add timeline annotations to Spans which will be able to be viewed in the Monitor
    scope.getSpan().addTimelineAnnotation("Initiating Flush");
    batchWriter.flush();

    batchWriter.close();
    scope.close();
  }
 
Contributor: apache, Project: accumulo-examples, Lines: 22, Source: TracingExample.java

Example 5: readEntries

import org.apache.htrace.Sampler; // import the required package/class
private void readEntries(Opts opts) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {

    Scanner scanner = opts.getConnector().createScanner(opts.getTableName(), opts.auths);

    // Trace the read operation.
    TraceScope readScope = Trace.startSpan("Client Read", Sampler.ALWAYS);
    System.out.println("TraceID: " + Long.toHexString(readScope.getSpan().getTraceId()));

    int numberOfEntriesRead = 0;
    for (Entry<Key,Value> entry : scanner) {
      System.out.println(entry.getKey().toString() + " -> " + entry.getValue().toString());
      ++numberOfEntriesRead;
    }
    // You can add additional metadata (key, values) to Spans which will be able to be viewed in the Monitor
    readScope.getSpan().addKVAnnotation("Number of Entries Read".getBytes(UTF_8), String.valueOf(numberOfEntriesRead).getBytes(UTF_8));

    readScope.close();
  }
 
Contributor: apache, Project: accumulo-examples, Lines: 19, Source: TracingExample.java

Example 6: waitForAckedSeqno

import org.apache.htrace.Sampler; // import the required package/class
private void waitForAckedSeqno(long seqno) throws IOException {
  TraceScope scope = Trace.startSpan("waitForAckedSeqno", Sampler.NEVER);
  try {
    if (DFSClient.LOG.isDebugEnabled()) {
      DFSClient.LOG.debug("Waiting for ack for: " + seqno);
    }
    long begin = Time.monotonicNow();
    try {
      synchronized (dataQueue) {
        while (!isClosed()) {
          checkClosed();
          if (lastAckedSeqno >= seqno) {
            break;
          }
          try {
            dataQueue.wait(1000); // when we receive an ack, we notify on
            // dataQueue
          } catch (InterruptedException ie) {
            throw new InterruptedIOException(
                "Interrupted while waiting for data to be acknowledged by pipeline");
          }
        }
      }
      checkClosed();
    } catch (ClosedChannelException e) {
      // Intentionally ignored: the stream was closed while waiting for acks.
    }
    long duration = Time.monotonicNow() - begin;
    if (duration > dfsclientSlowLogThresholdMs) {
      DFSClient.LOG.warn("Slow waitForAckedSeqno took " + duration
          + "ms (threshold=" + dfsclientSlowLogThresholdMs + "ms)");
    }
  } finally {
    scope.close();
  }
}
 
Contributor: naver, Project: hadoop, Lines: 36, Source: DFSOutputStream.java

Example 7: readChunk

import org.apache.htrace.Sampler; // import the required package/class
@Override
protected synchronized int readChunk(long pos, byte[] buf, int offset, 
                                     int len, byte[] checksumBuf) 
                                     throws IOException {
  TraceScope scope =
      Trace.startSpan("RemoteBlockReader#readChunk(" + blockId + ")",
          Sampler.NEVER);
  try {
    return readChunkImpl(pos, buf, offset, len, checksumBuf);
  } finally {
    scope.close();
  }
}
 
Contributor: naver, Project: hadoop, Lines: 14, Source: RemoteBlockReader.java

Example 8: CacheDirectiveIterator

import org.apache.htrace.Sampler; // import the required package/class
public CacheDirectiveIterator(ClientProtocol namenode,
    CacheDirectiveInfo filter, Sampler<?> traceSampler) {
  super(0L);
  this.namenode = namenode;
  this.filter = filter;
  this.traceSampler = traceSampler;
}
 
Contributor: naver, Project: hadoop, Lines: 8, Source: CacheDirectiveIterator.java

Example 9: DFSInotifyEventInputStream

import org.apache.htrace.Sampler; // import the required package/class
DFSInotifyEventInputStream(Sampler traceSampler, ClientProtocol namenode,
      long lastReadTxid) throws IOException {
  this.traceSampler = traceSampler;
  this.namenode = namenode;
  this.it = Iterators.emptyIterator();
  this.lastReadTxid = lastReadTxid;
}
 
Contributor: naver, Project: hadoop, Lines: 8, Source: DFSInotifyEventInputStream.java

Example 10: testShortCircuitTraceHooks

import org.apache.htrace.Sampler; // import the required package/class
@Test
public void testShortCircuitTraceHooks() throws IOException {
  assumeTrue(NativeCodeLoader.isNativeCodeLoaded() && !Path.WINDOWS);
  conf = new Configuration();
  conf.set(DFSConfigKeys.DFS_CLIENT_HTRACE_PREFIX +
      SpanReceiverHost.SPAN_RECEIVERS_CONF_SUFFIX,
      TestTracing.SetSpanReceiver.class.getName());
  conf.setLong("dfs.blocksize", 100 * 1024);
  conf.setBoolean(DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_KEY, true);
  conf.setBoolean(DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_SKIP_CHECKSUM_KEY, false);
  conf.set(DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY,
      "testShortCircuitTraceHooks._PORT");
  conf.set(DFSConfigKeys.DFS_CHECKSUM_TYPE_KEY, "CRC32C");
  cluster = new MiniDFSCluster.Builder(conf)
      .numDataNodes(1)
      .build();
  dfs = cluster.getFileSystem();

  try {
    DFSTestUtil.createFile(dfs, TEST_PATH, TEST_LENGTH, (short)1, 5678L);

    TraceScope ts = Trace.startSpan("testShortCircuitTraceHooks", Sampler.ALWAYS);
    FSDataInputStream stream = dfs.open(TEST_PATH);
    byte[] buf = new byte[TEST_LENGTH];
    IOUtils.readFully(stream, buf, 0, TEST_LENGTH);
    stream.close();
    ts.close();

    String[] expectedSpanNames = {
      "OpRequestShortCircuitAccessProto",
      "ShortCircuitShmRequestProto"
    };
    TestTracing.assertSpanNamesFound(expectedSpanNames);
  } finally {
    dfs.close();
    cluster.shutdown();
  }
}
 
Contributor: naver, Project: hadoop, Lines: 39, Source: TestTracingShortCircuitLocalRead.java

Example 11: readWithTracing

import org.apache.htrace.Sampler; // import the required package/class
public void readWithTracing() throws Exception {
  String fileName = "testReadTraceHooks.dat";
  writeTestFile(fileName);
  long startTime = System.currentTimeMillis();
  TraceScope ts = Trace.startSpan("testReadTraceHooks", Sampler.ALWAYS);
  readTestFile(fileName);
  ts.close();
  long endTime = System.currentTimeMillis();

  String[] expectedSpanNames = {
    "testReadTraceHooks",
    "org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations",
    "ClientNamenodeProtocol#getBlockLocations",
    "OpReadBlockProto"
  };
  assertSpanNamesFound(expectedSpanNames);

  // The trace should last about the same amount of time as the test
  Map<String, List<Span>> map = SetSpanReceiver.SetHolder.getMap();
  Span s = map.get("testReadTraceHooks").get(0);
  Assert.assertNotNull(s);

  long spanStart = s.getStartTimeMillis();
  long spanEnd = s.getStopTimeMillis();
  Assert.assertTrue(spanStart - startTime < 100);
  Assert.assertTrue(spanEnd - endTime < 100);

  // There should only be one trace id as it should all be homed in the
  // top trace.
  for (Span span : SetSpanReceiver.SetHolder.spans.values()) {
    Assert.assertEquals(ts.getSpan().getTraceId(), span.getTraceId());
  }
  SetSpanReceiver.SetHolder.spans.clear();
}
 
Contributor: naver, Project: hadoop, Lines: 35, Source: TestTracing.java

Example 12: createTable

import org.apache.htrace.Sampler; // import the required package/class
private void createTable() throws IOException {
  TraceScope createScope = null;
  try {
    createScope = Trace.startSpan("createTable", Sampler.ALWAYS);
    util.createTable(tableName, familyName);
  } finally {
    if (createScope != null) createScope.close();
  }
}
 
Contributor: fengchen8086, Project: ditb, Lines: 10, Source: IntegrationTestSendTraceRequests.java

Example 13: deleteTable

import org.apache.htrace.Sampler; // import the required package/class
private void deleteTable() throws IOException {
  TraceScope deleteScope = null;

  try {
    if (admin.tableExists(tableName)) {
      deleteScope = Trace.startSpan("deleteTable", Sampler.ALWAYS);
      util.deleteTable(tableName);
    }
  } finally {
    if (deleteScope != null) deleteScope.close();
  }
}
 
Contributor: fengchen8086, Project: ditb, Lines: 13, Source: IntegrationTestSendTraceRequests.java

Example 14: insertData

import org.apache.htrace.Sampler; // import the required package/class
private LinkedBlockingQueue<Long> insertData() throws IOException, InterruptedException {
  LinkedBlockingQueue<Long> rowKeys = new LinkedBlockingQueue<Long>(25000);
  BufferedMutator ht = util.getConnection().getBufferedMutator(this.tableName);
  byte[] value = new byte[300];
  for (int x = 0; x < 5000; x++) {
    TraceScope traceScope = Trace.startSpan("insertData", Sampler.ALWAYS);
    try {
      for (int i = 0; i < 5; i++) {
        long rk = random.nextLong();
        rowKeys.add(rk);
        Put p = new Put(Bytes.toBytes(rk));
        for (int y = 0; y < 10; y++) {
          random.nextBytes(value);
          p.add(familyName, Bytes.toBytes(random.nextLong()), value);
        }
        ht.mutate(p);
      }
      if ((x % 1000) == 0) {
        admin.flush(tableName);
      }
    } finally {
      traceScope.close();
    }
  }
  admin.flush(tableName);
  return rowKeys;
}
 
Contributor: fengchen8086, Project: ditb, Lines: 28, Source: IntegrationTestSendTraceRequests.java

Example 15: EncryptionZoneIterator

import org.apache.htrace.Sampler; // import the required package/class
public EncryptionZoneIterator(ClientProtocol namenode,
                              Sampler<?> traceSampler) {
  super(Long.valueOf(0));
  this.namenode = namenode;
  this.traceSampler = traceSampler;
}
 
Contributor: naver, Project: hadoop, Lines: 7, Source: EncryptionZoneIterator.java


Note: The org.apache.htrace.Sampler class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by many developers, and copyright of the source code remains with the original authors. For distribution and use, please follow the license of the corresponding project; do not reproduce without permission.