

Java Histogram.recordValue Method Code Examples

This article collects typical usage examples of the Java method org.HdrHistogram.Histogram.recordValue. If you are wondering what Histogram.recordValue does, how to call it, or what it looks like in real code, the curated examples below may help. You can also explore further usage examples of org.HdrHistogram.Histogram, the class this method belongs to.


The sections below present 8 code examples of the Histogram.recordValue method, sorted by popularity by default.
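Before the project examples, here is a minimal, self-contained sketch of the typical recordValue pattern: construct a Histogram, record one latency sample per measured operation, then read back percentiles. The histogram range (1 ns to 1 s, 3 significant digits), the class name, and the timing loop are illustrative assumptions, not taken from any of the projects below.

import org.HdrHistogram.Histogram;
import java.util.concurrent.TimeUnit;

public class RecordValueSketch {
    public static void main(String[] args) {
        // track latencies from 1 ns up to 1 s with 3 significant decimal digits
        Histogram histogram = new Histogram(1, TimeUnit.SECONDS.toNanos(1), 3);

        for (int i = 0; i < 10_000; i++) {
            long start = System.nanoTime();
            // ... the operation being measured would run here ...
            long elapsed = System.nanoTime() - start;
            histogram.recordValue(elapsed); // one latency sample, in nanoseconds
        }

        // report a few percentiles, scaled from nanoseconds to microseconds
        System.out.printf("count=%d p50=%.1f us p99=%.1f us max=%.1f us%n",
                histogram.getTotalCount(),
                histogram.getValueAtPercentile(50) / 1000.0,
                histogram.getValueAtPercentile(99) / 1000.0,
                histogram.getMaxValue() / 1000.0);
    }
}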

Example 1: recordLatency

import org.HdrHistogram.Histogram; // import the package/class this method depends on
public static long recordLatency(long lastNow, Histogram h, DataInputBlobReader<RawDataSchema> reader) {
    long timeMessageWasSentDelta = reader.readPackedLong();
    
    lastNow += timeMessageWasSentDelta;
    // Note: once the delta has been decoded, the message latency must be computed against the reconstructed send time.

    long latency = System.nanoTime() - lastNow;
    if (latency >= 0 && 0 != lastNow) { // guard against numerical overflow; see the docs on nanoTime()
        try {
            h.recordValue(latency);
        } catch (ArrayIndexOutOfBoundsException outofbounds) {
            //do not record
            System.out.println("warning latency:"+latency+" was out of bounds");
        }
    }
    return lastNow;
}
 
Developer: oci-pronghorn, Project: ProtocolTestProject, Lines of code: 18, Source: App.java

Example 2: exchangeMessage

import org.HdrHistogram.Histogram; // import the package/class this method depends on
private void exchangeMessage(
    final SocketChannel socketChannel,
    final TestRequestEncoder testRequest,
    final HeaderEncoder header,
    final int index,
    final Histogram histogram)
    throws IOException
{
    header.msgSeqNum(index + 2);
    timestampEncoder.encode(System.currentTimeMillis());

    final long result = testRequest.encode(writeFlyweight, 0);

    final long sendingTime = System.nanoTime();
    write(socketChannel, result);

    read(socketChannel);
    final long returnTime = System.nanoTime();
    histogram.recordValue(returnTime - sendingTime);
}
 
Developer: real-logic, Project: artio, Lines of code: 21, Source: LatencyBenchmarkClient.java

Example 3: run

import org.HdrHistogram.Histogram; // import the package/class this method depends on
private static void run(final Subscription subscription, final long warmupCount, final long measuredCount) {
    final NanoClock clock = new SystemNanoClock();
    final Histogram histogram = new Histogram(1, 1000000000, 3);
    final MutableMarketDataSnapshot snapshot = new MutableMarketDataSnapshot();
    final UnsafeBuffer unsafeBuffer = new UnsafeBuffer(0, 0);
    final AtomicLong t0 = new AtomicLong();
    final AtomicLong t1 = new AtomicLong();
    final AtomicLong t2 = new AtomicLong();
    final long n = warmupCount + measuredCount;
    final AtomicLong count = new AtomicLong();
    final FragmentHandler fh = (buf, offset, len, header) -> {
        if (count.get() == 0) t0.set(clock.nanoTime());
        else if (count.get() == warmupCount-1) t1.set(clock.nanoTime());
        else if (count.get() == n-1) t2.set(clock.nanoTime());
        unsafeBuffer.wrap(buf, offset, len);
        final MarketDataSnapshot decoded = SerializerHelper.decode(unsafeBuffer, snapshot.builder());
        final long time = clock.nanoTime();
        if (count.incrementAndGet() <= n) {
            histogram.recordValue(time - decoded.getEventTimestamp());
        }
        if (count.get() == warmupCount) {
            histogram.reset();
        }
    };
    while (count.get() < n) {
        subscription.poll(fh, 256);
    }
    final long c = count.get();
    System.out.println((t2.get() - t0.get())/1000.0 + " us total receiving time (" + (t2.get() - t0.get())/(1000f*c) + " us/message, " + c/((t2.get()-t0.get())/1000000000f) + " messages/second)");
    System.out.println();
    HistogramPrinter.printHistogram(histogram);
}
 
Developer: terzerm, Project: fx-highway, Lines of code: 33, Source: AeronSubscriber.java

Example 4: createEquivalentHistogram

import org.HdrHistogram.Histogram; // import the package/class this method depends on
private Histogram createEquivalentHistogram() {
    Histogram histogram = new Histogram(2);
    for (int i = 1; i <= 100000; i++) {
        histogram.recordValue(i);
    }
    return histogram;
}
 
Developer: vladimir-bukhtoyarov, Project: rolling-metrics, Lines of code: 8, Source: PercentileCalculationTest.java

Example 5: main

import org.HdrHistogram.Histogram; // import the package/class this method depends on
public static void main(String[] args) throws IOException {
    // set up house-keeping
    ObjectMapper mapper = new ObjectMapper();
    Histogram stats = new Histogram(1, 10000000, 2);
    Histogram global = new Histogram(1, 10000000, 2);

    // and the consumer
    KafkaConsumer<String, String> consumer;
    try (InputStream props = Resources.getResource("consumer.props").openStream()) {
        Properties properties = new Properties();
        properties.load(props);
        if (properties.getProperty("group.id") == null) {
            properties.setProperty("group.id", "group-" + new Random().nextInt(100000));
        }
        consumer = new KafkaConsumer<>(properties);
    }
    consumer.subscribe(Arrays.asList("fast-messages", "summary-markers"));
    int timeouts = 0;
    //noinspection InfiniteLoopStatement
    while (true) {
        // read records with a short timeout. If we time out, we don't really care.
        ConsumerRecords<String, String> records = consumer.poll(200);
        if (records.count() == 0) {
            timeouts++;
        } else {
            System.out.printf("Got %d records after %d timeouts\n", records.count(), timeouts);
            timeouts = 0;
        }
        for (ConsumerRecord<String, String> record : records) {
            switch (record.topic()) {
                case "fast-messages":
                    // the send time is encoded inside the message
                    JsonNode msg = mapper.readTree(record.value());
                    switch (msg.get("type").asText()) {
                        case "test":
                            long latency = (long) ((System.nanoTime() * 1e-9 - msg.get("t").asDouble()) * 1000);
                            stats.recordValue(latency);
                            global.recordValue(latency);
                            break;
                        case "marker":
                            // whenever we get a marker message, we should dump out the stats
                            // note that the number of fast messages won't necessarily be quite constant
                            System.out.printf("%d messages received in period, latency(min, max, avg, 99%%) = %d, %d, %.1f, %d (ms)\n",
                                    stats.getTotalCount(),
                                    stats.getValueAtPercentile(0), stats.getValueAtPercentile(100),
                                    stats.getMean(), stats.getValueAtPercentile(99));
                            System.out.printf("%d messages received overall, latency(min, max, avg, 99%%) = %d, %d, %.1f, %d (ms)\n",
                                    global.getTotalCount(),
                                    global.getValueAtPercentile(0), global.getValueAtPercentile(100),
                                    global.getMean(), global.getValueAtPercentile(99));

                            stats.reset();
                            break;
                        default:
                            throw new IllegalArgumentException("Illegal message type: " + msg.get("type"));
                    }
                    break;
                case "summary-markers":
                    break;
                default:
                    throw new IllegalStateException("Shouldn't be possible to get message on topic " + record.topic());
            }
        }
    }
}
 
Developer: leidaxia, Project: kafka-stream-druid, Lines of code: 66, Source: Consumer.java

Example 6: main

import org.HdrHistogram.Histogram; // import the package/class this method depends on
public static void main(String[] args) throws IOException {
    // set up house-keeping
    ObjectMapper mapper = new ObjectMapper();
    Histogram stats = new Histogram(1, 10000000, 2);
    Histogram global = new Histogram(1, 10000000, 2);

    final String TOPIC_FAST_MESSAGES = "/sample-stream:fast-messages";
    final String TOPIC_SUMMARY_MARKERS = "/sample-stream:summary-markers";

    // and the consumer
    KafkaConsumer<String, String> consumer;
    try (InputStream props = Resources.getResource("consumer.props").openStream()) {
        Properties properties = new Properties();
        properties.load(props);
        if (properties.getProperty("group.id") == null) {
            properties.setProperty("group.id", "group-" + new Random().nextInt(100000));
        }

        consumer = new KafkaConsumer<>(properties);
    }
    consumer.subscribe(Arrays.asList(TOPIC_FAST_MESSAGES, TOPIC_SUMMARY_MARKERS));
    int timeouts = 0;
    //noinspection InfiniteLoopStatement
    while (true) {
        // read records with a short timeout. If we time out, we don't really care.
        ConsumerRecords<String, String> records = consumer.poll(200);
        if (records.count() == 0) {
            timeouts++;
        } else {
            System.out.printf("Got %d records after %d timeouts\n", records.count(), timeouts);
            timeouts = 0;
        }
        for (ConsumerRecord<String, String> record : records) {
            switch (record.topic()) {
                case TOPIC_FAST_MESSAGES:
                    // the send time is encoded inside the message
                    JsonNode msg = mapper.readTree(record.value());
                    switch (msg.get("type").asText()) {
                        case "test":
                            long latency = (long) ((System.nanoTime() * 1e-9 - msg.get("t").asDouble()) * 1000);
                            stats.recordValue(latency);
                            global.recordValue(latency);
                            break;
                        case "marker":
                            // whenever we get a marker message, we should dump out the stats
                            // note that the number of fast messages won't necessarily be quite constant
                            System.out.printf("%d messages received in period, latency(min, max, avg, 99%%) = %d, %d, %.1f, %d (ms)\n",
                                    stats.getTotalCount(),
                                    stats.getValueAtPercentile(0), stats.getValueAtPercentile(100),
                                    stats.getMean(), stats.getValueAtPercentile(99));
                            System.out.printf("%d messages received overall, latency(min, max, avg, 99%%) = %d, %d, %.1f, %d (ms)\n",
                                    global.getTotalCount(),
                                    global.getValueAtPercentile(0), global.getValueAtPercentile(100),
                                    global.getMean(), global.getValueAtPercentile(99));

                            stats.reset();
                            break;
                        default:
                            throw new IllegalArgumentException("Illegal message type: " + msg.get("type"));
                    }
                    break;
                case TOPIC_SUMMARY_MARKERS:
                    break;
                default:
                    throw new IllegalStateException("Shouldn't be possible to get message on topic " + record.topic());
            }
        }
    }
}
 
Developer: mapr-demos, Project: mapr-streams-sample-programs, Lines of code: 70, Source: Consumer.java

Example 7: main

import org.HdrHistogram.Histogram; // import the package/class this method depends on
public static void main(String[] args) throws IOException {
  // set up house-keeping
  ObjectMapper mapper = new ObjectMapper();
  Histogram stats = new Histogram(1, 10000000, 2);
  Histogram global = new Histogram(1, 10000000, 2);

  final String TOPIC_FAST_MESSAGES = "/sample-stream:fast-messages";
  final String TOPIC_SUMMARY_MARKERS = "/sample-stream:summary-markers";


  Table fastMessagesTable = getTable("/apps/fast-messages");

  // and the consumer
  KafkaConsumer<String, String> consumer;
  try (InputStream props = Resources.getResource("consumer.props").openStream()) {
    Properties properties = new Properties();
    properties.load(props);
    // use a new group id for the dbconsumer
    if (properties.getProperty("group.id") == null) {
      properties.setProperty("group.id", "group-" + new Random().nextInt(100000));
    } else {
      String groupId = properties.getProperty("group.id");
      properties.setProperty("group.id", "db-" + groupId);
    }

    consumer = new KafkaConsumer<>(properties);
  }
  consumer.subscribe(Arrays.asList(TOPIC_FAST_MESSAGES, TOPIC_SUMMARY_MARKERS));
  int timeouts = 0;

  //noinspection InfiniteLoopStatement
  while (true) {
    // read records with a short timeout. If we time out, we don't really care.
    ConsumerRecords<String, String> records = consumer.poll(200);
    if (records.count() == 0) {
      timeouts++;
    } else {
      System.out.printf("Got %d records after %d timeouts\n", records.count(), timeouts);
      timeouts = 0;
    }
    for (ConsumerRecord<String, String> record : records) {
      switch (record.topic()) {
        case TOPIC_FAST_MESSAGES:
          // the send time is encoded inside the message
          JsonNode msg = mapper.readTree(record.value());
          switch (msg.get("type").asText()) {
            case "test":
              // create a Document and set an _id, in this case the message number (document will be updated each time)
              Document messageDocument = MapRDB.newDocument(msg);
              messageDocument.setId( Integer.toString(messageDocument.getInt("k")));
              fastMessagesTable.insertOrReplace( messageDocument );

              long latency = (long) ((System.nanoTime() * 1e-9 - msg.get("t").asDouble()) * 1000);
              stats.recordValue(latency);
              global.recordValue(latency);
              break;
            case "marker":
              // whenever we get a marker message, we should dump out the stats
              // note that the number of fast messages won't necessarily be quite constant
              System.out.printf("%d messages received in period, latency(min, max, avg, 99%%) = %d, %d, %.1f, %d (ms)\n",
                      stats.getTotalCount(),
                      stats.getValueAtPercentile(0), stats.getValueAtPercentile(100),
                      stats.getMean(), stats.getValueAtPercentile(99));
              System.out.printf("%d messages received overall, latency(min, max, avg, 99%%) = %d, %d, %.1f, %d (ms)\n",
                      global.getTotalCount(),
                      global.getValueAtPercentile(0), global.getValueAtPercentile(100),
                      global.getMean(), global.getValueAtPercentile(99));
              stats.reset();
              break;
            default:
              throw new IllegalArgumentException("Illegal message type: " + msg.get("type"));
          }
          break;
        case TOPIC_SUMMARY_MARKERS:
          break;
        default:
          throw new IllegalStateException("Shouldn't be possible to get message on topic " + record.topic());
      }
    }
  }
}
 
Developer: mapr-demos, Project: mapr-streams-sample-programs, Lines of code: 82, Source: DBConsumer.java

Example 8: run

import org.HdrHistogram.Histogram; // import the package/class this method depends on
public void run()
{
    final Histogram histogram = new Histogram(3);
    final long scaleToMicros = TimeUnit.MICROSECONDS.toNanos(1);
    final SocketChannel socketChannel = this.socketChannel;
    final MutableAsciiBuffer readFlyweight = LatencyUnderLoadBenchmarkClient.this.readFlyweight;
    final long[] sendTimes = LatencyUnderLoadBenchmarkClient.this.sendTimes;

    while (true)
    {
        final long startTime = System.currentTimeMillis();
        int lastMessagesReceived = 0;
        while (lastMessagesReceived < MESSAGES_EXCHANGED)
        {
            try
            {
                final int length = read(socketChannel);
                final long time = System.nanoTime();
                final int received = scanForReceivesMessages(readFlyweight, length);
                for (int j = 0; j < received; j++)
                {
                    final long duration = time - sendTimes[lastMessagesReceived + j];
                    histogram.recordValue(duration);
                }
                lastMessagesReceived += received;
            }
            catch (final IOException ex)
            {
                ex.printStackTrace();
                System.exit(-1);
            }
        }

        printThroughput(startTime, MESSAGES_EXCHANGED);
        HistogramLogReader.prettyPrint(
            System.currentTimeMillis(), histogram, "Benchmark", scaleToMicros);

        histogram.reset();
        await();
    }
}
 
Developer: real-logic, Project: artio, Lines of code: 42, Source: LatencyUnderLoadBenchmarkClient.java


Note: The org.HdrHistogram.Histogram.recordValue examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective developers, and the source code copyright remains with the original authors. Refer to each project's License before distributing or using the code, and do not reproduce this article without permission.