

Java MetricsRecord.timestamp Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.metrics2.MetricsRecord.timestamp. If you are wondering what MetricsRecord.timestamp does, how to call it, or what it looks like in real code, the curated examples below should help. You can also browse further usage examples of the enclosing class, org.apache.hadoop.metrics2.MetricsRecord.


Three code examples of the MetricsRecord.timestamp method are shown below, ordered by popularity by default.
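All three examples treat the return value of timestamp() as milliseconds since the Unix epoch (Examples 2 and 3 divide it by 1000 to get the epoch seconds Graphite expects). As a minimal sketch, reading it with java.time could look like the snippet below; the helper class and method names are made up for illustration:

import java.time.Instant;

import org.apache.hadoop.metrics2.MetricsRecord;

class RecordTimeUtil {
  // MetricsRecord.timestamp() returns the snapshot time in epoch milliseconds,
  // so it maps directly onto java.time.Instant.
  static Instant recordTime(MetricsRecord record) {
    return Instant.ofEpochMilli(record.timestamp());
  }
}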

Example 1: recordToJson

import java.net.InetAddress;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsTag; // imports needed by this method
StringBuilder recordToJson(MetricsRecord record) {
  // Create a json object from a metrics record.
  StringBuilder jsonLines = new StringBuilder();
  Long timestamp = record.timestamp();
  Date currDate = new Date(timestamp);
  SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
  String date = dateFormat.format(currDate);
  SimpleDateFormat timeFormat = new SimpleDateFormat("hh:mm:ss");
  String time = timeFormat.format(currDate);
  String hostname = "null";
  try {
    hostname = InetAddress.getLocalHost().getHostName();
  } catch (Exception e) {
    LOG.warn("Error getting Hostname, going to continue");
  }
  jsonLines.append("{\"hostname\": \"" + hostname);
  jsonLines.append("\", \"timestamp\": " + timestamp);
  jsonLines.append(", \"date\": \"" + date);
  jsonLines.append("\",\"time\": \"" + time);
  jsonLines.append("\",\"name\": \"" + record.name() + "\" ");
  for (MetricsTag tag : record.tags()) {
    jsonLines.append(
        ", \"" + tag.name().toString().replaceAll("[\\p{Cc}]", "") + "\": ");
    jsonLines.append(" \"" + tag.value().toString() + "\"");
  }
  for (AbstractMetric m : record.metrics()) {
    jsonLines.append(
        ", \"" + m.name().toString().replaceAll("[\\p{Cc}]", "") + "\": ");
    jsonLines.append(" \"" + m.value().toString() + "\"");
  }
  jsonLines.append("}");
  return jsonLines;
}
 
Author: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 34, Source file: TestKafkaMetrics.java
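For reference, for a record carrying a single tag and a single metric, the string built above would look roughly like the line below; the host name, timestamp, and tag/metric names and values are made up for illustration:

{"hostname": "host1.example.com", "timestamp": 1453123456789, "date": "2016-01-18","time": "01:24:16","name": "testRecord" , "testTag1":  "testTagValue1", "testMetric1":  "42"}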

Example 2: putMetrics

import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsException;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsTag; // imports needed by this method
@Override
public void putMetrics(MetricsRecord record) {
    StringBuilder lines = new StringBuilder();
    StringBuilder metricsPathPrefix = new StringBuilder();

    // Configure the hierarchical place to display the graph.
    metricsPathPrefix.append(metricsPrefix).append(".")
            .append(record.context()).append(".").append(record.name());

    for (MetricsTag tag : record.tags()) {
        if (tag.value() != null) {
            metricsPathPrefix.append(".");
            metricsPathPrefix.append(tag.name());
            metricsPathPrefix.append("=");
            metricsPathPrefix.append(tag.value());
        }
    }

    // The record timestamp is in milliseconds while Graphite expects an epoch time in seconds.
    long timestamp = record.timestamp() / 1000L;

    // Collect datapoints.
    for (AbstractMetric metric : record.metrics()) {
        lines.append(
                metricsPathPrefix.toString() + "."
                        + metric.name().replace(' ', '.')).append(" ")
                .append(metric.value()).append(" ").append(timestamp)
                .append("\n");
    }

    try {
        if (writer != null) {
            writer.write(lines.toString());
        } else {
            throw new MetricsException("Writer in GraphiteSink is null!");
        }
    } catch (Exception e) {
        throw new MetricsException("Error sending metrics", e);
    }
}
 
Author: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 41, Source file: GraphiteSink.java
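Each metric appended in the loop above becomes one line of Graphite's plaintext protocol: the dot-separated path built from the prefix, context, record name, and tags, followed by the value and the epoch-second timestamp. With made-up values, a single line in the lines buffer would look like:

test.dfs.NameNodeActivity.Hostname=host1.FilesCreated 42 1453123456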

Example 3: putMetrics

import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsException;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsTag; // imports needed by this method
@Override
public void putMetrics(MetricsRecord record) {
    StringBuilder lines = new StringBuilder();
    StringBuilder metricsPathPrefix = new StringBuilder();

    // Configure the hierarchical place to display the graph.
    metricsPathPrefix.append(metricsPrefix).append(".")
            .append(record.context()).append(".").append(record.name());

    for (MetricsTag tag : record.tags()) {
        if (tag.value() != null) {
            metricsPathPrefix.append(".");
            metricsPathPrefix.append(tag.name());
            metricsPathPrefix.append("=");
            metricsPathPrefix.append(tag.value());
        }
    }

    // The record timestamp is in milliseconds while Graphite expects an epoch time in seconds.
    long timestamp = record.timestamp() / 1000L;

    // Collect datapoints.
    for (AbstractMetric metric : record.metrics()) {
        lines.append(
                metricsPathPrefix.toString() + "."
                        + metric.name().replace(' ', '.')).append(" ")
                .append(metric.value()).append(" ").append(timestamp)
                .append("\n");
    }

    try {
        graphite.write(lines.toString());
    } catch (Exception e) {
        LOG.warn("Error sending metrics to Graphite", e);
        try {
            graphite.close();
        } catch (Exception e1) {
            throw new MetricsException("Error closing connection to Graphite", e1);
        }
    }
}
 
Author: nucypher, Project: hadoop-oss, Lines: 42, Source file: GraphiteSink.java
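The graphite field used in Example 3 wraps the TCP connection to the Graphite server and exposes write and close calls. The sketch below shows one way such a writer could be implemented; the class and member names are assumptions for illustration, not the actual helper class inside Hadoop's GraphiteSink:

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class SimpleGraphiteWriter {
  private final String host;
  private final int port;
  private Socket socket;
  private Writer out;

  SimpleGraphiteWriter(String host, int port) {
    this.host = host;
    this.port = port;
  }

  // Open a plain TCP connection to the Graphite line receiver (default port 2003).
  void connect() throws IOException {
    socket = new Socket(host, port);
    out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8);
  }

  // Send one or more newline-terminated "path value timestamp" lines.
  void write(String lines) throws IOException {
    out.write(lines);
    out.flush();
  }

  // Close the stream and the underlying socket.
  void close() throws IOException {
    try {
      if (out != null) {
        out.close();
      }
    } finally {
      if (socket != null) {
        socket.close();
      }
    }
  }
}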


Note: The org.apache.hadoop.metrics2.MetricsRecord.timestamp examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and any redistribution or use should follow the corresponding project's license. Please do not republish without permission.