

Java TraceScope.close Method Code Examples

This article collects typical usage examples of the org.apache.htrace.TraceScope.close method in Java. If you are wondering what TraceScope.close does, how to call it, or what real-world uses of it look like, the hand-picked code examples below should help. You can also look further into other usage examples of the enclosing class, org.apache.htrace.TraceScope.


The sections below show 15 code examples of the TraceScope.close method, sorted by popularity by default.
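Before looking at the individual examples, the pattern they all share can be summarized in a minimal sketch: start a span with Trace.startSpan, do the traced work, and close the returned TraceScope in a finally block so the span is always ended, even when the work throws. The class, method, and span names below are hypothetical and exist only for illustration; the HTrace calls themselves (Trace.startSpan, Sampler.ALWAYS, TraceScope.close) are the same ones used by the examples on this page.

import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

// Hypothetical class that illustrates the canonical try/finally usage.
public class TraceScopeCloseSketch {

  public void tracedOperation() {
    // Sampler.ALWAYS samples every request, as in Example 15 below.
    TraceScope scope = Trace.startSpan("tracedOperation", Sampler.ALWAYS);
    try {
      doWork(); // the operation being traced
    } finally {
      // Always close the scope so the span is ended and handed off to the
      // configured span receivers, even if doWork() throws.
      scope.close();
    }
  }

  private void doWork() {
    // placeholder for the real, traced work
  }
}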

Example 1: getBlockLocations

import org.apache.htrace.TraceScope; // import the package/class this method depends on
/**
 * Get block location info about file
 * 
 * getBlockLocations() returns a list of hostnames that store 
 * data for a specific file region.  It returns a set of hostnames
 * for every block within the indicated region.
 *
 * This function is very useful when writing code that considers
 * data-placement when performing operations.  For example, the
 * MapReduce system tries to schedule tasks on the same machines
 * as the data-block the task processes. 
 */
public BlockLocation[] getBlockLocations(String src, long start, 
      long length) throws IOException, UnresolvedLinkException {
  TraceScope scope = getPathTraceScope("getBlockLocations", src);
  try {
    LocatedBlocks blocks = getLocatedBlocks(src, start, length);
    BlockLocation[] locations =  DFSUtil.locatedBlocks2Locations(blocks);
    HdfsBlockLocation[] hdfsLocations = new HdfsBlockLocation[locations.length];
    for (int i = 0; i < locations.length; i++) {
      hdfsLocations[i] = new HdfsBlockLocation(locations[i], blocks.get(i));
    }
    return hdfsLocations;
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 28, Source: DFSClient.java

Example 2: create

import org.apache.htrace.TraceScope; // import the package/class this method depends on
/**
 * <p>
 * NONSEQUENTIAL create is an idempotent operation; retry before throwing
 * exceptions. This function will not throw the NodeExists exception back
 * to the application.
 * </p>
 * <p>
 * SEQUENTIAL create, however, is NOT an idempotent operation, so an
 * identifier must be added to the path in order to verify whether the
 * previous attempt succeeded.
 * </p>
 *
 * @return Path
 */
public String create(String path, byte[] data, List<ACL> acl,
    CreateMode createMode)
throws KeeperException, InterruptedException {
  TraceScope traceScope = null;
  try {
    traceScope = Trace.startSpan("RecoverableZookeeper.create");
    byte[] newData = appendMetaData(data);
    switch (createMode) {
      case EPHEMERAL:
      case PERSISTENT:
        return createNonSequential(path, newData, acl, createMode);

      case EPHEMERAL_SEQUENTIAL:
      case PERSISTENT_SEQUENTIAL:
        return createSequential(path, newData, acl, createMode);

      default:
        throw new IllegalArgumentException("Unrecognized CreateMode: " +
            createMode);
    }
  } finally {
    if (traceScope != null) traceScope.close();
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 40, Source: RecoverableZooKeeper.java

Example 3: setAcl

import org.apache.htrace.TraceScope; // import the package/class this method depends on
public void setAcl(String src, List<AclEntry> aclSpec) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("setAcl", traceSampler);
  try {
    namenode.setAcl(src, aclSpec);
  } catch(RemoteException re) {
    throw re.unwrapRemoteException(AccessControlException.class,
                                   AclException.class,
                                   FileNotFoundException.class,
                                   NSQuotaExceededException.class,
                                   SafeModeException.class,
                                   SnapshotAccessControlException.class,
                                   UnresolvedPathException.class);
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 18, Source: DFSClient.java

Example 4: removeAcl

import org.apache.htrace.TraceScope; // import the package/class this method depends on
public void removeAcl(String src) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("removeAcl", traceSampler);
  try {
    namenode.removeAcl(src);
  } catch(RemoteException re) {
    throw re.unwrapRemoteException(AccessControlException.class,
                                   AclException.class,
                                   FileNotFoundException.class,
                                   NSQuotaExceededException.class,
                                   SafeModeException.class,
                                   SnapshotAccessControlException.class,
                                   UnresolvedPathException.class);
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 18, Source: DFSClient.java

Example 5: getSnapshottableDirListing

import org.apache.htrace.TraceScope; // import the package/class this method depends on
/**
 * Get all the current snapshottable directories.
 * @return All the current snapshottable directories
 * @throws IOException
 * @see ClientProtocol#getSnapshottableDirListing()
 */
public SnapshottableDirectoryStatus[] getSnapshottableDirListing()
    throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("getSnapshottableDirListing",
      traceSampler);
  try {
    return namenode.getSnapshottableDirListing();
  } catch(RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 20, Source: DFSClient.java

Example 6: datanodeReport

import org.apache.htrace.TraceScope; // import the package/class this method depends on
public DatanodeInfo[] datanodeReport(DatanodeReportType type)
    throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("datanodeReport", traceSampler);
  try {
    return namenode.getDatanodeReport(type);
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 11, Source: DFSClient.java

Example 7: opCopyBlock

import org.apache.htrace.TraceScope; // import the package/class this method depends on
/** Receive OP_COPY_BLOCK */
private void opCopyBlock(DataInputStream in) throws IOException {
  OpCopyBlockProto proto = OpCopyBlockProto.parseFrom(vintPrefixed(in));
  TraceScope traceScope = continueTraceSpan(proto.getHeader(),
      proto.getClass().getSimpleName());
  try {
    copyBlock(PBHelper.convert(proto.getHeader().getBlock()),
        PBHelper.convert(proto.getHeader().getToken()));
  } finally {
    if (traceScope != null) traceScope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 13, Source: Receiver.java

Example 8: callGetStats

import org.apache.htrace.TraceScope; // import the package/class this method depends on
private long[] callGetStats() throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("getStats", traceSampler);
  try {
    return namenode.getStats();
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 10, Source: DFSClient.java

Example 9: hsync

import org.apache.htrace.TraceScope; // import the package/class this method depends on
@Override
public void hsync() throws IOException {
  TraceScope scope =
      dfsClient.getPathTraceScope("hsync", src);
  try {
    flushOrSync(true, EnumSet.noneOf(SyncFlag.class));
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 11, Source: DFSOutputStream.java

Example 10: setQuotaByStorageType

import org.apache.htrace.TraceScope; // import the package/class this method depends on
/**
 * Sets or resets quotas by storage type for a directory.
 * @see ClientProtocol#setQuota(String, long, long, StorageType)
 */
void setQuotaByStorageType(String src, StorageType type, long quota)
    throws IOException {
  if (quota <= 0 && quota != HdfsConstants.QUOTA_DONT_SET &&
      quota != HdfsConstants.QUOTA_RESET) {
    throw new IllegalArgumentException("Invalid values for quota :" +
      quota);
  }
  if (type == null) {
    throw new IllegalArgumentException("Invalid storage type(null)");
  }
  if (!type.supportTypeQuota()) {
    throw new IllegalArgumentException("Don't support Quota for storage type : "
      + type.toString());
  }
  TraceScope scope = getPathTraceScope("setQuotaByStorageType", src);
  try {
    namenode.setQuota(src, HdfsConstants.QUOTA_DONT_SET, quota, type);
  } catch (RemoteException re) {
    throw re.unwrapRemoteException(AccessControlException.class,
      FileNotFoundException.class,
      QuotaByStorageTypeExceededException.class,
      UnresolvedPathException.class,
      SnapshotAccessControlException.class);
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 32, Source: DFSClient.java

Example 11: allowSnapshot

import org.apache.htrace.TraceScope; // import the package/class this method depends on
/**
 * Allow snapshot on a directory.
 * 
 * @see ClientProtocol#allowSnapshot(String snapshotRoot)
 */
public void allowSnapshot(String snapshotRoot) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("allowSnapshot", traceSampler);
  try {
    namenode.allowSnapshot(snapshotRoot);
  } catch (RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 17, Source: DFSClient.java

Example 12: modifyCacheDirective

import org.apache.htrace.TraceScope; // import the package/class this method depends on
public void modifyCacheDirective(
    CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("modifyCacheDirective", traceSampler);
  try {
    namenode.modifyCacheDirective(info, flags);
  } catch (RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 13, Source: DFSClient.java

Example 13: tracedWriteRequest

import org.apache.htrace.TraceScope; // import the package/class this method depends on
protected void tracedWriteRequest(Call call, int priority, Span span) throws IOException {
  TraceScope ts = Trace.continueSpan(span);
  try {
    writeRequest(call, priority, span);
  } finally {
    ts.close();
  }
}
 
Developer: fengchen8086, Project: ditb, Lines of code: 9, Source: RpcClientImpl.java

Example 14: rollingUpgrade

import org.apache.htrace.TraceScope; // import the package/class this method depends on
RollingUpgradeInfo rollingUpgrade(RollingUpgradeAction action) throws IOException {
  TraceScope scope = Trace.startSpan("rollingUpgrade", traceSampler);
  try {
    return namenode.rollingUpgrade(action);
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 9, Source: DFSClient.java

Example 15: readWithTracing

import org.apache.htrace.TraceScope; // import the package/class this method depends on
public void readWithTracing() throws Exception {
  String fileName = "testReadTraceHooks.dat";
  writeTestFile(fileName);
  long startTime = System.currentTimeMillis();
  TraceScope ts = Trace.startSpan("testReadTraceHooks", Sampler.ALWAYS);
  readTestFile(fileName);
  ts.close();
  long endTime = System.currentTimeMillis();

  String[] expectedSpanNames = {
    "testReadTraceHooks",
    "org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations",
    "ClientNamenodeProtocol#getBlockLocations",
    "OpReadBlockProto"
  };
  assertSpanNamesFound(expectedSpanNames);

  // The trace should last about the same amount of time as the test
  Map<String, List<Span>> map = SetSpanReceiver.SetHolder.getMap();
  Span s = map.get("testReadTraceHooks").get(0);
  Assert.assertNotNull(s);

  long spanStart = s.getStartTimeMillis();
  long spanEnd = s.getStopTimeMillis();
  Assert.assertTrue(spanStart - startTime < 100);
  Assert.assertTrue(spanEnd - endTime < 100);

  // There should only be one trace id as it should all be homed in the
  // top trace.
  for (Span span : SetSpanReceiver.SetHolder.spans.values()) {
    Assert.assertEquals(ts.getSpan().getTraceId(), span.getTraceId());
  }
  SetSpanReceiver.SetHolder.spans.clear();
}
 
Developer: naver, Project: hadoop, Lines of code: 35, Source: TestTracing.java


Note: The org.apache.htrace.TraceScope.close method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers, and the source code copyright belongs to the original authors. Please refer to the corresponding project's License when distributing or using the code; do not reproduce without permission.