

Java AlreadyBeingCreatedException Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException. If you are wondering what AlreadyBeingCreatedException is for and how to use it, the selected examples below should help.


The AlreadyBeingCreatedException class belongs to the org.apache.hadoop.hdfs.protocol package. Fifteen code examples of the class are shown below, ordered by popularity.
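
Before turning to the project code, here is a minimal, self-contained sketch of the pattern most of the examples share: create a file with overwrite disabled and treat a RemoteException whose remote class name is AlreadyBeingCreatedException as "another instance already holds this file open". The class name SingletonLockDemo and the path /tmp/demo.lock are illustrative assumptions, not taken from any project below; the sketch also assumes a reachable HDFS cluster in the default Configuration.

import java.io.IOException;
import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
import org.apache.hadoop.ipc.RemoteException;

public class SingletonLockDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path lock = new Path("/tmp/demo.lock"); // hypothetical lock path
    try {
      // overwrite=false: fails if the file already exists or is being written
      FSDataOutputStream out = fs.create(lock, false);
      out.writeBytes(InetAddress.getLocalHost().getHostName());
      out.hflush(); // make the hostname visible to other readers
      System.out.println("Lock acquired; keep 'out' open until exit.");
    } catch (RemoteException e) {
      // The NameNode reports the server-side exception by class name.
      if (AlreadyBeingCreatedException.class.getName().equals(e.getClassName())) {
        System.out.println("Another instance already holds the lock.");
      } else {
        throw e;
      }
    }
  }
}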

Example 1: checkAndMarkRunning

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
/**
 * The idea for making sure that there is no more than one instance
 * running in an HDFS cluster is to create a file in HDFS, write the
 * hostname of the machine on which the instance is running to that
 * file, and not close the file until the instance exits.
 * 
 * This prevents a second instance from running because it cannot
 * create the file while the first one is running.
 * 
 * This method checks whether there is a running instance; if there is
 * none, it marks this instance as the running one.
 * Note that this is an atomic operation.
 * 
 * @return null if there is a running instance;
 *         otherwise, the output stream to the newly created file.
 */
private OutputStream checkAndMarkRunning() throws IOException {
  try {
    if (fs.exists(idPath)) {
      // try appending to it so that it will fail fast if another balancer is
      // running.
      IOUtils.closeStream(fs.append(idPath));
      fs.delete(idPath, true);
    }
    final FSDataOutputStream fsout = fs.create(idPath, false);
    // mark balancer idPath to be deleted during filesystem closure
    fs.deleteOnExit(idPath);
    if (write2IdFile) {
      fsout.writeBytes(InetAddress.getLocalHost().getHostName());
      fsout.hflush();
    }
    return fsout;
  } catch(RemoteException e) {
    if(AlreadyBeingCreatedException.class.getName().equals(e.getClassName())){
      return null;
    } else {
      throw e;
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 40, Source: NameNodeConnector.java
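
The probe append deserves a note: if a previous balancer died without cleaning up, idPath still exists but can be reclaimed, so the append-then-delete sequence clears it; if another balancer is alive and still holds the file open, the append (or the subsequent create with overwrite=false) fails fast with a wrapped AlreadyBeingCreatedException, which the catch block maps to null so the caller knows to back off. fs.deleteOnExit(idPath) additionally removes the marker when the filesystem is closed normally.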

Example 2: call

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
@Override
public FSDataOutputStream call() throws IOException {
  try {
    FileSystem fs = FSUtils.getCurrentFileSystem(getConf());
    FsPermission defaultPerms = FSUtils.getFilePermissions(fs, getConf(),
        HConstants.DATA_FILE_UMASK_KEY);
    Path tmpDir = new Path(FSUtils.getRootDir(getConf()), HConstants.HBASE_TEMP_DIRECTORY);
    fs.mkdirs(tmpDir);
    HBCK_LOCK_PATH = new Path(tmpDir, HBCK_LOCK_FILE);
    final FSDataOutputStream out = createFileWithRetries(fs, HBCK_LOCK_PATH, defaultPerms);
    out.writeBytes(InetAddress.getLocalHost().toString());
    out.flush();
    return out;
  } catch(RemoteException e) {
    if(AlreadyBeingCreatedException.class.getName().equals(e.getClassName())){
      return null;
    } else {
      throw e;
    }
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 22, Source: HBaseFsck.java

Example 3: create

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
@Override
public HdfsFileStatus create(String src, FsPermission masked,
    String clientName, EnumSetWritable<CreateFlag> flag, boolean createParent,
    short replication, long blockSize, CryptoProtocolVersion[] supportedVersions)
    throws AccessControlException, AlreadyBeingCreatedException,
           DSQuotaExceededException, FileAlreadyExistsException,
           FileNotFoundException, NSQuotaExceededException,
           ParentNotDirectoryException, SafeModeException,
           UnresolvedLinkException, SnapshotAccessControlException,
           IOException {
  try {
    AuthorizationProvider.beginClientOp();
    return server.create(src, masked, clientName, flag, createParent,
        replication, blockSize, supportedVersions);
  } finally {
    AuthorizationProvider.endClientOp();
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 19, Source: AuthorizationProviderProxyClientProtocol.java

Example 4: checkAndMarkRunning

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
/**
 * The idea for making sure that there is no more than one instance
 * running in an HDFS cluster is to create a file in HDFS, write the
 * hostname of the machine on which the instance is running to that
 * file, and not close the file until the instance exits.
 * 
 * This prevents a second instance from running because it cannot
 * create the file while the first one is running.
 * 
 * This method checks whether there is a running instance; if there is
 * none, it marks this instance as the running one.
 * Note that this is an atomic operation.
 * 
 * @return null if there is a running instance;
 *         otherwise, the output stream to the newly created file.
 */
private OutputStream checkAndMarkRunning() throws IOException {
  try {
    final FSDataOutputStream out = fs.create(idPath);
    if (write2IdFile) {
      out.writeBytes(InetAddress.getLocalHost().getHostName());
      out.hflush();
    }
    return out;
  } catch(RemoteException e) {
    if(AlreadyBeingCreatedException.class.getName().equals(e.getClassName())){
      return null;
    } else {
      throw e;
    }
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 32, Source: NameNodeConnector.java
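
Compared with Example 1, this variant predates the fail-fast probe: it calls fs.create(idPath) directly (with the default overwrite behavior) and relies solely on the AlreadyBeingCreatedException-to-null mapping to detect a balancer that is already running; there is no stale-file cleanup and no deleteOnExit.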

Example 5: createNamenode

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
static ClientProtocol createNamenode(ClientProtocol rpcNamenode,
    Configuration conf)
  throws IOException {
  long sleepTime = conf.getLong("dfs.client.rpc.retry.sleep",
      LEASE_SOFTLIMIT_PERIOD);
  RetryPolicy createPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
      5, sleepTime, TimeUnit.MILLISECONDS);

  Map<Class<? extends Exception>,RetryPolicy> remoteExceptionToPolicyMap =
    new HashMap<Class<? extends Exception>, RetryPolicy>();
  remoteExceptionToPolicyMap.put(AlreadyBeingCreatedException.class, createPolicy);

  Map<Class<? extends Exception>,RetryPolicy> exceptionToPolicyMap =
    new HashMap<Class<? extends Exception>, RetryPolicy>();
  exceptionToPolicyMap.put(RemoteException.class,
      RetryPolicies.retryByRemoteException(
          RetryPolicies.TRY_ONCE_THEN_FAIL, remoteExceptionToPolicyMap));
  RetryPolicy methodPolicy = RetryPolicies.retryByException(
      RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicyMap);
  Map<String,RetryPolicy> methodNameToPolicyMap = new HashMap<String,RetryPolicy>();

  methodNameToPolicyMap.put("create", methodPolicy);

  return (ClientProtocol) RetryProxy.create(ClientProtocol.class,
      rpcNamenode, methodNameToPolicyMap);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 27, Source: DFSClient.java
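
Read the policy composition inside-out: retryByRemoteException unwraps a RemoteException and looks up the remote class, so only AlreadyBeingCreatedException triggers the fixed-sleep policy (up to 5 attempts, sleeping dfs.client.rpc.retry.sleep milliseconds, defaulting to the lease soft-limit period); every other exception fails on the first try, and the whole policy applies only to the create method named in methodNameToPolicyMap. Presumably this lets a client whose earlier create succeeded on the server, but whose reply was lost, wait out its own stale lease and retry.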

Example 6: createNamenodeWithRetry

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
/** Create a {@link NameNode} proxy */
static DatanodeProtocolPB createNamenodeWithRetry(
    DatanodeProtocolPB rpcNamenode) {
  RetryPolicy createPolicy = RetryPolicies
      .retryUpToMaximumCountWithFixedSleep(5,
          HdfsConstants.LEASE_SOFTLIMIT_PERIOD, TimeUnit.MILLISECONDS);

  Map<Class<? extends Exception>, RetryPolicy> remoteExceptionToPolicyMap = 
      new HashMap<Class<? extends Exception>, RetryPolicy>();
  remoteExceptionToPolicyMap.put(AlreadyBeingCreatedException.class,
      createPolicy);

  Map<Class<? extends Exception>, RetryPolicy> exceptionToPolicyMap =
      new HashMap<Class<? extends Exception>, RetryPolicy>();
  exceptionToPolicyMap.put(RemoteException.class, RetryPolicies
      .retryByRemoteException(RetryPolicies.TRY_ONCE_THEN_FAIL,
          remoteExceptionToPolicyMap));
  RetryPolicy methodPolicy = RetryPolicies.retryByException(
      RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicyMap);
  Map<String, RetryPolicy> methodNameToPolicyMap = new HashMap<String, RetryPolicy>();

  methodNameToPolicyMap.put("create", methodPolicy);

  return (DatanodeProtocolPB) RetryProxy.create(DatanodeProtocolPB.class,
      rpcNamenode, methodNameToPolicyMap);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 27, Source: DatanodeProtocolClientSideTranslatorPB.java

Example 7: create

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
@Override
public HdfsFileStatus create(String src, FsPermission masked,
    String clientName, EnumSetWritable<CreateFlag> flag,
    boolean createParent, short replication, long blockSize)
    throws AccessControlException, AlreadyBeingCreatedException,
    DSQuotaExceededException, FileAlreadyExistsException,
    FileNotFoundException, NSQuotaExceededException,
    ParentNotDirectoryException, SafeModeException, UnresolvedLinkException,
    IOException {
  CreateRequestProto req = CreateRequestProto.newBuilder()
      .setSrc(src)
      .setMasked(PBHelper.convert(masked))
      .setClientName(clientName)
      .setCreateFlag(PBHelper.convertCreateFlag(flag))
      .setCreateParent(createParent)
      .setReplication(replication)
      .setBlockSize(blockSize)
      .build();
  try {
    CreateResponseProto res = rpcProxy.create(null, req);
    return res.hasFs() ? PBHelper.convert(res.getFs()) : null;
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }

}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 27, Source: ClientNamenodeProtocolTranslatorPB.java
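
Examples 7, 9, and 15 are protobuf-translator variants of the same create call. The long throws list documents what the NameNode may raise, but at runtime a server-side failure surfaces as a ServiceException, which ProtobufHelper.getRemoteException converts back into an IOException (typically a RemoteException naming the server-side class, AlreadyBeingCreatedException included) for callers to unwrap as in the other examples.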

Example 8: create

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
public void create(final String src, final FsPermission masked,
    final String clientName, final EnumSetWritable<CreateFlag> flag,
    final boolean createParent, final short replication, final long blockSize)
    throws AccessControlException, AlreadyBeingCreatedException,
    DSQuotaExceededException, FileAlreadyExistsException,
    FileNotFoundException, NSQuotaExceededException,
    ParentNotDirectoryException, SafeModeException, UnresolvedLinkException,
    IOException {
  ClientActionHandler handler = new ClientActionHandler() {
    @Override
    public Object doAction(ClientProtocol namenode)
        throws RemoteException, IOException {
      namenode
          .create(src, masked, clientName, flag, createParent, replication,
              blockSize);
      return null;
    }
  };
  doClientActionWithRetry(handler, "create");
}
 
Developer: hopshadoop, Project: hops, Lines: 21, Source: DFSClient.java

Example 9: create

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
@Override
public HdfsFileStatus create(String src, FsPermission masked,
    String clientName, EnumSetWritable<CreateFlag> flag, boolean createParent,
    short replication, long blockSize, EncodingPolicy policy)
    throws AccessControlException, AlreadyBeingCreatedException,
    DSQuotaExceededException, FileAlreadyExistsException,
    FileNotFoundException, NSQuotaExceededException,
    ParentNotDirectoryException, SafeModeException, UnresolvedLinkException,
    IOException {
  CreateRequestProto.Builder builder =
      CreateRequestProto.newBuilder().setSrc(src)
          .setMasked(PBHelper.convert(masked)).setClientName(clientName)
          .setCreateFlag(PBHelper.convertCreateFlag(flag))
          .setCreateParent(createParent).setReplication(replication)
          .setBlockSize(blockSize);
  if (policy != null) {
    builder.setPolicy(PBHelper.convert(policy));
  }
  CreateRequestProto req = builder.build();
  try {
    CreateResponseProto result = rpcProxy.create(null, req);
    return result.hasFs() ? PBHelper.convert(result.getFs()) : null;
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 27, Source: ClientNamenodeProtocolTranslatorPB.java

Example 10: createNamenode

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
private static ClientProtocol createNamenode(ClientProtocol rpcNamenode)
  throws IOException {
  RetryPolicy createPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
      5, LEASE_SOFTLIMIT_PERIOD, TimeUnit.MILLISECONDS);
  
  Map<Class<? extends Exception>,RetryPolicy> remoteExceptionToPolicyMap =
    new HashMap<Class<? extends Exception>, RetryPolicy>();
  remoteExceptionToPolicyMap.put(AlreadyBeingCreatedException.class, createPolicy);

  Map<Class<? extends Exception>,RetryPolicy> exceptionToPolicyMap =
    new HashMap<Class<? extends Exception>, RetryPolicy>();
  exceptionToPolicyMap.put(RemoteException.class, 
      RetryPolicies.retryByRemoteException(
          RetryPolicies.TRY_ONCE_THEN_FAIL, remoteExceptionToPolicyMap));
  RetryPolicy methodPolicy = RetryPolicies.retryByException(
      RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicyMap);
  Map<String,RetryPolicy> methodNameToPolicyMap = new HashMap<String,RetryPolicy>();
  
  methodNameToPolicyMap.put("create", methodPolicy);

  return (ClientProtocol) RetryProxy.create(ClientProtocol.class,
      rpcNamenode, methodNameToPolicyMap);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 24, Source: DFSClient.java

Example 11: testInternalReleaseLease_COMM_COMM

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
/**
 * Mocks an FSNamesystem instance, adds an empty file, sets the status
 * of the last two blocks both to COMMITTED, and invokes the lease
 * recovery method. An AlreadyBeingCreatedException is expected.
 * @throws AlreadyBeingCreatedException as the result
 */
@Test(expected=AlreadyBeingCreatedException.class)
public void testInternalReleaseLease_COMM_COMM () throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  LeaseManager.Lease lm = mock(LeaseManager.Lease.class);
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));

  mockFileBlocks(2, HdfsConstants.BlockUCState.COMMITTED, 
    HdfsConstants.BlockUCState.COMMITTED, file, dnd, ps, false);

  fsn.internalReleaseLease(lm, file.toString(), null);
  assertTrue("FSNamesystem.internalReleaseLease suppose to throw " +
    "AlreadyBeingCreatedException here", false);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 26, Source: TestNNLeaseRecovery.java

Example 12: testInternalReleaseLease_1blocks

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
/**
 * Mocks an FSNamesystem instance, adds an empty file with one block,
 * and invokes the lease recovery method.
 * An AlreadyBeingCreatedException is expected.
 * @throws AlreadyBeingCreatedException as the result
 */
@Test(expected=AlreadyBeingCreatedException.class)
public void testInternalReleaseLease_1blocks () throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  LeaseManager.Lease lm = mock(LeaseManager.Lease.class);
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));

  mockFileBlocks(1, null, HdfsConstants.BlockUCState.COMMITTED, file, dnd, ps, false);

  fsn.internalReleaseLease(lm, file.toString(), null);
  assertTrue("FSNamesystem.internalReleaseLease suppose to throw " +
    "AlreadyBeingCreatedException here", false);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 25, Source: TestNNLeaseRecovery.java
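
In both tests, the @Test(expected = AlreadyBeingCreatedException.class) annotation is what enforces the expectation; the trailing assertTrue(..., false) only executes, and deliberately fails the test, if internalReleaseLease returns without throwing.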

Example 13: createNamenode

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
private static ClientProtocol createNamenode(ClientProtocol rpcNamenode,
    Configuration conf)
  throws IOException {
  long sleepTime = conf.getLong("dfs.client.rpc.retry.sleep",
      LEASE_SOFTLIMIT_PERIOD);
  RetryPolicy createPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
      5, sleepTime, TimeUnit.MILLISECONDS);

  Map<Class<? extends Exception>,RetryPolicy> remoteExceptionToPolicyMap =
    new HashMap<Class<? extends Exception>, RetryPolicy>();
  remoteExceptionToPolicyMap.put(AlreadyBeingCreatedException.class, createPolicy);

  Map<Class<? extends Exception>,RetryPolicy> exceptionToPolicyMap =
    new HashMap<Class<? extends Exception>, RetryPolicy>();
  exceptionToPolicyMap.put(RemoteException.class,
      RetryPolicies.retryByRemoteException(
          RetryPolicies.TRY_ONCE_THEN_FAIL, remoteExceptionToPolicyMap));
  RetryPolicy methodPolicy = RetryPolicies.retryByException(
      RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicyMap);
  Map<String,RetryPolicy> methodNameToPolicyMap = new HashMap<String,RetryPolicy>();

  methodNameToPolicyMap.put("create", methodPolicy);

  return (ClientProtocol) RetryProxy.create(ClientProtocol.class,
      rpcNamenode, methodNameToPolicyMap);
}
 
Developer: iVCE, Project: RDFS, Lines: 27, Source: DFSClient.java

Example 14: takeOwnership

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
/**
 * Takes ownership of the lock file if possible.
 * @param lockFile
 * @param lastEntry   last entry in the lock file. This parameter is an
 *                    optimization: we don't scan the lock file again to find
 *                    its last entry, since that was already done by the logic
 *                    that checks whether the lock file is stale, so this value
 *                    comes from that earlier scan.
 * @param spoutId     spout id
 * @throws IOException if unable to acquire
 * @return null if the lock file is not recoverable
 */
public static FileLock takeOwnership(FileSystem fs, Path lockFile, LogEntry lastEntry, String spoutId)
        throws IOException {
  try {
    if(fs instanceof DistributedFileSystem ) {
      if( !((DistributedFileSystem) fs).recoverLease(lockFile) ) {
        LOG.warn("Unable to recover lease on lock file {} right now. Cannot transfer ownership. Will need to try later. Spout = {}", lockFile, spoutId);
        return null;
      }
    }
    return new FileLock(fs, lockFile, spoutId, lastEntry);
  } catch (IOException e) {
    if (e instanceof RemoteException &&
            ((RemoteException) e).unwrapRemoteException() instanceof AlreadyBeingCreatedException) {
      LOG.warn("Lock file " + lockFile + "is currently open. Cannot transfer ownership now. Will need to try later. Spout= " + spoutId, e);
      return null;
    } else { // unexpected error
      LOG.warn("Cannot transfer ownership now for lock file " + lockFile + ". Will need to try later. Spout =" + spoutId, e);
      throw e;
    }
  }
}
 
Developer: alibaba, Project: jstorm, Lines: 33, Source: FileLock.java
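
Example 14 shows the other common detection idiom: instead of comparing class names as in Examples 1, 2, and 4, it unwraps the RemoteException to the local exception type. A tiny helper along these lines (the class and method names are ours, not from the project; it assumes only hadoop-common and hadoop-hdfs on the classpath) factors the check out:

import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
import org.apache.hadoop.ipc.RemoteException;

public final class RemoteExceptionProbe {
  private RemoteExceptionProbe() {}

  /** True if e is a RemoteException wrapping AlreadyBeingCreatedException. */
  public static boolean isAlreadyBeingCreated(IOException e) {
    return e instanceof RemoteException
        && ((RemoteException) e).unwrapRemoteException()
            instanceof AlreadyBeingCreatedException;
  }
}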

Example 15: create

import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; // import the required package/class
@Override
public HdfsFileStatus create(String src, FsPermission masked,
    String clientName, EnumSetWritable<CreateFlag> flag,
    boolean createParent, short replication, long blockSize, 
    CryptoProtocolVersion[] supportedVersions)
    throws AccessControlException, AlreadyBeingCreatedException,
    DSQuotaExceededException, FileAlreadyExistsException,
    FileNotFoundException, NSQuotaExceededException,
    ParentNotDirectoryException, SafeModeException, UnresolvedLinkException,
    IOException {
  CreateRequestProto.Builder builder = CreateRequestProto.newBuilder()
      .setSrc(src)
      .setMasked(PBHelper.convert(masked))
      .setClientName(clientName)
      .setCreateFlag(PBHelper.convertCreateFlag(flag))
      .setCreateParent(createParent)
      .setReplication(replication)
      .setBlockSize(blockSize);
  builder.addAllCryptoProtocolVersion(PBHelper.convert(supportedVersions));
  CreateRequestProto req = builder.build();
  try {
    CreateResponseProto res = rpcProxy.create(null, req);
    return res.hasFs() ? PBHelper.convert(res.getFs()) : null;
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }

}
 
Developer: naver, Project: hadoop, Lines: 29, Source: ClientNamenodeProtocolTranslatorPB.java


Note: The org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from community open-source projects, and copyright remains with the original authors; consult each project's license before redistributing or reusing the code. Do not republish without permission.