

Java StringUtils.hexStringToByte Method Code Examples

This article collects typical usage examples of the Java method org.apache.hadoop.util.StringUtils.hexStringToByte. If you are wondering what StringUtils.hexStringToByte does, how to use it, or where to find real-world examples of it, the curated code samples below should help. You can also explore the other usage examples of org.apache.hadoop.util.StringUtils for further context.


The sections below present 4 code examples of StringUtils.hexStringToByte, ordered by popularity by default.
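
Before diving into the examples, here is a minimal sketch of the method itself: hexStringToByte decodes a hex string into the corresponding byte array, and byteToHexString is its inverse. The class name HexStringToByteDemo is hypothetical; the sketch assumes a Hadoop client dependency on the classpath.

import org.apache.hadoop.util.StringUtils;

public class HexStringToByteDemo {
  public static void main(String[] args) {
    // "deadbeef" decodes to the four bytes 0xDE 0xAD 0xBE 0xEF.
    byte[] bytes = StringUtils.hexStringToByte("deadbeef");
    System.out.println(bytes.length);                       // 4
    // byteToHexString is the inverse conversion.
    System.out.println(StringUtils.byteToHexString(bytes)); // deadbeef
  }
}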

Example 1: hexDumpToBytes

import org.apache.hadoop.util.StringUtils; // import the required package/class
/**
 * Convert a string of lines that look like:
 *   "68 72 70 63 02 00 00 00  82 00 1d 6f 72 67 2e 61 hrpc.... ...org.a"
 * .. into an array of bytes.
 */
private static byte[] hexDumpToBytes(String hexdump) {
  final int LAST_HEX_COL = 3 * 16;
  
  StringBuilder hexString = new StringBuilder();
  
  for (String line : StringUtils.toUpperCase(hexdump).split("\n")) {
    hexString.append(line.substring(0, LAST_HEX_COL).replace(" ", ""));
  }
  return StringUtils.hexStringToByte(hexString.toString());
}
 
Developer: nucypher, Project: hadoop-oss, Lines: 16, Source: TestIPC.java
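
To see the column arithmetic in action, here is a hedged, self-contained sketch that applies the same slicing to one hex-dump line (HexDumpDemo and the sample line are made up for illustration): character positions 0 through 47 hold the sixteen hex columns, and everything after that is the ASCII gutter, which gets discarded.

import org.apache.hadoop.util.StringUtils;

public class HexDumpDemo {
  public static void main(String[] args) {
    // One line in the expected format: 16 hex columns, then an ASCII gutter.
    String line =
        "68 72 70 63 09 00 00 00  00 00 00 00 00 00 00 00 hrpc.... ........";
    final int LAST_HEX_COL = 3 * 16; // 48: one past the last hex column
    String hex = StringUtils.toUpperCase(line)
        .substring(0, LAST_HEX_COL).replace(" ", "");
    byte[] bytes = StringUtils.hexStringToByte(hex);
    System.out.println(bytes.length); // 16
  }
}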

Example 2: getFileChecksum

import org.apache.hadoop.util.StringUtils; // import the required package/class
@Override
public FileChecksum getFileChecksum(Path f) throws IOException {
  Map<String, String> params = new HashMap<String, String>();
  params.put(OP_PARAM, Operation.GETFILECHECKSUM.toString());
  HttpURLConnection conn =
    getConnection(Operation.GETFILECHECKSUM.getMethod(), params, f, true);
  HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
  final JSONObject json = (JSONObject) ((JSONObject)
    HttpFSUtils.jsonParse(conn)).get(FILE_CHECKSUM_JSON);
  return new FileChecksum() {
    @Override
    public String getAlgorithmName() {
      return (String) json.get(CHECKSUM_ALGORITHM_JSON);
    }

    @Override
    public int getLength() {
      return ((Long) json.get(CHECKSUM_LENGTH_JSON)).intValue();
    }

    @Override
    public byte[] getBytes() {
      return StringUtils.hexStringToByte((String) json.get(CHECKSUM_BYTES_JSON));
    }

    @Override
    public void write(DataOutput out) throws IOException {
      throw new UnsupportedOperationException();
    }

    @Override
    public void readFields(DataInput in) throws IOException {
      throw new UnsupportedOperationException();
    }
  };
}
 
Developer: naver, Project: hadoop, Lines: 37, Source: HttpFSFileSystem.java
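
The HttpFS response carries the checksum bytes as a hex string inside JSON, so getBytes() simply runs that field through hexStringToByte. Here is a hedged sketch with a plain Map standing in for the parsed JSON object; the map contents are hypothetical sample values, not output from a real server.

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.util.StringUtils;

public class ChecksumJsonDemo {
  public static void main(String[] args) {
    // Stand-in for the parsed FileChecksum JSON object.
    Map<String, Object> json = new HashMap<>();
    json.put("algorithm", "MD5-of-0MD5-of-512CRC32");
    json.put("length", 28L);
    json.put("bytes",
        "0000020000000000000000002b480b1d91f7d6d0e6fc0ac9f3b3bb15");

    // Decode the hex field back into the raw checksum bytes.
    byte[] raw = StringUtils.hexStringToByte((String) json.get("bytes"));
    System.out.println(raw.length); // 28
  }
}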

Example 3: toMD5MD5CRC32FileChecksum

import org.apache.hadoop.util.StringUtils; // import the required package/class
/** Convert a Json map to a MD5MD5CRC32FileChecksum. */
public static MD5MD5CRC32FileChecksum toMD5MD5CRC32FileChecksum(
    final Map<?, ?> json) throws IOException {
  if (json == null) {
    return null;
  }

  final Map<?, ?> m = (Map<?, ?>)json.get(FileChecksum.class.getSimpleName());
  final String algorithm = (String)m.get("algorithm");
  final int length = ((Number) m.get("length")).intValue();
  final byte[] bytes = StringUtils.hexStringToByte((String)m.get("bytes"));

  final DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
  final DataChecksum.Type crcType = 
      MD5MD5CRC32FileChecksum.getCrcTypeFromAlgorithmName(algorithm);
  final MD5MD5CRC32FileChecksum checksum;

  // Recreate what DFSClient would have returned.
  switch(crcType) {
    case CRC32:
      checksum = new MD5MD5CRC32GzipFileChecksum();
      break;
    case CRC32C:
      checksum = new MD5MD5CRC32CastagnoliFileChecksum();
      break;
    default:
      throw new IOException("Unknown algorithm: " + algorithm);
  }
  checksum.readFields(in);

  //check algorithm name
  if (!checksum.getAlgorithmName().equals(algorithm)) {
    throw new IOException("Algorithm not matched. Expected " + algorithm
        + ", Received " + checksum.getAlgorithmName());
  }
  //check length
  if (length != checksum.getLength()) {
    throw new IOException("Length not matched: length=" + length
        + ", checksum.getLength()=" + checksum.getLength());
  }

  return checksum;
}
 
Developer: naver, Project: hadoop, Lines: 44, Source: JsonUtil.java
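
The decoded bytes are exactly the serialized form that readFields expects, so the round trip can be sketched without any JSON at all. A hedged example follows; the hex value is a hypothetical serialization consisting of a 4-byte bytesPerCRC of 512, an 8-byte crcPerBlock of 0, and a 16-byte MD5 digest.

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import org.apache.hadoop.fs.MD5MD5CRC32GzipFileChecksum;
import org.apache.hadoop.util.StringUtils;

public class ChecksumRoundTripDemo {
  public static void main(String[] args) throws IOException {
    byte[] bytes = StringUtils.hexStringToByte(
        "0000020000000000000000002b480b1d91f7d6d0e6fc0ac9f3b3bb15");
    MD5MD5CRC32GzipFileChecksum checksum = new MD5MD5CRC32GzipFileChecksum();
    // readFields consumes bytesPerCRC, crcPerBlock, then the MD5 digest.
    checksum.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
    System.out.println(checksum.getAlgorithmName()); // MD5-of-0MD5-of-512CRC32
  }
}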

Example 4: testFailOnPreUpgradeImage

import org.apache.hadoop.util.StringUtils; // import the required package/class
/**
 * Test that sets up a fake image from Hadoop 0.3.0 and tries to start a
 * NN, verifying that the correct error message is thrown.
 */
@Test
public void testFailOnPreUpgradeImage() throws IOException {
  Configuration conf = new HdfsConfiguration();

  File namenodeStorage = new File(TEST_ROOT_DIR, "nnimage-0.3.0");
  conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, namenodeStorage.toString());

  // Set up a fake NN storage that looks like an ancient Hadoop dir circa 0.3.0
  FileUtil.fullyDelete(namenodeStorage);
  assertTrue("Make " + namenodeStorage, namenodeStorage.mkdirs());
  File imageDir = new File(namenodeStorage, "image");
  assertTrue("Make " + imageDir, imageDir.mkdirs());

  // Hex dump of a formatted image from Hadoop 0.3.0
  File imageFile = new File(imageDir, "fsimage");
  byte[] imageBytes = StringUtils.hexStringToByte(
    "fffffffee17c0d2700000000");
  FileOutputStream fos = new FileOutputStream(imageFile);
  try {
    fos.write(imageBytes);
  } finally {
    fos.close();
  }

  // Now try to start an NN from it

  MiniDFSCluster cluster = null;
  try {
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0)
      .format(false)
      .manageDataDfsDirs(false)
      .manageNameDfsDirs(false)
      .startupOption(StartupOption.REGULAR)
      .build();
    fail("Was able to start NN from 0.3.0 image");
  } catch (IOException ioe) {
    if (!ioe.toString().contains("Old layout version is 'too old'")) {
      throw ioe;
    }
  } finally {
    // We expect startup to fail, but just in case it didn't, shutdown now.
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 51, Source: TestDFSUpgradeFromImage.java
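
In this test, hexStringToByte is a compact way to embed a small binary fixture directly in the source. Here is a hedged sketch of the same write using try-with-resources instead of the explicit finally/close; FakeImageDemo and the output path are made up for illustration.

import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.hadoop.util.StringUtils;

public class FakeImageDemo {
  public static void main(String[] args) throws IOException {
    // The same 12-byte fake fsimage used in the test above.
    byte[] imageBytes = StringUtils.hexStringToByte("fffffffee17c0d2700000000");
    try (FileOutputStream fos = new FileOutputStream("fsimage")) {
      fos.write(imageBytes);
    }
  }
}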


Note: The org.apache.hadoop.util.StringUtils.hexStringToByte examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from community-contributed open-source projects; copyright remains with the original authors, and any use or redistribution must follow the corresponding project's license. Do not reproduce without permission.