This article collects typical usage examples of the Java method org.apache.hadoop.util.DataChecksum.newCrc32. If you have been wondering what DataChecksum.newCrc32 does, how to call it, or what real-world usages look like, the curated code samples below should help. You can also explore other usages of the enclosing class, org.apache.hadoop.util.DataChecksum.
The following 4 code examples show DataChecksum.newCrc32 in use, sorted by popularity by default. You can upvote the examples you find useful; your votes help the system recommend better Java code examples.
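As a quick orientation before the examples, here is a minimal standalone sketch of what newCrc32 returns: a plain java.util.zip.Checksum that you update with bytes and read back as a 32-bit value. The class name and payload below are made up for illustration and are not part of the examples that follow.

import java.nio.charset.StandardCharsets;
import java.util.zip.Checksum;
import org.apache.hadoop.util.DataChecksum;

public class NewCrc32Sketch {
  public static void main(String[] args) {
    // newCrc32() hands back a java.util.zip.Checksum backed by CRC32.
    Checksum crc = DataChecksum.newCrc32();
    byte[] data = "hello edit log".getBytes(StandardCharsets.UTF_8); // illustrative payload
    crc.update(data, 0, data.length);
    long value = crc.getValue(); // 32-bit CRC, returned as an unsigned value in a long
    System.out.printf("crc32 = 0x%08x%n", value);
    crc.reset(); // the instance is reusable after reset, which the readers/writers below rely on
  }
}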
Example 1: Reader
import org.apache.hadoop.util.DataChecksum; // import the package/class this method depends on
/**
 * Construct the reader
 * @param in The stream to read from.
 * @param logVersion The version of the data coming from the stream.
 */
public Reader(DataInputStream in, StreamLimiter limiter, int logVersion) {
  this.logVersion = logVersion;
  if (NameNodeLayoutVersion.supports(
      LayoutVersion.Feature.EDITS_CHECKSUM, logVersion)) {
    this.checksum = DataChecksum.newCrc32();
  } else {
    this.checksum = null;
  }
  // It is possible that the logVersion is actually a future layout version
  // during the rolling upgrade (e.g., the NN gets upgraded first). We
  // assume future layouts will also support the length of an editlog op.
  this.supportEditLogLength = NameNodeLayoutVersion.supports(
      NameNodeLayoutVersion.Feature.EDITLOG_LENGTH, logVersion)
      || logVersion < NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION;
  if (this.checksum != null) {
    this.in = new DataInputStream(
        new CheckedInputStream(in, this.checksum));
  } else {
    this.in = in;
  }
  this.limiter = limiter;
  this.cache = new OpInstanceCache();
  this.maxOpSize = DFSConfigKeys.DFS_NAMENODE_MAX_OP_SIZE_DEFAULT;
}
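Example 1 wires the CRC into a CheckedInputStream, so every byte read through the stream updates the checksum as a side effect. The standalone sketch below shows that pattern in isolation; the byte array and the idea of comparing against a stored checksum are assumptions for illustration, not the actual edit-log format.

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.util.zip.CheckedInputStream;
import java.util.zip.Checksum;
import org.apache.hadoop.util.DataChecksum;

public class CheckedReadSketch {
  public static void main(String[] args) throws Exception {
    byte[] payload = {1, 2, 3, 4, 5, 6, 7, 8}; // illustrative bytes
    Checksum crc = DataChecksum.newCrc32();
    DataInputStream in = new DataInputStream(
        new CheckedInputStream(new ByteArrayInputStream(payload), crc));
    in.readFully(new byte[payload.length]); // reading drives crc.update() behind the scenes
    long computed = crc.getValue();         // checksum of everything read so far
    System.out.printf("computed crc32 = 0x%08x%n", computed);
    // A reader would typically compare 'computed' against a checksum stored
    // alongside the data and fail the read on mismatch.
  }
}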
Example 2: create
import org.apache.hadoop.util.DataChecksum; // import the package/class this method depends on
public static Reader create(DataInputStream in, StreamLimiter limiter,
    int logVersion) {
  if (logVersion < NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION) {
    // Use the LengthPrefixedReader on edit logs which are newer than what
    // we can parse. (Newer layout versions are represented by smaller
    // negative integers, for historical reasons.) Even though we can't
    // parse the Ops contained in them, we should still be able to call
    // scanOp on them. This is important for the JournalNode during rolling
    // upgrade.
    return new LengthPrefixedReader(in, limiter, logVersion);
  } else if (NameNodeLayoutVersion.supports(
      NameNodeLayoutVersion.Feature.EDITLOG_LENGTH, logVersion)) {
    return new LengthPrefixedReader(in, limiter, logVersion);
  } else if (NameNodeLayoutVersion.supports(
      LayoutVersion.Feature.EDITS_CHECKSUM, logVersion)) {
    Checksum checksum = DataChecksum.newCrc32();
    return new ChecksummedReader(checksum, in, limiter, logVersion);
  } else {
    return new LegacyReader(in, limiter, logVersion);
  }
}
Example 3: Writer
import org.apache.hadoop.util.DataChecksum; // import the package/class this method depends on
public Writer(DataOutputBuffer out) {
  this.buf = out;
  this.checksum = DataChecksum.newCrc32();
}
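The Writer pairs the CRC with a Hadoop DataOutputBuffer. A sketch of how such a pair might be used: serialize a record into the buffer, checksum exactly the bytes just written, and append the CRC value after them. The record fields and layout here are assumptions for illustration, not the real edit-log op encoding.

import java.util.zip.Checksum;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.util.DataChecksum;

public class ChecksummedWriteSketch {
  public static void main(String[] args) throws Exception {
    DataOutputBuffer buf = new DataOutputBuffer();
    Checksum checksum = DataChecksum.newCrc32();

    int start = buf.getLength();
    buf.writeLong(42L);           // illustrative record body
    buf.writeUTF("some record");
    int end = buf.getLength();

    // Checksum only the bytes of this record, then append the CRC value.
    checksum.reset();
    checksum.update(buf.getData(), start, end - start);
    buf.writeInt((int) checksum.getValue());
  }
}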
Example 4: LengthPrefixedReader
import org.apache.hadoop.util.DataChecksum; // import the package/class this method depends on
LengthPrefixedReader(DataInputStream in, StreamLimiter limiter,
    int logVersion) {
  super(in, limiter, logVersion);
  this.checksum = DataChecksum.newCrc32();
}