This page collects typical usage examples of the Java method org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.getVolumeFailures. If you are unsure what DatanodeDescriptor.getVolumeFailures does, how to call it, or want to see it used in real code, the curated samples below should help. You can also explore other methods of the enclosing class, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor.
Three code examples of DatanodeDescriptor.getVolumeFailures are shown below, sorted by popularity by default.
Example 1: getVolumeFailuresTotal
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor; // import required for this method
@Override // FSNamesystemMBean
public int getVolumeFailuresTotal() {
  List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
  getBlockManager().getDatanodeManager().fetchDatanodes(live, null, true);
  int volumeFailuresTotal = 0;
  for (DatanodeDescriptor node: live) {
    volumeFailuresTotal += node.getVolumeFailures();
  }
  return volumeFailuresTotal;
}
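The pattern above is a straightforward sum over the live datanodes. The following is a minimal, self-contained sketch of that same summation, using a hypothetical `FakeDatanode` stand-in so it runs without a Hadoop cluster (the real `DatanodeDescriptor` carries far more state):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal stand-in for DatanodeDescriptor; only the failed-volume
// counter matters for this illustration.
class FakeDatanode {
    private final int volumeFailures;
    FakeDatanode(int volumeFailures) { this.volumeFailures = volumeFailures; }
    int getVolumeFailures() { return volumeFailures; }
}

public class VolumeFailureTotal {
    // Mirrors the loop in getVolumeFailuresTotal(): iterate over the live
    // nodes and accumulate each node's failed-volume count.
    static int totalVolumeFailures(List<FakeDatanode> live) {
        int total = 0;
        for (FakeDatanode node : live) {
            total += node.getVolumeFailures();
        }
        return total;
    }

    public static void main(String[] args) {
        List<FakeDatanode> live = new ArrayList<>();
        live.add(new FakeDatanode(0));
        live.add(new FakeDatanode(2));
        live.add(new FakeDatanode(1));
        System.out.println(totalVolumeFailures(live)); // prints 3
    }
}
```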
Example 2: waitForDatanodeStatus
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor; // import required for this method
// Poll the DatanodeManager until the live/dead node counts, total capacity,
// and aggregate volume-failure count all match the expected values, or until
// ATTEMPTS polls have elapsed.
public static void waitForDatanodeStatus(DatanodeManager dm, int expectedLive,
    int expectedDead, long expectedVolFails, long expectedTotalCapacity,
    long timeout) throws InterruptedException, TimeoutException {
  final List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
  final List<DatanodeDescriptor> dead = new ArrayList<DatanodeDescriptor>();
  final int ATTEMPTS = 10;
  int count = 0;
  long currTotalCapacity = 0;
  int volFails = 0;
  do {
    Thread.sleep(timeout);
    live.clear();
    dead.clear();
    dm.fetchDatanodes(live, dead, false);
    // Recompute the aggregates from scratch on every poll.
    currTotalCapacity = 0;
    volFails = 0;
    for (final DatanodeDescriptor dd : live) {
      currTotalCapacity += dd.getCapacity();
      volFails += dd.getVolumeFailures();
    }
    count++;
  } while ((expectedLive != live.size() ||
      expectedDead != dead.size() ||
      expectedTotalCapacity != currTotalCapacity ||
      expectedVolFails != volFails)
      && count < ATTEMPTS);
  if (count == ATTEMPTS) {
    throw new TimeoutException("Timed out waiting for capacity."
        + " Live = "+live.size()+" Expected = "+expectedLive
        + " Dead = "+dead.size()+" Expected = "+expectedDead
        + " Total capacity = "+currTotalCapacity
        + " Expected = "+expectedTotalCapacity
        + " Vol Fails = "+volFails+" Expected = "+expectedVolFails);
  }
}
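Stripped of the Hadoop specifics, Example 2 is a poll-sleep-retry loop: re-read the observed state a bounded number of times, sleeping between reads, and raise TimeoutException if the expectation is never met. A minimal, self-contained sketch of that pattern (the `waitForValue` helper and its parameters are illustrative, not part of the Hadoop API):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.IntSupplier;

public class PollUntil {
    // Generic form of the loop in waitForDatanodeStatus: poll up to
    // 'attempts' times, sleeping between polls, until the observed value
    // equals the expected one; otherwise time out.
    static void waitForValue(IntSupplier observed, int expected,
            int attempts, long sleepMillis)
            throws InterruptedException, TimeoutException {
        for (int i = 0; i < attempts; i++) {
            if (observed.getAsInt() == expected) {
                return;
            }
            Thread.sleep(sleepMillis);
        }
        throw new TimeoutException("Timed out waiting for value " + expected);
    }

    public static void main(String[] args) throws Exception {
        final int[] counter = {0};
        // The condition becomes true on the third poll.
        waitForValue(() -> ++counter[0], 3, 10, 1L);
        System.out.println("reached " + counter[0]); // prints "reached 3"
    }
}
```

Bounding the loop by attempt count (as the Hadoop test does with ATTEMPTS) keeps a wrong expectation from hanging the test suite forever.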
Example 3: getVolumeFailuresTotal (identical to Example 1 except that fetchDatanodes is called with false as its third argument)
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor; // import required for this method
@Override // FSNamesystemMBean
public int getVolumeFailuresTotal() {
  List<DatanodeDescriptor> live = new ArrayList<DatanodeDescriptor>();
  getBlockManager().getDatanodeManager().fetchDatanodes(live, null, false);
  int volumeFailuresTotal = 0;
  for (DatanodeDescriptor node: live) {
    volumeFailuresTotal += node.getVolumeFailures();
  }
  return volumeFailuresTotal;
}