

Java PipelineAckProto Class Code Examples

This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto. If you are wondering what PipelineAckProto does and how to use it in practice, the curated examples below should help.


The PipelineAckProto class belongs to the package org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos. Ten code examples of the class are shown below, ordered by popularity.

Example 1: PipelineAck

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
/**
 * Constructor
 * @param seqno sequence number
 * @param replies an array of replies
 * @param downstreamAckTimeNanos ack RTT in nanoseconds, 0 if no next DN in pipeline
 */
public PipelineAck(long seqno, int[] replies,
                   long downstreamAckTimeNanos) {
  ArrayList<Status> statusList = Lists.newArrayList();
  ArrayList<Integer> flagList = Lists.newArrayList();
  for (int r : replies) {
    statusList.add(StatusFormat.getStatus(r));
    flagList.add(r);
  }
  proto = PipelineAckProto.newBuilder()
    .setSeqno(seqno)
    .addAllReply(statusList)
    .addAllFlag(flagList)
    .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
    .build();
}
 
Developer: naver, Project: hadoop, Lines of code: 22, Source: PipelineAck.java
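In Example 1 each int reply carries more than a bare Status: extra header bits (such as ECN congestion information) are packed into the same flag that StatusFormat later unpacks. A minimal standalone sketch of that bit-field idea follows; the layout below (status in bits 0-3, ECN in bits 4-5) is an illustrative assumption, not Hadoop's actual StatusFormat definition:

```java
// Sketch of packing a status code and an ECN code into one int flag.
// Bit layout is an illustrative assumption: bits 0-3 = status, bits 4-5 = ECN.
public class AckFlag {
  static final int STATUS_BITS = 4;
  static final int STATUS_MASK = (1 << STATUS_BITS) - 1; // 0x0F
  static final int ECN_SHIFT = STATUS_BITS;
  static final int ECN_MASK = 0x3 << ECN_SHIFT;

  /** Combine a status ordinal and an ECN ordinal into one flag int. */
  public static int combine(int status, int ecn) {
    return (status & STATUS_MASK) | ((ecn << ECN_SHIFT) & ECN_MASK);
  }

  /** Extract the status ordinal back out of a flag. */
  public static int getStatus(int flag) {
    return flag & STATUS_MASK;
  }

  /** Extract the ECN ordinal back out of a flag. */
  public static int getEcn(int flag) {
    return (flag & ECN_MASK) >>> ECN_SHIFT;
  }
}
```

With this scheme, the loop in Example 1 would record getStatus(r) in statusList while keeping the full flag r (status plus ECN bits) in flagList.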

Example 2: channelRead0

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
@Override
protected void channelRead0(ChannelHandlerContext ctx, PipelineAckProto ack) throws Exception {
  Status reply = getStatus(ack);
  if (reply != Status.SUCCESS) {
    failed(ctx.channel(), () -> new IOException("Bad response " + reply + " for block " +
      block + " from datanode " + ctx.channel().remoteAddress()));
    return;
  }
  if (PipelineAck.isRestartOOBStatus(reply)) {
    failed(ctx.channel(), () -> new IOException("Restart response " + reply + " for block " +
      block + " from datanode " + ctx.channel().remoteAddress()));
    return;
  }
  if (ack.getSeqno() == HEART_BEAT_SEQNO) {
    return;
  }
  completed(ctx.channel());
}
 
Developer: apache, Project: hbase, Lines of code: 19, Source: FanOutOneBlockAsyncDFSOutput.java

Example 3: PipelineAck

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
/**
 * Constructor
 * @param seqno sequence number
 * @param replies an array of replies
 * @param downstreamAckTimeNanos ack RTT in nanoseconds, 0 if no next DN in pipeline
 */
public PipelineAck(long seqno, Status[] replies, long downstreamAckTimeNanos) {
  proto = PipelineAckProto.newBuilder()
    .setSeqno(seqno)
    .addAllStatus(Arrays.asList(replies))
    .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
    .build();
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 14, Source: PipelineAck.java

Example 4: createPipelineAckStatusGetter26

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
private static PipelineAckStatusGetter createPipelineAckStatusGetter26()
    throws NoSuchMethodException {
  Method getStatusMethod = PipelineAckProto.class.getMethod("getStatus", int.class);
  return new PipelineAckStatusGetter() {

    @Override
    public Status get(PipelineAckProto ack) {
      try {
        return (Status) getStatusMethod.invoke(ack, 0);
      } catch (IllegalAccessException | InvocationTargetException e) {
        throw new RuntimeException(e);
      }
    }
  };
}
 
Developer: apache, Project: hbase, Lines of code: 16, Source: FanOutOneBlockAsyncDFSOutputHelper.java
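Example 4 resolves the Method object once and captures it in the returned getter, so the reflective lookup cost is paid a single time rather than on every ack. The same look-up-once, invoke-many pattern can be sketched against a plain stand-in class (Greeter below is purely illustrative, not a Hadoop type):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.function.Function;

public class ReflectionShim {

  // Stand-in for a class whose accessor is only known at runtime.
  public static class Greeter {
    public String greet(int times) {
      StringBuilder sb = new StringBuilder();
      for (int i = 0; i < times; i++) {
        sb.append("hi");
      }
      return sb.toString();
    }
  }

  /**
   * Resolve the Method once up front, then return a Function that invokes
   * it, mirroring how createPipelineAckStatusGetter26 wraps
   * PipelineAckProto.getStatus(int).
   */
  public static Function<Greeter, String> createGreeterGetter() {
    try {
      Method greetMethod = Greeter.class.getMethod("greet", int.class);
      return g -> {
        try {
          return (String) greetMethod.invoke(g, 2);
        } catch (IllegalAccessException | InvocationTargetException e) {
          // Rethrow unchecked, as the HBase helper does.
          throw new RuntimeException(e);
        }
      };
    } catch (NoSuchMethodException e) {
      throw new RuntimeException(e);
    }
  }
}
```

HBase uses this style so one binary can adapt to hadoop-2.6 and hadoop-2.7 protobuf schemas, whose generated accessors differ at compile time.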

Example 5: setupReceiver

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
private void setupReceiver(int timeoutMs) {
  AckHandler ackHandler = new AckHandler(timeoutMs);
  for (Channel ch : datanodeList) {
    ch.pipeline().addLast(
      new IdleStateHandler(timeoutMs, timeoutMs / 2, 0, TimeUnit.MILLISECONDS),
      new ProtobufVarint32FrameDecoder(),
      new ProtobufDecoder(PipelineAckProto.getDefaultInstance()), ackHandler);
    ch.config().setAutoRead(true);
  }
}
 
Developer: apache, Project: hbase, Lines of code: 11, Source: FanOutOneBlockAsyncDFSOutput.java

Example 6: readFields

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
/**** Writable interface ****/
public void readFields(InputStream in) throws IOException {
  proto = PipelineAckProto.parseFrom(vintPrefixed(in));
}
 
Developer: naver, Project: hadoop, Lines of code: 5, Source: PipelineAck.java
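The vintPrefixed helper used in Examples 6 and 7 reads a protobuf base-128 varint length prefix from the stream before handing the payload to parseFrom (Hadoop's actual helper additionally wraps the stream so parseFrom cannot read past the decoded length). The varint decoding itself can be sketched against java.io alone; this is a minimal standalone version, not Hadoop's implementation:

```java
import java.io.IOException;
import java.io.InputStream;

public class Varint {
  /**
   * Decode a protobuf base-128 varint32 from the stream: 7 payload bits
   * per byte, least-significant group first; the high bit of each byte
   * marks "more bytes follow".
   */
  public static int readVarint32(InputStream in) throws IOException {
    int result = 0;
    for (int shift = 0; shift < 32; shift += 7) {
      int b = in.read();
      if (b < 0) {
        throw new IOException("EOF in the middle of a varint");
      }
      result |= (b & 0x7F) << shift;
      if ((b & 0x80) == 0) {
        return result;
      }
    }
    throw new IOException("varint32 is too long");
  }
}
```

This framing is the same one Example 5's ProtobufVarint32FrameDecoder applies on the Netty side: each PipelineAckProto message on the wire is preceded by its varint-encoded byte length.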

Example 7: readFields

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
/**** Writable interface ****/
public void readFields(InputStream in) throws IOException {
  proto = PipelineAckProto.parseFrom(vintPrefixed(in));
}
 
Developer: hopshadoop, Project: hops, Lines of code: 7, Source: PipelineAck.java

Example 8: getStatus

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
static Status getStatus(PipelineAckProto ack) {
  return PIPELINE_ACK_STATUS_GETTER.get(ack);
}
 
Developer: apache, Project: hbase, Lines of code: 4, Source: FanOutOneBlockAsyncDFSOutputHelper.java

Example 9: PipelineAck

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
/**
 * Constructor
 *
 * @param seqno
 *     sequence number
 * @param replies
 *     an array of replies
 * @param downstreamAckTimeNanos
 *     ack RTT in nanoseconds, 0 if no next DN in pipeline
 */
public PipelineAck(long seqno, Status[] replies,
    long downstreamAckTimeNanos) {
  proto = PipelineAckProto.newBuilder().setSeqno(seqno)
      .addAllStatus(Arrays.asList(replies))
      .setDownstreamAckTimeNanos(downstreamAckTimeNanos).build();
}
 
Developer: hopshadoop, Project: hops, Lines of code: 17, Source: PipelineAck.java

Example 10: get

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto; // import the required package/class
Status get(PipelineAckProto ack); 
Developer: apache, Project: hbase, Lines of code: 2, Source: FanOutOneBlockAsyncDFSOutputHelper.java


Note: the org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto class examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Do not reproduce without permission.