

Java AbstractRequest Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.requests.AbstractRequest. If you are unsure what AbstractRequest is for, how it is used, or what real-world usages look like, the selected class code examples below should help.


The AbstractRequest class belongs to the org.apache.kafka.common.requests package. A total of 15 code examples of the class are shown below, sorted by popularity by default.
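Two patterns recur throughout these examples: production code carries an AbstractRequest.Builder<?> and calls build(version) only once the API version to use has been negotiated, while test code implements MockClient.RequestMatcher and downcasts the AbstractRequest body to a concrete request type inside matches(). A minimal sketch of the test-side pattern is shown below; the client field, the ProduceRequest cast, and the prepared response object are assumptions borrowed from the excerpts that follow, not a self-contained program.

client.prepareResponse(new MockClient.RequestMatcher() {
    @Override
    public boolean matches(AbstractRequest body) {
        // downcast the generic body to the concrete request type, then inspect its fields
        ProduceRequest produceRequest = (ProduceRequest) body;
        return produceRequest.transactionalId() != null;
    }
}, preparedResponse); // preparedResponse: a hypothetical AbstractResponse to return once the matcher passes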

Example 1: InFlightRequest

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
public InFlightRequest(RequestHeader header,
                       long createdTimeMs,
                       String destination,
                       RequestCompletionHandler callback,
                       boolean expectResponse,
                       boolean isInternalRequest,
                       AbstractRequest request,
                       Send send,
                       long sendTimeMs) {
    this.header = header;
    this.destination = destination;
    this.callback = callback;
    this.expectResponse = expectResponse;
    this.isInternalRequest = isInternalRequest;
    this.request = request;
    this.send = send;
    this.sendTimeMs = sendTimeMs;
    this.createdTimeMs = createdTimeMs;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: NetworkClient.java

Example 2: ClientRequest

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
/**
 * @param destination The brokerId to send the request to
 * @param requestBuilder The builder for the request to make
 * @param correlationId The correlation id for this client request
 * @param clientId The client ID to use for the header
 * @param createdTimeMs The unix timestamp in milliseconds for the time at which this request was created.
 * @param expectResponse Should we expect a response message or is this request complete once it is sent?
 * @param callback A callback to execute when the response has been received (or null if no callback is necessary)
 */
public ClientRequest(String destination,
                     AbstractRequest.Builder<?> requestBuilder,
                     int correlationId,
                     String clientId,
                     long createdTimeMs,
                     boolean expectResponse,
                     RequestCompletionHandler callback) {
    this.destination = destination;
    this.requestBuilder = requestBuilder;
    this.correlationId = correlationId;
    this.clientId = clientId;
    this.createdTimeMs = createdTimeMs;
    this.expectResponse = expectResponse;
    this.callback = callback;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: ClientRequest.java

Example 3: produceRequestMatcher

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
private MockClient.RequestMatcher produceRequestMatcher(final long pid, final short epoch) {
    return new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            ProduceRequest produceRequest = (ProduceRequest) body;
            MemoryRecords records = produceRequest.partitionRecordsOrFail().get(tp0);
            assertNotNull(records);
            Iterator<MutableRecordBatch> batchIterator = records.batches().iterator();
            assertTrue(batchIterator.hasNext());
            MutableRecordBatch batch = batchIterator.next();
            assertFalse(batchIterator.hasNext());
            assertTrue(batch.isTransactional());
            assertEquals(pid, batch.producerId());
            assertEquals(epoch, batch.producerEpoch());
            assertEquals(transactionalId, produceRequest.transactionalId());
            return true;
        }
    };
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 20, Source: TransactionManagerTest.java

Example 4: testLeaveGroupOnClose

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
@Test
public void testLeaveGroupOnClose() {
    final String consumerId = "consumer";

    subscriptions.subscribe(singleton(topic1), rebalanceListener);

    client.prepareResponse(groupCoordinatorResponse(node, Errors.NONE));
    coordinator.ensureCoordinatorReady();

    client.prepareResponse(joinGroupFollowerResponse(1, consumerId, "leader", Errors.NONE));
    client.prepareResponse(syncGroupResponse(singletonList(t1p), Errors.NONE));
    coordinator.joinGroupIfNeeded();

    final AtomicBoolean received = new AtomicBoolean(false);
    client.prepareResponse(new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            received.set(true);
            LeaveGroupRequest leaveRequest = (LeaveGroupRequest) body;
            return leaveRequest.memberId().equals(consumerId) &&
                    leaveRequest.groupId().equals(groupId);
        }
    }, new LeaveGroupResponse(Errors.NONE));
    coordinator.close(0);
    assertTrue(received.get());
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 27, Source: ConsumerCoordinatorTest.java

Example 5: testListOffsetsSendsIsolationLevel

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
@Test
public void testListOffsetsSendsIsolationLevel() {
    for (final IsolationLevel isolationLevel : IsolationLevel.values()) {
        Fetcher<byte[], byte[]> fetcher = createFetcher(subscriptions, new Metrics(), new ByteArrayDeserializer(),
                new ByteArrayDeserializer(), Integer.MAX_VALUE, isolationLevel);

        subscriptions.assignFromUser(singleton(tp1));
        subscriptions.needOffsetReset(tp1, OffsetResetStrategy.LATEST);

        client.prepareResponse(new MockClient.RequestMatcher() {
            @Override
            public boolean matches(AbstractRequest body) {
                ListOffsetRequest request = (ListOffsetRequest) body;
                return request.isolationLevel() == isolationLevel;
            }
        }, listOffsetResponse(Errors.NONE, 1L, 5L));
        fetcher.updateFetchPositions(singleton(tp1));
        assertFalse(subscriptions.isOffsetResetNeeded(tp1));
        assertTrue(subscriptions.isFetchable(tp1));
        assertEquals(5, subscriptions.position(tp1).longValue());
    }
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: FetcherTest.java

Example 6: prepareOffsetCommitResponse

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
private AtomicBoolean prepareOffsetCommitResponse(MockClient client, Node coordinator, final Map<TopicPartition, Long> partitionOffsets) {
    final AtomicBoolean commitReceived = new AtomicBoolean(true);
    Map<TopicPartition, Errors> response = new HashMap<>();
    for (TopicPartition partition : partitionOffsets.keySet())
        response.put(partition, Errors.NONE);

    client.prepareResponseFrom(new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            OffsetCommitRequest commitRequest = (OffsetCommitRequest) body;
            for (Map.Entry<TopicPartition, Long> partitionOffset : partitionOffsets.entrySet()) {
                OffsetCommitRequest.PartitionData partitionData = commitRequest.offsetData().get(partitionOffset.getKey());
                // verify that the expected offset has been committed
                if (partitionData.offset != partitionOffset.getValue()) {
                    commitReceived.set(false);
                    return false;
                }
            }
            return true;
        }
    }, offsetCommitResponse(response), coordinator);
    return commitReceived;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 24, Source: KafkaConsumerTest.java

Example 7: send

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
@Override
public void send(ClientRequest request, long now) {
    Iterator<FutureResponse> iterator = futureResponses.iterator();
    while (iterator.hasNext()) {
        FutureResponse futureResp = iterator.next();
        if (futureResp.node != null && !request.destination().equals(futureResp.node.idString()))
            continue;

        AbstractRequest.Builder<?> builder = request.requestBuilder();
        short version = nodeApiVersions.usableVersion(request.apiKey(), builder.desiredVersion());
        AbstractRequest abstractRequest = request.requestBuilder().build(version);
        if (!futureResp.requestMatcher.matches(abstractRequest))
            throw new IllegalStateException("Request matcher did not match next-in-line request " + abstractRequest);
        ClientResponse resp = new ClientResponse(request.makeHeader(version), request.callback(), request.destination(),
                request.createdTimeMs(), time.milliseconds(), futureResp.disconnected, null, futureResp.responseBody);
        responses.add(resp);
        iterator.remove();
        return;
    }

    this.requests.add(request);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 23, Source: MockClient.java

Example 8: testNormalJoinGroupLeader

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
@Test
public void testNormalJoinGroupLeader() {
    EasyMock.expect(configStorage.snapshot()).andReturn(configState1);

    PowerMock.replayAll();

    final String consumerId = "leader";

    client.prepareResponse(groupCoordinatorResponse(node, Errors.NONE));
    coordinator.ensureCoordinatorReady();

    // normal join group
    Map<String, Long> memberConfigOffsets = new HashMap<>();
    memberConfigOffsets.put("leader", 1L);
    memberConfigOffsets.put("member", 1L);
    client.prepareResponse(joinGroupLeaderResponse(1, consumerId, memberConfigOffsets, Errors.NONE));
    client.prepareResponse(new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            SyncGroupRequest sync = (SyncGroupRequest) body;
            return sync.memberId().equals(consumerId) &&
                    sync.generationId() == 1 &&
                    sync.groupAssignment().containsKey(consumerId);
        }
    }, syncGroupResponse(ConnectProtocol.Assignment.NO_ERROR, "leader", 1L, Collections.singletonList(connectorId1),
            Collections.<ConnectorTaskId>emptyList(), Errors.NONE));
    coordinator.ensureActiveGroup();

    assertFalse(coordinator.needRejoin());
    assertEquals(0, rebalanceListener.revokedCount);
    assertEquals(1, rebalanceListener.assignedCount);
    assertFalse(rebalanceListener.assignment.failed());
    assertEquals(1L, rebalanceListener.assignment.offset());
    assertEquals("leader", rebalanceListener.assignment.leader());
    assertEquals(Collections.singletonList(connectorId1), rebalanceListener.assignment.connectors());
    assertEquals(Collections.emptyList(), rebalanceListener.assignment.tasks());

    PowerMock.verifyAll();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 40, Source: WorkerCoordinatorTest.java

Example 9: testNormalJoinGroupFollower

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
@Test
public void testNormalJoinGroupFollower() {
    EasyMock.expect(configStorage.snapshot()).andReturn(configState1);

    PowerMock.replayAll();

    final String memberId = "member";

    client.prepareResponse(groupCoordinatorResponse(node, Errors.NONE));
    coordinator.ensureCoordinatorReady();

    // normal join group
    client.prepareResponse(joinGroupFollowerResponse(1, memberId, "leader", Errors.NONE));
    client.prepareResponse(new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            SyncGroupRequest sync = (SyncGroupRequest) body;
            return sync.memberId().equals(memberId) &&
                    sync.generationId() == 1 &&
                    sync.groupAssignment().isEmpty();
        }
    }, syncGroupResponse(ConnectProtocol.Assignment.NO_ERROR, "leader", 1L, Collections.<String>emptyList(),
            Collections.singletonList(taskId1x0), Errors.NONE));
    coordinator.ensureActiveGroup();

    assertFalse(coordinator.needRejoin());
    assertEquals(0, rebalanceListener.revokedCount);
    assertEquals(1, rebalanceListener.assignedCount);
    assertFalse(rebalanceListener.assignment.failed());
    assertEquals(1L, rebalanceListener.assignment.offset());
    assertEquals(Collections.emptyList(), rebalanceListener.assignment.connectors());
    assertEquals(Collections.singletonList(taskId1x0), rebalanceListener.assignment.tasks());

    PowerMock.verifyAll();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 36, Source: WorkerCoordinatorTest.java

Example 10: testJoinLeaderCannotAssign

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
@Test
public void testJoinLeaderCannotAssign() {
    // If the selected leader can't get up to the maximum offset, it will fail to assign and we should immediately
    // need to retry the join.

    // When the first round fails, we'll take an updated config snapshot
    EasyMock.expect(configStorage.snapshot()).andReturn(configState1);
    EasyMock.expect(configStorage.snapshot()).andReturn(configState2);

    PowerMock.replayAll();

    final String memberId = "member";

    client.prepareResponse(groupCoordinatorResponse(node, Errors.NONE));
    coordinator.ensureCoordinatorReady();

    // config mismatch results in assignment error
    client.prepareResponse(joinGroupFollowerResponse(1, memberId, "leader", Errors.NONE));
    MockClient.RequestMatcher matcher = new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            SyncGroupRequest sync = (SyncGroupRequest) body;
            return sync.memberId().equals(memberId) &&
                    sync.generationId() == 1 &&
                    sync.groupAssignment().isEmpty();
        }
    };
    client.prepareResponse(matcher, syncGroupResponse(ConnectProtocol.Assignment.CONFIG_MISMATCH, "leader", 10L,
            Collections.<String>emptyList(), Collections.<ConnectorTaskId>emptyList(), Errors.NONE));
    client.prepareResponse(joinGroupFollowerResponse(1, memberId, "leader", Errors.NONE));
    client.prepareResponse(matcher, syncGroupResponse(ConnectProtocol.Assignment.NO_ERROR, "leader", 1L,
            Collections.<String>emptyList(), Collections.singletonList(taskId1x0), Errors.NONE));
    coordinator.ensureActiveGroup();

    PowerMock.verifyAll();
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 37, Source: WorkerCoordinatorTest.java

Example 11: sendEligibleCalls

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
/**
 * Send the calls which are ready.
 *
 * @param now                   The current time in milliseconds.
 * @param callsToSend           The calls to send, by node.
 * @param correlationIdToCalls  A map of correlation IDs to calls.
 * @param callsInFlight         A map of nodes to the calls they have in flight.
 *
 * @return                      The minimum timeout we need for poll().
 */
private long sendEligibleCalls(long now, Map<Node, List<Call>> callsToSend,
                 Map<Integer, Call> correlationIdToCalls, Map<String, List<Call>> callsInFlight) {
    long pollTimeout = Long.MAX_VALUE;
    for (Iterator<Map.Entry<Node, List<Call>>> iter = callsToSend.entrySet().iterator();
             iter.hasNext(); ) {
        Map.Entry<Node, List<Call>> entry = iter.next();
        List<Call> calls = entry.getValue();
        if (calls.isEmpty()) {
            iter.remove();
            continue;
        }
        Node node = entry.getKey();
        if (!client.ready(node, now)) {
            long nodeTimeout = client.connectionDelay(node, now);
            pollTimeout = Math.min(pollTimeout, nodeTimeout);
            log.trace("{}: client is not ready to send to {}.  Must delay {} ms", clientId, node, nodeTimeout);
            continue;
        }
        Call call = calls.remove(0);
        int timeoutMs = calcTimeoutMsRemainingAsInt(now, call.deadlineMs);
        AbstractRequest.Builder<?> requestBuilder = null;
        try {
            requestBuilder = call.createRequest(timeoutMs);
        } catch (Throwable throwable) {
            call.fail(now, new KafkaException(String.format(
                "Internal error sending %s to %s.", call.callName, node)));
            continue;
        }
        ClientRequest clientRequest = client.newClientRequest(node.idString(), requestBuilder, now, true);
        log.trace("{}: sending {} to {}. correlationId={}", clientId, requestBuilder, node,
            clientRequest.correlationId());
        client.send(clientRequest, now);
        getOrCreateListValue(callsInFlight, node.idString()).add(call);
        correlationIdToCalls.put(clientRequest.correlationId(), call);
    }
    return pollTimeout;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 48, Source: KafkaAdminClient.java

Example 12: send

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
/**
 * Send a new request. Note that the request is not actually transmitted on the
 * network until one of the {@link #poll(long)} variants is invoked. At this
 * point the request will either be transmitted successfully or will fail.
 * Use the returned future to obtain the result of the send. Note that there is no
 * need to check for disconnects explicitly on the {@link ClientResponse} object;
 * instead, the future will be failed with a {@link DisconnectException}.
 *
 * @param node The destination of the request
 * @param requestBuilder A builder for the request payload
 * @return A future which indicates the result of the send.
 */
// Wrap the outgoing request in a ClientRequest and keep it in the unsent collection until it can be sent
public RequestFuture<ClientResponse> send(Node node, AbstractRequest.Builder<?> requestBuilder) {
    long now = time.milliseconds();
    RequestFutureCompletionHandler completionHandler = new RequestFutureCompletionHandler();
    ClientRequest clientRequest = client.newClientRequest(node.idString(), requestBuilder, now, true,
            completionHandler);
    unsent.put(node, clientRequest);

    // wakeup the client in case it is blocking in poll so that we can send the queued request
    client.wakeup();
    return completionHandler.future;
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 25, Source: ConsumerNetworkClient.java
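For context, a caller-side sketch of this method: the caller hands in an AbstractRequest.Builder and later drives the returned future to completion with poll(). The FindCoordinatorRequest builder, the consumerClient and node variables, and the handleResponse callback are illustrative assumptions, not part of the excerpt above.

FindCoordinatorRequest.Builder builder =
        new FindCoordinatorRequest.Builder(FindCoordinatorRequest.CoordinatorType.GROUP, groupId);
RequestFuture<ClientResponse> future = consumerClient.send(node, builder);
consumerClient.poll(future);            // drive the network client until the future completes (or fails)
if (future.succeeded())
    handleResponse(future.value());     // hypothetical handler for the returned ClientResponse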

Example 13: doSend

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
private void doSend(ClientRequest clientRequest, boolean isInternalRequest, long now, AbstractRequest request) {
    String nodeId = clientRequest.destination();
// assemble the request header
    RequestHeader header = clientRequest.makeHeader(request.version());
    if (log.isDebugEnabled()) {
        int latestClientVersion = clientRequest.apiKey().latestVersion();
        if (header.apiVersion() == latestClientVersion) {
            log.trace("Sending {} {} to node {}.", clientRequest.apiKey(), request, nodeId);
        } else {
            log.debug("Using older server API v{} to send {} {} to node {}.",
                    header.apiVersion(), clientRequest.apiKey(), request, nodeId);
        }
    }
    // Create the NetworkSend; the header is serialized into a ByteBuffer here
    // and ultimately handed to a GatheringByteChannel implementation to transmit the message
    Send send = request.toSend(nodeId, header);
    InFlightRequest inFlightRequest = new InFlightRequest(
            header,
            clientRequest.createdTimeMs(),
            clientRequest.destination(),
            clientRequest.callback(),
            clientRequest.expectResponse(),
            isInternalRequest,
            request,
            send,
            now);
    // add the request to the in-flight queue
    this.inFlightRequests.add(inFlightRequest);
    // hand the send to the selector; this only attaches it to the KafkaChannel, nothing is transmitted yet
    selector.send(inFlightRequest.send);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 32, Source: NetworkClient.java

Example 14: prepareAddPartitionsToTxn

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
private void prepareAddPartitionsToTxn(final Map<TopicPartition, Errors> errors) {
    client.prepareResponse(new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            AddPartitionsToTxnRequest request = (AddPartitionsToTxnRequest) body;
            assertEquals(new HashSet<>(request.partitions()), new HashSet<>(errors.keySet()));
            return true;
        }
    }, new AddPartitionsToTxnResponse(0, errors));
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 11, Source: TransactionManagerTest.java

Example 15: prepareFindCoordinatorResponse

import org.apache.kafka.common.requests.AbstractRequest; // import the required package/class
private void prepareFindCoordinatorResponse(Errors error, boolean shouldDisconnect,
                                            final CoordinatorType coordinatorType,
                                            final String coordinatorKey) {
    client.prepareResponse(new MockClient.RequestMatcher() {
        @Override
        public boolean matches(AbstractRequest body) {
            FindCoordinatorRequest findCoordinatorRequest = (FindCoordinatorRequest) body;
            assertEquals(findCoordinatorRequest.coordinatorType(), coordinatorType);
            assertEquals(findCoordinatorRequest.coordinatorKey(), coordinatorKey);
            return true;
        }
    }, new FindCoordinatorResponse(error, brokerNode), shouldDisconnect);
}
 
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 14, Source: TransactionManagerTest.java


Note: The org.apache.kafka.common.requests.AbstractRequest class examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and any redistribution or use should follow the corresponding project's license. Do not republish without permission.