

Java AbstractConfig Class Code Examples

This article collects typical usage examples of the Java class org.apache.kafka.common.config.AbstractConfig. If you are wondering what the AbstractConfig class does, how to use it, or how it appears in real projects, the curated class examples below may help.


The AbstractConfig class belongs to the org.apache.kafka.common.config package. Fifteen code examples of the class are shown below, ordered by popularity by default.

Example 1: newConverter

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
public Converter newConverter(String converterClassOrAlias, AbstractConfig config) {
    Class<? extends Converter> klass;
    try {
        klass = pluginClass(
                delegatingLoader,
                converterClassOrAlias,
                Converter.class
        );
    } catch (ClassNotFoundException e) {
        throw new ConnectException(
                "Failed to find any class that implements Converter and which name matches "
                        + converterClassOrAlias
                        + ", available connectors are: "
                        + pluginNames(delegatingLoader.converters())
        );
    }
    return config != null ? newConfiguredPlugin(config, klass) : newPlugin(klass);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: Plugins.java

Example 2: clientChannelBuilder

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * @param securityProtocol the securityProtocol
 * @param contextType the contextType, it must be non-null if `securityProtocol` is SASL_*; it is ignored otherwise
 * @param config client config
 * @param listenerName the listenerName if contextType is SERVER or null otherwise
 * @param clientSaslMechanism SASL mechanism if mode is CLIENT, ignored otherwise
 * @param saslHandshakeRequestEnable flag to enable Sasl handshake requests; disabled only for SASL
 *             inter-broker connections with inter-broker protocol version < 0.10
 * @return the configured `ChannelBuilder`
 * @throws IllegalArgumentException if the `mode` invariants described above are not maintained
 */
public static ChannelBuilder clientChannelBuilder(SecurityProtocol securityProtocol,
        JaasContext.Type contextType,
        AbstractConfig config,
        ListenerName listenerName,
        String clientSaslMechanism,
        boolean saslHandshakeRequestEnable) {

    if (securityProtocol == SecurityProtocol.SASL_PLAINTEXT || securityProtocol == SecurityProtocol.SASL_SSL) {
        if (contextType == null)
            throw new IllegalArgumentException("`contextType` must be non-null if `securityProtocol` is `" + securityProtocol + "`");
        if (clientSaslMechanism == null)
            throw new IllegalArgumentException("`clientSaslMechanism` must be non-null in client mode if `securityProtocol` is `" + securityProtocol + "`");
    }
    return create(securityProtocol, Mode.CLIENT, contextType, config, listenerName, clientSaslMechanism,
            saslHandshakeRequestEnable, null);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 28, Source: ChannelBuilders.java

Example 3: NioEchoServer

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
public NioEchoServer(ListenerName listenerName, SecurityProtocol securityProtocol, AbstractConfig config,
        String serverHost, ChannelBuilder channelBuilder) throws Exception {
    super("echoserver");
    setDaemon(true);
    serverSocketChannel = ServerSocketChannel.open();
    serverSocketChannel.configureBlocking(false);
    serverSocketChannel.socket().bind(new InetSocketAddress(serverHost, 0));
    this.port = serverSocketChannel.socket().getLocalPort();
    this.socketChannels = Collections.synchronizedList(new ArrayList<SocketChannel>());
    this.newChannels = Collections.synchronizedList(new ArrayList<SocketChannel>());
    this.credentialCache = new CredentialCache();
    if (securityProtocol == SecurityProtocol.SASL_PLAINTEXT || securityProtocol == SecurityProtocol.SASL_SSL)
        ScramCredentialUtils.createCache(credentialCache, ScramMechanism.mechanismNames());
    if (channelBuilder == null)
        channelBuilder = ChannelBuilders.serverChannelBuilder(listenerName, securityProtocol, config, credentialCache);
    this.selector = new Selector(5000, new Metrics(), new MockTime(), "MetricGroup", channelBuilder);
    acceptorThread = new AcceptorThread();
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 19, Source: NioEchoServer.java

Example 4: newConfiguredPlugin

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
protected static <T> T newConfiguredPlugin(AbstractConfig config, Class<T> klass) {
    T plugin = Utils.newInstance(klass);
    if (plugin instanceof Configurable) {
        ((Configurable) plugin).configure(config.originals());
    }
    return plugin;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 8, Source: Plugins.java
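
The helper above relies on AbstractConfig.originals() returning the raw, untyped configuration map, which is then handed to any plugin that implements Configurable. Below is a minimal, self-contained sketch of that pattern; the GreetingPlugin class and the greeting.text key are illustrative, not part of Kafka.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class ConfiguredPluginSketch {

    // Hypothetical plugin that reads one setting from the raw config map.
    public static class GreetingPlugin implements Configurable {
        private String greeting = "hello";

        @Override
        public void configure(Map<String, ?> configs) {
            Object value = configs.get("greeting.text");
            if (value != null) {
                greeting = value.toString();
            }
        }

        public String greeting() {
            return greeting;
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("greeting.text", "hi there");

        // originals() keeps every supplied key, even ones the ConfigDef does not define,
        // which is why newConfiguredPlugin can pass it straight to Configurable.configure().
        AbstractConfig config = new AbstractConfig(new ConfigDef(), props);

        GreetingPlugin plugin = new GreetingPlugin();
        plugin.configure(config.originals());
        System.out.println(plugin.greeting()); // prints "hi there"
    }
}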

Example 5: testConfigFromStreamsConfig

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
@Test
public void testConfigFromStreamsConfig() {
    for (final String expectedMechanism : asList("PLAIN", "SCRAM-SHA-512")) {
        final Properties props = new Properties();
        props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, "some_app_id");
        props.setProperty(SaslConfigs.SASL_MECHANISM, expectedMechanism);
        props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9000");
        final StreamsConfig streamsConfig = new StreamsConfig(props);
        final AbstractConfig config = StreamsKafkaClient.Config.fromStreamsConfig(streamsConfig);
        assertEquals(expectedMechanism, config.values().get(SaslConfigs.SASL_MECHANISM));
        assertEquals(expectedMechanism, config.getString(SaslConfigs.SASL_MECHANISM));
    }
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 14, Source: StreamsKafkaClientTest.java
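
The equality asserted above holds for any AbstractConfig, not just the Streams client config: values() exposes the parsed value that the typed getter also returns. A small sketch using only kafka-clients classes (the ConfigDef here is illustrative):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.SaslConfigs;

public class ValuesVersusGetterSketch {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                .define(SaslConfigs.SASL_MECHANISM, ConfigDef.Type.STRING, "GSSAPI",
                        ConfigDef.Importance.MEDIUM, "SASL mechanism used for client connections");

        Map<String, String> props = new HashMap<>();
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        AbstractConfig config = new AbstractConfig(def, props);

        // Both views return the parsed value, mirroring the two assertions in the test.
        System.out.println(config.values().get(SaslConfigs.SASL_MECHANISM)); // SCRAM-SHA-512
        System.out.println(config.getString(SaslConfigs.SASL_MECHANISM));    // SCRAM-SHA-512
    }
}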

Example 6: createChannelBuilder

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * @param config client configs
 * @return configured ChannelBuilder based on the configs.
 */
public static ChannelBuilder createChannelBuilder(AbstractConfig config) {
    // Resolve the SecurityProtocol from the value of the security.protocol config
    SecurityProtocol securityProtocol = SecurityProtocol.forName(config.getString(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG));
    if (!SecurityProtocol.nonTestingValues().contains(securityProtocol))
        throw new ConfigException("Invalid SecurityProtocol " + securityProtocol);

    // Read the value of the sasl.mechanism config
    String clientSaslMechanism = config.getString(SaslConfigs.SASL_MECHANISM);

    // Create the corresponding ChannelBuilder
    return ChannelBuilders.clientChannelBuilder(securityProtocol, JaasContext.Type.CLIENT, config, null,
            clientSaslMechanism, true);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 18, Source: ClientUtils.java
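
Before delegating to ChannelBuilders, the method only needs two values from the AbstractConfig: security.protocol and sasl.mechanism. The sketch below shows those lookups in isolation with a hand-rolled ConfigDef; real clients get them from ProducerConfig/ConsumerConfig, which define many more settings.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class ClientSecurityConfigSketch {
    // Minimal ConfigDef covering only the two keys read by createChannelBuilder.
    private static final ConfigDef DEF = new ConfigDef()
            .define("security.protocol", ConfigDef.Type.STRING, "PLAINTEXT",
                    ConfigDef.Importance.MEDIUM, "Protocol used to communicate with brokers")
            .define("sasl.mechanism", ConfigDef.Type.STRING, "GSSAPI",
                    ConfigDef.Importance.MEDIUM, "SASL mechanism used for client connections");

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-512");

        AbstractConfig config = new AbstractConfig(DEF, props);
        // The same lookups createChannelBuilder performs before calling clientChannelBuilder(...).
        System.out.println(config.getString("security.protocol")); // SASL_SSL
        System.out.println(config.getString("sasl.mechanism"));    // SCRAM-SHA-512
    }
}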

Example 7: postProcessReconnectBackoffConfigs

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * Postprocess the configuration so that exponential backoff is disabled when reconnect backoff
 * is explicitly configured but the maximum reconnect backoff is not explicitly configured.
 *
 * @param config                    The config object.
 * @param parsedValues              The parsedValues as provided to postProcessParsedConfig.
 *
 * @return                          The new values which have been set as described in postProcessParsedConfig.
 */
public static Map<String, Object> postProcessReconnectBackoffConfigs(AbstractConfig config,
                                                Map<String, Object> parsedValues) {
    HashMap<String, Object> rval = new HashMap<>();
    if ((!config.originals().containsKey(RECONNECT_BACKOFF_MAX_MS_CONFIG)) &&
            config.originals().containsKey(RECONNECT_BACKOFF_MS_CONFIG)) {
        log.debug("Disabling exponential reconnect backoff because " + RECONNECT_BACKOFF_MS_CONFIG +
            " is set, but " + RECONNECT_BACKOFF_MAX_MS_CONFIG + " is not.");
        rval.put(RECONNECT_BACKOFF_MAX_MS_CONFIG, parsedValues.get(RECONNECT_BACKOFF_MS_CONFIG));
    }
    return rval;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 21, Source: CommonClientConfigs.java
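
In effect, the post-processor pins reconnect.backoff.max.ms to the explicitly configured reconnect.backoff.ms, so the backoff stays constant instead of growing exponentially. Here is a standalone sketch of that decision using plain maps rather than the real parsed config; the 30000 ms default shown is illustrative.

import java.util.HashMap;
import java.util.Map;

public class ReconnectBackoffSketch {
    public static void main(String[] args) {
        // What the user explicitly supplied: only the base backoff.
        Map<String, Object> originals = new HashMap<>();
        originals.put("reconnect.backoff.ms", 1000L);

        // Parsed values after defaults were applied.
        Map<String, Object> parsedValues = new HashMap<>();
        parsedValues.put("reconnect.backoff.ms", 1000L);
        parsedValues.put("reconnect.backoff.max.ms", 30000L);

        // Same check as postProcessReconnectBackoffConfigs: if only the base backoff was set
        // explicitly, override the max backoff with it to disable exponential growth.
        Map<String, Object> overrides = new HashMap<>();
        if (!originals.containsKey("reconnect.backoff.max.ms")
                && originals.containsKey("reconnect.backoff.ms")) {
            overrides.put("reconnect.backoff.max.ms", parsedValues.get("reconnect.backoff.ms"));
        }
        System.out.println(overrides); // {reconnect.backoff.max.ms=1000}
    }
}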

Example 8: serverChannelBuilder

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * @param listenerName the listenerName
 * @param securityProtocol the securityProtocol
 * @param config server config
 * @param credentialCache Credential cache for SASL/SCRAM if SCRAM is enabled
 * @return the configured `ChannelBuilder`
 */
public static ChannelBuilder serverChannelBuilder(ListenerName listenerName,
                                                  SecurityProtocol securityProtocol,
                                                  AbstractConfig config,
                                                  CredentialCache credentialCache) {
    return create(securityProtocol, Mode.SERVER, JaasContext.Type.SERVER, config, listenerName, null,
            true, credentialCache);
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 15, Source: ChannelBuilders.java

Example 9: create

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
private static ChannelBuilder create(SecurityProtocol securityProtocol,
                                     Mode mode,
                                     JaasContext.Type contextType,
                                     AbstractConfig config,
                                     ListenerName listenerName,
                                     String clientSaslMechanism,
                                     boolean saslHandshakeRequestEnable,
                                     CredentialCache credentialCache) {
    Map<String, ?> configs;
    if (listenerName == null)
        configs = config.values();
    else
        configs = config.valuesWithPrefixOverride(listenerName.configPrefix());

    ChannelBuilder channelBuilder;
    // Create the appropriate ChannelBuilder for the security protocol
    switch (securityProtocol) {
        case SSL:
            requireNonNullMode(mode, securityProtocol);
            channelBuilder = new SslChannelBuilder(mode);
            break;
        case SASL_SSL:
        case SASL_PLAINTEXT:
            requireNonNullMode(mode, securityProtocol);
            JaasContext jaasContext = JaasContext.load(contextType, listenerName, configs);
            channelBuilder = new SaslChannelBuilder(mode, jaasContext, securityProtocol,
                    clientSaslMechanism, saslHandshakeRequestEnable, credentialCache);
            break;
        case PLAINTEXT:
        case TRACE:
            channelBuilder = new PlaintextChannelBuilder();
            break;
        default:
            throw new IllegalArgumentException("Unexpected securityProtocol " + securityProtocol);
    }
    // Configure the ChannelBuilder by calling channelBuilder.configure()
    channelBuilder.configure(configs);
    return channelBuilder;
}
 
Developer ID: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 40, Source: ChannelBuilders.java

Example 10: getAbsoluteFile

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * Method is used to return a File checking to ensure that it is an absolute path.
 *
 * @param config config to read the value from
 * @param key    key for the value
 * @return File for the config value.
 */
public static File getAbsoluteFile(AbstractConfig config, String key) {
  Preconditions.checkNotNull(config, "config cannot be null");
  String path = config.getString(key);
  File file = new File(path);
  Preconditions.checkState(file.isAbsolute(), "'%s' must be an absolute path.", key);
  return new File(path);
}
 
Developer ID: jcustenborder, Project: connect-utils, Lines: 15, Source: ConfigUtils.java
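
A short usage sketch for the helper above. The keystore.path key and the ConfigDef are illustrative, not defined by the connect-utils project; the ConfigUtils call shown in the comment assumes that library is on the classpath.

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class AbsoluteFileSketch {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                .define("keystore.path", ConfigDef.Type.STRING, ConfigDef.Importance.HIGH,
                        "Absolute path to the keystore file");

        Map<String, String> props = new HashMap<>();
        props.put("keystore.path", "/etc/kafka/client.keystore.jks");
        AbstractConfig config = new AbstractConfig(def, props);

        // The same checks getAbsoluteFile performs before returning the File:
        File file = new File(config.getString("keystore.path"));
        System.out.println(file.isAbsolute()); // true
        // With connect-utils available this is equivalent to:
        // File keystore = ConfigUtils.getAbsoluteFile(config, "keystore.path");
    }
}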

Example 11: inetSocketAddresses

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * Method is used to return a list of InetSocketAddress from a config list of hostname:port strings.
 *
 * @param config config to read the value from
 * @param key    key for the value
 * @return List of InetSocketAddress for the supplied strings.
 */
public static List<InetSocketAddress> inetSocketAddresses(AbstractConfig config, String key) {
  Preconditions.checkNotNull(config, "config cannot be null");
  List<String> value = config.getList(key);
  List<InetSocketAddress> addresses = new ArrayList<>(value.size());
  for (String s : value) {
    addresses.add(parseInetSocketAddress(s));
  }
  return ImmutableList.copyOf(addresses);
}
 
Developer ID: jcustenborder, Project: connect-utils, Lines: 17, Source: ConfigUtils.java

Example 12: hostAndPorts

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
/**
 * Method is used to parse a list ConfigDef item into a list of HostAndPort.
 * @param config Config to read from
 * @param key ConfigItem to get the host string from.
 * @param defaultPort The default port to use if a port was not specified. Can be null.
 * @return List of HostAndPort based on the ConfigItem.
 */
public static List<HostAndPort> hostAndPorts(AbstractConfig config, String key, Integer defaultPort) {
  final List<String> inputs = config.getList(key);
  List<HostAndPort> result = new ArrayList<>();
  for (final String input : inputs) {
    final HostAndPort hostAndPort = hostAndPort(input, defaultPort);
    result.add(hostAndPort);
  }

  return ImmutableList.copyOf(result);
}
 
Developer ID: jcustenborder, Project: connect-utils, Lines: 19, Source: ConfigUtils.java
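
Both ConfigUtils helpers (Examples 11 and 12) start from AbstractConfig.getList(), which splits a comma-separated LIST value into strings before each entry is parsed. A sketch of that flow using only standard classes; the redis.hosts key is illustrative.

import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class HostListSketch {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                .define("redis.hosts", ConfigDef.Type.LIST, "localhost:6379",
                        ConfigDef.Importance.HIGH, "host:port pairs to connect to");

        Map<String, String> props = new HashMap<>();
        props.put("redis.hosts", "cache-1:6379,cache-2:6380");
        AbstractConfig config = new AbstractConfig(def, props);

        // getList() returns the split entries; inetSocketAddresses/hostAndPorts then parse each one.
        List<InetSocketAddress> addresses = new ArrayList<>();
        for (String entry : config.getList("redis.hosts")) {
            int idx = entry.lastIndexOf(':');
            addresses.add(InetSocketAddress.createUnresolved(
                    entry.substring(0, idx), Integer.parseInt(entry.substring(idx + 1))));
        }
        System.out.println(addresses.size()); // 2 unresolved addresses
    }
}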

Example 13: HiveMetaStore

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
public HiveMetaStore(Configuration conf, AbstractConfig connectorConfig)
    throws HiveMetaStoreException {
  HiveConf hiveConf = new HiveConf(conf, HiveConf.class);
  String hiveConfDir = connectorConfig.getString(HiveConfig.HIVE_CONF_DIR_CONFIG);
  String hiveMetaStoreUris = connectorConfig.getString(HiveConfig.HIVE_METASTORE_URIS_CONFIG);
  if (hiveMetaStoreUris.isEmpty()) {
    log.warn(
        "hive.metastore.uris empty, an embedded Hive metastore will be created in the directory"
        + " the connector is started. You need to start Hive in that specific directory to "
        + "query the data."
    );
  }
  if (!hiveConfDir.equals("")) {
    String hiveSitePath = hiveConfDir + "/hive-site.xml";
    File hiveSite = new File(hiveSitePath);
    if (!hiveSite.exists()) {
      log.warn(
          "hive-site.xml does not exist in provided Hive configuration directory {}.",
          hiveConfDir
      );
    }
    hiveConf.addResource(new Path(hiveSitePath));
  }
  hiveConf.set("hive.metastore.uris", hiveMetaStoreUris);
  try {
    client = HCatUtil.getHiveMetastoreClient(hiveConf);
  } catch (IOException | MetaException e) {
    throw new HiveMetaStoreException(e);
  }
}
 
Developer ID: confluentinc, Project: kafka-connect-storage-common, Lines: 31, Source: HiveMetaStore.java

Example 14: KafkaAvroHandler

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
public KafkaAvroHandler(OpMapper _opMapper, String configFile) {
    super(_opMapper, configFile);
    valSerialiazer = new SpecificAvroMutationSerializer();
    valSerialiazer.configure(
            new AbstractConfig(new ConfigDef(), config).originals(), false);
    keySerialiazer = new SpecificAvroMutationSerializer();
    keySerialiazer.configure(
            new AbstractConfig(new ConfigDef(), config).originals(), true);
}
 
Developer ID: rogers, Project: change-data-capture, Lines: 11, Source: KafkaAvroHandler.java

Example 15: plainValues

import org.apache.kafka.common.config.AbstractConfig; // import the required package/class
public Map<String, ?> plainValues() {
  Map<String, Object> map = new HashMap<>();
  for (AbstractConfig config : allConfigs) {
    map.putAll(config.values());
  }
  return map;
}
 
Developer ID: confluentinc, Project: kafka-connect-storage-cloud, Lines: 8, Source: S3SinkConnectorConfig.java
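
The method simply flattens the parsed values() of several partial configs into one map. A self-contained sketch of that merge; the two ConfigDefs and key names are illustrative, not the actual S3 connector definitions.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class PlainValuesSketch {
    public static void main(String[] args) {
        ConfigDef defA = new ConfigDef()
                .define("flush.size", ConfigDef.Type.INT, 3,
                        ConfigDef.Importance.HIGH, "records per file");
        ConfigDef defB = new ConfigDef()
                .define("bucket.name", ConfigDef.Type.STRING, "",
                        ConfigDef.Importance.HIGH, "target bucket");

        Map<String, String> props = new HashMap<>();
        props.put("flush.size", "1000");
        props.put("bucket.name", "my-bucket");

        List<AbstractConfig> allConfigs = Arrays.asList(
                new AbstractConfig(defA, props),
                new AbstractConfig(defB, props));

        // Same loop as plainValues(): later configs overwrite earlier keys on collision.
        Map<String, Object> merged = new HashMap<>();
        for (AbstractConfig config : allConfigs) {
            merged.putAll(config.values());
        }
        System.out.println(merged); // contains flush.size=1000 and bucket.name=my-bucket
    }
}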


Note: The org.apache.kafka.common.config.AbstractConfig class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their respective authors, who retain copyright of the source code; for redistribution and use, please follow the corresponding project's license. Do not repost without permission.