

Java IndexMetaData.isIndexUsingShadowReplicas Method Code Examples

This article collects typical usage examples of the Java method org.elasticsearch.cluster.metadata.IndexMetaData.isIndexUsingShadowReplicas. If you are wondering how IndexMetaData.isIndexUsingShadowReplicas is used in practice, the curated code examples below should help. You can also explore further usage examples of the enclosing class, org.elasticsearch.cluster.metadata.IndexMetaData.


The following shows 8 code examples of the IndexMetaData.isIndexUsingShadowReplicas method, sorted by popularity by default.
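Across the projects sampled below, the method appears in two forms: an instance method, indexMeta.isIndexUsingShadowReplicas(), which reads the index's own settings, and a static overload, IndexMetaData.isIndexUsingShadowReplicas(Settings), which inspects a Settings object directly (the underlying setting is index.shadow_replicas). The following is a minimal sketch of both call styles; the wrapper class and method names are illustrative only and not part of Elasticsearch.

import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;

public class ShadowReplicaChecks {

    // Static form: test raw index settings for the shadow-replica flag.
    static boolean usesShadowReplicas(Settings indexSettings) {
        return IndexMetaData.isIndexUsingShadowReplicas(indexSettings);
    }

    // Instance form: let the index metadata consult its own settings;
    // guard against a missing index, as the examples below do.
    static boolean usesShadowReplicas(IndexMetaData indexMetaData) {
        return indexMetaData != null && indexMetaData.isIndexUsingShadowReplicas();
    }
}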

Example 1: buildShardLevelInfo

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
static void buildShardLevelInfo(Logger logger, ShardStats[] stats, ImmutableOpenMap.Builder<String, Long> newShardSizes,
                                ImmutableOpenMap.Builder<ShardRouting, String> newShardRoutingToDataPath, ClusterState state) {
    MetaData meta = state.getMetaData();
    for (ShardStats s : stats) {
        IndexMetaData indexMeta = meta.index(s.getShardRouting().index());
        newShardRoutingToDataPath.put(s.getShardRouting(), s.getDataPath());
        long size = s.getStats().getStore().sizeInBytes();
        String sid = ClusterInfo.shardIdentifierFromRouting(s.getShardRouting());
        if (logger.isTraceEnabled()) {
            logger.trace("shard: {} size: {}", sid, size);
        }
        if (indexMeta != null && indexMeta.isIndexUsingShadowReplicas()) {
            // Shards on a shared filesystem should be considered of size 0
            if (logger.isTraceEnabled()) {
                logger.trace("shard: {} is using shadow replicas and will be treated as size 0", sid);
            }
            size = 0;
        }
        newShardSizes.put(sid, size);
    }
}
 
Developer: justor, Project: elasticsearch_my, Lines: 22, Source: InternalClusterInfoService.java

Example 2: resolveRequest

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
@Override
protected void resolveRequest(ClusterState state, InternalRequest request) {
    IndexMetaData indexMeta = state.getMetaData().index(request.concreteIndex());
    if (request.request().realtime && // if the realtime flag is set
            request.request().preference() == null && // the preference flag is not already set
            indexMeta != null && // and we have the index
            indexMeta.isIndexUsingShadowReplicas()) { // and the index uses shadow replicas
        // set the preference for the request to use "_primary" automatically
        request.request().preference(Preference.PRIMARY.type());
    }
    // update the routing (request#index here is possibly an alias)
    request.request().routing(state.metaData().resolveIndexRouting(request.request().parent(), request.request().routing(), request.request().index()));
    // Fail fast on the node that received the request.
    if (request.request().routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.request().type())) {
        throw new RoutingMissingException(request.concreteIndex(), request.request().type(), request.request().id());
    }
}
 
Developer: justor, Project: elasticsearch_my, Lines: 18, Source: TransportGetAction.java
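Example 2 above (and Example 4 below) apply the same idea: for a realtime GET against a shadow-replica index, the request is pinned to the primary shard, because a shadow replica only sees documents once their segments are visible on the shared filesystem. The sketch below expresses that rewrite as an explicit client-side request; the wrapper class and method are illustrative only.

import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.cluster.routing.Preference;

public class ShadowReplicaGetExample {

    // Builds the request the way resolveRequest() effectively rewrites it:
    // a realtime GET routed to the primary shard.
    static GetRequest realtimeGetFromPrimary(String index, String type, String id) {
        return new GetRequest(index, type, id)
                .realtime(true)
                .preference(Preference.PRIMARY.type()); // "_primary"
    }
}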

Example 3: buildShardLevelInfo

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
static void buildShardLevelInfo(ESLogger logger, ShardStats[] stats, HashMap<String, Long> newShardSizes, HashMap<ShardRouting, String> newShardRoutingToDataPath, ClusterState state) {
    MetaData meta = state.getMetaData();
    for (ShardStats s : stats) {
        IndexMetaData indexMeta = meta.index(s.getShardRouting().index());
        Settings indexSettings = indexMeta == null ? null : indexMeta.getSettings();
        newShardRoutingToDataPath.put(s.getShardRouting(), s.getDataPath());
        long size = s.getStats().getStore().sizeInBytes();
        String sid = ClusterInfo.shardIdentifierFromRouting(s.getShardRouting());
        if (logger.isTraceEnabled()) {
            logger.trace("shard: {} size: {}", sid, size);
        }
        if (indexSettings != null && IndexMetaData.isIndexUsingShadowReplicas(indexSettings)) {
            // Shards on a shared filesystem should be considered of size 0
            if (logger.isTraceEnabled()) {
                logger.trace("shard: {} is using shadow replicas and will be treated as size 0", sid);
            }
            size = 0;
        }
        newShardSizes.put(sid, size);
    }
}
 
Developer: baidu, Project: Elasticsearch, Lines: 22, Source: InternalClusterInfoService.java

Example 4: resolveRequest

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
@Override
protected void resolveRequest(ClusterState state, InternalRequest request) {
    if (request.request().realtime == null) {
        request.request().realtime = this.realtime;
    }
    IndexMetaData indexMeta = state.getMetaData().index(request.concreteIndex());
    if (request.request().realtime && // if the realtime flag is set
            request.request().preference() == null && // the preference flag is not already set
            indexMeta != null && // and we have the index
            IndexMetaData.isIndexUsingShadowReplicas(indexMeta.getSettings())) { // and the index uses shadow replicas
        // set the preference for the request to use "_primary" automatically
        request.request().preference(Preference.PRIMARY.type());
    }
    // update the routing (request#index here is possibly an alias)
    request.request().routing(state.metaData().resolveIndexRouting(request.request().routing(), request.request().index()));
    // Fail fast on the node that received the request.
    if (request.request().routing() == null && state.getMetaData().routingRequired(request.concreteIndex(), request.request().type())) {
        throw new RoutingMissingException(request.concreteIndex(), request.request().type(), request.request().id());
    }
}
 
Developer: baidu, Project: Elasticsearch, Lines: 21, Source: TransportGetAction.java

Example 5: IndexSettings

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
/**
 * Creates a new {@link IndexSettings} instance. The given node settings will be merged with the settings in the metadata
 * while index level settings will overwrite node settings.
 *
 * @param indexMetaData the index metadata this settings object is associated with
 * @param nodeSettings the settings of the node this index is allocated on.
 */
public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSettings, IndexScopedSettings indexScopedSettings) {
    scopedSettings = indexScopedSettings.copy(nodeSettings, indexMetaData);
    this.nodeSettings = nodeSettings;
    this.settings = Settings.builder().put(nodeSettings).put(indexMetaData.getSettings()).build();
    this.index = indexMetaData.getIndex();
    version = Version.indexCreated(settings);
    logger = Loggers.getLogger(getClass(), settings, index);
    nodeName = Node.NODE_NAME_SETTING.get(settings);
    this.indexMetaData = indexMetaData;
    numberOfShards = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);
    isShadowReplicaIndex = indexMetaData.isIndexUsingShadowReplicas(settings);

    this.defaultField = DEFAULT_FIELD_SETTING.get(settings);
    this.queryStringLenient = QUERY_STRING_LENIENT_SETTING.get(settings);
    this.queryStringAnalyzeWildcard = QUERY_STRING_ANALYZE_WILDCARD.get(nodeSettings);
    this.queryStringAllowLeadingWildcard = QUERY_STRING_ALLOW_LEADING_WILDCARD.get(nodeSettings);
    this.defaultAllowUnmappedFields = scopedSettings.get(ALLOW_UNMAPPED);
    this.durability = scopedSettings.get(INDEX_TRANSLOG_DURABILITY_SETTING);
    syncInterval = INDEX_TRANSLOG_SYNC_INTERVAL_SETTING.get(settings);
    refreshInterval = scopedSettings.get(INDEX_REFRESH_INTERVAL_SETTING);
    globalCheckpointInterval = scopedSettings.get(INDEX_SEQ_NO_CHECKPOINT_SYNC_INTERVAL);
    flushThresholdSize = scopedSettings.get(INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING);
    mergeSchedulerConfig = new MergeSchedulerConfig(this);
    gcDeletesInMillis = scopedSettings.get(INDEX_GC_DELETES_SETTING).getMillis();
    warmerEnabled = scopedSettings.get(INDEX_WARMER_ENABLED_SETTING);
    maxResultWindow = scopedSettings.get(MAX_RESULT_WINDOW_SETTING);
    maxAdjacencyMatrixFilters = scopedSettings.get(MAX_ADJACENCY_MATRIX_FILTERS_SETTING);
    maxRescoreWindow = scopedSettings.get(MAX_RESCORE_WINDOW_SETTING);
    TTLPurgeDisabled = scopedSettings.get(INDEX_TTL_DISABLE_PURGE_SETTING);
    maxRefreshListeners = scopedSettings.get(MAX_REFRESH_LISTENERS_PER_SHARD);
    maxSlicesPerScroll = scopedSettings.get(MAX_SLICES_PER_SCROLL);
    this.mergePolicyConfig = new MergePolicyConfig(logger, this);

    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_COMPOUND_FORMAT_SETTING, mergePolicyConfig::setNoCFSRatio);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED_SETTING, mergePolicyConfig::setExpungeDeletesAllowed);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_FLOOR_SEGMENT_SETTING, mergePolicyConfig::setFloorSegmentSetting);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_SETTING, mergePolicyConfig::setMaxMergesAtOnce);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT_SETTING, mergePolicyConfig::setMaxMergesAtOnceExplicit);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT_SETTING, mergePolicyConfig::setMaxMergedSegment);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER_SETTING, mergePolicyConfig::setSegmentsPerTier);
    scopedSettings.addSettingsUpdateConsumer(MergePolicyConfig.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT_SETTING, mergePolicyConfig::setReclaimDeletesWeight);

    scopedSettings.addSettingsUpdateConsumer(MergeSchedulerConfig.MAX_THREAD_COUNT_SETTING, MergeSchedulerConfig.MAX_MERGE_COUNT_SETTING,
        mergeSchedulerConfig::setMaxThreadAndMergeCount);
    scopedSettings.addSettingsUpdateConsumer(MergeSchedulerConfig.AUTO_THROTTLE_SETTING, mergeSchedulerConfig::setAutoThrottle);
    scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_DURABILITY_SETTING, this::setTranslogDurability);
    scopedSettings.addSettingsUpdateConsumer(INDEX_TTL_DISABLE_PURGE_SETTING, this::setTTLPurgeDisabled);
    scopedSettings.addSettingsUpdateConsumer(MAX_RESULT_WINDOW_SETTING, this::setMaxResultWindow);
    scopedSettings.addSettingsUpdateConsumer(MAX_ADJACENCY_MATRIX_FILTERS_SETTING, this::setMaxAdjacencyMatrixFilters);
    scopedSettings.addSettingsUpdateConsumer(MAX_RESCORE_WINDOW_SETTING, this::setMaxRescoreWindow);
    scopedSettings.addSettingsUpdateConsumer(INDEX_WARMER_ENABLED_SETTING, this::setEnableWarmer);
    scopedSettings.addSettingsUpdateConsumer(INDEX_GC_DELETES_SETTING, this::setGCDeletes);
    scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING, this::setTranslogFlushThresholdSize);
    scopedSettings.addSettingsUpdateConsumer(INDEX_REFRESH_INTERVAL_SETTING, this::setRefreshInterval);
    scopedSettings.addSettingsUpdateConsumer(MAX_REFRESH_LISTENERS_PER_SHARD, this::setMaxRefreshListeners);
    scopedSettings.addSettingsUpdateConsumer(MAX_SLICES_PER_SCROLL, this::setMaxSlicesPerScroll);
}
 
Developer: justor, Project: elasticsearch_my, Lines: 65, Source: IndexSettings.java

Example 6: useShadowEngine

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
/** Return true if a shadow engine should be used */
protected boolean useShadowEngine() {
    return primary == false && IndexMetaData.isIndexUsingShadowReplicas(settings);
}
 
Developer: baidu, Project: Elasticsearch, Lines: 5, Source: IndexShardModule.java

Example 7: shouldExecuteReplication

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
/**
 * Indicates whether this operation should be replicated to shadow replicas or not. If this method returns false, the replication phase will be skipped.
 * For example writes such as index and delete don't need to be replicated on shadow replicas but refresh and flush do.
 */
protected boolean shouldExecuteReplication(Settings settings) {
    return IndexMetaData.isIndexUsingShadowReplicas(settings) == false 
            && IndexMetaData.isIndexUsingDLEngine(settings) == false;
}
 
Developer: baidu, Project: Elasticsearch, Lines: 9, Source: TransportReplicationAction.java

Example 8: shouldExecuteReplication

import org.elasticsearch.cluster.metadata.IndexMetaData; // import the class the method depends on
/**
 * Indicates whether this operation should be replicated to shadow replicas or not. If this method returns false, the replication phase
 * will be skipped. For example writes such as index and delete don't need to be replicated on shadow replicas but refresh and flush do.
 */
protected boolean shouldExecuteReplication(IndexMetaData indexMetaData) {
    return indexMetaData.isIndexUsingShadowReplicas() == false;
}
 
Developer: justor, Project: elasticsearch_my, Lines: 8, Source: TransportReplicationAction.java
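As the javadoc notes, operations such as refresh and flush still need to be replicated even when an index uses shadow replicas. The sketch below shows how a subclass could override this hook for such an operation; the class name is hypothetical, and in real code it would extend TransportReplicationAction and carry @Override.

import org.elasticsearch.cluster.metadata.IndexMetaData;

// Hypothetical replication action for an operation (e.g. refresh or flush)
// that must also run on shadow replicas.
public class TransportShardMaintenanceAction {

    protected boolean shouldExecuteReplication(IndexMetaData indexMetaData) {
        // Always replicate, even for indices backed by a shared filesystem.
        return true;
    }
}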


Note: the org.elasticsearch.cluster.metadata.IndexMetaData.isIndexUsingShadowReplicas examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as Github/MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Please follow each project's License when redistributing or using the code, and do not reproduce this article without permission.