

Java ByteSizeValue.bytes Method Code Examples

This article collects typical usage examples of the Java method org.elasticsearch.common.unit.ByteSizeValue.bytes. If you are wondering what ByteSizeValue.bytes does, how to call it, or what real-world uses look like, the curated examples below should help. You can also explore further usage examples of the enclosing class, org.elasticsearch.common.unit.ByteSizeValue.


Nine code examples of the ByteSizeValue.bytes method are shown below, ordered by popularity by default.
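
A minimal orientation sketch before the nine examples (not taken from them): it assumes an Elasticsearch 2.x-era dependency, where the accessor is still named bytes() (later releases renamed it getBytes()).

import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class ByteSizeValueBytesDemo {
    public static void main(String[] args) {
        // construct from an explicit size and unit
        ByteSizeValue fiveMb = new ByteSizeValue(5, ByteSizeUnit.MB);
        System.out.println(fiveMb.bytes());   // 5242880

        // parse from a human-readable string, e.g. a setting value
        ByteSizeValue parsed = ByteSizeValue.parseBytesSizeValue("512kb", "demo.setting");
        System.out.println(parsed.bytes());   // 524288
    }
}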

Example 1: FileInfo

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
/**
 * Constructs a new instance of file info.
 *
 * @param name     file name as stored in the blob store
 * @param metaData the file's metadata
 * @param partSize size of a single chunk
 */
public FileInfo(String name, StoreFileMetaData metaData, ByteSizeValue partSize) {
    this.name = name;
    this.metadata = metaData;

    long partBytes = Long.MAX_VALUE;
    if (partSize != null) {
        partBytes = partSize.bytes();
    }

    long totalLength = metaData.length();
    long numberOfParts = totalLength / partBytes;
    if (totalLength % partBytes > 0) {
        numberOfParts++;
    }
    if (numberOfParts == 0) {
        numberOfParts++;
    }
    this.numberOfParts = numberOfParts;
    this.partSize = partSize;
    this.partBytes = partBytes;
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 29, Source: BlobStoreIndexShardSnapshot.java
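
To make the part-count arithmetic concrete, here is a small illustrative sketch of the same ceiling-division logic with made-up numbers (a 10 MB file split into 3 MB parts); the helper class is hypothetical and not part of Elasticsearch.

import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class PartCountDemo {
    // same ceiling division as FileInfo: how many parts are needed to cover totalLength
    static long numberOfParts(long totalLength, ByteSizeValue partSize) {
        long partBytes = partSize == null ? Long.MAX_VALUE : partSize.bytes();
        long parts = totalLength / partBytes;
        if (totalLength % partBytes > 0) {
            parts++;
        }
        return parts == 0 ? 1 : parts;   // even an empty file counts as one part
    }

    public static void main(String[] args) {
        ByteSizeValue partSize = new ByteSizeValue(3, ByteSizeUnit.MB);
        long totalLength = new ByteSizeValue(10, ByteSizeUnit.MB).bytes();
        System.out.println(numberOfParts(totalLength, partSize));   // 4
    }
}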

Example 2: MemoryCircuitBreaker

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
/**
 * Create a circuit breaker that will break if the number of estimated
 * bytes grows above the limit. All estimations will be multiplied by
 * the given overheadConstant. Uses the given oldBreaker to initialize
 * the starting offset.
 * @param limit circuit breaker limit
 * @param overheadConstant constant multiplier for byte estimations
 * @param oldBreaker the previous circuit breaker to inherit the used value from (starting offset)
 */
public MemoryCircuitBreaker(ByteSizeValue limit, double overheadConstant, MemoryCircuitBreaker oldBreaker, ESLogger logger) {
    this.memoryBytesLimit = limit.bytes();
    this.overheadConstant = overheadConstant;
    if (oldBreaker == null) {
        this.used = new AtomicLong(0);
        this.trippedCount = new AtomicLong(0);
    } else {
        this.used = oldBreaker.used;
        this.trippedCount = oldBreaker.trippedCount;
    }
    this.logger = logger;
    if (logger.isTraceEnabled()) {
        logger.trace("Creating MemoryCircuitBreaker with a limit of {} bytes ({}) and a overhead constant of {}",
                this.memoryBytesLimit, limit, this.overheadConstant);
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 26, Source: MemoryCircuitBreaker.java
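
As a usage sketch of the constructor above: the 512 MB limit, the 1.03 overhead, and the demo class are made-up values, and Loggers.getLogger is assumed to be available in the same 2.x-era codebase.

import org.elasticsearch.common.breaker.MemoryCircuitBreaker;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class BreakerDemo {
    public static void main(String[] args) {
        ESLogger logger = Loggers.getLogger(BreakerDemo.class);
        // break once estimated usage exceeds 512 MB; each estimation is scaled by 1.03
        MemoryCircuitBreaker breaker =
                new MemoryCircuitBreaker(new ByteSizeValue(512, ByteSizeUnit.MB), 1.03, null, logger);
        breaker.addEstimateBytesAndMaybeBreak(1024, "demo");   // accounts 1 KB against the limit
    }
}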

Example 3: BulkProcessor

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
BulkProcessor(Client client, BackoffPolicy backoffPolicy, Listener listener, @Nullable String name, int concurrentRequests, int bulkActions, ByteSizeValue bulkSize, @Nullable TimeValue flushInterval) {
    this.bulkActions = bulkActions;
    this.bulkSize = bulkSize.bytes();

    this.bulkRequest = new BulkRequest();
    this.bulkRequestHandler = (concurrentRequests == 0) ? BulkRequestHandler.syncHandler(client, backoffPolicy, listener) : BulkRequestHandler.asyncHandler(client, backoffPolicy, listener, concurrentRequests);

    if (flushInterval != null) {
        this.scheduler = (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(1, EsExecutors.daemonThreadFactory(client.settings(), (name != null ? "[" + name + "]" : "") + "bulk_processor"));
        this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false);
        this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false);
        this.scheduledFuture = this.scheduler.scheduleWithFixedDelay(new Flush(), flushInterval.millis(), flushInterval.millis(), TimeUnit.MILLISECONDS);
    } else {
        this.scheduler = null;
        this.scheduledFuture = null;
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 18, Source: BulkProcessor.java
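
The constructor above has package-private access; client code typically obtains an instance through BulkProcessor.builder instead, roughly as in this sketch (the no-op listener and the 1000-action / 5 MB / 5-second thresholds are arbitrary choices):

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

public class BulkProcessorDemo {
    static BulkProcessor build(Client client) {
        return BulkProcessor.builder(client, new BulkProcessor.Listener() {
                    @Override public void beforeBulk(long executionId, BulkRequest request) {}
                    @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
                    @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
                })
                .setBulkActions(1000)                               // flush after 1000 requests...
                .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) // ...or once 5 MB are buffered
                .setFlushInterval(TimeValue.timeValueSeconds(5))    // ...or every 5 seconds
                .setConcurrentRequests(1)
                .build();
    }
}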

Example 4: getRateLimiter

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
/**
 * Configures RateLimiter based on repository and global settings
 *
 * @param repositorySettings repository settings
 * @param setting            setting to use to configure rate limiter
 * @param defaultRate        default limiting rate
 * @return rate limiter, or null if no throttling is needed
 */
private RateLimiter getRateLimiter(RepositorySettings repositorySettings, String setting, ByteSizeValue defaultRate) {
    ByteSizeValue maxSnapshotBytesPerSec = repositorySettings.settings().getAsBytesSize(setting,
            settings.getAsBytesSize(setting, defaultRate));
    if (maxSnapshotBytesPerSec.bytes() <= 0) {
        return null;
    } else {
        return new RateLimiter.SimpleRateLimiter(maxSnapshotBytesPerSec.mbFrac());
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 18, Source: BlobStoreRepository.java
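
A minimal sketch of the same idea without the repository plumbing, assuming Lucene's RateLimiter.SimpleRateLimiter (which takes a MB-per-second budget) and a made-up 40 MB/s cap:

import org.apache.lucene.store.RateLimiter;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class RateLimiterDemo {
    static RateLimiter limiterFor(ByteSizeValue maxBytesPerSec) {
        if (maxBytesPerSec == null || maxBytesPerSec.bytes() <= 0) {
            return null;   // no throttling configured
        }
        // SimpleRateLimiter expects megabytes per second, hence mbFrac()
        return new RateLimiter.SimpleRateLimiter(maxBytesPerSec.mbFrac());
    }

    public static void main(String[] args) {
        RateLimiter limiter = limiterFor(new ByteSizeValue(40, ByteSizeUnit.MB));
        System.out.println(limiter.getMBPerSec());   // 40.0
    }
}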

Example 5: validate

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
private ByteSizeValue validate(ByteSizeValue num) {
    if (num.bytes() < setting.minValue() || num.bytes() > setting.maxValue()) {
        throw invalidException();
    }
    return num;
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 7, Source: SettingsAppliers.java
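
Since setting and invalidException() belong to the surrounding SettingsAppliers class, here is a self-contained variant of the same bounds check with hypothetical limits:

import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class ByteSizeBoundsDemo {
    static ByteSizeValue validate(ByteSizeValue num, long minBytes, long maxBytes) {
        if (num.bytes() < minBytes || num.bytes() > maxBytes) {
            throw new IllegalArgumentException("value " + num + " must be between "
                    + minBytes + " and " + maxBytes + " bytes");
        }
        return num;
    }

    public static void main(String[] args) {
        // accepts 64 MB when the allowed range is 1 MB .. 1 GB
        ByteSizeValue ok = validate(new ByteSizeValue(64, ByteSizeUnit.MB),
                new ByteSizeValue(1, ByteSizeUnit.MB).bytes(),
                new ByteSizeValue(1, ByteSizeUnit.GB).bytes());
        System.out.println(ok);
    }
}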

Example 6: onRefreshSettings

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
@Override
public void onRefreshSettings(Settings settings) {

    // Fielddata settings
    ByteSizeValue newFielddataMax = settings.getAsMemory(FIELDDATA_CIRCUIT_BREAKER_LIMIT_SETTING, null);
    Double newFielddataOverhead = settings.getAsDouble(FIELDDATA_CIRCUIT_BREAKER_OVERHEAD_SETTING, null);
    if (newFielddataMax != null || newFielddataOverhead != null) {
        long newFielddataLimitBytes = newFielddataMax == null ? HierarchyCircuitBreakerService.this.fielddataSettings.getLimit() : newFielddataMax.bytes();
        newFielddataOverhead = newFielddataOverhead == null ? HierarchyCircuitBreakerService.this.fielddataSettings.getOverhead() : newFielddataOverhead;

        BreakerSettings newFielddataSettings = new BreakerSettings(CircuitBreaker.FIELDDATA, newFielddataLimitBytes, newFielddataOverhead,
                HierarchyCircuitBreakerService.this.fielddataSettings.getType());
        registerBreaker(newFielddataSettings);
        HierarchyCircuitBreakerService.this.fielddataSettings = newFielddataSettings;
        logger.info("Updated breaker settings fielddata: {}", newFielddataSettings);
    }

    // Request settings
    ByteSizeValue newRequestMax = settings.getAsMemory(REQUEST_CIRCUIT_BREAKER_LIMIT_SETTING, null);
    Double newRequestOverhead = settings.getAsDouble(REQUEST_CIRCUIT_BREAKER_OVERHEAD_SETTING, null);
    if (newRequestMax != null || newRequestOverhead != null) {
        long newRequestLimitBytes = newRequestMax == null ? HierarchyCircuitBreakerService.this.requestSettings.getLimit() : newRequestMax.bytes();
        newRequestOverhead = newRequestOverhead == null ? HierarchyCircuitBreakerService.this.requestSettings.getOverhead() : newRequestOverhead;

        BreakerSettings newRequestSettings = new BreakerSettings(CircuitBreaker.REQUEST, newRequestLimitBytes, newRequestOverhead,
                HierarchyCircuitBreakerService.this.requestSettings.getType());
        registerBreaker(newRequestSettings);
        HierarchyCircuitBreakerService.this.requestSettings = newRequestSettings;
        logger.info("Updated breaker settings request: {}", newRequestSettings);
    }

    // Parent settings
    long oldParentMax = HierarchyCircuitBreakerService.this.parentSettings.getLimit();
    ByteSizeValue newParentMax = settings.getAsMemory(TOTAL_CIRCUIT_BREAKER_LIMIT_SETTING, null);
    if (newParentMax != null && (newParentMax.bytes() != oldParentMax)) {
        BreakerSettings newParentSettings = new BreakerSettings(CircuitBreaker.PARENT, newParentMax.bytes(), 1.0, CircuitBreaker.Type.PARENT);
        validateSettings(new BreakerSettings[]{newParentSettings});
        HierarchyCircuitBreakerService.this.parentSettings = newParentSettings;
        logger.info("Updated breaker settings parent: {}", newParentSettings);
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 42, Source: HierarchyCircuitBreakerService.java
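
For context, the limits consumed above are usually changed at runtime through the cluster-update-settings API. A hedged sketch, assuming the standard breaker setting keys (indices.breaker.fielddata.limit / .overhead) and a 2.x-era Settings builder:

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class BreakerSettingsUpdateDemo {
    // pushes new fielddata breaker limits; node-level listeners like the one above pick them up
    static void raiseFielddataBreaker(Client client) {
        client.admin().cluster().prepareUpdateSettings()
                .setTransientSettings(Settings.settingsBuilder()
                        .put("indices.breaker.fielddata.limit", "40%")
                        .put("indices.breaker.fielddata.overhead", 1.05)
                        .build())
                .get();
    }
}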

Example 7: updateBufferSize

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
/**
 * Change the indexing and translog buffer sizes. If {@code IndexWriter} is currently using more than
 * the new indexing buffer size, then we do a refresh to free up the heap.
 */
public void updateBufferSize(ByteSizeValue shardIndexingBufferSize, ByteSizeValue shardTranslogBufferSize) {

    final EngineConfig config = engineConfig;
    final ByteSizeValue preValue = config.getIndexingBufferSize();

    config.setIndexingBufferSize(shardIndexingBufferSize);

    Engine engine = engineUnsafe();
    if (engine == null) {
        logger.debug("updateBufferSize: engine is not initialized yet; skipping");
        return;
    }

    // update engine if it is already started.
    if (preValue.bytes() != shardIndexingBufferSize.bytes()) {
        // so we push these changes down to IndexWriter:
        engine.onSettingsChanged();

        long iwBytesUsed = engine.indexWriterRAMBytesUsed();

        String message = LoggerMessageFormat.format("updating index_buffer_size from [{}] to [{}]; IndexWriter now using [{}] bytes",
                preValue, shardIndexingBufferSize, iwBytesUsed);

        if (iwBytesUsed > shardIndexingBufferSize.bytes()) {
            // our allowed buffer was changed to less than we are currently using; we ask IW to refresh
            // so it clears its buffers (otherwise it won't clear until the next indexing/delete op)
            logger.debug(message + "; now refresh to clear IndexWriter memory");

            // TODO: should IW have an API to move segments to disk, but not refresh?  Its flush method is protected...
            try {
                refresh("update index buffer");
            } catch (Throwable e) {
                logger.warn("failed to refresh after decreasing index buffer", e);
            }
        } else {
            logger.debug(message);
        }
    }
    engine.getTranslog().updateBuffer(shardTranslogBufferSize);
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 45, Source: IndexShard.java
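
Note that the method compares buffer sizes via bytes() rather than object identity, so sizes expressed in different units compare cleanly; a tiny illustrative check:

import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class BufferSizeCompareDemo {
    public static void main(String[] args) {
        ByteSizeValue oldBuffer = new ByteSizeValue(64, ByteSizeUnit.MB);
        ByteSizeValue newBuffer = new ByteSizeValue(65536, ByteSizeUnit.KB);
        // same underlying byte count, so no engine update would be triggered
        System.out.println(oldBuffer.bytes() == newBuffer.bytes());   // true
    }
}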

Example 8: buildTable

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
private Table buildTable(RestRequest request, final ClusterStateResponse state, final NodesStatsResponse stats) {
    final ObjectIntScatterMap<String> allocs = new ObjectIntScatterMap<>();

    for (ShardRouting shard : state.getState().routingTable().allShards()) {
        String nodeId = "UNASSIGNED";

        if (shard.assignedToNode()) {
            nodeId = shard.currentNodeId();
        }

        allocs.addTo(nodeId, 1);
    }

    Table table = getTableWithHeader(request);

    for (NodeStats nodeStats : stats.getNodes()) {
        DiscoveryNode node = nodeStats.getNode();

        int shardCount = allocs.getOrDefault(node.id(), 0);

        ByteSizeValue total = nodeStats.getFs().getTotal().getTotal();
        ByteSizeValue avail = nodeStats.getFs().getTotal().getAvailable();
        // if we don't know how much is used (non-data nodes), it means 0
        long used = 0;
        short diskPercent = -1;
        if (total.bytes() > 0) {
            used = total.bytes() - avail.bytes();
            if (used >= 0 && avail.bytes() >= 0) {
                diskPercent = (short) (used * 100 / (used + avail.bytes()));
            }
        }

        table.startRow();
        table.addCell(shardCount);
        table.addCell(nodeStats.getIndices().getStore().getSize());
        table.addCell(used < 0 ? null : new ByteSizeValue(used));
        table.addCell(avail.bytes() < 0 ? null : avail);
        table.addCell(total.bytes() < 0 ? null : total);
        table.addCell(diskPercent < 0 ? null : diskPercent);
        table.addCell(node.getHostName());
        table.addCell(node.getHostAddress());
        table.addCell(node.name());
        table.endRow();
    }

    final String UNASSIGNED = "UNASSIGNED";
    if (allocs.containsKey(UNASSIGNED)) {
        table.startRow();
        table.addCell(allocs.get(UNASSIGNED));
        table.addCell(null);
        table.addCell(null);
        table.addCell(null);
        table.addCell(null);
        table.addCell(null);
        table.addCell(null);
        table.addCell(null);
        table.addCell(UNASSIGNED);
        table.endRow();
    }

    return table;
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 63, Source: RestAllocationAction.java
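
The disk-percentage arithmetic above, worked through with made-up numbers (100 GB total, 25 GB available):

import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class DiskPercentDemo {
    public static void main(String[] args) {
        ByteSizeValue total = new ByteSizeValue(100, ByteSizeUnit.GB);
        ByteSizeValue avail = new ByteSizeValue(25, ByteSizeUnit.GB);

        long used = total.bytes() - avail.bytes();                           // 75 GB in bytes
        short diskPercent = (short) (used * 100 / (used + avail.bytes()));   // 75
        System.out.println(new ByteSizeValue(used) + " used, " + diskPercent + "%");
    }
}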

Example 9: onRefreshSettings

import org.elasticsearch.common.unit.ByteSizeValue; // import the package/class this method depends on
@Override
public void onRefreshSettings(Settings settings) {
    String newLowWatermark = settings.get(CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK,
            DiskThresholdDecider.this.settings.get(
                    CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK,
                    DEFAULT_LOW_DISK_WATERMARK));
    String newHighWatermark = settings.get(CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK,
            DiskThresholdDecider.this.settings.get(
                    CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK,
                    DEFAULT_HIGH_DISK_WATERMARK));
    Boolean newRelocationsSetting = settings.getAsBoolean(CLUSTER_ROUTING_ALLOCATION_INCLUDE_RELOCATIONS,
            DiskThresholdDecider.this.settings.getAsBoolean(
                    CLUSTER_ROUTING_ALLOCATION_INCLUDE_RELOCATIONS,
                    DEFAULT_INCLUDE_RELOCATIONS));
    Boolean newEnableSetting =  settings.getAsBoolean(
            CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED,
            DiskThresholdDecider.this.settings.getAsBoolean(
                    CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED,
                    DEFAULT_THRESHOLD_ENABLED));

    TimeValue newRerouteInterval = settings.getAsTime(CLUSTER_ROUTING_ALLOCATION_REROUTE_INTERVAL, null);

    if (newEnableSetting != null && newEnableSetting != DiskThresholdDecider.this.enabled) {
        logger.info("updating [{}] from [{}] to [{}]", CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED,
                DiskThresholdDecider.this.enabled, newEnableSetting);
        DiskThresholdDecider.this.enabled = newEnableSetting;
    }
    if (newRelocationsSetting != null && newRelocationsSetting != DiskThresholdDecider.this.includeRelocations) {
        logger.info("updating [{}] from [{}] to [{}]", CLUSTER_ROUTING_ALLOCATION_INCLUDE_RELOCATIONS,
                DiskThresholdDecider.this.includeRelocations, newRelocationsSetting);
        DiskThresholdDecider.this.includeRelocations = newRelocationsSetting;
    }
    if (newLowWatermark != null) {
        if (!validWatermarkSetting(newLowWatermark, CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK)) {
            throw new ElasticsearchParseException("unable to parse low watermark [{}]", newLowWatermark);
        }
        Double newFreeDiskThresholdLow = 100.0 - thresholdPercentageFromWatermark(newLowWatermark);
        ByteSizeValue newFreeBytesThresholdLow = thresholdBytesFromWatermark(newLowWatermark, CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK);
        if (!freeDiskThresholdLow.equals(newFreeDiskThresholdLow)
                || freeBytesThresholdLow.bytes() != newFreeBytesThresholdLow.bytes()) {
            logger.info("updating [{}] to [{}]", CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK, newLowWatermark);
            DiskThresholdDecider.this.freeDiskThresholdLow = newFreeDiskThresholdLow;
            DiskThresholdDecider.this.freeBytesThresholdLow = newFreeBytesThresholdLow;
        }
    }
    if (newHighWatermark != null) {
        if (!validWatermarkSetting(newHighWatermark, CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK)) {
            throw new ElasticsearchParseException("unable to parse high watermark [{}]", newHighWatermark);
        }
        Double newFreeDiskThresholdHigh = 100.0 - thresholdPercentageFromWatermark(newHighWatermark);
        ByteSizeValue newFreeBytesThresholdHigh = thresholdBytesFromWatermark(newHighWatermark, CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK);
        if (!freeDiskThresholdHigh.equals(newFreeDiskThresholdHigh)
                || freeBytesThresholdHigh.bytes() != newFreeBytesThresholdHigh.bytes()) {
            logger.info("updating [{}] to [{}]", CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK, newHighWatermark);
            DiskThresholdDecider.this.freeDiskThresholdHigh = 100.0 - thresholdPercentageFromWatermark(newHighWatermark);
            DiskThresholdDecider.this.freeBytesThresholdHigh = thresholdBytesFromWatermark(newHighWatermark, CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK);
        }
    }
    if (newRerouteInterval != null) {
        logger.info("updating [{}] to [{}]", CLUSTER_ROUTING_ALLOCATION_REROUTE_INTERVAL, newRerouteInterval);
        DiskThresholdDecider.this.rerouteInterval = newRerouteInterval;
    }
}
 
Developer ID: baidu, Project: Elasticsearch, Lines: 64, Source: DiskThresholdDecider.java
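
For reference, the watermark values parsed above accept either a percentage or an absolute byte size. A hedged configuration sketch, assuming the standard disk-allocation setting keys and a 2.x-era Settings builder:

import org.elasticsearch.common.settings.Settings;

public class WatermarkSettingsDemo {
    public static void main(String[] args) {
        Settings settings = Settings.settingsBuilder()
                // either a percentage ("85%") or an absolute size ("10gb") is accepted
                .put("cluster.routing.allocation.disk.watermark.low", "85%")
                .put("cluster.routing.allocation.disk.watermark.high", "10gb")
                .build();
        System.out.println(settings.get("cluster.routing.allocation.disk.watermark.low"));   // 85%
    }
}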


Note: The org.elasticsearch.common.unit.ByteSizeValue.bytes method examples in this article were collected by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective developers, and copyright remains with the original authors; consult the corresponding project's License before distributing or reusing them, and do not reproduce this article without permission.