This article collects typical usage examples of the Java method org.apache.spark.SparkContext.accumulator. If you are wondering how SparkContext.accumulator is used in Java, the curated code samples below may help. You can also explore further usage examples of its enclosing class, org.apache.spark.SparkContext.
Three code examples of SparkContext.accumulator are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
Example 1: initializeVariables
import org.apache.spark.SparkContext; // import for the method's dependency

public void initializeVariables(SparkContext sc) {
    for (int i = 0; i < this.numberOfCluster; i++) {
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        clusterCounterMaliciousList.add(longAccumulator);
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        clusterCounterBenignList.add(longAccumulator);
    }
    // note: this assignment to longAccumulator is not used within this method
    longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
    totalBenign = sc.accumulator(0L, new LongAccumulatorParam());
    totalMalicious = sc.accumulator(0L, new LongAccumulatorParam());
    totalNanoSeconds = sc.accumulator(0L, "totalNanoSeconds", new LongAccumulatorParam());
    flowCounterMalicious = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
    flowCounterBenign = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
}
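The LongAccumulatorParam passed to sc.accumulator above supplies the merge behavior of a Spark 1.x accumulator (the AccumulatorParam API, deprecated since Spark 2.0, defines a zero value and an addInPlace merge). As a rough, Spark-free sketch of what that merge amounts to — the class and method names here are illustrative, not this project's actual code:

```java
// Illustrative sketch only: real code would implement
// org.apache.spark.AccumulatorParam<Long> instead of standalone statics.
public class LongMergeSketch {

    // zero(): the identity value each task starts counting from
    public static long zero() {
        return 0L;
    }

    // addInPlace(): how two partial counts are combined
    public static long addInPlace(long a, long b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Simulate three tasks each contributing a partial count
        long total = zero();
        for (long partial : new long[] {3L, 5L, 7L}) {
            total = addInPlace(total, partial);
        }
        System.out.println(total); // prints 15
    }
}
```

Because addition is associative and commutative, Spark can fold the per-task partials together in any order and still reach the same driver-side total.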
Example 2: initializeVariables
import org.apache.spark.SparkContext; // import for the method's dependency

public void initializeVariables(SparkContext sc) {
    for (int i = 0; i < this.numberOfLabels; i++) {
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        classificationCounterValidatedList.add(longAccumulator);
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        classificationCounterOriginList.add(longAccumulator);
        // unique entries
        flowCounter = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
        uniqueOriginEntires.add(flowCounter);
        flowCounter = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
        uniqueValidatedEntires.add(flowCounter);
    }
    // note: this assignment to longAccumulator is not used within this method
    longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
    totalBenign = sc.accumulator(0L, new LongAccumulatorParam());
    totalMalicious = sc.accumulator(0L, new LongAccumulatorParam());
    truePositive = sc.accumulator(0L, new LongAccumulatorParam());
    falseNegative = sc.accumulator(0L, new LongAccumulatorParam());
    falsePositive = sc.accumulator(0L, new LongAccumulatorParam());
    trueNegative = sc.accumulator(0L, new LongAccumulatorParam());
    totalNanoSeconds = sc.accumulator(0L, "totalNanoSeconds", new LongAccumulatorParam());
    flowCounterMalicious = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
    flowCounterBenign = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
}
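Example 2 accumulates the four confusion-matrix cells (truePositive, falseNegative, falsePositive, trueNegative). After the job completes, the driver can read each accumulator's value and derive the usual classification metrics. A minimal sketch with hypothetical final counts — the numbers and helper names below are illustrative only:

```java
// Sketch of driver-side metric computation from final accumulator values.
public class MetricsSketch {

    // precision = TP / (TP + FP); guard against an empty denominator
    public static double precision(long tp, long fp) {
        return (tp + fp == 0) ? 0.0 : (double) tp / (tp + fp);
    }

    // recall = TP / (TP + FN); guard against an empty denominator
    public static double recall(long tp, long fn) {
        return (tp + fn == 0) ? 0.0 : (double) tp / (tp + fn);
    }

    public static void main(String[] args) {
        // Illustrative counts, standing in for truePositive.value() etc.
        long tp = 90L, fn = 10L, fp = 30L, tn = 870L;
        System.out.println("precision = " + precision(tp, fp)); // 0.75
        System.out.println("recall    = " + recall(tp, fn));    // 0.9
    }
}
```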
Example 3: initializeVariables
import org.apache.spark.SparkContext; // import for the method's dependency

public void initializeVariables(SparkContext sc) {
    // note: this assignment to longAccumulator is not used within this method
    longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
    totalBenign = sc.accumulator(0L, new LongAccumulatorParam());
    totalMalicious = sc.accumulator(0L, new LongAccumulatorParam());
    flowCounterMalicious = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
    flowCounterBenign = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
}
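The UniqueFlowAccumulatorParam used throughout these examples accumulates a HashMap<BigInteger, Boolean> of flow IDs, so merging two partial maps is effectively a set union: duplicate IDs collapse, and the final map's size is the number of unique flows seen across all partitions. A Spark-free sketch of that union step — the class name is illustrative, and the real class would implement Spark's AccumulatorParam for the map type:

```java
import java.math.BigInteger;
import java.util.HashMap;

public class UniqueFlowSketch {

    // Merge two partial maps of seen flow IDs. putAll performs the union;
    // a duplicate key simply overwrites an equal entry, so nothing is lost.
    public static HashMap<BigInteger, Boolean> addInPlace(
            HashMap<BigInteger, Boolean> a, HashMap<BigInteger, Boolean> b) {
        a.putAll(b);
        return a;
    }

    public static void main(String[] args) {
        // Two partitions that both observed flow ID 2
        HashMap<BigInteger, Boolean> p1 = new HashMap<>();
        p1.put(BigInteger.valueOf(1), true);
        p1.put(BigInteger.valueOf(2), true);
        HashMap<BigInteger, Boolean> p2 = new HashMap<>();
        p2.put(BigInteger.valueOf(2), true);
        p2.put(BigInteger.valueOf(3), true);
        System.out.println(addInPlace(p1, p2).size()); // prints 3 (unique flows)
    }
}
```

Using a map keyed by flow ID rather than a plain counter is what lets the job count distinct flows instead of raw observations.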