This article collects typical usage examples of the Java method org.apache.spark.SparkContext.accumulator. If you are unsure what SparkContext.accumulator does or how to call it, the curated samples below may help; you can also explore other usage of the containing class, org.apache.spark.SparkContext.
The following shows 3 code examples of SparkContext.accumulator, sorted by popularity.
Example 1: initializeVariables
import org.apache.spark.SparkContext; // import the package/class the method depends on

public void initializeVariables(SparkContext sc) {
    for (int i = 0; i < this.numberOfCluster; i++) {
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        clusterCounterMaliciousList.add(longAccumulator);
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        clusterCounterBenignList.add(longAccumulator);
    }
    longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
    totalBenign = sc.accumulator(0L, new LongAccumulatorParam());
    totalMalicious = sc.accumulator(0L, new LongAccumulatorParam());
    totalNanoSeconds = sc.accumulator(0L, "totalNanoSeconds", new LongAccumulatorParam());
    flowCounterMalicious = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
    flowCounterBenign = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
}
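The LongAccumulatorParam and UniqueFlowAccumulatorParam classes used above are project-specific implementations of Spark's AccumulatorParam interface, and their source is not shown in this listing. As a rough sketch of what such implementations plausibly look like (the interface below is a local stand-in for Spark's AccumulatorParam, since Spark itself is not on the classpath here, and the merge semantics are assumptions):

```java
import java.math.BigInteger;
import java.util.HashMap;

public class AccumulatorParamSketch {
    // Stand-in mirroring the core of Spark's AccumulatorParam<T>
    // (the real interface also inherits addAccumulator from AccumulableParam).
    interface AccumulatorParam<T> {
        T zero(T initialValue);   // identity element for merging
        T addInPlace(T r1, T r2); // merge two partial results
    }

    // Plausible implementation of the LongAccumulatorParam used above:
    // a simple sum of long counters.
    static class LongAccumulatorParam implements AccumulatorParam<Long> {
        public Long zero(Long initialValue) { return 0L; }
        public Long addInPlace(Long r1, Long r2) { return r1 + r2; }
    }

    // Plausible implementation of UniqueFlowAccumulatorParam: merging two
    // maps of flow IDs keeps one entry per unique flow.
    static class UniqueFlowAccumulatorParam
            implements AccumulatorParam<HashMap<BigInteger, Boolean>> {
        public HashMap<BigInteger, Boolean> zero(HashMap<BigInteger, Boolean> initialValue) {
            return new HashMap<>();
        }
        public HashMap<BigInteger, Boolean> addInPlace(HashMap<BigInteger, Boolean> r1,
                                                       HashMap<BigInteger, Boolean> r2) {
            r1.putAll(r2); // duplicate flow IDs collapse into one key
            return r1;
        }
    }

    public static void main(String[] args) {
        LongAccumulatorParam longParam = new LongAccumulatorParam();
        System.out.println(longParam.addInPlace(3L, 4L)); // 7

        UniqueFlowAccumulatorParam flowParam = new UniqueFlowAccumulatorParam();
        HashMap<BigInteger, Boolean> a = new HashMap<>();
        a.put(BigInteger.valueOf(1), true);
        HashMap<BigInteger, Boolean> b = new HashMap<>();
        b.put(BigInteger.valueOf(1), true);
        b.put(BigInteger.valueOf(2), true);
        System.out.println(flowParam.addInPlace(a, b).size()); // 2 unique flows
    }
}
```

Spark calls addInPlace to combine per-task partial values on the driver, which is why using a HashMap keyed by flow ID yields a distinct count across all partitions.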
Example 2: initializeVariables
import org.apache.spark.SparkContext; // import the package/class the method depends on

public void initializeVariables(SparkContext sc) {
    for (int i = 0; i < this.numberOfLabels; i++) {
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        classificationCounterValidatedList.add(longAccumulator);
        longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
        classificationCounterOriginList.add(longAccumulator);
        // unique entries
        flowCounter = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
        uniqueOriginEntires.add(flowCounter);
        flowCounter = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
        uniqueValidatedEntires.add(flowCounter);
    }
    longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
    totalBenign = sc.accumulator(0L, new LongAccumulatorParam());
    totalMalicious = sc.accumulator(0L, new LongAccumulatorParam());
    truePositive = sc.accumulator(0L, new LongAccumulatorParam());
    falseNegative = sc.accumulator(0L, new LongAccumulatorParam());
    falsePositive = sc.accumulator(0L, new LongAccumulatorParam());
    trueNegative = sc.accumulator(0L, new LongAccumulatorParam());
    totalNanoSeconds = sc.accumulator(0L, "totalNanoSeconds", new LongAccumulatorParam());
    flowCounterMalicious = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
    flowCounterBenign = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
}
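Example 2 accumulates a full confusion matrix (truePositive, falseNegative, falsePositive, trueNegative). After the Spark job finishes, such counters are typically read back on the driver and combined into evaluation metrics. A minimal sketch of that post-processing step, using standard formulas; the method names and sample values here are ours, not from the original project:

```java
public class MetricsSketch {
    // Standard classification metrics derived from confusion-matrix counts,
    // guarded against division by zero.
    static double precision(long tp, long fp) {
        return tp + fp == 0 ? 0.0 : (double) tp / (tp + fp);
    }
    static double recall(long tp, long fn) {
        return tp + fn == 0 ? 0.0 : (double) tp / (tp + fn);
    }
    static double accuracy(long tp, long tn, long fp, long fn) {
        long total = tp + tn + fp + fn;
        return total == 0 ? 0.0 : (double) (tp + tn) / total;
    }

    public static void main(String[] args) {
        // Illustrative values, e.g. read back with accumulator.value()
        // on the driver after the job completes.
        long tp = 90, fn = 10, fp = 20, tn = 80;
        System.out.println(recall(tp, fn));           // 0.9
        System.out.println(accuracy(tp, tn, fp, fn)); // 0.85
        System.out.println(precision(tp, fp));        // ~0.818
    }
}
```

Reading accumulator values only on the driver after an action completes is the reliable pattern; values observed inside transformations may reflect partial or re-executed tasks.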
Example 3: initializeVariables
import org.apache.spark.SparkContext; // import the package/class the method depends on

public void initializeVariables(SparkContext sc) {
    longAccumulator = sc.accumulator(0L, new LongAccumulatorParam());
    totalBenign = sc.accumulator(0L, new LongAccumulatorParam());
    totalMalicious = sc.accumulator(0L, new LongAccumulatorParam());
    flowCounterMalicious = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
    flowCounterBenign = sc.accumulator(new HashMap<BigInteger, Boolean>(), new UniqueFlowAccumulatorParam());
}
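Note that SparkContext.accumulator with an AccumulatorParam, as used in all three examples, is deprecated since Spark 2.0 in favor of AccumulatorV2, registered via sc.register(...) (or the built-in sc.longAccumulator()). A sketch of the V2 style follows; the abstract class below is a local stand-in mirroring the shape of org.apache.spark.util.AccumulatorV2, since Spark itself is not on the classpath here:

```java
import java.math.BigInteger;
import java.util.HashMap;

public class AccumulatorV2Sketch {
    // Local stand-in listing the methods a real AccumulatorV2 subclass
    // must override.
    static abstract class AccumulatorV2<IN, OUT> {
        public abstract boolean isZero();
        public abstract AccumulatorV2<IN, OUT> copy();
        public abstract void reset();
        public abstract void add(IN v);
        public abstract void merge(AccumulatorV2<IN, OUT> other);
        public abstract OUT value();
    }

    // A unique-flow accumulator in the V2 style, analogous to the
    // UniqueFlowAccumulatorParam used in the examples above.
    static class UniqueFlowAccumulator
            extends AccumulatorV2<BigInteger, HashMap<BigInteger, Boolean>> {
        private final HashMap<BigInteger, Boolean> flows = new HashMap<>();

        public boolean isZero() { return flows.isEmpty(); }
        public UniqueFlowAccumulator copy() {
            UniqueFlowAccumulator c = new UniqueFlowAccumulator();
            c.flows.putAll(flows);
            return c;
        }
        public void reset() { flows.clear(); }
        public void add(BigInteger flowId) { flows.put(flowId, Boolean.TRUE); }
        public void merge(AccumulatorV2<BigInteger, HashMap<BigInteger, Boolean>> other) {
            flows.putAll(other.value());
        }
        public HashMap<BigInteger, Boolean> value() { return flows; }
    }

    public static void main(String[] args) {
        UniqueFlowAccumulator acc = new UniqueFlowAccumulator();
        acc.add(BigInteger.ONE);
        acc.add(BigInteger.ONE); // duplicate flow, counted once
        acc.add(BigInteger.TEN);
        System.out.println(acc.value().size()); // 2
        // With real Spark: sc.register(acc, "uniqueFlows");
    }
}
```

Against a real Spark 2.x+ cluster, the subclass would extend Spark's own AccumulatorV2 and be registered once on the driver before use in tasks.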