

Java Utils.log2 Method Code Examples

This article collects typical usage examples of the Java method weka.core.Utils.log2. If you are wondering what Utils.log2 does or how to use it, the curated code examples below may help. You can also explore further usage examples of the containing class, weka.core.Utils.


The following presents 13 code examples of the Utils.log2 method, sorted by popularity by default.

Example 1: priorEntropy

import weka.core.Utils; // import the package/class the method depends on
/**
 * Calculate the entropy of the prior distribution.
 * 
 * @return the entropy of the prior distribution
 * @throws Exception if the class is not nominal
 */
public final double priorEntropy() throws Exception {

  if (!m_ClassIsNominal) {
    throw new Exception("Can't compute entropy of class prior: "
      + "class numeric!");
  }

  if (m_NoPriors) {
    return Double.NaN;
  }

  double entropy = 0;
  for (int i = 0; i < m_NumClasses; i++) {
    entropy -=
      m_ClassPriors[i] / m_ClassPriorsSum
        * Utils.log2(m_ClassPriors[i] / m_ClassPriorsSum);
  }
  return entropy;
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 26, Source: Evaluation.java

Example 2: theoryDL

import weka.core.Utils; // import the package/class the method depends on
/**
 * The description length of the theory for a given rule. Computed as:<br>
 * 0.5 * [||k|| + S(t, k, k/t)]<br>
 * where k is the number of antecedents of the rule; t is the total number of
 * possible antecedents that could appear in a rule; ||k|| is the universal
 * prior for k, log2*(k); and S(t, k, p) = -k*log2(p) - (t-k)*log2(1-p) is the
 * subset encoding length.
 * <p>
 * 
 * For details, see Quinlan: "MDL and categorical theories (Continued)", ML95
 * 
 * @param index the index of the given rule (assuming correct)
 * @return the theory DL, weighted if weight != 1.0
 */
public double theoryDL(int index) {

  double k = m_Ruleset.get(index).size();

  if (k == 0) {
    return 0.0;
  }

  double tdl = Utils.log2(k);
  if (k > 1) {
    tdl += 2.0 * Utils.log2(tdl); // remainder of log2*(k), the universal prior
  }
  tdl += subsetDL(m_Total, k, k / m_Total);
  // System.out.println("!!!theory: "+MDL_THEORY_WEIGHT * REDUNDANCY_FACTOR *
  // tdl);
  return MDL_THEORY_WEIGHT * REDUNDANCY_FACTOR * tdl;
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 32, Source: RuleStats.java
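The log2-star universal prior in theoryDL is easier to follow outside of weka. The sketch below is a hypothetical standalone version: the class name, the 0.5 redundancy factor, and the Math.log-based log2 helper are assumptions for illustration, not weka's actual code.

```java
// Standalone sketch of the theory description length (hypothetical names).
public class TheoryDlSketch {

  static final double MDL_THEORY_WEIGHT = 1.0; // assumed weight
  static final double REDUNDANCY_FACTOR = 0.5; // the 0.5 factor in the formula

  // stand-in for weka's Utils.log2(double)
  static double log2(double x) {
    return Math.log(x) / Math.log(2);
  }

  // S(t, k, p) = -k*log2(p) - (t-k)*log2(1-p), the subset encoding length
  static double subsetDL(double t, double k, double p) {
    double rt = (p > 0.0) ? -k * log2(p) : 0.0;
    rt -= (t - k) * log2(1 - p);
    return rt;
  }

  // 0.5 * [||k|| + S(t, k, k/t)], with ||k|| approximated by
  // log2(k) + 2*log2(log2(k)) as in the example above
  static double theoryDL(double k, double total) {
    if (k == 0) {
      return 0.0;
    }
    double tdl = log2(k);
    if (k > 1) {
      tdl += 2.0 * log2(tdl);
    }
    tdl += subsetDL(total, k, k / total);
    return MDL_THEORY_WEIGHT * REDUNDANCY_FACTOR * tdl;
  }

  public static void main(String[] args) {
    // a rule with 3 antecedents out of 10 possible ones
    System.out.println(theoryDL(3.0, 10.0));
  }
}
```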

Example 3: dataDL

import weka.core.Utils; // import the package/class the method depends on
/**
 * The description length of data given the parameters of the data based on
 * the ruleset.
 * <p>
 * Details see Quinlan: "MDL and categorical theories (Continued)",ML95
 * <p>
 * 
 * @param expFPOverErr expected FP/(FP+FN)
 * @param cover the number of instances covered by the ruleset
 * @param uncover the number of instances not covered by the ruleset
 * @param fp the number of false positives
 * @param fn the number of false negatives
 * @return the description length
 */
public static double dataDL(double expFPOverErr, double cover,
  double uncover, double fp, double fn) {
  double totalBits = Utils.log2(cover + uncover + 1.0); // how many data?
  double coverBits, uncoverBits; // What's the error?
  double expErr; // Expected FP or FN

  if (Utils.gr(cover, uncover)) {
    expErr = expFPOverErr * (fp + fn);
    coverBits = subsetDL(cover, fp, expErr / cover);
    uncoverBits = Utils.gr(uncover, 0.0) ? subsetDL(uncover, fn, fn / uncover)
      : 0.0;
  } else {
    expErr = (1.0 - expFPOverErr) * (fp + fn);
    coverBits = Utils.gr(cover, 0.0) ? subsetDL(cover, fp, fp / cover) : 0.0;
    uncoverBits = subsetDL(uncover, fn, expErr / uncover);
  }

  /*
   * System.err.println("!!!cover: " + cover + "|uncover" + uncover +
   * "|coverBits: "+coverBits+"|uncBits: "+ uncoverBits+
   * "|FPRate: "+expFPOverErr + "|expErr: "+expErr+
   * "|fp: "+fp+"|fn: "+fn+"|total: "+totalBits);
   */
  return (totalBits + coverBits + uncoverBits);
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 40, Source: RuleStats.java

Example 4: buildClassifier

import weka.core.Utils; // import the package/class the method depends on
/**
 * Builds a classifier for a set of instances.
 * 
 * @param data the instances to train the classifier with
 * @throws Exception if something goes wrong
 */
@Override
public void buildClassifier(Instances data) throws Exception {

  // can classifier handle the data?
  getCapabilities().testWithFail(data);

  // remove instances with missing class
  data = new Instances(data);
  data.deleteWithMissingClass();

  m_bagger = new AttributeBagging();

  // RandomTree implements WeightedInstancesHandler, so we can
  // represent copies using weights to achieve speed-up.
  m_bagger.setRepresentCopiesUsingWeights(true);

  AttributeRandomTree rTree = new AttributeRandomTree();

  // set up the random tree options
  m_KValue = m_numFeatures;
  if (m_KValue < 1) {
    m_KValue = (int) Utils.log2(data.numAttributes() - 1) + 1;
  }
  rTree.setKValue(m_KValue);
  rTree.setMaxDepth(getMaxDepth());
  rTree.setDoNotCheckCapabilities(true);

  // set up the bagger and build the forest
  m_bagger.setBagSizePercent(m_BagSizePercent);
  m_bagger.setCalcOutOfBag(m_CalcOutOfBag);
  m_bagger.setClassifier(rTree);
  m_bagger.setSeed(m_randomSeed);
  m_bagger.setNumIterations(m_numTrees);
  m_bagger.setNumExecutionSlots(m_numExecutionSlots);
  m_bagger.buildClassifier(data);
}
 
Developer: seqcode, Project: seqcode-core, Lines: 43, Source: BaggedRandomForest.java

Example 5: updateStatsForConditionalDensityEstimator

import weka.core.Utils; // import the package/class the method depends on
/**
 * Updates stats for conditional density estimator based on current test
 * instance.
 * 
 * @param classifier the conditional density estimator
 * @param classMissing the instance for which density is to be computed,
 *          without a class value
 * @param classValue the class value of this instance
 * @throws Exception if density could not be computed successfully
 */
protected void updateStatsForConditionalDensityEstimator(
  ConditionalDensityEstimator classifier, Instance classMissing,
  double classValue) throws Exception {

  if (m_PriorEstimator == null) {
    setNumericPriorsFromBuffer();
  }
  // Note: Utils.log2 here is weka's static field holding ln(2), not the
  // log2(double) method; dividing by it converts natural-log densities to bits.
  m_SumSchemeEntropy -=
    classifier.logDensity(classMissing, classValue) * classMissing.weight()
      / Utils.log2;
  m_SumPriorEntropy -=
    m_PriorEstimator.logDensity(classValue) * classMissing.weight()
      / Utils.log2;
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 25, Source: Evaluation.java
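Unlike the other examples on this page, `Utils.log2` appears here as a field rather than a method call: weka's `Utils` also exposes a static constant `log2` holding ln(2), used as a divisor to convert natural-log densities into bits. A minimal sketch of that conversion, with an assumed local constant standing in for the weka field:

```java
// Hypothetical sketch of the nats-to-bits conversion used above.
public class NatsToBits {

  // plays the role of weka's Utils.log2 constant (ln 2)
  static final double LN2 = Math.log(2);

  // convert a log-density measured in nats (natural log) into bits
  static double nats2bits(double logDensityNats) {
    return logDensityNats / LN2;
  }

  public static void main(String[] args) {
    // a density of 0.25 carries -2 bits of log-likelihood
    System.out.println(nats2bits(Math.log(0.25)));
  }
}
```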

Example 6: getMetricRange

import weka.core.Utils; // import the package/class the method depends on
@Override
public double getMetricRange(Map<String, WeightMass> preDist) {

  int numClasses = preDist.size();
  if (numClasses < 2) {
    numClasses = 2;
  }

  return Utils.log2(numClasses);
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 11, Source: InfoGainSplitMetric.java

Example 7: buildClassifier

import weka.core.Utils; // import the package/class the method depends on
/**
 * Builds a classifier for a set of instances.
 * 
 * @param data the instances to train the classifier with
 * @throws Exception if something goes wrong
 */
@Override
public void buildClassifier(Instances data) throws Exception {

  // can classifier handle the data?
  getCapabilities().testWithFail(data);

  // remove instances with missing class
  data = new Instances(data);
  data.deleteWithMissingClass();

  m_bagger = new Bagging();

  // RandomTree implements WeightedInstancesHandler, so we can
  // represent copies using weights to achieve speed-up.
  m_bagger.setRepresentCopiesUsingWeights(true);

  RandomTree rTree = new RandomTree();

  // set up the random tree options
  m_KValue = m_numFeatures;
  if (m_KValue < 1) {
    m_KValue = (int) Utils.log2(data.numAttributes() - 1) + 1;
  }
  rTree.setKValue(m_KValue);
  rTree.setMaxDepth(getMaxDepth());
  rTree.setDoNotCheckCapabilities(true);
  rTree.setBreakTiesRandomly(getBreakTiesRandomly());

  // set up the bagger and build the forest
  m_bagger.setClassifier(rTree);
  m_bagger.setSeed(m_randomSeed);
  m_bagger.setNumIterations(m_numTrees);
  m_bagger.setCalcOutOfBag(!getDontCalculateOutOfBagError());
  m_bagger.setNumExecutionSlots(m_numExecutionSlots);
  m_bagger.buildClassifier(data);
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 43, Source: RandomForest.java
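When no feature count is set, the default K computed above is floor(log2(#predictors)) + 1, where the class attribute is excluded from the count. A hypothetical standalone helper (class and method names assumed) shows the values this produces:

```java
// Sketch of the default K-value rule used in the RandomForest example above.
public class DefaultKValue {

  // mirrors (int) Utils.log2(data.numAttributes() - 1) + 1
  static int defaultK(int numAttributes) {
    int numPredictors = numAttributes - 1; // exclude the class attribute
    return (int) (Math.log(numPredictors) / Math.log(2)) + 1;
  }

  public static void main(String[] args) {
    // a dataset with 100 predictor attributes plus the class
    System.out.println(defaultK(101));
  }
}
```

The +1 keeps K at least 1 even for very few predictors, while the logarithm keeps the per-split feature sample small for wide datasets.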

Example 8: KononenkosMDL

import weka.core.Utils; // import the package/class the method depends on
/**
 * Test using Kononenko's MDL criterion.
 * 
 * @param priorCounts the class counts prior to splitting
 * @param bestCounts the class counts for each subset of the best split
 * @param numInstances the number of instances
 * @param numCutPoints the number of candidate cut points
 * @return true if the split is acceptable
 */
private boolean KononenkosMDL(double[] priorCounts, double[][] bestCounts,
  double numInstances, int numCutPoints) {

  double distPrior, instPrior, distAfter = 0, sum, instAfter = 0;
  double before, after;
  int numClassesTotal;

  // Number of classes occurring in the set
  numClassesTotal = 0;
  for (double priorCount : priorCounts) {
    if (priorCount > 0) {
      numClassesTotal++;
    }
  }

  // Encode distribution prior to split
  distPrior = SpecialFunctions.log2Binomial(numInstances + numClassesTotal
    - 1, numClassesTotal - 1);

  // Encode instances prior to split.
  instPrior = SpecialFunctions.log2Multinomial(numInstances, priorCounts);

  before = instPrior + distPrior;

  // Encode distributions and instances after split.
  for (double[] bestCount : bestCounts) {
    sum = Utils.sum(bestCount);
    distAfter += SpecialFunctions.log2Binomial(sum + numClassesTotal - 1,
      numClassesTotal - 1);
    instAfter += SpecialFunctions.log2Multinomial(sum, bestCount);
  }

  // Coding cost after split
  after = Utils.log2(numCutPoints) + distAfter + instAfter;

  // Check if split is to be accepted
  return (before > after);
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 48, Source: Discretize.java

Example 9: codingCost

import weka.core.Utils; // import the package/class the method depends on
/**
 * Returns coding cost for split (used in rule learner).
 */
@Override
public final double codingCost() {

  return Utils.log2(m_index);
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 9, Source: C45Split.java

Example 10: FayyadAndIranisMDL

import weka.core.Utils; // import the package/class the method depends on
/**
 * Test using Fayyad and Irani's MDL criterion.
 * 
 * @param priorCounts the class counts prior to splitting
 * @param bestCounts the class counts for each subset of the best split
 * @param numInstances the number of instances
 * @param numCutPoints the number of candidate cut points
 * @return true if the split is acceptable
 */
private boolean FayyadAndIranisMDL(double[] priorCounts,
  double[][] bestCounts, double numInstances, int numCutPoints) {

  double priorEntropy, entropy, gain;
  double entropyLeft, entropyRight, delta;
  int numClassesTotal, numClassesRight, numClassesLeft;

  // Compute entropy before split.
  priorEntropy = ContingencyTables.entropy(priorCounts);

  // Compute entropy after split.
  entropy = ContingencyTables.entropyConditionedOnRows(bestCounts);

  // Compute information gain.
  gain = priorEntropy - entropy;

  // Number of classes occurring in the set
  numClassesTotal = 0;
  for (double priorCount : priorCounts) {
    if (priorCount > 0) {
      numClassesTotal++;
    }
  }

  // Number of classes occurring in the left subset
  numClassesLeft = 0;
  for (int i = 0; i < bestCounts[0].length; i++) {
    if (bestCounts[0][i] > 0) {
      numClassesLeft++;
    }
  }

  // Number of classes occurring in the right subset
  numClassesRight = 0;
  for (int i = 0; i < bestCounts[1].length; i++) {
    if (bestCounts[1][i] > 0) {
      numClassesRight++;
    }
  }

  // Entropy of the left and the right subsets
  entropyLeft = ContingencyTables.entropy(bestCounts[0]);
  entropyRight = ContingencyTables.entropy(bestCounts[1]);

  // Compute terms for MDL formula
  delta = Utils.log2(Math.pow(3, numClassesTotal) - 2)
    - ((numClassesTotal * priorEntropy) - (numClassesRight * entropyRight) - (numClassesLeft * entropyLeft));

  // Check if split is to be accepted
  return (gain > (Utils.log2(numCutPoints) + delta) / numInstances);
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 61, Source: Discretize.java
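The acceptance test above can be sketched end-to-end without weka. The class below is a hypothetical standalone version (all names assumed; entropy computed with a plain log2 helper) that applies the same gain-versus-delta comparison to a binary split:

```java
// Standalone sketch of Fayyad and Irani's MDL stopping criterion.
public class FayyadIraniSketch {

  static double log2(double x) {
    return Math.log(x) / Math.log(2);
  }

  // Shannon entropy of a class-count vector, in bits
  static double entropy(double[] counts) {
    double total = 0;
    for (double c : counts) total += c;
    double e = 0;
    for (double c : counts) {
      if (c > 0) e -= (c / total) * log2(c / total);
    }
    return e;
  }

  static int numClasses(double[] counts) {
    int n = 0;
    for (double c : counts) if (c > 0) n++;
    return n;
  }

  // accept the split iff gain > (log2(numCutPoints) + delta) / N
  static boolean accept(double[] left, double[] right, double numCutPoints) {
    double[] prior = new double[left.length];
    double n = 0;
    for (int i = 0; i < left.length; i++) {
      prior[i] = left[i] + right[i];
      n += prior[i];
    }
    double nl = 0, nr = 0;
    for (double c : left) nl += c;
    for (double c : right) nr += c;

    double priorEntropy = entropy(prior);
    double after = (nl / n) * entropy(left) + (nr / n) * entropy(right);
    double gain = priorEntropy - after;

    int c = numClasses(prior), cl = numClasses(left), cr = numClasses(right);
    double delta = log2(Math.pow(3, c) - 2)
      - (c * priorEntropy - cr * entropy(right) - cl * entropy(left));

    return gain > (log2(numCutPoints) + delta) / n;
  }

  public static void main(String[] args) {
    // a clean split: class 0 goes entirely left, class 1 entirely right
    System.out.println(accept(new double[]{50, 0}, new double[]{0, 50}, 1.0));
  }
}
```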

Example 11: calculateJMeasure

import weka.core.Utils; // import the package/class the method depends on
/**
 * Calculates the J-measure value of a rule.
 * 
 * @param xyConditionalProbability P(Y|X), the conditional probability of the head given the body
 * @param headProbability P(Y), the probability of the rule head
 * @param bodyProbability P(X), the probability of the rule body
 * @return the J-measure value
 */
public double calculateJMeasure(double xyConditionalProbability, double headProbability, double bodyProbability) {

    double jMeasure = xyConditionalProbability * Utils.log2(xyConditionalProbability / headProbability)
                        + (1.0d - xyConditionalProbability) * Utils.log2((1.0d - xyConditionalProbability) / (1.0d - headProbability));

    double weightedJMeasure = jMeasure * bodyProbability;

    return Double.isNaN(weightedJMeasure) ? 0.0d : weightedJMeasure;
}
 
Developer: thienle2401, Project: GeneralisedRulesAlgorithm, Lines: 18, Source: GRules.java
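The J-measure computation can be checked with plain arithmetic. The sketch below is a hypothetical standalone version (class and parameter names assumed) of the same formula, including the NaN guard for degenerate probabilities:

```java
// Standalone sketch of the J-measure: J(X -> Y) = P(X) * [ P(Y|X) log2(P(Y|X)/P(Y))
//                                                 + (1-P(Y|X)) log2((1-P(Y|X))/(1-P(Y))) ]
public class JMeasureSketch {

  static double log2(double x) {
    return Math.log(x) / Math.log(2);
  }

  static double jMeasure(double pYgivenX, double pY, double pX) {
    double j = pYgivenX * log2(pYgivenX / pY)
             + (1.0 - pYgivenX) * log2((1.0 - pYgivenX) / (1.0 - pY));
    double weighted = j * pX;
    // degenerate probabilities produce NaN; treat them as zero information
    return Double.isNaN(weighted) ? 0.0 : weighted;
  }

  public static void main(String[] args) {
    // a rule that lifts P(Y) from 0.5 to 0.9, with body probability 0.2
    System.out.println(jMeasure(0.9, 0.5, 0.2));
  }
}
```

A rule whose head probability matches the prior (P(Y|X) = P(Y)) scores exactly zero: it conveys no information.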

Example 12: process

import weka.core.Utils; // import the package/class the method depends on
@Override
protected Instances process(Instances instances) throws Exception {

  Instances result = getOutputFormat();

  this.calculateWordCounts(instances);

  String[] sortedWords = this.wordInfo.keySet().toArray(new String[0]);
  Arrays.sort(sortedWords);

  for (String word : sortedWords) {
    WordCount wordCount = this.wordInfo.get(word);

    if (wordCount.posCount + wordCount.negCount >= this.minFreq) {

      // semantic orientation: log-odds of the word appearing in
      // positive versus negative tweets
      double posProb = wordCount.posCount / posCount;
      double negProb = wordCount.negCount / negCount;
      double semanticOrientation = Utils.log2(posProb) - Utils.log2(negProb);

      double[] values = new double[result.numAttributes()];

      int wordNameIndex = result.attribute("WORD_NAME").index();
      values[wordNameIndex] = result.attribute(wordNameIndex).addStringValue(word);
      values[result.numAttributes() - 1] = semanticOrientation;

      Instance inst = new DenseInstance(1, values);
      inst.setDataset(result);
      result.add(inst);
    }
  }

  return result;
}
 
Developer: felipebravom, Project: AffectiveTweets, Lines: 50, Source: PMILexiconExpander.java

Example 13: subsetDL

import weka.core.Utils; // import the package/class the method depends on
/**
 * Subset description length: <br>
 * S(t,k,p) = -k*log2(p) - (t-k)*log2(1-p)
 * 
 * For details, see Quinlan: "MDL and categorical theories (Continued)", ML95
 * 
 * @param t the number of elements in a known set
 * @param k the number of elements in a subset
 * @param p the expected proportion of subset known by recipient
 * @return the subset description length
 */
public static double subsetDL(double t, double k, double p) {
  double rt = Utils.gr(p, 0.0) ? (-k * Utils.log2(p)) : 0.0;
  rt -= (t - k) * Utils.log2(1 - p);
  return rt;
}
 
Developer: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 17, Source: RuleStats.java
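A quick sanity check of S(t, k, p): at p = 0.5 every one of the t elements costs exactly one bit, so S(t, k, 0.5) = t regardless of k. A hypothetical standalone version (names and the plain log2 helper are assumptions):

```java
// Sanity-check sketch of the subset description length S(t, k, p).
public class SubsetDlSketch {

  static double log2(double x) {
    return Math.log(x) / Math.log(2);
  }

  // S(t, k, p) = -k*log2(p) - (t-k)*log2(1-p), with the p > 0 guard
  static double subsetDL(double t, double k, double p) {
    double rt = (p > 0.0) ? -k * log2(p) : 0.0;
    rt -= (t - k) * log2(1 - p);
    return rt;
  }

  public static void main(String[] args) {
    // p = 0.5: one bit per element, so S(10, 4, 0.5) = 10
    System.out.println(subsetDL(10.0, 4.0, 0.5));
  }
}
```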


Note: The weka.core.Utils.log2 method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors. Refer to each project's license before redistributing or using the code; do not republish without permission.