

Java Evaluation Class Code Examples

This article collects typical usage examples of the Java class weka.classifiers.Evaluation. If you are wondering what the Evaluation class does, how to use it, or are looking for working examples, the curated class examples below may help.


The Evaluation class belongs to the weka.classifiers package. Below are 15 code examples of the Evaluation class, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
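As a quick orientation before the examples: an Evaluation object accumulates actual-vs-predicted outcomes and reports statistics such as pctCorrect() and incorrect(). The minimal sketch below is plain Java with no Weka dependency; the class and method names are illustrative, not part of the Weka API. It only shows the arithmetic behind those two numbers.

```java
// Illustrative only: mimics the arithmetic behind Evaluation.pctCorrect()
// and Evaluation.incorrect() for a batch of nominal predictions.
public class EvalSketch {

    /** Number of predictions that disagree with the actual label. */
    static int incorrect(int[] actual, int[] predicted) {
        int wrong = 0;
        for (int i = 0; i < actual.length; i++) {
            if (actual[i] != predicted[i]) {
                wrong++;
            }
        }
        return wrong;
    }

    /** Percentage of correct predictions, as reported in the summary string. */
    static double pctCorrect(int[] actual, int[] predicted) {
        return 100.0 * (actual.length - incorrect(actual, predicted)) / actual.length;
    }

    public static void main(String[] args) {
        int[] actual    = {0, 1, 1, 0, 1};
        int[] predicted = {0, 1, 0, 0, 1};
        System.out.println(incorrect(actual, predicted));  // 1
        System.out.println(pctCorrect(actual, predicted)); // 80.0
    }
}
```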

Example 1: useClassifier

import weka.classifiers.Evaluation; // import the required package/class
/**
 * Uses the meta-classifier.
 */
protected static void useClassifier(Instances data) throws Exception {
    System.out.println("\n1. Meta-classifier");
    AttributeSelectedClassifier classifier = new AttributeSelectedClassifier();
    CfsSubsetEval eval = new CfsSubsetEval();
    // GreedyStepwise search = new GreedyStepwise(); // alternative search strategy
    GeneticSearch search = new GeneticSearch();
    RandomForest base = new RandomForest();
    classifier.setClassifier(base);
    System.out.println("Set the classifier : " + base.toString());
    classifier.setEvaluator(eval);
    System.out.println("Set the evaluator : " + eval.toString());
    classifier.setSearch(search); // wire up the search strategy before printing it
    System.out.println("Set the search : " + search.toString());
    Evaluation evaluation = new Evaluation(data);
    evaluation.crossValidateModel(classifier, data, 10, new Random(1));
    System.out.println(evaluation.toSummaryString());
}
 
Author: ajaybhat, Project: Essay-Grading-System, Lines: 22, Source: AttributeSelectionRunner.java

Example 2: performTestSetEvaluation

import weka.classifiers.Evaluation; // import the required package/class
/**
 * Splits the dataset into a training set and a test set according to the given percentage.
 * <br/>Then builds the classifiers on the training set and applies them to predict the test set.
 * @param dataset the dataset to be divided
 * @param percentageSplit the percentage of instances used for training
 * @return an array of Evaluation objects with the results
 * @throws Exception
 */
public Evaluation[] performTestSetEvaluation(Instances dataset, int percentageSplit) throws Exception {
	// use float arithmetic so the result is actually rounded, not truncated
	int trainSetSize = Math.round(dataset.numInstances() * percentageSplit / 100f);
	int testSetSize = dataset.numInstances() - trainSetSize;

	dataset = randomizeSet(dataset);
	trainingSet = new Instances(dataset, 0, trainSetSize);
	testingSet = new Instances(dataset, trainSetSize, testSetSize);

	for (int i = 0; i < cls.length; i++) {
		cls[i].buildClassifier(trainingSet);
		eval[i] = new Evaluation(trainingSet);
		eval[i].evaluateModel(cls[i], testingSet);
	}

	return eval;
}
 
Author: a-n-d-r-e-i, Project: seagull, Lines: 28, Source: Classification.java
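The train/test sizes in this example follow a simple percentage computation. The helper below is illustrative only (not part of the project above) and isolates that arithmetic; note that `(n * pct) / 100` in pure int arithmetic truncates, so the division must be done in floating point before `Math.round` has any effect.

```java
// Illustrative helper: the train/test split sizing used by a
// percentage-split evaluation, with explicit float rounding.
public class SplitSizes {

    /** Number of instances used for training. */
    static int trainSize(int numInstances, int percentageSplit) {
        return Math.round(numInstances * percentageSplit / 100f);
    }

    /** Remaining instances are held out for testing. */
    static int testSize(int numInstances, int percentageSplit) {
        return numInstances - trainSize(numInstances, percentageSplit);
    }

    public static void main(String[] args) {
        // 66% of 101 instances: 66.66 rounds up to 67 train, 34 test
        System.out.println(trainSize(101, 66)); // 67
        System.out.println(testSize(101, 66));  // 34
    }
}
```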

Example 3: crossValidate

import weka.classifiers.Evaluation; // import the required package/class
/**
 * Utility method for fast 5-fold cross validation of a naive bayes
 * model
 *
 * @param fullModel a <code>NaiveBayesUpdateable</code> value
 * @param trainingSet an <code>Instances</code> value
 * @param r a <code>Random</code> value
 * @return a <code>double</code> value
 * @exception Exception if an error occurs
 */
public static double crossValidate(NaiveBayesUpdateable fullModel,
                                   Instances trainingSet,
                                   Random r) throws Exception {
  // make some copies for fast evaluation of 5-fold xval
  Classifier[] copies = AbstractClassifier.makeCopies(fullModel, 5);
  Evaluation eval = new Evaluation(trainingSet);
  // make some splits
  for (int j = 0; j < 5; j++) {
    Instances test = trainingSet.testCV(5, j);
    // unlearn these test instances
    for (int k = 0; k < test.numInstances(); k++) {
      test.instance(k).setWeight(-test.instance(k).weight());
      ((NaiveBayesUpdateable) copies[j]).updateClassifier(test.instance(k));
      // reset the weight back to its original value
      test.instance(k).setWeight(-test.instance(k).weight());
    }
    eval.evaluateModel(copies[j], test);
  }
  return eval.incorrect();
}
 
Author: dsibournemouth, Project: autoweka, Lines: 31, Source: NBTreeNoSplit.java
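The loop above relies on trainingSet.testCV(5, j) handing back disjoint folds that together cover the whole dataset. The sketch below is plain Java (no Weka dependency) and assumes the common fold-sizing convention in which the first numInstances % numFolds folds each receive one extra instance:

```java
// Illustrative only: fold sizes under the convention that the first
// (n % k) folds each get one extra instance, so all k folds together
// cover exactly n instances.
public class FoldSizes {

    static int foldSize(int numInstances, int numFolds, int fold) {
        int size = numInstances / numFolds;
        if (fold < numInstances % numFolds) {
            size++; // the first (n % k) folds absorb the remainder
        }
        return size;
    }

    public static void main(String[] args) {
        // 23 instances over 5 folds: sizes 5, 5, 5, 4, 4
        int total = 0;
        for (int j = 0; j < 5; j++) {
            total += foldSize(23, 5, j);
        }
        System.out.println(total); // 23
    }
}
```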

Example 4: trainRandomForest

import weka.classifiers.Evaluation; // import the required package/class
public static void trainRandomForest(final Instances trainingSet) throws Exception {
    // Create a classifier
    final RandomForest tree = new RandomForest();
    tree.buildClassifier(trainingSet);

    // Test the model (on the training data; cross-validation is left commented out)
    final Evaluation eval = new Evaluation(trainingSet);
//    eval.crossValidateModel(tree, trainingSet, 10, new Random(1));
    eval.evaluateModel(tree, trainingSet);

    // Print the result à la Weka explorer:
    logger.info(eval.toSummaryString());
    logger.info(eval.toMatrixString());
    logger.info(tree.toString());
}
 
Author: cobr123, Project: VirtaMarketAnalyzer, Lines: 16, Source: RetailSalePrediction.java

Example 5: modelErrors

import weka.classifiers.Evaluation; // import the required package/class
/**
 * Updates the numIncorrectModel field for all nodes. This is needed for calculating the alpha-values.
 */
public void modelErrors() throws Exception {

  Evaluation eval = new Evaluation(m_train);

  if (!m_isLeaf) {
    m_isLeaf = true;
    eval.evaluateModel(this, m_train);
    m_isLeaf = false;
    m_numIncorrectModel = eval.incorrect();
    for (int i = 0; i < m_sons.length; i++) {
      m_sons[i].modelErrors();
    }
  } else {
    eval.evaluateModel(this, m_train);
    m_numIncorrectModel = eval.incorrect();
  }
}
 
Author: dsibournemouth, Project: autoweka, Lines: 19, Source: LMTNode.java

Example 6: crossValidate

import weka.classifiers.Evaluation; // import the required package/class
/**
 * Utility method for fast 5-fold cross validation of a naive bayes
 * model
 *
 * @param fullModel a <code>NaiveBayesUpdateable</code> value
 * @param trainingSet an <code>Instances</code> value
 * @param r a <code>Random</code> value
 * @return a <code>double</code> value
 * @exception Exception if an error occurs
 */
public static double crossValidate(NaiveBayesUpdateable fullModel,
                                   Instances trainingSet,
                                   Random r) throws Exception {
  // make some copies for fast evaluation of 5-fold xval
  Classifier[] copies = Classifier.makeCopies(fullModel, 5);
  Evaluation eval = new Evaluation(trainingSet);
  // make some splits
  for (int j = 0; j < 5; j++) {
    Instances test = trainingSet.testCV(5, j);
    // unlearn these test instances
    for (int k = 0; k < test.numInstances(); k++) {
      test.instance(k).setWeight(-test.instance(k).weight());
      ((NaiveBayesUpdateable) copies[j]).updateClassifier(test.instance(k));
      // reset the weight back to its original value
      test.instance(k).setWeight(-test.instance(k).weight());
    }
    eval.evaluateModel(copies[j], test);
  }
  return eval.incorrect();
}
 
Author: williamClanton, Project: jbossBA, Lines: 31, Source: NBTreeNoSplit.java

Example 7: Classification

import weka.classifiers.Evaluation; // import the required package/class
public Classification(ArrayList<ClassifierType> cType) {

	cls = new Classifier[cType.size()];
	eval = new Evaluation[cType.size()];

	for (int i = 0; i < cType.size(); i++) {
		switch (cType.get(i)) {
		// TODO Will we use J48 or ID3 implementation of decision trees?
		case J48:
			cls[i] = new J48();
			break;
		case NAIVE_BAYES:
			// If bType == Incremental then cls = new UpdateableNaiveBayes(); else
			cls[i] = new NaiveBayes();
			break;
		case IBK:
			cls[i] = new IBk();
			break;
		case COSINE:
			cls[i] = useCosine();
			break;
			// TODO Add other cases: Decision Rule, KNN and so on.
		}
	}
}
 
Author: a-n-d-r-e-i, Project: seagull, Lines: 25, Source: Classification.java

Example 8: Main

import weka.classifiers.Evaluation; // import the required package/class
public Main() {
    try {
        BufferedReader datafile;
        datafile = readDataFile("camping.txt");
        Instances data = new Instances(datafile);
        data.setClassIndex(data.numAttributes() - 1);

        Instances trainingData = new Instances(data, 0, 14);
        Instances testingData = new Instances(data, 14, 5);
        Evaluation evaluation = new Evaluation(trainingData);

        SMO smo = new SMO();
        smo.buildClassifier(trainingData); // train only on the training split, not the full dataset

        evaluation.evaluateModel(smo, testingData);
        System.out.println(evaluation.toSummaryString());

        // Test instance 
        Instance instance = new DenseInstance(3);
        instance.setValue(data.attribute("age"), 78);
        instance.setValue(data.attribute("income"), 125700);
        instance.setValue(data.attribute("camps"), 1);            
        instance.setDataset(data);
        System.out.println("The instance: " + instance);
        System.out.println(smo.classifyInstance(instance));
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
 
Author: PacktPublishing, Project: Machine-Learning-End-to-Endguide-for-Java-developers, Lines: 30, Source: Main-SVG.java

Example 9: writeCrossValidationResults

import weka.classifiers.Evaluation; // import the required package/class
@TimeThis(task="write-results", category=TimerCategory.EXPORT)
protected void writeCrossValidationResults(ProcessingContext<Corpus> ctx, TargetStream evaluationFile, Evaluation evaluation, String[] classes) throws Exception {
	Logger logger = getLogger(ctx);
	logger.info("writing test results into " + evaluationFile.getName());
	try (PrintStream out = evaluationFile.getPrintStream()) {
		for (int i = 0; i < classes.length; ++i) {
			out.printf("Results for class %d (%s):\n", i, classes[i]);
			out.printf("  True positives : %8.0f\n", evaluation.numTruePositives(i));
			out.printf("  False positives: %8.0f\n", evaluation.numFalsePositives(i));
			out.printf("  True negatives : %8.0f\n", evaluation.numTrueNegatives(i));
			out.printf("  False negatives: %8.0f\n", evaluation.numFalseNegatives(i));
			out.printf("  Recall:    %6.4f\n", evaluation.recall(i));
			out.printf("  Precision: %6.4f\n", evaluation.precision(i));
			out.printf("  F-Measure: %6.4f\n", evaluation.fMeasure(i));
			out.println();
		}
		out.println(evaluation.toMatrixString("Confusion matrix:"));
	}
}
 
Author: Bibliome, Project: alvisnlp, Lines: 20, Source: WekaTrain.java

Example 10: evaluateResults

import weka.classifiers.Evaluation; // import the required package/class
public static void evaluateResults(Evaluation evaluation) {
    for (Prediction p : evaluation.predictions()) {
        System.out.println(p.actual() + " " + p.predicted());
    }
    System.out.println(evaluation.toSummaryString("\nResults\n======\n", true));
}
 
Author: gizemsogancioglu, Project: biosses, Lines: 10, Source: LinearRegressionMethod.java

Example 11: evaluate

import weka.classifiers.Evaluation; // import the required package/class
public static void evaluate(Classifier clf, Instances data, double minPerformance)
    throws Exception {
  Instances[] split = TestUtil.splitTrainTest(data);

  Instances train = split[0];
  Instances test = split[1];

  clf.buildClassifier(train);
  Evaluation trainEval = new Evaluation(train);
  trainEval.evaluateModel(clf, train);

  Evaluation testEval = new Evaluation(train);
  testEval.evaluateModel(clf, test);

  final double testPctCorrect = testEval.pctCorrect();
  final double trainPctCorrect = trainEval.pctCorrect();

  log.info("Train: {}, Test: {}", trainPctCorrect, testPctCorrect);
  boolean success =
      testPctCorrect > minPerformance && trainPctCorrect > minPerformance;
  Assert.assertTrue(success);
}
 
Author: Waikato, Project: wekaDeeplearning4j, Lines: 23, Source: StabilityTest.java

Example 12: holdout

import weka.classifiers.Evaluation; // import the required package/class
/**
 * Perform simple holdout with a given percentage
 *
 * @param clf Classifier
 * @param data Full dataset
 * @param p Split percentage
 * @throws Exception
 */
public static void holdout(Classifier clf, Instances data, double p) throws Exception {
  Instances[] split = splitTrainTest(data, p);

  Instances train = split[0];
  Instances test = split[1];

  clf.buildClassifier(train);
  Evaluation trainEval = new Evaluation(train);
  trainEval.evaluateModel(clf, train);
  logger.info("Weka Train Evaluation:");
  logger.info(trainEval.toSummaryString());
  if (!data.classAttribute().isNumeric()) {
    logger.info(trainEval.toMatrixString());
  }

  Evaluation testEval = new Evaluation(train);
  logger.info("Weka Test Evaluation:");
  testEval.evaluateModel(clf, test);
  logger.info(testEval.toSummaryString());
  if (!data.classAttribute().isNumeric()) {
    logger.info(testEval.toMatrixString());
  }
}
 
Author: Waikato, Project: wekaDeeplearning4j, Lines: 32, Source: TestUtil.java

Example 13: getErrorPercent

import weka.classifiers.Evaluation; // import the required package/class
@Override
public double getErrorPercent() {
    this.splitInstances();

    try {
        this.getClassifier().buildClassifier(getTrainInstances());

        Evaluation eval = new Evaluation(getTestInstances());
        eval.evaluateModel(getClassifier(), getTestInstances());

        return eval.pctIncorrect();

    } catch (Exception e) {
        e.printStackTrace();
        return -1;
    }
}
 
Author: garciparedes, Project: java-examples, Lines: 18, Source: AbstractSplitEstimator.java

Example 14: trainRandomCommittee

import weka.classifiers.Evaluation; // import the required package/class
public static void trainRandomCommittee(final Instances trainingSet) throws Exception {
    logger.info("Create a classifier");
    final RandomTree classifier = new RandomTree();
    classifier.setKValue(0);
    classifier.setMaxDepth(0);
    classifier.setMinNum(0.001);
    classifier.setAllowUnclassifiedInstances(false);
    classifier.setNumFolds(0);

    final RandomCommittee tree = new RandomCommittee();
    tree.setClassifier(classifier);
    tree.setNumIterations(10);
    tree.buildClassifier(trainingSet);

    logger.info("Test the model");
    final Evaluation eval = new Evaluation(trainingSet);
//    eval.crossValidateModel(tree, trainingSet, 10, new Random(1));
    eval.evaluateModel(tree, trainingSet);

    // Print the result à la Weka explorer:
    logger.info(eval.toSummaryString());
    logger.info(tree.toString());
    logger.info(eval.toMatrixString());
    logger.info(eval.toClassDetailsString());
    logger.info(eval.toCumulativeMarginDistributionString());

//    logger.info("coefficients");
//    for (int i = 0; i < tree.coefficients().length; ++i) {
//        logger.info("{} | {}", trainingSet.attribute(i).name(), tree.coefficients()[i]);
//    }

//    try {
//        final File file = new File(GitHubPublisher.localPath + RetailSalePrediction.predict_retail_sales + File.separator + "prediction_set_script.js");
//        FileUtils.writeStringToFile(file, ClassifierToJs.compress(ClassifierToJs.toSource(tree, "predictCommonBySet")), "UTF-8");
//    } catch (final Exception e) {
//        logger.error(e.getLocalizedMessage(), e);
//    }
}
 
Author: cobr123, Project: VirtaMarketAnalyzer, Lines: 39, Source: RetailSalePrediction.java

Example 15: trainDecisionTable

import weka.classifiers.Evaluation; // import the required package/class
public static void trainDecisionTable(final Instances trainingSet) throws Exception {
    // Create a classifier
    final DecisionTable tree = new DecisionTable();
    tree.buildClassifier(trainingSet);

    // Test the model
    final Evaluation eval = new Evaluation(trainingSet);
//    eval.crossValidateModel(tree, trainingSet, 10, new Random(1));
    eval.evaluateModel(tree, trainingSet);

    // Print the result à la Weka explorer:
    logger.info(eval.toSummaryString());
    logger.info(tree.toString());

//    try {
//        final File file = new File(GitHubPublisher.localPath + RetailSalePrediction.predict_retail_sales + File.separator + "prediction_set_script.js");
//        FileUtils.writeStringToFile(file, ClassifierToJs.compress(ClassifierToJs.toSource(tree, "predictCommonBySet")), "UTF-8");
//    } catch (final Exception e) {
//        logger.error(e.getLocalizedMessage(), e);
//    }
}
 
Author: cobr123, Project: VirtaMarketAnalyzer, Lines: 22, Source: RetailSalePrediction.java


Note: The weka.classifiers.Evaluation class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please consult each project's license before distributing or using the code, and do not republish without permission.