

Java Classifier Class Code Examples

This article collects typical usage examples of the Java class weka.classifiers.Classifier. If you are wondering what the Classifier class does, how to use it, or where to find examples of it, the curated code samples below should help.


The Classifier class belongs to the weka.classifiers package. The 15 code examples below are sorted by popularity by default.

Example 1: WekaMatchingRule

import weka.classifiers.Classifier; // import the required class
/**
 * Create a MatchingRule that can be trained with the Weka library for
 * identity resolution.
 * 
 * @param finalThreshold
 *            the confidence level that the classifier must exceed to
 *            classify a record as a match.
 * 
 * @param classifierName
 *            the name of a classifier from the Weka library.
 * 
 * @param parameters
 *            the parameters used to tune the classifier.
 */
public WekaMatchingRule(double finalThreshold, String classifierName, String parameters[]) {
	super(finalThreshold);

	this.parameters = parameters;

	// create classifier
	try {
		this.classifier = (Classifier) Utils.forName(Classifier.class, classifierName, parameters);
	} catch (Exception e) {
		e.printStackTrace();
	}
	// create list for comparators
	this.comparators = new LinkedList<>();
}
 
Author: olehmberg, Project: winter, Lines: 30, Source: WekaMatchingRule.java
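
Example 1's call to `Utils.forName` instantiates a classifier from its fully qualified class name and checks the result against the expected supertype. When Weka is not on the classpath, the underlying pattern can be sketched with plain Java reflection; `UpperCaser` below is a hypothetical stand-in for a classifier class:

```java
import java.lang.reflect.Constructor;

public class ForNameSketch {
    // Hypothetical stand-in for a classifier: any class with a no-arg constructor.
    public static class UpperCaser {
        public String apply(String s) { return s.toUpperCase(); }
    }

    // Minimal analogue of weka.core.Utils.forName: instantiate a class by name
    // and verify it is assignable to the expected supertype.
    public static <T> T forName(Class<T> superType, String className) throws Exception {
        Class<?> c = Class.forName(className);
        if (!superType.isAssignableFrom(c)) {
            throw new IllegalArgumentException(className + " is not a " + superType.getName());
        }
        Constructor<?> ctor = c.getDeclaredConstructor();
        return superType.cast(ctor.newInstance());
    }

    public static void main(String[] args) throws Exception {
        String name = ForNameSketch.class.getName() + "$UpperCaser";
        UpperCaser u = forName(UpperCaser.class, name);
        System.out.println(u.apply("match")); // prints MATCH
    }
}
```

Weka's actual `Utils.forName` additionally treats the remaining strings as options for the instantiated object; that part is omitted in this sketch.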

Example 2: train

import weka.classifiers.Classifier; // import the required class
private void train(String name) {
	try {
		Classifier randomForest = new RandomForest();

		ConverterUtils.DataSource source = new ConverterUtils.DataSource(FOLDER + name);
		dataSet = source.getDataSet();

		dataSet.setClassIndex(dataSet.numAttributes() - 1);
		randomForest.buildClassifier(dataSet);

		classifier = randomForest;
	} catch (Exception e) {
		e.printStackTrace();
	}
}
 
Author: igr, Project: parlo, Lines: 16, Source: SentenceClassifier.java

Example 3: train

import weka.classifiers.Classifier; // import the required class
public void train() {
	try {
		Classifier randomForest = new RandomForest();

		ConverterUtils.DataSource source = new ConverterUtils.DataSource(FOLDER + "question-classifier.arff");
		dataSet = source.getDataSet();

		dataSet.setClassIndex(dataSet.numAttributes() - 1);
		randomForest.buildClassifier(dataSet);

		classifier = randomForest;
	} catch (Exception e) {
		e.printStackTrace();
	}
}
 
Author: igr, Project: parlo, Lines: 16, Source: QuestionClassifier.java

Example 4: runJ48

import weka.classifiers.Classifier; // import the required class
public static void runJ48(Instances trainSet, Instances testSet) {
    System.out.println("#####################  J48  #####################");

    Classifier model = null;
    Train train = new Train(trainSet);

    /*
     * TRAIN
     */
    try {
        model = train.getJ48Model();
    } catch (Exception e) {
        e.printStackTrace();
    }

    /*
     * TEST
     */
    Test test = new Test(trainSet, testSet);
    test.testModel(model);

    System.out.println("#####################  END OF J48  #####################");
    System.out.print("\n\n\n");
}
 
Author: GeorgiMateev, Project: twitter-user-gender-classification, Lines: 25, Source: Classification.java

Example 5: runNaiveBayes

import weka.classifiers.Classifier; // import the required class
public static void runNaiveBayes(Instances trainSet, Instances testSet) {
    System.out.println("#####################  NAIVE BAYES  #####################");

    Classifier model = null;
    Train train = new Train(trainSet);

    /*
     * TRAIN
     */
    try {
        model = train.getNaiveBayes();
    } catch (Exception e) {
        e.printStackTrace();
    }

    /*
     * TEST
     */
    Test test = new Test(trainSet, testSet);
    test.testModel(model);

    System.out.println("#####################  END OF NAIVE BAYES  #####################");
    System.out.print("\n\n\n");
}
 
Author: GeorgiMateev, Project: twitter-user-gender-classification, Lines: 25, Source: Classification.java

Example 6: runSMO

import weka.classifiers.Classifier; // import the required class
public static void runSMO(Instances trainSet, Instances testSet) {
    System.out.println("#####################  SMO (SVM)  #####################");

    Classifier model = null;
    Train train = new Train(trainSet);

    /*
     * TRAIN
     */
    try {
        model = train.getSMO();
    } catch (Exception e) {
        e.printStackTrace();
    }

    /*
     * TEST
     */
    Test test = new Test(trainSet, testSet);
    test.testModel(model);

    System.out.println("#####################  END OF SMO (SVM)  #####################");
    System.out.print("\n\n\n");
}
 
Author: GeorgiMateev, Project: twitter-user-gender-classification, Lines: 25, Source: Classification.java

Example 7: process

import weka.classifiers.Classifier; // import the required class
@Override
public void process(ProcessingContext<Corpus> ctx, Corpus corpus) throws ProcessingException {
	try {
		Classifier classifier = loadClassifier(ctx);
		ElementClassifierResolvedObjects resObj = getResolvedObjects();
		IdentifiedInstances<Element> devSet = resObj.getRelationDefinition().createInstances();
		predictExamples(ctx, classifier, devSet, corpus);
	}
	catch (IOException ioe) {
		rethrow(ioe);
	}
	catch (ClassNotFoundException cnfe) {
		rethrow(cnfe);
	}
	catch (Exception e) {
		rethrow(e);
	}
}
 
Author: Bibliome, Project: alvisnlp, Lines: 19, Source: WekaPredict.java

Example 8: predictExamples

import weka.classifiers.Classifier; // import the required class
@TimeThis(task="prediction")
protected void predictExamples(ProcessingContext<Corpus> ctx, Classifier classifier, IdentifiedInstances<Element> devSet, Corpus corpus) throws Exception {
	ElementClassifierResolvedObjects resObj = getResolvedObjects();
	RelationDefinition relationDefinition = resObj.getRelationDefinition();
	Evaluator examples = resObj.getExamples();
	String predictedClassFeatureKey = getPredictedClassFeatureKey();
	TargetStream evaluationFile = getEvaluationFile();
	boolean withId = evaluationFile != null;
	String[] classes = getClasses(devSet);
	getLogger(ctx).info("predicting class for each example");
	EvaluationContext evalCtx = new EvaluationContext(getLogger(ctx));
	for (Element example : Iterators.loop(getExamples(corpus, examples, evalCtx))) {
		Instance inst = relationDefinition.addExample(devSet, evalCtx, example, withId, withId);
		double prediction = classifier.classifyInstance(inst);
		example.addFeature(predictedClassFeatureKey, classes[(int) prediction]);
		if (!withId)
			devSet.delete();
	}
}
 
Author: Bibliome, Project: alvisnlp, Lines: 20, Source: WekaPredict.java

Example 9: crossValidate

import weka.classifiers.Classifier; // import the required class
/**
 * Utility method for fast 5-fold cross validation of a naive bayes
 * model
 *
 * @param fullModel a <code>NaiveBayesUpdateable</code> value
 * @param trainingSet an <code>Instances</code> value
 * @param r a <code>Random</code> value
 * @return a <code>double</code> value
 * @exception Exception if an error occurs
 */
public static double crossValidate(NaiveBayesUpdateable fullModel,
    Instances trainingSet,
    Random r) throws Exception {
  // make some copies for fast evaluation of 5-fold xval
  Classifier[] copies = AbstractClassifier.makeCopies(fullModel, 5);
  Evaluation eval = new Evaluation(trainingSet);
  // make some splits
  for (int j = 0; j < 5; j++) {
    Instances test = trainingSet.testCV(5, j);
    // unlearn these test instances
    for (int k = 0; k < test.numInstances(); k++) {
      test.instance(k).setWeight(-test.instance(k).weight());
      ((NaiveBayesUpdateable) copies[j]).updateClassifier(test.instance(k));
      // reset the weight back to its original value
      test.instance(k).setWeight(-test.instance(k).weight());
    }
    eval.evaluateModel(copies[j], test);
  }
  return eval.incorrect();
}
 
Author: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 31, Source: NBTreeNoSplit.java
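
The loop in the crossValidate example above relies on `Instances.testCV(5, j)` to carve the training data into five test folds. A minimal stdlib sketch of that fold arithmetic, assuming Weka's convention of contiguous folds with the remainder spread over the first folds:

```java
import java.util.ArrayList;
import java.util.List;

public class FoldSplitSketch {
    // Rough analogue of Instances.testCV(numFolds, fold): return the indices
    // of the instances that land in test fold `fold` out of `numFolds`
    // contiguous folds. Folds 0..remainder-1 each get one extra instance.
    public static List<Integer> testFold(int numInstances, int numFolds, int fold) {
        int baseSize = numInstances / numFolds;
        int remainder = numInstances % numFolds;
        int start = fold * baseSize + Math.min(fold, remainder);
        int size = baseSize + (fold < remainder ? 1 : 0);
        List<Integer> idx = new ArrayList<>();
        for (int i = start; i < start + size; i++) {
            idx.add(i);
        }
        return idx;
    }

    public static void main(String[] args) {
        // 12 instances, 5 folds -> fold sizes 3, 3, 2, 2, 2
        for (int j = 0; j < 5; j++) {
            System.out.println("fold " + j + ": " + testFold(12, 5, j));
        }
    }
}
```

Note that the Weka method also works on a randomized copy of the data in practice (`trainingSet` is typically shuffled with the `Random` passed in), which this sketch does not model.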

Example 10: runIBk

import weka.classifiers.Classifier; // import the required class
public static void runIBk(Instances trainSet, Instances testSet) {
    System.out.println("#####################  IBk (kNN)  #####################");

    Classifier model = null;
    Train train = new Train(trainSet);

    /*
     * TRAIN
     */
    try {
        model = train.getIBk();
    } catch (Exception e) {
        e.printStackTrace();
    }

    /*
     * TEST
     */
    Test test = new Test(trainSet, testSet);
    test.testModel(model);

    System.out.println("#####################  END OF IBk (kNN)  #####################");
    System.out.print("\n\n\n");
}
 
Author: GeorgiMateev, Project: twitter-user-gender-classification, Lines: 25, Source: Classification.java

Example 11: printClassifications

import weka.classifiers.Classifier; // import the required class
/**
 * Prints the classifications to the buffer.
 * 
 * @param classifier the classifier to use for printing the classifications
 * @param testset the test instances
 * @throws Exception if check fails or error occurs during printing of
 *           classifications
 */
public void printClassifications(Classifier classifier, Instances testset)
  throws Exception {
  int i;

  if (classifier instanceof BatchPredictor
    && ((BatchPredictor) classifier).implementsMoreEfficientBatchPrediction()) {
    double[][] predictions =
      ((BatchPredictor) classifier).distributionsForInstances(testset);
    for (i = 0; i < testset.numInstances(); i++) {
      printClassification(predictions[i], testset.instance(i), i);
    }
  } else {
    for (i = 0; i < testset.numInstances(); i++) {
      doPrintClassification(classifier, testset.instance(i), i);
    }
  }
}
 
Author: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 26, Source: AbstractOutput.java

Example 12: preProcessInstance

import weka.classifiers.Classifier; // import the required class
/**
 * Preprocesses an input instance and its copy (that will get its class value
 * set to missing for prediction purposes). Basically this only does something
 * special in the case when the classifier is an InputMappedClassifier.
 * 
 * @param inst the original instance to predict
 * @param withMissing a copy of the instance to predict
 * @param classifier the classifier that will be used to make the prediction
 * @return the original instance unchanged or mapped (in the case of an
 *         InputMappedClassifier) and the withMissing copy with the class
 *         attribute set to missing value.
 * @throws Exception if a problem occurs.
 */
protected Instance preProcessInstance(Instance inst, Instance withMissing,
  Classifier classifier) throws Exception {

  if (classifier instanceof weka.classifiers.misc.InputMappedClassifier) {
    inst = (Instance) inst.copy();
    inst =
      ((weka.classifiers.misc.InputMappedClassifier) classifier)
        .constructMappedInstance(inst);
    int mappedClass =
      ((weka.classifiers.misc.InputMappedClassifier) classifier)
        .getMappedClassIndex();
    withMissing.setMissing(mappedClass);
  } else {
    withMissing.setMissing(withMissing.classIndex());
  }

  return inst;
}
 
Author: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 32, Source: AbstractOutput.java

Example 13: classifyInstance

import weka.classifiers.Classifier; // import the required class
/**
 * Classify an instance.
 *
 * @param inst the instance to predict
 * @return a prediction for the instance
 * @throws Exception if an error occurs
 */
public double classifyInstance(Instance inst) throws Exception {

  double prediction = m_zeroR.classifyInstance(inst);

  // default model?
  if (!m_SuitableData) {
    return prediction;
  }
  
  for (Classifier classifier : m_Classifiers) {
    double toAdd = classifier.classifyInstance(inst);
    if (Utils.isMissingValue(toAdd)) {
      throw new UnassignedClassException("AdditiveRegression: base learner predicted missing value.");
    }
    toAdd *= getShrinkage();
    prediction += toAdd;
  }

  return prediction;
}
 
Author: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 28, Source: AdditiveRegression.java
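
The loop in `classifyInstance` above is the standard additive-regression prediction rule: start from the ZeroR (baseline) prediction and add each base learner's output scaled by the shrinkage factor. A self-contained sketch of that rule, with toy lambdas standing in for trained base classifiers:

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class AdditivePredictionSketch {
    // Sketch of the prediction rule used by AdditiveRegression:
    //   prediction = zeroR(x) + shrinkage * sum_i f_i(x)
    public static double predict(double zeroRPrediction,
                                 List<DoubleUnaryOperator> baseLearners,
                                 double shrinkage, double x) {
        double prediction = zeroRPrediction;
        for (DoubleUnaryOperator f : baseLearners) {
            prediction += shrinkage * f.applyAsDouble(x);
        }
        return prediction;
    }

    public static void main(String[] args) {
        // Two toy base learners standing in for trained regressors.
        List<DoubleUnaryOperator> learners = List.of(x -> x + 1.0, x -> 2.0 * x);
        // zeroR = 10, shrinkage = 0.5, x = 2  ->  10 + 0.5 * (3 + 4) = 13.5
        System.out.println(predict(10.0, learners, 0.5, 2.0)); // prints 13.5
    }
}
```

The real method additionally guards against missing base predictions by throwing `UnassignedClassException`, as shown in the example above.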

Example 14: setExperiment

import weka.classifiers.Classifier; // import the required class
/**
 * Tells the panel to act on a new experiment.
 * 
 * @param exp a value of type 'Experiment'
 */
public void setExperiment(Experiment exp) {

  m_Exp = exp;
  m_AddBut.setEnabled(true);
  m_List.setModel(m_AlgorithmListModel);
  m_List.setCellRenderer(new ObjectCellRenderer());
  m_AlgorithmListModel.removeAllElements();
  if (m_Exp.getPropertyArray() instanceof Classifier[]) {
    Classifier[] algorithms = (Classifier[]) m_Exp.getPropertyArray();
    for (Classifier algorithm : algorithms) {
      m_AlgorithmListModel.addElement(algorithm);
    }
  }
  m_EditBut.setEnabled((m_AlgorithmListModel.size() > 0));
  m_DeleteBut.setEnabled((m_AlgorithmListModel.size() > 0));
  m_LoadOptionsBut.setEnabled((m_AlgorithmListModel.size() > 0));
  m_SaveOptionsBut.setEnabled((m_AlgorithmListModel.size() > 0));
  m_UpBut.setEnabled(JListHelper.canMoveUp(m_List));
  m_DownBut.setEnabled(JListHelper.canMoveDown(m_List));
}
 
Author: mydzigear, Project: repo.kmeanspp.silhouette_score, Lines: 26, Source: AlgorithmListPanel.java

Example 15: findBestFitModel

import weka.classifiers.Classifier; // import the required class
@Override
public RegressionModel findBestFitModel(List<RegressionModel> models, CombinerType combiner, Dataset evaluationSet) throws EngineException {
  Classifier[] classifiers = new Classifier[models.size()];
  int i=0;
  for(RegressionModel model : models)
  {
    classifiers[i++] = model.getTrainedClassifier();
  }
  
  Classifier bestFit = null;
  bestFit = combiner.getBestFitClassifier(classifiers, evaluationSet.getAsInstances(), evaluationSet.getOptions());
  log.info("Best fit model combination generated.. ");
  RegressionModel m = new RegressionModel();
  m.setTrainedClassifier(bestFit);
  return m;
}
 
Author: javanotes, Project: reactive-data, Lines: 17, Source: IncrementalClassifierBean.java


Note: the weka.classifiers.Classifier class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects; copyright remains with the original authors. Please consult each project's license before redistributing or using the code, and do not reproduce this article without permission.