

Java TransferFunctionType Class Code Examples

This article collects and summarizes typical usage examples of the Java class org.neuroph.util.TransferFunctionType. If you have been wondering what TransferFunctionType is for, how to use it, or where to find examples of it in real code, the curated class examples below may help.


The TransferFunctionType class belongs to the org.neuroph.util package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
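Before diving into the examples, it helps to know roughly what the enum constants select. The plain-Java sketch below is based on the standard textbook definitions of these activation functions, not on Neuroph's source; the xLow/xHigh/yLow/yHigh parameters are illustrative and mirror the transferFunction.* properties set in several examples further down:

```java
// Approximate formulas behind four common TransferFunctionType constants.
public class TransferFunctions {

    // SIGMOID: squashes any input into the open interval (0, 1)
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // TANH: squashes any input into the open interval (-1, 1)
    public static double tanh(double x) {
        return Math.tanh(x);
    }

    // STEP: hard threshold at zero (yLow below zero, yHigh at or above)
    public static double step(double x, double yLow, double yHigh) {
        return x >= 0 ? yHigh : yLow;
    }

    // RAMP: linear between (xLow, yLow) and (xHigh, yHigh), clamped outside
    public static double ramp(double x, double xLow, double xHigh,
                              double yLow, double yHigh) {
        if (x <= xLow) return yLow;
        if (x >= xHigh) return yHigh;
        return yLow + (x - xLow) * (yHigh - yLow) / (xHigh - xLow);
    }
}
```

LINEAR simply passes the weighted sum through (optionally scaled by a slope), which is why the Adaline example below uses it for its input layer.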

Example 1: AnimalNetwork

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Instantiates a new animal network.
 *
 * @param input the input
 * @param hidden the hidden
 * @param output the output
 */
public AnimalNetwork(int input,int hidden,int output) {
	super();
	System.out.println("network is created");
	initializeNeurons();
	initializeQuestions();
	animal_network = new MultiLayerPerceptron(TransferFunctionType.SIGMOID,Data.INPUTUNITS,Data.HIDDENUNITS,Data.OUTPUTUNITS);
	animal_network.setNetworkType(NeuralNetworkType.MULTI_LAYER_PERCEPTRON);
	animal_network.randomizeWeights();  //randomize weights 
	// set learning parameters on the momentum backpropagation rule itself;
	// configuring the default LMS rule first and then replacing it would discard those settings
	MomentumBackpropagation backpropagation = new MomentumBackpropagation();
	backpropagation.setMaxError(MAXERROR);           // 0-1
	backpropagation.setLearningRate(LEARNINGRATE);   // 0-1
	backpropagation.setMaxIterations(MAXITERATIONS); // maximum number of iterations
	backpropagation.setMomentum(0.7);                // momentum factor
	animal_network.setLearningRule(backpropagation);
}
 
Author: eldemcan, Project: 20q, Lines: 24, Source: AnimalNetwork.java
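Several examples on this page rely on MomentumBackpropagation. Its core idea is that each weight change includes a fraction (the momentum) of the previous change, which damps oscillation. A minimal plain-Java sketch of the update rule (the names here are illustrative, not Neuroph's internals):

```java
// Sketch of the momentum weight-update rule:
// delta_w(t) = learningRate * gradient + momentum * delta_w(t-1)
public class MomentumUpdate {
    private final double learningRate;
    private final double momentum;
    private double previousDelta = 0.0;

    public MomentumUpdate(double learningRate, double momentum) {
        this.learningRate = learningRate;
        this.momentum = momentum;
    }

    // Returns the weight change for this step and remembers it for the next.
    public double step(double gradient) {
        double delta = learningRate * gradient + momentum * previousDelta;
        previousDelta = delta;
        return delta;
    }
}
```

With a persistent gradient, successive deltas grow toward learningRate * gradient / (1 - momentum), which is what accelerates learning along consistent directions.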

Example 2: startLearning

import org.neuroph.util.TransferFunctionType; // import the required package/class
public void startLearning() {
    Thread t1 = new Thread(new Runnable() {
        public void run() {
            console.addLog("Loading test set");
            testSet = loader.loadDataSet(testSetPath);
            console.addLog("Test set loaded");

            console.addLog("Loading training set");
            trainingSet = loader.loadDataSet(trainingSetPath);
            console.addLog("Training set loaded. Input size: " + trainingSet.getInputSize() +
                    " Output size: " + trainingSet.getOutputSize());

            nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID,
                    trainingSet.getInputSize(), 86, 86, trainingSet.getOutputSize());

            MomentumBackpropagation backPropagation = new MomentumBackpropagation();
            backPropagation.setLearningRate(learningRate);
            backPropagation.setMomentum(momentum);

            LearningTestSetEvaluator evaluator =
                    new LearningTestSetEvaluator(nnetName, testSet, trainingSet, console);
            backPropagation.addListener(evaluator);
            backPropagation.addListener(new LearningEventListener() {
                @Override
                public void handleLearningEvent(LearningEvent event) {
                    if (event.getEventType() == LearningEvent.Type.LEARNING_STOPPED) {
                        listeners.forEach((listener) -> listener.learningStopped(LearningNetTask.this));
                    }
                }
            });
            nnet.setLearningRule(backPropagation);
            console.addLog("Started neural net learning with momentum: "
                    + momentum + ", learning rate: " + learningRate);
            nnet.learnInNewThread(trainingSet);
        }
    });
    t1.start();
}
 
Author: fgulan, Project: final-thesis, Lines: 39, Source: LearningNetTask.java

Example 3: learnNeuralNet

import org.neuroph.util.TransferFunctionType; // import the required package/class
private static void learnNeuralNet(DataSet trainingSet, DataSet testSet) {
    TestSetEvaluator testEvaluator = new TestSetEvaluator(NNET_NAME, testSet, trainingSet);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, INPUT_LAYER, 86, 86, OUTPUT_LAYER);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setLearningRate(LEARINING_RATE);
    bp.setMomentum(MOMENTUM);
    bp.addListener(testEvaluator);

    nnet.setLearningRule(bp);
    nnet.learn(trainingSet);
    nnet.save(NNET_NAME + "last");
}
 
Author: fgulan, Project: final-thesis, Lines: 14, Source: OneToOneHVTest.java

Example 4: learnNeuralNet

import org.neuroph.util.TransferFunctionType; // import the required package/class
private static void learnNeuralNet(DataSet trainingSet, DataSet testSet) {
    TestSetEvaluator testEvaluator = new TestSetEvaluator(NNET_NAME, testSet, trainingSet);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, INPUT_LAYER, 140, OUTPUT_LAYER);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setLearningRate(LEARINING_RATE);
    bp.setMomentum(MOMENTUM);
    bp.addListener(testEvaluator);

    nnet.setLearningRule(bp);
    nnet.learn(trainingSet);
    nnet.save(NNET_NAME + "last");
}
 
Author: fgulan, Project: final-thesis, Lines: 14, Source: OneToOneNonUniqueDiagonalTest.java

Example 5: learnNeuralNet

import org.neuroph.util.TransferFunctionType; // import the required package/class
private static void learnNeuralNet(DataSet trainingSet, DataSet testSet) {
    TestSetEvaluator testEvaluator = new TestSetEvaluator(NNET_NAME, testSet, trainingSet);
    MultiLayerPerceptron nnet = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, INPUT_LAYER, 76, 76, OUTPUT_LAYER);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setLearningRate(LEARINING_RATE);
    bp.setMomentum(MOMENTUM);
    bp.addListener(testEvaluator);

    nnet.setLearningRule(bp);
    nnet.learn(trainingSet);
    nnet.save(NNET_NAME + "last");
}
 
Author: fgulan, Project: final-thesis, Lines: 14, Source: OneToOneDiagonalTest.java

Example 6: main

import org.neuroph.util.TransferFunctionType; // import the required package/class
public static void main(String[] args) {

    // create training set (logical XOR function)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 2, 3, 1);
    myMlPerceptron.setLearningRule(new BackPropagation());

    // learn the training set
    myMlPerceptron.learn(trainingSet);

    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

    // save trained neural network
    myMlPerceptron.save("myMlPerceptron.nnet");

    // load saved neural network
    NeuralNetwork loadedMlPerceptron = NeuralNetwork.createFromFile("myMlPerceptron.nnet");

    // test loaded neural network
    System.out.println("Testing loaded neural network");
    testNeuralNetwork(loadedMlPerceptron, trainingSet);
}
 
Author: fgulan, Project: final-thesis, Lines: 31, Source: TestLearn.java
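Examples 6, 8, and 9 all train a small network on XOR. To see why a sigmoid hidden layer can represent XOR at all, here is a hand-wired 2-2-1 sketch with fixed weights chosen purely for illustration (a trained 2-3-1 network like the one above will end up with different values):

```java
// Hand-wired 2-2-1 sigmoid network computing XOR with fixed, illustrative
// weights: h1 approximates OR, h2 approximates AND, output is OR-and-not-AND.
public class XorByHand {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static double xor(double x1, double x2) {
        double h1 = sigmoid(20 * x1 + 20 * x2 - 10); // fires if x1 OR x2
        double h2 = sigmoid(20 * x1 + 20 * x2 - 30); // fires if x1 AND x2
        return sigmoid(20 * h1 - 20 * h2 - 10);      // OR but not AND
    }
}
```

The large weights just push the sigmoids close to their 0/1 saturation, so each neuron behaves almost like a logic gate.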

Example 7: MLPNetworkMaker

import org.neuroph.util.TransferFunctionType; // import the required package/class
public MLPNetworkMaker(String networkLabel, Dimension samplingDimension, ColorMode mode, List<String> outputNeuronLabels, List<Integer> neuronCounts, TransferFunctionType type, String saveLocation) {
    this.networkLabel = networkLabel;
    this.samplingDimension = samplingDimension;
    this.mode = mode;
    this.outputNeuronLabels = outputNeuronLabels;
    this.neuronCounts = neuronCounts;
    this.type = type;
    this.saveLocation = saveLocation;
}
 
Author: afsalashyana, Project: FakeImageDetection, Lines: 10, Source: MLPNetworkMaker.java

Example 8: run

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Runs this sample
 */
public void run() {
	
    // create training set (logical XOR function)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.TANH, 2, 3, 1);
    // enable batch if using MomentumBackpropagation
    if( myMlPerceptron.getLearningRule() instanceof MomentumBackpropagation ){
    	((MomentumBackpropagation)myMlPerceptron.getLearningRule()).setBatchMode(true);
    	((MomentumBackpropagation)myMlPerceptron.getLearningRule()).setMaxError(0.00001);
    }

    LearningRule learningRule = myMlPerceptron.getLearningRule();
    learningRule.addListener(this);
    
    // learn the training set
    System.out.println("Training neural network...");
    myMlPerceptron.learn(trainingSet);

    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

    // save trained neural network
    myMlPerceptron.save("myMlPerceptron.nnet");

    // load saved neural network
    NeuralNetwork loadedMlPerceptron = NeuralNetwork.load("myMlPerceptron.nnet");

    // test loaded neural network
    System.out.println("Testing loaded neural network");
    testNeuralNetwork(loadedMlPerceptron, trainingSet);
}
 
Author: East196, Project: maker, Lines: 42, Source: XorMultiLayerPerceptronSample.java

Example 9: main

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Runs this sample
 */
public static void main(String[] args) {
	
    // create training set (logical XOR function)
    DataSet trainingSet = new DataSet(2, 1);
    trainingSet.addRow(new DataSetRow(new double[]{0, 0}, new double[]{0}));
    trainingSet.addRow(new DataSetRow(new double[]{0, 1}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 0}, new double[]{1}));
    trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{0}));

    // create multi layer perceptron
    MultiLayerPerceptron myMlPerceptron = new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 2, 3, 1);
    // set ResilientPropagation learning rule
    myMlPerceptron.setLearningRule(new ResilientPropagation()); 
   
    // learn the training set
    System.out.println("Training neural network...");
    myMlPerceptron.learn(trainingSet);

    int iterations = ((SupervisedLearning)myMlPerceptron.getLearningRule()).getCurrentIteration();        
    System.out.println("Learned in "+iterations+" iterations");
    
    // test perceptron
    System.out.println("Testing trained neural network");
    testNeuralNetwork(myMlPerceptron, trainingSet);

}
 
Author: East196, Project: maker, Lines: 30, Source: XorResilientPropagationSample.java

Example 10: createNetwork

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Creates Competitive network architecture
 * 
 * @param inputNeuronsCount
 *            number of input neurons
 * @param outputNeuronsCount
 *            number of output neurons
 */
private void createNetwork(int inputNeuronsCount, int outputNeuronsCount) {
	// set network type
	this.setNetworkType(NeuralNetworkType.COMPETITIVE);

	// create input layer
	Layer inputLayer = LayerFactory.createLayer(inputNeuronsCount, new NeuronProperties());
	this.addLayer(inputLayer);

	// create properties for neurons in output layer
	NeuronProperties neuronProperties = new NeuronProperties();
	neuronProperties.setProperty("neuronType", CompetitiveNeuron.class);
	neuronProperties.setProperty("inputFunction", WeightedSum.class);
	neuronProperties.setProperty("transferFunction",TransferFunctionType.RAMP);

	// create full connectivity in competitive layer
	CompetitiveLayer competitiveLayer = new CompetitiveLayer(outputNeuronsCount, neuronProperties);

	// add competitive layer to network
	this.addLayer(competitiveLayer);

	double competitiveWeight = -(1 / (double) outputNeuronsCount);
	// create full connectivity within competitive layer
	ConnectionFactory.fullConnect(competitiveLayer, competitiveWeight, 1);

	// create full connectivity from input to competitive layer
	ConnectionFactory.fullConnect(inputLayer, competitiveLayer);

	// set input and output cells for this network
	NeuralNetworkFactory.setDefaultIO(this);

	this.setLearningRule(new CompetitiveLearning());
}
 
Author: fiidau, Project: Y-Haplogroup-Predictor, Lines: 43, Source: CompetitiveNetwork.java
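In the competitive layer above, each neuron laterally inhibits the others with weight -(1/outputNeuronsCount), and only the most active neuron wins. A plain-Java sketch of that winner-take-all step (illustrative only, not Neuroph's CompetitiveLayer implementation):

```java
// Sketch of winner-take-all competition: each output neuron computes a
// weighted sum, and only the neuron with the largest activation "fires".
public class WinnerTakeAll {
    // Returns the index of the winning neuron for the given activations.
    public static int winner(double[] activations) {
        int best = 0;
        for (int i = 1; i < activations.length; i++) {
            if (activations[i] > activations[best]) best = i;
        }
        return best;
    }

    // Lateral inhibition weight used above: -1/n for n competing neurons.
    public static double lateralWeight(int outputNeuronsCount) {
        return -(1.0 / outputNeuronsCount);
    }
}
```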

Example 11: createNetwork

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Creates adaline network architecture with specified number of input neurons
 * 
 * @param inputNeuronsCount
 *            number of neurons in input layer
 */
private void createNetwork(int inputNeuronsCount) {
	// set network type code
	this.setNetworkType(NeuralNetworkType.ADALINE);

	// create input layer neuron settings for this network
	NeuronProperties inNeuronProperties = new NeuronProperties();
	inNeuronProperties.setProperty("transferFunction", TransferFunctionType.LINEAR);

	// create input layer with specified number of neurons
	Layer inputLayer = LayerFactory.createLayer(inputNeuronsCount, inNeuronProperties);
	inputLayer.addNeuron(new BiasNeuron()); // add bias neuron (always outputs 1; acts as bias input for the output neuron)
	this.addLayer(inputLayer);

	// create output layer neuron settings for this network
	NeuronProperties outNeuronProperties = new NeuronProperties();
	outNeuronProperties.setProperty("transferFunction", TransferFunctionType.RAMP);
	outNeuronProperties.setProperty("transferFunction.slope", new Double(1));
	outNeuronProperties.setProperty("transferFunction.yHigh", new Double(1));
	outNeuronProperties.setProperty("transferFunction.xHigh", new Double(1));
	outNeuronProperties.setProperty("transferFunction.yLow", new Double(-1));
	outNeuronProperties.setProperty("transferFunction.xLow", new Double(-1));

	// create output layer (only one neuron)
	Layer outputLayer = LayerFactory.createLayer(1, outNeuronProperties);
	this.addLayer(outputLayer);

	// create full connectivity between input and output layer
	ConnectionFactory.fullConnect(inputLayer, outputLayer);

	// set input and output cells for network
	NeuralNetworkFactory.setDefaultIO(this);

	// set LMS learning rule for this network
	this.setLearningRule(new LMS());
}
 
Author: fiidau, Project: Y-Haplogroup-Predictor, Lines: 42, Source: Adaline.java
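The LMS rule set at the end of the Adaline constructor adjusts each weight in proportion to the output error times the input: w_i += learningRate * (desired - output) * x_i. A minimal plain-Java sketch (illustrative names, with the bias folded in as the last weight, mirroring the BiasNeuron above):

```java
// Minimal LMS (delta-rule) sketch as used by Adaline. The last weight is
// the bias weight, fed by a constant input of 1 (like the BiasNeuron above).
public class LmsSketch {
    // Linear output: weighted sum of inputs plus bias.
    public static double output(double[] w, double[] x) {
        double sum = w[w.length - 1]; // bias term
        for (int i = 0; i < x.length; i++) sum += w[i] * x[i];
        return sum;
    }

    // One LMS update: w_i += lr * (desired - output) * x_i.
    public static void train(double[] w, double[] x, double desired, double lr) {
        double error = desired - output(w, x);
        for (int i = 0; i < x.length; i++) w[i] += lr * error * x[i];
        w[w.length - 1] += lr * error; // bias input is always 1
    }
}
```

Repeating the update over a small training set drives the linear output toward the desired values, e.g. learning y = 2x from two points.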

Example 12: createNetwork

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
	 * Creates an instance of Unsupervised Hebbian net with specified number
	 * of neurons in input layer and output layer, and transfer function
	 * 
	 * @param inputNeuronsNum
	 *            number of neurons in input layer
	 * @param outputNeuronsNum
	 *            number of neurons in output layer
	 * @param transferFunctionType
	 *            transfer function type
	 */
	private void createNetwork(int inputNeuronsNum, int outputNeuronsNum,
		TransferFunctionType transferFunctionType) {

		// init neuron properties
		NeuronProperties neuronProperties = new NeuronProperties();
//		neuronProperties.setProperty("bias", new Double(-Math
//				.abs(Math.random() - 0.5))); // Hebbian network cannot work
		// without bias
		neuronProperties.setProperty("transferFunction", transferFunctionType);
		neuronProperties.setProperty("transferFunction.slope", new Double(1));

		// set network type code
		this.setNetworkType(NeuralNetworkType.UNSUPERVISED_HEBBIAN_NET);

		// create input layer
		Layer inputLayer = LayerFactory.createLayer(inputNeuronsNum,
			neuronProperties);
		this.addLayer(inputLayer);

		// create output layer
		Layer outputLayer = LayerFactory.createLayer(outputNeuronsNum,
			neuronProperties);
		this.addLayer(outputLayer);

		// create full connectivity between input and output layer
		ConnectionFactory.fullConnect(inputLayer, outputLayer);

		// set input and output cells for this network
		NeuralNetworkFactory.setDefaultIO(this);

		// set appropriate learning rule for this network
		this.setLearningRule(new UnsupervisedHebbianLearning());
	//this.setLearningRule(new OjaLearning(this));
	}
 
Author: fiidau, Project: Y-Haplogroup-Predictor, Lines: 46, Source: UnsupervisedHebbianNetwork.java
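UnsupervisedHebbianLearning strengthens a connection whenever its input and output neurons are active together: delta_w = learningRate * input * output. A minimal sketch of that update (illustrative, not Neuroph's implementation):

```java
// Minimal unsupervised Hebbian update sketch:
// each weight grows in proportion to input activity times output activity.
public class HebbianSketch {
    // Update all weights of one output neuron in place.
    public static void update(double[] weights, double[] inputs,
                              double output, double learningRate) {
        for (int i = 0; i < weights.length; i++) {
            weights[i] += learningRate * inputs[i] * output; // delta_w = rate * x * y
        }
    }
}
```

Weights on co-active connections grow while weights on silent inputs stay put, which is why plain Hebbian learning tends to need normalization (e.g. Oja's rule, hinted at in the commented-out line above) to stay bounded.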

Example 13: BAM

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Creates an instance of BAM network with specified number of neurons
 * in input and output layers.
 * 
 * @param inputNeuronsCount
 *            number of neurons in input layer
 * @param outputNeuronsCount
 *            number of neurons in output layer
 */
public BAM(int inputNeuronsCount, int outputNeuronsCount) {

	// init neuron settings for BAM network
	NeuronProperties neuronProperties = new NeuronProperties();
	neuronProperties.setProperty("neuronType", InputOutputNeuron.class);
	neuronProperties.setProperty("bias", new Double(0));
	neuronProperties.setProperty("transferFunction", TransferFunctionType.STEP);
	neuronProperties.setProperty("transferFunction.yHigh", new Double(1));
	neuronProperties.setProperty("transferFunction.yLow", new Double(0));

	this.createNetwork(inputNeuronsCount, outputNeuronsCount, neuronProperties);
}
 
Author: fiidau, Project: Y-Haplogroup-Predictor, Lines: 22, Source: BAM.java

Example 14: createNetwork

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Creates an instance of Supervised Hebbian Network with specified number
 * of neurons in input layer, output layer and transfer function
 * 
 * @param inputNeuronsNum
 *            number of neurons in input layer
 * @param outputNeuronsNum
 *            number of neurons in output layer
 * @param transferFunctionType
 *            transfer function type
 */
private void createNetwork(int inputNeuronsNum, int outputNeuronsNum,
	TransferFunctionType transferFunctionType) {

	// init neuron properties
	NeuronProperties neuronProperties = new NeuronProperties();
	neuronProperties.setProperty("transferFunction", transferFunctionType);
	neuronProperties.setProperty("transferFunction.slope", new Double(1));
	neuronProperties.setProperty("transferFunction.yHigh", new Double(1));
	neuronProperties.setProperty("transferFunction.xHigh", new Double(1));		
	neuronProperties.setProperty("transferFunction.yLow", new Double(-1));
	neuronProperties.setProperty("transferFunction.xLow", new Double(-1));
	
	// set network type code
	this.setNetworkType(NeuralNetworkType.SUPERVISED_HEBBIAN_NET);

	// create input layer
	Layer inputLayer = LayerFactory.createLayer(inputNeuronsNum,
		neuronProperties);
	this.addLayer(inputLayer);

	// create output layer
	Layer outputLayer = LayerFactory.createLayer(outputNeuronsNum,
		neuronProperties);
	this.addLayer(outputLayer);

	// create full connectivity between input and output layer
	ConnectionFactory.fullConnect(inputLayer, outputLayer);

	// set input and output cells for this network
	NeuralNetworkFactory.setDefaultIO(this);

	// set appropriate learning rule for this network
	this.setLearningRule(new SupervisedHebbianLearning());
}
 
Author: fiidau, Project: Y-Haplogroup-Predictor, Lines: 46, Source: SupervisedHebbianNetwork.java

Example 15: createNetwork

import org.neuroph.util.TransferFunctionType; // import the required package/class
/**
 * Creates Outstar architecture with specified number of neurons in 
 * output layer
 * 
 * @param outputNeuronsCount
 *            number of neurons in output layer
 */
private void createNetwork(int outputNeuronsCount ) {

	// set network type
	this.setNetworkType(NeuralNetworkType.OUTSTAR);

	// init neuron settings for this type of network
	NeuronProperties neuronProperties = new NeuronProperties();
	neuronProperties.setProperty("transferFunction", TransferFunctionType.STEP);
	
	// create input layer
	Layer inputLayer = LayerFactory.createLayer(1, neuronProperties);
	this.addLayer(inputLayer);

	// create output layer
	neuronProperties.setProperty("transferFunction", TransferFunctionType.RAMP);
	Layer outputLayer = LayerFactory.createLayer(outputNeuronsCount, neuronProperties);
	this.addLayer(outputLayer);

	// create full connectivity between input and output layer
	ConnectionFactory.fullConnect(inputLayer, outputLayer);

	// set input and output cells for this network
	NeuralNetworkFactory.setDefaultIO(this);

	// set outstar learning rule for this network
	this.setLearningRule(new OutstarLearning());
}
 
Author: fiidau, Project: Y-Haplogroup-Predictor, Lines: 35, Source: Outstar.java


Note: The org.neuroph.util.TransferFunctionType class examples on this page were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and any use or redistribution should follow the corresponding project's license. Please do not republish without permission.