

C# INetwork.SetLearningRate Method Code Examples

This article collects typical usage examples of the INetwork.SetLearningRate method in C#. If you are wondering what INetwork.SetLearningRate does, how to call it, or what real code that uses it looks like, the curated example below should help. You can also explore further usage examples of the INetwork interface that declares this method.


One code example of the INetwork.SetLearningRate method is shown below.

Example 1: Optimiser


//......... part of the code is omitted here .........
            // Allocate the results buffer using the configured global buffer size.
            results = new float[Convert.ToInt32(dataAccess.GetParameter("Opt_Global_BufferSize"))];

            if (useBP)
            {
                // Back-propagation settings: learning-rate schedule type, initial and
                // final learning rates, and the weight-jitter parameters.
                int learningRateFunction = Convert.ToInt32(dataAccess.GetParameter("Opt_Bp_LearningType"));
                double initialLR = Convert.ToDouble(dataAccess.GetParameter("Opt_Bp_InitialLearnRate"));
                double finalLR = Convert.ToDouble(dataAccess.GetParameter("Opt_Bp_FinalLearnRate"));
                int jitterEpoch = Convert.ToInt32(dataAccess.GetParameter("Opt_Bp_JitterEpoch"));
                double jitterNoiseLimit = Convert.ToDouble(dataAccess.GetParameter("Opt_Bp_JitterNoiseLimit"));

                // Three-layer topology: a linear input layer with one neuron per pixel,
                // a sigmoid hidden layer, and a single sigmoid output neuron.
                NeuronDotNet.Core.Backpropagation.LinearLayer inputLayer = new NeuronDotNet.Core.Backpropagation.LinearLayer(preprocessor.ImageSize.Width * preprocessor.ImageSize.Height);
                NeuronDotNet.Core.Backpropagation.SigmoidLayer hiddenLayer = new NeuronDotNet.Core.Backpropagation.SigmoidLayer(total);
                hiddenLayer.InputGroups = inputGroups.Length;
                NeuronDotNet.Core.Backpropagation.SigmoidLayer outputLayer = new NeuronDotNet.Core.Backpropagation.SigmoidLayer(1);

                // Initialise the hidden-layer weights with the Nguyen-Widrow method.
                hiddenLayer.Initializer = new NguyenWidrowFunction();

                // Connect input -> hidden (a grouped connector built from the input groups
                // and image dimensions) and hidden -> output, then wrap the layers in a
                // back-propagation network.
                new BackpropagationConnector(
                    inputLayer,
                    hiddenLayer,
                    inputGroups,
                    preprocessor.ImageSize.Width,
                    preprocessor.ImageSize.Height
                    );

                new BackpropagationConnector(hiddenLayer, outputLayer);

                network = new BackpropagationNetwork(inputLayer, outputLayer);

                // Apply the configured learning-rate schedule: a constant rate, or an
                // exponential, hyperbolic or linear decay from initialLR to finalLR.
                switch (learningRateFunction)
                {
                    case 0:
                        network.SetLearningRate(initialLR); // constant learning rate
                        break;
                    case 1:
                        network.SetLearningRate(new NeuronDotNet.Core.LearningRateFunctions.ExponentialFunction(initialLR, finalLR)); // exponential decay
                        break;
                    case 2:
                        network.SetLearningRate(new NeuronDotNet.Core.LearningRateFunctions.HyperbolicFunction(initialLR, finalLR)); // hyperbolic decay
                        break;
                    case 3:
                        network.SetLearningRate(new NeuronDotNet.Core.LearningRateFunctions.LinearFunction(initialLR, finalLR)); // linear decay
                        break;

                    default:
                        throw new ArgumentOutOfRangeException("The learning rate index is out of range.\n");
                }
                // Configure weight jitter: how often (in epochs) noise is applied and
                // the maximum magnitude of that noise.
                network.JitterEpoch = jitterEpoch;
                network.JitterNoiseLimit = jitterNoiseLimit;

            }

            if (usePSO)
            {
                // Particle swarm optimisation settings read from the parameter store.
                double minP = Convert.ToDouble(dataAccess.GetParameter("Opt_Pso_MinP"));
                double maxP = Convert.ToDouble(dataAccess.GetParameter("Opt_Pso_MaxP"));
                double minI = Convert.ToDouble(dataAccess.GetParameter("Opt_Pso_MinI"));
                double maxI = Convert.ToDouble(dataAccess.GetParameter("Opt_Pso_MaxI"));
                double quant = Convert.ToDouble(dataAccess.GetParameter("Opt_Pso_Quant"));
                double vMax = Convert.ToDouble(dataAccess.GetParameter("Opt_Pso_vMax"));

                int clamping = Convert.ToInt32(dataAccess.GetParameter("Opt_Pso_Clamping"));
                int initLinks = Convert.ToInt32(dataAccess.GetParameter("Opt_Pso_InitLinks"));
                int randomness = Convert.ToInt32(dataAccess.GetParameter("Opt_Pso_Randomness"));
                int randOrder = Convert.ToInt32(dataAccess.GetParameter("Opt_Pso_ParticleOrder"));
Developer: NoxHarmonium; Project: enform; Lines of code: 67; Source file: Optimiser.cs
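
For quick reference, here is a minimal, self-contained sketch (not part of the Optimiser example above) of how SetLearningRate is typically called on a NeuronDotNet BackpropagationNetwork. It uses only the constructors and overloads that already appear in the example; the layer sizes and learning-rate values are illustrative assumptions, and in practice you would keep only one of the two SetLearningRate calls.

using NeuronDotNet.Core.Backpropagation;
using NeuronDotNet.Core.LearningRateFunctions;

public static class SetLearningRateSketch
{
    public static BackpropagationNetwork Build()
    {
        // Illustrative layer sizes (assumed, not taken from the example above).
        LinearLayer inputLayer = new LinearLayer(16);
        SigmoidLayer hiddenLayer = new SigmoidLayer(8);
        SigmoidLayer outputLayer = new SigmoidLayer(1);

        // Fully connect input -> hidden -> output.
        new BackpropagationConnector(inputLayer, hiddenLayer);
        new BackpropagationConnector(hiddenLayer, outputLayer);

        BackpropagationNetwork network = new BackpropagationNetwork(inputLayer, outputLayer);

        // Option 1: a constant learning rate for the whole training run.
        network.SetLearningRate(0.25);

        // Option 2: a schedule that moves the rate from an initial to a final value,
        // here a linear decay, mirroring case 3 of the switch in the example above.
        network.SetLearningRate(new LinearFunction(0.3, 0.05));

        return network;
    }
}

Whether a constant rate or a decaying schedule trains better depends on the problem; the Optimiser above leaves that choice to the Opt_Bp_LearningType configuration parameter.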


Note: The INetwork.SetLearningRate method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their authors, and copyright remains with the original authors; refer to the corresponding project licenses for distribution and use, and do not republish without permission.