This article collects typical usage examples of the C# method Sample.InverseLeftProbability. If you have been wondering how to use Sample.InverseLeftProbability in C#, or what it is good for, the curated code examples below may help. You can also explore further usage examples of the containing class, Sample.

Below are 2 code examples of the Sample.InverseLeftProbability method, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better C# code examples.
Example 1: Bug6988
public void Bug6988()
{
    // Due to writing i / n instead of (double) i / n, Sample.LeftProbability was reported as 0 except for the last value.
    Sample s = new Sample(0.0, 1.0, 3.0, 4.0);
    Assert.IsTrue(TestUtilities.IsNearlyEqual(s.LeftProbability(2.0), 0.5));
    Assert.IsTrue(TestUtilities.IsNearlyEqual(s.InverseLeftProbability(0.5), 2.0));
}
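The bug described in the comment is the classic C# integer-division pitfall: with `int` operands, `i / n` truncates toward zero, so every intermediate quantile comes out as 0. The following is a minimal standalone sketch (the class and method names here are hypothetical, not Meta.Numerics' implementation) showing why the cast to `double` is essential when computing an empirical CDF:

```csharp
using System;

public static class EmpiricalCdfDemo
{
    // Fraction of sorted sample values <= x. The cast to double is essential:
    // with int operands, i / sorted.Length would truncate to 0 for all i < n.
    public static double LeftProbability(double[] sorted, double x)
    {
        int i = 0;
        while (i < sorted.Length && sorted[i] <= x) i++;
        return (double) i / sorted.Length;
    }

    public static void Main()
    {
        double[] s = { 0.0, 1.0, 3.0, 4.0 };
        Console.WriteLine(LeftProbability(s, 2.0)); // prints 0.5
    }
}
```

Dropping the `(double)` cast reproduces the bug exactly: the method returns 0 for every argument below the sample maximum.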
Example 2: FitToSample
/// <summary>
/// Computes the Weibull distribution that best fits the given sample.
/// </summary>
/// <param name="sample">The sample to fit.</param>
/// <returns>The best fit parameters.</returns>
/// <remarks>
/// <para>The returned fit parameters are the <see cref="ShapeParameter"/> and <see cref="ScaleParameter"/>, in that order.
/// These are the same parameters, in the same order, that are required by the <see cref="WeibullDistribution(double,double)"/> constructor to
/// specify a new Weibull distribution.</para>
/// </remarks>
/// <exception cref="ArgumentNullException"><paramref name="sample"/> is null.</exception>
/// <exception cref="InvalidOperationException"><paramref name="sample"/> contains non-positive values.</exception>
/// <exception cref="InsufficientDataException"><paramref name="sample"/> contains fewer than three values.</exception>
public static FitResult FitToSample(Sample sample)
{
    if (sample == null) throw new ArgumentNullException("sample");
    if (sample.Count < 3) throw new InsufficientDataException();
    if (sample.Minimum <= 0.0) throw new InvalidOperationException();
    // The log likelihood function is
    // \log L = N \log k + (k-1) \sum_i \log x_i - N k \log \lambda - \sum_i \left(\frac{x_i}{\lambda}\right)^k
    // Taking derivatives, we get
    // \frac{\partial \log L}{\partial \lambda} = - \frac{N k}{\lambda} + \sum_i \frac{k}{\lambda} \left(\frac{x_i}{\lambda}\right)^k
    // \frac{\partial \log L}{\partial k} = \frac{N}{k} + \sum_i \left[ 1 - \left(\frac{x_i}{\lambda}\right)^k \right] \log \left(\frac{x_i}{\lambda}\right)
    // Setting the first expression to zero and solving for \lambda gives
    // \lambda = \left( N^{-1} \sum_i x_i^k \right)^{1/k} = ( < x^k > )^{1/k}
    // which allows us to reduce the problem from 2D to 1D.
    // By the way, using the expression for the moment < x^k > of the Weibull distribution, you can show there is
    // no bias to this result even for finite samples.
    // Setting the second expression to zero gives
    // \frac{1}{k} = \frac{1}{N} \sum_i \left[ \left( \frac{x_i}{\lambda} \right)^k - 1 \right] \log \left(\frac{x_i}{\lambda}\right)
    // which, given the equation for \lambda as a function of k derived from the first expression, is an implicit equation for k.
    // It cannot be solved in closed form, but we have now reduced our problem to finding a root in one dimension.
    // We need a starting guess for k.
    // The method of moments equations are not solvable for the parameters in closed form,
    // but the scale parameter drops out of the ratio of the 1/3 and 2/3 quantile points,
    // and the result is easily solved for the shape parameter:
    // k = \frac{\log 2}{\log\left(\frac{x_{2/3}}{x_{1/3}}\right)}
    double x1 = sample.InverseLeftProbability(1.0 / 3.0);
    double x2 = sample.InverseLeftProbability(2.0 / 3.0);
    double k0 = Global.LogTwo / Math.Log(x2 / x1);
    // Given the shape parameter, we could invert the expression for the mean to get
    // the scale parameter, but since we have an expression for \lambda from k, we
    // don't need it.
    //double s0 = sample.Mean / AdvancedMath.Gamma(1.0 + 1.0 / k0);
    // Simply handing our 1D function to a root-finder works fine until we start to encounter large k. For large k,
    // even just computing \lambda goes wrong because we are taking x_i^k, which overflows. Horst Rinne, "The Weibull
    // Distribution: A Handbook" describes a way out. Basically, we first move to variables z_i = \log(x_i) and
    // then w_i = z_i - \bar{z}. Then lots of factors of e^{k \bar{z}} cancel out and, even though we still do
    // have some e^{k w_i}, the w_i are small and centered around 0 instead of large and centered around \lambda.
    Sample transformedSample = sample.Copy();
    transformedSample.Transform(x => Math.Log(x));
    double zbar = transformedSample.Mean;
    transformedSample.Transform(z => z - zbar);
    // After this change of variables, the 1D function to zero becomes
    // g(k) = \sum_i ( 1 - k w_i ) e^{k w_i}
    // It's easy to show that g(0) = N and g(\infty) = -\infty, so it must cross zero. It's also easy to take
    // a derivative
    // g'(k) = - k \sum_i w_i^2 e^{k w_i}
    // so we can apply Newton's method.
    int i = 0;
    double k1 = k0;
    while (true) {
        i++;
        double g = 0.0;
        double gp = 0.0;
        foreach (double w in transformedSample) {
            double e = Math.Exp(k1 * w);
            g += (1.0 - k1 * w) * e;
            gp -= k1 * w * w * e;
        }
        double dk = -g / gp;
        k1 += dk;
        if (Math.Abs(dk) <= Global.Accuracy * Math.Abs(k1)) break;
        if (i >= Global.SeriesMax) throw new NonconvergenceException();
    }
    // The corresponding lambda can also be expressed in terms of zbar and the w's.
    double t = 0.0;
    foreach (double w in transformedSample) {
        t += Math.Exp(k1 * w);
    }
    t /= transformedSample.Count;
    double lambda1 = Math.Exp(zbar) * Math.Pow(t, 1.0 / k1);
    // We need the curvature matrix at the minimum of our log likelihood function
    // to determine the covariance matrix. Taking more derivatives...
    // \frac{\partial^2 \log L}{\partial \lambda^2} = \frac{N k}{\lambda^2} - \sum_i \frac{k(k+1) x_i^k}{\lambda^{k+2}}
    //   = - \frac{N k^2}{\lambda^2}
    // The second expression follows by inserting the first-derivative-equal-zero relation into the first.
    // For k = 1, this agrees with the variance formula for the mean of the best-fit exponential.
    // Derivatives involving k are less simple.
    //......... (part of the code is omitted here) .........
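The derivation in the comments above can be assembled into a compact standalone sketch. Everything here (the class name `WeibullFitSketch`, the tolerance, the quantile indexing for the starting guess) is hypothetical illustration under the stated assumptions, not Meta.Numerics' actual code: it profiles out \lambda, then Newton-iterates g(k) = \sum_i (1 - k w_i) e^{k w_i} on the centered log data.

```csharp
using System;
using System.Linq;

public static class WeibullFitSketch
{
    // Maximum-likelihood fit of Weibull shape k and scale lambda via the
    // w_i = log(x_i) - mean(log x) transformation described in the comments.
    public static (double Shape, double Scale) Fit(double[] x)
    {
        if (x == null) throw new ArgumentNullException(nameof(x));
        if (x.Length < 3) throw new InvalidOperationException("Need at least three values.");
        if (x.Min() <= 0.0) throw new InvalidOperationException("Values must be positive.");

        // Centered log variables: lots of e^{k zbar} factors cancel, avoiding overflow.
        double[] z = x.Select(Math.Log).ToArray();
        double zbar = z.Average();
        double[] w = z.Select(zi => zi - zbar).ToArray();

        // Starting guess from the 1/3 and 2/3 sample quantiles:
        // k0 = log 2 / log(x_{2/3} / x_{1/3}). Crude order statistics suffice here.
        double[] sorted = (double[]) x.Clone();
        Array.Sort(sorted);
        double x13 = sorted[sorted.Length / 3];
        double x23 = sorted[(2 * sorted.Length) / 3];
        double k = Math.Log(2.0) / Math.Log(x23 / x13);

        // Newton's method on g(k) = sum_i (1 - k w_i) e^{k w_i},
        // with g'(k) = -k sum_i w_i^2 e^{k w_i}.
        for (int i = 0; i < 250; i++)
        {
            double g = 0.0, gp = 0.0;
            foreach (double wi in w)
            {
                double e = Math.Exp(k * wi);
                g += (1.0 - k * wi) * e;
                gp -= k * wi * wi * e;
            }
            double dk = -g / gp;
            k += dk;
            if (Math.Abs(dk) <= 1.0E-14 * Math.Abs(k)) break;
        }

        // lambda = e^{zbar} * ( mean of e^{k w_i} )^{1/k}
        double t = w.Average(wi => Math.Exp(k * wi));
        double lambda = Math.Exp(zbar) * Math.Pow(t, 1.0 / k);
        return (k, lambda);
    }
}
```

Unlike the library method, this sketch returns only the point estimates; the curvature-matrix work that the omitted remainder of `FitToSample` performs would be needed to attach a covariance matrix to them.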