Knowledge Base

What is the confidence interval and how is it calculated?

09/21/2023 | By: FDS

A confidence interval is a statistical measure of the uncertainty, or precision, of an estimate. It gives the range in which the true value of a parameter lies with a given probability. Confidence intervals are often used to make estimates based on sample data.

A confidence interval is defined by two values: the estimated value and the margin of error. The estimated value is the midpoint of the interval and represents the best estimate of the true parameter value. The margin of error is the distance from the estimated value to either edge of the interval.

The calculation of a confidence interval depends on several factors, such as the desired confidence level (often specified as 95% or 99%), the distribution of the data, and the size of the sample. The most common methods for calculating confidence intervals are based on the normal distribution or the t-distribution.

For a normal distribution, the confidence interval is constructed symmetrically around the estimated value. The z-value (number of standard deviations) for the desired confidence level determines the margin of error. The formula for calculating the confidence interval is:

Confidence interval = estimated value ± (z-value × standard deviation / √n)

Here, n is the sample size and the standard deviation measures the dispersion of the data.
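The formula above can be sketched in a few lines of Python using only the standard library. The function name and the sample data are our own illustrative choices, not part of the original article:

```python
import math
from statistics import NormalDist, mean, stdev

def z_confidence_interval(data, confidence=0.95):
    """Return (lower, upper) bounds for the mean, assuming normally distributed data."""
    n = len(data)
    estimate = mean(data)          # the estimated value (sample mean)
    s = stdev(data)                # sample standard deviation
    # z-value for the desired confidence level, e.g. ~1.96 for 95%
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    margin = z * s / math.sqrt(n)  # the margin of error
    return estimate - margin, estimate + margin

lower, upper = z_confidence_interval([4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1])
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```

Note that `NormalDist().inv_cdf(0.975)` returns roughly 1.96, the familiar z-value for a two-sided 95% confidence level.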

For small samples, or when the population standard deviation is unknown and must be estimated from the sample, the t-distribution is used instead. The formula is the same, except that the t-value (with n − 1 degrees of freedom) from a t-distribution table replaces the z-value.
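A small sketch of the t-based variant follows. Python's standard library has no t-distribution, so the t-value below is hardcoded from a t-table (two-sided 95%, df = n − 1 = 4); with SciPy it could be computed as `scipy.stats.t.ppf(0.975, df=4)`. The data values are illustrative:

```python
import math
from statistics import mean, stdev

data = [10.2, 9.8, 10.5, 10.1, 9.9]  # small sample: n = 5
n = len(data)
estimate = mean(data)
s = stdev(data)                      # sample standard deviation
t_value = 2.776                      # t-table value: 95% confidence, 4 degrees of freedom
margin = t_value * s / math.sqrt(n)
lower, upper = estimate - margin, estimate + margin
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```

Because 2.776 is larger than the corresponding z-value of 1.96, the t-based interval is wider, reflecting the extra uncertainty of a small sample.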

It is important to note that the confidence level describes the long-run behaviour of the procedure, not the probability that any single interval contains the true value: if the sampling and interval construction were repeated many times, the percentage of intervals that contain the true value would equal the confidence level.
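This interpretation can be checked with a hypothetical simulation (all parameters below are our own choices): draw many samples from a known distribution, build a 95% interval for each, and count how often the interval contains the true mean:

```python
import math
import random
from statistics import mean, stdev

random.seed(42)
TRUE_MEAN = 0.0   # known parameter of the simulated population
Z95 = 1.96        # z-value for a 95% confidence level
TRIALS, N = 2000, 30

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(N)]
    m, s = mean(sample), stdev(sample)
    margin = Z95 * s / math.sqrt(N)
    if m - margin <= TRUE_MEAN <= m + margin:
        hits += 1

coverage = hits / TRIALS
print(f"Fraction of intervals containing the true mean: {coverage:.3f}")
```

The printed fraction lands close to 0.95, matching the confidence level, even though each individual interval either contains the true mean or does not.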
