Faculty of Engineering and Natural Sciences, Bahçeşehir University, İstanbul.
During the development and final stages of your project you will need to verify that your product performs according to the required specifications, for example accuracy, precision, efficiency, or rate.
This is an objective process where appropriate experimental data (observations) is collected and analysed.
In many cases a single measurement of a parameter (e.g. the diameter of a wheel) may be sufficient given the correct use of an appropriate instrument (e.g. a Vernier caliper).
More complex systems often exhibit variability due to random processes within the system (e.g. electronic noise), or within the measurement instrument/procedure (e.g. variable alignment of rulers, reading errors), or in the working environment (e.g. ambient temperature changes, random arrival of objects on a conveyor belt). In the presence of variability, we need to make repeated measurements and then apply statistical analysis, and finally interpretation of the results.
This page outlines some common basic statistical tools that you may find useful for the verification process of your product. The focus here is not only on obtaining a measurement of the performance parameter, but also an estimate of the uncertainty in the measurement. Since many other tools and procedures exist, please discuss with your supervisor which procedures and tools are appropriate for your project.
The distribution of the values may look something like the figure below:
The sample mean of the values, vbar, is an estimate of the true mean (center of mass) μ of the population. The difference between μ and the design target for the population is sometimes called accuracy. The sample standard deviation, s, is an estimate of the standard deviation, σ, which is a standard measure of the variability of the process; this is sometimes called precision. These definitions of accuracy and precision are illustrated in Figure 4.
The estimates vbar and s are calculated as follows:
vbar = (1/n) Σ vᵢ
s = √[ Σ (vᵢ − vbar)² / (n-1) ]
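As a quick sanity check, vbar and s can be computed with any numerical package; below is a minimal Python sketch using the standard library (the measurement values are made up for illustration):

```python
import statistics

# Hypothetical repeated measurements of a wheel diameter (mm)
v = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]

n = len(v)
vbar = statistics.mean(v)   # sample mean, estimate of mu
s = statistics.stdev(v)     # sample standard deviation (n-1 divisor), estimate of sigma

print(f"n = {n}, vbar = {vbar:.3f} mm, s = {s:.3f} mm")
```

Note that `statistics.stdev` uses the n-1 divisor, matching the formula for s above.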
For these values to be meaningful, we need an estimate of their uncertainty. This is commonly achieved by calculating a confidence interval or applying a hypothesis test. It is important to note that in both methods, uncertainty is proportional to 1/√n; design your experiment to collect enough data for your needs.
We are 95% confident that the parameter lies in the interval x ± Δx. Example:
The acceleration due to gravity is determined to be g0 = 9.81 ± 0.02 m/s² (95% ci).
This can also be written as: 9.79 < g0 < 9.83 m/s² (95% ci).
A 95% confidence interval means that if we were to take 100 different samples of the same sample size and compute a 95% confidence interval for each sample, then approximately 95 of the 100 confidence intervals would contain the true value of the parameter. It does not mean that there is a 95% probability that the true value is inside the current interval.
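This coverage interpretation can be demonstrated by simulation. The Python sketch below (assuming a known σ and the simpler z-based interval x̄ ± 1.96 σ/√n, rather than the t-based interval used later) counts how many of 1000 intervals contain the true mean:

```python
import random
import statistics

random.seed(1)
mu, sigma, n, trials = 10.0, 2.0, 25, 1000
half = 1.96 * sigma / n ** 0.5        # known-sigma 95% half-width

covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    if xbar - half < mu < xbar + half:
        covered += 1

print(covered)  # close to 950 of the 1000 intervals contain mu
```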
3.1 Confidence intervals for μ and σ.
Confidence interval for μ
The value of t0.025 depends on the number of degrees of freedom, n-1; these values are tabulated above. For other values of n-1 you can obtain the t-value by solving the Matlab/Octave equation:
"tcdf(t0.025, n-1) = 0.975". Alternatively, use this cdf calculator.
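If neither Octave nor the table is at hand, the same equation can be solved numerically in any language. The Python sketch below integrates the t pdf and bisects for the quantile; it is only a rough illustration of what tcdf does, not a substitute for a proper statistics library:

```python
import math

def t_cdf(x, df, steps=4000):
    """CDF of Student's t: trapezoidal integration of the pdf from 0 to x, plus 0.5."""
    if x < 0:
        return 1.0 - t_cdf(-x, df, steps)
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    e = -(df + 1) / 2
    f = lambda u: c * (1.0 + u * u / df) ** e
    h = x / steps
    area = (f(0.0) + f(x)) / 2 + sum(f(i * h) for i in range(1, steps))
    return 0.5 + area * h

def t_value(df, p=0.975):
    """Solve t_cdf(t, df) = p by bisection, i.e. tcdf(t0.025, n-1) = 0.975."""
    lo, hi = 0.0, 50.0
    for _ in range(50):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if t_cdf(mid, df) < p else (lo, mid)
    return (lo + hi) / 2

print(t_value(9))   # ~2.262 for n-1 = 9 degrees of freedom
```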
Confidence interval for σ
The value of χ²0.95 depends on the number of degrees of freedom, n-1, and can be calculated by solving the Matlab/Octave equation:
"chi2cdf(χ²0.95, n-1) = 0.95". Alternatively, use the scale formula and take the scale factor from the table, or use this cdf calculator.
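The χ² value can likewise be solved numerically. The Python sketch below evaluates the chi-square CDF via the standard incomplete-gamma series and bisects; again, this only illustrates what chi2cdf computes:

```python
import math

def gammainc_p(a, x):
    """Regularized lower incomplete gamma P(a, x), by series expansion (x > 0)."""
    term = 1.0 / a
    total = term
    n = 0
    while term > 1e-12 * total and n < 10000:
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def chi2_cdf(x, df):
    """chi2cdf(x, df): CDF of the chi-square distribution with df degrees of freedom."""
    return gammainc_p(df / 2, x / 2)

def chi2_value(df, p=0.95):
    """Solve chi2_cdf(c, df) = p by bisection, i.e. chi2cdf(chi2_0.95, n-1) = 0.95."""
    lo, hi = 0.0, 10.0 * df + 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if chi2_cdf(mid, df) < p else (lo, mid)
    return (lo + hi) / 2

print(chi2_value(9))   # ~16.92 for n-1 = 9 degrees of freedom
```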
3.2 Confidence interval for fractions (probabilities and efficiencies)
A confidence interval for p can be formed as follows:
(≈ 95% ci)

This simple formulation does not work well for values of p close to 0 or 1; such cases require a proper treatment. The exact Clopper-Pearson interval (based on the pmf leaving an equal probability on both sides) can be constructed as follows:
pLower < p < pUpper (95% confidence interval), where pLower and pUpper solve:
sum(binopdf(m:n,n,pLower)) = 0.025
sum(binopdf(0:m,n,pUpper)) = 0.025
Example: n = 100 trials, m = 80 successes, and so p' = 0.80.
>> sum(binopdf(m:n,n,0.70816))
ans = 0.025
>> sum(binopdf(0:m,n,0.87334))
ans = 0.025

and so 0.708 < p < 0.873 with 95% confidence.
The same result can be obtained more easily from this online calculator :-).
Proof that this actually works (reasonably) well can be seen in this C++ simulation. [The simulation suggests that the confidence level is a bit higher, more like 97%, so the c.i. is a bit conservative. In some cases 0:m+1 works better, but my simulation might be a bit off(?)]
Compare this to the simpler form: p = 0.80 ± 2√(0.80(1-0.80)/100) = 0.80 ± 0.08, and so 0.72 < p < 0.88, which does not allow for the asymmetry of the Binomial distribution.
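Both the exact Clopper-Pearson bounds and the simple form can be reproduced in a few lines without a statistics toolbox; a Python sketch using math.comb and bisection:

```python
import math

def binom_tail(p, n, m):
    """P(X >= m) for X ~ Binomial(n, p) -- i.e. sum(binopdf(m:n, n, p))."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def binom_head(p, n, m):
    """P(X <= m) for X ~ Binomial(n, p) -- i.e. sum(binopdf(0:m, n, p))."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

def bisect(f, target, increasing):
    """Solve f(p) = target for p in (0, 1); f must be monotone in p."""
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n, m = 100, 80
p_lo = bisect(lambda p: binom_tail(p, n, m), 0.025, increasing=True)
p_hi = bisect(lambda p: binom_head(p, n, m), 0.025, increasing=False)
print(f"{p_lo:.5f} < p < {p_hi:.5f}")   # ~0.70816 < p < ~0.87334

# The simple (Wald) form for comparison:
half = 2 * math.sqrt(0.8 * 0.2 / n)
print(f"{0.8 - half:.2f} < p < {0.8 + half:.2f}")   # 0.72 < p < 0.88
```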
3.3 Confidence interval for rates (Poisson)
For a Poisson process, in a time t, the measured number of events n has an uncertainty of ≈ √n.
n0 = n ± 2√n

There are more rigorous treatments, but the above form should be a reasonable approximation as long as you record enough events, that is, many more than 10. Simply increase the time period until you have collected plenty of observations of the event.
n0 = 45 ± 13.4 ⇒ 31.6 < n0 < 58.4

As usual with sampling, the uncertainty reduces ∝ 1/√n. For example, if 450 events were observed over 600 seconds (ten times the observation period) then
n0 = 450 ± 42.4

Proof that n0 = n ± 2√n provides a 95% c.i. for large n can be seen in this C++ simulation.
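The worked numbers above can be reproduced directly; a short Python sketch (45 events counted over a 60-second period, as in the example):

```python
import math

n = 45                      # events counted in t = 60 s
half = 2 * math.sqrt(n)     # ~2 sigma half-width from the Poisson sqrt(n) uncertainty
lo, hi = n - half, n + half
print(f"{lo:.1f} < n0 < {hi:.1f}")          # 31.6 < n0 < 58.4

t = 60.0                    # dividing by the observation time gives the rate interval
print(f"{lo / t:.2f} < lambda < {hi / t:.2f} events/s")   # 0.53 < lambda < 0.97
```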
4. Hypothesis testing
The following sections present examples of hypothesis tests for μ, σ, p and λ.
4.1 Hypothesis test for μ and σ.
Hypothesis test for μ
This can be solved using Matlab/Octave, or by using this cdf calculator.
Here, "small" would be a few percent or less for a significant result, and 1% or less for a very significant result (smaller is better). If the probability is large then the hypothesis cannot be rejected and we are not confident that the accuracy requirement has been met.
The probability that we would observe a sample mean of 5.2473 mm when the true mean is 10 mm, is just 1.7%. This is a significant (small) P-value and so we can reject the hypothesis μ = 10 mm in favor of μ < 10 mm, and so we conclude that the accuracy requirement is satisfied.
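The structure of this one-sided test can be sketched in Python; the values of x̄, s and n below are made up for illustration (they are not the ones behind the example above), and the normal approximation is used in place of the exact t distribution, which is adequate for moderate n:

```python
import math

# Hypothetical data: test H0: mu = 10 mm against the alternative mu < 10 mm
n, xbar, s, mu0 = 30, 9.2, 2.1, 10.0

t = (xbar - mu0) / (s / math.sqrt(n))       # standardized test statistic
# One-sided P-value via the normal CDF (approximating the t distribution)
pval = 0.5 * (1 + math.erf(t / math.sqrt(2)))
print(f"t = {t:.3f}, P-value = {pval:.4f}")
```

A P-value of a few percent or less would, as above, let us reject H0 in favor of μ < 10 mm.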
Hypothesis test for σ
This can be solved using Matlab/Octave, or by using this cdf calculator.
Here, "small" would be a few percent or less for a significant result, and 1% or less for a very significant result (smaller is better). If the probability is large then the hypothesis cannot be rejected and we are not confident that the precision requirement has been met.
Since we have a high probability (35%) that we would observe a sample standard deviation of 14.260 mm (or less), given a true standard deviation of 15 mm, then we cannot reject the hypothesis σ = 15 mm in favor of σ < 15 mm. We cannot say that the precision requirement is satisfied. In this case we would increase the sample size and/or improve the robot precision.
4.2 Hypothesis test for fractions (probabilities and efficiencies)
where m is the observed number of successes out of n trials.
This can be computed with Matlab/Octave as follows:
sum(binopdf(m:n,n,p0))

Here, "small" would be a few percent or less for a significant result, and 1% or less for a very significant result (smaller is better). If the probability is large then the hypothesis cannot be rejected and we are not confident that the requirement for p has been met.
The P-value is small (significant), so we can reject the hypothesis that p = 0.7 in favor of p > 0.7 and so the requirement is met.
Remember that the 95% confidence interval was 0.708 < p < 0.873 which also excludes p = 0.7.
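The Octave expression sum(binopdf(m:n,n,p0)) translates directly; a Python sketch using the numbers from the example (n = 100 trials, m = 80 successes, H0: p = 0.7):

```python
import math

n, m, p0 = 100, 80, 0.70   # trials, successes, hypothesized p

# One-sided P-value: probability of m or more successes if p = p0,
# i.e. the Octave expression sum(binopdf(m:n, n, p0))
pval = sum(math.comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(m, n + 1))
print(f"P-value = {pval:.4f}")
```

The printed P-value is a few percent, small enough to reject p = 0.7 in favor of p > 0.7.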
4.3 Hypothesis test for rates (Poisson)
For a Poisson process, in a time t, the measured number of events m has an uncertainty of ≈ √m.
Here, "small" would be a few percent or less for a significant result, and 1% or less for a very significant result (smaller is better). If the probability is large then the hypothesis cannot be rejected and we are not confident that the requirement has been met.
The P-value is very significant (very small), less than 1%, and so we reject the hypothesis and conclude that the requirement for the mean rate, λ > 0.5, is satisfied.
This is consistent with the above confidence interval result of 0.53 < λ < 0.97 events/second (95% c.i.) which excludes λ = 0.5.
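The Poisson test follows the same pattern; a Python sketch using the counts from the confidence-interval example above (45 events in 60 s, H0: λ = 0.5 events/s):

```python
import math

lam0, t, m = 0.5, 60.0, 45
mu0 = lam0 * t                               # expected count under H0 (30 events)

# One-sided P-value: probability of m or more events if the true mean count is mu0
pval = 1.0 - sum(math.exp(-mu0) * mu0**k / math.factorial(k) for k in range(m))
print(f"P-value = {pval:.4f}")
```

The printed P-value is below 1%, consistent with rejecting λ = 0.5 in favor of λ > 0.5.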
Comments/corrections to firstname.lastname@example.org.