The difference between parametric and non-parametric statistics comes down to whether the probability distribution of the variable under study is known.
Parametric statistics uses calculations and procedures that assume you know how the random variable under study is distributed. Non-parametric statistics, by contrast, uses methods that make no such assumption; among other things, it offers techniques for discovering how a phenomenon is distributed, after which parametric techniques can be applied.
The definitions of both concepts are illustrated below:
- Parametric statistics: The part of statistical inference that uses statistics and decision criteria based on known distributions.
- Non-parametric statistics: The branch of statistical inference whose calculations and procedures do not rely on a known distribution.
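The contrast can be made concrete with a small sketch. As an illustration (the article does not prescribe any particular tests), a parametric comparison of two samples might use a t-type statistic, which leans on means and variances and behaves best for roughly Normal data, while a non-parametric comparison might use the Mann-Whitney U statistic, which uses only the ordering of the values and assumes no distribution. Both are implemented here from scratch with the standard library:

```python
import statistics as st

def t_statistic(a, b):
    """Parametric: Welch-style t statistic.

    Relies on sample means and variances, and is interpreted
    under the assumption that the data are roughly Normal.
    """
    na, nb = len(a), len(b)
    se = (st.variance(a) / na + st.variance(b) / nb) ** 0.5
    return (st.mean(a) - st.mean(b)) / se

def mann_whitney_u(a, b):
    """Non-parametric: Mann-Whitney U statistic.

    Counts, over all pairs, how often a value from `a` exceeds
    one from `b` (ties count one half). Only the ranking of the
    data matters, so no distribution is assumed.
    """
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)
```

For example, `mann_whitney_u([1, 2, 3], [4, 5, 6])` returns 0, because no value in the first sample exceeds any value in the second.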
Parametric and nonparametric statistics are complementary
They use different methods because their goals are different. However, they are two complementary branches. We do not always know with certainty — in fact we rarely do — how a random variable is distributed. Thus, it is necessary to use techniques to find out what type of distribution it most resembles.
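One very simple illustration of "finding out what a distribution most resembles": a Poisson-distributed count has variance equal to its mean, so comparing the sample mean and sample variance gives a crude diagnostic. This is only a sketch; the function name and the tolerance threshold are arbitrary choices for illustration, and real work would use a proper goodness-of-fit test:

```python
import statistics as st

def poisson_like(counts, tol=0.25):
    """Crude diagnostic: for Poisson data, variance is close to the mean.

    `tol` is an arbitrary relative tolerance chosen for illustration,
    not a statistically calibrated threshold.
    """
    m, v = st.mean(counts), st.variance(counts)
    return abs(v - m) / m <= tol
```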
Once we have found out how the variable is distributed, we can apply calculations and techniques specific to that distribution. For example, a confidence interval for the mean is not constructed the same way under a Poisson model as under a Normal one, because in a Poisson distribution the variance is tied to the mean.
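To make the distribution-specific point concrete, here is a minimal sketch of an approximate 95% confidence interval for the mean under each of the two models mentioned. Under a Normal model the variance is estimated separately from the data; under a Poisson model the variance is, by assumption, equal to the mean. The function names are illustrative:

```python
import statistics as st

def ci_normal(x, z=1.96):
    """Approximate 95% CI for the mean under a Normal model:
    the variance is estimated from the sample."""
    m = st.mean(x)
    half = z * (st.variance(x) / len(x)) ** 0.5
    return m - half, m + half

def ci_poisson(x, z=1.96):
    """Approximate 95% CI for the mean under a Poisson model:
    the variance equals the mean by assumption."""
    m = st.mean(x)
    half = z * (m / len(x)) ** 0.5
    return m - half, m + half
```

On data whose sample variance is far from its mean, the two intervals differ noticeably, which is exactly why assuming the wrong model matters.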
Even so, it is worth noting that parametric statistics is far better known and more widely used. Often, instead of applying non-parametric methods, a particular distribution is simply assumed for the variable; that is, the analysis starts from a hypothesis believed to be correct. To work rigorously, however, if we are not sure of the distribution, we must use non-parametric statistics.
Otherwise, however well the parametric techniques are applied, the results will be unreliable.