Chapter 1. Limitations of classic statistics and motivation
In this chapter, we discuss the limitations of classic statistics built on the concepts of the mean and variance. We argue that the mean and variance are appropriate measures of the center and the scatter only for symmetric distributions. Many distributions we deal with are asymmetric, including distributions of positive data. The mean not only has weak practical appeal but may also create theoretical trouble with unbiased estimation: the existence of an unbiased estimator is the exception rather than the rule.
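To make this point concrete, consider a right-skewed distribution of positive data such as the lognormal: the mean is pulled toward the heavy right tail and overstates a typical value. The short Python sketch below is our own illustration, not taken from the text; it simply compares the sample mean, the sample median, and the theoretical mode for simulated lognormal data.

import numpy as np

rng = np.random.default_rng(0)

# Positive, right-skewed data: lognormal with log-mean 0 and log-sd 1
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# The sample mean is pulled toward the heavy right tail
print(f"mean   = {x.mean():.3f}")      # close to exp(0.5), about 1.65
print(f"median = {np.median(x):.3f}")  # close to exp(0) = 1
# Theoretical mode of this lognormal is exp(mu - sigma^2) = exp(-1), about 0.37
print(f"mode   = {np.exp(-1.0):.3f}")

The gap between these three numbers reflects exactly the asymmetry the chapter has in mind; for a symmetric distribution they would essentially coincide.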
Optimal statistical inference for the normal variance, in the form of the minimum-length and unbiased confidence intervals (CIs), was developed more than 50 years ago and has since been largely forgotten. This example serves as a motivation for our theory. Many central concepts, such as unbiased tests, the mode, and maximum concentration estimators for the normal variance, serve as prototypes for the general theory developed in subsequent chapters.
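As a preview, recall that the classic CI for the normal variance is built from the pivot (n-1)S^2/sigma^2 ~ chi^2 with n-1 degrees of freedom, so any quantile pair q1 < q2 with coverage 1 - alpha yields the interval [(n-1)S^2/q2, (n-1)S^2/q1]. The equal-tailed choice is the textbook default; the minimum-length choice instead minimizes 1/q1 - 1/q2 subject to the coverage constraint, which leads to the stationarity condition q1^2 f(q1) = q2^2 f(q2), where f is the chi-square density. The Python sketch below is our own numerical illustration of this construction under those assumptions; the function name var_ci and the starting values are ours.

import numpy as np
from scipy import stats, optimize

def var_ci(s2, n, alpha=0.05, kind="equal-tail"):
    """CI for the normal variance based on (n-1)S^2/sigma^2 ~ chi^2_{n-1}."""
    df = n - 1
    chi2 = stats.chi2(df)
    if kind == "equal-tail":
        q1, q2 = chi2.ppf(alpha / 2), chi2.ppf(1 - alpha / 2)
    else:
        # Minimum length: minimize 1/q1 - 1/q2 subject to coverage 1 - alpha,
        # which gives the stationarity condition q1^2 f(q1) = q2^2 f(q2)
        def equations(q):
            q1, q2 = q
            return [chi2.cdf(q2) - chi2.cdf(q1) - (1 - alpha),
                    q1**2 * chi2.pdf(q1) - q2**2 * chi2.pdf(q2)]
        q1, q2 = optimize.fsolve(equations,
                                 [chi2.ppf(alpha / 2), chi2.ppf(1 - alpha / 2)])
    return df * s2 / q2, df * s2 / q1

# Example: n = 10 observations with sample variance s^2 = 2
print("equal-tail :", var_ci(2.0, 10))
print("min length :", var_ci(2.0, 10, kind="min-length"))

The minimum-length interval is shorter than the equal-tailed one at the same confidence level, which is the effect the chapter uses as motivation.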
The Neyman-Pearson lemma is a fundamental statistical result: it establishes the test with maximum power among all tests with a fixed type I error. In this chapter, we prove two extensions of this lemma, to be used later for demonstrating optimal properties of M-statistics, such as the superiority of the sufficient statistic and the minimum volume of the density level test.
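For orientation, the lemma's claim can be checked numerically in the simplest setting. The Monte Carlo sketch below is our own illustration, with hypothetical choices of n, alpha, and the alternative theta1: for a normal sample with unit variance we test H0: theta = 0 against H1: theta = theta1, where the likelihood-ratio (Neyman-Pearson) test rejects for a large sample mean, and we compare it with another test of the same size that uses only the first observation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha, theta1 = 10, 0.05, 0.5
z = stats.norm.ppf(1 - alpha)

def simulate_power(theta, reps=100_000):
    x = rng.normal(theta, 1.0, size=(reps, n))
    lr_reject = x.mean(axis=1) > z / np.sqrt(n)  # Neyman-Pearson (likelihood-ratio) test
    naive_reject = x[:, 0] > z                   # competing size-alpha test using only X1
    return lr_reject.mean(), naive_reject.mean()

print("size under H0 (theta = 0):   ", simulate_power(0.0))
print(f"power under H1 (theta = {theta1}):", simulate_power(theta1))

Both tests hold the nominal 5% type I error, but the likelihood-ratio test has markedly higher power, as the lemma guarantees.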
1.1 Limitations of classic statistics
1.1.1 Mean
A long time ago, several prominent statisticians pointed out the limitations of the mean as ...