Chapter 3
Mode statistics
This chapter presents an alternative to maximum concentration statistics in which the short confidence interval (CI) is replaced with the unbiased CI and the MC estimator is replaced with the MO estimator. We begin by introducing the unbiased test and the unbiased CI. Then the respective mode (MO) estimator is derived as the limit point of the unbiased CI as the confidence level approaches zero. Finally, within the framework of MO-statistics, we demonstrate the optimal property of the sufficient statistic through the concept of cumulative information, a generalization of classic Fisher information.
3.1 Unbiased test
The notion of the unbiased test was introduced by Neyman and Pearson many years ago. Unbiased tests are intuitively appealing because the probability of rejecting the null hypothesis is minimal at the null value (Casella and Berger 1990, Shao 2003, Lehmann and Romano 2005). Consequently, the probability of rejecting the hypothesis at a parameter value different from the null is greater than at the null value itself. Typically, equal-tail quantiles are used to guarantee the test size. We argue that with a special choice of quantiles we can make a test unbiased. This section suggests quantiles that set the derivative of the power function, evaluated at the null value, to zero, which yields a locally unbiased test. Sufficient conditions are offered that guarantee that the quantiles ...
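As a numerical illustration of this idea (our sketch, not taken from the book), consider the standard two-sided chi-square test for a normal variance, H0: sigma^2 = sigma0^2. With the statistic S = sum (x_i - xbar)^2 / sigma0^2 ~ chi^2 with df = n - 1 under the null, the power at theta = sigma^2/sigma0^2 is beta(theta) = 1 - [F(c2/theta) - F(c1/theta)]. The size condition is F(c2) - F(c1) = 1 - alpha, and setting beta'(1) = 0 gives the local-unbiasedness condition c1 f(c1) = c2 f(c2), where f is the chi-square density. The helper name `unbiased_quantiles` is ours; the equal-tail quantiles serve only as a starting point for the solver.

```python
import numpy as np
from scipy import stats, optimize


def unbiased_quantiles(df, alpha):
    """Quantiles (c1, c2) of a two-sided chi-square variance test that is
    locally unbiased: the test has size alpha and the derivative of its
    power function vanishes at the null value.

    Solves the system
        F(c2) - F(c1) = 1 - alpha      (size condition)
        c1*f(c1) = c2*f(c2)            (zero power derivative at the null)
    where F and f are the chi-square CDF and PDF with df degrees of freedom.
    """
    chi2 = stats.chi2(df)

    def equations(c):
        c1, c2 = c
        return [chi2.cdf(c2) - chi2.cdf(c1) - (1 - alpha),
                c1 * chi2.pdf(c1) - c2 * chi2.pdf(c2)]

    # Equal-tail quantiles make a natural starting point for the solver.
    start = [chi2.ppf(alpha / 2), chi2.ppf(1 - alpha / 2)]
    c1, c2 = optimize.fsolve(equations, start)
    return c1, c2


if __name__ == "__main__":
    c1, c2 = unbiased_quantiles(df=10, alpha=0.05)
    print(f"equal-tail: ({stats.chi2(10).ppf(0.025):.3f}, "
          f"{stats.chi2(10).ppf(0.975):.3f})")
    print(f"unbiased:   ({c1:.3f}, {c2:.3f})")
```

The unbiased quantiles differ from the equal-tail ones because the chi-square density is skewed; with a symmetric density the two choices would coincide.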