## The Bayes rule of the parameter in (0,1) under the power-log loss function with an application to the beta-binomial model

2017-06-27T03:56:02Z (GMT)

We propose the power-log loss function, plotted in Figure 1, for the restricted parameter space $\left(0,1\right)$; it satisfies all six properties listed in Table 1 for a good loss function on $\left(0,1\right)$. In particular, the power-log loss function penalizes gross overestimation and gross underestimation equally, is convex in its argument, and attains its global minimum at the true unknown parameter. The power-log loss function on $\left(0,1\right)$ is an analog of the power-log loss function on $\left(0,\infty\right)$, which is the well-known Stein's loss function. We then calculate the Bayes rule (estimator) of the parameter in $\left(0,1\right)$ under the power-log loss function, the posterior expected power-log loss (PEPLL) at the Bayes estimator, and the integrated risk under the power-log loss (IRPLL) at the Bayes estimator, which is also the Bayes risk under the power-log loss (BRPLL). We also calculate the usual Bayes estimator under the squared error loss, which we prove to be larger than the Bayes estimator under the power-log loss. Next, we analytically calculate the Bayes estimators and the PEPLLs at the Bayes estimators under a beta-binomial model. Finally, numerical simulations and a real data example of monthly magazine exposure data illustrate our theoretical results on two size relationships between the Bayes estimators and the PEPLLs.
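The beta-binomial setting above can be sketched numerically. In a minimal illustration, assume a $\mathrm{Beta}(a,b)$ prior on the parameter $p\in(0,1)$ and $x$ successes in $n$ Bernoulli trials, so the posterior is $\mathrm{Beta}(a+x,\,b+n-x)$; the squared-error Bayes estimator is then the posterior mean. The exact power-log Bayes rule is derived in the paper and is not reproduced here; as a stand-in analog of the Stein's-loss estimator we also compute $1/\mathrm{E}[1/p\mid x]$, which by Jensen's inequality is smaller than the posterior mean, mirroring the size relationship stated above. The prior and data values are hypothetical.

```python
# Sketch of Bayes estimation in the beta-binomial model (illustrative only;
# the paper's power-log Bayes rule is NOT reproduced here).
# Prior: p ~ Beta(a, b); data: x successes in n Bernoulli trials.
# Posterior: p | x ~ Beta(a + x, b + n - x)  (conjugate update).

def posterior_params(a, b, x, n):
    """Conjugate posterior parameters for the beta-binomial model."""
    return a + x, b + n - x

def bayes_sq_error(a, b, x, n):
    """Bayes estimator under squared error loss: the posterior mean."""
    a_post, b_post = posterior_params(a, b, x, n)
    return a_post / (a_post + b_post)

def inverse_posterior_mean(a, b, x, n):
    """Illustrative Stein's-loss-style estimator 1 / E[1/p | x].
    For a Beta(a', b') posterior with a' > 1, E[1/p] = (a'+b'-1)/(a'-1).
    NOTE: this is an analogy for comparison, not the paper's power-log rule."""
    a_post, b_post = posterior_params(a, b, x, n)
    assert a_post > 1, "E[1/p | x] is finite only when a + x > 1"
    return (a_post - 1) / (a_post + b_post - 1)

if __name__ == "__main__":
    a, b, x, n = 2.0, 2.0, 7, 10  # hypothetical prior and data
    delta_se = bayes_sq_error(a, b, x, n)       # posterior mean
    delta_inv = inverse_posterior_mean(a, b, x, n)
    # By Jensen's inequality, 1/E[1/p|x] < E[p|x]: the squared-error
    # estimator exceeds the Stein's-loss-style one, matching the abstract.
    print(delta_se, delta_inv)
```

With these numbers the posterior is $\mathrm{Beta}(9,5)$, the posterior mean is $9/14\approx 0.643$, and the inverse-posterior-mean estimator is $8/13\approx 0.615$, so the ordering claimed in the abstract holds in this example.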