## Example Problems on MLR and UMP

ST702 Homework 9 on MLR and UMP

# 8.28

Let $f(x | \theta)$ be the logistic location pdf

$f(x | \theta) = \frac{ e^{x - \theta} }{ (1+e^{x - \theta})^2 }, \ -\infty < x < \infty , \ -\infty < \theta < \infty$

## a

Show that this family has an MLR.

Take $\theta_1 > \theta_0$. Then,

$\frac{ f(x | \theta_1) }{ f(x | \theta_0) } = \frac{ \frac{ e^{x - \theta_1} }{ (1+e^{x - \theta_1})^2 }}{\frac{ e^{x - \theta_0} }{ (1+e^{x - \theta_0})^2 } }.$

Taking the derivative gives us

$\frac{ d }{ dx } \frac{ f(x | \theta_1) }{ f(x | \theta_0) } = \cosh\Big(\frac{ x - \theta_0 }{ 2 }\Big) \operatorname{sech}^3\Big(\frac{ x - \theta_1 }{ 2 }\Big) \sinh\Big(\frac{ \theta_1 - \theta_0 }{ 2 }\Big).$

The $\cosh$ and $\operatorname{sech}$ factors are always positive, and $\sinh\big(\frac{ \theta_1 - \theta_0 }{ 2 }\big) > 0$ when $\theta_1 > \theta_0$, so the derivative is positive and the ratio is increasing in $x$. Thus, we have MLR.
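
As a quick numerical spot-check (a sketch, assuming SymPy is available), we can evaluate the derivative of the ratio at several points and confirm it is positive:

```python
import sympy as sp

x, t0, t1 = sp.symbols("x t0 t1", real=True)
f = lambda th: sp.exp(x - th) / (1 + sp.exp(x - th))**2  # logistic location pdf
ratio = f(t1) / f(t0)
dratio = sp.diff(ratio, x)

# evaluate the derivative on a grid with theta1 = 1 > theta0 = 0
vals = [dratio.subs({t0: 0, t1: 1, x: xv}).evalf() for xv in (-5, -1, 0, 1, 5)]
print(vals)  # all positive, consistent with MLR
```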

## b

Based on one observation, $X$, find the most powerful size $\alpha$ test of $H_0: \theta = 0 \text{ vs. } H_1: \theta = 1$. For $\alpha = 0.2$, find the size of the Type II Error.

We want to reject $H_0$ if $\frac{ f(x | \theta = 1) }{ f(x | \theta = 0) } > k$. Since the family has an MLR, this is equivalent to rejecting when $x > k^*$ for some cutoff $k^*$.

\begin{align} \alpha & = P(X > k^* | H_0) \\ & = 1 - F(k^* | \theta = 0) \\ & = 1-\frac{ e^{k^* - 0} }{ 1 + e^{k^*-0} } \\ & = 1 - \frac{ e^{k^*} }{ 1 + e^{k^*} } \\ \\ \beta & = P( X < k^* | H_1)\\ & = F(k^* | \theta = 1) \\ & = \frac{ e^{k^* - 1} }{ 1 + e^{k^*-1} } \end{align}

Plugging in $\alpha = 0.2$ gives $k^* = \log 4 \approx 1.38629$ and then $\beta = 0.595389$.
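
These numbers are easy to reproduce with the standard library (a minimal check): inverting $\alpha = \frac{1}{1+e^{k^*}}$ gives $k^*$, and evaluating the logistic cdf with $\theta = 1$ at $k^*$ gives $\beta$.

```python
from math import exp, log

alpha = 0.2
# alpha = 1 - e^k/(1 + e^k) = 1/(1 + e^k)  =>  e^k = (1 - alpha)/alpha
kstar = log((1 - alpha) / alpha)
beta = exp(kstar - 1) / (1 + exp(kstar - 1))  # logistic cdf at k* with theta = 1
print(kstar, beta)  # 1.38629... 0.595389...
```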

## c

Show that the test in part (b) is UMP size $\alpha$ for testing $H_0: \theta \leq 0 \text{ vs. } H_1: \theta > 0$. What can be said about UMP tests in general for the logistic location family?

The test rejects for large values of $X$, which (with one observation) is a sufficient statistic, and the family has a non-decreasing monotone likelihood ratio in $x$. By the Karlin–Rubin theorem, the test is therefore UMP size $\alpha$ for $H_0: \theta \leq 0 \text{ vs. } H_1: \theta > 0$. More generally, for the logistic location family, any one-sided test that rejects for large (or small) values of $X$ is UMP of its size.

# 8.29

Let $X$ be one observation from a $Cauchy(\theta)$ distribution.

## a

Show that this family does not have an MLR.

Take $\theta_1 > \theta_0$. Then,

$\frac{ f(x | \theta_1) }{ f(x | \theta_0) } = \frac{ \frac{ 1 }{ \pi } \frac{ 1 }{ 1+(x - \theta_1)^2 } } {\frac{ 1 }{ \pi } \frac{ 1 }{ 1+(x - \theta_0)^2 }} = \frac{ 1 + (x - \theta_0)^2 }{ 1 + (x - \theta_1)^2 }$

Looking at the derivative in Mathematica (with $\theta_0 = 0$ and $\theta_1 = 1$) shows that there are two zeros.

```mathematica
Solve[D[(1 + (x - 0)^2)/(1 + (x - 1)^2), x] == 0, x] // FullSimplify
```


$x = \frac{1}{2} \left(1\pm\sqrt{5}\right)$

Notice that for $x < \frac{1}{2} \left(1-\sqrt{5}\right)$ the ratio decreases, for $\frac{1}{2} \left(1-\sqrt{5}\right) < x < \frac{1}{2} \left(1+\sqrt{5}\right)$ it increases, and for $x > \frac{1}{2} \left(1+\sqrt{5}\right)$ it decreases again. Thus, the ratio is not monotone on $(-\infty, \infty)$, and the family does not have an MLR.
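
A quick numerical check (standard library only) confirms that the ratio falls, rises, and then falls again around these critical points:

```python
from math import sqrt

# likelihood ratio for theta1 = 1, theta0 = 0
r = lambda x: (1 + x**2) / (1 + (x - 1)**2)
lo, hi = (1 - sqrt(5)) / 2, (1 + sqrt(5)) / 2

# decreasing before lo, increasing between lo and hi, decreasing after hi
print(r(lo - 1) > r(lo), r(lo) < r(hi), r(hi) > r(hi + 1))  # True True True
```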

## b

Show that the test

$\phi(x) = \begin{cases} 1 & \text{if } 1 < x< 3 \\ 0 & \text{otherwise} \end{cases}$

is most powerful of its size for testing $H_0: \theta =0 \text{ vs. } H_1: \theta = 1$. Calculate the Type I and Type II Error probabilities.

We want to reject when

$\frac{ f(x|1) }{ f(x|0) } > k.$

Notice,

\begin{align} \frac{ f(1|1) }{ f(1|0) } & = \frac{ 1 + (1 - 0)^2 }{ 1 + (1 - 1)^2 } \\ & = 2 \\ \frac{ f(3|1) }{ f(3|0) } & = \frac{ 1 + (3 - 0)^2 }{ 1 + (3 - 1)^2 } \\ & = 2. \end{align}

So take $k = 2$. The ratio satisfies $\frac{ 1 + x^2 }{ 1 + (x-1)^2 } > 2$ exactly when $(x-1)(x-3) < 0$, i.e., for $1 < x < 3$, which is the region where $\phi = 1$. By the Neyman–Pearson Lemma, $\phi$ is most powerful of its size, and we can find that size.

$P(1 < X < 3 | \theta = 0) = \int_{1}^{3} \frac{ 1 }{ \pi } \frac{ 1 }{ 1+(x-0)^2 } dx =\frac{\tan ^{-1}(3)}{\pi }-\frac{1}{4} = 0.147584.$

The type II error is

$1 - P(1 < X < 3 | \theta = 1) = 1 - \int_{1}^{3} \frac{ 1 }{ \pi } \frac{ 1 }{ 1+(x-1)^2 } dx = 1-\frac{\tan ^{-1}(2)}{\pi } = 0.647584$
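
Both error probabilities can be cross-checked against the Cauchy cdf (a sketch, assuming SciPy is available):

```python
from scipy.stats import cauchy

size = cauchy.cdf(3) - cauchy.cdf(1)                       # Type I error, theta = 0
type2 = 1 - (cauchy.cdf(3, loc=1) - cauchy.cdf(1, loc=1))  # Type II error, theta = 1
print(size, type2)  # 0.14758... 0.64758...
```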

## c

Prove or disprove: The test in part (b) is UMP for testing $H_0: \theta \leq 0 \text{ vs. } H_1: \theta > 0$. What can be said about UMP tests in general for the Cauchy location family?

The most powerful test depends on the particular alternative. The region $(1,3)$ worked for $\theta_1 = 1$ because the likelihood ratio $\frac{ f(x|1) }{ f(x|0) }$ equals 2 at both endpoints and exceeds 2 only inside. This does not hold for other values of $\theta_1$. Let's try $\theta_1 = 2$:

$\frac{ f(1|2) }{ f(1|0) } = \frac{ 1 + 1^2 }{ 1 + (1-2)^2 } = 1, \qquad \frac{ f(3|2) }{ f(3|0) } = \frac{ 1 + 3^2 }{ 1 + (3-2)^2 } = 5.$

Since the ratio takes different values at the two endpoints, $(1,3)$ is not a region of the form $\{x : \frac{ f(x|2) }{ f(x|0) } > k\}$, so the test in part (b) is not most powerful against $\theta_1 = 2$, and hence is not UMP for $H_1: \theta > 0$. In general, no UMP test exists for one-sided alternatives in the Cauchy location family.

# 8.33

Let $X_1, \dots , X_n$ be a random sample from the $uniform(\theta, \theta + 1)$ distribution. To test $H_0: \theta = 0 \text{ vs. } H_1: \theta>0$, use the test

$\text{reject } H_0 \text{ if } Y_n \geq 1 \text{ or } Y_1 \geq k,$

where $k$ is a constant, $Y_1 = \min\{ X_1, \dots , X_n \}, Y_n = \max \{ X_1, \dots , X_n \}$.

For this we will need the marginal distribution of the smallest order statistic as well as its joint density with the largest order statistic.

\begin{align} f_{Y_1}(y_1) & = n(1-(y_1-\theta))^{n-1} I(\theta < y_1 < \theta + 1) \\ f_{Y_1, Y_n}(y_1, y_n) & = n (n-1) (y_n - y_1)^{n-2} I(\theta < y_1 < y_n < \theta + 1) \end{align}

## a

Determine $k$ so that the test will have size $\alpha$.

Looking at the first condition, $P(Y_n \geq 1 | \theta = 0) = 0$, since $Y_n < \theta + 1 = 1$ with probability 1 under $H_0$. Looking at the second condition,

\begin{align} \alpha & = P(Y_1 \geq k | \theta = 0 ) \\ & = \int_{k}^1 n (1-y_1)^{n-1} dy_1 \\ & = (1-k)^n \\ k & = 1 - \alpha^{1/n} \end{align}

## b

Find an expression for the power function of the test in part (a).

If $\theta \leq k - 1$, then $Y_1 < Y_n < \theta + 1 \leq k \leq 1$ with probability 1, so neither rejection condition can occur and the power is 0. For $k - 1 < \theta \leq 0$, only the condition $Y_1 \geq k$ contributes,

$\beta(\theta) = \int_{k}^{\theta + 1} n (1-(y_1 - \theta))^{n-1} dy_1 = (1 - k + \theta)^n$

For $0 < \theta \leq k$ we will need both the marginal and the joint density: the first integral below is $P(Y_1 \geq k)$ and the second is $P(Y_1 < k, \ Y_n \geq 1)$.

$\beta(\theta) = \int_{k}^{\theta + 1} n (1-(y_1 - \theta))^{n-1} dy_1 + \int_{\theta}^k \int_1^{\theta + 1} n (n-1) (y_n - y_1)^{n-2} dy_n dy_1 = \alpha + 1 - (1-\theta)^n$

And finally for $k < \theta$, we get $\beta(\theta) = 1$.
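
The closed-form power function can be verified by simulation (a rough Monte Carlo sketch using only the standard library; `power_mc` is a helper written just for this check):

```python
import random

def power_mc(theta, n, k, reps=200_000):
    """Monte Carlo estimate of P(reject) when X_i ~ uniform(theta, theta + 1)."""
    rejections = 0
    for _ in range(reps):
        xs = [theta + random.random() for _ in range(n)]
        if max(xs) >= 1 or min(xs) >= k:
            rejections += 1
    return rejections / reps

alpha, n = 0.10, 5
k = 1 - alpha ** (1 / n)
print(power_mc(0.0, n, k))  # ~ alpha = 0.10
print(power_mc(0.2, n, k))  # ~ alpha + 1 - (1 - 0.2)**5 = 0.77232
```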

## c

Prove that the test is UMP size $\alpha$.

Since the pair $(Y_1, Y_n)$ is a sufficient statistic, we can apply the corollary to the Neyman-Pearson Lemma and look at the ratio of the joint pdfs.

$\frac{ f_{Y_1, Y_n}(y_1, y_n | \theta) }{ f_{Y_1, Y_n}(y_1, y_n | \theta = 0) } = \frac{ n (n-1) (y_n - y_1)^{n-2} I(\theta < y_1 < y_n < \theta + 1) }{ n (n-1) (y_n - y_1)^{n-2} I(0 < y_1 < y_n < 1) } = \frac{ I(\theta < y_1 < y_n < \theta + 1) }{ I(0 < y_1 < y_n < 1) }$

First consider $0 < \theta < 1$. Then,

$\frac{ f_{Y_1, Y_n}(y_1, y_n | \theta) }{ f_{Y_1, Y_n}(y_1, y_n | \theta = 0) } = \begin{cases} \frac{ 0 }{ 1 } = 0 & 0 < y_1 \leq \theta , \ y_1 < y_n < 1 \\ \frac{ 1 }{ 1 } = 1 & \theta < y_1 < y_n < 1 \\ \frac{ 1 }{ 0} = \infty & 1 \leq y_n < \theta+1, \ \theta < y_1 < y_n. \end{cases}$

Here, for alternatives $0 < \theta \leq k$, we can take the Neyman-Pearson constant to be $c = 1$ (writing $c$ to avoid a clash with the cutoff $k$ in the test). The test rejects whenever the ratio exceeds $c$ (there $y_n \geq 1$) and fails to reject whenever the ratio is below $c$ (there $y_1 \leq \theta \leq k$ and $y_n < 1$), leaving some wiggle room on the boundary where the ratio equals $c$.

Now we can examine the case where $\theta \geq 1$, so that the two supports are disjoint.

$\frac{ f_{Y_1, Y_n}(y_1, y_n | \theta) }{ f_{Y_1, Y_n}(y_1, y_n | \theta = 0) } = \begin{cases} \frac{ 0 }{ 1 } = 0 & y_1 < y_n < 1 \\ \frac{ 1 }{ 0} = \infty & \theta < y_1 < y_n < \theta + 1 \end{cases}$

Here, for $k < \theta$ (this also covers $k < \theta < 1$ using the previous case table), we can take $c = 0$: the test rejects whenever the ratio is positive, since $y_1 > \theta > k$ implies $Y_1 \geq k$, and the set where the ratio equals 0 is the boundary, which again leaves wiggle room. Since the same size-$\alpha$ test is most powerful against every alternative $\theta > 0$, it is UMP size $\alpha$.

## d

Find values of $n$ and $k$ so that the UMP 0.10 level will have power at least 0.8 if $\theta >1$.

We have $k = 1 - (0.10)^{1/n}$. Since $k < 1$, any $\theta > 1$ falls into the final case above, where the power is identically 1. The power requirement is therefore met automatically, and we can choose any value of $n$ and solve for $k$; for example, $n = 1$ gives $k = 0.9$.

# 8.41

Let $X_1, \dots , X_n$ be a random sample from $n(\mu_X, \sigma^2_X)$ and let $Y_1, \dots , Y_m$ be an independent random sample from a $n(\mu_Y, \sigma^2_Y)$. We are interested in testing

$H_0: \mu_X = \mu_Y \text{ vs. } H_1: \mu_X \neq \mu_Y$

with the assumption $\sigma_X^2= \sigma_Y^2 = \sigma^2$.

## a

Derive the LRT for these hypotheses. Show that the LRT can be based on the statistic

$T = \frac{ \overline{ X} - \overline{ Y } }{ \sqrt{ S_p^2 (\frac{ 1 }{ n } + \frac{ 1 }{ m }) } }$

where

$S_p^2 = \frac{ 1 }{ n+m-2 }\Big( \sum_{i=1}^n (X_i - \overline{ X })^2 + \sum_{i=1}^m (Y_i - \overline{ Y })^2 \Big)$

(This quantity $S_p^2$ is sometimes referred to as a pooled variance estimate.)

We will start by optimizing over $H_0$, the numerator of the LRT. Notice that here the $X$ and $Y$ distributions are actually the same, both $n(\mu_X, \sigma^2)$. So we have the same MLEs as we normally would for a single normal sample.

For the mean we have our sample mean

$\widehat \mu = \frac{ \sum_{i=1}^{n}x_i + \sum_{i=1}^m y_i}{ n+m } = \frac{ n \overline{ x } + m \overline{ y } }{ n+m }.$

And for the variance we have our sample variance

$\widehat \sigma_0^2 = \frac{ \sum_{i=1}^n (x_i - \widehat \mu)^2 + \sum_{i=1}^m (y_i - \widehat \mu)^2}{ n+m }.$

Now we can optimize over the whole space to find the unrestricted MLEs, the denominator of the LRT. Our likelihood looks like

$L(\mu_X, \mu_Y, \sigma^2 | x, y) = (2 \pi \sigma^2)^{-n/2} e^{-\frac{ \sum_{i=1}^n (x_i -\mu_x)^2 }{ 2 \sigma^2 }} \cdot (2 \pi \sigma^2)^{-m/2} e^{-\frac{ \sum_{i=1}^m (y_i -\mu_y)^2 }{ 2 \sigma^2 }} = (2 \pi \sigma^2)^{-(n+m)/2} e^{-\frac{ \sum_{i=1}^n (x_i -\mu_x)^2 + \sum_{i=1}^m (y_i -\mu_y)^2}{ 2 \sigma^2 }}$

Then

$\ell = -\frac{ m+n }{ 2 } \log(2 \pi \sigma^2) - \frac{ \sum_{i=1}^n (x_i -\mu_x)^2}{ 2 \sigma^2 } - \frac{ \sum_{i=1}^m (y_i -\mu_y)^2}{ 2 \sigma^2 }$

Notice that the $\mu_X$ and $\mu_Y$ terms separate; so, when we find the MLE for each of them, we will get the same result as if we found them individually. Thus, $\widehat \mu_X = \overline{ x }$ and $\widehat \mu_Y = \overline{ y }$. Now we can find $\widehat \sigma^2$.

\begin{align} \frac{ \partial \ell }{\partial \sigma^2} & = -\frac{ m + n }{ 2 \sigma^2 } + \frac{ 1 }{ 2(\sigma^2)^2 } \Big( \sum_{i=1}^n (x_i -\mu_x)^2 + \sum_{i=1}^m (y_i -\mu_y)^2 \Big) \\ \widehat \sigma^2 & = \frac{ \sum_{i=1}^n (x_i - \overline{ x })^2 + \sum_{i=1}^m (y_i - \overline{ y })^2 }{ m+n } \end{align}
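
As a numerical sanity check on these closed forms (a sketch, assuming NumPy and SciPy are available; `nll` is a helper defined only for this check), maximizing the likelihood directly recovers the same MLEs:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x, y = rng.normal(1.0, 2.0, size=8), rng.normal(1.5, 2.0, size=5)
n, m = len(x), len(y)

# negative log-likelihood in (mu_x, mu_y, log sigma^2)
def nll(p):
    mux, muy, logs2 = p
    s2 = np.exp(logs2)
    return 0.5 * ((n + m) * np.log(2 * np.pi * s2)
                  + (((x - mux)**2).sum() + ((y - muy)**2).sum()) / s2)

opt = minimize(nll, [0.0, 0.0, 0.0])
closed = (((x - x.mean())**2).sum() + ((y - y.mean())**2).sum()) / (n + m)
print(opt.x[0], x.mean())        # mu_x MLE matches xbar
print(np.exp(opt.x[2]), closed)  # sigma^2 MLE matches the closed form
```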

We can now construct our LRT.

\begin{align} \lambda(x,y) & = \frac{ (2 \pi \widehat \sigma_0^2)^{-(n+m)/2} e^{-\frac{ \sum_{i=1}^n (x_i - \widehat \mu)^2 + \sum_{i=1}^m (y_i -\widehat \mu)^2}{ 2 \widehat \sigma_0^2 }} }{ (2 \pi \widehat \sigma^2)^{-(n+m)/2} e^{-\frac{ \sum_{i=1}^n (x_i -\overline{ x })^2 + \sum_{i=1}^m (y_i -\overline{ y })^2}{ 2 \widehat \sigma^2 }} } \\ & = \Big( \frac{ \hat \sigma_0^2 }{ \widehat \sigma^2 }\Big)^{-\frac{ m+n }{ 2 }} \end{align}

where the exponentials cancel because each exponent evaluates to $-(n+m)/2$ at the corresponding MLEs. The LRT rejects for small values of $\lambda$, which is equivalent to rejecting when $\frac{ \widehat \sigma_0^2 }{ \widehat \sigma^2 } > k$.

Substituting $\widehat \mu$ into $\widehat \sigma_0^2$, and using $\sum (x_i - \widehat \mu)^2 = \sum (x_i - \overline{ x })^2 + n(\overline{ x } - \widehat \mu)^2$ (and similarly for $y$), yields

\begin{align} (n+m) \frac{ \hat \sigma_0^2 }{ \widehat \sigma^2 } & = \frac{ \sum (x_i - \overline{ x })^2 + \sum(y_i - \overline{ y })^2 + \frac{ nm }{ n+m } (\overline{ x } - \overline{ y })^2 }{ \widehat \sigma^2 } \\ & = n + m + \frac{ nm }{ n+m } \frac{ (\overline{ x } - \overline{ y })^2 }{ \widehat \sigma^2 }. \end{align}

Notice that $\widehat \sigma^2 = \frac{ n+m-2 }{ n+m } S^2_p$, so large values of this ratio are equivalent to large values of $\frac{ (\overline{ x } - \overline{ y })^2 }{ S^2_p }$, and thus large values of $|T|$.
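
The substitution step uses the decomposition $\sum (x_i - \widehat \mu)^2 = \sum (x_i - \overline{ x })^2 + n(\overline{ x } - \widehat \mu)^2$; the resulting identity is easy to check numerically (a sketch, assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=6), rng.normal(size=4)
n, m = len(x), len(y)

muhat = (n * x.mean() + m * y.mean()) / (n + m)  # pooled MLE of the common mean
lhs = ((x - muhat)**2).sum() + ((y - muhat)**2).sum()
rhs = (((x - x.mean())**2).sum() + ((y - y.mean())**2).sum()
       + n * m / (n + m) * (x.mean() - y.mean())**2)
print(np.isclose(lhs, rhs))  # True
```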

## b

Show that, under $H_0$, $T \sim t_{n+m-2}$. (This test is known as the two-sample t test.)

$T = \frac{ \overline{ X} - \overline{ Y } }{ \sqrt{ S_p^2 (\frac{ 1 }{ n } + \frac{ 1 }{ m }) } } = \frac{ \frac{ \overline{ X} - \overline{ Y } }{ \sigma \sqrt{ \frac{ 1 }{ n } + \frac{ 1 }{ m } } } }{ \sqrt{ \frac{ 1 }{ n+m-2 }\Big( \frac{ \sum_{i=1}^n (X_i - \overline{ X })^2 + \sum_{i=1}^m (Y_i - \overline{ Y })^2 }{ \sigma^2 } \Big) } }$

Under $H_0$ both $X$ and $Y$ follow the same normal distribution, so $\overline{ X } - \overline{ Y } \sim n(0, \sigma^2 (\frac{ 1 }{ n } + \frac{ 1 }{ m }))$, and the numerator above is standard normal. In the denominator, notice that $\sum_{i=1}^n (X_i - \overline{ X })^2 / \sigma^2 \sim \chi^2_{n-1}$ and $\sum_{i=1}^m (Y_i - \overline{ Y})^2 / \sigma^2 \sim \chi^2_{m-1}$. Since the samples are independent, adding these together gives a $\chi^2_{n+m-2}$ distribution. This $\chi^2$ variable is divided by its degrees of freedom and square-rooted, and it is independent of the numerator (sample means are independent of sample variances). This ratio is the definition of a $t_{n+m-2}$ distribution.
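
The distributional claim can also be checked by simulation (a sketch, assuming NumPy and SciPy are available): under $H_0$, empirical quantiles of $T$ should match the $t_{n+m-2}$ quantiles.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(0)
n, m, sigma, reps = 14, 9, 5.0, 50_000

# simulate both samples under H0: common mean 0, common variance sigma^2
x = rng.normal(0, sigma, size=(reps, n))
y = rng.normal(0, sigma, size=(reps, m))
xbar, ybar = x.mean(axis=1), y.mean(axis=1)
sp2 = (((x - xbar[:, None])**2).sum(axis=1)
       + ((y - ybar[:, None])**2).sum(axis=1)) / (n + m - 2)
T = (xbar - ybar) / np.sqrt(sp2 * (1/n + 1/m))

print(np.quantile(T, [0.05, 0.5, 0.95]))       # empirical quantiles
print(t.ppf([0.05, 0.5, 0.95], df=n + m - 2))  # theoretical t quantiles
```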

## c

Samples of wood were obtained from the core and periphery of a certain Byzantine church. The date of the wood was determined, giving the following data.

$\begin{array}{c c c c c} \text{Core} & & & \text{Periphery} & \\ \hline 1294 & 1251 & & 1284 & 1274 \\ 1279 & 1248 & & 1272 & 1264 \\ 1274 & 1240 & & 1256 & 1256 \\ 1264 & 1232 & & 1254 & 1250 \\ 1263 & 1220 & & 1242 & \\ 1254 & 1218 & & &\\ 1251 & 1210 & & & \end{array}$

Use the two-sample $t$ test to determine if the mean age of the core is the same as the mean age of the periphery.

Python code for this problem can be found here: https://github.com/JimmyJHickey/grad-scripts/blob/master/st702-statistical-theory-2/hw9-MLR-UMP/calc_t.py.

```python
from math import sqrt

# core and periphery dates
x = [1294, 1279, 1274, 1264, 1263, 1254, 1251, 1251, 1248, 1240, 1232, 1220, 1218, 1210]
n = len(x)

y = [1284, 1272, 1256, 1254, 1242, 1274, 1264, 1256, 1250]
m = len(y)

xbar = sum(x) / n
ybar = sum(y) / m

# sums of squared deviations from each sample mean
xmxbarsq = sum([(i - xbar)**2 for i in x])
ymybarsq = sum([(i - ybar)**2 for i in y])

# pooled variance estimate S_p^2 and the two-sample t statistic
sp = 1/(n+m-2) * (xmxbarsq + ymybarsq)
t = (xbar - ybar) / sqrt(sp * (1/n + 1/m))

print("xbar:\t" + str(xbar))
print("ybar:\t" + str(ybar))
print("sp:\t" + str(sp))
print("t:\t" + str(t))
```

```
xbar:	1249.857142857143
ybar:	1261.3333333333333
sp:	433.1292517006803
t:	-1.2906555249263398
```


Comparing this to a $t_{14 + 9 - 2} = t_{21}$ distribution in R gives a p-value of

```r
> 2*pt(-1.29, 21)
 0.211076
```

With a p-value of about 0.21, we do not have evidence to reject the null in favor of the alternative that the mean ages differ.
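
As a cross-check (assuming SciPy is available), `scipy.stats.ttest_ind` with pooled variances reproduces the same statistic and p-value when given the `x` and `y` lists from the script above:

```python
from scipy.stats import ttest_ind

# x and y as defined in the script above
print(ttest_ind(x, y, equal_var=True))  # statistic ~ -1.2907, pvalue ~ 0.211
```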