UA MATH566 Statistical Theory QE Practice Problems 1




Theory problems 4-6 from the January 2014 exam.


Problem 4

Part (a) The joint likelihood function of the random sample is
$$L(\theta) = \prod_{i=1}^n \theta^{-c} c X_i^{c-1} e^{-(X_i/\theta)^c} = \theta^{-nc} c^n \left(\prod_{i=1}^n X_i\right)^{c-1} e^{-\frac{1}{\theta^c}\sum_{i=1}^n X_i^c}$$
$$l(\theta) = \log L(\theta) = -nc \log \theta + n \log c + (c-1)\sum_{i=1}^n \log X_i - \frac{1}{\theta^c} \sum_{i=1}^n X_i^c$$

Take the derivative of the log-likelihood and set it to zero:
$$l'(\theta) = -\frac{nc}{\theta} + \frac{c}{\theta^{c+1}} \sum_{i=1}^n X_i^c = 0$$

Solving this equation gives the MLE of $\theta$:
$$\hat{\theta} = \left( \frac{1}{n}\sum_{i=1}^n X_i^c \right)^{1/c}$$
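As a quick numerical sanity check of this closed form, here is a minimal Python sketch; the values of `n`, `c`, and `theta` are hypothetical choices, and numpy's standard Weibull generator (scale 1) is rescaled by $\theta$ to match the density above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, theta = 10_000, 2.0, 3.0   # hypothetical sample size / known shape / true scale

# numpy draws from the standard Weibull f(x) = c x^{c-1} exp(-x^c);
# multiplying by theta gives f(x) = theta^{-c} c x^{c-1} exp(-(x/theta)^c)
x = theta * rng.weibull(c, size=n)

# closed-form MLE: theta_hat = (mean of X_i^c)^{1/c}
theta_hat = np.mean(x**c) ** (1 / c)
print(theta_hat)                 # should be close to theta = 3.0
```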

Part (b) Compute the density function of $X_1^c$:
$$P(X_1^c \le y) = P(X_1 \le y^{1/c}) = F_X(y^{1/c})$$
$$f_{X_1^c}(y) = \frac{1}{c}y^{1/c-1}\, \theta^{-c}c\,(y^{1/c})^{c-1}e^{-(y^{1/c}/\theta)^c} = \theta^{-c}e^{-\theta^{-c}y},\quad y>0$$
(The powers of $y$ cancel: $y^{1/c-1}\cdot y^{(c-1)/c} = y^0 = 1$.)

Obviously $X_1^c \sim \mathrm{EXP}(\theta^c)$, equivalently $\Gamma(1,\theta^{-c})$. By additivity of the Gamma distribution, $\sum_{i=1}^n X_i^c \sim \Gamma(n,\theta^{-c})$. By scale transformation, $\frac{1}{n}\sum_{i=1}^n X_i^c \sim \Gamma(n,n\theta^{-c})$. Let $Y = \frac{1}{n}\sum_{i=1}^n X_i^c$; then
$$EY^{1/c} = \int_{0}^{\infty} y^{1/c} \frac{(n\theta^{-c})^n y^{n-1}}{\Gamma(n)}e^{-n\theta^{-c}y}\,dy = \frac{(n\theta^{-c})^n }{\Gamma(n)} \int_{0}^{\infty} y^{1/c+n-1}e^{-n\theta^{-c}y}\,dy$$
$$= \frac{(n\theta^{-c})^{-1/c} }{\Gamma(n)} \int_{0}^{\infty} (n\theta^{-c}y)^{1/c+n-1}e^{-n\theta^{-c}y}\,d(n\theta^{-c}y) = \frac{(n\theta^{-c})^{-1/c}\, \Gamma(1/c+n)}{\Gamma(n)} = \theta\,\frac{n^{-1/c}\, \Gamma(1/c+n)}{\Gamma(n)}$$
$$\Rightarrow E \left( \frac{n^{1/c}\,\Gamma(n)}{\Gamma(1/c+n)}\, Y^{1/c} \right) = \theta$$

So we obtain an unbiased estimator of $\theta$. From the joint likelihood $L(\theta)$, by the Neyman-Fisher factorization theorem, $\sum_{i=1}^n X_i^c$ is a sufficient statistic. Moreover,
$$L(\theta) = c^n \left(\prod_{i=1}^n X_i\right)^{c-1}\exp\left( -\frac{1}{\theta^c}\sum_{i=1}^n X_i^c - nc \log \theta \right)$$

indicating that the model belongs to the exponential family, so $\sum_{i=1}^n X_i^c$ is also complete. Since the unbiased estimator above is a function of $\sum_{i=1}^n X_i^c$, by the Lehmann-Scheffé theorem, $\frac{n^{1/c}\,\Gamma(n)\,Y^{1/c}}{\Gamma(1/c+n)}$ is the UMVUE of $\theta$.
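The unbiasedness of the Gamma-corrected estimator can be verified by Monte Carlo; a minimal sketch with hypothetical settings (`n`, `c`, `theta`, number of replications), computing the correction factor on the log scale via `scipy.special.gammaln` for numerical stability:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)
n, c, theta, reps = 25, 2.0, 3.0, 200_000   # hypothetical settings

x = theta * rng.weibull(c, size=(reps, n))
y = np.mean(x**c, axis=1)                   # Y = (1/n) sum X_i^c per replication

# correction factor n^{1/c} Gamma(n) / Gamma(1/c + n), on the log scale
log_factor = np.log(n) / c + gammaln(n) - gammaln(n + 1 / c)
umvue = y ** (1 / c) * np.exp(log_factor)

print(np.mean(y ** (1 / c)))  # plain MLE: biased slightly below theta
print(np.mean(umvue))         # corrected estimator: close to theta = 3.0
```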

Part (c) For arbitrary $\theta_1 \le \theta_0 \le \theta_2$, compute the likelihood ratio
$$\frac{L(\theta_1)}{L(\theta_2)} = \frac{\theta_1^{-nc}c^n \left(\prod_{i=1}^n X_i\right)^{c-1}e^{-\frac{1}{\theta_1^c}\sum_{i=1}^n X_i^c}}{\theta_2^{-nc}c^n \left(\prod_{i=1}^n X_i\right)^{c-1}e^{-\frac{1}{\theta_2^c}\sum_{i=1}^n X_i^c}} = \left( \frac{\theta_2}{\theta_1} \right)^{nc} \exp \left( \left(\frac{1}{\theta_2^c} - \frac{1}{\theta_1^c}\right)\sum_{i=1}^n X_i^c \right)$$

where $1/\theta_2^c - 1/\theta_1^c < 0$, so the ratio is decreasing in $\sum_{i=1}^n X_i^c$. To make the likelihood ratio small, $\sum_{i=1}^n X_i^c$ must be large, so we reject when it exceeds a threshold chosen to satisfy
$$P\left(\sum_{i=1}^n X_i^c > \text{threshold}\right) = \alpha$$

We showed above that $\sum_{i=1}^n X_i^c \sim \Gamma(n,\theta^{-c})$, so under $\theta = \theta_0$, $2\sum_{i=1}^n X_i^c/\theta_0^c \sim \chi_{2n}^2$. Let $k_{\alpha}$ be the upper $\alpha$-quantile of $\chi_{2n}^2$. The rejection region is then $\{X_1,X_2,\cdots,X_n : 2\sum_{i=1}^n X_i^c/\theta_0^c > k_{\alpha}\}$.
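In practice the cutoff is just a chi-square quantile. A minimal sketch (the data here are hypothetical and generated under the null; `scipy.stats.chi2.ppf` supplies $k_\alpha$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
c, theta0, alpha = 2.0, 3.0, 0.05
x = theta0 * rng.weibull(c, size=30)       # hypothetical sample, generated under H0

test_stat = 2 * np.sum(x**c) / theta0**c   # ~ chi^2_{2n} when theta = theta0
k_alpha = stats.chi2.ppf(1 - alpha, df=2 * len(x))  # upper alpha-quantile of chi^2_{2n}

print(test_stat, k_alpha, test_stat > k_alpha)  # True means reject H0
```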

Problem 5

Part (a) The joint likelihood function of the random sample is
$$L(\theta) = \prod_{i=1}^n \theta^{-1}X_i^{(1-\theta)/\theta} = \theta^{-n}\exp \left( \frac{\theta-1}{2\theta} \left[-2\sum_{i=1}^n \log X_i\right]\right)$$

By the Neyman-Fisher factorization theorem, $T(\textbf{X}) = -2\sum_{i=1}^n \log X_i$ is a sufficient statistic. For two different samples $\textbf{X}$ and $\textbf{Y}$,
$$\frac{L(\theta|\textbf{X})}{L(\theta|\textbf{Y})} = \frac{\theta^{-n}\exp \left( \frac{\theta-1}{2\theta} \left[-2\sum_{i=1}^n \log X_i\right]\right)}{\theta^{-n}\exp \left( \frac{\theta-1}{2\theta} \left[-2\sum_{i=1}^n \log Y_i\right]\right)} = \exp \left( \frac{\theta-1}{2\theta} \left[2\sum_{i=1}^n \log Y_i-2\sum_{i=1}^n \log X_i\right]\right)$$

This ratio is free of $\theta$ if and only if $T(\textbf{X}) = T(\textbf{Y})$, so $T(\textbf{X})$ is a minimal sufficient statistic.

Part (b) For $Y = -2\log X_1$, compute
$$P(Y \le y) = P(-2\log X_1 \le y) = P(X_1 \ge e^{-y/2}) = 1 - F_X(e^{-y/2})$$
$$f_Y(y) = \frac{1}{2}e^{-y/2}f_X(e^{-y/2}) = \frac{1}{2}e^{-y/2}\, \theta^{-1}e^{-y(1-\theta)/2\theta} = \frac{1}{2\theta}e^{-y/2\theta},\quad y>0$$

So $Y \sim \mathrm{EXP}(2\theta)$, equivalently $\Gamma(1,1/2\theta)$.
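This transformation is easy to check by simulation. Since the CDF of $X_1$ is $F_X(x) = x^{1/\theta}$ on $(0,1)$, inverse-CDF sampling gives $X_1 = U^{\theta}$ for $U \sim \mathrm{Uniform}(0,1)$; a minimal sketch with a hypothetical $\theta$:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 0.7, 100_000                 # hypothetical parameter / sample size

# inverse-CDF sampling: F(x) = x^{1/theta} on (0,1), so X = U^theta
x = rng.uniform(size=n) ** theta
y = -2 * np.log(x)

print(np.mean(y), 2 * theta)            # EXP(2*theta) has mean 2*theta
print(np.var(y), (2 * theta) ** 2)      # ... and variance (2*theta)^2
```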

Part (c) By additivity of the Gamma distribution, $T(\textbf{X}) \sim \Gamma(n,1/2\theta)$. By scale transformation, $T(\textbf{X})/2\theta \sim \Gamma(n,1)$. Let $L$ be the 2.5%-quantile and $U$ the 97.5%-quantile of $\Gamma(n,1)$; then the 95% confidence interval follows from
$$L \le T(\textbf{X})/2\theta \le U \Rightarrow \frac{T(\textbf{X})}{2U}\le \theta \le \frac{T(\textbf{X})}{2L}$$
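The endpoints only require $\Gamma(n,1)$ quantiles; a minimal sketch (hypothetical $\theta$ and $n$, data simulated with the inverse-CDF sampler above, quantiles from `scipy.stats.gamma.ppf` with shape `a=n`):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
theta, n = 0.7, 50                       # hypothetical true parameter / sample size
x = rng.uniform(size=n) ** theta         # inverse-CDF sampler for f(x) = x^{(1-theta)/theta}/theta
t = -2 * np.sum(np.log(x))               # T(X) ~ Gamma(n, scale 2*theta)

L = stats.gamma.ppf(0.025, a=n)          # 2.5%-quantile of Gamma(n, 1)
U = stats.gamma.ppf(0.975, a=n)          # 97.5%-quantile of Gamma(n, 1)
print(t / (2 * U), t / (2 * L))          # 95% confidence interval for theta
```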

Part (d) Since $E\,T(\textbf{X}) = 2n\theta$, the expected length of the confidence interval is
$$E\left[T(\textbf{X})\left( \frac{1}{2L} - \frac{1}{2U} \right)\right] = \frac{n\theta(U-L)}{LU}$$

(A different derivation is also possible via the normal approximation to the chi-square distribution; see the UA MATH564 notes "Probability Theory VI, Foundations of Mathematical Statistics 3: Normal Approximation of the Chi-Square Distribution".)

Problem 6

Part (a) The joint likelihood of the random sample is
$$L(\beta) = \prod_{i=1}^n \frac{(x_i \beta)^{Y_i}e^{-x_i\beta}}{Y_i !} = \left( \prod_{i=1}^n \frac{x_i^{Y_i}}{Y_i !} \right) \beta^{n\bar{Y}}e^{-n\beta \bar{x}}$$
$$l(\beta) = \log \left( \prod_{i=1}^n \frac{x_i^{Y_i}}{Y_i !} \right) + n\bar{Y} \log \beta - n \bar{x} \beta$$

The MLE of $\beta$ is given by $\argmax l(\beta)$, i.e.
$$\frac{n\bar{Y}}{\beta} - n\bar{x} = 0\Rightarrow \hat{\beta} = \frac{\bar{Y}}{\bar{x}}$$
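A minimal simulation sketch of this closed form (the covariates $x_i$, true $\beta$, and sample size below are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 200, 1.5                          # hypothetical sample size / true slope
x = rng.uniform(0.5, 2.0, size=n)           # hypothetical positive covariates
y = rng.poisson(x * beta)                   # Y_i ~ Poisson(x_i * beta)

beta_hat = y.mean() / x.mean()              # closed-form MLE: Ybar / xbar
print(beta_hat)                             # should be close to beta = 1.5
```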

Part (b)
$$E\hat{\beta} = \frac{1}{\bar{x}} E\bar{Y} = \frac{1}{n\bar{x}} \sum_{i=1}^n EY_i = \frac{\beta \sum_{i=1}^n x_i }{n\bar{x}} = \beta$$
$$Var(\hat{\beta}) = \frac{1}{n^2\bar{x}^2} \sum_{i=1}^n Var\,Y_i = \frac{\beta \sum_{i=1}^n x_i}{n^2\bar{x}^2} = \frac{\beta}{n\bar{x}}$$

Part (c) The posterior kernel of $\beta$ is
$$\pi(\beta|\textbf{Y}) \propto L(\beta)\,\pi(\beta|w,b_0) \propto \beta^{n\bar{Y}}e^{-n\beta \bar{x}}\, \beta^{wb_0-1}e^{-w\beta} = \beta^{n\bar{Y}+wb_0-1}e^{-\beta(w+n\bar{x})}$$

This is the kernel of $\Gamma(n\bar{Y}+wb_0,\frac{1}{w+n\bar{x}})$, so the posterior density of $\beta$ is
$$\pi(\beta|\textbf{Y}) = \frac{(w+n\bar{x})^{n\bar{Y}+wb_0}}{\Gamma(n\bar{Y}+wb_0)}\beta^{n\bar{Y}+wb_0-1}e^{-\beta(w+n\bar{x})},\quad \beta>0$$

Part (d) The posterior mean of $\beta$ is
$$E[\beta|\textbf{Y}] = \frac{n\bar{Y}+wb_0}{w+n\bar{x}} = \frac{n\bar{x}\hat{\beta}}{w+n\bar{x}} +\frac{wb_0}{w+n\bar{x}}$$

which is a weighted average of $\hat{\beta}$ and $b_0$. As $w \to 0$, $E[\beta|\textbf{Y}] \to \hat{\beta}$.
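A short numerical illustration of this shrinkage (the prior weight $w$, prior mean $b_0$, and data-generating settings are hypothetical; the data are simulated as in Part (a)):

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta, w, b0 = 50, 1.5, 4.0, 1.0           # hypothetical data size, truth, prior weight, prior mean
x = rng.uniform(0.5, 2.0, size=n)
y = rng.poisson(x * beta)

# posterior is Gamma(shape = n*Ybar + w*b0, rate = w + n*xbar)
shape = n * y.mean() + w * b0
rate = w + n * x.mean()
post_mean = shape / rate

beta_hat = y.mean() / x.mean()
print(beta_hat, post_mean)                   # posterior mean lies between beta_hat and b0;
                                             # as w -> 0 it recovers the MLE
```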
