m59288 | For the normal distribution, μ_{2k} = σ^{2k} (2k)!/(k! 2^k) and μ_{2k+1} = 0, k = 0, 1, . . . . Use this result to analyze the two estimators, where m_k = (1/n) Σ_{i=1}^{n} (x_i − x̄)^k. The following result will be useful: Asy. Cov[√n m_j, √n m_k] = μ_{j+k} − μ_j μ_k + jk μ_2 μ_{j−1} μ_{k−1} − j μ_{j−1} μ_{k+1} − k μ_{k−1} μ_{j+1}. Use the delta method to obtain the asymptotic variances and covariance of these two functions, assuming the data are drawn from a normal distribution with mean μ and variance σ². Under the assumptions, the sample mean is a consistent estimator of μ, so for purposes of deriving asymptotic results, the difference between x̄ and μ may be ignored. As such, no generality is lost by assuming the mean is zero and proceeding from there. Obtain V, the 3 × 3 covariance matrix for the three moments, then use the delta method to show that the covariance matrix for the two estimators is JVJ′, where J is the 2 × 3 matrix of derivatives. |
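The even-moment formula can be checked numerically before it is used in the delta-method derivation: it gives μ_2 = σ², μ_4 = 3σ⁴, and μ_6 = 15σ⁶. A minimal Monte Carlo sanity check (the sample size, seed, and σ = 2 are arbitrary choices, not part of the exercise):

```python
import math
import numpy as np

# Check mu_{2k} = sigma^{2k} (2k)! / (k! 2^k) against simulated central moments.
rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)

moments = {}
for k in (1, 2, 3):
    theoretical = sigma ** (2 * k) * math.factorial(2 * k) / (math.factorial(k) * 2 ** k)
    empirical = float(np.mean(x ** (2 * k)))
    moments[2 * k] = (theoretical, empirical)
    print(2 * k, theoretical, empirical)
```

The printed pairs should agree to within sampling error, with the agreement loosening for the sixth moment, whose sampling variance is much larger.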
m59290 | For the simple regression model y_i = μ + ε_i, ε_i ~ N[0, σ²], prove that the sample mean is consistent and asymptotically normally distributed. Now consider the alternative estimator μ̂ = Σ_i w_i y_i, where w_i = i/(n(n+1)/2) = i/Σ_i i. Note that Σ_i w_i = 1. Prove that this is a consistent estimator of μ and obtain its asymptotic variance. |
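The weights w_i = i/Σ_i i imply Var[μ̂] = σ² Σ_i w_i², which works out to roughly 4σ²/(3n), i.e. a third larger than the sample mean's σ²/n. A quick simulation of this claim (n, μ, σ, and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma = 200, 5.0, 2.0
reps = 20_000

# w_i = i / sum(i); the weights sum to 1
w = np.arange(1, n + 1) / (n * (n + 1) / 2)

y = mu + rng.normal(0.0, sigma, size=(reps, n))
est = y @ w                                # the weighted estimator, once per replication

theo = sigma**2 * np.sum(w**2)             # exact variance: sigma^2 * sum(w_i^2)
approx = 4 * sigma**2 / (3 * n)            # large-n approximation
print(est.mean(), est.var(), theo, approx)
```

The simulated mean should sit near μ (consistency) while the simulated variance matches σ² Σ w_i², not the smaller σ²/n of the sample mean.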
m59334 | Given the data set, estimate a probit model and test the hypothesis that x is not influential in determining the probability that y equals one. |
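The exercise's data set is not reproduced here, so the sketch below uses hypothetical synthetic data purely to illustrate the mechanics: maximize the probit log-likelihood and form a Wald z-statistic for H0: the coefficient on x is zero. The inverse-Hessian estimate returned by BFGS stands in for the asymptotic covariance matrix.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data standing in for the exercise's (unshown) data set.
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = (0.5 + 1.0 * x + rng.normal(size=n) > 0).astype(float)
X = np.column_stack([np.ones(n), x])

def neg_loglike(beta):
    q = 2 * y - 1                          # recode y to +1/-1
    return -np.sum(norm.logcdf(q * (X @ beta)))

res = minimize(neg_loglike, np.zeros(2), method="BFGS")
beta_hat = res.x

# BFGS's inverse Hessian approximates the asymptotic covariance matrix
se = np.sqrt(np.diag(res.hess_inv))
z = beta_hat[1] / se[1]                    # Wald z-statistic for H0: slope = 0
print(beta_hat, se, z)
```

With the slope built into the simulated data, the z-statistic decisively rejects the null; on the exercise's actual data the conclusion could of course differ.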
m59634 | In random sampling from the exponential distribution f (x) = (1/θ)e−x/θ, x ≥ 0, θ > 0, find the maximum likelihood estimator of θ and obtain the asymptotic distribution of this estimator. |
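For this exponential parameterization the MLE is the sample mean, θ̂ = x̄, with asymptotic distribution √n(θ̂ − θ) → N(0, θ²). A small simulation of that result (θ, n, and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 2.5
n, reps = 100, 20_000

# For f(x) = (1/theta) exp(-x/theta), the MLE of theta is the sample mean.
x = rng.exponential(scale=theta, size=(reps, n))
theta_hat = x.mean(axis=1)

# Asymptotic variance theta^2 / n, so n * Var[theta_hat] should approach theta^2.
print(theta_hat.mean(), n * theta_hat.var(), theta**2)
```

The simulated mean of θ̂ sits at θ and n times its sampling variance matches θ², consistent with the asymptotic distribution.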
m59642 | In Solow's classic (1957) study of technical change in the U.S. economy, he suggests the following aggregate production function: q(t) = A(t) f [k(t)], where q(t) is aggregate output per work hour, k(t) is the aggregate capital-labor ratio, and A(t) is the technology index. Solow considered four static models: q/A = α + β ln k, q/A = α − β/k, ln(q/A) = α + β ln k, and ln(q/A) = α + β/k. Solow's data for the years 1909 to 1949 are listed in Appendix Table F7.2. Use these data to estimate the α and β of the four functions listed above. [Note: Your results will not quite match Solow's. See the next exercise for resolution of the discrepancy.] |
m59644 | In the aforementioned study, Solow states: A scatter of q/A against k is shown in Chart 4. Considering the amount of a priori doctoring which the raw figures have undergone, the fit is remarkably tight. Except, that is, for the layer of points which are obviously too high. These maverick observations relate to the seven last years of the period, 1943–1949. From the way they lie almost exactly parallel to the main scatter, one is tempted to conclude that in 1943 the aggregate production function simply shifted.
a. Compute a scatter diagram of q/A against k.
b. Estimate the four models you estimated in the previous problem including a dummy variable for the years 1943 to 1949. How do your results change? [Note: These results match those reported by Solow, although he did not report the coefficient on the dummy variable.]
c. Solow went on to surmise that, in fact, the data were fundamentally different in the years before 1943 than during and after. Use a Chow test to examine the difference in the two subperiods using your four functional forms. Note that with the dummy variable, you can do the test by introducing an interaction term between the dummy and whichever function of k appears in the regression. Use an F test to test the hypothesis. |
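The dummy-plus-interaction version of the Chow test in part c can be sketched as follows. Solow's Appendix F7.2 series are not reproduced here, so the code uses a hypothetical stand-in series with a built-in break; the F statistic compares the restricted model (no break) with the unrestricted one (dummy and interaction added).

```python
import numpy as np

def ols(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    return b, e @ e                                   # coefficients, sum of squared residuals

# Hypothetical stand-in for Solow's 1909-1949 data (not the actual F7.2 series).
rng = np.random.default_rng(4)
years = np.arange(1909, 1950)
T = len(years)
k = np.linspace(2.0, 3.5, T)
d = (years >= 1943).astype(float)                     # dummy for 1943-1949
q_over_A = 0.3 + 0.25 * np.log(k) + 0.1 * d + rng.normal(0, 0.02, T)

# Restricted: q/A = a + b ln k.  Unrestricted adds the dummy and d * ln k.
X_r = np.column_stack([np.ones(T), np.log(k)])
X_u = np.column_stack([X_r, d, d * np.log(k)])

_, ssr_r = ols(q_over_A, X_r)
_, ssr_u = ols(q_over_A, X_u)

J, df = 2, T - X_u.shape[1]
F = ((ssr_r - ssr_u) / J) / (ssr_u / df)              # Chow-type F statistic, F[2, T-4] under H0
print(F)
```

The same skeleton applies to each of the four functional forms: replace ln k with whichever function of k appears in the regression.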
m59648 | In the classical regression model with heteroscedasticity, which is more efficient, ordinary least squares or GMM? Obtain the two estimators and their respective asymptotic covariance matrices, then prove your assertion. |
m59649 | In the December 1969 American Economic Review (pp. 886–896), Nathaniel Leff reports the following least squares regression results for a cross-section study of the effect of age composition on savings in 74 countries in 1964:
ln(S/Y) = 7.3439 + 0.1596 ln(Y/N) + 0.0254 ln G − 1.3520 ln D1 − 0.3990 ln D2
ln(S/N) = 8.7851 + 1.1486 ln(Y/N) + 0.0265 ln G − 1.3438 ln D1 − 0.3966 ln D2
where S/Y = domestic savings ratio, S/N = per capita savings, Y/N = per capita income, D1 = percentage of the population under 15, D2 = percentage of the population over 64, and G = growth rate of per capita income. Are these results correct? Explain. |
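One consistency check a reader can run: since ln(S/N) = ln(S/Y) + ln(Y/N) identically, regressing ln(S/N) on the same regressors must reproduce the first equation's coefficients exactly, except that the ln(Y/N) slope rises by exactly one. The arithmetic below (assuming only this adding-up identity, not any particular resolution of the exercise) flags where the published numbers violate that implication:

```python
# Coefficients as reported in the two published equations.
first  = {"const": 7.3439, "lnYN": 0.1596, "lnG": 0.0254, "lnD1": -1.3520, "lnD2": -0.3990}
second = {"const": 8.7851, "lnYN": 1.1486, "lnG": 0.0265, "lnD1": -1.3438, "lnD2": -0.3966}

# Because ln(S/N) = ln(S/Y) + ln(Y/N), the second equation is implied by the first
# with the ln(Y/N) coefficient increased by exactly 1 and everything else unchanged.
implied = dict(first)
implied["lnYN"] += 1.0

discrepancies = {key: round(second[key] - implied[key], 4) for key in first}
print(discrepancies)   # nonzero entries flag internal inconsistency in the published results
```

Every entry should be zero if the two regressions came from the same data; the nonzero discrepancies, especially in the constant, are what the exercise is asking the reader to explain.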
m59650 | In the discussion of Harvey's model in Section 11.7, it is noted that the initial estimator of γ1, the constant term in the regression of ln e_i² on a constant and z_i, is inconsistent by the amount −1.2704. Harvey points out that if the purpose of this initial regression is only to obtain starting values for the iterations, then the correction is not necessary. Explain why this statement would be true. |
m59651 | In the discussion of the instrumental variables estimator, we showed that the least squares estimator b is biased and inconsistent. Nonetheless, b does estimate something: plim b = θ = β + Q⁻¹γ. Derive the asymptotic covariance matrix of b, and show that b is asymptotically normally distributed. |
m59655 | In the generalized regression model, suppose that Ω is known.
a. What is the covariance matrix of the OLS and GLS estimators of β?
b. What is the covariance matrix of the OLS residual vector e = y − Xb?
c. What is the covariance matrix of the GLS residual vector ε̂ = y − Xβ̂?
d. What is the covariance matrix of the OLS and GLS residual vectors? |
m59658 | In the panel data models estimated in Example 21.5.1, neither the logit nor the probit model provides a framework for applying a Hausman test to determine whether fixed or random effects is preferred. Explain. |
m59680 | Is the model stable? An updated version of Klein’s Model I was estimated. The relevant submatrix of ∆ is |
m59686 | It is commonly asserted that the Durbin–Watson statistic is only appropriate for testing for first-order autoregressive disturbances. What combination of the coefficients of the model is estimated by the Durbin–Watson statistic in each of the following cases: AR(1), AR(2), MA(1)? In each case, assume that the regression model does not contain a lagged dependent variable. Comment on the impact on your results of relaxing this assumption. |
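For an AR(1) disturbance, the Durbin–Watson statistic converges to 2(1 − ρ), i.e. it estimates a simple function of the first autocorrelation; the analogous limits for AR(2) and MA(1) involve their implied first autocorrelations. A quick check of the AR(1) case (ρ and the sample size are arbitrary):

```python
import numpy as np

def durbin_watson(e):
    # d = sum((e_t - e_{t-1})^2) / sum(e_t^2)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(6)
n, rho = 100_000, 0.6

# AR(1) disturbances: e_t = rho * e_{t-1} + u_t
u = rng.normal(size=n)
e = np.empty(n)
e[0] = u[0]
for t in range(1, n):
    e[t] = rho * e[t - 1] + u[t]

d = durbin_watson(e)
print(d, 2 * (1 - rho))   # d estimates approximately 2(1 - rho)
```

With a lagged dependent variable in the regression, the residual autocorrelation is biased toward zero and d is pushed toward 2, which is the complication the last sentence of the exercise points at.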
m59772 | Let ei be the ith residual in the ordinary least squares regression of y on X in the classical regression model, and let εi be the corresponding true disturbance. Prove that plim (ei − εi) = 0. |
m59930 | (Limited Information Maximum Likelihood Estimation) Consider a bivariate distribution for x and y that is a function of two parameters, α and β. The joint density is f (x, y | α, β). We consider maximum likelihood estimation of the two parameters. The full information maximum likelihood estimator is the now familiar maximum likelihood estimator of the two parameters. Now, suppose that we can factor the joint distribution as done in Exercise 3, but in this case, we have f (x, y | α, β) = f (y | x, α, β) f (x | α). That is, the conditional density for y is a function of both parameters, but the marginal distribution for x involves only α.
a. Write down the general form for the log likelihood function using the joint density.
b. Since the joint density equals the product of the conditional times the marginal, the log-likelihood function can be written equivalently in terms of the factored density. Write this down, in general terms.
c. The parameter α can be estimated by itself using only the data on x and the log likelihood formed using the marginal density for x. It can also be estimated with β by using the full log-likelihood function and data on both y and x. Show this.
d. Show that the first estimator in Part c has a larger asymptotic variance than the second one. This is the difference between a limited information maximum likelihood estimator and a full information maximum likelihood estimator.
e. |
m59932 | Linear transformations of the data. Consider the least squares regression of y on K variables (with a constant), X. Consider an alternative set of regressors Z = XP, where P is a nonsingular matrix. Thus, each column of Z is a mixture of some of the columns of X. Prove that the residual vectors in the regressions of y on X and y on Z are identical. What relevance does this have to the question of changing the fit of a regression by changing the units of measurement of the independent variables? |
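The claimed result is that X and Z = XP span the same column space when P is nonsingular, so the least squares projections, and hence the residuals, coincide. A numerical illustration with arbitrary dimensions and random data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, K = 60, 4

X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = rng.normal(size=n)

P = rng.normal(size=(K, K))
while abs(np.linalg.det(P)) < 1e-6:      # safeguard: keep P nonsingular
    P = rng.normal(size=(K, K))
Z = X @ P                                 # same column space as X

def residuals(y, M):
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    return y - M @ b

e_X = residuals(y, X)
e_Z = residuals(y, Z)
print(np.max(np.abs(e_X - e_Z)))          # ~0: the residual vectors are identical
```

Rescaling a regressor (changing its units) is the special case where P is diagonal, which is why the fit cannot be improved that way.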
m59967 | Mixture distribution. Suppose that the joint distribution of the two random variables x and y is |
a. Find the maximum likelihood estimators of β and θ and their asymptotic joint distribution.
b. Find the maximum likelihood estimator of θ/(β + θ) and its asymptotic distribution.
c. Prove that f (x) is of the form f (x) = γ (1 − γ)x, x = 0, 1, 2, . . . , and find the maximum likelihood estimator of γ and its asymptotic distribution.
d. Prove that f (y | x) is of the form Prove that f (y | x) integrates to 1. Find the maximum likelihood estimator of λ and its asymptotic distribution. [Hint: In the conditional distribution, just carry the xs along as constants.]
e. Prove that f (y) = θe−θy, y ≥ 0, θ>0. Find the maximum likelihood estimator of θ and its asymptotic variance.
f. Prove that Based on this distribution, what is the maximum likelihood estimator of β? |
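For part e, with f(y) = θe^{−θy}, the MLE is θ̂ = 1/ȳ and the asymptotic variance is θ²/n. A simulation of that claim (θ, n, and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
theta = 1.5
n, reps = 200, 20_000

# f(y) = theta * exp(-theta * y), so the MLE of theta is 1 / ybar.
y = rng.exponential(scale=1 / theta, size=(reps, n))
theta_hat = 1.0 / y.mean(axis=1)

# Asymptotic variance theta^2 / n, so n * Var[theta_hat] should approach theta^2.
print(theta_hat.mean(), n * theta_hat.var(), theta**2)
```

The simulated mean of θ̂ carries a small O(1/n) finite-sample bias above θ, but n times its variance matches θ², as the asymptotic theory predicts.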
m59981 | Now, assume only finite second moments of x, so that E[x_i²] is finite. Is this sufficient to establish consistency of b? (Hint: the Cauchy–Schwarz inequality E[|xy|] ≤ {E[x²]}^{1/2} {E[y²]}^{1/2} will be helpful.) Is this assumption sufficient to establish asymptotic normality? |
m59982 | Now suppose that the disturbances are not normally distributed, although Ω is still known. Show that the limiting distribution of the previous statistic is (1/J) times a chi-squared variable with J degrees of freedom. Conclude that in the generalized regression model, the limiting distribution of the Wald statistic W = (Rβ̂ − q)′{R(Est. Var[β̂])R′}⁻¹(Rβ̂ − q) is chi-squared with J degrees of freedom, regardless of the distribution of the disturbances, as long as the data are otherwise well behaved. Note that in a finite sample, the true distribution may be approximated with an F[J, n − K] distribution. It is a bit ambiguous, however, to interpret this fact as implying that the statistic is asymptotically distributed as F with J and n − K degrees of freedom, because the limiting distribution used to obtain our result is the chi-squared, not the F. In this instance, the F[J, n − K] is a random variable that tends asymptotically to the chi-squared variate. |
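The last point, that F[J, n − K] tends to χ²_J/J as the denominator degrees of freedom grow, can be seen directly by comparing critical values (J, K, and the quantile below are arbitrary illustrative choices):

```python
from scipy.stats import f, chi2

J, K = 3, 5
q = 0.95

crit_chi2 = chi2.ppf(q, J) / J            # chi-squared critical value, divided by J
for n in (30, 100, 1000, 100_000):
    crit_F = f.ppf(q, J, n - K)           # F critical value with growing denominator df
    print(n, crit_F, crit_chi2)           # crit_F decreases toward crit_chi2 as n grows
```

In small samples the F critical value exceeds the chi-squared limit, so using the F approximation is the more conservative choice.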
|