m59992 | Obtain the mean lag and the long- and short-run multipliers for the following distributed lag models:
a. y_t = 0.55(0.02x_t + 0.15x_{t−1} + 0.43x_{t−2} + 0.23x_{t−3} + 0.17x_{t−4}) + e_t.
b. The model in Exercise 5.
c. The model in Exercise 6. (Do for either x or z.)
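For part a the three quantities can be computed directly from the lag weights given in the exercise; a quick numerical sketch (the weights come from the model, the variable names are ours):

```python
import numpy as np

# Lag weights from part a: y_t = 0.55*(0.02 x_t + 0.15 x_{t-1} + 0.43 x_{t-2}
#                                      + 0.23 x_{t-3} + 0.17 x_{t-4}) + e_t
w = 0.55 * np.array([0.02, 0.15, 0.43, 0.23, 0.17])

short_run = w[0]        # impact multiplier: effect of x_t on y_t
long_run = w.sum()      # cumulative effect of a permanent unit change in x
mean_lag = (np.arange(len(w)) * w).sum() / w.sum()

print(short_run, long_run, mean_lag)  # ≈ 0.011, 0.55, 2.38
```

The inner weights sum to one, so the long-run multiplier is just the outer factor 0.55.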
m59993 | Obtain the reduced form for the model in Exercise 1 under each of the assumptions made in parts a and in parts b1 and b9.
m60014 | Partial Frisch and Waugh. In the least squares regression of y on a constant and X, to compute the regression coefficients on X we can first transform y to deviations from the mean of y and, likewise, transform each column of X to deviations from the respective column mean; second, regress the transformed y on the transformed X without a constant. Do we get the same result if we only transform y? What if we only transform X?
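A numerical illustration of the three cases (simulated data, our variable names). Demeaning only X also reproduces the slopes, because the demeaned columns of X are orthogonal to the constant; demeaning only y does not in general:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
y = 1.5 + X @ np.array([0.5, -1.0, 2.0]) + rng.normal(size=n)

# Full regression on a constant and X; keep the slope coefficients only
Z = np.column_stack([np.ones(n), X])
b_full = np.linalg.lstsq(Z, y, rcond=None)[0][1:]

Xd = X - X.mean(axis=0)   # columns of X in deviations from column means
yd = y - y.mean()         # y in deviations from its mean

b_both  = np.linalg.lstsq(Xd, yd, rcond=None)[0]   # demean both: same slopes
b_xonly = np.linalg.lstsq(Xd, y,  rcond=None)[0]   # demean X only: also same
b_yonly = np.linalg.lstsq(X,  yd, rcond=None)[0]   # demean y only: differs

print(np.allclose(b_full, b_both))    # True
print(np.allclose(b_full, b_xonly))   # True
print(np.allclose(b_full, b_yonly))   # False (in general)
```

The middle case works because Xd′y = Xd′yd: subtracting ȳ changes X′y only through the column means of Xd, which are zero.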
m60047 | Prove that
m60049 | Prove (21-28).
m60055 | Prove that E[b′b] = β′β + σ² Σ_{k=1}^{K} (1/λ_k), where b is the ordinary least squares estimator and λ_k is a characteristic root of X′X.
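The key algebraic step in the proof is that E[b′b] = β′β + σ² tr[(X′X)⁻¹], and the trace of the inverse equals the sum of the reciprocal characteristic roots. That identity can be checked numerically on an arbitrary design matrix (simulated here):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
XtX = X.T @ X

# tr[(X'X)^{-1}] = sum of reciprocals of the characteristic roots of X'X
lam = np.linalg.eigvalsh(XtX)          # eigenvalues of the symmetric matrix X'X
tr_inv = np.trace(np.linalg.inv(XtX))
recip_sum = np.sum(1.0 / lam)

print(np.isclose(tr_inv, recip_sum))   # True
```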
m60064 | Prove that in the model y_1 = X_1β_1 + ε_1, y_2 = X_2β_2 + ε_2, generalized least squares is equivalent to equation-by-equation ordinary least squares if X_1 = X_2. Does your result hold if it is also known that β_1 = β_2?
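The first claim can be seen numerically: with a common regressor matrix and an arbitrary (assumed) cross-equation covariance Σ, system GLS and equation-by-equation OLS agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 2
X = rng.normal(size=(n, k))          # common regressor matrix X1 = X2 = X
y1 = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y2 = X @ np.array([-0.5, 0.3]) + rng.normal(size=n)

# Stacked system: y = (I_2 ⊗ X)β with Var(ε) = Σ ⊗ I_n (Σ assumed known here)
Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])
Z = np.kron(np.eye(2), X)
y = np.concatenate([y1, y2])
Om_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
b_gls = np.linalg.solve(Z.T @ Om_inv @ Z, Z.T @ Om_inv @ y)

# Equation-by-equation ordinary least squares
b_ols = np.concatenate([np.linalg.lstsq(X, y1, rcond=None)[0],
                        np.linalg.lstsq(X, y2, rcond=None)[0]])

print(np.allclose(b_gls, b_ols))     # True
```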
m60083 | Prove that an underidentified equation cannot be estimated by 2SLS.
m60138 | Prove that the Hessian for the Tobit model in (22-14) is negative definite after Olsen’s transformation is applied to the parameters.
m60139 | Prove that the least squares intercept estimator in the classical regression model is the minimum variance linear unbiased estimator.
m60158 | Prove the result that the R² associated with a restricted least squares estimator is never larger than that associated with the unrestricted least squares estimator. Conclude that imposing restrictions never improves the fit of the regression.
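A simulation illustrating the result: imposing a linear restriction Rβ = q via the standard restricted least squares formula can only raise the sum of squared residuals, so the restricted R² is never larger. (The data, the restriction, and all names are our assumptions for the sketch.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.7, 0.4]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                      # unrestricted OLS

# Restricted LS for an assumed restriction Rβ = q (here: β1 + β2 = 1):
# b* = b − (X'X)⁻¹R'[R(X'X)⁻¹R']⁻¹(Rb − q)
R = np.array([[0.0, 1.0, 1.0]])
q = np.array([1.0])
b_star = b - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, R @ b - q)

def r2(coef):
    e = y - X @ coef
    yd = y - y.mean()
    return 1 - (e @ e) / (yd @ yd)

print(r2(b) >= r2(b_star))   # True: the restriction never improves the fit
```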
m60159 | Prove the result that the restricted least squares estimator never has a larger covariance matrix than the unrestricted least squares estimator.
m60165 | Prove that under the hypothesis that Rβ = q, the estimator, where J is the number of restrictions, is unbiased for σ².
m60184 | Referring to the situation in Question 2, one might think that an informative prior would outweigh the effect of the increasing sample size. With respect to the Bayesian analysis of the linear regression, analyze the way in which the likelihood and an informative prior will compete for dominance in the posterior mean. The following exercises require specific software. The relevant techniques are available in several packages that might be in use, such as SAS, Stata, or LIMDEP. The exercises are suggested as departure points for explorations using a few of the many estimation techniques listed in this chapter.
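A minimal conjugate-normal sketch of the competition in question (the prior, the error variance, and all numbers are our assumptions): the posterior mean is a precision-weighted average of the prior mean and the OLS estimate, and the data precision grows with n while the prior precision stays fixed.

```python
import numpy as np

rng = np.random.default_rng(7)
beta_true = 2.0

# Assumed informative prior for a single slope: β ~ N(b0, tau2)
b0, tau2 = 0.0, 0.25
sigma2 = 1.0                              # assumed known error variance

for n in (10, 100, 10_000):
    x = rng.normal(size=n)
    y = beta_true * x + rng.normal(0.0, np.sqrt(sigma2), size=n)
    b_ols = (x @ y) / (x @ x)
    prec_prior = 1.0 / tau2               # fixed prior precision
    prec_data = (x @ x) / sigma2          # data precision grows with n
    post_mean = (prec_prior * b0 + prec_data * b_ols) / (prec_prior + prec_data)
    print(n, round(post_mean, 3))         # drifts from b0 toward b_ols as n grows
```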
m60186 | Regression without a constant. Suppose that you estimate a multiple regression first with and then without a constant. Whether the R² is higher in the second case than the first will depend in part on how it is computed. Using the (relatively) standard method R² = 1 − (e′e/y′M⁰y), which regression will have a higher R²?
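A simulated check (our data and names): the fit with the constant minimizes e′e over a larger column space, so with R² = 1 − e′e/(y′M⁰y) it can never come out lower; without the constant this R² can even be negative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = 3.0 + 0.5 * x + rng.normal(size=n)   # nonzero intercept in the DGP

def r2(Xmat):
    e = y - Xmat @ np.linalg.lstsq(Xmat, y, rcond=None)[0]
    yd = y - y.mean()
    return 1 - (e @ e) / (yd @ yd)       # R² = 1 − e'e / y'M⁰y in both cases

r2_with = r2(np.column_stack([np.ones(n), x]))   # constant included
r2_without = r2(x[:, None])                      # constant omitted

print(r2_with >= r2_without)             # True
```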
m60187 | Repeat Exercise 10 for the cross-sectionally correlated model of Section 13.9.1.
m60193 | Reverse regression continued. This and the next exercise continue the analysis of Exercise 4. In Exercise 4, interest centered on a particular dummy variable in which the regressors were accurately measured; here we consider the case in which the crucial regressor in the model is measured with error. The paper by Kamlich and Polachek (1982) is directed toward this issue. Consider the simple errors-in-variables model, y = α + βx∗ + ε, x = x∗ + u, where u and ε are uncorrelated and x is the erroneously measured, observed counterpart to x∗.
a. Assume that x∗, u, and ε are all normally distributed with means μ∗, 0, and 0, variances σ²∗, σ²_u, and σ²_ε, and zero covariances. Obtain the probability limits of the least squares estimators of α and β.
b. As an alternative, consider regressing x on a constant and y, and then computing the reciprocal of the estimate. Obtain the probability limit of this estimator.
c. Do the “direct” and “reverse” estimators bound the true coefficient?
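A simulation illustrating part c for a positive slope (all parameter values are our assumptions): the direct estimator is attenuated toward zero, the inverted reverse estimator is biased away from zero, and the two bracket the true β.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
beta, alpha = 2.0, 1.0
sig_star, sig_u, sig_e = 1.0, 0.7, 0.5   # assumed σ∗, σ_u, σ_ε for the sketch

x_star = rng.normal(0.5, sig_star, n)
x = x_star + rng.normal(0.0, sig_u, n)   # erroneously measured regressor
y = alpha + beta * x_star + rng.normal(0.0, sig_e, n)

# Direct: slope of y on x -> plim = β σ²∗/(σ²∗ + σ²_u), attenuated toward 0
b_direct = np.cov(x, y)[0, 1] / np.var(x)
# Reverse: slope of x on y, then invert -> plim = β + σ²_ε/(β σ²∗), too large
d = np.cov(x, y)[0, 1] / np.var(y)
b_reverse = 1.0 / d

print(b_direct < beta < b_reverse)       # True: the estimators bracket β
```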
m60194 | Reverse regression continued. Suppose that the model in Exercise 5 is extended to y = βx∗ + γd + ε, x = x∗ + u. For convenience, we drop the constant term. Assume that x∗, ε, and u are independent and normally distributed with zero means. Suppose that d is a random variable that takes the values one and zero with probabilities π and 1 − π in the population and is independent of all other variables in the model. To put this formulation in context, the preceding model (and variants of it) has appeared in the literature on discrimination. We view y as a “wage” variable, x∗ as “qualifications,” and x as some imperfect measure such as education. The dummy variable d is membership (d = 1) or nonmembership (d = 0) in some protected class. The hypothesis of discrimination turns on γ < 0 versus γ ≥ 0.
a. What is the probability limit of c, the least squares estimator of γ, in the least squares regression of y on x and d? Now suppose that x∗ and d are not independent. In particular, suppose that E[x∗ | d = 1] = μ_1 and E[x∗ | d = 0] = μ_0. Repeat the derivation with this assumption.
b. Consider, instead, a regression of x on y and d. What is the probability limit of the coefficient on d in this regression? Assume that x∗ and d are independent.
c. Suppose that x∗ and d are not independent, but γ is, in fact, less than zero. Assuming that both preceding equations
m60195 | Reverse regression. A common method of analyzing statistical data to detect discrimination in the workplace is to fit the regression y = α + x′β + γd + ε, (1) where y is the wage rate and d is a dummy variable indicating either membership (d = 1) or nonmembership (d = 0) in the class toward which it is suggested the discrimination is directed. The regressors x include factors specific to the particular type of job as well as indicators of the qualifications of the individual. The hypothesis of interest is H0: γ ≥ 0 versus H1: γ < 0. The regression seeks to answer the question, “In a given job, are individuals in the class (d = 1) paid less than equally qualified individuals not in the class (d = 0)?” Consider an alternative approach. Do individuals in the class in the same job as others, and receiving the same wage, uniformly have higher qualifications? If so, this might also be viewed as a form of discrimination. To analyze this question, Conway and Roberts (1983) suggested the following procedure:
1. Fit (1) by ordinary least squares. Denote the estimates a, b, and c.
2. Compute the set of qualification indices, q = ai + Xb, (2) where i is a column of ones. Note the omission of cd from the fitted value.
R² = coefficient of determination for (1),
r²_yd = squared correlation between y and d.
b. Will the sample evidence necessarily be consistent with the theory? A symposium on the Conway and Roberts paper appeared in the Journal of Business and Economic Statistics.
m60208 | A sample of 100 observations produces the following sample data: The underlying bivariate regression model is y_1 = μ + ε_1, y_2 = μ + ε_2.
a. Compute the OLS estimate of μ, and estimate the sampling variance of this estimator.
b. Compute the FGLS estimate of μ and the sampling variance of the estimator.
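The sample moments referred to in the exercise are not reproduced here, so the following sketch uses simulated data (the covariance matrix and all names are our assumptions). It shows the mechanics of both parts: OLS pools the 2n observations and ignores the covariance structure; FGLS estimates Σ from the OLS residuals and then weights the two equation means by Σ⁻¹.

```python
import numpy as np

rng = np.random.default_rng(6)
n, mu = 100, 1.0
# Simulated stand-in for the missing sample data: correlated disturbances
cov = np.array([[1.0, 0.5], [0.5, 3.0]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y1, y2 = mu + eps[:, 0], mu + eps[:, 1]

# a. OLS: the grand mean of all 2n stacked observations
mu_ols = np.concatenate([y1, y2]).mean()

# b. FGLS: estimate Σ from the OLS residuals, then weight by Σ^{-1}
e1, e2 = y1 - mu_ols, y2 - mu_ols
S = np.cov(np.vstack([e1, e2]))          # estimated 2x2 disturbance covariance
w = np.linalg.solve(S, np.ones(2))       # Σ^{-1} ι
mu_fgls = (w[0] * y1.mean() + w[1] * y2.mean()) / w.sum()
var_fgls = 1.0 / (n * w.sum())           # estimated sampling variance

print(mu_ols, mu_fgls, var_fgls)
```

The FGLS estimator is a precision-weighted average of the two equation means, so it downweights the noisier equation.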