№ | Condition | free/or 0.5$ |
m58772 | Derive the marginal effects for the Tobit model with heteroscedasticity that is described in Section 22.3.4.a. |
buy |
m58774 | Describe how to estimate the parameters of the model where εt is a serially uncorrelated, homoscedastic, classical disturbance. |
buy |
m58775 | Describe how to obtain nonlinear least squares estimates of the parameters of the model y = αxβ + ε. |
buy |
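The nonlinear least squares estimates for y = αxβ + ε can be obtained with a Gauss-Newton iteration. A minimal sketch on synthetic data (the data, true parameter values α = 2, β = 1.5, and starting values are all illustrative, not from the text):

```python
import numpy as np

# Illustrative data: noise is omitted so the iteration recovers alpha, beta exactly.
x = np.linspace(1.0, 5.0, 50)
y = 2.0 * x**1.5                     # assumed true values: alpha = 2, beta = 1.5

# Gauss-Newton: linearize f(x; a, b) = a * x**b around the current estimates.
a, b = 1.8, 1.4                      # starting values (illustrative)
for _ in range(100):
    r = y - a * x**b                 # current residuals
    # Jacobian columns: df/d(alpha) = x**b, df/d(beta) = a * x**b * log(x)
    J = np.column_stack([x**b, a * x**b * np.log(x)])
    step = np.linalg.solve(J.T @ J, J.T @ r)
    a, b = a + step[0], b + step[1]
    if np.max(np.abs(step)) < 1e-10:
        break

print(round(a, 6), round(b, 6))      # converges to alpha = 2, beta = 1.5
```

With real data one would add noise, try several starting values, and estimate the asymptotic covariance matrix from s²(J'J)⁻¹ at the converged estimates.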
m58808 | Does first differencing reduce autocorrelation? Consider the models yt = βxt + εt, where εt = ρεt−1 + ut and εt = ut − λut−1. Compare the autocorrelation of εt in the original model with that of vt in yt − yt−1 = β(xt − xt−1) + vt, where vt = εt − εt−1. |
buy |
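For the AR(1) case the answer is analytic: with γk = ρᵏγ0, the differenced error vt = εt − εt−1 has var(v) = 2γ0(1 − ρ) and cov(vt, vt−1) = −γ0(1 − ρ)², so corr(vt, vt−1) = −(1 − ρ)/2. A sketch that checks this by simulation (ρ = 0.8 and the sample size are illustrative):

```python
import numpy as np

rho = 0.8
# Analytic first-order autocorrelation of v_t = eps_t - eps_{t-1}
# when eps_t follows an AR(1): corr(v_t, v_{t-1}) = -(1 - rho)/2.
corr_v = -(1 - rho) / 2.0            # = -0.1 when rho = 0.8

# Simulation check on a long AR(1) path
rng = np.random.default_rng(2)
n = 100_000
u = rng.normal(size=n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + u[t]

v = np.diff(eps)                     # first-differenced disturbance
sample = np.corrcoef(v[1:], v[:-1])[0, 1]
print(corr_v, round(sample, 3))      # sample value close to -0.1
```

So differencing replaces an autocorrelation of 0.8 with one of −0.1: smaller in magnitude, but now negative rather than zero.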
m58846 | Estimate the parameters of the model in Example 15.1 using two-stage least squares. Obtain the residuals from the two equations. Do these residuals appear to be white noise series? Based on your findings, what do you conclude about the specification of the model? |
buy |
m58876 | This exercise is based on the following data set.
a. Compute the ordinary least squares regression of Y on a constant, X1, and X2. Be sure to compute the conventional estimator of the asymptotic covariance matrix of the OLS estimator as well.
b. Compute the White estimator of the appropriate asymptotic covariance matrix for the OLS estimates.
c. Test for the presence of heteroscedasticity using White’s general test. Do your results suggest the nature of the heteroscedasticity?
d. Use the Breusch–Pagan Lagrange multiplier test to test for heteroscedasticity.
e. Sort the data keying on X1 and use the Goldfeld–Quandt test to test for heteroscedasticity. Repeat the procedure, using X2. What do you find? |
buy |
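Parts a, b, and d can be sketched in a few lines of numpy. The data below are a synthetic stand-in for the exercise's data set (the real Y, X1, X2 are in the text), with heteroscedasticity built in through X1:

```python
import numpy as np

# Synthetic stand-in for the exercise's data set (illustrative, not from the text)
rng = np.random.default_rng(1)
n = 200
x1 = rng.uniform(0.0, 2.0, n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
sigma = 0.5 + x1**2                          # variance rises with X1
y = 1.0 + 2.0 * x1 - x2 + sigma * rng.normal(size=n)

# a. OLS and the conventional covariance estimator s^2 (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
s2 = e @ e / (n - X.shape[1])
V_conv = s2 * XtX_inv

# b. White (HC0) estimator: (X'X)^{-1} [sum e_i^2 x_i x_i'] (X'X)^{-1}
V_white = XtX_inv @ (X.T * e**2) @ X @ XtX_inv

# d. Breusch-Pagan LM test (Koenker n*R^2 form): regress e^2 on the regressors
g = e**2
c = np.linalg.lstsq(X, g, rcond=None)[0]
r2 = 1.0 - np.sum((g - X @ c)**2) / np.sum((g - g.mean())**2)
lm = n * r2                                  # compare with chi-squared(2), 5%: 5.99
print(b, np.sqrt(np.diag(V_white)), lm)
```

White's general test (part c) uses the same n·R² device with squares and cross-products of the regressors added to the auxiliary regression; Goldfeld–Quandt (part e) compares residual variances from the sorted subsamples.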
m58877 | Expand the rational lag model yt = [(0.6 + 2L)/(1 − 0.6L + 0.5L2)]xt + et. What are the coefficients on xt, xt−1, xt−2, xt−3, and xt−4? |
buy |
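The expansion can be checked by long division of the lag polynomials: the coefficients cj satisfy the recursion cj = numj + 0.6·cj−1 − 0.5·cj−2. A short sketch recovering the first five:

```python
import numpy as np

num = [0.6, 2.0]             # numerator 0.6 + 2L
den = [1.0, -0.6, 0.5]       # denominator 1 - 0.6L + 0.5L^2

K = 5                        # coefficients on x_t, ..., x_{t-4}
c = np.zeros(K)
for j in range(K):
    cj = num[j] if j < len(num) else 0.0
    for i in range(1, min(j, len(den) - 1) + 1):
        cj -= den[i] * c[j - i]          # long-division recursion
    c[j] = cj

print(c.round(5).tolist())   # [0.6, 2.36, 1.116, -0.5104, -0.86424]
```

So the coefficients on xt through xt−4 are 0.6, 2.36, 1.116, −0.5104, and −0.86424; the complex roots of the denominator produce the damped oscillation.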
m58881 | Explain how to estimate the parameters of the following model: yt = α + βxt + γ yt−1 + δyt−2 + et, et = ρet−1 + ut. Is there any problem with ordinary least squares? Let yt be consumption and let xt be disposable income. Using the method you have described, fit the previous model to the data in Appendix Table F5.1. Report your results. |
buy |
m58896 | Exponential Families of Distributions. For each of the following distributions, determine whether it is an exponential family by examining the log-likelihood function. Then identify the sufficient statistics.
a. Normal distribution with mean μ and variance σ2.
b. The Weibull distribution in Exercise 4 in Chapter 17.
c. The mixture distribution in Exercise 3 in Chapter 17. |
buy |
m58915 | Finally, suppose that Ω must be estimated, but that assumptions (10-27) and (10-31) are met by the estimator. What changes are required in the development of the previous problem? |
buy |
m59041 | Find the autocorrelations and partial autocorrelations for the MA(2) process εt = vt − θ1vt−1 − θ2vt−2. |
buy |
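The theoretical autocorrelations are ρ1 = (−θ1 + θ1θ2)/(1 + θ1² + θ2²), ρ2 = −θ2/(1 + θ1² + θ2²), and ρk = 0 for k > 2; the partial autocorrelations follow from the Durbin–Levinson recursion and do not cut off. A sketch with illustrative parameter values (the exercise keeps θ1, θ2 symbolic):

```python
import numpy as np

theta1, theta2 = 0.5, 0.3    # illustrative values, not from the text

# Autocorrelations of eps_t = v_t - theta1*v_{t-1} - theta2*v_{t-2}
g0 = 1 + theta1**2 + theta2**2
rho = np.zeros(6)
rho[0] = 1.0
rho[1] = (-theta1 + theta1 * theta2) / g0
rho[2] = -theta2 / g0
# rho[k] = 0 for k > 2: the MA(2) autocorrelations cut off after two lags

def pacf(rho, kmax):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    phi = np.zeros((kmax + 1, kmax + 1))
    out = np.zeros(kmax + 1)
    out[1] = phi[1, 1] = rho[1]
    for k in range(2, kmax + 1):
        num = rho[k] - sum(phi[k - 1, j] * rho[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1, j] * rho[j] for j in range(1, k))
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        out[k] = phi[k, k]
    return out

p = pacf(rho, 5)
print(rho[1], rho[2], p[1:4])
```

For these values ρ1 ≈ −0.261 and ρ2 ≈ −0.224, while the PACF decays gradually rather than cutting off.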
m59263 | For random sampling from the classical regression model in (17-3), reparameterize the likelihood function in terms of η = 1/σ and δ = (1/σ)β. Find the maximum likelihood estimators of η and δ and obtain the asymptotic covariance matrix of the estimators of these parameters. |
buy |
m59264 | For the classical normal regression model y = Xβ + ε with no constant term and K regressors, what is plim F[K, n − K] = plim (R2/K)/[(1 − R2)/(n − K)], assuming that the true value of β is zero? |
buy |
m59265 | For the classical normal regression model y = Xβ + ε with no constant term and K regressors, assuming that the true value of β is zero, what is the exact expected value of F[K, n − K] = (R2/K)/[(1 − R2)/(n − K)]? |
buy |
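Since F[K, n − K] has an exact F distribution when β = 0, the expected value is (n − K)/(n − K − 2). A Monte Carlo sketch confirming this (n, K, the regressors, and the replication count are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, reps = 50, 3, 100_000
X = rng.normal(size=(n, K))
P = X @ np.linalg.inv(X.T @ X) @ X.T        # projection onto the column space of X

Y = rng.normal(size=(n, reps))              # true beta = 0, so y is pure noise
Yhat = P @ Y
# With no constant, R^2 = y_hat'y_hat / y'y, so
# F = (R^2/K)/[(1-R^2)/(n-K)] = (y_hat'y_hat / K) / (e'e / (n-K))
F = ((Yhat**2).sum(axis=0) / K) / (((Y - Yhat)**2).sum(axis=0) / (n - K))

print(F.mean(), (n - K) / (n - K - 2))      # both close to 47/45, about 1.044
```

The same setup answers the previous (plim) question: as n grows with β = 0, the F statistic converges in probability to 1, not 0.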
m59282 | For the model in (5-25) and (5-26), prove that when only x* is measured with error, the squared correlation between y and x is less than that between y* and x*. (Note the assumption that y* = y.) Does the same hold true if y* is also measured with error? |
buy |
m59283 | For the model in Exercise 1, suppose that ε is normally distributed, with mean zero and variance σ2 [1 + (γ x)2]. Show that σ2 and γ2 can be consistently estimated by a regression of the least squares residuals on a constant and x2. Is this estimator efficient? |
buy |
m59284 | For the model in Exercise 3, test the hypothesis that λ = 0 using a Wald test, a likelihood ratio test, and a Lagrange multiplier test. Note that the restricted model is the Cobb–Douglas log-linear model. |
buy |
m59285 | For the model in the previous exercise, what is the probability limit of s2 = (1/n) Σi=1n (yi − ȳ)2? Note that s2 is the least squares estimator of the residual variance. It is also n times the conventional estimator of the variance of the OLS estimator. How does this value compare with the true value you found in part b of Exercise 1? Does the conventional estimator produce the correct estimate of the true asymptotic variance of the least squares estimator? |
buy |
m59286 | For the model y1 = α1 + βx + ε1, y2 = α2 + ε2, y3 = α3 + ε3, assume that yi2 + yi3 = 1 at every observation. Prove that the sample covariance matrix of the least squares residuals from the three equations will be singular, thereby precluding computation of the FGLS estimator. How could you proceed in this case? |
buy |
m59287 | For the model y1 = γ1y2 + β11x1 + β21x2 + ε1, y2 = γ2y1 + β32x3 + β42x4 + ε2, show that there are two restrictions on the reduced-form coefficients. Describe a procedure for estimating the model while incorporating the restrictions. |
buy |