№ | Condition | free/or 0.5$
m60210 | Section 14.3.1 presents estimates of a Cobb–Douglas cost function using Nerlove’s 1955 data on the U.S. electric power industry. Christensen and Greene’s 1976 update of this study used 1970 data for this industry. The Christensen and Greene data are given in Table F5.2. These data have provided a standard test data set for estimating different forms of production and cost functions, including the stochastic frontier model examined in Example 17.5. It has been suggested that one explanation for the apparent finding of economies of scale in these data is that the smaller firms were inefficient for other reasons. The stochastic frontier might allow one to disentangle these effects. Use these data to fit a frontier cost function that includes a quadratic term in log output in addition to the linear term and the factor prices. Then examine the estimated Jondrow et al. residuals to see if they do indeed vary negatively with output, as suggested. (This will require either some programming on your part or specialized software. The stochastic frontier model is provided as an option in TSP and LIMDEP. Or, the likelihood function can be programmed fairly easily for RATS or GAUSS. Note, for a cost frontier as opposed to a production frontier, it is necessary to reverse the sign on the argument in the Φ function.) |
buy |
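A hedged sketch of the half-normal frontier likelihood in Python (NumPy/SciPy). Table F5.2 is not reproduced here, so the data below are simulated and every name is illustrative; the point is the +λε/σ argument in Φ for a cost frontier (a production frontier uses −λε/σ), and the Jondrow et al. conditional-mean residuals:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                         # stand-in regressor (e.g., log output)
b0_true, b1_true, sv_true, su_true = 1.0, 0.5, 0.2, 0.4
v = rng.normal(scale=sv_true, size=n)          # symmetric noise
u = np.abs(rng.normal(scale=su_true, size=n))  # half-normal inefficiency, u >= 0
y = b0_true + b1_true * x + v + u              # cost frontier: inefficiency raises cost

def negloglik(theta):
    b0, b1, lsv, lsu = theta                   # log-parameterize the std deviations
    sv, su = np.exp(lsv), np.exp(lsu)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - b0 - b1 * x
    # cost frontier: Phi(+lam*eps/sigma); a production frontier flips the sign
    ll = np.log(2.0 / sigma) + norm.logpdf(eps / sigma) + norm.logcdf(lam * eps / sigma)
    return -ll.sum()

b_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
res = minimize(negloglik, [b_ols[0], b_ols[1], np.log(0.2), np.log(0.2)], method="BFGS")
b0_hat, b1_hat = res.x[:2]
sv_hat, su_hat = np.exp(res.x[2:])

# Jondrow et al. (1982) conditional mean E[u | eps] for the cost frontier
sigma, lam = np.hypot(sv_hat, su_hat), su_hat / sv_hat
eps = y - b0_hat - b1_hat * x
mu_star = eps * su_hat**2 / sigma**2
s_star = su_hat * sv_hat / sigma
z = mu_star / s_star
Eu = s_star * (z + norm.pdf(z) / norm.cdf(z))  # strictly positive by construction
```

With the real data one would add the quadratic log-output term and then plot Eu against log output to look for the suggested negative relationship.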
m60237 | Show how to estimate a polynomial distributed lag model with lags of six periods and a third-order polynomial. |
buy |
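A minimal sketch of the Almon procedure this exercise asks for, with illustrative simulated data: restrict β_j = Σ_p γ_p j^p over j = 0, …, 6 with a cubic, regress y on the transformed regressors z_p = Σ_j j^p x_{t−j}, then recover all seven lag weights from the four estimated γ’s:

```python
import numpy as np

rng = np.random.default_rng(1)
T, L, P = 400, 6, 3                            # sample size, 6 lags, 3rd-order polynomial
gamma = np.array([0.8, -0.3, 0.05, -0.002])    # illustrative polynomial coefficients
J = np.array([[j**p for p in range(P + 1)] for j in range(L + 1)])   # (L+1, P+1)
beta = J @ gamma                               # lag weights beta_j = sum_p gamma_p j^p

x = rng.normal(size=T + L)
X = np.column_stack([x[L - j:T + L - j] for j in range(L + 1)])      # column j = x_{t-j}
y = X @ beta + 0.1 * rng.normal(size=T)

Z = X @ J                                      # Almon transformed regressors z_p
g_hat = np.linalg.lstsq(Z, y, rcond=None)[0]   # estimate the P+1 gammas by OLS...
beta_hat = J @ g_hat                           # ...then recover all L+1 lag weights
```

The restriction reduces seven free lag coefficients to four, which is the whole point of the polynomial distributed lag.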
m60246 | Show that imposing the restriction β1 + β2 = 1 in the multiple regression of y on a constant, x1, and x2 leads to the regression of y − x1 on a constant and x2 − x1. |
buy |
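A numerical check of the claimed equivalence (simulated data; names illustrative): substituting β1 = 1 − β2 gives the transformed regression, which must reproduce the restricted least squares solution from the Lagrangian formula exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.5 + 0.3 * x1 + 0.7 * x2 + 0.1 * rng.normal(size=n)   # note 0.3 + 0.7 = 1

# substitute b1 = 1 - b2:  y - x1 = a + b2 (x2 - x1) + e
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x2 - x1]), y - x1, rcond=None)
a_hat, b2_hat = coef
b1_hat = 1.0 - b2_hat

# restricted least squares via the Lagrangian formula, for comparison
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
XtXi = np.linalg.inv(X.T @ X)
R, q = np.array([[0.0, 1.0, 1.0]]), np.array([1.0])
b_star = b - XtXi @ R.T @ np.linalg.solve(R @ XtXi @ R.T, R @ b - q)
```

The two routes agree to machine precision, which is the algebraic identity the exercise asks you to prove.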
m60279 | Show that the likelihood inequality in Theorem 17.3 holds for the normal distribution. |
buy |
m60309 | Show that the likelihood inequality in Theorem 17.3 holds for the Poisson distribution used in Section 17.3 by showing that E[(1/n) ln L(θ | y)] is uniquely maximized at θ = θ0. Hint: First show that the expectation is −θ + θ0 ln θ − E0 [ln yi!]. |
buy |
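A quick numerical illustration of the hint (values illustrative): dropping the constant −E0[ln yi!], the expected per-observation log-likelihood is g(θ) = −θ + θ0 ln θ, whose derivative −1 + θ0/θ vanishes only at θ = θ0:

```python
import numpy as np

theta0 = 3.0
theta = np.linspace(0.5, 8.0, 2001)
# expected per-observation log-likelihood, dropping the constant -E0[ln y!]
g = -theta + theta0 * np.log(theta)
theta_max = theta[np.argmax(g)]                # grid maximizer, should sit at theta0
```

Since g is strictly concave (g'' = −θ0/θ² < 0), the maximizer is unique, which is the likelihood inequality for this model.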
m60323 | Show the Yule–Walker equations for an ARMA (1, 1) process. |
buy |
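For reference, a sketch of the answer: for the ARMA(1, 1) process y_t = φy_{t−1} + ε_t + θε_{t−1} with Var[ε_t] = σ², multiplying by y_{t−k} and taking expectations gives the autocovariance (Yule–Walker) equations

```latex
\begin{aligned}
\gamma_0 &= \phi\gamma_1 + \sigma^2\left[1 + \theta(\phi + \theta)\right],\\
\gamma_1 &= \phi\gamma_0 + \theta\sigma^2,\\
\gamma_k &= \phi\gamma_{k-1}, \qquad k \ge 2,
\end{aligned}
```

so the moving-average term affects only γ0 and γ1, and autocorrelations beyond lag 1 decay geometrically at rate φ.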
m60473 | Suppose that in the groupwise heteroscedasticity model of Section 11.7.2, Xi is the same for all i. What is the generalized least squares estimator of β? How would you compute the estimator if it were necessary to estimate σi2? |
buy |
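A numerical check of the answer (simulated data; group count and variances illustrative): when every group shares the same X, the GLS estimator collapses to an inverse-variance weighted average of the group-by-group OLS estimators:

```python
import numpy as np

rng = np.random.default_rng(3)
G, T = 4, 50
X = np.column_stack([np.ones(T), rng.normal(size=T)])   # the same X in every group
beta = np.array([1.0, 2.0])
sig = np.array([0.5, 1.0, 2.0, 4.0])                    # group standard deviations
Y = [X @ beta + s * rng.normal(size=T) for s in sig]

b = [np.linalg.lstsq(X, y, rcond=None)[0] for y in Y]   # group OLS estimators

w = (1 / sig**2) / np.sum(1 / sig**2)                   # inverse-variance weights
b_weighted = sum(wi * bi for wi, bi in zip(w, b))

# direct GLS on the stacked system, for comparison
Xs = np.vstack([X] * G)
ys = np.concatenate(Y)
oi = np.repeat(1 / sig**2, T)                           # Omega^{-1} diagonal
b_gls = np.linalg.solve(Xs.T @ (oi[:, None] * Xs), Xs.T @ (oi * ys))
```

With unknown σi2, the natural feasible version replaces each σi2 by the mean squared residual from the group’s OLS regression and recomputes the weights.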
m60502 | Suppose that a linear probability model is to be fit to a set of observations on a dependent variable y that takes values zero and one, and a single regressor x that varies continuously across observations. Obtain the exact expressions for the least squares slope in the regression in terms of the mean(s) and variance of x, and interpret the result. |
buy |
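A numerical check of the result (simulated data): with p̂ = ȳ, the sample covariance of x and a 0/1 variable is p̂(1 − p̂)(x̄1 − x̄0), so the OLS slope is p̂(1 − p̂)(x̄1 − x̄0)/s²_x, the scaled difference between the group means of x:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
y = (x + rng.normal(size=n) > 0).astype(float)   # binary outcome

p = y.mean()                                     # sample P(y = 1)
x1bar, x0bar = x[y == 1].mean(), x[y == 0].mean()
b = np.cov(x, y, bias=True)[0, 1] / x.var()      # the OLS slope
b_formula = p * (1 - p) * (x1bar - x0bar) / x.var()
```

The identity is exact in the sample, not just in expectation, because ȳ = p̂ and Σxᵢyᵢ = n₁x̄1.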
m60652 | Suppose that the classical regression model applies but that the true value of the constant is zero. Compare the variance of the least squares slope estimator computed without a constant term with that of the estimator computed with an unnecessary constant term. |
buy |
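A numerical illustration of the comparison (x values simulated): the no-constant slope has variance σ²/Σx² while the with-constant slope has variance σ²/Σ(x − x̄)², and Σx² = Σ(x − x̄)² + n x̄², so omitting the unnecessary constant can only help:

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma2 = 100, 1.0
x = rng.normal(loc=1.0, size=n)                  # nonzero mean makes the gap visible
var_no_const = sigma2 / np.sum(x**2)             # slope estimated without a constant
var_with_const = sigma2 / np.sum((x - x.mean())**2)   # slope with the extra constant
```

The two variances coincide only when x̄ = 0, in which case the constant is orthogonal to x in the sample.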
m60699 | Suppose that the model of (13-2) is formulated with an overall constant term and n − 1 dummy variables (dropping, say, the last one). Investigate the effect that this supposition has on the set of dummy variable coefficients and on the least squares estimates of the slopes. |
buy |
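A numerical check of what the reparameterization does (simulated data, three groups for concreteness): the two specifications span the same column space, so the slope is unchanged; the constant becomes the dropped group’s effect and each dummy coefficient becomes a difference from it:

```python
import numpy as np

rng = np.random.default_rng(6)
G, T = 3, 40
g = np.repeat(np.arange(G), T)                   # group labels
x = rng.normal(size=G * T)
y = np.array([0.5, 1.0, 2.0])[g] + 1.5 * x + 0.1 * rng.normal(size=G * T)

D = (g[:, None] == np.arange(G)).astype(float)   # full set of G dummies, no constant
b_full = np.linalg.lstsq(np.column_stack([D, x]), y, rcond=None)[0]

Xc = np.column_stack([np.ones(G * T), D[:, :-1], x])  # constant + G-1 dummies
b_c = np.linalg.lstsq(Xc, y, rcond=None)[0]
```

So the supposition changes only the interpretation of the dummy coefficients (deviations from the omitted group), never the least squares slopes.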
m60701 | Suppose that the model of Exercise 4 were specified as
Describe a method of estimating the parameters. Is ordinary least squares consistent? |
buy |
m60705 | Suppose that the regression model is y = μ + ε, where ε has a zero mean, constant variance, and equal correlation ρ across observations. Then Cov [εi, εj] = σ2ρ if i ≠ j . Prove that the least squares estimator of μ is inconsistent. Find the characteristic roots of Ω and show that Condition 2 after Theorem 10.2 is violated. |
buy |
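A numerical illustration of both parts (σ² and ρ illustrative): Var[ȳ] = σ²[1 + (n − 1)ρ]/n converges to ρσ² > 0 rather than 0, and Ω = σ²[(1 − ρ)I + ριι′] has one characteristic root σ²[1 + (n − 1)ρ] that grows without bound with n:

```python
import numpy as np

sigma2, rho = 1.0, 0.4
var_ybar = lambda n: sigma2 * (1 + (n - 1) * rho) / n   # variance of the sample mean
vals = [var_ybar(n) for n in (10, 100, 10_000, 1_000_000)]   # approaches rho*sigma2

# characteristic roots of Omega for n = 6: sigma2*(1-rho), n-1 times,
# and the single large root sigma2*(1 + (n-1)*rho)
n6 = 6
Omega = sigma2 * ((1 - rho) * np.eye(n6) + rho * np.ones((n6, n6)))
roots = np.linalg.eigvalsh(Omega)                        # ascending order
```

The unbounded largest root is exactly the failure of Condition 2 that makes ȳ inconsistent.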
m60706 | Suppose that the regression model is yi = α + βxi + εi, where the disturbances εi have f(εi) = (1/λ) exp(−εi/λ), εi ≥ 0. This model is rather peculiar in that all the disturbances are assumed to be positive. Note that the disturbances have E[εi | xi] = λ and Var[εi | xi] = λ2. Show that the least squares slope is unbiased but that the intercept is biased. |
buy |
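A Monte Carlo illustration of the result (all parameter values illustrative): across replications the slope averages to β, while the intercept averages to α + λ because the disturbances have mean λ rather than zero:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, lam = 1.0, 2.0, 0.5
n, reps = 100, 5000
x = rng.normal(size=n)                          # regressor held fixed across replications
X = np.column_stack([np.ones(n), x])
E = rng.exponential(scale=lam, size=(reps, n))  # positive disturbances, mean lam
Y = alpha + beta * x + E                        # (reps, n) by broadcasting
coefs = np.linalg.lstsq(X, Y.T, rcond=None)[0]  # (2, reps): one OLS fit per replication
a_mean, b_mean = coefs.mean(axis=1)
# slope is unbiased; the intercept absorbs E[eps] = lam and estimates alpha + lam
```

Equivalently, writing εi = λ + (εi − λ) shows the model is a classical regression with intercept α + λ.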
m60707 | Suppose that the regression model is yi = μ + εi, where E[εi | xi] = 0, Cov[εi, εj | xi, xj] = 0 for i ≠ j, but Var[εi | xi] = σ2xi2, xi > 0.
a. Given a sample of observations on yi and xi, what is the most efficient estimator of μ? What is its variance?
b. What is the OLS estimator of μ, and what is the variance of the ordinary least squares estimator?
c. Prove that the estimator in part a is at least as efficient as the estimator in part b. |
buy |
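A numerical check of parts a–c (simulated x; values illustrative): the efficient (GLS) estimator weights each observation by 1/xi2, giving variance σ²/Σ(1/xi2), while OLS is ȳ with variance σ²Σxi2/n²; the Cauchy–Schwarz inequality (Σ1/xi2)(Σxi2) ≥ n² delivers part c:

```python
import numpy as np

rng = np.random.default_rng(8)
n, sigma2, mu = 50, 1.0, 2.0
x = rng.uniform(0.5, 3.0, size=n)               # xi > 0 as the exercise requires
y = mu + x * rng.normal(size=n)                 # Var[eps_i | xi] = sigma2 * xi^2

w = 1.0 / x**2                                  # GLS weights
mu_gls = (w @ y) / w.sum()                      # part a: weighted mean
mu_ols = y.mean()                               # part b: ordinary sample mean
var_gls = sigma2 / w.sum()
var_ols = sigma2 * np.sum(x**2) / n**2
```

Equality holds only when all xi are identical, in which case the weights are constant and GLS reduces to OLS.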
m60710 | Suppose the true regression model is given by (8-2). The result in (8-4) shows that if either P1.2 is nonzero or β2 is nonzero, then regression of y on X1 alone produces a biased and inconsistent estimator of β1. Suppose the objective is to forecast y, not to estimate the parameters. Consider regression of y on X1 alone to estimate β1 with b1 (which is biased). Is the forecast of y computed using X1b1 also biased? Assume that E[X2 | X1] is a linear function of X1. Discuss your findings generally. What are the implications for prediction when variables are omitted from a regression? |
buy |
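A simulation illustrating the answer (coefficients d0, d1, β1, β2 are illustrative): with E[x2 | x1] = d0 + d1·x1 linear, the short-regression slope converges to β1 + β2·d1 — biased for β1 — yet the fitted values consistently estimate E[y | x1], so forecasts conditioned on x1 alone are fine:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200_000
beta1, beta2 = 1.0, 0.5
d0, d1 = 0.3, 0.8                               # E[x2 | x1] = d0 + d1*x1, as assumed
x1 = rng.normal(size=n)
x2 = d0 + d1 * x1 + rng.normal(size=n)
y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), x1])
c, b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
# b1 is biased for beta1, but the fitted line c + b1*x1 estimates
# E[y | x1] = beta2*d0 + (beta1 + beta2*d1)*x1
```

The omitted variable hurts structural estimation but not prediction, provided forecasts condition only on the included X1.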
m60715 | Suppose we change the assumptions of the model to AS5: (xi, εi) are an independent and identically distributed sequence of random vectors such that xi has a finite mean vector μx, finite positive definite covariance matrix Σxx, and finite fourth moments E[xj xk xl xm] = φjklm for all variables. How does the proof of consistency and asymptotic normality of b change? Are these assumptions weaker or stronger than the ones made in Section 5.2? |
buy |
m60718 | Suppose that x has the Weibull distribution f(x) = αβ x^(β−1) exp(−αx^β), x ≥ 0, α, β > 0.
a. Obtain the log-likelihood function for a random sample of n observations.
b. Obtain the likelihood equations for maximum likelihood estimation of α and β.
Note that the first provides an explicit solution for α in terms of the data and β. But, after inserting this in the second, we obtain only an implicit solution for β. How would you obtain the maximum likelihood estimators?
c. Obtain the second derivatives matrix of the log-likelihood with respect to α and β. The exact expectations of the elements involving β involve the derivatives of the gamma function and are quite messy analytically. Of course, your exact result provides an empirical estimator. How would you estimate the asymptotic covariance matrix for your estimators in Part b?
d. Prove that αβ Cov[ln x, x^β] = 1. |
buy |
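A sketch of the two-step estimation described in part b, on simulated data (true α, β illustrative): the first likelihood equation gives α̂ = n/Σx^β explicitly; substituting it into the second leaves one equation in β alone, solved here by bisection since the concentrated score is monotone:

```python
import numpy as np

rng = np.random.default_rng(10)
alpha0, beta0 = 2.0, 1.5
n = 20_000
# if E ~ Exp(1), then x = (E/alpha)^(1/beta) has density alpha*beta*x^(beta-1)*exp(-alpha*x^beta)
x = (rng.exponential(size=n) / alpha0) ** (1 / beta0)

lx = np.log(x)
def score_beta(b):
    # concentrated first-order condition after substituting alpha = n / sum(x^b):
    # n/b + sum(ln x) - n * sum(x^b ln x) / sum(x^b) = 0
    xb = x**b
    return n / b + lx.sum() - n * (xb @ lx) / xb.sum()

lo, hi = 0.1, 10.0
for _ in range(80):                            # bisection: score decreases in beta
    mid = 0.5 * (lo + hi)
    if score_beta(mid) > 0:
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)
alpha_hat = n / np.sum(x**beta_hat)            # explicit solution from the first equation
```

For the asymptotic covariance matrix in part c, one can evaluate the second-derivatives matrix at (α̂, β̂) and invert its negative, avoiding the messy exact expectations.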
m60725 | Suppose that you have two independent unbiased estimators of the same parameter θ, say θ1 and θ2, with different variances v1 and v2. What linear combination θ = c1θ1 + c2θ2 is the minimum variance unbiased estimator of θ? |
buy |
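A numerical check of the answer (v1, v2 illustrative): unbiasedness forces c1 + c2 = 1, and minimizing c1²v1 + (1 − c1)²v2 gives the inverse-variance weights c1 = v2/(v1 + v2), with minimized variance v1v2/(v1 + v2):

```python
import numpy as np

v1, v2 = 2.0, 0.5
c1_opt = v2 / (v1 + v2)                        # inverse-variance weight; c2 = 1 - c1
combo_var = lambda c1: c1**2 * v1 + (1 - c1)**2 * v2   # independence: no covariance term
grid = np.linspace(0.0, 1.0, 100_001)
c1_grid = grid[np.argmin(combo_var(grid))]     # brute-force minimizer for comparison
```

Note the more precise estimator gets the larger weight, and the combined variance is below both v1 and v2.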
m60769 | The consumption function used in Example 5.3 is a very simple specification. One might wonder if the meager specification of the model could help explain the finding in the Hausman test. The data set used for the example is given in Table F5.1. Use these data to carry out the test in a more elaborate specification ct = β1 + β2yt + β3it + β4ct−1 + εt where ct is the log of real consumption, yt is the log of real disposable income, and it is the interest rate (90-day T bill rate). |
buy |
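A hedged sketch of the mechanics of the test (Wu’s variable-addition form of the Hausman test), on simulated data since Table F5.1 is not reproduced here; all names and coefficient values are illustrative. Regress the suspect regressor on the instruments, then add the first-stage residuals to the main equation and test their significance:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2000
z = rng.normal(size=n)                          # instrument
u = rng.normal(size=n)
yinc = 0.8 * z + u                              # endogenous regressor (think income)
eps = 0.6 * u + rng.normal(scale=0.8, size=n)   # correlated with yinc through u
c = 1.0 + 0.9 * yinc + eps                      # "consumption" equation

# Wu variable-addition form of the Hausman test:
Z = np.column_stack([np.ones(n), z])
vhat = yinc - Z @ np.linalg.lstsq(Z, yinc, rcond=None)[0]   # first-stage residuals
X = np.column_stack([np.ones(n), yinc, vhat])
b = np.linalg.lstsq(X, c, rcond=None)[0]
e = c - X @ b
s2 = e @ e / (n - X.shape[1])
V = s2 * np.linalg.inv(X.T @ X)
t_vhat = b[2] / np.sqrt(V[2, 2])                # significant => reject exogeneity
```

With the real data, the same steps apply to the richer specification with it and ct−1, using lagged variables as instruments.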
m60775 | The Cox test in Example 8.3 has the same difficulty as the J test in Example 8.2. The sample period might be too long for the test not to have been affected by underlying structural change. Repeat the computations using the 1980 to 2000 data. |
buy |