
database-mathematics-solutions.com

This website collects worked solutions to problems in general mathematics.

About 833 results; 1294 solutions are in free access.
Page 4 of 42
№    Condition    free or $0.50
m59288  For the normal distribution, μ_{2k} = σ^{2k} (2k)! / (k! 2^k) and μ_{2k+1} = 0, k = 0, 1, .... Use this result to analyze the two estimators, where m_k = (1/n) Σ_{i=1}^n (x_i − x̄)^k. The following result will be useful: Asy. Cov[√n m_j, √n m_k] = μ_{j+k} − μ_j μ_k + jk μ_2 μ_{j−1} μ_{k−1} − j μ_{j−1} μ_{k+1} − k μ_{k−1} μ_{j+1}. Use the delta method to obtain the asymptotic variances and covariance of these two functions, assuming the data are drawn from a normal distribution with mean μ and variance σ². Under the assumptions, the sample mean is a consistent estimator of μ, so for purposes of deriving asymptotic results the difference between x̄ and μ may be ignored. As such, no generality is lost by assuming the mean is zero and proceeding from there. Obtain V, the 3 × 3 covariance matrix for the three moments, then use the delta method to show that the covariance matrix for the two estimators is JVJ′, where J is the 2 × 3 matrix of derivatives.  buy
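A minimal numerical check of the moment formula above, assuming nothing beyond the problem statement: the even central moments of N(0, σ²) computed from μ_{2k} = σ^{2k}(2k)!/(k! 2^k), compared against sample moments m_k from a simulated normal sample (function names are ours, for illustration only).

```python
import math
import numpy as np

def normal_central_moment(order, sigma):
    """Central moment of N(0, sigma^2): mu_{2k} = sigma^{2k} (2k)! / (k! 2^k), odd moments zero."""
    if order % 2 == 1:
        return 0.0
    k = order // 2
    return sigma**order * math.factorial(order) / (math.factorial(k) * 2**k)

# Compare with the sample moments m_k = (1/n) sum (x_i - xbar)^k.
rng = np.random.default_rng(0)
sigma = 1.5
x = sigma * rng.standard_normal(200_000)
m2 = np.mean((x - x.mean())**2)   # should approach sigma^2 = 2.25
m4 = np.mean((x - x.mean())**4)   # should approach 3 sigma^4 = 15.1875
```

For order 4 the formula collapses to the familiar 3σ⁴; the simulated m2 and m4 land close to these targets.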
m59290  For the simple regression model y_i = μ + ε_i, ε_i ~ N[0, σ²], prove that the sample mean is consistent and asymptotically normally distributed. Now consider the alternative estimator μ̂ = Σ_i w_i y_i, where w_i = i/(n(n+1)/2) = i/Σ_i i. Note that Σ_i w_i = 1. Prove that this is a consistent estimator of μ and obtain its asymptotic variance.  buy
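A simulation sketch of the alternative estimator. Since the weights are fixed, Var[μ̂] = σ² Σ_i w_i², which works out to roughly (4/3)σ²/n, so the estimator is consistent but less efficient than the sample mean; the Monte Carlo below (our own setup, with μ = 5 and σ = 2 chosen arbitrarily) illustrates both facts.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 2000
w = np.arange(1, n + 1) / (n * (n + 1) / 2)   # w_i = i / sum_i i, sums to 1

# Exact variance of the weighted estimator: sigma^2 * sum(w_i^2),
# which approaches (4/3) sigma^2 / n, versus sigma^2 / n for the sample mean.
var_weighted = sigma**2 * (w**2).sum()
var_mean = sigma**2 / n

# Monte Carlo: the estimator centers on mu with variance close to var_weighted.
reps = 2000
draws = mu + sigma * rng.standard_normal((reps, n))
est = draws @ w
```

The ratio var_weighted / var_mean approaches 4/3 as n grows, which is the efficiency loss relative to the sample mean.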
m59334  Given the data set, estimate a probit model and test the hypothesis that x is not influential in determining the probability that y equals one.  buy
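The exercise's data set is not reproduced in this listing, so the sketch below runs on synthetic data (the coefficients 0.3 and 0.8 are our own stand-ins). It maximizes the probit log-likelihood directly with scipy and carries out a likelihood-ratio test of the hypothesis that the slope is zero.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

# Hypothetical data; substitute the exercise's actual data set here.
rng = np.random.default_rng(42)
n = 500
x = rng.standard_normal(n)
y = (0.3 + 0.8 * x + rng.standard_normal(n) > 0).astype(float)
X = np.column_stack([np.ones(n), x])

def negloglik(b, X, y):
    """Negative probit log-likelihood: -sum y ln Phi(x'b) + (1-y) ln(1 - Phi(x'b))."""
    p = np.clip(norm.cdf(X @ b), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Unrestricted fit, and restricted fit with the slope forced to zero.
fit = minimize(negloglik, np.zeros(2), args=(X, y), method="BFGS")
fit0 = minimize(negloglik, np.zeros(1), args=(np.ones((n, 1)), y), method="BFGS")

lr = 2 * (fit0.fun - fit.fun)        # LR statistic, chi-squared(1) under H0
reject = lr > chi2.ppf(0.95, df=1)
```

With the synthetic slope of 0.8 the test rejects decisively; with the exercise's real data the conclusion will depend on the estimates obtained.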
m59634  In random sampling from the exponential distribution f(x) = (1/θ)e^{−x/θ}, x ≥ 0, θ > 0, find the maximum likelihood estimator of θ and obtain the asymptotic distribution of this estimator.  buy
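For this parameterization the log-likelihood is ln L = −n ln θ − Σ x_i/θ, so the first-order condition gives θ̂ = x̄, and the Fisher information n/θ² gives asymptotic variance θ²/n. A quick check (θ = 2.5 and n = 10,000 are our arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 2.5, 10_000
x = rng.exponential(scale=theta, size=n)

# d lnL / d theta = -n/theta + sum(x)/theta^2 = 0  =>  theta_hat = xbar
theta_hat = x.mean()
# Fisher information is n/theta^2, so Asy. Var[theta_hat] = theta^2 / n
asy_se = theta_hat / np.sqrt(n)
```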
m59642  In Solow's classic (1957) study of technical change in the U.S. economy, he suggests the following aggregate production function: q(t) = A(t) f[k(t)], where q(t) is aggregate output per work hour, k(t) is the aggregate capital–labor ratio, and A(t) is the technology index. Solow considered four static models: q/A = α + β ln k, q/A = α − β/k, ln(q/A) = α + β ln k, and ln(q/A) = α + β/k. Solow's data for the years 1909 to 1949 are listed in Appendix Table F7.2. Use these data to estimate the α and β of the four functions listed above. [Note: Your results will not quite match Solow's. See the next exercise for resolution of the discrepancy.]  buy
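Appendix Table F7.2 is not reproduced here, so the sketch below fits the four functional forms by OLS on a synthetic stand-in series (the values of k, the coefficients 0.45 and 0.24, and the noise level are all our own illustrative choices); swap in Solow's actual 41 observations to reproduce the exercise.

```python
import numpy as np

def ols(y, z):
    """Regress y on a constant and z; return (alpha, beta)."""
    Z = np.column_stack([np.ones_like(z), z])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b

# Synthetic stand-in for the 1909-1949 series in Appendix Table F7.2.
rng = np.random.default_rng(7)
k = np.linspace(2.0, 3.2, 41)
qa = 0.45 + 0.24 * np.log(k) + 0.01 * rng.standard_normal(41)

fits = {
    "q/A = a + b ln k":     ols(qa, np.log(k)),
    "q/A = a - b/k":        ols(qa, -1.0 / k),
    "ln(q/A) = a + b ln k": ols(np.log(qa), np.log(k)),
    "ln(q/A) = a + b/k":    ols(np.log(qa), 1.0 / k),
}
```

All four forms are linear in the parameters once the regressor is transformed, so a single OLS helper covers them.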
m59644  In the aforementioned study, Solow states: "A scatter of q/A against k is shown in Chart 4. Considering the amount of a priori doctoring which the raw figures have undergone, the fit is remarkably tight. Except, that is, for the layer of points which are obviously too high. These maverick observations relate to the seven last years of the period, 1943–1949. From the way they lie almost exactly parallel to the main scatter, one is tempted to conclude that in 1943 the aggregate production function simply shifted." a. Compute a scatter diagram of q/A against k. b. Estimate the four models you estimated in the previous problem, including a dummy variable for the years 1943 to 1949. How do your results change? [Note: These results match those reported by Solow, although he did not report the coefficient on the dummy variable.] c. Solow went on to surmise that, in fact, the data were fundamentally different in the years before 1943 than during and after. Use a Chow test to examine the difference in the two subperiods using your four functional forms. Note that with the dummy variable, you can do the test by introducing an interaction term between the dummy and whichever function of k appears in the regression. Use an F test to test the hypothesis.  buy
m59648In the classical regression model with heteroscedasticity, which is more efficient, ordinary least squares or GMM? Obtain the two estimators and their respective asymptotic covariance matrices, then prove your assertion. buy
m59649  In the December 1969 American Economic Review (pp. 886–896), Nathaniel Leff reports the following least squares regression results for a cross-section study of the effect of age composition on savings in 74 countries in 1964: ln(S/Y) = 7.3439 + 0.1596 ln(Y/N) + 0.0254 ln G − 1.3520 ln D1 − 0.3990 ln D2 and ln(S/N) = 8.7851 + 1.1486 ln(Y/N) + 0.0265 ln G − 1.3438 ln D1 − 0.3966 ln D2, where S/Y = domestic savings ratio, S/N = per capita savings, Y/N = per capita income, D1 = percentage of the population under 15, D2 = percentage of the population over 64, and G = growth rate of per capita income. Are these results correct? Explain.  buy
m59650  In the discussion of Harvey's model in Section 11.7, it is noted that the initial estimator of γ_1, the constant term in the regression of ln e_i² on a constant and z_i, is inconsistent by the amount 1.2704. Harvey points out that if the purpose of this initial regression is only to obtain starting values for the iterations, then the correction is not necessary. Explain why this statement would be true.  buy
m59651  In the discussion of the instrumental variables estimator, we showed that the least squares estimator b is biased and inconsistent. Nonetheless, b does estimate something: plim b = θ = β + Q⁻¹γ. Derive the asymptotic covariance matrix of b, and show that b is asymptotically normally distributed.  buy
m59655  In the generalized regression model, suppose that Ω is known. a. What is the covariance matrix of the OLS and GLS estimators of β? b. What is the covariance matrix of the OLS residual vector e = y − Xb? c. What is the covariance matrix of the GLS residual vector ε̂ = y − Xβ̂? d. What is the covariance matrix of the OLS and GLS residual vectors?  buy
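With Ω known, both covariance matrices in part a can be formed directly: Var[b | X] = (X′X)⁻¹X′ΩX(X′X)⁻¹ for OLS and Var[β̂_GLS | X] = (X′Ω⁻¹X)⁻¹ for GLS. The sketch below (a diagonal heteroscedastic Ω of our own choosing, with any σ² absorbed into Ω) verifies numerically the Aitken result that the difference is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
# A known, diagonal, heteroscedastic Omega with variances bounded away from zero.
Omega = np.diag(0.5 + rng.random(n))

XtX_inv = np.linalg.inv(X.T @ X)
var_ols = XtX_inv @ X.T @ Omega @ X @ XtX_inv              # sandwich form, Var[b | X]
var_gls = np.linalg.inv(X.T @ np.linalg.inv(Omega) @ X)    # Var[beta_gls | X]

# Efficiency of GLS: var_ols - var_gls should be positive semidefinite.
gap_eigs = np.linalg.eigvalsh(var_ols - var_gls)
```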
m59658  In the panel data models estimated in Example 21.5.1, neither the logit nor the probit model provides a framework for applying a Hausman test to determine whether fixed or random effects is preferred. Explain.  buy
m59680  Is the model stable? An updated version of Klein's Model I was estimated. The relevant submatrix of Δ is  buy
m59686It is commonly asserted that the Durbin–Watson statistic is only appropriate for testing for first-order autoregressive disturbances. What combination of the coefficients of the model is estimated by the Durbin–Watson statistic in each of the following cases: AR(1), AR(2), MA(1)? In each case, assume that the regression model does not contain a lagged dependent variable. Comment on the impact on your results of relaxing this assumption. buy
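As a starting point for the AR(1) case, the sketch below (our own simulation, with ρ = 0.6 chosen arbitrarily) computes the Durbin–Watson statistic and illustrates the combination it estimates there, DW ≈ 2(1 − ρ); working out the analogous combinations under AR(2) and MA(1) disturbances is the substance of the exercise.

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2, approximately 2(1 - rho_hat)."""
    return np.sum(np.diff(e)**2) / np.sum(e**2)

# Simulate AR(1) disturbances e_t = rho * e_{t-1} + u_t.
rng = np.random.default_rng(5)
n, rho = 5000, 0.6
u = rng.standard_normal(n)
e = np.empty(n)
e[0] = u[0]
for t in range(1, n):
    e[t] = rho * e[t - 1] + u[t]

dw = durbin_watson(e)   # close to 2 * (1 - 0.6) = 0.8
```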
m59772  Let e_i be the ith residual in the ordinary least squares regression of y on X in the classical regression model, and let ε_i be the corresponding true disturbance. Prove that plim(e_i − ε_i) = 0.  buy
m59930  (Limited Information Maximum Likelihood Estimation) Consider a bivariate distribution for x and y that is a function of two parameters, α and β. The joint density is f(x, y | α, β). We consider maximum likelihood estimation of the two parameters. The full information maximum likelihood estimator is the now familiar maximum likelihood estimator of the two parameters. Now, suppose that we can factor the joint distribution as done in Exercise 3, but in this case, we have f(x, y | α, β) = f(y | x, α, β) f(x | α). That is, the conditional density for y is a function of both parameters, but the marginal distribution for x involves only α. a. Write down the general form of the log-likelihood function using the joint density. b. Since the joint density equals the product of the conditional times the marginal, the log-likelihood function can be written equivalently in terms of the factored density. Write this down, in general terms. c. The parameter α can be estimated by itself using only the data on x and the log-likelihood formed using the marginal density for x. It can also be estimated with β by using the full log-likelihood function and data on both y and x. Show this. d. Show that the first estimator in part c has a larger asymptotic variance than the second one. This is the difference between a limited information maximum likelihood estimator and a full information maximum likelihood estimator. e.  buy
m59932  Linear transformations of the data. Consider the least squares regression of y on K variables (with a constant), X. Consider an alternative set of regressors Z = XP, where P is a nonsingular matrix. Thus, each column of Z is a mixture of some of the columns of X. Prove that the residual vectors in the regressions of y on X and y on Z are identical. What relevance does this have to the question of changing the fit of a regression by changing the units of measurement of the independent variables?  buy
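The claim is easy to check numerically: since P is nonsingular, X and Z = XP span the same column space, so the two projections (and hence the residuals) coincide. A sketch with arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(9)
n, K = 30, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, K - 1))])
y = rng.standard_normal(n)

P = rng.standard_normal((K, K))        # nonsingular with probability one
Z = X @ P                              # each column of Z mixes the columns of X

def residuals(y, M):
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    return y - M @ b

e_X = residuals(y, X)
e_Z = residuals(y, Z)   # identical: X and Z span the same column space
```

Rescaling the units of the independent variables is a special (diagonal P) case, so it cannot change the fit.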
m59967  Mixture distribution. Suppose that the joint distribution of the two random variables x and y is a. Find the maximum likelihood estimators of β and θ and their asymptotic joint distribution. b. Find the maximum likelihood estimator of θ/(β + θ) and its asymptotic distribution. c. Prove that f(x) is of the form f(x) = γ(1 − γ)^x, x = 0, 1, 2, ..., and find the maximum likelihood estimator of γ and its asymptotic distribution. d. Prove that f(y | x) is of the form Prove that f(y | x) integrates to 1. Find the maximum likelihood estimator of λ and its asymptotic distribution. [Hint: In the conditional distribution, just carry the xs along as constants.] e. Prove that f(y) = θe^{−θy}, y ≥ 0, θ > 0. Find the maximum likelihood estimator of θ and its asymptotic variance. f. Prove that Based on this distribution, what is the maximum likelihood estimator of β?  buy
m59981  Now, assume only finite second moments of x; E[x_i²] is finite. Is this sufficient to establish consistency of b? (The inequality E[|xy|] ≤ {E[x²]}^{1/2} {E[y²]}^{1/2} will be helpful.) Is this assumption sufficient to establish asymptotic normality?  buy
m59982  Now suppose that the disturbances are not normally distributed, although Ω is still known. Show that the limiting distribution of the previous statistic is (1/J) times a chi-squared variable with J degrees of freedom. Conclude that in the generalized regression model, the limiting distribution of the Wald statistic W = (Rβ̂ − q)′{R(Est. Var[β̂])R′}⁻¹(Rβ̂ − q) is chi-squared with J degrees of freedom, regardless of the distribution of the disturbances, as long as the data are otherwise well behaved. Note that in a finite sample, the true distribution may be approximated with an F[J, n − K] distribution. It is a bit ambiguous, however, to interpret this fact as implying that the statistic is asymptotically distributed as F with J and n − K degrees of freedom, because the limiting distribution used to obtain our result is the chi-squared, not the F. In this instance, the F[J, n − K] is a random variable that tends asymptotically to the chi-squared variate.  buy
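A Monte Carlo sketch of the claim, under assumptions of our own choosing: skewed (centered exponential) disturbances with known unit variance, so Ω = I and Var[b] = (X′X)⁻¹. The Wald test of the single restriction H0: slope = 0 should still reject at close to the nominal 5% rate despite the non-normality.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(11)
n, reps = 200, 2000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
R = np.array([[0.0, 1.0]])                     # one restriction, J = 1
crit = chi2.ppf(0.95, df=1)
RVRt_inv = np.linalg.inv(R @ XtX_inv @ R.T)    # Var[b] = (X'X)^{-1} since Omega = I

rejections = 0
for _ in range(reps):
    eps = rng.exponential(1.0, n) - 1.0        # mean 0, variance 1, skewed
    b = XtX_inv @ (X.T @ eps)                  # true beta is zero, so Rb - q = Rb
    W = float((R @ b).T @ RVRt_inv @ (R @ b))
    rejections += W > crit

rate = rejections / reps                        # near 0.05 despite non-normality
```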
 

contacts: oneplus2014@gmail.com