
database-mathematics-solutions.com

This site collects worked solutions to problems in general mathematics.
Other databases: Calculus, Econometrics, Linear Algebra, Numerical Analysis, Statistics.

About 833 results; 1294 free-access solutions.
Page 6 of 42
№  Problem statement  (free or $0.50)
m60210  Section 14.3.1 presents estimates of a Cobb–Douglas cost function using Nerlove's 1955 data on the U.S. electric power industry. Christensen and Greene's 1976 update of this study used 1970 data for this industry. The Christensen and Greene data are given in Table F5.2. These data have provided a standard test data set for estimating different forms of production and cost functions, including the stochastic frontier model examined in Example 17.5. It has been suggested that one explanation for the apparent finding of economies of scale in these data is that the smaller firms were inefficient for other reasons. The stochastic frontier might allow one to disentangle these effects. Use these data to fit a frontier cost function that includes a quadratic term in log output in addition to the linear term and the factor prices. Then examine the estimated Jondrow et al. residuals to see whether they do indeed vary negatively with output, as suggested. (This will require either some programming on your part or specialized software. The stochastic frontier model is provided as an option in TSP and LIMDEP, or the likelihood function can be programmed fairly easily for RATS or GAUSS. Note that for a cost frontier, as opposed to a production frontier, it is necessary to reverse the sign on the argument in the Φ function.)
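For the programming route mentioned in parentheses, the normal/half-normal log-likelihood with the cost-frontier sign convention can be sketched in Python. This is a sketch only: the function name and parameter layout are illustrative, SciPy is assumed to be available, and it is not the solution supplied with the problem.

```python
import numpy as np
from scipy.stats import norm

def cost_frontier_loglik(params, y, X):
    """Normal/half-normal stochastic frontier log-likelihood.

    params = (beta_1..beta_k, sigma, lam); eps = y - X @ beta.
    For a COST frontier the argument of the standard normal cdf enters
    with a POSITIVE sign (+eps * lam / sigma) -- the reverse of the
    production-frontier case, which is the point the exercise makes.
    """
    k = X.shape[1]
    beta, sigma, lam = params[:k], params[k], params[k + 1]
    eps = y - X @ beta
    return np.sum(np.log(2.0 / sigma)
                  + norm.logpdf(eps / sigma)
                  + norm.logcdf(eps * lam / sigma))
```

Maximizing this (e.g. by passing its negative to `scipy.optimize.minimize`) over β, σ > 0, λ > 0 gives the frontier estimates; the Jondrow et al. residuals are then computed from the fitted ε̂.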
m60237  Show how to estimate a polynomial distributed lag model with lags of six periods and a third-order polynomial.
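The mechanics can be sketched in Python. This helper is illustrative, not from the source: it assumes the Almon device in which the weight on lag i is β_i = Σ_{p=0..3} γ_p i^p, so the γ_p are estimable by OLS on four constructed regressors.

```python
import numpy as np

def almon_regressors(x, n_lags=6, poly_order=3):
    """Build the transformed regressors for an Almon polynomial
    distributed lag.  With beta_i = sum_p gamma_p * i**p, the model
    y_t = sum_i beta_i * x_{t-i} + e_t reduces to an OLS regression of
    y on z_p = sum_i i**p * x_{t-i}, p = 0..poly_order."""
    T = len(x)
    rows = []
    for t in range(n_lags, T):
        lags = x[t - np.arange(n_lags + 1)]     # x_t, x_{t-1}, ..., x_{t-6}
        rows.append([np.sum(np.arange(n_lags + 1) ** p * lags)
                     for p in range(poly_order + 1)])
    return np.array(rows)
```

Regressing y_t (for t ≥ 6) on these columns, plus a constant, yields γ̂; the lag coefficients are then recovered as β̂_i = Σ_p γ̂_p i^p.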
m60246  Show that the multiple regression of y on a constant, x1, and x2, while imposing the restriction β1 + β2 = 1, leads to the regression of y − x1 on a constant and x2 − x1.
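The algebra can be checked numerically. A minimal sketch with synthetic data (substituting β1 = 1 − β2 into the model is the whole argument):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.3 * x1 + 0.7 * x2 + rng.normal(size=n)   # true beta1 + beta2 = 1

# Imposing beta1 + beta2 = 1 means beta1 = 1 - beta2; substituting into
# y = a + beta1*x1 + beta2*x2 + e gives  y - x1 = a + beta2*(x2 - x1) + e,
# an UNrestricted regression of y - x1 on a constant and x2 - x1.
Z = np.column_stack([np.ones(n), x2 - x1])
a_hat, b2_hat = np.linalg.lstsq(Z, y - x1, rcond=None)[0]
b1_hat = 1.0 - b2_hat        # restriction holds by construction
```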
m60279  Show that the likelihood inequality in Theorem 17.3 holds for the normal distribution.
m60309  Show that the likelihood inequality in Theorem 17.3 holds for the Poisson distribution used in Section 17.3 by showing that E[(1/n) ln L(θ | y)] is uniquely maximized at θ = θ0. Hint: First show that the expectation is −θ + θ0 ln θ − E0[ln yi!].
m60323  Show the Yule–Walker equations for an ARMA(1, 1) process.
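For reference, under the common parametrization y_t = φ y_{t−1} + ε_t + θ ε_{t−1} with white-noise variance σ² (an assumed convention; texts differ on the sign of θ), multiplying by y_{t−k} and taking expectations gives:

```latex
\begin{aligned}
\gamma_0 &= \phi\,\gamma_1 + \sigma^2\bigl[1 + \theta(\phi + \theta)\bigr],\\
\gamma_1 &= \phi\,\gamma_0 + \theta\,\sigma^2,\\
\gamma_k &= \phi\,\gamma_{k-1}, \qquad k \ge 2 .
\end{aligned}
```

Solving the first two equations gives γ0 = σ²(1 + 2φθ + θ²)/(1 − φ²), and the autocovariances decay geometrically at rate φ from lag 1 onward.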
m60473  Suppose that in the groupwise heteroscedasticity model of Section 11.7.2, Xi is the same for all i. What is the generalized least squares estimator of β? How would you compute the estimator if it were necessary to estimate σi²?
m60502  Suppose that a linear probability model is to be fit to a set of observations on a dependent variable y that takes values zero and one, and a single regressor x that varies continuously across observations. Obtain the exact expression for the least squares slope in the regression in terms of the mean(s) and variance of x, and interpret the result.
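The reduction the exercise is after can be previewed numerically: with a zero–one y, Cov(x, y) collapses to the share of ones times the gap between the mean of x among the ones and the overall mean of x. The data below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
y = (x + rng.normal(size=n) > 0).astype(float)   # binary dependent variable

# OLS slope in the linear probability model: b = Cov(x, y) / Var(x).
b_ols = np.cov(x, y, ddof=0)[0, 1] / np.var(x)

# With binary y:  Cov(x, y) = p1 * (mean of x given y=1  -  overall mean of x),
# where p1 is the sample share of ones.
p1 = y.mean()
b_formula = p1 * (x[y == 1.0].mean() - x.mean()) / np.var(x)
```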
m60652  Suppose that the classical regression model applies but that the true value of the constant is zero. Compare the variance of the least squares slope estimator computed without a constant term with that of the estimator computed with an unnecessary constant term.
m60699  Suppose that the model of (13-2) is formulated with an overall constant term and n − 1 dummy variables (dropping, say, the last one). Investigate the effect that this supposition has on the set of dummy variable coefficients and on the least squares estimates of the slopes.
m60701  Suppose that the model of Exercise 4 were specified as … [equation not reproduced in the source]. Describe a method of estimating the parameters. Is ordinary least squares consistent?
m60705  Suppose that the regression model is y = μ + ε, where ε has a zero mean, constant variance, and equal correlation ρ across observations. Then Cov[εi, εj] = σ²ρ if i ≠ j. Prove that the least squares estimator of μ is inconsistent. Find the characteristic roots of Ω and show that Condition 2 after Theorem 10.2 is violated.
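The key step in the inconsistency proof is that the variance of the sample mean ȳ (the least squares estimator of μ) does not go to zero:

```latex
\operatorname{Var}(\bar{y})
  = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}[\varepsilon_i,\varepsilon_j]
  = \frac{\sigma^{2}}{n} + \frac{n-1}{n}\,\sigma^{2}\rho
  \;\longrightarrow\; \sigma^{2}\rho \neq 0 \quad (n \to \infty),
```

so ȳ does not converge in mean square to μ.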
m60706  Suppose that the regression model is yi = α + βxi + εi, where the disturbances εi have f(εi) = (1/λ) exp(−εi/λ), εi ≥ 0. This model is rather peculiar in that all the disturbances are assumed to be positive. Note that the disturbances have E[εi | xi] = λ and Var[εi | xi] = λ². Show that the least squares slope is unbiased but that the intercept is biased.
m60707  Suppose that the regression model is yi = μ + εi, where E[εi | xi] = 0, Cov[εi, εj | xi, xj] = 0 for i ≠ j, but Var[εi | xi] = σ²xi², xi > 0. a. Given a sample of observations on yi and xi, what is the most efficient estimator of μ? What is its variance? b. What is the OLS estimator of μ, and what is the variance of the ordinary least squares estimator? c. Prove that the estimator in part a is at least as efficient as the estimator in part b.
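The closed forms for parts a and b can be compared directly. A sketch with synthetic x; the weighted mean with weights 1/xi² is the GLS candidate for part a:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.5, 3.0, size=100)     # x_i > 0, illustrative draws
sigma2 = 1.0

# Part a: with Var[e_i | x_i] = sigma^2 * x_i^2, GLS is the weighted mean
# with weights 1/x_i^2; its variance is sigma^2 / sum(1/x_i^2).
var_gls = sigma2 / np.sum(1.0 / x**2)

# Part b: OLS is the plain sample mean, with variance (sigma^2/n^2)*sum(x_i^2).
var_ols = sigma2 * np.sum(x**2) / len(x)**2

# Part c follows from the Cauchy-Schwarz inequality:
# n^2 <= sum(1/x^2) * sum(x^2), hence var_gls <= var_ols.
assert var_gls <= var_ols
```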
m60710  Suppose the true regression model is given by (8-2). The result in (8-4) shows that if either P1.2 is nonzero or β2 is nonzero, then regression of y on X1 alone produces a biased and inconsistent estimator of β1. Suppose the objective is to forecast y, not to estimate the parameters. Consider regression of y on X1 alone to estimate β1 with b1 (which is biased). Is the forecast of y computed using X1b1 also biased? Assume that E[X2 | X1] is a linear function of X1. Discuss your findings generally. What are the implications for prediction when variables are omitted from a regression?
m60715  Suppose we change the assumptions of the model to AS5: (xi, εi) are an independent and identically distributed sequence of random vectors such that xi has a finite mean vector μx, a finite positive definite covariance matrix Σxx, and finite fourth moments E[xj xk xl xm] = φjklm for all variables. How does the proof of consistency and asymptotic normality of b change? Are these assumptions weaker or stronger than the ones made in Section 5.2?
m60718  Suppose that x has the Weibull distribution f(x) = αβ x^(β−1) e^(−αx^β), x ≥ 0, α, β > 0. a. Obtain the log-likelihood function for a random sample of n observations. b. Obtain the likelihood equations for maximum likelihood estimation of α and β. Note that the first provides an explicit solution for α in terms of the data and β. But, after inserting this in the second, we obtain only an implicit solution for β. How would you obtain the maximum likelihood estimators? c. Obtain the second derivatives matrix of the log-likelihood with respect to α and β. The exact expectations of the elements involving β involve the derivatives of the gamma function and are quite messy analytically. Of course, your exact result provides an empirical estimator. How would you estimate the asymptotic covariance matrix for your estimators in part b? d. Prove that αβ Cov[ln x, x^β] = 1.
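Part b's two-step scheme (explicit α̂ given β, then a one-dimensional search for β) can be sketched as follows; the bisection bounds are illustrative choices, not part of the exercise.

```python
import numpy as np

def weibull_mle(x, lo=0.05, hi=20.0, tol=1e-10):
    """MLE for f(x) = a*b*x**(b-1)*exp(-a*x**b), a, b > 0.

    The likelihood equation for a gives a_hat = n / sum(x**b);
    substituting it into the equation for b leaves the implicit equation
        g(b) = n/b + sum(ln x) - a_hat * sum(x**b * ln x) = 0,
    solved here by simple bisection."""
    n, lx = len(x), np.log(x)

    def g(b):
        xb = x**b
        return n / b + lx.sum() - (n / xb.sum()) * (xb * lx).sum()

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:        # g is strictly decreasing in b
            lo = mid
        else:
            hi = mid
    b_hat = 0.5 * (lo + hi)
    return n / np.sum(x**b_hat), b_hat
```

g is strictly decreasing in β (the subtracted term is an x^β-weighted mean of ln x, which rises with β), so bisection on a bracketing interval is enough.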
m60725  Suppose that you have two independent unbiased estimators of the same parameter θ, say θ1 and θ2, with different variances v1 and v2. What linear combination θ = c1θ1 + c2θ2 is the minimum variance unbiased estimator of θ?
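The classical answer is inverse-variance weighting, c1 = v2/(v1 + v2); a quick grid check (the variances are illustrative numbers):

```python
import numpy as np

v1, v2 = 2.0, 0.5                     # illustrative variances
# Unbiasedness forces c2 = 1 - c1, so the combined variance is
#   V(c1) = c1**2 * v1 + (1 - c1)**2 * v2,
# minimized at c1* = v2 / (v1 + v2): inverse-variance weighting.
c_star = v2 / (v1 + v2)
grid = np.linspace(0.0, 1.0, 100001)
V = grid**2 * v1 + (1.0 - grid)**2 * v2
c_grid = grid[np.argmin(V)]
# The minimized variance is v1*v2/(v1+v2), below both v1 and v2.
```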
m60769  The consumption function used in Example 5.3 is a very simple specification. One might wonder whether the meager specification of the model could help explain the finding in the Hausman test. The data set used for the example is given in Table F5.1. Use these data to carry out the test in a more elaborate specification, ct = β1 + β2 yt + β3 it + β4 ct−1 + εt, where ct is the log of real consumption, yt is the log of real disposable income, and it is the interest rate (90-day T-bill rate).
m60775  The Cox test in Example 8.3 has the same difficulty as the J test in Example 8.2. The sample period might be too long for the test not to have been affected by underlying structural change. Repeat the computations using the 1980 to 2000 data.
 

contacts: oneplus2014@gmail.com