
database-mathematics-solutions.com

This website collects worked solutions to problems in general mathematics.
Other databases: Calculus, Econometrics, Linear Algebra, Numerical Analysis, Statistics.

  About 833 results; 1294 free-access solutions.
Page 1 of 42
 №  Problem statement  (free or $0.50)
m58167  A data set consists of n = n1 + n2 + n3 observations on y and x. For the first n1 observations, y = 1 and x = 1. For the next n2 observations, y = 0 and x = 1. For the last n3 observations, y = 0 and x = 0. Prove that neither (21-19) nor (21-21) has a solution. buy
m58325  A multiple regression of y on a constant, x1, and x2 produces the following results: y = 4 + 0.4x1 + 0.9x2, R2 = 8/60, e′e = 520, n = 29. Test the hypothesis that the two slopes sum to 1. buy
m58364  a. Prove the result directly using matrix algebra. b. Prove that if X contains a constant term and if the remaining columns are in deviation form (so that the column sum is zero), then the model of Exercise 8 below is one of these cases. (The seemingly unrelated regressions model with identical regressor matrices.) buy
m58369  A regression model with K = 16 independent variables is fit using a panel of seven years of data. The sums of squares for the seven separate regressions and the pooled regression are shown below. The model with the pooled data allows a separate constant for each year. Test the hypothesis that the same coefficients apply in every year. buy
m58370  A residual maker. What is the result of the matrix product M1M, where M1 is defined in (3-19) and M is defined in (3-14)? buy
m58537  Adding an observation. A data set consists of n observations on Xn and yn. The least squares estimator based on these n observations is bn = (X′nXn)−1X′nyn. Another observation, xs and ys, becomes available. Prove that the least squares estimator computed using this additional observation is bn,s = bn + [1/(1 + x′s(X′nXn)−1xs)](X′nXn)−1xs(ys − x′sbn). Note that the last term is es, the residual from the prediction of ys using the coefficients based on Xn and bn. Conclude that the new data change the results of least squares only if the new observation on y cannot be perfectly predicted using the information already in hand. buy
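The updating result in m58537 can be checked numerically. The sketch below (simulated data, my own variable names) verifies that the one-step update formula reproduces a full refit on all n + 1 observations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 3
Xn = rng.normal(size=(n, K))
yn = Xn @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# Least squares on the first n observations
XtX_inv = np.linalg.inv(Xn.T @ Xn)
bn = XtX_inv @ Xn.T @ yn

# One new observation (xs, ys)
xs = rng.normal(size=K)
ys = 2.0
es = ys - xs @ bn  # residual from predicting ys with the old coefficients

# Updating formula: bn,s = bn + (X'X)^{-1} xs es / (1 + xs'(X'X)^{-1} xs)
b_update = bn + (XtX_inv @ xs) * es / (1.0 + xs @ XtX_inv @ xs)

# Compare with a full refit on all n + 1 observations
X_all = np.vstack([Xn, xs])
y_all = np.append(yn, ys)
b_full = np.linalg.lstsq(X_all, y_all, rcond=None)[0]

assert np.allclose(b_update, b_full)
```

If es = 0 (the new y is perfectly predicted from the old coefficients), the update term vanishes and the estimator is unchanged, which is exactly the exercise's conclusion.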
m58550  This and the next two exercises are based on the test statistic usually used to test a set of J linear restrictions in the generalized regression model, where β̂ is the GLS estimator. Show that if Ω is known, if the disturbances are normally distributed, and if the null hypothesis, Rβ = q, is true, then this statistic is exactly distributed as F with J and n − K degrees of freedom. What assumptions about the regressors are needed to reach this conclusion? Need they be nonstochastic? buy
m58563  As a profit-maximizing monopolist, you face the demand curve Q = α + βP + ε. In the past, you have set the following prices and sold the accompanying quantities: Suppose that your marginal cost is 10. Based on the least squares regression, compute a 95 percent confidence interval for the expected value of the profit-maximizing output. buy
m58568  Assume that the distribution of x is f(x) = 1/θ, 0 ≤ x ≤ θ. In random sampling from this distribution, prove that the sample maximum is a consistent estimator of θ. Note: You can prove that the maximum is the maximum likelihood estimator of θ. But the usual properties do not apply here. Why not? [Hint: Attempt to verify that the expected first derivative of the log-likelihood with respect to θ is zero.] buy
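A small simulation illustrates the consistency claim in m58568: the sample maximum is always strictly below θ, yet it approaches θ as n grows. A sketch with an arbitrary θ = 5:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 5.0

# The sample maximum drifts up toward theta as n grows (consistency),
# but P(max < theta) = 1 for every finite n, so it is biased downward.
for n in (10, 1_000, 100_000):
    x = rng.uniform(0.0, theta, size=n)
    print(n, x.max())
```

The downward bias for every finite n is one symptom of why the usual MLE regularity conditions fail here: the support of f(x) depends on θ.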
m58576  Asymptotics. Explore whether averaging individual marginal effects gives the same answer as computing the marginal effect at the mean. buy
m58577  Asymptotics take on a different meaning in the Bayesian estimation context, since parameters do not “converge” to a population quantity. Nonetheless, in a Bayesian estimation setting, as the sample size increases, the likelihood function will dominate the posterior density. What does this imply about the Bayesian “estimator” when this occurs? buy
m58603  A binomial probability model is to be based on the following index function model: y* = α + βd + ε, y = 1 if y* > 0, y = 0 otherwise. The only regressor, d, is a dummy variable. The data consist of 100 observations that have the following: Obtain the maximum likelihood estimators of α and β, and estimate the asymptotic standard errors of your estimates. Test the hypothesis that β equals zero by using a Wald test (asymptotic t test) and a likelihood ratio test. Use the probit model and then repeat, using the logit model. Do your results change? buy
m58644  Carry out an ADF test for a unit root in the rate of inflation using the subset of the data in Table F5.1 since 1974.1. (This is the first quarter after the oil shock of 1973.) buy
m58645  Carry out the ADF test for a unit root in the bond yield data of Example 20.1. buy
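For the two ADF exercises, a minimal sketch of the underlying Dickey–Fuller regression may help (no lag augmentation, and simulated series rather than the inflation or bond-yield data): the t-statistic on the lagged level separates a random walk from a stationary AR(1).

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic: regress dy_t on a constant and y_{t-1};
    the unit-root null is gamma = 0 in dy_t = mu + gamma*y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    b = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ b
    s2 = e @ e / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

rng = np.random.default_rng(2)
T = 500
walk = np.cumsum(rng.normal(size=T))          # unit root present
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()  # stationary AR(1)

# The stationary series should give a t-statistic far below the roughly
# -2.86 Dickey-Fuller 5% critical value; the random walk should not.
print(df_tstat(walk), df_tstat(ar1))
```

Note that the statistic is compared with Dickey–Fuller critical values, not the usual t table, because the regressor is nonstationary under the null.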
m58656  Change in adjusted R2. Prove that the adjusted R2 in (3-30) rises (falls) when variable xk is deleted from the regression if the square of the t ratio on xk in the multiple regression is less (greater) than 1. buy
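The adjusted-R² result in m58656 is easy to verify numerically. This sketch (simulated data, my own helper function) checks the exact equivalence between the direction of the change on deletion and whether t² is below 1:

```python
import numpy as np

def fit(X, y):
    """OLS: return coefficients, standard errors, and adjusted R2."""
    n, K = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (n - K)
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    dev = y - y.mean()
    r2 = 1 - (e @ e) / (dev @ dev)
    adj = 1 - (1 - r2) * (n - 1) / (n - K)
    return b, se, adj

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)  # last slope truly 0

b, se, adj_full = fit(X, y)
t_last = b[2] / se[2]
_, _, adj_drop = fit(X[:, :2], y)  # delete the last regressor

# Adjusted R2 rises on deletion exactly when the squared t ratio is below 1
assert (adj_drop > adj_full) == (t_last ** 2 < 1)
```

The assertion holds for any data set, not just this draw, because the relation is an algebraic identity rather than a statistical approximation.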
m58657  Change in the sum of squares. Suppose that b is the least squares coefficient vector in the regression of y on X and that c is any other K × 1 vector. Prove that the difference in the two sums of squared residuals is (y − Xc)′(y − Xc) − (y − Xb)′(y − Xb) = (c − b)′X′X(c − b). Prove that this difference is positive. buy
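The sum-of-squares identity in m58657 can be confirmed directly on simulated data (a sketch; any full-column-rank X and any c ≠ b will do):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 40, 3
X = rng.normal(size=(n, K))
y = rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares coefficients
c = b + rng.normal(size=K)                # any other K x 1 vector

# (y - Xc)'(y - Xc) - (y - Xb)'(y - Xb)  vs  (c - b)'X'X(c - b)
lhs = (y - X @ c) @ (y - X @ c) - (y - X @ b) @ (y - X @ b)
rhs = (c - b) @ X.T @ X @ (c - b)

assert np.allclose(lhs, rhs)
assert lhs > 0  # positive whenever c != b and X has full column rank
```

The positivity is the numerical face of the claim that b minimizes the sum of squared residuals: any other coefficient vector does strictly worse.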
m58660  Check the identifiability of the parameters of the following model: buy
m58662  Christensen and Greene (1976) estimated a generalized Cobb–Douglas cost function of the form ln(C/Pf) = α + β ln Q + γ(ln Q)²/2 + δk ln(Pk/Pf) + δl ln(Pl/Pf) + ε. Pk, Pl, and Pf indicate unit prices of capital, labor, and fuel, respectively, Q is output, and C is total cost. The purpose of the generalization was to produce a U-shaped average total cost curve. (See Example 7.3 for discussion of Nerlove's (1963) predecessor to this study.) We are interested in the output at which the cost curve reaches its minimum, that is, the point at which (∂ ln C/∂ ln Q) | Q = Q* = 1, or Q* = exp[(1 − β)/γ]. The estimated regression model using the Christensen and Greene 1970 data is as follows, where estimated standard errors are given in parentheses: The estimated asymptotic covariance of the estimators of β and γ is −0.000187067, R2 = 0.991538, and e′e = 2.443509. Using the estimates given above, compute the estimate of this efficient scale. Compute an estimate of the asymptotic standard error for this estimate, then form a confidence interval for the estimated efficient scale. The data for this study are given in Table F5.2. Examine the raw data and determine where in the sample the efficient scale lies. That is, how many firms in the sample have reached this scale, and is this scale large in relation to the sizes of firms in the sample? buy
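The confidence interval in m58662 calls for the delta method applied to Q* = exp[(1 − β)/γ]. The point estimates and variances are not reproduced in this listing, so the sketch below uses hypothetical values for β̂, γ̂, and their variances; only the covariance −0.000187067 comes from the text.

```python
import numpy as np

# Hypothetical point estimates and variances (the listing omits the actual
# regression output); only the covariance -0.000187067 is from the text.
beta_hat, gamma_hat = 0.4, 0.1
V = np.array([[0.001, -0.000187067],
              [-0.000187067, 0.001]])  # est. covariance of (beta, gamma)

# Efficient scale: Q* = exp[(1 - beta)/gamma]
q_star = np.exp((1 - beta_hat) / gamma_hat)

# Delta method: se(Q*) = sqrt(g' V g), g the gradient of Q* in (beta, gamma)
grad = np.array([-q_star / gamma_hat,
                 -q_star * (1 - beta_hat) / gamma_hat ** 2])
se = np.sqrt(grad @ V @ grad)

ci = (q_star - 1.96 * se, q_star + 1.96 * se)
print(q_star, se, ci)
```

With the actual estimates from the study substituted in, the same three steps (point estimate, gradient, quadratic form) produce the interval the exercise asks for.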
m58672  Compare the fully parametric and semiparametric approaches to estimation of a discrete choice model such as the multinomial logit model discussed in Chapter 21. What are the benefits and costs of the semiparametric approach? buy
m58676  Compare the mean squared errors of b1 and b1.2 in Section 8.2.2. buy
 

contacts: oneplus2014@gmail.com