[Limdep Nlogit List] Wts= option in Random Parameter Models. Consistent? Correct Covariance Estimates?

Mikołaj Czajkowski miq at wne.uw.edu.pl
Tue Feb 2 20:51:16 AEDT 2016


Dear Stephen,

Thanks for starting this discussion - I am also looking forward to 
seeing comments on this.

To my understanding, weighting does not affect consistency. Note, 
however, that if you rescale the weights (say, multiply them all by 10), 
the maximum of the LL function stays in the same place (parameter 
estimates, choice probabilities, etc. will not change), but the Hessian 
scales with the weights. As a result, rescaling your weights will give 
you different standard errors. I was wondering whether there is a rule 
along the lines of 'if the arithmetic/geometric/... mean of the weights 
equals one (or the weights sum to the number of observations, ...), the 
standard errors are correct', but I could not find anything about it. 
Perhaps such a rule does not exist, or I did not search for it carefully 
enough. Instead, possible ways to correct for the weight-scaling issue 
are bootstrapping the standard errors or using robust (sandwich) 
standard errors, which are invariant to rescaling of the weights.
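
To illustrate the point numerically (this is just a quick Python/scipy 
sketch with simulated data and made-up weights, not NLogit code or the 
way NLogit computes anything): a weighted binary logit fitted below 
shows that multiplying all weights by 10 leaves the estimates unchanged, 
shrinks the inverse-Hessian standard errors by roughly sqrt(10), and 
leaves the sandwich standard errors untouched.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
w = rng.uniform(0.5, 1.5, size=n)   # hypothetical sampling weights

def neg_wll(beta, w):
    # negative weighted log-likelihood of a binary logit
    xb = X @ beta
    return -np.sum(w * (y * xb - np.log1p(np.exp(xb))))

def fit(w):
    beta = minimize(neg_wll, np.zeros(2), args=(w,), method="BFGS").x
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    H = -(X * (w * p * (1.0 - p))[:, None]).T @ X   # Hessian of the weighted LL
    s = X * (w * (y - p))[:, None]                  # weighted scores
    invI = np.linalg.inv(-H)
    se_hess = np.sqrt(np.diag(invI))                           # inverse-Hessian SEs
    se_sand = np.sqrt(np.diag(invI @ (s.T @ s) @ invI))        # sandwich SEs
    return beta, se_hess, se_sand

for c in (1.0, 10.0):   # rescale the weights by c
    beta, se_h, se_s = fit(c * w)
    print("c =", c, "beta =", beta.round(3),
          "Hessian SE =", se_h.round(3), "sandwich SE =", se_s.round(3))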

Best,

     Mik




On 2016-02-01 21:40, Stephen McLaughlin wrote:
> Hi Limdep/NLogit Listserv,
>
> I'm curious about the use of sampling weights in random parameter models in
> NLogit.
>
> Solon, Haider, and Wooldridge (2011) lay out situations where it is
> advisable to use a weighted estimation approach, and note that when
> sampling is not independent of the dependent variable, conditional on the
> explanatory variables, one needs to use weighted estimation to generate
> consistent parameter estimates. It seems to me that many cases would fit
> this description, e.g. non-participation in the US National Survey on
> Drug Use and Health is likely related to whether an individual has a
> history of drug use.
>
> Inclusion of Wts seems reasonable in many cases; however, in Applied
> Choice Analysis (p. 854) there is a brief paragraph about the use of
> choice-based weights in the estimation of ML models and how NLogit
> attempts to handle these situations. The paragraph concludes 'we warn
> analysts who might unwittingly assume that choice-based weights apply
> without question to ML models'. Choice-based weights strike me as very
> similar to sampling weights that are adjusted to match a target
> population total, where the population total happens to be an outcome of
> interest.
>
> My question then is: Is inclusion of sampling weights 'reasonable' in
> simulation-based models? Are there particular issues that an analyst should
> be wary of? In general, will inclusion of sampling weights in mixed logit,
> random parameters ordered probit, etc. in NLogit still produce consistent
> parameter estimates, and will the standard errors be 'correct'?
>
> Any advice or references would be greatly appreciated. I can also provide a
> specific example if it helps.
>
> Cite:
> Solon et al. 2011 *What Are We Weighting For?*
> http://www.nber.org/papers/w18859


