From djourdain at ait.asia Thu Feb 15 21:26:52 2018
From: djourdain at ait.asia (Damien Jourdain)
Date: Thu, 15 Feb 2018 12:26:52 +0200
Subject: [Limdep Nlogit List] Interpretation of RPL coefficients when using
lognormal distribution: why results from direct estimation and from
estimated parameters are not giving the same results?
Message-ID: <009301d3a647$8264f6a0$872ee3e0$@cirad.fr>
Dear All,
I am developing an RPL model using choice experiment data.
The model is as follows:
Calc; Ran(1234567)$
RPLOGIT
; Choices = 1,2,3
; Lhs = CHOICE, CSET, ALT
; Rhs = L_IMP, L_RAU, L_GAP, L_PGS,
FRESH, O_SUPE, O_SPEC, NPRICE
; Fcn = L_IMP(n), L_RAU(n), L_GAP(n),
L_PGS(n), FRESH(n), O_SUPE(n), O_SPEC(n), NPRICE(l)
; Halton
; Pds = csi
; Pts = 20
; Parameters
; Maxit = 150$
I have changed the price attribute to negative values, so I can use a
lognormal distribution for the price attribute.
I am getting the following results
-----------+----------------------------------------------------------------
           |                 Standard            Prob.      95% Confidence
    CHOICE |  Coefficient      Error        z    |z|>Z*        Interval
-----------+----------------------------------------------------------------
           |Random parameters in utility functions.........................
     L_IMP |   -1.11608***     .29152     -3.83  .0001   -1.68745    -.54470
     L_RAU |    1.49941***     .09880     15.18  .0000    1.30577    1.69304
     L_GAP |    1.82794***     .10487     17.43  .0000    1.62239    2.03349
     L_PGS |     .63730**      .25734      2.48  .0133     .13291    1.14168
     FRESH |    -.61318***     .05496    -11.16  .0000    -.72089    -.50546
    O_SUPE |     .43891***     .07567      5.80  .0000     .29060     .58721
    O_SPEC |    -.76256***     .17329     -4.40  .0000   -1.10221    -.42291
    NPRICE |   -1.61991***     .33228     -4.88  .0000   -2.27117    -.96865
           |Distns. of RPs. Std.Devs or limits of triangular...............
   NsL_IMP |    1.52346***     .31666      4.81  .0000     .90281    2.14410
   NsL_RAU |     .69380***     .13439      5.16  .0000     .43040     .95721
   NsL_GAP |     .01744        .24287       .07  .9427    -.45858     .49346
   NsL_PGS |     .95598***     .21017      4.55  .0000     .54405    1.36790
   NsFRESH |     .48681***     .05657      8.60  .0000     .37593     .59770
  NsO_SUPE |    1.65455***     .11307     14.63  .0000    1.43293    1.87616
  NsO_SPEC |    1.08890***     .12068      9.02  .0000     .85237    1.32544
  LsNPRICE |     .99479***     .18655      5.33  .0000     .62917    1.36041
-----------+----------------------------------------------------------------
If I am not wrong, I can calculate the population mean of the price
coefficient as E(beta) = exp(mu + sigma^2/2):
> CALC; LIST; EXP(-1.61991 + (0.99479^2)/2)$
[CALC] = .3246179
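To double-check the arithmetic behind this CALC step outside Nlogit, here is a minimal Python sketch of the log-normal mean formula, assuming (as the sign restored above suggests) that -1.61991 is the location and 0.99479 the scale of ln(beta) for NPRICE:

```python
import math

# If ln(beta) ~ N(mu, sigma^2), then E[beta] = exp(mu + sigma^2 / 2).
# mu and sigma are the NPRICE location and scale estimates from the output.
mu, sigma = -1.61991, 0.99479
mean_beta = math.exp(mu + sigma**2 / 2)
print(round(mean_beta, 4))  # approximately the CALC result .3246
```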
Then, I am using the procedure described in section N29.8.2 of the Nlogit
manual to examine the distribution of the parameters.
Matrix; bn = beta_i; sn =sdbeta_i $
CREATE; BIMP=0; BRAU=0; BGAP=0; BPGS=0; BFRE =0; BSUP=0; BSPE=0; BNPR=0 $
CREATE ;SIMP=0; SRAU=0; SGAP=0; SPGS=0; SFRE =0; SSUP=0; SSPE=0; SNPR=0 $
NAMELIST; betan = BIMP,BRAU, BGAP, BPGS, BFRE, BSUP, BSPE, BNPR$
NAMELIST; sbetan = SIMP,SRAU, SGAP, SPGS, SFRE, SSUP, SSPE, SNPR$
CREATE ; betan =bn$
CREATE ; sbetan = sn$
CALC; List; XBR(BNPR)$ ? calculate the average of the beta for nprice
> CALC; List; XBR(BNPR)$
[CALC] = .0257560
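As an independent sanity check on this comparison, one can simulate individual price coefficients from the estimated log-normal and verify that their average approaches the closed-form mean exp(mu + sigma^2/2). A hedged sketch, assuming the same NPRICE estimates as above:

```python
import math
import random

# Draw "individual" price coefficients beta_i = exp(z_i), z_i ~ N(mu, sigma^2),
# and compare their sample mean with the closed-form log-normal mean.
mu, sigma = -1.61991, 0.99479
random.seed(1234567)
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
sim_mean = sum(draws) / len(draws)
analytic_mean = math.exp(mu + sigma**2 / 2)
print(sim_mean, analytic_mean)  # the two averages should agree closely
```

If the average of the posterior individual-specific estimates is far from this value, the discrepancy lies in how the average is formed, not in the formula itself.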
My understanding is that these two figures should be close to one another.
Is there anything that could explain such a difference between these two
ways of estimating the results?
Any help is welcome.
Best,
Damien
From djourdain at ait.asia Thu Feb 15 22:41:16 2018
From: djourdain at ait.asia (Damien Jourdain)
Date: Thu, 15 Feb 2018 13:41:16 +0200
Subject: [Limdep Nlogit List] Interpretation of RPL coefficients when
using lognormal distribution: why results from direct estimation and from
estimated parameters are not giving the same results?
In-Reply-To: <60299233429c049c738de89661957fb7@uw.edu.pl>
References: <009301d3a647$8264f6a0$872ee3e0$@cirad.fr>
<60299233429c049c738de89661957fb7@uw.edu.pl>
Message-ID: <00ac01d3a651$e6e119b0$b4a34d10$@cirad.fr>
Dear Mik,
Thank you for the suggestion.
I tried that but there is still an important difference between the two.
> create; expBNPR = exp(BNPR)$
> calc; list; xbr(expBNPR)$
[CALC] = 1.0329355
whereas the direct estimation from the coefficients gives:
> > CALC; LIST; EXP(-1.61991 + (0.99479^2)/2)$
> [CALC] = .3246179
By the way, the more 'realistic' calculation is the direct estimation from the coefficients (at least it is of the same magnitude as the MNL coefficient for price).
Damien
-----Original Message-----
From: Mikołaj Czajkowski [mailto:mc at uw.edu.pl]
Sent: Thursday, February 15, 2018 12:34 PM
To: Damien Jourdain
Subject: Re: [Limdep Nlogit List] Interpretation of RPL coefficients when using lognormal distribution: why results from direct estimation and from estimated parameters are not giving the same results?
Dear Damien,
Shouldn't you have something like:
create; expBNPR = exp(BNPR)$
first?
Then
calc; list; xbr(expBNPR)$
Cheers,
Mik
On 2018-02-15 11:26, Damien Jourdain wrote:
From mrentrena at uco.es Fri Feb 16 00:32:26 2018
From: mrentrena at uco.es (Macario RODRÍGUEZ-ENTRENA)
Date: Thu, 15 Feb 2018 14:32:26 +0100
Subject: [Limdep Nlogit List] WTP space estimation issue
Message-ID:
Dear Professor Bill and Nlogit users:
I am experiencing some issues with a WTP-space model. In the first
stage, I estimated the model as a standard RPL with one interaction (in
the mean), and everything is fine, since the results are quite
informative for policy making. Nonetheless, to avoid the known issues
with assuring finite moments for WTP estimates, I am trying to estimate
the model with that interaction in WTP space. When I do, the results are
very inconsistent and changeable. Any suggestion would be very welcome,
since I am disconcerted.
Here I show the inconsistencies (from my point of view):
Thank you so much in advance
Any suggestion will be highly appreciated
Macario

RPL Output
RPLOGIT
; Lhs = ELE
; cHOICES =A,B,SQ
; RPL=DENS
; FCN=EMI500(N),EMI700(N),ERO14(N),ERO28(N),BIO15(N),BIO20(N),EURC[N]
; MODEL: U(A,B,SQ) = <0,0,ASCSQ>+EMI500*EMI500+EMI700*EMI700+ERO14*ERO14+
ERO28*ERO28+BIO15*BIO15+BIO20*BIO20+EURC*EURC
; pwt; Halton ; pts=1000
; ECM=(A,B)
; pds=PDS
; PAR; margin$
Iterative procedure has converged
Normal exit:64 iterations. Status=0, F=.2066905D+04
---------+------------------------------------------------------------------
         |               Standard            Prob.      95% Confidence
     ELE |  Coefficient    Error        z    |z|>Z*        Interval
---------+------------------------------------------------------------------
         |Random parameters in utility functions..........................
  EMI500 |   2.06088***    .24339      8.47  .0000    1.58384    2.53793
  EMI700 |   3.26465***    .25949     12.58  .0000    2.75606    3.77324
   ERO14 |   1.64629***    .24619      6.69  .0000    1.16376    2.12882
   ERO28 |   2.90635***    .25862     11.24  .0000    2.39946    3.41324
   BIO15 |    .72678***    .24384      2.98  .0029     .24886    1.20470
   BIO20 |    .69049***    .24990      2.76  .0057     .20069    1.18030
    EURC |   -.26158***    .01361    -19.22  .0000    -.28825    -.23492
         |Nonrandom parameters in utility functions.......................
   ASCSQ |   1.75895***    .35581      4.94  .0000    1.06158    2.45632
         |Heterogeneity in mean, Parameter:Variable.......................
EMI5:DEN |   -.71829***    .26300     -2.73  .0063   -1.23376    -.20282
EMI7:DEN |   -.85872***    .23874     -3.60  .0003   -1.32664    -.39079
ERO1:DEN |   -.11006       .25682      -.43  .6683    -.61341     .39330
ERO2:DEN |   -.43107*      .22683     -1.90  .0574    -.87565     .01351
BIO1:DEN |    .17070       .22053       .77  .4389    -.26152     .60292
BIO2:DEN |    .38344       .24232      1.58  .1136    -.09150     .85838
EURC:DEN |   0.0      .....(Fixed Parameter).....
         |Distns. of RPs. Std.Devs or limits of triangular................
NsEMI500 |    .06975       .56176       .12  .9012   -1.03129    1.17079
NsEMI700 |    .64275**     .29685      2.17  .0304     .06094    1.22457
 NsERO14 |    .57823**     .27421      2.11  .0350     .04080    1.11567
 NsERO28 |    .97815***    .20494      4.77  .0000     .57647    1.37983
 NsBIO15 |    .50831*      .26981      1.88  .0596    -.02050    1.03713
 NsBIO20 |    .88335***    .21532      4.10  .0000     .46133    1.30536
  NsEURC |    .16707***    .01419     11.77  .0000     .13926     .19488
         |Standard deviations of latent random effects....................
SigmaE01 |  -4.98259***    .39175    -12.72  .0000   -5.75041   -4.21477
---------+------------------------------------------------------------------
GMX Output
GMXLOGIT
; userp
; Lhs = ELE
; cHOICES = A,B,SQ
; GMX = DENS
; FCN=EMI500(N),EMI700(N),ERO14(N),ERO28(N),BIO15(N),BIO20(N),EURC[*N]
; MODEL: U(A,B,SQ) = <0,0,ASCSQ>+EMI500*EMI500+EMI700*EMI700+ERO14*ERO14+
ERO28*ERO28+BIO15*BIO15+BIO20*BIO20+EURC*EURC
; pwt; Halton ; pts=1000
; ECM=(A,B)
; pds=PDS
; PAR; margin$
Iterative procedure has converged
Normal exit:89 iterations. Status=0, F=.2518070D+04
---------+------------------------------------------------------------------
         |               Standard            Prob.      95% Confidence
     ELE |  Coefficient    Error        z    |z|>Z*        Interval
---------+------------------------------------------------------------------
         |Random parameters in utility functions..........................
  EMI500 |    .09175       .27163       .34  .7355    -.44064     .62414
  EMI700 |    .02970       .18100       .16  .8697    -.32506     .38446
   ERO14 |    .11045       .37497       .29  .7683    -.62448     .84538
   ERO28 |    .09214       .19681       .47  .6396    -.29359     .47788
   BIO15 |   -.00831       .29531      -.03  .9776    -.58710     .57049
   BIO20 |    .09172       .27327       .34  .7371    -.44388     .62733
    EURC |   1.0      .....(Fixed Parameter).....
         |Nonrandom parameters in utility functions.......................
   ASCSQ |  -2.40941***    .09551    -25.23  .0000   -2.59661   -2.22221
         |Heterogeneity in mean, Parameter:Variable.......................
EMI5:DEN |    .21937       .43417       .51  .6134    -.63160    1.07034
EMI7:DEN |    .21808       .31480       .69  .4885    -.39892     .83508
ERO1:DEN |    .30811       .53738       .57  .5664    -.74514    1.36135
ERO2:DEN |    .10758       .27105       .40  .6914    -.42368     .63883
BIO1:DEN |    .50247       .51724       .97  .3313    -.51129    1.51623
BIO2:DEN |    .26820       .44615       .60  .5477    -.60623    1.14263
EURC:DEN |   -.00690       .36556      -.02  .9849    -.72338     .70958
         |Distns. of RPs. Std.Devs or limits of triangular................
NsEMI500 |    .00035     12.69377       .00 1.0000  -24.87898   24.87968
NsEMI700 |    .00214      5.19543       .00  .9997  -10.18072   10.18500
 NsERO14 |    .00542      9.71863       .00  .9996  -19.04274   19.05358
 NsERO28 |    .00181      6.84284       .00  .9998  -13.40991   13.41354
 NsBIO15 |    .00223      7.20629       .00  .9998  -14.12184   14.12631
 NsBIO20 |    .00536      3.29627       .00  .9987   -6.45522    6.46594
  CsEURC |   0.0      .....(Fixed Parameter).....
         |Variance parameter tau in GMX scale parameter...................
TauScale |   3.36068***    .31618     10.63  .0000    2.74098    3.98038
         |Weighting parameter gamma in GMX model..........................
GammaMXL |   0.0      .....(Fixed Parameter).....
         |Coefficient on EURC in preference space form....................
Beta0WTP | -177.918      200.3959     -.89  .3746  -570.687    214.851
S_b0_WTP |   16.3943     366.1541      .04  .9643  -701.2545   734.0430
         |Sample Mean / Sample Std.Dev....................................
Sigma(i) |    .35100      2.53852      .14  .8900   -4.62441    5.32641
---------+------------------------------------------------------------------
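For comparison with the WTP-space estimates, the preference-space WTP implied by the RPL output above is simply the ratio of each attribute coefficient to the negative of the price coefficient. A minimal sketch, assuming (as the descending confidence-interval bounds in the output suggest) that the EURC coefficient is negative, -0.26158; attribute values are the RPL means reported above:

```python
# Point-estimate WTP in preference space: WTP_k = -beta_k / beta_price.
# Coefficients are the RPL means reported above; EURC is the price term.
beta = {"EMI500": 2.06088, "EMI700": 3.26465, "ERO14": 1.64629,
        "ERO28": 2.90635, "BIO15": 0.72678, "BIO20": 0.69049}
beta_price = -0.26158  # EURC, sign assumed from the CI bounds

wtp = {name: -b / beta_price for name, b in beta.items()}
for name, value in wtp.items():
    print(f"{name}: {value:.2f}")
```

Note that ratios of random coefficients need not have finite moments, which is exactly the motivation for estimating in WTP space rather than taking this ratio after the fact.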

From djourdain at ait.asia Fri Feb 16 01:33:20 2018
From: djourdain at ait.asia (Damien Jourdain)
Date: Thu, 15 Feb 2018 16:33:20 +0200
Subject: [Limdep Nlogit List] Interpretation of RPL coefficients when
using lognormal distribution: why results from direct estimation and from
estimated parameters are not giving the same results?
In-Reply-To: <0a52bfb4457f39078e40d9eac46008e2@uw.edu.pl>
References: <009301d3a647$8264f6a0$872ee3e0$@cirad.fr>
<60299233429c049c738de89661957fb7@uw.edu.pl>
<00ac01d3a651$e6e119b0$b4a34d10$@cirad.fr>
<0a52bfb4457f39078e40d9eac46008e2@uw.edu.pl>
Message-ID: <00f001d3a669$f0620950$d1261bf0$@cirad.fr>
Dear Mik,
Thank you.
I've looked again, and I think I found the mistake I made.
When creating the variables from the matrix, I forgot to add "Sample ; 1-1400". By failing to do so, I suppose the calculation of the average included all the zeros for the variable BNPR (rows 1401 to 13500, since I have 13500 rows). This results in an average that is much smaller than the reality!
I am now adding the line Sample ; 1-1400$ before getting the parameters from the matrix.
When I tried again with this statement, I found the value calculated from direct estimation quite close to the average of the posterior individual-specific estimates. If that is correct, there is no need to take the exponential of the coefficients.
> CALC; List; XBR(BNPR)$
[CALC] = .3199974
> create; expBNPR = exp(BNPR)$
> calc; list; xbr(expBNPR)$
[CALC] = 1.6429415
> CALC; LIST; exp(-1.95961 + (1.31682^2)/2)$
[CALC] = .3353426
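The bias from the missing Sample statement can be reproduced in a few lines: averaging a vector in which only the first n of N rows carry estimates, while the rest still hold the zeros from the CREATE initialisation, scales the true mean by n/N. A hedged illustration, assuming 1400 used rows out of 13500 as described above:

```python
# Illustrate the dilution: zeros left over from CREATE shrink the average.
n_used, n_total = 1400, 13500
true_mean = 0.32  # stand-in for the correct XBR over the used rows
values = [true_mean] * n_used + [0.0] * (n_total - n_used)
diluted_mean = sum(values) / n_total
print(diluted_mean)  # equals true_mean * n_used / n_total, about 0.0332
```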
Again, thank you for your help and interest.
Damien
-----Original Message-----
From: Mikołaj Czajkowski [mailto:mc at uw.edu.pl]
Sent: Thursday, February 15, 2018 3:21 PM
To: Damien Jourdain
Subject: Re: [Limdep Nlogit List] Interpretation of RPL coefficients when using lognormal distribution: why results from direct estimation and from estimated parameters are not giving the same results?
Dear Damien,
This is what I would expect: direct estimation using the coefficients is likely always better than estimation based on the posterior individual-specific estimates (even though asymptotically they should be equivalent).
Best regards,
Mik
On 2018-02-15 12:41, Damien Jourdain wrote:
From wgreene at stern.nyu.edu Sat Feb 17 01:29:14 2018
From: wgreene at stern.nyu.edu (William Greene)
Date: Fri, 16 Feb 2018 09:29:14 -0500
Subject: [Limdep Nlogit List] Interpretation of RPL coefficients when
using lognormal distribution: why results from direct estimation and from
estimated parameters are not giving the same results?
In-Reply-To: <00f001d3a669$f0620950$d1261bf0$@cirad.fr>
References: <009301d3a647$8264f6a0$872ee3e0$@cirad.fr>
<60299233429c049c738de89661957fb7@uw.edu.pl>
<00ac01d3a651$e6e119b0$b4a34d10$@cirad.fr>
<0a52bfb4457f39078e40d9eac46008e2@uw.edu.pl>
<00f001d3a669$f0620950$d1261bf0$@cirad.fr>
Message-ID:
Damien. That looks even better.
Note, you can use DSTAT with matrices directly; the statistics are
computed for the columns of the matrices. Note the example in my previous note.
Cheers
Bill Greene
On Thu, Feb 15, 2018 at 9:33 AM, Damien Jourdain wrote:
--
William Greene
Department of Economics
Stern School of Business, New York University
44 West 4 St., 7-90
New York, NY, 10012
URL: https://people.stern.nyu.edu
Email: wgreene at stern.nyu.edu
Ph. +1.212.998.0876
Editor in Chief: Journal of Productivity Analysis
Editor in Chief: Foundations and Trends in Econometrics
Associate Editor: Economics Letters
Associate Editor: Journal of Business and Economic Statistics
Associate Editor: Journal of Choice Modeling
From richard.turner at imarketresearch.com Sat Feb 17 07:20:21 2018
From: richard.turner at imarketresearch.com (Richard Turner)
Date: Fri, 16 Feb 2018 15:20:21 -0500
Subject: [Limdep Nlogit List] How to deal with large numbers of attributes?
Message-ID:
Greetings,
What is the best way to handle large numbers of attributes in discrete
choice experiments?
Is it better to do a *partial profile design* or some *two-step
approach*, such as conducting an "initial study" using a partial profile
design, then conducting a final study using the most important attributes
derived from the initial study (implicit in the second method would be
synthesizing the learnings from both studies to get some ranking of all
the attributes)?
I've done some searching, but haven't found any "defining" papers on the
subject.
Any advice and/or direction is greatly appreciated!
Regards,
Richard
From miq at wne.uw.edu.pl Sat Feb 17 07:33:23 2018
From: miq at wne.uw.edu.pl (Mikołaj Czajkowski)
Date: Fri, 16 Feb 2018 21:33:23 +0100
Subject: [Limdep Nlogit List] How to deal with large numbers of
attributes?
In-Reply-To:
References:
Message-ID: <026cc5017aedac1d9d14a71965da8973@wne.uw.edu.pl>
Dear Richard,
It seems to me that the answer to this question depends on the goal
of the modeller: whether he wants to learn a lot about the most
important attributes only, or to have some idea about all the attributes.
I am not sure a lot of concrete advice can be given in these kinds of
situations.
Cheers,
Mik
On 2018-02-16 21:20, Richard Turner wrote:
From david.hensher at sydney.edu.au Sat Feb 17 07:56:16 2018
From: david.hensher at sydney.edu.au (David Hensher)
Date: Fri, 16 Feb 2018 20:56:16 +0000
Subject: [Limdep Nlogit List] How to deal with large numbers of
attributes?
In-Reply-To: <026cc5017aedac1d9d14a71965da8973@wne.uw.edu.pl>
References:
<026cc5017aedac1d9d14a71965da8973@wne.uw.edu.pl>
Message-ID: <59E77ACA17894E2A8AEEDF90B5679F44@sydney.edu.au>
This relates to the literature on attribute non-attendance, where different attributes are relevant to different people; selecting a limited set initially, without strong evidence of the universally relevant set, is behaviourally concerning.
Depending on how many attributes (up to 20 or so is fine), one can also ask questions on which attributes are attended to. There are lots of papers on this by people such as Hensher, Louviere and Scarpa; see also the special issue a couple of years ago in the Journal of Choice Modelling on process heuristics, and especially the design of designs (DoD) approach initially developed by Hensher.
Sent from my iPhone
0418 433 057
David A Hensher
Note: hgroup at optusnet.com.au has been cancelled so instead use
hgroup at hensher.com.au David.hensher at bigpond.com
David.hensher at sydney.edu.au
These emails are linked so use one only
On 17 Feb 2018, at 7:33 am, Mikołaj Czajkowski wrote:
From miq at wne.uw.edu.pl Sat Feb 17 08:17:02 2018
From: miq at wne.uw.edu.pl (Mikołaj Czajkowski)
Date: Fri, 16 Feb 2018 22:17:02 +0100
Subject: [Limdep Nlogit List] How to deal with large numbers of
attributes?
In-Reply-To: <59E77ACA17894E2A8AEEDF90B5679F44@sydney.edu.au>
References:
<026cc5017aedac1d9d14a71965da8973@wne.uw.edu.pl>
<59E77ACA17894E2A8AEEDF90B5679F44@sydney.edu.au>
Message-ID:
Dear David,
As far as I understood Richard's question, option (1), a *partial profile
design*, means having many versions of the study using different attributes,
while option (2) would be an initial study like (1) plus a final study aimed
at learning more about the most prominent attributes. Attribute
non-attendance would be something to control for econometrically in each
case, (1) and (2), but does it help determine whether option (1) or (2) is
preferable?
Best regards,
Mik
On 2018-02-16 21:56, David Hensher via Limdep wrote:
> This relates to the literature on attribute non-attendance, where different attributes are relevant to different people, and selecting a limited set initially without strong evidence of the universal relevant set is behaviourally concerning.
>
> Depending on how many attributes, up to 20 or so is fine and one can ask questions on which attributes are attended to. Lots of papers on this by people such as Hensher, Louviere, Scarpa, and the special issue a couple of years ago in J of choice modelling on process heuristics and especially the design of designs (DoD) approach initially developed by Hensher
From david.hensher at sydney.edu.au Sat Feb 17 08:36:34 2018
From: david.hensher at sydney.edu.au (David Hensher)
Date: Fri, 16 Feb 2018 21:36:34 +0000
Subject: [Limdep Nlogit List] How to deal with large numbers of
attributes?
In-Reply-To:
References:
<026cc5017aedac1d9d14a71965da8973@wne.uw.edu.pl>
<59E77ACA17894E2A8AEEDF90B5679F44@sydney.edu.au>
Message-ID: <5A874EE2.70602@sydney.edu.au>
Dear Mik
It does sound a bit like the DoD approach, and I think it would be very
interesting to design a number of designs (each with subsets of
attributes but some common attributes) and test for differences, a bit
like the hierarchical information integration that some of us did many
years ago (Louviere, Hensher, Timmermans). Then settle on one design, if
that is wanted, or indeed jointly estimate across all designs.
David
On 17/02/2018 8:17 AM, Mikołaj Czajkowski wrote:
>
> Dear David,
>
> As far as I understood Richard's question, option (1) *partial profile
> design* is having many versions of the study using different
> attributes vs. option (2) would be an initial study like (1) + final
> study aimed at learning more about the most prominent attributes.
> Attribute nonattendance would be a thing to econometrically control
> in each case, (1) and (2), but does it help determine if option (1) or
> (2) is preferable?
>
> Best regards,
> Mik

DAVID HENSHER FASSA, PhD  Professor and Founding Director Institute of Transport and Logistics Studies  The University of Sydney Business School
THE UNIVERSITY OF SYDNEY
Rm 201, Building H73  The University of Sydney  NSW  2006 Street Address: 378 Abercrombie St, Darlington NSW 2008
T +61 2 9114 1871  F +61 2 9114 1863  M +61 418 433 057
E David.Hensher at sydney.edu.au  W sydney.edu.au/business/itls
Celebrating 25 years of ITLS: 1991-2016 https://protectau.mimecast.com/s/X0DNCvl0PoC2mMKJHQ51YG?domain=youtu.be
ERA Rank 5 (Transportation and Freight Services)
Co-Founder of the International Conference Series on Competition and Ownership of Land Passenger Transport (The 'Thredbo' Series) https://protectau.mimecast.com/s/le6HCwVLQmiAMY95UqD2yg?domain=thredboconferenceseries.org
Join the ITLS group on LinkedIn
Second edition of Applied Choice Analysis now available at www.cambridge.org/9781107465923
CRICOS 00026A
This email plus any attachments to it are confidential. Any unauthorised use is strictly prohibited. If you receive this email in error, please delete it and any attachments.
Please think of our environment and only print this email if necessary.
From Lixian.Qian at xjtlu.edu.cn Sun Feb 18 13:36:31 2018
From: Lixian.Qian at xjtlu.edu.cn (Lixian Qian)
Date: Sun, 18 Feb 2018 02:36:31 +0000
Subject: [Limdep Nlogit List] Hierarchical Bayesian Approach in NLogit
Message-ID:
Dear all,
I am wondering whether NLogit can support the hierarchical Bayesian (HB) approach when estimating DCMs. Thanks.
Best,
Lixian
From djourdain at ait.asia Tue Feb 20 18:45:31 2018
From: djourdain at ait.asia (Damien Jourdain)
Date: Tue, 20 Feb 2018 09:45:31 +0200
Subject: [Limdep Nlogit List] Interpretation of RPL coefficients when
using lognormal distribution: why results from direct estimation and from
estimated parameters are not giving the same results?
In-Reply-To:
References: <009301d3a647$8264f6a0$872ee3e0$@cirad.fr> <60299233429c049c738de89661957fb7@uw.edu.pl> <00ac01d3a651$e6e119b0$b4a34d10$@cirad.fr> <0a52bfb4457f39078e40d9eac46008e2@uw.edu.pl> <00f001d3a669$f0620950$d1261bf0$@cirad.fr>
Message-ID: <008401d3aa1e$cba5d520$62f17f60$@cirad.fr>
Dear Pr. Greene,
Thank you for this tip. It is easier to operate indeed!
Damien
-----Original message-----
From: Limdep [mailto:limdep-bounces at mailman.sydney.edu.au] On behalf of William Greene
Sent: Friday, February 16, 2018 4:29 PM
To: Limdep and Nlogit Mailing List
Subject: Re: [Limdep Nlogit List] Interpretation of RPL coefficients when using lognormal distribution: why results from direct estimation and from estimated parameters are not giving the same results?
Damien. That looks even better.
Note: you can use DSTAT with matrices directly; the statistics are computed for the columns of the matrices. See the example in my previous note.
Cheers
Bill Greene
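Greene's DSTAT-on-matrices tip (column-wise descriptives of the individual-specific estimates) can be sketched outside NLOGIT as well. A minimal Python/numpy analogue, where the matrix contents, dimensions, and parameter names are purely illustrative:

```python
import numpy as np

# Stand-in for NLOGIT's BETA_I matrix: one row per individual, one column
# per random parameter.  Values and names here are illustrative only.
rng = np.random.default_rng(1234567)
beta_i = rng.normal(loc=[-1.6, 1.5], scale=[1.0, 0.7], size=(1400, 2))

# DSTAT-style summary: descriptive statistics computed column by column.
for name, col in zip(["BNPR", "BRAU"], beta_i.T):
    print(f"{name}: mean={col.mean():.4f}  sd={col.std(ddof=1):.4f}  "
          f"min={col.min():.4f}  max={col.max():.4f}")
```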
On Thu, Feb 15, 2018 at 9:33 AM, Damien Jourdain wrote:
> Dear Mik,
>
> Thank you.
> I've looked again, and I think I found the mistake I made.
> When creating the variables from the matrix, I forgot to add the
> "Sample; 1-1400" statement. By failing to do so, I suppose the calculation of
> the average includes all the zeros for the variable BNPR (from 1401 to
> 13500 ... since I have 13500 rows). This results in an average that is
> much smaller than the reality!
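The dilution Damien describes is easy to reproduce. A minimal sketch (the value 0.32 is only a stand-in for the individual-specific price estimates):

```python
import numpy as np

# Reproducing the averaging bug described above: BETA_I fills only the first
# 1400 rows of a 13500-row dataset; the remaining rows stay zero.
bnpr = np.zeros(13500)
bnpr[:1400] = 0.32           # stand-in for the individual-specific estimates

wrong = bnpr.mean()          # averages 12100 padding zeros in with the data
right = bnpr[:1400].mean()   # the equivalent of running SAMPLE ; 1-1400 $ first

print(wrong, right)          # wrong is smaller by the factor 1400/13500
```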
>
> I am now adding the following line Sample ; 1-1400 $ before getting the
> parameters from the matrix.
>
> When I tried again with this statement, I find the value calculated from
> direct estimation to be quite close to the average of the posterior
> individual-specific estimates. If that is correct, there is no need to
> use the exponential of the coefficients.
>
> > CALC; List; XBR(BNPR)$
> [CALC] = .3199974
>
> > create; expBNPR = exp(BNPR)$
> > calc; list; xbr(expBNPR)$
> [CALC] = 1.6429415
>
> > CALC; LIST; exp(-1.95961 + (1.31682^2)/2)$
> [CALC] = .3353426
>
> Again, thank you for your help and interest.
>
> Damien
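The agreement Damien reports can also be checked numerically: the lognormal population mean E[beta] = exp(mu + sigma^2/2), evaluated at the quoted values (mu = -1.95961, sigma = 1.31682), should match the mean of simulated draws. The seed and number of draws below are purely illustrative:

```python
import math
import numpy as np

# Numeric check of the lognormal population mean E[beta] = exp(mu + sigma^2/2)
# using the NPRICE location and scale quoted in the message above.
mu, sigma = -1.95961, 1.31682

analytic = math.exp(mu + sigma**2 / 2)        # ~0.3353, as in the CALC output
draws = np.exp(np.random.default_rng(1234567).normal(mu, sigma, 1_000_000))
simulated = draws.mean()                      # agrees up to simulation noise

print(analytic, simulated)
```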
>
>
> -----Original message-----
> From: Mikołaj Czajkowski [mailto:mc at uw.edu.pl]
> Sent: Thursday, February 15, 2018 3:21 PM
> To: Damien Jourdain
> Subject: Re: [Limdep Nlogit List] Interpretation of RPL coefficients
> when using lognormal distribution: why results from direct estimation
> and from estimated parameters are not giving the same results?
>
>
> Dear Damien,
>
> This is what I would expect: direct estimation using the coefficients is
> likely always better than the one based on posterior
> individual-specific estimates (even though asymptotically they should be equivalent).
>
> Best regards,
> Mik
>
>
> > On 2018-02-15 12:41, Damien Jourdain wrote:
> > Dear Mik,
> >
> > Thank you for the suggestion.
> >
> > I tried that but there is still an important difference between the two.
> >
> > > create; expBNPR = exp(BNPR)$
> > > calc; list; xbr(expBNPR)$
> > [CALC] = 1.0329355
> >
> > As the direct estimation from the coefficients is giving :
> >> > CALC; LIST; EXP(-1.61991 + (0.99479^2)/2)$
> >> [CALC] = .3246179
> > By the way, the more 'realistic' figure is the direct
> > estimation from the coefficients (at least it is of the same
> > magnitude as the MNL coefficient for price).
> >
> >
> > Damien
> >
> >
> > -----Original message-----
> >
> > From: Mikołaj Czajkowski [mailto:mc at uw.edu.pl]
> > Sent: Thursday, February 15, 2018 12:34 PM
> > To: Damien Jourdain
> > Subject: Re: [Limdep Nlogit List] Interpretation of RPL coefficients
> > when using lognormal distribution: why results from direct estimation
> > and from estimated parameters are not giving the same results?
> >
> >
> > Dear Damien,
> >
> > Shouldn't you have something like:
> >
> > create; expBNPR = exp(BNPR)$
> >
> > first?
> >
> > Then
> > calc; list; xbr(expBNPR)$
> >
> > Cheers,
> > Mik
> >
> >
> >
> >
> > On 2018-02-15 11:26, Damien Jourdain wrote:
> >> Dear All,
> >>
> >>
> >>
> >> I am developing an RPL model using choice experiment data.
> >>
> >>
> >>
> >> The model is as follows:
> >>
> >>
> >>
> >> Calc; Ran(1234567)$
> >>
> >> RPLOGIT
> >>
> >> ; Choices = 1,2,3
> >>
> >> ; Lhs = CHOICE, CSET, ALT
> >>
> >> ; Rhs = L_IMP, L_RAU, L_GAP, L_PGS,
> >>
> >> FRESH, O_SUPE, O_SPEC, NPRICE
> >>
> >> ; Fcn = L_IMP(n), L_RAU(n), L_GAP(n),
> >>
> >> L_PGS(n), FRESH(n), O_SUPE(n), O_SPEC(n), NPRICE(l)
> >>
> >> ; Halton
> >>
> >> ; Pds = csi
> >>
> >> ; Pts = 20
> >>
> >> ; Parameters
> >>
> >> ; Maxit = 150$
> >>
> >>
> >>
> >> I have changed the price attribute to negative values, so I can use
> >> a lognormal distribution for the price attribute.
> >>
> >> I am getting the following results
> >>
> >>
> >>
> >> -----------+---------------------------------------------------------------
> >>            |                 Standard            Prob.      95% Confidence
> >>     CHOICE |  Coefficient      Error        z    |z|>Z*        Interval
> >> -----------+---------------------------------------------------------------
> >>            |Random parameters in utility functions..........................
> >>      L_IMP |   -1.11608***     .29152    -3.83   .0001   -1.68745   -.54470
> >>      L_RAU |    1.49941***     .09880    15.18   .0000    1.30577   1.69304
> >>      L_GAP |    1.82794***     .10487    17.43   .0000    1.62239   2.03349
> >>      L_PGS |     .63730**      .25734     2.48   .0133     .13291   1.14168
> >>      FRESH |    -.61318***     .05496   -11.16   .0000    -.72089   -.50546
> >>     O_SUPE |     .43891***     .07567     5.80   .0000     .29060    .58721
> >>     O_SPEC |    -.76256***     .17329    -4.40   .0000   -1.10221   -.42291
> >>     NPRICE |   -1.61991***     .33228    -4.88   .0000   -2.27117   -.96865
> >>            |Distns. of RPs. Std.Devs or limits of triangular................
> >>    NsL_IMP |    1.52346***     .31666     4.81   .0000     .90281   2.14410
> >>    NsL_RAU |     .69380***     .13439     5.16   .0000     .43040    .95721
> >>    NsL_GAP |     .01744        .24287      .07   .9427    -.45858    .49346
> >>    NsL_PGS |     .95598***     .21017     4.55   .0000     .54405   1.36790
> >>    NsFRESH |     .48681***     .05657     8.60   .0000     .37593    .59770
> >>   NsO_SUPE |    1.65455***     .11307    14.63   .0000    1.43293   1.87616
> >>   NsO_SPEC |    1.08890***     .12068     9.02   .0000     .85237   1.32544
> >>   LsNPRICE |     .99479***     .18655     5.33   .0000     .62917   1.36041
> >> -----------+---------------------------------------------------------------
> >>
> >>
> >>
> >> If I am not wrong, I can calculate the population mean of the price
> >> E(beta) = exp(beta + sigma^2/2)
> >>
> >> > CALC; LIST; EXP(-1.61991 + (0.99479^2)/2)$
> >>
> >> [CALC] = .3246179
> >>
> >>
> >>
> >> Then, I am using the procedure described in section N29.8.2 of the
> >> Nlogit manual to examine the distribution of the parameters.
> >>
> >> Matrix; bn = beta_i; sn =sdbeta_i $
> >>
> >> CREATE; BIMP=0; BRAU=0; BGAP=0; BPGS=0; BFRE =0; BSUP=0; BSPE=0;
> >> BNPR=0 $
> >>
> >> CREATE ;SIMP=0; SRAU=0; SGAP=0; SPGS=0; SFRE =0; SSUP=0; SSPE=0;
> >> SNPR=0 $
> >>
> >> NAMELIST; betan = BIMP,BRAU, BGAP, BPGS, BFRE, BSUP, BSPE, BNPR$
> >>
> >> NAMELIST; sbetan = SIMP,SRAU, SGAP, SPGS, SFRE, SSUP, SSPE, SNPR$
> >>
> >> CREATE ; betan =bn$
> >>
> >> CREATE ; sbetan = sn$
> >>
> >>
> >>
> >> CALC; List; XBR(BNPR)$ ? calculate the average of the beta for
> >> nprice
> >>
> >>
> >>
> >> > CALC; List; XBR(BNPR)$
> >>
> >> [CALC] = .0257560
> >>
> >>
> >>
> >> My understanding is that these two figures should be close to one
> >> another.
> >> Is there anything that could explain such a difference between these
> >> two ways of estimating the results?
> >>
> >>
> >>
> >> Any help is welcome!
> >>
> >>
> >>
> >> Best,
> >>
> >>
> >>
> >> Damien
> >>
> >>
> >>
>

William Greene
Department of Economics
Stern School of Business, New York University
44 West 4 St., 790
New York, NY, 10012
URL: https://protectau.mimecast.com/s/vOgiCwVLQmiPLk3LhVsScn?domain=people.stern.nyu.edu
Email: wgreene at stern.nyu.edu
Ph. +1.212.998.0876
Editor in Chief: Journal of Productivity Analysis
Editor in Chief: Foundations and Trends in Econometrics
Associate Editor: Economics Letters
Associate Editor: Journal of Business and Economic Statistics
Associate Editor: Journal of Choice Modeling
_______________________________________________
Limdep site list
Limdep at mailman.sydney.edu.au
http://limdep.itls.usyd.edu.au
From siibawei2013 at gmail.com Wed Feb 21 16:55:34 2018
From: siibawei2013 at gmail.com (Alhassan Siiba)
Date: Wed, 21 Feb 2018 13:55:34 +0800
Subject: [Limdep Nlogit List] Problems with running MNL in NLOGIT v.5
Message-ID:
Dear All,
I am trying to run an MNL model in NLOGIT v. 5. However, I
keep getting different error messages in that regard. Some of the error
messages are:
"Error 101: LOGIT - one of the cells (outcomes) has no observations".
"Error 221: The data on the LHS variable appear not to be coded 0,1,2.."
I am new to NLOGIT. I am using SP data, with the dependent variable being
RANK for different travel modes and the independent variables being the
attributes of the travel modes.
Please, can someone help me out? Thank you.
Kind regards, Siiba.
From ssingh10 at unl.edu Thu Feb 22 05:36:31 2018
From: ssingh10 at unl.edu (Sunil Kumar Singh)
Date: Wed, 21 Feb 2018 18:36:31 +0000
Subject: [Limdep Nlogit List] Mediation Question
Message-ID:
Limdep Community,
I am trying to run a probit model as specified below. I also want to check whether the impact of X1, X2, and X1*X2 is mediated through X4. However, I am not sure how to specify the model or run an ancillary test for this. I would appreciate any help.
skip$
Calc ; ran (12345)$
probit ; lhs = Y
; rhs = one,X1,X2,X3, X1*X2,X4
; Panel
; pds = COUNT
; rpm
; fcn = X4(n)
; pts = 100
; Halton$
Thanks,
Sunil