Hi there,
I designed and conducted a DCE with health workers in Indonesia to assess their employment preferences. Unfortunately, when analysing the results in NLOGIT with an MNL model, the attribute coefficients are all weakly negative (for both the unforced and forced choice questions). I've copied the design, analysis code and results below; I would really appreciate it if anyone could let me know whether there are any errors or issues that might explain these results.
Many thanks
Design
;alts = jobA*, jobB*, neither
;rows = 24
;block = 2
;eff = (mnl,d,mean)
;bdraws = sobol(5000)
;model:
U(jobA) = a[(n,3.26747,.33925)] + b1.effects[(n,.11854,.03421)] * supervis[1,2] + b2.effects[(n,.05645,.03403)] * training[1,2] + b3[(n,-.00019,.00018)] * incent[25,100,300,500] + b4.effects[(n,-.34741,.07350)|(n,.16740,.07204)|(n,.20636,.07173)] * endrsmnt[1,2,3,4] + b5.effects[(n,-.06413,.03539)] * employme[1,2] /
U(jobB) = a + b1*supervis + b2*training + b3*incent + b4*endrsmnt + b5*employme
$
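For reference, the .effects suffix in the utility functions means the qualitative attributes are effects coded rather than dummy coded, which is why the four-level endrsmnt attribute carries three priors. Below is a minimal Python sketch of that coding, as an illustration only (not Ngene code; effects_code is a hypothetical helper):

import numpy as np

def effects_code(levels, n_levels):
    # Effects coding: level j (1-based) gets a 1 in column j, the last level
    # gets -1 in every column, and all other entries are 0.
    levels = np.asarray(levels)
    cols = np.zeros((levels.size, n_levels - 1))
    for j in range(n_levels - 1):
        cols[levels == j + 1, j] = 1.0
    cols[levels == n_levels, :] = -1.0
    return cols

# The four endorsement levels map to three effects-coded columns:
print(effects_code([1, 2, 3, 4], 4))
# [[ 1.  0.  0.]
#  [ 0.  1.  0.]
#  [ 0.  0.  1.]
#  [-1. -1. -1.]]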
Analysis of unforced choice questions
|-> REJECT ;cset=2 $
NLOGIT
;lhs = choice, cset, altij
;choices = jobA, jobB, neither
;checkdata
;model:
U(jobA) = supervis*supervis + training*training + incent*incent + endrsmnt*endrsmnt + award*award + report*report + employme*employme /
U(jobB) = supervis*supervis + training*training + incent*incent + endrsmnt*endrsmnt + award*award + report*report + employme*employme /
U(neither) = Neither
$
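For anyone cross-checking the specification, this is roughly the conditional logit (MNL) likelihood that the NLOGIT command above maximises, written as a conceptual Python sketch; the array names X, neither and y are assumptions for the illustration, not NLOGIT variables:

import numpy as np

def mnl_loglik(beta, asc_neither, X, neither, y):
    # X: (n_sets, n_alts, n_attrs) attribute levels; neither: (n_sets, n_alts)
    # 0/1 indicator for the opt-out alternative; y: index of the chosen alternative.
    v = X @ beta + asc_neither * neither          # systematic utilities
    v = v - v.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return np.log(p[np.arange(len(y)), y]).sum()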
Results
---------------------------------------
Response data are given as ind. choices
Number of obs.= 5652, skipped 0 obs
--------+--------------------------------------------------------------------
CHOICE| Coefficient  Standard Error  z  Prob.|z|>Z*  95% Confidence Interval (low, high)
--------+--------------------------------------------------------------------
SUPERVIS| -.01418 .01551 -.91 .3604 -.04458 .01621
TRAINING| .01058 .01546 .68 .4938 -.01973 .04089
INCENT| -.00090*** .8518D-04 -10.60 .0000 -.00107 -.00074
ENDRSMNT| -.04306 .03087 -1.40 .1630 -.10355 .01744
AWARD| -.00936 .03108 -.30 .7632 -.07027 .05155
REPORT| .09658*** .03020 3.20 .0014 .03739 .15576
EMPLOYME| -.06147*** .01523 -4.04 .0001 -.09132 -.03161
NEITHER| -.82951*** .03703 -22.40 .0000 -.90208 -.75693
--------+--------------------------------------------------------------------
Michiel Bliemer
Re: Weak DCE results - seeking review
I do not see any specific issues in the design.
There can be multiple reasons:
1. The sample size is too small; a large sample is needed especially for qualitative (dummy/effects-coded) attributes.
2. Some attributes are simply not very important to people.
3. Your design loses efficiency if your priors are (very) different from the actual parameter estimates.
4. There is significant heterogeneity in your sample, i.e. if some people have a negative parameter and others a positive one, then on average it is about zero (a rough simulation of this is sketched below). Estimating, for example, a latent class model may detect such differences.
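A rough Python simulation of point 4, just as a sketch with made-up numbers (not from any real data): half the respondents value an attribute positively and half negatively, and the pooled MNL coefficient comes out close to zero even though nobody is indifferent.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, b = 4000, 1.5
beta_i = np.where(rng.random(n) < 0.5, b, -b)       # two segments: +1.5 and -1.5
x = rng.choice([0.0, 1.0], size=(n, 2))             # one attribute, two alternatives
u = beta_i[:, None] * x + rng.gumbel(size=(n, 2))   # utilities with Gumbel errors
y = u.argmax(axis=1)                                # chosen alternative per respondent

def negll(beta):
    # Pooled MNL: a single coefficient for everyone
    v = beta * x
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n), y]).sum()

print(minimize_scalar(negll).x)  # pooled estimate comes out near zero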
Michiel
Tcgadsden86
Re: Weak DCE results - seeking review
Great, thanks for your ideas, Michiel.
We had a reasonably large sample (~400), so I hope a mixed logit or latent class model may provide further answers.
Tom