When creating complex designs, I typically run several instances of Ngene using different values for ;rdraws and ;rep. Despite using the same ;rseed, the efficiency measure changes when I evaluate a previously saved design or use it as a starting value. Is this a bug, or am I overlooking something? I would appreciate any suggestions. Thank you very much in advance.
Here is an example of the syntax I am using:
design
; alts(MNL) = alt1*, alt2*, alt3
; alts(MXL) = alt1*, alt2*, alt3
; rows = 20
; block = 5
; bseed = 179424673
; rseed = 179424673
?; bdraws = sobol(1000)
; rdraws = sobol(300)
; alg = swap(random = 100, swap = 1, swaponimprov = 20, reset = 10, resetinc = 10) ?, stop = noimprov(10000 iterations)
; rep = 300
; eff = 7.31101*MNL(mnl,d) + 1.41841758*MXL(rppanel,d)
; start = species - main 1 - sea birds.ngd
; con
; model(MNL):
U(alt1) = b_population.dummy[-0.3270|0.3386|0.2849]*population[0,2,3,1]
+ b_conservation_focus.dummy[-0.1235|-0.0292]*conservation_focus[0,2,1]
+ b_recreation_restrictions.dummy[0.0703|0.0794]*recreation_restrictions[1,2,0]
+ b_cost[-0.002101]*cost[5,10,25,50,100,150,250,500] /
U(alt2) = b_population.dummy*population
+ b_conservation_focus.dummy*conservation_focus
+ b_recreation_restrictions.dummy*recreation_restrictions
+ b_cost*cost /
U(alt3) = b_sq[-0.9669]
; model(MXL):
U(alt1) = b_population.dummy[n,-0.5173,1.0012|n,0.3,0.6073|n,0.4,1.0488]*population[0,2,3,1]
+ b_conservation_focus.dummy[n,-0.1677,0.4624|n,-0.0957,0.4822]*conservation_focus[0,2,1]
+ b_recreation_restrictions.dummy[n,0.0852,0.9283|n,0.0929,1.3373]*recreation_restrictions[1,2,0]
+ b_cost[n,-0.003906,0.006927]*cost[5,10,25,50,100,150,250,500] /
U(alt2) = b_population.dummy*population
+ b_conservation_focus.dummy*conservation_focus
+ b_recreation_restrictions.dummy*recreation_restrictions
+ b_cost*cost /
U(alt3) = b_sq[n,-2.5542,2.4839]
$
Re: RSEED and evaluating saved designs
There are potentially three processes that are randomised:
* Draws for random parameters, which can be set via ;rdraws and can be fixed using ;rseed
* Draws for Bayesian priors, which can be set via ;bdraws and can be fixed using ;bseed
* Generation of a random sample (for panel models only), which can be set via ;rep
So it is ;rep that causes the variation in outcomes: in each run, a random sample of virtual respondents is generated, as required for evaluating panel mixed logit models. If you evaluated rp instead of rppanel, you would see that the outcome remains the same, but of course rppanel is the appropriate model and I would not change it to rp. It is currently not possible to fix the random sample generation, but I can see that it would be useful to do so. I will put it on the list for future development, thanks for letting us know.
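To see this for yourself, you could evaluate the saved design with rp instead of rppanel. Below is a minimal sketch based on your own syntax; I am assuming here that you evaluate the saved design via ;eval rather than using it as a starting point with ;start, and I have dropped the MNL component so only the mixed logit measure is computed:

design
; alts(MXL) = alt1*, alt2*, alt3
; rows = 20
; rseed = 179424673
; rdraws = sobol(300)
; eval = species - main 1 - sea birds.ngd ? assumption: evaluating the saved design, not optimising
; eff = MXL(rp,d) ? rp is cross-sectional, so no ;rep sample of respondents is drawn
; model(MXL):
U(alt1) = b_population.dummy[n,-0.5173,1.0012|n,0.3,0.6073|n,0.4,1.0488]*population[0,2,3,1]
+ b_conservation_focus.dummy[n,-0.1677,0.4624|n,-0.0957,0.4822]*conservation_focus[0,2,1]
+ b_recreation_restrictions.dummy[n,0.0852,0.9283|n,0.0929,1.3373]*recreation_restrictions[1,2,0]
+ b_cost[n,-0.003906,0.006927]*cost[5,10,25,50,100,150,250,500] /
U(alt2) = b_population.dummy*population
+ b_conservation_focus.dummy*conservation_focus
+ b_recreation_restrictions.dummy*recreation_restrictions
+ b_cost*cost /
U(alt3) = b_sq[n,-2.5542,2.4839]
$

With rp and a fixed ;rseed, repeated evaluations should return the same D-error; switching back to rppanel reintroduces the ;rep sample and hence the run-to-run variation.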
Michiel