Abstract

Using multiple choice tasks per respondent in discrete choice experiment studies increases the amount of available information. However, respondents’ learning and fatigue may lead to changes in the preference (taste) parameters of the observed utility function, as well as in the variance of its error term (scale); these ordering effects need to be controlled for to avoid potential bias. A sizable body of empirical research offers mixed evidence on whether such ordering effects are observed. We point to a significant component explaining these differences: we show how accounting for unobservable preference and scale heterogeneity can influence the magnitude of observed ordering effects.