Elsevier

Economic Analysis and Policy

Volume 70, June 2021, Pages 394-413

Modelling economic policy issues
Examining ordering effects and strategic behaviour in a discrete choice experiment

https://doi.org/10.1016/j.eap.2021.03.005

Abstract

This study explores ordering effects in close association with strategic behaviour in a discrete choice experiment surveying preferences for improvements in cyclone warning services. To examine strategic behaviour, we assign respondents to assumed non-strategic and strategic groups using their answers to two questions on their beliefs about the consequentiality of the survey and the payment obligation. The data from each group are analysed non-parametrically and parametrically to investigate ordering effects in their preferences. Comparison of results between the two groups suggests that the preferences of possibly strategic respondents are relatively unstable over a sequence of six choice tasks. It appears that these respondents could have behaved strategically from the fourth task onwards, diverting their responses from the choices of the non-strategic respondents. This finding may indicate that strategic behaviour is a possible cause of position-dependent ordering effects in the repeated-question format.

Introduction

Discrete choice experiments (DCEs) have been commonly used to value a range of multi-attribute public goods and services (Johnston et al., 2017). In a standard DCE survey, each respondent is requested to make choices in repeated valuation tasks, such that more information on respondents’ preferences can be collected from each DCE survey relative to a survey containing a single valuation question. However, the additional information elicited by the repeated-question format is challenged by the body of evidence for ordering effects (Cao et al., 2018, Day et al., 2012, Day and Pinto Prades, 2010, Holmes and Boyle, 2005, Ladenburg and Olsen, 2008, McNair et al., 2011, Scheufele and Bennett, 2012), which contends that stated preferences may change when the valuation questions are presented in a different order.1 In relation to the ordering of choice tasks, Day et al. (2012) suggest that there are two main types of ordering effects: (1) position-dependent ordering effect and (2) precedent-dependent ordering effect. The first represents the changes in respondents’ preferences relating to the position of a choice task in the series of choice tasks. The second type of ordering effect refers to the changes in respondents’ stated preferences relating to features of the choice alternatives in previous choice tasks, which can be the first choice task or the best or worst option in the range of previous choice tasks.

The position-dependent ordering effect casts doubt on the standard assumption that preferences are stable across a sequence of discrete choice questions (Day et al., 2012, Day and Pinto Prades, 2010, McNair et al., 2011). This type of ordering effect presents stated preference (SP) practitioners with a serious issue: responses to a series of repeated choice questions may not reveal ‘true’ preferences if respondents’ choices change when the same questions are presented in a different position.

The precedent-dependent ordering effect, however, “should not necessarily be taken as evidence of some inherent problem” with using the repeated choice question format in a DCE exercise (Day et al., 2012, p. 89). The marketing literature has demonstrated that consumers’ purchasing decisions are based on reference prices, which are shaped by consumers’ prior experience and the current purchase environment (Mazumdar et al., 2005). Putler (1992) claims that reference price effects can be tested empirically, and does so in a study using weekly retail egg sales data from Southern California. Isoni (2011) also suggests that the effects of the best or worst deal on purchasing decisions appear intuitively appealing because of their resemblance to everyday transactions. For example, the feeling of disappointment at finding that a product we have just bought is available at a cheaper price is a common experience (Isoni, 2011). Nevertheless, Day et al. (2012) suggest that the presence of precedent-dependent ordering effects should be addressed, since they may affect willingness to pay (WTP) values estimated from a series of choice questions.

The issue of ordering effects is linked to strategic behaviour, which Day et al. (2012) suggest is a possible cause of ordering effects. It has been recognised that some respondents will not state their true preferences when they believe that they might enhance their utility or well-being by not doing so. Some respondents could have an incentive to understate their value for the goods in SP surveys to encourage the provision of the goods at a low price. In other circumstances, respondents who believe that they will not be required to actually pay the amounts they state may overstate their values to promote the provision of the goods. If many strategic respondents act in a similar manner, their responses will bias the estimated values (Boyle, 2003). In DCE surveys, a series of choice questions could provide opportunities for respondents to develop their strategic responses as they become more aware of the strategic opportunities after each choice task (strategic learning) (Scheufele and Bennett, 2012). Carson and Groves (2007) also point out that when facing a series of choice options offering the non-market good at different prices and quality levels, respondents might not believe in the credibility of those options; they may therefore answer strategically to manipulate the survey results to their advantage. In this paper, respondents with possibly strategic behaviour are therefore identified, and their choices are examined in close association with the issue of ordering effects.

Strategic behaviour that may bias WTP estimates is a concern in the application of SP surveys (Milon, 1989, Throsby and Withers, 1986). Meginnis et al. (2018) find that 27% of respondents misrepresent their preferences, revealing evidence of strategic bias in DCEs. Concerns about strategic behaviour increase when the good under consideration is a public good (Day et al., 2012, Milon, 1989, Throsby and Withers, 1986), whose key feature is that only one level can be provided to all people. Some respondents could attempt to choose an option that they think has a reasonable chance of “winning”, even when this excludes their most preferred option (Bennett and Blamey, 2001). By making choices in such a way, they respond strategically and do not reveal their true preferences. In DCE surveys, it might be expected that the complexity of choice tasks with multiple attributes and alternatives would make it difficult for respondents to form response strategies with which to bias their answers. However, despite the complexity required to behave strategically under an attribute-based decision rule, Meginnis et al. (2018) show that significant numbers of respondents are able to do so. Carson and Groves (2011) suggest that all respondents have to do is act as if they are more (or less) price sensitive than they actually are when they believe that they would gain from their responses. Using a case study in transportation, Lu et al. (2008) show that making choice tasks more complex (i.e. adding more attributes to the DCE) does not significantly reduce strategic bias, but contributes to a higher error variance in the responses. They reason that respondents make more errors in the complex design, but their valuation of the good or service being valued is not affected.

Concerning the identification of strategic behaviour, Mitchell and Carson (1989) suggest that respondents’ strategies in SP studies can be a function of two factors: (A) the respondents’ expectations about the provision of the good or service under valuation and (B) the respondents’ perceived payment obligation. The key distinction within (A) is whether or not respondents believe that the survey’s results will potentially influence the related agency’s decisions to provide the good. SP practitioners avoid (or should avoid) any hint that the good they are trying to value is certain to be provided; and it should be explained to respondents that the provision of the good would depend on the results of the survey (Mitchell and Carson, 1989). Carson and Groves (2007) indicate that given the amount of effort investigators expend on the survey, most respondents would believe that the provision of the good under consideration depends on their response to the survey.

In relation to perceived payment obligation (B), it is standard practice in DCE survey design to present more than one cost level, which can leave respondents uncertain about the amount they would have to pay (Carson and Groves, 2007). It would be difficult to convince respondents that they would have to pay exactly what they state in a hypothetical DCE survey; however, the idea that their payments may be in some way related to their stated WTP should be credible to them. Since respondents are uncertain about the amount of their payment, they are assumed to make decisions according to an expected payment calculated using a set of perceived probabilities associated with each of the cost levels presented in the survey. Differences in respondents’ perceived probabilities would reflect differences in their beliefs and in other factors such as risk aversion. The interpretation of respondents’ belief in the payment obligation therefore corresponds to their perceived probabilities. The distinction within (B) is whether or not respondents believe that they will actually have to pay for the good at a level determined by the WTP amounts they choose in the DCE survey. If a respondent believes in the payment obligation enforced by the agency, the respondent believes there is no chance that he/she will avoid paying for the provision of the good; in other words, the perceived probability attached to the zero-cost level in the calculation of his/her expected payment is zero. If a respondent does not believe in the payment obligation, he/she perceives a positive probability of paying nothing even though his/her stated WTP is non-zero; hence, the perceived probability of the zero-cost level in the expected payment calculation is positive, up to a maximum of 100%.
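To make the arithmetic concrete, the expected payment described above is simply a probability-weighted average over the cost levels shown in the survey. The following minimal sketch is not from the paper; the cost levels and perceived probabilities are hypothetical:

```python
# Illustrative sketch (hypothetical numbers): a respondent's expected payment
# as a probability-weighted sum over the cost levels presented in the DCE.

def expected_payment(cost_levels, perceived_probs):
    """Expected payment given a perceived probability for each cost level."""
    assert abs(sum(perceived_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(c * p for c, p in zip(cost_levels, perceived_probs))

# A respondent who fully believes in the payment obligation assigns zero
# probability to the zero-cost level:
believer = expected_payment([0, 50, 100], [0.0, 0.5, 0.5])  # 75.0

# A respondent who doubts the obligation puts positive probability on paying
# nothing, which lowers the expected payment (here to roughly 45):
doubter = expected_payment([0, 50, 100], [0.4, 0.3, 0.3])
```

The second call illustrates the point made later in the text: a positive perceived probability on the zero-cost level mechanically lowers the expected payment relative to a respondent who believes payment will be enforced.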

In a hypothetical SP survey, it is difficult to establish conditions for eliciting true preferences because of the absence of markets with real payments (Mitchell and Carson, 1989). However, when an objective is to elicit amounts as close as possible to respondents’ true WTP, Carson and Groves (2007) argue that more truthful preference revelation would be motivated if two conditions hold. First, respondents must perceive that the survey’s results will potentially influence the related agency’s decisions to provide the good they care about, or policy consequentiality (Herriges et al., 2010). Second, respondents must believe that payment for the provision of the good can actually be enforced by the agency, so that there is no chance they will avoid paying for it. The latter condition might be referred to as payment consequentiality (Herriges et al., 2010). Evidence from previous studies highlights the need for policy consequentiality, implying that respondents’ belief in influencing the agency’s provision decision is necessary to elicit reliable WTP amounts (Herriges et al., 2010, Vossler and Evans, 2009). Carson and Groves (2007) also suggest that respondents who do not believe the survey is consequential should be omitted from the economic analysis. If the survey’s results are not seen by a respondent as having any influence on the related agency’s actions to provide the good, the respondent will perceive that all possible choices he/she makes have the same influence on his/her well-being; in such a case, it is pointless to try to explain apparent economic anomalies in the respondent’s choices (Carson and Groves, 2007). Besides, the majority of respondents would believe that their responses to a credible survey will affect the provision of the good. The omission of respondents who do not believe the survey is consequential would therefore be expected to have insignificant effects on the statistical power of the econometric analyses. Thus, this study focuses on the strategies of respondents who believe that the survey’s results will potentially affect the provision of the good or service under valuation.

The requirement for surveys to be seen as consequential might create an incentive for respondents to strategically misrepresent their preferences (Meginnis et al., 2018). Respondents who are likely to answer strategically are those who (i) believe the survey is influential in the decision to provide the good that they care about (policy consequentiality), (ii) but do not believe in the payment obligation enforced by the agency. If a respondent who perceives the survey to be consequential believes that he/she will not have to pay, then the respondent has an incentive to strategically indicate a higher WTP amount, assuming that payment for the good will not be enforced. This is a type of strategic overbidding found in previous studies. Posavac (1998) and Lunander (1998) provide examples in which respondents who believe that the payment level is zero (i.e. paid by the related agency), or only a small amount of money, report higher WTP than respondents who expect to be personally responsible for paying for the good they care about. If a respondent assigns a positive probability to paying nothing, the zero-cost level enters his/her expected payment calculation, resulting in a lower expected payment. All else held constant, this increases the incentive for the respondent to strategically overstate the WTP amount, exerting more influence on the provision of the good while being exposed only to a lower expected payment.

However, the direction of strategic motivation is not always overbidding when respondents believe that the survey is policy consequential but do not believe in the payment obligation. Mitchell and Carson (1989) suggest that if respondents are uncertain about their payment, the direction strategic behaviour takes depends on the comparison between their expected payment and their true WTP value. If the respondent’s expected payment is perceived to be less than his/her true value, he/she will tend to overbid; if the expected payment is calculated to be larger than the true value, he/she will tend to underbid or free ride. In our DCE, respondents’ uncertainty about their payment would create incentives to behave strategically in both directions, overbidding and underbidding. If respondents do not believe in the payment obligation, they may more frequently use the strategy of underbidding. If respondents believe that there is a positive probability of paying nothing, the addition of the zero-cost level to their expected payment formulation makes the calculation more complex, leading to higher uncertainty about the payment obligation. In the case of the provision of a public good with a coercive payment mechanism, where the status quo option remains available but respondents must commit ex ante to paying an uncertain cost, the commitment to an uncertain payment is never preferred by risk-averse respondents; hence, some respondents would be expected to shift from “yes” to “no” responses (Carson and Groves, 2007). In laboratory choice experiments, Collins and Vossler (2009) find that respondents tend to move toward the status-quo option when the preferences of others are unknown and this option is offered at zero cost. If respondents do not believe in the payment obligation, their shift toward the status-quo option, which is usually the second-best alternative in each choice set, can be considered a strategic act of underbidding.
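Mitchell and Carson's comparison rule can be sketched as a small decision function. This is one illustrative reading of their argument, with hypothetical numbers, not a procedure from the paper:

```python
# Illustrative sketch of the Mitchell and Carson (1989) comparison: the
# direction of strategic behaviour depends on expected payment vs. true WTP.

def strategic_direction(true_wtp, expected_payment):
    """Return the direction of strategic misreporting this rule predicts."""
    if expected_payment < true_wtp:
        return "overbid"        # expected cost below true value
    if expected_payment > true_wtp:
        return "underbid"       # expected cost above true value (or free ride)
    return "truthful"

# A respondent with true WTP of 80 who expects to pay only 45 (e.g. because
# of a perceived chance of paying nothing) tends to overbid:
direction = strategic_direction(true_wtp=80, expected_payment=45)  # "overbid"
```

Combined with the expected-payment arithmetic above, this shows why disbelief in the payment obligation can push behaviour in either direction: it lowers the expected payment (favouring overbidding) while also adding uncertainty that can push risk-averse respondents toward the zero-cost status quo (underbidding).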

While a number of previous studies have examined the link between strategic behaviour and precedent-dependent ordering effects (Day et al., 2012, McNair et al., 2011, Scheufele and Bennett, 2012), we seek to contribute to the literature by testing whether strategic behaviour is a possible cause of position-dependent ordering effects in a DCE survey. In our study, respondents are assigned to assumed non-strategic and strategic groups using their answers to two follow-up questions on their beliefs about the policy consequentiality of the survey and the payment obligation. Comparison between the two groups may show anomalies in the choices made by the strategic respondents.2 A non-parametric analysis is undertaken based on a comparison of respondents’ stated demand along a sequence of six choice tasks between the two groups. Parametric mixed logit models are constructed separately for the non-strategic and strategic groups. Comparison of WTP values estimated from the parametric models enables us to confirm a key finding of the non-parametric analysis: that the preferences of strategic respondents are relatively unstable along the sequence of repeated choice tasks when compared with non-strategic respondents.
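As a rough illustration of the parametric side, a mixed logit computes choice probabilities by averaging standard logit probabilities over draws from the random-coefficient distribution. The sketch below is not the paper's specification: the attribute definitions, coefficient values, and function name are hypothetical, and actual estimation would maximise the simulated likelihood over respondents' observed choices rather than evaluate a single task:

```python
# Minimal sketch of simulated mixed logit choice probabilities, assuming
# normally distributed coefficients (hypothetical attributes and values).
import numpy as np

rng = np.random.default_rng(0)

def mixed_logit_probs(X, beta_mean, beta_sd, n_draws=500):
    """Simulated choice probabilities for one choice task.

    X: (n_alternatives, n_attributes) attribute matrix.
    beta ~ N(beta_mean, beta_sd) across respondents; probabilities are
    averaged over simulation draws.
    """
    draws = rng.normal(beta_mean, beta_sd, size=(n_draws, len(beta_mean)))
    v = draws @ X.T                                   # utilities per draw
    expv = np.exp(v - v.max(axis=1, keepdims=True))   # numerically stable
    p = expv / expv.sum(axis=1, keepdims=True)        # logit probs per draw
    return p.mean(axis=0)                             # simulated average

# Hypothetical task: two improvement options plus a zero-cost status quo;
# attributes = (warning lead time in hours, cost).
X = np.array([[6.0, 50.0],
              [12.0, 100.0],
              [0.0, 0.0]])
probs = mixed_logit_probs(X, beta_mean=[0.3, -0.02], beta_sd=[0.1, 0.005])
```

Fitting separate models of this kind to the non-strategic and strategic groups, as the study does, allows group-specific WTP values (ratios of attribute to cost coefficients) to be compared across the sequence of tasks.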

Section snippets

Overview of survey design and implementation

A DCE survey on improvements in tropical cyclone warning services in Vietnam is used as the basis for this paper. The DCE exercise reported here included successive rounds of design and testing. The design of the DCE required identification of attributes or characteristics of meteorological services and the levels to be offered. Identification of appropriate attributes for inclusion in the DCE was based on information from previous studies (Gunasekera, 2004, Lazo and Chestnut, 2002, Lazo and

Description of respondents with possibly strategic behaviour

Table 3 presents a summary of the socio-economic characteristics of respondents with possibly strategic behaviour in comparison with the characteristics of non-strategic respondents.11

Concluding remarks

The order of a series of choice tasks presented to respondents in a DCE could affect the choice outcomes. The data from this DCE survey are used to examine ordering effects in close association with the strategic behaviour of respondents. Our analysis focuses on the strategic behaviour of respondents who believe that their responses potentially influence the provision of the good they care about, but believe that they may not actually have to pay for the good even if their

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This research is funded by National Economics University, Hanoi, Vietnam.

References (53)

  • Ladenburg, J., et al. Gender-specific starting point bias in choice experiments: Evidence from an empirical study. J. Environ. Econ. Manage. (2008)
  • Lazo, J.K., et al. Valuing improved hurricane forecasts. Econom. Lett. (2011)
  • Lunander, A. Inducing incentives to understate and to overstate willingness to pay within the open-ended and the dichotomous-choice elicitation formats: An experimental study. J. Environ. Econ. Manage. (1998)
  • McNair, B.J., et al. A comparison of responses to single and repeated discrete choice questions. Resour. Energy Econ. (2011)
  • Milon, J.W. Contingent valuation experiments for strategic behavior. J. Environ. Econ. Manage. (1989)
  • Nguyen, T.C., et al. Estimating the value of economic benefits associated with adaptation to climate change in a developing country: A case study of improvements in tropical cyclone warning services. Ecol. Econ. (2013)
  • Posavac, S.S. Strategic overbidding in contingent valuation: Stated economic value of public goods varies according to consumers’ expectations of funding source. J. Econ. Psychol. (1998)
  • Throsby, C.D., et al. Strategic bias and demand for public goods: Theory and an application to the arts. J. Public Econ. (1986)
  • Vossler, C.A., et al. Bridging the gap between the field and the lab: Environmental goods, policy maker input, and consequentiality. J. Environ. Econ. Manage. (2009)
  • Adamowicz, W., et al. Stated preference approaches for measuring passive use values: Choice experiments and contingent valuation. Am. J. Agric. Econ. (1998)
  • Alpízar, F., et al. Using choice experiments for non-market valuation. Econ. Issues (2001)
  • Bennett, J., et al. Some fundamentals of environmental choice modelling
  • Bennett, J., et al. The strengths and weaknesses of environmental choice modelling
  • Boyle, K.J. Contingent valuation in practice
  • Braga, J., et al. Preference anomalies, preference elicitation and the discovered preference hypothesis. Environ. Resource Econ. (2005)
  • Cao, Y., et al. Position-dependent order effects on the prediction of consumer preferences in repeated choice experiments. Appl. Econ. (2018)