Enhancing discrete choice models with representation learning
Transportation Research Part B: Methodological (IF 6.8), Pub Date: 2020-09-09, DOI: 10.1016/j.trb.2020.08.006
Brian Sifringer, Virginie Lurkin, Alexandre Alahi

In discrete choice modeling (DCM), model misspecification may lead to limited predictability and biased parameter estimates. In this paper, we propose a new approach for estimating choice models in which we divide the systematic part of the utility specification into (i) a knowledge-driven part, and (ii) a data-driven one, which learns a new representation from available explanatory variables. Our formulation increases the predictive power of standard DCMs without sacrificing their interpretability. We show the effectiveness of our formulation by augmenting the utility specification of the Multinomial Logit (MNL) and the Nested Logit (NL) models with a new non-linear representation arising from a Neural Network (NN), leading to new choice models referred to as the Learning Multinomial Logit (L-MNL) and Learning Nested Logit (L-NL) models. Using multiple publicly available datasets based on revealed and stated preferences, we show that our models outperform the traditional ones, both in terms of predictive performance and accuracy in parameter estimation. All source code of the models is shared to promote open science.
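As a minimal sketch of the utility decomposition described in the abstract (the notation here is illustrative, not necessarily the paper's own), the systematic utility of alternative i for decision-maker n is split into an analyst-specified, knowledge-driven term over hand-picked variables x_in and a learned term produced by a neural network over a separate set of explanatory variables q_in:

V_{in} = f(x_{in}; \beta) + r(q_{in}; w), \qquad P_n(i) = \frac{\exp(V_{in})}{\sum_{j \in C_n} \exp(V_{jn})}

Because the choice probability keeps the standard logit form and the learned representation r only enters additively in the utility, the \beta estimates of the knowledge-driven part retain their usual interpretation, while r can absorb non-linear effects of the remaining variables.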

Updated: 2020-09-10