Low-Rank Tensor Train Coefficient Array Estimation for Tensor-on-Tensor Regression.
IEEE Transactions on Neural Networks and Learning Systems ( IF 10.2 ) Pub Date : 2020-02-10 , DOI: 10.1109/tnnls.2020.2967022
Yipeng Liu , Jiani Liu , Ce Zhu

Tensor-on-tensor regression predicts a tensor from a tensor, generalizing most previous multilinear regression approaches, including methods that predict a scalar from a tensor or a tensor from a scalar. However, the coefficient array can become very high dimensional, since in this generalized setting both the predictors and the responses are high order. Compared with the existing method based on a low CANDECOMP/PARAFAC (CP) rank approximation, a low tensor train (TT) rank approximation can further improve the stability and efficiency of estimating a high- or even ultrahigh-dimensional coefficient array. In the proposed low-TT-rank coefficient array estimation for tensor-on-tensor regression, we adopt a TT rounding procedure to obtain adaptive ranks, instead of selecting ranks by hand. In addition, an ℓ₂ constraint is imposed to avoid overfitting. A hierarchical alternating least squares scheme is used to solve the resulting optimization problem. Numerical experiments on one synthetic and two real-life data sets demonstrate that the proposed method outperforms state-of-the-art methods in prediction accuracy at comparable computational complexity, and that it is more computationally efficient when the data are high dimensional with a small size in each mode.
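The TT format underlying the abstract can be illustrated with the standard TT-SVD construction, which factorizes a tensor into a chain of small three-way cores via successive truncated SVDs; TT rounding for adaptive ranks works by the same SVD-truncation mechanism. This is a generic sketch (the names `tt_svd` and `tt_to_full` are ours), not the authors' estimation algorithm:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into tensor-train (TT) cores via successive
    truncated SVDs (TT-SVD), capping each TT rank at `max_rank`."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1                      # rank on the left of the current core
    mat = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        # Unfold: rows group (left rank x current mode), columns the rest.
        mat = mat.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))   # truncate to the allowed TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = s[:r, None] * Vt[:r]  # carry the remainder to the next step
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

With `max_rank` set no lower than the true TT ranks, the reconstruction is exact; lowering `max_rank` yields the low-rank approximation that makes the coefficient-array estimate compact.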
