Research on PMF Model Based on BP Neural Network Ensemble Learning Bagging and Fuzzy Clustering
Complexity (IF 2.3) Pub Date: 2021-07-21, DOI: 10.1155/2021/9985894
Zhengjin Zhang, Guilin Huang, Yong Zhang, Siwei Wei, Baojin Shi, Jiabao Jiang, Baohua Liang

The probabilistic matrix factorization (PMF) model can be used to address the high-dimensional sparsity of user and rating data in recommender systems. However, most existing methods model item ratings from the user side alone and ignore the relationship between users and items, so the accuracy of user-item rating prediction remains low. This paper therefore proposes a probabilistic matrix factorization model based on BP neural network ensemble learning with bagging and fuzzy clustering. First, the fuzzy-clustering membership function and the selection of cluster centers are used to compute the user-item rating matrix; second, a BP neural network is trained on the clustered user-item rating matrix, further improving the accuracy of rating prediction; finally, the bagging method from ensemble learning is introduced: base learners are built over subsets of the user-item ratings, each base learner is trained with a BP neural network, and the final rating prediction is obtained by voting over their outputs, which improves the stability of the model. Compared with existing PMF models, the root mean square error of the PMF model with fuzzy clustering is improved by 9.27% and 3.95%, and the mean absolute error by 21.14% and 1.11%, respectively. The bagging ensemble is then evaluated: its root mean square error is improved by 4.02% and 0.42%, respectively, compared with the existing single models. Finally, weights for the BP-network-trained base learners are introduced to further improve the accuracy of the model, which also verifies the generality of the model.
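The pipeline described in the abstract (fuzzy clustering of the rating matrix, BP neural-network base learners, and a bagging vote over their predictions) can be sketched as follows. This is an illustrative reconstruction under assumed choices, not the authors' implementation: fuzzy c-means is written directly in NumPy, scikit-learn's MLPRegressor stands in for the BP network, bagging is hand-rolled as bootstrap resampling with averaged ("voted") predictions, and the data, feature design, and hyperparameters are all placeholders.

# Illustrative sketch only (not the paper's code): fuzzy c-means over the rating
# matrix, BP-style MLP base learners, and a hand-rolled bagging ensemble whose
# "vote" is the average of the base learners' predicted ratings.
import numpy as np
from sklearn.neural_network import MLPRegressor  # feed-forward net trained by back-propagation

rng = np.random.default_rng(0)

# Toy user-item rating matrix (1-5 stars, 0 = unrated); real experiments would use
# a public rating data set instead.
n_users, n_items = 60, 40
true_u = rng.normal(size=(n_users, 5))
true_v = rng.normal(size=(n_items, 5))
full = np.clip(np.round(true_u @ true_v.T + 3), 1, 5)
observed = rng.random((n_users, n_items)) < 0.3
R = np.where(observed, full, 0.0)

def fuzzy_cmeans(X, n_clusters=4, m=2.0, n_iter=50):
    """Fuzzy c-means: returns the membership matrix U (rows sum to 1) and the centers."""
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = dist ** (-2.0 / (m - 1))              # standard fuzzy membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Cluster users and items by their rating vectors; the memberships become the
# features fed to the base learners (an assumed feature design, not from the paper).
U_user, _ = fuzzy_cmeans(R)
U_item, _ = fuzzy_cmeans(R.T)
u_idx, i_idx = np.nonzero(observed)
X = np.hstack([U_user[u_idx], U_item[i_idx]])
y = R[u_idx, i_idx]

# Bagging: train each BP (MLP) base learner on a bootstrap sample of the observed
# ratings, then average ("vote over") their predictions.
def train_bagging(X, y, n_estimators=5):
    learners = []
    for k in range(n_estimators):
        boot = rng.integers(0, len(X), size=len(X))
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=k)
        net.fit(X[boot], y[boot])
        learners.append(net)
    return learners

def predict_bagging(learners, X):
    return np.mean([net.predict(X) for net in learners], axis=0)

learners = train_bagging(X, y)
pred = predict_bagging(learners, X)
print("RMSE %.3f  MAE %.3f" % (np.sqrt(np.mean((pred - y) ** 2)), np.mean(np.abs(pred - y))))

In the paper the base learners operate on the clustered user-item rating matrix within a PMF framework; the fuzzy-membership features above are only a stand-in for that step, chosen to keep the sketch self-contained.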

Updated: 2021-07-21