Multi-VFL: A Vertical Federated Learning System for Multiple Data and Label Owners
arXiv - CS - Machine Learning Pub Date : 2021-06-10 , DOI: arxiv-2106.05468
Vaikkunth Mugunthan, Pawan Goyal, Lalana Kagal

Vertical Federated Learning (VFL) refers to the collaborative training of a model on a dataset whose features are split among multiple data owners, while the label information is held by a single data owner. In this paper, we propose a novel method, Multi Vertical Federated Learning (Multi-VFL), to train VFL models when there are multiple data and label owners. Our approach is the first to consider the setting where $D$ data owners (across which features are distributed) and $K$ label owners (across which labels are distributed) exist. This configuration allows different entities to train and learn optimal models without having to share their data. Our framework uses split learning and adaptive federated optimizers to solve this problem. For empirical evaluation, we run experiments on the MNIST and FashionMNIST datasets. Our results show that using adaptive optimizers for model aggregation speeds up convergence and improves accuracy.
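The setup the abstract describes — $D$ data owners holding vertical feature slices, $K$ label owners holding label shards, a split-learning cut where embeddings (not raw features) are exchanged, and an adaptive server optimizer aggregating the label owners' models — can be sketched in a minimal form. Everything below is an illustrative assumption, not the paper's implementation: the sizes are arbitrary, the bottom models are frozen (the paper also backpropagates through them), and the server step is a generic FedAdam-style update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): D data owners, K label owners.
D, K = 3, 2
n, d_feat, d_emb = 120, 4, 5

# Vertical split: data owner d holds feature slice X[d] and a local bottom model.
X = [rng.normal(size=(n, d_feat)) for _ in range(D)]
W_bottom = [rng.normal(scale=0.5, size=(d_feat, d_emb)) for _ in range(D)]

# Synthetic binary labels, sharded across the K label owners by sample index.
true_w = rng.normal(size=(D * d_feat,))
y = (np.concatenate(X, axis=1) @ true_w > 0).astype(float)
idx = np.array_split(np.arange(n), K)

# Split-learning cut: data owners send embeddings, never raw features.
# (Bottom models stay frozen in this sketch for brevity.)
H = np.concatenate([X[d] @ W_bottom[d] for d in range(D)], axis=1)

def loss_grad(w, rows):
    """Logistic loss and gradient of a label owner's top model on its shard."""
    p = 1 / (1 + np.exp(-(H[rows] @ w)))
    g = H[rows].T @ (p - y[rows]) / len(rows)
    ll = -np.mean(y[rows] * np.log(p + 1e-9)
                  + (1 - y[rows]) * np.log(1 - p + 1e-9))
    return ll, g

# Server keeps Adam-style moments and treats the averaged model delta as a
# pseudo-gradient (a FedAdam-like adaptive aggregation step).
w_global = np.zeros(D * d_emb)
m = np.zeros_like(w_global)
v = np.zeros_like(w_global)
beta1, beta2, eta, eps = 0.9, 0.999, 0.1, 1e-8

losses = []
for t in range(50):
    local = []
    for k in range(K):                      # each label owner trains locally
        w = w_global.copy()
        for _ in range(5):
            _, g = loss_grad(w, idx[k])
            w -= 0.5 * g
        local.append(w)
    delta = np.mean(local, axis=0) - w_global    # server-side pseudo-gradient
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    w_global += eta * m / (np.sqrt(v) + eps)
    losses.append(loss_grad(w_global, np.arange(n))[0])

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The adaptive server step is what the abstract credits for faster convergence: replacing the Adam-style update with plain averaging (`w_global += delta`) recovers ordinary FedAvg-style aggregation over the label owners.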

Updated: 2021-06-11