DISTILLER: Encrypted traffic classification via multimodal multitask deep learning
Journal of Network and Computer Applications ( IF 7.7 ) Pub Date : 2021-01-20 , DOI: 10.1016/j.jnca.2021.102985
Giuseppe Aceto , Domenico Ciuonzo , Antonio Montieri , Antonio Pescapé

Traffic classification, i.e. the inference of applications and/or services from their network traffic, represents the workhorse for service management and the enabler for valuable profiling information. The growing trend toward encrypted protocols and the fast-evolving nature of network traffic are rendering obsolete traffic-classification designs based on payload inspection or classic machine learning. Conversely, deep learning is currently foreseen as a viable means to design traffic classifiers based on automatically-extracted features. These reflect the complex patterns distilled from the multifaceted (encrypted) traffic, which implicitly carries information in "multimodal" fashion, and can also be used in application scenarios with diversified network visibility for (simultaneously) tackling multiple classification tasks. To this end, in this paper a novel multimodal multitask deep learning approach for traffic classification is proposed, leading to the Distiller classifier. The latter is able to capitalize on traffic-data heterogeneity (by learning both intra- and inter-modality dependencies), overcome performance limitations of existing (myopic) single-modal deep learning-based traffic classification proposals, and simultaneously solve different traffic categorization problems associated with different providers' desiderata. Based on a public dataset of encrypted traffic, we evaluate Distiller in a fair comparison with state-of-the-art deep learning architectures proposed for encrypted traffic classification (and based on a single-modality philosophy). Results show the gains of our proposal over both multitask extensions of single-task baselines and native multitask architectures.
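The core idea in the abstract — per-modality encoders capturing intra-modality dependencies, a shared fusion layer capturing inter-modality dependencies, and separate heads for each classification task — can be illustrated with a minimal forward-pass sketch. This is not the paper's actual Distiller architecture; the modality choices (payload bytes vs. per-packet protocol fields), layer sizes, and class counts below are illustrative assumptions using random weights, meant only to show the multimodal multitask wiring.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two hypothetical "modalities" per traffic object, e.g. raw payload
# bytes and per-packet protocol fields; shapes are illustrative.
batch = 4
payload = rng.standard_normal((batch, 576))   # modality 1
fields = rng.standard_normal((batch, 32))     # modality 2

# Per-modality encoders: learn intra-modality dependencies.
W1 = rng.standard_normal((576, 64)) * 0.05
W2 = rng.standard_normal((32, 64)) * 0.05
h1 = relu(payload @ W1)
h2 = relu(fields @ W2)

# Shared fusion layer: learns inter-modality dependencies over the
# concatenated per-modality representations.
Wf = rng.standard_normal((128, 64)) * 0.05
shared = relu(np.concatenate([h1, h2], axis=1) @ Wf)

# Task-specific heads solve different categorization problems at once,
# e.g. a 3-class and a 10-class task (counts are made up here).
Wa = rng.standard_normal((64, 3)) * 0.05
Wb = rng.standard_normal((64, 10)) * 0.05
p_task_a = softmax(shared @ Wa)  # shape (4, 3)
p_task_b = softmax(shared @ Wb)  # shape (4, 10)

print(p_task_a.shape, p_task_b.shape)
```

In a trained model the task losses would be combined (e.g. a weighted sum) and backpropagated jointly, so the shared layers benefit from supervision across all tasks — the property the abstract credits for the gains over single-task and single-modality baselines.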




Updated: 2021-01-20