Seminar Talks by Prof. Qifu Sun (University of Science and Technology Beijing), Prof. Xuan He (Southwest Jiaotong University), and Prof. Pengchao Han (Guangdong University of Technology)
Posted: 2025-08-16

We are honored to have invited the following speakers.

Talk by Prof. Qifu Sun (University of Science and Technology Beijing):

Qifu Sun is a professor and doctoral supervisor at the University of Science and Technology Beijing. He received his B.Eng. (first-class honours) and Ph.D. degrees from The Chinese University of Hong Kong in 2005 and 2009, respectively. He was a postdoctoral fellow at the Institute of Network Coding, The Chinese University of Hong Kong, and a visiting researcher at the University of New South Wales, Australia. His long-term research focuses on network coding; he has published over 30 papers in authoritative journals including Proceedings of the IEEE and the IEEE Transactions on Information Theory (TIT), and holds over 20 granted invention patents. He has led 4 projects funded by the National Natural Science Foundation of China and 7 industrial R&D projects with Huawei, BOE, and others. In 2021 he served as co-chair of the Communication Theory Symposium of the IEEE/CIC International Conference on Communications in China (ICCC). He received the "Young Rising Star Award" at the 2018 Information Theory Annual Conference of the Chinese Institute of Electronics and a third prize of the Beijing Science and Technology Award in 2017.


Title: Completion Delay of Random Linear Network Coding in Wireless Broadcast Networks

Abstract: In the classical wireless broadcast network, random linear network coding (RLNC) is known to asymptotically approach the optimal completion delay, or equivalently, the optimal throughput, as the field size increases. In this talk, we first prove that RLNC over GF(2) can also asymptotically approach the optimal completion delay. We next consider the completion delay performance of RLNC in the full-duplex relay broadcast network, which is a generalization of the classical wireless broadcast network. In order to explore the fundamental performance limit of RLNC in terms of completion delay, we propose a scheme named perfect RLNC with buffer, whose expected completion delay can serve as a lower bound for all RLNC schemes. We obtain closed-form formulae as well as recursive expressions for the expected completion delay, both at a single receiver and for the whole system. Finally, we introduce how to utilize guessing random additive noise decoding (GRAND) at the physical layer to help leverage RLNC packets to generate syndromes, so as to reduce packet erasure probabilities and thus further improve the completion delay performance.
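To make the notion of completion delay concrete, the following is a minimal toy simulation (not the schemes analyzed in the talk): a sender broadcasts random GF(2) linear combinations of K packets over independent erasure channels, and a receiver completes once its collected coefficient vectors span GF(2)^K. All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def completion_delay(K, n_receivers, erasure_p):
    """Time slots until every receiver's received coefficient vectors
    span GF(2)^K, i.e., until every receiver can decode all K packets."""
    received = [np.zeros((0, K), dtype=np.uint8) for _ in range(n_receivers)]
    t = 0
    while any(gf2_rank(R) < K for R in received):
        t += 1
        coeff = rng.integers(0, 2, size=K, dtype=np.uint8)  # random GF(2) combination
        for i in range(n_receivers):
            if rng.random() > erasure_p:  # packet survives this receiver's erasure channel
                received[i] = np.vstack([received[i], coeff])
    return t

trials = [completion_delay(K=8, n_receivers=3, erasure_p=0.2) for _ in range(50)]
avg = sum(trials) / len(trials)
print(f"average completion delay over 50 trials: {avg:.1f} slots (K=8 packets)")
```

Since a receiver needs at least K linearly independent combinations, no trial can finish in fewer than K slots; the gap above K reflects both erasures and occasional linearly dependent GF(2) coefficient vectors.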




Talk by Prof. Xuan He (Southwest Jiaotong University):

Xuan He received his Ph.D. from the University of Electronic Science and Technology of China in 2018. From 2018 to 2020 he was a postdoctoral researcher at the Singapore University of Technology and Design. In 2021 he joined the School of Information Science and Technology, Southwest Jiaotong University, where he is now an associate professor. His main research interests are error-correcting codes and coding for DNA storage, and he has published a number of papers in flagship coding-theory journals and conferences such as TIT, TCOM, and ISIT. He received the 2024 Best Paper Award of the Information Theory Chapter of the Chinese Institute of Electronics. Since October 2024 he has served on the editorial board of IEEE Communications Letters.

Title: Outer Channel of DNA Storage: Capacity and Efficient Coding Schemes

Abstract: With the exponential growth of data, DNA storage, which encodes digital information into DNA strings, has attracted a lot of attention owing to its high density, durability, and low maintenance cost. DNA storage is in urgent need of efficient coding schemes to protect data against complicated errors. The outer channel, which acts as the equivalent channel that outer codes deal with, is the concatenation of the inner codes and the physical channel. In this talk, I first introduce an outer channel model as well as its capacity, then demonstrate two efficient coding schemes for the outer channel: the basis-finding algorithm for decoding fountain codes and the joint decoding algorithm for decoding low-density parity-check (LDPC) codes.
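For readers unfamiliar with fountain codes over an erasure-like outer channel, here is a toy LT-style sketch using a standard peeling decoder; it is not the basis-finding algorithm from the talk, and the block sizes, degrees, and drop rate are all illustrative assumptions.

```python
import random

random.seed(1)

K = 8                                          # number of source blocks
source = [random.getrandbits(8) for _ in range(K)]

def encode(n_droplets):
    """Each droplet XORs a uniformly chosen random subset of source
    blocks (an LT-style fountain code); the subset travels with it."""
    droplets = []
    for _ in range(n_droplets):
        d = random.randint(1, K)               # droplet degree
        idx = set(random.sample(range(K), d))
        val = 0
        for i in idx:
            val ^= source[i]
        droplets.append((idx, val))
    return droplets

def peel(droplets):
    """Peeling decoder: repeatedly resolve droplets with exactly one
    still-unknown block, substituting recovered blocks into the rest."""
    recovered = {}
    changed = True
    while changed:
        changed = False
        for idx, val in droplets:
            unknown = idx - recovered.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                v = val
                for j in idx:
                    if j != i:
                        v ^= recovered[j]
                recovered[i] = v
                changed = True
    return recovered

# The outer channel loses some droplets (e.g., unsequenced strands);
# with enough survivors, peeling still recovers the source blocks.
droplets = encode(40)
survivors = [d for d in droplets if random.random() > 0.2]
rec = peel(survivors)
print(f"recovered {len(rec)} of {K} blocks")
```

Every block the peeler does recover is exact, which is the property the test below checks; whether all K blocks are recovered depends on how many droplets survive the channel.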


Talk by Prof. Pengchao Han (Guangdong University of Technology):


Bio: Pengchao Han is an associate professor at the School of Information Engineering, Guangdong University of Technology. He received his Ph.D. from Northeastern University, China, in 2021, and was a visiting researcher at Imperial College London. From 2021 to 2023 he was a postdoctoral researcher at The Chinese University of Hong Kong, Shenzhen. His research interests include mobile communication networks and edge computing, network optimization, distributed learning, and knowledge distillation. He has published over 50 international papers, in well-known journals in the field such as IEEE JSAC, IEEE TMC, IEEE TNSE, IEEE Communications Magazine, and IEEE IoTJ, as well as in top conferences including IEEE INFOCOM, NeurIPS, AAAI, and ICDCS. He has served on the technical program committees of NeurIPS, ICML, and IEEE ICDCS, and has long served as a reviewer for journals and conferences including IEEE TMC, TPDS, IoTJ, and ICC.

Title: Effective Federated Learning and Service Provisioning at the Edge

Abstract: Federated Learning (FL) enables collaborative model training across distributed clients by keeping data local, thereby preserving privacy and reducing the risk of data leakage. However, existing FL systems face significant challenges arising from the computational limitations of edge devices, constrained communication bandwidth, and the need to handle dynamic online inference service requests. This talk addresses these challenges from the perspective of multi-dimensional heterogeneity and resource constraints in edge environments. Specifically, it investigates federated knowledge distillation methods that account for both model and data heterogeneity; federated split learning approaches that consider constrained computational resources; adaptive gradient compression techniques for FL under limited communication resources; and joint optimization of model training and inference in FL. The ultimate goal is to enhance the adaptability, resource efficiency, and model performance of federated learning in edge networks.
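As background for the FL setting described above, here is a minimal FedAvg sketch on a synthetic, non-IID linear-regression task: each client trains locally for a few epochs and the server averages the resulting models. This illustrates only the basic FL loop, not the distillation, split-learning, or compression methods discussed in the talk; the data, learning rate, and round counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: y = X @ w_true, split across clients whose feature
# distributions are shifted differently (a simple form of non-IID data).
w_true = np.array([2.0, -1.0, 0.5])
clients = []
for shift in (-1.0, 0.0, 1.0):                 # each client sees shifted features
    X = rng.normal(loc=shift, size=(40, 3))
    clients.append((X, X @ w_true))

def local_step(w, X, y, lr=0.05, epochs=5):
    """A few local gradient-descent epochs on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

# FedAvg loop: broadcast the global model, train locally, average updates.
w = np.zeros(3)
for rnd in range(30):
    updates = [local_step(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)

print("learned weights:", np.round(w, 2))
```

Because the synthetic labels are noiseless, the true weights are a fixed point of every client's local update, so plain FedAvg converges to them here; with noisy, heterogeneous data the client-drift issues motivating the talk's methods appear.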