SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network
IEEE Transactions on Signal Processing ( IF 5.4 ) Pub Date : 2022-08-31 , DOI: 10.1109/tsp.2022.3201336
Ryan M. Dreifuerst, Robert W. Heath

The detection and estimation of sinusoids is a fundamental signal processing task in many applications related to sensing and communications. While algorithms have been proposed for this setting, quantization is a critical but often-ignored modeling effect. In wireless communications, estimation with low-resolution data converters is relevant for reducing power consumption in wideband receivers. Similarly, low-resolution sampling in imaging and spectrum sensing allows for efficient data collection. In this work, we propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature (I/Q) samples. We incorporate signal reconstruction internally as domain knowledge within the network to enhance learning and surpass traditional algorithms in mean squared error and Chamfer error. We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions. This threshold provides insight into why neural networks tend to outperform traditional methods and into the learned relationships between the input and output distributions. In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data. We use the learning threshold to explain, in the one-bit case, how our estimators learn to minimize the distributional loss rather than learn features from the data.
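The measurement model the abstract describes can be sketched as follows: complex baseband samples of a multi-sinusoid signal whose I and Q components are passed through a low-resolution uniform quantizer, together with a symmetric Chamfer distance for comparing sets of estimated and true frequencies. This is a minimal illustrative sketch, not the authors' code; the quantizer range, the frequencies, and the noise level are illustrative assumptions, and the paper's exact Chamfer-error definition may differ.

```python
import numpy as np

def quantize_iq(x, bits):
    """Uniformly quantize the real and imaginary parts of x to `bits` bits.

    Mid-rise uniform quantizer over [-1, 1); for bits == 1 this reduces
    to sign quantization (outputs +/- 0.5).
    """
    levels = 2 ** bits

    def q(v):
        v = np.clip(v, -1.0, 1.0 - 1e-9)
        idx = np.floor((v + 1.0) / 2.0 * levels)   # cell index 0..levels-1
        return (idx + 0.5) * 2.0 / levels - 1.0    # cell midpoint

    return q(x.real) + 1j * q(x.imag)

def chamfer(est, true):
    """Symmetric Chamfer distance between two sets of scalar frequencies."""
    est, true = np.asarray(est), np.asarray(true)
    d = np.abs(est[:, None] - true[None, :])
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Example: two sinusoids in complex baseband noise, then 1-bit vs 3-bit I/Q
rng = np.random.default_rng(0)
N = 64
n = np.arange(N)
freqs = np.array([0.12, 0.31])                 # normalized frequencies (assumed)
x = sum(np.exp(2j * np.pi * f * n) for f in freqs)
x = x / np.max(np.abs(x))                      # scale into the quantizer range
x = x + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

y1 = quantize_iq(x, bits=1)   # one-bit I/Q samples (sign information only)
y3 = quantize_iq(x, bits=3)   # three-bit I/Q samples
```

At one bit, each component collapses to two levels, which is why (as the abstract notes) so much distributional information is lost that an estimator can do little better than match the output distribution; at three bits, eight levels per component retain enough structure for the network to learn from the data.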

Updated: 2022-08-31