
Deep-learning-predicted Shifted-pruning for SCL Decoding of Polar Codes
Abstract: The performance of short polar codes under Successive Cancellation List (SCL) decoding depends on the list size L: when L is sufficiently large, SCL decoding closely approaches maximum-likelihood performance. However, the implementation complexity of an SCL-L decoder grows linearly with L, which makes the design of efficient SCL decoders challenging. To address this, a deep-learning-predicted shifted-pruning SCL decoding algorithm is proposed. The algorithm uses SCL-L decoding with a small list size L and improves decoding performance by launching at most two SCL-L decoding attempts. When the first SCL-L decoding fails, a deep neural network predicts the error position at which the correct path is first lost, i.e., excluded from the L surviving list paths, and a second SCL-L decoding with shifted pruning is then started. At the predicted error position, this shifted-pruning SCL decoding shifts the set of L surviving paths of the original SCL-L decoder, i.e., it selects the L paths that were originally discarded as the survivors (see the sketch following this record). For the rate-1/2 CRC-concatenated polar code (128, 64+8) with an 8-bit CRC, the proposed shifted-pruning SCL-32 algorithm closely approaches the performance of an SCL-128 decoder with at most two SCL-32 decoding attempts, and approaches the finite-length performance bound of this short code.
Authors: TIAN Hao (田浩); WU Xiaofu (吴晓富); ZHANG Suofei (张索非) (National Engineering Research Center of Communications and Networking, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China)
Source: Radio Communications Technology (《无线电通信技术》), 2023, No. 6, pp. 1088-1094 (7 pages)
Funding: National Natural Science Foundation of China (U21A20450).
Keywords: polar codes; SCL decoding; deep learning; shifted-pruning
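
The decoding control flow summarized in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical Python sketch, not the authors' implementation: the function names (prune, decode_with_retry), the shift_index parameter, the smaller-is-better path-metric convention, and the decoder/CRC/predictor callables are all assumptions introduced for illustration; only the two-attempt flow and the "keep the previously discarded L paths" rule are taken from the abstract.

```python
import numpy as np

def prune(path_metrics, L, shift=False):
    """Keep L of the 2L candidate paths (smaller metric = better, by assumption).

    Standard SCL pruning keeps the L best paths; shifted pruning, applied only
    at the predicted first-error position, instead keeps the L paths that
    standard pruning would have discarded.
    """
    order = np.argsort(path_metrics)               # candidate indices, best first
    return order[L:2 * L] if shift else order[:L]

def decode_with_retry(llrs, scl_decode, crc_ok, predict_error_position, L=32):
    """Two-attempt decoding as described in the abstract: standard SCL-L first,
    then, if the CRC check fails, one re-decoding with shifted pruning at the
    position predicted by the deep neural network.

    scl_decode, crc_ok and predict_error_position are hypothetical callables
    standing in for an actual CRC-aided SCL decoder, the 8-bit CRC check and
    the trained prediction network.
    """
    info_bits = scl_decode(llrs, L)                # first attempt: standard SCL-L
    if crc_ok(info_bits):
        return info_bits
    pos = predict_error_position(llrs)             # DNN predicts first path-loss position
    return scl_decode(llrs, L, shift_index=pos)    # second and final attempt

# Demo of the pruning rule alone: with L = 4 and 2L random metrics, shifted
# pruning returns exactly the candidates that standard pruning drops.
metrics = np.random.rand(8)
print(prune(metrics, 4))               # L best paths (standard pruning)
print(prune(metrics, 4, shift=True))   # the L previously discarded paths
```

Since SCL complexity grows roughly linearly with the list size, the worst case of this scheme (two SCL-32 passes) stays well below the cost of a single SCL-128 pass, which is what makes the near-SCL-128 performance reported in the abstract attractive.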