Research article

Decoding as a linear ill-posed problem: The entropy minimization approach

  • Received: 16 November 2024 | Revised: 17 February 2025 | Accepted: 20 February 2025 | Published: 27 February 2025
  • MSC : 15A29, 65F22, 65J50, 46N99

  • The problem of decoding can be thought of as solving an ill-posed, linear inverse problem with noisy data and box constraints on the unknowns. Specifically, we aimed to solve $ {{\boldsymbol A}}{{\boldsymbol x}}+{{\boldsymbol e}} = {{\boldsymbol y}}, $ where $ {{\boldsymbol A}} $ is a matrix with positive entries and $ {{\boldsymbol y}} $ is a vector with positive entries. It is required that $ {{\boldsymbol x}}\in{{\mathcal K}} $, which is specified below. We considered two points of view about the noise term, in both of which the noise is an unknown to be determined. On the one hand, the error can be thought of as a confounding error, intentionally added to the coded message. On the other hand, we may think of the error as a true additive transmission-measurement error. We solved the problem by minimizing an entropy of the Fermi-Dirac type defined on the set of all constraints of the problem. Our approach provided a consistent way to recover the message and the noise from the measurements. In an example with a generator code matrix of the Reed-Solomon type, we examined the two points of view about the noise. As our approach enabled us to recursively decrease the $ \ell_1 $ norm of the noise as part of the solution procedure, we saw that, if the required norm of the noise was too small, the message was not well recovered. Our work falls within the general line of work on near-optimal signal recovery. We also studied the case with Gaussian random matrices.
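    The formulation in the abstract — recover $ {{\boldsymbol x}} $ and $ {{\boldsymbol e}} $ jointly from $ {{\boldsymbol A}}{{\boldsymbol x}}+{{\boldsymbol e}} = {{\boldsymbol y}} $ by minimizing a Fermi-Dirac-type entropy under box constraints — can be illustrated with a minimal numerical sketch. This is not the paper's algorithm; it is a toy instance under illustrative assumptions (the sizes, the bound `e_max` on the noise, and the use of a generic SLSQP solver are all choices made here, not taken from the article):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Toy instance: A has positive entries, the true message lies in the
    # box (0, 1)^n, and a small positive noise term e is added.
    m, n = 8, 12
    A = rng.uniform(0.1, 1.0, (m, n))
    x_true = rng.uniform(0.05, 0.95, n)
    e_true = rng.uniform(0.0, 0.05, m)
    y = A @ x_true + e_true

    eps = 1e-9      # keeps arguments strictly inside (0, 1)
    e_max = 0.1     # assumed a priori bound on each noise component

    def fermi_dirac_entropy(z):
        # S(z) = sum z ln z + (1 - z) ln(1 - z), defined for z in (0, 1)
        z = np.clip(z, eps, 1 - eps)
        return np.sum(z * np.log(z) + (1 - z) * np.log(1 - z))

    # Unknowns: x in (0, 1)^n and e in (0, e_max)^m, recovered jointly.
    def objective(u):
        x, e = u[:n], u[n:]
        return fermi_dirac_entropy(x) + fermi_dirac_entropy(e / e_max)

    def residual(u):
        # Equality constraint A x + e = y, enforced by the solver.
        x, e = u[:n], u[n:]
        return A @ x + e - y

    u0 = np.concatenate([np.full(n, 0.5), np.full(m, e_max / 2)])
    bounds = [(eps, 1 - eps)] * n + [(eps * e_max, (1 - eps) * e_max)] * m
    res = minimize(objective, u0, method="SLSQP", bounds=bounds,
                   constraints={"type": "eq", "fun": residual})
    x_hat, e_hat = res.x[:n], res.x[n:]
    print("max |A x_hat + e_hat - y| =", np.max(np.abs(A @ x_hat + e_hat - y)))
    ```

    The sketch only shows the shape of the optimization; the paper's procedure additionally decreases the $ \ell_1 $ norm of the noise recursively, which is not modeled here.
    
    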

    Citation: Valérie Gauthier-Umaña, Henryk Gzyl, Enrique ter Horst. Decoding as a linear ill-posed problem: The entropy minimization approach. AIMS Mathematics, 2025, 10(2): 4139-4152. doi: 10.3934/math.2025192



  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
