Research article

Weight-2 input sequences of 1/n convolutional codes from linear systems point of view

  • Received: 15 June 2022 Revised: 19 August 2022 Accepted: 05 September 2022 Published: 11 October 2022
  • MSC : 94B10, 93C05, 11T71

  • Convolutional codes form an important class of codes that have memory. One natural way to study these codes is by means of input-state-output representations. In this paper we study the minimum (Hamming) weight among codewords produced by input sequences of weight two. We consider rate 1/n codes and use the linear systems setting, namely the (A,B,C,D) input-state-output representation of convolutional codes, for our analysis. Previous results in this area were recently derived under the assumption that the matrix A in the input-state-output representation is nonsingular. This work completes this thread of research by treating the nontrivial case in which A is singular. Codewords generated by weight-2 inputs are relevant for determining the effective free distance of Turbo codes.

    Citation: Victoria Herranz, Diego Napp, Carmen Perea. Weight-2 input sequences of 1/n convolutional codes from linear systems point of view[J]. AIMS Mathematics, 2023, 8(1): 713-732. doi: 10.3934/math.2023034




    In this work we are interested in investigating codewords of rate 1/n convolutional codes that are produced by weight-2 information sequences. These codewords play an important role in the computation of the effective free distance in the context of Turbo codes (see [14]), and therefore a better understanding of this particular set of codewords may lead to improvements in the construction of Turbo codes. In this work we focus on the mathematical analysis of this set rather than on possible direct consequences for the performance of Turbo codes. We perform this mathematical investigation within the so-called input-state-output representations.

    Convolutional codes can be modelled by means of input-state-output representations in the framework of linear time-invariant systems (see [3,5,6,8,16,27,30] for an introduction to the basic theory of this approach). The main advantage of this approach is that the dynamics of the state (memory) of the system (convolutional code) are explicit in this representation. Moreover, this enables the application of the powerful machinery of systems theory to problems in coding theory.

    In [14], Divsalar and McEliece studied codewords of convolutional codes that are produced by weight-2 information sequences, derived some theoretical bounds for the effective free distance and posed a conjecture. In this paper, we also make use of state-space representations, but we choose the representations introduced in [30], which are slightly different from the driving-variable representations used in [14]. These representations have led to several important theoretical and practical results on convolutional codes (see [7,8,10,18,21,25,26]), and we continue the study in [19] using the (A,B,C,D) input-state-output representation of finite-weight convolutional codes. In [19], an upper bound on the effective free distance was provided for the particular case in which the matrix A in the input-state-output representation is invertible. In this paper we consider the case in which the matrix A is singular. Thus, this work can be considered as an extension of previous results. When the matrix A, which represents the update of the state of the system, is nonsingular, the last input entering the system must immediately steer the state vector to the zero vector in order to obtain a finite-weight codeword. However, when A is singular, this is not necessarily true, and the state vector may remain nonzero for some time after the last input has been introduced into the system. For this reason the extension of the results in [19] to the general case is not straightforward, as we show in this work. Nevertheless, we present new characterizations of this set of codewords and provide an upper bound on the effective free distance. As we show in this paper, the analysis of these systems (with A singular) turns out to be highly nontrivial, and so the optimality of the upper bound could not be formally proven.

    The paper is organized as follows: In Section 2 we briefly introduce finite-weight convolutional codes defined over any Galois field and a particular input-state-output representation of such codes. We also recall the relevance of codewords generated by weight-2 inputs and their relation to turbo codes. Section 3 is devoted to providing the main results of the paper. In particular, for a given convolutional code C of dimension one defined over any finite field, with an input-state-output representation given by (A,B,C,D) and A a singular matrix, we analyse the dynamics that can occur between the input and the state of the system in this case. We present a conjecture and a novel upper bound on zmin(C) based on this conjecture and, in turn, an upper bound on the effective free distance of C. Finally, we present and study a concrete construction of a class of convolutional codes for which we can compute zmin(C) up to a difference of one, and provide an example to illustrate the results. We conclude the paper by presenting some conclusions and possible future work within this thread of research.

    In this paper, we denote by F = GF(q) the Galois field of q elements and by F[z] the polynomial ring in the variable z with coefficients in F.

    Consider the matrices A ∈ F^{δ×δ}, B ∈ F^{δ×k}, C ∈ F^{(n−k)×δ} and D ∈ F^{(n−k)×k}. Following [30] and [28], a rate k/n convolutional code C of complexity δ can be described by the linear system governed by the equations

    $$\left.\begin{aligned} x_{t+1}&=Ax_t+Bu_t\\ y_t&=Cx_t+Du_t \end{aligned}\right\},\qquad t=0,1,2,\dots \tag{2.1}$$

    $$v_t=\begin{pmatrix}y_t\\u_t\end{pmatrix},\qquad x_0=0, \tag{2.2}$$

    where for each time instant t, x_t ∈ F^δ is the state vector, u_t ∈ F^k is the input (also called the information vector) and y_t ∈ F^{n−k} is the parity vector. In linear systems theory, this representation is known as the input-state-output representation. This representation was introduced by Rosenthal, York and Schumacher (see [28]) and it has been widely used in recent years to analyze and construct convolutional codes [8,9,29,30]. In terms of linear systems, the complexity δ is the McMillan degree of the linear system (2.2). In the following, we adopt the notation used by McEliece [24] and we call a convolutional code of rate k/n and complexity δ an (n,k,δ)-code.
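    As an illustration of how a codeword is produced by the representation (2.1) and (2.2), the following minimal Python sketch simulates the system over GF(2). The matrices below form our own toy (2,1,2)-code chosen for illustration; they are not an example from the paper.

```python
def run_system(A, B, C, D, inputs, q=2):
    """x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t over GF(q), with x_0 = 0.
    Returns the codeword [(y_t, u_t)] and the state reached after the last input."""
    delta = len(A)
    x = [0] * delta
    codeword = []
    for u in inputs:
        y = [(sum(C[i][j] * x[j] for j in range(delta)) + D[i][0] * u) % q
             for i in range(len(C))]
        codeword.append((y, u))          # v_t = (y_t, u_t)
        x = [(sum(A[i][j] * x[j] for j in range(delta)) + B[i][0] * u) % q
             for i in range(delta)]
    return codeword, x

# Toy (2,1,2)-code over GF(2); these matrices are illustrative only.
A = [[0, 1], [1, 1]]
B = [[0], [1]]
C = [[1, 1]]
D = [[1]]
cw, final_state = run_system(A, B, C, D, [1, 0, 0, 1])   # weight-2 input
```

    Running it on the weight-2 input (1,0,0,1) drives the state back to the zero vector, so the resulting sequence is a finite-weight codeword in the sense discussed below.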

    Note that the description given by expression (2.2) is in general not unique. But if C has complexity δ, then it is possible to choose the matrices A, B, C and D of sizes δ×δ, δ×k, (n−k)×δ and (n−k)×k, respectively. In convolutional coding theory, an input-state-output representation (A,B,C,D) having the above sizes is called a minimal representation, and it is characterized by the condition that the pair (A,B) is controllable, that is (see [29]), the controllability matrix has full rank, rank Φ_δ(A,B) = δ, where

    $$\Phi_j(A,B):=\begin{pmatrix} B & AB & \cdots & A^{j-2}B & A^{j-1}B\end{pmatrix},\qquad j\in\mathbb{N}.$$

    The controllability matrix is a well-known object in systems theory, as it allows one to characterize the controllability of the linear system. If (A,B) is a controllable pair, then we call the smallest integer κ having the property that rank Φ_κ(A,B) = δ the controllability index of (A,B). On the other hand, we say that (A,C) is an observable pair if (A^T, C^T) is a controllable pair (see [29]). If the pair (A,B) is controllable, this means that, by an appropriate choice of input vectors, it is possible to drive a given state vector to any other state vector in finite time. Analogously, the observability of the pair (A,C) means that it is possible to determine the state vector at a given time t_0 by observing the output vectors for a finite number of time steps beginning with t_0 (see, for example, [28,30]).
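    The controllability matrix and the controllability index can be computed directly from these definitions. The following sketch works over GF(q) with q prime; the pair (A,B) is a toy example of our own, not taken from the paper.

```python
def controllability_matrix(A, B, j, q=2):
    """Columns B, AB, ..., A^{j-1}B of Phi_j(A,B) over GF(q), for k = 1."""
    delta = len(A)
    col = [B[i][0] % q for i in range(delta)]
    cols = []
    for _ in range(j):
        cols.append(col)
        col = [sum(A[i][r] * col[r] for r in range(delta)) % q
               for i in range(delta)]
    return cols

def rank_gf(cols, q=2):
    """Rank over GF(q), q prime, by Gaussian elimination."""
    rows = [list(c) for c in cols]     # each column vector becomes a row
    n = len(rows[0]) if rows else 0
    rank = 0
    for p in range(n):
        piv = next((r for r in range(rank, len(rows)) if rows[r][p] % q), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][p], -1, q)   # modular inverse (Python >= 3.8)
        rows[rank] = [(v * inv) % q for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][p] % q:
                f = rows[r][p]
                rows[r] = [(rows[r][c] - f * rows[rank][c]) % q for c in range(n)]
        rank += 1
    return rank

# Toy controllable pair over GF(2): the controllability index equals delta here.
A = [[0, 1], [1, 1]]
B = [[0], [1]]
kappa = next(j for j in range(1, len(A) + 1)
             if rank_gf(controllability_matrix(A, B, j)) == len(A))
```

    For this pair the smallest j with full-rank Φ_j(A,B) is j = 2 = δ, matching the fact (used later) that for rate 1/n codes the controllability index equals the complexity.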

    Following the approach adopted in [29], we only consider sequences {v_t}_{t≥0} in Eq (2.2) that form a finite-weight codeword (see [29] for more details on the algebraic reasons to do so), that is, Eq (2.2) holds for all t = 0,1,2,…, and there is an integer γ such that x_{γ+1} = 0 and u_t = 0 for t ≥ γ+1, and therefore y_t = 0 for t ≥ γ+1, so the code sequence has finite weight. In this work we denote such a finite-weight codeword by V_γ.

    Hence, it follows that both the input and the state sequence (and hence the output) must have finite support in a finite-weight codeword. The set of finite-weight codewords has a module structure over the polynomial ring F[z] (see [29]). By abuse of notation, we will denote this module by C(A,B,C,D) and we refer to it as the finite-weight convolutional code generated by the matrices A, B, C, D. Proposition 2.4 of [29] gives a characterization of finite-weight codewords. Let V_γ denote a finite-weight codeword constituted by the sequence (y_0,u_0)^T, (y_1,u_1)^T, …, (y_γ,u_γ)^T ∈ F^n with (y_0,u_0)^T and (y_γ,u_γ)^T nonzero. Then the equations in (2.2) are satisfied for all t ≥ 0 and

    $$\begin{pmatrix} A^{\gamma}B & A^{\gamma-1}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_{\gamma-1}\\u_{\gamma}\end{pmatrix}=0. \tag{2.3}$$
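    Condition (2.3) says that the state x_{γ+1} = Σ_t A^{γ−t} B u_t must vanish right after the last input. A hedged sketch that checks this by running the state recursion; the matrices are our own toy choices, not from the paper.

```python
def is_finite_weight(A, B, u, q=2):
    """Check condition (2.3): the state x_{gamma+1} = sum_t A^{gamma-t} B u_t
    must be zero right after the last input u_gamma has entered the system."""
    delta = len(A)
    x = [0] * delta
    for ut in u:
        x = [(sum(A[i][j] * x[j] for j in range(delta)) + B[i][0] * ut) % q
             for i in range(delta)]
    return all(v == 0 for v in x)

# Toy (A, B) over GF(2), chosen by us for illustration.
A = [[0, 1], [1, 1]]
B = [[0], [1]]
ok = is_finite_weight(A, B, [1, 0, 0, 1])   # weight-2 input steering x to 0
bad = is_finite_weight(A, B, [1, 1])        # state left nonzero
```
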

    Remark 2.1. Notice that it is easy to check that if S is an invertible matrix, then it holds that

    C(SAS^{-1}, SB, CS^{-1}, D) = C(A,B,C,D).

    The representation considered here, i.e., (2.2), is indeed the description of the dynamics of a rational and systematic encoder since, by Lemma 2.14 of [29], if C(A,B,C,D) is an (n,k,δ)-code, then the matrices A, B, C and D describe a proper rational transfer function of C(A,B,C,D), given by

    $$T(z)=C(zI-A)^{-1}B+D.$$

    Furthermore, $G(z)=\begin{pmatrix}T(z)\\ I_k\end{pmatrix}$ is a systematic encoder of C(A,B,C,D).

    Remark 2.2. We note that the state-space realizations considered in this work are different from the driving variable realizations often found in the coding literature [15,23], given by

    $$\left.\begin{aligned} x_{t+1}&=Ax_t+Bu_t\\ v_t&=Cx_t+Du_t \end{aligned}\right\}, \tag{2.4}$$

    where u_t ∈ F^k is the information vector, v_t ∈ F^n are the codeword coordinates, which are in this case the outputs of the linear system, and x_t ∈ F^δ is as above. Although driving-variable representations have been the standard way in which convolutional codes were presented in terms of linear systems, many authors have considered linear systems as described in (2.1) and (2.2) in the last decades, as they have many advantages when analyzing convolutional codes [28,29,33]. One of these advantages is that in the driving-variable representations the matrix A has to be nilpotent, whereas in the one described in (2.1) and (2.2) the matrix A has no such restriction. This fact facilitates the construction of optimal input-state-output representations of convolutional codes (see [29,32,33]). Another advantage of the setting considered in this paper is that these representations are particularly suitable not only for constructing convolutional codes but also for dealing with finite-weight codewords; see [29,30] for more details. These properties allow us to derive new results regarding the lowest weight of the parity vectors of the convolutional code C generated by information sequences of weight two.

    Block codes having optimal error-correcting capabilities, i.e., with maximal minimum distance, are quite well understood, e.g., the class of Reed-Solomon codes [20,34]. However, in order to derive codes with efficient performance, i.e., codes coming closest to the Shannon limit, having a large minimum distance is sometimes not enough. To achieve optimal performance, parallel concatenations of convolutional codes, known as Turbo codes, were presented by Berrou, Glavieux and Thitimajshima, see [2]. In a turbo code TC, two convolutional codes C_1 and C_2 of rates k/n_1 and k/n_2, respectively, are connected via an interleaver in such a way that the first encoder, C_1, operates directly on the input information u_t (t=0,1,2,…) and the second one, C_2, encodes the interleaved input information, denoted by Pu_t (t=0,1,2,…), where P is a permutation matrix of order k. Therefore, a codeword of such a code is divided into the parity vectors of both encoders followed by the information vector. In [4] the input-state-output representation of the turbo code TC was introduced from the state representations of the constituent encoders. For more results on these concatenated (convolutional) codes within a linear systems approach the reader is referred to [8], [9], [11], [15] and [17].

    The most important parameter through which the constituent convolutional codes influence the turbo code performance is zmin(C) (see [1], [12], [13] and [14]), which is defined below.

    Definition 2.1. Let C be a convolutional code. We define zmin(C) as the lowest weight of the parity vectors of the convolutional code C generated by information sequences of weight two.

    In [1] and [14] it was shown that the performance of turbo codes is primarily driven by the weight-2 input minimum distance, which is directly related to the minimum weight among the set of codeword sequences generated by input sequences of weight two. Hence, if one considers a TC with C1=C2=C, its weight-2 input minimum distance, which is also referred to as the effective free distance of TC [1], dfree,eff(TC), is described as

    $$d_{free,eff}(TC)=2+2\,z_{\min}(\mathcal{C}). \tag{2.5}$$

    On an AWGN channel, code performance is determined largely by the effective free distance, and we derive bounds on this distance. Moreover, the design objective for the constituent recursive convolutional encoders of a turbo code is to make zmin as large as possible. In [1] it was shown that in the binary case there exists a rate 1/n recursive systematic convolutional code C with complexity δ that achieves the maximum value of zmin(C), bounded by

    $$z_{\min}(\mathcal{C})\le (n-1)(2^{\delta-1}+2).$$

    Consequently, for a turbo code TC with two equal systematic convolutional codes, they obtain the following upper bound on the effective free distance of TC

    $$d_{free,eff}(TC)\le 2+2(n-1)(2^{\delta-1}+2).$$
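    As a quick numeric sanity check of these binary bounds (the formulas are those of [1] as quoted above; the parameter values are our own):

```python
def zmin_upper_bound(n, delta):
    """Binary upper bound z_min(C) <= (n-1)(2^(delta-1) + 2) from [1]."""
    return (n - 1) * (2 ** (delta - 1) + 2)

def dfree_eff_upper_bound(n, delta):
    """Effective free distance bound d_free,eff(TC) <= 2 + 2 z_min via (2.5)."""
    return 2 + 2 * zmin_upper_bound(n, delta)

# For a rate 1/2 code with complexity 3: z_min <= 6, d_free,eff <= 14.
```
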

    Recently, in [19], turbo codes were again studied from a linear systems point of view over a finite field. In particular, the authors consider a turbo code obtained by the concatenation of two identical rate 1/n recursive systematic convolutional codes C given by an input-state-output representation (A,B,C,D) where the matrix A is invertible. They studied how to obtain the value of zmin(C) and derived an upper bound that we present next. First, we need to introduce the following definition, which refers to the minimum time instant at which the last nonzero input is introduced into the system.

    As at each time instant the input belongs to the field, in the rate 1/n case we write u_t rather than the boldface u_t, to distinguish scalars from vectors.

    Definition 3.1. We define ŝ to be the least s for which there exists a finite-weight codeword V_γ of a convolutional code C generated by a vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ) of weight two with u_0, u_s ≠ 0. Such an ŝ is called the minimum effective index of C.

    In [19] an upper bound on the value of zmin(C) among all convolutional codes with the same set of parameters (n,1,δ) was introduced, as we show in the following theorem.

    Theorem 3.1. [Corollary 1 of [19]] Let C(A,B,C,D) be an (n,1,δ)-code such that the pair (A,B) is controllable and A is an invertible matrix. Let ŝ be the minimum effective index of C. Then,

    $$z_{\min}(\mathcal{C})\le (n-1)(\hat{s}+1),$$

    and, in turn, the effective free distance of TC satisfies

    $$d_{free,eff}(TC)\le 2+2(n-1)(\hat{s}+1).$$

    The authors of [19] give conditions for an (n,1,δ)-code to achieve such a bound, and they moreover present a concrete construction of a rate 1/n recursive systematic convolutional code C whose zmin(C) is as large as possible for these parameters.

    Remark 3.1. If we consider the case in which the matrix A is nonsingular, we obtain zmin(C) over the parity vectors of finite-weight codewords V_γ generated by input vectors (u_0,u_1,…,u_s) of weight two with u_0, u_s ≠ 0 and γ = s, since at time instant s the state of the system must go to zero. Thus, the last input u_s entering the system has to yield x_{s+1} = 0. More concretely, let V_γ be a finite-weight codeword generated by an information vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ) of weight two with u_0, u_s ≠ 0. Then,

    $$\begin{pmatrix} A^{\gamma}B & A^{\gamma-1}B & \cdots & A^{\gamma-s}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_s\\\vdots\\u_{\gamma-1}\\u_{\gamma}\end{pmatrix}=A^{\gamma-s}\begin{pmatrix} A^{s}B & A^{s-1}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_{s-1}\\u_s\end{pmatrix}=0 \tag{3.1}$$

    implies

    $$\begin{pmatrix} A^{s}B & A^{s-1}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_{s-1}\\u_s\end{pmatrix}=0,$$

    since A is nonsingular. In other words, if A is nonsingular, it follows that the minimum effective index ŝ of C is obtained as the minimum of the integers s that satisfy the conditions indicated at the beginning of the remark. Moreover, Theorem 1 of [19] indicates that zmin(C) is derived only from the weight of the parity vectors of any finite-weight codeword V_ŝ of the convolutional code produced by sequences of length ŝ+1 ≥ δ+1 where the two nonzero inputs are the first and the last ones.

    When the matrix A is singular we may have that

    $$\begin{pmatrix} A^{s}B & A^{s-1}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_{s-1}\\u_s\end{pmatrix}\neq 0$$

    but relation (3.1) holds. Intuitively, this means that the state of the system (A,B,C,D) does not necessarily vanish at instant s and could remain nonzero for some time after the second (that is, the last) input u_s ≠ 0 enters the system. Moreover, let V_γ and Ṽ_γ̃ be two finite-weight codewords with input vectors (u_0,u_1,…,u_s,u_{s+1},…,u_γ) and (ũ_0,ũ_1,…,ũ_s̃,ũ_{s̃+1},…,ũ_γ̃) of weight two with u_0, u_s, ũ_0, ũ_s̃ ≠ 0 and such that s̃ > s. As opposed to the case in which A is nonsingular, in this case we cannot ensure that

    $$wt(y_0,\dots,y_s,y_{s+1},\dots,y_{\gamma})\le wt(\tilde{y}_0,\dots,\tilde{y}_s,\dots,\tilde{y}_{\tilde{s}},\tilde{y}_{\tilde{s}+1},\dots,\tilde{y}_{\tilde{\gamma}}),$$

    that is, the minimum effective index ŝ given in Definition 3.1 is not directly related to zmin(C) as it is in the case where the matrix A is nonsingular. Therefore, the ideas used to prove Theorem 3.1 for A nonsingular cannot be straightforwardly applied in this case and we need to use a different approach.

    Next, we investigate the set of finite-weight codewords generated by input vectors of weight two which give us the value of zmin when an (n,1,δ)-code C is given by an input-state-output representation (A,B,C,D) such that the matrix A that updates the state vector of the system is singular. As noted in Remark 3.1, the length γ+1 of the finite-weight codeword V_γ can be much larger than the minimum time instant of the last nonzero input, denoted by s. Note that in these γ−s instants the corresponding input is zero but the state is nonzero and continues to generate output vectors y_i = C x_i, i = s+1,…,γ. This makes it difficult to obtain an upper bound on zmin in terms of the minimum effective index. Nevertheless, we can delimit the inputs that will generate the finite-weight codewords where zmin will be reached, as we will see at the end of this section.
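    The phenomenon described above can be observed numerically. In the following sketch the singular companion matrix and the input are our own choices, not taken from the paper: the state is still nonzero right after the last nonzero input, yet it vanishes one step later, giving a finite-weight codeword with γ > s.

```python
def states(A, B, u, q=2):
    """Trajectory x_0, x_1, ..., x_{gamma+1} of x_{t+1} = A x_t + B u_t over GF(q)."""
    delta = len(A)
    x = [0] * delta
    traj = [list(x)]
    for ut in u:
        x = [(sum(A[i][j] * x[j] for j in range(delta)) + B[i][0] * ut) % q
             for i in range(delta)]
        traj.append(list(x))
    return traj

A = [[0, 1], [0, 1]]    # companion form (3.3) with p_1 = 0: A singular, tau = 1
B = [[0], [1]]
traj = states(A, B, [1, 1, 0])   # weight-2 input with s = 1, gamma = 2
# traj[2] is the state right after the last nonzero input u_1;
# traj[3] is the state one step later, after feeding the zero input u_2.
```
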

    Now suppose that C(A,B,C,D) is a rate 1/n convolutional code with complexity δ. Then, the matrices (A,B) form a controllable pair, so

    $$\text{rank}\,\Phi_{\kappa}(A,B)=\text{rank}\begin{pmatrix} B & AB & \cdots & A^{\kappa-1}B\end{pmatrix}=\delta, \tag{3.2}$$

    where κ is the so-called controllability index of (A,B). Also, in the case that C(A,B,C,D) is an (n,1,δ)-code with (A,B) controllable, it follows that the controllability index κ is equal to the complexity δ, κ=δ.

    Now, let V_γ be a finite-weight codeword with u_0 ≠ 0. Then relations (2.3) and (3.2) necessarily imply γ > κ−1, and therefore we get the following result.

    Lemma 3.1. Let C(A,B,C,D), with (A,B) controllable, be an (n,1,δ)-code. Then the length γ+1 of a finite-weight codeword with input weight 2 satisfies γ ≥ δ.

    Among all the parity vectors of finite-weight codewords generated by input vectors of weight two we can restrict ourselves to a smaller set in order to compute zmin(C), as stated in the following lemma.

    Lemma 3.2. Let C(A,B,C,D) be an (n,1,δ)-code with the pair (A,B) controllable. Let V_γ be a finite-weight codeword generated by the input vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ) of weight two with u_0, u_s ≠ 0. Then the codeword V_{γ+m} generated by the input vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ,u_{γ+1},…,u_{γ+m}) of weight two with u_0, u_s ≠ 0 is a finite-weight codeword for all m ∈ ℕ. Moreover,

    $$wt(y_0,\dots,y_s,y_{s+1},\dots,y_{\gamma})\le wt(\tilde{y}_0,\tilde{y}_1,\dots,\tilde{y}_{\gamma},\dots,\tilde{y}_{\gamma+m}).$$

    Proof. Since V_γ is a finite-weight codeword, taking into account relation (2.3), we know that A^γ B u_0 + A^{γ−s} B u_s = 0. In particular, A^m(A^γ B u_0 + A^{γ−s} B u_s) = A^{γ+m} B u_0 + A^{γ+m−s} B u_s = 0, so V_{γ+m} is a finite-weight codeword. Now, observe that the components of the parity vector (y_0, y_1, …, y_γ) of V_γ are given by the following relations

    $$\begin{aligned} y_0&=Du_0\\ y_j&=CA^{j-1}Bu_0, && j=1,2,\dots,s-1\\ y_s&=CA^{s-1}Bu_0+Du_s\\ y_j&=CA^{j-1}Bu_0+CA^{j-s-1}Bu_s, && j=s+1,\dots,\gamma \end{aligned}$$

    and the components of the parity vector (ỹ_0, ỹ_1, …, ỹ_{γ+m}) of V_{γ+m} are in fact

    $$\begin{aligned} \tilde{y}_0&=Du_0=y_0\\ \tilde{y}_j&=CA^{j-1}Bu_0=y_j, && j=1,2,\dots,s-1\\ \tilde{y}_s&=CA^{s-1}Bu_0+Du_s=y_s\\ \tilde{y}_j&=CA^{j-1}Bu_0+CA^{j-s-1}Bu_s, && j=s+1,\dots,\gamma+m \end{aligned}$$

    So,

    $$wt(y_0,\dots,y_s,y_{s+1},\dots,y_{\gamma})\le wt(\tilde{y}_0,\tilde{y}_1,\dots,\tilde{y}_{\gamma},\dots,\tilde{y}_{\gamma+m}).$$

    Assume now that C(A,B,C,D) is a rate 1/n convolutional code with A singular. It is well-known that if (A,B) is a controllable pair, we can assume without loss of generality (see Remark 2.1) that

    $$A=\begin{pmatrix} 0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\\ p_{\delta-1}&p_{\delta-2}&p_{\delta-3}&\cdots&p_0 \end{pmatrix},\qquad B=\begin{pmatrix}0\\0\\\vdots\\0\\1\end{pmatrix} \tag{3.3}$$

    (the so-called controllable canonical realization [22]). If A is a singular matrix, then there exists an integer τ ≥ 1 such that p_{δ−j} = 0 for j = 1,2,…,τ and p_{δ−τ−1} ≠ 0.
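    The integer τ is just the number of leading zeros in the last row (p_{δ−1},…,p_0) of the companion matrix. A small helper (the function name is our own) makes this concrete:

```python
def tau_of(last_row):
    """Number of leading zeros in the last row (p_{delta-1}, ..., p_0) of the
    companion matrix (3.3); tau >= 1 exactly when A is singular."""
    t = 0
    while t < len(last_row) and last_row[t] == 0:
        t += 1
    return t

# Examples: (p_3, p_2, p_1, p_0) = (0, 0, 1, 1) gives a singular A with tau = 2,
# while a nonzero leading coefficient p_{delta-1} gives a nonsingular A (tau = 0).
```
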

    In the remainder of this section, we work with the controllable canonical form of (A,B) with A singular. Let V_γ be a finite-weight codeword generated by an information vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ) of weight two, with only u_0, u_s ≠ 0. From (2.3) we have that

    $$A^{\gamma-s}\begin{pmatrix} A^{s}B & A^{s-1}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_{s-1}\\u_s\end{pmatrix}=0,$$

    that is,

    $$A^{s}Bu_0+Bu_s\in\ker(A^{\gamma-s}). \tag{3.4}$$

    So we focus our attention on the kernel of the matrix A^η, for η ≥ 1.

    The following result provides us with the structure of the η-th power of the matrix A, which we will need later on. Throughout the paper, we denote by O_{α×β} the α×β zero matrix and by I_m the identity matrix of size m.

    Lemma 3.3. Let (A,B,C,D) be the controllable canonical realization of an (n,1,δ) convolutional code C, that is, A and B are matrices as in (3.3). Assume that A is singular and let τ be the integer such that p_{δ−j} = 0 for j = 1,2,…,τ and p_{δ−τ−1} ≠ 0. Denote by A^η the η-th power of A for η ≥ 1 and by a^{(η)}_{ij} the element of A^η in row i and column j. Then:

    1. If η ≤ δ−1, then

    $$A^{\eta}=\begin{pmatrix} O_{(\delta-\eta)\times\eta} & A^{(\eta)}_{12}\\ A^{(\eta)}_{21} & A^{(\eta)}_{22} \end{pmatrix},$$

    where A^{(η)}_{12} = I_{δ−η}, and A^{(η)}_{21} and A^{(η)}_{22} are matrices of sizes η×η and η×(δ−η), respectively. Furthermore, the square matrix A^{(η)}_{21} is such that:

    If η ≤ τ, then A^{(η)}_{21} = O_{η×η}.

    If η > τ, then the elements (a^{(η)}_{ij}), for i = δ−η+1,…,δ and j = 1,…,η, are given by

    $$a^{(\eta)}_{ij}=\begin{cases} 0, & i=\delta-\eta+1,\dots,\delta,\ \ j=1,\dots,\tau,\\ a^{(\eta-1)}_{i+1,j}, & i=\delta-\eta+1,\dots,\delta-1,\ \ j=\tau+1,\dots,\eta,\\ p_{\eta-2}\,a^{(\eta-1)}_{\delta-\eta,j}+\dots+p_1\,a^{(\eta-1)}_{\delta-1,j}+p_0\,a^{(\eta-1)}_{\delta,j}, & i=\delta,\ \ j=\tau+1,\dots,\eta, \end{cases}$$

    and the elements (a^{(η)}_{ij}), for i = δ−η+1,…,δ and j = η+1,…,δ, of the matrix A^{(η)}_{22} are given by:

    If η ≤ τ, then

    $$a^{(\eta)}_{ij}=\begin{cases} 0, & i=\delta-\eta+1,\dots,\delta,\ \ j=\eta+1,\dots,\tau,\\ a^{(\eta-1)}_{i+1,j}, & i=\delta-\eta+1,\dots,\delta-1,\ \ j=\tau+1,\dots,\delta,\\ p_{\eta-2}\,a^{(\eta-1)}_{\delta-\eta+2,j}+\dots+p_0\,a^{(\eta-1)}_{\delta,j}, & i=\delta,\ \ j=\eta+1,\dots,\tau+\eta,\\ p_{\delta-j+\eta-1}+p_{\eta-2}\,a^{(\eta-1)}_{\delta-\eta+2,j}+\dots+p_0\,a^{(\eta-1)}_{\delta,j}, & i=\delta,\ \ j=\tau+\eta+1,\dots,\delta. \end{cases}$$

    If η > τ, then

    $$a^{(\eta)}_{ij}=\begin{cases} a^{(\eta-1)}_{i+1,j}, & i=\delta-\eta+1,\dots,\delta-1,\ \ j=\eta+1,\dots,\delta,\\ p_{\eta-2}\,a^{(\eta-1)}_{\delta-\eta+2,j}+\dots+p_0\,a^{(\eta-1)}_{\delta,j}, & i=\delta,\ \ j=\eta+1,\dots,\delta. \end{cases}$$

    2. If η ≥ δ, then

    $$A^{\eta}=\begin{pmatrix} O_{(\delta-1)\times\tau} & A^{(\eta)}_{12}\\ O_{1\times\tau} & A^{(\eta)}_{22} \end{pmatrix}, \tag{3.5}$$

    where A^{(η)}_{12} and A^{(η)}_{22} are matrices of sizes (δ−1)×(δ−τ) and 1×(δ−τ), respectively, whose elements are given by:

    For the matrix A^{(η)}_{12}: a^{(η)}_{ij} = a^{(η−1)}_{i+1,j}, for i = 1,…,δ−1 and j = τ+1,…,δ.

    For the matrix A^{(η)}_{22}:

    $$a^{(\eta)}_{\delta,j}=p_{\delta-\tau-1}\,a^{(\eta-1)}_{\tau+1,j}+\dots+p_0\,a^{(\eta-1)}_{\delta,j},\qquad j=\tau+1,\dots,\delta.$$

    Proof. The η-th power of A can be expressed in block form as

    $$A^{\eta}=\begin{pmatrix} A^{(\eta)}_{11} & A^{(\eta)}_{12}\\ A^{(\eta)}_{21} & A^{(\eta)}_{22} \end{pmatrix},$$

    where A^{(η)}_{11}, A^{(η)}_{12}, A^{(η)}_{21} and A^{(η)}_{22} are matrices of sizes (δ−η)×η, (δ−η)×(δ−η), η×η and η×(δ−η), respectively.

    We prove the result by induction on the power η of A.

    By direct computation we get

    $$A^{2}=\begin{pmatrix} O_{(\delta-2)\times 2} & I_{\delta-2}\\ A^{(2)}_{21} & A^{(2)}_{22} \end{pmatrix},$$

    where the matrix A^{(2)}_{21} is of size 2×2 and satisfies:

    ● If τ = 1, then

    $$A^{(2)}_{21}=\begin{pmatrix} 0 & a^{(1)}_{\delta,2}\\ 0 & p_0\,a^{(1)}_{\delta,2} \end{pmatrix}.$$

    ● If τ ≥ 2, then A^{(2)}_{21} = O_{2×2}.

    On the other hand, the matrix A^{(2)}_{22} is of size 2×(δ−2) and is given by:

    ● If τ = 1, then

    $$A^{(2)}_{22}=\begin{pmatrix} a^{(1)}_{\delta,3} & a^{(1)}_{\delta,4} & \cdots & a^{(1)}_{\delta,\delta-1} & a^{(1)}_{\delta,\delta}\\ p_{\delta-\tau-1}+p_0\,a^{(1)}_{\delta,3} & p_{\delta-\tau-2}+p_0\,a^{(1)}_{\delta,4} & \cdots & p_2+p_0\,a^{(1)}_{\delta,\delta-1} & p_1+p_0\,a^{(1)}_{\delta,\delta} \end{pmatrix}.$$

    ● If τ > 1, then

    $$A^{(2)}_{22}=\begin{pmatrix} 0 & \cdots & 0 & a^{(1)}_{\delta,\tau+1} & a^{(1)}_{\delta,\tau+2} & \cdots & a^{(1)}_{\delta,\delta-1} & a^{(1)}_{\delta,\delta}\\ 0 & \cdots & 0 & p_0\,a^{(1)}_{\delta,\tau+1} & p_{\delta-\tau-1}+p_0\,a^{(1)}_{\delta,\tau+2} & \cdots & p_2+p_0\,a^{(1)}_{\delta,\delta-1} & p_1+p_0\,a^{(1)}_{\delta,\delta} \end{pmatrix}.$$

    Now, assume that A^η = (a^{(η)}_{ij}), i,j = 1,…,δ, is given by the statement of the lemma.

    If η < δ, then

    $$A^{\eta+1}=A\,A^{\eta}=A\begin{pmatrix} O_{(\delta-\eta)\times\eta} & I_{\delta-\eta}\\ A^{(\eta)}_{21} & A^{(\eta)}_{22} \end{pmatrix}=\begin{pmatrix} O_{(\delta-\eta-1)\times(\eta+1)} & I_{\delta-\eta-1}\\ A^{(\eta+1)}_{21} & A^{(\eta+1)}_{22} \end{pmatrix},$$

    where the matrices A^{(η+1)}_{21} and A^{(η+1)}_{22} are of sizes (η+1)×(η+1) and (η+1)×(δ−η−1), respectively, whose elements are given by:

    ● For the matrix A^{(η+1)}_{21}:

    - If η+1 ≤ τ, then a^{(η+1)}_{ij} = 0 for all i = δ−η,…,δ and j = 1,…,η+1.

    - If η+1 > τ, then

    $$a^{(\eta+1)}_{ij}=\begin{cases} 0, & i=\delta-\eta,\dots,\delta,\ \ j=1,\dots,\tau,\\ a^{(\eta)}_{i+1,j}, & i=\delta-\eta,\dots,\delta-1,\ \ j=\tau+1,\dots,\eta+1,\\ p_{\eta-1}\,a^{(\eta)}_{\delta-\eta+1,j}+\dots+p_0\,a^{(\eta)}_{\delta,j}, & i=\delta,\ \ j=\tau+1,\dots,\eta+1. \end{cases}$$

    ● For the matrix A^{(η+1)}_{22}:

    - If η+1 ≤ τ, then

    $$a^{(\eta+1)}_{ij}=\begin{cases} 0, & i=\delta-\eta,\dots,\delta,\ \ j=\eta+2,\dots,\tau,\\ a^{(\eta)}_{i+1,j}, & i=\delta-\eta,\dots,\delta-1,\ \ j=\tau+1,\dots,\delta,\\ p_{\eta-1}\,a^{(\eta)}_{\delta-\eta+1,j}+\dots+p_0\,a^{(\eta)}_{\delta,j}, & i=\delta,\ \ j=\eta+2,\dots,\tau+\eta+1,\\ p_{\delta-j+\eta-1}+p_{\eta-1}\,a^{(\eta)}_{\delta-\eta+1,j}+\dots+p_0\,a^{(\eta)}_{\delta,j}, & i=\delta,\ \ j=\tau+\eta+1,\dots,\delta. \end{cases}$$

    - If η+1 > τ, then

    $$a^{(\eta+1)}_{ij}=\begin{cases} a^{(\eta)}_{i+1,j}, & i=\delta-\eta,\dots,\delta-1,\ \ j=\eta+2,\dots,\delta,\\ p_{\eta-1}\,a^{(\eta)}_{\delta-\eta+1,j}+p_{\eta-2}\,a^{(\eta)}_{\delta-\eta+2,j}+\dots+p_0\,a^{(\eta)}_{\delta,j}, & i=\delta,\ \ j=\eta+2,\dots,\delta. \end{cases}$$

    Assume now that η ≥ δ and that the matrix A^η is given by relation (3.5). Observe that A contains the identity matrix of size (δ−1)×(δ−1) (see (3.3)) and that its last row is

    $$(0\ \cdots\ 0\ \ p_{\delta-\tau-1}\ \cdots\ p_1\ \ p_0).$$

    In particular, $A=\begin{pmatrix} O_{(\delta-1)\times 1} & I_{\delta-1}\\ 0 & A^{(1)}_{22} \end{pmatrix}$. By the above reasoning, partitioning A^η with its first row separated, we can consider that A^η has the following structure

    $$A^{\eta}=\begin{pmatrix} O_{1\times\tau} & A^{(\eta)}_{12}\\ O_{(\delta-1)\times\tau} & A^{(\eta)}_{22} \end{pmatrix}.$$

    Consequently,

    $$A^{\eta+1}=\begin{pmatrix} O_{(\delta-1)\times 1} & I_{\delta-1}\\ 0 & A^{(1)}_{22} \end{pmatrix}\begin{pmatrix} O_{1\times\tau} & A^{(\eta)}_{12}\\ O_{(\delta-1)\times\tau} & A^{(\eta)}_{22} \end{pmatrix}=\begin{pmatrix} O_{(\delta-1)\times\tau} & A^{(\eta)}_{22}\\ O_{1\times\tau} & A^{(1)}_{22}A^{(\eta)}_{22} \end{pmatrix}=\begin{pmatrix} O_{(\delta-1)\times\tau} & A^{(\eta+1)}_{12}\\ O_{1\times\tau} & A^{(\eta+1)}_{22} \end{pmatrix},$$

    where the matrices A^{(η+1)}_{12} and A^{(η+1)}_{22} are of sizes (δ−1)×(δ−τ) and 1×(δ−τ), respectively, satisfying

    $$A^{(\eta+1)}_{12}=A^{(\eta)}_{22}$$

    and $A^{(\eta+1)}_{22}=(a^{(\eta+1)}_{\delta,j})_{j=\tau+1,\dots,\delta}$ with

    $$a^{(\eta+1)}_{\delta,j}=p_{\delta-\tau-1}\,a^{(\eta)}_{\tau+1,j}+p_{\delta-\tau-2}\,a^{(\eta)}_{\tau+2,j}+\dots+p_0\,a^{(\eta)}_{\delta,j},\qquad j=\tau+1,\dots,\delta.$$

    So we have proved by induction that the η-th power of A is given by the statement of the lemma. □

    Now, let V_γ be a finite-weight codeword generated by the input vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ) of weight two with u_0, u_s ≠ 0. From relation (3.4) we know that A^s B u_0 + B u_s ∈ ker(A^{γ−s}). The next result gives us the dimension of the kernel of the η-th power of A, which we will need later on.

    Lemma 3.4. Let (A,B,C,D) be the controllable canonical realization of an (n,1,δ) convolutional code C, that is, A and B are matrices as in (3.3). Assume that A is singular and let τ be the integer such that p_{δ−j} = 0 for j = 1,2,…,τ and p_{δ−τ−1} ≠ 0. Denote by a^{(η)}_j the j-th column of the matrix A^η, for each η ≥ 1, that is,

    $$A^{\eta}=\begin{pmatrix} a^{(\eta)}_1 & a^{(\eta)}_2 & \cdots & a^{(\eta)}_{\tau-1} & a^{(\eta)}_{\tau} & \cdots & a^{(\eta)}_{\delta} \end{pmatrix}.$$

    Then, the following holds:

    a) If 1 ≤ η ≤ τ−1, then a^{(η)}_j = 0 for j = 1,2,…,η and the column vectors {a^{(η)}_{η+1},…,a^{(η)}_τ,…,a^{(η)}_δ} are linearly independent. In particular, dim(ker A^η) = η.

    b) If τ ≤ η ≤ δ−1, then a^{(η)}_j = 0 for j = 1,2,…,τ and the column vectors {a^{(η)}_{τ+1},…,a^{(η)}_δ} are linearly independent. In particular, dim(ker A^η) = τ.

    c) If η ≥ δ, then a^{(η)}_j = 0 for j = 1,2,…,τ and dim(ker A^η) ≥ τ.

    Proof. Taking into account Lemma 3.3, which gives us the structure of A^η, we obtain:

    a) If η = 1,2,…,τ−1, then the first η columns of A^η are zero and the column vectors {a^{(η)}_{η+1},…,a^{(η)}_τ,…,a^{(η)}_δ} are linearly independent, since they contain the identity matrix of size δ−η. Consequently, dim(ker A^η) = η.

    b) If τ ≤ η ≤ δ−1, then the first τ columns of A^η are zero and the column vectors {a^{(η)}_{τ+1},…,a^{(η)}_δ} are linearly independent, since they contain the identity matrix of size δ−τ. So dim(ker A^η) = τ.

    c) If η ≥ δ, then the first τ columns of A^η are zero, but we cannot know which of the last δ−τ columns of A^η are linearly independent. In particular, dim(ker A^η) ≥ τ.
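    Lemma 3.4 can be checked numerically on a small example; below we compute dim(ker A^η) = δ − rank(A^η) over GF(2) for a companion matrix with δ = 4 and τ = 2 (the matrix is chosen by us for illustration).

```python
def mat_mul(X, Y, q=2):
    """Product of square matrices over GF(q)."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % q for j in range(n)]
            for i in range(n)]

def rank_gf2(M):
    """Rank over GF(2) by row reduction."""
    rows = [row[:] for row in M]
    rank = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# Companion matrix (3.3) with delta = 4, last row (p_3, p_2, p_1, p_0) = (0, 0, 1, 1),
# hence tau = 2.
A = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 1, 1]]
delta = 4
P = A
kernel_dims = []
for eta in range(1, delta):          # eta = 1, ..., delta - 1
    kernel_dims.append(delta - rank_gf2(P))
    P = mat_mul(P, A)
# Lemma 3.4 predicts dim(ker A^eta) = eta for eta <= tau - 1 and tau afterwards.
```
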

    Remark 3.2. Observe that if τ ≤ η ≤ δ−1, the dimension, and in particular a basis, of the subspace ker(A^η) is independent of η.

    Remark 3.3. Lemma 3.3 also shows the recurrence relation between the last columns of the successive powers of the matrix A. Indeed, if C(A,B,C,D) is an (n,1,δ)-code under the conditions of Lemma 3.3, and we write a^{(η)}_δ = (a^{(η)}_{1,δ}, a^{(η)}_{2,δ}, …, a^{(η)}_{δ−1,δ}, a^{(η)}_{δ,δ})^T for the last column of A^η, η = 2,3,…, then:

    $$a^{(\eta)}_{i,\delta}=a^{(\eta-1)}_{i+1,\delta},\qquad i=1,2,\dots,\delta-1,$$

    and

    $$a^{(\eta)}_{\delta,\delta}=p_{\delta-1}\,a^{(\eta-1)}_{1,\delta}+p_{\delta-2}\,a^{(\eta-1)}_{2,\delta}+\dots+p_1\,a^{(\eta-1)}_{\delta-1,\delta}+p_0\,a^{(\eta-1)}_{\delta,\delta}.$$

    Next we investigate which codewords need to be considered to compute zmin(C). We shall analyse several sets of finite-weight codewords V_γ, sorted by their length γ+1, in order to restrict the set of possible codewords that we need to consider to find the minimum weight of the parity vectors, which yields zmin(C). This, of course, reduces the computations required to obtain the exact value of zmin(C).

    Theorem 3.2. Let C(A,B,C,D) be an (n,1,δ)-code with (A,B) in controllable canonical form and A singular. Let τ be the integer as in Lemma 3.3. Let V_γ be the set of finite-weight codewords of C generated by an information vector (u_0,u_1,…,u_s,u_{s+1},…,u_γ) with u_0, u_s ≠ 0 and u_i = 0 for i ∉ {0,s}. Then the lowest weight of the parity vectors of V_γ with s+τ ≤ γ ≤ s+δ−1 is achieved for s ≤ q^δ − q^τ(q−1).

    Proof. Let V_γ be a finite-weight codeword of C generated by an information vector (u_0,u_1,…,u_s,…,u_γ) of weight two with u_0, u_s ≠ 0. Then,

    $$\begin{pmatrix} A^{\gamma}B & A^{\gamma-1}B & \cdots & A^{\gamma-s}B & \cdots & AB & B\end{pmatrix}\begin{pmatrix}u_0\\u_1\\\vdots\\u_s\\\vdots\\u_{\gamma-1}\\u_{\gamma}\end{pmatrix}=A^{\gamma-s}\left(A^{s}Bu_0+Bu_s\right)=0,$$

    so A^s B u_0 + B u_s ∈ ker A^{γ−s}. It follows from Remark 3.2 that ker A^{γ−s} is the same subspace for any γ and s, provided τ ≤ γ−s ≤ δ−1. As we consider the case in which s+τ ≤ γ ≤ s+δ−1, it follows from statement b) of Lemma 3.4 that dim(ker A^{γ−s}) = τ and a basis for ker A^{γ−s} ⊆ F^{δ×1} is given by the column vectors:

    $$B_{\ker A^{\gamma-s}}=\{e_1,e_2,\dots,e_{\tau}\},$$

    where e_i denotes the i-th vector of the canonical basis of F^{δ×1}, for i = 1,2,…,τ. Therefore, A^s B u_0 + B u_s must be of the form (d_1,d_2,…,d_τ,0,…,0)^T or, equivalently (observe the structure of the matrix B given by (3.3)),

    $$A^{s}B=(d_1,d_2,\dots,d_{\tau},0,\dots,0,d_{\delta})^{T},\quad\text{where } d_{\delta}\neq 0 \text{ as we require } u_s\neq 0. \tag{3.6}$$

    Hence, s is in fact the smallest integer such that the last column of A^s is a vector of the form

    $$(\text{any } \tau \text{ elements},\,0,\dots,0,\,\text{a nonzero element})^{T}.$$

    Remark 3.3 provides the structure of the elements of the last column of A^s, which can be seen as governed by the feedback polynomial of a linear feedback shift register (LFSR). The maximum cycle length of an LFSR of length δ is q^δ − 1, attained when the associated polynomial is primitive.

    The number of states of the form (3.6) is $q^{\dim(\ker A^{\gamma-s})}=q^{\tau}$ choices for the first $\tau$ entries times the $q-1$ possible nonzero elements for the last entry of $A^{s}B$. This leads to the following upper bound on $s$:

    $$s\le q^{\delta}-q^{\tau}(q-1),$$

    which concludes the proof.
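    The search for this smallest $s$ can be carried out mechanically. The following Python sketch (the helper name and its interface are ours, not from the paper) scans the powers of $A$ over a prime field $\mathbb{F}_q$ and returns the first exponent whose last column matches the pattern of (3.6), stopping at the upper bound on $s$ derived above:

    ```python
    def min_effective_index(A, q, tau):
        """Smallest s >= 1 such that the last column of A^s has the shape
        required by (3.6): tau arbitrary entries, then zeros, then a
        nonzero last entry.  Searches only up to the bound
        s <= q**delta - q**tau * (q - 1); returns None if no power in
        that range matches.  Entries of A are assumed reduced mod q."""
        delta = len(A)

        def mat_mul(X, Y):
            return [[sum(X[i][k] * Y[k][j] for k in range(delta)) % q
                     for j in range(delta)] for i in range(delta)]

        power = A  # A^1
        for s in range(1, q ** delta - q ** tau * (q - 1) + 1):
            last = [power[i][delta - 1] for i in range(delta)]
            if last[delta - 1] != 0 and all(x == 0 for x in last[tau:delta - 1]):
                return s
            power = mat_mul(power, A)
        return None
    ```

    For instance, for the $2\times 2$ singular matrix $A$ of Example 4.1 below over $\mathbb{F}_7$ with $\tau=1$, this returns $s=1$, consistent with Theorem 4.1.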

    In the following result we study the set of codewords of length $\gamma+1$ such that $\gamma<s+\tau$.

    Theorem 3.3. Let $\mathcal{C}(A,B,C,D)$ be an $(n,1,\delta)$-code with $(A,B)$ in canonical controllable form and $A$ singular. Let $\tau$ be the integer as in Lemma 3.3. Let $V_\gamma$ be the set of finite-weight codewords of $\mathcal{C}$ generated by an information vector $(u_0,u_1,\ldots,u_s,u_{s+1},\ldots,u_\gamma)$ with $u_0,u_s\neq 0$ and $u_i=0$ for $i\notin\{0,s\}$. Then, the lowest weight of the parity vectors of $V_\gamma$ with $\gamma<s+\tau$ is achieved for $s\le q^{\delta}-q^{\gamma-s}(q-1)$.

    Proof. The proof follows the same idea used in the proof of Theorem 3.2. Note that by Lemma 3.1 we have that $\gamma\ge\delta$. Also, it holds from Lemma 3.4 that $\dim(\ker A^{\gamma-s})=\gamma-s<\tau$. Hence, for each value of $\gamma$ and $s$ such that $\gamma-s<\tau$, we have that $s$ is the smallest integer such that the last column of $A^{s}$ is a vector of the form

    $$(\underbrace{\ast,\ldots,\ast}_{\gamma-s\text{ arbitrary elements}},0,\ldots,0,\text{a nonzero element})^{T}. \tag{3.7}$$

    As there are $q^{\delta}$ possible states and $q^{\gamma-s}(q-1)$ different states are of the form (3.7), the maximum value of $s$ such that $A^{s}Bu_0+Bu_s$ is not in the kernel of $A^{\gamma-s}$ is $q^{\delta}-q^{\gamma-s}(q-1)$, which yields the result.

    In this section we present a concrete construction of a class of convolutional codes with $\delta\ge 2$ for which we can compute the minimum effective index and the exact value of $z_{\min}$ up to a difference of one. Furthermore, we show that this upper bound is optimal by presenting a particular example that reaches it. To this end we need a class of matrices that has been very useful for the construction of convolutional codes with large Hamming distance, namely, the so-called superregular matrices.

    Definition 4.1 (Page 1314 of [31]). Let $A$ be an $n\times\ell$ matrix over a finite field $\mathbb{F}$. We say that $A$ is a superregular matrix if every square submatrix of $A$ is nonsingular.

    The following lemma is an immediate consequence of Definition 4.1 and gives a lower bound on the weight of a linear combination of columns of a superregular matrix.

    Lemma 4.1 (Lemma 3.3 of [10]). Let $A$ be a superregular matrix over a finite field $\mathbb{F}$ of size $n\times\ell$, with $\ell\le n$. It follows that any nontrivial linear combination of $m$ different columns of $A$ cannot have more than $m-1$ entries equal to zero.
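    Definition 4.1 can also be checked directly, albeit at exponential cost in the matrix size. Below is a brute-force Python sketch for a prime field $\mathbb{F}_q$ (function names are ours, not from the paper); it tests every square submatrix for nonsingularity via Gaussian elimination mod $q$:

    ```python
    from itertools import combinations

    def det_mod(M, q):
        """Determinant of a square matrix over F_q (q prime), by Gaussian
        elimination with field inverses."""
        M = [row[:] for row in M]
        n = len(M)
        det = 1
        for i in range(n):
            piv = next((r for r in range(i, n) if M[r][i] % q != 0), None)
            if piv is None:
                return 0
            if piv != i:
                M[i], M[piv] = M[piv], M[i]
                det = -det
            det = det * M[i][i] % q
            inv = pow(M[i][i], q - 2, q)  # Fermat inverse of the pivot
            for r in range(i + 1, n):
                f = M[r][i] * inv % q
                for c in range(i, n):
                    M[r][c] = (M[r][c] - f * M[i][c]) % q
        return det % q

    def is_superregular(A, q):
        """Definition 4.1: every square submatrix of A is nonsingular."""
        n, ell = len(A), len(A[0])
        for k in range(1, min(n, ell) + 1):
            for rows in combinations(range(n), k):
                for cols in combinations(range(ell), k):
                    sub = [[A[r][c] for c in cols] for r in rows]
                    if det_mod(sub, q) == 0:
                        return False
        return True
    ```

    For instance, the $3\times 2$ matrix $C$ of Example 4.1 below passes this test over $\mathbb{F}_7$.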

    In the following result we present a particular construction based on an input-state-output representation where the pair $(A,B)$ is in canonical controllable form with $A$ singular, $C$ a superregular matrix and $D$ a column of $C$. We establish that the lowest weight of the parity vectors of $V_\gamma$ is achieved by the ones generated by weight-2 input sequences $(u_0,u_1,\ldots,u_\gamma)$ with $u_0\neq 0$ and $u_1\neq 0$. Furthermore, we establish lower and upper bounds on $z_{\min}(\mathcal{C})$ for this case.

    Theorem 4.1. Let $\delta$ and $n$ be any positive integers with $\delta\ge 2$, $n\ge\delta+1$ and $q\ge n+\delta$. Let $\mathcal{C}(A,B,C,D)$ be an $(n,1,\delta)$-code described by the matrices

    $$A=\begin{pmatrix} 0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\\ 0&0&0&\cdots&1 \end{pmatrix},\quad B=\begin{pmatrix} 0\\ \vdots\\ 0\\ 1 \end{pmatrix},\quad C=\begin{pmatrix} c_{11}&\cdots&c_{1,\delta}\\ c_{21}&\cdots&c_{2,\delta}\\ \vdots&&\vdots\\ c_{(n-1),1}&\cdots&c_{(n-1),\delta} \end{pmatrix},\quad D=\begin{pmatrix} c_{1,\delta-1}\\ c_{2,\delta-1}\\ \vdots\\ c_{(n-1),\delta-1} \end{pmatrix} \tag{4.1}$$

    of sizes $\delta\times\delta$, $\delta\times 1$, $(n-1)\times\delta$ and $(n-1)\times 1$, respectively, and where $C$ is a superregular matrix. Then we have that

    $$(n-1)(\delta+1)-1\le z_{\min}(\mathcal{C})\le (n-1)(\delta+1). \tag{4.2}$$

    Moreover, the minimum effective index $\hat{s}$ achieves its minimum possible value, i.e., $\hat{s}=1$, and so the value of $z_{\min}(\mathcal{C})$ is reached by finite-weight codewords of minimum length $\gamma=\delta$ and it is calculated as

    $$z_{\min}(\mathcal{C})=\mathrm{wt}(D)+\mathrm{wt}(CB-D)+\sum_{j=1}^{\delta-1}\mathrm{wt}(CA^{j}B-CA^{j-1}B). \tag{4.3}$$

    Proof. Taking into account the structure of the matrices $A$ and $B$ of (4.1) and Lemma 3.3, the finite-weight codewords of minimal length generated by inputs of weight two are those of $V_\delta$. Furthermore, in this case we have that $u_1=-u_0$ and $u_2=u_3=\cdots=u_\delta=0$. Then, the parity check vectors of $V_\delta$ are of the form:

    $$\begin{aligned} y_0&=Du_0\\ y_1&=(CB-D)u_0\\ y_2&=(CAB-CB)u_0\\ y_3&=(CA^{2}B-CAB)u_0\\ &\;\;\vdots\\ y_\delta&=(CA^{\delta-1}B-CA^{\delta-2}B)u_0 \end{aligned} \tag{4.4}$$

    where

    $$y_1=(CB-D)u_0=\begin{pmatrix} c_{1,\delta}-c_{1,\delta-1}\\ c_{2,\delta}-c_{2,\delta-1}\\ \vdots\\ c_{(n-1),\delta}-c_{(n-1),\delta-1} \end{pmatrix}u_0\quad\text{and}\quad y_j=(CA^{j-1}B-CA^{j-2}B)u_0=\begin{pmatrix} c_{1,\delta-j+1}\\ c_{2,\delta-j+1}\\ \vdots\\ c_{(n-1),\delta-j+1} \end{pmatrix}u_0,$$

    for $j=2,3,\ldots,\delta$. Note that $\mathrm{wt}(y_0)=\mathrm{wt}(D)=n-1$, since all entries of the superregular matrix $C$ are nonzero. Furthermore, $n-2\le \mathrm{wt}(y_1)\le n-1$ since $C$ is a superregular matrix and, by Lemma 4.1, any nontrivial linear combination of two different columns of $C$ cannot have more than one entry equal to zero. Similarly, we can ensure that $\mathrm{wt}(y_j)=n-1$ for $j=2,\ldots,\delta$, since $C$ is superregular. So we obtain the following bounds on the weight of the parity vectors $y_j$ of $V_\delta$:

    $$(n-1)(\delta+1)-1\le\sum_{j=0}^{\delta}\mathrm{wt}(y_j)=\mathrm{wt}(D)+\mathrm{wt}(CB-D)+\sum_{j=1}^{\delta-1}\mathrm{wt}(CA^{j}B-CA^{j-1}B)\le (n-1)(\delta+1). \tag{4.5}$$

    Our aim now is to prove that, in fact, $z_{\min}(\mathcal{C})$ is obtained as the minimum over the parity vectors of all finite-weight codewords of length $\delta+1$. To this end, consider now a finite-weight codeword $V_\gamma$ generated by an input vector $(\bar{u}_0,\bar{u}_1,\ldots,\bar{u}_\gamma)$ of weight two with $\gamma>\delta$. Then, there exists a time instant $r\ge 1$ such that $\bar{u}_0\neq 0$, $\bar{u}_r\neq 0$ and $\bar{u}_j=0$ for $j\neq 0,r$. From Lemma 3.2, we know that if $r=1$, then the parity vector $(\bar{y}_0,\bar{y}_1,\ldots,\bar{y}_\gamma)$ of $V_\gamma$ has weight greater than or equal to that of the parity vector $(y_0,y_1,\ldots,y_\delta)$ of any finite-weight codeword $V_\delta$ of length $\delta+1$. Furthermore, if $r>1$, then from the structure of the matrices $A,B,C,D$, Lemma 3 and taking into account that $C$ is superregular, we obtain the following bounds on the weight of the parity vector $(\bar{y}_0,\bar{y}_1,\ldots,\bar{y}_\gamma)$:

    $$(\delta+r)n-\delta r\le\sum_{j=0}^{\gamma}\mathrm{wt}(\bar{y}_j)\le (n-1)(\delta+r). \tag{4.6}$$

    Taking into account relations (4.5) and (4.6) and the fact that δ<n, we obtain

    $$\sum_{j=0}^{\delta}\mathrm{wt}(y_j)\le (n-1)(\delta+1)\le(\delta+r)n-\delta r\le\sum_{j=0}^{\gamma}\mathrm{wt}(\bar{y}_j),$$

    where $(y_0,y_1,\ldots,y_\delta)$ and $(\bar{y}_0,\bar{y}_1,\ldots,\bar{y}_\gamma)$ are the parity vectors of any codewords $V_\delta$ and $V_\gamma$ with $\gamma>\delta$, respectively. So we can conclude that $z_{\min}(\mathcal{C})$ is obtained as the minimum of the weights of the parity vectors over all finite-weight codewords $V_\delta$ generated by input vectors of length $\delta+1$. Furthermore, from relation (4.4), we deduce that in fact

    $$z_{\min}(\mathcal{C})=\mathrm{wt}(D)+\mathrm{wt}(CB-D)+\sum_{j=1}^{\delta-1}\mathrm{wt}(CA^{j}B-CA^{j-1}B).$$

    In the following example, we show a convolutional code whose $z_{\min}(\mathcal{C})$ reaches the upper bound in (4.2).

    Example 4.1. Let $\mathbb{F}$ be the Galois field of $7$ elements and let $\mathcal{C}(A,B,C,D)$ be a $(4,1,2)$-code described by the matrices

    $$A=\begin{pmatrix} 0&1\\ 0&1 \end{pmatrix},\quad B=\begin{pmatrix} 0\\ 1 \end{pmatrix},\quad C=\begin{pmatrix} 4&5\\ 5&2\\ 2&3 \end{pmatrix},\quad D=\begin{pmatrix} 4\\ 5\\ 2 \end{pmatrix}.$$

    It is easy to see that the $(4,1,2)$-code described by the above matrices $A,B,C$ and $D$ satisfies the hypotheses of Theorem 4.1. Then we know that

    $$z_{\min}(\mathcal{C})=\mathrm{wt}(D)+\mathrm{wt}(CB-D)+\mathrm{wt}(CAB-CB)=\mathrm{wt}\!\begin{pmatrix} 4\\ 5\\ 2 \end{pmatrix}+\mathrm{wt}\!\begin{pmatrix} 1\\ 4\\ 1 \end{pmatrix}+\mathrm{wt}\!\begin{pmatrix} 4\\ 5\\ 2 \end{pmatrix}=9.$$

    That is, in this case the code attains the maximal value.
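    The computation in Example 4.1 is easy to reproduce mechanically. Here is a minimal Python sketch of formula (4.3) for these matrices (helper names are ours, not from the paper), with all arithmetic mod $q=7$:

    ```python
    q = 7
    A = [[0, 1], [0, 1]]
    B = [[0], [1]]
    C = [[4, 5], [5, 2], [2, 3]]
    D = [[4], [5], [2]]

    def mul(X, Y):
        return [[sum(x * y for x, y in zip(row, col)) % q
                 for col in zip(*Y)] for row in X]

    def sub(X, Y):
        return [[(a - b) % q for a, b in zip(r, s)] for r, s in zip(X, Y)]

    def wt(X):  # Hamming weight of a vector over F_q
        return sum(1 for row in X for v in row if v != 0)

    CB = mul(C, B)
    CAB = mul(C, mul(A, B))
    z_min = wt(D) + wt(sub(CB, D)) + wt(sub(CAB, CB))
    print(z_min)  # 9
    ```

    The printed value matches the upper bound $(n-1)(\delta+1)=3\cdot 3=9$ of (4.2).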

    In the previous example the superregular matrix $C$ is a Cauchy matrix and the example is small. If we need to construct a Turbo code with a prescribed $z_{\min}(\mathcal{C})$, we must consider larger parameters and, consequently, a larger field. Working with a large finite field increases the computational cost. In order to minimize the size of the field, we introduce a construction for singular $A$, similar to Theorem 4.1, in which we make use of the so-called extended Cauchy matrices (see [31]).

    Theorem 4.2. Let $\mathbb{F}$ be the Galois field of $q$ elements. Let $\delta$ and $n$ be any positive integers with $\delta\ge 2$, $n\ge\delta+1$ and $q\ge n+\delta-1$. Let $\mathcal{C}(A,B,C,D)$ be an $(n,1,\delta)$-code described by the matrices

    $$A=\begin{pmatrix} 0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\\ 0&0&0&\cdots&1 \end{pmatrix},\quad B=\begin{pmatrix} 0\\ \vdots\\ 0\\ 1 \end{pmatrix},\quad C=\begin{pmatrix} c_{11}&\cdots&c_{1,\delta}\\ c_{21}&\cdots&c_{2,\delta}\\ \vdots&&\vdots\\ c_{(n-1),1}&\cdots&c_{(n-1),\delta} \end{pmatrix},\quad D=\begin{pmatrix} c_{1,\delta-1}\\ c_{2,\delta-1}\\ \vdots\\ c_{(n-1),\delta-1} \end{pmatrix}$$

    of sizes $\delta\times\delta$, $\delta\times 1$, $(n-1)\times\delta$ and $(n-1)\times 1$, respectively, and where $C$ is an extended Cauchy matrix whose first column $c_1=(c_{11},c_{21},\ldots,c_{(n-1),1})^{T}$ is the all-ones vector. Then we have that

    $$(n-1)(\delta+1)-1\le z_{\min}(\mathcal{C})\le (n-1)(\delta+1).$$

    Moreover, the value of $z_{\min}(\mathcal{C})$ is reached by a finite-weight codeword of minimum length $\gamma=\delta$ and it is calculated as

    $$z_{\min}(\mathcal{C})=\mathrm{wt}(D)+\mathrm{wt}(CB-D)+\sum_{j=1}^{\delta-1}\mathrm{wt}(CA^{j}B-CA^{j-1}B).$$

    Proof. The proof is analogous to that of Theorem 4.1.

    Example 4.2. Let $\mathbb{F}$ be the Galois field of $5$ elements and let $\mathcal{C}(A,B,C,D)$ be a $(4,1,2)$-code described by the matrices

    $$A=\begin{pmatrix} 0&1\\ 0&1 \end{pmatrix},\quad B=\begin{pmatrix} 0\\ 1 \end{pmatrix},\quad C=\begin{pmatrix} 1&3\\ 1&2\\ 1&4 \end{pmatrix},\quad D=\begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}.$$

    It is easy to see that the $(4,1,2)$-code described by the above matrices $A,B,C$ and $D$ satisfies the hypotheses of Theorem 4.2. Then we know that

    $$z_{\min}(\mathcal{C})=\mathrm{wt}(D)+\mathrm{wt}(CB-D)+\mathrm{wt}(CAB-CB)=\mathrm{wt}\!\begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}+\mathrm{wt}\!\begin{pmatrix} 2\\ 1\\ 3 \end{pmatrix}+\mathrm{wt}\!\begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}=9.$$

    That is, in this case too the code attains the maximal value.
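    As a cross-check of Example 4.2, the general sum in formula (4.3) can be evaluated with a short loop. The Python sketch below (variable names are ours, not from the paper) accumulates the terms $\mathrm{wt}(CA^{j}B-CA^{j-1}B)$ one by one over $\mathbb{F}_5$, and would work unchanged for any $\delta\ge 2$:

    ```python
    q = 5
    A = [[0, 1], [0, 1]]
    B = [[0], [1]]
    C = [[1, 3], [1, 2], [1, 4]]
    D = [[1], [1], [1]]
    delta = len(A)

    def mul(X, Y):
        return [[sum(x * y for x, y in zip(row, col)) % q
                 for col in zip(*Y)] for row in X]

    def wdiff(X, Y):  # Hamming weight of X - Y over F_q
        return sum(1 for r, s in zip(X, Y) for a, b in zip(r, s) if (a - b) % q)

    total = sum(1 for row in D for v in row if v % q)  # wt(D)
    prev, AjB = mul(C, B), B
    total += wdiff(prev, D)                            # wt(CB - D)
    for j in range(1, delta):                          # wt(C A^j B - C A^(j-1) B)
        AjB = mul(A, AjB)
        cur = mul(C, AjB)
        total += wdiff(cur, prev)
        prev = cur
    print(total)  # 9
    ```

    As expected, the result again attains the upper bound $(n-1)(\delta+1)=9$.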

    In this work we studied the lowest Hamming weight of the parity vectors generated by information sequences of weight two, that is, $z_{\min}$, of a $1/n$ convolutional code $\mathcal{C}(A,B,C,D)$ given in terms of its input-state-output representation. This value is, in general, difficult to compute, as it is the minimum over the large set of codewords generated by inputs of weight two. We analysed how the computations can be reduced by studying the structure of the codewords produced by the input-state-output system, which narrows the search needed to obtain the exact value of $z_{\min}(\mathcal{C})$. We also presented a class of convolutional codes for which we know the form of the codewords that lead to the computation of $z_{\min}$, which therefore allows us to determine its exact value up to a difference of one unit.

    It is left as an open problem to provide specific lower and upper bounds on $z_{\min}(\mathcal{C})$ and, consequently, on the effective free distance over general finite fields. It would also be interesting to show that this hypothetical upper bound is tight by presenting a concrete construction of a Turbo code whose effective free distance reaches this bound. Another interesting direction would be to derive constructions, different from the one given in Section 4, having better bounds.

    The authors would like to thank the anonymous reviewers for their comments. The research of the second author was supported by Spanish I+D+i project PID2019-108668GB-I00 of MCIN/AEI/10.13039/501100011033.

    The authors declare that there is no conflict of interest in this paper.



    [1] S. Benedetto, G. Montorsi, Design of parallel concatenated convolutional codes, IEEE T. Commun., 44 (1996), 591–600. https://doi.org/10.1109/26.494303 doi: 10.1109/26.494303
    [2] C. Berrou, A. Glavieux, P. Thitimajshima, Near Shannon limit error-correcting coding and decoding: Turbo Codes (1), Proc. of IEEE ICC 93– IEEE International Conference on Communications, 2 (1993), 1064–1070. https://doi.org/10.1109/ICC.1993.397441 doi: 10.1109/ICC.1993.397441
    [3] R. Bru, R. Cantó, B. Ricarte, V. Rumchev, A Basic Canonical Form of Discrete-time Compartmental Systems, Int. J. Contemp. Math. Sciences, 2 (2007), 261–273. http://dx.doi.org/10.12988/ijcms.2007.07020 doi: 10.12988/ijcms.2007.07020
    [4] P. Campillo, A. Devesa, V. Herranz, C. Perea, Modelization of turbo encoder from linear system point of view, Proceedings of the 10th International Conference on Computational and Mathematical Methods in Science and Engineering (CMMSE 2010), (2010), 314–317.
    [5] B. Cantó, C. Coll, E. Sánchez, On positive behaviour of periodic control systems, Appl. Math. Comput., 161 (2005), 779–786. https://doi.org/10.1016/j.amc.2003.03.001 doi: 10.1016/j.amc.2003.03.001
    [6] M. V. Carriegos, N. De Castro-García, Partitions of elements in a monoid and its applications to systems theory, Linear Algebra Appl., 491 (2016), 161–170. https://doi.org/10.1016/j.laa.2015.05.034 doi: 10.1016/j.laa.2015.05.034
    [7] N. De Castro-García, M. V. Carriegos, A. L. Muñoz Castañeda, A characterization of von Neumann rings in terms of linear systems, Linear Algebra Appl., 494 (2016), 236–244. https://doi.org/10.1016/j.laa.2016.01.019 doi: 10.1016/j.laa.2016.01.019
    [8] J.-J. Climent, V. Herranz, C. Perea, A first approximation of concatenated convolutional codes from linear systems theory viewpoint, Linear Algebra Appl., 425 (2007), 673–699. https://doi.org/10.1016/j.laa.2007.03.017 doi: 10.1016/j.laa.2007.03.017
    [9] J.-J. Climent, V. Herranz, C. Perea, Linear system modelization of concatenated block and convolutional codes, Linear Algebra Appl., 429 (2008), 1191–1212. https://doi.org/10.1016/j.laa.2007.09.006 doi: 10.1016/j.laa.2007.09.006
    [10] J.-J. Climent, D. Napp, C. Perea, R. Pinto, Maximum distance separable 2D convolutional codes, IEEE T. Inform. Theory, 62 (2016), 669–680. https://doi.org/10.1109/TIT.2015.2509075 doi: 10.1109/TIT.2015.2509075
    [11] J. J. Climent, V. Herranz, C. Perea, Parallel concatenated convolutional codes from linear systems theory viewpoint, Systems and Control Letters, 96 (2016), 15–22. https://doi.org/10.1016/j.sysconle.2016.06.016 doi: 10.1016/j.sysconle.2016.06.016
    [12] D. Divsalar, F. Pollara, Low Rate Turbo Codes for Deep Space Communications, Proceedings of 1995 IEEE Int. Symp. Info. Theory, (1995). https://doi.org/10.1109/ISIT.1995.531137
    [13] D. Divsalar, F. Pollara, Multiple turbo codes for deep-space communications, The Telecommunications and Data Acquisition Progress Report, (1995).
    [14] D. Divsalar, R. J. McEliece, The effective free distance of turbo codes, Electron. Lett., 32 (1996), 445–446. https://doi.org/10.1049/el:19960321 doi: 10.1049/el:19960321
    [15] D. Divsalar, R. J. McEliece, On the design of generalized concatenated coding systems with interleavers, TMO Progress Report 42-134, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA, (1998).
    [16] M. I. García-Planas, E. M. Soudit, L. E. Um, Convolutional codes under linear systems point of view. Analysis of output-controllability, WSEAS Press. World Scientific and Engineering Academy and Society, 11 (2012), 2224–2880.
    [17] M. I. García-Planas, N. deCastro, Concatenated linear systems over rings and their application to construction of concatenated families of convolutional codes, Linear algebra appl., 542 (2017), 624–647. https://doi.org/10.1016/j.laa.2017.12.009 doi: 10.1016/j.laa.2017.12.009
    [18] V. Herranz, D. Napp, C. Perea, Serial concatenation of a block code and a 2D convolutional code, Multidim. syst. sign. p., 30 (2019), 1113–1127. https://doi.org/10.1007/s11045-018-0591-3 doi: 10.1007/s11045-018-0591-3
    [19] V. Herranz, D. Napp, C. Perea, 1/n Turbo Codes from linear system point of view, Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, 114 (2020). https://doi.org/10.1007/s13398-020-00850-2
    [20] S. Hong, R. Wu, On deep holes of generalized Reed-Solomon codes, AIMS Mathematics, 1 (2016), 96–101. https://doi.org/10.3934/Math.2016.2.96 doi: 10.3934/Math.2016.2.96
    [21] J. Lieb, J. Rosenthal, Erasure decoding of convolutional codes using first order representations, Math. Control Signal., 33 (2021), 499–513. https://doi.org/10.1007/s00498-021-00289-9 doi: 10.1007/s00498-021-00289-9
    [22] T. Kailath, Linear Systems, Prentice Hall information and system sciences series, Prentice-Hall, 1980.
    [23] J. L. Massey, M. K. Sain, Codes, automata, and continuous systems: explicit interconnections, IEEE T. Automat. Contr., 12 (1967), 644–650. https://doi.org/10.1109/TAC.1967.1098736 doi: 10.1109/TAC.1967.1098736
    [24] R. J. McEliece, The algebraic theory of convolutional codes, Handbook of Coding Theory, V. S. Pless and W. C. Huffman, Eds. North-Holland: Elsevier (1998), 1065–1138.
    [25] A. L. M. Castañeda, J. M. Muñoz-Porras, F. J. Plaza-Martín, Rosenthal's Decoding Algorithm for Certain 1-Dimensional Convolutional Codes, IEEE T. Inform. Theory, 65 (2019), 7736–7741. https://doi.org/10.1109/TIT.2019.2921370 doi: 10.1109/TIT.2019.2921370
    [26] D. Napp, R. Pereira, R. Pinto, P. Rocha, Periodic state-space representations of periodic convolutional codes, Cryptography and Communications, 11 (2019), 585–595. https://doi.org/10.1007/s12095-018-0313-6 doi: 10.1007/s12095-018-0313-6
    [27] B. Ricarte, S. Romero-Vivó, An algebraic approach to the structural properties of positive state control systems, Math. Method. Appl. Sci., 41 (2018), 2370–2378. https://doi.org/10.1002/mma.4351 doi: 10.1002/mma.4351
    [28] J. Rosenthal, J. M. Schumacher, E. V. York, On behaviors and convolutional codes, IEEE T. Inform. Theory, 42 (1996), 1881–1891. https://doi.org/10.1109/18.556682 doi: 10.1109/18.556682
    [29] J. Rosenthal, E. V. York, BCH convolutional codes, IEEE T. Inform. Theory, 45 (1999), 1833–1844. https://doi.org/10.1109/18.782104 doi: 10.1109/18.782104
    [30] J. Rosenthal, Connections between linear systems and convolutional codes, Codes, Systems and Graphical Models, ser. The IMA Volumes in Mathematics and its Applications, B. Marcus and J. Rosenthal, Eds. New York: Springer-Verlag, 123 (2001), 39–66. https://doi.org/10.1007/978-1-4613-0165-3_2
    [31] R. M. Roth, A. Lempel, On MDS codes via Cauchy matrices, IEEE T. Inform. Theory, 35 (1989), 1314–1319. https://doi.org/10.1109/18.45291 doi: 10.1109/18.45291
    [32] R. Smarandache, J. Rosenthal, Construction of Convolutional Codes using Methods from Linear Systems Theory, Proc. of the 35-th Annual Allerton Conf. on Commun., Control, and Computing, (1997), 953–960.
    [33] R. Smarandache, J. Rosenthal, A state space approach for constructing MDS rate 1/n convolutional codes, Proc. of the 1998 IEEE Inform. Theory Workshop (ITW 1998), (1998), 116–117. https://doi.org/10.1109/ITW.1998.706461
    [34] X. F. Xu, Y. C. Xu, S. F. Hong, Some results on ordinary words of standard Reed-Solomon codes, AIMS Mathematics, 4 (2019), 1336–1347. https://doi.org/10.3934/math.2019.5.1336 doi: 10.3934/math.2019.5.1336
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
