
Unreliable networks with random parameter matrices and time-correlated noises: distributed estimation under deception attacks

  • This paper examines the distributed filtering and fixed-point smoothing problems for networked systems, considering random parameter matrices, time-correlated additive noises and random deception attacks. The proposed distributed estimation algorithms consist of two stages: the first stage creates intermediate estimators based on local and adjacent node measurements, while the second stage combines the intermediate estimators from neighboring sensors using least-squares matrix-weighted linear combinations. The major contributions and challenges lie in simultaneously considering various network-induced phenomena and providing a unified framework for systems with incomplete information. The algorithms are designed without specific structure assumptions and use a covariance-based estimation technique, which does not require knowledge of the evolution model of the signal being estimated. A numerical experiment demonstrates the applicability and effectiveness of the proposed algorithms, highlighting the impact of observation uncertainties and deception attacks on estimation accuracy.

    Citation: Raquel Caballero-Águila, María J. García-Ligero, Aurora Hermoso-Carazo, Josefa Linares-Pérez. Unreliable networks with random parameter matrices and time-correlated noises: distributed estimation under deception attacks[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 14550-14577. doi: 10.3934/mbe.2023651


    Sensor networks have generated considerable research interest due to their wide range of practical applications and the rapid development of communication and information technologies (see, e.g., [1] and [2]). A widely explored research topic in this domain is the distributed estimation problem, which arises in situations where the sensor nodes are spatially distributed according to a predetermined network topology and each sensor node can use data from itself and its neighboring nodes to estimate the signal of interest. Each sensor node then acts as a local fusion centre, combining its own information with that obtained from adjacent nodes to improve the performance of the local estimators, which are based solely on the node's own measurements. This collaborative signal estimation strategy offers many advantages, including ease of implementation, robustness, scalability and high reliability. In [3], the distributed estimation problem is addressed in the presence of random packet dropouts during data transmission. A distributed event-based filtering structure is proposed in [4] for a class of sensor networks with sensor saturations and cyber-attacks. Nonlinear systems over sensor networks, whose topologies are changeable subject to the Round-Robin protocol in the finite-horizon case, are considered in [5]. A recursive distributed filtering algorithm for networked systems with random parameter matrices and correlated noises is designed in [6]. In recent years, a large variety of distributed estimation algorithms have been designed for sensor networks exposed to various challenges and vulnerabilities, under a range of filter schemes. For example, a new distributed filtering strategy has been presented in [7] by fully taking into consideration both the prediction estimates and each node's own measurement innovation. In [8], a novel distributed filtering compensation algorithm is presented in terms of the available transmitted data. A new design of an innovation-based stealthy attack strategy against distributed state estimation over a sensor network is proposed in [9], while the state estimation problem in linear time-invariant systems by using a network of distributed observers with switching communication topology is studied in [10]. Some significant achievements in the field of distributed estimation for stochastic systems over sensor networks are reviewed in [11] and [12].

    In the fields of physics, electronics and engineering, many applications can involve infinite-step colored measurement noises, particularly when the sampling frequency is high enough to make the noises significantly correlated over two or more consecutive sampling periods. In recent years, researchers have addressed the estimation problem under the assumption that the measurements are affected by infinite-step time-correlated channel noise, modeled as the output of a linear system driven by white noise. Two popular methods for dealing with this type of noise correlation are state augmentation, which is simple and direct but computationally expensive, and measurement differencing, which avoids the problem of increasing dimensions but requires two consecutive measurements to compute the difference. Using the measurement differencing method, in [13] the time-correlation of the measurement noises is transformed into the cross-correlation between the equivalent measurement noise and the process noise. Convergence conditions of the optimal linear estimator are obtained in [14], by using a new measurement obtained from measurement differencing. In [15], by using the time-differencing approach, the available measurements are transformed into an equivalent set of observations that do not depend on the time-correlated noise. Alternative non-augmentation and non-differencing methods to address the state estimation problem, based on the direct estimation of the time-correlated additive noise, are described in [16] and [17].

    Communication networks are typically subject to resource limitations, which can provoke network-related issues during signal measurement or transmission [18]. Some such issues, such as the presence of multiplicative noise, missing observations or fading measurements, can be described by introducing stochastic parameter matrices into the measurement equations. In recent years, considerable research has been conducted into estimation problems arising in systems with random parameter matrices. For example, in [19] the Tobit Kalman filtering problem is studied for a class of linear discrete-time systems with random parameters, where the elements of both the system matrix and the measurement matrix are allowed to be random variables in order to reflect reality. The distributed fusion estimation problem for networked systems whose multisensor measured outputs involve uncertainties modeled by random parameter matrices is investigated in [20]. In [21], the optimal linear filtering problem for linear discrete-time stochastic systems with random matrices in both the state and measurement equations is addressed. A Kalman-like recursive distributed optimal linear fusion predictor without feedback, for discrete-time linear stochastic systems with correlated random parameter matrices, is presented in [22]. In [23], the distributed fusion estimation problem is discussed in the presence of coupled noises, random delays and packet dropouts, for a class of uncertain systems, where the uncertainty in the measurement model is described by random parameter matrices.

    In addition to the problem of random uncertainties in measurements and transmissions, a critical issue that cannot be ignored in any study of the estimation problem in networked systems is the possibility of suffering cyber-attacks. Security vulnerability is a common weakness that has been widely discussed in the literature; a bibliographical review of recent advances and challenges in this regard can be found in [24]. In particular, deception attacks have attracted significant research attention. This form of attack seeks to compromise data integrity by maliciously and randomly falsifying information. In this respect, [25] examined the centralized security-guaranteed filtering problem for linear time-invariant stochastic systems with multirate-sensor fusion subjected to deception attacks. In other approaches, the $H_\infty$-consensus filtering problem for discrete-time systems with multiplicative noises and deception attacks has been investigated by [26], and the distributed estimation problem in sensor networks with a specific topology structure under deception attacks has been addressed by [27,28,30]. More specifically, the distributed filtering problem in networked systems with fading measurements and multiplicative noises in both the signal and measurement equations is discussed in [27]. In [28], a positive system over a sensor network with simultaneous deception attacks and various network-induced constraints on sensor measurements is considered. The distributed secure state estimation problem is addressed in [29] for a class of general nonlinear systems over sensor networks under unknown deception attacks on innovations, while networked uncertain systems, containing uncertainties due to multiplicative and additive noises in the state and measurement equations, are considered in [30].

    In view of the above considerations, the present study focuses on a class of networked systems, whose sensor nodes are distributed in space according to a fixed network topology. The main study goal is to address the least-squares linear distributed estimation problem from measurements affected by random parameter matrices and time-correlated additive noises, and simultaneously exposed to random deception attacks. For this purpose, a recursive algorithm for the distributed estimators is generated in two stages. First, each sensor node collects measurements from its neighbors to create intermediate least-squares linear estimators by an innovation approach. In the second stage, the intermediate estimators from neighboring sensors are combined to form distributed estimators through least-squares matrix-weighted linear combinations. A greater volume of information from different sensors is used in the second stage than in the first. This enhances the intermediate estimation performance and reduces disagreements among intermediate estimators from different sensors, by steering each distributed estimator closer to the global optimal linear estimator (hypothetically based on measurements from all network sensors).

    This study makes the following main contributions. First, the consideration of random parameter matrices in the measurement equations provides a unified framework for handling common network-induced phenomena, such as multiplicative noise, missing observations and fading measurements; thus, the proposed algorithm can be applied to a wide range of networked systems with incomplete information. The model also integrates stochastic deception attacks, to which many networked systems are vulnerable. Second, both the random parameter matrices and the random variables modeling the deception attacks are time-varying, making it possible to consider general situations involving time-dependent network-induced phenomena and different random phenomena at different sensor nodes. Third, the study considers the infinite-step time correlation of the measurement noises and facilitates the direct estimation of the time-correlated additive noise, without relying on the differencing method. Fourth, the covariance-based estimation technique employed does not require knowledge of the evolution model for the signal being estimated. Furthermore, and unlike most previous studies of distributed estimation, which usually obtain optimal linear estimators based on a given structure, we present optimal linear distributed estimation algorithms under the mean squared error criterion, without requiring a particular structure for the estimators. Another advantage of the proposed distributed estimation scheme is that, while most other approaches only provide an upper bound for the estimation error covariance, this paper derives an exact expression for the error covariance, which can be calculated offline, regardless of the specific measurement set to be processed. Finally, in comparison with the authors' most closely related previous studies on distributed estimation, the current system model, in contrast to the one in [6], incorporates deception attacks and infinite-step time-correlated noise. The system model in [27], like the one considered in this work, includes stochastic deception attacks but, in contrast to the present study, assumes white additive noises. Furthermore, it is worth noting that the derivation of the distributed filtering algorithm in both [6] and [27] relies on the state-space model equations, whereas the algorithms proposed in this paper do not require explicit information about the state transition equation. Instead, they rely solely on the factorization of the state covariance matrix in a separable form. The superiority of the proposed distributed filter over the one in [27] in the presence of infinite-step time-correlated additive noises will be tested experimentally in a numerical simulation example.

    The rest of this paper is structured as follows. The networked system model with random parameter matrices, time-correlated noises and deception attacks is presented in Section 2, together with the assumptions required of the stochastic processes involved. The distributed estimation problem is then formulated in Section 3. After this, the distributed estimators are derived in two steps, described in Sections 4 and 5, respectively. Section 6 provides an illustrative example highlighting the effectiveness of the proposed distributed estimation algorithms, and finally the main conclusions drawn are summarized in Section 7.

    Notation and abbreviations

    The mathematical notation and abbreviations used in this paper are detailed in the following table. If not explicitly stated, all vector and matrix dimensions are assumed to be compatible with algebraic operations.

    $\mathbb{R}^n$: Set of $n$-dimensional real vectors
    $0$: Zero scalar or matrix of compatible dimension
    $1_{n\times n}$: $n\times n$ all-ones matrix
    $I_n$: $n\times n$ identity matrix
    $M^T$, $M^{-1}$ and $M^{-T}$: Transpose, inverse and transpose of the inverse of matrix $M$
    $\mathrm{Diag}(d_1,\dots,d_m)$: Diagonal matrix with entries $d_1,\dots,d_m$
    $(M_1\,|\,\cdots\,|\,M_k)$: Partitioned matrix whose blocks are the submatrices $M_1,\dots,M_k$
    $G_k=G_{k,k}$: Function $G_{k,h}$, depending on time instants $k$ and $h$, when $h=k$
    $M^{(i)}=M^{(ii)}$: Function $M^{(ij)}$, depending on sensors $i$ and $j$, when $j=i$
    $\otimes$: Kronecker product of matrices
    $\circ$: Hadamard product of matrices
    $\delta_{k,h}$: Kronecker delta function
    LS: Least-squares
    OPL: Orthogonal projection lemma

    Consider a second-order $n_x$-dimensional discrete-time signal process $\{x_k\}_{k\geq 1}$, measured by different sensors which are spatially distributed according to a fixed network topology. Specifically, the sensor network is represented by a digraph $\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{D})$ of order $m$, where $\mathcal{N}=\{1,\dots,m\}$ denotes the set of sensor nodes, $\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}$ is the set of edges connecting different nodes and $(j,i)\in\mathcal{E}$ means that sensor node $i$ receives the information from node $j$. These link relations among sensors are specified by the adjacency matrix, $\mathcal{D}=(d_{ij})_{m\times m}$, with $d_{ij}=1$ for $(i,j)\in\mathcal{E}$ and $d_{ij}=0$ otherwise; since any sensor receives its own information, it is clear that $d_{ii}=1$. For each node $i\in\mathcal{N}$, the set of adjacent nodes plus the node itself is denoted by $\mathcal{N}_i=\{j\in\mathcal{N}:d_{ji}=1\}$; therefore $\mathcal{N}_i$, which we term the neighborhood of node $i$, is the set of sensor nodes that transmit their information to node $i$.
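    To make the neighborhood definition concrete, the following minimal Python sketch extracts each $\mathcal{N}_i$ directly from the adjacency matrix; it uses, purely for illustration, the five-node matrix of the numerical example later in the paper, and all variable names are ours rather than the paper's.

```python
import numpy as np

# Assumed illustrative adjacency matrix D = (d_ij); d_ji = 1 means node i receives from node j.
D = np.array([[1, 0, 1, 0, 1],
              [1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1]])

m = D.shape[0]
# Neighborhood of node i: N_i = {j : d_ji = 1}, i.e., the nodes whose information reaches node i.
neighborhoods = {i + 1: [j + 1 for j in range(m) if D[j, i] == 1] for i in range(m)}
print(neighborhoods)   # {1: [1, 2, 3], 2: [2, 3, 4], 3: [1, 3, 4], 4: [2, 4, 5], 5: [1, 4, 5]}
```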

    The network sensors provide noisy measurements of the signal with multiplicative perturbations described by random parameter matrices, and the measurement noises in each sensor are assumed to be sequentially correlated. In our study context, deception attacks may be launched by potential adversaries to replace these measurements by deception noises before they are processed.

    We now address the distributed filtering and smoothing problems at each node $i\in\mathcal{N}$, based on the available signal information in that node, i.e., the measurements derived from all its neighbor nodes $j\in\mathcal{N}_i$. The estimation is addressed under the LS approach, using only covariance information on the processes involved in the model. Therefore, assumptions (A1)–(A6), listed below, are set to guarantee the existence and knowledge of the first and second-order moments of the signal to be estimated and those of the observations on which the estimation is based. Regarding the signal process, the following assumption –which is key to the recursivity of the estimation algorithms– is required.

    (A1) (On the signal). The signal process $\{x_k\}_{k\geq 1}$ is a zero-mean second-order process whose covariance function is expressed in a separable form as follows:

    $E[x_k x_l^T] = A_k B_l^T, \quad 1\leq l\leq k,$

    where the factors $A_k$, $B_k$ are $n_x\times N$-dimensional known matrices.

    In the following, we describe the measurement model and specify the assumptions made for the random measurement matrices and additive noises.

    As previously indicated, at any time $k\geq 1$, the signal $x_k$ is measured by all the network sensors, which provide local outputs perturbed by random parameter matrices and time-correlated additive measurement noises. To describe this situation, consider the following measurement model:

    $\breve{y}^{(i)}_k = C^{(i)}_k x_k + v^{(i)}_k, \quad k\geq 1;\ i\in\mathcal{N}, \qquad (2.1)$

    where $\breve{y}^{(i)}_k\in\mathbb{R}^{n_y}$ is the measurement of the signal provided by the $i$-th sensor at time $k$, $C^{(i)}_k$ is a random parameter matrix and $v^{(i)}_k$ is the time-correlated measurement noise, which is assumed to be generated by a white process, $\{\xi^{(i)}_k\}_{k\geq 0}$, from an initial noise $v^{(i)}_0$:

    $v^{(i)}_k = H^{(i)}_{k-1} v^{(i)}_{k-1} + \xi^{(i)}_{k-1}, \quad k\geq 1;\ i\in\mathcal{N}, \qquad (2.2)$

    where $H^{(i)}_k$ are non-singular known deterministic matrices.

    To address the estimation problem using covariance information, the following assumptions about the first and second-order moments of the random parameter matrices and the measurement noises are required.

    (A2) (On the random parameter matrices). $\{C^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, with $C^{(i)}_k = \big(c^{(i)}_{pq}(k)\big)_{n_y\times n_x}$, are independent sequences of independent random parameter matrices whose entries have known first and second-order moments. The mean matrices are denoted by $\overline{C}^{(i)}_k = E[C^{(i)}_k]$, and their $(p,q)$-entries are given by $E[c^{(i)}_{pq}(k)]$.

    Remark 1. The existence of second-order moments guarantees that of $E[C^{(i)}_k G C^{(j)T}_k]$, for any random matrix $G$ with mean $\overline{G}$. Moreover, if $G$ is independent of the matrices $C^{(i)}_k$ and $C^{(j)}_k$, the $(p,q)$-entry of this expectation is given by $\sum_{a=1}^{n_x}\sum_{b=1}^{n_x} E\big[c^{(i)}_{pa}(k)\, c^{(j)}_{qb}(k)\big]\,\overline{G}_{ab}$.
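    The entry-wise formula in Remark 1 translates directly into code. The sketch below is only an illustration under the assumption that the joint second-order moments $E[c^{(i)}_{pa}(k)c^{(j)}_{qb}(k)]$ are supplied as a four-index array; the function name and array layout are hypothetical.

```python
import numpy as np

# Sketch of Remark 1: the (p,q)-entry of E[C_k^(i) G C_k^(j)T] is
#   sum_{a,b} E[c_pa^(i)(k) c_qb^(j)(k)] * E[G]_{ab},
# for G independent of the parameter matrices. "second_moments" is an assumed
# array with second_moments[p, a, q, b] = E[c_pa^(i)(k) c_qb^(j)(k)].
def expectation_CGC(second_moments, G_mean):
    ny, nx = second_moments.shape[0], second_moments.shape[1]
    out = np.zeros((ny, ny))
    for p in range(ny):
        for q in range(ny):
            out[p, q] = np.sum(second_moments[p, :, q, :] * G_mean)
    return out
```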

    (A3) (On the measurement noises). The measurement noises $\{v^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, are time-correlated sequences as described in (2.2), where:

    ● The initial vectors $v^{(i)}_0$, $i\in\mathcal{N}$, have zero mean and known cross-covariance matrices, $\Sigma^{v(ij)}_0 = E[v^{(i)}_0 v^{(j)T}_0]$, $i,j\in\mathcal{N}$.

    ● The white processes $\{\xi^{(i)}_k\}_{k\geq 0}$, $i\in\mathcal{N}$, are assumed to be independent of each other at different times (consequently, $E[\xi^{(i)}_k\xi^{(j)T}_l]=0$, $l\neq k$; $i,j\in\mathcal{N}$). Their covariance and cross-covariance functions are denoted by $\Sigma^{\xi(ij)}_k = E[\xi^{(i)}_k\xi^{(j)T}_k]$, $k\geq 0$; $i,j\in\mathcal{N}$.

    ● The processes $\{\xi^{(i)}_k\}_{k\geq 0}$, $i\in\mathcal{N}$, are independent of the initial vectors $v^{(i)}_0$, $i\in\mathcal{N}$, and, consequently, $E[v^{(i)}_0\xi^{(j)T}_k]=0$, $k\geq 0$; $i,j\in\mathcal{N}$.

    Remark 2. As a consequence of (A3), the measurement noises, $\{v^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, are zero-mean second-order processes with cross-covariance matrices $\Sigma^{v(ij)}_{k,l} = E[v^{(i)}_k v^{(j)T}_l] = H^{(i)}_{k-1}\cdots H^{(i)}_{l}\,\Sigma^{v(ij)}_{l}$, $1\leq l<k$, and $\Sigma^{v(ij)}_k$ is recursively obtained from the relation $\Sigma^{v(ij)}_k = H^{(i)}_{k-1}\Sigma^{v(ij)}_{k-1}H^{(j)T}_{k-1} + \Sigma^{\xi(ij)}_{k-1}$, $k\geq 1$, with initial condition $\Sigma^{v(ij)}_0$.

    In practice, sensor networks are often exposed to attacks from potential adversaries, seeking to modify or deteriorate the real measurements. In this situation, the observations to be processed for estimation may differ from the actual measurements, and so the mathematical model for the measurement outputs after the attacks must be specified.

    In this paper, we assume that the measurements are subject to deception attacks that, if successful, will neutralize them and insert deceptive information. Such attacks may or may not be successful, and this uncertainty is incorporated into the model via Bernoulli random variables. Therefore, the mathematical model that describes the potentially attacked measurements to be processed in the signal estimation, which will be denoted by $y^{(i)}_k$, is formulated as:

    $y^{(i)}_k = \breve{y}^{(i)}_k + \lambda^{(i)}_k\dot{y}^{(i)}_k, \quad k\geq 1;\ i\in\mathcal{N}, \qquad (2.3)$

    where $\lambda^{(i)}_k$ are Bernoulli random variables modeling the randomness on the success ($\lambda^{(i)}_k=1$) or failure ($\lambda^{(i)}_k=0$) of the attacks, and $\dot{y}^{(i)}_k = -\breve{y}^{(i)}_k + w^{(i)}_k$ is the signal inserted by the attacker, neutralizing the actual measurement, $\breve{y}^{(i)}_k$, and replacing it with a deceptive noise represented by $w^{(i)}_k$.

    Therefore, $\lambda^{(i)}_k=0$ means that the attack against the $i$-th sensor at time $k$ has failed and the measurement to be processed is the actual one ($y^{(i)}_k = \breve{y}^{(i)}_k$), while $\lambda^{(i)}_k=1$ means that the attack was successful and the processed measurement is the deceptive one ($y^{(i)}_k = w^{(i)}_k$). An equivalent expression for the attacked measurement outputs (2.3) is given by:

    $y^{(i)}_k = (1-\lambda^{(i)}_k)\breve{y}^{(i)}_k + \lambda^{(i)}_k w^{(i)}_k, \quad k\geq 1;\ i\in\mathcal{N}. \qquad (2.4)$

    The following assumptions are made regarding the processes involved in these observation equations.

    (A4) (On the success of attacks). The processes $\{\lambda^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, are independent sequences of independent Bernoulli random variables with known success probabilities, $P(\lambda^{(i)}_k=1) = \overline{\lambda}^{(i)}_k$. As a consequence, the first and second-order moments of these variables are:

    $E[\lambda^{(i)}_k] = E[(\lambda^{(i)}_k)^2] = \overline{\lambda}^{(i)}_k, \quad k\geq 1; \qquad E[\lambda^{(i)}_k\lambda^{(j)}_l] = \overline{\lambda}^{(i)}_k\overline{\lambda}^{(j)}_l, \quad l\neq k \ \text{or}\ j\neq i;\ l,k\geq 1;\ i,j\in\mathcal{N}.$

    (A5) (On the deception noises). The noises inserted by successful attacks, $\{w^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, are zero-mean white processes with known covariance and cross-covariance matrices, $\Sigma^{w(ij)}_k = E[w^{(i)}_k w^{(j)T}_k]$, $k\geq 1$; $i,j\in\mathcal{N}$.

    Remark 3. In this paper, the attack probabilities of success and the covariances and cross-covariances of the noises of the attacks are assumed to be known. If they were unknown, they should be identified before applying the proposed algorithms. To deal with this issue, a distributed self-tuning filtering algorithm is proposed in [30], based on the identification of unknown characteristics through the sample zero-order and first-order correlation functions of the observations.

    Finally, the following assumption on the processes involved in the described model is required to address the estimation problem.

    (A6) (Mutual independence). For each $i\in\mathcal{N}$, the signal process, $\{x_k\}_{k\geq 1}$, the random parameter matrices, $\{C^{(i)}_k\}_{k\geq 1}$, the measurement noise, $\{v^{(i)}_k\}_{k\geq 1}$, and the processes involved in the attacks, $\{\lambda^{(i)}_k\}_{k\geq 1}$ and $\{w^{(i)}_k\}_{k\geq 1}$, are mutually independent.

    In formulating the estimation problem, our aim is to use the distributed fusion method to obtain estimators of the signal at each sensor node, $i\in\mathcal{N}$, based on the information available at that node. In accordance with the network topology described in Section 2, this information comes not only from the sensor itself but also from all the neighboring ones that transmit their information to it. Therefore, in the $i$-th sensor, the distributed fusion estimator of the signal $x_k$ based on the information up to time $h$, denoted by $\hat{x}^{D(i)}_{k/h}$, is obtained by using the local information from the sensor together with that from the nodes in its neighborhood, $\mathcal{N}_i=\{j\in\mathcal{N}:d_{ji}=1\}$.

    The proposed distributed estimators are derived in two steps for each $i\in\mathcal{N}$: a) derive the intermediate LS linear estimators; b) fuse the neighboring intermediate estimators. In the first step, the LS linear estimator, $\hat{x}^{(i)}_{k/h}$, is obtained from the potentially attacked observations (2.4) coming from all the sensor nodes $j\in\mathcal{N}_i$. Then, in the second step, the distributed estimator $\hat{x}^{D(i)}_{k/h}$ is determined as the LS matrix-weighted linear combination of the neighboring intermediate estimators, $\hat{x}^{(j)}_{k/h}$, $j\in\mathcal{N}_i$.

    In order to unify the derivation of the intermediate estimators in all sensors, we jointly consider all the network output information at each sampling time $k\geq 1$, which is described by the gathered vector $y_k=(y^{(1)T}_k,\dots,y^{(m)T}_k)^T$; the outputs corresponding to the sensors in $\mathcal{N}_i$ are then extracted from this vector to be processed in each sensor $i\in\mathcal{N}$. Therefore, the intermediate estimator $\hat{x}^{(i)}_{k/h}$, for each $i\in\mathcal{N}$, is based on the measurements described by $Y^{(i)}_l = D^{(i)}_y y_l$, $l=1,\dots,h$, where $D^{(i)}_y$ is the matrix obtained by removing the all-zero rows of $\mathrm{Diag}(d_{1i},\dots,d_{mi})\otimes I_{n_y}$.
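    For illustration, the selection matrix $D^{(i)}_y$ can be built exactly as described; the short Python sketch below assumes the adjacency matrix and the measurement dimension $n_y$ are available, and the helper name is ours.

```python
import numpy as np

# Sketch (assumed helper): build D_y^(i) by removing the all-zero rows of
# Diag(d_1i, ..., d_mi) (x) I_ny, so that Y_l^(i) = D_y^(i) y_l extracts the
# measurements of the sensors in the neighborhood N_i from the stacked vector y_l.
def selection_matrix(D, i, ny):
    full = np.kron(np.diag(D[:, i - 1]), np.eye(ny))   # Diag(d_1i,...,d_mi) (x) I_ny
    nonzero_rows = ~np.all(full == 0, axis=1)
    return full[nonzero_rows, :]
```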

    Next, we specify the gathered measurement model and the statistical properties of the processes involved –derived from our prior assumptions about the local measurements–.

    To describe the gathered observations to be processed in the estimation, the components of the local observation models in Section 2 are stacked as follows:

    $y_k = \begin{pmatrix} y^{(1)}_k \\ \vdots \\ y^{(m)}_k \end{pmatrix}, \quad \breve{y}_k = \begin{pmatrix} \breve{y}^{(1)}_k \\ \vdots \\ \breve{y}^{(m)}_k \end{pmatrix}, \quad v_k = \begin{pmatrix} v^{(1)}_k \\ \vdots \\ v^{(m)}_k \end{pmatrix}, \quad w_k = \begin{pmatrix} w^{(1)}_k \\ \vdots \\ w^{(m)}_k \end{pmatrix},$
    $C_k = \begin{pmatrix} C^{(1)}_k \\ \vdots \\ C^{(m)}_k \end{pmatrix}, \quad \Lambda_k = \mathrm{Diag}\big(\lambda^{(1)}_k,\dots,\lambda^{(m)}_k\big)\otimes I_{n_y}, \quad H_k = \mathrm{Diag}\big(H^{(1)}_k,\dots,H^{(m)}_k\big).$

    From this, the $mn_y$-dimensional observation vector $y_k$, which gathers all the sensor measurements at time $k$, is expressed by:

    $y_k = (I_{mn_y} - \Lambda_k)\breve{y}_k + \Lambda_k w_k, \quad k\geq 1, \qquad (3.1)$

    where $\breve{y}_k$ is the actual measurement vector gathered before the attacks, given by:

    $\breve{y}_k = C_k x_k + v_k, \quad k\geq 1. \qquad (3.2)$

    The following properties of the processes involved in (3.1)-(3.2) are directly obtained from the corresponding assumptions stated in Section 2.

    ● $\{C_k\}_{k\geq 1}$ is a sequence of independent second-order random parameter matrices with means $\overline{C}_k = \big(\overline{C}^{(1)T}_k \,|\, \cdots \,|\, \overline{C}^{(m)T}_k\big)^T$. Moreover, for any first-order random matrix, $G$, $E[C_k G C_k^T] = \big(E[C^{(i)}_k G C^{(j)T}_k]\big)_{i,j\in\mathcal{N}}$, with $(i,j)$-components as defined in Remark 1.

    ● The measurement noise, $\{v_k\}_{k\geq 1}$, is a zero-mean second-order time-correlated sequence. From Remark 2, the corresponding covariance matrices, $\Sigma^v_{k,l} = E[v_k v_l^T]$, are given by $\Sigma^v_{k,l} = H_{k-1}\cdots H_l\,\Sigma^v_l$, $1\leq l<k$, and $\Sigma^v_k$ is recursively obtained from the relation $\Sigma^v_k = H_{k-1}\Sigma^v_{k-1}H^T_{k-1} + \Sigma^\xi_{k-1}$, $k\geq 1$, with initial condition $\Sigma^v_0 = (\Sigma^{v(ij)}_0)_{i,j\in\mathcal{N}}$ and $\Sigma^\xi_k = (\Sigma^{\xi(ij)}_k)_{i,j\in\mathcal{N}}$.

    Remark 4. The non-singularity of the matrices $H_k$ allows us to factorize the noise covariance matrices in a similar way to those of the signal in (A1); namely:

    $\Sigma^v_{k,l} = \mathcal{H}_k\mathcal{F}_l^T, \quad 1\leq l\leq k, \qquad (3.3)$

    where $\mathcal{H}_k = H_{k-1}\cdots H_0$ and $\mathcal{F}_k = \Sigma^v_k\mathcal{H}_k^{-T}$, $k\geq 1$.

    ● $\{\Lambda_k\}_{k\geq 1}$ are diagonal independent matrices with means $\overline{\Lambda}_k = \mathrm{Diag}\big(\overline{\lambda}^{(1)}_k,\dots,\overline{\lambda}^{(m)}_k\big)\otimes I_{n_y}$, $k\geq 1$. For the purpose of further developments, if $G$ is a random matrix independent of $\{\Lambda_k\}_{k\geq 1}$, these matrices operate as follows:

    $E[\Lambda_k G\Lambda_k] = K^\lambda_k\circ E[G], \qquad E[(I_{mn_y}-\Lambda_k)G(I_{mn_y}-\Lambda_k)] = K^{1-\lambda}_k\circ E[G], \quad k\geq 1,$

    where $K^\lambda_k = \big(E[\lambda^{(i)}_k\lambda^{(j)}_k]\big)_{i,j\in\mathcal{N}}\otimes 1_{n_y\times n_y}$, $K^{1-\lambda}_k = \big(E[(1-\lambda^{(i)}_k)(1-\lambda^{(j)}_k)]\big)_{i,j\in\mathcal{N}}\otimes 1_{n_y\times n_y}$, and the entries $E[\lambda^{(i)}_k\lambda^{(j)}_k]$ are given in (A4).

    ● The deception noise, $\{w_k\}_{k\geq 1}$, is a white process whose covariance matrices, $\Sigma^w_k = E[w_k w_k^T]$, are obtained from (A5): $\Sigma^w_k = (\Sigma^{w(ij)}_k)_{i,j\in\mathcal{N}}$, $k\geq 1$.

    Finally, assumption (A6) implies the independence of the processes involved in the gathered model (3.1)-(3.2):

    ● The signal process $\{x_k\}_{k\geq 1}$, the random parameter matrices, $\{C_k\}_{k\geq 1}$, the measurement noise, $\{v_k\}_{k\geq 1}$, and the processes $\{\Lambda_k\}_{k\geq 1}$ and $\{w_k\}_{k\geq 1}$, are all mutually independent.

    The above properties guarantee that the observations $\{y_k\}_{k\geq 1}$ constitute a zero-mean second-order process, whose covariance matrices, $\Sigma^y_k = E[y_k y_k^T]$, are given by:

    $\Sigma^y_k = K^{1-\lambda}_k\circ\big( E[C_k A_k B_k^T C_k^T] + \mathcal{H}_k\mathcal{F}_k^T \big) + K^\lambda_k\circ\Sigma^w_k, \quad k\geq 1. \qquad (3.4)$

    In this paper, the intermediate LS linear estimation problem is addressed by an innovation approach, according to which the observation process at the $i$-th sensor, $\{Y^{(i)}_k\}_{k\geq 1}$, with $Y^{(i)}_k = D^{(i)}_y y_k$, is replaced by a white process $\{\eta^{(i)}_k\}_{k\geq 1}$ –the innovation process–, such that the LS linear estimator of an arbitrary zero-mean second-order vector $a_k$ based on the observations $\{Y^{(i)}_1,\dots,Y^{(i)}_h\}$ can be obtained by the following linear combination of the corresponding innovations, $\{\eta^{(i)}_1,\dots,\eta^{(i)}_h\}$:

    $\hat{a}^{(i)}_{k/h} = \sum_{l=1}^{h} E[a_k\eta^{(i)T}_l](\Sigma^{\eta(i)}_l)^{-1}\eta^{(i)}_l, \quad h\geq 1; \qquad \hat{a}^{(i)}_{k/0} = 0, \qquad (3.5)$

    where $\Sigma^{\eta(i)}_k = E[\eta^{(i)}_k\eta^{(i)T}_k]$ denotes the innovation covariance.

    The innovation at time $k$ is defined as $\eta^{(i)}_k = Y^{(i)}_k - \hat{Y}^{(i)}_{k/k-1}$, where $\hat{Y}^{(i)}_{k/k-1}$ is the LS linear one-stage predictor of the observation $Y^{(i)}_k = D^{(i)}_y y_k$, which is clearly given by $\hat{Y}^{(i)}_{k/k-1} = D^{(i)}_y\hat{y}^{(i)}_{k/k-1}$. Hence, the innovations are written as:

    $\eta^{(i)}_k = D^{(i)}_y\big( y_k - \hat{y}^{(i)}_{k/k-1} \big), \quad k\geq 1, \qquad (3.6)$

    where, according to (3.5), expressions (3.1)-(3.2) and the independence properties of the model, the predictor of $y_k$ is expressed in terms of those for the signal and noise as follows:

    $\hat{y}^{(i)}_{k/k-1} = (I_{mn_y} - \overline{\Lambda}_k)\big( \overline{C}_k\hat{x}^{(i)}_{k/k-1} + \hat{v}^{(i)}_{k/k-1} \big), \quad k\geq 1. \qquad (3.7)$

    Expressions (3.6) and (3.7) provide the basis for obtaining the intermediate estimation algorithms presented in the following section.

    In this section, the LS linear filtering and fixed-point smoothing estimators of the signal $x_k$ are obtained in each sensor according to the observations available at that sensor. More precisely, for each $i\in\mathcal{N}$, we first derive a recursive algorithm for the intermediate LS linear filter, $\hat{x}^{(i)}_{k/k}$, based on the observations $Y^{(i)}_l = D^{(i)}_y y_l$, $l=1,\dots,k$. The filtering estimator of $x_k$ is then recursively updated with successive observations, $Y^{(i)}_l = D^{(i)}_y y_l$, $l=k+1,\dots,h$, in order to obtain the intermediate LS linear smoothing estimators, $\hat{x}^{(i)}_{k/h}$, $h>k$.

    The intermediate filtering and fixed-point smoothing algorithms are presented in Theorems 1 and 2, respectively. Both theorems also include recursive formulas for the error covariance matrices $\Sigma^{\tilde{x}(i)}_{k/h} = E[\tilde{x}^{(i)}_{k/h}\tilde{x}^{(i)T}_{k/h}]$, with $\tilde{x}^{(i)}_{k/h} = x_k - \hat{x}^{(i)}_{k/h}$, thus providing the natural measure of the estimation accuracy under the LS criterion.

    Theorem 1. For any $i\in\mathcal{N}$, the intermediate LS linear filtering estimators, $\hat{x}^{(i)}_{k/k}$, and the error covariance matrices, $\Sigma^{\tilde{x}(i)}_{k/k}$, are calculated as:

    $\hat{x}^{(i)}_{k/k} = (A_k \,|\, 0)\, z^{(i)}_k, \quad k\geq 1, \qquad (4.1)$
    $\Sigma^{\tilde{x}(i)}_{k/k} = A_k B_k^T - (A_k \,|\, 0)\, \Sigma^{z(i)}_k (A_k \,|\, 0)^T, \quad k\geq 1. \qquad (4.2)$

    The vectors $z^{(i)}_k$ and their covariance matrices, $\Sigma^{z(i)}_k$, satisfy the following recursive formulas:

    $z^{(i)}_k = z^{(i)}_{k-1} + Z^{(i)}_k (\Sigma^{\eta(i)}_k)^{-1} \eta^{(i)}_k, \quad k\geq 1; \quad z^{(i)}_0 = 0, \qquad (4.3)$
    $\Sigma^{z(i)}_k = \Sigma^{z(i)}_{k-1} + Z^{(i)}_k (\Sigma^{\eta(i)}_k)^{-1} Z^{(i)T}_k, \quad k\geq 1; \quad \Sigma^{z(i)}_0 = 0, \qquad (4.4)$

    where the gain coefficients, $Z^{(i)}_k$, are:

    $Z^{(i)}_k = \big( (\overline{C}_k B_k \,|\, \mathcal{F}_k) - (\overline{C}_k A_k \,|\, \mathcal{H}_k)\, \Sigma^{z(i)}_{k-1} \big)^T (I_{mn_y} - \overline{\Lambda}_k)\, D^{(i)T}_y, \quad k\geq 1, \qquad (4.5)$

    and the innovations, $\eta^{(i)}_k$, and their covariance matrices, $\Sigma^{\eta(i)}_k$, are given by:

    $\eta^{(i)}_k = D^{(i)}_y \big( y_k - (I_{mn_y} - \overline{\Lambda}_k)(\overline{C}_k A_k \,|\, \mathcal{H}_k)\, z^{(i)}_{k-1} \big), \quad k\geq 1, \qquad (4.6)$
    $\Sigma^{\eta(i)}_k = D^{(i)}_y \big( \Sigma^{y}_k - (I_{mn_y} - \overline{\Lambda}_k)(\overline{C}_k A_k \,|\, \mathcal{H}_k)\, \Sigma^{z(i)}_{k-1} (\overline{C}_k A_k \,|\, \mathcal{H}_k)^T (I_{mn_y} - \overline{\Lambda}_k) \big) D^{(i)T}_y, \quad k\geq 1. \qquad (4.7)$

    The matrices $\mathcal{H}_k$ and $\mathcal{F}_k$ are the noise covariance factors defined in (3.3) and the observation covariance, $\Sigma^y_k$, is given in (3.4).

    Proof. See Appendix A.

    The key of the recursive algorithm in Theorem 1 is the intermediate filter factorization (4.1), in which the vectors $z^{(i)}_k$ can be recursively obtained as indicated in (4.3). Equation (4.2) provides the filtering error covariance matrices, which are needed to evaluate the performance of the filters; since these covariances do not depend on the observations, the filter performance can be measured before the observation set is available. In view of (4.1) and (4.2), the problem is focused on obtaining the vectors $z^{(i)}_k$ and their covariances, $\Sigma^{z(i)}_k$, which are derived from the innovations and their covariance matrices by the recursive equations (4.3) and (4.4), respectively. Specifically, starting from $z^{(i)}_{k-1}$, $\Sigma^{z(i)}_{k-1}$ and the new observation $Y^{(i)}_k$, the gain, innovation and innovation covariance are calculated by equations (4.5), (4.6) and (4.7), respectively; then, equations (4.3) and (4.4) are used to obtain $z^{(i)}_k$ and $\Sigma^{z(i)}_k$.
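    For readers who prefer pseudocode, the following Python sketch reproduces one recursion of Theorem 1 at a fixed node $i$, under the assumption that the model quantities $A_k$, $B_k$, $\overline{C}_k$, $\mathcal{H}_k$, $\mathcal{F}_k$, $\overline{\Lambda}_k$, $\Sigma^y_k$ and $D^{(i)}_y$ have already been computed; it is a minimal illustration with names of our choosing, not the authors' implementation.

```python
import numpy as np

# One step of the intermediate filter of Theorem 1 for sensor i.
# z_prev = z_{k-1}^(i), Sz_prev = Sigma_{k-1}^{z(i)}, y_k = stacked observation at time k.
def filter_step(z_prev, Sz_prev, y_k, A_k, B_k, Cbar_k, Hcal_k, Fcal_k,
                Lambdabar_k, Sigma_y_k, D_y_i):
    I = np.eye(Lambdabar_k.shape[0])
    Phi = np.hstack((Cbar_k @ A_k, Hcal_k))          # (Cbar_k A_k | Hcal_k)
    Psi = np.hstack((Cbar_k @ B_k, Fcal_k))          # (Cbar_k B_k | Fcal_k)
    M = (I - Lambdabar_k) @ D_y_i.T
    Z_k = (Psi - Phi @ Sz_prev).T @ M                                     # Eq. (4.5)
    eta_k = D_y_i @ (y_k - (I - Lambdabar_k) @ (Phi @ z_prev))            # Eq. (4.6)
    S_eta = D_y_i @ (Sigma_y_k - (I - Lambdabar_k) @ Phi @ Sz_prev @ Phi.T
                     @ (I - Lambdabar_k)) @ D_y_i.T                       # Eq. (4.7)
    z_k = z_prev + Z_k @ np.linalg.solve(S_eta, eta_k)                    # Eq. (4.3)
    Sz_k = Sz_prev + Z_k @ np.linalg.solve(S_eta, Z_k.T)                  # Eq. (4.4)
    A0 = np.hstack((A_k, np.zeros((A_k.shape[0], Hcal_k.shape[1]))))      # (A_k | 0)
    x_filt = A0 @ z_k                                                     # Eq. (4.1)
    return z_k, Sz_k, x_filt, eta_k, S_eta
```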

    Theorem 2. For any $i\in\mathcal{N}$, the intermediate LS linear smoothing estimators, $\hat{x}^{(i)}_{k/h}$, $h>k$, and the error covariance matrices, $\Sigma^{\tilde{x}(i)}_{k/h}$, are recursively calculated from the filter, $\hat{x}^{(i)}_{k/k}$, and the error covariance $\Sigma^{\tilde{x}(i)}_{k/k}$, respectively, given in Theorem 1:

    $\hat{x}^{(i)}_{k/h} = \hat{x}^{(i)}_{k/h-1} + X^{(i)}_{k,h} (\Sigma^{\eta(i)}_h)^{-1} \eta^{(i)}_h, \quad h>k\geq 1, \qquad (4.8)$
    $\Sigma^{\tilde{x}(i)}_{k/h} = \Sigma^{\tilde{x}(i)}_{k/h-1} - X^{(i)}_{k,h} (\Sigma^{\eta(i)}_h)^{-1} X^{(i)T}_{k,h}, \quad h>k\geq 1, \qquad (4.9)$

    where:

    $X^{(i)}_{k,h} = \big( (B_k \,|\, 0) - \Sigma^{xz(i)}_{k,h-1} \big) (\overline{C}_h A_h \,|\, \mathcal{H}_h)^T (I_{mn_y} - \overline{\Lambda}_h)\, D^{(i)T}_y, \quad h>k\geq 1, \qquad (4.10)$

    and

    $\Sigma^{xz(i)}_{k,h} = \Sigma^{xz(i)}_{k,h-1} + X^{(i)}_{k,h} (\Sigma^{\eta(i)}_h)^{-1} Z^{(i)T}_h, \quad h>k\geq 1; \qquad \Sigma^{xz(i)}_{k,k} = (A_k \,|\, 0)\, \Sigma^{z(i)}_k, \quad k\geq 1. \qquad (4.11)$

    Proof. See Appendix B.

    By starting from the filter, equation (4.8) explains how the estimators of $x_k$ are updated when successive observations are available. Besides the innovations –which are given in Theorem 1–, the update requires the gain coefficients $X^{(i)}_{k,h}$, which are obtained from equation (4.10) and require, in turn, the cross-covariance matrices $\Sigma^{xz(i)}_{k,h}$, which are given by equation (4.11). Finally, equation (4.9) provides the smoothing error covariances in a recursive way.
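    Analogously, one fixed-point smoothing update of Theorem 2 can be sketched as follows, assuming the filtering quantities of Theorem 1 at the new time $h$ are available and that the recursion is initialized with $\hat{x}^{(i)}_{k/k}$, $\Sigma^{\tilde{x}(i)}_{k/k}$ and $\Sigma^{xz(i)}_{k,k}=(A_k\,|\,0)\Sigma^{z(i)}_k$; argument names are illustrative only.

```python
import numpy as np

# One fixed-point smoothing update (Theorem 2) for sensor i at the new time h > k.
def smoothing_step(x_smooth, Sx_smooth, Sxz, B_k, A_h, Cbar_h, Hcal_h,
                   Lambdabar_h, D_y_i, eta_h, S_eta_h, Z_h):
    I = np.eye(Lambdabar_h.shape[0])
    Phi_h = np.hstack((Cbar_h @ A_h, Hcal_h))                            # (Cbar_h A_h | Hcal_h)
    B0 = np.hstack((B_k, np.zeros((B_k.shape[0], Hcal_h.shape[1]))))     # (B_k | 0)
    X_kh = (B0 - Sxz) @ Phi_h.T @ (I - Lambdabar_h) @ D_y_i.T            # Eq. (4.10)
    x_new = x_smooth + X_kh @ np.linalg.solve(S_eta_h, eta_h)            # Eq. (4.8)
    Sx_new = Sx_smooth - X_kh @ np.linalg.solve(S_eta_h, X_kh.T)         # Eq. (4.9)
    Sxz_new = Sxz + X_kh @ np.linalg.solve(S_eta_h, Z_h.T)               # Eq. (4.11)
    return x_new, Sx_new, Sxz_new
```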

    Remark 5. By replacing $D^{(i)}_y$ with the matrix obtained by taking $d_{ji}=0$, $j\neq i$, the LS local linear filter and smoothers, based only on the measurements from the $i$-th sensor, can be obtained from the formulas in Theorems 1 and 2, respectively. Furthermore, the hypothetical global optimal linear estimators –based on measurements from all sensors– can be calculated by replacing $D^{(i)}_y$ with the identity matrix.

    The second step of the distributed fusion method is performed in order to derive the distributed filter and fixed-point smoother of the signal, $x_k$. As indicated above, for each $i\in\mathcal{N}$, the distributed estimators, $\hat{x}^{D(i)}_{k/h}$, $h\geq k$, are derived as the matrix-weighted linear combination of the neighboring intermediate estimators, $\hat{x}^{(j)}_{k/h}$, $j\in\mathcal{N}_i$, using the mean squared error as an optimality criterion.

    Specifically, the distributed estimators are expressed as $\hat{x}^{D(i)}_{k/h} = F^{(i)}_{k/h}\hat{X}^{(i)}_{k/h}$, $h\geq k$, for each $i\in\mathcal{N}$, where $\hat{X}^{(i)}_{k/h}$ is the vector obtained by assembling all the intermediate estimators of the sensors connected to the $i$-th one, and where $F^{(i)}_{k/h}$ is a matrix to be determined in order to minimize the mean squared error, $E\big[(x_k - F^{(i)}_{k/h}\hat{X}^{(i)}_{k/h})^T(x_k - F^{(i)}_{k/h}\hat{X}^{(i)}_{k/h})\big]$.

    As is well known, the solution to this problem is given by:

    $F^{(i)\,opt}_{k/h} = E[x_k\hat{X}^{(i)T}_{k/h}]\big( E[\hat{X}^{(i)}_{k/h}\hat{X}^{(i)T}_{k/h}] \big)^{-1}$

    and, consequently, both matrices, $E[x_k\hat{X}^{(i)T}_{k/h}]$ and $E[\hat{X}^{(i)}_{k/h}\hat{X}^{(i)T}_{k/h}]$, must be determined for each $i\in\mathcal{N}$.

    The entries of the above matrices are extracted from $E[x_k\hat{X}^T_{k/h}]$ and $E[\hat{X}_{k/h}\hat{X}^T_{k/h}]$, respectively, where $\hat{X}_{k/h} = (\hat{x}^{(1)T}_{k/h},\dots,\hat{x}^{(m)T}_{k/h})^T$ denotes the vector stacking all the intermediate estimators in the network. For this purpose, we write $\hat{X}^{(i)}_{k/h} = D^{(i)}_x\hat{X}_{k/h}$, where $D^{(i)}_x$ is the matrix obtained by removing the all-zero rows of $\mathrm{Diag}(d_{1i},\dots,d_{mi})\otimes I_{n_x}$. The optimal distributed filter and smoothers of the signal $x_k$ are then given by:

    $\hat{x}^{D(i)}_{k/h} = E[x_k\hat{X}^T_{k/h}]\, D^{(i)T}_x\big( D^{(i)}_x E[\hat{X}_{k/h}\hat{X}^T_{k/h}] D^{(i)T}_x \big)^{-1} D^{(i)}_x\hat{X}_{k/h}, \quad h\geq k\geq 1. \qquad (5.1)$

    Hence, both matrices, $E[x_k\hat{X}^T_{k/h}]$ and $E[\hat{X}_{k/h}\hat{X}^T_{k/h}]$, must be determined. Taking into account the OPL, each intermediate estimator $\hat{x}^{(r)}_{k/h}$ is uncorrelated with its estimation error; therefore, the entries of $E[x_k\hat{X}^T_{k/h}]$ can be rewritten as $E[x_k\hat{x}^{(r)T}_{k/h}] = E[\hat{x}^{(r)}_{k/h}\hat{x}^{(r)T}_{k/h}]$, $r\in\mathcal{N}$, and then, denoting the cross-covariance between intermediate estimators by $\Sigma^{\hat{x}(rs)}_{k/h} = E[\hat{x}^{(r)}_{k/h}\hat{x}^{(s)T}_{k/h}]$, $r,s\in\mathcal{N}$, the matrices involved in (5.1) are given by $E[\hat{X}_{k/h}\hat{X}^T_{k/h}] = (\Sigma^{\hat{x}(rs)}_{k/h})_{r,s\in\mathcal{N}}$ and $E[x_k\hat{X}^T_{k/h}] = \big( \Sigma^{\hat{x}(1)}_{k/h} \,|\, \cdots \,|\, \Sigma^{\hat{x}(m)}_{k/h} \big)$.
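    The fusion formula (5.1) then reduces to assembling the blocks that correspond to the neighborhood $\mathcal{N}_i$ and solving one linear system, as in the following hedged Python sketch; the data layout of the stacked estimators and of the cross-covariance blocks is an assumption made here purely for illustration.

```python
import numpy as np

# Sketch of the fusion step (5.1): X_hat stacks the m intermediate estimators
# (nx entries per node) and Sigma_hat_blocks[r][s] holds the cross-covariance
# between the intermediate estimators of nodes r+1 and s+1 (Lemmas 1-2).
def distributed_estimator(X_hat, Sigma_hat_blocks, D, i, nx):
    m = D.shape[0]
    neighbors = [r for r in range(m) if D[r, i - 1] == 1]         # N_i (0-based)
    # E[x_k X_hat^(i)T]: diagonal blocks Sigma^{x_hat(rr)} restricted to N_i
    cross = np.hstack([Sigma_hat_blocks[r][r] for r in neighbors])
    # Gram matrix of the neighboring intermediate estimators
    gram = np.block([[Sigma_hat_blocks[r][s] for s in neighbors] for r in neighbors])
    X_i = np.concatenate([X_hat[r * nx:(r + 1) * nx] for r in neighbors])
    return cross @ np.linalg.solve(gram, X_i)                     # Eq. (5.1)
```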

    The algorithms needed to obtain the cross-covariances between the intermediate filters and smoothers, $\Sigma^{\hat{x}(rs)}_{k/h}$, $h\geq k$, $r,s\in\mathcal{N}$, are stated in the following lemmas.

    Lemma 1 presents the expression of the cross-covariances between the filters, $\Sigma^{\hat{x}(rs)}_{k/k}$, $r,s\in\mathcal{N}$, which is directly obtained from (4.1). Thus, it depends on the cross-covariances $\Sigma^{z(rs)}_k = E[z^{(r)}_k z^{(s)T}_k]$; a recursive algorithm is then stated for $\Sigma^{z(rs)}_k$, which in turn depends on the innovation cross-covariance matrices, $\Sigma^{\eta(rs)}_k = E[\eta^{(r)}_k\eta^{(s)T}_k]$.

    Lemma 1. For $r,s\in\mathcal{N}$, the cross-covariance matrices between the intermediate filters, $\Sigma^{\hat{x}(rs)}_{k/k} = E[\hat{x}^{(r)}_{k/k}\hat{x}^{(s)T}_{k/k}]$, are computed by:

    $\Sigma^{\hat{x}(rs)}_{k/k} = (A_k \,|\, 0)\, \Sigma^{z(rs)}_k (A_k \,|\, 0)^T, \quad k\geq 1, \qquad (5.2)$

    where $\Sigma^{z(rs)}_k$ satisfies the following recursive relation:

    $\Sigma^{z(rs)}_k = \Sigma^{z(rs)}_{k-1} + Z^{(rs)}_{k-1,k} (\Sigma^{\eta(s)}_k)^{-1} Z^{(s)T}_k + Z^{(r)}_k (\Sigma^{\eta(r)}_k)^{-1} \big( Z^{(sr)T}_{k-1,k} + \Sigma^{\eta(rs)}_k (\Sigma^{\eta(s)}_k)^{-1} Z^{(s)T}_k \big), \quad k\geq 1; \quad \Sigma^{z(rs)}_0 = 0. \qquad (5.3)$

    The matrices $Z^{(rs)}_{k-1,k} = E[z^{(r)}_{k-1}\eta^{(s)T}_k]$ are given by:

    $Z^{(rs)}_{k-1,k} = \big( \Sigma^{z(r)}_{k-1} - \Sigma^{z(rs)}_{k-1} \big) (\overline{C}_k A_k \,|\, \mathcal{H}_k)^T (I_{mn_y} - \overline{\Lambda}_k)\, D^{(s)T}_y, \quad k\geq 1, \qquad (5.4)$

    and the innovation cross-covariance matrices, $\Sigma^{\eta(rs)}_k$, satisfy:

    $\Sigma^{\eta(rs)}_k = D^{(r)}_y \big( \Sigma^y_k - (I_{mn_y} - \overline{\Lambda}_k)(\overline{C}_k A_k \,|\, \mathcal{H}_k)\big( \Sigma^{z(r)}_{k-1} + \Sigma^{z(s)}_{k-1} - \Sigma^{z(rs)}_{k-1} \big)(\overline{C}_k A_k \,|\, \mathcal{H}_k)^T (I_{mn_y} - \overline{\Lambda}_k) \big) D^{(s)T}_y, \quad k\geq 1. \qquad (5.5)$

    Proof. See Appendix C.

    In the second lemma, for any $r,s\in\mathcal{N}$, the cross-covariances between the intermediate smoothers, $\Sigma^{\hat{x}(rs)}_{k/h}$, $h>k$, are obtained from those of the filters, $\Sigma^{\hat{x}(rs)}_{k/k}$, by a recursive algorithm.

    Lemma 2. For $r,s\in\mathcal{N}$, the cross-covariance matrices between the intermediate fixed-point smoothers, $\Sigma^{\hat{x}(rs)}_{k/h} = E[\hat{x}^{(r)}_{k/h}\hat{x}^{(s)T}_{k/h}]$, $h>k$, are recursively obtained from:

    $\Sigma^{\hat{x}(rs)}_{k/h} = \Sigma^{\hat{x}(rs)}_{k/h-1} + \hat{X}^{(rs)}_{k,h} (\Sigma^{\eta(s)}_h)^{-1} X^{(s)T}_{k,h} + X^{(r)}_{k,h} (\Sigma^{\eta(r)}_h)^{-1} \big( \hat{X}^{(sr)T}_{k,h} + \Sigma^{\eta(rs)}_h (\Sigma^{\eta(s)}_h)^{-1} X^{(s)T}_{k,h} \big), \quad h>k\geq 1, \qquad (5.6)$

    with initial condition $\Sigma^{\hat{x}(rs)}_{k/k}$, given in Lemma 1.

    The matrices $\hat{X}^{(rs)}_{k,h} = E[\hat{x}^{(r)}_{k/h-1}\eta^{(s)T}_h]$ are derived as:

    $\hat{X}^{(rs)}_{k,h} = \big( \Sigma^{xz(r)}_{k,h-1} - \Sigma^{\hat{x}z(rs)}_{k,h-1} \big) (\overline{C}_h A_h \,|\, \mathcal{H}_h)^T (I_{mn_y} - \overline{\Lambda}_h)\, D^{(s)T}_y, \quad h>k\geq 1, \qquad (5.7)$

    where $\Sigma^{xz(r)}_{k,h}$ is given in Theorem 2 and $\Sigma^{\hat{x}z(rs)}_{k,h}$ is recursively computed by:

    $\Sigma^{\hat{x}z(rs)}_{k,h} = \Sigma^{\hat{x}z(rs)}_{k,h-1} + \hat{X}^{(rs)}_{k,h} (\Sigma^{\eta(s)}_h)^{-1} Z^{(s)T}_h + X^{(r)}_{k,h} (\Sigma^{\eta(r)}_h)^{-1} \big( Z^{(sr)T}_{h-1,h} + \Sigma^{\eta(rs)}_h (\Sigma^{\eta(s)}_h)^{-1} Z^{(s)T}_h \big), \quad h>k\geq 1, \qquad (5.8)$

    with initial condition $\Sigma^{\hat{x}z(rs)}_{k,k} = (A_k \,|\, 0)\, \Sigma^{z(rs)}_k$, $k\geq 1$.

    Proof. See Appendix D.

    The above results for the distributed filters and fixed-point smoothers are presented in the following theorem, which also includes the formula to calculate their estimation error covariance matrices, $\Sigma^{D(i)}_{k/h} = E\big[(x_k - \hat{x}^{D(i)}_{k/h})(x_k - \hat{x}^{D(i)}_{k/h})^T\big]$.

    Theorem 3. For each $i\in\mathcal{N}$, the distributed filter and fixed-point smoothers, $\hat{x}^{D(i)}_{k/h}$, $h\geq k$, are given by:

    $\hat{x}^{D(i)}_{k/h} = \big( \Sigma^{\hat{x}(1)}_{k/h} \,|\, \cdots \,|\, \Sigma^{\hat{x}(m)}_{k/h} \big) D^{(i)T}_x \big( D^{(i)}_x (\Sigma^{\hat{x}(rs)}_{k/h})_{r,s\in\mathcal{N}} D^{(i)T}_x \big)^{-1} D^{(i)}_x \hat{X}_{k/h}, \quad h\geq k\geq 1, \qquad (5.9)$

    where $\hat{X}_{k/h}$ is the stacked vector of intermediate estimators and $\Sigma^{\hat{x}(rs)}_{k/h}$ are the cross-covariances between them, given in Lemma 1 and Lemma 2 for the filter and smoothers, respectively. The estimation error covariance matrices, $\Sigma^{D(i)}_{k/h}$, are given by:

    $\Sigma^{D(i)}_{k/h} = A_k B_k^T - \big( \Sigma^{\hat{x}(1)}_{k/h} \,|\, \cdots \,|\, \Sigma^{\hat{x}(m)}_{k/h} \big) D^{(i)T}_x \big( D^{(i)}_x (\Sigma^{\hat{x}(rs)}_{k/h})_{r,s\in\mathcal{N}} D^{(i)T}_x \big)^{-1} D^{(i)}_x \big( \Sigma^{\hat{x}(1)}_{k/h} \,|\, \cdots \,|\, \Sigma^{\hat{x}(m)}_{k/h} \big)^T, \quad h\geq k\geq 1. \qquad (5.10)$

    Proof. Expression (5.9) is merely a rewrite of (5.1). To derive (5.10), the OPL is used to express $\Sigma^{D(i)}_{k/h} = E[x_k x_k^T] - E[\hat{x}^{D(i)}_{k/h}\hat{x}^{D(i)T}_{k/h}]$, and the formula is straightforwardly obtained by using the signal covariance factorization, (A1), together with (5.9).

    The applicability and effectiveness of the proposed distributed estimators are illustrated by the sensor network shown in Figure 1, where the topology is represented by a digraph $\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{D})$, with the set of nodes $\mathcal{N}=\{1,2,3,4,5\}$. The elements of the adjacency matrix, $\mathcal{D}$, are $d_{ji}=1$ when sensor $i$ can obtain information from sensor $j$; otherwise, $d_{ji}=0$:

    $\mathcal{D} = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}.$
    Figure 1.  Topological structure of the sensor network with five nodes.

    Consider a two-dimensional stochastic signal, $x_k = (x_{1k}, x_{2k})^T$, whose first and second components, $x_{1k}$ and $x_{2k}$, denote the position and velocity of a target, respectively. Moreover, assume that the signal evolution is the same as in [30]; namely:

    $x_{k+1} = \Big( A + \sum_{n=1}^{2}\alpha_{nk}A_n \Big) x_k + \Big( B + \sum_{n=1}^{2}\beta_{nk}B_n \Big)\mu_k, \quad k\geq 0,$

    where

    $A = \begin{pmatrix} 0.95 & 0.01 \\ 0 & 0.95 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.01 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.02 \end{pmatrix}; \quad B = \begin{pmatrix} 0.8 \\ 0.6 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$

    The initial signal, $x_0$, is a two-dimensional standard Gaussian vector. The multiplicative noises $\{\alpha_{nk}\}_{k\geq 0}$, $\{\beta_{nk}\}_{k\geq 0}$, $n=1,2$, are white Gaussian scalar processes with variances $Var(\alpha_{nk})=0.16$ and $Var(\beta_{nk})=0.11$, $n=1,2$. The additive noise $\{\mu_k\}_{k\geq 0}$ is a zero-mean Gaussian scalar process with variance $0.5$. All of these noise sequences and the initial signal are assumed to be mutually independent. The signal covariance function can then be expressed in a separable form as $E[x_k x_l^T] = A_k B_l^T$, $l\leq k$, with $A_k = A^k$ and $B_l^T = A^{-l}\Sigma^x_l$, where $\Sigma^x_l = E[x_l x_l^T]$ is recursively obtained by:

    $\Sigma^x_l = A\Sigma^x_{l-1}A^T + 0.16\sum_{n=1}^{2}A_n\Sigma^x_{l-1}A_n^T + 0.5\Big( BB^T + 0.11\sum_{n=1}^{2}B_nB_n^T \Big), \quad l\geq 1; \quad \Sigma^x_0 = I_2.$
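    As a quick illustration of how the separable factors of this example can be generated in practice, the following Python sketch computes $\Sigma^x_l$ by the recursion above and returns $A_l = A^l$ and $B_l = (A^{-l}\Sigma^x_l)^T$; it is a minimal reconstruction of the stated formulas, not code from the paper.

```python
import numpy as np

A  = np.array([[0.95, 0.01], [0.0, 0.95]])
A1 = np.array([[0.1, 0.0], [0.0, 0.01]])
A2 = np.array([[0.2, 0.0], [0.0, 0.02]])
B  = np.array([[0.8], [0.6]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])

def covariance_factors(L):
    Sigma = np.eye(2)                     # Sigma_0^x = I_2
    A_fac, B_fac = [], []
    for l in range(1, L + 1):
        # Recursion for Sigma_l^x stated in the example
        Sigma = (A @ Sigma @ A.T + 0.16 * (A1 @ Sigma @ A1.T + A2 @ Sigma @ A2.T)
                 + 0.5 * (B @ B.T + 0.11 * (B1 @ B1.T + B2 @ B2.T)))
        Ak = np.linalg.matrix_power(A, l)          # A_l = A^l
        A_fac.append(Ak)
        B_fac.append(np.linalg.solve(Ak, Sigma).T)  # B_l = (A^{-l} Sigma_l^x)^T
    return A_fac, B_fac
```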

    Scalar measurements of the signal, simultaneously disturbed by fading effects and both additive and multiplicative noises, are provided by the five sensor nodes, according to model (2.1), where the processes involved have the following characteristics:

    ● The multiplicative perturbations are described by $C^{(i)}_k = \theta^{(i)}_k\big(\dot{C}^{(i)} + \rho^{(i)}_k\ddot{C}^{(i)}\big)$, $i\in\mathcal{N}$, where:

    - $\dot{C}^{(1)} = (0.8,\ 0.9)$, $\dot{C}^{(2)} = (0.9,\ 0.7)$, $\dot{C}^{(3)} = (0.6,\ 0.7)$, $\dot{C}^{(4)} = (0.7,\ 0.8)$, $\dot{C}^{(5)} = (0.9,\ 0.5)$, $\ddot{C}^{(i)} = (1,\ 0)$, $i=1,2$, and $\ddot{C}^{(i)} = (0,\ 1)$, $i=3,4,5$.

    - $\{\theta^{(i)}_k\}_{k\geq 1}$ and $\{\rho^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, are independent sequences of independent and identically distributed random variables; namely:

    * $\theta^{(1)}_k$ and $\theta^{(2)}_k$ are random variables uniformly distributed over $[0.3, 0.7]$ and $[0.2, 0.8]$, respectively, representing a continuous random fading effect on the measurements from sensors 1 and 2.

    * $\theta^{(3)}_k$ and $\theta^{(4)}_k$ are discrete random variables, representing a discrete random fading effect on the measurements from sensors 3 and 4, according to the following probability mass functions:

    $P(\theta^{(3)}_k = 0) = 0.1, \quad P(\theta^{(3)}_k = 0.5) = 0.5, \quad P(\theta^{(3)}_k = 1) = 0.4;$
    $P(\theta^{(4)}_k = 0.2) = 0.2, \quad P(\theta^{(4)}_k = 0.5) = 0.5, \quad P(\theta^{(4)}_k = 0.8) = 0.3.$

    * $\theta^{(5)}_k$ are Bernoulli random variables with $P(\theta^{(5)}_k = 1) = \overline{\theta}$, representing the randomly missing measurement phenomenon in sensor 5.

    * The multiplicative components $\rho^{(i)}_k$, $i\in\mathcal{N}$, are standard Gaussian variables.

    ● The time-correlated noises $\{v^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, are defined by (2.2), with $H^{(1)} = H^{(3)} = H^{(5)} = 0.8$ and $H^{(2)} = H^{(4)} = 0.7$. These noises are generated by the processes $\{\xi^{(i)}_k\}_{k\geq 0}$, defined by $\xi^{(i)}_k = a^{(i)}\xi_k$, $i\in\mathcal{N}$, where $\{\xi_k\}_{k\geq 0}$ is a standard Gaussian white process, $a^{(1)} = a^{(3)} = a^{(5)} = 0.5$, $a^{(2)} = a^{(4)} = 0.25$, and the initial conditions are $v^{(i)}_0 = v_0$, for $i\in\mathcal{N}$, where $v_0$ is a standard Gaussian variable.

    Under these assumptions, the covariance matrices of the stacked noise $v_k = (v^{(1)}_k, \dots, v^{(5)}_k)^T$ can always be factorized as $\Sigma^v_{k,l} = \mathcal{H}_k\mathcal{F}_l^T$, $l\leq k$, where $\mathcal{H}_k = H^k$ and $\mathcal{F}_l^T = H^{-l}\Sigma^v_l$, with $H = \mathrm{Diag}(0.8, 0.7, 0.8, 0.7, 0.8)$ and where $\Sigma^v_l$ is recursively obtained by $\Sigma^v_l = H\Sigma^v_{l-1}H^T + \Sigma^\xi_{l-1}$, $l\geq 1$, with $\Sigma^\xi_l = (a^{(i)}a^{(j)})_{i,j\in\mathcal{N}}$ and initial condition $\Sigma^v_0 = 1_{5\times 5}$.

    In accordance with the theoretical model, let us suppose that the measurements at each sensor are subject to deception attacks and that the attacked measurement outputs are given by (2.4), where:

    ● The noises of the false data injection attacks are defined as $w^{(i)}_k = b^{(i)}w_k$, for $i\in\mathcal{N}$, where $b^{(1)} = b^{(2)} = b^{(5)} = 0.5$, $b^{(3)} = b^{(4)} = 0.75$, and $\{w_k\}_{k\geq 1}$ is a standard Gaussian white process. Clearly, these attack noises are correlated and $\Sigma^{w(ij)}_k = b^{(i)}b^{(j)}$, $i,j\in\mathcal{N}$.

    ● The status of the attacks is described by mutually independent sequences of independent and identically distributed Bernoulli random variables, $\{\lambda^{(i)}_k\}_{k\geq 1}$, $i\in\mathcal{N}$, with known probabilities $P(\lambda^{(i)}_k = 1) = \overline{\lambda}$.

    To illustrate the effectiveness of the proposed distributed filtering and fixed-point smoothing algorithms and to quantify the estimation accuracy obtained, the estimation error variances of the first and second signal components (position and velocity) were calculated at every sensor node $i\in\mathcal{N}$. First, the local, intermediate, distributed and global estimators were compared, for fixed values of the probabilities $\overline{\theta}$ (probability that the signal is present in the measured outputs of sensor 5) and $\overline{\lambda}$ (probability of a successful attack). Different values of the probabilities $\overline{\theta}$ and $\overline{\lambda}$ were then considered to highlight the effects of the missing measurement and attack phenomena, respectively, on the performance of the proposed distributed estimators, analyzing how these probabilities influence the estimation error variances for both signal components.

    Considering the same value of 0.5 for the probabilities $\overline{\theta}$ and $\overline{\lambda}$, Figure 2 depicts –for the first signal component– the error variances of the local filters (obtained using only the measurements from the sensor itself) and those of the proposed intermediate filters, $\hat{x}^{(i)}_{k/k}$, and distributed filters, $\hat{x}^{D(i)}_{k/k}$, and smoothers, $\hat{x}^{D(i)}_{k/k+L}$ (with lag $L=1,2,3,4,5$), at every sensor node $i\in\mathcal{N}$. On the one hand, this figure shows that the error variances corresponding to the intermediate filters are significantly less than those of the local filters and that the distributed filters outperform the intermediate ones. In addition, it is apparent that the distributed smoothing error variances are less than the filtering ones and, also, that at each fixed point $k$, the fixed-point smoothers become more accurate as the number of available observations, $k+L$, increases. As expected, the improvement is smaller as $L$ increases; indeed, in this example, the improvement is practically imperceptible for $L\geq 5$. Similar results are obtained for the second signal component.

    Figure 2.  Error variance comparison of the local and intermediate filters, distributed filters and smoothers, for the first component of the signal vector.

    Close inspection of Figure 2 reveals little difference in the values of the distributed estimation error variances over the five nodes. For a better evaluation of these differences, Figure 3 shows, for the first and second signal components, the error variances of the intermediate filters together with those of the distributed filters and smoothers (for L=1,2) at the different nodes, as well as those of the global optimal linear filtering and smoothing estimators, based on the set of measurements obtained from the five nodes of the network. A desirable property for distributed estimators over sensor networks is that the discrepancies between different nodes should be as small as possible; indeed, as we can see in Figure 3, the proposed distributed estimators considerably reduce the disagreements among intermediate estimators from different sensors. Thus, the distance between the error variances of the distributed estimators at the different nodes is fairly small and close to the global optimal error variances. In consequence, not only do the proposed distributed estimators present only slight discrepancies among the sensors, but they also provide a very similar level of performance to that of the global optimal estimators. Moreover, the proximity between the error variances of the global optimal filtering and smoothing estimators and those of the corresponding proposed distributed estimators shows that the latter estimators perform well.

    Figure 3.  Error variance comparison of intermediate, distributed and global estimators for the first and second signal components.

    Assuming, as above, that the attack probability is $\overline{\lambda}=0.5$, we now evaluate the performance of the proposed distributed estimators with respect to the missing measurements phenomenon in sensor node 5. To do so, the distributed filtering and smoothing ($L=2$) error variances for the first signal component are plotted in Figure 4 for different values of the probability $\overline{\theta}$ (namely, $\overline{\theta} = 0.3, 0.5, 0.7$ and $0.9$). For these values, the distributed estimation error variances at sensor node 5 are plotted in the left-side panel of Figure 4. This figure shows that the performance of the distributed estimators is indeed influenced by these probabilities and, as expected, that the distributed estimation error variances decrease as the probability $\overline{\theta}$ increases. Hence, both the filtering and the smoothing distributed estimators achieve better estimation accuracy when $1-\overline{\theta}$, the probability of missing measurements, decreases, as this means that further information about the signal is available. Analogous results were obtained for sensor nodes 2, 3 and 4, since these nodes all utilize information from node 5; in fact, node 4 uses the measurements from node 5 to obtain its intermediate estimators, which are subsequently sent to nodes 2 and 3 to construct the distributed estimators at these nodes. In node 1, however, the distributed error variances do not change, since the estimators in this node do not use the measurements from node 5. Since the behavior of the distributed error variances is analogous in all the iterations, for a better visualization of the decreasing trend of the error variances as the probability $\overline{\theta}$ increases at all sensor nodes –except sensor 1, in which it remains constant–, the right-side panel of Figure 4 displays the distributed error variances at iteration $k=100$. As in the previous figures, Figure 4 also shows that the error variances corresponding to the smoothers are less than those of the filters. Similar results are obtained for the second signal component and therefore the same conclusions are drawn.

    Figure 4.  The left-side (resp. right-side) panel depicts the distributed estimation error variances for $\overline{\theta} = 0.3, 0.5, 0.7, 0.9$ in node 5 (resp. in all nodes at $k=100$).

    Next, our aim is to examine the association between the probability of a successful attack and the performance of the estimators. For this purpose, we compare the distributed filtering error variances for different values of this probability (namely, $\overline{\lambda} = 0.1$ to $0.9$). The distributed filtering error variances at sensor node 1, corresponding to the second signal component, are plotted in the left-side panel of Figure 5. Here, as expected, these error variances rise in line with $\overline{\lambda}$. Furthermore, this increase is more pronounced for higher values of $\overline{\lambda}$. Similar results are obtained in all nodes, as shown in the right-side panel of Figure 5, which displays the distributed filtering and smoothing error variances at $k=100$ versus $\overline{\lambda}$ in the five sensor nodes. The discrepancies between the different nodes are negligible and, as shown in all the other figures, the smoother with lag $L=2$ outperforms the one with $L=1$ which, in turn, outperforms the filter. Similar results are inferred for the first signal component, and therefore the same conclusions are drawn.

    Figure 5.  The left-side (resp. right-side) panel depicts the distributed filtering (resp. filtering and smoothing) error variances for $\overline{\lambda} = 0.1$ to $0.9$ in node 1 (resp. in all nodes at $k=100$).

    Our final aim in this section is to show the superior performance of the proposed distributed filter in the presence of infinite-step time-correlated additive noises. For this purpose, we conduct a comparative analysis between the distributed filter proposed in this paper and the one proposed in [27] for networked systems with fading measurements, multiplicative noises in both the signal and measurement equations and stochastic deception attacks, but without infinite-step time correlation of the measurement noises.

    At every sensor node and for each of the two components of the distributed filtering estimates, the comparison is made on the basis of the empirical values of the mean-squared error at each time instant, which are calculated from two thousand independent simulations by

    $MSE^{(i)}_{a,k} = \frac{1}{2000}\sum_{s=1}^{2000}\big( x^{(s)}_{a,k} - \hat{x}^{D(i,s)}_{a,k/k} \big)^2, \quad 1\leq k\leq 100,\ i\in\mathcal{N},\ a=1,2,$

    where, for each sampling time $k$ and for the $s$-th simulation run, $x^{(s)}_{a,k}$ denotes the $a$-th component of the simulated signal, and $\hat{x}^{D(i,s)}_{a,k/k}$ is the $a$-th component of the distributed filter calculated in the $i$-th sensor node.
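    A minimal Python sketch of this Monte Carlo evaluation is given below; the array layout (simulation runs × time × components for the signal, and runs × nodes × time × components for the filters) is an assumption made here for illustration only.

```python
import numpy as np

# Empirical MSE at node i for signal component a (0-based indices), assuming
# x_sim[s, k, a] holds the simulated signal and x_filt[s, i, k, a] the distributed
# filter of node i, over n_sim = 2000 independent simulation runs.
def empirical_mse(x_sim, x_filt, i, a):
    errors = x_sim[:, :, a] - x_filt[:, i, :, a]
    return np.mean(errors ** 2, axis=0)     # one value per time instant k
```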

    Assuming again the same fixed value 0.5 for the probabilities $\overline{\theta}$ and $\overline{\lambda}$, the results are displayed in Figure 6, which shows that, for all the sensor nodes and for both the first and second components, the empirical mean-squared error values of the proposed distributed filtering estimates are lower than those of the distributed filtering estimates in [27]. Note that the proposed filter was indeed expected to outperform the filter in [27], since the latter does not take into account the infinite-step time correlation of the sensor measurement noises.

    Figure 6.  Empirical mean-squared error comparison of the proposed distributed filter and the distributed filter in [27].

    In this paper, we investigate the distributed estimation problem –including filtering and fixed-point smoothing– in networked systems whose sensor nodes are spatially distributed according to a predetermined network topology, represented by a directed graph. Random parameter matrices and stochastic deception attacks are incorporated into the measurement model. Thus, a broad theoretical framework is provided with which to address general stochastic multi-sensor systems with different network-induced uncertainties. The presence of time-correlated additive noise in the observation model is handled by a non-augmentation method, based on the direct estimation of the noise. For every sensor node, the proposed distributed estimation algorithm runs in two phases. The first yields an intermediate least-squares linear estimator using its own local measurements and those received from its neighboring nodes. In the second phase, the own-sensor intermediate estimator is combined with those calculated by its neighboring nodes to obtain the desired distributed estimator as the minimum mean squared error matrix-weighted linear combination of the intermediate estimators. The proposed estimation strategy does not rely on explicit information provided by the signal evolution equation, but rather on the factorization of the signal and time-correlated noise covariance matrices in a separable form. As a result, whether or not the signal evolution model is completely known, the proposed distributed estimation technique can be used to estimate a wide class of stochastic signals, including those whose evolution is affected by multiplicative noises. The simulation experiment performed shows that the theoretical system model we present covers common random imperfections, such as the presence of multiplicative noise, missing observations and fading effects. The numerical results obtained were used to examine the influence of two degrading effects on estimation performance: a) the probability of missing measurements; b) the probability of successful attacks. Comparative analysis of the estimation error variances shows that the proposed distributed estimators outperform the intermediate ones and reduce the disagreements between different sensors by bringing each distributed estimator closer to the global optimal linear estimator, based on the full set of measurements of the entire network. Finally, in the presence of infinite-step colored measurement noises, the proposed estimators are shown to outperform the distributed estimators in the authors' previous work [27].

    A challenging topic for future studies is the derivation of self-tuning estimation algorithms for the case when the attack probabilities and/or the covariances and cross-covariances of the attack noises are unknown. It would also be interesting to consider the possibility that the attack noise is not stochastic but rather a constant or time-varying deterministic sequence, as well as the scenario of random packet dropouts in the transmissions among sensor nodes.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by the "Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación" of Spain and the European Regional Development Fund [grant number PID2021-124486NB-I00].

    The authors declare there is no conflict of interest.

    According to expression (3.5) for the LS linear estimators, the coefficients $X^{(i)}_{k,l}=E[x_k\eta^{(i)T}_l]$, $1\leq l\leq k$, with $\eta^{(i)}_l$ given from (3.6) and (3.7), must be calculated for the signal filter $\hat{x}^{(i)}_{k/k}$. For this purpose, we use expressions (3.1) and (3.2) for $y_l$ which, together with (A1) on the signal covariance and the independence properties, easily lead to:

    $$X^{(i)}_{k,l}=\Big(A_kB^T_l\bar{C}^T_l-E\big[x_k\hat{x}^{(i)T}_{l/l-1}\big]\bar{C}^T_l-E\big[x_k\hat{v}^{(i)T}_{l/l-1}\big]\Big)\big(I_{mn_y}-\bar{\Lambda}_l\big)D^{(i)T}_y,\quad 1\leq l\leq k.$$

    Now, by denoting $V^{(i)}_{k,l}=E[v_k\eta^{(i)T}_l]$ and using (3.5) for the one-stage predictors $\hat{x}^{(i)}_{l/l-1}$ and $\hat{v}^{(i)}_{l/l-1}$, we obtain:

    $$E\big[x_k\hat{x}^{(i)T}_{l/l-1}\big]=(1-\delta_{l,1})\sum_{j=1}^{l-1}X^{(i)}_{k,j}\big(\Sigma^{\eta(i)}_{j}\big)^{-1}X^{(i)T}_{l,j};\qquad E\big[x_k\hat{v}^{(i)T}_{l/l-1}\big]=(1-\delta_{l,1})\sum_{j=1}^{l-1}X^{(i)}_{k,j}\big(\Sigma^{\eta(i)}_{j}\big)^{-1}V^{(i)T}_{l,j},\quad l\geq 1,$$

    and $X^{(i)}_{k,l}$ is then written as:

    $$X^{(i)}_{k,l}=\Big(A_kB^T_l\bar{C}^T_l-(1-\delta_{l,1})\sum_{j=1}^{l-1}X^{(i)}_{k,j}\big(\Sigma^{\eta(i)}_{j}\big)^{-1}\big(\bar{C}_lX^{(i)}_{l,j}+V^{(i)}_{l,j}\big)^T\Big)\big(I_{mn_y}-\bar{\Lambda}_l\big)D^{(i)T}_y,\quad 1\leq l\leq k.$$

    In a similar way, using now the factorization (3.3) for the noise covariance, the coefficients $V^{(i)}_{k,l}$ are also written as:

    $$V^{(i)}_{k,l}=\Big(H_kF^T_l-(1-\delta_{l,1})\sum_{j=1}^{l-1}V^{(i)}_{k,j}\big(\Sigma^{\eta(i)}_{j}\big)^{-1}\big(\bar{C}_lX^{(i)}_{l,j}+V^{(i)}_{l,j}\big)^T\Big)\big(I_{mn_y}-\bar{\Lambda}_l\big)D^{(i)T}_y,\quad 1\leq l\leq k.$$

    These expressions guarantee the following factorizations:

    $$X^{(i)}_{k,l}=A_kZ^{x(i)}_l;\qquad V^{(i)}_{k,l}=H_kZ^{v(i)}_l,\quad 1\leq l\leq k,\qquad (7.1)$$

    where

    $$Z^{x(i)}_l=\Big(B^T_l\bar{C}^T_l-(1-\delta_{l,1})\sum_{j=1}^{l-1}Z^{x(i)}_j\big(\Sigma^{\eta(i)}_{j}\big)^{-1}\big(\bar{C}_lA_lZ^{x(i)}_j+H_lZ^{v(i)}_j\big)^T\Big)\big(I_{mn_y}-\bar{\Lambda}_l\big)D^{(i)T}_y,\quad l\geq 1,$$
    $$Z^{v(i)}_l=\Big(F^T_l-(1-\delta_{l,1})\sum_{j=1}^{l-1}Z^{v(i)}_j\big(\Sigma^{\eta(i)}_{j}\big)^{-1}\big(\bar{C}_lA_lZ^{x(i)}_j+H_lZ^{v(i)}_j\big)^T\Big)\big(I_{mn_y}-\bar{\Lambda}_l\big)D^{(i)T}_y,\quad l\geq 1,$$

    and, hence, the augmented matrices $Z^{(i)}_k=\big(Z^{x(i)T}_k\ |\ Z^{v(i)T}_k\big)^T$ satisfy:

    $$Z^{(i)}_k=\Big(\big(\bar{C}_kB_k\ |\ F_k\big)^T-(1-\delta_{k,1})\sum_{l=1}^{k-1}Z^{(i)}_l\big(\Sigma^{\eta(i)}_{l}\big)^{-1}Z^{(i)T}_l\big(\bar{C}_kA_k\ |\ H_k\big)^T\Big)\big(I_{mn_y}-\bar{\Lambda}_k\big)D^{(i)T}_y,\quad k\geq 1.\qquad (7.2)$$

    From these preliminaries, we can now derive the expressions of the theorem at hand. Expression (4.1) for the signal filter comes directly from (3.5), using the factorization (7.1) for $X^{(i)}_{k,l}$ and defining $z^{(i)}_k\equiv\sum_{l=1}^{k}Z^{(i)}_l\big(\Sigma^{\eta(i)}_l\big)^{-1}\eta^{(i)}_l$, which clearly satisfies (4.3). Now, using the OPL, which guarantees that the estimation error is uncorrelated with the observations, we express $\Sigma^{\tilde{x}(i)}_{k/k}=E[x_kx^T_k]-E[\hat{x}^{(i)}_{k/k}\hat{x}^{(i)T}_{k/k}]$, and (4.2) is easily obtained from (4.1) by denoting $\Sigma^{z(i)}_k=E[z^{(i)}_kz^{(i)T}_k]$. Recursive formula (4.4) for these matrices is derived from the fact that the innovation process is white, which leads to $\Sigma^{z(i)}_k=\sum_{l=1}^{k}Z^{(i)}_l\big(\Sigma^{\eta(i)}_l\big)^{-1}Z^{(i)T}_l$; now, (4.5) is merely expression (7.2) for $Z^{(i)}_k$ rewritten in terms of $\Sigma^{z(i)}_{k-1}$.
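
    For clarity, note that the definition of $z^{(i)}_k$ and the whiteness of the innovations immediately yield the following one-step recursions (a sketch of the forms taken by (4.3) and (4.4); the exact statements are those given in the theorem):

    $$z^{(i)}_k=z^{(i)}_{k-1}+Z^{(i)}_k\big(\Sigma^{\eta(i)}_k\big)^{-1}\eta^{(i)}_k,\qquad \Sigma^{z(i)}_k=\Sigma^{z(i)}_{k-1}+Z^{(i)}_k\big(\Sigma^{\eta(i)}_k\big)^{-1}Z^{(i)T}_k,\qquad k\geq 1,$$

    with initial conditions $z^{(i)}_0=0$ and $\Sigma^{z(i)}_0=0$, since both sums are empty at $k=0$.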

    Taking again into account the factorizations (7.1), the one-stage predictors of the signal and noise are given by $\hat{x}^{(i)}_{k/k-1}=(A_k\ |\ 0)z^{(i)}_{k-1}$ and $\hat{v}^{(i)}_{k/k-1}=(0\ |\ H_k)z^{(i)}_{k-1}$, respectively, and so (3.7) can be rewritten as:

    $$\hat{y}^{(i)}_{k/k-1}=\big(I_{mn_y}-\bar{\Lambda}_k\big)\big(\bar{C}_kA_k\ |\ H_k\big)z^{(i)}_{k-1}.\qquad (7.3)$$

    Expression (4.6) for the innovation is obtained immediately from (3.6), and (4.7) is deduced by again using the OPL to express the innovation covariance matrix as $\Sigma^{\eta(i)}_k=D^{(i)}_y\big(\Sigma^{y}_k-E[\hat{y}^{(i)}_{k/k-1}\hat{y}^{(i)T}_{k/k-1}]\big)D^{(i)T}_y$.
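
    Purely as an illustration of how (7.2), (7.3) and the innovation expressions fit together, the following minimal Python sketch runs one step of the intermediate filter at a single node. All matrices (A, B, C_bar, H, F, Lam_bar, D_y, Sigma_y) and the dimensions are hypothetical placeholders with compatible sizes, not the system of the numerical example, and the final filter formula is only a reading of the factorization (7.1); the exact statement is (4.1) in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 2, 3                                       # n: factor dimension, m: stacked measurement dimension
    A = rng.standard_normal((n, n)); B = rng.standard_normal((n, n))    # signal covariance factors (A_k B_l^T)
    H = rng.standard_normal((m, n)); F = rng.standard_normal((m, n))    # noise covariance factors (H_k F_l^T)
    C_bar = rng.standard_normal((m, n))               # mean observation matrix
    Lam_bar = np.diag(rng.uniform(0.0, 0.5, m))       # mean attack-success matrix
    D_y = np.eye(m)                                   # selection matrix of node i (all measurements, for simplicity)
    Sigma_y = np.eye(m)                               # measurement covariance at time k (placeholder)
    y_k = rng.standard_normal(m)                      # stacked measurements received at node i

    def filter_step(z_prev, Sigma_z_prev, delta_k1):
        """One recursion step at node i: returns (filter estimate, z_k, Sigma_z_k)."""
        I_Lam = np.eye(m) - Lam_bar
        CA_H = np.hstack((C_bar @ A, H))              # (C_bar A_k | H_k)
        CB_F = np.hstack((C_bar @ B, F))              # (C_bar B_k | F_k)
        # (7.2): Z_k = ((C_bar B_k | F_k)^T - (1-delta_{k,1}) Sigma_z_{k-1} (C_bar A_k | H_k)^T)(I - Lam_bar) D_y^T
        Z_k = (CB_F.T - (1 - delta_k1) * Sigma_z_prev @ CA_H.T) @ I_Lam @ D_y.T
        # (7.3): one-stage measurement predictor
        y_pred = I_Lam @ CA_H @ z_prev
        # (4.6)-(4.7): innovation and its covariance
        eta = D_y @ (y_k - y_pred)
        Sigma_eta = D_y @ (Sigma_y - I_Lam @ CA_H @ Sigma_z_prev @ CA_H.T @ I_Lam) @ D_y.T
        gain = Z_k @ np.linalg.inv(Sigma_eta)
        z_k = z_prev + gain @ eta                     # recursion for z_k
        Sigma_z_k = Sigma_z_prev + gain @ Z_k.T       # recursion for Sigma_z_k
        x_filter = np.hstack((A, np.zeros((n, n)))) @ z_k   # (A_k | 0) z_k, consistent with (7.1)
        return x_filter, z_k, Sigma_z_k

    x_f, z1, Sz1 = filter_step(np.zeros(2 * n), np.zeros((2 * n, 2 * n)), delta_k1=1)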

    Recursive formula (4.8) is an immediate consequence of the general expression (3.5). From this, the estimation error is expressed as $\tilde{x}^{(i)}_{k/h}=\tilde{x}^{(i)}_{k/h-1}-X^{(i)}_{k,h}\big(\Sigma^{\eta(i)}_h\big)^{-1}\eta^{(i)}_h$, and (4.9) is derived straightforwardly taking into account that, from the OPL, $E[\tilde{x}^{(i)}_{k/h-1}\eta^{(i)T}_h]=E[x_k\eta^{(i)T}_h]=X^{(i)}_{k,h}$.
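
    Equivalently, since $\tilde{x}^{(i)}_{k/h}=x_k-\hat{x}^{(i)}_{k/h}$, the above error relation can be read as the fixed-point smoother update (a sketch consistent with the error relation; the exact statement of (4.8) is that given in the theorem):

    $$\hat{x}^{(i)}_{k/h}=\hat{x}^{(i)}_{k/h-1}+X^{(i)}_{k,h}\big(\Sigma^{\eta(i)}_h\big)^{-1}\eta^{(i)}_h,\qquad h>k,$$

    started from the filter $\hat{x}^{(i)}_{k/k}$.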

    Since, for $h>k$, $E[x_ky^T_h]=B_kA^T_h\bar{C}^T_h\big(I_{mn_y}-\bar{\Lambda}_h\big)$, expression (4.10) for the gain coefficients, $X^{(i)}_{k,h}=E[x_k\eta^{(i)T}_h]$, is obtained using (4.6) for $\eta^{(i)}_h$ and denoting $\Sigma^{xz(i)}_{k,h}=E[x_kz^{(i)T}_h]$, $h\geq k$.

    Finally, recursive formula (4.11) is deduced from (4.3) for $z^{(i)}_h$; its initial condition is another consequence of the OPL, from which $E[x_kz^{(i)T}_k]=E[\hat{x}^{(i)}_{k/k}z^{(i)T}_k]$, and it is obtained simply by using (4.1) for $\hat{x}^{(i)}_{k/k}$.

    Clearly, expression (5.2) for the cross-covariance matrices is obtained from (4.1) for $\hat{x}^{(r)}_{k/k}$ and $\hat{x}^{(s)}_{k/k}$, where $\Sigma^{z(rs)}_k=E[z^{(r)}_kz^{(s)T}_k]$. Expression (5.3) for $\Sigma^{z(rs)}_k$ is derived by using (4.3) for $z^{(r)}_k$ and $z^{(s)}_k$, and denoting $Z^{(rs)}_{k-1,k}=E[z^{(r)}_{k-1}\eta^{(s)T}_k]$.

    From (4.6) for $\eta^{(s)}_k$, we have $Z^{(rs)}_{k-1,k}=\big(E[z^{(r)}_{k-1}y^T_k]-\Sigma^{z(rs)}_{k-1}\big(\bar{C}_kA_k\ |\ H_k\big)^T\big(I_{mn_y}-\bar{\Lambda}_k\big)\big)D^{(s)T}_y$. Now, using the OPL, we express $E[z^{(r)}_{k-1}y^T_k]=E[z^{(r)}_{k-1}\hat{y}^{(r)T}_{k/k-1}]$ and, from (7.3) for $\hat{y}^{(r)}_{k/k-1}$, the following identity holds:

    $$E[z^{(r)}_{k-1}y^T_k]=\Sigma^{z(r)}_{k-1}\big(\bar{C}_kA_k\ |\ H_k\big)^T\big(I_{mn_y}-\bar{\Lambda}_k\big),\quad k\geq 1,\qquad (7.4)$$

    and (5.4) is obtained.

    To end the proof, we again use (4.6) for $\eta^{(r)}_k$, together with the definition $Z^{(rs)}_{k-1,k}=E[z^{(r)}_{k-1}\eta^{(s)T}_k]$, to express the innovation cross-covariance as $\Sigma^{\eta(rs)}_k=D^{(r)}_y\big(E[y_k\eta^{(s)T}_k]-\big(I_{mn_y}-\bar{\Lambda}_k\big)\big(\bar{C}_kA_k\ |\ H_k\big)Z^{(rs)}_{k-1,k}\big)$. Then, from (4.6) for $\eta^{(s)}_k$, and taking into account (7.4) for $E[y_kz^{(s)T}_{k-1}]$, it is clear that:

    $$E[y_k\eta^{(s)T}_k]=\Big(\Sigma^{y}_k-\big(I_{mn_y}-\bar{\Lambda}_k\big)\big(\bar{C}_kA_k\ |\ H_k\big)\Sigma^{z(s)}_{k-1}\big(\bar{C}_kA_k\ |\ H_k\big)^T\big(I_{mn_y}-\bar{\Lambda}_k\big)\Big)D^{(s)T}_y,\quad k\geq 1,$$

    and expression (5.5) is concluded simply by using (5.3) for $Z^{(rs)}_{k-1,k}$.

    Expression (5.6) for $\Sigma^{\hat{x}(rs)}_{k/h}$, $h>k$, is directly obtained from (4.8), by denoting $\hat{X}^{(rs)}_{k,h}=E[\hat{x}^{(r)}_{k/h-1}\eta^{(s)T}_h]$. In order to derive (5.7), we use (4.6) for $\eta^{(s)}_h$, and denote $\Sigma^{\hat{x}z(rs)}_{k,h-1}=E[\hat{x}^{(r)}_{k/h-1}z^{(s)T}_{h-1}]$ to obtain $\hat{X}^{(rs)}_{k,h}=\big(E[\hat{x}^{(r)}_{k/h-1}y^T_h]-\Sigma^{\hat{x}z(rs)}_{k,h-1}\big(\bar{C}_hA_h\ |\ H_h\big)^T\big(I_{mn_y}-\bar{\Lambda}_h\big)\big)D^{(s)T}_y$. Then, taking into account that, from the OPL, $E[\hat{x}^{(r)}_{k/h-1}y^T_h]=E[x_k\hat{y}^{(r)T}_{h/h-1}]$, we need only use (7.3) for $\hat{y}^{(r)}_{h/h-1}$.

    Finally, expression (5.8) for $\Sigma^{\hat{x}z(rs)}_{k,h}$, $h>k$, is obtained straightforwardly by using (4.8) for $\hat{x}^{(r)}_{k/h}$ and (4.3) for $z^{(s)}_h$. Its initial condition is derived directly from (4.1) and $\Sigma^{z(rs)}_k=E[z^{(r)}_kz^{(s)T}_k]$.



    [1] U. Singh, A. Abraham, A. Kaklauskas, T. Hong, Smart Sensor Networks. Analytics, Sharing and Control, Springer, Switzerland, 2022. https://doi.org/10.1007/978-3-030-77214-7
    [2] Z. Zhou, H. Xu, H. Feng, W. Li, A Non-Equal Time Interval Incremental Motion Prediction Method for Maritime Autonomous Surface Ships, Sensors, 23 (2023), 2852. https://doi.org/10.3390/s23052852 doi: 10.3390/s23052852
    [3] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Distributed fusion filters from uncertain measured outputs in sensor networks with random packet losses, Inform. Fusion, 34 (2017), 70–79. https://doi.org/10.1016/j.inffus.2016.06.008 doi: 10.1016/j.inffus.2016.06.008
    [4] J. Liu, Y. Gu, J. Cao, S. Fei, Distributed event-triggered H∞ filtering over sensor networks with sensor saturations and cyber-attacks, ISA Trans., 81 (2018), 63–75. https://doi.org/10.1016/j.isatra.2018.07.018
    [5] X. Bu, H. Dong, F. Han, N. Hou, G. Li, Distributed filtering for time-varying systems over sensor networks with randomly switching topologies under the round-robin protocol, Neurocomputing, 346 (2019), 58–64. https://doi.org/10.1016/j.neucom.2018.07.087 doi: 10.1016/j.neucom.2018.07.087
    [6] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Z. Wang, A new approach to distributed fusion filtering for networked systems with random parameter matrices and correlated noises, Inform. Fusion, 45 (2019), 324–332. https://doi.org/10.1016/j.inffus.2018.02.006 doi: 10.1016/j.inffus.2018.02.006
    [7] J. Hu, Z. Wang, G.-P. Liu, H. Zhang, R. Navaratne, A prediction-based approach to distributed filtering with missing measurements and communication delays through sensor networks, IEEE Trans. Syst. Man Cybern. -Syst., 51 (2021), 7063–7074. https://doi.org/10.1109/TSMC.2020.2966977 doi: 10.1109/TSMC.2020.2966977
    [8] J. Li, J. Hu, J. Cheng, Y. Wei, H. Yu, Distributed filtering for time-varying state-saturated systems with packet disorders: An event-triggered case, Appl. Math. Comput., 434 (2022), 127411. https://doi.org/10.1016/j.amc.2022.127411 doi: 10.1016/j.amc.2022.127411
    [9] M. Niu, G. Wen, Y. Lv, G. Chen, Innovation-based stealthy attack against distributed state estimation over sensor networks, Automatica, 152 (2023), 110962. https://doi.org/10.1016/j.automatica.2023.110962 doi: 10.1016/j.automatica.2023.110962
    [10] G. Yang, H. Rezaee, A. Alessandri, T. Parisini, State estimation using a network of distributed observers with switching communication topology, Automatica, 147 (2023), 110690. https://doi.org/10.1016/j.automatica.2022.110690 doi: 10.1016/j.automatica.2022.110690
    [11] J. Hu, Z. Wang, D. Chen, F. E. Alsaadi, Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects, Inform. Fusion, 31 (2016), 65–75. https://doi.org/10.1016/j.inffus.2016.01.001 doi: 10.1016/j.inffus.2016.01.001
    [12] S. Sun, H. Lin, J. Ma, X. Li, Multi-sensor distributed fusion estimation with applications in networked systems: A review paper, Inform. Fusion, 38 (2017), 122–134. https://doi.org/10.1016/j.inffus.2017.03.006 doi: 10.1016/j.inffus.2017.03.006
    [13] H. Geng, Z. Wang, Y. Cheng, F. Alsaadi, A. M. Dobaie, State estimation under non-Gaussian Lévy and time-correlated additive sensor noises: A modified Tobit Kalman filtering approach, Signal Process., 154 (2019), 120–128. https://doi.org/10.1016/j.sigpro.2018.08.005 doi: 10.1016/j.sigpro.2018.08.005
    [14] W. Liu, P. Shi, Convergence of optimal linear estimator with multiplicative and time-correlated additive measurement noises, IEEE Trans. Autom. Control, 64 (2019), 2190–2197. https://doi.org/10.1109/TAC.2018.2869467 doi: 10.1109/TAC.2018.2869467
    [15] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Networked fusion estimation with multiple uncertainties and time-correlated channel noise, Inform. Fusion, 54 (2020), 161–171. https://doi.org/10.1016/j.inffus.2019.07.008 doi: 10.1016/j.inffus.2019.07.008
    [16] J. Ma, S. Sun, Optimal linear recursive estimators for stochastic uncertain systems with time-correlated additive noises and packet dropout compensations, Signal Process., 176 (2020), 107704. https://doi.org/10.1016/j.sigpro.2020.107704 doi: 10.1016/j.sigpro.2020.107704
    [17] R. Caballero-Águila, J. Hu, J. Linares-Pérez, Two Compensation Strategies for Optimal Estimation in Sensor Networks with Random Matrices, Time-Correlated Noises, Deception Attacks and Packet Losses, Sensors, 22 (2022), 8505. https://doi.org/10.3390/s22218505 doi: 10.3390/s22218505
    [18] Q. Liu, Z. Wang, X. He, Stochastic Control and Filtering over Constrained Communication Networks, Springer, Switzerland, 2019. https://doi.org/10.1007/978-3-030-00157-5
    [19] F. Han, H. Dong, Z. Wang, G. Li, F. E. Alsaadi, Improved Tobit Kalman filtering for systems with random parameters via conditional expectation, Signal Process., 147 (2018), 35–45. http://dx.doi.org/10.1016/j.sigpro.2018.01.015 doi: 10.1016/j.sigpro.2018.01.015
    [20] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, Centralized filtering and smoothing algorithms from outputs with random parameter matrices transmitted through uncertain communication channels, Digit. Signal Process., 85 (2019), 77–85. https://doi.org/10.1016/j.dsp.2018.11.010 doi: 10.1016/j.dsp.2018.11.010
    [21] W. Liu, X. Xie, W. Qian, X. Xu, Y. Shi, Optimal linear filtering for networked control systems with random matrices, correlated noises, and packet dropouts, IEEE Access, 8 (2020), 59987–59997. http://dx.doi.org/10.1109/ACCESS.2020.2983122 doi: 10.1109/ACCESS.2020.2983122
    [22] S. Sun, Distributed optimal linear fusion predictors and filters for systems with random parameter matrices and correlated noises, IEEE Trans. Signal Process., 68 (2020), 1064–1074. https://doi.org/10.1109/TSP.2020.2967180 doi: 10.1109/TSP.2020.2967180
    [23] R. Caballero-Águila, J. Linares-Pérez, Distributed fusion filtering for uncertain systems with coupled noises, random delays and packet loss prediction compensation, Int. J. Syst. Sci., 54 (2023), 371–390. https://doi.org/10.1080/00207721.2022.2122905 doi: 10.1080/00207721.2022.2122905
    [24] M. S. Mahmoud, M. M. Hamdan, U. A. Baroudi, Modeling and control of Cyber-Physical Systems subject to cyber attacks: A survey of recent advances and challenges, Neurocomputing, 338 (2019), 101–115. https://doi.org/10.1016/j.neucom.2019.01.099 doi: 10.1016/j.neucom.2019.01.099
    [25] Z. Wang, D. Wang, B. Shen, F. E. Alsaadi, Centralized security-guaranteed filtering in multirate-sensor fusion under deception attacks, J. Frankl. Inst., 355 (2018), 406–420. https://doi.org/10.1016/j.jfranklin.2017.11.010 doi: 10.1016/j.jfranklin.2017.11.010
    [26] F. Han, H. Dong, Z. Wang, G. Li, Local design of distributed H∞-consensus filtering over sensor networks under multiplicative noises and deception attacks, Int. J. Robust Nonlinear Control, 29 (2019), 2296–2314. https://doi.org/10.1002/rnc.4493 doi: 10.1002/rnc.4493
    [27] R. Caballero-Águila, A. Hermoso-Carazo, J. Linares-Pérez, A two-phase distributed filtering algorithm for networked uncertain systems with fading measurements under deception attacks, Sensors, 20 (2020), 6445. https://doi.org/10.3390/s20226445 doi: 10.3390/s20226445
    [28] S. Xiao, Q. Han, X. Ge, Y. Zhang, Secure distributed finite-time filtering for positive systems over sensor networks under deception attacks, IEEE Trans. Cybern., 50 (2020), 1200–1228. https://doi.org/10.1109/tcyb.2019.2900478 doi: 10.1109/tcyb.2019.2900478
    [29] L. Ma, Z. Wang, Y. Chen, X. Yi, Probability-guaranteed distributed secure estimation for nonlinear systems over sensor networks under deception attacks on innovations, IEEE Trans. Signal Inf. Proc. Netw., 7 (2021), 465–477. https://doi.org/10.1109/TSIPN.2021.3097217 doi: 10.1109/TSIPN.2021.3097217
    [30] Y. Ma, S. Sun, Distributed Optimal and Self-Tuning Filters Based on Compressed Data for Networked Stochastic Uncertain Systems with Deception Attacks, Sensors, 23 (2023), 335. https://doi.org/10.3390/s23010335 doi: 10.3390/s23010335
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).