In this paper we study systems of linear hyperbolic equations on a bounded interval, say, (0,1), sometimes referred to as port-Hamiltonians, see for example [14], in which a key role is played by the boundary conditions coupling the incoming and outgoing Riemann invariants determined by the system at the endpoints x = 0 and x = 1. In particular, we consider the set of equations

    \begin{equation} \partial_t\left(\begin{array}{c} \boldsymbol{\upsilon}\\ \boldsymbol{\varpi}\end{array}\right) = \left(\begin{array}{cc} -\mathcal{C}^+ & 0\\ 0 & \mathcal{C}^-\end{array}\right)\partial_x\left(\begin{array}{c} \boldsymbol{\upsilon}\\ \boldsymbol{\varpi}\end{array}\right), \quad 0<x<1, \ t>0, \end{equation} (1a)
    \begin{equation} \boldsymbol{\Xi}\left( \boldsymbol{\upsilon}(0,t), \boldsymbol{\varpi}(1,t), \boldsymbol{\upsilon}(1,t), \boldsymbol{\varpi}(0,t)\right)^T = 0, \quad t>0, \end{equation} (1b)
    \begin{equation} \boldsymbol{\upsilon}(x,0) = \mathring{\boldsymbol{\upsilon}}(x), \quad \boldsymbol{\varpi}(x,0) = \mathring{\boldsymbol{\varpi}}(x), \quad 0<x<1, \end{equation} (1c)

    where \boldsymbol{\upsilon} and \boldsymbol{\varpi} are the Riemann invariants flowing from 0 to 1 and from 1 to 0, respectively, \mathcal{C}^+ and \mathcal{C}^- are m^+\times m^+ and m^-\times m^- diagonal matrices with positive entries and, with 2m = m^+ + m^-, \boldsymbol{\Xi} is a 2m\times 4m matrix relating the outgoing \boldsymbol{\upsilon}(0), \boldsymbol{\varpi}(1) and incoming \boldsymbol{\upsilon}(1), \boldsymbol{\varpi}(0) boundary values, so that (1b) can be written as

    \begin{equation} \boldsymbol{\Xi}_{out}\left( \boldsymbol{\upsilon}(0,t), \boldsymbol{\varpi}(1,t)\right)^T + \boldsymbol{\Xi}_{in}\left( \boldsymbol{\upsilon}(1,t), \boldsymbol{\varpi}(0,t)\right)^T = 0, \quad t>0. \end{equation} (2)

    An important class of such problems arises from dynamical systems on metric graphs. Let \Gamma be a graph with r vertices \{ \mathbf{v}_j\}_{1\leq j\leq r} = : \Upsilon and m edges \{ \boldsymbol{e}_j\}_{1\leq j\leq m} (identified with (0,1) through a suitable parametrization). The dynamics on each edge \boldsymbol{e}_j is described by

    \begin{equation} \partial_t \boldsymbol{p}^j + \mathcal{M}^j \partial_x \boldsymbol{p}^j = 0, \quad 0<x<1, \ t>0, \ 1\leq j\leq m, \end{equation} (3)

    where \boldsymbol{p}^j = (p^j_1, p^j_2)^T and \mathcal{M}^j = (M^j_{lk})_{1\leq k,l\leq 2} are defined on [0,1] and \mathcal{M}^j(x) is a strictly hyperbolic real matrix for each x\in [0,1] and 1\leq j\leq m. System (3) is complemented with initial conditions and suitable transmission conditions coupling the values of \boldsymbol{p}^j at the vertices to which the edges \boldsymbol{e}_j are incident. Then, (1) can be obtained from (3) by diagonalization so that (suitably re-indexed) \boldsymbol{\upsilon} and \boldsymbol{\varpi} are the Riemann invariants (see [7,Section 1.1]) of \boldsymbol{p} = (\boldsymbol{p}^j)_{1\leq j\leq m}.

    Such problems have been a subject of intensive research, both from the point of view of dynamics on graphs, [1,10,5,4,11,17,19], and of 1-D hyperbolic systems, [7,21,15,14]. However, there is hardly any overlap: there seems to be little interest in the network interpretation of the results in the latter field, while in the former the conditions on the Riemann invariants seem to be "difficult to adapt to the case of a network", [11,Section 3].

    The main aim of this paper, as well as of the preceding one, [2], is to bring these two approaches together. In [2] we provided explicit formulae allowing for a systematic conversion of Kirchhoff's type network transmission conditions to (1b) in such a way that the resulting system (1) is well-posed. We also gave a proof of the well-posedness on any L^p, 1\leq p<\infty, which, in contrast to [21], is based purely on an operator semigroup approach. For notational clarity, we focused on 2\times 2 hyperbolic systems on each edge, but the method works equally well for systems of arbitrary (finite) dimension. In this paper we are concerned with the reverse question, that is, to determine under what assumptions on \boldsymbol{\Xi} the problem (1) describes a network dynamics given by 2\times 2 hyperbolic systems on each edge, coupled by Kirchhoff's transmission conditions at incident vertices.

    To briefly describe the content of the paper, we observe that if the matrix \boldsymbol{\Xi} = \{\xi_{ij}\}_{1\leq i\leq 2m, 1\leq j\leq 4m} in (1b) describes transmission conditions at the vertices of a graph, say \Gamma, on whose edges we have 2\times 2 systems of hyperbolic equations, then we should be able to group the indices j into pairs \{j', j''\} corresponding to the edges of \Gamma on which we have 2\times 2 systems for the components j' and j'' of (\boldsymbol{\upsilon}, \boldsymbol{\varpi}). Thus, in a sense, the columns of \boldsymbol{\Xi} determine the edges of \Gamma. It follows that it is easier to split the reconstruction of \Gamma into two steps and first build a digraph \boldsymbol{\Gamma}, where each column index j of \boldsymbol{\Xi} is associated with an arc, say \boldsymbol{\varepsilon}^j, on which we have a first order system for either \upsilon_j or \varpi_j. Thus, the main problem is to construct the vertices of \boldsymbol{\Gamma} (and \Gamma), which should be somehow determined by a partition of the row indices of \boldsymbol{\Xi}. To do this, we observe that the coefficients of \boldsymbol{\Xi} represent a map of connections of the edges in the sense that, roughly speaking, if \xi_{ij}\neq 0 and \xi_{ik}\neq 0, then the arcs \boldsymbol{\varepsilon}^j and \boldsymbol{\varepsilon}^k are incident to the same vertex and, if they are incoming to it, then they cannot be incoming to any other vertex. A difficulty here is that while for the flow to occur from, say, \boldsymbol{\varepsilon}^j to \boldsymbol{\varepsilon}^k these arcs must be incident to the same vertex, the converse may not hold, that is, for \boldsymbol{\varepsilon}^j incoming to and \boldsymbol{\varepsilon}^k outgoing from the same \mathbf{v}, the flow from \boldsymbol{\varepsilon}^j may not enter \boldsymbol{\varepsilon}^k but go to other outgoing arcs. To avoid such a case, in this paper we formulate conditions ensuring that the flow connectivity at each vertex is the same as the graph connectivity. This assumption yields a relatively simple criterion for the reconstruction of \Gamma, namely that

    \widehat{\left(\widehat{\boldsymbol{\Xi}_{out}}\right)^T \widehat{\boldsymbol{\Xi}_{in}}}

    is the adjacency matrix of a line graph (where, for a matrix A, \widehat{A} is obtained by replacing the non-zero entries of A by 1). This, together with some technical assumptions, allows us to apply the theory of [13], see also [6,Theorem 4.5.1], to construct first \boldsymbol{\Gamma} and then \Gamma in such a way that (2) can be localized at each vertex of \Gamma in a way which is consistent with (1a).
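As a small computational illustration, this criterion can be evaluated mechanically: given the two blocks of \boldsymbol{\Xi}, one forms the hatted product and inspects the result. The sketch below (the function names and the small example matrices are ours, for illustration only) computes the candidate matrix:

```python
import numpy as np

def hat(A):
    """Replace every non-zero entry of A by 1 (the hat operation of the text)."""
    return (A != 0).astype(int)

def line_graph_candidate(Xi_out, Xi_in):
    """Candidate adjacency matrix hat((hat Xi_out)^T hat Xi_in)."""
    return hat(hat(Xi_out).T @ hat(Xi_in))

# Hypothetical 2x2 blocks, for illustration only.
Xi_out = np.array([[1, 0],
                   [0, 2]])
Xi_in = np.array([[0, 3],
                  [4, 0]])
A = line_graph_candidate(Xi_out, Xi_in)
```

Whether the resulting matrix is in fact the adjacency matrix of a line graph must then be verified separately, e.g. by the theory of [13].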

    The main idea of this paper is similar to that of [3]. However, [3] dealt with first order problems with (2) solved with respect to the outgoing data. Here we do not make this assumption and, while (1) is technically one-dimensional, having reconstructed \boldsymbol{\Gamma} we still have to glue together its pairs of arcs to obtain the edges of \Gamma in such a way that the corresponding pairs of solutions of (1a) are Riemann invariants of 2\times 2 systems on \Gamma. Another difficulty in the current setting is the potential presence of sources and sinks in \Gamma. Their structure is not reflected in the line graph, [3], and reconstructing them in a way consistent with a system of 2\times 2 equations on \Gamma is technically involved.

    The paper is organized as follows. In Section 2 we briefly recall the notation and relevant results from [2]. Section 3 contains the main result of the paper. In the Appendix we recall basic results on line graphs in the interpretation suitable for the considerations of the paper.

    We consider a network represented by a finite, connected and simple (without loops and multiple edges) metric graph \Gamma with r vertices \{ \mathbf{v}_j\}_{1\leq j\leq r} = : \Upsilon and m edges \{ \boldsymbol{e}_j\}_{1\leq j\leq m}. We denote by E_{ \mathbf{v}} the set of edges incident to \mathbf{v}, let J_{ \mathbf{v}} = \{j; \ \boldsymbol{e}_j\in E_{ \mathbf{v}}\} and let |E_{ \mathbf{v}}| = |J_{ \mathbf{v}}| be the valency of \mathbf{v}. We identify the edges with unit intervals through sufficiently smooth invertible functions l_j: \boldsymbol{e}_j\to [0,1]. In particular, we call \mathbf{v} the tail of \boldsymbol{e}_j if l_j( \mathbf{v}) = 0 and the head if l_j( \mathbf{v}) = 1. On each edge \boldsymbol{e}_j we consider the system (3). Let \lambda^j_- < \lambda^j_+ be the eigenvalues of \mathcal{M}^j, 1\leq j\leq m (the strict inequality is justified by the strict hyperbolicity of \mathcal{M}^j). The eigenvalues can be of the same sign as well as of different signs; in the latter case we have \lambda^j_- < 0 < \lambda^j_+. By \boldsymbol{f}^j_{\pm} = (f^j_{\pm,1}, f^j_{\pm,2})^T we denote the eigenvectors corresponding to \lambda^j_{\pm}, respectively, and by

    \mathcal{F}^j = \left(\begin{array}{cc} f^j_{+,1} & f^j_{-,1}\\ f^j_{+,2} & f^j_{-,2}\end{array}\right),

    the diagonalizing matrix on each edge. The Riemann invariants \boldsymbol{u}^j = (u^j_1, u^j_2)^T, 1\leq j\leq m, are defined by

    \begin{equation} \boldsymbol{u}^j = (\mathcal{F}^j)^{-1} \boldsymbol{p}^j \quad\text{and}\quad \boldsymbol{p}^j = \left(\begin{array}{c} f^j_{+,1}u^j_1 + f^j_{-,1}u^j_2\\ f^j_{+,2}u^j_1 + f^j_{-,2}u^j_2\end{array}\right). \end{equation} (4)

    Then, we diagonalize (3) and, discarding lower order terms, we consider

    \begin{equation} \partial_t \boldsymbol{u}^j = \mathcal{L}^j \partial_x \boldsymbol{u}^j = \left(\begin{array}{cc} -\lambda^j_+ & 0\\ 0 & -\lambda^j_-\end{array}\right)\partial_x \boldsymbol{u}^j, \end{equation} (5)

    for each 1\leq j\leq m.
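As a minimal numerical sketch of this diagonalization step, consider a single hypothetical constant strictly hyperbolic matrix \mathcal{M}^j (all concrete numbers below are our own choices, not from the paper):

```python
import numpy as np

# Hypothetical strictly hyperbolic matrix M^j with eigenvalues of mixed sign
# (lambda_+ = 2, lambda_- = -2, so alpha_j = 1 in the notation of the text).
M = np.array([[0.0, 4.0],
              [1.0, 0.0]])

lam, F = np.linalg.eig(M)        # eigenvalues lambda^j_± and eigenvectors f^j_±
order = np.argsort(-lam)         # put lambda_+ first, matching the matrix F^j
lam, F = lam[order], F[:, order]

# F^j diagonalizes M^j; the Riemann invariants are u^j = (F^j)^{-1} p^j, cf. (4).
D = np.linalg.inv(F) @ M @ F
assert np.allclose(D, np.diag(lam))

# A sample state p^j and its Riemann invariants.
p = np.array([1.0, 2.0])
u = np.linalg.inv(F) @ p
assert np.allclose(F @ u, p)
```

Discarding the lower order terms then leaves the decoupled transport equations (5) for the components of u.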

    Remark 1. We refer the interested reader to [7,Section 1.1] for a detailed construction of the Riemann invariants for a general 1-D hyperbolic system and an explanation of the name.

    The most general linear local boundary conditions at vΥ are given by

    \begin{equation} \boldsymbol{\Phi}_{ \mathbf{v}} \boldsymbol{p}( \mathbf{v}) = 0, \end{equation} (6)

    where \boldsymbol{p} = ((p^j_1, p^j_2)_{1\leq j\leq m})^T and the real matrix \boldsymbol{\Phi}_{ \mathbf{v}} is given by

    \begin{equation} \boldsymbol{\Phi}_{ \mathbf{v}} := \left(\begin{array}{ccccc} \phi^{j_1}_{ \mathbf{v},1} & \varphi^{j_1}_{ \mathbf{v},1} & \ldots & \phi^{j_{|J_{ \mathbf{v}}|}}_{ \mathbf{v},1} & \varphi^{j_{|J_{ \mathbf{v}}|}}_{ \mathbf{v},1}\\ \vdots & \vdots & & \vdots & \vdots\\ \phi^{j_1}_{ \mathbf{v},k_{ \mathbf{v}}} & \varphi^{j_1}_{ \mathbf{v},k_{ \mathbf{v}}} & \ldots & \phi^{j_{|J_{ \mathbf{v}}|}}_{ \mathbf{v},k_{ \mathbf{v}}} & \varphi^{j_{|J_{ \mathbf{v}}|}}_{ \mathbf{v},k_{ \mathbf{v}}}\end{array}\right), \end{equation} (7)

    where J_{ \mathbf{v}} = \{j_1, \ldots, j_{|J_{ \mathbf{v}}|}\} and k_{ \mathbf{v}} is a parameter determined by the problem. The difficulty with such a formulation is that it is not immediately clear what properties \boldsymbol{\Phi}_{ \mathbf{v}} should have to ensure the well-posedness of the hyperbolic problem for which (6), \mathbf{v}\in \Upsilon, serve as boundary conditions. There are various ways of tackling this difficulty. For example, in [20,11] conditions are imposed directly on \boldsymbol{\Phi}_{ \mathbf{v}} to ensure specific properties, such as dissipativity, of the resulting initial boundary value problem. However, we follow the paradigm introduced in [7,Section 1.1.5.1] and require that at each vertex all outgoing data must be determined by the incoming data. Since for a general system (3) it is not always obvious which data are outgoing and which are incoming at a vertex, we write (6) in the equivalent form, using the Riemann invariants \boldsymbol{u} = \mathcal{F}^{-1} \boldsymbol{p}, as

    \begin{equation} \boldsymbol{\Psi}_{ \mathbf{v}} \boldsymbol{u}( \mathbf{v}) := \boldsymbol{\Phi}_{ \mathbf{v}}\mathcal{F}( \mathbf{v}) \boldsymbol{u}( \mathbf{v}) = 0. \end{equation} (8)

    For Riemann invariants, we can define their outgoing values at v as follows.

    Definition 2.1. Let \mathbf{v}\in\Upsilon . The following values u^j_k( \mathbf{v}), j\in J_{ \mathbf{v}}, k = 1, 2, are outgoing at \mathbf{v} .

     | \lambda^j_+ > \lambda^j_- > 0 | \lambda^j_+ > 0 > \lambda^j_- | 0 > \lambda^j_+ > \lambda^j_-
    l_j( \mathbf{v}) = 0 | u^j_1( \mathbf{v}), u^j_2( \mathbf{v}) | u^j_1( \mathbf{v}) | none
    l_j( \mathbf{v}) = 1 | none | u^j_2( \mathbf{v}) | u^j_1( \mathbf{v}), u^j_2( \mathbf{v})


    Denote by \alpha_j the number of positive eigenvalues on \boldsymbol{e_j} . Then, we see that for a given vertex \mathbf{v} with valency |J_{ \mathbf{v}}| the number of outgoing values is given by

    \begin{equation} k_{ \mathbf{v}} : = \sum\limits_{j\in J_{ \mathbf{v}}} (2(1-\alpha_j)l_j( \mathbf{v}) +\alpha_j ). \end{equation} (9)

    Definition 2.2. We say that \mathbf{v} is

    ● a sink if either \alpha_j = 2 and l_j( \mathbf{v}) = 1 or \alpha_j = 0 and l_j( \mathbf{v}) = 0 for all j \in J_{ \mathbf{v}} ;

    ● a source if either \alpha_j = 0 and l_j( \mathbf{v}) = 1 or \alpha_j = 2 and l_j( \mathbf{v}) = 0 for all j \in J_{ \mathbf{v}} ;

    ● a transient (or internal) vertex if it is neither a source nor a sink.

    We denote the sets of sources, sinks and transient vertices by \Upsilon_s, \Upsilon_z and \Upsilon_t , respectively.

    We observe that if \mathbf{v}\in \Upsilon_z , then k_{ \mathbf{v}} = 0 (so that no boundary conditions are imposed at a sink), while if \mathbf{v}\in \Upsilon_s , then k_{ \mathbf{v}} = 2|J_{ \mathbf{v}}| .

    A typical example of (8) is Kirchhoff's law, which requires that the total inflow rate into a vertex must equal the total outflow rate from it. Its precise formulation depends on the context; we refer to [8,Chapter 18] for a detailed description in the context of flows in networks. Since it provides only one equation, in general it is not sufficient to ensure the well-posedness of the problem. So, we introduce the following definition.

    Definition 2.3. We say that \boldsymbol{p} satisfies generalized Kirchhoff conditions at \mathbf{v} \in \Upsilon\setminus\Upsilon_z if, for \boldsymbol{u} = \mathcal{F}^{-1} \boldsymbol{p}, (8) is satisfied for some matrix \boldsymbol{\Phi}_{ \mathbf{v}} = \boldsymbol{\Psi}_{ \mathbf{v}}\mathcal{F}^{-1} with k_{ \mathbf{v}} given by (9).

    To realize the requirement that the outgoing values should be determined by the incoming ones, we have to analyze the structure of \boldsymbol{\Psi}_{ \mathbf{v}} . Let us introduce the partition

    \begin{equation} \{1, \ldots, m\} = : J_1\cup J_2\cup J_0, \end{equation} (10)

    where j \in J_1 if \alpha_j = 1, j \in J_2 if \alpha_j = 2 and j\in J_0 if \alpha_j = 0 . This partition induces the corresponding partition of each J_{ \mathbf{v}} as

    J_{ \mathbf{v}} : = J_{ \mathbf{v}, 1}\cup J_{ \mathbf{v}, 2}\cup J_{ \mathbf{v}, 0}.

    We also consider another partition J_{ \mathbf{v}} = J_{ \mathbf{v}}^0\cup J_{ \mathbf{v}}^1, where j\in J_{ \mathbf{v}}^0 if l_j( \mathbf{v}) = 0 and j\in J_{ \mathbf{v}}^1 if l_j( \mathbf{v}) = 1 . Then, we can give an alternative expression for k_{ \mathbf{v}} as

    \begin{equation} k_{ \mathbf{v}} = \sum\limits_{j\in J^0_{ \mathbf{v}}} \alpha_j + \sum\limits_{j\in J^1_{ \mathbf{v}}} (2-\alpha_j) = |J_{ \mathbf{v}, 1}|+ 2(|J^0_{ \mathbf{v}}\cap J_{ \mathbf{v}, 2}| + |J^1_{ \mathbf{v}}\cap J_{ \mathbf{v}, 0}|). \end{equation} (11)
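The agreement of (9) and (11) is easy to verify exhaustively; a short script (our own, purely for illustration) confirms that both formulas give the same count of outgoing values for every configuration of (\alpha_j, l_j( \mathbf{v})) at a vertex:

```python
from itertools import product

def k_v_eq9(alphas, ells):
    # k_v = sum_j (2(1 - alpha_j) l_j(v) + alpha_j), eq. (9)
    return sum(2 * (1 - a) * l + a for a, l in zip(alphas, ells))

def k_v_eq11(alphas, ells):
    # k_v = sum_{l_j(v)=0} alpha_j + sum_{l_j(v)=1} (2 - alpha_j), eq. (11)
    return sum(a if l == 0 else 2 - a for a, l in zip(alphas, ells))

# Exhaustive check for a vertex of valency 3: alpha_j in {0,1,2}, l_j(v) in {0,1}.
for alphas in product([0, 1, 2], repeat=3):
    for ells in product([0, 1], repeat=3):
        assert k_v_eq9(alphas, ells) == k_v_eq11(alphas, ells)
```

Note that a sink (e.g. \alpha_j = 0 with l_j( \mathbf{v}) = 0 for all j) indeed yields k_{ \mathbf{v}} = 0, while a source yields k_{ \mathbf{v}} = 2|J_{ \mathbf{v}}|, as observed in the text.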

    Then, by [2,Lemma 3.6],

    \rm (i) u^j_1(0) is outgoing if and only if j \in (J_{ \mathbf{v}, 1}\cup J_{ \mathbf{v}, 2})\cap J^0_{ \mathbf{v}},

    \rm (ii) u^j_2(0) is outgoing if and only if j \in J_{ \mathbf{v}, 2}\cap J^0_{ \mathbf{v}},

    \rm(iii) u^j_1(1) is outgoing if and only if j \in J_{ \mathbf{v}, 0}\cap J^1_{ \mathbf{v}},

    \rm (iv) u^j_2(1) is outgoing if and only if j \in (J_{ \mathbf{v}, 1}\cup J_{ \mathbf{v}, 0})\cap J^1_{ \mathbf{v}}.

    We introduce the block diagonal matrix

    \begin{equation} \mathcal{{\tilde{{F}}}}_{out}( \mathbf{v}) = {\rm diag}\{\mathcal{{\tilde{{F}}}}_{out}^j( \mathbf{v})\}_{j \in J_{ \mathbf{v}}}, \end{equation} (12)

    where

    \mathcal{{\tilde{F}}}_{out}^j( \mathbf{v}) = \left\{\begin{array} {ccc} \left(\begin{array}{cc}0&0\\0&0\end{array}\right)&\text{if} &j\in (J_{ \mathbf{v}, 0} \cap J_{ \mathbf{v}}^0)\cup (J_{ \mathbf{v}, 2}\cap J_{ \mathbf{v}}^1), \\ \left(\begin{array}{cc}f^j_{+, 1}(l_j( \mathbf{v}))&f^j_{-, 1}(l_j( \mathbf{v}))\\f^j_{+, 2}(l_j( \mathbf{v}))&f^j_{-, 2}(l_j( \mathbf{v}))\end{array}\right)&\text{if}& j\in (J_{ \mathbf{v}, 0} \cap J_{ \mathbf{v}}^1)\cup (J_{ \mathbf{v}, 2}\cap J_{ \mathbf{v}}^0), \\ \left(\begin{array}{cc}f^j_{+, 1}(0)&0\\f^j_{+, 2}(0)&0\end{array}\right)&\text{if} &j\in J_{ \mathbf{v}, 1} \cap J_{ \mathbf{v}}^0, \\ \left(\begin{array}{cc}0&f^j_{-, 1}(1)\\0&f^j_{-, 2}(1)\end{array}\right)&\text{if}& j\in J_{ \mathbf{v}, 1} \cap J_{ \mathbf{v}}^1. \end{array} \right.

    Further, by \mathcal{F}_{out}( \mathbf{v}) we denote the contraction of \mathcal{{\tilde{F}}}_{out}( \mathbf{v}) ; that is, the 2|J_{ \mathbf{v}}| \times k_{ \mathbf{v}} matrix obtained from \mathcal{{\tilde{F}}}_{out}( \mathbf{v}) by deleting 2|J_{ \mathbf{v}}| - k_{ \mathbf{v}} zero columns, and then define \mathcal{F}_{in}( \mathbf{v}) as the analogous contraction of \mathcal{F}( \mathbf{v})-\mathcal{{\tilde{F}}}_{out}( \mathbf{v}) .

    In a similar way, we extract from \boldsymbol{u}( \mathbf{v}) the outgoing boundary values \widetilde{\boldsymbol{u}}_{out}( \mathbf{v}) = (\widetilde{\boldsymbol{u}}^j_{out}( \mathbf{v}))_{j\in J_{ \mathbf{v}}} by

    \widetilde{\boldsymbol{u}}^j_{out}( \mathbf{v}) = \left\{\begin{array} {ccc} (0, 0)^T&\text{if} &j\in (J_{ \mathbf{v}, 0} \cap J_{ \mathbf{v}}^0)\cup (J_{ \mathbf{v}, 2}\cap J_{ \mathbf{v}}^1), \\ (u^j_1(l_j( \mathbf{v})), u^j_2(l_j( \mathbf{v})))^T&\text{if}& j\in (J_{ \mathbf{v}, 0} \cap J_{ \mathbf{v}}^1)\cup (J_{ \mathbf{v}, 2}\cap J_{ \mathbf{v}}^0), \\ (u^j_1(0), 0)^T&\text{if} &j\in J_{ \mathbf{v}, 1} \cap J_{ \mathbf{v}}^0, \\ (0, u^j_2(1))^T&\text{if}& j\in J_{ \mathbf{v}, 1} \cap J_{ \mathbf{v}}^1, \end{array} \right.

    and \widetilde{\boldsymbol{u}}_{in}( \mathbf{v}) = {\boldsymbol{u}( \mathbf{v})}-\widetilde{\boldsymbol{u}}_{out}( \mathbf{v}). As above, we define \boldsymbol{u}_{out}( \mathbf{v}) to be the vector in \mathbb{R}^{k_{ \mathbf{v}}} obtained by discarding the zero entries in \widetilde{\boldsymbol{u}}_{out}( \mathbf{v}) , as described above and, similarly, \boldsymbol{u}_{in}( \mathbf{v}) is the vector in \mathbb{R}^{2|J_{ \mathbf{v}}|-k_{ \mathbf{v}}} obtained from \widetilde{\boldsymbol{u}}_{in}( \mathbf{v}) .

    Proposition 1. [2,Proposition 3.8] The boundary system (8) at \mathbf{v}\in \Upsilon\setminus \Upsilon_z is equivalent to

    \begin{equation} \boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{out}( \mathbf{v}) \boldsymbol{u}_{out}( \mathbf{v}) + \boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{in}( \mathbf{v})\boldsymbol{u}_{in}( \mathbf{v}) = 0 \end{equation} (13)

    and hence it uniquely determines the outgoing values of \boldsymbol{u}( \mathbf{v}) at \mathbf{v} as defined by Definition 2.1 if and only if

    \begin{equation} \boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{out}( \mathbf{v})\quad \mathit{\text{is nonsingular}}. \end{equation} (14)

    In this case,

    \begin{equation} \boldsymbol{u}_{out}( \mathbf{v}) = - (\boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{out}( \mathbf{v}))^{-1} \boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{in}( \mathbf{v})\boldsymbol{u}_{in}( \mathbf{v}). \end{equation} (15)
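Numerically, (15) is just a linear solve. In the sketch below, random matrices stand in for \boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{out}( \mathbf{v}) and \boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{in}( \mathbf{v}) (the concrete sizes and values are hypothetical); it recovers the outgoing values and verifies that (13) holds:

```python
import numpy as np

rng = np.random.default_rng(0)
k_v, n_in = 3, 2                          # hypothetical numbers of outgoing/incoming values

Phi_F_out = rng.normal(size=(k_v, k_v))   # stands for Phi_v F_out(v); nonsingular a.s., cf. (14)
Phi_F_in = rng.normal(size=(k_v, n_in))   # stands for Phi_v F_in(v)
u_in = rng.normal(size=n_in)              # incoming boundary values

# Eq. (15): solve the linear system rather than forming the inverse explicitly.
u_out = -np.linalg.solve(Phi_F_out, Phi_F_in @ u_in)

# The boundary system (13) is then satisfied.
assert np.allclose(Phi_F_out @ u_out + Phi_F_in @ u_in, 0.0)
```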

    To pass from (3) with Kirchhoff's boundary conditions at each vertex \mathbf{v} \in \Upsilon\setminus \Upsilon_z to (1), we have to write the former in a global form. Assuming the vertices in \Upsilon\setminus\Upsilon_z are ordered as \{ \mathbf{v}_1, \ldots, \mathbf{v}_{r'}\} , we define \boldsymbol{\Psi}' = {\rm diag} \{\boldsymbol{\Psi}_{ \mathbf{v}} \}_{ \mathbf{v}\in \Upsilon\setminus\Upsilon_z} and \gamma \boldsymbol{u} = ((\boldsymbol{u}( \mathbf{v}))_{ \mathbf{v} \in \Upsilon\setminus\Upsilon_z})^T and write (8) as

    \begin{equation} \boldsymbol{\Psi}' \gamma \boldsymbol{u} = 0. \end{equation} (16)

    We note that the function values that are incoming at \mathbf{v}\in \Upsilon_z do not influence any outgoing data. However, to keep track of all vertex values, we extend \boldsymbol{\Psi}' with zero columns corresponding to edges coming to sinks and denote such an extended matrix by \boldsymbol{\Psi} . Since, by the handshake lemma, we have 2\sum_{ \mathbf{v}\in \Upsilon} |J_{ \mathbf{v}}| = 4m and by [2,Section 3.2] also \sum_{ \mathbf{v}\in \Upsilon\setminus\Upsilon_z} k_{ \mathbf{v}} = 2m, \boldsymbol{\Psi} is a 2m\times 4m matrix. In the same way, we can provide a global form of (13), splitting (16) as

    \begin{equation} \boldsymbol{\Psi}^{out} \gamma \boldsymbol{u}_{out} + \boldsymbol{\Psi}^{in}\gamma\boldsymbol{u}_{in} = 0, \end{equation} (17)

    where \boldsymbol{\Psi}^{out} = {\rm diag} \{\boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{out}( \mathbf{v}) \}_{ \mathbf{v}\in \Upsilon\setminus\Upsilon_z} and \boldsymbol{\Psi}^{in} = {\rm diag} \{\boldsymbol{\Phi}_{ \mathbf{v}} \mathcal{F}_{in}( \mathbf{v}) \}_{ \mathbf{v}\in \Upsilon\setminus\Upsilon_z}, extended by zero columns corresponding to the incoming functions at the sinks, \gamma \boldsymbol{u}_{out} : = ((\boldsymbol{u}_{out}( \mathbf{v}))_{ \mathbf{v} \in \Upsilon\setminus\Upsilon_z})^T , and \gamma \boldsymbol{u}_{in} is ((\boldsymbol{u}_{in}( \mathbf{v}))_{ \mathbf{v} \in \Upsilon\setminus\Upsilon_z})^T extended by incoming values at the sinks.

    Using the adopted parametrization and the formalism of Definition 2.1, we only need to distinguish between functions describing the flow from 0 to 1 and from 1 to 0 . Accordingly, we group the Riemann invariants \boldsymbol{u} into parts corresponding to positive and negative eigenvalues and rename them as:

    \begin{equation} \begin{split} \boldsymbol{{\upsilon}} &: = \left((u^j_1)_{j\in J_1\cup J_2}, (u^j_2)_{j\in J_2}\right) = ( \upsilon_j)_{j\in J^+}, \\ \boldsymbol{{ \varpi}}& : = \left((u^j_1)_{j\in J_0}, (u^j_2)_{j\in J_1\cup J_0}\right) = ( \varpi_j)_{j\in J^-}, \end{split} \end{equation} (18)

    where J^+ and J^- are the sets of indices j with at least one positive eigenvalue and at least one negative eigenvalue of \mathcal{M}^j , respectively. In J^+ (respectively J^- ) the indices from J_2 (respectively J_0 ) appear twice, so we rearrange them in some consistent way to avoid confusion. For instance, we can take J^+ = \{1, \ldots, m^u, m^u+1, \ldots, m^+\} and J^- = \{m^++1, \ldots, m_u, m_u+1, \ldots, 2m\} and there are bijections between J_1\cup J_2 and \{1, \ldots, m^u\} , J_2 and \{m^u+1, \ldots, m^+\} , J_0 and \{m^++1, \ldots, m_u\} , and J_1\cup J_0 and \{m_u+1, \ldots, 2m\} , respectively. We emphasize that such a renumbering is largely arbitrary and different ways of doing it result in just a re-labelling of the components of (1) without changing its structure.
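One possible implementation of this bookkeeping (the function name and output format are ours, for illustration) lists, for each edge, which components of \boldsymbol{u}^j go into \boldsymbol{\upsilon} and which into \boldsymbol{\varpi}, following (18):

```python
def group_invariants(alphas):
    """Given alpha_j (the number of positive eigenvalues of M^j) for each edge j,
    return the (edge, component) pairs forming upsilon (components travelling
    from 0 to 1) and varpi (components travelling from 1 to 0), cf. (18)."""
    upsilon = [(j, 1) for j, a in enumerate(alphas) if a >= 1]   # j in J_1 ∪ J_2
    upsilon += [(j, 2) for j, a in enumerate(alphas) if a == 2]  # j in J_2
    varpi = [(j, 1) for j, a in enumerate(alphas) if a == 0]     # j in J_0
    varpi += [(j, 2) for j, a in enumerate(alphas) if a <= 1]    # j in J_1 ∪ J_0
    return upsilon, varpi

# Each edge contributes exactly two components in total, so |J^+| + |J^-| = 2m.
ups, var = group_invariants([2, 1, 0, 1])
assert len(ups) + len(var) == 2 * 4
```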

    In this way, we converted \Gamma into a multi digraph \boldsymbol{\Gamma} with the same vertices \Upsilon , in such a way that each edge of \Gamma was split into two arcs parametrized by x\in [0, 1], where x = 0 on each arc corresponds to the same vertex in \Gamma and the same is valid for x = 1 . Conversely, if we have a multi digraph \boldsymbol{\Gamma}, where all edges appear in pairs and each two edges joining the same vertex are parametrized concurrently, then we can collapse \boldsymbol{{\Gamma}} to a graph \Gamma . We note that this approach originates from [16].

    Using this construction, the second order hyperbolic problem (3), (17) was transformed into the first order system (1) with (17) written in the form (2). However, it is clear that (1) can be formulated with an arbitrary matrix \boldsymbol{\Xi} . Thus, we arrive at the main problem considered in this paper:

    how to characterize matrices \boldsymbol{\Xi} that arise from \boldsymbol{\Psi} so that (1) describes a network dynamics?

    For a graph \Gamma , let us consider the multi digraph \boldsymbol{\Gamma} constructed above. The sets of vertices \Upsilon are the same for \Gamma and \boldsymbol{\Gamma} . For \mathbf{v} \in\Upsilon of \boldsymbol{{\Gamma}}, we can talk about incoming and outgoing arcs, which are determined by j\in J_{ \mathbf{v}}, l_j( \mathbf{v}) and the signs of \lambda_+^j and \lambda_-^j , as in Definition 2.1. We denote by \boldsymbol{J}_{ \mathbf{v}}^+ and \boldsymbol{J}_{ \mathbf{v}}^- the (ordered) sets of indices of arcs \boldsymbol{{\varepsilon}}^j incoming to and outgoing from \mathbf{v} in \boldsymbol{{\Gamma}} , respectively. We note that |\boldsymbol{J}_{ \mathbf{v}}^-| = k_{ \mathbf{v}} , the number of outgoing conditions. With this notation, the matrix \boldsymbol{{\Psi_{ \mathbf{v}}}} can be split into two matrices

    \boldsymbol{{\Psi}}_{ \mathbf{v}}^{out} = (\psi_{ \mathbf{v}, i}^j)_{1\leq i\leq k_{ \mathbf{v}}, j\in \boldsymbol{J}_{ \mathbf{v}}^-}, \qquad \boldsymbol{{\Psi}}_{ \mathbf{v}}^{in} = (\psi_{ \mathbf{v}, i}^j)_{1\leq i\leq k_{ \mathbf{v}}, j\in \boldsymbol{J}_{ \mathbf{v}}^+}.

    Since no outgoing value should be missing, we adopt the following

    Assumption 1. No column or row of \boldsymbol{\Psi}_{ \mathbf{v}}^{out} is identically zero.

    These matrices provide some insight into how the arcs are connected by the flow which is an additional feature, superimposed on the geometric structure of the incoming and outgoing arcs at the vertex. In principle, these two structures do not have to be the same, that is, it may happen that the substance flowing from \boldsymbol{{\varepsilon}}^j , j \in \boldsymbol{J}_{ \mathbf{v}}^+, is only directed to some of the outgoing arcs. An extreme case of such a situation is when both \boldsymbol{{\Psi}}_{ \mathbf{v}}^{out} and \boldsymbol{{\Psi}}_{ \mathbf{v}}^{in} are completely decomposable, see [9], with blocks in both matrices having the same row indices. Then, from the flow point of view, \mathbf{v} can be regarded as several nodes of the flow network, which are not linked with each other. Such cases, where the geometric structure at a vertex is inconsistent with the flow structure, may generate problems in determining the graph underlying transport problems. Thus in this paper we adopt assumptions ensuring that the map of the flow connections given by the matrices \boldsymbol{{\Psi}}_{ \mathbf{v}}^{out} and \boldsymbol{{\Psi}}_{ \mathbf{v}}^{in} coincides with the geometry at \mathbf{v} . We begin with the necessary definitions.

    Definition 3.1. Let \mathbf{v}\in\Upsilon_t . We say that an arc \boldsymbol{{\varepsilon}}^j , j\in \boldsymbol{J}^+_{ \mathbf{v}}, flow connects to \boldsymbol{{\varepsilon}}^l, l \in \boldsymbol{J}^-_{ \mathbf{v}}, if \psi_{ \mathbf{v}, i}^j \neq 0 and \psi_{ \mathbf{v}, i}^l\neq 0 for some 1\leq i\leq k_{ \mathbf{v}}.

    Using this idea, we define a connectivity matrix \mathsf C_{ \mathbf{v}} = (\mathsf{c}_{ \mathbf{v}, lj})_{l \in \boldsymbol{J}^-_{ \mathbf{v}}, j \in \boldsymbol{J}^+_{ \mathbf{v}}} by

    \mathsf{c}_{ \mathbf{v}, lj} = \left\{\begin{array}{lcl} 1&\text{if}& \boldsymbol{{\varepsilon}}^j\;\text{flow connects to}\; \boldsymbol{{\varepsilon}}^l, \\ 0&&\text{otherwise}. \end{array} \right.

    Remark 2. We observe that

    ● the above definition implies that for \boldsymbol{{\varepsilon}}^j and \boldsymbol{{\varepsilon}}^l to be flow connected, \boldsymbol{{\varepsilon}}^j and \boldsymbol{{\varepsilon}}^l must be incident to the same vertex;

    ● \mathsf C_{ \mathbf{v}} can be interpreted as the adjacency matrix of the bipartite line digraph constructed from the incoming and outgoing arcs at \mathbf{v}, where the connections between arcs are defined by flow connections, see [9,Section 3].

    For an arbitrary matrix A = (a_{ij})_{1\leq i\leq p, 1\leq j\leq q}, by \widehat A = (\hat a_{ij})_{1\leq i\leq p, 1\leq j\leq q} we denote the matrix with every nonzero entry of A replaced by 1.

    Lemma 3.2. If \mathbf{v} is a transient vertex, then

    \begin{equation} \mathsf{C}_{ \mathbf{v}} = \widehat{\left(\widehat{\boldsymbol{{\Psi}}^{out}_{ \mathbf{v}}}\right)^T \widehat{\boldsymbol{{\Psi}}^{in}_{ \mathbf{v}}}}. \end{equation} (19)

    Proof. Denote \mathsf{B} = \widehat{\left(\widehat{\boldsymbol{{\Psi}}^{out}_{ \mathbf{v}}}\right)^T \widehat{\boldsymbol{{\Psi}}^{in}_{ \mathbf{v}}}}. Then, \mathsf{b}_{ij} = 1, i\in \boldsymbol{J}_{ \mathbf{v}}^-, j \in \boldsymbol{J}_{ \mathbf{v}}^+ if and only if

    \sum\limits_{r = 1}^{k_{ \mathbf{v}}} \hat{\psi}_{ \mathbf{v}, r}^i\hat{\psi}^j_{ \mathbf{v}, r} \neq 0.

    This occurs if and only if there is r = 1, \ldots, k_{ \mathbf{v}} such that both \hat{\psi}_{ \mathbf{v}, r}^i\neq 0 and \hat{\psi}^j_{ \mathbf{v}, r} \neq 0, which is equivalent to \boldsymbol{{\varepsilon}}^j flow connecting with \boldsymbol{{\varepsilon}}^i , that is, \mathsf{c}_{ \mathbf{v}, ij} = 1 .
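Lemma 3.2 can also be checked numerically: computing \mathsf{C}_{ \mathbf{v}} via (19) agrees with applying Definition 3.1 entry by entry. A sketch with randomly generated blocks standing in for \boldsymbol{{\Psi}}^{out}_{ \mathbf{v}} and \boldsymbol{{\Psi}}^{in}_{ \mathbf{v}} (hypothetical data, ours):

```python
import numpy as np

def hat(A):
    """Replace every non-zero entry of A by 1."""
    return (A != 0).astype(int)

def connectivity_via_19(Psi_out, Psi_in):
    """C_v computed from eq. (19)."""
    return hat(hat(Psi_out).T @ hat(Psi_in))

def connectivity_direct(Psi_out, Psi_in):
    """C_v computed from Definition 3.1: entry (l, j) is 1 iff some boundary
    row i has a non-zero coefficient on both the outgoing arc l and the
    incoming arc j."""
    k = Psi_out.shape[0]
    C = np.zeros((Psi_out.shape[1], Psi_in.shape[1]), dtype=int)
    for l in range(Psi_out.shape[1]):
        for j in range(Psi_in.shape[1]):
            if any(Psi_out[i, l] != 0 and Psi_in[i, j] != 0 for i in range(k)):
                C[l, j] = 1
    return C

rng = np.random.default_rng(1)
Psi_out = rng.integers(-1, 2, size=(3, 3))  # hypothetical Psi_v^out
Psi_in = rng.integers(-1, 2, size=(3, 2))   # hypothetical Psi_v^in
assert (connectivity_via_19(Psi_out, Psi_in) == connectivity_direct(Psi_out, Psi_in)).all()
```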

    Let \mathbf{v} be a source (as we do not impose boundary conditions on sinks). As above, we need to ensure that the flow from a source cannot be split into several isolated subflows. Though here we do not have both inflows and outflows, we use a similar idea to that for transient vertices.

    Definition 3.3. Let \mathbf{v}\in\Upsilon_s . We say that \boldsymbol{{\varepsilon}}^i and \boldsymbol{{\varepsilon}}^j , i, j \in \boldsymbol{J}_{ \mathbf{v}}^-, are flow connected if there is l \in\{1, \ldots, k_{ \mathbf{v}}\} such that \psi_{ \mathbf{v}, l}^i\neq 0 and \psi_{ \mathbf{v}, l}^j\neq 0.

    As before, we construct a connectivity matrix \mathsf C_{ \mathbf{v}} = (\mathsf{c}_{ \mathbf{v}, ij})_{i, j \in \boldsymbol{J}^-_{ \mathbf{v}}}, where

    \begin{equation} \mathsf{c}_{ \mathbf{v}, ij} = \left\{\begin{array}{lcl} 1&\text{if}& \boldsymbol{{\varepsilon}}^j\;\text{and}\; \boldsymbol{{\varepsilon}}^i\;\text{are flow connected}, \\ 0&&\text{otherwise}. \end{array} \right. \end{equation} (20)

    Note that, in contrast to the case of an internal vertex, here the connectivity matrix is symmetric. We also do not stipulate that i\neq j , so that \boldsymbol{{\varepsilon}}^j is always flow connected to itself and hence, by Assumption 1, each entry of the diagonal of \mathsf{C}_{ \mathbf{v}} is 1. From this, we get a result similar to Lemma 3.2.

    Lemma 3.4. If \mathbf{v} is a source,

    \begin{equation} \mathsf{C}_{ \mathbf{v}} = \widehat{\left(\widehat{\boldsymbol{{\Psi}}^{out}_{ \mathbf{v}}}\right)^T \widehat{\boldsymbol{{\Psi}}^{out}_{ \mathbf{v}}}}. \end{equation} (21)

    Proof. As before, let \mathsf{B} = \widehat{\left(\widehat{\boldsymbol{{\Psi}}^{out}_{ \mathbf{v}}}\right)^T \widehat{\boldsymbol{{\Psi}}^{out}_{ \mathbf{v}}}}. Then, \mathsf{b}_{ij} = 1, i, j\in \boldsymbol{J}_{ \mathbf{v}}^-, if and only if

    \sum\limits_{r = 1}^{k_{ \mathbf{v}}} \hat{\psi}_{ \mathbf{v}, r}^i\hat{\psi}^j_{ \mathbf{v}, r} \neq 0.

    Certainly, by Assumption 1, \mathsf{b}_{ii} = 1, i\in \boldsymbol{J}_{ \mathbf{v}}^-. For i\neq j , this occurs if and only if there is r\in\{1, \ldots, k_{ \mathbf{v}}\} such that both \hat{\psi}_{ \mathbf{v}, r}^i\neq 0 and \hat{\psi}^j_{ \mathbf{v}, r} \neq 0 which is equivalent to \boldsymbol{{\varepsilon}}^j and \boldsymbol{{\varepsilon}}^i being flow connected, that is, \mathsf{c}_{ \mathbf{v}, ij} = 1 .
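    The hat construction used in Lemmas 3.2 and 3.4 is easy to experiment with numerically. The following sketch (a hypothetical NumPy illustration; the matrix Psi_out is invented for the example and does not come from the text) computes the connectivity matrix (21) for a source with three outgoing arcs:

```python
import numpy as np

def hat(M):
    """The 'hat' operation: replace every nonzero entry by 1."""
    return (M != 0).astype(int)

def connectivity_matrix(Psi_out):
    """C_v = hat( hat(Psi_out)^T hat(Psi_out) ), as in Lemma 3.4.

    Entry (i, j) is 1 iff arcs eps^i and eps^j are flow connected,
    i.e. some boundary row uses both of them with nonzero weight."""
    H = hat(Psi_out)
    return hat(H.T @ H)

# Invented source with 3 outgoing arcs and 3 boundary rows:
# row 0 couples arcs 0 and 1, row 1 couples arcs 1 and 2, row 2 uses arc 2 only.
Psi_out = np.array([[1.0, -2.0, 0.0],
                    [0.0,  3.0, 1.5],
                    [0.0,  0.0, 2.0]])
C = connectivity_matrix(Psi_out)
print(C)
```

    As observed after (20), the result is symmetric and, reflecting Assumption 1, its diagonal consists of ones.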

    We adopt the assumption that the structure of the flow connectivity is the same as that of the geometry at the vertex. Thus, if \mathbf{v} is an internal vertex, j\in \boldsymbol{J}_{ \mathbf{v}}^+ and i\in \boldsymbol{J}^-_{ \mathbf{v}} , then \boldsymbol{{\varepsilon}}^j flow connects to \boldsymbol{{\varepsilon}}^i . In particular, we have

    Assumption 2. For all \mathbf{v}\in\Upsilon_t ,

    \mathsf{C}_{ \mathbf{v}} = \boldsymbol{1}_{ \mathbf{v}} = \left(\begin{array}{cccc}1&1&\ldots&1\\ \vdots&\vdots&\ddots&\vdots\\ 1&1&\ldots&1 \end{array}\right).

    We observe that the dimension of \mathsf{C}_{ \mathbf{v}} is |\boldsymbol{J}_{ \mathbf{v}}^-|\times |\boldsymbol{J}^+_{ \mathbf{v}}| .

    If \mathbf{v} \in \Upsilon_s , then we assume that the outflow from \mathbf{v} cannot be separated into independent subflows, that is, that the arcs outgoing from \mathbf{v} cannot be divided into groups such that no arc in any group is flow connected to an arc in any other. Equivalently, for each two arcs \boldsymbol{{\varepsilon}}^i and \boldsymbol{{\varepsilon}}^j , i, j \in \boldsymbol{J}_{ \mathbf{v}}^-, there is a sequence j = j_0, j_1, \ldots, j_k = i such that \boldsymbol{{\varepsilon}}^{j_r} and \boldsymbol{{\varepsilon}}^{j_{r+1}} , r = 0, \ldots, k-1 are flow connected. Indeed, if such a division was possible, then it would be impossible to find such a sequence between indices j and i in different groups as some pair would have to connect arcs from these different groups. Conversely, if for some arcs \boldsymbol{{\varepsilon}}^i and \boldsymbol{{\varepsilon}}^j there is no such a sequence, then we can build two groups of indices containing i and j , respectively, by considering all indices for which such sequences can be found. Clearly, no arc in the first group is flow connected to any arc in the second as otherwise there would be a sequence connecting \boldsymbol{{\varepsilon}}^j and \boldsymbol{{\varepsilon}}^i . By Lemma 3.4, \mathsf{C}_{ \mathbf{v}} can be considered as the adjacency matrix of the graph with vertices given by \{ \boldsymbol{{\varepsilon}}^j\}_{j \in \boldsymbol{J}^-_{ \mathbf{v}}} and the edges determined by the flow connectivity (20). Moreover, \mathsf{C}_{ \mathbf{v}} is symmetric, which shows that the assumption described at the beginning of this paragraph is equivalent to

    Assumption 3. For all \mathbf{v}\in\Upsilon_s, \mathsf{C}_{ \mathbf{v}} is irreducible.

    Remark 3. Assumption 3 is weaker than requiring every two arcs from \{ \boldsymbol{{\varepsilon}}^j\}_{j \in \boldsymbol{J}^-_{ \mathbf{v}}} to be flow connected; in that case we would have \mathsf{C}_{ \mathbf{v}} = \boldsymbol{1}_{|\boldsymbol{J}_{ \mathbf{v}}^-|\times |\boldsymbol{J}^-_{ \mathbf{v}}|}.
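    Assumption 3 can also be tested computationally: for a symmetric 0/1 matrix with unit diagonal, irreducibility is equivalent to connectedness of the graph of which it is the adjacency matrix. A minimal sketch (the example matrices are invented):

```python
from collections import deque

def is_irreducible(C):
    """Check irreducibility of a symmetric 0/1 connectivity matrix
    by testing connectedness of the graph it is the adjacency matrix of."""
    n = len(C)
    if n == 0:
        return True
    seen, queue = {0}, deque([0])     # breadth-first search from arc 0
    while queue:
        i = queue.popleft()
        for j in range(n):
            if C[i][j] and j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == n

# Flow-connected chain of three arcs: irreducible.
print(is_irreducible([[1, 1, 0], [1, 1, 1], [0, 1, 1]]))   # True
# Arc 2 isolated from arcs 0, 1: reducible (two independent subflows).
print(is_irreducible([[1, 1, 0], [1, 1, 0], [0, 0, 1]]))   # False
```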

    Proposition 2. Let \mathbf{v}\in \Upsilon_t . If the system (8), that is,

    \begin{equation} \boldsymbol{\Psi}_{ \mathbf{v}} \boldsymbol{u}( \mathbf{v}) = 0, \end{equation} (22)

    contains a Kirchhoff's condition

    \begin{equation} \sum\limits_{j \in J_{ \mathbf{v}}} (\psi^j_{ \mathbf{v}, r} u^j_1( \mathbf{v}) + \psi^j_{ \mathbf{v}, r} u^j_2( \mathbf{v})) = 0, \end{equation} (23)

    with \psi^j_{ \mathbf{v}, r} \neq 0 for all j\in J_{ \mathbf{v}} and some r \in\{1, \ldots, k_{ \mathbf{v}}\} , then Assumption 2 is satisfied.

    Proof. Condition (23) ensures that each entry of the r -th row of both \widehat{\boldsymbol{{\Psi}}_{ \mathbf{v}}^{out}} and \widehat{\boldsymbol{{\Psi}}_{ \mathbf{v}}^{in}} is 1 and thus the product of each column of \widehat{\boldsymbol{{\Psi}}_{ \mathbf{v}}^{out}} with each column of \widehat{\boldsymbol{{\Psi}}_{ \mathbf{v}}^{in}} is non-zero, which yields Assumption 2.
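    The effect described in the proof can be observed numerically. In the hypothetical sketch below, the invented boundary matrices first violate Assumption 2 (their hatted versions are a permutation and the identity, cf. Example 1 below), and appending a Kirchhoff row (23) with all coefficients nonzero makes the connectivity matrix all ones:

```python
import numpy as np

def hat(M):
    """Replace each nonzero entry by 1 (the 'hat' operation)."""
    return (M != 0).astype(int)

# Invented transient vertex with 2 outgoing and 2 incoming Riemann invariants.
# Without a dense Kirchhoff row the connectivity matrix may contain zeros:
Psi_out = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
Psi_in = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
C = hat(hat(Psi_out).T @ hat(Psi_in))
print(C)            # not all ones: Assumption 2 fails

# A Kirchhoff row with all coefficients nonzero fills one full row of both
# hatted factors, so every column product becomes nonzero and C is all ones.
Psi_out_k = np.vstack([Psi_out, [1.0, 1.0]])
Psi_in_k = np.vstack([Psi_in, [1.0, 1.0]])
C_k = hat(hat(Psi_out_k).T @ hat(Psi_in_k))
print(C_k)          # all ones: Assumption 2 holds
```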

    Example 1. Consider the model of [20], analysed in the framework of our approach in [2,Example 5.12], i.e.,

    \begin{equation} {\partial}_tp^{j}_1 + K^j {\partial}_x p^j_2 = 0, \quad {\partial}_tp^{j}_2 + L^j {\partial}_x p^j_1 = 0, \end{equation} (24)

    for t>0, 0<x<1, 0\leq j\leq m, where K^j>0, L^j>0 for all j . For a given vertex \mathbf{v}, we define (\boldsymbol{p_1}( \mathbf{v}), \boldsymbol{p_2}( \mathbf{v})) = ((p^j_1( \mathbf{v}))_{j\in J_{ \mathbf{v}}}, (p_2^j( \mathbf{v}))_{j\in J_{ \mathbf{v}}}), \nu^j( \mathbf{v}) = -1 if l_j( \mathbf{v}) = 0 and \nu^j( \mathbf{v}) = 1 if l_j( \mathbf{v}) = 1, and T_{ \mathbf{v}} \boldsymbol{p_2} ( \mathbf{v}) = (\nu^j( \mathbf{v})p^j_2( \mathbf{v}))_{j\in J_{ \mathbf{v}}}. In this case \alpha_j = 1 for any j and thus for any vertex \mathbf{v} we need |J_{ \mathbf{v}}| boundary conditions. We focus on \mathbf{v} with |E_{ \mathbf{v}}|>1 . Then, we split \mathbb{R}^{|J_{ \mathbf{v}}|} into X_{ \mathbf{v}} of dimension n_{ \mathbf{v}} and its orthogonal complement X_{ \mathbf{v}}^\perp of dimension l_{ \mathbf{v}} = |J_{ \mathbf{v}}|-n_{ \mathbf{v}} and require that

    \boldsymbol{p_1}( \mathbf{v}) \in X_{ \mathbf{v}}, \quad T_{ \mathbf{v}}\boldsymbol{p_2}( \mathbf{v}) \in X^\perp_{ \mathbf{v}},

    that is, denoting I_1 = \{1, \ldots, n_{ \mathbf{v}}\} and I_2 = \{n_{ \mathbf{v}}+1, \ldots, |J_{ \mathbf{v}}|\} ,

    \begin{equation} \sum\limits_{j\in J_{ \mathbf{v}}}\phi^j_r p^j_1( \mathbf{v}) = 0, \quad r \in I_2, \quad \sum\limits_{j\in J_{ \mathbf{v}}} \varphi^j_r \nu^j( \mathbf{v})p^j_2( \mathbf{v}) = 0, \qquad r \in I_1, \end{equation} (25)

    where (( \varphi^j_r)_{j\in J_{ \mathbf{v}}})_{r\in I_1} is a basis in X_{ \mathbf{v}} and ((\phi^j_r)_{j\in J_{ \mathbf{v}}})_{r\in I_2} is a basis in X^\perp_{ \mathbf{v}}. It is clear that, in general, boundary conditions (25) do not satisfy Assumption 2. Consider \mathbf{v} such that each \boldsymbol{e}^j incident to \mathbf{v} is parameterised so that l_j( \mathbf{v}) = 0 , so that each u^j_1(0) is outgoing and each u^j_2(0) is incoming. If we take \phi_r^j = \varphi^j_r = \delta_{rj} and L^j = K^j = 1 for j\in J_{ \mathbf{v}} , we obtain

    \begin{align*} p^r_1(0) & = u^r_1(0)+ u^r_2(0) = 0, \quad r = n_{ \mathbf{v}}+1, \ldots, |J_{ \mathbf{v}}|, \\ p^r_2(0) & = u^r_1(0)- u^r_2(0) = 0, \quad r = 1, \ldots, n_{ \mathbf{v}}. \end{align*}

    Thus \widehat{\boldsymbol{\Psi}_{ \mathbf{v}}^{out}} and \widehat{\boldsymbol{\Psi}_{ \mathbf{v}}^{in}} are both identity matrices and Assumption 2 is not satisfied.

    On the other hand, the Kirchhoff condition,

    \begin{equation} \sum\limits_{j\in J_{ \mathbf{v}}} \nu^{j}( \mathbf{v}) p^{j}_2( \mathbf{v}) = 0, \end{equation} (26)

    see [20,Eqn (4)], satisfies the assumption of Proposition 2, as we have

    \begin{align*} 0& = \sum\limits_{j\in J_{ \mathbf{v}}}\nu^j( \mathbf{v})p^j_2( \mathbf{v}) = \sum\limits_{j\in J_{ \mathbf{v}}}\nu^j( \mathbf{v}) (f^j_{+, 2}( \mathbf{v})u^j_1( \mathbf{v}) +f^j_{-, 2}( \mathbf{v})u^j_2( \mathbf{v})) \\& = \sum\limits_{j\in J_{ \mathbf{v}}}\nu^j( \mathbf{v})\sqrt{K^jL^j} (u^j_1( \mathbf{v}) -u^j_2( \mathbf{v})) \\ & = -\sum\limits_{j \in J^0_{ \mathbf{v}}} \sqrt{K^jL^j}u^j_1(0) -\sum\limits_{j\in J^1_{ \mathbf{v}}} \sqrt{K^jL^j}u^j_2(1) \\ &\phantom{x}+\sum\limits_{j \in J^1_{ \mathbf{v}}} \sqrt{K^jL^j}u^j_1(1) +\sum\limits_{j\in J^0_{ \mathbf{v}}} \sqrt{K^jL^j}u^j_2(0), \end{align*}

    where we used [2,Eqn 5.2]. Hence, by Proposition 2, Assumption 2 is satisfied.

    Example 2. Let us consider the linearized Saint-Venant system,

    \begin{equation} {\partial}_tp^j_1 = -V^j {\partial}_x p^j_1 - H^j{\partial}_x p^j_2, \quad {\partial}_t p^j_2 = -g {\partial}_x p^j_1 -V^j {\partial}_x p^j_2, \end{equation} (27)

    see [2,Example 1.2], assuming that on each edge we have \lambda^j_{\pm} = V^j \pm \sqrt{gH^j} >0 . Then, we have

    \begin{equation} \binom{p^j_1}{p^j_2} = \binom{f^j_{+, 1} u^j_1 + f^j_{-, 1}u^j_2}{f^j_{+, 2}u^j_1 + f^j_{-, 2}u^j_2} = \binom{H^j u^j_1+ H^j u^j_2}{\sqrt{gH^j}u^j_1 -\sqrt{gH^j}u^j_2}. \end{equation} (28)
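    One can verify numerically that the columns (f^j_{+, 1}, f^j_{+, 2}) = (H^j, \sqrt{gH^j}) and (f^j_{-, 1}, f^j_{-, 2}) = (H^j, -\sqrt{gH^j}) appearing in (28) are eigenvectors of the coefficient matrix of (27) with eigenvalues \lambda^j_{\pm} = V^j \pm \sqrt{gH^j} , which is what makes (28) the passage to the Riemann invariants. A sketch with sample data (the numerical values of g , V , H are invented and chosen so that \lambda_{\pm} > 0 ):

```python
import math

# Coefficient matrix of the linearised Saint-Venant system (27):
#   d_t (p1, p2) = -M d_x (p1, p2),  with  M = [[V, H], [g, V]].
g, V, H = 9.81, 3.0, 0.5          # invented data with V > sqrt(g*H), so lambda_+- > 0
c = math.sqrt(g * H)              # wave speed sqrt(gH)
M = [[V, H], [g, V]]

def matvec(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

# The columns (H, sqrt(gH)) and (H, -sqrt(gH)) from (28) should be
# eigenvectors of M for lambda_+ = V + sqrt(gH) and lambda_- = V - sqrt(gH).
for s, lam in ((1.0, V + c), (-1.0, V - c)):
    f = [H, s * c]
    Mf = matvec(M, f)
    assert all(abs(Mf[k] - lam * f[k]) < 1e-12 for k in range(2))
print("eigenvalues:", V + c, V - c)   # both positive for this data
```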

    We use the flow structure of [11,Example 5.1], shown in Fig. 1, and focus on \mathbf{v}_1, where we need 2N-2 boundary conditions which were given as

    p^j_1(0) = p^1_1(1), \quad p^j_2(0) = p_2^1(1), \qquad j = 2, \ldots, N.
    Figure 1. Starlike network of channels.

    In terms of the Riemann invariants, they can be written as

    \begin{align*} &\left(\begin{array}{ccccccc}H^2&H^2&0&0&\ldots&0&0\\\sqrt{gH^2}&-\sqrt{gH^2}&0&0&\ldots&0&0\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&0&\ldots&H^N&H^N\\0&0&0&0&\ldots&\sqrt{gH^N}&-\sqrt{gH^N}\end{array}\right)\left(\begin{array}{c}u_1^2(0)\\u_2^2(0)\\\vdots \\ u_1^N(0)\\u_2^N(0)\end{array}\right)\\ &\phantom{xxxx} = \left(\begin{array}{cc}H^1&H^1\\\sqrt{gH^1}&-\sqrt{gH^1}\\\vdots&\vdots\\ H^1&H^1\\\sqrt{gH^1}&-\sqrt{gH^1}\end{array}\right)\left(\begin{array}{c}u_1^1(1)\\u_2^1(1)\end{array}\right) \end{align*}

    and it is clear that Assumption 2 is satisfied.

    Figure 2. The reconstructed multi digraph \boldsymbol{\Gamma} . It is seen that it cannot describe a flow on \Gamma as \varpi_5 and \varpi_6 must flow in the same direction.

    Figure 3. The reconstructed multi digraph \boldsymbol{\Gamma} for (46), (47).

    Figure 4. A network \Gamma realizing the flow (48), (49).

    For a matrix A = (a_{ij})_{1\leq i\leq n, 1\leq j\leq m}, let us denote by \boldsymbol{a}^c_j, 1\leq j\leq m, the columns of A and by \boldsymbol{a}^r_i, 1\leq i\leq n, its rows. Then, we often write

    \begin{equation} A = (\boldsymbol{a}^c_{j})_{1\leq j\leq m} = (\boldsymbol{a}^r_{i})_{1\leq i\leq n}, \end{equation} (29)

    that is, we represent the matrix as a row vector of its columns or a column vector of its rows. In particular, we write

    \begin{align*} \boldsymbol{\Xi}_{out} & = (\xi^{out}_{ij})_{1\leq i\leq 2m, 1\leq j\leq 2m} = (\boldsymbol{\xi}^{out, c}_{j})_{1\leq j\leq 2m} = (\boldsymbol{\xi}^{out, r}_{i})_{1\leq i\leq 2m}, \\\boldsymbol{\Xi}_{in} & = (\xi^{in}_{ij})_{1\leq i\leq 2m, 1\leq j\leq 2m} = (\boldsymbol{\xi}^{in, c}_{j})_{1\leq j\leq 2m} = (\boldsymbol{\xi}^{in, r}_{i})_{1\leq i\leq 2m}. \end{align*}

    For any vector \boldsymbol{\mu} = (\mu_1, \ldots, \mu_k) , we define \text{supp}\;\boldsymbol{\mu} = \{j\in \{1, \ldots, k\};\; \mu_j\neq 0\} .

    Definition 3.5. We say that the problem (1) is graph realizable if there is a graph \Gamma = \{\{ \mathbf{v}_i\}_{1\leq i\leq r}, \{\boldsymbol{e_k}\}_{1\leq k\leq m}\} and a grouping of the column indices of \boldsymbol{\Xi} into pairs (j'_k, j''_k)_{1\leq k\leq m} such that (1a) describes a flow along the edges \boldsymbol{e_k} of \Gamma , which satisfies generalized Kirchhoff's condition at each vertex of \Gamma . In other words, (1) is graph realizable if there is a graph \Gamma and a matrix \boldsymbol{\Psi} such that (1a), (1c) can be written, after possibly permuting rows and columns of \boldsymbol{\Xi}, as (5), (17).

    Before we formulate the main theorem, we need to introduce some notation. Let us recall that we consider the boundary system (2), i.e.,

    \boldsymbol{\Xi}_{out}(( \upsilon_j(0, t))_{j\in J^+}, ( \varpi_j(1, t))_{j\in J^-}) = - \boldsymbol{\Xi}_{in}(( \upsilon_j(1, t))_{j\in J^+}, ( \varpi_j(0, t))_{j\in J^-}).

    Let us emphasize that in this notation, the column indices on the left- and right-hand sides correspond to the values of the same function. To shorten notation, let us renumber them as 1\leq j\leq 2m . As noted in the Introduction, appropriate pairs of the columns would determine the edges of the graph \Gamma that we try to reconstruct, hence the first step is to identify the possible vertices of \Gamma . For this, first, we will try to construct a multi digraph \boldsymbol{\Gamma} on which (2) can be written in the form (17) for (\boldsymbol{ \upsilon}, \boldsymbol{ \varpi}) . Roughly speaking, this corresponds to \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} being composed (up to permutations) of non-communicating blocks corresponding to the vertices. Here, each j should correspond to an arc \boldsymbol{{\varepsilon}}^j and the column j on the left hand side corresponds to the outflow along \boldsymbol{{\varepsilon}}^j from a unique vertex, while the column j on the right hand side corresponds to the inflow along \boldsymbol{{\varepsilon}}^j to a unique vertex. The vertices of \boldsymbol{\Gamma} should then be determined by a suitable partition of the rows of \boldsymbol{\Xi}_{out} (and of \boldsymbol{\Xi}_{in} ).

    In the second step we will determine additional assumptions that allow \boldsymbol{\Gamma} to be collapsed into a graph \Gamma on which (2) can be written in the form (17).

    Since we do not want (2) to be under- or over-determined, we adopt

    Assumption 4. For all 1\leq j\leq 2m ,

    \boldsymbol{\xi}^{out, c}_j\neq 0\quad\mathit{\text{and}}\quad \boldsymbol{\xi}^{out, r}_j \neq 0.

    Our strategy is to treat \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} as the outgoing and incoming incidence matrices of a multi digraph with vertices 'smeared' over subnetworks of flow connections. Thus we have

    Assumption 5. The matrix

    \mathsf{A}: = \widehat {\left(\widehat{\boldsymbol{\Xi}_{out}}\right)^T\widehat{\boldsymbol{\Xi}_{in}}}

    is the adjacency matrix of the line graph of a multi digraph.
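    As a sketch of the matrix in Assumption 5 (with invented 2\times 2 matrices; verifying that the result really is the adjacency matrix of the line graph of a multi digraph is a separate combinatorial step, see Appendix A):

```python
import numpy as np

def hat(M):
    """Replace each nonzero entry by 1 (the 'hat' operation)."""
    return (M != 0).astype(int)

def line_graph_candidate(Xi_out, Xi_in):
    """A = hat( hat(Xi_out)^T hat(Xi_in) ), as in Assumption 5.

    Entry (i, j) is 1 iff some boundary row couples outflow column i
    with inflow column j, i.e. arc j feeds into arc i at some vertex."""
    return hat(hat(Xi_out).T @ hat(Xi_in))

# Invented 2x2 system: one boundary row couples outflow 0 with inflow 1,
# the other couples outflow 1 with inflow 0 (the pattern of a 2-cycle).
Xi_out = np.array([[2.0, 0.0], [0.0, -1.0]])
Xi_in  = np.array([[0.0, 5.0], [1.0, 0.0]])
A = line_graph_candidate(Xi_out, Xi_in)
print(A)   # adjacency matrix of the line graph of a directed 2-cycle
```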

    For \mathsf{A} , let V^{out}_j and V^{in}_i be groups of row and column indices, respectively, potentially outgoing from (respectively incoming to) a vertex, see Appendix A. We introduce

    I : = \{i\in \{1, \ldots, 2m\};\; \boldsymbol{\xi}^{in, r}_i = 0\}

    and adopt

    Assumption 6. For all i\in I there exists j\in\{1, \ldots, M'\} such that

    \mathit{\text{supp}}\; \boldsymbol{\xi}^{out, r}_i \subset V^{out}_j.

    In the next proposition we shall show that V^{out}_j and V^{in}_i determine a partition of the row indices into sets that can be used to define vertices. The idea is that if supports of columns of \boldsymbol{\Xi}_{out} or \boldsymbol{\Xi}_{in} overlap, the arcs determined by these columns must be incident to the same vertex.

    Proposition 3. If Assumptions 4, 5 and 6 are satisfied, then the sets

    \begin{equation} \{\{\mathcal{V_i}\}_{1\leq i\leq n}, \mathcal{V_S}\}, \end{equation} (30)

    where

    \begin{align} \mathcal{V_S} & = \{i \in \{1, \ldots, 2m\};\; \mathrm{supp\;} \boldsymbol{\xi}^{out, r}_i \subset V^{out}_{M'}\}, \end{align} (31)
    \begin{align} \mathcal{V_i} & = \bigcup\limits_{s \in V^{out}_i}\mathrm{supp\;} \boldsymbol{{\xi}}^{out, c}_s, \;1\leq i\leq n, \end{align} (32)

    form a partition of the row indices of both \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} such that if for any j , \mathrm{supp\;}\boldsymbol{\xi}^{out, c}_j\cap \mathcal{V_i}\neq \emptyset for some i = 1, \ldots, n, S , then \mathrm{supp\;}\boldsymbol{\xi}^{out, c}_j\subset \mathcal{V_i}, and if for any k , \mathrm{supp\;}\boldsymbol{\xi}^{in, c}_k\cap \mathcal{V_l}\neq \emptyset for some l = 1, \ldots, n, S , then \mathrm{supp\;}\boldsymbol{\xi}^{in, c}_k \subset \mathcal{V_l}.

    Since the proof is quite long, we first present its outline.

    Step 1. Reconstruct a multi digraph \boldsymbol{\Gamma} from the adjacency matrix of its line graph.

    Step 2. Identify the rows of \boldsymbol{\Xi}_{out} which are zero in \boldsymbol{\Xi}_{in} and which correspond to sources and associate them with vertices.

    Step 3. Associate the other rows of \boldsymbol{\Xi}_{out} which are zero in \boldsymbol{\Xi}_{in} with appropriate vertices.

    Step 4. Associate remaining rows with vertices and construct a possible partition of the row indices.

    Step 5. Check that the constructed partition has the required properties.

    Proof. Step 1. By Assumption 5, \mathsf{A} is the adjacency matrix of L(\boldsymbol{\Gamma}) for some multi digraph \boldsymbol{\Gamma} . As explained in Appendix A, we can reconstruct \boldsymbol{\Gamma} with the transient vertices defined in a unique way and admissible sources and sinks. Let us fix such a construction. Then, we have the sets \{V^{in}_j\}_{1\leq j\leq n} and \{V^{out}_i\}_{1\leq i\leq n} of incoming and outgoing arcs determining any transient vertex. Further, we have (possibly) the sets V^{in}_{N'} and V^{out}_{M'} that group the arcs incoming to sink(s) and outgoing from source(s), respectively. Since \mathsf{A} represents all arcs, the same decomposition is valid for \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} , that is, we have subdivisions \{V^{out}_i\}_{1\leq i\leq M'} and \{V^{in}_j\}_{1\leq j\leq N'} of the columns of \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} , respectively, and hence the correspondence of the columns with the vertices. Thus, we have to show that (30) is a partition of the rows of \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} satisfying the conditions of the proposition.

    Let us recall that the entry (i, j) of \mathsf{A} is defined by

    \mathsf{a}_{ij} = \widehat{\widehat{\boldsymbol{{\xi}}^{out, c}_i}\cdot \widehat{\boldsymbol{{\xi}}^{in, c}_j}}

    and if a row k of \mathsf{A} is zero, that is, it represents a source, then there is a zero row in {\boldsymbol{\Xi}_{in}} . Indeed, since, by Assumption 4, \text{supp}\; \boldsymbol{{\xi}}^{out, c}_k \neq \emptyset , there is a nonzero entry, say, \xi^{out}_{lk} and thus we must have \xi^{in}_{lj} = 0 for any j . So, to every zero row in \mathsf{A}, there corresponds a zero row in \boldsymbol{\Xi}_{in} . However, there may be other zero rows in \boldsymbol{\Xi}_{in} .

    Step 2. To determine the rows in \boldsymbol{\Xi}_{out} , corresponding to sources, we consider Assumption 6. First, we note that any j of that assumption, if it exists, is determined in a unique way as the sets V^{out}_j are not overlapping. Next, we observe that if \text{supp}\; \boldsymbol{\xi}^{out, r}_i \subset V^{out}_j and \text{supp}\; \boldsymbol{\xi}^{out, r}_i\cap \text{supp}\; \boldsymbol{\xi}^{out, r}_k \neq \emptyset , then \text{supp}\; \boldsymbol{\xi}^{out, r}_k \subset V^{out}_j . Indeed, if \text{supp}\; \boldsymbol{\xi}^{out, r}_k \subset V^{out}_p and l\in \text{supp}\; \boldsymbol{\xi}^{out, r}_i\cap \text{supp}\; \boldsymbol{\xi}^{out, r}_k , then l \in V^{out}_j\cap V^{out}_p which implies V^{out}_j = V^{out}_p. Thus we can define the set

    \mathcal{V_S} = \{i \in \{1, \ldots, 2m\};\; \text{supp}\; \boldsymbol{\xi}^{out, r}_i \subset V^{out}_{M'}\}.

    For any i\in \mathcal{V_S} and q\in \text{supp}\; \boldsymbol{\xi}^{out, r}_i, we have \text{supp}\; \boldsymbol{\xi}^{out, c}_q\subset I, as otherwise there would be a nonzero product \boldsymbol{\xi}^{out, c}_q\cdot \boldsymbol{\xi}^{in, c}_s for some s as \boldsymbol{\xi}^{in, r}_{t} \neq 0 for each t\notin I . Next, suppose that p and k are such that k \in \text{supp}\; \boldsymbol{\xi}^{out, c}_p \cap \text{supp}\; \boldsymbol{\xi}^{out, c}_j for some j \in V^{out}_{M'} . This means, by Assumption 6, that \text{supp}\;\boldsymbol{\xi}^{out, r}_k \subset V^{out}_{M'} and hence p \in V^{out}_{M'} . Consider any nonzero element of \boldsymbol{\xi}^{out, c}_p, say, \xi^{out}_{lp}\neq 0 . By the above argument, l \in I . If \text{supp}\; \boldsymbol{\xi}^{out, r}_l \subset V^{out}_{M'} , then l\in \mathcal{V_S} . If not, \text{supp}\; \boldsymbol{\xi}^{out, r}_l\cap V^{out}_j\neq \emptyset for some j\neq M' which contradicts Assumption 6. Thus, \mathcal{V_S} satisfies the first part of the statement. The second part is void as there is no \boldsymbol{\xi}^{in, c}_j with \text{supp}\;\boldsymbol{\xi}^{in, c}_j\cap \mathcal{V_S}\neq \emptyset. Therefore, all indices i\in \mathcal{V_S}, that is, such that \text{supp}\; \boldsymbol{\xi}^{out, r}_i \subset V^{out}_{M'}, determine a source as there is no connection to any inflow.

    Step 3. Now, consider the indices i\in I\setminus \mathcal{V_S} . Then, again by Assumption 6, for any i\in I\setminus \mathcal{V_S} there is a unique j\neq M' such that \text{supp}\; \boldsymbol{\xi}^{out, r}_i \subset V^{out}_{j} , that is, such an i belongs to the vertex determined by V^{out}_{j} . This determines a partition of I corresponding to the vertices (recall that there are no zero rows in \boldsymbol{\Xi}_{out} and so each row must belong to a vertex).

    Step 4. Next, we associate the remaining rows in \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} with the vertices. Consider V^{out}_i and V^{in}_j for some 1\leq i\leq n and j defined by (A.2). The non-zero entries \mathsf{a}_{pq} of \mathsf{A} , where p\in V^{out}_i and q\in V^{in}_j , occur whenever \text{supp}\; \boldsymbol{{\xi}}^{out, c}_p \cap \text{supp}\; \boldsymbol{{\xi}}^{in, c}_q\neq \emptyset . Hence, the rows with indices k \in \text{supp}\; \boldsymbol{{\xi}}^{out, c}_p \cap \text{supp}\; \boldsymbol{{\xi}}^{in, c}_q must belong to a vertex through which the incoming arc \boldsymbol{{\varepsilon}}^q communicates with the outgoing arc \boldsymbol{{\varepsilon}}^p . Since all nonzero entries in \text{supp}\; \boldsymbol{{\xi}}^{out, c}_p and \text{supp}\; \boldsymbol{{\xi}}^{in, c}_q , respectively, reflect non-zero outflow along \boldsymbol{{\varepsilon}}^p, and inflow along \boldsymbol{{\varepsilon}}^q , respectively, \text{supp}\; \boldsymbol{{\xi}}^{out, c}_p and \text{supp}\; \boldsymbol{{\xi}}^{in, c}_q must belong to the same vertex. Since the same is true for any indices from V^{out}_i and V^{in}_j , plausible partitions of row indices of \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} defining vertices are

    \mathcal{V}^{out}_i = \bigcup\limits_{s \in V^{out}_i} \text{supp}\; \boldsymbol{{\xi}}^{out, c}_s, \qquad \mathcal{V}^{in}_j = \bigcup\limits_{q \in V^{in}_j} \text{supp}\; \boldsymbol{{\xi}}^{in, c}_q.

    We first observe that if V^{out}_i and V^{in}_j determine the same transient vertex, then

    \begin{equation} \mathcal{V}^{out}_i\setminus \{s \in \mathcal{V}^{out}_i;\; \boldsymbol{\xi}^{in, r}_s = 0\} = \mathcal{V}^{in}_j. \end{equation} (33)

    Indeed, let p \in \mathcal{V}^{out}_i\setminus \{s \in \mathcal{V}^{out}_i;\; \boldsymbol{\xi}^{in, r}_s = 0\} . Then, there is s \in V^{out}_i such that p \in \text{supp}\; \boldsymbol{{\xi}}^{out, c}_s . Since p\notin \{s \in \mathcal{V}^{out}_i;\; \boldsymbol{\xi}^{in, r}_s = 0\} , there is q such that \xi^{in}_{pq}\neq 0 and thus \widehat{\boldsymbol{\xi}^{out, c}_s} \cdot \widehat{\boldsymbol{\xi}^{in, c}_q} \neq 0 . Hence, q \in V^{in}_j and consequently p \in \mathcal{V}^{in}_j . The converse can be proved in the same way by using Assumption 4 since if p\in \mathcal{V}^{in}_j then, by construction, p must belong to a support of some \boldsymbol{{\xi}}^{in, c}_q and thus cannot be in \{s \in \mathcal{V}^{out}_i;\; \boldsymbol{\xi}^{in, r}_s = 0\} . As we see, if \mathcal{V}^{out}_i contains rows \boldsymbol{\xi}^{out, r}_k with k\in I\setminus \mathcal{V_S} , then these rows satisfy \text{supp}\; \boldsymbol{\xi}^{out, r}_k \subset V^{out}_i . If we add the indices of such rows to \mathcal{V}^{in}_j with V^{in}_j determining the same vertex as V^{out}_i , then such an extended \mathcal{V}^{in}_j will be equal to \mathcal{V}^{out}_i and thus we can use (30) to denote the partition of \{1, \ldots, 2m\} into \mathcal{V}^{out}_1, \ldots, \mathcal{V}^{out}_n, \mathcal{V_S} .

    Step 5. We easily check that this partition satisfies the conditions of the proposition. We have already checked this for \mathcal{V_S} . So, let \text{supp}\;\boldsymbol{\xi}^{out, c}_q \cap \mathcal{V}^{out}_i\neq\emptyset for some 1\leq i\leq n , then there is s\in V^{out}_i such that k\in \text{supp}\;\boldsymbol{\xi}^{out, c}_q \cap \text{supp}\;\boldsymbol{\xi}^{out, c}_s . Clearly, k\notin \mathcal{V_S} by the construction of \mathcal{V}^{out}_i . If k \in I\setminus \mathcal{V_S} , then q \in V^{out}_i by Assumption 6 and hence \text{supp}\;\boldsymbol{\xi}^{out, c}_q \subset \mathcal{V}^{out}_i . If k\notin I , then \widehat{ \boldsymbol{\xi}^{out, c}_s}\cdot \widehat{\boldsymbol{\xi}^{in, c}_p}\neq 0 for some p but then also \widehat{\boldsymbol{\xi}^{in, c}_p}\cdot \widehat{\boldsymbol{\xi}^{out, c}_q}\neq 0 and hence p \in V^{in}_j , yielding q \in V^{out}_i and consequently \text{supp}\; \boldsymbol{\xi}^{out, c}_q \subset \mathcal{V}^{out}_i. Similarly, if \text{supp}\;\boldsymbol{\xi}^{in, c}_p \cap \mathcal{V}^{out}_i\neq\emptyset , then there is k \in \text{supp}\;\boldsymbol{\xi}^{in, c}_p\cap \text{supp}\;\boldsymbol{\xi}^{out, c}_q for some q \in V^{out}_i . But then, immediately from the definition, \text{supp}\;\boldsymbol{\xi}^{in, c}_p\subset \mathcal{V}^{in}_j\subset \mathcal{V}^{out}_i by (33).

    We note that (30) does not contain rows corresponding to sinks and they must be added following the rules described in Appendix A. With such an extension, we consider the multi digraph \boldsymbol{\Gamma} , determined by

    \begin{equation} \{\{\mathcal{V_i}\}_{1\leq i\leq n}, \mathcal{V_S}, \mathcal{V_Z}\}, \quad \{\{V^{out}_i\}_{1\leq i\leq n}, V^{out}_{M'}, \emptyset\}, \quad \{\{V^{in}_{j_i}\}_{1\leq i\leq n}, \emptyset, V^{in}_{N'}\}, \end{equation} (34)

    where the association i\mapsto j_i is defined in (A.2). By construction, if we take the triple \mathcal{V_i}, V^{out}_i, V^{in}_{j_i} , 1\leq i\leq n , it determines a transient vertex, the outgoing arcs given by the indices of columns in \boldsymbol{\Xi}_{out} and the incoming arcs given by the indices of columns in \boldsymbol{\Xi}_{in} . Similarly, the pair \mathcal{V_S}, V^{out}_{M'} determines the sources and all outgoing arcs, while the set of incoming arcs is empty. Thus, if we denote by \boldsymbol{\Xi}^i_{out} and \boldsymbol{\Xi}^i_{in} the submatrices of \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} consisting of the rows with indices in \mathcal{V_i} and columns in V^{out}_i and V^{in}_{j_i} , respectively, with an obvious modification for \mathcal{V_S} , then (2) decouples into n (or n+1 ) independent systems

    \begin{equation} \begin{split} &\boldsymbol{\Xi}^i_{out}(( \upsilon_j(0, t))_{j\in J^+\cap V^{out}_i}, ( \varpi_j(1, t))_{j\in J^-\cap V^{out}_i})\\& \phantom{xxx} = - \boldsymbol{\Xi}^i_{in}(( \upsilon_j(1, t))_{j\in J^+\cap V^{in}_{j_i}}, ( \varpi_j(0, t))_{j\in J^-\cap V^{in}_{j_i}}), \quad 1\leq i\leq n, \\ &\boldsymbol{\Xi}^S_{out}(( \upsilon_j(0, t))_{j\in J^+\cap V^{out}_{M'}}, ( \varpi_j(1, t))_{j\in J^-\cap V^{out}_{M'}}) = 0. \end{split} \end{equation} (35)

    This system can be seen as a Kirchhoff system on the multi digraph \boldsymbol{\Gamma} but we need to collapse \boldsymbol{\Gamma} to a graph \Gamma on which (35) can be written as (17). We observe that the question naturally splits into two problems – one is about collapsing the graph, while the other is about grouping the components of (\boldsymbol{ \upsilon}, \boldsymbol{ \varpi}) into pairs compatible with the parametrization of \Gamma .

    Let \mathsf{A} be the adjacency matrix of L(\boldsymbol{\Gamma}) , with Assumptions 4 and 6 satisfied. As in Appendix A, we can construct incoming and outgoing incidence matrices A^+ and A^- but these are uniquely determined only if there are no sources and sinks. However, we have an additional piece of information about sources.

    If we grouped all sources into one node, as before Proposition 5, then, by Lemma 3.4, the flow connectivity in this source would be given by

    \mathsf{C}_{ \mathbf{v}}: = \widehat{\left(\widehat{\boldsymbol{\Xi}_{out}^{S}}\right)^T\widehat{\boldsymbol{\Xi}_{out}^{S}}}.

    However, such a matrix would not necessarily satisfy Assumption 3. Thus, we separate the arcs into non-communicating groups, each determining a source satisfying Assumption 3. For this, by simultaneous permutations of rows and columns, \mathsf{C}_{ \mathbf{v}} can be written as

    \begin{equation} \boldsymbol{\Xi}^S = {\rm diag}\{ \boldsymbol{\Xi}^S_{i}\}_{1\leq i\leq k}, \end{equation} (36)

    where k may equal 1. Since the simultaneous permutation of rows and columns is given as P\mathsf{C}_{ \mathbf{v}}P^T, where P is a suitable permutation matrix, [18,p. 140], we see that \boldsymbol{\Xi}^S is a symmetric matrix, along with \mathsf{C}_{ \mathbf{v}} . By [12,Sections III \S 1 and III \S 4], \boldsymbol{\Xi}^S is irreducible if and only if it cannot be transformed by simultaneous row and column permutations to the form (36) with k>1 (since \boldsymbol{\Xi}^S is symmetric, all off-diagonal blocks must be zero). Then, \boldsymbol{\Xi}^S can be reduced to the canonical form, [12,Section III,Eq. (68)], in which each \boldsymbol{\Xi}^S_i is irreducible. If (36) is in the canonical form, then we say that \boldsymbol{\Xi}^S allows for k sources, each satisfying Assumption 3. The indices of the columns contributing to the blocks define the k non-communicating sources \mathcal{V}_{S_1}, \ldots, \mathcal{V}_{S_k} in \boldsymbol{\Gamma} ; we denote these index sets by V^{out}_{S_1}, \ldots, V^{out}_{S_k} , so that V^{out}_{M'} = V^{out}_{S_1}\cup \ldots\cup V^{out}_{S_k} . Finally, define \boldsymbol{\xi}^{out, r} = (\xi^{out, r}_{S_i j})_{1\leq i\leq k, 1\leq j\leq 2m} by

    \xi^{out, r}_{S_i j} = \left\{\begin{array}{lcl} 1 &\text{if}& j\in V^{out}_{S_i}, \\0&\text{otherwise}.&\end{array}\right.
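    The passage from \mathsf{C}_{ \mathbf{v}} to the canonical form (36) amounts to finding the connected components of the graph with adjacency matrix \mathsf{C}_{ \mathbf{v}} ; each component yields one group V^{out}_{S_i} and one indicator row of \boldsymbol{\xi}^{out, r} . A hypothetical sketch (the matrix C is invented):

```python
import numpy as np

def source_groups(C):
    """Split the arcs of a grouped source into non-communicating groups
    V^{out}_{S_1}, ..., V^{out}_{S_k}: the connected components of the
    symmetric connectivity matrix C (its irreducible diagonal blocks)."""
    n = len(C)
    labels = [-1] * n                 # component label of each arc
    k = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]               # depth-first search from an unlabelled arc
        labels[start] = k
        while stack:
            i = stack.pop()
            for j in range(n):
                if C[i][j] and labels[j] == -1:
                    labels[j] = k
                    stack.append(j)
        k += 1
    groups = [[j for j in range(n) if labels[j] == g] for g in range(k)]
    # Indicator rows xi^{out,r}: one 0/1 row per source group.
    xi_out_r = np.zeros((k, n), dtype=int)
    for g, group in enumerate(groups):
        xi_out_r[g, group] = 1
    return groups, xi_out_r

# Arcs 0, 1 communicate; arcs 2, 3 communicate; the two pairs do not.
C = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
groups, xi = source_groups(C)
print(groups)   # two sources, each with an irreducible block (Assumption 3)
print(xi)
```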

    For the sinks, it is simpler as there is no constraining information from (2). We have columns with indices in V^{in}_{N'} corresponding to sinks. These are zero columns in \boldsymbol{\Xi}_{in} but the columns with these indices in \boldsymbol{\Xi}_{out} have nonempty supports and thus we can determine from which vertices they are outgoing. Let us denote

    \begin{equation} V^{in}_{\mathcal{V_i}} = \{j\in V^{in}_{N'};\; \text{supp}\; \boldsymbol{\xi}^{out, c}_j\cap \mathcal{V_i}\neq \emptyset\}, \quad i = 1, \ldots, n, S_1, \ldots, S_k. \end{equation} (37)

    For each i we consider a partition

    \begin{equation} V^{in}_{\mathcal{V_i}} = V^{in}_{i, {1}}\cup\ldots\cup V^{in}_{i, {l_i}}, \quad i = 1, \ldots, n, S_1, \ldots, S_k, \end{equation} (38)

    where l_i\leq |V^{in}_{\mathcal{V_i}}|, into non-overlapping sets V^{in}_{i, {l}} , 1\leq l\leq l_i . Then, we define sinks \mathcal{V}_{i, {l}} as the heads of the arcs with indices from V^{in}_{i, {l}} ; we have n_z = l_1+\cdots + l_{S_k} sinks. Then, as above, define \boldsymbol{\xi}^{in, r} = (\xi^{in, r}_{\{i, {l}\}, q})_{i\in \{1, \ldots, S_k\}, l\in \{1, \ldots, l_i\}, q\in \{1, \ldots, 2m\}} by

    \xi^{in, r}_{\{i, {l}\}, q} = \left\{\begin{array}{lcl} 1 &\text{if}& q\in V^{in}_{i, {l}}, \\0&\text{otherwise}.&\end{array}\right.

    Remark 4. We expect |V^{in}_{\mathcal{V_i}}|, i = 1, \ldots, n, S_1, \ldots, S_k, to be even numbers and (38) to represent a partition of V^{in}_{\mathcal{V_i}} into pairs so that l_i = |V^{in}_{\mathcal{V_i}}|/2.

    Then, as in Remark 5, the incoming and outgoing incidence matrices are

    A^+ = \left(\begin{array}{c} \mathsf{A}^+\\\boldsymbol{0}\\\boldsymbol{\xi}^{in, r}\end{array}\right), \qquad A^- = \left(\begin{array}{c} \mathsf{A}^-\\\boldsymbol{\xi}^{out, r}\\\boldsymbol{0}\end{array}\right)

    which, by a suitable permutation matrix P moving the columns corresponding to the arcs outgoing from the sources and incoming to the sinks to the last positions, can be written as

    \begin{equation} A^+ = \left(\begin{array}{c} \mathsf{A}^+\\\boldsymbol{0}\\\boldsymbol{\xi}^{in, r}\end{array}\right)P, \qquad A^- = \left(\begin{array}{c} \mathsf{A}^-\\\boldsymbol{\xi}^{out, r}\\\boldsymbol{0}\end{array}\right)P, \end{equation} (39)

    respectively. Both matrices have 2m columns and \mathcal{n} : = n + k + n_z rows. Hence, as shown in Remark 5, the adjacency matrix of the full multi digraph \boldsymbol{\Gamma} is given by

    \begin{equation} A(\boldsymbol{\Gamma}) = A^+\left(A^-\right)^T = \left(\begin{array}{ccc} \mathsf{A}^+\left(\mathsf{A}^-\right)^T&\mathsf{A}^+\left(\boldsymbol{\xi}^{out, r}\right)^T&\boldsymbol{0}\\ \boldsymbol{0}&\boldsymbol{0}&\boldsymbol{0}\\ \boldsymbol{\xi}^{in, r}\left(\mathsf{A}^-\right)^T&\boldsymbol{\xi}^{in, r}\left(\boldsymbol{\xi}^{out, r}\right)^T&\boldsymbol{0} \end{array}\right), \end{equation} (40)

    where the dimensions of the blocks in the first row are, respectively, n\times n , n\times k and n\times n_z , in the second row, k\times n , k\times k and k\times n_z and in the last one, n_z\times n , n_z\times k and n_z\times n_z . Thus, if a_{ij} is in the block (p, q), 1\leq p, q\leq 3, then a_{ji} will be in the block (q, p) .

    Consider a nonzero pair (a_{ij}, a_{ji}) of entries of A(\boldsymbol{\Gamma}) . If, say, a_{ij} = h , then it means that the i -th row of A^+ and j -th row of A^- have entry 1 in the same h columns, that is, there are exactly h arcs coming from \mathbf{v}_j to \mathbf{v}_i . Similarly, if a_{ji} = e , then there are exactly e arcs coming from \mathbf{v}_i to \mathbf{v}_j . Conversely, if there are h arcs from \mathbf{v}_j to \mathbf{v}_i and e arcs from \mathbf{v}_i to \mathbf{v}_j , then (a_{ij}, a_{ji}) = (h, e). In particular, h+e = 2 if and only if there are two arcs between \mathbf{v}_j and \mathbf{v}_i running either concurrently or countercurrently. Since the columns of A^+ and A^- are indexed in the same way as that of \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} , the pair (a_{ij}, a_{ji}) determines the rows \boldsymbol{a}^{+, r}_i and \boldsymbol{a}^{+, r}_j of A^+ and thus the indices

    \begin{equation} \begin{split} &(a_{ij}\mapsto \{k^{ij}_1, \ldots, k^{ij}_h\}, a_{ji}\mapsto \{k^{ji}_1, \ldots, k^{ji}_e\}) \\ & = (a_{ij}\mapsto \text{supp}\; \boldsymbol{a}^{+, r}_i\cap \text{supp}\; \boldsymbol{a}^{-, r}_j, a_{ji}\mapsto \text{supp}\; \boldsymbol{a}^{+, r}_j\cap \text{supp}\; \boldsymbol{a}^{-, r}_i)\end{split} \end{equation} (41)

    of columns of \boldsymbol{\Xi}_{out} (and of \boldsymbol{\Xi}_{in} ). Finally, we are ready to formulate the main result of this paper.

    Theorem 3.6. System (2) is graph realizable with generalized Kirchhoff's conditions satisfying Assumptions 1 and 2 for \mathbf{v}\in \Upsilon_t and Assumption 3 for \mathbf{v}\in\Upsilon_s if and only if, in addition to Assumptions 4, 5 and 6, there is a partition (38) such that A(\boldsymbol{\Gamma}) defined by (40) satisfies

    1. for any 1\leq i, j\leq \mathcal{n} , a_{ii} = 0 and (a_{ij}, a_{ji}) takes one of the following forms:

    \begin{equation} (2, 0), \; (1, 1), \; (0, 2)\; \text{or}\; (0, 0); \end{equation} (42)

    2. if (a_{ij}, a_{ji}) determines the indices k and l according to (41), then

    \begin{equation} \begin{split} &\text{if } (a_{ij}, a_{ji}) = (2, 0) \text{ or } (0, 2), \text{ then } k, l \in J^+ \text{ or } k, l \in J^-, \text{ and } c_k \neq c_l; \\ &\text{if } (a_{ij}, a_{ji}) = (1, 1), \text{ then } k \in J^+ \text{ and } l \in J^-, \text{ or } k \in J^- \text{ and } l \in J^+. \end{split} \end{equation} (43)

    Proof. Necessity. Let us consider the Kirchhoff system (17). By construction, both matrices \boldsymbol{\Psi}^{out} and \boldsymbol{\Psi}^{in} are in block diagonal form with equal row dimensions of the blocks. We consider the problem already transformed to \boldsymbol{\Gamma} . We note that each arc's index must appear twice in \boldsymbol{\Psi} – once in \boldsymbol{\Psi}^{out} and once in \boldsymbol{\Psi}^{in} (if there are sinks, the indices of incoming arcs will correspond to the zero columns). Further, whenever column indices k and l appear in the blocks of \boldsymbol{\Psi}^{out} and \boldsymbol{\Psi}^{in} , respectively, then \boldsymbol{{\varepsilon}}^l is incoming to, while \boldsymbol{{\varepsilon}}^k is outgoing from the same vertex (and not any other). Thus, by Assumption 2, the matrix

    \tilde A = (\tilde a_{ij})_{1\leq i, j\leq 2m} = \widehat{(\widehat{\boldsymbol{\Psi}^{out}})^T\widehat{\boldsymbol{\Psi}^{in}}}

    is block diagonal with blocks of the form \mathsf{C}_{ \mathbf{v}} = \boldsymbol{1}_{ \mathbf{v}}, except for zero rows corresponding to the sources and zero columns corresponding to the sinks. However, in general, the column indices in \boldsymbol{\Psi}^{out} and \boldsymbol{\Psi}^{in} do not correspond to the indices of the arcs they represent. Precisely, \tilde a_{ij} = 1 if and only if there is a vertex for which the arc \boldsymbol{{\varepsilon}}^{j'} is incoming and \boldsymbol{{\varepsilon}}^{i'} is outgoing, where j' and i' are the indices of the arcs that correspond to the columns j and i of \boldsymbol{\Psi}^{in} and \boldsymbol{\Psi}^{out} , respectively. To address this, we construct \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} as

    \boldsymbol{\Xi}_{out} = \boldsymbol{\Psi}^{out}P, \qquad \boldsymbol{\Xi}_{in} = \boldsymbol{\Psi}^{in}Q,

    where P and Q are permutation matrices, so that in both matrices the column indices 1, \ldots, 2m correspond to \boldsymbol{{\varepsilon}}^1, \ldots, \boldsymbol{{\varepsilon}}^{2m} . Hence

    A = (a_{ij})_{1\leq i, j\leq 2m} : = \widehat{(\widehat{\boldsymbol{\Xi}_{out}})^T\widehat{\boldsymbol{\Xi}_{in}}} = \widehat{(\widehat{\boldsymbol{\Psi}^{out}}P)^T\widehat{\boldsymbol{\Psi}^{in}}Q} = P^T\tilde AQ

    is a matrix where the indices 1, \ldots, 2m of both the columns and the rows correspond to \boldsymbol{{\varepsilon}}^1, \ldots, \boldsymbol{{\varepsilon}}^{2m} . Since \Gamma has no loops, it is clear that a_{ii} = 0 for all i = 1, \ldots, 2m. It is also clear that any two columns (or rows) of \tilde A are either equal or orthogonal and this property is preserved by permutations of columns and of rows. Hence, by Proposition 4, A is the adjacency matrix of a line digraph, so Assumption 5 is satisfied. Since the arcs' connections given by A and \tilde A are the same, we see that A is equal to the adjacency matrix of the line digraph of \boldsymbol{\Gamma} . Therefore, the transient vertices determined by A are the same as in \boldsymbol{\Gamma} (and hence in \Gamma ). On the other hand, as we know, A does not determine the structure of sources and sinks in \boldsymbol{\Gamma} .

    The fact that Assumption 4 is satisfied is a consequence of Assumption 1. For Assumption 6, we recall, see Appendix A, that the sets V_j^{out} group together the indices representing arcs \boldsymbol{{\varepsilon}}^k outgoing from a single vertex, thus they correspond to the blocks \boldsymbol{\Psi}^{out}_{ \mathbf{v}} in the matrix \boldsymbol{\Psi}^{out} and therefore Assumption 6 is satisfied, even for any i . Next, since \boldsymbol{\Gamma} has been constructed from \Gamma , the structure of the blocks in \boldsymbol{\Psi}^{out} corresponding to sources ensures that, after permutations, their entries will coincide with \boldsymbol{\Xi}^S_{out} and thus (36) will hold with the blocks in (36) exactly corresponding to the sources in \boldsymbol{\Gamma}, on account of Assumption 3. Similarly, the sinks are determined in \Gamma , so we obtain a grouping of the arcs into pairs (a partition of the set of indices corresponding to arcs incoming to sinks) coming from transient vertices or sources to sinks; thus the constructions (39) and (40) are completely determined.
Then, we observe that whenever we have a source \mathbf{v} , the arcs outgoing from \mathbf{v} must come in pairs, with a single pair going from \mathbf{v} to any other possible vertex, meaning that the respective entry in A(\boldsymbol{\Gamma}) must be either (2, 0) or (0, 2). A similar argument holds for the sinks. Since the problem comes from a graph, by construction, the orientation of the flows is consistent with the parametrization.

    Sufficiency. Given (2), we have flows (( \upsilon_j)_{j\in J^+}, ( \varpi_j)_{j\in J^-}) defined on (0, 1). Assumptions 4, 5 and 6 ensure, by Proposition 3, the existence of a multi digraph \boldsymbol{\Gamma} on which (2) can be localized to decoupled systems at vertices and written as (35). Precisely speaking, Assumption 5 associates the indices of incoming components of the solution at a vertex with incoming arcs and similarly for the indices of the outgoing components. Therefore, if an arc \boldsymbol{{\varepsilon}}^p runs from \mathbf{v}_j to \mathbf{v}_i , then the flow occurs from \mathbf{v}_j to \mathbf{v}_i , that is, if p\in J^+ , then the flow on \boldsymbol{{\varepsilon}}^p is given by \upsilon_p with \upsilon_p(0) at \mathbf{v}_j and \upsilon_p(1) at \mathbf{v}_i , and an analogous statement holds for p\in J^- . In other words, the index p of the arc \boldsymbol{{\varepsilon}}^p running from \mathbf{v}_j to \mathbf{v}_i determines the orientation of the parametrization: 0\mapsto \mathbf{v}_j and 1\mapsto \mathbf{v}_i if p\in J^+ and 0\mapsto \mathbf{v}_i and 1\mapsto \mathbf{v}_j if p\in J^-.

    Now, (42) ensures that there are no loops at vertices and that between any two vertices there are either two arcs or none. If a_{ij} is an entry in \mathsf{A}^+_S\mathsf{S}^T , \mathsf{Z}(\mathsf{A_Z}^-)^T or \mathsf{Z_S}(\mathsf{S_Z})^T , then a_{ij} = 0 or a_{ij} = 2 and, by the dimensions of the blocks, a_{ji}\in\{0, 2\} and a_{ji} = 0 , respectively. On the other hand, if a_{ij} is an entry in \mathsf{A}^+_T(\mathsf{A}^-_T)^T , then it can take any value 0, 1 or 2 , and a_{ji} equals, respectively, 2 or 0 , 1 , and 0 . Thus, double arcs indexed, say, by (k, l) between vertices could be combined into edges of an undirected graph (with no loops or multiple edges). However, in this way we construct a combinatorial graph which does not take into account that if \boldsymbol{{\varepsilon}}^k and \boldsymbol{{\varepsilon}}^l are combined into one edge \boldsymbol{e} of \Gamma , their orientations must be the same. Thus, if (a_{ij}, a_{ji}) = (2, 0) determines the pair of indices k, l according to (41), then both arcs \boldsymbol{{\varepsilon}}^k and \boldsymbol{{\varepsilon}}^l of \boldsymbol{\Gamma} run from \mathbf{v}_j to \mathbf{v}_i and the components k and l of the solution flow concurrently along \boldsymbol{e} . By assumption, k, l \in J^+ or k, l \in J^- . In the first case, we associate \mathbf{v}_j with 0 and \mathbf{v}_i with 1 and we have ( \upsilon_{k}, \upsilon_{l}) on \boldsymbol{e} , in agreement with the orientation. Otherwise, we associate \mathbf{v}_j with 1 and \mathbf{v}_i with 0 and we have ( \varpi_{k}, \varpi_{l}) on \boldsymbol{e} . On the other hand, if (a_{ij}, a_{ji}) = (1, 1) then, by assumption, either k\in J^+ and l\in J^- or k\in J^- and l\in J^+ and the components k and l flow countercurrently.
Again, in the first case, k\in J^+ and \boldsymbol{{\varepsilon}}^k running from \mathbf{v}_j to \mathbf{v}_i requires \mathbf{v}_j to be associated with 0 and \mathbf{v}_i with 1, while l\in J^- and \boldsymbol{{\varepsilon}}^l running from \mathbf{v}_i to \mathbf{v}_j also requires \mathbf{v}_j to be associated with 0 and \mathbf{v}_i with 1. Thus, we have ( \upsilon_{k}, \varpi_{l}) on \boldsymbol{e} . Otherwise, we associate \mathbf{v}_j with 1 and \mathbf{v}_i with 0 and we have ( \varpi_{k}, \upsilon_{l}) on \boldsymbol{e} .

    Finally, the assumption c_k\neq c_l in the first case of assumption (43) ensures that the resulting system is hyperbolic on each edge.

    Example 3. Let us consider the system

    \begin{equation} \begin{split} {\partial}_t \upsilon_{j}+c_j{\partial}_x \upsilon_j & = 0, \quad 1\leq j\leq 4, \\ {\partial}_t \varpi_{j}-c_j{\partial}_x \varpi_j & = 0, \quad 5\leq j\leq 6, \end{split} \end{equation} (44)

    where c_j>0, with boundary conditions

    \begin{equation} \left(\!\!\begin{array}{cccccc}0&1&1&0&0&0\\ 1&0&0&0&0&0\\ 1&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\end{array}\!\! \right)\left(\!\!\!\begin{array}{c} \upsilon_1(0)\\ \upsilon_2(0)\\ \upsilon_3(0)\\ \upsilon_4(0)\\ \varpi_5(1)\\ \varpi_6(1)\end{array}\!\!\!\right) = \left(\!\!\begin{array}{cccccc}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 1&1&0&0&0&1\\ 0&0&1&1&1&0\end{array}\!\! \right)\left(\!\!\!\begin{array}{c} \upsilon_1(1)\\ \upsilon_2(1)\\ \upsilon_3(1)\\ \upsilon_4(1)\\ \varpi_5(0)\\ \varpi_6(0)\end{array}\!\!\!\right). \end{equation} (45)

    Thus,

    A = \widehat{(\widehat{\boldsymbol{\Xi}_{out}})^T\widehat{\boldsymbol{\Xi}_{in}}} = \left(\begin{array}{cccccc}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 1&1&0&0&0&1\\ 0&0&1&1&1&0\end{array} \right).
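    The computation of A can be checked mechanically. Below is a minimal Python sketch (pure lists, 0-based indices, helper names are ours), identifying \boldsymbol{\Xi}_{out} with the left-hand and \boldsymbol{\Xi}_{in} with the right-hand matrix of (45), and reading \widehat{\cdot} as the entrywise indicator that replaces every nonzero entry by 1:

```python
def binarize(M):
    # \widehat{M}: entrywise indicator, 1 wherever an entry is nonzero
    return [[1 if x else 0 for x in row] for row in M]

def transpose(M):
    return [list(c) for c in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

# Boundary-condition matrices of (45)
Xi_out = [[0,1,1,0,0,0], [1,0,0,0,0,0], [1,1,0,0,0,0],
          [0,0,1,1,0,0], [0,0,0,0,1,0], [0,0,0,0,0,1]]
Xi_in  = [[0,0,0,0,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0],
          [0,0,0,0,0,0], [1,1,0,0,0,1], [0,0,1,1,1,0]]

# A = \widehat{(\widehat{Xi_out})^T \widehat{Xi_in}}
A = binarize(matmul(transpose(binarize(Xi_out)), binarize(Xi_in)))
for row in A:
    print(row)   # rows 1-4 are zero; rows 5 and 6 match the display above
```

    The printed matrix reproduces the A displayed above, confirming the identification of the two matrices in (45) with \boldsymbol{\Xi}_{out} and \boldsymbol{\Xi}_{in} .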

    Thus, there is a multi digraph \boldsymbol{\Gamma} for which A is the adjacency matrix of L(\boldsymbol{\Gamma}) . There is no sink and, to determine the structure of the sources, we observe that

    \boldsymbol{\Xi}^S_{out} = \left(\begin{array}{cccc}0&1&1&0\\ 1&0&0&0\\ 1&1&0&0\\ 0&0&1&1\end{array}\right) \quad\text{and so}\quad \boldsymbol{\Xi}^S = \widehat{\left(\widehat{\boldsymbol{\Xi}_{out}^{S}}\right)^T\widehat{\boldsymbol{\Xi}_{out}^{S}}} = \left(\begin{array}{cccc}1&1&0&0\\ 1&1&1&0\\ 0&1&1&1\\ 0&0&1&1\end{array}\right).

    This matrix is irreducible and thus we have one source. Therefore

    A^+ = \left(\begin{array}{cccccc}1&1&0&0&0&1\\ 0&0&1&1&1&0\\ 0&0&0&0&0&0\end{array}\right), \qquad A^- = \left(\begin{array}{cccccc}0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 1&1&1&1&0&0\end{array}\right)

    and consequently

    A(\boldsymbol{\Gamma}) = A^+(A^-)^T = \left(\begin{array}{ccc}0&1&2\\ 1&0&2\\ 0&0&0\end{array}\right).

    Further,

    \begin{align*} \text{supp}\; \boldsymbol{a}^{+, r}_1& = \{1, 2, 6\}, \; \text{supp}\; \boldsymbol{a}^{+, r}_2 = \{3, 4, 5\}, \\ \text{supp}\; \boldsymbol{a}^{-, r}_1& = \{5\}, \; \text{supp}\; \boldsymbol{a}^{-, r}_2 = \{6\}, \; \text{supp}\; \boldsymbol{a}^{-, r}_3 = \{1, 2, 3, 4\}, \end{align*}

    hence, by (41),

    (a_{12}\mapsto \{6\}, a_{21}\mapsto \{5\}), \quad (a_{13}\mapsto \{1, 2\}, a_{31}\mapsto \emptyset), \quad (a_{23}\mapsto \{3, 4\}, a_{32}\mapsto \emptyset).

    To reconstruct \Gamma , we see that \boldsymbol{{\varepsilon}}^5 and \boldsymbol{{\varepsilon}}^6 should be combined into a single edge \boldsymbol{e} . However, since J^+ = \{1, 2, 3, 4\}, J^- = \{5, 6\}, the flow along \boldsymbol{{\varepsilon}}^5 runs from 1 to 0 and hence \mathbf{v}_1 should correspond to 1 in the parametrization, while \mathbf{v}_2 to 0. On the other hand, \boldsymbol{{\varepsilon}}^6 runs also from 1 to 0 but from \mathbf{v}_2 to \mathbf{v}_1 and hence \mathbf{v}_2 should correspond to 1, while \mathbf{v}_1 to 0. This contradiction is in agreement with the violation of assumption (43) as (a_{12}, a_{21}) = (1, 1) but in the corresponding (k, l) = (6, 5) , both k and l belong to J^- .
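    The failure of condition 2 of Theorem 3.6 in this example can also be verified mechanically. A minimal Python sketch (our own helper names; arcs and vertices are 0-based, so the arc \boldsymbol{{\varepsilon}}^6 is index 5, and the speeds c are hypothetical positive values chosen with c_1 > c_2 , c_3 > c_4 ):

```python
def supp(row):
    """Indices of nonzero entries of a 0/1 row vector."""
    return {k for k, x in enumerate(row) if x != 0}

def arcs_between(A_plus, A_minus, i, j):
    """Indices of arcs from v_j to v_i, as in (41):
    supp a^{+,r}_i intersected with supp a^{-,r}_j."""
    return supp(A_plus[i]) & supp(A_minus[j])

def pair_ok(k, l, a_ij, a_ji, J_plus, J_minus, c):
    """Condition 2 of Theorem 3.6 for one off-diagonal pair."""
    if (a_ij, a_ji) in ((2, 0), (0, 2)):
        same = (k in J_plus and l in J_plus) or (k in J_minus and l in J_minus)
        return same and c[k] != c[l]
    if (a_ij, a_ji) == (1, 1):
        # one index must lie in J^+ and the other in J^-
        return (k in J_plus) != (l in J_plus)
    return True

# Incidence matrices of Example 3 (rows: v_1, v_2, source)
A_plus  = [[1,1,0,0,0,1], [0,0,1,1,1,0], [0,0,0,0,0,0]]
A_minus = [[0,0,0,0,1,0], [0,0,0,0,0,1], [1,1,1,1,0,0]]
J_plus, J_minus = {0, 1, 2, 3}, {4, 5}
c = [3.0, 2.0, 3.0, 2.0, 1.0, 1.5]   # hypothetical speeds

(k,) = arcs_between(A_plus, A_minus, 0, 1)   # a_12 -> arc eps^6, index 5
(l,) = arcs_between(A_plus, A_minus, 1, 0)   # a_21 -> arc eps^5, index 4
print(pair_ok(k, l, 1, 1, J_plus, J_minus, c))  # False: both 4 and 5 lie in J^-
```

    The check returns False for the pair ( \boldsymbol{{\varepsilon}}^5, \boldsymbol{{\varepsilon}}^6 ), mirroring the contradiction described above, while the pairs determined by (a_{13}, a_{31}) and (a_{23}, a_{32}) pass.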

    Consider a small modification of (44), (45),

    \begin{equation} \begin{split} {\partial}_t \upsilon_{j}+c_j{\partial}_x \upsilon_j & = 0, \quad 1\leq j\leq 5, \\ {\partial}_t \varpi_{6}-c_6{\partial}_x \varpi_6 & = 0, \end{split} \end{equation} (46)

    c_j>0 , with the last two boundary conditions of (45) accordingly changed to

    \begin{equation} \begin{split} \upsilon_5(0)- \upsilon_1(1)- \upsilon_2(1)- \varpi_6(0)& = 0, \\ \varpi_6(1)- \upsilon_3(1)- \upsilon_4(1)- \upsilon_5(1)& = 0. \end{split} \end{equation} (47)

    The matrices \boldsymbol{\Xi}_{out}, \boldsymbol{\Xi}_{in}, \boldsymbol{\Xi}^S, A^+, A^- and A are the same as above and thus the multi digraph \boldsymbol{\Gamma} is the same as before. However, this time on \boldsymbol{{\varepsilon}}^5 we have the flow \upsilon_5 , occurring from 0 to 1, and thus \boldsymbol{{\varepsilon}}^5 and \boldsymbol{{\varepsilon}}^6 can be combined with a parametrization running from 0 at \mathbf{v}_1 to 1 at \mathbf{v}_2 . Assuming c_1>c_2 , c_3>c_4 , we identify u^1_1 = \upsilon_1, u^1_2 = \upsilon_2, u^3_1 = \upsilon_3, u^3_2 = \upsilon_4, u^2_1 = \upsilon_5, u^2_2 = \varpi_6 and write (46) as a system of 2\times 2 hyperbolic systems on a graph \Gamma = (\{ \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}, \{\boldsymbol{e_1}, \boldsymbol{e_2}, \boldsymbol{e_3}\}) of the form

    \begin{equation} \begin{array}{c} {\partial}_t u^1_1+c_1{\partial}_x u^1_1 = 0, \\ {\partial}_t u^1_2+c_2{\partial}_x u^1_2 = 0, \end{array}\;\begin{array}{c} {\partial}_t u^2_1+c_5{\partial}_x u^2_1 = 0, \\ {\partial}_t u^2_2-c_6{\partial}_x u^2_2 = 0, \end{array}\; \begin{array}{c}{\partial}_t u^3_1+c_3{\partial}_x u^3_1 = 0, \\{\partial}_t u^3_2+c_4{\partial}_x u^3_2 = 0, \end{array} \end{equation} (48)

    with boundary conditions at the vertices

    (49)

    Consider a digraph G (possibly with multiple arcs but with no loops) and its line digraph L(G) . For both G and L(G) we consider their adjacency matrices \mathsf{A}(G) and \mathsf{A}(L(G)) . The matrix \mathsf{A}(L(G)) is always binary, with zeroes on the diagonal. Not every binary matrix is the adjacency matrix of a line digraph, see [3,6]. In fact, we have

    Proposition 4. [6, Thm. 2.4.1] A binary matrix \mathsf{A} is the adjacency matrix of the line digraph of a multi digraph if and only if all diagonal entries are 0 and any two columns (equivalently rows) of \mathsf{A} are either equal or orthogonal (as vectors).
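    The criterion of Proposition 4 is straightforward to implement; a minimal Python sketch (helper name is ours), tested on the matrix A of Example 3 and on a counterexample:

```python
def is_line_digraph_adjacency(A):
    """Proposition 4 test: zero diagonal and any two columns equal or
    orthogonal. By the proposition, checking columns is equivalent to
    checking rows."""
    m = len(A)
    if any(A[i][i] != 0 for i in range(m)):
        return False
    cols = list(zip(*A))
    for j in range(m):
        for k in range(j + 1, m):
            equal = cols[j] == cols[k]
            orthogonal = sum(a * b for a, b in zip(cols[j], cols[k])) == 0
            if not (equal or orthogonal):
                return False
    return True

# The matrix A of Example 3 passes the test ...
A_ex3 = [[0,0,0,0,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0],
         [0,0,0,0,0,0], [1,1,0,0,0,1], [0,0,1,1,1,0]]
print(is_line_digraph_adjacency(A_ex3))                      # True
# ... while a transitive triangle fails: its second and third
# columns are neither equal nor orthogonal.
print(is_line_digraph_adjacency([[0,1,1],[0,0,1],[0,0,0]]))  # False
```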

    For our analysis, it is important to understand the reconstruction of G from a matrix \mathsf{A} = (\mathsf{{a}}_{ij})_{1\leq i, j\leq m} satisfying the above conditions. As in (29), we write

    \mathsf{A} = (\mathsf{{a}}_{ij})_{1\leq i, j\leq m} = (\boldsymbol{{a}}^c_j)_{1\leq j\leq m} = (\boldsymbol{{a}}^r_i)_{1\leq i\leq m}.

    If for some i_1 we have \mathsf{a}_{i_1j_1} = \ldots = \mathsf{a}_{i_1j_k} = 1, then it means that \boldsymbol{e}_{j_1}, \ldots, \boldsymbol{e}_{j_k} join \boldsymbol{e}_{i_1} and thus they must be incident to the same vertex \mathbf{v} and all \boldsymbol{e}_{i_l} for which \mathsf{a}_{i_lj_1} = 1 (and thus all \mathsf{a}_{i_lj_p} = 1 for p = 1, \ldots, k ) are outgoing from \mathbf{v} . We further observe that all zero rows can be identified with source(s). Similarly, zero columns correspond to sinks. If \boldsymbol{a}^c_j = \boldsymbol{a}^r_j = 0 for some j , then \boldsymbol{e_j} connects a source to a sink.

    Using the adjacency matrix of a line digraph, we cannot determine how many sources or sinks the original graph could have without additional information. We can lump all potential sources and sinks into one source and one sink, we can have as many sinks and sources as there are zero columns and rows, respectively, or we can subdivide the arcs into some intermediate arrangement. We describe a construction with one source and one sink and indicate its possible variants.

    We introduce V^{in}_1 = \{r\in\{1, \ldots, m\};\; \boldsymbol{a}^c_r = \boldsymbol{a}^c_1\} and, inductively, V^{in}_k = \{r\in\{1, \ldots, m\};\; \boldsymbol{a}^c_r = \boldsymbol{a}^c_{j_k}, j_k = \min\{j;\; j\notin \bigcup_{1\leq p\leq k-1} V^{in}_p\}\} and the process terminates at N' such that \bigcup_{1\leq p\leq N'} V^{in}_p = \{1, \ldots, m\} . In the same way, V^{out}_1 = \{l\in\{1, \ldots, m\};\; \boldsymbol{a}^r_l = \boldsymbol{a}^r_1\} and V^{out}_k = \{l\in\{1, \ldots, m\};\; \boldsymbol{a}^r_l = \boldsymbol{a}^r_{j_k}, j_k = \min\{j;\; j\notin \bigcup_{1\leq p\leq k-1} V^{out}_p\}\} and the process terminates at M' such that \bigcup_{1\leq p\leq M'} V^{out}_p = \{1, \ldots, m\} . In other words, \{V^{in}_j\}_{1\leq j\leq N'} and \{V^{out}_i\}_{1\leq i\leq M'} represent the vertices of G through incoming and outgoing arcs, respectively. If there are any zero rows in \mathsf{A} , then we swap the corresponding set V^{out}_{j_0} with the last set V^{out}_{M'} . In this way, V^{out}_{M'} represents all arcs outgoing from sources (if they exist). For this construction, we represent them as coming from a single source but other possibilities are allowed, see Remark 5. Similarly, if there are any zero columns, we swap the corresponding set V^{in}_{i_0} with V^{in}_{N'} , that is, V^{in}_{N'} represents the arcs incoming to sink(s). Then, we denote

    \begin{equation} \begin{split} M: = \left\{\begin{array}{lcl} M' &\text{if}& V^{out}_{M'} = \{j;\; \boldsymbol{a}^r_j \neq 0\}, \\ M'-1 &\text{if} & V^{out}_{M'} = \{j;\; \boldsymbol{a}^r_j = 0\}, \end{array}\right.\\ N: = \left\{\begin{array}{lcl} N' &\text{if}& V^{in}_{N'} = \{j;\; \boldsymbol{a}^c_j \neq 0\}, \\ N'-1 &\text{if} & V^{in}_{N'} = \{j;\; \boldsymbol{a}^c_j = 0\}.\end{array}\right.\end{split} \end{equation} (A.1)

    Thus, we see that the number of internal (or transient) vertices, that is, vertices which are neither sources nor sinks, is n: = M = N . For such vertices it is important to note that, in general, V^{out}_j and V^{in}_j, 1\leq j\leq n, do not represent the same vertex. To combine V^{out}_i and V^{in}_j into the same vertex, we have, for 1\leq i, j\leq n ,

    \begin{equation} \mathbf{v}_j = \{V^{in}_j, V^{out}_i\}, \qquad a_{i_pj_r} = 1\;\text{for some/any }i_p \in V^{out}_i, j_r \in V^{in}_j. \end{equation} (A.2)
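    The grouping of equal columns and rows into the sets V^{in}_k and V^{out}_k , with the zero classes moved to the last position, can be sketched in Python. A minimal illustration on the matrix \mathsf{A} of (A.5) below (0-based indices; helper names are ours):

```python
def group_equal(vectors):
    """Classes of indices of equal vectors, in order of first appearance."""
    classes, reps = [], []
    for i, v in enumerate(vectors):
        for cls, rep in zip(classes, reps):
            if v == rep:
                cls.append(i)
                break
        else:
            classes.append([i])
            reps.append(v)
    return classes

def zero_class_last(classes, vectors):
    """Move the class of zero vectors (sources resp. sinks) to the end."""
    zero = [c for c in classes if not any(vectors[c[0]])]
    return [c for c in classes if any(vectors[c[0]])] + zero

# The matrix A of (A.5)
A = [[0,0,0,0,0,0,0], [0,0,0,0,0,0,0], [1,1,0,1,0,0,0], [0,0,1,0,1,0,0],
     [0,0,0,0,0,0,0], [0,0,1,0,1,0,0], [0,0,0,0,0,0,0]]

cols = [list(c) for c in zip(*A)]
rows = [r[:] for r in A]
V_in  = zero_class_last(group_equal(cols), cols)   # equal columns
V_out = zero_class_last(group_equal(rows), rows)   # equal rows
print(V_in)    # [[0, 1, 3], [2, 4], [5, 6]]
print(V_out)   # [[2], [3, 5], [0, 1, 4, 6]]
```

    In agreement with Example 4 below, the arcs \boldsymbol{e_1}, \boldsymbol{e_2}, \boldsymbol{e_4} (indices 0, 1, 3) form one incoming class, \boldsymbol{e_3}, \boldsymbol{e_5} another, and the zero classes collect the arcs incoming to sinks, respectively outgoing from sources.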

    With this notation, we present a more algorithmic way of reconstructing G from \mathsf{A} . First, we collapse equal rows of \mathsf{A} into a single row of \mathsf{A}^+ and equal columns of \mathsf{A} into a single column and then take the transpose to get \mathsf{A}^- . Mathematically, let \mathbb{I}^+ be a set of indices such that \mathbb{I}^+\cap V^{out}_i consists of exactly one point for each 1\leq i\leq M' and ordered by the order of \{V^{out}_i\} . Similarly, let \mathbb{I}^- be a set of indices such that \mathbb{I}^-\cap V^{in}_i consists of exactly one point for each 1\leq i\leq N'. We order \mathbb{I}^- consistently with \mathbb{I}^+ , namely, if i_k\in \mathbb{I}^+ and j_k \in \mathbb{I}^- are the k -th indices in \mathbb{I}^+ and \mathbb{I}^- , respectively, then i_k \in V^{out}_k and j_k\in V^{in}_j with j related to k by (A.2), that is, \{V^{out}_k, V^{in}_j\} determines the same vertex \mathbf{v}_k . As mentioned above, possible zero rows correspond to the highest indices. With this, we define

    \begin{equation} \mathsf{A}^+ = (\boldsymbol{a}^r_i)_{i\in \mathbb{I}^+}, \qquad \mathsf{A}^- = \left((\boldsymbol{a}^c_j)_{j\in \mathbb{I}^-}\right)^T. \end{equation} (A.3)

We see now that each row of \mathsf{A}^+ corresponds to a vertex and each column of \mathsf{A}^+ corresponds to an incoming arc. If there are zero rows in \mathsf{A} , there is a zero row at the bottom of \mathsf{A}^+ showing the presence of a (single) source. The presence of a sink is indicated by zero columns in \mathsf{A}^+ . Similarly, each row of \mathsf{A}^- corresponds to a vertex with arcs outgoing from it represented by nonzero entries in this row, in columns with indices corresponding to the indices of the arcs. If there are zero columns in \mathsf{A} , they appear as a zero row in \mathsf{A}^- , which represents a (single) sink. Possible sources are visible in \mathsf{A}^- as zero columns.

However, what is important is that even though we lumped all sources and sinks into one single source and a single sink, the zero columns in \mathsf{A}^+ and \mathsf{A}^- keep track of the arcs going into the sink or out of the source, respectively. Unless there are no sources and sinks, \mathsf{A}^+ and \mathsf{A}^- are not the incoming and outgoing incidence matrices of a graph (for the definition of these, see e.g. [3]). Indeed, \mathsf{A}^+ does not contain the sinks, which, clearly, are part of the incoming incidence matrix. Similarly, \mathsf{A}^- does not include sources.

If we keep our requirement that there is only one sink and one source, then we add one row to \mathsf{A}^+ and one to \mathsf{A}^- to represent the sink and the source, respectively. We use the convention that, if both the sink and the source are present, the source is the last but one row and the sink is the last one. To determine the entries we use the required property of the incidence matrices, that there is exactly one non-zero entry in each column (expressing the fact that each arc has a unique tail and a unique head). Thus, we put 1 in the added rows in any column that was zero in \mathsf{A}^+ (resp. \mathsf{A}^- ). We denote such extended matrices by A^+ and A^- .
It is easy to see that the following result is true.

    Proposition 5. A^+ and A^- are, respectively, incoming and outgoing incidence matrices of a multi digraph G having \mathsf{A} as the adjacency matrix of L(G) .

    Proof. Since each column of A^+ and A^- contains 1 only in one row, we can construct a multi digraph G from them using \mathsf{A}(G) = A^+(A^-)^T as its adjacency matrix. Since we allow G to be a multi digraph, the entries of \mathsf{A}(G) give the number of arcs joining the vertices. A (k, l) entry in \mathsf{A}(G) is given by \boldsymbol{a_k}^{+, r}\cdot \boldsymbol{a_l}^{-, r} and, by construction, \boldsymbol{a_k}^{+, r} is a row in \mathsf{A} belonging to \mathbf{v}_k and \boldsymbol{a_l}^{-, r} is a column in \mathsf{A} corresponding to \mathbf{v}_l . Nonzero entries in \boldsymbol{a_k}^{+, r} correspond to the arcs incoming to \mathbf{v}_k and nonzero entries in \boldsymbol{a_l}^{-, r} correspond to the arcs outgoing from \mathbf{v}_l so the value of \boldsymbol{a_k}^{+, r}\cdot \boldsymbol{a_l}^{-, r} is the number of nonzero entries occurring at the same places in both vectors and thus the number of arcs from \mathbf{v}_l to \mathbf{v}_k .

    The adjacency matrix \mathsf{A}(L(G)) is determined as (A^-)^T A^+ . The entries of this product are given by \boldsymbol{a_k}^{-, c}\cdot \boldsymbol{a_l}^{+, c} . Since each column has only one nonzero entry (equal to 1), the product will be either 0 or 1. It is 1 if and only if there is i (exactly one) such that the entry 1 appears as the i -th coordinate of both \boldsymbol{a_k}^{-, c} and \boldsymbol{a_l}^{+, c}. Now, by construction, a_{ik}^{-} = 1 if and only if k \in V^{out}_i and a_{il}^{+} = 1 if and only if l \in V^{in}_j, where the correspondence between j and i is determined by (A.2). This is equivalent to \mathsf{a}_{kl} = 1 .

    Remark 5. Assume that \mathsf{A} has k zero rows and l zero columns. We cannot identify the numbers of sinks and sources from \mathsf{A} without additional information. Above, we lumped all sources and all sinks into one source and one sink, respectively, but sometimes we require more flexibility. As we know, the k zero rows in \mathsf{A} become k zero columns in \mathsf{A}^- associated with the arcs outgoing from sources. In a similar way, the l zero columns in \mathsf{A} remain l zero columns in \mathsf{A}^+ associated with the arcs incoming to sinks. We can group these arcs in an arbitrary way, with each group corresponding to a source or a sink, respectively. Assume we wish to have \bar k sources and \bar l sinks. Then, we build the corresponding matrix A^+ by adding \bar k-1 zero rows for the sources to \mathsf{A}^+ and \bar l rows corresponding to sinks, which will consist of zeroes everywhere apart from the columns that were zero columns in \mathsf{A}^+ ; in these columns we put 1s in such a way that each column contains only one nonzero entry (and zeroes elsewhere). Then, columns having 1 in a particular row will represent the arcs incoming to a given sink. In exactly the same way we extend \mathsf{A}^- , by creating \bar l-1 zero rows for the sinks and \bar k rows for the sources. In this way, we construct the following incoming and outgoing incidence matrices, respectively, A^+ and A^- that, by a suitable permutation of columns, can be written as

    \bar A^+ = A^+P, \qquad \bar A^- = A^-P,

    where P is the required permutation matrix and, in both cases, the first group of columns has indices corresponding to V^{in}_i, 1\leq i\leq n (resp. V^{out}_j, 1\leq j\leq n ), the second group corresponds to the arcs incoming from the sources to the transient vertices, the third group combines arcs connecting sources and sinks and the last group corresponds to the sinks fed by the transient vertices. We observe that the number of columns in each group in A^- and A^+ is the same. Since

    \bar A^+(\bar A^-)^T = (A^+P)(A^-P)^T = A^+PP^T(A^-)^T = A^+A^-,

    as P^T = P^{-1} , see [18, p. 140], for such a digraph G we have

    (A.4)

    Example 4. Consider the networks G_1 and G_2 presented in Fig. 5. We observe that the grouping of sources and sinks does not affect the line graph, L(G_1) = L(G_2), see Fig. 6. To illustrate the discussion above, we have

    \begin{equation} \mathsf{A} = \left(\begin{array}{ccccccc}0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 1&1&0&1&0&0&0\\ 0&0&1&0&1&0&0\\ 0&0&0&0&0&0&0\\ 0&0&1&0&1&0&0\\ 0&0&0&0&0&0&0\end{array}\right). \end{equation} (A.5)
    Figure 5. Multi digraphs G_1 with three sources and two sinks and G_2 with all sources and all sinks grouped into a single source and a single sink.

    Figure 6. The line digraph for both G_1 and G_2.

    Then,

    \begin{equation} \mathsf{A}^+ = \left(\begin{array}{ccccccc} 1&1&0&1&0&0&0\\ 0&0&1&0&1&0&0\\ 0&0&0&0&0&0&0\end{array}\right) \end{equation} (A.6)

    and we see that there are two transient (internal) vertices \mathbf{v}_1 and \mathbf{v}_2 , with arcs \boldsymbol{e_1}, \boldsymbol{e_2} and \boldsymbol{e_4} incoming to \mathbf{v}_1 and arcs \boldsymbol{e_3} and \boldsymbol{e_5} incoming to \mathbf{v}_2. The last row in \mathsf{A}^+ corresponds to source(s) with outgoing arcs \boldsymbol{e_1}, \boldsymbol{e_2}, \boldsymbol{e_5} and \boldsymbol{e_7} . We also note that the zero columns in \mathsf{A}^+ correspond to arcs \boldsymbol{e_6} and \boldsymbol{e_7} that are incoming to sinks. To build \mathsf{A}^- , we first collapse the identical columns of \mathsf{A} and take the transpose. We see from \mathsf{A} that the first row of the transpose corresponds to the incoming arcs \boldsymbol{e_1}, \boldsymbol{e_2} and \boldsymbol{e_4} and thus also to vertex \mathbf{v}_1 of \mathsf{A}^+. Hence, there is no need to re-order the rows and so (A.3) gives

    \begin{equation} \mathsf{A}^- = \left(\begin{array}{ccccccc} 0&0&1&0&0&0&0\\ 0&0&0&1&0&1&0\\ 0&0&0&0&0&0&0\end{array}\right). \end{equation} (A.7)

    The last row corresponds to sinks and the zero columns inform us that arcs \boldsymbol{e_1}, \boldsymbol{e_2}, \boldsymbol{e_5} and \boldsymbol{e_7} emanate from sources.
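    The collapsing step (A.3) can be mimicked in a few lines of Python (helper name is ours; 0-based lists). Note that, as observed above, in this example the first-appearance order already matches between rows and columns, so no re-ordering via (A.2) is needed; in general the rows of \mathsf{A}^- must be matched to those of \mathsf{A}^+ first:

```python
def collapse(vectors):
    """Keep one representative per group of equal vectors, zero vectors
    moved last (first-appearance order otherwise), as in (A.3)."""
    reps = []
    for v in vectors:
        if v not in reps:
            reps.append(v)
    return [v for v in reps if any(v)] + [v for v in reps if not any(v)]

# The matrix A of (A.5)
A = [[0,0,0,0,0,0,0], [0,0,0,0,0,0,0], [1,1,0,1,0,0,0], [0,0,1,0,1,0,0],
     [0,0,0,0,0,0,0], [0,0,1,0,1,0,0], [0,0,0,0,0,0,0]]

A_plus  = collapse([r[:] for r in A])             # collapsed rows, (A.6)
A_minus = collapse([list(c) for c in zip(*A)])    # collapsed columns
# transposed, (A.7)
for row in A_plus:
    print(row)
for row in A_minus:
    print(row)
```

    The two printed matrices reproduce (A.6) and (A.7).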

    If we want to reconstruct the original graph with one source and one sink, then

    A^+ = \left(\begin{array}{ccccccc} 1&1&0&1&0&0&0\\ 0&0&1&0&1&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&1&1\end{array}\right), \qquad A^- = \left(\begin{array}{ccccccc} 0&0&1&0&0&0&0\\ 0&0&0&1&0&1&0\\ 1&1&0&0&1&0&1\\ 0&0&0&0&0&0&0\end{array}\right)

    and

    A^+(A^-)^T = \left(\begin{array}{cccc} 0&1&2&0\\ 1&0&1&0\\ 0&0&0&0\\ 0&1&1&0\end{array}\right),

    which describes the right multi digraph in Fig. 5. On the other hand, we can consider two sinks (maximum number, as there are two zero columns in \mathsf{A} ) and, say, three sources. Then,

    A^+ = \left(\begin{array}{ccccccc} 1&1&0&1&0&0&0\\ 0&0&1&0&1&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1\\ 0&0&0&0&0&1&0\end{array}\right), \qquad A^- = \left(\begin{array}{ccccccc} 0&0&1&0&0&0&0\\ 0&0&0&1&0&1&0\\ 1&0&0&0&0&0&1\\ 0&1&0&0&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\end{array}\right)

    and

    A^+(A^-)^T = \left(\begin{array}{ccccccc} 0&1&1&1&0&0&0\\ 1&0&0&0&1&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&1&0&0&0&0&0\end{array}\right)

    which describes the left multi digraph in Fig. 5.

    It is easily seen that both digraphs have the same line digraph, shown in Fig. 6, whose adjacency matrix is \mathsf{A} .
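    This invariance can be verified numerically. A small Python sketch (0-based lists, helper names are ours) multiplies the two incidence pairs given above and compares the resulting line digraph adjacency matrices (A^-)^T A^+ with \mathsf{A} of (A.5):

```python
def transpose(M):
    return [list(c) for c in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

# One source / one sink reconstruction
A1_plus  = [[1,1,0,1,0,0,0], [0,0,1,0,1,0,0], [0,0,0,0,0,0,0], [0,0,0,0,0,1,1]]
A1_minus = [[0,0,1,0,0,0,0], [0,0,0,1,0,1,0], [1,1,0,0,1,0,1], [0,0,0,0,0,0,0]]

# Three sources / two sinks reconstruction
A2_plus  = [[1,1,0,1,0,0,0], [0,0,1,0,1,0,0], [0,0,0,0,0,0,0], [0,0,0,0,0,0,0],
            [0,0,0,0,0,0,0], [0,0,0,0,0,0,1], [0,0,0,0,0,1,0]]
A2_minus = [[0,0,1,0,0,0,0], [0,0,0,1,0,1,0], [1,0,0,0,0,0,1], [0,1,0,0,0,0,0],
            [0,0,0,0,1,0,0], [0,0,0,0,0,0,0], [0,0,0,0,0,0,0]]

# The matrix A of (A.5)
A_line = [[0,0,0,0,0,0,0], [0,0,0,0,0,0,0], [1,1,0,1,0,0,0], [0,0,1,0,1,0,0],
          [0,0,0,0,0,0,0], [0,0,1,0,1,0,0], [0,0,0,0,0,0,0]]

L1 = matmul(transpose(A1_minus), A1_plus)
L2 = matmul(transpose(A2_minus), A2_plus)
print(L1 == A_line and L2 == A_line)  # True: both groupings give the same
                                      # line digraph adjacency matrix
```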

    [1] Venter JC, Adams MD, Myers EW, et al. (2001) The sequence of the human genome. Science 291: 1304–1351. doi: 10.1126/science.1058040
    [2] Lander ES, Linton LM, Birren B, et al. (2001) Initial sequencing and analysis of the human genome. Nature 409: 860–921. doi: 10.1038/35057062
    [3] Fredriksson R, Lagerstrom MC, Lundin LG, et al. (2003) The G-protein-coupled receptors in the human genome form five main families. Phylogenetic analysis, paralogon groups, and fingerprints. Mol Pharmacol 63: 1256–1272.
    [4] Pierce KL, Premont RT, Lefkowitz RJ (2002) Seven-transmembrane receptors. Nat Rev Mol Cell Biol 3: 639–650. doi: 10.1038/nrm908
    [5] Lagerstrom MC, Schioth HB (2008) Structural diversity of G protein-coupled receptors and significance for drug discovery. Nat Rev Drug Discov 7: 339–357. doi: 10.1038/nrd2518
    [6] Chien EY, Liu W, Zhao Q, et al. (2010) Structure of the human dopamine D3 receptor in complex with a D2/D3 selective antagonist. Science 330: 1091–1095. doi: 10.1126/science.1197410
    [7] Yang J, Villar VA, Armando I, et al. (2016) G protein-coupled receptor kinases: Crucial regulators of blood pressure. J Am Heart Assoc 5: e003519. doi: 10.1161/JAHA.116.003519
    [8] Bar-Shavit R, Maoz M, Kancharla A, et al. (2016) G protein-coupled receptors in cancer. Int J Mol Sci 17: 1320. doi: 10.3390/ijms17081320
    [9] Flower DR (1999) Modelling G-protein-coupled receptors for drug design. Biochim Biophys Acta 1422: 207–234. doi: 10.1016/S0304-4157(99)00006-4
    [10] Katritch V, Cherezov V, Stevens RC (2012) Diversity and modularity of G protein-coupled receptor structures. Trends Pharmacol Sci 33: 17–27. doi: 10.1016/j.tips.2011.09.003
    [11] Gether U, Ballesteros JA, Seifert R, et al. (1997) Structural instability of a constitutively active G protein-coupled receptor. Agonist-independent activation due to conformational flexibility. J Biol Chem 272: 2587–2590.
    [12] Seifert R, Wenzel-Seifert K (2002) Constitutive activity of G-protein-coupled receptors: cause of disease and common property of wild-type receptors. Naunyn Schmiedebergs Arch Pharmacol 366: 381–416. doi: 10.1007/s00210-002-0588-0
    [13] Nelson G, Hoon MA, Chandrashekar J, et al. (2001) Mammalian sweet taste receptors. Cell 106: 381–390. doi: 10.1016/S0092-8674(01)00451-2
    [14] Nelson G, Chandrashekar J, Hoon MA, et al. (2002) An amino-acid taste receptor. Nature 416: 199–202. doi: 10.1038/nature726
    [15] Milligan G (2004) G protein-coupled receptor dimerization: Function and ligand pharmacology. Mol Pharmacol 66.
    [16] Syrovatkina V, Alegre KO, Dey R, et al. (2016) Regulation, signaling, and physiological functions of G-proteins. J Mol Biol 428: 3850–3868. doi: 10.1016/j.jmb.2016.08.002
    [17] Wess J (1997) G-protein-coupled receptors: molecular mechanisms involved in receptor activation and selectivity of G-protein recognition. FASEB J 11: 346–354.
    [18] Rasmussen SG, DeVree BT, Zou Y, et al. (2011) Crystal structure of the beta2 adrenergic receptor-Gs protein complex. Nature 477: 549–555. doi: 10.1038/nature10361
    [19] Gilman AG (1987) G proteins: transducers of receptor-generated signals. Annu Rev Biochem 56: 615–649. doi: 10.1146/annurev.bi.56.070187.003151
    [20] Nie J, Lewis DL (2001) Structural domains of the CB1 cannabinoid receptor that contribute to constitutive activity and G-protein sequestration. J Neurosci 21: 8758–8764.
    [21] Kobilka BK, Deupi X (2007) Conformational complexity of G-protein-coupled receptors. Trends Pharmacol Sci 28: 397–406. doi: 10.1016/j.tips.2007.06.003
    [22] Ross EM, Wilkie TM (2000) GTPase-activating proteins for heterotrimeric G proteins: regulators of G protein signaling (RGS) and RGS-like proteins. Annu Rev Biochem 69: 795–827. doi: 10.1146/annurev.biochem.69.1.795
    [23] Claing A, Laporte SA, Caron MG, et al. (2002) Endocytosis of G protein-coupled receptors: roles of G protein-coupled receptor kinases and beta-arrestin proteins. Prog Neurobiol 66: 61–79. doi: 10.1016/S0301-0082(01)00023-5
    [24] Okada T, Le TI, Fox BA, et al. (2000) X-Ray diffraction analysis of three-dimensional crystals of bovine rhodopsin obtained from mixed micelles. J Struct Biol 130: 73–80. doi: 10.1006/jsbi.1999.4209
    [25] Palczewski K, Kumasaka T, Hori T, et al. (2000) Crystal structure of rhodopsin: A G protein-coupled receptor. Science 289: 739–745. doi: 10.1126/science.289.5480.739
    [26] Cherezov V, Rosenbaum DM, Hanson MA, et al. (2007) High resolution crystal structure of an engineered human beta2-adrenergic G protein-coupled receptor. Science 318: 1258–1265. doi: 10.1126/science.1150577
    [27] Rasmussen SG, Choi HJ, Rosenbaum DM, et al. (2007) Crystal structure of the human beta2 adrenergic G-protein-coupled receptor. Nature 450: 383–387. doi: 10.1038/nature06325
    [28] Stroud RM (2011) New tools in membrane protein determination. F1000 Biol Rep 3: 8.
    [29] Ujwal R, Bowie JU (2011) Crystallizing membrane proteins using lipidic bicelles. Methods 55: 337–341. doi: 10.1016/j.ymeth.2011.09.020
    [30] Denisov IG, Sligar SG (2016) Nanodiscs for structural and functional studies of membrane proteins. Nat Struct Mol Biol 23: 481–486. doi: 10.1038/nsmb.3195
    [31] Xiang J, Chun E, Liu C, et al. (2016) Successful strategies to determine high-resolution structures of GPCRs. Trends Pharmacol Sci 37: 1055–1069. doi: 10.1016/j.tips.2016.09.009
    [32] Rawlings AE (2016) Membrane proteins: always an insoluble problem? Biochem Soc Trans 44: 790–795. doi: 10.1042/BST20160025
    [33] Neubig RR, Siderovski DP (2002) Regulators of G-protein signalling as new central nervous system drug targets. Nat Rev Drug Discov 1: 187–197. doi: 10.1038/nrd747
    [34] Bayburt TH, Grinkova YV, Sligar SG (2002) Self-assembly of discoidal phospholipid bilayer nanoparticles with membrane scaffold proteins. Nano Lett 2: 853–856. doi: 10.1021/nl025623k
    [35] Delmar JA, Bolla JR, Su CC, et al. (2015) Crystallization of membrane proteins by vapor diffusion. Methods Enzymol 557: 363–392. doi: 10.1016/bs.mie.2014.12.018
    [36] Jaakola VP, Griffith MT, Hanson MA, et al. (2008) The 2.6 angstrom crystal structure of a human A2A adenosine receptor bound to an antagonist. Science 322: 1211–1217.
    [37] Dore AS, Okrasa K, Patel JC, et al. (2014) Structure of class C GPCR metabotropic glutamate receptor 5 transmembrane domain. Nature 511: 557–562. doi: 10.1038/nature13396
    [38] Thompson AA, Liu W, Chun E, et al. (2012) Structure of the nociceptin/orphanin FQ receptor in complex with a peptide mimetic. Nature 485: 395–399. doi: 10.1038/nature11085
    [39] Wang C, Wu H, Katritch V, et al. (2013) Structure of the human smoothened receptor bound to an antitumour agent. Nature 497: 338–343. doi: 10.1038/nature12167
    [40] Yin J, Babaoglu K, Brautigam CA, et al. (2016) Structure and ligand-binding mechanism of the human OX1 and OX2 orexin receptors. Nat Struct Mol Biol 23: 293–299. doi: 10.1038/nsmb.3183
    [41] Zhang K, Zhang J, Gao Z, et al. (2014) Structure of the human P2Y12 receptor in complex with an antithrombotic drug. Nature 509.
    [42] Isberg V, Mordalski S, Munk C, et al. (2016) GPCRdb: an information system for G protein-coupled receptors. Nucleic Acids Res.
    [43] Berman HM, Westbrook J, Feng Z, et al. (2000) The protein data bank. Nucl Acids Res 28: 235–242. doi: 10.1093/nar/28.1.235
    [44] Teller DC, Okada T, Behnke CA, et al. (2001) Advances in determination of a high-resolution three-dimensional structure of rhodopsin, a model of G-protein-coupled receptors (GPCRs). Biochemistry 40: 7761–7772. doi: 10.1021/bi0155091
    [45] Yeagle PL, Choi G, Albert AD (2001) Studies on the structure of the G-protein-coupled receptor rhodopsin including the putative G-protein binding site in unactivated and activated forms. Biochemistry 40: 11932–11937. doi: 10.1021/bi015543f
    [46] Okada T, Fujiyoshi Y, Silow M, et al. (2002) Functional role of internal water molecules in rhodopsin revealed by X-ray crystallography. Proc Natl Acad Sci USA 99: 5982–5987. doi: 10.1073/pnas.082666399
    [47] Choi G, Landin J, Galan JF, et al. (2002) Structural studies of metarhodopsin II, the activated form of the G-protein coupled receptor, rhodopsin. Biochemistry 41: 7318–7324. doi: 10.1021/bi025507w
    [48] Li J, Edwards PC, Burghammer M, et al. (2004) Structure of bovine rhodopsin in a trigonal crystal form. J Mol Biol 343: 1409–1438. doi: 10.1016/j.jmb.2004.08.090
    [49] Okada T, Sugihara M, Bondar AN, et al. (2004) The retinal conformation and its environment in rhodopsin in light of a new 2.2 A crystal structure. J Mol Biol 342: 571–583. doi: 10.1016/j.jmb.2004.07.044
    [50] Nakamichi H, Okada T (2006) Crystallographic analysis of primary visual photochemistry. Angew Chem Int Ed Engl 45: 4270–4273. doi: 10.1002/anie.200600595
    [51] Nakamichi H, Okada T (2006) Local peptide movement in the photoreaction intermediate of rhodopsin. Proc Natl Acad Sci USA 103: 12729–12734. doi: 10.1073/pnas.0601765103
    [52] Salom D, Lodowski DT, Stenkamp RE, et al. (2006) Crystal structure of a photoactivated deprotonated intermediate of rhodopsin. Proc Natl Acad Sci USA 103: 16123–16128. doi: 10.1073/pnas.0608022103
    [53] Standfuss J, Xie G, Edwards PC, et al. (2007) Crystal structure of a thermally stable rhodopsin mutant. J Mol Biol 372: 1179–1188. doi: 10.1016/j.jmb.2007.03.007
    [54] Nakamichi H, Buss V, Okada T (2007) Photoisomerization mechanism of rhodopsin and 9-cis-rhodopsin revealed by x-ray crystallography. Biophys J 92: L106–L108. doi: 10.1529/biophysj.107.108225
    [55] Stenkamp RE (2008) Alternative models for two crystal structures of bovine rhodopsin. Acta Crystallogr D 64: 902–904.
    [56] Park JH, Scheerer P, Hofmann KP, et al. (2008) Crystal structure of the ligand-free G-protein-coupled receptor opsin. Nature 454: 183–187. doi: 10.1038/nature07063
    [57] Scheerer P, Park JH, Hildebrand PW, et al. (2008) Crystal structure of opsin in its G-protein-interacting conformation. Nature 455: 497–502. doi: 10.1038/nature07330
    [58] Standfuss J, Edwards PC, D'Antona A, et al. (2011) The structural basis of agonist-induced activation in constitutively active rhodopsin. Nature 471: 656–660. doi: 10.1038/nature09795
    [59] Makino CL, Riley CK, Looney J, et al. (2010) Binding of more than one retinoid to visual opsins. Biophys J 99: 2366–2373. doi: 10.1016/j.bpj.2010.08.003
    [60] Choe HW, Kim YJ, Park JH, et al. (2011) Crystal structure of metarhodopsin II. Nature 471: 651–655. doi: 10.1038/nature09789
    [61] Deupi X, Edwards P, Singhal A, et al. (2012) Stabilized G protein binding site in the structure of constitutively active metarhodopsin-II. Proc Natl Acad Sci USA 109: 119–124. doi: 10.1073/pnas.1114089108
    [62] Park JH, Morizumi T, Li Y, et al. (2013) Opsin, a structural model for olfactory receptors? Angew Chem Int Ed Engl 52: 11021–11024. doi: 10.1002/anie.201302374
    [63] Singhal A, Ostermaier MK, Vishnivetskiy SA, et al. (2013) Insights into congenital stationary night blindness based on the structure of G90D rhodopsin. EMBO Rep 14: 520–526. doi: 10.1038/embor.2013.44
    [64] Szczepek M, Beyriere F, Hofmann KP, et al. (2014) Crystal structure of a common GPCR-binding interface for G protein and arrestin. Nat Commun 5: 4801. doi: 10.1038/ncomms5801
    [65] Blankenship E, Vahedi-Faridi A, Lodowski DT (2015) The high-resolution structure of activated opsin reveals a conserved solvent network in the transmembrane region essential for activation. Structure 23: 2358–2364. doi: 10.1016/j.str.2015.09.015
    [66] Kang Y, Zhou XE, Gao X, et al. (2015) Crystal structure of rhodopsin bound to arrestin by femtosecond X-ray laser. Nature 523: 561–567. doi: 10.1038/nature14656
    [67] Singhal A, Guo Y, Matkovic M, et al. (2016) Structural role of the T94I rhodopsin mutation in congenital stationary night blindness. EMBO Rep 17: 1431–1440. doi: 10.15252/embr.201642671
    [68] Gulati S, Jastrzebska B, Banerjee S, et al. (2017) Photocyclic behavior of rhodopsin induced by an atypical isomerization mechanism. Proc Natl Acad Sci USA 114: E2608–E2615. doi: 10.1073/pnas.1617446114
    [69] Warne T, Serrano-Vega MJ, Baker JG, et al. (2008) Structure of a beta1-adrenergic G-protein-coupled receptor. Nature 454: 486–491. doi: 10.1038/nature07101
    [70] Warne T, Moukhametzianov R, Baker JG, et al. (2011) The structural basis for agonist and partial agonist action on a beta(1)-adrenergic receptor. Nature 469: 241–244. doi: 10.1038/nature09746
    [71] Moukhametzianov R, Warne T, Edwards PC, et al. (2011) Two distinct conformations of helix 6 observed in antagonist-bound structures of a beta-1 adrenergic receptor. Proc Natl Acad Sci USA 108: 8228–8232. doi: 10.1073/pnas.1100185108
    [72] Christopher JA, Brown J, Dore AS, et al. (2013) Biophysical fragment screening of the beta1-adrenergic receptor: identification of high affinity arylpiperazine leads using structure-based drug design. J Med Chem 56: 3446–3455. doi: 10.1021/jm400140q
    [73] Warne T, Edwards PC, Leslie AG, et al. (2012) Crystal structures of a stabilized beta1-adrenoceptor bound to the biased agonists bucindolol and carvedilol. Structure 20: 841–849. doi: 10.1016/j.str.2012.03.014
    [74] Miller-Gallacher JL, Nehme R, Warne T, et al. (2014) The 2.1 A resolution structure of cyanopindolol-bound beta1-adrenoceptor identifies an intramembrane Na+ ion that stabilises the ligand-free receptor. PLoS One 9: e92727.
    [75] Huang J, Chen S, Zhang JJ, et al. (2013) Crystal structure of oligomeric beta1-adrenergic G protein-coupled receptors in ligand-free basal state. Nat Struct Mol Biol 20: 419–425. doi: 10.1038/nsmb.2504
    [76] Sato T, Baker J, Warne T, et al. (2015) Pharmacological analysis and structure determination of 7-Methylcyanopindolol-bound beta1-adrenergic receptor. Mol Pharmacol 88: 1024–1034. doi: 10.1124/mol.115.101030
    [77] Leslie AG, Warne T, Tate CG (2015) Ligand occupancy in crystal structure of beta1-adrenergic G protein-coupled receptor. Nat Struct Mol Biol 22: 941–942. doi: 10.1038/nsmb.3130
    [78] Hanson MA, Cherezov V, Griffith MT, et al. (2008) A specific cholesterol binding site is established by the 2.8 A structure of the human beta2-adrenergic receptor. Structure 16: 897–905.
    [79] Bokoch MP, Zou Y, Rasmussen SG, et al. (2010) Ligand-specific regulation of the extracellular surface of a G-protein-coupled receptor. Nature 463: 108–112. doi: 10.1038/nature08650
    [80] Wacker D, Fenalti G, Brown MA, et al. (2010) Conserved binding mode of human beta2 adrenergic receptor inverse agonists and antagonist revealed by X-ray crystallography. J Am Chem Soc 132: 11443–11445. doi: 10.1021/ja105108q
    [81] Rasmussen SG, Choi HJ, Fung JJ, et al. (2011) Structure of a nanobody-stabilized active state of the beta(2) adrenoceptor. Nature 469: 175–180. doi: 10.1038/nature09648
    [82] Rosenbaum DM, Zhang C, Lyons JA, et al. (2011) Structure and function of an irreversible agonist-beta(2) adrenoceptor complex. Nature 469: 236–240. doi: 10.1038/nature09665
    [83] Zou Y, Weis WI, Kobilka BK (2012) N-terminal T4 lysozyme fusion facilitates crystallization of a G protein coupled receptor. PLoS One 7.
    [84] Ring AM, Manglik A, Kruse AC, et al. (2013) Adrenaline-activated structure of beta2-adrenoreceptor stabilized by an engineered nanobody. Nature 502: 575–579. doi: 10.1038/nature12572
    [85] Weichert D, Kruse AC, Manglik A, et al. (2014) Covalent agonists for studying G protein-coupled receptor activation. Proc Natl Acad Sci USA 111: 10744–10748. doi: 10.1073/pnas.1410415111
    [86] Huang CY, Olieric V, Ma P, et al. (2016) In meso in situ serial X-ray crystallography of soluble and membrane proteins at cryogenic temperatures. Acta Crystallogr D 72: 93–112. doi: 10.1107/S2059798315021683
    [87] Staus DP, Strachan RT, Manglik A, et al. (2016) Allosteric nanobodies reveal the dynamic range and diverse mechanisms of G-protein-coupled receptor activation. Nature 535: 448–452. doi: 10.1038/nature18636
    [88] Shimamura T, Shiroishi M, Weyand S, et al. (2011) Structure of the human histamine H1 receptor complex with doxepin. Nature 475: 65–70. doi: 10.1038/nature10236
    [89] Wang C, Jiang Y, Ma J, et al. (2013) Structural basis for molecular recognition at serotonin receptors. Science 340: 610–614. doi: 10.1126/science.1232807
    [90] Wacker D, Wang C, Katritch V, et al. (2013) Structural features for functional selectivity at serotonin receptors. Science 340: 615–619. doi: 10.1126/science.1232808
    [91] Liu W, Wacker D, Gati C, et al. (2013) Serial femtosecond crystallography of G protein-coupled receptors. Science 342: 1521–1524. doi: 10.1126/science.1244142
    [92] Wacker D, Wang S, McCorvy JD, et al. (2017) Crystal structure of an LSD-bound human serotonin receptor. Cell 168: 377–389. doi: 10.1016/j.cell.2016.12.033
    [93] Thal DM, Sun B, Feng D, et al. (2016) Crystal structures of the M1 and M4 muscarinic acetylcholine receptors. Nature 531: 335–340. doi: 10.1038/nature17188
    [94] Haga K, Kruse AC, Asada H, et al. (2012) Structure of the human M2 muscarinic acetylcholine receptor bound to an antagonist. Nature 482: 547–551. doi: 10.1038/nature10753
    [95] Kruse AC, Ring AM, Manglik A, et al. (2013) Activation and allosteric modulation of a muscarinic acetylcholine receptor. Nature 504: 101–106. doi: 10.1038/nature12735
    [96] Kruse AC, Hu J, Pan AC, et al. (2012) Structure and dynamics of the M3 muscarinic acetylcholine receptor. Nature 482: 552–556. doi: 10.1038/nature10867
    [97] Thorsen TS, Matt R, Weis WI, et al. (2014) Modified T4 lysozyme fusion proteins facilitate G protein-coupled receptor crystallogenesis. Structure 22: 1657–1664. doi: 10.1016/j.str.2014.08.022
    [98] Glukhova A, Thal DM, Nguyen AT, et al. (2017) Structure of the adenosine A1 receptor reveals the basis for subtype selectivity. Cell 168: 867–877. doi: 10.1016/j.cell.2017.01.042
    [99] Dore AS, Robertson N, Errey JC, et al. (2011) Structure of the adenosine A(2A) receptor in complex with ZM241385 and the xanthines XAC and caffeine. Structure 19: 1283–1293. doi: 10.1016/j.str.2011.06.014
    [100] Xu F, Wu H, Katritch V, et al. (2011) Structure of an agonist-bound human A2A adenosine receptor. Science 332: 322–327. doi: 10.1126/science.1202793
    [101] Lebon G, Warne T, Edwards PC, et al. (2011) Agonist-bound adenosine A2A receptor structures reveal common features of GPCR activation. Nature 474: 521–525. doi: 10.1038/nature10136
    [102] Hino T, Arakawa T, Iwanari H, et al. (2012) G-protein-coupled receptor inactivation by an allosteric inverse-agonist antibody. Nature 482: 237–240.
    [103] Congreve M, Andrews SP, Dore AS, et al. (2012) Discovery of 1,2,4-triazine derivatives as adenosine A(2A) antagonists using structure based drug design. J Med Chem 55: 1898–1903. doi: 10.1021/jm201376w
    [104] Liu W, Chun E, Thompson AA, et al. (2012) Structural basis for allosteric regulation of GPCRs by sodium ions. Science 337: 232–236. doi: 10.1126/science.1219218
    [105] Lebon G, Edwards PC, Leslie AG, et al. (2015) Molecular determinants of CGS21680 binding to the human adenosine A2A receptor. Mol Pharmacol 87: 907–915. doi: 10.1124/mol.114.097360
    [106] Segala E, Guo D, Cheng RK, et al. (2016) Controlling the dissociation of ligands from the adenosine A2A receptor through modulation of salt bridge strength. J Med Chem 59: 6470–6479. doi: 10.1021/acs.jmedchem.6b00653
    [107] Batyuk A, Galli L, Ishchenko A, et al. (2016) Native phasing of x-ray free-electron laser data for a G protein-coupled receptor. Sci Adv 2: e1600292. doi: 10.1126/sciadv.1600292
    [108] Carpenter B, Nehme R, Warne T, et al. (2016) Structure of the adenosine A(2A) receptor bound to an engineered G protein. Nature 536: 104–107. doi: 10.1038/nature18966
    [109] Sun B, Bachhawat P, Chu ML, et al. (2017) Crystal structure of the adenosine A2A receptor bound to an antagonist reveals a potential allosteric pocket. Proc Natl Acad Sci USA 114: 2066–2071. doi: 10.1073/pnas.1621423114
    [110] Hanson MA, Roth CB, Jo E, et al. (2012) Crystal structure of a lipid G protein-coupled receptor. Science 335: 851–855. doi: 10.1126/science.1215904
    [111] Chrencik JE, Roth CB, Terakado M, et al. (2015) Crystal structure of antagonist bound human lysophosphatidic acid receptor 1. Cell 161: 1633–1643. doi: 10.1016/j.cell.2015.06.002
    [112] Hua T, Vemuri K, Pu M, et al. (2016) Crystal structure of the human cannabinoid receptor CB1. Cell 167: 750–762. doi: 10.1016/j.cell.2016.10.004
    [113] Shao Z, Yin J, Chapman K, et al. (2016) High-resolution crystal structure of the human CB1 cannabinoid receptor. Nature.
    [114] White JF, Noinaj N, Shibata Y, et al. (2012) Structure of the agonist-bound neurotensin receptor. Nature 490: 508–513. doi: 10.1038/nature11558
    [115] Egloff P, Hillenbrand M, Klenk C, et al. (2014) Structure of signaling-competent neurotensin receptor 1 obtained by directed evolution in Escherichia coli. Proc Natl Acad Sci USA 111: E655–E662. doi: 10.1073/pnas.1317903111
    [116] Krumm BE, White JF, Shah P, et al. (2015) Structural prerequisites for G-protein activation by the neurotensin receptor. Nat Commun 6: 7895. doi: 10.1038/ncomms8895
    [117] Krumm BE, Lee S, Bhattacharya S, et al. (2016) Structure and dynamics of a constitutively active neurotensin receptor. Sci Rep 6: 38564. doi: 10.1038/srep38564
    [118] Yin J, Mobarec JC, Kolb P, et al. (2015) Crystal structure of the human OX2 orexin receptor bound to the insomnia drug suvorexant. Nature 519: 247–250.
    [119] Gayen A, Goswami SK, Mukhopadhyay C (2011) NMR evidence of GM1-induced conformational change of Substance P using isotropic bicelles. Biochim Biophys Acta 1808: 127–139. doi: 10.1016/j.bbamem.2010.09.023
    [120] Shihoya W, Nishizawa T, Okuta A, et al. (2016) Activation mechanism of endothelin ETB receptor by endothelin-1. Nature 537: 363–368. doi: 10.1038/nature19319
    [121] Park SH, Das BB, Casagrande F, et al. (2012) Structure of the chemokine receptor CXCR1 in phospholipid bilayers. Nature 491: 779–783.
    [122] Zheng Y, Qin L, Zacarias NV, et al. (2016) Structure of CC chemokine receptor 2 with orthosteric and allosteric antagonists. Nature 540: 458–461. doi: 10.1038/nature20605
    [123] Wu B, Chien EY, Mol CD, et al. (2010) Structures of the CXCR4 chemokine GPCR with small-molecule and cyclic peptide antagonists. Science 330: 1066–1071. doi: 10.1126/science.1194396
    [124] Qin L, Kufareva I, Holden LG, et al. (2015) Structural biology. Crystal structure of the chemokine receptor CXCR4 in complex with a viral chemokine. Science 347: 1117–1122.
    [125] Tan Q, Zhu Y, Li J, et al. (2013) Structure of the CCR5 chemokine receptor-HIV entry inhibitor maraviroc complex. Science 341: 1387–1390. doi: 10.1126/science.1241475
    [126] Oswald C, Rappas M, Kean J, et al. (2016) Intracellular allosteric antagonism of the CCR9 receptor. Nature 540: 462–465. doi: 10.1038/nature20606
    [127] Miller RL, Thompson AA, Trapella C, et al. (2015) The importance of ligand-receptor conformational pairs in stabilization: Spotlight on the N/OFQ G protein-coupled receptor. Structure 23: 2291–2299. doi: 10.1016/j.str.2015.07.024
    [128] Wu H, Wacker D, Mileni M, et al. (2012) Structure of the human kappa-opioid receptor in complex with JDTic. Nature 485: 327–332. doi: 10.1038/nature10939
    [129] Manglik A, Kruse AC, Kobilka TS, et al. (2012) Crystal structure of the mu-opioid receptor bound to a morphinan antagonist. Nature 485: 321–326. doi: 10.1038/nature10954
    [130] Huang W, Manglik A, Venkatakrishnan AJ, et al. (2015) Structural insights into micro-opioid receptor activation. Nature 524: 315–321. doi: 10.1038/nature14886
    [131] Granier S, Manglik A, Kruse AC, et al. (2012) Structure of the delta-opioid receptor bound to naltrindole. Nature 485: 400–404. doi: 10.1038/nature11111
    [132] Fenalti G, Giguere PM, Katritch V, et al. (2014) Molecular control of delta-opioid receptor signaling. Nature 506: 191–196. doi: 10.1038/nature12944
    [133] Fenalti G, Zatsepin NA, Betti C, et al. (2015) Structural basis for bifunctional peptide recognition at human delta-opioid receptor. Nat Struct Mol Biol 22: 265–268. doi: 10.1038/nsmb.2965
    [134] Zhang H, Unal H, Gati C, et al. (2015) Structure of the Angiotensin receptor revealed by serial femtosecond crystallography. Cell 161: 833–844. doi: 10.1016/j.cell.2015.04.011
    [135] Zhang H, Unal H, Desnoyer R, et al. (2015) Structural basis for ligand recognition and functional selectivity at angiotensin receptor. J Biol Chem 290: 29127–29139. doi: 10.1074/jbc.M115.689000
    [136] Zhang H, Han GW, Batyuk A, et al. (2017) Structural basis for selectivity and diversity in angiotensin II receptors. Nature.
    [137] Burg JS, Ingram JR, Venkatakrishnan AJ, et al. (2015) Structural biology. Structural basis for chemokine recognition and activation of a viral G protein-coupled receptor. Science 347: 1113–1117.
    [138] Zhang D, Gao ZG, Zhang K, et al. (2015) Two disparate ligand-binding sites in the human P2Y1 receptor. Nature 520: 317–321. doi: 10.1038/nature14287
    [139] Zhang J, Zhang K, Gao ZG, et al. (2014) Agonist-bound structure of the human P2Y12 receptor. Nature 509: 119–122. doi: 10.1038/nature13288
    [140] Zhang C, Srinivasan Y, Arlow DH, et al. (2012) High-resolution crystal structure of human protease-activated receptor 1. Nature 492: 387–392. doi: 10.1038/nature11701
    [141] Srivastava A, Yano J, Hirozane Y, et al. (2014) High-resolution structure of the human GPR40 receptor bound to allosteric agonist TAK-875. Nature 513: 124–127. doi: 10.1038/nature13494
    [142] Siu FY, He M, de Graaf C, et al. (2013) Structure of the human glucagon class B G-protein-coupled receptor. Nature 499: 444–449. doi: 10.1038/nature12393
    [143] Jazayeri A, Dore AS, Lamb D, et al. (2016) Extra-helical binding site of a glucagon receptor antagonist. Nature 533: 274–277. doi: 10.1038/nature17414
    [144] Hollenstein K, Kean J, Bortolato A, et al. (2013) Structure of class B GPCR corticotropin-releasing factor receptor 1. Nature 499: 438–443. doi: 10.1038/nature12357
    [145] Dore AS, Bortolato A, Hollenstein K, et al. (2017) Decoding corticotropin-releasing factor receptor type 1 crystal structures. Curr Mol Pharmacol: Epub ahead of print. doi: 10.2174/1874467210666170110114727
    [146] Wu H, Wang C, Gregory KJ, et al. (2014) Structure of a class C GPCR metabotropic glutamate receptor 1 bound to an allosteric modulator. Science 344: 58–64. doi: 10.1126/science.1249489
    [147] Christopher JA, Aves SJ, Bennett KA, et al. (2015) Fragment and structure-based drug discovery for a class C GPCR: Discovery of the mGlu5 negative allosteric modulator HTL14242 (3-Chloro-5-[6-(5-fluoropyridin-2-yl)pyrimidin-4-yl]benzonitrile). J Med Chem 58: 6653–6664. doi: 10.1021/acs.jmedchem.5b00892
    [148] Wang C, Wu H, Evron T, et al. (2014) Structural basis for Smoothened receptor modulation and chemoresistance to anticancer drugs. Nat Commun 5: 4355.
    [149] Byrne EF, Sircar R, Miller PS, et al. (2016) Structural basis of Smoothened regulation by its extracellular domains. Nature 535: 517–522. doi: 10.1038/nature18934
    [150] Lundstrom K (2006) Latest development in drug discovery on G protein-coupled receptors. Curr Protein Pept Sci 7: 465–470. doi: 10.2174/138920306778559403
    [151] Shonberg J, Kling RC, Gmeiner P, et al. (2015) GPCR crystal structures: Medicinal chemistry in the pocket. Bioorg Med Chem 23: 3880–3906. doi: 10.1016/j.bmc.2014.12.034
    [152] Attwood TK, Findlay JB (1993) Design of a discriminating fingerprint for G-protein-coupled receptors. Protein Eng 6: 167–176. doi: 10.1093/protein/6.2.167
    [153] Chee MS, Satchwell SC, Preddie E, et al. (1990) Human cytomegalovirus encodes three G protein-coupled receptor homologues. Nature 344: 774–777. doi: 10.1038/344774a0
    [154] Attwood TK, Findlay JB (1994) Fingerprinting G-protein-coupled receptors. Protein Eng 7: 195–203. doi: 10.1093/protein/7.2.195
    [155] Strader CD, Sigal IS, Dixon RA (1989) Structural basis of beta-adrenergic receptor function. FASEB J 3: 1825–1832.
    [156] Liapakis G, Ballesteros JA, Papachristou S, et al. (2000) The forgotten serine. A critical role for Ser-203(5.42) in ligand binding to and activation of the beta 2-adrenergic receptor. J Biol Chem 275: 37779–37788.
    [157] Swaminath G, Xiang Y, Lee TW, et al. (2004) Sequential binding of agonists to the beta2 adrenoceptor. Kinetic evidence for intermediate conformational states. J Biol Chem 279: 686–691.
    [158] Baldwin JM (1994) Structure and function of receptors coupled to G proteins. Curr Opin Cell Biol 6: 180–190. doi: 10.1016/0955-0674(94)90134-1
    [159] Tyndall JD, Sandilya R (2005) GPCR agonists and antagonists in the clinic. Med Chem 1: 405–421. doi: 10.2174/1573406054368675
    [160] Jacoby E, Bouhelal R, Gerspacher M, et al. (2006) The 7 TM G-protein-coupled receptor target family. ChemMedChem 1: 761–782.
    [161] Spiss CK, Maze M (1985) Adrenoreceptors. Anaesthesist 34: 1–10.
    [162] Civelli O, Reinscheid RK, Zhang Y, et al. (2013) G protein-coupled receptor deorphanizations. Annu Rev Pharmacol Toxicol 53: 127–146. doi: 10.1146/annurev-pharmtox-010611-134548
    [163] Barst RJ, Langleben D, Frost A, et al. (2004) Sitaxsentan therapy for pulmonary arterial hypertension. Am J Respir Crit Care Med 169: 441–447. doi: 10.1164/rccm.200307-957OC
    [164] Kotake T, Usami M, Akaza H, et al. (1999) Goserelin acetate with or without antiandrogen or estrogen in the treatment of patients with advanced prostate cancer: a multicenter, randomized, controlled trial in Japan. Zoladex Study Group. Jpn J Clin Oncol 29: 562–570. doi: 10.1093/jjco/29.11.562
    [165] Onuffer JJ, Horuk R (2002) Chemokines, chemokine receptors and small-molecule antagonists: recent developments. Trends Pharmacol Sci 23: 459–467. doi: 10.1016/S0165-6147(02)02064-3
    [166] Fatkenheuer G, Pozniak AL, Johnson MA, et al. (2005) Efficacy of short-term monotherapy with maraviroc, a new CCR5 antagonist, in patients infected with HIV-1. Nat Med 11: 1170–1172. doi: 10.1038/nm1319
    [167] Lu M, Wu B (2016) Structural studies of G protein-coupled receptors. IUBMB Life 68: 894–903. doi: 10.1002/iub.1578
    [168] Strotmann R, Schrock K, Boselt I, et al. (2011) Evolution of GPCR: change and continuity. Mol Cell Endocrinol 331: 170–178. doi: 10.1016/j.mce.2010.07.012
    [169] Ballesteros JA, Weinstein H (1995) Integrated methods for the construction of three-dimensional models and computational probing of structure-function relations in G protein-coupled receptors. Method Neurosci 25: 366–428. doi: 10.1016/S1043-9471(05)80049-7
    [170] Zhang D, Weinstein H (1994) Polarity conserved positions in transmembrane domains of G-protein coupled receptors and bacteriorhodopsin. FEBS Lett 337: 207–212. doi: 10.1016/0014-5793(94)80274-2
    [171] Katritch V, Fenalti G, Abola EE, et al. (2014) Allosteric sodium in class A GPCR signaling. Trends Biochem Sci 39: 233–244. doi: 10.1016/j.tibs.2014.03.002
    [172] Bjarnadottir TK, Geirardsdottir K, Ingemansson M, et al. (2007) Identification of novel splice variants of Adhesion G protein-coupled receptors. Gene 387: 38–48. doi: 10.1016/j.gene.2006.07.039
    [173] Lin HH, Chang GW, Davies JQ, et al. (2004) Autocatalytic cleavage of the EMR2 receptor occurs at a conserved G protein-coupled receptor proteolytic site motif. J Biol Chem 279: 31823–31832. doi: 10.1074/jbc.M402974200
    [174] Isberg V, Vroling B, van der Kant R, et al. (2014) GPCRDB: an information system for G protein-coupled receptors. Nucleic Acids Res 42: D422–D425. doi: 10.1093/nar/gkt1255
    [175] Gasparini F, Kuhn R, Pin JP (2002) Allosteric modulators of group I metabotropic glutamate receptors: novel subtype-selective ligands and therapeutic perspectives. Curr Opin Pharmacol 2: 43–49. doi: 10.1016/S1471-4892(01)00119-9
    [176] Malherbe P, Kratochwil N, Zenner MT, et al. (2003) Mutational analysis and molecular modeling of the binding pocket of the metabotropic glutamate 5 receptor negative modulator 2-methyl-6-(phenylethynyl)-pyridine. Mol Pharmacol 64: 823–832. doi: 10.1124/mol.64.4.823
    [177] Litschig S, Gasparini F, Rueegg D, et al. (1999) CPCCOEt, a noncompetitive metabotropic glutamate receptor 1 antagonist, inhibits receptor signaling without affecting glutamate binding. Mol Pharmacol 55: 453–461.
    [178] Silve C, Petrel C, Leroy C, et al. (2005) Delineating a Ca2+ binding pocket within the venus flytrap module of the human calcium-sensing receptor. J Biol Chem 280: 37917–37923. doi: 10.1074/jbc.M506263200
    [179] Hermans E, Challiss RA (2001) Structural, signalling and regulatory properties of the group I metabotropic glutamate receptors: prototypic family C G-protein-coupled receptors. Biochem J 359: 465–484. doi: 10.1042/bj3590465
    [180] Nakanishi S (1992) Molecular diversity of glutamate receptors and implications for brain function. Science 258: 597–603. doi: 10.1126/science.1329206
    [181] Riedel G, Platt B, Micheau J (2003) Glutamate receptor function in learning and memory. Behav Brain Res 140: 1–47. doi: 10.1016/S0166-4328(02)00272-3
    [182] Kunishima N, Shimada Y, Tsuji Y, et al. (2000) Structural basis of glutamate recognition by a dimeric metabotropic glutamate receptor. Nature 407: 971–977. doi: 10.1038/35039564
    [183] Niswender CM, Conn PJ (2010) Metabotropic glutamate receptors: physiology, pharmacology, and disease. Annu Rev Pharmacol Toxicol 50: 295–322. doi: 10.1146/annurev.pharmtox.011008.145533
    [184] Bhanot P, Brink M, Samos CH, et al. (1996) A new member of the frizzled family from Drosophila functions as a Wingless receptor. Nature 382: 225–230. doi: 10.1038/382225a0
    [185] Murone M, Rosenthal A, de Sauvage FJ (1999) Sonic hedgehog signaling by the patched-smoothened receptor complex. Curr Biol 9: 76–84. doi: 10.1016/S0960-9822(99)80018-9
    [186] Chen CM, Strapps W, Tomlinson A, et al. (2004) Evidence that the cysteine-rich domain of Drosophila Frizzled family receptors is dispensable for transducing Wingless. Proc Natl Acad Sci USA 101: 15961–15966. doi: 10.1073/pnas.0407103101
    [187] Nakano Y, Nystedt S, Shivdasani AA, et al. (2004) Functional domains and sub-cellular distribution of the Hedgehog transducing protein Smoothened in Drosophila. Mech Dev 121: 507–518. doi: 10.1016/j.mod.2004.04.015
    [188] Isberg V, de Graaf C, Bortolato A, et al. (2015) Generic GPCR residue numbers-Aligning topology maps minding the gaps. Trends Pharmacol Sci 36: 22–31. doi: 10.1016/j.tips.2014.11.001
  • © 2017 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)