
We consider the mathematical model of chemotaxis introduced by Patlak, Keller, and Segel. Aggregation and traveling waves are ubiquitous in the population dynamics of chemotactic cells. Aggregation originates from the chemotaxis of motile cells, which are attracted to regions of higher concentration of a chemical signal that they themselves produce. A neural net can be used to find an approximate solution of the PDE. We proved that the error, the difference between the actual and predicted values, is bounded by a constant multiple of the loss being minimized. Moreover, the neural net approximation is easily applied to the inverse problem: we confirmed that even when a coefficient of the PDE was unknown, highly accurate prediction was achieved.
Citation: Sunwoo Hwang, Seongwon Lee, Hyung Ju Hwang. Neural network approach to data-driven estimation of chemotactic sensitivity in the Keller-Segel model[J]. Mathematical Biosciences and Engineering, 2021, 18(6): 8524-8534. doi: 10.3934/mbe.2021421
Chemotaxis is the process by which cells or organisms change their state of migration in reaction to the presence of chemicals, so as to approach chemically favorable environments and avoid adverse ones. It plays an important role in many settings; for example, bacteria move toward high concentrations of food, and eukaryotic cells migrate towards or away from a chemical source [1,2]. Several aspects of chemotactic motility have been studied in great detail, especially for the model organism E. coli. Keller and Segel proposed a mathematical model of chemotaxis in 1970 [3]. The model described chemotactic behavior well and has been widely used for modeling chemotaxis [4,5].
This article considers the mathematical model of chemotaxis introduced by Patlak, Keller, and Segel [3,6,7,8]. Aggregation and propagation waves are ubiquitous in the population dynamics of chemotactic cells. Chemotactic bacteria are well known for expanding their habitats by organizing themselves into localized populations. Aggregation arises from the chemotaxis of motile cells, which are attracted toward higher concentrations of the chemical signal they themselves produce. Keller and Segel proposed a reaction-diffusion-advection equation describing the population dynamics of chemotactic cells, where the reaction term refers to local population dynamics such as cell proliferation, the diffusion term refers to random movement of cells, and the advection term describes chemotactic drift. Variants of the classic Keller-Segel model have been developed to explain more realistic behaviors of chemotactic cells.
We focus on the following system, the Patlak-Keller-Segel equations:
$$\begin{aligned}
\partial_t u &= D_u \partial_{xx} u - \partial_x(\chi u\,\partial_x v),\\
\partial_t v &= D_v \partial_{xx} v + \alpha u - \beta v,\\
u(0,x) &= u_0(x), \quad v(0,x) = v_0(x),\\
u(t,x) &= g_u(t,x), \quad v(t,x) = g_v(t,x), \quad \text{for } x \in \{-1,1\}.
\end{aligned}$$
We study deep learning architectures to solve the Patlak-Keller-Segel equations. Using the initial condition, the boundary condition, and the PDE itself as loss functions, a neural net is trained to be an approximate solution. This approach has two major advantages. First, a function learned by our method can make predictions on any grid: a trained neural net gives an output for any input value, unlike existing numerical approaches that predict values only on a specific grid. Second, unknown coefficients can be predicted by learning. It is sometimes necessary to make predictions when a coefficient is unknown; in this forward-inverse problem, the neural net approach applies directly [9], with each coefficient parameterized as a learnable weight.
There have been attempts to solve differential equations using neural nets [10,11], which can fit any function [12]. For PDEs with complex behaviors, finding a solution is difficult. While studies on finding approximate solutions to such PDEs with neural nets have started recently, there are still few works applying neural net approaches in mathematical biology. In this paper, we apply the neural net approach to the Patlak-Keller-Segel equations and show that it achieves good accuracy compared with an existing method, the finite volume method (FVM).
The Keller-Segel equations have been treated with several numerical approaches, such as finite volume and finite element methods. For such schemes, existence and uniqueness of the numerical solution have been proved, together with a priori estimates and a threshold on the initial mass below which the numerical approximation converges to the solution of the PKS system. Numerical simulations verify the accuracy and the properties of these schemes, and blow-up of the solution has been investigated for large mass.
In [11], a mass transport steepest descent scheme was proposed to solve the modified 1D Keller-Segel system with the logarithmic interaction kernel proposed in [9]. The method is based on the transformation method for Fokker-Planck type equations introduced in [13,14] and is applied to Keller-Segel type models so as to satisfy the discrete free-energy dissipation principle. By working in the transformed variables, the method can accurately resolve regions of high concentration without fine-tuning the mesh. This approach has been extended to nonlinear aggregation-diffusion equations in several dimensions and to other discretizations; see [15,16] and the references therein.
In these methods, χ enters as given data. The CFL condition shows that the grid must be refined as χ grows. This is why the inverse problem is difficult to analyze numerically: with an unknown χ, the CFL condition cannot be guaranteed. A new method is needed to deal with the inverse problem.
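As a rough illustration of this dependence, consider a heuristic stability bound for an explicit advection-diffusion step. The exact CFL constant depends on the discretization; the function below and its argument names are illustrative, not taken from any particular scheme.

```python
def stable_dt(dx, D_u, chi, grad_v_max):
    """Heuristic stability bound for an explicit advection-diffusion step:
    diffusion requires dt <= dx^2 / (2 D_u), while the chemotactic advection
    speed ~ chi * |dv/dx| requires roughly dt <= dx / (chi * |dv/dx|)."""
    dt_diff = dx**2 / (2.0 * D_u)
    dt_adv = dx / (chi * grad_v_max) if chi * grad_v_max > 0 else float("inf")
    return min(dt_diff, dt_adv)

# Larger chi forces a smaller time step (and, in practice, a finer grid):
print(stable_dt(0.01, 1.0, 1.0, 10.0))    # diffusion-limited
print(stable_dt(0.01, 1.0, 100.0, 10.0))  # advection-limited, much smaller
```

With an unknown χ, no single time step can be certified stable in advance, which is exactly the obstruction noted above.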
The neural net can be used to find an approximate solution of the PDE. We define the loss function according to the equation, the boundary condition, and the initial condition, build a neural net structure, and train it with this loss. The universal approximation theorem guarantees that such an approximate solution can always be obtained.
The neural net approach is also useful for solving inverse problems. We define the unknown coefficients as trainable variables and proceed with learning; optimizing then yields coefficients close to the actual values. This method does not depend on the value of χ.
Neural network approaches rarely discuss the error between the actual and predicted values, since the actual value is unknown; for this reason, existing studies often report only the loss used for learning. We compensate for this weakness by bounding the error by the loss. The next subsection addresses this topic.
In this section, we introduce our DNN structure and numerical method. The proofs in this section adapt the arguments developed for the Fokker-Planck equation [17].
Let $f^{NN}(t,x;m,w,b)$ be a deep neural network with a smooth activation function; $f^{NN}$ will be the approximate solution of the Keller-Segel equations. Its structure is as follows.
$$z^{(l+1)}_j = \sum_{i=1}^{m_l} w^{(l+1)}_{ji}\,\bar{\sigma}_l\big(z^{(l)}_i\big) + b^{(l+1)}_j.$$
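The layer recursion above can be sketched as a plain NumPy forward pass. This is a minimal illustration, not the paper's implementation; the layer widths and the choice of tanh as the smooth activation are assumptions.

```python
import numpy as np

def mlp_forward(x, weights, biases, activation=np.tanh):
    """Evaluate z^{(l+1)}_j = sum_i w^{(l+1)}_{ji} sigma(z^{(l)}_i) + b^{(l+1)}_j
    layer by layer; the last layer is left linear."""
    z = x
    for l, (W, b) in enumerate(zip(weights, biases)):
        z = z @ W.T + b            # affine map with weights w and biases b
        if l < len(weights) - 1:   # smooth activation on hidden layers only
            z = activation(z)
    return z

rng = np.random.default_rng(0)
sizes = [2, 32, 32, 2]  # input (t, x) -> output (u, v); widths are illustrative
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

tx = rng.uniform(-1, 1, (100, 2))  # 100 sample points (t, x)
out = mlp_forward(tx, weights, biases)
```

In practice the parameters would be trained by back-propagation rather than fixed at random initialization.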
We need grid points for each variable's domain and use random sampling to pick points within the domain. The grid points for the initial condition and the boundary condition are $\{(t=0,\,x_j)\}_j$ and $\{(t_i,\,x = 1 \text{ or } -1)\}_i$, respectively.
We use the Adam optimizer to find the optimal parameters $w^{(l+1)}_{ji}$ and $b^{(l+1)}_j$ that minimize the loss functions via back-propagation.
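The random sampling of the three sets of grid points might look as follows; the point counts and final time are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N_int, N_ini, N_bdy, T = 1000, 200, 200, 1.0  # illustrative sizes

# interior collocation points (t, x) in (0, T) x (-1, 1), sampled uniformly
t_int = rng.uniform(0.0, T, N_int)
x_int = rng.uniform(-1.0, 1.0, N_int)

# initial-condition points {(t = 0, x_j)}_j
x_ini = rng.uniform(-1.0, 1.0, N_ini)
t_ini = np.zeros(N_ini)

# boundary points {(t_i, x = 1 or -1)}_i
t_bdy = rng.uniform(0.0, T, N_bdy)
x_bdy = rng.choice([-1.0, 1.0], N_bdy)
```

Each batch of points feeds the corresponding loss term below.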
The governing equation loss is defined as follows:
$$\begin{aligned}
\mathrm{Loss}^1_{GE} &= \int_0^T\! dt \int_{-1}^{1}\! dx\, \Big| \partial_t f^{NN}_1 - D_u \partial_{xx} f^{NN}_1 + \chi\, \partial_x f^{NN}_1\, \partial_x f^{NN}_2 + \chi f^{NN}_1\, \partial_{xx} f^{NN}_2 \Big|^2,\\
\mathrm{Loss}^2_{GE} &= \int_0^T\! dt \int_{-1}^{1}\! dx\, \Big| \partial_t f^{NN}_2 - D_v \partial_{xx} f^{NN}_2 - \alpha f^{NN}_1 + \beta f^{NN}_2 \Big|^2,
\end{aligned}$$

where $f^{NN}_1(t,x;m,w,b)$ and $f^{NN}_2(t,x;m,w,b)$ approximate $u$ and $v$, respectively.
Then we define LossGE as
$$\mathrm{Loss}_{GE} = \mathrm{Loss}^1_{GE} + \mathrm{Loss}^2_{GE}.$$
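As a rough sketch of how the governing-equation losses can be discretized, the residuals below are evaluated on a uniform grid with central finite differences standing in for the automatic differentiation a neural net would actually use; the test fields and coefficient values are arbitrary assumptions.

```python
import numpy as np

def pks_residuals(u, v, dt, dx, Du=1.0, Dv=1.0, chi=1.0, alpha=1.0, beta=1.0):
    """Mean squared residuals of the two governing equations on a uniform
    (t, x) grid, using central differences in place of autodiff."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    v_t = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    v_x = (v[1:-1, 2:] - v[1:-1, :-2]) / (2 * dx)
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    v_xx = (v[1:-1, 2:] - 2 * v[1:-1, 1:-1] + v[1:-1, :-2]) / dx**2
    uc, vc = u[1:-1, 1:-1], v[1:-1, 1:-1]
    r1 = u_t - Du * u_xx + chi * u_x * v_x + chi * uc * v_xx  # first equation
    r2 = v_t - Dv * v_xx - alpha * uc + beta * vc             # second equation
    return np.mean(r1**2), np.mean(r2**2)

t = np.linspace(0, 1, 51)[:, None]
x = np.linspace(-1, 1, 101)[None, :]
u = np.exp(-t) * np.cos(np.pi * x)  # arbitrary smooth test fields
v = np.exp(-t) * np.sin(np.pi * x)
l1, l2 = pks_residuals(u, v, t[1, 0] - t[0, 0], x[0, 1] - x[0, 0])
```

During training, the same residuals would be computed at the randomly sampled collocation points and minimized together with the initial and boundary losses.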
The initial condition loss over the initial grid points is defined as follows:
$$\mathrm{Loss}_{IC} = \int_{-1}^{1} \big|f^{NN}(0,x) - f_0(x)\big|^2\, dx \approx \frac{1}{N_j}\sum_j \big|f^{NN}(0,x_j) - f_0(x_j)\big|^2.$$
The boundary condition loss is defined as follows:
$$\begin{aligned}
\mathrm{Loss}_{BC} ={}& \int_0^T \big|f^{NN}(t,-1;m,w,b) - g(t,-1)\big|^2 dt + \int_0^T \big|f^{NN}(t,1;m,w,b) - g(t,1)\big|^2 dt\\
&+ \int_0^T \big|f^{NN}_t(t,-1;m,w,b) - g_t(t,-1)\big|^2 dt + \int_0^T \big|f^{NN}_t(t,1;m,w,b) - g_t(t,1)\big|^2 dt\\
\approx{}& \frac{1}{2N} \sum_{x\in\{-1,1\},\,i} \big|f^{NN}(t_i,x;m,w,b) - g(t_i,x)\big|^2 + \frac{1}{2N} \sum_{x\in\{-1,1\},\,i} \big|f^{NN}_t(t_i,x;m,w,b) - g_t(t_i,x)\big|^2.
\end{aligned}$$
Finally, we define the total loss as
$$\mathrm{Loss}_{total} = \mathrm{Loss}_{GE} + \mathrm{Loss}_{IC} + \mathrm{Loss}_{BC}. \tag{1}$$
Theorem 1. Let
$$l_1 = \int_{D_{int}} \|\mathcal{L}(u,v)\|, \qquad l_2 = \int_{D_{ini}} \|(u,v)(x,0) - (u_0,v_0)\|, \qquad l_3 = \int_{D_{bd}} \Big|\frac{\partial}{\partial n}(u,v)\Big|.$$
Then,
$$\int_D \big|(u,v) - (u^*,v^*)\big| \le C\,(l_1 + l_2 + l_3)$$
for some constant $C$, where $(u^*, v^*)$ is the solution of the equation.
Proof. Consider
$$\partial_t u - D_u\,\partial_{xx} u + \chi\,\partial_x(u\,\partial_x v).$$
Integrate on I, then
$$\partial_t\int_I u - D_u\int_I \partial_{xx} u + \chi\int_I \partial_x(u\,\partial_x v) = \partial_t\int_I u - D_u\,[\partial_x u]_{\partial I} + \chi\,[u\,\partial_x v]_{\partial I}.$$
Integrating in time,
$$l_{1,u} = \int_I u(\cdot,t)\,dx - D_u\int_0^t [\partial_x u]_{\partial I}\,ds - \chi\int_0^t [u\,\partial_x v]_{\partial I}\,ds.$$
Subtract the same identity written for $u^*$.
Also consider
$$\partial_t v - D_v\,\partial_{xx} v + \beta v = \alpha u.$$
Remark 2. The assumption $f \in \hat{C}^{(1,1)}([0,T]\times[0,1])$ can be replaced by a general Sobolev space, since functions in a Sobolev space can be approximated by continuous functions on a compact set.
Let $u$ and $v$ be the solutions and $u^{NN}$ and $v^{NN}$ the approximate solutions, and set $u_d = u - u^{NN}$, $v_d = v - v^{NN}$.
$$\begin{aligned}
d^{(1)}_{ge} &= \partial_t u_d - \partial_{xx} u_d + \partial_x(u\,\partial_x v)_d, &\qquad d^{(2)}_{ge} &= \partial_t v_d - \partial_{xx} v_d - u_d + v_d,\\
d^{(1)}_{bd} &= \partial_x u^{NN}, &\qquad d^{(2)}_{bd} &= \partial_x v^{NN},\\
d^{(1)}_{ini} &= u_d, &\qquad d^{(2)}_{ini} &= v_d.
\end{aligned}$$
Consider
$$2u_d\,\partial_t u_d = 2u_d\,\partial_{xx} u_d + 2u_d\,\partial_x(u\,\partial_x v)_d + 2u_d\, d^{(1)}_{ge}.$$
Integrating in $x$, the first term of the RHS becomes
$$\begin{aligned}
2\int_I u_d\,\partial_{xx} u_d\, dx &= 2\int_{\partial I} u_d\,\partial_x u_d\, n_x\, dS_x - 2\int_I (\partial_x u_d)^2\, dx\\
&\le \int_{\partial I} (u_d)^2\, n_x\, dS_x + \int_{\partial I} (\partial_x u_d)^2\, n_x\, dS_x - 2\int_I (\partial_x u_d)^2\, dx.
\end{aligned}$$
The second term of the RHS is
$$2\int_I u_d\,\partial_x(u\,\partial_x v)_d\, dx = 2\int_{\partial I} u_d\,(u\,\partial_x v)_d\, n_x\, dS_x - 2\int_I \partial_x u_d\,(u\,\partial_x v)_d\, dx.$$
Define $B_1 = 2\int_{\partial I} u_d\,(u\,\partial_x v)_d\, n_x\, dS_x$ and $B_2 = 2\int_I \partial_x u_d\,(u\,\partial_x v)_d\, dx$.
$$\begin{aligned}
|B_1| &= 2\,\Big|\int_{\partial I} (u - u^{NN})\,(u\,\partial_x v - u^{NN}\partial_x v^{NN})\, n_x\, dS_x\Big|\\
&\le 2\,\Big|\int_{\partial I} (u_d)^2\,\partial_x v^{NN}\, n_x\, dS_x\Big| + 2\,\Big|\int_{\partial I} u\, u_d\,\partial_x v^{NN}\, n_x\, dS_x\Big|\\
&\le 2\,\|d^{(2)}_{bd}\|_{L^\infty(\partial I)}\,\|u_d\|_{L^2(\partial I)} + M_1\Big(\epsilon_1\|u_d\|_{L^2(\partial I)} + \frac{1}{\epsilon_1}\|v^{NN}_x\|_{L^2(\partial I)}\Big),
\end{aligned}$$
where $\|u_d\|_{L^\infty(I)} < M_1$.
We may assume $\|d^{(2)}_{bd}\|_{L^\infty(\partial I)} < 1/6$. Then
$$\begin{aligned}
|B_1(t)| &\le \Big(\frac{1}{3} + M_1\epsilon_1\Big)\,\|u_d\|_{L^2(\partial I)} + \frac{M_1}{\epsilon_1}\,\|d^{(2)}_{bd}\|_{L^2(\partial I)}\\
&\le \Big(\frac{1}{3} + M_1\epsilon_1\Big)\big(\|u_d\|_{L^2(I)} + \|\partial_x u_d\|_{L^2(I)}\big) + \frac{M_1}{\epsilon_1}\,\|d^{(2)}_{bd}\|_{L^2(\partial I)}.
\end{aligned}$$
Choose $\epsilon_1$ such that $1/3 + M_1\epsilon_1 < 1/2$; then
$$|B_1(t)| \le \frac{1}{2}\|u_d\|_{L^2(I)} + \frac{1}{2}\|\partial_x u_d\|_{L^2(I)} + C_1\|d^{(2)}_{bd}\|_{L^2(\partial I)}.$$
Consider $B_2(t)$:
$$B_2(t) = 2\int_I \partial_x u_d\, u^{NN}\,\partial_x v_d\, dx + 2\int_I \partial_x u_d\,\partial_x v\, u_d\, dx.$$
The first term of the RHS is bounded as follows:
$$2\,\Big|\int_I \partial_x u_d\, u^{NN}\,\partial_x v_d\, dx\Big| \le \|u^{NN}\|_{L^\infty(I)}\Big(\epsilon_2\|\partial_x u_d\|_{L^2(I)} + \frac{1}{\epsilon_2}\|\partial_x v_d\|_{L^2(\partial I)}\Big) \le M_2\epsilon_2\|\partial_x u_d\|_{L^2(I)} + \frac{M_2}{\epsilon_2}\|\partial_x v_d\|_{L^2(\partial I)},$$
where $\|u^{NN}\|_{L^\infty(I)} \le \|u\|_{L^\infty(I)} + \|u_d\|_{L^\infty(I)} < M_2$. The second term of the RHS is bounded as follows:
$$2\,\Big|\int_I \partial_x u_d\,\partial_x v\, u_d\, dx\Big| \le \|\partial_x v\|_{L^\infty(I)}\Big(\epsilon_3\|\partial_x u_d\|_{L^2(I)} + \frac{1}{\epsilon_3}\|u_d\|_{L^2(\partial I)}\Big) \le M_3\epsilon_3\|\partial_x u_d\|_{L^2(I)} + \frac{M_3}{\epsilon_3}\|u_d\|_{L^2(\partial I)},$$
where $\|\partial_x v\|_{L^\infty(I)} < M_3$. Hence
$$|B_2(t)| \le (M_2\epsilon_2 + M_3\epsilon_3)\,\|\partial_x u_d\|_{L^2(I)} + \frac{M_2}{\epsilon_2}\,\|\partial_x v_d\|_{L^2(\partial I)} + \frac{M_3}{\epsilon_3}\,\|u_d\|_{L^2(\partial I)}.$$
Choose $\epsilon_2$ and $\epsilon_3$ such that $M_2\epsilon_2 + M_3\epsilon_3 < 1/2$; then
$$\frac{d}{dt}\|u_d\|_{L^2(I)} \le \|d^{(1)}_{ge}\|_{L^2(I)} + \|d^{(1)}_{bd}\|_{L^2(\partial I)} + C_1\|d^{(2)}_{bd}\|_{L^2(\partial I)} + C_2\|u_d\|_{L^2(\partial I)} + M\|\partial_x v_d\|_{L^2(\partial I)}. \tag{2}$$
Also consider,
$$2v_d\,\partial_t v_d = 2v_d\,\partial_{xx} v_d - 2v_d\,u_d + 2(v_d)^2 + 2v_d\, d^{(2)}_{ge}.$$
Similarly,
$$\frac{d}{dt}\|v_d\|_{L^2(I)} \le \|d^{(2)}_{ge}\|_{L^2(I)} + C_3\|d^{(2)}_{bd}\|_{L^2(\partial I)} + C_4\|u_d\|_{L^2(\partial I)} + C_5\|u_d\|_{L^2(\partial I)} - \|\partial_x v_d\|_{L^2(\partial I)}. \tag{3}$$
Computing (2) $+\, M\times$(3),
$$\begin{aligned}
\frac{d}{dt}\|u_d\|_{L^2(I)} + M\frac{d}{dt}\|v_d\|_{L^2(I)} \le{}& \|d^{(1)}_{ge}\|_{L^2(I)} + \|d^{(1)}_{bd}\|_{L^2(\partial I)} + C_1\|d^{(2)}_{bd}\|_{L^2(\partial I)} + C_2\|u_d\|_{L^2(\partial I)}\\
&+ M\|d^{(2)}_{ge}\|_{L^2(I)} + MC_3\|d^{(2)}_{bd}\|_{L^2(\partial I)} + MC_4\|u_d\|_{L^2(\partial I)} + MC_5\|u_d\|_{L^2(\partial I)}.
\end{aligned}$$
Hence,
$$\begin{aligned}
\|u_d\|_{L^2(\Omega\times[0,T])} + M\|v_d\|_{L^2(\Omega\times[0,T])} \le C\Big(&\|d^{(1)}_{ge}\|_{L^2(\Omega\times[0,T])} + \|d^{(2)}_{ge}\|_{L^2(\Omega\times[0,T])}\\
&+ \|d^{(1)}_{bd}\|_{H^1(\partial\Omega\times[0,T])} + \|d^{(2)}_{bd}\|_{H^1(\partial\Omega\times[0,T])}\\
&+ \|d^{(1)}_{ini}\|_{L^2(\Omega\times\{0\})} + \|d^{(2)}_{ini}\|_{L^2(\Omega\times\{0\})}\Big),
\end{aligned}$$
for some constant C.
In most cases, the L2 loss is used when approximating PDE solutions with a neural net. As mentioned in the previous subsection, this does not guarantee an error bound. Through an energy estimate, we found a loss that does bound the error.
The energy estimate is derived from the given PDE. By manipulating it, we proved that the error, the difference between the actual and predicted values, is bounded by a constant multiple of the loss being minimized. It follows that an approximation with a sufficiently small loss is very close to the actual solution.
The resulting loss uses the H1 loss instead of the L2 loss in some terms. Reducing the difference in function values alone is therefore insufficient to reduce the error: with the H1 loss, the difference in derivative values is reduced as well, and making the error sufficiently small requires the derivative values of some terms to be small.
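The distinction between the two losses can be sketched on discrete boundary data. This is a toy illustration with made-up signals, not the paper's loss implementation: a function can be small in L2 yet far from the target in H1 when its derivative oscillates.

```python
import numpy as np

def l2_loss(f, g):
    """Discrete L2-type loss: mean squared difference of values."""
    return np.mean((f - g) ** 2)

def h1_loss(f, g, dt):
    """Discrete H1-type loss: value mismatch plus time-derivative mismatch."""
    df = np.gradient(f, dt)
    dg = np.gradient(g, dt)
    return np.mean((f - g) ** 2) + np.mean((df - dg) ** 2)

t = np.linspace(0, 1, 201)
f = 0.01 * np.sin(20 * t)  # small in value, but rapidly oscillating
g = np.zeros_like(t)

# The L2 loss is tiny, yet the derivative mismatch dominates the H1 loss:
print(l2_loss(f, g), h1_loss(f, g, t[1] - t[0]))
```

This is exactly why controlling only function values does not control the error terms that appear with H1 norms in the bound above.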
Our method is directly applicable to the inverse problem. Most methods cannot be applied as-is when a coefficient is unknown; in particular, numerical methodologies are developed for fixed coefficients, so parameters can only be predicted from observed data points by a separate procedure. Our method, on the other hand, needs no transformation: if training converges sufficiently, the network is a good approximator for both the parameter and the solution.
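The idea of treating an unknown coefficient as a learnable weight can be sketched on a toy problem. Here plain linear regression stands in for the full PDE residual, and the true value `chi_true`, the learning rate, and iteration count are illustrative assumptions; the point is only that gradient descent on the data-fit loss recovers the coefficient.

```python
import numpy as np

# Toy inverse problem: recover an unknown coefficient chi from observed data
# by treating it as a trainable parameter, as in the forward-inverse setup.
rng = np.random.default_rng(0)
chi_true = 2.5
x = rng.uniform(-1, 1, 200)
y = chi_true * x + rng.normal(0, 0.01, 200)  # noisy observed data points

chi = 0.0   # unknown coefficient, initialized arbitrarily
lr = 0.1
for _ in range(500):
    grad = np.mean(2 * (chi * x - y) * x)  # d/dchi of the mean squared loss
    chi -= lr * grad

print(chi)  # close to 2.5
```

In the actual method, the same gradient step updates χ jointly with the network weights, using the total loss of Eq (1).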
Chemotaxis has been primarily studied as an average characteristic of a population, with little regard for variability among individuals. Despite the simplifying approximations involved in our derivations, especially in the extrapolation to higher spatial dimensions, the models demonstrate a satisfactory and very useful ability to quantitatively interpret population assays for bacterial and leukocyte chemotactic migration.
The results of the proposed method are as follows. The Neural Net method fits the Keller-Segel solution as well as a well-known numerical method does. A finite volume method (FVM) was used for the comparison; the simulations were performed using Clawpack [18,19] with a fractional step method [20] in Python 3.6.
Figure 1 and Figure 2 show the results of the Neural Net approximation and the FVM approximation, respectively. The Neural Net method is not inferior to the FVM. Our approximate solution exhibits the chemotaxis phenomenon: one can see that aggregation takes place over time. This new methodology can sufficiently replace the existing one.
We find the coefficient of the equation and the solution itself simultaneously, which could not be done with the numerical method. Both are recovered from the observed data points, and the error between the found coefficient and the actual value is very small.
Compared with previous studies, our method has the advantage of handling the inverse problem as well as the forward problem. With existing numerical methods, few studies of the inverse problem for the Keller-Segel system have been conducted; with a neural net, the forward-inverse problem can be solved through a simple transformation.
Neural network approaches rarely discuss the error between the actual and predicted values, since the actual value is unknown; for this reason, existing studies often report only the loss used for learning. We compensated for this weakness by bounding the error by the loss.
The H1 loss is used instead of the L2 loss in some terms. Reducing the difference in function values alone is therefore insufficient to reduce the error: with the H1 loss, the difference in derivative values is reduced as well, and making the error sufficiently small requires the derivative values of some terms to be small.
Seongwon Lee was supported by National Institute for Mathematical Sciences (NIMS) grant funded by the Korea government (MSIT) (No. B21810000). Hyung Ju Hwang was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2017R1E1A1A03070105 and NRF-2019R1A5A1028324), Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)), and the Information Technology Research Center (ITRC) support program (No. IITP-2018-0-01441).
The authors declare no conflict of interest.
[1] P. Devreotes, C. Janetopoulos, Eukaryotic chemotaxis: Distinctions between directional sensing and polarization, J. Biol. Chem., 278 (2003), 20445–20448. doi: 10.1074/jbc.R300010200
[2] D. V. Zhelev, A. M. Alteraifi, D. Chodniewicz, Controlled pseudopod extension of human neutrophils stimulated with different chemoattractants, Biophys. J., 87 (2004), 688–695. doi: 10.1529/biophysj.103.036699
[3] E. F. Keller, L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol., 26 (1970), 399–415. doi: 10.1016/0022-5193(70)90092-5
[4] L. A. Segel, A. Goldbeter, P. N. Devreotes, B. E. Knox, A mechanism for exact sensory adaptation based on receptor modification, J. Theor. Biol., 120 (1986), 151–179. doi: 10.1016/S0022-5193(86)80171-0
[5] J. A. Sherratt, Chemotaxis and chemokinesis in eukaryotic cells: The Keller-Segel equations as an approximation to a detailed model, Bull. Math. Biol., 56 (1994), 129–146. doi: 10.1007/BF02458292
[6] C. S. Patlak, Random walk with persistence and external bias, Bull. Math. Biophys., 15 (1953), 311–338. doi: 10.1007/BF02476407
[7] E. F. Keller, L. A. Segel, Model for chemotaxis, J. Theor. Biol., 30 (1971), 225–234. doi: 10.1016/0022-5193(71)90050-6
[8] E. F. Keller, L. A. Segel, Traveling bands of chemotactic bacteria: a theoretical analysis, J. Theor. Biol., 30 (1971), 235–248. doi: 10.1016/0022-5193(71)90051-8
[9] H. Jo, H. Son, H. J. Hwang, E. H. Kim, Deep neural network approach to forward-inverse problems, Networks Heterog. Media, 15 (2020), 247. doi: 10.3934/nhm.2020011
[10] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics informed deep learning (part I): Data-driven solutions of nonlinear partial differential equations, arXiv: 1711.10561.
[11] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., 378 (2019), 686–707. doi: 10.1016/j.jcp.2018.10.045
[12] X. Li, Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer, Neurocomputing, 12 (1996), 327–343. doi: 10.1016/0925-2312(95)00070-4
[13] J. Soler, J. A. Carrillo, L. L. Bonilla, Asymptotic behavior of an initial-boundary value problem for the Vlasov-Poisson-Fokker-Planck system, SIAM J. Appl. Math., 57 (1997), 1343–1372. doi: 10.1137/S0036139995291544
[14] F. Bouchut, J. Dolbeault, On long time asymptotics of the Vlasov-Fokker-Planck equation and of the Vlasov-Poisson-Fokker-Planck system with Coulombic and Newtonian potentials, Differ. Integral Equat., 8 (1995), 487–514.
[15] J. Han, A. Jentzen, E. Weinan, Solving high-dimensional partial differential equations using deep learning, Proc. Natl. Acad. Sci., 115 (2018), 8505–8510. doi: 10.1073/pnas.1718942115
[16] J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations, Geom. Funct. Anal., 3 (1993), 209–262.
[17] H. J. Hwang, J. W. Jang, H. Jo, J. Y. Lee, Trend to equilibrium for the kinetic Fokker-Planck equation via the neural network approach, J. Comput. Phys., 419 (2020), 109665. doi: 10.1016/j.jcp.2020.109665
[18] Clawpack Development Team, Clawpack software, http://www.clawpack.org, Version 5.5.0, 2018.
[19] K. T. Mandli, A. J. Ahmadia, M. Berger, D. Calhoun, D. L. George, Y. Hadjimichael, et al., Clawpack: Building an open source ecosystem for solving hyperbolic PDEs, PeerJ Comput. Sci., 2 (2016), e68. doi: 10.7717/peerj-cs.68
[20] R. Tyson, L. G. Stern, R. J. LeVeque, Fractional step methods applied to a chemotaxis model, J. Math. Biol., 41 (2000), 455–475. doi: 10.1007/s002850000038