
According to the American Cancer Society, an estimated 2 million new cancer cases will be diagnosed in the United States in 2024, with approximately 611,000 cancer-related deaths expected [1]. Mathematical modeling is a powerful tool for dissecting and standardizing the complex biological mechanisms underlying tumor initiation and growth. Beyond yielding significant findings on their own, the insights gained from in silico hypothesis testing complement existing experimental trials and clinical observations. Furthermore, the quantitative techniques discussed here carry significant implications for optimizing and personalizing cancer therapy: simulating treatment strategies prior to application, while preserving patient-specific characteristics, allows algorithms to steer toward successful outcomes in individual cases. With this motivation in mind, we establish a benchmark for the current state of cancer research and highlight promising leads for further exploration. The individual models discussed below reveal considerable diversity in their approaches to spatiotemporal dynamics, intra-tumor heterogeneity, and environmental influences.
The cell cycle determines a cell's ability to proliferate in its environment. The complex series of events required for cell division is regulated by protein signaling and resource availability, and its disruption can initiate tumorigenesis. Healthy somatic cells rotate through interphase (comprising the G1, S, and G2 stages) and mitosis. A cell first enters the G1 growth phase, where it fulfills its basic biological functions for a variable duration. If conditions for proliferation are unfavorable, the cell may remain in G1 or arrest in the dormant (G0) phase, where it carries out cellular duties but does not prepare for division. Once conditions become favorable, the cell is surveyed to ensure a complete nuclear genome and signaled to divide; the quiescent cell passes the checkpoint and enters the proliferative S phase. Here, the cell undergoes DNA replication and advances into the resting G2 phase, where other organelles prepare for division. The cell then enters the mitotic phase, proceeding through mitosis and cytokinesis until it splits into two daughter cells. For a detailed overview of the cell cycle and its relationship with cancer, see Schafer [2] or Alberts et al. [3].
The dysregulation of growth factors, tumor suppressor proteins, or proto-oncogenes may disrupt this process. Overexpressed growth factors enable the cell to proliferate without regard for resources or internal readiness. Mitosis occurs rapidly, and cancerous cells often harbor telomeric mutations that extend their replicative life span. Moreover, tumor suppressor genes like p53 play a vital role in DNA repair by releasing inhibitory proteins that pause the cell's progression through the cell cycle when the genome is damaged. If p53 is not functional, the lack of inhibition will accelerate proliferation, and unrepaired mutations may compound the risk of cancer. Lastly, although healthy proto-oncogenes accelerate the cell's transition into the next stages of the cycle, mutated oncogenes may promote unsupported growth. The multiple mechanisms that endanger the cell cycle (cellular, biochemical, and molecular) can be further exacerbated by genetic predispositions and environmental factors that impair the cell's ability to repress fast proliferative rates [4].
The mathematical models explored in later sections use a variety of factors to simulate cancer growth. Below, a handful of these factors are defined in greater detail to explain how studies may use available data to increase the accuracy of their algorithms.
Often, proliferating (non-quiescent) cancerous cells are categorized by their potential to continue dividing. Kolokotroni et al. [5] classified them as undifferentiated cancer stem cells, limited mitotic potential (LIMP) cells, and terminally differentiated tumor cells. Any category of cells can produce cells of its own population or of the next category that follows the path toward terminal differentiation. For example, stem cells can produce LIMP/progenitor cells, and progenitor cells can produce terminally differentiated cells. Cancer stem cells are primarily responsible for tumor sustenance and propagation, so distinguishing cancer populations is important to predict global tumor behavior. Additionally, the average duration spent in each phase is an important metric to predict glucose consumption, acid production, and overall tumor size [6].
Two other categories, apoptotic and necrotic cells, account for dead or dying cancer cells [5]. Due to poor survival conditions in an environment of rapidly dividing cells, quiescent cells that are not recruited back into the active cell cycle can die uncontrollably through necrosis. Alternatively, cancer cells in any category or cell cycle phase may undergo programmed cell death, or apoptosis, if they are signaled appropriately. Figure 1, adapted from Wang et al. 2015, demonstrates a rudimentary pathway for a noncancerous cell's transition through the cell cycle and various phenotypes.
One commonly modeled environmental phenomenon during tumor development is angiogenesis. Both healthy and cancer tissues exploit networks of vasculature to provide cells with nutrients, hormones, and oxygen necessary for cellular function. Angiogenesis is tumor-induced vascularization that branches out from existing blood vessels into tumor architecture [7]. Because rapid cell division and nutrient competition pressure the survival of less aggressive cancer phenotypes, tumor angiogenic factors (TAF) stimulate the extension of vasculature to tumor regions. As a result, models occasionally include angiogenesis as a factor for development because it sustains a larger tumor volume and harbors different cancerous phenotypes.
Even a brief overview of cancer research has demonstrated the need for a multi-scale investigation into the field. By multi-scale we mean processes that occur at two or more spatial or temporal scales, such as molecular signaling pathways, sequencing of genetic mutations, histology of cell biomarkers, homeostasis of the tissue environment, and positron emission tomography (PET) scans of metastasized tumors [8]. Each of these examples is a level of biological organization at which cancerous phenomena occur [9,10].
Scientists often employ in vitro and in vivo experiments to explore cancer growth as a dynamic process. However, to truly understand the interdependent variables that characterize the cancer system, mathematical modeling and computer simulations have become increasingly popular to observe change at the appropriate time scale. This in silico approach fits biological phenomena to free parameters that can be manipulated at the scientist's discretion, to study asymptotic behavior or behavior under specific, known conditions. Used in isolation, in combination with, or as a precursor to the more traditional methods of cancer research, in silico research has promoted exploratory analyses of cancer based on existing clinical data [11,12,13].
In this expository review, we specifically investigate spatiotemporal modeling approaches to free growth behavior, that is, growth in a host environment unrestricted by immune responses or anticancer therapies. Despite the literature's current emphasis on understanding the efficacy of treatment and drug administration in cancerous states, the first priority of the in silico approach must be to understand the biological variables that significantly impact tumor growth and cancerous cell populations in the human body. These models can then be applied to create new therapeutic targets or simulate treatment initiation [5].
The objective to discover patterns in given data and predict future trends requires that models represent observed cancer systems with a systematic treatment of biological complexity across relevant scales [7]. For example, scientists may compromise on modeling the stochastic (random) behavior of individual cells to better simulate the macroscopic tissue-wide effects of tumor expansion in their algorithms. Other model objectives may include customizable parameters for patient-specific contexts or the production of testable hypotheses that can be further refined in wet lab research evaluations. To perform any function, mathematical models must first quantitatively clarify basic cancer phenotypes and assess their relationship with environmental changes.
As seen throughout this paper, mathematical modeling confronts several challenges in its pursuit of predictive power and exploration of processes. Although the multi-scale approach is common, different simulations focus on different selections of modeling scales: some focus especially on protein and cellular movements, while others explore cancerous changes through the homeostasis of tissue environments [9,14,15,16]. Another significant challenge is determining the level of complexity of the parameters and processes simulated, weighing accuracy against computational load and applicability in the clinical setting. The decision to choose certain variables may also depend on the model's objective or past findings that indicate where an algorithm can be appropriately simplified [17].
Despite operating under similar computational objectives and limitations, mathematical models of the cancer system have demonstrated enormous diversity in their approaches toward and predictions of free tumor growth. This paper is organized as follows: Section 2 explores continuum approaches to tumor growth modeling, Section 3 focuses on discrete lattice-based and lattice-free simulations, and Section 4 discusses recent hybrid frameworks in the field.
Continuum models treat the tumor as a continuous mass, prioritizing the overall tumor morphology and nutrient distributions rather than the influence of individual cells [7]. Understanding the tumor body as a material region within a Euclidean space, this modeling technique maintains that the previous local history of deformation determines the mechanical stress applied to the body and that the material's response is identical for all observers. The mechanical stress, exerted by the tumor body itself and interstitial fluid in a cancer system, impacts the growth kinetics of the proliferating cells [18]. A continuum approach requires that the uniform agents fill the space they occupy and are all deformable, or sensitive to the loads and pressures they experience [19]. Typical assumptions in these simulations include continuity, homogeneity, and isotropy of invariant vector properties. When these conditions are not applicable, subspaces are established in which each region satisfies these properties. For example, Castorina et al. [20] divided a heterogeneous tumor into subpopulations of cells with shared properties to be modeled as separate continuous regions.
By examining larger scales of growth (multicellular, tissue), stochastic variations of single-cellular responses are often neglected. One of two popular formats of continuum modeling is the use of ordinary differential equations (ODEs). Built on the assumption that signaling molecules are highly abundant, well-mixed, and uniform in the cellular microenvironment, ODEs can disregard the behavior of the individual cell to instead focus on investigating the global production/consumption of the molecules in the mass [21,22]. Models implementing ODEs are therefore restricted by the requirement of a homogeneous microenvironment, but they can be practical for consolidating data from assays of in vivo or in vitro cell lines.
Although many ODE models (exponential, logistic, Mendelsohn, linear, surface, Gompertz, and Bertalanffy) have been proposed to simulate tumor growth data, only a select few are discussed below. The criteria for selecting the 'best' model under a specific circumstance typically depend on the function's fit, its biological basis, and its capability to estimate future growth. The Mendelsohn, linear, surface, and exponential models assume infinite growth without bound, a biologically unrealistic behavior. Murphy et al. [23] found that the Mendelsohn, linear, and surface models do not amply fit experimental data of in vivo lung and breast cell lines. Coupled with their lack of physiological foundation, these three models are not included in the discussion below. (For an overview of these models, see Murphy et al. [23], Heesterman et al. [24], and Gerlee [25].) The authors' statistical analysis also revealed the lowest corrected Akaike information criterion (AICc) for the exponential model and the minimum sum of squared residuals (SSR) for the Bertalanffy model, indicating promising results for these two functions. Additionally, based on recent successes with logistic and Gompertzian growth modeling, we have included these approaches in the review as well [26,27,28]. As a result, Sections 2.1 through 2.4 investigate these four models of cancerous cell population growth (exponential, logistic, Gompertzian, and Bertalanffy), each classified as a first-order ODE. Each of these models adheres to the following tumor cell population growth equation:
$$\frac{dN(t)}{dt} = g(t)\,N(t) \tag{2.1}$$
where N(t) represents the number of tumor cells at time t, and g(t) is the growth rate of cells, which can also be expressed as a function of time [22].
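As a minimal illustration, the general growth law of Eq (2.1) can be integrated numerically once a specific growth rate is supplied. The sketch below (plain Python, forward Euler; the step size and parameter values are arbitrary choices for illustration, not taken from any cited study) recovers the exponential solution when g is held constant:

```python
import math

def simulate_growth(g, n0, t_end, dt=1e-4):
    """Integrate dN/dt = g(t) * N(t) with forward Euler, starting from N(0) = n0."""
    n, t = n0, 0.0
    while t < t_end:
        n += dt * g(t) * n   # dN = g(t) * N * dt
        t += dt
    return n

# With a constant rate g(t) = r, Eq (2.1) has the closed-form solution N(t) = n0 * e^(r t).
r = 0.5
approx = simulate_growth(lambda t: r, n0=100.0, t_end=2.0)
exact = 100.0 * math.exp(r * 2.0)
```

The models discussed in the following sections differ only in how the growth rate g is specified: as a constant, as a function of the population N, or as a function of time t.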
Another format of continuum modeling is by partial differential equations (PDEs), which allow for expressions containing several variables rather than just one. As a result, PDEs can account for the spatial heterogeneity of tumor growth and the surrounding environment by exploring the temporal and spatial evolution of the tumor, in 2D monolayers or 3D architecture [22]. In Section 2.5, we discuss the continuous reaction-diffusion model as an example of a closed PDE system.
Exponential models are based on the assumption of constant, unbounded cell growth. The creation of two daughter cells by cellular division occurs at a stable rate, independent of tumor size. To represent this biological description of cell reproduction, the function employs one free parameter, the growth rate, also known as the proportionality constant since growth remains proportional to the current population [23]. In reference to Eq (2.1), g=r where r is a constant value [22]. Tumor doubling time DT can be simply modeled by
$$DT = \frac{\ln(2)}{r}. \tag{2.2}$$
This traditional understanding of cell growth is often used to fit the growth curve at tumor initiation. Theoretically, cellular mechanisms support the notion of geometric growth for tumor populations, assuming there is no limitation on nutrient availability or physical space [29]. Its potential for ideal or early growth modeling is depicted by Jarrett et al. [22], who found that exponential growth occurs when a constant nutrient source exists, but it is no longer practical when nutrients are variable. Incorporating five experimental datasets of in vivo breast and liver cancer, Talkington and Durrett [30] found trends of exponential growth. The authors' small dataset, providing tumor volume only at two time points, indicates that exponential growth is most effective for modeling limited data or early data from a tumor that has not fully developed. This scenario of limited data is further confirmed by Murphy et al. [23] who found the exponential model to have a poor SSR value but the lowest AICc of all models tested, meaning it succeeds at modeling the experimental data when correcting for the number of parameters and small sample size. Other models, such as the Bertalanffy function, do not show enough improvement in fit to counter their need for increased parameters. The exponential model, with just one free parameter, appropriately fits a small experimental dataset.
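As a concrete sketch of the two-time-point scenario (with hypothetical volumes and days, not data from the cited studies), the exponential rate r and the doubling time of Eq (2.2) follow directly from a pair of measurements:

```python
import math

def exponential_rate(n1, n2, t1, t2):
    """Solve N(t2) = N(t1) * e^{r (t2 - t1)} for the constant growth rate r."""
    return math.log(n2 / n1) / (t2 - t1)

def doubling_time(r):
    """Tumor doubling time DT = ln(2) / r, as in Eq (2.2)."""
    return math.log(2) / r

# Hypothetical measurements: volume 50 on day 0 and 200 on day 20.
r = exponential_rate(50.0, 200.0, 0.0, 20.0)
dt_days = doubling_time(r)  # the volume quadrupled over 20 days, i.e., doubled every 10
```

With only one free parameter, this fit requires just two data points, which is precisely why the exponential model performs well on sparse early-growth datasets.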
However, as the tumor matures, or a longer timeframe of data is provided, exponential growth becomes inaccurate. Figure 2 shows how fitting late time point data of in vivo mice tumor samples forces the proliferation parameter r to converge at a low estimated value, in a study by Vaghi et al. [31]. As nutrients, oxygen, and space become population-limiting factors, the growth rate decelerates and betrays the assumption of constant doubling time. Exponential models do not allow for slowed growth when resources inevitably deplete. Benzekry et al. [32] showed that exponential growth does not accurately explain in vivo breast and lung tumor growth data taken over weeks, and the growth curve has the largest error residuals of all models tested. Over a long period with ample data, the exponential model has poor descriptive power.
Kolokotroni et al. [5] demonstrated a possible case for exponential growth in fully developed tumors when focusing on short-term approximations and when the tumor is in a state of population equilibrium, or balanced growth, in which the proportions of the cell subpopulations become asymptotically constant even as the overall tumor cell population continues to grow. However, complications including quiescence, differentiation, and recruitment of dormant cells back into the cell cycle result in further deviations from the exponential pattern of growth.
Exponential modeling arises from assumptions of constant conditions like inexhaustible nutrient sources, and this criterion makes it more suitable for early phases of population growth or a small window of data collection. In this scenario, the simulation can convey the rapid tumor growth as proportional to the population. As the cancer continues to develop, it undergoes complicating factors including depleting nutrients, angiogenesis, and necrotic cores that invalidate the unbounded model for long-term growth. Once these factors significantly impact the cell population, the growth rate declines in a manner characteristic of a carrying capacity.
Logistic growth models display this observed capacity trend as a maximum population size. They assume that population growth is density-dependent through space and nutrients, with the rate of growth decreasing linearly as population size increases. Expressed as a sigmoid function between the two asymptotes N=0 and N=K, where K is the carrying capacity, the logistic trend is characterized by an initial period of fast growth followed by a subsequent phase of decelerating growth [31]. The point of inflection, around which the phases of fast and delayed growth are symmetric, lies halfway between the two asymptotes [33]. When fitted to Eq (2.1), growth rate g can be modeled as
$$g(N) = r\left(1 - \frac{N}{K}\right) \tag{2.3}$$
where r is the specific growth rate [22]. For example, Vaghi et al. [31] experimentally determined r from the specific growth rate (1/N)(dN/dt) in a logistic analysis of animal tumor model datasets. As N approaches K, g approaches 0, and as N approaches 0, g approaches r, at a pace determined by the ratio of population size to carrying capacity.
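A short sketch (plain Python with illustrative parameter values, not fitted data) contrasts the logistic rate of Eq (2.3) with the model's closed-form solution; a forward Euler integration of Eq (2.1) using this rate reproduces the sigmoid saturation at the carrying capacity:

```python
import math

def logistic_g(n, r, k):
    """Specific growth rate g(N) = r * (1 - N/K) from Eq (2.3)."""
    return r * (1.0 - n / k)

def logistic_exact(t, n0, r, k):
    """Closed-form solution of dN/dt = g(N) * N for the logistic rate."""
    return k / (1.0 + (k / n0 - 1.0) * math.exp(-r * t))

# Illustrative values: initial population 10, rate 0.3, carrying capacity 1000.
n0, r, k = 10.0, 0.3, 1000.0
n, t, dt = n0, 0.0, 1e-3
while t < 40.0:
    n += dt * logistic_g(n, r, k) * n  # forward Euler step of Eq (2.1)
    t += dt
# By t = 40 the numerical trajectory has nearly saturated just below K.
```

The rate vanishes exactly at the carrying capacity (`logistic_g(k, r, k) == 0`), which is what halts growth at the upper asymptote.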
Heesterman et al. [24] examine various growth curves on in vivo patient case data of paragangliomas, finding that the logistic model fits the data as well as other decelerating growth laws. Unlike the other models, though, the logistic model is not derived from biological concepts, nor does it have a physiological basis. This makes it a capable, but not optimal, model to depict tumor growth trends.
Nevertheless, logistic growth is still successfully applied to a variety of tumor growth data, such as in vitro cervical cancer growth, because of its practical benefits. Dimitriou et al. [26] proposed a classical single-cell population model, a type of logistic growth model with an additional term for effective tumor cell death. Among all models tested, classical logistic growth proved to be the best fit for the tumor data when there were little to no treatment interventions. Another study by Dimitriou et al. [34] also incorporates logistic growth for individual cells in a hybrid model by adding a growth term s u(1 − u), where s represents the growth constant of the cell and u represents the spatiotemporal evolution of the cell. By including logistic growth, the authors accurately model a 3D in vitro breast cancer culture system.
The classical assumption of symmetry between the accelerating and decelerating phases is not realized in many growth processes, and it becomes restrictive when applied to cancerous cell proliferation. Figure 2 showcases the constraint of earlier data points on the pace of growth deceleration [31]. Because the initial growth is sometimes far more rapid than in the cases where the logistic function holds, it cannot always conform to trends of tumor cell proliferation.
A variation of the logistic model is the generalized logistic model, or the Spratt model, with the small modification that
$$g(N) = r\left(1 - \left(\frac{N}{K}\right)^{v}\right) \tag{2.4}$$
with the additional parameter v bounded by [0,1] [24]. When v→1, the function reduces to classical logistic growth. The revised function makes the generalized logistic model more flexible to data alterations than its predecessor [30]. Benzekry et al. [32] showcased this using in vivo human breast and lung cancers xenografted in mice. They found that this model has high descriptive power because its flexibility allows it to adapt to growth curves for the best fit. However, this becomes a disadvantage when extrapolating future growth trends, where model rigidity is prioritized, so the generalized model has low prediction scores.
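To make the relationship between Eqs (2.3) and (2.4) concrete, the sketch below (illustrative values only) evaluates the generalized rate and confirms that v = 1 recovers the classical logistic rate, while a smaller v slows growth at the same population size:

```python
def generalized_logistic_g(n, r, k, v):
    """Generalized (Spratt) logistic rate g(N) = r * (1 - (N/K)^v) from Eq (2.4)."""
    return r * (1.0 - (n / k) ** v)

def classical_logistic_g(n, r, k):
    """Classical logistic rate g(N) = r * (1 - N/K) from Eq (2.3)."""
    return r * (1.0 - n / k)

# Illustrative values: rate 0.3, capacity 1000, current population 250.
r, k, n = 0.3, 1000.0, 250.0
g_v1 = generalized_logistic_g(n, r, k, 1.0)    # identical to the classical rate
g_half = generalized_logistic_g(n, r, k, 0.5)  # for N < K, smaller v brakes growth earlier
```

The extra exponent is exactly the source of the model's flexibility: one additional parameter reshapes how quickly deceleration sets in relative to the carrying capacity.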
The logistic growth curve can successfully model, and in some settings predict, tumor growth. Huber and Mistry [27] and Atuegwu et al. [35] demonstrate this when exploring tumor responses to treatment therapies with logistic modeling. Despite this, the approach lacks both a biological foundation and robust predictive power, the latter of which becomes essential when forecasting tumor development during maturation or treatment.
The Gompertzian model was first constructed in 1825 to understand human mortality trends before Anna Laird applied it to cancerous cell growth in 1964 [24,36,37]. Now a popular growth curve for tumor development, the Gompertz curve approaches growth as time-dependent (decreasing with increasing time) rather than density-dependent. Available nutrients deplete alongside the population, allowing for sustainable yet limited food sources [22].
Its behavior deviates from logistic models by using an exponential decrease of the specific growth rate, rather than a linear decrease proportional to population N. The growth rate now decays at a more accelerated pace, which Vaghi et al. [31] estimate as
$$\frac{dg}{dt} = -\beta\lambda e^{-\beta t} \tag{2.5}$$
where β represents the rate at which the specific growth rate decays over time [31]. Thus, the function of the population proliferation rate is represented by
$$g(t) = \lambda e^{-\beta t} \tag{2.6}$$
where λ is the maximal growth rate that bounds g [22].
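Because Eq (2.6) depends only on time, Eq (2.1) can be integrated in closed form: N(t) = N0 * exp((λ/β)(1 − e^(−βt))). The brief sketch below (illustrative parameters, not fitted values) evaluates this trajectory and its plateau N0 * e^(λ/β):

```python
import math

def gompertz_n(t, n0, lam, beta):
    """Closed-form solution of dN/dt = lam * e^{-beta t} * N (Eqs 2.1 and 2.6):
    N(t) = n0 * exp((lam / beta) * (1 - e^{-beta t}))."""
    return n0 * math.exp((lam / beta) * (1.0 - math.exp(-beta * t)))

# Illustrative parameters: initial size 100, maximal rate 0.5, decay constant 0.1.
n0, lam, beta = 100.0, 0.5, 0.1
plateau = n0 * math.exp(lam / beta)  # limiting tumor size as t -> infinity
late = gompertz_n(200.0, n0, lam, beta)
```

Unlike the logistic curve, the deceleration here is driven by elapsed time rather than by the population's distance from a fixed carrying capacity.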
The Gompertzian sigmoidal curve expresses initial exponential growth that eventually converges to a limiting value. Figure 2 depicts this S-shape, which, unlike the logistic curve, is not symmetric around its inflection point; the inflection point is still related to the asymptote, but the phases of growth on either side are not symmetric [6]. This absolves the model from the restriction that limited the capabilities of the classical logistic function. Figure 2 also shows the remarkable predictive power of Gompertzian modeling for this experimental setup, a success that has been replicated numerous times in other studies [6,20,31,33,38,39].
The Gompertzian form can be derived from the biological mechanisms of cellular kinetics. Laird initially defended the model by noting that only a fraction of cells divide exponentially at a given time and that tumor cells rotate through the cell cycle at a decelerating speed; hence the success of a model that adapts both exponential and logistic traits [36]. Frenzen and Murray [38] explored this biological background further through cell kinetics models. They observed that as the total number of cells increases, individual maturation rates decrease, and the population asymptotes to a bounded value or equilibrium size. This kinetics foundation provides a biologically plausible explanation for why the Gompertz model has been so successful in mathematical oncology. However, the spontaneously decelerating growth rate integral to these justifications does not have a definitive physiological basis [32]. Like the logistic approach, the model remains capable of describing tumor development in theoretical and clinical contexts without a biological foundation for its significance.
The Gompertz model has been successful in fitting experimental data, simulating patient-specific cases, and predicting future growth. Its robustness and adaptability establish it as a mathematical tool appropriate for more complex cancer systems. One such application is a cancerous environment with an emerging tumor subsystem that has a higher proliferation rate than the parent strain. Although tumor heterogeneity is typically disregarded in favor of general global trends, Castorina et al. [20] focused on modeling this growth instability to understand its impact on therapeutic efficacy and patient survival. The paper constructs a two-population dynamic model in which both populations N1 and N2 adhere to Gompertzian growth laws. The size of one population of cells is modeled as
$$N_i(t) = N_i(t^*)\, e^{-\omega_i(t)(t - t^*)} \tag{2.7}$$
where Ni(t) is the size of population i at time t, Ni(t∗) is its size at the onset of treatment t∗, and ωi(t) represents the corresponding rate of cell reduction through treatment [20]. The rate is simplified to a constant ω1 for the parent population N1 so that the subpopulation N2 can be approximated with a different proliferation rate and response to therapy:
$$\omega_2(t) = \omega_1 - \beta\left(\frac{1 - e^{-K(t - t^*)}}{K(t - t^*)}\right). \tag{2.8}$$
The population N1(t) decreases exponentially at the constant rate ω1, while the subpopulation N2(t) adheres to the time-dependent rate ω2(t) [20]. This establishes the nontrivial boundary condition ω2(t∗)=ω1−β, where β is an equation parameter. With Eq (2.8), the authors can explore the delay of N2 onset, different possible replication rates, and the size of the N2 system alongside clinical data of breast cancer survival probability. Here, the Gompertzian formula has been adapted to approach more complex tumor masses like cancerous subsystems.
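The two-population construction can be sketched numerically (with hypothetical parameter values, not those fitted by Castorina et al.): evaluating Eq (2.8) at the treatment onset recovers the boundary condition ω2(t∗) = ω1 − β, and at late times ω2 approaches ω1:

```python
import math

def omega2(t, t_star, omega1, beta, k):
    """Time-dependent reduction rate of the subpopulation, Eq (2.8)."""
    tau = t - t_star
    if tau == 0.0:
        return omega1 - beta  # limiting value at t = t*, since (1 - e^{-x})/x -> 1 as x -> 0
    return omega1 - beta * (1.0 - math.exp(-k * tau)) / (k * tau)

def population(t, t_star, n_star, omega):
    """Population size under treatment, Eq (2.7), evaluated for a fixed rate omega."""
    return n_star * math.exp(-omega * (t - t_star))

# Hypothetical values: omega1 = 0.4, beta = 0.2, K = 0.5, treatment onset t* = 0.
omega1, beta, k, t_star = 0.4, 0.2, 0.5, 0.0
w_start = omega2(0.0, t_star, omega1, beta, k)   # equals omega1 - beta
w_late = omega2(50.0, t_star, omega1, beta, k)   # close to omega1
```

Because ω2 remains below ω1 at every finite time, the subpopulation declines more slowly than the parent population, which is what allows it to eventually dominate under treatment.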
Reducing the standard model to one population parameter and one individual parameter, Vaghi et al. [31] found that the Gompertz curve coupled with Bayesian estimation produces an accurate and precise fit of in vivo breast and lung tumor data in animal subjects. It displays strong predictive power, exceeding the capabilities of the logistic form. Tatro [28] also successfully fits the Gompertz equation on a patient dataset of untreated breast cancer cases, accurately modeling the data with the classical model. Benzekry et al. [32] confirmed the Gompertzian model's high descriptive and predictive power for its lung and breast cancer datasets. Another application of this versatile model is to parameterize the internal variability that drives the time-evolution of tumor growth, as seen in Chignola et al. [39]. A stochastic Gompertzian adaptation, this model includes a term for additional noise that mimics the intrinsic variability of growth, without needing a probability time evolution of possible outcomes. This modification resizes the upper limit of the maximum possible population volume, and by extension, the largely accepted understanding of tumor growth potential.
One open avenue of interest for Gompertzian modeling is validating the integrity of the model at tumor initiation. Vaghi et al. [31] operated with late time point data, Tatro [28] models the dataset with information on the whole tumor lifespan, and Chignola et al. [39] simulated tumor spheroid over a similarly long duration. However, patient cases are not always caught after maturation, but often in a developing stage. Studies should be performed to determine whether the Gompertz model remains a successful fit for early stages of tumor growth. More specifically, the model should be validated for circumstances when the growth rate has not decelerated significantly and the cancerous population is still rapidly proliferating.
An under-researched yet promising ODE approach is the Bertalanffy model, which has shown a strong fit and possesses a physiological basis for its parameters. Originally formulated as a model for organism growth, it follows the fundamental energetic principles of metabolic growth and decline [40]. The Bertalanffy trend assumes that proliferation and cell death are responsible for tumor size and development, as cells continuously divide and die off. The basic formula for this simplistic growth curve is as follows:
$$g(N) = aN^{\gamma} - bN \tag{2.9}$$
where a and b are constants for the growth and loss terms, respectively. The model is structured so that proliferation increases in proportion to surface area, while loss of tumor mass due to cell death is in proportion to tumor volume [23,25]. The specific case under which the Bertalanffy model is commonly applied sets the metabolic scaling exponent to γ = 2/3 and the cell loss factor to b = 1. However, recent studies such as Renner-Martin et al. [41] find considerable variability in the value of the exponent, and this may encourage new studies to reevaluate parameter values for every context.
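The equilibrium behavior of Eq (2.9) can be checked with a simple forward-Euler integration. With the classical surface-area scaling γ = 2/3, growth and loss balance at N = (a/b)^3. The parameter values below are illustrative (and deliberately not the b = 1 special case mentioned above).

```python
def bertalanffy_step(N, a, b, gamma, dt):
    """One forward-Euler step of Eq (2.9): dN/dt = a*N**gamma - b*N."""
    return N + dt * (a * N**gamma - b * N)

def simulate(N0, a, b, gamma=2.0 / 3.0, dt=0.01, steps=200000):
    """Integrate the Bertalanffy growth law; gamma = 2/3 is the
    classical surface-area-scaled proliferation case."""
    N = N0
    for _ in range(steps):
        N = bertalanffy_step(N, a, b, gamma, dt)
    return N

# With gamma = 2/3, surface-scaled growth and volume-scaled loss
# balance at N* = (a/b)**3; here that is (1.0/0.1)**3 = 1000.
N_final = simulate(N0=1.0, a=1.0, b=0.1)
```

This also illustrates the concern raised by Murphy et al. [23]: if fitting drives b toward 0, the loss term vanishes and the trajectory grows without any plateau.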
The Bertalanffy model has displayed strong predictive and descriptive power in simulating tumor growth curves. Heesterman et al. [24] found that, out of seven models fit to three time points of tumor volume data, the Bertalanffy function provides an equally strong fit to the Gompertz, logistic, and Spratt models tested. Additionally, it demonstrated its extrapolation abilities by presenting a realistic age of onset and predicted development for various cases. Murphy et al. [23] also find that, when fitting in vivo data from animal model xenografts, this model has the lowest SSR compared to the exponential, Mendelsohn, logistic, linear, surface, and Gompertz curves. The function has the flexibility to describe the available data despite its simplistic nature. However, the authors contradict the former study by remaining skeptical of the model's predictive capabilities: when fit to data, parameter b is reduced to 0, eliminating the influence of cell death on tumor volume and presenting biologically unrealistic tumor sizes.
Diebner et al. [42] constructed a mathematical model that is founded in the biological principles of tumor development, considering both allometric scaling and population selection tendencies, in their attempt to simulate tumor multiclonal competition. The authors' model is compatible with the traditional Bertalanffy growth model, and it preserves both the metabolic foundation and the captured complexity of the original function. Although the study's model has not been rigorously tested and validated, the fact that a biologically motivated function agrees with the Bertalanffy model structure indicates the potential for the classical case, and for any growth trend rooted in the physiological context of tumor development.
Despite the Bertalanffy model's simple structure, strong model fit, and connection to biological mechanisms of tumor growth, little research has been performed to ascertain the best contexts and scenarios for the growth curve. Future studies should focus on tailoring the model to specific tumor cases, such as tumor invasion, angiogenesis, heterogeneity, and metastasis. These findings would demonstrate the function's applicability to different tumor conditions and different experimental settings.
We have already demonstrated different approaches to modeling tumor cell proliferation with various continuum simulations spanning multiple scales. These models require large assumptions across model parameters to apply to personalized clinical situations [43]. In comparison, the reaction-diffusion PDE can be used macroscopically, adapting information from medical images to perform powerful simulations of the tumor mass.
Wong et al. [43] offers the following equation:
$$\frac{\partial N}{\partial t} = \operatorname{div}(D\nabla N) + \rho N\left(1 - \frac{N}{K}\right) \tag{2.10}$$
where N is the tumor cell population, K is the carrying capacity, ρ is the proliferation rate, and D is the anisotropic diffusion tensor. This tensor holds the diffusion coefficients along each of the three spatial (x-, y-, and z-) directions, capturing the invasive spread of the tumor into healthy host tissue. The first term of the equation represents tumor invasion based on this tensor, and the second term models logistic cell proliferation.
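A minimal explicit finite-difference sketch of Eq (2.10) is shown below. For simplicity the anisotropic tensor D is reduced to a single scalar diffusivity, the grid is 2D rather than 3D, and zero-flux boundaries are assumed; all values are illustrative.

```python
def step(N, D, rho, K, dx, dt):
    """One explicit finite-difference step of Eq (2.10) on a 2D grid,
    with the diffusion tensor reduced to a scalar for this sketch and
    zero-flux (reflecting) boundary conditions."""
    rows, cols = len(N), len(N[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            up    = N[i - 1][j] if i > 0 else N[i][j]
            down  = N[i + 1][j] if i < rows - 1 else N[i][j]
            left  = N[i][j - 1] if j > 0 else N[i][j]
            right = N[i][j + 1] if j < cols - 1 else N[i][j]
            lap = (up + down + left + right - 4 * N[i][j]) / dx**2
            growth = rho * N[i][j] * (1 - N[i][j] / K)  # logistic reaction
            out[i][j] = N[i][j] + dt * (D * lap + growth)
    return out

# Seed a small tumor focus at the center (carrying capacity K = 1, so
# values are normalized densities) and let it diffuse and proliferate.
grid = [[0.0] * 21 for _ in range(21)]
grid[10][10] = 0.1
for _ in range(200):
    grid = step(grid, D=0.5, rho=1.0, K=1.0, dx=1.0, dt=0.1)
```

The chosen time step satisfies the explicit-scheme stability bound (dt · D · 4/dx² = 0.2 < 0.5), and the logistic reaction keeps every cell density within [0, K], reproducing the bounded invasion front discussed in the text.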
The reaction-diffusion model is often seen as a system of coupled equations. For example, Frieboes et al. [15] takes advantage of the macroscopic lens that reaction-diffusion models offer to focus on cellular interactions rather than kinetics. The authors construct a closed system of PDEs to simulate three microenvironment conditions in their in vitro model: nutrients, growth factor (GF) diffusion, and O2. Focusing on the environmental influences on tumor invasion, Gatenby and Gawlinski [44] model the spatial distribution and temporal evolution of host cell density, tumor cell density, and excess concentration of H+ ions, with a set of PDEs. By exploring the dynamics and structure of the tumor-host interface, authors find that a tumor-induced disturbance of the homeostatic pH (a change to which the normal cells are intolerant) acts as a mechanism for cancer invasion. Figure 3 notably shows a drop in intracellular volume fractions (ICVF) at the boundary of the tumor growth, where the cell-to-volume ratio is low. This front of expansion into the healthy tissue is a prominent research topic in cancer modeling. Looking exclusively at these borders, Cristini, Lowengrub, and Nie [45] establish boundary-integral simulations to find that environmental factors are critical conditions for determining the invasive potential of the tumor.
Researchers have used reaction-diffusion equations for coupling simulations with patient imaging, applying their three-dimensional modeling capacity, investigating growth as a biomechanical function of surrounding tissue, and integrating data from longitudinal cancer cases [46,47,48,49,50,51]. This approach is especially popular in glioma modeling where the tumor cell-density function can effectively reconstruct the diffuse nature of invading tumor cells in the brain, in comparison to medical imaging that cannot depict these borders at a cellular level [52]. Reaction-diffusion growth models for glioma cases are used in conjunction with medical imaging, such as in Jbabdi et al. [53] and Konukoglu et al. [54], or in place of this clinical tool, such as in Clatz et al. [47] and Hogea et al. [46]. In the former circumstance, the authors implement medical imaging information to calculate parameter estimates, from which they can build glioma growth models. In the latter cases, imaging validates the in silico simulations of invasion and expansion. Reaction-diffusion equations in all examples prove to be a successful approach to depict the spatiotemporal evolution of glioma cell densities.
A significant obstacle to this modeling approach is rooted in its applicability to in vivo human glioma cases. The extensive literature on building personalized reaction-diffusion models creates a demand for patient-specific parameters for simulations and treatment plans, but the model framework carries free parameters that are not easily accessible through current imaging tools. In practice, quantities such as tumor cell density, diffusion rate, and proliferation rate are often estimated to produce findings, or the model itself is simplified to reduce error margins from inexact measurements. Konukoglu et al. [49] estimated tumor cell diffusivity at the tumor boundary from magnetic resonance (MR) and CT imaging but left the proliferation rate constant for all cases. Subramanian et al. [55] estimated tumor origin, diffusivity, and proliferation from one data time point, but only for dimensionless parameters that cannot be used in a personalized patient context. Scheufele et al. [56] identified the exact parameters for proliferation rate, migration rate, and tumor origin, but they do not estimate the density distribution of tumor cells at the time of diagnosis. Tunc et al. [57] calibrated three reaction-diffusion models from MR imaging clinical data to find that the model with mass effect provided the most accurate prediction of glioma growth. However, the computational load and complexity of the model forced the authors to restrict it to 2D reconstructions. More recently, Martens et al. [52] employed a deep learning-based approach to establish parameter values for patient-specific reaction-diffusion modeling. Trained from only two imaging time points over a large dataset of synthetic tumors, the personalized model presents accurate reconstructive simulations and predictions for glioma growth.
However, this model has not been validated on real patient data, nor does it include the tumor mass effect of pressure and displacement, a significant factor in solid tumor mechanics [57].
Focusing on clinical imaging data from pancreatic neuroendocrine tumors, Wong et al. [43] remedied this parameter estimation problem by non-dimensionalizing the algorithm: dividing both sides by K and transforming values into their ratios to the carrying capacity of the population:
$$\frac{\partial \theta}{\partial t} = \operatorname{div}(D\nabla\theta) + \rho\,\theta(1 - \theta) \tag{2.11}$$
where θ = N/K represents the tumor cell ratio. With this transformation, Wong et al. created easily attainable parameters: the ratio θ is proportional to the total space occupied by the tumor, so it can be approximated to the proportion of intracellular space to tissue volume, or the ICVF [43]. The authors take advantage of contrast-enhanced CT imaging to calculate the ICVF of the tumor from the baseline blood tissue ICVF. Equation (2.11) can then be simulated, refined with further scans, and personalized to a patient's needs and treatment schedule. By establishing free parameters that can be derived directly from medical imaging, the authors create a foundation for patient-specific simulations.
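The equivalence underlying this non-dimensionalization can be verified numerically: dividing Eq (2.10) through by K must give trajectories where θ = N/K evolves exactly as Eq (2.11). The 1D sketch below (scalar diffusivity, zero-flux boundaries, illustrative parameters) runs both forms side by side.

```python
def laplacian(u, dx):
    """Discrete 1D Laplacian with zero-flux boundaries."""
    n = len(u)
    return [((u[i - 1] if i > 0 else u[i]) +
             (u[i + 1] if i < n - 1 else u[i]) - 2 * u[i]) / dx**2
            for i in range(n)]

def step_N(N, D, rho, K, dx, dt):
    """Dimensional model, Eq (2.10), in one spatial dimension."""
    lap = laplacian(N, dx)
    return [N[i] + dt * (D * lap[i] + rho * N[i] * (1 - N[i] / K))
            for i in range(len(N))]

def step_theta(th, D, rho, dx, dt):
    """Non-dimensionalized model, Eq (2.11), where theta = N / K.
    Note that K no longer appears as a free parameter."""
    lap = laplacian(th, dx)
    return [th[i] + dt * (D * lap[i] + rho * th[i] * (1 - th[i]))
            for i in range(len(th))]

# Run both simulations from matched initial conditions; the carrying
# capacity K below is an illustrative value, not a measured one.
K = 2.5e8
N = [0.0] * 41
N[20] = 0.05 * K
theta = [v / K for v in N]
for _ in range(100):
    N = step_N(N, D=0.4, rho=0.8, K=K, dx=1.0, dt=0.1)
    theta = step_theta(theta, D=0.4, rho=0.8, dx=1.0, dt=0.1)
```

The two trajectories agree cell-by-cell up to floating-point rounding, which is precisely why θ (and hence the imaging-derived ICVF) can replace the unobservable pair (N, K).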
Figure 3 shows how the simulated ICVF image informs about the macroscopic shape and nature of the mass in the surrounding environment. This demonstrates the tumor mass effect and invasion without participating in the trade-off between realism and computational efficiency.
Reaction-diffusion modeling has successfully simulated microenvironmental conditions, fronts of expansion, and invasive mechanisms. Although a considerable portion of literature surrounding reaction-diffusion modeling relates to patient specificity in model applications, the PDE approach forces significant trade-offs in estimating important parameters, simplifying simulation complexity, or neglecting biophysical phenomena. For glioma research especially, reaction-diffusion models suffer from the lack of clinically available values that are integral to the simulations' success. Further research should apply successful cases of non-glioma tumor modeling, like Wong et al. [43], to construct personalized glioma reaction-diffusion models. Increased patient-specificity and model robustness would improve the diagnosis and treatment of cancer cases in clinical settings.
Continuum models in general face a common set of difficulties. The heterogeneity of the tumor subject, especially regarding individual cell phenotypes and behaviors, is often neglected. Focusing the model on cell-scale (20 μm) rather than tissue-scale (100–200 μm) resolution would contradict the premise behind the continuum approach, and therefore cannot be performed with biological significance [58]. Also, calibrating these models to experimental data, especially when parameters carry more than one biological meaning, requires iterative trial-and-error until the resultant growth curve corroborates the given data points. The discrete models described in Section 3 address these obstacles by examining a cellular perspective, from which global tumor-level trends can be understood.
Discrete modeling deviates from the continuum approach by considering time and space as discrete systems, inside which cells adhere to predefined rules of motility and proliferation. With the initial condition of the system x_0 assigned to t = 0, every additional time-step x_{n+1} (where the unit of time is biologically relevant to tumor growth) observes a change, so that dynamical phenomena can be simulated as a series of steps. Figure 4 visualizes this discrete-time system, where both time and cellular agents are modeled as distinct quantities [59].
By constraining x_{n+1} to depend only on x_n, a rudimentary equation can be constructed:
$$x_{n+1} = f(x_n) \tag{3.1}$$
where the function f simulates population growth with a desired behavior [60]. With this foundation, a discrete algorithm could take into consideration cell identities (cell cycle phases and stem cell stages) and the transition rates between states (proliferative capacity, quiescence, necrosis, apoptosis) [5]. Or, the equation may only take into account the number of living tumor cells at a given time. Kamel [61] considers this approach in their generic discrete-time system of equations, one part of which is demonstrated in Eq (3.2):
$$x_{n+1} = s_1 x_n\left(1 - \frac{x_n}{q_1}\right) - p_{12}\,x_n y_n - p_{13}\,x_n z_n. \tag{3.2}$$
Here, x represents the number of cancer cells, y the healthy host cells, and z the immune cells. s_1 denotes the free growth rate of the cancer cells, q_1 the maximum carrying capacity, p_{12} the death rate from host cells, and p_{13} the death rate from immune cells. With these parameters, the discrete equation models the tumor population as it depends on the cellular growth rate (first term, positive), the death rate from neighboring host cells (second term, negative), and the death rate from immune cells (third term, negative). The author assembles this difference equation, along with two more modeling the temporal evolution of the healthy and immune cells, to demonstrate the chaotic dynamics of a discretized cancer system.
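Equation (3.2) can be iterated directly as a discrete-time map. The sketch below implements only the cancer-cell equation; since the review does not reproduce Kamel's companion equations for y and z, the host and immune populations are held constant here, and all parameter values are illustrative.

```python
def cancer_step(x, y, z, s1, q1, p12, p13):
    """One step of Eq (3.2):
    x_{n+1} = s1*x_n*(1 - x_n/q1) - p12*x_n*y_n - p13*x_n*z_n.
    The companion equations for y and z are not reproduced in this
    excerpt, so both populations are frozen at fixed values."""
    return s1 * x * (1 - x / q1) - p12 * x * y - p13 * x * z

# Illustrative parameters: logistic-style free growth, damped by
# interactions with host (y) and immune (z) cells.
s1, q1, p12, p13 = 2.5, 1.0, 0.3, 0.2
x, y, z = 0.1, 0.5, 0.5
trajectory = [x]
for _ in range(50):
    x = cancer_step(x, y, z, s1, q1, p12, p13)
    trajectory.append(x)
```

With these values the effective map is x → x(2.25 − 2.5x), which settles onto the stable fixed point x* = 0.5; raising s_1 pushes the map toward the period-doubling and chaotic regimes the author studies.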
Like continuum modeling, discrete-time models that transform biological phenomena into mathematical expressions can be adapted into reconstructive or predictive simulations. This format of modeling provides exceptional insight into tumor microstructure, including heterogeneity in density and cell types. By reconstructing (rather than averaging or simplifying) individual cellular dynamics, the discrete-time system informs about intra-tumor changes and patterns throughout the cancer's lifespan [61]. These algorithms require a predetermined set of rules that methodically make and execute decisions at every time step. A simplified model of discretization logic is shown in Figure 5, which is loosely based on the more specialized simulation described in Kolokotroni et al. [5].
Alongside these models' descriptive and predictive capabilities, there are additional considerations and obstacles to determining the best model for a specific context due to the nature of the discrete approach. These include assessing the availability of parameters for a specific circumstance, determining the computational load or complexity needed, and considering a lattice-based or lattice-free approach. The cellular scale that most discrete models operate in requires parameter values that are not easily attainable in a patient setting, so simulations are often confined to theoretical applications or in vitro wet-lab experimental procedures. Computational load and complexity of the model exist in a trade-off with each other, often leading studies to simplify the number of free parameters or use small population sizes. The last consideration of operating on or off a lattice motivates the outline for this section: Section 4.1 discusses lattice-based cellular automata models, while Section 4.2 explores lattice-free agent-based modeling approaches. Section 4.3 contains an overview of chaos behavior and its potential applications to mathematical oncology.
Cellular automata models consider agents as confined to a regular spatial lattice, disregarding cellular properties but maintaining cellular identities [21]. In the context of tumor growth dynamics, this approach mathematically formalizes the stochastic nature of cell kinetics to estimate population-level dynamics [62].
Figure 6 visualizes the concept of cancer cells occupying grid points on a square lattice [62]. Each cell retains identity parameters, including cell cycle stage, proliferation potential, and migration potential. When cells migrate or divide to form two daughter cells, they require space adjacent to them. If the tumor environment is unsaturated, cell movements into vacant spaces occur at random, as seen in Figure 8. These vacant spaces may be occupied by migrated tumor cells or new daughter cells at every step in the simulation. Saturated spaces where the cell is unable to move or proliferate lead to the cell becoming quiescent. Cell transition into quiescence or spontaneous death is recalculated and simulated at every time step [62].
This modeling approach originates from the Game of Life, a mathematical machine invented by John Conway in 1970 [63]. The game's premise is to develop a logical structure that contains the instructions to replicate itself. Established within a 2D square lattice, cells assume one of two statuses, living or dead. At every timestep, the cell state is updated depending on its previous identity and the identities of cells in its immediate neighborhood. Game of Life rules entail that all neighbors of a candidate cell are treated equally (a Moore neighborhood) and that any random configurations must stabilize or exhibit bounded growth eventually. Conway's concept of a lattice configuration, where agent behaviors are determined by their local environment and re-evaluated every generation, later became the abstract mathematical representation for the cellular automata approach.
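The Game of Life update rule described above is compact enough to state in full. The sketch below uses a sparse set-of-coordinates representation (an implementation choice, not from the cited source): a dead cell with exactly three living Moore neighbors is born, and a living cell survives with two or three.

```python
from collections import Counter

def life_step(live):
    """One synchronous Game of Life update. `live` is a set of
    (row, col) coordinates of living cells; all eight Moore
    neighbors are weighted equally, as in Conway's rules."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period two.
blinker = {(1, 0), (1, 1), (1, 2)}
after_one = life_step(blinker)   # becomes the vertical column
after_two = life_step(after_one) # returns to the horizontal row
```

The same pattern, a local neighborhood rule re-evaluated synchronously every generation, is what tumor cellular automata generalize with richer cell states (proliferative, quiescent, necrotic) and stochastic transitions.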
In cellular automata, similar automata, or agents, are connected in a regular pattern. Although these agents typically represent individual cells, they may instead represent clusters of heterogeneous cells, as in Kolokotroni et al. [5]. In this approach, the tumor cells within clusters are simulated according to their cell identity (stem, apoptotic, necrotic), which determines the creation, deletion, or preservation of their geometric cell. The homogeneity condition for the tumor, along with the cluster-based approach, simplifies an otherwise computationally intensive stepwise simulation.
Although the cellular automata approach presents a common set of obstacles when weighing computational efficiency against complexity, recent studies have navigated this difficulty differently. For example, the amount and type of information known a priori can help determine the degree of complexity that the model can possess. Garcia-Morales et al. [64] constructed a cellular automata model with only two free parameters: the strength of the global coupling of cells due to confinement pressure, and the strength of intercellular coupling, where Game of Life rules govern local dynamics. By simplifying the simulation to only two inputs, the authors assign the first parameter to the type of tumor modeled and the second parameter to growth inhibition, tumor volume, and other phenomena influenced by mechanical pressures. To further streamline this simulation, the authors establish a lattice size beforehand to preserve memory load, optimize resources, and ensure consistency of results for all studies they attempt to reproduce. The paper successfully simulates the microscopic growth dynamics of emergent tumors, taken from previous studies' experimental observations. While this model's simplicity lends itself to the ability to predict future growth for various datasets, Migliaccio et al. [65] favor increased complexity by simulating infinite grid space with dynamically expanding boundaries. Like the previous model, these authors require two free parameters to predict migration rates and simulate the spatiotemporal evolution of their samples. However, in contrast to Garcia-Morales et al., these authors sacrifice the optimization of resources for higher grid resolution and accuracy, and as a result, the model requires extensive calibration with in vitro data of tumor and control cell line assays to ensure outputs are accurate and meaningful.
Having prior access to specific information, such as the appropriate lattice size for the simulation, proves to be a difficult endeavor in certain cases. Valentim et al. [66] construct and run a cellular automata model through different scenarios, such as growth from a stem cell, growth from a clonogenic cell, and variable apoptosis rates. The model captures the heterogeneity of individual cells and the expanding boundaries of the tumor region on a 2D lattice. The decision not to encode a constant matrix size is rooted in two factors. First, as in this circumstance, the final tumor size is unknown prior to observing the cancer's development or running the simulation. Second, restricting the matrix borders may lead to an undesirable level of detail or granularity if the domain is too large or too small. Dynamically expanding borders also carry a considerable computational cost, since the model must account for the new spatial configuration at every repetition, and larger lattices strain memory capacity.
A lattice model often indicates that the resulting model or simulation operates on a two-dimensional grid. Migliaccio et al. [65] and Pourhasanzade and Sabzpoushan [67] exemplify this characteristic of most cellular automata models. These 2D simulations lack applicability to circumstances that demand a three-dimensional output, such as investigations into tumor architecture, the extracellular matrix (ECM), and some in vivo environments. 2D explorations are therefore limited to earlier stages of research, including the use of assays. Late-stage research into tumor spheroids and biopsies must therefore work with models that are capable of 3D reconstructions. Santos and Monteagudo [68] provide one example of a cellular automata model with a novel 3D grid environment of in vitro multicellular tumor spheroids. Experimenting with a tissue-like environment, the authors model tumor initiation at a high level of complexity–expanding the grid to a third dimension and incorporating seven free parameters for each cell to understand the effect of mutation rates on morphology.
Additionally, recent studies have investigated ways to minimize computational load while preserving model accuracy. Poleszczuk and Enderling [62] sought to illustrate the cell neighborhood vacancies in a computationally efficient manner with a coded lattice structure, rather than a simple Boolean method. Shown in Figure 7, traditional simple lattices for diffuse tumors store Boolean information for occupied (1) or vacant (0) status in every cell. Especially dense or complex tumors require a more effective representation of data, like the coded array. In this version, each grid cell contains information about the number of vacant spots in the immediate cell neighborhood (0-8 spots, and 9 denoting empty). Both models can illustrate migratory/proliferative cells (white) and quiescent (grey) tumor cells. As cells reach or surpass the current lattice boundary, the grid is extended in the direction of expansion to model dynamically growing domains. In this respect, boundary tumor cells may become significant agents in tumor mass growth and population expansion. With this coded lattice approach and restricting simulations to small populations, the authors can afford an increased computational load with a dynamically expanding lattice.
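The coded-array idea can be sketched as a conversion from a Boolean occupancy grid: each occupied site stores its count of vacant Moore-neighborhood spots (0–8), with 9 marking an empty site. Treating off-grid positions as unavailable is an assumption of this sketch, not a detail taken from the cited paper.

```python
def encode_lattice(occupied):
    """Convert a Boolean occupancy grid into a coded array in the
    spirit of Poleszczuk and Enderling: occupied sites store their
    number of vacant Moore-neighborhood spots (0-8); empty sites
    store 9. Off-grid positions count as unavailable (an assumption)."""
    rows, cols = len(occupied), len(occupied[0])
    coded = [[9] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if not occupied[i][j]:
                continue
            vacant = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di, dj) == (0, 0):
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and not occupied[ni][nj]:
                        vacant += 1
            coded[i][j] = vacant
    return coded

# A 3x3 patch with a single hole in the middle: every occupied cell
# sees exactly one vacant spot, and the hole itself encodes 9.
patch = [[1, 1, 1],
         [1, 0, 1],
         [1, 1, 1]]
codes = encode_lattice(patch)
```

With these codes precomputed, the simulation can skip fully enclosed (code 0, quiescent) cells without re-scanning their neighborhoods at every step, which is the source of the efficiency gain over the plain Boolean lattice.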
Another example is Tanade et al. [69], which extends an existing 2D simulation into a 3D growth model through parallelization. Typically, increasing the dimensionality of a cellular automata model would strain its memory capabilities and disrupt the integrity of the model over long durations. The authors choose to parallelize cell movement through an N-body scheme and cell lattices through halo exchanges, overlapping layers once cells reach the end of their domain and consolidating lattice values of comparable layers. By parallelizing the chosen model, larger and more complex tumor morphology can be investigated efficiently and without burdening the system. This proof-of-concept approach provides an opportunity for existing cellular automata models to increase their applicability to in vivo datasets, patient cases, and biopsies, without drastically straining computational systems.
Cellular automata leverage a regular lattice framework to simulate cell behavior, interactions, and spatiotemporal evolution, providing insights into the cancer system not obtainable by experimental methods or imaging data. However, challenges such as parameter sensitivity and computational demands require careful calibration and optimization. Depending on the amount of information learned a priori, the type of data being reconstructed or validated, and the dimensionality needed to accomplish the research goal, various applications of the cellular automata approach have been proven successful. For any of these models, tools such as a coded array and 3D parallelization can be integrated to increase complexity and lower load constraints. Future avenues of research involve applying these modeling tools to existing simulations to confirm their applicability and scalability to diverse scenarios.
Also known as individual-based models, agent-based models (ABM) simulate cells as free agents that interact with each other and the environment under specific conditions. ABM represents an off-lattice alternative to grid-based models, but the underlying premises remain the same. Simple behavioral rules and initial conditions direct the evolution and movement of agents in a designated space. The Boids model demonstrates how ABM maintains organized patterns despite not having a structured lattice: a cell's behavior is governed by cohesive forces to neighboring cells, separation tendencies to avoid overcrowding, and alignment, where it moves in the same direction as other cells [70]. Natural manifestations of agent interactions found in ABM are animal flocking behavior or swarm intelligence. Similar to cellular automata, these cells are assigned intrinsic physical properties, including their size, cell state, and duration in that state [21].
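The three Boids rules referenced above can be sketched as a single synchronous update for 2D agents. The neighborhood radius and rule weights below are illustrative choices, not values from the cited model.

```python
def boid_step(positions, velocities, r=5.0, w_coh=0.01, w_sep=0.05,
              w_ali=0.05, dt=1.0):
    """One synchronous update of the three Boids rules:
    cohesion (steer toward neighbors' centroid), separation (steer
    away from each close neighbor), and alignment (match neighbors'
    average velocity). Weights and radius are illustrative."""
    new_pos, new_vel = [], []
    for i, (x, y) in enumerate(positions):
        vx, vy = velocities[i]
        nbrs = [j for j, (px, py) in enumerate(positions)
                if j != i and (px - x) ** 2 + (py - y) ** 2 < r ** 2]
        if nbrs:
            cx = sum(positions[j][0] for j in nbrs) / len(nbrs)
            cy = sum(positions[j][1] for j in nbrs) / len(nbrs)
            ax = sum(velocities[j][0] for j in nbrs) / len(nbrs)
            ay = sum(velocities[j][1] for j in nbrs) / len(nbrs)
            vx += w_coh * (cx - x) + w_ali * (ax - vx)   # cohesion + alignment
            vy += w_coh * (cy - y) + w_ali * (ay - vy)
            for j in nbrs:                               # separation
                vx += w_sep * (x - positions[j][0])
                vy += w_sep * (y - positions[j][1])
        new_vel.append((vx, vy))
        new_pos.append((x + vx * dt, y + vy * dt))
    return new_pos, new_vel

# Two nearby agents approaching head-on begin to align and back off.
pos = [(0.0, 0.0), (1.0, 0.0)]
vel = [(0.1, 0.0), (-0.1, 0.0)]
pos, vel = boid_step(pos, vel)
```

The update is read entirely from the previous state (a synchronous scheme), mirroring how off-lattice tumor ABMs recompute neighbor sets and forces at every step without any fixed grid.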
This modeling approach can simulate changes in the microenvironment and cell-level heterogeneity, as well as explore patterns in movement that influence tumor morphology and geometry. To accomplish this, models consider structural changes in DNA; molecular interactions like signaling and nutrient behavior; cellular identities like phenotypic transformations; multicellular relationships like space, motility, and dimensional structure; and tissue/macroscale imaging like drug distribution, tissue pH, hormones, and vascularization [10]. This section will highlight the merits and caveats of a lattice-free approach as a whole, compared to cellular automata or continuum modeling. For a detailed breakdown of ABM subcategorizations, see Liedekerke and Buttenschon [71].
Tumors are in constant communication with the surrounding microenvironment, and the ECM heavily influences the development of cancerous bodies. Whereas cell phenotypes cannot be accurately represented through continuum modeling, ABM address individual cellular identities that affect the environment, which in turn affects tissue mechanics [72]. Kim et al. [73] and Kolokotroni et al. [5] represent examples of adapting continuum approaches to an easily manipulated discretized system: the authors solve PDEs for O2 diffusion, nutrient diffusion, and hormonal distribution, across the tumor. Estrella et al. [74] also notably concluded that increasing the concentration of H+ in the ECM (thereby lowering the pH in the surrounding tissue) acidifies the host regions and increases tumor invasiveness. This multiscalar, high-power benefit makes the ABM approach appealing to simulate and predict cellular competition [9]. El-Kenawi et al. [75] also found from prostate cancer patient case data that acidic environments (lowered pH) influence cell phenotypes to activate available macrophages, which play a key role in tumor progression and development.
Beyond tumor heterogeneity, ABM successfully captures cellular behaviors and stochastic tendencies, such as mutation accumulations. Probabilistic realizations generate a distribution of outcomes, rather than forcing the simulation to over-simplify like some continuous algorithms [9]. Using a branching process model, which takes advantage of mean-field approximations and stochastic uncertainty, Bozic et al. [76] simulated the effect of single-driver mutations that accelerate the rate of clonal expansion at every subsequent time step. Another example is found in Colom et al. [14], who took advantage of deep sequencing to identify key genes susceptible to mutations, which can induce changes in cell phenotypes and therefore overall tumor morphology. This finding helps determine probabilities and possibilities for genetic risks to evolve into masses, and masses to evolve into invasive tumors. By producing a distribution of outcomes, ABM can simulate error-prone sequencing to match the limited depth of current medical technology and imaging, meaning that precise parameters are not required to provide clinically relevant outputs [9].
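The stochastic, distribution-of-outcomes character described above can be illustrated with a toy branching process in the spirit of driver-mutation models. This is a simplified sketch, not the model of Bozic et al. [76]: each cell divides, dies, or stays quiescent every generation, and a division may add a driver mutation that raises the lineage's birth probability. All parameters are illustrative.

```python
import random

def branching_growth(generations, birth=0.45, death=0.45,
                     mutation=0.01, driver_advantage=0.05, seed=1):
    """Toy branching process with driver mutations (illustrative only).
    Each cell is represented by its driver-mutation count; a driver
    raises that lineage's per-generation birth probability."""
    rng = random.Random(seed)            # seeded for reproducibility
    cells = [0]                          # one founder cell, no drivers
    history = []
    for _ in range(generations):
        next_gen = []
        for drivers in cells:
            b = min(0.9, birth + drivers * driver_advantage)
            u = rng.random()
            if u < b:                    # division into two daughters
                child = drivers + (1 if rng.random() < mutation else 0)
                next_gen.extend([drivers, child])
            elif u < b + death:          # cell death: lineage removed
                pass
            else:                        # quiescent this generation
                next_gen.append(drivers)
        cells = next_gen
        history.append(len(cells))
        if not cells or len(cells) > 50000:
            break                        # extinction or size cap
    return cells, history

cells, history = branching_growth(generations=100)
```

Re-running with different seeds yields a distribution of extinction times and expansion trajectories rather than a single deterministic curve, which is the property that lets ABM outputs be matched against noisy, limited-depth sequencing data.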
Additionally, compared to cellular automata, the lattice-free approach enables increased flexibility and spatial heterogeneity, advantages that outweigh the greater uncertainty and computational costs in certain scenarios. Kennedy et al. [77] highlighted their preference for an irregular grid when establishing a framework for basic cell behavior because it allows the cell to determine the level of granularity at different regions in the tumor. The agent-based model enables the emergence of a dynamic spatial structure, a key factor in ensuring biological realism. Even though the off-lattice approach is computationally expensive in determining local neighbor identities and constantly updating the filled space, it is necessary for cases like tumor morphogenesis, interfacial regions, epithelial irregularities, and tissue repair, where tumor shapes are uncertain and cell patterns are unknown. This trade-off between uncertainty and heterogeneity is again found in Jamous et al. [78], which investigates glioma pattern formation within in vivo mouse models. The authors explore how tumor cells follow self-organizing patterns that create the resulting tumor form, and ABM allows for greater spatial heterogeneity to account for the lack of pre-existing information on glioma morphology. The model uses several parameters to determine cellular configurations–in 'flocks' or 'streams'–that cannot be realistically conveyed through a grid format.
The advantage that ABM provides for understanding cellular mechanics is apparent in lattice-free modeling of ductal carcinoma in situ (DCIS). Originating in the milk ducts, this cancerous lesion displays a unique morphology that cannot be accurately conveyed through the low-resolution cellular automata lattice, which imposes restraints on cell arrangements, orientation, and interaction. Many studies that focus on DCIS take advantage of ABM to accurately display the radial tumor dynamics characteristic of the cancer. Macklin et al. [79] developed a personalizable calibration technique that could render an agent-based model available for patient-specific circumstances, using pathology data from a single time point. The authors demonstrate how cellular automata fails to model a key characteristic of DCIS–the phenomenon of tumor cell proliferation when the cell is already surrounded. Agent-based modeling is an optimal choice to simulate highly motile cells or cells that self-organize regularly. Cristini and Lowengrub [58], Butner et al. [80], and Butner et al. [81] also construct and apply agent-based models specifically for DCIS application use. The considerable literature on tumor modeling opens up an interesting pathway for exploration. Agent-based models show great success in simulating the radial geometry of breast ducts, so they have the potential to do the same for other types of tumors with high motility rates and axial developments. Lesions in prostate ductal structures, breast lobules, and epithelial layers could be successfully modeled by lattice-free simulations.
Several examples of open-access software demonstrate the adaptability of the ABM approach. Ghaffarizadeh et al. [82] constructed a 3D agent-based modeling toolkit, understanding cell phenotypes within the context of available substrates and signaling factors in the local microenvironment. This model has since been integrated with a Boolean modeling platform to incorporate physical dimension in the multicellular simulation [83], and it has also been combined with a computational model exploration manager to establish a high-throughput workflow for large, computationally intensive datasets [84].
In comparison to lattice-based models, ABM requires more computational effort because the spatial organization of cells must be reevaluated for every generation of cells, and, unlike some cellular automata, the tumor is simulated in a dynamically growing region. Additionally, this modeling approach's algorithmic nature and sequential dependencies make parallelizing ABM algorithms more difficult to implement. The computational cost of agent-based models also restricts the approach to smaller cell populations. That said, these models are highly effective for scenarios involving irregularly arranged cells, high degrees of cellular movement, and unknown spatial arrangements. For cases like angiogenesis, metastasis, and certain cancers, ABM's flexibility and level of detail make it more suitable than the more restrictive cellular automata alternative. This complexity is also apparent in its ability to model a variety of scales: signaling pathways, cellular phenotypes, and substrate production within the tissue. Recent developments in computational frameworks have also reduced the computational load that hinders the applicability of agent-based modeling to large populations and long durations. Moreover, extending the success of ABM in the DCIS context to similar cancer formations may be a promising avenue of research exploration. Lastly, in Section 4, agent-based modeling is discussed in a joint hybrid approach with continuum simulations. This expands the range of ABM applications, most notably into microenvironment interactions and molecular signaling [8,85,86].
Although discussion of chaos theory and chaotic behavior in tumor growth is beyond the scope of this paper, we find a brief overview of the current literature to be beneficial. Mathematical chaos can simply be defined as a system that is bounded within a specific orbit, is deterministic given the same initial conditions, and amplifies small perturbations from its fixed points into large deviations [87]. Before a system can be classified as chaotic, its signal must be shown to be deterministic rather than stochastic, which typically requires a de-noising step to remove measurement variation.
Several publications have found tumor growth dynamics to exhibit chaotic behavior. Kamel [61] applies Marotto's theorem [88] to nontrivial fixed points to prove the chaotic dynamics of the cancer system, and Saeed et al. [89] discretized data adapted from the continuous AIMS model [90] (which models cell-cell interaction and competition as a set of differential equations). Discrete methods, rather than continuous modeling, are more effective at treating and controlling this unpredictable behavior. Saeed et al. and Letellier et al. [91] found that increasing the growth rate of host cells increases the range in which the global population can fluctuate (between 0 and 1) but decreases the frequency of oscillations between maximal and minimal values. This mathematical foundation informs clinical cases of both host-cell-dominant environments and tumor-cell-dominant environments (faster, more aggressive cancers). The stability of a biological environment and the maintenance of homeostasis rest on the population of host cells rather than the number of tumor cells, a finding supported by Debbouche et al. [92] as well. Additionally, rather than exploring fixed points, these authors performed a non-conventional observability analysis and a topological analysis. This finding creates two significant pursuits for further study: first, a new track for controlling chaos in the cancer system beyond fixed-point and chaotic-attractor analysis; and second, the formulation of a mathematically informed cancer therapy that acts by stimulating healthy host cells.
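The sensitivity to initial conditions described above can be shown with a toy discretized host/tumor competition map. The functional form and parameter values below are assumptions chosen for demonstration and are not taken from the cited models [89,91]; the point is only that a perturbation of one part in a billion grows to a macroscopic divergence within a modest number of iterations.

```python
import math

def step(h, t, r=3.99, s=3.98, a=0.05, b=0.05):
    # logistic growth for each population, damped by its competitor
    # (illustrative map, bounded within the unit interval)
    h_new = r * h * (1 - h) * math.exp(-a * t)
    t_new = s * t * (1 - t) * math.exp(-b * h)
    return h_new, t_new

def max_separation(h0, t0, delta=1e-9, n=100):
    """Largest divergence between two trajectories differing by delta."""
    h1, t1 = h0, t0
    h2, t2 = h0 + delta, t0
    worst = 0.0
    for _ in range(n):
        h1, t1 = step(h1, t1)
        h2, t2 = step(h2, t2)
        worst = max(worst, abs(h1 - h2) + abs(t1 - t2))
    return worst

gap = max_separation(0.4, 0.2)
print(f"max separation from a 1e-9 perturbation: {gap:.3g}")
```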
Chaotic dynamics also manifest very differently (or possibly not at all) in different types of tumors [93]. A possible direction for research is to understand the relationship between chaos dynamics and cancers derived from different origin tissues.
A more recent evolution in this field is the use of hybrid models. These algorithms integrate various approaches to tumor modeling in one simulation–such as continuum and discrete models, or physics-based and data-driven models. Hybrid tumor growth models typically resolve obstacles faced by their component methods, including computational load, complexity, data limitations, and scales of interest. Section 4.1 reviews traditional hybrid models that employ discrete and continuous algorithms to describe tumor growth. Section 4.2 explores a multi-resolution example of mechanistic hybrid models that varies the level of resolution at different tumor regions of interest. Section 4.3 examines a more recently emerging literature of hybrid models that combine physics-based models with artificial intelligence (AI) deep learning algorithms to improve the predictive power and reduce load constraints of the simulation.
By consolidating cellular interactions and global morphological change, hybrid approaches enable the representation of cancerous growth on multiple spatial and temporal scales. They can be used to examine the impact of gene expression/mutation, in/extracellular signaling, cellular migration, multicellular proliferation, and tumor architecture, as well as being applied on and off the lattice structure [94]. Hybrid tumor growth models share their structure with hybrid engineering systems, which explore dynamical systems subject to both continuous-time and discrete-time dynamics [95,96]. In a biological context, tumor dynamics can be approached as a collection of multi-particle systems with agents that interact with one another [97]. Tumor agents can exhibit both discrete and continuous behaviors (biological cells and environmental substrates/nutrients, respectively), and they are in constant communication with each other. Moreover, they hold a degree of uncertainty that is best captured by transition probabilities. These characteristics–contradictory behaviors, interactions, and stochasticity–form the foundation of the hybrid modeling approach [97].
A standard hybrid approach is to model the cells as discrete entities, either on or off-lattice, and the agents in the ECM as continuous concentrations or density fields. Although PDEs provide flexibility in describing tumor growth over detailed spatial and temporal scales, they cannot characterize mechanisms at the cellular scale. Hybrid models incorporate discrete agents into these continuous fields to provide a local and global perspective of tumor dynamics [98]. As a result, stochastic probabilities are preserved on the cellular scale, while environmental trends can be standardized with continuous PDEs to reduce simulation load [9].
Robertson-Tessi et al. [85] examined how variations in the spatial and temporal field create selection pressures for the existing tumor cells. The fluctuations in environmental factors are modeled as reaction-diffusion functions that interact with cellular automata agent cells on a lattice. A standard diffusible molecule has a concentration modeled as
∂C/∂t = D∇²C + f(C, p)    (4.1)
where D is the diffusion constant, and f is the production/consumption function of the molecule, which depends on the concentration C(x) of all extracellular molecules (oxygen, glucose, protons (pH)) and on cell-specific parameters p(x) at position x [85]. The Laplacian operator measures how the local concentration deviates from the average of its immediate neighborhood, driving diffusive flux from regions of surplus toward regions of deficit.
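Equation (4.1) can be integrated numerically with an explicit finite-difference scheme. The sketch below assumes a simple linear consumption term f(C) = -KC in place of the cell-dependent function of [85], on a 2D grid with no-flux boundaries; all parameter values are illustrative.

```python
import numpy as np

D, K, DX, DT = 1.0, 0.1, 1.0, 0.2  # DT <= DX**2 / (4 * D) for stability
N = 50

C = np.zeros((N, N))
C[N // 2, N // 2] = 100.0  # point source of the diffusible molecule

def laplacian(C):
    """Five-point stencil with no-flux (edge-copied) boundaries."""
    Cp = np.pad(C, 1, mode="edge")
    return (Cp[:-2, 1:-1] + Cp[2:, 1:-1]
            + Cp[1:-1, :-2] + Cp[1:-1, 2:] - 4 * C) / DX**2

# forward-Euler update of dC/dt = D*lap(C) + f(C), with f(C) = -K*C
for _ in range(200):
    C = C + DT * (D * laplacian(C) - K * C)

print(f"total molecule mass remaining: {C.sum():.2f}")
```

With no-flux boundaries the diffusion term conserves total mass, so the remaining mass is governed entirely by the consumption term, a useful sanity check on the scheme.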
While this equation provides the foundation for molecular movement, tumor cells are simulated on a lattice grid with one cell type confined to one grid point. Each phenotype undergoes a decision process based on its cell death risk, metabolic state, and nutrient concentrations at that position, obtained by solving Eq (4.1) for each cell position and timestep. As cells gain metabolic/proliferative advantages from substrate conditions, the tumor experiences organizational and morphological changes that, in turn, affect the environment [85]. Frankenstein et al. [99] shared this approach when modeling prostate tumor evolution as a result of stromal interactions. The heterogeneity of the tumor/stroma boundary demands detailed insight into cellular mechanics, so cells are organized on a 2D lattice within a continuum field. Lopez et al. [100] similarly take advantage of the alternating scales to explore tumor-immune interactions, restricting cells to orthogonal motion while the diffusion of nutrients and growth factors is modeled by reaction-diffusion equations. Additionally, Messina et al. [101] investigated mechanisms of invasiveness and growth by discretizing the tumor spheroid on a lattice and modeling the glucose concentration field as a continuous function.
Another interesting application is found in Anderson [86], which simultaneously employs continuous and discrete variables, but treats all within a lattice. The author models one discrete variable (tumor cells) and three continuous concentrations (host tissue, matrix-degradative enzymes (MDE), and O2). The PDE system of these four variables is discretized to find probabilities of an individual cell's behavior in its environment. Through this technique, the author follows the stepwise path of an individual tumor cell. However, certain assumptions must still be made: MDE production and oxygen uptake can only be nonzero at a grid point occupied by a cell, and both terms are assumed to be zero during vacancy. This hybrid approach leans much more heavily into a discrete analysis of tumor growth, and as such, it requires alterations in how oxygen and MDE are modeled to retain their diffusive properties in a lattice if they are to be discretized. Suveges et al. [102] and Gallaher et al. [103] further demonstrate examples of off-lattice cellular entities (allowing greater flexibility and fewer assumptions) within a continuously modeled ECM.
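The discretize-then-sample idea behind this class of models can be sketched in a few lines: for a pure diffusion term, the coefficients of the explicit finite-difference stencil sum to one and can be read directly as probabilities of a cell staying put or stepping to one of its four neighbors. The values below are illustrative only; Anderson's full model additionally biases these coefficients with haptotaxis and ECM terms.

```python
import numpy as np

D, DT, DX = 0.25, 0.5, 1.0  # illustrative values, not the paper's

p_move = D * DT / DX**2   # probability of stepping to each of 4 neighbors
p_stay = 1 - 4 * p_move   # probability of remaining in place
probs = [p_stay, p_move, p_move, p_move, p_move]

moves = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
rng = np.random.default_rng(1)

# random walk of a single cell driven by the stencil-derived probabilities
pos = np.array([0, 0])
for _ in range(100):
    pos = pos + moves[rng.choice(5, p=probs)]

print("final lattice position:", pos)
```

Sampling from these probabilities recovers individual cell trajectories whose population-level statistics match the underlying PDE, which is exactly the bridge between the continuous and discrete scales described above.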
The hybrid approach also allows for detailed insight into tumor vasculature. Whereas many standalone discrete models focus on avascular tumors to simplify an already granular simulation, hybrid models can explore the effects of tumor-induced angiogenesis because they integrate cellular and tissue-level scales [67,104,105]. Stephanou et al. [104], for example, employed a lattice grid to represent tumor cells and vasculature, as well as continuous equations to model the kinetics of growth factors, oxygen, and enzymes. With this model, they successfully simulated tumor-induced vascular changes observed experimentally in in vivo samples. More examples of hybrid modeling of angiogenesis can be found in Kremheller et al. [106], Phillips et al. [107], and Duswald et al. [108].
A unique deviation from the literature on hybrid models is found in Wang and Deisboeck [10], who optimized for a multi-resolution approach. While previous examples consider interactions of both continuum and discrete variables simultaneously at all locations, these authors use different modeling techniques on different regions of interest (ROIs) within a single tumor. The discrete function becomes necessary for small numbers of cells, like at boundary regions. In these regions, discretized agents are more effective at revealing the significant mechanisms of tumor growth. However, the high computational power required to simulate discrete entities weakens the algorithm's applicability if modeling the entire tumor at large. As a result, a continuum approach captures the homogeneous growth of the internal mass, neglecting or averaging cell/gene/protein variables.
This establishes a model formula that is hybrid, multi-scale, and multi-resolution, as shown in Figure 8: High resolution is enabled for ROIs with large datasets requiring maximized predictive power, but the model optimizes macroscopic continuum techniques where it is possible [10]. Although the simulation theory is not yet prepared for clinical application, the proposal offers an extremely effective roadmap of compromise between the advantages of different models. The authors analyze large-scale environmental changes in the tumor, maintaining the efficiency of low computational cost while exploring (sub-)cellular dynamics of tumor-host and tumor-tumor interactions at the boundaries of development.
An interesting research pathway that has increased in popularity over the last five years is the integration of deep learning networks in hybrid tumor growth modeling. Chamseddine and Rejniak [109] provide a detailed overview of hybrid modeling frameworks–namely, tumor growth models that incorporate physics-based models, data-driven models, or optimization models. Physics-based models include the classical discrete/continuous models, optimization models seek to optimize the system regarding a specific criterion, and data-driven models often refer to machine learning approaches.
Recently, the heightened usage of AI has led to the emergence of deep learning (DL) algorithms in tumor development research [98]. This approach involves training deep neural networks to learn patterns from imaging or quantitative datasets, often to extrapolate future behaviors. The availability of medical imaging creates an opportunity for DL algorithms to identify, quantify, and predict patterns in clinical or experimental data better than standalone discrete/continuum hybrid models. Additionally, the risk of DL networks overfitting the training set is addressed by employing a large mechanistic model dataset to fully investigate tumor heterogeneity. The feedback loop between the parameter optimization of traditional hybrid models and the robust behavior analysis of DL algorithms provides a strong foundation for a new hybrid modeling framework.
Gerlee and Anderson [25] present an early example of incorporating an artificial neural network in a hybrid cellular automata model to study tumor evolution. By simulating cellular pathways with a neural network decision mechanism, the model can express a variety of phenotypes and cellular responses to the environment, adding greater complexity. The authors find a relationship between oxygen concentrations influencing tumor growth dynamics and clonal evolutionary dynamics. Much more recently, Chen et al. [110] constructed a DL algorithm that consolidated imaging data and physics-based models of tumor growth. The authors chose to solve the PDE system over a specific duration and implement medical imaging, a routine tool in clinical settings. Training the simulation with both in vitro imaging of pancreatic cell cultures and mechanistic (PDE) models, the study successfully predicts tumor growth within patient-specific contexts.
Matin and Setayeshi [111] integrated the cellular automata approach with deep convolutional neural networks to counter the load of a traditional discrete model. The cellular automata helps capture the localized interactions between tumor cells, but it is unable to recognize complex global patterns and extrapolate from them. Rather than restricting their model to repetitive, computationally expensive Boolean processing, the authors leverage DL algorithms to understand behaviors, find patterns, and generate predictions about future growth trends. Both cellular automata and DL networks share the ability to process individual units simultaneously, whether these units are cells in a lattice or pixels in an image. This fundamental compatibility allows for the integration of both into a single simulation. The authors' hybridized model explores both the microscopic, cellular dynamics as well as the macroscopic, population-level changes that tumors undergo. The resulting model is scalable and adaptable to new conditions, parameter settings, or patient-specific contexts. Amanzholova and Coskun [112] similarly constructed a hybrid AI-based model to predict the emergence of cancers, for timely detection and treatment. Incorporating the DL approach with traditional hybrid simulations has created a more effective, timely, and practical alternative to current diagnosis methods and treatment assessments.
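The structural kinship noted here, where CA updates and convolutional layers both apply the same local stencil to every site in parallel, can be demonstrated by writing a Game-of-Life-style CA step (the classic rule set, used as a stand-in for a tumor CA) as a neighbor-counting convolution implemented with array shifts.

```python
import numpy as np

def neighbor_count(grid):
    """Sum of the 8 neighbors at every site (toroidal boundaries)."""
    total = np.zeros_like(grid)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                total += np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
    return total

def ca_step(grid):
    n = neighbor_count(grid)
    alive = grid == 1
    survive = alive & ((n == 2) | (n == 3))  # Conway survival rule
    born = ~alive & (n == 3)                 # Conway birth rule
    return (survive | born).astype(int)

# a glider on a 10x10 grid: a pattern that translates by (1, 1) every 4 steps
grid = np.zeros((10, 10), dtype=int)
for x, y in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[x, y] = 1

for _ in range(4):
    grid = ca_step(grid)

print(int(grid.sum()), "live cells")
```

The entire update is a fixed 3x3 stencil swept over the whole grid at once, which is precisely the operation a convolutional layer performs, hence the natural fit between the two approaches.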
Further investigation is required to fully integrate DL neural network algorithms in traditional models, as AI continues to advance rapidly. Its applicability to tumor development, and more specifically mechanistic modeling of tumor development, harbors strong potential in updating diagnostic approaches, facilitating medical imaging, and improving clinical assessments.
In this paper, we survey the current literature on cancer growth modeling, discussing continuum, discrete, and hybrid approaches. Each modeling approach is explored for its advantages, disadvantages, appropriate scenarios for application, and potential areas for additional investigation.
Continuum models typically treat the tumor as a continuous mass, focusing on tumor-level population growth rather than individual cell behaviors. Exponential models, which assume constant conditions of space and nutrients, have demonstrated success in modeling early phases of growth or small windows of data collection, but become inaccurate as the cancer matures. Depleting nutrients, angiogenesis, and cell death decrease the growth rate, leading to the more prevalent use of limited growth models. Among these, logistic and Gompertzian growth curves fit and predict experimental data but lack biological motivations for their successes. On the other hand, Bertalanffy models, which also capture complex trends with a strong model fit, have a physiological basis rooted in metabolic growth and decline. This model requires further research to determine appropriate scenarios for use, such as stages of growth, specific conditions within tumors, and experimental or clinical settings. Reaction-diffusion models, a class of PDE simulations, are an example of continuous models that have been successfully applied to patient-specific circumstances, especially for brain cancer research. Future directions for reaction-diffusion models include applying successful non-glioma simulations to address the lack of clinically available parameters and model simplicity of glioma cases. In many instances, the continuum approach does not resolve the heterogeneity of cellular behaviors and phenotypes, and calibration of population data to parameter quantities requires repetitive testing to determine accurate values.
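The limited-growth curves summarized above have simple closed forms. The sketch below evaluates the logistic and Gompertz solutions with illustrative parameter values; both start from the same initial volume and saturate at the same carrying capacity but approach it on different schedules.

```python
import math

K, V0 = 1000.0, 1.0  # carrying capacity and initial volume (illustrative)
r, b = 0.5, 0.12     # logistic and Gompertz rate constants (illustrative)

def logistic(t):
    # V(t) = K / (1 + (K/V0 - 1) * exp(-r t))
    return K / (1 + (K / V0 - 1) * math.exp(-r * t))

def gompertz(t):
    # V(t) = K * exp(ln(V0/K) * exp(-b t))
    return K * math.exp(math.log(V0 / K) * math.exp(-b * t))

for t in (0, 10, 40, 80):
    print(f"t={t:3d}  logistic={logistic(t):8.1f}  gompertz={gompertz(t):8.1f}")
```

The Gompertz curve's per-capita growth rate decays exponentially in time rather than linearly in volume, which is one reason the two fits can diverge sharply at intermediate tumor sizes even when they agree at both ends.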
Discrete models explore tumor development through cell-scale interactions, where cells move through discrete time and space systems with predetermined rules for proliferation. Implementing these models often requires considerations of parameter availability, the complexity required, the acceptable computational load, and the decision of a lattice-based or lattice-free approach. Cellular automata, a lattice approach based on the Game of Life simulation, confines cellular agents to a grid. Successful cellular automata models that simulate tumor spatiotemporal evolution have used predetermined matrix sizes or lattice borders that dynamically expand with the tumor, which increases the computational load. Recent tools such as coded arrays and 3D parallelization can be integrated to improve model accuracy while reducing load constraints, and these techniques should be implemented to understand the extent of their scalability and applicability to different circumstances. Agent-based models allow more flexibility in spatial organizations by not using a lattice grid, but the demand to reprocess the exact organization of cells every generation requires more computational effort. These models are especially effective in modeling irregularly arranged cells or highly motile cells, such as in the cases of angiogenesis and metastasis, or cancers like DCIS. They can simulate unknown or stochastic behaviors like phenotypes and mutations, but this approach is often restricted to small populations or short durations because of its heightened complexity.
Hybrid models traditionally combine continuum and discrete approaches; these mechanistic models often represent cells as discrete agents over a continuous field of nutrients, growth factors, and enzymes, which diffuse throughout the tissue. Integrating the two formats helps resolve the shortcomings of either approach, including computational load and over-simplification, because the hybrid is more flexible in the scales it employs and the detail it demands. More recently, hybrid frameworks have expanded to include the integration of different model classes, such as physics-based models, data-driven models, and optimization models. The recent popularity of AI has encouraged the use of DL algorithms in mechanistic modeling. These neural networks drastically improve how models diagnose current growth from medical imaging and forecast future growth by extrapolating behavior. DL enhances the predictive capabilities of discrete and continuous models and represents a promising avenue for future research in tumor modeling.
We leave off with two suggestions for continued exploration, based on the limitations identified in the reviewed models. First, as the literature becomes increasingly dependent on imaging scans to build and refine simulations, these data must be regularly compiled, published, and maintained in publicly available datasets covering different tumor types and stages. Access to high-resolution, robust data is paramount both for setting initial simulation parameters and for the overall predictive applicability of future models. Second, model validity must rest on biological justification. The models we cover pursue different goals: understanding tumor dynamics and development, predicting growth, or integrating experimental/imaging data. However, all methodologies must be firmly rooted in biological reasoning if they are to support future adjustments or adaptations to other models. Even empirically derived algorithms depend on biological explanations so that scientific knowledge of tumor behavior can grow alongside the robustness of the models.
Modeling is already lauded as an important tool for advancing scientific understanding of complex biological processes. For a phenomenon as diverse as cancer development, models offer a standardized approach to cancer research and clinical cases. These reviewed in silico approaches demonstrate enormous potential in being leveraged to improve patient outcomes.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
We thank Dr. Kim Failor for her support and guidance. The authors declare no sources of funding for the study.
The authors declare there is no conflict of interest.