Automatically identifying brain tumors in brain magnetic resonance imaging (MRI) scans aids the detection of neurological disorders and diseases. A brain tumor is a potentially fatal disease. Convolutional neural networks (CNNs) are the most widely used deep learning technique for brain tumor analysis and classification. In this study, we propose a deep CNN model for automatically detecting brain tumor cells in MRI brain images. First, we preprocess the 2D brain MRI images to generate convolutional features. The CNN is trained on the training dataset using the GoogLeNet and AlexNet architectures, and its performance is evaluated on the test dataset in terms of accuracy, sensitivity, specificity, and AUC. Comparing the performance metrics of the two networks, AlexNet achieves 98.95% accuracy and 98.4% sensitivity, while GoogLeNet achieves 99.45% accuracy and 99.75% sensitivity. From these values we infer that GoogLeNet is more accurate while using significantly fewer parameters; AlexNet, by contrast, has a depth of 8 layers, about 60 million parameters, and an input size of 227 × 227. Because of its high specificity and speed, the proposed CNN model can be a competent alternative support tool for radiologists in clinical diagnosis.
Citation: Chetan Swarup, Kamred Udham Singh, Ankit Kumar, Saroj Kumar Pandey, Neeraj varshney, Teekam Singh. Brain tumor detection using CNN, AlexNet & GoogLeNet ensembling learning approaches[J]. Electronic Research Archive, 2023, 31(5): 2900-2924. doi: 10.3934/era.2023146
In this article, we study the following anisotropic singular →p(⋅)-Laplace equation
$$\begin{cases}-\sum\limits_{i=1}^{N}\partial_{x_i}\left(|\partial_{x_i}u|^{p_i(x)-2}\partial_{x_i}u\right)=f(x)u^{-\beta(x)}+g(x)u^{q(x)} & \text{in } \Omega,\\ u>0 & \text{in } \Omega,\\ u=0 & \text{on } \partial\Omega,\end{cases}\tag{1.1}$$
where Ω is a bounded domain in RN (N≥3) with smooth boundary ∂Ω; f∈L1(Ω) is a positive function; g∈L∞(Ω) is a nonnegative function; β∈C(¯Ω) such that 1<β(x)<∞ for any x∈¯Ω; q∈C(¯Ω) such that 0<q(x)<1 for any x∈¯Ω; pi∈C(¯Ω) such that 2≤pi(x)<N for any x∈¯Ω, i∈{1,...,N}.
The differential operator
$$\sum_{i=1}^{N}\partial_{x_i}\left(|\partial_{x_i}u|^{p_i(x)-2}\partial_{x_i}u\right),$$
that appears in problem (1.1) is an anisotropic variable exponent →p(⋅)-Laplace operator, which represents an extension of the p(⋅)-Laplace operator
$$\sum_{i=1}^{N}\partial_{x_i}\left(|\partial_{x_i}u|^{p(x)-2}\partial_{x_i}u\right),$$
obtained in the case for each i∈{1,...,N}, pi(⋅)=p(⋅).
In the variable exponent case, p(⋅), the integrability condition changes with each point in the domain. This makes variable exponent Sobolev spaces very useful in modeling materials with spatially varying properties and in studying partial differential equations with non-standard growth conditions [1,2,3,4,5,6,7,8].
Anisotropy, on the other hand, adds another layer of complexity, providing a robust mathematical framework for modeling and solving problems that involve complex materials and phenomena exhibiting non-uniform and direction-dependent properties. This is represented mathematically by having different exponents for different partial derivatives. We refer to the papers [9,10,11,12,13,14,15,16,17,18,19,20,21] and references for further reading.
The progress in researching anisotropic singular problems with →p(⋅)-growth, however, has been relatively slow, and only a limited number of studies are available on this topic; we can only refer to the recently published papers [22,23,24]. In [22], the author studied an anisotropic singular problem in the constant case p(⋅)=p but with a variable singularity, where the existence and regularity of positive solutions were obtained via approximation methods. In [23], the author obtained existence and regularity results for positive solutions by using the regularity theory and approximation methods. In [24], the authors showed the existence of positive solutions using the regularity theory and the maximum principle. However, none of these papers studied the combined effects of variable singular and sublinear nonlinearities.
We would also like to mention that the singular problems of the type
$$\begin{cases}-\Delta u=f(x)u^{-\beta} & \text{in } \Omega,\\ u>0 & \text{in } \Omega,\\ u=0 & \text{on } \partial\Omega,\end{cases}\tag{1.2}$$
have been intensively studied because of their wide applications to physical models in the study of non-Newtonian fluids, boundary layer phenomena for viscous fluids, chemical heterogenous catalysts, glacial advance, etc. (see, e.g., [25,26,27,28,29,30]).
These studies, however, have mainly focused on the case 0<β<1, i.e., the weak singularity (see, e.g. [31,32,33,34,35,36]), and in this case, the corresponding energy functional is continuous.
When β>1 (the strong singularity), on the other hand, the situation changes dramatically, and numerous challenges emerge in the analysis of differential equations of the type (1.2), where the primary challenge encountered is due to the lack of integrability of u−β for u∈H10(Ω) [37,38,39,40,41].
To overcome these challenges, as an alternative approach, the so-called "compatibility relation" between f(x) and β has been introduced in the recent studies [37,40,42]. This method, used along with a constrained minimization and Ekeland's variational principle [43], suggests a practical approach to obtaining solutions to problems of the type (1.2). In the present paper, we generalize these results to nonstandard →p(⋅)-growth.
The paper is organized as follows. In Section 2, we provide some fundamental information for the theory of variable Sobolev spaces since it is our work space. In Section 3, first we obtain the auxiliary results. Then, we present our main result and obtain a positive solution to problem (1.1). In Section 4, we provide an example to illustrate our results in a concrete way.
We start with some basic concepts of variable Lebesgue-Sobolev spaces. For more details, and the proof of the following propositions, we refer the reader to [1,2,44,45].
Set
$$C_+(\overline{\Omega})=\Big\{p\in C(\overline{\Omega}):\inf_{x\in\overline{\Omega}}p(x)>1\Big\}.$$
For p∈C+(¯Ω) denote
p−:=infx∈¯Ωp(x)≤p(x)≤p+:=supx∈¯Ωp(x)<∞. |
For any p∈C+(¯Ω), we define the variable exponent Lebesgue space by
Lp(⋅)(Ω)={u∣u:Ω→R is measurable,∫Ω|u(x)|p(x)dx<∞}, |
then, Lp(⋅)(Ω) endowed with the norm
$$|u|_{p(\cdot)}=\inf\Big\{\lambda>0:\int_\Omega\Big|\frac{u(x)}{\lambda}\Big|^{p(x)}dx\le 1\Big\},$$
becomes a Banach space.
Proposition 2.1. For any u∈Lp(⋅)(Ω) and v∈Lp′(⋅)(Ω), we have
∫Ω|uv|dx≤C(p−,(p−)′)|u|p(⋅)|v|p′(⋅) |
where Lp′(⋅)(Ω) is the conjugate space of Lp(⋅)(Ω), with 1/p(x)+1/p′(x)=1.
The convex functional Λ:Lp(⋅)(Ω)→R defined by
Λ(u)=∫Ω|u(x)|p(x)dx, |
is called modular on Lp(⋅)(Ω).
Proposition 2.2. If u,un∈Lp(⋅)(Ω) (n=1,2,...), we have
(i) |u|p(⋅)<1(=1;>1)⇔Λ(u)<1(=1;>1);
(ii) |u|p(⋅)>1⟹|u|p−p(⋅)≤Λ(u)≤|u|p+p(⋅);
(iii) |u|p(⋅)≤1⟹|u|p+p(⋅)≤Λ(u)≤|u|p−p(⋅);
(iv) limn→∞|un|p(⋅)=0⇔limn→∞Λ(un)=0;limn→∞|un|p(⋅)=∞⇔limn→∞Λ(un)=∞.
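Although the analysis here is purely theoretical, the modular and the Luxemburg norm can be approximated numerically, which makes the inequalities of Proposition 2.2 easy to sanity-check. The sketch below (all choices hypothetical: Ω=(0,1), p(x)=2+x so that p−=2 and p+=3, and the constant sample function u≡3) computes |u|p(⋅) by bisection, using the fact that the modular of u/λ is decreasing in λ:

```python
import numpy as np

# Hypothetical setting: Omega = (0,1), variable exponent p(x) = 2 + x,
# so p^- = 2 and p^+ = 3.
n = 200000
x = (np.arange(n) + 0.5) / n          # midpoint grid on (0,1)
dx = 1.0 / n
p = 2.0 + x

def modular(u_vals):
    # Lambda(u) = integral over Omega of |u(x)|^{p(x)} dx (midpoint rule)
    return np.sum(np.abs(u_vals) ** p) * dx

def luxemburg_norm(u_vals, lo=1e-9, hi=1e9):
    # |u|_{p(.)} = inf{ lam > 0 : modular(u/lam) <= 1 };
    # modular(u/lam) is decreasing in lam, so bisection applies.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if modular(u_vals / mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

u = 3.0 * np.ones_like(x)             # constant sample function with norm > 1
norm_u, mod_u = luxemburg_norm(u), modular(u)

# Proposition 2.2(ii): |u|_{p(.)} > 1 implies |u|^{p^-} <= Lambda(u) <= |u|^{p^+}
assert norm_u > 1.0
assert norm_u ** 2 <= mod_u <= norm_u ** 3
print(round(norm_u, 6), round(mod_u, 6))
```

For this particular u the modular of u/3 equals |Ω|=1 exactly, so the bisection recovers |u|p(⋅)=3, and the modular sits between 3²=9 and 3³=27 as Proposition 2.2(ii) predicts.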
Proposition 2.3. If u,un∈Lp(⋅)(Ω) (n=1,2,...), then the following statements are equivalent:
(i) limn→∞|un−u|p(⋅)=0;
(ii) limn→∞Λ(un−u)=0;
(iii) un→u in measure in Ω and limn→∞Λ(un)=Λ(u).
The variable exponent Sobolev space W1,p(⋅)(Ω) is defined by
W1,p(⋅)(Ω)={u∈Lp(⋅)(Ω):|∇u|∈Lp(⋅)(Ω)}, |
with the norm
‖u‖1,p(⋅)=|u|p(⋅)+|∇u|p(⋅), |
or equivalently
$$\|u\|_{1,p(\cdot)}=\inf\Big\{\lambda>0:\int_\Omega\Big(\Big|\frac{\nabla u(x)}{\lambda}\Big|^{p(x)}+\Big|\frac{u(x)}{\lambda}\Big|^{p(x)}\Big)dx\le 1\Big\},$$
for all u∈W1,p(⋅)(Ω).
As shown in [46], the smooth functions are in general not dense in W1,p(⋅)(Ω), but if the variable exponent p∈C+(¯Ω) is logarithmic Hölder continuous, that is
$$|p(x)-p(y)|\le\frac{M}{-\log|x-y|},\quad\text{for all } x,y\in\Omega \text{ such that } |x-y|\le\frac{1}{2},\tag{2.1}$$
then the smooth functions are dense in W1,p(⋅)(Ω) and so the Sobolev space with zero boundary values, denoted by W1,p(⋅)0(Ω), as the closure of C∞0(Ω) does make sense. Therefore, the space W1,p(⋅)0(Ω) can be defined as ¯C∞0(Ω)‖⋅‖1,p(⋅)=W1,p(⋅)0(Ω), and hence, u∈W1,p(⋅)0(Ω) iff there exists a sequence (un) of C∞0(Ω) such that ‖un−u‖1,p(⋅)→0.
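A Lipschitz exponent is automatically log-Hölder continuous, since t(−log t) is bounded on (0,1/2]. As a small numerical illustration (not part of the argument; the exponent p(t)=2+t/2 is a hypothetical choice), one can estimate the smallest admissible constant M in (2.1) by sampling pairs of points:

```python
import numpy as np

rng = np.random.default_rng(0)

# A Lipschitz exponent on Omega = (0,1); Lipschitz continuity implies
# log-Hoelder continuity because t * (-log t) is bounded on (0, 1/2].
p = lambda t: 2.0 + 0.5 * t

xs = rng.uniform(0, 1, 200000)
ys = rng.uniform(0, 1, 200000)
d = np.abs(xs - ys)
mask = (d > 0) & (d <= 0.5)

# Smallest admissible M in |p(x) - p(y)| <= M / (-log|x - y|):
M_est = np.max(np.abs(p(xs[mask]) - p(ys[mask])) * (-np.log(d[mask])))
print(M_est)
```

Here |p(x)−p(y)|·(−log|x−y|) = 0.5·d·(−log d) peaks near d = 1/e, so the estimate stays below 0.5/e ≈ 0.184, confirming the condition holds with a finite M.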
As a consequence of Poincaré inequality, ‖u‖1,p(⋅) and |∇u|p(⋅) are equivalent norms on W1,p(⋅)0(Ω) when p∈C+(¯Ω) is logarithmic Hölder continuous. Therefore, for any u∈W1,p(⋅)0(Ω), we can define an equivalent norm ‖u‖ such that
‖u‖=|∇u|p(⋅). |
Proposition 2.4. If 1<p−≤p+<∞, then the spaces Lp(⋅)(Ω) and W1,p(⋅)(Ω) are separable and reflexive Banach spaces.
Proposition 2.5. Let q∈C(¯Ω). If 1≤q(x)<p∗(x) for all x∈¯Ω, then the embedding W1,p(⋅)(Ω)↪Lq(⋅)(Ω) is compact and continuous, where
$$p^*(x)=\begin{cases}\dfrac{Np(x)}{N-p(x)}, & \text{if } p(x)<N,\\ +\infty, & \text{if } p(x)\ge N.\end{cases}$$
Finally, we introduce the anisotropic variable exponent Sobolev spaces.
Let us denote by →p:¯Ω→RN the vectorial function →p(⋅)=(p1(⋅),...,pN(⋅)) with pi∈C+(¯Ω), i∈{1,...,N}. We will use the following notations.
Define →P+,→P−∈RN as
→P+=(p+1,...,p+N), →P−=(p−1,...,p−N), |
and P++,P+−,P−−∈R+ as
P++=max{p+1,...,p+N},P+−=max{p−1,...,p−N}, P−−=min{p−1,...,p−N}, |
Below, we use the definitions of the anisotropic variable exponent Sobolev spaces as given in [12] and assume that the domain Ω⊂RN satisfies all the necessary assumptions given therein.
The anisotropic variable exponent Sobolev space is defined by
W1,→p(⋅)(Ω)={u∈LP++(Ω):∂xiu∈Lpi(⋅)(Ω), i∈{1,...,N}}, |
which is associated with the norm
‖u‖W1,→p(⋅)(Ω)=|u|P++(⋅)+N∑i=1|∂xiu|pi(⋅). |
W1,→p(⋅)(Ω) is a reflexive Banach space under this norm.
The subspace W1,→p(⋅)0(Ω)⊂W1,→p(⋅)(Ω) consists of the functions that are vanishing on the boundary, that is,
W1,→p(⋅)0(Ω)={u∈W1,→p(⋅)(Ω):u=0on∂Ω}, |
We can define the following equivalent norm on W1,→p(⋅)0(Ω)
‖u‖→p(⋅)=N∑i=1|∂xiu|pi(⋅). |
since the smooth functions are dense in W1,→p(⋅)0(Ω), as the variable exponent pi∈C+(¯Ω), i∈{1,...,N} is logarithmic Hölder continuous.
The space W1,→p(⋅)0(Ω) is also a reflexive Banach space (for the theory of the anisotropic Sobolev spaces see, e.g., the monographs [2,47,48] and the papers [12,15]).
Throughout this article, we assume that
N∑i=11p−i>1, | (2.2) |
and define P∗−∈R+ and P−,∞∈R+ by
$$P^*_-=\frac{N}{\sum_{i=1}^{N}\frac{1}{p_i^-}-1},\qquad P_{-,\infty}=\max\{P_-^+,P^*_-\}.$$
Proposition 2.6. [[15], Theorem 1] Suppose that Ω⊂RN(N≥3) is a bounded domain with smooth boundary and relation (2.2) is fulfilled. For any q∈C(¯Ω) verifying
1<q(x)<P−,∞ for all x∈¯Ω,
the embedding
W1,→p(⋅)0(Ω)↪Lq(⋅)(Ω), |
is continuous and compact.
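To make the exponent bookkeeping concrete, the following sketch computes P++, P+−, P−−, P∗− and P−,∞ for a hypothetical anisotropic exponent on a domain in R3 and checks relation (2.2) together with the embedding range of Proposition 2.6:

```python
# Hypothetical exponent data for an anisotropic vector p on a domain in R^3:
# each entry is (p_i^-, p_i^+), the min and max of p_i over the closure of Omega.
N = 3
p_bounds = [(2.0, 2.5), (2.2, 3.0), (2.0, 2.8)]

P_pp = max(hi for _, hi in p_bounds)        # P++  = max p_i^+
P_pm = max(lo for lo, _ in p_bounds)        # P+-  = max p_i^-
P_mm = min(lo for lo, _ in p_bounds)        # P--  = min p_i^-

s = sum(1.0 / lo for lo, _ in p_bounds)     # relation (2.2) requires s > 1
assert s > 1.0

P_star = N / (s - 1.0)                      # P*_- = N / (sum 1/p_i^-  -  1)
P_inf = max(P_pm, P_star)                   # P_{-,infty} = max{P+-, P*_-}

# Proposition 2.6: W^{1,p(.)}_0(Omega) embeds compactly into L^{q(.)}(Omega)
# whenever 1 < q(x) < P_inf on the closure of Omega.
print(P_pp, P_pm, P_mm, round(P_star, 3), round(P_inf, 3))
```

With these sample bounds, s = 1/2 + 1/2.2 + 1/2 = 16/11 > 1, so (2.2) holds and P∗− = 33/5 = 6.6 governs the admissible q.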
We define the singular energy functional J:W1,→p(⋅)0(Ω)→R corresponding to equation (1.1) by
J(u)=∫ΩN∑i=1|∂xiu|pi(x)pi(x)dx−∫Ωg(x)|u|q(x)+1q(x)+1dx+∫Ωf(x)|u|1−β(x)β(x)−1dx. |
Definition 3.1. A function u is called a weak solution to problem (1.1) if u∈W1,→p(⋅)0(Ω) such that u>0 in Ω and
∫Ω[N∑i=1|∂xiu|pi(x)−2∂xiu⋅∂xiφ−[g(x)uq(x)+f(x)u−β(x)]φ]dx=0, | (3.1) |
for all φ∈W1,→p(⋅)0(Ω).
Definition 3.2. Due to the singularity of J on W1,→p(⋅)0(Ω), we apply a constrained minimization for problem (1.1). As such, we introduce the following constraints:
N1={u∈W1,→p(⋅)0(Ω):∫Ω[N∑i=1|∂xiu|pi(x)−g(x)|u|q(x)+1−f(x)|u|1−β(x)]dx≥0}, |
and
N2={u∈W1,→p(⋅)0(Ω):∫Ω[N∑i=1|∂xiu|pi(x)−g(x)|u|q(x)+1−f(x)|u|1−β(x)]dx=0}. |
Remark 1. N2 can be considered as a Nehari manifold, even though in general it may not be a manifold. Therefore, if we set
c0:=infu∈N2J(u), |
then one might expect that c0 is attained at some u∈N2 (i.e., N2≠∅) and that u is a critical point of J.
Throughout the paper, we assume that the following conditions hold:
(A1) β:¯Ω→(1,∞) is a continuous function such that 1<β−≤β(x)≤β+<∞.
(A2) q:¯Ω→(0,1) is a continuous function such that 0<q−≤q(x)≤q+<1 and q++1≤β−.
(A3) 2≤P−−≤P++<P∗−.
(A4) f∈L1(Ω) is a positive function, that is, f(x)>0 a.e. in Ω.
(A5) g∈L∞(Ω) is a nonnegative function.
Lemma 3.3. For any u∈W1,→p(⋅)0(Ω) satisfying ∫Ωf(x)|u|1−β(x)dx<∞, the functional J is well-defined and coercive on W1,→p(⋅)0(Ω).
Proof. Denote by I1,I2 the indices sets I1={i∈{1,2,...,N}:|∂xiu|pi(⋅)≤1} and I2={i∈{1,2,...,N}:|∂xiu|pi(⋅)>1}. Using Proposition 2.2, it follows
|J(u)|≤1P−−N∑i=1∫Ω|∂xiu|pi(x)dx+|g|∞q−+1∫Ω|u|q(x)+1dx+1β−−1∫Ωf(x)|u|1−β(x)dx≤1P−−(∑i∈I1|∂xiu|P−−pi(⋅)+∑i∈I2|∂xiu|P++pi(⋅))+|g|∞q−+1max{|u|q++1q(x)+1,|u|q−+1q(x)+1}+1β−−1∫Ωf(x)|u|1−β(x)dx≤1P−−(N∑i=1|∂xiu|P++pi(⋅)+N)+|g|∞q−+1max{|u|q++1q(x)+1,|u|q−+1q(x)+1}+1β−−1∫Ωf(x)|u|1−β(x)dx | (3.2)
which shows that J is well-defined on W1,→p(⋅)0(Ω).
Applying similar steps and using the generalized mean inequality for ∑Ni=1|∂xiu|P−−pi(⋅) gives
J(u)≥1P++N∑i=1∫Ω|∂xiu|pi(x)dx−|g|∞q−+1∫Ω|u|q(x)+1dx+1β+−1∫Ωf(x)|u|1−β(x)dx≥1P++(∑i∈I1|∂xiu|P++pi(⋅)+∑i∈I2|∂xiu|P−−pi(⋅))−|g|∞q−+1∫Ω|u|q(x)+1dx+1β+−1∫Ωf(x)|u|1−β(x)dx≥NP++(‖u‖P−−→p(⋅)NP−−−1)−|g|∞q−+1‖u‖q++1→p(⋅)+1β+−1∫Ωf(x)|u|1−β(x)dx | (3.3) |
That is, J is coercive (i.e., J(u)→∞ as ‖u‖→p(⋅)→∞), and bounded below on W1,→p(⋅)0(Ω).
Next, we provide an a priori estimate.
Lemma 3.4. Assume that (un)⊂N1 is a nonnegative minimizing sequence for the minimization problem limn→∞J(un)=infN1J. Then, there are positive real numbers δ1,δ2 such that
δ1≤‖un‖→p(⋅)≤δ2 |
Proof. We assume by contradiction that there exists a subsequence (un) (not relabelled) such that un→0 in W1,→p(⋅)0(Ω). Thus, we can assume that ‖un‖→p(⋅)<1 for n large enough, and therefore, |∂xiun|Lpi(⋅)<1. Then, using Proposition 2.2, we have
∫ΩN∑i=1|∂xiun|pi(x)dx≤N∑i=1|∂xiun|p−ipi(⋅)≤N∑i=1|∂xiun|P−−pi(⋅) | (3.4) |
We recall the following elementary inequality: for all r,s>0 and m>0 it holds
rm+sm≤K(r+s)m | (3.5) |
where K:=max{1,21−m}. If we let r=|∂x1un|Lp1(⋅), s=|∂x2un|Lp2(⋅) and m=P−− in (3.5), it reads
|∂x1un|P−−Lp1(⋅)+|∂x2un|P−−Lp2(⋅)≤K(|∂x1un|Lp1(⋅)+|∂x2un|Lp2(⋅))P−− | (3.6) |
where K=max{1,21−P−−}=1. Applying this argument to the following terms in the sum ∑Ni=1|∂xiun|P−−pi(⋅) consecutively leads to
∫ΩN∑i=1|∂xiun|pi(x)dx≤N∑i=1|∂xiun|p−ipi(⋅)≤N∑i=1|∂xiun|P−−pi(⋅)≤(N∑i=1|∂xiun|pi(⋅))P−−≤‖un‖P−−→p(⋅) | (3.7) |
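Inequality (3.5) is elementary but easy to misremember; the sketch below verifies it on random positive samples for several exponents m (the case m=P−−≥2 used above falls under K=1):

```python
import random

random.seed(1)

def K(m):
    # K = max{1, 2^(1-m)} from inequality (3.5):
    # K = 1 for m >= 1, and K = 2^(1-m) for 0 < m < 1 (by concavity of t^m).
    return max(1.0, 2.0 ** (1.0 - m))

# Check r^m + s^m <= K(m) * (r + s)^m on random positive samples.
for _ in range(10000):
    r, s = random.uniform(1e-6, 10), random.uniform(1e-6, 10)
    for m in (0.3, 0.7, 1.0, 2.0, 3.5):
        assert r**m + s**m <= K(m) * (r + s)**m * (1 + 1e-12)
print("inequality (3.5) verified on all samples")
```

For m ≥ 1 the bound follows from superadditivity of t↦t^m, and for 0 < m < 1 from concavity; the tiny multiplicative slack only guards against floating-point rounding.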
Now, using (3.7) and the reversed Hölder's inequality, we have
(∫Ωf(x)1/β−dx)β−(∫Ω|un|dx)1−β−≤∫Ωf(x)|un|1−β−dx≤∫Ωf(x)|un|1−β(x)dx | (3.8) |
By the assumption, (un)⊂N1. Thus, using (3.8) and Proposition 2.2 leads to
(∫Ωf(x)1/β−dx)β−(∫Ω|un|dx)1−β−≤∫Ωf(x)|un|1−β−dx≤‖un‖P−−→p(⋅)−|g|∞q−+1‖un‖q++1→0 | (3.9) |
Considering the assumption (A2), this can only happen if ∫Ω|un|dx→∞, which is not possible. Therefore, there exists a positive real number δ1 such that ‖un‖→p(⋅)≥δ1.
Now, let's assume, on the contrary, that ‖un‖→p(⋅)>1 for any n. We know, by the coerciveness of J, that the infimum of J is finite, that is, −∞<m:=infu∈W1,→p(⋅)0(Ω)J(u). Moreover, due to the assumption limn→∞J(un)=infN1J, (J(un)) is bounded. Then, applying the same steps as in (3.3), it follows
C‖un‖→p(⋅)+J(un)≥NP++(‖un‖P−−→p(⋅)NP−−−1)−|g|∞q−+1‖un‖q++1→p(⋅)+1β+−1∫Ωf(x)|un|1−β(x)dx |
for some constant C>0. If we drop the nonnegative terms, we obtain
C‖un‖→p(⋅)+J(un)≥1P++(‖un‖P−−→p(⋅)NP−−−1−N)−|g|∞q−+1‖un‖q++1→p(⋅)
Dividing both sides of the above inequality by ‖un‖q++1→p(⋅) and passing to the limit as n→∞ leads to a contradiction, since q++1<P−−. Therefore, there exists a positive real number δ2 such that ‖un‖→p(⋅)≤δ2.
Lemma 3.5. N1 is closed in W1,→p(⋅)0(Ω).
Proof. Assume that (un)⊂N1 such that un→ˆu(strongly) in W1,→p(⋅)0(Ω). Thus, un(x)→ˆu(x) a.e. in Ω, and ∂xiun→∂xiˆu in Lpi(⋅)(Ω) for i=1,2,...,N. Then, using Fatou's lemma, it reads
∫Ω[N∑i=1|∂xiun|pi(x)−g(x)|un|q(x)+1]dx≥∫Ωf(x)|un|1−β(x)dx≥0, and passing to the limit on the left-hand side (by the strong convergence) while applying Fatou's lemma on the right-hand side gives limn→∞[∫ΩN∑i=1|∂xiun|pi(x)dx]−∫Ωg(x)|ˆu|q(x)+1dx≥∫Ωf(x)|ˆu|1−β(x)dx
and hence,
∫Ω[N∑i=1|∂xiˆu|pi(x)−g(x)|ˆu|q(x)+1−f(x)|ˆu|1−β(x)]dx≥0 |
which means ˆu∈N1. Therefore, N1 is closed in W1,→p(⋅)0(Ω).
Lemma 3.6. For any u∈W1,→p(⋅)0(Ω) satisfying ∫Ωf(x)|u|1−β(x)dx<∞, there exists a unique continuous scaling function t:W1,→p(⋅)0(Ω)∖{0}→(0,∞), u⟼t(u), such that t(u)u∈N2, and t(u)u is the minimizer of the functional J along the ray {tu:t>0}, that is, inft>0J(tu)=J(t(u)u).
Proof. Fix u∈W1,→p(⋅)0(Ω) such that ∫Ωf(x)|u|1−β(x)dx<∞. For any t>0, the scaled functional, J(tu), determines a curve that can be characterized by
Φ(t):=J(tu),t∈[0,∞). | (3.10) |
Then, for a t∈[0,∞), tu∈N2 if and only if
Φ′(t)=ddtΦ(t)|t=t(u)=0. | (3.11) |
First, we show that Φ(t) attains its minimum on [0,∞) at some point t=t(u).
Considering the fact 0<∫Ωf(x)|u|1−β(x)dx<∞, we will examine two cases for t.
For 0<t<1:
Φ(t)=J(tu)≥tP++P++N∑i=1∫Ω|∂xiu|pi(x)dx−tq−+1q−+1∫Ωg(x)|u|q(x)+1dx+t1−β−β+−1∫Ωf(x)|u|1−β(x)dx:=Ψ0(t) |
Then, Ψ0:(0,1)→R is continuous. Taking the derivative of Ψ0 gives
Ψ′0(t)=tP++−1N∑i=1∫Ω|∂xiu|pi(x)dx−tq−∫Ωg(x)|u|q(x)+1dx+(1−β−β+−1)t−β−∫Ωf(x)|u|1−β(x)dx | (3.12) |
It is easy to see from (3.12) that Ψ′0(t)<0 when t>0 is small enough. Therefore, Ψ0(t) is decreasing when t>0 is small enough. In the same way,
Φ(t)=J(tu)≤tP−−P−−N∑i=1∫Ω|∂xiu|pi(x)dx−tq++1q++1∫Ωg(x)|u|q(x)+1dx+t1−β+β−−1∫Ωf(x)|u|1−β(x)dx:=Ψ1(t) |
Then, Ψ1:(0,1)→R is continuous. Taking the derivative of Ψ1 gives
Ψ′1(t)=tP−−−1N∑i=1∫Ω|∂xiu|pi(x)dx−tq+∫Ωg(x)|u|q(x)+1dx+(1−β+β−−1)t−β+∫Ωf(x)|u|1−β(x)dx | (3.13)
But (3.13) also suggests that Ψ′1(t)<0 when t>0 is small enough. Thus, Ψ1(t) is decreasing when t>0 is small enough. Therefore, since Ψ0(t)≤Φ(t)≤Ψ1(t) for 0<t<1, Φ(t) is decreasing when t>0 is small enough.
For t>1: Following the same arguments shows that Ψ′0(t)>0 and Ψ′1(t)>0 when t>1 is large enough, and therefore, both Ψ0(t) and Ψ1(t) are increasing. Thus, Φ(t) is increasing when t>1 is large enough. In conclusion, since Φ(t)→∞ both as t→0+ and as t→∞, Φ(t) attains its minimum on (0,∞) at some point, say t=t(u). That is, ddtΦ(t)|t=t(u)=0. Then, t(u)u∈N2 and inft>0J(tu)=J(t(u)u).
Next, we show that scaling function t(u) is continuous on W1,→p(⋅)0(Ω).
Let un→u in W1,→p(⋅)0(Ω)∖{0}, and tn=t(un). Then, by the definition, tnun∈N2. We claim that the sequence (tn) is bounded. Assume, on the contrary, that tn→∞ (up to a subsequence). Then, using the fact tnun∈N2 it follows
∫ΩN∑i=1|∂xi(tnun)|pi(x)dx−∫Ωg(x)|tnun|q(x)+1dx=∫Ωf(x)|tnun|1−β(x)dx, and hence, for tn>1, tP−−n∫ΩN∑i=1|∂xiun|pi(x)dx−tq++1n∫Ωg(x)|un|q(x)+1dx≤t1−β−n∫Ωf(x)|un|1−β(x)dx
which suggests a contradiction when tn→∞. Hence, the sequence (tn) is bounded. Therefore, there exists a subsequence (tn) (not relabelled) such that tn→t0, t0≥0. On the other hand, from Lemma 3.4, ‖tnun‖→p(⋅)≥δ1>0. Thus, t0>0 and t0u∈N2. By the uniqueness of the map t(u), t0=t(u), which concludes the continuity of t(u). In conclusion, for any u∈W1,→p(⋅)0(Ω) satisfying ∫Ωf(x)|u|1−β(x)dx<∞, the function t(u) scales u∈W1,→p(⋅)0(Ω) continuously to a point such that t(u)u∈N2.
Lemma 3.7. Assume that (un)⊂N1 is the nonnegative minimizing sequence for the minimization problem limn→∞J(un)=infN1J. Then, there exists a subsequence (un) (not relabelled) such that un→u∗ (strongly) in W1,→p(⋅)0(Ω).
Proof. Since (un) is bounded in W1,→p(⋅)0(Ω) and W1,→p(⋅)0(Ω) is reflexive, there exists a subsequence (un), not relabelled, and u∗∈W1,→p(⋅)0(Ω) such that
● un⇀u∗ (weakly) in W1,→p(⋅)0(Ω),
● un→u∗ in Ls(⋅)(Ω), 1<s(x)<P−,∞, for all x∈¯Ω,
● un(x)→u∗(x) a.e. in Ω.
Since the norm ‖⋅‖→p(⋅) is a continuous convex functional, it is weakly lower semicontinuous. Using this fact along with the Fatou's lemma, and Lemma 3.4, it reads
infN1J=limn→∞J(un)≥lim infn→∞[∫ΩN∑i=1|∂xiun|pi(x)pi(x)dx]−∫Ωg(x)|u∗|q(x)+1q(x)+1dx+lim infn→∞[∫Ωf(x)|un|1−β(x)β(x)−1dx]≥∫ΩN∑i=1|∂xiu∗|pi(x)pi(x)dx−∫Ωg(x)|u∗|q(x)+1q(x)+1dx+∫Ωf(x)|u∗|1−β(x)β(x)−1dx=J(u∗)≥J(t(u∗)u∗)≥infN2J≥infN1J | (3.14) |
The above result implies, up to subsequences, that
limn→∞‖un‖→p(⋅)=‖u∗‖→p(⋅). | (3.15) |
Thus, (3.15) along with un⇀u∗ in W1,→p(⋅)0(Ω) show that un→u∗ in W1,→p(⋅)0(Ω).
The following is the main result of the present paper.
Theorem 3.8. Assume that the conditions (A1)−(A5) hold. Then, problem (1.1) has at least one positive W1,→p(⋅)0(Ω)-solution if and only if there exists ¯u∈W1,→p(⋅)0(Ω) satisfying ∫Ωf(x)|¯u|1−β(x)dx<∞.
Proof. (⇒): Assume that the function u∈W1,→p(⋅)0(Ω) is a weak solution to problem (1.1). Then, letting u=φ in Definition (3.1) gives
∫Ωf(x)|u|1−β(x)dx=∫ΩN∑i=1|∂xiu|pi(x)dx−∫Ωg(x)|u|q(x)+1dx≤‖u‖PM→p(⋅)−|g|∞|u|qMq(x)+1≤‖u‖PM→p(⋅)<∞, |
where PM denotes P−− or P++ and qM denotes q− or q+, each chosen according to whether the corresponding norm is less than or greater than one (Proposition 2.2).
(⇐): Assume that there exists ¯u∈W1,→p(⋅)0(Ω) such that ∫Ωf(x)|¯u|1−β(x)dx<∞. Then, by Lemma 3.6, there exists a unique number t(¯u)>0 such that t(¯u)¯u∈N2.
The properties of J obtained so far and the closedness of N1 allow us to apply Ekeland's variational principle to the problem infN1J. That is, it suggests the existence of a corresponding minimizing sequence (un)⊂N1 satisfying the following:
(E1) J(un)−infN1J≤1n,
(E2) J(un)−J(ν)≤1n‖un−ν‖→p(⋅),∀ν∈N1.
Due to the fact J(|un|)=J(un), we may assume that un≥0 a.e. in Ω. Additionally, considering that (un)⊂N1 and following the same approach as in the (⇒) part, we obtain that ∫Ωf(x)|un|1−β(x)dx<∞. Taking all this information and the assumptions (A1), (A2) into consideration, it follows that un(x)>0 a.e. in Ω.
The rest of the proof is split into two cases.
Case Ⅰ: (un)⊂N1∖N2 for n large.
For a function φ∈W1,→p(⋅)0(Ω) with φ≥0, and t>0, we have
0<(un(x)+tφ(x))1−β(x)≤un(x)1−β(x)a.e. inΩ. |
Therefore, using (A1), (A2) gives
∫Ωf(x)(un+tφ)1−β(x)dx≤∫Ωf(x)u1−β(x)ndx≤∫ΩN∑i=1|∂xiun|pi(x)dx−∫Ωg(x)uq(x)+1ndx<∞ | (3.16) |
Then, when t>0 is small enough in (3.16), we obtain
∫Ωf(x)(un+tφ)1−β(x)dx≤∫ΩN∑i=1|∂xi(un+tφ)|pi(x)dx−∫Ωg(x)(un+tφ)q(x)+1dx | (3.17) |
which means that ν:=un+tφ∈N1. Now, using (E2), it reads
1n‖tφ‖→p(⋅)≥J(un)−J(ν)=∫ΩN∑i=1|∂xiun|pi(x)pi(x)dx−∫ΩN∑i=1|∂xi(un+tφ)|pi(x)pi(x)dx−∫Ωg(x)uq(x)+1nq(x)+1dx+∫Ωg(x)(un+tφ)q(x)+1q(x)+1dx+∫Ωf(x)u1−β(x)nβ(x)−1dx−∫Ωf(x)(un+tφ)1−β(x)β(x)−1dx |
Dividing the above inequality by t and passing to the infimum limit as t→0 gives
lim inft→0‖φ‖→p(⋅)n+lim inft→0[∫ΩN∑i=1[|∂xi(un+tφ)|pi(x)−|∂xiun|pi(x)]tpi(x)dx]⏟:=I1−lim inft→0[∫Ωg(x)[(un+tφ)q(x)+1−uq(x)+1n]t(q(x)+1)dx]⏟:=I2≥lim inft→0[∫Ωf(x)[(un+tφ)1−β(x)−u1−β(x)n]t(1−β(x))dx]⏟:=I3 |
Calculation of I1,I2 gives
I1=ddt(∫ΩN∑i=1|∂xi(un+tφ)|pi(x)pi(x)dx)|t=0=∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx | (3.18) |
and
I2=ddt(∫Ωg(x)(un+tφ)q(x)+1q(x)+1dx)|t=0=∫Ωg(x)uq(x)nφdx. | (3.19) |
For I3: Since for t>0 it holds
u1−β(x)n(x)−(un(x)+tφ(x))1−β(x)≥0,a.e. inΩ |
we can apply Fatou's lemma, that is,
I3=lim inft→0∫Ωf(x)[(un+tφ)1−β(x)−u1−β(x)n]t(1−β(x))dx≥∫Ωlim inft→0f(x)[(un+tφ)1−β(x)−u1−β(x)n]t(1−β(x))dx≥∫Ωf(x)u−β(x)nφdx | (3.20)
Now, substituting I1,I2,I3 gives
‖φ‖→p(⋅)n+∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx−∫Ωg(x)uq(x)nφdx≥∫Ωf(x)u−β(x)nφdx |
From Lemma 3.7, we know that un→u∗ in W1,→p(⋅)0(Ω). Thus, also considering Fatou's lemma, we obtain
∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiφdx−∫Ωg(x)(u∗)q(x)φdx−∫Ωf(x)(u∗)−β(x)φdx≥0, | (3.21) |
for any φ∈W1,→p(⋅)0(Ω) with φ≥0. Letting φ=u∗ in (3.21) shows clearly that u∗∈N1.
Lastly, from Lemma 3.7, we can conclude that
limn→∞J(un)=J(u∗)=infN2J, |
which means
u∗∈N2,(witht(u∗)=1) | (3.22) |
Case Ⅱ: There exists a subsequence of (un) (not relabelled) contained in N2.
For a function φ∈W1,→p(⋅)0(Ω) with φ≥0, t>0, and un∈N2, we have
∫Ωf(x)(un+tφ)1−β(x)dx≤∫Ωf(x)u1−β(x)ndx=∫ΩN∑i=1|∂xiun|pi(x)dx−∫Ωg(x)uq(x)+1ndx<∞, | (3.23)
and hence, there exists a unique continuous scaling function, denoted by θn(t):=t(un+tφ)>0, corresponding to (un+tφ) so that θn(t)(un+tφ)∈N2 for n=1,2,.... Obviously, θn(0)=1. Since θn(t)(un+tφ)∈N2, we have
0=∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)dx−∫Ωg(x)(θn(t)(un+tφ))q(x)+1dx−∫Ωf(x)(θn(t)(un+tφ))1−β(x)dx≥∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)dx−θqM+1n(t)∫Ωg(x)(un+tφ)q(x)+1dx−θ1−βmn(t)∫Ωf(x)(un+tφ)1−β(x)dx, | (3.24) |
and
0=∫ΩN∑i=1|∂xiun|pi(x)dx−∫Ωg(x)uq(x)+1ndx−∫Ωf(x)u1−β(x)ndx. | (3.25) |
where βm:=min{β−,β+}. Then, using (3.24) and (3.25) together gives
0≥[−(q++1)[θn(0)+τ1(θn(t)−θn(0))]qm∫Ωg(x)(un+tφ)q(x)+1dx−(1−βm)[θn(0)+τ2(θn(t)−θn(0))]−βm∫Ωf(x)(un+tφ)1−β(x)dx](θn(t)−θn(0))+∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)dx−∫ΩN∑i=1|∂xi(un+tφ)|pi(x)dx+∫ΩN∑i=1|∂xi(un+tφ)|pi(x)dx−∫ΩN∑i=1|∂xiun|pi(x)dx−[∫Ωg(x)(un+tφ)q(x)+1dx−∫Ωg(x)uq(x)+1ndx]−[∫Ωf(x)(un+tφ)1−β(x)dx−∫Ωf(x)u1−β(x)ndx] | (3.26) |
for some constants τ1,τ2∈(0,1). To proceed, we assume that θ′n(0)=ddtθn(t)|t=0∈[−∞,∞]. In case this limit does not exist, we can consider a subsequence tk>0 of t such that tk→0 as k→∞.
Next, we show that θ′n(0)≠∞.
Dividing the both sides of (3.26) by t and passing to the limit as t→0 leads to
0≥[P−−∫ΩN∑i=1|∂xiun|pi(x)dx+(βm−1)∫Ωf(x)u1−β(x)ndx−(q++1)∫Ωg(x)uq(x)+1ndx]θ′n(0)+P−−∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx−(q++1)∫Ωg(x)uq(x)nφdx+(βm−1)∫Ωf(x)u−β(x)nφdx | (3.27) |
or
0≥[(P−−−q+−1)∫ΩN∑i=1|∂xiun|pi(x)dx+(βm+q+)∫Ωf(x)u1−β(x)ndx]θ′n(0)+P−−∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx−(q++1)∫Ωg(x)uq(x)nφdx+(βm−1)∫Ωf(x)u−β(x)nφdx | (3.28) |
which, along with Lemma 3.4, concludes that −∞≤θ′n(0)<∞, and hence, θ′n(0)≤¯c, uniformly in all large n.
Next, we show that θ′n(0)≠−∞.
First, we apply Ekeland's variational principle to the minimizing sequence (un)⊂N2(⊂N1). Thus, letting ν:=θn(t)(un+tφ) in (E2) gives
1n[|θn(t)−1|‖un‖→p(⋅)+tθn(t)‖φ‖→p(⋅)]≥J(un)−J(θn(t)(un+tφ))=∫ΩN∑i=1|∂xiun|pi(x)pi(x)dx−∫Ωg(x)uq(x)+1nq(x)+1dx+∫Ωf(x)u1−β(x)nβ(x)−1dx−∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)pi(x)dx+∫Ωg(x)[θn(t)(un+tφ)]q(x)+1q(x)+1dx−∫Ωf(x)[θn(t)(un+tφ)]1−β(x)β(x)−1dx≥∫ΩN∑i=1|∂xiun|pi(x)pi(x)dx−∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)pi(x)dx−∫Ωg(x)uq(x)+1nq(x)+1dx+∫Ωg(x)[θn(t)(un+tφ)]q(x)+1q(x)+1dx−1β−−1∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)dx | (3.29) |
If we use Lemma 3.4 to manipulate the norm ‖un+tφ‖→p(⋅), the integral in the last line of (3.29) can be written as follows
1β−−1∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)dx≤θPMn(t)β−−1∫ΩN∑i=1|∂xi(un+tφ)|pi(x)dx≤θPMn(t)β−−1‖un+tφ‖PM→p(⋅)≤2P++−1θPMn(t)CPM(δ2)‖φ‖PM→p(⋅)β−−1t | (3.30) |
Then,
1n[|θn(t)−1|‖un‖→p(⋅)+tθn(t)‖φ‖→p(⋅)]+∫ΩN∑i=1[|∂xi(un+tφ)|pi(x)−|∂xiun|pi(x)]pi(x)dx+2P++−1θPMn(t)CPM(δ2)‖φ‖PM→p(⋅)β−−1t≥[(1q−+1)[θn(0)+τ1(θn(t)−θn(0))]qm∫Ωg(x)(un+tφ)q(x)+1dx](θn(t)−θn(0))≥−∫ΩN∑i=1[|∂xiθn(t)(un+tφ)|pi(x)−|∂xi(un+tφ)|pi(x)]pi(x)dx+1q−+1∫Ωg(x)[(un+tφ)q(x)+1−uq(x)+1n]dx | (3.31) |
Dividing by t and passing to the limit as t→0 gives
1n‖φ‖→p(⋅)+2P++−1θPMn(t)CPM(δ2)‖φ‖PM→p(⋅)β−−1≥[(−1+1q−+1)∫ΩN∑i=1|∂xiun|pi(x)dx−1q−+1∫Ωf(x)u1−β(x)ndx−‖un‖→p(⋅)nsgn[θn(t)−1]]θ′n(0)−∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx+∫Ωg(x)uq(x)ndx | (3.32) |
which concludes that θ′n(0)≠−∞. Thus, θ′n(0)≥c_ uniformly in large n.
In conclusion, there exists a constant, C0>0 such that |θ′n(0)|≤C0 when n≥N0,N0∈N.
Next, we show that u∗∈N2.
Using (E2) again, we have
1n[|θn(t)−1|‖un‖→p(⋅)+tθn(t)‖φ‖→p(⋅)]≥J(un)−J(θn(t)(un+tφ))=∫ΩN∑i=1|∂xiun|pi(x)pi(x)dx−∫Ωg(x)uq(x)+1nq(x)+1dx+∫Ωf(x)u1−β(x)nβ(x)−1dx−∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)pi(x)dx+∫Ωg(x)[θn(t)(un+tφ)]q(x)+1q(x)+1dx−∫Ωf(x)[θn(t)(un+tφ)]1−β(x)β(x)−1dx=−∫ΩN∑i=1|∂xi(un+tφ)|pi(x)pi(x)dx+∫ΩN∑i=1|∂xiun|pi(x)pi(x)dx−∫Ωf(x)(un+tφ)1−β(x)β(x)−1dx+∫Ωf(x)u1−β(x)nβ(x)−1dx−∫ΩN∑i=1|∂xiθn(t)(un+tφ)|pi(x)pi(x)dx+∫ΩN∑i=1|∂xi(un+tφ)|pi(x)pi(x)dx−∫Ωf(x)[θn(t)(un+tφ)]1−β(x)β(x)−1dx+∫Ωf(x)(un+tφ)1−β(x)β(x)−1dx∫Ωg(x)[θn(t)(un+tφ)]q(x)+1q(x)+1dx−∫Ωg(x)(un+tφ)q(x)+1q(x)+1dx−∫Ωg(x)uq(x)+1nq(x)+1dx+∫Ωg(x)(un+tφ)q(x)+1q(x)+1dx | (3.33) |
Dividing by t and passing to the limit as t→0 gives
1n[|θ′n(0)|‖un‖→p(⋅)+‖φ‖→p(⋅)]≥−∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx+∫Ωf(x)u−β(x)nφdx+∫Ωg(x)uq(x)nφdx[−∫ΩN∑i=1|∂xiun|pi(x)dx+∫Ωg(x)uq(x)+1ndx+∫Ωf(x)u1−β(x)ndx]θ′n(0)=−∫ΩN∑i=1|∂xiun|pi(x)−2∂xiun⋅∂xiφdx+∫Ωg(x)uq(x)nφdx+∫Ωf(x)u−β(x)nφdx | (3.34) |
If we consider that |θ′n(0)|≤C0 uniformly in n, we obtain that ∫Ωf(x)u−β(x)ndx<∞. Therefore, for n→∞ it reads
∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiφdx−∫Ωg(x)(u∗)q(x)φdx−∫Ωf(x)(u∗)−β(x)φdx≥0 | (3.35) |
for all φ∈W1,→p(⋅)0(Ω), φ≥0. Letting φ=u∗ in (3.35) shows clearly that u∗∈N1.
This means, as with the Case Ⅰ, that we have
u∗∈N2 | (3.36) |
By taking into consideration the results (3.21), (3.22), (3.35), and (3.36), we infer that u∗∈N2 and (3.35) holds, in the weak sense, for both cases. Additionally, since u∗≥0 and u∗≠0, by the strong maximum principle for weak solutions, we must have u∗(x)>0 almost everywhere in Ω.
Next, we show that u∗∈W1,→p(⋅)0(Ω) is a weak solution to problem (1.1).
For an arbitrary function ϕ∈W1,→p(⋅)0(Ω), and ε>0, let φ=(u∗+εϕ)+=max{0,u∗+εϕ}. We split Ω into two sets as follows:
Ω≥={x∈Ω:u∗(x)+εϕ(x)≥0}, | (3.37) |
and
Ω<={x∈Ω:u∗(x)+εϕ(x)<0}. | (3.38) |
If we replace φ with (u∗+εϕ) in (3.35), it follows
0≤∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiφdx−∫Ω[g(x)(u∗)q(x)+f(x)(u∗)−β(x)]φdx=∫Ω≥N∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xi(u∗+εϕ)dx−∫Ω≥[g(x)(u∗)q(x)(u)∗+f(x)(u∗)−β(x)](u∗+εϕ)dx=∫Ω−∫Ω<[N∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xi(u∗+εϕ)−[g(x)(u∗)q(x)+f(x)(u∗)−β(x)](u∗+εϕ)]dx=∫ΩN∑i=1|∂xiu∗|pi(x)dx+ε∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiϕdx−∫Ωf(x)(u∗)1−β(x)dx−ε∫Ωf(x)(u∗)−β(x)ϕdx−∫Ωg(x)(u∗)q(x)+1dx−ε∫Ωg(x)(u∗)q(x)ϕdx−∫Ω<[N∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xi(u∗+εϕ)−[g(x)(u∗)q(x)+f(x)(u∗)−β(x)](u∗+εϕ)]dx | (3.39) |
Since u∗∈N2, we have
0≤ε[∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiϕ−[g(x)(u∗)q(x)+f(x)(u∗)−β(x)]ϕ]dx−ε∫Ω<N∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiϕdx+ε∫Ω<g(x)(u∗)q(x)ϕdx+ε∫Ω<f(x)(u∗)−β(x)ϕdx | (3.40) |
Dividing by ε and passing to the limit as ε→0, and considering that |Ω<|→0 as ε→0 gives
∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiϕdx−∫Ωg(x)(u∗)q(x)ϕdx≥∫Ωf(x)(u∗)−β(x)ϕdx,∀ϕ∈W1,→p(⋅)0(Ω) | (3.41) |
Since the function ϕ∈W1,→p(⋅)0(Ω) is arbitrary, (3.41) also holds with −ϕ in place of ϕ, and it follows that
∫ΩN∑i=1|∂xiu∗|pi(x)−2∂xiu∗⋅∂xiϕdx−∫Ωg(x)(u∗)q(x)ϕdx=∫Ωf(x)(u∗)−β(x)ϕdx | (3.42) |
which shows that $u^*\in W_0^{1,\vec p(\cdot)}(\Omega)$ is a weak solution to problem (1.1).
Suppose that
\[
g(x)=e^{k\cos(|x|)}\quad\text{and}\quad f(x)=(1-|x|)^{k\beta(x)},\qquad x\in B_1(0)\subset\mathbb{R}^N,\ k>0.
\]
Then equation (1.1) becomes
\[
\begin{cases}
-\displaystyle\sum_{i=1}^N \partial_{x_i}\big(|\partial_{x_i}u|^{p_i(x)-2}\partial_{x_i}u\big)=(1-|x|)^{k\beta(x)}u^{-\beta(x)}+e^{k\cos(|x|)}u^{q(x)} & \text{in } B_1(0),\\
u>0 & \text{in } B_1(0),\\
u=0 & \text{on } \partial B_1(0).
\end{cases}\tag{4.1}
\]
Theorem 4.1. Assume that conditions $(A_1)$–$(A_3)$ hold. If $1<\beta^+<1+\frac{k+1}{\alpha}$ and $\alpha>1/2$, then problem (4.1) has at least one positive $W_0^{1,\vec p(\cdot)}(B_1(0))$-solution.
Proof. The function $f(x)=(1-|x|)^{k\beta(x)}\le(1-|x|)^{k\beta^-}$ is clearly non-negative and bounded above on the unit ball $B_1(0)$, since $|x|<1$. Hence $f\in L^1(B_1(0))$.
Now choose $\bar u=(1-|x|)^{\alpha}$. Since $\bar u$ is also non-negative and bounded on $B_1(0)$, we have $\bar u\in L^{P_+^+}(B_1(0))$. Indeed,
\[
\sum_{i=1}^N\int_{B_1(0)}\big((1-|x|)^{\alpha}\big)^{p_i(x)}dx\le N\left[\int_{B_1(0)}\big((1-|x|)^{\alpha}\big)^{P_-^-}dx+\int_{B_1(0)}\big((1-|x|)^{\alpha}\big)^{P_+^+}dx\right]<\infty.
\]
Next, we show that $\partial_{x_i}\bar u\in L^{p_i(\cdot)}(B_1(0))$ for $i\in\{1,\dots,N\}$. Fix $i\in\{1,\dots,N\}$. Then
\[
\partial_{x_i}(1-|x|)^{\alpha}=\alpha(1-|x|)^{\alpha-1}\Big(-\frac{x_i}{|x|}\Big).
\]
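The radial-derivative formula above can be sanity-checked with a quick finite-difference computation. This is an illustrative sketch only: the sample point, the value of $\alpha$, and the step size `h` are arbitrary choices, not quantities from the text.

```python
import math

def u(x, alpha):
    """The test function (1 - |x|)^alpha evaluated at a point x in R^3."""
    r = math.hypot(*x)
    return (1 - r) ** alpha

alpha, h = 1.7, 1e-6
x = (0.3, -0.2, 0.4)        # a sample point inside the unit ball
r = math.hypot(*x)

for i in range(3):
    # central finite difference in the i-th coordinate
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    numeric = (u(xp, alpha) - u(xm, alpha)) / (2 * h)
    # closed-form partial derivative: alpha (1-|x|)^{alpha-1} (-x_i/|x|)
    exact = -alpha * (1 - r) ** (alpha - 1) * x[i] / r
    assert abs(numeric - exact) < 1e-5
print("gradient formula verified")
```

The agreement of the two values at any interior point of the ball confirms the displayed formula for $\partial_{x_i}(1-|x|)^{\alpha}$.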
Considering that x∈B1(0), we obtain
\[
\int_{B_1(0)}\big|\partial_{x_i}(1-|x|)^{\alpha}\big|^{p_i(x)}dx\le\alpha^{P_M}\int_{B_1(0)}(1-|x|)^{(\alpha-1)P_-^-}dx.
\]
Therefore,
\[
\sum_{i=1}^N\int_{B_1(0)}\big|\partial_{x_i}(1-|x|)^{\alpha}\big|^{p_i(x)}dx\le N\alpha^{P_M}\int_{B_1(0)}(1-|x|)^{(\alpha-1)P_-^-}dx<\infty
\]
if $\alpha>\frac{P_-^--1}{P_-^-}$. Thus, $\partial_{x_i}\bar u\in L^{p_i(\cdot)}(B_1(0))$ for $i\in\{1,\dots,N\}$, and consequently $\bar u\in W_0^{1,\vec p(\cdot)}(B_1(0))$.
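The threshold $\alpha>\frac{P_-^--1}{P_-^-}$ comes from the fact that $\int_{B_1(0)}(1-|x|)^{s}\,dx$ is finite exactly when $s>-1$: in polar coordinates the radial part is the Beta function $B(N,s+1)$. The following is a small illustrative script, not part of the proof; the sample values of $P_-^-$, $N$, and $\alpha$ are hypothetical.

```python
from math import gamma, pi

def ball_radial_integral(s, N):
    """Value of the integral of (1-|x|)^s over the unit ball in R^N,
    computed in polar coordinates as |S^{N-1}| * B(N, s+1).
    Finite iff s > -1."""
    if s <= -1:
        raise ValueError("integral diverges for s <= -1")
    surface = 2 * pi ** (N / 2) / gamma(N / 2)          # area of S^{N-1}
    radial = gamma(N) * gamma(s + 1) / gamma(N + s + 1)  # Beta(N, s+1)
    return surface * radial

# Gradient term: exponent s = (alpha - 1) * P_mm must exceed -1,
# i.e. alpha > (P_mm - 1) / P_mm.  Sample values:
P_mm, N = 2.0, 3
alpha = 0.6                      # 0.6 > (2 - 1)/2 = 0.5, so the integral is finite
s = (alpha - 1) * P_mm
print(ball_radial_integral(s, N))
```

With $s=0$ the function returns the volume of the unit ball, which gives a quick consistency check of the formula.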
Finally, we show that $\int_{B_1(0)}\frac{(1-|x|)^{k}(1-|x|)^{\alpha(1-\beta(x))}}{\beta(x)}\,dx<\infty$. Indeed,
\[
\int_{B_1(0)}\frac{(1-|x|)^{k}(1-|x|)^{\alpha(1-\beta(x))}}{\beta(x)}\,dx\le\frac{1}{\beta^-}\int_{B_1(0)}(1-|x|)^{k+\alpha(1-\beta^+)}\,dx<\infty,
\]
since $\beta^+<1+\frac{k+1}{\alpha}$ implies $k+\alpha(1-\beta^+)>-1$.
Thus, by Theorem 3.8, problem (4.1) has at least one positive $W_0^{1,\vec p(\cdot)}(B_1(0))$-solution.
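The hypothesis $\beta^+<1+\frac{k+1}{\alpha}$ of Theorem 4.1 is exactly the algebraic condition $k+\alpha(1-\beta^+)>-1$ used in the last estimate. A brief randomized check of this equivalence (illustrative only; the sampling ranges are arbitrary):

```python
import random

# For alpha > 0:
#   k + alpha*(1 - beta_plus) > -1
#   <=>  alpha*beta_plus < k + alpha + 1
#   <=>  beta_plus < 1 + (k + 1)/alpha
random.seed(0)
for _ in range(1000):
    k = random.uniform(0.01, 5)
    alpha = random.uniform(0.01, 5)
    beta_plus = random.uniform(1.0, 5)
    lhs = k + alpha * (1 - beta_plus) > -1
    rhs = beta_plus < 1 + (k + 1) / alpha
    assert lhs == rhs
print("equivalence verified")
```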
The author declares he has not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by Athabasca University Research Incentive Account [140111 RIA].
The author declares there is no conflict of interest.