Research article

Optimizing the max-min function with a constraint on a two-sided linear system

  • Received: 28 September 2023 Revised: 30 November 2023 Accepted: 13 December 2023 Published: 23 February 2024
  • MSC : 15A24, 15A69

  • The behavior of discrete-event systems, in which the individual components move from event to event rather than varying continuously through time, is often described by systems of linear equations in max-min algebra, in which classical addition and multiplication are replaced by ⊕ and ⊗, representing maximum and minimum, respectively. Max-min equations have found a broad area of applications in causal models, which emphasize relationships between input and output variables, and many practical situations can be described by max-min systems of linear equations. We deal with a two-sided max-min system of linear equations with unknown column vector x of the form A ⊗ x ⊕ c = B ⊗ x ⊕ d, where A, B are given square matrices, c, d are column vectors, and the operations ⊕ and ⊗ are extended to matrices and vectors in the same way as in classical algebra. We give an equivalent condition for its solvability. For a given max-min objective function f, we consider the optimization problem of maximizing or minimizing f(x) subject to A ⊗ x ⊕ c = B ⊗ x ⊕ d. We also solve the equation f(x) = v on the set of solutions of A ⊗ x ⊕ c = B ⊗ x ⊕ d, and we extend the problem to the case of an interval function f and an interval value v. We define several types of reachability of the interval value v by the interval function f and provide equivalent conditions for them.

    Citation: Helena Myšková, Ján Plavka. Optimizing the max-min function with a constraint on a two-sided linear system[J]. AIMS Mathematics, 2024, 9(4): 7791-7809. doi: 10.3934/math.2024378




    Problems on algebraic structures, in which the pairs of operations (max,+) or (max,min) replace addition and multiplication of the classical algebra, have appeared in the literature since approximately the 1960s, see e.g., [3,18]. A systematic theory of such algebraic structures was probably first published in [3].

    The behavior of discrete-event systems, in which the individual components move from event to event rather than varying continuously through time, is often described using max-plus or max-min algebra. Max-min algebra is a convenient algebraic setting for some types of optimization problems, see [4]. It can be used in a range of practical problems related to scheduling and optimization, fuzzy discrete dynamical systems, graph theory, knowledge engineering, cluster analysis, and fuzzy systems, as well as for describing the diagnosis of technical devices [17,20] and medical diagnosis [16], and it gives rise to many new problems. Discrete dynamical systems and the related algebraic structures were studied using max-min matrix operations in [3,5,15]. In recent decades, there has been significant effort to study systems of max-min linear equations of the form A ⊗ x = b, where the elements of A and b represent capacities, relationships between certain objects, equipment reliability, and the like. In practice, these values are not precise numbers but usually lie in some intervals. Interval arithmetic is an efficient way to represent matrices in a guaranteed way on a computer. Interval systems of linear equations of the form A ⊗ x = b have been intensively studied, see [11,12,13,14,15].

    Since the operation of maximum replacing addition is not a group operation, but only a semigroup one, there is a substantial difference between solving systems with variables on one side of the equation and systems with variables occurring on both sides, i.e., A ⊗ x = B ⊗ x. Two-sided max-min linear systems were studied in [6], where a simple polynomial algorithm was described.

    The issue of linear programming (optimization) in classical algebra is well known and intensively researched. M. Hladík has dealt with interval versions of linear optimization in classical algebra in several publications, e.g., [7,8]. In max-plus algebra, max-linear programming (optimization) was introduced by Aminu and Butkovič in [1]. A special type of max-linear programming has been studied in [2]. Optimization problems for classical linear functions under a max-min equation constraint have been studied in [9,19].

    In this paper, we deal with a two-sided system of linear equations of the form A ⊗ x ⊕ c = B ⊗ x ⊕ d, where A, B are given square matrices, c, d are column vectors, and the operations ⊕ and ⊗ are extended to matrices and vectors in the same way as in classical algebra. We give an equivalent condition for its solvability. For a given objective function f, we deal with the optimization problem of maximizing or minimizing f(x) subject to A ⊗ x ⊕ c = B ⊗ x ⊕ d. We also solve an equation of the form f(x) = v on the solution set of A ⊗ x ⊕ c = B ⊗ x ⊕ d. Replacing the objective function f and the number v with an interval function f and an interval value v leads to the definition of several types of reachability of the interval value v by the interval function f.

    The following example is a modified and extended version of the example listed in [6].

    Example 1.1. Let us take a manufacturing company that transports components to produce a specific product from places P_1, P_2, P_3, P_4. Transportation means of different sizes transport these components from places P_i, i = 1,2,3,4, to place T, where the assembly line is located. The goods are unloaded in T, assembled, and the transportation means with completed products have to return to P_i. We assume that the connection between P_i and T is only possible via one of the places Q_j, j = 1,2,3, the roads between P_i and Q_j are one-way roads, and the capacity of the road in the direction from P_i to Q_j is equal to a_ij. The transport from T to P_i is carried out via other one-way roads between places Q_j and P_i, with the capacity of the road in the direction from Q_j to P_i equal to b_ij.

    In Figure 1, there is an arrow (P_i, Q_j) if there exists a connection from P_i to Q_j, and there is an arrow (Q_j, P_i) if there exists a connection from Q_j to P_i, where i = 1,2,3,4, j = 1,2,3. We have to join places Q_j with T by a two-way road with the same capacity x_j in both directions. The maximum reachable capacity of the connection from P_i to T is therefore equal to max_j min{a_ij, x_j}. Since the roads between T and Q_j are two-way roads, the maximum reachable capacity of the connection from T to P_i is equal to max_j min{b_ij, x_j} for i = 1,2,3,4.

    Figure 1.  Transportation system.

    We assume that the transportation means can only pass through roads with a capacity that is not smaller than the capacity of the transportation means. So that each of the transportation means may return to P_i, it is natural to require, for each i = 1,2,3,4, that the maximal reachable capacity of the connection from P_i to T is equal to the maximal reachable capacity of the connection from T to P_i on the way back. We obtain

    max{min{a_11, x_1}, min{a_12, x_2}} = max{min{b_11, x_1}, min{b_12, x_2}},
    max{min{a_21, x_1}, min{a_22, x_2}} = max{min{b_21, x_1}, min{b_22, x_2}},
    max{min{a_32, x_2}, min{a_33, x_3}} = max{min{b_32, x_2}, min{b_33, x_3}},
    max{min{a_42, x_2}, min{a_43, x_3}} = max{min{b_42, x_2}, min{b_43, x_3}}.

    In general, suppose that there are m places P_1, P_2, ..., P_m and n transfer points Q_1, Q_2, ..., Q_n. If the road from P_i to Q_j (from Q_j to P_i) does not exist, we set a_ij = 0 (b_ij = 0).

    Our task is to choose the appropriate capacities x_j for any j ∈ N, N = {1,2,...,n}, such that

    max_{j∈N} min{a_ij, x_j} = max_{j∈N} min{b_ij, x_j}

    for all i ∈ M, M = {1,2,...,m}.

    Assume that the amount of components in T delivered from P_i has to be at least c_i and the missing goods are replenished from the stock. Similarly, the amount of completed products in P_i delivered from T has to be at least d_i. Therefore, we are looking for the solution of the equation

    max{max_{j∈N} min{a_ij, x_j}, c_i} = max{max_{j∈N} min{b_ij, x_j}, d_i}   (1.1)

    for each i ∈ M.

    Let us provide more information about the organization of the paper and the results achieved. The next section contains definitions and notation. Section 3 is devoted to the solvability of two-sided linear equations. The known results on the solvability of the equation A ⊗ x = B ⊗ x are extended with further properties. Besides that, the problem of solvability of the system A ⊗ x ⊕ c = B ⊗ x ⊕ d is introduced. Theorem 3.1 gives an equivalent condition for its solvability. Section 4 deals with the problem of linear optimization. We give the range of values of the objective function f on the set of solutions of A ⊗ x ⊕ c = B ⊗ x ⊕ d. The main result is obtained in Theorem 4.3. In the last section we define the concept of reachability of an interval value by an interval objective function. We define eleven types of reachability. Theorems 5.2–5.5 provide equivalent conditions for them.

    Max-min algebra is the triple (I, ⊕, ⊗), where I = [O, I] is a linearly ordered dense set with the least element O and the greatest element I, and ⊕, ⊗ are binary operations defined as follows:

    a ⊕ b = max{a, b}  and  a ⊗ b = min{a, b}.

    The set of all m×n matrices over I is denoted by I(m,n), and the set of all column n-vectors over I by I(n). Specifically, O(m,n) (I(m,n)) will be used for the matrix with all entries equal to O (I). The operations ⊕ and ⊗ are extended to matrices and vectors in the same way as in the classical algebra. We consider the ordering ≤ on the sets I(m,n) and I(n) defined as follows:

    ● for A, C ∈ I(m,n): A ≤ C if a_ij ≤ c_ij for each i ∈ M and for each j ∈ N,

    ● for x, y ∈ I(n): x ≤ y if x_j ≤ y_j for each j ∈ N.

    We will use the monotonicity of ⊗, which means that for each A, B ∈ I(m,n) and for each x, y ∈ I(n), the implication

    if A ≤ B and x ≤ y, then A ⊗ x ≤ B ⊗ y

    holds. Let A ∈ I(m,n) and b ∈ I(m). We can write the system of max-min linear equations in the matrix form

    A ⊗ x = b.   (2.1)

    It is well known (see [3,21]) that system (2.1) is solvable if and only if the vector x*(A,b), defined by

    x*_j(A,b) = min_{i∈M} {b_i : a_ij > b_i}   (2.2)

    for any j ∈ N, where min ∅ = I, is its solution. The vector x*(A,b) is called the principal solution of system (2.1).
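    To make the basic operations concrete, the following Python sketch (an illustration only, not part of the paper; the helper names maxmin_matvec and principal_solution are ours) computes A ⊗ x and the principal solution x*(A,b) from (2.2), assuming I = [0, 10] as in the examples given later.

        # A minimal sketch of max-min arithmetic over I = [0, 10] (assumed top element I = 10).
        TOP = 10  # the greatest element I used in the examples of this paper

        def maxmin_matvec(A, x):
            """Return A (x) x, i.e. [A (x) x]_i = max_j min(a_ij, x_j)."""
            return [max(min(a, xj) for a, xj in zip(row, x)) for row in A]

        def principal_solution(A, b, top=TOP):
            """Principal solution (2.2): x*_j = min{b_i : a_ij > b_i}, with min of an empty set = I."""
            n = len(A[0])
            x = []
            for j in range(n):
                candidates = [b[i] for i in range(len(A)) if A[i][j] > b[i]]
                x.append(min(candidates) if candidates else top)
            return x

        # Quick check on a small, made-up instance: A (x) x*(A,b) <= b componentwise (cf. Theorem 2.1 below).
        A = [[3, 2, 5, 7], [4, 6, 1, 2]]
        b = [5, 4]
        assert all(lhs <= bi for lhs, bi in zip(maxmin_matvec(A, principal_solution(A, b)), b))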

    Theorem 2.1. [3,21] Let A ∈ I(m,n) and b ∈ I(m) be given.

    (ⅰ) If A ⊗ x = b for x ∈ I(n), then x ≤ x*(A,b).

    (ⅱ) A ⊗ x*(A,b) ≤ b.

    (ⅲ) The system A ⊗ x = b is solvable if and only if x*(A,b) is its solution.

    The properties of the principal solution are expressed in the following assertions.

    Lemma 2.1. [3] Let A ∈ I(m,n) and b, d ∈ I(m) be given and let b ≤ d. Then, x*(A,b) ≤ x*(A,d).

    In this section we deal with the solution of systems with variables occurring on both sides of the equation. The known results on the solvability of the equation A ⊗ x = B ⊗ x are extended with further properties. Besides that, the problem of the solvability of the system A ⊗ x ⊕ c = B ⊗ x ⊕ d is introduced.

    Let A ∈ I(m,n), B ∈ I(m,n). Let us consider the system of the form

    A ⊗ x = B ⊗ x.   (3.1)

    Let us note that (3.1) is always solvable, since for α ≤ min{a_ij, b_ij ; i ∈ M, j ∈ N}, the vector x = (α, α, ..., α) ∈ I(n) is its solution. In [6], an algorithm for finding the greatest solution of (3.1) such that x ≤ x̄, where x̄ ∈ I(n), was provided. We present this algorithm with slightly modified notation, as it will be used in the main part of the paper. Some notation is necessary. Set

    S(A,B) = {x ∈ I(n) ; A ⊗ x = B ⊗ x},   S(A,B,x̄) = {x ∈ S(A,B) ; x ≤ x̄}.

    Further, define

    a_i(x) = [A ⊗ x]_i,   b_i(x) = [B ⊗ x]_i.

    Suppose that x̄ ∉ S(A,B). It should be noted that if we exchange the ith row of matrix A and the ith row of matrix B, we get a new system that has the same set of solutions as the original system. It follows that we may assume w.l.o.g. that a_i(x̄) ≤ b_i(x̄) for each i ∈ M. Let us set

    M_<(x̄) = {i ∈ M ; a_i(x̄) < b_i(x̄)},   M_=(x̄) = {i ∈ M ; a_i(x̄) = b_i(x̄)},

    and let us introduce the following notation for any given upper bound x̄:

    α(x̄) = min{a_i(x̄) ; i ∈ M_<(x̄)},
    M_<(α(x̄)) = {i ∈ M_<(x̄) ; a_i(x̄) = α(x̄)},
    M_=(α(x̄)) = {i ∈ M_=(x̄) ; a_i(x̄) ≥ α(x̄)},
    N(α(x̄)) = {j ∈ N ; (∃ i ∈ M_<(α(x̄))) [b_ij ⊗ x̄_j > α(x̄)]}.

    Algorithm: Greatest Solution

    Input: matrices A, B ∈ I(m,n), vector x̄ ∈ I(n)

    Output: x*(A,B,x̄) – the greatest element of S(A,B,x̄)

    begin

    1 If x̄ ∈ S(A,B,x̄) then x*(A,B,x̄) := x̄; STOP;

    2 Swap the rows of the matrices A, B so that a_i(x̄) ≤ b_i(x̄) holds for all i ∈ M;

    3 Compute α(x̄), M_<(α(x̄)), M_=(α(x̄)), N(α(x̄));

    4 Set x̃_j := α(x̄) if j ∈ N(α(x̄)), x̃_j := x̄_j otherwise;

    5 If x̃ ∈ S(A,B,x̄) then x*(A,B,x̄) := x̃; STOP;

    6 Set x̄ := x̃; go to 2.

    end

    If we set x̄ = I(n), the vector with all entries equal to I, the Greatest Solution algorithm finds the greatest element of S(A,B), denoted by x*(A,B). On the other hand, (3.1) may have several minimal solutions, and it is not known whether there is a smallest element of S(A,B).
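    A possible Python rendering of the Greatest Solution algorithm is sketched below (our own reading of the steps above, not the authors' code; I = 10 plays the role of the greatest element, and the function names are ours). It repeatedly lowers the components selected by N(α(x̄)) to α(x̄) until a solution is reached.

        # Sketch of the Greatest Solution algorithm for A (x) x = B (x) x with x <= xbar.
        def maxmin_matvec(A, x):
            return [max(min(a, xj) for a, xj in zip(row, x)) for row in A]

        def greatest_solution(A, B, xbar=None, top=10):
            A = [row[:] for row in A]               # rows may be swapped, so work on copies
            B = [row[:] for row in B]
            m, n = len(A), len(A[0])
            x = list(xbar) if xbar is not None else [top] * n
            while True:
                a, b = maxmin_matvec(A, x), maxmin_matvec(B, x)
                if a == b:                          # steps 1 and 5: x is a solution
                    return x
                for i in range(m):                  # step 2: ensure a_i(x) <= b_i(x)
                    if a[i] > b[i]:
                        A[i], B[i] = B[i], A[i]
                        a[i], b[i] = b[i], a[i]
                M_less = [i for i in range(m) if a[i] < b[i]]
                alpha = min(a[i] for i in M_less)   # step 3
                crit = [i for i in M_less if a[i] == alpha]
                N_alpha = [j for j in range(n)
                           if any(min(B[i][j], x[j]) > alpha for i in crit)]
                for j in N_alpha:                   # step 4: lower the selected components
                    x[j] = alpha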

    Example 3.1. Let I = [0,10]. Find the greatest solution of the two-sided equation A ⊗ x = B ⊗ x, where

    A = ( 3 2 5 7 6 )        B = ( 3 6 5 2 6 )
        ( 4 6 1 2 5 )            ( 4 3 5 2 4 )
        ( 5 3 2 1 4 )            ( 7 2 8 3 3 )
        ( 3 5 4 2 3 )            ( 3 8 1 3 2 )

    Solution. We have x̄ = (10,10,10,10,10).

    1 Since A ⊗ x̄ = (7,6,5,5) and B ⊗ x̄ = (6,5,8,8), x̄ ∉ S(A,B,x̄);

    2 We swap the 1st and 2nd rows of A and B and obtain

    ( 3 6 5 2 6 )            ( 3 2 5 7 6 )
    ( 4 3 5 2 4 )  ⊗ x  =    ( 4 6 1 2 5 )  ⊗ x.
    ( 5 3 2 1 4 )            ( 7 2 8 3 3 )
    ( 3 5 4 2 3 )            ( 3 8 1 3 2 )

    3 Since A ⊗ x̄ = (6,5,5,5) and B ⊗ x̄ = (7,6,8,8), we obtain α(x̄) = 5, M_<(5) = {2,3,4}, M_=(5) = ∅, N(5) = {1,2,3};

    4 x̃ := (5,5,5,10,10);

    5 Since A ⊗ x̃ = (6,5,5,5) and B ⊗ x̃ = (7,5,5,5), x̃ ∉ S(A,B);

    6 x̄ := (5,5,5,10,10);

    ------------------------------------------------------------------------------------------

    3 α(x̄) = 6, M_<(6) = {1}, M_=(6) = ∅, N(6) = {4};

    4 x̃ := (5,5,5,6,10);

    5 Since A ⊗ x̃ = (6,5,5,5) = B ⊗ x̃, x̃ ∈ S(A,B); x*(A,B,x̄) := (5,5,5,6,10); STOP

    Answer: The greatest solution of the given equation is x*(A,B) = (5,5,5,6,10).
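    As a quick plausibility check (an illustration only), one can verify directly that the answer equalizes both sides of the system from Example 3.1:

        # Verify that x = (5,5,5,6,10) solves A (x) x = B (x) x for the data of Example 3.1.
        A = [[3, 2, 5, 7, 6], [4, 6, 1, 2, 5], [5, 3, 2, 1, 4], [3, 5, 4, 2, 3]]
        B = [[3, 6, 5, 2, 6], [4, 3, 5, 2, 4], [7, 2, 8, 3, 3], [3, 8, 1, 3, 2]]
        x = [5, 5, 5, 6, 10]
        lhs = [max(min(a, xj) for a, xj in zip(row, x)) for row in A]
        rhs = [max(min(b, xj) for b, xj in zip(row, x)) for row in B]
        assert lhs == rhs        # both sides equal (6, 5, 5, 5)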

    Lemma 3.1. Let x*(A,B) be the greatest solution of (3.1). Then

    x*_j(A,B) = min{x*_j(A,e), x*_j(B,e)},

    for each j ∈ N, where e = A ⊗ x*(A,B) = B ⊗ x*(A,B).

    Proof. It is easy to see that, for each x ∈ S(A,B), the equality A ⊗ x = B ⊗ x = e′ implies e′ ≤ e. Since A ⊗ x*(A,B) = e, in accordance with Theorem 2.1 we obtain x*(A,B) ≤ x*(A,e). Similarly, B ⊗ x*(A,B) = e implies x*(A,B) ≤ x*(B,e). Hence, x*(A,B) ≤ min{x*(A,e), x*(B,e)}. Since x*(A,B) is the greatest solution of (3.1), the equality holds.

    Denote H(A,B) = {a_ij, b_ij ; i ∈ M, j ∈ N}. For a given x ∈ I(n), denote by x̃ the vector given by

    x̃_j = { max{t ∈ H(A,B) ; t ≤ x_j}   if x_j ≥ min H(A,B);
          { O                           otherwise.             (3.2)

    Lemma 3.2. If x ∈ S(A,B), then x̃ ∈ S(A,B).

    Proof. Suppose that x ∈ S(A,B). Then, for each i ∈ M, the equality [A ⊗ x]_i = [B ⊗ x]_i is satisfied. Let i ∈ M be arbitrary, but fixed. We shall prove that [A ⊗ x̃]_i = [B ⊗ x̃]_i. We shall distinguish three possibilities (not necessarily disjoint), to which we refer as Cases 1–3.

    Case 1. If [A ⊗ x]_i = a_ir and [B ⊗ x]_i = b_is for some r, s ∈ N, then a_ir ≤ x̃_r, b_is ≤ x̃_s and x̃_r, x̃_s ≠ O, which implies a_ir ⊗ x̃_r = a_ir and b_is ⊗ x̃_s = b_is. Together with a_ij ⊗ x̃_j ≤ a_ij ⊗ x_j and b_ij ⊗ x̃_j ≤ b_ij ⊗ x_j for each j ∈ N, we obtain [A ⊗ x̃]_i = a_ir and [B ⊗ x̃]_i = b_is. Hence, [A ⊗ x̃]_i = [B ⊗ x̃]_i.

    Case 2. If [A ⊗ x]_i = x_r and [B ⊗ x]_i = x_s for some r, s ∈ N, then a_ir ⊗ x̃_r = x̃_r ≥ ⊕_{j≠r} a_ij ⊗ x̃_j and b_is ⊗ x̃_s = x̃_s ≥ ⊕_{j≠s} b_ij ⊗ x̃_j. Then, [A ⊗ x̃]_i = [B ⊗ x̃]_i = x̃_r = x̃_s (the last equality follows from x_r = x_s).

    Case 3. ⅰ) If [A ⊗ x]_i = x_r and [B ⊗ x]_i = b_is for some r, s ∈ N, then x_r = b_is, which implies x̃_r = x_r. Further, x̃_s ≥ b_is. Since x_r = x̃_r ≥ ⊕_{j≠r} a_ij ⊗ x̃_j and b_is ⊗ x̃_s = b_is ≥ ⊕_{j≠s} b_ij ⊗ x̃_j, we obtain [A ⊗ x̃]_i = [B ⊗ x̃]_i = x_r = b_is.

    ⅱ) If [A ⊗ x]_i = a_ir and [B ⊗ x]_i = x_s for some r, s ∈ N, the proof is similar to the proof of part ⅰ).

    Let A, B ∈ I(m,n) and c, d ∈ I(m) be given. Let us consider the system

    A ⊗ x ⊕ c = B ⊗ x ⊕ d.   (3.3)

    Returning to Example 1.1, we can see that, using max-min algebra, (1.1) can be written in the form (3.3).

    We shall discuss the solvability of (3.3). If c = d, then system (3.3) is always solvable, since any solution of the system A ⊗ x = B ⊗ x is also a solution of system (3.3). Suppose that c ≠ d. In this case, system (3.3) may or may not have a solution. If we exchange c_i and d_i and at the same time we exchange the ith row of matrix A and the ith row of matrix B (that is, the matrices A, B will change) so that c_i ≥ d_i, the solution set of the original equation is the same as the solution set of the equation after exchanging the rows. So we can assume without loss of generality that c ≥ d.

    Denote by M_> the set M_> = {i ∈ M ; c_i > d_i}. The following lemma provides a necessary condition for the solvability of (3.3).

    Lemma 3.3. If system (3.3) with c ≥ d is solvable, then for each i ∈ M_> there exists j ∈ N such that b_ij ≥ c_i.

    Proof. Suppose that there exists i ∈ M such that c_i > d_i and b_ij < c_i for each j ∈ N. Then, [B ⊗ x]_i < c_i for each x ∈ I(n). Together with d_i < c_i, we obtain [B ⊗ x ⊕ d]_i < c_i. Since [A ⊗ x ⊕ c]_i ≥ c_i, we obtain [A ⊗ x ⊕ c]_i ≠ [B ⊗ x ⊕ d]_i for each x ∈ I(n). Hence, (3.3) is not solvable.

    Denote by S(A,B,c,d) the set of all solutions of (3.3). If S(A,B,c,d) ≠ ∅, we use the notation x*(A,B,c,d) for the greatest solution of (3.3), which can always be calculated using the Greatest Solution algorithm.

    Denote by (A|c) and (B|d) the matrices arising from A and B, respectively, by adding the columns c and d, respectively. System (3.3) can be converted to the system

    (A|c) ⊗ x = (B|d) ⊗ x   (3.4)

    with unknown x ∈ I(n+1).

    Theorem 3.1. System (3.3) is solvable if and only if x*_{n+1}((A|c), (B|d)) = I.

    Proof. If (3.3) is solvable, then there exists x ∈ I(n+1) satisfying (3.4) such that x_{n+1} ≥ max_{i∈M}{c_i, d_i}. Then the vector x̂ given by x̂_j = x_j for j ∈ N and x̂_{n+1} = I is a solution of (3.4), too. It follows that x*_{n+1}((A|c), (B|d)) = I.

    Conversely, if x*_{n+1}((A|c), (B|d)) = I, then the vector x*(A,B,c,d) ∈ I(n) with x*_j(A,B,c,d) = x*_j((A|c), (B|d)) for j ∈ N is a solution of (3.3).
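    Computationally, Theorem 3.1 suggests the following test (a sketch with our naming; it assumes the greatest_solution helper from the sketch given after the Greatest Solution algorithm, with I = 10): append c and d as extra columns and check whether the last coordinate of the greatest solution of (3.4) equals I.

        # Sketch of the solvability test for A (x) x (+) c = B (x) x (+) d via Theorem 3.1.
        def augment(A, col):
            """Build (A|c): append the column vector col to the matrix A."""
            return [row + [ci] for row, ci in zip(A, col)]

        def is_solvable(A, B, c, d, top=10):
            xstar = greatest_solution(augment(A, c), augment(B, d), top=top)
            return xstar[-1] == top                  # solvable iff x*_{n+1} = I

        def greatest_constrained_solution(A, B, c, d, top=10):
            """x*(A,B,c,d): the first n coordinates; meaningful only if is_solvable(...) is True."""
            return greatest_solution(augment(A, c), augment(B, d), top=top)[:-1]

    For the data of Example 4.1 below, this test returns True and reproduces x*(A,B,c,d) = (5, 5, 5, 6).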

    Let us start with the extension of Example 1.1.

    Example 1.1 – continuation. The capacities a_ij, b_ij, x_j do not necessarily have to be considered as absolute numbers of components or products transported. We may assume that these capacities express the ratio of the number of objects to a fixed upper bound on this number. Then, all capacities are expressed by a number from the interval 0 to 1.

    In practice, we may be interested not only in the capacity of a particular connection, but also in its reliability. A connection with high capacity but low reliability may be less advantageous than others with less capacity but high reliability. Let the reliability of the connection between Q_j and T in both directions be expressed by the number f_j. Then, the utility of the connection between Q_j and T is equal to min{f_j, x_j} and the maximum utility of the system is f(x) = max_{j∈N}(f_j ⊗ x_j) = f ⊗ x. This leads to the question of how to find x ∈ I(n) which satisfies (3.3) and ensures sufficient utility.

    In this section, we develop methods for finding x ∈ I(n) that minimizes (maximizes) the function f(x) = f ⊗ x, where f = (f_1, f_2, ..., f_n) ∈ I(n), subject to Eq (3.3).

    As was mentioned in the previous part, we can assume that c ≥ d. In the case that (3.3) does not have a solution, the task does not make sense. Suppose that S(A,B,c,d) ≠ ∅, and let

    M(f) = max{f ⊗ x ; x ∈ S(A,B,c,d)},
    m(f) = min{f ⊗ x ; x ∈ S(A,B,c,d)}.

    We start by calculating the upper bound, that is, M(f).

    Theorem 4.1. Let A, B ∈ I(m,n), c, d ∈ I(m) and f ∈ I(n) be given. Then,

    M(f) = f ⊗ x*(A,B,c,d).

    Proof. Since x ≤ x*(A,B,c,d) for each x ∈ S(A,B,c,d), in accordance with the monotonicity of ⊗ we obtain f ⊗ x ≤ f ⊗ x*(A,B,c,d).

    Now, we shall discuss the lower bound. Define

    L = min_{j∈N} f_j ⊗ max_{r∈M_>} c_r.   (4.1)

    Lemma 4.1. If c ≥ d, then f(x) ≥ L for every x ∈ S(A,B,c,d).

    Proof. Let x ∈ S(A,B,c,d) and r ∈ M_>. Then, [B ⊗ x]_r ≥ c_r, which implies that there exists k ∈ N such that b_rk ⊗ x_k ≥ c_r, and consequently x_k ≥ c_r. Hence,

    f(x) = f ⊗ x ≥ f_k ⊗ x_k ≥ f_k ⊗ c_r ≥ min_{j∈N} f_j ⊗ c_r.

    Since the previous inequality holds for each r ∈ M_>, we obtain

    f(x) ≥ max_{r∈M_>} (c_r ⊗ min_{j∈N} f_j) = max_{r∈M_>} c_r ⊗ min_{j∈N} f_j = L.

    The value L given by (4.1) is a lower bound that may or may not be reached by some solution of (3.3). Hence, m(f) ≥ L.
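    As a small illustration, the bound (4.1) can be evaluated directly; the snippet below uses the data of Example 4.1 given later (c = (6,5,4,3), d = (6,4,3,2), f = (4,5,3,6)).

        # Evaluate the lower bound L from (4.1) for the data of Example 4.1.
        c = [6, 5, 4, 3]
        d = [6, 4, 3, 2]
        f = [4, 5, 3, 6]
        M_greater = [i for i in range(len(c)) if c[i] > d[i]]   # M_> (0-based indices)
        L = min(min(f), max(c[i] for i in M_greater))           # min_j f_j  (x)  max_r c_r
        print(L)                                                # -> 3, matching Example 4.1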

    Let v ∈ I be given. If we ask whether there exists x ∈ S(A,B,c,d) such that f(x) = v, we are looking for the solution of the following non-homogeneous system:

    A ⊗ x ⊕ c = B ⊗ x ⊕ d,
    O(n) ⊗ x ⊕ v = f ⊗ x ⊕ O,

    where O(n) denotes the row vector with all elements equal to O. The above system can be converted to the system

    (A|c)_{O|v} ⊗ x = (B|d)_{f|O} ⊗ x,   (4.2)

    where (A|c)_{O|v} is the matrix formed from (A|c) by adding the row (O O ... O v) as the last row, and (B|d)_{f|O} is the matrix formed from (B|d) by adding the row (f_1 f_2 ... f_n O) as the last row.

    Lemma 4.2. Let v ∈ I be given. Then, there exists x ∈ S(A,B,c,d) such that f(x) = v if and only if

    x*_{n+1}((A|c)_{O|v}, (B|d)_{f|O}) = I.

    Proof. The proof is similar to the proof of Theorem 3.1.

    Denote by x*(A,B,c,d,f,v) the greatest solution of (3.3) such that f(x) = v. It is easy to see that

    x*_i(A,B,c,d,f,v) = x*_i((A|c)_{O|v}, (B|d)_{f|O})

    for each i ∈ N.
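    The augmented system (4.2) can be assembled mechanically. The sketch below (our naming, reusing the augment and greatest_solution helpers from the earlier sketches, with O = 0 and I = 10) implements the test of Lemma 4.2.

        # Sketch of the test from Lemma 4.2: is there x in S(A,B,c,d) with f(x) = v?
        def attains_value(A, B, c, d, f, v, bottom=0, top=10):
            n = len(A[0])
            Ac = augment(A, c) + [[bottom] * n + [v]]      # (A|c) with the extra row (O ... O v)
            Bd = augment(B, d) + [list(f) + [bottom]]      # (B|d) with the extra row (f_1 ... f_n O)
            return greatest_solution(Ac, Bd, top=top)[-1] == top

    With the data of Example 4.1, attains_value(..., v=3) is False while attains_value(..., v=4) is True, in line with the computation carried out there.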

    Denote by Rv(f) the range of values of f on S(A,B,c,d), i.e.,

    Rv(f) = {f(x) ; x ∈ S(A,B,c,d)},

    and set H(A,B,c,d,f) = H(A,B) ∪ {c_i, d_i ; i ∈ M} ∪ {f_j ; j ∈ N}.

    Theorem 4.2. Let v ∈ Rv(f) be such that v ∉ H(A,B,c,d,f). Then, there exists ṽ ∈ H(A,B,c,d,f) such that ṽ < v and ṽ ∈ Rv(f).

    Proof. If there exists x ∈ S(A,B,c,d) such that f(x) = v ∉ H(A,B,c,d,f), then f(x) = ⊕_{j=1}^{n} f_j ⊗ x_j = f_k ⊗ x_k = x_k < f_k for some k ∈ N. Then, for the vector x̃ ∈ S(A,B,c,d) defined similarly as in (3.2) by the formula

    x̃_j = { max{t ∈ H(A,B,c,d,f) ; t ≤ x_j}   if x_j ≥ min H(A,B,c,d,f);
          { O                                 otherwise,

    we obtain

    v = f(x) ≥ f(x̃) = ⊕_{j=1}^{n} f_j ⊗ x̃_j = f_r ⊗ x̃_r

    for some r ∈ N. Denote ṽ = f(x̃). Since f_r, x̃_r ∈ H(A,B,c,d,f), we obtain ṽ ∈ H(A,B,c,d,f) and ṽ < v.

    It follows from Theorem 4.2 that, in finding the value m(f), it is sufficient to consider only the elements of the set H(A,B,c,d,f).

    Algorithm: Minimal value

    Input: A,B,c,d,f

    Output: minimal value m(f)

    begin

    1 Order the elements of H(A,B,c,d,f) in such a way that h_1 < h_2 < ... < h_s;

    2 Find k such that L = h_k; i := k;

    3 v := h_i; compute x*((A|c)_{O|v}, (B|d)_{f|O}) using the Greatest Solution algorithm;

    4 If x*_{n+1}((A|c)_{O|v}, (B|d)_{f|O}) = I then m(f) := v; STOP;

    5 i := i + 1; go to 3;

    end
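    A direct transcription of the Minimal value algorithm might look as follows (a sketch; it assumes the attains_value helper from the previous block, and the names are ours).

        # Sketch of the Minimal value algorithm: scan the candidate values h_k >= L upward
        # until one is attained by some x in S(A,B,c,d).
        def minimal_value(A, B, c, d, f):
            H = sorted(set(a for row in A for a in row) | set(b for row in B for b in row)
                       | set(c) | set(d) | set(f))                  # H(A,B,c,d,f), ordered
            M_greater = [i for i in range(len(c)) if c[i] > d[i]]
            L = min(min(f), max(c[i] for i in M_greater)) if M_greater else min(H)  # bound (4.1)
            for v in H:
                if v < L:
                    continue
                if attains_value(A, B, c, d, f, v):                 # test of Lemma 4.2
                    return v                                        # this is m(f)
            return None                                             # (3.3) is not solvable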

    Once the values of m(f) and M(f) are calculated, the question that arises is whether all values between m(f) and M(f) can be achieved by some x ∈ S(A,B,c,d). The following theorem gives an answer to this question.

    Theorem 4.3. Let A, B ∈ I(m,n), c, d ∈ I(m) and f ∈ I(n) be given. Then,

    Rv(f) = [m(f), M(f)].

    Proof. First, let u ∈ Rv(f). Then there exists x ∈ S(A,B,c,d) such that f(x) = u. Then,

    m(f) ≤ u = f(x) ≤ M(f).

    We obtain that Rv(f) ⊆ [m(f), M(f)]. To prove that [m(f), M(f)] ⊆ Rv(f), suppose that v ∈ [m(f), M(f)]. To simplify notation, we shall use m and M instead of m(f) and M(f), respectively. Since m ∈ Rv(f) and M ∈ Rv(f), in accordance with Lemma 4.2 we have x*_{n+1}((A|c)_{O|m}, (B|d)_{f|O}) = I and x*_{n+1}((A|c)_{O|M}, (B|d)_{f|O}) = I. We have to prove that x*_{n+1}((A|c)_{O|v}, (B|d)_{f|O}) = I.

    Denote by e^m, e^M and e^v the vectors

    e^m = (A|c)_{O|m} ⊗ x*((A|c)_{O|m}, (B|d)_{f|O}) = (B|d)_{f|O} ⊗ x*((A|c)_{O|m}, (B|d)_{f|O}),
    e^M = (A|c)_{O|M} ⊗ x*((A|c)_{O|M}, (B|d)_{f|O}) = (B|d)_{f|O} ⊗ x*((A|c)_{O|M}, (B|d)_{f|O}),
    e^v = (A|c)_{O|v} ⊗ x*((A|c)_{O|v}, (B|d)_{f|O}) = (B|d)_{f|O} ⊗ x*((A|c)_{O|v}, (B|d)_{f|O}).

    Suppose that x*_{n+1}((A|c)_{O|v}, (B|d)_{f|O}) = u ≠ I, which is equivalent to e^v_{n+1} = u < v.

    Since x*((A|c)_{O|v}, (B|d)_{f|O}) is a solution of (3.3), we have u ≥ max_{i∈M}{c_i, d_i}. Let us take the vector

    x̂ = (x*_1((A|c)_{O|v}, (B|d)_{f|O}), x*_2((A|c)_{O|v}, (B|d)_{f|O}), ..., x*_n((A|c)_{O|v}, (B|d)_{f|O}), I).

    We obtain (A|c)_{O|v} ⊗ x̂ = ê^v, where ê^v_i = e^v_i for i ∈ M and ê^v_{n+1} = v.

    We have e^m ≤ ê^v ≤ e^M, e^m_{n+1} = m and e^M_{n+1} = M. Since x*_{n+1}((B|d)_{f|O}, e^m) = x*_{n+1}((B|d)_{f|O}, e^M) = I, in accordance with Lemma 2.1 we obtain x*_{n+1}((B|d)_{f|O}, ê^v) = I.

    Since (A|c)_{O|v} ⊗ x̂ = (B|d)_{f|O} ⊗ x̂ = ê^v and ê^v_{n+1} = v, we obtain a contradiction with the assumption that x*_{n+1}((A|c)_{O|v}, (B|d)_{f|O}) ≠ I.

    Denote by x*(A,B,c,d,f,v) the greatest solution of (3.3) such that f(x) = v. The following theorem gives a necessary and sufficient condition for the solvability of a system of inequalities.

    Theorem 4.4. Let A, B ∈ I(m,n), c, d ∈ I(m), f^1, f^2 ∈ I(n) and v_1, v_2 ∈ I be given. Then, the system of inequalities

    f^1(x) ≤ v_1,   (4.3)
    f^2(x) ≥ v_2,   (4.4)

    has a solution x ∈ S(A,B,c,d) if and only if either

    M(f^1) ≤ v_1  and  M(f^2) ≥ v_2,   (4.5)

    or there exists x ∈ S(A,B,c,d) such that

    f^1(x) = v_1  and  f^2(x*(A,B,c,d,f^1,v_1)) ≥ v_2.   (4.6)

    Proof. If condition (4.5) is satisfied, then the equalities M(f^1) = f^1(x*(A,B,c,d)) and M(f^2) = f^2(x*(A,B,c,d)) imply that the vector x*(A,B,c,d) is a solution of the system (4.3) and (4.4).

    In the second case, if (4.6) holds, then x*(A,B,c,d,f^1,v_1) is a solution of the system (4.3) and (4.4).

    For the converse implication, suppose that the system (4.3) and (4.4) is solvable and condition (4.5) is not satisfied. We have to prove that condition (4.6) is satisfied.

    If the system (4.3) and (4.4) is solvable and f^2(x*(A,B,c,d)) < v_2, then f^2(x) < v_2 for each x ∈ S(A,B,c,d), which is incompatible with the solvability of (4.3) and (4.4). This means that this case never occurs, hence the implication holds.

    If the system (4.3) and (4.4) is solvable and M(f^1) > v_1, then, in accordance with Theorem 4.3, there exists x ∈ S(A,B,c,d), x ≤ x*(A,B,c,d,f^1,v_1), such that f^1(x) = v_1. Since for each x ∈ S(A,B,c,d) such that f^1(x) = v ≤ v_1 the inequality x ≤ x*(A,B,c,d,f^1,v) ≤ x*(A,B,c,d,f^1,v_1) holds, the existence of x such that f^2(x) ≥ v_2 implies f^2(x*(A,B,c,d,f^1,v_1)) ≥ v_2.

    Example 4.1. Let I = [0,10]. Check whether the system A ⊗ x ⊕ c = B ⊗ x ⊕ d is solvable and, if so, find the range of values Rv(f), where

    A = ( 3 2 5 7 )        B = ( 3 6 5 2 )
        ( 4 6 1 2 )            ( 4 3 5 2 )
        ( 5 3 2 1 )            ( 7 2 8 3 )
        ( 3 5 4 2 )            ( 3 8 1 3 )

    c = (6, 5, 4, 3),   d = (6, 4, 3, 2),   f = (4, 5, 3, 6).

    We have M_> = {2,3,4}. Since for each i ∈ M_> there exists j ∈ N such that b_ij ≥ c_i, the necessary condition for the solvability of A ⊗ x ⊕ c = B ⊗ x ⊕ d given by Lemma 3.3 is satisfied. We continue by computing the greatest solution of the system (A|c) ⊗ x = (B|d) ⊗ x, namely,

    ( 3 2 5 7 6 )            ( 3 6 5 2 6 )
    ( 4 6 1 2 5 )  ⊗ x  =    ( 4 3 5 2 4 )  ⊗ x.
    ( 5 3 2 1 4 )            ( 7 2 8 3 3 )
    ( 3 5 4 2 3 )            ( 3 8 1 3 2 )

    The above system was solved in Example 3.1, so we have x*((A|c), (B|d)) = (5,5,5,6,10). Since x*_5((A|c), (B|d)) = 10 = I, the system A ⊗ x ⊕ c = B ⊗ x ⊕ d is solvable and x*(A,B,c,d) = (5,5,5,6).

    We compute the value M(f) = f ⊗ x*(A,B,c,d) = 6. To find the value m(f), we first compute the value L by formula (4.1). We obtain

    L = min_{k∈N} f_k ⊗ max_{r∈M_>} c_r = 3 ⊗ 5 = 3,

    so f(x) ≥ 3 for each x ∈ S(A,B,c,d).

    To check whether there exists x ∈ S(A,B,c,d) such that f(x) = v, we solve the matrix equation

    ( 3 2 5 7 6 )            ( 3 6 5 2 6 )
    ( 4 6 1 2 5 )            ( 4 3 5 2 4 )
    ( 5 3 2 1 4 )  ⊗ x  =    ( 7 2 8 3 3 )  ⊗ x.
    ( 3 5 4 2 3 )            ( 3 8 1 3 2 )
    ( 0 0 0 0 v )            ( 4 5 3 6 0 )

    For v = 3 we obtain x*((A|c)_{O|v}, (B|d)_{f|O}) = (3,3,3,3,3). Since x*_5((A|c)_{O|v}, (B|d)_{f|O}) ≠ I, there is no x ∈ S(A,B,c,d) such that f(x) = 3.

    We continue with v = 4. We obtain x*((A|c)_{O|v}, (B|d)_{f|O}) = (5,4,5,4,10). Hence, there exists x ∈ S(A,B,c,d), namely x = (5,4,5,4), such that f(x) = 4. Hence, m(f) = 4 and x*(A,B,c,d,f,4) = (5,4,5,4). In accordance with Theorem 4.3, we have

    Rv(f) = [4, 6].
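    The endpoints of Rv(f) can be rechecked by hand (illustration only): the two solutions named in the example indeed attain the values 4 and 6.

        # Check the endpoints of Rv(f) = [4, 6] from Example 4.1.
        f = [4, 5, 3, 6]

        def f_of(x):                       # f(x) = max_j min(f_j, x_j)
            return max(min(fj, xj) for fj, xj in zip(f, x))

        print(f_of([5, 4, 5, 4]))          # -> 4 = m(f), attained by x*(A,B,c,d,f,4)
        print(f_of([5, 5, 5, 6]))          # -> 6 = M(f), attained by x*(A,B,c,d)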

    In this part, we deal with the case where the entries of the objective function f are not exact values, but lie in intervals of possible values. Define an interval function f as follows:

    f = [f_, f̄] = {f ; f_ ≤ f ≤ f̄},   f_ ≤ f̄.

    In this way, f denotes a vector interval as is usual in interval mathematics. Further, denote

    Rv(f) = {f(x) ; f ∈ f, x ∈ S(A,B,c,d)}.

    Theorem 5.1. Let A ∈ I(m,n), B ∈ I(m,n), c, d ∈ I(m) and f be given. Then,

    Rv(f) = [m(f_), M(f̄)].

    Proof. It is clear that Rv(f) = ∪_{f∈f} Rv(f) = Rv(f_) ∪ (∪_{f∈f, f≠f_} Rv(f)). According to Theorem 4.3 we have Rv(f_) = [m(f_), M(f_)], so Rv(f_) is a dense continuous set. Since M(f) = f(x*(A,B,c,d)) for each f ∈ f, we obtain that M(f) is a continuous function defined on the set f, and hence {M(f) ; f ∈ f} = [M(f_), M(f̄)], which is a dense set. In accordance with this, for each v ∈ [M(f_), M(f̄)] there exists f ∈ f such that v = M(f). Then, Rv(f) = [m(f_), M(f_)] ∪ (∪_{f∈f, f≠f_} [m(f), M(f)]) = [m(f_), M(f̄)].

    In the following, we deal with the situation where we require the values of the function f ∈ f to be in a given range of allowable values v = [v_, v̄]. We say that the value v is reachable by f if f(x) = v for some x ∈ S(A,B,c,d).

    We can define several types of interval reachability. For example, we can require the existence of v ∈ v such that f(x) = v for some f ∈ f and for some x ∈ S(A,B,c,d). Another possibility is to require the existence of a vector x ∈ S(A,B,c,d) such that f(x) ∈ v for each f ∈ f. By using universal and existential quantifiers for x, f and v in different orders, we obtain many types of interval reachability.

    Definition 5.1. Let A, B ∈ I(m,n), c, d ∈ I(m), f and v be given. An interval value v is

    ⅰ) weakly possibly reachable by f if there exist x ∈ S(A,B,c,d), f ∈ f and v ∈ v such that f(x) = v,

    ⅱ) possibly reachable by f if for each x ∈ S(A,B,c,d) there exist f ∈ f and v ∈ v such that f(x) = v,

    ⅲ) strongly possibly reachable by f if there exists v ∈ v such that for each x ∈ S(A,B,c,d) there exists f ∈ f such that f(x) = v.

    Theorem 5.2. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) weakly possibly reachable by f if and only if there exists x ∈ S(A,B,c,d) such that

    f_(x) ≤ v̄  and  f̄(x) ≥ v_,   (5.1)

    ⅱ) possibly reachable by f if and only if

    M(f_) ≤ v̄  and  m(f̄) ≥ v_.   (5.2)

    ⅲ) strongly possibly reachable by f if and only if it is possibly reachable by f and M(f_) ≤ m(f̄).

    Proof. For a fixed x ∈ S(A,B,c,d), the existence of f ∈ f and v ∈ v such that f(x) = v is equivalent to the existence of f ∈ f such that f(x) ∈ v. Since {f(x) ; f ∈ f} = [f_(x), f̄(x)], this holds if and only if [f_(x), f̄(x)] ∩ [v_, v̄] ≠ ∅, which is equivalent to system (5.1).

    ⅰ) Weakly possible reachability means the existence of a solution of (5.1) in the set S(A,B,c,d), which can be verified using Theorem 4.4.

    ⅱ) Possible reachability means that system (5.1) is satisfied for each x ∈ S(A,B,c,d). The inequality f_(x) ≤ v̄ holds for each x ∈ S(A,B,c,d) if and only if M(f_) ≤ v̄. Similarly, validity of the inequality f̄(x) ≥ v_ for each x ∈ S(A,B,c,d) is equivalent to m(f̄) ≥ v_.

    ⅲ) We can write the proof as a sequence of equivalences

    (∃ v ∈ v)(∀ x ∈ S(A,B,c,d))(∃ f ∈ f) f(x) = v
    ⟺ (∃ v ∈ v)(∀ x ∈ S(A,B,c,d)) v ∈ [f_(x), f̄(x)]
    ⟺ (∃ v ∈ v)(∀ x ∈ S(A,B,c,d)) [v ≥ f_(x) ∧ v ≤ f̄(x)]
    ⟺ (∃ v ∈ v)[v ≥ M(f_) ∧ v ≤ m(f̄)] ⟺ (∃ v ∈ v) M(f_) ≤ v ≤ m(f̄)
    ⟺ [M(f_) ≤ m(f̄) ∧ [M(f_), m(f̄)] ∩ [v_, v̄] ≠ ∅]
    ⟺ M(f_) ≤ m(f̄) and v is possibly reachable by f,

    because the condition [M(f_), m(f̄)] ∩ [v_, v̄] ≠ ∅ is equivalent to v being possibly reachable by f.

    Corollary 5.1. If an interval value v is strongly possibly reachable by f, then it is possibly reachable by f and if it is possibly reachable by f, then it is weakly possibly reachable by f.

    Definition 5.2. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) tolerably reachable by f if there exists x ∈ S(A,B,c,d) such that for each f ∈ f there exists v ∈ v such that f(x) = v,

    ⅱ) strongly tolerably reachable by f if for each x ∈ S(A,B,c,d) and for each f ∈ f there exists v ∈ v such that f(x) = v,

    ⅲ) weakly tolerably reachable by f if for each f ∈ f there exist x ∈ S(A,B,c,d) and v ∈ v such that f(x) = v.

    Theorem 5.3. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) tolerably reachable by f if and only if there exists x ∈ S(A,B,c,d) such that

    f̄(x) ≤ v̄  and  f_(x) ≥ v_,   (5.3)

    ⅱ) strongly tolerably reachable by f if and only if

    M(f̄) ≤ v̄  and  m(f_) ≥ v_,   (5.4)

    ⅲ) weakly tolerably reachable by f if and only if

    m(f̄) ≤ v̄  and  M(f_) ≥ v_.   (5.5)

    Proof. Let x ∈ S(A,B,c,d) be fixed. Then for each f ∈ f there exists v ∈ v such that f(x) = v if and only if [f_(x), f̄(x)] ⊆ [v_, v̄], which is equivalent to system (5.3).

    ⅰ) An interval value v is tolerably reachable by f if and only if (5.3) has a solution in S(A,B,c,d). A necessary and sufficient condition for the solvability of (5.3) is given by Theorem 4.4.

    ⅱ) Strongly tolerable reachability means that each x ∈ S(A,B,c,d) satisfies system (5.3). Validity of the inequality f̄(x) ≤ v̄ for each x ∈ S(A,B,c,d) is equivalent to M(f̄) ≤ v̄. Similarly, f_(x) ≥ v_ for each x ∈ S(A,B,c,d) if and only if m(f_) ≥ v_.

    ⅲ) The proof of weakly tolerable reachability follows from the following equivalences

    (∀ f ∈ f)(∃ x ∈ S(A,B,c,d))(∃ v ∈ v) f(x) = v
    ⟺ (∀ f ∈ f)(∃ x ∈ S(A,B,c,d)) f(x) ∈ v
    ⟺ (∀ f ∈ f) [m(f), M(f)] ∩ [v_, v̄] ≠ ∅ ⟺ (∀ f ∈ f)[m(f) ≤ v̄ ∧ M(f) ≥ v_]
    ⟺ m(f̄) ≤ v̄ ∧ M(f_) ≥ v_.

    Corollary 5.2. If an interval value v is strongly tolerably reachable by f, then it is tolerably reachable by f, and if it is tolerably reachable by f, then it is weakly tolerably reachable by f.

    Definition 5.3. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) controllably reachable by f if there exists x ∈ S(A,B,c,d) such that for each v ∈ v there exists f ∈ f such that f(x) = v,

    ⅱ) strongly controllably reachable by f if for each x ∈ S(A,B,c,d) and for each v ∈ v there exists f ∈ f such that f(x) = v,

    ⅲ) weakly controllably reachable by f if for each v ∈ v there exist x ∈ S(A,B,c,d) and f ∈ f such that f(x) = v.

    Theorem 5.4. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) controllably reachable by f if and only if there exists x ∈ S(A,B,c,d) such that

    f_(x) ≤ v_  and  f̄(x) ≥ v̄,   (5.6)

    ⅱ) strongly controllably reachable by f if and only if

    M(f_) ≤ v_  and  m(f̄) ≥ v̄,   (5.7)

    ⅲ) weakly controllably reachable by f if and only if

    m(f_) ≤ v_  and  M(f̄) ≥ v̄.   (5.8)

    Proof. Let x ∈ S(A,B,c,d) be given. Then for each v ∈ v there exists f ∈ f such that f(x) = v if and only if [v_, v̄] ⊆ [f_(x), f̄(x)], which is equivalent to system (5.6).

    ⅰ) An interval value v is controllably reachable by f if and only if (5.6) has a solution in S(A,B,c,d), which can be checked by Theorem 4.4.

    ⅱ) Strongly controllable reachability means that each x ∈ S(A,B,c,d) satisfies system (5.6). Validity of system (5.6) for each x ∈ S(A,B,c,d) is equivalent to (5.7).

    ⅲ) Weakly controllable reachability is equivalent to v ⊆ Rv(f), i.e., [v_, v̄] ⊆ [m(f_), M(f̄)]. Hence, weakly controllable reachability is equivalent to (5.8).

    Corollary 5.3. If an interval value v is strongly controllably reachable by f, then it is controllably reachable by f, and if it is controllably reachable by f, then it is weakly controllably reachable by f.

    Definition 5.4. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) universally reachable by f if there exists v ∈ v such that f(x) = v for each x ∈ S(A,B,c,d) and for each f ∈ f,

    ⅱ) weakly universally reachable by f if there exist v ∈ v and x ∈ S(A,B,c,d) such that f(x) = v for each f ∈ f.

    Theorem 5.5. Let A, B ∈ I(m,n), c, d ∈ I(m), f ⊆ I(n) and v ⊆ I be given. An interval value v is

    ⅰ) universally reachable by f if and only if m(f_) = M(f̄) ∈ v,

    ⅱ) weakly universally reachable by f if and only if there exists x ∈ S(A,B,c,d) such that f_(x) = f̄(x) ∈ v.

    Proof. ⅰ) Universal reachability is equivalent to Rv(f) = {v} for some v ∈ v. According to Theorem 4.3, the assertion follows.

    ⅱ) Let x ∈ S(A,B,c,d) be such that f_(x) = f̄(x) ∈ v. Then there exists v ∈ v such that for each f ∈ f we have v = f_(x) ≤ f(x) ≤ f̄(x) = v, which implies f(x) = v ∈ v.

    The converse implication trivially follows.

    Theorem 5.5 ⅱ) does not give an algorithm for checking whether there exists x ∈ S(A,B,c,d) such that f_(x) = f̄(x) ∈ v. To decide on the existence of such a solution, we solve the equation

    (A|c)_{f_|O} ⊗ x = (B|d)_{f̄|O} ⊗ x.   (5.9)

    Denote by x*(A,B,c,d,f_,f̄) the greatest solution of (3.3) such that f_(x) = f̄(x), if such a solution exists. We continue as follows:

    ● If x*_{n+1}((A|c)_{f_|O}, (B|d)_{f̄|O}) ≠ I or f_ ⊗ x*(A,B,c,d,f_,f̄) = f̄ ⊗ x*(A,B,c,d,f_,f̄) < v_, then v is not weakly universally reachable by f.

    ● If x*_{n+1}((A|c)_{f_|O}, (B|d)_{f̄|O}) = I and f_ ⊗ x*(A,B,c,d,f_,f̄) = f̄ ⊗ x*(A,B,c,d,f_,f̄) ∈ v, then v is weakly universally reachable by f.

    ● If x*_{n+1}((A|c)_{f_|O}, (B|d)_{f̄|O}) = I and f_ ⊗ x*(A,B,c,d,f_,f̄) = f̄ ⊗ x*(A,B,c,d,f_,f̄) > v̄, then we add to the matrices (A|c)_{f_|O} and (B|d)_{f̄|O} the rows (O|v̄) and (f̄|O), respectively. Then, v is weakly universally reachable by f if and only if the greatest solution of the obtained system has its (n+1)-st coordinate equal to I.

    Corollary 5.4. If an interval value v is universally reachable by f, then it is weakly universally reachable by f.

    Besides the relations in Corollaries 5.1–5.4, the following implications hold:

    ● If an interval value v is strongly controllably reachable by f, then it is strongly possibly reachable by f and strongly tolerably reachable by f.

    ● If an interval value v is universally reachable by f, then it is strongly possibly reachable by f and strongly tolerably reachable by f.

    ● If an interval value v is weakly universally reachable by f, then it is tolerably reachable by f.

    Example 5.1. Decide about all defined types of reachability of the interval value v = [3,6] by f = ([2,4], [1,5], [2,3], [4,6]) subject to A ⊗ x ⊕ c = B ⊗ x ⊕ d, where A, B, c, d are the matrices and vectors from Example 4.1.

    Since the function f̄ is the function f from Example 4.1, we have m(f̄) = 4 and M(f̄) = 6. We compute M(f_) = f_ ⊗ x*(A,B,c,d) = 4 and find m(f_) in the same way as in Example 4.1. We obtain m(f_) = 2. We can now discuss the types of reachability.

    We start with possible reachability. Since M(f_) = 4 ≤ 6 = v̄ and m(f̄) = 4 ≥ 3 = v_, in accordance with Theorem 5.2 ⅱ), v is possibly reachable by f. Moreover, M(f_) = 4 = m(f̄), which implies that v is strongly possibly reachable by f. By the definition, we obtain that v is weakly possibly reachable by f, too.

    For strongly tolerable reachability, we check whether the inequalities (5.4) are satisfied. Since m(f_) = 2 < 3 = v_, v is not strongly tolerably reachable by f. We continue with tolerable reachability. For x = x*(A,B,c,d) we obtain f̄(x) = M(f̄) = 6 = v̄ and f_(x) = M(f_) = 4 ≥ v_. Hence, x*(A,B,c,d) satisfies the system of inequalities (5.3), which means that v is tolerably reachable by f. Then it is weakly tolerably reachable by f, too.

    We continue with strongly controllable reachability. Since M(f_) = 4 > 3 = v_, the system of inequalities (5.7) is not satisfied. Hence, v is not strongly controllably reachable by f. For controllable reachability, we have to solve the system of inequalities

    f_(x) ≤ 3,   f̄(x) ≥ 6

    using Theorem 4.4. Since M(f_) > v_, we continue by solving the equation f_(x) = 3. This equation is solvable with the greatest solution x*(A,B,c,d,f_,3) = (5,5,5,3), but f̄(5,5,5,3) = 5 < 6, so the above system is not solvable. Hence, v is not controllably reachable by f. The system of inequalities (5.8) is satisfied because m(f_) = 2 ≤ 3 = v_ and M(f̄) = 6 = v̄, so v is weakly controllably reachable by f.

    Finally, we check the conditions for universal and weakly universal reachability. Since m(f_) ≠ M(f̄), v is not universally reachable by f. For weakly universal reachability, we have to solve the matrix equation

    ( 3 2 5 7 6 )            ( 3 6 5 2 6 )
    ( 4 6 1 2 5 )            ( 4 3 5 2 4 )
    ( 5 3 2 1 4 )  ⊗ x  =    ( 7 2 8 3 3 )  ⊗ x.
    ( 3 5 4 2 3 )            ( 3 8 1 3 2 )
    ( 2 1 2 4 0 )            ( 4 5 3 6 0 )

    We obtain x*(A,B,c,d,f_,f̄) = (5,4,5,4). Since f_(5,4,5,4) = f̄(5,4,5,4) = 4 ∈ v, v is weakly universally reachable by f.

    We will summarize the obtained results: For given A,B,c,d,f and v, an interval value v is possibly, weakly possibly, strongly possibly, tolerably, weakly tolerably, weakly controllably and weakly universally reachable by f.
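    The quantifier-free conditions of Theorems 5.2–5.5 make this classification easy to recheck numerically. The snippet below is an illustration using the bounds m(f_) = 2, M(f_) = 4, m(f̄) = 4, M(f̄) = 6 computed above; the types that need the solvability test of Theorem 4.4 (weakly possible, tolerable, controllable and weakly universal reachability) are not covered by it.

        # Recheck the closed-form reachability conditions for Example 5.1.
        m_lo, M_lo = 2, 4          # m(f_), M(f_)
        m_hi, M_hi = 4, 6          # m(f-bar), M(f-bar)
        v_lo, v_hi = 3, 6          # v = [3, 6]

        possibly           = M_lo <= v_hi and m_hi >= v_lo           # (5.2)
        strongly_possibly  = possibly and M_lo <= m_hi               # Theorem 5.2 iii)
        strongly_tolerably = M_hi <= v_hi and m_lo >= v_lo           # (5.4)
        weakly_tolerably   = m_hi <= v_hi and M_lo >= v_lo           # (5.5)
        strongly_controll  = M_lo <= v_lo and m_hi >= v_hi           # (5.7)
        weakly_controll    = m_lo <= v_lo and M_hi >= v_hi           # (5.8)
        universally        = m_lo == M_hi and v_lo <= m_lo <= v_hi   # Theorem 5.5 i)

        print(possibly, strongly_possibly, weakly_tolerably, weakly_controll)  # True True True True
        print(strongly_tolerably, strongly_controll, universally)              # False False False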

    In this paper, we have extended the solvability of two-sided systems of equations to a more general type. We dealt with the task of optimization in max-min algebra. In addition to minimizing or maximizing the objective function, we have introduced the notion of reachability of an interval value by an interval function. We have defined eleven types of reachability for which we have derived equivalent conditions. Other types of reachability can be a challenge for further research in this area.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare that they have no conflict of interest.



    [1] A. Aminu, P. Butkovič, Introduction to max-linear programming, IMA J. Manag. Math., 20 (2009), 233–249. https://doi.org/10.1093/imaman/dpn029 doi: 10.1093/imaman/dpn029
    [2] P. Butkovič, M. MacCaig, On the integer max-linear programming problem, Discrete Appl. Math., 162 (2014), 128–141. https://doi.org/10.1016/j.dam.2013.08.007 doi: 10.1016/j.dam.2013.08.007
    [3] R. Cuninghame-Green, Minimax algebra, Berlin: Springer, 1979. https://doi.org/10.1007/978-3-642-48708-8
    [4] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, K. Zimmermann, Linear optimization problems with inexact data, New York: Springer, 2006. https://doi.org/10.1007/0-387-32698-7
    [5] M. Gavalec, J. Plavka, Monotone interval eigenproblem in max-min algebra, Kybernetika, 46 (2010), 387–396.
    [6] M. Gavalec, K. Zimmermann, Solving systems of two-sided (max, min)-linear equations, Kybernetika, 46 (2010), 405–414.
    [7] M. Hladík, On approximation of the best case optimal value in interval linear programming, Optim. Lett., 8 (2014), 1985–1997. https://doi.org/10.1007/s11590-013-0715-5 doi: 10.1007/s11590-013-0715-5
    [8] M. Hladík, How to determine basis stability in interval linear programming, Optim. Lett., 8 (2014), 375–389. https://doi.org/10.1007/s11590-012-0589-y doi: 10.1007/s11590-012-0589-y
    [9] J. Loetamonphong, S. C. Fang, R. E. Young, Multi-objective optimization problem with fuzzy relation equations constraints, Fuzzy Set. Syst., 127 (2002), 141–164. https://doi.org/10.1016/S0165-0114(01)00052-5 doi: 10.1016/S0165-0114(01)00052-5
    [10] H. Myšková, Interval systems of max-separable linear equations, Linear Algebra Appl., 403 (2005), 263–272. https://doi.org/10.1016/j.laa.2005.02.011 doi: 10.1016/j.laa.2005.02.011
    [11] H. Myšková, Control solvability of interval systems of max-separable linear equations, Linear Algebra Appl., 416 (2006), 215–223. https://doi.org/10.1016/j.laa.2005.11.008 doi: 10.1016/j.laa.2005.11.008
    [12] H. Myšková, An iterative algorithm for testing solvability of max-min interval systems, Kybernetika, 48 (2012), 879–889.
    [13] H. Myšková, On an algorithm for testing T4 solvability of max-min interval systems, Kybernetika, 48 (2012), 924–938.
    [14] H. Myšková, Interval max-plus systems of linear equations, Linear Algebra Appl., 437 (2012), 1992–2000. https://doi.org/10.1016/j.laa.2012.04.051 doi: 10.1016/j.laa.2012.04.051
    [15] J. Plavka, On eigenproblem for circulant matrices in max-algebra, Optimization, 50 (2001), 477–483. https://doi.org/10.1080/02331930108844576 doi: 10.1080/02331930108844576
    [16] E. Sanchez, Medical diagnosis and composite relations, In: Advances in fuzzy set theory and applications, 1979, 437–444.
    [17] Y. Tsukamoto, T. Terano, Failure diagnosis by using fuzzy logic, In: 1977 IEEE conference on decision and control including the 16th symposium on adaptive processes and a special symposium on fuzzy set theory and applications, New Orleans, 1977, 1390–1395. https://doi.org/10.1109/CDC.1977.271521
    [18] N. N. Vorobjov, Extremal algebra of positive matrices, Elektronische Datenverarbeitung und Kybernetik, 3 (1967), 39–71.
    [19] Y. K. Wu, S. M. Guu, Minimizing a linear function under a fuzzy max-min relational equation constraint, Fuzzy Set. Syst., 150 (2005), 147–162. https://doi.org/10.1016/j.fss.2004.09.010 doi: 10.1016/j.fss.2004.09.010
    [20] L. A. Zadeh, Toward a theory of max-min systems, In: Aspects of network and systems theory, 1971.
    [21] K. Zimmermann, Extremální algebra (in Czech), 1976.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)