
The behavior of discrete-event systems, in which the individual components move from event to event rather than varying continuously through time, is often described by systems of linear equations in max-min algebra, in which classical addition and multiplication are replaced by ⊕ and ⊗, representing maximum and minimum, respectively. Max-min equations have found a broad area of applications in causal models, which emphasize relationships between input and output variables, and many practical situations can be described using max-min systems of linear equations. We shall deal with a two-sided max-min system of linear equations with an unknown column vector x of the form A⊗x⊕c=B⊗x⊕d, where A, B are given square matrices, c, d are column vectors, and the operations ⊕ and ⊗ are extended to matrices and vectors in the same way as in classical algebra. We give an equivalent condition for its solvability. For a given max-min objective function f, we consider the optimization problem of the type f⊤⊗x→max (or min) subject to A⊗x⊕c=B⊗x⊕d. We solve the equation f(x)=v on the set of solutions of the equation A⊗x⊕c=B⊗x⊕d and extend the problem to the case of an interval function f and an interval value v. We define several types of reachability of the interval value v by the interval function f and provide equivalent conditions for them.
Citation: Helena Myšková, Ján Plavka. Optimizing the max-min function with a constraint on a two-sided linear system. AIMS Mathematics, 2024, 9(4): 7791–7809. doi: 10.3934/math.2024378
Problems on algebraic structures, in which the pairs of operations (max,+) or (max,min) replace the addition and multiplication of classical algebra, have appeared in the literature since approximately the 1960s, see e.g., [3,18]. A systematic theory of such algebraic structures was probably published for the first time in [3].
The behavior of discrete-event systems, in which the individual components move from event to event rather than varying continuously through time, is often described using max-plus or max-min algebra. Max-min algebra is a convenient algebraic setting for some types of optimization problems, see [4]. It can be used in a range of practical problems related to scheduling and optimization, fuzzy discrete dynamical systems, graph theory, knowledge engineering, cluster analysis, fuzzy systems, the diagnosis of technical devices [17,20] and medical diagnosis [16], and it gives rise to many new problems. Discrete dynamical systems and the related algebraic structures were studied using max-min matrix operations in [3,5,15]. In recent decades, there has been significant effort to study systems of max-min linear equations of the form A⊗x=b, where the elements of A and b represent capacities, relationships between certain objects, equipment reliability, and the like. In practice, these values are not precise numbers but are usually contained in some intervals. Interval arithmetic is an efficient way to represent such data on a computer with guaranteed bounds. Interval systems of linear equations of the form A⊗x=b have been intensively studied, see [11,12,13,14,15].
Since the operation of maximum, which replaces addition, is not a group operation but only a semigroup one, there is a substantial difference between solving systems with variables on one side of the equation and systems with variables occurring on both sides, i.e., A⊗x=B⊗x. Two-sided max-min linear systems were studied in [6], where a simple polynomial algorithm was described.
The issue of linear programming (optimization) in classical algebra is well known and intensively researched. M. Hladík has dealt with interval versions of linear optimization in classical algebra in several publications, e.g., [7,8]. In max-plus algebra, max-linear programming (optimization) was introduced by Aminu and Butkovič in [1]. A special type of max-linear programming has been studied in [2]. Optimization problems with classical linear objective functions under a max-min equation constraint have been studied in [9,19].
In this paper, we shall deal with a two-sided system of linear equations of the form A⊗x⊕c=B⊗x⊕d, where A, B are given square matrices, c, d are column vectors and the operations ⊕ and ⊗ are extended to matrices and vectors in the same way as in classical algebra. We give an equivalent condition for its solvability. For a given objective function f, we shall deal with an optimization problem of the type f⊤⊗x→max (or min) subject to A⊗x⊕c=B⊗x⊕d. We will also solve an equation of the form f(x)=v on the solution set of the equation A⊗x⊕c=B⊗x⊕d. Replacing an objective function f and a number v with an interval function f and an interval value v leads to the definition of several types of reachability of the interval value v by the interval function f.
The following example is a modified and extended version of the example listed in [6].
Example 1.1. Let us take a manufacturing company that transports components to produce a specific product from places P1,P2,P3,P4. Transportation means of different sizes transport these components from the places Pi, i=1,2,3,4, to place T, where the assembly line is located. The goods are unloaded in T, assembled, and the transportation means with completed products have to return to Pi. We assume that the connection between Pi and T is only possible via one of the places Qj, j=1,2,3, the roads between Pi and Qj are one-way roads, and the capacity of the road in the direction from Pi to Qj is equal to aij. The transport from T to Pi is carried out via other one-way roads between places Qj and Pi with capacities of the road in the direction from Qj to Pi equal to bij.
In Figure 1, there is an arrow (PiQj), if there exists a connection from Pi to Qj, and there is an arrow (QjPi) if there exists a connection from Qj to Pi, where i=1,2,3,4, j=1,2,3. We have to join places Qj with T by a two-way road with the same capacity xj in both directions. The maximum reachable capacity of the connection from Pi to T is therefore equal to maxjmin{aij,xj}. Since the roads between T and Qj are two-way roads, the maximum reachable capacity of the connection from T to Pi is equal to maxjmin{bij,xj} for i=1,2,3,4.
We assume that the transportation means can only pass through some roads with a capacity which is not smaller than the capacity of the transportation mean. So that each of the transportation means may return to Pi, it is natural to require for each i=1,2,3,4 that the maximal reachable capacity of the connection from Pi to T is equal to the maximal reachable capacity of the connection from T to Pi on the way back. We obtain
max{min{a11,x1},min{a12,x2}}=max{min{b11,x1},min{b12,x2}},
max{min{a21,x1},min{a22,x2}}=max{min{b21,x1},min{b22,x2}},
max{min{a32,x2},min{a33,x3}}=max{min{b32,x2},min{b33,x3}},
max{min{a42,x2},min{a43,x3}}=max{min{b42,x2},min{b43,x3}}. |
In general, suppose that there are m places P1,P2,…,Pm and n transfer points Q1,Q2,…,Qn. If the road from Pi to Qj (from Qj to Pi) does not exist, we set aij=0 (bij=0).
Our task is to choose the appropriate capacities xj for any j∈N,N={1,2,…,n} such that
maxj∈Nmin{aij,xj}=maxj∈Nmin{bij,xj} |
for all i∈M, M={1,2,…,m}.
Assume that the amount of components in T delivered from Pi has to be at least ci and the missing goods are replenished from the stock. Similarly, the amount of completed products in Pi delivered from T has to be at least di. Therefore, we are looking for the solution of the equation
max{maxj∈Nmin{aij,xj},ci}=max{maxj∈Nmin{bij,xj},di} | (1.1) |
for each i∈M.
Let us provide more information about the organization of the paper and the results achieved. The next section is occupied by definitions and notations. Section 3 is devoted to the solvability of two-sided linear equations. The already known results for the solvability of the equation A⊗x=B⊗x are extended by other properties. Besides that, the problem of solvability of the system A⊗x⊕c=B⊗x⊕d is introduced. Theorem 3.1 gives an equivalent condition for its solvability. Section 4 deals with the problem of linear optimization. We give the range of values for the objective function f on the set satisfying A⊗x⊕c=B⊗x⊕d. The main result is obtained in Theorem 4.3. In the last section we define the concept of reachability of the interval value by the interval objective function. We define eleven types of reachability. Theorems 5.2–5.5 provide equivalent conditions for them.
Max-min algebra is the triple (I,⊕,⊗), where I=[O,I] is a linearly ordered dense set with the least element O and the greatest element I, and ⊕,⊗ are binary operations defined as follows:
a⊕b=max{a,b} and a⊗b=min{a,b}. |
The set of all m×n matrices over I is denoted by I(m,n), and the set of all column n-vectors over I by I(n). Specifically, O(m,n) (I(m,n)) will be used for the matrix with all entries equal to O (I). Operations ⊕ and ⊗ are extended to matrices and vectors in the same way as in the classical algebra. We consider the ordering ≤ on the sets I(m,n) and I(n) defined as follows:
● for A,C∈I(m,n): A≤C if aij≤cij for each i∈M and for each j∈N,
● for x,y∈I(n): x≤y if xj≤yj for each j∈N.
We will use the monotonicity of ⊗, which means that for each A,B∈I(m,n) and for each x, y∈I(n), the implication
if A≤B and x≤y then A⊗x≤B⊗y |
holds. Let A∈I(m,n) and b∈I(m). We can write the system of max-min linear equations in the matrix form
A⊗x=b. | (2.1) |
It is well known (see [3,21]) that system (2.1) is solvable if and only if the vector x∗(A,b), defined by
x∗j(A,b)=mini∈M{bi:aij>bi} | (2.2) |
for any j∈N, where min∅=I, is its solution. The vector x∗(A,b) is called a principal solution of system (2.1).
Theorem 2.1. [3,21] Let A∈I(m,n) and b∈I(m) be given.
(ⅰ) If A⊗x=b for x∈I(n), then x≤x∗(A,b).
(ⅱ) A⊗x∗(A,b)≤b.
(ⅲ) The system A⊗x=b is solvable if and only if x∗(A,b) is its solution.
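A minimal computational sketch of the principal solution and of the solvability test in Theorem 2.1 ⅲ) is given below, written in Python over I=[0,10] (the value used in the examples of this paper); the function names are only illustrative.

```python
# A minimal sketch of formula (2.2) and Theorem 2.1 iii) over the max-min algebra
# on I = [0, 10]; I_TOP plays the role of the greatest element I (an assumption
# chosen to match the examples in this paper).
I_TOP = 10.0

def maxmin_matvec(A, x):
    """[A (x) x]_i = max_j min(a_ij, x_j)."""
    return [max(min(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]

def principal_solution(A, b):
    """x*_j(A, b) = min{ b_i : a_ij > b_i }, with the minimum of the empty set equal to I."""
    n = len(A[0])
    return [min([b_i for row, b_i in zip(A, b) if row[j] > b_i], default=I_TOP)
            for j in range(n)]

def is_solvable(A, b):
    """Theorem 2.1 iii): A (x) x = b is solvable iff the principal solution solves it."""
    return maxmin_matvec(A, principal_solution(A, b)) == list(b)
```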
The properties of the principal solution are expressed in the following assertions.
Lemma 2.1. [3] Let A∈I(m,n) and b,d∈I(m) be given and let b≤d. Then, x∗(A,b)≤x∗(A,d).
In this section we will deal with the solution of systems with variables occurring on both sides of the equation. The already known results for the solvability of the equation A⊗x=B⊗x are extended by other properties. Besides that, the problem of the solvability of the system A⊗x⊕c=B⊗x⊕d is introduced.
Let A∈I(m,n), B∈I(m,n). Let us consider the system of the form
A⊗x=B⊗x. | (3.1) |
Let us note that (3.1) is always solvable, since for α≤min{aij,bij,i∈M,j∈N}, the vector x=(α,α,…,α)∈I(n) is its solution. In [6], the algorithm for finding the greatest solution of (3.1) such that x≤¯x, where ¯x∈I(n), was provided. We introduce this algorithm with slightly modified notations as it will be used in the main part of the paper. Some notations are necessary to do this. Set
S(A,B)={x∈I(n);A⊗x=B⊗x},S(A,B,¯x)={x∈S(A,B);x≤¯x}. |
Further, define
ai(x)=[A⊗x]i,bi(x)=[B⊗x]i. |
Suppose that ¯x∉S(A,B). It should be noted that if we exchange the ith row of matrix A and the ith row of matrix B, we get a new system that has the same set of solutions as the original system. It follows that we may assume w.l.o.g. that ai(¯x)≤bi(¯x) for each i∈M. Let us set
M<(¯x)={i∈M;ai(¯x)<bi(¯x)},M=(¯x)={i∈M;ai(¯x)=bi(¯x)}, |
and let us introduce the following notations for any given upper bound ¯x:
α(¯x)=min{ai(¯x);i∈M<(¯x)}, |
M<(α(¯x))={i∈M<(¯x);ai(¯x)=α(¯x)}, |
M=(α(¯x))={i∈M=(¯x);ai(¯x)≤α(¯x)}, |
N(α(¯x))={j∈N;(∃i∈M<(α(¯x)))[bij⊗¯xj>α(¯x)]}. |
Algorithm: Greatest Solution
Input: matrices A, B∈I(m,n), vector ¯x∈I(n)
Output: x∗(A,B,¯x) – the greatest element of S(A,B,¯x)
begin
1 If ¯x∈S(A,B,¯x) then x∗(A,B,¯x):=¯x; STOP;
2 Swap the rows of the matrices A,B so that ai(¯x)≤bi(¯x) holds for all i∈M;
3 Compute α(¯x), M<(α(¯x)), M=(α(¯x)), N(α(¯x));
4 Set ˜xj:=α(¯x) if j∈N(α(¯x)), ˜xj:=¯xj otherwise;
5 If ˜x∈S(A,B,¯x) then x∗(A,B,¯x):=˜x; STOP;
6 Set ¯x:=˜x; go to 2.
end
If we set ¯x=I∈I(n), the Greatest Solution algorithm finds the greatest element of S(A,B), denoted by x∗(A,B). On the other hand, there are several minimal solutions of (3.1), but it is not known whether there is the smallest element of S(A,B).
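The following Python sketch is one possible reading of the pseudocode above; it reuses maxmin_matvec and I_TOP from the earlier sketch and is meant only for experimentation, not as the authors' implementation.

```python
# A sketch of the Greatest Solution algorithm: rows are swapped so that
# a_i(x) <= b_i(x), and the coordinates in N(alpha(x)) are lowered to alpha(x)
# until A (x) x = B (x) x holds.
def greatest_solution(A, B, x_bar=None):
    """Greatest element of S(A, B, x_bar) = { x <= x_bar : A (x) x = B (x) x }."""
    A, B = [list(row) for row in A], [list(row) for row in B]
    m, n = len(A), len(A[0])
    x = list(x_bar) if x_bar is not None else [I_TOP] * n
    while True:
        a, b = maxmin_matvec(A, x), maxmin_matvec(B, x)
        if a == b:                                   # steps 1 and 5: x solves the system
            return x
        for i in range(m):                           # step 2: enforce a_i(x) <= b_i(x)
            if a[i] > b[i]:
                A[i], B[i] = B[i], A[i]
                a[i], b[i] = b[i], a[i]
        M_lt = [i for i in range(m) if a[i] < b[i]]
        alpha = min(a[i] for i in M_lt)              # step 3: alpha(x)
        M_lt_alpha = [i for i in M_lt if a[i] == alpha]
        N_alpha = {j for j in range(n)
                   if any(min(B[i][j], x[j]) > alpha for i in M_lt_alpha)}
        x = [alpha if j in N_alpha else x[j] for j in range(n)]   # step 4
```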
Example 3.1. Let I=[0,10]. Find the greatest solution of the two-sided equation A⊗x=B⊗x, where
A =
(3 2 5 7 6)
(4 6 1 2 5)
(5 3 2 1 4)
(3 5 4 2 3),
B =
(3 6 5 2 6)
(4 3 5 2 4)
(7 2 8 3 3)
(3 8 1 3 2). |
Solution. We have ¯x=(10,10,10,10,10)⊤.
1 Since A⊗¯x=(7,6,5,5)⊤ and B⊗¯x=(6,5,8,8)⊤, ¯x∉S(A,B,¯x);
2 We swap the 1st and 2nd rows of A and B and obtain
(3 6 5 2 6)
(4 3 5 2 4)
(5 3 2 1 4)
(3 5 4 2 3) ⊗x =
(3 2 5 7 6)
(4 6 1 2 5)
(7 2 8 3 3)
(3 8 1 3 2) ⊗x. |
3 Since A⊗¯x=(6,5,5,5)⊤ and B⊗¯x=(7,6,8,8)⊤, we obtain α(¯x)=5, M<(5)={2,3,4}, M=(5)=∅, N(5)={1,2,3};
4 Set ˜x:=(5,5,5,10,10)⊤;
5 Since A⊗˜x=(6,5,5,5)⊤ and B⊗˜x=(7,5,5,5)⊤, ˜x∉S(A,B);
6 Set ¯x:=(5,5,5,10,10)⊤; go to 2.
------------------------------------------------------------------------------------------
3 α(¯x)=6, M<(6)={1}, M=(6)=∅, N(6)={4};
4 Set ˜x:=(5,5,5,6,10)⊤;
5 Since A⊗˜x=(6,5,5,5)⊤=B⊗˜x, we have ˜x∈S(A,B); x∗(A,B,¯x):=(5,5,5,6,10)⊤; STOP.
Answer: The greatest solution of the given equation is x∗(A,B)=(5,5,5,6,10)⊤.
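The same result can be checked numerically with the sketch above (an illustration only; the matrices are those of Example 3.1).

```python
A = [[3, 2, 5, 7, 6],
     [4, 6, 1, 2, 5],
     [5, 3, 2, 1, 4],
     [3, 5, 4, 2, 3]]
B = [[3, 6, 5, 2, 6],
     [4, 3, 5, 2, 4],
     [7, 2, 8, 3, 3],
     [3, 8, 1, 3, 2]]
print(greatest_solution(A, B))   # expected: the greatest solution (5, 5, 5, 6, 10)
```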
Lemma 3.1. Let x∗(A,B) be the greatest solution of (3.1). Then
x∗j(A,B)=min{x∗j(A,e∗),x∗j(B,e∗)}, |
for each j∈N, where e∗=A⊗x∗(A,B)=B⊗x∗(A,B).
Proof. It is easy to see that, for each e∈I(m), the equality A⊗x=B⊗x=e implies e≤e∗. Since A⊗x∗(A,B)=e∗ in accordance with Theorem 2.1, we obtain x∗(A,B)≤x∗(A,e∗). Similarly, B⊗x∗(A,B)=e∗ implies x∗(A,B)≤x∗(B,e∗). Hence, x∗(A,B)≤min{x∗(A,e∗),x∗(B,e∗)}. Since x∗(A,B) is the greatest solution of (3.1), the equality holds.
Denote H(A,B)={aij,bij,i∈M,j∈N}. For a given x∈I(n), denote by ˜x the vector given by
˜xj={max{t∈H(A,B);xj≥t}ifxj≥minH(A,B);Ootherwise. | (3.2) |
Lemma 3.2. If x∈S(A,B) then ˜x∈S(A,B).
Proof. Suppose that x∈S(A,B). Then, for each i∈M, the equality [A⊗x]i=[B⊗x]i is satisfied. Let i∈M be arbitrary, but fixed. We shall prove that [A⊗˜x]i=[B⊗˜x]i. We shall distinguish three possibilities (not necessarily disjoint) for which we refer to Cases 1–3.
Case 1. If [A⊗x]i=air and [B⊗x]i=bis for some r,s∈N, then air≤˜xr, bis≤˜xs and ˜xr,˜xs≠O, which implies air⊗˜xr=air and bis⊗˜xs=bis. Together with aij⊗˜xj≤aij⊗xj and bij⊗˜xj≤bij⊗xj, for each j∈N we obtain [A⊗˜x]i=air and [B⊗˜x]i=bis. Hence, [A⊗˜x]i=[B⊗˜x]i.
Case 2. If [A⊗x]i=xr and [B⊗x]i=xs for some r,s∈N, then air⊗˜xr=˜xr≥⨁j≠raij⊗˜xj and bis⊗˜xs=˜xs≥⨁j≠sbij⊗˜xj. Then, [A⊗˜x]i=[B⊗˜x]i=˜xr=˜xs (the last equality follows from xr=xs).
Case 3. ⅰ) If [A⊗x]i=xr and [B⊗x]i=bis for some r,s∈N, then xr=bis, which implies ˜xr=xr. Further, ˜xs≥bis. Since xr=˜xr≥⨁j≠raij⊗˜xj and bis⊗˜xs=bis≥⨁j≠sbij⊗˜xj, we obtain [A⊗˜x]i=[B⊗˜x]i=xr=bis.
ⅱ) If [A⊗x]i=air and [B⊗x]i=xs for some r,s∈N, the proof is similar to the proof of part ⅰ).
Let A,B∈I(m,n) and c,d∈I(m) be given. Let us consider the system
A⊗x⊕c=B⊗x⊕d. | (3.3) |
Returning to Example 1.1, we can see that using max-min algebra, (1.1) can be written in the form of (3.3).
We shall discuss the solvability of (3.3). If c=d, then system (3.3) is always solvable, since every solution of the system A⊗x=B⊗x is also a solution of system (3.3). Suppose that c≠d. In this case, system (3.3) may or may not have a solution. If we exchange ci and di and at the same time exchange the ith row of matrix A and the ith row of matrix B (that is, the matrices A, B will change) so that ci≥di, the solution set of the original equation is the same as the solution set of the equation after changing the rows. So we can assume without loss of generality that c≥d.
Denote by M> the set M>={i∈M;ci>di}. The following lemma provides the necessary condition for the solvability of (3.3).
Lemma 3.3. If system (3.3) with c≥d is solvable then for each i∈M> there exists j∈N such that bij≥ci.
Proof. Suppose that there exists i∈M such that ci>di and bij<ci for each j∈N. Then, [B⊗x]i<ci for each x∈I(n). Together with di<ci, we obtain [B⊗x⊕d]i<ci. Since [A⊗x⊕c]i≥ci, we obtain [A⊗x⊕c]i≠[B⊗x⊕d]i for each x∈I(n). Hence, (3.3) is not solvable.
Denote by S(A,B,c,d) the set of all solutions of (3.3). If S(A,B,c,d)≠∅, we use the notation x∗(A,B,c,d) for the greatest solution of (3.3), which can always be calculated using the Greatest Solution algorithm.
Denote by (A|c) and (B|d) the matrices arising from A and B, respectively, by adding columns c and d, respectively. System (3.3) can be converted to the system
(A|c)⊗x=(B|d)⊗x | (3.4) |
with unknown x∈I(n+1).
Theorem 3.1. System (3.3) is solvable if and only if x∗n+1((A|c),(B|d))=I.
Proof. If (3.3) is solvable then there exists x∈I(n+1) satisfying (3.4) such that xn+1≥maxi∈M{ci, di}. Then the vector ˆx given by ˆxj=xj for j∈N and ˆxn+1=I is a solution of (3.4), too. It follows that x∗n+1((A|c),(B|d))=I.
Conversely, if x∗n+1((A|c),(B|d))=I, then x∗(A,B,c,d)∈I(n) such that x∗j(A,B,c,d)= x∗j((A|c),(B|d)) for j∈N is a solution of (3.3).
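Theorem 3.1 translates into a direct computational test. The following sketch (assuming greatest_solution and I_TOP from the Section 3 sketch; the function name is ours) augments the matrices and inspects the last coordinate of the greatest solution.

```python
def solve_two_sided_affine(A, B, c, d):
    """Greatest solution of A(x)x (+) c = B(x)x (+) d, or None if the system is unsolvable."""
    Ac = [row + [ci] for row, ci in zip(A, c)]   # (A|c)
    Bd = [row + [di] for row, di in zip(B, d)]   # (B|d)
    x = greatest_solution(Ac, Bd)                # greatest solution of (3.4)
    if x[-1] != I_TOP:                           # Theorem 3.1: solvable iff x*_{n+1} = I
        return None
    return x[:-1]                                # the greatest solution x*(A, B, c, d)
```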
Let us start with the extension of Example 1.1.
Example 1.1-continuation. The capacities aij, bij, xj do not necessarily have to be considered as absolute values of the number of components or products transported. We assume that these capacities express the ratio of the number of objects to a fixed value of the upper bound of this number. Then, all capacities are expressed by a number from the interval 0 to 1.
In practice, we may be interested not only in the capacity of a particular connection, but also in its reliability. A connection with high capacity but low reliability may be less advantageous than others with less capacity but high reliability. Let the reliability of the connection between Qj and T in both directions be expressed by the number fj. Then, the utility of the connection between Qj and T is equal to min{fj,xj} and the maximum utility of the system is f(x)=maxj∈N(fj⊗xj)=f⊤⊗x. This leads to the question how to find x∈I(n) which satisfies (3.3) and ensures sufficient utility.
In this section, we develop methods for finding x∈I(n) that minimizes (maximizes) the function f(x)=f⊤⊗x, where f=(f1,f2,…,fn)⊤∈I(n) subject to Eq (3.3).
As mentioned in the previous section, we can assume that c≥d. In the case that (3.3) does not have a solution, the task does not make sense. Suppose that S(A,B,c,d)≠∅, and let
M(f)=max{f⊤⊗x;x∈S(A,B,c,d)}, |
m(f)=min{f⊤⊗x;x∈S(A,B,c,d)}. |
We start by calculating the upper bound, that is, M(f).
Theorem 4.1. Let A,B∈I(m,n), c,d∈I(m) and f∈I(n) be given. Then,
M(f)=f⊤⊗x∗(A,B,c,d). |
Proof. Since x≤x∗(A,B,c,d) for each x∈S(A,B,c,d), in accordance with the monotonicity of ⊗ we obtain f⊤⊗x≤f⊤⊗x∗(A,B,c,d).
Now, we shall discuss the lower bound. Define
L=minj∈Nfj⊗maxr∈M>cr. | (4.1) |
Lemma 4.1. If c≥d, then f(x)≥L for every x∈S(A,B,c,d).
Proof. Let x∈S(A,B,c,d) and r∈M>. Then, [B⊗x]r≥cr, which implies that there exists k∈N such that brk⊗xk≥cr, and consequently xk≥cr. Hence,
f(x)=f⊤⊗x≥fk⊗xk≥fk⊗cr≥minj∈Nfj⊗cr. |
Since the previous inequality holds for each r∈M>, we obtain
f(x)≥maxr∈M>(cr⊗minj∈Nfj)=maxr∈M>cr⊗minj∈Nfj=L. |
The value of L given by (4.1) gives us a lower bound that may or may not be reached by some solution of (3.3). Hence, m(f)≥L.
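In code, the bound M(f) from Theorem 4.1 and the bound L from (4.1) can be computed as in the following sketch (assuming solve_two_sided_affine from the previous sketch, a nonempty solution set, and rows already arranged so that c≥d; the names are illustrative).

```python
def maxmin_dot(f, x):
    """f(x) = f^T (x) x = max_j min(f_j, x_j)."""
    return max(min(f_j, x_j) for f_j, x_j in zip(f, x))

def upper_bound_M(A, B, c, d, f):
    """Theorem 4.1: M(f) = f^T (x) x*(A, B, c, d); assumes S(A, B, c, d) is nonempty."""
    return maxmin_dot(f, solve_two_sided_affine(A, B, c, d))

def lower_bound_L(c, d, f):
    """Formula (4.1): L = min_j f_j (x) max{ c_r : r in M_> }, assuming c >= d."""
    M_gt = [r for r in range(len(c)) if c[r] > d[r]]
    if not M_gt:      # c = d: the all-O vector solves (3.3), so O is the trivial bound
        return 0.0
    return min(min(f), max(c[r] for r in M_gt))
```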
Let v∈I be given. If we ask whether there exists x∈S(A,B,c,d) such that f(x)=v, we are looking for the solution of the following non-homogeneous system:
A⊗x⊕c=B⊗x⊕d |
O(n)⊤⊗x⊕v=f⊤⊗x⊕O, |
where O(n) denotes the row vector with all elements equal to O. The above system can be converted to the system
(A|c)O|v⊗x=(B|d)f|O⊗x, | (4.2) |
where (A|c)O|v is the matrix formed from (A|c) by adding the row (O O…O v) as the last row, and (B|d)f|O is the matrix formed from (B|d) by adding the row (f1 f2…fn O) as the last row.
Lemma 4.2. Let v∈I be given. Then, there exists x∈S(A,B,c,d) such that f(x)=v if and only if
x∗n+1((A|c)O|v,(B|d)f|O)=I. |
Proof. The proof is similar to the proof of Theorem 3.1.
Denote by x∗(A,B,c,d,f,v) the greatest solution of (3.3) such that f(x)=v. It is easy to see that
x∗i(A,B,c,d,f,v)=x∗i((A|c)O|v,(B|d)f|O) |
for each i∈N.
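Lemma 4.2 again yields a mechanical test. A sketch (reusing greatest_solution and I_TOP) that builds the augmented matrices (A|c)O|v and (B|d)f|O is given below.

```python
def greatest_with_value(A, B, c, d, f, v):
    """Greatest x in S(A, B, c, d) with f(x) = v, or None if no such x exists (Lemma 4.2)."""
    n = len(A[0])
    Acv = [row + [ci] for row, ci in zip(A, c)] + [[0.0] * n + [v]]   # (A|c)_{O|v}
    Bdf = [row + [di] for row, di in zip(B, d)] + [list(f) + [0.0]]   # (B|d)_{f|O}
    x = greatest_solution(Acv, Bdf)
    return x[:-1] if x[-1] == I_TOP else None
```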
Denote by Rv(f) the range of values of f on S(A,B,c,d), i. e.,
Rv(f)={f(x); x∈S(A,B,c,d)}. |
and set H(A,B,c,d,f)=H(A,B)∪{ci,di;i∈M}∪{fj;j∈N}.
Theorem 4.2. Let v∈Rv(f) be such that v∉H(A,B,c,d,f). Then, there exists ˜v∈H(A,B,c,d,f) such that ˜v<v and ˜v∈Rv(f).
Proof. If there exists x∈S(A,B,c,d) such that f(x)=v∉H(A,B,c,d,f), then f(x)=n⨁j=1fj⊗xj=fk⊗xk=xk<fk for some k∈N. Then, for the vector ˜x∈S(A,B,c,d) defined similarly as in (3.2) by formula
˜xj={max{t∈H(A,B,c,d);xj≥t}if xj≥minH(A,B,c,d);Ootherwise, |
we obtain
v=f(x)≥f(˜x)=n⨁j=1fj⊗˜xj=fr⊗˜xr |
for some r∈N. Denote ˜v=f(˜x). Since fr,˜xr∈H(A,B,c,d,f), we obtain ˜v∈H(A,B,c,d,f) and ˜v<v.
It follows from Theorem 4.2 that, in finding the value m(f), it is sufficient to consider only the elements of the set H(A,B,c,d,f).
Algorithm: Minimal value
Input: A,B,c,d,f
Output: minimal value m(f)
begin
1 Order the elements of H(A,B,c,d,f) in such a way that h1<h2<…<hs.
2 Find k such that L=hk; i:=k;
3 v:=hi; compute x∗((A|c)O|v,(B|d)f|O) using the Greatest Solution algorithm;
4 If x∗n+1((A|c)O|v,(B|d)f|O)=I then m(f):=v, STOP;
5 i:=i+1; go to 3;
end
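A Python sketch of the Minimal value algorithm follows (reusing greatest_with_value and lower_bound_L from the earlier sketches; like the pseudocode, it assumes the rows have been arranged so that c≥d).

```python
def minimal_value(A, B, c, d, f):
    """m(f): the smallest value of f on S(A, B, c, d), found by scanning H(A, B, c, d, f)."""
    H = sorted({t for row in A for t in row} | {t for row in B for t in row}
               | set(c) | set(d) | set(f))
    L = lower_bound_L(c, d, f)
    for v in H:
        if v < L:                                 # Lemma 4.1: no value below L is attained
            continue
        if greatest_with_value(A, B, c, d, f, v) is not None:
            return v                              # Theorem 4.2: it suffices to test values in H
    return None                                   # reached only if S(A, B, c, d) is empty
```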
Once the values of m(f) and M(f) are calculated, the question which arises is whether all values between m(f) and M(f) can be achieved by some x∈S(A,B,c,d). The following theorem gives an answer to this question.
Theorem 4.3. Let A,B∈I(m,n), c,d∈I(m) and f∈I(n) be given. Then,
Rv(f)=[m(f),M(f)]. |
Proof. First, let u∈Rv(f). Then there exists x∈S(A,B,c,d) such that f(x)=u. Then,
m(f)≤u=f(x)≤M(f). |
We obtain that Rv(f)⊆[m(f),M(f)]. To prove that [m(f),M(f)]⊆Rv(f), suppose that v∈[m(f),M(f)]. To simplify notation, we shall use m and M instead of m(f) and M(f), respectively. Since m∈Rv(f) and M∈Rv(f), in accordance with Lemma 4.2 we have x∗n+1((A|c)O|m,(B|d)f|O)=I and x∗n+1((A|c)O|M,(B|d)f|O)=I. We have to prove that x∗n+1((A|c)O|v,(B|d)f|O)=I.
Denote by em, eM and ev the vectors
em=(A|c)O|m⊗x∗((A|c)O|m,(B|d)f|O)=(B|d)f|O⊗x∗((A|c)O|m,(B|d)f|O), |
eM=(A|c)O|M⊗x∗((A|c)O|M,(B|d)f|O)=(B|d)f|O⊗x∗((A|c)O|M,(B|d)f|O), |
ev=(A|c)O|v⊗x∗((A|c)O|v,(B|d)f|O)=(B|d)f|O⊗x∗((A|c)O|v,(B|d)f|O). |
Suppose that x∗n+1((A|c)O|v,(B|d)f|O)=u≠I, which is equivalent to evn+1=u<v.
Since x∗((A|c)O|v,(B|d)f|O) is a solution of (3.3), we have u≥maxi∈M{ci,di}. Let us take the vector
ˆx=(x∗1((A|c)O|v,(B|d)f|O),x∗2((A|c)O|v,(B|d)f|O),…,x∗n((A|c)O|v,(B|d)f|O),I). |
We obtain (A|c)O|v⊗ˆx=ˆev, where ˆevi=evi for i∈M and ˆevn+1=v.
We have em≤ˆev≤eM, emn+1=m and eMn+1=M. Since x∗n+1((B|d)f|O,em)=x∗n+1((B|d)f|O,eM)=I, in accordance with Lemma 2.1 we obtain x∗n+1((B|d)f|O,ˆev)=I.
Since (A|c)O|v⊗ˆx=(B|d)f|O⊗ˆx=ˆev and ˆevn+1=v, we obtain a contradiction with the assumption that x∗n+1((A|c)O|v,(B|d)f|O)≠I.
Denote by x∗(A,B,c,d,f,v) the greatest solution of (3.3) such that f(x)=v. The following theorem gives the necessary and sufficient condition for the solvability of the system of inequalities.
Theorem 4.4. Let A,B∈I(m,n), c,d∈I(m), f1,f2∈I(n) and v1,v2∈I be given. Then, the system of inequalities
f1(x)≤v1, | (4.3) |
f2(x)≥v2, | (4.4) |
has a solution x∈S(A,B,c,d) if and only if either
M(f1)≤v1 and M(f2)≥v2, | (4.5) |
or there exists x∈S(A,B,c,d) such that
f1(x)=v1 and f2(x∗(A,B,c,d,f1,v1))≥v2. | (4.6) |
Proof. If condition (4.5) is satisfied, then the equalities M(f1)=f1(x∗(A,B,c,d)), M(f2)=f2(x∗(A,B,c,d)) imply that the vector x∗(A,B,c,d) is a solution of the systems (4.3) and (4.4).
In the second case, if (4.6) holds, then x∗(A,B,c,d,f1,v1) is a solution of the systems (4.3) and (4.4).
For the converse implication suppose that systems (4.3) and (4.4) are solvable and condition (4.5) is not satisfied. We have to prove that condition (4.6) is satisfied.
If systems (4.3) and (4.4) are solvable and f2(x∗(A,B,c,d))<v2, then f2(x)<v2 for each x∈S(A,B,c,d), which contradicts the solvability of systems (4.3) and (4.4). Hence this case cannot occur.
If systems (4.3) and (4.4) are solvable and M(f1)>v1, then, in accordance with Theorem 4.3, there exists x∈S(A,B,c,d), x≤x∗(A,B,c,d,f1,v1) such that f1(x)=v1. Since for each x∈S(A,B,c,d) such that f1(x)=v≤v1 the inequality x≤x∗(A,B,c,d,f1,v)≤x∗(A,B,c,d,f1,v1) holds, the existence of x such that f2(x)≥v2 implies f2(x∗(A,B,c,d,f1,v1))≥v2.
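The test provided by Theorem 4.4 is easy to automate; a sketch (reusing upper_bound_M, maxmin_dot and greatest_with_value from the earlier sketches) follows.

```python
def exists_solution_between(A, B, c, d, f1, v1, f2, v2):
    """Is there x in S(A, B, c, d) with f1(x) <= v1 and f2(x) >= v2? (Theorem 4.4)"""
    if upper_bound_M(A, B, c, d, f1) <= v1:                 # condition (4.5)
        return upper_bound_M(A, B, c, d, f2) >= v2
    x = greatest_with_value(A, B, c, d, f1, v1)             # condition (4.6)
    return x is not None and maxmin_dot(f2, x) >= v2
```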
Example 4.1. Let I=[0,10]. Check whether the system A⊗x⊕c=B⊗x⊕d is solvable and, if so, find the range of values Rv(f), where
A =
(3 2 5 7)
(4 6 1 2)
(5 3 2 1)
(3 5 4 2),
B =
(3 6 5 2)
(4 3 5 2)
(7 2 8 3)
(3 8 1 3),
c=(6,5,4,3)⊤, d=(6,4,3,2)⊤, f=(4,5,3,6)⊤.
We have M>={2,3,4}. Since for each i∈M> there exists j∈N such that bij≥ci, the necessary condition for the solvability of A⊗x⊕c=B⊗x⊕d given by Lemma 3.3 is satisfied. We shall continue with computing the greatest solution of the system (A|c)⊗x=(B|d)⊗x, namely,
(3 2 5 7 6)
(4 6 1 2 5)
(5 3 2 1 4)
(3 5 4 2 3) ⊗x =
(3 6 5 2 6)
(4 3 5 2 4)
(7 2 8 3 3)
(3 8 1 3 2) ⊗x. |
The above system was solved in Example 3.1, so we have x∗((A|c),(B|d))=(5,5,5,6,10)⊤. Since x∗5((A|c),(B|d))=10=I, the system A⊗x⊕c=B⊗x⊕d is solvable and x∗(A,B,c,d)=(5,5,5,6)⊤.
We compute the value M(f)=f⊤⊗x∗(A,B,c,d)=6. To find the value m(f) we first compute the value L by formula (4.1). We obtain
L=mink∈Nfk⊗maxr∈M>cr=3⊗5=3, |
so f(x)≥3 for each x∈S(A,B,c,d).
For checking whether there exists x∈S(A,B,c,d) such that f(x)=v, we solve the matrix equation
(3 2 5 7 6)
(4 6 1 2 5)
(5 3 2 1 4)
(3 5 4 2 3)
(0 0 0 0 v) ⊗x =
(3 6 5 2 6)
(4 3 5 2 4)
(7 2 8 3 3)
(3 8 1 3 2)
(4 5 3 6 0) ⊗x. |
For v=3 we obtain x∗((A|c)O|v,(B|d)f|O)=(3,3,3,3,3)⊤. Since x∗5((A|c)O|v,(B|d)f|O)≠I, there is no x∈S(A,B,c,d) such that f(x)=3.
We continue with v=4. We obtain x∗((A|c)O|v,(B|d)f|O)=(5,4,5,4,10)⊤. Hence, there exists x∈S(A,B,c,d), namely x=(5,4,5,4)⊤ such that f(x)=4. Hence, m(f)=4 and x∗(A,B,c,d,f,4)=(5,4,5,4)⊤. In accordance with Theorem 4.3, we have
Rv(f)=[4,6]. |
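The computations of Example 4.1 can be reproduced with the sketches from this section (an illustration only).

```python
A = [[3, 2, 5, 7], [4, 6, 1, 2], [5, 3, 2, 1], [3, 5, 4, 2]]
B = [[3, 6, 5, 2], [4, 3, 5, 2], [7, 2, 8, 3], [3, 8, 1, 3]]
c, d, f = [6, 5, 4, 3], [6, 4, 3, 2], [4, 5, 3, 6]

print(solve_two_sided_affine(A, B, c, d))   # expected: the greatest solution (5, 5, 5, 6)
print(upper_bound_M(A, B, c, d, f))         # expected: M(f) = 6
print(minimal_value(A, B, c, d, f))         # expected: m(f) = 4
```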
In this part, we shall deal with the case that the entries of the objective function f are not exact values, but they are in intervals of possible values. Define an interval function f as follows:
f=[f_,¯f]={f; f_≤f≤¯f},f_≤¯f. |
In this way, f denotes an interval vector, as is usual in interval mathematics. Further, denote
Rv(f)={f(x);f∈f, x∈S(A,B,c,d)}. |
Theorem 5.1. Let A∈I(m,n), B∈I(m,n), c,d∈I(m) and f be given. Then,
Rv(f)=[m(f_),M(¯f)]. |
Proof. It is clear that Rv(f)=∪f∈fRv(f)=Rv(f_)∪(∪f∈f,f≠f_Rv(f)). According to Theorem 4.3 we have Rv(f_)=[m(f_),M(f_)], so Rv(f_) is a dense continuous set. Since M(f)=f(x∗(A,B,c,d)) for each f∈f, we obtain that M(f) is a continuous function defined on the set f, and hence {M(f);f∈f}=[M(f_),M(¯f)], which is a dense set. In accordance with those, for each v∈[M(f_),M(¯f)] there exists f∈f such that v=M(f). Then, Rv(f)=[m(f_),M(f_)]∪(∪f∈f, f≠f_[m(f),M(f)])=[m(f_),M(¯f)].
In the following, we will deal with the situation where we require the values of the function f∈f to be in a given range of allowable values v=[v_,¯v]. We say that the value v is reachable by f, if f(x)=v for some x∈S(A,B,c,d).
We can define several types of interval reachability. For example, we can require the existence of v∈v such that f(x)=v for some f∈f and for some x∈S(A,B,c,d). Another possibility is to require the existence of a vector x∈S(A,B,c,d) such that f(x)∈v for each f∈f. By using universal and existential quantifiers for x, f and v in different orders, we obtain many types of interval reachability.
Definition 5.1. Let A,B∈I(m,n), c,d∈I(m), f and v be given. An interval value v is
ⅰ) weakly possibly reachable by f if there exist x∈S(A,B,c,d), f∈f and v∈v such that f(x)=v,
ⅱ) possibly reachable by f if for each x∈S(A,B,c,d) there exist f∈f and v∈v such that f(x)=v,
ⅲ) strongly possibly reachable by f if there exists v∈v such that for each x∈S(A,B,c,d) there exists f∈f such that f(x)=v.
Theorem 5.2. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) weakly possibly reachable by f if and only if there exists x∈S(A,B,c,d) such that
f_(x)≤¯vand¯f(x)≥v_, | (5.1) |
ⅱ) possibly reachable by f if and only if
M(f_)≤¯vandm(¯f)≥v_. | (5.2) |
ⅲ) strongly possibly reachable by f if and only if it is possibly reachable by f and M(f_)≤m(¯f).
Proof. For a fixed x∈S(A,B,c,d), the existence of f∈f and v∈v such that f(x)=v is equivalent to the existence of f∈f such that f(x)∈v. Since {f(x);f∈f}=[f_(x),¯f(x)], we obtain [f_(x),¯f(x)]∩[v_,¯v]≠∅, which is equivalent to system (5.1).
ⅰ) The weakly possible reachability means the existence of a solution of (5.1) in the set S(A,B,c,d), which can be verified using Theorem 4.4.
ⅱ) The possible reachability means that the system (5.1) is satisfied for each x∈S(A,B,c,d). The inequality f_(x)≤¯v holds for each x∈S(A,B,c,d) if and only if M(f_)≤¯v. Similarly, validity of the inequality ¯f(x)≥v_ for each x∈S(A,B,c,d) is equivalent to m(¯f)≥v_.
ⅲ) We can write the proof as a sequence of equivalences
(∃v∈v)(∀x∈S(A,B,c,d))(∃f∈f)f(x)=v⇔ |
(∃v∈v)(∀x∈S(A,B,c,d))v∈[f_(x),¯f(x)]⇔ |
(∃v∈v)(∀x∈S(A,B,c,d))[v≥f_(x)∧v≤¯f(x)]⇔ |
(∃v∈v)[v≥M(f_)∧v≤m(¯f)]⇔[(∃v∈v)M(f_)≤v≤m(¯f)]⇔ |
[M(f_)≤m(¯f)∧[M(f_),m(¯f)]∩[v_,¯v]≠∅]⇔ |
M(f_)≤m(¯f) and v is possibly reachable by f, |
because the condition [M(f_),m(¯f)]∩[v_,¯v]≠∅ is equivalent to v being possibly reachable by f.
Corollary 5.1. If an interval value v is strongly possibly reachable by f, then it is possibly reachable by f and if it is possibly reachable by f, then it is weakly possibly reachable by f.
Definition 5.2. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) tolerably reachable by f if there exists x∈S(A,B,c,d) such that for each f∈f there exists v∈v such that f(x)=v,
ⅱ) strongly tolerably reachable by f if for each x∈S(A,B,c,d) and for each f∈f there exists v∈v such that f(x)=v,
ⅲ) weakly tolerably reachable by f if for each f∈f there exist x∈S(A,B,c,d) and v∈v such that f(x)=v.
Theorem 5.3. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) tolerably reachable by f if and only if there exists x∈S(A,B,c,d) such that
¯f(x)≤¯vandf_(x)≥v_, | (5.3) |
ⅱ) strongly tolerably reachable by f if and only if
M(¯f)≤¯vandm(f_)≥v_, | (5.4) |
ⅲ) weakly tolerably reachable by f if and only if
m(¯f)≤¯vandM(f_)≥v_. | (5.5) |
Proof. Let x∈S(A,B,c,d) be fixed. Then for each f∈f there exists v∈v such that f(x)=v if and only if [f_(x),¯f(x)]⊆[v_,¯v], which is equivalent to system (5.3).
ⅰ) An interval value v is tolerably reachable by f if and only if (5.3) has a solution in S(A,B,c,d). The necessary and sufficient condition for the solvability of (5.3) is given by Theorem 4.4.
ⅱ) The strongly tolerable reachability means that each x∈S(A,B,c,d) satisfies system (5.3). Validity of the inequality ¯f(x)≤¯v for each x∈S(A,B,c,d) is equivalent to M(¯f)≤¯v. Similarly, f_(x)≥v_ for each x∈S(A,B,c,d) if and only if m(f_)≥v_.
ⅲ) The proof of weakly tolerable reachability follows from the following equivalences
(∀f∈f)(∃x∈S(A,B,c,d))(∃v∈v)f(x)=v⇔ |
(∀f∈f)(∃x∈S(A,B,c,d))f(x)∈v⇔ |
(∀f∈f)[m(f),M(f)]∩[v_,¯v]≠∅⇔(∀f∈f)[m(f)≤¯v∧M(f)≥v_]⇔ |
m(¯f)≤¯v∧M(f_)≥v_. |
Corollary 5.2. If an interval value v is strongly tolerably reachable by f, then it is tolerably reachable by f, and if it is tolerably reachable by f, then it is weakly tolerably reachable by f.
Definition 5.3. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) controllably reachable by f if there exists x∈S(A,B,c,d) such that for each v∈v there exists f∈f such that f(x)=v,
ⅱ) strongly controllably reachable by f if for each x∈S(A,B,c,d) and for each v∈v there exists f∈f such that f(x)=v,
ⅲ) weakly controllably reachable by f if for each v∈v there exist x∈S(A,B,c,d) and f∈f such that f(x)=v.
Theorem 5.4. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) controllably reachable by f if and only if there exists x∈S(A,B,c,d) such that
f_(x)≤v_and¯f(x)≥¯v, | (5.6) |
ⅱ) strongly controllably reachable by f if and only if
M(f_)≤v_andm(¯f)≥¯v, | (5.7) |
ⅲ) weakly controllably reachable by f if and only if
m(f_)≤v_andM(¯f)≥¯v. | (5.8) |
Proof. Let x∈S(A,B,c,d) be given. Then for each v∈v there exists f∈f such that f(x)=v if and only if [v_,¯v]⊆[f_(x),¯f(x)], which is equivalent to system (5.6).
ⅰ) An interval value v is controllably reachable by f if and only if (5.6) has a solution in S(A,B,c,d), which can be checked by Theorem 4.4.
ⅱ) The strongly controllable reachability means that each x∈S(A,B,c,d) satisfies the system (5.6). Validity of system (5.6) for each x∈S(A,B,c,d) is equivalent to (5.7).
ⅲ) The weakly controllable reachability is equivalent to v⊆Rv(f), i. e., [v_,¯v]⊆[m(f_),M(¯f)]. Hence, weakly controllable reachability is equivalent to (5.8).
Corollary 5.3. If an interval value v is strongly controllably reachable by f, then it is controllably reachable by f, and if it is controllably reachable by f, then it is weakly controllably reachable by f.
Definition 5.4. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) universally reachable by f if there exists v∈v such that f(x)=v for each x∈S(A,B,c,d) and for each f∈f,
ⅱ) weakly universally reachable by f if there exist v∈v and x∈S(A,B,c,d) such that f(x)=v for each f∈f.
Theorem 5.5. Let A,B∈I(m,n), c,d∈I(m), f⊆I(n) and v⊆I be given. An interval value v is
ⅰ) universally reachable by f if and only if m(f_)=M(¯f)∈v,
ⅱ) weakly universally reachable by f if and only if there exists x∈S(A,B,c,d) such that f_(x)=¯f(x)∈v.
Proof. ⅰ) The universal reachability is equivalent to Rv(f)={v} for some v∈v. According to Theorem 4.3, the assertion follows.
ⅱ) Let x∈S(A,B,c,d) be such that f_(x)=¯f(x)∈v. Then there exists v∈v such that for each f∈f we have v=f_(x)≤f(x)≤¯f(x)=v, which implies f(x)=v∈v.
The converse implication trivially follows.
Theorem 5.5 ⅱ) does not give an algorithm for checking whether there exists x∈S(A,B,c,d) such that f_(x)=¯f(x)∈v. To decide on the existence of such a solution, we solve the equation
(A|c)f_|O⊗x=(B|d)¯f|O⊗x. | (5.9) |
Denote by x∗(A,B,c,d,f_,¯f) the greatest solution of (3.3) such that f_(x)=¯f(x), if such a solution exists. We shall continue as follows:
● If x∗n+1((A|c)f_|O,(B|d)¯f|O)≠I or f_⊤⊗x∗(A,B,c,d,f_,¯f)=¯f⊤⊗x∗(A,B,c,d,f_,¯f)<v_, then v is not weakly universally reachable by f.
● If x∗n+1((A|c)f_|O,(B|d)¯f|O)=I and f_⊤⊗x∗(A,B,c,d,f_,¯f)=¯f⊤⊗x∗(A,B,c,d,f_,¯f)∈v then v is weakly universally reachable by f.
● If x∗n+1((A|c)f_|O,(B|d)¯f|O)=I and f_⊤⊗x∗(A,B,c,d,f_,¯f)= ¯f⊤⊗x∗(A,B,c,d,f_,¯f)>¯v, then we add to matrices (A|c)f_|O, (B|d)¯f|O the rows O|¯v and ¯f|O, respectively. Then, v is weakly universally reachable by f if and only if the greatest solution of the obtained system has the (n+1)-st coordinate equal to I.
Corollary 5.4. If an interval value v is universally reachable by f, then it is weakly universally reachable by f.
Besides the relations in Corollaries 5.1–5.4, the following implications hold:
● If an interval value v is strongly controllably reachable by f, then it is strongly possibly reachable by f and strongly tolerably reachable by f.
● If an interval value v is universally reachable by f, then it is strongly possibly reachable by f and strongly tolerably reachable by f.
● If an interval value v is weakly universally reachable by f, then it is tolerably reachable by f.
Example 5.1. Decide all defined types of reachability of the interval value v=[3,6] by f=([2,4],[1,5],[2,3],[4,6])⊤ subject to A⊗x⊕c=B⊗x⊕d, where A,B,c,d are the matrices and vectors from Example 4.1.
Since the current function ¯f is the function f from Example 4.1, we have m(¯f)=4 and M(¯f)=6. We compute M(f_)=f_⊤⊗x∗(A,B,c,d)=4 and find m(f_) in the same way as in Example 4.1. We obtain m(f_)=2. We can now discuss the types of reachability.
We start with possible reachability. Since M(f_)=4≤6=¯v and m(¯f)=4≥2=v_, in accordance with Theorem 5.2 ⅱ), v is possibly reachable by f. Moreover, M(f_)=4=m(¯f), which implies that v is strongly possibly reachable by f. By the definition we obtain that v is weakly possibly reachable by f, too.
For strongly tolerable reachability, we check whether inequalities (5.4) are satisfied. Since m(f_)=2<3=v_, v is not strongly tolerably reachable by f. We continue with tolerable reachability. For x=x∗(A,B,c,d) we obtain ¯f(x)=M(¯f)=6=¯v and f_(x)=M(f_)=4≥v_. Hence, x∗(A,B,c,d) satisfies the system of inequalities (5.3) which means that v is tolerably reachable by f. Then it is weakly tolerably reachable by f, too.
We continue with strongly controllable reachability. Since M(f_)=4>v_, the system of inequalities (5.7) is not satisfied. Hence, v is not strongly controllably reachable by f. For controllable reachability, we have to solve the system of inequalities
f_(x)≤3,¯f(x)≥6 |
using Theorem 4.4. Since M(f_)>v_, we shall continue with solving the equation f_(x)=3. This equation is solvable with the greatest solution x∗(A,B,c,d,f_,3)=(5,5,5,3)⊤, but ¯f(5,5,5,3)=5<6, so the above system is not solvable. Hence v is not controllably reachable by f. The system of inequalities (5.8) is satisfied because m(f_)=2≤3=v_ and M(¯f)=6=¯v, so v is weakly controllably reachable by f.
Finally, we check the conditions for universal and weakly universal reachability. Since m(f_)≠M(¯f), v is not universally reachable by f. For weakly universal solvability, we have to solve the matrix equation
(3 2 5 7 6)
(4 6 1 2 5)
(5 3 2 1 4)
(3 5 4 2 3)
(2 1 2 4 0) ⊗x =
(3 6 5 2 6)
(4 3 5 2 4)
(7 2 8 3 3)
(3 8 1 3 2)
(4 5 3 6 0) ⊗x. |
We obtain x∗(A,B,c,d,f_,¯f)=(5,4,5,4)⊤. Since f_(5,4,5,4)=¯f(5,4,5,4)=4∈v, v is weakly universally reachable by f.
We will summarize the obtained results: For given A,B,c,d,f and v, an interval value v is possibly, weakly possibly, strongly possibly, tolerably, weakly tolerably, weakly controllably and weakly universally reachable by f.
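Most of the decisions above are plain comparisons of M(f_), m(f_), M(¯f) and m(¯f); the following sketch (reusing upper_bound_M and minimal_value from the Section 4 sketches) reproduces those comparisons.

```python
A = [[3, 2, 5, 7], [4, 6, 1, 2], [5, 3, 2, 1], [3, 5, 4, 2]]
B = [[3, 6, 5, 2], [4, 3, 5, 2], [7, 2, 8, 3], [3, 8, 1, 3]]
c, d = [6, 5, 4, 3], [6, 4, 3, 2]
f_lo, f_hi = [2, 1, 2, 4], [4, 5, 3, 6]      # the interval function f = [f_, f-bar]
v_lo, v_hi = 3, 6                            # the interval value v = [3, 6]

M_lo, m_lo = upper_bound_M(A, B, c, d, f_lo), minimal_value(A, B, c, d, f_lo)   # 4, 2
M_hi, m_hi = upper_bound_M(A, B, c, d, f_hi), minimal_value(A, B, c, d, f_hi)   # 6, 4

print("possibly:             ", M_lo <= v_hi and m_hi >= v_lo)                   # (5.2): True
print("strongly possibly:    ", M_lo <= v_hi and m_hi >= v_lo and M_lo <= m_hi)  # True
print("strongly tolerably:   ", M_hi <= v_hi and m_lo >= v_lo)                   # (5.4): False
print("weakly tolerably:     ", m_hi <= v_hi and M_lo >= v_lo)                   # (5.5): True
print("strongly controllably:", M_lo <= v_lo and m_hi >= v_hi)                   # (5.7): False
print("weakly controllably:  ", m_lo <= v_lo and M_hi >= v_hi)                   # (5.8): True
print("universally:          ", m_lo == M_hi and v_lo <= m_lo <= v_hi)           # False
```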
In this paper, we have extended the solvability of two-sided systems of equations to a more general type. We dealt with the task of optimization in max-min algebra. In addition to minimizing or maximizing the objective function, we have introduced the notion of reachability of an interval value by an interval function. We have defined eleven types of reachability for which we have derived equivalent conditions. Other types of reachability can be a challenge for further research in this area.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors declare that they have no conflict of interest.
[1] A. Aminu, P. Butkovič, Introduction to max-linear programming, IMA J. Manag. Math., 20 (2009), 233–249. https://doi.org/10.1093/imaman/dpn029
[2] P. Butkovič, M. MacCaig, On the integer max-linear programming problem, Discrete Appl. Math., 162 (2014), 128–141. https://doi.org/10.1016/j.dam.2013.08.007
[3] R. Cuninghame-Green, Minimax algebra, Berlin: Springer, 1979. https://doi.org/10.1007/978-3-642-48708-8
[4] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, K. Zimmermann, Linear optimization problems with inexact data, New York: Springer, 2006. https://doi.org/10.1007/0-387-32698-7
[5] M. Gavalec, J. Plavka, Monotone interval eigenproblem in max-min algebra, Kybernetika, 46 (2010), 387–396.
[6] M. Gavalec, K. Zimmermann, Solving systems of two-sided (max, min)-linear equations, Kybernetika, 46 (2010), 405–414.
[7] M. Hladík, On approximation of the best case optimal value in interval linear programming, Optim. Lett., 8 (2014), 1985–1997. https://doi.org/10.1007/s11590-013-0715-5
[8] M. Hladík, How to determine basis stability in interval linear programming, Optim. Lett., 8 (2014), 375–389. https://doi.org/10.1007/s11590-012-0589-y
[9] J. Loetamonphong, S. C. Fang, R. E. Young, Multi-objective optimization problem with fuzzy relation equations constraints, Fuzzy Set. Syst., 127 (2002), 141–164. https://doi.org/10.1016/S0165-0114(01)00052-5
[10] H. Myšková, Interval systems of max-separable linear equations, Linear Algebra Appl., 403 (2005), 263–272. https://doi.org/10.1016/j.laa.2005.02.011
[11] H. Myšková, Control solvability of interval systems of max-separable linear equations, Linear Algebra Appl., 416 (2006), 215–223. https://doi.org/10.1016/j.laa.2005.11.008
[12] H. Myšková, An iterative algorithm for testing solvability of max-min interval systems, Kybernetika, 48 (2012), 879–889.
[13] H. Myšková, On an algorithm for testing T4 solvability of max-min interval systems, Kybernetika, 48 (2012), 924–938.
[14] H. Myšková, Interval max-plus systems of linear equations, Linear Algebra Appl., 437 (2012), 1992–2000. https://doi.org/10.1016/j.laa.2012.04.051
[15] J. Plavka, On eigenproblem for circulant matrices in max-algebra, Optimization, 50 (2001), 477–483. https://doi.org/10.1080/02331930108844576
[16] E. Sanchez, Medical diagnosis and composite advances, In: Fuzzy set theory and applications, 1979, 437–444.
[17] Y. Tsukamoto, T. Terano, Failure diagnosis by using fuzzy logic, In: 1977 IEEE conference on decision and control including the 16th symposium on adaptive processes and a special symposium on fuzzy set theory and applications, New Orleans, 1977, 1390–1395. https://doi.org/10.1109/CDC.1977.271521
[18] N. N. Vorobjov, Extremal algebra of positive matrices, Elektronische Datenverarbeitung und Kybernetik, 3 (1967), 39–71.
[19] Y. K. Wu, S. M. Guu, Minimizing a linear function under a fuzzy max-min relational equation constraint, Fuzzy Set. Syst., 150 (2005), 147–162. https://doi.org/10.1016/j.fss.2004.09.010
[20] L. A. Zadeh, Toward a theory of max-min systems, In: Aspects of network and systems theory, 1971.
[21] K. Zimmermann, Extremální algebra (in Czech), 1976.