The Traveling Salesman Problem (TSP) is a classical discrete optimization problem with many practical applications. In this paper, a Discrete Salp Swarm Algorithm (DSSA) is proposed for solving TSP. The 2-opt and Problem Aware Local Search (PALS) operators are improved into d-opt and TPALS and used as the discrete operators of DSSA, the Subtour Exchange Crossover (SEC) operator is selected experimentally to update the followers, and a Second Leader Principle (SPL) is introduced to enhance exploration. Experiments on 23 symmetric benchmark instances from TSPLIB show that DSSA has satisfactory performance and robustness compared with several classical and advanced algorithms.
1. Introduction
Solving an optimization problem means maximizing or minimizing an objective function by selecting optimal parameters and schemes under given constraints. From the perspective of the objective function, optimization problems fall into two branches: single-objective and multi-objective problems. A single-objective problem has a unique objective function, and its result is an undisputed optimal solution. Multi-objective problems usually have multiple objective functions, and the result is usually a Pareto-optimal solution set, from which trade-offs are required to select a relatively better solution.
From the perspective of decision variables, optimization problems are classified into continuous problems and discrete problems. In continuous problems, the decision variables take values in the field of real numbers. The range of continuous problems is vast, and the literature is rich, covering such active fields as engineering, medical science, and machine learning [1,2,3]. In discrete problems, by contrast, the decision variables are often elements of an integer set. There are many practical optimization problems in the field of discrete optimization, e.g., the Traveling Salesman Problem (TSP) [4,5,6], the Graph Coloring Problem (GCP) [5] and the DNA Sequence Design Problem (DSDP) [7]. TSP is an NP-hard problem; brute-force enumeration of its tours has time complexity O(N!) [8]. Although solving TSP is very time-consuming [9], it still has practical applications in many areas, e.g., the DNA Fragment Assembly Problem (DFAP) [10], the Job Shop Scheduling Problem (JSP) [11,12] and the Vehicle Routing Problem (VRP) [13].
The methods for solving TSP are roughly classified into two categories in the literature: deterministic algorithms and nondeterministic algorithms. Deterministic algorithms include Branch and Bound (BnB) [14], Dynamic Programming (DP) [15] and the Lagrangian Dual (LD) [16], among others. However, as the size of TSP increases, the performance of such algorithms declines significantly [17]. Nondeterministic algorithms generally refer to approximation algorithms and meta-heuristic algorithms, where meta-heuristic algorithms can solve TSP well at a controllable time cost. Numerous meta-heuristic algorithms for solving TSP have been put forward in the existing literature. These algorithms usually use discrete operators to reconstruct the original algorithms. For example, Panwar reconstructed the Grey Wolf Optimizer (GWO) with 2-optimization (2-opt) and Hamming distance to solve TSP [6]. Gunduz proposed a Discrete JAYA algorithm (DJAYA) to solve TSP; in DJAYA, roulette selection is used to control the behavior of the transformation operators, and two search trend parameters ST1 and ST2 are added to enhance the comprehensive optimization ability [18]. Huang et al. put forward a nearest-neighbor heuristic information mechanism and obtained the Discrete Shuffled Frog-leaping Algorithm (DSFLA); in DSFLA, population diversity is maintained by a reverse roulette strategy, and exploration ability is enhanced by a separate elite set mechanism [19]. Akhand et al. realized the discreteness of the Discrete Spider Monkey Optimizer (DSMO) by utilizing two new crossover operators [20]. To deal with TSP, Cinar et al. put forward the Discrete Tree Seed Algorithm (DTSA); in DTSA, a combination of multiple transformation operators is introduced to increase exploration ability, and the final solution is improved by 2-opt [21]. Zhou et al. combined 3-opt and 2-opt to propose a Discrete Invasive Weed Optimization algorithm (DIWO) to solve TSP [22], and the same team also proposed discrete flower pollination algorithms based on order-based crossover, pollen discarding behavior and partial behaviors [23]. Although these algorithms perform well, they still have room for improvement in robustness and time cost. In particular, TSP is a practical problem with stringent requirements on robustness and time cost. We therefore aim to propose a new discrete algorithm with stable performance and fast running speed to solve TSP.
We put forward a Discrete Salp Swarm Algorithm (DSSA) for solving TSP. The Salp Swarm Algorithm (SSA) is a swarm-based algorithm [24]. It was originally used to solve continuous benchmark and real-world optimization problems, and SSA also performs satisfactorily on engineering design problems. There are two main reasons for proposing a discrete version of SSA to solve TSP:
(1) There is little literature on the discretization of SSA, and especially on its application to TSP.
(2) In SSA, the leader guides the followers, so the whole population gradually approaches the optimal solution; this is similar to the neighborhood search idea commonly used for TSP.
Therefore, this paper proposes an improved DSSA based on the properties of TSP and compares it with many advanced meta-heuristic algorithms on 23 benchmark instances. Experimental results demonstrate the effectiveness and robustness of DSSA in solving TSP. In addition, the results of applying DSSA to TSP illustrate the prospects of this algorithm for solving discrete optimization problems. The rest of the paper is arranged as follows. In Section 2, TSP and the corresponding mathematical model are briefly described. The original SSA is briefly described in Section 3. In Section 4, the d-opt operator, the TPALS operator and DSSA are introduced. Section 5 shows through several experiments that DSSA has good performance and robustness in solving TSP. Finally, the conclusion is presented in Section 6.
2. Traveling Salesman Problem (TSP)
TSP can be described as follows: a salesman must pass through multiple cities and towns to sell goods. He starts in one city, passes through all the planned cities along the way, and ends his trip in the starting city. To cut down the time cost, the salesman should choose the shortest possible travel path. The main difficulty of TSP is the number of potential routes: for a symmetric TSP with n cities there are (n-1)!/2 possible tours. The distance from the ith city to the jth city is the Euclidean distance, as shown in Eq (1) [6]:

d_{i,j} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}    (1)

where d_{i,j} denotes the distance from the ith city to the jth city, (x_i, y_i) are the coordinates of the ith city and (x_j, y_j) are the coordinates of the jth city. The length of a travel path is computed with the function f in Eq (2) [6]:

f = \sum_{i=1}^{n-1} d_{i,i+1} + d_{n,1}    (2)

where n is the number of cities and the cities are indexed in the order in which they are visited. If d_{i,j} = d_{j,i} for all i, j = 1, 2, ..., n, the problem is called symmetric TSP. Solving TSP amounts to finding a least-cost Hamiltonian cycle on a weighted graph [6].
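For concreteness, the two formulas can be illustrated with a short, self-contained Python sketch (our own illustration, not code from the paper): it builds the Euclidean distance matrix of Eq (1) from city coordinates and evaluates the tour length f of Eq (2) for a tour given as a permutation of city indices.

```python
import numpy as np

def distance_matrix(coords):
    """Pairwise Euclidean distances, Eq (1); coords is an (n, 2) array of (x, y) pairs."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def tour_length(tour, dist):
    """Tour length f of Eq (2): consecutive edges plus the closing edge back to the start."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Example: a 4-city instance and the tour 0 -> 1 -> 2 -> 3 -> 0
coords = np.array([[0.0, 0.0], [0.0, 3.0], [4.0, 3.0], [4.0, 0.0]])
dist = distance_matrix(coords)
print(tour_length([0, 1, 2, 3], dist))  # 3 + 4 + 3 + 4 = 14.0
```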
3. Salp Swarm Algorithm (SSA)
SSA is an efficient algorithm put forward by Mirjalili et al. in 2017, inspired by the phenomenon that salps move in a chain when foraging. A salp chain contains one leader and many followers: the leader leads the swarm towards the food position, while the followers follow the leader directly or indirectly [24].
The leader position is renewed using Eq (3):

x_{1,j} = F_j + c_1((ub_j - lb_j) c_2 + lb_j),   c_3 >= 0.5
x_{1,j} = F_j - c_1((ub_j - lb_j) c_2 + lb_j),   c_3 < 0.5    (3)

where x_{1,j} represents the leader position in the jth dimension and F_j represents the food position in the jth dimension; ub_j and lb_j are the upper and lower bounds of the jth dimension; c_2 and c_3 are random numbers in [0, 1]. c_1 is the key coefficient used to balance exploration and exploitation, and it is calculated as in Eq (4):

c_1 = 2 e^{-(4t/T)^2}    (4)

where T is the maximum number of iterations and t is the current iteration. The follower positions are renewed using Eq (5):

x_{i,j} = (x_{i,j} + x_{i-1,j}) / 2,   i >= 2    (5)

where x_{i,j} represents the position of follower i in the jth dimension. See Algorithm 1 for the pseudocode of SSA.
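For reference before the discretization in Section 4, a minimal Python sketch of one iteration of the continuous SSA described by Eqs (3)-(5) could look as follows (our own illustration; Algorithm 1 itself is not reproduced here, and the 0.5 threshold on c_3 is the commonly used reading of Eq (3)):

```python
import numpy as np

def ssa_step(X, F, lb, ub, t, T):
    """One iteration of continuous SSA on a population X of shape (N, dim).

    The first row of X is the leader (Eq (3)), the remaining rows are followers
    (Eq (5)); F is the food (best-so-far) position, lb/ub the per-dimension bounds.
    """
    N, dim = X.shape
    c1 = 2 * np.exp(-(4 * t / T) ** 2)              # Eq (4)
    for j in range(dim):                            # leader update, Eq (3)
        c2, c3 = np.random.rand(), np.random.rand()
        step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
        X[0, j] = F[j] + step if c3 >= 0.5 else F[j] - step
    for i in range(1, N):                           # follower update, Eq (5)
        X[i] = (X[i] + X[i - 1]) / 2
    return np.clip(X, lb, ub)
```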
4. Discrete Salp Swarm Algorithm (DSSA)
4.1. d-opt
To cut down the time cost of 2-optimization (2-opt), we improve it to obtain d-optimization (d-opt). 2-opt is a local search algorithm proposed by Croes [25] that is broadly applied by researchers to various discrete problems. The core idea of 2-opt is to select i and j and then reverse the subsequence from i to j; if the path cost becomes smaller, this operation is performed, otherwise the original solution is kept. For example, if the original solution is A-D-C-B-E and i = 2 and j = 5 are selected, the solution is converted to A-E-B-C-D. The algorithm continuously improves the solution by repeating these steps.
The time cost of 2-opt is high for two main reasons: 1) 2-opt traverses all possible pairs i and j (i ≠ j), and 2) it calculates the total length of the path in every iteration. d-opt addresses these two disadvantages. First, a parameter d is introduced to reduce the traversal scale: d is the minimum distance between the selected i and j, so only subsequences whose length is greater than d are candidates for inversion. Second, d-opt only calculates the change in path length caused by an inversion instead of recomputing the total path length. See Algorithm 2 and Algorithm 3 for the pseudocode of d-opt.
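Algorithms 2 and 3 are referred to above but not reproduced here; the following Python sketch (our own naming and reading of the description) illustrates both ideas: only pairs (i, j) at least d apart are tried, and each candidate inversion is judged by the length change of the two affected edges rather than by recomputing the whole tour.

```python
def delta_reverse(tour, dist, i, j):
    """Length change deltaF if the segment tour[i..j] is reversed (symmetric TSP).

    Only the edge entering the segment and the edge leaving it change, so the
    total tour length never has to be recomputed (the idea of Algorithm 3).
    """
    n = len(tour)
    p, q = tour[i - 1], tour[i]          # edge before the segment
    r, s = tour[j], tour[(j + 1) % n]    # edge after the segment (may close the tour)
    return (dist[p, r] + dist[q, s]) - (dist[p, q] + dist[r, s])

def d_opt(tour, dist, d):
    """2-opt restricted to segments of length greater than d, with delta evaluation."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + d, n):               # our reading of "length greater than d"
                if delta_reverse(tour, dist, i, j) < 0:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour
```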
4.2. TPALS
In order to better integrate the Problem Aware Local Search (PALS) algorithm into DSSA for solving TSP, we improve it to obtain the TSP Problem Aware Local Search (TPALS). PALS is a heuristic algorithm proposed by Alba and Luque in 2007 to solve the DNA Fragment Assembly Problem (DFAP) [26]. A PALS solution is a sequence of ordinal numbers of DNA fragments, and in each iteration the current solution is replaced using neighborhood information. The neighborhood solution set is obtained by reversing the subsequence from a given i to j in the current solution. Unlike 2-opt, the termination condition of PALS is that no better solution exists in the neighborhood solution set.
Two problems need to be overcome when applying PALS to TSP. Firstly, the indicator used to evaluate the neighborhood solution set in PALS is the number of contigs, so we delete the contig-related calculation and take the solution with the shorter path length as the better one among the neighborhood solutions. Secondly, in DFAP there is no need to consider the overlap score between the last fragment and the first fragment, whereas a TSP tour is closed, so in TPALS we add the calculation of the final edge back to the starting city. See Algorithm 4 for the pseudocode of TPALS, where deltaF is also calculated with Algorithm 3.
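Algorithm 4 is not reproduced here; as a rough Python sketch of the behaviour described above (reusing delta_reverse from the d-opt sketch), TPALS repeatedly applies the best improving reversal found in the neighborhood, evaluates candidates only through the length change deltaF of the affected edges (including the closing edge back to the starting city), and terminates when no improving reversal exists.

```python
def tpals(tour, dist):
    """TPALS-style local search: apply the best improving reversal until none exists."""
    n = len(tour)
    while True:
        best_delta, best_move = 0.0, None
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                delta = delta_reverse(tour, dist, i, j)   # deltaF, as in Algorithm 3
                if delta < best_delta:
                    best_delta, best_move = delta, (i, j)
        if best_move is None:                             # no better neighborhood solution
            return tour
        i, j = best_move
        tour[i:j + 1] = tour[i:j + 1][::-1]
```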
4.3. The proposed Discrete Salp Swarm Algorithm (DSSA)
SSA was originally put forward to solve continuous problems, while TSP is discrete. Therefore, we use discrete operators to reconstruct the original SSA to solve TSP; the discrete operators used are the d-opt and TPALS described above. To increase the exploration ability, we verify the effectiveness of five classical discrete crossover operators and introduce the best-performing one into DSSA. Each salp in the swarm represents a feasible solution of TSP.
In the proposed method, d-opt is used to update the leader, and the update formula is shown in Eq (6):

where Tour_i represents the ith salp and d is a parameter used to control the search intensity of d-opt; the updating formula of d is shown in Eq (7):

where n is the number of cities in the TSP instance, dMin and dMax are the minimum and maximum values of d, T is the maximum number of iterations, t is the current iteration, and [·] denotes the rounding-to-integer function.
We use TPALS and a crossover operator to renew the follower positions. Five alternative crossover operators are considered in this paper, and the best one is determined in Experiment 5.1. The update formula of the followers is shown in Eq (8):

where operator denotes the best crossover operator determined in the experiment.
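The crossover step of Eq (8) can be illustrated with one of the candidate operators listed in Experiment 5.1, Order Crossover (OX) [28]. The sketch below is a common OX variant written by us for illustration, not necessarily the exact implementation used in the paper; the operator finally selected in the experiments is SEC.

```python
import random

def order_crossover(parent1, parent2):
    """Order Crossover (OX): keep a random slice of parent1 and fill the remaining
    positions with the missing cities in the order they appear in parent2."""
    n = len(parent1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    kept = set(child[i:j + 1])
    fill = (c for c in parent2 if c not in kept)
    for k in list(range(j + 1, n)) + list(range(i)):
        child[k] = next(fill)
    return child

# Example: crossing two tours of 6 cities
print(order_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2]))
```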
To enhance the exploration ability, DSSA introduces a mechanism named the Second Leader Principle (SPL). Its core idea is that in each iteration one of the followers becomes a second leader, and the second leader also uses TPALS to renew its position. The formula of the second leader is shown in Eq (9):
The probability of a follower becoming the second leader is calculated with Eq (10):
Note that there is only one second leader in each iteration. The pseudocode of DSSA is given in Algorithm 5.
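Eqs (9) and (10) are not reproduced above, so the following sketch is only an illustration of the mechanism (reusing tour_length and tpals from the earlier sketches): one follower is chosen as the second leader, here with a hypothetical roulette-style probability proportional to 1/f that stands in for Eq (10), and its position is then renewed with TPALS as in Eq (9).

```python
import random

def apply_second_leader(followers, dist):
    """Illustrative SPL step: pick one follower as the second leader and refine it
    with TPALS. The selection probability used here (proportional to 1 / tour
    length) is an assumption standing in for Eq (10) of the paper."""
    weights = [1.0 / tour_length(t, dist) for t in followers]
    idx = random.choices(range(len(followers)), weights=weights, k=1)[0]
    followers[idx] = tpals(followers[idx], dist)   # the second leader also uses TPALS (Eq (9))
    return idx
```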
5. Experiments and results
We tested DSSA on 23 small, medium and large symmetric TSP benchmark instances to verify its effectiveness [18]. Table 1 provides the relevant information about the benchmark instances used in this article, where the number after the instance name indicates the number of cities; for example, Oliver30 is a benchmark instance with 30 cities. All benchmark instances used in the paper are taken from TSPLIB.
All methods were run 20 times for a comprehensive comparison, each with the same parameter settings: population size N = 50 and iteration number T=D+ΣDi=1, where D is the scale of the problem. The results were analyzed in terms of Best, Avg, standard deviation (Std) and relative error (Re). Re is calculated as in Eq (10) [18]:

Re = (Avg - Optimal) / Optimal × 100%    (10)
where Avg is the average of the best path costs obtained over the 20 runs of the algorithm, and Optimal is the optimal path cost of the benchmark instance. All experiments were carried out in the same environment, an Intel(R) Core(TM) i5-10500 3.10 GHz CPU with 16.00 GB RAM, and were implemented in MATLAB R2020b. The parameters of the algorithms used in this article are shown in Table 2.
5.1. Experiment 1: Determine the best crossover operator
In order to determine the influence of the crossover operator and the parameter c1 on DSSA, this experiment studied five crossover operators and five parameter combinations on 19 benchmark instances. The alternative crossover operators are Partially Mapped Crossover (PMX) [27], Order Crossover (OX) [28], Position-Based Crossover (PBX) [29], Order-Based Crossover (OBX) [29] and Subtour Exchange Crossover (SEC) [30]. Table 3 shows the influence of the different crossover operators on DSSA performance, with the best results in bold. From Table 3, when the SEC crossover operator is used, the Re on the 19 instances is the smallest among all variants and is less than one percent. In addition, Table 4 gives the Friedman test results on Best and Avg. From Table 4, DSSA-SEC has the smallest rank on both Best and Avg, and the p-value is much less than 0.05, which indicates that DSSA-SEC is clearly superior to the other variants in performance and robustness. Therefore, SEC is taken as the best crossover operator for DSSA.
5.2. Experiment 2: Comparisons with DGWO, DFA, DICA, ESA, GA, IBA and IDGA
In this experiment, DSSA was compared with several classical or advanced algorithms (DGWO, DFA, DICA, ESA, GA, IBA and IDGA) taken from recently published work [6,31]. Table 5 shows the performance of the eight algorithms on the 23 benchmark instances, with the best results in bold; the symbol "\" indicates that the algorithm was not tested on the corresponding instance in the original literature. As can be seen from Table 5, DSSA achieved the best results on all indicators. On the one hand, on the Best index, DSSA obtains the theoretical optimal value on the 23 instances, which shows that DSSA has satisfactory performance in solving TSP. According to our analysis, this may be due to the SEC operator and the d-opt operator, which gradually reduce the search range and to some extent improve the accuracy of the optimal solution. On the other hand, on the Avg index, DSSA wins on all 23 instances, which shows that DSSA has satisfactory robustness in solving TSP. This suggests that the combination of the TPALS operator and the SPL mechanism provides a certain degree of randomness and effectively prevents the algorithm from falling into local optima. In addition, on the Time index, DSSA is superior to DGWO but slightly inferior to the other six algorithms; DSSA thus takes somewhat longer to solve TSP, which may be because both the d-opt operator and the TPALS operator search for the optimal solution of a neighborhood. On the whole, however, the Re of the comparison algorithms is much larger than that of DSSA. If instances with Re greater than 1% are counted as failures, then 53.8% (7/13) of the DGWO cases, 82.4% (14/17) of DFA, 88.2% (15/17) of DICA, 76.5% (13/17) of ESA, 88.2% (15/17) of GA, 64.7% (11/17) of IBA and 88.2% (15/17) of IDGA failed, whereas for DSSA the Re is less than 1% on all instances.
5.3. Experiment 3: Convergence analysis
In this experiment, DSSA was compared with several classical or advanced algorithms (IBA, ESA and DFA) taken from recently published work [6,31]. Table 6 shows the average number (in thousands) of objective function evaluations required to reach the final solution for each instance, with the best results in bold. From Table 6, on the one hand, the average number of evaluations of DSSA is much smaller than that of the other algorithms, which indicates that DSSA converges better on all 23 instances; on the other hand, the number of evaluations of DSSA does not grow by orders of magnitude as the problem size increases, suggesting that DSSA has advantages in solving larger-scale TSP. Finally, although a single run of DSSA takes the longest time, the overall time cost of solving TSP can be reduced by choosing an appropriate number of evaluations. Therefore, the DSSA proposed in this paper is a promising approach to solving TSP.
6. Conclusion
As a swarm-based algorithm, SSA was put forward to deal with continuous single- and multi-objective optimization problems. In this paper, we proposed a DSSA for solving TSP. Firstly, we improved 2-opt and PALS into d-opt and TPALS, respectively, and added them to DSSA as discrete operators. Secondly, we made a comparative study of five crossover operators, confirmed that SEC is the best crossover operator for DSSA, and introduced it into DSSA. Finally, the proposed DSSA was compared with several advanced algorithms on 23 benchmark instances, and the results showed that DSSA possesses satisfactory performance and robustness in solving TSP. In the course of the experiments, we found that the SEC operator and the d-opt operator, which gradually reduce the search range, improve the exploitation ability of the algorithm and help to improve the accuracy of the optimal solution, and that the combination of the TPALS operator and the second leader mechanism provides a certain randomness that helps the algorithm avoid falling into local optima. At the same time, DSSA also has the disadvantage of a long running time; according to our analysis, this may be because both the d-opt operator and the TPALS operator search for the optimal solution of a neighborhood. In the future, we plan to make improvements to address the long running time of DSSA, and we intend to put forward novel discrete operators so that DSSA can deal with the DNA fragment assembly problem.
Acknowledgement
This work is supported by the 111 Project (No. D23006), the National Natural Science Foundation of China (Nos. 62272079, 61972266), the Liaoning Revitalization Talents Program (No. XLYC2008017), the Natural Science Foundation of Liaoning Province (Nos. 2021-MS-344, 2021-KF-11-03, 2022-KF-12-14), the Postgraduate Education Reform Project of Liaoning Province (No. LNYJG2022493), and the Dalian Outstanding Young Science and Technology Talent Support Program (No. 2022RJ08).
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.