1. Introduction
Network design problems (NDPs) involve determining a design scheme for new transportation network components or existing infrastructure configurations to achieve various social welfare objectives (e.g., minimizing the total network travel cost). This study is motivated by the problem of finding the optimal number and locations of new links, in addition to expanding existing ones, under budget limitations, while accounting for the corresponding travel behavior of network users [1,2]. Generally, network design problems can be divided into three categories: discrete network design problems (DNDPs), which improve network capacity by adding new links or lanes; continuous network design problems (CNDPs), which expand the network by increasing the capacity of existing links; and mixed network design problems (MNDPs), which combine CNDPs and DNDPs [3]. The literature emphasizes that the DNDP is of great academic and practical importance [4,5,6]. With population growth and economic development, the significance of the DNDP increases, as transportation infrastructure cannot keep pace with growing travel demand, while land resources for capacity enhancement remain extremely limited, particularly in metropolitan areas [7]. Accordingly, this study aims to develop a network design strategy that determines the locations of new links and the number of lanes to be constructed, maximizing social benefits under limited transportation investment.
From an optimization perspective, the DNDP can be analyzed as a hierarchical decision-making problem involving network managers at the upper level, who determine the network design strategy, and network users at the lower level, who pursue minimal individual travel time in response to the given network design. The total system cost is affected by the decision variables of both managers and users [8,9,10]. Owing to these two groups of stakeholders within the network, the DNDP is typically formulated as a bi-level programming model.
In general, the DNDP is most commonly solved using general-purpose approaches such as Lagrangian relaxation, decomposition methods and descent methods, after transforming the bi-level formulation into a single-level one [11]. These transformations often rely on unrealistic simplifying assumptions on the constraints, which leads to traffic dynamics being captured inaccurately [12]. Furthermore, these algorithms are computationally demanding for practical purposes [13,14]. Owing to their solution efficiency, meta-heuristic algorithms have become increasingly popular in recent decades. Evolutionary algorithms, swarm intelligence and hybrid meta-heuristics have been used to address DNDPs; these approaches sacrifice solution optimality but significantly reduce computational time [6,15,16,17].
Despite these efforts, practitioners still criticize the DNDP as an old and impractical approach because of the limited practical applicability of existing methods [11]. To deal with real industry-specific projects, the existing gaps between engineering practice and theory must be acknowledged. First, traffic models of real cities differ from benchmark networks, such as Sioux Falls, in their complexity. Second, large-scale networks are rarely considered in experimental cases. Therefore, the multimodal features of actual networks, which are largely simplified in the literature, should be considered.
Recently, advanced simulation packages have provided the potential to generate multimodal environments close to reality and are flexible in constructing networks at varying scales [18]. However, the non-convex and costly-to-evaluate relationships between the simulation inputs and outputs make simulation-based optimization extremely challenging because there is no closed-form function with which to compute or approximate the derivative.
Inspired by state-of-the-art parameter-tuning techniques for machine learning models, Bayesian optimization (BO) has proven to be an effective algorithm for finding the global optimizer of costly black-box functions in simulation-based optimization problems [19]. The BO method generally comprises two components. First, a surrogate of the objective function is constructed, and the model uncertainty is quantified using a Bayesian-type statistical model (e.g., Gaussian process regression). Second, the location to sample next is determined using an acquisition function derived from the surrogate model [20]. The successful application of BO to a simulation-based CNDP was reported by Yin et al. [21]. Their experimental results suggest that BO has a pronounced advantage in reducing the number of simulation runs required for simulation-based optimization problems. However, standard BO relies on acquisition functions that are defined only on continuous domains, implying that the input variables must be continuous. Therefore, optimizing DNDPs with discrete inputs using BO is a challenging task that remains unexplored.
Considering the aforementioned literature, this study first develops a simulation-based bi-level model of the DNDP that enables the simulation of a real-world large-scale network and captures accurate multimodal traffic dynamics. Next, to solve the proposed model, a novel hybrid method, BO-B&B, which combines Bayesian optimization and the branch-and-bound algorithm, is developed. By combining state-of-the-art machine-learning parameter-tuning techniques with an exact mathematical optimization technique, the proposed algorithm achieves both efficiency and accuracy for real-scale DNDPs.
The remainder of this paper is structured as follows: A literature review is presented in Section 2. The problem description is presented in Section 3. The proposed BO-B&B algorithm for solving a simulation-based multimodal DNDP is presented in Section 4. In Section 5, a numerical example based on the Sioux Falls network is presented. Finally, Section 6 concludes the paper.
2. Literature review
To highlight the innovation of the proposed model and solution algorithm, this section reviews the literature on the solution of DNDPs and on the application of BO-based methods.
2.1. The solution of DNDPs
Previous studies have demonstrated that solving DNDPs can be challenging, even on experimental and small-scale networks [22,23]. Owing to their practical significance, DNDPs have been addressed with various methods, including exact mathematical algorithms, heuristics and meta-heuristics. Leblanc [9] first employed the branch-and-bound (B&B) algorithm to obtain optimal solutions for DNDPs. Poorzahedy and Turnquist [24] proposed a heuristic B&B method to address a single-level formulated DNDP. Farvaresh and Sepehri [25] further presented a more efficient B&B algorithm that can compute lower bounds. The difficulty in exploiting the B&B approach lies in accommodating the inherent nonconvexity of the DNDP resulting from complex equilibrium constraints [4].
To map the relationship between improved flows and newly added links, the concept of the support function was developed by Gao et al. [23]. A generalized Benders decomposition method was employed as the solution algorithm. Using variational inequality constraints (VI) to formulate user equilibrium (UE) conditions, Luathep et al. [26] transformed a bi-level DNDP into a single-level optimization problem and solved it using a cutting plane algorithm. To provide optimal solutions to a bi-level DNDP, a single-level mixed-integer linear programming (MILP) formulation was developed by Farvaresh and Sepehri [27]. The conventional DNDP was later extended by Wang et al. [5] to a multi-capacity DNDP that determines both the locations and numbers of added lanes of candidate links. To solve the multi-capacity DNDP, two global optimization methods, namely, SO-relaxation and UE-reduction, were developed by exploiting the difference between the system optimization (SO) and UE conditions. Zhang et al. [28] proposed an active-set algorithm to solve a single-level multimodal DNDP with complementarity constraints. Fontaine and Minner [29] reformulated single-level MILP for a DNDP by adopting a piecewise linear approximation and solved it using the Benders decomposition method. Fontaine and Minner [30] further extended the DNDP to solve a dynamic maintenance planning model using the same solution algorithm. Wang et al. [6] developed a novel DNDP that can simultaneously optimize the network design strategy and capacities. The formulated model was solved using global optimization methods that incorporate outer approximation, linearization and range reduction techniques. Bagloee et al. [11] proposed a hybrid method integrating B&B and Benders decomposition for a DNDP considering multiclass and multimodal traffic flows.
2.2. The application of BO-based methods
Previous attempts have been made to develop BO-based methods for other discrete optimization problems [31]. One naïve idea assumes that the expected value of the objective function does not change between a continuous point and the nearest discrete point, so the inputs can simply be rounded to the nearest integer [32]. For simplicity, this method is referred to as naïve BO in the following text. However, this approach risks BO becoming stuck and repeatedly suggesting preexisting observations. Luong et al. [33] attempted to improve the efficiency of naïve BO by adjusting the length-scale factor in the kernel function and the tradeoff parameter in the acquisition function. Another approach is sequential model-based optimization for general algorithm configuration (SMAC), in which a random forest surrogate model replaces the GP to deal with discrete inputs [34]. Nevertheless, random forests have limitations with regard to extrapolation [35]. Another tree-based method, Tree-Parzen estimators (TPEs), can deal with discrete variables by sampling from discrete distributions and estimating the densities of high-quality and low-quality candidate points in the search domain [36]. However, TPE cannot model these densities efficiently until sufficient observations have been collected at the beginning. The disadvantages of the existing approaches pose challenges for the optimization of discrete black-box functions.
3. Problem description
This section first introduces the notations used and then presents the simulation-based bi-level DNDP model with budget constraints. The notations used in the proposed framework are defined in Table 1.
The DNDP can be considered a basic Stackelberg game in which the transportation network managers are the leaders and the network users, or followers, react to the leaders' decisions through route choices [3]. This interaction is commonly represented using a bi-level formulation, in which the upper-level objective is to find an optimal network design strategy that minimizes the total system cost subject to government budgets, and the lower level deals with the traffic assignment problem corresponding to the upper-level solution in a user-optimal manner. The upper level of the DNDP can be formulated as follows [3,14,37,38].
Upper Level
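A representative algebraic form of this upper level is sketched here; the per-lane construction cost \(c_a\), total budget \(B\), candidate link set \(\bar{A}\) and lane limit \(u_a^{\max}\) are assumed symbols standing in for the corresponding entries of Table 1:

\[
\min_{u}\; Z(u) = \sum_{a \in A} v_a(u)\, t_a\!\big(v_a(u)\big)
\]
\[
\text{s.t.}\quad \sum_{a \in \bar{A}} c_a u_a \le B, \qquad u_a \in \{0, 1, \dots, u_a^{\max}\}, \quad a \in \bar{A},
\]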
where va and ta(va) can be obtained by solving the simulation-based lower-level model. Constraint (2) ensures that the total network improvement cost does not exceed the available investment budget. Constraints (3) and (4) guarantee that the number of newly added lanes is a nonnegative integer and does not exceed the maximum number of lanes that can be accommodated on the potential links. Notably, most of the existing literature on DNDPs imposes a binary restriction on the decision variable ua, which denotes whether the candidate project link should be added without determining the number of lanes [9,21,23], while only a few studies focus on the number of added lanes [4]. Evidently, considering the number of lanes directly increases the difficulty of the problem. In this study, the upper-level model is assumed to make optimal decisions regarding the number of newly added lanes to provide clear construction guidance.
The success of a traffic network improvement project depends strongly on how its outcome is evaluated or, more precisely, on how the effect of the increased network capacity is predicted. Thus, it is important to accurately model the route choices of multimodal network users after the traffic network design strategy has been determined. Conventional analytical models obtain the resultant traffic pattern by solving an equilibrium assignment problem, in which the link travel time is formulated using Bureau of Public Roads (BPR) functions and the objective is to achieve minimal individual travel costs [4,39,40]. However, such analytical models unrealistically ignore the impact of signals and multimodal traffic in urban areas.
With the aid of simulation-based techniques, the lower level can construct an accurate function to establish a relationship between network user reactions and network design strategies. Transportation networks with signalized intersections and public transport modes can be easily established by exploiting the advantages of off-the-shelf simulation packages such as VISUM simulation software. The network flow pattern v(u) = (va(u), a∈A)T can be simulated using an advanced simulation package under approximate equilibrium conditions. To reflect the stochastic nature of travel and traffic, randomness is introduced into the simulation. Every simulation run involves sampling from probability distributions that account for uncertainty, such as the passenger waiting time for public transit. Thus, the simulated link flow pattern can be considered a stochastic variable, denoted by Φv(u) = (Φva(u), a∈A)T. Herein, Φva(u) stands for the equilibrium link flow assigned to link a∈A given the network design strategy u in the simulator, which can be expressed as follows.
Lower Level
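In a compact form consistent with the description below, and writing \(\mathbb{E}[\cdot]\) for the expectation operator:

\[
v(u) = \mathbb{E}\big[\Phi_v(u)\big] = \big(\mathbb{E}\big[\Phi_{v_a}(u)\big],\; a \in A\big)^{T},
\]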
where the link flow pattern is represented as the expectation E[⋅] of the stochastic network performance measures Φv(u), which are evaluated using the embedded traffic assignment module in the simulator.
Without loss of generality, actuated control strategies are utilized in the signal settings of added lanes, and the operational plans of public transit remain unchanged throughout the optimization process. The Monte Carlo method is adopted to determine the expected link flow pattern v(u), considering the stochasticity of the simulation runs. For a given network design project u, assume that r independent simulation evaluations, denoted Φv^1(u), Φv^2(u), …, Φv^r(u), are observed by changing the random seeds in the simulator. Then, v(u) can be approximated using the sample average as follows:
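In standard form, with \(\Phi_v^{\,i}(u)\) denoting the \(i\)-th simulation evaluation:

\[
v(u) \approx \bar{v}(u) = \frac{1}{r}\sum_{i=1}^{r} \Phi_v^{\,i}(u),
\]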
where the evaluation batch size r is held constant across all iterations. Once the output link flow pattern is obtained, the objective function can be evaluated at the upper level.
Consequently, a complete simulation-based bi-level framework was developed to effectively address the DNDP through iterative interaction between the upper and lower levels.
4. BO-B&B algorithm
In this section, an optimization solution framework combining Bayesian optimization (BO) and the branch-and-bound (B&B) algorithm is presented for simulation-based DNDPs.
4.1. BO-B&B overview
Although simulation-based models flexibly capture the network structure and conveniently solve the multimodal traffic assignment problem, the optimization process is more difficult than with analytical models because there is no explicit function linking the simulation input and output. To optimize the objective function with fewer evaluations, an approach that can approximate the implicit input-output function surface is required. Therefore, in the proposed solution framework, BO is employed to optimize black-box objective functions that are costly to evaluate. Two components constitute Bayesian optimization: a Bayesian statistical model serving as a surrogate of the objective function, here a nonparametric Gaussian process regression, and an acquisition function that is easy to solve [19]. Compared with other data-driven models, Bayesian optimization does not require a large amount of training data [41], while data-driven models in general are suited to handling complex features from multiple categories of datasets [42,43]. After an initial random sampling stage, Bayesian optimization allocates the remaining evaluations using both prior and posterior probability distributions and balances the tradeoff between exploring unexplored domains and exploiting promising ones.
However, in the conventional Bayesian optimization algorithm, all acquisition functions assume a continuous domain, so they return real-valued points for the next evaluation. For a discrete optimization problem, where only discrete inputs are allowed, incorporating the B&B algorithm provides an optimal integer solution of the acquisition function. The B&B algorithm iteratively builds a tree structure consisting of nodes connected by directed branches. For each node in the tree, a "partial solution" to the integer programming problem is identified. By performing these operations, the B&B algorithm ensures that the global optimum can be determined by recursively searching the tree of instances formed by the branching operation.
The following subsections provide an illustration of the key components of the proposed hybrid algorithm (BO-B&B), followed by the pseudocode and a step-by-step procedure description.
4.2. Gaussian process
Here, we explain how a Gaussian process is used to build the relationship between a set of sample points x and the unknown objective function, f(x). Rasmussen and Williams [44] provide a more detailed illustration.
By assessing any sample point xi using a predetermined mean function μ0, a mean vector can be constructed for the sample set. Generally, a constant value is selected as the mean function:
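In the simplest case, the same constant is used for every sample:

\[
\mu_0(x_i) = \mu_0, \qquad i = 1, \dots, n,
\]

so that the prior mean vector is \(\mu_0(X) = (\mu_0, \dots, \mu_0)^{T}\).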
A kernel function κ0 is introduced for each pair of samples (xi and xj), and a covariance matrix is built from the kernel function. Pairs of points that are close in the input space have larger kernel values, and hence a stronger positive correlation, than pairs that are far apart.
The radial basis function (RBF), also known as the Gaussian kernel, is one of the most commonly used kernels.
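One common parameterization of the RBF kernel, consistent with the parameters α and d described below, is:

\[
\kappa_0(x_i, x_j) = \alpha \exp\!\left(-\frac{\lVert x_i - x_j\rVert_2^{2}}{2 d^{2}}\right),
\]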
where ‖xi − xj‖2 is the Euclidean distance between the two samples (xi, xj), α is a variable parameter of the kernel, and d is the length scale of the kernel. Liu et al. [41] found that when the RBF kernel is used, severe overfitting occurs if the objective function is too complex and the model is overly flexible; this phenomenon is termed the deformation of GP regression. Because our simulation-based DNDP is a fully "black-box" problem with an unknown functional form, a more general kernel function, the Matérn kernel, which can avoid such deformations, is selected. The Matérn kernel is expressed as follows:
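In its standard form (up to the scale factor α), with the length scale d defined above:

\[
\kappa_0(x_i, x_j) = \frac{2^{1-\nu}}{\Gamma(\nu)}\left(\frac{\sqrt{2\nu}\,\lVert x_i - x_j\rVert_2}{d}\right)^{\!\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,\lVert x_i - x_j\rVert_2}{d}\right),
\]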
where Kν is the modified Bessel function of the second kind, Γ denotes the gamma function, and the nonnegative parameter ν controls the smoothness of the kernel. As ν→∞, the Matérn kernel converges to the RBF kernel.
The resultant prior distribution on f(X)=[f(x1),f(x2),...,f(xn)] becomes
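In standard Gaussian-process notation, writing \(\mu_0(X) = [\mu_0(x_1), \dots, \mu_0(x_n)]^{T}\) and \(K = [\kappa_0(x_i, x_j)]_{i,j=1}^{n}\), this prior can be written as:

\[
f(X) \sim \mathcal{N}\big(\mu_0(X),\, K\big).
\]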
When a new sample xn+1 is added to the set, the prior distribution is updated as follows:
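In standard form, with \(k = [\kappa_0(x_1, x_{n+1}), \dots, \kappa_0(x_n, x_{n+1})]^{T}\), the joint prior becomes:

\[
\begin{bmatrix} f(X) \\ f(x_{n+1}) \end{bmatrix}
\sim \mathcal{N}\!\left(
\begin{bmatrix} \mu_0(X) \\ \mu_0(x_{n+1}) \end{bmatrix},
\begin{bmatrix} K & k \\ k^{T} & \kappa_0(x_{n+1}, x_{n+1}) \end{bmatrix}
\right).
\]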
Based on Bayes' theorem, the posterior/conditional probability distribution of f(xn+1) is derived as follows:
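In standard Gaussian-process form, the posterior is Gaussian with closed-form mean and variance:

\[
f(x_{n+1}) \mid f(X) \sim \mathcal{N}\big(\mu_n(x_{n+1}),\, \sigma_n^{2}(x_{n+1})\big),
\]
\[
\mu_n(x_{n+1}) = \mu_0(x_{n+1}) + k^{T} K^{-1}\big(f(X) - \mu_0(X)\big), \qquad
\sigma_n^{2}(x_{n+1}) = \kappa_0(x_{n+1}, x_{n+1}) - k^{T} K^{-1} k.
\]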
4.3. Acquisition functions
To select the location of the next observation, an acquisition function is constructed using the posterior mean and covariance of f(xn+1) derived from Eqs (13) and (14). In general, acquisition functions are inexpensive to evaluate and simultaneously account for both "exploration" (the uncertainty associated with unexplored domains) and "exploitation" (the expected performance given by the posterior mean). The three most commonly used acquisition functions are 1) expected improvement (EI), 2) probability of improvement (PoI) and 3) the upper confidence bound (UCB) [45]. As multiple parameterized acquisition functions are available in the literature, it is often unclear which one should be employed [46]. Among them, the upper confidence bound (UCB) is arguably the most commonly used, because the trade-off between exploitation and exploration is straightforward and easy to balance [47]. Therefore, without loss of generality, we select UCB as the acquisition function in this study; its mathematical expression is as follows.
Upper confidence bound (UCB)
The UCB acquisition function combines the explicit exploitation component μ(x) and exploration component σ(x), and the mathematical expression of UCB for minimization problems is as follows:
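In a standard form for minimization, with Ψ denoting the acquisition function:

\[
\Psi_{\mathrm{UCB}}(x) = \mu(x) - \beta\,\sigma(x),
\]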
where β>0 is a tradeoff parameter for achieving a balance between high expected performance and high uncertainty.
Using the above acquisition function, the next observation xn+1 is chosen as the point with the smallest acquisition function value.
In contrast to the black-box objective function, these acquisition functions have deterministic functional forms that are relatively easy to optimize and allow derivatives to be evaluated. In the conventional BO algorithm, Eq (19) can be solved with a first-order or second-order optimization algorithm, such as a quasi-Newton approach [48]. However, BO then suggests only a continuous point for the next function evaluation, which is an invalid DNDP input. We therefore incorporate the B&B algorithm to solve the acquisition functions in BO, which ensures a discrete output. The B&B algorithm incorporated in BO is introduced in the next subsection.
4.4. Branch-and-bound algorithm embedded in BO
After the aforementioned Gaussian process and acquisition function are established, the problem to be solved is as follows:
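A representative form of this subproblem, reusing the assumed symbols \(c_a\), \(B\), \(\bar{A}\) and \(u_a^{\max}\) introduced for the upper level, is:

\[
\min_{u}\; \Psi(u) \quad \text{s.t.}\quad \sum_{a \in \bar{A}} c_a u_a \le B, \qquad u_a \in \{0, 1, \dots, u_a^{\max}\}, \quad a \in \bar{A}.
\]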
Denote the formulation (17)–(20) as the original problem P0 and its feasible region as Ω. Traditional B&B algorithms can be described as a search through a tree in which the original problem P0 is represented by the root node and each node corresponds to a subproblem. As NP-hard problems have an exponential number of feasible solutions, the instance tree may contain an exponential number of leaves. A bounding function g is associated with each node in the tree, and its value is referred to as the bound of the node. In particular, for leaves, the bound equals the value of the corresponding solution, while for internal nodes, the value is a lower bound for any solution in the subspace that the node represents. The following three conditions must be satisfied by the bounding function g:
ⅰ. g(Pi)⩽Ψ(Pi) for all nodes Pi in the tree;
ⅱ. g(Pi) = Ψ(Pi) for all leaves in the tree;
ⅲ. g(Pi)⩾g(Pj) if Pj is the father of Pi.
These conditions ensure that g incorporates the additional constraints imposed on each subproblem and that its bounds align with the objective function, coinciding with it at the leaves. During the search, the tree is built dynamically, initially containing only the root node. For many problems, an initial feasible solution can be generated with a heuristic and used as the incumbent (current best) solution. In each iteration, the B&B algorithm selects a node for exploration from the pool of live nodes, which correspond to the unexplored feasible subproblems. Using an eager strategy, branching is performed by adding constraints to the subproblem of the node and producing two or more children, thereby subdividing the subspace into smaller subspaces. The bound of each child node is calculated from the result of the corresponding subproblem optimization. If a node corresponds to a feasible solution, or its bound equals the value of an optimal solution, its value is compared with the incumbent, and the better solution is retained. If a subproblem has no feasible solution, or its bound shows that it cannot contain a solution better than the incumbent, the node is pruned. Otherwise, a better solution within the subproblem cannot be ruled out, and the node, together with its stored bound information, joins the pool of live subproblems.
Consequently, the B&B algorithm comprises three major elements: 1) a bounding function, 2) a node selection strategy and 3) a branching rule. Any successful implementation of the B&B algorithm relies heavily on the bounding function, as a poor bounding function cannot be compensated for by good selection or branching strategies. An ideal bounding function should produce a tight bound, ideally the value of the best feasible solution, for a given subproblem. To construct the bounding function, we relax the integer constraints of the original problem, thereby expanding the set of feasible solutions. Sequential quadratic programming, a widely used method for constrained nonlinear optimization, is used to solve the relaxed problems [49].
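To make the procedure concrete, the following minimal Python sketch illustrates a branch-and-bound search over the integer lane variables, with each node's bound obtained by solving the continuous relaxation with SLSQP (SciPy's sequential quadratic programming routine). The function acq stands for any acquisition function built from the fitted surrogate; the helper is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def branch_and_bound(acq, bounds, tol=1e-6):
    """Minimize acq(x) over integer points inside `bounds` (list of (lo, hi) pairs).

    Each node's bound is the continuous relaxation solved by SLSQP; for a
    non-convex acquisition this local solution is only an approximate bound.
    Linear budget constraints could be passed to `minimize` via `constraints=`.
    """
    best_x, best_val = None, np.inf
    live = [list(bounds)]                              # pool of live nodes (per-variable bounds)
    while live:
        node = live.pop()
        x0 = np.array([(lo + hi) / 2.0 for lo, hi in node])
        res = minimize(acq, x0, method="SLSQP", bounds=node)
        if not res.success or res.fun >= best_val:     # prune: relaxation failed or dominated
            continue
        frac = np.abs(res.x - np.round(res.x))
        if frac.max() < tol:                           # leaf: relaxed optimum is already integral
            x_int = np.round(res.x).astype(int)
            val = float(acq(x_int))
            if val < best_val:
                best_x, best_val = x_int, val
            continue
        i = int(np.argmax(frac))                       # branch on the most fractional variable
        lo, hi = node[i]
        split = np.floor(res.x[i])
        left, right = list(node), list(node)
        left[i], right[i] = (lo, split), (split + 1, hi)
        live.extend(child for child in (left, right) if child[i][0] <= child[i][1])
    return best_x, best_val
```

In this sketch, the node pool is explored depth-first via a simple stack; a best-first node selection strategy could be substituted without changing the overall structure.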
4.5. Pseudocode of BO-B&B algorithm
The pseudocode of the BO-B&B algorithm is as follows. Notably, the proposed method can solve both the specific DNDP considered here and generic discrete optimization problems.
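As a complement to the pseudocode, a minimal Python sketch of the loop is given below. It assumes the hypothetical branch_and_bound helper from Section 4.4 and a user-supplied evaluate(u) function that wraps the simulation run (e.g., a VISUM assignment) and returns the total system travel cost; the problem size and parameter values are illustrative only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

BOUNDS = [(0, 4)] * 5     # illustrative: five candidate projects, 0-4 lanes each
BUDGET = 10               # illustrative total budget (number of lanes)
BETA = 2.0                # UCB exploration weight

def bo_bnb(evaluate, n_init=10, n_iter=40, seed=0):
    """Sketch of the BO-B&B loop; `evaluate(u)` wraps the simulation run and
    `branch_and_bound` is the helper sketched in Section 4.4."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    while len(X) < n_init:                             # Step 2: random budget-feasible designs
        u = rng.integers(0, 5, size=len(BOUNDS))
        if u.sum() <= BUDGET:
            X.append(u)
            y.append(evaluate(u))                      # Steps 3-4: simulate and evaluate Z(u)
    X, y = np.array(X, dtype=float), np.array(y, dtype=float)

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                   # Step 5: GP surrogate of Z(u)

        def ucb(x):                                    # UCB acquisition for minimization
            mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
            return float(mu[0] - BETA * sigma[0])

        u_next, _ = branch_and_bound(ucb, BOUNDS)      # Step 6: integer acquisition optimum
        if u_next is None or u_next.sum() > BUDGET:    # simple feasibility guard (illustrative)
            continue
        X = np.vstack([X, u_next])                     # Step 7 loop: evaluate and update the GP
        y = np.append(y, evaluate(u_next))
    best = int(np.argmin(y))
    return X[best].astype(int), y[best]
```

A call such as bo_bnb(evaluate=run_visum_assignment) would return the best lane-addition vector found and its simulated total travel cost (the evaluation function name is hypothetical).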
4.6. Step-by-step solution framework
This subsection details the solution process for simulation-based DNDPs using BO-B&B.
Step 1. Initialization
A multimodal transport network is established in the simulation package, and the loop counter is set as m=0.
Step 2. Sample construction
A small sample set of network design strategies, U=[u1, u2, ..., un], is generated. The samples are selected using a random sampling method.
Step 3. Multimodal traffic assignment in simulator
For each ui, the added links are constructed in the selected macroscopic simulator (e.g., VISUM). The private vehicle and public transit assignment procedures are then executed under approximate equilibrium conditions in the simulator, generating the link flow pattern and total network travel time under the network design strategy ui.
Step 4. Objective function value calculation
Given the outputs of the simulation in Step 3, we can evaluate the function Z(U)=[Z(u1),Z(u2),...,Z(un)] using Eq (1).
Step 5. Gaussian process regression
With the observed samples, the posterior probability distribution of Z(un+1) is modeled using GP. A surrogate model mapping the relationship between the discrete network design u and total travel time Z(u) is established.
Step 6. Next sample point determination
With the posterior mean and covariance of Z(un+1) calculated in Step 5, we construct the acquisition function Ψ(un+1). Using the B&B algorithm, we find the optimal un+1, i.e., the point with the best potential to optimize the acquisition function. The selected un+1 is stored, and the iteration counter is updated as n=n+1.
Step 7. Check stop criterion
The procedure terminates when both of the following criteria are satisfied: 1) the number of iterations n exceeds the maximum number of iterations N; and 2) the relative gap δ between two successive iterations is less than 10−6. When both conditions hold, the iteration stops, and the network design strategy u that minimizes the objective function is returned. Otherwise, the algorithm returns to Step 3, and un+1 is evaluated in the simulator. The newly determined sample point is then added to the sample set for GP updating.
5. Numerical experiment
The Sioux Falls network, which consists of 24 nodes and 76 links, is shown in Figure 1. This network has been widely studied in transportation experiments. The source traffic demand data, taken from Wang et al. [5], are listed in Table A1. Since multimodal traffic is now ubiquitous in urban areas, the network is extended to a realistic multimodal environment that accommodates autos, buses, rail and combined modes.
Five projects were selected as potential lane-addition links, and detailed information is shown in Table 2. Candidate projects were assumed to be two-way links, and the decision variables correspond to the number of new lanes. In actual complex network scenarios or under restricted land planning, the proposed model and solution algorithm can be applied both when candidate links already exist and when they do not. In existing studies on DNDPs, the feasible region is generally very small. To better present the algorithm performance, we provide detailed network design guidance in which the transport authority considers adding 0, 1, 2, 3 or 4 lanes to these links. This setting results in up to 5⁵ = 3,125 feasible design solutions. Without loss of generality, actuated control strategies were utilized for the signal settings of both existing links and added lanes, and the operational plans of public transit remain unchanged throughout the optimization process. The cost of adding one new lane was set as one unit of the budget, and a total investment budget of 10 units was assumed. In other words, no more than 10 lanes can be added to the network to improve capacity.
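For illustration, the size of the budget-feasible design space can be checked with a short enumeration; this is a sketch assuming the five-project, 0-4 lane, 10-lane-budget setting described above.

```python
from itertools import product

# Enumerate lane-addition vectors (0-4 lanes per candidate project) and keep
# those whose total number of added lanes respects the 10-lane budget.
designs = product(range(5), repeat=5)
feasible = [u for u in designs if sum(u) <= 10]
print(f"{len(feasible)} of {5 ** 5} candidate designs satisfy the budget")
```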
The proposed BO-B&B method was compared with existing machine learning methods for discrete optimization, namely BO with naïve rounding (naïve BO), sequential model-based optimization for general algorithm configuration (SMAC), Tree-Parzen estimators (TPE) and random search (RS). The first three baselines are described in Section 2.2. Figure 2 shows the iteration process of the proposed BO-B&B algorithm and the benchmarks. The results show that BO-B&B generally obtained a better solution in much less computational time than the other algorithms.
Table 3 presents a detailed analysis of algorithm performance. The reported CPU times are averages over three runs. It can be observed that BO-B&B, TPE and SMAC all reached the optimal solution, but BO-B&B required fewer iterations. Within the limited number of iterations, naïve BO and RS failed to find the optimal solution. In terms of reducing the number of iterations, the proposed BO-B&B exhibits a clear advantage. The efficiency of the optimization process is strongly influenced by the number of simulation runs, because each simulation run takes approximately 3 s, which is much longer than the run-time of the algorithm itself. In other words, the simulations dominate the computation time as their number increases. BO-B&B spends more time on its B&B operations than the other algorithms; however, compared with the simulations, this extra computation is almost negligible. It should be noted that no particular optimization algorithm is likely to outperform all others in every case. In particular, BO-based algorithms are not well suited to problems with more than roughly 20 dimensions because of the complex matrix operations involved.
The optimal discrete network design obtained for the five potential projects is (3, 0, 4, 3, 0), implying that projects 1, 3 and 4 should add three, four and three lanes, respectively. The optimized discrete network design yields a 1907 h reduction in the total network travel cost. Assuming an average value of time of $15/h, following Jayasooriya et al. [50], the network improvement is estimated to save at least $10.44 million annually, a significant operational saving. Consequently, the optimal discrete network design scheme can be considered an option for improving the performance of multimodal urban transportation networks.
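As a rough check of the annual figure reported above (assuming the 1907 h saving accrues daily):

\[
1907\ \text{h/day} \times \$15/\text{h} \times 365\ \text{days/year} \approx \$10.44\ \text{million per year}.
\]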
6. Conclusions
We reformulated the bi-level DNDP using a simulation-based method. The objective of the upper level of the proposed model is to optimize the network design for minimal total system cost subject to budget constraints. The lower-level traffic assignment in a multimodal traffic network is performed by simulation. We used VISUM, an advanced simulation package, to evaluate each discrete network design solution and obtain the system performance. To solve the simulation-based bi-level optimization problem, an algorithm called BO-B&B, which combines Bayesian optimization and branch-and-bound, is proposed. This combination provides a suitable approach for solving discrete optimization problems with BO-based methods. Furthermore, Bayesian optimization exploits previously observed values to locate promising candidates within few iterations, making it suitable for problems that are expensive to evaluate and lack known special structure such as concavity or linearity.
According to the results, BO-B&B produced systematic savings in computational budget compared with the other machine-learning-based techniques. Using BO-B&B, the number of simulation evaluations and iterations required to determine the global optimum of a simulation-based optimization problem was significantly reduced. Using simulation-based methods, complex transport networks with combined modes and signal controls can be developed easily. In this regard, the proposed model may be a useful tool for network managers dealing with a variety of discrete decision variables in real-world network development plans.
Based on our current research, the following investigations will be performed: First, the BO-B&B algorithm will be further modified and extended to high-dimensional problems using parallel computing. Second, for some key areas in the network, mesoscopic and microscopic simulation models will be combined to refine the results.
Acknowledgments
This study is supported by the Distinguished Young Scholar Project (No. 71922007) and Key Project (No. 52131203) of the National Natural Science Foundation of China.
Conflict of interest
The authors declare there is no conflict of interest.
Appendix