
A clustering based mate selection for evolutionary optimization

  • Published: 01 January 2017
  • MSC: 78M32

  • Mate selection plays a key role in the natural evolution process. Although a variety of mating strategies have been proposed in the community of evolutionary computation, the importance of mate selection has largely been ignored. In this paper, we propose a clustering based mate selection (CMS) strategy for evolutionary algorithms (EAs). In CMS, the population is partitioned into clusters, and only solutions in the same cluster are chosen for offspring reproduction. Instead of running a whole new clustering process in each EA generation, the clustering iteration process is combined with the evolution iteration process. This combination benefits EAs by saving the cost of discovering the population structure. To demonstrate the idea, a CMS utilizing the k-means clustering method is proposed and applied to a state-of-the-art EA. The experimental results show that the CMS strategy is promising for improving the performance of the EA.

    Citation: Jinyuan Zhang, Aimin Zhou, Guixu Zhang, Hu Zhang. 2017: A clustering based mate selection for evolutionary optimization, Big Data and Information Analytics, 2(1): 77-85. doi: 10.3934/bdia.2017010




    1. Introduction

    In this paper, we consider the following continuous global optimization problem.

    $$\min f(x) \quad \text{s.t.} \quad x \in [a_i, b_i]^n \tag{1}$$

    where $x=(x_1,x_2,\ldots,x_n)\in\mathbb{R}^n$ is a decision variable vector; $[a_i,b_i]^n$ defines the feasible region of the decision space, with $a_i\le x_i\le b_i$ for $i=1,2,\ldots,n$; and $f:\mathbb{R}^n\to\mathbb{R}$ is the objective function.

    The evolutionary algorithm (EA) is a type of heuristic optimization method inspired by the natural evolution process [1], and it has become a major approach to tackling (1). The major components of a general EA include a reproduction operator and a selection operator. A key operation in natural evolution, called mate selection, chooses mating pairs or groups for breeding and plays a central role in sexual propagation. In EAs, a proper mate selection can also control population convergence and diversity efficiently [13,6,12]. In the last decades, a number of mating strategies have been proposed [16], including random mating, roulette wheel selection, truncation selection, tournament selection, gender based selection [7,15,19], niche based selection [2], dissortative mating [3,4], and other methods [5,14,18]. Despite this variety, mate selection has not attracted much attention in the community of evolutionary computation [16]. The major reasons might be that (a) most existing mating strategies need problem specific control parameters or are computationally expensive, and (b) some widely used EAs work well by randomly choosing mating pairs. In this paper, we shall demonstrate that existing EAs can be improved by properly designed mating strategies.

    Statistical and machine learning (SML) techniques aim to extract information from data sets and transform it into an understandable pattern or structure for further use [8]. Arguably, in EAs an individual can be regarded as a training example and its fitness value as a label; from the viewpoint of SML, the population of an EA forms a training data set. Therefore, SML techniques can naturally be applied to EAs to extract population information and guide the search. Some algorithms, such as estimation of distribution algorithms [10] and surrogate assisted evolutionary algorithms [9], follow this direction. However, SML techniques are generally computationally expensive compared to the rest of an EA, which limits their usage. How to use SML techniques in EAs more efficiently is still an open question.

    In this paper, we present a new way to combine SML methods with EAs. The basic idea is to alternate between an SML training step and an EA evolving step. In the SML step, the current population is used to train a model that captures the population structure; in the EA step, the structure information extracted by the SML step is used to guide the search. Combining the SML iteration process with the EA iteration process can find and refine the population structure information and thus save the SML cost. In multi-objective evolutionary optimization, there is some work based on a similar idea [20]; for scalar-objective optimization, however, this strategy is still new. Based on this idea, this paper proposes a clustering based mate selection (CMS) operator for EAs. In CMS, the population is partitioned into classes in each generation, and only the solutions in the same class are allowed to mate with each other. A CMS utilizing the k-means [11] clustering method is proposed and applied to a state-of-the-art EA to show its advantages.

    The rest of the paper is organized as follows. Section 2 presents the proposed CMS strategy and introduces in detail an EA integrated with CMS. Section 3 compares the proposed CMS strategy with some other mating strategies and studies the influence of the control parameters. Finally, the paper is concluded in Section 4.


    2. Clustering based mate selection

    A major challenge in applying SML techniques to EAs is their high computational cost. This section introduces a clustering based mate selection (CMS) to address this challenge. The basic idea is to combine an EA with an iterative clustering method. Take the k-means method as an example. In each generation (iteration) of the combined process, the clustering step uses the EA population to assign points and update the cluster centers; then, based on the resulting population partition, the EA chooses parents from the same cluster to generate new trial solutions. It should be noted that the CMS assisted EA does not run a complete clustering method in each generation. Instead, it combines the clustering iteration with the EA iteration, so only one clustering iteration is performed per generation. In effect, the clustering iterations are spread sequentially along the EA run, which saves computational cost; a minimal sketch of this interleaving follows. The major components of an iterative clustering method and an EA are combined in the CMS assisted EA (CMS-EA for short). A restart checking component is added to reinitialize the clustering, mainly to prevent the clustering from getting stuck in local optima.
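    The following minimal sketch (in Python with NumPy; not the authors' code, and all names are illustrative) shows what a single interleaved clustering iteration might look like: each EA generation performs exactly one assignment step and one center-update step of k-means.

```python
import numpy as np

def kmeans_step(pop, centers):
    """One k-means iteration: assign every solution to its nearest
    center, then recompute each center as the mean of its cluster."""
    # dist[i, j] = Euclidean distance from solution i to center j
    dist = np.linalg.norm(pop[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    for k in range(len(centers)):
        members = pop[labels == k]
        if len(members) > 0:          # keep the old center if its cluster is empty
            centers[k] = members.mean(axis=0)
    return labels, centers
```

    Calling kmeans_step once per generation lets the cluster structure be refined gradually as the population evolves, instead of rerunning a full k-means to convergence every time.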

    In this paper, we use the CMS strategy to improve the performance of the composite differential evolution (CoDE) algorithm [21]. In CoDE, each solution produces three candidate offspring solutions by using three reproduction operators, each with a randomly selected control parameter setting, and the best candidate is chosen as the offspring solution for updating. More details of CoDE can be found in [21]. The k-means clustering method is used to partition the population. In the following, we give the framework of the proposed approach, named CMS-CoDE.

    1 Randomly initialize a population $P=\{x^1,x^2,\ldots,x^N\}$, and set the generation counter $g=0$.
    2 If $\operatorname{mod}(g,G)=0$, initialize the cluster centers $m^1,m^2,\ldots,m^K$, and set $K$ empty clusters $C_1,C_2,\ldots,C_K$.
    3 For each solution $x^i$ ($i=1,2,\ldots,N$), assign it to the $k$-th cluster $C_k$, where
    $k=\arg\min_{j=1,2,\ldots,K} dis(x^i,m^j)$
      and $dis(a,b)$ is the Euclidean distance between $a$ and $b$.
    4 For each cluster $C_k$ ($k=1,2,\ldots,K$), update its center as
    $m^k=\frac{1}{|C_k|}\sum_{x\in C_k} x$.
    5 For each solution $x\in C_k$ ($k=1,2,\ldots,K$):
      5.1 Generate trial solutions $u^1$, $u^2$, and $u^3$ by using parents from $C_k$.
      5.2 Set $y=\arg\min_{u\in\{u^1,u^2,u^3\}} f(u)$.
      5.3 Replace $x$ by $y$ if $f(y)<f(x)$.
    6 If the stop condition is not satisfied, set $g=g+1$ and go to Step 2; otherwise, terminate and return the best solution found so far.

    We make the following comments on the above algorithm; a consolidated code sketch is given after them.

    • In Step 2, the clustering process is re-initialized every $G$ generations. The purpose is to prevent the clustering process from getting trapped in local optima.

    • In Step 5.1, CoDE generates three candidate solutions for each solution $x$ by

    $$u^1_j=\begin{cases}x_{r_1,j}+F\,(x_{r_2,j}-x_{r_3,j}) & \text{if } rand<Cr \text{ or } j=j_{rnd}\\ x_j & \text{otherwise}\end{cases}$$
    $$u^2_j=\begin{cases}x_{r_1,j}+F\,(x_{r_2,j}-x_{r_3,j})+F\,(x_{r_4,j}-x_{r_5,j}) & \text{if } rand<Cr \text{ or } j=j_{rnd}\\ x_j & \text{otherwise}\end{cases}$$
    $$u^3_j=x_j+rand\,(x_{r_1,j}-x_j)+F\,(x_{r_2,j}-x_{r_3,j})$$

    where $j=1,2,\ldots,n$; $j_{rnd}$ is a random index between $1$ and $n$; $rand$ returns a random number in $[0.0,1.0]$; $x_{r_1},\ldots,x_{r_5}$ are randomly selected parents from the same cluster as $x$; and $F$ and $Cr$ are two control parameters whose values are randomly selected from the settings $[F=1.0, Cr=0.1]$, $[F=1.0, Cr=0.9]$, and $[F=0.8, Cr=0.2]$.

    • In Step 6, CoDE terminates when the number of function evaluations exceeds a given threshold.

    • In the reproduction, each cluster is required to contain at least 5 solutions. If a cluster contains fewer than 5 solutions, the parents are selected from the whole population instead.
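    To make the framework concrete, here is a compact Python sketch of Steps 1-6, reusing kmeans_step from the sketch earlier in this section. It is one reading of the pseudo-code under stated assumptions, not the authors' implementation: the names code_trials and cms_code are illustrative, bounds are handled by a simple clip, and, as a simplification, the three operators share one parent draw per solution.

```python
import numpy as np

SETTINGS = [(1.0, 0.1), (1.0, 0.9), (0.8, 0.2)]    # candidate [F, Cr] pairs from [21]

def code_trials(x, mates, lo, hi, rng):
    """Generate the three CoDE trial vectors for solution x (Step 5.1),
    with parents drawn from x's cluster; bounds are handled by clipping."""
    n = len(x)
    idx = rng.choice(len(mates), 5, replace=False)  # assumes >= 5 mates (see comment above)
    r1, r2, r3, r4, r5 = mates[idx]

    def bin_cross(v, Cr):                           # binomial crossover with index j_rnd
        mask = rng.random(n) < Cr
        mask[rng.integers(n)] = True                # keep at least one mutant gene
        return np.where(mask, v, x)

    F, Cr = SETTINGS[rng.integers(3)]
    u1 = bin_cross(r1 + F * (r2 - r3), Cr)                    # rand/1/bin
    F, Cr = SETTINGS[rng.integers(3)]
    u2 = bin_cross(r1 + F * (r2 - r3) + F * (r4 - r5), Cr)    # rand/2/bin
    F, _ = SETTINGS[rng.integers(3)]
    u3 = x + rng.random(n) * (r1 - x) + F * (r2 - r3)         # current-to-rand/1
    return [np.clip(u, lo, hi) for u in (u1, u2, u3)]

def cms_code(f, lo, hi, N=100, K=3, G=10, max_fes=300_000, seed=0):
    """A sketch of the CMS-CoDE loop (Steps 1-6)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (N, len(lo)))         # Step 1: random initialization
    fit = np.apply_along_axis(f, 1, pop)
    fes, g, centers = N, 0, None
    while fes < max_fes:                            # Step 6: FES budget
        if g % G == 0:                              # Step 2: (re)start clustering
            centers = pop[rng.choice(N, K, replace=False)].copy()
        labels, centers = kmeans_step(pop, centers) # Steps 3-4: one k-means iteration
        for i in range(N):                          # Step 5
            mates = pop[labels == labels[i]]
            if len(mates) < 5:                      # fewer than 5 in the cluster:
                mates = pop                         # select from the whole population
            trials = code_trials(pop[i], mates, lo, hi, rng)
            ys = [f(u) for u in trials]
            fes += 3
            j = int(np.argmin(ys))
            if ys[j] < fit[i]:                      # Steps 5.2-5.3: greedy replacement
                pop[i], fit[i] = trials[j], ys[j]
        g += 1
    return pop[fit.argmin()], fit.min()             # best solution found so far
```

    Under this sketch, for example, cms_code(lambda x: np.sum(x**2), lo=np.full(30, -100.0), hi=np.full(30, 100.0)) would minimize the sphere function on $[-100,100]^{30}$.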


    3. Comparison with other mating strategies

    In this section, we compare the proposed CMS strategy with the following related strategies; a minimal sketch of the NNS pool construction is given after the list.
    • Random mating strategy (RND): the parent solutions are randomly chosen from the whole population. The original CoDE algorithm actually uses this strategy.
    • Nearest neighbor strategy (NNS): for a solution $x$, the closest $\frac{N}{K}$ solutions form a mating pool, and the parents are randomly chosen from this pool, where $N$ is the population size and $K$ is the number of niches into which the population is divided.
    • Batch clustering based strategy (BCS): like CMS, this strategy uses the k-means method to partition the population. The difference is that the whole clustering process is run at the beginning of each generation.
    All the strategies are incorporated into the CoDE algorithm in the same way as in CMS-CoDE.
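    The following is one plausible reading of the NNS pool construction (Python, illustrative names; the pool size $\lfloor N/K\rfloor$ follows the description above, and each solution's own index lands in its pool because its self-distance is zero):

```python
import numpy as np

def nns_pools(pop, K):
    """For each solution, return the indices of its N/K nearest
    solutions (Euclidean distance), which form its mating pool."""
    N = len(pop)
    dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=2)  # O(N^2 n)
    return np.argsort(dist, axis=1)[:, : max(1, N // K)]              # nearest N/K per row
```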

    To assess the performance of the compared strategies, the first 20 instances from the CEC 2005 test suite [17] are used. The parameters in the experiments are as follows: the dimension of the instances is $n=30$ for all 20 problems; each algorithm is run 30 times independently, and each run stops after a maximum of 300,000 function evaluations (FES); the population size is $N=100$ for all algorithms; the number of clusters is $K=3$ in the k-means clustering; and the k-means clustering restarts every $G=10$ generations. For a fair comparison, Wilcoxon's rank sum test at a 0.05 significance level is conducted, and $-$, $+$, and $\approx$ in the tables indicate that the performance of the corresponding method is better than, worse than, or similar to that of CMS, respectively. All the algorithms are executed on the same workstation.


    3.1. Experimental results

    The experimental results are given in Table 1, and the population partitions in a typical run of the BCS and CMS strategies with CoDE on two instances are plotted in Fig. 1.

    Table 1. The mean results of the compared methods over 30 independent runs on 20 test instances of 30 variables with 300,000 FES.
          RND          NNS          BCS          CMS
    F1    3.21e-08 +   0.00e+00 ≈   0.00e+00 ≈   0.00e+00
    F2    3.14e-01 +   8.17e-04 +   2.71e-05 −   5.66e-05
    F3    1.21e+05 −   2.13e+05 −   2.22e+05 ≈   2.64e+05
    F4    1.08e+01 +   3.74e+00 +   1.87e-01 −   2.72e-01
    F5    3.90e+02 −   1.00e+03 ≈   6.85e+02 −   8.69e+02
    F6    2.65e+01 +   7.19e+01 +   4.55e+01 ≈   3.80e+01
    F7    4.70e+03 +   4.70e+03 +   4.70e+03 ≈   4.70e+03
    F8    2.09e+01 +   2.08e+01 +   2.02e+01 −   2.03e+01
    F9    1.67e+01 +   5.15e-06 −   4.65e+00 −   7.31e+00
    F10   1.63e+02 +   3.52e+01 −   4.64e+01 +   4.11e+01
    F11   3.37e+01 +   9.85e+00 −   1.33e+01 ≈   1.35e+01
    F12   1.88e+05 +   1.54e+05 +   7.16e+04 ≈   7.90e+04
    F13   8.18e+00 +   2.65e+00 −   2.58e+00 −   2.91e+00
    F14   1.33e+01 +   1.21e+01 −   1.24e+01 ≈   1.23e+01
    F15   6.40e+02 +   5.14e+02 +   4.71e+02 ≈   4.78e+02
    F16   4.25e+02 +   3.00e+02 −   3.10e+02 ≈   3.15e+02
    F17   4.66e+02 +   3.03e+02 −   3.16e+02 ≈   3.19e+02
    F18   9.26e+02 −   9.26e+02 ≈   9.24e+02 ≈   9.24e+02
    F19   9.26e+02 ≈   9.26e+02 −   9.25e+02 ≈   9.26e+02
    F20   9.26e+02 ≈   9.24e+02 −   9.25e+02 ≈   9.25e+02
    +     15           7            1
    −     3            10           6
    ≈     2            3            13
    Figure 1. Population partition of a typical run for BCS and CMS strategies with CoDE on (a) F2 and (b) F3.

    RND vs. CMS: From Table 1, we can see that CMS-CoDE performs better than RND-CoDE on 15 test instances and worse on 3. This suggests that restricting the mating parents to similar individuals can significantly improve the algorithm's performance.

    NNS vs. CMS: It is clear from Table 1 that CMS-CoDE outperforms NNS-CoDE on 7 instances and is outperformed by NNS-CoDE on 10 instances. In NNS, the parents are the solutions closest to, and thus most similar to, the current solution, which may help the population converge to optima quickly, especially when the problem has no variable dependencies. In CMS, the parents are likely to be close to the current solution, but there is still some probability that they are far away from each other, which may help to preserve population diversity. This might explain the performance differences between NNS and CMS. Although the results are comparable, the next section shows that CMS-CoDE has a lower theoretical computational complexity than NNS-CoDE.

    BCS vs. CMS: It is somewhat surprising that BCS-CoDE performs slightly better than CMS-CoDE. Table 1 shows that there is not much difference between the results obtained by the two algorithms on 13 out of 20 test instances. The reason might be that the clustering results of k-means depend heavily on the initial cluster centers, and k-means is very likely to converge to local optima. The mis-clustering in k-means therefore introduces some randomness into the population and prevents premature convergence. Although BCS-CoDE is slightly superior to CMS-CoDE, it has a higher computational complexity according to the analysis in the next section.

    With respect to the population partitions of the BCS and CMS strategies on F2 and F3, it is easy to see from Fig. 1 that the partitions produced by CMS-CoDE change only slightly over nine consecutive generations, whereas BCS-CoDE presents quite different partitions in each generation. The reason might be that the clustering in CMS-CoDE iterates only once per generation, while that in BCS-CoDE iterates many times. A stable population partition should be more helpful for generating high-quality solutions.


    3.2. Time complexity

    The additional time complexity brought by the mating strategies is a major concern. The time complexities of the four strategies are as follows. RND: $O(N)$. NNS: computing the distance between every pair of solutions costs $O(N^2 n)$; choosing the closest $\frac{N}{K}$ solutions for each solution costs $O(N\cdot N\cdot\frac{N}{K})=O(\frac{1}{K}N^3)$; the total time complexity is therefore $O(N^2 n+\frac{1}{K}N^3)$. BCS: in the k-means assignment step, assigning the points to clusters costs $O(NKn)$; the update step costs $O((|C_1|+|C_2|+\cdots+|C_K|)\,n)=O(Nn)$; each solution randomly selects at most 5 parents from its cluster, which costs $O(N)$; supposing k-means runs for $L$ iterations, the total time complexity is $O((K+1)NnL+N)$. CMS: from the above analysis, since only one clustering iteration is performed per generation, the time complexity is $O((K+1)Nn+N)$.

    It is reasonable to assume that $K\ll N$ and $L\ll N$. The time complexities of NNS and BCS are thus much higher than those of RND and CMS. We can also see that although the time complexity of CMS is higher than that of RND, it is still linear in $N$.
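    For illustration, plugging in the settings used in the experiments ($N=100$, $n=30$, $K=3$) and assuming, say, $L=10$ k-means iterations for BCS gives the following rough per-generation operation counts (our arithmetic, not from the paper):

    $$\begin{aligned}
    \text{RND:}&\quad O(N)=10^2,\\
    \text{NNS:}&\quad O\Big(N^2n+\tfrac{1}{K}N^3\Big)\approx 3\times10^5+3.3\times10^5\approx 6.3\times10^5,\\
    \text{BCS:}&\quad O\big((K+1)NnL+N\big)\approx 1.2\times10^5,\\
    \text{CMS:}&\quad O\big((K+1)Nn+N\big)\approx 1.2\times10^4.
    \end{aligned}$$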

    We also record the CPU run time in Table 2, although it depends on the algorithm implementation. Comparing the RND and CMS columns clearly shows that the additional cost consumed by CMS is small; on some instances, the CPU time of CMS is even slightly lower than that of RND. On all instances, BCS needs more time than CMS, which is consistent with the above analysis.

    Table 2. The average CPU time (seconds) used by the four algorithms on F1-F20 with 300,000 function evaluations over 30 runs.
           RND       NNS       BCS       CMS
    F1    11.68     12.84     17.61     12.99
    F2    12.21     12.70     15.02     12.82
    F3    12.93     12.83     15.11     13.09
    F4    12.99     13.29     15.39     13.31
    F5    16.07     15.18     17.08     15.09
    F6    11.72     12.76    500.28     12.71
    F7    14.54     15.73     17.33     15.58
    F8    16.89     17.65     16.89     14.39
    F9    14.98     14.46    126.06     12.82
    F10   15.06     13.14     13.98     13.09
    F11   61.29     58.76     58.64     57.58
    F12   46.49     47.59     44.44     42.98
    F13   13.06     13.15     14.53     13.00
    F14   17.08     16.40     16.32     14.39
    F15  113.89    115.58    180.81    115.32
    F16  119.73    116.78    171.78    116.16
    F17  120.10    117.87    452.28    117.31
    F18  123.71    121.59    121.74    120.65
    F19  149.03    152.96    152.79    147.89
    F20  220.64    217.06    218.65    215.53

    3.3. Influence of control parameters

    There are two control parameters in CMS: the number of clusters $K$, and the number of generations $G$ after which the clustering process is restarted. This section studies the influence of these two parameters. Two unimodal functions (F2 and F3), two multimodal functions (F7 and F8), and two hybrid composition functions (F16 and F17) are used to assess the performance. The population size is $N=100$, the cluster number is set to $K=2,4,6$, or $8$, and the restart interval is set to $G=5,10,20$, or $30$. The other parameters are the same as in the previous section.

    Fig. 2 plots the error bars of the results obtained by CMS-CoDE with different combinations of the control parameters over 30 runs on the 6 instances. On F2, it clearly shows that the performance degrades as $K$ and $G$ increase. The reason is that F2 is a unimodal problem for which the best cluster number is 1, and k-means fails to capture the population structure with the given control parameters. On the contrary, on F8 the performance improves as $K$ and $G$ increase. The reason is that F8 is a multimodal problem, and a larger number of clusters may lead to a better population partition. On F7, CMS-CoDE obtains very stable results, which indicates that CMS is not sensitive to the control parameters on this problem. On F3, F16, and F17, the performance curves are not stable, and the standard deviations are large for several parameter combinations. We can also see from Fig. 2 that the performance is more sensitive to $K$ than to $G$. A moderate number of clusters is suitable.

    Figure 2. The error bars of the results obtained by CMS-CoDE with different combinations of control parameters (K, G) over 30 runs on some test instances.

    4. Conclusions

    In this paper, we proposed a strategy to integrate statistical and machine learning (SML) techniques into evolutionary algorithms (EAs) to guide the search efficiently. The idea is to combine the SML iteration and the EA iteration: the learning process and the optimization process are performed alternately. As an example, a general clustering based mate selection (CMS) assisted EA framework was proposed. In CMS, the population is partitioned into classes, and only parents in the same class are allowed to mate for offspring reproduction. More specifically, a CMS utilizing the k-means clustering technique was designed and integrated into a state-of-the-art EA. The experimental results suggested that CMS can improve the performance of existing EAs. The time complexity analysis also showed that the proposed approach brings little additional cost to the EA it improves.

    It should be noted that the current work is preliminary, and a variety of directions are worth exploring: the combination of CMS and EAs could be improved, how to organize the data for model building should be studied, and it is worth applying the CMS strategy to multi-objective optimization problems.


    References

    [1] Bäck T., Fogel D. B., Michalewicz Z., et al. (1997) Handbook of Evolutionary Computation. Oxford University Press.
    [2] Deb K., Goldberg D. E. (1989) An investigation of niche and species formation in genetic function optimization. In Proceedings of the 3rd International Conference on Genetic Algorithms, Morgan Kaufmann Publishers Inc., 42–50.
    [3] Eshelman L. J., Schaffer J. D. (1991) Preventing premature convergence in genetic algorithms by preventing incest. In International Conference on Genetic Algorithms, 115–122.
    [4] Fernandes C. M., Rosa A. C. (2008) Evolutionary algorithms with dissortative mating on static and dynamic environments. Advances in Evolutionary Algorithms, 181–206.
    [5] Galán S. F., Mengshoel O. J., Pinter R. (2013) A novel mating approach for genetic algorithms. Evolutionary Computation 21: 197–229.
    [6] Gog A., Chira C., Dumitrescu D., Zaharie D. (2008) Analysis of some mating and collaboration strategies in evolutionary algorithms. In 10th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, IEEE, 538–542. doi: 10.1109/SYNASC.2008.87
    [7] Goh K. S., Lim A., Rodrigues B. (2003) Sexual selection for genetic algorithms. Artificial Intelligence Review 19: 123–152.
    [8] Hastie T., Tibshirani R., Friedman J. (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second edition, Springer Series in Statistics, Springer, New York. doi: 10.1007/978-0-387-84858-7
    [9] Jin Y. (2011) Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation 1: 61–70. doi: 10.1016/j.swevo.2011.05.001
    [10] Larranaga P., Lozano J. A. (2002) Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer Academic Publishers. doi: 10.1007/978-1-4615-1539-5
    [11] MacQueen J. B. (1967) Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, 281–297.
    [12] Ochoa G., Mädler-Kron C., Rodriguez R., Jaffe K. (2005) Assortative mating in genetic algorithms for dynamic problems. In Applications of Evolutionary Computing, Springer, 617–622. doi: 10.1007/978-3-540-32003-6_65
    [13] Quirino T. S. (2012) Improving Search in Genetic Algorithms Through Instinct-Based Mating Strategies. Ph.D. dissertation, The University of Miami.
    [14] Quirino T., Kubat M., Bryan N. J. (2010) Instinct-based mating in genetic algorithms applied to the tuning of 1-nn classifiers. IEEE Transactions on Knowledge and Data Engineering 22: 1724–1737. doi: 10.1109/TKDE.2009.211
    [15] Sanchez-Velazco J., Bullinaria J. A. (2003) Sexual selection with competitive/co-operative operators for genetic algorithms. In Neural Networks and Computational Intelligence (NCI), ACTA Press, 191–196.
    [16] Sivaraj R., Ravichandran T. (2011) A review of selection methods in genetic algorithm. International Journal of Engineering Science and Technology (IJEST) 3: 3792–3797.
    [17] Suganthan P. N., Hansen N., Liang J. J., Deb K., Chen Y. P., Auger A., Tiwari S. (2005) Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. Tech. rep., Nanyang Technological University, Singapore, and Kanpur Genetic Algorithms Laboratory, IIT Kanpur.
    [18] Ting C.-K., Li S.-T., Lee C. (2003) On the harmonious mating strategy through tabu search. Information Sciences 156: 189–214. doi: 10.1016/S0020-0255(03)00176-2
    [19] Wagner S., Affenzeller M. (2005) SexualGA: Gender-specific selection for genetic algorithms. In Proceedings of the 9th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI), 4: 76–81.
    [20] Wang R., Fleming P. J., Purshouse R. C. (2014) General framework for localised multi-objective evolutionary algorithms. Information Sciences 258: 29–53. doi: 10.1016/j.ins.2013.08.049
    [21] Wang Y., Cai Z., Zhang Q. (2011) Differential evolution with composite trial vector generation strategies and control parameters. IEEE Transactions on Evolutionary Computation 15: 55–66. doi: 10.1109/TEVC.2010.2087271
  • © 2017 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)