
This paper deals with a non-autonomous discrete second-order Hamiltonian system under asymptotically linear conditions. The existence of a periodic solution is obtained via the saddle point theorem.
Citation: Xiaoxing Chen, Chungen Liu, Jiabin Zuo. A discrete second-order Hamiltonian system with asymptotically linear conditions[J]. Electronic Research Archive, 2023, 31(9): 5151-5160. doi: 10.3934/era.2023263
[1] Jing Zhang, Baoqun Yin, Yu Zhong, Qiang Wei, Jia Zhao, Hazrat Bilal. A two-stage grasp detection method for sequential robotic grasping in stacking scenarios. Mathematical Biosciences and Engineering, 2024, 21(2): 3448-3472. doi: 10.3934/mbe.2024152
[2] Xuewu Wang, Bin Tang, Xin Zhou, Xingsheng Gu. Double-robot obstacle avoidance path optimization for welding process. Mathematical Biosciences and Engineering, 2019, 16(5): 5697-5708. doi: 10.3934/mbe.2019284
[3] Ruiping Yuan, Jiangtao Dou, Juntao Li, Wei Wang, Yingfan Jiang. Multi-robot task allocation in e-commerce RMFS based on deep reinforcement learning. Mathematical Biosciences and Engineering, 2023, 20(2): 1903-1918. doi: 10.3934/mbe.2023087
[4] Zhen Yang, Junli Li, Liwei Yang, Qian Wang, Ping Li, Guofeng Xia. Path planning and collision avoidance methods for distributed multi-robot systems in complex dynamic environments. Mathematical Biosciences and Engineering, 2023, 20(1): 145-178. doi: 10.3934/mbe.2023008
[5] Chengjun Wang, Xingyu Yao, Fan Ding, Zhipeng Yu. A trajectory planning method for a casting sorting robotic arm based on a nature-inspired Genghis Khan shark optimized algorithm. Mathematical Biosciences and Engineering, 2024, 21(2): 3364-3390. doi: 10.3934/mbe.2024149
[6] Baoye Song, Shumin Tang, Yao Li. A new path planning strategy integrating improved ACO and DWA algorithms for mobile robots in dynamic environments. Mathematical Biosciences and Engineering, 2024, 21(2): 2189-2211. doi: 10.3934/mbe.2024096
[7] Liwei Yang, Lixia Fu, Ping Li, Jianlin Mao, Ning Guo, Linghao Du. LF-ACO: an effective formation path planning for multi-mobile robot. Mathematical Biosciences and Engineering, 2022, 19(1): 225-252. doi: 10.3934/mbe.2022012
[8] Ping Li, Liwei Yang. Conflict-free and energy-efficient path planning for multi-robots based on priority free ant colony optimization. Mathematical Biosciences and Engineering, 2023, 20(2): 3528-3565. doi: 10.3934/mbe.2023165
[9] Xinyu Shao, Zhen Liu, Baoping Jiang. Sliding-mode controller synthesis of robotic manipulator based on a new modified reaching law. Mathematical Biosciences and Engineering, 2022, 19(6): 6362-6378. doi: 10.3934/mbe.2022298
[10] Xingjia Li, Jinan Gu, Zedong Huang, Chen Ji, Shixi Tang. Hierarchical multiloop MPC scheme for robot manipulators with nonlinear disturbance observer. Mathematical Biosciences and Engineering, 2022, 19(12): 12601-12616. doi: 10.3934/mbe.2022588
Currently, robots are increasingly used to fulfil various tasks in place of human beings in industrial production and daily life. For instance, robots outperform humans in many operations such as welding, cutting, punching, paint spraying, handling of heavy materials and sophisticated material processing [1]. In robotic grasping operations, the poses of the target objects change frequently. To make grasping more adaptive, robots have to adjust their motions in real time based on information about the object poses.
At present, most studies focus on robotic grasping based on machine vision. First, a vision-based robot determines the poses of objects visually. Then, the coordinates of the robot's joints are computed through inverse kinematics. Finally, the robot's motions are planned by the designed motion planner. This process requires calibrating the vision system, and traditional visual calibration methods [2,3,4] demand highly trained operators. Meanwhile, the computations involved are very time-consuming. Moreover, it is difficult to determine inverse solutions for robots with novel structures or redundant degrees of freedom. Finally, the planner design is complicated, which may introduce further problems.
Researchers have addressed the needs of robotic grasping by different methods. Some studies equipped end actuators with sensors such as tactile feedback [5], while others improved the end actuators to make robots more adaptive to grasping; for instance, D. Petković presented a novel adaptive neuro-fuzzy inference system (ANFIS) design for controlling the input displacement of a new adaptive compliant gripper [6]. The embedded sensors in the gripper enabled the control system to regulate the input displacement of the gripper and to recognize particular shapes of the grasped objects. Some studies increased the number of underactuated degrees of freedom [7], and others used flexible materials [8] to manufacture more flexible end actuators. C. Qian et al. designed a robotic arm with 12 degrees of freedom based on multi-sensor fusion [9]. In this arm, the sensors were combined with the controllers; information fusion captured the general shapes of objects and endowed the arm with bionic functions. In view of the considerable cost of motor control and the poor control precision of robotic arms, S. Liu et al. designed STM32-based robotic arms [10], which effectively avoided problems such as inaccurate motor positioning and high cost, and could be controlled more easily and flexibly. Although the aforementioned methods effectively increase the adaptivity of the robots' end actuators, they only help robots grasp objects within the working range of the end actuators.
Many researchers identified the poses of objects visually and adjusted the robots' motions in order to grasp the objects. Saxena et al. identified the positions and shapes of target objects by analyzing several images of the objects, so as to derive suitable strategies for robotic grasping [11]. W. Dong introduced computer vision into conventional industrial transport robots [12]. They obtained information about the workpieces and surroundings through machine vision, identified the target workpieces to be operated on and made decisions to guide the industrial robots in grasping and placing the workpieces.
Currently, more and more researchers have introduced "intelligence" into grasping, applying deep learning and training to robotic arms with promising results. Z. Yan et al. proposed a deep-learning-based method for detecting robots' grasping positions [13]. In this method, multimodal features of the target objects were used as training data, and the robots learnt the optimal grasping positions of target objects through unsupervised and supervised learning, enabling accurate identification of the optimal grasping positions. Xia Jing put forward a rapid detection method for a robot's planar grasping pose based on a cascaded convolutional neural network [14]. He built a cascaded two-stage convolutional neural network model and used transfer learning to train models on small data sets. Online robotic grasping experiments showed that the method can quickly compute the optimal grasping point and pose for irregular objects with arbitrary poses and different shapes, with improved recognition accuracy and speed compared with previous methods, and strong robustness and stability.
Robots can also acquire task skills from human demonstrations. Y. Mollard [15] enabled rapid and intuitive robot programming by demonstration. K. Bousmalis [16] studied the effect of randomized simulated environments and domain adaptation methods in training a grasping system to grasp novel objects from raw monocular RGB images; as a result, the number of real-world samples needed to achieve a given level of performance was reduced by up to 50 times. Max Schwarz et al. proposed a deep object perception pipeline [17] that quickly and efficiently adapted to new items using a custom turntable capture system and transfer learning, producing high-quality item segments on which grasp poses were found. Philipp Schmidt presented a data-driven, bottom-up, deep learning approach to robotic grasping of unknown objects using deep convolutional neural networks (DCNNs) [18], and demonstrated its performance in qualitative grasping experiments on the humanoid robot ARMAR-III.
Earlier, we proposed a robot demonstration method based on the combination of locally weighted regression (LWR) and the Q-learning algorithm [19]. This method adapted to the work task by learning from demonstrations and generating new actions. In [20], we built Gaussian process models to capture the relationship between the observable variables and the joint variables, enabling the robot to grasp target objects adaptively. However, with Gaussian process models, adaptive grasping is only realized within relatively small training areas. If the object distribution area is enlarged and extends into untrained regions, the effectiveness of this modelling method is greatly weakened, and the success rate of robotic grasping drops accordingly.
In view of these problems, this paper proposes a robotic grasping method based on an improved Gaussian mixture model. The coordinates of the targets were extracted through a camera to obtain teaching samples. The relationships between observable variables and the robot's joint angles were mapped by the improved Gaussian models. Furthermore, the Gaussian models obtained via training were used as prior models and integrated into a process of semi-supervised self-taught learning. Through self-taught grasping, the robot grasped objects in new adjacent areas and collected new training samples. We then updated the probability distribution of the entire Gaussian process, obtained the posterior probabilistic models and achieved a higher success rate of grasping. For evaluation, simulation models were built in V-REP [21,22,23] and compared with the original and improved Gaussian models in terms of grasping success rate. In addition, test areas where grasping failed under all three methods were analyzed in terms of positional deviation. The analysis showed that the improved Gaussian models were more adaptive.
A Gaussian model precisely quantifies a phenomenon using Gaussian probability density functions (normal distribution curves), decomposing it into several such components.
The Expectation-Maximization (EM) algorithm is a standard method for maximum likelihood estimation, generally used in place of Newton iteration when estimating the parameters of probabilistic models containing latent variables or missing data. The standard computational framework alternates between an E-step and an M-step. The algorithm is convergent, which guarantees that the iteration approaches at least a local maximum of the likelihood.
Robots have to regulate themselves to adapt to objects in the process of grasping.
$f: o \to r$  (1)
In Figure 1 and Formula (1), o is an observable variable of an object (likewise o_i); r is the robot's joint variable corresponding to the observable variable (likewise r_i); f is the mapping from the observable variable to the joint variable. Assume that X = {x_1, x_2, …, x_n} is a set of training samples obtained through demonstration, where x_i = [r_i, o_i]^T is a single training sample (vector) made up of joint variables and observable variables. During operation, the robot first learns from the training set X to obtain the mapping function f; when a new observable variable o_new is determined, the corresponding joint variable r_new is calculated from the mapping function f.
Selecting a suitable model to describe the mapping function f is essential for the robot to learn grasping. In this paper, modelling was performed with improved Gaussian mixture models to map the relationships between the observable variables of objects and the joint variables of the robot. The models were trained with the EM algorithm, and the training samples were divided into several categories. Each category of samples followed a Gaussian distribution and corresponded to one area.
A Gaussian mixture model is a linear superposition of several Gaussian distributions. Under the probability distribution of a Gaussian mixture model, the probability of observing a sample x^o is as follows:
$p(x^o)=\sum_{k=1}^{m}p(k)\,p(x^o\mid k)=\sum_{k=1}^{m}\alpha_k\,\mathcal{N}(x^o;\mu_k,\Sigma_k)$  (2)
where m is the total number of Gaussian distributions; p(k) = α_k is the probability that the sample x^o comes from the kth Gaussian distribution; p(x^o | k) = N(x^o; μ_k, Σ_k) is the probability of the sample x^o being generated by the kth Gaussian distribution; μ_k and Σ_k are the mean vector and covariance matrix of the kth Gaussian distribution, respectively.
The samples were trained and categorized by the EM algorithm for Gaussian mixture models. A work area was designated for each category of samples, and the relationships between the observable variables of objects and the joint variables of the robot were mapped. After the model parameters were initialized, the E-step and M-step were iterated to update the model parameters until convergence:
E-step: calculate the responsibility r_ik of the kth Gaussian distribution for the ith sample x_i.
$r_{ik}=p(k\mid x_i)=\dfrac{\alpha_k\,\mathcal{N}(x_i;\mu_k,\Sigma_k)}{\sum_{j=1}^{m}\alpha_j\,\mathcal{N}(x_i;\mu_j,\Sigma_j)}$  (3)
M-step: update parameters of each cluster.
$n_k=\sum_{i=1}^{n}r_{ik},\quad \alpha_k=\dfrac{n_k}{n},\quad \mu_k=\dfrac{\sum_{i=1}^{n}r_{ik}x_i}{n_k},\quad \Sigma_k=\dfrac{\sum_{i=1}^{n}r_{ik}(x_i-\mu_k)(x_i-\mu_k)^T}{n_k}$  (4)
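As a concrete illustration, the E- and M-steps of Eqs. (3)-(4) can be sketched in a few lines of Python. This is an illustrative NumPy reimplementation, not the authors' code; the explicit initial means `mu0` and the small regularization term added to each covariance are our assumptions for numerical stability.

```python
import numpy as np

def em_gmm(X, m, mu0=None, iters=50, seed=0):
    """Minimal EM for a Gaussian mixture (Eqs. (3)-(4)); an illustrative sketch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initial means: user-supplied, else random data points (an assumption).
    mu = (np.array(mu0, dtype=float) if mu0 is not None
          else X[rng.choice(n, m, replace=False)].astype(float))
    Sigma = np.stack([np.eye(d)] * m)       # initial covariances
    alpha = np.full(m, 1.0 / m)             # mixing weights
    for _ in range(iters):
        # E-step: responsibilities r_ik (Eq. (3))
        R = np.empty((n, m))
        for k in range(m):
            diff = X - mu[k]
            inv = np.linalg.inv(Sigma[k])
            norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(Sigma[k]))
            R[:, k] = alpha[k] * norm * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1))
        R /= R.sum(axis=1, keepdims=True)
        # M-step: update alpha_k, mu_k, Sigma_k (Eq. (4))
        nk = R.sum(axis=0)
        alpha = nk / n
        mu = (R.T @ X) / nk[:, None]
        Sigma = np.stack([
            ((R[:, k, None] * (X - mu[k])).T @ (X - mu[k])) / nk[k] + 1e-6 * np.eye(d)
            for k in range(m)
        ])
    return alpha, mu, Sigma
```

On two well-separated clusters of samples, the recovered means converge to the cluster centres and the mixing weights sum to one.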
When the robot grasped objects based on the improved Gaussian model, the observable variables of the objects were identified through a camera, and the posterior probability of these variables under each Gaussian distribution was calculated. The robot's joint coordinates were then determined by regression of the improved Gaussian process corresponding to the maximum posterior probability, after which the robot was able to grasp the objects successfully.
Principles of robotic grasping methods based on improved Gaussian models are shown in Figure 2:
(1) Relationships between observable variables of target objects and corresponding robots’ joint variables were mapped using original Gaussian models. Then we identified the overall mapped relationships based on data regarding 12 groups of samples.
(2) Target positions were randomly selected in new training areas as inputs of the Gaussian processes, and the joint angles were predicted as the outputs. In the simulation models, forward solutions were determined from the joint angles, and the positions of the corresponding pixel coordinates at the robot's end were observed and recorded. If the predicted and actual positions deviated by Δd < 10 mm, the newly determined joint angles were added as new samples to the training sets of the Gaussian processes, so as to update the entire distribution. When the cumulative evaluation value reached the threshold, enough samples had been obtained from the new training areas, and the posterior Gaussian distribution in the new training areas was identified.
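The 10 mm acceptance test in step (2) amounts to a simple Euclidean check. A hypothetical helper (the function name and tolerance default are ours, following the text):

```python
import math

def accept_new_sample(pred_xy, actual_xy, tol_mm=10.0):
    """Accept a self-taught sample when the predicted and observed end
    positions deviate by less than tol_mm (the paper's 10 mm criterion)."""
    dx = pred_xy[0] - actual_xy[0]
    dy = pred_xy[1] - actual_xy[1]
    return math.hypot(dx, dy) < tol_mm
```

Samples passing this check are appended to the Gaussian process training set before the distribution is re-estimated.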
Gaussian process modelling (GPM) assumes that the observable and predicted variables jointly follow a normal distribution. Given training samples, the posterior probability distribution is determined by the covariance matrix of the inputs. GPM is a Bayesian approach by which a robot can learn the mapping function f from samples; Gaussian process models can be trained on a small number of samples and then capture nonlinear relationships among the related variables.
Before data are obtained, it is assumed that joint variables and observable variables of objects follow Gaussian distribution, where the mean is μ and the covariance matrix is K:
$h\sim \mathcal{N}(\mu,K)$  (5)
where h = [a, o]^T is a vector composed of the joint variables a and the observable variables o.
Meanwhile, the posterior distribution of the multi-dimensional variables given the sample set X is a Gaussian distribution:
$p(h\mid X,\theta)=\mathcal{N}(\mu,\,K+\sigma_n^2 I)$  (6)
where θ = {μ, K, σ_n²}. The marginal likelihood function of the sample set X is as follows:
$p(X\mid\theta)=\prod_{i=1}^{n}p(x_i\mid\theta)=P$  (7)
Where:
$P=\prod_{i=1}^{n}\dfrac{1}{\sqrt{2\pi\left|K+\sigma_n^2 I\right|}}\exp\!\left(-\dfrac{\tau_i}{2}\right)$  (8)

$\tau_i=(x_i-\mu)^T\left(K+\sigma_n^2 I\right)^{-1}(x_i-\mu)$  (9)
The logarithm of the above formula is determined as follows:
$\log P = A + B - \dfrac{n}{2}\log(2\pi)$  (10)
Where:
$A=-\dfrac{1}{2}\sum_{i=1}^{n}\tau_i$  (11)

$B=-\dfrac{n}{2}\log\left|K+\sigma_n^2 I\right|$  (12)
A and B are the data-fit term of the model and the penalty term for model complexity, respectively.
Setting the partial derivatives with respect to the mean vector and covariance matrix of the model to zero, the maximum likelihood estimates of the mean vector and covariance matrix are obtained as follows:
$\mu=\dfrac{\sum_{i=1}^{n}x_i}{n}$  (13)

$K+\sigma_n^2 I=\operatorname{Cov}\!\left([x_1,x_2,\ldots,x_n]^T\right)$  (14)
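Eqs. (13)-(14) are the usual sample mean and (biased) sample covariance of the stacked training vectors; a minimal NumPy sketch, assuming the samples are stacked row-wise:

```python
import numpy as np

def mle_mean_cov(X):
    """Maximum likelihood estimates of Eqs. (13)-(14): sample mean and
    biased sample covariance; an illustrative sketch, not the authors' code."""
    mu = X.mean(axis=0)              # Eq. (13): mean vector
    diff = X - mu
    C = diff.T @ diff / X.shape[0]   # Eq. (14): estimate of K + sigma_n^2 I
    return mu, C
```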
Next, the joint variables are forecast using the Gaussian process models. The relationships between the visually observable variables and the joint variables are established from the teaching and training samples. First, the vectors and matrices of the Gaussian process are partitioned as follows:
$\begin{bmatrix}a\\ o\end{bmatrix}\sim \mathcal{N}\!\left(\begin{bmatrix}\mu_a\\ \mu_o\end{bmatrix},\begin{bmatrix}K_{aa}+\sigma_n^2 I & K_{ao}\\ K_{oa} & K_{oo}+\sigma_n^2 I\end{bmatrix}\right)$  (15)
The robot acquires information o* about a target object through the camera, and the conditional probability distribution of the corresponding joint angles a* is as follows:
$p(a^*\mid o^*)=\mathcal{N}(\mu_a^*,K_{aa}^*)$  (16)
Where:
$\mu_a^*=\mu_a+K_{ao}\left(K_{oo}+\sigma_n^2 I\right)^{-1}(o^*-\mu_o)$  (17)

$K_{aa}^*=K_{aa}-K_{ao}\left(K_{oo}+\sigma_n^2 I\right)^{-1}K_{oa}$  (18)
μ_a* is the mean of the joint angles corresponding to the new target position, and it is also the point of maximum probability of the corresponding Gaussian distribution. The covariance matrix K_aa* reflects the predictive uncertainty. If the robot's joints are set to μ_a*, the likelihood of a successful grasp is highest.
By reading the controller data, relatively accurate joint coordinates of the robot are determined as follows:
$K_{aa}^*\approx K_{aa}+\sigma_n^2 I-K_{ao}\left(K_{oo}+\sigma_n^2 I\right)^{-1}K_{oa}$  (19)
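The conditional prediction of Eqs. (16)-(18) can be sketched directly with NumPy. This is an illustrative implementation, not the authors' code; by symmetry of the joint covariance, `K_oa` is taken as the transpose of `K_ao`.

```python
import numpy as np

def gp_predict_joint(mu_a, mu_o, Kaa, Kao, Koo, sigma_n, o_star):
    """Conditional Gaussian prediction of joint angles (Eqs. (16)-(18));
    a sketch under the joint-normal assumption of Eq. (15)."""
    n = Koo.shape[0]
    S = Koo + sigma_n ** 2 * np.eye(n)         # K_oo + sigma_n^2 I
    gain = Kao @ np.linalg.inv(S)
    mu_star = mu_a + gain @ (o_star - mu_o)    # Eq. (17): predictive mean
    K_star = Kaa - gain @ Kao.T                # Eq. (18): predictive covariance
    return mu_star, K_star
```

When the cross-covariance K_ao is zero, the observation carries no information and the prediction falls back to the prior mean and covariance, as expected.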
Without visual calibration or inverse kinematic solutions, the Gaussian process closely associates the target objects with the joint variables. Therefore, the robot's joint angles adapted to the target poses can be forecast from the observable variables.
The Gaussian process models obtained above are taken as prior models. New models are built for grasping in the new training areas by the improved Gaussian method. The equation is as follows:
$p(\theta\mid X)=\dfrac{p(X\mid\theta)\,p(\theta)}{p(X)}$  (20)
where p(θ) is the probability distribution of the prior models; X is the training sample; p(θ|X) is the probability distribution of the posterior models; p(X) is the marginal likelihood, calculated as:
$p(X)=\int p(X\mid\theta)\,p(\theta)\,d\theta$  (21)
The Gaussian method was improved for the purpose of expanding into new application areas without excessively increasing the number of sample points of the original models. Therefore, this paper introduces an evaluation mechanism so that the improved models achieve the expected performance after a certain number of sample points are updated. A grasp evaluation mechanism was established in combination with the actual grasping positions, which were identified from the target positions and the posterior models, in order to define the termination requirements for the posterior models. The evaluation function is as follows:
$r=S_r e^{-\lambda\Delta d}+S_r',\qquad \Delta d=\sqrt{(x_a-x_d)^2+(y_a-y_d)^2}$  (22)
where S_r is the evaluated value when Δd is 0; λ is a parameter ensuring that e^{-λΔd} converges to 0 when Δd is large enough; S_r' is the value evaluated after another new sample is used.
The termination requirement for completing the update of an area is as follows:
$R=\sum_{i=1}^{n}r_i>\kappa$  (23)
In other words, a new area is deemed to have been updated when the cumulative evaluation value reaches a certain threshold κ.
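Eqs. (22)-(23) can be sketched as follows. The parameter values for `Sr`, `lam` and the threshold `kappa` are illustrative assumptions, since the paper does not report their numerical settings here.

```python
import math

def evaluate_grasp(delta_d, Sr=1.0, lam=0.5, Sr_prime=0.0):
    """Evaluation function of Eq. (22); parameter defaults are illustrative."""
    return Sr * math.exp(-lam * delta_d) + Sr_prime

def area_updated(deltas, kappa=3.0, **kw):
    """Termination test of Eq. (23): cumulative evaluation exceeds kappa."""
    return sum(evaluate_grasp(d, **kw) for d in deltas) > kappa
```

A grasp landing exactly on target (Δd = 0) contributes the full score S_r, while large deviations contribute almost nothing, so only accurate grasps drive the area toward the update threshold.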
The robotic grasping system comprised data acquisition, data processing and an industrial robot, as shown in Figure 3. A UR3 robot was used. It has six joints from the base to the end: base, shoulder joint, elbow joint, wrist joint 1, wrist joint 2 and wrist joint 3. An MV-EM200C/M camera with a resolution of 1600 × 1200 was used. The gripper at the end was a motor-driven three-finger gripper with a single degree of freedom.
The experimental platform was configured as shown in Figure 4. The camera was placed above the platform. UR3 robot was on the experimental platform. The block was in the area near the black cloth. The block to be grasped was a small cube. By determining the mean of the coordinates of edge points of the block, the pixel coordinates x and y in the centre of the object were identified, as shown in Figure 5.
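The centre-of-block computation described above (the mean of the detected edge-point coordinates) can be sketched as below; the edge detector itself is assumed to exist upstream and is outside this snippet.

```python
import numpy as np

def block_centre(edge_points):
    """Centre pixel coordinates as the mean of detected edge points
    (an illustrative sketch of the procedure described in the text)."""
    pts = np.asarray(edge_points, dtype=float)
    return pts.mean(axis=0)
```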
The experimenter dragged the robot to the position of each block to determine the joint angles, and the pixel coordinates of the blocks were obtained through the camera. Twelve experiments were performed. The resulting observable variables and joint angles were recorded and used as training samples and inputs for the Gaussian process models, as shown in Table 1.
No. | Observable Variables | Corresponding Joint Coordinates (Radian) | ||||||
Pixel (x) | Pixel (y) | Base | Shoulder Joint | Elbow Joint | Wrist Joint 1 | Wrist Joint 2 | Wrist Joint 3 | |
1 | 239 | 88 | -0.134 | -1.730 | -2.022 | -0.965 | 1.573 | 0.007 |
2 | 347 | 88 | 0.003 | -1.661 | -2.103 | -0.951 | 1.574 | 0.145 |
3 | 455 | 87 | 0.149 | -1.609 | -2.161 | -0.944 | 1.575 | 0.291 |
4 | 455 | 191 | 0.130 | -1.768 | -1.976 | -0.970 | 1.574 | 0.271 |
5 | 347 | 192 | 0.003 | -1.810 | -1.920 | -0.985 | 1.573 | 0.145 |
6 | 239 | 192 | -0.118 | -1.868 | -1.841 | -1.008 | 1.572 | 0.023 |
7 | 239 | 297 | -0.105 | -2.004 | -1.639 | -1.073 | 1.571 | 0.036 |
8 | 347 | 297 | 0.003 | -1.953 | -1.717 | -1.045 | 1.572 | 0.144 |
9 | 454 | 296 | 0.116 | -1.917 | -1.772 | -1.026 | 1.573 | 0.257 |
10 | 401 | 244 | 0.062 | -1.861 | -1.852 | -1.003 | 1.573 | 0.204 |
11 | 293 | 245 | -0.055 | -1.907 | -1.785 | -1.024 | 1.572 | 0.086 |
12 | 293 | 140 | -0.062 | -1.767 | -1.977 | -0.973 | 1.573 | 0.079 |
No. | Pixel (x) | Pixel (y) | Position Error (Δd) | Base | Shoulder Joint | Elbow Joint | Wrist Joint 1 | Wrist Joint 2 | Wrist Joint 3 |
1 | 560 | 88 | 5.26 | 0.253 | -1.568 | -2.232 | -0.912 | 1.576 | 0.395 |
2 | 668 | 192 | 1.63 | 0.378 | -1.662 | -2.107 | -0.942 | 1.576 | 0.520 |
3 | 560 | 192 | 8.86 | 0.252 | -1.713 | -2.041 | -0.960 | 1.575 | 0.394 |
4 | 722 | 244 | 8.64 | 0.440 | -1.709 | -2.045 | -0.957 | 1.576 | 0.582 |
5 | 615 | 245 | 4.02 | 0.316 | -1.761 | -1.977 | -0.975 | 1.575 | 0.458 |
6 | 614 | 140 | 7.57 | 0.315 | -1.615 | -2.170 | -0.927 | 1.576 | 0.457 |
After the training samples were obtained, the mean vector μ of the Gaussian process and the maximum likelihood estimate of the covariance matrix (K + σ_n²I) were determined. New observable variables and the robot's joint coordinates follow the same probability distribution. Given a new observable input o*, the joint angles μ_a* at which the robot was most likely to grasp successfully were obtained.
Figure 6 shows the pixel coordinates and the corresponding joint angles of the training samples. The joint angles of the base, shoulder joint, elbow joint, wrist joint 1, wrist joint 2 and wrist joint 3 are arranged from bottom to top. Figure 7 shows the distribution and grasping status of the training samples and the test samples on the pixel plane. The squares represent the training samples, the hollow circles are test samples that were grasped successfully, and the solid circles are test samples that were not. It is evident from the figure that the Gaussian models supported successful grasping in areas within the training sample set, but grasping consistently failed outside the training areas. Hence, the Gaussian process models function fairly well within the training areas, while the grasping results were unsatisfactory on the boundaries near the training areas and at test points outside them. This suggests that the Gaussian process models rely on the prior samples and are unable to function well without them. This is a limitation of the original Gaussian models.
Sample points of the new areas were cumulatively updated by the improved Gaussian models. We selected points satisfying Δd < 10 mm as new samples for updating the sample sets and calculated the evaluation values until they reached the threshold. This experiment showed that only six new data points were needed, far fewer than the number of training samples required by the original Gaussian models.
Figure 8 shows distribution of the robot’s joint angles. The first row lists the joint angles of the original Gaussian model, while the second row shows the joint angles that were determined after the update of six sample points. It is evident that changes occurred to these joint angles. In particular, these changes were rather significant for the wrist joints 2 and 3, which suggested that the joint angles for grasping changed considerably in the new mapping relationships after the update.
To make the experiment accurate and generalizable, the grasping was also modelled and simulated in V-REP. As shown in Figure 9, the UR3 and blocks were placed on the table and a camera was placed above them. We compared the grasping success rates of the original Gaussian model on real objects, the improved Gaussian model in V-REP simulation and the improved Gaussian model on real objects, as shown in Figures 10-12.
Sixty points in the new area were randomly selected for testing; the results are shown in Figures 10-12. The red solid circles indicated successful grasping and the hollow circles indicated failed grasping. According to the results, the success rate of direct grasping by the original Gaussian model was only 18.3%, and the successfully grasped points were concentrated adjacent to the original area. The success rate of the improved Gaussian model in V-REP simulation was 81.7%, while that of the improved Gaussian model on real objects was 75%. The real-object success rate was slightly lower than the simulation, possibly due to precision errors of the camera sensor and robotic arm. This suggests that the improved Gaussian model can achieve desirable grasping results in untrained areas after only a small number of samples are updated.
In addition, we analysed the positional deviations for the 11 points at which grasping failed under all three methods. We computed the positional deviation Δd between these points and the gripper end, as shown in Figure 13. The displacement deviation was highest for the original Gaussian model; the deviation of the improved Gaussian model was higher than that of the V-REP simulation but far lower than that of the original Gaussian model. This indirectly suggests that the improved Gaussian model is much more adaptive than the original model, even in areas where grasping failed.
In this paper, a robotic grasping method based on an improved Gaussian model was put forward. Like the original Gaussian model, this method associated observable variables with joint angles, trained areas with a small number of samples and successfully predicted grasps at points within the training areas without calibrating the vision system. Unlike the original Gaussian model, which could not predict grasping outside the training areas, the improved model achieved desirable grasping predictions by updating only a small number of samples, with no need for training samples over large areas. Finally, comparative tests were performed on real grasping and V-REP-based grasping simulation with the improved Gaussian model, along with positional deviation analysis of the areas of failed grasping. The experimental results validated the effectiveness of the proposed method.
The authors declare there is no conflict of interest.
[1] Z. Wang, J. Zhang, New existence results on periodic solutions of non-autonomous second order Hamiltonian systems, Appl. Math. Lett., 79 (2018), 43-50. https://doi.org/10.1016/j.aml.2017.11.016
[2] Q. Zheng, Homoclinic solutions for a second-order nonperiodic asymptotically linear Hamiltonian systems, Abstr. Appl. Anal., 2013 (2013), 417020. https://doi.org/10.1155/2013/417020
[3] M. Makvand Chaharlang, M. Ragusa, A. Razani, A sequence of radially symmetric weak solutions for some nonlocal elliptic problem in R^N, Mediterr. J. Math., 17 (2020), 53. https://doi.org/10.1007/s00009-020-1492-X
[4] Z. Guo, J. Yu, Existence of periodic and subharmonic solutions for second-order superlinear difference equations, Sci. China Ser. A-Math., 46 (2003), 506-515. https://doi.org/10.1007/BF02884022
[5] Z. Guo, J. Yu, Periodic and subharmonic solutions for superquadratic discrete Hamiltonian systems, Nonlinear Anal. Theory Methods Appl., 55 (2003), 969-983. https://doi.org/10.1016/j.na.2003.07.019
[6] Z. Guo, J. Yu, The existence of periodic and subharmonic solutions of subquadratic second order difference equations, J. London Math. Soc., 68 (2003), 419-430. https://doi.org/10.1112/S0024610703004563
[7] X. Tang, X. Zhang, Periodic solutions for second-order discrete Hamiltonian systems, J. Differ. Equations Appl., 17 (2011), 1413-1430. https://doi.org/10.1080/10236190903555237
[8] Z. Wang, J. Zhang, M. Chen, A unified approach to periodic solutions for a class of non-autonomous second order Hamiltonian systems, Nonlinear Anal. Real World Appl., 58 (2021), 103218. https://doi.org/10.1016/j.nonrwa.2020.103218
[9] X. Tang, L. Xiao, Homoclinic solutions for a class of second-order Hamiltonian systems, Nonlinear Anal., 71 (2009), 1140-1152. https://doi.org/10.1016/j.na.2008.11.038
[10] Y. Xue, C. Tang, Multiple periodic solutions for superquadratic second-order discrete Hamiltonian systems, Appl. Math. Comput., 196 (2008), 494-500. https://doi.org/10.1016/j.amc.2007.06.015
[11] Z. Wang, J. Xiao, On periodic solutions of subquadratic second order non-autonomous Hamiltonian systems, Appl. Math. Lett., 40 (2015), 72-77. https://doi.org/10.1016/j.aml.2014.09.014
[12] C. Tang, X. Wu, Periodic solutions for a class of new superquadratic second order Hamiltonian systems, Appl. Math. Lett., 34 (2014), 65-71. https://doi.org/10.1016/j.aml.2014.04.001
[13] Q. Jiang, C. Tang, Periodic and subharmonic solutions of a class of sub-quadratic second-order Hamiltonian systems, J. Math. Anal. Appl., 328 (2007), 380-389. https://doi.org/10.1016/j.jmaa.2006.05.064
[14] J. Xie, J. Li, Z. Luo, Periodic and subharmonic solutions for a class of the second-order Hamiltonian systems with impulsive effects, Boundary Value Probl., 2015 (2015), 52. https://doi.org/10.1186/s13661-015-0313-9
[15] F. Zhao, J. Chen, M. Yang, A periodic solution for a second-order asymptotically linear Hamiltonian system, Nonlinear Anal., 70 (2009), 4021-4026. https://doi.org/10.1016/j.na.2008.08.009
[16] X. Chen, F. Guo, P. Liu, Existence of periodic solutions for second-order Hamiltonian systems with asymptotically linear conditions, Front. Math. China, 13 (2018), 1313-1323. https://doi.org/10.1007/s11464-018-0736-6
[17] R. Cheng, Periodic solutions for asymptotically linear Hamiltonian system with resonance both at infinity and origin via computation of critical groups, Nonlinear Anal., 89 (2013), 134-145. https://doi.org/10.1016/j.na.2013.05.009
[18] G. Fei, Nontrivial periodic solutions of asymptotically linear Hamiltonian systems, Electron. J. Differ. Equations, 2001 (2001), 1-17.
[19] X. Tang, J. Jiang, Existence and multiplicity of periodic solutions for a class of second-order Hamiltonian systems, Comput. Math. Appl., 59 (2010), 3646-3655. https://doi.org/10.1016/j.camwa.2010.03.039
Observable variables (pixel coordinates) and corresponding joint coordinates (radians):

| No. | Pixel (x) | Pixel (y) | Base | Shoulder Joint | Elbow Joint | Wrist Joint 1 | Wrist Joint 2 | Wrist Joint 3 |
|-----|-----------|-----------|------|----------------|-------------|---------------|---------------|---------------|
| 1 | 239 | 88 | -0.134 | -1.730 | -2.022 | -0.965 | 1.573 | 0.007 |
| 2 | 347 | 88 | 0.003 | -1.661 | -2.103 | -0.951 | 1.574 | 0.145 |
| 3 | 455 | 87 | 0.149 | -1.609 | -2.161 | -0.944 | 1.575 | 0.291 |
| 4 | 455 | 191 | 0.130 | -1.768 | -1.976 | -0.970 | 1.574 | 0.271 |
| 5 | 347 | 192 | 0.003 | -1.810 | -1.920 | -0.985 | 1.573 | 0.145 |
| 6 | 239 | 192 | -0.118 | -1.868 | -1.841 | -1.008 | 1.572 | 0.023 |
| 7 | 239 | 297 | -0.105 | -2.004 | -1.639 | -1.073 | 1.571 | 0.036 |
| 8 | 347 | 297 | 0.003 | -1.953 | -1.717 | -1.045 | 1.572 | 0.144 |
| 9 | 454 | 296 | 0.116 | -1.917 | -1.772 | -1.026 | 1.573 | 0.257 |
| 10 | 401 | 244 | 0.062 | -1.861 | -1.852 | -1.003 | 1.573 | 0.204 |
| 11 | 293 | 245 | -0.055 | -1.907 | -1.785 | -1.024 | 1.572 | 0.086 |
| 12 | 293 | 140 | -0.062 | -1.767 | -1.977 | -0.973 | 1.573 | 0.079 |
Position errors (Δd) and corresponding joint coordinates (radians) at the test points:

| No. | Pixel (x) | Pixel (y) | Position Error (Δd) | Base | Shoulder Joint | Elbow Joint | Wrist Joint 1 | Wrist Joint 2 | Wrist Joint 3 |
|-----|-----------|-----------|---------------------|------|----------------|-------------|---------------|---------------|---------------|
| 1 | 560 | 88 | 5.26 | 0.253 | -1.568 | -2.232 | -0.912 | 1.576 | 0.395 |
| 2 | 668 | 192 | 1.63 | 0.378 | -1.662 | -2.107 | -0.942 | 1.576 | 0.520 |
| 3 | 560 | 192 | 8.86 | 0.252 | -1.713 | -2.041 | -0.960 | 1.575 | 0.394 |
| 4 | 722 | 244 | 8.64 | 0.440 | -1.709 | -2.045 | -0.957 | 1.576 | 0.582 |
| 5 | 615 | 245 | 4.02 | 0.316 | -1.761 | -1.977 | -0.975 | 1.575 | 0.458 |
| 6 | 614 | 140 | 7.57 | 0.315 | -1.615 | -2.170 | -0.927 | 1.576 | 0.457 |
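As a quick sanity check, the Δd column above can be summarized directly. This is a minimal sketch: the error values are copied verbatim from the six test points in the table, and no unit is assumed for Δd since none is stated.

```python
# Position errors (Delta d) taken verbatim from the six test points above.
errors = [5.26, 1.63, 8.86, 8.64, 4.02, 7.57]

# Basic summary statistics over the test set.
mean_error = sum(errors) / len(errors)
min_error = min(errors)
max_error = max(errors)

print(f"mean = {mean_error:.2f}, min = {min_error:.2f}, max = {max_error:.2f}")
```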