
Virtual reality (VR) technology has been increasingly employed in human-robot interaction (HRI) research to enhance the immersion and realism of the interaction. However, the integration of VR into HRI also introduces new challenges, such as latency, mismatches between virtual and real environments and potential adverse effects on human users. Despite these challenges, the use of VR in HRI has the potential to provide numerous benefits, including improved communication, increased safety and enhanced training and education. Yet, few studies have reviewed the state of the art of VR applications in human-robot interaction. To bridge this gap, this paper provides an overview of the challenges and benefits of using VR in HRI, as well as current research in the field and future directions for development. It has been found that robots are becoming more personalized, interactive and engaging than ever; and with the popularization of virtual reality innovations, we may foresee the wide adoption of VR for controlling robots to fulfill various tasks in hospitals, schools and factories. Still, several challenges remain, such as the need for more advanced VR technologies to provide more realistic and immersive experiences, the development of more human-like robot models to improve social interactions and the need for better methods of evaluating the effectiveness of VR in human-robot interaction.
Citation: Yu Lei, Zhi Su, Chao Cheng. Virtual reality in human-robot interaction: Challenges and benefits[J]. Electronic Research Archive, 2023, 31(5): 2374-2408. doi: 10.3934/era.2023121
Virtual reality (VR) can help human-robot interaction (HRI) by providing a tool for successfully iterating across robot concepts. Moreover, VR has been utilized in various collaborative industrial applications to facilitate human-robot interaction. VR headsets, such as the Oculus Rift or HTC Vive, allow users to see and interact with virtual representations of robots in real time, as if they were physically present in the same room. This can make it easier for humans to understand and control the movements and actions of robots, as well as provide visual feedback to the robot about its environment. Furthermore, VR also allows for remote collaboration, where multiple users can interact with a robot from different locations. By providing a shared virtual environment, VR can enable humans to work together more effectively with robots, whether they are located in the same room or in different parts of the world. Overall, VR has the potential to greatly enhance human-robot interaction by providing a more immersive and intuitive interface for controlling and communicating with robots. It can also facilitate remote collaboration and enable simulation-based testing, making it a valuable tool for a wide range of applications in industry, healthcare and research.
This paper serves as a general review of the latest progress of VR applications in HRI and summarizes how VR facilitates HRI by investigating dozens of recent papers in this field. The inclusion criteria were the following: (ⅰ) publications indexed in the Web of Science, Scopus and ProQuest databases; (ⅱ) publication dates between 1994 and 2022 (mostly in 2022); (ⅲ) written in English; (ⅳ) being a review paper or an innovative empirical study; and (ⅴ) coverage of the relevant search terms. (ⅰ) Editorial materials, (ⅱ) conference proceedings and (ⅲ) books were excluded from the review.
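The screening procedure above amounts to a simple filter over publication records. A minimal sketch in Python (the field names and sample records are illustrative, not the actual screening tool used in this review):

```python
# Sketch: apply the review's inclusion/exclusion criteria to a list of
# publication records, represented as dictionaries with illustrative fields.

DATABASES = {"Web of Science", "Scopus", "ProQuest"}
EXCLUDED_TYPES = {"editorial", "conference proceedings", "book"}

def included(record):
    """Return True if a record satisfies criteria (i)-(v) and no exclusion."""
    return (record["database"] in DATABASES
            and 1994 <= record["year"] <= 2022
            and record["language"] == "English"
            and record["type"] not in EXCLUDED_TYPES)

records = [
    {"database": "Scopus", "year": 2022, "language": "English", "type": "review"},
    {"database": "Scopus", "year": 2022, "language": "English", "type": "book"},
    {"database": "Web of Science", "year": 1990, "language": "English", "type": "article"},
]
selected = [r for r in records if included(r)]
print(len(selected))  # only the 2022 review survives the filter
```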
The manuscript is structured as follows. Section 1 serves as a brief introduction to the research, including the research areas covered and the methodology employed. The second part of the article is an overall introduction to HRI in terms of its definition, application and development. Four application areas for HRI are introduced in this section: the control of robots to perform routine and nonroutine tasks, the control of robots to perform unusual jobs in dangerous or inaccessible settings, the concept of autonomous vehicles and human-robot social interaction. Also in this part, a detailed introduction to VR is given, which covers its definition, background and latest applications in reality. Section 3 is a detailed introduction to the latest application areas of virtual reality, including virtual reality aided design (VRAD), VR goggles, driving simulation, medicine, education and virtual machining. Section 4 presents an overview of the development of VR in HRI in recent years, summarizes the benefits and costs and predicts the future direction in this field. Specifically, four popular topics have been traced, namely, robot-assisted surgery, COVID-19, education and training and smart manufacturing. Sections 5 and 6 serve as the discussion and conclusion, respectively, where the major findings of the research are discussed.
Virtual reality has lately become a prominent topic in human-robot interaction, with numerous VR innovations, such as virtual assistive robots, welding robots and SIGVerse, as shown in Figure 1; the key technologies of virtual reality and human-robot interaction involved are shown in Figures 2 and 3, respectively. This section mainly introduces the concepts of human-robot interaction and virtual reality; their application areas will also be covered.
Human-robot interaction (HRI) is a relatively new topic that has gained prominence recently as a result of the increasing availability of complex robots and people's exposure to such robots in their ordinary routine. By definition, human-robot interaction is the interaction between people and robots. As a multidisciplinary area, it incorporates aspects of human-computer interaction, artificial intelligence, robotics, natural-language comprehension, design and psychology.
As a multidisciplinary field, HRI involves various technologies, such as artificial intelligence, computer vision, natural language processing and haptics. AI enables robots to perform tasks autonomously and make decisions based on data inputs. Computer vision allows robots to perceive and interpret the visual environment, while natural language processing enables them to understand and respond to human speech. Haptics technology allows robots to provide tactile feedback to humans, allowing for a more immersive and realistic interaction experience. Additionally, machine learning and deep learning algorithms are also commonly used in HRI to improve the robot's ability to adapt to and learn from its interactions with humans. Figure 3 depicts the key technologies used in human-robot interaction, including artificial intelligence, the internet of things, cloud computing and sensors.
There are essentially four application areas for HRI [9]:
1). Robots under human supervision performing mundane jobs. These involve handling parts on production assembly lines, as well as delivering goods, parts, mail and medications to offices, clinics and warehouses. These devices, known as telerobots, can execute a specific set of tasks automatically based on a software application. They are also able to sense their surroundings and their own combined positions and convey this information back to a human who updates the software program as necessary.
2). Vehicles in space, the air, the ground and the sea may all be controlled remotely to perform unusual jobs in dangerous or inaccessible settings. Such devices are known as teleoperators if they operate and move items in a remote physical environment in response to continuous control motions performed by the remote human operator. If a computer is instead periodically reprogrammed by a supervisor to complete certain tasks, the device is a telerobot.
3). Automated vehicles, such as commercial airplanes and automated rail and road vehicles, that carry passengers.
4). Human-robot social interaction, including robots that entertain, educate, soothe and help the elderly, autistic people and people with disabilities.
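The supervisory pattern behind categories 1) and 2), a robot executing a programmed task set autonomously while reporting state back to a human who revises the program as necessary, can be sketched as a minimal control loop (the class, task names and log format below are illustrative, not from any specific robot API):

```python
# Minimal sketch of supervisory telerobot control: the robot executes its
# programmed task list autonomously, logs sensed state for the human
# supervisor, and the supervisor may reprogram it between cycles.

class Telerobot:
    def __init__(self, program):
        self.program = list(program)  # task list set by the supervisor
        self.log = []                 # state conveyed back to the human

    def run_cycle(self):
        """Execute all queued tasks and record the outcome for review."""
        for task in self.program:
            self.log.append(f"done:{task}")
        return self.log

    def reprogram(self, new_program):
        """Human supervisor updates the software program as necessary."""
        self.program = list(new_program)

robot = Telerobot(["pick_part", "place_part"])
robot.run_cycle()                    # autonomous execution
robot.reprogram(["deliver_mail"])    # supervisor revises the program
robot.run_cycle()
```

The same skeleton describes a teleoperator if `run_cycle` is replaced by a loop that consumes continuous control motions from the remote operator instead of a stored program.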
Based on the above categorization, we know that human-robot interaction spans a wide range of topics, with a significant portion focusing on assistive robotic technology, robot-assisted search and rescue, space exploration and other areas, and it has been widely studied by researchers. For instance, Harbers et al. assessed and analyzed key values, as well as value tensions, of the stakeholders of search and rescue robots in [10]. Akgun et al. conducted three studies to identify the advantages of using sentiments as the modality of interaction for search and rescue robots in [11]. Qian et al. developed a reconfigurable rotary series elastic actuator with nonlinear stiffness for assistive robots in [12]. In [13], Getson and Nejat presented a human-robot interaction study using an autonomous multi-task social robot for non-contact screening in nursing homes. Wang et al. offered a spherical robot featuring flexible motion as well as great environmental adaptability in [14]. A methodology for creating a hybrid stochastic optimizer for multi-robot space travel was presented by Gul and colleagues in [15]. Liu discussed the design concerns and development of an assistive robot arm for neuromuscular condition patients receiving rehabilitation treatment in [16]. In [17], Shveda et al. developed an energy harvesting device that derives power from the user's foot impact while walking. In [18], Kumar et al. developed and modeled an industrial robot inside a virtual environment to explain the end-effector trajectory using forward kinematics. In [3], Su et al. introduced an integrated mapping of motion and visualization technique for the immersive and intuitive telemanipulation of robotic arm-hand systems based on a mixed reality subspace method.
For human-robot interaction, a crucial technical bottleneck still exists: robots cannot yet completely replace human work. Even the most modern autonomous robots have been designed for simplicity of use, yet they cannot function without the assistance of their human counterparts. For example, one of the most noteworthy ways robots have altered organizations during the COVID-19 pandemic is by allowing business leaders to precisely analyze performance and key performance indicators (KPIs) through robotic assistance. These robots are not only able to perform laborious chores such as cleaning floors, but they can also do so while automatically delivering extensive "proof of work" data (Figure 4) in the form of heatmaps [19], which is difficult to obtain manually.
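A coverage heatmap of the kind mentioned above can, in principle, be produced by binning the robot's logged floor positions into a grid and counting visits per cell. A minimal sketch (the grid dimensions, cell size and logged positions are illustrative):

```python
# Sketch: turn a cleaning robot's logged (x, y) floor positions into a
# coverage heatmap by counting how many times each grid cell was visited.

def coverage_heatmap(positions, width, height, cell=1.0):
    """positions: iterable of (x, y) in metres; returns rows x cols counts."""
    cols, rows = int(width / cell), int(height / cell)
    grid = [[0] * cols for _ in range(rows)]
    for x, y in positions:
        row = min(int(y / cell), rows - 1)  # clamp points on the far edge
        col = min(int(x / cell), cols - 1)
        grid[row][col] += 1
    return grid

# Positions logged while the robot cleaned a 4 m x 2 m floor.
log = [(0.5, 0.5), (1.5, 0.5), (1.5, 0.5), (3.5, 1.5)]
heat = coverage_heatmap(log, width=4.0, height=2.0)
print(heat)  # one hot cell revisited twice, one corner visited once
```

Cells with a count of zero reveal areas the robot missed, which is exactly the "proof of work" a manager would otherwise have to verify by hand.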
Though there are still many challenges for HRI development, it has a bright future due to the wide prospects of application. For instance, the use of autonomous robots in the workforce has prompted discussion about the various advantages they offer. These intelligent assisting devices have altered the ways many organizations function, especially in the post-COVID-19 world, by enhancing productivity and ensuring public and employee safety [20]. Also, with the exception of specific contexts, including commercial aviation and military systems, where human factors specialists have traditionally contributed, design in HRI necessitates substantially greater engagement from the human factors community than has previously occurred. "Self-driving" automobiles [21] and drones face enormous problems in terms of acceptance and safety. The evolution of robots and computers is happening far more quickly than that of the human species, which changes relatively slowly. Therefore, it is likely that particular HRI results will soon become outdated. The HRI sector appears to be driven more by creating and demonstrating what works and inspiring new ideas than by offering thorough and verified scientific results.
Virtual reality is a branch of research that tries to provide users with an environment that is similar to actual life through devices like smart glasses or head-mounted displays, as shown in Figure 4. Since the system is the source of the user's sensory input, the perception is said to be artificial, synthesized or simulated [23]. In most cases, the system used consists of a variety of devices, such as sensors that track user activity, computers that analyze that activity and provide output data, displays and other tools to enhance reality perception. Developers build a virtual world, which frequently consists of spatially ordered objects that are presented to the user via sensory displays, in order to create and replicate virtual experiences.
Technologies used in virtual reality today are based on concepts that go back to the 1800s, near the very beginning of widespread practical photography. The View-Master (a stereoscopic viewer) in 1939, as well as Heilig's Sensorama multi-sensory theater in the 1950s, marked the beginning of VR's repertoire of experiences. In 1968, the first head-mounted displays were developed. Developers then focused on well-designed applications throughout the 1970s and 1980s, creating VR experiences for the domains of medicine, aviation simulation and military training with more contemporary innovations. After 1990, shortly after AR became well known, VR penetrated the wider consumer market through computer games [24]. Since then, VR has evolved to become more dynamic and sophisticated. For instance, haptic feedback for human-robot interaction is greatly improved in a virtual environment, as shown in Figure 5 [4]. Currently, many doctors utilize headsets or other devices to produce realistic pictures, sounds and vibrations that simulate a user's actual presence in a virtual environment [25].
Virtual reality hardware includes a variety of devices and equipment that are used to create and experience VR environments. One of the most important pieces of hardware is the head-mounted display (HMD), which is worn on the head like a headset and contains a screen that displays the VR environment. Additionally, hand-held controllers such as the Oculus Touch, Vive wands or PlayStation Move controllers are used to interact with the virtual environment. Some VR systems also include additional equipment such as room-scale sensors, which track the user's movements and help to create a more immersive experience. Other VR systems use external cameras, like the ones in the PlayStation VR, to track the position of the headset, while others rely on inside-out tracking, which uses cameras built into the headset itself. Additionally, some VR systems use haptic feedback technology to simulate touch and provide a more realistic experience.
Overall, the hardware required for VR can vary depending on the type of system and the intended use, but it generally includes a combination of head-mounted displays, controllers and tracking equipment.
There are two main types of virtual reality: immersive and non-immersive. Immersive VR is a fully-enclosed, computer-generated environment that completely surrounds the user, blocking out the real world and creating a sense of presence in the virtual environment. This type of VR is typically achieved through the use of head-mounted displays (HMDs) and other equipment such as hand-held controllers, room-scale sensors or haptic feedback technology. Examples of immersive VR include the Oculus Rift, HTC Vive and PlayStation VR. Non-immersive VR, on the other hand, is a less-immersive experience that typically uses a computer monitor or other display to present a VR environment. Users may interact with the virtual environment using a keyboard, mouse or other input device, but they do not wear a head-mounted display or other equipment. Non-immersive VR can be used for a variety of purposes such as education, training or design visualization. Examples of non-immersive VR include Google Street View and Google Earth.
VR technology offers economic potential, and businesses are looking at novel strategies due to the falling costs of software and hardware, together with the variety of applications. Immersive technology usage is increasing in a variety of fields, including education [27,28,29], healthcare [28,30,31] and construction [32,33]. The application of VR in healthcare is shown in Figure 6.
Virtual reality is a novel medium with previously unimagined potential for communication and teaching, among other things. It offers considerable promise for building applications for daily aid and support, notably for schools and the elderly. The medical sector has collaborated with VR businesses to develop tests to evaluate the brain activity of elderly people [35]. Such examinations can help assess a senior's cognitive ability as well as motor performance. VR systems, for example, have been employed to detect early signs of Alzheimer's disease [36]. Elders are assessed to gauge how effectively they reorient themselves in a new setting. This technique is being used by scientists to investigate balancing difficulties. Once these difficulties are identified, the family can implement actions to help avoid accidents in the home. In recent years, VR cycling, as shown in Figure 7, has also been a prominent topic [37]. A stationary bike, sensor package and headset are needed. By activating the competition mode, the user may even define training parameters like time, distance or competitions.
An individual utilizing VR hardware can look around the simulated environment, move about and interact with virtual elements [38]. The effect is typically produced by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but it can also be created through specially designed rooms with multiple large screens. VR typically combines auditory and video feedback but may also permit other kinds of sensory and force input through haptic technology. VR is a simulated experience that can be similar to or completely different from the real world. Other related kinds of VR-style technology include AR and XR. Listed below are some of the latest technological applications of VR.
Virtual reality is gaining popularity in rehabilitation medicine for the treatment of cognitive and neurological problems [40] as well as physiotherapy [41]. The surgeon may direct the robotic arm's motions using virtual reality [42], especially for small, delicate movements that would be challenging for a human surgeon to execute. Remote telesurgery, in which the patient is operated on by a doctor in a different location, is another use [43]. Moreover, virtual reality may facilitate a patient's mobility and help them exercise (as demonstrated in Figure 8), something physical therapy cannot [44]. The main reason for this is that patients in a simulated environment are totally immersed in the reality created by the virtual world. This is why, unlike in physical therapy, they do not focus as much on the experience of bodily pain.
This section offers a brief introduction to the latest application areas as well as applications of virtual reality and presents the connection of each application area with human-robot interaction. Specifically, six major application areas will be mentioned, including virtual reality aided design, VR goggles, driving simulation, medicine, education and virtual machining.
Virtual reality aided design (VRAD) is a concept that is superior to traditional CAx design in that it provides 3D virtual space at all stages of design. In a broad sense, it ranges from 2D drafting and drawing to 3D modeling, which will be discussed below.
2D drafting and drawing is the process of creating and editing technical drawings, as well as annotating designs [45]. During this process, landscaping layouts, floor plans, building inspection plans and building permit drawings are developed by drafters using computer-aided design, as shown in Figure 9. Furthermore, it provides various perspectives on objects: for example, the front view, the top view, the side view and so on. Presently, several programs are available for 2D drafting, such as AutoCAD, DraftSight, Autodesk Inventor, DeltaCad, DesignCAD and DoubleCAD.
3D modeling is the act of using specialized software to create a mathematical representation of the surface of an object in three dimensions by manipulating vertices, edges and polygons in a simulated three-dimensional space [46]. Its product is called a 3D model. A 3D model can alternatively be shown as a two-dimensional image using a technique known as 3D rendering, or it can be utilized in a computer simulation of physical events. 3D modeling software is employed to create 3D models. Specific programs in this category, such as SketchUp, are referred to as modeling apps. Rapid advancements in digital graphics and computing power over the previous two decades have made possible a new generation of three-dimensional virtual resources, which exceed the constraints of more conventional two-dimensional materials [47]. Numerous immersive 3D visualization technologies are available now for tasks including studying neuroanatomy, modeling surgical procedures and planning surgical treatments using patient-specific models.
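The vertex-edge-polygon representation described above is, at its core, a list of 3D points plus faces that index into it; renderers derive quantities such as surface normals from this structure. A minimal sketch for a single triangle (the data and function name are illustrative):

```python
# Sketch: a 3D model as vertices plus polygonal faces, the core data
# structure manipulated by modeling software such as SketchUp.

import math

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]  # each face is a tuple of indices into the vertex list

def face_normal(face):
    """Unit normal of a triangular face via the cross product of two edges."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az   # edge A -> B
    vx, vy, vz = cx - ax, cy - ay, cz - az   # edge A -> C
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

print(face_normal(faces[0]))  # triangle lies in the xy-plane, normal points along +z
```

3D rendering then projects each face onto a 2D image plane, using normals like this one for lighting.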
When VR-aided design and HRI are combined, robots can physically interact with and manipulate virtual models in a VR environment, providing an even more realistic and intuitive design experience. This can be useful in fields such as architecture, engineering and industrial design. Currently, the combination of VR-aided design and HRI remains largely unexplored. However, considerable research has been done on the connection between augmented reality and human-robot collaboration.
VR glasses or goggles, as shown in Figure 10, are gaining popularity in the entertainment and gaming industries. They are lightweight and more convenient to wear than traditional head-mounted displays, and many comprise a variety of interactive devices [50]. These glasses function similarly to 3D goggles in that they present two pictures: ordinary glasses display a single image, whereas VR goggles display two images, one for each eye.
More sophisticated variants of these glasses include head-tracking systems. The glasses are linked to a computer, which updates the visuals seen by the user as they move about their surroundings. These glasses allow the user to view 3D pictures that provide the illusion of depth, as demonstrated in Figure 11. For example, if the user is utilizing VR for architectural purposes, they will be able to see a structure from various angles and travel around or through it.
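The two-image principle behind the depth illusion can be sketched by placing two virtual cameras at the user's eye positions, separated by the interpupillary distance, and projecting the scene once per eye; the horizontal offset (disparity) between the two projections is what the brain reads as depth. The IPD value and the pinhole projection below are simplifying assumptions for illustration:

```python
# Sketch: stereoscopic goggles render one image per eye from two viewpoints
# separated by the interpupillary distance (IPD); the disparity between the
# projections encodes depth.

IPD = 0.064  # metres; a typical adult value, used here for illustration

def eye_positions(head_x, head_y, head_z):
    """Left and right eye camera positions around the tracked head centre."""
    left = (head_x - IPD / 2, head_y, head_z)
    right = (head_x + IPD / 2, head_y, head_z)
    return left, right

def project(point, eye, focal=1.0):
    """Simplified pinhole projection of a 3D point onto an eye's image plane."""
    px, py, pz = point
    ex, ey, ez = eye
    depth = pz - ez
    return (focal * (px - ex) / depth, focal * (py - ey) / depth)

left_eye, right_eye = eye_positions(0.0, 1.7, 0.0)
point = (0.0, 1.7, 2.0)        # a point 2 m in front of the viewer
xl, _ = project(point, left_eye)
xr, _ = project(point, right_eye)
disparity = xl - xr            # shrinks toward zero as the point moves away
```

Head tracking simply re-runs this projection with updated eye positions whenever the headset moves, which is how the linked computer keeps the visuals consistent with the user's motion.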
With the development of VR and robotic technologies, VR glasses are increasingly crucial in facilitating HRI. Cetin discussed potential uses for robotic technology and smart glasses in banking activities in [51]. In [52], Hachaj presented a head motion-based robot control system utilizing VR glasses, which was developed using a variety of cutting-edge software and hardware technologies. It turns out that VR goggles are conducive to controlling robots in human-robot interaction, making more advanced human-robot cooperation possible.
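A head motion-based control scheme of this general kind maps the headset's tracked orientation onto robot motion commands. The mapping below is a minimal illustration (the dead-zone threshold and command names are assumptions, not taken from [52]):

```python
# Sketch: map VR-headset head orientation (yaw and pitch, in degrees) to
# simple robot motion commands, with a dead zone so that small involuntary
# head movements do not move the robot.

DEAD_ZONE = 10.0  # degrees; illustrative threshold

def head_to_command(yaw, pitch):
    """Return a robot command for the current head orientation."""
    if pitch > DEAD_ZONE:
        return "tilt_up"
    if pitch < -DEAD_ZONE:
        return "tilt_down"
    if yaw > DEAD_ZONE:
        return "turn_right"
    if yaw < -DEAD_ZONE:
        return "turn_left"
    return "hold"

print(head_to_command(25.0, 0.0))   # head turned right past the threshold
print(head_to_command(0.0, -3.0))   # inside the dead zone: robot holds still
```

In a real system the commands would be streamed continuously at the headset's tracking rate, and proportional control would usually replace this bang-bang mapping.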
The capacity to construct driving simulations that allow users to be put in dangerous driving situations without actual danger is a main use of VR simulations within the automotive sector as well as for driving instruction [53]. Driving simulators can be helpful for a variety of purposes, such as gathering data about driving behaviors or instructing rookie drivers in a setting with low stress. Young or inexperienced drivers can be trained via VR driving simulators, as shown in Figure 12, which can also help them learn from their errors or identify problematic driving behaviors that need to be changed. In a simulation, drivers can be put inside a virtual car in a setting that resembles a metropolis, and their behaviors can be observed and recorded for later review of any problems or errors, or to determine whether drivers made the right choices in a particular circumstance [54]. Motorists may learn from their errors and receive advice on how to behave better in real-world driving situations after completing the exercise. These driving simulations may also be helpful for young drivers with neurodevelopmental conditions like autism spectrum disorder who often have trouble learning in an unrestricted setting [55].
The ability to gather actual data as to how users respond to various events while driving in a simulated environment is another purpose for VR driving simulators. With VR simulations continually offering options for secure and effective data collection as well as user testing, autonomous vehicles (AV) will remain a developing area of technology [57]. Building trust between drivers and autonomous cars and knowing how to lessen the mistrust most users have are major problems in the area [58]. Users must be able to trust the AV in order for drivers to gain control when necessary. In light of this, placing motorists in a virtual world where they interact with the vehicles may provide a significant quantity of data on how users act in that environment while also guaranteeing that they are at ease and can become used to operating an AV [59]. Though much attention has been focused on VR driving simulation, the combination of VR and robotic technologies is still an unexplored field.
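Gathering reaction data of this kind amounts to time-stamping scripted simulator events against the driver's subsequent inputs. A minimal illustrative logger (the event names, times and pairing rule are made up for the sketch):

```python
# Sketch: measure a driver's reaction time to scripted simulator events by
# pairing each event timestamp with the first driver input that follows it.

def reaction_times(events, inputs):
    """events, inputs: lists of (timestamp_seconds, label), sorted by time."""
    results = {}
    for t_event, label in events:
        for t_input, action in inputs:
            if t_input >= t_event:
                results[label] = round(t_input - t_event, 3)
                break
    return results

# A pedestrian steps out at t = 5 s; the lead car brakes at t = 12 s.
events = [(5.0, "pedestrian_crossing"), (12.0, "lead_car_brakes")]
inputs = [(5.8, "brake"), (12.6, "brake")]
print(reaction_times(events, inputs))
```

Aggregated over many trials and drivers, such records are the raw material for the trust and handover studies on autonomous vehicles mentioned above.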
The first application of VR in healthcare took place in the early 1990s, with the use of VR to visualize complex medical data during surgery and to preoperatively plan surgical procedures [61]. Many scientists now portray VR as a computer-generated, three-dimensional representation of the actual world where groups of real people may interact, develop goods and services and generate actual financial value through online commerce [62]. When force-feedback haptic devices are used, haptic technology may be integrated into virtual reality to create a more realistic simulation. By exerting pressure, creating vibrations or moving objects against the user, haptic technology simulates the tactile experience. It is crucial to acquire a complete understanding of the bony structures of the surgical target before performing any actual surgery [63]. This is especially important in plastic surgery, since the majority of clinical outcomes are directly related to the patient's outward appearance. As visual effects and sensor technology have advanced, VR and AR have emerged as technologies that may open up new avenues for the advancement of the diagnostic and surgical methods applied in cosmetic surgery and plastic surgery. Moreover, VR and AR technologies may provide medical curriculum and necessary training for medical students in an authentic but safe environment, as shown in Figure 13.
The sudden onset of the pandemic has caused widespread disruptions in healthcare training and education for residents and medical students around the world, including a decline in surgical treatments and a switch to conservative methods, as well as limitations on physical attendance at conferences and workshops [64]. As a result, the application of computer technology to sustain medical education and care is being adopted more quickly and creatively than ever before in order to comply with social distancing rules. The use of extended reality (XR) within digital transformation modalities is advancing quickly in the sphere of medical education. Because XR is computer-based, it enables learning practices that are not feasible in the actual world [65]. Consequently, medical education has to be modified. The progress of science and technology, shifts in society and students, and new educational demands and requirements all require that medical education be modified. For resident doctors and medical students, who are referred to as digital natives, it is becoming necessary in medical training to provide a highly immersive and interactive teaching experience and learning environment employing immersive technology with XR techniques [66].
Virtual reality is closely related to human-robot interaction and collaboration, especially in robotic surgery. Bric and colleagues conducted a review of the literature on the use of virtual reality simulation to learn robotic surgical techniques using the da Vinci Surgical System in [67]. In [68], Moglia et al. conducted research on the usefulness of virtual simulation training in robotic surgery. Lee and Lee studied how proficient trainees became with a contemporary VR simulator for self-learning and whether additional mentorship improved skill learning, skill transfer and cognitive load in robotic surgical simulation training in [69]. In the future, it is expected that VR will play a more crucial role in robotic surgery.
Virtual reality has extended the boundaries of education [70]. Allowing for a 3D perspective of educational information strengthens the learning process, which often results in higher grades and greater application of knowledge in practice. Specifically, VR assists pupils in better understanding lectures and detecting errors that they might otherwise overlook. Furthermore, conventional pedagogies can be plain and rather dull; introducing technology into the mix transforms the experience. VR and education are thus a winning combination, and the process is indicated in Figure 14.
The usage of VR in education has been shown to be able to promote higher level thinking, student engagement and interest [71], information acquisition [72] and cognitive habits and comprehension that are typically beneficial in an academic sense.
However, VR still has drawbacks in educational use. One prominent problem is addiction. Like other technology products, VR can be seductive, and some pupils may succumb to its lure. If students believe that what they see in the simulated world is superior to reality, they may spend all day in it, and their studies suffer as a consequence.
Virtual reality can be combined with human-robot interaction in many areas, especially in education. Eliahu and colleagues reviewed contemporary AR and VR concepts and their applicability to robot-assisted spine surgery [74]. Lo et al. in [75] created a series of virtual reality human-robot interaction technology acceptance models for studying direct current and alternating current, with the goal of immersing students in the generation, existence and flow of electricity using VR technology. Chen et al. developed an application system that uses robots to train English-language local guides. It used the AI Unity plug-in for coding to create material for tours and an immersive 3D environment utilizing AI and VR technology in [76]. The latest research listed above demonstrates a close combination of VR and robotic technologies.
The process of utilizing computers to model and simulate the operation of machine tools for part production is known as virtual machining [77]. In VR systems, such activity simulates the behavior, and the errors, of real-world machining. This can provide beneficial methods for manufacturing items without physical testing on the shop floor; accordingly, the cost and time spent in part production can be reduced.
Virtual machining has several advantages in production. For instance, simulated machining in virtual settings identifies problems without wasting resources, damaging machine equipment or endangering employees' safety, as is proposed in [78]. Moreover, virtual inspection systems such as surface finish, surface metrology and waviness can be applied to the simulated parts in virtual environments to increase accuracy, as is shown in [79]. The virtual machining system may also be utilized in the process planning of machining operations by taking into account the most appropriate steps of machining operations in terms of time and cost of component manufacturing [80].
Virtual machining can be combined with human-robot interaction in many ways. Slavkovic et al. presented a sophisticated virtual robot machining model, as part of a digital twin, for modeling a modified tool path produced by a compensation algorithm for cutting-force errors caused by robot compliance in [81]. Nilsson et al. proposed an approach to virtual machine vision utilizing RobCad, a commercial computer-aided robotics package [82]. Zivanovic et al. presented a way to implement a new programming technique based on the STEP-NC standard in machining operations utilizing various computer numerical control machines and robots. These applications indicate that human-robot interaction is becoming an inseparable part of virtual machining.
Virtual reality has arisen as a new method of improving human-robot interaction. A growing number of researchers in HRI have recently revealed how VR facilitates improved interactions between humans and robots. Specifically, great emphasis has been placed on medicine, education and industry when it comes to the combination of VR and HRI. For each of these three major application areas, we examine several pressing issues and summarize how VR applications in HRI help address them.
A substantial body of research has addressed the application of virtual reality in the medical area, especially in robotic surgery, also called robot-assisted surgery. This part reviews the latest articles relating to the use of VR in robotic surgery and examines how the two are combined in practice.
With technologies like robots, advanced image guiding, augmented reality, virtual reality and artificial intelligence striving to influence the future of spine surgery, surgical innovation is at an all-time high. Eliahu and colleagues reviewed contemporary virtual reality concepts and their potential in robot-assisted spine surgery in [75]. Chen et al. in [83] suggested a virtual and real registration strategy based on enhanced identification as well as a robot-assisted method to enhance the precision of virtual and real registration, as shown in Figure 15. It turns out that a new layer of information for real-time intraoperative feedback, preoperative planning and trainee education is added when augmented and virtual reality are combined with existing image-guidance systems and robots.
In [84], Wen and colleagues proposed an interactive tablet-based augmented reality surgical guidance system with a Kinect sensor for 3D localization of the patient's skeletal architecture and perioperative 3D navigation of the surgical instrument. They showed that the technology could deliver mobile augmented guidance and interactivity for surgical tool navigation.
The use of simulation in surgical training has become more and more important. In [85], Lefor developed a virtual reality simulator for training in robot-assisted liver surgery that provides kinematic data, in a common format, for further analysis and development. It is the first procedural simulator for liver surgery. The simulator will continue to evolve; future versions may include interchangeable tools, such as stapling devices, and an expansion of the simulated liver to the right lobe.
For robot-assisted interventional surgery, Shi in [86] created a virtual reality interventional training system consisting of two components: the master side and the VR simulator. They completed the catheter interface simulation and virtual force feedback for the proposed system. According to the experimental results, the haptic force interface can supply precise force to the operator, and the motion precision is sufficient for robot-assisted interventional surgery.
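Virtual force feedback of this kind is commonly rendered with a spring-damper contact model. The sketch below is purely illustrative (the function name, stiffness and damping values are our own example choices, not taken from [86]):

```python
# Illustrative spring-damper model for rendering virtual contact forces
# on a haptic master device (hypothetical sketch, not from [86]).

def haptic_force(penetration_m: float, velocity_m_s: float,
                 k: float = 500.0, b: float = 5.0) -> float:
    """Return feedback force in newtons for a tool penetrating a virtual wall.

    k: virtual wall stiffness (N/m); b: damping (N*s/m) -- example values.
    Force is zero when the tool is outside the wall (penetration <= 0).
    """
    if penetration_m <= 0.0:
        return 0.0
    return k * penetration_m + b * velocity_m_s

# 2 mm penetration while pushing inward at 0.01 m/s:
force = haptic_force(0.002, 0.01)  # 500*0.002 + 5*0.01 = 1.05 N
```

In a real system the force would be recomputed at the haptic update rate (typically around 1 kHz) and sent to the device's actuators.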
In [87], Berges et al. investigated the relationship between trainee performance in virtual reality simulation, overall intraoperative performance during robotic-assisted laparoscopic hysterectomy operations and suturing performance during the case's vaginal cuff closure component. The study's findings highlight the importance of developing intraoperative evaluation tools that can differentiate between various but similar skill levels.
The above research shows the benefits of using virtual reality in robotic surgery, that is, improving the surgical field view of doctors and reducing errors brought by negligence. Also, virtual reality creates a perfect simulation of robotic surgery through which medical students, trainees and surgeons are likely to enhance their performance in practice.
Sanitation and healthcare professionals around the world are risking their lives to perform their duties during the present COVID-19 pandemic. Robotics and virtual reality can reduce many of the problems caused by the epidemic, and this possibility is receiving growing attention in academia.
Global healthcare delivery has been significantly disrupted by the COVID-19 epidemic, yet little research has examined the probable decline in surgical abilities during the epidemic and this surgical hiatus. In [88], Der et al. carried out a study in which surgeons underwent a series of robotic virtual reality exercises both before and after a required 6-week surgical break. Sixteen surgeons performed the VR sessions before and after the COVID-19 shutdown, and the researchers gathered thorough surgical performance data in a controlled setting. The results showed that suturing success is related to the long-term restoration of urinary continence following robotic prostatectomy; on this basis, the authors assessed the decline in suturing ability caused by the COVID-19 hiatus.
The pandemic's effects are driving the hotel sector's recent growth in automation and the use of service robots. Parvez et al. in [89] looked at how service workers felt about service robots. Structural equation modeling was used to evaluate data collected from 405 service workers in the United States of America via Amazon's MTurk service. The findings showed that service workers' perceptions of robot-induced unemployment are strongly influenced by how socially adept robots are seen to be.
The obstacles to providing care to elderly people with developmental disabilities include those relating to companionship, medication intake and fall monitoring, among other things. The creation of a VR robot simulator for a socially assistive mobile robot designed to aid the elderly and individuals with developmental disabilities in achieving a better level of independence was suggested by Alves et al. in [90]. Their findings show that the virtual sensor has detection capabilities comparable to those of the genuine sensor, confirming that the simulated data may be used for testing in the real world.
In [91], Fujihara and Ukimura looked into the influence of virtual reality technology on urological procedures, especially in the treatment of prostate and kidney cancer. VR technologies are being used more and more in the diagnosis and treatment of prostate cancer, including targeted focal therapy, robot-assisted radical prostatectomy, surgical training using a simulator and magnetic resonance imaging/ultrasound fusion biopsy.
In the context of COVID-19, serious games have been created for virtual reality settings as a way to make therapy more effective and entertaining. Along with these rehabilitation activities, robotic devices have been used to assist the user in performing exercise actions. In [92], Villamizar et al. briefly presented a mechanical device for lower-limb rehabilitation and its control scheme, showing the development of a VR racing-game experience. It was also found that the VR application needs to be updated to include more elements that can increase user immersion and lower the likelihood of motion sickness, such as first-person views with cockpits used as rest frames.
The present system of surgical training requires that beginning surgeons take certain classes, see some procedures and perform their first surgeries under the close supervision of an experienced surgeon. Due to the necessity of physical contact during this kind of medical training, all parties engaged, including both beginner and experienced surgeons, run the danger of contracting a virus. In order to lessen physical contact in medical facilities during the COVID-19 pandemic and other crises, Motaharifar et al. in [93] reviewed recent technological advancements as well as new fields in which assistive technologies might offer a workable solution. It has been found that the danger of infection is lower with haptic-based collaboration than it is with standard collaboration approaches since it does not call for physical touch between the participants.
It is envisaged that the COVID-19 crisis will generate enough drive for these cutting-edge medical teaching techniques to steadily boost their uptake among medical campuses, and the advantages of combining VR and HRI will be further explored and studied.
For robotic surgery, VR can play a crucial role in demonstrating the surgical process and teaching trainees. The function of virtual reality to provide data in the teaching and training process in other fields related to HRI has also been widely studied.
In response to the COVID-19 pandemic in 2020, schools and colleges are working harder to make the most of educational resources and offer opportunities for distance learning. Many novel approaches, such as virtual reality, extended reality and human-robot interaction are brought up. In [94], Sanfilippo suggested a novel approach to multi-sensory learning in STEM education by combining VR and XR with haptic wearables. The suggested haptic-enabled framework appears to increase student engagement and the sense of presence, according to the results. In [95], Shahab et al. investigated the potential of implementing virtual music education for children with autism in treatment/research facilities without any need to buy a robot, offering the ability to deliver schedules on a greater scale and at a reduced cost. The small number of participants was the study's primary drawback, and there were some issues with the choice of appropriate research subjects.
The major virtual acupuncture techniques now in use do not permit operators to interact naturally in a wide space. To fill the gap, Du et al. in [96] proposed a portable natural human-robot interaction interface for virtual Chinese acupuncture education, including an automatic hand-tracking technique and an acupuncture interaction strategy combining vision and force feedback. From the perspective of interaction, the proposed strategy was shown to be more natural and effective in the virtual acupuncture teaching environment, allowing operators to feel more immersed.
In [76], Lo and colleagues developed a model for acceptance of VR HRI technology that aims to use VR to immerse students in the generation, existence and flow of electricity. The results show that adopting the model offers a high degree of comprehension and interaction. Because the goal of this study was to build a technology acceptance model suited to secondary school students' learning habits and curriculum, the development and creation of these materials required a significant amount of effort. Therefore, more attention should be invested in improving the technology acceptance model in the future.
Kim et al. utilized the training system and found potential beneficial effects in characterizing the brain activation patterns in [97], and the process is shown in Figure 16. As part of the procedure, the participants saw a VR film of robotic action to help them with their motor imagery of reaching motion. The recommended training approach could aid in brain activity optimization and adaptability, allowing the BMI to eventually serve realistically as an assistive tool.
Szafir presents a robot interface that stresses the need to consider data visualization for supporting analysis and decision-making processes in [98]. His work advocates for the necessity of more interaction between visualization and HRI. Such interdisciplinary cooperation will result in mutually beneficial innovations that support the development of data-centric HRI interfaces for more effective processing of the enormous amounts of information generated by robots.
Modern robotics education has benefited from using virtual reality experiences to teach concepts. Naceri et al. studied depth perception in action space, as shown in Figure 17, while keeping the target item's angular size fixed in [99]. This was accomplished by making the target item physically larger as the distance between it and the viewer grew.
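Keeping the target's angular size fixed while the distance grows, as in the study above, follows directly from the visual-angle relation s = 2·d·tan(θ/2). A small sketch of the required scaling (function name is our own; the relation itself is standard geometry):

```python
import math

def physical_size_for_angle(distance_m: float, visual_angle_deg: float) -> float:
    """Physical width an object must have to subtend a fixed visual angle
    at a given viewing distance: s = 2 * d * tan(theta / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(visual_angle_deg) / 2.0)

# Doubling the distance doubles the required physical size,
# so the object's retinal (angular) size stays constant:
s1 = physical_size_for_angle(2.0, 5.0)  # size needed at 2 m for a 5-degree target
s2 = physical_size_for_angle(4.0, 5.0)  # size needed at 4 m for the same angle
```

Because the relation is linear in distance, a VR scene can scale the target mesh proportionally to its depth to remove angular size as a depth cue, which is exactly the manipulation described.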
Immersive virtual reality and socially assistive robots are interactive platforms that foster user involvement, which can encourage users to follow therapeutic frameworks.
The global COVID-19 epidemic has prevented older persons from accessing community-based cognitive training, so it is crucial to implement home-based training methods that keep them interested and motivated over time. Cohavi et al. delivered such cognitive training by adapting immersive virtual reality (iVR) and socially assistive robots (SARs) in [100].
One of the biggest challenges for education is the growing gap between the demand for knowledge and the availability of schooling, and the combination of virtual reality and robots is narrowing the gap. Agüero et al. demonstrated the software framework developed to enable cloud-hosted robot simulation [101]. This framework addresses the challenges of managing the Defense Advanced Research Projects Agency's Virtual Robotics Challenge, a task-oriented and real-time robot competition that aims to mimic reality. Samuels and Haapasalo explored the topic of improving mathematics instruction through educational robotics kits and virtual robotic animations by advocating their simultaneous deployment during the transition from middle school to university [102]. This method could also be a good complement to more conventional classroom teaching and learning methods. It is hoped that this would improve students' attitudes about their present and future mathematical studies and better prepare them for careers in the field. In [103], Alsoliman investigated the experiences of eighth-grade pupils and their instructors in 5 distinct K-12 schools which have already formally introduced physical robotics into STEM (science, technology, engineering and mathematics) teaching. It is crucial to draw attention to the dearth of empirical research investigating and evaluating the effects of virtual robotics on STEM or its effects on students' future interests, fulfilment and career choices. Virtual robotics' application to enhance STEM-related knowledge and abilities has not yet been subject to studies incorporating the quantitative evaluation of empirical efficacy. Based on the research, we find that virtual robotics can be used in tandem with a physical robot to assist students to build confidence in repetitive programming, raise enthusiasm for educational robotics and give instructors a very flexible teaching alternative in the future.
Virtual reality serves as an efficient tool in smart manufacturing and virtual machining, as it can simulate human-robot collaboration in a cheap and safe way, and numerous papers relating virtual reality, Industry 4.0 and robots have been published in recent years.
Humans could work with collaborative robots to complete boring, hazardous or otherwise risky activities. The safety and comfort of human operators, however, significantly restrict the interaction between people and robots. This issue may be resolved by combining VR and cobots. Sergi Badia et al. provided a review of virtual reality in the simulation of collaboration environments and the employment of cobot digital twins [104]. The findings of the paper show that VR has the potential to alter how cobots are developed, tested and trained in a variety of settings, from business to healthcare to space missions. In [105], Palmieri and his coworkers provided details of a VR application that they used to test a collision-prevention control method they had created for the redundant collaborative robot KUKA iiwa LBR. Future developments could include the creation of more accurate workstation models based on actual industrial test scenarios, along with the incorporation of motion capture systems or external optical sensors to precisely digitize the operator's movements and increase the system's perceptual capability.
In an effort to keep up with market changes, industries have developed hybrid human-robot collaborative stations that offer quick throughput speeds as well as the flexibility to perform a variety of jobs. The idea put forth by Togias et al. in [106] describes a teleoperation-based approach for the process design and control of industrial robots using virtual reality. The primary goal is to minimize the time and effort needed to repurpose the robot operation without the physical presence of a robot operator on the production floor.
ROS Reality, a VR teleoperation package, was created by Rosen et al. in [107]. They described how the package enables a ROS-networked robot, such as Baxter from Rethink Robotics, to connect bilaterally with an HTC Vive via the Unity game engine over the Internet.
The difficulties in collaboration between human operators and industrial robots for assembly activities were examined by Papanastasiou et al. in [108] with an emphasis on safety and streamlined interaction. A case study incorporating perceptual technologies for the robot and wearables employed by the operator was presented.
Although industry and academia are rapidly adopting the new paradigm of integrating robots and people in hybrid production systems, there are still no effective ways to program such robotic cells. In [109], Makris developed the idea of creating dual-arm robot motion by emulating human behavior in a virtual setting to bridge the gap. MATLAB is used to implement the suggested approach on a software tool.
In [110], Simões et al. presented a thorough literature survey on designing human-robot collaboration workspaces for people and robots in industrial settings. The study indicates the necessity of an integrated, interdisciplinary approach to collaborative workplace design to optimize human involvement in the decision-making process and to foster well-being and high standards of work.
Dianatfar and colleagues in [111] surveyed the current status of virtual and augmented reality solutions in human-robot collaboration (HRC) through meta-analysis and highlighted missing components. The review shows that advancements in VR and AR technology might provide a more authentic setting for users to comprehend instructions and ambient circumstances, and that these technologies could be used in real-world and industrial scenarios. Future initiatives could concentrate on taking users' safety into account, and further research is required on how different industrial jobs interact with larger robots.
The gaze input modality has proven to be a simple yet effective human-computer interaction technique for a variety of applications. A novel immersive eye-gaze-guided camera (seen in Figure 18), which can smoothly direct the motions of a camera mounted on an unmanned aerial vehicle from the eye-gaze of a remote user, was proposed by BN et al. in [112]. The camera's video feed is relayed onto a head-mounted display equipped with a pair of binocular eye trackers. To assess the suggested framework, a user study was conducted considering static and moving targets in a 3D space. The proposed GazeGuide substantially outperformed the remote controller, according to the quantitative and qualitative findings.
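The core idea of gaze-guided camera control, turning a gaze point on the display into camera motion, can be illustrated with a simple rate-control mapping. This is our own sketch, not the GazeGuide algorithm; the deadzone and gain values are arbitrary example choices:

```python
# Illustrative rate-control mapping (not from [112]): a normalized gaze
# point on the image plane drives pan/tilt rates for a camera gimbal.

def gaze_to_pan_tilt(gaze_x, gaze_y, deadzone=0.1, gain_deg=30.0):
    """gaze_x, gaze_y in [-1, 1], with (0, 0) at the image center.

    Gaze inside the central deadzone leaves the camera still; outside it,
    the camera pans/tilts at a rate proportional to the offset,
    ramping from 0 deg/s at the deadzone edge up to gain_deg deg/s.
    """
    def axis(v):
        if abs(v) < deadzone:
            return 0.0
        sign = 1.0 if v > 0 else -1.0
        return sign * gain_deg * (abs(v) - deadzone) / (1.0 - deadzone)
    return axis(gaze_x), axis(gaze_y)

pan, tilt = gaze_to_pan_tilt(0.55, 0.0)  # user looks halfway to the right edge
```

The deadzone prevents the natural jitter of fixations from moving the camera, while the proportional region gives smooth, continuous steering, two properties any practical gaze interface needs.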
In [113], Higgins and colleagues explain how to use HRI in VR to instruct a robot. The ultimate objective of this endeavor is to increase the capacity to collect data from various groups and in various settings. They do this by developing VR settings where a person may instruct a robot about items using simulated perceptual data as well as language and gestures. VR simulators make it possible to test engineering systems and robotic solutions in secure settings; to this end, they must execute algorithms in real time to properly mimic the predicted behavior of real-world processes.
In [114], Bustamante et al. set out to find a good OpenCV setup for processing images from a Unity-created virtual unmanned aerial vehicle. To avoid potential pitfalls like delays and bottlenecks, the focus was on comparing two methods of integrating video processing. The investigation showed that running the Python module in parallel does not overload the Unity main thread and that it performs better than the C++ plugin in real-time simulation.
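The pattern at issue, keeping vision processing off the main render loop so it never blocks, can be sketched in pure Python. This is hypothetical stand-in code, not the authors' implementation; `sum(frame)` stands in for an OpenCV detection call:

```python
# Sketch of off-main-thread frame processing: the main (render) loop hands
# frames to a worker thread through a bounded queue instead of processing
# them inline. Hypothetical names; not the code from [114].

import queue
import threading

frame_queue = queue.Queue(maxsize=8)  # bounded buffer keeps latency in check
results = []

def vision_worker():
    """Consumes frames off the main thread (stand-in for OpenCV work)."""
    while True:
        frame = frame_queue.get()
        if frame is None:            # sentinel: shut down cleanly
            break
        results.append(sum(frame))   # placeholder for a detection/feature call

worker = threading.Thread(target=vision_worker)
worker.start()

for frame in ([1, 2], [3, 4], [5, 6]):  # the main loop stays free to render
    frame_queue.put(frame)  # a real-time loop would use put_nowait and drop frames

frame_queue.put(None)
worker.join()
# results now holds one output per frame, computed off the main thread
```

The bounded queue is the key design choice: if the vision worker falls behind, a real-time producer drops frames rather than letting latency grow without bound, which matches the delay-and-bottleneck concern the study raises.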
In [115], Hjorth et al. reviewed the landscape of human-robot collaborative disassembly and surveyed progress in the field. It was discovered that the majority of currently conducted applied research on industrial automated disassembly systems is concentrated on consumer electronics (i.e., TVs and smartphones). Many of these systems frequently resort to destructive operations to separate sub-assemblies and components for remanufacturing purposes, which leads to ineffective procedures.
In [8], Gallala et al. presented a use-case scenario of the proposed approach using Microsoft HoloLens 2 and the KUKA iiwa collaborative robot. The results showed that even with operators who lack programming experience, effective human-robot interactions are still achievable employing these cutting-edge technologies.
Testing human-robot interaction (HRI) underwater is challenging for a number of reasons, including the operator's inconsistent access to bodies of water and the risks involved in submerging a test subject. Virtual reality is seen as an effective technology for submerging users in a virtual environment and enabling interaction. In [116], Rana designed a VR framework for testing human-robot interaction protocols for underwater robots to ensure safe and effective human-machine collaboration. Cruz and colleagues demonstrated a human-robot interface module that incorporates VR technology in [117]. They presented the development and testing of an immersive VR interface to a simulated underwater vehicle as part of the TWINBOT project. As predicted, users prefer this style of interface to alternatives that are more difficult to learn and use. To improve the learning process, they need to update the interface manual and the VR video to make the hardware easier to comprehend.
As virtual reality becomes more common, there is a chance to use it to construct intuitive operator interfaces for interacting with complicated dynamic systems such as humanoid robots. In [118], Wonsick and Padır presented the process of transforming a traditional interface into a VR interface for NASA's Valkyrie humanoid robot. Descriptions of both interfaces have been offered, with an emphasis on the interaction methods used by each and how exploiting the 3D world VR offers can benefit operators over regular 2D displays. Future work will entail additional enhancement of the VR interface as well as user research in order to progress toward more natural and user-friendly interfaces for HRI.
HRI research necessitates careful consideration of experimental design and a large amount of time spent conducting subject experiments. Recent VR technology can mitigate these time and effort issues. In [119], Inamura and Mizuchi proposed a research platform that can provide the components required for HRI research by integrating VR and cloud technologies. This study addressed four major challenges: lowering the cost of experimental environment design, providing participants with the same experimental settings, reproducing previous experiences and developing a basic system for the intuitive interface of robot teleoperations. In complicated real environments, future intelligent robots will be expected to demonstrate profound social behaviors; as a result, a dataset for learning social behaviors and assessing competence should be created.
In [120], Prati et al. presented an integrated methodology for the design of human-robot collaborative workstations on industrial shop floors. The suggested method has been applied in two real-world industrial case studies, one involving the design of an intensive warehousing system and the other the design of a collaborative assembly workstation for the automotive sector. Future research will aim to improve the technique and propose one that is more specific, supported by data and experimental findings and incorporating larger user samples. This step will enable the suggested technique to be applied practically to the design of actual industrial environments.
From the papers listed, we may conclude that virtual reality is being employed to enhance the control of robots, not only in medical practice, education and manufacturing but also in many other areas. Within the context of COVID-19, there is a growing need for virtual reality combined with robots to replace human labor, and the influence of these technologies will continue to expand.
Virtual reality is an advanced technology that can generate virtual environments to simulate the real world or to visualize phenomena invisible to the human eye. Researchers are now able to substantially integrate mixed reality into human-robot interaction as a result of recent developments in augmented and virtual reality, which have considerably expanded the range of what is conceivable in mixed reality settings [121]. By reviewing over one hundred papers on virtual reality applications in human-robot interaction, we have summarized eight key advantages for VR innovations.
1). Immersive experience: Virtual reality technology can be used to create an immersive experience for users interacting with robots in a manufacturing or assembly line setting. For example, a virtual reality simulation can be created to simulate the assembly of a car, allowing users to assemble the car in a virtual environment before working with the real car.
2). Remote interaction: Virtual reality technology can be used to enable remote interaction with robots in different locations. It allows an operator in one location to remotely control a robot in another location, enabling the operator to interact with the robot and complete tasks remotely.
3). Increased safety: Virtual reality simulations can be used to train users on how to safely interact with robots, reducing the risk of accidents and injuries. For instance, a virtual reality simulation may be developed to teach users how to interact securely with the robots in a dangerous setting, such as a nuclear power plant.
4). Cost-effectiveness: Virtual reality simulations can be a cost-effective alternative to training with real robots. They can be used to train surgeons on a new surgical procedure, allowing them to practice the procedure in a virtual environment before performing it on a real patient.
5). Realistic training: Virtual reality simulations can provide realistic training environments that closely replicate real-world conditions. They can simulate a search-and-rescue operation, allowing users to practice navigating difficult terrain and interacting with search-and-rescue robots in a realistic environment.
6). Improved communication: Virtual reality technology can be employed to improve communication between humans and robots. It allows users to interact with a robot using natural gestures and movements, such as pointing or nodding.
7). Emotional interaction: Virtual reality technology can be used to create emotional interactions between humans and robots. For example, a virtual environment can be generated to simulate a conversation between a human and a robot, where users can interact with the robot in a way that is emotionally engaging.
8). Customizable: Virtual reality simulations can be customized to suit the needs of different users and tasks. For example, a virtual reality simulation can be used to train a particular skill, such as welding or painting, and customized to match the specific tools and equipment that the user will be using in the real-world scenario.
Nevertheless, there are still many challenges for virtual reality to be fully applied in intelligent manufacturing, with seven major problems listed below.
1). Natural human-robot interaction: One of the main challenges in virtual reality-based human-robot interaction is to make the interaction as natural as possible. This can be a challenge since the robot may not be able to interpret and respond to the user's actions and movements in the virtual environment.
2). Realism: Another challenge is to make the virtual reality experience as realistic as possible. For example, the robot's movements and actions should be as realistic as possible, and the virtual environment should be as detailed and lifelike as possible.
3). Safety: Ensuring the safety of the user while they are interacting with the robot in a virtual reality environment is also a challenge. The robot should be programmed to stop its movements if it detects that the user is in danger or if the user requests it.
4). User acceptance: Another challenge is to make sure that users are comfortable and willing to interact with the robot in a virtual reality environment. This can be achieved by designing the virtual environment and the robot's interactions to be as intuitive and user-friendly as possible.
5). Technical limitations: There are also technical limitations that can make it difficult to create a realistic and natural human-robot interaction in a virtual reality environment. For example, the current technology may not be advanced enough to allow for highly detailed virtual environments or highly realistic robotic movements.
6). Latency: Another challenge is to minimize latency between the user's actions and the robot's response in a virtual reality environment. Latency can cause users to feel disoriented and can make the experience less immersive.
7). Privacy and security: In a virtual reality-based human-robot interaction, it is important to protect the privacy and data security of the user. For example, the robot may have cameras or microphones that collect personal data such as the user's facial expressions, body movements or spoken words. This data could then be shared with third parties without the user's knowledge or consent.
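The safety and latency challenges above are often coupled in teleoperation software: commands are gated both by an emergency stop and by a bound on round-trip latency, so that a stale or unsafe command is never executed. The following is a hedged sketch under those assumptions; the `SafetyGate` class, its threshold and its interface are illustrative, not taken from any existing framework.

```python
class SafetyGate:
    """Block robot commands when round-trip latency exceeds a threshold
    or when the user has triggered an emergency stop (illustrative sketch)."""

    def __init__(self, max_latency_s: float = 0.1):
        self.max_latency_s = max_latency_s  # assumed acceptable latency bound
        self.estopped = False

    def trigger_estop(self) -> None:
        """Latch the emergency stop; all further commands are refused."""
        self.estopped = True

    def allow(self, command_sent_s: float, ack_received_s: float) -> bool:
        """Return True only if no e-stop is latched and the measured
        round-trip latency is within the configured bound."""
        latency = ack_received_s - command_sent_s
        return (not self.estopped) and latency <= self.max_latency_s
```

The 100 ms default is only a placeholder; studies of VR immersion typically demand much tighter motion-to-photon latency, and a production system would also need a watchdog for lost packets rather than relying on acknowledgement timestamps alone.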
By summarizing the benefits and challenges of virtual reality and human-robot interaction, we can anticipate the future direction of this field. VR technology is expected to play an increasingly significant role in human-robot interaction: it can provide a more immersive and realistic experience for users interacting with robots, allowing for more natural and intuitive communication. Additionally, VR can be employed to improve the training and programming of robots by providing a virtual environment in which to learn and practice tasks. As robots become more personalized, interactive and engaging than ever, and as virtual reality innovations become more widespread, we may foresee the wide adoption of VR in controlling robots to fulfill various tasks in hospitals, schools and factories. Specifically, in the face of the COVID-19 pandemic, the need for safe working and learning environments will promote the prevalence of VR and HRI. As VR technology continues to improve and become more widely adopted, we can expect it to be used more frequently in human-robot interaction, leading to more advanced and sophisticated robots that are better able to understand and interact with humans. Virtual reality and human-robot interaction are likely to become a popular research trend in the medical area and to play a crucial role in diagnosis, surgery and rehabilitation. Still, some technical problems remain to be solved, and overcoming these bottlenecks may become the top priority of future work.
This paper presents the most recent advances in the application of VR to the field of human-robot interaction and summarizes how VR technology is utilized to assist HRI. It is found that VR technology is gaining increasing importance in assisting HRI and plays a major role in education, manufacturing and surgery. Like AI and similar technologies, VR has many benefits and could, when fully developed, be integrated into our everyday lives. Specifically, VR can provide a more immersive and intuitive way for humans to interact with robots, as it allows users to experience a realistic simulation of the robot's environment and movements. This can make it easier for users to understand and control the robot, as well as to train and test the robot's capabilities. VR can also be used to enhance social interactions between humans and robots, making robots more human-like and easier to communicate with, which is particularly useful in fields such as education, therapy and entertainment. Also, the demand for safe working and learning environments in the face of the COVID-19 pandemic will increase the prevalence of VR and HRI. The combination of virtual reality and human-robot interaction, which is projected to play a critical role in diagnosis, surgery and rehabilitation, should become a major research trend in the medical field, and this review may help researchers study the current applications of VR in HRI. Still, there are several challenges, such as the need for more advanced VR technologies to provide more realistic and immersive experiences, the development of more human-like robot models to improve social interactions and the need for better methods of evaluating the effectiveness of VR in human-robot interaction. Ensuring that the technology is designed with these considerations in mind is crucial for its successful implementation.
The authors declare there is no conflict of interest.
[1] | T. Dardona, S. Eslamian, L. A. Reisner, A. Pandya, Remote presence: Development and usability evaluation of a head-mounted display for camera control on the da Vinci surgical system, Robotics, 8 (2019), 31. https://doi.org/10.3390/robotics8020031 |
[2] | L. Shi, C. Copot, S. Vanlanduit, Gazeemd: Detecting visual intention in gaze-based human-robot interaction, Robotics, 10 (2021), 68. https://doi.org/10.3390/robotics10020068 |
[3] | Y. P. Su, X. Q. Chen, T. Zhou, C. Pretty, G. Chase, Mixed-reality-enhanced human–robot interaction with an imitation-based mapping approach for intuitive teleoperation of a robotic arm-hand system, Appl. Sci., 12 (2022), 4740. https://doi.org/10.3390/app12094740 |
[4] | S. Mugisha, V. K. Guda, C. Chevallereau, M. Zoppi, R. Molfino, D. Chabalt, Improving haptic response for contextual human robot interaction, Sensors, 22 (2022), 2040. https://doi.org/10.3390/s22052040 |
[5] | D. Ververidis, S. Nikolopoulos, I. Kompatsiaris, A review of collaborative virtual reality systems for the architecture, engineering, and construction industry, Architecture, 2 (2022), 476–496. https://doi.org/10.3390/architecture2030027 |
[6] | M. Serras, L. García-Sardiña, B. Simões, H. Álvarez, J. Arambarri, Dialogue enhanced extended reality: Interactive system for the operator 4.0, Appl. Sci., 10 (2020), 3960. https://doi.org/10.3390/app10113960 |
[7] | D. Uchechukwu, A. Siddique, A. Maksatbek, I. Afanasyev, ROS-based integration of smart space and a mobile robot as the internet of robotic things, in 2019 25th Conference of Open Innovations Association (FRUCT), IEEE, Helsinki, Finland, (2019), 339–345. https://doi.org/10.23919/FRUCT48121.2019.8981532 |
[8] | A. Gallala, A. A. Kumar, B. Hichri, P. Plapper, Digital Twin for human–robot interactions by means of industry 4.0 enabling technologies, Sensors, 22 (2022), 4950. https://doi.org/10.3390/s22134950 |
[9] | T. B. Sheridan, Human–robot interaction: status and challenges, Hum. Factors, 58 (2016), 525–532. https://doi.org/10.1177/0018720816644364 |
[10] | M. Harbers, J. de Greeff, I. Kruijff-Korbayová, M. A. Neerincx, K. V. Hindriks, Exploring the ethical landscape of robot-assisted search and rescue, in A World with Robots: International Conference on Robot Ethics: ICRE 2015, Springer, (2017), 93–107. https://doi.org/10.1007/978-3-319-46667-5_7 |
[11] | S. A. Akgun, M. Ghafurian, M. Crowley, K. Dautenhahn, Using affect as a communication modality to improve human-robot communication in robot-assisted search and rescue scenarios, arXiv preprint, (2022), arXiv: 2208.09580. https://doi.org/10.48550/arXiv.2208.09580 |
[12] | Y. Qian, S. Han, G. Aguirre-Ollinger, C. Fu, H. Yu, Design, modelling, and control of a reconfigurable rotary series elastic actuator with nonlinear stiffness for assistive robots, Mechatronics, 86 (2022), 102872. https://doi.org/10.1016/j.mechatronics.2022.102872 |
[13] | C. Getson, G. Nejat, The robot screener will see you now: A socially assistive robot for COVID-19 screening in long-term care homes, in 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, Napoli, Italy, (2022), 672–677. https://doi.org/10.1109/RO-MAN53752.2022.9900620 |
[14] | F. Wang, C. Li, S. Niu, P. Wang, H. Wu, B. Li, Design and analysis of a spherical robot with rolling and jumping modes for deep space exploration, Machines, 10 (2022), 126. https://doi.org/10.3390/machines10020126 |
[15] | F. Gul, S. Mir, I. Mir, Coordinated multi-robot exploration: Hybrid stochastic optimization approach, in AIAA SCITECH 2022 Forum, (2022), 1414. https://doi.org/10.2514/6.2022-1414 |
[16] | M. Liu, Design and development of assistive robotic arm, 2022. Available from: https://hdl.handle.net/10356/158578. |
[17] | R. A. Shveda, A. Rajappan, T. F. Yap, Z. Liu, M. D. Bell, B. Jumet, et al., A wearable textile-based pneumatic energy harvesting system for assistive robotics, Sci. Adv., 8 (2022), 34. https://doi.org/10.1126/sciadv.abo2418 |
[18] | D. A. Kumar, K. C. Rath, K. Muduli, F. Ajesh, Design and modeling of virtual robot for industrial application in smart manufacturing assembly line, in Intelligent Systems: Proceedings of ICMIB 2021, Springer, Singapore, (2022), 471–483. https://doi.org/10.1007/978-981-19-0901-6_42 |
[19] | C. S. Crawford, M. Andujar, J. E. Gilbert, Neurophysiological heat maps for human-robot interaction evaluation, in Proceedings of 2017 AAAI Fall Symposium Series: Artificial Intelligence for Human-Robot Interaction AAAI Technical Report FS-17-01, AAAI, Arlington, USA, (2017), 90–93. |
[20] | R. R. Murphy, V. B. M. Gandudi, J. Adams, Applications of robots for COVID-19 response, Robotics, arXiv preprint, (2020), arXiv: 2008.06976. https://doi.org/10.48550/arXiv.2008.06976 |
[21] | T. Manglani, R. Rani, R. Kaushik, P. K. Singh, Recent trends and challenges of driverless vehicles in real world application, in 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS), IEEE, Erode, India, (2022), 803–806. https://doi.org/10.1109/ICSCDS53736.2022.9760886 |
[22] | R. Monterubbianesi, V. Tosco, F. Vitiello, G. Orilisiet, F. Fraccastoro, A. Putignano, et al., Augmented, Virtual and Mixed Reality in Dentistry: A narrative review on the existing platforms and future challenges, Appl. Sci., 12 (2022), 877. https://doi.org/10.3390/app12020877 |
[23] | C. Antonya, S. Butnariu, Preservation of cultural heritage using virtual reality technologies and haptic feedback: A prototype and case study on antique carpentry tools, Appl. Sci., 12 (2022), 8002. https://doi.org/10.3390/app12168002 |
[24] | L. G. Akhmaeva, D. V. Dolgopolov, A. I. Eremeeva, Demographic features of interconnection between VR and gaming experience on consumer market, in Proceedings of the International Scientific Conference "Smart Nations: Global Trends In The Digital Economy", Springer, (2022), 205–212. https://doi.org/10.1007/978-3-030-94870-2_27 |
[25] | D. J. Thomas, Augmented reality in surgery: The computer-aided medicine revolution, Int. J. Surg., 36 (2016), 25. https://doi.org/10.1016/j.ijsu.2016.10.003 |
[26] | S. Y. Lee, J. Y. Cha, J. W. Yoo, M. Nazareno, Y. S. Cho, S. Y. Joo, et al., Effect of the application of virtual reality on pain reduction and cerebral blood flow in robot-assisted gait training in burn patients, J. Clin. Med., 11 (2022), 3762. https://doi.org/10.3390/jcm11133762 |
[27] | L. Freina, M. Ott, A literature review on immersive virtual reality in education: state of the art and perspectives, eLSE, 1 (2015), 133–141. |
[28] | S. Kavanagh, A. Luxton-Reilly, B. Wuensche, B. Plimmer, A systematic review of virtual reality in education, Themes Sci. Technol. Educ., 10 (2017), 85–119. |
[29] | L. Jensen, F. Konradsen, A review of the use of virtual reality head-mounted displays in education and training, Educ. Inf. Technol., 23 (2018), 1515–1529. https://doi.org/10.1007/s10639-017-9676-0 |
[30] | G. Riva, Virtual reality for health care: the status of research, Cyberpsychol. Behav., 5 (2002), 219–225. https://doi.org/10.1089/109493102760147213 |
[31] | A. J. Snoswell, C. L. Snoswell, Immersive virtual reality in health care: systematic review of technology and disease states, JMIR Biomed. Eng., 4 (2019), e15025. https://doi.org/10.2196/15025 |
[32] | W. Thabet, M. F. Shiratuddin, D. Bowman, Virtual reality in construction: a review, in Engineering computational technology, Civil-Comp press, Edinburgh, UK, (2002), 25–52. https://doi.org/10.4203/csets.8.2 |
[33] | M. Bassanino, K. C. Wu, J. Yao, F. Khosrowshahi, T. Fernando, J. Skjærbæk, The impact of immersive virtual reality on visualisation for a design review in construction, in 2010 14th International Conference Information Visualisation, IEEE, London, UK, (2010), 585–589. https://doi.org/10.1109/IV.2010.85 |
[34] | C. P. Ortet, A. I. Veloso, L. V. Costa, Cycling through 360° virtual reality tourism for senior citizens: Empirical analysis of an assistive technology, Sensors, 22 (2022), 6169. https://doi.org/10.3390/s22166169 |
[35] | D. Yu, X. Li, F. H. Y. Lai, The effect of virtual reality on executive function in older adults with mild cognitive impairment: a systematic review and meta-analysis, Aging Mental Health, 1 (2022), 1–11. https://doi.org/10.1080/13607863.2022.2076202 |
[36] | H. B. Abdessalem, M. Cuesta, S. Belleville, C. Frasson, Virtual reality zoo therapy: An interactive relaxing system for Alzheimer's disease, J. Exp. Neurol., 3 (2022), 15–19. https://doi.org/10.1007/978-3-030-78775-2_12 |
[37] | A. Schweidler, "Mixed Reality Bike" – Development and evaluation of a virtual reality cycling simulator, Ph.D thesis, E193-Institut für Visual Computing and Human-Centered Technology, Wien, 2022. |
[38] | R. Radoeva, E. Petkov, T. Kalushkov, D. Valcheva, G. Shipkovenski, Overview on hardware characteristics of virtual reality systems, in 2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), IEEE, Ankara, Turkey, (2022), 01–05. https://doi.org/10.1109/HORA55278.2022.9799932 |
[39] | V. Z. Pérez, J. C. Yepes, J. F. Vargas, J. C. Franco, N. I. Escobar, L. Betancur, et al., Virtual reality game for physical and emotional rehabilitation of landmine victims, Sensors, 22 (2022), 5602. https://doi.org/10.3390/s22155602 |
[40] | A. Kolk, M. Saard, A. Roštšinskaja, K. Sepp, C. Kööp, Power of combined modern technology: Multitouch-multiuser tabletops and virtual reality platforms (PowerVR) in social communication skills training for children with neurological disorders: A pilot study, Child Neuropsychol., 1 (2022), 10. https://doi.org/10.1080/21622965.2022.2066532 |
[41] | G. C. Burdea, Virtual rehabilitation–benefits and challenges, Methods Inf. Med., 42 (2003), 519–523. https://doi.org/10.1055/s-0038-1634378 |
[42] | S. F. Hardon, A. Kooijmans, R. Horeman, M. van der Elst, A. L. Bloemendaal, T. Horeman, Validation of the portable virtual reality training system for robotic surgery (PoLaRS): a randomized controlled trial, Surg. Endosc., 36 (2022), 5282–5292. https://doi.org/10.1007/s00464-021-08906-z |
[43] | B. Seeliger, J. W. Collins, F. Porpiglia, J. Marescaux, The role of virtual reality, telesurgery, and teleproctoring in robotic surgery, in Robotic Urologic Surgery, Springer, (2022), 61–77. https://doi.org/10.1007/978-3-031-00363-9_8 |
[44] | K. Sevcenko, I. Lindgren, The effects of virtual reality training in stroke and Parkinson's disease rehabilitation: a systematic review and a perspective on usability, Eur. Rev. Aging Phys. Act., 19 (2022), 1–16. https://doi.org/10.1186/s11556-022-00283-3 |
[45] | N. Puodžiūnienė, E. Narvyda, Standards for transition from 2D drawing to model based definition in mechanical engineering, Archives, 27 (2021), 351–354. https://doi.org/10.5755/j02.mech.25777 |
[46] | E. Touloupaki, T. Theodosio, Performance simulation integrated in parametric 3D modeling as a method for early stage design optimization—A review, Energies, 10 (2017), 637. https://doi.org/10.3390/en10050637 |
[47] | S. B. Tomlinson, B. K. Hendricks, A. Cohen-Gadol, Immersive three-dimensional modeling and virtual reality for enhanced visualization of operative neurosurgical anatomy, World Neurosurg., 131 (2019), 313–320. https://doi.org/10.1016/j.wneu.2019.06.081 |
[48] | S. T. Puente, L. Más, F. Torres, F. A. Candelas, Virtualization of robotic hands using mobile devices, Robotics, 8 (2019), 81. https://doi.org/10.3390/robotics8030081 |
[49] | S. L. Pizzagalli, V. Kuts, T. Otto, User-centered design for Human-Robot Collaboration systems, IOP Conf. Ser.: Mater. Sci. Eng., 1140 (2021), 012011. https://doi.org/10.1088/1757-899X/1140/1/012011 |
[50] | K. Pasanen, J. Pesonen, J. Murphy, J. Heinonen, J. Mikkonen, Comparing tablet and virtual reality glasses for watching nature tourism videos, in Information and Communication Technologies in Tourism, Springer, (2019), 120–131. https://doi.org/10.1007/978-3-030-05940-8_10 |
[51] | H. Çetin, The impact of ICT patents on OECD countries' banks risk indicators and discussion on the use of robotic communication and smart glasses in the banking sector, Turk. Stud. Econ. Finance Polit., 15 (2020), 799–816. https://doi.org/10.29228/TurkishStudies.42916 |
[52] | T. Hachaj, Head motion–based robot's controlling system using virtual reality glasses, in Image Processing and Communications: Techniques, Algorithms and Applications 11, Springer, (2019), 6–13. https://doi.org/10.1007/978-3-030-31254-1_2 |
[53] | Q. C. Ihemedu-Steinke, R. Erbach, P. Halady, G. Meixner, M. Weber, Virtual reality driving simulator based on head-mounted displays, in Automotive User Interfaces, Springer, (2017), 401–428. https://doi.org/10.1007/978-3-319-49448-7_15 |
[54] | A. El hafidy, T. Rachad, A. Idri, A. Zellou, Gamified mobile applications for improving driving behavior: A systematic mapping study, Mobile Inf. Syst., 2021 (2021), 6677075. https://doi.org/10.1155/2021/6677075 |
[55] | J. Wade, D. Bian, L. Zhang, A. Swanson, M. Sarkar, Z. Warren, et al., Design of a virtual reality driving environment to assess performance of teenagers with ASD, in Universal Access in Human-Computer Interaction. Universal Access to Information and Knowledge, Springer, (2014), 466–474. https://doi.org/10.1007/978-3-319-07440-5_43 |
[56] | J. K. Muguro, P. W. Laksono, Y. Sasatake, K. Matsushita, M. Sasaki, User monitoring in autonomous driving system using gamified task: A case for VR/AR in-car gaming, Multimodal Technol. Interact., 5 (2021), 40. https://doi.org/10.3390/mti5080040 |
[57] | A. M. Nascimento, A. C. M. Queiroz, L. F. Vismari, J. N. Bailenson, P. S. Cugnasca, J. B. C. Junior, et al., The role of virtual reality in autonomous vehicles' safety, in 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), IEEE, San Diego, USA, (2019), 50–507. https://doi.org/10.1109/AIVR46125.2019.00017 |
[58] | L. Morra, F. Lamberti, F. G. Pratticó, S. L. Rosa, P. Montuschi, Building trust in autonomous vehicles: Role of virtual reality driving simulators in HMI design, IEEE Trans. Veh. Technol., 68 (2019), 9438–9450. https://doi.org/10.1109/TVT.2019.2933601 |
[59] | J. Helgath, P. Braun, A. Pritschet, M. Schubert, P. Böhm, D. Isemann, Investigating the effect of different autonomy levels on user acceptance and user experience in self-driving cars with a VR driving simulator, in Design, User Experience, and Usability: Users, Contexts and Case Studies, Springer, (2018), 247–256. https://doi.org/10.1007/978-3-319-91806-8_19 |
[60] | A. D. Pieterse, B. P. Hierck, P. G. M. de Jong, J. Kroese, L. N. A. Willems, M. E. J. Reinders, Design and implementation of "AugMedicine: Lung Cases," an augmented reality application for the medical curriculum on the presentation of dyspnea, Front. Virtual Reality, 1 (2020), 577534. https://doi.org/10.3389/frvir.2020.577534 |
[61] | C. Chinnock, Virtual reality in surgery and medicine, Hosp. Technol. Ser., 13 (1994), 1–48. |
[62] | C. Pensieri, M. Pennacchini, Overview: Virtual reality in medicine, J. Virtual Worlds Res., 7 (2014). https://doi.org/10.4101/jvwr.v7i1.6364 |
[63] | Y. Kim, H. Kim, Y. O. Kim, Virtual reality and augmented reality in plastic surgery: A review, Arch. Plast. Surg., 44 (2017), 179–187. https://doi.org/10.5999/aps.2017.44.3.179 |
[64] | B. Fiani, R. Jenkins, I. Siddiqi, A. Khan, A. Taylor, Socioeconomic impact of COVID-19 on spinal instrumentation companies in the era of decreased elective surgery, Cureus, 12 (2020), 9776. https://doi.org/10.7759/cureus.9776 |
[65] | T. Morimoto, T. Kobayashi, H. Hirata, K. Otani, M. Sugimoto, M. Tsukamoto, et al., XR (extended reality: virtual reality, augmented reality, mixed reality) technology in spine medicine: status quo and quo vadis, J. Clin. Med., 11 (2022), 470. https://doi.org/10.3390/jcm11020470 |
[66] | T. Morimoto, H. Hirata, M. Ueno, N. Fukumori, T. Sakai, M. Sugimoto, et al., Digital transformation will change medical education and rehabilitation in spine surgery, Medicina (Kaunas), 58 (2022), 508. https://doi.org/10.3390/medicina58040508 |
[67] | J. D. Bric, D. C. Lumbard, M. J. Frelich, J. C. Gould, Current state of virtual reality simulation in robotic surgery training: a review, Surg. Endosc., 30 (2016), 2169–2178. https://doi.org/10.1007/s00464-015-4517-y |
[68] | A. Moglia, V. Ferrari, L. Morelli, M. Ferrari, F. Mosca, A. Cuschieri, A systematic review of virtual reality simulators for robot-assisted surgery, Eur. Urol., 69 (2016), 1065–1080. https://doi.org/10.1016/j.eururo.2015.09.021 |
[69] | G. I. Lee, M. R. Lee, Can a virtual reality surgical simulation training provide a self-driven and mentor-free skills learning? Investigation of the practical influence of the performance metrics from the virtual reality robotic surgery simulator on the skill learning and associated cognitive workloads, Surg. Endosc., 32 (2018), 62–72. https://doi.org/10.1007/s00464-017-5634-6 |
[70] | E. A. L. Lee, K. W. Wong, A review of using virtual reality for learning, in Transactions on Edutainment I, Springer, (2008), 231–241. https://doi.org/10.1007/978-3-540-69744-2_18 |
[71] | D. Allcoat, A. von Mühlenen, Learning in virtual reality: Effects on performance, emotion and engagement, Res. Learn. Technol., 26 (2018), 2140. https://doi.org/10.25304/rlt.v26.2140 |
[72] | A. L. Butt, S. Kardong-Edgren, A. Ellertson, Using game-based virtual reality with haptics for skill acquisition, Clin. Simul. Nurs., 16 (2018), 25–32. https://doi.org/10.1016/j.ecns.2017.09.010 |
[73] | V. Román-Ibáñez, F. A. Pujol-López, H. Mora-Mora, M. L. Pertegal-Felices, A. Jimeno-Morenilla, A low-cost immersive virtual reality system for teaching robotic manipulators programming, Sustainability, 10 (2018), 1102. https://doi.org/10.3390/su10041102 |
[74] | K. Eliahu, J. Liounakos, M. Y. Wang, Applications for augmented and virtual reality in robot-assisted spine surgery, Curr. Rob. Rep., 3 (2022), 33–37. https://doi.org/10.1007/s43154-022-00073-w |
[75] | C. M. Lo, J. H. Wang, H. W. Wang, Virtual reality human–robot interaction technology acceptance model for learning direct current and alternating current, J. Supercomput., 78 (2022), 15314–15337. https://doi.org/10.1007/s11227-022-04455-x |
[76] | Y. L. Chen, C. C. Hsu, C. Y. Lin, H. H. Hsu, Personalized English language learning through a robot and virtual reality platform, in Proceedings of Society for Information Technology & Teacher Education International Conference, AACE, San Diego, United States, (2022), 1612–1615. |
[77] | M. Soori, B. Arezoo, M. Habibi, Dimensional and geometrical errors of three-axis CNC milling machines in a virtual machining system, Comput.-Aided Des., 45 (2013), 1306–1313. https://doi.org/10.1016/j.cad.2013.06.002 |
[78] | Y. Altintas, C. Brecher, M. Weck, S. Witt, Virtual machine tool, CIRP Ann., 54 (2005), 115–138. https://doi.org/10.1016/S0007-8506(07)60022-5 |
[79] | C. F. Cheung, W. B. Lee, A framework of a virtual machining and inspection system for diamond turning of precision optics, J. Mater. Process. Technol., 119 (2001), 27–40. https://doi.org/10.1016/S0924-0136(01)00893-7 |
[80] | H. Narita, K. Shirase, H. Wakamatsu, E. Arai, Pre-process evaluation of end milling operation using virtual machining simulator, Jpn. Soc. Mech. Eng. Int. J. Ser. C, 43 (2000), 492–497. https://doi.org/10.1299/jsmec.43.492 |
[81] | N. Slavkovic, S. Zivanovic, B. Kokotovic, Z. Dimic, M. Milutinovic, Simulation of compensated tool path through virtual robot machining model, J. Braz. Soc. Mech. Sci. Eng., 42 (2020), 1–17. https://doi.org/10.1007/s40430-020-02461-9 |
[82] | J. Nilsson, M. Ericsson, F. Danielsson, Virtual machine vision in computer aided robotics, in 2009 IEEE Conference on Emerging Technologies & Factory Automation, IEEE, Palma de Mallorca, Spain, (2009), 1–8. https://doi.org/10.1109/ETFA.2009.5347003 |
[83] | L. Chen, F. Zhang, W. Zhan, M. Gan, L. Sun, Optimization of virtual and real registration technology based on augmented reality in a surgical navigation system, BioMed. Eng. OnLine, 19 (2020), 1. https://doi.org/10.1186/s12938-019-0745-z |
[84] | R. Wen, C. B. Chng, C. K. Chui, Augmented reality guidance with multimodality imaging data and depth-perceived interaction for robot-assisted surgery, Robotics, 6 (2017), 13. https://doi.org/10.3390/robotics6020013 |
[85] | A. K. Lefor, S. A. Heredia Pérez, A. Shimizu, H. C. Lin, J. Witowski, Development and validation of a virtual reality simulator for robot-assisted minimally invasive liver surgery training, J. Clin. Med., 11 (2022), 4145. https://doi.org/10.3390/jcm11144145 |
[86] | P. Shi, Study on a novel virtual reality interventional training system for robot-assisted interventional surgery, Ph.D thesis, Kagawa University, 2022. |
[87] | A. J. Berges, S. S. Vedula, A. Malpani, C. C. G. Chen, Virtual reality simulation has weak correlation with overall trainee robot-assisted laparoscopic hysterectomy performance, J. Minimally Invasive Gynecol., 29 (2022), 507–518. https://doi.org/10.1016/j.jmig.2021.12.002 |
[88] | B. Der, D. Sanford, R. Hakim, E. Vanstrum, J. H. Nguyen, A. J. Hung, Efficiency and accuracy of robotic surgical performance decayed among urologists during COVID-19 shutdown, J. Endourology, 35 (2021), 888–890. https://doi.org/10.1089/end.2020.0869 |
[89] | M. O. Parvez, A. Öztüren, C. Cobanoglu, H. Arasli, K. K. Eluwole, Employees' perception of robots and robot-induced unemployment in hospitality industry under COVID-19 pandemic, Int. J. Hosp. Manag., 107 (2022), 103336. https://doi.org/10.1016/j.ijhm.2022.103336 |
[90] | S. Alves, A. Quevedo, D. Chen, J. Morris, S. Radmard, Leveraging simulation and virtual reality for a long term care facility service robot during COVID-19, in Symposium on Virtual and Augmented Reality (SVR'21), ACM, New York, USA, (2021), 187–191. https://doi.org/10.1145/3488162.3488185 |
[91] | A. Fujihara, O. Ukimura, Virtual reality of three-dimensional surgical field for surgical planning and intraoperative management, World J. Urol., 40 (2022), 687–696. https://doi.org/10.1007/s00345-021-03841-z |
[92] | J. Y. M. Villamizar, I. Ostan, D. A. E. Ortega, A. A. G. Siqueira, Remote control architecture for virtual reality application for ankle therapy, in XXVII Brazilian Congress on Biomedical Engineering, Springer, (2022), 517–522. https://doi.org/10.1007/978-3-030-70601-2_80 |
[93] | M. Motaharifar, A. Norouzzadeh, P. Abdi, A. Iranfar, F. Lotfi, B. Moshiri, et al., Applications of haptic technology, virtual reality, and artificial intelligence in medical training during the COVID-19 pandemic, Front. Rob. AI, 8 (2021), 612949. https://doi.org/10.3389/frobt.2021.612949 |
[94] | F. Sanfilippo, T. Blazauskas, G. Salvietti, I. Ramos, S. Vert, J. Radianti, et al., A perspective review on integrating VR/AR with haptics into STEM education for multi-sensory learning, Robotics, 11 (2022), 41. https://doi.org/10.3390/robotics11020041 |
[95] | M. Shahab, A. Taheri, M. Mokhtari, A. Shariati, R. Heidari, A. Meghdari, et al., Utilizing social virtual reality robot (V2R) for music education to children with high-functioning autism, Educ. Inf. Technol., 27 (2022), 819–843. https://doi.org/10.1007/s10639-020-10392-0 |
[96] | G. Du, Y. Li, K. Su, C. Li, P. X. Liu, A mobile natural human-robot interaction method for virtual Chinese acupuncture, IEEE Trans. Instrum. Meas., 72 (2023), 1–10. https://doi.org/10.1109/TIM.2022.3201202 |
[97] | Y. J. Kim, H. S. Nam, W. H. Lee, H. G. Seo, J. H. Leigh, B. M. Oh, et al., Vision-aided brain–machine interface training system for robotic arm control and clinical application on two patients with cervical spinal cord injury, Biomed. Eng. Online, 18 (2019), 1–21. https://doi.org/10.1186/s12938-019-0633-6 |
[98] | D. Szafir, D. A. Szafir, Connecting human-robot interaction and data visualization, in Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, ACM, Boulder, USA, (2021), 281–292, https://doi.org/10.1145/3434073.3444683 |
[99] | A. Naceri, A. Moscatelli, R. Chellali, Depth discrimination of constant angular size stimuli in action space: role of accommodation and convergence cues, Front. Hum. Neurosci., 9 (2015), 511. https://doi.org/10.3389/fnhum.2015.00511 |
[100] | O. Cohavi, S. Levy-Tzedek, Young and old users prefer immersive virtual reality over a social robot for short-term cognitive training, Int. J. Hum.-Comput. Stud., 161 (2022), 102775. https://doi.org/10.1016/j.ijhcs.2022.102775 |
[101] | C. E. Agüero, N. Koenig, I. Chen, H. Boyer, S. Peters, J. Hsu, et al., Inside the virtual robotics challenge: Simulating real-time robotic disaster response, IEEE Trans. Autom. Sci. Eng., 12 (2015), 494–506. https://doi.org/10.1109/TASE.2014.2368997 |
[102] | P. Samuels, L. Haapasalo, Real and virtual robotics in mathematics education at the school–university transition, Int. J. Math. Educ. Sci. Technol., 43 (2012), 285–301. https://doi.org/10.1080/0020739X.2011.618548 |
[103] | B. H. Alsoliman, Virtual robotics in education: The experience of eighth grade students in STEM, in Frontiers in Education, (2022), 1–12. https://doi.org/10.3389/feduc.2022.950766 |
[104] | S. B. I. Badia, P. A. Silva, D. Branco, A. Pinto, C. Carvalho, P. Menezes, et al., Virtual reality for safe testing and development in collaborative robotics: Challenges and perspectives, Electronics, 11 (2022), 1726. https://doi.org/10.3390/electronics11111726 |
[105] | G. Palmieri, C. Scoccia, D. Costa, M. Callegari, Development of a virtual reality application for the assessment of human-robot collaboration tasks, in Advances in Service and Industrial Robotics: RAAD 2022, Springer (2022), 597–604. https://doi.org/10.1007/978-3-031-04870-8_70 |
[106] | T. Togias, C. Gkournelos, P. Angelakis, G. Michalos, S. Makris, Virtual reality environment for industrial robot control and path design, Procedia CIRP, 100 (2021), 133–138. https://doi.org/10.1016/j.procir.2021.05.021 |
[107] | S. Tellex, E. Rosen, D. Whitney, E. Phillips, D. Ullman, Testing robot teleoperation using a virtual reality interface with ROS reality, in Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI), (2018), 1–4. |
[108] | S. Papanastasiou, N. Kousi, P. Karagiannis, C. Gkournelos, A. Papavasileiou, K. Dimoulas, et al., Towards seamless human robot collaboration: integrating multimodal interaction, Int. J. Adv. Manuf. Technol., 105 (2019), 3881–3897. https://doi.org/10.1007/s00170-019-03790-3 |
[109] | S. Makris, Virtual reality for programming cooperating robots based on human motion mimicking, in Cooperating Robots for Flexible Manufacturing, Springer, (2021), 339–353. https://doi.org/10.1007/978-3-030-51591-1_18 |
[110] | A. C. Simões, A. Pinto, J. Santos, S. Pinheiro, D. Romero, Designing human-robot collaboration (HRC) workspaces in industrial settings: A systematic literature review, J. Manuf. Syst., 62 (2022), 28–43. https://doi.org/10.1016/j.jmsy.2021.11.007 |
[111] | M. Dianatfar, J. Latokartano, M. Lanz, Review on existing VR/AR solutions in human–robot collaboration, Procedia CIRP, 97 (2021), 407–411. https://doi.org/10.1016/j.procir.2020.05.259 |
[112] | P. K. BN, A. Balasubramanyam, A. K. Patil, Y. H. Chai, GazeGuide: An eye-gaze-guided active immersive UAV camera, Appl. Sci., 10 (2020), 1668. https://doi.org/10.3390/app10051668 |
[113] | P. Higgins, G. Y. Kebe, A. Berlier, K. Darvish, D. Engel, F. Ferraro, et al., Towards making virtual human-robot interaction a reality, in Proc. of the 3rd International Workshop on Virtual, Augmented, and Mixed-Reality for Human-Robot Interactions, (2021). |
[114] | A. Bustamante, L. M. Belmonte, R. Morales, A. Pereira, A. Fernández-Caballero, Video processing from a virtual unmanned aerial vehicle: Comparing two approaches to using OpenCV in Unity, Appl. Sci., 12 (2022), 5958. https://doi.org/10.3390/app12125958 |
[115] | S. Hjorth, D. Chrysostomou, Human–robot collaboration in industrial environments: A literature review on non-destructive disassembly, Rob. Comput. Integr. Manuf., 73 (2022), 102208. https://doi.org/10.1016/j.rcim.2021.102208 |
[116] | M. Rana, Using virtual reality for simulation of underwater human-robot interaction, in UMTC Undergraduate Research Presentations and Papers, (2021). |
[117] | M. de la Cruz, G. Casañ, P. Sanz, R. Marin, Preliminary work on a virtual reality interface for the guidance of underwater robots, Robotics, 9 (2020), 81. https://doi.org/10.3390/robotics9040081 |
[118] | M. Wonsick, T. Padır, Human-humanoid robot interaction through virtual reality interfaces, in 2021 IEEE Aerospace Conference, IEEE, Big Sky, USA, (2021), 1–7. https://doi.org/10.1109/AERO50100.2021.9438400 |
[119] | T. Inamura, Y. Mizuchi, SIGVerse: A cloud-based VR platform for research on multimodal human-robot interaction, Front. Rob. AI, 8 (2021), 549360. https://doi.org/10.3389/frobt.2021.549360 |
[120] | E. Prati, V. Villani, M. Peruzzini, L. Sabattini, An approach based on VR to design industrial human-robot collaborative workstations, Appl. Sci., 11 (2021), 11773. https://doi.org/10.3390/app112411773 |
[121] | T. Williams, D. Szafir, T. Chakraborti, H. Ben Amor, Virtual, augmented, and mixed reality for human-robot interaction (VAM-HRI), in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, ACM, Cambridge, UK, (2020), 663–664. https://doi.org/10.1145/3371382.3374850 |