Review

Immersive virtual reality application for intelligent manufacturing: Applications and art design

  • Intelligent manufacturing (IM), sometimes referred to as smart manufacturing (SM), is the use of real-time data analysis, machine learning, and artificial intelligence (AI) in the production process to improve efficiency. Human-machine interaction technology has recently become a hot issue in smart manufacturing. The unique interactivity of virtual reality (VR) makes it possible to create a virtual world, allow users to communicate with that environment, and provide an interface through which users are immersed in the digital world of the smart factory. Virtual reality technology also aims to stimulate the imagination and creativity of creators to the greatest extent possible, reconstructing the natural world in a virtual environment, generating new emotions, and transcending time and space in the familiar and unfamiliar virtual world. Recent years have seen a great leap in the development of intelligent manufacturing and virtual reality technologies, yet little research has combined the two trends. To fill this gap, this paper employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to conduct a systematic review of the applications of virtual reality in smart manufacturing. The practical challenges and possible future directions are also covered.

    Citation: Yu Lei, Zhi Su, Xiaotong He, Chao Cheng. Immersive virtual reality application for intelligent manufacturing: Applications and art design[J]. Mathematical Biosciences and Engineering, 2023, 20(3): 4353-4387. doi: 10.3934/mbe.2023202




    The Internet of Things (IoT) has enormous potential in a wide range of fields, including smart grids, earth observation, environmental monitoring, agriculture, resource management, public health, public security, transport, and the military. Over the past few years, remote sensing has also emerged as an attractive new source of data for environmental applications, gaining considerable momentum in terms of spatial, radiometric, and spectral resolution. Remote sensing and monitoring systems are responsible for acquiring, storing, and using collected data to make decisions and analyze complex problems. Recent studies have evaluated the joint operation of IoT and remote sensing systems and how IoT constitutes an integral part of remote sensing [1,2]. In particular, these studies considered the technological relationship between IoT and remote sensing edge components, namely the global positioning system (GPS) and geographical information system (GIS).

    Recently, smart homes have garnered considerable interest [3] because they help enable smart cities and complete the smart grid, as shown in Figure 1(a); smart farming and e-healthcare, as shown in Figure 1(b); water quality; and other smart systems [4,5,6,7,8]. These advancements ensure an efficient lifestyle and safety for people through 24/7 monitoring. Most smart homes are equipped with passive infrared sensors to detect motion and report binary data for visualization. The IoT provides a wide range of personalized treatment possibilities when utilized in conjunction with traditional healthcare. Furthermore, multimedia IoT (MIoT) monitoring systems for smart homes have been developed for several possible applications, and their integration with existing smart-house testbeds has been suggested for further research [9]. MIoT provides enhanced device, system, and service connectivity in the context of smart homes beyond machine-to-machine communication. The purpose of creating a smart home ontology, as explained in previous studies [4,7], is to provide semantics for the data shared by systems and devices and establish semantic interoperability in smart homes. From a technological perspective, cloud computing and IoT/MIoT are combined to provide pervasive sensing services and robust processing of the collected data beyond what individual "things" can do, leading to advancements in both domains. Another approach employs several machine learning techniques as well as data management and analysis approaches. However, the major challenges include differentiating cases located in the same space, allowing the interoperability of different technologies, and guaranteeing data security and privacy [4,10,11].

    Figure 1.  (a) Smart grid communication network architecture, and (b) an e-healthcare monitoring system.

    Corporal sensors face the same challenges as general wireless sensor networks (WSNs). Indeed, the major concern is how to extend the lifetime of wireless sensor nodes while providing security with an appropriate Quality of Service (QoS) for each type of event. Energy efficiency is another principal criterion [12] that should be considered in the design of remote monitoring systems [13]. Many of these criteria have been addressed by various WSN media access control (MAC) protocols, which focus mainly on energy conservation with little attention to urgent data transmission, remote monitoring, security, and privacy [14].

    Current monitoring systems suffer from several problems such as alarm settings, convoluted wires, complex artifacts, excessive information, adaptability of many hardware devices made by different manufacturers, in-person monitoring, and challenges resulting from human performance. To overcome these problems, we aimed to establish a smart and secure IoT-based home for real-time remote monitoring of smart home architectures. The initial research was reported in our previous study [15]. The primary objectives of the present study were as follows:

    ● To develop an intelligent end-to-end secure IoT-based health monitoring system for smart homes. This system uses current monitoring mechanisms and employs the most recent ambient sensing devices and wireless technologies in addition to cloud services. It also includes security and privacy mechanisms that allow reliable end-to-end transfer of information through secure channels.

    ● To define the entire architecture and set up suitable processes and interconnectivity to help automate the complete operation of the system (monitor and secure) with a preventive solution for critical situations. This contribution includes both software development and communication establishment in addition to a heterogeneous hardware environment.

    ● To develop techniques for enhancing the performance of remote monitoring systems. This contribution involves alleviating the complexity of the mechanisms to make them adequate for hardware platform resources. Therefore, the process involves enhancing communication processes, cryptographic algorithms, data collection, aggregation, analysis, and visualization.

    ● To implement a real testbed to perform the experiments and generate results for conducting comparisons in terms of power consumption. This phase allows for building and running an entire heterogeneous platform, including several ambient sensors, communication protocols, hypertext transfer protocol secure (HTTPS), and API Key mechanisms for security, analysis, and remote visualization.
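As a minimal illustration of the HTTPS and API Key transport mentioned in the last objective, the following Python sketch builds an authenticated request carrying one sensor reading. The endpoint URL, key value, and `X-API-Key` header name are illustrative assumptions, not the platform's actual configuration.

```python
import json
import urllib.request

API_URL = "https://example.com/api/v1/readings"  # hypothetical cloud endpoint
API_KEY = "demo-key"                             # hypothetical key; real keys come from the cloud service

def build_reading_request(sensor_id: str, value: float) -> urllib.request.Request:
    """Build an HTTPS POST carrying one sensor reading, authenticated by an API key."""
    payload = json.dumps({"sensor": sensor_id, "value": value}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,  # the API Key mechanism rides in a request header
        },
        method="POST",
    )

req = build_reading_request("temp-livingroom", 22.5)
```

Because the URL uses the `https` scheme, `urllib` negotiates TLS when the request is actually sent, giving the encrypted channel on top of which the API key authenticates the device.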

    Based on these contributions, the research conducted in this study can be employed in various monitoring applications in smart cities, which include event sensing, data communication, information security, analysis, and visualization. The remainder of this paper is organized as follows: In Section 2, we introduce the background, including the major directions of this study in terms of IoT-based security, confidentiality, and remote monitoring approaches. In Section 3, we present a literature review and a comparative study of related works. In Section 4, we describe the methodology of the proposed method, and, in Section 5, we outline the experimental results. Finally, in Section 6, we present our conclusions and discuss the future scope of this research.

    Swift progress in electronics and wireless communication technologies has enabled the evolution of IoT, which provides practical solutions for several applications such as smart grids, home security, military operations, target tracking, and pollution monitoring [4]. In particular, researchers in the fields of computing, networking, and software are working together to implement this promising technology, which provides high-quality services to people even in their homes, primarily older individuals, children, and those with chronic illnesses. WSN nodes are placed on the human body or deployed on the premises to form clusters. The WSN collects the sensed data, aggregates them at a coordinator, and sends them to a central database for use by the administration [7]. At the IoT level, three challenges must be considered: sensor limitations, network performance, and security and confidentiality support.

    With rapid advancements in information and communication technology, the advent of multimedia and innovation in systems using IoT and WSNs has become increasingly significant. In a previous study [9], the researchers presented a comprehensive review of distinct frameworks, real-time applications, data compression, sensory data, challenges, limitations, and open research issues related to the MIoT in industrial production systems. The integration of multimedia and IoT leads to exciting possibilities across various domains, including healthcare, smart cities, and precision agriculture. Khan et al. [9] explored different frameworks that combine multimedia and IoT. These frameworks enable seamless communication, data exchange, and intelligent decision-making. Real-time multimedia applications benefit from IoT systems, and examples of this include video streaming, serious games, rehabilitative exercises, and e-healthcare. The integration of multimedia and cloud services has enhanced these applications. Efficient data compression techniques are crucial for MIoT systems; however, balancing data quality and resource constraints is challenging. IoT sensors capture sensory data from the environment, and integrating these data with multimedia content allows for context-aware applications.

    Sensor limitations: Corporal sensors share the same challenges as general WSNs. Energy efficiency is the principal requirement for designing low-level capacity node protocols [12]. Indeed, the challenge lies in extending the lifetime of the body sensor nodes. Sensor capabilities in terms of processing rate and buffer size also constitute an additional major sensor limitation.

    Network performance: QoS metrics, such as the coverage rate, reliability, end-to-end delay, throughput, and error rates for delivery or sensing, are network performance measures that should satisfy an acceptable level [13]. A large proportion of this problem has been addressed using various WSN protocols. Most studies have focused only on energy conservation, and only a few studies have concentrated on urgent data transmission [14]. The timely and reliable delivery of life-crucial medical data must be assured; that is, a patient who requires the most instantaneous service should be given major priority, and the patient's vital signs should be transferred with higher reliability and shorter delay. Furthermore, IoT-based e-health systems assist diverse services with various characteristics, such as different quantities of energy, different-sized data, and various transmission rates. Therefore, it has become important to arrange a service-differentiated scheme to allow multilevel QoS support, as most approaches process all packets in an identical manner [4].
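A service-differentiated scheme of this kind can be sketched as a priority queue in which emergency vital signs are always dequeued before routine readings. The traffic classes and their ordering below are illustrative assumptions, not a scheme from the cited works.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical traffic classes, ordered from most to least urgent.
PRIORITY = {"emergency": 0, "anomaly": 1, "periodic": 2}

@dataclass(order=True)
class Packet:
    priority: int
    seq: int                              # tie-breaker preserving FIFO order within a class
    payload: dict = field(compare=False)  # payload does not affect ordering

class DifferentiatedQueue:
    """Serve emergency traffic before routine readings (multilevel QoS support)."""
    def __init__(self):
        self._heap = []
        self._seq = 0
    def push(self, traffic_class: str, payload: dict):
        heapq.heappush(self._heap, Packet(PRIORITY[traffic_class], self._seq, payload))
        self._seq += 1
    def pop(self) -> dict:
        return heapq.heappop(self._heap).payload

q = DifferentiatedQueue()
q.push("periodic", {"hr": 72})
q.push("emergency", {"hr": 180})
first = q.pop()  # the emergency packet is served first despite arriving later
```

In a real node the three classes would also map to different retransmission and energy budgets, since processing all packets identically is precisely what the schemes above avoid.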

    Challenges in security and confidence: Owing to the multiplicity of devices and the dynamic nature of networks, healthcare IoT systems face many security and privacy challenges because they are frequently vulnerable to threats and attacks that can compromise the system. To address these challenges, security and privacy issues must first be identified. Second, a physically unclonable function (PUF)-based authentication approach is needed, considering both sensor limitations and network capacity. Prominent companies are incorporating artificial intelligence into printed circuit board (PCB) designs [16], producing embedded programs tailored to the specifications of the circuit. In electronics, PCBs have revolutionized the creation of circuits. Although many designs exist worldwide, PCB designs are becoming increasingly valued owing to their automation. Several companies have concentrated on PCB designs that improve the user experience by integrating artificial intelligence into the system.

    Technological development in the areas of telecommunications and electronics has led to rapid advances in IoT, which is a promising technology for several applications such as healthcare. Because wireless body area networks (WBANs) are used in healthcare to monitor the vital signs of patients, any delayed or missing data can threaten the patient's life. Urgent data are immediately transmitted to supply a QoS guarantee in the WBAN. Recent advances in smartphones and IoT-based remote healthcare monitoring provide multidimensional and intelligent services. With continuous monitoring and improved interventions, the development of IoT-based wearable sensor devices has prevented millions of deaths worldwide. The SIM7600E GSM and GNSS HAT module enables on-demand positioning of the patient as well as remote monitoring of body temperature (DS18B20), oxygen saturation (SPO2; MAX30100), and heart rate [17]. Information from the sensors is sent across the network and stored in the cloud. The method proposed in the present study uses the most recent IoT microcontroller and device versions, which have a substantial impact on the overall accuracy and speed of the system.

    Information security encounters serious challenges when transmitting data. In addition, predicting diseases from the gathered data is a complex task. To address these problems, Alandjani [18] suggested a deep learning method called convolutional neural network-based disease prediction, in which the Rivest-Shamir-Adleman (RSA) cryptographic technique ensures the security of data stored in the cloud. The performance of that approach was compared with that of methods proposed in previous studies, and the comparative analysis demonstrated improved prediction accuracy and better security.
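For illustration, the RSA principle can be shown with textbook-sized primes; this toy sketch is for exposition only, and any real deployment would rely on a vetted cryptographic library with keys of at least 2048 bits.

```python
# Toy RSA with textbook primes (p = 61, q = 53), for illustration only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    """Ciphertext c = m^e mod n (requires 0 <= m < n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Plaintext m = c^d mod n."""
    return pow(c, d, n)

cipher = encrypt(65)
plain = decrypt(cipher)    # round-trips back to 65
```

The same encrypt/decrypt asymmetry is what lets a sensor gateway encrypt records with the cloud's public key while only the cloud, holding d, can read them.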

    QoS is a key factor of the WBAN. Assigning various priority levels for sensed data allows traffic differentiation and provides a suitable QoS level based on event, anomaly, and emergency cases. QoS support considers both the IoT device capability and network response/performance. The major metrics used to evaluate the QoS support are latency, throughput, link quality, and loss rate.

    Alharbi et al. [19] introduced a novel approach called label augmentation, which assigns a joint label to each generated image by considering both the label and the data augmentation. Furthermore, they addressed the challenge posed by augmented samples, which increase the intraclass diversity of the training set. To enhance classification accuracy, the method employs the Kullback–Leibler divergence to constrain the output distributions of two training samples with the same scene category. Extensive experiments on widely used datasets (UCM, AID, and NWPU) demonstrated that their method outperforms other state-of-the-art approaches in terms of classification accuracy.

    The same idea was applied by Puustjärvi and Puustjärvi [7], whose scheme classifies and prioritizes sensed data to improve real-time delivery using differentiated services, traffic scheduling, and multipath transmission. Sensor nodes sense the data and begin an anomaly detection process based on a predefined threshold. The coordinator node maintains a criticality table used to classify patients based on data from the source nodes. The sensed data are sent to the packet classification mechanism through the path selector using path-state information. The received signal strength indicator (RSSI) between nodes, residual energy, and path delay are used to estimate the path states, providing link quality information.
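A path-state estimate of this kind can be sketched as a weighted score over RSSI, residual energy, and delay. The weights and normalization ranges below are illustrative assumptions, not the values used in [7].

```python
def path_score(rssi_dbm: float, residual_energy: float, delay_ms: float,
               w_link: float = 0.4, w_energy: float = 0.4, w_delay: float = 0.2) -> float:
    """Combine link quality, residual energy (0..1), and delay into one
    score; higher is better. Weights and ranges are illustrative."""
    link = (rssi_dbm + 100) / 70            # map roughly [-100, -30] dBm onto [0, 1]
    delay = max(0.0, 1 - delay_ms / 500)    # penalize delays up to a 500 ms budget
    return w_link * link + w_energy * residual_energy + w_delay * delay

good = path_score(rssi_dbm=-45, residual_energy=0.9, delay_ms=40)
poor = path_score(rssi_dbm=-92, residual_energy=0.2, delay_ms=350)
```

The path selector would then forward high-criticality packets along the highest-scoring path and may spread routine traffic over the rest.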

    Other protocols for real-time applications exploit decode-and-forward mechanisms for cooperation [20]. However, decoding each packet at the relay nodes exposes these networks to security and privacy attacks. Almost all current network coding (NC)-based studies on wireless body sensor networks (WBSNs) use linear combinations or the exclusive-OR (XOR) operation for coding, and almost all current NC-based error recovery methods are not QoS-aware for medical applications, even though such techniques have a substantial capacity to adapt to network channel conditions. Based on this, Razzaque et al. [20] adopted Marinkovic and Popovici's XOR coding-based method for WBSNs and enhanced the NC-based approach to suit network channel conditions and user QoS requirements. Two QoS mechanisms are involved in this approach: adaptive service differentiation and adaptive error recovery.
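The XOR coding idea behind this error recovery can be demonstrated in a few lines: a relay transmits the XOR of two packets alongside the originals, and a receiver that obtained one original plus the coded packet can reconstruct the lost one. Equal-length packets are assumed for simplicity.

```python
def xor_encode(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets byte by byte (XOR network coding)."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"\x10\x20\x30"
pkt_b = b"\x0f\x0f\x0f"
coded = xor_encode(pkt_a, pkt_b)   # the relay sends this combination

# Suppose pkt_b is lost but pkt_a and the coded packet arrive:
recovered_b = xor_encode(coded, pkt_a)   # (a XOR b) XOR a == b
```

Because XOR is its own inverse, one coded transmission can repair the loss of either original, which is what makes the scheme attractive on energy-constrained body sensor links.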

    Suganthi et al. [21] employed a mobile healthcare technology to enhance the quality of patient monitoring. This objective was achieved by employing sensor nodes affixed to the patient's body to monitor physiological data and vital signs. To protect patient privacy and security while balancing security and performance, they proposed an end-to-end mutual authentication system. The proposed authentication method also uses a smartphone or personal digital assistant (PDA) as a gateway node, allowing for ongoing patient monitoring even outside the clinical setting. The approach also includes emergency protocol measures necessary to maintain the standard of patient care in dire circumstances.

    Saleem et al. [15] addressed the need for secure monitoring in smart environments. By leveraging the cellular IoT technology, the proposed system ensures robust communication and data exchange and provides features that include real-time monitoring, security, and scalability. The system integrates various sensors (such as temperature, humidity, and motion sensors) to monitor the environment. Cellular connectivity allows seamless communication even in remote or challenging locations. This system can be applied to diverse scenarios, including home automation, industrial settings, and agriculture. Security measures include the transmission and authentication of encrypted data. The researchers provided a comprehensive overview of the network's architecture, implementation, and performance evaluation.

    Manikandan et al. [22] introduced a novel system designed to promptly detect fires while pinpointing the affected area. A Raspberry Pi 3 was employed as the controller; this compact yet powerful device integrates several sensors and cameras. A validation mechanism was employed to confirm suspected fires and prevent false alerts. Upon detecting a fire, the system not only captures an image of the afflicted area but also sends an automated message. If the administration confirms a fire, an alert is promptly raised and a message is sent to the local fire department. By leveraging IoT and the Raspberry Pi, this approach enhances safety, minimizes false alarms, and facilitates rapid responses in critical situations.

    Meddeb et al. [23] introduced a Raspberry Pi 3 Model B-based robot that can be seamlessly integrated into any household. Equipped with a webcam, the robot monitors the area and sends notifications upon detecting trespassing or intrusion. The camera also features a face recognition algorithm, enabling it to identify the person responsible for triggering the motion. If the individual is authorized, an onboard voice assistant engages in conversation. Notifications are dispatched only to authorized personnel, accompanied by pictures of the trespasser and live streaming of the webcam feed. The Pi's live streaming capability allows for remote analysis of the camera feed via the Internet. Such a system provides users with enhanced security and peace of mind when they are away from home or when they leave vulnerable family members alone. However, when an individual is authorized, the system does not notify the owner, a vulnerability that may lead to harmful situations.

    Upadrista et al. [10] addressed healthcare data security and privacy challenges, where health information is critical and requires fail-safe methods against unauthorized access, leakage, and manipulation. They proposed a remote health monitoring (RHM) solution using blockchain technology that allows immutability, decentralization, and transparency to address data security and privacy challenges. In summary, they provided a comprehensive review of related works on blockchain for RHM, focusing on privacy, security, reliability, and latency.

    Shreya et al. [24] made use of the Internet of Medical Things and cloud computing to replace the old medical system, making healthcare easier, more personalized, and more efficient. They employed medical sensors to capture health information, which is transmitted to a remote server via a cloud-based solution for processing or diagnostic purposes. They outlined the different types of threats that affect the integrity, confidentiality, and privacy of health data. Consequently, the data are shared between stakeholders in an encrypted format, resulting in reductions in consumed energy (79.19%), communication costs (15.62%), and execution time (80.03%) compared with existing methods.

    Sangeethalakshmi et al. [25] proposed an IoT-based real-time health monitoring system to overcome the limitations of traditional medical treatments. A mobile application and GSM were combined in the proposed system to provide hospitalized or at-home patients with a dependable patient management system. They used sensors to regularly collect vital parameters (temperature, heart rate, electrocardiogram [ECG] readings, blood pressure, and SPO2), which are sent to the cloud via a WiFi module. The system consists of sensors, a data acquisition unit, a microcontroller (ESP32), and software, and it provides real-time online health information monitoring. A threshold was assigned to each parameter; once a threshold is exceeded, a message is sent to the doctor.
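The per-parameter threshold check behind such alerting can be sketched as follows. The ranges below are illustrative placeholders only; clinically valid thresholds must come from medical guidance.

```python
# Illustrative ranges (low, high) per vital parameter; NOT clinical values.
THRESHOLDS = {
    "temperature_c": (35.0, 38.0),
    "heart_rate_bpm": (50, 120),
    "spo2_pct": (92, 100),
}

def out_of_range(readings: dict) -> list:
    """Return the parameters that breach their threshold; a non-empty
    result is what would trigger a message to the doctor."""
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

alerts = out_of_range({"temperature_c": 39.2, "heart_rate_bpm": 85, "spo2_pct": 97})
# 39.2 °C breaches the temperature range, so a message would be dispatched
```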

    Kekre and Gawre [5] provided a solution for monitoring and maintaining solar photovoltaics. As the cost of renewable energy equipment has decreased, the number of solar photovoltaic installations has increased significantly. Many of these installations serve as auxiliary power sources and are in diverse and, at times, inaccessible areas ranging from rooftops to remote desert locations. To address the need for efficient monitoring, they proposed an inexpensive IoT-based embedded solar photovoltaic monitoring system. The system utilizes a general packet radio service (GPRS) module and an affordable microcontroller to collect data from the production end and transmit them over the Internet. This real-time information can be accessed globally, aiding maintenance, fault detection, and data recording at regular intervals.

    Xie et al. [26] discussed the challenges related to scalability, interoperability, and privacy. They also highlighted the potential benefits of integrating edge computing into systems. Consequently, they introduced a conceptual architecture that combined IoT principles with smart grid functionalities. This framework aims to predict and prevent potential threats to the grid. The system employs an electronic fence mechanism to detect unauthorized access or abnormal behavior within the grid. Real-time video monitoring enhances situational awareness and helps identify security breaches. Ultrawideband technology contributes to precise location tracking and rapid response during emergencies. By leveraging IoT, the system anticipates and mitigates malicious activities and safeguards critical infrastructure.

    Albataineh et al. [27] proposed a hybrid solution that leveraged both cloud and edge computing for data processing. Edge computing processes smart grid information at the edge of the IoT network, where microgrids are located. The authors introduced a machine learning engine that employs decision trees to facilitate communication among the edge layer, failure mechanisms, and cloud layer. By combining edge and cloud computing, this novel smart home control system aims to enhance efficiency, reliability, and overall performance. The authors addressed the challenges of existing power grids facing outages, unpredictable power disturbances, inflexible energy rates, and customer fraud. These issues have contributed to the rising fossil fuel demand and service costs. This approach minimizes latency and storage compared with centralizing all processing in the cloud.

    Sreenivasu et al. [8] proposed an innovative cloud-based electric vehicle (EV) temperature monitoring system by leveraging the power of artificial intelligence (AI) and IoT. The sensors were integrated into the system interfaces by embedding them in the EV battery. These sensors continuously collect data on critical parameters, such as voltage, current, and temperature. The collected data are transmitted to the cloud via the IoT network. This real-time flow of information enables remote monitoring and analysis. Users can access the performance data of the battery through a mobile application connected to the cloud. This interface provides insights into the state of charge and overall health of the battery. The cloud-based system employs AI algorithms to analyze historical data, predict potential issues, and recommend preventive measures. This proactive approach enhances battery management and reduces the risk of unexpected failures.

    Nasraoui et al. [28] investigated authenticated key agreement as a crucial part of providing end-to-end security solutions for WSNs using LoRa connectivity. As the applications of WSNs and IoT continue to expand, ensuring security has become paramount; however, the limited resources of the connected nodes raise new challenges. The researchers provided a primer covering the general security aspects of WSNs and IoT at different levels, which sets the stage for their proposed lightweight key exchange method. They focused on implementing lightweight key agreement methods based on the standardized Internet Key Exchange (IKE) protocol. These methods aim to establish secure communication channels while minimizing computational overhead. The method was experimentally evaluated, with communication costs assessed in terms of power consumption and memory usage, and realistic scenarios were used to demonstrate the practical implications of these lightweight approaches.
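The essence of such a key agreement can be illustrated with a toy Diffie–Hellman exchange. The 32-bit prime below is far too small for real use (IKE mandates standardized groups of 2048 bits or more, and constrained nodes typically prefer elliptic-curve variants), but the derivation of a shared secret from publicly exchanged values is the same.

```python
import secrets

P = 0xFFFFFFFB   # largest prime below 2**32; toy-sized, for illustration only
G = 5            # generator

a = secrets.randbelow(P - 2) + 2   # node A's private value, never transmitted
b = secrets.randbelow(P - 2) + 2   # node B's private value, never transmitted
A = pow(G, a, P)                   # A's public value, sent over the LoRa link
B = pow(G, b, P)                   # B's public value, sent over the LoRa link

shared_a = pow(B, a, P)            # A computes G^(ab) mod P
shared_b = pow(A, b, P)            # B computes the same value independently
```

Both ends now hold identical key material without it ever crossing the air; an authenticated variant additionally signs or MACs the exchanged public values to defeat man-in-the-middle attacks.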

    Sangeethalakshmi et al. [29] investigated the security aspects of WSNs in the context of environmental monitoring systems. They emphasized that any security mechanism within a WSN becomes ineffective if an attacker gains access to the firmware of the sensor device. The study revealed that the bootstrap loader (BSL) password of the MSP430 microcontroller unit (MCU) is insecure: it can be cracked within a few days, after which an attacker can obtain the WSN cryptographic keys by reverse-engineering the firmware. The researchers introduced two-factor authentication using the capabilities of the MSP430 MCU, adding an extra layer of security, and proposed a solution to enhance the security of MSP430 firmware. Their Secure-BSL application generates random BSL passwords to ensure robustness against brute-force attacks.
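The password-randomization step can be sketched on the host side as follows: drawing the 32-byte BSL password from a cryptographically secure generator removes the predictable structure that makes brute-force cracking feasible. The function name is illustrative, not Secure-BSL's actual API, and the 32-byte length reflects the MSP430 BSL convention of using the interrupt vector table as the password.

```python
import secrets

BSL_PASSWORD_LEN = 32  # the MSP430 BSL password spans 32 bytes

def generate_bsl_password() -> bytes:
    """Draw a fresh, unpredictable BSL password from a CSPRNG
    (hypothetical host-side helper, in the spirit of Secure-BSL)."""
    return secrets.token_bytes(BSL_PASSWORD_LEN)

pw1 = generate_bsl_password()
pw2 = generate_bsl_password()   # each device gets its own random password
```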

    Nguyen-Tan et al. [30] proposed an autonomous irrigation solution that combines historical soil moisture data stored in a database with real-time weather forecasts. Two deep-learning models based on the transformer architecture were employed to predict weather conditions and soil moisture levels. Despite having fewer training variables than the long short-term memory model, the transformer model achieved comparable accuracy (91.41% for weather forecasts and 82.06% for soil moisture forecasts). However, as an IoT system, security concerns in smart agricultural solutions must be addressed. To safeguard against eavesdropping, control command spoofing, and poisoned machine learning models, the researchers proposed end-to-end encryption and authentication based on AES 256-bit encryption, HMAC, and the CRYSTALS-Kyber key exchange, which is designed to remain secure even against quantum attacks. The evaluation results demonstrated that the proposed security measures can be effectively deployed on IoT devices such as the Arduino, STM32, and Raspberry Pi 4. Their work contributes to sustainable and efficient agricultural practices by combining intelligent irrigation with robust security.
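The HMAC side of this protection can be sketched with the Python standard library. Here the session key is generated randomly, standing in for the key that the CRYSTALS-Kyber exchange would establish, and the command string is a hypothetical example.

```python
import hashlib
import hmac
import secrets

session_key = secrets.token_bytes(32)   # stand-in for the Kyber-derived session key

def tag_command(cmd: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so spoofed control commands are rejected."""
    return cmd + hmac.new(session_key, cmd, hashlib.sha256).digest()

def verify_command(msg: bytes):
    """Return the command if its 32-byte tag verifies, otherwise None."""
    cmd, tag = msg[:-32], msg[-32:]
    expected = hmac.new(session_key, cmd, hashlib.sha256).digest()
    return cmd if hmac.compare_digest(tag, expected) else None

msg = tag_command(b"VALVE_OPEN 30s")        # hypothetical irrigation command
tampered = msg[:-1] + bytes([msg[-1] ^ 1])  # flip one bit of the tag in transit
```

`hmac.compare_digest` performs the comparison in constant time, which matters on slow microcontrollers where a naive byte-by-byte comparison leaks timing information.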

    An end device is a wireless device responsible for detecting events in its environment and transmitting them to the sink node. The end device receives data from the sensors, processes it, and sends it to the sink node when a danger is detected. To perform these tasks efficiently, the end device should be equipped with a microcontroller with specific features [31]. In this context, a low-power yet fast microcontroller is required to execute complex computational instructions while respecting power constraints, as shown in Figure 2.

    Figure 2.  Schematic of the platform's processes, including event sensing, data communication, and remote monitoring.

    Many types of microcontrollers with different features and capabilities have been proposed. In the proposed platform, we used the ATmega328P, which is integrated into the Arduino Uno board. For the data communication protocol, the ZigBee XBee module was chosen, including the XBee-S2C gateway, as shown in Figure 2.

    The XBee USB Adapter Board is sold as a partially assembled kit and provides a cost-effective solution to interfacing a PC or microcontroller to any XBee or XBee Pro module. A PC connection can be used to configure the XBee module using Digi's X-CTU software. This module works with XBee Series 1, 2, and Pro modules.

    Using this adapter board, an easy interface can be provided to the XBee or XBee Pro modules. The adapter board also provides a means of connecting pluggable wires, solder connections, and mounting holes, as shown in Figure 3.

    Figure 3.  Interfaces and interconnectivity of Arduino Uno and the XBee module using the XBee adapter.

    ● Advantages:

    o Provides easy pluggable wire or solder connections

    o Provides an easy interface for configuring XBee modules using Digi's X-CTU software.

    o No shield is required (decreases the cost).

    ● Disadvantages:

    o To set up GPIOs, a serial software library must be downloaded.

    o Requires coding expertise.

    From the two previously suggested approaches (XBee shield and XBee adapter), we used the XBee adapter to configure the XBee S2C modules and establish communication between the two XBees. The XBee shield holding the XBee S2C on top of the Arduino is shown in Figure 4, and the real experimental testbed is shown in Figure 5.

    Figure 4.  Schematic of the experimental testbed developed for event sensing and communication.
    Figure 5.  Real testbed implementation for practical experiments.

    Arduino is an open-source electronic platform based on easy-to-use hardware and software. Arduino boards can read inputs, such as light on a sensor or a finger on a button, and convert them into an output. To program it, the Arduino integrated development environment (IDE) must be used, which contains a text editor for writing code, a message area, a text console, a toolbar with buttons for common functions, and a series of menus. It connects to the Arduino hardware to upload programs and communicate with them.

    Initially, we visited (https://www.arduino.cc/en/software) and downloaded the most recent version of the Arduino IDE, which normally installs the required driver. If the driver is not installed for some reason, it must be installed manually before code can be uploaded to the platform.

    Libraries for the XBee, SoftwareSerial, DHT, and MQ2 sensors were downloaded. The IDE's Sketch > Include Library > Manage Libraries menu allows these libraries to be installed.

    The XBee module was configured using the XCTU software, developed by DIGI and available at (https://www.digi.com/products/embedded-systems/digi-xbee/digi-xbee-tools/xctu). The first step consisted of starting the XCTU software and connecting the XBee module using a USB adapter, as shown in Figure 6. Once the XBee is connected to the PC, the command (Ctrl + Shift + D) locates the module in the XCTU graphical interface, as shown in Figure 7. Once detected, the XBee module is set up to enable effective communication. Each module must be configured at least once, with the source and destination IDs of the two modules reversed relative to each other. As discussed earlier, the XBee module was then disconnected from the USB adapter and connected to the XBee shield on top of the Arduino Uno. Next, the two XBee modules were configured, as shown in Figure 8.
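
The pairing rule, in which each module's destination ID must equal the other module's own ID on the same PAN, can be expressed as a small consistency check; the dictionary keys below are illustrative stand-ins, not the actual XCTU parameter names (ID, SH/SL, DH/DL):

```python
def cross_paired(module_a: dict, module_b: dict) -> bool:
    """Check that two XBee configurations point at each other on one PAN.

    'pan', 'addr' (own 64-bit address), and 'dest' are illustrative keys
    standing in for the XCTU parameters ID, SH/SL, and DH/DL.
    """
    return (
        module_a["pan"] == module_b["pan"]
        and module_a["dest"] == module_b["addr"]
        and module_b["dest"] == module_a["addr"]
    )

# Hypothetical 64-bit addresses for the two modules.
coordinator = {"pan": 0x1234, "addr": 0x0013A200AAAA0001, "dest": 0x0013A200AAAA0002}
router      = {"pan": 0x1234, "addr": 0x0013A200AAAA0002, "dest": 0x0013A200AAAA0001}
assert cross_paired(coordinator, router)
```

If either destination field is left at its default, the check fails and no frames are exchanged, which is the most common cause of a silent link.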

    Figure 6.  XBee configuration using XCTU and attached with the USB adapter.
    Figure 7.  Discovering the XBee module using XCTU software through the USB adapter.
    Figure 8.  XCTU graphical interface for XBee connectivity and configuration.

    The primary objective of this study was to create a sensor-based monitoring system for smart homes. We constructed a WSN using the ZigBee protocol. Additionally, we used the Global System for Mobile Communication (GSM) to enable the Arduino to send an SMS message to the user's phone. When the WSN detects an incident, the user receives the relevant data through GSM. First, the sensors controlled by the Arduino read the data; when the processed data indicate an event, a message is sent through the XBee module via the ZigBee protocol. The sink node then receives the data and automatically forwards them to the user through GSM.

    Power consumption is a crucial consideration when working with IoT-based hardware platforms, which include various types of sensing devices. Various board components, including processors, sensors, and communication modules, consume power. Identifying the factors that affect the power consumption of IoT devices, including the Arduino Uno platform, can inform decisions regarding hardware and software configurations. By utilizing low-power sensors and efficient coding practices, energy usage can be minimized, potentially extending the battery life. To investigate strategies for reducing power consumption and to demonstrate how the power usage of the Arduino Uno can be measured, the power-consuming components of the Arduino Uno and its sensors must be examined.

    The test results of power usage were measured using two variables: 1) variation in the sensing duty cycle (i.e., the time interval between two consecutive sensing operations) and 2) variation in the number of operating sensors. The Arduino Uno was used to collect information on the electric energy consumed under several configurations of the hardware platform across multiple experiments. Specifically, the ACS712 current sensor was powered by a 9-V battery as the sole source of energy. Using N sensors, the power usage P_N was first measured based on the current and voltage levels using Eq (1); notably, the voltage was considered constant during the experiment. The energy dissipated by these sensors accounts for the sensing duty cycle and is calculated using Eq (2).

    P = V × I (1)
    E = P × t (2)
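
Eqs (1) and (2) can be checked directly against the logged rows; for example, a measured current of 0.17 A at 9 V over a 2-s cycle reproduces the values in Table 1:

```python
def power(voltage: float, current: float) -> float:
    """Eq (1): instantaneous power in watts."""
    return voltage * current

def energy(p: float, t: float) -> float:
    """Eq (2): energy in joules dissipated over t seconds."""
    return p * t

# Values taken from a row of Table 1: V = 9 V, I = 0.17 A, cycle = 2 s.
p = power(9, 0.17)
e = energy(p, 2)
assert abs(p - 1.53) < 1e-9
assert abs(e - 3.06) < 1e-9
```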

    In this experiment, only one sensor was involved in event sensing, with a sensing time cycle of 2 s. Figure 9 shows a small reduction in the remaining energy in the battery because only one sensing device was in operation. The red curve represents the average variation in the remaining energy. Table 1 lists certain consecutive values of power, voltage, and consumed energy collected from the hardware platform within the experimental period. In this experiment, measurements were performed every 2 s.

    Figure 9.  Battery's remaining energy for one operating sensor with a sensing cycle of 2 s.
    Table 1.  Collected values of power, voltage level, and consumed energy in experiment 1.
    Cycle (s) Power (W) V (V) CH1 (A) Energy (J) Time
    2 1.44 9 0.16 2.88 22:00:12.16
    2 1.44 9 0.16 2.88 22:00:09.69
    2 1.53 9 0.17 3.06 22:00:07.22
    2 1.53 9 0.17 3.06 22:00:04.75
    2 1.53 9 0.17 3.06 22:00:02.28
    2 1.53 9 0.17 3.06 21:59:59.81
    2 1.53 9 0.17 3.06 21:59:57.33
    2 1.53 9 0.17 3.06 21:59:54.86
    2 1.44 9 0.16 2.88 21:59:52.40
    2 1.53 9 0.17 3.06 21:59:49.92
    2 1.53 9 0.17 3.06 21:59:47.45
    2 1.53 9 0.17 3.06 21:59:44.99
    2 1.44 9 0.16 2.88 21:59:42.52


    In this experiment, the number of sensors was increased to three, and the sensing time cycle was maintained at 2 s. The range of energy dissipated during this experiment was estimated to be twice that in experiment 1. Figure 10 shows the energy variations in experiment 2, and the corresponding values are presented in Table 2. For experiment 2, the same process as described in experiment 1 was followed.

    Figure 10.  Battery's remaining energy for three operating sensors with a sensing cycle of 2 s.
    Table 2.  Collected values of power, voltage level, and consumed energy in experiment 2.
    Cycle (s) V (V) CH1 (A) Power (W) Energy (J) Time
    2 9 0.17 1.53 3.06 19:05:33.62
    2 9 0.17 1.53 3.06 19:05:31.16
    2 9 0.17 1.53 3.06 19:05:28.70
    2 9 0.18 1.62 3.24 19:05:26.23
    2 9 0.18 1.62 3.24 19:05:23.76
    2 9 0.17 1.53 3.06 19:05:21.28
    2 9 0.18 1.62 3.24 19:05:18.81
    2 9 0.18 1.62 3.24 19:05:16.33
    2 9 0.18 1.62 3.24 19:05:13.86
    2 9 0.18 1.62 3.24 19:05:11.39
    2 9 0.18 1.62 3.24 19:05:08.93
    2 9 0.18 1.62 3.24 19:05:06.46
    2 9 0.18 1.62 3.24 19:05:03.99


    Three other experiments were performed to obtain more descriptive results regarding energy consumption. In experiment 3, six sensors were operated with a sensing time cycle of 2 s, as shown in Figure 11; the exact values are listed in Table 3. Experiment 4 included six sensors configured for sensing every 30 s, as shown in Figure 12; the exact values are listed in Table 4. Finally, in experiment 5, six sensors were operated with a sensing time cycle of 60 s, as shown in Figure 13; the exact values are listed in Table 5. In these experiments, the sensing time cycle was increased to optimize energy consumption without ignoring the events occurring in the smart home. At this level, a major trade-off was observed between energy optimization and event detection. In other words, it is useful to increase the time cycle without exceeding a certain threshold, beyond which events may be missed or detected late.

    Figure 11.  Battery's remaining energy for six operating sensors with a sensing cycle of 2 s.
    Table 3.  Collected values of power, voltage level, and consumed energy in experiment 3.
    Cycle (s) Power (W) V (V) CH1 (A) Energy (J) Time
    2 1.44 9 0.16 2.88 22:00:12.16
    2 1.44 9 0.16 2.88 22:00:09.69
    2 1.53 9 0.17 3.06 22:00:07.22
    2 1.53 9 0.17 3.06 22:00:04.75
    2 1.53 9 0.17 3.06 22:00:02.28
    2 1.53 9 0.17 3.06 21:59:59.81
    2 1.53 9 0.17 3.06 21:59:57.33
    2 1.53 9 0.17 3.06 21:59:54.86
    2 1.44 9 0.16 2.88 21:59:52.40
    2 1.53 9 0.17 3.06 21:59:49.92
    2 1.53 9 0.17 3.06 21:59:47.45
    2 1.53 9 0.17 3.06 21:59:44.99
    2 1.44 9 0.16 2.88 21:59:42.52

    Figure 12.  Battery's remaining energy for six operating sensors with a sensing cycle of 30 s.
    Table 4.  Collected values of power, voltage level, and consumed energy in experiment 4.
    Cycle (s) Power (W) V (V) CH1 (A) Energy (J) Time
    30 2.52 9 0.28 75.6 20:09:13.73
    30 2.52 9 0.28 75.6 20:08:43.25
    30 2.52 9 0.28 75.6 20:08:12.78
    30 2.52 9 0.28 75.6 20:07:42.31
    30 2.61 9 0.29 78.3 20:07:11.83
    30 2.52 9 0.28 75.6 20:06:41.35
    30 2.61 9 0.29 78.3 20:06:10.86
    30 2.61 9 0.29 78.3 20:05:40.40
    30 2.61 9 0.29 78.3 20:05:09.91
    30 2.7 9 0.3 81 20:04:39.45
    30 2.61 9 0.29 78.3 20:04:08.97
    30 2.79 9 0.31 83.7 20:03:38.50
    30 2.88 9 0.32 86.4 20:03:08.02

    Figure 13.  Battery's remaining energy for six operating sensors with a sensing cycle of 60 s.
    Table 5.  Collected values of power, voltage level, and consumed energy in experiment 5.
    Cycle (s) Power (W) V (V) CH1 (A) Energy (J) Time
    60 0.72 9 0.08 43.2 22:30:36.30
    60 0.81 9 0.09 48.6 22:29:35.83
    60 0.99 9 0.11 59.4 22:28:35.36
    60 0.99 9 0.11 59.4 22:27:34.88
    60 1.08 9 0.12 64.8 22:26:34.41
    60 1.26 9 0.14 75.6 22:24:33.47
    60 1.26 9 0.14 75.6 22:23:32.98
    60 1.35 9 0.15 81 22:22:32.52
    60 1.35 9 0.15 81 22:21:32.05
    60 1.35 9 0.15 81 22:20:31.57
    60 1.44 9 0.16 86.4 22:19:31.10


    An average power consumption chart was used to visualize the amount of power consumed by the device over a specific period. By plotting the power consumption data in a chart, it becomes easier to identify patterns and trends in energy usage that can influence the design and optimization of devices. This can be particularly useful when working with battery-powered devices, where power consumption is a critical factor. Understanding the power consumption of a device over time also helps identify components that use more power than necessary, potentially leading to more efficient and sustainable designs.

    Furthermore, measuring and monitoring power consumption can provide valuable insights into the performance of the device and help make informed decisions regarding future design choices. With these considerations, Figure 14 shows that energy consumption is inversely related to the sensing time cycle: the longer the time cycle, the less energy is consumed. The reduction in energy does not follow a linear function; with experiment 3 as a reference, experiment 5 achieved considerably greater savings than experiment 4. Therefore, to develop energy-efficient and sustainable projects, the sensing time cycle can be increased to an optimal point where the consumed energy is minimal and almost all the events are still detected.
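
The comparison in Figure 14 can be reproduced approximately from the logged values; the sketch below averages only the first five power readings of Tables 4 and 5 (30-s and 60-s cycles):

```python
from statistics import mean

# First five logged power readings (W) taken from Tables 4 and 5.
power_30s = [2.52, 2.52, 2.52, 2.52, 2.61]   # six sensors, 30-s cycle
power_60s = [0.72, 0.81, 0.99, 0.99, 1.08]   # six sensors, 60-s cycle

avg_30s = mean(power_30s)
avg_60s = mean(power_60s)
# For these samples, doubling the cycle more than halves the average power,
# consistent with the nonlinear savings described in the text.
assert avg_60s < avg_30s
```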

    Figure 14.  Power consumption in the sensing platform in different scenarios (number of sensors; sensing cycle).

    The sensor readings were obtained to verify that the code had been successfully uploaded to the platform. In this simple demonstration, we employed only the DHT and MQ2 sensors. The MQ2 sensor detected liquefied petroleum gas (LPG), smoke, and methane, while the DHT sensor measured temperature and humidity. The humidity was expressed as a percentage, with 100% denoting the highest possible humidity, and the temperature was displayed in Celsius, as shown in Figure 15. The DHT and MQ2 readings were viewed on the serial monitor of the Arduino IDE; these readings include the room temperature (T), humidity (H), LPG, and smoke. The sensor readings were then sent to the sink node using the XBee module.

    Figure 15.  Humidity, temperature, and light intensity readings displayed on the Arduino IDE serial monitor.

    The ZigBee protocol is a wireless networking protocol widely used in IoT because of its low power consumption, robustness, and simplicity. With ZigBee, devices can communicate wirelessly, thereby providing a flexible and scalable solution for various IoT applications. In this section, we tested the wireless communication link between two Arduinos using the ZigBee protocol. Specifically, we used the XBee module to enable wireless connectivity between the devices, as shown in Figure 16.

    Figure 16.  Arduino IDE interface for network establishment.

    The sensor readings were sent to the IP gateway using the XBee module over a secure channel. Communication with the gateway can be established manually or automatically using the secure HTTP protocol. The manual configuration uses MAC addresses for IP communication because the XBee module does not integrate the IP layer. Communication involves creating a frame using the XCTU software. To achieve this, the XBee module must be connected to the IP gateway, which acts as the coordinator in the "network working mode." Switching to the console mode is then achieved using (Ctrl + C) in XCTU (Figure 17). Next, a connection is opened and a frame (frame_0) is created. The three major attributes of the manual configuration are the frame type, destination, and content. The frame type is "Transmit Request." The IP-gateway MAC address is provided through a 64-bit destination identifier, and the message content to be transmitted is given in the "RF data" field using ASCII or HEX format. Because the data are received in Base64, a decoding step is required to make the received message human-readable. The manual configuration was used only for debugging purposes.
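
The decoding step mentioned above is a one-liner in most languages; a minimal Python sketch with a hypothetical payload:

```python
import base64

def decode_rf_data(payload: str) -> str:
    """Decode Base64-encoded RF data into human-readable ASCII."""
    return base64.b64decode(payload).decode("ascii")

# Hypothetical frame content for illustration: "T:25 H:60" round-tripped
# through the same Base64 encoding used on the receiving side.
encoded = base64.b64encode(b"T:25 H:60").decode("ascii")
assert decode_rf_data(encoded) == "T:25 H:60"
```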

    Figure 17.  Creating a frame in XCTU.

    In this study, we proposed a method to provide remote monitoring services using an Internet server that allows the supervision of smart home events from any location in the world and at any time. This solution continuously displays sensor readings on an application-based end system, enabling comparison with the input readings and analysis of system efficiency. Each sensing metric is presented to the user at one of three levels: green (normal, no reaction needed), yellow (average level, requiring a follow-up), and red (highest level, requiring serious action).
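
This three-level presentation can be sketched as a simple threshold function; the warn/danger thresholds below are placeholders for illustration, not values from the study:

```python
def alert_level(value: float, warn: float, danger: float) -> str:
    """Map a sensing metric to the green/yellow/red scheme described above."""
    if value >= danger:
        return "red"     # highest level: serious action required
    if value >= warn:
        return "yellow"  # average level: follow-up required
    return "green"       # normal: no reaction needed

# Hypothetical smoke-concentration thresholds (ppm), for illustration only.
assert alert_level(120, warn=200, danger=400) == "green"
assert alert_level(250, warn=200, danger=400) == "yellow"
assert alert_level(500, warn=200, danger=400) == "red"
```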

    Servers constitute an important component in accessing and managing IoT events in remote locations. Server selection should consider certain characteristics suitable for wireless sensor networks. To begin, we surveyed the features of several server providers, such as Amazon Web Services, Google Cloud, and Microsoft Azure, and conducted a comparative study of these services and their capabilities. Furthermore, a comparison with local cloud servers in Saudi Arabia, such as STC Cloud and Sahara Net, was also conducted. Unfortunately, these cloud services are unavailable to individuals in Saudi Arabia. Therefore, ThingSpeak was selected for several reasons: it is a free cloud platform used to monitor and visualize live data as both values and graphs, and it supports the MATLAB programming language, which simplifies graph analysis. In addition to ThingSpeak, the Blynk application was used as a server for the GSM device. Blynk supports both iOS and Android devices.

    After signing up with ThingSpeak, a certain number of channels must be created according to the list of metrics collected from the IoT sensors. In its free version, ThingSpeak offers up to eight fields for monitoring sensing events from the smart home. Therefore, the most significant environmental and other vital sign metrics were considered to create the channels (Figure 18).
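
ThingSpeak's HTTP update endpoint accepts the channel's write API Key and up to eight fieldN parameters per request; a minimal URL builder is sketched below (the API key shown is a placeholder, not a real credential):

```python
from urllib.parse import urlencode

def build_update_url(api_key: str, readings: dict) -> str:
    """Build a ThingSpeak channel-update URL from field-number -> value pairs."""
    params = {"api_key": api_key}
    for field_number, value in sorted(readings.items()):
        params[f"field{field_number}"] = value
    return "https://api.thingspeak.com/update?" + urlencode(params)

# "XXXXXXXXXXXXXXXX" stands in for the channel's write API Key.
url = build_update_url("XXXXXXXXXXXXXXXX", {1: 25.0, 2: 60.0})
assert "field1=25.0" in url and "field2=60.0" in url
```

Issuing an HTTP GET on the resulting URL (for example from the gateway) writes one row of sensor data into the channel for visualization.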

    Figure 18.  ThingSpeak channel fields displaying the sensor data.

    Blynk allows readings collected from various home sensors to be monitored. It supports GSM, Wi-Fi, USB, and several other connection methods. In addition, Blynk operates on many microcontroller and microcomputer platforms, such as the Arduino, Raspberry Pi, and NodeMCU. In this study, the Arduino constituted the first connectivity method, whereas GSM constituted the second. To communicate the readings to the server, the "API Key" and "Channel ID" are the main parameters for successfully setting up the secure connection [28,30]. The Arduino Uno was then configured with these parameters.

    Blynk supports three types of pins: analog, digital, and virtual. The first type allows a direct connection to an analog pin of the platform, and the second provides the same for a digital pin, whereas the virtual type is needed for any variable metric in the code. As a result, this work uses virtual pins, as Blynk provides more than 1000 virtual pins with no limitation imposed by the GPIO pins of the platform.

    In this study, several concepts related to the IoT, remote monitoring, and their applications were studied, and the most recent approaches for remote monitoring were reviewed. We proposed a flexible architecture that includes real electronic components used for event detection, data communication, and information visualization and analysis. The major steps for building an IoT-based monitoring system (i.e., configuring the XBee module, IP gateway, GSM transmitter, and Arduino platform, including various sensors) were outlined. In addition, remote monitoring was incorporated through the inclusion of a server and its application in the overall architecture. Finally, we described the establishment of heterogeneous hardware and software connections to achieve the main objectives of remote monitoring. A two-level secure channel was enabled based on HTTPS and the built-in API Key features of the cloud. In conclusion, the obtained results justify the usefulness of the proposed method for various monitoring applications in smart cities, including event sensing, secure data communication, information security, analysis, and visualization. Based on the collected values, the results demonstrate the efficiency of the proposed system in terms of power consumption and delivery ratio.

    In the future, the system is expected to be enhanced with an AI algorithm that detects sensed data anomalies at an advanced stage and manages them automatically in critical situations. In other words, the system will sense danger, apply AI to determine its type, and communicate with the competent government agencies to address the situation. In this manner, the confidentiality of the sensed information is ensured, and security risks can be addressed using sophisticated end-to-end security and privacy mechanisms supported during communication between individual partitions of the system.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors extend their appreciation to the Deputyship for Research & Innovation, "Ministry of Education" in Saudi Arabia for funding this research work through the project number (IFKSUDR_D104).

    The authors declare that there are no conflicts of interest.



    [1] L. K. Johnson, Smart intelligence, Foreign Policy, (1992), 53–69.
    [2] J. Wang, C. Xu, J. Zhang, R. Zhong, Big data analytics for intelligent manufacturing systems: A review, J. Manuf Syst., (2021). https://doi.org/10.1016/j.jmsy.2021.03.005
    [3] W. H. Zijm, Towards intelligent manufacturing planning and control systems, OR-Spektrum, 22 (2000), 313–345. https://doi.org/10.1007/s002919900032 doi: 10.1007/s002919900032
    [4] W. Qi H. Su, A cybertwin based multimodal network for ecg patterns monitoring using deep learning, IEEE Trans. Industr. Inform., (2022). https://doi.org/10.1109/TII.2022.3159583
    [5] L. Monostori, J. Prohaszka, A step towards intelligent manufacturing: Modelling and monitoring of manufacturing processes through artificial neural networks, CIRP Ann., 42 (1993), 485–488. https://doi.org/10.1016/S0007-8506(07)62491-3 doi: 10.1016/S0007-8506(07)62491-3
    [6] X. Yao, J. Zhou, J. Zhang, C. R. Boër, From intelligent manufacturing to smart manufacturing for industry 4.0 driven by next generation artificial intelligence and further on, in 2017 5th international conference on enterprise systems (ES). IEEE, (2017), 311–318. https://doi.org/10.1109/ES.2017.58
    [7] J. Yi, C. Lu, G. Li, A literature review on latest developments of harmony search and its applications to intelligent manufacturing, Math. Biosci. Eng., 16 (2019), 2086–2117. https://doi.org/10.3934/mbe.2019102 doi: 10.3934/mbe.2019102
    [8] S. Shan, X. Wen, Y. Wei, Z. Wang, Y. Chen, Intelligent manufacturing in industry 4.0: A case study of sany heavy industry, Syst. Res. Behav. Sci., 37 (2020), 679–690. https://doi.org/10.1002/sres.2709 doi: 10.1002/sres.2709
    [9] H. Yoshikawa, Manufacturing and the 21st century intelligent manufacturing systems and the renaissance of the manufacturing industry, Technol. Forecast Soc. Change, 49 (1995), 195–213. https://doi.org/10.1016/0040-1625(95)00008-X doi: 10.1016/0040-1625(95)00008-X
    [10] J. Zheng, K. Chan, I. Gibson, Virtual reality, IEEE Potent., 17 (1998), 20–23.
    [11] M. J. Schuemie, P. Van Der Straaten, M. Krijn, C. A. Van Der Mast, Research on presence in virtual reality: A survey, Cyberpsychol. & Behav., 4 (2001), 183–201. https://doi.org/10.1089/109493101300117884 doi: 10.1089/109493101300117884
    [12] C. Anthes, R. J. García-Hernández, M. Wiedemann, D. Kranzlmüller, State of the art of virtual reality technology, in IEEE Aerosp. Conf.. (2016), 1–19. 10.1109/AERO.2016.7500674
    [13] F. Biocca, B. Delaney, Immersive virtual reality technology, Communication in the age of virtual reality, 15 (1995). https://doi.org/10.4324/9781410603128
    [14] T. Mazuryk, M. Gervautz, Virtual reality-history, applications, technology and future, 1996.
    [15] N.-N. Zhou, Y.-L. Deng, Virtual reality: A state-of-the-art survey, Int. J. Autom. Comput., 6 (2009), 319–325. https://doi.org/10.1007/s11633-009-0319-9 doi: 10.1007/s11633-009-0319-9
    [16] J. Egger, T. Masood, Augmented reality in support of intelligent manufacturing–a systematic literature review, Comput. Ind. Eng., 140 (2020), 106195. https://doi.org/10.1016/j.cie.2019.106195 doi: 10.1016/j.cie.2019.106195
    [17] B.-H. Li, B.-C. Hou, W.-T. Yu, X.-B. Lu, C.-W. Yang, Applications of artificial intelligence in intelligent manufacturing: a review, Front. Inform. Tech. El., 18 (2017), 86–96. https://doi.org/10.1631/FITEE.1601885 doi: 10.1631/FITEE.1601885
    [18] B. He, K.-J. Bai, Digital twin-based sustainable intelligent manufacturing: A review, Adv. Manuf., 9 (2021), 1–21. https://doi.org/10.1007/s40436-020-00302-5 doi: 10.1007/s40436-020-00302-5
    [19] G.-J. Cheng, L.-T. Liu, X.-J. Qiang, Y. Liu, Industry 4.0 development and application of intelligent manufacturing, in 2016 international conference on information system and artificial intelligence (ISAI). IEEE, (2016), 407–410. https://doi.org/10.1109/ISAI.2016.0092
    [20] G. Y. Tian, G. Yin, D. Taylor, Internet-based manufacturing: A review and a new infrastructure for distributed intelligent manufacturing, J. Intell. Manuf., 13 (2002), 323–338. https://doi.org/10.1023/A:1019907906158 doi: 10.1023/A:1019907906158
    [21] H. Su, W. Qi, J. Chen, D. Zhang, Fuzzy approximation-based task-space control of robot manipulators with remote center of motion constraint, IEEE Trans. Fuzzy Syst., 30 (2022), 1564–1573. https://doi.org/10.1109/TFUZZ.2022.3157075 doi: 10.1109/TFUZZ.2022.3157075
    [22] M.-S. Yoh, The reality of virtual reality, in Proceedings seventh international conference on virtual systems and multimedia. IEEE, (2001), 666–674. https://doi.org/10.1109/VSMM.2001.969726
    [23] V. Antoniou, F. L. Bonali, P. Nomikou, A. Tibaldi, P. Melissinos, F. P. Mariotto, et al., Integrating virtual reality and gis tools for geological mapping, data collection and analysis: An example from the metaxa mine, santorini (greece), Appl. Sci., 10 (2020), 8317. https://doi.org/10.3390/app10238317 doi: 10.3390/app10238317
    [24] A. Kunz, M. Zank, T. Nescher, K. Wegener, Virtual reality based time and motion study with support for real walking, Proced. CIRP, 57 (2016), 303–308. https://doi.org/10.1016/j.procir.2016.11.053 doi: 10.1016/j.procir.2016.11.053
    [25] M. Serras, L. G.-Sardia, B. Simes, H. lvarez, J. Arambarri, Dialogue enhanced extended reality: Interactive system for the operator 4.0, Appl. Sci., 10 (2020). https://doi.org/10.3390/app10113960
    [26] A. G. da Silva, M. V. M. Gomes, I. Winkler, Virtual reality and digital human modeling for ergonomic assessment in industrial product development: A patent and literature review, Appl. Sci., 12 (2022), 1084. https://doi.org/10.3390/app12031084 doi: 10.3390/app12031084
    [27] J. Kim, J. Jeong, Design and implementation of opc ua-based vr/ar collaboration model using cps server for vr engineering process, Appl. Sci., 12 (2022), 7534. https://doi.org/10.3390/app12157534 doi: 10.3390/app12157534
    [28] J.-d.-J. Cordero-Guridi, L. Cuautle-Gutiérrez, R.-I. Alvarez-Tamayo, S.-O. Caballero-Morales, Design and development of a i4. 0 engineering education laboratory with virtual and digital technologies based on iso/iec tr 23842-1 standard guidelines, Appl. Sci., 12 (2022), 5993. https://doi.org/10.3390/app12125993 doi: 10.3390/app12125993
    [29] H. Heinonen, A. Burova, S. Siltanen, J. Lähteenmäki, J. Hakulinen, M. Turunen, Evaluating the benefits of collaborative vr review for maintenance documentation and risk assessment, Appl. Sci., 12 (2022), 7155. https://doi.org/10.3390/app12147155 doi: 10.3390/app12147155
    [30] V. Settgast, K. Kostarakos, E. Eggeling, M. Hartbauer, T. Ullrich, Product tests in virtual reality: Lessons learned during collision avoidance development for drones, Designs, 6 (2022), 33. https://doi.org/10.3390/designs6020033 doi: 10.3390/designs6020033
    [31] D. Mourtzis, J. Angelopoulos, N. Panopoulos, Smart manufacturing and tactile internet based on 5g in industry 4.0: Challenges, applications and new trends, Electronics-Switz, 10 (2021), 3175. https://doi.org/10.3390/electronics10243175 doi: 10.3390/electronics10243175
    [32] Y. Saito, K. Kawashima, M. Hirakawa, Effectiveness of a head movement interface for steering a vehicle in a virtual reality driving simulation, Symmetry, 12 (2020), 1645. https://doi.org/10.3390/sym12101645 doi: 10.3390/sym12101645
    [33] Y.-P. Su, X.-Q. Chen, T. Zhou, C. Pretty, G. Chase, Mixed-reality-enhanced human–robot interaction with an imitation-based mapping approach for intuitive teleoperation of a robotic arm-hand system, Appl. Sci., 12 (2022), 4740. https://doi.org/10.3390/app12094740 doi: 10.3390/app12094740
    [34] F. Arena, M. Collotta, G. Pau, F. Termine, An overview of augmented reality, Computers, 11 (2022), 28. https://doi.org/10.3390/computers11020028 doi: 10.3390/computers11020028
    [35] P. C. Thomas, W. David, Augmented reality: An application of heads-up display technology to manual manufacturing processes, in Hawaii international conference on system sciences, 2. ACM SIGCHI Bulletin New York, NY, USA, 1992.
    [36] J. Safari Bazargani, A. Sadeghi-Niaraki, S.-M. Choi, Design, implementation, and evaluation of an immersive virtual reality-based educational game for learning topology relations at schools: A case study, Sustainability-Basel, 13 (2021), 13066. https://doi.org/10.3390/su132313066 doi: 10.3390/su132313066
    [37] K. Židek, J. Pitel', M. Balog, A. Hošovskỳ, V. Hladkỳ, P. Lazorík, et al., CNN training using 3d virtual models for assisted assembly with mixed reality and collaborative robots, Appl. Sci., 11 (2021), 4269. https://doi.org/10.3390/app11094269 doi: 10.3390/app11094269
    [38] S. Mandal, Brief introduction of virtual reality & its challenges, Int. J. Sci. Eng. Res., 4 (2013), 304–309.
    [39] D. Rose, N. Foreman, Virtual reality. The Psycho., (1999).
    [40] G. Riva, C. Malighetti, A. Chirico, D. Di Lernia, F. Mantovani, A. Dakanalis, Virtual reality, in Rehabilitation interventions in the patient with obesity. Springer, (2020), 189–204.
    [41] J. N. Latta, D. J. Oberg, A conceptual virtual reality model, IEEE Comput. Graph. Appl., 14 (1994), 23–29. https://doi.org/10.1109/38.250915 doi: 10.1109/38.250915
    [42] J. Lanier, Virtual reality: The promise of the future. Interactive Learning International, 8 (1992), 275–79.
    [43] S. Serafin, C. Erkut, J. Kojs, N. C. Nilsson, R. Nordahl, Virtual reality musical instruments: State of the art, design principles, and future directions, Comput. Music. J., 40 (2016). https://doi.org/10.1162/COMJ_a_00372
    [44] W. Qi, H. Su, A. Aliverti, A smartphone-based adaptive recognition and real-time monitoring system for human activities, IEEE Trans. Hum. Mach. Syst., 50 (2020), 414 - 423. https://doi.org/10.1109/THMS.2020.2984181 doi: 10.1109/THMS.2020.2984181
    [45] P. Kopacek, Intelligent manufacturing: present state and future trends, J. Intell. Robot. Syst., 26 (1999), 217–229. https://doi.org/10.1023/A:1008168605803 doi: 10.1023/A:1008168605803
    [46] Y. Feng, Y. Zhao, H. Zheng, Z. Li, J. Tan, Data-driven product design toward intelligent manufacturing: A review, Int. J. Adv. Robot. Syst., 17 (2020), 1729881420911257. https://doi.org/10.1177/1729881420911257 doi: 10.1177/1729881420911257
    [47] H. Su, W. Qi, Y. Hu, H. R. Karimi, G. Ferrigno, E. De Momi, An incremental learning framework for human-like redundancy optimization of anthropomorphic manipulators, IEEE Trans. Industr. Inform., 18 (2020), 1864–1872. https://doi.org/10.1109/TII.2020.3036693 doi: 10.1109/TII.2020.3036693
    [48] E. Hozdić, Smart factory for industry 4.0: A review, Int. J. Adv. Manuf. Technol., 7 (2015), 28–35.
    [49] R. Burke, A. Mussomeli, S. Laaper, M. Hartigan, B. Sniderman, The smart factory: Responsive, adaptive, connected manufacturing, Deloitte Insights, 31 (2017), 1–10.
    [50] R. Y. Zhong, X. Xu, E. Klotz, S. T. Newman, Intelligent manufacturing in the context of industry 4.0: a review, Engineering-Prc, 3 (2017), 616–630. https://doi.org/10.1016/J.ENG.2017.05.015 doi: 10.1016/J.ENG.2017.05.015
    [51] A. Kusiak, Intelligent Manufacturing Systems, Prentice-Hall, Englewood Cliffs, NJ, (1990).
    [52] G. Rzevski, A framework for designing intelligent manufacturing systems, Comput. Ind., 34 (1997), 211–219. https://doi.org/10.1016/S0166-3615(97)00056-0 doi: 10.1016/S0166-3615(97)00056-0
    [53] E. Oztemel, Intelligent manufacturing systems, in Artificial intelligence techniques for networked manufacturing enterprises management. Springer, (2010), 1–41. https://doi.org/10.1007/978-1-84996-119-6_1
    [54] J. Zhou, P. Li, Y. Zhou, B. Wang, J. Zang, L. Meng, Toward new-generation intelligent manufacturing, Engineering-Prc, 4 (2018), 11–20. https://doi.org/10.1016/j.eng.2018.01.002 doi: 10.1016/j.eng.2018.01.002
    [55] R. Y. Zhong, X. Xu, E. Klotz, S. T. Newman, Intelligent manufacturing in the context of industry 4.0: a review, Engineering-Prc, 3 (2017), 616–630. https://doi.org/10.1016/J.ENG.2017.05.015 doi: 10.1016/J.ENG.2017.05.015
    [56] H. S. Kang, J. Y. Lee, S. Choi, H. Kim, J. H. Park, J. Y. Son, B. H. Kim, S. D. Noh, Smart manufacturing: Past research, present findings, and future directions, Int. J. Pr. Eng. Man-Gt., 3 (2016), 111–128. https://doi.org/10.1007/s40684-016-0015-5 doi: 10.1007/s40684-016-0015-5
    [57] R. Jardim-Goncalves, D. Romero, A. Grilo, Factories of the future: challenges and leading innovations in intelligent manufacturing, Int. J. Comput. Integr. Manuf., 30 (2017), 4–14.
    [58] A. Kusiak, Smart manufacturing, Int. J. Prod. Res., 56 (2018), 508–517. https://doi.org/10.1080/00207543.2017.1351644
    [59] B. Wang, F. Tao, X. Fang, C. Liu, Y. Liu, T. Freiheit, Smart manufacturing and intelligent manufacturing: A comparative review, Engineering-Prc, 7 (2021), 738–757. https://doi.org/10.1016/j.eng.2020.07.017 doi: 10.1016/j.eng.2020.07.017
    [60] P. Zheng, Z. Sang, R. Y. Zhong, Y. Liu, C. Liu, K. Mubarok, et al., Smart manufacturing systems for industry 4.0: Conceptual framework, scenarios, and future perspectives, Front. Mech. Eng., 13 (2018), 137–150. https://doi.org/10.1007/s11465-018-0499-5 doi: 10.1007/s11465-018-0499-5
    [61] P. Osterrieder, L. Budde, T. Friedli, The smart factory as a key construct of industry 4.0: A systematic literature review, Int. J. Prod. Econ., 221 (2020), 107476. https://doi.org/10.1016/j.ijpe.2019.08.011
    [62] D. Guo, M. Li, R. Zhong, G. Q. Huang, Graduation intelligent manufacturing system (gims): an industry 4.0 paradigm for production and operations management, Ind. Manage. Data Syst., (2020). https://doi.org/10.1108/IMDS-08-2020-0489
    [63] A. Barari, M. de Sales Guerra Tsuzuki, Y. Cohen, M. Macchi, Intelligent manufacturing systems towards industry 4.0 era, J. Intell. Manuf., 32 (2021), 1793–1796. https://doi.org/10.1007/s10845-021-01769-0 doi: 10.1007/s10845-021-01769-0
    [64] C. Christo, C. Cardeira, Trends in intelligent manufacturing systems, in 2007 IEEE International Symposium on Industrial Electronics-Switz.. IEEE, (2007), 3209–3214. https://doi.org/10.1109/ISIE.2007.4375129
    [65] M.-P. Pacaux-Lemoine, D. Trentesaux, G. Z. Rey, P. Millot, Designing intelligent manufacturing systems through human-machine cooperation principles: A human-centered approach, Comput. Ind. Eng., 111 (2017), 581–595. https://doi.org/10.1016/j.cie.2017.05.014 doi: 10.1016/j.cie.2017.05.014
    [66] W. F. Gaughran, S. Burke, P. Phelan, Intelligent manufacturing and environmental sustainability, Robot. Comput. Integr. Manuf., 23 (2007), 704–711. https://doi.org/10.1016/j.rcim.2007.02.016 doi: 10.1016/j.rcim.2007.02.016
    [67] Y. Boas, Overview of virtual reality technologies, in Inter. Mult. Confer., 2013 (2013).
    [68] A. Segura, H. V. Diez, I. Barandiaran, A. Arbelaiz, H. Álvarez, B. Simões, J. Posada, A. García-Alonso, R. Ugarte, Visual computing technologies to support the operator 4.0, Comput. Ind. Eng., 139 (2020), 105550. https://doi.org/10.1016/j.cie.2018.11.060 doi: 10.1016/j.cie.2018.11.060
    [69] D. Romero, J. Stahre, T. Wuest, O. Noran, P. Bernus, Å. Fasth-Berglund, D. Gorecky, Towards an operator 4.0 typology: A human-centric perspective on the fourth industrial revolution technologies, in International Conference on Computers and Industrial Engineering (CIE46), (2016).
    [70] H. Qiao, J. Chen, X. Huang, A survey of brain-inspired intelligent robots: Integration of vision, decision, motion control, and musculoskeletal systems, IEEE Trans. Cybern., 52 (2022), 11267–11280. https://doi.org/10.1109/TCYB.2021.3071312
    [71] F. Firyaguna, J. John, M. O. Khyam, D. Pesch, E. Armstrong, H. Claussen, H. V. Poor et al., Towards industry 5.0: Intelligent reflecting surface (irs) in smart manufacturing, arXiv preprint arXiv: 2201.02214, (2022). https://doi.org/10.1109/MCOM.001.2200016
    [72] A. M. Almassri, W. Wan Hasan, S. A. Ahmad, A. J. Ishak, A. Ghazali, D. Talib, C. Wada, Pressure sensor: state of the art, design, and application for robotic hand, J. Sensors, 2015 (2015). https://doi.org/10.1155/2015/846487
    [73] B. Munari, Design as art. Penguin UK, (2008).
    [74] B. De La Harpe, J. F. Peterson, N. Frankham, R. Zehner, D. Neale, E. Musgrave, R. McDermott, Assessment focus in studio: What is most prominent in architecture, art and design? IJADE., 28 (2009), 37–51. https://doi.org/10.1111/j.1476-8070.2009.01591.x
    [75] C. Gray, J. Malins, Visualizing research: A guide to the research process in art and design. Routledge, (2016).
    [76] M. Barnard, Art, design and visual culture: An introduction. Bloomsbury Publishing, (1998).
    [77] C. Crouch, Modernism in art, design and architecture. Bloomsbury Publishing, (1998).
    [78] M. Biggs, The role of the artefact in art and design research, Int. J. Des. Sci. Technol., (2002).
    [79] H. Su, W. Qi, Y. Schmirander, S. E. Ovur, S. Cai, X. Xiong, A human activity-aware shared control solution for medical human–robot interaction, Assembly Autom., (2022) ahead-of-print. https://doi.org/10.1108/AA-12-2021-0174
    [80] R. D. Gandhi, D. S. Patel, Virtual reality–opportunities and challenges, Virtual Real., 5 (2018).
    [81] A. J. Trappey, C. V. Trappey, M.-H. Chao, C.-T. Wu, Vr-enabled engineering consultation chatbot for integrated and intelligent manufacturing services, J. Ind. Inf. Integrat., 26 (2022), 100331. https://doi.org/10.1016/j.jii.2022.100331 doi: 10.1016/j.jii.2022.100331
    [82] K. Valaskova, M. Nagy, S. Zabojnik, G. Lăzăroiu, Industry 4.0 wireless networks and cyber-physical smart manufacturing systems as accelerators of value-added growth in slovak exports, Mathematics-Basel, 10 (2022), 2452. https://doi.org/10.3390/math10142452 doi: 10.3390/math10142452
    [83] J. de Assis Dornelles, N. F. Ayala, A. G. Frank, Smart working in industry 4.0: How digital technologies enhance manufacturing workers' activities, Comput. Ind. Eng., 163 (2022), 107804. https://doi.org/10.1016/j.cie.2021.107804 doi: 10.1016/j.cie.2021.107804
    [84] V. Tripathi, S. Chattopadhyaya, A. K. Mukhopadhyay, S. Sharma, C. Li, S. Singh, W. U. Hussan, B. Salah, W. Saleem, A. Mohamed, A sustainable productive method for enhancing operational excellence in shop floor management for industry 4.0 using hybrid integration of lean and smart manufacturing: An ingenious case study, Sustainability-Basel, 14 (2022), 7452. https://doi.org/10.3390/su14127452 doi: 10.3390/su14127452
    [85] S. M. M. Sajadieh, Y. H. Son, S. D. Noh, A conceptual definition and future directions of urban smart factory for sustainable manufacturing, Sustainability-Basel, 14 (2022), 1221. https://doi.org/10.3390/su14031221 doi: 10.3390/su14031221
    [86] Y. H. Son, G.-Y. Kim, H. C. Kim, C. Jun, S. D. Noh, Past, present, and future research of digital twin for smart manufacturing, J. Comput. Des. Eng., 9 (2022), 1–23. https://doi.org/10.1093/jcde/qwab067 doi: 10.1093/jcde/qwab067
    [87] G. Moiceanu, G. Paraschiv, Digital twin and smart manufacturing in industries: A bibliometric analysis with a focus on industry 4.0, Sensors-Basel, 22 (2022), 1388. https://doi.org/10.3390/s22041388 doi: 10.3390/s22041388
    [88] K. Cheng, Q. Wang, D. Yang, Q. Dai, M. Wang, Digital-twins-driven semi-physical simulation for testing and evaluation of industrial software in a smart manufacturing system, Machines, 10 (2022), 388. https://doi.org/10.3390/machines10050388 doi: 10.3390/machines10050388
    [89] S. Arjun, L. Murthy, P. Biswas, Interactive sensor dashboard for smart manufacturing, Procedia Comput. Sci., 200 (2022), 49–61. https://doi.org/10.1016/j.procs.2022.01.204 doi: 10.1016/j.procs.2022.01.204
    [90] J. Yang, Y. H. Son, D. Lee, S. D. Noh, Digital twin-based integrated assessment of flexible and reconfigurable automotive part production lines, Machines, 10 (2022), 75. https://doi.org/10.3390/machines10020075 doi: 10.3390/machines10020075
    [91] J. Friederich, D. P. Francis, S. Lazarova-Molnar, N. Mohamed, A framework for data-driven digital twins for smart manufacturing, Comput. Ind., 136 (2022), 103586. https://doi.org/10.1016/j.compind.2021.103586 doi: 10.1016/j.compind.2021.103586
    [92] L. Li, B. Lei, C. Mao, Digital twin in smart manufacturing, J. Ind. Inf. Integr., 26 (2022), 100289. https://doi.org/10.1016/j.jii.2021.100289 doi: 10.1016/j.jii.2021.100289
    [93] D. Nåfors, B. Johansson, Virtual engineering using realistic virtual models in brownfield factory layout planning, Sustainability-Basel, 13 (2021), 11102. https://doi.org/10.3390/su131911102 doi: 10.3390/su131911102
    [94] A. Geiger, E. Brandenburg, R. Stark, Natural virtual reality user interface to define assembly sequences for digital human models, Appl. System Innov., 3 (2020), 15. https://doi.org/10.3390/asi3010015 doi: 10.3390/asi3010015
    [95] G. Gabajova, B. Furmannova, I. Medvecka, P. Grznar, M. Krajčovič, R. Furmann, Virtual training application by use of augmented and virtual reality under university technology enhanced learning in slovakia, Sustainability-Basel, 11 (2019), 6677. https://doi.org/10.3390/su11236677 doi: 10.3390/su11236677
    [96] W. Qi, S. E. Ovur, Z. Li, A. Marzullo, R. Song, Multi-sensor guided hand gesture recognition for a teleoperated robot using a recurrent neural network, IEEE Robot Autom Lett., 6 (2021), 6039–6045. https://doi.org/10.1109/LRA.2021.3089999 doi: 10.1109/LRA.2021.3089999
    [97] L. Pérez, S. Rodríguez-Jiménez, N. Rodríguez, R. Usamentiaga, D. F. García, Digital twin and virtual reality based methodology for multi-robot manufacturing cell commissioning, Appl. Sci., 10 (2020), 3633. https://doi.org/10.3390/app10103633 doi: 10.3390/app10103633
    [98] J. Mora-Serrano, F. Muñoz-La Rivera, I. Valero, Factors for the automation of the creation of virtual reality experiences to raise awareness of occupational hazards on construction sites, Electronics-Switz., 10 (2021), 1355. https://doi.org/10.3390/electronics10111355 doi: 10.3390/electronics10111355
    [99] C. McDonald, K. A. Campbell, C. Benson, M. J. Davis, C. J. Frost, Workforce development and multiagency collaborations: A presentation of two case studies in child welfare, Sustainability-Basel, 13 (2021), 10190. https://doi.org/10.3390/su131810190 doi: 10.3390/su131810190
    [100] Z. Xu, N. Zheng, Incorporating virtual reality technology in safety training solution for construction site of urban cities, Sustainability-Basel, 13 (2020), 243. https://doi.org/10.3390/su13010243 doi: 10.3390/su13010243
    [101] L. Frizziero, L. Galletti, L. Magnani, E. G. Meazza, M. Freddi, Blitz vision: Development of a new full-electric sports sedan using qfd, sde and virtual prototyping, Inventions, 7 (2022), 41. https://doi.org/10.3390/inventions7020041 doi: 10.3390/inventions7020041
    [102] N. Lyons, Deep learning-based computer vision algorithms, immersive analytics and simulation software, and virtual reality modeling tools in digital twin-driven smart manufacturing, Econom. Manag. Financ. Markets, 17 (2022).
    [103] H. Qiao, S. Zhong, Z. Chen, H. Wang, Improving performance of robots using human-inspired approaches: A survey, Sci. China Inf. Sci., 65 (2022), 221201. https://doi.org/10.1007/s11432-022-3606-1 doi: 10.1007/s11432-022-3606-1
    [104] H. Su, A. Mariani, S. E. Ovur, A. Menciassi, G. Ferrigno, E. De Momi, Toward teaching by demonstration for robot-assisted minimally invasive surgery, IEEE Trans. Autom. Sci. Eng., 18 (2021), 484–494. https://doi.org/10.1109/TASE.2020.3045655 doi: 10.1109/TASE.2020.3045655
    [105] H. Su, W. Qi, Z. Li, Z. Chen, G. Ferrigno, E. De Momi, Deep neural network approach in EMG-based force estimation for human–robot interaction, IEEE Trans. Artif. Intell., 2 (2021), 404–412. https://doi.org/10.1109/TAI.2021.3066565 doi: 10.1109/TAI.2021.3066565
    [106] A. A. Malik, T. Masood, A. Bilberg, Virtual reality in manufacturing: immersive and collaborative artificial-reality in design of human-robot workspace, Int. J. Comput. Integr. Manuf., 33 (2020), 22–37. https://doi.org/10.1080/0951192X.2019.1690685 doi: 10.1080/0951192X.2019.1690685
    [107] A. Corallo, A. M. Crespino, M. Lazoi, M. Lezzi, Model-based big data analytics-as-a-service framework in smart manufacturing: A case study, Robot. Comput. Integr. Manuf., 76 (2022), 102331. https://doi.org/10.1016/j.rcim.2022.102331 doi: 10.1016/j.rcim.2022.102331
    [108] Y.-M. Tang, G. T. S. Ho, Y.-Y. Lau, S.-Y. Tsui, Integrated smart warehouse and manufacturing management with demand forecasting in small-scale cyclical industries, Machines, 10 (2022), 472. https://doi.org/10.3390/machines10060472 doi: 10.3390/machines10060472
    [109] M. Samardžić, D. Stefanović, U. Marjanović, Transformation towards smart working: Research proposal, in 2022 21st International Symposium INFOTEH-JAHORINA (INFOTEH). IEEE, (2022), 1–6. https://doi.org/10.1109/INFOTEH53737.2022.9751256
    [110] T. Caporaso, S. Grazioso, G. Di Gironimo, Development of an integrated virtual reality system with wearable sensors for ergonomic evaluation of human–robot cooperative workplaces, Sensors-Basel, 22 (2022), 2413. https://doi.org/10.3390/s22062413 doi: 10.3390/s22062413
    [111] W. Qi, N. Wang, H. Su, A. Aliverti, DCNN based human activity recognition framework with depth vision guiding, Neurocomputing, 486 (2022), 261–271. https://doi.org/10.1016/j.neucom.2021.11.044 doi: 10.1016/j.neucom.2021.11.044
    [112] W. Zhu, X. Fan, Y. Zhang, Applications and research trends of digital human models in the manufacturing industry, VRIH, 1 (2019), 558–579. https://doi.org/10.1016/j.vrih.2019.09.005 doi: 10.1016/j.vrih.2019.09.005
    [113] O. Robert, P. Iztok, B. Borut, Real-time manufacturing optimization with a simulation model and virtual reality, Procedia Manuf., 38 (2019), 1103–1110. https://doi.org/10.1016/j.promfg.2020.01.198 doi: 10.1016/j.promfg.2020.01.198
    [114] I. Kačerová, J. Kubr, P. Hořejší, J. Kleinová, Ergonomic design of a workplace using virtual reality and a motion capture suit, Appl. Sci., 12 (2022), 2150. https://doi.org/10.3390/app12042150 doi: 10.3390/app12042150
    [115] M. Woschank, D. Steinwiedder, A. Kaiblinger, P. Miklautsch, C. Pacher, H. Zsifkovits, The integration of smart systems in the context of industrial logistics in manufacturing enterprises, Procedia Comput. Sci., 200 (2022), 727–737. https://doi.org/10.1016/j.procs.2022.01.271 doi: 10.1016/j.procs.2022.01.271
    [116] A. Umbrico, A. Orlandini, A. Cesta, M. Faroni, M. Beschi, N. Pedrocchi, A. Scala, P. Tavormina, S. Koukas, A. Zalonis et al., Design of advanced human–robot collaborative cells for personalized human–robot collaborations, Appl. Sci., 12 (2022), 6839. https://doi.org/10.3390/app12146839 doi: 10.3390/app12146839
    [117] W. Qi, A. Aliverti, A multimodal wearable system for continuous and real-time breathing pattern monitoring during daily activity, IEEE J. Biomed. Health Inform., 24 (2019), 2199–2207. https://doi.org/10.1109/JBHI.2019.2963048 doi: 10.1109/JBHI.2019.2963048
    [118] J. M. Runji, Y.-J. Lee, C.-H. Chu, User requirements analysis on augmented reality-based maintenance in manufacturing, J. Comput. Inf. Sci. Eng., 22 (2022), 050901. https://doi.org/10.1115/1.4053410 doi: 10.1115/1.4053410
    [119] D. Wuttke, A. Upadhyay, E. Siemsen, A. Wuttke-Linnemann, Seeing the bigger picture? ramping up production with the use of augmented reality, Manuf. Serv. Oper. Manag., (2022). https://doi.org/10.1287/msom.2021.1070
    [120] M. Catalano, A. Chiurco, C. Fusto, L. Gazzaneo, F. Longo, G. Mirabelli, L. Nicoletti, V. Solina, S. Talarico, A digital twin-driven and conceptual framework for enabling extended reality applications: A case study of a brake discs manufacturer, Procedia Comput. Sci., 200 (2022), 1885–1893. https://doi.org/10.1016/j.procs.2022.01.389 doi: 10.1016/j.procs.2022.01.389
    [121] J. S. Devagiri, S. Paheding, Q. Niyaz, X. Yang, S. Smith, Augmented reality and artificial intelligence in industry: Trends, tools, and future challenges, Expert Syst. Appl., (2022), 118002. https://doi.org/10.1016/j.eswa.2022.118002
    [122] P. T. Ho, J. A. Albajez, J. Santolaria, J. A. Yagüe-Fabra, Study of augmented reality based manufacturing for further integration of quality control 4.0: A systematic literature review, Appl. Sci., 12 (2022), 1961. https://doi.org/10.3390/app12041961 doi: 10.3390/app12041961
    [123] Z.-H. Lai, W. Tao, M. C. Leu, Z. Yin, Smart augmented reality instructional system for mechanical assembly towards worker-centered intelligent manufacturing, J. Manuf. Syst., 55 (2020), 69–81. https://doi.org/10.1016/j.jmsy.2020.02.010 doi: 10.1016/j.jmsy.2020.02.010
    [124] J. Xiong, E.-L. Hsiang, Z. He, T. Zhan, S.-T. Wu, Augmented reality and virtual reality displays: emerging technologies and future perspectives, Light Sci. Appl., 10 (2021), 1–30. https://doi.org/10.1038/s41377-021-00658-8 doi: 10.1038/s41377-021-00658-8
    [125] M.-G. Kim, J. Kim, S. Y. Chung, M. Jin, M. J. Hwang, Robot-based automation for upper and sole manufacturing in shoe production, Machines, 10 (2022), 255. https://doi.org/10.3390/machines10040255 doi: 10.3390/machines10040255
    [126] P. Grefen, I. Vanderfeesten, K. Traganos, Z. Domagala-Schmidt, J. van der Vleuten, Advancing smart manufacturing in europe: Experiences from two decades of research and innovation projects, Machines, 10 (2022), 45. https://doi.org/10.3390/machines10010045 doi: 10.3390/machines10010045
    [127] Y. Zhou, J. Zang, Z. Miao, T. Minshall, Upgrading pathways of intelligent manufacturing in china: Transitioning across technological paradigms, Engineering-Prc, 5 (2019), 691–701. https://doi.org/10.1016/j.eng.2019.07.016 doi: 10.1016/j.eng.2019.07.016
    [128] K. S. Kiangala, Z. Wang, An experimental safety response mechanism for an autonomous moving robot in a smart manufacturing environment using q-learning algorithm and speech recognition, Sensors-Basel, 22 (2022), 941. https://doi.org/10.3390/s22030941 doi: 10.3390/s22030941
    [129] S. Fernandes, Which way to cope with covid-19 challenges? contributions of the iot for smart city projects, Big Data Cogn. Comput., 5 (2021), 26. https://doi.org/10.3390/bdcc5020026 doi: 10.3390/bdcc5020026
    [130] C. Thomay, U. Bodin, H. Isakovic, R. Lasch, N. Race, C. Schmittner, G. Schneider, Z. Szepessy, M. Tauber, Z. Wang, Towards adaptive quality assurance in industrial applications, in 2022 IEEE/IFIP NOMS.. IEEE, (2022), 1–6. https://doi.org/10.1109/NOMS54207.2022.9789928
    [131] D. Stadnicka, P. Litwin, D. Antonelli, Human factor in intelligent manufacturing systems-knowledge acquisition and motivation, Proced. CIRP, 79 (2019), 718–723. https://doi.org/10.1016/j.procir.2019.02.023 doi: 10.1016/j.procir.2019.02.023
    [132] H.-X. Li, H. Si, Control for intelligent manufacturing: A multiscale challenge, Engineering-Prc, 3 (2017), 608–615. https://doi.org/10.1016/J.ENG.2017.05.016 doi: 10.1016/J.ENG.2017.05.016
    [133] T. Kalsoom, N. Ramzan, S. Ahmed, M. Ur-Rehman, Advances in sensor technologies in the era of smart factory and industry 4.0, Sensors-Basel, 20 (2020), 6783. https://doi.org/10.3390/s20236783 doi: 10.3390/s20236783
    [134] J. Radianti, T. A. Majchrzak, J. Fromm, I. Wohlgenannt, A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda, Comput. Educ., 147 (2020), 103778. https://doi.org/10.1016/j.compedu.2019.103778 doi: 10.1016/j.compedu.2019.103778
    [135] D. Kamińska, T. Sapiński, S. Wiak, T. Tikk, R. E. Haamer, E. Avots, A. Helmi, C. Ozcinar, G. Anbarjafari, Virtual reality and its applications in education: Survey, Information, 10 (2019), 318. https://doi.org/10.3390/info10100318 doi: 10.3390/info10100318
    [136] T. Joda, G. Gallucci, D. Wismeijer, N. U. Zitzmann, Augmented and virtual reality in dental medicine: A systematic review, Comput. Biol. Med., 108 (2019), 93–100. https://doi.org/10.1016/j.compbiomed.2019.03.012 doi: 10.1016/j.compbiomed.2019.03.012
    [137] C. Li, Y. Chen, Y. Shang, A review of industrial big data for decision making in intelligent manufacturing, J. Eng. Sci. Technol., (2021). https://doi.org/10.1016/j.jestch.2021.06.001
    [138] L. Zhou, Z. Jiang, N. Geng, Y. Niu, F. Cui, K. Liu, N. Qi, Production and operations management for intelligent manufacturing: a systematic literature review, Int. J. Prod. Res., 60 (2022), 808–846. https://doi.org/10.1080/00207543.2021.2017055 doi: 10.1080/00207543.2021.2017055
    [139] L. A. Cárdenas-Robledo, Ó. Hernández-Uribe, C. Reta, J. A. Cantoral-Ceballos, Extended reality applications in industry 4.0: A systematic literature review, Telemat. Inform., 73 (2022), 101863. https://doi.org/10.1016/j.tele.2022.101863 doi: 10.1016/j.tele.2022.101863
    [140] Z. Wang, X. Bai, S. Zhang, M. Billinghurst, W. He, P. Wang, W. Lan, H. Min, Y. Chen, A comprehensive review of augmented reality-based instruction in manual assembly, training and repair, Robot. Comput. Integr. Manuf., 78 (2022), 102407. https://doi.org/10.1016/j.rcim.2022.102407 doi: 10.1016/j.rcim.2022.102407
    [141] N. Kumar, S. C. Lee, Human-machine interface in smart factory: A systematic literature review, Technol. Forecast. Soc. Change, 174 (2022), 121284. https://doi.org/10.1016/j.techfore.2021.121284 doi: 10.1016/j.techfore.2021.121284
    [142] M. Javaid, A. Haleem, R. P. Singh, R. Suman, Enabling flexible manufacturing system (fms) through the applications of industry 4.0 technologies, Int. Things Cyber-Phys. Syst., (2022). https://doi.org/10.1016/j.iotcps.2022.05.005
    [143] A. Künz, S. Rosmann, E. Loria, J. Pirker, The potential of augmented reality for digital twins: A literature review, in 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, (2022), 389–398. https://doi.org/10.1109/VR51125.2022.00058
    [144] I. Shah, C. Doshi, M. Patel, S. Tanwar, W.-C. Hong, R. Sharma, A comprehensive review of the technological solutions to analyse the effects of pandemic outbreak on human lives, Medicina (Kaunas), 58 (2022), 311. https://doi.org/10.3390/medicina58020311 doi: 10.3390/medicina58020311
    [145] R. P. Singh, M. Javaid, R. Kataria, M. Tyagi, A. Haleem, R. Suman, Significant applications of virtual reality for covid-19 pandemic, Diabetes Metab. Syndr., 14 (2020), 661–664. https://doi.org/10.1016/j.dsx.2020.05.011 doi: 10.1016/j.dsx.2020.05.011
    [146] A. O. Kwok, S. G. Koh, Covid-19 and extended reality (xr), Curr. Issues Tour., 24 (2021), 1935–1940. https://doi.org/10.1080/13683500.2020.1798896 doi: 10.1080/13683500.2020.1798896
    [147] G. Czifra, Z. Molnár et al., Covid-19 and industry 4.0, Research Papers Faculty of Materials Science and Technology Slovak University of Technology, 28 (2020), 36–45. https://doi.org/10.2478/rput-2020-0005
    [148] Q. Yu-ming, D. San-peng et al., Research on intelligent manufacturing flexible production line system based on digital twin, in 2020 35th Youth Academic Annual Conference of Chinese Association of Automation (YAC), IEEE, (2020), 854–862. https://doi.org/10.1109/YAC51587.2020.9337500
    [149] L. O. Alpala, D. J. Quiroga-Parra, J. C. Torres, D. H. Peluffo-Ordóñez, Smart factory using virtual reality and online multi-user: Towards a metaverse for experimental frameworks, Appl. Sci., 12 (2022), 6258. https://doi.org/10.3390/app12126258 doi: 10.3390/app12126258
    [150] E. Chang, H. T. Kim, B. Yoo, Virtual reality sickness: A review of causes and measurements, Int. J. Hum-Comput. Int., 36 (2020), 1658–1682. https://doi.org/10.1080/10447318.2020.1778351 doi: 10.1080/10447318.2020.1778351
    [151] H. Su, W. Qi, C. Yang, J. Sandoval, G. Ferrigno, E. De Momi, Deep neural network approach in robot tool dynamics identification for bilateral teleoperation, IEEE Robot. Autom. Lett., 5 (2020), 2943–2949. https://doi.org/10.1109/LRA.2020.2974445 doi: 10.1109/LRA.2020.2974445
    [152] H. Su, Y. Hu, H. R. Karimi, A. Knoll, G. Ferrigno, E. De Momi, Improved recurrent neural network-based manipulator control with remote center of motion constraints: Experimental results, Neural Netw., 131 (2020), 291–299. https://doi.org/10.1016/j.neunet.2020.07.033 doi: 10.1016/j.neunet.2020.07.033
    [153] S. Phuyal, D. Bista, R. Bista, Challenges, opportunities and future directions of smart manufacturing: A state of art review, Sustain. Fut., 2 (2020), 100023. https://doi.org/10.1016/j.sftr.2020.100023 doi: 10.1016/j.sftr.2020.100023
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)