
A systematic review of the current state of collaborative mixed reality technologies: 2013–2018

  • Over the last few decades, Mixed Reality (MR) interfaces have received great attention from academia and industry. Although a considerable amount of research has already been done to support collaboration between users in MR, there is still no systematic review to determine the current state of collaborative MR applications. In this paper, collaborative MR studies published from 2013 to 2018 were reviewed. A total of 259 papers were categorised based on their application areas, types of display devices used, collaboration setups, and user interaction and experience aspects. The primary contribution of this paper is to present a high-level overview of the influence of collaborative MR across several research disciplines. The achievements from each application area are summarised. In addition, remarkable papers in their respective areas are highlighted. Among other things, our study finds that there are three complementary factors to support and enhance collaboration in MR environments: (i) annotation techniques, which provide non-verbal communication cues to users, (ii) cooperative object manipulation techniques, which divide a complex 3D object manipulation process into simpler tasks shared between users, and (iii) user perception and cognition studies, which aim to lessen cognitive workload for task understanding and completion, and to increase users’ perceptual awareness and presence. Finally, this paper identifies research gaps and future directions that can be useful for researchers who want to explore how to foster collaboration between users and to develop collaborative applications in MR.

    Citation: Ryan Anthony J. de Belen, Huyen Nguyen, Daniel Filonik, Dennis Del Favero, Tomasz Bednarz. A systematic review of the current state of collaborative mixed reality technologies: 2013–2018[J]. AIMS Electronics and Electrical Engineering, 2019, 3(2): 181-223. doi: 10.3934/ElectrEng.2019.2.181



    The vigorous advancement of internet technology continues to generate large volumes of data from sources such as media, the cloud, the Web, the Internet of Things and databases [1]. The aggregation of these sources is referred to as big data, and companies seek to process and analyze these huge data sets to extract value [2]. As a result, numerous studies have shown the benefits of big data analytics (BDA) in organizations. Specifically, the use of BDA enhances the prediction of future product development trends, which improves the decision-making process [2,3,4,5], and strengthens supply chain systems [6]. BDA prediction is also paramount in promoting firm performance [7,8,9], improving marketing efficiency [5,10] and predicting market trends [5,11]. The significance of BDA adoption culminates in the development of a sustainable, dynamic economic system that takes advantage of current contextual demands [12]. Evidence shows that firms that succeed in implementing BDA often grow into major cross-national corporations; examples include Google, Apple, Twitter, Uber, Walmart, Amazon, IBM Watson, Rolls-Royce and Toyota [13].

    Despite the benefits of BDA for firm and economic performance, many companies, especially small and medium-sized enterprises (SMEs), still encounter an assortment of barriers that inhibit the adoption of BDA [14,15,16]. In most developing economies, SMEs are pivotal in economic development and in validating BDA implementation. However, Coleman et al. [17] indicated that SMEs are still slow in implementing BDA, as they face several barriers in applying big data [17,18]. Del Vecchio et al. [19] pointed out the challenges and benefits of big data for SMEs. Noonpakdee et al. [20] presented the barriers Thai SMEs faced when adopting big data. Similarly, Chuah and Thurusamry [21] discussed the challenges of SMEs in Malaysia using BDA. In addition, Mangla et al. [22] examined the performance of SMEs' adoption of BDA in India. Park and Kim [23] and Maroufkhani et al. [9] identified drivers of big data adoption among Korean and Iranian SMEs, respectively. However, most of these studies concentrate on the advantages and efficiency of BDA adoption and on the challenges that SMEs face when performing BDA. Previous research examining the factors influencing the use of BDA by SMEs is still scarce, and with few studies on BDA application by SMEs, such as those in Vietnam, it is difficult for SMEs to adopt BDA.

    The Technology-Organization-Environment (TOE) framework comprises technology, organization and environment pillars [24]. It is considered the most comprehensive and flexible approach for examining company decisions on the adoption and implementation of information technology-based innovations [25]. Therefore, this study applies the TOE framework and four data mining algorithms (CHAID, Bayesian networks, neural networks and C5.0) to identify the predictors of SMEs' readiness to adopt BDA. The study was guided by the following objectives:

    1) To identify the best model for predicting the factors that influence the readiness to adopt BDA among SMEs and

    2) To predict the key factors that affect the readiness to adopt BDA in SMEs.

    The findings will be useful for managers, policymakers and providers in understanding what influences BDA adoption readiness. Managers can therefore build competitive strategies to enhance company performance through the use of BDA. Additionally, the study demonstrates new techniques that can be used to predict the factors influencing enterprise readiness to adopt BDA.

    Big data comprises both structured and unstructured data in large volumes, and its analysis requires specific processing. The key features of big data are commonly categorized into 3 Vs: (i) volume, (ii) velocity and (iii) variety. Volume depicts the amount of information in the dataset, velocity refers to the rate at which data are created, and variety indicates the different forms of data that are created. Zhong et al. [26] added two more Vs, verification and value, to characterize big data as a "5Vs" data source: verification concerns the need to verify bad data, whereas value addresses the economic and social costs of application. On the other hand, Saggi and Jain [27] classified big data features into volume, velocity, variety, valence, veracity, variability and value to produce the "7Vs" classification. Valence relates to the complexity of the data, veracity reflects accuracy within the dataset, and variability captures inconsistencies across the data.

    Ideally, BDA involves two components, big data and business analytics [5]. The former provides the informational and technological foundation for analyzing business activities, whereas the latter provides the insights necessary for improving decision processes in the business unit. This has multidisciplinary benefits that promote firm business performance [28]. For example, big data has been adopted in the manufacturing sector [9], the health care sector [29], the service sector [26] and the hospitality industry [30]. Dubey et al. [31] argued that BDA has an unequivocal and fundamental impact on supply chain agility and competitive advantage. Previous studies have examined the benefits, challenges and performance outcomes of big data adoption in SMEs [17,19,20,22,32,33]. For example, Park and Kim [23] used the analytic hierarchy process and regression analysis and found that benefits received, technological abilities, financial abilities and data quality are the major factors predicting the intention to apply big data among Korean companies. Mangla et al. [22] applied structural equation modeling (SEM) to show that BDA increased project performance in Indian SMEs. Similarly, Maroufkhani et al. [9] and Lutfi et al. [34] used partial least squares structural equation modeling (PLS-SEM) to identify the elements affecting the intentions of Iranian and Jordanian SMEs to use BDA. In addition, Sun et al. [35], Maroufkhani et al. [36] and Baig et al. [37] reviewed related articles to identify drivers of organizations' inclination to use big data for business purposes. Clearly, most previous studies on the factors affecting BDA adoption intentions used latent variables, which limits the range of independent factors that can be examined [38]. Observed variables (e.g., demographic variables, sector, firm size) are rarely included in the research models. This research works to bridge this gap.

    The TOE framework is useful in revealing the drivers of decisions to embrace new information technology [24]. It is a threefold framework consisting of technology, organization and environment. The technology pillar covers factors associated with tools, software, IT infrastructure, etc. that affect decisions by individuals and/or organizations to apply big data. The organization pillar defines the capacity of a firm to acquire competence in employing the multiple resources required to operate information systems. The environment pillar consists of multiple industry features, e.g., competitors and vendor support, that directly or indirectly affect the operations of enterprises. The TOE framework is considered flexible and is widely used in studies of technology adoption among companies [39]. Some previous studies on BDA adoption have applied the TOE framework. Sun et al. [35] and Baig et al. [37] laid out a synopsis of the determinants of big data adoption using the TOE framework. Park et al. [40] and Park and Kim [23] applied the TOE framework to ascertain the drivers of big data adoption among Korean companies. Similarly, Lai et al. [41] used the TOE framework to identify the determinants of BDA adoption by Chinese firms, and Maroufkhani et al. [9] applied the framework to identify the determinants of BDA application among Iranian SMEs. However, previous studies evaluating the factors affecting BDA mostly refer to latent variables without considering observed variables. Therefore, the present study extends the TOE framework to understand the drivers of BDA adoption. The research model of this study is shown in Figure 1.

    Figure 1.  Research model.

    The technology pillar involves intra- and inter-organizational drivers that influence company decisions to embrace new information technology [42]. In this dimension, the first factor is relative advantage, which describes the degree to which the proposed technology provides greater benefit to firms [43]. According to Ghobakhloo et al. [44], SMEs are only willing to embrace new technology if its advantages outweigh the performance of existing technology. IT infrastructure is salient for organizational competitiveness [30], reflecting a firm's ability to operationalize information systems. However, SMEs often lack IT resources, undercutting their abilities for data collection and analysis [19]. According to Wang and Wang [32], the lack of IT specialists is a major drawback for most SMEs in attaining flexibility in IT infrastructure usage. Data quality is another important factor in the success of enterprises' BDA adoption.

    Big data stockpiles may be structured, semi-structured or unstructured, and organizations must choose appropriate software to ensure the quality of the data as well as the efficiency of BDA [14]. Park and Kim [23] found that data quality has a great influence on big data adoption decisions among Korean firms. Security is also critical to firms' decisions to adopt BDA: third parties are privy to personal and company information, exposing individuals and companies to cybercrime [45]. Therefore, data security is a key factor affecting enterprises' decisions to adopt BDA [35]. Technical competence refers to the expertise employees need to analyze big data. Yadegaridehkordi et al. [30] indicated that sufficient staff knowledge of information technology is an important factor affecting the adoption of innovations in organizations, and Alharthi et al. [14] concluded that a lack of BDA skills among staff is a barrier when companies adopt BDA.

    The organizational dimension represents the organizational conditions that affect readiness to adopt BDA. The first element is management support, which is critical to the adoption of an innovation [46]. If managers realize the benefits of BDA adoption, they can allocate the resources needed for implementation; by contrast, if management does not see the benefits of BDA adoption, they will oppose it [47]. Second, the adoption of BDA entails costs to maintain and develop big data applications [35]. Such development-related costs are usually funded through support from financial institutions, and this support tends to be more limited for SMEs than for larger firms, thereby undermining the adoption of BDA by small companies [17]. Hence, firm size is considered an essential driver of the adoption of technological innovations [24]. The type of industry is another driver believed to influence intentions to apply new technology in enterprises; Gangwar [48] pointed out that there was a significant difference between the manufacturing and service sectors regarding BDA. Finally, decision-making culture also influences the adoption of BDA. Organizations with an evidence-based decision-making culture often embrace big data analytics to develop evidence that enhances managers' competence for strategic decision-making, thereby improving enterprise profitability [35].

    Environmental factors are external factors that the organization may encounter [49]. Factors such as competitive pressure, partner pressure and government support are perceived as external drivers of big data adoption by SMEs [23]. Competitive pressure describes the extent to which competitors affect organizational decisions to adopt new technologies [24]; its role is widely acknowledged in the literature on IT adoption [50,51]. Zhu et al. [52] revealed the importance of pressure from trading partners in influencing company decisions to adopt and utilize new information technology. In addition, the government plays a fundamental role in influencing the adoption of information technology. If the government demonstrates strong political will and provides an enabling institutional environment for big data technology, firms are encouraged to develop internal policies for the adoption and implementation of BDA; such a positive relationship has been confirmed by numerous studies [35,41]. Government support and policy include the provision of public data, the fostering of experts, the protection of intellectual property and the regulation of privacy and security, all of which affect firms' use of big data [53].

    Rojas-Méndez et al. [54] demonstrated that demographic variables (gender, age, education level) are important factors for predicting people's willingness to adopt technology, with the manager's level of education being the most important demographic characteristic affecting technology adoption [54]. Parasuraman and Colby [55] pointed out the need for studies that use factors such as age, education level, occupation and other demographic characteristics to assess each person's readiness to use new technology. For this reason, the manager's characteristics dimension is included to predict the determinants of big data adoption by SMEs.

    Data mining includes many different algorithms used mainly for classification. CHAID (Chi-squared Automatic Interaction Detection) develops a predictive model by merging predictor categories that best explain the response variable [56]. A Bayesian network is a probability-based graphical model that represents knowledge about an uncertain domain, where each node corresponds to a random variable and each edge represents the conditional dependence between the corresponding variables [56,57]. Neural networks are sets of connected input/output units in which each connection has an associated weight [56]. One of the most often used decision tree inducers is the C5.0 model, which divides the sample according to the field that yields the greatest information gain at each level.

    The four algorithms differ in several respects. Neural networks are widely used because they produce results quickly, although their capacity for problem-solving is limited. The CHAID model uses simple predictions based on frequency distributions. The C5.0 model is considered an algorithm with outstanding performance and high accuracy [58].

    Data mining techniques have been applied to questionnaire data to predict the factors affecting a research problem. For instance, Cortez and Silva [59] collected questionnaire data from 788 students in a public school in Portugal; the questionnaire included 37 items covering demographic, social and school information, and four algorithms (decision trees, random tree, neural networks and support vector machines) were used to predict the students' mathematics and Portuguese grades. Yukselturk et al. [60] predicted student dropout using four algorithms (k-nearest neighbor, decision tree, naive Bayes and neural network) with data collected from 189 students in Turkey via a questionnaire of ten variables. By applying data mining techniques, researchers can more easily discover unexpected factors [61]. However, studies using data mining techniques to predict the factors affecting BDA adoption have not yet been reported.

    The questionnaire was based on the literature and refined with comments from professionals and managers of SMEs. It was partitioned into three sections. Section A used thirty-five items collecting data on the determinants of readiness to implement big data among SMEs. Section B consisted of nine items assessing the readiness to apply BDA. The first two sections used a seven-point Likert scale ranging from 1 ("Strongly Disagree") to 7 ("Strongly Agree"). Section C collected data on the respondents' socio-economic characteristics.

    The subjects of this study are SMEs involved in manufacturing and service provision. The manufacturing and service sectors play important roles in every country's economy [62]. Manufacturing refers to activities in which people use tools and machines to convert raw materials into finished products, transport them to suppliers and recycle used products [26,63]. Services include areas such as retail, finance, tourism, health, accommodation and restaurants, in which providers deliver services to consumers. The questionnaire was emailed to Vietnamese managers of SMEs that met the eligibility criterion of the study. A total of 240 managers of manufacturing and service companies participated. The data were collected from September to December 2020.

    Table 1 shows the respondents' demographics. The majority of respondents were male (72.5%), with females accounting for 27.5%. Most respondents were aged 30 to 45 (57.9%), with those aged ≥ 46 accounting for 29.2% and those aged < 30 for 12.9%. The descriptive statistics revealed that 46.7% of managers hold bachelor's degrees, 39.2% hold post-graduate degrees, and only 14.2% have college or vocational training. The majority of participating firms were small enterprises (82.5%), with medium enterprises accounting for 17.5%. Among these firms, 50.8% were manufacturing firms and 49.2% were service firms.

    Table 1.  Demographics of respondents (n = 240).
    Variable Type Frequency Percentage (%)
    Gender Male 174 72.5
    Female 66 27.5
    Age < 30 31 12.9
    30–45 139 57.9
    ≥ 46 70 29.2
    Education level College education 34 14.2
    Bachelor's degree 112 46.7
    Master's degree or above 94 39.2
    Role of respondent Chief Executive Officer 85 35.4
    Executive management 91 37.9
    IT management 64 26.7
    Sector Manufacturing 122 50.8
    Service 118 49.2
    Firm size Small enterprise 198 82.5
    Medium enterprise 42 17.5


    In this study, each variable is measured by at least three items based on references. To be more specific, the variables are relative advantage (four items) [51], IT infrastructure (three items) [20], data quality (three items) [41], data security (three items) [64], technical competence (four items) [65], management support (three items) [66], cost (three items) [51], decision-making culture (three items) [35], competitive pressure (three items) [67], partner pressure (three items) [67], government support (three items) [26] and readiness to apply BDA in SMEs (nine items) [37,55,68].

    To assess the reliability and validity of the latent variables, the Cronbach's α values, composite reliability (CR), average variance extracted (AVE) of all constructs and factor loadings of the items are shown in Table A1. A preliminary exploratory factor analysis (EFA) was carried out on the dataset. The KMO (Kaiser-Meyer-Olkin) value was 0.814, greater than the critical value of 0.7 [69], and Bartlett's test of sphericity was significant (p < 0.001), indicating that factor analysis is suitable for the original dataset. Cronbach's α was computed to assess the reliability of the questionnaire; the values for the latent variables ranged between 0.626 and 0.867. According to Hair et al. [70], a Cronbach's α value greater than 0.700 (with 0.600 acceptable) indicates good internal consistency. Therefore, the questionnaire for this study was found to be consistent and reliable.

    All factor loadings (from 0.520 to 0.865) were higher than the acceptable limit of 0.5 [69]. The CR of all constructs was higher than 0.7, indicating good internal consistency [71]. All constructs, except for data quality (0.457) and management support (0.471), had AVE values higher than 0.5, indicating good convergent validity. Following Fornell and Larcker [72], who proposed that an AVE value of 0.4 is acceptable if the CR value is greater than 0.6, the data quality and management support variables were retained because their CR values exceeded 0.7. This indicates that all latent variables in this study have acceptable convergent validity.
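
    As a concrete illustration of the reliability check, the sketch below computes Cronbach's α for a single construct. It is not the authors' code: the DataFrame `df` and the item column names (e.g., "ms1" to "ms3" for the three management-support items) are hypothetical, introduced only for illustration.

```python
# Minimal sketch (not the authors' code) of Cronbach's alpha for one construct.
# Assumes a pandas DataFrame `df` with hypothetical item columns "ms1".."ms3"
# holding the three management-support Likert items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example call with the hypothetical column names:
# alpha_ms = cronbach_alpha(df[["ms1", "ms2", "ms3"]])
```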

    To predict the factors' influences on BDA adoption readiness, the dependent variable (readiness to apply big data in SMEs) was dichotomized based on the average of the nine items measuring readiness to apply BDA. The first category was coded "1 = Low readiness" when the mean of the nine items was < 6.0, and the second was coded "2 = High readiness" when the mean was ≥ 6.0. Table 2 and Table 3 present the sixteen independent (input) variables and the dependent (target) variable.
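
    A minimal sketch of this coding step is shown below; the data file and the readiness-item column names (`ready1` to `ready9`) are hypothetical, while the 6.0 cut-off follows the description above.

```python
# Sketch (assumed file and column names) of deriving the binary target:
# 1 = Low readiness (mean of the nine items < 6.0), 2 = High readiness (mean >= 6.0).
import pandas as pd

df = pd.read_csv("sme_survey.csv")                        # hypothetical data file
readiness_items = [f"ready{i}" for i in range(1, 10)]     # hypothetical item names
df["readiness_mean"] = df[readiness_items].mean(axis=1)
df["readiness"] = (df["readiness_mean"] >= 6.0).map({True: 2, False: 1})
```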

    Table 2.  The description of the independent variables.
    No. Variable Data type Description
    Technology dimension
    1 Relative advantage Continuous Mean value
    2 IT infrastructure Continuous Mean value
    3 Data quality Continuous Mean value
    4 Data security Continuous Mean value
    5 Technical competence Continuous Mean value
    Organization dimension
    6 Management support Continuous Mean value
    7 Cost Continuous Mean value
    8 Firm size Nominal 1 = "Small", 2 = "Medium"
    9 Sector Nominal 1 = "Manufacturing", 2 = "Service"
    10 Decision-making culture Continuous Mean value
    Environment dimension
    11 Competitive pressure Continuous Mean value
    12 Partner pressure Continuous Mean value
    13 Government support Continuous Mean value
    Manager's characteristics dimension
    14 Gender Nominal 1 = "Male", 2 = "Female"
    15 Age Nominal 1 = " < 30", 2 = "30–45", 3 = "≥ 46"
    16 Education level Nominal 1 = "High school, College/Vocational education",
    2 = "Bachelor's degree",
    3 = "Master's degree, or above"

    Table 3.  The description of the dependent variable.
    Category Frequency Percentage (%)
    1 (Low readiness) 119 49.6
    2 (High readiness) 121 50.4
    Total 240 100.0


    This study used four data mining algorithms that were run through the Statistical Package for Social Sciences (SPSS) 18 software (IBM, Armonk, NY, USA). The algorithms used for the prediction of factors' influences on the adoption readiness of BDA include CHAID, Bayesian networks, neural networks and C5.0. These algorithms are commonly applied in studies that analyze data collected from questionnaires.

    CHAID algorithm

    CHAID is one of the earliest algorithms that partitions data into multiple subgroups [73]. However, this method does not support pruning. CHAID applies the chi-square independence test to identify the splitting rule for each node. Continuous independent variables are first automatically categorized into discrete classes. Super-classes are then produced by merging categories of an input variable that are statistically similar, while categories that are statistically dissimilar are kept separate. The dependency between each super-class and the target variable is assessed using the chi-square independence test, and the super-class showing the highest significance is selected as the splitting criterion for the node.
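
    To make the splitting criterion concrete, the sketch below scores candidate predictors at a single node with the chi-square independence test. It only illustrates the criterion, not full CHAID (the category-merging and multiple-testing adjustment steps are omitted), and the three-way binning of continuous predictors is an arbitrary choice for the example.

```python
# Illustration of CHAID's chi-square splitting criterion at one node (category
# merging and multiple-testing adjustment omitted). Continuous predictors are
# binned first; the predictor with the smallest p-value is chosen for the split.
import pandas as pd
from scipy.stats import chi2_contingency

def best_chi2_split(X: pd.DataFrame, y: pd.Series, bins: int = 3):
    best_var, best_p = None, 1.0
    for col in X.columns:
        x = X[col]
        if pd.api.types.is_numeric_dtype(x) and x.nunique() > bins:
            x = pd.qcut(x, q=bins, duplicates="drop")     # discretize continuous variables
        table = pd.crosstab(x, y)                         # contingency table vs. the target
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < best_p:
            best_var, best_p = col, p_value
    return best_var, best_p
```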

    Bayesian networks algorithm

    The Bayesian network is popular and used in multiple research fields [74]. This method combines qualitative and quantitative components. A Bayesian network is a directed graph with an associated set of probability distributions: the graph represents the qualitative aspect, whereas the probability distributions represent the quantitative part. In the graph, the nodes denote uncertain variables, while the arcs represent direct (often causal) dependencies between variables. Bayesian networks are very effective in predictive studies. Their structure makes inference robust, reduces the variance of estimated parameters and guards against overfitting.
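
    The study fits its Bayesian network in SPSS; as a rough stand-in, the sketch below fits a naive Bayes classifier (the simplest Bayesian network structure, with the target as the only parent of every feature) using scikit-learn. The feature matrices `X_train`/`X_test` and target `y_train` are assumed to come from the 70/30 partition sketched later in this section.

```python
# Rough stand-in for the Bayesian network node: a naive Bayes classifier, i.e.
# a Bayesian network in which the target is the single parent of every feature.
# X_train, X_test, y_train are assumed to come from the 70/30 partition sketch below.
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(X_train, y_train)
nb_probabilities = nb.predict_proba(X_test)   # posterior probability of each readiness class
y_pred_nb = nb.predict(X_test)
```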

    Neural network algorithm

    Neural networks are modeled on brain functionality. They use numerous connected processing units that receive signals from other units, process them and convey new signals onward. However, the output of a neural network is difficult to trace back to its inputs, which makes interpretation hard. These disadvantages are offset by the complexity and flexibility of the algorithm, making it a robust and general-purpose discriminator applicable to a wider variety of problems than other methods [56].
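
    A comparable model can be sketched with scikit-learn's multilayer perceptron; the single hidden layer of eight units and the iteration budget are illustrative choices, not parameters reported by the authors, and the training split is assumed from the partition sketch below.

```python
# Small multilayer perceptron as an illustration of the neural network model.
# Inputs are standardized because MLP training is sensitive to feature scale.
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=42),
)
mlp.fit(X_train, y_train)       # X_train, y_train assumed from the 70/30 partition sketch
y_pred_mlp = mlp.predict(X_test)
```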

    C5.0 algorithm

    The C5.0 algorithm evolved from the C4.5 algorithm formulated by Ross Quinlan [75]. The algorithm segments data into multiple subgroups. C5.0 supports pruning and selects splitting rules through an impurity measure [56]. Its strengths include robustness in handling missing data and many input columns. In addition, the method requires shorter training times and uses boosting to improve the accuracy of the classification function.
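
    C5.0 itself is proprietary and is not available in scikit-learn; the sketch below uses an entropy-criterion decision tree as an approximation of its information-gain splitting, with an illustrative leaf-size constraint standing in for pruning.

```python
# Approximation of C5.0 with an entropy-based decision tree (scikit-learn has no
# C5.0 implementation); min_samples_leaf is an illustrative stand-in for pruning.
from sklearn.tree import DecisionTreeClassifier

c50_like = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=10, random_state=42)
c50_like.fit(X_train, y_train)          # assumed training split from the partition sketch
y_pred_tree = c50_like.predict(X_test)
```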

    This study sought to classify the response variable into two categories (Low readiness and High readiness); a partition node was then inserted to segregate the data into training (70%) and testing (30%) sets. Model performance was assessed through the confusion matrix (Table 4) and analyzed using accuracy, precision, recall, specificity, the F-measure, the area under the receiver operating characteristic (ROC) curve (AUC) and k-fold cross-validation.
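
    The sketch below reproduces this setup under stated assumptions: `X` is the feature matrix built from the variables in Table 2, `y` is the binary readiness target coded earlier, the fitted classifier is the entropy-tree stand-in used in the other sketches, and the label order [1, 2] matches Table 4 (Low readiness as the negative class).

```python
# Stratified 70/30 partition and the confusion matrix of Table 4, assuming X
# (sixteen input variables) and y (1 = Low readiness, 2 = High readiness).
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.70, stratify=y, random_state=42
)
clf = DecisionTreeClassifier(criterion="entropy", random_state=42).fit(X_train, y_train)

# Rows = observed class, columns = predicted class, in the order [Low, High].
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test), labels=[1, 2]).ravel()
```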

    Table 4.  Form of confusion matrix.
                              Predicted: Low readiness    Predicted: High readiness
    Observed: Low readiness   True Negative (TN)          False Positive (FP)
    Observed: High readiness  False Negative (FN)         True Positive (TP)


    In Table 4, true positive and true negative represent the numbers of correctly predicted positive and negative samples, while false positive and false negative represent the numbers of samples incorrectly predicted as positive and as negative, respectively [76,77].

    Accuracy is the overall proportion of correct predictions, i.e., the proportion of cases where the predicted category matches the actual category [76,77].

    Accuracy = (TP + TN) / (TP + FP + FN + TN)    (1)

    Precision is the proportion of cases predicted as positive that are actually positive [76].

    Precision = TP / (TP + FP)    (2)

    Recall is the proportion of true positives to the total number of true positives and false negatives [76,77].

    Recall = TPR = TP / (TP + FN)    (3)
    FPR = FP / (FP + TN)    (4)

    Specificity is the proportion of actual negative cases that are correctly predicted as negative [76].

    Specificity = TN / (TN + FP)    (5)

    F-measure: the harmonic mean of precision and recall, summarizing the precision-recall trade-off. A high F-measure indicates good classification quality [76].

    F-measure = 2 × Precision × Recall / (Precision + Recall)    (6)

    AUC: The ROC curve is a two-dimensional plot of the false positive rate (FPR) on the horizontal axis against the true positive rate (TPR) on the vertical axis. Based on Eqs (3) and (4), the TPR and FPR values are calculated for cut-off points between 0 and 1, and the curve is plotted by joining these points. The area under the curve (AUC) typically varies between 0.5 and 1 and is a standard measure of model performance [78]. More specifically, discrimination is evaluated as acceptable (0.7 ≤ AUC < 0.8), good (0.8 ≤ AUC < 0.9) or outstanding (AUC ≥ 0.9) [79].
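
    Under the same assumptions as the partition sketch above (the counts `tn, fp, fn, tp` and a fitted classifier `clf`), Eqs (1)-(6) and the AUC can be computed as follows; converting the {1, 2} labels to {0, 1} simply marks High readiness as the positive class.

```python
# Eqs (1)-(6) computed from the confusion-matrix counts, plus the AUC.
from sklearn.metrics import roc_auc_score

accuracy = (tp + tn) / (tp + fp + fn + tn)                    # Eq (1)
precision = tp / (tp + fp)                                    # Eq (2)
recall = tp / (tp + fn)                                       # Eq (3), true positive rate
fpr = fp / (fp + tn)                                          # Eq (4), false positive rate
specificity = tn / (tn + fp)                                  # Eq (5)
f_measure = 2 * precision * recall / (precision + recall)     # Eq (6)

# AUC: scores are the predicted probability of class 2 (High readiness).
scores = clf.predict_proba(X_test)[:, 1]
auc = roc_auc_score((y_test == 2).astype(int), scores)
```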

    k-fold cross-validation: In a comparative analysis of several forecasting models, the data set is commonly divided into training and testing subsets, and competing models are compared based on their accuracy on the test set. The data can be split once or multiple times; the latter is known as k-fold cross-validation. To estimate the performance of the classifiers, a stratified 10-fold cross-validation approach was used; empirical studies suggest that 10 folds is close to optimal [80]. In this study, each fold contained 24 cases (240 cases / 10 = 24 cases) and was used once to test the performance of the classifier.
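
    A matching sketch of the stratified 10-fold procedure is given below, again using the stand-in classifier from the earlier sketches; with 240 cases, each fold holds out 24 of them, as stated above.

```python
# Stratified 10-fold cross-validation; each of the 10 folds holds out 24 of the
# 240 cases, and the fold accuracies are averaged as in Table A2.
from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_accuracies = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(fold_accuracies.mean(), fold_accuracies.std())
```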

    To be clearer, the research process of this study is shown in Figure 2.

    Figure 2.  The study research process.

    Data were collected from 240 managers of Vietnamese SMEs. Sixteen input variables (eleven latent variables and five observed variables) were analyzed using data mining. The prediction performance of the four classification models was then compared. The best performance was achieved by the C5.0 model, which predicted readiness to apply big data with the highest accuracy. Therefore, C5.0 was employed to predict the impacts of the five observed variables (firm size, sector, gender, age and education level) on the readiness to apply BDA. Finally, the C5.0 procedure was illustrated as a decision tree.

    As shown in Table 5, the correct classification rates of CHAID, Bayesian networks, the neural network and C5.0 on the training data were 82.32, 82.93, 81.10 and 87.20%, respectively. On the testing data, the correct classification rates were 68.42, 85.53, 71.05 and 89.47%, respectively. Hence, these models have high prediction accuracy.

    Table 5.  Evaluating the measurement results of four models.
    Model type  Result  Training n (%)  Testing n (%)  AUC (training)  AUC (testing)
    CHAID Correct 135 82.32% 52 68.42% 0.891 0.747
    Wrong 29 17.68% 24 31.58%
    Total 164 76
    Bayesian networks Correct 136 82.93% 65 85.53% 0.941 0.910
    Wrong 28 17.07% 11 14.47%
    Total 164 76
    Neural network Correct 133 81.10% 54 71.05% 0.861 0.815
    Wrong 31 18.90% 22 28.95%
    Total 164 76
    C5.0 Correct 143 87.20% 68 89.47% 0.893 0.939
    Wrong 21 12.80% 8 10.53%
    Total 164 76


    Moreover, the AUC values ranged from 0.861 to 0.941 on the training data and from 0.747 to 0.939 on the testing data. Hence, the models were considered good at discriminating between the two readiness classes [79]. The stream of the four models is shown in Figure 3.

    Figure 3.  Stream of the four models with sixteen input variables.

    The ROC curve is also used to evaluate the classification algorithms. It plots the false positive rate against the true positive rate; the false positive rate changes with the classification threshold, and the model with the largest area under the ROC curve can be selected as having the best classification performance. In Figure 4, the results show that Bayesian networks perform best on the training data, while C5.0 performs best on the testing data.

    Figure 4.  Graph of the ROC values of the four models.

    The coincidence matrices and evaluation results of the four models are shown in Table 6. In all four models, the values of accuracy, precision, recall, specificity and F-measure were higher than 0.7, except for the testing-data values of the CHAID model and the precision and specificity values on the testing data of the neural network model, which were approximately 0.7. This indicates that the four models have good classification quality. Specifically, C5.0 had the highest performance, followed by Bayesian networks and the neural network, with the CHAID model the lowest.

    Table 6.  Coincidence matrix and the evaluation results of the four models.
    Model type  Partition  Observed class  Predicted Low readiness  Predicted High readiness  Accuracy  Precision  Recall  Specificity  F-measure
    CHAID Training Low readiness 69 9 0.8232 0.8800 0.7674 0.8846 0.8199
    High readiness 20 66
    Total 89 75
    Testing Low readiness 28 13 0.6842 0.6486 0.6857 0.6829 0.6667
    High readiness 11 24
    Total 39 37
    Bayesian networks Training Low readiness 70 8 0.8293 0.8919 0.7674 0.8974 0.8250
    High readiness 20 66
    Total 90 74
    Testing Low readiness 37 4 0.8553 0.8750 0.8000 0.9024 0.8358
    High readiness 7 28
    Total 44 32
    Neural network Training Low readiness 65 13 0.8110 0.8395 0.7907 0.8333 0.8144
    High readiness 18 68
    Total 83 81
    Testing Low readiness 27 14 0.7105 0.6585 0.7714 0.6585 0.7105
    High readiness 8 27
    Total 35 41
    C5.0 Training Low readiness 67 11 0.8720 0.8736 0.8837 0.8590 0.8786
    High readiness 10 76
    Total 77 87
    Testing Low readiness 36 5 0.8947 0.8649 0.9143 0.8780 0.8889
    High readiness 3 32
    Total 39 37


    Similarly, the results of the 10-fold cross-validation for the four models are shown in Table A2, which indicates that C5.0 has the highest average accuracy among the four tested models. The average 10-fold cross-validation accuracy of the C5.0 model is 0.885, followed by Bayesian networks (0.833) and the neural network (0.722), with the CHAID model (0.679) the lowest. This can be explained by C5.0's more efficient use of memory, which allows it to generate more precise rules. CHAID applies the chi-square independence test, which is suited to categorical data; however, the input variables of this study are mostly continuous, so the algorithm must first group the data into categories, which is why CHAID produces the least precise predictions.

    Predictor importance is a sensitivity analysis technique. It is used to identify the more important variables and/or omit the least important variables in the forecasting model [76].

    The important drivers of readiness to adopt BDA are presented in Table 7. In CHAID and Bayesian networks, the most important variable was management support. In the neural network, the cost variable was the most critical, while in the C5.0 model it was data quality. The predictors are ranked from most to least important based on the total value (the sum of the relative importance values across the four algorithms).

    Table 7.  The most important factors impacting the readiness for BDA.
    Variable  CHAID  Bayesian networks  Neural network  C5.0  Total value
    Management support 0.3566 0.3627 0.1179 0.2029 1.0401
    Data quality 0.2079 0.2007 0.0973 0.2073 0.7132
    Firm size 0.1485 0.0675 0.0000 0.1099 0.3259
    Data security 0.0954 0.0455 0.1398 0.0000 0.2807
    Cost 0.0000 0.0000 0.1613 0.0779 0.2392
    Sector 0.0759 0.0293 0.0516 0.0685 0.2253
    Competitive pressure 0.0028 0.0936 0.0610 0.0297 0.1871
    Partner pressure 0.0000 0.0728 0.0775 0.0000 0.1503
    Gender 0.0000 0.0000 0.0410 0.0958 0.1368
    Government support 0.0254 0.0000 0.0720 0.0000 0.0974
    Technical competence 0.0000 0.0507 0.0443 0.0000 0.0950
    IT infrastructure 0.0000 0.0000 0.0000 0.0747 0.0747
    Age 0.0028 0.0000 0.0000 0.0541 0.0569
    Decision-making culture 0.0000 0.0349 0.0000 0.0000 0.0349
    Education level 0.0343 0.0000 0.0000 0.0000 0.0343
    Relative advantage 0.0000 0.0288 0.0000 0.0000 0.0288


    To obtain an overall estimate across the four models, we consolidated their predictor-importance values. Combining predictive models in this way is known as aggregation-based sensitivity analysis and is recommended because it produces robust, accurate results [76,81]. As a result, the sixteen input variables were categorized into four dimensions: technology (relative advantage, IT infrastructure, data quality, data security, technical competence), organization (top management support, cost, sector, firm size, decision-making culture), environment (competitive pressure, partner pressure, government support) and manager's characteristics (gender, age, education level), all of which have an impact on the readiness of BDA adoption. The major predictors of BDA adoption readiness among Vietnamese SMEs were identified to be management support, data quality, firm size, data security and cost.
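
    SPSS's predictor-importance measure is specific to that software; as a rough analogue of the aggregation described above, the sketch below computes permutation importance for each fitted stand-in model, normalizes each model's importances to sum to one, and adds them up in the spirit of Table 7's total value.

```python
# Rough analogue of the aggregation in Table 7: per-model permutation importance,
# normalized within each model and summed across models. (SPSS's built-in
# predictor-importance measure differs in detail from permutation importance.)
import numpy as np
from sklearn.inspection import permutation_importance

models = {"naive Bayes": nb, "MLP": mlp, "entropy tree": c50_like}   # fitted stand-ins
total_importance = np.zeros(X_test.shape[1])                         # X assumed to be a DataFrame
for name, model in models.items():
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=42)
    values = np.clip(result.importances_mean, 0, None)               # treat negative importances as 0
    if values.sum() > 0:
        total_importance += values / values.sum()
ranking = sorted(zip(X.columns, total_importance), key=lambda item: -item[1])
```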

    Based on the evaluation of the four forecasting models, C5.0 has the highest predictive accuracy. Therefore, the authors used the C5.0 model to evaluate in detail the observed variables affecting the readiness to use BDA in SMEs. The output variable was the readiness to apply BDA among SMEs (Low readiness and High readiness), and the input variables were firm size, sector, gender, age and education level. The stream of the C5.0 model is presented in Figure 5.

    Figure 5.  Stream of C5.0 model with five observed variables.

    The C5.0 model here takes the five observed variables as inputs and was trained on the whole dataset, achieving a correct prediction rate of 73.75% and an AUC of 0.758, indicating good predictive performance. The resulting tree used three variables as splitting nodes (firm size, sector and age).

    Figure 6 illustrates the decision tree of the C5.0 model. The first splitting node for readiness to apply BDA in SMEs was firm size. In node 1, 57.58% of small companies are not ready to adopt BDA, while the proportion with high readiness is lower (42.42%). Node 1 then diverged into nodes 2 and 3. In node 2, 69.83% of manufacturing companies were still not ready to adopt BDA, and only 30.17% had high readiness. In node 3, 59.76% of service companies showed high readiness to apply BDA, while 40.24% showed low readiness. Node 3 then diverged into nodes 4 and 5. In node 4, 70.97% of service companies whose leaders are under 46 have a high level of readiness to adopt BDA, whereas only 29.03% have low readiness. In contrast, in node 5, with leaders aged 46 and over, the percentage of companies willing to adopt BDA (25.00%) was lower than the percentage not ready to adopt it (75.00%). Finally, in node 6, the majority of medium companies have a high willingness to adopt BDA (88.09%), and only 11.91% have low readiness.

    Figure 6.  Prediction readiness to apply BDA by observed variables (C5.0 model).
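
    The decision rules behind a tree like Figure 6 can be read directly from a fitted model; the sketch below trains the entropy-tree stand-in on only the five observed variables (the column names are hypothetical encodings, not the authors' variable names) and prints its splitting rules.

```python
# Sketch of extracting the splitting rules for a tree built on the five observed
# variables only; the column names are hypothetical encodings of Table 2's codes.
from sklearn.tree import DecisionTreeClassifier, export_text

observed_cols = ["firm_size", "sector", "gender", "age", "education_level"]
tree_obs = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=20, random_state=42)
tree_obs.fit(X[observed_cols], y)
print(export_text(tree_obs, feature_names=observed_cols))
```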

    The findings of the current study demonstrate that sixteen factors across four dimensions (technology, organization, environment and manager's characteristics) influence the readiness to adopt BDA. Furthermore, management support, data quality, firm size, data security and cost were revealed as the major predictors of readiness to apply BDA among Vietnamese SMEs. In addition, medium-sized companies and companies in the service sector are assessed to have higher readiness to apply BDA than other SMEs, and the results of the C5.0 model indicate that firm size, sector and manager age have an impact on BDA adoption readiness.

    The results of the study show that management support is the strongest factor determining the readiness to apply BDA among Vietnamese SMEs. This result is similar to the findings of previous studies such as Sun et al. [35], Maroufkhani et al. [9], Lai et al. [41] and Asiaei and Rahim [82]. The support of managers creates favorable conditions for the company in maintaining and using technology [82]. Realizing the benefits of big data, management can allocate the resources needed for adoption and implementation; by contrast, if management does not see the benefits of big data for the business, they will oppose its adoption [47].

    Generally, data is an important input when companies adopt BDA, and data quality is extremely important for performing BDA successfully. Firms with abundant and accurate data sources are better positioned to be ready to apply big data. In this study, data quality is a strong factor in BDA adoption, which is consistent with the findings of Park and Kim [23].

    Not surprisingly, firm size affected the readiness of BDA adoption. This is consistent with the results of Sohaib et al. [83] and Alshamaila et al. [84]. To be more specific, medium enterprises have higher readiness to adopt BDA than small enterprises. This can be explained by medium-sized companies having larger revenue and more employees than small companies. Therefore, they have many advantages when investing in BDA applications.

    Data security was also predicted as a strong influencing factor in this study. Big data includes a lot of personal information [14]; hence, it is of serious concern among firms when deciding to adopt BDA. The influence of data security in technology adoption was also found in many previous studies, such as in software-as-a-service adoption [85], cloud computing [51,83] and big data adoption [23,35,37].

    Cost is one of the five factors predicted to have an important influence on the readiness of SMEs to adopt BDA. This finding is similar to Park and Kim [23] and Sun et al. [35], who found that cost is an important factor in maintaining and developing the analysis of big data in enterprises. In addition, the cost of big data adoption can be a barrier to companies implementing big data [17,86].

    The classification results of the C5.0 model with five observed variables show that the service sector has a higher readiness to apply BDA than the manufacturing sector. This result is consistent with Gangwar [48], who identified factors influencing big data adoption in Indian companies. This is because service organizations like wholesalers, retailers and lodging providers have early access to information technology systems and high-quality human resources to analyze large amounts of data. Moreover, in the context of the complicated development of the COVID-19 pandemic, wholesale and retail companies in Vietnam have shifted rapidly from traditional shopping to online shopping. As a result, these organizations must develop recommendation systems and find ways to respond to customer information as quickly as possible. Hence, service SMEs are better prepared to adopt BDA. Manufacturing companies, by contrast, are reported to encounter numerous obstacles, such as a lack of infrastructure and BDA tools, when using BDA to optimize supply chains [86].

    The findings show that small service firms with managers under the age of 46 have a higher readiness to adopt BDA than those with older managers. This can be explained by younger managers being bolder in adopting new technology, while older managers weigh more carefully the conditions needed to apply BDA, such as information technology, high-quality human resources and finance. In addition, when implementing new technologies, some older leaders have a more conservative mindset and a fear of risk and change. This is consistent with the findings of Badri et al. [87], who noted that older teachers show less technology readiness than younger teachers.

    Applying BDA plays an important role in helping organizations improve competitiveness, enhance supply chains, optimize logistics and improve business performance. Based on the data mining technique, the findings of the study show that the C5.0 model is the best model to predict factors affecting BDA adoption readiness in SMEs. Five factors have the greatest influence on the readiness to adopt BDA: management support, data quality, firm size, data security and cost. Moreover, an important finding of this study is that the age of managers also affects the readiness to adopt BDA.

    This study is useful to managers of SMEs, providers and policymakers in developing better policies and strategies for the adoption of BDA. For managers, the volume of data generated in organizations is growing exponentially, so how to analyze big data effectively is a matter of great interest to organizations today. The proposed model can assist businesses in determining their readiness to adopt BDA. Furthermore, the findings assist managers in increasing their awareness of the elements affecting an enterprise's readiness to use big data. For example, this research shows that management support is the most important factor influencing BDA adoption readiness; before deciding to embrace BDA, SME management should therefore be proactive in increasing their knowledge of the technology and developing a clear strategy. For service providers, the outcomes of this study reveal that SMEs should prioritize data quality, data security and cost factors when preparing to embrace BDA; SMEs, however, often face financial constraints. Providers should therefore plan to develop BDA tools, hardware, software and other products that meet the needs of clients in emerging and developing countries, and improve their support services for SMEs implementing BDA. For policymakers, the survey revealed that the service sector is more prepared to use big data than the manufacturing sector and that medium-sized businesses are more prepared than small businesses; the government should therefore have policies in place to assist each type of business.

    Thanks to the great benefits that BDA contributes to business development, a huge number of businesses are interested in BDA. This study has made significant contributions that help practitioners and researchers understand the importance of the factors influencing the readiness to apply big data in SMEs. First, instead of using traditional analytical methods to perform information-based sensitivity analysis, as in previous studies, well-known data mining algorithms were used to develop the predictive models in this study. Second, this study identified the factors that have strong impacts on the readiness to adopt BDA among SMEs. From these findings, the research model is expected to be a useful reference for practitioners in developing countries and for the scientific community in future related research.

    In addition to its findings, this study has some limitations. First, the sample size is relatively small for data mining techniques; future studies should therefore be conducted with larger samples. Second, the numbers of input variables and prediction algorithms are limited. Future investigations should include more input variables and may use different forecasting algorithms to evaluate the predictive models' findings.

    The authors would like to thank all respondents who spent valuable time answering the questionnaire, as well as the reviewers for their insightful comments.

    The authors declare there is no conflict of interest.

    Table A1.  Reliability and validity assessment.
    Variable Item number Factor loadings Cronbach α CR AVE
    Relative advantage 4 0.530–0.865 0.807 0.805 0.519
    IT infrastructure 3 0.617–0.848 0.798 0.809 0.590
    Data quality 3 0.569–0.728 0.691 0.714 0.457
    Data security 3 0.705–0.756 0.716 0.764 0.520
    Technical competence 4 0.744–0.816 0.867 0.867 0.620
    Management support 3 0.520–0.781 0.707 0.722 0.471
    Cost 3 0.755–0.849 0.798 0.843 0.643
    Decision-making culture 3 0.615–0.846 0.746 0.768 0.528
    Competitive pressure 3 0.787–0.842 0.856 0.856 0.664
    Partner pressure 3 0.690–0.717 0.626 0.751 0.501
    Government support 3 0.703–0.727 0.719 0.757 0.509
    Readiness to adopt big data 9 0.684–0.810 0.773 0.909 0.526
    *Note: CR: Composite Reliability, AVE: Average Variance Extracted

    Table A2.  The results of the 10-fold cross-validation for the four model types.
    Fold No. CHAID Bayesian networks Neural network C5.0
    Confusion matrix Accuracy Confusion matrix Accuracy Confusion matrix Accuracy Confusion matrix Accuracy
    1 12 5 0.649 13 4 0.784 12 5 0.703 14 3 0.892
    11 9 4 16 6 14 1 19
    2 21 11 0.729 28 4 0.847 26 6 0.780 27 5 0.881
    5 22 5 22 7 20 2 25
    3 28 13 0.684 37 4 0.855 27 14 0.711 36 5 0.895
    11 24 7 28 8 27 3 32
    4 32 21 0.615 49 4 0.846 36 17 0.692 45 8 0.875
    25 26 12 39 15 36 5 46
    5 53 10 0.736 58 5 0.840 46 17 0.720 55 8 0.888
    23 39 15 47 18 44 6 56
    6 46 25 0.697 64 7 0.828 51 20 0.752 62 9 0.890
    19 55 18 56 16 58 7 67
    7 66 13 0.669 71 20 0.828 65 14 0.761 69 9 0.883
    41 43 8 64 25 59 10 75
    8 84 12 0.728 87 23 0.836 62 34 0.692 83 13 0.887
    41 58 9 76 26 73 9 90
    9 93 14 0.657 96 11 0.833 86 21 0.681 93 14 0.884
    60 49 25 84 48 61 11 98
    10 62 55 0.627 105 12 0.835 44 18 0.732 101 16 0.877
    33 86 27 92 22 65 13 106
    Average 0.679 0.833 0.722 0.885
    The confusion matrix illustrates the classification of the cases in the test dataset; the columns represent the actual cases, and the rows represent the predicted cases. Accuracy = (TP + TN)/(TP + FP + TN + FN).



    [1] Dey A, Billinghurst M, Lindeman RW, et al. (2018) A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Frontiers in Robotics and AI 5.
    [2] Bai Z, Blackwell AF (2012) Analytic review of usability evaluation in ISMAR. Interact Comput 24: 450–460. doi: 10.1016/j.intcom.2012.07.004
    [3] Dünser A, Grasset R, Billinghurst M (2008) A survey of evaluation techniques used in augmented reality studies. Human Interface Technology Laboratory New Zealand.
    [4] Swan JE, Gabbard JL (2005) Survey of user-based experimentation in augmented reality. In: Proceedings of 1st International Conference on Virtual Reality 22: 1–9.
    [5] Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems 77: 1321–1329.
    [6] Azuma RT (1997) A survey of augmented reality. Presence: Teleoperators & Virtual Environments 6: 355–385.
    [7] Milgram P, Takemura H, Utsumi A, et al. (1995) Augmented reality: A class of displays on the reality-virtuality continuum. Telemanipulator and Telepresence Technologies 2351: 282–293. International Society for Optics and Photonics. doi: 10.1117/12.197321
    [8] Irizarry J, Gheisari M, Williams G, et al. (2013) InfoSPOT: A mobile Augmented Reality method for accessing building information through a situation awareness approach. Automat Constr 33: 11–23.
    [9] Ibáñez MB, Di Serio Á, Villarán D, et al. (2014) Experimenting with electromagnetism using augmented reality: Impact on flow student experience and educational effectiveness. Comput Educ 71: 1–13. doi: 10.1016/j.compedu.2013.09.004
    [10] Henderson S, Feiner S (2011) Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE transactions on visualization and computer graphics 17: 1355–1368. doi: 10.1109/TVCG.2010.245
    [11] Dow S, Mehta M, Harmon E, et al. (2007) Presence and engagement in an interactive drama. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1475–1484, ACM.
    [12] Billinghurst M, Kato H (1999) Collaborative mixed reality. In: Proceedings of the First International Symposium on Mixed Reality, pp. 261–284, Berlin: Springer Verlag.
    [13] Wang X, Dunston PS (2006) Groupware concepts for augmented reality mediated human-to-human collaboration. In: Proceedings of the 23rd Joint International Conference on Computing and Decision Making in Civil and Building Engineering, pp. 1836–1842.
    [14] Brockmann T, Krüger N, Stieglitz S, et al. (2013) A Framework for Collaborative Augmented Reality Applications. In 19th Americas Conference on Information Systems (AMCIS).
    [15] Renevier P, Nigay L (2001) Mobile collaborative augmented reality: the augmented stroll. In: IFIP International Conference on Engineering for Human-Computer Interaction, pp. 299–316, Springer, Berlin, Heidelberg.
    [16] Arias E, Eden H, Fischer G, et al. (2000) Transcending the individual human mind-creating shared understanding through collaborative design. ACM Transactions on Computer-Human Interaction 7: 84–113. doi: 10.1145/344949.345015
    [17] Kim S, Billinghurst M, Lee GA (2018) The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration. Computer Supported Cooperative Work (CSCW) 27: 569–607.
[18] Cabral M, Roque G, Nagamura M, et al. (2016) Batmen-Hybrid collaborative object manipulation using mobile devices. In: 2016 IEEE Symposium on 3D User Interfaces (3DUI), pp. 275–276.
    [19] Reilly D, Salimian M, MacKay B, et al. (2014) SecSpace: prototyping usable privacy and security for mixed reality collaborative environments. In: Proceedings of the 2014 ACM SIGCHI symposium on Engineering interactive computing systems, pp. 273–282.
    [20] Lin T-H, Liu C-H, Tsai M-H, et al. (2014) Using augmented reality in a multiscreen environment for construction discussion. J Comput Civil Eng 29: 04014088.
    [21] Hollenbeck JR, Ilgen DR, Sego DJ, et al. (1995) Multilevel theory of team decision making: Decision performance in teams incorporating distributed expertise. Journal of Applied Psychology 80: 292–316. doi: 10.1037/0021-9010.80.2.292
    [22] Lightle JP, Kagel JH, Arkes HR (2009) Information exchange in group decision making: The hidden profile problem reconsidered. Manage Sci 55: 568–581. doi: 10.1287/mnsc.1080.0975
    [23] Gül LF, Uzun C, Halıcı SM (2017) Studying Co-design. In: International Conference on Computer-Aided Architectural Design Futures, pp. 212–230.
    [24] Al-Hammad A, Assaf S, Al-Shihah M (1997) The effect of faulty design on building maintenance. Journal of Quality in Maintenance Engineering 3: 29–39. doi: 10.1108/13552519710161526
    [25] Casarin J, Pacqueriaud N, Bechmann D (2018) UMI3D: A Unity3D Toolbox to Support CSCW Systems Properties in Generic 3D User Interfaces. Proceedings of the ACM on Human-Computer Interaction 2: 29.
    [26] Coppens A, Mens T (2018) Towards Collaborative Immersive Environments for Parametric Modelling. In: International Conference on Cooperative Design, Visualization and Engineering, pp. 304–307, Springer.
    [27] Cortés-Dávalos A, Mendoza S (2016) Layout planning for academic exhibits using Augmented Reality. In: 2016 13th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), pp. 1–6, IEEE.
    [28] Croft BL, Lucero C, Neurnberger D, et al. (2018) Command and Control Collaboration Sand Table (C2-CST). In: International Conference on Virtual, Augmented and Mixed Reality, pp. 249–259, Springer.
    [29] Dong S, Behzadan AH, Chen F, et al. (2013) Collaborative visualization of engineering processes using tabletop augmented reality. Adv Eng Softw 55: 45–55. doi: 10.1016/j.advengsoft.2012.09.001
    [30] Elvezio C, Ling F, Liu J-S, et al. (2018) Collaborative exploration of urban data in virtual and augmented reality. In: ACM SIGGRAPH 2018 Virtual, Augmented, and Mixed Reality, p. 10, ACM.
[31] Etzold J, Grimm P, Schweitzer J, et al. (2014) kARbon: a collaborative MR web application for communication support in construction scenarios. In: Proceedings of the companion publication of the 17th ACM conference on Computer supported cooperative work & social computing, pp. 9–12, ACM.
[32] Flotyński J, Sobociński P (2018) Semantic 4-dimensional modeling of VR content in a heterogeneous collaborative environment. In: Proceedings of the 23rd International ACM Conference on 3D Web Technology, p. 11, ACM.
    [33] Ibayashi H, Sugiura Y, Sakamoto D, et al. (2015) Dollhouse vr: a multi-view, multi-user collaborative design workspace with vr technology. SIGGRAPH Asia 2015 Emerging Technologies, p. 8, ACM.
    [34] Leon M, Doolan DC, Laing R, et al. (2015) Development of a Computational Design Application for Interactive Surfaces. In: 2015 19th International Conference on Information Visualisation, pp. 506–511, IEEE.
    [35] Li WK, Nee AYC, Ong SK (2018) Mobile augmented reality visualization and collaboration techniques for on-site finite element structural analysis. International Journal of Modeling, Simulation, and Scientific Computing 9: 1840001. doi: 10.1142/S1793962318400019
    [36] Nittala AS, Li N, Cartwright S, et al. (2015) PLANWELL: spatial user interface for collaborative petroleum well-planning. In: SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, p. 19, ACM.
    [37] Phan T, Hönig W, Ayanian N (2018) Mixed Reality Collaboration Between Human-Agent Teams. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 659–660.
    [38] Rajeb SB, Leclercq P (2013) Using spatial augmented reality in synchronous collaborative design. In: International Conference on Cooperative Design, Visualization and Engineering, pp. 1–10, Springer.
    [39] Ro H, Kim I, Byun J, et al. (2018) PAMI: Projection Augmented Meeting Interface for Video Conferencing. In: 2018 ACM Multimedia Conference on Multimedia Conference, pp. 1274–1277, ACM.
    [40] Schattel D, Tönnis M, Klinker G, et al. (2014) On-site augmented collaborative architecture visualization. In: 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 369–370.
[41] Shin JG, Ng G, Saakes D (2018) Couples Designing their Living Room Together: a Study with Collaborative Handheld Augmented Reality. In: Proceedings of the 9th Augmented Human International Conference, p. 3, ACM.
    [42] Singh AR, Delhi VSK (2018) User behaviour in AR-BIM-based site layout planning. International Journal of Product Lifecycle Management 11: 221–244. doi: 10.1504/IJPLM.2018.094715
    [43] Trout TT, Russell S, Harrison A, et al. (2018) Collaborative mixed reality (MxR) and networked decision making. In: Next-Generation Analyst VI 10653: 106530N. International Society for Optics and Photonics.
    [44] Alhumaidan H, Lo KPY, Selby A (2017) Co-designing with children a collaborative augmented reality book based on a primary school textbook. International Journal of Child-Computer Interaction 15: 24–36.
    [45] Alhumaidan H, Lo KPY, Selby A (2015) Co-design of augmented reality book for collaborative learning experience in primary education. In: 2015 SAI Intelligent Systems Conference (IntelliSys), pp. 427–430, IEEE.
    [46] Benavides X, Amores J, Maes P (2015) Invisibilia: revealing invisible data using augmented reality and internet connected devices. In: Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, pp. 341–344, ACM.
    [47] Blanco-Fernández Y, López-Nores M, Pazos-Arias JJ, et al. (2014) REENACT: A step forward in immersive learning about Human History by augmented reality, role playing and social networking. Expert Syst Appl 41: 4811–4828. doi: 10.1016/j.eswa.2014.02.018
    [48] Boyce MW, Rowan CP, Baity DL, et al. (2017) Using Assessment to Provide Application in Human Factors Engineering to USMA Cadets. In: International Conference on Augmented Cognition, pp. 411–422, Springer.
    [49] Bressler DM, Bodzin AM (2013) A mixed methods assessment of students' flow experiences during a mobile augmented reality science game. Journal of Computer Assisted Learning 29: 505–517. doi: 10.1111/jcal.12008
    [50] Chen M, Fan C, Wu D (2016) Designing Effective Materials and Activities for Mobile Augmented Learning. In: International Conference on Blended Learning, pp. 85–93, Springer.
    [51] Daiber F, Kosmalla F, Krüger A (2013) BouldAR: using augmented reality to support collaborative boulder training. In: CHI' 13 Extended Abstracts on Human Factors in Computing Systems, pp. 949–954, ACM.
    [52] Desai K, Belmonte UHH, Jin R, et al. (2017) Experiences with Multi-Modal Collaborative Virtual Laboratory (MMCVL). In: 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), pp. 376–383, IEEE.
[53] Fleck S, Simon G (2013) An augmented reality environment for astronomy learning in elementary grades: An exploratory study. In: Proceedings of the 25th Conference on l'Interaction Homme-Machine, p. 14, ACM.
    [54] Gazcón N, Castro S (2015) ARBS: An Interactive and Collaborative System for Augmented Reality Books. In: International Conference on Augmented and Virtual Reality, pp. 89–108, Springer.
    [55] Gelsomini F, Kanev K, Hung P, et al. (2017) BYOD Collaborative Kanji Learning in Tangible Augmented Reality Settings. In: International Conference on Global Research and Education, pp. 315–325, Springer.
    [56] Gironacci IM, Mc-Call R, Tamisier T (2017) Collaborative Storytelling Using Gamification and Augmented Reality. In: International Conference on Cooperative Design, Visualization and Engineering, pp. 90–93, Springer.
    [57] Goyal S, Vijay RS, Monga C, et al. (2016) Code Bits: An Inexpensive Tangible Computational Thinking Toolkit For K-12 Curriculum. In: Proceedings of the TEI'16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 441–447, ACM.
    [58] Greenwald SW (2015) Responsive Facilitation of Experiential Learning Through Access to Attentional State. In: Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pp. 1–4, ACM.
    [59] Han J, Jo M, Hyun E, et al. (2015) Examining young children's perception toward augmented reality-infused dramatic play. Educational Technology Research and Development 63: 455–474. doi: 10.1007/s11423-015-9374-9
    [60] Iftene A, Trandabăț D (2018) Enhancing the Attractiveness of Learning through Augmented Reality. Procedia Computer Science 126: 166–175. doi: 10.1016/j.procs.2018.07.220
    [61] Jyun-Fong G, Ju-Ling S (2013) The Instructional Application of Augmented Reality in Local History Pervasive Game. pp. 387.
    [62] Kang S, Norooz L, Oguamanam V, et al. (2016) SharedPhys: Live Physiological Sensing, Whole-Body Interaction, and Large-Screen Visualizations to Support Shared Inquiry Experiences. In: Proceedings of the The 15th International Conference on Interaction Design and Children, pp. 275–287, ACM.
    [63] Kazanidis I, Palaigeorgiou G, Papadopoulou Α, et al. (2018) Augmented Interactive Video: Enhancing Video Interactivity for the School Classroom. Journal of Engineering Science and Technology Review 11.
    [64] Keifert D, Lee C, Dahn M, et al. (2017) Agency, Embodiment, & Affect During Play in a Mixed-Reality Learning Environment. In: Proceedings of the 2017 Conference on Interaction Design and Children, pp. 268–277, ACM.
    [65] Kim H-J, Kim B-H (2018) Implementation of young children English education system by AR type based on P2P network service model. Peer-to-Peer Networking and Applications 11: 1252–1264. doi: 10.1007/s12083-017-0612-2
    [66] Krstulovic R, Boticki I, Ogata H (2017) Analyzing heterogeneous learning logs using the iterative convergence method. In: 2017 IEEE 6th International Conference on Teaching, Assessment, and Learning for Engineering, pp. 482–485.
    [67] Le TN, Le YT, Tran MT (2014) Applying Saliency-Based Region of Interest Detection in Developing a Collaborative Active Learning System with Augmented Reality. In: International Conference on Virtual, Augmented and Mixed Reality, pp. 51–62, Springer.
    [68] MacIntyre B, Zhang D, Jones R, et al. (2016) Using projection ar to add design studio pedagogy to a cs classroom. In: 2016 IEEE Virtual Reality (VR), pp. 227–228.
    [69] Malinverni L, Valero C, Schaper MM, et al. (2018) A conceptual framework to compare two paradigms of augmented and mixed reality experiences. In: Proceedings of the 17th ACM Conference on Interaction Design and Children, pp. 7–18, ACM.
    [70] Maskott GK, Maskott MB, Vrysis L (2015) Serious+: A technology assisted learning space based on gaming. In: 2015 International Conference on Interactive Mobile Communication Technologies and Learning (IMCL), pp. 430–432, IEEE.
[71] Pareto L (2012) Mathematical literacy for everyone using arithmetic games. In: Proceedings of the 9th International Conference on Disability, Virtual Reality and Associated Technologies 9: 87–96. Reading, UK: University of Reading.
    [72] Peters E, Heijligers B, de Kievith J, et al. (2016) Design for collaboration in mixed reality: Technical challenges and solutions. In: 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES), pp. 1–7, IEEE.
    [73] Punjabi DM, Tung LP, Lin BSP (2013) CrowdSMILE: a crowdsourcing-based social and mobile integrated system for learning by exploration. In: 2013 IEEE 10th International Conference on Ubiquitous Intelligence and Computing and 2013 IEEE 10th International Conference on Autonomic and Trusted Computing, pp. 521–526.
    [74] Rodríguez-Vizzuett L, Pérez-Medina JL, Muñoz-Arteaga J, et al. (2015) Towards the Definition of a Framework for the Management of Interactive Collaborative Learning Applications for Preschoolers. In: Proceedings of the XVI International Conference on Human Computer Interaction, p. 11, ACM.
    [75] Sanabria JC, Arámburo-Lizárraga J (2017) Enhancing 21st Century Skills with AR: Using the Gradual Immersion Method to develop Collaborative Creativity. Eurasia Journal of Mathematics, Science and Technology Education 13: 487–501.
    [76] Shaer O, Valdes C, Liu S, et al. (2014) Designing reality-based interfaces for experiential bio-design. Pers Ubiquit Comput 18: 1515–1532. doi: 10.1007/s00779-013-0752-1
    [77] Shirazi A, Behzadan AH (2015) Content Delivery Using Augmented Reality to Enhance Students' Performance in a Building Design and Assembly Project. Advances in Engineering Education 4.
    [78] Shirazi A, Behzadan AH (2013) Technology-enhanced learning in construction education using mobile context-aware augmented reality visual simulation. In: 2013 Winter Simulations Conference (WSC), pp. 3074–3085, IEEE.
    [79] Sun H, Liu Y, Zhang Z, et al. (2018) Employing Different Viewpoints for Remote Guidance in a Collaborative Augmented Environment. In: Proceedings of the Sixth International Symposium of Chinese CHI, pp. 64–70, ACM.
    [80] Sun H, Zhang Z, Liu Y, et al. (2016) OptoBridge: assisting skill acquisition in the remote experimental collaboration. In: Proceedings of the 28th Australian Conference on Computer-Human Interaction, pp. 195–199, ACM.
    [81] Thompson B, Leavy L, Lambeth A, et al. (2016) Participatory Design of STEM Education AR Experiences for Heterogeneous Student Groups: Exploring Dimensions of Tangibility, Simulation, and Interaction. In: 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 53–58.
    [82] Wiehr F, Kosmalla F, Daiber F, et al. (2016) betaCube: Enhancing Training for Climbing by a Self-Calibrating Camera-Projection Unit. In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1998–2004, ACM.
    [83] Yangguang L, Yue L, Xiaodong W (2014) Multiplayer collaborative training system based on Mobile AR innovative interaction technology. In: 2014 International Conference on Virtual Reality and Visualization, pp. 81–85, IEEE.
    [84] Yoon SA, Wang J, Elinich K (2014) Augmented reality and learning in science museums. Digital Systems for Open Access to Formal and Informal Learning, pp. 293–305, Springer.
    [85] Zubir F, Suryani I, Ghazali N (2018) Integration of Augmented Reality into College Yearbook. In: MATEC Web of Conferences 150: 05031. EDP Sciences. doi: 10.1051/matecconf/201815005031
    [86] Dascalu MI, Moldoveanu A, Shudayfat EA (2014) Mixed reality to support new learning paradigms. In: 2014 8th International Conference on System Theory, Control and Computing (ICSTCC), pp. 692–697, IEEE.
    [87] Boonbrahm P, Kaewrat C, Boonbrahm S (2016) Interactive Augmented Reality: A New Approach for Collaborative Learning. In: International Conference on Learning and Collaboration Technologies, pp. 115–124, Springer.
    [88] LaViola Jr JJ, Kruijff E, McMahan RP, et al. (2017) 3D user interfaces: theory and practice. Addison-Wesley Professional.
    [89] Kim S, Lee GA, Sakata N (2013) Comparing pointing and drawing for remote collaboration. In: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1–6, IEEE.
    [90] Akahoshi S, Matsushita M (2018) Magical Projector: Virtual Object Sharing Method among Multiple Users in a Mixed Reality Space. In: 2018 Nicograph International (NicoInt), pp. 70–73, IEEE.
    [91] Baillard C, Fradet M, Alleaume V, et al. (2017) Multi-device mixed reality TV: a collaborative experience with joint use of a tablet and a headset. In: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, p. 67, ACM.
    [92] Baldauf M, Fröhlich P (2013) The augmented video wall: multi-user AR interaction with public displays. In: CHI'13 Extended Abstracts on Human Factors in Computing Systems, pp. 3015–3018, ACM.
    [93] Ballagas R, Dugan TE, Revelle G, et al. (2013) Electric agents: fostering sibling joint media engagement through interactive television and augmented reality. In: Proceedings of the 2013 conference on Computer supported cooperative work, pp. 225–236, ACM.
    [94] Beimler R, Bruder G, Steinicke F (2013) Smurvebox: A smart multi-user real-time virtual environment for generating character animations. In: Proceedings of the Virtual Reality International Conference: Laval Virtual, p. 1, ACM.
    [95] Bollam P, Gothwal E, Tejaswi V G, et al. (2015) Mobile collaborative augmented reality with real-time AR/VR switching. In: ACM SIGGRAPH 2015 Posters, p. 25, ACM.
    [96] Bourdin P, Sanahuja JMT, Moya CC, et al. (2013) Persuading people in a remote destination to sing by beaming there. In: Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology, pp. 123–132, ACM.
    [97] Brondi R, Avveduto G, Alem L, et al. (2015) Evaluating the effects of competition vs collaboration on user engagement in an immersive game using natural interaction. In: Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, p. 191, ACM.
    [98] Ch'ng E, Harrison D, Moore S (2017) Shift-life interactive art: Mixed-reality artificial ecosystem simulation. Presence: Teleoperators & Virtual Environments 26: 157–181.
    [99] Courchesne L, Durand E, Roy B (2014) Posture platform and the drawing room: virtual teleportation in cyberspace. Leonardo 47: 367–374. doi: 10.1162/LEON_a_00842
    [100] Dal Corso A, Olsen M, Steenstrup KH, et al. (2015) VirtualTable: a projection augmented reality game. In: SIGGRAPH Asia 2015 Posters, p. 40, ACM.
    [101] Datcu D, Lukosch S, Lukosch H (2016) A Collaborative Game to Study Presence and Situational Awareness in a Physical and an Augmented Reality Environment. J Univers Comput Sci 22: 247–270.
    [102] Datcu D, Lukosch SG, Lukosch HK (2014) A collaborative game to study the perception of presence during virtual co-location. In: Proceedings of the companion publication of the 17th ACM conference on Computer supported cooperative work & social computing, pp. 5–8, ACM.
    [103] Figueroa P, Hernández JT, Merienne F, et al. (2018) Heterogeneous, distributed mixed reality Applications. A concept. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 549–550.
    [104] Fischbach M, Lugrin J-L, Brandt M, et al. (2018) Follow the White Robot-A Role-Playing Game with a Robot Game Master. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1812–1814.
    [105] Fischbach M, Striepe H, Latoschik ME, et al. (2016) A low-cost, variable, interactive surface for mixed-reality tabletop games. In: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, pp. 297–298, ACM.
    [106] Günther S, Müller F, Schmitz M, et al. (2018) CheckMate: Exploring a Tangible Augmented Reality Interface for Remote Interaction. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, p. LBW570, ACM.
    [107] Huo K, Wang T, Paredes L, et al. (2018) SynchronizAR: Instant Synchronization for Spontaneous and Spatial Collaborations in Augmented Reality. In: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, pp. 19–30, ACM.
    [108] Karakottas A, Papachristou A, Doumanoqlou A, et al. (2018) Augmented VR. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 18–22, IEEE.
    [109] Lantin M, Overstall SL, Zhao H (2018) I am afraid: voice as sonic sculpture. In: ACM SIGGRAPH 2018 Posters, pp. 1–2, ACM.
    [110] Loviska M, Krause O, Engelbrecht HA, et al. (2016) Immersed gaming in Minecraft. In: Proceedings of the 7th International Conference on Multimedia Systems, p. 32, ACM.
    [111] Mackamul EB, Esteves A (2018) A Look at the Effects of Handheld and Projected Augmented-reality on a Collaborative Task. In: Proceedings of the Symposium on Spatial User Interaction, pp. 74–78, ACM.
    [112] Margolis T, Cornish T (2013) Vroom: designing an augmented environment for remote collaboration in digital cinema production. In: The Engineering Reality of Virtual Reality 2013 8649: 86490F. International Society for Optics and Photonics. doi: 10.1117/12.2008587
    [113] McGill M, Williamson JH, Brewster SA (2016) Examining the role of smart TVs and VR HMDs in synchronous at-a-distance media consumption. ACM T Comput-Hum Int 23: 33.
[114] Mechtley B, Stein J, Roberts C, et al. (2017) Rich State Transitions in a Media Choreography Framework Using an Idealized Model of Cloud Dynamics. In: Proceedings of the Thematic Workshops of ACM Multimedia 2017, pp. 477–484, ACM.
    [115] Pillias C, Robert-Bouchard R, Levieux G (2014) Designing tangible video games: lessons learned from the sifteo cubes. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3163–3166, ACM.
[116] Podkosova I, Kaufmann H (2018) Co-presence and proxemics in shared walkable virtual environments with mixed collocation. In: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, p. 21, ACM.
    [117] Prins MJ, Gunkel SN, Stokking HM, et al. (2018) TogetherVR: A framework for photorealistic shared media experiences in 360-degree VR. SMPTE Motion Imag J 127: 39–44.
    [118] Rostami A, Bexell E, Stanisic S (2018) The Shared Individual. In: Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 511–516, ACM.
    [119] Sato T, Hwang DH, Koike H (2018) MlioLight: Projector-camera Based Multi-layered Image Overlay System for Multiple Flashlights Interaction. In: Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, pp. 263–271, ACM.
    [120] Spielmann S, Schuster A, Götz K, et al. (2016) VPET: a toolset for collaborative virtual filmmaking. In: SIGGRAPH ASIA 2016 Technical Briefs, p. 29, ACM.
    [121] Trottnow J, Götz K, Seibert S, et al. (2015) Intuitive virtual production tools for set and light editing. In: Proceedings of the 12th European Conference on Visual Media Production, p. 6, ACM.
    [122] Valverde I, Cochrane T (2017) Senses Places: soma-tech mixed-reality participatory performance installation/environment. In: Proceedings of the 8th International Conference on Digital Arts, pp. 195–197, ACM.
    [123] Van Troyer A (2013) Enhancing site-specific theatre experience with remote partners in sleep no more. In: Proceedings of the 2013 ACM International workshop on Immersive media experiences, pp. 17–20, ACM.
    [124] Vermeer J, Alaka S, de Bruin N, et al. (2018) League of lasers: a superhuman sport using motion tracking. In: Proceedings of the First Superhuman Sports Design Challenge on First International Symposium on Amplifying Capabilities and Competing in Mixed Realities, p. 8, ACM.
    [125] Wegner K, Seele S, Buhler H, et al. (2017) Comparison of Two Inventory Design Concepts in a Collaborative Virtual Reality Serious Game. In: Extended Abstracts Publication of the Annual Symposium on Computer-Human Interaction in Play, pp. 323–329, ACM.
    [126] Zhou Q, Hagemann G, Fels S, et al. (2018) Coglobe: a co-located multi-person FTVR experience. In: ACM SIGGRAPH 2018 Emerging Technologies, p. 5, ACM.
    [127] Zimmerer C, Fischbach M, Latoschik ME (2014) Fusion of Mixed-Reality Tabletop and Location-Based Applications for Pervasive Games. In: Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, pp. 427–430, ACM.
    [128] Speicher M, Hall BD, Yu A, et al. (2018) XD-AR: Challenges and Opportunities in Cross-Device Augmented Reality Application Development. Proceedings of the ACM on Human-Computer Interaction 2: 7.
    [129] Gauglitz S, Nuernberger B, Turk M, et al. (2014) World-stabilized annotations and virtual scene navigation for remote collaboration. In: Proceedings of the 27th Annual ACM symposium on User interface software and technology, pp. 449–459, ACM.
    [130] Abramovici M, Wolf M, Adwernat S, et al. (2017) Context-aware Maintenance Support for Augmented Reality Assistance and Synchronous Multi-user Collaboration. Procedia CIRP 59: 18–22. doi: 10.1016/j.procir.2016.09.042
    [131] Aschenbrenner D, Li M, Dukalski R, et al. (2018) Collaborative Production Line Planning with Augmented Fabrication. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 509–510, IEEE.
    [132] Bednarz T, James C, Widzyk-Capehart E, et al. (2015) Distributed collaborative immersive virtual reality framework for the mining industry. Machine Vision and Mechatronics in Practice, pp. 39–48, Springer.
    [133] Capodieci A, Mainetti L, Alem L (2015) An innovative approach to digital engineering services delivery: An application in maintenance. In: 2015 11th International Conference on Innovations in Information Technology (IIT), pp. 342–349, IEEE.
    [134] Choi SH, Kim M, Lee JY (2018) Situation-dependent remote AR collaborations: Image-based collaboration using a 3D perspective map and live video-based collaboration with a synchronized VR mode. Comput Ind 101: 51–66. doi: 10.1016/j.compind.2018.06.006
    [135] Clergeaud D, Roo JS, Hachet M, et al. (2017) Towards seamless interaction between physical and virtual locations for asymmetric collaboration. In: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, pp. 1–4, ACM.
    [136] Datcu D, Cidota M, Lukosch SG, et al. (2014) Virtual co-location to support remote assistance for inflight maintenance in ground training for space missions. In: Proceedings of the 15th International Conference on Computer Systems and Technologies, pp. 134–141, ACM.
    [137] Domova V, Vartiainen E, Englund M (2014) Designing a remote video collaboration system for industrial settings. In: Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, pp. 229–238, ACM.
    [138] Elvezio C, Sukan M, Oda O, et al. (2017) Remote collaboration in AR and VR using virtual replicas. In: ACM SIGGRAPH 2017 VR Village, p. 13, ACM.
    [139] Funk M, Kritzler M, Michahelles F (2017) HoloCollab: A Shared Virtual Platform for Physical Assembly Training using Spatially-Aware Head-Mounted Displays. In: Proceedings of the Seventh International Conference on the Internet of Things, p. 19, ACM.
    [140] Galambos P, Csapó ÁB, Zentay PZ, et al. (2015) Design, programming and orchestration of heterogeneous manufacturing systems through VR-powered remote collaboration. Robotics and Computer-Integrated Manufacturing 33: 68–77. doi: 10.1016/j.rcim.2014.08.012
    [141] Galambos P, Baranyi PZ, Rudas IJ (2014) Merged physical and virtual reality in collaborative virtual workspaces: The VirCA approach. In: IECON 2014 – 40th Annual Conference of the IEEE Industrial Electronics Society, pp. 2585–2590, IEEE.
    [142] Gupta RK, Ucler C, Bernard A (2018) Extension of the Virtual Customer Inspection for Distant Collaboration in NPD. In: 2018 IEEE International Conference on Engineering, Technology and Innovation, pp. 1–7.
    [143] Gurevich P, Lanir J, Cohen B (2015) Design and implementation of teleadvisor: a projection-based augmented reality system for remote collaboration. Computer Supported Cooperative Work (CSCW) 24: 527–562. doi: 10.1007/s10606-015-9232-7
    [144] Günther S, Kratz SG, Avrahami D, et al. (2018) Exploring Audio, Visual, and Tactile Cues for Synchronous Remote Assistance. In: Proceedings of the 11th Pervasive Technologies Related to Assistive Environments Conference, pp. 339–344, ACM.
    [145] Morosi F, Carli I, Caruso G, et al. (2018) Analysis of Co-Design Scenarios and Activities for the Development of A Spatial-Augmented Reality Design Platform. In: DS 92: Proceedings of the DESIGN 2018 15th International Design Conference, pp. 381–392.
    [146] Plopski A, Fuvattanasilp V, Poldi J, et al. (2018) Efficient In-Situ Creation of Augmented Reality Tutorials. In: 2018 Workshop on Metrology for Industry 4.0 and IoT, pp. 7–11, IEEE.
[147] Seo D-W, Lee S-M, Park K-S, et al. (2015) Integrated engineering product design simulation platform for collaborative simulation under the user experience of SME users. Simulation 1: 2.
    [148] Zenati N, Hamidia M, Bellarbi A, et al. (2015) E-maintenance for photovoltaic power system in Algeria. In: 2015 IEEE International Conference on Industrial Technology, pp. 2594–2599.
    [149] Zenati N, Benbelkacem S, Belhocine M, et al. (2013) A new AR interaction for collaborative E-maintenance system. IFAC Proceedings Volumes 46: 619–624.
    [150] Zenati-Henda N, Bellarbi A, Benbelkacem S, et al. (2014) Augmented reality system based on hand gestures for remote maintenance. In: 2014 International Conference on Multimedia Computing and Systems (ICMCS), pp. 5–8, IEEE.
    [151] Huang W, Billinghurst M, Alem L, et al. (2018) HandsInTouch: sharing gestures in remote collaboration. In: Proceedings of the 30th Australian Conference on Computer-Human Interaction, pp. 396–400, ACM.
    [152] Davis MC, Can DD, Pindrik J, et al. (2016) Virtual interactive presence in global surgical education: international collaboration through augmented reality. World neurosurgery 86: 103–111. doi: 10.1016/j.wneu.2015.08.053
    [153] Alharthi SA, Sharma HN, Sunka S, et al. (2018) Designing Future Disaster Response Team Wearables from a Grounding in Practice. In: Proceedings of the Technology, Mind, and Society, p. 1, ACM.
    [154] Carbone M, Freschi C, Mascioli S, et al. (2016) A wearable augmented reality platform for telemedicine. In: International Conference on Augmented Reality, Virtual Reality and Computer Graphics, pp. 92–100, Springer.
    [155] Elvezio C, Ling F, Liu J-S, et al. (2018) Collaborative Virtual Reality for Low-Latency Interaction. In: The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, pp. 179–181, ACM.
    [156] Gillis J, Calyam P, Apperson O, et al. (2016) Panacea's Cloud: Augmented reality for mass casualty disaster incident triage and co-ordination. In: 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 264–265, IEEE.
    [157] Kurillo G, Yang AY, Shia V, et al. (2016) New emergency medicine paradigm via augmented telemedicine. In: 8th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2016 and Held as Part of 18th International Conference on Human-Computer Interaction, HCI International 2016, pp. 502–511, Springer.
    [158] Nunes M, Nedel LP, Roesler V (2013) Motivating people to perform better in exergames: Collaboration vs. competition in virtual environments. In: 2013 IEEE Virtual Reality (VR), pp. 115–116, IEEE.
    [159] Nunes IL, Lucas R, Simões-Marques M, et al. (2017) Augmented Reality in Support of Disaster Response. In: International Conference on Applied Human Factors and Ergonomics, pp. 155–167, Springer.
    [160] Popescu D, Lăptoiu D, Marinescu R, et al. (2017) Advanced Engineering in Orthopedic Surgery Applications. Key Engineering Materials 752: 99–104. doi: 10.4028/www.scientific.net/KEM.752.99
    [161] Shluzas LA, Aldaz G, Leifer L (2016) Design Thinking Health: Telepresence for Remote Teams with Mobile Augmented Reality. In: Design Thinking Research, pp. 53–66, Springer.
    [162] Sirilak S, Muneesawang P (2018) A New Procedure for Advancing Telemedicine Using the HoloLens. IEEE Access 6: 60224–60233. doi: 10.1109/ACCESS.2018.2875558
    [163] Vassell M, Apperson O, Calyam P, et al. (2016) Intelligent Dashboard for augmented reality based incident command response co-ordination. In: 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 976–979, IEEE.
    [164] Bach B, Sicat R, Beyer J, et al. (2018) The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality? IEEE Transactions on Visualization & Computer Graphics 24: 457–467.
    [165] Daher S (2017) Optical see-through vs. spatial augmented reality simulators for medical applications. In: 2017 IEEE Virtual Reality (VR), pp. 417–418.
    [166] Camps-Ortueta I, Rodríguez-Muñoz JM, Gómez-Martín PP, et al. (2017) Combining augmented reality with real maps to promote social interaction in treasure hunts. CoSECivi, pp. 131–143.
    [167] Chen H, Lee AS, Swift M, et al. (2015) 3D collaboration method over HoloLens™ and Skype™ end points. In: Proceedings of the 3rd International Workshop on Immersive Media Experiences, pp. 27–30, ACM.
    [168] Gleason C, Fiannaca AJ, Kneisel M, et al. (2018) FootNotes: Geo-referenced Audio Annotations for Nonvisual Exploration. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2: 109.
[169] Huang W, Kaminski B, Luo J, et al. (2015) SMART: design and evaluation of a collaborative museum visiting application. In: Cooperative Design, Visualization, and Engineering – 12th International Conference, CDVE 2015, 9320: 57–64.
    [170] Kallioniemi P, Heimonen T, Turunen M, et al. (2015) Collaborative navigation in virtual worlds: how gender and game experience influence user behavior. In: Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, pp. 173–182, ACM.
    [171] Li N, Nittala AS, Sharlin E, et al. (2014) Shvil: collaborative augmented reality land navigation. In: CHI'14 Extended Abstracts on Human Factors in Computing Systems, pp. 1291–1296, ACM.
    [172] Nuernberger B, Lien K-C, Grinta L, et al. (2016) Multi-view gesture annotations in image-based 3D reconstructed scenes. In: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, pp. 129–138, ACM.
    [173] Kallioniemi P, Hakulinen J, Keskinen T, et al. (2013) Evaluating landmark attraction model in collaborative wayfinding in virtual learning environments. In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, pp. 1–10, ACM.
    [174] Bork F, Schnelzer C, Eck U, et al. (2018) Towards Efficient Visual Guidance in Limited Field-of-View Head-Mounted Displays. IEEE transactions on visualization and computer graphics 24: 2983–2992. doi: 10.1109/TVCG.2018.2868584
    [175] Sodhi RS, Jones BR, Forsyth D, et al. (2013) BeThere: 3D mobile collaboration with spatial input. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 179–188, ACM.
    [176] Lien K-C, Nuernberger B, Turk M, et al. (2015) [POSTER] 2D-3D Co-segmentation for AR-based Remote Collaboration. In: 2015 IEEE International Symposium on Mixed and Augmented Reality, pp. 184–185, IEEE.
    [177] Nuernberger B, Lien K-C, Höllerer T, et al. (2016) Anchoring 2D gesture annotations in augmented reality. In: 2016 IEEE Virtual Reality (VR), pp. 247–248, IEEE.
    [178] Nuernberger B, Lien K-C, Höllerer T, et al. (2016) Interpreting 2d gesture annotations in 3d augmented reality. In: 2016 IEEE Symposium on 3D User Interfaces (3DUI), pp. 149–158.
    [179] Kovachev D, Nicolaescu P, Klamma R (2014) Mobile real-time collaboration for semantic multimedia. Mobile Networks and Applications 19: 635–648. doi: 10.1007/s11036-013-0453-z
    [180] You S, Thompson CK (2017) Mobile collaborative mixed reality for supporting scientific inquiry and visualization of earth science data. In: 2017 IEEE Virtual Reality (VR), pp. 241–242.
    [181] Wiehr F, Daiber F, Kosmalla F, et al. (2017) ARTopos: augmented reality terrain map visualization for collaborative route planning. In: Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers, pp. 1047–1050, ACM.
    [182] Müller J, Rädle R, Reiterer H (2017) Remote Collaboration With Mixed Reality Displays: How Shared Virtual Landmarks Facilitate Spatial Referencing. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 6481–6486, ACM.
    [183] Park S, Kim J (2018) Augmented Memory: Site-Specific Social Media with AR. In: Proceedings of the 9th Augmented Human International Conference, p. 41, ACM.
    [184] Ryskeldiev B, Igarashi T, Zhang J, et al. (2018) Spotility: Crowdsourced Telepresence for Social and Collaborative Experiences in Mobile Mixed Reality. In: Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing, pp. 373–376, ACM.
    [185] Grandi JG, Berndt I, Debarba HG, et al. (2017) Collaborative manipulation of 3D virtual objects in augmented reality scenarios using mobile devices. In: 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 264–265, IEEE.
    [186] Cortés-Dávalos A, Mendoza S (2016) AR-based Modeling of 3D Objects in Multi-user Mobile Environments. In: CYTED-RITOS International Workshop on Groupware, pp. 21–36, Springer.
    [187] Cortés-Dávalos A, Mendoza S (2016) Augmented Reality-Based Groupware for Editing 3D Surfaces on Mobile Devices. In: 2016 International Conference on Collaboration Technologies and Systems (CTS), pp. 319–326, IEEE.
    [188] Zhang W, Han B, Hui P, et al. (2018) CARS: Collaborative Augmented Reality for Socialization. In: Proceedings of the 19th International Workshop on Mobile computing Systems & Applications, pp. 25–30, ACM.
    [189] Cortés-Dávalos A, Mendoza S (2016) Collaborative Web Authoring of 3D Surfaces Using Augmented Reality on Mobile Devices. In: 2016 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pp. 640–643, IEEE.
    [190] Pani M, Poiesi F (2018) Distributed Data Exchange with Leap Motion. International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, pp. 655–667, Springer.
    [191] Grandi JG, Debarba HG, Bemdt I, et al. (2018) Design and Assessment of a Collaborative 3D Interaction Technique for Handheld Augmented Reality. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 49–56.
    [192] Müller J, Rädle R, Reiterer H (2016) Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1245–1249, ACM.
    [193] Müller J, Butscher S, Feyer SP, et al. (2017) Studying collaborative object positioning in distributed augmented realities. In: Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia, pp. 123–132, ACM.
    [194] Francese R, Passero I, Zarraonandia T (2012) An augmented reality application to gather participant feedback during a meeting. In: Information systems: crossroads for organization, management, accounting and engineering, pp. 173–180.
    [195] Datcu D, Lukosch SG, Lukosch HK (2016) Handheld Augmented Reality for Distributed Collaborative Crime Scene Investigation. In: Proceedings of the 19th International Conference on Supporting Group Work, pp. 267–276, ACM.
    [196] Pece F, Steptoe W, Wanner F, et al. (2013) Panoinserts: mobile spatial teleconferencing. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1319–1328, ACM.
    [197] Cai M, Masuko S, Tanaka J (2018) Gesture-based Mobile Communication System Providing Side-by-side Shopping Feeling. In: Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, p. 2, ACM.
    [198] Chang YS, Nuernberger B, Luan B, et al. (2017) Gesture-based augmented reality annotation. In: 2017 IEEE Virtual Reality (VR), pp. 469–470, IEEE.
[199] Le Chénéchal M, Duval T, Gouranton V, et al. (2016) Vishnu: virtual immersive support for helping users – an interaction paradigm for collaborative remote guiding in mixed reality. In: 2016 IEEE Third VR International Workshop on Collaborative Virtual Environments (3DCVE), pp. 9–12.
    [200] Piumsomboon T, Lee Y, Lee GA, et al. (2017) Empathic Mixed Reality: Sharing What You Feel and Interacting with What You See. In: 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR), pp. 38–41, IEEE.
    [201] Piumsomboon T, Lee Y, Lee G, et al. (2017) CoVAR: a collaborative virtual and augmented reality system for remote collaboration. In: SIGGRAPH Asia 2017 Emerging Technologies, p. 3, ACM.
    [202] Lee Y, Masai K, Kunze KS, et al. (2016) A Remote Collaboration System with Empathy Glasses. In: 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 342–343, IEEE.
    [203] Piumsomboon T, Dey A, Ens B, et al. (2017) [POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues. In: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 218–219, IEEE.
[204] Piumsomboon T, Dey A, Ens B, et al. (2017) Exploring enhancements for remote mixed reality collaboration. In: SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, p. 16, ACM.
    [205] Amores J, Benavides X, Maes P (2015) Showme: A remote collaboration system that supports immersive gestural communication. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1343–1348, ACM.
    [206] Yu J, Noh S, Jang Y, et al. (2015) A hand-based collaboration framework in egocentric coexistence reality. In: 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pp. 545–548, IEEE.
    [207] Piumsomboon T, Lee GA, Hart JD, et al. (2018) Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 46, ACM.
    [208] Piumsomboon T, Lee GA, Billinghurst M (2018) Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, p. D115, ACM.
    [209] Cidota M, Lukosch S, Datcu D, et al. (2016) Workspace awareness in collaborative AR using HMDS: a user study comparing audio and visual notifications. In: Proceedings of the 7th Augmented Human International Conference 2016, p. 3, ACM.
    [210] Jo D, Kim K-H, Kim GJ (2016) Effects of avatar and background representation forms to co-presence in mixed reality (MR) tele-conference systems. In: SIGGRAPH Asia 2016 Virtual Reality meets Physical Reality: Modelling and Simulating Virtual Humans and Environments, p. 12, ACM.
    [211] Yu J, Jeon J-u, Park G, et al. (2016) A Unified Framework for Remote Collaboration Using Interactive AR Authoring and Hands Tracking. In: International Conference on Distributed, Ambient, and Pervasive Interactions, pp. 132–141, Springer.
    [212] Nassani A, Lee G, Billinghurst M, et al. (2017) [POSTER] The Social AR Continuum: Concept and User Study. In: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 7–8.
    [213] Gao L, Bai H, Lee G, et al. (2016) An oriented point-cloud view for MR remote collaboration. SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications, p. 8, ACM.
    [214] Lee GA, Teo T, Kim S, et al. (2017) Mixed reality collaboration through sharing a live panorama. SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, p. 14, ACM.
    [215] Gao L, Bai H, Lindeman R, et al. (2017) Static local environment capturing and sharing for MR remote collaboration. SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, p. 17, ACM.
    [216] Lee GA, Teo T, Kim S, et al. (2017) Sharedsphere: MR collaboration through shared live panorama. SIGGRAPH Asia 2017 Emerging Technologies, pp. 1–2, ACM.
    [217] Rühmann LM, Prilla M, Brown G (2018) Cooperative Mixed Reality: An Analysis Tool. In: Proceedings of the 2018 ACM Conference on Supporting Groupwork, pp. 107–111, ACM.
    [218] Lee H, Ha T, Noh S, et al. (2013) Context-of-Interest Driven Trans-Space Convergence for Spatial Co-presence. In: Proceedings of the First International Conference on Distributed, Ambient, and Pervasive Interactions 8028: 388–395. doi: 10.1007/978-3-642-39351-8_42
    [219] Yang P, Kitahara I, Ohta Y. (2015) [POSTER] Remote Mixed Reality System Supporting Interactions with Virtualized Objects. In: 2015 IEEE International Symposium on Mixed and Augmented Reality, pp. 64–67, IEEE.
    [220] Benbelkacem S, Zenati-Henda N, Belghit H, et al. (2015) Extended web services for remote collaborative manipulation in distributed augmented reality. In: 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT), pp. 1–5, IEEE.
    [221] Pan Y, Sinclair D, Mitchell K (2018) Empowerment and embodiment for collaborative mixed reality systems. Comput Animat Virt W 29: e1838. doi: 10.1002/cav.1838
    [222] Drochtert D, Geiger C (2015) Collaborative magic lens graph exploration. In: SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications, p. 25, ACM.
    [223] Lee J-Y, Kwon J-H, Nam S-H, et al. (2016) Coexistent Space: Collaborative Interaction in Shared 3D Space. In: Proceedings of the 2016 Symposium on Spatial User Interaction, pp. 175–175, ACM.
    [224] Müller F, Günther S, Nejad AH, et al. (2017) Cloudbits: supporting conversations through augmented zero-query search visualization. In: Proceedings of the 5th Symposium on Spatial User Interaction, pp. 30–38, ACM.
    [225] Lehment NH, Tiefenbacher P, Rigoll G (2014) Don't Walk into Walls: Creating and Visualizing Consensus Realities for Next Generation Videoconferencing. In: Proceedings, Part I, of the 6th International Conference on Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments 8525: 170–180.
    [226] Roth D, Lugrin J-L, Galakhov D, et al. (2016) Avatar realism and social interaction quality in virtual reality. In: 2016 IEEE Virtual Reality (VR), pp. 277–278, IEEE.
    [227] Kasahara S, Nagai S, Rekimoto J (2017) JackIn Head: Immersive visual telepresence system with omnidirectional wearable camera. IEEE transactions on visualization and computer graphics 23: 1222–1234. doi: 10.1109/TVCG.2016.2642947
[228] Luongo C, Leoncini P (2018) A UE4 Plugin to Develop CVE Applications Leveraging Participant's Full Body Tracking Data. International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, pp. 610–622.
    [229] Piumsomboon T, Lee GA, Ens B, et al. (2018) Superman vs Giant: A Study on Spatial Perception for a Multi-Scale Mixed Reality Flying Telepresence Interface. IEEE Transactions on Visualization and Computer Graphics 24: 2974–2982. doi: 10.1109/TVCG.2018.2868594
    [230] Kasahara S, Rekimoto J (2015) JackIn head: immersive visual telepresence system with omnidirectional wearable camera for remote collaboration. In: Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, pp. 217–225, ACM.
    [231] Adams H, Thompson C, Thomas D, et al. (2015) The effect of interpersonal familiarity on cooperation in a virtual environment. In: Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, pp. 138–138, ACM.
    [232] Ryskeldiev B, Cohen M, Herder J (2017) Applying rotational tracking and photospherical imagery to immersive mobile telepresence and live video streaming groupware. In: SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, p. 5.
    [233] Mai C, Bartsch SA, Rieger L (2018) Evaluating Shared Surfaces for Co-Located Mixed-Presence Collaboration. In: Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, pp. 1–5, ACM.
    [234] Congdon BJ, Wang T, Steed A (2018) Merging environments for shared spaces in mixed reality. In: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, p. 11.
    [235] Gao L, Bai H, He W, et al. (2018) Real-time visual representations for mobile mixed reality remote collaboration. SIGGRAPH Asia 2018 Virtual & Augmented Reality, p. 15.
    [236] Lee G, Kim S, Lee Y, et al. (2017) [POSTER] Mutually Shared Gaze in Augmented Video Conference. In: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 79–80, IEEE.
    [237] Tiefenbacher P, Gehrlich T, Rigoll G (2015) Impact of annotation dimensionality under variable task complexity in remote guidance. In: 2015 IEEE Symposium on 3D User Interfaces (3DUI), pp. 189–190, IEEE.
    [238] Adcock M, Gunn C (2015) Using Projected Light for Mobile Remote Guidance. Computer Supported Cooperative Work (CSCW) 24: 591–611. doi: 10.1007/s10606-015-9237-2
    [239] Kim S, Lee GA, Ha S, et al. (2015) Automatically freezing live video for annotation during remote collaboration. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1669–1674, ACM.
    [240] Tait M, Billinghurst M (2015) The effect of view independence in a collaborative AR system. Computer Supported Cooperative Work (CSCW) 24: 563–589. doi: 10.1007/s10606-015-9231-8
    [241] Adcock M, Anderson S, Thomas B (2013) RemoteFusion: real time depth camera fusion for remote collaboration on physical tasks. In: Proceedings of the 12th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, pp. 235–242, ACM.
    [242] Kim S, Lee GA, Sakata N, et al. (2013) Study of augmented gesture communication cues and view sharing in remote collaboration. In: 2013 IEEE International Symposium on Mixed and Augmented Reality, pp. 261–262, IEEE.
    [243] Sakata N, Takano Y, Nishida S (2014) Remote Collaboration with Spatial AR Support. In: International Conference on Human-Computer Interaction, pp. 148–157, Springer.
    [244] Tiefenbacher P, Gehrlich T, Rigoll G, et al. (2014) Supporting remote guidance through 3D annotations. In: Proceedings of the 2nd ACM Symposium on Spatial User Interaction, pp. 141–141, ACM.
    [245] Tait M, Billinghurst M (2014) View independence in remote collaboration using AR. ISMAR, pp. 309–310.
    [246] Gauglitz S, Nuernberger B, Turk M, et al. (2014) In touch with the remote world: Remote collaboration with augmented reality drawings and virtual navigation. In: Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, pp. 197–205, ACM.
    [247] Lukosch S, Lukosch H, Datcu D, et al. (2015) Providing information on the spot: Using augmented reality for situational awareness in the security domain. Computer Supported Cooperative Work (CSCW) 24: 613–664. doi: 10.1007/s10606-015-9235-4
    [248] Lukosch SG, Lukosch HK, Datcu D, et al. (2015) On the spot information in augmented reality for teams in the security domain. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 983–988, ACM.
    [249] Yamada S, Chandrasiri NP (2018) Evaluation of Hand Gesture Annotation in Remote Collaboration Using Augmented Reality. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 727–728.
    [250] Anton D, Kurillo G, Bajcsy R (2018) User experience and interaction performance in 2D/3D telecollaboration. Future Gener Comp Sy 82: 77–88. doi: 10.1016/j.future.2017.12.055
    [251] Tait M, Tsai T, Sakata N, et al. (2013) A projected augmented reality system for remote collaboration. In: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1–6, IEEE.
    [252] Irlitti A, Itzstein GSV, Smith RT, et al. (2014) Performance improvement using data tags for handheld spatial augmented reality. In: Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, pp. 161–165, ACM.
    [253] Iwai D, Matsukage R, Aoyama S, et al. (2018) Geometrically Consistent Projection-Based Tabletop Sharing for Remote Collaboration. IEEE Access 6: 6293–6302. doi: 10.1109/ACCESS.2017.2781699
    [254] Pejsa T, Kantor J, Benko H, et al. (2016) Room2room: Enabling life-size telepresence in a projected augmented reality environment. In: Proceedings of the 19th ACM Conference on Conference on Computer-Supported Cooperative Work & Social Computing, pp. 1716–1725, ACM.
    [255] Schwede C, Hermann T (2015) HoloR: Interactive mixed-reality rooms. In: 2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), pp. 517–522, IEEE.
    [256] Salimian MH, Reilly DF, Brooks S, et al. (2016) Physical-Digital Privacy Interfaces for Mixed Reality Collaboration: An Exploratory Study. In: Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, pp. 261–270, ACM.
    [257] Weiley V, Adcock M (2013) Drawing in the lamposcope. In: Proceedings of the 9th ACM Conference on Creativity & Cognition, pp. 382–383, ACM.
    [258] Irlitti A, Itzstein GSV, Alem L, et al. (2013) Tangible interaction techniques to support asynchronous collaboration. In: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1–6, IEEE.
    [259] Kratky A (2015) Transparent touch–interacting with a multi-layered touch-sensitive display system. In: International Conference on Universal Access in Human-Computer Interaction, pp. 114–126, Springer.
    [260] Moniri MM, Valcarcel FAE, Merkel D, et al. (2016) Hybrid team interaction in the mixed reality continuum. In: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, pp. 335–336, ACM.
    [261] Seo D, Yoo B, Ko H (2018) Webizing collaborative interaction space for cross reality with various human interface devices. In: Proceedings of the 23rd International ACM Conference on 3D Web Technology, pp. 1–8, ACM.
    [262] Randhawa JS (2016) Stickie: Mobile Device Supported Spatial Collaborations. In: Proceedings of the 2016 Symposium on Spatial User Interaction, pp. 163–163, ACM.
    [263] Tabrizian P, Petrasova A, Harmon B, et al. (2016) Immersive tangible geospatial modeling. In: Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, p. 88, ACM.
    [264] Ren D, Lee B, Höllerer T (2018) XRCreator: interactive construction of immersive data-driven stories. In: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, p. 136, ACM.
    [265] Minagawa J, Choi W, Li L, et al. (2016) Development of collaborative workspace system using hand gesture. In: 2016 IEEE 5th Global Conference on Consumer Electronics, pp. 1–2, IEEE.
    [266] Tanaya M, Yang K, Christensen T, et al. (2017) A Framework for analyzing AR/VR Collaborations: An initial result. In: 2017 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), pp. 111–116, IEEE.
    [267] Butscher S, Hubenschmid S, Müller J, et al. (2018) Clusters, Trends, and Outliers: How Immersive Technologies Can Facilitate the Collaborative Analysis of Multidimensional Data. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 90, ACM.
    [268] Machuca MDB, Chinthammit W, Yang Y, et al. (2014) 3D mobile interactions for public displays. In: SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, p. 17, ACM.
    [269] Ríos AP, Callaghan V, Gardner M, et al. (2014) Interactions within Distributed Mixed Reality Collaborative Environments. In: IE'14 Proceedings of the 2014 International Conference on Intelligent Environments, pp. 382–383.
    [270] Ueda Y, Iwazaki K, Shibasaki M, et al. (2014) HaptoMIRAGE: mid-air autostereoscopic display for seamless interaction with mixed reality environments. In: ACM SIGGRAPH 2014 Emerging Technologies, p. 10, ACM.
    [271] Wang X, Love PED, Kim MJ, et al. (2014) Mutual awareness in collaborative design: An Augmented Reality integrated telepresence system. Computers in Industry 65: 314–324. doi: 10.1016/j.compind.2013.11.012
    [272] Komiyama R, Miyaki T, Rekimoto J (2017) JackIn space: designing a seamless transition between first and third person view for effective telepresence collaborations. In: Proceedings of the 8th Augmented Human International Conference, p. 14, ACM.
    [273] Oyekoya O, Stone R, Steptoe W, et al. (2013) Supporting interoperability and presence awareness in collaborative mixed reality environments. In: Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology, pp. 165–174, ACM.
    [274] Reilly DF, Echenique A, Wu A, et al. (2015) Mapping out Work in a Mixed Reality Project Room. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 887–896, ACM.
    [275] Dean J, Apperley M, Rogers B (2014) Refining personal and social presence in virtual meetings. In: Proceedings of the Fifteenth Australasian User Interface Conference, vol. 150, pp. 67–75, Australian Computer Society, Inc.
    [276] Robert K, Zhu D, Huang W, et al. (2013) MobileHelper: remote guiding using smart mobile devices, hand gestures and augmented reality. In: SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, p. 39, ACM.
    [277] Billinghurst M, Nassani A, Reichherzer C (2014) Social panoramas: using wearable computers to share experiences. In: SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, p. 25, ACM.
    [278] Kim S, Lee G, Sakata N, et al. (2014) Improving co-presence with augmented visual communication cues for sharing experience through video conference. In: 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 83–92, IEEE.
    [279] Cha Y, Nam S, Yi MY, et al. (2018) Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation. In: The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, pp. 13–15, ACM.
    [280] Grandi JG (2017) Design of collaborative 3D user interfaces for virtual and augmented reality. In: 2017 IEEE Virtual Reality (VR), pp. 419–420, IEEE.
    [281] Koskela T, Mazouzi M, Alavesa P, et al. (2018) AVATAREX: Telexistence System based on Virtual Avatars. In: Proceedings of the 9th Augmented Human International Conference, p. 13, ACM.
    [282] Heiser J, Tversky B, Silverman M (2004) Sketches for and from collaboration. Visual and Spatial Reasoning in Design III 3: 69–78.
    [283] Fakourfar O, Ta K, Tang R, et al. (2016) Stabilized annotations for mobile remote assistance. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1548–1560, ACM.
    [284] Schmidt K (2002) The problem with 'awareness': Introductory remarks on 'awareness in CSCW'. Computer Supported Cooperative Work (CSCW) 11: 285–298. doi: 10.1023/A:1021272909573
    [285] Olson GM, Olson JS (2000) Distance matters. Human–Computer Interaction 15: 139–178. doi: 10.1207/S15327051HCI1523_4
    [286] Ishii H, Kobayashi M, Arita K (1994) Iterative design of seamless collaboration media. Communications of the ACM 37: 83–97.
    [287] Ishii H, Kobayashi M, Grudin J (1993) Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Transactions on Information Systems 11: 349–375. doi: 10.1145/159764.159762
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)