1. Introduction
Practical work has learning benefits that span all levels of education [1]. Practical laboratory experiences, referred to here as the traditional laboratory mode, have long supported engineering education by helping students bridge the gap between theoretical knowledge and practical application [2]. Traditionally, these labs have been conducted in physical spaces involving hands-on learning experiences, but over time, online laboratory modes have emerged as a viable alternative. Such viability was witnessed during the COVID-19 lockdowns, when many traditional laboratory implementations were transitioned to online experiences [3,4]. The term online laboratory can be associated with various implementations, including remote, virtually represented, fully simulated and otherwise emulated laboratories [5]. This collection of approaches defines the online mode used within this study. A mixed laboratory refers to implementations that combine traditional and online modes. The definitions used here under the umbrella of modes are very broad, and a more granular introduction and description of the large spectrum of laboratory formats is outlined in the work of May, Terkowsky [6]. Different advantages are associated with the different modes; some key points are listed below [7,8,9,10]:
- Traditional Labs:
  - Hands-On Learning: Students can touch, feel and interact with physical equipment, fostering a deep understanding of engineering principles.
  - Immediate Feedback: Students receive instant feedback through real-time observations and measurements.
  - Collaboration: Traditional labs often encourage collaboration among students. Working in teams fosters communication and teamwork.
  - Safety: Engineers can work in dangerous environments. Traditional labs provide a controlled environment where safety protocols can be observed and practiced.
  - Instructor Guidance: Instructors can offer hands-on guidance, answer questions in real time and provide valuable insights based on their expertise.
  - Troubleshooting: Implementing an experiment involves interacting with various tools and equipment, which can lead to errors, equipment faults and other problems. This provides students with a multi-sensory experience that replicates real-world challenges.
- Online Labs:
  - Accessibility: Depending on the format, online labs can be accessed from anywhere with an internet connection, allowing students to engage with experiments at their convenience. This was a significant benefit during COVID-19 lockdowns.
  - Cost-Efficiency: Setting up and maintaining physical labs can be expensive. Online labs often require fewer resources, making engineering education more cost-effective. Remote labs, for example, can give students in remote or underprivileged areas access to hardware that would otherwise not be possible.
  - Repeatability: Online labs can be repeated multiple times quickly. For example, students can change components and values and rerun a simulation at the press of a button, with no need to find physical items or worry about wear and tear.
  - Data Analysis: Online labs often provide data analysis tools that allow students to process and visualize results more efficiently.
  - Self-Paced Learning: Online labs can accommodate various learning styles and paces, empowering students to tailor their educational experience to their needs.
  - Safety: Students can work in a safe environment without the risk of harm or equipment damage.
  - Scalability: Online labs can easily scale to accommodate larger numbers of students, making them an excellent choice for institutions with high enrolment.
  - Troubleshooting: Online labs allow students to focus on the core learning objectives, eliminating the need for complex fault finding and dealing with faulty equipment and logistical problems.
By combining online and traditional components, mixed laboratories aim to build upon the strengths of each mode. For example, verification of theoretical concepts may be simulated first and followed up with traditional hands-on activities. Using a mixed approach, students have access to the benefits associated with both modes [11,12].
The list of advantages highlights that different laboratory modes have different strengths and weaknesses. Not surprisingly, this relationship also extends to differences in learning outcomes across modes [13]. Substantial evidence shows that online laboratories can provide equal or better cognitive learning outcomes than traditional approaches [14,15,16]. However, beyond the cognitive domain, little empirical evidence offers the understanding we need of learning outcomes across the psychomotor and affective domains, resulting in a knowledge gap [17]. Nikolic et al. [17] pinpointed two reasons for this phenomenon. First, ease of data collection is a factor, particularly when it comes to cognitive learning. Second, empirical data on learning is predominantly collected to gain insights into an innovation, such as a new simulation, experimental setup or technology; the emphasis is often not placed on achieving a holistic understanding of laboratory-based learning. Therefore, due to the diversity of applications and intended outcomes in different laboratory implementations, direct comparisons are not necessarily helpful in developing a holistic understanding of learning [5]. There are several steps we can undertake to overcome this limitation. The first step in developing this holistic understanding is gaining insights into what learning is occurring beyond that defined in course learning objectives. The first author has made progress on this in terms of perceived learning [18] and is currently building evidence in terms of real learning, that is, whether there is a relationship between what students think they learned and what they actually learned. The second step is to understand which learning objectives are important and to whom, and major inroads have been made on this front [19,20,21]. The third step is to confirm that we are effectively assessing said objectives [17], because misalignment is probable [22]. For instance, Nightingale, Carew and Fung [22] found that a significant mismatch can occur between the stated learning objectives of subjects and how students are assessed. This is the team's next research phase. By doing this, staff can associate the best mode, implementation and assessment with intended learning objectives.
In this work, we focus on the second step mentioned above: understanding the importance of learning objectives. Collectively, the work of Nikolic et al. [19] found that the highest-ranked cognitive items were understanding, design/modeling and analysis. For the psychomotor-based objectives, the items ranked highest were successful experimentation, planning & execution and instrument use. For the affective-based objectives, the items ranked highest were teamwork, communication and independence. However, ranking order may be influenced by location or discipline, with discipline accounting for most of the slight variance [19,20,21]. These differences are important to understand as they give meaning to the laboratory design decisions made by different groups of engineering educators.
Within the literature, the influence of laboratory mode on design decisions is unknown. That is, do academics designing laboratory implementations think about objectives differently when working in a specific mode? This study contributes to the field by exploring the targeted learning objectives of academics implementing traditional, mixed or online-only laboratory implementations. In doing so, the study answers the following research question: "How do academics implementing different laboratory modes of teaching think about laboratory learning objectives and rank them in terms of importance?". This study's findings will help classify which mode is best suited to which learning objective.
2. Laboratory learning
There are 13 core laboratory learning objectives: instrumentation, models, experiment, data analysis, safety, design, psychomotor, sensory awareness, learning from failure, creativity, communication, teamwork and ethics [2]. Learning also occurs across three domains: cognitive, psychomotor and affective [23]. An instrument called the Laboratory Learning Objectives Measurement (LLOM) [19] combines the laboratory learning objectives with Bloom's Taxonomy, providing an easy-to-use, context-modifiable, holistic template to explore learning in engineering laboratories. Using this instrument, it has been discovered that students perceive that learning occurs across all three domains during experimentation [18]. However, the scientific community has concentrated its efforts on measuring cognitive learning [5,17]. For example, Steger and Nitsche [24] compared learning across simulation and traditional implementations by exploring student achievement based on post-laboratory test results. Likewise, Singh and Mantri [15] investigated the differences through pre- and post-laboratory tests. All tests focused on the cognitive domain.
While perceived learning can highlight many benefits to the student experience [25], more effort is needed to understand real learning. To overcome this, over the last ten years, through the efforts of higher-quality journals, research has accelerated to analyze empirical data on student learning through learning instruments or student performance, rather than simply focusing on perceptions of learning through survey instruments [17]. Complicating the analysis and evidence collection is whether learning is considered on an immediate or long-term basis [26]. Furthermore, influences such as the impact of teaching staff can affect the results of studies if not carefully accounted for [24,27]. We can improve knowledge in this area by improving our processes and strategies for measuring learning [28].
The community has built substantial evidence that cognitive learning occurs across any laboratory mode. For example, Uzunidis and Pagiatakis [29] showed that the average student grade for laboratory reports was similar across virtual and physical implementations. Similarities in cognitive learning were also found by Memik and Nikolic [30]. Growing evidence shows that combining modes increases learning benefits [14]. For example, Coleman and Hosein [31] found that the maximum marks for laboratory reports increased when a simulation was added and used in a traditional laboratory. A similar uplift in marks was seen by Gamo [12] and Kollöffel and de Jong [32]. However, a more significant problem across all studies is that there is no unity in the laboratory objectives being addressed; they are primarily targeted at a specific innovation [17]. Therefore, as innovation drives the researched learning outcomes, it is important to take a step back and check that the important learning objectives, whatever they may be, are not being lost in this innovation-driven approach. Hence, it is important to determine which laboratory objectives matter most.
Not much attention beyond perceptions of learning is given to psychomotor or affective learning, primarily because the data is harder to collect [17,33]. For example, a pre- and post-test is easy to implement [34]. Attempts to measure psychomotor or affective learning have primarily come from instructor observations and interviews [35,36], which are more time-consuming and more subjective. The problem with this deficiency of empirical data is that we do not have a holistic understanding of learning across all modes. If there are areas of weakness, the community can work towards innovative solutions. For example, teamwork is strongly associated with the traditional mode, but examples of collaboration in online modes have emerged [37]. Additionally, COVID-19 transitions to remote work environments have shown how the world can adjust to online forms of collaboration. Just as with the cognitive domain, some commonality is needed to better understand the laboratory objectives across the psychomotor and affective domains, hence the need for this study. We need a better understanding of which laboratory objectives are essential. Then, it will be possible to collectively test the impact of learning across modes using the best assessment methods.
3. Laboratory learning objectives measurement
The Laboratory Learning Objectives Measurement (LLOM) instrument provides a holistic list of learning objectives that combines the laboratory objectives outlined in [2] with Bloom's Taxonomy [23], the hierarchical models used to classify educational learning objectives. It uses a template format in which keywords can be substituted for any engineering discipline or context. This allows it to be used in traditional labs and new, innovative labs (e.g., 3D printing). The template is outlined in Table 1. As an example of its use, item C1 could be written as "Understand the operation of soil testing equipment" for a civil laboratory, while it could be written as "Understand the operation of a multimeter" for an electronics laboratory. A comprehensive explanation of the instrument is available in [19].
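As a minimal sketch of this templating idea (an illustration only, not part of the LLOM instrument itself), the keyword substitution could be expressed as follows, using the two example items above:

```r
# Illustrative sketch only: LLOM items act as templates whose keywords are
# swapped for discipline-specific equipment or context.
c1_template <- "Understand the operation of %s"

cat(sprintf(c1_template, "soil testing equipment"), "\n")  # civil laboratory variant
cat(sprintf(c1_template, "a multimeter"), "\n")            # electronics laboratory variant
```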
Through the use of this instrument, it is possible to develop a common understanding of what are perceived as the most important laboratory objectives. This allows for reflection on the direction and thinking of academic communities across disciplines, locations and modes. Questions can then be asked, such as: are the perceived rankings optimal? Are some objectives more important in some disciplines than others? Moreover, in the case of this study, do academics with a traditional focus think about objectives differently from those who design online laboratories? Answering such questions allows for positive reflection and possible realignment of actions.
Through investigations to date, evidence suggests that students perceive that learning occurs across all three domains in a laboratory [18]. This was achieved by students rating their ability against the instrument items before the start of the first laboratory session and after the last laboratory session, with the differences equating to their perceived learning. In terms of determining the most and least important objectives, the LLOM items have been used in ranking exercises.
In terms of ranking, there is much commonality in order across continents [20]. As discovered by Nikolic et al. [19], even though a general common order is present, the most accurate rankings are determined by discipline. Through those findings, it has been possible to develop insights into why academics ranked particular objectives higher or lower. This allows for reflection and an opportunity to consider how well the objectives correlate with the given assessment tasks. Interestingly, for the cognitive and psychomotor domains, the ranking order correlated somewhat with the hierarchical structure of Bloom's Taxonomy, but this was not the case for the affective domain [19]. Repeating this analysis with laboratory modes can open new insights.
4. The experiment
In 2021, over 3,000 academics worldwide were invited to participate in a survey that required ranking learning objectives using the LLOM instrument. The instrument has been used as a foundation for multiple papers [17,18,19,20,21] and has undergone a range of testing, including Cronbach's alpha and factor analysis (Kaiser rule, parallel analysis, optimal coordinates and acceleration factor), as outlined in [18]. Recruitment for participation came from advertisements via direct email and through the social and professional networks of the research team, including professional networks on platforms such as Facebook and LinkedIn. From the invitations, there were 219 survey commencements and 160 completions. Given the high workload on the academic community and the cognitive load required to complete the rankings, the number of completions met expectations.
Response distribution was 113 from Australasia, 25 from Europe, 12 from Asia, 9 from North America and 1 from South America. While Australasian responses dominate, an earlier study [24] found that, across the board, statistical differences in rankings were minimal across the cognitive and psychomotor domains but evident across the affective domain. Discipline response distribution was 2 Aeronautical, 7 Biomedical, 17 Chemical, 14 Civil, 17 Computer, 22 Electrical, 19 Electronics, 2 Industrial/Process, 10 Materials, 21 Mechanical, 8 Mechatronics, 1 Mining, 4 Other, 10 Software and 6 Telecommunications. Regarding laboratory teaching experience, 23% of respondents had less than five years of teaching experience, 20% had between five and ten years, and 57% had ten or more years.
Participants completed the survey through Qualtrics. They were provided with insights into how the template functioned and how it could be tailored for purpose. The survey required participants to rank the multi-domain objectives listed in the Laboratory Learning Objectives Measurement (LLOM) instrument in order of importance, from most important (ranking = 1) to least important. A fixed initial ranking, based on the order in Table 1, was used to determine if any rankings remained unchanged. None of the rankings were left in the default state for the responses analyzed.
The data was analyzed in five groups (an illustrative grouping sketch follows the list):
- Collectively (n = 160): This included all responses.
- Traditional (n = 56): This covered those that only implemented face-to-face styled laboratories.
- Online (n = 13): This covered those that only implemented online-styled laboratories.
- Mixed (n = 90): This covered those that implemented laboratories that combined traditional and online modes.
- Other (n = 1): Classified by the respondent as not fitting any of the groups. This data was not investigated separately, only within the collective.
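As a rough illustration of this grouping, the sketch below uses invented placeholder data (not the survey responses) to split responses by laboratory mode and compute the mean ranking per objective, following the lower-average-is-more-important convention used in the results tables.

```r
# Illustrative sketch with invented data: group responses by laboratory mode
# and compute mean rankings per objective (lower average = more important).
survey <- data.frame(
  mode = c("traditional", "traditional", "mixed", "mixed", "online", "online"),
  C1   = c(1, 2, 1, 2, 4, 3),   # example rankings for objective C1
  C2   = c(2, 1, 2, 1, 1, 1)    # example rankings for objective C2
)

by_mode    <- split(survey, survey$mode)                            # the analysis groups
mean_ranks <- aggregate(cbind(C1, C2) ~ mode, data = survey, FUN = mean)
print(mean_ranks)
```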
It must be noted that the online-only cohort is a relatively small sample, which could create some noise within the ranking order. However, the data is still helpful, as the authors believe that the ratio of respondents across modes probably reflects the current ratio of implementations.
Limitations
This study does have certain constraints. It relies on a self-selection approach, meaning that the viewpoints expressed might predominantly reflect those of academics who are more actively involved in and influenced by engineering education research. Although we provided guidance on how to understand and use the LLOM template, there is no assurance that every participant comprehended all the elements and correctly applied the template, including identifying key terms within the context. Despite inviting approximately 3,000 academics to participate, only a relatively small number completed the survey in its entirety. It is worth noting that such a limited response rate aligns with common patterns observed in previous surveys of this kind.
5. Results
The statistician on the team analyzed the results. R version 4.0.5 was used for the statistical analysis, with the results shown in Tables 2 (cognitive), 3 (psychomotor) and 4 (affective). Rankings were determined using averages: the lower the average, the more important academics ranked the objective relative to objectives with a higher average. The 95% confidence interval (CI) is shown in brackets. When two confidence intervals do not overlap, a statistically significant difference in mean values can be concluded. Differences between the online-only and traditional groups are highlighted in green, and differences between the online-only group and the collective are shown in blue. For example, for P2S in Table 3, the online-only group has a confidence interval of (2.16, 4.95) and the traditional group has a confidence interval of (5.10, 6.15). As the intervals do not overlap (the upper endpoint of the online-only interval, 4.95, is below the lower endpoint of the traditional interval, 5.10), a statistically significant difference in mean values can be concluded.
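A rough sketch of how such intervals can be computed and compared is shown below. It uses invented rankings rather than the study data and assumes a normal-approximation 95% CI, which may differ from the exact method used by the team's statistician.

```r
# Illustrative sketch with invented data: normal-approximation 95% CI for the
# mean ranking of one objective in two groups, plus a simple overlap check.
ci95 <- function(x) {
  m  <- mean(x)
  se <- sd(x) / sqrt(length(x))
  c(lower = m - 1.96 * se, upper = m + 1.96 * se)
}

online      <- c(2, 3, 4, 6, 2, 5, 3, 4, 2, 5, 4, 3, 3)           # invented rankings
traditional <- c(6, 5, 7, 5, 6, 4, 6, 7, 5, 6, 5, 6, 7, 5, 6, 6)  # invented rankings

ci_online      <- ci95(online)
ci_traditional <- ci95(traditional)

# Non-overlapping intervals indicate a statistically significant mean difference
no_overlap <- ci_online["upper"] < ci_traditional["lower"] ||
              ci_traditional["upper"] < ci_online["lower"]
print(no_overlap)
```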
The last column shows the p-value of the Kruskal-Wallis test, the non-parametric equivalent of ANOVA, used to account for non-Gaussian distributed data and better suited to the small sample sizes. The p-value tests whether, for a particular objective (e.g., C1), responses differ across the laboratory modes: if the p-value is less than 5% (highlighted in grey), responses differ significantly across groups for that objective; otherwise, no significant difference is detected.
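A minimal sketch of such a test, again with invented data rather than the survey responses, is shown below.

```r
# Illustrative sketch with invented data: Kruskal-Wallis test of one
# objective's rankings across the three laboratory-mode groups.
responses <- data.frame(
  mode = rep(c("traditional", "mixed", "online"), each = 8),
  C1   = c(1, 2, 1, 3, 2, 1, 2, 2,   # traditional
           2, 1, 3, 2, 1, 2, 3, 1,   # mixed
           4, 3, 5, 4, 3, 4, 5, 4)   # online
)

kw <- kruskal.test(C1 ~ mode, data = responses)
kw$p.value < 0.05   # TRUE corresponds to a grey highlight in the tables
```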
Each table also provides a visual representation of the objectives in ranking order, as visual representations can help develop a better understanding of the data. Colour coding is used to show how the collective ranking differs across the laboratory modes. For example, in Table 2, C1 is light blue, and its different ranking for each laboratory mode can be easily observed by following the colour trend.
From the sample, it was interesting that most respondents implemented mixed-mode laboratory activities. One reason could be that a substantial percentage of the academic community believes in the benefits of mixed-modal learning. Another reason could be that there is a growing opportunity to mesh online and hands-on skills for experimentation [7], such as in robotics [38].
The results relating to the research question are outlined in the discussion section that follows.
6. Discussion
Each domain is discussed separately below.
6.1. Cognitive domain
The data indicates that ranking preferences across the collective, traditional and mixed modes were mainly in alignment. The substantial differences came from academics who implemented online-only laboratories. The online-only results show a firm preference for C2, the ability to 'design experiments/models to verify course concepts', as the most important objective, with an average lower than that of the highest-ranked objective in any other group, even the traditional group, which also ranked it first. However, it is important to note that this difference is not statistically significant.
More interesting was that C1, 'understanding the operation of equipment/software used within the laboratory', was ranked fourth by the online-only group. Across continents [20] and all disciplines apart from computer and software engineering [19], C1 was ranked first or second. While not statistically significant, this suggests a pattern in how computer-based academics think about learning objectives. The other major differences are that C8, summarising findings, is ranked higher by the online group and that C6, 'safety', is ranked lowest (as expected). C6 is the only cognitive objective for which a statistical difference is found both across groups (grey highlight) and in mean values between the online-only group and both the collective (blue highlight) and traditional groups (green highlight). The importance placed on C6 is substantially higher in the other groups. This is not surprising, because safety is often touted as one of the main benefits of online laboratories [8,14] and is therefore unlikely to be emphasised as a key learning objective, whereas in traditional laboratories, engaging with safe practices is one of the benefits [34,39] and hence receives higher emphasis. However, these insights overlook the fact that virtual reality-based online laboratory implementations may change this dynamic as the technology becomes prevalent, allowing for immersive experiences that bring safety front and center [40,41]; hence the need for this study to reflect on learning objectives.
The opposing viewpoints between the traditional-only and online-only rankings would suggest that combining the two modes would diversify the cognitive focus in the learning experience. For example, online resources can play a supporting role in aiding understanding in a traditional laboratory [42]. However, the mixed-mode group data shows that those implementing such setups think broadly in line with academics implementing traditional-only laboratories.
6.2. Psychomotor domain
Across the psychomotor domain, the ranking pattern mimicked that found across the cognitive domain. There was a very clear alignment across the collective, mixed and traditional groups. P1, reflecting successful experimentation, consistently occupied the highest rank in each of these groups. The one noticeable outlier across the four groups was that P5, the psychomotor skills associated with fault finding, was ranked higher by the traditional group; however, this difference was not statistically significant. P5 had also been ranked mostly last or second last across the other comparisons undertaken in different studies by the researchers [19,20,21]. It seems appropriate that academics focusing more on the traditional laboratory approach, where things are more likely to go wrong, would rate P5 higher. The authors have previously argued [19] that there is merit in rethinking this and that the objective should be ranked higher. It is apparent that objectives related to traditional implementations take higher precedence for those implementing mixed modes, just as was found in the cognitive domain.
Across the groups, it was no surprise that the online rankings were the most different, as online and traditional modes have obvious differences in psychomotor opportunity. It is worth noting that virtual reality-based online laboratory implementations provide a platform to change this dynamic as they gradually become more immersive [40,43]. Four of the nine psychomotor items, P2H, P4, P6H and P6S, had statistical mean differences across groups.
One standout observation involves P2H and P6H (statistically significant across both mean values and groups), representing the selection and operation of instruments. While P2H is placed third and P6H fourth in all other groups, they notably drop to fifth and eighth, respectively, within the online-only group. This is unsurprising, as hands-on activities would not be a primary focus in a hands-off environment. Another noticeable difference is that P4, construct/coding, moved up from fifth in the other groups to second in the online group. This divergence highlights a distinct pattern that sets the online-only group apart from the others in terms of their preferences and evaluations. It could be due to the smaller role equipment plays in online modes compared to the activity of constructing/coding working circuits, simulations and programs.
Different strengths and weaknesses of the modes were highlighted in the literature review, most with psychomotor implications; as such, engagement with psychomotor objectives differs markedly across modes. These findings show that the objectives that resonate most strongly with simulation/remote competencies were ranked higher than those associated with hardware. Interestingly, correctly conducting an experiment was the highest-ranked objective across all modes.
6.3. Affective domain
Unlike the other two domains, there was almost complete alignment across all groups for the affective domain, with the collective, mixed and traditional groups mirroring one another perfectly. The significant outlier was the swapping of objectives A3 and A1 for the online group. That is, the online group ranked independent learning higher than teamwork, which is unsurprising as many online experimentation implementations are targeted at individual work. A1 was the only objective for which a statistical difference was recorded at the group level.
Face-to-face learning easily enables many advantages of collaborative learning, especially soft skills [44]. This is not to say that teamwork is not possible in online modes; indeed, it is [37]. When synthesising the results from this study with the other three [19,20,21], it appears that discipline has the greatest influence on rankings across the affective domain. Ethics (A4) is ranked last across all groups. Given that data collection may be more prevalent away from the eyes of teaching staff in online modes, it may be wise to give this objective higher priority to ensure that the data being collected is the same as that being reported and analyzed, especially if marks are involved. It can be easy to manipulate data, and such practices must be discouraged. This is just one example, but ethics is clearly an area requiring greater consideration [45].
6.4. The road ahead
As outlined in the literature review, most laboratory studies, especially those comparing laboratory mode implementations, focus on learning in the cognitive domain. As a result, the rankings in the cognitive domain correlate with such findings. The window of opportunity can be found in the psychomotor domain, where little effort has been made to gather non-perception-based empirical data on learning [5,17]. With a focus on different learning objectives, studies can attempt to measure and explore whether the ranking differences translate into differences in learning. Such knowledge can aid in making design decisions. It has never been more important to improve our understanding of psychomotor learning, given the impact ChatGPT and other AI technologies are about to have on cognitive learning experiences [46].
We have been developing knowledge of laboratory learning, and we now better understand which learning objectives are important and to whom. The next step is synthesising this information and examining whether our assessment practices align, an area we need to understand much better [22]. If they do not, we can start to make changes.
7. Conclusions
We investigated the research question: "How do academics implementing different laboratory modes of teaching think about laboratory learning objectives and rank them in terms of importance?". An almost perfect alignment was found across all modes for the affective domain. The main difference was the swap in priority between independent and collaborative learning, which closely aligns with the typical experience a student may face engaging in such modes: online-only academics prioritized independent learning over teamwork. While independent work may be the default approach when using many online technologies, collaborative learning is possible with the right technology and approach [37].
For the cognitive and psychomotor domains, much similarity was found across the collective, traditional and mixed groups, with the greatest difference coming from the online-only group. These differences can, to some degree, be attributed to the technology associated with each mode. Specifically, the online-only group tended to assign higher rankings to items that were more relevant to their mode of learning. For example, 'safety' received the lowest ranking from the online-only group, most probably because students engaging in simulation or remote laboratory setups may not need to give significant consideration to safety due to the controlled, safe laboratory environment. However, this could change if the focus of the technology changed; for example, virtual reality could bring about highly immersive learning experiences in which safety is the core learning objective [40]. Similarly, virtual reality could simulate a great range of psychomotor activities. This reinforces the contribution of this study, allowing the academic community to reflect on the factors influencing ranking decisions, which, in turn, can influence their design choices. Academics can consider whether the rankings are justified, optimal or in need of adjustment. The different areas of ranking priority identified can be used by researchers to focus their investigations on the strengths and weaknesses of different laboratory modes, ultimately informing the development of more effective teaching strategies. One key takeaway from these findings is that academics should not let technology limitations guide their focus on the important laboratory objectives; the laboratory objective should remain the focus.
The first step in developing this holistic understanding is gaining insights into what learning is occurring beyond that defined in course learning objectives. Progress on this has been made in terms of perceived learning [18], and research is currently underway to build evidence regarding real learning. The second step is to understand which learning objectives are important and to whom, and major inroads have been made on this front [19,20,21]. The third step is to confirm that we are effectively assessing said objectives [17], which is the research team's next phase. By doing this, staff can associate the best mode, implementation and assessment with intended learning objectives, helping engineering educators enhance the alignment of their teaching modes, implementations and assessments with their intended learning objectives.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
We would like to thank the reviewers for their constructive feedback. We also thank all the academics who took the time to participate in the survey. Without their time and input, this research would not have been possible.
Conflict of interest
The authors have no conflicts of interest in this paper.
Ethics declaration
This study was completed with UOW ethics approval number 2021/252.