
PAIGE: A generative AI-based framework for promoting assignment integrity in higher education


  • The integration of Generative Artificial Intelligence (GAI) tools like ChatGPT, Google Bard, and Bing Chat in higher education shows excellent potential for transformation. However, this integration also raises issues in maintaining academic integrity and preventing plagiarism. In this study, we investigate and analyze practical approaches for efficiently harnessing the potential of GAI while simultaneously ensuring the preservation of assignment integrity. Despite the potential to expedite the learning process and improve accessibility, concerns regarding academic misconduct highlight the necessity for the implementation of novel GAI frameworks for higher education. To effectively tackle these challenges, we propose a conceptual framework, PAIGE (Promoting Assignment Integrity using Generative AI in Education). This framework emphasizes the ethical integration of GAI, promotes active student interaction, and cultivates opportunities for peer learning experiences. Higher education institutions can effectively utilize the PAIGE framework to leverage the promise of GAI while ensuring the preservation of assignment integrity. This approach paves the way for a responsible and thriving future in Generative AI-driven education.

    Citation: Shakib Sadat Shanto, Zishan Ahmed, Akinul Islam Jony. PAIGE: A generative AI-based framework for promoting assignment integrity in higher education[J]. STEM Education, 2023, 3(4): 288-305. doi: 10.3934/steme.2023018




    The real-time solution of the Moore-Penrose inverse (MP-inverse or pseudoinverse) [1,2], which frequently arises in robotics [3,4,5], game theory [6], nonlinear systems [7] and other technical and scientific disciplines [8,9,10], has attracted a lot of interest in recent times. Quaternions, on the other hand, are crucial in a wide range of domains, such as computer graphics [11,12,13], robotics [14,15], navigation [16], quantum mechanics [17], electromagnetism [18] and mathematical physics [19,20]. Let H^{m×n} denote the set of all m×n matrices over the quaternion skew-field H = {δ1 + δ2ı + δ3ȷ + δ4k | ı^2 = ȷ^2 = k^2 = ıȷk = −1, δ1, δ2, δ3, δ4 ∈ R}. Considering ˜A ∈ H^{m×n}, its conjugate transpose is denoted by ˜A^* and its rank by rank(˜A). The generalization of the inverse matrix ˜A^{−1} is the MP-inverse ˜A^†, where ˜A^† is the unique solution ˜X that satisfies the following Penrose equations [21,22]:

    (i) ˜A˜X˜A = ˜A,  (ii) ˜X˜A˜X = ˜X,  (iii) (˜A˜X)^* = ˜A˜X,  (iv) (˜X˜A)^* = ˜X˜A. (1.1)
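    As a concrete check of (1.1), the following short numpy sketch (illustrative only; it uses a random real matrix rather than a quaternion one) confirms that numpy.linalg.pinv returns a matrix satisfying all four Penrose equations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # an arbitrary real matrix
X = np.linalg.pinv(A)             # its Moore-Penrose inverse

# The four Penrose equations (1.1); .conj().T is the conjugate transpose
ok = [
    np.allclose(A @ X @ A, A),             # (i)
    np.allclose(X @ A @ X, X),             # (ii)
    np.allclose((A @ X).conj().T, A @ X),  # (iii)
    np.allclose((X @ A).conj().T, X @ A),  # (iv)
]
print(all(ok))
```

    The same four identities characterize the quaternion MP-inverse treated in this paper; only the conjugate transpose is taken in H instead of C.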

    Recently, research has begun to focus on time-varying quaternion (TVQ) problems involving matrices, such as inversion of TVQ matrices [23], solving the dynamic TVQ Sylvester matrix equation [24], addressing the TVQ constrained matrix least-squares problem [25] and solving the TVQ linear matrix equation for square matrices [26]. Furthermore, real-world applications involving TVQ matrices are employed in kinematically redundant manipulator of robotic joints [15,27], chaotic systems synchronization [25], mobile manipulator control [23,28] and image restoration [26,29]. All these studies have one thing in common: they all use the zeroing neural network (ZNN) approach to derive the solution.

    ZNNs are a subset of recurrent neural networks that are especially good at parallel processing and are used to address time-varying issues. They were initially developed by Zhang et al. [30] to handle the problem of time-varying matrix inversion, but their subsequent iterations were dynamic models for computing the time-varying MP-inverse of full-row/column rank matrices [31,32,33,34] in the real and complex domain. Today, their use has expanded to include the resolution of generalized inversion issues [35,36,37,38,39,40], linear and quadratic programming tasks [41,42,43], certain types of matrix equation [44,45], systems of nonlinear equations [46,47], systems of linear equations [48,49,50] and systems of equations with noise [51]. The TVQ MP-inverse (TVQ-MPI) problem for any TVQ matrix will be addressed in this paper using the ZNN approach. Of greater significance, we will determine whether a direct solution in the quaternion domain or an indirect solution through representation in the complex and real domains is more efficient. To do this, we will create three ZNN models, one for each domain, and rigorously validate them on four numerical simulations and a real-world application involving robotic motion tracking. By doing theoretical analysis and analyzing the computational complexity of all presented models, this research strengthens the existing body of literature.

    The rest of the article is divided into the following sections. Section 2 presents introductory information and the TVQ-MPI problem formulation. Section 3 introduces the three ZNN models, while their theoretical analysis is presented in Section 4. Numerical simulations and applications are explored in Section 5 and, finally, Section 6 provides concluding thoughts and comments.

    This part outlines some introductory information about TVQ matrices, the TVQ-MPI problem, ZNNs and the notation that will be used throughout the remainder of the study as well as the primary findings that will be covered.

    The quaternions form a division algebra, or skew-field, over the field of real numbers. As a result, the set of quaternions H is not commutative under multiplication, which causes complexity to rise quickly in real-world applications [52]. On the other hand, a scalar quaternion can be easily expressed by either a real 4×4 matrix or a complex 2×2 matrix [53]. The dimensions of the representation matrices scale accordingly, and this fact also applies to matrices of quaternions. In order to tackle quaternion-based problems, it is now customary to first solve an analogous problem in the real or complex domain and then convert the result back to quaternion form. This method, which is undeniably effective in a static setting, extends naturally to problems with time-varying characteristics.

    Let ˜A(t) = A1(t) + A2(t)ı + A3(t)ȷ + A4(t)k ∈ H^{m×n}, with Ai(t) ∈ R^{m×n} for i = 1, 2, 3, 4, be a TVQ matrix and let t ∈ [0, tf) ⊆ [0, +∞) denote time. The conjugate transpose of a TVQ matrix ˜A(t) is the following [52,53]:

    ˜A^*(t) = A1^T(t) − A2^T(t)ı − A3^T(t)ȷ − A4^T(t)k, (2.1)

    where the operator (·)^T denotes transposition. The product of two TVQ matrices ˜A(t) and ˜B(t) = B1(t) + B2(t)ı + B3(t)ȷ + B4(t)k ∈ H^{n×g}, with Bi(t) ∈ R^{n×g} for i = 1, …, 4, is:

    ˜A(t)˜B(t) = ˜Y(t) = Y1(t) + Y2(t)ı + Y3(t)ȷ + Y4(t)k ∈ H^{m×g}, (2.2)

    where

    Y1(t) = A1(t)B1(t) − A2(t)B2(t) − A3(t)B3(t) − A4(t)B4(t),
    Y2(t) = A1(t)B2(t) + A2(t)B1(t) + A3(t)B4(t) − A4(t)B3(t),
    Y3(t) = A1(t)B3(t) + A3(t)B1(t) + A4(t)B2(t) − A2(t)B4(t),
    Y4(t) = A1(t)B4(t) + A4(t)B1(t) + A2(t)B3(t) − A3(t)B2(t), (2.3)

    with Yi(t) ∈ R^{m×g} for i = 1, …, 4. Additionally, one complex representation of the TVQ matrix ˜A(t) is the following [24,54]:

    ˆA(t) = [A1(t) − A4(t)ı   −A3(t) − A2(t)ı
             A3(t) − A2(t)ı    A1(t) + A4(t)ı] ∈ C^{2m×2n}, (2.4)

    and one real representation of the TVQ matrix ˜A(t) is the following [26]:

    A(t) = [ A1(t)  −A4(t)   A3(t)  −A2(t)
             A4(t)   A1(t)  −A2(t)  −A3(t)
            −A3(t)   A2(t)   A1(t)  −A4(t)
             A2(t)   A3(t)   A4(t)   A1(t)] ∈ R^{4m×4n}. (2.5)
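    The product formula (2.3) and a real representation of this kind can be cross-checked numerically. The sketch below is an illustration, not code from the paper; the sign pattern in real_rep is an assumption chosen to be consistent with Hamilton's relations ı^2 = ȷ^2 = k^2 = ıȷk = −1, and the check confirms that the representation of a product equals the product of the representations:

```python
import numpy as np

def qmul(A, B):
    """Product of quaternion matrices given as 4-tuples (A1, A2, A3, A4)
    meaning A1 + A2*i + A3*j + A4*k, following the pattern of Eq. (2.3)."""
    A1, A2, A3, A4 = A
    B1, B2, B3, B4 = B
    return (A1 @ B1 - A2 @ B2 - A3 @ B3 - A4 @ B4,
            A1 @ B2 + A2 @ B1 + A3 @ B4 - A4 @ B3,
            A1 @ B3 + A3 @ B1 + A4 @ B2 - A2 @ B4,
            A1 @ B4 + A4 @ B1 + A2 @ B3 - A3 @ B2)

def real_rep(A):
    """One valid 4m x 4n real representation of a quaternion matrix
    (the sign pattern is an assumption consistent with Hamilton's relations)."""
    A1, A2, A3, A4 = A
    return np.block([[ A1, -A4,  A3, -A2],
                     [ A4,  A1, -A2, -A3],
                     [-A3,  A2,  A1, -A4],
                     [ A2,  A3,  A4,  A1]])

rng = np.random.default_rng(1)
A = tuple(rng.standard_normal((3, 4)) for _ in range(4))
B = tuple(rng.standard_normal((4, 2)) for _ in range(4))

# The representation is multiplicative: rep(A) rep(B) = rep(A*B)
print(np.allclose(real_rep(A) @ real_rep(B), real_rep(qmul(A, B))))
```

    This multiplicativity is exactly the property that makes it possible to move the TVQ-MPI problem into the real domain and back, as done below.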

    In this paper, the following TVQ matrix equation problem is taken into consideration for computing the TVQ-MPI of any ˜A(t) ∈ H^{m×n} [21,22]:

    { ˜A^*(t)˜A(t)˜X(t) − ˜A^*(t) = 0_{n×m},  m ≥ n
      ˜X(t)˜A(t)˜A^*(t) − ˜A^*(t) = 0_{n×m},  m < n, (2.6)

    where the TVQ matrix ˜X(t) = X1(t) + X2(t)ı + X3(t)ȷ + X4(t)k ∈ H^{n×m}, with Xi(t) ∈ R^{n×m} for i = 1, 2, 3, 4, is the TVQ matrix of interest. Additionally, we consider that ˜A(t) is a smoothly time-varying matrix and that its time derivative is either given or can be accurately estimated. It is important to note that (2.6) is the TVQ-MPI problem and that it is satisfied only for ˜X(t) = ˜A^†(t).

    On the one hand, taking into account that the complex representation of the product of two TVQ matrices is equal to the product of their complex representations [24, Theorem 1], solving (2.6) is equivalent to solving the complex matrix equation:

    { ˆA^*(t)ˆA(t)ˆX(t) − ˆA^*(t) = 0_{2n×2m},  m ≥ n
      ˆX(t)ˆA(t)ˆA^*(t) − ˆA^*(t) = 0_{2n×2m},  m < n, (2.7)

    where ˆX(t) ∈ C^{2n×2m}. On the other hand, taking into account that the real representation of the product of two TVQ matrices is equal to the product of their real representations [26, Corollary 1], solving (2.6) is equivalent to solving the real matrix equation:

    { A^T(t)A(t)X(t) − A^T(t) = 0_{4n×4m},  m ≥ n
      X(t)A(t)A^T(t) − A^T(t) = 0_{4n×4m},  m < n, (2.8)

    where X(t) ∈ R^{4n×4m}.

    Three novel ZNN models are introduced in this research for solving the TVQ-MPI problem of (2.6). One model, dubbed ZNNQ, is created to solve the TVQ-MPI problem of (2.6) directly. The two additional models, dubbed ZNNQC and ZNNQR, are created to solve the TVQ-MPI problem of (2.6) indirectly, through (2.7) in the complex domain and (2.8) in the real domain, respectively. The creation of a ZNN model typically involves two fundamental steps. First, one defines an error matrix equation (ERME) function E(t). Second, the following ZNN dynamical system under the linear activation function is employed:

    ˙E(t) = −λE(t), (2.9)

    where the overdot ˙(·) denotes the time derivative. Additionally, the design parameter λ > 0 is a positive real number through which one may adjust the convergence rate of the model; for instance, a higher value of λ makes the model converge faster [55,56,57]. It is important to point out that continual learning is defined as learning continually from non-stationary data while transferring and preserving prior knowledge. Indeed, as time evolves, the ZNN's architecture revolves around driving each entry of the error function E(t) to 0. The continuous-time learning rule, which follows from the definition of the ERME function in (2.9), is used to accomplish this. Therefore, the error function can be thought of as a tool for tracking the learning of ZNN models.
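    As a minimal illustration of the design rule (2.9) (a hedged scalar sketch, not the paper's implementation), consider a ZNN that tracks x(t) = 1/a(t) for a(t) = 2 + sin(t): defining e(t) = a(t)x(t) − 1 and imposing ė = −λe yields an ODE for ẋ that can be integrated with forward Euler:

```python
import numpy as np

# Scalar sketch of the ZNN design rule e'(t) = -lam*e(t): track
# x(t) = 1/a(t) by zeroing the error e(t) = a(t)*x(t) - 1.
lam, dt = 10.0, 1e-4
t = np.arange(0.0, 5.0, dt)
a, a_dot = 2.0 + np.sin(t), np.cos(t)

x = 0.0                                   # arbitrary initial value
for k in range(len(t) - 1):
    e = a[k] * x - 1.0
    # e' = a'x + a x' = -lam*e  =>  x' = (-lam*e - a'x)/a
    x += dt * (-lam * e - a_dot[k] * x) / a[k]

print(abs(x - 1.0 / a[-1]) < 1e-3)  # x has converged to 1/a(t)
```

    The error decays like e(0)e^{−λt}, which is the exponential convergence that the theoretical analysis of Section 4 establishes for the matrix-valued models.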

    The key conclusions of the paper are listed next:

    (1) For the first time, the TVQ-MPI problem is addressed through the ZNN approach.

    (2) With the purpose of addressing the TVQ-MPI problem, three novel ZNN models are provided.

    (3) Matrices of any dimension can be used with the proposed ZNN models.

    (4) The models are subjected to a theoretical analysis that validates them.

    (5) Numerical simulations and applications are carried out to complement the theoretical concepts.

    The following notation is employed in the remainder of this article: Iu refers to the u×u identity matrix; 0u and 0_{m×n}, respectively, refer to the u×u and m×n zero matrices; ‖·‖_F denotes the matrix Frobenius norm; vec(·) denotes the vectorization process; ⊙ denotes elementwise multiplication; ⊗ denotes the Kronecker product.

    Three ZNN models, each working in a distinct domain, will be developed in this section. Further, we assume that ˜A(t) ∈ H^{m×n} is a differentiable TVQ matrix, and that ˜X(t) ∈ H^{n×m} is the unknown MP-inverse of ˜A(t) to be found.

    To develop the ZNNQ model, the TVQ-MPI of (2.6) is considered. According to (2.1) and (2.2), we set ˜A^*(t)˜A(t) = ˜U(t) = U1(t) + U2(t)ı + U3(t)ȷ + U4(t)k, where

    U1(t) = A1^T(t)A1(t) + A2^T(t)A2(t) + A3^T(t)A3(t) + A4^T(t)A4(t),
    U2(t) = A1^T(t)A2(t) − A2^T(t)A1(t) − A3^T(t)A4(t) + A4^T(t)A3(t),
    U3(t) = A1^T(t)A3(t) − A3^T(t)A1(t) − A4^T(t)A2(t) + A2^T(t)A4(t),
    U4(t) = A1^T(t)A4(t) − A4^T(t)A1(t) − A2^T(t)A3(t) + A3^T(t)A2(t), (3.1)

    with Ui(t) ∈ R^{n×n} for i = 1, …, 4, and ˜A(t)˜A^*(t) = ˜V(t) = V1(t) + V2(t)ı + V3(t)ȷ + V4(t)k, where

    V1(t) = A1(t)A1^T(t) + A2(t)A2^T(t) + A3(t)A3^T(t) + A4(t)A4^T(t),
    V2(t) = −A1(t)A2^T(t) + A2(t)A1^T(t) − A3(t)A4^T(t) + A4(t)A3^T(t),
    V3(t) = −A1(t)A3^T(t) + A3(t)A1^T(t) − A4(t)A2^T(t) + A2(t)A4^T(t),
    V4(t) = −A1(t)A4^T(t) + A4(t)A1^T(t) − A2(t)A3^T(t) + A3(t)A2^T(t), (3.2)

    with Vi(t) ∈ R^{m×m} for i = 1, …, 4. Taking into account (3.1) and (3.2), the TVQ-MPI (2.6) can be rewritten as below:

    { ˜U(t)˜X(t) − ˜A^*(t) = 0_{n×m},  m ≥ n
      ˜X(t)˜V(t) − ˜A^*(t) = 0_{n×m},  m < n, (3.3)

    or equivalently,

    { C1(t) − A1^T(t) + (C2(t) + A2^T(t))ı + (C3(t) + A3^T(t))ȷ + (C4(t) + A4^T(t))k = 0_{n×m},  m ≥ n
      D1(t) − A1^T(t) + (D2(t) + A2^T(t))ı + (D3(t) + A3^T(t))ȷ + (D4(t) + A4^T(t))k = 0_{n×m},  m < n, (3.4)

    where

    C1(t) = U1(t)X1(t) − U2(t)X2(t) − U3(t)X3(t) − U4(t)X4(t),
    C2(t) = U1(t)X2(t) + U2(t)X1(t) + U3(t)X4(t) − U4(t)X3(t),
    C3(t) = U1(t)X3(t) + U3(t)X1(t) + U4(t)X2(t) − U2(t)X4(t),
    C4(t) = U1(t)X4(t) + U4(t)X1(t) + U2(t)X3(t) − U3(t)X2(t), (3.5)

    with Ci(t) ∈ R^{n×m} for i = 1, …, 4, and

    D1(t) = X1(t)V1(t) − X2(t)V2(t) − X3(t)V3(t) − X4(t)V4(t),
    D2(t) = X1(t)V2(t) + X2(t)V1(t) + X3(t)V4(t) − X4(t)V3(t),
    D3(t) = X1(t)V3(t) + X3(t)V1(t) + X4(t)V2(t) − X2(t)V4(t),
    D4(t) = X1(t)V4(t) + X4(t)V1(t) + X2(t)V3(t) − X3(t)V2(t), (3.6)

    with Di(t) ∈ R^{n×m} for i = 1, …, 4. Then, setting

    Z1(t) = [U1(t)  −U2(t)  −U3(t)  −U4(t)
             U2(t)   U1(t)  −U4(t)   U3(t)
             U3(t)   U4(t)   U1(t)  −U2(t)
             U4(t)  −U3(t)   U2(t)   U1(t)] ∈ R^{4n×4n},
    Z2(t) = [ V1(t)   V2(t)   V3(t)   V4(t)
             −V2(t)   V1(t)  −V4(t)   V3(t)
             −V3(t)   V4(t)   V1(t)  −V2(t)
             −V4(t)  −V3(t)   V2(t)   V1(t)] ∈ R^{4m×4m},
    Y1(t) = [X1^T(t), X2^T(t), X3^T(t), X4^T(t)]^T ∈ R^{4n×m},  Y2(t) = [X1(t), X2(t), X3(t), X4(t)] ∈ R^{n×4m},
    W1(t) = [A1(t), −A2(t), −A3(t), −A4(t)]^T ∈ R^{4n×m},  W2(t) = [A1^T(t), −A2^T(t), −A3^T(t), −A4^T(t)] ∈ R^{n×4m}, (3.7)

    where Z1(t) ∈ R^{4n×4n}, Z2(t) ∈ R^{4m×4m}, Y1(t), W1(t) ∈ R^{4n×m} and Y2(t), W2(t) ∈ R^{n×4m}, the following ERME is considered:

    EQ(t) = { E1(t) = Z1(t)Y1(t) − W1(t),  m ≥ n
              E2(t) = Y2(t)Z2(t) − W2(t),  m < n, (3.8)

    where E1(t) ∈ R^{4n×m} and E2(t) ∈ R^{n×4m}. The first time derivative of (3.8) is as follows:

    ˙EQ(t) = { ˙E1(t) = ˙Z1(t)Y1(t) + Z1(t)˙Y1(t) − ˙W1(t),  m ≥ n
               ˙E2(t) = ˙Y2(t)Z2(t) + Y2(t)˙Z2(t) − ˙W2(t),  m < n. (3.9)

    When EQ(t) of (3.8) and ˙EQ(t) of (3.9) are substituted into (2.9) and the result is solved in terms of ˙Y1(t) and ˙Y2(t), we obtain:

    { Z1(t)˙Y1(t) = −λE1(t) − ˙Z1(t)Y1(t) + ˙W1(t),  m ≥ n
      ˙Y2(t)Z2(t) = −λE2(t) − Y2(t)˙Z2(t) + ˙W2(t),  m < n. (3.10)

    Then, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.10) may be simplified:

    { (Im ⊗ Z1(t))vec(˙Y1(t)) = vec(−λE1(t) − ˙Z1(t)Y1(t) + ˙W1(t)),  m ≥ n
      (Z2^T(t) ⊗ In)vec(˙Y2(t)) = vec(−λE2(t) − Y2(t)˙Z2(t) + ˙W2(t)),  m < n, (3.11)

    and after setting:

    K(t) = { Im ⊗ Z1(t),  m ≥ n & rank(˜A(t)) = n
             Im ⊗ Z1(t) + γI_{4mn},  m ≥ n & rank(˜A(t)) < n
             Z2^T(t) ⊗ In,  m < n & rank(˜A(t)) = m
             Z2^T(t) ⊗ In + γI_{4mn},  m < n & rank(˜A(t)) < m,
    ˙y(t) = { vec(˙Y1(t)),  m ≥ n
              vec(˙Y2(t)),  m < n,
    L(t) = { vec(−λE1(t) − ˙Z1(t)Y1(t) + ˙W1(t)),  m ≥ n
             vec(−λE2(t) − Y2(t)˙Z2(t) + ˙W2(t)),  m < n,
    y(t) = { vec(Y1(t)),  m ≥ n
             vec(Y2(t)),  m < n, (3.12)

    the next ZNNQ model is derived for solving the TVQ-MPI of (2.6):

    K(t)˙y(t)=L(t) (3.13)

    where ˙y(t), y(t), L(t) ∈ R^{4mn}, K(t) ∈ R^{4mn×4mn} is a nonsingular mass matrix and γ ≥ 0 is the regularization parameter.

    Given that we perform 4mn additions/subtractions and (4mn)^2 multiplications in each iteration of (3.13), the complexity of solving (3.13) is O((4mn)^2) operations. In addition, the complexity of solving (3.13) through an implicit ode MATLAB solver is O((4mn)^3) as it involves a (4mn)×(4mn) matrix. As a consequence, the computational complexity of the ZNNQ model of (3.13) is O((4mn)^3).
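    The Kronecker/vectorization step used to pass from (3.10) to (3.11) rests on the identity vec(AXB) = (B^T ⊗ A)vec(X), with column-major (MATLAB-style) vectorization. A small illustrative numpy check with arbitrary dimensions:

```python
import numpy as np

# vec(Z1*Y1) = (I ⊗ Z1) vec(Y1)   and   vec(Y2*Z2) = (Z2^T ⊗ I) vec(Y2),
# where vec(.) stacks columns (order="F", as in MATLAB).
rng = np.random.default_rng(2)
Z1, Y1 = rng.standard_normal((8, 8)), rng.standard_normal((8, 2))
Y2, Z2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 8))

vec = lambda M: M.flatten(order="F")

lhs1 = vec(Z1 @ Y1)
rhs1 = np.kron(np.eye(2), Z1) @ vec(Y1)
lhs2 = vec(Y2 @ Z2)
rhs2 = np.kron(Z2.T, np.eye(2)) @ vec(Y2)
print(np.allclose(lhs1, rhs1) and np.allclose(lhs2, rhs2))
```

    The same identity produces the mass matrices of the ZNNQC and ZNNQR models below.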

    To develop the ZNNQC model, the TVQ-MPI of (2.7) is considered. Letting ˆA(t) ∈ C^{2m×2n} and ˆX(t) ∈ C^{2n×2m}, we set the following ERME:

    EC(t) = { E1(t) = ˆA^*(t)ˆA(t)ˆX(t) − ˆA^*(t),  m ≥ n
              E2(t) = ˆX(t)ˆA(t)ˆA^*(t) − ˆA^*(t),  m < n, (3.14)

    where E1(t), E2(t) ∈ C^{2n×2m}. The first time derivative of (3.14) is as follows:

    ˙EC(t) = { ˙E1(t) = (˙ˆA^*(t)ˆA(t) + ˆA^*(t)˙ˆA(t))ˆX(t) + ˆA^*(t)ˆA(t)˙ˆX(t) − ˙ˆA^*(t),  m ≥ n
               ˙E2(t) = ˙ˆX(t)ˆA(t)ˆA^*(t) + ˆX(t)(˙ˆA(t)ˆA^*(t) + ˆA(t)˙ˆA^*(t)) − ˙ˆA^*(t),  m < n. (3.15)

    When EC(t) of (3.14) and ˙EC(t) of (3.15) are substituted into (2.9) and the result is solved in terms of ˙ˆX(t), we obtain:

    { ˆA^*(t)ˆA(t)˙ˆX(t) = −λE1(t) − (˙ˆA^*(t)ˆA(t) + ˆA^*(t)˙ˆA(t))ˆX(t) + ˙ˆA^*(t),  m ≥ n
      ˙ˆX(t)ˆA(t)ˆA^*(t) = −λE2(t) − ˆX(t)(˙ˆA(t)ˆA^*(t) + ˆA(t)˙ˆA^*(t)) + ˙ˆA^*(t),  m < n. (3.16)

    Then, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.16) may be simplified:

    { (I2m ⊗ ˆA^*(t)ˆA(t))vec(˙ˆX(t)) = vec(−λE1(t) − (˙ˆA^*(t)ˆA(t) + ˆA^*(t)˙ˆA(t))ˆX(t) + ˙ˆA^*(t)),  m ≥ n
      ((ˆA(t)ˆA^*(t))^T ⊗ I2n)vec(˙ˆX(t)) = vec(−λE2(t) − ˆX(t)(˙ˆA(t)ˆA^*(t) + ˆA(t)˙ˆA^*(t)) + ˙ˆA^*(t)),  m < n, (3.17)

    and after setting:

    W(t) = { I2m ⊗ ˆA^*(t)ˆA(t),  m ≥ n & rank(ˆA(t)) = 2n
             I2m ⊗ ˆA^*(t)ˆA(t) + γI_{4mn},  m ≥ n & rank(ˆA(t)) < 2n
             (ˆA(t)ˆA^*(t))^T ⊗ I2n,  m < n & rank(ˆA(t)) = 2m
             (ˆA(t)ˆA^*(t))^T ⊗ I2n + γI_{4mn},  m < n & rank(ˆA(t)) < 2m,
    H(t) = { vec(−λE1(t) − (˙ˆA^*(t)ˆA(t) + ˆA^*(t)˙ˆA(t))ˆX(t) + ˙ˆA^*(t)),  m ≥ n
             vec(−λE2(t) − ˆX(t)(˙ˆA(t)ˆA^*(t) + ˆA(t)˙ˆA^*(t)) + ˙ˆA^*(t)),  m < n,
    ˙ˆx(t) = vec(˙ˆX(t)),  ˆx(t) = vec(ˆX(t)), (3.18)

    the ZNNQC model is derived for solving the TVQ-MPI of (2.6):

    W(t)˙ˆx(t)=H(t) (3.19)

    where ˙ˆx(t), ˆx(t), H(t) ∈ C^{4mn}, W(t) ∈ C^{4mn×4mn} is a nonsingular mass matrix and γ ≥ 0 is the regularization parameter.

    In terms of computational complexity, it is important to note that multiplying two complex numbers results in the calculation (c + dı)(k + hı) = ck − dh + (ch + dk)ı, which calls for a total of 4 multiplication and 2 addition/subtraction operations. Taking this into account, the complexity of computing (3.19) is O(4(4mn)^2) = O((8mn)^2), as each iteration of (3.19) involves 4(4mn)^2 multiplication and 2(4mn)^2 addition/subtraction operations. In addition, the complexity of solving (3.19) through an implicit ode MATLAB solver is O((8mn)^3) as it involves a (4mn)×(4mn) matrix in the complex domain. As a consequence, the computational complexity of the ZNNQC model of (3.19) is O((8mn)^3).

    To develop the ZNNQR model, the TVQ-MPI of (2.8) is considered. Letting A(t) ∈ R^{4m×4n} and X(t) ∈ R^{4n×4m}, we set the following ERME:

    ER(t) = { E1(t) = A^T(t)A(t)X(t) − A^T(t),  m ≥ n
              E2(t) = X(t)A(t)A^T(t) − A^T(t),  m < n, (3.20)

    where E1(t), E2(t) ∈ R^{4n×4m}. The first time derivative of (3.20) is as follows:

    ˙ER(t) = { ˙E1(t) = (˙A^T(t)A(t) + A^T(t)˙A(t))X(t) + A^T(t)A(t)˙X(t) − ˙A^T(t),  m ≥ n
               ˙E2(t) = ˙X(t)A(t)A^T(t) + X(t)(˙A(t)A^T(t) + A(t)˙A^T(t)) − ˙A^T(t),  m < n. (3.21)

    When ER(t) of (3.20) and ˙ER(t) of (3.21) are substituted into (2.9) and the result is solved in terms of ˙X(t), we obtain:

    { A^T(t)A(t)˙X(t) = −λE1(t) − (˙A^T(t)A(t) + A^T(t)˙A(t))X(t) + ˙A^T(t),  m ≥ n
      ˙X(t)A(t)A^T(t) = −λE2(t) − X(t)(˙A(t)A^T(t) + A(t)˙A^T(t)) + ˙A^T(t),  m < n. (3.22)

    Then, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.22) may be simplified:

    { (I4m ⊗ A^T(t)A(t))vec(˙X(t)) = vec(−λE1(t) − (˙A^T(t)A(t) + A^T(t)˙A(t))X(t) + ˙A^T(t)),  m ≥ n
      (A(t)A^T(t) ⊗ I4n)vec(˙X(t)) = vec(−λE2(t) − X(t)(˙A(t)A^T(t) + A(t)˙A^T(t)) + ˙A^T(t)),  m < n, (3.23)

    and after setting:

    M(t) = { I4m ⊗ A^T(t)A(t),  m ≥ n & rank(A(t)) = 4n
             I4m ⊗ A^T(t)A(t) + γI_{16mn},  m ≥ n & rank(A(t)) < 4n
             A(t)A^T(t) ⊗ I4n,  m < n & rank(A(t)) = 4m
             A(t)A^T(t) ⊗ I4n + γI_{16mn},  m < n & rank(A(t)) < 4m,
    P(t) = { vec(−λE1(t) − (˙A^T(t)A(t) + A^T(t)˙A(t))X(t) + ˙A^T(t)),  m ≥ n
             vec(−λE2(t) − X(t)(˙A(t)A^T(t) + A(t)˙A^T(t)) + ˙A^T(t)),  m < n,
    ˙x(t) = vec(˙X(t)),  x(t) = vec(X(t)), (3.24)

    the ZNNQR model is derived for solving the TVQ-MPI of (2.6):

    M(t)˙x(t)=P(t) (3.25)

    where ˙x(t), x(t), P(t) ∈ R^{16mn}, M(t) ∈ R^{16mn×16mn} is a nonsingular mass matrix and γ ≥ 0 is the regularization parameter.

    Given that we perform 16mn additions/subtractions and (16mn)^2 multiplications in each iteration of (3.25), the complexity of solving (3.25) is O((16mn)^2) operations. In addition, the complexity of solving (3.25) through an implicit ode MATLAB solver is O((16mn)^3) as it involves a (16mn)×(16mn) matrix. As a consequence, the computational complexity of the ZNNQR model of (3.25) is O((16mn)^3).

    This section examines the convergence and stability of the ZNNQ (3.13), ZNNQC (3.19), and ZNNQR (3.25) models.

    Theorem 4.1. Assuming that Z1(t) ∈ R^{4n×4n}, Z2(t) ∈ R^{4m×4m}, Y1(t), W1(t) ∈ R^{4n×m} and Y2(t), W2(t) ∈ R^{n×4m}, and that Z1(t), Z2(t), W1(t) and W2(t) are differentiable, the dynamical system (3.10) converges to ˜A^†(t), which is the theoretical solution (THSO) of the TVQ-MPI (2.6). The solution is then stable, in the sense of Lyapunov.

    Proof. The substitution ˉYi(t) := ˇYi(t) − Yi(t), i = 1, 2, implies Yi(t) = ˇYi(t) − ˉYi(t), where ˇYi(t) is a THSO. The time derivative of Yi(t), i = 1, 2, is ˙Yi(t) = ˙ˇYi(t) − ˙ˉYi(t). Notice that

    { Z1(t)ˇY1(t) − W1(t) = 0_{4n×m},  m ≥ n
      ˇY2(t)Z2(t) − W2(t) = 0_{n×4m},  m < n, (4.1)

    and its first derivative

    { ˙Z1(t)ˇY1(t) + Z1(t)˙ˇY1(t) − ˙W1(t) = 0_{4n×m},  m ≥ n
      ˙ˇY2(t)Z2(t) + ˇY2(t)˙Z2(t) − ˙W2(t) = 0_{n×4m},  m < n. (4.2)

    As a result, following the substitution of Yi(t) = ˇYi(t) − ˉYi(t), i = 1, 2, into (3.8), one can verify

    ˉEQ(t) = { Z1(t)(ˇY1(t) − ˉY1(t)) − W1(t),  m ≥ n
               (ˇY2(t) − ˉY2(t))Z2(t) − W2(t),  m < n. (4.3)

    Further, the implicit dynamics (2.9) imply

    ˙ˉEQ(t) = { ˙Z1(t)(ˇY1(t) − ˉY1(t)) + Z1(t)(˙ˇY1(t) − ˙ˉY1(t)) − ˙W1(t),  m ≥ n
                (˙ˇY2(t) − ˙ˉY2(t))Z2(t) + (ˇY2(t) − ˉY2(t))˙Z2(t) − ˙W2(t),  m < n
              = −λˉEQ(t). (4.4)

    We then determine the candidate Lyapunov function so as to confirm convergence:

    L(t) = (1/2)‖ˉEQ(t)‖_F^2 = (1/2)Tr(ˉEQ(t)(ˉEQ(t))^T). (4.5)

    Then, the next identities can be verified:

    ˙L(t) = 2Tr((ˉEQ(t))^T ˙ˉEQ(t))/2 = Tr((ˉEQ(t))^T ˙ˉEQ(t)) = −λTr((ˉEQ(t))^T ˉEQ(t)). (4.6)

    Consequently, it holds

    dL(t)/dt { < 0,  ˉEQ(t) ≠ 0
               = 0,  ˉEQ(t) = 0,
    ⇒ ˙L(t) { < 0,  { Z1(t)(ˇY1(t) − ˉY1(t)) − W1(t) ≠ 0,  m ≥ n
                      (ˇY2(t) − ˉY2(t))Z2(t) − W2(t) ≠ 0,  m < n
              = 0,  { Z1(t)(ˇY1(t) − ˉY1(t)) − W1(t) = 0,  m ≥ n
                      (ˇY2(t) − ˉY2(t))Z2(t) − W2(t) = 0,  m < n,
    ⇒ ˙L(t) { < 0,  { ˉY1(t) ≠ 0,  m ≥ n
                      ˉY2(t) ≠ 0,  m < n
              = 0,  { ˉY1(t) = 0,  m ≥ n
                      ˉY2(t) = 0,  m < n, (4.7)

    With ˉY(t) = { ˉY1(t), m ≥ n; ˉY2(t), m < n } being the equilibrium point of the system (4.4) and ˉEQ(0) = 0, we have that:

    dL(t)/dt ≤ 0,  ∀ ˉY(t) ≠ 0. (4.8)

    By the Lyapunov stability theory, we infer that the equilibrium state:

    { ˉY1(t) = ˇY1(t) − Y1(t) = 0,  m ≥ n
      ˉY2(t) = ˇY2(t) − Y2(t) = 0,  m < n, (4.9)

    is stable. Thus, Yi(t) → ˇYi(t), i = 1, 2, as t → ∞.

    Theorem 4.2. Let ˜A(t) ∈ H^{m×n} be differentiable. For any initial value y(0) that one may consider, the ZNNQ model (3.13) converges exponentially to the THSO ˇy(t) at each time t.

    Proof. First, the ERME of (3.8) is declared so as to determine the THSO of the TVQ-MPI. Second, the model (3.10) is developed utilizing the ZNN architecture (2.9) for zeroing (3.8). So, Y(t) → ˇY(t) for any initial value as t → ∞, according to Theorem 4.1. Third, with the aid of the Kronecker product and the vectorization process, the dynamic model of (3.10) is simplified into the ZNNQ model (3.13). Therefore, the ZNNQ model (3.13) converges to the THSO ˇy(t) for any initial value y(0) as t → ∞, as it is simply an alternative formulation of (3.10). The proof is thus completed.

    Theorem 4.3. Assuming that ˆA(t) ∈ C^{2m×2n} is differentiable, the dynamical system (3.16) converges to ˆA^†(t), which is the THSO of the TVQ-MPI (2.7). The solution is then stable, in the sense of Lyapunov.

    Proof. Given that the proof mirrors the proof of Theorem 4.1, it is omitted.

    Theorem 4.4. Let ˆA(t) ∈ C^{2m×2n} be differentiable. For any initial value ˆx(0) that one may consider, the ZNNQC model (3.19) converges exponentially to the THSO ˇˆx(t) at each time t.

    Proof. Given that the proof mirrors the proof of Theorem 4.2 once we replace Theorem 4.1 with Theorem 4.3, it is omitted.

    Theorem 4.5. Assuming that A(t) ∈ R^{4m×4n} is differentiable, the dynamical system (3.22) converges to A^†(t), which is the THSO of the TVQ-MPI (2.8). The solution is then stable, in the sense of Lyapunov.

    Proof. Given that the proof mirrors the proof of Theorem 4.1, it is omitted.

    Theorem 4.6. Let A(t) ∈ R^{4m×4n} be differentiable. For any initial value x(0) that one may consider, the ZNNQR model (3.25) converges exponentially to the THSO ˇx(t) at each time t.

    Proof. Given that the proof mirrors the proof of Theorem 4.2 once we replace Theorem 4.1 with Theorem 4.5, it is omitted.

    In this section, four numerical simulations (NSs) and a real-world application involving robotic motion tracking are presented. The essential clarifications used across all NSs and the application are given below. The ZNN design parameter λ takes the values 10 and 100 in the NSs and the value 10 in the application, while the initial values of the ZNNQ, ZNNQC and ZNNQR models have been set to y(0) = 0_{4mn}, ˆx(0) = 0_{4mn} and x(0) = 0_{16mn}, respectively. Additionally, we have set α(t) = sin(t) and β(t) = cos(t), and we will refer to the four Penrose equations in (1.1) as (P-i), (P-ii), (P-iii) and (P-iv) for convenience. The notation QMP (i.e., quaternion MP) in the figure legends refers to the MP-inverse of the input TVQ matrix ˜A(t), i.e., ˜A^†(t). Finally, the NSs have used the MATLAB ode15s solver in the time interval [0, 10] under the default double-precision arithmetic (eps = 2.22·10^{−16}), whereas the application has used the same solver in the time interval [0, 20]. As a consequence, the minimum values in the figures of this section are mostly of order 10^{−5}.
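    To illustrate this experimental setup, the following sketch (a simplified real-domain analogue with placeholder data, not the paper's MATLAB code) integrates ZNN dynamics of the form A^T(t)A(t)˙X(t) = −λE(t) − (˙A^T(t)A(t) + A^T(t)˙A(t))X(t) + ˙A^T(t) with forward Euler in place of ode15s, and checks that the state tracks the time-varying pseudoinverse:

```python
import numpy as np

# Real-domain ZNN sketch: track the pseudoinverse of a full-column-rank
# 3x2 matrix A(t) with lam = 10, starting from X(0) = 0 as in the NSs.
lam, dt, T = 10.0, 1e-4, 5.0
A  = lambda t: np.array([[2 + np.sin(t), 1.0],
                         [1.0, 2 + np.cos(t)],
                         [0.5, 1.0]])
Ad = lambda t: np.array([[np.cos(t), 0.0],        # dA/dt, known analytically
                         [0.0, -np.sin(t)],
                         [0.0, 0.0]])

X = np.zeros((2, 3))
for k in range(int(T / dt)):
    t = k * dt
    At, Adt = A(t), Ad(t)
    E = At.T @ At @ X - At.T                      # error matrix E(t)
    B = -lam * E - (Adt.T @ At + At.T @ Adt) @ X + Adt.T
    X = X + dt * np.linalg.solve(At.T @ At, B)    # invert the mass matrix

print(np.max(np.abs(X - np.linalg.pinv(A(T)))) < 1e-3)
```

    The qualitative behavior, a steep initial decline of the error followed by accurate tracking, mirrors what Figures 1-3 report for the quaternion models.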

    Example 5.1. Considering the next coefficients:

    A1(t)=[2α(t)+711211411], A2(t)=[5112α(t)+111311], A3(t)=[3α(t)+2111211511], A4(t)=[2112α(t)+111711],

    the input matrix ˜A(t) ∈ H^{3×3} is a singular TVQ matrix with rank(˜A(t)) = 2. As a consequence, we set γ = 10^{−8} in the ZNNQ, ZNNQC and ZNNQR models. The performance of the ZNN models is shown in Figures 1, 2 and 3.

    Figure 1.  ERMEs in NSs 5.1–5.4 for λ with values 10 and 100.
    Figure 2.  Real and imaginary parts of the trajectories of ˜X(t) in NSs 5.1–5.4 for λ=10.
    Figure 3.  Error of Penrose equations (1.1) in NSs 5.1–5.4 for λ=10.

    Example 5.2. Utilizing the next coefficients:

    A1(t)=[2α(t)+173β(t)+271β(t)+22β(t)+47],A2(t)=[22α(t)+12α(t)372β(t)+4β(t)+672α(t)+1],A3(t)=[3α(t)+262α(t)+17β(t)+23β(t)+57],A4(t)=[32α(t)+1773α(t)+2572α(t)+1],

    the input matrix ˜A(t) ∈ H^{4×2} is a full rank TVQ matrix with rank(˜A(t)) = 2. As a consequence, we set γ = 0 in the ZNNQ, ZNNQC and ZNNQR models. The performance of the ZNN models is shown in Figures 1, 2 and 3.

    Example 5.3. Using the following coefficients:

    A1(t)=[2α(t)+724β(t)852α(t)+724β(t)912α(t)+724β(t)91],A2(t)=[52α(t)+13β(t)8552α(t)+13β(t)9152α(t)+13β(t)91],A3(t)=[3α(t)+2125β(t)853α(t)+2125β(t)913α(t)+2125β(t)91],A4(t)=[22α(t)+17β(t)8522α(t)+17β(t)9122α(t)+17β(t)91],

    the input matrix ˜A(t) ∈ H^{3×6} is a rank deficient TVQ matrix with rank(˜A(t)) = 2. As a consequence, we set γ = 10^{−8} in the ZNNQ, ZNNQC and ZNNQR models. The performance of the ZNN models is shown in Figures 1, 2 and 3.

    Example 5.4. Considering the following matrix

    K = [1 0 0 0 0 0 0 1 1 1 1 1
         1 1 0 0 0 0 0 0 1 1 1 1
         1 1 1 0 0 0 0 0 0 1 1 1
         1 1 1 0 0 0 0 0 0 1 1 1],

    the coefficients of the input matrix ˜A(t) have been set to

    A1(t) = K^T(1 + α(t)),  A2(t) = K^T(1 + 2α(t)),  A3(t) = K^T(1 + 2β(t)),  A4(t) = K^T(1 + 4β(t)).

    As a consequence, ˜A(t) ∈ H^{12×4} is a rank deficient TVQ matrix with rank(˜A(t)) = 3 and, thus, we set γ = 10^{−8} in the ZNNQ, ZNNQC and ZNNQR models. The performance of the ZNN models is shown in Figures 1, 2 and 3.

    The applicability of the ZNNQ, ZNNQC and ZNNQR models is validated in this experiment using a 3-link planar manipulator (PM), as shown in Figure 4a. It is important to mention that the 3-link PM's kinematics equations at the position level, r(t) ∈ R^m, and the velocity level, ˙r(t) ∈ R^m, are expressed as follows:

    r(t)=f(θ(t)),˙r(t)=J(θ)˙θ(t),

    where θ ∈ R^n is the joint-angle vector of the 3-link PM, f(·) is a smooth nonlinear mapping function, r(t) is the end-effector's position, and J(θ) = ∂f(θ)/∂θ ∈ R^{m×n}.

    To comprehend how this 3-link PM tracks motion, the inverse kinematic equation is solved. The velocity equation can be thought of as a system of linear equations when the end-effector motion tracking task is assigned, with ˙r(t) known and ˙θ(t) unknown. To put it another way, by setting ˜A(t) = J(θ), we find ˜X(t) = ˜A^†(t) in order to solve ˙θ(t) = ˜X(t)˙r(t). Therefore, we may control the tracking of the 3-link PM by using the ZNN models to resolve the underlying linear equation system.
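    The pseudoinverse-based velocity scheme described above can be sketched as follows for a 3-link planar arm. This is an illustration rather than the paper's code: it uses the link lengths quoted in the application specification but a simple straight-line velocity command instead of the "M"-shaped trajectory, and numpy's pinv in place of the ZNN models:

```python
import numpy as np

L = np.array([1.0, 2/3, 5/4])                 # link lengths from the text

def fk(theta):
    """End-effector position f(theta) of a 3-link planar arm."""
    s = np.cumsum(theta)                      # absolute link angles
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(theta):
    """J(theta) = d f / d theta, a 2x3 matrix."""
    s = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(s[i:]))
    return J

# Track a constant end-effector velocity for 1 s via theta' = J†(theta) r'.
theta, dt = np.array([np.pi/4, np.pi/4, np.pi/4]), 1e-3
r_dot = np.array([0.05, -0.02])               # assumed velocity command
r_ref = fk(theta)                             # reference position
for _ in range(1000):
    theta = theta + dt * (np.linalg.pinv(jacobian(theta)) @ r_dot)
    r_ref = r_ref + dt * r_dot

print(np.max(np.abs(fk(theta) - r_ref)) < 1e-3)
```

    In the experiment itself, the pinv call at each instant is replaced by the continuously evolving state of the ZNNQ, ZNNQC or ZNNQR model.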

    The 3-link PM's end-effector is anticipated to follow an "M"-shaped path in the simulation experiment; [58] contains the X- and Y-axis velocity functions of this path along with the specifications of the 3-link PM. The task duration 4T is 20 seconds (i.e., T = 5 seconds) in these functions, and the design parameter is s = 6 cm. Additionally, the link lengths are α = [1, 2/3, 5/4]^T and the initial value of the joints is θ(0) = [π/4, π/4, π/4]^T. The performance of the ZNN models is shown in Figure 4.

    Figure 4.  Robotic motion tracking application results in Section 5.2.

    The performance of the ZNNQ (3.13), ZNNQC (3.19) and ZNNQR (3.25) models for solving the TVQ-MPI (2.6) is investigated throughout the NSs 5.1–5.4. While the input TVQ matrices ˜A(t) used have varied dimensions and rank conditions, each NS solves the TVQ-MPI problem. Particularly, NS 5.1 has a singular TVQ matrix of dimensions 3×3, NS 5.2 has a full rank TVQ matrix of dimensions 4×2, NS 5.3 has a rank deficient TVQ matrix of dimensions 3×6, while NS 5.4 has a rank deficient TVQ matrix of dimensions 12×4, which is much larger than the other NSs.

    Figures 1a–1d and Figures 1e–1h show the ERMEs' Frobenius norms of the ZNNQ, ZNNQC and ZNNQR models for λ values of 10 and 100 in NSs 5.1–5.4, respectively. In the case of λ = 10 in Figures 1a–1d, it can be observed that the error values in all NSs start from a high value at t = 0 and, by the time-mark of t ≈ 1, experience a steep decline that brings them to the range [10^{−4}, 10^{−2}]. Notice that a higher value of λ will typically cause the ZNN models to converge even more quickly. This is demonstrated in the case of λ = 100 in Figures 1e–1h, where the error values in all NSs again start from a high value at t = 0 and, by the time-mark of t ≈ 0.1, experience a steep decline that brings them to the range [10^{−4}, 10^{−3}]. Also, all ZNN models exhibit the same convergence speed, but the ZNNQC has the highest overall error in the interval [0, 10] while the ZNNQ has the lowest. In other words, the ZNNQ model performs better than the ZNNQC and ZNNQR models.

    The fact that all three models successfully converge is further highlighted in Figure 2, which contrasts the THSO's real and imaginary part trajectories with the corresponding ˜X(t) trajectories produced by the three models. Particularly, Figure 2a depicts the real part and Figures 2b–2d depict the imaginary parts in NS 5.1, Figure 2e depicts the real part and Figures 2f–2h depict the imaginary parts in NS 5.2, Figure 2i depicts the real part and Figures 2j–2l depict the imaginary parts in NS 5.3, and Figure 2m depicts the real part and Figures 2n–2p depict the imaginary parts in NS 5.4. In these figures, it can be observed that the three models' ˜X(t) trajectories coincide with the corresponding THSO trajectories, whereas their convergence speed follows the convergence tendency of the ZNNQ, ZNNQC and ZNNQR models' ERMEs shown in Figure 1. It is important to note that the convergence and stability theorems of Section 4 are validated in all of the figures in this section by the convergence tendency of the ERME's Frobenius norms alongside the solution trajectories that match the THSO's trajectories. That is, the ZNNQ, ZNNQC and ZNNQR models of Section 3 converge exponentially to the QMP ˜A†(t) for any initial value as t→∞.

    By assessing the error of the Penrose equations (1.1), the three models' performance is examined in order to further validate them. Particularly, Figures 3a, 3e, 3i and 3m depict the error of (P-i) in NSs 5.1–5.4, respectively, Figures 3b, 3f, 3j and 3n depict the error of (P-ii), Figures 3c, 3g, 3k and 3o depict the error of (P-iii), and Figures 3d, 3h, 3l and 3p depict the error of (P-iv). In these figures, it can be observed that the convergence speed of the error produced by the three models follows the convergence tendency of the ZNNQ, ZNNQC and ZNNQR models' ERMEs shown in Figure 1. In NSs 5.1 and 5.2, the ZNNQ and ZNNQR have identical performance. Additionally, the ZNNQC has the highest overall error in the region [0,10] while the ZNNQ and ZNNQR have the lowest. In NSs 5.3 and 5.4, all models produce almost identical overall error values.
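The four Penrose residuals used for this validation can be checked numerically as in the sketch below (an illustration, not the paper's code), here for a rank-deficient real matrix of the same 3×6 shape as NS 5.3; the quaternion case is analogous with the conjugate transpose.

```python
# Hedged sketch: Frobenius norms of the four Penrose-equation residuals
# for a candidate pseudoinverse X of A. A valid Moore-Penrose inverse
# drives all four residuals to (numerical) zero, even for rank-deficient A.
import numpy as np

def penrose_residuals(A, X):
    """Frobenius norms of the residuals of Penrose equations (P-i)-(P-iv)."""
    return (np.linalg.norm(A @ X @ A - A),              # (P-i)
            np.linalg.norm(X @ A @ X - X),              # (P-ii)
            np.linalg.norm((A @ X).conj().T - A @ X),   # (P-iii)
            np.linalg.norm((X @ A).conj().T - X @ A))   # (P-iv)

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 2))
A = B @ rng.standard_normal((2, 6))   # 3x6, rank 2 (rank deficient, as in NS 5.3)
X = np.linalg.pinv(A)
res = penrose_residuals(A, X)          # all four residuals near machine precision
```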

    Additionally, the applicability of the ZNNQ, ZNNQC and ZNNQR models is validated in the experiment of Section 5.2 using a 3-link PM. Particularly, Figure 4b shows the ERME's Frobenius norms of the ZNN models. It can be observed that the error values start from a high value at t=0 and, by the time-mark of t≈1, they experience a steep decline that brings them to the range [10⁻⁵,10⁻⁴]. The error values fall more gradually after t≈1 until t=20, when they drop to the range [10⁻¹³,10⁻¹¹]. All ZNN models exhibit the same convergence speed and very similar overall error in the region [0,20], but the ZNNQR has the highest overall error while the ZNNQ has the lowest. Figures 4c–4f depict the error of (P-i)–(P-iv), respectively. In these figures, it can be observed that the convergence speed of the error produced by the three models follows the convergence tendency of the ZNNQ, ZNNQC and ZNNQR models' ERMEs shown in Figure 4b. In other words, the ZNNQ model performs better than the ZNNQC and ZNNQR models. Figures 4g and 4h depict the trajectories of the velocity and the "M"-shaped path tracking. As seen in these figures, all ZNN model solutions match the actual velocity ˙θ(t), and the 3-link PM successfully completes the "M"-shaped path tracking task, where ˙r(t) denotes the actual "M"-shaped path velocity.

    Finally, the results above can be placed in better context once the complexity of each model is taken into account. Because the dimensions of the associated real-valued matrix A(t) are 2 times larger than those of the complex-valued matrix ˆA(t) and 4 times larger than those of the quaternion-valued matrix ˜A(t), the ZNNQR is by far the most complex model. Consequently, choosing to solve the TVQ-MPI problem in the real domain carries a significant memory penalty, with RAM quickly becoming a limiting factor as ˜A(t) grows in size. All things considered, all three ZNN models can solve the TVQ-MPI problem, although the ZNNQ appears to have the most potential.
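The memory argument reduces to simple counting, sketched below for the largest example (NS 5.4's 12×4 matrix): an m×n quaternion matrix stores 4mn reals, its complex representation is a 2m×2n complex matrix (8mn reals), and its real representation is a 4m×4n real matrix (16mn reals), a 4x penalty for the real-domain route.

```python
# Hedged sketch of the storage comparison between the three formulations
# of an m x n time-varying quaternion matrix, counted in real numbers.
def storage_reals(m, n):
    quaternion = 4 * m * n               # four real parts per entry (ZNNQ)
    complex_rep = 2 * (2 * m) * (2 * n)  # 2m x 2n complex entries (ZNNQC)
    real_rep = (4 * m) * (4 * n)         # 4m x 4n real entries (ZNNQR)
    return quaternion, complex_rep, real_rep

q, c, r = storage_reals(12, 4)  # the largest example, NS 5.4
# -> 192, 384, 768: the real representation needs 4x the quaternion storage.
```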

    Three models, namely ZNNQ, ZNNQC and ZNNQR, have been presented to address the TVQ-MPI problem for TVQ matrices of arbitrary dimension. The development of these models has been supported by theoretical analysis and an examination of their computational complexity, in addition to simulated examples and a real-world application involving robotic motion tracking. The TVQ-MPI problem has been solved successfully both directly in the quaternion domain and indirectly, through representations in the complex and real domains. Of the three approaches, the direct one, implemented by the ZNNQ model, has been suggested as the most effective and efficient. In light of this, the established findings pave the way for further research. The following directions ought to be considered:

    ● Applying the predefined-time ZNN architecture to TVQ-based problems is something that can be looked into.

    ● Solving nonlinear TVQ-based matrix equations is another task that could be taken into consideration.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by a Mega Grant from the Government of the Russian Federation within the framework of federal project No. 075-15-2021-584.

    Vasilios N. Katsikis is an editorial board member for AIMS Mathematics and was not involved in the editorial review or the decision to publish this article. All authors declare that there are no competing interests.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)