
Quasi self-dual codes over non-unital rings from three-class association schemes

  • Let E and I denote the two non-unital rings of order 4 in the notation of (Fine, 93), defined by generators and relations as $E=\langle a,b \mid 2a=2b=0,\ a^2=a,\ b^2=b,\ ab=a,\ ba=b\rangle$ and $I=\langle a,b \mid 2a=2b=0,\ a^2=b,\ ab=0\rangle$. Recently, Alahmadi et al. classified quasi self-dual (QSD) codes over the rings E and I for lengths up to 12 and 6, respectively. The codes had minimum distance at most 2 in the case of I, and 4 in the case of E. In this paper, we present two methods for constructing linear codes over these two rings using the adjacency matrices of three-class association schemes. We show that under certain conditions the constructions yield QSD or Type IV codes. Many codes with minimum distance exceeding 4 are presented. The form of the generator matrices of the codes obtained with these constructions prompted some new results on free codes over E and I.

    Citation: Adel Alahmadi, Asmaa Melaibari, Patrick Solé. Quasi self-dual codes over non-unital rings from three-class association schemes[J]. AIMS Mathematics, 2023, 8(10): 22731-22757. doi: 10.3934/math.20231158


    Water is vital for life and central to countless domestic and commercial processes, yet it is paradoxically one of the most heavily exploited natural resources. This exploitation arises from urbanization, rapid industrialization, improved living standards, and population growth, resulting in unregulated and excessive water consumption worldwide. As a result, the global community faces an urgent water crisis as both the quantity and quality of water are compromised. Urbanization also encroaches upon natural resources: industries such as textiles release effluents containing harmful components, notably dyes, endangering water bodies and further diminishing water availability and safety. The significance of this issue demands prompt action to ensure the sustainable management and preservation of water resources for the well-being of future generations.

    The extensive use of synthetic dyes, together with their poor removal rate during aerobic waste treatment, makes them a severe environmental threat. The textile industry alone utilizes approximately $8\times10^{5}$ tonnes of the more than 10,000 distinct types of dyes produced globally [1]. The complex and ever-changing mixture of pollutants, including dyes responsible for color and organic load, disrupts the biological equilibrium of the receiving water system [2]. This disruption can be attributed to the substantial load of organic compounds and heavy metals in the dyes, with approximately 93% of the water used in textile production discharged as colored effluent [3,4]. In addition, the non-biodegradable nature of dyes used in textile dye baths poses a severe environmental risk. The unpleasant color of the wastewater harms aquatic organisms and compromises the water's ability to sustain life, ultimately disturbing the entire aquatic environment and food chain [5]. To address this challenge, it is crucial to improve existing processes and explore innovative approaches that effectively decolorize mixtures of dyes, rather than focusing solely on individual dye solutions in textile effluents.

    Researchers have devised various physiochemical methods to efficiently eliminate color and other pollutants found in effluents from textile dyeing operations. These techniques include adsorption, chemical oxidation and reduction, chemical precipitation, flocculation, and photolysis [6,7,8]. However, these techniques often suffer from inefficiency, high costs, and side effects such as the generation of significant sludge and byproducts. Moreover, they may not be universally effective in degrading all types of dyes [9,10]. Consequently, the focus of researchers has shifted towards biological treatments, which offer the advantage of lower operational costs [11,12]. The remarkable capability of various microorganisms, including bacteria, fungi, and actinomycetes, to decolorize dyes has been discovered [13,14]. Among them, white rot fungi have emerged as some of the most extensively studied microorganisms due to their remarkable dye-decolorizing properties, ability to thrive in challenging environmental conditions, and production of multiple extracellular enzymes [15].

    Sathian et al. [16] utilized Pleurotus floridanus in a batch reactor to treat textile dye effluent. The main objective was to optimize the percentage decolorization (%DEC) by studying the influence of process variables such as pH, temperature, agitation speed, and dye wastewater concentration. To analyze the impact of these factors, they applied response surface methodology and conducted an optimization study. To streamline the process and reduce the number of experiments, they implemented the Box-Behnken design, which fits a quadratic surface and facilitates the analysis of parameter interactions and effective parameter optimization [16]. To describe the response mathematically as a function of continuous factors, a regression design was used to estimate the model parameters accurately. The study relied on the methodology proposed by Sathian et al. to determine the second-order polynomial response function [16]:

    $$Y(x)=\beta_0+\sum_{i=1}^{k}\beta_i x_i+\sum_{i=1}^{k}\beta_{ii} x_i^2+\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}\beta_{ij} x_i x_j,$$

    Here, $Y$ represents the predicted response, while $\beta_i$, $\beta_{ii}$, and $\beta_{ij}$ are coefficients estimated from the regression analysis.

    The experimental settings for treating textile dye wastewater using Pleurotus floridanus, based on the Box-Behnken design, are provided in Table 1. The authors [16] presented the following response surface models for %DEC and percentage chemical oxygen demand (%COD) reduction:

    $$\begin{aligned}
    Y_{\%DEC}&=58.62+0.09x_1-0.59x_2+3.63x_3+14.07x_4+0.70x_1x_2-1.95x_1x_3-2.67x_1x_4\\
    &\quad+2.43x_2x_3-0.20x_2x_4-1.28x_3x_4-10.32x_1^2-2.02x_2^2-2.40x_3^2-4.30x_4^2,\\
    Y_{\%COD}&=69.35+0.30x_1-0.54x_2+2.68x_3+13.95x_4-0.80x_1x_2-1.75x_1x_3-3.45x_1x_4\\
    &\quad+2.83x_2x_3-0.80x_2x_4-1.55x_3x_4-9.92x_1^2-2.18x_2^2-1.88x_3^2-4.72x_4^2.
    \end{aligned}\tag{1.1}$$
    Table 1.  Range and level of process variables [16].

    Variable                      Coded symbol    Level -1    Level 0    Level +1
    pH                            x1              5           7          9
    Temperature (°C)              x2              23          28         33
    Agitation speed (rpm)         x3              100         150        200
    Dye wastewater concentration  x4              Raw         1:1        1:2


    where $x_1$, $x_2$, $x_3$, and $x_4$ represent the coded values of the process variables (i.e., pH, temperature (°C), agitation speed (rpm), and wastewater concentration, respectively). Machine learning algorithms surpass traditional prediction methods [17], such as ordinary least squares. They are flexible enough to model the nonlinear phenomena hidden in the data, and their predictive power makes them highly versatile across applications such as detecting depression through data analytics, forecasting gross domestic product, analyzing fractal-fractional mathematical models, and controlling the spread of waterborne diseases [18,19,20,21]. Using a feedforward neural network (FFNN), we build prediction models for %DEC and %COD.
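    As a cross-check, the fitted response surfaces of Eq (1.1) can be evaluated directly in code. The following is a minimal Python sketch with the coefficients transcribed as printed; the function names are ours, not the authors'.

```python
# RSM response surfaces of Eq (1.1), coefficients transcribed from the text
# (coded variables x1..x4 as defined in Table 1).
def rsm_dec(x1, x2, x3, x4):
    """%DEC predicted by the polynomial model of Sathian et al. [16]."""
    return (58.62 + 0.09*x1 - 0.59*x2 + 3.63*x3 + 14.07*x4
            + 0.70*x1*x2 - 1.95*x1*x3 - 2.67*x1*x4
            + 2.43*x2*x3 - 0.20*x2*x4 - 1.28*x3*x4
            - 10.32*x1**2 - 2.02*x2**2 - 2.40*x3**2 - 4.30*x4**2)

def rsm_cod(x1, x2, x3, x4):
    """%COD reduction predicted by the polynomial model."""
    return (69.35 + 0.30*x1 - 0.54*x2 + 2.68*x3 + 13.95*x4
            - 0.80*x1*x2 - 1.75*x1*x3 - 3.45*x1*x4
            + 2.83*x2*x3 - 0.80*x2*x4 - 1.55*x3*x4
            - 9.92*x1**2 - 2.18*x2**2 - 1.88*x3**2 - 4.72*x4**2)

# At the design centre (all coded variables zero) the models return the intercepts.
print(rsm_dec(0, 0, 0, 0), rsm_cod(0, 0, 0, 0))  # 58.62 69.35
```

    Evaluating at the design centre recovers the intercept terms, which is a quick way to verify a transcription of such models.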

    This study presents an optimized single-layer feedforward neural network (FFNN) model that excels at predicting %DEC and %COD in textile wastewater treatment. The model achieves exceptional accuracy, with $R^2$ values as high as 0.9998 in the training phase and 0.9838 in the testing phase. The log-sigmoid (logsig) activation function stands out, delivering the lowest maximum absolute error of 4.0787 and a mean absolute error of 0.4821 for %DEC predictions, with similarly impressive results for %COD. The robustness of the FFNN model is further reinforced through 10-fold cross-validation, which consistently yields low mean absolute errors between 0.4 and 1 for %DEC and between 0.4 and 0.9 for %COD across the folds.

    Sensitivity analysis, conducted via input perturbation, highlights the relative influence of each input variable on the model's predictions, providing insight into the model's numerical stability. When juxtaposed with polynomial regression models, the FFNN demonstrates superior capabilities, evident from its higher $R^2$ values for both training (0.99941 vs. 0.9714 for %DEC; 0.99446 vs. 0.9747 for %COD) and testing (0.99363 vs. 0.8354 for %DEC; 0.99716 vs. 0.9495 for %COD). Moreover, the FFNN model shows lower maximum and mean absolute errors, underscoring its enhanced predictive precision.

    Contour plots are effectively utilized to visualize the interactive effects of variables, offering a comprehensive view of their combined impact on treatment efficacy. In summary, this study leverages meticulous performance metrics, sensitivity analyses, and rigorous validation techniques to establish the optimized FFNN architecture as a highly accurate and reliable tool for predicting key parameters in textile wastewater treatment.

    The goal of the current study is to present a feedforward neural network as a precise and dependable machine learning model to aid in treating textile dye wastewater. This is achieved by evaluating two key outputs: %DEC and %COD reduction. The neural network's superior accuracy is highlighted by comparing its $R^2$ values and mean absolute errors against those of traditional second-order polynomial regression models. Our findings illustrate that neural network models possess greater flexibility in identifying the nonlinear patterns embedded within the data, in contrast to traditional polynomial regression models, which are relatively weak in this respect. This conclusion is supported by the performance metrics we report.

    The feedforward neural network (FFNN) architecture is inspired by the human brain's complex neural structure. Like the brain's interconnected neurons, FFNN consists of processing units, or neurons, organized in layers and connected in parallel. Each connection's strength is determined by weights, and in an FFNN, the output of one layer seamlessly flows forward as the input to the next, without any feedback loops. This network structure comprises an input layer with independent variables, an output layer with dependent variables, and hidden layers that function as feature detectors. Despite the potential for multiple hidden layers, the universal approximation theory suggests that even a single-layered network with ample neurons can accurately represent any input-output mapping [22,23].

    The FFNN undergoes a training phase where it learns to predict output variables from given input-output pairs. This process involves adjusting the connection strengths, or weights and biases, between neurons across layers, transforming input signals into desired outputs. An activation function, applied to the weighted sum of inputs, aids in minimizing the prediction error. The FFNN's mathematical representation can be expressed as:

    $$Y = W^{(2)}_{1\times n}\, h\!\left(W^{(1)}_{n\times m} X_{m\times 1} + b^{(1)}_{n\times 1}\right) + b^{(2)}_{1\times 1}, \tag{2.1}$$

    where $Y$ and $X$ correspond to the output and input vectors, respectively. $W^{(r)}$ and $b^{(r)}$ $(r=1,2)$ denote weights and biases, with $m$ the number of input variables and $n$ the number of hidden-layer neurons. The function $h$ is the activation function at the hidden layer, typically the log-sigmoid, hyperbolic tangent sigmoid, or linear transfer function [24]:

    $$h(z)=\mathrm{logsig}(z)=\frac{1}{1+e^{-z}},\qquad 0\le h(z)\le 1, \tag{2.2}$$
    $$h(z)=\mathrm{tansig}(z)=\frac{2}{1+e^{-2z}}-1,\qquad -1\le h(z)\le 1, \tag{2.3}$$
    $$h(z)=\mathrm{purelin}(z)=z,\qquad h(z)\in\mathbb{R}. \tag{2.4}$$
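    These three MATLAB transfer functions have direct NumPy equivalents; a minimal sketch (function names mirror the MATLAB ones):

```python
import numpy as np

def logsig(z):
    """Log-sigmoid, Eq (2.2): output bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def tansig(z):
    """Hyperbolic tangent sigmoid, Eq (2.3): output bounded in (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0  # algebraically equal to tanh(z)

def purelin(z):
    """Linear transfer function, Eq (2.4): the identity map."""
    return z
```

    Note that tansig is just a numerically convenient rewriting of tanh, which is why it pairs well with gradient-based training.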

    Prior to training, scaling inputs and targets to a consistent range is crucial. Input values $X_i$ are scaled to $X_{scaled}$ within $[-1,1]$:

    $$X_{scaled} = -1 + \frac{2\,(X_i - X_{min})}{X_{max}-X_{min}}, \tag{2.5}$$

    where Xmin and Xmax are the minimum and maximum input values. The dataset is divided into training and testing samples, with a typical split of 70%–30% or 80%–20%. When including validation, the 30% test data can be equally split into two parts of 15% each. This setup ensures the network is trained effectively and tested for accuracy.
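    Eq (2.5) in code, applied column-wise; the example uses the pH and temperature ranges from Table 1 purely for illustration:

```python
import numpy as np

def scale_to_range(X):
    """Min-max scale each column of X to [-1, 1], as in Eq (2.5)."""
    Xmin, Xmax = X.min(axis=0), X.max(axis=0)
    return -1.0 + 2.0 * (X - Xmin) / (Xmax - Xmin)

# Example: the pH and temperature columns from Table 1.
X = np.array([[5.0, 23.0],
              [7.0, 28.0],
              [9.0, 33.0]])
Xs = scale_to_range(X)
print(Xs)  # rows map to [-1, -1], [0, 0], [1, 1]
```

    The minimum of each variable maps to -1, the maximum to +1, and the centre of the range to 0, matching the coded levels in Table 1.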

    The optimization of the single-layer feedforward neural network (FFNN) was meticulously conducted through an iterative trial-and-error methodology. The selection criteria for the optimal FFNN architecture hinged on maximizing the coefficient of determination (R2) and minimizing the mean square error (MSE), as defined in Eqs (2.6) and (2.7):

    $$R^2 = 1-\frac{\sum_{i=1}^{N}\left(y_i^{FFNN}-y_i^{exp}\right)^2}{\sum_{i=1}^{N}\left(y_i^{FFNN}-\bar{y}^{exp}\right)^2}, \tag{2.6}$$
    $$MSE=\frac{1}{N}\sum_{i=1}^{N}\left(y_i^{FFNN}-y_i^{exp}\right)^2. \tag{2.7}$$

    Here, $N$ denotes the total data count, $y_i^{exp}$ and $y_i^{FFNN}$ are the normalized experimental and FFNN-predicted data, respectively, and $\bar{y}^{exp}$ is the experimental data mean. Error minimization within the network typically employs error backpropagation, demonstrated through the update rule:

    $$x_{k+1}=x_k-\gamma_k g_k,$$

    where $x_k$ represents the current weights and biases, $g_k$ is the gradient, and $\gamma_k$ is the learning rate. Adjustments in network parameters are predicated on the discrepancies between training outputs and actual outcomes. Figure 1 illustrates a standard backpropagation process.
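    The two selection criteria and the update rule can be sketched in code. This is a minimal illustration: the $R^2$ form follows Eq (2.6) as printed, and the gradient step is demonstrated on a toy quadratic loss, not the actual network training loop.

```python
import numpy as np

def mse(y_pred, y_exp):
    """Mean squared error, Eq (2.7)."""
    y_pred, y_exp = np.asarray(y_pred), np.asarray(y_exp)
    return float(np.mean((y_pred - y_exp) ** 2))

def r2(y_pred, y_exp):
    """Coefficient of determination in the form printed in Eq (2.6)."""
    y_pred, y_exp = np.asarray(y_pred), np.asarray(y_exp)
    ss_res = np.sum((y_pred - y_exp) ** 2)
    ss_tot = np.sum((y_pred - y_exp.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# The update rule x_{k+1} = x_k - gamma_k * g_k on a toy quadratic loss;
# backpropagation applies the same rule to the network's weights and biases.
x = np.array([2.0, -3.0])
gamma = 0.1
for _ in range(200):
    g = 2.0 * x          # gradient of f(x) = ||x||^2
    x = x - gamma * g
# x has converged to (essentially) the minimiser [0, 0]
```

    Perfect predictions give $MSE=0$ and $R^2=1$, the targets the architecture search below tries to approach.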

    Figure 1.  Flow chart diagram of the backpropagation process.

    To fine-tune the FFNN, various factors were evaluated, encompassing:

    1) Training algorithms: Levenberg-Marquardt with Bayesian regularization (trainbr) and gradient descent with momentum,

    2) Activation functions: tansig, logsig, and purelin.

    The dataset was partitioned into training (70%), validation (15%), and testing (15%) subsets. In the case of the Bayesian regularization (trainbr) algorithm, the validation set was excluded, and an 80%-20% training-testing split was adopted. MATLAB® version 2023a was used for the MSE optimization on a laptop equipped with an Intel® Xeon® E-2176M CPU at 2.70 GHz and 16.0 GB of RAM, running 64-bit Windows 11 Pro. A genetic algorithm ascertained the most efficacious factor combination for FFNN performance.

    The Bayesian regularization algorithm (trainbr), particularly paired with the logsig activation function, yielded remarkable results. The optimal configuration, comprising four hidden neurons and the logsig function, minimized prediction errors, indicating that even a single hidden layer can be effective. Figure 2 depicts the optimized FFNN model structure.

    Figure 2.  The architecture of the optimized FFNN model.

    The mean squared error (MSE) is an effective objective function for optimizing the weights and biases. However, the solution reached can vary with the initial conditions, potentially converging to local optima. As depicted in Figure 3, the MSE values decline with each iteration: the training-phase MSE for the parameters under investigation diminishes consistently with the number of epochs, reaching its minimum at epoch 209 for %DEC and epoch 34 for %COD. The training and testing curves corroborate the mitigation of overfitting, decreasing synchronously over successive epochs and suggesting a parallel improvement in model performance. Optimal validation performances for %DEC and %COD were attained at $1.799\times10^{-5}$ and 0.0014, respectively, corresponding to epochs 209 and 34.

    Figure 3.  Analyzing training performance using mean squared error.

    The error histograms, represented in Figures 4 and 5, illustrate the deviation between target and predicted values post-training of the neural network, with each bin denoting a range of error magnitudes. The y-axis quantifies the count of samples within each error range. The maximum observed error for %DEC was 4.0787, with an average error of 0.4821, an average percentage error of 1.0802%, and a maximum percentage error of 11.2448%. In the case of %COD reduction, the respective errors were a maximum of 2.4486, an average of 0.7256, an average percentage of 1.1460%, and a maximum percentage error of 3.4946%. The comparative analysis of the anticipated versus experimental data is graphically presented in Figure 6. These figures elucidate the close congruence between the FFNN predictions and the experimental results, visually underscoring the model's predictive accuracy.

    Figure 4.  Absolute error histogram for performance evaluation.
    Figure 5.  Absolute percentage error histogram for error distribution.
    Figure 6.  Graphical representation of experimental and predicted data.

    To examine the impact of different activation functions on the performance of the feedforward neural network, three functions were selected: logsig, tansig, and purelin. The logsig function, with its sigmoidal response, is ideal for binary outputs, whereas tansig produces outputs from -1 to 1, beneficial for gradient-based learning methods. The purelin function, being linear, is typically used for continuous outcomes. Their shapes are graphically represented in Figure 7.

    Figure 7.  Comparative plots of neural network activation functions: logistic sigmoid (logsig), hyperbolic tangent (tansig), and linear transfer (purelin).

    The performance of the FFNN with the different activation functions logsig, tansig, and purelin was evaluated based on maximum error, average error, and coefficient of determination ($R^2$) for %DEC and %COD reduction. The comparative results are detailed in Table 2. The logsig function exhibited commendable performance, with the lowest maximum and average errors for %DEC and %COD alongside high $R^2$ values, indicating precise predictions and strong model fitness. Notably, the tansig function also performed well, especially in the %COD category, with the highest $R^2$ values across the test, training, and complete datasets, though its errors were higher than logsig's. Conversely, the purelin function produced significantly higher errors and lower $R^2$ values, suggesting less accuracy and model adequacy.

    Table 2.  Performance metrics of FFNN activation functions.

    Activation function    Max error (%DEC)    Avg error (%DEC)    Max error (%COD)    Avg error (%COD)
    logsig                 4.079               0.482               2.449               0.726
    tansig                 4.488               0.760               3.592               0.781
    purelin                10.882              4.956               10.484              5.192


    Upon comparison, the logsig activation function emerges as the most effective, combining lower error rates with high $R^2$ values, thus ensuring both accuracy and reliability in the predictive modeling of wastewater treatment parameters. The bounded nature of the logsig function makes it well suited to classification tasks because it normalizes its output between 0 and 1. The tansig function, also bounded, can produce negative values, which is beneficial in regression scenarios similar to the one in this study. Although logsig and tansig perform comparably, purelin (being unbounded, with a constant gradient) tends to underperform for this particular application, as it operates as an identity function. Simulations indicate a preference for logsig based on its performance metrics; however, tansig may also be a viable option depending on the use case.

    To evaluate the model's performance, we employed k-fold cross-validation on a dataset comprising 30 observations, which were divided into ten approximately equal-sized groups. During the cross-validation process, the neural network was trained on nine groups (constituting 90% of the data), while the remaining group (10%) was used for testing the model's accuracy. This division and testing sequence was repeated ten times with unique group combinations. The maximum absolute error and mean absolute error for each iteration were recorded, as illustrated in Figures 8 and 9. This method is crucial as it provides insights into the artificial neural network's (ANN) overall predictive accuracy.
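    The 10-fold protocol described above can be sketched as follows. The paper does not specify its shuffling procedure, so the random permutation and seed here are illustrative assumptions; only the 27/3 train/test split per fold is taken from the text.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal folds (a sketch of the
    10-fold protocol; the permutation and seed are assumptions)."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

folds = kfold_indices(30, 10)
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # train the FFNN on train_idx (27 samples), evaluate on test_idx (3 samples)
    assert len(train_idx) == 27 and len(test_idx) == 3
```

    Each of the 30 observations appears in exactly one test fold, so the per-fold errors in Figures 8 and 9 jointly cover the whole dataset.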

    Figure 8.  Error plot using max absolute errors k-fold models.
    Figure 9.  Error plot using mean absolute errors k-fold models.

    A neural network's effectiveness hinges on its internal parameters: weights and biases. These parameters are refined during training via an optimization algorithm like backpropagation, which adjusts them based on prediction errors compared to actual outcomes. Properly calibrated weights and biases are crucial for the neural network's task performance and accuracy on new data. The trained model's weights and biases for %DEC, shown below (Eqs (2.8) and (2.9)), represent these scaled parameters

    $$W^{(1)}=\begin{bmatrix} 2.3898 & 0.1492 & 0.2641 & 0.9848\\ 0.4323 & 0.4641 & 0.0365 & 1.6058\\ 1.9563 & 0.9855 & 0.7559 & 0.0556\\ 0.5595 & 1.3666 & 0.9700 & 0.2274 \end{bmatrix},\qquad b^{(1)}=\begin{bmatrix} 1.4689\\ 1.4982\\ 0.8996\\ 0.5276 \end{bmatrix}, \tag{2.8}$$
    $$W^{(2)}=\begin{bmatrix} 2.3345 & 1.2586 & 1.6142 & 1.6828 \end{bmatrix},\qquad b^{(2)}=\begin{bmatrix} 0.6462 \end{bmatrix}. \tag{2.9}$$

    Similarly, the weights and biases of the trained model for %COD reduction are given below (Eqs (2.10) and (2.11)).

    $$W^{(1)}=\begin{bmatrix} 0.9953 & 0.2235 & 0.0654 & 0.6878\\ 2.2070 & 0.7526 & 0.6212 & 0.2701\\ 1.8252 & 0.2532 & 0.1390 & 1.6896\\ 0.5798 & 1.3836 & 1.0326 & 0.8945 \end{bmatrix},\qquad b^{(1)}=\begin{bmatrix} 0.6459\\ 1.1744\\ 1.5634\\ 0.7813 \end{bmatrix}, \tag{2.10}$$
    $$W^{(2)}=\begin{bmatrix} 1.0946 & 2.0369 & 2.4652 & 1.5414 \end{bmatrix},\qquad b^{(2)}=\begin{bmatrix} 0.1190 \end{bmatrix}. \tag{2.11}$$
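    Eq (2.1) can be evaluated directly with the %DEC parameters of Eqs (2.8) and (2.9) as a sanity check. This is a sketch: the coefficients are transcribed as printed (any negative signs lost in the plain-text rendering are not restored), inputs and outputs are in scaled units, and the logsig hidden layer follows the optimized configuration reported above.

```python
import numpy as np

def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))

def ffnn(x, W1, b1, W2, b2):
    """Single-hidden-layer forward pass, Eq (2.1), with logsig hidden units."""
    return float(W2 @ logsig(W1 @ x + b1) + b2)

# %DEC parameters as printed in Eqs (2.8)-(2.9); scaled units, magnitudes only.
W1 = np.array([[2.3898, 0.1492, 0.2641, 0.9848],
               [0.4323, 0.4641, 0.0365, 1.6058],
               [1.9563, 0.9855, 0.7559, 0.0556],
               [0.5595, 1.3666, 0.9700, 0.2274]])
b1 = np.array([1.4689, 1.4982, 0.8996, 0.5276])
W2 = np.array([2.3345, 1.2586, 1.6142, 1.6828])
b2 = 0.6462

y = ffnn(np.zeros(4), W1, b1, W2, b2)
# Since logsig is bounded in (0, 1), the output always lies in b2 +/- sum(|W2|).
```

    The bound on the output follows directly from the boundedness of logsig, which is a useful structural check regardless of the exact parameter signs.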

    The $R^2$-value between the outputs and targets of the designed network is shown in Figure 10 for the %DEC case (for %COD reduction see Figure 11). The dashed line depicts the ideal result, while the solid line shows the best fit. With $R^2$-values ranging from 0.9982 to 0.9998 for the training phase and 0.9456 to 0.9838 for the test phase, a good match between output and target data was found for all examined parameters in both stages. Overall, the $R^2$-value varied from 0.9970 to 0.9990 for all parameters tested, demonstrating the model's accuracy and reliability.

    Figure 10.  Regression models for %DEC.
    Figure 11.  Regression models for %COD reduction.

    A single-layer perceptron, a basic form of an FFNN, processes inputs by multiplying them with weights, adding biases, and applying an activation function to produce the hidden layer output. This output is further transformed to produce the final output as shown in Eq (2.1). The second-order approximation of Eq (2.1) using Taylor's series expansion is given by:

    $$\begin{aligned}
    \hat{Y}_{\%DEC}&=58.62-1.96x_1-1.35x_2+4.20x_3+13.98x_4+2.65x_1x_2-5.69x_1x_3-8.56x_1x_4\\
    &\quad+4.48x_2x_3-1.59x_2x_4-1.53x_3x_4-17.98x_1^2-3.44x_2^2-1.85x_3^2-5.12x_4^2,\\
    \hat{Y}_{\%COD}&=69.76-2.08x_1-0.70x_2+3.30x_3+15.09x_4+1.82x_1x_2-4.69x_1x_3-8.79x_1x_4\\
    &\quad+4.73x_2x_3+0.49x_2x_4-2.43x_3x_4-16.79x_1^2-3.35x_2^2-1.94x_3^2-7.41x_4^2.
    \end{aligned}$$

    Here, $\hat{Y}_{\%DEC}$ and $\hat{Y}_{\%COD}$ predict %DEC and %COD reduction, respectively, with $x_1, x_2, x_3, x_4$ representing pH, temperature, agitation speed, and dye wastewater concentration.
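    A minimal Python sketch of these second-order approximations, with coefficients transcribed from the equations above (signs inferred from the surviving text); at the design centre both reduce to their intercepts.

```python
# Second-order (Taylor) approximations of the trained FFNNs,
# coefficients as reported above (coded variables).
def taylor_dec(x1, x2, x3, x4):
    return (58.62 - 1.96*x1 - 1.35*x2 + 4.20*x3 + 13.98*x4
            + 2.65*x2*x1 - 5.69*x3*x1 - 8.56*x4*x1
            + 4.48*x2*x3 - 1.59*x2*x4 - 1.53*x3*x4
            - 17.98*x1**2 - 3.44*x2**2 - 1.85*x3**2 - 5.12*x4**2)

def taylor_cod(x1, x2, x3, x4):
    return (69.76 - 2.08*x1 - 0.70*x2 + 3.30*x3 + 15.09*x4
            + 1.82*x2*x1 - 4.69*x3*x1 - 8.79*x4*x1
            + 4.73*x2*x3 + 0.49*x2*x4 - 2.43*x3*x4
            - 16.79*x1**2 - 3.35*x2**2 - 1.94*x3**2 - 7.41*x4**2)

# At the design centre both reduce to their intercepts.
print(taylor_dec(0, 0, 0, 0), taylor_cod(0, 0, 0, 0))  # 58.62 69.76
```

    Comparing these intercepts with those of Eq (1.1) (58.62 and 69.35) shows how closely the FFNN's quadratic approximation tracks the original RSM fit near the centre of the design.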

    The model's sensitivity to the predictors is analyzed through partial derivatives at the optimal conditions ($x_1=-0.2$, $x_2=0.16$, $x_3=0.66$, $x_4=1$, the coded values of the RSM optimum):

    $$\frac{\partial Y_{\%DEC}}{\partial x_1}=0.3630,\quad \frac{\partial Y_{\%DEC}}{\partial x_2}=0.1669,\quad \frac{\partial Y_{\%DEC}}{\partial x_3}=0.1462,\quad \frac{\partial Y_{\%DEC}}{\partial x_4}=5.0106,$$
    $$\frac{\partial Y_{\%COD}}{\partial x_1}=0.4650,\quad \frac{\partial Y_{\%COD}}{\partial x_2}=0.3646,\quad \frac{\partial Y_{\%COD}}{\partial x_3}=0.6051,\quad \frac{\partial Y_{\%COD}}{\partial x_4}=3.9230.$$

    These derivatives indicate how small changes in each predictor affect %DEC and %COD. A similar analysis is performed for the FFNN model at its optimal conditions ($x_1=-0.3309$, $x_2=0.2588$, $x_3=0.4337$, $x_4=1$):

    $$\frac{\partial Y^{FFNN}_{\%DEC}}{\partial x_1}=0.5119,\quad \frac{\partial Y^{FFNN}_{\%DEC}}{\partial x_2}=2.7096,\quad \frac{\partial Y^{FFNN}_{\%DEC}}{\partial x_3}=3.8914,\quad \frac{\partial Y^{FFNN}_{\%DEC}}{\partial x_4}=7.8107,$$
    $$\frac{\partial Y^{FFNN}_{\%COD}}{\partial x_1}=0.3353,\quad \frac{\partial Y^{FFNN}_{\%COD}}{\partial x_2}=0.2229,\quad \frac{\partial Y^{FFNN}_{\%COD}}{\partial x_3}=1.5798,\quad \frac{\partial Y^{FFNN}_{\%COD}}{\partial x_4}=4.5630.$$

    The partial derivatives of the second-order approximation of the FFNN model are:

    $$\frac{\partial \hat{Y}_{\%DEC}}{\partial x_1}=0.4073,\quad \frac{\partial \hat{Y}_{\%DEC}}{\partial x_2}=3.6537,\quad \frac{\partial \hat{Y}_{\%DEC}}{\partial x_3}=4.1129,\quad \frac{\partial \hat{Y}_{\%DEC}}{\partial x_4}=5.5041,$$
    $$\frac{\partial \hat{Y}_{\%COD}}{\partial x_1}=1.3121,\quad \frac{\partial \hat{Y}_{\%COD}}{\partial x_2}=0.5003,\quad \frac{\partial \hat{Y}_{\%COD}}{\partial x_3}=1.9685,\quad \frac{\partial \hat{Y}_{\%COD}}{\partial x_4}=2.2511.$$

    These derivatives give the rate of change in %DEC and %COD reduction with respect to the predictors $(x_1, x_2, x_3, x_4)$.
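    Derivatives like those quoted above can be reproduced for any fitted model with a central-difference approximation. The helper below is a generic sketch (not the authors' code), verified on a toy function with a known gradient.

```python
import numpy as np

def central_diff_grad(f, x, h=1e-5):
    """Estimate the partial derivatives of f at x by central differences,
    a generic way to compute model sensitivities like those quoted above."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Check on a function with a known gradient: f(x) = x1^2 + 3*x2.
g = central_diff_grad(lambda x: x[0]**2 + 3.0*x[1], [1.0, 5.0])
print(g)  # approximately [2.0, 3.0]
```

    Central differences have $O(h^2)$ truncation error, which is more than sufficient to reproduce the four-decimal values reported for the polynomial and FFNN models.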

    Contour plots offer an effective method to study the interactive effects of parameters on %DEC and %COD reduction. In this study, artificial neural network models were employed to generate the contour plots, enabling a comprehensive examination of the interaction impact between two parameters on dye wastewater's %DEC and %COD reduction. These plots represent two elements simultaneously while keeping the other components constant, making them highly valuable for understanding the direct and indirect impacts of the two factors.

    Figures 12–23 illustrate the contours for the %DEC and %COD reduction of dye wastewater. The shape of these contours vividly illustrates how the variables interact, providing valuable insight into their combined influence on the treatment process. An elliptical contour indicates a significant interaction between the two plotted variables, whereas a circular contour indicates a negligible interaction. As the figures show, every pair of variables exhibits a noticeable interaction, and the region enclosed by the smallest ellipse in a contour diagram corresponds to the highest expected yield.

    Figure 12.  Relationship between pH, temperature, and %DEC with fixed agitation speed and wastewater concentration.
    Figure 13.  Relationship between pH, agitation speed, and %DEC with fixed temperature and wastewater concentration.
    Figure 14.  Relationship between pH, wastewater concentration, and %DEC with fixed temperature and agitation speed.
    Figure 15.  Relationship between temperature, agitation, and %DEC with fixed pH and wastewater concentration.
    Figure 16.  Relationship between temperature, wastewater concentration, and %DEC with fixed pH and agitation speed.
    Figure 17.  Relationship between agitation speed, wastewater concentration, and %DEC with fixed temperature and pH.
    Figure 18.  Relationship between pH, temperature, and %COD reduction with fixed agitation speed and wastewater concentration.
    Figure 19.  Relationship between pH, agitation speed, and %COD reduction with fixed temperature and wastewater concentration.
    Figure 20.  Relationship between pH, wastewater concentration, and %COD reduction with fixed agitation speed and temperature.
    Figure 21.  Relationship between temperature, agitation speed, and %COD reduction with fixed pH and wastewater concentration.
    Figure 22.  Relationship between temperature, wastewater concentration and %COD reduction with fixed agitation speed and pH.
    Figure 23.  Relationship between agitation speed, wastewater concentration, and %COD reduction with fixed pH and temperature.

    Figure 12 shows the interaction between pH and temperature in textile dye decolorization. pH significantly influences how microorganisms treat dyeing effluents, which is vital for %DEC. The contour plot in Figure 12 visually illustrates the combined impact of pH and temperature on %DEC, offering insights for optimizing treatment conditions. According to Figure 12, %DEC efficiency first increases with increasing pH and then declines; Figures 13 and 14 show a similar trend. pH significantly impacts the effectiveness of decolorization, and for most dyes the ideal pH range for color removal is between 6.0 and 7.0.

    Figure 12 also shows that %DEC increases with temperature up to 29.2 °C and then decreases. At higher temperatures, decolorizing activity was dramatically reduced, which could be caused by a decrease in cell viability or by the deactivation of the enzymes responsible for decolorization. According to Figure 13, %DEC rises with increasing agitation speed, reaching a maximum at 171.68 rpm, after which the efficacy of dye removal declines; this is also seen in Figures 15 and 17. A drop in dye concentration accelerates decolorization, as Figures 14, 16 and 17 demonstrate in detail. The findings indicate that the percentage of dye elimination increases as dye concentration decreases. This is attributed to the higher presence of chemicals and contaminants in dye wastewater at higher concentrations, which hinders the growth and activity of the microorganisms responsible for dye degradation.

    Similar trends were observed for the influence of the process factors on %COD reduction in textile dye wastewater, as depicted in Figures 18–23. The optimal conditions of pH 6.3, temperature 29.29 °C, agitation speed 171.68 rpm, and dye wastewater concentration 1:2 resulted in the highest %DEC and %COD reduction. These optimal conditions were further validated through experiments, confirming the effectiveness of the ANN predictions. Under these optimal settings, the maximum color removal and %COD reduction were determined to be 70.69% and 80.50%, respectively.

    Sensitivity analysis was conducted on the neural network models to evaluate the robustness of %COD and %DEC predictions. This entailed adjusting the ANN model parameters slightly to observe the response variation. Stability was inferred from the boundedness of the resulting errors. In our study, a perturbation of 0.01 was introduced to the FFNN model's weights and biases. The perturbed model's outcomes were then measured against the original FFNN model's performance, with the deviations captured in Figure 24. The observed maximum absolute errors in Figure 24 were relatively low, with 2.3852 for %DEC and 1.1949 for %COD, demonstrating the model's stability against small internal variations.

    Figure 24.  Error plots using the input perturbation method for the sensitivity analysis of the predicting FFNN.
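    The perturbation procedure described above can be sketched in a few lines. The weights below are random stand-ins (the trained FFNN weights are not published), the hidden-layer width is an assumption, and `logsig` matches the activation used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(z):
    # log-sigmoid activation, the one used by the study's FFNN
    return 1.0 / (1.0 + np.exp(-z))

def ffnn(X, W1, b1, W2, b2):
    # single hidden layer with logsig units, linear output
    return logsig(X @ W1 + b1) @ W2 + b2

# Stand-in weights for a 4-input, 10-hidden, 1-output network
W1, b1 = rng.normal(size=(4, 10)), rng.normal(size=10)
W2, b2 = rng.normal(size=(10, 1)), rng.normal(size=1)
X = rng.uniform(size=(50, 4))   # 50 hypothetical input rows

base = ffnn(X, W1, b1, W2, b2)

# Perturb every weight and bias by 0.01, as in the study, and
# record the worst-case deviation of the predictions
eps = 0.01
perturbed = ffnn(X, W1 + eps, b1 + eps, W2 + eps, b2 + eps)
max_abs_error = np.max(np.abs(perturbed - base))
print(f"max |error| under perturbation: {max_abs_error:.4f}")
```

    A bounded, small `max_abs_error` under such perturbations is what the study takes as evidence of model stability.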

    The authors of [16] used second-order polynomial equations from Response Surface Methodology (RSM) on experimental data to determine optimal conditions for %DEC and %COD removal. Optimization in MATLAB® yielded pH 6.6, temperature 28.8 °C, agitation speed 183 rpm, and dye wastewater concentration 1:2, with experimental validation achieving 71.2% %DEC and 80.5% %COD removal. In this work, we used the fminimax constrained optimization function from the MATLAB® Optimization Toolbox to invoke the ANN models for the percentage of dye removal (%DEC) and chemical oxygen demand reduction (%COD). The need to maximize %DEC and %COD simultaneously dictated the minimax optimization strategy. The ANN-predicted optimal conditions were a pH of 6.3, a temperature of 29.2 °C, an agitation speed of 171 rpm, and a dye concentration ratio of 1:2. These conditions were experimentally validated, achieving a %DEC of 71.69% and a %COD removal of 80.50%. Under these optimized conditions, the ANN's second-order approximation yielded a %DEC of 69.91% and a %COD of 79.98%.
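    The minimax idea, maximizing the worse of the two responses, can be illustrated as follows. The quadratic surrogates below are invented stand-ins for the trained ANN models (coefficients chosen arbitrarily, with peaks placed at the reported optimum), and a brute-force grid search stands in for MATLAB's fminimax:

```python
import numpy as np

# Hypothetical quadratic surrogates standing in for the trained ANN models;
# the coefficients are invented, with maxima placed at the reported optimum
def dec(pH, T, rpm):
    return 71.7 - 0.8*(pH - 6.3)**2 - 0.05*(T - 29.2)**2 - 1e-3*(rpm - 171.0)**2

def cod(pH, T, rpm):
    return 80.5 - 0.6*(pH - 6.3)**2 - 0.04*(T - 29.2)**2 - 1e-3*(rpm - 171.0)**2

# Minimax: maximize the worse of the two responses over a grid of settings
P, T, R = np.meshgrid(np.linspace(5.0, 8.0, 61),       # pH, step 0.05
                      np.linspace(25.0, 35.0, 101),    # temperature, step 0.1 °C
                      np.linspace(100.0, 250.0, 151),  # agitation, step 1 rpm
                      indexing="ij")
worst = np.minimum(dec(P, T, R), cod(P, T, R))
i = np.unravel_index(np.argmax(worst), worst.shape)
opt = (P[i], T[i], R[i])
print(f"minimax optimum: pH={opt[0]:.2f}, T={opt[1]:.1f} °C, {opt[2]:.0f} rpm")
```

    Maximizing the pointwise minimum of the two surfaces guarantees that neither response is sacrificed for the other, which is the rationale for the minimax strategy here.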

    We analyzed multiple performance indicators to assess the efficacy of the second-degree polynomial regression model (1.1) and the FFNN model for predicting %DEC and %COD reduction. For %DEC, the FFNN model outperforms the second-degree polynomial regression on both the training (R2=0.99941 vs. R2=0.9714) and testing (R2=0.99363 vs. R2=0.8354) datasets, signifying a larger proportion of variance explained by the FFNN model. Moreover, the FFNN model attains a lower maximum absolute error (4.0787 vs. 4.64) and mean absolute error (0.4821 vs. 1.44), indicating enhanced predictive precision for %DEC.

    For %COD reduction, the trend continues: the FFNN model surpasses the polynomial regression with higher training (R2=0.99446 vs. R2=0.9747) and testing (R2=0.99716 vs. R2=0.9495) R2 values. Error metrics further corroborate the FFNN's superiority, with a lower maximum absolute error (2.4486 vs. 3.79) and mean absolute error (0.7256 vs. 1.36).

    The FFNN model outstrips the second-degree polynomial regression model in both the explanation and prediction of %DEC and %COD reduction. The FFNN model not only elucidates a more significant fraction of the data's variability but also forecasts with higher accuracy, as reflected by the diminished error magnitudes.
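    For reference, the three indicators used throughout this comparison (R2, maximum absolute error, mean absolute error) can be computed as follows; the toy target and prediction vectors are invented for illustration only:

```python
import numpy as np

def r_squared(y, yhat):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def max_abs_error(y, yhat):
    return np.max(np.abs(y - yhat))

def mean_abs_error(y, yhat):
    return np.mean(np.abs(y - yhat))

# Toy %DEC targets and predictions (invented, for illustration only)
y    = np.array([60.0, 65.0, 70.0, 71.7, 68.0])
yhat = np.array([60.5, 64.8, 69.9, 71.2, 68.3])
print(r_squared(y, yhat), max_abs_error(y, yhat), mean_abs_error(y, yhat))
```

    R2 measures the fraction of variance explained, while the two error metrics capture worst-case and average deviation, which is why both kinds of indicator are reported.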

    The comprehensive analysis of the single-layer Feed-Forward Neural Network (FFNN) model in this study underscores its efficacy in treating textile dye wastewater, particularly in predicting decolorization (%DEC) and chemical oxygen demand (%COD) reduction. The model's structure, with an input layer of four neurons and a single output neuron, adeptly captures the complex relationships within the data. The training and testing phases, guided by the mean squared error (MSE) objective function, demonstrated impressive accuracy, as reflected in the high coefficient of determination (R2) values. The study's critical findings are summarized as follows:

    High model accuracy: The FFNN model exhibits superior performance with R2 values ranging from 0.9982 to 0.9998 during training and 0.9456 to 0.9838 in testing. These high R2 values indicate a strong correlation between the model's predictions and the target data, thereby validating its accuracy and reliability.

    Effective activation functions: Among various activation functions, logsig stands out, demonstrating the lowest maximum and average errors for %DEC and %COD, with values of 4.079 and 0.482 for %DEC, and 2.449 and 0.726 for %COD, respectively. This finding establishes logsig as the most effective activation function for the FFNN model.

    Optimal treatment conditions: The optimal conditions predicted by the FFNN model for %DEC and %COD reduction were experimentally validated, resulting in a significant %DEC of 71.69% and %COD reduction of 80.50%. These results highlight the model's practical utility in real-world applications.

    Comparative analysis with polynomial regression: The FFNN model outperforms the second-degree polynomial regression model in both explanation and prediction of %DEC and %COD reduction. The FFNN model achieves a lower maximum absolute error and mean absolute error compared to the polynomial regression, thereby demonstrating enhanced predictive precision.

    The study effectively demonstrates the potential of single-layer FFNN models in environmental applications, particularly in the context of wastewater treatment in the textile industry.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work, under the Distinguished Research Program grant Code (NU/DRP/SERC/12/19).

    The authors declare no conflict of interest.



    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)