Research article

Effective thermal properties of a magnetite-polyester composite conformed in the presence of a constant magnetic field

  • In this study, we report the thermophysical properties (at room temperature) and the morphology of a composite with a polyester resin matrix and magnetite filler powders (Fe3O4), conformed in three configurations: randomly dispersed particles, particles oriented parallel to a constant 300 mT magnetic field, and particles oriented perpendicular to a constant 300 mT magnetic field. Samples were formed by hand lay-up with filler weight percentages of 10, 20, and 30%, the resin being the majority phase. The thermophysical properties were determined using the KD2 Pro® system, which relies on the physical principle of linear transient heat flow, using the dual sensor SH-1. The morphology and microanalysis were studied with a scanning electron microscope (SEM, FEI Quanta 650 FEG). The morphology showed that the magnetite particles align with the magnetic field lines during resin curing. It was also observed that the experimentally determined thermophysical properties lie within the upper and lower Hashin-Shtrikman bounds and that their values increase with magnetite concentration. No significant difference in the thermal properties was observed as a result of the applied magnetic field.

    Citation: Luis Ángel Lara González, Gabriel Peña-Rodríguez, Yaneth Pineda Triana. Effective thermal properties of a magnetite-polyester composite conformed in the presence of a constant magnetic field[J]. AIMS Materials Science, 2019, 6(4): 549-558. doi: 10.3934/matersci.2019.4.549



    In recent years, artificial intelligence techniques have advanced rapidly [1,2,3]. Biometric recognition [4,5,6], which includes face recognition, voice recognition, fingerprint recognition, iris recognition, eye-pattern recognition, etc., will occupy a very important position in the field of artificial intelligence in the future. At present, identification mostly relies on technologies such as smart cards based on radio frequency identification, second-generation ID cards, and user passwords, but biometrics will gradually take an important share of this market.

    Due to its superiority, biometrics is widely used in bank payment, securities, transportation, e-commerce, access control at airport and subway entrances, attendance, and criminal investigation by public security and judicial departments [7,8,9]. Major enterprises, institutions, companies, and government agencies have established their own biometric-based access control and attendance systems to improve the informatization and intelligence of management, greatly improving management efficiency and effectively freeing up labor [10,11,12,13].

    Face recognition technology started relatively late, but related techniques [37] have developed rapidly, achieved remarkable recognition accuracy, and attracted worldwide attention. However, the lack of prior knowledge about face images, large illumination changes, complex backgrounds, variable face angles, large expression changes, and face occlusion still lead to low face recognition accuracy.

    Deep learning applications often use convolutional neural networks to achieve image processing and recognition with high efficiency and accuracy [14,15,16,17]. Facial images are highly structured, so combining prior facial knowledge with such networks is a very popular approach in face recognition.

    Dong et al. proposed an image super-resolution method based on a deep convolutional neural network [18], which learns the mapping from the low-resolution to the high-resolution end of the image and extends traditional super-resolution methods based on coding coefficients. Since then, research combining neural networks with image super-resolution has continued to deepen. Kim et al. [19] built a very deep convolutional neural network on the basis of the VGG network, improving super-resolution accuracy by stacking many filters and exploiting image context information.

    Yu et al. proposed a transforming and discriminating neural network [20] to address severe multi-pose and degradation problems, including image misalignment. As network depth increases, features gradually vanish during transmission. In response, the multi-scale residual network image super-resolution (MSRN) algorithm [21] combines local multi-scale features with global features to make full use of the features of low-resolution images, solving the problem of feature disappearance during transmission. Various existing algorithms and deep learning models [38,39,40] can facilitate not only face recognition but also human activity recognition and motion prediction.

    In this paper, we propose a face detection and recognition model based on the multi-task cascaded convolutional neural network (MTCNN) [22,23,24]. Combined with deep learning, the recognition model achieves a high accuracy rate with a shorter recognition time, which reduces the waste of human resources.

    MTCNN performs face area detection and face key-point detection together, and its overall framework resembles a cascade. It can be divided into a three-layer network structure: Proposal Network (P-Net), Refine Network (R-Net), and Output Network (O-Net) [25,26]. It is a multi-task neural network model for face detection that mainly uses three cascaded networks together with the idea of candidate boxes plus classifiers.

    The three cascaded networks are P-Net, which quickly generates candidate windows; R-Net, which filters and selects high-precision candidate windows; and O-Net, which generates the final bounding boxes and face key points. Like many convolutional neural network models for image problems, MTCNN also uses image pyramids, bounding-box regression, non-maximum suppression, and other techniques. The network structures of P-Net, R-Net and O-Net are shown in Figure 1.

    Figure 1.  Network structure of P-Net, R-Net and O-Net (a) P-Net; (b) R-Net; (c) O-Net.

    To balance computational expense and performance, MTCNN avoids the huge performance cost of traditional ideas such as sliding windows plus classifiers. We first use a small model to generate candidate boxes for areas likely to contain the target, then use a more complex model for fine classification and regress higher-precision area boxes, executing this step recursively. The network structure of MTCNN is presented in Figure 2.

    Figure 2.  Our network pertaining to MTCNN.
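    The coarse-to-fine cascade behind Figures 1 and 2 can be sketched structurally. This is a toy illustration, not the paper's implementation: the three scoring functions below are hypothetical stand-ins for the trained P-Net, R-Net and O-Net, each keeping only candidates above its threshold.

```python
# Hypothetical stand-ins for the three trained networks; a real MTCNN
# would run learned CNNs over image crops here.
def pnet_keep(box):   # fast, permissive proposal stage
    return box["quality"] > 0.3

def rnet_keep(box):   # stricter refinement stage
    return box["quality"] > 0.6

def onet_keep(box):   # final stage: bounding boxes + key points
    return box["quality"] > 0.8

def mtcnn_cascade(candidate_boxes):
    # Each stage filters the survivors of the previous one (coarse -> fine).
    stage1 = [b for b in candidate_boxes if pnet_keep(b)]
    stage2 = [b for b in stage1 if rnet_keep(b)]
    stage3 = [b for b in stage2 if onet_keep(b)]
    return stage3

boxes = [{"id": i, "quality": q} for i, q in enumerate([0.2, 0.5, 0.7, 0.9])]
print([b["id"] for b in mtcnn_cascade(boxes)])  # [3]
```

    The point of the cascade is that the cheap first stage discards most candidates, so the expensive later stages only run on a handful of boxes.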

    R-CNN [27] draws on the idea of the sliding window and adopts a region-recognition scheme. The identification proceeds in three steps: first, given an input image, extract about 2000 independent candidate regions from it; second, use a CNN to extract a fixed-length feature vector for each region; third, classify each region with an SVM. Figure 3 shows the process of R-CNN in the recognition of faces.

    Figure 3.  The process of R-CNN in the recognition of faces.

    Faster R-CNN [28,29] creatively uses the convolutional network itself to generate proposal boxes and shares this convolutional network with the target detection network, reducing the number of proposal boxes from about 2000 to 300. The framework can even be combined with nonlinear anisotropic diffusion filtering and other morphological methods for image denoising, removing noise and other unrelated background areas to ensure excellent face recognition. The process of Faster R-CNN in recognizing faces is shown in Figure 4.

    Figure 4.  The process of Faster R-CNN in the recognition of faces.

    Commonly used kernel functions [30] include linear kernel functions [31,32], Gaussian kernel functions [33,34], and similar algorithms. The Gaussian kernel function, which can map feature data to an infinite-dimensional space, is among the most commonly used.

    The linear kernel function is mainly used in the linearly separable case, where it achieves a good classification effect. Its mathematical expression is shown in Eq (1). The function has few parameters, so computation is very fast, and the dimension of its input space equals that of the feature space, making it suitable for a first attempt at a classification task.

    K(X,Y) = XᵀY + C (1)

    In Eq (1), X and Y represent feature vectors, and C represents a constant.

    The Gaussian kernel function can map samples from the input space to a higher-dimensional feature space, and it achieves good results regardless of sample size. When it is unclear which kernel function a classification task calls for, the Gaussian kernel is the most widely used choice. Its mathematical expression is shown in Eq (2).

    K(X,Y) = exp(−‖X − Y‖² / (2σ²)) (2)

    In particular, σ in Eq (2) is the width parameter of the function, which controls the radial range of the function.
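    Eqs (1) and (2) can be implemented directly; a minimal NumPy sketch with illustrative vectors (not data from the paper):

```python
import numpy as np

def linear_kernel(x, y, c=0.0):
    # Eq (1): K(X, Y) = X^T Y + C
    return float(np.dot(x, y) + c)

def gaussian_kernel(x, y, sigma=1.0):
    # Eq (2): K(X, Y) = exp(-||X - Y||^2 / (2 * sigma^2))
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma**2)))

x = np.array([1.0, 2.0])
y = np.array([1.0, 2.0])
print(linear_kernel(x, y, c=1.0))  # 6.0
print(gaussian_kernel(x, y))       # identical vectors -> 1.0
```

    Note how σ controls the radial range: a larger σ makes the Gaussian kernel decay more slowly with the distance ‖X − Y‖.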

    We collected 2500 face images for the face recognition research in this paper. During testing, the data set is divided into an original image data set and a test data set according to the parity of image position. The two images at corresponding positions of the two data sets are the comparison objects, as shown in Figure 5. We then apply the algorithm of this paper to these two data sets and extract the images in turn. Table 1 shows the gender ratio and image size of the two data sets.

    Figure 5.  Sample images of the original image data set and the test data set.
    Table 1.  Gender ratio and image size of the two data sets.

    Characteristic     Original image data set    Test data set
    Male : Female      10 : 15                    13 : 12
    Image size         1897 × 1897                1897 × 1897


    Examining an algorithm's performance on the reconstructed high-resolution face image requires a measurement standard. The earliest standard was subjective evaluation by naked-eye observation, which is simple and direct. Objective evaluation instead calculates the similarity between the synthesized image and the original image, yielding a specific value that measures the reconstruction result; compared with subjective evaluation, its advantage is that the comparison is more concise and accurate. At present, the most commonly used objective evaluation methods are Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) [35,36].

    PSNR is an image quality evaluation method based on the error between corresponding pixels. It is the most common image quality evaluation index, and its expression is given in Eq (3).

    PSNR = 10 · log₁₀((2ᵇ − 1)² / MSE) (3)

    In Eq (3), b is the number of bits per pixel, usually 8. The unit of PSNR is the decibel (dB); a larger value means higher quality of the reconstructed image. The Mean Square Error (MSE) is the mean squared error between the images, where h represents the height of the image and w its width. Its expression is shown in Eq (4).

    MSE = (1 / (h × w)) Σ_{i=1}^{h} Σ_{j=1}^{w} [X₁(i,j) − X₂(i,j)]² (4)
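    Eqs (3) and (4) translate into a short NumPy sketch; the example images and values below are illustrative, not results from the paper:

```python
import numpy as np

def mse(x1, x2):
    # Eq (4): mean of squared pixel differences over an h x w image
    x1 = np.asarray(x1, dtype=np.float64)
    x2 = np.asarray(x2, dtype=np.float64)
    return float(np.mean((x1 - x2) ** 2))

def psnr(x1, x2, bits=8):
    # Eq (3): PSNR = 10 * log10((2^b - 1)^2 / MSE), in dB
    err = mse(x1, x2)
    if err == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10((2**bits - 1) ** 2 / err))

a = np.zeros((4, 4))
b = np.full((4, 4), 5.0)          # uniform error of 5 gray levels
print(round(psnr(a, b), 2))       # 34.15
```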

    However, human perception of an area is affected by its surroundings, among other factors, and this method does not account for such visual characteristics; it only computes pixel-wise differences, so PSNR evaluations can disagree with human judgment.

    The value of SSIM lies between 0 and 1. The larger the SSIM, the smaller the difference between the two face images, and therefore the better the image reconstruction quality. The SSIM method measures image similarity from three aspects: luminance (L), contrast (C), and structure (S). The expression of SSIM is shown in Eq (5).

    SSIM(X,Y)=L(X,Y)×C(X,Y)×S(X,Y) (5)

    SSIM simulates human perception of changes in image information: it models image brightness with the image mean, image contrast with the standard deviation, and image structure with the covariance, which makes up for the shortcomings of the PSNR method.
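    A single-window version of Eq (5) can be sketched as follows. The stabilizing constants C1 and C2 (with C3 = C2/2) follow the common SSIM convention and are assumptions, not values from the paper; a production implementation would also compute SSIM over local windows rather than globally.

```python
import numpy as np

def ssim(x, y, bits=8):
    # Global (single-window) SSIM as in Eq (5):
    # luminance x contrast x structure.
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    L_range = 2**bits - 1
    C1 = (0.01 * L_range) ** 2            # conventional stabilizers
    C2 = (0.03 * L_range) ** 2
    C3 = C2 / 2
    mx, my = x.mean(), y.mean()           # means -> luminance term
    sx, sy = x.std(), y.std()             # std devs -> contrast term
    sxy = ((x - mx) * (y - my)).mean()    # covariance -> structure term
    lum = (2 * mx * my + C1) / (mx**2 + my**2 + C1)
    con = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)
    struct = (sxy + C3) / (sx * sy + C3)
    return float(lum * con * struct)

img = np.arange(16.0).reshape(4, 4)
print(ssim(img, img))  # identical images -> 1.0
```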

    The receiver operating characteristic (ROC) curve is a comprehensive indicator reflecting the sensitivity and specificity of a continuous variable. Each point on the ROC curve reflects the sensitivity to the same signal stimulus.

    The abscissa is the false positive rate (FPR): the proportion of all negative samples that are incorrectly predicted to be positive. The larger the FPR, the more negative samples are misclassified as positive. FPR is computed as in Eq (6).

    FPR = FP / (FP + TN) (6)

    The ordinate is the true positive rate (TPR): the proportion of all positive samples that are correctly predicted to be positive. The larger the TPR, the more actual positives appear among the predicted positives. TPR is computed as in Eq (7).

    TPR = TP / (TP + FN) (7)

    In Eqs (6) and (7), True Positive (TP) is the number of samples predicted positive that are actually positive; False Positive (FP) is the number predicted positive that are actually negative; True Negative (TN) is the number predicted negative that are actually negative; and False Negative (FN) is the number predicted negative that are actually positive.
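    These confusion-matrix definitions translate directly into code, using the standard forms FPR = FP/(FP + TN) and TPR = TP/(TP + FN). The labels below are illustrative, not the paper's data:

```python
def roc_point(y_true, y_pred):
    # Count the four confusion-matrix cells for binary labels (1/0).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # One point on the ROC curve: (FPR, TPR).
    return fp / (fp + tn), tp / (tp + fn)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
fpr, tpr = roc_point(y_true, y_pred)
print(fpr, tpr)  # FPR = 0.2, TPR ~ 0.67
```

    Sweeping a decision threshold over the classifier's scores and collecting one (FPR, TPR) pair per threshold traces out the full ROC curve, whose area is the AUC reported later.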

    To better recognize face images, this paper uses the Gaussian kernel function as the mapping function in the model. To achieve the best results, three parameters need to be tuned: the local constraint parameter λ, the kernel function similarity parameter σ, and the high-resolution layer constraint parameter k.

    We fix the value of k and the kernel function similarity parameter σ, and set the value range of λ to 0.01 to 0.12. The mean PSNR and mean SSIM used as evaluation criteria change with λ as shown in Figure 6(a), (b). As shown in the figure, the average PSNR of MTCNN and Faster R-CNN is largest at λ = 0.08, where the average PSNR of R-CNN and the average SSIM also reach their peaks. Since the differences between these values are very small, λ = 0.08 is selected.

    Figure 6.  Parameter optimization of λ, σ and k in the face recognition frameworks based on MTCNN, R-CNN, and Faster R-CNN.

    We fix the values of k and λ, and vary the kernel function similarity parameter σ from 200 to 1000. The mean PSNR and mean SSIM change with σ as shown in Figure 6(c), (d). We find that the average PSNR of the three models reaches its maximum at σ = 400, while the average SSIM peaks at σ = 400 and 500. Combining the two objective evaluation criteria, we choose σ = 400.

    We fix λ and the kernel function similarity parameter σ, and vary the high-resolution layer error-term parameter k from 0.01 to 0.10. The mean PSNR and mean SSIM change with k as shown in Figure 6(e), (f). The line charts clearly show that both the mean PSNR and the mean SSIM peak at k = 0.05, so we take the optimal value of k to be 0.05.
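    The three sweeps above amount to a coordinate-wise grid search. A minimal sketch, in which evaluate() is a hypothetical stand-in for the real objective (mean PSNR over the test set), with its peak placed at the chosen parameter values purely for illustration:

```python
def evaluate(lam, sigma, k):
    # Hypothetical stand-in for the real objective (mean PSNR over the
    # test set); peaks at lam=0.08, sigma=400, k=0.05 for illustration.
    return -((lam - 0.08) ** 2 + ((sigma - 400) / 1000) ** 2 + (k - 0.05) ** 2)

def tune_one(name, grid, params):
    # Fix the other parameters and sweep one, as in the procedure above.
    return max(grid, key=lambda v: evaluate(**{**params, name: v}))

params = {"lam": 0.05, "sigma": 500, "k": 0.03}
params["lam"] = tune_one("lam", [0.01 * i for i in range(1, 13)], params)
params["sigma"] = tune_one("sigma", range(200, 1001, 100), params)
params["k"] = tune_one("k", [0.01 * i for i in range(1, 11)], params)
```

    A coordinate-wise search like this is cheap but only guaranteed to find the optimum when the objective is well behaved in each parameter; the figures' single-peaked curves suggest that assumption holds here.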

    From the parameter optimization, the three models are in their optimal state when λ = 0.08, σ = 400, and k = 0.05. Table 2 shows the average PSNR and SSIM values of the three models.

    Table 2.  Mean PSNR and mean SSIM of the three models.

    Model           PSNR (dB)    SSIM
    MTCNN           36.245       0.954
    R-CNN           35.005       0.927
    Faster R-CNN    35.305       0.938


    From Table 2, the mean PSNR and SSIM of MTCNN are better than those of R-CNN and Faster R-CNN. The average PSNR of our method is 1.24 dB higher than that of R-CNN and 0.94 dB higher than that of Faster R-CNN, and its average SSIM is about 2.9% higher than R-CNN's and 1.7% higher than Faster R-CNN's.

    Figure 7 shows the ROC curves of MTCNN, R-CNN and Faster R-CNN. The Area Under Curve (AUC) of MTCNN is 97.56%, the AUC of R-CNN is 91.24%, and the AUC of Faster R-CNN is 92.01%. MTCNN has the best overall face recognition performance, and it also performs best for face detection.

    Figure 7.  ROC curves of MTCNN, R-CNN and Faster R-CNN.

    In actual scenes, the acquired face images are usually of poor resolution and low quality, for a variety of reasons: first, surveillance cameras are mounted high with a wide shooting range, so the target face is small; second, monitoring devices are limited by storage space and highly compress video, so the images lose detail; third, external conditions such as rainy weather and poor lighting further reduce the quality of captured images. In response to these problems, face recognition technology combined with convolutional neural networks realizes its practical application value.

    Although this paper has conducted considerable research and explored learning-based image classification methods, significant limitations remain. Despite the sound conclusions of this work, some issues still deserve attention, and future implementation of other relevant networks is required for a thorough analysis.

    With the continuing deepening of research on convolutional neural networks, vector-based convolution and pooling have been studied thoroughly. In fact, the middle-layer data of a network can also be processed using Riemannian manifold geometry; pooling and iterating in matrix form in this way can have a positive effect on the final output of the network.

    MTCNN mainly uses three cascaded networks and the idea of candidate boxes plus classifiers to execute fast and efficient face recognition. Across the three evaluation indicators, MTCNN has the best overall face recognition performance, and for defective faces it still achieves the best effect and performance.

    Technology based on MTCNN has good development prospects: it greatly improves the accuracy of face recognition and, at the same time, improves the security of image recognition systems.

    The authors declare no conflict of interest.

    This research is funded by the National Natural Science Foundation of China (No. 62006102).



    [1] Ngo I, Jeon S, Byon C (2016) Thermal conductivity of transparent and flexible polymers containing fillers: a literature review. Int J Heat Mass Tran 98: 219–226. doi: 10.1016/j.ijheatmasstransfer.2016.02.082
    [2] Hussain ARJ, Alahyari AA, Eastman S, et al. (2017) Review of polymers for heat exchanger applications: factors concerning thermal conductivity. Appl Therm Eng 113: 1118–1127. doi: 10.1016/j.applthermaleng.2016.11.041
    [3] Wong CP, Bollampally RS (1999) Thermal conductivity, elastic modulus, and coefficient of thermal expansion of polymer composites filled with ceramic particles for electronic packaging. J Appl Polym Sci 74: 3396–3403. doi: 10.1002/(SICI)1097-4628(19991227)74:14<3396::AID-APP13>3.0.CO;2-3
    [4] Feng Y, Qin M, Feng W, et al. (2016) Thermal conducting properties of aligned carbon nanotubes and their polymer composites. Compos Part A-Appl S 91: 351–369. doi: 10.1016/j.compositesa.2016.10.009
    [5] Mishras S, Shimpi NG (2005) Comparison of nano CaCO3 and flyash filled with styrene butadiene rubber on mechanical and thermal properties. J Sci Ind Res India 64: 744–751.
    [6] Kutz M (2011) Applied Plastics Engineering Handbook: Processing and Materials, William Andrew.
    [7] Xu JZ, Gao BZ, Kang FY (2016) A reconstruction of maxwell model for effective thermal conductivity of composite materials. Appl Therm Eng 102: 972–979. doi: 10.1016/j.applthermaleng.2016.03.155
    [8] Tong XC (2011) Advanced Materials for Thermal Management of Electronic Packaging, London: Springer.
    [9] Moore AL, Shi L (2014) Emerging challenges and materials for thermal management of electronics. Mater Today 17: 163–174. doi: 10.1016/j.mattod.2014.04.003
    [10] Burger N, Laachachi A, Ferriol M, et al. (2016) Review of thermal conductivity in composites: mechanisms, parameters and theory. Prog Polym Sci 61: 1–28. doi: 10.1016/j.progpolymsci.2016.05.001
    [11] Weidenfeller B, Hofer M, Schilling F (2002) Thermal and electrical properties of magnetite filled polymers. Compos Part A-Appl S 33: 1041–1053. doi: 10.1016/S1359-835X(02)00085-4
    [12] Younes H, Christensen G, Liu M, et al. (2014) Alignment of carbon nanofibers in water and epoxy by external magnetic field. J Nanofluids 3: 33–37. doi: 10.1166/jon.2014.1081
    [13] Horton M, Hong H, Li C, et al. (2010). Magnetic alignment of Ni-coated single wall carbon nanotubes in heat transfer nanofluids. J Appl Phys 107: 1–4.
    [14] Liu M, Younes H, Hong H, et al. (2019) Polymer nanocomposites with improved mechanical and thermal properties by magnetically aligned carbon nanotubes. Polymer 166: 81–87. doi: 10.1016/j.polymer.2019.01.031
    [15] Younes H, Christensen G, Luan X, et al. (2012) Effects of alignment, pH, surfactant, and solvent on heat transfer nanofluids containing Fe2O3 and CuO nanoparticles. J Appl Phys 111: 064308. doi: 10.1063/1.3694676
    [16] Ku J, Valdez-Grijalva MA, Deng R, et al. (2019) Modelling external magnetic fields of magnetite particles: from micro- to macro-scale. Geosciences 9: 133. doi: 10.3390/geosciences9030133
    [17] Vargas Z, Filipcsei G, Zrinyi M (2006) Magnetic field sensitive functional elastomers with tuneable elastic modulus. Polymer 47: 227–233. doi: 10.1016/j.polymer.2005.10.139
    [18] Vargas Z, Filipcsei G, Zrinyi M (2005) Smart composites with controlled anisotropy. Polymer 46: 7779–7787. doi: 10.1016/j.polymer.2005.03.102
    [19] Boon MS, Mariatti M (2014) Optimization of magnetic and dielectric properties of surface-treated magnetite-filled epoxy composites by factorial design. J Magn Magn Mater 355: 319–324. doi: 10.1016/j.jmmm.2013.12.002
    [20] Pedroso AG, Rosa DS, Atvars TDZ (2002) Manufacture of sheets using post-consumer unsaturated polyester resin/glass fibre composites. Prog Rubber Plast Re 18: 111–125.
    [21] Oladunjoye M, Sanuade OA (2012) Thermal diffusivity, thermal effusivity and specific heat of soils in Olorunsogo power plant, southwestern Nigeria. IJRRAS 13: 502–521.
    [22] Ma X, Omer S, Zhang W, et al. (2008) Thermal conductivity measurement of two microencapsulated phase change slurries. Int J Low-Carbon Tec 3: 245–253. doi: 10.1093/ijlct/3.4.245
    [23] Gustavsson M, Karawacki E, Gustafsson SE (1994) Thermal conductivity, thermal diffusivity, and specific heat of thin samples from transient measurements with hot disk sensors. Rev Sci Instrum 65: 3856–3859. doi: 10.1063/1.1145178
    [24] Maldonado LM, Rodríguez GP (2014) Effect of ceramic dental waste in thermo-physical properties of materials composed with polyester resins. Ingeniería Investigación y Desarrollo 14: 2–5. Available from: https://doi.org/10.19053/1900771X.3442.
    [25] Schilling F, Weidenfeller B, Ho M (2002) Thermal and electrical properties of magnetite filled polymers. Compos Part A-Appl S 33: 1041–1053. doi: 10.1016/S1359-835X(02)00085-4
    [26] Hashin Z, Shtrikman S (1962) A variational approach to the theory of the effective magnetic permeability of multiphase materials. J Appl Phys 33: 3125–3131. doi: 10.1063/1.1728579
    [27] Razzaq MY, Anhalt M, Frormann L, et al. (2007) Thermal, electrical and magnetic studies of magnetite filled polyurethane shape memory polymers. Mat Sci Eng A-Struct 444: 227–235. doi: 10.1016/j.msea.2006.08.083
    [28] Gong L, Wang Y, Cheng X, et al. (2014) A novel effective medium theory for modelling the thermal conductivity of porous materials. Int J Heat Mass Tran 68: 295–298. doi: 10.1016/j.ijheatmasstransfer.2013.09.043
    [29] Pena-Rodríguez G, Rivera-Suarez PA, Gónzalez-Gómez CH, et al. (2018) Effect of the concentration of magnetite on the structure, electrical and magnetic properties of a polyester resin-based composite. TecnoLogicas 21: 13–27.
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
