Research article

Applied convolutional neural network framework for tagging healthcare systems in crowd protest environment


  • Received: 05 July 2021 Accepted: 09 September 2021 Published: 13 October 2021
  • Healthcare systems constitute a significant portion of smart-city infrastructure. The aim of smart healthcare is twofold: internal healthcare systems focus solely on monitoring patients' vital parameters, while external systems provide proactive healthcare measures through surveillance mechanisms. This work utilizes the surveillance mechanism, giving impetus to healthcare tagging for the general public, and deals exclusively with mass gatherings and crowded places. Managing crowd gatherings and public places is a vital challenge in any smart-city environment. Protests and dissent are commonly observed crowd behaviors, and such behavior has an inherent capacity to turn violent. This paper explores a novel deep-learning-based method that provides an Internet of Things (IoT) environment-based decision support system for tagging healthcare systems for people injured in crowd protests and violence. The proposed system classifies protests into normal, medium and severe categories; the protest level is tagged directly to the nearest healthcare system and generates the requirement for specialist healthcare professionals. The system offers an optimized solution for people who are either participating in a protest or stranded in a protest environment, and allows complete tagging of specialist healthcare professionals for all types of emergency response in such crowd gatherings. Experimental results are encouraging: the proposed system achieves a promising accuracy of more than eighty-one percent in classifying protest attributes and more than ninety percent in differentiating protests from violent actions. The numerical results are motivating enough for the system to be extended beyond proof of concept into real-time external surveillance and healthcare tagging.

    Citation: Gaurav Tripathi, Kuldeep Singh, Dinesh Kumar Vishwakarma. Applied convolutional neural network framework for tagging healthcare systems in crowd protest environment, Mathematical Biosciences and Engineering, 2021, 18(6): 8727-8757. doi: 10.3934/mbe.2021431
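The tagging pipeline the abstract describes — a severity class produced by the CNN, mapped to specialist requirements and routed to the nearest healthcare facility — could be sketched as below. This is a minimal illustration, not the paper's implementation: the CNN itself is replaced by a stub that consumes per-class scores, and the severity-to-specialist mapping and facility names are hypothetical assumptions.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical mapping from the paper's three severity levels to specialist
# requirements; the specialist lists are illustrative, not from the article.
SPECIALISTS = {
    "normal": ["general physician"],
    "medium": ["general physician", "orthopedic surgeon"],
    "severe": ["trauma surgeon", "ophthalmologist", "maxillofacial surgeon"],
}

def classify_severity(scores):
    """Pick the severity class with the highest score.

    `scores` stands in for the CNN's per-class outputs (e.g. softmax
    probabilities); the network itself is omitted here.
    """
    return max(scores, key=scores.get)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def tag_nearest_facility(event_pos, facilities, severity):
    """Tag a protest event to the nearest facility, with the specialist
    requirements its severity level generates."""
    nearest = min(
        facilities,
        key=lambda f: haversine_km(*event_pos, f["lat"], f["lon"]),
    )
    return {"facility": nearest["name"], "specialists": SPECIALISTS[severity]}

if __name__ == "__main__":
    # Example: a protest scored as severe, with two candidate hospitals.
    severity = classify_severity({"normal": 0.05, "medium": 0.15, "severe": 0.80})
    facilities = [
        {"name": "Hospital A", "lat": 28.63, "lon": 77.22},
        {"name": "Hospital B", "lat": 28.70, "lon": 77.10},
    ]
    print(tag_nearest_facility((28.632, 77.219), facilities, severity))
```

In a deployed IoT setting the `scores` input would come from the trained CNN and the facility list from a city registry; the nearest-facility step here uses a plain haversine distance as a stand-in for whatever routing logic the real system employs.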


  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)