Research article

Adaptive Neuro-Symbolic framework with dynamic contextual reasoning: A novel framework for semantic understanding

  • Received: 11 September 2025 | Revised: 03 October 2025 | Accepted: 10 October 2025 | Published: 17 October 2025
  • Despite significant advances in image processing, achieving human-like semantic understanding and explainability remains a formidable challenge. Current deep learning models excel at feature extraction but lack the ability to reason about relationships, interpret context, or provide transparent decision-making. To address these limitations, we propose the adaptive neuro-symbolic framework with dynamic contextual reasoning (ANS-DCR), a novel architecture that seamlessly integrates neural networks with symbolic reasoning. ANS-DCR introduces four key innovations: 1) a contextual embedding layer (CEL) that dynamically converts neural features into structured symbolic embeddings tailored to the scene's context; 2) hierarchical knowledge graphs (HKGs) that encode multi-level object relationships and update in real time based on neural feedback; 3) an adaptive reasoning engine (ARE) that performs scalable, context-aware logical reasoning; and 4) an explainable decision-making module (EDM) that generates human-readable explanations, including counterfactuals, enhancing interpretability. This framework bridges the gap between pattern recognition and logical reasoning, enabling deeper semantic understanding and dynamic adaptability. We demonstrate ANS-DCR's efficacy in complex scenarios such as autonomous driving, where it accurately interprets traffic scenes, predicts behaviors, and provides clear explanations for decisions. Experimental results show superior performance in semantic segmentation, contextual reasoning, and explainability compared with state-of-the-art methods. By combining the strengths of neural and symbolic paradigms, ANS-DCR sets a new benchmark for intelligent, transparent, and scalable image processing systems, offering transformative potential for applications in robotics, healthcare, and beyond. The source code of the proposed ANS-DCR is available at github.com/livingjesus/ANS-DCR.
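    To make the four-module design concrete, the following is a minimal, illustrative Python sketch of how the CEL, HKG, ARE, and EDM stages might compose into a single pipeline on a traffic scene. The class names follow the abstract, but every interface, rule, and data structure below is an assumption for illustration only, not the published ANS-DCR implementation (see github.com/livingjesus/ANS-DCR for the actual source).

```python
from dataclasses import dataclass


@dataclass
class SymbolicFact:
    """A single subject-relation-object triple (illustrative structure)."""
    subject: str
    relation: str
    obj: str


class ContextualEmbeddingLayer:
    """CEL: maps neural detections to scene-conditioned symbolic facts (assumed interface)."""
    def encode(self, detections, scene_context):
        # Each detected object becomes a structured fact tied to the current scene context.
        return [SymbolicFact(d["label"], "located_at", scene_context) for d in detections]


class HierarchicalKnowledgeGraph:
    """HKG: accumulates object relationships as new facts arrive from the CEL."""
    def __init__(self):
        self.facts = []

    def update(self, facts):
        self.facts.extend(facts)


class AdaptiveReasoningEngine:
    """ARE: applies a toy context-aware rule over the graph (placeholder logic)."""
    def infer(self, graph):
        if any(f.subject == "pedestrian" and f.obj == "crosswalk" for f in graph.facts):
            return "yield"
        return "proceed"


class ExplainableDecisionModule:
    """EDM: turns a decision and its supporting facts into a human-readable explanation."""
    def explain(self, decision, graph):
        support = "; ".join(f"{f.subject} {f.relation} {f.obj}" for f in graph.facts)
        return f"Decision '{decision}' because: {support or 'no supporting facts'}."


# Toy usage on a traffic scene, mirroring the autonomous-driving example in the abstract.
detections = [{"label": "pedestrian"}, {"label": "vehicle"}]
cel, hkg = ContextualEmbeddingLayer(), HierarchicalKnowledgeGraph()
are, edm = AdaptiveReasoningEngine(), ExplainableDecisionModule()

hkg.update(cel.encode(detections, scene_context="crosswalk"))
decision = are.infer(hkg)
print(edm.explain(decision, hkg))
# -> Decision 'yield' because: pedestrian located_at crosswalk; vehicle located_at crosswalk.
```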

    Citation: Idowu Paul Okuwobi, Jingyuan Liu, Olayinka Susan Raji, Olusola Funsho Abiodun, Adaptive Neuro-Symbolic framework with dynamic contextual reasoning: A novel framework for semantic understanding, Mathematical Biosciences and Engineering, 22 (2025), 3028–3059. https://doi.org/10.3934/mbe.2025112


  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)