Special Issue: Trustworthy and explainable machine learning
Guest Editors
Prof. Amparo Alonso-Betanzos
CITIC, University of A Coruña, Dept. of Computer Science, Campus de Elviña s/n, A Coruña, Spain
Email: ciamparo@udc.es
Prof. Óscar Fontenla-Romero
CITIC, University of A Coruña, Dept. of Computer Science, Campus de Elviña s/n, A Coruña, Spain
Email: oscar.fontenla@udc.es
Prof. Bertha Guijarro-Berdiñas
CITIC, University of A Coruña, Dept. of Computer Science, Campus de Elviña s/n, A Coruña, Spain
Email: berta.guijarro@udc.es
Manuscript Topics
Artificial Intelligence (AI) is today one of the most impactful technologies across many areas, with geopolitical, social, and economic implications, among others. The ability to discover patterns and structures in data through Machine Learning (ML) is one of the most successful fields of AI. Classically, however, most ML algorithms follow an opaque black-box approach, whereas regulations in many countries call for more transparent solutions. The conditions to be fulfilled involve issues such as trust, fairness, privacy, and trustworthy and explainable data analysis at every stage of the machine learning project pipeline, from data preparation to deployment. All of these topics will be addressed in this special issue. We therefore encourage submissions on, but not limited to, the following topics:
● Transparency and explainability: intrinsically explainable ML models, model-agnostic explainability, surrogate methods, etc.
● Robustness to distributional shifts and data imperfections
● Adversarial robustness
● Privacy-preserving ML
● Ethical ML
● Fairness, bias, and causality
● Model obsolescence and maintenance
Instructions for authors
https://www.aimspress.com/era/news/solo-detail/instructionsforauthors
Please submit your manuscript via the online submission system:
https://aimspress.jams.pub/