The growing adoption of AI in high-stakes applications—such as healthcare, finance, and autonomous systems—has underscored the need for trustworthy and interpretable AI. While state-of-the-art AI systems achieve remarkable performance, their opaque nature raises critical concerns about transparency, reliability, fairness, and ethical compliance. Neurosymbolic AI presents a transformative paradigm that merges the scalability and adaptability of neural systems with the structure, rigor, and explainability of symbolic reasoning.

The Neurosymbolic Methods for Trustworthy and Interpretable AI Special Track at the 19th International Conference on Neurosymbolic Learning and Reasoning (NeSy 2025) aims to bring together researchers and practitioners working at the intersection of neurosymbolic approaches and trustworthiness in AI. This special track invites submissions that advance the development of interpretable, fair, robust, and ethically aligned AI systems through neurosymbolic methods.

Our goal is to explore how the combination of symbolic reasoning, knowledge representation, and neural-based approaches can ensure that AI systems are not only high-performing but also accountable, interpretable, and aligned with societal values.

Topics of Interest

Submissions are encouraged from all areas related to neurosymbolic methods that enhance trustworthiness and interpretability in AI. Relevant topics include, but are not limited to:

  • Fairness and Bias Mitigation: Using symbolic reasoning and ontologies to identify, mitigate, and explain biases in neural systems.
  • Explainable Decision-Making: Neurosymbolic techniques for generating interpretable, human-understandable justifications for AI decisions.
  • Symbolic Regulation for Ethical AI: Incorporating ethical principles, regulations, and standards (e.g., GDPR, IEEE ethics) into AI models via symbolic reasoning.
  • Robustness and Verifiability: Leveraging neurosymbolic approaches for the formal verification of AI systems to ensure safety and reliability.
  • Combining Neural and Symbolic Models for Interpretability: Developing hybrid frameworks to generate consistent, transparent outputs from neural systems.
  • Knowledge Graphs and Trustworthy AI: Utilizing knowledge graphs, ontologies, and other structured knowledge representations to enhance AI reliability and accountability.
  • Neurosymbolic Debugging and Error Analysis: Techniques for identifying and explaining failures or inconsistencies in AI predictions.
  • Human-AI Collaboration: Neurosymbolic methods for fostering intuitive and trustworthy human-machine interactions.
  • Metrics and Benchmarks: Developing new metrics, datasets, and evaluation methodologies to assess trustworthiness, fairness, interpretability, and robustness in AI.

Contact

For inquiries related to this special track, please contact the track chairs:

  • Abhilekha Dalal (Kansas State University) – abhilekha.dalal@ksu.edu
  • Vaishak Belle (University of Edinburgh) – vbelle@ed.ac.uk

Submission

Please submit your paper according to the conference submission guidelines. In the submission form on OpenReview, please select the “Neurosymbolic Methods for Trustworthy and Interpretable AI” special track.