List of Special Sessions
EXIT-ANA 2026: Explainable, Interpretable & Trustworthy AI for Next-Generation Network Autonomy
Organizers
Associate Prof. Murat Karakus, Department of Software Engineering, Ankara University, Türkiye
Dr. Murat Karakus received his B.Sc. in Mathematics from Suleyman Demirel University (2009),
his M.Sc. in Computer Science and Information Systems from the University of Michigan-Flint (2013),
and his Ph.D. in Computer Science from the Purdue School of Science (2018). He is currently an
Associate Professor and the Head of the Department of Software Engineering at Ankara University.
Dr. Karakus has authored over 60 peer-reviewed publications and serves as a reviewer for leading
IEEE, ACM, and Elsevier venues. He has served on organizing committees of major IEEE conferences
(MeditCom, BalkanCom, BCCA, BlackSeaCom). He has delivered tutorials at MeditCom 2023, BCCA
2023, and CCNC 2025, and has given invited talks at events such as the IEEE Blockchain Summit
2022, Blockchain Summit Türkiye 2024, SIBIS 2024, ISMSIT 2025, and ISCTürkiye 2025. He has
organized workshops at BCCA 2024 and BCCA 2025. His research interests include next-generation networking, network autonomy, NLP, explainable AI, Blockchain, network scalability, QoS,
and routing. He has led several TÜBİTAK-funded and industry-supported research projects.
Assist. Prof. Rukiye Savran Kızıltepe, Department of Software Engineering, Ankara University, Türkiye
Dr. Rukiye Savran Kiziltepe received her Ph.D. in Computer Science from the University of Essex,
UK, in 2022. She also completed her M.Sc. in Advanced Computer Science at the University of Essex
in 2017 and her B.Sc. in Computer Education and Instructional Technology at Hacettepe University,
Türkiye, in 2014. She is currently an Assistant Professor in the Department of Software Engineering at
Ankara University. She has contributed to national and international R&D projects in data science and
multimedia analysis. Dr. Kiziltepe has been a reviewer and editor for several journals and conferences
and has served as an organizer of the Predicting Media Memorability benchmark at the international
MediaEval initiative since 2019. Her research interests include machine learning, natural language
processing, computer vision, and intelligent data-driven systems.
Fatih Bildirici, ASELSAN, Ankara, Türkiye
Fatih Bildirici is currently pursuing his Ph.D. in Artificial Intelligence at Ankara University. He
holds an M.Sc. in Management Information Systems and is continuing his graduate studies in Cognitive Science, reflecting a strong multidisciplinary orientation toward technology and intelligent systems. Professionally, he serves as a Senior Subject Matter Expert at ASELSAN, where he leads large-scale
digital transformation initiatives. In addition to his industrial role, he provides consultancy to startups
and collaborates with technology companies focusing on AI integration. His research interests include
explainable and responsible AI, as well as software development methodologies. Mr. Bildirici is committed to advancing transparent, ethical, and innovative AI systems aligned with emerging business
and technology needs.
Berkay Bayramoglu, Department of Software Engineering, Ankara University, Türkiye
Berkay Bayramoglu is currently pursuing the B.Sc. degree in Software Engineering at Ankara
University, Ankara, Türkiye. Since 2024, he has been with the ANLAM-NET Research Laboratory at
Ankara University, working on research in Large Language Model reliability, hallucination detection,
and XAI. He actively contributes to various TÜBİTAK-funded research projects. In 2025, he was an
AI Research Intern at the Interventional MR R&D Institute (GİMRE) at Ankara University. He has
co-authored a paper accepted to IISEC 2026. Furthermore, he achieved 3rd place in the Teknofest 5G
Positioning Competition and was a finalist in the Turkish Natural Language Processing Competition.
He currently serves as the Vice President of the YAZGIT Student Community. His research interests
include natural language processing, deep learning, agentic AI, explainable AI, and computer vision.
Scope
This workshop aims to address the critical challenges of "black-box" AI/ML models in next-generation network autonomy, operations, and management. As networks evolve toward intent-based and highly automated
operations (6G, O-RAN), the lack of transparency in AI-driven analytics creates barriers to operator trust
and regulatory compliance (e.g., EU AI Act). The workshop will provide a comprehensive exploration of
Explainable AI (XAI) tailored for network autonomy, operations, and management, covering explainability,
interpretability, and trustworthy AI foundations, as well as widely used techniques (LIME, SHAP, Integrated
Gradients) and their application to real-world next-generation telecom use cases.
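To illustrate one of the techniques named above, the sketch below approximates Integrated Gradients for a toy anomaly scorer. Everything here is an assumption for illustration only: the model is a simple logistic scorer with made-up weights, and the KPI feature names (latency, jitter, packet loss) are hypothetical; real telecom models would use autograd frameworks rather than an analytic gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(w, x, baseline, steps=100):
    """Approximate Integrated Gradients for f(x) = sigmoid(w @ x).

    Integrates the gradient along the straight path from baseline to x
    (midpoint Riemann sum), then scales by the input difference.
    """
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        point = baseline + a * (x - baseline)
        f = sigmoid(w @ point)
        total += f * (1.0 - f) * w  # analytic gradient of sigmoid(w @ x)
    return (x - baseline) * total / steps

# Hypothetical standardized KPIs: [latency, jitter, packet_loss]
w = np.array([0.8, 0.3, 1.5])   # toy anomaly-scoring weights (illustrative)
x = np.array([2.0, 0.5, 1.2])   # observed sample
baseline = np.zeros(3)          # "normal traffic" reference point

attr = integrated_gradients(w, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline)
print(attr, attr.sum(), sigmoid(w @ x) - sigmoid(w @ baseline))
```

The completeness check in the last line is the key property that makes Integrated Gradients attractive for auditable network analytics: each feature's attribution is an accountable share of the total change in the anomaly score.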
Topics of the Special Session
The workshop will cover a broad spectrum of topics at the intersection of XAI and next-generation networking, including but not limited to:
- XAI and Trustworthy AI for network autonomy, operations, and management.
- Explainability methods for telecom AI/ML models (LIME, SHAP, Integrated Gradients, Grad-CAM, surrogate models, counterfactuals).
- Operator-centric interpretability and human-in-the-loop network decision support.
- Transparent and verifiable AI for anomaly detection, fault/failure prediction, and SLA assurance.
- Explainability challenges in 5G/6G, O-RAN, and cloud-native network architectures.
- Metrics, benchmarking, and validation frameworks for evaluating XAI in communication networks.
- Regulatory and compliance considerations, including EU AI Act, ETSI ENI, and O-RAN Alliance requirements.
- System design patterns enabling trustworthy closed-loop automation and intent-based networking.
- Case studies and real-world deployments of XAI-enabled autonomous network control.
- Future challenges and emerging research opportunities in explainable, trustworthy, and autonomous networking.
- AIOps/MLOps Applications such as anomaly triage, root-cause analysis, and SLA violation explanations.
- XAI for LLM-based network agents, multi-agent systems, and IoT-scale environments.
- Security and privacy implications of explaining AI decisions.
- Digital twins and synthetic data generation for XAI model training.
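As a second illustration of the surrogate-model approach listed above, here is a minimal LIME-style local surrogate in plain NumPy. The `black_box` scorer and the KPI feature names are hypothetical stand-ins; the point is the technique itself: perturb around an instance, weight samples by proximity, and fit a linear model whose coefficients serve as local attributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque anomaly scorer; the explainer never reads this code.
    return np.tanh(1.2 * X[:, 0] + 0.4 * X[:, 1] ** 2 - 0.7 * X[:, 2])

def local_surrogate(x0, n_samples=500, scale=0.1):
    """Fit a proximity-weighted linear surrogate around x0 (LIME-style)."""
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = black_box(X)
    # Exponential proximity kernel: nearby perturbations dominate the fit
    weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([np.ones((n_samples, 1)), X])  # intercept + features
    sw = np.sqrt(weights)                        # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # local attribution per feature

# Hypothetical standardized KPIs: [latency, jitter, packet_loss]
x0 = np.array([0.5, 1.0, -0.2])
attr = local_surrogate(x0)
print(attr)  # approximates the local gradient of the black box at x0
```

Because the surrogate is fitted only in a small neighborhood, its coefficients track the black box's local sensitivity, which is exactly the operator-facing question ("which KPI is driving this alarm?") rather than a global model summary.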