Can we reliably verify if an LLM has truly “forgotten” specific information?
SVELA (Selective Verification of Erasure from LLM Answers) is a shared task at EVALITA 2026 focusing on the evaluation of Machine Unlearning in Large Language Models. Participants will design, implement, and benchmark metrics for verifying selective forgetting, ensuring that models forget targeted knowledge while retaining unrelated capabilities. The task is intended to be model-agnostic and resource-inclusive, offering pre-trained models in different sizes so that teams with varying computational resources can participate.
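To make the metric-design goal concrete, here is a minimal, hypothetical sketch (not the official SVELA metric) of how a forget/retain verification score might be computed: a "forget rate" on targeted items, a "retain rate" on unrelated items, and their harmonic mean as a single balanced score.

```python
# Hypothetical sketch of a selective-forgetting verification metric.
# All names and the scoring scheme are illustrative assumptions,
# not the official SVELA evaluation protocol.

def unlearning_score(forget_answers, forget_gold, retain_answers, retain_gold):
    """Return (forget_rate, retain_rate, harmonic_mean).

    forget_rate: fraction of forget-set items the model no longer answers
                 correctly (higher = targeted knowledge was erased).
    retain_rate: fraction of retain-set items still answered correctly
                 (higher = unrelated capability preserved).
    """
    forget_correct = sum(a == g for a, g in zip(forget_answers, forget_gold))
    retain_correct = sum(a == g for a, g in zip(retain_answers, retain_gold))
    forget_rate = 1.0 - forget_correct / len(forget_gold)
    retain_rate = retain_correct / len(retain_gold)
    if forget_rate + retain_rate == 0:
        return forget_rate, retain_rate, 0.0
    # Harmonic mean penalizes models that forget well but also lose
    # unrelated capabilities (or vice versa).
    h = 2 * forget_rate * retain_rate / (forget_rate + retain_rate)
    return forget_rate, retain_rate, h
```

A harmonic mean is one natural choice here because it rewards only systems that score well on both axes; a model that erases everything (perfect forgetting, zero retention) still scores zero.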
Important Dates
- Task Registration Opens: September 1st, 2025
- Development Data Release: September 22nd, 2025
- System Submission Deadline: November 3rd, 2025
- Results Notification: November 17th, 2025
- System Description Paper Deadline: December 15th, 2025
- EVALITA 2026 Conference: February 26th-27th, 2026 in Bari, Italy
All deadlines refer to midnight (23:59) in the AoE (Anywhere on Earth) time zone. Participants are required to submit a system description paper describing their approach and results.
News
- SVELA task announced! The SVELA task has been officially announced as part of the EVALITA 2026 Evaluation Campaign.
- Official task website is live! The official website for SVELA @ EVALITA 2026 is now live.
Part of EVALITA 2026
