Can we reliably verify if an LLM has truly “forgotten” specific information?
SVELA (Selective Verification of Erasure from LLM Answers) is a shared task at EVALITA 2026 focusing on the evaluation of Machine Unlearning in Large Language Models. Participants will design, implement, and benchmark metrics for verifying selective forgetting, ensuring that models forget targeted knowledge while retaining unrelated capabilities. The task is intended to be model-agnostic and resource-inclusive, offering pre-trained models in different sizes so that teams with varying computational resources can participate.
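The task does not prescribe a particular metric, but to make the goal concrete, the following is a minimal sketch of one common way selective forgetting is probed: comparing the likelihood an unlearned model assigns to forget-set answers against retain-set answers. The model identifier, data format, and the metric itself are illustrative assumptions, not part of the official SVELA protocol.

```python
# Illustrative sketch only: the model id and the "forgetting gap" metric below
# are assumptions for exposition, not the official SVELA evaluation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "svela/SVELA-model-unlearned_1B"  # hypothetical Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()


def answer_log_likelihood(question: str, answer: str) -> float:
    """Average log-likelihood (nats per token) of `answer` conditioned on `question`."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Score only the answer tokens; shift logits so position i predicts token i+1.
    answer_start = prompt_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_scores = log_probs[range(len(targets)), targets]
    return token_scores[answer_start - 1:].mean().item()


def forgetting_gap(forget_pairs, retain_pairs) -> float:
    """Higher values suggest retained knowledge stays while forget-set knowledge drops."""
    forget = sum(answer_log_likelihood(q, a) for q, a in forget_pairs) / len(forget_pairs)
    retain = sum(answer_log_likelihood(q, a) for q, a in retain_pairs) / len(retain_pairs)
    return retain - forget
```

A single likelihood gap like this is only a starting point; participants are expected to design and benchmark their own verification metrics.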
Important Dates
- Task Registration Opens: September 1st, 2025
- Development Data Release: September 22nd, 2025
- System Submission Opens: December 1st, 2025
- System Submission Deadline: November 3rd, 2025 → December 8th, 2025
- Results Notification: November 17th, 2025 → December 15th, 2025
- System Description Paper Deadline: December 15th, 2025 → January 9th, 2026
- EVALITA 2026 Conference: February 26th–27th, 2026 in Bari, Italy
All deadlines refer to midnight (23:59) in the AoE (Anywhere on Earth) time zone. Participants are required to submit a system description paper describing their approach and results.
News
- Baseline model available! A first model is now available for participants to start testing: SVELA-model-unlearned_1B. Additional models will be released soon.
- New dataset splits released! The SVELA train split (with retain, forget, and test partitions) and the validation split (unlabeled) are now available on Hugging Face (see the loading sketch after this list).
- Registration is open! You can now register for the challenge by filling out the form here.
- SVELA task announced! The SVELA task has been officially announced as part of the EVALITA 2026 Evaluation Campaign.
- Official task website is live! The official website for SVELA @ EVALITA 2026 is now live.
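For participants who want to start from the released resources, the sketch below shows how the splits might be loaded with the Hugging Face `datasets` library. The repository identifier and partition names are placeholders; the exact paths are published on the official SVELA page and are not reproduced here.

```python
# Hedged sketch: the dataset repo id and split/partition names are placeholders,
# not the official SVELA identifiers.
from datasets import load_dataset

DATASET_ID = "svela/svela-2026"  # placeholder Hugging Face dataset repo id

# Train split (containing retain, forget, and test partitions) and the
# unlabeled validation split mentioned in the news item above.
train = load_dataset(DATASET_ID, split="train")
validation = load_dataset(DATASET_ID, split="validation")

# Inspect the available columns and partition labels before building metrics.
print(train)
print(validation)
```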
Part of EVALITA 2026