Official schedule.

The workshop will take place on April 29th during ICLR 2022.

Time (UTC)       Session
12h00 - 12h15    Welcome Note
12h15 - 13h00    Invited Talk: Thomas Wolf
                 The challenges of open evaluation at Hugging Face: from open-source evaluation libraries to evaluating +30k hosted models and models of 176B parameters
13h00 - 13h45    Invited Talk: Frank Schneider
                 Improving Optimizer Evaluation in Deep Learning
13h45 - 14h30    Invited Talk: Rotem Dror
                 A Statistical Analysis of Automatic Evaluation Metrics for Summarization
14h30 - 14h50    Contributed Talks (x2)
14h50 - 15h50    Panel: Reproducibility and Rigor in ML
                 Rotem Dror, Sara Hooker, Koustuv Sinha, Frank Schneider and Gaël Varoquaux
15h50 - 16h35    1st Poster Session
16h35 - 17h20    Invited Talk: James Evans
                 AI Advance and the Paradox of Sustained Innovation
17h20 - 18h20    Panel: Slow vs Fast Science
                 Michela Paganini, Chelsea Finn, James Evans, Russell Poldrack and Oriol Vinyals
18h20 - 18h50    Coffee Break
18h50 - 19h35    Invited Talk: Melanie Mitchell
                 Beyond Accuracy: How to Evaluate Understanding on Conceptual Abstraction Benchmarks
19h35 - 20h20    Invited Talk: Katherine Heller
                 A framework for improved ML evaluations
20h20 - 21h05    Invited Talk: Corinna Cortes
                 Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment
21h05 - 21h35    Contributed Talks (x3)
21h35 - 22h35    Panel: Incentives for better evaluation
                 Corinna Cortes, John Langford, Kyunghyun Cho and Yoshua Bengio
22h35 - 23h35    2nd Poster Session
23h35 - 23h50    Closing Remarks