Panels
We plan to organize three panel discussions on the topics below.
Reproducibility and Rigor in ML
Moderator: Rishabh Agarwal
Examples of questions we would like to cover are: What is reproducibility? What is it useful for, and is it necessary for machine learning research? Are we in a different position than other scientific fields because of, e.g., the speed of iteration, a culture of open source, and stronger control over our experiments (compared to the natural sciences)? Should we embrace more of the statistical tools of other scientific domains, or are current benchmarking methods sufficiently reliable? How much rigor does ML need?
Panelists
Rotem Dror - University of Pennsylvania
Rotem Dror is a Postdoctoral Researcher at the Cognitive Computation Group in the Department of Computer and Information Science, University of Pennsylvania, where she works with Prof. Dan Roth. She completed her Ph.D. in the Natural Language Processing Group, supervised by Prof. Roi Reichart, at the Faculty of Industrial Engineering and Management at the Technion - Israel Institute of Technology. Her Ph.D. thesis discussed algorithms and statistically sound evaluation of structured solutions in natural language processing. For more information: rtmdrr.github.io.
Sara Hooker - Google Brain
Sara Hooker is a research scholar at Google Brain. Her research interests include interpretability, model compression, and security in deep neural networks. In 2014, she founded Delta Analytics, a non-profit dedicated to building technical capacity to help communities across the world use machine learning for good. She grew up in Mozambique, Lesotho, Swaziland, South Africa, and Kenya, and currently resides in California.
Koustuv Sinha - Mila, McGill University
Koustuv Sinha is a PhD Candidate at McGill University / Mila, supervised by Joelle Pineau. Koustuv's research focuses on investigating systematicity in natural language understanding (NLU) models, especially state-of-the-art large language models. His research goal is to develop methods to analyze failure cases in the robustness and systematicity of these NLU models, and to develop methods to alleviate them in production. He has organized the annual ML Reproducibility Challenge since 2018 and serves as an associate editor of the ReScience journal. He also served as Reproducibility Chair at NeurIPS in 2019 and 2020.
Frank Schneider - University of Tübingen
Frank Schneider is a Ph.D. student in the Methods of Machine Learning group supervised by Prof. Dr. Philipp Hennig at the University of Tübingen in Germany. His research focuses on making deep learning more user-friendly. He has previously published work on new debugging tools for neural network training and on improving the evaluation process of optimization algorithms for deep learning. He is currently a co-chair of the MLCommons Algorithms Working Group. He holds a Bachelor's and a Master's degree in Simulation Technology from the University of Stuttgart, as well as a Master's degree in Industrial and Applied Mathematics from the Eindhoven University of Technology.
Gaël Varoquaux - INRIAGaël Varoquaux is a research director working on data science and health at Inria (French Computer Science National research). His research focuses on statistical-learning tools for data science and scientific inference, with an eye on applications in health and social science. He develops tools to make machine learning easier, with statistical models suited for real-life, uncurated data, and software for data science. For example, since 2008, he has been exploring data-intensive approaches to understand brain function and mental health. He co-funded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python. Varoquaux has a PhD in quantum physics and is a graduate from Ecole Normale Superieure, Paris. |
Slow vs Fast Science
Moderator: Xavier Bouthillier
Examples of questions we would like to cover are: Slow or fast science: is this a false dichotomy? Is the slow-science movement a threat to exploration? Or is the current pace of publication becoming overwhelming, harming the dissemination of knowledge? Should we look for incentives to balance so-called "slow" and "fast" approaches, and if so, how could we evaluate what a proper balance would be?
Panelists
Chelsea Finn - Stanford University
Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, and the William George and Ida Mary Hoover Faculty Fellow. Finn's research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has included deep learning algorithms for concurrently learning visual perception and control in robotic manipulation skills, inverse reinforcement learning methods for learning the reward functions underlying behavior, and meta-learning algorithms that can enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley. Her research has been recognized through the Microsoft Research Faculty Fellowship, the IEEE RAS Early Academic Career Award, the ONR Young Investigator Award, the ACM Doctoral Dissertation Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students and a mentoring program for underrepresented undergraduates across four universities, and by leading efforts within the WiML and Berkeley WiCSE communities of women researchers.
Michela Paganini - DeepMind
Michela is a Research Scientist at DeepMind. She was previously a Postdoctoral Researcher at Facebook AI Research and an affiliate at Lawrence Berkeley National Lab. She earned her Ph.D. in physics from Yale University, where she worked on the design, development, and deployment of deep learning algorithms for the ATLAS experiment at CERN, with a focus on computer vision and generative modeling. Prior to that, she graduated from the University of California, Berkeley with degrees in physics and astrophysics. Her current research focuses on model understanding and sparsification: her work involves empirically characterizing neural network behavior, with a recent interest in large-scale language models, by investigating their inner workings in the over-parametrized and under-parametrized regimes. Michela has a broad interest in the science of deep learning, with a focus on understanding emergent behavior in neural networks from a mechanistic perspective.
James Evans - University of Chicago
James Evans is the Max Palevsky Professor of History and Civilization in Sociology, Director of Knowledge Lab, and Founding Faculty Director of Computational Social Science at the University of Chicago and the Santa Fe Institute. Evans' research uses large-scale data, machine learning, and generative models to understand how collectives think and what they know. This involves inquiry into the emergence of ideas, shared patterns of reasoning, and processes of attention, communication, agreement, and certainty. Thinking and knowing collectives like science, Wikipedia, or the Web involve complex networks of diverse human and machine intelligences, collaborating and competing to achieve overlapping aims. Evans' work connects the interaction of these agents with the knowledge they produce and its value for themselves and the system. Evans designs observatories for understanding that fuse data from text, images, and other sensors with results from interactive crowdsourcing and online experiments. Much of Evans' work has investigated modern science and technology to identify collective biases, generate new leads taking these into account, and imagine alternative discovery regimes. He has identified R&D institutions that generate more and less novelty, precision, density, and robustness. Evans also explores thinking and knowing in other domains ranging from political ideology to popular culture. His work has been published in Nature, Science, PNAS, American Sociological Review, American Journal of Sociology, and many other outlets.
Russell Poldrack - Stanford University
Russell Poldrack is a Professor in the Stanford Department of Psychology, Associate Director of Stanford Data Science, and Director of the Center for Open and Reproducible Science (CORES). His laboratory's basic research focuses on understanding the brain systems involved in decision making and self-control in humans, using neuroimaging and behavioral methods. The laboratory has also developed a number of resources for open and reproducible science, including the OpenNeuro data-sharing platform and the fMRIPrep preprocessing workflow.
Oriol Vinyals - DeepMind
Oriol Vinyals is a Principal Scientist at Google DeepMind and a team lead of the Deep Learning group. His work focuses on deep learning and artificial intelligence. Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley and is a recipient of the 2016 MIT TR35 innovator award. His research has been featured multiple times in the New York Times, the Financial Times, WIRED, the BBC, and elsewhere, and his articles have been cited over 70,000 times. His academic involvement includes serving as program chair for the International Conference on Learning Representations (ICLR) in 2017 and 2018, and as an area chair for many editions of the NeurIPS and ICML conferences. Some of his contributions, such as seq2seq, knowledge distillation, and TensorFlow, are used in Google Translate, text-to-speech, and speech recognition, serving billions of queries every day. He was the lead researcher of the AlphaStar project, creating an agent that defeated a top professional at the game of StarCraft, achieved Grandmaster level, and was featured on the cover of Nature. At DeepMind he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, deep learning, and reinforcement learning.
Incentives for Better Evaluation
Moderator: Stephanie Chan
Examples of questions we would like to cover are: What are the pain-points for which we would need new/better incentives to improve the situation? What roles are the conferences and journals playing to improve these pain-points? Are we investing enough effort to monitor and identify issues in the review process? Is it acceptable to increase the workload of reviewers and ACs in order to gather data about the review process? Would a tighter loop between research and production lead to greater accountability?
Panelists
Corinna Cortes - Google Research NYC
Corinna Cortes is a VP of Google Research, NY, where she works on a broad range of theoretical and applied large-scale machine learning problems. Prior to Google, Corinna spent more than ten years at AT&T Labs - Research, formerly AT&T Bell Labs, where she held a distinguished research position. Corinna's research work is well known in particular for her contributions to the theoretical foundations of support vector machines (SVMs), for which she, jointly with Vladimir Vapnik, received the 2008 Paris Kanellakis Theory and Practice Award, and for her work on data mining in very large data sets, for which she was awarded the AT&T Science and Technology Medal in 2000. Corinna received her MS degree in Physics from the University of Copenhagen and joined AT&T Bell Labs as a researcher in 1989. She received her Ph.D. in computer science from the University of Rochester in 1993. Corinna is also a competitive runner and a mother of two.
Yoshua Bengio - Mila, Université de Montréal
Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, "the Nobel Prize of Computing," with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila, the Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO. In 2019 he was awarded the prestigious Killam Prize, and in 2021 he became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honor of France, and an Officer of the Order of Canada. Concerned about the social impact of AI and the objective that AI benefits all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.
John Langford - Microsoft Research
John Langford is a computer scientist working in machine learning and learning theory. He is well known for work on the Isomap embedding algorithm, CAPTCHA challenges, cover trees for nearest-neighbor search, contextual bandits for reinforcement learning applications, and learning reductions. John is the author of the blog hunch.net and the principal developer of Vowpal Wabbit. He works at Microsoft Research New York, of which he was one of the founding members, and was previously affiliated with Yahoo! Research, the Toyota Technological Institute at Chicago, and IBM's Watson Research Center. He studied Physics and Computer Science at the California Institute of Technology, earning a double bachelor's degree in 1997, and received his Ph.D. in Computer Science from Carnegie Mellon University in 2002. John was the program co-chair for the 2012 International Conference on Machine Learning (ICML), general chair for the 2016 ICML, and President of ICML from 2019 to 2021.
Kyunghyun Cho - New York University
Kyunghyun Cho is an associate professor of computer science and data science at New York University and a CIFAR Fellow of Learning in Machines & Brains. He is also a senior director of frontier research on the Prescient Design team within Genentech Research & Early Development (gRED). He was a research scientist at Facebook AI Research from June 2017 to May 2020, and a postdoctoral fellow at the University of Montreal until Summer 2015 under the supervision of Prof. Yoshua Bengio, after receiving his MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin.