
UBC Computer Science publishes 18 papers at leading AI conference

Four papers from UBC Computer Science highlighted as top papers at NeurIPS 2025 

Several UBC researchers will represent the Department of Computer Science at the 39th Annual Conference on Neural Information Processing Systems (NeurIPS), a premier conference on machine learning and artificial intelligence. The conference takes place in two locations: Mexico City, Mexico from November 30 to December 5, 2025, and San Diego, United States from December 2 to 7, 2025. 

A total of 14 papers were accepted to the main conference, including four Spotlight Papers, a distinction reserved for the top 3.3% of submissions. UBC Computer Science researchers also published four papers at the conference’s associated workshops.  

In addition to the papers in the conference and workshops, several UBC professors have speaking or leadership roles at NeurIPS. Assistant Professor Danica Sutherland will give a talk at the UniReps 2025: Unifying Representations in Neural Models workshop. Assistant Professor Kelsey Allen is part of the organizing committee for the LAW 2025: Bridging Language, Agent, and World Models workshop. Professor Jeff Clune will give a talk at The MindGames Challenge: Theory-of-Mind and Game Intelligence in LLM Agents competition. 

The 14 papers in the main conference are: 

1. AbsenceBench: Language Models Can’t See What’s Missing
Harvey Yiyun Fu, Aryan Shrivastava, Jared Moore, Peter West, Chenhao Tan, Ari Holtzman
Spotlight Paper 
This paper reveals that large language models perform poorly on tasks that involve finding missing information when comparing two sets of texts. The researchers’ analysis suggests that this limitation stems from the transformer attention mechanism, which cannot attend to gaps in a document because missing text does not correspond to any specific keys.  
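To make the task concrete, here is a toy illustration of an "absence" task (our example, not the authors' benchmark): given an original document and a copy with some lines deleted, identify exactly which lines are missing. A trivial set difference solves it deterministically, which highlights how striking it is that language models struggle with it.

```python
# Toy "absence" task: which lines of the original are missing from the copy?
# A set difference answers this exactly; per the paper, language models do
# poorly here because deleted text produces no attention keys to attend to.

def find_missing_lines(original: str, modified: str) -> list[str]:
    """Return the lines of `original` that do not appear in `modified`."""
    present = set(modified.splitlines())
    return [line for line in original.splitlines() if line not in present]

original = "alpha\nbeta\ngamma\ndelta"
modified = "alpha\ngamma\ndelta"
print(find_missing_lines(original, modified))  # ['beta']
```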

 

2. Asymmetric Duos: Sidekicks Improve Uncertainty 
Tim G. Zhou, Evan Shelhamer, Geoff Pleiss 
Spotlight Paper 
The researchers present a new, cost-effective framework for improving how machine learning models assess the certainty of their outputs and decisions. By pairing a large model with a fine-tuned smaller model, the researchers showed that this duo improves accuracy, uncertainty quantification, and selective classification metrics on image classification tasks, with only a small increase in computational effort.  
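The basic idea of combining a large model with a smaller "sidekick" can be sketched as a weighted average of the two models' predicted class probabilities. The fixed weight and the max-probability confidence rule below are our illustrative assumptions, not the paper's actual combination scheme.

```python
import numpy as np

# Illustrative sketch of a "duo": blend a large model's class probabilities
# with a smaller sidekick's. The weight w is an assumption for illustration.

def duo_predict(p_large, p_small, w=0.7):
    """Weighted average of two probability vectors, renormalized."""
    p = w * np.asarray(p_large) + (1 - w) * np.asarray(p_small)
    return p / p.sum(axis=-1, keepdims=True)

def confidence(p):
    """Max-probability confidence, usable for selective classification."""
    return float(np.max(p))

p_big = [0.6, 0.3, 0.1]   # large model's softmax output
p_side = [0.4, 0.5, 0.1]  # sidekick's softmax output
p = duo_predict(p_big, p_side)
```

A low `confidence(p)` can then trigger abstention, which is the selective-classification setting the paper evaluates.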

 

3. Implicit Bias of Spectral Descent and Muon on Multiclass Separable Data 
Chen Fan, Mark Schmidt, Christos Thrampoulidis 
Spotlight Paper 
This work is the first to characterize the implicit optimization bias of these algorithms for multi-class linear classifiers, machine learning models used for classification tasks. The researchers prove that the algorithms converge to solutions that maximize the margin, and their results show that the multi-class linear setting provides the most transparent framework for studying the implicit biases of matrix-parameter optimization algorithms.  
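For readers unfamiliar with margin maximization: in a multi-class linear classifier with one weight vector per class, the margin of an example is the gap between its correct-class score and the best competing score. A standard max-margin formulation (our notation, not necessarily the paper's) is:

```latex
% Multi-class max-margin objective: W collects one weight vector w_c per
% class; the margin of example (x_i, y_i) is the score gap between the
% correct class and the strongest competing class.
\max_{\|W\| \le 1} \; \min_{i} \; \Big( w_{y_i}^\top x_i \;-\; \max_{c \neq y_i} w_c^\top x_i \Big)
```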

 

4. On the Hardness of Conditional Independence Testing In Practice 
Zheng He, Roman Pogodin, Yazhe Li, Namrata Deka, Arthur Gretton, Danica J. Sutherland  
Spotlight Paper 
Conditional independence testing asks whether two random variables are associated with one another, given a third: are loan decisions associated with the applicant’s race even controlling for income? This paper studies a common class of approaches to this problem, clarifying why it is so much harder to avoid false positives when conditioning than in the unconditional problem, and identifying key issues practitioners should be careful to avoid if they want reliable tests. 
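One classical way to pose this question is linear partial correlation: regress both variables on the conditioning variable and correlate the residuals. This is far simpler than the kernel-based tests the paper studies; the sketch below only illustrates the shape of the question "is X related to Y once Z is accounted for?"

```python
import numpy as np

# Classical linear conditional-independence check via partial correlation:
# regress X on Z and Y on Z, then correlate the residuals. Illustrative only;
# the paper analyzes much more general tests.

def partial_corr(x, y, z):
    Z = np.column_stack([np.ones_like(z), z])          # intercept + conditioner
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x given z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y given z
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
z = rng.normal(size=2000)
x = z + 0.1 * rng.normal(size=2000)      # x driven by z
y = 2 * z + 0.1 * rng.normal(size=2000)  # y driven by z only
# x and y are strongly correlated, yet nearly independent given z,
# so the partial correlation comes out near zero:
print(round(partial_corr(x, y, z), 3))
```

The paper's point is that controlling false positives in tests like this is much harder than in the unconditional case, because the regression step itself introduces errors the test must tolerate.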

 

5. Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance? 
Apratim Bhattacharyya*, Bicheng Xu*, Sanjay Haresh, Reza Pourreza, Litian Liu, Sunny Panchal, Leonid Sigal, Roland Memisevic 

 

6. Diffusion-Driven Two-Stage Active Learning for Low-Budget Semantic Segmentation 
Jeongin Kim, Wonho Bae, YouLee Han, Giyeong Oh, Youngjae Yu, Danica J. Sutherland, Junhyug Noh 

 

7. DUAL: Learning Diverse Kernels for Aggregated Two-sample and Independence Testing 
Zhijian Zhou*, Xunye Tian*, Liuhua Peng, Chao Lei, Antonin Schrab, Danica J. Sutherland, Feng Liu 

 

8. LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision 
Anthony Fuller*, Yousef Yassin*, Junfeng Wen, Tarek Ibrahim, Daniel Kyrollos, James Green, Evan Shelhamer 

 

9. Metritocracy: Representative Metrics for Lite Benchmarks 
Ariel Procaccia, Ben Schiffer, Serena Wang, Shirley Zhang 

 

 

11. ReservoirTTA: Prolonged Test-time Adaptation for Evolving and Recurring Domains  
Guillaume Vray*, Devavrat Tomar*, Xufeng Gao, Jean-Philippe Thiran, Evan Shelhamer, Behzad Bozorgtabar 

 

12. ReMA: Learning to Meta-Think for LLMs with Multi-agent Reinforcement Learning 
Ziyu Wan*, Yunxiang Li*, Xiaoyu Wen, Yan Song, Hanjing Wang, Linyi Yang, Mark Schmidt, Jun Wang, Weinan Zhang, Shuyue Hu, Ying Wen 

 

 

 

UBC Computer Science researchers also published four papers at various conference workshops: 

1. A Unified Noise-Curvature View of Loss of Trainability 
Gunbir Singh Baveja, Mark Schmidt 
OPT 2025: Optimization for Machine Learning Workshop 


2. DEPART: A Hierarchical Multi-Agent System for Multi-Turn Interaction 
Hao-Lun Hsu, Jing Xu, Nikhil Vichare, Francesco Carbone, Miroslav Pajic, Giuseppe Carenini 
MAR 2025: Multimodal Algorithmic Reasoning Workshop 

 

3. Implicit Bias of Polyak and Line-Search Step Sizes on Linear Classification with Separable Data 
Chen Fan, Reza Babanezhad Harikandeh, Christos Thrampoulidis, Mark Schmidt, Sharan Vaswani 
OPT 2025: Optimization for Machine Learning Workshop 


4. Position: World Models must live in Parallel Worlds 
Sahithya Ravi, Aditya Chinchure, Pushkar Shukla, Vered Shwartz, Leonid Sigal 
LAW 2025: Bridging Language, Agent, and World Models Workshop