Four UBC Computer Science students receive CRA Outstanding Undergraduate Award Honourable Mentions

Students advanced research in formal verification, continual learning systems, software security and AI policies 

Four UBC students have been recognized for their exceptional research contributions by the Computing Research Association (CRA). Computer science students Emilie Ma, Gunbir Baveja, Shibo Ai and Trie Yang received honourable mentions for the 2026 Outstanding Undergraduate Researcher Award.

The award highlights undergraduate students from universities across North America who demonstrate outstanding potential in computing research.  


Emilie Ma, an Honours Computer Science major with a minor in French language, worked with Associate Professors Ivan Beschastnikh and Dongwook Yoon on characterizing collaboration patterns in open-source software development. She developed a novel graph-centric approach to identify, query and visualize open-source collaboration patterns, which she presented at the 2024 Open Source Summit North America. She also helped develop a tool for visualizing and studying workflow types, which was published at the Foundations of Software Engineering (FSE) 2024 conference. Currently, she is working with Dr. Beschastnikh’s team and collaborators on a suite of tools that automate formal verification for distributed and concurrent systems, helping software developers expose hard-to-find bugs and ensure that systems run smoothly. A paper from the project, SysMoBench: Evaluating AI on Formally Modeling Complex Real-World Systems, was recently accepted at the International Conference on Learning Representations (ICLR) 2026.


Gunbir Baveja, a Computer Science major, works with Professor Mark Schmidt to understand why neural networks struggle to keep learning as conditions change. In his research, he tracked optimization signals over long training horizons and identified two common failure modes: training updates becoming dominated by noise, and training updates becoming unstable due to volatile local curvature. He then developed a lightweight, per-layer adaptive optimizer that can be paired with existing continual learning methods to make them more robust, reducing the careful tuning they typically require and stabilizing training in settings where they otherwise fail. A paper from the project, A Unified Noise-Curvature View of Loss of Trainability, was presented at the NeurIPS 2025 Optimization for Machine Learning (OPT) Workshop. Gunbir is currently extending this work with graduate students in Dr. Schmidt's lab to refine the method and expand its evaluation.


Shibo Ai is a Computer Science major whose research focuses on a type of software vulnerability that affects compartmentalized software, known as compartment interface vulnerabilities. Under the supervision of Postdoctoral Research Fellow Hugo Lefeuvre and Professor Margo Seltzer, Shibo is systematically discovering and categorizing these vulnerabilities in real-world software. Using an approach that combines automated detection with manual code analysis, he found more than 30 compartment interface vulnerabilities in a security-focused operating system, leading to fixes and the discovery of additional vulnerabilities. His work ultimately helps developers understand and protect against this class of software vulnerability.


Trie Yang, a Computer Science major, worked with Associate Professor Dongwook Yoon on research at the intersection of AI governance and human-centered AI at the SOCIUS lab. Her work contributes both methodological insights and a technical system for AI governance, aiming to make regulatory compliance more accessible to AI practitioners. In collaboration with Dr. Yoon and Dr. Ig-Jae Kim from the Korea Institute of Science and Technology, she developed PASTA, a scalable framework that applies large language models to evaluate AI systems against AI policies, translating regulatory requirements into machine-interpretable criteria and actionable feedback. PASTA was validated through a user study with AI practitioners and an expert study with legal professionals, demonstrating its usability, efficiency and accuracy. The paper, PASTA: A Scalable Framework for Multi-Policy AI Compliance Evaluation, will be presented at this year's ACM CHI Conference on Human Factors in Computing Systems (CHI 2026).