Research Award Opportunities
One of the most valuable research experiences for an undergraduate student is to work as a research assistant. Each year, the department receives a number of research awards that help fund an undergraduate student to spend 16 weeks over the summer working full time in one of the department's research labs, often with the opportunity to publish their work (see examples of previous projects). This kind of research experience is highly sought after by graduate programs.
All applicants must confirm their eligibility to apply and to work. You must have all necessary requirements in place (e.g., a Social Insurance Number and any required permits).
International students must have a valid Social Insurance Number and be eligible to work on campus for the duration of the award (e.g., over the summer). You will be required to provide any necessary details and documentation upon accepting the award (SURE or WLIUR); this is necessary for processing and payment. If you are offered an award but do not meet these criteria, you will not be able to accept it. For questions about eligibility, please speak with an International Student Advisor.
The positions are available to 2nd-, 3rd-, and 4th-year students with strong academic records. You'll find more information about the available awards, including eligibility requirements, below. Watch for in-class and email announcements from the department for details and deadlines.
NSERC USRA - NSERC Undergraduate Student Research Award
SURE - Science Undergraduate Research Experience Award
WLIUR - Work Learn International Undergraduate Research Award
See the following links for more details:
How to Apply
Deadline: February 10, 2023 at 4:00 PM
New Application Requirement: all applicants are required to apply with a confirmed supervisor. The list of projects and supervisors will be posted on this page, and you can use this to approach any of the supervisors listed. However, you aren't limited to the projects and supervisors listed below. We encourage you to directly contact professors you would like to work with to find a match. Many professors will be happy to talk to you about the opportunity to hire students at a subsidized wage. You can find our faculty directory here. For some additional tips, please see the UBC Careers page.
- Read the details above and the information at the links on this page
- Determine which awards you are eligible for
- Contact potential supervisors from the Projects and Supervisors list (see below) or by approaching Computer Science faculty members you would like to work with
- Once you have a confirmed supervisor, submit the online application webform by the stated deadline (the webform will become available before the deadline):
- Before submitting, please ensure that you have read over the online guidelines, eligibility requirements, and webform instructions carefully
- Make sure to read all of the instruction text in the webform; there may be important details noted below each field
- When the department is informed of how many awards are available, a departmental adjudication committee will rank the applications. All applicants will then receive a decision. If you are selected for an award, you will then receive an email with instructions to submit a new webform to provide additional information/documentation:
- When you are applying: please read the webform carefully to ensure you are prepared to accept by providing these details
- (Your supervisor will also be contacted for required information)
- All students should complete the NSERC Form 202 on the NSERC USRA website by clicking "On-line System Login" or, if you are a first-time user, "Register"
- All students should upload a PDF of the completed NSERC Form 202 to the online application webform
- DO NOT submit the application on the NSERC website until you have been accepted for the award and instructed to do so (only students awarded the NSERC USRA will ultimately submit their application on the NSERC website)
- Instructions on how to complete the forms can be found on the NSERC USRA website
Questions? For further details, please visit the UBC Student Services website or the NSERC USRA website and review the information and links provided, as these will likely give you the answers to your questions. If you would still like additional assistance, please see our Advising Webform instructions to see if you are eligible to submit a webform request.
Projects and Supervisors: Summer 2023
Meta-humanoid: Evaluating the Effects of Realistic AR Avatar in Social Interaction with Humanoid Robot
Meta-humanoid is a novel telepresence application that synthesizes a suite of cutting-edge technologies, including humanoid robots, lifelike avatars, and augmented reality, to reduce the user's perceptual discordance between the robot's body and the human teleoperator's identity. In meta-humanoid, the user wears an AR HMD that overlays the teleoperator's lifelike avatar on a humanoid robot; the avatar moves and animates according to the humanoid's motion, reducing the user's perceptual discordance.
In canonical robot-mediated telepresence applications, a teleoperator controls a robot to offer physical services to a user. However, the machine-like appearance of the robot is a critical barrier for the user to perceive the robot as the embodied self of the teleoperator. The user’s perceptual discordance between the actor’s embodiment (i.e., the robot’s body) and its identity (i.e., the human teleoperator) leads to diminished quality of emotional and cognitive experiences, such as trust, likeability, and presence. This gap hampers the potential of telepresence robots in domains like telemedicine and tele-training where emotional and perceptual quality is critical.
In this research, we aim to evaluate the impact of the lifelike avatar overlay on the perceptual quality of social interaction in meta-humanoid applications. We leverage a Wizard of Oz approach, where the user is immersed in a fully VR environment in which a confederate researcher plays the role of the humanoid robot controlled by the teleoperator.
Keywords: Augmented Reality, Tele-presence Robot, Social Interaction, Usability Evaluation
- (optional) HCI evaluation skills or successful completion of the introductory HCI courses (e.g., CPSC 344)
- (optional) Experience or strong interest in UX evaluation methods (e.g., user study, survey, interview)
- (optional) Successful completion of the Computer Graphics course (e.g., CPSC 314)
Augmenting Zoom Meeting Experiences with an AI Moderator
Zoom meetings have become a widespread form of communication, but the video conferencing system itself offers little more than the transmission of pixels. The aim of this project is to create an "AI moderator" agent that aids meeting participants in browsing, comprehending, and annotating video content from Zoom meetings. The intern will work on constructing the system by utilizing a combination of AI techniques such as computer vision, speech recognition, and natural language processing.
- Strong technical skills including OOP, data structures, and algorithms
- Experience in building computer vision, speech processing, and/or natural language processing applications
- (optional) Experience in web front-end development
Automatically synthesizing efficient algorithms for solving SAT
Please contact Chris Cameron at firstname.lastname@example.org if interested and to set up a time to talk.
Designing efficient algorithms for solving the Boolean satisfiability (SAT) problem is crucial for practical applications such as scheduling, planning, and software testing. Machine learning is an important tool in algorithm design for SAT; however, SAT practitioners have yet to exploit the representational power of deep learning. We are working on training neural networks to learn to make variable ordering decisions for the widely used DPLL-style SAT solvers. DPLL SAT solvers search for a solution in a tree-like fashion, where a "branching policy" dictates how to iteratively partition the search space. Existing heuristics for deciding how to partition the search space don't use machine learning to leverage the structure of the internal solver state. Inspired by AlphaGo, we developed Monte Carlo Forest Search, a reinforcement learning approach for learning branching policies that lead to small proof trees of unsatisfiability.
To date, we have achieved a working implementation with performance gains on small-size SAT distributions. The student will join in to help us scale up our method to work on much larger and more difficult SAT distributions that are more typical in industrial applications. We see two main bottlenecks to achieving this. The first is memory requirements for model training. A few ideas we would like to explore with the student are parallelizing across GPUs, memory-efficient gradient approximations, and smaller state representations. The second challenge is exploring an action space that is many orders of magnitude larger. We think a promising direction is to learn an action representation and reduce the action space by collapsing similar actions together. To manage the challenge of jumping from a few hundred variables to a few million, we plan to build a curriculum-learning pipeline to gradually increase problem size and iteratively leverage policies from smaller problems.
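To make the "branching policy" idea concrete, here is a deliberately tiny DPLL sketch in Python where the policy is a pluggable function and the size of the resulting proof tree is counted. This is an illustrative toy, not the project's actual solver, and the clause format and function names are invented for this example:

```python
# Toy DPLL sketch: a "branching policy" picks the next variable to split on.
# Clauses are lists of integer literals (negative = negated variable).

def simplify(clauses, var, value):
    """Assign var=value; drop satisfied clauses, shrink the rest."""
    out = []
    for clause in clauses:
        if (var if value else -var) in clause:
            continue  # clause satisfied, drop it
        reduced = [lit for lit in clause if lit != (-var if value else var)]
        if not reduced:
            return None  # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses, policy, stats):
    stats["nodes"] += 1  # size of the search (proof) tree
    if not clauses:
        return True
    var = policy(clauses)  # the branching decision we want to learn
    for value in (True, False):
        reduced = simplify(clauses, var, value)
        if reduced is not None and dpll(reduced, policy, stats):
            return True
    return False

def first_var_policy(clauses):
    """Trivial stand-in for a learned policy: branch on the first literal."""
    return abs(clauses[0][0])

# (x1 or x2) and (not x1 or x2) and (not x2) -- unsatisfiable
cnf = [[1, 2], [-1, 2], [-2]]
stats = {"nodes": 0}
print(dpll(cnf, first_var_policy, stats), stats["nodes"])  # False 3
```

A learned policy would replace `first_var_policy`, aiming to minimise `stats["nodes"]` on unsatisfiable instances.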
The student will work closely with Leyton-Brown's graduate students. The student will have primary responsibility for well-defined tasks (usually implementing aspects of a system, running computational experiments, and analyzing the results) but will also often get involved in problem (re)formulation and generating new research ideas.
The student should have a basic understanding of machine learning concepts and should be competent in C++ (for the SAT solver) and Python (for the ML components). Knowledge of deep learning (PyTorch), cluster computing, statistics, and CDCL SAT solvers will be an asset.
The student will learn how to (1) design large computational experiments, (2) leverage neural networks (specifically graph neural nets), and (3) modify and implement Monte Carlo Tree Search. Neural nets are ubiquitous in industry, and Monte Carlo Tree Search is becoming a major method with many potential applications in the coming decade. The student will also get exposure to the research process, which will be beneficial for any future graduate program they might attend. The student will have close interaction with Leyton-Brown and his graduate students and will learn research methodology, including how to come up with research questions, design experiments to test hypotheses, and effectively communicate research arguments.
Work Setting: Mostly on campus, with some work online.
Understanding the performance of graph-based anomaly detection
The history of a system execution can be represented as a directed graph. Nodes in the graph represent states of system objects (e.g., processes, files, sockets, etc.) and edges represent interactions between those objects (e.g., read, write, fork, etc.). This graph can be analyzed to understand how the states of two objects relate. In practice, each implementation of this concept represents system execution history differently, leading to graphs with very different properties.
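As a toy illustration of the idea above, such an execution history can be stored as a directed graph and traversed to ask how objects relate. The object names and event log below are invented for this sketch:

```python
# Toy provenance graph: nodes are system objects, edges are interactions.
from collections import defaultdict

# Hypothetical execution history: (source object, action, destination object)
events = [
    ("proc:bash", "fork", "proc:python"),
    ("proc:python", "read", "file:/etc/passwd"),
    ("proc:python", "write", "socket:10.0.0.5:443"),
]

graph = defaultdict(list)  # adjacency list: src -> [(edge_label, dst)]
for src, action, dst in events:
    graph[src].append((action, dst))

def reachable(graph, start):
    """Objects whose state may have been influenced by `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(dst for _, dst in graph.get(node, []))
    return seen

print(sorted(reachable(graph, "proc:bash")))
```

Different provenance-capture systems would emit different node and edge types for the same execution, which is exactly the representational variation the project studies.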
Over the last ten years, many papers have applied machine learning techniques to analyze those graphs and automatically identify anomalous patterns. However, comparatively little attention has been paid to the underlying graph semantics and how they affect detection performance.
In this project, you will systematically study the interplay between graph representation semantics and detection performance. The ideal student will be comfortable with applied machine learning techniques as well as systems development and management. Time permitting, you will leverage this knowledge to design a better anomaly detection system.
Proof Construction from Rust Compiler Analyses
The Rust programming language incorporates a unique type system for governing, in a fine-grained way, the management of aliases, mutation and allocations, forcing programmers to provide much richer information about their intentions than in other mainstream languages. This type system is supported by a number of complex compiler analyses: a combination of classical program analyses such as liveness information with Rust-specific analyses such as borrow and reference lifetime tracking. The Prusti project (along with many other subsequent works) observed that this information could be leveraged to simplify formal reasoning about what Rust programs actually do, and in particular to query the compiler for information that helps to construct a mathematically-precise modelling of a Rust program into another language (Viper) and toolset suitable for program verification.
While this research direction has been successful, Prusti’s current approach intertwines the modelling of Rust into Viper with the information we need to extract from the Rust compiler. This makes the code difficult to develop, and means that the current techniques cannot be easily used to experiment with other back-end encodings and verification techniques. In recent work, we have been exploring the possibility of defining a fully-elaborated form of a Rust program enriched with information from compiler analyses, explaining its interactions with the type system in a way which is fully decoupled from our eventual verification language and goals.
In this internship, we plan to explore three main objectives. Firstly, we will extend this elaboration of a Rust program to support the interaction between struct type definitions and lifetime parameters in a general way, which is not possible in our current approach. Secondly, we will develop an encoding to reconnect this elaborated representation to the Viper verification infrastructure (ideally supplanting the existing codebase). Thirdly, we will consider at least one alternative/variant encoding of the Rust program into Viper, to demonstrate the flexibility of our new model and its independence from the backend approach.
The project affords opportunities to learn and practice Rust programming, and to gain familiarity with type system features, compiler design and program verification. Prusti is a collaboration between several faculty and students, and this project will include exposure to the wider research directions of the project and the chance to collaborate with and present ideas to a large research team.
Experience with Rust and (ideally) formal reasoning or proofs is desirable.
Automated Testing of Haskell Compiler Rewrite Rules
Optimizing compilers employ a number of techniques that transform code to yield the same result with better runtime characteristics. To support such optimizations, the Haskell programming language (as implemented in the Glasgow Haskell Compiler, GHC) allows library authors to implement their own compile-time optimizations in the form of rewrite rules. Many well-known Haskell libraries make heavy use of rewrite rules to achieve better runtime performance.
However, incorrect rewrite rules (those that define invalid program transformations) are difficult to debug and can change program behaviour in unexpected ways. Unfortunately, in general, there is no easy way to test Haskell rewrite rules for correctness.
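One way to test such a rule is to check that its left- and right-hand sides agree on many randomly generated inputs. GHC rewrite rules are Haskell, but the idea can be sketched in Python; the rule below is the classic "map fusion" rule, and all names here are illustrative:

```python
# Property-based testing sketch: check that a rewrite's two sides agree
# on randomly generated inputs. A failing input is a counterexample
# showing the rewrite rule changes program behaviour.
import random

# Example rule (map fusion), written as Python:
#   map f (map g xs)  ==>  map (f . g) xs
def lhs(f, g, xs):
    return [f(y) for y in (g(x) for x in xs)]

def rhs(f, g, xs):
    return [f(g(x)) for x in xs]

def check_rule(trials=1000, seed=0):
    rng = random.Random(seed)
    funcs = [lambda x: x + 1, lambda x: x * 2, lambda x: -x]
    for _ in range(trials):
        f, g = rng.choice(funcs), rng.choice(funcs)
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]
        if lhs(f, g, xs) != rhs(f, g, xs):
            return False  # counterexample found: the rule is incorrect
    return True

print(check_rule())  # True: no counterexample for this (correct) rule
```

The hard part, which this sketch sidesteps, is generating *well-typed* inputs automatically for arbitrary rules, which is what Rulecheck addresses for Haskell's type system.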
In recent work we developed a first prototype tool, Rulecheck, that automatically generates and executes property-based tests for Haskell rewrite rules, with the goal of identifying incorrect rules in real-world Haskell libraries. Although we have developed a proof-of-concept, there is still much more work to be done; in particular, we have the following aims for this research internship:
1. Extend Rulecheck to support a wider range of Haskell type system features
To test a rewrite rule, Rulecheck must generate well-typed Haskell expressions that match the arguments of the rule. Due to the expressiveness of Haskell's type system, this is not trivial. Our prototype tool supports only a subset of Haskell types and therefore cannot generate expressions for some rewrite rules (for example, those operating on expressions with higher-kinded types). Extending Rulecheck to support more of Haskell's type system will enable more rewrite rules to be testable and improve Rulecheck's ability to find bugs.
2. Reduce the rate of false positives in generated tests
The tests generated by our current implementation of Rulecheck sometimes fail even when the underlying rewrite rule is correct. For example, some test failures occur because Rulecheck automatically generates expressions that would never actually be used in the application of a rule. Manual analysis is then necessary to determine whether test failures correspond to actual issues. We would like to reduce the amount of manual intervention required by generating better tests in the first place.
3. Evaluate Rulecheck against real-world Haskell libraries
Rulecheck is intended to test the rewrite rules in real-world Haskell libraries; as yet, we have only used Rulecheck on a handful of libraries. However, Hackage (the Haskell package repository) contains over 100 libraries that define rewrite rules. To prove the efficacy of our technique, we would like to apply Rulecheck to these libraries, identifying incorrect rewrite rules and reporting any problematic rules to library authors. Doing so will likely require extending Rulecheck to support a wider range of Haskell language features.
The project affords opportunities to learn and practice Haskell programming, and to gain familiarity with property-based testing, advanced type system features and compiler design. In addition, this project will provide experience working on a research project in collaboration with other faculty and students.
Ideal candidates for this project should have an interest in functional programming. Previous experience with Haskell and QuickCheck is a plus.
Reengineering a Debugger for Quantifier Reasoning in SMT
SMT solvers have a wide variety of applications across Computer Science, including program analysis and synthesis tools, automated planning and constraint solving, optimisation problems and software verification. Advanced tools such as program verifiers are often built around SMT encodings of their problems. However, designing these encodings to perform reliably and fast is a challenging task. In previous work, we developed the Axiom Profiler tool to serve as a first debugging tool for quantifier-rich SMT problems. While vastly more useful than trying to debug by hand, this tool has a number of limitations in practice, and requires a substantial redesign and reimplementation with the following objectives:
- Build an application with a modern, OS-independent GUI, capable of mixing simple button-press interactions with more sophisticated functionality for navigating graph visualisations.
- Design a modular, maintainable codebase in a suitable programming language, so that the tool can be robustly developed in the future.
- Understand and reimplement the core analysis algorithms which support user navigation and explanation of the information visualised.
- (optionally) interact with modern proof logging formats as generated by newer versions of the Z3 SMT solver.
- (optionally) consider how the profiler can be generalised to debug runs of other SMT solvers (if possible).
The project affords opportunities to learn about formal reasoning and the automation of logical proofs using state-of-the-art SMT solvers. Prior experience with GUI application development, algorithms, SMT Solving and/or formal reasoning would all be advantageous but not necessarily essential; good analytical skills and expertise with imperative programming of some kind are needed.
Applying the Prusti Verifier to a Rust model of an AWS File System
The Prusti project (a collaboration between UBC and ETH Zurich) aims to provide modular, code-level verification tools for Rust, a modern systems programming language. The combination of Rust's advanced ownership type system and user-defined specifications makes it possible to provide scalable and efficient formal verification for a wide variety of program properties, including allowing programmers to express their intentions and check that their code will always live up to them.
In discussions with collaborators at AWS, we have obtained a greatly-simplified model of a file system they are developing. As part of their software engineering process, they are building models of the eventual system at various degrees of accuracy/complexity, all in Rust: each version of the models is designed to behave similarly to the previous one, but with more efficiency and real-world concerns added. At the simplest end, there is a sequential implementation of various core data structures and algorithms relevant to the file system's unique design. Verifying desirable properties even of this simplest model is potentially valuable in such a design process, since any bugs in these simplified models would likely persist in the more complex variants of the software.
In this project, we will investigate the application of Prusti to this file system model as a case study. We will consider a variety of desirable properties, ranging from simple panic (crash) freedom, to data structure invariants and potentially even to crash-consistency properties that show that data will be recovered during unexpected failures. Depending on the outcomes of our first experiences using Prusti on this codebase, we may develop additional features in the Prusti tool itself to make this kind of verification more direct, easier or more efficient. If we are successful at verifying one model, we might also consider the challenges involved in working with a more accurate software model including concurrency, and how to relate the two versions.
This project offers experience with system design and formal verification, as well as experience with Rust and modern verification tools. Prior experience with Rust and/or formal reasoning about programs are advantageous but not essential; expertise with imperative programming of some kind is needed.
A DSL and Query Engine for Large-Scale Analysis of Open-Source Rust Code
Rust is a rapidly-growing systems language attracting ever-growing interest from engineers and researchers alike. There is already a huge corpus of open-source Rust software available via the crates.io standard repository. This collection of real-world software is a rich source of information about how developers are using this new language in the wild. For example, in a recent paper, “How do Programmers use Unsafe Rust?”, we performed an empirical evaluation of the entirety of this corpus to answer a number of questions about the “unsafe” features of the Rust language, and how they are actually employed in the wild.
To perform this analysis, we developed infrastructure called “qrates” for assembling a database of facts related to the available online repositories and presenting these facts for analysis in Jupyter notebooks. However, the information extracted into this database was specific to the empirical study in question, and changing (or even refining slightly) the information extracted over the corpus of software projects is currently a manual process. Rerunning this database extraction process is also cumbersome and requires excessive computing power, since all information is temporarily stored in memory, making it difficult to scale this work up to the increasingly huge amount of open-source Rust code available online.
In this project, we will aim for a generalisation and reengineering of the qrates approach and tools, with the following objectives:
- Design a query DSL suitable for defining the desired facts to be assembled into a database, as well as the selection of which Rust projects (crates) should be included. This should make it possible to assemble multiple different databases without making manual changes to the qrates code itself.
- Reengineer the underlying qrates framework to support pluggable queries via your newly-developed DSL, and to avoid the current need to hold all data in memory; supporting concurrency in the assembly of the database would also be interesting to explore.
- Evaluate possible technical solutions for deploying the qrates framework via either cloud computing or dedicated server solutions, so that evaluation can be performed remotely by users of typical computing hardware (for example, we might set up a specific machine to run these corpus evaluation workloads).
- Demonstrate the usability and flexibility of the new framework by both rerunning prior evaluations and running new ones over the current corpus of Rust crates.
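The flavour of query DSL in the first objective can be sketched in miniature: queries as declarative predicates evaluated over a *stream* of facts, so nothing forces all data into memory at once. The fact schema, names, and data below are invented for illustration and bear no relation to the real qrates design:

```python
# Toy query-DSL sketch: a query is a predicate plus a projection,
# evaluated lazily over a stream of extracted facts.

def stream_facts():
    """Stand-in for facts extracted from crates; yields one at a time."""
    yield {"crate": "foo", "item": "fn a", "unsafe": True}
    yield {"crate": "foo", "item": "fn b", "unsafe": False}
    yield {"crate": "bar", "item": "fn c", "unsafe": True}

def select(predicate, fields):
    """Build a 'query': filter facts and project the requested fields."""
    def run(facts):
        for fact in facts:
            if predicate(fact):
                yield {k: fact[k] for k in fields}
    return run

# "Which functions use unsafe, and in which crate?"
unsafe_fns = select(lambda f: f["unsafe"], ["crate", "item"])
print(list(unsafe_fns(stream_facts())))
```

Because queries are composed of plain functions over a generator, swapping in a different fact source or chaining further filters requires no change to the extraction machinery, which is the decoupling the project is after.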
The project affords opportunities in language design and implementation, remote computing and concurrency in a practical setting, as well as programming experience in Rust. Prior experience with Rust and database / query languages or other data analysis frameworks would be advantageous.
3D Gaze Estimation for View-Dependent Display
Eye contact and gaze are significant components of in-person interactions that are difficult to simulate virtually. Often, a monocular view of an RGB video stream is limited in portraying the dimensionality of a 3D object: the viewer can only observe the object from a fixed position, and changing the viewer's head position and orientation will not produce varying perspectives of the object being viewed.
By using a head-mounted display (HMD)'s built-in tracker or other means of tracking head position and orientation, one could calculate the user's view direction and simulate view-dependent rendering by varying the virtual camera's position and orientation. However, this would require the purchase and possession of such devices, and wearing them may be an intrusive barrier in virtual communications.
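The core geometric step, moving a virtual camera with the tracked head and aiming it at the display, can be sketched as follows. Real systems use an off-axis projection; this simplification only shows the idea, and all coordinates are hypothetical (metres, display centre at the origin):

```python
# Minimal view-dependent rendering sketch: place the virtual camera at
# the tracked head position and compute its look direction toward the
# display centre. Head positions here are made up for illustration.
import math

def look_direction(head_pos, target=(0.0, 0.0, 0.0)):
    """Unit vector from the head position to the display centre."""
    d = [t - h for h, t in zip(head_pos, target)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

# As the head moves to the right of the screen, the camera looks back
# from the right, revealing more of the object's left side.
print(look_direction((0.3, 0.0, 0.6)))
```

In the project, the head (and eventually gaze) position would come from the depth camera rather than an HMD tracker, which is what makes the setup non-intrusive.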
In this project, the student will aim to create a deep-learning-based solution for gaze estimation from a consumer-level depth camera, in particular handling the 3D calibration of the camera, display, and user position. Building upon previous work on 3D gaze estimation and view-dependent rendering, the goal is to build a real-time 3D view-dependent display based on RGBD camera input. The software will serve as a non-intrusive addition to telecommunication and aid in an immersive 3D interaction experience.
Design and analysis of randomized algorithms
Many of the tasks performed by modern computers involve randomization. Examples include estimating properties of a data set by sampling, or privatizing aspects of a data set by adding random noise.
One of the simplest examples of such a task is drawing a random sample from an array without replacement. The standard approaches for this task either require modifying the array, or require an auxiliary hash table data structure. The former is undesirable in many settings, such as multi-threading environments; the latter can involve quite a lot of overhead and may lack provable guarantees. In this research, we aim to explore simple, efficient approaches to this problem that still have provable guarantees.
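As a concrete baseline, the auxiliary-hash-table approach mentioned above can be sketched as follows. This is the standard technique the project seeks to improve upon, not the proposed method:

```python
# Baseline: sample k distinct elements from an array of length n without
# modifying the array, using an auxiliary hash set to reject repeats.
import random

def sample_without_replacement(arr, k, rng=random):
    assert 0 <= k <= len(arr)
    chosen = set()  # auxiliary hash table: the overhead in question
    out = []
    while len(out) < k:
        i = rng.randrange(len(arr))
        if i not in chosen:  # rejection step; repeats grow likelier as k grows
            chosen.add(i)
            out.append(arr[i])
    return out

print(sample_without_replacement(list(range(100)), 5))
```

Note the trade-off: the array is untouched (good for multi-threaded settings), but the set costs memory, and when k approaches n the expected number of rejected draws grows, which is one reason provable guarantees for simpler schemes are interesting.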
This project offers experience with mathematical proofs, tools from probability theory, reading relevant research papers, and an opportunity to write a research paper.
Strong performance in CPSC 436R (Introduction to Randomized Algorithms) and honours mathematics courses is required.